EDN Network

Voice of the Engineer

Image sensors enrich smartphone photography

Thu, 07/04/2024 - 23:18

Three ISOCELL image sensors from Samsung bridge the gap between smartphone main and secondary cameras for enhanced imaging across all angles. The ISOCELL HP9 is the industry’s first 200-Mpixel telephoto sensor for smartphones, according to Samsung, while the ISOCELL GNJ and ISOCELL JN5 are 50-Mpixel sensors.

The ISOCELL HP9 features 200 million 0.56-µm pixels in a 1/1.4-in. optical format. A highly refractive material applied to the microlens allows the HP9 to accurately direct light to the corresponding RGB color filter. The result is more vivid color reproduction and improved focus, with 12% better light sensitivity (based on SNR10) and 10% improved autofocus contrast compared to its predecessor. HP9 also includes 2x or 4x in-sensor zoom modes, enabling up to 12x zoom when paired with a 3x zoom telephoto module.
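The quoted pixel count, pitch, and optical format can be cross-checked against one another. The sketch below assumes a 4:3 aspect ratio (typical for phone sensors) and the common convention that a 1/x-in. format corresponds to a diagonal of roughly 16 mm divided by x; it is a back-of-the-envelope check, not Samsung's published geometry.

```python
import math

# Sanity-check the HP9's quoted 1/1.4-in. optical format from its pixel
# count and pitch, assuming a 4:3 aspect ratio.
pixels = 200e6          # 200 Mpixels
pitch_um = 0.56         # pixel pitch in micrometers

width_px = math.sqrt(pixels * 4 / 3)
height_px = width_px * 3 / 4

width_mm = width_px * pitch_um / 1000
height_mm = height_px * pitch_um / 1000
diagonal_mm = math.hypot(width_mm, height_mm)

# By the common convention, a 1/1.4-in. format implies a diagonal of
# roughly 16 mm / 1.4, or about 11.4 mm.
print(f"active area: {width_mm:.2f} x {height_mm:.2f} mm")
print(f"diagonal:    {diagonal_mm:.2f} mm")  # comes out near 11.4 mm
```

The computed diagonal of about 11.4 mm agrees with the quoted 1/1.4-in. format, so the three specifications are mutually consistent.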

Leveraging dual-pixel technology with an in-sensor zoom function, the ISOCELL GNJ delivers 50 million 1.0-μm pixels in a 1/1.57-in. optical format. An upgraded material for deep trench isolation minimizes crosstalk between adjacent pixels, allowing the sensor to capture more detailed and precise images. The GNJ sensor boasts low power consumption, achieving a 29% improvement in preview mode and a 34% improvement in 4K video mode at 60 frames/s.

The ISOCELL JN5 has a resolution of 50 million 0.64-μm pixels in a 1/2.76-in. optical format. It can be used across main and sub cameras, including wide-angle, ultra-wide-angle, front, and telephoto, ensuring a consistent camera experience from various angles.

Follow the product page links below to learn more about each ISOCELL image sensor.

ISOCELL HP9 product page

ISOCELL GNJ product page

ISOCELL JN5 product page

Samsung Electronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Image sensors enrich smartphone photography appeared first on EDN.

Sneak diodes and their impact on your designs

Thu, 07/04/2024 - 09:00

Semiconductor companies don’t always highlight the inner details of certain products. One insidious issue is the presence of sneak diodes, a nasty problem discussed before regarding a D/A converter. Now let’s look at another example of getting into trouble.

The following sketch is a low frequency clock oscillator that uses an op-amp. Typically, I would choose a type TL082 for this purpose. Also, just as a side note, this kind of oscillator was a key part of many of the high voltage power supplies made by Bertan High Voltage when I was employed there.

Figure 1 Low frequency clock oscillator that depends on the ability of U1 to accept large differential voltages between its input pins.

This circuit depends on the ability of U1 to accept large differential voltages between its input pins. Nominally, the junction of R1, R2, and R3 steps between Vcc/3 and 2*Vcc/3, and the top of C1 swings back and forth between those two voltage levels with a time constant set by R4*C1.
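Under the idealization that U1's output swings rail to rail (a real TL082 stops short of the rails), the thresholds Vcc/3 and 2*Vcc/3 give the standard relaxation-oscillator timing. The component values below are arbitrary examples, not taken from the article:

```python
import math

# Each half-cycle, C1 charges through R4 from one threshold toward the
# output rail and stops at the other threshold:
#   t = R4*C1 * ln((Vcc - Vcc/3) / (Vcc - 2*Vcc/3)) = R4*C1 * ln(2)
R4 = 100e3   # example values, not from the article
C1 = 1e-6

half_cycle = R4 * C1 * math.log(2)
period = 2 * half_cycle
freq = 1 / period
print(f"f ≈ {freq:.2f} Hz")   # ≈ 7.2 Hz for these example values
```

Note that, in this idealization, the frequency is independent of Vcc; the supply voltage cancels out of the logarithm's ratio.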

The TL082 (Figure 2) is well suited to this purpose because there is nothing connected between its two input pins to restrict the differential input voltage from reaching the limits that I’ve just described.

Figure 2 The TL082 op-amp that is well-suited for the low frequency clock oscillator described in Figure 1.

However, not every op-amp has this property. As an example, please consider the Analog Devices OP184/284/484 op-amps in Figure 3.

Figure 3 The OP184/284/484 op-amps, where the maximum possible differential voltage is limited by QL1 and QL2, which act as paralleled back-to-back diodes.

The maximum possible differential input voltage is limited by QL1 and QL2, which act as paralleled, back-to-back diodes. In linear service, their presence probably wouldn’t matter, but the oscillator of Figure 1 is not a linear circuit.

Two SPICE simulations demonstrate the impact this diode difference has on the oscillator (Figure 4).

Figure 4 A comparison of the TL082 op-amp (U1) and the OP184/284/484 op-amps, showing the effect the diodes have on the oscillator.

With those diodes present, the oscillation frequency is almost an octave higher than it would be without them.
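A crude way to see why: at each switching instant the threshold node jumps by Vcc/3, but the input diodes immediately drag the top of C1 to within one diode drop of the new threshold, so every half-cycle starts much closer to its endpoint. The supply and diode-drop values below are assumptions for illustration only; the real ratio depends on the network impedances, which this toy model ignores.

```python
import math

# Assumed, illustrative values -- not from the article's simulation.
Vcc, Vd = 5.0, 0.6
lo, hi = Vcc / 3, 2 * Vcc / 3

def half_cycle(v_start, v_target, v_stop):
    # RC relaxation time from v_start toward v_target, ending at v_stop,
    # in units of the time constant R4*C1.
    return math.log((v_start - v_target) / (v_stop - v_target))

t_plain   = half_cycle(hi, 0.0, lo)        # starts a full Vcc/3 from threshold
t_clamped = half_cycle(lo + Vd, 0.0, lo)   # diodes leave it only Vd away

ratio = t_plain / t_clamped                # frequency multiplication factor
print(f"frequency ratio ≈ {ratio:.2f}")    # a bit over 2, roughly an octave up
```

Even this rough model lands in the neighborhood of an octave, matching the direction (and roughly the magnitude) of the effect the SPICE comparison shows.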

The TL082 is by no means the only op-amp suitable for this kind of oscillator circuit, but not every op-amp is. Non-linear circuits other than this kind of oscillator might be affected as well.

Just be certain of your particular case(s).

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content



Peculiar precision full-wave rectifier needs no matched resistors

Wed, 07/03/2024 - 09:00

A classic analog application is the precision active full-wave rectifier. Many implementations of this theme exist, each with its own supposed advantages. However, one circuit element needed by (almost) all active full-wave rectifier designs is an inverter with matched resistors to set its gain to an accurate -1.0. In such topologies, the symmetry of rectification relies upon, and can be no better than, the accuracy of this resistor match. For an example, see the well-known (veritable classic!) design in Figure 1, with op-amp U1b acting as the inverter and R1 and R2 as its matched gain-set resistors. Unless R1 = R2, the rectifier output for negative Vin excursions is (very) unlikely to equal the output for positive Vin excursions.

Figure 1 Conventional precision rectifier design with R1 and R2 matched symmetry-resistors.

For positive Vin inputs, D1 turns off and D2 conducts, establishing non-inverting unity gain for the circuit that’s unaffected by resistor values: Vout/Vin = +1.

For negative inputs, D1 conducts, D2 turns off, and U1b becomes an inverter with gain Vout/Vin = –R2/R1, which equals -1 only if R2 = R1. Otherwise, rectification symmetry suffers.
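The two cases above reduce to a simple piecewise model, which makes the mismatch sensitivity concrete. The resistor values here are illustrative, not from the article:

```python
# Ideal model of the Figure 1 rectifier's dependence on the R1/R2 match:
# positive inputs see unity gain regardless of resistor values; negative
# inputs see gain -R2/R1, so any mismatch shows up directly as
# rectification asymmetry.
def rectify(vin, R1, R2):
    return vin if vin >= 0 else -(R2 / R1) * vin

perfect  = rectify(-1.0, 10e3, 10e3)     # matched: output 1.00
mismatch = rectify(-1.0, 10e3, 10.1e3)   # R2 1% high: output 1.01
print(perfect, mismatch)   # negative half-cycles read 1% too high
```

In other words, a 1% resistor mismatch produces a 1% rectification asymmetry, directly; there is no attenuation of the error.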

Figure 2 shows another (less conventional) design. But unconventional or not, here Q2 and Q3 act as the inverter, and matched gain-set symmetry resistors R1 and R2 perform just as in Figure 1.

Figure 2 Unconventional rectifier with discrete circuit inverter still uses symmetry-setting resistors: R1 and R2.

But now, just to break the monotony, regard Figure 3. Note the (shocking) absence of matched resistors. Here’s how this nonconformist works.

Figure 3 Unconventional precision rectifier design without matched symmetry-resistors.

Q1 and Q2 provide simple cross-over compensation to cancel the Vbe drops of Q3 and Q4. Consequently, negative Vin excursions are inverted by A1 and output by Q4 to filter R3C3. Meanwhile, positive Vin excursions turn Q3 on, causing C2 to integrate their time and current product: charge. The accumulated charge is stored as voltage on C2 which is added to subsequent opposite polarity half-cycles with Q3 and Q4 acting as a simple full-wave charge pump. The net result: 

Vout = Avg(Abs(Vin)) R3 / R2 / R1.

Accurate rectification symmetry is therefore inherent as long as transistor Vbe’s match reasonably well which, being the same type and operating in similar contexts, they will.
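Since the filtered output is proportional to the average of |Vin|, a sine input of amplitude A should settle to something proportional to 2A/π ≈ 0.637A, the standard result for average-responding rectifiers. A quick numerical check:

```python
import math

# Average of |A*sin| over one period approaches 2A/pi as N grows.
A, N = 1.0, 100000
avg = sum(abs(A * math.sin(2 * math.pi * k / N)) for k in range(N)) / N
print(f"Avg|sin| = {avg:.4f}, 2/pi = {2 / math.pi:.4f}")  # both ≈ 0.6366
```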

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content



Altair eyeing a place in EDA’s shifting landscape

Tue, 07/02/2024 - 16:27

The EDA industry is known for the trio—Cadence, Siemens EDA, and Synopsys—that dominates it, and for how these companies turned into giants by acquiring smaller EDA outfits. Now, another EDA player is on the horizon, taking a similar path of serial acquisitions to attain design automation software glory.

Altair, a supplier of simulation and data analytics solutions, is cutting deals to expand its EDA footprint in several design automation areas. It has just announced that it will acquire Metrics Design Automation, a Canadian company built on a simulation-as-a-service (SaaS) business model for semiconductor simulation and design verification.

Figure 1 Merging simulation with workload and workflow optimization technology could bolster design verification tools. Source: Altair

The cloud-based business model has the potential to make high-caliber EDA tools much more affordable and accessible at a time when IC design verification has high licensing costs and may require hundreds and sometimes thousands of seats to run a single-chip simulation. Moreover, these EDA tools run on desktop machines and are not typically cloud-native or cloud-enabled.

Altair plans to combine its silicon debug tools with Metrics’ digital simulator, DSim, to offer simulation and debug capabilities as a desktop app, on company servers, or in the cloud. This will allow design engineers to pay only for what they use. DSim will be available through Altair One, Altair’s cloud gateway, where it will also be available for desktop download.

The combined solution will support Verilog and VHDL RTL for digital circuits in ASICs and FPGAs. Metrics is led by Joe Costello, an EDA industry veteran credited with turning Cadence Design Systems into a billion-dollar firm.

A plethora of EDA deals

Earlier this year, Altair named EDA Expert a channel partner for distributing its HyperWorks design and simulation platform within France. EDA Expert, founded in 2012 and headquartered in Arcueil, France, provides technical expertise and training to help manufacturers define suitable solutions for designing and manufacturing electronic systems and analyzing electronic boards.

Then, in June 2022, Altair announced the acquisition of Concept Engineering, a supplier of automatic schematic generation tools, electronic circuit and wire harness visualization platforms that provide on-the-fly visual rendering, and electronic design debug solutions. Concept Engineering’s software would be integrated into Altair’s Electronic System Design suite and made available via Altair Units.

Concept Engineering’s reactive visualization technology would help organizations accelerate their designs that have specific design architecture requirements as well as rigorous service needs. Next, its design debug solutions covered register transfer level (RTL), gate, and transistor design abstractions for both analog and digital disciplines.

Figure 2 Concept Engineering’s automatic schematic generation and visualization software components help developers create high-performance debugging cockpits, shorten software tool development cycles, lower software development and maintenance costs, and increase the product quality of EDA tools. Source: Altair

Finally, in September 2017, Altair announced that it would buy Runtime Design Automation, a Santa Clara, California-based company specializing in scalable solutions for high-performance computing (HPC). Runtime primarily served design engineers leveraging EDA tools to design CPUs, GPUs, and system-on-chips (SoCs).

Carving an EDA niche?

Altair calls itself a computational intelligence specialist, but its technology roadmap is increasingly converging and colliding with EDA tool offerings. It’s steadily accumulating EDA solutions in its technology arsenal to claim a stake in the EDA industry, which is now being transformed by artificial intelligence (AI) and cloud computing technologies.

Moreover, HPC, which Altair calls its forte, is taking center-stage in the semiconductor realm. So, the Troy, Michigan-based company might be aiming to carve out an EDA niche in this burgeoning market.

Still, Altair is nowhere near EDA’s big three: Cadence, Siemens EDA, and Synopsys. So, will Altair continue the acquisition spree and eventually challenge the dominance of the EDA trio? Or will it become an acquisition target over time due to its strengths in HPC, cloud, and AI? We at EDN will closely watch the developments in the acquisition sphere of the EDA industry.

Related Content



Resurrecting an inkjet printer, and dissecting a deceased cartridge

Tue, 07/02/2024 - 14:00

I purchased my Epson Artisan 730 color inkjet all-in-one (printer, copier, and scanner):

in September 2012, coincident with a move to Colorado (my even older Artisan 800 is still in occasional use by my wife). Speaking of “occasional use”, I’ve also used mine only sporadically, given that the monochrome outputs of the Brother laser multifunction printers in both of our offices work fine for most purposes and are significantly less expensive to operate on a per-page basis. Truth be told, mine probably still had its original ink cartridges installed when I recently tried to use it to print out a “batteries inside” notice, which needed to be bright red in color, to be taped to the outside of a package I was preparing to ship. And unsurprisingly, therefore, the result wasn’t as desired; the printer spat out a completely blank sheet of paper.

The inkjet cartridges, it turns out, were dried up inside (and/or empty; the software driver’s built-in diagnostics routine can’t differentiate between the two possible states). But when I replaced the cartridges with fresh ones:

the printer still spat out blank sheets of paper. That’s because, I eventually realized, the flexible multi-tube-harness that transports the ink from the cartridges to the print heads was also clogged by desiccated ink remnants (full disclosure: the following photo was snapped after the completion of the procedure described in the next couple of paragraphs):

Replacement harnesses weren’t available, my research indicated, and it also suggested that attempts to disassemble the printer were highly likely to lead to its demise. Determined to do everything possible to prevent this otherwise perfectly good device from ending up at the landfill, I kept plugging away with Google searches and eventually came across this video:

I went with this cleaning solution, and it took several fluid applications, each time followed by a few hours’ wait and then head clean and nozzle check operation attempts, but the Artisan 730 is thankfully back in business. I was left with the aforementioned “dead” inkjet cartridges:

which piqued my curiosity; how did they work, actually? And how did Epson and its competitors, such as long-disdained HP, both determine a particular cartridge’s remaining-ink level and attempt to prevent printer owners from using less expensive third-party alternatives? I decided to take one apart, randomly grabbing the light magenta one as my chosen victim:

Conceptually, here’s a how-it-works video I found that Wired Magazine did about a decade ago:

It’s not directly relevant here because, as I earlier noted, the print heads aren’t built into the cartridges; instead, they’re on the other end of the now-unclogged flexible tubing. But I still found the video interesting. And here’s a how-they-work (both in an absolute sense and vs thermal alternatives) Epson tech brief that I came across, which may also be of interest to you.

Also, in the earlier rubber-banded-stack photos, you might have noticed that the black ink cartridge has a “98” moniker while the others are “99”. Epson sells two versions of each cartridge color variant; “98s” have higher ink capacity than the less expensive “99” ones. A typical six-color bundle sold at retail combines a high-capacity black “98” (since monochrome printing is more common than full color, per my earlier mentioned Brother laser case study example) with standard capacity “99” variants of the others (more expensive all-“98” bundles are also available, obviously, as my earlier photos of the replacement cartridges indicate).

With that background info out of the way, let’s dive in. The cartridge enclosure construction is pretty beefy, understandably so due to the obvious desire to prevent leaks, and is further bolstered by a nearly impenetrable (for reasons that will soon be visibly obvious) sticker on one side:

That said, the seam around the install-orientation hole, whose purpose will be obvious once you see what the bay looks like absent cartridges (note the mounting pins toward the bottom):

and is on the opposite end from the same-side ink nozzle, looks promising:

And we’re inside. Behind that tough black plastic cover is, I suspect, the ink reservoir:

But for now, this electrical engineer’s top priority is checking out that multi-contact mini-PCB:

This side we’ve already seen in its installed state:

but the underside is now first-time exposed to view, too:

I’m guessing that under that opaque epoxy blob is the authentication chip (more likely, die). But if you look closely at the earlier mini-PCB-less shot, you’ll note that there’s still more “guts” to go below. Let’s get the broader plastic end assembly off next:

Whatever this is, I assume it modulates (and measures?) the amount of ink in the “tank” and flowing through the nozzle. Specific ideas, readers? A piezo something-or-other, mebbe?

And here are some views of what’s driving it (along with the earlier-seen mini-PCB, of course):

With no further meaningful progress seemingly possible here, I returned my disassembly attention to the sticker side, aided by a box cutter and focusing on the circular-pattern section you might have noticed in previous photos:

Hmmm. It appears that I’ve found the “port” used to fill the cartridge with ink on the assembly line. And it also appears that the cartridge still has at least some viable ink inside:

The light magenta dribble eventually petered out:

And after tediously piece-by-piece ripping off the recalcitrant black plastic sheet you saw earlier covering the other side, here’s what I found inside:

The ink-input port is the circular section in the upper left. Why there are so many chambers inside…🤷‍♂️ And the output nozzle extends downward in the lower left area. I’ll conclude with a few more shots of both it and the “flow-control faucet” for it from both sides:

And wrap up with this no more revealing, but far more messy, alternative teardown clip I found:

Those preparatory gloves, paper towel and newspaper sure were a wise move! 😂 Let me know your thoughts on what I uncovered, along with what’s still to be identified, in the comments.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content



Implementing AI at the edge: How it works

Mon, 07/01/2024 - 14:39

While the talk about artificial intelligence (AI) at the edge is all the rage, there are fewer design examples of how it’s actually done. In other words, how AI applications are implemented at the edge. Below is a design example of how Panasonic implemented an AI function in its e-assisted bike.

Panasonic recently launched the TiMO A, an electric-assist bicycle for school commuting. This e-assisted bike bypasses the need for additional hardware such as a tire air-pressure sensor. Instead, it pairs a microcontroller (MCU) with an edge AI development tool to create a tire pressure monitoring system (TPMS) that leverages an AI function.

Figure 1 The e-bike powertrain comprises basic units, including a power unit (with an on-board charger, junction box, inverter, and DC-to-DC converter) and a motor unit. Source: STMicroelectronics

The bike runs an AI application on the MCU to infer tire air pressure without using pressure sensors. If necessary, the system generates a warning to inflate the tires based on information from the motor and the bicycle speed sensor. As a result, this new function simplifies TPMS design while enhancing rider safety and prolonging tire life.

Panasonic combined the STM32F3 microcontroller from STMicroelectronics with ST’s edge AI development tool, STM32Cube.AI, which converts neural network (NN) models trained in common AI frameworks into code for the STM32 MCU and optimizes them.

STM32F3 is based on the Arm Cortex-M4, which has a maximum operating frequency of 72 MHz. It features a 128-KB flash along with analog and digital peripherals optimal for motor control. In addition to the new inflation warning function, the MCU determines the electric assistance level and controls the motor.

STM32Cube.AI enabled Panasonic to implement this edge AI function within the STM32F3’s embedded memory space. Panasonic leveraged the tool to shrink the NN model and optimize memory allocation throughout development; STM32Cube.AI quickly optimized the NN model developed by Panasonic Cycle Technology for the STM32F3 MCU and implemented it in the limited-capacity flash memory.

Figure 2 STM32Cube.AI, which makes artificial neural network mapping easier, converts neural networks from popular deep learning libraries to run optimized inferences on STM32 microcontrollers. Source: STMicroelectronics

This design example shows how edge AI works in both hardware and software, which can facilitate a wide range of designs in industrial and consumer domains.
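Panasonic's actual network, inputs, and weights are not public. Purely as a sketch of the kind of tiny fully-connected inference that fits in a small MCU's flash, here is a hypothetical two-input model (motor current and wheel speed are assumed inputs; all weights are made-up placeholders):

```python
# Hypothetical sketch only -- not Panasonic's model. It shows why such a
# network is cheap: storage is just the weight and bias arrays, and
# inference is a handful of multiply-accumulates per layer.
def dense(x, W, b):
    # y = relu(W @ x + b), written out element by element
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def infer(motor_current, wheel_speed):
    x = [motor_current, wheel_speed]
    h = dense(x, [[0.8, -0.5], [-0.3, 0.9]], [0.1, 0.0])  # hidden layer
    (score,) = dense(h, [[0.6, 0.4]], [0.0])              # output neuron
    return score

# Example call with arbitrary normalized inputs; the mapping from score
# to a "low pressure" warning threshold would be chosen during training.
print(infer(1.2, 0.7))
```

A tool like STM32Cube.AI takes a trained model of this general shape, quantizes and restructures it, and emits C code plus weight tables sized for the target MCU's flash and RAM.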

“By combining the STM32F3 MCU with STM32Cube.AI, we were able to implement the innovative AI function without the need to change hardware,” acknowledged Hiroyuki Kamo, manager of the software development section at the Development Department of Panasonic Cycle Technology.

Related Content



Simple low-pass filters tunable with a single potentiometer

Mon, 07/01/2024 - 14:00

A scheme of simple band-pass RC and LR filters built around operational amplifiers, each containing only one capacitor or inductor and three resistors, is proposed. The amplitude-frequency characteristics of the proposed filters are compared with those of Robert Allen Pease’s RC filter and its modified LR variant.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Among the whole set of simple low-frequency filters, the Sallen-Key filters [1, 2] stand out. Despite their attractive external simplicity, these filters are far from easy to set up and require matched components.

The RC filter proposed in 1971 by Robert Pease, then an engineer at George A. Philbrick Research (Figure 1) [3, 4], has several unique properties. It is extremely simple, its resonant frequency is controlled by a single potentiometer, R2, and the filter’s transmission coefficient is almost independent of that potentiometer’s resistance. The amplitude-frequency characteristics of this filter as R2 is adjusted are shown in Figure 1 [5].

Figure 1 Electrical diagram of the Pease RC-filter and its amplitude-frequency characteristics when R2: 1) 10.0 kΩ; 2) 3.0 kΩ; 3) 1.0 kΩ; 4) 0.3 kΩ; 5) 0.1 kΩ; 6) 0.03 kΩ.

By slightly modifying Pease’s circuit, namely by replacing its capacitors with inductors, we get a modified filter circuit. The amplitude-frequency characteristics of the modified LR-filter as the R2 potentiometer is adjusted are shown in Figure 2 [5].

Figure 2 Electrical diagram of the modified LR-filter and its amplitude-frequency characteristics when R2: 1) 0.03 kΩ; 2) 0.1 kΩ; 3) 0.3 kΩ; 4) 1.0 kΩ; 5) 3.0 kΩ; 6) 10.0 kΩ. L1=L2=20 mH.

In addition to the op-amp, the filters discussed above contain five components each. However, it is possible to offer even simpler filters that contain only four, where the series elements R3 + R4 are replaced with a single potentiometer.

The “resonant” frequency of the RC filter, Figure 3, is determined from the expression:

where f0 is in Hz, R is in Ω, C is in F, a is a constant depending on the model of the op-amp.

So, for example, for LM324 a ≈ 426. The equivalent Q-factor of the filter Q is proportional to the expression:

where b is a constant (b ≈ 110).

In the calculations: C = C1; R = R3 + R4. Thus, the “resonant” frequency of the filter depends only on the nominal values of the elements R = R3 + R4 and C = C1. The ratio R2/R1 does not affect the frequency of the “resonance”, but affects only the value of the equivalent quality factor of the filter and the transmission coefficient of the filter at the frequency of the “resonance”.

Figure 3 Electrical diagram of the RC-filter with the adjustment of the “resonance” position by the potentiometer R4.

The amplitude-frequency characteristics of the RC-filter are shown in Figure 4.

Figure 4 Amplitude-frequency characteristics of the RC-filter with the adjustment of the “resonance” position when the resistance value R = R3 + R4 varies.
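The article's exact transfer-function expressions appear above as figures; as a generic stand-in, the sketch below evaluates the magnitude of a textbook second-order resonant low-pass, which shows the same qualitative behavior: the "resonant" frequency f0 sets the position of the peak and the equivalent Q-factor sets its height. This is an illustration of the terminology, not the proposed filter's actual response.

```python
import math

# |H(j*2*pi*f)| = 1 / sqrt((1 - (f/f0)^2)^2 + (f/(Q*f0))^2)
# for a canonical second-order low-pass with resonant peak.
def mag(f, f0, Q):
    r = f / f0
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (r / Q) ** 2)

f0, Q = 1000.0, 5.0   # arbitrary example values
for f in (100, 500, 1000, 2000, 5000):
    print(f"{f:5d} Hz: {20 * math.log10(mag(f, f0, Q)):+6.1f} dB")
```

At f = f0 the magnitude equals Q (here +14 dB), and far above f0 the response rolls off at -40 dB/decade, the signature of a second-order low-pass.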

Replacing the capacitor C1 with the inductor L1 and swapping the frequency-determining components R and L, we get the LR-version of the filter, Figure 5. Its amplitude-frequency characteristics with varying values of R are shown in Figure 6.

The “resonant” frequency of the LR-filter, Figure 5, is determined from the expression:

where f0 is in Hz, R is in Ω, L is in H, and a is a constant. The ratio R2/R1 affects the same parameters as before.

Figure 5 Electrical diagram of the LR-filter with the adjustment of the “resonance” position by the potentiometer R4.

Figure 6 Amplitude-frequency characteristics of the LR-filter with the adjustment of the “resonance” position when the resistance value R = R3 + R4 varies.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

Related Content

References

  1. Sallen R.P., Key E.L. “A Practical Method of Designing RC Active Filters”. IRE Transactions on Circuit Theory, 1955, Vol. 2, No. 1 (March), pp. 74–85.
  2. Tietze U., Schenk Ch. “Halbleiter-Schaltungstechnik”, 12. Auflage, Berlin-Heidelberg, Springer Verlag, 2002, 1606 S.
  3. Pease R. “An easily tunable notch-pass filter”. Electronic Engineering, December 1971, p. 50.
  4. Hickman I. “Notches, Top”. Electronics World Incorporating Wireless World, 2000, V. 106, No. 2 (1766), pp. 120–125.
  5. Shustov M.A. “Circuit Engineering. 500 devices on analog chips”. St. Petersburg: Science and Technology, 2013, 352 p.


Assume nothing. Question everything.

Fri, 06/28/2024 - 14:00

When I was still in elementary school and household flashlight batteries were D-cells made using carbon-zinc chemistry, some of those D-cells were set in cardboard tubes that enshrouded a zinc can as in Figure 1 below.

Figure 1 Carbon-zinc D-cell from sixty-plus years ago where the negative post is the zinc can and the positive post is a small tip. Source: John Dunn

I remember taking such cells apart to see what was inside and digging out a center carbon core that connected to the positive post tip. The negative post of the cell was the metal can made of zinc.

I guess that these old battery materials weren’t threateningly toxic, so I got away unscathed for having done that. Doing something like that today, I hate to imagine the possibilities.

In the ensuing decades, I have always naively assumed that the outer cans of D-cells, and other sized cells too, were the negative terminal but I have labored lo these many years under a false assumption.

When two D-cells of a household flashlight recently gave out and I replaced them, for absolutely no reason in particular, I took off the outer cover of one of the used-up cells and got quite a surprise, Figure 2.

Figure 2 In modern alkaline D-cells the can is the positive end of the cell. Source: John Dunn

In these modern products, the can is the positive end of the cell, not the negative.

In terms of the product’s purpose, this difference amounts to nothing significant, but simply finding out this fact drove home to me that some of my lifelong assumptions should not be taken as immutable.

Science fiction writer Robert Heinlein prefaced one of his novels with the quote “Forgive him Caesar for he comes from a far place and believes the ways of his people to be natural law.”

Assume nothing. Question everything. (Many sources.)

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content



Intel bolsters EMIB packaging with EDA tools enablement

Fri, 06/28/2024 - 10:28

Intel’s embedded multi-die interconnect bridge (EMIB) technology—aiming to address the growing complexity in heterogeneously integrated multi-chip and multi-chiplet architectures—made waves at this year’s Design Automation Conference (DAC) in San Francisco, California. It delivers advanced integrated IC packaging solutions that encompass planning, prototyping, and signoff across a broad range of integration technologies such as 2.5D and 3D IC.

At DAC, Intel exhibited tie-ups with key EDA and IP partners to ensure that their heterogenous design tools, flows and methodologies, and reusable IP blocks are fully enabled and qualified to support EMIB assembly technology.

Figure 1 A silicon bridge is embedded inside a package to connect multiple dies. Source: Intel

At the heart of these initiatives was Intel Foundry’s Package Assembly Design Kit (PADK), which enables engineers to create EMIB-based package designs. Intel’s PADK—comprising a design guide, rules, and stack-up that enable chip designers to complete and verify an EMIB design efficiently—aims to address chip design complexity and facilitate EDA tool enablement.

PADK enables reference flows that support tools from all major EDA vendors to facilitate a PADK-driven assembly verification. Below is a sneak peek at Intel’s Foundry’s recent collaborations with major EDA vendors for EMIB enablement.

Collaboration with EDA trio

  1. Siemens EDA

At DAC 2024, Siemens EDA announced tool certifications for EMIB enablement for designing highly complex ICs and advanced packaging. The certifications include Solido SPICE—part of the Solido Simulation Suite software—for the foundry’s Intel 16 and Intel 18A process nodes.

Earlier, in February 2024, Siemens EDA announced the availability of the EMIB reference flow to allow design engineers to carry out early package assembly prototyping, hierarchical device floorplanning, co-design optimization, and verification of the complete detailed implementation. The reference flow, built around Intel Foundry’s PADK, enables engineers to tackle the full range of critical tasks needed for a successful design and tape-out.

Figure 2 The EMIB reference flow enables design engineers to create high-density interconnect for heterogeneous chips. Source: Siemens EDA

  2. Synopsys

Synopsys also exhibited a multi-die reference flow for Intel Foundry on the DAC 2024 floor. Powered by Synopsys.ai EDA suite, it aims to provide designers with a comprehensive and scalable solution for fast heterogeneous integration using EMIB assembly technology.

The reference flow, enabled by Synopsys 3DIC Compiler, provides a unified co-design and analysis solution to accelerate the development of multi-die designs at all stages from silicon to systems. Moreover, Synopsys 3DSO.ai, which is natively integrated with Synopsys 3DIC Compiler, enables optimization for signal, power, and thermal integrity.

Ansys, a supplier of electrothermal tools currently in the process of being acquired by Synopsys, is also providing multi-physics signoff solutions for Intel’s 2.5D chip assembly technology, which uses EMIB technology to connect the die flexibly and without the need for through-silicon vias (TSVs). Its RedHawk-SC Electrothermal EDA platform enables multi-physics analysis of 2.5D and 3D ICs with multiple dies.

  3. Cadence Design Systems

Cadence, another member of the EDA trio, has also joined hands with Intel Foundry to certify an integrated advanced packaging flow utilizing EMIB technology to address the growing complexity in heterogeneously integrated multi-chip(let) architectures. This EMIB flow enables design teams to seamlessly transition from early-stage system-level planning, optimization, and analysis to DRC-aware implementation and physical signoff without converting data between different formats.

EDA tool enablement

Intel, which has led the packaging technology development curve for a couple of decades, has now launched two advanced packaging technologies to scale silicon area by connecting multiple dies in a single package. While EMIB connects multiple chips side by side in a package, chips are stacked on top of one another in a 3D fashion in Foveros.

Rahul Goyal, VP and GM for product and design ecosystem enablement at Intel, says EMIB technology embodies a differentiated approach to multi-die assembly compared to traditional stacking techniques. Intel has already implemented EMIB technology in its own chips, including GPU Max Series (code-named Ponte Vecchio), 4th Gen Intel Xeon and Xeon 6 processors, and Intel Stratix 10 FPGAs.

Figure 3 Intel Foundry developed EMIB to connect multiple dies in a single package. Source: Intel

However, EMIB, like other advanced packaging technologies, presents new challenges related to the design and packaging complexities of multi-die architectures. So, incorporating a variety of EDA tools into Intel’s PADK is a good start. It will help chip designers implement and verify EMIB designs effectively and efficiently.

Related Content


The post Intel bolsters EMIB packaging with EDA tools enablement appeared first on EDN.

Wi-Fi module teams with Alexa Connect Kit

Thu, 06/27/2024 - 22:44

A standalone Wi-Fi module, Quectel’s FLM263D works with the Alexa Connect Kit (ACK) SDK for Matter, enabling rapid setup with Amazon Alexa. In addition to Alexa, the FLM263D allows IoT devices to connect seamlessly with other Matter-compliant smart home assistants, including Google Home, Samsung SmartThings, and Apple HomeKit.

Housed in a 17.3×15.0×2.8-mm LCC package, the 2.4-GHz Wi-Fi module also supports Bluetooth LE 5.2. Along with a built-in PCB antenna, it integrates a processor operating at up to 320 MHz, 512 kbytes of SRAM, and 4 Mbytes of flash memory.

Ben McInnis, director of Smart Home at Amazon, said, “We created ACK to make it simpler for device makers to build smart home devices with a fully-managed service without having deep expertise in multiple wireless protocols, complex cloud connectivity, and the necessary maintenance of cloud infrastructure. We are excited about this launch because Quectel’s FLM263D adds to ACK’s portfolio of available solutions and offers more choices for device makers.”

To protect connected devices, the FLM263D complies with WPA, WPA2, and WPA3 security standards, including secure boot and Mbed TLS encryption. The module offers five GPIOs that can alternatively function as PWM or I2C communication channels. It requires a 3.0-V to 3.6-V power supply and operates over a temperature range of -40°C to +105°C.

The FLM263D is particularly useful for smart light bulb applications. Quectel expects to roll out additional modules targeting smart switches, smart plugs, and other smart household devices.

FLM263D product page 

Quectel Wireless Solutions 



The post Wi-Fi module teams with Alexa Connect Kit appeared first on EDN.

SiC diode lineup spans 5 A to 40 A

Thu, 06/27/2024 - 22:44

Sixteen new Gen 3 1200-V SiC Schottky diodes from Vishay increase the efficiency and reliability of switching power designs. Covering current ratings ranging from 5 A to 40 A, the SiC diodes feature a merged PiN Schottky (MPS) structure, with the backside thinned using laser annealing.

These Gen 3 SiC diodes exhibit a capacitive charge as low as 28 nC and a forward voltage drop of just 1.35 V. Moreover, with typical leakage currents as low as 2.5 µA at 25°C, they minimize conduction losses, ensuring high system efficiency even during light loads and idling.

Intended for use in harsh environments, the devices operate at temperatures of up to +175°C with forward surge ratings as high as 260 A. Typical applications include AC/DC power factor correction and DC/DC high-frequency output rectification in PSFB and LLC converters.

The Gen 3 1200-V SiC diodes come in a variety of through-hole and surface-mount packages. Samples and production quantities are available now, with lead times of 13 weeks.

Click here to access the datasheets of the 16 new devices, as well as other MPS diodes.

Vishay Intertechnology 



The post SiC diode lineup spans 5 A to 40 A appeared first on EDN.

GaN MMIC power amps operate in Ka band

Thu, 06/27/2024 - 22:44

Mitsubishi will begin sampling its GaN MMIC power amplifiers for use in Ka band satellite communication earth stations next month. The MGFGC5H3102 and MGFGC5H3103 cover a frequency band of 27.5 GHz to 31 GHz and deliver 8 W and 14 W of output power, respectively.

While the mainstream frequency for satellite communications is currently the Ku band (13 GHz to 14 GHz), the higher-frequency Ka band enables the use of multibeam technology and provides wider bandwidth for transmitting more data. The expansion of Mitsubishi’s GaN HEMT lineup with Ka band products will support the growth of satellite news gathering (SNG) services and SATCOM emergency systems.

The 8-W MGFGC5H3102 and 14-W MGFGC5H3103 are supplied as small bare chips, and their miniaturization will contribute to smaller SATCOM earth stations. According to Mitsubishi, the devices’ increased power added efficiency of more than 20% at maximum linear power will also help reduce power consumption in these stations.

The GaN MMIC power amplifiers were exhibited at the recent IEEE MTT-S International Microwave Symposium. Datasheets were not available at the time of this announcement.

Mitsubishi Electric



The post GaN MMIC power amps operate in Ka band appeared first on EDN.

RF front end shrinks package size

Thu, 06/27/2024 - 22:44

Joining Spectrum Control’s SCi Blocks RF portfolio is an RF front-end system-in-package (SiP) for next-generation defense systems. Customizable and digitally enabled, the RF+ SiP provides the capabilities of an integrated microwave assembly in a surface-mount BGA package that is just 30×30 mm.

The RF+ SiP front end serves as an effective coprocessor to direct-RF FPGAs and mixed-signal control processors. Its surface-mount design and integrated digital control are expected to reduce lifecycle costs by up to 86%, according to the company. With its wideband performance, compact footprint, and volume-ready design, Spectrum believes the device will enable cost-effective mass production. This applies to midrange precision-guided munitions or any application where size, weight, power consumption, and cost (SWaP-C) are critical factors.

The initial product in the SCRS series is a full RF front end covering a range of 6 GHz to 18 GHz with filtering and 2 GHz to 20 GHz without. It offers instantaneous bandwidth of 2 GHz and full signal isolation without packaging signal degradation. The part also delivers 15 dB of gain with a noise figure of 6 dB to 10 dB. Other features include an onboard FPGA for software-controlled signal conditioning, signal-level detection, self-tuning, on-chip power regulation/sensing, temperature sensing, and a standard digital control interface.

This first SCRS series RF+ SiP is sampling now. Customization options include frequency band, power amplification, and block conversion.

RF+ SiP product page  

Spectrum Control  



The post RF front end shrinks package size appeared first on EDN.

Intel unveils high-speed optical I/O chiplet

Thu, 06/27/2024 - 22:44

A fully integrated optical compute interconnect (OCI) chiplet from Intel delivers up to 4 Tbps of bidirectional data transfer. The company’s Integrated Photonics Solutions (IPS) Group hosted a live demonstration of the chiplet co-packaged with an Intel CPU at the Optical Fiber Communications Conference (OFC) 2024. Designed to meet the high-bandwidth demands of emerging AI infrastructure, the chiplet is well-suited for data centers and HPC applications.

This first OCI implementation supports 64 PCIe 5.0 channels transmitting 32 Gbps in each direction—4 Tbps total—over distances of up to 100 meters using fiber optics. It uses dense wavelength-division multiplexing (DWDM) and consumes just 5 picojoules per bit. This is significantly more energy-efficient than pluggable optical transceiver modules, which consume about 15 picojoules per bit, according to Intel.
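A quick back-of-envelope check of those figures (a sketch: the per-lane math is inferred from the numbers quoted above, not an Intel-published breakdown):

```python
# Hypothetical sanity check of the announced OCI numbers.
LANES = 64
GBPS_PER_LANE = 32  # per direction, per channel

# 64 lanes x 32 Gbps x 2 directions ~= the "up to 4 Tbps" headline figure.
total_tbps = LANES * GBPS_PER_LANE * 2 / 1000

def link_power_w(tbps, pj_per_bit):
    # bits/second x joules/bit = watts
    return tbps * 1e12 * pj_per_bit * 1e-12

print(round(total_tbps, 3))      # 4.096
print(link_power_w(4.0, 5))      # 20.0 W at 5 pJ/bit (OCI chiplet)
print(link_power_w(4.0, 15))     # 60.0 W at 15 pJ/bit (pluggables)
```

At full 4-Tbps throughput, the quoted 10 pJ/bit saving works out to tens of watts per link, which is why the energy-per-bit comparison matters at data-center scale.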

The OCI chiplet combines a silicon photonics IC, which includes on-chip lasers and optical amplifiers, with an electrical IC. While the chiplet demonstrated was co-packaged with an Intel CPU, it can be integrated with next-generation CPUs, GPUs, IPUs, and other SoCs.

Intel’s current optical I/O chiplet is a prototype. The company is working with select customers to integrate the chiplet with their SoCs.

For more information about Intel silicon photonics and the OCI chiplet, click here.

Intel



The post Intel unveils high-speed optical I/O chiplet appeared first on EDN.

When is a “good-enough” battery good enough?

Thu, 06/27/2024 - 17:28

Engineers are familiar with non-experts who quickly extrapolate from recent technical advances to future results, casually stating “at this rate, such-and-such will reach ‘x’ in five years” or something along those lines. Engineers are also familiar with the real world of extending advances, which generally has complex nonlinearities in design, development, test, and manufacturing.

Let’s look at production first. It’s been demonstrated that once the initial issues are resolved, volume increases and costs drop, often dramatically, as production gains more experience with sourcing, assembly, test, and costing. There are hundreds of examples, including ICs, smartphones, microwave ovens, and more. The slope and depth of improvement depend on many factors, of course, and eventually a lower limit is reached, but the shape of the curve is clear, Figure 1.

Figure 1 Once manufacturing gets going, works through the various impediments, and masters the process, progress is rapid until it reaches a plateau. Source: Bill Schweber

Ironically, the situation in the product design and development phase which precedes manufacturing is often the opposite. Instead, there’s the informal 80/20 (or sometimes shown as 90/10) rule, which states that completing the last 20% of a project often takes 80% of the time. To put it another way, that last 20% takes as much time and effort as the first 80%, Figure 2.

Figure 2 For design and development, progress is usually rapid until the final-stage issues need to be resolved, when it slows down considerably. Source: Bill Schweber

This is not surprising, since it’s those final details of design validation, prototype, test, tracking down and eliminating the last bugs, and similar that consume time and energy. There’s an inflection point in the rate of progress where the challenges and effort required increase drastically while progress is modest or marginal.

For example, trying to trim system dissipation by a small amount to get the run time past a marketing-defined objective can take as long as the bulk of the design/debug effort. This situation affects leading-edge products, with smartphones as one obvious example.

That’s why it’s important to recognize that there are many times when performance which is “good enough” and appropriately tailored to the application and user needs, budget, and availability is the better design alternative.

Consider this: one area getting marginal gains at great cost in R&D time and effort is batteries. It seems like every university is deep into battery research; among the many reasons for this is that’s where the grant money is these days. Of course, many commercial organizations and companies are also doing battery research, again using similar grants as well as their own or investor money.

Everyone is hoping to improve battery attributes such as capacity, charge time, number of operating cycles, safety, cost, manufacturability, and more. The harsh reality is that despite all this effort, actual progress over the last few years has been slow (except perhaps on cost, due to manufacturing progress), and is measured in a few percent or fractions of a percent.

Still, the aspects which researchers are trying to improve are not necessarily priorities for everyone. Some end-users can accept reduced capacity (run time) if the tradeoff is a much lower cost, simplified materials and manufacturing, improved safety, or reduced environmental impact.

That’s why I was intrigued by the research paper “Water-in-Polymer Salt Electrolyte for Long-Life Rechargeable Aqueous Zinc-Lignin Battery” by a team based at Linköping University (Sweden) and published in the Wiley journal Energy and Environmental Materials. Their battery is not the best by any measure, but it is pretty good, low cost, and easy to fabricate, Figure 3. It’s a good fit with low-cost solar panels and can bring a modest amount of electrical power to areas where it is not available.

Figure 3 The battery developed by the researchers is small and the technology appears to be scalable. Source: Linköping University

They worked with zinc-metal batteries (ZnBs) which use widely available zinc and a non-flammable aqueous electrolyte. The primary difficulty with zinc batteries has been poor durability due to zinc reacting with the water in the battery’s electrolyte solution, which leads to the generation of hydrogen gas and dendritic growth of the zinc, thus rendering the battery essentially unusable.

To stabilize the zinc, they used a substance called a potassium polyacrylate-based water-in-polymer salt electrolyte (WiPSE). The researchers have demonstrated that when WiPSE is used in a battery containing zinc and lignin, stability is very high. Lignin is a tough, woody biopolymer that binds cellulose and hemicellulose fibers and provides stiffness to plants, and it is the second most abundant polymer after cellulose, Figure 4. It is a widely available byproduct of the manufacture of paper products. Both zinc and lignin are inexpensive, and the battery is easily recyclable.

Figure 4 Lignin is a waste product from the paper industry. Source: Linköping University

I won’t review the chemistry details of their project; you can read it in their paper if you are interested. They claim their battery is stable and can deliver over 8000 cycles at a high current rate of 1 amp/gram while maintaining about 80% of its performance, with 75% capacity retention up to 2000 cycles at a lower current rate (0.2 A/gram), Figure 5. In addition, the battery retains its charge for approximately one week, significantly longer than other similar zinc-based batteries that discharge in just a few hours.

Figure 5 a) Schematic illustration of Zn-lignin battery with WiPSE. b) Cyclic voltammetry (CV) of both Zn and L-C electrode in WiPSE showing that the device exhibits 1.3 V of cell potential. c) CV at different sweeps and d) GCD at different current rates of Zn-lignin battery in WiPSE. e) Presents capacity retention and coulombic efficiency estimation as cyclic stability analysis of Zn-lignin battery in WiPSE up to 8000 cycles at 1 A/gm current rate. f) Voltage vs time plot for 48 hours to analyze the self-discharge of Zn-lignin battery in WiPSE. Source: Linköping University

Whether this zinc-lignin battery will actually be deployed successfully is another story. With all advances, and especially with batteries, there is usually a difficult journey with many problems when scaling from lab prototype to pilot production, and even more when transitioning from the pilot phase to full-scale production.

Another virtue of these zinc-lignin batteries may be that they can be fabricated successfully at a smaller scale; that would be a major plus as well. Currently, the batteries developed in the lab are small, but the researchers believe the technology is scalable—scalability being the weakness of so many battery advances.

Whatever the outcome, I like that these batteries do not attempt to outperform or even perform comparably to more expensive, complicated, and often hazardous lithium-based chemistries. Instead, they adapt to the fact that less stringent requirements mean that the R&D can possibly shrink the time-versus-hassle path before the curve’s inflection point is reached.

If only this were the case for many other projects which keep engineers working long hours, trying to get that last percent of performance increase or cost decrease—well, that would be nice but unlikely to happen.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related content


The post When is a “good-enough” battery good enough? appeared first on EDN.

The good, the bad, and the ugly of zero trims

Wed, 06/26/2024 - 17:51

Manual amplifier nulling circuits are simple topologies, typically consisting of just a trimmer pot and a couple of fixed resistors intended to allow offset adjustment by a (usually small) symmetrical fraction of bipolar supply voltages. So, it’s surprising how many variations exist, some very good, some very not. Figure 1 is an example of the latter case.

Figure 1 The bad: Attenuation of the supply voltages is done with subtraction instead of division, destroying the PSRR of the amplifier.

Wow the engineering world with your unique design: Design Ideas Submission Guide

This zero trim is a bad idea because attenuation of the supply voltages is done with (V+ – V–) subtraction instead of division. This virtually destroys the PSRR of the amplifier. That’s pretty bad.

Figure 2 corrects this serious defect, achieving attenuation with a proper (R3/R2) voltage divider instead of PSRR-robbing subtraction. But it still isn’t very pretty. Here’s why.

Figure 2 The ugly: An attempt to correct for the destroyed PSRR can be done by achieving attenuation with a voltage divider instead; however, the supply rails must be symmetrical, leading us back to our PSRR problem.

 Figure 2 can only give the (usually) desirable symmetrical trim range if the supply rails are likewise symmetrical (and vice versa). You could add a series resistor between R1 and the larger rail voltage to fix the problem, but that would (at least partly) revive the PSRR shortcoming of Figure 1. Ugly.

Figure 3 fixes both problems.

Figure 3 The good: Setting R2 = R3(-V+/ V–)/2 to get a symmetrical trim range.

All you have to do is set R2 = R3(-V+/ V–)/2 to get a symmetrical trim range regardless of the actual supply rail voltage ratio.

And I think that’s pretty good.
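As a quick numerical illustration of that relation (a sketch with assumed rail and resistor values, not from the article):

```python
# Sketch: compute R2 for a symmetrical trim range per the relation above,
# R2 = R3 * (-V+ / V-) / 2. Example values (100k, +5 V, -3 V) are assumed.
def r2_for_symmetric_trim(r3, v_pos, v_neg):
    """R2 that balances the trim range for given positive/negative rails."""
    return r3 * (-v_pos / v_neg) / 2

# Symmetrical +/-5 V rails reduce to the familiar R2 = R3/2:
print(r2_for_symmetric_trim(100e3, 5.0, -5.0))        # 50000.0

# Asymmetric +5 V / -3 V rails need a proportionally larger R2:
print(round(r2_for_symmetric_trim(100e3, 5.0, -3.0)))  # 83333
```

The negative sign simply cancels the sign of V–, so the result is always a positive resistance as long as the rails are bipolar.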

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post The good, the bad, and the ugly of zero trims appeared first on EDN.

Power Tips #130: Migrating from a barrel jack to USB Type-C PD

Tue, 06/25/2024 - 17:46

Over the last few years, the USB Type-C® with Power Delivery (PD) standard has been adopted in a wide variety of electronics. This adoption has been driven by benefits such as a unified port (reducing e-waste), the convenience of a reversible connector, and high-power capability.

As Table 1 shows, the latest release, USB PD 3.1, extends the power capability of USB up to 240 W, more than doubling the 100 W available under the previous USB PD 3.0 specification. This allows a wide range of new applications to now be powered from USB. In order to reduce e-waste, the European Union and India have started passing legislation mandating USB Type-C for personal electronics in 2025, and this trend will likely extend to other applications such as power tools, smart speakers, vacuum cleaners, e-bike chargers, and networking equipment. These trends and regulations are forcing manufacturers to seek out simple and inexpensive ways to convert the power connectors on their products from a barrel jack to a USB-C connector.

Table 1 USB power standards where the latest USB PD 3.1 release extends the power capability of USB up to 240 W. Source: Texas Instruments

In this Power Tip, we will discuss system power considerations and demonstrate how you can quickly and easily implement a USB-C connector and power management circuitry that negotiates the appropriate USB PD contract for the power requirements of your design.

USB PD power flows

It is also worth noting that there are three types of power flow in the USB PD ecosystem: devices that can only sink power, devices that can only source power, and devices that allow bidirectional power flow (dual-role power). In this article, we’ll focus on sink-only applications.

Before a sink device utilizing USB PD can accept power from a USB PD power source, some handshaking and negotiation must take place between the device being powered and the power source. This is because the voltage on the USB PD power bus can range from 5 V to 48 V, depending on the power capability of the power source. Obviously, you would not want to apply 48 V to a sink device that is only designed to operate from a 15 V input source. In a USB PD sink application, a dedicated device called a port controller is needed to perform this power contract negotiation and provide protections like overcurrent and overvoltage. Previously, adding a USB PD port controller configured with the proper functionality required in-depth knowledge of USB certification and a large amount of firmware development effort. To simplify the power architecture and reduce design complexity, a preprogrammed USB PD controller allows the designer to configure the maximum and minimum voltage and current sink capability through a simple resistor-divider setting, as shown in Table 2. This removes the need for external electrically erasable programmable read-only memory (EEPROM), an MCU, or any firmware development.

Table 2 The ADCIN pin of a preprogrammed USB PD controller that allows designers to configure the max and min voltage as well as the current sink capability through a simple resistor divider setting. Source: Texas Instruments

Negotiating power contracts and matching system power requirements

Before converting your product to USB PD, it is important to understand the limitations and requirements of the USB PD ecosystem. On the source side of the cable, a USB PD power source will be providing power to your system, but the person using your product could connect absolutely any USB PD adapter or other power source. You need to consider what power contract is needed to provide full power to your system. In addition, consider how your system will behave if insufficient power is available from that adapter.

The available current through the USB Type-C cable is limited to 3 A for voltages below 20 V, and 5 A for voltages 20 V and above. Additionally, USB PD power sources are only required to generate the minimum voltage necessary to provide rated power at the maximum allowed cable current. For example, a 45 W adapter will typically provide a maximum output voltage of 15 V, since 45 W divided by 3 A is 15 V.

What if your system is designed to run from a 15 V source but needs 50 W of power? In this case, you need to configure your port controller to accept a higher-voltage contract (e.g., 20 V) to ensure you have enough power to run your system, and you need to ensure your system is designed to handle this higher input voltage. This may require slight modifications to your product beyond just adding the USB Type-C connector and port controller. Additionally, you typically still want your product to be functional, perhaps at a reduced performance level, when connected to a USB PD source with insufficient power capacity.
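The selection logic above can be sketched in a few lines. This is an illustrative helper, not TI code; the 5/9/15/20 V list and the 3 A / 5 A cable limits follow the rules described in the text:

```python
# Sketch: pick the lowest common USB PD fixed voltage that can deliver the
# required system power within the Type-C cable current limits.
PD_FIXED_VOLTAGES = [5, 9, 15, 20]  # common fixed supply levels (SPR)

def cable_current_limit(voltage):
    # 3 A below 20 V; 5 A at 20 V and above (assuming a 5 A-rated cable)
    return 5.0 if voltage >= 20 else 3.0

def min_contract_voltage(power_w):
    for v in PD_FIXED_VOLTAGES:
        if v * cable_current_limit(v) >= power_w:
            return v
    return None  # above 100 W an EPR (28/36/48 V) contract would be needed

# The 50 W example above: 15 V x 3 A = 45 W falls short,
# so the port controller must negotiate a 20 V contract.
print(min_contract_voltage(50))  # 20
```

Running the same helper at 45 W returns 15 V, matching the 45-W-adapter example earlier in the article.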

Design example

Consider a product that needs to charge a 4S-7S battery at 27 W and was previously powered through a 15 V barrel jack. In this example, a buck-boost converter was used, since the battery voltage could be higher or lower than the 15 V input, depending on the state of charge. Converting this design to a USB PD input requires only a simple stand-alone USB PD controller like the TPS25730 and a buck-boost battery charger. Figure 1 shows the system architecture. You can see that only a few components were required to convert the barrel jack to a USB PD port. The simple resistors connected to the ADCIN1 through ADCIN4 pins set the power profile without the need for any firmware development. In this case, the product must still charge from a 5 V power source even though available power is reduced, so the TPS25730 is configured for a 20 V maximum voltage and 5 V minimum voltage, with the operating current set to 3 A.

Figure 1 The 27W USB PD sink-only charger reference design block diagram. Source: Texas Instruments

Input voltage dynamic power management

Besides supporting a USB PD source input, the design should also support legacy USB input sources, such as 5 V at 2 A. To avoid collapse of the input voltage when the input power is limited, the BQ25756E provides an input voltage dynamic power management feature, which reduces the charge current if the input voltage drops to a value set by the parameter Vin_dpm. Vin_dpm should be set slightly lower than the input voltage minus the voltage drop through the cable and power path, so that the charger can maximize the battery charge current without overloading the input source or creating instability on the input bus.

Figure 2 shows experimental results charging from a 5 V, 2 A source with a 1 meter USB cable (0.25 Ω resistance). When you set Vin_dpm to 4.75 V, you can see that the input charge current is limited and unstable (left side of Figure 2). When properly configured, with the Vin_dpm set to 4.35 V to account for the resistive drop, the input voltage is stable and the charge current is increased by 50%, which will significantly shorten charging times.

Figure 2 Input dynamic power management when charging from a 5 V, 2 A source with a 1 m USB cable. Source: Texas Instruments
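The Vin_dpm arithmetic behind that result is straightforward. A sketch using the article's 5 V / 2 A / 0.25 Ω numbers; the 0.15 V margin is an assumed value chosen to land on the 4.35 V threshold quoted above:

```python
# Sketch: choose a Vin_dpm threshold just below the source voltage
# minus the full-load IR drop through the cable and power path.
V_SOURCE = 5.0   # legacy USB adapter output (V)
I_MAX = 2.0      # adapter current limit (A)
R_PATH = 0.25    # 1 m USB cable resistance (ohm)
MARGIN = 0.15    # headroom below the full-load input voltage (V, assumed)

v_full_load = V_SOURCE - I_MAX * R_PATH  # 5.0 - 0.5 = 4.5 V at the input
vin_dpm = v_full_load - MARGIN           # 4.35 V
print(f"Vin_dpm = {vin_dpm:.2f} V")
```

Setting the threshold at 4.75 V, inside the IR-drop region, is what caused the limited and unstable charge current on the left side of Figure 2.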

Implementing USB PD

With a simplified USB PD controller and battery charger architecture, you don’t need to have in-depth knowledge of USB PD. Not only can you eliminate the need for an extra MCU and EEPROM (and exert no firmware effort), but you can use just a simple resistor divider to configure your voltage and current sink capability and quickly convert your barrel jack to a USB Type-C input. For complete details of the example design highlighted here, check out the 27W USB Power Delivery Sink-Only Charger Reference Design for 4- to 7-Cell Batteries.

Author bios

Max Wang has been a systems engineer for the Power Design Services team at Texas Instruments, where he is responsible for power solution and reference designs for industrial and personal electronics applications. He recently created a series of high-efficiency compact AC/DC and DC/DC USB Type-C® PD charger solutions. Before joining TI, he worked at Delta, Power Integrations, and Infineon. He has a master’s degree in electrical engineering from Zhejiang University in Hangzhou, China.

 

 

Brian King is a systems manager and senior member of technical staff at Texas Instruments. He has over 28 years of experience in power supply design, specializing in isolated AC-DC and DC-DC applications. Brian has worked directly with customers to support over 1300 business opportunities and has designed over 750 unique power supplies using a broad range of TI power supply controllers, with a focus on maximizing efficiency and minimizing solution size and cost. He has published over 45 articles related to power supply design, and since 2016 he has been the lead organizer and content curator for the Texas Instruments Power Supply Design Seminar (PSDS) series, which regularly provides training to thousands of power engineers worldwide. Brian received an MSEE and a BSEE from the University of Arkansas.

Related Content

 


The post Power Tips #130: Migrating from a barrel jack to USB Type-C PD appeared first on EDN.

Reducing error of digital potentiometers

Tue, 06/25/2024 - 16:47

A common problem for digital potentiometers (DigiPots) is the effect of wiper resistance, which produces quite noticeable non-linearities of regulation at both ends of the resistance range. These effects also lead to an increased tempco in these areas, since the high wiper resistance temperature coefficient will dominate.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 is reproduced from Figure 4-6 in the datasheet for the digital potentiometer MCP41XXX/42XXX (Microchip, DS11195C, page 15). The absolute gain response is quite typical, so it is repeated here as a good illustration. The circuit uses the DigiPot as a gain-control element; the green line is added to show the ideal gain control.

Figure 1 Gain versus decimal code for inverting and differential amplifier circuits (black line) reproduced from Digital Potentiometer MCP41XXX/42XXX and ideal gain control (green line). Source: Microchip

Now, let’s see how we can reduce the influence of the wiper resistance on both ends of the resistive element.

A solution shown in Figure 2 exploits the fact that the wiper resistance of a DigiPot isn’t related to its nominal total resistance. The idea is simple and straightforward: two DigiPots on the same chip are connected in parallel, and both must be programmed with the same code. The absolute error induced by the non-zero wiper resistance (r), visible in Figure 1, is halved: the two wipers in parallel present an effective resistance of r/2.

Figure 2 A solution that reduces the errors of digital potentiometers by exploiting the fact that the wiper resistance of the DigiPot is not related to its nominal total resistance. Source: Peter Demchenko

The solution is best suited to multi-channel (dual or quad) DigiPots, since channels on the same chip can guarantee acceptable resistor matching.

The solution may also be beneficial for the rheostat-mode tempco and rheostat INL error, both of which can be reduced.

While the wiper resistance of a DigiPot isn’t directly related to its nominal resistance, it may still increase as the nominal value increases; this can erode the advantages of the circuit, so parts should be chosen carefully.

To maintain the same total resistance, both DigiPots should have twice the nominal total resistance, which may sometimes be difficult to ensure, since the assortment of available nominal values is rather restricted.

Note also that the larger nominal value of the DigiPot can reduce the effective frequency range somewhat.
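The arithmetic behind the r/2 improvement can be sketched numerically. The part values below (a 10-kΩ, 256-step DigiPot with a 125-Ω wiper resistance) are illustrative assumptions, not taken from a specific datasheet; the function names are hypothetical:

```python
# Sketch of wiper-resistance error reduction by paralleling two DigiPots
# programmed with the same code. Assumed (illustrative) part values:
R_TOTAL = 10_000.0   # nominal end-to-end resistance, ohms
R_WIPER = 125.0      # wiper resistance, ohms
STEPS = 256          # 8-bit DigiPot

def ideal(code):
    """Ideal wiper-to-B resistance with zero wiper resistance."""
    return R_TOTAL * code / STEPS

def rheostat_single(code):
    """One DigiPot in rheostat mode: programmed value plus wiper resistance."""
    return R_TOTAL * code / STEPS + R_WIPER

def rheostat_parallel(code):
    """Two identical DigiPots of 2x nominal resistance in parallel, same code.
    Each leg is 2*R_TOTAL*code/STEPS + R_WIPER; paralleling halves both terms,
    restoring the original nominal value but with only r/2 of wiper error."""
    leg = 2 * R_TOTAL * code / STEPS + R_WIPER
    return leg / 2

for code in (1, 128, 255):
    err_single = rheostat_single(code) - ideal(code)      # always R_WIPER
    err_parallel = rheostat_parallel(code) - ideal(code)  # always R_WIPER / 2
    print(f"code={code:3d}  single error={err_single:.1f} ohm  "
          f"parallel error={err_parallel:.1f} ohm")
```

The error terms are code-independent here; in a real part the wiper resistance also varies with code and temperature, which is exactly why halving it helps the tempco as well.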

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post Reducing error of digital potentiometers appeared first on EDN.

Connected MCUs incorporate Wi-Fi 6/6E, BLE 5.4

Tue, 06/25/2024 - 10:40

A new family of connected MCUs incorporating long-range Wi-Fi 6/6E and Bluetooth Low Energy 5.4 is targeted at cost-optimized, power-efficient, and small form-factor products for smart home, industrial, wearables, and Internet of Things (IoT) applications.

These connected MCUs can be used as the main processor in an IoT device or as a subsystem in more complex designs to fully offload connectivity for IoT applications. They are available in three versions: CYW55913 for tri-band (2.4/5/6 GHz), CYW55912 for dual-band (2.4/5 GHz), and CYW55911 for single-band (2.4 GHz) support.

AIROC CYW5591x connected MCUs feature extensive peripherals and GPIO support. Source: Infineon

Infineon’s new connected MCUs—also touted as Wi-Fi 6 systems-on-chip (SoCs)—are built around an Arm Cortex-M33 processor running at 192 MHz and a TrustZone CC312 security subsystem that provides root of trust (RoT) and cryptographic services. Moreover, their quad-SPI interface with execute-in-place (XIP) facilitates on-the-fly encryption/decryption for flash and PSRAM.

On the wireless front, these connected MCUs operate at up to +24 dBm transmit power for Wi-Fi and up to +19 dBm transmit power for Bluetooth Low Energy 5.4, which supports the 2-Mbps LE PHY, LE Long Range, Advertising Extensions, and advertising code selection for LE Long Range.

These connected MCUs also come with an easy-to-use software development platform comprising ModusToolbox software, RTOS and Linux host drivers, a fully validated Bluetooth stack, multiple code examples, and Matter software enablement.

Wireless module suppliers like AzureWave, Murata, and USI are starting to incorporate Infineon’s new AIROC CYW5591x connected MCUs into their modules. Infineon is already sampling these Wi-Fi 6 SoCs to alpha customers.

Related Content


The post Connected MCUs incorporate Wi-Fi 6/6E, BLE 5.4 appeared first on EDN.

Thermal analysis tool aims to reinvigorate 3D-IC design

Mon, 06/24/2024 - 19:29

The mainstream adoption of 3D-ICs remains in question due to critical challenges ranging from early-stage chip design to 3D assembly exploration to final design signoff. A new EDA tool claims to address these issues by integrating thermal analysis directly into all stages of the IC design flow, from early analysis to signoff, while offering multiple use models.

At this year’s Design Automation Conference (DAC) in San Francisco, California, Siemens EDA unveiled Calibre 3DThermal software for thermal analysis, verification, and debugging in 3D integrated circuits (3D-ICs). It enables chip designers to rapidly model, visualize, and mitigate thermal effects in their designs from early-stage chip design to package-inward exploration to design signoff.

Figure 1 Calibre 3DThermal is a thermal analysis solution based on a complete understanding of the 3D-IC assembly. Source: Siemens EDA

In all design flows, Calibre 3DThermal captures and analyzes thermal data across the entire design lifecycle. Siemens EDA has already joined forces with UMC to deploy a thermal analysis flow based on Calibre 3DThermal.

What’s hampering 3D ICs

Semiconductor engineering teams designing and manufacturing bleeding-edge, next-generation chips are turning to chiplets and 3D-IC architectures to integrate more functionality into ever-shrinking footprints. However, despite lots of talk, commercially available semiconductors based on 3D-IC architectures are still quite hard to find in the marketplace.

Why? 3D-IC architectures—which place multiple dies or chiplets side by side or even stack them vertically in a single package—present a range of new complexities because of the higher number of active dies in close proximity. These challenges—sometimes categorized as multi-physics—often relate to controlling heat dissipation, since excessive heat can impact the end device’s performance and reliability.

“There has been a view that 3D IC is going to take over the world, but no one is going to abandon Moore’s Law transistor scaling,” said Michael White, senior director of physical verification product management for Calibre design solutions at Siemens EDA. “However, 3D IC will be used for heterogeneous solutions in compute-intensive artificial intelligence (AI) chips.”

At advanced nodes like 2 nm, 3D-IC makes sense, he added. “Whether it’s an application processor, CPU, or GPU, parts like I/O and HBM are going to be separate dies or separate chiplets, and it’s all going to be packaged in 2.5D or 3D IC.” However, in these advanced packages, controlling heat dissipation becomes imperative.

Moreover, design engineers can’t afford to wait until the assembly is complete to identify and correct errors; doing so can severely disrupt design schedules.

“There is a lot of heat to be managed,” White said. “Otherwise, it can impact transistor behavior in this new multi-physics domain.” He added that thermal impacts could couple with stress impacts arising from new materials, how the dies are stacked, and the placement of through-silicon vias (TSVs) close to active transistors.

Thermal analysis to the rescue

White makes the case for a shift-left approach with Calibre physical verification to help designers get things right the first time instead of close to tape-out. While talking to EDN before the launch of Calibre 3DThermal, he pointed to its key feature, feasibility analysis, which allows chip designers to start the initial analysis with minimal inputs. “Once more information is available, it continuously refines the accuracy of the analysis.”

Figure 2 The shift-left approach enables chip designers to identify and resolve issues early in design flow with signoff-quality solutions. Source: Siemens EDA

John Ferguson, senior director of DRC/3DIC product management for Calibre design solutions, pointed out that chip designers spend years developing complex 3D ICs, and after a thermal signoff, if they find a problem, there is nothing they can do about it. “The idea of feasibility analysis is to start finding potential problems early.”

Chip designers can later perform more detailed analyses considering metallization details and their impact on thermal behavior as more detailed information becomes available. This progressive approach enables designers to refine their analysis, apply fixes such as floorplanning changes, and add stacked vias or TSVs to avoid thermal hotspots and dissipate heat more effectively.

The iterative process continues until the final assembly is complete. Ferguson is quick to note that Calibre 3DThermal is a bit different from traditional thermal analysis. “We have a faster way of performing thermal analysis in which the Calibre part will work upfront to look at the die-level information, create accurate models, and pass that on for creating models at the package level.”

Calibre with multi-use models

Calibre 3DThermal—developed to address the challenges of 3D-IC architectures where controlling heat dissipation is a key requirement—offers fast and accurate approaches to identifying and rapidly addressing complex thermal issues. It allows designers to iterate thermal analysis at whichever design stage they are working on.

Thermal analysis at this advanced level requires a complete understanding of the 3D-IC assembly, so Calibre 3DThermal embeds a custom version of Siemens’ Simcenter Flotherm software solver engine to create precise chiplet-level thermal models for static or dynamic simulation of full 3D-IC assemblies. Next, debugging is streamlined through the traditional Calibre RVE software results viewer.

It’s worth noting that even when you put a known good die (KGD) into a package, you might get heat issues.

“Once you have more dies, you can perform more mature thermal analysis at a much more fine-grained level,” Ferguson said. “When you bring all dies into the package, that’s when you add extra accuracy and then look at selective chiplets or selective IPs in those chiplets.”

Now that chip designers have information at the die and package levels, this information can be passed upstream to the board level or even to a large system level, such as a jet engine design.

Related Content


The post Thermal analysis tool aims to reinvigorate 3D-IC design appeared first on EDN.
