EDN Network

Voice of the Engineer

Forget Tesla coils and check out Marx generators

Fri, 09/23/2022 - 16:52

Low-voltage designs with rails in the single digits get a lot of attention these days, for reasons I don’t need to detail to this audience. Still, there are many situations where rails at hundreds of volts are needed, such as EVs. There are also many important uses for even higher-voltage systems ranging into the thousands of volts, such as physics experiments, safety tests, and even some mass-market consumer products.

I’m always fascinated by the clever ways engineers and scientists have devised to increase a supply voltage by orders of magnitude. You’re undoubtedly familiar with and may have even built a Tesla coil, used for dramatic science demonstrations as well as serious research. (There are many websites showing how to build your own—with suitable safety-related caveats, of course.) As its name implies, the core design uses step-up transformer coils. That’s simple enough in principle, but the “devil” and danger is in the details of the implementation, of course. Another high-voltage scheme is the flyback converter, widely used in CRT-based TVs until they became obsolete, but still used in other applications.

There’s yest another high-voltage topology which is much less known but often used when high-voltage pulses are needed: the Marx generator. It’s not new, as it was first described by Erwin Otto Marx in 1924. Marx generators generate a high-voltage pulse from a low-voltage DC supply, and they are used in high-energy physics experiments, as well as to simulate the effects of lightning on products such as power-line switchgear and aviation equipment.

As with the Tesla coil, the concept is simple, as is the schematic diagram, Figure 1. It operates by charging a number of capacitors in parallel, then quickly connecting them in series. Initially, the capacitors are charged in parallel to voltage VC by a DC power supply through resistors RC. The individual spark gaps are "open" as the voltage VC across them is below their breakdown voltage, thus allowing the capacitors to continue charging. The last spark gap isolates the output of the generator from the load.

Figure 1 The Marx generator schematic diagram shows its simplicity, as a cascaded series of repeated spark-gap, resistor, and capacitor stages. Source: Wikipedia

Once the charged voltage is high enough for the first spark gap to trigger (breakdown), the critical sequential action begins. The short circuit that now exists across the gap puts the first two capacitors in series, so there is a voltage of about 2VC across the second spark gap. The second gap then breaks down, adding the third capacitor to the series stack, and the cascade of breakdowns continues across all of the gaps.

To generate the final spark, the last gap connects the output of the capacitors to the load. In principle, this output voltage is the sum of the voltages across all the capacitors; in practice, it is somewhat less. One of the interesting features about this design is that the voltage across each of the charging resistors is equal to the charging voltage and not the final voltage even as the array charges up; this greatly simplifies component procurement and layout, and also reduces costs.
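To put numbers on the "sum of the voltages" idea, here is a minimal sketch of the ideal, lossless Marx output and stored energy. The stage count, charging voltage, and capacitance are illustrative values, not figures from the article.

```python
# Ideal (lossless) estimate of Marx generator output. In practice the
# delivered voltage is somewhat lower, as noted above. All numbers here
# are illustrative examples only.

def marx_output(n_stages: int, v_charge: float, c_stage: float):
    """Return (ideal erected output voltage, total stored energy in joules)."""
    v_out = n_stages * v_charge                        # capacitors discharge in series
    energy = 0.5 * n_stages * c_stage * v_charge ** 2  # charged in parallel
    return v_out, energy

v_out, energy = marx_output(n_stages=10, v_charge=20e3, c_stage=100e-9)
print(f"Ideal output: {v_out / 1e3:.0f} kV, stored energy: {energy:.0f} J")
```

With ten 100 nF stages charged to 20 kV, the ideal erected voltage is 200 kV with 200 J stored, which is why the article's safety caveats are not optional.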

How much voltage can you generate with this topology? The answer is simple: as much as you want and can afford. It’s used for megavolt-level research-laboratory systems, Figure 2, but you can also generate a few thousand volts from a 1.5 V AA battery.

Figure 2 This megavolt Marx generator is used for testing high-voltage power-transmission components at TU Dresden, Germany. Source: Wikiwand

Looking at the schematic diagram, bill of materials, and construction details, it seems that building a Marx generator is easier than building the popular Tesla coil, since it doesn't require windings or as many high-voltage components, Figure 3. There are many websites showing how to build your own (of course, the usual high-voltage warnings apply).

Figure 3 Compared to the Tesla coil, the Marx generator has a simpler physical construction – but there are still high and dangerous voltages, of course. Source: Electric Stuff – UK

You can also buy one in kit form (the main PC board only, or board plus all components), Figure 4, from Eastern Voltage Research. This unit produces 3 to 4 inches of output arc and 90-kV maximum theoretical output voltage depending on input voltage source, spark gap tuning, and atmospheric conditions. (Note that the company is high-voltage-device agnostic, as they also offer Tesla-coil kits.)

Figure 4 You can also buy a tabletop Marx generator as a ready-to-assemble kit, yielding an output arc up to 4 inches and voltage as high as 90 kV. Source: Eastern Voltage Research

Of course, there are other ways to boost low voltages to much higher ones. Voltage-multiplier circuits (doublers, triplers, and cascades of these) can also reach thousands of volts. Like the Marx generator, these "multiply" the source voltage by charging capacitors in parallel and discharging them in series. One important difference is that voltage multipliers are powered with alternating current and produce a steady DC output voltage, whereas the Marx generator produces a pulse. Also, there is no open "spark" with the multiplier circuit, which makes it more suitable for use in consumer or mass-market products where higher voltages are needed.
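For comparison with the Marx numbers, the textbook no-load output of an N-stage multiplier is 2 × N × Vpeak. The quick sketch below uses an illustrative stage count and line voltage; real circuits sag considerably under load.

```python
import math

# Textbook no-load output of an N-stage Cockcroft-Walton (Greinacher)
# multiplier driven by a sinusoid of RMS voltage v_rms: Vout = 2 * N * Vpeak.
# The stage count and source voltage below are illustrative only.

def cw_no_load_output(n_stages: int, v_rms: float) -> float:
    v_peak = math.sqrt(2.0) * v_rms
    return 2 * n_stages * v_peak

print(f"{cw_no_load_output(3, 230.0):.0f} V")  # 3 stages from a 230 V RMS line, about 1.95 kV
```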

Have you ever built any of these higher-voltage projects, either for personal use or commercial production? If you came from a lower-voltage background, what were the most important lessons you learned about component selection, procurement, or use? What about physical layout? 



The post Forget Tesla coils and check out Marx generators appeared first on EDN.

Tektronix and Anritsu collaborate to build new PCIe 6.0 receiver test

Fri, 09/23/2022 - 16:41

Tektronix, in collaboration with Anritsu, has introduced a PCIe 6.0 receiver test solution that it says renders fast, high-quality measurements. The receiver automation software runs on the Tektronix DPO70000SX series of real-time oscilloscopes and Anritsu’s MP1900A BERT.

The software performs PCIe 6.0 stressed eye calibration at 64 GT/s (PAM4), providing confidence that designs are thoroughly tested at the required bit error rate target. Intuitive step-by-step tools provide link training routines for the MP1900A BERT to ensure the receiver is tested accurately. The PAMJET DSP tool provides critical 64-GT/s jitter and noise measurements with instrument noise compensation.

The receiver test automation software provides a single control panel to manage the Tektronix oscilloscope and Anritsu BERT during receiver calibration. An intuitive software wizard guides users through short- and long-channel calibration steps for accurate and repeatable calibration at 64 GT/s.

The Tektronix PCIe 6.0 receiver test solution is available worldwide for use with DPO70000SX real-time oscilloscopes. For more information and pricing, click here.


Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.



Fabless startup leverages Xpeedic EM simulator

Fri, 09/23/2022 - 16:39

Chipletz has adopted Xpeedic’s Metis EM simulation tool for its upcoming Smart Substrate products, which will bundle multiple ICs in a single package. The fabless substrate vendor looks to bridge the gap between the slowing of Moore’s law and the rising demand for compute performance with its advanced packaging technology.

Metis is a fast EM simulation tool that integrates with both chip and package design tools to address capacity, accuracy, and throughput requirements. Its 3D EM solver covers simulation from DC to high frequencies and cross-scale simulation from nano to centimeter level.

“The Chipletz Smart Substrate products will be a welcome addition to toolkits of designers working on advanced 2.5D and 3D IC packaging,” said Feng Ling, CEO of Xpeedic. “Smart Substrate will facilitate multiple ICs from different vendors in a single package, especially important for the AI workloads, immersive consumer experiences and high-performance computing markets. We’re pleased to have a role in the delivery of this advanced packaging technology.”

“Xpeedic and its Metis EM simulation tool are helping us meet our unique signal and power integrity analysis challenges by delivering unprecedented performance advantages for runtime and memory usage” commented Bryan Black, CEO of Chipletz.

Chipletz is targeting delivery of its initial products to its customers and partners in early 2024.






80 reference designs released for motor commutation sensors

Fri, 09/23/2022 - 16:38

With the Resolver 4.0 catalog from Renesas, engineers can select from 80 market-ready designs based on the company’s IPS2 motor commutation sensors. Each design for the inductive position sensors targets a unique motor shaft or pole-pair configuration and comes with complete design files, measurement reports, tools, and guidelines.

The Resolver 4.0 catalog offers turnkey solutions that can be implemented in a wide range of applications, including automotive systems, robotics, servo motors, home automation, and medical. The designs provide completed schematics, fully wired PCB designs, and Gerber files, along with software stacks, bill of materials, and more. An inductive coil optimization tool allows the design of optimized sensing elements based on experiments with air gap variations and performance simulations, including accuracy and error analysis.

According to Renesas, the IPS2550 and IPS2200 inductive position sensors weigh significantly less than conventional magnet-based sensors and deliver fast rotational speeds to support high-speed motor commutation for passenger cars and motor control for industrial equipment. They also offer stray-field immunity, are less sensitive to noise and vibration, provide better efficiency and accuracy, and are less susceptible to error than magnet-based sensors.

The Resolver 4.0 catalog is free of charge and can be downloaded using the links to the product pages below. The IPS2550 (automotive) and IPS2200 (industrial) sensors are available now in full production.

IPS2550 product page

IPS2200 product page

Renesas Electronics 




ADS improvements for DDR5 simulation

Fri, 09/23/2022 - 16:36

Keysight’s PathWave Advanced Design System (ADS) 2023 for high-speed digital design includes improvements for DDR5 simulation. In addition to DDR5, the new Memory Designer capabilities in PathWave enable modeling and simulation of other interface standards, such as LPDDR5/5x, GDDR6/7, and HBM3.

PathWave ADS 2023 ensures rapid simulation setup and advanced measurements, while providing designers critical insights to overcome signal integrity challenges. Its Memory Designer quickly constructs parameterized memory buses using the new pre-layout builder, allowing designers to explore system trade-offs that reduce design time and derisk product development of memory systems.

Keysight reports that PathWave ADS 2023 for high-speed digital design completes simulations up to 80% faster. It leverages cloud-based high-performance computing using parallel processing to accelerate Memory Designer and EM simulation runtime. PathWave also automates design-to-test workflows with an easy connection between simulation and measurement domains to enable comparison of stored data against measured results from physical prototypes.

To learn more about PathWave ADS 2023 for high-speed digital design, click here.

Keysight Technologies  




How to manage changing IP in an evolving SoC design

Fri, 09/23/2022 - 16:36

In a previous article, Getting started in structured assembly in complex SoC designs, an unexceptional system-on-chip (SoC) design was shown to contain hundreds of intellectual property (IP) blocks. Also, it was demonstrated how connections between these IP blocks may involve hundreds or thousands of ports with multiple tie-off options.

Some IP blocks may come from third-party suppliers, while others are developed internally. The problem is that any of these blocks may experience revision changes throughout the course of design. This is especially true of internally developed IP, which may undergo multiple revisions due to evolving specifications and requirements. Managing these changes as the design evolves can quickly become a nightmare.

Why do things change?

The ancient Greek philosopher Heraclitus of Ephesus (535-475 BC) famously noted: “The only constant in life is change.” When it comes to the IP blocks forming an SoC, the goal is to make the process of change as easy as possible (Figure 1).

Figure 1 The goal is to make change in IP blocks smooth. Source: Arteris IP

In the case of IP from third-party vendors, changes during a particular project are relatively rare. One exception is when the design team detects and reports a bug or other issue, and the vendor responds by generating a new revision of the IP to address the problem. Another scenario is when it becomes necessary to replace an IP from one vendor with an equivalent IP from another vendor, which—among other things—may necessitate changes at the interface.

For internally generated IP, change, especially in the early stages of the design, is the normal modus operandi. In many cases, an IP block starts out as a stub: a black box with port definitions. As the design progresses, more and more detail is added, eventually resulting in the completed IP. Even then, changes to the IP, including modifications to its interfaces, may persist long into the development process.

An SoC design is typically represented as a hierarchical netlist of blocks, with the lowest level being the IP blocks themselves. This netlist is captured in a hardware description language (HDL) such as Verilog or VHDL at the register-transfer level (RTL) of design abstraction.

Traditionally, this netlist has been created by hand using an extensible and customizable text editor like vi or GNU Emacs. Although some die-hard designers still use this approach, the fact that a single IP block instantiation may now involve a thousand connections and span hundreds of lines of code means that it's becoming increasingly untenable.

If the netlist is hand-crafted, it can be hard enough to update a single IP instantiation. The problem is only exacerbated when multiple instantiations of the same block exist. The result is that implementing IP changes by hand is time-consuming and prone to error, increasing risk, degrading productivity, and impacting time to market (TTM).

Managing change in IP

The foundational step to managing change is for the hierarchical RTL netlist to be described in such a way as to facilitate assembly, refinement and update through abstraction and automation. This is achieved by using the IP-XACT standard, which comes in an XML-based format. IP-XACT was originally developed by The SPIRIT Consortium, which subsequently merged with Accellera, and the standard is now supported as IEEE 1685.

Several companies have developed internal, proprietary tools that employ IP-XACT. One approach for managing IP change is to use tools like Magillem IP Deployment Technology. In the case of IP blocks acquired from third-party vendors, these blocks will typically come supplied with a corresponding IP-XACT model. If not, such a model will need to be created. Similarly, in the case of internally generated IP blocks, each block will require an associated IP-XACT model. Unfortunately, the IP-XACT standard is complex and unfamiliar to many. However, the tools can be used to read an existing RTL representation of an IP block and automatically generate the corresponding IP-XACT model. As part of this, users can add control and status registers (CSRs) to legacy IPs.

If members of the design team create the top-level hierarchical netlist by hand, this can be read by Magillem, which can automatically create a hierarchical graphical representation of the design. Alternatively, if the design team starts with a clean slate, the tools can be used to display the collection of IP blocks available. The users can employ a drag-and-drop interface to capture the design hierarchy and place and connect the desired blocks. The tool can then generate a correct-by-construction RTL netlist of the design.

But this is just the start. What makes tools like this powerful is that it’s possible to create scripts associated with individual blocks of IP, hierarchical blocks, and the design as a whole. These scripts, which can be in Python, Tcl (pronounced “tickle”) or Java, may be captured using a regular text editing tool. Alternatively, clicking on a block in the graphical view of the design allows the user to create or edit a script associated with that block (Figure 2).

Figure 2 Scripts can be associated with IP blocks, hierarchical blocks, or the design as a whole. Source: Arteris IP

In conjunction with Magillem’s application programming interface (API), these scripts can be instructed to perform tasks like updating version X.X of an IP block with X.Y. Furthermore, a script can be used to update all instances of the same block. This is similar in concept to a “search and replace” function in an editor, except that the tool also performs appropriate checks, such as ensuring that the ports still match. If any problems are detected, the user is alerted and supported by tools that aid in addressing the issues. An even more powerful feature is that scripts can call other scripts as required.
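As a concrete illustration of the version-update-with-port-check idea, here is a hypothetical Python sketch. The design-model classes and method names below are invented stand-ins for this example only; they are not the actual Magillem API, which is proprietary.

```python
# Hypothetical sketch of the "update version X.X with X.Y" script idea
# described above. These minimal classes are illustrative stand-ins,
# NOT the real Magillem API.

from dataclasses import dataclass, field

@dataclass
class IpInstance:
    path: str          # hierarchical instance path
    name: str          # IP block name
    version: str
    ports: frozenset   # port names for this version

@dataclass
class Design:
    instances: list = field(default_factory=list)

    def find_instances(self, name: str, version: str):
        return [i for i in self.instances if i.name == name and i.version == version]

def bump_ip_version(design: Design, name: str, old_ver: str,
                    new_ver: str, new_ports: frozenset) -> list:
    """Update every matching instance, flagging any whose ports no longer
    match (the kind of check the tool performs before swapping)."""
    problems = []
    for inst in design.find_instances(name, old_ver):
        if inst.ports != new_ports:
            problems.append(f"{inst.path}: port mismatch")  # alert the user
        else:
            inst.version = new_ver
    return problems

# Usage: two instances of the same block, one with a stale port list.
d = Design([
    IpInstance("top.u_dma0", "dma_ctrl", "1.0", frozenset({"clk", "rst", "irq"})),
    IpInstance("top.u_dma1", "dma_ctrl", "1.0", frozenset({"clk", "rst"})),
])
issues = bump_ip_version(d, "dma_ctrl", "1.0", "1.1", frozenset({"clk", "rst", "irq"}))
print(issues)  # the mismatched instance is flagged rather than silently updated
```

The point of the sketch is the ordering: checks run before the swap, and problems are collected for the user rather than applied blindly, which is the "search and replace with appropriate checks" behavior the article describes.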

Simulation and verification

The combination of Magillem’s API and scripting capabilities goes far beyond managing change. In the case of simulation, for example, it is common to have multiple representations of each IP block, from a simple stub to a full-blown implementation with the possibility of one or more limited-function implementations.

Traditionally, selecting between these various representations has been performed using “ifdef” type statements embedded in the netlist to control the compiler. Although this sounds simple, actually employing this approach to define multiple views of a complex SoC can quickly become unwieldy. The alternative is to use scripts to perform tasks like “swap out representation X with Y.”

Similarly, although the focus of this article has been on the management of design IP, these techniques can also be applied to managing associated verification IP.

Adoption of IP-XACT tools

As SoC designs have continued to evolve in size and complexity—with many devices featuring hundreds of IP blocks and tens of thousands of connections—creating and maintaining the hierarchical RTL netlist by hand is no longer tenable.

In addition to speeding the generation of the initial netlist and ensuring a correct-by-construction design, the API and scripting facilities provided by IP-XACT-based tools like Magillem facilitate managing changes to the IP throughout the development process. The aim is to speed development, increase productivity, reduce risk, and decrease time to market.

Ryan Y. Chen is field applications engineering manager at Arteris IP.



Transimpedance amplifier enables 50-Gbps PAM4 5G deployments

Fri, 09/23/2022 - 16:34

Semtech announced the production of the GN1700 linear transimpedance amplifier (TIA) for emerging 50-Gbps PAM4 5G front and mid haul deployments. Intended for use with 50-Gbps SFP56 PAM4 5G wireless optical modules, the GN1700 can be paired with the company’s Tri-Edge GN2255 and GN2256 bidirectional clock data recovery (CDR) ICs with integrated drivers for optimized performance.

Semtech reports that 50-Gbps PAM4 architectures will be used to enable 5G deployments, such as 64T64R massive MIMO-based macro base stations and mmWave small base stations. IC technology that enables optical fiber communication in these deployments is a critical part of the network enabling the wired backbone of this architecture.

“With the production of our FiberEdge GN1700 and Tri-Edge 5G portfolio, including GN2255 and GN2256, our customers can now enable successful pilot qualifications for 50-Gbps PAM4 systems,” said Raza Khan, senior market manager for Semtech’s Signal Integrity Products Group. “This is a great milestone and a testament to Semtech’s on-time execution on our innovative product offerings.”

Datasheets for Semtech’s signal integrity products are available by creating an online account.

GN1700 product page





Sneaky peak: sneaky feedback paths that de-stabilize an otherwise stable feedback loop

Tue, 09/20/2022 - 20:50

I have long held to the belief that electrons are smarter than people. Even the best engineers can fall prey to subtleties that electrons will readily act upon, especially when it comes to finding sneak feedback paths that can really screw up an otherwise stable feedback loop. We look here at a case study.

There was a multiple-output DC power supply whose design was occasionally loop unstable; I was looking for the reason why and seeking a remedy. I set up an injected signal, call it "E-test," as shown in Figure 1, so that by examining E2 with respect to E1, I could look at the gain and phase properties of the feedback loop.

Figure 1 Basic loop gain test plan for E-test.

There was a galvanic isolation barrier in the design, so the test setup was placed as follows (Figure 2):

Figure 2 Loop gain test plan in more detail.

We now look at how the Isolation Barrier Circuit was configured (Figure 3):

Figure 3 The alternating action clamping isolation barrier circuit.

The signal input voltage, E (on the left), gets transferred to the signal output voltage, E (on the right) by having a DC current source that drives the transformer secondary center tap to induce alternate clamping action via the two diodes on the transformer’s primary. We lose some level due to the Vcesat of the two NPN transistors and the forward voltage drops of the four diodes, but a linear transfer function from input to output is very closely achieved. A more detailed circuit is shown in Figure 4.

Figure 4 The alternating action clamping isolation barrier circuit in a bit more detail than Figure 3.

Please take mental note of the 8-volt power supply at the 1N4623 zener diode. We will return to consider the nature of those two parts a little later.

The pair of curves in Figure 5 shows the output of the isolation barrier circuit, and the subsequent output of a PWM control signal, versus the input to the isolation barrier circuit. For the sake of feedback loop control, that is all we need.

Figure 5 The isolation barrier circuit linearity.

Although now superseded, the Hewlett-Packard 4395A Network Analyzer was used for loop testing (Figure 6).

Figure 6 To the left, the HP 4395A network analyzer used for our E-test. To the right, its attachment to the unit under test (UUT).

The 4395A was attached to the UUT via the 1:1 interface transformer shown in Figure 7. The braid of the coaxial cable served as the test transformer’s primary while the center conductor of the cable served as the test transformer’s secondary. The two 100 Ω resistors provide a nearly 50 Ω load for the analyzer’s RF output while the 100 Ω and 10 Ω resistors create a very small E-test in order to keep the power supply’s operating status as close to normal as possible while we do our measurements.
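For a rough feel of how small the injected E-test signal is, one can treat the 100 Ω and 10 Ω resistors as a plain resistive divider. This is a simplification of the actual transformer-coupled injection network, so take the number as an estimate only.

```python
import math

# Rough estimate of the injection attenuation from the 100 ohm / 10 ohm
# pair, treated as a simple resistive divider -- a simplification of the
# transformer-coupled injection network in Figure 7.

def divider_attenuation_db(r_top: float, r_bottom: float) -> float:
    return 20.0 * math.log10(r_bottom / (r_top + r_bottom))

print(f"{divider_attenuation_db(100.0, 10.0):.1f} dB")  # about -20.8 dB
```

Roughly 21 dB of attenuation keeps the injected disturbance small enough that the supply stays near its normal operating point during the measurement.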

Figure 7 The test transformer and its attachment to the HP 4395A network analyzer.

We ran our loop gain tests at various levels of excitation for E-test and got a big surprise.

As the test signal level from the analyzer was reduced from 0 dBm down to -12 dBm, the test results changed (Figure 8).

Figure 8 Loop gain seen for (a) 0 dBm, (b) -3 dBm, (c) -6 dBm, (d) -9 dBm, (e) -10 dBm and (f) -12 dBm excitation from the network analyzer.

While the loop gain roll-off characteristic looked good at first when the network analyzer was set to an output level of 0 dBm, the roll-off characteristic changed dramatically as the excitation level was changed.

The culprit was discovered as follows (Figure 9):

Figure 9 The sneak feedback path.

The 8 volts of power was being derived from the very same inverter that the PWM action was controlling, which created the sneak feedback path shown above. The on-resistance of the zener diode facilitated that path, and I suspect that Rzener varied with the test excitation level, which led to the weird test results.

The zener was replaced with an active IC as follows (Figure 10):

Figure 10 Eliminating the sneak feedback path.

By using the LM136 with its extremely low dynamic resistance and changing one resistor to restore the PNP transistor’s Q-point, the sneak feedback path was eliminated.

Test results became the following (Figure 11):

Figure 11 Loop gain and loop phase with sneak path removed.

With the sneak feedback path broken, the gain-phase results were good and consistent with one another at all levels of test drive.

We had incorrectly assumed the power supply was a linear system. Because of the zener’s behavior, the power supply was really a non-linear system.

Starting from scratch as it were, loop gain and loop phase tests should always be run at varying levels of excitation to see if the test results match each other at every excitation level. If they don’t, you have a non-linearity somewhere that may cause trouble for you and/or for your end user.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).



Class 2 ceramic capacitors—can you trust them?

Tue, 09/20/2022 - 19:14

When Ceramic Capacitors Go Bad – Aging.

Capacitor aging applies to all Class 2 ceramic capacitors, as they are built of ferroelectric materials. C0G types (Class 1) do not exhibit this aging effect because they are built of non-ferroelectric dielectric materials.

All ferroelectric materials age (yes, even ferrite-based magnetic parts age), as do the X7R and other high-density capacitor types.

This aging happens whether the part is in use or just sitting in a bin somewhere. All Class 2 capacitors will lose capacitance over time.

This aging is due to the electric dipoles in the material's structure becoming less random with time, which changes the dielectric constant of the material; this is a reversible process. When the capacitor dielectric material is taken above its Curie temperature of around 125°C, the dipole orientation becomes random again and the capacitor returns to its original value. This is called reforming or de-aging. After reforming, the aging process starts all over again. Even reflow soldering will probably heat the capacitors enough that they reform, as noted on the Johanson Technology FAQ page [1], which states:

“After the soldering process, the capacitors have essentially been De-Aged.”

Johanson Technology suggests that if you are purposely going to de-age your capacitors on a board, you subject them to a 150°C soak for 1.5 hours, just to make sure you get all the capacitors to at least the Curie temperature.

The rate of capacitance aging for X7R types is nominally given as 2.5 to 3% per decade-hour of time since reforming (Figure 1).

This means that after manufacturing, the capacitor loses around 3% of its capacitance for every decade-hour since the capacitor material was last at the material's Curie temperature.

The next most common Class 2 dielectric used in electronics, the X5R, is typically given an aging rate of 3 to 7% depending on the manufacturer, although most manufacturers quote the larger 5 to 7% values. This suggests that just specifying an "X5R type" from several different manufacturers and expecting similar results can lead to very different aging performance.

Figure 1 A typical capacitor aging chart, like the kind found on some manufacturers' data sheets. The upper curve is for a C0G; it is flat because these types do not exhibit the aging phenomenon. The middle curve is for a typical X7R, which may age at around -3% per decade-hour. The lower curve is for an X5R, which is reported to age at anywhere from -3% to -7% per decade-hour depending on which datasheet you look at. It turns out that these curves are inaccurate when the capacitors are biased and in circuit.

If the rate of capacitance change for an X7R capacitor type is 3% per decade-hour, the net capacitance loss, compared to the datum point of 10 hours, will be:

  • at 100 hours, -3%
  • at 1,000 hours, -6%
  • at 10,000 hours, -9%
  • at 100,000 hours (about 11 years), nominally -12%
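This decade-hour arithmetic can be expressed directly. The sketch below is the simple unbiased, room-temperature model: a strict 3%-per-decade-hour rate referenced to a 10-hour datum, which gives -3, -6, -9, and -12% at each successive decade.

```python
import math

# Capacitance loss for a Class 2 part aging at a constant `rate` percent
# per decade-hour, referenced to a 10-hour datum. Simple unbiased,
# room-temperature model; DC bias and temperature accelerate aging.

def aging_loss_pct(hours: float, rate: float = 3.0, datum_hours: float = 10.0) -> float:
    return -rate * math.log10(hours / datum_hours)

for t in (100, 1_000, 10_000, 100_000):
    print(f"{t:>7} h: {aging_loss_pct(t):.0f}%")
```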

This aging is the same basic effect as applying DC bias to the capacitor. More DC bias (field strength) causes more of the electric dipoles in the material to line up, causing a decrease in the dielectric constant of the material.

What About Other Factors?

It was always assumed that the DC bias change and the aging capacitance changes happened independently and were merely additive.

Recently, however, it has been documented by Vishay [2] that adding DC bias to an X7R capacitor can increase the aging rate substantially. Vishay calculated that their capacitors have a nonlinear aging rate when biased to 100% of rated voltage, and they report that some of their competitors' parts may have an even greater rate of aging under DC bias (Figure 2).

Figure 2 Vishay’s study of X7R aging when 100% rated DC bias is included. The upper curve is for a Vishay capacitor and the lower curve(s) are some of the worst performances that they measured. Source: Vishay Vitramon [2]

Figure 2 shows results for a 50 V capacitor biased at 100% of its rated working voltage. In their report, Vishay also measured some 50 V capacitors at 40% bias voltage. There, the aging rate was more linear, but they report that some capacitors still exhibit substantial aging in the first 1000 hours. See the referenced report [2] for all the details.

The Vishay article also looked at the aging recovery with the removal of DC bias and found that this de-aged the capacitors at least somewhat and they recovered at least partially from the lost capacitance. Again, the results, according to Vishay, were highly dependent on the manufacturer tested.

The Vishay study did not present any data past 1000 hours.

Now you might well ask: “What about the effect of aging if I have DC bias and an operating temperature higher than 25°C?”

That is an excellent question. A report published in the Journal of Electroceramics in 2008 [3] seems to show that for X7R types, combining DC bias with increased operating temperature produces yet another increase in the nonlinear aging rate. However, the good news is that this aging rate seems to settle down in the 10,000- to 100,000-hour range at a maximum loss of about -25% compared to the 10-hour datum.

These nonlinear aging rates show a bottoming out with time, which makes sense from a materials perspective. As voltage is applied to a Class 2 capacitor, or as time passes, the material’s electric dipoles become less random. But there is a point where all the dipoles are 100% aligned, whether through applied voltage or time aging; even then there will still be some capacitance, as the material retains some dielectric constant, albeit much reduced.

What you have done by applying DC bias and/or increasing operating temperature is simply to accelerate the aging process.

The Vishay study used the classic 0.1µF, 50V-rated, 0603 size, X7R capacitor for their tests. It is not clear how a newer 2.2µF, 10V-rated, 0603 size, X7R capacitor would perform when similarly tested. These newer, lower-rated voltage, higher capacitance capacitors are what we circuit designers are all using more of, and it seems like more work needs to be done to give us the confidence that we have a handle on what the 10,000- to 100,000-hour capacitance limits might actually be in real-world use cases.


A comparison chart may be built for an X7R capacitor based on available data. Table 1 shows the cumulative effects of DC bias, temperature, and time aging on two capacitors that might be picked for a modern application.

Table 1 Comparison of two X7R, 0603-sized capacitors from manufacturers’ data. Both are assumed to have 5V bias and be operating at 70°C. Even though the initial capacitance of one part was double that of the other, the final results at 100,000 hours are much closer. All data is based on manufacturers’ data sheets; the 100,000-hour aging is estimated.

The first capacitor is a 1µF, 25V, 0603 size, and the second is a 2.2µF, 10V, 0603 size; both are assumed to be biased at 5V and operated at 70°C. The total aging at 100,000 hours is due to normal aging, plus DC bias, plus operating temperature, and is extrapolated to be -25% worst case from references [2] and [3]. Please note: the key word here is “extrapolated”, as I have no data of my own to back this up.

Even this linear multiplicative adding of terms is misleading, as the total drop probably cannot exceed about 80% of the initial capacitance under any circumstances. This is because when all the electric dipoles are 100% lined up, the material will still have some residual dielectric constant. Hence, the situation is more complex than the simple back-of-the-napkin linear calculation that Table 1 shows.
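As a back-of-the-napkin illustration of how such a stack-up is computed, and why it must saturate, the individual loss factors can be multiplied together and then clamped at a residual-dielectric floor. The factor values below are hypothetical, chosen only to show the mechanics (a sketch in Python, assuming the roughly -80% floor discussed in the text):

```python
def remaining_capacitance_uf(c_nominal_uf, loss_factors_pct, floor_pct=-80.0):
    """Apply independent capacitance-loss factors (%) multiplicatively,
    clamped at an assumed residual-dielectric floor (default -80% total)."""
    remaining = 1.0
    for f in loss_factors_pct:
        remaining *= 1.0 + f / 100.0
    remaining = max(remaining, 1.0 + floor_pct / 100.0)  # dipoles fully aligned
    return c_nominal_uf * remaining

# Hypothetical stack for a 2.2 uF X7R: tolerance, temp coefficient, DC bias, aging
print(remaining_capacitance_uf(2.2, [-10, -15, -40, -25]))  # ~0.76 uF left
```

Note that the clamp is what keeps a pessimistic stack of factors from predicting an impossible near-zero capacitance.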

More likely is the situation in Figure 3, which was derived from several manufacturers’ published data on DC bias effects alone. Figure 3 shows what happens to the capacitance as the dielectric material’s dipole alignment increases from 0% (totally random) to 100% (totally aligned), which would represent the absolute worst case of DC bias, operating temperature, and aging combined.

Figure 3 A curve extrapolated from several manufacturers’ published DC bias-versus-capacitance data, showing the likely capacitance change versus X7R dielectric dipole alignment. 0% (left-hand side of the x-axis) is random alignment and 100% (right-hand side) is full alignment, with approximately 80% possible total capacitance loss.


The takeaway from all this for me is:

1) I had severe issues after the “Great Capacitor Shortage” of 2017 with how X7R parts behaved when manufacturers were scrambling to meet orders and substitutions, both known and unknown, were made. I found a worse drop in capacitance with DC bias, among other parametric differences, between capacitor batches produced before and after the shortage took hold, in seemingly identical part numbers.

This makes me leery of trusting decades-old manufacturers’ published information, especially when the technology is changing as rapidly as it is. Even if you do your own reliability studies, you can’t be sure when the next capacitor shortage will change all the formulations again and make it all for naught.

2) The newer information on increased aging rate with DC bias and the elevated operating temperature seems to suggest that at 10 years, the designer might be wise to add another 25% to the expected X7R capacitance drop due to aging + operating temperature + DC bias aging effect. This is in ADDITION to the initial capacitance drop due to tolerance, temperature coefficient, and DC bias alone.

3) This accelerated DC bias + elevated operating temperature capacitance drop suggests that high-temperature, accelerated life testing to at least 1000 hours may help in understanding the true capacitance change expected for longer-lifetime products. Note: you can’t go much above 90°C for fear of de-aging the capacitors while you are testing them.

4) Using low rated voltage, high capacitance X7R capacitors running at high working voltage percentages may be problematic for bulk output filtering of a switching power supply, where the capacitance is used to stabilize the control loop, especially if you have to reach a longer operational lifetime. Test to at least 1000 hours at elevated temperatures or use another tried and true capacitor technology like tantalum or aluminum electrolytic for your bulk capacitance needs.

5) Using low rated voltage, high capacitance X7R capacitors running at high working voltage percentages may be fine for low dropout regulator (LDO) output filtering applications. In these applications, a maximum series resistance value, and perhaps some minimum capacitance value, may be specified, but values within these limits will usually still provide a stable regulator. Check the regulator’s data sheet to verify.

6) Since X7R is the best of the bunch among Class 2 dielectric capacitors, it seems wise to relegate X5Rs to high-frequency bypassing on multi-megahertz digital circuits, where the most important aspect of the capacitor is series inductance rather than any particular capacitance value. Be sure to see Part I of this article and its notes about piezoelectric effects also.

Bonus – Check Those Data Sheets

I looked at the manufacturers’ published capacitance-versus-DC-bias data for two common, 0603-size, X7R capacitor types. The first is the common 0.1µF, 50V type that is used everywhere for decoupling (Bonus Figure 1); the second is a high-density 1µF, 10V type (Bonus Figure 2).

Bonus Figure 1 A comparison of capacitance versus DC bias for three manufacturers’ 0.1µF, 50V, 10%, X7R capacitors.

Bonus Figure 2 A comparison of capacitance versus DC bias for three manufacturers’ 1µF, 10V, 10%, X7R capacitors.

As can be seen, every manufacturer has a different formulation for their X7R dielectric, and it changes with the capacitor’s rated voltage. Keep this in mind when you run into a shortage and pick some other “equivalent” part number; it may not be as equivalent as you think!


[1] Christopher England, Johanson Dielectrics, “Ceramic Capacitor Aging Made Simple” https://www.johansondielectrics.com/ceramic-capacitor-aging-made-simple

[2] Vishay Vitramon, Paul Coppens, Eli Bershadsky, John Rogers, and Brian Ward, “Time-Dependent Capacitance Drift of X7R MLCCs”, Vishay Vitramon, December 2021 https://www.vishay.com/docs/45263/timedepcapdrix7rmlccexptoconstdcbiasvolt.pdf

[3] Tsurumi, T., Shono, M., Kakemoto, H. et al. “Mechanism of capacitance aging under DC-bias field in X7R-MLCCs”, Journal of Electroceramics, Volume 21, 2008. https://link.springer.com/article/10.1007/s10832-007-9071-0

Steve Hageman has been a confirmed “Analog-Crazy” since about the fifth grade. He has had the pleasure of designing op-amps, switched-mode power supplies, gigahertz-sampling oscilloscopes, lock-in amplifiers, radio receivers, RF circuits up to 50 GHz, and test equipment for digital wireless products. He knows that all modern designs can’t be done with Rs, Ls, and Cs, so he dabbles with programming PCs and embedded systems just enough to get the job done.

Related articles:


The post Class 2 ceramic capacitors—can you trust them? appeared first on EDN.

Test challenges in calibrating power for server designs

Tue, 09/20/2022 - 15:45

The global pandemic has accelerated the adoption of emerging semiconductor technologies to meet market demands, enabling companies with superior technology to outperform their competition. More than 50% of companies will need to build new digital businesses to stay economically viable, and recovery from the pandemic will involve permanent changes to many dimensions of an organization, including the pace at which it conducts business, its core value proposition, and its talent.

With digital and technology-driven disruptions creating a winner-takes-all dynamic in an expanding number of industries, only a subset of organizations is likely to thrive. In today’s competitive semiconductor market, where top companies have robust and expansive technology portfolios that are always evolving, a strong technology foundation is critical for success. The time is now for these companies to make bold and innovative investments in advanced technology and digital capabilities.

Effects of pandemic on digital ecosystem

The pandemic has amplified the need for technology growth and has encouraged innovation across the entire digital ecosystem, from big data and artificial intelligence (AI) to cloud computing and the Internet of Things (IoT). Traditional brick-and-mortar retail companies have embraced technology to remain relevant and meet the demands of tech-savvy consumers. Big data has facilitated the digitization of various industries and exponential growth in e-commerce. Additionally, with travel restrictions imposed nearly worldwide, work from home became ubiquitous, resulting in an unanticipated surge in the uptake of cloud gaming and high-performance computing (HPC) as a service.

Figure 1 The above chart shows the compound annual growth rate for server sales. Source: Teradyne

The server market supporting AI hyperscalers is expected to grow by 50% year over year through 2025, while cloud gaming is expected to grow at an astonishing 72% CAGR through 2025. However, super high-performance systems have especially challenging power requirements, clocking up to 10.2 kW per server. Meanwhile, trillion-parameter AI models for tasks such as accurate conversational AI require months to train, even on the emerging class of exascale high-performance computers—computing systems capable of at least 10¹⁸ IEEE 754 double-precision operations per second.

Figure 2 In a typical server architecture, all processing components require power usage. Source: Teradyne

Power management challenges due to higher demand for computing power

As computing power increases, transistor count per die increases in tandem. Although process geometries have shrunk over the years, die size is increasing as transistor counts double every 18 months, so onboard real estate for power management devices decreases to accommodate the larger processors. Consequently, increasing current draw, coupled with decreasing board area for the silicon MOSFETs that supply current to the processors, creates an interesting power management challenge.

Figure 3 The above data highlights the competing demands of rising current requirements and shrinking available design space. Source: Teradyne

An insatiable appetite for higher-power processors, for applications such as AI training servers, drives a substantial increase in MOSFET drivers. To keep heat generation as low as possible and maximize energy efficiency, these devices are designed with low RDSON to deliver hundreds of amps of current to the processors they power. However, high-volume MOSFET drivers with extremely low RDSON, measuring less than 1 mΩ, create challenges for semiconductor test.

Test challenges for achieving precise measurements

Testing semiconductors prior to installation in their final application is critical to ensure devices meet specified requirements for the lifetime of their use. Sustaining a competitive cost of test (COT) while providing complete test coverage requires precision high-power instrumentation that operates accurately and efficiently.

Measuring precision voltages across the MOSFET driver’s 1 mΩ on-resistance requires tens of amps of current to flow through it. High-bandwidth precision power instruments in automated test equipment (ATE) can measure RDSON accurately and efficiently. However, parasitic resistance from the device interface board (DIB) and the device test socket’s contact resistance, which can measure up to 50 times the MOSFET driver’s RDSON, pushes the boundaries of maintaining optimal utilization of the test cell.
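This is why production RDSON measurement relies on four-wire (Kelvin) sensing: the force path carries the high current, while separate, essentially currentless sense lines see only the drop across the device itself. A sketch of the error budget (Python); the 50× parasitic figure comes from the text, while the specific resistance values are illustrative assumptions:

```python
def measured_rdson_mohm(r_dson_mohm, r_parasitic_mohm, kelvin=True):
    """Apparent RDSON when forcing current through the DUT.
    2-wire sensing includes the DIB-trace and socket-contact drop;
    ideal Kelvin (4-wire) sensing excludes it entirely."""
    return r_dson_mohm if kelvin else r_dson_mohm + r_parasitic_mohm

# 1 mOhm DUT with a 50x parasitic path (DIB traces + socket contact resistance)
print(measured_rdson_mohm(1.0, 50.0, kelvin=False))  # 51.0 mOhm -> 5000% error
print(measured_rdson_mohm(1.0, 50.0, kelvin=True))   # 1.0 mOhm
```

Even with Kelvin sensing, real sense lines pick up thermal EMF and coupling errors, which is where the application calibration described below comes in.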

Additionally, high-current pulsing can cause magnetic coupling into adjacent traces, compromising measured values for adjacent sites in high-parallelism solutions. Unfortunately, the industry practice of shielding or closely coupling high-current traces is not viable when dealing with current-induced magnetic coupling. To address this challenge, the high-current traces must be laid out as broadside differential pairs to optimize magnetic field cancelation. Furthermore, pulsing substantial current through high contact resistance generates excessive heat and damages contact pins over time.

Precise RDSON measurements can be achieved by meticulously designing onboard circuitry that supports application calibration to eliminate path and contact resistance. The onboard circuit has a secondary function: ensuring the safe operation of all instruments deployed. Optimal throughput can be achieved by maintaining high equipment efficiency via an ideal test environment.

New power instruments with improved bandwidth, coupled with innovative test techniques, help prolong contact pin lifespan by shortening pulse widths and incorporating an ultra-efficient contact resistance check before each test execution. Increasing the power instrument’s bandwidth delivers faster dI/dt, translating to shorter test times and the possibility of increased site counts, resulting in higher overall throughput. Prolonging the contact pin lifespan also reduces consumables expenses.

Boosting energy efficiency with new materials like GaN

As deep-learning AI becomes more pervasive, the insatiable demand for computing power will continue, and the supporting power management semiconductors will experience intense growth. Meanwhile, the carbon footprint of data centers is attracting attention, and regulatory policies are being enacted to ensure data centers are equipped with energy-efficient equipment. In 2019, data centers consumed about 2% of the world’s electricity, but this number is expected to rise to as much as 8% by 2030.

The efficiency of MOSFETs typically maxes out at 95%. To meet the growing energy consumption of data centers, new materials and processes such as gallium nitride (GaN) are being developed to address the shortcomings of traditional semiconductor materials. With higher efficiency and switching frequencies, GaN power supplies deliver more power than their silicon-based predecessors with a similar footprint. “Turbocharged” GaN power supplies delivering higher power on a similar footprint could increase overall server density by up to 56% on existing racks.

Figure 4 The above table compares material properties of silicon and gallium nitride. Source: Teradyne

Gallium nitride power supplies deliver three benefits compared to silicon-based power supplies. First, existing data centers can increase their data density. Second, more efficient power supplies translate into lower operating costs. Finally, the data center can reduce its CO2 emissions as part of the global goal to achieve net-zero emissions by the year 2050.

The primary industry challenge with GaN transistors is their high dynamic on-resistance, which is difficult to measure when switched at the required high frequencies. Top test equipment manufacturers are working to develop the precision instrumentation required to guarantee GaN RDSON specifications. Soon, GaN will replace silicon as the preferred material technology for delivering power, but until GaN’s hard-switching dynamic on-resistance can be measured consistently and accurately, silicon-based power devices will continue to be utilized.

As demand for high-performance computing applications, often delivered as a service, increases, semiconductor companies must rise to the challenge to remain competitive by adopting new technologies and processes. Advances in power management and new materials like GaN will ensure the technology is able to keep up with the applications driving it. However, with these new technologies come a number of challenges, both for manufacturing and test. Those that can be nimble enough to adapt will find success in these new and emerging markets.

Aik-Moh Ng is a product manager for analog power test products at Teradyne.

Lauren Getz is a product manager for analog power test products at Teradyne.

Related Content


The post Test challenges in calibrating power for server designs appeared first on EDN.

Aukey dash cam teardown redux: this time, the DRA5 gets a look

Mon, 09/19/2022 - 20:26

Back in January of last year, I mentioned that the prior July I’d purchased and subsequently non-destructively tore down Aukey’s DRA1 dash cam. What I didn’t tell you at that time was that earlier that same month, I’d also bought Aukey’s DRA5 dash cam, with the same teardown aspiration in mind. Both products were bought on sale from Amazon; $29.88 plus tax for the DRA1, versus $26.17 plus tax for the DRA5 (ironically, neither product can be found on Amazon any longer; Aukey was one of the many China-based merchants purged in mid-2021). And, although the DRA5’s more diminutive dimensions, translating into a tinier LCD, are presumably the rationale for its slightly lower price, it’s not necessarily a downgrade at the end of the day.

Potentially quite the contrary, actually; the smaller DRA5’s installed presence on the dash board or windshield is less obvious to others than with the bulkier DRA1, and the DRA5 is also easier to rotate in order to capture footage of what’s going on not only in front of you but also behind and to the side of your vehicle (if, for example, you’re pulled over for a traffic stop and want to record the consequent interaction with the police). Plus, independent reviews claim that the DRA5 delivers higher image quality than does its DRA1 sibling; more on that in a minute:

For comparison, here’s that same reviewer’s earlier take on the DRA1:

I’ll start with some “stock” images of the DRA5:

The accessories suite is a topic of some confusion. Reviews I’d read had indicated that the DRA5 includes only permanent adhesive-based mounts, versus the DRA1 which also included a temporary suction mount; reviews of the DRA5 also made a point of noting the suction-option omission. And indeed, as you’ll soon see, that (adhesive-only) was the case with the DRA5 I bought in early July 2020. But both the earlier product and the following accessories “stock” photos show only a suction mount, as is also solely listed in the user manual:

For comparison purposes, before diving into the DRA5, here’s the earlier “stock” photo for the DRA1 from my January 2021 teardown writeup:

In contrast, here’s the latest version, complete with a suction mount, right off Aukey’s website:

And here’s the original suite of accessories, showing both temporary and permanent mount options, along with the DRA1’s dimensions (presumably unchanged):

versus the latest iteration of the accessory suite from Aukey’s website and user manual:

Methinks Aukey has been tweaking its included-mount options for both products over time, whether for cost-reduction reasons, in response to reviewer and customer feedback, or both.

Onward. Here’s a table containing specification excerpts from both products’ user manuals. Note that, as my earlier review mentioned, there’s some dispute as to whether the DRA1 uses (as documented) the GalaxyCore GC2053 2 Mpixel image sensor and/or the lower-end GalaxyCore GC2023; the DRA5 seemingly exclusively relies on the higher-end GC2053:

I’ve already mentioned the image sensor model discrepancy between various Aukey documents (and versions of them), which could at least in part explain the image quality variance between the DRA1 and DRA5 noted in the earlier reviewer video. One other discrepancy I found regards the DRA1 aperture; the user manual claims that it’s f/2.4, while the product page specs it at f/1.8; the DRA5’s aperture is consistently documented at f/2. If the former DRA1 spec is correct, it could further explain the image quality discrepancy between the two dash cams; whereas f/2 would lead to narrower depth of field than f/2.4, it would conversely translate into slightly better light capture (exposure) capabilities, particularly important when using the dash cam after dark. That all said, both dash cams apparently employ the same system (including image) processor, the Novatek NT96658. To wit, the image-related specs for the two dash cams are identical—resolutions, frame rates, and formats—along with recording modes and the like.

Overview…err…over…let’s dive inside the DRA5, shall we? I’ll begin with a shrinkwrap side-of-box shot to show a label no longer present once the wrapper is discarded:

Now…off with that clear plastic cover!

Opening the lid, who wants to bet that that’s the dash cam inside the protective white bag?

Underneath it and the black Styrofoam it’s also nestled in is the documentation-and-accessories assortment, per earlier comments, no suction mount in my particular case: 

Here’s a closeup of the “cigarette lighter” power adapter, revealing its specs:

And now back to that mysterious white bag; hey, I was right!

Time for some pre-dissection shots. Front, with the microphone above the lens and the speakers below it (at least per the user manual; stand by for contrary evidence), as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

One side: once again, Aukey went with a geriatric mini-USB power input. And re the GPS input, it’s an industry-standard 4-pin 3.5 mm female connector. You mate it to an Aukey GM-32 (or third-party equivalent) external GPS antenna-plus-receiver, which’ll set you back another ~$20:

Other side: that’s the microSD memory card slot, supporting capacities up to 128 GBytes (Class 10 or higher write speeds recommended):

Two bottom-side views, showcasing even more passive ventilation slots, along with dubious certification claims:

A top-side peek at the mount locking clip:

And last, but not least, that 1.5” LCD, non-touch-supportive and therefore accompanied by control buttons below it (along with a user-feedback LED in the upper right corner):

Here’s a refreshing change of pace; getting inside required only my fingernails to breach the seam between the two case halves:

Open sesame:

Disconnecting the flex cable between the PCB and display at the PCB end enables an unobstructed view of the LCD backside:

along with our first glimpse of the PCB (stack, as it turned out; keep reading):

Particularly notable is the earlier-mentioned system SoC, along with the switches associated with the four control buttons seen before. Three of the four screws whose removal is necessary to get the PCB out of the half-case are also obvious to the naked eye. And who wants to bet that there’s a fourth screw under that black foam piece in the upper right corner?

I win again!

Unsurprisingly, especially in retrospect (but then again, what isn’t), given the DRA5’s much smaller size than its DRA1 precursor, Aukey went with a two-PCB stack this time around versus squeezing everything onto one circuit board. The approach necessitates two flex cables this time, one (which we’ve already seen) between the processor board and the LCD, and a newly revealed one between the processor board and the image sensor board. Unsnapping another connector…

The other side of the processor board now can be viewed unobscured:

At left are the GPS and power input connectors. At right is the microSD slot. The mic connects to the PCB at lower left, with the speaker connections at upper right; hold that thought. At top is the other end of the flex cable connecting this PCB to the image sensor board. And in-between the flex cable connector and speaker solder points…is that a battery I see? Just like the one in the DRA1? Even though both dash cams were supposedly supercapacitor-based? Hmm…

Discrepancy snark concluded (or not?), let’s look at what was previously attached to the other end of that flex cable:

Remember how I previously mentioned that the user manual said that the microphone was above the lens? Sorry, Aukey, that’s the speaker; the microphone is in the lower right corner. Then again, you did the same transducer switcheroo with the DRA1, so at least you’re consistently wrong. Sigh…discrepancy snark now concluded.

Here’s a closeup of the PCB, clarifying the path forward:

If you look closely, you might be able to tell that the heads of the two screws in the center are slightly bigger than the one in the upper right corner or the ones in the lower corners. Sufficiently loosening them releases their hold on the lens assembly:

And removing the other three screws enables extraction of the board from the case and a look at the image sensor itself:

One of the lens mount screws remains associated with the PCB, as you can see. And look at that weird-shaped bubble of what’s presumably supposed to be the environment (moisture, dust, etc.) barrier translucent adhesive at the lens base-to-PCB junction:

With the PCB removed, the lens can also be extracted out the back of the case:

Here’s the infrared filter at its back end:

And two side views, once again showing evidence of assembly-line focus fine-tuning, subsequently retained via a dab of fast-drying solid-grip glue:

I’ll conclude with some unexpected news. As regular readers may already realize, whenever possible I strive to conduct my teardowns in a non-destructive manner so that I can reassemble my victims and, after confirming ongoing functionality, donate them to charity. Although I was able to accomplish this with the earlier DRA1, I doubted I’d be able to replicate my success this time around, given the multi-PCB and multi-cable added complexity of the DRA5. Nevertheless, I persisted. And after carefully putting Humpty Dumpty back together again:

Woo hoo! Excuse me while I finish typing so that I can pat myself on the back. I’ll hand the keyboard over to you, dear readers, for your thoughts in the comments.

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related articles:


The post Aukey dash cam teardown redux: this time, the DRA5 gets a look appeared first on EDN.

Linearized portable anemometer with thermostated Darlington pair

Mon, 09/19/2022 - 18:20

This self-heated, constant-temperature-delta transistor anemometer is cheap, rugged, and sensitive. It relies on the relationship between air speed (AF) and the thermal impedance (ZT, in °C/W) of a heated air flow sensor, as shown in the formula below for a 2N4401 transistor in a TO-92 package:

ZT = ZJ + 1/(SC + KT√AF)
ZJ = junction-to-case thermal impedance = 44°C/W
SC = still-air case-to-ambient conductivity = 6.4 mW/°C
KT = “King’s Law” thermal diffusion constant = 0.75 mW/°C√fpm
AF = air flow in ft/min

If the transistor junction is held at a constant temperature differential above ambient (e.g., ΔT = 31°C), the power required to do so will be a function of air speed: P = 31/ZT, as shown in Figure 1. Note the annoying non-linearity.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 Power dissipated versus air flow for a TO-92 package held at a constant 31°C above ambient (Pw = 31/ZT).
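Plugging the constants above into the ZT formula reproduces the Figure 1 curve directly. A quick numeric check (Python), with the conductivities converted from mW/°C to W/°C:

```python
import math

Z_J = 44.0      # junction-to-case thermal impedance, deg C/W
S_C = 6.4e-3    # still-air case-to-ambient conductivity, W/deg C
K_T = 0.75e-3   # "King's Law" thermal diffusion constant, W/(deg C * sqrt(fpm))
DT = 31.0       # regulated temperature delta above ambient, deg C

def heater_power_mw(af_fpm):
    """Power (mW) needed to hold the junction DT above ambient at a given air flow."""
    z_t = Z_J + 1.0 / (S_C + K_T * math.sqrt(af_fpm))
    return 1000.0 * DT / z_t

for af in (0, 50, 100, 250):
    print(f"{af:>3} fpm: {heater_power_mw(af):.0f} mW")
```

The compressive non-linearity is visible in the numbers: the first 50 fpm of air flow adds roughly 85 mW of required heater power, while the next 200 fpm adds only about 75 mW more.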

Figure 2 shows a practical portable thermostat circuit to achieve and maintain this delta-T utilizing a Darlington transistor pair (Q1 and Q2) to compensate for ambient temperature and convert the resulting nonlinear Pw curve into a linearized anemometer air flow readout.

Figure 2 Linearized portable Darlington anemometer schematic.

Here’s how it works.

Q1 serves as the self-heated sensor modeled in the Figure 1 math, with Q2 providing ambient temperature compensation. Op-amp A2 runs a feedback loop that forces the Vbe differential between Q1 and Q2 (and thus the temperature differential between Q1 and ambient) to hold a constant 31°C. It does this (with the help of Darlington current gain) by forcing Q1’s current draw (I) through R3 to drive Q1’s power dissipation (Pw) to follow the Figure 1 curve of heat versus air flow. The resulting voltage developed (I·R3) is the basis of the air speed measurement.

Okay so far. But how does compensation for Figure 1’s nonlinearity happen?

Well, it turns out the function of Q1’s Pw vs collector current, I, isn’t linear either. In fact:

Pw = 5V·I – I²R3

That quadratic I² term is very useful. It’s responsible for the lovely curve shown in Figure 3.

Figure 3 Q1 power versus collector current.
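The quadratic comes straight from power bookkeeping on the 5 V rail: Q1 dissipates the supply power 5·I minus what R3 drops, I²·R3. A sketch (Python); note that the R3 value here is an assumption for illustration only, not the schematic’s actual value:

```python
R3 = 10.0  # ohms -- illustrative value only, not from the Figure 2 schematic

def q1_power_w(i_a):
    """Q1 dissipation on a 5 V rail: Pw = 5*I - I**2 * R3 (the Figure 3 parabola)."""
    return 5.0 * i_a - i_a ** 2 * R3

i_peak = 5.0 / (2.0 * R3)  # vertex of the parabola, where dPw/dI = 0
print(i_peak, q1_power_w(i_peak))  # 0.25 A, 0.625 W
# Past this peak, dPw/dI goes negative -- the feedback-inversion
# hazard that Q3 guards against, as described below.
```

The symmetry of the parabola around its vertex is why operation must be confined to the rising (left-hand) side of the curve.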

The 2nd-order curvature of Figure 3 is what compensates for the bend in Figure 1. Although the match isn’t perfect, when inverted, offset, and scaled by op-amp A1, the realized output is a calibrated readout (1V = 100fpm) of air speed that differs from ideal by less than +/- 5% from 0 to 250fpm, as shown in Figure 4.

Figure 4 Darlington anemometer output versus actual airspeed.

The resulting sensitivity to relatively slow air flow is ideal for the measurement of cooling-fan forced-air distribution, air infiltration tracking in HVAC installations, and many similar applications where the achieved measurement accuracy and range are adequate.

Dynamic response to changes in airflow is good with a Q1 forced thermal time constant of about three seconds. Also, solid-state sensor durability is better than that of delicate hot-wire sensors.

A detail of Figure 2 worthy of mention is Q3, which I include to preclude the possibility of the A2 feedback loop getting “stuck” when a transient or other misadventure causes the R3 voltage drop to exceed 2.5 V. This would be a bad thing because the Pw-vs-I curve would go “over the top” and invert the I-vs-Pw feedback term from negative to positive, causing A2’s output to latch with the Darlington saturated, and stay stuck for as long as power is applied.

If saturation approaches, Q3 conducts and forces A2 to limit Darlington drive to a safe level until the transient passes and normal temperature regulation can recover.

Another useful detail is the “upside-down” regulator U1, which not only provides necessary stability for the 5 V power rail, but also “splits” the input power and provides an unregulated, yet still useful, negative rail for the op-amps. This simple but handy trick is described in an earlier Design Idea.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a ways. In all, 64 submissions have been accepted since his first contribution was published in 1974.

Related Content





The post Linearized portable anemometer with thermostated Darlington pair appeared first on EDN.

The ups and downs in the CMOS image sensor market

Mon, 09/19/2022 - 14:30

CMOS image sensors—the unsung hero of the opto-semiconductor market—are staring at what some technology managers in China call a perfect storm amid a slowdown in smartphone shipments and a pause in the increase of embedded cameras being designed in new handsets. According to IC Insights’ August 3Q Update of The McClean Report, the CMOS image sensor market is on track to suffer its first decline in 13 years, with sales expected to fall 7% to $18.6 billion.

It’s worth mentioning that nearly two-thirds of CMOS image sensors are used in mobile phones. An average handset incorporates three cameras: one on the front, facing the user for selfies, and two main cameras on the back of the phone. A high-end smartphone, meanwhile, can feature five or more cameras.

Another factor, as pointed out by some industry observers, has been the U.S. trade bans on China. As a result, Sony, the market leader in CMOS image sensors, has been struggling to match image-resolution requirements for camera phones produced by the leading Chinese system manufacturers in the first half of 2022. However, according to Yole Intelligence, part of Yole Group, the market has now stabilized after a bubble caused by CMOS image sensor stockpiling as a consequence of U.S. sanctions against major China-based companies.

Figure 1 CMOS image sensors are expected to slowly regain growth momentum. Source: IC Insights

At the same time, however, both Yole and IC Insights forecast new growth cycles from smartphone upgrades and other markets such as automotive cameras, medical imaging, and intelligent security networks. IC Insights’ August 3Q Update expects CMOS image sensor sales to rise by a CAGR of 6.0% between 2021 and 2026 to reach $26.9 billion in the final year of the forecast.
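The quoted figures are internally consistent, which is easy to confirm with the CAGR formula applied to only the numbers stated above:

```python
# Cross-check of the IC Insights figures quoted above.
sales_2022 = 18.6                          # $B, forecast after the 7% decline
sales_2021 = sales_2022 / (1 - 0.07)       # implied 2021 base, ~$20.0B

# Stated forecast: 6.0% CAGR from 2021 to $26.9B in 2026 (5 years)
sales_2026 = 26.9
cagr = (sales_2026 / sales_2021) ** (1 / 5) - 1

print(round(sales_2021, 1))   # implied 2021 sales, ~20.0
print(round(cagr * 100, 1))   # ~6.1%, consistent with the stated 6.0% CAGR
```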

The CMOS image sensor’s quest for new growth avenues is apparent from recent announcements. For instance, Sony, which accounted for about 43% of CMOS image sensor sales worldwide in 2021, has recently announced a 1/3-type CMOS image sensor for security cameras with approximately 5.12 megapixels. It simultaneously delivers both full-pixel output of the whole captured image and high-speed output of regions of interest.

Figure 2 The CMOS image sensor for security cameras simultaneously delivers a full-pixel output of captured images and high-speed output of regions of interest. Source: Sony Semiconductor Solutions (SSS)

The new image sensor leverages Dual Speed Streaming technology to output all of the pixels in a captured image at a maximum rate of 40 frames per second while simultaneously outputting specific user-set regions of interest at high speed. As a result, it can provide comprehensive images of scenes and support high-speed recognition of specific objects at a high level of detail.

In the post-Covid design world, even the CMOS image sensor, the ever-reliable growth engine, wasn’t spared the ups and downs. The good news is that it’s now stabilizing while continuing to innovate and seek sockets in new design areas like automotive and security imaging. As Yole puts it, the CMOS image sensor market bottomed out at 2.8% year-on-year growth in 2021 and is ready to start a new growth cycle in 2022.

Related Content


The post The ups and downs in the CMOS image sensor market appeared first on EDN.

Google joins NIST in a bid to democratize chip design

Fri, 09/16/2022 - 16:56

Another attempt to democratize chip design is on the horizon, this time spanning government, industry, and academia. Google has joined hands with the National Institute of Standards and Technology (NIST) to develop and produce chips that researchers at universities as well as engineers at startups will be able to use without restriction or licensing fees.

SkyWater Technology will manufacture these chips at its fab in Bloomington, Minnesota on 200-mm wafers. Google will pay the initial cost of setting up production and subsidize the first production run. And NIST, with its university research partners, will design the circuitry for the chips. NIST’s research partners include the University of Michigan, the University of Maryland, George Washington University, Brown University, and Carnegie Mellon University.

NIST plans to design as many as 40 chips, and researchers will be able to put these open-source chips to use in nanosensors, bioelectronics, and advanced devices needed for artificial intelligence (AI) and quantum computing. The legal framework of this collaboration eliminates licensing fees, which is expected to dramatically bring down the cost of these chips. Otherwise, the cost of designing a chip can run into hundreds of thousands of dollars, posing a major hurdle for university researchers and startup engineers.

NIST developed this chip to measure the performance of memory devices used by AI algorithms.

According to NIST director Laurie E. Locascio, the collaboration was planned before the recent passage of the CHIPS Act, but it now certainly looks like part of the effort to enhance U.S. leadership in the semiconductor industry. It will, for instance, allow design engineers to prototype designs and integrate chips into their production cycles quickly and efficiently.

Though we have seen somewhat similar efforts to democratize semiconductor design in the past, the involvement of Google, a known disruptor in the technology world, makes this one seem more credible. And the momentum built around the CHIPS Act could certainly help this open-source semiconductor endeavor.

NIST will host a virtual workshop on 20-21 September 2022 covering the chip design work to be carried out in collaboration with Google.

Related Content


The post Google joins NIST in a bid to democratize chip design appeared first on EDN.

GaN FETs for 48V DC/DC conversion

Fri, 09/16/2022 - 16:53

EPC expands its portfolio of off-the-shelf GaN FETs in thermally enhanced packages with the introduction of the 100-V, 3.8-mΩ EPC2306. The device is footprint-compatible with the previously released 100-V, 1.8-mΩ EPC2302. Engineers can trade off on-resistance versus price to optimize designs for efficiency or cost by dropping in a different part number in the same PCB footprint.

The EPC2306 enhancement-mode GaN power transistor is intended for 48-V DC/DC conversion in high-density computing, 48-V BLDC motor drives for e-mobility and robotics, solar optimizers and microinverters, and Class D audio applications. In addition to low RDS(on) of 3.8 mΩ, the FET provides low QG, QGD, and QOSS for low conduction and switching losses. Its thermally enhanced QFN package has an exposed top and a footprint of just 3×5 mm.
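The price-versus-RDS(on) trade-off between the two footprint-compatible parts is easy to quantify for conduction loss (P = I² × RDS(on)). The 20 A load current below is an arbitrary example, and switching losses are ignored:

```python
# Illustrative conduction-loss comparison of the two footprint-compatible
# EPC parts mentioned above. The 20 A load current is an example value,
# not a datasheet operating point; switching losses are not modeled.
def conduction_loss_w(i_amps: float, rds_on_ohms: float) -> float:
    """Steady-state conduction loss P = I^2 * Rds_on, in watts."""
    return i_amps ** 2 * rds_on_ohms

I_LOAD = 20.0  # amps, example load
print(round(conduction_loss_w(I_LOAD, 3.8e-3), 2))  # EPC2306 (3.8 mOhm): 1.52 W
print(round(conduction_loss_w(I_LOAD, 1.8e-3), 2))  # EPC2302 (1.8 mOhm): 0.72 W
```

At this example current, the cheaper EPC2306 dissipates roughly twice the conduction loss of the EPC2302, which is the efficiency-versus-cost knob the drop-in footprint compatibility exposes.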

A half-bridge development board featuring the EPC2306 GaN FET simplifies the evaluation process to speed time to market. With a maximum voltage of 100 V and maximum output current of 45 A, the EPC90145 mounts all critical components on a 50.8×50.8-mm board.

Available now from Digi-Key, the EPC2306 GaN FET costs $3.08 in lots of 1000 units, while the EPC90145 development board costs $200.

EPC2306 product page

EPC90145 product page

Efficient Power Conversion

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GaN FETs for 48V DC/DC conversion appeared first on EDN.

Linear redrivers minimize signal integrity issues

Fri, 09/16/2022 - 16:51

Three 1.8-V, 20-Gbps linear redrivers from Diodes allow equalizer and flat gain adjustments to improve signal integrity in USB4 Gen3, Thunderbolt 4.0, and DisplayPort 2.0 connections. The devices offer various settings to mitigate channel loss and extend reach, and they are transparent to channel link training.

Aimed at laptops, desktops, monitors, and docking stations, the PI2DPX2020, PI2DPX2023, and PI2DPX2063 operate in a 4-to-4 configuration and exhibit ultra-low latency of less than 300 ps for better interoperability and data throughput. Auto power-savings modes are built in. Each part is housed in a tiny 32-pin WLGA package with dimensions of 2.85×4.5 mm and operates over a temperature range of -40°C to +85°C.

The PI2DPX2020 redriver provides configurable operating modes for maximum design-in flexibility. These include 20-Gbps/40-Gbps USB4 Gen3 (x1/x2), 20.625-Gbps/41.25-Gbps Thunderbolt 4.0 (x1/x2), 10-Gbps/20-Gbps USB4 Gen2 (x1/x2), 20-Gbps USB4 Gen2 plus 2 lanes of DisplayPort 2.0, and 4 lanes of DisplayPort 2.0.

The 4-lane PI2DPX2023 20-Gbps DisplayPort 2.0 (UHBR20) redriver supports pin-strap control of equalizer and gain parameters, while the PI2DPX2063 20-Gbps DisplayPort 2.0 (UHBR20) redriver achieves the same control via its I2C interface.

In lots of 5000 units, the PI2DPX2020, PI2DPX2023, and PI2DPX2063 cost $2.99, $3.00, and $2.95, respectively.

PI2DPX2020 product page

PI2DPX2023 product page

PI2DPX2063 product page




The post Linear redrivers minimize signal integrity issues appeared first on EDN.

Keysight teams with IBM to advance Open RAN in Europe

Thu, 09/15/2022 - 19:00

Keysight Technologies has signed a memorandum of understanding with IBM to accelerate Open RAN deployments in Europe. IBM aims to employ Keysight’s Open Radio Architect (KORA) test solutions at its Open RAN Center of Excellence (CoE) in Spain to help European mobile operators bring to market applications that meet the architecture standards defined by the O-RAN Alliance.

The goal of the partnership is to integrate Keysight’s software-centric Open RAN test, measurement, and emulation tools with IBM’s Cloud Pak for Network Automation, an AI-powered telecommunications cloud platform for automating network operations. Keysight’s Open RAN solutions enable vendors to verify conformance, interoperability, performance, and security, resulting in the deployment of fully interoperable RAN equipment.

IBM’s CoE intends to use Keysight’s RuSIM radio unit simulator to validate O-RAN distributed units; CoreSIM to verify the performance of Open RAN equipment; and Nemo Wireless Network Solutions to optimize and monitor networks.

“IBM’s hybrid cloud, automation and security solutions are utilized by some of the world’s largest telcos to support their efforts for the next era of communication,” stated Oscar Gonzalez Nogueira, Industry Partner at IBM. “The integration of Keysight’s tools into IBM’s Cloud Pak for Network Automation will further support our ecosystem of CSPs to enhance application and network automation.”

Keysight Technologies




The post Keysight teams with IBM to advance Open RAN in Europe appeared first on EDN.

Image sensor improves in-car safety and comfort

Thu, 09/15/2022 - 18:59

ST’s VD/VB1940 automotive-grade dual image sensor monitors the entire vehicle interior covering both the driver and all passengers. While driver monitoring systems (DMS) promise greater road safety by assessing driver alertness, ST’s sensor can empower applications like child-presence detection, passenger safety-belt checks, vital-sign monitoring, gesture recognition, and video/picture recording.

The VD/VB1940 is a 5.1-Mpixel image sensor with both rolling and global shutter modes. Specifically designed to manage RGB and near-infrared (NIR) operations, the sensor outputs RGB Bayer color images on one side and full-resolution NIR images on the other side. The device captures the high dynamic range (HDR) color images needed for an occupant monitoring system, plus the high-quality NIR images typically captured by standard DMS sensors.

The VD/VB1940 captures up to 60 frames/s at full resolution and is fully configurable through an I2C serial interface. Compliant with ISO 26262 standards and ASIL-B safety levels, the part contains cybersecurity features that prevent hacking.

Samples of the VD1940 (bare die) and VB1940 (BGA package) sensors are available now for model year 2024 vehicles.

VD/VB1940 product page




The post Image sensor improves in-car safety and comfort appeared first on EDN.

Cross-platform tools ease ML development on PSoC 6 MCUs

Thu, 09/15/2022 - 18:58

Users of Edge Impulse’s Studio environment can now access Infineon’s Modus Toolbox for building edge machine learning applications on PSoC 6 MCUs. The collaboration expands the Modus Toolbox MCU configuration software and ecosystem to now include the Edge Impulse cloud platform.

Developers can build and configure applications on the Infineon PSoC 6-based CY8CKIT-062S2-43012 Pioneer Kit coupled with the CY8CKIT-028-SENSE expansion kit, which interfaces accelerometer, gyroscope, magnetometer, microphone, pressure, and temperature sensors. Data from these sensors are used with Edge Impulse Studio to generate TinyML-based AI models, optimized for low-power, low-cloud-cost edge environments. Models can then be deployed on any PSoC 6-based MCU.

“With the performance and extremely low-power design of the PSoC 6, running TinyML models down at the edge becomes even more capable than before. By using Edge Impulse to simplify the barrier to machine learning, product makers can focus on real data they collect from the device to make an innovative and effective product,” said Danny Watson, director and software product marketing manager, Infineon.

For more information on the cross-platform offering, click here.

Edge Impulse 

Infineon Technologies 



The post Cross-platform tools ease ML development on PSoC 6 MCUs appeared first on EDN.

Improved comparators distinguish between A = B = 0 and A = B = 1 states to enable better designs

Tue, 09/13/2022 - 19:00

Traditional digital comparator ICs are electronic analogs of mechanical lever scales. Like their mechanical counterparts, they compare two logical signals and produce an output (typically a voltage level) indicating the relationship of the inputs, i.e., A > B, A < B, and in some cases, A = B.

Wow the engineering world with your unique design: Design Ideas Submission Guide

As useful as they are, these simple comparators have a few problems including:

  1. In order to obtain a visual indication of the comparison, the comparator’s output must be connected to a transistor which drives an LED.
  2. If the comparator is used to monitor the presence of two supply voltages, an error condition occurs if both input voltage sources are switched off. In this instance the digital comparator will indicate a misleading “normal” status, i.e., A = B, even though both supplies are inoperative.

The comparator presented in this DI solves these problems and adds some other useful functionality. It is built from discrete components, achieving maximum functionality with a minimum part count. In addition to providing visual indications (with LEDs) of the standard comparison states (A > B, A < B, and A = B), it also distinguishes the input states A = B = 0 and A = B = 1.

Before we get into the details, let’s review some of the differences between analog and digital comparators.

Analog comparators usually have a configurable switching threshold. If the input signal exceeds this threshold, the comparator switches its output from logic 1 to logic 0, or vice-versa.

Digital comparators compare the logic levels at inputs A and B. These devices can indicate A = B, A > B, or A < B.

Figures 2-4 show schematics of simple general-purpose comparators that use three LEDs (LED1 to LED3) to indicate the relationships between the levels of inputs A and B.

The equivalent scheme of states can be seen in Figure 1.

Figure 1 Basic improved comparator equivalent schemes.

Circuit 1: Comparator Realized using BJTs

The comparator shown in Figure 2 is formed by bipolar transistors VT1, VT2, and the three status indicator LEDs mentioned earlier.

 Figure 2 Comparator with bipolar transistors.

The circuit functions as follows:

If there are no logic-level signals at inputs A and B, both transistors are closed (off), and the current through LED3 flows through resistors R1, R3 and R2, R4. This LED indicates the state A = B = 0.

If a logic “1” is applied to input A and a logic “0” to input B, transistor VT1 opens (turns on), causing LED1 to light and indicating that A > B.

If a logic “1” is applied to input B and a logic “0” to input A, transistor VT2 opens and LED2 illuminates, indicating the condition A < B.

If a logic “1” is applied to both inputs A and B, both transistors VT1 and VT2 will be open, so neither LED will light, indicating the condition A = B = 1.
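The four cases above reduce to a simple truth table. The sketch below is a behavioral model of that description (the function name is mine; it is not a circuit simulation):

```python
# Behavioral model of the Figure 2 comparator's four indication states,
# taken directly from the walkthrough above (not a SPICE simulation).
def comparator_leds(a: int, b: int) -> str:
    """Return which LED (if any) lights for logic inputs a and b."""
    if a == 1 and b == 0:
        return "LED1"   # A > B: VT1 on
    if a == 0 and b == 1:
        return "LED2"   # A < B: VT2 on
    if a == 0 and b == 0:
        return "LED3"   # A = B = 0: both transistors off
    return "none"       # A = B = 1: both transistors on, no LED lights

for a in (0, 1):
    for b in (0, 1):
        print(a, b, comparator_leds(a, b))
```

Note how the two A = B cases, indistinguishable in a conventional comparator, map to two distinct indications here (LED3 lit versus all LEDs dark).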

The comparator shown in Figure 2 has a switching threshold of about 3 V. This circuit has an interesting feature: the switching of the LEDs is not instantaneous, but occurs as a gradual change in their brightness. This characteristic makes this type of comparator convenient for monitoring the level of stereo audio signals. It can also be connected to the outputs of a stereo amplifier and used to drive multi-colored LEDs that add a visual effect to musical compositions.

Circuit 2: Improved Comparator Realized using FETs

The digital comparator built with field-effect transistors, shown in Figure 3, also has a switching threshold of 3 V, which allows it to be used in TTL or CMOS digital devices operating at logic levels from 3 to 15 V, and possibly higher.

If necessary, the switching thresholds of the comparators in both Figure 2 and Figure 3 can be adjusted by changing the values of the input resistive dividers (i.e., R5 & R6, R7 & R8).
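The divider adjustment follows from simple ratio math. In the sketch below, the ~0.6 V transistor turn-on voltage and the R5/R6 values are assumptions for illustration, since the article does not list the actual component values:

```python
# Input-divider threshold sketch for the Figure 2/3 comparators.
# V_ON is an assumed nominal base-emitter turn-on voltage, and the
# resistor values used below are hypothetical examples.
V_ON = 0.6   # volts (assumed transistor turn-on voltage)

def input_threshold(r_top: float, r_bottom: float, v_on: float = V_ON) -> float:
    """Input voltage at which the divider output reaches v_on."""
    return v_on * (r_top + r_bottom) / r_bottom

# A 4:1 divider (e.g., R5 = 12k over R6 = 3k) puts the input switching
# threshold at 0.6 * (12 + 3) / 3 = 3.0 V, matching the ~3 V threshold
# quoted for Figures 2 and 3.
print(input_threshold(12e3, 3e3))  # 3.0
```

Raising the top resistor (or lowering the bottom one) raises the threshold proportionally, which is all the adjustment the text describes.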

Figure 3 Comparator with field-effect transistors.

Circuit 3: Improved Comparator with Adjustable Threshold

The digital comparator shown in Figure 4 is based on an LM393 comparator chip (A1) and has an adjustable threshold that can be smoothly varied from 0 to 20 V using potentiometer R3.

Figure 4 Digital comparator based on the A1 LM393 comparator chip.

Conclusions and Applications

The digital comparators shown in Figures 2-4 fully solve the problem of monitoring two supply voltages because they provide a positive indication of which voltage is missing. The supply voltage for all these comparators is not critical; they can almost always use the application’s existing power supply, provided it is 5 V or higher.

These improved comparator circuits eliminate the “blind spot” in conventional designs which cannot distinguish between both inputs being at “1” or “0”. All three designs described in this DI also feature built-in LED drive capability.

This type of circuit can also be adapted for some other interesting applications beyond monitoring power supplies including:

  • A two-channel logic tester that allows you to visually monitor the presence and level of logic levels at two points of the digital device being monitored or repaired.
  • A circuit for electrically isolated data transmission, when optocouplers (optronic pairs) are used in place of the LEDs, including those with an open optical channel.
  • A safety interlock which will not allow a mechanism to be activated until two (or more) sensors indicate that it is properly configured.
  • The variable threshold comparator (Circuit 3) can be modified for use as a simple analog alarm for temperature, voltage, or other variables.

Related Content


The post Improved comparators distinguish between A = B = 0 and A = B = 1 states to enable better designs appeared first on EDN.