EDN Network

Voice of the Engineer

Tiny board jumpstarts motor-drive design

Thu, 07/25/2024 - 20:35

ST’s motor-drive reference design packs a 3-phase gate driver, an STM32G0 MCU, and a 750-W power stage on a circular PCB just 50 mm in diameter. The small form factor of the EVLDRIVE101-HPD board suits both home and industrial equipment: it fits easily into handheld vacuums and power tools, as well as drones, robots, and industrial drives.

Leveraging the company’s STDRIVE101 3-phase gate driver, the reference design offers a variety of driving techniques for brushless motors, including trapezoidal or field-oriented control, with sensored or sensorless rotor-position detection. The IC contains three half bridges with 600-mA source/sink capability and operates from 5.5 V to 75 V.

The power stage of the EVLDRIVE101-HPD is based on 60-V N-channel power MOSFETs with output current up to 15 A RMS. Their low 1.2-mΩ on-resistance allows operation at very high load currents, enabling power delivery of up to 750 W.
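As a quick sanity check on those numbers (a back-of-the-envelope sketch using only the figures quoted above, not an ST calculation), the conduction loss in a single conducting MOSFET at the rated load current is small:

```python
# Conduction loss per conducting MOSFET, using the article's figures
i_rms = 15.0       # A RMS, rated output current
r_dson = 1.2e-3    # ohms, quoted on-resistance
p_cond = i_rms ** 2 * r_dson   # I^2 * R conduction loss, in watts
```

At roughly 0.27 W of conduction loss per device, the low on-resistance is what lets a board this small deliver 750 W without heroic cooling.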

Developers can use the STM32G0 microcontroller’s single-wire-debug (SWD) interface to interact with it, while support for direct firmware updates enables easy application of bug fixes and new features.

The EVLDRIVE101-HPD motor-control reference design costs $92.

EVLDRIVE101-HPD product page

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Tiny board jumpstarts motor-drive design appeared first on EDN.

PSoC-based eval kit focuses on edge AI

Thu, 07/25/2024 - 20:35

The PSoC 6 AI evaluation kit from Infineon offers essential tools for creating embedded AI and ML system designs for consumer and IoT applications. Powered by a PSoC 6 MCU, the evaluation board executes inferencing next to the sensor data source, providing enhanced real-time performance and power efficiency compared to cloud-centric architectures.

Along with the PSoC 6 MCU, the board provides a barometric air pressure sensor, digital MEMS microphone, radar sensor, 6-axis IMU, and 3-axis magnetometer. It also features a 2.4-GHz Wi-Fi and Bluetooth 5.4 combo module and antenna.

All of these components are mounted on a 35×45-mm board, which is about the size of a cracker. This economical board, with its broad range of sensors and wireless connectivity, enables in-field data collection, easy prototyping, and model evaluation.

The PSoC 6 AI evaluation kit is supported by Infineon’s ModusToolbox and Imagimob Studio. Imagimob Studio allows users to build AI models from scratch, optimize existing models, and access off-the-shelf Ready Models.

The PSoC 6 AI evaluation kit, designated the CY8CKIT-062S2-AI, costs $37.50.

CY8CKIT-062S2-AI product page

Infineon Technologies 




Power amp targets multicarrier transmitters

Thu, 07/25/2024 - 20:34

A broadband power amplifier, the GRF5112 from Guerrilla RF, provides enhanced compression performance over large fractional bandwidths of up to 40%. The device’s broad single-tuned responses enable multicarrier base stations to simultaneously transmit across two or more cellular bands using a single RF lineup.

The GRF5112 GaAs pHEMT amplifier can be tuned over select bands within a frequency range of 30 MHz to 2700 MHz. At a frequency of 1.8 GHz, the amplifier provides a gain of 17.1 dB, OP1dB compression of 32.2 dBm, OIP3 linearity of 40 dBm, and a low noise figure of 1.7 dB when measured on the device’s standard evaluation board. De-embedded noise figure values are approximately 0.2 dB lower.
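To relate the dB-based figures to linear power, a one-line conversion helps (an illustrative helper, not part of Guerrilla RF’s documentation):

```python
def dbm_to_watts(dbm):
    # P(W) = 10^(dBm / 10) milliwatts, divided by 1000 to get watts
    return 10 ** (dbm / 10) / 1000.0

p1db = dbm_to_watts(32.2)   # the quoted OP1dB, roughly 1.66 W
```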

“Building upon the GRF5115 core, this latest iteration streamlines tuning while ensuring consistent performance across process and temperature variations. Our design team has also integrated additional tuning handles within the core to optimize linearity for specific bands and bias conditions,” said Jim Ahne, vice president of marketing at Guerrilla RF.

Like other GRF amplifier cores, the GRF5112 features a flexible biasing architecture that allows customizable tradeoffs between linearity and power consumption. Supply voltages can vary between 1.8 V and 5.25 V.

Prices for the GRF5112 in a 3×3-mm QFN-16 package start at $1.47 in lots of 10,000 units. Samples and evaluation boards are now available.

GRF5112 product page

Guerrilla RF 




EDA tools enable PCIe, UCIe simulation

Thu, 07/25/2024 - 20:34

System Designer for PCIe from Keysight enhances design productivity by supporting simulation workflows compatible with industry standards. The design environment, which is part of the Advanced Design System (ADS) suite, allows engineers to model and simulate PCIe Gen5 and Gen6 systems. Additionally, Keysight has added new features to its Chiplet PHY Designer, a simulation tool that complies with UCIe standards.

System Designer for PCIe automates the setup for multilink, multilane, and multilevel (PAM4) PCIe systems. The design environment also includes the PCIe AMI Model Builder, enabling the creation of models for both transmitters and receivers, and supporting NRZ and PAM4 modulations. A streamlined workflow with simulation-driven virtual compliance testing ensures design quality and reduces design iterations.
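To illustrate the difference between the two modulations, a minimal sketch maps a bit stream onto NRZ (1 bit per symbol) and PAM4 (2 bits per symbol). The Gray-coded level mapping below is a common convention assumed here for illustration; the exact encoding is defined by the PCIe specification.

```python
# Assumed Gray-coded mapping of bit pairs to the four PAM4 levels
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def nrz_symbols(bits):
    # NRZ: one bit per symbol, two levels
    return [+1 if b else -1 for b in bits]

def pam4_symbols(bits):
    # PAM4: two bits per symbol, four levels (half the symbol rate)
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
```

For the same bit stream, PAM4 produces half as many symbols, which is why Gen6 doubles throughput without doubling the channel’s Nyquist frequency.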

Chiplet PHY Designer for UCIe estimates chiplet die-to-die link margin, measures voltage transfer function (VTF) for channel compliance, and analyzes forward clocking. New design exploration and report generation features accelerate signal integrity analysis and compliance verification.

For more information, apply for a free trial, or obtain a price quote, follow the product page links below.

System Designer for PCIe

Chiplet PHY Designer 

Keysight Technologies 




Video interface IP runs on multiple UMC nodes

Thu, 07/25/2024 - 20:34

Faraday’s MIPI D-PHY and V-by-One PHY IP portfolios are now compatible with UMC fabrication processes across nodes from 55 nm to 22 nm. The company’s video interface IP can be used in AIoT, industrial, consumer, and automotive applications, supporting both ASIC and IP business models.

The MIPI D-PHY IP on 22 nm offers a low operating voltage of 0.8 V, achieving a 12% reduction in power consumption and a 10% decrease in chip area compared to its 28-nm predecessor. It provides multiple transmit lanes with data rates ranging from 80 Mbps to 2.5 Gbps per lane. Additionally, the IP accommodates customizable combo I/O for various video receive interfaces and features flexible data and clock lane configurations.

Compatible with the V-by-One HS V1.4 and V1.5 standards for high-speed data transmission, Faraday’s V-by-One HS PHY IP on 22 nm handles data rates from 600 Mbps to 4 Gbps per lane. It cuts power consumption by 20% while operating at 0.8 V and decreases chip area by 30% compared to its 28-nm predecessor. The PHY IP also supports scrambling and clock data recovery.

Faraday’s fabless ASIC design services and silicon IP help customers streamline their R&D efforts and accelerate time-to-market.

Faraday Technology 




Visual overload alert

Thu, 07/25/2024 - 19:16

Despite the title, this DI does not describe a gadget to tell you when to don your shades, but instead features a useful add-on to (analog) audio kit. Built into a mixer, for example, it will show when the output of any stage is approaching clipping, perhaps due to excessive bass or treble boost. Built into a project box as a stand-alone unit, it’s handy during circuit development. It may not show you where the problem is but will show that some stage is in danger of becoming overloaded. It’s shown in Figure 1.

Figure 1 The diodes combine the signals to be monitored, and the comparators check if any of them is close to your chosen limit, either negative or positive. If so, the LED flashes.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The outputs of the circuits to be monitored—as few or many as you choose—are each connected to a pair of the input diodes. The most positive and negative peaks of the inputs, less than a diode drop, then appear across R1. The inputs are not measurably loaded, nor is there any significant interaction between them.

Comparators A1 and A2 check those peak voltages against references determined by R2/3/4, their commoned outputs pulling low when the relevant limits are exceeded. That rapidly discharges C1, turning on Q1 and thus LED1. C1 slowly charges back up through R5 and R6, holding Q1 on while it does so. Q2, an n-channel JFET, is used as a constant-current diode, limiting the LED current to its IDSS or saturation drain current value, which is around 7–8 mA for the 2N5485 shown and largely independent of rail voltages from <9 V to >30 V. Make sure the device can withstand the peak supply voltage, though data sheet values are usually conservative. When built into equipment where the supply is fixed, a suitable resistor can be used instead, but a JFET is best for the stand-alone version, where supplies will vary.

With the values shown, peaks of >~10 µs will be detected, corresponding to a half-cycle at 20 kHz, giving LED flashes of ~20–50 ms duration depending on the supply voltage. If that voltage is great enough to cause breakdown of Q1’s gate-source protection diodes, the flash time will be reduced somewhat as R5 will effectively be partially shorted, but no damage will occur owing to the high resistor values. For a longer flash time, increase R5/R6; increasing C1 will slug the response time. DC levels above or below the relevant limits will turn the LED on continuously.

Only +V and -V power rails are needed, a central ground being unnecessary, so it can freely be used with either single or split supplies up to a total of 30 V or so. Connecting C2 across the supply right by the LED is good practice, though the latter’s current pulses are small. An extra decoupling cap across U1 is not needed.

To allow for different power-supply voltages, input swings, and headroom, it’s only necessary to change R3, which may be found by using the following equation:

R3 = (R2 + R4) / (VSS / (VCLIP × 10^(-h/20) - 2VF) - 1)

where:   

  • R2 = R4 = 10k
  • VSS is the total rail-to-rail supply voltage
  • VCLIP is the pk-pk voltage, at clipping, of the stages being monitored
  • h is the chosen headroom in dB
  • VF is a p-n diode’s typical forward voltage, say 600 mV

A couple of examples: With ±15 V rails, a ±14 V maximum input swing, a choice of 3 dB headroom, and R2 = R4 = 10k, R3 comes out as 32,736 Ω, or 33k. A single 12 V rail, a ±4.5 V input swing, and 2 dB headroom gives 19,663 Ω, or 20k, for R3. (For the stand-alone version, I used a 50k pot plus a 10k resistor to cover all eventualities.)
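The formula drops straight into code; this short sketch (variable names follow the schematic) reproduces both worked examples:

```python
def r3_value(vss, vclip, h_db, r2=10e3, r4=10e3, vf=0.6):
    # R3 = (R2 + R4) / (VSS / (VCLIP * 10^(-h/20) - 2*VF) - 1)
    v_detect = vclip * 10 ** (-h_db / 20) - 2 * vf   # pk-pk swing reaching R1
    return (r2 + r4) / (vss / v_detect - 1)

r3_split = r3_value(vss=30, vclip=28, h_db=3)    # ±15 V rails, ±14 V swing, 3 dB
r3_single = r3_value(vss=12, vclip=9, h_db=2)    # 12 V rail, ±4.5 V swing, 2 dB
```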

Note that the voltage across R2 must be greater than 2 V, or the LM393 will misbehave. While its inputs can sense at or below ground (not a concern here), at least one input of each comparator must be more than 2 V below the positive rail.

While not shown on the schematic, the input lines should use screened leads (the screens being earthed, naturally) preferably with a few hundred ohms at their input ends to isolate the stages being monitored from the leads’ capacitance.

Simple as this circuit is, I have found it helps to give warnings of mismatches between the gains of cascaded audio stages. In use, it will normally be just flickering on musical peaks; if not, you are probably not using your full dynamic range or S/N ratio. If it’s flashing much of the time, that may just be down to Mahler, Wagner, or death metal, but if it’s solid, check for a blown op-amp somewhere! Of course, you may have heard the effects anyway.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content



IoT: GenAI voice helps generate speech recognition models

Thu, 07/25/2024 - 12:05

A new generative AI feature brings voice recognition to tiny devices with a text-to-speech (TTS) synthetic dataset generation capability. It enables developers to generate synthetic speech data with greater precision and tailor voice attributes like pitch, cadence, and tone to meet specific application requirements.

SensiML, a subsidiary of QuickLogic, has incorporated this generative AI feature into Data Studio, its dataset management application for Internet of Things (IoT) edge devices. This new feature will allow embedded device developers to utilize TTS and AI voice generation to rapidly create hyper-realistic synthetic speech datasets that are essential for building robust keyword recognition, voice command, and speaker identification models.

The new TTS and AI voice generation feature enables seamless integration into existing Data Studio workflows. Source: SensiML

This genAI capability aims to eliminate the time-consuming and costly process of manually recording phrases from large populations of diverse speakers and thus accelerate the time-to-market for voice-enabled IoT devices. “Developers can now harness synthetic speech technology to create highly accurate and diverse training datasets, accelerating the deployment of intelligent voice-controlled applications directly on microcontrollers,” said Chris Rogers, CEO of SensiML.

To understand how it works, let’s take the example of a home security system that uses voice commands for activation and status updates. This text-to-speech and AI voice generator feature will allow developers to efficiently create extensive voice datasets, enabling the system to recognize a wide range of user commands accurately.

Moreover, it allows developers to custom-build their own ML code for IoT devices needing to handle complex voice and sound recognition tasks directly on-device without the need for constant connectivity or high computational power. That’s crucial for applications operating in environments where connectivity may be inconsistent and where fast, reliable processing is critical.

Related Content



Take-Back-Half precision diode charge pump

Wed, 07/24/2024 - 17:57

Nearly four decades ago (in his Designs for High Performance Voltage-to-Frequency Converters), famed designer Jim Williams cataloged five fundamental techniques for voltage to frequency conversion. One of those five is reproduced in Figure 1.

Figure 1 Precision charge pump closes feedback loop to make “crude V→F” accurate, from Designs for High Performance Voltage-to-Frequency Converters.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Williams concisely summarizes how this famous topology works: “The DC amplifier controls a relatively crude V→F. This V→F is designed for high speed and wide dynamic range at the expense of linearity and thermal stability. The circuit’s output switches a charge pump whose output, integrated to DC, is compared to the input voltage. The DC amplifier forces V→F operating frequency to be a direct function of input voltage.”

Earlier in “Designs for…”, Williams had presented several terrific VFC designs embodying Figure 1’s concept that utilized a variety of different charge pumps. Two of these were diode types. More examples of Williams’s VFC designs incorporating diode pumps are detailed in his fascinating (and entertaining!) narrative of his creative design process: The Zoo Circuit (Chapter 18).

The success of these and other diode-pump equipped designs proves the utility of diodes in precision applications. However, an inherent challenge in working diode pumps into VFCs is accommodation of the inconvenient fact that no (real) diode is ideal. Diodes incur non-linear and temperature-dependent voltage drop, shunt capacitance, reverse recovery charge, and other “charming” idiosyncrasies. Inspection of any good VFC with a diode pump (including Williams’s excellent designs) will reveal a significant fraction of circuitry and part count dedicated to mitigating these quirks. Figure 2 sketches where some of these errors arise and their effects on pump accuracy.

Figure 2 The realities of a diode pump where errors can arise such as non-linear and temperature-dependent voltage drop, shunt capacitance, reverse recovery charge, and more.

If the diodes in Figure 2’s pump were perfect, then each Vpp cycle of the input frequency would output a dollop of charge Q = -VC, and we’d therefore have Vout = FVCR. But since they’re not, forward voltages (Vd), shunt capacitances (Cs), etc. subtract from the net charge pumped, leaving Q = -(VC - 2Vd(C + Cs)) and making Vout = F(VC - 2Vd(C + Cs))R.
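Plugging representative numbers into these expressions shows how large the error terms can be. All component values here are hypothetical, chosen only to illustrate the equations above:

```python
# Hypothetical pump values: 5 V drive, 1 nF pump cap, 10 pF shunt C,
# 0.6 V diode drop, 10 kHz operating frequency, 100 kohm integrator R
V, C, Cs, Vd, F, R = 5.0, 1e-9, 10e-12, 0.6, 10e3, 100e3

q_ideal = V * C                           # charge per cycle, perfect diodes
q_real = V * C - 2 * Vd * (C + Cs)        # drops and shunt C subtract charge
vout_ideal = F * V * C * R                # 5.0 V
vout_real = F * q_real * R                # about 3.79 V
error_pct = 100 * (1 - q_real / q_ideal)  # roughly 24% low
```

A nominal error of around 24%, with a temperature-dependent drift term buried inside each Vd, makes clear why so much support circuitry normally surrounds a diode pump.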

Traditional circuit tricks for (at least partially) canceling these errors and nulling out (most of) the tempco they introduce (e.g., 2 mV/°C for each Vd) include adding strings of diodes in series with VFC voltage references and calibration trims in input networks. Although they can be made to work, fine-tuning these remedies in a given design can be complex, and none of it is particularly elegant or easy.

Figure 3 shows an approach that’s entirely different from reference tweaking: “Take-Back-Half”, or TBH!

Figure 3 TBH adds a half-amplitude reverse-polarity pump that subtracts error terms.

 TBH adds a new opposite-polarity pump in parallel with the usual diode pair, driven by a 1:2 ratio capacitive voltage divider with the same total capacitance. The result is to generate opposing charge packets that have half the nominal signal amplitude but equal error signal amplitude. Consequently, when the charges are summed, half the desired signal is “taken back” from the net pump output, but all the error goes away.

This leaves only the original, ideal-diode-case output:  Q = -VC and Vout = FVCR.

This verbiage might sound garbled and confusing (I know) but the analog algebra is simple and (I hope) clear. Please see Figure 3.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content



How passive cooling advances electronics sustainability

Wed, 07/24/2024 - 09:16

Choosing appropriate methods of cooling electronics lets engineers, designers, and other professionals prioritize sustainability with solutions that make products last longer and use energy more efficiently.

Passive techniques are popular because they are usually less expensive and more reliable than active ones: with no fans or other moving parts, there is nothing to break. What have researchers explored, and how can their findings improve future electronics designs?

Applying hot spot reducing methods

Graphene is a material known for being extraordinarily strong yet lightweight. Those studying it have also learned it can conduct and dissipate heat efficiently, leading engineers to want to learn more about its capabilities as a passive cooling mechanism in electronics.

One example associated with a European Union-funded project comes from Swedish startup Tenutec, which uses graphene as additives or multilayered films for passive cooling in electronics. The company stands out from others with its sustainable manufacturing method. It enables graphene production with a carbon footprint of only 0.85 kilograms of CO2 equivalents per kilogram. That is several hundred times less carbon-intensive than other well-established methods.

Figure 1 The use of graphene as additives or multilayered films has significant merits in passive cooling. Source: Tenutec

Additionally, its technique enables dispersion of graphene into one to three layers without harmful chemicals. Because the venture’s passive cooling methods eliminate hot spots in electronics, they also improve sustainability by lengthening products’ life spans.

This passive cooling work began during research at Sweden’s Chalmers University of Technology. Researchers developed and improved their graphene production method there, eventually realizing that the current market conditions and consumer demands made the technique marketable.

Regardless of the precise innovations applied, many electronics manufacturers want compact and effective solutions with the accompanying data to prove their worth. Another hot spot-eliminating technology can dissipate heat at levels of 1,000 watts per square centimeter, making it a good solution for devices’ power components.

Whether professionals use graphene sheets or alternatives to keep their devices at the right temperature, potential users will want assurances of effectiveness.

Improving performance of metal-organic frameworks

Numerous improvements in passive cooling options for electronics involve metal-organic frameworks (MOFs)—porous materials that pull water vapor from the air. However, they typically have low thermal conductivity. One research team sought to improve that characteristic by using a water adsorption process to control interfacial heat transfers from contacted surfaces to MOFs.

Figure 2 Metal-organic frameworks (MOFs) are porous materials that pull water vapor from the air. Source: IntechOpen

This group applied simulations and comprehensive measurements during their approach to determine its effectiveness. The results indicated that the water adsorption method made the interfacial thermal conductance approximately 7.1 times better than the MOFs performed without them.

The researchers also concluded that adsorbed water molecules within the MOFs formed dense channels, creating thermal pathways that moved heat away from the hot surfaces. They determined this cooling innovation created a sustainable way to regulate temperatures in electronics and other critical devices while simultaneously expanding possibilities that use MOFs for passive cooling.

Supporting sustainability while keeping electronics cool

Even as consumers use electronics on a daily basis, many are increasingly concerned about the waste generated when those products stop working or get discarded. Similarly, they want manufacturers to offer solutions that work well while reducing environmental burdens.

Passive cooling technologies are central to these demands because electronics must exhibit adequate thermal management capabilities. Overheating can shorten their life spans and endanger users. However, when strategies meet sustainability needs while maintaining effectiveness, consumers and designers reap the benefits.

Ellie Gabel is a freelance writer as well as associate editor at Revolutionized.

Related Content



Waveform generators and their role in IC testing

Tue, 07/23/2024 - 20:33
Introduction

Semiconductors are the essential components fueling the growth of industries such as automotive, renewable energy, communications, information technology, defense, and consumer electronics. Their rise began in the late 1950s, when Jack Kilby and Robert Noyce invented the integrated circuit (IC), which built electronic components and circuits on a common semiconductor base. ICs quickly replaced vacuum tube-based electronic equipment because they were more power efficient, more compact, and more reliable.

Over the past six decades, ICs have advanced significantly and are used across many industries as critical components in numerous products and processes. ICs span many functions, including digital logic circuits, microcontrollers, microprocessors, digital memory, analog circuits and amplifiers, radio frequency (RF) / microwave (MW) analog components and circuits, and integrated power circuits. This article focuses on using waveform generators to test various types of ICs.

IC design and test process flow

IC design and testing are complex processes involving precision and expertise to meet required specifications. Engineers engage in iterations, optimizations, and validations to ensure the final IC achieves the desired performance and reliability. In Figure 1, the process begins with software modeling and simulation based on IC specifications. Subsequently, the design is etched onto a photomask and transferred to a silicon wafer during the wafer foundry stage. After wafer testing, the ICs are packaged and undergo functional testing to ensure they function correctly.

Figure 1 The IC design and test process flow including IC design and simulation, wafer processing, parametric testing, lead frame/wire bonding, package testing, and ending with functional test. Source: Keysight

Wafer-level verification testing

During the design or front-end IC manufacturing stage, the ICs tend to be tested at the wafer level. Testing ICs at the upstream wafer-level process can be challenging, especially when using wafer-probing tools. However, it is necessary because the packaging process is costly and complex. Figure 2 shows wafer probing and testing in progress.

Figure 2 Wafer-level IC probing and testing where basic functional verifications can be performed such as catastrophic shorting, leakage, power supply, and general input / output conditions. Source: Keysight

At the wafer level, you can perform tests for basic function verifications such as catastrophic shorting, leakage, power supply, and general input / output conditions. Signal sources can come from programmable DC power supplies, source and measure units, and general-purpose waveform generators.

During the IC design stage, test engineers can perform noise, DC parametric, and S-parameter characterization work at wafer-level probing tests. This process drastically reduces the time to the first measurement and provides accurate and repeatable device and component characterization.

Package testing

After the ICs are placed on lead frames, wire-bonded to their respective leads and encapsulated, they are in their final physical form. Tests are conducted to ensure that the packaged ICs meet packaging expectations, such as no short circuits, open or weak connections, proper electrical isolation between internal circuits, and more.

Waveform generators provide clean signals and controlled frequency and amplitude noise levels for signal integrity and low-frequency noise tests. Figure 3 shows how waveform generators can provide controlled simulated signals into ICs for an oscilloscope to test signal integrity.

Figure 3 The eye diagram of an IC signal integrity test where waveform generators can provide clean signals and controlled frequency and amplitude noise levels. Source: Keysight

Post-packaging functional testing

Post-packaging functional testing, also known as end-of-line testing, is often complex and tedious. This process is the last testing stage, during which the ICs are extensively tested to ensure they meet specified performance and quality standards before they are shipped to customers.

Waveform generators generate complex variable patterns, real-world signals, and even extreme use-case signals to ensure that all ICs shipped meet the required performance specifications and functionality. Modern waveform generators are versatile in generating all kinds of signals, such as digital, analog, complex modulated, low to high frequency, burst, synchronized, and arbitrary waveform signals for all IC applications.

Preferred waveform generator characteristics

Waveform generators on the market have a wide range of specifications. Testing and characterizing ICs requires stringent specifications. IC design engineers need a source that produces a clean, low-distortion, stable, and reliable signal. The signal generated should not vary regardless of frequency or sample rate. Furthermore, certain waveform generator specifications for IC testing are important.

A clean and stable signal source

A clean signal source provides true and unadulterated signals without noise or interference from foreign signals. Cleanliness is measured by the purity of the signal, which should be free of harmonic distortion and jitter. A clean and stable signal is necessary when testing ICs because engineers want:

  • The best product specification: ICs require precise and accurate signals to characterize and validate their functions and performances. The more errors introduced from the signal source, the more degraded the product specification becomes due to measurement uncertainties.
  • To avoid false test results: A stable signal source creates a consistent test process. Consequently, the test results can accurately characterize the behavior of the ICs. If the signal source is unstable, problems such as false test results affect downstream tests. Shipping the incorrectly characterized product to customers is the worst-case scenario.
  • Repeatable and reliable performance: Clean signals also provide repeatable test conditions that gauge the true performance of ICs, free of unwanted harmonics and noise that would render test results inaccurate. Furthermore, a test can be made more reliable by replacing a real-world signal with a signal created by a waveform generator.
Additive noise

Besides having clean signals to characterize the performance of IC devices, adding noise to test signals simulates real-world noisy transmission, crosstalk, and EMI. Instead of getting the best product performance specifications, adding noise stresses the IC under test and determines the robustness of the products.

Suitable waveform generators can produce variable noise bandwidth to control the frequency content of the test signal. Figure 4 illustrates that this approach enables controlled stress testing of the ICs under test.

Figure 4 Adding controlled noise into a test signal (top image) results in a noisy ECG signal (bottom image). Source: Keysight
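In software, the idea of injecting amplitude- and bandwidth-controlled noise into a clean test signal can be sketched as follows. This is a pure-Python illustration with an assumed moving-average band limit, not Keysight’s implementation:

```python
import math
import random

def band_limited_noise(n, rms, smooth=8, seed=1):
    # White Gaussian noise, low-passed with a moving average to roughly
    # limit its bandwidth, then scaled to an exact RMS amplitude
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n + smooth)]
    filt = [sum(white[i:i + smooth]) / smooth for i in range(n)]
    scale = rms / (sum(x * x for x in filt) / n) ** 0.5
    return [x * scale for x in filt]

def add_noise(signal, snr_db):
    # Scale the noise so the result has the requested signal-to-noise ratio
    sig_rms = (sum(x * x for x in signal) / len(signal)) ** 0.5
    noise_rms = sig_rms / 10 ** (snr_db / 20)
    noise = band_limited_noise(len(signal), noise_rms)
    return [s + n for s, n in zip(signal, noise)]

# Clean 1-kHz test tone sampled at 48 kHz, then stressed at 20 dB SNR
clean = [math.sin(2 * math.pi * 1000 * i / 48000) for i in range(480)]
noisy = add_noise(clean, snr_db=20)
```

Varying the `smooth` parameter trades noise bandwidth against spectral flatness; a real generator exposes equivalent bandwidth and level controls directly.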

Mixed signals

Many applications require mixed-signal ICs, which are essentially ICs with digital and analog circuits built-in and packaged together. Applications that use mixed-signal ICs include analog-to-digital converters, digital-to-analog converters, power management circuits, microcontroller circuits, and physical parameter sensing measurements such as temperature, humidity, and pressure. Waveform generators can simulate both digital and analog signals to test mixed-signal ICs.

Arbitrary waveform signals created by software

Modern waveform generators can generate arbitrary waveforms to simulate real IC test applications. These generators usually come with software applications that create arbitrary waveforms.

Importing simulated or real signals

The most direct method for importing signals is digitizing a real-world test signal using an oscilloscope, saving it in a format that is readable with your software application, digitally manipulating or conditioning the test signal, and then transferring it to a waveform generator to regenerate the signal.
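A minimal sketch of the conditioning step in Python with NumPy (the capture is synthesized in-memory here as a stand-in for a real scope export, and the 4,096-point record length is an arbitrary assumption, not any specific generator's limit):

```python
import numpy as np

# Stand-in for a digitized real-world capture exported from an oscilloscope:
# an oddly sized record of a noisy pulse
rng = np.random.default_rng(0)
t_scope = np.linspace(0.0, 1e-3, 1357)
captured = (0.8 * np.exp(-((t_scope - 4e-4) / 1e-4) ** 2)
            + 0.01 * rng.standard_normal(t_scope.size))

# Condition the capture for an arbitrary waveform generator:
# 1) resample onto the generator's fixed record length
n_awg = 4096
t_awg = np.linspace(t_scope[0], t_scope[-1], n_awg)
resampled = np.interp(t_awg, t_scope, captured)

# 2) normalize to a +/-1 full-scale range before transfer
normalized = resampled / np.max(np.abs(resampled))
```

The `normalized` array is what would then be transferred to the waveform generator for regeneration.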

Another common method is to use waveform builder software to generate custom arbitrary waveforms and combine them into the desired simulated test signal. Some IC design engineers may prefer to generate the waveforms directly in MATLAB or Python and transfer them to the waveform generator. For example, Figure 5 shows a complex waveform plotted in MATLAB. The waveform simulates a section of an electrocardiograph (ECG) heart signal showing part of the PQRST points; in fact, it shows only the RST points, for the purpose of creating a T-wave rejection test waveform. MATLAB can model waveforms using math equations and translate all these points into a complex ECG test signal.

Figure 5 Using math equations, assembling into a simulated cardio ECG test signal in MATLAB. Source: Keysight

Figure 6 shows the output of a cardio ECG test signal generated from MATLAB. The MATLAB software application offers options to send waveform points as a binary block to an arbitrary waveform function generator. The reason for sending a waveform as binary data rather than ASCII data is simple—the binary data is much smaller than the equivalent ASCII data.
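To see why the binary path wins, the following Python sketch packs a simulated waveform both ways and compares payload sizes (the damped sine is a stand-in for the ECG segment, since the article's actual MATLAB model isn't reproduced here):

```python
import numpy as np

# Simulated waveform: 8,000 points of a damped sine
# (a stand-in for the ECG test signal described above)
t = np.linspace(0.0, 1.0, 8_000)
waveform = np.exp(-3 * t) * np.sin(2 * np.pi * 5 * t)

# Binary block: each point as a 4-byte IEEE-754 float
binary_block = waveform.astype(np.float32).tobytes()

# ASCII equivalent: comma-separated decimal text
ascii_block = ",".join(f"{v:.6f}" for v in waveform).encode("ascii")

print(len(binary_block), len(ascii_block))
```

Here the binary payload is 32,000 bytes, while the ASCII string is more than twice that size, which is why instrument software transfers waveform points as binary blocks.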

Figure 6 MATLAB can transfer the above-simulated cardio ECG test signal into a waveform generator. Source: Keysight

These methods enable engineers to create the desired test signals for cataloging and storing in digital waveform libraries. This approach enables consistent and organized testing for many types of IC test applications.

Creating waveforms in playlist test sequences

Most modern waveform generators can play various segments of waveforms in sequence. Design engineers can build a playlist of test sequences with waveforms of incremental changes or good or bad signals to test the IC responses. Depending on your waveform generator’s capabilities, you can combine individual arbitrary waveform segments into user-defined lists or sequences to form longer, more complex waveforms.
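A software model of such a playlist, sketched in Python with NumPy (the segment contents, lengths, and sample rate are invented for illustration):

```python
import numpy as np

fs = 10_000  # samples per second (arbitrary for this sketch)

def segment(freq_hz, cycles, amplitude=1.0):
    """One arbitrary-waveform segment: an integer number of sine cycles."""
    n = int(fs * cycles / freq_hz)
    return amplitude * np.sin(2 * np.pi * freq_hz * np.arange(n) / fs)

# User-defined playlist: a nominal burst, a quiet gap, then a
# deliberately degraded burst to probe the IC's response
playlist = [
    segment(100, 5),        # "good" signal
    np.zeros(fs // 10),     # 100 ms of silence between segments
    segment(100, 5, 0.5),   # "bad" signal at half amplitude
]
sequence = np.concatenate(playlist)
```

On a real generator the segments would be stored individually and chained by the instrument's sequencer; the concatenation here simply models the resulting composite waveform.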

The need for waveform generators

Waveform generators are versatile test instruments essential throughout IC design and manufacturing processes. They can generate all kinds of signals, such as digital, analog, complex modulated, low to high frequency, burst, synchronized, and arbitrary waveform signals, for many types of IC applications.

Designers can take advantage of the powerful capabilities of waveform generators to create clean and stable signals, as well as to control IC stress testing by adding incremental noise content to the test signals. Waveform generators can also produce all kinds of arbitrary waveforms to simulate real IC test applications, which is increasingly critical as ICs get smaller and integrate more complex functions.

Bernard Ang has been with Keysight Technologies (previously Hewlett Packard and Agilent Technologies) for more than 30 years. Bernard held roles in manufacturing test engineering, product engineering, product line management, product development management, product support management, and product marketing. He is currently a product marketer focusing on data acquisition systems, digital multimeters, and education product solutions. Bernard received his Bachelor of Electrical Engineering from Southern Illinois University, Carbondale, Illinois. 

Related Content

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post Waveform generators and their role in IC testing appeared first on EDN.

Rising respins and need for reevaluation of chip design strategies

Tue, 07/23/2024 - 09:37

According to the wisdom of French philosopher Jean-Baptiste Alphonse Karr, “Plus ça change, plus c’est la même chose,” or “The more things change, the more they stay the same.” This adage holds significant relevance in the fast-paced world of the semiconductor industry. Currently, the industry is undergoing a profound technological shift fueled by diverse applications that mandate intricate custom chip designs.

Ground-breaking technologies such as artificial intelligence (AI), autonomous vehicles, edge processing and chiplets are triggering an avalanche of advancements in the semiconductor market. Pioneering technologies are paving the way for high-growth markets, maintaining a competitive edge for products and driving the demand for increasingly sophisticated systems-on-chips (SoCs) to power burgeoning applications.

As a result of design complexity and market competition, innovative chip development strategies have become essential for expedited market entry and revenue growth. Tapping into these technological advances is a strategic imperative to secure market leadership.

 

The established hybrid design landscape

Over the past two decades, OEMs, Tier 1 suppliers and system designers have embraced a hybrid chip design model, predominantly operating independently. These companies frequently resort to customer-owned tooling (COT) for chip design, subsequently engaging with back-end services companies and wafer production management teams.

The COT model necessitates the recruitment of specialized semiconductor engineers from various disciplines for SoC development—a challenging feat due to the scarcity and steep cost of engineers. To address this need, companies often outsource talent to help manage temporary workload peaks and meet specific skillset demands. However, this workaround may not lead to forming a permanent, skilled team.

Large enterprises and startup companies alike must pay closer attention to the severe financial implications of design errors, which can sabotage budgets and delay market entry. In a recent study, a leading EDA firm reveals that over 60% of all first-time designs require a silicon re-spin. With millions of dollars of NRE on the line each time, plus the cost of delayed time to market, the rising complexity in chip design significantly amplifies the risk of errors, making any mistake potentially career-ending.

Figure 1 A 2020 functional verification study conducted by Siemens EDA and Wilson Research Group shows only 32% of 2020’s designs claimed first-silicon success.

Against this backdrop, the tech landscape continues to experience growth from venture capital-backed startups, particularly in the AI realm. These agile companies often utilize the COT model but face similar hurdles in designing distinctive, complex chips for their products. The technical expertise required to create sophisticated SoCs often exceeds their core competencies.

This underscores the need for experienced partners’ guidance throughout the chip design journey. Moreover, these startups frequently cannot source wafers directly from the industry’s leading foundry, TSMC, and instead are routed to a Value Chain Alliance (VCA) partner for mask creation and wafer production management.

These trends are driving a resurgence of ASIC design companies that now focus on “design and supply” services, offering a broad spectrum of technologies for customers to choose from. These firms possess the technical skills to guide customers in making informed selections of third-party IP and comprehend chiplet interconnect requirements, sophisticated SoC power management, 3D packaging, and more.

In short, this minimizes risk with new chip implementations and corresponding financial impacts. So, a new generation of ASIC companies with broad experience and stable engineering teams is emerging, capable of providing solid technology recommendations.

The imperative for a revamped model

Companies can preempt potential setbacks by collaborating with the new generation of ASIC design and supply firms that can manage the entire silicon development process. This necessity is spurring a reevaluation of chip design strategies. The quest for unique differentiation and shorter development cycles is moving companies toward a collaborative relationship with their ASIC design partners.

This shift signals the demand for a new paradigm where companies are seeking alternatives capable of supporting the complete chip ecosystem, from inception to delivery. Adopting an integrated ASIC design and supply model offers significant advantages over traditional ASIC houses and reduces the investment associated with COT models.

An integrated ASIC design and supply model involves cross-functional teams collaborating closely with customers to define the entire semiconductor development and manufacturing process, including packaging, final testing and product lifecycle management.

Today’s SoCs are intricate, multi-billion-transistor devices custom-built for specific applications. The cost of developing such high-end chips can easily exceed $50 million, with the photomask set alone at advanced process nodes ranging from $10 million to $20 million. A collaboration with a technologically advanced, single-source ASIC design house can expedite chip development and help ensure first-time silicon success.

Figure 2 A single-source ASIC design house can expedite chip development and help ensure first-time silicon success. Source: Sondrel

Rich Wawrzyniak, principal analyst for The SHD Group, emphasizes the growing importance of ASIC-class services by stating, “In today’s complex technological landscape, ASIC-class services have become an essential part of the equation for handling advanced semiconductor design implementations.”

In the face of rapidly evolving technologies and the pressure to accelerate time to market, partnering with a single-source ASIC design and supply company appears increasingly beneficial. With its specialization in managing the entire chip development process, such a company can help chip designers architect their future and secure a competitive advantage.

Ian Walsh, Sondrel’s regional VP for America, is based in the company’s U.S. office in Santa Clara, California.

Related Content

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post Rising respins and need for reevaluation of chip design strategies appeared first on EDN.

The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch

Mon, 07/22/2024 - 17:49

I’ve been intending for a while now to share my experiences with the first-generation Google Pixel Watch. And, with the second-generation successor already eight months old as I write these words in mid-June 2024, along with rumors of the third-generation offering already beginning to circulate, I figured it was now or never to actualize that aspiration! The two generations are fairly similar; I’ll point out relevant differences in the paragraphs that follow.

The first-gen Pixel Watch (black frame and black rubberized “active” band version shown above; other color combinations are also offered, along with accessory bands made from other materials) was unveiled at the 2022 Google I/O conference and entered production that same October. Its development was preceded by several key business moves by the company. In January 2019, smartwatch manufacturer (and Google partner) Fossil sold some of its IP to Google and transferred part of its R&D team to the acquiring company, all for $40 million. And that same November, Google announced that it planned to spend $2.1 billion to purchase Fitbit, an acquisition that finally closed in January 2021 after a lengthy U.S. Justice Department evaluation of potential antitrust concerns.

Next up, some personal history. As regular readers may remember, I’ve long been an admittedly oft-frustrated user of smartwatches from Google’s various partners (Huawei, LG and Motorola, to be precise), all based on a common software foundation: Wear OS, or Android Wear under its precursor branding. I eventually bailed on them, instead relying on my long-running, Android smartphone-compatible (in contrast to Apple Watches, for example) Garmin and Withings smartwatches. But in doing so I’d foregone any hands-on testing of the newer Wear OS 3 (currently at v4, with v5 en route), which blended in design elements of the legacy Tizen OS from Google’s new smartwatch partner, Samsung, along with any personal evaluations of newer smartwatch SoCs from Qualcomm and Samsung.

The Wear OS drought ended when, last September, I saw that Google had not only dropped the price tag of the LTE-enhanced version of the Pixel Watch by $60, to $339.99, but was also tossing in two years of free Google Fi-supplied cellular data service.

After using cellular data for ~9 months now, it’s nice to have but not essential, at least for me. Were I regularly wearing the watch while exercising away from my smartphone, for example, I might feel differently. But given that my Pixel 7s are regularly in close proximity, direct Internet connectivity from the smartwatch isn’t a necessity, plus it incrementally impacts battery life whenever the watch is untethered from the phone and not on a known Wi-Fi network.

About that battery life…when I started using the smartwatch, I struggled to squeeze a full day of between-charges wear out of it. Now, thanks to both Google-supplied software updates and my fine-tuning of the power management settings, I can often go for 30 hours or more. And if I were to disable the twist-wrist-to-turn-on-backlight, relying solely on manual watch face taps to wake up the display, I’d likely be able to stretch the battery life even further.

That said, my Garmin watch only loses ~10% of its battery capacity per day; it’ll run for well beyond a week between charges as long as I’m not activating its GPS subsystem (of course).

And my Withings watch? I intentionally took it instead of the Pixel Watch with me to California last month so that I didn’t need to bother packing a charger; its svelte body is also easier than the alternative long-lasting Garmin to tuck underneath a buttoned-down dress shirt sleeve. Upon my return to Colorado five days later, its stored battery charge still reported 100% full.

By the way, you might have noticed something about the Pixel Watch in the more recent (earlier today, in fact) two on-wrist pictures I took of it. I switched from the default “active” band, which quickly started exhibiting visible usage evidence from being removed from and then reattached to my wrist 1x per day (for recharging), to a stretchable Spigen Lite Fit band. Also, although one of the standard watch faces (the cool-looking, IMHO, Concentric) is shown, I sometimes instead toggle to the third-party Pixel Minimal one I purchased, which (in spite of its seemingly contrary name) lets me squeeze even more info into the display: daily step count, heart rate, date, weather, and battery charge. For obvious reasons I’ve already noted, that last one’s important.

A bit more on the battery. The first-generation Pixel Watch leverages wireless charging, akin to that used by Apple’s various Watch models and generations.

This approach is admittedly convenient. But it’s also slow; it takes ~2 hours to fully charge the watch from a drained state, a not-insignificant percentage of the subsequent wear-before-charge-again time. To wit, the Pixel Watch 2 moved to a more traditional, Fitbit-like multi-pin-based connector, notably (from reviews I’ve seen) boosting charging speed in the process.

And the upcoming Pixel Watch 3, per leaked images, will not only be thicker but also come in a larger-face variant. One benefit of the form factor increase is room inside for a larger, higher capacity battery. Plus, as my wife, now with a Christmas-present Apple Watch Ultra replacement for her soon-obsoleted Series 4 (a pending demise I’d forecasted back when I bought the successor for her) says, “go big or go home” (translation: she likes her watches “chunky”).

Another notable difference between the two Pixel Watch generations is that whereas my first-gen model runs on a Samsung Exynos 9110 dual-core processor, the Pixel Watch 2 switches to a quad-core Qualcomm SW5100 SoC. That said, the performance of mine is perfectly acceptable (though I haven’t comparatively tried its successor yet!). Other enhancements with the second-generation model:

  • A switch from a stainless steel to lighter aluminum body
  • An enhanced-function cardiac sensor suite, and
  • New skin temperature and electrodermal activity (EDA) stress sensors

similarly don’t provide sufficient upgrade motivation, at least for me.

In closing, two other oddities of note. For some unknown reason, the Pixel Watch isn’t compatible with the Wear OS app that comes with Android; instead, a dedicated Watch app gets installed as part of the initial pairing process.

Also, I can’t for the life of me get native Google Wallet support working with the watch:

Again, at worst a minor nuisance, since I usually also have a phone with me. Still…🤷‍♂️

What are your thoughts on Google’s branded Wear OS smartwatches, both in comparison to alternatives from other Wear OS licensees and those based on other smartwatch operating systems (including Fitbit’s)? Sound off in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch appeared first on EDN.

Sparse AI MCU facilitates voice processing and cleanup at edge

Fri, 07/19/2024 - 15:38

While major microcontroller (MCU) suppliers like Infineon, Renesas, and STMicroelectronics have been incorporating artificial intelligence (AI) capabilities into their chips to facilitate applications at the edge, two smaller outfits have joined hands to offer a sparse AI MCU. Femtosense integrated its Sparse Processing Unit 001 (SPU-001), a neural processing unit (NPU), with ABOV Semiconductor’s MCU.

The outcome, AI-ADAM-100, is an MCU built on sparse AI technology to enable on-device AI features such as voice-based control in home appliances and other products. It cleans up voice/audio data before it is sent to the cloud, improving reliability and accuracy and reducing the volume of data sent to the cloud. AI-ADAM-100 enables designers to implement voice language interfaces at the edge, even for devices that are not connected to the cloud.

“With sparsity integrated throughout the AI development stack, the AI-ADAM-100 is the first device on the market to fully unlock the advantages of sparse AI,” said Sam Fok, CEO for Femtosense. The sparse AI MCU is being targeted at home appliances as well as small form factor, battery-operated devices such as high-fidelity hearing aids, industrial headsets, and consumer earbuds.

But how does sparse AI work? It reduces the cost of AI inferencing by zeroing out irrelevant portions of an algorithm and then only allocating hardware memory and compute resources to the remaining nonzero, relevant portions of the algorithm. As a result, a system that stores and computes only nonzero weights can deliver up to a 10x improvement in speed, efficiency, and memory footprint.
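The arithmetic behind that claim can be sketched in a few lines of Python with NumPy (the 256×256 matrix size and 90% pruning ratio are illustrative assumptions, not AI-ADAM-100 specifics):

```python
import numpy as np

rng = np.random.default_rng(1)

# A weight matrix in which ~90% of the entries have been pruned to zero
dense_w = rng.standard_normal((256, 256))
dense_w[rng.random((256, 256)) < 0.9] = 0.0

# Sparse storage: keep only the nonzero weights and their coordinates
rows, cols = np.nonzero(dense_w)
values = dense_w[rows, cols]

x = rng.standard_normal(256)

# Sparse matrix-vector product: multiply-accumulate only where weights
# are nonzero, i.e., roughly 10% of the work of the dense product
y_sparse = np.zeros(256)
np.add.at(y_sparse, rows, values * x[cols])

# Same result as the full dense product
y_dense = dense_w @ x
```

Because only the entries in `values` (about 10% of the total) are stored and multiplied, both the memory footprint and the multiply-accumulate count drop roughly tenfold, which is the source of the claimed speed and efficiency gains.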

AI-ADAM-100 enables home appliance manufacturers to implement sophisticated wake-up and control functionality. This allows other system controllers and connectivity modules to drop into sleep mode and consume substantially less power when a user is not interacting with the system. Moreover, ABOV has verified AI-ADAM-100’s voice command recognition performance under multiple noise conditions.

AI-ADAM-100 adds a new flavor to the ongoing marriage of AI and MCUs. It brings language processing and voice cleanup capabilities to an MCU and works with a manufacturer’s own AI models, whether dense or sparse.

Related Content

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post Sparse AI MCU facilitates voice processing and cleanup at edge appeared first on EDN.

IR emitting diode boosts radiant intensity

Fri, 07/19/2024 - 04:05

The TSHF5211, an 890-nm infrared emitting diode from Vishay, delivers a typical radiant intensity of 235 mW/sr at a drive current of 100 mA. According to the manufacturer, this represents a 50% increase over previous-generation devices.

Based on a surface emitter chip, the TSHF5211 offers a temperature coefficient of VF of -1.0 mV/K. It also provides a narrow ±10° half angle of intensity and switching times of 15 ns. These features make the high-intensity emitter well-suited for smoke detectors and industrial sensors, as it enables good spectral matching with silicon photodetectors in these applications.

The TSHF5211 IR emitting diode is housed in a clear, untinted leaded plastic package. Samples and production quantities are available now, with lead times of 20 weeks for large orders.

TSHF5211 product page 

Vishay Intertechnology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post IR emitting diode boosts radiant intensity appeared first on EDN.

Mux/demux ICs switch high-speed signals

Fri, 07/19/2024 - 04:05

Two multiplexer/demultiplexer switches from Toshiba handle high-speed differential signals, such as PCIe 5.0, USB4, and USB4 V.2. The TDS4A212MX and TDS4B212MX, which have different pin assignments, can be used as a 2-input, 1-output multiplexer and a 1-input, 2-output demultiplexer in PCs, server equipment, and mobile devices.

Both of these devices are manufactured on Toshiba’s silicon-on-insulator process to achieve high bandwidth. The TDS4A212MX has a -3-dB bandwidth of 26.2 GHz typical, while the TDS4B212MX achieves a higher -3-dB bandwidth of 27.5 GHz typical. Optimized pin assignments enhance the high-frequency characteristics of the TDS4B212MX. In contrast, the pin assignment for the TDS4A212MX is designed with circuit board layout considerations in mind.

Main specifications for these switches include:

Toshiba is now shipping the TDS4A212MX and TDS4B212MX multiplexer/demultiplexer devices.

TDS4A212MX product page

TDS4B212MX product page

Toshiba Electronic Devices & Storage 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post Mux/demux ICs switch high-speed signals appeared first on EDN.

Arduino beginners’ kit sparks creation

Fri, 07/19/2024 - 04:04

Arduino’s Plug and Make Kit lets beginners, hobbyists, and do-it-yourselfers build an IoT smart device and interact with it. At its core is the UNO R4 WiFi main board, which employs a Renesas RA4M1 Arm Cortex M4 microcontroller and Espressif ESP32 S3 for Wi-Fi and Bluetooth 5 connectivity. Seven starter projects with step-by-step instructions allow users to assemble components without soldering or breadboards and control their device via Arduino Cloud dashboards.

The kit furnishes seven Modulino sensor and actuator nodes, including a buzzer, 6-axis inertial measurement unit, temperature/humidity sensor, buttons, knob, LED strip, and proximity sensor. A Modulino base board provides the project’s physical frame, while Qwiic cables connect the Modulino nodes and the UNO R4 WiFi board. A USB-C cable with a USB-A adapter is also included.

Online resources are available to help integrate projects with the Arduino ecosystem. These include free programming tools, a smartphone app to monitor and control IoT devices, and Arduino Cloud templates to get up and running.

The Plug and Make Kit costs $87. It can be purchased from the Arduino Store, as well as from distributors DigiKey, Farnell, and Mouser Electronics, to name a few.

Plug and Make Kit

Arduino

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post Arduino beginners’ kit sparks creation appeared first on EDN.

AI/ML platform leverages Microchip MPU

Fri, 07/19/2024 - 04:04

Edge Impulse announced that Microchip’s SAMA7G54 MPU is now fully integrated into its platform to enable easy training of ML models. The integration allows developers to build, train, and deploy machine learning models on edge devices powered by the SAMA7G54, particularly in camera-based applications.

Based on a single Arm Cortex-A7 core, the SAMA7G54 microprocessor features imaging and audio subsystems with 12-bit parallel or MIPI-CSI2 camera interfaces. It also offers dual Ethernet options and advanced security functions, including secure boot and hardware crypto accelerators. For edge AI applications, the SAMA7G54 supports camera-based edge machine learning tools like Edge Impulse’s Faster Objects, More Objects (FOMO) algorithm, which brings object detection to highly constrained devices, and image classification models.

Edge Impulse and Microchip expect their collaboration to streamline the creation of AI and ML models for edge hardware, accelerating the adoption of AI at the edge. For more information on integrating the Microchip SAMA7G54 with Edge Impulse, visit the Edge Impulse documentation here and Microchip’s documentation here.

Edge Impulse 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post AI/ML platform leverages Microchip MPU appeared first on EDN.

IC duo forms smart automotive cockpit

Fri, 07/19/2024 - 04:04

Infineon and MediaTek have joined forces to create a digital cockpit system that they say reduces BOM costs for both hardware and software. The solution pairs an Infineon Traveo CYT4DN microcontroller with MediaTek’s entry-level Dimensity Auto SoC.

The Traveo CYT4DN MCU acts as a safety companion to the SoC, ensuring compliance with ASIL-B safety standards for automotive clusters. It monitors the SoC’s content and takes over with reduced functionality in case of an error, while also performing regular companion functions such as vehicle network communication.

This cockpit solution supports a resolution of 1920×720 pixels for both cluster and infotainment displays. The ASIL-B-compliant Traveo MCU drives the cluster, ensuring reliability. By running under the open-source Android OS, the Dimensity Auto SoC simplifies software, reduces software cost, and eliminates the need for a hypervisor or expensive commercial OS. Suppliers and manufacturers can self-maintain and update the software, further reducing expenses.

Infineon and MediaTek anticipate that their IC combination will make digital cockpits affordable for all vehicles, including entry-level models.

Traveo CYT4DN product page 

Dimensity Auto product page

Infineon Technologies

MediaTek

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post IC duo forms smart automotive cockpit appeared first on EDN.

Micropillar surface yields lower-temperature boiling, better heat shedding

Thu, 07/18/2024 - 18:12

System designers spend a lot of their time, mental energy, and effort on heat: its sources, its intensity, and especially how to get it away from sensitive components (a mentor once told me that “away” is that wonderful place where the heat becomes someone else’s problem). Understanding the mechanisms by which excess heat can be channeled and conveyed is an important part of the design plan. Among the many options are heat sinks, pipes, and bridges to draw the heat away locally, as well as active and passive cooling with convection, conduction, fans, and air or liquid fluids.

Now, a multi-university team led by researchers at Virginia Polytechnic Institute and State University (better known as Virginia Tech or VPI) has leveraged a subtle thermal phenomenon called the Leidenfrost effect to lower the temperature (around 230°C) at which water droplets can hover on a bed of their own vapor, and thus accelerate heat transfer. You may have observed this thermal-physics effect without realizing what it is when you sprinkle small drops of water on the surface of a hot pan.

Wait…everyone knows water boils at 100°C under standard conditions, so what’s going on? The Leidenfrost effect occurs because there are two different states of water coexisting. If you could see the water at the droplet level, you would observe that the entire droplet doesn’t boil at the surface, only part of it does. The heat vaporizes the bottom, but the energy doesn’t travel through the entire droplet. The liquid portion above the vapor is receiving less energy because much of it is used to boil the bottom.

That critical hot temperature is well above the 100°C boiling point of water because the heat must be high enough to instantly form a vapor layer. If the temperature is too low, the droplets don’t hover; if it is too high, the heat vaporizes the entire droplet.

That liquid portion remains intact, and this is seen as the levitation and hovering of liquid drops on hot solid surfaces on their own layer of vapor (no, this levitation is not some sort of anti-gravity effect). The phenomenon is named for the German physician Johann Gottlob Leidenfrost, who formally described it in the 18th century.

The Leidenfrost effect has been studied extensively for over 200 years, but the Virginia Tech team was able to bring advanced instrumentation to bear on it, such as a high-speed video camera operating at 10,000 frames per second.

The traditional measurement of the Leidenfrost effect assumes that the heated surface is flat, which causes the heat to hit the water droplets uniformly. The team has found a way to lower the starting point of the effect by using a specially created surface covered with micropillars, thus giving the surface interface new properties.

Their micropillars were 0.08 millimeters tall, arranged in a regular pattern 0.12 millimeters apart, and fabricated on a silicon wafer by means of photolithography and deep reactive ion etching. A single droplet of water encompasses 100 or more of them, as these tiny pillars press into a water droplet, releasing heat into the interior of the droplet and making it boil more quickly, Figure 1.

Figure 1 Leidenfrost-like droplet jumping dynamics on a hot micropillared surface. a) Selected snapshots of Leidenfrost-like droplet jumping on the micropillared substrate ([D, L, H] = [20, 120, 80] μm) with surface temperature 𝑇W = 130°C; the inset is a scanning electron micrograph (SEM) of the micropillared substrate. b) Height variation of the center of mass of the droplet shown in (a). The time 𝑡 = 0 msec denotes the onset of the interfacial deformation. Source: Virginia Polytechnic Institute and State University

Compared to the traditional 230°C trigger point for the Leidenfrost effect, the array of micropillars presses more heat into the water than a flat surface does, causing microdroplets to levitate and jump off the surface within milliseconds at lower temperatures; the speed of boiling can be controlled by changing the height of the pillars. With the pillars, the floating effect started at temperatures as low as 130°C, significantly lower than on a flat surface.

The Leidenfrost effect is more than an intriguing phenomenon to watch; it is also a critical point in heat transfer performance, Figure 2.

Figure 2 Droplet jumping velocity and equivalent thermal boundary layer (TBL) thickness. a) Jumping velocity of droplets with different volumes during vibrational jumping (on substrate [D, L, H] = [20, 120, 20] μm) and Leidenfrost-like jumping (on substrate [D, L, H] = [20, 120, 80] μm). b) Simulated results of the temperature distribution of the quiescent TBL on substrates with micropillar heights H = 20 μm and H = 80 μm, respectively. c) Thickness of the equivalent TBL on substrates with different micropillar heights (from 20 μm to 80 μm) and different substrate temperatures (from 120°C to 140°C). d) Phase map of the occurrence of droplet jumping behaviors on substrates with different micropillar heights placed on a hot plate at different temperatures. Source: Virginia Polytechnic Institute and State University

Another benefit of micropillars is that the generation of vapor bubbles in their presence is able to dislodge microscopic foreign particles from surface roughness and suspend them in the droplet. This means that the boiling bubbles can physically move thermal-blocking impurities away from the surface while removing heat.

There’s a very rough heat-transfer analogy here with “solid state” cooling via standard heat sinks. With a heat sink, it is critical to minimize thermal impedance between the heat source and heat sink. Since even apparently flat surfaces have tiny surface imperfections, any mating between source and sink surfaces will have micro-voids and nearly invisible air pockets which act as micro-insulators and impede heat flow.

The standard solution is to interpose an extremely thin layer of thermal grease or a thermally conductive pad to fill those gaps and provide a thermally continuous, gap-free source-to-sink path, Figure 3. The micropillars play a similar role, using their intrusion into the cooling liquid to improve thermal contact with the liquid to which they are transferring heat.

Figure 3 The use of an interposed thermal grease layer or pad is essential to ensuring minimal thermal impedance between heat source and sink. Source: Taica Corporation/Japan

The team is not using overused words such as “revolutionary” or “breakthrough”; what they have done is look at this effect with a new perspective to see how and if it can be leveraged. If you want to read the full story including relevant intense thermal-physics equations and analysis, check their paper “Low-temperature Leidenfrost-like jumping of sessile droplets on microstructured surfaces” published in Nature Physics. (I had to look that word up, too: “sessile” is an adjective used regularly in some technical disciplines, meaning “attached directly by the base, not raised upon a stalk”.) While that formal paper is behind a paywall, a pre-print version is here; both versions also have links to some short but captivating videos of drops and their motions.

Their deeper insight into the potential modern-day thermal implications of the Leidenfrost effect may not result in any actual advances in cooling techniques and technologies; these sorts of projects usually do not (but sometimes they certainly do have a huge impact). Either way, it’s interesting to see what modern solid-state material-fabrication techniques, coupled with advanced instrumentation, can show us about fairly old physics.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related content


The post Micropillar surface yields lower-temperature boiling, better heat shedding appeared first on EDN.

Hold that peak with a PIC

Wed, 07/17/2024 - 18:57

Capturing transient analog signals with a microcontroller normally involves adding a full-fat peak-hold circuit as an external peripheral. This novel approach minimizes that extra hardware by using a µP’s ability to switch its pins between analog and digital modes on the fly. While this DI specifically uses a PIC, the principle can be applied to any device with that capability.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the basics. We may want to add some complications later.

Figure 1 The basic peak-hold circuit. The PIC pin labelled ANA samples the voltage on C1 and then resets it to ground, ready for the next sample.

A1 and D1 form an “active diode”, which rapidly charges C1 to the peak input voltage through R1 whenever A1’s non-inverting input is higher than the diode’s output voltage and hence that on the inverting input. C1 holds its charge—leakages excepted—as it has no discharge path until the PIC needs to sample it. At that point the ADC is assigned to the relevant input pin (marked as ANA), which starts the acquisition period, during which C1’s charge is shared with the PIC’s internal CHOLD. Once this is done, the conversion can be started, which also immediately disconnects that pin from the ADC, allowing it to be changed from analog input to digital output (active low) to discharge C1, resetting the circuit ready for the next cycle. Thus, a single processor pin performs two functions. Figure 2 shows typical code for the essentials.

Figure 2 Simplified code for capturing the voltage held on C1 and then immediately discharging it to reset the circuit ready for the next sampling cycle.

Now that we’ve got it working, it’s time to point out its shortcomings and suggest some workarounds. 

The voltage across C1 can never be higher than a diode-drop below A1’s VDD, which limits the effective measurement range. (Although a Schottky diode with its lower forward voltage could be used for D1, the higher reverse leakage will compromise accuracy.) If the input must cover the full span, it’s easiest to pot it down first, and either accept a slightly limited resolution on measurements or use a lower reference voltage (2.55 V might be ideal) for the ADC. A1’s VDD can be boosted—see later on—to allow a full positive swing. Similarly, its VSS could be pushed negative if readings needed to be taken very close to ground. Again: see later.

Any input offset in A1 will affect precision. 1 LSB is about 13 mV when using 8 bits with a 3.3 V reference, or ~800 µV with 12 bits, so the allowable offset is half that. (The MCP6021’s offset is quoted as being 500 µV at most.)
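A quick numeric check of those resolution figures (a sketch; the function name is mine, purely for illustration):

```python
# LSB size for an N-bit ADC result with a 3.3 V reference; the allowable
# op-amp offset is half an LSB, as noted above.
VREF = 3.3

def lsb_volts(bits, vref=VREF):
    """Voltage span of one LSB for the given resolution and reference."""
    return vref / 2**bits

for bits in (8, 12):
    v = lsb_volts(bits)
    print(f"{bits} bits: 1 LSB = {v*1e3:.2f} mV, max offset = {v/2*1e3:.2f} mV")
```

At 8 bits this gives ~12.9 mV per LSB and ~6.4 mV of allowable offset, comfortably above the MCP6021’s 500-µV worst-case figure; at 12 bits the ~400-µV allowance is uncomfortably close to it.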

Note that while C1’s voltage will be measured with respect to the PIC’s AVSS—or perhaps its VREF- pin—it will be discharged to DVSS. (The lower pin-count devices combine AVSS and DVSS on a single ground pin.) Be cautious of any relative offset between them if accuracy is paramount at low input levels. Microcontrollers are often put to sleep during analog measurements to minimize such errors, which can vary according to how hard the device is working.

A more subtle source of errors is inherent in the ADC’s operation. Internally, it uses a small capacitor (CHOLD), anywhere from 10 pF to 120 pF depending on the device’s vintage, to hold the input for processing. The charge on the external capacitor C1 is shared with the internal one during the acquisition time, so unless the ADC is actually connected to the pin when the input pulse arrives, it will read low, scaled by C1 / (C1 + CHOLD). With C1 = 10 nF and if the ADC’s CHOLD = 10 pF, as in the more modern PICs, the error will be ~1 LSB for a 10-bit result, but negligible for 8 bits; lower values of C1 will lead to greater errors.
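That charge-sharing error is easy to evaluate (a sketch; the helper name is mine):

```python
# Reading error, in LSBs, caused by C1 sharing its charge with the ADC's
# internal CHOLD: the reading is scaled by C1 / (C1 + CHOLD).
def share_error_lsbs(c1, chold, bits):
    """Worst-case low-reading error, in LSBs, from charge sharing."""
    attenuation = c1 / (c1 + chold)
    return (1.0 - attenuation) * 2**bits

# C1 = 10 nF with a modern PIC's CHOLD = 10 pF:
print(share_error_lsbs(10e-9, 10e-12, 10))  # ~1 LSB at 10 bits
print(share_error_lsbs(10e-9, 10e-12, 8))   # negligible at 8 bits
print(share_error_lsbs(1e-9, 10e-12, 10))   # smaller C1: ~10x the error
```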

If that input pulse is shorter than the reset period and arrives while the pin is being held low, it will be attenuated and effectively lost. (And make sure that A1’s decoupling cap can source the inevitable power transient.) Adding an extra MOSFET (extra GPIO pin required, as shown in Figure 3, below) allows ‘instant’ resetting (or around a thousand times faster, probably within a single instruction cycle), and to a genuine ground rather than the PIC’s internal one. (The ADC’s pin would then be left in analog mode.) A cure in ultra-critical situations might be to duplicate the hold circuitry on another pin and sample each channel alternately, selecting the higher reading in code.

In my original application, which was measuring the strength of RF signal bursts, none of these points was a problem, as the input was always between 0.2 and 2.5 V and lasted for hundreds of microseconds, while the output was scaled to read from 0 to 9.

Despite these reservations, this open-loop approach can be faster than the standard configuration which wraps an op-amp round the capacitor. Because C1 is driven directly, the rise-time of the input pulse can now be as fast as you like. A1’s output may overshoot momentarily, but the glitch will be absorbed by the longer time-constant of R1C1.

For accuracy, R1 should be chosen so that the op-amp’s output drive never exceeds its current-limit value, as that would break the feedback loop, resulting in overshoot and a falsely high reading. Also, for clean operation, time-constant R1C1 should be no less than A1’s rail-to-rail slewing time. The 10n + 47R (470 ns is about the same as the measured slew) allowed for accurate measurements of 2.5 V pulses as short as ~3 µs. Experiments showed that R1 could be reduced to 27R, giving a -10% error for 1 µs / 2.5 V input pulses.

C1’s discharge time to half an LSB will be ~1.6 × (NumberOfBits + 1) × C1 × ROUT(LOW), where the latter term will typically be ~100 Ω for PICs working at 3.3 V. (That “~1.6” is of course 1 / (1 – 1 / e).) For 8 bits, 10 nF, and 100 Ω, that’s about 14 µs, which can be reduced if you don’t need to measure right down to ground. (Some PICs can struggle there, anyway, especially if they use an internal op-amp in the ADC’s input path.) Choosing to cancel the reset and re-enable the analog input as soon as the A–D conversion finished, which took ~20 µs in my implementation, was more than adequate and simplified the code.
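Plugging the numbers into that rule of thumb (a sketch; the function name is mine):

```python
# Estimated time to discharge C1 to within half an LSB through the pin's
# output resistance, per the formula above: ~1.6 x (N + 1) x C1 x ROUT.
def discharge_time_s(bits, c1, rout_low):
    """Approximate C1 reset time to half an LSB, in seconds."""
    return 1.6 * (bits + 1) * c1 * rout_low

# 8 bits, C1 = 10 nF, ROUT(LOW) = 100 ohms:
print(discharge_time_s(8, 10e-9, 100) * 1e6)  # ~14.4 us, i.e., "about 14 us"
```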

A1 is shown as a Microchip MCP6021 (CMOS, RRIO, 2.5–5.5 V, 10 MHz GBW, <500 µV offset). The MCP6001 is cheaper but less well-specified. As an aside, the dual MCP6022 is great for 5 V experimenting and prototyping because it is available in DIP-8.

As drawn in Figure 1, A1 can be fed from a GPIO pin, allowing it to be powered down when the PIC is asleep. This obviously limits its VDD to the PIC’s supply voltage, restricting the input range as noted above. If you need the full range and a higher switched rail is available, use that; if not, a simple voltage-doubler, probably fed from a PWM output, provides a fix.

The MCP6021’s output drives low to within ~5 mV of its VSS (<1/2 LSB with 8 bits). To operate right down to ground, another voltage-doubler can provide a boosted negative feed, with a simple regulator reducing this to -0.6 V for low-voltage op-amps. Make sure that the total voltage across A1 is within its limits; an extra diode in the positive doubler—D6 in Figure 3—may be needed to guarantee this. All these add-ons are lumped together in Figure 3. PICs’ pin-protection diodes are rated at 25 mA and should be safe with the increased voltages under any fault conditions. While these simple PIC-driven voltage-doublers are only good for a few milliamps, they could help power other devices if need be.

All this raises a reality-checking question: what’s powering the upstream circuitry, and is it really delivering a rail-to-rail signal? If not, we don’t need to fuss.

Figure 3 Boosting the op-amp’s supply rails can give true rail-to-rail operation while an extra MOSFET allows “instantaneous” resetting of C1.

Another reality check: if both boosted rails are available, why not use a higher-voltage, non-RRIO op-amp? The negative regulator Q2/3, etc. then becomes unnecessary. The extra complications shown in Figure 3 probably won’t be needed here anyway but may come in handy elsewhere.

Largely because of a PIC’s limitations, the simple circuit of Figure 1 is accurate rather than absolutely precise, but has still proved reliable and useful, especially where board space was at a premium. It could also be appropriate as a front end for an external peak-sensing A–D peripheral. The underlying principle could also be used in microprocessor-based kit to clamp a signal line to ground, albeit with 100 Ω or so effectively in series, perhaps where a MOSFET would add too much capacitance.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content


The post Hold that peak with a PIC appeared first on EDN.
