EDN Network

Voice of the Engineer

Hotwire thermostat: Using fine copper wire as integrated sensor and heater for temperature control

Wed, 02/14/2024 - 15:07

Conventional thermostats are based on separate temperature sensor and heater devices with means for feedback between them. But in some recent EDN design ideas (DIs) we’ve seen thermostat designs that meld the functions of sensor and heater into a single active device (usually a FET or BJT). This approach can be a better fit for applications where the intended thermal load is physically small or has some other quirk of geometry that makes it inconvenient to apply the classic separate sensor/heater schema. This DI (see the figure) follows the melded concept but takes it in a somewhat different direction by using fine gauge copper wire (e.g., 40 AWG polyurethane insulated) as an integrated temperature sensor and heater.

Here’s how it works.

Miniature thermostat utilizing the tempco and I²R heating of 40 AWG copper wire as a melded sensor/heater.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The resistance and temperature coefficient of standard 40 AWG copper wire at 25°C are generally spec’d at 1.07 Ω/foot and +0.393%/°C, respectively. Therefore, L feet of 40 AWG wire can be expected to have an approximate resistance at a given temperature T of:

R(L,T) = 1.07 L (1 + 0.00393(T – 25))               (1)
R = 1.07 L + 0.00421 L T – (0.00421)(25) L          (2)
T = (R – 1.07 L + (0.00421)(25) L) / (0.00421 L)    (3)
T = (R – 0.965 L) / (0.00421 L)                     (4)

Equation 4 holds well from R/L = 0.965 Ω/ft at 0°C up to 1.6 Ω/ft at 155°C (the recommended upper temperature limit for solderable polyurethane wire insulation).
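Equations 1 and 4 are easy to check numerically. The following is a minimal Python sketch using the spec values quoted above (1.07 Ω/ft at 25°C, tempco +0.393%/°C); the function names are illustrative, not from the original design:

```python
# Resistance/temperature model for 40 AWG copper wire (Equations 1 and 4).
# Assumed spec values: 1.07 ohm/ft at 25 C, tempco +0.393%/C.
R25_PER_FT = 1.07   # ohms per foot at 25 C
TEMPCO = 0.00393    # fractional change per degree C

def wire_resistance(length_ft, temp_c):
    """Equation 1: R(L, T) in ohms for L feet of wire at T degrees C."""
    return R25_PER_FT * length_ft * (1 + TEMPCO * (temp_c - 25))

def wire_temperature(resistance_ohms, length_ft):
    """Equation 4: infer T (degrees C) from measured R and known length L."""
    k = R25_PER_FT * TEMPCO  # ~0.00421 ohm/ft per degree C
    return (resistance_ohms - 0.965 * length_ft) / (k * length_ft)

# Round trip: 10 ft of wire at 100 C should map back to ~100 C
# (within the small rounding of the 0.965 coefficient).
r = wire_resistance(10.0, 100.0)
t = wire_temperature(r, 10.0)
```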

Consider the implications for the use of fine copper wire as a combination temperature sensor and heater.

If a suitable length (between 5 and 15 feet) of wire is placed in a feedback loop that drives current through it so as to dissipate enough I²R heating to raise and maintain a temperature that creates a preselected constant wire resistance, then that temperature, and the temperature of any thermal load thermally bonded to it, will likewise be constant! This is exactly what the circuit in the figure does.

Q1’s drain supplies heating current I to the sensor/heater wire (please ignore for a moment the minor contribution from start-up resistor R2). The voltage induced between the terminals of the R wire resistance is then:

V = IR                         (5)

This causes the A1b/Q2 current source to output:

I2 = V/(R4 + R7) = IR/(R4 + R7)           (6)

This induces a voltage at pin 2 of A1b:

V2 = I2(R5 + R6) = IR(R5 + R6)/(R4 + R7)           (7)

Meanwhile, Q1’s source current (also equal to I) sampling resistor R1 produces:

V3 = IR1                     (8)

FET control amplifier A1a forces FET gate voltage and thereby R drive current such that:

V2 = V3                                           (9)
IR(R5 + R6)/(R4 + R7) = IR1          (10)
R = R1(R4 + R7)/(R5 + R6)             (11)

Thus, heater current, and therefore wire resistance and temperature, are forced to equilibrium values set purely by the resistance ratios listed in Equation 11, with the resultant constant temperature given by Equation 4.
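The setpoint calculation implied by Equations 11 and 4 can be sketched in a few lines of Python. The resistor values below are hypothetical placeholders, not the DI’s actual component values:

```python
# Equilibrium setpoint from Equations 11 and 4: the loop regulates the
# wire to a resistance set purely by resistor ratios; the temperature
# then follows from the wire's tempco.
def setpoint_resistance(r1, r4, r7, r5, r6):
    """Equation 11: regulated wire resistance R in ohms."""
    return r1 * (r4 + r7) / (r5 + r6)

def setpoint_temperature(r_wire, length_ft):
    """Equation 4 applied to the regulated resistance (degrees C)."""
    return (r_wire - 0.965 * length_ft) / (0.00421 * length_ft)

# Hypothetical example: R1 = 0.1 ohm, R4 + R7 = 1 kohm, R5 + R6 = 8.13 ohm,
# with 10 ft of 40 AWG sensor/heater wire.
r_set = setpoint_resistance(0.1, 500.0, 500.0, 4.0, 4.13)
t_set = setpoint_temperature(r_set, 10.0)  # roughly 63 C for these values
```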

About Q3. The thermostat circuit is intended to be as flexible as possible in regard to wire gauge, length, and associated sensor/heater R resistance. To accommodate R < 10 Ω and the consequent possibility of damaging peak I values, Q3 removes Q1’s gate drive when necessary and limits I to a safe ~1.4 A.

Setup and calibration. In further pursuit of flexibility in accommodating sensor/heater wire length and initial R, this simple calibration procedure is suggested for whenever the wire is replaced.

  1. Before first power up, allow sensor/heater to fully equilibrate to room temperature.
  2. Set R4 and R5 fully CCW.
  3. Push and hold the CAL NC pushbutton.
  4. Turn the power on.
  5. Slowly turn R4 clockwise until LED first flickers on.
  6. Release CAL.

Done. R5 is now “reasonably well” calibrated for a CCW to CW span of zero to 130°C above room temp.

Thermal coupling of the chosen length of sensor/heater wire to the desired thermal load (e.g., thermostated circuit component, test tube, petri dish, etc.) can be done by winding a meander of wire around the load, and securing it with polyimide tape, RTV silicone, or a similar heat tolerant adhesive.

And about R2. Although not significant in the steady state function of the circuit, without R2 the thermostat might be vulnerable to a failure to start when first switched on and might simply sit looking stupid. Indefinitely. Don’t ask how I know this…

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Indoor solar cells spur design-option reassessments

Tue, 02/13/2024 - 14:27

Conventional solar cells are just that: photovoltaic devices which, by their physics, extract and transform energy from the sun. Their sensitivity and efficiency are matched to the optical-energy spectrum of radiated and received power from the Sun to the extent possible, Figure 1.

Figure 1 The solar optical spectrum is complex and the available power per wavelength is a function of many factors. Source: Pennsylvania State University

In many small-scale applications, these same solar cells are used indoors and powered by ambient light from sources such as overhead fixtures (which may be fluorescent or LEDs of various color temperatures), incandescent lamps (yes, some are still out there), diffuse or shaded natural light, and even specialized light such as halogen sources.

Given the indoor situation, two things are obvious:

  • The designation as “solar” is somewhat of a misnomer since the Sun is no longer the source and so “photovoltaic” (PV) would be more accurate—but that’s the widely used, colloquial way of describing these cells.
  • The energy spectrum of these indoor lighting sources is mismatched to the responsiveness of the solar cells, so efficiency is low.

While there have been some smaller, less-critical indoor products using solar power alone such as small calculators, such harvesting of ambient indoor optical energy is generally limited in its usefulness.

That situation may be changing as several companies have developed solar cells (we’ll stick with that misnomer) based on technologies which are very different from those used by conventional “real” solar cells. These indoor-optimized cells use complex layers of dyes along with specialized physical and chemical processes to achieve their indoor-optimized results.

Both Ambient Photonics (Scotts Valley, CA) and Exeger Operations AB (Stockholm) use variations of dye sensitized solar cell (shortened to DSC or DSSC) technology to produce light-sensitive cells which are optimized for indoor settings. The production process is a high-volume printing-like operation rather than the furnace-based process used for conventional solar cells.

Ambient says they have reinvented the chemistry of the dye sensitized solar cell (DSSC) with novel, proprietary molecules, using light-sensitive dyes to collect photons and convert them into electrons. In their electrochemical system, these light-sensitive dye molecules harvest and produce energy, with the dyes functioning similar to how chlorophyll behaves during photosynthesis in converting photons into energy.

They maintain that their energy-harvesting technology can harness photons across the light spectrum, yielding more than 90 percent conversion efficiency in low-light conditions, even when compared to standard DSSC cells, Figure 2. The cells also function effectively despite the dynamic, changing indoor low-light conditions, which are largely a function of the time of day.

Figure 2 Ambient says their DSSC process yields results which are superior to conventional film-based PV cells. Source: Ambient Photonics

Exeger’s dye sensitized solar cell uses a new architecture which they say improves real-life performance, provides greater flexibility, and offers seamless integration possibilities. In their approach, a unique conductive electrode material has replaced the traditional expensive and inefficient indium-tin-oxide (ITO) layer, Figure 3.

Figure 3 Exeger’s process requires multiple layers of sophisticated materials and films and is compatible with mass production. Source: Exeger Operations AB

Dubbed Powerfoyle, it is flexible and durable and so can be integrated on curved surfaces such as headbands, Figure 4. It can be produced in sizes from 15 cm² to 500 cm², and therefore integrated into products ranging from small IoT sensors to speakers and larger accessories.

Figure 4 A bendable, flexible solar cell opens up new design-in and application opportunities. Source: Exeger Operations AB

For most design engineers, how these companies have achieved their indoor-friendly solar cells is not as important as what these innovations may do with respect to design options and degrees of freedom. Do power sources such as these enable increased consideration of IoT devices (sensors, trackers, shelf labels, and even remote controls) which do not need battery replacement, yet require more power than other harvesting schemes (such as ambient RF harvesting) can support? For example, electronic door locks in hotels are an interesting possibility, as they are continually exposed to indoor lighting and used relatively infrequently; in theory, that’s a good combination of harvesting and use cycles.

Applications do not have to be limited to such small devices, either; Exeger has an agreement with a headphone manufacturer for ambient-powered units with the headband capturing ambient light. The same idea can be used for providing power to safety vests and alarm devices.

Of course, the energy source itself is only part of the harvesting chain. For designers, the dominant issue is not “how did they do it” but instead “what can it perhaps do for me?”; “what new opportunities does it provide?”; and “what do I need to do in my design to make use of this power source?”

For example, designers will have to decide on a suitable energy-storage and charging arrangement, whether using a rechargeable battery and the issues of limits on viable charge/discharge cycles, or a supercapacitor and the unique issues of using these non-chemical storage cells.

It will be interesting to see if these indoor-friendly solar cells become a standard part of the design-in possibilities, or if they have downsides which only become apparent when you get into the nitty-gritty design details of product design, manufacturing, use patterns, and long-term performance. 

Do you see a viable energy-harvesting role for these non-Sun-driven solar cells? Would they allow you to create something you haven’t been able to do thus far? What possible design-in issues do you see?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


Will AI PCs be a new sweet spot for CPUs and DRAMs?

Mon, 02/12/2024 - 12:04

The personal computer (PC) industry is warming up to a new sweet spot: PCs incorporating artificial intelligence (AI) capabilities. Intel and Microsoft—the primary beneficiaries of the PC revolution—are now pushing PCs with AI-enabled CPUs and AI-powered software assistants, respectively, to move AI applications from the cloud to the PC realm.

In other words, AI PCs embedded with specialized chips can run AI models locally without relying on the cloud. That, according to Intel CEO Pat Gelsinger, will make AI services cheaper, faster, and more private than using services based in cloud-centric data centers. “You’re unleashing this power for every person, every use case, every location in the future,” he said at the CES 2024 in Las Vegas.

Intel, while competing with AI powerhouse Nvidia in server space, clearly sees an opportunity to catch up in its forte: PC processors. What it’s doing right now is integrating neural processor units (NPUs) into PC processors; NPUs are specialized semiconductors dedicated to handling AI tasks.

Intel’s Meteor Lake laptop CPU has incorporated an NPU to support third-party AI software features. Its archrival in the PC hardware space, AMD, has also been shipping AI PC processors. Meanwhile, Nvidia showcased three new GPUs—RTX 4060 Super, RTX 4070 Ti Super, and RTX 4080 Super—for AI-ready laptops at a virtual event before CES 2024.

Figure 1 Meteor Lake CPU has incorporated an NPU to support AI applications. Source: Intel

Besides AI-ready processors, memory chipmakers like Micron, Samsung, and SK hynix are also eyeing AI PCs to enable AI accelerators to run powerful assistants on personal computers. New laptops currently come with as much as 8 GB of RAM, and that’s likely to double in Windows-based AI PCs. In fact, a large language model (LLM) running an AI assistant could require more than 16 GB of memory.

Take the example of the Llama 2 family of AI models created by Meta, which requires nearly 30 GB of RAM for its modest variant. Moreover, the amount of memory in AI PCs will likely increase with the availability of more powerful AI accelerators and processors.

Figure 2 Copilot AI assistant is built around OpenAI’s GPT-4 model. Source: Microsoft

At the moment, consumers are only warming up to AI personal computers, and it’ll take a while before more native applications are made available for AI PCs. However, both hardware and software for AI PCs will become more powerful over time, and that’s good news for semiconductor devices like CPUs, GPUs, and DRAMs.


Oscilloscope persistence displays

Mon, 02/12/2024 - 11:52

Persistence displays retain waveform traces on the screen, allowing them to decay over a user-set time duration. They let users see a history of signal variations on the screen. This feature is very useful when adjusting a signal, as it allows you to see the changes as they are made. Some oscilloscope applications require displaying a history of events in order to see how the signal varies over time. Persistence displays are key tools for viewing such signal changes as a function of time over multiple acquisitions. The most common applications that use persistence displays include jitter analysis of a serial data transmission and eye diagrams used for digital communications systems (Figure 1).

Figure 1 The persistence display of timing jitter on an edge. Multiple acquisitions are retained on the display of the edge to show the variation in its timing. Source: Arthur Pini

This is an analog persistence view of jitter on a clock edge. It is a monochrome display where the brighter areas are the more frequently occurring signal paths and the dimmer areas occur less often. The center area of the transition is brighter, meaning more edges pass at that time than at the times corresponding to the outer edges.

The same data can be viewed in color-graded persistence, a tool used to map the frequency of occurrence spectrally. Most frequent events appear in red while the least frequent events are shown in violet (Figure 2).

Figure 2 A color graded persistence display of the same edge jitter. The red areas occur more often than violet areas. Source: Arthur Pini

The intermediate frequency of occurrence is mapped spectrally, from most to least often occurring as red-orange-yellow-green-blue-indigo-violet.

Multiple acquisitions are acquired and stored in a persistence map which shows signal variations over time. The persistence decay time is user-selectable with a time constant from half a second to infinite. A saturation control allows users to control the mapping of frequency of occurrence to intensity or color. 
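The accumulate-and-decay behavior described above can be illustrated with a short Python sketch using NumPy. This is not any oscilloscope’s actual implementation; the function names, map size, and decay factor are all illustrative:

```python
import numpy as np

# Minimal sketch of a persistence map: each acquisition is rasterized into
# a 2-D hit-count array (rows = amplitude bins, columns = time bins), and
# older hits decay by a constant factor per acquisition, mimicking the
# user-selectable persistence decay time.
def update_persistence(pmap, waveform, v_min, v_max, decay=0.95):
    """Decay the map, then add one acquisition (one hit per time column)."""
    pmap *= decay  # decay = 1.0 would give infinite persistence
    rows = pmap.shape[0]
    idx = np.clip(((waveform - v_min) / (v_max - v_min) * (rows - 1)).astype(int),
                  0, rows - 1)
    pmap[idx, np.arange(pmap.shape[1])] += 1.0
    return pmap

# Accumulate 100 noisy sine acquisitions into a 256 x 500 map.
t = np.linspace(0, 2 * np.pi, 500)
pmap = np.zeros((256, 500))
for _ in range(100):
    wave = np.sin(t) + np.random.normal(0, 0.05, t.size)
    pmap = update_persistence(pmap, wave, -1.5, 1.5)
```

Displaying `pmap` with intensity (or a spectral colormap) proportional to the counts reproduces the monochrome and color-graded views of Figures 1 and 2.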

Eye and state transition diagrams

Persistence displays also help analyze data communications signals, where they are used to display eye diagrams and state transition diagrams (Figure 3).

Figure 3 The eye diagrams of the I and Q components and state transition diagrams of a 16-QAM signal rendered in monochrome analog persistence. Source: Arthur Pini

The eye diagrams of a 16-QAM signal show the results of 12,890 acquisitions of the I and Q signal components, which are also cross plotted as an X-Y plot, forming the state transition diagram shown in the upper right corner. Again, the intensity variations are proportional to the amount of time a waveform falls on a particular point on the display. The highly repetitive elements of a signal are brighter than the rarely occurring signal events. The data states, which appear as horizontal lines in the I and Q traces, are written more often and show up brighter than the transitions, which take different paths and occur with less frequency at any given point. The same is true of the state transition diagram where the data states appear as bright dots and the transition paths have a lower intensity.

Persistence histograms

All the data behind the persistence display is available and can be used to quantify the acquired data statistically. One example is to generate a histogram from the persistence display. The oscilloscope used in this article has a function called persistence histogram, which lets the user define either a horizontal or vertical slice through the persistence display and then forms a histogram, as shown in Figure 4.

Figure 4 A persistence histogram with a horizontal slice of the jitter persistence display centered at a level of 0 mV with a width of 10 mV. Source: Arthur Pini

The persistence histogram appears in the trace below the persistence display. Cursors are used to mark the location where the histogram slice originates. In a vertical slice, each bin of the histogram contains a class of related amplitude levels. A horizontal slice, used in the example, produces a histogram where each bin contains a class of related time values.

In the example, the vertical axis of the histogram reads the number of times a specific horizontal pixel is hit. The peak of the histogram corresponds to the central area with a light blue color, while the falling sides correspond to the persistence display changing from indigo to violet. The histogram can be measured using the oscilloscope’s measurement parameters, the measurement parameters P1 through P3 beneath the display grids read the mean, the standard deviation, and the range of the histogram. Parameter help markers annotate the locations of these measurements on the histogram itself.

Persistence histograms can also be applied to eye diagrams showing the horizontal timing uncertainty as well as the vertical deviation (Figure 5).

Figure 5 Application of persistence histogram to an eye diagram permits analysis of noise and jitter on the eye. Source: Arthur Pini

The histogram in the center trace was taken from a horizontal slice through the eye crossing and shows the range of variation in the time of the crossings. The lower histogram was taken using a vertical slice centered between the crossings; it shows the uncertainty in the amplitude of the eye at the center. Some oscilloscopes may not offer measurements that quantify eye characteristics such as eye height and width; these can actually be obtained using persistence histograms and their associated statistical measurements.

Persistence trace functions

Persistence trace functions take the histogram of the persistence values over a number of vertical slices set by the user and extract the mean, standard deviation, and range of the persistence data at each slice. It then plots the extracted statistical parameter over time (Figure 6).

Figure 6 Examples of the persistence trace mean (second from the top), persistence trace sigma (third from the top), and persistence trace range (bottom) traces. Source: Arthur Pini

The persistence trace mean function plots the mean value of the histograms at each of the user’s selected intervals. The resultant plot is the average value of the source persistence trace. In this example, the trace is taken from one thousand points along the persistence trace. This function shows the underlying waveform without vertical noise. Persistence trace sigma plots the minimum and maximum values of the standard deviation about the mean using an extrema plot. The plot shows mean + and – one standard deviation. This function provides a view of the rms noise on the source waveform. The persistence trace range plots the minimum and maximum values of the persistence histogram about the mean and shows the range of the histogram. It is the worst-case range of possible values, especially noise, at each point.
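The three persistence trace functions can be sketched as column-wise statistics over the persistence map: each time column’s hit counts form a vertical histogram from which the mean, sigma, and extrema are extracted. A minimal Python/NumPy illustration follows (again, names and the map layout are assumptions, not the instrument’s implementation):

```python
import numpy as np

# Persistence trace functions: treat each time column of the persistence
# map as a vertical histogram of amplitude levels and extract the mean,
# standard deviation, and min/max level at each column.
def persistence_traces(pmap, levels):
    """levels[i] is the amplitude of row i; returns (mean, sigma, lo, hi)."""
    counts = pmap.sum(axis=0)  # total hits per time column
    mean = (levels[:, None] * pmap).sum(axis=0) / counts
    sigma = np.sqrt(((levels[:, None] - mean) ** 2 * pmap).sum(axis=0) / counts)
    hit = pmap > 0
    lo = np.where(hit, levels[:, None], np.inf).min(axis=0)
    hi = np.where(hit, levels[:, None], -np.inf).max(axis=0)
    return mean, sigma, lo, hi
```

Plotting `mean` against time gives the persistence trace mean; `mean ± sigma` and `(lo, hi)` as extrema plots give the sigma and range traces of Figure 6.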

Persistence trace mean is the most useful of the functions allowing a quick determination of the average value of a persistence trace. It is also useful to smooth out traces acquired with low sample point counts (Figure 7).

Figure 7 The persistence trace mean shows all the possible states in waveform with a low sample count by retaining multiple acquisitions. Source: Arthur Pini

Waveforms with low sample counts, displayed with linear interpolation, may appear angular and discontinuous; however, they are not, and over multiple acquisitions they trace out a smooth waveform. Using persistence trace mean to view the waveform allows the persistence history to fill in the intermediate states and smooth the waveform, showing its actual structure.

3-D persistence display

Adding vertical height to a persistence display proportional to the rate of occurrence gives you a three-dimensional (3-D) effect. This 3-D persistence display creates a topographical view of your waveform.

As shown in Figure 8, this is most useful when studying X-Y plots of signals such as QPSK.

Figure 8 The in-phase and quadrature components of a QPSK signal and a three-dimensional persistence plot of a QPSK state transition diagram. Source: Arthur Pini

The three-dimensional plot retains the color or intensity coding of the persistence displays but adds height proportional to the frequency of occurrence of the display pixels. The shape of these peaks provides an alternative view of the frequency of occurrences in your signals. In this example, the data states of the signal which occur most frequently appear as the highest elements in the X-Y display and are coded in red. Transition paths have more variation and occur less repetitively. They are lower on the display and coded in yellow/green. Off path regions are at the bottom of the display, coded in violet. Controls allow for rotating the 3-D plot to view it from different angles.

The 3-D display can be rendered in three different qualities. The first is as a solid, as is shown, and is the default quality. It can also be rendered in the wireframe quality; this is constructed using lines of equal intensity to create the persistence map. The third quality is shaded, which is only available in monochrome persistence. Shaded quality shows the 3-D object as if it were illuminated by projected light; the shading emphasizes the shape of the object.

The value of persistence displays

Whether used to measure jitter, eye diagrams, or state transition diagrams, persistence is a valuable display technique. When combined with math persistence analysis tools and related measurements, it becomes a powerful tool for quantifying signal variations.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


Current sensors cover DC to 5 MHz

Fri, 02/09/2024 - 16:30

Two wideband current sensors from Allegro, the ACS37030 and ACS37032, ensure efficiency and reliability in GaN and SiC FET power architectures. With an operating bandwidth of DC to 5 MHz, the devices are suitable for electrified vehicles, clean energy solutions, and data center applications.

The ACS37030 and ACS37032 offer current sensing ranges of ±20 A, ±40 A, and ±65 A, with a typical response time of 40 ns. Both devices employ dual signal paths. One path captures low-frequency and DC current using Hall-effect elements. The other path captures high-frequency current data through an inductive coil. These two paths are summed to enable sensing from DC to 5 MHz in a single device.

The current sensors achieve stable and safe control, while reducing EMI. Sensitivity error over temperature is ±2%. The properties of the inductive coil increase signal to noise ratio (SNR) as frequency increases, minimizing noise at the output. The ACS37030 provides a zero current reference output, while the ACS37032 offers an overcurrent fault output.

Housed in compact 6-pin SOIC packages, the sensors have a rated isolation voltage of 3500 VRMS and a basic working voltage of 840 VRMS. They operate over a temperature range of -40°C to +150°C. To learn more about the ACS37030 and ACS37032 current sensors, click here.

Allegro MicroSystems

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


PAM4 DSP toolkit optimizes cable design

Fri, 02/09/2024 - 16:30

MaxLinear is offering a product design kit (PDK) to help cable manufacturers integrate the Keystone PAM4 DSP into their active electrical cables. According to MaxLinear, the 5-nm PAM4 DSP can yield up to a 40% power savings over competitor solutions when used in active electrical cable (AEC) applications.

Unlike passive cables, active electrical cables actively boost signals, allowing for longer distances (up to 7 meters for 400G); higher bandwidth; and thinner, lighter cables. Keystone PAM4 DSPs based on 5-nm CMOS technology enable designers to build high-speed cables that maximize reach and minimize power consumption in next-generation hyperscale cloud networks. To ease DSP integration, the PDK includes strong application support, multiple tools to optimize and monitor performance, and both hardware and software reference designs.

Keystone 5-nm DSPs cater to 400G and 800G applications and provide a 106.25-Gbps host-side electrical I/O that aligns with the line-side interface rate. Variants support single-mode optics (EML and SiPh) and multimode optics (VCSEL transceivers and AOCs), as well as AECs. The family also includes companion transimpedance amplifiers.

For more information about the Keystone 5-nm PAM4 DSPs (MxL93642, MxL93643, MxL93682, and MxL93683), click here.

MaxLinear


LTE-M module integrates GNSS receiver

Fri, 02/09/2024 - 16:29

Sara-R520M10, an LTE-M and NB-IoT module from Swiss provider u-blox, delivers accurate positioning data concurrent with LTE communication. Simultaneous GNSS and cellular connectivity is an important factor for applications requiring continuous or cyclic tracking, such as utility metering and asset tracking.

The Sara-R520M10 module incorporates the company’s UBX-R52 cellular chip, M10 GNSS receiver, and dedicated GNSS antenna interface in a 16×26×2.2-mm, 96-pin LGA package. A variant without the GNSS receiver, the Sara-R520, is also available for general-purpose applications. This model features SpotNow, an assisted GPS receiver for applications requiring occasional tracking. 

The Sara-R52 series offers 23 dBm of RF output power to ensure stable connectivity. Modules include an Open CPU (uCPU) feature that allows users to run their own software on the chip without the need for an external MCU. An onboard smart connection manager performs automatic connectivity management. Its function is to achieve either the best performance or the lowest power consumption. This is useful when a connection is lost and needs to be re-established.

In addition to the Sara-R52 series, u-blox released the Lexi-R520 LTE-M module. The Lexi-R520 furnishes the same features as the Sara-R520, but in a smaller form factor. Its 16×16×2-mm, 133-pin LGA package lends itself to applications like wearables.

Samples of the LTE-M modules are available now, with volume production scheduled for Q3 2024.

Sara-R52 series product page

Lexi-R520 product page

u-blox


16-bit audio ADC detects breaking glass

Fri, 02/09/2024 - 16:29

Asahi Kasei’s AK5707 ADC packs an acoustic activity analyzer (AAA) that can be configured to detect specific types of acoustic events, such as breaking glass. This integrated analog acoustic event detector makes the AK5707 16-bit monaural ADC well-suited for IoT security applications, like wireless cameras and smart doorbells.

Consuming just 34 µA, the AK5707’s AAA block listens for acoustic events that fit user-customizable profiles. Upon detection, the AAA activates the ADC and initiates recording to an integrated audio buffer. Simultaneously, it generates an interrupt to wake the external SoC.

Unlike typical loudness-based detection, the AAA constantly tracks the current noise floor and adjusts its detection parameters in response. AAA detection not only reduces false positives, but also increases battery life in noisier environments. Asahi Kasei offers a suite of detection profiles for the AAA, including glass-break, alarm patterns, crying baby, and human voice. These profiles are configurable, with multiple acoustic parameters set by the user.

The 16-bit, 48-kHz ADC block of the AK5707, which can be powered on and off independently of the AAA, achieves a signal-to-noise ratio of 95 dB, while consuming only 200 µA. Built-in AC coupling capacitors allow for a 3.2-mm² PCB area, including one external capacitor.

The AK5707 comes in a tiny 1.53×1.58-mm WLCSP. It is currently sampling, with mass production scheduled to begin in September 2024.

AK5707 product page

Asahi Kasei Microdevices 


Reference design serves Lunar Lake CPU

Fri, 02/09/2024 - 16:29

Cirrus Logic, Intel, and Microsoft are developing a reference design that teams Intel’s forthcoming Lunar Lake processor with Cirrus audio and power devices. The reference platform will help developers create more immersive audio for laptop PCs, while reducing heat generation and extending battery life to enable smaller, thinner designs.

Claiming to bring best-in-class audio to more PCs, the reference design employs the Cirrus Logic CS42L43 SmartHIFI codec, CS35L56 audio amplifier, and CP9314 switched-capacitor power converter. The codec and audio amplifiers deliver louder bass, clearer voice, and lower distortion to both the speaker and the headset. The power converter promises to reduce power and heat, as well as fan noise.

In addition, the audio design will assist with the transition to the MIPI SoundWire interface and Microsoft’s ACX (audio class extension) framework. Along with built-in security features, the design supports next-generation features like spatial audio. It is scalable across different processors, speakers, and notebook designs, allowing OEMs to implement audio subsystems that scale in channel count and features.

Intel’s Lunar Lake processor for portable PCs is expected to launch later this year.

Cirrus Logic 

Intel

Microsoft



The post Reference design serves Lunar Lake CPU appeared first on EDN.

Parsing PWM (DAC) performance: Part 2—Rail-to-rail outputs

Thu, 02/08/2024 - 17:30

Editor’s Note: This is a four-part series of DIs proposing improvements in the performance of a “traditional” PWM—one whose output is a duty cycle-variable rectangular pulse which requires filtering by a low-pass analog filter to produce a DAC. This second part addresses the inability of “rail-to-rail” op amps’ output swing to encompass supply rail voltages.

Part 1 can be found here.

Recently, there has been a spate of design ideas (DIs) published that deal with microprocessor (µP)-generated pulse width modulators driving low-pass filters to produce DACs. Approaches have been introduced which address ripple attenuation, settling time minimization, and limitations in accuracy. This is the second in a series of DIs proposing improvements in PWM-based DAC performance. Each of the series’ parts’ recommendations are, and will be, implementable independently of the others. This DI addresses the inability of “rail-to-rail” op amps’ output swings to encompass their supply rail voltages. Recognizing that an op amp is needed to buffer a filter from a DC load to prevent load-induced errors, and that these devices are useful in implementing more effective analog filters, there is a legitimate interest in mitigating or eliminating this imperfection.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It don’t mean a thing if it ain’t got that swing (well, sort of…)

The common mode input voltages of many rail-to-rail op amps may extend 100 mV above their positive and below their negative supply rails, but none have an output common mode voltage range which includes those rails. The OPA376, 2376, and 4376 rail-to-rail family, with its excellent input offset voltage and bias current ratings, is no different. The SC70-5, SOT23-5, and SO-8 package versions reach within 40 mV of the rails with a 10 kΩ load from -40°C to 125°C, and within 50 mV with a 2 kΩ load. There are various means of dealing with this limitation.

In the spirit of “Doctor, it hurts when I do this”, “Then don’t do that!”: software could simply prevent the setting of duty cycles which would drive the op amp too near a supply rail. This is rather unsatisfactory if the code which generates the duty cycle values expects that the values of zero and full scale (FS) will be executable. So, suppose an op amp can swing to within X mV of both its positive rail (VDD) and ground; instead of programming the PWM counter with a value of DC, program it with DC’ = DC · (1 – α) + α · FS/2, where α = X mV · 2 / VDD.
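The software-side rescaling above can be sketched in a few lines (a minimal illustration; the function name and the 16-bit/5 V example values are hypothetical, not from the DI):

```python
# Minimal sketch of the duty-cycle rescaling described above. An op amp
# that only swings to within X mV of each rail cannot reproduce codes near
# zero or full scale, so the requested count DC is compressed into the
# usable window: DC' = DC*(1 - alpha) + alpha*FS/2, with alpha = 2*X/VDD.

def rescale_duty(dc, full_scale, x_mv, vdd_mv):
    """Map a requested PWM count into the op amp's reachable output range."""
    alpha = 2.0 * x_mv / vdd_mv
    return round(dc * (1.0 - alpha) + alpha * full_scale / 2.0)

# Example: 16-bit PWM, 5 V supply, output reaches within 50 mV of each rail.
FS = 65536
print(rescale_duty(0, FS, 50, 5000))   # zero code nudged up off the rail
print(rescale_duty(FS, FS, 50, 5000))  # full scale nudged down off the rail
```

Note that midscale codes are essentially unaffected; only the extremes are pulled in, which is exactly the behavior the analog divider of Figure 1 replicates.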

If that calculation imposes an unacceptable software burden, there is a related analog approach. In Figure 1, set R = r · α / (1 – α). The full range of DC values is now restricted to a range that the op amp output can replicate.

Figure 1 A purely analog means of avoiding op amp input voltages so close to the supply rails that the output cannot replicate them.

If the resistors have a 0.1% tolerance, the maximum offset error is a little greater than 2⁻¹⁵ · VDD. The gain error is larger though: a little less than 2⁻¹⁰ · VDD. With adequate calculation resolution, the method of scaling the duty cycle count in software leads to smaller errors than the purely analog one.
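As a quick numeric check of those bounds, assuming a 5 V supply purely for illustration:

```python
# Quick numeric check of the stated error bounds, assuming VDD = 5 V
# (illustrative; both bounds scale linearly with VDD).
VDD = 5.0
offset_error = 2**-15 * VDD  # worst-case offset bound with 0.1% resistors
gain_error = 2**-10 * VDD    # worst-case gain error bound
print(f"offset bound ~ {offset_error * 1e6:.0f} uV, "
      f"gain bound ~ {gain_error * 1e3:.1f} mV")
```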

In some applications, it is imperative that a DAC can swing to ground. In others, it must also be able to reach the µP’s positive rail, VDD. To accomplish this, voltage(s) beyond (a) supply rail(s) must be generated. But in no case can the supply voltages’ range exceed that recommended for the op amp, which is 5.5 V for the OPAx376 family. This necessitates different solutions for the common VDD supply values of 1.8, 2.5, 3.3 and 5.0 V. We will now follow with a series of schematics that contain solutions for each of these voltages…

The circuitry for the op amp positive rail (OA+) can be ignored in favor of VDD if the DAC needn’t swing to VDD. Texas Instruments’ LM7705 provides a complete and elegant means of generating a voltage that is only slightly more negative than ground, thereby allowing the op amp output to reach 0 V (Figure 2). This charge pump accepts a supply voltage from 3 V to 5.25 V and provides a regulated output of -230 mV at up to 20 mA. The LM7705 offers features beyond those of a simple charge pump inverter (which requires an external oscillator) in that:

  1. An inverter sets the negative rail supply voltage to be the negative of the positive supply voltage. At VDD = 3 V and above, 3 V – (-3 V) exceeds the OPAx376 family’s maximum differential supply voltage VOpRange of 5.5 V. The LM7705 provides just enough negative voltage and no more than is needed.
  2. The LM7705 has a smaller footprint and incorporates an oscillator and a regulated DC output into a single IC.

Figure 2 This simple and inexpensive inverting charge pump provides a regulated -0.23 V for a rail-to-rail op amp’s negative supply so that the op amp output can swing to, and even below, ground.

But an application might also require swinging to the positive rail. The need to avoid supply voltage ranges exceeding 5.5 V for the OPAx376 leads to different solutions for different values of VDD (always assumed to be within +/- 5% of nominal value). The simplest solution is for the case of VDD equal to 1.8 V (Figure 3).

Figure 3 Solution for staying within the supply operating range for the OPAx376 where VDD = 1.8 V.

The LM2664 is a voltage inverter generating -VDD from +VDD. With the addition of D1, D2, C3 and C4, a voltage of 2 · VDD – 2 · Vd is generated, where Vd is the voltage drop across the diodes. OA+ is enough above VDD to allow the op amp output to include the positive rail. The difference between OA+ and OA- is safely within the supply operating range (VOpRange) for the OPAx376. If your VDD is between 1.8 and 5.5 V and is less than 1/3 of the VOpRange of your op amp, this simple and cheap circuit could be all you need. But if not…
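A back-of-envelope check of the resulting rails can be sketched as follows; the diode drop used here is an assumed Schottky value, not a figure from the DI:

```python
# Back-of-envelope check of the Figure 3 rails. The diode drop Vd is an
# assumed Schottky value; substitute the real D1/D2 datasheet figure.
def rail_span(vdd, vd):
    oa_neg = -vdd              # LM2664 inverter output
    oa_pos = 2 * vdd - 2 * vd  # D1/D2/C3/C4 doubler output
    return oa_pos - oa_neg     # must stay below VOpRange (5.5 V here)

span = rail_span(1.8, 0.35)    # 1.8 V supply, ~0.35 V assumed diode drop
print(f"OA+ to OA- span: {span:.2f} V (OPAx376 limit: 5.5 V)")
```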

As shown in Figure 4, the same circuit is the basis for operation from a 2.5 V supply, but accommodations must be made to meet VOpRange for the OPAx376. This is accomplished by adding D3 and D4 to incur voltage drops.

Figure 4 Solution for staying within the supply operating range for the OPAx376 where VDD = 2.5 V.

Combinations of +/-5% variations in VDD, tolerances in diode voltage drops, and variations over temperature and load of the above circuit’s output voltages warn against applying the strategy of adding more diodes in series for the case where VDD increases to 3.3 V (Figure 5).

Figure 5 Solution for staying within the supply operating range for the OPAx376 where VDD = 3.3 V.

Here the LM2664 performs the same function as it did for a VDD of 1.8 and 2.5 V. But it powers a cheap op amp IC which functions as a positive and a negative voltage regulator. The R6/R7 divider ensures that the LM358BI operates within its common mode input range. (Its VOpRange is greater than 30 V!) OA+ and OA- voltages are approximately 100 mV beyond VDD = 3.3 V +/-5% and ground. Q1 and Q2 are placed in feedback loops to reduce the regulator output impedance. Since the op amp rails should be decoupled with ground-referenced 0.1 µF capacitors, this reduced impedance increases the loop’s high frequency break point. The result could be unstable were it not for the combination of C5 and R3 and that of C6 and R1. These pairs filter out the high phase-shift, high frequency feedback taken from the emitters and ensure that only mid frequencies down to DC are being regulated, thus establishing stability. In this circuit, the resistors are 1% tolerance parts.

As shown in Figure 6, the circuit for a 5 V VDD is similar to that for 3.3 V, but simpler. Here the higher Pump+ voltage means that there are no worries about input common mode operation, and we can dispense with R6 and R7. The passive components that make up the regulators are now identical.

Figure 6 Solution for staying within the supply operating range for the OPAx376 where VDD = 5 V.

Encompassing supply rail voltages

In this DI, several different approaches have been presented for producing DACs whose voltage swings encompass supply rails, or at least mitigate the problems associated with those that don’t. Hopefully, one or more are suitable for your application.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content


The post Parsing PWM (DAC) performance: Part 2—Rail-to-rail outputs appeared first on EDN.

Neon lamp blunder

Wed, 02/07/2024 - 16:26

There was this test system that comprised a huge row of equipment racks into which various items of test equipment would be mounted. Those items were a digital multimeter, an oscilloscope, several signal generators and so forth. Each section of the rack assembly had a neon lamp mounted at its base which was supposed to indicate that 400 Hz AC line voltage was turned on or turned off for the equipment mounted in that rack section.

Planned essentially as follows in Figure 1, the idea did not work.

Figure 1 Neon lamp indicator plan: line voltage was always present and was applied to the equipment installed within each section via a power relay, one of whose SPST contact sets operated that section’s neon lamp.

Line voltage was always present but would be applied to installed equipment within each section via a power relay of which one SPST contact set was to operate that section’s neon lamp. The problem was that each section’s neon lamp would always stay lit, no matter the state of the relay and the state of equipment power application.

No neon lamp would ever go dark.

There was much ado about this with all kinds of accusations and posturing, finger pointing, scoldings, searching for a fall guy and so forth but the problem itself was never solved. What had been overlooked is shown as follows in Figure 2.

Figure 2 The culprit: stray capacitance from the wiring harness through which each SPST contact was wired kept each neon lamp visibly lit.

Each SPST contact was wired through a harness which imposed a stray capacitance across the contacts of the intended switch. When the SPST was set to be open, that stray capacitance provided a low enough impedance for AC current to flow anyway and that current level was sufficient to keep the neon lamp visibly lit.
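A rough sketch of the mechanism, with assumed numbers (the actual line voltage and harness capacitance of the original system aren't stated):

```python
# Rough illustration of the failure mechanism; all numbers are assumed,
# not taken from the original system. With the contact open, harness stray
# capacitance still passes AC current: Xc = 1/(2*pi*f*C), I ~ V/Xc
# (neglecting the lamp's own voltage drop).
import math

f = 400.0   # Hz, the 400 Hz line frequency
v = 115.0   # V RMS line voltage (assumed)
c = 2e-9    # F, assumed stray capacitance across the open contact

xc = 1.0 / (2.0 * math.pi * f * c)  # reactance of the stray capacitance
i_ua = v / xc * 1e6                 # approximate leakage current, in uA
print(f"Xc ~ {xc / 1e3:.0f} kohm, leakage ~ {i_ua:.0f} uA")
# A neon lamp glows visibly on a few hundred microamps, so it never went dark.
```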

Brilliant, huh?

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post Neon lamp blunder appeared first on EDN.

∆Vbe differential thermometer needs no calibration

Tue, 02/06/2024 - 17:17

Differential temperature measurement is a handy way to quantify the performance of heatsinks, thermoelectric coolers (TECs), and thermal control in electronic assemblies. Figure 1 illustrates an inexpensive design for a high-resolution differential thermometer utilizing the ∆Vbe effect to make accurate measurements with ordinary uncalibrated transistors as precision temperature sensors. 

Here’s how it works.

Figure 1 Transistors Q1 and Q2 perform self-calibrated high resolution differential temperature measurements.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Diode connected transistors Q1 and Q2 do duty as precision temperature sensors driven by switches U1a and U1c and respective resistors R2, R3, R13, and R14. The excitation employed comprises alternating-amplitude current-mode signals in the ratio of (almost exactly):

10:1 = (100 µA via R3 and R13):(10 µA via R2 and R14).

With this specific 10:1 excitation, most every friendly small-signal transistor will produce an AC voltage signal accurately proportional to absolute temperature with peak-to-peak amplitude given by:

∆Vbe = Absolute Temperature / 5050 = 198.02 µV/°C.

The temperature-difference-proportional signals from Q1 and Q2 are boosted by ~100:1 gain differential amplifier A1a and A1d, synchronously demodulated by U1b, then filtered by R11, C2, and C3 to produce a DC signal = 20 mV/°C. This is then scaled by a factor of 2.5 by A1c to produce the final Q1–Q2 differential temperature signal output of 50 mV/°C, positive for Q1 warmer than Q2, negative for Q2 warmer than Q1.
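The ~198 µV/°C scale factor follows directly from physical constants, as a quick check shows (the helper converting the output voltage back to a temperature difference is illustrative, not part of the circuit):

```python
# The scale factor quoted above follows from physical constants: with a
# 10:1 current ratio, d(dVbe)/dT = (k/q) * ln(10) per kelvin.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19   # elementary charge, C

tempco = (k / q) * math.log(10)  # V/K
print(f"dVbe tempco ~ {tempco * 1e6:.1f} uV/K")  # ~198 uV/K, i.e., ~1/5050 V/K

# Illustrative helper (not part of the circuit): convert the final
# 50 mV/degC output back into a Q1-Q2 temperature difference.
def delta_t_from_vout(vout_volts):
    return vout_volts / 0.050
```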

Some gritty design minutiae are:

  1. Although the modulation-current setting resistors are in an exact 10:1 current ratio, the resulting modulation current ratio isn’t quite: the ∆Vbe signal itself subtracts slightly from the 100 µA half-cycle, which reduces the actual current ratio from exactly 10:1 to 9.9:1. This cuts the ∆Vbe temperature signal by approximately -1%.
  2. Luckily, the gain of the A1a/d amplifier isn’t exactly the advertised 100 either but is actually (100k/10k + 1) = 101. This +1% “error” neatly cancels the ∆Vbe signal’s -1% “error” to result in a final, acceptably accurate 20 mV/°C demodulator output.
  3. The modulating/demodulating frequency Fc generated by the A1b oscillator is deliberately set by the R4C1 time constant to be half the power mains frequency (30 Hz for 60 Hz power and 25 Hz for 50 Hz) via the choice of R4 (160 kΩ for 60 Hz and 200 kΩ for 50 Hz). This averages a couple mains-frequency cycles into each temperature measurement and thus improves immunity to stray pickup of power-line coupled noise. It’s a useful trick because some differential-thermometry applications may involve noise-radiating, mains-frequency-powered heaters. For convenience, the R5/R6 ratio was chosen so that Fc = 1/(2R4C1).
  4. Resistor values adorned with an asterisk in the schematic denote precision metal-film types. Current-ratio-setting R2, R3, R13, and R14 are particularly critical to minimizing zero error and would benefit from being 0.1% types. The others are less so and 1% tolerance is adequate. No asterisk means 5% is good enough.
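The carrier-frequency choice in item 3 is easy to verify numerically; note that C1's value isn't stated in the text, so 0.1 µF is assumed here purely for illustration:

```python
# Numeric check of the carrier-frequency choice in item 3. C1's value is
# not given in the text, so 0.1 uF is an assumption made for illustration.
def carrier_freq(r4_ohms, c1_farads):
    return 1.0 / (2.0 * r4_ohms * c1_farads)  # Fc = 1/(2*R4*C1)

C1 = 0.1e-6
print(f"R4 = 160 k: Fc ~ {carrier_freq(160e3, C1):.2f} Hz (for 60 Hz mains)")
print(f"R4 = 200 k: Fc ~ {carrier_freq(200e3, C1):.2f} Hz (for 50 Hz mains)")
# With this assumed C1, 160 k (a standard value) lands near the 30 Hz target
# and 200 k hits 25 Hz exactly.
```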

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post ∆Vbe differential thermometer needs no calibration appeared first on EDN.

Faraday to manufacture 64-bit Arm processor on Intel 1.8-nm node

Tue, 02/06/2024 - 16:16

The paths of RISC processor powerhouse Arm and x86 giant Intel finally converged in April 2023, when the two signed a collaboration pact to manufacture chips on Intel’s 1.8-nm process node. Hsinchu, Taiwan-based contract chip designer Faraday Technology will manufacture Arm Neoverse cores-based server processors on Intel Foundry Services (IFS) using the Intel 18A process technology.

Chip design service provider Faraday is designing a 64-core processor using Arm’s Neoverse Compute Subsystems (CSS) for a wide range of applications. That includes high-performance computing (HPC)-related ASICs and custom system-on-chips (SoCs) for scalable hyperscale data centers, infrastructure edge, and 5G networks. Though the ASIC designer won’t sell these processors itself, it hasn’t named its end customers either.

Figure 1 Faraday’s chip manufactured on the 18A process node will be ready in the first half of 2025. Source: Intel

It’s a breakthrough for Arm to have its foot in the door for large data center chips. It’s also a design win for Arm’s Neoverse technology, which provides chip designers with whole processors rather than individual CPU or GPU cores. Faraday will use interface IPs from the Arm Total Design ecosystem as well, though no details have been provided.

Intel, though not so keen to see Arm chips in the server realm, where x86 chips dominate, still welcomes them to its brand-new IFS business. It will likely be one of the first Arm server processors manufactured in an Intel fab. It also provides Intel with an important IFS customer for its advanced fabrication node.

Intel’s 18A fabrication technology for 1.8-nm process node—boasting gate-all-around (GAA) RibbonFET transistors and PowerVia backside power delivery—offers a 10% performance-per-watt improvement over its 20A technology for 2-nm process. It’s expected to be particularly suitable for data center applications.

Figure 2 The 18A fabrication technology is particularly considered suitable for data center chips. Source: Intel

Intel has already got orders to manufacture data center chips, including one for 1.8-nm chips from the U.S. Department of Defense. Now, a notable chip designer from Taiwan brings Arm-based chips to Intel, boosting IFS’ fabrication orders as well as its credentials for data center chips.

The production of this Faraday chip is expected to be complete in the first half of 2025.

Related Content


The post Faraday to manufacture 64-bit Arm processor on Intel 1.8-nm node appeared first on EDN.

Walmart’s Mobile Scan & Go: Who it’s For, I really don’t know

Mon, 02/05/2024 - 17:47

During Amazon’s annual Prime Day (which is two days, to be precise, but I’m being pedantic) sale mid-July last year, Walmart coincidentally (right) ran a half-off promotion for its normally $98/year competing Walmart+ membership service in conjunction with its competing Walmart+ Week (four days—I know, pedantic again) sale. Copy-and-pasted from the help page:

Walmart+ is a membership that helps save you time and money. You’ll need a Walmart.com account and the Walmart app to access the money and time-saving features of membership.

 Benefits include:

  • Early access to promotions and events
  • Video Streaming with Paramount+
  • Free delivery from your store
  • Free shipping, no order minimum
  • Savings on fuel
  • Walmart Rewards
  • Mobile Scan & Go

Free shipping absent the normal $35 order minimum is nice, as is free delivery from my local store. Unfortunately, for unknown reasons, only three Walmarts in all of Colorado, none of them close to me, offer fuel service. Truth be told, though, my primary signup motivation was that my existing Paramount+ streaming service was nearing its one-year subscription renewal date, at which time the $24.99/year (plus a free Amazon Fire Stick Lite!) promotional discount would end and I’d be back to the normal $49.99/year price. Walmart+ bundles Paramount+ as one of its service offerings, and since the Walmart+ one-year promo price was the same (minus $0.99, to be pedantic) as I’d normally pay for Paramount+ standalone, the decision was easy.

But none of these was the primary motivation for this writeup. Instead, I’ll direct your attention to the last entry in the bullet list, Walmart’s Mobile Scan & Go:

Here’s the summary from Walmart’s website:

Shop & check out fast with your phone in-store. Just scan, pay, & be on your way!

  • Get Walmart Cash by easily claiming manufacturer offers as you scan
  • Check out fast at self-checkout without having to rescan each item
  • See the price of items as you go

 It’s easy in 3 simple steps!

  • Open your Walmart app: Select Scan & go from the Store Mode landing page. Make sure your location access is enabled.
  • Scan your items as you shop: Once your items are scanned, click “View cart” to verify that everything is correct.
  • Tap “Check out”: Tap the blue “Check out” button in the app & head over to self-checkout. Confirm your payment method. Scan QR code at register.

Sounds good, right? I’d agree with you, at least at first glance. And even now, after using the service with some degree of regularity over the past few months, I remain “gee-whiz” impressed with many aspects of the underlying technology. Take this excerpt, for example:

Open your Walmart app: Select Scan & go from the Store Mode landing page. Make sure your location access is enabled.

To elaborate: if you’ve enabled location services for the Walmart app on your Android or iOS device, it’ll know when you’re at a store, automatically switching the user interface to one more amenable to helping you find which aisle (and region in that aisle) a product you’re looking for can be found (to wit, “Store Mode”), versus the more traditional online-inventory search. And if you’re also logged into the app, it knows who you are and will, among other things, auto-bill your in-store purchases to the credit card associated with your account.

Keep in mind, however, that (IMHO) the fundamental point of the app (as well as the broader self-checkout service option) is to reduce the per-store employee headcount by shifting the bulk of the checkout labor burden to you. Which would be at least somewhat OK, putting aside the obvious unemployment rate impact, if it also translated into lower consumer prices versus just higher shareholder profits. Truly enabling you to just “Scan & Go” would also be nice. Reality unfortunately undershoots the hype, at least in the current service implementation form.

Note, for example, the “scan your items” phrase. For one thing, scanning while you’re shopping is only relevant for items with associated UPC or other barcodes. The app won’t auto-identify SpaghettiOs if you just point the smartphone camera at the pasta can, for example:

not that I’m sure you’d even want it to be able to do that, considering the potential privacy concerns in comparison to a conceptually similar but fixed-orientation camera setup at the self-checkout counter. Consider, for example, the confidentiality quagmire of a small child in the background of the image captured by your smartphone and uploaded to Walmart’s servers…

The app also can’t, perhaps obviously, handle variable-priced items on its own, such as per-pound produce that must be weighed to determine the total charge, and which therefore must instead be set aside and segregated in your shopping cart for further processing at checkout. And about that self-checkout counter…it unfortunately remains an essential step in the purchase process, pragmatically ensuring that you’re not “gaming the system”. After you first scan a QR code that’s displayed on your smartphone, you then deal with any remaining items (such as the aforementioned produce) and pay. And then, as you exit the self-checkout area, there’s a Walmart employee parked there who may (or may not) double-check your receipt against the contents of your cart, including in your bags, to ensure you haven’t “forgotten” to scan anything or “accidentally” scanned a barcode for a less expensive alternative item instead.

Still, doesn’t sound too bad, does it? Well, now consider these next-level nuances, which I know of from comparison with the Meijer Shop & Scan alternative offered back in Indiana, the state of my birth.

In upfront fairness, at least some of what follows may be specifically reflective of my relatively tiny local Walmart versus the larger stores “down the hill” in Denver and elsewhere (against which I haven’t yet compared), rather than being a more general comparative critique:

  • There’s no way to get a printed receipt at self-checkout; you can only view it online post-transaction completion. This one’s utterly baffling to me, given that conventional self-checkouts offer it. And speaking of which…
  • At my store, at least, you’re forced to route through the same self-checkout lines as folks who are tediously doing full self-checkouts (thereby neutering the “Go” promise), versus also offering dedicated faster “Mobile Scan & Go” lines as Meijer does with Shop & Scan.
  • Meijer also offers self-weighing stations right at the produce department, linked to the store’s app and broader service, further speeding up the final checkout step. There aren’t any at Walmart, at least at my local store, where I instead need to weigh and accept the total per-item prices at checkout.
  • Not to mention the fact that “Mobile Scan & Go” is only available to subscribers of the paid-for-by-consumer Walmart+ service! You’d think that if the company was mostly motivated to reduce headcount costs, it’d at least offer “Mobile Scan & Go” for free, as it does with conventional self-checkout. You’d think…but nope. Pay up, suckers.

First-world “problems”? Sure. Rest assured that I haven’t lost sight of my longstanding big-picture perspective. But nonetheless irritating? Absolutely.

Service “upgrades” that seemingly benefit only the provider, not also the user, are destined for rapid backlash and a speedy demise. Consumers won’t use them and may even take their entire business elsewhere. While this case study is specific to grocery store shopping, I suspect the big-picture issues it raises may also resonate with related situations in your company’s existing and/or under-consideration business plans. Don’t listen solely to the accountants, who focus predominantly-to-completely on short-term cost, revenue and profit targets, folks!

Reader thoughts are as-always welcomed in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Walmart’s Mobile Scan & Go: Who it’s For, I really don’t know appeared first on EDN.

The profile of a power simulation tool for SiC devices

Mon, 02/05/2024 - 14:06

Power electronics design is a critical aspect of modern engineering, influencing the efficiency, reliability, and performance of numerous applications. Developing circuits that meet stringent requirements while considering manufacturing variations and worst-case scenarios demands precision and sophisticated tools.

At the same time, the landscape of power electronics design is rapidly evolving, ushering in an era of high-speed, high-efficiency components. Amidst this evolution, simulation tools need to redefine the way engineers conceptualize, design, and validate power systems. Take the Elite Power Simulator and Self-Service PLECS Model Generator (SSPMG), which allow power electronics engineers to reduce time-to-market. Collectively, these tools offer a precise depiction of the operational behavior of a circuit using onsemi’s EliteSiC line of silicon carbide (SiC) products.

Figure 1 Elite Power Simulator and Self-Service PLECS Model Generator provides a precise depiction of the operational behavior of power circuits. Source: onsemi

This simulation platform aims to empower engineers to visualize, simulate, and refine complex power electronic topologies with unparalleled ease. It does that by offering engineers a unique digital environment to test and refine their designs. Here, the underlying PLECS models and their accuracy are a critical component to the effectiveness of the Elite Power Simulator. The simulator allows engineers to upload custom PLECS models that are generated with the SSPMG.

The heart of this simulation tool is its ability to accurately simulate a wide array of power electronic topologies, including AC-DC, DC-DC, and DC-AC converters, among others. With over 40 topologies available, it provides engineers with an extensive library to explore and fine-tune their designs. For instance, in industrial applications, it supports critical systems such as fast DC charging, uninterruptible power supplies (UPS), energy storage systems (ESS), and solar inverters. Similarly, the tool is suited for onboard chargers (OBC) and traction inverter systems serving the automotive industry.

Figure 2 Engineers can select application and topology in Elite Simulator. Source: onsemi

Challenges in creating PLECS models

The traditional method of creating PLECS models in the industry relies on measurement-based loss tables aligned with manufacturer datasheets. However, this approach faces several key challenges:

  • Dependency on measurement setups: The switching energy loss data is influenced by the specific parasitics of the application layouts and circuits used, leading to variations and inaccuracies.
  • Limited data density: Conduction and switching energy loss data are often insufficiently dense, hindering accurate interpolation within PLECS and often necessitating extrapolation, which can compromise accuracy.
  • Nominal semiconductor conditions: Loss data typically represents nominal semiconductor process conditions, potentially overlooking variations and real-world scenarios.
  • Validity for hard switching only: Models derived from datasheet double-pulse-generated loss data are applicable only to hard switching topologies. They become highly inaccurate when applied to soft switching topologies or to synchronous rectification simulations.

These challenges associated with the conventional approach of depending on measurement-based loss tables for PLECS model generation are addressed by introducing the SSPMG. It optimizes models by considering specific passive elements’ impact on energy losses, providing denser and more detailed data for accurate simulations.

Figure 3 Dense loss table is one of the key SSPMG features. Source: onsemi

SSPMG includes semiconductor process variations for realistic models and creates adaptable models suited for soft switching topologies, ensuring reliability beyond hard switching scenarios. PLECS models designed with SSPMG can be seamlessly uploaded to the Elite Power Simulator or downloaded for use in stand-alone PLECS.

Figure 4 Soft switching simulation is another key SSPMG feature. Source: onsemi

Simulator capabilities

Central to the tool’s prowess is PLECS operating in the background. PLECS is a system-level simulator that makes it easier to model and simulate whole systems by using device models that are designed for speed and accuracy. It combines an easy-to-use web-based environment, simplifying things for engineers during the design process.

The significance of this tool extends beyond its simulation capabilities. It’s not merely a tool for simulating; it can also aid engineers in selecting suitable components for their applications. Engineers can seamlessly navigate through various product generations to understand performance-cost trade-offs and make informed decisions.

Moreover, PLECS is not a SPICE-based circuit simulator, where the focus is on low-level behavior of circuit components. The PLECS models, referred to as “thermal models”, are composed of lookup tables for conduction and switching losses, along with a thermal chain in the form of a Cauer or Foster equivalent network.
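The structure of such a thermal model can be sketched generically; this is an illustrative toy, not onsemi's or Plexim's implementation, and every table value below is invented:

```python
# Generic sketch of a "thermal model": switching energy comes from a lookup
# table (here 1-D in drain current, linearly interpolated) and junction
# temperature rise comes from a Foster-type RC chain driven by the average
# dissipated power. All numbers are invented for illustration.
import bisect

# Eoff lookup table: drain current (A) -> switch-off energy (uJ)
I_PTS = [5.0, 10.0, 20.0, 40.0]
E_PTS = [20.0, 45.0, 100.0, 230.0]

def eoff_uj(i_amps):
    """Linearly interpolate switch-off energy from the loss table."""
    if i_amps <= I_PTS[0]:
        return E_PTS[0]
    if i_amps >= I_PTS[-1]:
        return E_PTS[-1]
    j = bisect.bisect_right(I_PTS, i_amps)
    frac = (i_amps - I_PTS[j - 1]) / (I_PTS[j] - I_PTS[j - 1])
    return E_PTS[j - 1] + frac * (E_PTS[j] - E_PTS[j - 1])

def steady_state_rise(p_avg_w, foster_pairs):
    """Steady-state Tj rise of a Foster chain: sum of R_i times power."""
    return sum(r for r, tau in foster_pairs) * p_avg_w

foster = [(0.05, 1e-3), (0.20, 30e-3)]  # (K/W, s) pairs, illustrative
p_sw = eoff_uj(15.0) * 1e-6 * 50e3      # Eoff at 15 A, 50 kHz switching
print(f"P_sw ~ {p_sw:.3f} W, dTj ~ {steady_state_rise(p_sw, foster):.3f} K")
```

Real PLECS thermal models use multi-dimensional tables (current, voltage, temperature) and full transient Foster/Cauer dynamics, but the lookup-plus-thermal-chain split above is the essential idea the article describes.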

The simulator has an intuitive loss and thermal data plotting utility that enables engineers to visualize the loss behavior of their chosen switch. This multifunctional 3D visualization tool works with device conduction loss, switching energy loss, and thermal impedance.

Next, the simulator has a utility to design custom heat sink models, enabling users to accurately predict junction temperatures and optimize cooling solutions tailored to their specific needs.

The simulation stage within this tool is highly detailed, offering insights into various parameters such as losses, efficiency, and junction temperature in transient and steady state conditions. Furthermore, the tool has an easy mechanism to compare runs with different devices, circuit parameters, cooling designs, and loss models.

Figure 5 Loss plotting is another important feature offered by Elite Power Simulator. Source: onsemi

The simulator and SSPMG are adaptable to diverse semiconductor technologies. While initially focusing on SiC products, both tools will be expanding to other power devices. This versatility ensures that engineers can leverage the tools across various devices, tailoring simulations to their specific requirements.

Simulating beyond datasheet conditions

The utilization of simulation tools in virtual prototyping has substantially transformed design flows. Engineers and designers can now understand the performance of electronic circuits prior to mass production in their quest for first-time-right performance. Accuracy is critical when simulating intricate electronic circuits.

Simulating beyond datasheet conditions is key because it gives access to extensive data that would normally be difficult to acquire through physical testing. This facilitates the optimization and analysis of circuit performance virtually.

By employing precise simulations, one can prevent the underestimation or overestimation of system performances.

James Victory is a fellow for TD modeling and simulation solutions at onsemi’s Power Solutions Group.

Related Content


The post The profile of a power simulation tool for SiC devices appeared first on EDN.

Creating a very fast edge rate generator for testing (or taking the pulse of your scope)

Fri, 02/02/2024 - 19:33

I recently purchased a new oscilloscope for home use. It’s a 250 MHz scope, but I was curious what its actual -3 dB frequency was, as most scopes have a bit more upper-end margin than their published rating. The signal generators I have either don’t go up to those frequencies or produce questionable amplitudes there. That meant I didn’t have a way to input a sine wave and sweep it up in frequency until the amplitude dropped 3 dB to find the true bandwidth. So, I needed another way to find the bandwidth.

Wow the engineering world with your unique design: Design Ideas Submission Guide

You may have seen the technique of using a fast rise time pulse to measure the scope’s bandwidth (you can read how this relation works here). The essence is that you send a pulse, with a fast rising and/or falling edge, to the scope and measure the rise or fall time at the fastest sweep rate available. You can then calculate the scope’s bandwidth with Equation (1):

BW = 0.35 / Rise time                 (1)

(Note: there is much discussion about the use of 0.35 in this formula. Some claim it should be 0.45, or even 0.40. It really comes down to the implementation of the anti-aliasing filter ahead of the ADC in the scope. If it is a simple single-pole filter, the number should be 0.35. Newer, higher-priced scopes may use a sharper filter and claim the number is 0.45. As my new scope is not one of the expensive laboratory-level scopes, I am assuming a single-pole filter, implying 0.35 as the correct number to use.)
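If you want to script this relationship, a quick Python sketch of Equation (1) looks like the following (the function name and the 300 MHz example are mine, for illustration):

```python
# Bandwidth from rise time: BW = k / t_rise, with k = 0.35 for a scope with a
# simple single-pole front end and up to ~0.45 for sharper anti-aliasing filters.
def bandwidth_hz(rise_time_s, k=0.35):
    """Estimate scope bandwidth in Hz from a measured rise time in seconds."""
    return k / rise_time_s

# Going the other way: a 300 MHz single-pole scope can show, at best,
# an edge of about 1.17 ns.
min_rise = 0.35 / 300e6
```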

OK, now I needed to find a fast-edged square-wave pulse generator. If we assume my scope has a bandwidth of 300 MHz, then it’s capable of showing a rise time of around:

Rise time = 0.35 / 300 MHz ≈ 1.17 ns                 (2)

The rise time actually seen on the scope will be slower than its maximum because the viewed rise time is a combination of the scope’s maximum rise time and the pulse generator’s rise time. In fact, the relationship is based on a “root sum squared” formula shown in Equation (3):

Rv = √(Rp² + Rm²)                 (3)

Where:

  • Rv is the rise time as viewed on the scope
  • Rp is the rise time of the pulse generator
  • Rm is the scope minimum, or shortest, rise time as limited by its bandwidth

If Rp is much less than Rm, then we may be able to ignore it as it would add very little to Rv. For example, the gold standard for this type of test is the Leo Bodnar Electronics 40 ps pulse generator. If we used this, the formula would show the expected rise time on the scope to be:

Rv = √(1.17² + 0.04²) ns ≈ 1.17 ns                 (4)

As you can see, in this case the pulse generator rise time contributes a negligible amount to the rise time viewed on the scope.

As nice as the Bodnar generator is, I didn’t want to spend that much on a device I would only use a few times. What I needed was a simple pulse generator with a reasonably fast edge—something in the 500-ps-or-better range.

I checked the frequency generators available to me, but the fastest rise time was around 3 ns, which would be much too large, so I decided to build a pulse generator. There are a few fast pulse generator designs floating around, some using discrete components and some using Schmitt-trigger ICs, but these didn’t quite fit what I wanted. What I ended up designing is based on an Analog Devices LTC6905-80 IC. The spec sheet states it can output pulses with a rise time of 500 ps—more on that later. But is 500 ps fast enough? Let’s explore this. What happens if we use a pulse with a rise time in the 500 ps range? Then:

Rv = √(1.17² + 0.5²) ns ≈ 1.27 ns                 (5)

Even if the final design could attain a 500 ps rise time, this would be too large to ignore as it could give an error in the 10% range. But if we assumed a value for Rp (or better yet pre-measured it) we could remove it after the fact.

As discussed earlier, the rise time that will be seen on the scope is given by Equation (3). Manipulating this, we can see that the scope’s maximum rise time is:

Rm = √(Rv² – Rp²)
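Putting Equations (1) and (3) together, the de-embedding step can be scripted as a small Python sketch (the function names are mine; the 1.34 ns and 395 ps values come from the measurements later in the article):

```python
import math

# The viewed edge is the RSS of the generator edge and the scope's own edge,
# so the scope's limit is recovered as Rm = sqrt(Rv^2 - Rp^2).
def scope_rise_time(viewed_s, generator_s):
    return math.sqrt(viewed_s**2 - generator_s**2)

def scope_bandwidth(viewed_s, generator_s, k=0.35):
    """De-embed the generator edge, then apply BW = k / Rm."""
    return k / scope_rise_time(viewed_s, generator_s)

rm = scope_rise_time(1.34e-9, 0.395e-9)   # ~1.28 ns
bw = scope_bandwidth(1.34e-9, 0.395e-9)   # ~273 MHz
```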

So, if we can establish the generator’s rise time, we can subtract it out. In this case, “establishing” could be a close-enough educated guess, an LTspice simulation, or a measurement on some other equipment. The educated guess: based on the LTC6905 data sheet, I should be able to get a ~500 ps rise time in a design. The LTspice path didn’t work out, as I couldn’t get a reasonable number out of the simulation—probably operator error. I got lucky, though, and got brief access to a very high-end scope; I’ll share those results later in the article. But first, let’s look at the design, starting with the schematic shown in Figure 1.

Figure 1 The Tiny Pulser schematic: an LTC6905 IC to generate a square wave, a capacitor, a resistor, and a BNC connector.

The first thing you may notice is that it is very simple: an IC, a capacitor, a resistor, and a BNC connector. The LTC6905 generates square waves of a fixed frequency and a fixed 50% duty cycle. The version of the IC that I used produces an 80, 40, or 20 MHz output depending on the state of pin 4 (DIV). In this design, the pin is grounded, which selects a 20 MHz output. The 33 Ω resistor is in series with the 17 Ω internal impedance, producing 50 Ω to match the BNC connector impedance. Matching the impedance reduces any overshoot or ringing in the output. (Using the Tiny Pulser on a 50 Ω scope setting results in a 50 mA peak, or ~25 mA average, output current. That seemed like it might be high for the IC, but the LTC6905 spec states the output can be shorted indefinitely. I also checked the temperature of the IC with a thermal camera, and the rise was minimal.)
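The source-termination arithmetic above is easy to double-check (resistor values as given in the text; the 5 V swing is an assumption based on the USB supply):

```python
# Series back-termination and output-current check for the Tiny Pulser.
r_internal = 17.0                    # ohms, LTC6905 output impedance (per text)
r_series = 33.0                      # ohms, added series resistor
r_source = r_internal + r_series     # 50 ohms, matching the BNC/cable
i_peak = 5.0 / (r_source + 50.0)     # ~50 mA into a 50-ohm scope termination
i_avg = i_peak * 0.5                 # 50% duty cycle -> ~25 mA average
```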

I also tried some designs using various resistor values, and some with a series combination of resistors and capacitors between pin 5 and the BNC. The idea was to reduce the capacitance seen by the IC output: the oscilloscope has an input capacitance of around 15 pF (in parallel with 1 MΩ), and adding a capacitor in series could reduce that, as seen by the IC. These versions were indeed faster, but with significant overshoot.

So, Figure 1 is the design I followed through on. The only thing to add to this is a BNC connector, an enclosure (with 4 screws), and a USB cable to power the unit. This simple design, and the fact that the IC is a tiny SOT-23 package, allows for a very small design as seen in Figure 2.

Figure 2 The Tiny Pulser prototype, based on the schematic in Figure 1, with a 3D-printable enclosure roughly the size of a sugar cube.

The 3D printable enclosure is roughly the size of a sugar cube, so I named the device the “Tiny Pulser”. Figure 3 shows the PCB in the enclosure while Figure 4 displays the PCB assembly.

Figure 3 The Tiny Pulser PCB in its enclosure, showing the BNC, IC, and passives from Figure 1.

Figure 4 Tiny Pulser 6-pin SOT-23 PCB assembly with only a few components and jumper wires to solder to the PCB itself.

The PCB is a 6 pin SOT-23 adapter available from various sources (a full BOM is included in the download link provided at the end of the article). As you can see in Figure 4, there are only a few things to solder to the PCB including a jumper. Three wires are attached including the +5 V and ground from the USB cable. The other ground wire needs to be soldered to the BNC body. To do this, I had to break out the old Radio Shack 100 W soldering gun to get enough heat on the BNC base by the solder cup. Scratching up the surface also helped. The PCB is then attached to the BNC by soldering the output pad of the PCB (backside) to the BNC solder cup. (More pictures of this are included in the download.)

So how does it perform? The best performance is obtained when using a 50 Ω scope input and measuring the fall time, which was a bit faster than the rise time. In Figure 5 we see the generated 20 MHz pulse train, while Figure 6 is a screenshot showing a fall time of 1.34 ns.

Figure 5 The Tiny Pulser’s generated 20 MHz pulse train, captured using a 50 Ω scope input.

Figure 6 Fall time measurement (1.34 ns) of the Tiny Pulser circuit made on a 50 Ω scope input.

You can see the pulse train is pretty clean, with a bit of overshoot. Note that the 1.34 ns fall time is a combination of the scope’s fall time and the Tiny Pulser’s fall time. Now we need to figure out the actual fall time of the Tiny Pulser.

As I said, I got a chance to use a high-end scope (2.5 GHz, 20 GS/s) to measure the rise and fall times. Figure 7 shows the results (pardon the poor picture):

Figure 7 Picture of the high-end oscilloscope (2.5 GHz, 20 GS/s) display measuring the rise and fall times of the Tiny Pulser.

You can see that the Tiny Pulser delivers a very clean pulse with a rise time of 510 ps and a fall time of 395 ps. We now have all the information we need to make our bandwidth calculations. (The formulas we have developed are as applicable to fall time as they are to rise time, so we will not change the variable names.) Using the scope’s measured fall time and the 395 ps Tiny Pulser fall time, we calculate the bandwidth of the scope, first by calculating the scope’s maximum fall time [Equation (6)]:

Rm = √(1.34² – 0.395²) ns ≈ 1.28 ns                 (6)

And now use this to calculate the bandwidth [Equation (1)]:

BW = 0.35 / 1.28 ns ≈ 273 MHz

A gut check tells me this is a reasonable number for an oscilloscope sold as a 250 MHz model.

I tested another scope I have that is rated at 200 MHz. It displayed a fall time of 1.51 ns, which works out to 240 MHz. This number agrees to within a few percent of other numbers I have found on the internet. It seems the Tiny Pulser works well for measuring scope bandwidth!

Another use for a fast pulse

A better-known use for a fast rise time is probably in a time-domain reflectometer (TDR). A TDR is used to measure the length, distance to faults, or distance to an impedance change in a cable. To do this with the Tiny Pulser, add a BNC tee adapter to your scope and connect the cable (coax, twisted pair, zip cord, etc.), to be tested, to one side of the tee adapter (use a BNC to banana jack adapter if needed). Do not short the end of the wire. Next, connect the Tiny Pulser to the other side of the tee adapter as seen in the setup in Figure 8.

Figure 8 A TDR set up using the Tiny Pulser with a BNC tee adapter to connect the circuit as required (e.g., via coax, twisted pair, etc.).

Now power up the Tiny Pulser and adjust the sweep rate to around 10 ns/div so you see something like the upper part of the screen in Figure 9. I find that the high-impedance setting on the scope works better than the 50 Ω setting for the wire I was testing; this may vary with the wire you are testing. You can see that the square wave is distorted, which is due to the signal reflecting from the end of the wire. If your scope has a math function to display the derivative (or differential) of the trace, you will be able to see what’s happening more clearly. This can be seen in the lower trace in Figure 9, where I connected a 53-inch piece of 24 AWG solid twisted pair.

Figure 9 Using the high impedance setting on the scope to perform a TDR test on a 53” piece of 24 AWG wire. The math function displays the derivative of the trace to view results more clearly.

To find the timing of the reflection, measure from the start of the pulse rising (or falling) to the distorted part of the pulse where it is rising (or falling) again. Or, if using the math differential function, measure the time from the tall bump to the smaller bump—I find this much easier to see.

In Figure 9 the falling edge of the pulse is marked by cursor AX and the reflected pulse is marked with the cursor BX. On the right side we can see the time between these pulses is 13.2 ns.

The length of the cable or distance to an impedance change can now be calculated but we first need the speed of the wavefront in the wire. For that we need the velocity factor (VF) for the cable that is being tested. This is multiplied by the speed of light to obtain the speed of the wavefront. The velocity factor for some cables may be found here.

In the case of Figure 9, the velocity factor is 0.707. Multiplying this by the speed of light in inches per nanosecond (11.8 in/ns) gives us 8.34 inches/ns. So, multiplying 13.2 ns by 8.34 inches/ns yields 110 inches. But this is the round trip, up and down the wire, so we divide by 2, giving us 55 inches. There are a few inches of connector as well, so the answer is very close to the 53 inches of wire.
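The same arithmetic, wrapped as a small Python helper for other cables (the function name is mine; 11.8 in/ns is the free-space speed of light in inches per nanosecond):

```python
C_INCH_PER_NS = 11.8   # speed of light, ~11.8 inches per nanosecond

def tdr_length_inches(round_trip_ns, velocity_factor):
    """One-way cable length: half the round-trip time times VF * c."""
    speed = velocity_factor * C_INCH_PER_NS   # propagation speed in the cable
    return round_trip_ns * speed / 2.0

# The article's example: 13.2 ns round trip, VF = 0.707 -> about 55 inches.
length = tdr_length_inches(13.2, 0.707)
```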

Note that, because we have a pulse rate of 20 MHz, we are limited to identifying reflections up to about 22 ns, after which the reflections blend with the next generated edge. That is about 90 inches of cable.

One last trick

An interesting use of the TDR setup is to discover a cable’s impedance. Do this by adding a potentiometer across the end of the cable and adjusting the pot until the TDR reflections disappear and the square wave looks relatively restored. Then measure the pot’s resistance; this is the characteristic impedance of your cable.

More info

A link to the download for the 3D printable enclosure, BOM, and various notes and pictures to explain the assembly, can be found at: https://www.thingiverse.com/thing:6398615.

I hope you find this useful in your lab/shop and if you have other uses for the Tiny Pulser, please share them in a comment below.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content


The post Creating a very fast edge rate generator for testing (or taking the pulse of your scope) appeared first on EDN.

Chiplets diary: Three anecdotes recount design progress

Fri, 02/02/2024 - 14:45

The chiplet design movement representing multi-billion-dollar market potential is marching ahead with key building blocks falling in place while being taped out at advanced process nodes like TSMC’s 3 nm. These multi-die packaging devices can now mix and match pre-built or customized compute, memory, and I/O ingredients in different process nodes, paving the way for system-in-packages (SiPs) to become the system motherboard of the future.

Chiplets also promise considerable cost reduction and improved yields compared to traditional system-on-chip (SoC) designs. Transparency Market Research forecasts the chiplet market to reach more than $47 billion by 2031, becoming one of the fastest-growing segments of the semiconductor industry at more than 40% CAGR from 2021 to 2031.

Below are a few anecdotes demonstrating how chiplet-enabled silicon platforms are making strides in areas such as packaging, memory bandwidth, and application-optimized IP subsystems.

  1. Chiplets in standard packaging

While chiplet designs are generally associated with advanced packaging technologies, a new PHY solution claims to have used standard packaging to create a multi-die platform. Eliyan’s NuLink PHY facilitates a bandwidth of 64 Gbps/bump on a 3-nm process while utilizing standard organic/laminate packaging with 8-2-8 stack-up.

An efficient combination of compute density and memory bandwidth in a practical package construction will substantially improve performance-per-dollar and performance-per-watt. Moreover, chiplet-based systems in standard organic packages enable the creation of larger SiP solutions, leading to higher performance per power at considerably lower cost and system-level power.

Figure 1 Chiplets in standard packages could encourage their use in inference and gaming applications. Source: Eliyan

Eliyan has announced the tape-out of this die-to-die connectivity PHY at a 3-nm node, and the first silicon is expected in the third quarter of 2024. The tape-out includes a die-to-die PHY coupled with an adaptor layer/link layer controller IP to facilitate a complete solution.

  2. Sub-$1 chiplets

Chiplets have mostly been synonymous with high-performance computing (HPC) applications, where these multi-die devices cost tens to hundreds of dollars. YorChip has joined hands with Siloxit to develop a data acquisition chiplet with a sub-$1 price target in volume.

The two companies will leverage low-cost die-to-die links, physically unclonable function (PUF) security technology, and delta-sigma analog-to-digital converter (ADC) IP to create a cost-optimized chiplet. The design aims for a low-cost die-to-die footprint that achieves 75% size savings over competing solutions.

  3. High bandwidth memory (HBM) chiplets

Memory bandwidth is a major consideration alongside compute density and high-speed I/Os in chiplet designs. That makes high bandwidth memory 3 (HBM3) PHY a key ingredient in chiplets for applications such as generative AI and cloud computing. This is especially the case in HPC systems where memory bandwidth per watt is a key performance indicator.

Figure 2 The HBM3 memory subsystem supports data rates up to 8.4 Gbps per data pin and features 16 independent channels, each containing 64 bits for a total data width of 1,024 bits. Source: Alphawave Semi

Alphawave Semi has made available an HBM3 PHY IP that targets high-performance memory interfaces up to 8.6 Gbps and 16 channels. This HBM subsystem integrates the HBM PHY with a JEDEC-compliant, highly configurable HBM controller. It has been taped out at TSMC’s 3-nm node and is tailored for hyperscaler and data infrastructure designs.
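As a quick sanity check on the figure’s numbers, the peak interface bandwidth follows directly from the bus width and per-pin rate (a back-of-envelope calculation using the figure’s 8.4 Gbps rate, not a vendor spec):

```python
# HBM3 peak bandwidth from Figure 2's numbers: 16 channels x 64 bits,
# with each data pin running at 8.4 Gbps.
bits = 16 * 64                 # 1,024-bit total data width
gbps_total = bits * 8.4        # aggregate, ~8,602 Gbps
gbytes_per_s = gbps_total / 8  # ~1,075 GB/s peak
```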

Related Content


The post Chiplets diary: Three anecdotes recount design progress appeared first on EDN.

Scope probes reach bandwidths up to 52 GHz

Thu, 02/01/2024 - 19:46

InfiniiMax 4 oscilloscope probes from Keysight operate at bandwidths up to 52 GHz (brickwall response) and 40 GHz (Bessel-Thomson response). The company reports that the InfiniiMax 4 is the first high-impedance probe head operating at more than 50 GHz, making it well-suited for high-speed digital, semiconductor, and wafer applications.

InfiniiMax 4 offers DC input resistance of 100 kΩ differential and two input attenuation settings: high-precision 1-Vpp and high-voltage 2-Vpp maximum input range. The probes work with Infiniium UXR-B series oscilloscopes equipped with 1.85-mm and 1.0-mm input connectors. They are also compatible with the AutoProbe III interface.

The InfiniiMax 4 probes feature an RCRC architecture with a flexible PCA probe head that leverages the natural flexibility of the PCA to take the strain off the delicate tip wires. Their modular probe-head amplifier provides multiple access points, eliminating the need for custom evaluation boards or interposers.

Request a price quote for InfiniiMax 4 oscilloscope probes using the link to the product page below.

InfiniiMax 4 product page

Keysight Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Scope probes reach bandwidths up to 52 GHz appeared first on EDN.

Low-noise amplifier is radiation-tolerant

Thu, 02/01/2024 - 19:46

Teledyne’s TDLNA0430SEP low-noise amplifier targets space and military communication receivers and radar systems operating in the UHF to S-Band. The radiation-tolerant device offers a low noise figure, minimal power consumption, and small package footprint.

According to the manufacturer, the MMIC amplifier delivers a gain of 21.5 dB from 0.3 GHz to 3 GHz, while maintaining a noise figure of less than 0.35 dB and an output power (P1dB) of 18.5 dBm. The amplifier should be biased at +5.0 VDD and 60 mA IDDQ.

The TDLNA0430SEP low-noise amplifier is built on a 90-nm enhancement-mode pHEMT process. It comes in an 8-pin, 2×2×0.75-mm plastic DFN package and is qualified per Teledyne’s space enhanced plastic flow.

The amp is available now for immediate shipment from the company’s DoD Trusted Facility. An evaluation kit is also available.

TDLNA0430SEP product page

Teledyne e2v HiRel Electronics 



The post Low-noise amplifier is radiation-tolerant appeared first on EDN.

Flyback switcher ICs boost efficiency

Thu, 02/01/2024 - 19:46

InnoSwitch5-Pro programmable flyback switchers from Power Integrations employ zero-voltage switching and SR FET control to achieve >95% efficiency. A switching frequency of up to 140 kHz and a high level of integration combine to reduce the component volume and PCB area required by a typical USB PD adapter implementation.

The single-chip switchers incorporate a 750-V or 900-V PowiGaN primary switch, primary-side controller, FluxLink isolated feedback, and secondary controller with an I2C interface. They can be used in single and multiport USB PD adapters, including designs that require the USB PD Extended Power Range (EPR) protocol.

Devices accommodate a wide output voltage range of 3 V to 30 V. To maximize efficiency, the switchers support lossless input line voltage sensing on the secondary side for adaptive continuous conduction mode (CCM), discontinuous conduction mode (DCM), and zero-voltage switching (ZVS) control. A post-production tolerance offset enables constant-current accuracy of <2% to support the Universal Fast Charging Specification (UFCS) protocol.

Prices for the InnoSwitch5-Pro flyback switcher ICs start at $2.40 each in lots of 10,000 units.

InnoSwitch5-Pro product page

Power Integrations 



The post Flyback switcher ICs boost efficiency appeared first on EDN.
