Understanding currents in DC/DC buck converter input capacitors
All buck converters need capacitors on the input. Actually, in a perfect world, if the supply had zero output impedance and infinite current capacity and the tracks had zero resistance or inductance, you wouldn’t need input capacitors. But since this is infinitesimally unlikely, it’s best to assume that your buck converter will need input capacitors.
Input capacitors store the charge that supplies the current pulse when the high-side switch turns on; they are recharged by the input supply when the high-side switch is off (Figure 1).
Figure 1 The above diagram shows the simplified input capacitor current waveform during the buck DC/DC switching cycle, assuming infinite output inductance. Source: Texas Instruments
The switching action of the buck converter charges and discharges the input capacitor, causing the voltage across it to rise and fall. This voltage change represents the input voltage ripple of the converter at the switching frequency. The input capacitor filters the input current pulses to minimize the ripple on the input supply voltage.
The amount of capacitance governs the voltage ripple, and the capacitor must also be rated to withstand the root-mean-square (RMS) ripple current. The RMS current calculation assumes a single input capacitor with no equivalent series resistance (ESR) or equivalent series inductance (ESL). The finite output inductance gives rise to the current ripple seen on the input side, as shown in Figure 2.
Figure 2 Input capacitor ripple current and calculated RMS current are displayed by TI’s Power Stage Designer software. Source: Texas Instruments
Current sharing between parallel input capacitors
Most practical implementations use multiple input capacitors in parallel to provide the required capacitance: typically a small-value, high-frequency multilayer ceramic capacitor (MLCC) such as 100 nF, one or more larger MLCCs (10 µF or 22 µF), and sometimes a polarized large-value bulk capacitor (100 µF).
Each capacitor performs a similar yet distinct function: the high-frequency MLCC decouples fast transient currents caused by the MOSFET switching process in the DC/DC converter; the larger MLCCs source the current pulses to the converter at the switching frequency and its harmonics; and the bulk capacitor supplies the current needed to respond to output load transients when the impedance of the input source prevents it from responding as quickly.
Where used, a large bulk capacitor has a significant ESR, which provides some damping of the input filter’s Q factor. Depending on its equivalent impedance at the switching frequency relative to the ceramic capacitors, the capacitor may also have significant RMS current at the switching frequency.
The datasheet of a bulk capacitor specifies a maximum RMS current rating to prevent self-heating and ensure that its lifetime is not degraded. The MLCCs have a much smaller ESR and correspondingly much less self-heating because of the RMS current. Even so, circuit designers sometimes overlook the maximum RMS current specified in ceramic capacitor datasheets. Therefore, it is important to understand the RMS currents in each of the individual input capacitors.
If you are using multiple larger MLCCs, you can combine them into a single equivalent capacitance and enter that value into the current-sharing calculator to estimate the RMS currents in parallel input capacitors. The RMS current calculation considers the fundamental frequency only; nonetheless, this tool is a useful refinement of the single-input-capacitor RMS current calculation.
Consider an application where VIN = 9 V, VOUT = 3 V, IOUT = 12.4 A, fSW = 440 kHz and L = 1 µH. The three parallel input capacitors could then be 100 nF (MLCC), ESR = 30 mΩ, ESL = 0.5 nH; 10 µF (MLCC), ESR = 2 mΩ, ESL = 2 nH; and 100 µF (bulk), ESR = 25 mΩ, ESL = 5 nH. The ESL here includes the PCB track inductance.
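Before looking at the tool’s output, it’s worth sanity-checking the total RMS input-capacitor current for this operating point. The following C snippet is a minimal sketch of the idealized single-capacitor calculation (no ESR or ESL), using the duty-cycle and inductor-ripple relationships; it is not TI’s tool, just the textbook arithmetic:

```c
/* Idealized buck input-capacitor RMS current (single capacitor,
 * no ESR/ESL), for the example operating point above. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double vin = 9.0, vout = 3.0, iout = 12.4;  /* V, V, A */
    double fsw = 440e3, l_out = 1e-6;           /* Hz, H   */

    double d  = vout / vin;                         /* duty cycle, ~0.33      */
    double di = (vin - vout) * d / (l_out * fsw);   /* inductor ripple, A p-p */

    /* RMS of the pulsed input current minus its DC average */
    double irms = sqrt(iout * iout * d * (1.0 - d) + d * di * di / 12.0);

    printf("D = %.3f, dIL = %.2f A, Icin(RMS) = %.2f A\n", d, di, irms);
    return 0;  /* prints ~5.9 A, consistent with the 6 A RMS of Figure 2 */
}
```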
Figure 3 shows the capacitor current-sharing calculator results for this example. The 100-nF capacitor draws a low RMS current of 40 mA as expected. The larger MLCC and bulk capacitors divide their RMS currents more evenly at 4.77 A and 5.42 A, respectively.
Figure 3 Output is shown from TI’s Power Stage Designer capacitor current-sharing calculator. Source: Texas Instruments
In reality, the actual capacitance of the 10-µF MLCC is somewhat lower because of the voltage applied. For example, a 10-µF, 25-V X7R MLCC in an 0805 package might only provide 30% of its rated capacitance when biased at 12 V, in which case the large bulk capacitor’s current is 6.38 A, which may exceed its RMS rating.
The solution is to use a larger capacitor package size and parallel multiple capacitors. For example, a 10-µF, 25-V X7R MLCC in a 1210 package retains 80% of its rated capacitance when biased at 12 V. Three of these capacitors have a total effective value of 24 µF when used for C2 in the capacitor current-sharing calculator.
Using these capacitors in parallel reduces the RMS current in the large bulk capacitor to 3.07 A, which is more manageable. Placing the three 10-µF MLCCs in parallel also reduces the overall ESR and ESL of the C2 branch by a factor of three.
The low capacitance of the 100-nF MLCC and its relatively high ESR mean that this capacitor plays little part in sourcing the current at the switching frequency and its lower-order harmonics. The function of this capacitor is to decouple nanosecond current transients seen at the switching instants of the DC/DC converter’s MOSFETs. Designers often refer to it as the high-frequency capacitor.
In order to be effective, it’s essential to place the high-frequency capacitor as close as possible to the input voltage and ground terminals of the regulator using the shortest (lowest inductance) PCB routing possible. Otherwise, the parasitic inductance of the tracks will prevent this high-frequency capacitor from decoupling the high-frequency harmonics of the switching frequency.
It’s also important to use as small a package as possible to minimize the ESL of the capacitor. Judged against its ESR and impedance-versus-frequency curves, a high-frequency capacitor with a value of <100 nF can be beneficial for decoupling at a specific frequency; a smaller capacitor will have a higher self-resonant frequency.
Similarly, always place the larger MLCCs as close as possible to the converter to minimize their parasitic track inductance and maximize their effectiveness at the switching frequency and its harmonics.
Figure 3 also shows that, although the RMS current in the combined input capacitance (were it a single equivalent capacitor) is 6 A, the sum of the RMS currents in the C1, C2 and C3 branches exceeds 6 A and does not follow Kirchhoff’s current law. The law applies only to instantaneous values, or to the complex (phasor) addition of the time-varying, phase-shifted currents.
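A quick numerical illustration of this point: the sketch below sums two phase-shifted sinusoidal branch currents sample by sample and compares the RMS of the sum with the sum of the individual RMS values. The amplitudes and phase shift are hypothetical, chosen only for illustration:

```c
/* RMS values don't obey KCL: only instantaneous (phase-aware) sums do.
 * Two hypothetical phase-shifted branch currents for illustration. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

int main(void)
{
    const int n = 10000;
    double sq_a = 0.0, sq_b = 0.0, sq_sum = 0.0;

    for (int i = 0; i < n; i++) {
        double t  = 2.0 * PI * i / n;
        double ia = 4.0 * sin(t);        /* branch 1 current */
        double ib = 5.0 * sin(t + 1.0);  /* branch 2, 1-rad phase shift */
        sq_a   += ia * ia;
        sq_b   += ib * ib;
        sq_sum += (ia + ib) * (ia + ib); /* instantaneous sum, per KCL */
    }

    printf("RMS(a)+RMS(b) = %.2f A, RMS(a+b) = %.2f A\n",
           sqrt(sq_a / n) + sqrt(sq_b / n), sqrt(sq_sum / n));
    return 0;  /* ~6.36 A vs ~5.60 A: the scalar RMS values don't add */
}
```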
Using PSpice for TI or TINA-TI software
Designers who need more than three input capacitor branches for their applications can use PSpice for TI simulation software or TINA-TI software. These tools enable more complex RMS current calculations, including harmonics alongside the fundamental switching frequency and the use of a more sophisticated model for the capacitor, which captures the frequency-dependent nature of the ESR.
TINA-TI software can compute the RMS current in each capacitor branch in the following way: run the simulation, click the desired current waveform to select it, and from the Process menu option in the waveform window, select Averages. TINA-TI software uses a numerical integration over the start and end display times of the simulation to calculate the RMS current.
Figure 4 shows the simulation view. For clarity in this example, we omitted the 100-nF capacitor because its current is very low and contributes to ringing at the switching edges. The Power Stage Designer software analysis of the total input capacitor current waveform for the converter calculates the input current (IIN), which is 6 ARMS, the same value as for Figure 2.
Figure 4 Output from TINA-TI software shows the capacitor branch current waveforms and calculated RMS current in C2. Source: Texas Instruments
The capacitor current waveforms in each branch are quite different compared to the idealized trapezoidal waveform that ignores their ESR and ESL. This difference has implications for DC/DC converters such as the TI LM60440, which has two parallel voltage input (VIN) and ground (GND) pins.
The mirror-image pin configuration enables designers to connect two identical parallel input loops, meaning that they can place double input capacitance (both high frequency and bulk) in parallel close to the two pairs of power input (PVIN) and power ground (PGND) pins. The two parallel current loops also halve the effective parasitic inductance.
In addition, the two mirrored-input current loops have equal and opposite magnetic fields, allowing some H-field cancellation that further reduces the parasitic inductance (Figure 5). Figure 4 suggests that if you don’t carefully match the parallel loops in capacitor values, ESR, ESL and layout for equal parasitic impedances, then the current in the parallel capacitor paths can differ significantly.
Figure 5 Parallel input and output loops are shown in a symmetrical “butterfly” layout. Source: Texas Instruments
Software tool use considerations
To correctly specify input capacitors for buck DC/DC converters, you must know the RMS currents in the capacitors. You can estimate the currents from equations, or more simply by using software tools like TI’s Power Stage Designer. You can also use this tool to estimate the currents in up to three parallel input capacitor branches, as commonly used in practical converter designs.
More complex simulation packages such as TINA-TI software or PSpice for TI can compute the currents, including both the fundamental frequency and its harmonics. These tools can also model frequency-dependent parasitic impedance and many more parallel branches, illustrating the importance of matching the input capacitor combinations in mirrored input butterfly layouts.
Dr. Dan Tooth is a Member of Group Technical Staff at Texas Instruments. He joined TI in 2007 and has been a field application engineer for over 17 years. He is responsible for supporting TI’s analog and power product portfolio in ADAS, EV and diverse industrial applications.
Dr. Jim Perkins is a Senior Member of Technical Staff at Texas Instruments. He joined TI in 2011 as part of the acquisition of National Semiconductor and has been a field application engineer for over 25 years. He is now mainly responsible for supporting TI’s analog and power product portfolio in grid infrastructure applications such as EV charging and smart metering.
Related Content
- Step-Down DC/DC Converter
- DC/DC Converter Considerations for Smart Lighting Designs
- Choosing The Right Switching Frequency For Buck Converter
- Use DC/DC buck converter features to optimize EMI in automotive designs
- Reducing Noise in DC/DC Converters with Integrated Ferrite-bead Compensation
The Google Chromecast Gen 2 (2015): A form factor redesign with beefier Wi-Fi, too
In mid-2023, Google subtly signaled that its first-generation Chromecast A/V streaming receiver, originally introduced in 2013, had reached the end of the support road. I’d already torn one down, but I had several others still in use, which I promptly replaced with 3rd-generation (2018) successors off eBay. And while I was at it, I picked up an additional “rough”-condition one, plus intermediary 2nd-generation (2015) and Ultra (2016) well-used devices, for teardown purposes.
One year (and a couple of months) later, and a couple of months ago as I write these words in late October 2024, Google end-of-life’d the entire Chromecast product line, also encompassing the 4K (introduced in 2020) and HD (2022) variants of the Chromecast with Google TV, which I’d already torn down too, replacing everything with its newly unveiled TV Streamer:
So, I guess you can say I’m now backfilling from a disassembly-and-analysis standpoint. Today you’ll see the insides of the 2nd generation (2015) Chromecast:
with the Ultra (2016), notably kitted with the Stadia online-streamed gaming controller:
and 3rd generation (2018) to follow in the coming months.
Truth be told, I’ve also got a couple of Chromecast Audio streamers on hand, but as they’re so rare and prized by audiophiles (and wannabes like me), I’m loath to (destructively, at least) take one apart. Time will tell if I change my mind and/or get more disassembly-skilled in the future…
Anyhoo, let’s get to tearing down, beginning with the device I eBay-purchased last summer, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
As you probably already noticed from the “stock” shots I’ve previously shared, the 2nd generation Chromecast marked a fairly radical physical design departure from its forebear. I’ll begin with something that might seem to be a “nit” at first glance but was actually a big deal to many users. That USB-A to micro-USB cable you see on the left was only 1’ long with the first-gen Chromecast; now it’s 5’ long. Much more convenient, especially if you’re getting power from an outlet-installed “wall wart” versus a TV back panel USB connector:
The device itself has evolved more visibly. The first-gen Chromecast looked a bit like a USB flash “stick”, cigar-shaped with a stubby HDMI connector jutting out of one end. Google bundled a short female-to-male extender cable with it, which frequently got lost. Now, the extender cable is integrated, and the device itself is circular in shape. This transition has multiple benefits: two obvious, and another a conjecture on my part. The extension cable simplifies hookup to a TV’s crowded-connector backside (and, as I’ve already mentioned, won’t be inadvertently discarded). Also, as you’ll soon see, the 2nd-generation round Chromecast includes multiple Wi-Fi antennae, arranged around the partial circumference of the also-circular PCB. And here’s the conjecture part: the 1st-generation Chromecast was plagued by overheating issues, which I’m guessing the redesign assists in mitigating.
I’m calling this the “front”, although I use this term, along with “back” and “sides”, loosely; as I’ve previously mentioned with other devices of this type, orientation depends on HDMI plug and cable orientation, and is therefore inconsistent from one TV and broader setup to another. Mine’s black (duh); it also came in “Coral” (red) and “Lime” (also referred to in some places as “Lemonade”, yellow) shades:
At the bottom is the micro-USB power input jack, along with a reset switch to its left and a multi-color status LED to its right:
When not in use, the HDMI connector magnetically attaches to the back of the circular main body…for unclear-to-me reasons (ease of portability?). I apparently wasn’t alone, because Google dropped this particular “feature” for the third-generation successor:
Here the HDMI cable is extended; the magnet is that shiny rectangle with rounded corners (which I just learned today is called a stadium, presumably referencing the shape of an athletic entertainment facility) toward the top:
Here’s what the HDMI cable end looks like:
And once more back to the back (see what I did there?) of the device for a closeup of the various markings, including the FCC ID, A4RNC2-6A5 (which has an interesting historical twist I’ll revisit shortly):
Time to dive inside. From my advance research, I already knew that the glue holding the two halves of the body together was particularly stubborn stuff. This gave me an opportunity to try out a new piece of gear I’d recently acquired, iFixit’s iOpener kit, consisting of a long, narrow insulated heat-retaining bag which you put in the microwave oven for 30 seconds before using:
plus other handy disassembly accessories (the iOpener is also optionally sold standalone):
Strictly speaking, the iOpener is intended for removing the screen from a tablet or the like:
but I managed to get it to work (with a “bit” of overlap) with the Chromecast, too:
After that, assisted by a couple of the Opening Picks also included in the kit:
I was inside, with minimal cosmetic damage to the case (although I still harbored no delusions that my remaining disassembly steps would be non-destructive):
Here’s the inside of the top half of the case:
And here’s our first glimpse of the PCB topside, complete with a sizeable Faraday Cage:
Did you notice those three screws holding the PCB in place? You know what comes next:
Ladies and gentlemen, we have liftoff:
This is still the PCB topside, but alongside it (to the left) is the first-time-revealed inside of the top of the case, complete with an LED light pipe assembly, a dollop of thermal paste, and a round gray heatsink that does double duty as the attractant for the HDMI cable connector magnet. Also note the reset switch at the lower left edge:
Flipping the insides over reveals the PCB underside for the first time; here, the LED is clearly visible. And there’s another Faraday cage, to which the dollop of thermal paste connects:
Let’s return to the PCB topside, specifically to its Faraday cage, for removal first:
In past teardowns, to get it off, I’ve relied either on fairly flimsy-tip devices like the iSesamo:
Or just brute-forced it with a flat-head screwdriver, which inevitably resulted in both a mangled cage and PCB. This time, however, I pressed into service another new tool in my arsenal, iFixit’s Jimmy, which, in the words of Goldilocks, was “just right”:
As you may have already inferred, two of the three earlier screws did double-duty, not only holding the PCB in place within the lower half of the case but also keeping the PCB-connector end of the HDMI cable intact. After removing them and then the Cage, the HDMI cable was free:
I’m sure that in the earlier shots you already noticed a second dollop of thermal paste between the large IC in the lower left quadrant and the Faraday Cage:
A bit of rubbing alcohol cleaned it off sufficiently for me to ID it and the other components on the board:
The previously paste-encrusted IC in the lower left quadrant is Marvell’s Armada 88DE3006 1500 Mini Plus dual-core ARM Cortex-A7 media processor, an uptick from the Marvell Armada DE3005-A1 1500-mini SoC in the first-generation Chromecast. To its right, barely visible under the Cage-mounting frame, is a Toshiba TC58NVG1S3HBAI6 2 Gbit NAND flash memory; curiously, its predecessor in the first-gen Chromecast, a Micron MT29F16G08, was 16 Gbit (8x larger) in capacity. In the lower right corner is a chip marked:
MRVL
21AA3
521GDT
which iFixit believes implements the system’s power management control capabilities. And in the lower left corner is another frame-obscured Marvell IC, marked as follows (you’ll have to trust me on this one):
MRVL
G868
524GBD
whose identity is unclear to me (and iFixit didn’t even bother taking a stab at), although it apparently was also in the first-gen Chromecast. Readers?
Flipping the board back over to its underside, and going through the same Faraday cage removal (this time also with preparatory thermal paste cleanup) process as before:
Reveals our third dollop of thermal paste, inside the second (underside) cage in the design:
Time for more rubbing alcohol-plus-tissues:
The dominant ICs this time are a Samsung K4B4G1646D-BY 4 Gbit DDR3L SDRAM to the right (this time around, the same capacity as in the first-gen Chromecast) and Marvell’s Avastar 88W8887 wireless controller (Wi-Fi, Bluetooth, NFC and FM, not all of these used). At this point, I’ll refer back to the “interesting historical twist” teaser from before. For one thing, the Avastar 88W8887’s precursor in the first-gen Chromecast was an AzureWave AW-NH387, a 2.4 GHz-only Wi-Fi (plus Bluetooth and FM receiver, the latter again unused) controller. This time, however, you get dual-band 1×1 802.11ac, reflective of the multi-element PCB-embedded antenna array you see around the PCB sides.
And what about Bluetooth? Here’s where things get really interesting. At its initial 2015 introduction, Bluetooth capabilities were innate in the silicon but not enabled in software. A couple of years later, however, Google went back to the FCC for recertification, not because any of the hardware had changed but just because a new firmware “push” had turned on Bluetooth support. Why? I don’t know for sure, but I have a theory.
Initially, Google relied on a wonky app called Device Utility that forced you to jump through a bunch of hoops in a poorly documented specific sequence, and with precise step-by-step timing, in order for initial activation to successfully complete:
Subsequent setup steps were done through the TV to which the Chromecast was connected over HDMI. Google subsequently switched to doing these latter setup steps through its Google Home app (initially launched in 2016 and substantially revamped in 2023), which presumably leverages Bluetooth, hence the subsystem software-enable and FCC recertification. But for legacy devices, initial activation still needed to occur over Device Utility.
And with that, closing in on 1,800 words, I’ll wrap up for today. Your thoughts are as-always welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Teardown: Chromecast streams must have gotten crossed
- Google’s Chromecast: Is “proprietary” necessary for wireless multimedia streaming success?
- Google’s Chromecast: impressively (and increasingly) up for the video streaming task
Profile of an MCU promising AI at the tiny edge
A common misconception about artificial intelligence (AI) relates this up-and-coming technology solely to data center and high-performance computing (HPC) applications. This is no longer true, says Tom Hackenberg, principal analyst for the Memory and Computing Group at Yole Group. He said this while commenting on STMicroelectronics’ new microcontroller that embeds a neural processing unit (NPU) to support AI workloads at the tiny edge.
ST has launched its most powerful MCU to date to cater to a new range of embedded AI applications. “The explosion of AI-enabled devices is accelerating the inference shift from the cloud to the tiny edge,” said Remi El-Ouazzane, president of Microcontrollers, Digital ICs and RF Products Group (MDRF) at STMicroelectronics.
He added that inferring at the edge brings substantial benefits, including ultra-low latency for real-time applications, reduced data transmission, and enhanced privacy and security. Not sharing data with the cloud also leads to sustainable energy use.
STM32N6, available to selected customers since October 2023, is now shipping in high volume. It integrates a proprietary NPU, the Neural-ART Accelerator, which can deliver 600 times more machine-learning performance than today’s high-end STM32 MCUs. That will enable the new MCU to run computer vision, audio processing, sound analysis and other algorithms that are currently beyond the capabilities of small embedded systems.
Figure 1 STM32N6 offers the benefits of an MPU-like experience in industrial and consumer applications while leveraging the advantages of an MCU. Source: STMicroelectronics
“Today’s IoT edge applications are hungry for the kind of analytics that AI can provide,” said Yole’s Hackenberg. “The STM32N6 is a great example of the new trend melding energy-efficient microcontroller workloads with the power of AI analytics to provide computer vision and mass sensor-driven performance capable of great savings in the total cost of ownership in modern equipment.”
Besides the AI accelerator, STM32N6 features an 800-MHz Arm Cortex-M55 core and 4.2 MB of RAM for real-time data processing and multitasking, which ensure sufficient compute for complementing AI acceleration. As a result, the MCU can run AI models to carry out tasks like segmentation, classification, and recognition. Moreover, an image signal processor (ISP) incorporated into the MCU provides direct signal processing, which allows engineers to use simple and affordable image sensors in their designs.
Design testimonials
Lenovo Research, which rigorously evaluated STM32N6 in its labs, acknowledges its neural processing performance and power efficiency claims. “It accelerates our research of ‘AI for All’ technologies at the edge,” said Seiichi Kawano, principal researcher at Lenovo Research. LG, currently incorporating AI features into smartphones, home appliances and televisions, has also recognized STM32N6’s AI performance for embedded systems.
Figure 2 Meta Bounds has employed the AI-enabled STM32N6 in its AR glasses. Source: STMicroelectronics
Then there is Meta Bounds, a Zhuhai, China-based developer of consumer-level augmented reality (AR) glasses. Its founding partner, Zhou Xing, acknowledges the vital role that STM32N6’s embedded AI accelerator, enhanced camera interfaces, and dedicated ISP played in the development of the company’s ultra-lightweight and compact form factor AI glasses.
Besides these design testimonials, what’s important to note is the transition from MPUs to MCUs for embedded inference. That eliminates the cost of cloud computing and related energy penalties, making AI a reality at the tiny edge.
Figure 3 The shift from MPU to MCU for AI applications saves cost and energy, and lowers the barrier to entry for developers to take advantage of AI-accelerated performance for real-time operating systems (RTOSes). Source: STMicroelectronics
Take the case of Autotrak, a trucking company in South Africa. According to its engineering director, Gavin Leask, fast and efficient AI inference within the vehicle can give the driver a real-time audible warning to prevent an upcoming incident.
In applications like these, AI-enabled MCUs can run computer vision, audio processing, sound analysis and more at a much lower cost and power usage than MPUs.
Related Content
- Getting a Grasp on AI at the Edge
- Implementing AI at the edge: How it works
- It’s All About Edge AI, But Where’s the Revenue?
- Edge AI accelerators are just sand without future-ready software
- Edge AI: The Future of Artificial Intelligence in embedded systems
Oscilloscope probing your satellite
When designing space electronics, particularly during the early prototyping stage or if qualification or flight hardware doesn’t function as intended, the humble oscilloscope is often used to verify the presence, timing and quality of key signals.
When debugging your space electronics using an oscilloscope, many different measurement types are now possible, e.g., voltages, currents, power rails, digital logic, EMC, optical, TDR, and high voltage. For each of these applications, the specification of the probe that makes contact with your device under test (DUT) determines the quality of your test, e.g., its frequency response and bandwidth, how it loads the DUT, whether it matches the input impedance of your scope, where and how you attach the signal and ground contact tips. Often the probe is overlooked or taken for granted. During many of my visits to customers, I have seen the wrong diagnosis due to poor probing techniques or incorrect decision-making because of the specification of the oscilloscope and/or probe. Ultimately, this has impacted the ability of clients to deliver hardware and sub-systems to cost and schedule.
The impact of scope probes on your test
An ideal probe would not load the DUT; it would transmit the signal under investigation from its tip to the oscilloscope with perfect fidelity, with zero attenuation, zero capacitance, zero inductance, infinite bandwidth, and linear-phase characteristics at all frequencies.
In reality, a probe is a circuit with its own electrical characteristics, and when it makes contact with your DUT, it suddenly becomes part of a larger system, its specification combining with that of the circuit of interest. To make a measurement, the probe must “steal” some energy from the DUT and transfer it to the scope’s inputs, ideally without loading the DUT, degrading the signal to be measured, or impairing the functionality of the DUT. Probes and oscilloscopes have an inherent capacitance, creating a low-pass filter that impacts higher frequencies, i.e., bandwidth, slowing rise times. Probes have an intrinsic resistance, forming a voltage divider which reduces signal amplitude. Leads attached to probes add unwanted inductance, resulting in overshoot and ringing on the display. Leads can also act as antennae, picking up electrical noise from the surrounding environment. None of these effects may actually be present in the signal you are trying to measure, so it’s not a surprise if tests are misleading and the diagnosis wrong!
Probing preferences
For general-purpose testing, the key is to use probes which have a high input resistance to minimize the current taken from the DUT, typically 1 to 10 MΩ, as well as low input capacitance to ensure high impedance at higher frequencies, usually 10 to 30 pF. A low impedance would adversely load the DUT, impacting the measured signal level.
As frequencies rise, to avoid reflections due to capacitive and inductive reactances, the source, load, and probe characteristic impedance should be matched, usually to 50 Ω. As a guide, an interconnect can be considered a transmission line, potentially susceptible to reflections, if its time delay is greater than one-third of the signal rise time; the corresponding physical length is known as the critical length.
Figure 1 compares the signal integrity measured from a 10-MHz clock using both 1 MΩ and 50 Ω input impedances. The former (light blue trace) contains reflections while the fidelity of the latter (green trace) is superior!
Figure 1 The difference in signal integrity due to oscilloscope input impedance. Source: Rajan Bedi
Although I still have several, it’s rare for me to use the much-abused, 50-Ω, BNC oscilloscope coax cable with crocodile clips to accurately test and measure the latest space electronics. However, occasionally I do: the question is, when can you use this $10 “probe” rather than the expensive $10k ones?
A general-purpose solution
Last week I had an FPGA that wasn’t communicating with its JTAG programmer, and I needed to check that the board was receiving the TCK, TMS, and TDI inputs and outputting TDO. All the expensive, sexy probes were being used; however, due to the low frequency of the JTAG signals, I knew the trusted 50-Ω BNC coax cable (Figure 2) could verify these signals with good integrity.
Figure 2 The ubiquitous 50-Ω BNC oscilloscope coax cable found in most labs. Source: RS PRO
High-frequency solutions
Knowing the maximum signal frequency
Today’s satellite and spacecraft electronics operate at higher frequencies, with digital signals having faster edges and lower voltages, close to larger switching currents and sensitive analog signals. Many small satellites squeeze all these functions onto one tiny PCB. To accurately measure signals, observe events, and make the correct decisions, the specifications of the probe and the oscilloscope become paramount; in particular, bandwidth.
From Fourier analysis, the bandwidth of a digital signal with a 50% duty cycle and a 10 to 90% rise time (trise) can be approximated by:
BW ≈ 0.35/trise
You might not know the edge rates of the digital signals you may have to measure in the future, but if you have an appreciation of the highest frequency of interest, e.g., a clock, an estimate of rise time and hence bandwidth can be calculated. For example, if one assumes the rise time comprises 7% of the total clock period T, the signal bandwidth can be estimated as 0.35/(0.07T) = 5/T = 5*Fclk, or up to the fifth harmonic!
Knowing the maximum signal frequency allows you select the appropriate oscilloscope and probe bandwidths for accurate measurements. To minimize the in-band effects of their respective 3-dB amplitude roll-offs, these should be 3 to 5 times higher than the largest harmonic contained within the signal of interest.
As an example, for a sine wave with a fundamental frequency of 700 MHz, the oscilloscope and probe bandwidths should each be between 2.1 and 3.5 GHz. Likewise for other analog signals such as modulated carriers, choose bandwidths at least three times larger than its highest frequency component.
For a digital signal with an edge rate of 0.5 ns, its resulting bandwidth can be approximated by 0.35/0.5 = 700 MHz. The measurement bandwidth of the scope and probe should be between 2.1 and 3.5 GHz to accurately capture the fidelity of the fifth harmonic. Similarly, for rise and fall times, if you want to accurately see these transitions, the edge rates of your probe and oscilloscope should be three to five times faster than the signal of interest. If you want to validate the specifications of your instrumentation, input a pulse that has rise/fall times three to five times faster!
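These rules of thumb reduce to a few lines of arithmetic. The minimal C sketch below reproduces the 500-ps example from the text, printing the estimated signal bandwidth and the recommended measurement-chain bandwidth range:

```c
/* Rule-of-thumb bandwidth picker: BW ~ 0.35/t_rise, with the scope
 * and probe each specified at 3 to 5 times the signal bandwidth. */
#include <stdio.h>

int main(void)
{
    double t_rise = 0.5e-9;     /* 500-ps edge, as in the example */
    double bw = 0.35 / t_rise;  /* ~700-MHz signal bandwidth */

    printf("Signal bandwidth: ~%.0f MHz\n", bw / 1e6);
    printf("Scope/probe bandwidth: %.1f to %.1f GHz\n",
           3.0 * bw / 1e9, 5.0 * bw / 1e9);
    return 0;  /* prints 700 MHz; 2.1 to 3.5 GHz */
}
```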
Impact of scope bandwidth
Figure 3 and Figure 4 illustrate the impact of oscilloscope bandwidth on measurement fidelity when verifying a 100 MHz clock with 500 ps edges: a 100 MHz scope only passes the fundamental frequency, while a 500 MHz one captures up to the fifth harmonic, preserving the intended waveform, but its own rise-time specification limits the measurement of the actual signal edge rate. A 1 GHz scope has 20% accuracy while a 2 GHz one offers 3%.
Figure 3 The impact of scope bandwidth on waveform fidelity and rise-time. Source: Keysight
A word of caution: there is such a thing as too much bandwidth, as measurements can pick up high-frequency noise as shown below, impacting the system’s effective number of bits (ENOB). The 20 MHz waveform on the left was captured using a bandwidth of 6 GHz, while the one on the right had 100 MHz. Use the lowest bandwidth possible while still having enough to accurately capture the frequencies contained within your signal of interest. If possible, limit measurement bandwidth using the oscilloscope’s built-in hardware or software filters, and/or the specification of the probe.
Figure 4 The impact of too much measurement bandwidth on waveform noise. Source: Keysight
Debugging a real problem
Going back to my problem of debugging the uncommunicative FPGA using a one-meter, 50-Ω BNC coax cable as a probe: how did I know this would be fine for verifying the JTAG signals? The delay the signal experiences as it travels down the one-meter cable is approximately 5 ns. For rise times longer than 3 * 5 ns = 15 ns (the critical length), the resulting bandwidth can be approximated by 0.35/15 ns = 23 MHz. The fundamental frequency of the JTAG signals is around 1 MHz (well below 23 MHz) with a sufficient number of odd harmonics (bandwidth) captured to display the waveforms with good integrity and sharp edges. I also knew the BNC cable has a bandwidth of at least 1 GHz and the oscilloscope 8 GHz. Don’t trash those $10 cables just yet!
Passive probes
Many different types of probes can be used with modern digital oscilloscopes, enabling a variety of measurement types. Firstly, should you choose a passive or an active probe? The former are often shipped with oscilloscopes, are lower cost, rugged, and good for general-purpose testing, often up to several hundred MHz. Internally, these only contain passive components that respond to the signal being measured.
Passive probes have an attenuation factor that impacts DUT loading and the measurement bandwidth: a 1x or 1:1 probe does not change the input amplitude offering better sensitivity for low-voltage signals, whereas a 10x reduces the input magnitude by a factor of ten (Figure 5). These are used to protect the oscilloscope’s maximum rated voltage and offer better SNR as any noise picked up by the probe is also attenuated, thus improving signal quality. The use of a 10:1 probe results in a higher internal resistance, typically 10 MΩ compared to 1 MΩ, which reduces circuit loading. The addition of capacitance in the tip cancels the scope’s input capacitance, increasing bandwidth and improving the measurement of higher frequencies and faster edges.
Check whether your oscilloscope auto-senses the probe attenuation or whether you have to manually switch between these. Passive probes also come with compensation to match the probe and oscilloscope input impedances. Without adjustment, the capacitive loading of the probe may filter out high-frequency components and distort the signal under investigation.
Figure 5 The typical schematic of a 10x passive probe. Source: Rohde and Schwarz
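To make the compensation requirement concrete: flat response requires the probe-tip RC product to equal the scope-side RC product (R1 × C1 = R2 × C2 in schematics like Figure 5). The sketch below works through the arithmetic using representative assumed values, not figures from any specific probe datasheet:

```c
/* 10x passive-probe compensation: flat response needs R1*C1 == R2*C2.
 * Component values are representative assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    double r1 = 9e6;      /* probe-tip resistance for 10:1 division */
    double r2 = 1e6;      /* scope input resistance */
    double c2 = 112e-12;  /* scope input + cable capacitance (assumed) */

    double atten = (r1 + r2) / r2;  /* DC attenuation: 10x */
    double c1 = r2 * c2 / r1;       /* trimmer setting for flat response */

    printf("Attenuation %.0fx, tip capacitance C1 = %.1f pF\n",
           atten, c1 * 1e12);
    return 0;  /* prints 10x, ~12.4 pF */
}
```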
At frequencies beyond 500 MHz, the output capacitance of most passive probes degrades the higher harmonics and edge rates. Furthermore, they can severely load the DUT as the oscilloscope’s input impedance is not significantly higher than the circuit’s output impedance.
Active probes
Single-ended
Active probes do not use signal power from the DUT; instead, they utilize wideband amplifiers to enable high-frequency measurements. Active probes have high input resistance and low capacitance, typically less than 1 pF, offering bandwidths up to 20 GHz. The probe in Figure 6 is a single-ended type, measuring a signal with respect to ground.
Figure 6 The typical schematic of an active probe. Source: Rohde and Schwarz
Differential
Differential probes will measure the potential difference between any two points and are suitable for verifying low-amplitude, high-frequency I/O such as LVDS, as used by many Earth-Observation imagers. Figure 7 shows a typical schematic for a differential probe, and Figure 8 shows an off-the-shelf active differential probe from Rohde and Schwarz (R&S). Differential probes offer high common-mode rejection over a broad range of frequencies and use an internal differential amplifier to convert the difference between two inputs into a voltage that can be sent to a typical single-ended scope input.
Figure 7 A typical schematic of a differential probe. Source: Rohde and Schwarz
Figure 8 An active differential probe. Source: Rohde and Schwarz
Power-rail probe
Power-rail probes allow you to measure AC ripple, high-frequency noise, and transients at high bandwidths on supply voltages with large DC offsets, and then analyze the spectrum of this interference. EMC probes enable electric and magnetic (E&H) near-field debugging of EMI issues, while current probes provide a non-invasive method for measuring current flow through a conductor. A DC probe uses a hall-effect sensor to detect the magnetic field generated by a current as it passes through the probe’s ferrite core, while an AC probe uses a transformer to measure current flow through its core.
Figure 9 shows a recent current measurement from an Earth-Observation payload to verify its in-rush behavior at power-up. Thank you to my friends Giovanni and Nick from R&S UK for helping me with this test.
Figure 9 Oscilloscope current probe measuring payload in-rush current at power-up. Source: Rajan Bedi
Parasitic effect of probe tips
One issue often overlooked is the parasitic effect of the tips probing the DUT, known as the “connection bandwidth”. How and where you probe is equally as important as the specification of your test equipment: long connections degrade the measurement bandwidth, slowing edges, as well as adding unwanted inductance, resulting in ringing and distortion when measuring high-frequency signals. These may not actually exist in the circuit under investigation! Parasitic components to the left of the point labelled VAtn below determine the quality of the actual measurement. Figure 10 is a useful image from Keysight highlighting the impact of probe-tip length on measurement bandwidth.
Figure 10 The impact of probe-tip length on measurement bandwidth. Source: Keysight
Top tips
Here are my top tips when considering oscilloscope probes to test your space electronics:
- Before choosing a probe, understand the characteristics of the signals to be measured—amplitude, frequency, bandwidth, and edge rates—and then specify the probe as described.
- Ensure the probe is compatible with your scope’s input impedance.
- Ensure the probe does not adversely load the DUT and compensate passive probes.
- For single-ended probing, do not confuse the signal and ground measurement points—I did this once and killed an FPGA!
- Ensure the probe has a better or comparable bandwidth to the oscilloscope.
- Use short leads/tips to maximize probe bandwidth and minimize parasitic components.
- Specify the required measurement bandwidth but avoid too much to minimize noise.
- Check common-mode rejection before testing.
- Consider its ergonomics/physical design, order a holding fixture if you run out of hands!
- Keep a few traditional BNC cables in the lab in case your colleagues won’t share.
To conclude, the humble oscilloscope is often used to verify the presence, timing and integrity of key signals during the early prototyping stage, or if qualification or flight hardware doesn’t function as intended. Many different tests are now possible using your scope, and choosing the correct probe is critical. The choice requires an understanding of how the probe’s specification reconciles and interacts with your DUT, its parasitic effects, and how and where the probe is used, as all of this will impact the quality of your measurements.
We could probe further, but I’m off to the lab. Until next month, the person who shares their best oscilloscope probing story will win a Spacechips’ Training World Tour tee-shirt.
Spacechips will be teaching its training course, Testing Satellite Payloads, next year and you can contact, events@spacechipsllc.com, for more information.
Dr. Rajan Bedi is the CEO and founder of Spacechips, which designs and builds a range of advanced, AI-enabled, L to K-band, ultra-high-throughput transponders, Edge-based on-board processors, SDRs and MMUs for telecommunication, Earth-Observation, navigation, internet, space-domain awareness, space-debris retrieval and M2M/IoT satellites. The company also offers Space-Electronics Design-Consultancy, Technical-Marketing, Business-Intelligence, Avionics Testing and Training Services. You can also connect with Rajan on LinkedIn.
Spacechips’ Design-Consultancy Services develop bespoke satellite and spacecraft hardware and software solutions, as well as advising customers how to use and select the right components, how to design, test, assemble and manufacture space electronics.
Related Content
- Spacecraft on-board computing using rad-hard ARM MCUs
- Testing times for design engineers
- How to choose passives for space-grade switching regulators
- Pesky parasitics
- Understanding and applying oscilloscope measurements
- Digital vs. analog triggering in oscilloscope: What’s the difference?
- Measuring pulsed RF signals with an oscilloscope
The Blink Mini: AC-powered indoor camera-based security
One portion—battery-operated outdoor-optimized devices—of Blink’s security camera product line (bought by Amazon in December 2017, followed shortly thereafter by Amazon’s acquisition of Ring, and with both brands still viable market options) has already gotten plenty of coverage from yours truly. But the product line in its entirety is much more diverse, both generationally (battery-operated outdoor cameras are now in their fourth generation, for example) and operationally (both indoor and outdoor variants, both battery- and AC-powered). The Blink Mini 2, another example, is intended for both indoor and outdoor AC-powered setups:
And here’s the Pan-Tilt variant of the original Blink Mini:
That said, the first-generation Blink Mini, which was introduced in early 2020 and is still sold by Amazon, is what we’ll be tearing down today:
Like its siblings, it comes in both white and black color options:
Key to my acquisition motivation was the following excerpt from an early review:
It comes with free cloud storage through the end of this year, and if you already have a Blink account through an older Blink camera, you’ll continue to get free cloud storage as a perk.
At intro, the Blink mini cost $35 (for one) or $65 (for a two-pack), plus the optional Sync Module 2 for an additional $35. Why’s the Sync Module optional, in this case, versus required for my battery-operated cameras? That’s because, as I explained in detail in one of my initial posts in the series and summarized in the subsequent sync module teardown:
Each of the cameras in a particular network implementation connects not only to the common LAN/WAN router over Wi-Fi, but also to a separate and common piece of hardware, the Sync Module, over a proprietary long-range 900 MHz channel that Blink refers to as the LFR (low-frequency radio) beacon.
The Sync Module also connects to the router over Wi-Fi. And the cameras, which claim up-to-2 year operating life out of a set of two AA lithium batteries, are normally in low-power standby mode. If the cameras’ PIR motion sensing is enabled, they’ll alert the Sync Module over LFR when they’re triggered, and the Sync Module will pass along the operating status to “cloud” servers to prepare them to capture the incoming video clip.
Similarly, when you want to view a live stream from a particular camera using the Blink app for Android or iOS, or over an Amazon Echo Show or Echo Spot (or through the Echo Show mode available in recent-edition Amazon Kindle Fire tablets), it’s the Sync Module you’ll be talking to first. The Sync Module will pass along the app’s request and activate the selected camera, turning it back off again afterwards to preserve battery life. The Sync module itself is AC-powered, via a USB power adapter intermediary.
The difference this time, of course, is that the cameras themselves are also AC-powered; low-power standby operation to maximize battery life therefore isn’t relevant in this situation. As such, the key enhancement supported by the 2nd-generation Sync Module, at least for the Blink Mini, is local storage to a plugged-in USB flash drive, for folks who don’t want to spring for a $3/month/camera (!!!) cloud storage subscription and aren’t (like me) “grandfathered” legacy users. Thereby explaining why the Sync Module 2 isn’t bundled with Blink mini cameras, as it is with battery-powered alternatives, but instead is a now-$49.99 accessory (that said, I just bought a “nonfunctional, for parts only” one off eBay for future-teardown purposes!):
In January 2023, I picked up a ”used-good” condition Blink mini 2-camera kit for $18.99 (plus tax) total from Woot, an Amazon subsidiary company. The original purchaser’s shipping sticker was still affixed to the front box panel when I got it (a separate shipping box apparently hadn’t found use); I’ve done my best to peel it off in the first of the photos that follow:
Flip open the box:
and the two cameras inside come into view:
Here’s today’s patient, along with its accoutrements: some optional mounting hardware:
The USB-to-micro-USB power cable:
The 5-W output “wall wart”, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
And the camera again, now standalone save once more for the aforementioned US penny (the camera’s dimensions are 1.9 x 1.9 x 1.3 in/48 x 48 x 34 mm, and it weighs 1.7 oz/48 grams with an additional 1.5 oz/42 grams for its mount). Front:
Left side:
Back, revealing the micro-USB power input (the QR code finds use during initial setup):
Right side:
Let’s get that protective clear plastic off:
Still reflectively shiny. That said, the LEDs to either side of the lens (green/red to the left, blue to the right) are a bit less obscured now, as is the microphone below the lens. Not visible here but located in the upper right corner of the lens is an 850 nm infrared LED for “night vision” purposes. And the lens has a 110-degree viewing angle and captures 1080p images:
Up top is the speaker:
And down below are rubberized pads to prevent the camera from marring whatever surface it’s sitting on, plus a (comparatively) heavy metal disc to keep it from tipping over:
You probably also noticed the two optional mounting-screw holes in the bottom-view photo, which are related to the earlier-shown baggie-enclosed hardware. Here’s how you gain access to the holes’ topsides:
And under the camera itself is a hardware reset switch:
Time to dive inside. As I go along, particularly when I get to the PCB and lens assembly, you might want to also periodically reference my earlier teardown of the Blink XT battery-operated outdoor camera for compare-and-contrast purposes. Let’s start by extracting that metal disc:
Although, after all that effort, we didn’t get very far:
It turns out, however, that a firm tug is all it takes to separate the base from the camera body:
Giving us now a clear view of, among other things, the camera’s FCC ID (2AF77-H1931660):
If you visit the FCC site using the above link, you’ll see that the applicant name is “Immedia Semiconductor LLC”. As mentioned in previous Blink-themed posts, Immedia was originally founded in 2009 as a chip supplier but pivoted to become a consumer electronics system supplier in 2014, branded as Blink, with a highly successful Kickstarter unveil that July. 3.5 years later, as previously mentioned, Amazon acquired the company, and Immedia Semiconductor, LLC continues to operate as an independent subsidiary (thereby explaining the FCC info).
Onward. Fortunately, prior to striving to get inside myself, I’d done a bit of online research. This intrepid hacker had spectacularly mangled her camera’s case during disassembly:
only afterward, alas, stumbling across a video that greatly simplified the process (not to mention doing much less damage along the way):
For likely obvious reasons, I followed in the second person’s footsteps:
Voila:
Here’s the now-exposed inner case topside speaker:
and underside:
The notched grooves on both sides; right:
and left:
And the inside-outer-case clips that originally fit into those grooves. This time, as you can see from the respective damage (or lack thereof), the thinner, more flexible iSesamo “spudger” came through more intact than its stronger, but also thicker and more rigid, Jimmy counterpart. Right:
and left:
Oh well, our objective is now in sight, and any collateral damage done along the way is relative:
I want to first draw your attention to the two large gold-color rectangular PCB contacts in the upper right, to the right of an unpopulated seeming screw hole at the top of the PCB. Note their proximity to the speaker vent holes at the top of the outer case? Now let’s look at the inside of the inner case:
Yep, that’s again the speaker at the top. And when assembled, those two “spring” contacts mate with their PCB counterparts to drive it with an audio signal.
Back to the PCB. Although that top hole doesn’t have a screw in it, the two on the sides are populated. Let’s fix that:
We have liftoff:
Going forward I’ll be referring to this as the “rear” PCB, to differentiate it from the “front” PCB still currently existent in the outer case. Looking first at the “rear” PCB’s now exposed frontside:
there’s, for example, the connector at bottom right that originally mated it to the “front” PCB and held it in place even after the screws were removed, necessitating the earlier-shown Jimmy-as-crowbar to prod it into detaching. In the upper right is a Faraday Cage whose contents we’ll see shortly. In the upper left are Immedia Semiconductor markings. And in the center is the lens assembly; the two-wire harness routing to it suggests that, as in past teardowns of products like this, there’s likely an IR filter normally between the lens and image sensor that can selectively be moved out of the way for “night vision” applications.
Back to the backside of the “rear” PCB:
Note again the two large contacts in the upper right that feed the speaker. Accompanying them are numerous smaller contacts spread across the PCB, which presumably act as test points, assembly-line firmware programming landing pads, and the like. And speaking of which, in the lower left corner is a Winbond W25Q32JW 32 Mbit SPI serial NOR flash memory.
Two more screws remain in the center of the PCB; I’m guessing from past experience that they hold the other-side lens assembly in place in front of the image sensor. Let’s test my hypothesis:
Yep (to the lens-assembly removal scheme, moveable IR filter inside and sensor underneath):
And here’s the customary dollop of dried glue that holds the lens in position after its focus and other optics characteristics are fine-tuned on the assembly line:
Before switching our attention to the “front” PCB, let’s revisit that Faraday Cage:
Inside is the application processor, Immedia’s AC1002B. Since it’s Amazon-proprietary, there’s unsurprisingly no public documentation available, aside from a brief mention in the camera documentation of “4 cores / 200 MHz”. That said, I’ll note that it’s different from the Immedia ISI-108A SoC I found in my earlier Blink XT teardown, although the Blink XT2 successor switched to it. And while I’m on the teardown-comparison topic, the flash memory is 4x the capacity of that seen previously.
Last, but not least, let’s switch our attention to the remaining “front” PCB:
Two more screws to go (by the way, note that in addition to the earlier mentioned unpopulated screw hole site at the top of the device, there’s now an additional one at the bottom!):
And the “front” PCB is now also free:
It’d be hard to miss the PCB-embedded antenna in the upper right corner, even if you tried to overlook it! The markings on the shiny IC below it are pretty faint, so take my word when I tell you that the first line identifies it as Infineon’s CYW43438 1×1 single-band Wi-Fi 4 (802.11n) + Bluetooth® 5.1 combo chip (unsurprising given its antenna proximity; it seems that the IC’s Bluetooth facilities are unused in this design). Below that, at the lower right edge, is the reset switch. In the lower left is the other end of the PCB-to-PCB connection scheme. And in-between them, directly below the lens “hole”, is the MEMS microphone, whose input port is on the other side of the PCB. Speaking of which…
In the upper right quadrant is the earlier-mentioned 850 nm infrared LED for use in dim ambient light settings. To clarify, it’s not a PIR module, as was found in the earlier Blink XT teardown; in this particular camera model, an alternative technique called Pixel Difference Analysis (PDA) finds use in detecting object motion in the scene. On either side of the lens “hole” are the two LEDs, respectively labeled D2 (blue, to the right) and D4 (green/red, to the left). And below the lens “hole” is the MEMS microphone input port. By the way, I’m struck by how many seemingly unpopulated-component solder pads there are on both sides of this PCB. Readers, agree?
With the PCBs out of the way, all that’s left to do is push the front panel out of the case:
Note that there’s an additional “hole” site above the lens opening, which is currently plastic-filled. As with those unpopulated component sites I just mentioned, along with the unused screw holes I noted earlier, whenever I see something like this, I wonder what it was originally intended to serve as…a second microphone for ambient noise-cancellation purposes, perhaps?
I’ll close with a confession. At some point in these final teardown steps, the crooked rubbery white plastic widget in the lower right corner of the previous photo fell off, and I was admittedly baffled as to what function it served…until I retraced my earlier disassembly steps (and photos) and remembered that it was the inherently elastic interface between the PCB-mounted hardware reset switch and the outside world:
And speaking of disassembly steps…since my removal of the Faraday Cage “lid” and more general dissection were so “clean” (mangled side clip aside), after this writeup is published I’m going to strive to tediously and patiently reverse course and successfully reassemble the device back to a fully functional state. Wish me luck; I’ll post the outcome in the comments. And as always, I also look forward to reading your thoughts there; thanks in advance for them!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Blink: Security cameras with a power- and bandwidth-stingy uplink
- Blink Cameras and their batteries: Functional abnormalities and consumer liabilities
- Teardown: Blink XT security camera
- Blink: Security camera system installation and impressions
What’s LVGL, and how it works in embedded designs
Light and Versatile Graphics Library (LVGL) is steadily making inroads in the graphics realm by efficiently facilitating graphical user interface (GUI) designs in small, resource-constrained, and battery-powered devices such as wearables, e-bikes, navigation systems, instrument clusters, medical gadgets, and more.
Graphics IP suppliers are increasingly partnering with LVGL to optimize GPU performance and expand graphic processing capabilities for a wide range of embedded applications. But who’s LVGL? It’s the company behind the free and open-source graphics library for embedded systems; it helps developers create GUIs for microcontroller units (MCUs), microprocessor units (MPUs), and display processors.
Figure 1 LVGL has no external dependencies, which makes its porting incredibly simple.
LVGL, written in C, allows embedded developers to create modern and visually appealing user interfaces in embedded applications. It works with various processors and operating systems and enables developers to keep code size and memory usage to a minimum. It can be used with any RTOS and bare-metal setup and quickly adapts to unique project needs.
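To give a flavor of what that looks like in practice, here is a minimal sketch of an LVGL “hello world” in C (v8-style API; the display and input-device driver registration, which is platform-specific, is omitted here):

```c
#include "lvgl.h"

/* Create a centered "hello world" label on the active screen. */
void gui_init(void)
{
    lv_init();
    /* Platform-specific display/input driver registration goes here. */

    lv_obj_t *label = lv_label_create(lv_scr_act());
    lv_label_set_text(label, "Hello from LVGL");
    lv_obj_align(label, LV_ALIGN_CENTER, 0, 0);
}

/* Call periodically (every few ms) from the main loop or an RTOS task. */
void gui_loop(void)
{
    lv_timer_handler();
}
```

Everything, widgets and styles included, is plain C data, which is part of what keeps the footprint small enough for bare-metal MCU targets.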
IP suppliers integrate their GPU solutions into LVGL’s graphics ecosystem so that developers can build sleek, responsive interfaces without compromising on performance or power efficiency. This GPU-level support for LVGL is transforming the embedded UI landscape in resource-constrained devices like MCUs.
Design case studies
Take the case of Think Silicon, a supplier of ultra-low-power GPU IPs for embedded systems and an Applied Materials company. It’s teaming up with LVGL to develop high-performance, low-power graphics libraries for MCUs. As a result, the software development kit for its NEMA GPUs will be able to accelerate LVGL’s graphics library by up to 5x compared to software-only rendering.
Figure 2 Think Silicon has combined LVGL’s lightweight open-source graphics library with its NEMA GPU-Series.
VeriSilicon, a Shanghai, China-based supplier of embedded GPUs, has also partnered with LVGL to facilitate seamless integration of 2D, 2.5D, and 3D content in embedded applications. In conjunction with LVGL’s graphics library, VeriSilicon aims to advance 3D rendering capabilities in the GUI designs.
Actions Technology, a Zhuhai, China-based firm developing chips for AIoT applications, has incorporated VeriSilicon’s GPU into its smartwatch system-on-chip (SoC) design. Tim Zhang, GM of Actions Technology’s Wearable and Sensing Business Unit, acknowledges the importance of LVGL’s graphics technology contribution in delivering rich 3D graphics in its smartwatch SoC.
Figure 3 Actions Technology has incorporated VeriSilicon’s LVGL-enabled GPU in its smartwatch SoC.
Embedded GPUs are now serving a wide range of applications, from wearables and infotainment to micro-mobility and AIoT. Here, the integration of LVGL in GPUs enables users to create visually appealing UIs across a wide variety of hardware platforms.
Related Content
- Re-imagining Imagination Technologies
- GUI Development: Embedding Graphics
- CAST launches graphics acceleration IP cores
- Imagination outstrips all other GPU IP suppliers
- Two-digit nanosecond latency CXL IP addresses GPU memory expansion
Beauty is in the eye of the beholder
Take a close look at this power audio amplifier that dates back to 1961 (Figure 1). This image is a still extracted from a YouTube video by a group called “The Spotnicks” playing their version of “Ghost Riders In The Sky”. (Try not to laugh too hard when you see them in their space suits.)
Figure 1: Image of an amplifier featured in a YouTube video with eight 6L6GB beam power tetrode tubes.
Sporting eight 6L6GB beam power tetrode tubes, this thing was clearly pushing the limits of the state-of-the-art at that time. But notice the gorgeous appearance: the emphasis on gold coloration and the mirror-like reflection. This amplifier was meant as a work of art to be seen, particularly on stage, not just listened to.
After I looked on in dazed admiration for a while, I noticed something. One of the seven-pin-miniature tube shields on the left is not gold colored. Somewhere along the line, I suspect, one of those tubes had to be replaced and its tube shield got lost somehow. A replacement shield was used instead, which left the amplifier with something of a beauty mark.
When you play that video, you’ll see that more than one of these amplifiers was in service during the performance.
Enjoy!
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Vacuum tube negative resistance
- Friday Fun: Test Those Tubes
- Electronics of our youth
- Self-contained TV receiver uses 24 transistors
- Remembrance of chips past
- Gone but Not Forgotten
Wireless combo modules offer global connectivity
Silicon Labs’ SiWx917Y series of Wi-Fi 6 and Bluetooth LE 5.4 modules provides plug-and-play simplicity with global RF certifications, helping to accelerate development. Intended for battery-powered IoT devices, these energy-efficient modules integrate an Arm Cortex-M4 application processor and antenna in a small 16×21×2.3-mm package.
The SiWG917Y variant enables customers to run all application code on the device’s Arm Cortex-M4 core. In contrast, the SiWN917Y allows customers to execute applications on a separate MCU, while the module manages Wi-Fi 6 and BLE 5.4 communication tasks. Intelligent power management supports connected sleep mode, consuming as little as 20 µA with Target Wake Time (TWT) and a 60-second keep-alive interval.
The wireless subsystem includes a 160-MHz network processor, baseband DSP, analog front-end, RF transceiver, and power amplifier. The application subsystem features a 180-MHz Cortex-M4 processor with a floating-point unit (FPU) for peripheral and application tasks. This dual-core architecture separates applications and wireless stacks to optimize performance and ensure timely processing.
Wireless modules in the SiWx917Y series are now generally available for purchase.
MPU controls multi-axis motors in real time
Renesas’s RZ/T2H microprocessor, its most advanced for industrial equipment, enables precise control of robot motors with up to nine axes. The device’s application processing and real-time performance make it well-suited for programmable logic controllers (PLCs), motion controllers, distributed control systems, and computerized numerical controls (CNCs). It also supports a variety of network communications, including industrial Ethernet with Time Sensitive Networking (TSN).
The RZ/T2H features four Arm Cortex-A55 CPUs (up to 1.2 GHz) for application tasks and two Cortex-R52 CPUs (up to 1 GHz) for real-time processing. Each R52 core is equipped with 576 KB of tightly coupled memory (TCM). The RZ/T2H also supports 32-bit-wide LPDDR4-3200 SDRAM for external memory, enabling high-performance tasks like Linux applications, robot trajectory generation, and PLC sequence processing.
To support multi-axis motor control, the RZ/T2H provides 3-phase PWM timers, delta-sigma interfaces for current measurement, and encoder interfaces. Peripheral functions for motor control are connected to the Low Latency Peripheral Port (LLPP) of the Cortex-R52 for fast access.
The RZ/T2H microprocessor is now available through authorized Renesas distributors.
Tiny Hall element delivers InAs sensitivity
The HQ0A11, an indium arsenide (InAs) Hall element from AKM, is the smallest and thinnest in the company’s high-sensitivity HQ series. At just 0.8×0.4×0.23 mm, the HQ0A11 reduces volume by 85% compared to its predecessor, the HQ0811. It also delivers approximately 16% better signal-to-noise ratio, directly improving position-detection accuracy and making its S/N performance the highest in the HQ lineup.
Hall elements are commonly used in position detection for image stabilization and autofocus in smartphone cameras. According to AKM, the HQ0A11 significantly reduces lens-shake effects, particularly with telephoto lenses. Its compact size also makes it well-suited for high-density component mounting in limited spaces.
The HQ0A11 achieves a sensitivity of 0.66 mV/mA/mT, with noise kept at 1.51 µVRMS/mA. It is expected to contribute to enhanced performance, not only in smartphone camera modules, but also in small motors for robots.
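As a quick back-of-the-envelope check of those figures (the 1-mA drive current and 1-mT field are assumed operating-point values for illustration, not datasheet conditions):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double drive_mA  = 1.0;     /* Hall drive current, mA (assumed)     */
    double field_mT  = 1.0;     /* applied field, mT (assumed)          */
    double sens      = 0.66e-3; /* sensitivity, V/mA/mT (from the text) */
    double noise_rms = 1.51e-6; /* noise, VRMS/mA (from the text)       */

    double signal = sens * drive_mA * field_mT; /* Hall output voltage, V */
    double noise  = noise_rms * drive_mA;       /* noise voltage, V       */

    printf("signal = %.0f uV, noise = %.2f uVRMS, SNR = %.1f dB\n",
           signal * 1e6, noise * 1e6, 20.0 * log10(signal / noise));
    return 0;
}
```

At that operating point, the 660-µV signal against 1.51 µVRMS of noise works out to roughly 53 dB.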
AKM has begun mass production and shipment of the HQ0A11 InAs Hall element.
Shunt monitors detect vehicle currents
ZXCT18xQ automotive current shunt monitors from Diodes measure small sense voltages in the presence of common-mode voltages up to 26 V. Operating from a 2.7-V to 5.5-V supply, these AEC-Q100-qualified instrumentation amplifiers are suited for vehicle lighting controls, high-side and low-side current sensing, battery management, fault current detection, and other vehicle body control systems.
The ZXCT180Q and ZXCT181Q support unidirectional and bidirectional current sensing, respectively. The ZXCT180Q features two pin-assignment options, with its OUT pin placed in different configurations; its REF pin is tied to GND. For bidirectional current sensing, as required in battery management systems, the ZXCT181Q applies a voltage to the REF pin, offsetting the output voltage.
Both the ZXCT180Q and ZXCT181Q offer fixed gains of 20 V/V, 50 V/V, 100 V/V, and 200 V/V. They measure shunt voltages at common-mode voltages from -0.3 V to 26 V, independent of the supply, with a 370-µA maximum supply current. The devices support bandwidths up to 400 kHz at 20 V/V and slew rates of 2 V/µs.
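To make the transfer function concrete, here is a minimal sketch; the 10-mΩ shunt and 5-A load are assumed example values, not datasheet figures:

```c
#include <stdio.h>

int main(void)
{
    double r_shunt = 0.010; /* 10-mOhm sense resistor (assumed example)        */
    double i_load  = 5.0;   /* load current, A (assumed example)               */
    double gain    = 20.0;  /* one of the fixed gains: 20/50/100/200 V/V       */
    double v_ref   = 0.0;   /* REF tied to GND for the unidirectional ZXCT180Q */

    double v_sense = r_shunt * i_load;       /* 50 mV across the shunt */
    double v_out   = gain * v_sense + v_ref; /* 1.0 V at the OUT pin   */

    printf("Vsense = %.0f mV, Vout = %.2f V\n", v_sense * 1e3, v_out);
    return 0;
}
```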
The ZXCT180Q is priced at $0.11 each, while the ZXCT181Q is priced at $0.12 each, both in 1,000-piece quantities.
Portable 5G transmitter streams live video
DragonFly V 5G is Vislink’s bonded cellular miniature transmitter with 5G connectivity and HD video streaming capabilities. The device supports public and private 5G networks, delivering live broadcast-quality video from point-of-view cameras, drones, UAVs, and body-worn devices.
Leveraging High-Efficiency Video Coding (HEVC) compression, the DragonFly V 5G streams high-definition, high dynamic range video in formats up to 1080p at 50/60 Hz. Its cellular bonding technology combines multiple 5G network connections into a single, aggregated data stream for enhanced reliability and speed.
Weighing just 82 grams, the transmitter accommodates HDMI or SDI camera inputs, depending on the variant. Additionally, the DragonFly V 5G includes support for Wi-Fi in the 2.4-GHz ISM band and RS-232 remote control.
The DragonFly V 5G is now available, joining the previously announced DragonFly V COFDM model.
∆Vbe bread dough incubator
I was surprised recently when a (nearly) two-decade-old design idea of mine for take-back-half temperature control got a question from reader John Louis Waugaman. John says he needs a way to control the temperature of batches of rising bread dough. I’m glad he might be considering applying my TBH circuit to his problem, but it really is kind of overkill in that context. So, I began pondering whether a simpler topology might solve his dough incubation problem as well as the TBH circuit would while saving some cost and effort. Plus, there’s a backstory.
For a long while I’ve been fond of a particularly elegant, accurate, and (very!) inexpensive method (∆Vbe) for sensing and controlling temperature using ordinary, uncalibrated bipolar transistors. I first saw it explained in an application note by famed analog guru Jim Williams (see page 7 for ∆Vbe theory).
I’m always watching for opportunities to use ∆Vbe and this possibility of providing cheap, accurate, and calibration-free temperature control in a cool culinary context was too good to let pass by. Figure 1 shows the new circuit I cooked up, (loosely) based on Jim’s recipe.
Figure 1 Delta-Vbe sensor Q1 is programmed via R1 for the desired setpoint temperature in kelvin = R1/100 = 312 K = 39°C for R1 = 31.2k. Asterisked Rs should be precision types (1% or better).
The R2 R3 D5 D6 network drives Q1 in the magic 10:1 current ratio ∆Vbe measurement cycle described by Williams. Note that the absolute currents supplied to Q1 are no more accurate than the raw unregulated 60-Hz line voltage that creates them, but that doesn’t affect ∆Vbe accuracy. All that counts is their 10:1 ratio, which is set, independent of line-voltage variation, solely by the precision of (R2/R3 + 1) = 10.
This makes Q1 generate a PTAT (proportional to absolute temperature) AC signal equal to T(K)/5050 volts peak-to-peak that follows the 120-Hz log(|sine|) waveshape shown in Figure 2.
Figure 2 Q1’s ∆Vbe PTAT log(|sin(r)|) T(K)/5050 waveform (y-axis = volts, x-axis = radians; red = average value = AC baseline).
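For the curious, here is a minimal numeric sketch showing where the T/5050 scale factor and the caption’s R1 = 100 Ω-per-kelvin rule land; it uses only standard physical constants plus the 39°C setpoint from Figure 1:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double k = 1.380649e-23;  /* Boltzmann constant, J/K */
    const double q = 1.602177e-19;  /* electron charge, C      */

    double t_set_c = 39.0;              /* dough setpoint from Figure 1, deg C */
    double t_set_k = t_set_c + 273.15;  /* ~312 K */

    /* dVbe for a 10:1 collector-current ratio: (kT/q)*ln(10), i.e. ~T/5050 V */
    double dvbe = (k * t_set_k / q) * log(10.0);
    double r1   = 100.0 * t_set_k;      /* caption's rule: R1 = 100 ohms per kelvin */

    printf("dVbe at setpoint = %.1f mV (T/5050 = %.1f mV)\n",
           dvbe * 1e3, t_set_k / 5050.0 * 1e3);
    printf("R1 for a %.0f degC setpoint = %.1f kOhm\n", t_set_c, r1 / 1e3);
    return 0;
}
```

Both expressions land within a fraction of a millivolt of each other at the 312 K setpoint, and R1 comes out at the 31.2k shown in the schematic.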
The PTAT signal Vpp is boosted by A1a’s gain = –2,742,160/R1, then compared by A2 to its precision (2.50 V ±0.4%) internal shunt reference (thanks again, Konstantin Kim, for finding the versatile AP4310A!).
A2’s output stays at zero, holding Q2 off, while Q1’s temperature and the PTAT signal are below setpoint. This allows 120-Hz pulses coupled through C3 to reach Q3’s gate, switch it on, and apply power to the heater. When the heater warms Q1 (and presumably the dough) to the programmed temperature, the PTAT waveform rises above A2’s reference voltage. This makes A2 start turning Q2 on, which diverts the TRIAC gate pulses to ground. That turns Q3 and the heater off, allowing Q1 to cool, and so on. The resulting cycling completes a thermostasis feedback loop to make the dough grow.
About Q3: Even though heater drive is unipolar, I selected a TRIAC instead of an SCR for Q3. This wasn’t to get bipolar capability but rather because the TRIAC has a higher max gate current rating. This makes Q3 able to shrug off the 2-A inrush possible at power-up, which might vaporize an SCR gate.
D7 provides a path to ground for C3 return current, preventing it from false triggering Q3.
For heater duty, John suggested an incandescent light bulb. I agree radiant heating should work well. Since Q3’s maximum duty factor is 50%, a 100-W bulb would be just about perfect for a maximum heating power of ~60 W. Plus, a bonus benefit of reduced voltage would be a lower filament temperature. This should make an ordinary tungsten bulb last many thousands of hours.
It may be just what John kneads.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Temperature controller has “take-back-half” convergence algorithm
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Take-Back-Half precision diode charge pump
- 20MHz VFC with take-back-half charge pump
Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
Within the introduction to my July 2024 teardown of Walmart’s first-generation onn. Android TV 4K UHD Streaming Device, I also alluded to “a FHD “stick” sibling” that was “queued up on the bookshelf to my right for sooner-or-later teardown purposes.” That time is now. Let’s start with a brief review of the differences between “UHD” and “FHD”. The latter delivers a maximum resolution of 1920×1080 pixels (curiously also referred to as “2K” on the Walmart website product page), initially (with ATSC TV, for example) interlaced, and progressive scan nowadays. The former, conversely, maxes out at 3840×2160 pixels (the more common “4K”), alternatively stated as four times the 1920×1080 pixels of FHD.
So why would you go with a lower-resolution device, versus a higher-resolution alternative? One (slight) difference involves pricing; while the UHD Streaming Device was $19.88 on closeout when I bought it in October 2021, its FHD Streaming Stick sibling was $5 cheaper ($14.88). Size and weight deviations are a more meaningful differentiator. Here’s the UHD Streaming Device again, with dimensions of 4.90” x 4.90” x 0.80” and weighing in at 1.2 lbs.:
And here are some stock shots of the 3.81” x 1.39” x 0.61”, 6.5 oz FHD Streaming Stick, designed to hang off the back of a TV versus sitting on top of it (or its stand, for that matter):
Next up, some “real-life” box shots of the device I’ll be tearing down today:
I’m admittedly easily amused, but as I also mentioned last time, I still giggle whenever I see this:
Next, let’s dive inside:
Underneath the remote control (and a piece of interstitial cardboard), identical appearance-wise to the remote bundled with the UHD Streaming Device (and otherwise as well; the two remotes’ product markings inside their battery compartments exactly match):
is the USB-A to micro-USB power cable, for which I’ll undoubtedly find alternative use elsewhere in the future:
Underneath the “wall wart” power supply for the streaming stick, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, is a set of AAA batteries for the remote control, which I’ll also press into service somewhere else some other time (the “wall wart” as well, for that matter):
And underneath the streaming stick:
is an HDMI male-to-female extender cable, useful for when direct access to a TV’s HDMI input (as shown in one of the earlier stock photos) is too tight of a squeeze (again, saved for reuse later):
Now for today’s subject. The topside view (I use this term, along with “bottom” and “sides”, loosely because, as I’ve previously mentioned with other devices of this type, orientation is HDMI plug- and cable-orientation dependent, and therefore inconsistent from one TV and broader setup to another), with a logo niftily constructed of multiple passive airflow vents, you’ve already seen in the prior “stock” photos:
Even more vents adorn the bottom:
Here’s the HDMI connector end:
The other end’s comparatively bland:
Along one side is the micro-USB power input, with a UPC code below it:
Along the other is an access hole for a reset switch (which you’ll see more clearly shortly), and a product-marking suite:
which is more clearly (and fully) readable once I give the stick a slight tilt:
Time to see what’s inside the case. The micro-USB slot has been a convenient access starting point in the past, as it was again in this instance:
Here’s the now-exposed inside of the bottom of the case, along with the underside of the PCB:
Lifting out the PCB and flipping it over reveals both its topside and the case top insides:
Here are the interiors of both case sides by themselves:
Along with the now-standalone PCB topside:
And PCB underside:
Covering the bottom of the PCB is a sizeable piece of tape:
The top’s a different story. A much smaller piece of tape:
And below it, a more sizeable piece of (presumably) aluminum, I’m guessing both for heat-removal purposes and to provide the assembly with rigidity:
Below that is thermal tape, along with a pad:
While we’re here, let’s get those two Faraday Cages off:
The large square IC at the bottom, previously covered by the thermal pad, is (unsurprisingly) the system SoC, an Amlogic S805Y-B (versus the S905Y2 in the sibling Streaming Device, which explains, among other things, the output-resolution difference between the respective systems containing them). Above it is one (of two total, the other on the other side of the PCB) rectangular CXMT CXDQ2BFAM-CG 4 Gbit (x16) 1200 MHz DDR4 SDRAM.
Above and to the left of the DRAM is a Realtek RTL8821CS 802.11ac/abgn SDIO WLAN with Bluetooth 4.2 single-chip controller, unsurprising considering its standalone Faraday Cage along with its proximity to PCB-embedded antennae above it and to its right. Along the left edge of the PCB is the reset switch. And in the upper right corner is the micro-USB connector.
Flipping the PCB back over, back to its underside, and removing its Faraday Cage:
At bottom is a SanDisk SDINBDG4-8G industrial (-25°C to 85°C) 8-GByte embedded MMC flash memory module. And above it, toward the middle, is the aforementioned other CXMT CXDQ2BFAM-CG 4 Gbit (x16) 1200 MHz DDR4 SDRAM. Note, by the way, that the earlier-dissected Streaming Device had twice the total DRAM capacity of what’s found here: another reflection of the output-resolution differentiation between them.
See anything else interesting in this design? Let me (and your fellow readers) know in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- The Google Chromecast with Google TV: Realizing a resurrection opportunity
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Teardown: Roku Streaming Stick
Pat Gelsinger: Return of the engineer CEO that wasn’t
After more than three years at the helm, Intel’s charismatic and ambitious chief Pat Gelsinger has stepped down and the faltering American semiconductor icon is starting to look for his replacement. It’s not a coincidence that this year Intel will announce its first annual loss since 1986; analysts expect Intel to lose $3.68 billion in 2024.
Intel is a semiconductor industry pioneer with luminaries like Robert Noyce, Gordon Moore, and Andy Grove in its fold. While the jury is still out on Craig Barrett, who took over the reins from Grove in 1998, the decline at Intel’s top spot began with Paul Otellini, who famously refused to supply chips for Apple’s iPhone due to a perceived lack of volume.
The downward trend at Intel’s corner office continued under Brian Krzanich and Bob Swan, and then Gelsinger came along with a big bang. Gelsinger, who joined Intel in 1979 at the age of 18 with an associate’s degree from Lincoln Tech, was the lead architect of Intel’s fourth-generation 80486 processor launched in 1989. He also became Intel’s youngest vice president at the age of 32.
Gelsinger continued to rise through the ranks and became the Santa Clara, California-based chipmaker’s first chief technology officer (CTO) in 2001. That’s also when his career’s first anticlimax began. So, the restless Gelsinger chose the path of upward mobility, leaving Intel in 2009 to become EMC’s president and COO.
Figure 1 Gelsinger (second from right) is seen with the 386 processor development team. Source: Intel
In 2012, he got the top job at VMware, and after nine years at the cloud computing firm, he returned to a then-troubled Intel as its turnaround CEO. After the Intel board ousted Bob Swan, who had a finance background, Gelsinger came on board with great expectations. He was hailed as the engineer CEO taking the reins of an American icon with engineering roots. In retrospect, however, that proved easier said than done.
Intel’s foundry business, the linchpin of Gelsinger’s turnaround plan, is now the elephant in the room, and there is no viable remedy in sight. Beyond the semiconductor contract-manufacturing business that he envisioned would turn the corner at Intel, the other critical problem areas relate more to execution than vision.
Below is a brief snapshot of the debacles that happened under Gelsinger’s watch:
- Intel, a CPU company by heritage, has been losing market share in PC and data center processors to competitor AMD. Moreover, x86 rival Arm is making inroads into the highly lucrative server and data center markets.
- Intel seems to have missed the AI wave, and its Gaudi AI accelerators haven’t been selling well.
- While Gelsinger spent a lot of time rubbing shoulders with politicians, his inept remarks about Taiwan’s precarious relations with China offended TSMC, leading Intel to lose massive discounts from the mega-fab for the manufacturing of its processors.
- Intel is also known to have failed in sealing chip supply deals with Sony for the PlayStation console and Waymo for self-driving vehicles.
- In 2023, Intel had to cancel its bid to acquire Israel-based fab Tower Semiconductor for $5.4 billion due to regulatory issues. As a result, it was forced to pay $353 million as the termination fee.
Now, back to the elephant in the room: Intel’s foundry business, currently in transition to become an independent subsidiary due to shareholder pressure. It’s probably the straw that broke the camel’s back. Chip contract manufacturing is a long-term business requiring massive capital investment, and time apparently isn’t on Intel’s side.
More specifically, Intel’s much-publicized 18A process node has seen only a single announced tape-out, while large potential customers such as Apple and Qualcomm are reported to have passed on 18A for technical reasons. The trade media is abuzz about reliability issues facing 18A, and Intel isn’t expected to manufacture chips in volume on this node until 2026.
Figure 2 Gelsinger’s departure marks his 45 years of work in the tech industry. Source: Intel
While technology and trade media outlets have seen Gelsinger’s ouster with an element of surprise, we at EDN saw it coming much earlier. Our story “Intel: Gelsinger’s foundry gamble enters crunch,” published on 4 November 2024, offered a blueprint of this inevitable ouster announced on 2 December 2024.
We also think that Intel’s problems aren’t unsolvable. Gelsinger’s successor must avoid grand plans and focus on perseverance, innovation, and execution. That’s what Lisa Su has done at Intel’s archrival AMD over the past few years.
Related Content
- Change of guard at Intel Foundry, again
- Intel Paints Their AI Future with Gaudi 3
- Intel 3 Represents an Intel Foundry Milestone
- Computex 2024: Pat Gelsinger vs. Jensen Huang
- Intel unveils service-oriented internal foundry model
Single-supply single-ended input to pseudo class A/B differential output amp
Derived from the micromixer topology by Barrie Gilbert [1], this amplifier converts a single-ended input into a Class A/B current output from a single supply.
As shown in Figure 1 with an LTspice implementation, the circuit employs six bipolar junction transistors (BJTs) in a unique configuration that “directs” the output current from Q3 and Q4 depending upon input polarity.
Figure 1 LTspice schematic of the micromixer topology with six BJTs arranged such that the output currents from Q3 and Q4 depend upon input polarity.
How it works
C1 serves as a decoupling capacitor that allows the potential at Q1’s base to vary from the Vbe bias established by Q1’s diode connection. Q2 acts as a mirror to Q1 for positive input signals and cuts off for large negative inputs, while Q3 is a cascode under small-signal conditions and sources large negative input currents.
With ideal equal-sized transistors, all collector currents are equal and set by the base-emitter voltages of Q5 plus Q6, which are determined by the current through R3 as Ibias.
When the input signal is large and positive, input current flows mostly through Q1, which acts as a transconductor and cascode device, and Q3 cuts off as its emitter voltage rises. Q2 “mirrors” the Q1 collector current through cascode device Q4, which sinks a replica of the large positive input current as the positive output.
Large signal inputs
On large negative inputs, all the input current is sourced by Q3 as its emitter voltage drops and Q1 cuts off, with Q3’s collector sinking the negative output (Figure 2). Note that for large inputs, the output currents are “directed” to either Q4 (+) or Q3 (-) and are unlimited with ideal devices.
Figure 2 LTspice simulation of differential output in response to large negative signal inputs.
Small signal inputs
As shown in Figure 3, with smaller inputs the circuit acts like a small-signal Class A type, as all the transistors operate at an Ibias collector current. So, with zero input current, both Q1 and Q3 conduct Ibias, as do Q2 and Q4, and the circuit produces a differential output.
Figure 3 LTspice simulation of differential output in response to small negative input current.
Actual results
Figure 4 shows actual results from the circuit in Figure 1, captured on a DSO with a 2-Vpp, 1-kHz sine-wave input. Figure 5 shows the LTspice simulation results of Figure 1. Note that the LTspice plots were set up with color and display offset bias to match the DSO display for comparison.
Figure 4 Actual results of the Figure 1 schematic shown on a DSO with a 2-Vpp, 1-kHz sine-wave input.
Figure 5 LTspice simulation results of Figure 1, where the plots were set up with color and display offset bias to match the DSO display for comparison.
Also note the slight “crossover distortion” visible in both the DSO and LTspice results—common to conventional Class A/B stages. This can be improved with a higher Ibias at the expense of higher amplifier dissipation.
This topology offers additional features beyond single-to-differential conversion [1]. It addresses the dynamic input impedance as a function of instantaneous input signal level, an area often not addressed in conventional amplifier discussions.
Signal-induced distortion begins when the effective amplifier input impedance changes with dynamic signal level and works against the source impedance, creating a nonlinear, signal-dependent voltage/power divider that modulates the input signal level.
Improving input impedance
Figure 6 shows a version where additional resistors are added to reduce the input impedance variation with input signal level [2]. These resistors help balance the input impedance over large input signal swings while still maintaining an equal collector-current bias for each device, as determined by Ibias.
Figure 6 A high dynamic range amplifier with additional resistors to help balance the input impedance variations over large input signal swings while still maintaining an equal collector current bias for each device as determined by Ibias.
As shown in Figure 7 and Figure 8, the LTspice input impedance results were created by taking the derivative of input voltage with respect to input current as the input is swept across a large positive and negative range.
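For readers who want to replicate that post-processing outside LTspice, here is a minimal numerical sketch of the dVin/dIin computation; the swept data points below are illustrative placeholders, not values from the actual circuit:

```c
#include <stdio.h>

/* Central-difference estimate of dynamic input impedance Zin = dVin/dIin,
 * applied to swept (Iin, Vin) pairs such as those exported from a simulator.
 * The sample data is illustrative only. */
int main(void)
{
    double i_in[] = { -2e-3, -1e-3, 0.0, 1e-3, 2e-3 }; /* swept input current, A  */
    double v_in[] = { -0.12, -0.07, 0.0, 0.07, 0.12 }; /* resulting input voltage, V */
    int n = sizeof i_in / sizeof i_in[0];

    for (int k = 1; k < n - 1; k++) {
        double z = (v_in[k + 1] - v_in[k - 1]) / (i_in[k + 1] - i_in[k - 1]);
        printf("Iin = %+.1f mA : Zin ~ %.0f ohms\n", i_in[k] * 1e3, z);
    }
    return 0;
}
```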
For more details see references [1] and [2].
Figure 7 Small signal input impedance results and output differential current.
Figure 8 Large signal +10 V peak input impedance results and output differential current.
High bandwidth and dynamic range single-to-differential amplifier circuit
These circuits operate in the “current domain” and can offer very high bandwidths with high dynamic range at low static power dissipation from a single supply. In the distant past, the author implemented this approach with high-frequency SiGe bipolar transistors in a BiCMOS process, with good results.
Michael A Wyatt is a Life Member of IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, and is (semi) retired with Wyatt Labs. During his career he accumulated 32 US patents and has published several EDN articles, including the Best Idea of the Year in 1989.
Related Content
- Simple 5-component oscillator works below 0.8V
- Applying fully differential amplifier output-noise analysis to drive high-performance ADCs
- Differential vs. single-ended data transfer: What’s the difference?
- Understanding output filters for Class-D amplifiers
- Distortion in power amplifiers, Part I: the sources of distortion
- Loudspeakers: Effects of amplifiers and cables – Part 5
References
- B. Gilbert, “The MICROMIXER: A highly linear variant of the Gilbert mixer using a bisymmetric Class-AB input stage,” IEEE Journal of Solid-State Circuits, vol. 32, no. 9, pp. 1357-1365, September 1997.
- https://www.eevblog.com/forum/projects/interesting-amplifier-topology/
Energizer’s PowerSource Pro Battery Generator: Not bad, but you can do better
Back in August, EDN published my coverage of the SLA (sealed lead acid) battery-based Phase2 Energy PowerSource 660Wh 1800-Watt Power Station on sale for $149.99 plus tax at bargains site Meh:
I hypothesized at the time that both it and a Duracell-branded clone, which originally sold for $699.99, were private-label brands of a common design originally sourced from a company called Battery-Biz. And toward the end of that same writeup, I also mentioned a couple of lithium battery-based successor power sources, also Battery-Biz-sourced, among them a $599.99 (versus $1,899.99 original MSRP, believe it or not) Energizer-branded bundle with a 200W solar cell that was also at the time being discount-sold by Meh:
That initial limited-time promotion has subsequently been resurrected several more times (that I’ve seen, maybe more than that) by Meh to date. Why? I’ll let them explain in their own words:
We’ve offered this a bunch now, but we haven’t seen any real drop-off in sales.
The third time I saw it, I decided to take the plunge. The second time, it had been sold in two different configurations: $499 refurbished with a 90-day warranty, or $599 new with full two-year factory warranty. But by the time I got around to acting on my purchase aspiration, refurb inventory was depleted. “No problem,” I thought this time, “what’s available for sale is only $100 more, is brand new, and comes with a much longer warranty period.” Hold that thought.
Here are more stock photos and infographic images from the Battery-Biz website product page (and here’s the user manual, linked to from that same page):
Along with a few more stock images from the promotion page on Meh’s website:
And now, some images of my unit, both in action while being initially charged and accompanying its SLA battery-based sibling:
So why did I decide to pull the purchase trigger, aside from engineering curiosity? While the Phase2 Energy PowerSource Power Station remains a perfectly acceptable solution for residential backup power in utility outage situations (which we unfortunately seem to be increasingly experiencing of late), as I mentioned back in September, it’s quite a (carrying handles-included, but still) boat anchor. The Energizer PowerSource Pro Battery Generator is comparatively quite svelte: nearly half the total volume (15.35” x 9.5” x 8.8”, versus 19.9″ x 12.8″ x 8.9″) and less than half the weight (23 lbs., versus 58.8 lbs.). However, even though it’s lighter, its battery has 50% higher capacity (991 Wh versus 660 Wh). It recharges faster too: 2 hours to “full” on AC from an initially empty state, versus 10 hours. Its inverter-driven AC outputs are pure sine wave in form, versus simulated. It provides one more DC output, that being 100-W USB-C with Power Delivery cognizance. And IMHO, it looks cool, too.
That all said, I actually decided to not keep it (and by the way, Meh was stellar in handling the return, going as far as issuing me a full refund while the bundle was still enroute back to them). Cons of the Energizer PowerSource Pro Battery Generator compared to the Phase2 Energy PowerSource Power Station precursor include:
- Lower cumulative inverter output power—1200-W versus 1440-W continuous/1800-W surge—with the lack of surge support in the Energizer product’s case due both to the comparative battery technologies in use and the lack of supporting circuitry (versus, say, the EcoFlow units I told you about in the recent Holiday Shopping Guide for Engineers)
- Lack of support for “chaining” the internal battery to an external supplemental one for runtime extension
- And a permanently attached topside handle, making it difficult to stack other things on top of the Energizer unit should I want to take it on a trip in my camper, for example.
So far, these are minor “nits”. This next one’s more notable, however. As I also mentioned in the recent Holiday Shopping Guide for Engineers, Battery-Biz and Energizer were vague upfront about the exact battery formulation in use within the PowerSource Pro Battery Generator, referring to it only as “lithium-ion”. Turns out, it’s NMC (Lithium Nickel Manganese Cobalt); no, I don’t know why there’s no upfront “L” in the acronym, either. NMC batteries are typically spec’d for only a few hundred recharge cycles before they need to be replaced. Ironically, this is comparable to the Phase2 Energy PowerSource Power Station’s AGM (absorbed glass mat) SLA battery cycle spec. But it’s much lower than the several thousand cycles oft-touted for LiFePO₄ (lithium iron phosphate), also known as LFP (lithium ferrophosphate), counterparts.
And even this might not have been enough to prompt a return-and-refund request, given the compelling bundle price, except for two other “gotchas”. For one thing, the Energizer PowerSource Pro Battery Generator arrived with obvious cosmetic evidence of prior use, contrary to the brand-new condition claimed in the promotion (to be clear, I blame Battery-Biz, not Meh, for this seeming bait-and-switch):
Who knows how many recharge cycles that NMC battery already had on it when I got it?
The accompanying solar panel was also pre-owned, it turns out, with similar cosmetic evidence; plus, it arrived damaged. I’ll save more details on this twist for my next post in this series. Until then, and as always, please sound off with your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- SLA batteries: More system form factors and lithium-based successors
- A holiday shopping guide for engineers: 2024 edition
- Grasping at the battery “identity crisis”
- Solar fan with dynamic battery backup for constant speed of operation
- Modern UPSs: Their creative control schemes and power sources
- Battery-Powered Large Home Appliances: Good Idea or Resource Misuse?
- The PowerStation PSX3: A portable multifunction vehicular powerhouse with a beefy battery
IP players prominent in chiplet’s 2024 diary
Chiplets—discrete semiconductor components co-designed and manufactured separately before being integrated into a larger system—are emerging as a groundbreaking approach to addressing many of the challenges faced by monolithic system-on-chip (SoC) designs. They have also become a major avenue for increasing transistor density as Moore’s Law slows down.
The IDTechEx report “Chiplet Technology 2025-2035: Technology, Opportunities, Applications” asserts that the chiplet approach resembles an SoC on a module, where each chiplet is designed to function in conjunction with others, necessitating co-optimization in design. Moreover, chiplets are increasingly associated with heterogeneous integration and advanced packaging technologies.
While large semiconductor outfits like AMD and Intel were initially prominent in the chiplets world, IP players are now increasingly visible in showcasing the potential of chiplets. That includes established IP players like Arm and Cadence as well as upstarts such as Alphawave Semi.
Cadence’s Arm-based system chiplet
Cadence joined hands with Arm in March 2024 to deliver a chiplet-based reference design, and the outcome of this collaboration is what Cadence calls the industry’s first system chiplet. It integrates processors, system IP, and memory IP within a single package while interconnected through the Universal Chiplet Interconnect Express (UCIe) standard interface.
Figure 1 The system chiplet comprises components such as a system processor, safety management processor, controllers, and PHY IPs for LPDDR5 and UCIe. Source: Cadence
The system chiplet—complying with Arm’s Chiplet System Architecture (CSA)—provides the system-level functions of the overall multi-chiplet SoC. It accommodates up to 64 GB/s peak bandwidth for the UCIe IP and 32 GB/s peak memory bandwidth for the LPDDR5 IP.
AI accelerator chiplet
Another SoC-like emulation on a chiplet platform comes from South Korean AI chip startup Rebellions, which calls its chiplet-based compute accelerator SoC “REBEL.” This AI accelerator—designed for generative AI workloads in AI and hyperscale data centers—employs Alphawave Semi’s multiprotocol I/O connectivity chiplets, which integrate PCIe 6.0, CXL 3.1, and Ethernet subsystems with UCIe 2.0 die-to-die connectivity.
Figure 2 The UCIe subsystem serves as the foundation for Rebellions’ REBEL chiplet. Source: Alphawave Semi
It’s another example of a customizable design employing high-speed connectivity and interoperable chiplet architectures. As a result, the chiplet can be deployed as modular building blocks, scalable from single cards to full racks.
The above developments demonstrate how chiplets can help overcome Moore’s Law limits while enhancing function density. Furthermore, they showcase how the chiplets ecosystem allows companies to source different parts from multiple suppliers across various regions.
Related Content
- Startup Tackles Chiplet Design Complexity
- How the Worlds of Chiplets and Packaging Intertwine
- Chiplets advancing one design breakthrough at a time
- Chiplets diary: Three anecdotes recount design progress
- Chiplets diary: Controller IP complies with UCIe 1.1 standard
Wireless fabric-based charging
You’ve likely read or seen video reports of various university research projects that use fabric—usually fashioned into a shirt—as an energy-harvesting arrangement. Some of these use fabrics that have been treated to generate small amounts of power via an enhanced triboelectric effect and the wearer’s frictional movement, others have been modified to function as thermoelectric generators (TEGs) based on body heat, and a few even try to fabricate solar cells on the material to catch ambient light.
Once again, it’s the concept of “something for (almost) nothing” with respect to energy harvesting (or scavenging) which is the lure and headline-grabber. It all sounds so attractive and enticing, and makes a lot of sense, at least in theory.
Of course, “in theory” is one thing and “in practice” is often another. Although some of these developments have been heralded in press releases along the lines of “energy harvesting from your own body to charge your smartphone” or similar, the reality is different. As far as I have been able to find out through some basic searches, none of these have been converted into standard consumer products, although some are being used for specialized sensor systems such as for athletes.
There’s a lot more to a garment than just its fabric. In the case of energy harvesting, there are connections, storage (battery or supercap), some power-management electronics, normal use and abuse, wash cycles, and more.
Recent development
However, there’s the complement to using fabrics and shirts for energy harvesting: using them for energy capture. A recent project from a Drexel University, University of Pennsylvania, and Accenture Labs team has devised a process for using MXene ink to print a textile energy grid that can be charged wirelessly at 140 kHz.
What’s “MXene” ink? It’s a nanomaterial substance which Drexel has been involved with for quite a while and with which they have considerable experience. Their work centers on the process and viability of building a small-scale power “grid” by printing it on nonwoven cotton textiles with an ink composed of MXene. MXenes were created at Drexel and are simultaneously highly conductive yet durable enough to withstand the folding, stretching and washing that clothing endures.
[More formally, MXenes are a family of two-dimensional (2D) carbides or nitrides with the formula Mₙ₊₁XₙTₓ, where n = 1, 2, 3, or 4; M is an early transition metal; X is carbon and/or nitrogen; and Tₓ is a surface termination bonded to the M element (e.g., OH, O, F, or Cl).]
The team’s textile grid was printed on a lightweight, flexible, cotton substrate the size of a small patch. It includes a printed resonator coil, which they called an MX-coil, that can convert impinging RF energy via induction and use it to charge a series of three textile supercapacitors (also previously developed by Drexel and Accenture Labs) that can store energy and use it to power electronic devices.
For various reasons, they chose direct-ink-write (DIW) printing for prototyping and development of the MX-coils. They had to perform a rheological analysis because there are several different shear rates during DIW printing: high shear through the writing needle but low shear once deposited on the substrate. As a shear-thinning material, MXene ink has low viscosity at high shear rates, allowing it to flow easily through the needle, but high viscosity at low shear, meaning that it retains its printed geometry without spreading on the substrate, Figure 1.
Figure 1 Rheological data on a MXene ink. a) Shear rate ramp, b) amplitude sweep, and c) frequency sweep. d) Schematic depicting a direct-ink-write that is used to print wireless charging MXene coils. e) A photograph of a 5×5-cm MX-coil. Light micrographs and nanoCT images of prints on f) hydrophobic and g) hydrophilic woven cotton, showing superior deposition onto hydrophilic cotton. Source: Drexel University, University of Pennsylvania, and Accenture Labs
They evaluated several different woven-cotton substrates for the best print quality. Initial coil designs were modeled in MATLAB using a conductivity of 20,000 siemens/cm and a thickness of 10 µm as assumptions for the MXene trace. The coils were modeled at a 140-kHz transmit frequency, which is within the range of the Qi standard.
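As a quick sanity check on those modeling assumptions, the implied sheet resistance of the printed trace is straightforward to compute (unit conversion only):

```c
#include <stdio.h>

int main(void)
{
    /* Modeling assumptions quoted in the text for the MXene trace */
    double sigma = 20000.0 * 100.0; /* 20,000 S/cm -> 2e6 S/m */
    double t     = 10e-6;           /* 10-um trace thickness  */

    /* Sheet resistance of a thin conductor: Rs = 1/(sigma * t) */
    double rs = 1.0 / (sigma * t);
    printf("Rs = %.3f ohms/square\n", rs); /* ~0.05 ohms/sq */
    return 0;
}
```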
While this modeling provided a solid design framework, many of the optimization parameters had to be altered to accommodate practical limitations, primarily the surface roughness of the textile. To test the effectiveness of MX-coils, they isolated several parameters, such as shape, number of turns, and trace pitch, to find an optimum combination.
Results
They fabricated an MX-coil with a 1200-µm pitch, 10 turns, and 40 passes (resistance = 80 Ω) to analyze how effectively MX-coils can charge MXene-textile supercapacitors capable of powering on-body electronics and transmitting data via Bluetooth, Figure 2.
Figure 2 a) Schematic of MXene-textile supercapacitor that is being powered by the MX-coil. b) Schematic of testing setup. c) Curves of MX-coil charge and 2 mA discharge. d) Discharging time at 2 mA as a function of MX-coil charging time. e) MX-coil charging current and MXene-textile supercapacitor voltage. f) Powering an Artemis Nano microcontroller for BLE broadcast with MXene-textile supercapacitor charged with MX-coil. Source: Drexel University, University of Pennsylvania, and Accenture Labs
A DC power supply was used to feed 10 V into the transmitter-coil circuitry, where power was transferred wirelessly to the MX-coil. AC power was then diverted through a battery-management system that rectifies the signal and limits the voltage going into the supercapacitor. Charging data is captured on a potentiostat: voltage is measured at the supercapacitor terminals, while current is measured in series between the LTC3331 power-management board and the supercapacitor.
They assessed power transfer at up to 10% efficiency, resulting in 100 milliwatts of power delivered directly to the textiles. They also used it to charge an MXene-textile supercapacitor, introducing the idea of an on-garment energy grid fully made of MXene. Additionally, they showed that MX-coils are capable of directly powering MXene-based surface electromyographic (sEMG) sensors with wireless live data transmission, using an Artemis Nano microcontroller for BLE broadcast.
The project also demonstrated some of the unique challenges faced by flexible, fabric-based charging schemes. They saw significant degradation of the cell over time; however, they could “reconstitute” the cell by squishing it under approximately 10 kilograms for several hours. This led them to speculate that they were losing contact between the carbon foil tabs and the MXene-textile electrodes rather than observing a breakdown of the electrodes themselves.
The work is detailed in their paper “MXene-enabled textile-based energy grid utilizing wireless charging,” published in Materials Today. The paper has the usual discussion, but also covers the fabrication techniques, test arrangement, production and test equipment used, and more in a supplement at the end.
What’s your view on wireless charging of fabric and clothes? Is it a discovery waiting to hit some inflection point, or are the practicalities of fabrication, longevity, and usefulness too daunting? Will it be limited to being an academic research project or will it find a role in some specialty application niches?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Nothing new about energy harvesting
- Is triboelectricity the next harvesting source?
- Energy harvesting gets really personal
- Lightning as an energy harvesting source?
Four tie-ups uncover the emerging AI chip design models
The semiconductor industry is undergoing a major realignment to serve artificial intelligence (AI) and related environments like data centers and high-performance computing (HPC). That’s partly because AI chips mandate new design skills, tools, and methodologies.
As a result, IP suppliers, chip design service providers, and AI specialists are far more prominent in the AI-centric design value chain. Below are four design use cases that underscore the realignment in chip design models serving AI applications.
- LG engages Tenstorrent
LG Electronics has partnered with Tenstorrent to enhance its design and development capabilities for AI chips tailored to its products and services. The Korean conglomerate aims to link its system design capabilities with AI-related software and algorithm technologies and thus enhance its AI-powered home appliances and smart home solutions.
Tenstorrent is known for its HPC semiconductors for specialized AI applications. The two companies will work together to navigate the rapidly evolving AI landscape to secure competitiveness in on-device AI technology. Meanwhile, LG has established a dedicated system-on-chip (SoC) R&D center focusing on system semiconductor design and development.
Figure 1 Tenstorrent CEO Jim Keller joined LG CEO William Cho at the LG Twin Towers in Yeouido, Seoul, to announce AI chip collaboration.
- Arm-based AI chiplet
Egis Technology and Alcor Micro are leveraging Neoverse Compute Subsystems (CSS)—part of the Arm Total Design ecosystem—to develop new chiplet solutions targeting HPC and generative AI applications. “As generative AI applications continue to proliferate, the demand for HPC is rising faster than ever,” said Steve Lo, chairman of Egis Group.
Figure 2 Neoverse Compute Subsystems (CSS) offer a speedier way to produce Arm-based chips by including more pre-validated components besides processor cores. Source: Arm
Egis will provide UCIe IP, an interconnect interface for chiplet architecture, while Alcor will contribute expertise for Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging services and chiplet design. Arm will offer its latest Neoverse CSS V3 platform to enable high-performance, low-latency, and highly scalable AI server solutions.
- OpenAI’s in-house chip
Tech industry darling OpenAI is working with chip designer Broadcom and chip manufacturer TSMC to create a chip specifically for its AI systems. The Silicon Valley upstart, one of the largest buyers of AI chips, uses chips both to train models, in which the AI learns from data, and to carry out inference, in which the AI applies that learning to make decisions or predictions.
According to a Reuters story, OpenAI has been working with Broadcom for months to build its first AI chip focusing on inference. The AI powerhouse has assembled a team of about 20 chip designers, which includes designers who developed Google’s famed tensor processing units (TPUs); Thomas Norrie and Richard Ho are prominent names in this design team.
- Sondrel wins HPC chip contract
Sondrel recently announced a multi-million design win for a high-performance computing (HPC) chip project. HPC is in huge demand for AI, data center, and scientific modeling applications that require tremendous computing power. The Reading, U.K.-based chip design service provider has started front-end RTL design and verification work on this HPC chip.
Figure 3 In HPC chips, it’s imperative that data flow is balanced and that processors are not stalled waiting for data. Source: Sondrel
As Sondrel’s CEO Ollie Jones puts it, HPC designs require large, ultra-complex custom chips on advanced nodes. These chips require advanced design methodologies to create billion-transistor designs at leading manufacturing process nodes.
HPC designs require multicore processors running at maximum clock frequencies while utilizing advanced memory and high-bandwidth I/O interfaces. Then there is network-on-chip (NoC) technology, which enables data to move between processors, memory and I/O while allowing the processors to reliably share and maintain data available in their caches.
The coming AI disruption
Every decade or so, a new technology transforms the semiconductor industry in profound ways. This time around, AI and related technologies such as HPC and data centers are reshaping chip fabrics to cater to unprecedented data flows inside these semiconductor devices.
It’s a trend to watch because the AI revolution is just getting started.
Related Content
- Special report: HPC enables innovation
- The role of cache in AI processor design
- AI boom and the politics of HBM memory chips
- The Future of Medical Research: HPC, AI & HBM
- Enhancing the Efficiency of AI and HPC Data Centers