EDN Network
Expanding output range of step-up converter

This is a real-life question: how do you increase the output voltage of a step-up converter? If you have unlimited access to the right ICs, you are one lucky dog, but what if you don’t? Or maybe you are tied to a specific chip by particular requirements: it is stable under certain environmental conditions, it has specific features or interfaces, or it is simply easy to source or cheap. Here, the ADP1611 step-up converter is taken as an example. An application circuit can be seen in Figure 1.
Figure 1: An application circuit for the 5 to 15 V ADP1611 step-up regulator.
Wow the engineering world with your unique design: Design Ideas Submission Guide
It has a 20-V limit on its output voltage; this limit is mainly due to the output switch of the ADP1611. Adding a tiny GaN FET such as the EPC2051 to the ADP1611 can increase this limit to above 100 V (Figure 2).
Figure 2: A 5 V to 40 V step-up regulator with the addition of the GaN FET.
The cascode shown in Figure 2 consists of the internal switch transistor and the newly added FET, and has better frequency characteristics than the internal switch alone. And if the added GaN FET also has a much lower on-resistance (RDS(on)) than the internal switch, it will not reduce efficiency.
To make the trick possible, the step-up converter should have an open drain (or open collector) output. Also, the connection of the inductor, diode, and the output of the chip must be reconfigured as shown in Figure 2. Diode D2 protects the internal switch from over-voltage.
Don’t forget to use this new value of the output voltage in your calculations. The output diode, capacitor, and inductor should also be rated to the new voltage. For the output diode, I used the HER107.
The addition of this GaN FET adds only 15 mΩ to the switch resistance of the ADP1611 (0.23 Ω)—an increase of less than 10%. Please note, the gate-source voltage (VGS) of the EPC2051 cannot exceed +6 V, so be careful.
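The figures above are easy to sanity-check. A minimal Python sketch, assuming an ideal boost converter operating in continuous conduction mode at the Figure 2 design point:

```python
VIN, VOUT = 5.0, 40.0           # Figure 2 design point, volts
R_SW, R_GAN = 0.23, 0.015       # ADP1611 internal switch and EPC2051 Rds(on), ohms

duty = 1 - VIN / VOUT           # ideal boost duty cycle, D = 1 - Vin/Vout
r_total = R_SW + R_GAN          # total on-resistance of the cascode
increase_pct = 100 * R_GAN / R_SW

print(f"duty cycle: {duty:.3f}")                    # 0.875
print(f"total switch resistance: {r_total:.3f} ohm")
print(f"resistance increase: {increase_pct:.1f}%")  # ~6.5%, under 10% as stated
```

The high duty cycle at 40 V out is a reminder to recheck the inductor current ratings when you raise the output voltage.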
—Peter Demchenko studied math at the University of Vilnius and has worked in software development.
Related Content
- GaN vs SiC: A look at two popular WBG semiconductors in power
- High-performance GaN-based 48-V to 1-V conversion for PoL applications
- GaN transistors for efficient power conversion: buck converters
- How to get 500W in an eighth-brick converter with GaN, part 1
- Thermal design for a high density GaN-based power stage
The post Expanding output range of step-up converter appeared first on EDN.
SoCs offer RF sampling and DSP muscle

Adaptive SoCs in AMD’s Versal RF series integrate direct RF sampling data converters, dedicated DSP hard IP, and AI engines in a single chip. The devices offer wideband-spectrum observability and up to 80 TOPS of digital signal processing performance in a SWaP-optimized design for radar, spectral analysis, and test and measurement applications. They also provide programmable logic and ample memory to create powerful accelerators.
Versal RF SoCs enable wideband spectrum capture and analysis with 14-bit multichannel RF ADCs and RF DACs. These converters support input/output frequencies up to 18 GHz and sampling rates up to 32 Gsamples/s. Select DSP functions, like 4-Gsample/s FFT/iFFT, channelizer, polyphase resampler, and LDPC decoder, run on dedicated hard IP blocks, cutting dynamic power by up to 80% compared to AMD soft logic.
Versal RF silicon samples and evaluation kits are expected in Q4 2025, with production shipments beginning in the first half of 2027.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post SoCs offer RF sampling and DSP muscle appeared first on EDN.
Lattice launches small-size FPGA platform

Nexus 2 is Lattice Semiconductor’s next-generation small FPGA platform, featuring improved power efficiency, edge connectivity, and security. Built on a 16-nm FinFET TSMC process, Nexus 2 FPGAs offer 65k to 220k system logic cells in a form factor that is up to 5 times smaller than similar class devices.
According to Lattice, Nexus 2 FPGAs deliver up to 3 times lower power consumption and up to 10 times greater energy efficiency for edge sensor monitoring compared to competing devices in the same class. Fast connectivity is enabled by a multiprotocol 16-Gbps SERDES, PCIe Gen 4 controller, and MIPI D-PHY/C-PHY interfaces operating at speeds up to 7.98 Gbps.
Nexus 2 FPGAs support a broad range of security functions, including 256-bit AES-GCM encryption and SHA3-512 hashing, compliant with FIPS 140-3 Level 2 standards. The devices also feature crypto agility, anti-tamper protection, and post-quantum readiness.
The Nexus 2 platform is designed to allow rapid development of new device families based on a single platform. The first of these, the Certus-N2 family of general-purpose small FPGAs, is now available for sampling.
The post Lattice launches small-size FPGA platform appeared first on EDN.
Multiprotocol wireless SoC is Matter-compliant

Joining Synaptics’ Veros IoT connectivity family is the SYN20708, a dual-core SoC that supports Bluetooth 5.4 and IEEE 802.15.4. The Matter-compliant chip enables Bluetooth Classic, Bluetooth Low Energy (BLE), Zigbee, and Thread protocols to operate concurrently on both cores, allowing simultaneous connections to multiple endpoints in heterogeneous network environments.
The SYN20708 employs a modular software architecture that simplifies development for systems requiring low latency, extended range, low power, and interoperability. It can be used in a range of consumer, automotive, healthcare, and industrial applications, including dedicated home hubs and automotive infotainment systems.
The SoC features dual-antenna maximum ratio combining (MRC) and transmit beamforming (TxBF) to enhance signal quality and double communication range. It is Bluetooth 5.4 certified and Bluetooth 6.0 compliant, enabling channel sounding, Bluetooth Classic Audio, and LE Audio. The SoC supports IEEE 802.15.4 (OpenThread and ZBOSS) up to Version 2, along with BLE Long Range, angle of departure (AoD), and angle of arrival (AoA) capabilities. Synaptics’ proprietary CoEX technology improves coexistence in the 2.4-GHz band.
The SYN20708 wireless SoC is available now.
The post Multiprotocol wireless SoC is Matter-compliant appeared first on EDN.
Multiphase PWM controller powers Blackwell GPUs

A 4-phase PWM controller from AOS, paired with industry-standard DrMOS power stages, boosts system efficiency for NVIDIA Blackwell GPU platforms. The AOZ73004CQI, which powers AI servers and graphics cards based on the Blackwell architecture, is fully compliant with the Open Voltage Regulator (OpenVReg) OVR4-22 standard.
The AOZ73004CQI’s cycle-by-cycle current limit aligns with the GPU’s overcurrent protection requirements, enabling safe power throttling to maximize performance. It features an external reference input and PWMVID interface for dynamic output voltage control. By reducing ripple effects, the controller achieves PWMVID slew rates of up to 30 mV/µs—a threefold increase over typical rates. Additionally, deep-off and shallow-off power states minimize power consumption.
The AOZ73004CQI with 4-phase PWM is not limited to using four DrMOS power stages as standard. AOS’s proprietary DrMOS design allows precise turn-on timing, enabling one PWM to drive two or three DrMOS devices. By doubling or tripling DrMOS, designers can create a high-power, multiphase system with up to 12 power stages.
Prices for the AOZ73004CQI buck controller start at $1.20 each in lots of 1000 units.
The post Multiphase PWM controller powers Blackwell GPUs appeared first on EDN.
Multichannel driver enhances automotive lighting

With 36 programmable LED current channels, the AL5887Q from Diodes drives up to 12 RGB configurations or 36 individual LEDs. The automotive-compliant linear driver provides a hardware-selectable I2C or SPI digital interface, along with an internal 12-bit PWM for precise color and brightness control. Designers can create dynamic lighting patterns and rich color depths for both interior and exterior lamps.
An external resistor sets the output current for all 36 channels, with each channel’s current digitally configurable up to 70 mA without the need for paralleling. An automatic power-saving mode reduces current to 15 µA, and a quiescent shutdown mode cuts it to 1 µA when all LEDs are off for more than 30 ms, minimizing energy draw from the car’s battery.
The AL5887Q includes multiple protection features, such as an open-drain fault pin with diagnostic fault registers and individual fault mask registers. It also provides overtemperature protection with a pre-OTP warning.
The AEC-Q100 qualified AL5887Q driver costs $1.13 each in lots of 1000 units.
The post Multichannel driver enhances automotive lighting appeared first on EDN.
Synthesize precision bipolar Dpot rheostats

The ubiquitous variable resistance circuit network shown in Figure 1…
Figure 1 Classic adjustable resistance; Rmax = Rs + Rr; Rmin = Rs.
…can be accurately synthesized in solid state circuitry built around a digital potentiometer (Dpot) as discussed in “Synthesize precision Dpot resistances that aren’t in the catalog.” Its accuracy holds up despite pot resistance element tolerance and is independent of wiper resistance. See Figure 2 for the circuit.
Figure 2 Synthetic Dpot evades problems by using FET shunt, precision fixed resistors, and op-amp; Rab > Rmax; Rp = 1/(1/Rmax – 1/Rab); Rs = 1/(1/Rmin – 1/Rab – 1/Rp).
But a sticky question remains: What if the polarity of the Va – Vb differential is subject to reversal? Figure 1 can of course accommodate this without a second thought, but it’s a killer for Figure 2.
A simple—but unfortunately unworkable—solution is shown in Figure 3.
Figure 3 Simply paralleling complementary N and P channel MOSFETs might look good but won’t work beyond a few hundred mV of |Va – Vb|.
The problem arises of course from the parasitic body diodes common to MOSFETs, which conduct and bypass the transistor if the reverse polarity source-drain differential is ever more than a few tenths of a volt.
Figure 4 shows the simplest (not very simple) solution I’ve been able to come up with.
Figure 4 Two complementary anti-series FET pairs connected in parallel allow bipolar operation.
Inspection of Figure 4 shows that a couple of extra FETs have been added in anti-series with the paralleled complementary transistors of Figure 3, together with polarity comparator amplifier A2. A2 enables the Q1/Q2 pair for (Va – Vb) > 0, and the Q3/Q4 pair for (Va – Vb) < 0.
The TLV9152 with its 4.5-MHz gain-bandwidth, 400-ns overload recovery, and 21-V/µs slew rate is a fairly good choice for this application. Nevertheless, significant crossover distortion can be expected to creep in for low signal amplitudes and frequencies above 10 kHz or so.
Design equations are unchanged from Figure 2.
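The Figure 2 design equations can be checked numerically. A minimal sketch; the Rmax, Rmin, and Rab targets below are assumed illustrative values, not from the article:

```python
# Assumed example: synthesize a 1 kOhm..10 kOhm adjustable resistance
# from a Dpot whose end-to-end resistance Rab exceeds Rmax.
R_MAX, R_MIN = 10e3, 1e3
R_AB = 20e3                     # must satisfy Rab > Rmax

# Design equations from Figure 2, written with explicit reciprocals:
Rp = 1 / (1 / R_MAX - 1 / R_AB)
Rs = 1 / (1 / R_MIN - 1 / R_AB - 1 / Rp)

print(f"Rp = {Rp:.0f} ohm")     # 20000
print(f"Rs = {Rs:.0f} ohm")     # ~1111
```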
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Synthesize precision Dpot resistances that aren’t in the catalog
- Keep Dpot pseudologarithmic gain control on a leash
- Dpot pseudolog + log lookup table = actual logarithmic gain
- Digital potentiometer simulates log taper to accurately set gain
- Op-amp wipes out DPOT wiper resistance
- Adjust op-amp gain from -30 dB to +60 dB with one linear pot
The post Synthesize precision bipolar Dpot rheostats appeared first on EDN.
Understanding currents in DC/DC buck converter input capacitors

All buck converters need capacitors on the input. Actually, in a perfect world, if the supply had zero output impedance and infinite current capacity and the tracks had zero resistance or inductance, you wouldn’t need input capacitors. But since such ideal conditions are vanishingly unlikely, it’s best to assume that your buck converter will need input capacitors.
Input capacitors store the charge that supplies the current pulse when the high-side switch turns on; they are recharged by the input supply when the high-side switch is off (Figure 1).
Figure 1 A simplified waveform of the input capacitor current during the buck DC/DC switching cycle, assuming infinite output inductance. Source: Texas Instruments
The switching action of the buck converter charges and discharges the input capacitor, causing the voltage across it to rise and fall. This voltage change represents the input voltage ripple of the converter at the switching frequency. The input capacitor filters the input current pulses to minimize the ripple on the input supply voltage.
The amount of capacitance governs the voltage ripple, and the capacitor must be rated to withstand the root-mean-square (RMS) ripple current. The RMS current calculation assumes a single input capacitor with no equivalent series resistance (ESR) or equivalent series inductance (ESL); the finite output inductance accounts for the current ripple on the input side, as shown in Figure 2.
Figure 2 Input capacitor ripple current and calculated RMS current are displayed by TI’s Power Stage Designer software. Source: Texas Instruments
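The idealized calculation can be sketched in a few lines of Python. Under the infinite-inductance assumption the capacitor current is a rectangular pulse, and the well-known closed form applies; the operating point below is the one used later in this article:

```python
import math

def input_cap_irms(vin, vout, iout):
    """Idealized buck input-capacitor RMS current: the capacitor sources
    iout*(1-D) during the on-time and absorbs iout*D during the off-time,
    giving Irms = iout * sqrt(D * (1 - D))."""
    d = vout / vin              # ideal buck duty cycle in CCM
    return iout * math.sqrt(d * (1 - d))

# 9 V in, 3 V out, 12.4 A load: the example operating point in this article
print(f"{input_cap_irms(9, 3, 12.4):.2f} A RMS")   # 5.85
```

Finite inductor ripple raises this slightly, which is why the Power Stage Designer result comes out near 6 A RMS for the same operating point.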
Current sharing between parallel input capacitors
Most practical implementations use multiple input capacitors in parallel to provide the required capacitance. These typically include a small-value, high-frequency multilayer ceramic capacitor (MLCC), for example, 100 nF, and one or more larger MLCCs (10 µF or 22 µF), sometimes accompanied by a polarized large-value bulk capacitor (100 µF).
Each capacitor performs a similar yet distinct function: the high-frequency MLCC decouples the fast transient currents caused by the MOSFET switching process in the DC/DC converter; the larger MLCCs source the current pulses to the converter at the switching frequency and its harmonics; and the bulk capacitor supplies the current required to respond to output load transients when the impedance of the input source prevents it from responding quickly enough.
Where used, a large bulk capacitor has a significant ESR, which provides some damping of the input filter’s Q factor. Depending on its equivalent impedance at the switching frequency relative to the ceramic capacitors, the capacitor may also have significant RMS current at the switching frequency.
The datasheet of a bulk capacitor specifies a maximum RMS current rating to prevent self-heating and ensure that its lifetime is not degraded. The MLCCs have a much smaller ESR and correspondingly much less self-heating because of the RMS current. Even so, circuit designers sometimes overlook the maximum RMS current specified in ceramic capacitor datasheets. Therefore, it is important to understand the RMS currents in each of the individual input capacitors.
If you are using multiple larger MLCCs, you can combine them and enter the equivalent capacitance into the current-sharing calculator for calculating RMS currents in parallel input capacitors. The calculation of RMS current considers the fundamental frequency only. Nonetheless, this calculation tool is a useful refinement of the single input capacitor RMS current calculation.
Consider an application where VIN = 9 V, VOUT = 3 V, IOUT = 12.4 A, fSW = 440 kHz and L = 1 µH. The three parallel input capacitors could then be 100 nF (MLCC), ESR = 30 mΩ, ESL = 0.5 nH; 10 µF (MLCC), ESR = 2 mΩ, ESL = 2 nH; and 100 µF (bulk), ESR = 25 mΩ, ESL = 5 nH. The ESL here includes the PCB track inductance.
Figure 3 shows the capacitor current-sharing calculator results for this example. The 100-nF capacitor draws a low RMS current of 40 mA as expected. The larger MLCC and bulk capacitors divide their RMS currents more evenly at 4.77 A and 5.42 A, respectively.
Figure 3 Output is shown from TI’s Power Stage Designer capacitor current-sharing calculator. Source: Texas Instruments
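The Figure 3 split can be reproduced with a simple phasor current divider: at the fundamental, each branch carries a share of the total ripple current in proportion to its admittance. A sketch under that single-harmonic, sinusoidal-ripple assumption, using the branch values from the example above:

```python
import math

FSW = 440e3                     # switching frequency, Hz
I_TOTAL = 6.0                   # total input-capacitor RMS current, A
branches = [                    # (C, ESR, ESL) from the example above
    (100e-9, 0.030, 0.5e-9),    # C1: high-frequency MLCC
    (10e-6,  0.002, 2e-9),      # C2: larger MLCC
    (100e-6, 0.025, 5e-9),      # C3: bulk capacitor
]

w = 2 * math.pi * FSW
# Branch impedance at the fundamental: Z = ESR + j*(w*ESL - 1/(w*C))
Z = [complex(esr, w * esl - 1 / (w * c)) for c, esr, esl in branches]
y_total = sum(1 / z for z in Z)
shares = [abs(1 / z) / abs(y_total) * I_TOTAL for z in Z]

for (c, _, _), i in zip(branches, shares):
    print(f"{c * 1e6:g} uF: {i:.2f} A RMS")
```

This reproduces the 40-mA, 4.77-A, and 5.42-A figures quoted above; a full tool additionally folds in the harmonics.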
In reality, the actual capacitance of the 10-µF MLCC is somewhat lower because of the voltage applied. For example, a 10-µF, 25-V X7R MLCC in an 0805 package might only provide 30% of its rated capacitance when biased at 12 V, in which case the large bulk capacitor’s current is 6.38 A, which may exceed its RMS rating.
The solution is to use a larger capacitor package size and parallel multiple capacitors. For example, a 10-µF, 25-V X7R MLCC in a 1210 package retains 80% of its rated capacitance when biased at 12 V. Three of these capacitors have a total effective value of 24 µF when used for C2 in the capacitor current-sharing calculator.
Using these capacitors in parallel reduces the RMS current in the large bulk capacitor to 3.07 A, which is more manageable. Placing the three 10-µF MLCCs in parallel also reduces the overall ESR and ESL of the C2 branch by a factor of three.
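The derating arithmetic above, spelled out as a trivial sketch (the retention percentages are the ones quoted in the text):

```python
C_NOM = 10e-6                   # rated capacitance of the MLCC, farads
c_0805 = C_NOM * 0.30           # 0805 X7R at 12 V bias retains ~30%
c_1210 = C_NOM * 0.80           # 1210 X7R at 12 V bias retains ~80%
c2_branch = 3 * c_1210          # three 1210 parts in parallel for C2

print(f"effective C2: {c2_branch * 1e6:.0f} uF")   # 24 uF, as entered in Figure 3
```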
The low capacitance of the 100-nF MLCC and its relatively high ESR mean that this capacitor plays little part in sourcing the current at the switching frequency and its lower-order harmonics. The function of this capacitor is to decouple nanosecond current transients seen at the switching instants of the DC/DC converter’s MOSFETs. Designers often refer to it as the high-frequency capacitor.
In order to be effective, it’s essential to place the high-frequency capacitor as close as possible to the input voltage and ground terminals of the regulator using the shortest (lowest inductance) PCB routing possible. Otherwise, the parasitic inductance of the tracks will prevent this high-frequency capacitor from decoupling the high-frequency harmonics of the switching frequency.
It’s also important to use as small a package as possible to minimize the ESL of the capacitor; a smaller capacitor will have a higher self-resonant frequency. Depending on its ESR and impedance curve, a high-frequency capacitor with a value below 100 nF can be beneficial for decoupling at a specific frequency.
Similarly, always place the larger MLCCs as close as possible to the converter to minimize their parasitic track inductance and maximize their effectiveness at the switching frequency and its harmonics.
Figure 3 also shows that, although the RMS current in the overall input capacitor (were it a single equivalent capacitor) is 6 A, the sum of the RMS currents in the C1, C2, and C3 branches is greater than 6 A and does not follow Kirchhoff’s current law. The law applies only to instantaneous values, or to the complex (phasor) addition of the time-varying, phase-shifted currents.
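A small numeric demonstration of that point, using two arbitrary phase-shifted sinusoidal branch currents (the 60-degree shift is an assumption chosen for illustration):

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N = 1000
t = [k / N for k in range(N)]                               # one full period
i1 = [math.sin(2 * math.pi * x) for x in t]                 # branch 1
i2 = [math.sin(2 * math.pi * x + math.pi / 3) for x in t]   # branch 2, shifted 60 deg
itot = [a + b for a, b in zip(i1, i2)]                      # KCL holds instantaneously

print(f"sum of branch RMS values: {rms(i1) + rms(i2):.3f}")  # ~1.414
print(f"RMS of the total current: {rms(itot):.3f}")          # ~1.225, smaller
```

The instantaneous currents add exactly, but the RMS values do not, because the branch currents are phase-shifted with respect to one another.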
Using PSpice for TI or TINA-TI software
Designers who need more than three input capacitor branches for their applications can use PSpice for TI simulation software or TINA-TI software. These tools enable more complex RMS current calculations, including harmonics alongside the fundamental switching frequency and the use of a more sophisticated model for the capacitor, which captures the frequency-dependent nature of the ESR.
TINA-TI software can compute the RMS current in each capacitor branch in the following way: run the simulation, click the desired current waveform to select it, and from the Process menu option in the waveform window, select Averages. TINA-TI software uses a numerical integration over the start and end display times of the simulation to calculate the RMS current.
Figure 4 shows the simulation view. For clarity in this example, we omitted the 100-nF capacitor because its current is very low and contributes to ringing at the switching edges. The Power Stage Designer software analysis of the total input capacitor current waveform for the converter calculates the input current (IIN), which is 6 ARMS, the same value as for Figure 2.
Figure 4 Output from TINA-TI software shows the capacitor branch current waveforms and calculated RMS current in C2. Source: Texas Instruments
The capacitor current waveforms in each branch are quite different compared to the idealized trapezoidal waveform that ignores their ESR and ESL. This difference has implications for DC/DC converters such as the TI LM60440, which has two parallel voltage input (VIN) and ground (GND) pins.
The mirror-image pin configuration enables designers to connect two identical parallel input loops, meaning that they can place double input capacitance (both high frequency and bulk) in parallel close to the two pairs of power input (PVIN) and power ground (PGND) pins. The two parallel current loops also halve the effective parasitic inductance.
In addition, the two mirrored-input current loops have equal and opposite magnetic fields, allowing some H-field cancellation that further reduces the parasitic inductance (Figure 5). Figure 4 suggests that if you don’t carefully match the parallel loops in capacitor values, ESR, ESL and layout for equal parasitic impedances, then the current in the parallel capacitor paths can differ significantly.
Figure 5 Parallel input and output loops are shown in a symmetrical “butterfly” layout. Source: Texas Instruments
Software tool use considerations
To correctly specify input capacitors for buck DC/DC converters, you must know the RMS currents in the capacitors. You can estimate the currents from equations, or more simply by using software tools like TI’s Power Stage Designer. You can also use this tool to estimate the currents in up to three parallel input capacitor branches, as commonly used in practical converter designs.
More complex simulation packages such as TINA-TI software or PSpice for TI can compute the currents, including harmonics and fundamental frequencies. These tools can also model frequency-dependent parasitic impedance and many more parallel branches, illustrating the importance of matching the input capacitor combinations in mirrored input butterfly layouts.
Dr. Dan Tooth is Member of Group Technical Staff at Texas Instruments. He joined TI in 2007 and has been a field application engineer for over 17 years. He is responsible for supporting TI’s analog and power product portfolio in ADAS, EV and diverse industrial applications.
Dr. Jim Perkins is a Senior Member of Technical Staff at Texas Instruments. He joined TI in 2011 as part of the acquisition of National Semiconductor and has been a field application engineer for over 25 years. He is now mainly responsible for supporting TI’s analog and power product portfolio in grid infrastructure applications such as EV charging and smart metering.
Related Content
- Step-Down DC/DC Converter
- DC/DC Converter Considerations for Smart Lighting Designs
- Choosing The Right Switching Frequency For Buck Converter
- Use DC/DC buck converter features to optimize EMI in automotive designs
- Reducing Noise in DC/DC Converters with Integrated Ferrite-bead Compensation
The post Understanding currents in DC/DC buck converter input capacitors appeared first on EDN.
The Google Chromecast Gen 2 (2015): A form factor redesign with beefier Wi-Fi, too

In mid-2023, Google subtly signaled that its first-generation Chromecast A/V streaming receiver, originally introduced in 2013, had reached the end of the support road. I’d already torn one down, but I had several others still in use, which I promptly replaced with 3rd-generation (2018) successors off eBay. And while I was at it, I picked up an additional “rough”-condition one, plus intermediary 2nd-generation (2015) and Ultra (2016) well-used devices, for teardown purposes.
One year (and a couple of months) later, and a couple of months before I write these words in late October 2024, Google end-of-life’d the entire Chromecast product line, also encompassing the 4K (introduced in 2020) and HD (2022) variants of the Chromecast with Google TV, both of which I’d already torn down, replacing everything with its newly unveiled TV Streamer:
So, I guess you can say I’m now backfilling from a disassembly-and-analysis standpoint. Today you’ll see the insides of the 2nd generation (2015) Chromecast:
with the Ultra (2016), notably kitted with the Stadia online-streamed gaming controller:
and 3rd generation (2018) to follow in the coming months.
Truth be told, I’ve also got a couple of Chromecast Audio streamers on hand, but as they’re so rare and prized by audiophiles (and wannabes like me), I’m loath to (destructively, at least) take one apart. Time will tell if I change my mind and/or get more disassembly-skilled in the future…
Anyhoo, let’s get to tearing down, beginning with the device I eBay-purchased last summer, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
As you probably already noticed from the “stock” shots I’ve previously shared, the 2nd generation Chromecast marked a fairly radical physical design departure from its forebear. I’ll begin with something that might seem to be a “nit” at first glance but was actually a big deal to many users. That USB-A to micro-USB cable you see on the left was only 1’ long with the first-gen Chromecast; now it’s 5’ long. Much more convenient, especially if you’re getting power from an outlet-installed “wall wart” versus a TV back panel USB connector:
The device itself evolved in more visibly obvious ways. The first-gen Chromecast looked a bit like a USB flash “stick”, cigar-shaped with a stubby HDMI connector jutting out of one end. Google bundled a short female-to-male extender cable with it, which frequently got quickly lost. Now, the extender cable is integrated, and the device itself is circular in shape. This transition has multiple benefits: two obvious, and another that’s conjecture on my part. The extension cable simplifies hookup to a TV’s crowded-connector backside (and, as I’ve already mentioned, won’t be inadvertently discarded). Also, as you’ll soon see, the 2nd generation round Chromecast includes multiple Wi-Fi antennae, arranged around the partial circumference of the also-circular PCB. And here’s the conjecture part: the 1st generation Chromecast was plagued by overheating issues, which I’m guessing the redesign helps mitigate.
I’m calling this the “front”, although, as I’ve mentioned in past teardowns, I use this term, along with “back” and “sides”, loosely, because orientation is HDMI plug- and cable-orientation dependent, and therefore inconsistent from one TV and broader setup to another. Mine’s black (duh); it also came in “Coral” (red) and “Lime” (also referred to in some places as “Lemonade”, yellow) shades:
At the bottom is the micro-USB power input jack, along with a reset switch to its left and a multi-color status LED to its right:
When not in use, the HDMI connector magnetically attaches to the back of the circular main body…for unclear-to-me reasons (ease of portability?). I apparently wasn’t alone in my puzzlement, because Google dropped this particular “feature” for the third-generation successor:
Here the HDMI cable is extended; the magnet is that shiny rectangle with rounded corners (which I just learned today is called a stadium, presumably referencing the shape of an athletic entertainment facility) toward the top:
Here’s what the HDMI cable end looks like:
And once more back to the back (see what I did there?) of the device for a closeup of the various markings, including the FCC ID, A4RNC2-6A5 (which has an interesting historical twist I’ll revisit shortly):
Time to dive inside. From my advance research, I already knew that the glue holding the two halves of the body together was particularly stubborn stuff. This gave me an opportunity to try out a new piece of gear I’d recently acquired, iFixit’s iOpener kit, consisting of a long, narrow insulated heat-retaining bag which you put in the microwave oven for 30 seconds before using:
plus other handy disassembly accessories (the iOpener is also optionally sold standalone):
Strictly speaking, the iOpener is intended for removing the screen from a tablet or the like:
but I managed to get it to work (with a “bit” of overlap) with the Chromecast, too:
After that, assisted by a couple of the Opening Picks also included in the kit:
I was inside, with minimal cosmetic damage to the case (although I still harbored no delusions that my remaining disassembly steps would be non-destructive).
Here’s the inside of the top half of the case:
And here’s our first glimpse of the PCB topside, complete with a sizeable Faraday Cage:
Did you notice those three screws holding the PCB in place? You know what comes next:
Ladies and gentlemen, we have liftoff:
This is still the PCB topside, but alongside it (to the left) is the now-revealed inside of the top of the case, complete with an LED light pipe assembly, a dollop of thermal paste, and a round gray heatsink that does double duty as the attractant for the HDMI cable connector magnet. Also note the reset switch on the lower left edge:
Flipping the insides upside down reveals the PCB underside for the first time; now the LED is clearly visible. And there’s another Faraday cage, to which the dollop of thermal paste connects:
Let’s return to the PCB topside, specifically to its Faraday cage, for removal first:
In past teardowns, to get it off, I’ve relied either on fairly flimsy-tip devices like the iSesamo:
Or just brute-forced it with a flat-head screwdriver, which inevitably resulted in both a mangled cage and a mangled PCB. This time, however, I pressed into service another new tool in my arsenal, iFixit’s Jimmy, which, in the words of Goldilocks, was “just right”:
As you may have already inferred, two of the three earlier screws did double-duty, not only holding the PCB in place within the lower half of the case but also keeping the PCB-connector end of the HDMI cable intact. After removing them and then the Cage, the HDMI cable was free:
I’m sure that in the earlier shots you already noticed a second dollop of thermal paste between the large IC in the lower left quadrant and the Faraday Cage:
A bit of rubbing alcohol cleaned it off sufficiently for me to ID it and the other components on the board:
The previously paste-encrusted IC in the lower left quadrant is Marvell’s Armada 88DE3006 1500 Mini Plus dual-core ARM Cortex-A7 media processor, an uptick from the Marvell Armada DE3005-A1 1500-mini SoC in the first-generation Chromecast. To its right, barely visible under the Cage-mounting frame, is a Toshiba TC58NVG1S3HBAI6 2-Gbit NAND flash memory; curiously, its predecessor in the first-gen Chromecast, a Micron MT29F16G08, was 16 Gbits (8x larger) in capacity. In the lower right corner is a chip marked:
MRVL
21AA3
521GDT
which iFixit believes implements the system’s power management control capabilities. And in the lower left corner is another frame-obscured Marvell IC, marked as follows (you’ll have to trust me on this one):
MRVL
G868
524GBD
whose identity is unclear to me (and which iFixit didn’t even take a stab at identifying), although it apparently was also in the first-gen Chromecast. Readers?
Flipping the board back over to its underside, and going through the same Faraday cage removal (this time also with preparatory thermal paste cleanup) process as before:
Reveals our third dollop of thermal paste, inside the second (underside) cage in the design:
Time for more rubbing alcohol-plus-tissues:
The dominant ICs this time are a Samsung K4B4G1646D-BY 4-Gbit DDR3L SDRAM to the right (this time around, the same capacity as in the first-gen Chromecast) and Marvell’s Avastar 88W8887 wireless controller (Wi-Fi, Bluetooth, NFC, and FM, not all of them used). At this point, I’ll refer back to the “interesting historical twist” teaser from before. For one thing, the Avastar 88W8887’s precursor in the first-gen Chromecast was an AzureWave AW-NH387, a 2.4 GHz-only Wi-Fi (plus Bluetooth and FM receiver, the latter again unused) controller. This time, however, you get dual-band 1×1 802.11ac, reflective of the multiple PCB-embedded antennae you see around the PCB sides.
And what about Bluetooth? Here’s where things get really interesting. At its initial 2015 introduction, Bluetooth capabilities were innate in the silicon but not enabled in software. A couple of years later, however, Google went back to the FCC for recertification, not because any of the hardware had changed but just because a new firmware “push” had turned on Bluetooth support. Why? I don’t know for sure, but I have a theory.
Initially, Google relied on a wonky app called Device Utility that forced you to jump through a bunch of hoops in a poorly documented, specific sequence with precise step-by-step timing in order for initial activation to complete successfully:
Subsequent setup steps were done through the TV to which the Chromecast was connected over HDMI. Google later switched to handling these latter setup steps through its Google Home app (initially launched in 2016 and substantially revamped in 2023), which presumably leverages Bluetooth, hence the subsystem software-enable and FCC recertification. But for legacy devices, initial activation still needed to occur through Device Utility.
And with that, closing in on 1,800 words, I’ll wrap up for today. Your thoughts are as-always welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Teardown: Chromecast streams must have gotten crossed
- Google’s Chromecast: Is “proprietary” necessary for wireless multimedia streaming success?
- Google’s Chromecast: impressively (and increasingly) up for the video streaming task
The post The Google Chromecast Gen 2 (2015): A form factor redesign with beefier Wi-Fi, too appeared first on EDN.
Profile of an MCU promising AI at the tiny edge

The common misconception about artificial intelligence (AI) is that this up-and-coming technology belongs in data center and high-performance compute (HPC) applications. This is no longer true, says Tom Hackenberg, principal analyst for the Memory and Computing Group at Yole Group, commenting on STMicroelectronics’ new microcontroller, which embeds a neural processing unit (NPU) to support AI workloads at the tiny edge.
ST has launched its most powerful MCU to date to cater to a new range of embedded AI applications. “The explosion of AI-enabled devices is accelerating the inference shift from the cloud to the tiny edge,” said Remi El-Ouazzane, president of Microcontrollers, Digital ICs and RF Products Group (MDRF) at STMicroelectronics.
He added that inferring at the edge brings substantial benefits, including ultra-low latency for real-time applications, reduced data transmission, and enhanced privacy and security. Not sharing data with the cloud also leads to sustainable energy use.
STM32N6, sampling to select customers since October 2023, is now shipping in high volume. It integrates a proprietary NPU, the Neural-ART Accelerator, which can deliver 600 times more machine-learning performance than today’s high-end STM32 MCUs. That enables the new MCU to run computer vision, audio processing, sound analysis and other algorithms that are currently beyond the capabilities of small embedded systems.
Figure 1 STM32N6 offers the benefits of an MPU-like experience in industrial and consumer applications while leveraging the advantages of an MCU. Source: STMicroelectronics
“Today’s IoT edge applications are hungry for the kind of analytics that AI can provide,” said Yole’s Hackenberg. “The STM32N6 is a great example of the new trend melding energy-efficient microcontroller workloads with the power of AI analytics to provide computer vision and mass sensor-driven performance capable of great savings in the total cost of ownership in modern equipment.”
Besides the AI accelerator, STM32N6 features an 800-MHz Arm Cortex-M55 core and 4.2 MB of RAM for real-time data processing and multitasking, which provide sufficient compute to complement the AI acceleration. As a result, the MCU can run AI models to carry out tasks like segmentation, classification, and recognition. Moreover, an image signal processor (ISP) incorporated into the MCU provides direct signal processing, which allows engineers to use simple and affordable image sensors in their designs.
Design testimonials
Lenovo Research, which rigorously evaluated STM32N6 in its labs, acknowledges its neural processing performance and power efficiency claims. “It accelerates our research of ‘AI for All’ technologies at the edge,” said Seiichi Kawano, principal researcher at Lenovo Research. LG, currently incorporating AI features into smartphones, home appliances and televisions, has also recognized STM32N6’s AI performance for embedded systems.
Figure 2 Meta Bounds has employed the AI-enabled STM32N6 in its AR glasses. Source: STMicroelectronics
Then there is Meta Bounds, a Zhuhai, China-based developer of consumer-level augmented reality (AR) glasses. Its founding partner, Zhou Xing, acknowledges the vital role that STM32N6’s embedded AI accelerator, enhanced camera interfaces, and dedicated ISP played in the development of the company’s ultra-lightweight and compact form factor AI glasses.
Besides these design testimonials, what’s important to note is the transition from MPUs to MCUs for embedded inference. That eliminates the cost of cloud computing and related energy penalties, making AI a reality at the tiny edge.
Figure 3 The shift from MPU to MCU for AI applications saves cost and energy and it lowers the barrier to entry for developers to take advantage of AI-accelerated performance for real-time operating systems (RTOSes). Source: STMicroelectronics
Take the case of Autotrak, a trucking company in South Africa. According to its engineering director, Gavin Leask, fast and efficient AI inference within the vehicle can give the driver a real-time audible warning to prevent an upcoming incident.
In use cases like these, AI-enabled MCUs can run computer vision, audio processing, sound analysis and more at much lower cost and power than MPUs.
Related Content
- Getting a Grasp on AI at the Edge
- Implementing AI at the edge: How it works
- It’s All About Edge AI, But Where’s the Revenue?
- Edge AI accelerators are just sand without future-ready software
- Edge AI: The Future of Artificial Intelligence in embedded systems
The post Profile of an MCU promising AI at the tiny edge appeared first on EDN.
Oscilloscope probing your satellite

When designing space electronics, particularly during the early prototyping stage or if qualification or flight hardware doesn’t function as intended, the humble oscilloscope is often used to verify the presence, timing and quality of key signals.
When debugging your space electronics using an oscilloscope, many different measurement types are now possible, e.g., voltages, currents, power rails, digital logic, EMC, optical, TDR, and high voltage. For each of these applications, the specification of the probe that makes contact with your device under test (DUT) determines the quality of your test, e.g., its frequency response and bandwidth, how it loads the DUT, whether it matches the input impedance of your scope, and where and how you attach the signal and ground contact tips. Often the probe is overlooked or taken for granted. During many of my visits to customers, I have seen misdiagnoses due to poor probing technique, and incorrect decisions driven by the specifications of the oscilloscope and/or probe. Ultimately, this has impacted the ability of clients to deliver hardware and sub-systems to cost and schedule.
The impact of scope probes on your test
An ideal probe would not load the DUT, would transmit the signal under investigation from its tip to the oscilloscope with perfect fidelity, and would have zero attenuation, zero capacitance, zero inductance, infinite bandwidth, and linear-phase characteristics at all frequencies.
In reality, a probe is a circuit with its own electrical characteristics, and when it makes contact with your DUT, it suddenly becomes part of a larger system, its specification combining with that of the circuit of interest. To make a measurement, the probe must “steal” some energy from the DUT and transfer it to the scope’s inputs, ideally without loading the DUT, degrading the signal to be measured, or impairing the functionality of the DUT. Probes and oscilloscopes have an inherent capacitance, creating a low-pass filter that impacts higher frequencies, i.e., bandwidth, slowing rise times. Probes have an intrinsic resistance, forming a voltage divider which reduces signal amplitude. Leads attached to probes add unwanted inductance, resulting in overshoot and ringing on the display. Leads can also act as antennae, picking up electrical noise from the surrounding environment. None of these effects may actually be present in the signal you are trying to measure, so it’s not a surprise if tests are misleading and the diagnosis wrong!
Probing preferences
For general-purpose testing, the key is to use probes which have a high input resistance, typically 1 to 10 MΩ, to minimize the current taken from the DUT, as well as a low input capacitance, usually 10 to 30 pF, to ensure high impedance at higher frequencies. A low impedance would adversely load the DUT, impacting the measured signal level.
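To get a feel for the numbers, here’s a quick back-of-the-envelope sketch in Python of the first-order RC model described above; the 50-Ω source resistance and probe capacitances are illustrative assumptions, not values from any specific probe datasheet:

```python
import math

def probe_loading_f3db(r_source_ohms: float, c_probe_farads: float) -> float:
    """-3 dB frequency of the low-pass filter formed by the source
    resistance and the probe's input capacitance (first-order RC model)."""
    return 1.0 / (2.0 * math.pi * r_source_ohms * c_probe_farads)

# A 10 pF probe tip on a 50-ohm source rolls off around 318 MHz,
# while 30 pF brings that down to roughly 106 MHz.
print(f"{probe_loading_f3db(50, 10e-12) / 1e6:.0f} MHz")
print(f"{probe_loading_f3db(50, 30e-12) / 1e6:.0f} MHz")
```

Even within the “usual 10 to 30 pF” range, the capacitance at the tip makes a 3x difference to where the probe itself starts rolling off your signal.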
As frequencies rise, to avoid reflections due to capacitive and inductive reactances, the source, load, and probe characteristic impedance should be matched, usually 50 Ω. As a guide, an interconnect can be considered a transmission line potentially susceptible to reflections if its time delay, or its critical length, is greater than one third the signal rise time.
Figure 1 compares the signal integrity measured from a 10-MHz clock using both 1 MΩ and 50 Ω input impedances. The former (light blue trace) contains reflections while the fidelity of the latter (green trace) is superior!
Figure 1 The difference in signal integrity due to oscilloscope input impedance. Source: Rajan Bedi
Although I still have several, it’s rare for me to use the much-abused, 50-Ω, BNC oscilloscope coax cable with crocodile clips to accurately test and measure the latest space electronics. However, occasionally I do: the question is, when can you use this $10 “probe” rather than the expensive $10k ones?
A general-purpose solution
Last week I had an FPGA that wasn’t communicating with its JTAG programmer and needed to check that the board was receiving the TCK, TMS, and TDI inputs, and outputting TDO. All the expensive, sexy probes were in use; however, due to the low frequency of the JTAG signals, I knew the trusted 50-Ω BNC coax cable (Figure 2) could verify these signals with good integrity.
Figure 2 The ubiquitous 50-Ω BNC oscilloscope coax cable found in most labs. Source: RS PRO
High-frequency solutions
Knowing the maximum signal frequency
Today’s satellite and spacecraft electronics operate at higher frequencies, with digital signals having faster edges and lower voltages, close to larger switching currents and sensitive analog signals. Many small satellites squeeze all these functions onto one tiny PCB. To accurately measure signals, observe events, and make the correct decisions, the specifications of the probe and the oscilloscope become paramount; in particular, bandwidth.
From Fourier analysis, the bandwidth of a digital signal with a 50% duty cycle can be approximated from its 10 to 90% rise time, tr, as:

BW ≈ 0.35 / tr
You might not know the edge rates of the digital signals you may have to measure in the future, but if you have an appreciation of the highest frequency of interest, e.g., a clock, an estimate of rise time, and hence bandwidth, can be calculated. For example, if one assumes the rise time comprises 7% of the total period, the signal bandwidth can be estimated as 5*Fclk, i.e., up to the fifth harmonic!
Knowing the maximum signal frequency allows you to select the appropriate oscilloscope and probe bandwidths for accurate measurements. To minimize the in-band effects of their respective 3-dB amplitude roll-offs, these should be 3 to 5 times higher than the largest harmonic contained within the signal of interest.
As an example, for a sine wave with a fundamental frequency of 700 MHz, the oscilloscope and probe bandwidths should each be between 2.1 and 3.5 GHz. Likewise, for other analog signals such as modulated carriers, choose bandwidths at least three times larger than their highest frequency component.
For a digital signal with an edge rate of 0.5 ns, its resulting bandwidth can be approximated by 0.35/0.5 ns = 700 MHz. The measurement bandwidth of the scope and probe should be between 2.1 and 3.5 GHz to accurately capture the fidelity of the fifth harmonic. Similarly, for rise and fall times: if you want to accurately see these transitions, the edge rates of your probe and oscilloscope should be three to five times faster than the signal of interest. If you want to validate the specifications of your instrumentation, input a pulse with rise/fall times three to five times faster!
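The two rules of thumb above (BW ≈ 0.35/tr, and a 3x-to-5x scope/probe margin) can be sketched as a couple of helper functions; a minimal illustration, not a substitute for the full specifications of your instrumentation:

```python
def signal_bandwidth(rise_time_s: float) -> float:
    """Approximate bandwidth of a digital edge: BW ~= 0.35 / t_rise
    (10-90% rise time, first-order system assumption)."""
    return 0.35 / rise_time_s

def scope_bandwidth_range(signal_bw_hz: float) -> tuple:
    """Rule of thumb: scope/probe bandwidth should be 3x to 5x the
    highest frequency of interest to minimize in-band roll-off."""
    return (3 * signal_bw_hz, 5 * signal_bw_hz)

bw = signal_bandwidth(0.5e-9)          # 0.5 ns edge -> 700 MHz
lo, hi = scope_bandwidth_range(bw)     # -> 2.1 GHz to 3.5 GHz
print(f"{bw / 1e6:.0f} MHz, scope: {lo / 1e9:.1f}-{hi / 1e9:.1f} GHz")
```

Running this reproduces the 700-MHz/2.1-to-3.5-GHz example worked through above.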
Impact of scope bandwidth
Figure 3 and Figure 4 illustrate the impact of oscilloscope bandwidth on measurement fidelity when verifying a 100 MHz clock with 500 ps edges: a 100 MHz scope passes only the fundamental frequency, while a 500 MHz one captures up to the fifth harmonic, preserving the intended waveform, although its own rise-time specification limits the measurement of the actual signal edge rate. A 1 GHz scope offers 20% accuracy, while a 2 GHz one offers 3%.
Figure 3 The impact of scope bandwidth on waveform fidelity and rise-time. Source: Keysight
A word of caution: there is such a thing as too much bandwidth, as measurements can pick up high-frequency noise, as shown below, impacting the system’s effective number of bits (ENOB). The 20 MHz waveform on the left was captured using a bandwidth of 6 GHz, while the one on the right used 100 MHz. Use the lowest bandwidth that still accurately captures the frequencies contained within your signal of interest. If possible, limit measurement bandwidth using the oscilloscope’s built-in hardware or software filters, and/or the specification of the probe.
Figure 4 The impact of too much measurement bandwidth on waveform noise. Source: Keysight
Debugging a real problem
Going back to my problem of debugging the uncommunicative FPGA using a one-meter, 50-Ω BNC coax cable as a probe: how did I know this would be fine for verifying the JTAG signals? The delay the signal experiences as it travels down the one-meter cable is approximately 5 ns. For rise times longer than 3 × 5 ns = 15 ns (the critical length), the resulting bandwidth can be approximated by 0.35/15 ns = 23 MHz. The fundamental frequency of the JTAG signals is around 1 MHz (well below 23 MHz), with a sufficient number of odd harmonics (bandwidth) captured to display the waveforms with good integrity and sharp edges. I also knew the BNC cable has a bandwidth of at least 1 GHz and the oscilloscope 8 GHz. Don’t trash those $10 cables just yet!
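The same sanity check can be scripted; a sketch assuming a typical solid-dielectric coax velocity factor of ~0.66, which yields the roughly 5 ns/m delay used above:

```python
def cable_delay(length_m: float, velocity_factor: float = 0.66) -> float:
    """One-way propagation delay of a coax cable; ~5 ns/m at VF ~0.66."""
    c = 3e8  # speed of light in vacuum, m/s
    return length_m / (c * velocity_factor)

def max_clean_bandwidth(delay_s: float) -> float:
    """The slowest edge that keeps the cable below its critical length is
    3x the delay; the corresponding bandwidth is 0.35 / (3 * delay)."""
    return 0.35 / (3.0 * delay_s)

delay = cable_delay(1.0)   # one meter of coax, roughly 5 ns
print(f"delay: {delay * 1e9:.1f} ns")
print(f"usable BW: {max_clean_bandwidth(delay) / 1e6:.0f} MHz")
```

With ~1 MHz JTAG fundamentals, the ~23 MHz result leaves plenty of headroom for the odd harmonics that keep the edges sharp.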
Passive probes
Many different types of probes can be used with modern digital oscilloscopes, enabling a variety of measurement types. Firstly, should you choose a passive or an active probe? The former are often shipped with oscilloscopes, are lower cost, rugged, and good for general-purpose testing, often up to several hundred MHz. Internally, they contain only passive components that respond to the signal being measured.
Passive probes have an attenuation factor that impacts DUT loading and the measurement bandwidth: a 1x or 1:1 probe does not change the input amplitude, offering better sensitivity for low-voltage signals, whereas a 10x probe reduces the input magnitude by a factor of ten (Figure 5). The latter are used to protect the oscilloscope’s maximum rated input voltage and offer better SNR, as any noise picked up by the probe is also attenuated, thus improving signal quality. The use of a 10:1 probe also results in a higher input resistance, typically 10 MΩ compared to 1 MΩ, which reduces circuit loading, and the capacitance added in the tip compensates for the scope’s input capacitance, increasing bandwidth and improving the measurement of higher frequencies and faster edges.
Check whether your oscilloscope auto-senses the probe attenuation or whether you have to switch between settings manually. Passive probes also include a compensation adjustment to match the probe to the oscilloscope’s input impedance. Without adjustment, the capacitive loading of the probe may filter out high-frequency components and distort the signal under investigation.
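The divider and compensation behavior of a 10x passive probe can be sketched numerically; the component values below are illustrative textbook numbers (9 MΩ tip resistor into a 1 MΩ scope input), not taken from the Figure 5 schematic:

```python
def divider_attenuation(r_tip_ohms: float, r_scope_ohms: float) -> float:
    """DC attenuation of the resistive divider formed by the probe-tip
    resistor and the scope's input resistance."""
    return (r_tip_ohms + r_scope_ohms) / r_scope_ohms

def is_compensated(r_tip, c_tip, r_scope, c_scope, tol=0.01) -> bool:
    """A compensated probe satisfies R_tip*C_tip == R_scope*C_scope, which
    makes the divider ratio independent of frequency."""
    return abs(r_tip * c_tip - r_scope * c_scope) <= tol * r_scope * c_scope

print(divider_attenuation(9e6, 1e6))                # classic 10:1 probe
print(is_compensated(9e6, 12e-12, 1e6, 108e-12))    # 9M*12p == 1M*108p
```

When the two RC time constants don’t match, the divider ratio varies with frequency, which is exactly the high-frequency distortion the compensation trimmer exists to dial out.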
Figure 5 The typical schematic of a 10x passive probe. Source: Rohde and Schwarz
At frequencies beyond 500 MHz, the input capacitance of most passive probes degrades the higher harmonics and edge rates. Furthermore, they can severely load the DUT, as the combined probe and oscilloscope input impedance is no longer significantly higher than the circuit’s output impedance.
Active probes
Single-ended
Active probes do not draw signal power from the DUT; instead, they use wideband amplifiers to enable high-frequency measurements. Active probes have high input resistance and low capacitance, typically less than 1 pF, offering bandwidths up to 20 GHz. The probe in Figure 6 is a single-ended probe, measuring a signal with respect to ground.
Figure 6 The typical schematic of an active probe. Source: Rohde and Schwarz
Differential
Differential probes measure the potential difference between any two points and are suitable for verifying low-amplitude, high-frequency I/O such as the LVDS used by many Earth-Observation imagers. Figure 7 shows a typical schematic for a differential probe, and Figure 8 shows an off-the-shelf active differential probe from Rohde & Schwarz (R&S). Differential probes offer high common-mode rejection over a broad range of frequencies and use an internal differential amplifier to convert the difference between the two inputs into a voltage that can be sent to a typical single-ended scope input.
Figure 7 A typical schematic of a differential probe. Source: Rohde and Schwarz
Figure 8 An active differential probe. Source: Rohde and Schwarz
Power-rail probe
Power-rail probes allow you to measure AC ripple, high-frequency noise, and transients at high bandwidths on supply voltages with large DC offsets, and then analyze the spectrum of this interference. EMC probes enable electric and magnetic (E&H) near-field debugging of EMI issues, while current probes provide a non-invasive method for measuring current flow through a conductor. A DC current probe uses a Hall-effect sensor to detect the magnetic field generated by a current as it passes through the probe’s ferrite core, while an AC probe uses a transformer to measure current flow through its core.
Figure 9 shows a recent current measurement from an Earth-Observation payload to verify its in-rush behavior at power-up. Thank you to my friends Giovanni and Nick from R&S UK for helping me with this test.
Figure 9 Oscilloscope current probe measuring payload in-rush current at power-up. Source: Rajan Bedi
One issue often overlooked is the parasitic effect of the tips probing the DUT, known as the “connection bandwidth”. How and where you probe is just as important as the specification of your test equipment: long connections degrade the measurement bandwidth, slowing edges, as well as adding unwanted inductance, resulting in ringing and distortion when measuring high-frequency signals. These may not actually exist in the circuit under investigation! Parasitic components to the left of the point labelled VAtn below determine the quality of the actual measurement. Figure 10 is a useful image from Keysight highlighting the impact of probe-tip length on measurement bandwidth.
Figure 10 The impact of probe-tip length on measurement bandwidth. Source: Keysight
Top tips
Here are my top tips when considering oscilloscope probes to test your space electronics:
- Before choosing a probe, understand the characteristics of the signals to be measured—amplitude, frequency, bandwidth, and edge rates—and then specify the probe as described.
- Ensure the probe is compatible with your scope’s input impedance.
- Ensure the probe does not adversely load the DUT and compensate passive probes.
- For single-ended probing, do not confuse the signal and ground measurement points—I did this once and killed an FPGA!
- Ensure the probe has a bandwidth comparable to or better than the oscilloscope’s.
- Use short leads/tips to maximize probe bandwidth and minimize parasitic components.
- Specify the required measurement bandwidth but avoid too much to minimize noise.
- Check common-mode rejection before testing.
- Consider its ergonomics/physical design, and order a holding fixture if you run out of hands!
- Keep a few traditional BNC cables in the lab in case your colleagues won’t share.
To conclude, the humble oscilloscope is often used to verify the presence, timing, and integrity of key signals during the early prototyping stage, or if qualification or flight hardware doesn’t function as intended. Many different tests are now possible using your scope, and choosing the correct probe is critical. The choice requires an understanding of how the probe’s specification interacts with your DUT, its parasitic effects, and how and where the probe is used, as all of this will impact the quality of your measurements.
We could probe further, but I’m off to the lab: until next month, the person who shares their best oscilloscope probing story will win a Spacechips’ Training World Tour tee-shirt.
Spacechips will be teaching its training course, Testing Satellite Payloads, next year; you can contact events@spacechipsllc.com for more information.
Dr. Rajan Bedi is the CEO and founder of Spacechips, which designs and builds a range of advanced, AI-enabled, L to K-band, ultra-high-throughput transponders, Edge-based on-board processors, SDRs and MMUs for telecommunication, Earth-Observation, navigation, internet, space-domain awareness, space-debris retrieval and M2M/IoT satellites. The company also offers Space-Electronics Design-Consultancy, Technical-Marketing, Business-Intelligence, Avionics Testing and Training Services. You can also connect with Rajan on LinkedIn.
Spacechips’ Design-Consultancy Services develop bespoke satellite and spacecraft hardware and software solutions, as well as advising customers how to use and select the right components, how to design, test, assemble and manufacture space electronics.
Related Content
- Spacecraft on-board computing using rad-hard ARM MCUs
- Testing times for design engineers
- How to choose passives for space-grade switching regulators
- Pesky parasitics
- Understanding and applying oscilloscope measurements
- Digital vs. analog triggering in oscilloscope: What’s the difference?
- Measuring pulsed RF signals with an oscilloscope
The post Oscilloscope probing your satellite appeared first on EDN.
The Blink Mini: AC-powered indoor camera-based security

One portion—battery-operated outdoor-optimized devices—of Blink’s security camera product line (Blink was bought by Amazon in December 2017, followed shortly thereafter by Amazon’s acquisition of Ring; both brands remain viable market options) has already gotten plenty of coverage from yours truly. But the product line in its entirety is much more diverse, both generationally (battery-operated outdoor cameras are now in their fourth generation, for example) and operationally (both indoor and outdoor variants, both battery- and AC-powered). The Blink Mini 2, another example, is intended for both indoor and outdoor AC-powered setups:
And here’s the Pan-Tilt variant of the original Blink Mini:
That said, the first-generation Blink Mini, which was introduced in early 2020 and is still sold by Amazon, is what we’ll be tearing down today:
Like its siblings, it comes in both white and black color options:
Key to my acquisition motivation was the following excerpt from an early review:
It comes with free cloud storage through the end of this year, and if you already have a Blink account through an older Blink camera, you’ll continue to get free cloud storage as a perk.
At intro, the Blink mini cost $35 (for one) or $65 (for a two-pack), plus the optional Sync Module 2 for an additional $35. Why’s the Sync Module optional, in this case, versus required for my battery-operated cameras? That’s because, as I explained in detail in one of my initial posts in the series and summarized in the subsequent sync module teardown:
Each of the cameras in a particular network implementation connects not only to the common LAN/WAN router over Wi-Fi, but also to a separate and common piece of hardware, the Sync Module, over a proprietary long-range 900 MHz channel that Blink refers to as the LFR (low-frequency radio) beacon.
The Sync Module also connects to the router over Wi-Fi. And the cameras, which claim up-to-2 year operating life out of a set of two AA lithium batteries, are normally in low-power standby mode. If the cameras’ PIR motion sensing is enabled, they’ll alert the Sync Module over LFR when they’re triggered, and the Sync Module will pass along the operating status to “cloud” servers to prepare them to capture the incoming video clip.
Similarly, when you want to view a live stream from a particular camera using the Blink app for Android or iOS, or over an Amazon Echo Show or Echo Spot (or through the Echo Show mode available in recent-edition Amazon Kindle Fire tablets), it’s the Sync Module you’ll be talking to first. The Sync Module will pass along the app’s request and activate the selected camera, turning it back off again afterwards to preserve battery life. The Sync module itself is AC-powered, via a USB power adapter intermediary.
The difference this time, of course, is that the cameras themselves are also AC-powered; low-power standby operation to maximize battery life therefore isn’t relevant in this situation. As such, the key enhancement supported by the 2nd-generation Sync Module, at least for the Blink Mini, is local storage to a plugged-in USB flash drive, for folks who don’t want to spring for a $3/month/camera (!!!) cloud storage subscription and aren’t (like me) “grandfathered” legacy users. Thereby explaining why the Sync Module 2 isn’t bundled with Blink mini cameras, as it is with battery-powered alternatives, but instead is a now-$49.99 accessory (that said, I just bought a “nonfunctional, for parts only” one off eBay for future-teardown purposes!):
In January 2023, I picked up a “used-good” condition Blink mini 2-camera kit for $18.99 (plus tax) total from Woot, an Amazon subsidiary company. The original purchaser’s shipping sticker was still affixed to the front box panel when I got it (a separate shipping box apparently hadn’t found use); I’ve done my best to peel it off in the first of the photos that follow:
Flip open the box:
and the two cameras inside come into view:
Here’s today’s patient, along with its accoutrements: some optional mounting hardware:
The USB-to-micro-USB power cable:
The 5-W output “wall wart”, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
And the camera again, now standalone save once more for the aforementioned US penny (the camera’s dimensions are 1.9 x 1.9 x 1.3 in/48 x 48 x 34 mm, and it weighs 1.7 oz/48 grams with an additional 1.5 oz/42 grams for its mount). Front:
Left side:
Back, revealing the micro-USB power input (the QR code finds use during initial setup):
Right side:
Let’s get that protective clear plastic off:
Still reflectively shiny. That said, the LEDs to either side of the lens (green/red to the left, blue to the right) are a bit less obscured now, as is the microphone below the lens. Not visible here but located in the upper right corner of the lens is an 850 nm infrared LED for “night vision” purposes. And the lens has a 110-degree viewing angle and captures 1080p images:
Up top is the speaker:
And down below are rubberized pads to prevent the camera from marring whatever surface it’s sitting on, plus a (comparatively) heavy metal disc to keep it from tipping over:
You probably also noticed the two optional mounting-screw holes in the bottom-view photo, which are related to the earlier-shown baggie-enclosed hardware. Here’s how you gain access to the holes’ topsides:
And under the camera itself is a hardware reset switch:
Time to dive inside. As I go along, particularly when I get to the PCB and lens assembly, you might want to also periodically reference my earlier teardown of the Blink XT battery-operated outdoor camera for compare-and-contrast purposes. Let’s start by extracting that metal disc:
Although, after all that effort, we didn’t get very far:
It turns out, however, that a firm tug is all it takes to separate the base from the camera body:
Giving us now a clear view of, among other things, the camera’s FCC ID (2AF77-H1931660):
If you visit the FCC site using the above link, you’ll see that the applicant name is “Immedia Semiconductor LLC”. As mentioned in previous Blink-themed posts, Immedia was originally founded in 2009 as a chip supplier but pivoted to become a consumer electronics system supplier in 2014, branded as Blink, with a highly successful Kickstarter unveil that July. 3.5 years later, as previously mentioned, Amazon acquired the company, and Immedia Semiconductor, LLC continues to operate as an independent subsidiary (thereby explaining the FCC info).
Onward. Fortunately, prior to striving to get inside myself, I’d done a bit of online research. This intrepid hacker had spectacularly mangled her camera’s case during disassembly:
only afterward, alas, stumbling across a video that greatly simplified (not to mention doing much less damage in) the process:
For likely obvious reasons, I followed in the second person’s footsteps:
Voila:
Here’s the now-exposed inner case topside speaker:
and underside:
The notched grooves on both sides; right:
and left:
And the inside-outer-case clips that originally fit into those grooves. This time, as you can see from the respective damage-or-not, the thinner, more flexible iSesamo “spudger” panned out more intact than its stronger but also thicker and more rigid Jimmy counterpart had. Right:
and left:
Oh well, our objective is now in sight, and any collateral damage done along the way is relative:
I want to first draw your attention to the two large gold-color rectangular PCB contacts in the upper right, to the right of a seemingly unpopulated screw hole at the top of the PCB. Note their proximity to the speaker vent holes at the top of the outer case? Now let’s look at the inside of the inner case:
Yep, that’s again the speaker at the top. And when assembled, those two “spring” contacts mate with their PCB counterparts to drive it with an audio signal.
Back to the PCB. Although that top hole doesn’t have a screw in it, the two on the sides are populated. Let’s fix that:
We have liftoff:
Going forward I’ll be referring to this as the “rear” PCB, to differentiate it from the “front” PCB still currently existent in the outer case. Looking first at the “rear” PCB’s now exposed frontside:
there’s, for example, the connector at bottom right that originally mated it to the “front” PCB and held it in place even after the screws were removed, necessitating the earlier-shown Jimmy-as-crowbar to prod it into detaching. In the upper right is a Faraday cage whose contents we’ll see shortly. In the upper left are Immedia Semiconductor markings. And in the center is the lens assembly; the two-wire harness routing to it suggests that, as in past teardowns of products like this, there’s likely an IR filter normally between the lens and image sensor that can selectively be moved out of the way for “night vision” applications.
Back to the backside of the “rear” PCB:
Note again the two large contacts in the upper right that feed the speaker. Accompanying them are numerous smaller contacts spread across the PCB, which presumably act as test points, assembly-line firmware programming landing pads, and the like. And speaking of which, in the lower left corner is a Winbond W25Q32JW 32 Mbit SPI serial NOR flash memory.
Two more screws remain in the center of the PCB; I’m guessing from past experience that they hold the other-side lens assembly in place in front of the image sensor. Let’s test my hypothesis:
Yep (to the lens-assembly removal scheme, moveable IR filter inside and sensor underneath):
And here’s the usual dollop of dried glue that holds the lens in position after its focus and other optical characteristics are fine-tuned on the assembly line:
Before switching our attention to the “front” PCB, let’s revisit that Faraday Cage:
Inside is the application processor, Immedia’s AC1002B. Since it’s Amazon-proprietary, there’s unsurprisingly no public documentation available, aside from a brief mention in the camera documentation of “4 cores / 200 MHz”. That said, I’ll note that it’s different from the Immedia ISI-108A SoC I found in my earlier Blink XT teardown, although the Blink XT2 successor switched to it. And while I’m on the teardown-comparison topic, the flash memory is 4x the capacity of that seen previously.
Last, but not least, let’s switch our attention to the remaining “front” PCB:
Two more screws to go (by the way, note that in addition to the earlier mentioned unpopulated screw hole site at the top of the device, there’s now an additional one at the bottom!):
And the “front” PCB is now also free:
It’d be hard to miss the PCB-embedded antenna in the upper right corner, even if you tried to overlook it! The markings on the shiny IC below it are pretty faint, so take my word when I tell you that the first line identifies it as Infineon’s CYW43438 1×1 single-band Wi-Fi 4 (802.11n) + Bluetooth® 5.1 combo chip (unsurprising given its antenna proximity; it seems that the IC’s Bluetooth facilities are unused in this design). Below that, at the lower right edge, is the reset switch. In the lower left is the other end of the PCB-to-PCB connection scheme. And in-between them, directly below the lens “hole”, is the MEMS microphone, whose input port is on the other side of the PCB. Speaking of which…
In the upper right quadrant is the earlier mentioned 850 nm infrared LED for use in dim ambient light settings. To clarify, it’s not a PIR module, as was found in the earlier Blink XT teardown. In this particular camera model’s case, an alternative technique called Pixel Difference Analysis (PDA) is used to detect object motion in the scene. On either side of the lens “hole” are the two LEDs, respectively labeled D2 (blue to right) and D4 (green/red to left). And below the lens “hole” is the MEMS microphone input port. By the way, I’m struck by how many seemingly unpopulated-component solder pads there are on both sides of this PCB. Readers, agree?
With the PCBs out of the way, all that’s left to do is push the front panel out of the case:
Note that there’s an additional “hole” site above the lens opening, which is currently plastic-filled. As with those unpopulated component sites I just mentioned, along with the unused screw holes I noted earlier, whenever I see something like this, I wonder what it was originally intended to serve as…a second microphone for ambient noise-cancellation purposes, perhaps?
I’ll close with a confession. At some point in these final teardown steps, the crooked rubbery white plastic widget in the lower right corner of the previous photo fell off, and I was admittedly baffled as to what function it served…until I retraced my earlier disassembly steps (and photos) and remembered that it was the inherently elastic interface between the PCB-mounted hardware reset switch and the outside world:
And speaking of disassembly steps…since my removal of the Faraday Cage “lid” and more general dissection were so “clean” (mangled side clip aside), after this writeup is published I’m going to strive to tediously and patiently reverse course and successfully reassemble the device back to a fully functional state. Wish me luck; I’ll post the outcome in the comments. And as always, I also look forward to reading your thoughts there; thanks in advance for them!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Blink: Security cameras with a power- and bandwidth-stingy uplink
- Blink Cameras and their batteries: Functional abnormalities and consumer liabilities
- Teardown: Blink XT security camera
- Blink: Security camera system installation and impressions
The post The Blink Mini: AC-powered indoor camera-based security appeared first on EDN.
What’s LVGL, and how it works in embedded designs

Light and Versatile Graphics Library (LVGL) is steadily making inroads in the graphics realm by efficiently facilitating graphical user interface (GUI) designs in small, resource-constrained, and battery-powered devices such as wearables, e-bikes, navigation systems, instrument clusters, medical gadgets, and more.
Graphics IP suppliers are increasingly partnering with LVGL to optimize GPU performance and expand graphic processing capabilities for a wide range of embedded applications. But who’s LVGL? It’s the company behind the free and open-source graphics library for embedded systems; it helps developers create GUIs for microcontroller units (MCUs), microprocessor units (MPUs), and display processors.
Figure 1 LVGL has no external dependencies, which makes its porting incredibly simple.
LVGL, written in C, allows embedded developers to create modern and visually appealing user interfaces in embedded applications. It works with various processors and operating systems and enables developers to keep code size and memory usage to a minimum. It can be used with any RTOS and bare-metal setup and quickly adapts to unique project needs.
What IP suppliers do is integrate their GPU solutions into LVGL’s graphics ecosystem so that developers can build sleek, responsive interfaces without compromising on performance or power efficiency. This GPU-accelerated LVGL integration is transforming the embedded UI landscape in resource-constrained devices like MCUs.
Design case studies
Take the case of Think Silicon, a supplier of ultra-low-power GPU IPs for embedded systems and an Applied Materials company. It’s teaming up with LVGL to develop high-performance, low-power graphics libraries for MCUs. As a result, the software development kit for its NEMA GPUs will be able to accelerate LVGL’s graphics library by up to 5x compared to software-only rendering.
Figure 2 Think Silicon has combined LVGL’s lightweight open-source graphics library with its NEMA GPU-Series.
VeriSilicon, a Shanghai, China-based supplier of embedded GPUs, has also partnered with LVGL to facilitate seamless integration of 2D, 2.5D, and 3D content in embedded applications. In conjunction with LVGL’s graphics library, VeriSilicon aims to advance 3D rendering capabilities in the GUI designs.
Actions Technology, a Zhuhai, China-based firm developing chips for AIoT applications, has incorporated VeriSilicon’s GPU into its smartwatch system-on-chip (SoC) design. Tim Zhang, GM of Actions Technology’s Wearable and Sensing Business Unit, acknowledges the importance of LVGL’s graphics technology contribution in delivering rich 3D graphics in its smartwatch SoC.
Figure 3 Actions Technology has incorporated VeriSilicon’s LVGL-enabled GPU in its smartwatch SoC.
Embedded GPUs are now serving a wide range of applications, from wearables and infotainment to micro-mobility and AIoT. Here, GPU-accelerated LVGL enables users to create visually appealing UIs across a wide variety of hardware platforms.
Related Content
- Re-imagining Imagination Technologies
- GUI Development: Embedding Graphics
- CAST launches graphics acceleration IP cores
- Imagination outstrips all other GPU IP suppliers
- Two-digit nanosecond latency CXL IP addresses GPU memory expansion
The post What’s LVGL, and how it works in embedded designs appeared first on EDN.
Beauty is in the eye of the beholder

Take a close look at this power audio amplifier that dates back to 1961 (Figure 1). This image is a still extracted from a YouTube video by a group called “The Spotnicks” playing their version of “Ghost Riders In The Sky”. (Try not to laugh too hard when you see them in their space suits.)
Figure 1: Image of an amplifier featured in a YouTube video with eight 6L6GB beam power tetrode tubes.
Sporting eight 6L6GB beam power tetrode tubes, this thing was clearly pushing the limits of the state-of-the-art at that time but notice the gorgeous appearance, the emphasis on gold coloration and the mirror-like reflection. This amplifier was meant as a work of art to be seen, particularly on stage, not just listened to.
After I looked on in dazed admiration for a while, I noticed something. One of the seven-pin-miniature tube shields on the left is not gold colored. Somewhere along the line, I suspect, one of those tubes had to be replaced and its tube shield got lost somehow. A replacement shield was used instead, which left the amplifier with something of a beauty mark.
When you play that video, you’ll see that more than one of these amplifiers was in service during the performance.
Enjoy!
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Vacuum tube negative resistance
- Friday Fun: Test Those Tubes
- Electronics of our youth
- Self-contained TV receiver uses 24 transistors
- Remembrance of chips past
- Gone but Not Forgotten
The post Beauty is in the eye of the beholder appeared first on EDN.
Wireless combo modules offer global connectivity

Silicon Labs’ SiWx917Y series of Wi-Fi 6 and Bluetooth LE 5.4 modules provides plug-and-play simplicity with global RF certifications, helping to accelerate development. Intended for battery-powered IoT devices, these energy-efficient modules integrate an Arm Cortex-M4 application processor and antenna in a small 16×21×2.3-mm package.
The SiWG917Y variant enables customers to run all application code on the device’s Arm Cortex-M4 core. In contrast, the SiWN917Y allows customers to execute applications on a separate MCU, while the module manages Wi-Fi 6 and BLE 5.4 communication tasks. Intelligent power management supports connected sleep mode, consuming as little as 20 µA with Target Wake Time (TWT) and a 60-second keep-alive interval.
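To put that 20-µA connected-sleep figure in perspective, here’s a back-of-the-envelope battery-life estimate. The 1,000-mAh cell capacity is a hypothetical assumption for illustration, not a value from Silicon Labs’ documentation, and the calculation ignores wake-up bursts, self-discharge, and regulator losses:

```python
def sleep_lifetime_hours(capacity_mah, sleep_current_ua):
    """Idealized battery life at a constant sleep current (ignores
    wake-up activity, self-discharge, and regulator losses)."""
    return capacity_mah / (sleep_current_ua / 1000.0)

# Hypothetical 1,000 mAh cell at the module's 20 uA connected-sleep draw:
hours = sleep_lifetime_hours(1000, 20)
print(round(hours))                 # 50000 h ceiling set by sleep current alone
print(round(hours / 24 / 365, 1))   # ~5.7 years, before wake-up overhead
```

In practice the radio’s active bursts dominate the energy budget; the sleep current only sets the upper bound on standby life.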
The wireless subsystem includes a 160-MHz network processor, baseband DSP, analog front-end, RF transceiver, and power amplifier. The application subsystem features a 180-MHz Cortex-M4 processor with a floating-point unit (FPU) for peripheral and application tasks. This dual-core architecture separates applications and wireless stacks to optimize performance and ensure timely processing.
Wireless modules in the SiWx917Y series are now generally available for purchase.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Wireless combo modules offer global connectivity appeared first on EDN.
MPU controls multi-axis motors in real time

Renesas’s RZ/T2H microprocessor, its most advanced for industrial equipment, enables precise control of robot motors with up to nine axes. The device’s application processing and real-time performance make it well-suited for programmable logic controllers (PLCs), motion controllers, distributed control systems, and computerized numerical controls (CNCs). It also supports a variety of network communications, including industrial Ethernet with Time Sensitive Networking (TSN).
The RZ/T2H features four Arm Cortex-A55 CPUs (up to 1.2 GHz) for application tasks and two Cortex-R52 CPUs (up to 1 GHz) for real-time processing. Each R52 core is equipped with 576 KB of tightly coupled memory (TCM). The RZ/T2H also supports 32-bit-wide LPDDR4-3200 SDRAM for external memory, enabling high-performance tasks like Linux applications, robot trajectory generation, and PLC sequence processing.
To support multi-axis motor control, the RZ/T2H provides 3-phase PWM timers, delta-sigma interfaces for current measurement, and encoder interfaces. Peripheral functions for motor control are connected to the Low Latency Peripheral Port (LLPP) of the Cortex-R52 for fast access.
The RZ/T2H microprocessor is now available through authorized Renesas distributors.
The post MPU controls multi-axis motors in real time appeared first on EDN.
Tiny Hall element delivers InAs sensitivity

The HQ0A11, an indium arsenide (InAs) Hall element from AKM, is the smallest and thinnest in the company’s high-sensitivity HQ series. At just 0.8×0.4×0.23 mm, the HQ0A11 reduces volume by 85% compared to its predecessor, the HQ0811. It also delivers approximately 16% better signal-to-noise ratio, directly improving position detection accuracy and making its S/N performance the highest in the HQ lineup.
Hall elements are commonly used in position detection for image stabilization and autofocus in smartphone cameras. According to AKM, the HQ0A11 significantly reduces lens-shake effects, particularly with telephoto lenses. Its compact size also makes it well-suited for high-density component mounting in limited spaces.
The HQ0A11 achieves a sensitivity of 0.66 mV/mA/mT, with noise kept at 1.51 µVRMS/mA. It is expected to contribute to enhanced performance, not only in smartphone camera modules, but also in small motors for robots.
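From the published sensitivity and noise figures, one can estimate the element’s field-referred noise floor. This is a simple noise-over-sensitivity division for illustration, not an AKM specification:

```python
sensitivity_mv_per_ma_mt = 0.66   # 0.66 mV/mA/mT from the announcement
noise_uv_rms_per_ma = 1.51        # 1.51 uV RMS/mA from the announcement

# Both figures are normalized per mA of drive current, so the drive
# current cancels out of the field-referred noise floor:
b_min_mt = (noise_uv_rms_per_ma / 1000.0) / sensitivity_mv_per_ma_mt
print(round(b_min_mt * 1000, 2))  # ~2.29 uT RMS-equivalent field noise
```

A noise floor of a couple of microtesla is consistent with resolving the small field changes involved in lens position sensing.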
AKM has begun mass production and shipment of the HQ0A11 InAs Hall element.
The post Tiny Hall element delivers InAs sensitivity appeared first on EDN.
Shunt monitors detect vehicle currents

ZXCT18xQ automotive current shunt monitors from Diodes measure small sense voltages across common-mode voltages up to 26 V. Operating from a 2.7-V to 5.5-V supply, these AEC-Q100 qualified instrumentation amplifiers are suited for vehicle lighting controls, high-side and low-side current sensing, battery management, fault current detection, and other vehicle body control systems.
The ZXCT180Q and ZXCT181Q support unidirectional and bidirectional current sensing, respectively. The ZXCT180Q features two pin assignment options, with its OUT pin placed in different configurations. Its REF pin is tied to GND. For bidirectional current sensing, as required in battery management systems, the ZXCT181Q introduces a voltage to the REF pin, offsetting the output voltage.
Both the ZXCT180Q and ZXCT181Q offer fixed gains of 20 V/V, 50 V/V, 100 V/V, and 200 V/V. They measure shunt voltages at common-mode voltages from -0.3 V to 26 V, independent of the supply, with a 370-µA maximum supply current. The devices support bandwidths up to 400 kHz at 20 V/V and slew rates of 2 V/µs.
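To see how the fixed gain maps a shunt drop to the output, here’s a quick worked example of the ideal transfer function. The 2-mΩ shunt, 10-A load, and 2.5-V reference are illustrative values I chose, not figures from the Diodes datasheet:

```python
def shunt_monitor_vout(i_load_a, r_shunt_ohm, gain_v_per_v, v_ref=0.0):
    """Ideal current-shunt-monitor transfer function:
    Vout = gain * (I * Rshunt) + Vref.
    Vref = 0 models the unidirectional ZXCT180Q (REF tied to GND);
    a nonzero Vref offsets the output for the bidirectional ZXCT181Q."""
    return gain_v_per_v * (i_load_a * r_shunt_ohm) + v_ref

# Unidirectional, 100 V/V gain, hypothetical 2 mOhm shunt at 10 A:
print(shunt_monitor_vout(10, 0.002, 100))        # 2.0 V
# Bidirectional with a hypothetical 2.5 V reference: -10 A reads below Vref:
print(shunt_monitor_vout(-10, 0.002, 100, 2.5))  # 0.5 V
```

The offset is what lets a single-supply ADC distinguish charge from discharge current in a battery-management application.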
The ZXCT180Q is priced at $0.11 each, while the ZXCT181Q is priced at $0.12 each, both in 1,000-piece quantities.
The post Shunt monitors detect vehicle currents appeared first on EDN.
Portable 5G transmitter streams live video

DragonFly V 5G is Vislink’s bonded cellular miniature transmitter with 5G connectivity and HD video streaming capabilities. The device supports public and private 5G networks, delivering live broadcast-quality video from point-of-view cameras, drones, UAVs, and body-worn devices.
Leveraging High-Efficiency Video Coding (HEVC) compression, the DragonFly V 5G streams high-definition, high dynamic range video in formats up to 1080p at 50/60 Hz. Its cellular bonding technology combines multiple 5G network connections into a single, aggregated data stream for enhanced reliability and speed.
Weighing just 82 grams, the transmitter accommodates HDMI or SDI camera inputs, depending on the variant. Additionally, the DragonFly V 5G includes support for Wi-Fi in the 2.4-GHz ISM band and RS-232 remote control.
The DragonFly V 5G is now available, joining the previously announced DragonFly V COFDM model.
The post Portable 5G transmitter streams live video appeared first on EDN.
∆Vbe bread dough incubator

I was surprised recently when a (nearly) two-decade-old design idea of mine for take-back-half temperature control got a question from reader John Louis Waugaman. John says he needs a way to control the temperature of batches of rising bread dough. I’m glad he might be considering applying my TBH circuit to his problem, but it really is kind of overkill in that context. So, I began pondering whether a simpler topology might solve his dough incubation problem as well as the TBH circuit would while saving some cost and effort. Plus, there’s a backstory.
Wow the engineering world with your unique design: Design Ideas Submission Guide
For a long while I’ve been fond of a particularly elegant, accurate, and (very!) inexpensive method (∆Vbe) for sensing and controlling temperature using ordinary, uncalibrated bipolar transistors. I first saw it explained in an application note by famed analog guru Jim Williams (see page 7 for ∆Vbe theory).
I’m always watching for opportunities to use ∆Vbe and this possibility of providing cheap, accurate, and calibration-free temperature control in a cool culinary context was too good to let pass by. Figure 1 shows the new circuit I cooked up, (loosely) based on Jim’s recipe.
Figure 1 Delta-Vbe sensor Q1 is programmed via R1 for the desired setpoint temperature: kelvin = R1/100, so R1 = 31.2k gives 312 K = 39°C. Asterisked Rs should be precision types (1% or better).
The R2 R3 D5 D6 network drives Q1 in the magic 10:1 current ratio ∆Vbe measurement cycle described by Williams. Note that the absolute currents supplied to Q1 are no more accurate than the raw unregulated 60-Hz line voltage that creates them, but that doesn’t affect ∆Vbe accuracy. All that counts is their 10:1 ratio which is set independent of line voltage variation solely by the precision of (R2/R3 + 1) = 10.
This makes Q1 generate a PTAT (proportional to absolute temperature) AC signal equal to (temperature in kelvin)/5050 volts peak-to-peak that follows the 120-Hz log(|sine|) waveshape shown in Figure 2.
Figure 2 Q1’s ∆Vbe PTAT log(|sin(r)|) kelvin/5050 waveform. (Y axis = volts, X axis = radians; red = average value = AC baseline)
The PTAT signal Vpp is boosted by A1a’s gain = –2,742,160/R1, then compared by A2 to its precision (2.50 V ±0.4%) internal shunt reference (thanks again, Konstantin Kim, for finding the versatile AP4310A!).
A2’s output stays at zero, holding Q2 off, while Q1’s temperature and the PTAT signal are below setpoint. This allows 120-Hz pulses coupled through C3 to reach Q3’s gate, switch it on, and apply power to the heater. When the heater warms Q1 (and presumably the dough) to the programmed temperature, the PTAT waveform rises above A2’s reference voltage. This makes A2 start turning Q2 on, which diverts the TRIAC gate pulses to ground. That turns Q3 and the heater off, allowing Q1 to cool, etc., etc. The resulting cycling completes a thermostasis feedback loop to make the dough grow.
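The setpoint arithmetic from the Figure 1 caption is easy to sanity-check; this sketch simply applies the stated relations (temperature in kelvin = R1/100, and PTAT amplitude = kelvin/5050 volts peak-to-peak):

```python
def r1_for_setpoint(t_celsius):
    """R1 (ohms) for a target dough temperature, per the Figure 1
    caption's relation: temperature in kelvin = R1 / 100.
    (Uses 273 K ~ 0 degC, matching the article's 312 K = 39 degC.)"""
    return (t_celsius + 273.0) * 100

def ptat_vpp(t_kelvin):
    """Q1's PTAT signal amplitude: kelvin/5050 volts peak-to-peak."""
    return t_kelvin / 5050.0

print(r1_for_setpoint(39))             # 31200.0 ohms -- the article's 31.2k
print(round(ptat_vpp(312) * 1000, 1))  # ~61.8 mV p-p at the 39 degC setpoint
```

The tiny PTAT amplitude is why A1a needs its large gain before the signal can be compared against A2’s 2.5-V reference.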
About Q3: Even though heater drive is unipolar, I selected a TRIAC instead of an SCR for Q3. This wasn’t to get bipolar capability but rather because the TRIAC has a higher max gate current rating. This makes Q3 able to shrug off the 2-A inrush possible at power-up, which might vaporize an SCR gate.
D7 provides a path to ground for C3 return current, preventing it from false triggering Q3.
For heater duty, John suggested an incandescent light bulb. I agree radiant heating should work well. Since Q3’s maximum duty factor is 50%, a 100-W bulb would be just about perfect for a maximum heating power of ~60 W. Plus, a bonus benefit of reduced voltage would be a lower filament temperature. This should make an ordinary tungsten bulb last many thousands of hours.
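The ~60-W figure follows from half-wave drive plus tungsten’s lower resistance at a reduced filament temperature. Here’s a rough sketch; the ~17% resistance reduction is my illustrative assumption, not a value from the article:

```python
def halfwave_heater_power(rated_w, r_relative=1.0):
    """Power into an incandescent bulb driven at Q3's maximum 50% duty
    (half-wave). Half-wave RMS voltage is 1/sqrt(2) of full-wave, so
    power halves at the rated (hot) resistance; r_relative < 1 models
    tungsten's lower resistance at the cooler filament temperature,
    which raises the delivered power."""
    return (rated_w / 2.0) / r_relative

print(halfwave_heater_power(100))               # 50.0 W at constant resistance
# Assuming ~17% lower filament resistance when run cooler (illustrative):
print(round(halfwave_heater_power(100, 0.83)))  # ~60 W, matching the article
```

The same cooler-filament effect is what stretches the bulb’s life to many thousands of hours.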
It may be just what John kneads.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Temperature controller has “take-back-half” convergence algorithm
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Take-Back-Half precision diode charge pump
- 20MHz VFC with take-back-half charge pump
The post ∆Vbe bread dough incubator appeared first on EDN.