EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 59 minutes 33 seconds ago

Contactless potentiometers: Unlocking precision with magnetic sensing

Mon, 09/01/2025 - 11:41

In the evolving landscape of precision sensing, contactless potentiometers are quietly redefining what reliability looks like. By replacing mechanical wear points with magnetic sensing, these devices offer a frictionless alternative that is both durable and remarkably accurate.

This post offers a quick look at how contactless potentiometers work, where they are used, and why they are gaining ground.

Detecting position, movement, rotation, or angular acceleration is essential in modern control and measurement systems. Traditionally, this was done using mechanical potentiometers—a resistive strip with a sliding contact known as a wiper. As the wiper moves, it alters the resistance values, allowing the system to determine position.
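The resistance-to-position mapping just described is easy to sketch in code. The 10-bit ADC width and 300° electrical travel below are illustrative assumptions on my part, not values from the article:

```python
def wiper_position(adc_count: int, adc_max: int = 1023) -> float:
    """Fraction of full travel (0.0-1.0) for a linear-taper pot wired as a
    voltage divider and sampled by an ADC (10-bit width assumed here)."""
    if not 0 <= adc_count <= adc_max:
        raise ValueError("ADC count out of range")
    return adc_count / adc_max

def wiper_angle(adc_count: int, travel_deg: float = 300.0) -> float:
    """Map travel fraction to shaft angle for an assumed 300-degree pot."""
    return wiper_position(adc_count) * travel_deg
```

Real designs add filtering and calibration on top of this, but the core idea is exactly this linear map from resistance ratio to position.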

Although these devices are inexpensive, they suffer from wear and tear due to friction between the strip and the wiper. This limits their reliability and shortens their lifespan, especially in harsh environments.

To address these issues, non-contact alternatives have become increasingly popular. Most rely on magnetic sensors and offer a range of advantages: higher accuracy, greater resistance to shocks, vibrations, moisture and contaminants, wider operating temperature ranges, and minimal maintenance. Most importantly, they last significantly longer, making them ideal for demanding applications where durability and precision are critical.

Where are contactless potentiometers used?

Contactless potentiometers (non-contact position sensors) are found in all sorts of machines and devices where it’s important to know how something is moving—without touching it directly. Because they do not wear out like traditional potentiometers, they are perfect for jobs that need long-lasting, reliable performance.

In factories, they help robots and machines move precisely. In cars, they track things like pedal position and steering angle. You will even find them in wind turbines, helping monitor movement to keep everything running smoothly.

They are also used in airplanes, satellites, and other high-tech systems where accuracy and reliability are absolutely critical. When precision and reliability are non-negotiable, contactless potentiometers outperform their mechanical counterparts.

What makes contactless potentiometers work

At the heart of every contactless potentiometer lies a clever interplay of magnetic fields and sensor technology that enables precise, wear-free position sensing.

Figure 1 The STHE30 series single-turn single-output contactless potentiometer employs Hall-effect technology. Source: P3 America

The contactless potentiometer shown above—like most contemporary designs—employs Hall-effect technology to sense the rotational travel of the knob. This method is favored for its reliability, long lifespan, and immunity to mechanical wear.

However, Hall-effect sensing is just one of several technologies used in contactless potentiometers. Other approaches include magneto-resistive sensing, which offers robust precision and thermal stability. Then there is inductive sensing, known for its robustness in harsh environments and suitability for high-speed applications. Next, capacitive sensing, often chosen for compact form factors, facilitates low-power designs. Finally, optical encoding provides high-resolution feedback by detecting changes in light patterns.

Ultimately, choosing the right sensing technology hinges on factors like required accuracy, environmental conditions, and mechanical limitations.

Displayed below is the SK22B model—a contactless potentiometer that operates using inductive sensing for precise, wear-free position detection.

Figure 2 The SK22B potentiometer integrates precision inductive elements to achieve contactless operation. Source: www.potentiometers.com

Contactless sensing for makers

In short, contactless potentiometers—also known as non-contact rotary sensors, angle encoders, or electronic position knobs—offer precise, wear-free angular position sensing.

A quick pick for practical hobbyists is the AS5600—a compact, easy-to-program magnetic rotary position sensor that excels in such applications, thanks to its 12-bit resolution, low power draw, and strong immunity to stray magnetic fields.

Also keep in mind that while the AS5600 is favored for its simplicity and reliability, other magnetic position sensors—like the AS5048 or MLX90316—offer robust contactless performance for more advanced or specialized applications.

Another notable option is the MagAlpha MAQ470 automotive angle sensor, engineered to detect the absolute angular position of a permanent magnet—typically a diametrically magnetized cylindrical magnet mounted on a rotating shaft.

Figure 3 Functional blocks of the AS5600 unveil the inner workings. Source: ams OSRAM
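For readers wiring up an AS5600, the I2C transaction itself is hardware-specific, but the count arithmetic is not. Here is a minimal sketch of the 12-bit conversion plus a wraparound-safe delta; the helper names are mine, and the register-read code is omitted deliberately:

```python
def as5600_raw_to_degrees(raw: int) -> float:
    """Convert the sensor's 12-bit angle count (0-4095) to degrees."""
    return (raw & 0x0FFF) * 360.0 / 4096.0

def angle_delta(prev_raw: int, curr_raw: int) -> int:
    """Signed change in counts, handling the 4095 -> 0 wraparound
    so a small rotation across zero does not read as a full turn."""
    d = (curr_raw - prev_raw) & 0x0FFF
    return d - 4096 if d > 2047 else d
```

The wraparound handling matters for any continuously rotating shaft: crossing zero should read as a small step, not a jump of nearly 360 degrees.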

And a bit of advice for anyone designing angle measurement systems using contactless potentiometers: success hinges on tailoring the solution to the specific demands of the application. These devices are widely used in areas like industrial automation, robotics, electronic power steering, and motor position sensing, where they monitor the angular position of rotating shafts in either on-axis or off-axis setups.

Key design considerations include shaft arrangement, air gap tolerance, required accuracy, and operating temperature range. During practical implementation, it’s crucial to account for two major sources of error—those stemming from the sensor chip itself and those introduced by the magnetic input—to ensure reliable performance and precise measurements.

A while ago, I shared an outline for weather enthusiasts to build an expandable wind vane using a readily available angle sensor module. This time, I am diving into a complementary idea: crafting a poor man’s optical contactless potentiometer/angle sensor/encoder.

The device itself is quite simple: a perforated disc rotates between infrared LEDs and phototransistors. Whenever a phototransistor is illuminated by its corresponding infrared LED, it becomes conductive. Naturally, you will need access to a 3D printer to fabricate the disc.

Be sure to position the phototransistors and align the holes strategically; this allows you to encode the maximum number of angular positions within minimal space. A quick reference drawing is shown below.

Figure 4 The schematic shows an optical alternative setup. Source: Author

It’s worth pointing out that this setup is particularly effective for implementing a Gray Coding system, as long as the disc is patterned with a single-track Gray Code. Developed by Frank Gray, Gray Code stands out for its elegant approach to binary representation. By ensuring that only a single bit changes between consecutive values, it streamlines logic operations and helps guard against transition errors.
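The standard binary-reflected construction (the single-track variant used on such discs is a related but more specialized construction) takes only a couple of lines, and the single-bit-change property is easy to verify:

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the encoding by cascading XORs down the bit positions."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Check the single-bit-change property all the way around a
# 16-position disc, including the wrap from position 15 back to 0:
codes = [to_gray(i) for i in range(16)]
assert all(bin(codes[i] ^ codes[(i + 1) % 16]).count("1") == 1
           for i in range(16))
```

Because only one bit flips per step, a slightly misaligned sensor can never produce a wildly wrong reading—at worst it reports the adjacent position.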

That’s all for now, leaving plenty of intriguing ideas for you to ponder and explore further. But the story does not end here—I have some deeper thoughts to share on absolute encoders, incremental encoders, rotary encoders, linear encoders, and more. Perhaps a topic for an upcoming post.

If any of these spark your curiosity, let me know—your questions and comments might just shape what comes next. Until then, stay curious, keep questioning, and do not hesitate to reach out with your thoughts.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Contactless potentiometers: Unlocking precision with magnetic sensing appeared first on EDN.

Simple diff-amp extension creates a square-law characteristic

Fri, 08/29/2025 - 16:31

Back on December 3, 2024, a Design Idea (DI) was published, “Single-supply single-ended inputs to pseudo class A/B differential output amp,” which created some discussion about using the circuit as a full wave rectifier.

DI editor Aalyia has kindly allowed a follow-up discussion about a circuit which could be utilized for this, but is better suited for square-law functions.

The circuit shown in Figure 1 is an LTspice implementation built around a bipolar differential amplifier with Q1 and Q3 serving as the + and – active differential input devices, respectively.

Figure 1 An LTspice implementation built around a bipolar differential amplifier with Q1 and Q3 serving as the + and – active differential input devices, respectively, allowing the circuit to be better suited for square-law functions.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Additional devices Q2 and Q4 are added at the “center point” between Q1 and Q3, and act such that the collector currents of all devices are equal when no differential voltage is present.

This occurs because resistors R7 and R8 create a virtual differential zero-volt “center point” between the + and – differential inputs, and all device Vbe’s are the same, neglecting the small voltage drop across R7 and R8 due to Q2 and Q4 base bias currents.

R7 and R8 set the differential input impedance for the circuit configuration, where R1 and R3 set the signal source differential impedances for the simulations.

The device emitter currents are controlled by the “tail current source” I1 at 4 mA; thus, each device has an emitter current of ~1 mA with zero differential input. Note the -Diff Input signal is created by using a voltage-controlled voltage source with an effective gain of -1 due to the inverted sensing of the +Diff Input voltage (VIN+). This arrangement allows the input signal to be fully differential when LTspice controls the VI+ voltage source during signal sweeps.

Not part of the circuit itself, but included for comparison: voltage-controlled current source B1 is configured to produce an ideal square-law characteristic by squaring the differential voltage (Vin+ − Vin−) and scaling by the factor “K”.

Figure 2 shows the simulation results of sweeping the differential input voltage sources from -200 mV to +200 mV while monitoring the various device currents. Note the differential output current, which is:

[Ic(Q1)+Ic(Q3)] – [Ic(Q2)+Ic(Q4)]

closely approximates the ideal square law with a scale factor of 0.3 A/V² for differential input voltages within ±60 mV.

Figure 2 Simulation results of sweeping the differential input voltage sources from -200 mV to +200 mV while monitoring the various device currents.

Please note this circuit is a transconductor type where the output is a differential current controlled by a differential input voltage.
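As a numerical cross-check on the ideal reference source B1 (not the transistor circuit itself), the quoted scale factor is easy to tabulate:

```python
def ideal_square_law(vdiff_v: float, k: float = 0.3) -> float:
    """Ideal reference current in amps, I = K * Vdiff**2, using the
    quoted scale factor K = 0.3 (in A per volt squared)."""
    return k * vdiff_v ** 2

# At the edges of the well-matched +/-60 mV region the ideal
# output is 0.3 * (0.060)**2 = 1.08 mA:
i_edge_a = ideal_square_law(0.060)
# The characteristic is an even function of Vdiff, which is what makes
# such a circuit interesting for full-wave-rectifier-style applications.
```

The even symmetry is the key point: positive and negative differential inputs of equal magnitude produce the same output current.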

Anyway, thanks to Aalyia for allowing us to follow up with this DI, and hopefully some folks will find this and the previous circuits interesting.

Michael A. Wyatt is an IEEE Life Member and has enjoyed electronics ever since childhood. Mike has had a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, and is now (semi-)retired with Wyatt Labs. During his career he accumulated 32 US patents and published several EDN articles, including a Best Idea of the Year in 1989.

 Related Content

The post Simple diff-amp extension creates a square-law characteristic appeared first on EDN.

Event-based vision comes to Raspberry Pi 5

Fri, 08/29/2025 - 00:52

A starter kit from Prophesee enables low-power, high-speed event-based vision on the Raspberry Pi 5 single-board computer. Based on the GenX320 Metavision event-based vision sensor, the kit accelerates development of real-time neuromorphic vision applications for drones, robotics, industrial automation, security, and surveillance. The camera module connects directly to the Raspberry Pi 5 via a MIPI CSI-2 (D-PHY) interface.

Consuming less than 50 mW, the 1/5-in. GenX320 sensor provides 320×320-pixel resolution with an event rate equivalent to ~10,000 fps. It offers >140-dB dynamic range and sub-millisecond latency (<150 µs at 1,000 lux).

Software resources include OpenEB, the open-source core of Prophesee’s Metavision SDK, with Python and C++ API support. Drivers, data recording, replay, and visualization tools can be found on GitHub.

The GenX320 starter kit is available for pre-order through Prophesee and authorized distributors. The Raspberry Pi 5 board is sold separately.

GenX320 starter kit product page

Prophesee

The post Event-based vision comes to Raspberry Pi 5 appeared first on EDN.

MCUs drive LCD and capacitive touch

Fri, 08/29/2025 - 00:52

Renesas’ RL78/L23 16-bit MCUs provide segment LCD control and capacitive touch sensing for responsive HMIs in smart home appliances, consumer electronics, and metering systems. Running at 32 MHz, these low-power MCUs include 512 KB of dual-bank flash memory, enabling seamless over-the-air firmware updates.

The MCUs offer an active current of 109 µA/MHz and a standby current as low as 0.365 µA, with a fast 1‑µs wakeup time. With a wide voltage range of 1.6 V to 5.5 V, they can operate directly from 5‑V power supplies commonly used in home appliances and industrial systems.
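A quick back-of-the-envelope calculation shows why the standby figure dominates battery life in duty-cycled designs. The 0.1% duty cycle and 220 mAh coin cell below are hypothetical; the current figures come from the text:

```python
def avg_current_ua(active_ua: float, standby_ua: float, duty: float) -> float:
    """Time-weighted average supply current for a duty-cycled MCU."""
    return duty * active_ua + (1.0 - duty) * standby_ua

# Figures from the text: 109 uA/MHz at 32 MHz active, 0.365 uA standby.
active_ua = 109.0 * 32  # 3488 uA while running
avg_ua = avg_current_ua(active_ua, 0.365, duty=0.001)  # awake 0.1% of the time
# Runtime from a hypothetical 220 mAh coin cell (220,000 uAh):
life_hours = 220_000.0 / avg_ua  # roughly 57,000 hours (about 6.5 years)
```

At this duty cycle the active and standby terms contribute almost equally, so shaving either one pays off—which is exactly why sub-microamp standby currents and fast wakeup times matter.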

The reference mode of the integrated LCD controller reduces display power by approximately 30% compared to the RL78/L1X series. A snooze mode sequencer (SMS) enables dynamic segment updates without CPU intervention, further enhancing energy efficiency.

Development tools for the RL78/L23 include the Smart Configurator and QE for Capacitive Touch, which simplify system design and firmware setup. Renesas also provides the RL78/L23 Fast Prototyping Board, compatible with the Arduino IDE, and a capacitive touch evaluation system for hardware testing and validation.

RL78/L23 MCUs are available now from the Renesas website or distributors.

RL78/L23 product page 

Renesas Electronics 

The post MCUs drive LCD and capacitive touch appeared first on EDN.

Wireless SoC raises AI efficiency at the edge

Fri, 08/29/2025 - 00:52

The Apollo510B wireless SoC from Ambiq combines a 48-MHz dedicated network coprocessor with a Bluetooth LE 5.4 radio for power-efficient edge AI. Its Arm Cortex-M55 CPU, enhanced with Helium vector processing and Ambiq’s turboSPOT dynamic scaling, delivers up to 30× greater AI efficiency and 16× faster performance than Cortex-M4 devices. 

With 64 KB each of instruction and data cache, 3.75 MB of RAM, and 4 MB of embedded nonvolatile memory, the Apollo510B provides fast, real-time processing. Its 2D/2.5D GPU handles vector graphics, while SPI, I²C, UART, and high-speed USB 2.0 support flexible sensor and device connections. High-fidelity audio is enabled via a low-power ADC and stereo digital microphone PDM interfaces.

Apollo510B also integrates secureSPOT 3.0 and Arm TrustZone, enabling secure boot, firmware updates, and protection of data exchange across connected devices. These features make the device well-suited for always-on, intelligent applications such as wearables, smart glasses, remote patient monitoring, asset tracking, and industrial automation.

The Apollo510B SoC will be available in fall 2025.

Apollo510B product page 

Ambiq Micro

The post Wireless SoC raises AI efficiency at the edge appeared first on EDN.

Instruments work together to ensure design integrity

Fri, 08/29/2025 - 00:52

Smart Bench Essentials Plus is an enhanced set of Keysight test instruments offering improved precision and reliability. The core instruments—a power supply, waveform generator, digital multimeter, and oscilloscope—meet industry and safety standards such as ISO/IEC 17025, IEC 61010, and CSA. All instruments are managed from a single PC via PathWave BenchVue software, simplifying test automation and workflows.

According to Keysight, Smart Bench Essentials Plus delivers 10× higher DMM resolution, 5× greater waveform generator bandwidth, 4× more power supply capacity, and 64× higher oscilloscope vertical resolution over the previous series. Development engineers can test, troubleshoot, and qualify electronic designs while leveraging these benefits:

  • Reduce measurement errors with Truevolt technology in a 6.5-digit dual-display digital multimeter.
  • Generate accurate waveforms with Trueform technology in a 100-MHz waveform/function generator.
  • Deliver reliable, responsive power with a 400-W, four-channel DC power supply.
  • Capture even the smallest signals with a portable four-channel oscilloscope featuring a custom ASIC and 14-bit ADC.
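The headline multiples follow directly from bit depth. For instance, the 64× vertical-resolution figure is what you get if the previous-series scope used an 8-bit ADC (my assumption, not stated in the brief):

```python
def resolution_gain(new_bits: int, old_bits: int) -> int:
    """Ratio of ADC quantization levels between two bit depths."""
    return 2 ** new_bits // 2 ** old_bits

vertical_gain = resolution_gain(14, 8)  # 2**14 / 2**8 = 64
```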

Instruments have intuitive, color-coded interfaces and standardized menus to improve productivity. Built-in graphical charting tools make it easy to visualize and analyze test results.

To learn more about the Smart Bench Essentials Plus portfolio and request a bundled quote, click here.

Keysight Technologies 

The post Instruments work together to ensure design integrity appeared first on EDN.

AEC-Q100 LED driver delivers dynamic effects

Fri, 08/29/2025 - 00:52

Diodes’ AL5958Q matrix LED driver integrates a 48-channel constant-current source and 16 N-channel MOSFET switches for automotive dynamic lighting. Two cascade-connected drivers support up to 32 scans, making the device well-suited for narrow-pixel mini- and micro-LED displays that use multiple RGB LEDs to deliver animated lighting effects and information.

The AEC-Q100 qualified driver employs multiplex pulse density modulation (M-PDM) control to raise the refresh rate of dynamic scanning systems without increasing the grayscale clock frequency or introducing EMI. Built-in matrix display command functions reduce processing overhead on the local MCU. These functions include automatic black-frame insertion, ghost elimination, and suppression of shorted-pixel caterpillars.

Operating from a 3-V to 5-V input, the AL5958Q’s 48 constant-current outputs supply up to 20 mA per LED channel string. Current accuracy between channels and matching across devices is typically ±1.5%.

The AL5958Q LED driver costs $1.60 each in lots of 2500 units.

AL5958Q product page

Diodes

The post AEC-Q100 LED driver delivers dynamic effects appeared first on EDN.

Mixed signals, on a power budget: Intelligent low-power analog in MCUs

Thu, 08/28/2025 - 18:20

It goes without saying that battery-powered devices are sensitive to power draw, especially during periods of inactivity. One such use case is in sensor nodes or portable sensors—these devices passively monitor a specific condition. When the threshold is exceeded, they trigger an alarm or log the event for further analysis. Since most devices incorporate some form of microcontroller (MCU), selecting an MCU with intelligent analog peripherals can reduce the bill of materials (BOM) by performing the same functions as a discrete device while potentially saving power by disabling the analog functionality when it is not needed.

To demonstrate these features, we built two demos on the PIC16F17576 microcontroller family. One demo aims to use as little power as possible while detecting temperature changes, while the other utilizes the embedded op-amps to dynamically adjust the gain based on the input signal.

Power consumption

Let’s start at the top—power consumption. No matter how you slice it, all roads will lead to the same basic tenets:

  • Keep VDD as low as possible
  • Minimize oscillator frequency
  • Turn off all unused peripherals and external circuits, when possible, and as much as possible
  • Avoid floating nodes on digital I/O

Beyond this advice, it becomes a lot more application-specific. For instance, most op-amps and ADCs don’t have an OFF switch. This is where intelligent analog peripherals fit into designs.

The “intelligent” part of their name derives from the fact that they can be controlled in software. While most analog peripherals would not be considered power hungry, when optimizing battery life every little bit of current matters, and the integrated peripheral generally has a higher quiescent current draw than the equivalent discrete device due to process limitations.

However, there are special low-power peripherals that allow for ultra-low power operation, even when enabled all the time. For instance, the Low Power Voltage Reference (VREFLP) and Low Power Analog Comparator (CMPLP) in the PIC16F17576 family of MCUs draw minimal power but can trigger interrupts to wake the CPU if action is needed.

For devices without these lower power peripherals, another peripheral available in PIC MCUs is the Analog Peripheral Manager (APM). The APM is a specialized counter that can toggle power ON/OFF to the analog peripherals while allowing the CPU to remain continuously in sleep.

If an event occurs requiring intervention from the CPU, the peripherals can generate an interrupt to wake the device. This avoids having to perform the following sequence: wake the CPU, power on the peripherals, check the results, perform an action, shut down the peripherals, and return to deep sleep.

Low-power demo

The objective of the low-power demo is to demonstrate the new CMPLP and VREFLP as a temperature alarm. This application could be used for cold asset tracking to log when an event over the expected temperature occurs. For the demo implementation, we designed a circuit to detect when a person touches the thermistor(s), causing a rise in temperature.

Figure 1 A finished low-power demo prototype that detects the temperature rise that occurs when a person touches the thermistor(s).

Theory of operation

This circuit is composed of two PIC16F17576 MCUs; one device acts like the device under test (DUT) while the other handles power measurement and display.

Power measurement and display

To measure the minuscule amount of current pulled by the MCU DUT, it was important to design a circuit that could perform high-side current sensing while also being capable of maintaining the power supply at 1.8 V, which is the lowest recommended operating voltage for this device family. For reference, the minimum operating voltage is 1.62 V, which provides a 10% margin on the power supply before the device is out of specified operating conditions.

To measure the quiescent current of the MCU and low-power analog peripherals, a precision 1:1 current mirror IC was used to supply current to the DUT (Figure 2). This IC has a settable compliance output limit, but the tolerancing and ranging on the internal reference was not acceptable for our purposes, so we overdrive the integrated circuit with an external 1.8-V reference (MCP1501-18E) to avoid having to calibrate each unit individually.

Figure 2 The high-side current circuit to measure the minuscule amount of current pulled by the MCU DUT, and 1.8-V DUT power supply.

This ensures the power rail for the DUT is as close as possible to 1.8 V. Guard rings and planes are placed on the PCB to minimize the leakage current of this rail as much as possible. The 1:1 current output goes through a sense resistor, and then a differential measurement of the voltage at the resistor is performed with a 24-bit delta-sigma ADC (MCP3564R) with an external 2.048-V voltage reference (MCP1501-20E). This is shown in Figure 3. The resulting measurement is then displayed on the OLED screen attached to the board.

Figure 3 The ADC implementation where the differential measurement of the voltage at the resistor is performed with a 24-bit delta-sigma ADC with an external 2.048-V voltage reference.
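The code-to-current arithmetic for such a setup is straightforward. The 100-kΩ sense resistor below is a hypothetical value chosen to put ~2.4 µA comfortably on scale (the article does not give the actual resistor), and the conversion assumes a simple signed two's-complement output; the MCP3564R's exact code format should be taken from its datasheet:

```python
def adc_code_to_current_ua(code: int, r_sense_ohm: float,
                           vref: float = 2.048, gain: float = 1.0) -> float:
    """Convert a signed code from a 24-bit delta-sigma ADC into sensed
    current in microamps. One LSB spans vref / (gain * 2**23) volts
    across the sense resistor (simplified two's-complement convention)."""
    v_sense = code * vref / (gain * 2 ** 23)
    return v_sense / r_sense_ohm * 1e6

# With a hypothetical 100 kOhm sense resistor, 2.4 uA drops 240 mV:
code = round(0.240 / 2.048 * 2 ** 23)
i_ua = adc_code_to_current_ua(code, 100_000.0)
```

Note how much resolution is left over: one LSB here corresponds to well under a picoamp through the sense resistor, which is why the displayed reading looks rock-solid.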

A (good) problem we discovered late in the process was that the current measurement in this configuration is so stable, it looks hard-coded on the display. Thankfully, this can be easily disproved by gently touching the DUT’s decoupling capacitors with a finger or other slightly conductive object and observing the change in measured current.

DUT

The DUT device performs a simple but crucial role in detecting temperature changes with as little power consumption as possible. For this, CMPLP and VREFLP are used together with the Peripheral Pin Select (PPS) system to output the state of the CMPLP without waking the CPU.

In an actual application, CMPLP’s output edge (LOW ↔ HIGH) would be used to wake the CPU to perform some action like logging a temperature event or sounding an alarm.

Using the high-side current measurement circuit designed, we found the current of the microcontroller in this state is ~2.2 to 2.4 μA, but there is room for a tiny bit of extra power savings.

VREFLP comprises two separate subsystems: a low-power 1-V reference and a low-power DAC. This application uses the slightly more power-hungry low-power DAC instead of the fixed 1-V reference because the temperature change from physical contact is very small, and the system must recalibrate the threshold on startup to account for environmental variance. In an application where a few degrees of tolerance are acceptable, using the 1-V reference would save a few fractions of a microamp.

Notably, this demo does not use the APM because the APM requires an oscillator to remain active, consuming a little bit more power (~2.8 μA) than simply leaving these ultra-low power modules on. In a situation where multiple analog peripherals are being used, such as the integrated op-amps, ADC, etc., the APM would provide significant savings in power.

Dynamic gain

Another feature of intelligent analog peripherals is the ability to adjust on the fly. In some cases, a signal may have a large dynamic range that is tricky to measure without clipping.

Clipping a signal is usually considered undesirable, as waveform information about the signal is lost. A simple example of this is a microphone: whispering requires a high gain while shouting requires a low gain. With a fixed gain, designers pick the worst (reasonable) conditions to avoid signal clipping, but this, in turn, reduces the signal resolution.

A way around this problem is to use embedded op-amps. These op-amps aren’t going to outmatch the high-end op-amps, but they are often comparable to general-purpose ones.

And, in many cases, the integrated op-amps contain built-in resistor networks that allow the op-amp(s) to adjust the circuit gain as needed. This requires no extra components or specialized circuitry as it’s already integrated into the die.

Dynamic gain demo

One of the main use cases for the integrated op-amps inside MCUs is to dynamically switch gains depending on how strong the signal is. This is often performed to avoid clipping the signal when the signal strength is high.

This application creates a simple demonstration of this use case by amplifying the output of a pressure sensor and displaying it visually on an LED bar graph.

Figure 4 A dynamic gain demo that amplifies the output of a pressure sensor and displays it visually on an LED bar graph.

Theory of operation

Pressure sensor

The pressure sensor in this application changes resistance depending on the amount of pressure applied. This resistor is used as part of a resistor divider network to generate an output signal from 0 to 2 V. Since both the discrete op-amp and the integrated op-amp have high-input impedances, the two circuits can share the same signal without loading down the network.

Dynamic gain circuit

The PIC16F17576 MCU has four op-amps, with two of them containing integrated resistor ladders. These ladders have eight steps, plus an additional option for unity gain (1x), for a total of nine options. Alternatively, resistors or other components can be connected to the I/O pins to assign an arbitrary gain or function, if desired.

In this demo, the MCU’s op-amp is switched between a gain of 2x (LOW) and 4x (HIGH) at runtime depending on the measured signal.

In most applications, when the signal strength is low, the gain would be HIGH. However, it is worth noting that in this demo, the inverse is true. This is purely for visual reasons; otherwise, the clipping condition would have more lights ON and thus appear “better” than the dynamic gain version at a glance. As the gain of the embedded op-amps is set up in software, it was easily reconfigured to match the desired behavior.
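The conventional low-signal-gets-high-gain policy generalizes to the full resistor ladder. Here is a sketch of a simple selector; the ladder subset, the 4.096-V full scale (matching the FVR mentioned below), and the 90% headroom margin are illustrative assumptions:

```python
GAINS = (1, 2, 4, 8, 16)  # assumed subset of the ladder's nine options

def pick_gain(peak_v: float, full_scale_v: float = 4.096,
              headroom: float = 0.9) -> int:
    """Highest available gain that keeps the amplified peak below a
    headroom fraction of the ADC full scale; falls back to 1x if even
    unity gain would clip."""
    for g in reversed(GAINS):
        if peak_v * g <= full_scale_v * headroom:
            return g
    return GAINS[0]
```

In firmware this decision would run on each measurement cycle, with a little hysteresis added so the gain does not chatter between two settings near a boundary.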

Measurement and display

The PIC16F17576 MCU also performs the measurement of both op-amp outputs to display on the LED bar graph. The internal Fixed Voltage Reference (FVR) is used to generate a stable 4.096 V from the +5-V (USB) supply for conversions. MCP23017 I2C I/O expanders are used to drive the LEDs of the display. 

Putting it all together

Adjusting the circuit gain without any external circuitry greatly simplifies designs where there are large signal ranges. These peripherals, of course, will not replace high-performance op-amps, ADCs, DACs, or voltage references, but embedded analog peripherals are a good way to handle signals that require some conditioning but aren’t particularly sensitive. This, coupled with low power functionality, makes them a useful tool to reduce circuit complexity, time to market, and ultimately the BOM in your design.

Robert Perkel is an application engineer for Microchip Technology. In this role, he develops technical content such as App Notes, contributed articles, and videos. He is also responsible for analyzing use cases of peripherals and the development of code examples and demonstrations. Perkel is a graduate of Virginia Tech, where he earned a Bachelor of Science degree in Computer Engineering.

Related Content

The post Mixed signals, on a power budget: Intelligent low-power analog in MCUs appeared first on EDN.

Post-quantum cryptography (PQC) knocks on MCU doors

Thu, 08/28/2025 - 15:16

An MCU facilitating real-time control in motor control and power conversion applications incorporates post-quantum cryptography (PQC) requirements for firmware protection outlined in the Commercial National Security Algorithm (CNSA) Suite 2.0. These MCUs also support Platform Security Architecture (PSA) Level 3 compliance.

PSA Certified Level 3 is an Internet of Things (IoT) security standard that focuses on robust protection against software and hardware attacks on a chip’s root of trust. It provides an independently evaluated and validated environment that can securely house and execute the PQC algorithms.

Figure 1 PQC encompasses the replacement of Elliptic Curve Cryptography (ECC)-based asymmetric cryptography as well as increasing the size of Advanced Encryption Standard (AES) keys and Secure Hash Algorithm (SHA) sizes. Source: Infineon

“By adopting both PSA Certified Level 3 and PQC compliance with other regulations, companies can proactively address current and future cyber threats,” said Erik Wood, senior director of cryptography and product security at Infineon Technologies. He is responsible for defining the security requirements of Infineon MCUs.

Quantum computers, which promise exponential speedups over classical computers for certain problems, are still under development. However, cybercriminals can collect encrypted data now and decrypt it later once quantum computers mature. That calls for futureproofing current systems to ensure that companies remain secure as quantum computing technologies advance.

Enter PQC, a collection of cryptographic algorithms designed to be secure against attacks from powerful quantum computers. In MCUs, which mainly use cryptography during boot-time and run-time operations, it commands significant changes in security architecture amid evolving regulations.

For instance, MCU’s memory size is a key design consideration. “More memory size is required because encryption keys are longer,” Wood said. “The certificate size is different because the signatures of these certificates are much bigger.”

Figure 2 PSOC Control C3 MCU’s embedded security provides stringent protection against quantum-based attacks on critical systems. Source: Infineon

Next comes the throughput shortfall. “While certificates are currently transferred through an I2C bus, the throughput falls short with PQC use,” he added. “Now you need to have three I3C buses.” Wood said that the industry is even debating whether every MCU will have a USB port in four years.

In other words, integrating PQC into MCUs will first entail an upgrade of cryptographic algorithms; memory upgrades come next, and interface upgrades will follow.
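The bus-throughput concern above is easy to sanity-check with a back-of-envelope calculation. The bus speeds below are standard figures (I2C Fast-mode at 400 kbit/s, I3C SDR at 12.5 Mbit/s); the 22-kByte certificate payload and the flat 25% protocol overhead are illustrative assumptions, not measurements.

```python
def transfer_time_s(payload_bytes: int, bus_bits_per_s: float,
                    protocol_overhead: float = 1.25) -> float:
    """Approximate time to move a certificate payload over a serial bus.
    The flat 25% protocol overhead is a simplifying assumption."""
    return payload_bytes * 8 * protocol_overhead / bus_bits_per_s

I2C_FAST_MODE = 400e3  # I2C Fast-mode, bits/s
I3C_SDR_MAX = 12.5e6   # I3C SDR maximum, bits/s

i2c_time = transfer_time_s(22_000, I2C_FAST_MODE)  # roughly half a second
i3c_time = transfer_time_s(22_000, I3C_SDR_MAX)    # tens of milliseconds
```

Even under these generous assumptions, a PQC-sized certificate chain ties up an I2C Fast-mode bus for over half a second per transfer, which makes the industry’s move toward faster interfaces easy to understand.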

Wood claimed that Infineon is the first MCU supplier to have integrated and ported PQC algorithms. “We offer an integrated library already hooked up to the accelerators for peak optimization and performance in a PSA-3 level device.”

Related Content

The post Post-quantum cryptography (PQC) knocks on MCU doors appeared first on EDN.

Power Tips #144: Designing an efficient, cost-effective micro DC/DC converter with high output accuracy for automotive applications

Wed, 08/27/2025 - 16:52

The ongoing electrification of cars brings new trends and requirements with every new design cycle. One trend for battery electric vehicles is reducing the size of the low-voltage batteries, which power either 12-V or 48-V systems. Some auto manufacturers are even investigating whether it’s possible to eliminate low-voltage batteries completely. Regardless, you’ll need isolated high- to low-voltage DC/DC converters as a backup or buffer for the low-voltage battery rail. In all of these cases, the high-voltage battery powers the DC/DC converters. Many high-voltage battery systems in cars currently in production or in development use a 400-V or 800-V architecture.

Given the disadvantages of discharging the high-voltage battery more than necessary, high- to low-voltage DC/DC converters need to support operation with the highest possible efficiency. Different activity states in the car require different power levels in the subsystems—for example, 60 W when the driver opens the car, 300 W when the car is in standby but not moving, and 3 kW or more when the car is in drive and fully operational. It is not possible to optimize a single DC/DC converter to cover all three potential output power levels with high-efficiency operation over the whole load range; in the examples given here, you would need two or three independent power converters.

Converter topology selection

In this power tip, I will focus on the 300-W output power range, also known as a micro DC/DC converter. Suitable DC/DC topologies for this output power range include half- and full-bridge converters. Resonant topologies such as half-bridge inductor-inductor-capacitor (LLC) converters offer higher efficiency conversion than their hard-switched counterparts through zero-voltage switching (ZVS) on the primary side and zero-current switching (ZCS) on the secondary side. Another potential topology is the phase-shifted full-bridge (PSFB) topology, which also employs soft switching by leveraging ZVS but is less cost-effective for the 300-W target output power level, since it requires four switches on the primary side.

Figure 1 shows the converter efficiency for various input voltages and load values for the Texas Instruments (TI) Automotive 300 W Micro DC/DC Converter Reference Design Using Half-Bridge LLC. Optimized for 400-V battery inputs and a 48-V output, this design reflects a good compromise between efficiency and cost among the four topologies discussed.

Figure 1 Efficiency plot of the automotive 300-W micro DC/DC converter reference design. Source: Texas Instruments

In an electric vehicle with a 400-V architecture, the battery voltage can vary from 200 V to 450 V. In general, LLC converters are not known to work well with very wide input voltage ranges because, with peak current-mode control, such a wide input voltage range could lead to the converter prematurely entering light-load efficiency mode (also known as burst mode) under full load conditions, or reaching overload conditions too early under low input-voltage conditions. The reason for both effects is that the feedback voltage is scaled in the controller with the input voltage, making it switching frequency-dependent.

So why should you even consider an LLC for this type of application? The UCC256612-Q1 LLC controller from TI uses input-power proportional control (IPPC), which overcomes these limitations. The feedback voltage only scales with the input power, and stays quasi-constant over the whole input voltage range for a constant load current. Figure 2 shows the differences between IPPC feedback voltage behavior (Figure 2a) and traditional peak current-mode control feedback voltage behavior (Figure 2b).

Figure 2 Feedback over input voltage using (a) IPPC and (b) traditional LLC control. Source: Texas Instruments

Accurate output voltage regulation with isolation

The proper regulation of isolated power supplies in electric vehicles is a tricky topic. Optocouplers, typically used for secondary-side regulation (SSR) in nonautomotive applications, are considered unreliable in automotive applications because of aging effects on the internal glass passivation over their lifetime. An alternative way to provide output feedback to a controller on the primary side is primary-side regulation (PSR) through an auxiliary winding. PSR is not very accurate for high output currents because the voltage drop across the rectifier(s) and droop across traces to the load will be current-dependent but not visible on the auxiliary winding. A second option is to use isolated amplifiers.

For SSR, the reference design uses the TI ISOM8110-Q1 automotive-qualified pin-to-pin replacement for traditional optocoupler devices. Superior aging performance and smaller current transfer ratio (CTR) variations of the ISOM8110-Q1 enable more accurate and reliable designs, which are crucial for automotive systems with expected lifetimes of at least 10 years. In addition, the ISOM8110-Q1 has a slightly different transfer function than traditional optocouplers, enabling higher control loop bandwidths that can ultimately save costs because lower output capacitance values will be able to meet similar load transient requirements.

Figure 3 shows a load transient from 3 A to 6.25 A and back to 3 A for the reference design with a 48-V output. The output voltage deviation with four 82-µF output capacitors is only 400 mV.

Figure 3 Load transient behavior, 400 VIN, 3 A to 6.25 A, and back to 3 A. Source: Texas Instruments

Apart from dynamic output accuracy, load regulation under static load conditions is important too. Figure 4 shows the load regulation across different input voltages for the reference design.

Figure 4 Load regulation over various input voltage levels, illustrating good load regulation under static load conditions. Source: Texas Instruments

For full functionality, the ISOM8110-Q1 requires a bias current of at least 700 µA on the diode side of the device and 700 µA multiplied by the worst-case CTR on the transistor side, which is 155% with a 5 mA bias current and 180% with a 2 mA bias current. Because some control ICs are optimized for minimum standby power, the feedback pin of such a controller might not be capable of sourcing sufficient current to supply the ISOM8110-Q1 on its own. A simple workaround for such a scenario is to provide the bias current with a pull-up resistor from a regulated voltage rail to the feedback pin. The UCC256612-Q1 generates a 5-V rail with an internal low-dropout regulator, which is externally accessible and can therefore provide the bias current for the opto-emulator IC. The block diagram in Figure 5 demonstrates the implementation of this workaround.
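As a back-of-envelope illustration of the pull-up workaround, the sketch below sizes the resistor from the 5-V rail. The 700-µA diode-side minimum and the 155% worst-case CTR come from the article; the 1-V feedback-pin operating level is a placeholder assumption, not a UCC256612-Q1 specification.

```python
def max_pullup_ohms(v_rail: float, v_fb: float,
                    i_diode_min: float, ctr_worst: float) -> float:
    """Largest pull-up resistor that can still source the opto-emulator's
    transistor-side bias current (diode-side minimum times worst-case CTR)."""
    i_transistor = i_diode_min * ctr_worst  # current the pull-up must supply
    return (v_rail - v_fb) / i_transistor

# Article values, except v_fb, which is an assumed feedback-pin level:
r_max = max_pullup_ohms(v_rail=5.0, v_fb=1.0,
                        i_diode_min=700e-6, ctr_worst=1.55)
# r_max works out to roughly 3.7 kOhm, so a 3.3-kOhm standard value
# would leave some margin
```

The actual resistor choice also has to respect the controller’s feedback-pin current limits and loop-gain impact, so treat this purely as a starting-point estimate.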

Figure 5 Secondary-side feedback implementation using the ISOM8110-Q1, with external bias from a control IC on the primary side. Source: Texas Instruments

Alternative for micro DC/DC converters

The reference design demonstrates that the half-bridge LLC topology can be a viable alternative for automotive micro DC/DC converters in the 300-W power range, delivering good efficiency as well as excellent static and dynamic output voltage regulation.

The ISOM8110-Q1 is a cost-effective, accurate and reliable option to close the loop of isolated power converters in automotive applications. It works well with controllers optimized for low standby power when there is the possibility of an external bias voltage.

Markus Zehendner is a systems engineer and Member Group Technical Staff in TI’s EMEA Power Supply Design Services group. He holds a bachelor’s degree in electrical engineering and a master’s degree in electrical and microsystems engineering from the Technical University of Applied Sciences in Regensburg, Germany. His main focus lies on automotive low-voltage designs for advanced driver assistance systems and infotainment, as well as high-voltage designs for hybrid and electric vehicle applications.

Related Content

The post Power Tips #144: Designing an efficient, cost-effective micro DC/DC converter with high output accuracy for automotive applications appeared first on EDN.

Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs

Tue, 08/26/2025 - 15:40

Accurate, inexpensive, and mature platinum resistance temperature detectors (PRTDs) with an operating range extending from the cryogenic to the incendiary are a gold (no! platinum!) standard for temperature measurement.

Similarly, the 4 to 20 mA analog current loop is a legacy, but still popular, noise- and wiring-resistance-tolerant interconnection method with good built-in fault detection and transmitter “phantom-power” features.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 combines them in a simple, cheap, and cheerful temperature sensor using just eight off-the-shelf (OTS) parts, counting the PRTD. Here’s how it works.

Figure 1 PRTD current loop sensor with Ix = 500 µA constant-current excitation.
Ix = 2.5 V/R2, PRTD resistance = R1(Io/Ix – 1)
R1 and R2 are 0.1% tolerance (ideally)

The key to measurement accuracy is the 2.50-V LM4040x25 shunt reference, available with accuracy grade suffixes of 0.1% (x = A), 0.2% (B), 0.5% (C), and 1% (D). The “B” grade is consistent (just barely) with a temperature measurement accuracy of ±0.5°C.

R1 and R2 should have similar precision. R2 throttles the 2.5 V to provide Ix = 2.5/R2 = 500 µA excitation to T1. Because A1 continuously servos the Io output current to hold pin 3 = pin 4 = the LM4040 anode voltage, the 2.5 V across R2 is held constant, and therefore Ix is as well.

Thus, the voltage across output sense resistor R1 is forced to Vr1 = Ix(Rprtd) and Io = Ix(Rprtd/R1 + 1). This makes Io/Ix = Rprtd/R1 + 1 and Rprtd/R1 = Io/Ix – 1 for Rprtd = R1(Io/Ix – 1).

Wrapping it all up with a bow: Rprtd = R1(Io/(2.5/R2) – 1). Note that accommodation of different Rprtd resistance (and therefore temperature) ranges is a simple matter of choosing different R1 and/or R2 values.

Conversion of the Io reading to Rprtd is an easy chore in software, and the step from there to temperature isn’t much worse, thanks to Callendar-Van Dusen math.
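A minimal sketch of that software chore, assuming R2 = 5 kΩ (which gives the Figure 1 excitation of Ix = 500 µA), a PT100 element, and a placeholder R1 = 100 Ω (the actual design scales R1 and R2 to map the PRTD range into 4 to 20 mA). The Callendar-Van Dusen coefficients are the standard IEC 60751 values, and the closed-form inverse used here is valid for temperatures at or above 0°C.

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients (valid for T >= 0 degC)
A = 3.9083e-3
B = -5.775e-7

def rprtd_from_io(io_amps: float, r1_ohms: float = 100.0,
                  r2_ohms: float = 5000.0, vref: float = 2.5) -> float:
    """Invert the loop equation: Ix = Vref/R2, Rprtd = R1*(Io/Ix - 1)."""
    ix = vref / r2_ohms
    return r1_ohms * (io_amps / ix - 1.0)

def temp_c_from_rprtd(r_ohms: float, r0_ohms: float = 100.0) -> float:
    """Solve R = R0*(1 + A*T + B*T^2) for T via the quadratic formula."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohms / r0_ohms))) / (2.0 * B)
```

For example, with these placeholder values an Io reading of 1.1925 mA corresponds to Rprtd = 138.5 Ω, which the inverse Callendar-Van Dusen expression maps to roughly 100°C for a PT100.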

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs appeared first on EDN.

Integrated voltage regulator (IVR) for the AI era

Tue, 08/26/2025 - 13:20

A new integrated voltage regulator (IVR) claims to expand the limits of current density, conversion efficiency, voltage range, and control bandwidth for artificial intelligence (AI) processors without hitting thermal and space limits. This chip-scale power converter can sit directly within the processor package to free up board and system space and boost current density for the most power-hungry digital processors.

Data centers are grappling with rising energy costs as AI workloads scale with modern processors demanding over 5 kW per chip. That’s more than ten times what CPUs and GPUs required just a few years ago. Not surprisingly, therefore, in a data center, power can account for more than 50% of the total cost of ownership.

“This massive jump in power consumption of data centers calls for a fundamental rethink of power delivery networks (PDNs),” said Noah Sturcken, co-founder and CEO of Ferric. He claims that his company’s new IVR addresses both the chip-level bottleneck and the system-level PDN challenge in one breakthrough.

Fe1766—a single-output, self-contained power system-on-chip (SoC)—is a 16-phase interleaved buck converter with a fully integrated, high-switching-frequency powertrain in which high-performance FETs and capacitors drive ferromagnetic power inductors.

Figure 1 The new IVR features a digital interface that provides complete power management and monitoring with fast and precise voltage control, fast transient response times, and high bandwidth regulation. Source: Ferric

Fe1766 delivers 160 A in an 8 × 4.4 mm form factor to bolster power density and reduce board area, layout complexity, and component count. The new IVR achieves one to two levels of miniaturization compared to a traditional DC/DC converter by taking the collection of discrete components normally designed onto a motherboard and replacing them with a much smaller chip-scale power converter.

Moreover, these IVRs can be directly integrated into the packaging of a processor, which improves the efficiency of the PDN by reducing transmission losses. It also brings the power converter much closer to the processor, leading to a cleaner power supply and a reduction in board area. “That means more processing can occur in the same space, and in some cases, design engineers can place a second processor in the same space,” Sturcken added.

Fe1766 enables vertical power delivery within the processor package, providing more power in the same footprint while cutting energy losses. That makes it highly suitable for ultra-dense AI chips like GPUs. AI chip suppliers like Marvell have already started embedding IVRs in their processor designs.

Figure 2 Marvell has started incorporating IVRs in its AI processor packages. Source: Ferric

Ferric, which specializes in advanced power conversion technologies designed to optimize power delivery in next-generation compute, aims to establish a new benchmark for integrated power delivery in the AI era. And it’s doing that by providing dynamic control over power at the core level.

Related Content

The post Integrated voltage regulator (IVR) for the AI era appeared first on EDN.

If you made it through the schtick, Google’s latest products were pretty fantastic

Mon, 08/25/2025 - 21:38

Until last year, Google historically held its mobile device launch events in October, ceding the yearly first-mover advantage to primary competitor Apple with its September comparable-device announcements. In 2024, however, Google “flipped the script”, jumping ahead to August. The same thing seems to have happened this year…assuming Apple does a late summer or early fall event at all, of course, since all we have right now is a lot of leaks, not a solid date. That said, Google rolled out the latest updates to its longstanding smartphone, smart watch, and earbuds products last Wednesday, August 20th at its Made by Google event, along with making additional announcements related to other R&D programs and product lines.

I suppose I probably should touch on (and get past) the “schtick” aspect of this post’s title first. I didn’t watch the livestream, as I was fully focused on my “day job” duties at the time. And truth be told, I still haven’t watched the archived video in its entirety, because I can’t stomach it:

Say what you want about Jimmy Fallon as a comedian, television host, actor, singer, writer, and producer; I personally think he’s quite talented, generally speaking:

As a tech event host, however, in this initial experiment at least, his skill set was a mismatch, IMHO at least. Not that the other guests, or even Google’s own spokespersons, were much—if any—better, for that matter. Here’s what TechCrunch noted in retrospect:

The result was a watered-down, cringey, and at times almost QVC-like sales event, which Reddit users immediately dubbed “unwatchable.” In large part, this had to do with Fallon’s performance.  Trying to shift his goofy late-night persona to a corporate event, he ended up coming across as deeply uninterested in the technology, necessitating an over-the-top display of decidedly less-than-genuine enthusiasm.

The Verge’s conceptually similar take was aptly titled “The Made by Google event felt like being sucked into an episode of Wandavision”. Here’s an excerpt:

The real unsettling thing was understanding that I — and other gadget nerds and media — were not the target audience for this show. The point of a keynote is to be both informative and impressive, telling the most interested audiences about the ins and outs of the new products and attempting to wow them with live demos and technological feats. Today’s Pixel event was less concerned with product minutiae and more concerned with making it all entertaining.

That said, Victoria Song’s self-aware closing comments were thought-provoking; perhaps at least some of the reason for my underwhelming reaction was that I’m traditional and…old:

Back in the day, [Steve] Jobs needed media to get the word out and build buzz. In this new age, companies can go straight to the source through influencers, YouTube (which Google also owns), and livestreams. It’s why you see an increasing number of influencers invited to launch events — and featuring in them. There were plenty in attendance today. It’s not that journalists are getting left out. It’s more that the keynote as we know it isn’t the only way to get attention anymore. All I know is today felt like the end of an era. That’s not necessarily a bad thing. I’ll confess that traditional keynotes have felt stale as of late. As cringe as it was, this was at least something different.

That all said, I give Google kudos for taking it straight to Apple this time, which depending on your perspective, reflects either genuine confidence or deluded arrogance. And I’d still suggest you stick with The Verge’s 11:39 abridged video versus slogging through the full 1:16:55 version:

The processors

One downside to the reality that “gadget nerds and media were not the target audience for this show” is that we didn’t end up getting nearly as much technical detail as we’d like. At this point, for example, we don’t have any idea whose SoC is inside Google’s new Pixel Buds 2a earbuds:


To be fair, we don’t generally find out this kind of info for these kinds of products anyway, at least until either the supplier reveals its presence or someone like me tears ‘em apart. And speaking of suppliers subtly-or-not revealing themselves, the fact that Qualcomm rolled out its latest “Snapdragon W5+ and W5 Gen 2 Wearable Platforms” for smart watches and the like the same day as Google’s event was a tipoff that it’s what’s powering the new Pixel Watch 4:

The main IC, comprising a quad-core Arm Cortex-A53 CPU cluster and a Hexagon V66K AI DSP, is fabricated on a 4 nm process (foundry source not identified). The key difference between the W5 (what Google’s smartwatch uses) and W5+ is the latter’s inclusion of a separate 22 nm-fabricated always-on coprocessor (AOC). The Qualcomm chipset’s narrowband non-terrestrial networks (NB-NTN) support enables emergency message transmission and reception via satellite when out of cellular and Wi-Fi coverage, something rumored for the (near) future with Apple Watches but not available with Apple’s current wrist-wearable products. And dual-band GPS capabilities, coupled with “Location Machine Learning 3.0” RF front-end (RFFE) and processing algorithm enhancements, claim to improve positioning accuracy by up to 50%.

Speaking of “foundry sources”, a supplier transition here is one of the most notable aspects of the new Tensor G5 SoC powering Google’s latest Pixel 10 products, including the newest Fold:

Google provided no detailed block diagram, sorry, only a pretty concept picture:

And when it comes to specs, there’s only high-level handwaving, at least for now, until third-party developers and users get their hands on hardware:

  • An up to 60% more powerful TPU
  • A CPU that is 34% faster on average, and
  • New security hardware

The other thing we know is that Google switched from its longstanding foundry partner, Samsung, to TSMC this time around. The Tensor G4 (along with its G3 precursor…perhaps that lithography stall was behind the foundry switch?) had been built on a 4-nm process. Now it’s fabbed on 3 nm.

Beyond that…🤷‍♂️ The Tensor G4 contained the following “octa-core” CPU cluster:

  • 1× 3.1 GHz Cortex-X4
  • 3× 2.6 GHz Cortex-A720
  • 4× 1.92 GHz Cortex-A520

along with an Arm Mali-G715 MP7 GPU. Ars Technica notes that this time around, the total CPU core count is the same (eight), but the “mix” is different; one “prime” core, five mid-level ones, and two efficiency ones. Core identity and speed specifics are TBD, as are GPU details, although benchmarks (including relative comparisons to Apple SoC counterparts) have already leaked. Meanwhile, the Tensor Processing Unit (TPU) for on-device AI inference seems to be notably upgraded:

The more powerful TPU runs the largest version of Gemini Nano yet, clocking in at 4 billion parameters. This model, designed in partnership with the team at DeepMind, is twice as efficient and 2.6 times faster than Gemini Nano models running on the Tensor G4. The context window (a measure of how much data you can put into the model) now sits at 32,000 tokens, almost three times more than last year.

More on the Pixel Buds 2a (and Pro 2)

As I’d mentioned upfront in my Pixel Buds Pro teardown published at the beginning of 2023, Google’s initial earbuds product efforts had been hit-or-miss at best. The Pixel Buds Pro, though, introduced at the May 2022 Google I/O developer conference, was a notable update, adding both active noise cancellation (ANC) and “transparency”, among other improvements:

The subsequent enhancements made to their Pixel Buds Pro 2 successors, unveiled at last year’s Made by Google event, were more modest, and I took a “pass” on the upgrade. The original Pixel Buds Pro remain my Android-paired “daily drivers” to this very day, actually. But now, with the gen-2 update to the four-plus year old A Series:

I may reconsider my longstanding no-update loyalty. They carry forward the bulk of the Pixel Buds Pro 2 capabilities, including first-time A-Series ANC support, at the modest tradeoff of decreased between-charges operating time. Speaking of charging, the batteries inside the case (albeit not those in the earbuds themselves) are user-replaceable, sparing you from tossing the case in the trash when its original cells expire. And did I mention that the Pixel Buds 2a cost $100 less than their “big brother”? Presumably to maintain (and maximize) the feature-set differentiation that rationalizes the price gap, Google also announced a new color option and pending modest feature-set updates for the Pixel Buds Pro 2:

More on the Pixel Watch 4

I never would have believed that a smartwatch update would be the highlight of a new product launch suite, but I actually think that’s what Google pulled off last week. The glass face is now curved across the entirety of its diameter, not just at the outer edges…as is the display itself, which Google refers to as “Actua 360”. The result? A 10% larger active area, even with 16% smaller bezels, and an edgeless appearance. It’s also 50% brighter, with a 3000-nit max output.

No word on battery capacity expansions for either/both the 41 mm and 45 mm diameter models, although given that the new Qualcomm chipset’s RFFE is ~20% smaller than before, it wouldn’t surprise me to learn that Google filled the now-available internal space with more Li-ion capacity. Regardless, Google claims that the Pixel Watch 4 has a 25% longer battery life (30 hours on the 41 mm version and 40 hours on the larger battery capacity 45 mm variant), further extendable to two days (41 mm) and three days (45 mm) via Battery Saver mode.

And when recharging is necessary, Google has made welcome updates here as well, claiming that the Pixel Watch 4 charges 25% faster than before, from zero to 50% in just 15 minutes.

The approach shown in the above video marks the third charging scheme Google has employed across only four smartwatch generations to date. The first-generation Pixel Watch was launched three years ago at Made by Google:

and previewed a few months earlier at the 2022 Google I/O conference. It remains in daily use on my wrist to this very day. The original Pixel Watch leveraged proprietary wireless charging, which was convenient but slow and inefficient, and also translated into thermal tradeoffs that “encouraged” the back panel to fall off. Second- and third-generation successors switched to physical charging contacts on that same back panel. And now Google’s moved them to the side, among other things, translating into improved (more accurately: finally feasible) repairability.

Unsurprisingly, the new SoC affords additional Gemini-fueled AI capabilities, both fitness-specific (a pending Fitbit revamp is planned, for example) and more general. Other UI enhancements are physical versus virtual: a 15% stronger haptic engine and a louder, clearer speaker. Pixel Watch 4 preorders are now open, with product availability slated for October.

More on the Pixel 10 phone family

And now for the smartphones, normally the upfront-in-coverage stars of the show. Unless you look closely, and disregarding the varied color options this time around, you won’t be able to discern any differences between them and last year’s Pixel 9 predecessors, at least from the outside. Same four models (10, 10 Pro, 10 Pro XL, and 10 Pro Fold, the latter also with October availability), same screen-size options (albeit with modestly boosted peak brightness) and other dimensions (albeit slightly thicker in some cases), etc. The biggest external evolution is the baseline Pixel 10’s added (third) 10.8 Mpixel backside telephoto camera, prompting a (presumably bill-of-materials driven) devolution of its ultrawide peer to 13 Mpixels from the Pixel 9’s 48 Mpixels (the wide camera resolution also dropped slightly, from 50 to 48 Mpixels).

Pop off the screen and peer inside, and things get more interesting. The 3rd gen Fold version, for example, is now IP68 water and dust resistant; Google was also refreshingly candid that it’s not a “forever” panacea (for it or any other device, for that matter). The Pixel 10’s Wi-Fi downgrades from 7 (on the Pixel 9) to 6e. Battery capacities have gone up slightly across the board, as have between-charges battery life estimates. And how does one charge those batteries? Legacy wired USB-C connections are faster than before, at least for the Pixel 10 Pro XL, which can charge to 70% in 30 minutes using a 45-W input. And that same product variant also supports up-to-25W wireless Qi2.2 charging. The others are “only” 15W-capable, although their common Qi2-generation technology embeds magnets for the first time, branded by Google as Pixelsnap:

One pleasant surprise, speaking of bill-of-materials costs, was that tariff pressures aside (Pixel products are variously manufactured in China, Vietnam and, increasingly, India), and aside from the $100-more Pixel 10 Pro XL, there were no other price increases from last year’s models to this year’s. And Google also didn’t “hide” tariff costs by cutting RAM capacities (which would counterbalance its burgeoning AI ambitions, anyway) or offering only higher-capacity, higher-priced (and more profitable) storage variants, the latter as Apple is rumored to be doing with at least some of its various upcoming iPhone 17 flavors. Speaking of storage, the baseline interface moves from UFS v3.1 on the Pixel 9 to faster v4.0 on the Pixel 10…as long as you purchase a device with at least 256 GBytes of flash memory, that is. Bump that up to 512 GBytes or further, and you also get “Zoned UFS” (ZUFS). Google didn’t say much about it last week, but here’s how SK Hynix explained it in a year-plus-back press release:

The ZUFS is a differentiated technology that classifies and stores data generated from smartphones in different zones in accordance with characteristics. Unlike a conventional UFS, the latest product groups and stores data with similar purposes and frequencies in separate zones, boosting the speed of a smartphone’s operating system and management efficiency of the storage devices. The ZUFS also shortens the time required to run an application from a smartphone in long hours use by 45%, compared with a conventional UFS. With the issue of degradation of read and write performance improved by more than four times, the lifetime of the product also increased by 40%.

The explicit ZUFS tie to higher capacities suggests to me that it depends on multi-die memory modules, which are inherently easier to manage from a multiple-simultaneous-access (read and/or write) standpoint. Further, regarding the claimed performance and durability improvements, it’s conceptually feasible that a portion of the total capacity might derive from more costly (on a per-bit basis) but more robust single- or dual-bit-per-cell flash memory, with the remainder using cheaper but slower and less durable triple- or quad-bit-per-cell flash, and the operating system directing usage to one or the other on the fly as appropriate. One final internal (with external ramifications) change of note: with the exception of the Fold variant, and only in the United States, Google has dropped physical SIM support from this year’s phones, just as Apple had done with its iPhone 14 product line three years back.

Other “teasers”

Google also mentioned last week that a pending migration from Google Assistant to Gemini, in both free and paid service tiers, was planned for its various existing Home devices (likely a reaction to both users’ increasingly vocal complaints about their existing setups and competitor Amazon’s underway Alexa+ staged rollout), along with reassuring everyone that Gemini support in Android Auto and Google TV is still on the way. And apparently, judging from a teased image, “Gemini for Home” will be supported by not only legacy but also new hardware. I could imagine, for example, that legacy memory capacity and processing horsepower limitations would significantly hamper, if not completely preclude, local “edge” AI inference capabilities:

(yes, that’s Formula 1 Team McLaren driver Lando Norris)

And what about new (specifically Google-branded) product categories? Company executives indicated, for example, that Google has at least temporarily paused internal tablet development after the underwhelming market acceptance of its most recent (2.5 year old) Pixel Tablet model:

a particularly interesting twist in light of chronologically-coincident reports that Amazon is dropping its Android-derived Fire OS and refocusing on “pure” Android for its future tablets.

Similarly, Google claims it has no definite (public, at least) plans to release branded smart glasses or other head-mounted wearables—instead being content to develop foundation O/S and application suites for partners to productize—or even a smart ring. I’m particularly skeptical about that last one, as I am regarding Apple’s claimed non-interest in the smart ring product category. I’ve been testing various manufacturers’ smart rings in recent months, with compelling albeit embryonic outcomes, and I find it hard to imagine either Apple or Fitbit-by-Google perpetually ceding that particular product-category space to others (that said, the effectiveness of patent-portfolio barriers should never be underestimated).

Stay tuned for the first in a series of smart ring-themed posts by yours truly in EDN starting next month. And with that, nearing 3,000 words, I'm going to wrap up for today. Apple is rumored to be holding its own event in a few weeks, which, as usual, I'm also planning to cover. Until then, as always, let me (and your fellow readers) know your thoughts via the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related articles


The post If you made it through the schtick, Google’s latest products were pretty fantastic appeared first on EDN.

The Google TV Streamer 4K: Hardware updates on display(s)

Mon, 08/25/2025 - 19:09

Within my year-back coverage of Google’s August 2024 multi-product launch event, I devoted multiple prose paragraphs to the $99.99 TV Streamer 4K, the company’s high-end replacement for the popular prior Chromecast with Google TV 4K and HD series:

Memory-drive evolutions

Part of the motivation for Google’s product-succession move, we belatedly learned, was a requirement unveiled three months later that all new Google TV O/S-licensed devices needed to ship with a minimum of 2 GBytes of RAM. While the original (4K) Chromecast with Google TV met that specification, the HD sibling undershot it by 25% (1.5 GBytes). The TV Streamer 4K, on the other hand, doubles the onboard RAM allotment to 4 GBytes.

Another increasingly problematic issue with prior-generation devices was their dearth of integrated nonvolatile (flash memory) storage, which adversely affected not only how many apps and other downloaded content could be held on-device but even the available capacity to house operating system updates. Both the 4K and HD variants of the Chromecast with Google TV included just 8 GBytes of storage, only around half of which was user-accessible. The TV Streamer 4K quadruples that total amount, to 32 GBytes.

Then there’s the competitive angle. A year ago, the most advanced device in licensee-slash-competitor (frenemy?) Walmart’s product arsenal was the $19.88 onn. 4K Streaming Box (which I just noticed they’re calling the “Streaming Device” again in conjunction with the recent packaging refresh) with 2 GBytes of RAM and 8 GBytes of nonvolatile storage, matching the memory capacities of the Chromecast with Google TV 4K at less than half the price. That said, as any of you who saw one of my last-month teardowns already knows, Walmart subsequently unveiled a “Pro” device of its own, with 3 GBytes of RAM, 32 GBytes of nonvolatile storage, and, at $49.99, a price tag once again half that of the Google TV Streamer 4K counterpart.

And amid all this memory-related chitchat, don’t overlook equally important processing and graphics horsepower, along with connectivity and other hardware enhancements. Walmart has historically leveraged Amlogic SoCs, sometimes architecture- and/or clock speed-upgraded from one generation to another, and other times generationally essentially the same. Up to this point, at least, Google has also done the same. What’s inside the TV Streamer 4K, claimed to be “22% faster” this time? And do its feature-set “adders” versus competitive alternatives, such as the ability to act as a Google Home and Matter-and-Thread hub…umm…matter? Let’s find out.

eBay once again comes through

Sorry, folks, but given my per-teardown monetary compensation, I’m not going to drop $100 on a brand new dissection “patient”, especially if I’m not confident upfront that I’ll be able to get it back together afterwards in cosmetically pristine and fully functional form. Fortunately, back in early May, I came across a “Porcelain” color (“Hazel” is also available) used-condition device with all accessories included on eBay for $52.25 plus tax, with free shipping. It was a bit beat up, but seemingly still worked fine:

Here’s how it and the accompanying accessories arrived (inside a bubble wrap-rich cardboard box, of course), as usual, in the following photo (and others to come) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Let’s have a close-up peek at the power supply first. I was admittedly surprised to still see Google shipping devices accompanied by wall warts with legacy USB-A outputs, mated to USB-A to USB-C cables, although the combo still seemingly provides sufficient juice to power the streamer:

That’s a 5V/1.5A (7.5W) output, if you can’t discern the faint fine print:

Next, the remote control:

It’s a slightly larger version of the one bundled with the Chromecast with Google TV HD (to the right in the following photos), notably moving the volume controls to the front versus the side:

And now for the star of the show, with the following specifications:

  • Length: 6.4 in
  • Width: 3.0 in
  • Height: 1.0 in
  • Weight: 5.7 oz

Note that wired Ethernet support (optional to use, in addition to the built-in Wi-Fi) is integrated this time, eliminating the need for a separate USB-C hub. More generally, left-to-right, there’s the status LED, a “find remote” button that does double-duty for reset purposes, USB-C (software-enabled for both power and peripheral data purposes), Gigabit Ethernet, and HDMI 2.1:

Open sesame

Time to dive inside. That underside rubberized “foot” is usually a fruitful pathway bet:

No luck yet, but the various-shaped and -sized opening outlines barely visible below the translucent next-level layer are encouraging:

That’s better…

…save for the lingering “bubble” after I put the “foot” back in place, a familiar sight to anyone who’s ever imperfectly applied a screen protector…

Let’s pause for a moment and take in the lay of the land:

There are screw heads in all four corners, along with recessed tabs on both sides, and additional holes (with metal visible within them) at both the top and bottom edges:

Removing the screws was easy:

The tabs were more of a struggle and, ultimately, a surprise. What I thought I needed to do was to carefully bend them out of the way, thereby enabling the two halves to vertically separate. And indeed, I was able to shift one to the side, fortunately not breaking it in the process. But when I turned my attention to the other, the two halves instead separated sideways in response:

And then they vertically lifted apart. Turns out I could have saved myself some trouble (and potential tab breakage) by just sliding them apart from the beginning:

Tackling various temperature inhibitors

Next up: that sizeable heat sink. Remember the earlier-mentioned “additional holes at both the top and bottom edges”? Those were for the four additional screws that now need to be removed; the tips had been visible through the holes to the other (bottom) side:

Houston, we have liftoff:

Next, the PCB, held in place by plastic tabs (and the connectors’ inserts to the case back panel):

And yes, as you can see from the now-present smear, I got thermal paste all over myself, etc. in the process of getting the PCB out of the bottom case half:

A close-up of the LED light pipe and button mechanical bits:

Voila:

Already visible are the PCB-embedded Wi-Fi antennae on both sides; the TV Streamer 4K supports Wi-Fi 802.11ac (both 2.4 GHz and 5 GHz) along with both Bluetooth 5.1 and a Thread transceiver. Before going any further, let’s get rid of the rest of that thermal paste, properly this time (via rubbing alcohol and a tissue):

Now let’s flip the PCB over and see what the other side reveals:

Another Faraday cage! And another embedded antenna (lower left). I’m guessing that it’s for Bluetooth and, doing double-duty, Thread, both protocols being 2.4 GHz-based.

While here, let’s get this cage off. Unlike most I’ve encountered, this one has numerous discrete “dimpled” tabs holding it in place, versus longer segments each with multiple embedded “dimples”:

Tedious patience eventually won out, however:

The “fins” (which I presume are for “spring” purposes) on top of the Faraday cage are interesting:

And what’s with the three gold-color “clips” (for lack of a better word) scattered around the cage, readers? I’ve seen them in past teardowns, too; I’m not sure what purpose they serve:

A new generation, a supplier transition

A closeup reveals, at lower left, an unknown chip stamped thusly:

MG21
A020H1
B02ARA
2436

to its right, a MediaTek MT6393GN of unknown function (although I suspect it’s a power management controller; and as for my earlier “what SoC is in the design this time” question: hmm, MediaTek?), and at lower right, a Samsung K4FBE3D4HB-MGCL 32 Gbit LPDDR4 DRAM:

Back to the topside, and (tediously, again) off with another Faraday cage:

More thermal paste inside, unsurprisingly:

Zooming in, I’m guessing that the application processor is at far left, under the lingering lump of paste (which I’ll attempt to clean up next). Below it is the nonvolatile storage, a Kioxia (formerly Toshiba Memory) THGAMVG8T13BAIL 32 GByte eMMC flash memory. To its right is the wired Ethernet transceiver, a Realtek RTL8211F. And at far right is the wireless communications nexus, MediaTek’s MT7663BSN “802.11a/b/g/n/ac Wi-Fi 2T2R + Bluetooth v5.1 Combo Chip”.

Who’ll take my bet that under that glob of thermal paste is a MediaTek-sourced SoC?

I win! It’s the MT8696, based on a quad-core Arm Cortex-A55 and capable of clocking at up to 2 GHz. I can’t read the markings on the crystal in the SoC’s upper left corner, but TechInsights’ analysis report, which I’ll revisit soon, says that the MT8696 runs at 1.8 GHz in this design.

All that was left was to apply fresh thermal paste everywhere I’d cleaned it off, set the Faraday cages back on top of their brackets, push the tabs back in place, snap some side-view shots:

and then fire it back up and see if it still works. I didn’t bother with putting the top back in place at first, in case it didn’t work, but that white LED glow in the lower left is an encouraging sign.

Huzzah!

I let it run for about 15 minutes to ensure that it was thermally stable, then unplugged it and completed the reassembly process.

Is the enemy of my enemy my friend?

In closing, I’ll share the report summary of another teardown I came across, from TechInsights, with the identities of a few other ICs. And I’ll toss out a few questions for your introspection:

  • Given Google’s conspicuous reference to this one as the “4K” model, will they follow up later with an “HD” edition, as they did in the Chromecast with Google TV era?
  • Given the subsequent unveiling of both Walmart’s aforementioned 4K Pro Streaming Device and even newer “little brother” (sorta…hold that thought for another teardown to come) onn. 4K Plus Streaming Device, plus other manufacturers’ Google TV O/S-based products, all significantly lower priced, just how many TV Streamer 4Ks does Google really expect to sell?
  • And at the end of the day, given that Google is fundamentally a software company (with a software-licensing business model), does it matter? Is the TV Streamer 4K fundamentally just a showcase product to advance the feature set of the overall market, analogous to Microsoft and its Surface computer product line? Said another way, are Amazon (with its various Fire OS-based devices), Apple (with tvOS-based Apple TV products), and Roku (Roku OS-based sticks, boxes, and TVs) Google’s real competitors?

Wrapping up, some words I previously wrote (and EDN subsequently published) last August:

Competing against a foundation-software partner who’s focused on volume at the expense of per-unit profit (even willing to sell “loss leaders” in some cases, to get customers in stores and on the website in the hopes that they’ll also buy other, more lucrative items while they’re there) is a tough business for Google to be in, I suspect. Therefore, the pivot to the high end, letting its partners handle the volume market while being content with the high-profit segment.

How well (or not) has my year-back perspective held up? Any other thoughts on what I’ve shared today? Let me (and your fellow readers) know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post The Google TV Streamer 4K: Hardware updates on display(s) appeared first on EDN.

EMC compliance spanning instruments, software, and systems

Mon, 08/25/2025 - 15:48

A variety of electromagnetic compatibility (EMC) testing solutions—standalone instruments, software, and systems—will be on display at Rohde & Schwarz’s booth during the IEEE EMC Europe 2025 symposium, held at Sorbonne Université in Paris from 1-5 September 2025.

Start with the HF1444G14, the new high-gain electromagnetic interference (EMI) microwave antenna covering 14.9 to 44 GHz. It will be paired with the company’s ESW EMI test receiver to demonstrate full compliance testing with a single measurement. The ESW EMI test receiver, boasting an FFT bandwidth of up to 970 MHz, facilitates measurements of CISPR frequency bands C and D in a single sweep.

Figure 1 The ESW EMI test receiver offers a wide measurement bandwidth and high dynamic range. Source: Rohde & Schwarz
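The “single sweep” claim follows directly from the CISPR 16 band definitions: bands C and D together run from 30 MHz to 1 GHz, a 970-MHz span that just fits within the receiver’s FFT bandwidth. A quick sanity check (band edges per CISPR 16-1-1; the dictionary layout below is purely illustrative):

```python
# CISPR 16-1-1 frequency bands, in MHz.
CISPR_BANDS_MHZ = {
    "A": (0.009, 0.150),    # 9 kHz - 150 kHz
    "B": (0.150, 30.0),     # 150 kHz - 30 MHz
    "C": (30.0, 300.0),     # 30 MHz - 300 MHz
    "D": (300.0, 1000.0),   # 300 MHz - 1 GHz
}

# Span covering bands C and D back-to-back:
span_c_d_mhz = CISPR_BANDS_MHZ["D"][1] - CISPR_BANDS_MHZ["C"][0]
print(f"bands C+D span: {span_c_d_mhz:.0f} MHz")  # 970 MHz
```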

Next, the EPL1007 EMI test receiver, supporting frequency ranges up to 7.125 GHz, can either be used for EMI pre-compliance testing or as a CISPR 16-1-1 compliant receiver. It’s a portable device that can operate on batteries, making it suitable for a wide range of testing environments.

Figure 2 The EPL1007 EMI test receiver is suitable for conducted and radiated measurements. Source: Rohde & Schwarz

Then there is the ELEKTRA test software, which automates EMC testing for EMI and electromagnetic susceptibility (EMS) measurements of equipment under test (EUT). The software simplifies test configuration, speeds up test execution, and generates comprehensive test reports. Rohde & Schwarz will demonstrate new features of this test software, including the latest capabilities for immunity testing in reverberation chambers.

Figure 3 The ELEKTRA test software captures the entire system to measure EMI emissions and EMS immunity. Source: Rohde & Schwarz

The Munich, Germany-based test and measurement company will also demonstrate EMI debugging on its oscilloscopes and probing solutions. Rohde & Schwarz’s MXO 5 oscilloscopes—featuring an update rate of more than 4.5 million wfms/s and more than 45k FFT/s for spectrum analysis—will be paired with the RT-ZISO isolated probing system to allow users to debug digital and power electronic devices quickly.

Rohde & Schwarz will also present four technical sessions at the conference.

Related Content

The post EMC compliance spanning instruments, software, and systems appeared first on EDN.

An e-mail delivery problem

Fri, 08/22/2025 - 16:53

For many years, I have been using my IEEE alias e-mail address in the “From” line of outgoing messages and receiving replies addressed to that alias. Recently, that doesn’t seem to work anymore. If I put ambertec@ieee.org into the “From” line, messages often do not get delivered, as in the three examples below.

Figure 1 Screenshots of several rejected emails when ambertec@ieee.org is used in the “From” field.

The techno-babble of these rejections is different in each case, but the end result is the same: My message was refused for delivery.

I can only reliably send messages now using jdunn4@optimum.net as the “From” entry because, in the interest of “security,” the “From” entry must now match the actual sending address. When I made them match in these three cases, all of the messages were successfully sent to their intended recipients.
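This behavior is consistent with receiving servers enforcing DMARC identifier alignment (RFC 7489), under which the domain in the visible “From:” header must match, at least at the organizational level, the domain that actually passed SPF or DKIM authentication. A simplified sketch of the alignment logic (an illustration only, not a real mail-library API):

```python
def dmarc_aligned(from_domain: str, authenticated_domain: str,
                  strict: bool = False) -> bool:
    """Simplified RFC 7489 identifier-alignment check.

    Relaxed mode compares organizational domains only; a real
    implementation would consult the Public Suffix List.
    """
    if strict:
        return from_domain == authenticated_domain
    return from_domain.split(".")[-2:] == authenticated_domain.split(".")[-2:]

# ieee.org in "From:", but the message actually sent via optimum.net:
print(dmarc_aligned("ieee.org", "optimum.net"))      # False -> likely rejected
print(dmarc_aligned("optimum.net", "optimum.net"))   # True  -> delivered
```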

This difficulty was first noticed a few months ago with one recipient of my messages, but it has since spread, like some kind of disease, to other recipients as well. The utility of the IEEE alias itself has therefore been very much diminished.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post An e-mail delivery problem appeared first on EDN.

Three-level buck controllers boost USB-C efficiency

Thu, 08/21/2025 - 21:36

Two voltage controllers from Renesas feature a three-level buck topology for battery charging and voltage regulation in USB-C systems. With wide input and output voltage ranges, the RAA489300 and RAA489301 are well-suited for multiport USB-PD chargers, portable power stations, robots, drones, and other high-efficiency DC/DC applications.

The three-level topology adds two switches and a flying capacitor to a conventional buck converter. The capacitor lowers voltage stress on the switches, enabling the use of lower-voltage FETs with better efficiency and reducing conduction and switching losses. It also allows a smaller inductor, with ripple only about 25% of that in a two-level design, further cutting inductor losses.
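The roughly-25% ripple figure can be reproduced from first principles: the flying capacitor means the inductor sees Vin/2 voltage steps at twice the effective switching frequency. A back-of-envelope sketch under ideal continuous-conduction assumptions (the component values are illustrative, not taken from the Renesas datasheet):

```python
def ripple_two_level(vin, vout, L, fsw):
    """Peak-to-peak inductor ripple of a conventional buck, D = Vout/Vin."""
    d = vout / vin
    return (vin - vout) * d / (L * fsw)

def ripple_three_level(vin, vout, L, fsw):
    """Three-level buck for D < 0.5: Vin/2 steps, twice per period."""
    d = vout / vin
    assert d < 0.5, "this expression assumes D < 0.5"
    return (vin / 2 - vout) * d / (L * fsw)

VIN, L, FSW = 20.0, 2.2e-6, 400e3   # 20 V input, 2.2 uH, 400 kHz

# Worst-case ripple occurs at D = 0.5 (two-level) and D = 0.25 (three-level)
worst2 = ripple_two_level(VIN, VIN * 0.5, L, FSW)
worst3 = ripple_three_level(VIN, VIN * 0.25, L, FSW)
print(f"two-level worst ripple:   {worst2:.2f} A")
print(f"three-level worst ripple: {worst3:.2f} A ({worst3 / worst2:.0%})")
```

The 4:1 worst-case ratio is where the roughly-25% claim comes from; at other operating points the advantage varies, but the three-level stage always permits a smaller inductor for the same ripple target.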

In addition to the three-level buck configuration, the controllers offer passthrough mode in both forward and reverse directions, enabling high efficiency when input and output voltages are equal. Key parameters are programmable via an SMBus/I²C-compatible interface.

The devices differ in voltage range and switching frequency. The RAA489300 operates from an input of 4.5 V to 57.6 V and an output of 3 V to 54.912 V, with a programmable switching frequency up to 400 kHz (800 kHz at the switching node). The RAA489301 supports an input of 4.5 V to 24 V and output of 3 V to 21 V, with a programmable frequency up to 367 kHz (734 kHz at the switching node).

The RAA489300 and RAA489301 are available now in 4×4 mm, 32-lead TQFN packages.

RAA489300 product page 

RAA489301 product page 

Renesas Electronics

The post Three-level buck controllers boost USB-C efficiency appeared first on EDN.

Early access opens for BittWare 3U VPX cards

Thu, 08/21/2025 - 21:36

BittWare’s early access program helps customers speed development of systems using its upcoming 3U VPX cards with AMD Ryzen processors and Versal SoCs. Launching later this year, these ruggedized, SWaP-optimized cards support mission-critical aerospace and defense applications.

The 3U VPX products integrate Ryzen x86 embedded CPUs with Versal RF and Gen 2 adaptive SoCs for high-speed signal capture and real-time multi-sensor processing. They comply with Sensor Open Systems Architecture (SOSA) and VITA 48 standards, meeting the mechanical and cooling requirements for high-reliability deployments. The cards are well-suited for radar, sensor fusion, electronic warfare, signals intelligence, UAVs, and image-processing workloads.

Customers can apply for early access to gain exclusive technical details, roadmap visibility, and direct engagement with experts on next-generation designs. NDAs with both AMD and BittWare are required.

BittWare

The post Early access opens for BittWare 3U VPX cards appeared first on EDN.

GaN transistors drive long-pulse radar

Thu, 08/21/2025 - 21:35

Ampleon has launched four 700-W GaN-on-SiC RF transistors for S-band radar systems, operating between 2.7 GHz and 3.5 GHz. The CLS3H2731 and CLS3H3135 series leverage a radar-optimized GaN-on-SiC platform that combines frequency-specific design, long-pulse support, and robust thermal performance—features beyond standard GaN transistors.

The transistors span two frequency ranges: 2.7 GHz to 3.1 GHz and 3.1 GHz to 3.5 GHz. Devices in each range are offered in flanged (SOT502A) and leadless ceramic (SOT502B) packages to meet diverse mechanical and thermal requirements.

Internally pre- and post-matched, the transistors offer high input impedance and support pulse lengths up to 300 µs with duty cycles of 10–20%. Low thermal resistance ensures reliable operation under high duty cycles. Designed for advanced radar transmitters, they are well-suited for air traffic control, ground and naval defense, weather monitoring, surveillance, and particle acceleration.
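For a sense of the thermal load behind those numbers, average output power scales directly with duty cycle. A quick worked example using the figures above (700 W peak, 300-µs pulses, 10-20% duty):

```python
P_PEAK_W = 700.0     # peak output power
PULSE_S = 300e-6     # maximum pulse length

def radar_averages(duty):
    """Average output power and pulse repetition frequency at max pulse length."""
    return P_PEAK_W * duty, duty / PULSE_S

for duty in (0.10, 0.20):
    p_avg, prf = radar_averages(duty)
    print(f"duty {duty:.0%}: average power {p_avg:.0f} W, PRF {prf:.0f} Hz")
```

So even at the 20% upper limit, the device averages 140 W out while delivering 700-W pulses, which is why the low thermal resistance matters for sustained operation.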

Now in mass production, the RF transistors are available through Ampleon’s global distributors, including DigiKey and Mouser.

Ampleon

The post GaN transistors drive long-pulse radar appeared first on EDN.

Clock buffers pair low jitter with I/O flexibility

Thu, 08/21/2025 - 21:35

Operating from DC to 3.1 GHz, the SKY53510, SKY53580, and SKY53540 clock buffers from Skyworks provide 10, 8, and 4 outputs, respectively. These low-jitter devices support high-speed communication infrastructure, including data centers, 5G networks, and PCIe 7.0.

Each device integrates a 3:1 input multiplexer that accepts two universal inputs—compatible with LVPECL, LVDS, S-LVDS, HCSL, CML, SSTL, and HSTL—as well as a crystal input (also usable with a single-ended clock). The inputs support slew rates down to 0.75 V/ns. Differential outputs are arranged in two banks, with each bank independently selectable as LVPECL, LVDS, HCSL, or tristate and powered by its own 1.8-V, 2.5-V, or 3.3-V supply.

The buffers achieve low additive jitter, specified at 35 fs typical (47 fs max) at 156.25 MHz and 3 fs at 100 MHz for PCIe 7. Multiple on-chip LDO regulators provide >70 dBc PSRR in noisy environments, while a -166 dBc/Hz noise floor allows operation with Synchronous Ethernet (SyncE) at 156.25 MHz.
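In a clock tree, a buffer’s additive jitter combines with the source clock’s own jitter in root-sum-of-squares fashion, assuming the noise sources are uncorrelated. A sketch with an assumed 80-fs RMS oscillator (hypothetical; only the 35-fs buffer figure comes from the spec above):

```python
import math

def total_jitter_fs(*contributors_fs):
    """RSS combination of uncorrelated RMS jitter contributions, in fs."""
    return math.sqrt(sum(j * j for j in contributors_fs))

source_fs = 80.0   # hypothetical oscillator RMS jitter
buffer_fs = 35.0   # SKY535xx typical additive jitter at 156.25 MHz
print(f"output jitter: {total_jitter_fs(source_fs, buffer_fs):.1f} fs RMS")
```

Because the contributions add in quadrature, the 35-fs buffer degrades an 80-fs source by less than 10%.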

Samples and production quantities of the SKY53510/80/40 clock buffers are available now.

SKY53510 product page 

SKY53580 product page 

SKY53540 product page 

Skyworks Solutions 

The post Clock buffers pair low jitter with I/O flexibility appeared first on EDN.
