EDN Network

Voice of the Engineer
Updated: 1 hour 1 min ago

Smart factory: The rise of PoE in industrial environments

8 hours 45 min ago

As industrial environments rapidly evolve with the integration of operational technology (OT) and information technology (IT), the demand for seamless connectivity and reliable power delivery has never been higher. The proliferation of smart devices, such as sensors, controllers, cameras and robotic arms, has made data indispensable to modern factories and process industries.

To meet the increased demand, more industrial IoT (IIoT) device manufacturers are turning to Power over Ethernet (PoE) as a preferred solution, leveraging its unique ability to deliver both power and data over a single cable. This convergence is enabling smarter, more flexible and efficient industrial operations, while simplifying deployment and maintenance for end users.

Figure 1 Industrial environments are increasingly integrating operational and information technologies. Source: Microchip

What’s Power over Ethernet (PoE)?

Power over Ethernet (PoE) is a technology that allows electrical power and data to be transmitted simultaneously over standard Ethernet cabling. It was first introduced by PowerDsine in 1998; the company was later acquired by Microchip Technology. The Institute of Electrical and Electronics Engineers (IEEE) ratified the first PoE standard, IEEE 802.3af, in 2003.

PoE was initially developed to power devices like IP phones and wireless access points without the need for separate power supplies. Since then, PoE standards have evolved through IEEE 802.3af/at/bt, supporting higher power levels and a broader range of devices and making PoE a cornerstone of modern networking, including industrial automation and IIoT deployments.

Why IIoT manufacturers are turning to PoE

For IIoT device manufacturers, PoE offers a host of compelling benefits. PoE simplifies deployment by combining power and data in a single cable, eliminating the need for separate electrical wiring and reducing installation complexity and cost. It enables flexible placement of devices, allowing installation in remote, hard-to-reach, or hazardous locations where traditional power sources may be unavailable or cost-prohibitive.

PoE also supports unified network architecture, streamlining network design and making it easier to scale and adapt to changing operational needs. Reliability and compliance are enhanced, as standards-based PoE delivers safe, low-voltage DC power, supporting regulatory compliance and minimizing electrical hazards.

Additionally, offering PoE-powered devices can provide manufacturers with a competitive advantage in a crowded market by delivering a more convenient, integrated solution to customers.

Overcoming PoE deployment challenges in industrial settings

Despite its advantages, deploying PoE in industrial environments is not without challenges. One of the primary obstacles is the limited availability of PoE-enabled network infrastructure. Many existing industrial networks lack PoE switches, and even when available, these switches may not provide sufficient power on every port to support all connected devices.

The cost and complexity of upgrading network infrastructure can be prohibitive, especially in legacy facilities. Other challenges include limited access to power, as not all areas of a factory or plant have easy access to network cabling or power outlets, making device placement difficult. The high cost of power delivery can also be a concern, as retrofitting facilities to support PoE can be expensive and disruptive.

Compatibility concerns must be addressed to ensure that PoE-powered devices work seamlessly with existing network equipment, avoiding downtime and support issues. Finally, scalability is a challenge: as the number of connected devices grows, so does the demand for reliable, scalable power solutions.

Introducing PoE midspans: Supplementing network power

To address the challenge of limited PoE-enabled infrastructure, many industrial facilities are turning to PoE midspans, also known as injectors, to supplement network power where it does not exist. A PoE injector is a device that sits between an Ethernet port that is not supplying PoE and the powered device, injecting power into the Ethernet cable so that both data and power are delivered to the endpoint.

This approach allows manufacturers and customers to deploy PoE-powered IIoT devices without the need to replace existing switches or overhaul network architecture, making it a cost-effective and scalable solution for expanding PoE coverage in industrial environments.

Figure 2 PoE midspans inject power into the Ethernet cable. Source: Microchip

PoE industrial injectors vs. standard indoor injectors

While standard indoor PoE injectors are suitable for office or commercial settings, industrial environments demand more robust solutions. PoE industrial injectors are specifically designed to withstand the harsh conditions often found in factories, processing plants, and outdoor installations.

These injectors feature ruggedized construction, enabling reliable operation in environments with extreme temperatures, humidity, dust, and vibration. They support an extended temperature range, ensuring consistent performance in both hot and cold conditions.

Enhanced safety and compliance are also critical, as industrial injectors meet stringent safety and regulatory standards, providing low-voltage, standards-compliant DC power that minimizes electrical hazards. Industrial PoE injectors support higher power levels—such as IEEE 802.3bt up to 90 W—to accommodate demanding devices and are designed with robust surge protection, which is essential in industrial environments where electrical surges from machinery or harsh conditions are more common.

Flexible mounting options, such as DIN rail, wall, or rack installations, accommodate diverse deployment scenarios. Reliability and longevity are ensured through components and enclosures designed for continuous operation, providing long-term durability and minimal maintenance. These features are essential for maintaining uptime, safety, and performance in industrial settings, where environmental challenges and operational demands are far greater than in typical office environments.

Figure 3 A visual comparison between a standard indoor midspan (above) and an industrial midspan (below). Source: Microchip

What to look for in a PoE solution provider

For IIoT device manufacturers and customers deploying PoE-powered devices, selecting the right PoE solution provider is critical. Proven compatibility is essential; the provider’s injectors should be tested and validated for seamless operation with a wide range of industrial devices, reducing the risk of downtime and support issues.

Flexible power options are important, with support for various power levels and device types to meet diverse application needs. Reliability and compliance should be prioritized, ensuring solutions meet industry standards for safety and performance, supporting regulatory requirements and minimizing risk.

Ease of installation is also key, with plug-and-play solutions that leverage existing Ethernet cabling to simplify deployment and reduce installation time. Rugged design is necessary for industrial-grade injectors, offering robust construction and extended temperature ranges for reliable operation in challenging environments.

Finally, strong technical support and post-sale service from the provider can help resolve compatibility issues and ensure long-term satisfaction. By prioritizing these features, manufacturers and customers can ensure successful, scalable, and reliable PoE deployments in industrial environments, unlocking the full potential of smart IIoT devices.

Alan Jay Zwiren is senior marketing manager of Microchip Technology’s Networking and Connectivity Business Unit.

Special Section: Smart Factory

The post Smart factory: The rise of PoE in industrial environments appeared first on EDN.

AFE ICs accelerate industrial image scanning

Thu, 04/23/2026 - 17:46

Cirrus Logic has launched the CS82L4x series of analog front-end (AFE) chips for CIS and CCD sensors in scanners and industrial imaging platforms. Based on a redesigned SAR ADC architecture, the devices are said to offer faster scan times and enhanced efficiency, while an integrated RGB LED driver reduces design complexity.

The CS82L41, CS82L44, and CS82L46 provide one, four, and six channels, respectively, with a conversion rate of 24 Msamples/s per channel. With 16-bit resolution, the AFE ICs convert LED reflections from scanned objects into accurate digital representations. Per-channel signal conditioning includes reset level clamping, correlated double sampling, and programmable polarity, gain, and offset adjustment.

Operating from a 3.3-V supply, the CS82L4x series provides a scalable platform for multi-lens and multichannel scanning architectures across a range of imaging systems. The CS82L41 features an SPI control interface with CMOS output. The CS82L44 and CS82L46 offer SPI or I²C control interfaces, CMOS or LVDS outputs, and integrated sensor timing generation. All devices operate over a temperature range of −40°C to +85°C and come in QFN packages.

Samples are available now from Cirrus.

CS82L41 product page

CS82L44 product page

CS82L46 product page

Cirrus Logic 

The post AFE ICs accelerate industrial image scanning appeared first on EDN.

Compact inductors meet tight layout demands

Thu, 04/23/2026 - 17:45

Power inductors in Vishay’s IHLP1212-EZ-1Z series come in low-profile 1212-size packages suited for space-constrained commercial applications. With a 3×3-mm footprint and profile options of 1.2 mm, 1.5 mm, and 2.0 mm, their electrical performance is comparable to larger devices.

The series includes 24 devices with typical DC resistance from 8.6 mΩ to 50.4 mΩ and inductance values from 0.22 µH to 3.3 µH. Rated saturation current reaches 14.3 A, while heating current extends to 11.1 A. Operating over a temperature range of −55°C to +125°C, the inductors are designed to handle high transient spikes without saturation.

IHLP1212-EZ-1Z inductors feature a powdered iron body that completely encapsulates the windings, eliminating air gaps and providing magnetic shielding to reduce crosstalk with nearby components. Their composite construction also offers strong resistance to thermal shock, moisture, and mechanical stress.

Designed for low-profile DC/DC converters, the inductors enable energy storage, noise suppression, and filtering across industrial, consumer, telecom, and medical applications. Samples and production quantities are available with lead times of 10 weeks.

IHLP1212-EZ-1Z product page

Vishay Intertechnology 

The post Compact inductors meet tight layout demands appeared first on EDN.

Signal generators enable Pulsar signal testing

Thu, 04/23/2026 - 17:43

A software option for Rohde & Schwarz vector signal generators supports Pulsar signal simulation testing in production settings. Pulsar is Xona Space Systems’ planned LEO satellite constellation for high-precision positioning, navigation, and timing (PNT) services. R&S SMBV100B and SMW200A generators equipped with the software allow engineers and manufacturers to test receiver compatibility as the constellation enters scaled deployment.

“Pulsar is designed to upgrade the global navigation infrastructure while remaining compatible with GNSS devices already in use today,” said Bryan Chan, co-founder and VP of strategy at Xona Space Systems. “Test and measurement solutions play an important role in enabling device manufacturers to evaluate compatibility as new signals become available. Rohde & Schwarz brings deep expertise in precision signal generation that helps make this possible.”

The SMBV100B and SMW200A vector signal generators will soon join Pulsar’s verified ecosystem program, which recognizes devices and test systems validated for compatibility with Pulsar signals.

Rohde & Schwarz 

The post Signal generators enable Pulsar signal testing appeared first on EDN.

Core Series 3 scales AI to entry PCs, edge

Thu, 04/23/2026 - 17:41

Intel has introduced its Core Series 3 mobile processors targeting budget laptops and essential edge devices. Built on the same 18A process node as the Core Ultra Series 3 platform, they are described as the first “hybrid AI-ready” Core series processors, supporting AI workloads up to 40 TOPS at the platform level.

The processor lineup includes seven variants, one without an NPU. Compared with five-year-old PCs, Core Series 3 delivers up to 47% higher single-thread performance and 2.8× higher GPU-based AI performance, based on Intel’s internal benchmarks. Beyond laptops, it brings these gains to edge deployments such as robotics, smart buildings, POS terminals, and smart metering.

According to Intel, Core Series 3 is designed for all-day battery life, with up to 64% lower processor power consumption. The devices support high-speed connectivity, including up to two Thunderbolt 4 ports, Wi-Fi 7 (R2), and Bluetooth 6. They also support up to 48 GB of LPDDR5X memory at 7467 MT/s or up to 64 GB of DDR5 memory at 6400 MT/s.

Core Series 3-based consumer and commercial systems will be available from OEM partners starting April 2026, with edge systems following in Q2 2026. An in-depth overview of Core Series 3 is available here.

Core Series 3 product page 

Intel

The post Core Series 3 scales AI to entry PCs, edge appeared first on EDN.

Anker brings on-device AI to earbuds

Thu, 04/23/2026 - 17:33

Anker Innovations has developed an AI audio chip for earbuds, called Thus, that uses NOR flash memory for compute-in-memory (CIM) processing. This approach supports several million model parameters across multiple workloads and delivers up to 150× more AI computing power for environmental noise cancellation compared with Anker’s previous flagship earphones.

NOR flash-based CIM reduces the required silicon footprint to about one-sixth that of SRAM-based alternatives, making it better suited for highly constrained consumer devices. Anker will integrate Thus into its upcoming Soundcore true wireless earbuds. The company also plans to bring neural-network AI to additional consumer devices, including mobile accessories and IoT devices.

The AI processor’s first disclosed feature, Clear Calls, improves voice clarity on calls by isolating the speaker’s voice from background noise. Unlike conventional environmental noise cancellation, which can struggle in loud environments, it uses an on-device neural network supported by eight MEMS microphones and two bone conduction sensors to separate speech from ambient sound. The result is clearer calls in challenging environments such as airports, bars, and busy streets.

Full product details will be announced at Anker Day on May 21, 2026, in New York.

Anker Innovations

The post Anker brings on-device AI to earbuds appeared first on EDN.

Linearly variable two-wire loop current generator

Thu, 04/23/2026 - 15:00

Circuits such as the design described here make useful tools for a variety of calibration and testing applications.

A two-wire loop current generator is a useful tool for the testing, calibration, and commissioning of current-to-pressure (I/P) converters connected to control valves, actuators, etc. in process industries. Such a product can also help calibrate the analog input modules of distributed control systems (DCSs) and programmable logic controllers (PLCs) by simulating process signals.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In these and other applications, it is advantageous to generate a loop current which is linearly variable for precisely setting the desired current. A Design Idea published in EDN’s December 10, 2025 issue, although compact and otherwise excellent, does not support linearly variable current, since the output current relationship is Io=1.24/R1. R1 is adjusted to vary the output current, but since it is in the denominator of the equation, the resultant current variation is not linear.

Figure 1 describes a circuit where the variation of loop current is linear. Here, the loop current is directly proportional to the voltage set by potentiometer RV1. Moreover, this current can service a source or sink load up to 500 ohms without need for recalibration. These two requirements are essential for a loop current generator in process industries.


Figure 1 With this linearly adjustable two-wire current source, RV1 is adjusted to set the current, and either LOAD1 (source) or LOAD2 (sink) can be connected.

How does the circuit work? First connect a 24V DC supply, a DC ammeter and a load resistor—say, 200 ohms—at the source or sink side. In field applications, this portion is built into the I/P converter, DCS or PLC.

Two currents exist at pin 3 of U1A:

  • I1 = Vset/R5, set by the control voltage
  • I2 = (Io*R6)/(R4+R6), flowing through R4

Since U1A is an operational amplifier, its feedback forces the difference between these two currents to zero.

Io is the loop current. Hence Vset/R5 = (Io*R6)/(R4+R6). Rearranging, Io = (Vset/R5)*(1+R4/R6). Substituting the values, R4/R6 = 99. Hence, Io = (Vset/R5)*100.

Thus, Io is directly proportional to Vset which is adjustable linearly by RV1. A multiturn potentiometer selected for RV1 will enable smooth and precise adjustment.

Other comments, in closing:

  • U3 generates 5V DC.
  • Q1 and U1A adjust the loop current Io proportional to Vset.
  • R1 and Q2 set the current limit for Io at approximately 30 mA for safety reasons.
  • The loop current is settable from 0.5 mA to 23.5 mA, which is sufficient for this application.
  • For different current settings, select R3, R2 and R5 as per the equation given earlier for Io.
  • And Q1 requires a heat sink.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Linearly variable two-wire loop current generator appeared first on EDN.

The system architect’s sketchbook: The pickleball protocol

Thu, 04/23/2026 - 11:11

Deepak Shankar, founder of Mirabilis Design and developer of the VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.

The post The system architect’s sketchbook: The pickleball protocol appeared first on EDN.

The ASIC design remake in the AI era

Thu, 04/23/2026 - 09:39

The traditional ASIC design model—focusing on relatively stable standards and well-defined functions—is now under pressure. That’s partly because AI workloads are highly diverse, compute-intensive, and tightly coupled to software behavior and system context. Consequently, ASICs, besides being application-specific, are now increasingly becoming system-specific.

Take the case of a custom chip for LLM inference, where the prefill and decode stages are now running on separate chips. So, there are two ASICs instead of one: one for the compute-intensive part of the application (prefill) and one for the memory-bandwidth-limited part (decode). That shows how ASICs are increasingly becoming modular and disaggregated, with cross-domain collaboration spanning architecture, packaging, and manufacturing.

Read the full article at EDN’s sister publication, EE Times.

The post The ASIC design remake in the AI era appeared first on EDN.

The system architect’s sketchbook: GenZLens built in a dorm

Wed, 04/22/2026 - 17:40

Deepak Shankar, founder of Mirabilis Design and developer of the VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.

The post The system architect’s sketchbook: GenZLens built in a dorm appeared first on EDN.

BJT is accurate sensor for absolute temperature in Kelvin and Rankine

Wed, 04/22/2026 - 15:00

Simple math implemented in a (very) simple circuit. What’s not to like?

A very cool (also warm!) property of the base-emitter junction of (most) small signal BJTs is the ΔVbe temperature-sensing effect.  ΔVbe temperature measurement is aptly described and applied here by famed and forever remembered analog design guru Jim Williams (see page 7):

At room temperature, the Vbe junction diode shifts 59.16mV per decade of current. The temperature dependence of this constant is 0.33%/°C, or 198μV/°C. This ΔVbe versus current relationship holds, regardless of the Vbe diode’s absolute value.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Rearranging Williams’ math, since 198uV=1V/5050, 198μV/°C per current decade works out to (the easier to remember…ha!) ΔVbe/°C = Log10(Current-ratio)/5050.  So, if we need any given ΔVbe/°C, the required

Current-ratio = 10^(5050 ΔVbe/°C).

For example, for ΔVbe/°C = 100uV, Current-ratio = 10^(5050 * 100uV) = 10^(0.5050) = 3.20

Of course, this trick also works for Fahrenheit, albeit with a different scale factor.  Since 1 °F = 5/9 of 1°C, for Fahrenheit the corresponding Current-ratio = 10^(9090 ΔVbe/°F).  Therefore, for the 100uV example, if ΔVbe/°F = 100uV, then Current-ratio = 10^(9090 * 100uV) = 10^(0.9090) = 8.11

Figure 1 shows this simple math implemented in a (very) simple circuit:


Figure 1 An ordinary BJT Q1 makes an accurate absolute temperature sensor in two different units (K and R).

Here’s how it works. Switch U1a applies alternating current ratio drive to sensor Q1 per Williams’ method.  The ratio is (approximately) Current-ratio = (1/R1 + 1/R2)/(1/R2) = (R2/R1 + 1) = 3.20 for measurement in units of Celsius (Kelvin) and = 8.11 for Fahrenheit (Rankine).  The “approximately” thing comes in because the resistor ratio needed to be fudged (slightly) to compensate for the few 10s of mV of varying difference between V+ and Q1’s Vbe and thus make the current ratios accurately equal to the calculated values.

The resulting 100uVpp per degree AC signal is synchronously rectified by U1b and filtered by C3 to become the 100uV per degree of absolute temperature DC output signal suitable for direct input to a DMM.  A ~5kHz clock signal for current switching and rectification is provided by U1c, with a little help from one side of U1a.

Note that, per Williams’ analysis of the ΔVbe effect, accuracy of temperature measurement relies only on the accuracy of the current ratio and therefore on only the precision of R1 and R2.  No other reference is required or relevant and any 2N3904 will do. 

The V+ supply, for example, can vary from 3 to 6 volts without affecting accuracy.  Passive output impedance is roughly 10k, so loading by a typical 10M DMM input won't affect accuracy either.

Thanks, Jim!

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974.  They have included best Design Idea of the year in 1974 and 2001.

Related Content 

The post BJT is accurate sensor for absolute temperature in Kelvin and Rankine appeared first on EDN.

How to implement OTA firmware update on MCUs

Wed, 04/22/2026 - 10:31

Here is how design engineers can implement over-the-air (OTA) firmware updates for a microcontroller using the “staging + copy” method. The microcontroller—NXP’s RW612 in this design case study—relies on external serial flash. The article highlights the use of NXP’s ROM-resident FlexSPI API to safely erase and program the flash without bricking the device.

Figure 1 RW612 is a wireless microcontroller with an Arm Cortex-M33 application core. Source: NXP

The OTA process involves downloading the new firmware into a secondary staging partition, verifying it, and then copying it to the active partition upon reboot. The article also points to a practical, production-ready example for developers.

For a practical application of OTA implementation, check the complete video tutorial that explains how to implement a remote firmware update. In this video, we use the NXP FRDM-RW612 development board with Mongoose Wizard, but the same method applies to virtually any other NXP microcontroller.

OTA firmware update

If you are looking for a practical OTA firmware update example, this article shows a simple “staging + copy” method on the NXP RW612 microcontroller using external FlexSPI flash. It matches what the FRDM-RW612 board setup looks like in real life, and it points to the exact Mongoose source file (ota_rw612.c) that implements the flow.

Figure 2 The FRDM-RW612 development board is designed for rapid prototyping with the RW61x family of wireless microcontrollers. Source: NXP

OTA firmware updates let you ship fixes and features without asking users to plug in a debugger. On Wi-Fi MCUs, such as NXP RW612, OTA is also one of the first things you want because it unlocks faster iteration during development.

There is no single “correct” OTA design. Different products pick different strategies depending on flash size, how paranoid you are about power loss, and how strict your security requirements are. Here are a few common patterns you will see in the wild:

  • In-place update (single slot): Download the new firmware and overwrite the currently running image. This uses the least flash, but it has the highest risk; if power is lost while you erase or program, you may brick the device unless a bootloader can recover.
  • Staging + copy: Download the new image into a staging area (an “inactive” region), verify it, and then copy it over the active firmware region. This is a very common and practical method because the device keeps running the old firmware while the download happens, and you only switch after you have a complete, verified image.
  • A/B (dual slot): Split flash into two full firmware slots and select which one to boot. It’s viable when you can afford the space, because rollback can be as simple as flipping a flag. It does, however, require enough flash for two complete images plus metadata.
  • Delta updates: Download only the binary diff from the old version to the new version and reconstruct on the device. Great for saving bandwidth, but the tooling and edge cases can get complicated fast.

In this article, we focus on the staging + copy approach because it’s easy to reason about, does not require two complete bootable slots, and maps nicely onto RW612 designs with external serial flash.

A minimal staging + copy flow looks like this:

  1. Reserve a staging region in external flash plus a tiny metadata area.
  2. While running the current firmware, download the new firmware into the staging region.
  3. Verify the staged image (signature and/or CRC, size checks, and version rules).
  4. Reboot into a small bootloader or early-boot update routine.
  5. Copy the staged image over the active firmware region, update metadata, and then boot the new firmware.

Here is a practical note: the easiest way to create a staging area is to split the external flash into two partitions. You keep the active firmware in the first partition and use the second partition as the staging area for the download. After verification, you copy from the second partition back into the active region during reboot.

If power is lost during the download, you still have the old firmware. If power is lost during the final copy, a well-designed bootloader can retry the copy or fall back to a known-good image (depending on your layout and policy). Either way, the goal is the same: avoid bricking devices.
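One way to make that recovery decision explicit is a small metadata state machine consulted early in boot. All names below are hypothetical illustrations, not fields from any NXP or Mongoose header:

```c
#include <stdint.h>

/* Illustrative OTA metadata record; the field and state names here are
 * hypothetical, not taken from any NXP or Mongoose header. */
typedef enum {
    OTA_IDLE       = 0, /* no update pending; boot the active image   */
    OTA_DOWNLOADED = 1, /* staged image is complete and verified      */
    OTA_COPYING    = 2, /* copy to the active region has started      */
    OTA_DONE       = 3  /* copy finished; new image is the active one */
} ota_state_t;

typedef struct {
    uint32_t magic;  /* identifies a valid metadata record */
    uint32_t state;  /* one of ota_state_t                 */
    uint32_t size;   /* staged image size in bytes         */
    uint32_t crc32;  /* CRC of the staged image            */
} ota_meta_t;

/* Boot-time decision: returns 1 if the staged image must be (re)copied
 * to the active region before booting, 0 otherwise. */
static int ota_boot_needs_copy(const ota_meta_t *m) {
    switch ((ota_state_t)m->state) {
    case OTA_DOWNLOADED: return 1; /* normal update path                   */
    case OTA_COPYING:    return 1; /* power was lost mid-copy: redo/resume */
    default:             return 0; /* nothing pending                      */
    }
}
```

Because the OTA_COPYING state maps to the same action as OTA_DOWNLOADED, a reset during the final copy simply causes the copy to run again from a known-good staged image.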

External flash and FlexSPI ROM API

A key RW612 detail that influences OTA design: RW612 does not have built-in internal flash for your application image. Instead, designs typically use external serial NOR flash connected over FlexSPI. The FRDM-RW612 development board, for example, includes external serial flash (Winbond) on the board. That means your OTA code ultimately needs to erase and program external NOR flash.

The nice part is that NXP provides a ROM-resident API that can operate that flash through FlexSPI. In the MCUXpresso SDK documentation, you will see this described as the ROM API driver for external NOR flash connected to the FlexSPI controller, with support for initialize, program, and erase operations.

Why a ROM API matters: when you update flash, you want the programming logic to be as reliable as possible. ROM-resident routines are not stored in external flash, so they can still run safely while you are erasing and programming the external device.

Here are references for RW612 and FlexSPI ROM API (MCUXpresso SDK):

  • RW612 datasheet (notes off-chip XIP flash and FlexSPI interface)

https://www.nxp.com/docs/en/data-sheet/RW612.pdf

  • FRDM-RW612 board user manual (mentions external serial flash on the board)

https://www.mouser.com/pdfDocs/NXP_FRDM-RW612_UM.pdf

  • MCUXpresso SDK ROMAPI driver reference (external NOR over FlexSPI)

https://mcuxpresso.nxp.com/api_doc/dev/2349/a00044.html

  • MCUXpresso SDK romapi examples index

https://mcuxpresso.nxp.com/mcuxsdk/25.03.00/html/examples/driver_examples/romapi/index.html

  • MCUXpresso SDK romapi_flexspi example readme

https://mcuxpresso.nxp.com/mcuxsdk/25.03.00/html/examples/driver_examples/romapi/flexspi/readme.html

  • MCUXpresso SDK fsl_romapi example readme

https://mcuxpresso.nxp.com/mcuxsdk/latest/html/examples/driver_examples/fsl_romapi/readme.html

Practical layout tip for RW612 OTA: treat the external flash as your update playground. Reserve space for the active firmware, a staging region, and a small metadata area that records the update state. Keep the metadata redundant (two copies, versioned records, or a simple log) so you can survive an interrupted write.

Mongoose OTA example

If you want something you can build and run quickly, Mongoose includes a working RW612 OTA implementation that demonstrates the staging + copy method on the FRDM-RW612 board. The walkthrough video is at the beginning of this article and the implementation lives in https://github.com/cesanta/mongoose/blob/master/src/ota_rw612.c.

At high level, the Mongoose RW612 OTA example does three jobs:

  1. Receive the new firmware image over the network

The transport can be HTTP, HTTPS, or whatever your product uses. In a typical Mongoose setup, you stream the incoming bytes straight to the staging region in external flash, so you don’t need a giant RAM buffer.

  2. Write the new image into the staging region using the FlexSPI ROM API

The OTA code erases the destination region (sector erase) and programs data (page program) as the download progresses. This is the part that is RW612-specific: you use the ROM API FlexSPI routines to safely erase and program the external serial NOR flash.

  3. Copy staged firmware to the active region and switch over

After the image is fully written and verified, you reboot. Early in boot, the update routine copies the staged image to the active firmware region using the same FlexSPI ROM API. Finally, metadata is updated, so the device knows the update is complete and the new firmware boots.

Below are a few practical details that are worth copying into your own RW612 OTA design:

  • Stream to flash

Do not buffer the whole image in RAM. Erase in sector-sized chunks and program in page-sized chunks as data arrives.

  • Verify before you copy

At minimum, store and check a CRC of the downloaded image. For production, verify a signature and enforce anti-rollback rules if needed.

  • Make the update state robust

Store update metadata in a small, dedicated region (for example, “no update”, “downloaded”, “copy in progress”, and “done”). Consider writing metadata as an append-only record or keep two copies and alternate between them so you can recover from a power cut during the metadata update.

  • Handle power loss during the final copy

A common trick is to mark “copy in progress” before you start copying; then if the device reboots unexpectedly, the boot code can resume the copy from where it left off or restart it safely. Another trick is to copy in fixed chunks and persist progress.
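That chunk-and-persist idea can be sketched in a few lines of C. Everything here is illustrative: the helper names are hypothetical stand-ins (on RW612 they would wrap the FlexSPI ROM API erase/program/read calls), and a RAM array plays the role of flash so the sketch is self-contained:

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 64u  /* small for the demo; real NOR sectors are typically 4 KB */

/* RAM-backed stand-ins for the real flash routines. */
static uint8_t  fake_flash[2 * 1024];
static uint32_t progress;  /* would live in the metadata region */

static int flash_erase_sector(uint32_t a) { memset(fake_flash + a, 0xFF, SECTOR_SIZE); return 0; }
static int flash_program(uint32_t a, const uint8_t *d, uint32_t n) { memcpy(fake_flash + a, d, n); return 0; }
static int flash_read(uint32_t a, uint8_t *d, uint32_t n) { memcpy(d, fake_flash + a, n); return 0; }
static int meta_save_progress(uint32_t done) { progress = done; return 0; }

/* Copy the staged image to the active region one sector at a time,
 * persisting progress so an interrupted copy can resume after reset. */
static int ota_copy_resume(uint32_t staging, uint32_t active, uint32_t size,
                           uint32_t resume_from) {
    static uint8_t buf[SECTOR_SIZE];
    for (uint32_t off = resume_from; off < size; off += SECTOR_SIZE) {
        uint32_t n = size - off < SECTOR_SIZE ? size - off : SECTOR_SIZE;
        if (flash_read(staging + off, buf, n)) return -1;
        if (flash_erase_sector(active + off)) return -1;
        if (flash_program(active + off, buf, n)) return -1;
        if (meta_save_progress(off + n)) return -1;  /* survive power loss */
    }
    return 0;
}
```

After an unexpected reset, boot code reads the saved progress value and calls ota_copy_resume() with it as resume_from, picking up at the next whole sector instead of restarting the copy from zero.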

If you just want to see it working, start with the demo video, then open ota_rw612.c and trace the flow: where the image bytes land in external flash (staging), how the ROM API based erase and program calls are made, and how the staged image is copied over the active region during reboot.

That’s how RW612 OTA is done in a way that is simple, resilient, and easy to productize.

Sergey Lyubka is director at Cesanta Software.

Related Content

The post How to implement OTA firmware update on MCUs appeared first on EDN.

What’s the impact of AI on analog design

Tue, 04/21/2026 - 16:50

It seems any expert who can spell “AI” has an opinion on its potential impact. There are countless predictions out there, many made with precision and confidence, and they are often contradictory.

Depending on who you listen to, AI will cause widespread disruption and unemployment, especially at entry and lower-middle levels; open up new vistas and ways of working that leave us with hardly any work to do; or make us all work harder just to stay in place…you get the picture. Whatever answer you want, you can find someone who has provided it.

I’ll jump in and give you my prediction on the impact of AI, with a two-part answer. First, I don’t know, and second, neither does anyone else.

If you look back at the track record of predictions about how past technical advances would unfold, one thing is clear: Most of these predictions underestimate or overestimate the reality, and most miss the actual nature of the change that these advances spur.

AI and analog: Round 1

Initially, I thought of doing a “thought experiment” about analog design and AI, going beyond the issues of general analog considerations. But then it made more sense to look at some of the specific stages of analog design, from ICs to circuits and systems, and all the way to final documentation.

However, I soon realized it was a swamp. There were so many perspectives, so many considerations, and so many exceptions that it would take a lengthy treatise rather than a modest blog to begin to highlight the possibilities. The only meaningful possibility I could think of was using AI to help a beleaguered designer with “best” component selection.

For example, this might be the task of choosing an op amp that fits the application priorities from among the dozens of vendors and thousands of models. Going further, AI might even help with some trade-off decisions (“show me an op amp that has 10% more dissipation than my stated maximum, if it gives me a 20% improvement in noise”).
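That sort of trade-off query is, at bottom, a constrained search over a parts table. A toy C version, with invented part names and specifications:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *part;
    double diss_mw;    /* dissipation, mW */
    double noise_nv;   /* input noise density, nV/rtHz */
} opamp_t;

/* Invented parts table -- in practice this would be thousands of entries */
static const opamp_t parts[] = {
    { "OPA-A", 4.0, 9.0 },
    { "OPA-B", 5.4, 6.5 },   /* slightly over budget, but much quieter */
    { "OPA-C", 6.5, 5.0 },   /* too hot even with the 10% allowance */
};

/* Return the quietest part within max_diss + 10%, provided it improves on
   baseline_noise by at least 20%; NULL if nothing qualifies. */
const opamp_t *pick(double max_diss, double baseline_noise) {
    const opamp_t *best = NULL;
    for (size_t i = 0; i < sizeof parts / sizeof parts[0]; i++) {
        const opamp_t *p = &parts[i];
        if (p->diss_mw <= max_diss * 1.10 &&
            p->noise_nv <= baseline_noise * 0.80 &&
            (best == NULL || p->noise_nv < best->noise_nv))
            best = p;
    }
    return best;
}
```

Here `pick(5.0, 9.0)` would select the hypothetical OPA-B: 8% over the dissipation budget but within the 10% allowance, and more than 20% quieter than the baseline. The interesting AI question is how to formulate such trade-offs in natural language, not the search itself.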

AI and analog: Round 2

I then asked myself if it would make sense to instead look at AI and analog from the opposite direction: how can AI help analog-centric systems—meaning those with real-world front-end sensors—do a better job or perhaps implement innovative architectures.

My question was answered when I came across a project from researchers at the University of California, Davis. They used a different approach to miniaturization of a spectrometer that reduced its size to the scale of a grain of sand. This compact spectrometer-on-a-chip is designed for integration into portable devices. Instead of separating light into a spectrum physically, the system relies on computational reconstruction.

Conventional spectrometers rely on dispersive elements such as diffraction gratings or prisms to spatially separate light into its constituent wavelengths. But this approach requires long path lengths and bulky designs to resolve individual wavelengths. The need to spatially disperse the light makes these delicate and expensive systems difficult to miniaturize, rendering them unsuitable for portable applications.

In contrast, the so-called reconstructive spectrometers use a unique set of numerous but compact photoresponsive detectors to directly encode the complex spectral information, which is later extracted using advanced computational algorithms. The team leveraged recent advances in machine learning and computational power, thus enabling further miniaturization toward chip-scale design with reduced manufacturing cost (Figure 1).

Figure 1 Working mechanisms of spectrometers include conventional spectrometers with uniform detector arrays that disperse the light spatially using diffraction gratings, which require long path lengths owing to their bulky nature (a). Then there are reconstructive spectrometers that utilize unique photodetectors capturing the minute variations in the incident light spectrum. The spectral information is then reconstructed using machine learning algorithms (b).

The chip replaces traditional optics with an array of 16 silicon detectors, each tuned to respond slightly differently to incoming light. Together, these detectors capture overlapping signals that encode the original spectrum, and they can provide wider bandwidth due to the use of staggered, tailored sensors for each spectrum slice. This process is similar to having multiple sensors sample different elements of a complex signal, with the complete picture emerging only after analysis.
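To make the encode-then-reconstruct idea concrete, here is a miniature C version with three hypothetical detectors and a 3-bin spectrum (the chip uses 16 detectors and many more bins). The paper solves the inverse problem with a neural network; a direct linear solve is used here purely to illustrate the principle, and the response numbers are invented:

```c
#include <assert.h>
#include <math.h>

#define N 3

/* Detector response matrix A: row i = detector i's sensitivity per
   spectral bin; rows overlap but are linearly independent. */
static const double A[N][N] = {
    { 1.0, 0.6, 0.2 },
    { 0.3, 1.0, 0.5 },
    { 0.1, 0.4, 1.0 },
};

/* Recover the spectrum x from detector readings y by solving A x = y
   with Gaussian elimination (no pivoting; this A is well-behaved). */
void reconstruct(const double y_in[N], double x[N]) {
    double M[N][N + 1];
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) M[i][j] = A[i][j];
        M[i][N] = y_in[i];
    }
    for (int k = 0; k < N; k++)               /* forward elimination */
        for (int i = k + 1; i < N; i++) {
            double f = M[i][k] / M[k][k];
            for (int j = k; j <= N; j++) M[i][j] -= f * M[k][j];
        }
    for (int i = N - 1; i >= 0; i--) {        /* back substitution */
        double s = M[i][N];
        for (int j = i + 1; j < N; j++) s -= M[i][j] * x[j];
        x[i] = s / M[i][i];
    }
}
```

In the real device, the responses are noisy and the problem is ill-posed, which is why the neural network outperforms plain matrix inversion.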

The analysis is performed using AI: reconstructing an unknown spectrum from the detector outputs is what's known as an inverse problem. The spectral reconstruction of the photon-trapping structures of the spectrometer is performed using a fully connected neural network that solves this inverse problem; an outline of the training and reconstruction process is shown in Figure 2.

Figure 2 Neural network model for spectral reconstruction shows demonstration of the training and reconstruction process of the neural network (a). Training and validation losses plotted against epoch show convergence of the model (b). The model is trained for 2,000 epochs with the loss function converging around 0.03, where comparison of spectral reconstruction uses matrix pseudo-inversion (c), linear combination of Gaussian functions (d), and neural network model (e).

The neural network model outperforms the other two methods in reconstructing the spectral profile of a 3-nm full width at half maximum (FWHM) laser peak. The root-mean-square error (RMSE) and Pearson’s R value (a correlation coefficient) for the neural network model are 0.046 and 0.87, respectively, indicating high accuracy in spectral reconstruction.

The training process involves learning the complex spectral encoding between the photocurrent of photon-trapping structure-enhanced photodetectors and their corresponding spectral information by back-propagating the loss function.

Their detailed modeling, analysis, and experimental results also demonstrated that this approach provided superior noise tolerance compared to traditional spectrometers despite the low photon intensity and small capture area. The fascinating story is presented in a highly readable paper “AI-augmented photon-trapping spectrometer-on-a-chip on silicon platform with extended near-infrared sensitivity” published in Advanced Photonics.

I’ll be honest: When I first saw this paper, my first, if somewhat cynical, thought was that this was just an attempt to dress up an old analog signal-chain technique with an AI “glow.” There are two basic ways to implement a precision sensor-based path. First, use top-grade components and various circuit topologies, such as matched resistors, to cancel errors to the extent possible. Second, use lesser components and just calibrate out the inaccuracies.

But as I continued to read their paper, I saw that the neural network method added a new level of sophistication and an ability to work through inherent weaknesses in the design and components to deliver an impressive result.

Where do you see AI helping, if at all, in the design cycle of an analog circuit or system? Perhaps by enabling new topologies for sensor-based systems that were previously not viable or practical?

Related Content

The post What’s the impact of AI on analog design appeared first on EDN.

Simple circuit interfaces differential capacitance sensor

Tue, 04/21/2026 - 15:00

This design based on an SR latch and two RC networks is, unlike many alternative solutions, neither complex nor expensive.

Single and differential capacitance sensors are widely used to measure linear and angle displacement, pressure, proximity, humidity, fluid level, inclination and acceleration. Both analog and digital circuits are used to interface the sensors (References 1-4). Some of the solutions tend to be complex and expensive (References 5-9).

Wow the engineering world with your unique design: Design Ideas Submission Guide

This Design Idea presents a very simple circuit to interface differential capacitance sensors (Figure 1). It is a relaxation oscillator made of an SR latch and two RC networks. When one of the capacitors is gradually charged through the corresponding resistor, the other capacitor is quickly discharged through a parallel switch. When the charging capacitor reaches the trip voltage VT of its gate, the latch changes its state. The other capacitor starts charging and the first one is quickly discharged. When the second charging capacitor reaches the trip level VT of its gate, the latch flips again returning to the initial state. The charge-discharge process repeats over and over again.


Figure 1 The sensor becomes part of a relaxation oscillator where one of the capacitors is charging when the other one is shorted; the two capacitors periodically swap their operation.

Signal VQ1 goes to a microcontroller, which measures time intervals t1 and t2 and calculates the average value VAVR = VDD * t1 / (t1 + t2). An offset needs to be subtracted from this value so that the result is zero when the two capacitors are equal. Thus, the average value will be positive when C1 > C2 and negative when C1 < C2.
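The microcontroller-side arithmetic is a one-liner. With equal capacitors, t1 = t2 and the duty cycle is 50%, so VAVR = VDD/2 is the offset to subtract; a sketch in C:

```c
#include <assert.h>
#include <math.h>

/* VAVR = VDD * t1 / (t1 + t2); subtracting VDD/2 zero-centers the
   output so it goes positive for C1 > C2 and negative for C1 < C2. */
double sensor_output(double vdd, double t1, double t2) {
    double v_avr = vdd * t1 / (t1 + t2);
    return v_avr - vdd / 2.0;
}
```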

Circuit operation was tested with a bank of ten 50-pF capacitors. The left side of Figure 2 shows connections to set a duty cycle of 20%; the right side of the figure sets the duty cycle of 90%.


Figure 2 Sensor operation is simulated with a bank of 10 capacitors.

Figure 3 presents how period T and duty cycle D = t1 / T depend on the value of C1. The period barely changes, staying between 96 and 98 µs, while the duty cycle is proportional to C1. A straight line fits the duty-cycle data perfectly (the R² factor equals 1); however, as Figure 4 shows, the line has a nonlinearity error of ±0.3%.


Figure 3 Circuit responses: at the top, the period is almost the same, below it, the duty cycle depends linearly on the value of C1.


Figure 4 The duty cycle response has a nonlinearity error of ±0.3 %.

The bump shape of the error graph means that a second-order polynomial may improve linearity. Indeed, the equation y = 1×10⁻⁵·x² + 0.182·x + 4.21 reduces the error to ±0.1%. Such an equation is easy to implement in the microcontroller firmware.
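Carried into firmware, the fit is just three constants:

```c
#include <assert.h>
#include <math.h>

/* Second-order correction from the text: y = 1e-5*x^2 + 0.182*x + 4.21 */
double linearize(double x) {
    return 1e-5 * x * x + 0.182 * x + 4.21;
}
```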

Jordan Dimitrov is an electrical engineer & PhD with 40 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.

Related Content

References

  1. Regtien P., E. Dertien. Sensors for mechatronics. 2nd ed., Ch. 5, Elsevier, 2018.
  2. Northrop R. B. Introduction to instrumentation and measurement. 3rd ed., CRC Press, 2014.
  3. Baxter L. Capacitive sensors. http://www.capsense.com/capsense-wp.pdf
  4. Differential capacitance pressure sensor circuit. https://instrumentationtools.com/differential-capacitance-pressure-sensor-circuit/
  5. Reverter F., O. Casas. Direct interface circuit for differential capacitive sensors. I2MTC 2008 – IEEE International Instrumentation and Measurement Technology Conference, Victoria, Vancouver Island, Canada, May 12-15, 2008.
  6. Barile G. et al. Linear integrated interface for automatic differential capacitive sensing. Proceedings 2017, 1, 592.
  7. Ferri G. et al. Automatic bridge-based interface for differential capacitive full sensing. 30th Eurosensors Conference, EUROSENSORS 2016. Procedia Engineering 168 (2016) 1585 – 1588.
  8. Bai Y. et al. Absolute position sensing based on a robust differential capacitive sensor with a grounded shield window. Sensors (Basel). 2016 May; 16(5): 680. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4883371/
  9. De Marcellis A., C. Reig, M. Cubells-Beltrán. A capacitance-to-time converter-based electronic interface for differential capacitive sensors. MDPI Electronics, Jan 2019.

The post Simple circuit interfaces differential capacitance sensor appeared first on EDN.

TP-Link’s Tapo H100: Smart sensing unencumbered

Mon, 04/20/2026 - 15:00

Three smart home hubs, from two different companies. All supporting both 2.4 GHz Wi-Fi and proprietary 900 MHz wireless links. How do they differ, and how are they similar? Let’s find out.

Last month, I told you about TP-Link’s Tapo Hubs and their functional similarity to Blink’s Sync Modules. And last week, I took apart Blink’s second-generation hub, comparing it to its premiere predecessor which’d gone “under the knife” nearly a decade earlier. Today, I’ll be dissecting the entry-level Tapo H100 hub I conceptually covered in late March.

How comparable (or not) is its design to those of its Blink competitors? Let’s dive in and see.

Smart hub brothers from different mothers?

I shared a full set of outer box shots last month; so to avoid redundancy, this time I’ll show only the perspective that’s different, since last month’s device remains in ongoing use while this one (with a different serial number) is intended (initially, at least) solely for dissection.

As usual, it’s accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Also note that, per the common “US/1.26” notation on the sticker found on the bottom of both boxes, this device and last month’s H100 are presumably based on the same hardware version.

Opening up the packaging, you’ll find a sliver of literature inside, with our patient below it.

The only constant is change

On the product support page I initially referenced earlier, you’ll also discover that there have been four hardware versions to date: v1.0, v1.2, my v1.26, and the subsequent (I’m assuming) v1.8. Attempts to mix-and-match divergent hardware, as I’ve noted before, can be problematic. That said, most households will contain only a single hub device (versus multiple sensors and other “smart” peripherals), minimizing the potential-problem set size in this particular case.

Before continuing, let’s revisit the backside of the device, this time zooming on the markings.

Notice what looks like a label stuck on top of part of the original info? That’s exactly what it is.

As it turns out, the FCC ID found on the backside markings (2AXJ4H100) was also later updated; it’s now 2BH7FH100. Are the two changes related? Dunno.

Time to dive inside, a task that, compared to TP-Link smart switches of (recent) past, was thankfully fairly straightforward this time around.

Inside the front half of the enclosure, you’ll find a speaker (used, for example, to implement the sound emitted when the hub is paired with, and activated by, a “smart” doorbell).

And the mechanical assembly for the pairing-and-reset switch is shown on one side, as seen earlier.

Categorizing the guts

Here, however, is the view that most of you are most interested in, I guess.

The bottom half of the PCB disconnected itself from the back half of the enclosure while I was prying apart the two halves.

Further bending back the PCB reveals how the AC “prongs” connect to it.

As well as the PCB backside itself.

The small five-lead IC in the middle, PCB-labeled U4, is marked:

TACeY1

Its identity is unknown to me (readers?). Below it, in a larger seven-lead package, is On-Bright Electronics’ OB2512NJP offline primary-side-regulation (PSR) power switch. Below that is an M7 high-voltage rectifier diode. And to its left is another (bridge and three-lead, this time) rectifier, Galaxy Microelectronics’ MBF10M.

Back to the PCB front side, after “un-popping” the PCB (putting it back in its normal place within the enclosure, which is upside down in both the prior-version and the following photo versus its normal orientation).

Note first the two antennae, one embedded and along the lower edge, the other discrete and along the right side. I assume one’s for 2.4 GHz Wi-Fi while the other supports TP-Link’s proprietary 900 MHz ISM band “ultra-low power wireless protocol”. Reader suggestions as to which is what are greatly appreciated in the comments.

In the upper right (again, lower left in normal operating orientation) is the status LED, which ends up shining out the device front cover. The pairing-and-reset switch is along the left side. The top half of the PCB, perhaps obviously given the sizeable transformer, houses the AC/DC conversion circuitry (the fact that the AC prongs are directly behind it at the rear of the device is another functional tipoff).

And, last but not least, the various ICs. In the lower right corner of the transformer is an Eon Silicon Solution EN56Q64-104HIP 64 Mbit serial flash memory, which we’ve seen before in both higher and lower capacities. I assume it houses the code for Realtek’s RTL8710CM SoC below and to its left, also found in the first two of the three TP-Link smart switches I’ve dissected so far. At the bottom, in the middle, is WayTronic’s WT588F02B audio DSP with an integrated DAC, which “can directly drive 8R 0.5W speakers”, an unsurprising function given the speaker connection directly to the left of it. Above and to the right of the audio DSP is another IC I can’t ID:

35UT
53C1

And above and to the left of the mono speaker connector is one final mystery:

300A
S992
515

Reader insights into any of the chips I was unable to identify, as well as broader thoughts on anything I’ve discussed here, are always welcome in the comments.

Brian Dipert is the associate editor, as well as a contributing editor, at EDN.

Related Content

The post TP-Link’s Tapo H100: Smart sensing unencumbered appeared first on EDN.

Electronic biosensing: A quick take on ketone detection

Mon, 04/20/2026 - 14:13

Ketone detection may sound like the domain of biochemistry, but at its core, it’s also an electronics challenge: how do we translate a chemical presence into a measurable electrical signal?

The key lies in the ability of circuits to convert molecular interactions into quantifiable outputs. Through principles like signal conversion, amplification, and conditioning, electronics transform invisible chemical activity into reliable data, making ketone monitoring practical and accurate while underscoring how deeply electronics shape modern health technologies.

Ketones: Small molecules, big impact

Ketone detection is crucial because these molecules act as direct indicators of how the body manages its energy balance. Moderate levels can reflect healthy states such as fasting, exercise, or adherence to ketogenic diets, while dangerously high concentrations may signal conditions like diabetic ketoacidosis that require urgent medical attention.

By providing timely and accurate measurements, ketone monitoring empowers individuals to optimize nutrition and performance and gives clinicians essential data to prevent and manage metabolic complications. In both everyday wellness and clinical care, reliable ketone tracking plays a decisive role in safeguarding health.

Overview of ketone detection sensors

Nowadays ketone detection has moved well beyond the lab bench and into lifestyle and wearable electronics. Compact analyzers are being built into fitness trackers, smartwatches, and portable health devices, giving users real-time insights into metabolism and diet. This evolution is powered by the fundamentals of electronics—miniaturization, low-power design, and signal processing—that make complex biochemical measurements practical in everyday life, turning health monitoring into a seamless part of daily routines.

While electronics provide the backbone for translating chemistry into measurable signals, the choice of sensor defines how ketones are detected. Electrochemical sensors generate currents via redox reactions, optical sensors capture variations in light absorption or fluorescence, and chemiresistive sensors—including semiconductor gas sensors—exploit surface-level conductivity shifts. Each technology offers a unique pathway from molecular interaction to electrical output, setting the stage for circuits to amplify, filter, and interpret the data with precision.

Ketone sensing: The gold standard and beyond

In practice, blood testing is the clinical gold standard, using the enzyme β-hydroxybutyrate dehydrogenase (HBDH) to generate a precise electrical signal from β-hydroxybutyrate (BHB). Note that a blood ketone meter functions as a miniaturized potentiostat; it maintains a fixed voltage across the biosensor to measure the current produced by this reaction, providing the data needed to distinguish safe ketosis from metabolic crisis.

Figure 1 Today’s multifunction blood meter kits provide a fast and reliable method for measuring β-ketone, blood glucose, and other parameters from fresh whole blood samples in just a few simple steps. Source: eLinkCare

However, the field is evolving beyond the invasive finger-prick. Researchers are now optimizing alternative biomarkers and delivery methods to bridge the gap between clinical accuracy and user convenience.

Exhaled breath analysis targets acetone—a volatile byproduct of fat metabolism. Current technologies, such as chemiresistive metal-oxide sensors, offer a high-frequency, non-invasive “proxy” for ketosis. While breath analysis currently lacks the clinical precision required for acute emergencies like diabetic ketoacidosis (DKA), it provides a sustainable, pain-free alternative for routine wellness tracking.

In a nutshell, ketone breath analyzers typically employ semiconductor-based, chemiresistive sensors to detect acetone—a byproduct of fat metabolism—in exhaled breath. These sensors function by measuring changes in electrical resistance triggered by volatile organic compounds (VOCs), which serves as a proxy for blood ketone concentration. High-end models often integrate CMOS technology to enhance both sensitivity and measurement precision.

Figure 2 Ketone breath analyzers and subcutaneous sensors deliver real-time feedback on ketosis levels. Source: Author

Continuous ketone monitoring (CKM) is an emerging technology that utilizes a small subcutaneous sensor—similar to a continuous glucose monitor (CGM)—to measure BHB levels in the interstitial fluid. By providing real-time data and automated alerts, these devices aim to detect rising ketone levels before they escalate into metabolic emergencies, effectively transitioning patient care from ‘spot-check’ diagnostics to continuous, proactive health management.

Note that a subcutaneous sensor is a tiny, flexible filament inserted into the fatty tissue just beneath the skin. By monitoring the interstitial fluid in this layer, the sensor uses enzymes to measure specific chemical markers—like glucose or ketones—and converts those readings into a continuous digital stream. Because it stays in place for several days and does not require venous access, it offers a painless, real-time alternative to repeated finger-prick testing.

Electronic biosensing for makers

To wrap this up, remember that while the medical industry uses highly proprietary, pre-calibrated systems, the underlying principle is a fantastic playground for makers.

Whether you are working with a glucose oxidase strip for blood sugar or a β-hydroxybutyrate strip for ketone levels, the principle is the same: enzyme-mediated reactions generate electrons that must be measured against a stable reference potential.

Once you master the transimpedance amplifier (TIA), you have essentially built the core of a professional-grade diagnostic instrument. In fact, most commercial biosensors integrate the TIA and supporting circuitry into an analog front end (AFE), which delivers low-noise performance and simplifies design, an approach that makers can emulate at smaller scale when experimenting.

On a related note, amperometry is the electrochemical technique at the heart of most biosensor strips. It involves applying a fixed potential to an electrode and measuring the resulting current, which is directly proportional to the concentration of the analyte.

In glucose oxidase strips, the enzymatic reaction produces hydrogen peroxide that is oxidized at the electrode, while in β-hydroxybutyrate strips, NADH transfers electrons through a mediator. In both cases, the transimpedance amplifier converts this tiny current into a usable voltage signal, enabling accurate, low-noise measurement.

Figure 3 Quick view shows a closeup of a standard ketone blood tester strip. Source: Author
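Stripped to its arithmetic, the chain above is a current-to-voltage conversion followed by a calibration map. A C sketch in which the feedback resistor value and calibration constants are invented for illustration (and the inverting stage's sign is ignored):

```c
#include <assert.h>
#include <math.h>

#define RF_OHMS 1.0e6   /* illustrative TIA feedback resistor: 1 uA -> 1 V */

/* Ideal TIA: output voltage proportional to the strip's sensor current
   (sign of the inverting configuration omitted for clarity) */
double tia_voltage(double i_sensor) {
    return i_sensor * RF_OHMS;
}

/* Hypothetical linear calibration mapping TIA voltage to concentration;
   real strips ship with per-lot calibration data */
double concentration(double v, double gain, double offset) {
    return gain * v + offset;
}
```

The hard parts in a real meter are a clean reference potential, low input-bias currents, and calibration, not the math.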

For those curious about non-chemical ketone monitoring, it’s worth noting that hobbyists have also experimented with MQ13x series gas sensors such as MQ138 to approximate acetone levels in breath.

These gas sensors are not medical-grade and require careful calibration against known standards, but they can respond to volatile organic compounds in exhaled breath. Pairing one with a microcontroller, a stable heater supply, and signal-conditioning circuitry gives you a rough, experimental ketone breath analyzer. It’s a fun proof-of-concept project—ideal for learning sensor physics and electronics.

Figure 4 MQ138 sensor module helps detect acetone in exhaled breath, enabling experimental DIY ketone analysis. Source: Author
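On the microcontroller side of such a project, the first step is usually recovering the sensing element's resistance from the ADC reading. MQ-style modules place the sensing resistance Rs in a divider with a load resistor RL, giving the standard back-calculation below (the component values in the comments are illustrative):

```c
#include <assert.h>
#include <math.h>

/* The sensing element Rs and load resistor RL form a divider:
   v_out = vcc * RL / (RL + Rs), so Rs = RL * (vcc - v_out) / v_out. */
double mq_rs_ohms(double vcc, double v_out, double rl_ohms) {
    return rl_ohms * (vcc - v_out) / v_out;
}

/* Trend indicator: ratio of Rs to a clean-air baseline R0, which is
   what you would log and calibrate against known samples */
double mq_ratio(double rs, double r0) {
    return rs / r0;
}
```

For example, with a 5-V supply, a 10-kΩ load, and 1 V at the ADC, Rs works out to 40 kΩ; tracking the Rs/R0 ratio over a breath sample is the experimental signal.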

Just keep in mind that for any real-world health tracking, these DIY setups should be for educational exploration only. Medical-grade devices undergo extensive clinical validation to handle variables like hematocrit levels, temperature, and signal interference—factors that a prototype might miss.

Finally, do not let the complexity of biomedical electronics intimidate you. Every expert once started as a novice tinkering with circuits and sensors. Dive in, experiment boldly, and let curiosity be your guide—the frontier of electronic biosensing is wide open for makers willing to explore.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Electronic biosensing: A quick take on ketone detection appeared first on EDN.

Teradyne snaps up TestInsight to boost ATE for semiconductors

Fri, 04/17/2026 - 15:59

Automated test equipment (ATE) supplier Teradyne is bolstering its test solutions for semiconductor design by acquiring TestInsight, a provider of test program creation, pattern conversion, and pre-silicon validation tools used across ATE platforms and semiconductor design environments.

By acquiring a supplier of semiconductor test development, validation, and conversion software, Teradyne aims to scale its next generation of pre-silicon validation and automated pattern generation technologies. That strengthens Teradyne’s ability to support semiconductor design-in activities to accelerate time-to-market in the emerging AI and data center markets.

Here is how pattern conversion across multiple cores and CPUs accelerates the test program. Source: TestInsight

Greg Smith, president and CEO of Teradyne, calls TestInsight’s tools foundational to modern test program development. “By integrating the TestInsight team into Teradyne, we enhance our ability to help customers achieve silicon readiness faster and with greater confidence.”

The acquisition will allow Teradyne to combine its ATE platforms with TestInsight’s tightly integrated design-to-test workflow, thereby reducing debug cycles, improving coverage, and enabling earlier test program readiness. In short, the acquisition of a design-to-test software firm will help Teradyne close the gap between design and test in semiconductor design environments.

TestInsight announced that it will continue to support its existing customers across all ATE platforms.

Related Content

The post Teradyne snaps up TestInsight to boost ATE for semiconductors appeared first on EDN.

Aliasing, the bane of sampled data systems

Fri, 04/17/2026 - 15:00

Aliasing is thankfully becoming a less frequent problem due to improved instrument designs. Users should still be aware of it to prevent time- and money-costly errors.

Aliasing is an ever-present potential problem in sampled data acquisition systems. It occurs when input signals are sampled at a sample rate that is too low. If you haven’t been bamboozled by an aliased signal, you are extremely lucky.

Sampled data instruments, such as digitizers and digital oscilloscopes, must sample their input signals at a rate greater than twice the highest frequency component present in the input signal. If this criterion is not met, then aliasing can occur. Figure 1 shows an example of aliasing.


Figure 1 In this example of aliasing, a 50 MHz sine wave was acquired at sampling rates of 1 Giga samples per second (GS/s) and 55 Mega samples per second (MS/s). The 55 MS/s acquisition is aliased and displayed as a 5 MHz waveform.
Source: Art Pini

A 50 MHz sine wave was acquired at both 1 GS/s and 55 MS/s. The waveform acquired at 1 GS/s has the correct frequency of 50 MHz as shown in the frequency parameter P1. The waveform acquired at 55 MS/s is aliased and has a frequency of 5 MHz as reported in parameter readout P2. The alias waveform will appear as having a different frequency than the correctly sampled waveform. This can be a significant problem that can be costly if not addressed carefully.

Let’s look into aliasing and learn how to deal with it. Sampling is a mixing process. When you apply an input signal to a sampler, the resulting output from the sampler contains the original waveforms, the sampling waveform, and the sum and difference frequencies, including the harmonics of the sampling signal. This is illustrated in Figure 2.


Figure 2 Sampling is a mixing or multiplicative process. The baseband frequency spectrum of the acquired signal appears as the upper and lower sidebands about the sampling frequency and all its harmonics.
Source: Art Pini

A correctly sampled waveform will have more than two samples per cycle at the bandwidth limit. In the sampler output, the baseband frequency spectrum of the input signal will appear as upper and lower sidebands about the sampling frequency and its harmonics. The right-hand graphs show the output spectrum of the sampler for the correct sampling rate (upper) and the undersampled case (lower). As the sampling frequency is decreased below twice the input signal bandwidth, the lower sideband of the sampling frequency interferes with the baseband signal, resulting in aliasing.

In the time-domain view (left-hand graphs), the aliased signal lacks sufficient time resolution to track the input waveform. Returning to the example in Figure 1, the 50 MHz input sampled at 55 MS/s will result in sum and difference image frequencies that are above and below the 55 MS/s sampling frequency. The lower sideband image falls into the baseband region of the spectrum and is the source of the 5 MHz alias signal.

Current digital instrument designs generally use sampling rates much higher than the instrument’s analog bandwidth. Some instruments may use sharp-cutoff anti-aliasing low-pass filters to limit the input bandwidth and control the instrument’s frequency response. These techniques, combined with long acquisition memories, also minimize this classic problem. Still, users should be aware of aliasing.

Recognizing aliasing

It is good practice to determine the frequency of the measured signal and verify that it has not been aliased. If the characteristics of the input signal are unknown, view the signal at the highest available sample rate, then decrease the sampling rate as needed. If aliasing occurs, you will see the signal’s frequency change as you select a lower sampling rate.

Another hint that a signal is an alias is that it will appear to have an unstable trigger and will jump erratically in time. This occurs because the instrument is triggered by the signal, and the alias, with fewer samples, may not display the trigger point. The instrument displays the nearest sample, which varies from one acquisition to the next, causing instability.

Aliasing can also be recognized by observing the effect on the input signal’s frequency-domain spectrum as the signal’s frequency is varied. A spectral component that moves down in frequency when the input signal’s frequency is increased, reversing direction, is an alias. As the frequency of a sine wave increases, the spectral line corresponding to that sine wave will move to the right until it reaches the Nyquist frequency of one-half the sample rate.

As the frequency increases above Nyquist, an aliased image from the lower sideband about the sampling frequency will fold back into the baseband spectrum, moving downward in frequency. The lower-sideband images for each harmonic of the sampling frequency show this reversal. Upper sideband images will move in the correct direction. This phenomenon is called spectral folding.
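The direction reversal can be illustrated numerically. Assuming an ideal sampler at 100 MS/s (Nyquist frequency 50 MHz), folding each input frequency into baseband shows the observed line climbing toward Nyquist and then moving back down:

```python
f_s = 100e6                      # assumed sampling rate; Nyquist = 50 MHz

def observed(f_in, f_s=f_s):
    """Frequency at which a tone at f_in appears after sampling."""
    f = f_in % f_s
    return min(f, f_s - f)

for f_mhz in (10, 30, 49, 51, 70, 90):
    print(f_mhz, "MHz in ->", observed(f_mhz * 1e6) / 1e6, "MHz observed")
# Below Nyquist the observed line tracks the input (10, 30, 49 MHz);
# above Nyquist it folds back down (49, 30, 10 MHz).
```

This is exactly the spectral folding described above: inputs at 51, 70, and 90 MHz appear at 49, 30, and 10 MHz.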

A helpful technique to view an aliased signal

If the signal is a relatively simple periodic waveform, such as the example sine wave, then enabling infinite display persistence will show the underlying waveform, as shown in Figure 3.


Figure 3 The aliased signal (upper trace) and the same signal displayed with infinite persistence turned on (lower trace). The persistence display accumulates all the sample values showing the original 50 MHz waveform.
Source: Art Pini

All sample points in the aliased waveform are real. If infinite persistence is enabled, all samples are accumulated on the persistence display, and the original unaliased waveform is eventually recovered. This technique won’t work for complex signals such as non-return-to-zero (NRZ) data or broadband signals.
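Why persistence recovers the waveform can be seen numerically: every aliased sample is still an exact point on the original periodic signal, so folding the sample times modulo the signal period reassembles it. A rough illustration, assuming an ideal 50 MHz sine undersampled at 55 MS/s:

```python
import math

f_sig, f_samp = 50e6, 55e6       # undersampled: only ~1.1 samples/cycle
T_sig = 1.0 / f_sig

# Collect real samples of the undersampled sine and fold each sample
# time back into one period of the original signal.
folded = [((n / f_samp) % T_sig, math.sin(2 * math.pi * f_sig * n / f_samp))
          for n in range(1000)]

# Each folded point lies exactly on the original 50 MHz waveform, which
# is why accumulating acquisitions with persistence reveals it.
err = max(abs(v - math.sin(2 * math.pi * f_sig * t)) for t, v in folded)
print(err < 1e-6)                # -> True
```

As the text notes, this only works for simple periodic waveforms; the folding has no meaning for NRZ data or broadband signals.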

Using aliased waveforms

Given that aliased signals are made up of real samples, an aliased signal can be used in measurements, as long as the signal’s frequency is not being measured. Consider measuring the output of a remote keyless entry transmitter. This device outputs a pulse-modulated RF signal with a carrier frequency of 433 MHz. This signal has a relatively narrow bandwidth about the carrier frequency. The information being transmitted is encoded in a 400 ms pulse pattern.

Two measurement scenarios are needed. The first characterizes the RF signal: parameters such as the carrier frequency, as well as the shape of the RF envelope, which affects the purity of the transmitted signal. The second decodes the information content. Using an oscilloscope with a 20-megasample (MS) acquisition memory at a horizontal scale setting of 100 ms per division (a 1-second acquisition time), the sampling rate would be 20 MS/s. Figure 4 shows the measurement processes for both the RF and data-decoding measurements.
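The sampling-rate arithmetic behind that setting is straightforward; a sketch assuming the common 10-division oscilloscope graticule:

```python
memory_depth = 20e6            # 20-megasample acquisition memory
time_per_div = 100e-3          # 100 ms per division
n_divisions = 10               # assumed 10-division horizontal graticule

acq_time = time_per_div * n_divisions        # 1-second acquisition window
sample_rate = memory_depth / acq_time        # rate the memory can sustain
print(sample_rate / 1e6)                     # -> 20.0 (MS/s)
```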


Figure 4 Measurements on a remote keyless entry transmitter use an aliased signal to decode digital data.
Source: Art Pini

The traces on the left side of the screen show the RF measurements. The signal is acquired at 20 GS/s, and its leading edge is captured. The oscilloscope measures the RF carrier frequency at 433.9 MHz. The envelope of the RF carrier is extracted by applying the absolute value function, followed by a low-pass filter, to create a peak detector. Trace F1 (bottom) shows the envelope. A copy of the envelope (Trace F3) is also overlaid on a horizontally expanded zoom view (Trace Z1) of the leading edge of the signal, where it can be used to measure the envelope’s rise time.

The right side of the display shows the data decoding process. The entire data packet is acquired on a 100-ms-per-division horizontal scale. The sampling rate is 20 MS/s. The RF carrier is aliased down to 6.13 MHz as measured in parameter P2. The aliased frequency of the carrier is the result of mixing the twenty-second harmonic of the sampling rate with the 433.9 MHz carrier. The same envelope detection technique is applied to the entire packet, rendering the data content as an NRZ signal. Aliasing has enabled the acquisition of the entire signal data packet.
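The aliased carrier frequency follows from the same mixing arithmetic described earlier; a quick check of the nominal value (which agrees closely with the 6.13 MHz measured in parameter P2):

```python
f_carrier = 433.9e6            # measured RF carrier
f_s = 20e6                     # sampling rate at 100 ms/div

k = round(f_carrier / f_s)                 # nearest sampling-rate harmonic
f_alias = abs(k * f_s - f_carrier)         # image folded into baseband
print(k, round(f_alias / 1e6, 2))          # -> 22, 6.1 (MHz)
```

The 22nd harmonic of 20 MS/s sits at 440 MHz; the 433.9 MHz carrier mixes down to a 6.1 MHz lower-sideband image in baseband.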

Conclusion

Aliasing in digital instruments is a digitizer characteristic that has become less of a problem thanks to improved instrument designs, including anti-aliasing filters, oversampling, and very long acquisition memories. Users should still be aware of aliasing to prevent errors that cost time and money.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

The post Aliasing, the bane of sampled data systems appeared first on EDN.

Bluetooth LE throughput: Why real‑world performance falls short of specs

Fri, 04/17/2026 - 12:05

Many Bluetooth Low Energy (LE) applications depend on reliable, high‑throughput data transfer between connected devices. Typical use cases include over‑the‑air (OTA) firmware updates, sensor data streaming, and bulk data transport between embedded systems. Although the Bluetooth LE specification defines clear upper bounds on achievable data rate, measured throughput in real systems often falls well below these limits.

This discrepancy is not caused by a single factor. Instead, it arises from the interaction of connection‑event timing, controller scheduling behavior, protocol stack implementation, and radio‑frequency conditions.

While modern Bluetooth LE devices commonly support Data Length Extension (DLE), the 2-Mbps Physical Layer (PHY), and large Attribute Protocol (ATT) Maximum Transmission Unit (MTU) sizes, these features alone do not determine achievable throughput.

This article focuses on the practical constraints that shape Bluetooth LE Generic Attribute Profile (GATT) write throughput in deployed systems and explains why throughput behavior is frequently non‑linear and platform‑dependent.

Assumptions and test context

To isolate timing and scheduling effects from feature limitations, the analysis presented here assumes a contemporary Bluetooth LE configuration with the following capabilities:

  • Support for DLE on both Central and Peripheral
  • Use of the 2-Mbps PHY
  • A negotiated ATT MTU of 251 bytes
  • Transmit‑side buffering sufficient to queue multiple packets
  • Use of GATT Write Without Response operations
  • A receiver capable of sustaining the incoming data rate without application‑level back‑pressure

GATT Write Without Response is used to minimize protocol overhead and eliminate application‑layer acknowledgments that would otherwise consume airtime and delay buffer reuse. Although this write type omits an explicit GATT‑layer acknowledgment, delivery to the receiver’s Link Layer remains guaranteed by the Bluetooth LE protocol.

Under these assumptions, throughput might be expected to scale directly with the number of packets transmitted per connection interval. In practice, this assumption does not hold.

Theoretical throughput

With Data Length Extension enabled, a single Bluetooth LE Link Layer packet can carry up to 251 bytes of payload. After accounting for Logical Link Control and Adaptation Protocol (L2CAP) and Attribute Protocol (ATT) headers, 244 bytes remain available for application data.

Using the 2-Mbps PHY, the on‑air time for a maximum‑length data packet followed by its acknowledgment is approximately 1.4 ms. If a connection interval could be filled entirely with such packet exchanges, without additional Link Layer procedures or timing gaps, the resulting application‑layer throughput would be approximately 170 kB/s.

This value represents an upper bound that is rarely approached in practice.
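This upper bound follows directly from the two numbers above; a minimal calculation:

```python
payload = 244                # application bytes per packet after headers
exchange_time = 1.4e-3       # one packet + acknowledgment, 2-Mbps PHY

# Best case: the connection interval is filled entirely with exchanges
throughput = payload / exchange_time        # bytes per second
print(round(throughput / 1000, 1))          # -> 174.3, i.e. roughly 170 kB/s
```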

Connection events and packet scheduling

Bluetooth LE communication occurs within periodic connection events scheduled at intervals defined by the connection interval parameter. During each event, the Central and Peripheral exchange packets until one side terminates the event or the available time expires.

Most controllers support transmitting multiple packets within a single connection event, but the maximum number of packets allowed per event is not specified by the Bluetooth standard and is instead determined by the controller and stack implementation. As a result, packet scheduling behavior can vary significantly across platforms.

This difference is illustrated in Figure 1. In the left‑hand chart, a wireless MCU acting as the Central can pack 20 packets into a 30‑ms connection interval, using most of the available airtime before entering a short end‑of‑event dead time. In contrast, the right‑hand chart shows a smartphone operating as the Central, where the connection‑event length is capped at five packets, even though additional airtime remains available within the same interval.

Figure 1 Packet scheduling within a Bluetooth LE connection interval varies by platform. A wireless MCU Central fills most of a 30‑ms interval with data packets, while a smartphone Central limits the number of packets per connection event, leaving unused airtime. Source: Microchip

Such limits are particularly common on mobile platforms, where power management and radio coexistence requirements constrain connection‑event length. When the number of packets per event is capped, increasing the connection interval does not necessarily increase throughput, because the additional airtime cannot be used for data transmission.
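The effect of a per-event packet cap on the Figure 1 scenarios can be sketched numerically (packet counts taken from the figure; payload size from the earlier assumptions):

```python
def gatt_throughput_kBps(packets_per_event, interval_s, payload=244):
    """Application throughput when the Central caps packets per event."""
    return packets_per_event * payload / interval_s / 1000

mcu   = gatt_throughput_kBps(20, 30e-3)   # wireless MCU Central, 20/event
phone = gatt_throughput_kBps(5, 30e-3)    # smartphone Central, capped at 5
print(round(mcu, 1), round(phone, 1))     # -> 162.7 40.7
# With the cap in place, lengthening the interval only lowers throughput.
```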

Residual time and end-of-event dead time

Two timing effects significantly reduce usable airtime within each connection interval:

  • Residual time, which occurs when the remaining interval is too short to accommodate another full packet exchange.
  • End‑of‑event dead time, during which the controller prepares for the next scheduled event and does not permit further transmissions.

The impact of these effects is illustrated in Figure 2. The figure shows that a maximum‑length data packet followed by its acknowledgment occupies approximately 1.4 ms of on‑air time. When the remaining portion of a connection interval is shorter than this duration, the controller cannot schedule another packet exchange, even though some airtime remains available.

Figure 2 Residual airtime and end‑of‑event dead time limit packet scheduling at short connection intervals. A maximum‑size packet and its acknowledgment require approximately 1.4 ms, preventing additional transmissions when insufficient time remains. Source: Microchip

The duration of end‑of‑event dead time varies widely between controller implementations and is not explicitly defined by the Bluetooth specification. In many systems, this behavior can only be identified and quantified through direct measurement.

At short connection intervals, residual and dead time consume a relatively large fraction of each interval, limiting the number of packets that can be transmitted. At longer intervals, this overhead can be amortized across additional packets, improving average throughput if packet scheduling is not otherwise constrained.
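This amortization can be modeled with simple integer timing; the 1-ms dead time below is a hypothetical placeholder, since, as noted, real values vary by controller and must be measured:

```python
def packets_per_interval(interval_us, dead_time_us=1000, exchange_us=1400):
    """Packets that fit after reserving end-of-event dead time.

    dead_time_us = 1000 is a hypothetical value for illustration;
    actual controller dead times must be measured.
    """
    return max(0, (interval_us - dead_time_us) // exchange_us)

for interval_us in (7500, 15000, 30000):
    n = packets_per_interval(interval_us)
    airtime = n * 1400 / interval_us       # fraction of interval on air
    print(interval_us / 1000, "ms:", n, "packets,", round(airtime, 2), "on air")
# Short intervals lose a larger fraction to residual and dead time.
```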

Non-linear throughput behavior

Because residual and end‑of‑event dead time depend on internal scheduling thresholds, Bluetooth LE throughput as a function of connection interval is often non‑linear. Small changes in the connection interval can result in unexpected increases or decreases in throughput, depending on how the interval aligns with controller‑specific timing constraints.

These effects are illustrated in Figure 3, which compares measured throughput across a range of connection intervals under different environmental and platform conditions. In the left‑hand graph, an off‑the‑shelf wireless system‑on‑chip (SoC) is evaluated as both Central and Peripheral. Measurements taken in a shielded environment (orange) show consistently higher throughput than those collected in an open office (blue), indicating the impact of ambient interference on achievable performance.

Figure 3 Measured throughput versus connection interval illustrates non‑linear behavior and environmental sensitivity. Results from both a wireless SoC platform and a Zephyr GATT throughput test show higher throughput in low‑interference conditions and increased variability at longer intervals. Source: Microchip

The right‑hand graph, derived from a Zephyr GATT throughput test, reinforces this behavior while also highlighting the non‑linear relationship between connection interval and throughput. As the interval increases, throughput does not scale monotonically; instead, it exhibits discontinuities and increased variance, particularly at longer intervals where residual and dead time are amortized over more packets.

These results emphasize that throughput cannot be predicted solely from the Bluetooth LE specification. Instead, it’s strongly influenced by platform‑specific scheduling behavior and the prevailing radio‑frequency environment.

Impact of interference

Longer connection intervals typically improve throughput in clean radio‑frequency environments by amortizing residual airtime across additional packets. However, they also increase sensitivity to interference. During long connection events, many packets may be transmitted back‑to‑back; if packet loss or repeated cyclic redundancy check errors occur early in the event, some controllers terminate the event prematurely.

When this occurs, a substantial portion of the connection interval may remain unused, resulting in a sharp reduction in throughput. Shorter connection intervals limit the amount of airtime lost when errors occur and often produce more consistent throughput in noisy environments, albeit with a lower theoretical maximum.
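This trade-off can be captured with a crude abort-on-first-error model (a simplifying assumption for illustration; real controllers differ in when they terminate an event):

```python
def expected_kBps(n_packets, interval_us, p_err, payload=244):
    """Expected throughput if the event aborts at the first packet error.

    For per-packet error probability p_err, the expected number of
    delivered packets is q * (1 - q**N) / p_err with q = 1 - p_err.
    """
    q = 1.0 - p_err
    delivered = n_packets if p_err == 0 else q * (1 - q**n_packets) / p_err
    return delivered * payload / (interval_us * 1e-6) / 1000

# Clean RF: the long event wins.  5% packet loss: the short event wins.
print(round(expected_kBps(20, 30000, 0.0), 1),
      round(expected_kBps(4, 7500, 0.0), 1))    # -> 162.7 130.1
print(round(expected_kBps(20, 30000, 0.05), 1),
      round(expected_kBps(4, 7500, 0.05), 1))   # -> 99.1 114.7
```

Under this model, the longer event loses far more airtime per error, so the shorter interval delivers higher average throughput once packet loss appears.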

While parameters such as PHY speed, MTU size, DLE, and GATT characteristic length are largely fixed in modern Bluetooth LE systems, connection‑event timing and controller behavior ultimately determine achievable throughput.

The connection interval remains the primary tuning parameter, but its effect is non‑linear and highly dependent on implementation details. For systems that limit packet count per connection event, selecting an interval that closely matches the allowed packet budget is critical. When longer events are supported, throughput gains must be weighed against increased sensitivity to interference.

For design engineers, optimizing Bluetooth LE throughput requires empirical evaluation and platform‑specific characterization rather than reliance on specification‑level performance limits. At a practical level, this places increased importance on controller implementations and protocol stacks that offer fine‑grained configurability on both the Central and Peripheral sides, enabling precise control over connection parameters, event length, and buffering behavior.

Wireless MCU platforms, such as Microchip’s PIC32‑BZ6 multiprotocol wireless MCU family, are representative of designs that emphasize this level of stack configurability and visibility. By allowing engineers to tune behavior symmetrically on both ends of the link and observe the resulting timing effects, such platforms can simplify the process of analyzing throughput bottlenecks and optimizing data transfer performance under real‑world operating conditions.

The ability to measure connection‑event timing, packet scheduling, and error behavior at the controller and stack levels enables more repeatable, data‑driven throughput characterization during development.

Patrick Fitzpatrick is senior technical staff engineer for software at Microchip’s Wireless Business Unit.

The post Bluetooth LE throughput: Why real‑world performance falls short of specs appeared first on EDN.

The system architect’s sketchbook: The coherency wall

Thu, 04/16/2026 - 18:05

Deepak Shankar, founder of Mirabilis Design and developer of the VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.

The post The system architect’s sketchbook: The coherency wall appeared first on EDN.
