Feed aggregator
Electronic biosensing: A quick take on ketone detection

Ketone detection may sound like the domain of biochemistry, but at its core, it’s also an electronics challenge: how do we translate a chemical presence into a measurable electrical signal?
The key lies in the ability of circuits to convert molecular interactions into quantifiable outputs. Through principles like signal conversion, amplification, and conditioning, electronics transform invisible chemical activity into reliable data, making ketone monitoring practical and accurate while underscoring how deeply electronics shape modern health technologies.
Ketones: Small molecules, big impact
Ketone detection is crucial because these molecules act as direct indicators of how the body manages its energy balance. Moderate levels can reflect healthy states such as fasting, exercise, or adherence to ketogenic diets, while dangerously high concentrations may signal conditions like diabetic ketoacidosis that require urgent medical attention.
By providing timely and accurate measurements, ketone monitoring empowers individuals to optimize nutrition and performance and gives clinicians essential data to prevent and manage metabolic complications. In both everyday wellness and clinical care, reliable ketone tracking plays a decisive role in safeguarding health.
Overview of ketone detection sensors
Nowadays, ketone detection has moved well beyond the lab bench and into lifestyle and wearable electronics. Compact analyzers are being built into fitness trackers, smartwatches, and portable health devices, giving users real-time insights into metabolism and diet. This evolution is powered by the fundamentals of electronics—miniaturization, low-power design, and signal processing—that make complex biochemical measurements practical in everyday life, turning health monitoring into a seamless part of daily routines.
While electronics provide the backbone for translating chemistry into measurable signals, the choice of sensor defines how ketones are detected. Electrochemical sensors generate currents via redox reactions, optical sensors capture variations in light absorption or fluorescence, and chemiresistive sensors—including semiconductor gas sensors—exploit surface-level conductivity shifts. Each technology offers a unique pathway from molecular interaction to electrical output, setting the stage for circuits to amplify, filter, and interpret the data with precision.
Ketone sensing: The gold standard and beyond
In practice, blood testing is the clinical gold standard, using the enzyme β-hydroxybutyrate dehydrogenase (HBDH) to generate a precise electrical signal from β-hydroxybutyrate (BHB). Keep note that a blood ketone meter functions as a miniaturized potentiostat; it maintains a fixed voltage across the biosensor to measure the current produced by this reaction, providing the data needed to distinguish safe ketosis from metabolic crisis.
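As an illustration of this amperometric principle, the sketch below maps a measured sensor current to a BHB concentration through a linear calibration and flags the reading against commonly cited clinical ranges. All calibration constants here are example values, not those of any real meter.

```python
# Hypothetical illustration of a blood-ketone meter's readout chain:
# the potentiostat holds a fixed voltage and measures the sensor current,
# which is mapped to concentration by a linear calibration.
# Sensitivity, offset, and thresholds below are example values only.

def current_to_bhb_mmol_l(current_na, sensitivity_na_per_mmol=85.0, offset_na=12.0):
    """Linear calibration: I = offset + sensitivity * concentration."""
    return max(0.0, (current_na - offset_na) / sensitivity_na_per_mmol)

def classify(bhb_mmol_l):
    """Approximate, commonly cited clinical ranges (illustrative)."""
    if bhb_mmol_l < 0.5:
        return "normal"
    if bhb_mmol_l < 3.0:
        return "nutritional ketosis"
    return "possible ketoacidosis - seek medical advice"

reading = current_to_bhb_mmol_l(182.0)  # 2.0 mmol/L with these constants
print(f"{reading:.2f} mmol/L -> {classify(reading)}")
```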

Figure 1 Today’s multifunction blood meter kits provide a fast and reliable method for measuring β-ketone, blood glucose, and other parameters from fresh whole blood samples in just a few simple steps. Source: eLinkCare
However, the field is evolving beyond the invasive finger-prick. Researchers are now optimizing alternative biomarkers and delivery methods to bridge the gap between clinical accuracy and user convenience.
Exhaled breath analysis targets acetone—a volatile byproduct of fat metabolism. Current technologies, such as chemiresistive metal-oxide sensors, offer a high-frequency, non-invasive “proxy” for ketosis. While breath analysis currently lacks the clinical precision required for acute emergencies like diabetic ketoacidosis (DKA), it provides a sustainable, pain-free alternative for routine wellness tracking.
In a nutshell, ketone breath analyzers typically employ semiconductor-based, chemiresistive sensors to detect acetone—a byproduct of fat metabolism—in exhaled breath. These sensors function by measuring changes in electrical resistance triggered by volatile organic compounds (VOCs), which serves as a proxy for blood ketone concentration. High-end models often integrate CMOS technology to enhance both sensitivity and measurement precision.

Figure 2 Ketone breath analyzers and subcutaneous sensors deliver real-time feedback on ketosis levels. Source: Author
Continuous ketone monitoring (CKM) is an emerging technology that utilizes a small subcutaneous sensor—similar to a continuous glucose monitor (CGM)—to measure BHB levels in the interstitial fluid. By providing real-time data and automated alerts, these devices aim to detect rising ketone levels before they escalate into metabolic emergencies, effectively transitioning patient care from ‘spot-check’ diagnostics to continuous, proactive health management.
Note that a subcutaneous sensor is a tiny, flexible filament inserted into the fatty tissue just beneath the skin. By monitoring the interstitial fluid in this layer, the sensor uses enzymes to measure specific chemical markers—like glucose or ketones—and converts those readings into a continuous digital stream. Because it stays in place for several days and does not require venous access, it offers a painless, real-time alternative to repeated finger-prick testing.
Electronic biosensing for makers
To wrap this up, remember that while the medical industry uses highly proprietary, pre-calibrated systems, the underlying principle is a fantastic playground for makers.
Whether you are working with a glucose oxidase strip for blood sugar or a β-hydroxybutyrate strip for ketone levels, the principle is the same: enzyme-mediated reactions generate electrons that must be measured against a stable reference potential.
Once you master the transimpedance amplifier (TIA), you have essentially built the core of a professional-grade diagnostic instrument. In fact, most commercial biosensors integrate the TIA and supporting circuitry into an analog front end (AFE), which delivers low-noise performance and simplifies design, an approach that makers can emulate at smaller scale when experimenting.
On a related note, amperometry is the electrochemical technique at the heart of most biosensor strips. It involves applying a fixed potential to an electrode and measuring the resulting current, which is directly proportional to the concentration of the analyte.
In glucose oxidase strips, the enzymatic reaction produces hydrogen peroxide that is oxidized at the electrode, while in β-hydroxybutyrate strips, NADH transfers electrons through a mediator. In both cases, the transimpedance amplifier converts this tiny current into a usable voltage signal, enabling accurate, low-noise measurement.
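The core transimpedance relationship is simple enough to sketch numerically: the TIA output is the sensor current times the feedback resistance, which an ADC then digitizes. The feedback resistor and ADC parameters below are illustrative choices, not a reference design.

```python
# Sketch of the transimpedance stage at the heart of a strip reader:
# V_out = I * R_f (sign depends on topology), then ADC quantization.
# Component values are illustrative assumptions.

R_F = 1_000_000          # 1 MOhm feedback resistor (assumed)
ADC_FULL_SCALE_V = 3.3   # assumed ADC reference
ADC_BITS = 12

def tia_output_v(current_a):
    """Convert sensor current (amps) to TIA output voltage."""
    return current_a * R_F

def adc_code(voltage):
    """Quantize a voltage to an ADC code, clamped to full scale."""
    voltage = min(max(voltage, 0.0), ADC_FULL_SCALE_V)
    return round(voltage / ADC_FULL_SCALE_V * (2**ADC_BITS - 1))

i_sensor = 1.2e-6                 # 1.2 uA from the enzymatic reaction
v = tia_output_v(i_sensor)        # about 1.2 V
print(v, adc_code(v))
```

Choosing R_f sets the full-scale current: with 1 MOhm and a 3.3 V ADC, currents up to about 3.3 uA are measurable before clipping.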

Figure 3 A closeup view of a standard ketone blood test strip. Source: Author
For those curious about non-invasive ketone monitoring, it’s worth noting that hobbyists have also experimented with MQ13x-series gas sensors, such as the MQ138, to approximate acetone levels in breath.
These gas sensors are not medical-grade and require careful calibration against known standards, but they do respond to volatile organic compounds in exhaled breath. Pairing one with a microcontroller, a stable heater supply, and signal-conditioning circuitry gives you a rough, experimental ketone breath analyzer. It’s a fun proof-of-concept project, ideal for learning sensor physics and electronics.
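As a rough sketch of how such a reading might be processed: the divider math below is standard for MQ-type sensor modules, but R0 (clean-air resistance) and the power-law constants are placeholders that must come from your own calibration, not datasheet-accurate values.

```python
# Experimental, NOT medical-grade: turning an MQ-type chemiresistive
# sensor's divider output into a crude acetone estimate.
# R0 and the power-law constants A, B are placeholders that must be
# determined by calibrating against known acetone concentrations.

V_SUPPLY = 5.0
R_LOAD = 10_000.0        # load resistor on the module's voltage divider

def sensor_resistance(v_out):
    """Solve the divider Vout = Vcc * RL / (Rs + RL) for Rs."""
    return R_LOAD * (V_SUPPLY - v_out) / v_out

def acetone_ppm(v_out, r0, a=2.0, b=-1.5):
    """Power-law fit ppm = A * (Rs/R0)^B (constants are placeholders)."""
    ratio = sensor_resistance(v_out) / r0
    return a * ratio ** b

# Example: with an assumed R0 of 30 kOhm and a divider output of 1.0 V:
rs = sensor_resistance(1.0)              # 40 kOhm
print(rs, acetone_ppm(1.0, r0=30_000.0))
```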

Figure 4 MQ138 sensor module helps detect acetone in exhaled breath, enabling experimental DIY ketone analysis. Source: Author
Just keep in mind that for any real-world health tracking, these DIY setups should be for educational exploration only. Medical-grade devices undergo extensive clinical validation to handle variables like hematocrit levels, temperature, and signal interference—factors that a prototype might miss.
Finally, do not let the complexity of biomedical electronics intimidate you. Every expert once started as a novice tinkering with circuits and sensors. Dive in, experiment boldly, and let curiosity be your guide—the frontier of electronic biosensing is wide open for makers willing to explore.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- What’s in store for optical biosensors?
- The critical role of sensors in medical devices
- Designer’s guide: Sensors for medical applications
- Developing medical sensors compliant with global requirements
- Tools and techniques for electrical characterization of biosensors
The post Electronic biosensing: A quick take on ketone detection appeared first on EDN.
AI optical transceiver market to grow 57% to US$26bn in 2026
UK Semiconductor Centre appoints director of international partnerships
Took apart a rechargeable battery (Venom Xbox battery) to have a look at the charging circuit
Tried to use it to light some LEDs, though I think the circuit expects a battery voltage to use as feedback, as it has very low output current otherwise. Short-circuit current was 300 mA.
I tried building a Flipper Zero myself… this is what I ended up with 😅 details in comments
Current setup 😅 ESP32 + RFID + SDR + random modules. Not sure if this will fully work yet… But it’s getting interesting 👀 Any ideas what I should add next?
EPROM UV erasing setup
There must be a T48 UV-erasing addon with the EPROM blank check: a 270–280 nm, 800 mW diode.
KiCad Netclass sizes
I have been designing PCBs to carry a small microcontroller, an RS485 transceiver, an LED, and the associated balance of plant required to make lights for my ROV. Space is at a premium, so track sizes are being chosen to minimise real estate used. KiCad has a netclasses setup page that uses IPC 2221 requirements and PCBway capabilities. I have come up with a sensible set of pre-defined values.
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
Infineon’s rad-hard devices used aboard NASA’s Artemis II Orion capsule
Nuvoton releases 4.5W 402nm violet laser, boosting power output by 1.5x
40 Years After the Chornobyl Disaster: Present-Day Realities and Future Challenges
☑️ Kyiv Polytechnic took part in hearings of the Committee of the Verkhovna Rada of Ukraine on Social Policy and Protection of Veterans’ Rights on the topic “40 Years After the Chornobyl Disaster: Present-Day Realities and Future Challenges”.
⭐ Join us for a presentation of the dual education program of Igor Sikorsky Kyiv Polytechnic Institute and Melexis Academy
At the presentation of the dual education program of Igor Sikorsky Kyiv Polytechnic Institute and Melexis Academy, the Melexis team will present all the opportunities of the master’s program in specialty G5 “Electronics, Electronic Communications, Instrument Engineering, and Radio Engineering”.
Teradyne snaps up TestInsight to boost ATE for semiconductors

Automated test equipment (ATE) supplier Teradyne is bolstering its test solutions for semiconductor design by acquiring TestInsight, a provider of test program creation, pattern conversion, and pre-silicon validation tools used across ATE platforms and semiconductor design environments.
By acquiring a supplier of semiconductor test development, validation, and conversion software, Teradyne aims to scale its next generation of pre-silicon validation and automated pattern generation technologies. That strengthens Teradyne’s ability to support semiconductor design-in activities to accelerate time-to-market in the emerging AI and data center markets.

Here is how pattern conversion across multiple cores and CPUs accelerates the test program. Source: TestInsight
Greg Smith, president and CEO of Teradyne, calls TestInsight’s tools foundational to modern test program development. “By integrating the TestInsight team into Teradyne, we enhance our ability to help customers achieve silicon readiness faster and with greater confidence.”
The acquisition will allow Teradyne to combine its ATE platforms with TestInsight’s tightly integrated design-to-test workflow, thereby reducing debug cycles, improving coverage, and enabling earlier test program readiness. In short, the acquisition of a design-to-test software firm will help Teradyne close the gap between design and test in semiconductor design environments.
TestInsight announced that it will continue to support its existing customers across all ATE platforms.
Related Content
- Low-Budget Automatic Test Equipment
- How to power automated test equipment
- Automated Test Equipment for 3D Magnetic Sensors
- Optimizing Automated Test Equipment for Quality and Complexity
- Flexible Test Strategies Keeping Pace with Semiconductor Evolution
The post Teradyne snaps up TestInsight to boost ATE for semiconductors appeared first on EDN.
👍 Join the webinar “Creative Commons Licenses: A Path to Open Science for Ukrainian Authors and Publishers”
The KPI Library invites researchers of Igor Sikorsky Kyiv Polytechnic Institute and anyone interested to join the international online event “Creative Commons Licenses: A Path to Open Science for Ukrainian Authors and Publishers”, organized jointly with Creative Commons specialists.
💢 Online lecture “OpenAlex: The Largest Open Database of Scholarly Works”
The KPI Library invites researchers of Igor Sikorsky Kyiv Polytechnic Institute and anyone interested to join the online lecture “OpenAlex: The Largest Open Database of Scholarly Works”.
Aliasing, the bane of sampled data systems

Aliasing is thankfully becoming a less frequent problem due to improved instrument designs. Users should still be aware of it to prevent errors that cost time and money.
Aliasing is an ever-present potential problem in sampled data acquisition systems. It occurs when input signals are sampled at a sample rate that is too low. If you haven’t been bamboozled by an aliased signal, you are extremely lucky.
Sampled data instruments, such as digitizers and digital oscilloscopes, must sample their input signals at a rate greater than twice the highest frequency component present in the input signal. If this criterion is not met, then aliasing can occur. Figure 1 shows an example of aliasing.

Figure 1 In this example of aliasing, a 50 MHz sine wave was acquired at sampling rates of 1 gigasample per second (GS/s) and 55 megasamples per second (MS/s). The 55 MS/s acquisition is aliased and displayed as a 5 MHz waveform.
Source: Art Pini
A 50 MHz sine wave was acquired at both 1 GS/s and 55 MS/s. The waveform acquired at 1 GS/s has the correct frequency of 50 MHz as shown in the frequency parameter P1. The waveform acquired at 55 MS/s is aliased and has a frequency of 5 MHz as reported in parameter readout P2. The alias waveform will appear as having a different frequency than the correctly sampled waveform. This can be a significant problem that can be costly if not addressed carefully.
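The aliased frequency can be predicted numerically: the apparent frequency is the distance from the input frequency to the nearest harmonic of the sampling rate. A minimal sketch reproducing the figure's numbers:

```python
# Predicting the apparent (aliased) frequency of a sampled sine wave:
# the observed frequency is the distance from the input frequency to
# the nearest integer multiple of the sample rate.

def alias_frequency(f_in, fs):
    """Apparent frequency of a sine at f_in when sampled at fs."""
    return abs(f_in - fs * round(f_in / fs))

print(alias_frequency(50e6, 1e9))   # 50 MHz at 1 GS/s: no alias, 50 MHz
print(alias_frequency(50e6, 55e6))  # 50 MHz at 55 MS/s: aliased to 5 MHz
```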
Let’s look into aliasing and learn how to deal with it. Sampling is a mixing process. When you apply an input signal to a sampler, the resulting output from the sampler contains the original waveforms, the sampling waveform, and the sum and difference frequencies, including the harmonics of the sampling signal. This is illustrated in Figure 2.

Figure 2 Sampling is a mixing or multiplicative process. The baseband frequency spectrum of the acquired signal appears as the upper and lower sidebands about the sampling frequency and all its harmonics.
Source: Art Pini
A correctly sampled waveform will have more than two samples per cycle at the bandwidth limit. In the sampler output, the baseband frequency spectrum of the input signal will appear as upper and lower sidebands about the sampling frequency and its harmonics. The right-hand graphs show the output spectrum of the sampler for the correct sampling rate (upper) and the undersampled case (lower). As the sampling frequency is decreased below twice the input signal bandwidth, the lower sideband of the sampling frequency interferes with the baseband signal, resulting in aliasing.
In the time-domain view (left-hand graphs), the aliased signal lacks sufficient time resolution to track the input waveform. Returning to the example in Figure 1, the 50 MHz input sampled at 55 MS/s will result in sum and difference image frequencies that are above and below the 55 MS/s sampling frequency. The lower sideband image falls into the baseband region of the spectrum and is the source of the 5 MHz alias signal.
Current digital instrument designs generally use sampling rates much higher than the instrument’s analog bandwidth. Some instruments may use sharp-cutoff anti-aliasing low-pass filters to limit the input bandwidth and control the instrument’s frequency response. These techniques, combined with long acquisition memories, also minimize this classic problem. Still, users should be aware of aliasing.
Recognizing aliasing
It is good practice to determine the frequency of the measured signal and verify that it has not been aliased. If the characteristics of the input signal are unknown, view the signal at the highest available sample rate, then decrease the sampling rate as needed. If aliasing occurs, you will see the signal’s apparent frequency change as you select a lower sampling rate.
Another hint that a signal is an alias is that it will appear to have an unstable trigger and will jump erratically in time. This occurs because the instrument is triggered by the signal, and the alias, with fewer samples, may not display the trigger point. The instrument displays the nearest sample, which varies from one acquisition to the next, causing instability.
Aliasing can also be recognized by observing the effect on the input signal’s frequency-domain spectrum as the signal’s frequency is varied. A spectral component that shows a decrease in frequency when the input signal’s frequency is increased, a reversal of direction, is an alias. As the frequency of a sine wave increases, the spectral line corresponding to that sine wave will move to the right until it hits the Nyquist frequency of one-half the sample rate.
As the frequency increases above Nyquist, an aliased image from the lower sideband about the sampling frequency will fold back into the baseband spectrum, moving downward in frequency. The lower-sideband images for each harmonic of the sampling frequency show this reversal. Upper sideband images will move in the correct direction. This phenomenon is called spectral folding.
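A short numerical sweep (in arbitrary frequency units, with an assumed sample rate of 100) makes this reversal visible: below Nyquist the apparent frequency tracks the input, and above Nyquist it folds back and moves in the opposite direction.

```python
# Demonstrating spectral folding: sweep a sine's frequency past the
# Nyquist frequency (fs/2) and watch the apparent frequency reverse
# direction. Frequencies are in arbitrary units; fs = 100 is assumed.

def apparent_freq(f_in, fs):
    """Apparent frequency after sampling: fold about harmonics of fs."""
    return abs(f_in - fs * round(f_in / fs))

fs = 100.0
for f in (30, 45, 55, 70, 95, 105):
    print(f, "->", apparent_freq(f, fs))
# Below Nyquist (50) the apparent frequency increases with f (30, 45);
# above it, the alias folds back down (55 -> 45, 70 -> 30, 95 -> 5).
```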
A helpful technique to view an aliased signal
If the signal is a relatively simple periodic waveform, such as the example sine wave, then enabling infinite display persistence will show the underlying waveform, as shown in Figure 3.

Figure 3 The aliased signal (upper trace) and the same signal displayed with infinite persistence turned on (lower trace). The persistence display accumulates all the sample values showing the original 50 MHz waveform.
Source: Art Pini
All sample points in the aliased waveform are real. If infinite persistence is enabled, all samples are accumulated on the persistence display, and the original unaliased waveform is eventually recovered. This technique won’t work for complex signals such as non-return-to-zero (NRZ) data or broadband signals.
Using aliased waveforms
Given that aliased signals are made up of real samples, an aliased signal can be used in measurements, as long as the signal’s frequency is not being measured. Consider measuring the output of a remote keyless entry transmitter. This device outputs a pulse-modulated RF signal with a carrier frequency of 433 MHz. This signal has a relatively narrow bandwidth about the carrier frequency. The information being transmitted is encoded in a 400 ms pulse pattern.
Two measurement scenarios are needed. The first is to characterize the RF signal: parameters like the carrier frequency, as well as the shape of the RF envelope, which affects the purity of the transmitted signal. The second is to decode the information content. Using an oscilloscope with a 20-megasample (MS) memory at a horizontal scale setting of 100 ms per division (a 1-second acquisition time), the sampling rate would be 20 MS/s. Figure 4 shows the measurement processes for both the RF and data-decoding measurements.

Figure 4 Measurements on a remote keyless entry transmitter use an aliased signal to decode digital data.
Source: Art Pini
The traces on the left side of the screen show the RF measurements. The signal is acquired at 20 GS/s, and its leading edge is captured. The oscilloscope measures the RF carrier frequency at 433.9 MHz. The envelope of the RF carrier is extracted by applying the absolute value function, followed by a low-pass filter, to create a peak detector. Trace F1 (bottom) shows the envelope. A copy of the envelope (Trace F3) is also overlaid on a horizontally expanded zoom view (Trace Z1) of the leading edge of the signal, where it can be used to measure the envelope’s rise time.
The right side of the display shows the data decoding process. The entire data packet is acquired on a 100-ms-per-division horizontal scale. The sampling rate is 20 MS/s. The RF carrier is aliased down to 6.13 MHz as measured in parameter P2. The aliased frequency of the carrier is the result of mixing the twenty-second harmonic of the sampling rate with the 433.9 MHz carrier. The same envelope detection technique is applied to the entire packet, rendering the data content as an NRZ signal. Aliasing has enabled the acquisition of the entire signal data packet.
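As a quick numerical check of these figures, the nearest sampling-rate harmonic to the carrier and the resulting alias can be computed directly (the small difference from the reported 6.13 MHz reflects the exact measured carrier frequency):

```python
# Verifying the article's numbers: at a 20 MS/s sample rate, which
# harmonic of fs is nearest the 433.9 MHz carrier, and what alias
# frequency results from mixing with it?

fs = 20e6
f_carrier = 433.9e6
n = round(f_carrier / fs)          # nearest harmonic index
f_alias = abs(f_carrier - n * fs)  # distance to that harmonic
print(n, f_alias / 1e6)            # 22nd harmonic, alias near 6.1 MHz
```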
Conclusion
Aliasing in digital instruments is a digitizer characteristic that is becoming a less frequent problem due to improved instrument designs, including anti-aliasing filters, oversampling, and very long acquisition memories. Users should still be aware of aliasing to prevent errors that cost time and money.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Sampling and aliasing
- Using oscilloscope filters for better measurements
- Combating noise and interference in oscilloscopes and digitizers
- Building a low-cost, precision digital oscilloscope—Part 1
- Building a low-cost, precision digital oscilloscope – Part 2
The post Aliasing, the bane of sampled data systems appeared first on EDN.
When 270 Ohm resistors in an LCD backlight are no longer 270 Ohm resistors
It is the third LCD panel in a month with the same issue: the backlight stopped working. There was one resistor still measuring 270 Ohm, so we know what it should be; all the others are open circuit or in the tens-of-MOhm range. No signs of corrosion or overheating anywhere, just crappy components; I have never seen this issue before. Is it planned obsolescence or a bad combination of materials in the resistor? Share your experience with similar cases.
Bluetooth LE throughput: Why real‑world performance falls short of specs

Many Bluetooth Low Energy (LE) applications depend on reliable, high‑throughput data transfer between connected devices. Typical use cases include over‑the‑air (OTA) firmware updates, sensor data streaming, and bulk data transport between embedded systems. Although the Bluetooth LE specification defines clear upper bounds on achievable data rate, measured throughput in real systems often falls well below these limits.
This discrepancy is not caused by a single factor. Instead, it arises from the interaction of connection‑event timing, controller scheduling behavior, protocol stack implementation, and radio‑frequency conditions.
While modern Bluetooth LE devices commonly support Data Length Extension (DLE), the 2-Mbps Physical Layer (PHY), and large Attribute Protocol (ATT) Maximum Transmission Unit (MTU) sizes, these features alone do not determine achievable throughput.
This article focuses on the practical constraints that shape Bluetooth LE Generic Attribute Profile (GATT) write throughput in deployed systems and explains why throughput behavior is frequently non‑linear and platform‑dependent.
Assumptions and test context
To isolate timing and scheduling effects from feature limitations, the analysis presented here assumes a contemporary Bluetooth LE configuration with the following capabilities:
- Support for DLE on both Central and Peripheral
- Use of the 2-Mbps PHY
- A negotiated ATT MTU of 251 bytes
- Transmit‑side buffering sufficient to queue multiple packets
- Use of GATT Write Without Response operations
- A receiver capable of sustaining the incoming data rate without application‑level back‑pressure
GATT Write Without Response is used to minimize protocol overhead and eliminate application‑layer acknowledgments that would otherwise consume airtime and delay buffer reuse. Although this write type omits an explicit GATT‑layer acknowledgment, delivery to the receiver’s Link Layer remains guaranteed by the Bluetooth LE protocol.
Under these assumptions, throughput might be expected to scale directly with the number of packets transmitted per connection interval. In practice, this assumption does not hold.
Theoretical throughput
With Data Length Extension enabled, a single Bluetooth LE Link Layer packet can carry up to 251 bytes of payload. After accounting for Logical Link Control and Adaptation Protocol (L2CAP) and Attribute Protocol (ATT) headers, 244 bytes remain available for application data.
Using the 2-Mbps PHY, the on‑air time for a maximum‑length data packet followed by its acknowledgment is approximately 1.4 ms. If a connection interval could be filled entirely with such packet exchanges, without additional Link Layer procedures or timing gaps, the resulting application‑layer throughput would be approximately 170 kB/s.
This value represents an upper bound that is rarely approached in practice.
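This upper bound is easy to reproduce as a back-of-envelope calculation using the 244-byte payload and the approximate 1.4 ms exchange time given above (the exact result, about 174 kB/s, is consistent with the rounded figure):

```python
# Back-of-envelope check of the theoretical BLE throughput limit:
# 244 application bytes per packet, ~1.4 ms on air per packet + ACK.

PAYLOAD_BYTES = 244        # 251-byte LL payload minus L2CAP/ATT headers
T_EXCHANGE_S = 1.4e-3      # packet + acknowledgment on-air time (approx.)

throughput_bps = PAYLOAD_BYTES / T_EXCHANGE_S   # bytes per second
print(round(throughput_bps / 1000), "kB/s")     # about 174 kB/s
```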
Connection events and packet scheduling
Bluetooth LE communication occurs within periodic connection events scheduled at intervals defined by the connection interval parameter. During each event, the Central and Peripheral exchange packets until one side terminates the event or the available time expires.
Most controllers support transmitting multiple packets within a single connection event, but the maximum number of packets allowed per event is not specified by the Bluetooth standard and is instead determined by the controller and stack implementation. As a result, packet scheduling behavior can vary significantly across platforms.
This difference is illustrated in Figure 1. In the left‑hand chart, a wireless MCU acting as the Central can pack 20 packets into a 30‑ms connection interval, using most of the available airtime before entering a short end‑of‑event dead time. In contrast, the right‑hand chart shows a smartphone operating as the Central, where the connection‑event length is capped at five packets, even though additional airtime remains available within the same interval.

Figure 1 Packet scheduling within a Bluetooth LE connection interval varies by platform. A wireless MCU Central fills most of a 30‑ms interval with data packets, while a smartphone Central limits the number of packets per connection event, leaving unused airtime. Source: Microchip
Such limits are particularly common on mobile platforms, where power management and radio coexistence requirements constrain connection‑event length. When the number of packets per event is capped, increasing the connection interval does not necessarily increase throughput, because the additional airtime cannot be used for data transmission.
Residual time and end‑of‑event dead time
Two timing effects significantly reduce usable airtime within each connection interval:
- Residual time, which occurs when the remaining interval is too short to accommodate another full packet exchange.
- End‑of‑event dead time, during which the controller prepares for the next scheduled event and does not permit further transmissions.
The impact of these effects is illustrated in Figure 2. The figure shows that a maximum‑length data packet followed by its acknowledgment occupies approximately 1.4 ms of on‑air time. When the remaining portion of a connection interval is shorter than this duration, the controller cannot schedule another packet exchange, even though some airtime remains available.

Figure 2 Residual airtime and end‑of‑event dead time limit packet scheduling at short connection intervals. A maximum‑size packet and its acknowledgment require approximately 1.4 ms, preventing additional transmissions when insufficient time remains. Source: Microchip
The duration of end‑of‑event dead time varies widely between controller implementations and is not explicitly defined by the Bluetooth specification. In many systems, this behavior can only be identified and quantified through direct measurement.
At short connection intervals, residual and dead time consume a relatively large fraction of each interval, limiting the number of packets that can be transmitted. At longer intervals, this overhead can be amortized across additional packets, improving average throughput if packet scheduling is not otherwise constrained.
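These timing effects can be captured in a toy model. The 1.4-ms exchange time and 244-byte payload come from earlier in the article; the 1-ms dead time and the 5-packet cap are illustrative assumptions, not measured values for any particular controller.

```python
# Toy model of BLE throughput vs. connection interval, showing why the
# relationship is non-linear and why a per-event packet cap (as on some
# smartphone Centrals) wastes airtime. Dead time and cap are assumed.

def throughput_kbps(ci_ms, dead_ms=1.0, t_pkt_ms=1.4, payload=244, cap=None):
    """Approximate application throughput in kB/s for one interval."""
    pkts = int((ci_ms - dead_ms) // t_pkt_ms)   # exchanges that fit
    if cap is not None:
        pkts = min(pkts, cap)                   # per-event packet limit
    return max(pkts, 0) * payload / ci_ms       # bytes/ms == kB/s

for ci in (7.5, 15, 30):
    print(ci, round(throughput_kbps(ci)), round(throughput_kbps(ci, cap=5)))
# Uncapped, longer intervals amortize the dead time; with a 5-packet
# cap, throughput *falls* as the interval grows, since extra airtime
# goes unused.
```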
Non‑linear throughput behavior
Because residual and end‑of‑event dead time depend on internal scheduling thresholds, Bluetooth LE throughput as a function of connection interval is often non‑linear. Small changes in the connection interval can result in unexpected increases or decreases in throughput, depending on how the interval aligns with controller‑specific timing constraints.
These effects are illustrated in Figure 3, which compares measured throughput across a range of connection intervals under different environmental and platform conditions. In the left‑hand graph, an off‑the‑shelf wireless system‑on‑chip (SoC) is evaluated as both Central and Peripheral. Measurements taken in a shielded environment (orange) show consistently higher throughput than those collected in an open office (blue), indicating the impact of ambient interference on achievable performance.

Figure 3 Measured throughput versus connection interval illustrates non‑linear behavior and environmental sensitivity. Results from both a wireless SoC platform and a Zephyr GATT throughput test show higher throughput in low‑interference conditions and increased variability at longer intervals. Source: Microchip
The right‑hand graph, derived from a Zephyr GATT throughput test, reinforces this behavior while also highlighting the non‑linear relationship between connection interval and throughput. As the interval increases, throughput does not scale monotonically; instead, it exhibits discontinuities and increased variance, particularly at longer intervals where residual and dead time are amortized over more packets.
These results emphasize that throughput cannot be predicted solely from the Bluetooth LE specification. Instead, it’s strongly influenced by platform‑specific scheduling behavior and the prevailing radio‑frequency environment.
Impact of interference
Longer connection intervals typically improve throughput in clean radio‑frequency environments by amortizing residual airtime across additional packets. However, they also increase sensitivity to interference. During long connection events, many packets may be transmitted back‑to‑back; if packet loss or repeated cyclic redundancy check errors occur early in the event, some controllers terminate the event prematurely.
When this occurs, a substantial portion of the connection interval may remain unused, resulting in a sharp reduction in throughput. Shorter connection intervals limit the amount of airtime lost when errors occur and often produce more consistent throughput in noisy environments, albeit with a lower theoretical maximum.
While parameters such as PHY speed, MTU size, DLE, and GATT characteristic length are largely fixed in modern Bluetooth LE systems, connection‑event timing and controller behavior ultimately determine achievable throughput.
The connection interval remains the primary tuning parameter, but its effect is non‑linear and highly dependent on implementation details. For systems that limit packet count per connection event, selecting an interval that closely matches the allowed packet budget is critical. When longer events are supported, throughput gains must be weighed against increased sensitivity to interference.
For design engineers, optimizing Bluetooth LE throughput requires empirical evaluation and platform‑specific characterization rather than reliance on specification‑level performance limits. At a practical level, this places increased importance on controller implementations and protocol stacks that offer fine‑grained configurability on both the Central and Peripheral sides, enabling precise control over connection parameters, event length, and buffering behavior.
Wireless MCU platforms, such as Microchip’s PIC32‑BZ6 multiprotocol wireless MCU family, are representative of designs that emphasize this level of stack configurability and visibility. By allowing engineers to tune behavior symmetrically on both ends of the link and observe the resulting timing effects, such platforms can simplify the process of analyzing throughput bottlenecks and optimizing data transfer performance under real‑world operating conditions.
The ability to measure connection‑event timing, packet scheduling, and error behavior at the controller and stack levels enables more repeatable, data‑driven throughput characterization during development.
Patrick Fitzpatrick is senior technical staff engineer for software at Microchip’s Wireless Business Unit.
Related Content
- Bluetooth low energy (BLE) explained
- The basics of Bluetooth Low Energy (BLE)
- Bluetooth 5 variations complicate PHY testing
- Why Industrial Operations are Turning to Bluetooth Technology
- Secure Bluetooth LE adoption on rise in automotive applications
The post Bluetooth LE throughput: Why real‑world performance falls short of specs appeared first on EDN.
In Memory of Yevhenii Demianenko
We have learned of the death of Yevhenii Demianenko (07.08.1990–22.04.2023), a 2012 graduate of the Institute of Special Communication and Information Protection of NTUU “KPI”.
Tea & Talk at the Center for International Education
🌐 Igor Sikorsky Kyiv Polytechnic Institute is consistently building a supportive environment for international students and their comfortable integration into the university community.



