Feed aggregator
Infineon’s rad-hard devices used aboard NASA’s Artemis II Orion capsule
Nuvoton releases 4.5W 402nm violet laser, boosting power output by 1.5x
40 Years Since the Chornobyl Disaster: Present-Day Realities and Future Challenges
☑️ Kyiv Polytechnic took part in hearings held by the Verkhovna Rada of Ukraine Committee on Social Policy and Protection of Veterans' Rights on the topic "40 Years Since the Chornobyl Disaster: Present-Day Realities and Future Challenges".
Teradyne snaps up TestInsight to boost ATE for semiconductors

Automated test equipment (ATE) supplier Teradyne is bolstering its test solutions for semiconductor design by acquiring TestInsight, a provider of test program creation, pattern conversion, and pre-silicon validation tools used across ATE platforms and semiconductor design environments.
By acquiring a supplier of semiconductor test development, validation, and conversion software, Teradyne aims to scale its next generation of pre-silicon validation and automated pattern generation technologies. That strengthens Teradyne’s ability to support semiconductor design-in activities to accelerate time-to-market in the emerging AI and data center markets.

Here is how pattern conversion across multiple cores and CPUs accelerates the test program. Source: TestInsight
Greg Smith, president and CEO of Teradyne, calls TestInsight’s tools foundational to modern test program development. “By integrating the TestInsight team into Teradyne, we enhance our ability to help customers achieve silicon readiness faster and with greater confidence.”
The acquisition will allow Teradyne to combine its ATE platforms with TestInsight’s tightly integrated design-to-test workflow, thereby reducing debug cycles, improving coverage, and enabling earlier test program readiness. In short, the acquisition of a design-to-test software firm will help Teradyne close the gap between design and test in semiconductor design environments.
TestInsight announced that it will continue to support its existing customers across all ATE platforms.
Related Content
- Low-Budget Automatic Test Equipment
- How to power automated test equipment
- Automated Test Equipment for 3D Magnetic Sensors
- Optimizing Automated Test Equipment for Quality and Complexity
- Flexible Test Strategies Keeping Pace with Semiconductor Evolution
The post Teradyne snaps up TestInsight to boost ATE for semiconductors appeared first on EDN.
👍 Join us for the webinar "Creative Commons Licenses: A Path to Open Science for Ukrainian Authors and Publishers"
The KPI Library invites researchers of Igor Sikorsky Kyiv Polytechnic Institute and everyone interested to join the international online event "Creative Commons Licenses: A Path to Open Science for Ukrainian Authors and Publishers", organized jointly with Creative Commons specialists.
💢 Online lecture "OpenAlex – the largest open database of scholarly works"
The KPI Library invites researchers of Igor Sikorsky Kyiv Polytechnic Institute and everyone interested to join the online lecture "OpenAlex – the largest open database of scholarly works".
Aliasing, the bane of sampled data systems

Aliasing is thankfully becoming a less frequent problem due to improved instrument designs. Users should still be aware of it to prevent errors that cost time and money.
Aliasing is an ever-present potential problem in sampled data acquisition systems. It occurs when input signals are sampled at a sample rate that is too low. If you haven’t been bamboozled by an aliased signal, you are extremely lucky.
Sampled data instruments, such as digitizers and digital oscilloscopes, must sample their input signals at a rate greater than twice the highest frequency component present in the input signal. If this criterion is not met, then aliasing can occur. Figure 1 shows an example of aliasing.

Figure 1 In this example of aliasing, a 50 MHz sine wave was acquired at sampling rates of 1 gigasample per second (GS/s) and 55 megasamples per second (MS/s). The 55 MS/s acquisition is aliased and displayed as a 5 MHz waveform.
Source: Art Pini
A 50 MHz sine wave was acquired at both 1 GS/s and 55 MS/s. The waveform acquired at 1 GS/s has the correct frequency of 50 MHz, as shown in the frequency parameter P1. The waveform acquired at 55 MS/s is aliased and has a frequency of 5 MHz, as reported in parameter readout P2. The aliased waveform appears to have a different frequency than the correctly sampled waveform, which can be a significant and costly problem if not addressed carefully.
Let’s look into aliasing and learn how to deal with it. Sampling is a mixing process. When you apply an input signal to a sampler, the output of the sampler contains the original waveform, the sampling waveform, and their sum and difference frequencies, including those about the harmonics of the sampling signal. This is illustrated in Figure 2.

Figure 2 Sampling is a mixing or multiplicative process. The baseband frequency spectrum of the acquired signal appears as the upper and lower sidebands about the sampling frequency and all its harmonics.
Source: Art Pini
A correctly sampled waveform will have more than two samples per cycle at the bandwidth limit. In the sampler output, the baseband frequency spectrum of the input signal will appear as upper and lower sidebands about the sampling frequency and its harmonics. The right-hand graphs show the output spectrum of the sampler for the correct sampling rate (upper) and the undersampled case (lower). As the sampling frequency is decreased below twice the input signal bandwidth, the lower sideband of the sampling frequency interferes with the baseband signal, resulting in aliasing.
In the time-domain view (left-hand graphs), the aliased signal lacks sufficient time resolution to track the input waveform. Returning to the example in Figure 1, the 50 MHz input sampled at 55 MS/s will result in sum and difference image frequencies that are above and below the 55 MS/s sampling frequency. The lower sideband image falls into the baseband region of the spectrum and is the source of the 5 MHz alias signal.
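The folding arithmetic described above can be sketched in a few lines of Python. This is an illustrative helper, not tied to any particular instrument; it reproduces the Figure 1 numbers:

```python
def alias_frequency(f_in, f_s):
    """Fold an input frequency into the first Nyquist zone [0, f_s/2].

    Sampling mixes f_in with every harmonic k*f_s, so the image that
    lands in the baseband is |f_in - k*f_s| for the nearest harmonic k.
    """
    f = f_in % f_s              # remove whole multiples of the sample rate
    return min(f, f_s - f)      # fold about the Nyquist frequency f_s/2

# Figure 1 example: a 50 MHz sine sampled at 55 MS/s aliases to 5 MHz,
# while sampling at 1 GS/s reports the true 50 MHz.
print(alias_frequency(50e6, 55e6) / 1e6)   # 5.0
print(alias_frequency(50e6, 1e9) / 1e6)    # 50.0
```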
Current digital instrument designs generally use sampling rates much higher than the instrument’s analog bandwidth. Some instruments may use sharp-cutoff anti-aliasing low-pass filters to limit the input bandwidth and control the instrument’s frequency response. These techniques, combined with long acquisition memories, also minimize this classic problem. Still, users should be aware of aliasing.
Recognizing aliasing
It is good practice to determine the frequency of the measured signal and verify that it has not been aliased. If the characteristics of the input signal are unknown, start by viewing the signal at the highest available sample rate, then decrease the sampling rate as needed. If aliasing occurs, you will see the signal’s frequency change as you select a lower sampling rate.
Another hint that a signal is an alias is that it will appear to have an unstable trigger and will jump erratically in time. This occurs because the instrument is triggered by the signal, and the alias, with fewer samples, may not display the trigger point. The instrument displays the nearest sample, which varies from one acquisition to the next, causing instability.
Aliasing can also be recognized by observing the effect on the input signal’s frequency-domain spectrum as the signal’s frequency is varied. A spectral component whose frequency decreases when the input signal’s frequency is increased (a reversal of direction) is an alias. As the frequency of a sine wave increases, the spectral line corresponding to that sine wave moves to the right until it reaches the Nyquist frequency of one-half the sample rate.
As the frequency increases above Nyquist, an aliased image from the lower sideband about the sampling frequency will fold back into the baseband spectrum, moving downward in frequency. The lower-sideband images for each harmonic of the sampling frequency show this reversal. Upper sideband images will move in the correct direction. This phenomenon is called spectral folding.
A helpful technique to view an aliased signal
If the signal is a relatively simple periodic waveform, such as the example sine wave, then enabling infinite display persistence will show the underlying waveform, as shown in Figure 3.

Figure 3 The aliased signal (upper trace) and the same signal displayed with infinite persistence turned on (lower trace). The persistence display accumulates all the sample values showing the original 50 MHz waveform.
Source: Art Pini
All sample points in the aliased waveform are real. If infinite persistence is enabled, all samples are accumulated on the persistence display, and the original unaliased waveform is eventually recovered. This technique won’t work for complex signals such as non-return-to-zero (NRZ) data or broadband signals.
Using aliased waveforms
Given that aliased signals are made up of real samples, an aliased signal can be used in measurements, as long as the signal’s frequency is not being measured. Consider measuring the output of a remote keyless entry transmitter. This device outputs a pulse-modulated RF signal with a carrier frequency of 433 MHz. This signal has a relatively narrow bandwidth about the carrier frequency. The information being transmitted is encoded in a 400 ms pulse pattern.
Two measurement scenarios are needed. The first characterizes the RF signal: parameters such as the carrier frequency, along with the shape of the RF envelope, which affects the purity of the transmitted signal. The second decodes the information content. Using an oscilloscope with a 20-megasample (MS) memory at a horizontal scale setting of 100 ms per division (1 second acquisition time), the sampling rate would be 20 MS/s. Figure 4 shows both the RF and the data-decoding measurements.

Figure 4 Measurements on a remote keyless entry transmitter use an aliased signal to decode digital data.
Source: Art Pini
The traces on the left side of the screen show the RF measurements. The signal is acquired at 20 GS/s, and its leading edge is captured. The oscilloscope measures the RF carrier frequency at 433.9 MHz. The envelope of the RF carrier is extracted by applying the absolute value function, followed by a low-pass filter, to create a peak detector. Trace F1 (bottom) shows the envelope. A copy of the envelope (Trace F3) is also overlaid on a horizontally expanded zoom view (Trace Z1) of the leading edge of the signal; this overlay can be used to measure the envelope’s rise time.
The right side of the display shows the data decoding process. The entire data packet is acquired on a 100-ms-per-division horizontal scale. The sampling rate is 20 MS/s. The RF carrier is aliased down to 6.13 MHz as measured in parameter P2. The aliased frequency of the carrier is the result of mixing the twenty-second harmonic of the sampling rate with the 433.9 MHz carrier. The same envelope detection technique is applied to the entire packet, rendering the data content as an NRZ signal. Aliasing has enabled the acquisition of the entire signal data packet.
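The 6.13 MHz readout can be sanity-checked with quick arithmetic (a sketch using the nominal values quoted above):

```python
f_carrier = 433.9e6                 # keyless-entry RF carrier
f_s = 20e6                          # sampling rate in the decode acquisition
k = round(f_carrier / f_s)          # nearest sampling-rate harmonic
f_alias = abs(f_carrier - k * f_s)  # difference frequency folded into baseband
print(k, f_alias / 1e6)             # 22 6.1
```

The roughly 6.1 MHz result from the 22nd harmonic matches the measured 6.13 MHz to within the transmitter’s carrier tolerance.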
Conclusion
Aliasing in digital instruments is a digitizer characteristic that is becoming a less frequent problem due to improved instrument designs, including anti-aliasing filters, oversampling, and very long acquisition memories. Users should still be aware of aliasing to prevent errors that cost time and money.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Sampling and aliasing
- Using oscilloscope filters for better measurements
- Combating noise and interference in oscilloscopes and digitizers
- Building a low-cost, precision digital oscilloscope—Part 1
- Building a low-cost, precision digital oscilloscope – Part 2
The post Aliasing, the bane of sampled data systems appeared first on EDN.
When a 270 Ω resistor in an LCD backlight is no longer a 270 Ω resistor
It is the third LCD panel in a month with the same issue: the backlight stopped working. One resistor still measured 270 Ω, so we know what the value should be; all the others are open circuit or in the xx MΩ range. There are no signs of corrosion or overheating anywhere, just crappy components; I have never seen this issue before. Is it planned obsolescence or a bad combination of materials in the resistor? Share your experience with similar cases.
Bluetooth LE throughput: Why real‑world performance falls short of specs

Many Bluetooth Low Energy (LE) applications depend on reliable, high‑throughput data transfer between connected devices. Typical use cases include over‑the‑air (OTA) firmware updates, sensor data streaming, and bulk data transport between embedded systems. Although the Bluetooth LE specification defines clear upper bounds on achievable data rate, measured throughput in real systems often falls well below these limits.
This discrepancy is not caused by a single factor. Instead, it arises from the interaction of connection‑event timing, controller scheduling behavior, protocol stack implementation, and radio‑frequency conditions.
While modern Bluetooth LE devices commonly support Data Length Extension (DLE), the 2-Mbps Physical Layer (PHY), and large Attribute Protocol (ATT) Maximum Transmission Unit (MTU) sizes, these features alone do not determine achievable throughput.
This article focuses on the practical constraints that shape Bluetooth LE Generic Attribute Profile (GATT) write throughput in deployed systems and explains why throughput behavior is frequently non‑linear and platform‑dependent.
Assumptions and test context
To isolate timing and scheduling effects from feature limitations, the analysis presented here assumes a contemporary Bluetooth LE configuration with the following capabilities:
- Support for DLE on both Central and Peripheral
- Use of the 2-Mbps PHY
- A negotiated ATT MTU of 251 bytes
- Transmit‑side buffering sufficient to queue multiple packets
- Use of GATT Write Without Response operations
- A receiver capable of sustaining the incoming data rate without application‑level back‑pressure
GATT Write Without Response is used to minimize protocol overhead and eliminate application‑layer acknowledgments that would otherwise consume airtime and delay buffer reuse. Although this write type omits an explicit GATT‑layer acknowledgment, delivery to the receiver’s Link Layer remains guaranteed by the Bluetooth LE protocol.
Under these assumptions, throughput might be expected to scale directly with the number of packets transmitted per connection interval. In practice, this assumption does not hold.
Theoretical throughput
With Data Length Extension enabled, a single Bluetooth LE Link Layer packet can carry up to 251 bytes of payload. After accounting for Logical Link Control and Adaptation Protocol (L2CAP) and Attribute Protocol (ATT) headers, 244 bytes remain available for application data.
Using the 2-Mbps PHY, the on‑air time for a maximum‑length data packet followed by its acknowledgment is approximately 1.4 ms. If a connection interval could be filled entirely with such packet exchanges, without additional Link Layer procedures or timing gaps, the resulting application‑layer throughput would be approximately 170 KBps.
This value represents an upper bound that is rarely approached in practice.
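That upper bound follows directly from the two numbers above (a back-of-the-envelope estimate, not a measurement):

```python
payload_bytes = 244      # 251-byte Link Layer payload minus L2CAP/ATT headers
exchange_s = 1.4e-3      # max-length packet plus acknowledgment on the 2M PHY
throughput_Bps = payload_bytes / exchange_s
print(round(throughput_Bps / 1e3))   # ≈ 174 kB/s, the ceiling cited above
```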
Connection events and packet scheduling
Bluetooth LE communication occurs within periodic connection events scheduled at intervals defined by the connection interval parameter. During each event, the Central and Peripheral exchange packets until one side terminates the event or the available time expires.
Most controllers support transmitting multiple packets within a single connection event, but the maximum number of packets allowed per event is not specified by the Bluetooth standard and is instead determined by the controller and stack implementation. As a result, packet scheduling behavior can vary significantly across platforms.
This difference is illustrated in Figure 1. In the left‑hand chart, a wireless MCU acting as the Central can pack 20 packets into a 30‑ms connection interval, using most of the available airtime before entering a short end‑of‑event dead time. In contrast, the right‑hand chart shows a smartphone operating as the Central, where the connection‑event length is capped at five packets, even though additional airtime remains available within the same interval.

Figure 1 Packet scheduling within a Bluetooth LE connection interval varies by platform. A wireless MCU Central fills most of a 30‑ms interval with data packets, while a smartphone Central limits the number of packets per connection event, leaving unused airtime. Source: Microchip
Such limits are particularly common on mobile platforms, where power management and radio coexistence requirements constrain connection‑event length. When the number of packets per event is capped, increasing the connection interval does not necessarily increase throughput, because the additional airtime cannot be used for data transmission.
Residual time and end‑of‑event dead time
Two timing effects significantly reduce usable airtime within each connection interval:
- Residual time, which occurs when the remaining interval is too short to accommodate another full packet exchange.
- End‑of‑event dead time, during which the controller prepares for the next scheduled event and does not permit further transmissions.
The impact of these effects is illustrated in Figure 2. The figure shows that a maximum‑length data packet followed by its acknowledgment occupies approximately 1.4 ms of on‑air time. When the remaining portion of a connection interval is shorter than this duration, the controller cannot schedule another packet exchange, even though some airtime remains available.

Figure 2 Residual airtime and end‑of‑event dead time limit packet scheduling at short connection intervals. A maximum‑size packet and its acknowledgment require approximately 1.4 ms, preventing additional transmissions when insufficient time remains. Source: Microchip
The duration of end‑of‑event dead time varies widely between controller implementations and is not explicitly defined by the Bluetooth specification. In many systems, this behavior can only be identified and quantified through direct measurement.
At short connection intervals, residual and dead time consume a relatively large fraction of each interval, limiting the number of packets that can be transmitted. At longer intervals, this overhead can be amortized across additional packets, improving average throughput if packet scheduling is not otherwise constrained.
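These two effects can be captured in a toy scheduling model. The 1.4 ms exchange time is taken from the discussion above; the 0.5 ms end-of-event dead time and the optional per-event packet cap are assumed, implementation-specific parameters:

```python
def packets_per_interval(interval_ms, exchange_ms=1.4, dead_ms=0.5, cap=None):
    """Count packet exchanges that fit in one connection interval.

    dead_ms models end-of-event dead time (an assumed value; it is not
    defined by the spec), and cap models platforms that limit the
    number of packets per connection event.
    """
    usable_ms = interval_ms - dead_ms
    n = int(usable_ms // exchange_ms)   # full exchanges; the rest is residual
    return min(n, cap) if cap is not None else n

def throughput_kBps(interval_ms, payload_bytes=244, **kwargs):
    n = packets_per_interval(interval_ms, **kwargs)
    return n * payload_bytes / interval_ms   # bytes per ms == kB/s

# Throughput steps non-linearly as the interval crosses scheduling thresholds,
# and a smartphone-style cap of 5 packets/event wastes the extra airtime.
for iv in (7.5, 10.0, 15.0, 30.0):
    print(iv, round(throughput_kBps(iv), 1), round(throughput_kBps(iv, cap=5), 1))
```

The model is deliberately simple; real controllers add retransmissions and other Link Layer procedures, but it reproduces the stepwise, non-monotonic shape seen in Figure 3.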
Non‑linear throughput behavior
Because residual and end‑of‑event dead time depend on internal scheduling thresholds, Bluetooth LE throughput as a function of connection interval is often non‑linear. Small changes in the connection interval can result in unexpected increases or decreases in throughput, depending on how the interval aligns with controller‑specific timing constraints.
These effects are illustrated in Figure 3, which compares measured throughput across a range of connection intervals under different environmental and platform conditions. In the left‑hand graph, an off‑the‑shelf wireless system‑on‑chip (SoC) is evaluated as both Central and Peripheral. Measurements taken in a shielded environment (orange) show consistently higher throughput than those collected in an open office (blue), indicating the impact of ambient interference on achievable performance.

Figure 3 Measured throughput versus connection interval illustrates non‑linear behavior and environmental sensitivity. Results from both a wireless SoC platform and a Zephyr GATT throughput test show higher throughput in low‑interference conditions and increased variability at longer intervals. Source: Microchip
The right‑hand graph, derived from a Zephyr GATT throughput test, reinforces this behavior while also highlighting the non‑linear relationship between connection interval and throughput. As the interval increases, throughput does not scale monotonically; instead, it exhibits discontinuities and increased variance, particularly at longer intervals where residual and dead time are amortized over more packets.
These results emphasize that throughput cannot be predicted solely from the Bluetooth LE specification. Instead, it’s strongly influenced by platform‑specific scheduling behavior and the prevailing radio‑frequency environment.
Impact of interference
Longer connection intervals typically improve throughput in clean radio‑frequency environments by amortizing residual airtime across additional packets. However, they also increase sensitivity to interference. During long connection events, many packets may be transmitted back‑to‑back; if packet loss or repeated cyclic redundancy check errors occur early in the event, some controllers terminate the event prematurely.
When this occurs, a substantial portion of the connection interval may remain unused, resulting in a sharp reduction in throughput. Shorter connection intervals limit the amount of airtime lost when errors occur and often produce more consistent throughput in noisy environments, albeit with a lower theoretical maximum.
While parameters such as PHY speed, MTU size, DLE, and GATT characteristic length are largely fixed in modern Bluetooth LE systems, connection‑event timing and controller behavior ultimately determine achievable throughput.
The connection interval remains the primary tuning parameter, but its effect is non‑linear and highly dependent on implementation details. For systems that limit packet count per connection event, selecting an interval that closely matches the allowed packet budget is critical. When longer events are supported, throughput gains must be weighed against increased sensitivity to interference.
For design engineers, optimizing Bluetooth LE throughput requires empirical evaluation and platform‑specific characterization rather than reliance on specification‑level performance limits. At a practical level, this places increased importance on controller implementations and protocol stacks that offer fine‑grained configurability on both the Central and Peripheral sides, enabling precise control over connection parameters, event length, and buffering behavior.
Wireless MCU platforms, such as Microchip’s PIC32‑BZ6 multiprotocol wireless MCU family, are representative of designs that emphasize this level of stack configurability and visibility. By allowing engineers to tune behavior symmetrically on both ends of the link and observe the resulting timing effects, such platforms can simplify the process of analyzing throughput bottlenecks and optimizing data transfer performance under real‑world operating conditions.
The ability to measure connection‑event timing, packet scheduling, and error behavior at the controller and stack levels enables more repeatable, data‑driven throughput characterization during development.
Patrick Fitzpatrick is senior technical staff engineer for software at Microchip’s Wireless Business Unit.
Related Content
- Bluetooth low energy (BLE) explained
- The basics of Bluetooth Low Energy (BLE)
- Bluetooth 5 variations complicate PHY testing
- Why Industrial Operations are Turning to Bluetooth Technology
- Secure Bluetooth LE adoption on rise in automotive applications
The post Bluetooth LE throughput: Why real‑world performance falls short of specs appeared first on EDN.
In Memory of Yevhenii Demianenko
It has become known that Yevhenii Demianenko (07.08.1990 – 22.04.2023) has been killed. He was a 2012 graduate of the Institute of Special Communications and Information Protection of NTUU "KPI".
Tea & Talk at the Center for International Education
🌐 Igor Sikorsky Kyiv Polytechnic Institute is steadily building a supportive environment for international students and their comfortable integration into the university community.
SemiLEDs’ quarterly revenue more than halves
PlayStation 4 charge board
It's a JDM-030 USB charge board, but which one of these is the fuse (that I should replace) after a short circuit? And is it worth replacing?
US Critical Materials and Columbia University to advance domestic recovery of defense-critical metals from red mud
Intellect: faculty and researchers
Igor Sikorsky Kyiv Polytechnic Institute is preparing to launch a new version of the Intellect platform, an open space for academic achievements that brings together, in a single registry, key information about the research activity and professional accomplishments of the university's academic staff.
The new release will be easier to use, clearer to navigate, and more deeply integrated with the university's internal systems. The update organizes the data, streamlines access, and simplifies interaction with academic profiles.
Built-in Garage door opener
We used to have a wired doorbell button near the back door inside the house as a garage door opener. The wiring got damaged in an area where new wiring couldn't be rerouted during renovation. Came up with this solution: took a garage door opener apart, connected wires to a Decora-style momentary switch, and soldered the other end to the pads for the buttons on the PCB. Added a whip antenna to overcome the shielding of the electrical box and drywall. To keep the 9-inch whip antenna, drilled a small hole in the electrical box and fed it into the wall. Works perfectly.
The system architect’s sketchbook: The coherency wall


Deepak Shankar, founder of Mirabilis Design and developer of VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.
The post The system architect’s sketchbook: The coherency wall appeared first on EDN.
Simulation tool tests assembly processes upfront

With Keysight Assembly simulation software, automotive manufacturers can virtually test shop-floor processes to identify issues early in development. Late-stage assembly failures drive delays, rework, and recalls. By delivering early insight and integrating with existing ecosystems, the software improves body-in-white assembly workflows.

Developed with automotive OEM partners, Keysight Assembly enables engineers to replicate processes such as part positioning, clamping, and spot welding through guided workflows and templates—without requiring finite element modeling (FEM) expertise. It also provides early visibility into distortion and dimensional risks, shortening production timelines and improving build accuracy.
Keysight Assembly integrates with Keysight’s stamping simulation software, allowing engineers to carry stamped-part data across the process—from forming through assembly—and validate outcomes against pre-production scan data. It also integrates with CAD, product lifecycle management (PLM), Excel, and existing digital workflows without disrupting established practices.
Learn more about Keysight Assembly and related webinars here.
The post Simulation tool tests assembly processes upfront appeared first on EDN.
SMT DIP switches fit space-constrained PCBs

Littelfuse’s TDB series of miniature DIP switches uses a 1.27-mm half-pitch, surface-mount design to reduce PCB footprint. The compact devices support high-density layouts where space, reliability, and manufacturability matter.

The switches are available in 2-, 4-, 6-, 8-, and 10-position SPST configurations, with body lengths ranging from 3.67 mm to 13.83 mm depending on position count. Contact ratings of up to 50 VDC, 100 mA (steady state) and 24 VDC, 25 mA (switching) make them suitable for low-power industrial control, as well as consumer IoT and smart home devices. Contact resistance is 100 mΩ maximum, and insulation resistance is 100 MΩ minimum at 100 VDC.
Compatible with automated SMT assembly, the switches feature gold-plated bifurcated contacts and top tape sealing for post-reflow washable processing. They offer a mechanical and electrical life of 1000 cycles and operate over a temperature range of –40°C to +85°C.
The TDB series switches are available in tube or tape-and-reel packaging for high-volume production. Samples can be obtained through authorized Littelfuse distributors.
The post SMT DIP switches fit space-constrained PCBs appeared first on EDN.
Visit by a delegation from Nihon Cyber Defence Co., Ltd. (NCD)
Kyiv Polytechnic is expanding its cooperation in the field of cybersecurity: the university was visited by a delegation from the international company Nihon Cyber Defence Co., Ltd. (NCD), headed by CEO Cartan Joseph McLaughlin. NCD is headquartered in Tokyo.



