Microelectronics world news

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 04/18/2026 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Infineon’s rad-hard devices used aboard NASA’s Artemis II Orion capsule

Semiconductor today - Fri, 04/17/2026 - 19:28
Infineon Technologies AG of Munich, Germany says that radiation-hardened (rad-hard) devices from its IR HiRel (high-reliability) division supported the electronic backbone — from critical power supply and control systems to data communications — at the heart of the Orion capsule of NASA’s Artemis II mission, which recently returned from its 10-day flight around the Moon (reaching the furthest distance from Earth ever achieved by crewed spaceflight)...

Nuvoton releases 4.5W 402nm violet laser, boosting power output by 1.5x

Semiconductor today - Fri, 04/17/2026 - 16:28
Nuvoton Technology of Kyoto, Japan has announced the start of mass production of the KLC434FL01WW high-power violet laser diode (402nm, 4.5W), which achieves what is claimed to be industry-leading optical output in a 9.0mm-diameter TO-9 CAN package, for continuous-wave (CW) operation at a case temperature (Tc) of 25°C. Due to the proprietary device structure and heat-dissipation design technology, the new product achieves 1.5 times the 3.0W optical output of the firm’s conventional 402nm product in a TO-9 CAN package (the KLC432FL01WW), contributing to improved production throughput in optical equipment such as maskless lithography systems. Furthermore, adding this product to the firm’s lineup enables the product portfolio to support major photosensitive materials used in advanced semiconductor packaging...

Teradyne snaps up TestInsight to boost ATE for semiconductors

EDN Network - Fri, 04/17/2026 - 15:59

Automated test equipment (ATE) supplier Teradyne is bolstering its test solutions for semiconductor design by acquiring TestInsight, a provider of test program creation, pattern conversion, and pre-silicon validation tools used across ATE platforms and semiconductor design environments.

By acquiring a supplier of semiconductor test development, validation, and conversion software, Teradyne aims to scale its next generation of pre-silicon validation and automated pattern generation technologies. That strengthens Teradyne’s ability to support semiconductor design-in activities to accelerate time-to-market in the emerging AI and data center markets.

Here is how pattern conversion across multiple cores and CPUs accelerates the test program. Source: TestInsight

Greg Smith, president and CEO of Teradyne, calls TestInsight’s tools foundational to modern test program development. “By integrating the TestInsight team into Teradyne, we enhance our ability to help customers achieve silicon readiness faster and with greater confidence.”

The acquisition will allow Teradyne to combine its ATE platforms with TestInsight’s tightly integrated design-to-test workflow, thereby reducing debug cycles, improving coverage, and enabling earlier test program readiness. In short, the acquisition of a design-to-test software firm will help Teradyne close the gap between design and test in semiconductor design environments.

TestInsight announced that it will continue to support its existing customers across all ATE platforms.

Related Content

The post Teradyne snaps up TestInsight to boost ATE for semiconductors appeared first on EDN.

Aliasing, the bane of sampled data systems

EDN Network - Fri, 04/17/2026 - 15:00

Aliasing is thankfully becoming a less frequent problem due to improved instrument designs, but users should still be aware of it to prevent errors that cost time and money.

Aliasing is an ever-present potential problem in sampled data acquisition systems. It occurs when input signals are sampled at a sample rate that is too low. If you haven’t been bamboozled by an aliased signal, you are extremely lucky.

Sampled data instruments, such as digitizers and digital oscilloscopes, must sample their input signals at a rate greater than twice the highest frequency component present in the input signal. If this criterion is not met, then aliasing can occur. Figure 1 shows an example of aliasing.


Figure 1 In this example of aliasing, a 50 MHz sine wave was acquired at sampling rates of 1 gigasample per second (GS/s) and 55 megasamples per second (MS/s). The 55 MS/s acquisition is aliased and displayed as a 5 MHz waveform.
Source: Art Pini

A 50 MHz sine wave was acquired at both 1 GS/s and 55 MS/s. The waveform acquired at 1 GS/s has the correct frequency of 50 MHz, as shown in the frequency parameter P1. The waveform acquired at 55 MS/s is aliased and has a frequency of 5 MHz, as reported in parameter readout P2. An aliased waveform appears to have a different frequency than the correctly sampled waveform, which can be a costly problem if not addressed carefully.

Let’s look into aliasing and learn how to deal with it. Sampling is a mixing process. When you apply an input signal to a sampler, the resulting output from the sampler contains the original waveforms, the sampling waveform, and the sum and difference frequencies, including the harmonics of the sampling signal. This is illustrated in Figure 2.


Figure 2 Sampling is a mixing or multiplicative process. The baseband frequency spectrum of the acquired signal appears as the upper and lower sidebands about the sampling frequency and all its harmonics.
Source: Art Pini

A correctly sampled waveform will have more than two samples per cycle at the bandwidth limit. In the sampler output, the baseband frequency spectrum of the input signal will appear as upper and lower sidebands about the sampling frequency and its harmonics. The right-hand graphs show the output spectrum of the sampler for the correct sampling rate (upper) and the undersampled case (lower). As the sampling frequency is decreased below twice the input signal bandwidth, the lower sideband of the sampling frequency interferes with the baseband signal, resulting in aliasing.

In the time-domain view (left-hand graphs), the aliased signal lacks sufficient time resolution to track the input waveform. Returning to the example in Figure 1, the 50 MHz input sampled at 55 MS/s will result in sum and difference image frequencies that are above and below the 55 MS/s sampling frequency. The lower sideband image falls into the baseband region of the spectrum and is the source of the 5 MHz alias signal.
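The mixing arithmetic can be sketched in a few lines of Python (a simplified model that ignores amplitudes and phases): the displayed frequency is the input frequency's distance from the nearest harmonic of the sample rate, folded into the first Nyquist zone.

```python
def alias_frequency(f_in_hz, fs_hz):
    """Fold an input frequency into the first Nyquist zone [0, fs/2].

    Models the mixing described above: the observed frequency is the
    distance from f_in to the nearest harmonic of the sample rate.
    """
    f = f_in_hz % fs_hz        # offset above the nearest lower harmonic
    return min(f, fs_hz - f)   # fold the upper half of the zone back down

# The Figure 1 example: a 50 MHz sine sampled at 55 MS/s aliases to 5 MHz,
# while the 1 GS/s acquisition displays the correct 50 MHz.
print(alias_frequency(50e6, 55e6) / 1e6)   # 5.0
print(alias_frequency(50e6, 1e9) / 1e6)    # 50.0
```
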

Current digital instrument designs generally use sampling rates much higher than the instrument’s analog bandwidth. Some instruments may use sharp-cutoff anti-aliasing low-pass filters to limit the input bandwidth and control the instrument’s frequency response. These techniques, combined with long acquisition memories, also minimize this classic problem. Still, users should be aware of aliasing.

Recognizing aliasing

It is good practice to determine the frequency of the measured signal and verify that it has not been aliased. If the characteristics of the input signal are unknown, view the signal at the highest available sample rate, then decrease the sampling rate as needed. If aliasing occurs, you will see the signal’s frequency change as you select a lower sampling rate.

Another hint that a signal is an alias is that it will appear to have an unstable trigger and will jump erratically in time. This occurs because the instrument is triggered by the signal, and the alias, with fewer samples, may not display the trigger point. The instrument displays the nearest sample, which varies from one acquisition to the next, causing instability.

Aliasing can also be recognized by observing the effect on the input signal’s frequency-domain spectrum as the signal’s frequency is varied. A spectral component whose frequency decreases when the input signal’s frequency is increased, a reversal of direction, is an alias. As the frequency of a sine wave increases, the spectral line corresponding to that sine wave will move to the right until it hits the Nyquist frequency of one-half the sample rate.

As the frequency increases above Nyquist, an aliased image from the lower sideband about the sampling frequency will fold back into the baseband spectrum, moving downward in frequency. The lower-sideband images for each harmonic of the sampling frequency show this reversal. Upper sideband images will move in the correct direction. This phenomenon is called spectral folding.
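This reversal test can be sketched numerically (the 100 MS/s sample rate here is an assumed value, purely for illustration):

```python
fs = 100e6  # assumed sample rate: Nyquist frequency is 50 MHz

def displayed(f_in):
    f = f_in % fs
    return min(f, fs - f)   # fold into the first Nyquist zone [0, fs/2]

# Sweep the input upward: below Nyquist the displayed frequency tracks
# the input; above it, the aliased line moves DOWN as the input moves up.
for f_mhz in (30, 45, 49, 55, 70, 95):
    print(f"{f_mhz} MHz in -> {displayed(f_mhz * 1e6) / 1e6:g} MHz displayed")
```
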

A helpful technique to view an aliased signal

If the signal is a relatively simple periodic waveform, such as the example sine wave, then enabling infinite display persistence will show the underlying waveform, as shown in Figure 3.


Figure 3 The aliased signal (upper trace) and the same signal displayed with infinite persistence turned on (lower trace). The persistence display accumulates all the sample values showing the original 50 MHz waveform.
Source: Art Pini

All sample points in the aliased waveform are real. If infinite persistence is enabled, all samples are accumulated on the persistence display, and the original unaliased waveform is eventually recovered. This technique won’t work for complex signals such as non-return-to-zero (NRZ) data or broadband signals.

Using aliased waveforms

Given that aliased signals are made up of real samples, an aliased signal can be used in measurements, as long as the signal’s frequency is not being measured. Consider measuring the output of a remote keyless entry transmitter. This device outputs a pulse-modulated RF signal with a carrier frequency of 433 MHz. This signal has a relatively narrow bandwidth about the carrier frequency. The information being transmitted is encoded in a 400 ms pulse pattern.

Two measurement scenarios are needed. The first is to characterize the RF signal, measuring parameters like the carrier frequency; the shape of the RF envelope also affects the purity of the transmitted signal. The second is to decode the information content. Using an oscilloscope with a 20-megasample (MS) memory at a horizontal scale setting of 100 ms per division (a 1-second acquisition time), the sampling rate would be 20 MS/s. Figure 4 shows the measurement processes for both the RF and data decoding measurements.


Figure 4 Measurements on a remote keyless entry transmitter use an aliased signal to decode digital data.
Source: Art Pini

The traces on the left side of the screen show the RF measurements. The signal is acquired at 20 GS/s, and its leading edge is captured. The oscilloscope measures the RF carrier frequency at 433.9 MHz. The envelope of the RF carrier is extracted by applying the absolute value function, followed by a low-pass filter, to create a peak detector. Trace F1 (bottom) shows the envelope. A copy (Trace F3) of the envelope is also overlaid on a horizontally expanded zoom view (Trace Z1) of the leading edge of the signal. This view can be used to measure the envelope’s rise time.

The right side of the display shows the data decoding process. The entire data packet is acquired on a 100-ms-per-division horizontal scale. The sampling rate is 20 MS/s. The RF carrier is aliased down to 6.13 MHz as measured in parameter P2. The aliased frequency of the carrier is the result of mixing the twenty-second harmonic of the sampling rate with the 433.9 MHz carrier. The same envelope detection technique is applied to the entire packet, rendering the data content as an NRZ signal. Aliasing has enabled the acquisition of the entire signal data packet.
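The alias frequency can be checked with a quick back-of-the-envelope computation (a sketch; the result lands close to the 6.13 MHz reported in parameter P2):

```python
fs = 20e6           # 20 MS/s acquisition rate for the full data packet
fc = 433.9e6        # measured keyless-entry RF carrier frequency

k = round(fc / fs)  # nearest harmonic of the sampling rate
alias_mhz = abs(fc - k * fs) / 1e6

# The 22nd harmonic of 20 MS/s (440 MHz) mixes with the 433.9 MHz
# carrier, producing a ~6.1 MHz difference frequency in the baseband.
print(k, round(alias_mhz, 2))
```
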

Conclusion

Aliasing in digital instruments is a digitizer characteristic that is becoming less frequent a problem due to improved instrument designs, including anti-aliasing filters, oversampling, and very long acquisition memories. Users should still be aware of aliasing to prevent errors that cost time and money.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

Related Content

The post Aliasing, the bane of sampled data systems appeared first on EDN.

When a 270 Ohm resistor in an LCD backlight is no longer a 270 Ohm resistor

Reddit:Electronics - Fri, 04/17/2026 - 12:53

It is the third LCD panel in a month with the same issue: the backlight stopped working. One resistor still measured 270 Ohm, so we know what the value should be; all the others are open circuit or in the xx MOhm range. No signs of corrosion or overheating anywhere, just crappy components; I have never seen this issue before. Is it planned obsolescence or a bad combination of materials in the resistor? Share your experience with similar cases.

submitted by /u/Al3x_Y

Bluetooth LE throughput: Why real‑world performance falls short of specs

EDN Network - Fri, 04/17/2026 - 12:05

Many Bluetooth Low Energy (LE) applications depend on reliable, high‑throughput data transfer between connected devices. Typical use cases include over‑the‑air (OTA) firmware updates, sensor data streaming, and bulk data transport between embedded systems. Although the Bluetooth LE specification defines clear upper bounds on achievable data rate, measured throughput in real systems often falls well below these limits.

This discrepancy is not caused by a single factor. Instead, it arises from the interaction of connection‑event timing, controller scheduling behavior, protocol stack implementation, and radio‑frequency conditions.

While modern Bluetooth LE devices commonly support Data Length Extension (DLE), the 2-Mbps Physical Layer (PHY), and large Attribute Protocol (ATT) Maximum Transmission Unit (MTU) sizes, these features alone do not determine achievable throughput.

This article focuses on the practical constraints that shape Bluetooth LE Generic Attribute Profile (GATT) write throughput in deployed systems and explains why throughput behavior is frequently non‑linear and platform‑dependent.

Assumptions and test context

To isolate timing and scheduling effects from feature limitations, the analysis presented here assumes a contemporary Bluetooth LE configuration with the following capabilities:

  • Support for DLE on both Central and Peripheral
  • Use of the 2-Mbps PHY
  • A negotiated ATT MTU of 251 bytes
  • Transmit‑side buffering sufficient to queue multiple packets
  • Use of GATT Write Without Response operations
  • A receiver capable of sustaining the incoming data rate without application‑level back‑pressure

GATT Write Without Response is used to minimize protocol overhead and eliminate application‑layer acknowledgments that would otherwise consume airtime and delay buffer reuse. Although this write type omits an explicit GATT‑layer acknowledgment, delivery to the receiver’s Link Layer remains guaranteed by the Bluetooth LE protocol.

Under these assumptions, throughput might be expected to scale directly with the number of packets transmitted per connection interval. In practice, this assumption does not hold.

Theoretical throughput

With Data Length Extension enabled, a single Bluetooth LE Link Layer packet can carry up to 251 bytes of payload. After accounting for Logical Link Control and Adaptation Protocol (L2CAP) and Attribute Protocol (ATT) headers, 244 bytes remain available for application data.

Using the 2-Mbps PHY, the on‑air time for a maximum‑length data packet followed by its acknowledgment is approximately 1.4 ms. If a connection interval could be filled entirely with such packet exchanges, without additional Link Layer procedures or timing gaps, the resulting application‑layer throughput would be approximately 170 KBps.

This value represents an upper bound that is rarely approached in practice.
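As a quick sanity check, the bound follows directly from the numbers above:

```python
payload_bytes = 244    # application payload per LL packet after L2CAP/ATT headers
exchange_s = 1.4e-3    # on-air time for a max-length packet plus its ACK, 2M PHY

# If an interval were filled entirely with back-to-back exchanges:
throughput_kBps = payload_bytes / exchange_s / 1000
print(f"{throughput_kBps:.0f} kB/s")   # ~174 kB/s, i.e. roughly 170 KBps
```
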

Connection events and packet scheduling

Bluetooth LE communication occurs within periodic connection events scheduled at intervals defined by the connection interval parameter. During each event, the Central and Peripheral exchange packets until one side terminates the event or the available time expires.

Most controllers support transmitting multiple packets within a single connection event, but the maximum number of packets allowed per event is not specified by the Bluetooth standard and is instead determined by the controller and stack implementation. As a result, packet scheduling behavior can vary significantly across platforms.

This difference is illustrated in Figure 1. In the left‑hand chart, a wireless MCU acting as the Central can pack 20 packets into a 30‑ms connection interval, using most of the available airtime before entering a short end‑of‑event dead time. In contrast, the right‑hand chart shows a smartphone operating as the Central, where the connection‑event length is capped at five packets, even though additional airtime remains available within the same interval.

Figure 1 Packet scheduling within a Bluetooth LE connection interval varies by platform. A wireless MCU Central fills most of a 30‑ms interval with data packets, while a smartphone Central limits the number of packets per connection event, leaving unused airtime. Source: Microchip

Such limits are particularly common on mobile platforms, where power management and radio coexistence requirements constrain connection‑event length. When the number of packets per event is capped, increasing the connection interval does not necessarily increase throughput, because the additional airtime cannot be used for data transmission.
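A minimal model of this cap effect (assuming every event carries its full packet budget and a fixed 244-byte payload, matching the Figure 1 scenarios):

```python
def capped_throughput_kBps(packets_per_event, interval_ms, payload=244):
    # bytes per millisecond is numerically equal to kB/s
    return packets_per_event * payload / interval_ms

# MCU Central: 20 packets per 30-ms interval
print(round(capped_throughput_kBps(20, 30), 1))   # ~162.7 kB/s

# Smartphone Central capped at 5 packets per event
print(round(capped_throughput_kBps(5, 30), 1))    # ~40.7 kB/s

# Lengthening the interval with the same cap only makes things worse
print(round(capped_throughput_kBps(5, 60), 1))    # ~20.3 kB/s
```
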

Residual time and end-of-event dead time

Two timing effects significantly reduce usable airtime within each connection interval:

  • Residual time, which occurs when the remaining interval is too short to accommodate another full packet exchange.
  • End‑of‑event dead time, during which the controller prepares for the next scheduled event and does not permit further transmissions.

The impact of these effects is illustrated in Figure 2. The figure shows that a maximum‑length data packet followed by its acknowledgment occupies approximately 1.4 ms of on‑air time. When the remaining portion of a connection interval is shorter than this duration, the controller cannot schedule another packet exchange, even though some airtime remains available.

Figure 2 Residual airtime and end‑of‑event dead time limit packet scheduling at short connection intervals. A maximum‑size packet and its acknowledgment require approximately 1.4 ms, preventing additional transmissions when insufficient time remains. Source: Microchip

The duration of end‑of‑event dead time varies widely between controller implementations and is not explicitly defined by the Bluetooth specification. In many systems, this behavior can only be identified and quantified through direct measurement.

At short connection intervals, residual and dead time consume a relatively large fraction of each interval, limiting the number of packets that can be transmitted. At longer intervals, this overhead can be amortized across additional packets, improving average throughput if packet scheduling is not otherwise constrained.
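These timing effects can be captured in a toy throughput model. The 1.0-ms dead time below is an assumed value for illustration only, since real controller behavior varies and must be measured:

```python
def packets_per_event(interval_ms, exchange_ms=1.4, dead_ms=1.0):
    # dead_ms models end-of-event dead time (assumed; controller-specific)
    usable_ms = interval_ms - dead_ms
    return max(int(usable_ms // exchange_ms), 0)   # whole exchanges only

def modeled_throughput_kBps(interval_ms, payload=244):
    return packets_per_event(interval_ms) * payload / interval_ms

# The floor() step and fixed dead time make throughput non-linear in
# the interval: overhead is amortized better at longer intervals.
for iv_ms in (7.5, 10, 20, 30):
    print(iv_ms, packets_per_event(iv_ms),
          round(modeled_throughput_kBps(iv_ms), 1))
```
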

Non-linear throughput behavior

Because residual and end‑of‑event dead time depend on internal scheduling thresholds, Bluetooth LE throughput as a function of connection interval is often non‑linear. Small changes in the connection interval can result in unexpected increases or decreases in throughput, depending on how the interval aligns with controller‑specific timing constraints.

These effects are illustrated in Figure 3, which compares measured throughput across a range of connection intervals under different environmental and platform conditions. In the left‑hand graph, an off‑the‑shelf wireless system‑on‑chip (SoC) is evaluated as both Central and Peripheral. Measurements taken in a shielded environment (orange) show consistently higher throughput than those collected in an open office (blue), indicating the impact of ambient interference on achievable performance.

Figure 3 Measured throughput versus connection interval illustrates non‑linear behavior and environmental sensitivity. Results from both a wireless SoC platform and a Zephyr GATT throughput test show higher throughput in low‑interference conditions and increased variability at longer intervals. Source: Microchip

The right‑hand graph, derived from a Zephyr GATT throughput test, reinforces this behavior while also highlighting the non‑linear relationship between connection interval and throughput. As the interval increases, throughput does not scale monotonically; instead, it exhibits discontinuities and increased variance, particularly at longer intervals where residual and dead time are amortized over more packets.

These results emphasize that throughput cannot be predicted solely from the Bluetooth LE specification. Instead, it’s strongly influenced by platform‑specific scheduling behavior and the prevailing radio‑frequency environment.

Impact of interference

Longer connection intervals typically improve throughput in clean radio‑frequency environments by amortizing residual airtime across additional packets. However, they also increase sensitivity to interference. During long connection events, many packets may be transmitted back‑to‑back; if packet loss or repeated cyclic redundancy check errors occur early in the event, some controllers terminate the event prematurely.

When this occurs, a substantial portion of the connection interval may remain unused, resulting in a sharp reduction in throughput. Shorter connection intervals limit the amount of airtime lost when errors occur and often produce more consistent throughput in noisy environments, albeit with a lower theoretical maximum.

While parameters such as PHY speed, MTU size, DLE, and GATT characteristic length are largely fixed in modern Bluetooth LE systems, connection‑event timing and controller behavior ultimately determine achievable throughput.

The connection interval remains the primary tuning parameter, but its effect is non‑linear and highly dependent on implementation details. For systems that limit packet count per connection event, selecting an interval that closely matches the allowed packet budget is critical. When longer events are supported, throughput gains must be weighed against increased sensitivity to interference.

For design engineers, optimizing Bluetooth LE throughput requires empirical evaluation and platform‑specific characterization rather than reliance on specification‑level performance limits. At a practical level, this places increased importance on controller implementations and protocol stacks that offer fine‑grained configurability on both the Central and Peripheral sides, enabling precise control over connection parameters, event length, and buffering behavior.

Wireless MCU platforms, such as Microchip’s PIC32‑BZ6 multiprotocol wireless MCU family, are representative of designs that emphasize this level of stack configurability and visibility. By allowing engineers to tune behavior symmetrically on both ends of the link and observe the resulting timing effects, such platforms can simplify the process of analyzing throughput bottlenecks and optimizing data transfer performance under real‑world operating conditions.

The ability to measure connection‑event timing, packet scheduling, and error behavior at the controller and stack levels enables more repeatable, data‑driven throughput characterization during development.

Patrick Fitzpatrick is senior technical staff engineer for software at Microchip’s Wireless Business Unit.

Related Content

The post Bluetooth LE throughput: Why real‑world performance falls short of specs appeared first on EDN.

SemiLEDs’ quarterly revenue more than halves

Semiconductor today - Fri, 04/17/2026 - 10:36
For its fiscal second-quarter 2026 (to end-February), LED chip and component maker SemiLEDs Corp of Hsinchu, Taiwan has reported revenue of $1.064m, more than halving from $2.57m last quarter and down from $10.87m a year ago, due to the absence of any buy–sell purchase orders of equipment (which were $9.4m a year ago)...

Playstation 4 charge board

Reddit:Electronics - Fri, 04/17/2026 - 07:34

It’s a JDM-030 USB charge board, but which one of these is the fuse (the one I should replace) after a short circuit? And is it worth replacing?

submitted by /u/Salt-Environment9684

US Critical Materials and Columbia University to advance domestic recovery of defense-critical metals from red mud

Semiconductor today - Thu, 04/16/2026 - 19:45
Private rare-earths exploration and process development company US Critical Materials Corp (USCM) of Salt Lake City, Utah, USA and Columbia University have signed a two-year sponsored research agreement seeking to advance scientific pathways that enable the development of future US production of gallium, scandium, titanium and rare-earth elements from red mud, a major byproduct of aluminium refining...

Built-in Garage door opener

Reddit:Electronics - Thu, 04/16/2026 - 18:21

We used to have a wired doorbell button as a garage door opener near the back door inside the house. The wiring got damaged in an area where new wiring couldn’t be routed during renovation.

Came up with this solution: took a garage door opener apart, connected wires to a Decora-style momentary switch, and soldered the other end to the pads for the buttons on the PCB.

Added a whip antenna to overcome the shielding of the electrical box and drywall. To accommodate the 9-inch whip antenna, drilled a small hole in the electrical box and fed it into the wall.

Works perfectly.

submitted by /u/Certain_Proposal7191
[link] [comments]

The system architect’s sketchbook: The coherency wall

EDN Network - Thu, 04/16/2026 - 18:05

Deepak Shankar, founder of Mirabilis Design and developer of VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.

The post The system architect’s sketchbook: The coherency wall appeared first on EDN.

Simulation tool tests assembly processes upfront

EDN Network - Thu, 04/16/2026 - 18:00

With Keysight Assembly simulation software, automotive manufacturers can virtually test shop-floor processes to identify issues early in development. Late-stage assembly failures drive delays, rework, and recalls. By delivering early insight and integrating with existing ecosystems, the software improves body-in-white assembly workflows.

Developed with automotive OEM partners, Keysight Assembly enables engineers to replicate processes such as part positioning, clamping, and spot welding through guided workflows and templates—without requiring finite element modeling (FEM) expertise. It also provides early visibility into distortion and dimensional risks, shortening production timelines and improving build accuracy.

Keysight Assembly integrates with Keysight’s stamping simulation software, allowing engineers to carry stamped-part data across the process—from forming through assembly—and validate outcomes against pre-production scan data. It also integrates with CAD, product lifecycle management (PLM), Excel, and existing digital workflows without disrupting established practices.

Learn more about Keysight Assembly and related webinars here.

Keysight Technologies  

The post Simulation tool tests assembly processes upfront appeared first on EDN.

SMT DIP switches fit space-constrained PCBs

EDN Network - Thu, 04/16/2026 - 18:00

Littelfuse’s TDB series of miniature DIP switches uses a 1.27-mm half-pitch, surface-mount design to reduce PCB footprint. The compact devices support high-density layouts where space, reliability, and manufacturability matter.

The switches are available in 2-, 4-, 6-, 8-, and 10-position SPST configurations, with body lengths ranging from 3.67 mm to 13.83 mm depending on position count. Contact ratings of up to 50 VDC, 100 mA (steady state) and 24 VDC, 25 mA (switching) make them suitable for low-power industrial control, as well as consumer IoT and smart home devices. Contact resistance is 100 mΩ maximum, and insulation resistance is 100 MΩ minimum at 100 VDC.

Compatible with automated SMT assembly, the switches feature gold-plated bifurcated contacts and top tape sealing for post-reflow washable processing. They offer a mechanical and electrical life of 1000 cycles and operate over a temperature range of –40°C to +85°C.

The TDB series switches are available in tube or tape-and-reel packaging for high-volume production. Samples can be obtained through authorized Littelfuse distributors.

TDB series product page 

Littelfuse

The post SMT DIP switches fit space-constrained PCBs appeared first on EDN.

Wilkinson divider/combiner reduces insertion loss

EDN Network - Thu, 04/16/2026 - 17:56

The Vishay WLKN-000 two-way Wilkinson power divider/combiner operates from 15 GHz to 20 GHz, centered at 18 GHz. By integrating a resistor with the transmission lines, the compact surface-mount device simplifies system design and saves space in aerospace, defense, and 5G/6G connectivity applications.

Low insertion loss of <0.5 dB below 19 GHz—said to be one of the industry’s lowest—and return loss of 10 dB to 15 dB enhance system efficiency by reducing power dissipation throughout the signal path. Unlike narrowband or resistive splitters, the WLKN-000 offers high output-to-output isolation of >20 dB at the center frequency. This limits crosstalk, protects downstream amplifiers during combining, and maintains stable performance across parallel RF paths.

The thin-film device operates over a temperature range of −55°C to +155°C, supporting reliable performance in challenging environments. Applications include automotive ADAS, radio transceivers, LEO satellites, base station terminals, drones, weapons guidance systems, and phased-array radar systems.

Samples of the WLKN-000 in 1817 SMD packages are available now; production quantities have a lead time of 20 weeks.

WLKN-000 product page 

Vishay Intertechnology 

The post Wilkinson divider/combiner reduces insertion loss appeared first on EDN.

Digital isolators strengthen industrial systems

EDN Network - Thu, 04/16/2026 - 17:56

Part of Diodes’ RobustISO family, the API782x series of dual-channel digital isolators delivers 5.7-kVRMS isolation for 1 minute per UL 1577. The devices offer reliable protection for digital control and communication signals in solar inverters, motor control, industrial automation, and data center equipment.

The API782x series meets reinforced and basic isolation requirements across multiple standards, including VDE, UL, and CQC. The devices are rated for an 8-kV peak isolation voltage per DIN VDE 0884-17 and a 12.8-kV peak surge isolation voltage for transient events. Based on Diodes’ reliability calculations for the capacitive isolation barrier, the components offer a predicted operational lifetime exceeding 40 years.

The isolators support data rates up to 100 Mbps and provide a minimum CMTI of 150 kV/µs. The API7820 features both channels in the same direction, while the API7821 provides both channels in the opposite direction with either a high or low default output state. They operate from a 2.5-V to 5.5-V supply voltage and typically consume 2.1 mA per channel at 1 Mbps.

API782x isolators in SO-16WW packages are available through Diodes’ authorized distributors.

API782x product page 

Diodes

The post Digital isolators strengthen industrial systems appeared first on EDN.

DSC integrates control, sensing, security

EDN Network - Thu, 04/16/2026 - 17:55

Microchip’s dsPIC33AK256MP306 digital signal controller (DSC) combines a 200-MHz, 32-bit core and double-precision FPU with high-resolution PWMs and high-speed ADCs. It also includes a hardware crypto accelerator with support for post-quantum cryptography (PQC). Offered in a 36-pin VQFN package, the DSC saves space and simplifies PCB layout in power conversion, motor control, and intelligent sensing applications.

The dsPIC33AK256MP306’s four PWM generator pairs (eight outputs) deliver fine edge placement (FEP) resolution down to 78 ps. Three 12-bit SAR ADCs operate at up to 40 Msamples/s, alongside 5-ns comparators and 12-bit DACs with slope compensation. These features enable fast, deterministic control loops for high-efficiency DC/DC converters, auxiliary rails, and high-frequency SiC/GaN systems. An integrated touch controller extends support to sensing and HMI functions.
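
Although not vendor code, a quick back-of-the-envelope sketch shows what a 78-ps fine edge placement step implies for effective duty-cycle resolution; the switching frequencies used here are illustrative assumptions, not datasheet operating points:

```python
import math

FEP_RESOLUTION_S = 78e-12  # fine edge placement step from the headline spec, in seconds

def duty_cycle_steps(switching_freq_hz: float) -> int:
    """Number of distinct edge positions available within one PWM period."""
    return int(round(1.0 / switching_freq_hz / FEP_RESOLUTION_S))

def effective_bits(switching_freq_hz: float) -> float:
    """Equivalent duty-cycle resolution, in bits, at the given switching frequency."""
    return math.log2(duty_cycle_steps(switching_freq_hz))

# Resolution shrinks as switching frequency rises, since fewer 78-ps steps fit per period.
for f in (100e3, 1e6, 2e6):
    print(f"{f/1e6:g} MHz: {duty_cycle_steps(f)} steps ≈ {effective_bits(f):.1f} bits")
```

This is why picosecond-class edge placement matters for high-frequency SiC/GaN converters: at megahertz switching rates, a coarser (nanosecond-class) PWM would leave only a few hundred duty-cycle steps per period.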

To address growing cybersecurity requirements, the dsPIC33AK256MP306 family supports secure boot, firmware updates, and debug, with hardware-accelerated PQC aligned to emerging standards for connected control systems. Live update capability maintains uninterrupted firmware upgrades in high-availability systems.

Prices for the dsPIC33AK256MP306 start at $1.97 in lots of 5000 units.

dsPIC33AK256MP306 product page

Microchip Technology 

The post DSC integrates control, sensing, security appeared first on EDN.

Vexlum and Menlo collaborate to streamline optical clock development

Semiconductor today - Thu, 04/16/2026 - 17:24
Finland-based laser developer and manufacturer Vexlum (which was spun off from Tampere University of Technology’s Optoelectronics Research Centre in 2017) and precision photonics technology provider Menlo Systems GmbH of Martinsried near Munich, Germany, are collaborating to accelerate the development of ultra-precise optical atomic clock timekeeping by providing an accessible, modular photonics source...

Engineering tradeoffs: a camera case study

EDN Network - Thu, 04/16/2026 - 15:00

Four cameras, from two companies. Similar at first glance. Quite different once you zoom in and ponder the picture a while. Which approach is superior? Share your opinions in the comments!

The way this is going, and to remain honest both with myself and all of you, I’m at least for now going to need to start referring to myself as a “camera collector” versus a photographer. I keep accumulating particularly noteworthy (therefore also rare) gear when I come across lightly used, but still reasonably priced, examples on eBay and elsewhere. But I can’t seem to find any spare time to actually use anything in my steadily expanding hardware inventory…at least not yet.

Will I ever be able to retire?

First World problem. I know. My latest examples of acquisition excess fall into the “pocketable” camera category. First off are two Pentax devices. As regular readers may recall from my past writeups, I’m nearing a half-century as a “Pentaxian”, a term originating on a t-shirt I snagged at the company’s CES booth a “few” years ago, prior to the brand’s acquisition by Ricoh. The new-to-me cameras are both members of the company’s “Q” product series: a first-generation Q:

and the successor Q7, skipping (at least so far) the in-between Q10 as well as the Q-S1 concluding iteration. I’d long known about the Q family, which originated a decade-and-a-half back, but enthusiastic reviews from folks such as Micro Four Nerds (the prior blog link is supplemented by the following video) tipped me over the acquisition edge:

Key differences between my two Pentax Q-series cameras include:

  • Sensor size: 1/2.3″ on the Q, 1/1.7″ on the Q7, albeit both delivering 12 Mpixel resolution. By the way, the origin of this particular sensor dimension terminology is a story in and of itself, which I’ll save for another day; for now, see here.
  • Body construction: Magnesium alloy on the Q, (still-rugged) plastic on the Q7

Note too, for comparison-to-come purposes, that neither model embeds an electronic viewfinder (EVF), although they both include both a pop-up flash and a hot shoe tailored for an optional external flash unit (which can operate in tandem with the pop-up) or a transmitter.

Potato, potahto (or tomato?)

Then I came across Chris Niccolls’ extensive writeup, “The Panasonic Lumix GM-5 Is the Greatest Digital Camera Ever Made”, along with an associated video, at PetaPixel:

Insert hook and reel me in. Regular readers may also recall I’ve been slowly-but-steadily collecting Micro Four Thirds (M43) gear in recent years, now spanning a “few” cameras from two suppliers (Olympus-now-OM System and Panasonic; the Blackmagic Design video cameras I own are both higher-end Canon EF mount-based models), one of which ended up as a gift for my wife. Well, my stable is now even fuller; I bought both a Panasonic Lumix DMC-GM1:

and the successor DMC-GM5:

both of which are also more than a decade old at this point. Here’s another Micro Four Nerds review writeup-plus-video combo, this time focused (bad pun intended) on the GM1:

along with a coverage content tandem on the GM5 (to be clear, I have no affiliation with Micro Four Nerds or any other “influencer” showcased here; I’m just a fan):

Notable differences between the two models include the following:

  • Supplemental illumination: pop-up built-in flash on the GM1 (but no hot shoe), hot shoe (but no integrated flash) on the GM5, and
  • Viewfinder: backside LCD only on the GM1, added EVF on the GM5 (space-supplanting the predecessor’s pop-up flash)

Sensor inconsistencies

Although the four cameras, from two manufacturers’ model lines, are conceptually similar (“pocketable”), their respective implementations are quite different. The inherent tradeoffs leading to each development team’s decisions and resultant product capabilities and limitations are interesting (IMHO at least) to ponder. Note that I have no company-representative insight, either quotable or off-the-record; what follows are just my educated guesses.

As already mentioned, my two Pentax Q variants’ image sensors are both 12 Mpixels in resolution, albeit with differing dimensions; the roughly 55% larger area of the Q7’s sensor translates into improved low-light performance and wider dynamic range, thanks to expanded pixel pitch and other factors. Conversely, the Panasonic GM1 and GM5’s image sensors are, as far as I can tell, identical, with 16 Mpixel resolutions. That said, M43 image sensors have roughly 8x the surface area of the 1/2.3″ sensor in the Pentax Q, and are still approximately 5x larger (again, surface area, not necessarily active image-capture area) than the 1/1.7″ sensor in the Pentax Q7.
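
For the curious, the area and pixel-pitch arithmetic is easy to sketch. The dimensions below are commonly quoted nominal values (my assumption; manufacturers’ published active-area figures vary slightly), so the resulting multipliers are estimates:

```python
# Nominal sensor dimensions in mm, plus pixel counts. Exact active-area figures
# vary by sensor model, so treat the computed ratios as estimates.
SENSORS = {
    '1/2.3" (Pentax Q)':     (6.17, 4.55, 12e6),
    '1/1.7" (Pentax Q7)':    (7.60, 5.70, 12e6),
    'Four Thirds (GM1/GM5)': (17.3, 13.0, 16e6),
}

base_q  = 6.17 * 4.55   # Pentax Q area, mm^2
base_q7 = 7.60 * 5.70   # Pentax Q7 area, mm^2

for name, (w, h, mpix) in SENSORS.items():
    area = w * h
    pitch_um = (area / mpix) ** 0.5 * 1000  # approximate pitch, ignoring inactive border
    print(f"{name}: {area:.1f} mm^2, ~{pitch_um:.2f} um pitch, "
          f"{area/base_q:.1f}x the Q, {area/base_q7:.1f}x the Q7")
```

The same numbers also yield the pixel-pitch comparison discussed later: the Four Thirds sensor’s pitch works out to roughly 2.5x that of the Q’s despite the higher pixel count.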

Stabilization tradeoffs

All four Pentax Q family members (the total produced, including my two) also supported in-body image stabilization (IBIS), in and of itself a curious choice given the lightweight bodies and accompanying lenses, which would seemingly diminish the need for mechanical stabilization of any sort. By contrast, neither Panasonic camera mentioned here implements IBIS; instead, Panasonic built optical image stabilization (OIS) into many of its M43 lenses, at least at the time that the GM1 and GM5 were in production.

That said, M43 partner then-Olympus took the opposite approach, relying on IBIS in its bodies while largely omitting OIS from its “glass” portfolio, although either company’s lenses can as-needed be used on the other’s bodies given the two manufacturers’ lens-mount compatibility. OIS is, generally speaking, inferior to IBIS, although as I’ve previously noted, it’s particularly effective with telephoto lenses. But it’s arguably better than nothing at all, or than interpolation-based digital image stabilization, for that matter.

Here are a couple of example videos discussing the similarities and differences between IBIS and OIS along with concept examples of both IS forms in action, as well as how they can collaborate:

This one from Canon has a Japanese audio track albeit with English subtitles:

Keep in mind, too, that the Panasonic cameras’ larger image sensors (and therefore larger-pitch pixels, despite their 33.3% higher resolution) made image stabilization inherently less necessary: the improved light-gathering capability translated into the ability to operate them at blur-suppressing higher shutter speeds.

Space constraints

The added stabilization hardware surrounding the image sensor in the Pentax Q-family bodies, coupled with the desire to maintain their compact weight and dimensions, also compelled the company to dispense with a mechanical shutter, at least in the bodies themselves. Instead, a subset of the then-available eight-lens suite embedded mechanical leaf shutters in the lenses. More broadly, an “electronic shutter” implemented in the image sensor was available in all body-plus-lens cases, albeit with “rolling shutter” and other tradeoffs.

A hollow victory?

Ironically, in spite of these likely-difficult tradeoff decisions made by its development team, Pentax still ended up with Q-series camera bodies that were (slightly) larger than that of the Panasonic DMC-GM1, as Robin Wong, another well-known photo-enthusiast “personality”, notes in his Pentax- and Panasonic-themed blog posts and videos:

I’ve got more to say about the two companies’ contrasting approaches to the “pocketable” camera market, including the tradeoffs between multi-supplier standard and sole-sourced proprietary lens mounts …but I’ll save that for another day and writeup (or few). For now, I’ll wrap up my writeup and hand the keyboard to you for your so-far thoughts in the comments!

Brian Dipert is the associate editor, as well as a contributing editor, at EDN Magazine.

Related Content

The post Engineering tradeoffs: a camera case study appeared first on EDN.

Sasken Announces Hyderabad Center of Excellence to Scale Product Engineering and Digital Innovation

ELE Times - Thu, 04/16/2026 - 14:41

Hyderabad, India: April 16, 2026: Sasken Technologies Ltd. (BSE: 532663, NSE: SASKEN), a leading provider of product engineering and digital transformation services, today announced the launch of its Center of Excellence (CoE) in Hyderabad. The new center strengthens Sasken’s regional delivery footprint and deepens collaboration with strategic Chipset customers like Qualcomm and their OEMs.

Located in one of India’s fastest-growing technology ecosystems, the Hyderabad CoE will focus on next-generation engineering across connected devices, 5G-led platforms, IoT solutions, embedded systems, and digital product engineering. The center is designed to enable closer customer collaboration, accelerate engineering cycles, and support faster product innovation for complex global programs across industries including automotive, smart devices, hi-tech, satellite communications, and industrial.

As part of this expansion, Sasken also announced the appointment of Nirmala Datla, Chief Data Science & Engineering Officer, as Site Leader for the Hyderabad CoE. She will drive delivery excellence, build next-generation engineering capabilities, and lead talent development from the location.

Alongside the leadership appointment, Sasken plans to initially hire 100+ specialized professionals from Hyderabad’s strong technology talent ecosystem in the coming months. Hiring will focus on high-skill roles across semiconductors, ODM, automotive, and data science & engineering, supporting advanced engineering programs in connected devices, intelligent platforms, and next-generation digital ecosystems for global customers.

“The Hyderabad CoE represents a strategic investment to support the growing demand for Sasken’s capabilities in product engineering and digital transformation. Hyderabad’s strong engineering talent ecosystem enables us to scale delivery, deepen customer collaboration, and accelerate innovation and time-to-market for our customers,” said Nirmala Datla, Chief Data Science and Engineering Officer, Sasken Technologies.

“Hyderabad presents us with an excellent talent pool across our BUs. The launch of this Center of Excellence strengthens Sasken’s delivery footprint and positions us to support customers with greater speed, scale, and proximity. Aligned with our growth strategy, we continue to make focused investments in talent and capabilities that enable us to scale the right segments and deliver sustained growth,” said Hareesh Ramanna, Chief Experience Officer, Sasken Technologies.

The post Sasken Announces Hyderabad Center of Excellence to Scale Product Engineering and Digital Innovation appeared first on ELE Times.
