EDN Network
The 200G/lane CPO pushes optical interconnect boundaries

A new co-packaged optics (CPO) solution claims to set the bar for next-generation interconnects serving hyperscale data centers and artificial intelligence (AI) workloads. Broadcom’s third-generation CPO technology delivers 200G per lane while significantly improving thermal designs, handling procedures, fiber routing, and overall yield.
Broadcom’s 200G co-packaged optics solution is engineered to address scaling issues in interconnects, which often lead to link flaps and operational disruptions. This, in turn, affects the industry’s ability to achieve the lowest cost per token. The 200G CPO technology enables scale-up domains to exceed 512 nodes while addressing the bandwidth, power, and latency challenges associated with the increasing size of next-generation foundation model parameters.
CPO is a heterogeneous integration of optics and silicon on a single packaged substrate. Source: Broadcom
A CPO system design integrates optical and electrical components to maximize performance with lower-power optical interconnects. According to Near Margalit, VP and GM of the Optical Systems Division at Broadcom, the company’s CPO technology is driven by its switch ASICs, optical engine, and an ecosystem of passive optical components, interconnects, and system solutions partners.
Broadcom is working closely with its ecosystem partners to optimize the integration of CPO solutions. For instance, it has teamed up with Corning on advanced fiber and connector technologies. It’s also working closely with Twinstar Technologies on high-density fiber cables to scale optical interconnects in next-generation data center and AI infrastructures.
Broadcom’s CPO legacy
Broadcom began its CPO journey in 2021 with the launch of the first-generation Tomahawk 4-Humboldt chipset, which included high-density integrated optical engines, edge coupling, and detachable fiber connectors. Next, the second-generation Tomahawk 5-Bailly chipset deployed 100G per lane. According to Broadcom, it was the industry’s first volume-production CPO solution.
Now, after the launch of the third-generation 200G/lane CPO, Broadcom has telegraphed its commitment to developing a fourth-generation 400G/lane solution. That marks a steady progression of optical interconnect solutions for AI workloads driving higher bandwidth, lower latency, and more power-efficient optical interconnects.
Related Content
- The advent of co-packaged optics (CPO) in 2025
- IBM Boasts Industry-Leading Photonics to Cut AI Training Time
- Global Insights into the Co-Packaged Optics Technology Platform
- Silicon Photonics and Co-Packaged Optics Shine a Light at OFC 2025
The post The 200G/lane CPO pushes optical interconnect boundaries appeared first on EDN.
Micro-speaker enables slimmer wearables

Sycamore-W, an ultra-thin near-field MEMS speaker from xMEMS, brings premium audio to smartwatches and fitness bands. With dimensions of just 20×4×1.28 mm, it reduces speaker package volume by up to 70%, freeing space within wrist-worn devices for additional biometric sensors or larger batteries. At just 150 mg, Sycamore-W also minimizes weight impact in space-constrained wearables.
Compared to conventional coil speakers—typically 3 mm to 4 mm thick and weighing up to 3 grams—Sycamore-W’s slim profile and low mass significantly reduce bulk. The lighter design minimizes strain and improves comfort for all-day wear.
With solid-state durability, the MEMS speaker offers component-level IP58-rated ingress protection and withstands mechanical shocks up to 10,000 g. It delivers zero phase delay and improved low-frequency response compared to dynamic drivers: its first-order (1/f) rolloff falls away more gradually than a dynamic driver’s second-order (1/f²) rolloff, i.e., 20 dB/decade versus 40 dB/decade over the 2-kHz-to-20-Hz span, yielding stronger bass performance.
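The decay orders quoted above map directly onto rolloff slopes: output falling as 1/f corresponds to 20 dB per decade, while 1/f² corresponds to 40 dB per decade. A minimal sketch of that arithmetic (illustrative only, not xMEMS data):

```python
import math

def rolloff_db(f: float, f_ref: float, order: int) -> float:
    """Relative level in dB at frequency f, for a response that falls as
    1/f^order below the reference frequency f_ref."""
    return -20 * order * math.log10(f_ref / f)

# One decade below a 2-kHz reference:
first_order = rolloff_db(200, 2000, 1)   # -20 dB per decade (1/f)
second_order = rolloff_db(200, 2000, 2)  # -40 dB per decade (1/f^2)
```

The gentler first-order slope is why a driver of this type retains comparatively more output as frequency drops toward 20 Hz.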
Samples of the Sycamore-W are available now to early access customers, with volume production slated for Q2 2026.
The post Micro-speaker enables slimmer wearables appeared first on EDN.
IMU tracks high-G impacts and motion

The LSM6DSV320X inertial measurement unit (IMU) from ST features dual accelerometers for activity tracking up to 16 g and impact detection up to 320 g. To support edge computing, it integrates a machine-learning core for context awareness with exportable AI features, along with embedded functions such as a finite state machine for configurable motion tracking.
Housed in a compact 3×2.5-mm module, the IMU suits applications ranging from wearables and consumer medical devices to smart home and vehicle crash-detection systems. In addition to the 3-axis accelerometers, it includes a gyroscope with a range of up to ±4000 dps. All three sensors are fully synchronized to simplify application development.
Other features include an embedded sensor fusion low-power algorithm for spatial orientation and an adaptive self-configuration (ASC) function. With ASC, the sensors automatically adjust their settings in real time when a specific motion pattern or machine-learning core signal is detected—without requiring host processor intervention.
In production now, the LSM6DSV320X is supported in the Edge AI Suite and will be added to AIoT Craft by the end of 2025.
The post IMU tracks high-G impacts and motion appeared first on EDN.
SoCs boost smart home interoperability

Qorvo has expanded its QPG6200 portfolio of low-power wireless SoCs supporting Matter over Thread, Zigbee, and Bluetooth LE. Featuring the company’s ConcurrentConnect technology, the devices enable seamless multiprotocol communication for smart home, industrial automation, and other IoT applications.
Based on the previously announced QPG6200L—now in production with leading smart home OEMs—the QPG6200J, QPG6200M, and QPG6200N deliver up to 20 dBm transmit power in a compact QFN package. Output power is software-configurable to meet global regulatory requirements. All devices are powered by an Arm Cortex-M4F processor running at up to 192 MHz, with 2 MB of nonvolatile memory and 336 kB of RAM.
The table below highlights key differences.
A PSA Certified Level 2 development kit based on the QPG6200L is available now, with the full device family scheduled for production in the third quarter of 2025.
The post SoCs boost smart home interoperability appeared first on EDN.
Field analyzers gain vector functions

Anritsu’s Site Master cable and antenna analyzers are now available with optional vector network analyzer (VNA) and vector voltmeter (VVM) capabilities. These additions extend their use to field testing of filters and amplifiers, as well as maintenance of radar and antenna systems, including VHF omnidirectional range (VOR) and instrument landing system (ILS) navigation equipment.
The two-port, one-path VNA covers 5 kHz to 4 GHz or 6 GHz. Calibration options include standard open/short/load accessories, the one-step InstaCal module, or factory ReadyCal for immediate measurements. With over 100 dB of dynamic range and –45 dBm to +9 dBm output power, it supports testing of filters, diplexers, and amplifiers in communication base stations. A 380-µs/point sweep speed accelerates alignment and tuning.
The VVM performs cable phase matching in complex phased array antenna systems, such as VOR/ILS systems at major airports. A table display of individual results simplifies phase matching of multiple cables.
The VNA and VVM options for the Site Master MS2085A and MS2089A portable analyzers are now available, with existing instruments eligible for upgrade.
The post Field analyzers gain vector functions appeared first on EDN.
GaN switch streamlines power conversion

Infineon’s CoolGaN 650-V G5 bidirectional switch (BDS) integrates two switches in a single device to actively block current and voltage in both directions. Its monolithic common-drain design with a double-gate structure, based on gate injection transistor technology, improves efficiency, reduces die size, and replaces conventional back-to-back configurations.
The GaN switch simplifies cycloconverter designs by enabling single-stage power conversion, eliminating the need for multiple conversion stages. In microinverters, it supports higher power density and a lower component count. It also enables advanced grid functions such as reactive power compensation and two-way power flow.
The CoolGaN 650-V G5 BDS is offered in a TOLT package with RDS(on) values of 110 mΩ and 55 mΩ. It is now open for ordering, with samples of the 110-mΩ version available.
The post GaN switch streamlines power conversion appeared first on EDN.
A closer look at capacitor auto-discharge circuit

What function does a capacitor auto-discharger perform in a power-supply circuit, and how does it work? What are its fundamental building blocks, and can engineers develop a capacitor auto-discharge module on their own? What are the basic considerations for component selection? T. K. Hareendran answers these questions in his design article.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- Quickly discharge power-supply capacitors
- Model Railway Capacitor Discharge Unit (CDU)
- A short primer on X- and Y-capacitors in AC power supply
- Parasitics and Capacitor Selection: Dielectric Absorption (DA)
- Discharge resistor modules premounted on the capacitor deliver enhanced reliability
The post A closer look at capacitor auto-discharge circuit appeared first on EDN.
My very first LED

My first encounter several decades ago with a pillar of modern technology yielded more than just one lesson.
NEREM 1967 was the IEEE Northeast Electronics and Engineering Meeting held at the Sheraton-Boston Hotel and the War Memorial Auditorium in Boston, MA, from November 1 to 3, 1967. I was there, although most of the engineering population in the Northeastern United States could probably have shared that claim back then; it was perhaps the busiest mass of human activity I’ve ever encountered in one common environment.
The display booths were extremely elaborate, but the one thing I couldn’t help noticing was that all these demonstrations of test equipment sported a constellation of strange but intense little red lights that, in many cases, formed numbers, some of which had to be viewed from pretty much straight on or the light would vanish.
Of course, they were early versions of LEDs.
Shortly afterward, having learned what an LED was, I obtained a sample: a Monsanto type MV-50 (Figure 1). When I wired it up, I saw its little red light. At the same time, a separate light of realization went off in my head as I finally understood what it was that I’d been seeing all over the place at the NEREM show.
Figure 1 My very first LED obtained soon after attending NEREM 1967. Source: eBay
Next, I began studying some LED datasheets.
One parameter that kept appearing at the forefront of many datasheets was light output rated in candelas. The candela is a unit of luminous intensity or luminous power per unit of solid angle, and the numbers for that parameter rivaled each other from product to product.
However!!
The physical dimensions of some LEDs of that era were really quite small and some of them had very directional and limited light output patterns. Even though the candela ratings could be “impressive”, the total light output from some of them was, if you’ll forgive me for being judgmental, puny.
The stress that was placed on the high intensity rating seemed like an exercise in specmanship. Still, if you want to know how far LED technology has come in the ensuing fifty-odd years, take a late-night walk through Times Square in Manhattan, and don’t forget your sunglasses.
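The specmanship John describes comes down to the difference between luminous intensity (candelas, i.e., lumens per steradian) and total luminous flux (lumens): flux is roughly intensity times beam solid angle. The sketch below uses hypothetical example LEDs; the intensities and beam angles are made up for illustration, not taken from any datasheet:

```python
import math

def beam_solid_angle_sr(full_angle_deg: float) -> float:
    """Solid angle (steradians) of a cone with the given full apex angle."""
    half = math.radians(full_angle_deg / 2)
    return 2 * math.pi * (1 - math.cos(half))

def total_flux_lm(intensity_cd: float, full_angle_deg: float) -> float:
    """Approximate luminous flux, assuming uniform intensity within the beam."""
    return intensity_cd * beam_solid_angle_sr(full_angle_deg)

# A narrow 10-degree, 1000-mcd LED vs. a wide 120-degree, 200-mcd LED:
narrow = total_flux_lm(1.0, 10)   # about 0.024 lm
wide = total_flux_lm(0.2, 120)    # about 0.63 lm
```

The "dimmer" wide-angle part actually emits far more total light: a tightly focused emitter can post a huge candela number while producing a puny overall output.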
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- LED tail lights
- Suppress EMI coming from a variable brightness LED
- What’s the storage life of idled LED light bulbs?
- LED street-light designs create safety hazards
- The headlights and turn signal design blunder
- Headlights and turn signals, part two
The post My very first LED appeared first on EDN.
MRAM and ReRAM: The quest for automotive, aerospace niches

Where do MRAM and ReRAM technologies stand after years of promise to replace incumbent memories? Gary Hilson speaks to MRAM and ReRAM makers, as well as an industry analyst, to find the truth about their market standing. While these memory technologies offer endurance, they don’t seem to compete on price. As a result, they are eyeing specialized markets like automotive and aerospace to remain relevant.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- ReRAM enhances edge AI
- ReRAM Machine Learning Embraces Variability
- An MCU test chip embeds 10.8 Mbit STT-MRAM memory
- Memory lane: Where SOT-MRAM technology stands in 2024
- Emerging Memories May Never Go Beyond Niche Applications
The post MRAM and ReRAM: The quest for automotive, aerospace niches appeared first on EDN.
Seven-octave linear-in-pitch VCO

Frequent contributor Nick Cornford recently shared design ideas incorporating cool circuits for linear-in-pitch voltage-controlled oscillators (LPVCOs):
- Revealing the infrasonic underworld cheaply: Part 1 and Part 2
- A pitch-linear VCO: Part 1 and Part 2
Wow the engineering world with your unique design: Design Ideas Submission Guide
The linear-in-pitch function, which makes output frequency proportional to the antilog of voltage, is interesting because it provides a better perceptual interface to the inherently logarithmic human ear than a linear frequency response does.
One measure of the performance of an LPVCO is its octave range. That’s the ratio of highest to lowest frequency that its output spans, expressed as the binary (base 2) logarithm of the ratio. Two octaves (2^2 = 4:1) is good. Three octaves (2^3 = 8:1) is better. The LPVCO in Figure 1 does seven (2^7 = 128:1).
Figure 1 A seven-octave LPVCO comprises the Q1-Q2 antilog pair that converts the 0-V to 5-V Vin into a 1-µA to 128-µA Ic2, for a proportional 2^7 = 128:1 change in C1 ramp rate and U1 oscillation frequency. Counter U2 then divides the U1 oscillation frequency by 4 and converts it to a three-level, very vaguely “sine-ish,” output waveform.
Here’s how it works.
Control voltage Vin is scaled by a voltage divider (R1/R2 + 1) = 34:1 and applied to the Q1 Q2 exponential-gain current mirror. There, it is level-shifted and temperature-compensated by Q1, then anti-logged by Q2 to produce
Ic2 = 2^(1.4Vin) µA. The resulting C1 timing ramp spans from 5 ms (for Vin = 0) to 40 µs (for Vin = 5 V). The ramp ends when it crosses analog timer U1’s 1.67-V trigger level and is reset via R5 and D1 to U1’s 3.33-V threshold level, starting another oscillation cycle. The resulting sawtooth will therefore repeat at F = Ic2/(1.67·C1) = 2^(1.4Vin) µA / 5 nC = 200·2^(1.4Vin) Hz.
Nick’s lovely designs show that short pulses such as U1’s output spikes require conversion to a waveshape with a less intense harmonic content if we want to hear a listenable audio output. Therefore, U2’s switch-tail counter divides U1’s oscillation frequency by 4. This produces a hardly sinusoidal, but at least somewhat less annoying, tri-level 50 Hz to 6400 Hz final output.
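The quoted span can be sanity-checked directly from the design equation F = 200·2^(1.4Vin) Hz and U2’s divide-by-4 counter:

```python
import math

def u1_freq_hz(vin: float) -> float:
    """U1 oscillation frequency per the design equation F = 200 * 2^(1.4*Vin) Hz."""
    return 200 * 2 ** (1.4 * vin)

def output_freq_hz(vin: float) -> float:
    """Final output frequency after U2's divide-by-4 switch-tail counter."""
    return u1_freq_hz(vin) / 4

f_lo = output_freq_hz(0.0)        # 50 Hz at Vin = 0 V
f_hi = output_freq_hz(5.0)        # 6400 Hz at Vin = 5 V
octaves = math.log2(f_hi / f_lo)  # 7 octaves, a 128:1 span
```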
Thanks go to Nick for a fun and well-conceived design topic, and of course to editor Aalyia for her friendly Design Idea department format that makes such enjoyable collaboration possible!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Revealing the infrasonic underworld cheaply, Part 1
- Revealing the infrasonic underworld cheaply, Part 2
- A pitch-linear VCO, part 1: Getting it going
- A pitch-linear VCO, part 2: taking it further
- Can a free running LMC555 VCO discharge its timing cap to zero?
The post Seven-octave linear-in-pitch VCO appeared first on EDN.
Add-on features in electronic products: The good part

Add-on features are all the rage in electronic products. But are they actually handy or just embody bells and whistles? What’s their hardware and software cost? And more importantly, do they serve an actual value or merely add to the user-experience clutter? Bill Schweber looks at this user interface mystery and finds some answers.
Read the full story at EDN’s sister publication, Planet Analog.
Related content
- Coffee Teardown: Simple, Effective Design
- What’s the Impact of Medical-Design Mandates?
- Soft on/off is designer’s dilemma, user’s headache
- Extreme Analog Design: Don’t Forget Those Passives
- AVAS and the dilemma of ‘too good to be true’ designs
- Instrumented Hockey Puck is an Extreme Analog Design
- With so many mandates, can we still deliver a product design?
The post Add-on features in electronic products: The good part appeared first on EDN.
IC verification tool addresses design complexity, productivity gap

A new software suite combines connectivity, scalability, and data-driven artificial intelligence (AI) capabilities to push the boundaries of the IC verification process and make chip design teams more productive. Questa One aims to address the verification productivity gap for large, complex designs spanning IP to IC to systems.
The steadily increasing complexity of 3DICs, chiplet-based designs, and software-defined architectures is further compounded by a critical talent shortage and growing demands for enhanced security and lower power consumption. “Questa One uses new technical advances to deliver the fastest functional, fault, and formal verification engines available,” said Abhi Kolpekwar, VP and GM of digital verification technologies at Siemens EDA.
Figure 1 Questa One strives to redefine IC verification from a reactive process into an intelligent, self-optimizing system. Source: Siemens EDA
A recent Wilson Research Group survey suggests that only one in seven IC projects achieves first-time silicon success. Chris Giles, director of product management for static and formal at Siemens EDA, calls this a jaw-dropping, staggering decline. “Our approach is to enable engineers to do more with less, with not just faster engines but also faster engineers with fewer workloads,” he said.
Figure 2 Here is a view of the decline in first-time silicon success and the increase in FPGA bugs. Source: Wilson Research Group
Giles spoke with EDN to explain the technology fundamentals of this new verification tool.
Questa One’s three tenets
Giles said that Questa One has been developed around three core principles:
- Scalable verification: It allows engineers to speed verification closure. Giles noted that the semiconductor industry is struggling to tackle large designs. “That’s why we see a decline in first-time silicon success,” he added. “Chip designs are getting so large that it’s difficult to verify them in one piece.” Questa One verification aims to allow engineers to work on large chip designs.
- Data-driven verification: It leverages data for AI-powered analytics to bring new insights and to improve verification productivity. “It collects datasets that allow verification tools to either make recommendations or directly decide what to do next and do it productively,” said Giles.
- Connected verification: Questa One connects EDA tools and verification IP to form a cohesive ecosystem for robust verification, validation, and test operations. In other words, it uses a broad set of technologies and analyses to provide insights and raw verification power.
Figure 3 Questa One offerings are shown with three main value propositions summed up at the bottom. Source: Siemens EDA
Questa One’s four components
Questa One has the following focus areas:
- Questa One simulator: This simulator engine is built from the ground up. It performs functional and fault simulation for RTL, GLS, and DFT applications with parallel processing and profiling add-ons.
- Questa One SFV: The stimulus-free verification (SFV) solution delivers user productivity through synergistic combinations of static and formal analyses, AI, automation, and parallelization. “The current static and formal technology is very fragmented, challenging high productivity,” Giles said. “SFV integrates static and formal analyses, AI, and parallelization to address this challenge.”
- Questa One verification IQ: It’s a coverage solution that utilizes generative, analytic, and prescriptive AI to drive verification closure faster with fewer workloads. “It features an intelligent interface that provides insight into the entire verification ecosystem,” Giles added.
- Questa One Avery VIP: The solution, based on Avery’s high-quality VIP and high-coverage compliance test suites (CTS), offers protocol-aware debug and coverage analytics to help increase productivity. It supports 3DIC and chiplet verification from IP to system-on-chip (SoC) design.
Figure 4 Four main components of Questa One include a simulator, a static and formal verification solution, a verification intelligence coverage analysis solution, and Avery verification IP. Source: Siemens EDA
Questa One in action
Semiconductor IP supplier Rambus acknowledged an improved verification experience in managing data center workloads like generative AI while implementing IPs for PCIe, CXL, and HBM interfaces. Rambus particularly mentioned Questa One’s simulation, static and formal analysis, and verification IP technologies.
Then there is Arm, which used the Questa One simulator to reduce regression time in its latest AArch64 architecture. “The Questa One verification solution has improved our verification productivity across traditional on-premises and cloud deployments,” said Karima Dridi, head of productivity engineering at Arm.
MediaTek, another early user of Questa One, has utilized its formal verification and simulation technologies. “Questa One Property Assist utilizes generative AI to save us weeks of engineering time, and Questa One Regression Navigator predicts which simulation tests are most likely to fail, runs them first, and saves days of regression and debugging time,” said Chienlin Huang, senior technical manager of Connectivity Technology Department at MediaTek.
Questa One claims to yield step-function gains in smart regression, smart analysis, smart engine, and smart debug domains. Design testimonials from Arm, MediaTek, and Rambus are a good start.
Related Content
- How Do You Verify IC Performance?
- Integrate tools for effective verification
- The profile of a data-driven IC design verification tool
- Why verification matters in network-on-chip (NoC) design
- IC design: A short primer on the formal methods-based verification
The post IC verification tool addresses design complexity, productivity gap appeared first on EDN.
A simple circuit to let you characterize JFETs more accurately

In April 2012, EDN published a circuit by John Fattaruso that lets you quickly measure the drain-source saturation current and the pinch-off voltage of both an N-JFET and a P-JFET. The pinch-off voltage (Vp) is measured by inserting a very large resistance between the source and the ground. The drain-source saturation current (IDSS) is measured by inserting a small resistance between the source and the ground. Then, the voltage across this resistor is measured, and both Vp and IDSS can be calculated using Ohm’s law.
There is a catch with this circuit: Since IDSS is measured across a non-zero resistor, there is a deviation from the real IDSS, see Figure 1. This circuit does not really measure point A, but actually measures point B slightly before this. For JFETs with lower Vp voltages and/or higher IDSS currents, there can be a deviation between the measured and real IDSS value of 5% or more.
Figure 1 A standard N-JFET drain-source current vs gate-source voltage curve.
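The size of that deviation can be estimated with the textbook square-law model ID = IDSS·(1 − VGS/Vp)²: the drain current flowing through the source-sense resistor Rs self-biases the gate-source junction, so the original circuit settles at point B below the true IDSS. A sketch with hypothetical device values (not from any specific JFET datasheet):

```python
def measured_idss_ma(idss_ma: float, vp_v: float, rs_ohm: float) -> float:
    """Iterate Id = Idss * (1 - Id*Rs/|Vp|)^2 to find the current actually
    read across a source resistor Rs, per the square-law JFET model."""
    i_ma = idss_ma
    for _ in range(200):  # simple fixed-point iteration; converges quickly
        v_s = (i_ma / 1000.0) * rs_ohm          # source voltage = |VGS|, volts
        i_ma = idss_ma * (1 - v_s / abs(vp_v)) ** 2
    return i_ma

# Hypothetical device: Idss = 10 mA, Vp = -1.5 V, sensed across 5 ohms
i_b = measured_idss_ma(10, -1.5, 5)  # about 9.4 mA, roughly 6% below the true Idss
```

With a zero-ohm sense path, which is effectively what the transimpedance circuit below provides, the same model returns the true IDSS.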
Wow the engineering world with your unique design: Design Ideas Submission Guide
Circuit idea
The accuracy of the circuit can be drastically increased by making a couple of minor changes. Figure 2 shows the basic circuit.
Figure 2 A basic, improved circuit for the JFET IDSS and Vp measurement.
An astute reader will immediately see the two circuits’ similarities and differences. Switch 1 is, again, used to select between N-channel JFETs and their less common sibling: P-channel JFETs.
Switch 2 is used to select between Vp and IDSS measurement. In the position as drawn, IDSS is measured. In this position, A1 is set up as a transimpedance amplifier.
With the op-amp’s non-inverting input connected to ground, A1 will keep the inverting input, and hence the source, to ground as well. This guarantees that the true IDSS is measured. Resistor R3 then converts the current to a voltage that can be measured at the output of A1.
When Switch 2 is flipped to the other position, A1 is set up as a simple voltage follower. The pinch-off voltage that develops across R1 is then buffered and available at the output of A1.
Full implementation
Now that we have seen the basic circuit, we can look at a full implementation (Figure 3). Resistor R2 limits the current that can flow in case a JFET is inserted incorrectly. In pinch-off measurement mode, the impedances around Q1, R1, and A1 are all pretty high. To limit the influence of noise, capacitor C1 is added. It is best to keep these wires short and/or to build the circuit in a shielded box.
Figure 3 First implementation of the practical circuit used to measure IDSS and Vp.
Most operational amplifiers can only source or sink a small amount of current. The drain-source saturation current can easily be in the tens of milliamperes. To boost the current output capabilities of A1, a complementary bipolar transistor output stage is added. Please note that the output is not short-circuit proof. If preferred, a simple 1 kΩ to 10 kΩ resistor can be added in series with the output.
With the current resistor values in the circuit, a pinch-off voltage of up to ±10 V and a saturation current of roughly ±100 mA can be measured.
Although there are JFETs with large saturation currents (think J109 with IDSS > 40 mA and J108 with IDSS > 80 mA!), such a range is simply not needed for most JFETs. So, a variation on this circuit was developed. The pinch-off voltage range remained the same, but the saturation-current range was reduced to 25 mA, covering almost all JFET types. A further requirement was that the output voltage range for both measurements needed to be the same, 0 … ±5 V, so that a moving coil meter readout could be used with a single range.
See Figure 4 for the implementation.
Figure 4 A circuit with the tailored measurement range that is suited for most JFETs.
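In both variants, the IDSS scaling follows from the transimpedance relation Vout = ID·R3: the sense resistor is simply the full-scale output voltage divided by the full-scale current. The values below are derived from that relation under a 5-V full-scale assumption; they are illustrative, not read off the schematics:

```python
def transimpedance_r3_ohm(full_scale_v: float, full_scale_ma: float) -> float:
    """Feedback resistor that maps a full-scale drain current to the output range."""
    return full_scale_v / (full_scale_ma / 1000.0)

r3_100ma = transimpedance_r3_ohm(5.0, 100)  # 50 ohms for a 100-mA full scale
r3_25ma = transimpedance_r3_ohm(5.0, 25)    # 200 ohms for the 25-mA variant
```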
Read-out
A read-out needs to be added to make this a complete measurement instrument. Since I have a large stock of moving coil meters, I decided to use one of these for the read-out. To some, they may look antiquated, but they are a joy to use and a marvel of mechanical engineering! The output can be positive or negative depending on whether you are measuring Vp or IDSS, and whether an N-type JFET or a P-type JFET is being tested. So, this is something that needs to be dealt with. Also, it would be nice if there were some kind of polarity indication. See Figure 5 for the read-out circuit.
Figure 5 The readout circuit with sensitive polarity indication.
The 1-mA moving coil meter is included in the feedback loop around the op-amp. D3-D6 form a common rectifier bridge so that, independent of the polarity of the input voltage, the meter is always fed a positive current.
Transistors Q1 and Q2 serve as a polarity indication. Positive voltages will turn on Q1 and LED D9. LED D10 will indicate negative voltages.
Diodes D7 and D8 are not needed for the rectification. Because of these diodes, the A1 output voltage must swing above/below ±1.8 V before any significant current will flow through the meter. This, in turn, guarantees that transistors Q1 and Q2 will already turn on at very low input voltages, giving a good polarity indication across the whole input range.
Test socket
Over the years, manufacturers have created JFETs with almost every possible pin-out, so making a single universal test socket is not so trivial. With three leads, there are 3! = 6 possible combinations, as shown in Table 1.
| # | Pin-out |
|---|---------|
| 1 | G-D-S |
| 2 | G-S-D |
| 3 | D-G-S |
| 4 | D-S-G |
| 5 | S-G-D |
| 6 | S-D-G |

Table 1 The six possible pin-out combinations that can be used for an off-the-shelf JFET.
Of course, a JFET with a pin-out of S-D-G (#6) can be tested in a socket with pin-out G-D-S (#1) simply by inserting it in reverse. This effectively eliminates half of the possible combinations, so we are left with the three shown in Table 2.
| # | Pin-out |
|---|---------|
| 1 | G-D-S |
| 2 | G-S-D |
| 3 | D-G-S |
| 4 | = #2 reversed |
| 5 | = #3 reversed |
| 6 | = #1 reversed |

Table 2 A reduction in the number of pin-out combinations by simply reversing the component within the test socket.
After a bit of doodling, we can create a single five-pin test socket that can accommodate every possible JFET pin-out as shown in Table 3.
| Pin 1 | Pin 2 | Pin 3 | Pin 4 | Pin 5 |
|-------|-------|-------|-------|-------|
| D | S | G | D | S |

Table 3 A single 5-pin test socket to accommodate all possible JFET pin-outs.
There are two different variants possible; this is left as an exercise to the reader. The same logic can be applied to create universal test sockets for bipolar transistors, of course.
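The claim that the five-pin D-S-G-D-S socket covers every pin-out is easy to verify by brute force: slide each three-pin permutation (and its reversal, i.e., the part inserted backwards) along the socket and look for a contiguous match:

```python
from itertools import permutations

SOCKET = "DSGDS"  # the 5-pin socket from Table 3

def fits(pinout: str) -> bool:
    """True if a device with this left-to-right pin-out fits the socket,
    inserted either normally or reversed."""
    windows = {SOCKET[i:i + 3] for i in range(len(SOCKET) - 2)}
    return pinout in windows or pinout[::-1] in windows

# All 3! = 6 possible pin-outs find a home:
assert all(fits("".join(p)) for p in permutations("GDS"))
```

The same brute-force check makes short work of finding the other valid five-pin arrangement left as an exercise above.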
Figure 6 The final PCB implementation of the practical JFET circuit used to measure IDSS and Vp, showing the test socket.
In closing
Thanks to John Fattaruso for his excellent design idea, which sprouted this one! We all stand on the shoulders of the giants who came before us.
Cor van Rij blew his first fuse at 10 under the close supervision of his father, who promptly forbade him from ever working on the house mains again. He built his first regenerative receiver at the age of 12, and as a boy, his bedroom was decorated with all sorts of antennas, and a huge collection of disassembled radios took up every horizontal surface. He studied electronics and graduated cum laude.
He worked as a data design engineer and engineering manager in the telecom industry and has spent almost 20 years as a principal electrical design engineer, specializing in analog and RF electronics and embedded firmware. Every day is a new discovery!
Related Content
- Simple circuit lets you characterize JFETs
- A Conversation with Mike Engelhardt on QSPICE
- Your friend, the JFET.
- Building a JFET voltage-tuned Wien bridge oscillator
The post A simple circuit to let you characterize JFETs more accurately appeared first on EDN.
Cutting into a multi-solar panel parallel combiner

Earlier this year, within the concluding post of a multi-part series that explored a not-as-advertised portable power generator, its already-broken-on-delivery bundled solar panel:
and the second solar panel I’d also bought for the setup (and subsequently also returned):
I discussed the primary options (serial and parallel) for merging the outputs of multiple solar panels, the respective strengths and shortcomings of the two approaches and, in the parallel-connection case, the extra circuitry that (unless already built into the panels themselves) would likely be necessary to prevent reverse-current hotspots in situations where one or both panels were in dim light-to-darkness.
Since both panels I’d bought, plus the portable power generator they were intended to “feed”, were all based on Anderson Powerpole PP15-45 connectors:
the parallel combiner I’d also bought from (and subsequently also returned to) Amazon had Anderson Powerpole connectors on both input ends, plus the output:
What if anything was inside it beyond just two pairs of input wire, with like-polarity cables soldered together and to an output strand, all within an intermediary watertight compartment? And if more, why? Here’s what I wrote back then:
Assume first that the combiner cable simply merges the panels’ respective positive and negative feeds, with no added intermediary electronics between them and the electrons’ intended destination. What happens, first, if all the parallel-connected panels are in shade (or to my earlier “dark” wording surrogate, it’s nighttime)? If the generator is already charged up, its battery pack’s voltage potential will be higher than that of the panels themselves, resulting in possible reverse current flow from the generator to the panels. Further, what happens if there’s an illumination discrepancy between the panels? Here again there’ll be a voltage potential differential, this time between them. And so, in this case, even if they’re still charging up the generator’s batteries as intended, there’ll also be charging-rate-inefficient (not to mention potentially damaging; keep reading) current flow from one panel to the other.
The result, described in this crowded diagram from the same combiner-cable listing on Amazon:
is what’s commonly referred to as a “hotspot” on one or all panels. Whether or not it negatively impacts panel operating lifetime is, judging from the online discussions I’ve auditioned, a topic of no shortage of debate, although I suspect that at least some folks who are skeptical are also naïve…which leads to my next point: how do you prevent (or at least minimize) reverse current flow back to one or both panels? With high power-tolerant diodes, I’ll postulate.
Those folks who think you can direct-connect multiple panels in parallel with nothing but wire? What I suspect they don’t realize is that there are probably reverse current-suppressing diodes already in the panels, minimally one per but often also multiple (since each panel, particularly for large-area models, comprises multiple sub-panels stitched together within the common frame). The perhaps-already-obvious downside of this approach is that there’s a forward-bias voltage drop across each diode, which runs counter to the aspiration of pushing as much charge power as possible to the destination battery pack…
If you look closely at the earlier “crowded diagram” you can see a blurry image of what the combiner cable’s circuitry supposedly looks like inside:
And I closed with this:
Prior to starting this writeup, I returned the original combiner cable I bought, since due to my in-parallel return of the Duracell and Energizer devices, I no longer needed the cable, either. But I’ve just re-bought one, to satisfy my own “what’s inside” research-induced curiosity, which I’ll share with you in a teardown to come.
That time is now. Since I strongly suspected my teardown would be destructive, I picked up the cheapest combiner I could find on Amazon. This one, to be precise, from the same supplier I’d chosen before (therefore presumably with the same “guts” in between the output and inputs):
In this particular case, the combiner was intended for use with Jackery portable power stations (historically based on, as I’ve noted before, either a DC7909 or DC8020 connector depending on the model), so it included native-plus-adapter support for both plug standards. Today’s patient was “Amazon Warehouse”-sourced, therefore $3.20 cheaper than the $15.99 list price. And again, I assumed it wouldn’t live past my dissection of it, anyway. Speaking of which, here it is:
Now freed, along with its associated output adapter, from clear-plastic captivity and as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
Input(s) end:
Middle thirds, top and bottom:
And output end, both “bare” and adapter-augmented:
Back to the middle third for a side view. Look, it’s an ultrasonic welded seam all the way around!
I’m glad to see that at least some of you enjoyed my attempted (and successful, albeit not clean) breach of an ultrasonic-welded wall wart case at the beginning of last month.
To the Hackaday crowd: No, it wasn’t intended as an April Fools’ joke. I had no idea what day Aalyia was going to publish it, although in retrospect, excellent choice, my esteemed colleague!
This time I decided to downscale my “implements of destruction” somewhat, downgrading from a 2.5 lb. sledge to a more modest ball-peen hammer, and to a more diminutive but no less sharp (unfortunately, this time absent a “hammer end”) paint scraper:
I’d also like to introduce you to my equally diminutive, recently acquired vise, the surrogate for the Black & Decker Workmate I used last time. Isn’t it dainty (albeit surprisingly sturdy)?
It took a few more whacks than I would have preferred (or maybe I was just being cautious after last time’s results), but eventually I got inside, and cleanly so this time, if I do say so myself:
The other side…not so much, although still not bad (and yes, to several readers’ suggestions, I also own a hacksaw, which I’ve used before in similar situations; I was just angling for variety):
All that was left was a flat-head screwdriver acting as a lever arm to pry the two halves apart:
And we’re in:
This initial perspective is of the bottom of the device:
Note the thick PCB traces and their routings. Keep this in mind when we flip it to the other side:
Speaking of which, let’s next remove those two screws:
And the PCB’s now free:
Here’s the bottom side of the PCB again, now absent the case half that previously surrounded it:
And here’s the now-exposed top half, blurrily glimpsed earlier in one of the “stock photos”, that we all really care about:
Zooming in a bit:
And now even closer, courtesy of my crude, inexpensive loupe-as-supplemental-lens setup:
Those are indeed “high power-tolerant diodes”! Specifically, they’re multi-sourced (does anyone out there know if the first-line “LGE” mark refers to LG Electronics?) MBRD1045 Schottky devices, variously referred to as both “diodes” and “rectifiers”, the latter because their Schottky-derived low forward-voltage loss makes them amenable to use in (among other things) full-wave rectifier circuits like the one seen in last month’s “wall wart”. In actuality, the two terms refer to the same thing, as a discussion forum thread I came across in my research made clear. This memorable phrase in one of the thread’s posts cracked me up (no, I won’t reveal if I agree!):
EEs are not known for consistency and precise language.
Admittedly, a circuit diagram I found in several suppliers’ datasheets gave me initial pause:
Two anode pins? Were the same-polarity outputs of both solar cells combined ahead of the diode? And if so, why were there four diodes in the design, instead of just two?
Eventually I did the math and calculated that the spec’d 10 A of peak per-diode forward current would at best barely pass even one solar panel’s output, far from two panels’ aggregate load (thereby, I suspect, being the primary cause, versus the slight forward voltage drop across the diodes, of the previously mentioned inefficiency results noted by some combiner users). But even before that, I’d realized that such a setup would achieve only one of the two desired combiner objectives. It would indeed prevent this scenario:
What happens, first, if all the parallel-connected panels are in shade (or to my earlier “dark” wording surrogate, it’s nighttime)? If the generator is already charged up, its battery pack’s voltage potential will be higher than that of the panels themselves, resulting in possible reverse current flow from the generator to the panels.
But it would do nothing to correct the current flow in this other key potential “hotspot” scenario:
What happens if there’s an illumination discrepancy between the panels? Here again there’ll be a voltage potential differential, this time between them. And so, in this case, even if they’re still charging up the generator’s batteries as intended, there’ll also be charging-rate-inefficient (not to mention potentially damaging; keep reading) current flow from one panel to the other.
So, four diodes total it is, two for each panel (one for the output and the other for the return), with both anode connections of each diode leveraged for a common input, and the two panels’ respective positive and negative pairs combined after the multi-diode structure. This “digital guy” may yet evolve embryonic-at-least analog and power electronics expertise…nah. C’mon, let’s get real. Delusions are inexhaustible, don’cha know. Regardless, did I get the analysis right, or have I missed something obvious? Sound off with your thoughts in the comments!
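For the curious, here’s that back-of-the-envelope diode math in a few lines of Python. The panel wattage and maximum-power-point voltage are hypothetical figures for a generic portable panel, and the 0.6-V Schottky forward drop is an approximate datasheet-typical value, not a measurement of this combiner:

```python
# Back-of-the-envelope check of the combiner's diode ratings and losses.
# Panel figures are hypothetical; the forward drop is a datasheet-typical value.
PANEL_POWER_W = 100.0   # assumed per-panel rated output
PANEL_VMP_V = 18.0      # assumed maximum-power-point voltage
DIODE_VF_V = 0.6        # approximate MBRD1045 forward drop at load current
DIODE_IF_MAX_A = 10.0   # MBRD1045 average rectified forward current rating

panel_current = PANEL_POWER_W / PANEL_VMP_V   # ~5.6 A from each panel
combined_current = 2 * panel_current          # ~11.1 A if merged ahead of one diode

# One diode per leg (output and return) puts two drops in each panel's path:
loss_per_panel = 2 * DIODE_VF_V * panel_current   # ~6.7 W dissipated per panel
efficiency = 1 - loss_per_panel / PANEL_POWER_W   # ~93% delivered

print(f"per-panel current: {panel_current:.1f} A")
print(f"combined current: {combined_current:.1f} A, "
      f"vs. a single diode's {DIODE_IF_MAX_A:.0f} A rating")
print(f"diode loss per panel: {loss_per_panel:.1f} W ({efficiency:.0%} delivered)")
```

Note how, under these assumptions, merging both panels ahead of a single diode would exceed its 10 A rating, while one diode pair per panel keeps each path comfortably within spec at the cost of a few watts of forward-drop loss.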
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Energizer’s PowerSource Pro Battery Generator: Not bad, but you can do better
- The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
The post Cutting into a multi-solar panel parallel combiner appeared first on EDN.
Two design solutions for Bluetooth channel sounding

Bluetooth channel sounding—a new protocol stack designed to enable secure and precise distance measurement between two Bluetooth Low Energy (LE) devices—is propelling Bluetooth technology into a new era of location awareness. It offers true distance awareness while enhancing Bluetooth devices’ ranging capabilities.
Bluetooth channel sounding’s use spans from helping locate devices such as phones or tablets to digital security enhancements like geofencing. It can also be used in smart locks, pet trackers, vehicle keyless entry, and access control applications.
Hardware and software solutions are starting to emerge to fulfill the potential of Bluetooth channel sounding and provide sub-meter accuracy for Bluetooth-empowered devices. These solutions include reference boards, development kits, and software stacks.
Below are two design case studies demonstrating the potential of Bluetooth channel sounding technology.
Radio board and antenna hardware
Silicon Labs’ xG24 radio board—designed to work with Pro Kit—aims to help developers create and prototype products using Bluetooth channel sounding for precise distance estimation. Pro Kit includes a BRD4198A EFR32xG24 2.4 GHz +10-dBm radio board, a dipole antenna, and reference designs. It works with either a coprocessor with an external MCU or a wireless system-on-chip (SoC) with an integrated MCU.
The xG24 Dev Kit, meanwhile, features a dual-antenna PCB design and a channel sounding visualizer tool that lets developers view distance measurements in real time. The single-antenna hardware offered in the Pro Kit has fewer antenna paths and limited multipath information, which makes it more suitable for basic Bluetooth channel sounding applications.
Figure 1 USB or coin cell powered development platform with a dual-antenna design and up to +10 dBm output power. Source: Silicon Labs
On the other hand, dual-antenna hardware offers higher accuracy, better spatial performance, and enhanced multipath resolution, making it suitable for advanced applications such as key fobs and tags that demand precise distance estimation (Figure 1). Its antenna diversity also bolsters signal quality and robustness.
Software stack
Bluetooth channel sounding technology uses phase-based ranging (PBR), round-trip time (RTT), or both to accurately measure the distance between two Bluetooth LE-connected devices. PBR utilizes the principle of phase rotation in RF signals to determine the precise distance between two devices. RTT, on the other hand, refers to the time a signal takes to travel over the communication channel from the initiator to the reflector and back again.
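To make the two principles concrete, here’s an idealized sketch (my own illustration, not Silicon Labs’ or Metirionic’s implementation) of how each technique converts its raw measurement into distance, ignoring multipath and noise. PBR uses the extra phase rotation a round trip adds between two tones at different frequencies; RTT divides the measured round-trip time of flight by two:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pbr_distance(delta_phase_rad: float, delta_f_hz: float) -> float:
    """Distance from the round-trip phase rotation difference between
    two tones spaced delta_f_hz apart (idealized: no multipath, no noise)."""
    return C * delta_phase_rad / (4 * math.pi * delta_f_hz)

def rtt_distance(round_trip_s: float) -> float:
    """Distance from a measured round-trip time of flight."""
    return C * round_trip_s / 2

# Forward-model a 10 m target with 1 MHz tone spacing, then invert:
d_true = 10.0
delta_f = 1e6
delta_phase = 4 * math.pi * delta_f * d_true / C
print(f"PBR estimate: {pbr_distance(delta_phase, delta_f):.2f} m")
print(f"RTT estimate: {rtt_distance(2 * d_true / C):.2f} m")
```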
The above solution from Silicon Labs uses both, employing RTT to verify and cross-check the PBR measurements. However, Metirionic, a German supplier of wireless ranging and positioning technologies, offers an alternative to both PBR and RTT by leveraging the channel impulse response (CIR) technique for highly accurate and reliable distance estimation.
Figure 2 The channel sounding evaluation kit is built around Nordic Semiconductor’s nRF54L15 wireless MCU. Source: Metirionic
Its Bluetooth channel sounding evaluation kit—Metirionic Advanced Ranging Stack (MARS)—is a low-power signal-processing upper-layer software package (Figure 2). It can run on Nordic’s nRF54L15 embedded MCU, on an external MCU or processor, or on a host PC to ensure precise, reliable, and real-time ranging and location accuracy for industrial, Internet of Things (IoT), real-time location services (RTLS), logistics, and secure access applications.
Related Content
- A short design tutorial on Bluetooth Channel Sounding
- How Bluetooth Channel Sounding Compares to Other Location Tech
- Bluetooth Channel Sounding Improves Distance Estimation Accuracy
- New wireless MCUs feature software radio and Bluetooth channel sounding
- Rohde & Schwarz to show measurements on novel Bluetooth Channel Sounding signals
Antique NYC subway cars

We took a family trip with our grandsons to the New York Transit Museum in Brooklyn, NY. Retired subway cars were on display, some of them seemingly not that old while others dated way, way back. Visitors could freely roam in and out. I was in this one car that had been in service in 1903 which meant it predated the advent of electronics. Even the vacuum tube had not yet been invented by then.
I noticed the passenger area’s bare light bulbs and got really close to one (Figure 1). It was rated at 56 watts and 120 volts. A question came to mind: how did that car use 120-volt light bulbs when the third-rail voltage was (and still is) 600 volts DC?
Figure 1 A subway car light bulb up close showing 56-W and 120-V rating.
When we got home, I tried looking up subway car technical data, but when I came to a wiring schematic, I couldn’t read it. The symbols were indecipherable to me. Only then did it dawn on me that five such bulbs connected in series would be operable from 120 x 5 = 600 volts. If any one of the five were to burn out, all five would go dark, but then maintenance would change all five and discard four good bulbs with the one blown-out bulb. It sounded wasteful, but it would have been a practical approach.
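The series-string arithmetic, plus the current a 56-W, 120-V bulb implies, works out as follows (a trivial check of the hypothesis above, nothing more):

```python
# The five-bulbs-in-series hypothesis, checked numerically.
BULB_V = 120.0   # rated voltage per bulb (from the bulb's markings)
BULB_W = 56.0    # rated power per bulb
RAIL_V = 600.0   # third-rail supply voltage

bulbs_in_series = RAIL_V / BULB_V        # 5 bulbs drop the full 600 V
string_current = BULB_W / BULB_V         # I = P / V, about 0.47 A
string_power = bulbs_in_series * BULB_W  # 280 W per five-bulb string

print(f"{bulbs_in_series:.0f} bulbs in series, "
      f"{string_current:.2f} A, {string_power:.0f} W per string")
```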
Is that the actual truth? I don’t know, and there was nobody on hand to ask, even if I had been quick enough of wit to inquire. Also, I just wasn’t smart enough on site to see if the total number of bulbs in the car was a multiple of five. Maybe one day, I can do that.
Another point about those subway bulbs is that they had left-handed threads on their bases, while household bulbs use right-handed threads. This was to discourage light bulb thefts. Stolen bulbs would not fit into light bulb sockets in households, only into the sockets of subway cars.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- The downside of overdesign, Part 2: A Dunn history
- The downside of overdesign
- Motograph News Bulletin debuts in New York City, November 6, 1928
- 1st commercial elevator starts running, March 23, 1857
Quantum cleanrooms: Extreme environments for building tomorrow’s computers

Quantum computers are among today’s most exciting emerging technologies, but their design, testing, and manufacturing require unparalleled care to avoid damaging their components. Semiconductors are sensitive, so they require production environments with minimal contamination risks. Cleanrooms are the industry-standard solution, but even these facilities must reach higher standards for quantum computer development.
While a cleanroom overhaul is inherently expensive and disruptive, these costs may be minimal compared to the potential of quantum technology. A quantum chip from Google was recently able to complete, in just five minutes, a calculation that would take a classical supercomputer roughly 10 septillion years.
This enormous processing upgrade is thanks to quantum computers’ use of qubits instead of bits. Whereas a bit represents either one or zero, a qubit can exist in a superposition of both simultaneously—a seemingly small distinction with a dramatic impact on computing speed and power. However, the superconductors and other components necessary to enable this process are highly sensitive to external disturbances.
Many cutting-edge quantum innovations rely on nanotechnology to achieve the desired performance. Nanomaterials have superior thermal stability and electrical conductivity, making them ideal for high-power applications like quantum computing. They also let electronics engineers fit more components in a confined space to uphold Moore’s law.
As helpful as such technologies are, working with them creates an issue in conventional settings. Given their size, nanomaterials are easily contaminated and damaged. The intensity of quantum operations exacerbates this sensitivity. Even slight deviations in temperature, light, and air quality could jeopardize the performance of this highly sophisticated and expensive equipment.
Source: University of Waterloo
A look inside the quantum cleanroom
Quantum cleanrooms are the solution. Engineers must design and build tomorrow’s cutting-edge devices in equally cutting-edge production facilities. Even a conventional cleanroom may be too prone to contamination and environmental variability to support quantum computer development.
The most common cleanroom ratings today are ISO 7 and 8, which allow concentrations of 352,000 and 3.52 million 0.5-micron particles per cubic meter, respectively. These standards also don’t consider any particulate matter below 0.5 microns. While that’s sufficient for traditional semiconductor engineering, quantum cleanrooms must go further. More stringent ratings of ISO 6 or cleaner, which do set limits on sub-0.5-micron particles, are necessary.
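Those particle limits follow from the concentration formula in ISO 14644-1, which this short sketch evaluates (the formula is from the standard; the standard rounds the computed values to the figures cited above):

```python
def iso_class_limit(n: float, particle_um: float) -> float:
    """Maximum allowed concentration (particles per cubic meter) of
    particles >= particle_um microns for ISO 14644-1 class n."""
    return 10 ** n * (0.1 / particle_um) ** 2.08

for n in (6, 7, 8):
    print(f"ISO {n}: {iso_class_limit(n, 0.5):,.0f} particles/m^3 at >=0.5 um")

# Cleaner classes also bound smaller particles, e.g. ISO 6 at 0.1 um:
print(f"ISO 6: {iso_class_limit(6, 0.1):,.0f} particles/m^3 at >=0.1 um")
```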
Cleanrooms for quantum development also need different sanitation methods. Researchers at Berkeley Lab recently found that gentler component cleaning resulted in an 87% increase in induction, making parts more resistant to electrical noise. The method in question used lower temperatures, vacuums, and suspended components to minimize environmental hazards.
Even lighting and ambient temperatures require attention in the quantum cleanroom. Many of these components are photosensitive, particularly to blue wavelengths, so overhead lights should lean toward the warm end of the spectrum. Quantum circuits also tend to be temperature-sensitive, so these cleanrooms must use gentle refrigeration techniques to keep the area cold.
Quantum electronics engineers must get used to cleanrooms
As quantum technology advances, electronics design engineers may need to adapt to it. The professionals designing, testing, and producing tomorrow’s most advanced electronics must learn to work with their unique production requirements. Getting used to the quantum cleanroom is a crucial step in getting ready for this next generation of computing.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- Quantum Computers Explained
- The Basics Of Quantum Computing
- Hardware security entering quantum computing era
- A Global Race for Supremacy in Quantum Computing
- BASF and Kipu Focus on End-User Mastery of Quantum Computing
Clock generator boosts GPS accuracy

With a built-in MEMS resonator, SiTime’s Symphonic SiT30100 mobile clock generator replaces up to four discrete timing devices. It provides accurate clock signals for 5G and GNSS chipsets in mobile and IoT devices such as smartphones, tablets, laptops, and asset trackers.
An integrated temperature sensor feeds precise data to compensation algorithms, helping maintain clock stability. This improves GPS accuracy and reduces lock time, enabling stable performance even in harsh environmental conditions.
The SiT30100 delivers four clock outputs at 76.8 MHz, 38.4 MHz, or 19.2 MHz—configurable on any output—for baseband, RF, and GNSS applications. By eliminating the need for an external resonator, the SiT30100 enables a compact 2.22-mm² single-chip solution. Multiple Output Enable pins allow selective output control to reduce power consumption and minimize EMI. The device also features a temperature-to-digital converter with a single-wire UART interface for system-level temperature compensation, supporting frequency stability down to ±0.5 ppm.
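To put the ±0.5-ppm figure in perspective, here is what it translates to in absolute frequency offset at each of the three output rates (a straightforward unit conversion using only the specs quoted above):

```python
OUTPUTS_HZ = (76.8e6, 38.4e6, 19.2e6)  # the three selectable output rates
STABILITY_PPM = 0.5                    # quoted post-compensation stability

for f_hz in OUTPUTS_HZ:
    drift_hz = f_hz * STABILITY_PPM * 1e-6  # ppm -> absolute Hz offset
    print(f"{f_hz / 1e6:.1f} MHz: +/-{drift_hz:.1f} Hz worst-case offset")
```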
The Symphonic mobile clock generator is available now in a 10-pin chip-scale package.
Symphonic SiT30100 product page
SiC MOSFETs reinforce system longevity

Navitas Semiconductor’s latest GeneSiC MOSFETs exceed AEC-Q101 standards, extending lifetime in automotive and industrial systems. Based on trench-assisted planar technology, they are available in HV-T2Pak top-side cooled packages with 6.45-mm creepage and a CTI above 600 V, supporting IEC-compliant operation up to 1200 V.
Navitas uses the term AEC-Plus to designate parts that exceed the AEC-Q101 reliability tests published by the Automotive Electronics Council (AEC), based on multi-lot stress-test results. This in-house benchmark layers additional stress conditions onto standard AEC-Q101 and JEDEC protocols to better mirror real-world automotive and industrial mission profiles by:
- Incorporating dynamic reverse bias (D-HTRB) and dynamic gate switching (D-HTGB) tests
- Running power- and temperature-cycling for over twice the standard duration
- Extending static high-temperature, high-voltage tests (HTRB, HTGB) to over three times the AEC-Q101 interval
- Qualifying parts to 200 °C TJMAX for improved overload capability
Housed in the 14×18.5-mm HV-T2Pak, the initial portfolio includes 1200-V devices with on-resistance from 18 mΩ to 135 mΩ and 650-V devices ranging from 20 mΩ to 55 mΩ. Lower on-resistance (<15 mΩ) SiC MOSFETs in the same package will follow later in 2025. For more information on GeneSiC MOSFETs, click here.
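On-resistance maps directly to conduction loss via P = I²·R<sub>DS(on)</sub>, which is why the quoted milliohm range matters. A quick sketch using the 1200-V portfolio’s on-resistance extremes and an illustrative (hypothetical, not datasheet-sourced) 20-A load:

```python
# Conduction loss scales with the square of load current: P = I^2 * Rds(on).
# The 20 A load is an illustrative assumption, not a Navitas datasheet figure.
def conduction_loss_w(current_a: float, rds_on_mohm: float) -> float:
    """I^2 * R conduction loss for a given on-resistance in milliohms."""
    return current_a ** 2 * rds_on_mohm * 1e-3

for rds_mohm in (18, 135):  # the 1200-V portfolio's quoted Rds(on) extremes
    print(f"{rds_mohm} mOhm at 20 A: {conduction_loss_w(20, rds_mohm):.1f} W")
```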
3D ultrasonic sensor improves robot safety

Sonair’s 3D ultrasonic sensor uses acoustic detection and ranging (ADAR) to enable 360° obstacle detection up to 5 meters. Each ADAR sensor offers a 180×180° field of view, allowing autonomous mobile robots (AMRs) to safely navigate around people and objects.
The beamforming technology behind ADAR—used in SONAR, RADAR, and medical imaging—has been under development at Norway’s MiNaLab research center for over 20 years and is now adapted for in-air ultrasonic sensing.
ADAR empowers autonomous robots with omnidirectional depth perception, enabling them to ‘hear’ their surroundings in real-time 3D using airborne soundwaves to interpret spatial information. The sensor forms a 5-meter virtual shield that helps people and robots safely share space. It combines wavelength-matched transducers with efficient signal processing for beamforming and object recognition.
The 3D ultrasonic sensors offer a cost-effective alternative to LiDAR and camera-based systems, typically consuming just 5 W and performing more reliably in challenging conditions such as poor lighting, dust, and temperature fluctuations.
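The underlying ranging principle is the round-trip time of flight of sound in air, which is easy to sketch. This models only the basic echo timing, not ADAR’s beamforming; note that the speed of sound itself drifts with temperature, one of the conditions such sensors must tolerate:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 C; varies with temperature

def echo_delay_s(distance_m: float) -> float:
    """Round-trip time for an ultrasonic pulse to reach an object and return."""
    return 2 * distance_m / SPEED_OF_SOUND_M_S

for d in (0.5, 2.0, 5.0):  # up to the sensor's quoted 5 m range
    print(f"{d} m -> {echo_delay_s(d) * 1e3:.1f} ms round trip")
```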
Sonair’s ADAR sensor is developed in accordance with ISO 13849-1:2023 PLd / SIL2, with safety certification expected by year-end. The company will unveil the sensor to North American audiences at Automate 2025, with shipments scheduled to begin in July.