EDN Network

Voice of the Engineer
Address: https://www.edn.com/

Between two vendors

Thu, 09/11/2025 - 17:47

It was a classic stand-off. Vendor number one’s system wasn’t talking to vendor number two’s. What to do? Of course! Blame the customer’s network!

I worked for a TV station that was part of a group run by a common owner. One of the stations in the group used a system known as production automation, which allowed a single operator to control all of the equipment in the control room during newscasts, including the video switcher, audio console, camera robotics, video playback, lighting, and graphics generators. The computer system in the newsroom took the scripts written by reporters and producers, generated a sequence called a rundown, and transmitted and updated it in real time to the automation system.

Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale

While performing a major update to one of the systems, communication stopped. Head scratching ensued for a while, and then the two vendors decided the problem must be something in the network blocking the IP packets. The station’s engineers pointed out that nothing had been changed in their network and that, in any case, there was no internal routing or filtering going on. Not good enough, said the vendors: prove it’s not your fault before we continue. Their advice was to install a copy of Wireshark, analyze the packets, and show that the path between the systems was clear.

That’s reasonable as far as it goes, but Wireshark is a mighty powerful tool, and it is not for the faint of heart. At the local TV station level, the IT staff generally does not have the expertise needed to fire it up quickly and interpret its results. The station group’s central IT networking folks do, but getting them involved would have taken a good deal of time, and if they had to travel to the site, expense.

I was just a bystander to this. My own station was one of those with the same systems, so I was included in all of the emails flying back and forth. As it happens, not long before this incident, I had written a small one-trick pony Windows utility. All it did was send IP packets from one computer to another via a specific port. As seen in Figure 1, if the path is clear, the receiving computer replies, and the arrows move. Simple as that.

Figure 1 A demonstration of the Windows utility written by the author, sending IP packets from one computer to another via a specific port.
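The idea behind such a utility can be sketched in a few lines of Python. This is a hedged reconstruction, not the author’s actual Windows program; the port number, UDP transport, and probe payload here are arbitrary choices for illustration. One end echoes datagrams back; the other sends a probe and reports whether a reply made it through.

```python
import socket
import threading
import time

def echo_server(port: int) -> None:
    """Listen on a UDP port and echo every datagram back to its sender."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("0.0.0.0", port))
        while True:
            data, addr = s.recvfrom(1024)
            s.sendto(data, addr)

def path_is_clear(host: str, port: int, timeout: float = 2.0) -> bool:
    """Send a probe datagram and wait for the echo."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"probe", (host, port))
        try:
            reply, _ = s.recvfrom(1024)
            return reply == b"probe"
        except socket.timeout:
            return False

# Demo with both ends on one machine, as in Figure 1; in real use the
# echo server runs on the far machine and host is its address.
threading.Thread(target=echo_server, args=(5005,), daemon=True).start()
time.sleep(0.2)  # give the server a moment to bind
print(path_is_clear("127.0.0.1", 5005))
```

Run one copy as the echo end on each port the vendors name, probe from the other machine, and a True per port is the “arrows move” result of Figure 1.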

I sent the program to the station’s IT director, and in less than half an hour, he installed it on both systems, checked all of the ports the vendors specified, and found them all clear. With no more finger-pointing at the customer, the vendors had to get to work to find the actual cause of the problem, which turned out not to be network-related.

A few notes about the program. The image shown is just a demonstration, with both ends running on the same machine. In real life, one copy would be on each of two machines on the network, across the room, or across the world. Also, to be honest, I probably spent more time getting the ballistics of the arrow movement looking good than on the rest of the program.

Robert Yankowitz retired as Chief Engineer at a television station in Boston, Massachusetts, where he had worked for 23 years.  Prior to that, he worked for 15 years at a station in Providence, Rhode Island.

Related Content

The post Between two vendors appeared first on EDN.

Low-cost NiCd battery charger with charge level indicator

Wed, 09/10/2025 - 17:25

Nickel-cadmium (NiCd) batteries are widely used in consumer electronics due to their ruggedness and long cycle life. Constant-current charging is often recommended by manufacturers. Several websites, including Wikipedia, suggest safely charging NiCd batteries at a 0.1C rate, meaning at 10% of their rated capacity, for 14 to 16 hours rather than the nominal 10 hours.

Slow charging avoids the temperature rise that can shorten the life of the battery. Because some energy is lost during charging, more energy must be supplied to the battery than its actual capacity; hence, 14 to 16 hours of charging instead of 10 hours.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The battery charger

Figure 1 gives the circuit for slow charging a NiCd battery pack with two AA-size 1200-mAh cells. The battery is charged with a 120-mA (0.1C of 1200 mAh) constant current for about 15 hours.

Figure 1 The circuit for a low-cost NiCd (two AA-size 1200-mAh cells) slow charger with charge-capacity indicator. Each segment in the U5 and U6 LED bar graphs indicates a charge-capacity rise of 10%. As the charging current is constant, the duration of charging indicates the charge capacity. After 10 hours, the battery should be fully charged; charging for a few more hours than necessary supplies extra energy to account for losses during charging. R2 and C1 provide the power-on reset for the counters. The Vcc and ground pins of U2, U3, U7, and U8 are not shown here; they must be connected to 9-V DC and Vss, respectively. Time accuracy is not essential; each segment may glow for approximately 1 hour.

Every hour of charging is indicated by the glow of one LED bar-graph segment (U5 and U6). After 15 hours, charging stops automatically. This is not a microcontroller-based circuit, so even people without programming knowledge or a programmer device can build it. A crystal-based timing circuit is not used, as there is no need for time accuracy.

How it works

U1 is a 555 timer, configured as an astable multivibrator to generate a pulse train with a period of 0.88 seconds. For adjustment, R7 can be replaced by a 50-kΩ resistor and a 50-kΩ multiturn potentiometer in series. LED D2 blinks at this rate.

U7 divides this pulse train. Dividing by 2^12 (the output at pin 1) yields a pulse train with a period of about 1 hour. U2A counts these pulses.
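The timing chain works out neatly; here is a quick back-of-the-envelope check in Python. The R/C values passed to the 555 formula are illustrative assumptions, not the schematic’s actual values.

```python
from math import log

def astable_period(ra: float, rb: float, c: float) -> float:
    """Standard 555 astable period: T = ln(2) * (Ra + 2*Rb) * C."""
    return log(2) * (ra + 2 * rb) * c

# Illustrative component values that land near the article's 0.88-s period:
print(round(astable_period(27e3, 50e3, 10e-6), 2))  # about 0.88 s

period = 0.88                      # seconds, per the article
segment = period * 2 ** 12         # one LED segment per divider rollover
print(round(segment / 3600, 3))    # hours per segment, about 1.0
print(round(16 * segment / 3600, 1))  # full 16-count cycle, about 16 hours
```

With a 0.88-s period, 4096 pulses take 3604 s, so each bar-graph segment represents just over an hour, and the 16th decoder state lands at roughly the 16-hour mark.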

U3 is a 4- to 16-line decoder with an active LOW output. The selected output goes LOW, causing the corresponding bar graph segment to glow, while all other outputs remain HIGH. Since the 16th output at pin 15 of U3 remains HIGH, Q1 turns ON and the battery starts charging, and D1 begins glowing.

U4 is configured as a constant-current generator. With R3 set to 10 Ω, the charging current is set at about 120 mA, 10% of the 1200-mAh rating.

During the first hour, the output of U3 at pin 11 goes LOW, and the first segment of the LED bar graph U5 glows. After 1 hour, counter U2A increments once, and the output of U3 at its pin 9 goes LOW, which causes the second segment of the LED bar graph (U6) to glow.

This process goes on until the 15th segment glows to indicate the 15th hour of charging. When the 16th hour starts, the 16th output at pin 15 of U3 goes LOW, turning Q1 OFF.

Now charging stops, and the “Charging ON” LED D1 goes OFF. The inverter U8B converts this LOW output to a HIGH at the U2A clock input, disabling further counting, so Q1 remains OFF. At this point, the battery is fully charged and ready for use.
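The counter/decoder sequencing described above can be modeled behaviorally. This is a logic sketch only, with hours compressed into loop iterations; it follows the article’s description, not a circuit simulation.

```python
# Behavioral model of the hour sequencer: U2A counts hourly pulses, U3 decodes
# the count to one active-LOW output, and the 16th output (pin 15) going LOW
# turns Q1 off and, via inverter U8B, freezes U2A's clock.

def run_charger(max_hours: int = 18) -> list[str]:
    events = []
    for count in range(max_hours):           # U2A increments once per hour
        if count == 15:                      # U3's 16th output goes LOW
            events.append("charging stopped")  # Q1 OFF; counting disabled
            break
        events.append(f"hour {count + 1}: segment {count + 1} lit")
    return events

events = run_charger()
print(len(events))   # 16 entries: 15 lit segments, then the stop event
print(events[-1])    # charging stopped
```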

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Low-cost NiCd battery charger with charge level indicator appeared first on EDN.

Making your architecture ready for 3D IC

Wed, 09/10/2025 - 11:19

The landscape of IC design is experiencing a profound transformation. With the physical and economic limits of conventional two-dimensional scaling, the industry is rapidly embracing three-dimensional integrated circuits (3D IC) to unlock higher performance, lower power consumption, and denser silicon utilization.

For semiconductor professionals, understanding the distinct nuances of 3D IC microarchitectures is no longer optional. It’s becoming essential for those seeking to maintain a competitive edge in next-generation system design.

Microarchitecting in the 3D IC era represents more than an incremental change from traditional practices. It entails a fundamental redefinition of how data and controls move through a system, how blocks are partitioned and co-optimized across both horizontal and vertical domains, and how early-stage design decisions address the unique challenges of 3D integration.

This article aims to provide essential context and technical depth for practitioners working toward highly integrated, efficient, and resilient 3D IC systems.

3D IC technology now stands at a pivotal juncture. Source: Siemens EDA

Putting things in context

To grasp the impact of 3D IC, it’s crucial to define microarchitecture in the IC context. System architecture typically refers to a design’s functional organization as seen by software engineers—abstract functions, data flows, and protocols. Microarchitecture, viewed through the hardware engineer’s lens, describes how those features are realized in silicon using components like register files, arithmetic logic units, and on-chip memory.

Microarchitecture centers around two domains: the datapath, which encompasses the movement and transformation of data, and the control, which dictates how and when those data movements occur. Together, they determine not only performance and efficiency but also testability and resiliency.

Furthermore, while traditional ICs optimize microarchitecture in two dimensions, 3D ICs require designers to expand their strategies into the vertical axis as well. Because data in 3D ICs no longer flows only laterally, it must be orchestrated through stacked dies, each potentially featuring its own process technology, supply voltage, or clock domain. Inter-die communication—typically realized with micro-bumps, through-silicon vias, or hybrid bonding—becomes critical for both data and control signals.

With the move toward submicron interconnection pitches, design teams must address tighter integration densities and the unprecedented task of partitioning logic and memory across multiple vertical layers. This process is not unlike assembling a three-dimensional puzzle.

Effective microarchitecture in this context demands careful co-optimization of logic, physical placement, routing, and inter-die signaling—with far-reaching implications for system latency, bandwidth, and reliability.

Moreover, some microarchitectural components can be realized in three dimensions themselves. Stacked memory sitting directly above compute units, for example, enables true compute-in-memory subsystems, affecting both density and performance but also introducing significant challenges related to signal integrity, thermal design, and manufacturing yield.

Taking complexity to the third dimension

A major trend shaping modern IC development is the shift toward software-defined silicon, where software can customize and even dynamically control hardware features. While this approach provides great flexibility, it also increases complexity and requires early, holistic consideration of architectural trade-offs—especially in 3D ICs, where the cost of late-stage changes is prohibitive.

The high costs of 3D IC design and manufacturing in general demand an upfront commitment to rigorous partitioning and predictive modeling. Errors or unforeseen bottlenecks that might be addressed after tape-out in traditional design can prove disastrous in 3D ICs, where physical access for rework or test is limited.

It is thus essential for system architects and microarchitects to collaborate early, determining both physical placement of blocks and the allocation of functionality between programmable and hardwired components.

This paradigm also introduces new questions, such as which features should be programmable versus fixed? And how can test coverage and configurability be extended into the post-silicon stage? Design teams must maintain a careful balance among performance, area, power, and system flexibility as they partition and refine the design stack.

Among the most significant physical challenges in 3D integration is the sharp increase in power density. Folding a two-dimensional design into a 3D stack compresses the area available for power delivery, while escalating local heat generation. Managing thermal issues becomes significantly more difficult, as deeper layers are insulated from heat sinks and are more susceptible to temperature gradients.

Test and debug also become more complex. As interconnect pitches shrink below one micron, direct probing is no longer practical. Robust testability and resilience need to be designed in from the architecture and circuit level, using techniques like embedded test paths, built-in self-test, and adaptive power management long before finalization.

Finally, resiliency—the system’s ability to absorb faults and maintain operation—takes on new urgency. The reduced access for root-cause analysis and repair in 3D assemblies compels development of in-situ monitoring, adaptive controls, and architectural redundancy, requiring innovation that extends into both the digital and analog realms.

The need for automation

The complexity of 3D IC design can only be managed through next-generation automation. Traditional automation has centered on logic synthesis, place and route, and verification for 2D designs. But with 3D ICs, automation must span package assembly, die stacking, and especially multi-physics domains.

Building 3D ICs requires engineers to bridge electrical, thermal, and mechanical analyses. For instance, co-design flows must account for materials like silicon interposers and organic substrates. This necessitates tightly integrated EDA tools for early simulation, design-for-test verification, and predictive analysis—giving teams the ability to catch issues before manufacturing begins.

System heterogeneity also sets 3D IC apart. Diverse IP, technology nodes, and even substrate compositions all coexist within a single package. Addressing this diversity, along with long design cycles and high non-recurring engineering costs, demands multi-domain, model-based simulation and robust design automation to perform comprehensive early validation and analysis.

Meanwhile, traditional packaging workflows—often manual and reliant on Windows-based tools—lag far behind the automated flows for silicon IC implementation. Closing this gap and enabling seamless integration across all domains is essential for realizing the full promise of 3D IC architectures.

The evolving role of AI and design teams

As system complexity escalates, the industry is shifting from human-centered to increasingly machine-centered design methodologies. The days of vertical specialization are yielding to interdisciplinary engineering, where practitioners must understand electrical, mechanical, thermal, and system-level concerns.

With greater reliance on automation, human teams must increasingly focus on oversight, exception analysis, and leveraging AI-generated insights. Lifelong learning and cross-functional collaboration are now prerequisites for EDA practitioners, who will require both broader and more adaptable skillsets as design paradigms continue to evolve.

Artificial intelligence is already transforming electronic design automation. Modern AI agents can optimize across multiple, often competing, objectives—proposing floorplans and partitioning schemes that would be unfeasible to evaluate manually. Looking ahead, agentic AI—teams of specialized algorithms working in concert—promises to orchestrate ever more complex design sequences from architecture to verification.

Building failure resilient systems

As the boundaries between architectural roles blur, collaboration becomes paramount. In a world of software-defined silicon, architects, microarchitects, and implementation engineers must partner closely to ensure that design intent, trade-offs, and risk mitigation are coherently managed.

Real-world progress is already visible in examples like AMD’s 3D integration of SRAM atop logic dies. Such hybrid approaches demand careful analysis of read and write latency, since splitting a kernel across stacked dies can introduce undesirable delays. Partitioning memory and processing functions to optimize performance and energy efficiency in such architectures is a delicate exercise.

Heterogeneous integration also enables new microarchitectural approaches. High-performance computing has long favored homogeneous, mesh-based architectures, but mobile and IoT applications may benefit from hub-and-spoke or non-uniform memory access models, requiring flexible latency management and workload distribution.

Adaptive throttling, dynamic resource management, and redundancy strategies are growing in importance as memory access paths and their latencies diverge, and architectural resiliency becomes mission critical.

As failure analysis becomes more complex, designs must include real-time monitoring, self-healing, and redundancy features—drawing upon proven analog circuit techniques now increasingly relevant to digital logic.

Thermal management presents fresh hurdles as well: thinning silicon to expose backside connections diminishes its native lateral thermal conductivity, potentially requiring off-die sensor and thermal protection strategies—further reinforcing the need for holistic, system-level co-design.

3D IC moving forward

3D IC stands at a pivotal juncture. Its widespread adoption depends on early, multi-disciplinary design integration, sophisticated automation, and a holistic approach to resiliency. While deployment so far has largely targeted niche applications, such as high-speed logic-memory overlays, 3D IC architectures promise adoption across more segments and vastly more heterogeneous platforms.

For industry practitioners, the challenges are formidable, including three-dimensional partitioning, integrated automation across disciplines, and entirely new approaches to test, debug, and resilience. Meeting these challenges requires both technical innovation and significant organizational and educational transformations.

Success will demand foresight, tight collaboration, and the courage to rethink assumptions at every step of the design cycle. Yet the benefits are bountiful and largely untapped.

Todd Burkholder is a senior editor at Siemens DISW. For over 25 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.

Pratyush Kamal is director of Central Engineering Solutions at Siemens EDA. He is an experienced SoC and systems architect and silicon technologist providing technical leadership for advanced packaging and new foundry technology programs. Pratyush previously held various jobs at Google and Qualcomm as SoC designer, SoC architect, and systems architect. He also led 3D IC research at Qualcomm, focusing on both wafer-on-wafer hybrid bond and monolithic 3D design integrations.

Editor’s Note

This is the first part of the three-part article series about 3D IC architecture. The second part, to be published next week, will focus on how design engineers can put 3D IC to work.

Related Content

The post Making your architecture ready for 3D IC appeared first on EDN.

Calculation of temperature from PRTD resistance

Tue, 09/09/2025 - 17:04

I recently contributed a design for a simple two-wire 4-20 mA transmitter for platinum resistance temperature detector (PRTD) resistance, illustrated in Figure 1.

Figure 1 A two-wire, 4 to 20 mA current-loop PRTD transmitter with 500-µA constant-current sensor excitation. R1 and R2 are 0.1% tolerance; the voltage reference is a 2.5-V LM4040x25.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Analog to digital conversion of the 4 to 20 mA Io reading is likewise simple and straightforward (a 250-Ω shunt resistor at the input of a 0 to 5-V ADC of adequate resolution and precision will do nicely) and getting from there to Rprtd is an easy chore in software (Io in milliamps):

PRTD resistance = R1(Io/Ix – 1) = 20Io – 10

The final step from there to a linear temperature measurement would be almost equally easy, thanks to Callendar Van Dusen (CVD) math, except for one annoying detail. The famous CVD polynomial is arranged to calculate PRTD resistance from temperature. Unfortunately, what we need is temperature from resistance!

Fortunately, another classic algebraic expression can ride to our rescue: The Quadratic Formula (QF).

Vigorously mixing CVD and the QF, and defining two constants:

u = 0.0039083 × R_PRTD@0°C

w = −0.0000005775 × R_PRTD@0°C

and one new variable,

x = R_PRTD@0°C − R_PRTD

leads to a straightforward formula that directly calculates PRTD temperature from PRTD resistance, linear to within ±0.05°C over a temperature range spanning −80°C to +850°C:

T°C = (−u + (u² − 4wx)^½)/(2w)
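In software, the whole chain from loop current to temperature is only a few lines. This sketch assumes a standard Pt100 (R_PRTD@0°C = 100 Ω) and the IEC 60751 coefficients; the function names are illustrative, not from the original design.

```python
from math import sqrt

A = 0.0039083          # IEC 60751 CVD coefficient, 1/degC
B = -0.0000005775      # IEC 60751 CVD coefficient, 1/degC^2

def loop_current_to_resistance(io_ma: float) -> float:
    """The article's R_PRTD = R1*(Io/Ix - 1) = 20*Io - 10, with Io in mA."""
    return 20.0 * io_ma - 10.0

def prtd_temperature(r_prtd: float, r0: float = 100.0) -> float:
    """Temperature (degC) from PRTD resistance via the quadratic-formula
    inverse of CVD: T = (-u + sqrt(u^2 - 4*w*x)) / (2*w)."""
    u = A * r0
    w = B * r0
    x = r0 - r_prtd
    return (-u + sqrt(u * u - 4.0 * w * x)) / (2.0 * w)

print(round(prtd_temperature(100.0), 3))     # 0.0 degC at R0
print(round(prtd_temperature(138.5055), 2))  # 100.0 degC for a Pt100
```

As a cross-check, 20·Io − 10 maps the 4 to 20 mA loop span onto 70 to 390 Ω, which brackets a Pt100’s resistance over roughly −80°C to +850°C.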

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Calculation of temperature from PRTD resistance appeared first on EDN.

A digital technique eliminates the need for an analog multiplier

Tue, 09/09/2025 - 12:30

Traditionally, multiplying two analog signals involves the use of analog multipliers. Design engineers digitize analog signals using an analog-to-digital converter (ADC) and then run the code on a microcontroller to perform digital multiplication. However, another digital technique employing an XNOR logic gate alongside an ADC performs multiplication on two bitstreams, avoiding the cost of the analog multiplier.
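One common way an XNOR gate multiplies two bitstreams is stochastic computing with bipolar encoding. The sketch below illustrates that principle only; the encoding scheme is my assumption, and the circuit in the linked article may differ in detail.

```python
import random

# Stochastic-computing sketch: under bipolar encoding, a value v in [-1, 1]
# maps to a bitstream with P(bit = 1) = (v + 1) / 2, and the XNOR of two
# independent streams encodes the product v1 * v2.

def to_stream(v: float, n: int, rng: random.Random) -> list[bool]:
    """Encode v as an n-bit stochastic bitstream."""
    return [rng.random() < (v + 1.0) / 2.0 for _ in range(n)]

def decode(bits: list[bool]) -> float:
    """Recover the encoded value from a bitstream's ones density."""
    return 2.0 * sum(bits) / len(bits) - 1.0

rng = random.Random(42)
n = 200_000
a, b = 0.5, -0.4
xnor = [not (x ^ y) for x, y in zip(to_stream(a, n, rng), to_stream(b, n, rng))]
print(round(decode(xnor), 2))  # close to a * b = -0.2
```

The accuracy improves with stream length, which is the usual trade-off of bitstream arithmetic against a dedicated analog multiplier.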

Find out more about this substitute analog multiplication technique in an article published in EDN’s sister publication Planet Analog.

Related Content

The post A digital technique eliminates the need for an analog multiplier appeared first on EDN.

Diagnosing a flickering LED light bulb

Mon, 09/08/2025 - 17:56

In my so-far nearly 30 years of writing for EDN, I’ve learned a lot about my ever-evolving audience (i.e., you), at least two aspects of which are directly relevant to this particular writeup:

  • You love consuming any content that’s even remotely LED-related, and
  • I’ve pretty much given up trying to figure out what topics will especially attract your attention, aside from relying on my own curiosity as a guide to what you might also like.

Take, for example, my recently published teardown of an LED-based desk lamp. Compared to some other teardowns that I’ve done, it was thankfully fairly speedy and straightforward to both implement and document. But, judging from the quantity and detail of the comments already posted on it, I’m guessing it’s still driving a lot of “eyeballs” to the EDN website. A fading-illumination-intensity-over-time LED apparently piqued more than just my curiosity.

The LED light bulb that transformed into a sorta-strobe light

Or take today’s dissection candidate, a conventional LED light bulb that had begun not fading, but instead, flickering. As historical background, I’ll take you back nine years to when I took apart my first LED light bulb, two dimmable examples of which had prematurely failed, due to (I prognosticated at the time) extended exposure to high temperatures caused by poor ventilation of the ceiling-mount enclosures within which they were installed. At the time, a reader named “docterdon” noted that I hadn’t described those sconces, so in the spirit of “a picture paints a thousand words”, here you go to start:

The room switch controlling the lights wasn’t dimmable anyway, so at the time I went ahead and replaced all of them (including the two still-functional ones, which I ended up reusing elsewhere) with CFLs. Two of those ended up prematurely dying too, so once again I swapped them out for LED bulbs, non-dimmable this time (I was admittedly surprised to realize, when recently reviewing past published teardowns, that in the plethora of LED-based illumination sources I’ve taken apart in recent years, a conventional non-dimmable one hadn’t yet gone under my knife). They came eight to a package; here’s the encompassing cardboard label:

Behind it in each initially shrink-wrapped assemblage, four of which my email archives indicate I’d promotion-purchased from VMInnovations back in October 2018 for $9.99 total, were two boxes, each with four bulbs inside (yes, your math is right; that translates to $0.31 per bulb!):

Here’s our flickering victim, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Shifting the bulb slightly to the left, here’s your reminder that my office desk is perpetually bathed by the light coming from—among other things—the front-panel blue (when the computer it’s connected to is powered on, that is; otherwise white) LED of the expansion hub tethered to my Mac mini, whose illumination you’ll see in some of the shots that follow:

Diving inside

Onward; let’s get the globe off. Extended exposure to my wife’s hair dryer didn’t help much with loosening the adhesive; then again, unlike what my heat gun had done in the past, it didn’t deform the globe itself, either. Nevertheless, using several “spudgers” and aided by plenty of “elbow grease” and “colorful language”, I finally wrestled the globe off the base:

Admittedly, in the process, I snapped one of the three resistors off the plate and scraped the phosphor cap off one of the LEDs:

That large IC you see at center left is the RM9003T, a high-voltage single-channel constant current LED controller from Shaanxi Reactor Microelectronics. That said, from past experience, I strongly suspected that what I was currently seeing wasn’t the full extent of the circuitry; there was likely more behind the plate. There’s only one way to find out for sure:

At this point, my forward progress was stalled until…ah, yes, those power wires running to the cap end need to be disconnected before I can completely remove the plate. Time to dig out my tongue-and-groove, slip-joint (aka, “Channellock”) pliers and wrest it off…

Determining the Achilles heel

That’s more like it:

The markings on the IC on one side of the now-exposed underside PCB:

are barely discernible:

MB6S
1607

It appears to be a miniature surface-mount bridge rectifier, converting (crudely) AC to DC in combination with the 390 kΩ (“394”) resistor next to it and the 200-V 10-µF aluminum electrolytic capacitor on the PCB’s other side:

Speaking of enclosed spaces with insufficient airflow and consequent overheating potential (generated by the multi-LED array on the plate above it), I’m guessing that here’s where the flickering originates. Agree or disagree, readers? Share your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post Diagnosing a flickering LED light bulb appeared first on EDN.

Magnetic Barkhausen noise measurement unlocks possibilities for soft magnetic materials

Mon, 09/08/2025 - 12:57

Researchers in Japan have developed a highly-sensitive magnetic Barkhausen noise (MBN) measurement system to understand the energy loss mechanisms in soft magnetic materials, which can be easily magnetized and demagnetized, and are widely used in power electronics devices such as generators, transformers, and amplifiers.

It’s an important development because power electronics is moving toward high-frequency operations, which in turn demand low-loss soft magnetic materials. However, the efficiency of soft magnetic materials is fundamentally limited by iron loss, where energy is lost as heat when a varying magnetic field passes through them.

Iron loss mainly comprises three components: hysteresis loss, classical eddy current loss, and excess eddy current loss. Eddy currents are generated when a varying magnetic field passes through a conductor; the energy these currents waste as heat is known as classical eddy current loss.

On the other hand, excess eddy current loss arises due to localized eddy currents induced by irregular movement of magnetic domain walls (DWs) under a varying magnetic field. Magnetic DWs are boundaries that separate tiny magnetic domains.

Enter magnetic Barkhausen noise or MBN, a key probe for DW dynamics. Here, it’s important to note that the current MBN measurement systems don’t facilitate the wide frequency coverage and high sensitivity needed to capture the individual MBN events. That makes it hard to understand the relationship between DW dynamics and eddy current losses.

MBN measurement system

The Japanese research team, aiming to address this gap, has developed a wideband and high-sensitivity MBN measurement system. The team is led by assistant professor Takahiro Yamazaki from the Department of Materials Science and Technology at the Tokyo University of Science (TUS). It also includes professor Masato Kotsugu from TUS and senior researcher Shingo Tamaru from the National Institute of Advanced Industrial Science and Technology (AIST) in Japan.

The team used the MBN measurement system to investigate magnetic DW dynamics in 25-μm-thick Fe–Si–B–P–Cu NANOMET ribbons, a class of soft magnetic alloys. The system comprises a dual-layer coil jig with full electromagnetic shielding, wiring, and a custom low-noise amplifier, and is designed to minimize noise while maintaining a wide bandwidth.

Magnetic Barkhausen noise (MBN) serves as a key probe for DW dynamics.

The system allows the capture of individual MBN pulses with the highest possible fidelity. That, in turn, enabled the team to effectively visualize the relaxation behavior and precise evaluation of DWs. As a result, they were able to observe clear and isolated MBN pulses indicative of DW relaxation in amorphous NANOMET ribbons. These materials, well known for their soft magnetic properties, have exceptionally low coercivity.

Cause of excess eddy current loss

Statistical analysis of the captured pulses also revealed a mean relaxation time constant of approximately 3.8 μs with a standard deviation of around 1.8 μs. It’s much smaller than the values predicted by conventional models.

So, the research team constructed a new physical model of DW relaxation to explain this difference. This model showed that the damping caused by eddy currents generated during DW motion is the main cause of excess eddy current loss, negating the common perception that the intrinsic magnetic viscosity of DWs causes this phenomenon.

It provided experimental and theoretical clarification of the physical origin of excess eddy current losses. Next, the team used the system to analyze heat-treated nanocrystalline NANOMET ribbons and found a significant decline in the amplitude of MBN pulses, corresponding to a substantial reduction in the irregularity of the DW motion.

Moreover, it demonstrated that it’s possible to smooth DW motion and thus reduce energy loss through microstructural control. “Our method has the potential for wide application in the design of next-generation low-loss soft magnetic materials, especially in high-frequency transformers, electric vehicle motors,” said team leader Yamazaki. “It paves the way for smaller, lighter, and more efficient devices.”

He added that this wideband, high-sensitivity MBN measurement system has successfully captured high-fidelity, single-shot pulses. That provides direct experimental evidence of magnetic DW relaxation in metallic ribbons, Yamazaki concluded.

Related Content

The post Magnetic Barkhausen noise measurement unlocks possibilities for soft magnetic materials appeared first on EDN.

An e-mail delivery problem, Part 2

Fri, 09/05/2025 - 16:55

For decades, I have used my IEEE alias address for both incoming and outgoing emails with no difficulties; however, this is no longer the case. The IEEE alias address is no longer workable for outgoing e-mails that are destined for any “gmail.com” recipient.

If I put ambertec@ieee.org in the “From” line of such an outgoing message, I get an immediate message rejection reply that looks like this:

If the content of the “From” line does not match the actual sending address, rejection occurs. In this case, the intended recipient was my own cell phone, but this kind of message comes my way when trying to send any email to any Gmail.com user.

I have neither the time nor the energy to wade into the bureaucratic techno-drivel of the “DMARC policy” or of the “DMARC initiative.” I simply cite my own experience as a signal that you and other IEEE members who read this will know that you are not alone.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post An e-mail delivery problem, Part 2 appeared first on EDN.

Custom hardware helps deliver safety and security for electric traction

Fri, 09/05/2025 - 12:31

Electric traction has become a critical part of a growing number of systems that need efficient motion and position control. Motors do not just provide the driving force for vehicles, from e-bikes to cars to industrial and agricultural machinery. They also enable a new generation of robots, whether they use wheels, propellers or legs for motion.

The other common thread for many of these systems lies in the way they are expected to operate in a highly connected environment. For instance, wireless connectivity has enabled novel business models for e-bike rental and delivers positioning and other vital data to robots as they move around.

But those same connections to the Internet open avenues of attack that previous generations of motion-control systems never had to deal with. This complicates the tasks of designing, certifying, and maintaining systems that ensure safe operation.

To guarantee that actuators do not cause injury, designers must implement safeguards in their control systems to prevent them from being bypassed and creating unsafe situations. They also need to ensure that corruption by hackers does not disrupt the system’s behavior. Security, therefore, now plays a major role in the design of motor-control subsystems.

Figure 1 Connectivity in warehouse robots also opens vulnerabilities in motor control systems. Source: EnSilica

Algorithmic demands drive architectural change

Complexity in motor control also arises from the novel algorithms that designers are using to improve energy efficiency and to deliver more precise positioning. Drive algorithms have moved away from simple strategies, such as analog controllers that merely relate the power delivered to the motor windings to the motor’s rotational speed.

They now employ far more sophisticated techniques such as field-oriented control (FOC) that are better able to deliver precise changes in torque and rotor position. With FOC, a mathematical model predicts with high precision when power transistors should activate to supply power to each of the stator windings in order to control rotor torque.

The maximum torque results when the electric and magnetic fields are offset by 90°, delivering highly efficient motion control. It also ensures high positioning accuracy with no need for expensive sensors or encoders. Instead, the mathematical model uses voltage and current inputs from the motor winding to provide the data needed to estimate position and state accurately.

Figure 2 The use of techniques like FOC delivers highly efficient motion control, which ensures greater positioning accuracy without expensive sensors or encoders. Source: EnSilica
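The coordinate transforms at the heart of FOC can be sketched in a few lines. The following is a minimal, amplitude-invariant illustration of the Clarke and Park transforms only, not any vendor's implementation; the function names and the test angle are my own:

```python
import math

def clarke(i_a, i_b, i_c):
    # Amplitude-invariant Clarke transform: map three balanced phase
    # currents (i_a + i_b + i_c = 0) onto the stationary alpha/beta frame.
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    # Park transform: rotate alpha/beta into the rotor-aligned d/q frame
    # using the estimated rotor angle theta (radians).
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# For a balanced sinusoidal current set aligned with theta, all current
# lands on the d axis and i_q is zero; the FOC loop then regulates i_d
# and i_q directly to command flux and torque.
theta = 0.7
i_a = math.cos(theta)
i_b = math.cos(theta - 2.0 * math.pi / 3.0)
i_c = math.cos(theta + 2.0 * math.pi / 3.0)
i_alpha, i_beta = clarke(i_a, i_b, i_c)
i_d, i_q = park(i_alpha, i_beta, theta)  # i_d ≈ 1, i_q ≈ 0
```

In a real controller these transforms run every PWM cycle on measured winding currents, and their outputs feed PI regulators whose results are rotated back for the PWM stage, which is why offloading them to hardware or a DSP coprocessor pays off.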

In robotics, these algorithms are being supplemented by techniques such as reinforcement learning. Using machine learning to augment motion control has proven highly effective at delivering precise traction control for both wheeled vehicles and legged robots. Dusty or slippery surfaces can be problematic for any automated traction control systems. Training the system to cope with these difficult surfaces delivers greater stability than conventional model-based techniques.

Such control strategies often call for the use of extensive software-based algorithms running on digital signal processors (DSPs) and other accelerators alongside high-performance microprocessors in a layered architecture because of the different time horizons of each of the components.

An AI model trained using a reinforcement learning model, for example, will typically operate with a longer cycle time than the FOC algorithms and the pulse-width modulation (PWM) control signals below them that ensure the motors follow the response needed. As a result, DSP-based models with long time horizons will be supported by algorithms and peripherals that use hardware assistance to operate and meet the deadlines required for real-time operation.

The case for custom hardware

The hard real-time functions are those that have direct control over the power transistors that deliver power to the motor windings, usually implemented in an “inverter” comprising a half-bridge circuit for each of the motor phases. Traditionally, such half-bridge controllers have focused on the implementation of timing loops for PWM.

The switching frequencies are often too high to be supported reliably by software running even on a dedicated microprocessor without needing the processor to be clocked at excessive frequencies. The state machines used to implement PWM switching also take care of functions such as dead-time insertion, which is used to ensure that each transistor doesn’t turn on before its counterpart transistor in the half-bridge inverter is turned off.

The timing gap prevents the shoot-through of current that would result if both transistors were active at the same time. The excess current can damage the motor windings and the drive circuit board. These subsystems are so important that they are often provided as standard building blocks for industrial microcontrollers.
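The dead-time behavior described above can be modeled as a simple per-tick state machine. This Python sketch is purely illustrative (the tick-based structure and names are mine; real implementations are hardware state machines clocked from the PWM timer):

```python
def apply_dead_time(pwm_cmd, dead_ticks):
    # pwm_cmd: commanded high-side state per timer tick (1 = high-side on).
    # On every commanded edge, hold BOTH gate outputs off for dead_ticks
    # before enabling the newly commanded transistor, preventing
    # shoot-through current in the half-bridge.
    high, low = [], []
    countdown = 0
    prev = pwm_cmd[0]
    for cmd in pwm_cmd:
        if cmd != prev:
            countdown = dead_ticks  # edge detected: start dead-time window
        prev = cmd
        if countdown > 0:
            countdown -= 1
            high.append(0)
            low.append(0)           # both transistors held off
        else:
            high.append(1 if cmd else 0)
            low.append(0 if cmd else 1)
    return high, low

# Two ticks of dead time around each commanded edge:
high, low = apply_dead_time([0, 0, 1, 1, 1, 1, 0, 0, 0], dead_ticks=2)
# At no tick are both gates driven simultaneously.
```

The key invariant, easy to verify exhaustively in hardware with formal tools, is that `high` and `low` are never asserted in the same tick.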

However, in the context of increased threats from hackers and the need to support advanced algorithms, the inverter controller can become a vital component in supporting overall system resilience. By customizing the inverter controller, implementors can more easily guarantee safety and security, as well as protect core traction-control IP. Careful partitioning between the inverter and the rest of the drive subsystem can not only support all three aims but also reduce the cost of implementation and verification.

A major advantage of hardware in terms of security is its relative immutability compared to software. Attackers cannot replace important parts of a hardware algorithm even if they gain access. This also simplifies some aspects of security certification: techniques such as formal verification can determine whether the circuitry can ever enter a particular state, and future updates to the system will not directly affect that circuitry.

It’s possible for code changes to alter the interactions between the microcontroller-based subsystems and the lower-level hardware. However, this relationship provides opportunities for the designer to improve their ability to guarantee safe operation, even under the worst-case conditions where a hacker has gained access and replaced the firmware.

Hardware-based lockout mechanisms and security checks can ensure that if the upper-level software of the system is compromised, the system will place itself into a safe state. The lockouts can include support for mechanisms such as secure boot. This ensures that only the software that passes the ASIC’s own checks can activate the motor.

Using hardware for safety and security protection can help reduce the cost of software assurance, which is now subject to legislation such as the European Union’s Cyber Resilience Act (CRA). The new law demands that manufacturers and service operators issue software updates for critically compromised systems.

By moving key elements of the system design into hardware and minimizing the implications of a hack, the designer can reduce the need for frequent updates if new vulnerabilities are found in upper-level software. Similarly, moving interlocks into hardware simplifies the task of demonstrating safe operation for standards such as ISO 26262 compared with purely software-based implementations.

Physical attacks often involve power interruptions, so an ASIC can be designed to protect against such tampering. For example, if power-monitoring circuitry detects a brownout, it can reset the microprocessor and place the rest of the system in a safe, quiescent state.
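The brownout response can be thought of as a latched monitor. The Python sketch below models that behavior only for illustration; the class, threshold value, and state names are my own inventions standing in for dedicated monitoring circuitry:

```python
class SafetyMonitor:
    # Illustrative model of a hardware brownout lockout: once the supply
    # dips below the threshold, the system latches into a safe, quiescent
    # state until deliberately cleared under a healthy supply.
    def __init__(self, brownout_threshold_v=2.7):
        self.threshold = brownout_threshold_v
        self.state = "RUN"

    def sample_supply(self, volts):
        if volts < self.threshold:
            self.state = "SAFE"  # disable drive outputs, reset the MCU
        return self.state

    def clear(self, volts):
        # Only an explicit reset with a healthy supply re-enables the drive;
        # supply recovery alone does not.
        if self.state == "SAFE" and volts >= self.threshold:
            self.state = "RUN"
        return self.state

monitor = SafetyMonitor()
monitor.sample_supply(3.3)   # "RUN"
monitor.sample_supply(2.0)   # brownout latches "SAFE"
monitor.sample_supply(3.3)   # still "SAFE": recovery alone doesn't clear it
monitor.clear(3.3)           # deliberate clear returns to "RUN"
```

The latching, rather than self-clearing, behavior is the point: an attacker glitching the supply cannot ride the recovery back into an active drive state.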

Hardware choices that support compliance and control

Alongside the additional functions, an ASIC inverter controller can host more extensive parts of the motor-control subsystem and reduce the cost of the microprocessor components. For example, FOC relies on trigonometric and other computationally expensive transforms.

Moving these into a coprocessor block in the ASIC can streamline the design. This combination can also reduce control latency by connecting inputs from current and voltage sensors to the low-level DSP functions.

The functions need not all be fixed. Modern ASICs may include configurable blocks such as programmable filters, gain stages, and parameterizable logic to offer a level of adaptability. The use of programmable functions can let a single ASIC design control various motor configurations across an entire product range.

The programming of these elements illustrates one of the many safety and security trade-offs that design teams can make. Incorporating non-volatile memory into the ASIC can provide the greatest security. Putting the programmable elements into an ASIC that can be locked by blowing fuses after manufacturing is more secure than a design where a host microcontroller writes configuration values during the boot process.

MCU-based control chips require a silicon process suitable for storing firmware, normally based on flash memory. This implies additional processing masks, which increase the cost of the final product, a factor that is especially sensitive when production volumes are high.

If the design calls for the high-voltage capability offered by Bipolar-CMOS-DMOS (BCD) processes for the motor-drive circuitry, a second die may be needed for non-volatile memory. But the flash CMOS process will normally support a higher logic density than the BCD-based parts, which allows the overall cost to be optimized.

Thanks to its ability to support deterministic control loops and verification techniques that can ease security and safety certification, hardware is becoming increasingly important to e-mobility and robotics designs.

Through careful architecture selection, such hardware can complement software’s flexibility and its ability to support novel control strategies as they evolve. The result is an environment in which ASIC use can offer design teams the best of both worlds.

David Tester, chief engineer at EnSilica, has 30+ years of experience in the development of analogue, digital and mixed-signal ICs across a wide range of semiconductor products.

Related Content

The post Custom hardware helps deliver safety and security for electric traction appeared first on EDN.

HV reed relays are customizable to 20 kV

Thu, 09/04/2025 - 19:07

Series 600 high-voltage reed relays from Pickering Electronics offer over 2500 combinations of rating and connection options. They are customizable from 3.5 kV to 12.5 kV, with standoff voltages from 5 kV to 20 kV and switching power up to 200 W. Switch-to-coil isolation reaches 25 kV, safely separating control circuitry from high-voltage paths even in demanding environments.

Built with vacuum-sealed, instrumentation-grade reed switches, the relays are available with 1 Form A (NO), 1 Form B (NC), and 1 Form C (changeover) contacts and 5-V, 12-V, or 24-V coils. An optional diode or Zener-diode combination suppresses back EMF, while mu-metal screening reduces magnetic interference. Insulation resistance exceeds 10¹³ Ω, ensuring minimal leakage and maximum isolation.

A variety of case sizes, connection types (turrets, flying leads, PCB pins), and potting materials helps engineers meet thermal, mechanical, and environmental requirements. Series 600 relays support many high-voltage test and switching applications, including EV BMS and charge-point testing, inverter or insulation-resistance testing in solar systems, and isolation in medical equipment.

Request free pre-production samples, access the datasheet, or try the configuration tool via the product page link below.

Series 600 product page 

Pickering Electronics 

The post HV reed relays are customizable to 20 kV appeared first on EDN.

WM-Bus modules enable flexible sub-GHz metering

Thu, 09/04/2025 - 19:07

Quectel has announced the KCMCA6S series of Wireless M‑Bus (WM‑Bus) modules, capable of sub-1 GHz operation for smart metering. Based on Silicon Labs’ EFR32FG23 wireless SoC, featuring a 73‑MHz Arm Cortex‑M33 processor, the modules operate in the 868‑MHz, 433‑MHz, and 169‑MHz bands.

The devices comply with EN 13757‑4, the European standard for wireless metering, and support the WM‑Bus protocol and other proprietary sub‑GHz protocols. Their built-in software stack and flexible configuration modes eliminate the need for third-party protocol integration.

Modules include an optional integrated SAW filter to limit interference from cellular signals, an important factor for devices combining WM-Bus with cellular technologies such as NB-IoT or LTE Cat 1. They feature 32 KB of RAM and 256 KB of flash memory.

Availability for the KCMCA6S series was not provided at the time of this announcement.

KCMCA6S product page

Quectel Wireless Solutions 

The post WM-Bus modules enable flexible sub-GHz metering appeared first on EDN.

TOLL-packaged SiC MOSFETs cut size, losses

Thu, 09/04/2025 - 19:07

Three 650-V SiC MOSFETs from Toshiba come in compact surface-mount TOLL packages, boosting both power density and efficiency. The 9.9×11.68×2.3-mm package shrinks volume by more than 80% compared to through-hole TO-247 and TO-247-4L(X) types.

TOLL also provides lower parasitic impedance, reducing switching losses. As a 4-terminal package, it enables a Kelvin source connection for the gate drive, minimizing the impact of package inductance and supporting high-speed switching. For the TW048U65C 650-V SiC MOSFET, turn-on and turn-off losses are about 55% and 25% lower, respectively, than the same Toshiba products in the TO-247 package without Kelvin connection.

The third-generation MOSFETs in this launch target switch-mode power supplies in servers, communication gear, and data centers. They are also suited for EV charging stations, photovoltaic inverters, and UPS equipment.

Datasheets and device availability are accessible via the product page links below.

TW027U65C product page 

TW048U65C product page 

TW083U65C product page 

Toshiba Electronic Devices & Storage 

The post TOLL-packaged SiC MOSFETs cut size, losses appeared first on EDN.

Software verifies HDMI 2.2 electrical compliance

Thu, 09/04/2025 - 19:07

Keysight physical-layer test software provides compliance and performance validation for HDMI 2.2 transmitters and Cat 4 cables. The D9021HDMC electrical performance and compliance software and the N5992HPCD cable eye test software help engineers address the demands of UHD video and HDR content. Together, they improve signal integrity and support HDMI Forum compliance.

The recent release of the HDMI 2.2 test specification introduces more stringent compliance requirements for transmitters and cables, exposing gaps in conventional test coverage. As the HDMI ecosystem evolves to support higher resolutions, faster refresh rates, and greater bandwidth, the Keysight software provides a unified platform for automated electrical testing as defined by the specification. 

Keysight’s platform combines high-bandwidth measurement hardware with automated compliance workflows to manage complex test scenarios across transmitters and cables. Its modular architecture enables flexible test configurations, and built-in diagnostics help identify the root causes of signal degradation. This allows design teams to verify compliance and optimize performance early in development.

D9021HDMC product page 

N5992HPCD product page 

Keysight Technologies 

The post Software verifies HDMI 2.2 electrical compliance appeared first on EDN.

GNSSDO modules ensure reliable PNT performance

Thu, 09/04/2025 - 19:07

Microchip’s GNSS-disciplined oscillator (GNSSDO) modules integrate positioning, navigation, and timing (PNT) for mission-critical aerospace and defense applications. Built with the company’s chip-scale atomic clock, miniature atomic clock, and OCXOs, the compact modules are well-suited for systems that operate in GNSS-denied environments.

The modules process reference signals from a GNSS or an alternative clock source to discipline the onboard oscillator, ensuring precise timing, stability, and holdover operation. They can function as a PNT subsystem within a larger system or as a stand-alone unit.

All modules output 1-PPS TTL and 10-MHz sine wave signals, with distinct features for different use cases:

  • MD-013 ULTRA CLEAN – Highest-performance design with multi-constellation GNSS support, ultra-low phase noise, and short-term stability; optional dual-band receiver upgrades.
  • MD-300 – Rugged 1.5×2.5-in. module with MEMS OCXO or TCXO for low g-sensitivity, shock/vibration tolerance, and low thermal response; suited for drones and manpacks.
  • LM-010 – PPS-disciplined module for LEO requiring radiation tolerance, stability, and holdover; built with a digitally corrected OCXO or low-power CSAC.

The GNSSDO modules are available in production quantities.

GNSSDO module product page 

Microchip Technology 

The post GNSSDO modules ensure reliable PNT performance appeared first on EDN.

The Smart Ring: Passing fad, or the next big health-monitoring thing?

Thu, 09/04/2025 - 16:32

The battery in my two-year-old first-gen Pixel Watch generally—unless I use GPS and/or LTE data services heavily—lasts 24 hours-plus until it hits the 15%-left Battery Saver threshold. And because sleep quality tracking is particularly important to me, I’ve more or less gotten in the habit of tossing it on the charger right before dinner, for maximum likelihood it’ll then robustly make it through the night. Inevitably, however, once (or more) every week or so, I forget about the charger-at-dinner bit and then, right when I’m planning on hitting the sack, find myself staring at a depleted watch that won’t make it until morning. First world problem. I know. Still…

Therein lies one (of several) of the key motivations behind my recent interest in the rapidly maturing smart ring product category. Such devices typically tout ~1 week (or more) of between-charges operating life, and they also recharge rapidly, courtesy of their diminutive integrated cells. A smart ring also affords flexibility regarding what watches (including traditional ones) I can then variously put on my wrist. And, as noted within my 2025 CES coverage:

This wearable health product category is admittedly more intriguing to me because unlike glasses (or watches, for that matter), rings are less obvious to others, therefore it’s less critical (IMHO, at least) for the wearer to perfectly match them with the rest of the ensemble…plus you have 10 options of where to wear one (that said, does anyone put a ring on their thumb?).

I’ve spent the last few months acquiring and testing smart rings from three leading companies: Oura (the Gen3 Horizon), Ultrahuman (the Ring AIR), and RingConn (the Gen 2). They’re left-to-right on my left-hand index finger in the following photo: that’s my wedding band on the ring finger 😉. The results have been interesting, to say the least. I’ll save per-manufacturer and per-product specifics for follow-up write-ups to appear here in the coming months. For now, in the following sections, I’ll share some general comparisons that span multiple-to-all of them.

Judicial Jockeying

An important upfront note: back in April, I learned that Finland-based Oura (the product category’s volume-shipment originator and current worldwide market leader) had successfully obtained a preliminary ruling from the United States ITC (International Trade Commission) that both China-based RingConn and India-based Ultrahuman had infringed on its patent portfolio. The final ITC judgment, released on Friday, August 22 (three days ago as I write these words), affirmed that earlier ruling, blocking (in coordination with U.S. Customs and Border Protection enforcement) further shipments of both RingConn and Ultrahuman products into the country and, more generally, any sales by either company after a 60-day review period ending on October 21. There’s one qualifier, apparently: retailers are allowed to continue selling past that point until their warehouse inventories are depleted.

I haven’t seen a formal response yet from RingConn, but Ultrahuman clearly hasn’t given up the fight. It’s already countersued Oura in its home country, and it reports that the disputed patent, which it claims combines existing components in an obvious way that renders it invalid, is under review by the U.S. Patent and Trademark Office’s Patent Trial and Appeal Board. Ultrahuman’s public statement reads in part:

We welcome the ITC’s recognition of consumer-protective exemptions and its rejection of attempts to block the access of U.S. consumers. Customers can continue purchasing and importing Ring AIR directly from us through October 21, 2025, and at retailers beyond this date.

What’s more, our software application and charging accessories remain fully available, after the Commission rejected Oura’s request to restrict them.

While we respectfully disagree with the Commission’s ruling on U.S. Patent No. 11,868,178, its validity is already under review by the USPTO’s Patent Trial and Appeal Board (PTAB) on the grounds of obviousness.

 Public reporting has raised questions about Oura’s business practices, and its reliance on litigation to limit competition.

We are moving forward with confidence — doubling down on compliance while accelerating development of a next-generation ring built on a fundamentally new architecture. As many observers recognize, restricting competition risks fewer choices, higher prices, and slower innovation.

Ultrahuman remains energized by the road ahead, committed to championing consumer choice and pushing the frontier of health technology.

One perhaps-obvious note: the ITC’s actions only affect sales in the United States, not elsewhere. This also isn’t the first time that the ITC has gotten involved in a wearables dispute. Apple Watch owners, for example, may be familiar with the multi-year, ongoing litigation between Apple and Masimo regarding blood oxygen monitoring. Also, more specific to today’s topic, Samsung pre-emptively filed a lawsuit against Oura prior to entering the market with its Galaxy Ring in mid-2024, citing Oura’s claimed litigious history and striving to ensure that Samsung’s product launch wouldn’t be jeopardized by patent infringement lawsuits from Oura.

The lawsuit was eventually dismissed in March, with the judge noting a lack of evidence that Oura ever intended to sue Samsung, but Samsung is now appealing that ruling. And as I noted in recent Google product launch event coverage, this same litigious environment may at least partly explain why both Google/Fitbit and Apple haven’t entered the market…yet, at least.

Sizing prep is essential

Before you buy a smart ring, whatever company’s device you end up selecting, I strongly advise you to first purchase a sizing kit and figure out what size you need for whatever finger you plan to wear it on. Sizing varies finger-to-finger and hand-to-hand for every person, first and foremost. Not to mention that if the ring enhances your fitness, leading to weight loss, you’ll probably need to buy a smaller replacement ring eventually—the battery and embedded circuitry preclude the resizing that a jeweler historically would do—hold that thought.

Smart ring sizing can also differ not only from traditional ring-measurement results, but also from company to company and model to model. My Oura and RingConn rings are both size 11, for example, whereas the Ultrahuman one is a size 10. Sizing kits are inexpensive…usually less than $10, with the purchase price often then applicable as a credit against the subsequent smart ring price. And in RingConn’s case, the kits are free from the manufacturer’s online store. A sizing kit is upfront money well spent, regardless of the modest-at-worst cost.

Charging options and sometimes-case enhancements

One key differentiator between manufacturers that you’ll immediately run into involves charging schemes. Oura’s and Ultrahuman’s rings leverage close-proximity wireless inductive charging; both the battery and the entirety of the charging circuitry, including the charging coil, are fully embedded within the ring. RingConn’s approach, conversely, involves magnetized connection contacts (for proper auto-alignment) both on the ring itself and on the associated charger.

(Ultrahuman inductive charging)

(RingConn conventional contacts-based charging)

I’ve yet to come across any published pros-and-cons positioning of the two approaches, but I have theories. Charging speed doesn’t seem to be one of the differentiating factors. Second-gen-and-beyond Google Pixel Watches with physical contacts reportedly recharge faster than my wireless-based predecessor, especially after its firmware update-induced intentional slowdown. By contrast, I didn’t notice any statistically significant charge-speed variance among the smart rings I tested. Perhaps their diminutive battery capacities minimize any otherwise-evident variances?

What about fluid-intrusion resistance? In line with its use in rechargeable electric toothbrushes operated in water exposure-prone environments, inductive charging might make it possible, or at a minimum easier from a design standpoint, to achieve higher IP (ingress protection) ratings for smart rings. Conversely, however, there’s a consumer cost-and-convenience factor that favors RingConn’s more traditional approach. I’ve acquired two chargers per smart ring I tested—one for upstairs at my desk, the other in the bathroom—the latter so I can give the ring a quick charge boost while I’m in the shower.

Were I to go down or (heaven forbid) up a size or few with an Oura or Ultrahuman ring, my existing charger suite would also be rendered useless, since inductive charging requires a size-specific “mount”. RingConn’s approach, on the other hand (bad pun intended), is ring-size-agnostic.

Speaking of RingConn, let’s talk about charging cases (and their absence in some cases). The company’s $199 Gen 2 “Air” model comes with the conventional charging dock shown earlier. Conversely, one of the added benefits (along with sleep apnea monitoring) of the $299 Gen 2 version is a battery-inclusive charging case, akin to those used by Bluetooth earbuds:

It’s particularly handy when traveling, since you don’t need to also pack a power cord and wall wart (conventional charger docks can also be purchased separately). Oura-compatible charging cases are, currently at least, only available from (unsanctioned-by-Oura, so use at your own risk) third parties and require a separate Oura-sourced dock.

And as for Ultrahuman, at least as far as I’ve found, there are only docks.

Internal and external form factors

In addition to the aforementioned charging circuitry, there is other integrated-electronics commonality between the various manufacturers’ offerings (leading to the aforementioned patent infringement claim—if you’re Oura—or “obviousness” claim—if you’re Ultrahuman). You’ll find multi-color status LEDs, for example, along with Bluetooth and/or NFC connectivity, accelerometers, body temperature monitoring, and pulse rate (green) and oximetry (red) plus infrared photoplethysmography sensors.

The finger is actually preferable to the wrist for blood-related monitoring (theoretically, at least), thanks to its comparatively higher aggregate blood-flow density. That said, however, sensor placement is particularly critical on the finger, as well as particularly difficult to achieve, due to the ring’s circular and easily rotated form factor.

Most smart rings are more or less round, for style reasons and akin to traditional non-electronic forebears, with some including flatter regions to guide the wearer in achieving ideal on-finger placement alignment. One extreme example is the Heritage version of the Oura Gen3 ring:

with a style-driven flatter frontside compared to its Gen3 Horizon sibling:

Interestingly, at least to me, Oura’s newest Ring 4 only comes in a fully round style:

as well as in an expanded suite of both color and size options, all specifically targeting a growing female audience, which Ultrahuman’s Rare line is also more obviously pursuing (I hadn’t realized this until my recent research, but the smart ring market was initially male-dominated):

The Ring 4 also touts new Smart Sensing technology with 18 optical signal paths (vs 8 in the Gen3) and a broader sensor array. I’m guessing that this enhancement was made in part to counterbalance the degraded results of non-ideal finger placement. To wit, look at the ring interior and you’ll encounter another means by which manufacturers (Oura with the Gen3, as well as RingConn, shown here) include physical prompting to achieve and maintain proper placement: sensor-inclusive “bump” guides on either side of the inner backside:

Some people apparently find the bumps annoying, judging from Reddit commentary and reviews I’ve read, along with the fact that Ultrahuman rings’ interiors are smooth and that Oura comparably retracted the sensors on the Ring 4. The bumps don’t bother me (and others); in fact, I appreciate their ongoing physical guidance toward optimal placement.

Accuracy, or lack thereof

How did I test all these rings? Thanks for asking. At any point in time, I had one on each index finger, along with my Pixel Watch on my wrist (my middle fingers were also available, along with my right ring finger, but their narrower diameters led to loose fits that I feared would unfairly throw off measurement results).

I rotated through my three-ring inventory both intra- and inter-day, also repeatedly altering which hand’s index finger might have a given manufacturer’s device on it. And I kept ongoing data-point notes to supplement my oft-imperfect memory.

The good news? Cardio- and pulmonary-related data measurements, including sleep-cycle interpretations (which I realize also factor in the accelerometer; keep reading), seemed solid. In the absence of professional medical equipment to compare against, I have no way of knowing whether any of the output data sets (which needed to be viewed on the associated mobile apps, since unlike watches, these widgets don’t have built-in displays…duh…) were accurate. But the fact that they all at least roughly matched each other was reassuring in and of itself.

Step counting was a different matter, however. Two general trends became increasingly apparent as my testing and data collection continued:

  • Smart ring step counts closely matched both each other and the Pixel Watch on weekends, but grossly overshot the smart watch’s numbers on weekdays, and
  • During the week, whatever ring I had on my right hand’s index finger overshot the step-count numbers accumulated by its left-hand counterpart…consistently.

Before reading on, can you figure out what was going on? Don’t feel bad if you’re stumped; I thank my wife’s intellect (which immediately discerned the root cause), not mine (sloth-like and, on its own, unsuccessful), for sorting out the situation. On weekends, I do a better job of staying away from my computer keyboard; during the week, the smart rings’ accelerometers were counting key presses as steps. And I’m right-handed, which led to additional right-hand movement (and phantom step counts) each time I accessed the trackpad.

By the way, each manufacturer’s app, with varying breadth, depth, and emphasis, not only reports raw data but also interpretations of stress level and the like by combining and analyzing multiple sensors’ outputs. To date, I’ve generally overlooked these additional results nuances, no matter that I’m sure I’d find the machinations of the underlying algorithms fascinating. More to come in the future; for now, with three rings tested, the raw data was overwhelming enough.

Battery life and broader reliability

As I dove into the smart ring product category, I kept coming across claimed differentiation between their “health” tracking and other wearables’ “fitness” tracking. It turns out that, as documented in at least some cases, smart rings aren’t continuously measuring and logging data from a portion of their sensor suites. I haven’t been able to find any info on this from RingConn, whose literature is in general comparatively deficient; I’d welcome reader direction toward published info to bolster my understanding here. That said, the company’s ring was the clear leader of the three, dropping only ~5% of charge per day, which impressively translates to nearly three weeks of operation between charges.

Oura’s rings only discern heart rate variability (HRV) during sleep (albeit logging the base heart rate more frequently), “to avoid the daytime ‘noise’ that can affect your data and make it harder to interpret”. Blood oxygen (SpO2) sensing also only happens while asleep (I took this photo right after waking up, right before the ring figured out I’d done so and shut off):

Selective, versus continuous, data measurement has obvious benefits when it comes to battery life. That said, my Oura ring (which, like its RingConn counterpart, I bought already lightly used; keep reading) dropped an average of ~15% of battery per day.

And Ultrahuman? The first ring I acquired only lasted ~12 hours until drained, and took nearly a day to return to “full”, the apparent result of a firmware update gone awry (unrecoverable in this case, alas). To its credit, the company sent me a replacement ring (and told me to just keep the existing one; stay tuned for a future teardown!). At about that same time, Ultrahuman also added another Oura-reminiscent and battery life-extending operating mode called “Chill” to the app and ring settings, which it also made the default versus the prior-sole “Turbo”:

Chill Mode is designed to intelligently manage power while preserving the accuracy of your health data. It extends your Ring AIR battery life by up to 35% by tracking only what matters, when it matters. Chill Mode uses motion and context-based intelligence to track heart rate and temperature primarily during sleep and rest.

More generally, keep in mind that none of these devices are particularly inexpensive; the RingConn Gen 2 Air is most economical at $199, with the Oura Ring 4 the priciest mainstream option at between $349 and $499, depending on color (and discounting the up-to-$2,200 Ultrahuman Rare…ahem…). A smart ring that lasts a few years while retaining reasonable battery life across inevitable cycle-induced cell degradation is one thing. One that becomes essentially unusable after a few months is conversely problematic from a reputation standpoint.

Total cost, and other factors to consider

Keep in mind, too, that ongoing usage costs may significantly affect the total price you end up paying over a smart ring’s operating life. Ironically, RingConn is not only the least expensive option from an entry-cost standpoint but also over time; although the company offers optional extended warranty coverage for damage, theft, or loss, lifetime support of all health metrics is included at no extra charge.

On the other end of the spectrum is Oura; unless you pay $5.99/month or $69.99/year for a membership (first month free), “you’ll only be able to see your three daily Oura scores (Readiness, Activity, and Sleep), ring battery, basic profile information, app settings, and the Explore content.” Between these spectrum endpoints is Ultrahuman. Like RingConn, it offers extended warranties, this time including (to earlier comments) 2-year “Weight loss insurance”:

Achieved your weight loss goals? We’ll make resizing easy with a free Ultrahuman Ring AIR replacement, redeemable once during your UltrahumanX coverage period.

And, again, as with RingConn, although baseline data collection and reporting are lifetime-included, it also sells a suite of additional-function software plug-ins it calls PowerPlugs.

One final factor to consider, which I continue to find both surprising and baffling, is that none of the three manufacturers mentioned here seems to support having more than one ring actively associated with an account (and therefore cloud-logging and archiving data) at the same time. To press a second ring into service, you need to manually delete the first one from your account. The lack of multi-ring support is a frequent cause of complaints on Reddit and elsewhere, from folks who want to accessorize with multiple smart rings just as they do with normal rings, varying color and style to match outfits and occasions. And the fiscal benefit to the manufacturers of such support is intuitively obvious, yes?

Looking back, having just crossed through 3,000 words, I’m sure glad I decided to split what was originally envisioned as a single write-up into a multi-post series 😉 I’ll try to get the RingConn and Ultrahuman pieces published ahead of that October 21 deadline, for U.S. readers that might want to take the purchase plunge before inventory disappears. And until then, I welcome your thoughts in the comments on what I’ve written thus far!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post The Smart Ring: Passing fad, or the next big health-monitoring thing? appeared first on EDN.

A design guide for respiratory belt transducers

Thu, 09/04/2025 - 15:28

Curious about how respiratory belt transducers work—or how to design one yourself? This quick guide walks you through the essentials, from sensing principles to circuit basics. Whether you are a hobbyist, student, or engineer exploring wearable health technology, you will find practical insights to kickstart your own design.

Belly breathing, also known as diaphragmatic or abdominal breathing, involves deep inhalation that expands the stomach and allows the lungs to fully inflate. This technique engages the diaphragm—a dome-shaped muscle at the base of the lungs—which contracts downward during inhalation to create space for lung expansion and relaxes upward during exhalation to push air out.

In contrast, chest breathing (also called thoracic or shallow breathing) relies on upper chest muscles and produces shorter, less efficient breaths, limiting oxygen intake and often contributing to stress and tension. Belly breathing has been shown to lower heart rate and blood pressure, promote relaxation, and improve overall respiratory efficiency.

What if you could measure your breathing motion, capture it in real time, and receive meaningful feedback? A respiratory belt transducer offers a simple and effective solution. It detects changes in chest or abdominal diameter during breathing and converts that movement into a voltage signal, which can be recorded and analyzed to assess breathing patterns, rate, and depth.

First off, note that while piezoelectric, inductive, capacitive, and strain gauge sensors are commonly used in respiratory monitoring, this post highlights more accessible alternatives, namely conductive rubber cords and stretch sensors. These materials offer a low-cost, flexible solution for detecting abdominal or chest expansion, making them ideal for DIY builds, classroom experiments, and basic biofeedback systems.

Figure 1 A generic 2-mm diameter conductive rubber cord stretch sensor kit that makes breathing belt assembly easier. Source: Author

As observed, the standard 2-mm conductive rubber cord commonly available in the hobby electronics market exhibits a resistance of approximately 140 to 160 ohms per centimeter. This property makes it suitable for constructing a respiratory belt that generates a voltage in response to changes in thoracic or abdominal circumference during breathing.

Next, fabricate the transducer by securely bonding the flexible sensing element—the conductive rubber cord—to the inner surface of a suitably sized fabric belt. It should then be placed around the body at the level of maximum respiratory expansion.

A quick hint on design math: in its relaxed state, the conductive rubber cord (carbon-black impregnated) exhibits a resistance of approximately 140 ohms per centimeter. When stretched, the conductive particles disperse, increasing the resistance proportionally.

Once the force is removed, the rubber gradually returns to its original length, but not instantly. Full recovery may take a minute or two, depending on the material and conditions. You can typically stretch the cord to about 50–70% beyond its original length, but it must stay within that range to avoid damage. For example, a 15-cm piece should not be stretched beyond 25–26 cm.
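The figures above are easy to turn into a sizing helper. A minimal sketch, using the article’s ~140 Ω/cm and ~70% elongation-limit numbers (the constant and function names are mine, not from any datasheet):

```python
# Hypothetical figures from the article: ~140 ohm/cm relaxed resistance,
# and a safe elongation limit of roughly 70% beyond rest length.
OHMS_PER_CM_RELAXED = 140.0   # approximate; varies batch to batch
MAX_STRETCH_RATIO = 0.70      # do not exceed ~70% elongation

def relaxed_resistance(length_cm: float) -> float:
    """Approximate relaxed resistance of a conductive rubber cord."""
    return OHMS_PER_CM_RELAXED * length_cm

def max_safe_length(length_cm: float) -> float:
    """Longest length the cord should ever be stretched to."""
    return length_cm * (1.0 + MAX_STRETCH_RATIO)

# A 15 cm cord: ~2.1 kohm relaxed, stretch no further than ~25.5 cm.
print(relaxed_resistance(15.0))  # -> 2100.0
print(max_safe_length(15.0))     # -> ~25.5
```

This reproduces the article’s example: a 15-cm piece should not be stretched beyond roughly 25–26 cm.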

Keep in mind, this conductive rubber cord stretch sensor does not behave in a perfectly linear way. Its resistance can change from one batch to another, so it’s best used to sense stretching motion in a general way, not for exact measurements.

To ensure accurate signal interpretation, custom electronic circuitry with a predictable response to changes in cord length is essential; otherwise, the data will not hold water. The output connector on the adapter electronics should provide a voltage directly proportional to the stretch of the sensing element.
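The simplest starting point is a plain voltage divider; the numbers below (supply, fixed resistor, and the stretched-resistance figure) are illustrative assumptions, and note that a bare divider is non-linear, which is exactly why the front-end circuitry adds scaling and linearization:

```python
def divider_out(v_supply: float, r_fixed: float, r_sensor: float) -> float:
    """Output of a simple divider: stretch sensor on top, fixed resistor to
    ground. Stretching raises r_sensor, so v_out falls (invert in software)."""
    return v_supply * r_fixed / (r_fixed + r_sensor)

# A 15 cm cord (~2.1 kohm relaxed) against a 2.2 kohm fixed resistor at 3.3 V:
v_rest = divider_out(3.3, 2200.0, 2100.0)       # chest relaxed
v_stretched = divider_out(3.3, 2200.0, 3000.0)  # inhale raises R to ~3 kohm
print(round(v_rest, 3))       # -> 1.688
print(round(v_stretched, 3))  # -> 1.396
```

The ~0.3 V swing between rest and full inhale is comfortably resolvable by any microcontroller ADC.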

Frankly, this post doesn’t delve into the mechanical construction of the respiratory belt transducer, although conductive rubber cords are relatively easy to use in a circuit. However, they can be a bit tricky to attach to things, both mechanically and electrically.

The following diagram illustrates the proposed front-end electronics for the resistive stretch sensor (definitely not the final look). Optimized through voltage scaling and linearization, the setup yields an analog output suitable for most microcontroller ADCs.

Figure 2 The proposed sensor front-end circuitry reveals a simplistic analog approach. Source: Author

So, now you have the blueprint for a respiratory belt transducer, commonly known as a breathing belt. It incorporates a resistive stretch sensor to detect changes in chest or abdominal expansion during breathing. As the belt stretches, the system produces an analog output voltage that varies within a defined range. This voltage is approximately proportional to the amount of stretch, providing a continuous signal that mirrors the breathing pattern.
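Turning that continuous voltage into a breathing rate can be as simple as counting threshold crossings. A sketch on a synthetic belt waveform (sample rate, baseline, and amplitude below are made up for illustration):

```python
import math

def breaths_per_minute(samples, fs_hz, threshold):
    """Estimate breathing rate by counting rising crossings of a threshold."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if a < threshold <= b)
    duration_min = len(samples) / fs_hz / 60.0
    return crossings / duration_min

# Synthetic belt output: 0.25 Hz breathing (15 breaths/min), 1.5 V baseline,
# 0.3 V swing, sampled at 50 Hz for one minute.
fs = 50.0
signal = [1.5 + 0.3 * math.sin(2 * math.pi * 0.25 * (i / fs) - math.pi / 2)
          for i in range(int(fs * 60))]
print(breaths_per_minute(signal, fs, 1.5))  # ~15
```

A real signal would first want some low-pass filtering and a hysteresis band around the threshold to reject motion artifacts.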

Quick detour: A ratiometric output refers to a sensor output voltage that varies in proportion to its supply voltage. In other words, the output signal scales with the supply itself, so any change in supply voltage results in a corresponding change in output. This behavior is common in unamplified sensors, where the output is typically expressed as a percentage of the supply voltage.
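A quick sketch of why ratiometric behavior matters: if the ADC is referenced to the same supply that feeds the sensor, supply drift cancels out of the conversion. The 12-bit converter and 40% output fraction below are arbitrary illustrations:

```python
def adc_counts(v_out: float, v_ref: float, bits: int = 12) -> int:
    """Ideal ADC conversion referenced to v_ref."""
    return round(v_out / v_ref * (2 ** bits - 1))

# A ratiometric sensor outputs a fixed fraction of its supply (here 40%).
# Sampling with the ADC referenced to that same supply cancels supply drift:
for v_supply in (3.0, 3.3, 3.6):
    v_sensor = 0.40 * v_supply             # unamplified, ratiometric output
    print(adc_counts(v_sensor, v_supply))  # -> 1638 every time
```

Referencing the ADC to a fixed precision reference instead would make the same supply drift show up directly in the reading.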

Before wrapping up, I just came across another resistive-change strain sensor worth mentioning: GummiStra from Yamaha. It’s a rubber-like, stretchable sensor capable of detecting a wide range of small to large strains (up to twice its length), both statically and dynamically. You can explore its capabilities in detail through Yamaha’s technology page.

Figure 3 GummiStra unlocks new use cases for resistive stretch sensing across wearables, robotics, and structural health monitoring. Source: Yamaha

We will leave it there for the moment. Got your own twist on respiratory belt transducer design? Share your ideas or questions in the comments.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post A design guide for respiratory belt transducers appeared first on EDN.

A temperature-compensated, calibration-free anti-log amplifier

Wed, 09/03/2025 - 16:52
The typical anti-log circuit

The basic anti-log amplifier looks like the familiar circuit of Figure 1.

Figure 1 The typical anti-log circuit has uncertainties related to the reverse current, Is, and is sensitive to temperature.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The approximate equation for V0 given in Figure 1 comes from the Ebers-Moll model. A more advanced model employed by many modern spice simulators, such as LTspice, is the Gummel-Poon model, which I won’t discuss here. It suffices for discussions in this Design Idea (DI) to work with Ebers-Moll and to let simulations benefit from the Gummel-Poon model.

The simple Figure 1 circuit is sensitive to both temperature and the value of Is. Unfortunately, the value and limits of Is are not specified in datasheets. Interestingly, spice models employ specific parametric values for each transistor, but still say nothing about the limits of these values. Transistors taken from different sections of the same silicon wafer can have different parametric values. The differences between different wafers from the same facility can be greater yet and can be even more noticeable when those from different facilities of the same manufacturer are considered. Factor in the products of the same part number from different manufacturers, and clear, plausible concerns about design repeatability are evident.

Addressing temperature and Is variations

There’s a need for a circuit that addresses these two banes of consistent performance. Fortunately, the circuit of Figure 2 is a known solution to the problem [1].

Figure 2 This circuit addresses variations in both temperature and Is. Key to its successful operation is that Q1a and Q1b constitute a matched pair, taken from adjacent locations on the same silicon wafer. Operating with the same VCEs is also beneficial for matching.

It works as follows. Given that Q1a and Q1b are taken from adjacent locations on the same silicon wafer, their characteristics (and specifically Is) are approximately identical (again, Is isn’t spec’d). And so, we can write that:

It’s also clear that:

Additionally,

So:

Therefore:      

Substituting Ic expressions for the two VBEs,

And here’s some of the circuit’s “magic”: whatever their value, the matched Is’s cancel! From the properties of logarithms,

Again, from the properties of logarithms:     

Exponentiating, substituting for the Ic’s, and solving for V0:

Note that Vi must be negative for proper operation.
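In generic notation (the symbols below are placeholders rather than Figure 2’s exact designators), the matched-pair argument runs:

```latex
% Ebers-Moll for each half of the matched pair (identical I_S):
I_{C1} = I_S\, e^{V_{BE1}/V_T}, \qquad I_{C2} = I_S\, e^{V_{BE2}/V_T}
% Subtracting the base-emitter voltages cancels the unspecified I_S:
V_{BE1} - V_{BE2} = V_T \ln\frac{I_{C1}}{I_{C2}}
% Exponentiating: with I_{C2} pinned by a reference current and the
% difference driven by an attenuated input \alpha V_i, the output is
% anti-logarithmic in V_i:
\frac{I_{C1}}{I_{C2}} = e^{(V_{BE1}-V_{BE2})/V_T}
\;\Rightarrow\; V_0 \propto e^{\alpha V_i / V_T}, \qquad V_T = \frac{kT}{q}
```

The residual 1/V_T ∝ 1/T inside the exponent is exactly the temperature dependence the thermistor network discussed next is meant to cancel.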

Improving temperature compensation

Let’s now turn our attention to using a thermistor to deal with temperature compensation. Those I’m used to dealing with are negative temperature coefficient (NTC) devices. But they’ll do a poor job of canceling the “T” in the denominator of Equation (1). Was there an error in Reference [1]?

I exchanged the positions of R3 and the (NTC) thermistor in the circuit of Figure 2 and added a few resistors in various series and parallel combinations. Trying some resistor values, this met with some success. But the results were far better with the circuit as shown when a positive temperature coefficient (PTC) was used.

I settled on the readily available and inexpensive Vishay TFPT1206L1002FM. These are almost perfectly linear devices, especially in comparison to the highly non-linear NTCs. Figure 3 shows the differences between two such devices with resistances of 10 kΩ at 25°C. It makes sense that a properly situated nearly linear device would do a better job of canceling the linear temperature variation.

Figure 3 A comparison of a highly non-linear NTC and a nearly linear PTC.

To see if it would improve the overall temperature compensation in the Figure 2 circuit, I considered adding a fixed resistor in series with the TFPT1206L1002FM and another in parallel with that series combination.

Thinking intuitively that this three-component combination might work better in the feedback path of an inverting op amp whose input was another fixed resistor, I considered both the original non-inverting and this new inverting configurations. The question became how to find the fixed resistor values.

The argument of the exponent in Equation (1) (exclusive of Vi) provides the transfer function H(T, <resistors, PTC>), which would be ideally invariant with temperature T (with Th1 suitably modified to accommodate the series and parallel resistors).

For any given set of resistor values, the configurations apply some approximate, average attenuation α to the input voltage Vi. We need to find the values of the resistors and of α such that for each temperature Tk over a selected temperature range (I chose to work with the integer temperatures from -40°C to +85°C inclusive and used the PTC’s associated values), the following expression is minimized:

Excel’s Solver was the perfect tool for this job. (Drop me a note in this DI’s comments section if you’re interested in the details.)
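A brute-force stand-in for the Solver step can illustrate the idea: pick the fixed resistors so the feedback network’s gain tracks absolute temperature, cancelling the 1/V_T (i.e., 1/T) factor in the exponent. All component values and the PTC slope below are hypothetical, not the DI’s:

```python
# Brute-force stand-in for the Excel Solver step. All values hypothetical.

def r_ptc(t_c, r25=10e3, tcr=0.0033):
    """Nearly linear PTC model (assumed ~0.33%/degC slope)."""
    return r25 * (1.0 + tcr * (t_c - 25.0))

def gain(t_c, r_series, r_parallel, r_in=10e3):
    """Inverting-amp gain: feedback = r_parallel || (r_series + PTC)."""
    z_branch = r_series + r_ptc(t_c)
    z_fb = r_parallel * z_branch / (r_parallel + z_branch)
    return z_fb / r_in

def flatness(r_series, r_parallel, temps=range(-40, 86)):
    """Fractional spread of gain(T)/T_abs over the range; smaller is flatter."""
    ratios = [gain(t, r_series, r_parallel) / (t + 273.15) for t in temps]
    return (max(ratios) - min(ratios)) / min(ratios)

# Coarse grid search (Solver does this far more elegantly):
best = min(
    ((rs, rp) for rs in range(0, 50001, 2500)
              for rp in range(10000, 200001, 5000)),
    key=lambda rr: flatness(rr[0], rr[1]),
)
print(best, round(flatness(*best), 5))
```

A real pass would use the PTC’s tabulated resistance values rather than a linear model, and simultaneously solve for the attenuation α as the DI describes.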

The winning result

The configurations were found to work equally well (with different component values). I chose the inverter because it allows Vi to be a positive voltage. Figure 4 shows the winning result. The average value α was determined to be 1.1996.

Figure 4 The simulated circuit with R2a, R2b, and R3 chosen with the help of Excel’s Solver. A specific matched pair of transistors has been selected, along with values for resistors R1 and Rref, and a voltage source Vref.

For Figure 4, Equation (1) now becomes approximately:

The circuit in Figure 4 was simulated with 10° temperature steps from -40°C to +80°C and values for Vi of 100 µV, 1 mV, 10 mV, 100 mV, 1 V, and 6 V. These V0 values were divided by those given by Equation (2), which are the expected results for this circuit.

Over the industrial range of operating temperatures and more than four orders of magnitude of input voltages, Figure 5 shows a worst-case error of -4.5% / +1.0%.

Figure 5 Over the industrial range of operating temperatures and over 4.5 orders of magnitude of input voltages from 100 µV to 6 V, the Figure 4 circuit shows a worst-case error of better than -5.0% / +1.0%. V0 ranges from 2.5 mV to 3 V.

Bonus

With a minor addition, this circuit can also support a current source output. Simply split Figure 4’s R1 into two resistors in series and add the circuit of Figure 6.

Figure 6 Split R1 of Figure 4 into R1a and R1b; also add U4, Rsense, and a 2N5089 transistor to produce a current source output.

Caveats

With all of this, the simulation does not account for variations between the Is’s of a matched pair’s transistors; I’m unaware of a source for any such information. I’ve not specified op amps for this circuit, but they will require positive and negative supplies, should be able to swing at least 1 V negative with respect to ground, and should have a common-mode input range that includes ground. Bias currents should not exceed 10 nA, and sub-1-mV offset voltages are recommended.

Temperature compensation for anti-log amp

Excel’s Solver has been used to design a temperature-compensation network for an anti-log amplifier around a nearly linear PTC thermistor. The circuit exhibits good temperature compensation over the industrial range. It operates over a signal range of more than four orders of magnitude. Voltage and current outputs are available.

References

  1. Jain, M. K. (n.d.). Antilog amplifiers. https://udrc.lkouniv.ac.in/Content/DepartmentContent/SM_6aac9272-bddd-4108-96ba-00a485a00155_57.pdf

Related Content

The post A temperature-compensated, calibration-free anti-log amplifier appeared first on EDN.

Positive analog feedback linearizes 4 to 20 mA PRTD transmitter

Tue, 09/02/2025 - 19:07

I recently published a simple design for a platinum resistance temperature detector (PRTD) 4 to 20 mA transmitter circuit, illustrated in Figure 1.

Figure 1 The PRTD 4 to 20 mA loop transmitter with constant current PRTD excitation that relies on 2nd-order software nonlinearity correction math, T(°C) = (-u + √(u² – 4wx))/(2w).

Wow the engineering world with your unique design: Design Ideas Submission Guide

The simplicity of Figure 1’s circuitry is somewhat compromised, however, by its need for PRTD nonlinearity correction in software:

where u and w are constants and x = RPRTD@0°C – RPRTD@T°C:
T(°C) = (-u + √(u² – 4wx))/(2w)

Unfortunately, implementing such quadratic floating-point arithmetic in a small system might be inconveniently costly in code complexity, program memory requirements, and processing time.

But fortunately, there’s a cool, clever, comparably accurate, code-ware-lite, and still (reasonably) uncomplicated alternative (analog) solution. It’s explained in the article “Design Note 45: Signal Conditioning for Platinum Temperature Transducers,” by (whom else?) famed designer Jim Williams.

Figure 2, shamelessly copied from Williams’ article, showcases his analog solution to PRTD nonlinearity.

Figure 2 A platinum RTD bridge where feedback to the bridge from A3 linearizes the circuit. Source: Jim Williams

Williams explains: The nonlinearity could cause several degrees of error over the circuit’s 0°C to 400°C operating range. The bridge’s output is fed to instrumentation amplifier A3, which provides differential gain while simultaneously supplying nonlinearity correction. The correction is implemented by feeding a portion of A3’s output back to A1’s input via the 10k to 250k divider. This causes the current supplied to Rp to slightly shift with its operating point, compensating sensor nonlinearity to within ±0.05°C.

Figure 3 shows Williams’ basic idea melded onto Figure 1’s current transmitter concept.

Figure 3 A PRTD transmitter based on the classic LM10 op-amp plus a 200 mV precision reference combo.

R5 provides PRTD-linearizing positive feedback to sensor excitation over the temperature range of -130 °C to +380 °C.

Here, linearity correction is routed through R5 to the LM10 internal voltage reference, where it is inverted to become positive feedback. The resulting “slight shift in operating point” (about 4% over the full temperature range) duplicates Williams’ basic idea to achieve the measurement linearity plotted in Figure 4.

Figure 4 Positive feedback reduces linearity error to < ±0.05°C over -127°C to +380°C. The x-axis = Io (mA), left y-axis = PRTD temperature, right y-axis = linearity error. T(°C) = 31.7(Io – 8 mA).

Of course, consistently achieving this ppm level of accuracy and linearity probably requires an iterative calibration process like the one Williams describes. Figure 5 shows the modified circuit from Figure 3, which includes three additional trims to enable post-assembly tweaking using his procedure.

Figure 5 The linearized temperature transmitter modified for post-assembly tweaking using Williams’ procedure.

Substituting selected precision resistors for the PRTD at chosen calibration points is vital to making the round-robin process feasible. Using actual variable temperatures would take impossibly long! Unfortunately, super-precise decade boxes like the one Williams describes are also super-scarce commodities. So, three suitable standard-value resistors, along with the corresponding simulated temperatures and 4-20 mA loop currents, are suggested in Figure 5. They are:

51.7 Ω = -121°C = 4.183 mA
100 Ω = 0°C = 8.000 mA
237 Ω = 371°C = 19.70 mA

Happy tweaking!

Oh yeah, to avoid overheating in Q1, it should ideally be in a TO-220 or similar package if Vloop > 15 V.

Related Content

The post Positive analog feedback linearizes 4 to 20 mA PRTD transmitter appeared first on EDN.

EMI fundamentals for spacecraft avionics & satellite applications

Tue, 09/02/2025 - 16:44

OEMs must ensure their avionics are electromagnetically clean and do not pollute other sub-systems with unwelcome radiative, conducted, or coupled emissions. Similarly, integrators must ensure their space electronics are not susceptible to RFI from external sources, as this could impact performance or even damage hardware.

As a product provider, how do you ensure that your subsystem can be integrated seamlessly and is ready for launch? As an operator, how does EMI affect your mission application and the quality of service you deliver to your customers?

EMI is unwanted electrical noise that interferes with the normal operation of spacecraft and satellite avionics. It is generated when fast-switching signals with rapid changes in voltage and current interact with unintended capacitances and inductances, producing high-frequency noise that can radiate, conduct, or couple unintended energy into nearby circuits or systems. No conduction exists without some radiation, and vice versa!

Fast switching signals with rapidly changing currents and voltages energise parasitic inductances and capacitances, causing these to continuously store and release energy at high frequencies. These unintended interactions become stronger as the rate of change increases, generating transients, ringing, overshoot and undershoot, crosstalk, as well as power- and signal-integrity problems that impact satellite applications.

Sources of EMI

Modern avionics use switching power supplies, e.g., isolated DC-DCs or point-of-load (POL) regulators, CPUs, FPGAs, clock oscillators, and speedy digital interfaces, all of which switch at high frequencies with increasingly faster edge rates that contain RF harmonics. These functions have become more tightly coupled as OEMs integrate more of these into physically smaller satellites, exacerbating the potential to form and spread EMI.

Furthermore, they typically share power or ground return rails, and a signal or noise in one circuit affects the others through common-impedance coupling via the shared impedance, contributing to power-integrity issues such as ground bounce.

Similarly, satellites use motors, relays, and mechanical switches to deploy and orient solar arrays, point antennae, control reaction wheels and gyroscopes, for robotics and to enable/disable redundant sub-systems. Rapid changes in current and voltage during their operation generate conductive and radiative EMI that impacts nearby circuits, caused by arcing, brush noise within motors, inductance kickback from coils, and contact bounce from mechanical switches.

EMI can also enter spacecraft from the external space environment, i.e., high-energy radiation from solar flares and cosmic rays can induce noise resulting in discharges and transient spikes. Over time, charged particles from the Earth’s magnetosphere, solar wind, or from geomagnetic storms, such as electrons and ions, accumulate on satellite surfaces, forming large potential differences. When the amassed electric-field strength exceeds the breakdown voltage of materials, ESD-induced EMI generates a fast, high-energy transient pulse that can couple into signal lines, disrupting or damaging space electronics. Conductive coatings and grounding networks are used to equalise surface potentials, as well as plasma contactors to remove built-up charge.

EM impact of a high dI/dt and dV/dt

EMI can be generated, coupled, and then conducted through physical wires, traces, connectors, and cables. Conductors separated by a dielectric form a capacitor, even unintentionally, and a fast signal on one trace switching at nanosecond speeds, i.e., a high dV/dt, energizes a changing electric field that can capacitively couple noise onto an adjacent track, e.g., a sensitive analogue signal.

Similarly, any loop of wire or a PCB trace intrinsically contains inductance and a high dI/dt and energizes a changing magnetic field that can inductively couple (induce) noise onto an adjacent trace or circuit.

In both cases, inherent parasitic capacitance or inductance provides a lower impedance to current than the intended path. Since current must flow in a loop to its source, loop impedance is the key!
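These two coupling mechanisms reduce to i = C·dv/dt and v = L·di/dt, which make for quick back-of-envelope estimates. The parasitic values below are illustrative assumptions, not from any specific layout:

```python
# Back-of-envelope coupling estimates from i = C*dv/dt and v = L*di/dt.
# Component values are illustrative, not from any particular board.
C_PARASITIC = 1e-12   # 1 pF between adjacent traces
L_PARASITIC = 10e-9   # 10 nH of loop/trace inductance

dv_dt = 3.3 / 1e-9    # a 3.3 V edge in 1 ns
di_dt = 20e-3 / 1e-9  # 20 mA switched in 1 ns

i_coupled = C_PARASITIC * dv_dt  # current injected into the victim trace
v_coupled = L_PARASITIC * di_dt  # voltage induced across the loop

print(f"{i_coupled * 1e3:.1f} mA capacitively injected")  # -> 3.3 mA
print(f"{v_coupled * 1e3:.0f} mV inductively induced")    # -> 200 mV
```

Milliamps and hundreds of millivolts of noise from picofarads and nanohenries of parasitics is exactly why nanosecond edges dominate EMI budgets.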

The faster the rate of change, the stronger the electromagnetic coupling, and a changing electric field generates a corresponding magnetic field; the structure will radiate like an antenna if its loop area is large, if the signal contains high-frequency harmonics, or if the forward and return paths are not tightly coupled. The radiated EM wave then couples into nearby conductive structures such as cables, traces, metal enclosures, and sensors, which receive the unwanted RFI.

Any conductor with a time-varying current creates an EM field, and a signal wire and its return path form a loop which can become an antenna when carrying fast-switching currents. Similarly, a PCB trace can start radiating, even if the fundamental signal frequency is low but the signal contains fast edges, if its forward path is not referenced to an adjacent solid ground plane or if the track length approaches 1/10th or more of the signal wavelength; at that point, the EM fields no longer cancel, and standing waves form that radiate from the track. As a simple example, a 10-cm trace resonates around 350 MHz, depending on the PCB dielectric, and a 1-ns edge rate contains harmonics up to this frequency that will radiate.
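These two figures can be sanity-checked with the standard knee-frequency estimate (f ≈ 0.35/t_rise) and a quarter-wave resonance approximation; the FR4 permittivity below is an assumed value:

```python
import math

C0 = 3.0e8  # free-space propagation speed, m/s

def knee_frequency(t_rise_s):
    """Highest significant harmonic of a digital edge: f ~ 0.35 / t_rise."""
    return 0.35 / t_rise_s

def quarter_wave_resonance(length_m, eps_r=4.4):
    """Approximate resonance of an unreferenced trace (FR4 eps_r assumed)."""
    return C0 / (4.0 * length_m * math.sqrt(eps_r))

print(round(knee_frequency(1e-9) / 1e6))          # -> 350 MHz for a 1 ns edge
print(round(quarter_wave_resonance(0.10) / 1e6))  # ~357 MHz for a 10 cm trace
```

Both land right at the article’s “around 350 MHz,” which is why a 1-ns edge and an unreferenced 10-cm trace are such an unfortunate pairing.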

EMI issues in modern modulation techniques

For telecommunications applications, EMI can raise the noise floor, masking low-power uplink carriers (Figure 1), impacting receiver sensitivity and dynamic range, lowering SNR, and reducing channel capacity. Unintended, in-band spurs can distort modulation constellations, leading to bit/symbol errors and degrading error vector magnitude (EVM). Energy from unwanted spurs can completely mask narrowband carriers or leak into adjacent channels, impacting performance and regulated RFI emission levels.

Figure 1 QPSK and 16-PSK constellations before (left) and after (right) EMI.

Telecommunication satellites provide a continuous service with tight regulatory limits, and even small EMI emissions can be problematic. Payloads typically process many channels and frequency bands, receiving low-level uplinks, so any unwanted noise impacts the overall link budget and operational integrity.

RFI coupling into the low noise amplifiers (LNAs), frequency converters, and filters can generate harmonic distortion, intermodulation products, and crosstalk between channels.

EMI issues in space applications

Earth-observation applications rely on high-precision optical, LiDAR, radar, or hyperspectral sensors, and unwanted EMI can introduce noise or distortion into the receive electronics, degrading resolution, accuracy, and calibration, misinterpreting the collected data (Figure 2).

Figure 2  Earth-observation imagery before (left) and after (right) EMI. Source: Spacechips

Signals intelligence (SIGINT) satellites rely on the accurate detection, reception, and analysis of weak, distant, and often low-power carriers, and unwanted EMI can severely degrade receiver performance, limit intelligence value, or even render it ineffective (Figure 3). RFI can reduce sensitivity and dynamic range, or overload (jam) RF front-ends, causing non-linear distortion. Internally generated noise can mimic the characteristics of actual intercepted signals, resulting in false-positive classifications or geolocation, misleading analysts or automated processing systems.

EMI from the on-board electronics or switching power supplies can raise the receiver’s noise floor, making it harder or impossible to detect weak signals of interest.

Figure 3 SIGINT spectra before (left) and after (right) EMI. Source: Spacechips

For in-space servicing, assembly, and manufacturing (ISAM) applications, unwanted EMI from motors, actuators, and robotics can impact LiDAR, radar, cameras, and proximity sensors, resulting in loss of situational awareness, errors in docking and alignment, and reduced control accuracy.

For space exploration, EMI can affect sensitive instruments, corrupting measurements, resulting in the misinterpretation of scientific data. For example, magnetometers are used to detect weak, planetary magnetic fields and their variation, and artificial emissions from the avionics or spacecraft motors can mask or distort real science. As shown in Figure 4, magnetometers are often mounted on long booms away from the satellite to reduce the impact of EMI from the on-board electronics.

Figure 4 NASA’s MESSENGER Spacecraft with Magnetometer Boom. Source: NASA

For all applications, unintended and uncontrolled EMI on power, ground, and signal cables/traces affects on-board circuits and overall system performance. If not managed, RFI can pose a greater threat to avionics than the radioactive environment of space, damaging sub-systems, impacting mission reliability, and satellite lifetime.

Regulatory agencies

For decades, many OEMs have built avionics with little regard for EMI, only to discover that emissions are too high or that their sub-systems are susceptible to external RFI. Considerable time is then spent identifying the source of the interference, retrofitting fixes to patch the problem, and passing the mission’s EMC requirements. Often, the root cause is never found or fully understood, and this ‘sticking-plaster’ approach increases both non-recurring and recurring product costs, as well as delaying time-to-market.

What should you do if you discover EMI in your latest hardware? For all applications, unwanted noise could result in RFI emissions that violate spectral regulations and interfere with other satellites or terrestrial systems. The UN’s ITU defines how the radio spectrum is allocated between different services and sets maximum allowable levels for out-of-band emissions, spurs, effective isotropic radiated power (EIRP), and the received power flux density on Earth.
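The regulated quantities mentioned above follow from simple link-budget arithmetic: EIRP is the transmit power plus antenna gain (less feed losses), and the power flux density at Earth is the EIRP spread over a sphere at the slant range. A sketch with illustrative numbers (the 10-W/30-dBi downlink is hypothetical, not a regulatory example):

```python
import math

def eirp_dbw(tx_power_dbw: float, antenna_gain_dbi: float, losses_db: float = 0.0) -> float:
    """EIRP = transmit power + antenna gain - feed losses (all in decibel terms)."""
    return tx_power_dbw + antenna_gain_dbi - losses_db

def pfd_dbw_per_m2(eirp: float, distance_m: float) -> float:
    """Power flux density at distance d: PFD = EIRP - 10*log10(4*pi*d^2)."""
    return eirp - 10 * math.log10(4 * math.pi * distance_m ** 2)

# Hypothetical GEO downlink: 10 dBW (10 W) transmitter, 30 dBi antenna, 1 dB feed loss,
# observed from the sub-satellite point at ~35,786 km
e = eirp_dbw(10.0, 30.0, 1.0)
print(f"EIRP = {e:.1f} dBW, PFD at Earth = {pfd_dbw_per_m2(e, 35_786_000):.1f} dBW/m^2")
```

A compliance check would compare that flux density against the ITU limit for the band and service in question.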

National regulators, such as the FCC (US), Ofcom (UK), CEPT (Europe), and ETSI (Europe), enforce these limits before granting operating licenses. Agencies provide EMC standards to guide OEMs developing avionics hardware, e.g., MIL-STD-461, AIAA S-121A, and ECSS-E-ST-20C.

Characterizing EMI

The first step in determining the origin of unwanted EMI is to understand whether it is being radiated, conducted, coupled, or a combination of these. EM hardware is often tested as a proof-of-concept PCB in a lab without a case, using unshielded cables and connectors, making system validation more susceptible to external pick-up and common-mode noise.

This interference needs to be characterized initially (probe ground to understand the measurement noise floor) and managed, using ferrite-bead clamps for example, to avoid false positives. Figure 5 and Figure 6 show EM testing with significant common-mode noise picked up by the setup, which appears on all the power rails and the ground plane. Both the supply and return cables are around eighteen inches long, mostly untwisted and unprotected from EMI:

Figure 5 Typical EM testing in a lab using exposed hardware. Source: Spacechips

Figure 6 Common and differential-mode scope measurements of 1V8 power rail. Source: Spacechips
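The common- and differential-mode components seen in Figure 6 can be separated arithmetically from two single-ended probe captures, using the usual definitions V_cm = (V1 + V2)/2 and V_dm = V1 − V2. A minimal sketch; the sample data is synthetic, not from the measurement shown:

```python
def decompose(v1, v2):
    """Split two single-ended voltage captures into common- and differential-mode parts.
    V_cm = (V1 + V2) / 2 rides identically on both conductors (e.g. cable pick-up);
    V_dm = V1 - V2 is the intended signal between them."""
    v_cm = [(a + b) / 2 for a, b in zip(v1, v2)]
    v_dm = [a - b for a, b in zip(v1, v2)]
    return v_cm, v_dm

# Synthetic samples: a steady 0.9 V differential signal with a 0.2 V common-mode
# disturbance appearing on both the supply and return conductors mid-capture
supply = [1.1, 1.3, 1.1]
ret    = [0.2, 0.4, 0.2]
cm, dm = decompose(supply, ret)
print(cm)   # common mode tracks the disturbance, ~[0.65, 0.85, 0.65]
print(dm)   # differential mode stays ~0.9 V throughout
```

The constant differential trace with a moving common-mode trace is exactly the signature of setup pick-up rather than a genuinely noisy rail.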

Testing in an anechoic chamber isolates the device under test (DUT) from external interference as well as internal reflections, simulating open-space conditions, allowing you to measure the actual emissions from your avionics to understand their origin and mitigate their impact.

Engineering qualification model (EQM) and flight model (FM) hardware are typically verified in a sealed metal box with gaskets, shielded cables, and connectors, providing a protective Faraday cage for the DUT. This makes the system less susceptible to external EMI and minimizes RFI emissions from the avionics.

Reducing EMI

To reduce EMI in existing avionics, filters, chokes, and ferrite beads (lossy, as opposed to energy-storing, inductors) are added to lower conducted noise on power, signal, and data cables. The most obvious way to decrease EM coupling is to increase the physical separation between conductors, but this may not always be possible. Twisted pairs equalize field coupling between the two wires, converting pick-up into common-mode interference that can subsequently be removed. Similarly, differential signalling cancels EM fields.

Clamp-on ferrites choke high-frequency common-mode noise on conductors, allowing low-speed signals to pass while dissipating RF interference as heat. If the same EMI could have generated radiated emissions from long cables, then the ferrites would indirectly reduce this antenna effect. Chip-bead ferrites can suppress both differential and common-mode noise, depending on their placement.

Shielding reduces radiated EMI by creating a physical barrier that reflects or absorbs EM fields before they can escape, as well as preventing external noise from entering avionics. Gaskets maintain an electrically conductive seal, preventing external EMI radiation from entering through openings or internal RFI from escaping through gaps or seams in a metal enclosure. Gaskets ensure a continuous Faraday cage, maintaining a low-impedance electrical path to ground, reducing potential differences that could allow common-mode currents and radiation. The gasket redirects EM fields along the enclosure or to ground, instead of allowing them to radiate into or out of the avionics.
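The absorption component of a shield's effectiveness can be estimated from the skin depth, using the standard approximations δ = √(2/(ωμσ)) and A ≈ 8.69·t/δ dB (about 8.7 dB per skin depth of wall thickness). A sketch with aluminium material constants; the 1-mm wall and 100-MHz frequency are illustrative:

```python
import math

MU0 = 4 * math.pi * 1e-7          # permeability of free space, H/m
SIGMA_AL = 3.5e7                  # conductivity of aluminium, S/m

def skin_depth_m(freq_hz: float, mu_r: float = 1.0, sigma: float = SIGMA_AL) -> float:
    """Skin depth delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 / (omega * mu_r * MU0 * sigma))

def absorption_loss_db(thickness_m: float, freq_hz: float) -> float:
    """Absorption loss A = 8.69 * t / delta, in dB."""
    return 8.69 * thickness_m / skin_depth_m(freq_hz)

# A 1-mm aluminium enclosure wall at 100 MHz: skin depth is only a few microns,
# so absorption loss is enormous -- in practice the gaps and seams dominate instead
print(f"skin depth = {skin_depth_m(100e6)*1e6:.1f} um, "
      f"absorption loss = {absorption_loss_db(1e-3, 100e6):.0f} dB")
```

The takeaway matches the article's emphasis on gaskets: at VHF and above, a solid metal wall is effectively opaque, and leakage through apertures and seams sets the real-world shielding performance.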

I’ve seen absorbing foam added to many avionics products to soak up unwanted radiated emissions. It damps internal reflections, preventing them from bouncing around within enclosures and coupling or inducing further EMI, and it reduces the strength of RF energy before it escapes through gaps or seams or conducts onto cables and traces. The foam contains carbon or ferrite particles that create resistive losses when RF fields interact with them. An electronic case can act as a cavity that resonates at certain frequencies, and the use of foam can reduce such standing waves.

Tips for proper EMC design

While the addition of EMI filters, RF absorbing foam, and ferrites is very helpful, they should be the last line of defense, not the first solution. If you design it right, you won’t need to fix it later! Sometimes there will be exceptions to the rule, and I have used a high-speed semiconductor in a large ceramic package whose intrinsic parasitic inductance generated an EMI spur. Initially, this was an issue for both the OEM and the telecommunications operator, who cleverly positioned the problematic channel over a low-traffic region of the Indian Ocean.

Likewise, when observing and measuring signals, you must ensure your test equipment does not pick up unwanted interference, which can confuse decision-making and delay time-to-market by incorrectly diagnosing a working sub-system as a faulty, noisy one. A scope probe and its ground lead form a loop, creating a closed-circuit path that can pick up signals or interference through electromagnetic induction. Faraday’s Law states, “a changing magnetic field through a closed loop induces an EMF in the loop.” The larger the loop area, or the faster the rate of change of the magnetic field, the greater the induced voltage.

Proper EMC design and mitigation are essential to ensure data integrity, mission reliability, and satellite longevity. As avionics sub-systems become faster and more integrated, a more proactive approach is required to deliver right-first-time, EMC-compliant hardware and satellite applications:

  1. EMC compliance must be a key part of early product design.
  2. Understand the sources of emissions and how to control them – 90% of all EMI originates from unintentional signal flow, e.g., crosstalk or return currents flowing where they were never intended to, such as too close to the edge of a PCB. All unwanted EMI originates from intentional signals!
  3. Simulate before building hardware: current radiates, not voltage, so check its spectrum before building hardware. The radiated electric field, in V/m, from a current loop in free space can be simplified as E = k·I·A, where I is the current amplitude, A the loop area, and k a constant for a given frequency and observation point. The corresponding magnetic near field, in A/m, can be approximated as H ≈ I·S/(2πD²), where S is the loop separation and D the measurement distance.
  4. The most common cause of EMI from products is unintentional common-mode currents on external cables and shields as a result of voltage differences relative to the chassis.
  5. Manage the layout of your return currents: provide dedicated ground planes, control their spread on these reference planes (they follow the path of least impedance, dominated by inductance at high frequencies) to avoid coupling, minimize loop area, and provide adjacent ground layers for signals. The Hyperlynx simulation in Figure 7 predicts current-flow density from a SIGINT SDR:

Figure 7 Siemens’ Hyperlynx Post-Layout Prediction of Return-Current Flow. Source: Spacechips

  6. Minimize loop area by keeping PCB trace lengths and cables < λ/10 of the highest harmonic frequency within a signal, not just the fundamental component.
  7. When probing signals with an oscilloscope, use the shortest ground lead possible to minimize loop area, reducing the amount of induced magnetic flux and hence EMI. A shorter ground connection also has less inductance, which means less distortion and a more accurate representation of the signal under test. Probing in differential mode cancels common-mode noise at the measurement point, and a ferrite-bead clamp around the cable reduces the amount of external noise picked up (induced) by the lead entering the scope. Null-probe ground to baseline the noise floor before making measurements!
  8. When testing EM hardware in the lab, exposed circuit boards and/or unshielded power and ground cables pick up interference. This can pollute measurements and obfuscate decisions when validating the system design.
  9. Test in an anechoic chamber to isolate the avionics from external interference as well as internal reflections, so you can measure the actual emissions from your hardware, understand their origin, and mitigate their impact.
  10. Design your PCB stack, floorplan, and layout to prevent the generation of EMI: assign routing layers between neighbouring ground planes to contain the spread of return currents and maintain good Z0. Never route across a power or ground-plane split!
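The λ/10 guideline above can be turned into a quick design check. A minimal Python sketch; the clock parameters are illustrative, and the f_knee ≈ 0.35/t_rise knee-frequency rule of thumb is a common approximation for the highest significant harmonic of a digital edge, not a figure from the article:

```python
C = 299_792_458.0  # speed of light in free space, m/s

def knee_frequency_hz(rise_time_s: float) -> float:
    """Highest significant harmonic of a digital edge, f_knee ~ 0.35 / t_rise."""
    return 0.35 / rise_time_s

def max_trace_length_m(freq_hz: float) -> float:
    """Lambda/10 rule: keep traces and cables shorter than a tenth of a wavelength."""
    return (C / freq_hz) / 10

# A 100-MHz clock with 1-ns edges: the edge harmonics, not the 100-MHz
# fundamental, set the length limit
f = knee_frequency_hz(1e-9)   # 350 MHz
print(f"f_knee = {f/1e6:.0f} MHz, max unterminated length = {max_trace_length_m(f)*100:.1f} cm")
```

In PCB material the wavelength is shorter still (divide by the square root of the effective dielectric constant), so the free-space figure is an upper bound.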

There’s so much more to say, and if you would like to learn more, Spacechips teaches courses on Right-First-Time PCB Design for Spacecraft Avionics as well as EMI Fundamentals for Spacecraft Avionics and Satellite Applications.

Spacechips’ Avionics-Testing Services help OEMs and satellite integrators solve EMI issues that are preventing them from meeting regulatory targets and delivering hardware on time.

Dr. Rajan Bedi is the CEO and founder of Spacechips, which designs and builds a range of advanced, AI-enabled, re-configurable, L to K-band, ultra high-throughput transponders, SDRs, Edge-based on-board processors and Mass-Memory Units for telecommunication, Earth-Observation, ISAM, SIGINT, navigation, 5G, internet and M2M/IoT satellites. The company also offers Space-Electronics Design-Consultancy, Avionics Testing, Technical-Marketing, Business-Intelligence and Training Services. (www.spacechips.co.uk).

Related Content

The post EMI fundamentals for spacecraft avionics & satellite applications appeared first on EDN.

Tearing apart a multi-battery charger

Mon, 09/01/2025 - 18:11

As regular readers may recall, I’m fond of acquiring gear from the “Warehouse” (now renamed as “Resale”) area of Amazon’s website, particularly when it’s temporary-promotion marked down even lower than the normal discounted-vs-new prices. The acquisitions don’t always pan out, but the success rate is sufficient (as are the discounts) to keep me coming back for more.

Today’s product showcase was a mixed-results outcome, which I’ve decided to tear down to maximize my ROI (assuaging my curiosity in the process). Last October, I picked up EBL’s 8-bay charger with eight included NiMH batteries (four AA and four AAA), $24.99 new, for $17.22 (post-20%-off promo discount) in claimed “mint” condition:

The price tag was the primary temptation; that said, the added inclusion of two USB-A power ports was a nice feature set bonus that I hadn’t encountered with other multi-bay chargers. And Amazon also claimed that this Warehouse-sourced device was the second-generation EBL model that supported per-bay charging flexibility.

Not exactly (or even remotely) as-advertised

When it arrived, however, while the device itself was in solid cosmetic condition, its packaging, as usual accompanied in the following photos by a 0.75″ (19.1 mm) diameter U.S. penny for size-comparison purposes, definitely wasn’t “mint”:

and the contents (including the quick start guide, which I’ve scanned for your educational convenience) were also quite jumbled:

(I belatedly realized, by the way, that I’d forgotten one piece of paper, the also-scanned user manual, in the previous box-contents overview photo)

Not to mention the fact that the charger ended up being the first-generation model, not the second-gen successor, thereby requiring that both bays of each two-bay pair be populated (and with the same battery technology—Ni-MH or Ni-Cd—and size/capacity) to successfully kick off the charging process. When I grumbled, Amazon offered $4.49 in partial-refund compensation, which I begrudgingly accepted, rationalizing that the eight included batteries were still fine and the charger seemed to work properly for what it truly was. Only later did I realize that the charger was actually extremely finicky, rejecting batteries that other chargers accepted complaint-free:

Turning lemons into lemonade

And like I said before, I’d always been curious to look inside one of these things. So, I decided to pull it out of active service and sacrifice it to the teardown knife instead. Here’s our patient:

Note how both sides’ contact arrangements support both AA and AAA battery sizes:

Onward. Top:

Bottom:

Left and right sides:

And back, also including a label closeup:

Before continuing, here are both ends of the AC cord that powers the charger:

When at first you don’t succeed, muscle your way in

And now it’s time to dive inside. No visible (or even initially invisible) screws to speak of:

So, I resorted to “elbow grease”. The device didn’t give up its internal secrets easily (an understandable reality, given that its target customers are largely-tech-unsavvy consumers, and it has high-voltage AC running around inside it), but it eventually succumbed to my colorful language-augmented efforts:

Mission (finally) accomplished:

Some side (left, then right, at least when the device is upright…remember that right now it’s upside-down) shots of newly exposed circuit glimpses before proceeding:

Close only counts in horseshoes and…

And now let’s get that PCB outta there. At first glance, I saw only three screws holding it in place:

Uhhhh…nope, not yet:

Oh wait, there’s another one, albeit when removed, still delivering no dissection luck:

A bit more blue-streak phrasing…one more peek at the PCB, this time with readers…and…

That’s five minutes of my life I’m never gonna get back:

Upside: the PCB topside’s now exposed to view, too. Note, first off, the four multicolor LEDs (one per pair of charging bays) running along the left edge:

Binary deficiency

I was admittedly surprised, albeit not so much in retrospect, at just how “analog” everything was. I’d expect a higher percentage of “digital” circuitry were I to take apart my much more expensive La Crosse Technology BC-9009 AlphaPower charger (I’m not going to, to be clear):

 

Specifically, among other things, I was initially expecting to see a dedicated USB controller IC, which I regularly find in other USB-inclusive devices…until I realized that these USB-A ports had no data-related functions, only power-associated ones, and not even PD-enhanced. Duh on me:

Flipping the PCB back over once again revealed the unsurprising presence of a hefty ground plane and other thick traces. The upper right quadrant (upper left when not upside-down):

handles AC to DC conversion (along with the transformer and other stuff already seen on the other side); the two dominant ICs there are labeled (left to right):

CRE6536
2126KD
(seemingly an AC-DC power management IC from China-based CRE Semiconductor)

and:

ABS210
(which appears to be a single-phase bridge rectifier diode)

 while the upper left area, routing the generated DC to the USB ports on the PCB’s other side (among other things), is landscape-dominated by an even larger SS54 diode.

Further down is more circuitry, including a long, skinny IC PCB-marked as U2 but whose topside markings are illegible (if they even ever existed in the first place):

I’ll close out with some side-view shots. Top:

Right:

Bottom:

And left:

And I’ll wrap up with a teaser photo of another, smaller, but no less finicky battery charger that I’ve also taken apart. Due to this piece ending up longer than expected (what else is new?), I’ve decided to save it for a dedicated teardown writeup on another day:

With that, I’ll turn it over to you, dear readers, for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Tearing apart a multi-battery charger appeared first on EDN.
