Automotive silicon in the era of AI, functional safety, and cybersecurity
Automotive silicon design is entering a phase where functional safety, cybersecurity and artificial intelligence (AI) can no longer be treated as separate concerns. In connected, software-defined vehicles, safety outcomes depend not only on protection against random hardware faults, but also on resilience to malicious interference and software vulnerabilities. As a result, many of the decisions that determine system safety are now made at the silicon architecture level.
When ISO 26262 was first published in 2011, it marked a major step forward in structuring functional safety for automotive electronics. But the vehicles being designed today are fundamentally different. Autonomous driving, electrification, AI-based perception, vehicle-to-everything (V2X) connectivity, and centralized compute architectures were not primary considerations at the time.
The core objective remains unchanged: to avoid hazards to people. However, the way this objective is achieved is now deeply tied to how safety is architected into semiconductor devices.
Functional safety is no longer just a system-level concern; it’s a design-time challenge for ASIC and SoC engineers. For many safety-critical functions, whether ISO 26262 targets can be met depends on decisions made in the earliest stages of silicon architecture.
A growing and converging standards landscape
The industry has responded to new challenges by expanding the safety and security framework. ISO 26262:2018 addresses functional safety in road vehicles, while ISO 21448 (SOTIF) considers hazards arising from insufficient or incorrect system behavior. ISO/PAS 8800:2024 begins to address the safety implications of AI-based systems.
Alongside these, ISO/SAE 21434 introduces requirements for automotive cybersecurity, and platform-level schemes such as PSA Certified, while not automotive-specific, are shaping expectations for secure-by-design silicon, roots of trust, and independently evaluated security assurance.
In practice, these frameworks cannot be applied in isolation. Safety and cybersecurity requirements must be interpreted together and traced into silicon architecture, verification strategies, and ultimately the safety case. This convergence increases complexity, but it also reflects the reality of modern automotive systems: safety now depends on both fault tolerance and system integrity.
Figure 1 Functional safety is now a silicon architecture problem that must be addressed alongside cybersecurity and AI from the earliest stages of design. Source: EnSilica
Safety is implemented in silicon
In today’s vehicles, many critical safety mechanisms are implemented directly in hardware. Fault detection, redundancy schemes, error correction, watchdogs, and safe-state control are embedded within ASICs and SoCs. Typical techniques include lockstep CPU architectures for execution monitoring, ECC-protected memories to detect and correct bit errors, and dedicated safety islands that supervise system health and enforce safe-state transitions.
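As a rough conceptual illustration of the lockstep idea (not any particular vendor's implementation), the sketch below models two redundant cores executing the same steps and a checker that forces a safe state on any divergence; the fault model and function names are hypothetical.

```python
# Conceptual model of lockstep execution monitoring: two redundant "cores"
# run the same computation and a checker compares results at every step.
# Any mismatch is treated as a detected fault and forces a safe state.

def core_step(state: int, cmd: int) -> int:
    """Placeholder for one execution step of a safety-relevant function."""
    return (state * 31 + cmd) & 0xFFFFFFFF

def lockstep_run(commands, inject_fault_at=None):
    state_a = state_b = 0
    for i, cmd in enumerate(commands):
        state_a = core_step(state_a, cmd)
        state_b = core_step(state_b, cmd)
        if i == inject_fault_at:          # model a single-event upset in core B
            state_b ^= 0x1
        if state_a != state_b:            # checker: compare every step
            return ("SAFE_STATE", i)      # detected fault, transition to safe state
    return ("OK", len(commands))

print(lockstep_run(range(10)))                      # ('OK', 10)
print(lockstep_run(range(10), inject_fault_at=4))   # ('SAFE_STATE', 4)
```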
These mechanisms are responsible for ensuring that faults are either corrected or managed in a way that prevents hazardous behavior. Increasingly, they must also be robust against unintended interactions and deliberate manipulation, not just random faults.
This creates a fundamental shift. Functional safety is no longer something that can be added at the system level; it must be designed into silicon architecture from the outset. Decisions around redundancy affect area and cost. Diagnostic features influence power consumption and performance. Detection latency must be balanced against system constraints. These trade-offs are often made before the full system context is completely defined.
At the same time, safety mechanisms are only effective if the system enforcing them remains trustworthy. Ensuring that trust is now a core architectural concern.
Cybersecurity as a determinant of safety
Cybersecurity is no longer adjacent to functional safety—it’s a determinant of it. A system that meets ASIL targets for random faults may still be unsafe if it can be compromised through software, interfaces, or update mechanisms. In connected vehicles, a maliciously induced fault can have the same or greater impact than a hardware failure.
At the silicon level, this translates into requirements for hardware roots of trust, secure boot, run-time integrity checking, and domain isolation. These mechanisms ensure that only authenticated software can control safety-critical functions and that faults or compromises in non-critical domains cannot propagate into safety paths.
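A minimal software sketch of the boot-time integrity check described above, assuming a simple hash-compare against a reference digest; production secure boot verifies asymmetric signatures anchored in an immutable hardware root of trust, which this illustration omits.

```python
import hashlib

# Conceptual secure-boot check: measure the firmware image and compare the
# digest against a trusted reference before letting it control safety-critical
# functions. Real systems verify a cryptographic signature rooted in
# immutable hardware rather than comparing against a plain stored hash.

TRUSTED_DIGEST = hashlib.sha256(b"firmware v1.0 image bytes").hexdigest()

def boot(firmware_image: bytes) -> str:
    measured = hashlib.sha256(firmware_image).hexdigest()
    if measured != TRUSTED_DIGEST:
        return "REFUSE_BOOT_ENTER_SAFE_STATE"
    return "BOOT_AUTHENTICATED_IMAGE"

print(boot(b"firmware v1.0 image bytes"))   # authenticated
print(boot(b"tampered image bytes"))        # refused
```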
From a design perspective, this expands the traditional fault model. In addition to random hardware failures, engineers must now consider adversarial conditions such as fault injection attacks, privilege escalation, and corrupted firmware. Safety architectures must be capable of detecting, containing, and responding to both types of failure.
The limits of the V-model in silicon development
ISO 26262 promotes the V-model as a structured development approach, moving from requirements to implementation and back through verification. While this provides a useful framework, it does not always reflect how safety-critical ASICs are developed in practice.
Silicon design requires early decisions that cut across the V-model structure. Process technology selection, architectural partitioning, testability, and diagnostic coverage must all be considered at a very early stage. These decisions directly influence safety mechanisms and compliance with ASIL requirements.
In reality, ASIC development is highly iterative, moving between architecture, implementation constraints, and verification. The goal is not strict adherence to a linear process, but maintaining traceability, safety intent, and configuration control throughout the design cycle.
Traditional safety analysis is under pressure
Safety analysis methods such as failure modes and effects analysis (FMEA) and fault tree analysis (FTA) remain foundational. However, their application at the ASIC level is becoming increasingly challenging.
Modern automotive SoCs integrate CPUs, AI accelerators, high-speed interfaces, and complex interconnect structures on a single device. Applying traditional analysis techniques at this scale is difficult, often requiring abstraction that introduces uncertainty.
As complexity increases, the question is no longer whether analysis has been performed, but whether it’s sufficient to capture all relevant failure modes, particularly when both accidental faults and adversarial conditions must be considered.
Toward simulation-driven safety verification
To address these challenges, the industry is moving toward more dynamic, simulation-driven approaches. Fault simulation, long used in semiconductor tests, is increasingly applied in a functional safety context.
Instead of simply identifying faults, the focus shifts to system response. When a fault is injected, engineers must determine whether it is detected, whether it is corrected, and whether the system transitions to a safe state within the required time.
This approach integrates safety analysis with design verification and provides more concrete evidence that safety mechanisms operate correctly under realistic conditions. Increasingly, safety metrics such as the single-point fault metric (SPFM) and latent fault metric (LFM) can be supported by fault-injection and simulation-based evidence, alongside analytical safety analysis.

Figure 2 The fault-injection verification flow demonstrates how the design detects, contains, and corrects faults. Source: EnSilica
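As a rough sketch of how a fault-injection campaign can feed these metrics, assuming the standard ISO 26262 definitions of SPFM and LFM and using hypothetical fault counts as a stand-in for the weighted failure rates the standard actually prescribes:

```python
# Rough sketch: deriving SPFM/LFM-style figures from a fault-injection
# campaign. The ISO 26262 formulas use failure rates (lambda); here injected
# fault counts serve as an illustrative stand-in.

def spfm(total, single_point_or_residual):
    # SPFM = 1 - (lambda_SPF + lambda_RF) / lambda_total
    return 1.0 - single_point_or_residual / total

def lfm(total, single_point_or_residual, latent_multipoint):
    # LFM = 1 - lambda_MPF_latent / (lambda_total - lambda_SPF - lambda_RF)
    return 1.0 - latent_multipoint / (total - single_point_or_residual)

faults_injected = 10_000    # total faults simulated (hypothetical)
residual        = 45        # undetected faults that violate the safety goal
latent          = 180       # faults neither detected nor perceived, but not directly hazardous

print(f"SPFM = {spfm(faults_injected, residual):.2%}")          # ~99.55%
print(f"LFM  = {lfm(faults_injected, residual, latent):.2%}")   # ~98.19%
```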
AI moves the challenge further into silicon
AI introduces both new risks and new opportunities for functional safety. On the hardware side, AI workloads are implemented in dedicated accelerators within automotive SoCs, further shifting safety responsibility into silicon.
Designers must consider how these accelerators behave under fault conditions and how their outputs are monitored and validated. On the system side, AI raises fundamental challenges around verification. Unlike deterministic logic, AI systems exhibit probabilistic behavior influenced by data and operating conditions.
AI also reinforces the convergence between safety and security. Ensuring the integrity of inputs, models and execution becomes critical, as corrupted data or manipulated models can lead directly to hazardous behavior.
Memory safety and system integrity
One emerging approach to improving robustness is the use of hardware-enforced memory safety. Capability-based architectures, such as CHERI, provide fine-grained control over memory access, reducing the likelihood that software defects or exploitable vulnerabilities propagate into safety-critical behavior.
By mitigating broad classes of memory-corruption vulnerabilities at the hardware level, these techniques contribute to both system integrity and functional safety, particularly in complex software-defined environments.
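The sketch below is only a software model of the fine-grained bounds and permission checks that capability hardware enforces on every access; it does not represent CHERI's actual instruction set or compiler support.

```python
# Software model of a capability: every pointer carries base, length, and
# permissions, and each dereference is checked against them. An out-of-bounds
# access faults instead of silently corrupting adjacent memory.

from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    base: int
    length: int
    writable: bool

class CapabilityFault(Exception):
    pass

def checked_store(memory: bytearray, cap: Capability, offset: int, value: int):
    if not cap.writable or not (0 <= offset < cap.length):
        raise CapabilityFault("bounds or permission violation")
    memory[cap.base + offset] = value

mem = bytearray(64)
buf = Capability(base=16, length=8, writable=True)   # an 8-byte buffer
checked_store(mem, buf, 7, 0xAA)                     # in bounds: fine
try:
    checked_store(mem, buf, 8, 0xBB)                 # classic off-by-one overflow
except CapabilityFault as e:
    print("trapped:", e)
```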
Designing for long-term security
Automotive systems are expected to operate reliably over long lifetimes, often exceeding a decade. This introduces additional challenges for cybersecurity.
Cryptographic mechanisms that are secure today may not remain so over the lifetime of the vehicle. As a result, there is growing interest in cryptographic agility and support for post-quantum cryptography (PQC), particularly for secure boot, firmware updates, and vehicle communications.
These considerations further reinforce the need to treat security as a foundational aspect of silicon design, rather than a feature added later in the development process.
However, the automotive industry does not need to abandon existing safety standards; instead, it must adapt how they are applied in the context of semiconductor design. Take, for instance, functional safety, which is no longer just a system integration challenge. It’s a silicon architecture problem that must be addressed alongside cybersecurity and AI from the earliest stages of design.
At the silicon level, the distinction between safety and security is becoming increasingly artificial. Safety mechanisms must operate correctly in the presence of both accidental faults and malicious interference. This requires a unified architectural approach, where safety, security and system integrity are designed, verified, and validated together.
As vehicles become more intelligent, connected and autonomous, the role of custom silicon in delivering safe operation will only grow. The standards still matter, but increasingly, it’s silicon that determines whether those standards can be met in practice.
Enrique Martinez-Asensio is functional safety manager at EnSilica. He has more than 35 years of experience in the semiconductor industry, having worked on mixed-signal IC design and technical support and management in several semiconductor companies.
Related Content
- Automotive Cybersecurity: Attacks Keeps Growing
- Enabling functional safety in automotive processors
- Approaches to functional safety in automotive design
- Automotive processor IP complies with ISO 21434 cybersecurity
- ISO/SAE 21434: Software certification for automotive cybersecurity
How emerging robotics standards will shape next-gen automation

Walk into any modern fulfillment center or high‑precision inspection site and the pattern is unmistakable: robots are becoming smarter, more autonomous, and more deeply embedded in daily operations. They navigate cluttered aisles, collaborate with people, and execute tasks that once required years of human experience.
Yet behind the impressive demos and AI‑powered autonomy lies a quieter, more stubborn truth. The frameworks governing how these robots behave, communicate, and integrate with the rest of the factory are still playing catch‑up.
For years, robotics innovation has moved faster than the standards meant to ensure safety, reliability, and interoperability. That was manageable when robots lived in structured, predictable environments. But now that they’re entering aircraft wing boxes, nuclear vessels, medical labs, and public spaces, the gap is no longer sustainable.
The industry is reaching a point where the convergence of ISO/TC 299 and ASME Model‑Based Enterprise (MBE) frameworks is becoming essential. Together, they are laying the foundation for the next decade of automation.
Through my work in robotics and engineering standards, I’ve seen how the absence of a unified digital thread slows down certification, complicates integration, and turns validation into a guessing game. The industry is ready for a shift, and these standards are the mechanism for that shift.
Synergy: Behavior meets mechanical truth
In robotics, reliability is a marriage of autonomous behavior and physical reality. You cannot have one without the other. The relationship is best understood through a simple metaphor: a driver and a map.
ISO/TC 299 is the driver’s manual. It defines how a robot should behave when a human enters its workspace, how collaborative systems maintain predictable safety envelopes, and how mobile fleets negotiate shared space. These behavioral expectations create consistency across vendors and applications, which is critical as multi‑robot systems become the norm.
ASME MBE, particularly ASME Y14.41, is the map. It provides machine-readable geometry, tolerances, and load paths that tell the robot what the world looks like and how its own structure behaves under stress. It is the robot’s mechanical truth, which is the foundation for accurate motion planning, stiffness modeling, and digital twin fidelity.
When these two systems operate independently, problems emerge. A robot may follow every safety rule perfectly, but if it doesn’t understand its own deflection under load, it can still “safely” drill a hole in the wrong place. I’ve seen this disconnect repeatedly in real deployments: behavior and mechanical truth treated as separate concerns, even though they collide on every project.
The future of robotics depends on eliminating this separation.
Standards in action: Solving the validation gap
Consider a high-precision assembly task inside a brownfield environment. A long-reach robot is working in an aircraft hangar where the temperature rises throughout the day. The robot plans its path using a static CAD model, unaware that its arm has expanded by a millimeter due to thermal drift. In a traditional setup, the robot executes the plan anyway; the error shows up only at inspection, often too late to avoid rework.
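A back-of-the-envelope check of that millimeter figure, using the linear-expansion relation and assumed values for the arm material, length, and temperature swing (none of which are specified in the scenario):

```python
# Linear thermal expansion: delta_L = alpha * L * delta_T
alpha_aluminum = 23e-6      # 1/K, typical coefficient for aluminum alloys (assumed material)
arm_length_m   = 4.0        # assumed reach of the long-reach robot
delta_T_K      = 11.0       # assumed temperature rise over the shift

delta_L_mm = alpha_aluminum * arm_length_m * delta_T_K * 1000
print(f"Arm growth: {delta_L_mm:.2f} mm")   # ~1.0 mm, enough to miss a tight aerospace tolerance
```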
In a standards-integrated environment, the workflow looks very different. The robot pulls its geometry and stiffness information from an ASME Y14.41 model, uses ISO/TC 299 to manage safe behavior when a human enters the cell, and continuously adjusts its trajectory by comparing sensor feedback with its digital thread. The result is a sub-millimeter-accurate operation that remains safe and reliable even as conditions change.
This is not hypothetical. In aerospace and energy applications, thermal drift, compliance, and load‑path uncertainty are among the most common sources of failure. Standards give robots the context they need to correct these issues in real time.
A similar story plays out in dynamic warehouses. Mobile robots constantly encounter shifting pallets, narrowing aisles, and unpredictable human movement. ISO/TC 299 governs how they yield, reroute, and negotiate shared space. ASME MBE ensures that the robot’s internal map reflects real geometry rather than outdated floor plans. When a pallet is slightly misaligned, the robot doesn’t just detect it, it understands how that misalignment affects its own kinematics and load stability. This combination prevents collisions, downtime, and cascading errors that can shut down an entire facility.
The economic advantage: Eliminating the hidden tax
Beyond the engineering benefits, there is a major economic argument for this convergence. Today, companies pay a hidden tax in the form of custom integration. Every robot vendor uses a different data model, forcing end‑users to build expensive bridges between incompatible systems. These one‑off integrations accumulate over time, creating brittle automation ecosystems that are difficult to scale and nearly impossible to maintain.
When ISO governs behavior and ASME governs data, robots become vendor‑agnostic. A new robot can be dropped into an existing digital thread, and it will immediately understand the factory’s geometry, safety rules, and tolerances. Deployment times shrink from months to days. Total cost of ownership drops because automation no longer forms isolated islands that require constant reinvention.
In my experience, the companies that adopt standards early see the benefits almost immediately: fewer integration failures, faster certification cycles, and a more predictable automation roadmap. Standards don’t slow innovation; they accelerate it by removing friction.
The era of deterministic robotics
The last decade of robotics was defined by intelligence in AI, perception, and autonomy. The next decade will be defined by determinism. Robots will need to be predictable, traceable, and grounded in mechanical truth. The convergence of ISO/TC 299 and ASME MBE is pushing the industry toward systems that are not just automated, but self‑aware and self‑correcting.
From what I’ve seen in industry, the organizations that embrace this convergence early will be the ones shaping the next era of automation. As robots expand into more complex and safety‑critical environments, this integrated framework will influence the future of robotics as much as any breakthrough in neural networks.
The companies that act now will define the next generation of automation and the standards that make it possible.
Santosh Yadav is a hardware development engineer at Amazon Robotics and an IEEE Senior Member. His work focuses on the intersection of mechanical reliability and standardized automation frameworks.
Special Section: Smart Factory
- Rethinking machine vision in industrial automation
- Smart factory: The rise of PoE in industrial environments
- Precision lasers boost safety and efficiency in smart factories
- Tale of 3 sensors operating in smart factory environments
- From edge AI to physical AI in smart factories: A shift in how machines perceive and act
- Robots: Why AI alone will not deliver the next leap in automation
Backup batteries and supercaps: Geriatric hardware traps

Batteries eventually die, whether due to excessive recharge cycles, deep-discharge, or other factors. Capacitors, in contrast, often don’t hold charge long enough. What’s an engineer to do?
When the Pentax Q compact digital camera that I told you about last month:

showed up at my front door, I excitedly opened the packaging, tossed the battery on the charger to rejuvenate it, then slotted the battery inside the camera, bayonet-mounted a lens to the body, pressed the power button to turn the Q on, and…was immediately prompted to enter the current time and date, along with the desired format for the latter:

Not a huge surprise, at least at first. The Pentax Q is nearly a decade-and-a-half old at this point, after all. I figured that the camera had been sitting around without a primary battery in it—or maybe that primary battery had just drained—with draining of the backup battery (commonly referred to as the CMOS battery in functionally-equivalent PC settings storage terminology) following in short order.
So, after thoroughly testing the camera to make sure it was otherwise operating properly, I popped the primary battery back out to top it off again, then slotted it back in the camera body and let everything sit overnight to recharge the backup battery.
Drained brains
The next day, I turned the Pentax Q back on and…once again was immediately prompted to enter the current time and date, along with the desired format for the latter. What the heck? I hit Google and learned that mine wasn’t remotely a unique issue. Unsurprisingly, in retrospect, the embedded battery had exceeded its maximum recharge cycle count and/or had experienced deep discharge degradation from which it was unable to recover. CR2032 cells on PCBs are admittedly prone to suffer similar fates:

with one key difference: they’re much easier to access and replace. I trust you’ll resonate with my reluctance to disassemble my photographic antique and attempt similar surgery on it. Yes?
At the end of the day, of course, this is a First World problem, the latest in an admitted long list that I’ve shared with all of you over the years. Time, date, location and directly related settings are the only ones that don’t survive primary-cell separations and drains; all the others (including the all-important user interface language setting, critical for someone who’s fond of Asian-sourced electronics but can’t visually-or-otherwise understand any Asian dialects) are generally stored in nonvolatile memory instead of battery-backed SRAM (because, speaking of cycles, these other settings change comparatively infrequently and are therefore unlikely to hit the max-rewrite cycle count of the EEPROM, flash memory or other technology housing them).
Imperfect workarounds and alternatives
If I could just remember to plug my primary battery-housing camera in to recharge every once in a while, I might also dodge the flaky-backup-battery issue that way…except that the Pentax Q can’t be recharged over its proprietary USB-derived connector. And anyway, I avoid recharging primary batteries in situ whenever possible, in case the cells were to swell and permanently embed themselves in the battery compartment. And speaking of ejecting batteries, I’ve found at least two other hacks:
- Have a spare fully-charged battery sitting nearby, and pop out and replace the drained battery with it really quickly (the backup battery’s charge storage capability apparently isn’t completely neutered, only severely compromised)
- Or just hit “cancel” after seeing the initial-settings screen to skip past it…with the obvious downside that subsequently logged date and time info will be (quite) incorrect!
What about so-called “supercapacitors” (aka, ultracapacitors) as an implementation alternative to conventional backup batteries?

The obvious key advantage here is that they support near-infinite recharge cycle counts. They also have comparatively high output power density (translation: high output current, although this attribute isn’t necessary in the application we’re discussing today) and there’s also no worry about the overheating-induced thermal runaway (translation: heat, smoke, flame) to which batteries are prone to varying degrees depending on implementation-technology specifics.
Nothing’s perfect in real life
Alas, there are also downsides. Although you can, to my previous high output power density comment, drain ‘em really fast, they also drain comparatively fast all by themselves (in weeks, versus months or even years for batteries), even in the absence of a load…which kind of defeats the purpose of using them for long-shelf-life settings-backup purposes, yes? Plus, as the above image suggests, they tend to have comparatively poor storage density. Translation: they’re huge in both linear size and volume in comparison to a comparable battery alternative.
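To put rough numbers on that shelf-life limitation, here’s a holdup estimate with assumed values; actual figures vary widely with the specific supercapacitor’s leakage and the load it backs up:

```python
# Rough supercapacitor backup-time estimate: t = C * delta_V / I_total
C_farads    = 1.0       # assumed supercap value
V_start     = 3.0       # charged voltage (assumed)
V_min       = 1.8       # minimum voltage the clock/SRAM domain tolerates (assumed)
I_rtc_amps  = 0.8e-6    # assumed keep-alive current for the settings circuitry
I_leak_amps = 2.0e-6    # assumed supercap self-discharge, which often dominates

t_seconds = C_farads * (V_start - V_min) / (I_rtc_amps + I_leak_amps)
print(f"Holdup: ~{t_seconds / 86400:.1f} days")
# ~5 days with these numbers; days to a few weeks is typical, versus months
# or years for a small backup battery.
```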
Comparative size isn’t so much of a problem with an available volume-rich application such as a desktop computer or server. In a camera, or even a laptop computer, or any other diminutive device for that matter, it’s more likely to make a supercapacitor a non-starter. Alas, as I alluded to earlier, those same compact form factors are also more likely to be difficult-to-impossible to disassemble in order to do a backup battery swap, so…
In closing, I freely admit that I’m not a power electronics expert. That’d be my predecessor. So, I’ll stop pontificating at this point and pass the keyboard over to you.
What characteristics make a backup battery the obvious choice for a design, and conversely lead you to definitively select a supercapacitor instead? How do you decide between the two when the differentiation is more muted? How have any recent implementation innovations in either/both product categories evolved your thinking in this regard? And are there other technologies besides these two that you readers and I should also consider?
Let me know your thoughts in the comments, please!
—Brian Dipert is the associate editor, as well as a contributing editor, at EDN.
Related Content
- Engineering tradeoffs: a camera case study
- Supercapacitors – A Quick Refresher
- Supercapacitor research finds new applications
- Infused concrete yields greatly improved structural supercapacitor
Measuring G: The ultimate metrology challenge?

Four fundamental forces of nature—gravity, electromagnetism, strong nuclear force, and weak nuclear force—govern all known physical interactions in the universe. Of these four, gravity is the one with which we are all personally familiar, as we deal with it in our daily routine. Knowledge of these forces, along with the other defining constants of the International System of Units (SI), forms the foundation of much of modern science and engineering (Figure 1).

Figure 1 This wallet card displays the fundamental constants and other physical values that will define a revised international system of units. Source: NIST
A good semi-technical, highly readable overview of the development of metrology, the people who made it happen, and its role in civilization and the industrial and technology revolution is the book “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” by James Vincent (Figure 2).

Figure 2 This enjoyable book provides great insight into the hard-fought efforts of metrologists over the centuries, even if they were not called that. Source: W. W. Norton
The gravitational constant, informally dubbed “big G”, determines the strength of the attraction between two masses anywhere in the universe. It’s approximately 6.67 × 10⁻¹¹ meters³/kilogram-second². It is, of course, associated with Isaac Newton’s brilliant insight and law of universal gravitation, published in 1687, which states that every particle in the universe attracts every other particle with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them (Figure 3).

Figure 3 We can thank Isaac Newton for this simple equation that quantifies all-pervasive yet mysterious gravity. Source: Wikipedia
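In equation form, the law illustrated in Figure 3 is:

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

where m₁ and m₂ are the two masses, r is the distance between them, and G is the gravitational constant.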
This “big G” is distinct from “little g”, which describes the acceleration that an object experiences due to the gravitational pull of a large mass, such as Earth, and it varies from location to location. For instance, the value of little g is approximately 9.8 meters/second² at Earth’s surface but only 1.62 meters/second² on the Moon.
The first implicit measurement of G is attributed to Henry Cavendish in a 1798 experiment with an accuracy of about 1%, which is impressive considering the year and available tools and technology. Yet, while other fundamental physical constants are known to 6 or more digits of confidence, measurement of this oldest-known force to comparable precision has eluded physicists, and it’s known with confidence only to about 4 digits.
While a better value for G wouldn’t affect the lives of most people or projects, there are some cases for which it would be needed, and it’s also a part of the broader science “quest”.
Why is G so hard to measure? There are three main reasons:
- Gravity is the weakest of the four fundamental forces of physics; for comparison, it’s approximately 10³⁸ times weaker than the strongest force.
- The masses used in the experiment must fit inside a relatively small, constrained space of the experimental lab, and small masses generate small gravitational forces.
- Since gravitational force is exerted by every object, it’s extremely challenging to make sure the force you measure in the laboratory really comes from the intended mass.
What was the next step?
Trying to determine G to higher accuracy has been an ongoing project for many institutions and researchers. I recently came across a news report on the National Institute of Standards and Technology (NIST) website summarizing the 10-year quest led by physicist Stephan Schlamminger to improve that measurement (“NIST Weighs In on the Mystery of the Gravitational Constant”).
His team’s strategy was to painstakingly replicate a precision experiment conducted by the International Bureau of Weights and Measures (BIPM) in Sèvres, France, in 2007, which provides the value of G in use now. To do this, the team not only improved the precision of the physical parts of the experimental setup but also dived deeper to more fully identify sources of error, and then either reduce them or work out ways to have them self-cancel.
The basic arrangement is very simple and begins with a torsion balance similar to the one used by Cavendish (Figure 4). He placed two lead balls on opposite ends of a wooden beam horizontally suspended at its center by a thin wire. Nearby, he positioned two much heavier masses, suspended separately.

Figure 4 The traditional Cavendish experiment for measuring the strength of gravity was a torsion balance with an “optical” readout. Source: NASA
The gravitational attraction between the smaller and heavier masses caused the wooden beam to rotate, twisting the wire until the torque it exerted counterbalanced the gravitational force. The motion of the wooden beam, measured with a mirror and light pointer, indicated the value of big G.
The Schlamminger team upgraded to eight cylindrical metal masses. Four of the cylinders sat on a rotating carousel, resembling four candlesticks in an old-fashioned chandelier. The other four smaller masses were placed inside the carousel, on a disk suspended by a ribbon of copper-beryllium about the thickness of a human hair.
They then added a modern-day “twist” not available to Cavendish: applying a voltage to electrodes placed alongside each of the inner masses (Figure 5). These voltages created an electrostatic torque that twisted the wire in a direction opposite to the gravitationally induced torque. By carefully setting a voltage that exactly counterbalanced and nulled the gravitational torque, the researchers prevented the torsion balance from rotating. The magnitude of the voltage provided another estimate of big G.

Figure 5 The latest version of the torsion balance is loosely based on the Cavendish design, but adds advanced features, including electrostatic torque nullification. Source: NASA
Of course, the actual unit is larger and much more sophisticated (Figure 6).

Figure 6 The NIST version of the Cavendish torsion balance bears little resemblance in actual implementation. Source: NASA
What’s the result?
The Committee on Data of the International Science Council, or CODATA, issues recommended values of fundamental physical constants. Its recommended numerical value for big G is a four-digit number with a measurement uncertainty of 22 parts per million. To put this in perspective, a watch that runs 22 ppm late would measure the year 12 minutes too long.
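That watch analogy checks out with quick arithmetic:

```latex
22 \times 10^{-6} \times 365.25 \times 24 \times 60\ \text{min} \approx 11.6\ \text{min}
```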
So, to cut to the chase: How did the team do? Bluntly, not as well as they would like. In fact, their answer differed significantly from the BIPM number—which would be acceptable if it were “more correct”—but it had greater uncertainty as well. It was 0.0235% lower than the result that the researchers had attempted to replicate and is at odds with the CODATA figure.
I won’t try to summarize the project, as it has so many nuances and details. Fortunately, the published Metrologia paper titled “Redetermination of the gravitational constant with the BIPM torsion balance at NIST” is not a dry, academic-style presentation of the project. Instead, it’s a fascinating, highly readable 30-page recounting of a story that begins with an overview of the history of G measurement, then goes on to review the project step by step, covering the rationales for each step; the improbable, possible, and likely sources of error; the dilemmas they addressed; the qualitative issues as well as the quantitative analysis; and much more.
You could almost say it could be the basis for a scripted TV show or even a movie, perhaps not as dramatic as the 2023 Oppenheimer but still a “grabber.” As a very nice consideration for the readers, the paper begins with a full list of acronyms and abbreviations, which I wish all papers would do. It even includes a group photo of the 40+ team participants.
There’s one other interesting aspect of the project that I have very rarely seen. Schlamminger worried that he might unconsciously skew his measurement so that it agreed with the value of G that researchers found in the French experiment. To satisfy his own meticulous standards, he asked a colleague to scramble the data.
To accomplish this, a colleague at NIST’s Mass and Force Group multiplied each source-mass value by an unknown factor (1 + r), with r ∈ [−1 × 10⁻³, +1 × 10⁻³], recorded and sealed in an envelope hidden from Schlamminger until the work was complete. This random factor applied to the masses served to “blind” Schlamminger to the actual measurement he was taking.
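A minimal sketch of that blinding scheme, with hypothetical mass values; in the actual experiment the factor was generated and sealed by the colleague, not by the analysis code:

```python
import random

# Blind analysis: scale every source-mass value by an unknown factor (1 + r),
# with r drawn once from [-1e-3, +1e-3] and kept sealed until the analysis is
# frozen, so the analyst cannot steer the result toward the expected value of G.

r_secret = random.uniform(-1e-3, 1e-3)     # in practice, sealed in an envelope

def blind(source_masses_kg):
    return [m * (1.0 + r_secret) for m in source_masses_kg]

true_masses = [11.000, 11.002, 10.998, 11.001]   # hypothetical values, kg
blinded = blind(true_masses)
print(blinded)
# ...the analysis runs on `blinded`; only once the result is frozen is the
# envelope opened and the factor divided back out.
```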
By employing that strategy, Schlamminger would not know the actual value of big G that his team had measured. The envelope with the secret number was unsealed on a conference stage at the July 2024 Conference on Precision Electromagnetic Measurements (CPEM), and Schlamminger and his team finally found out the somewhat disappointing real results of their work.
Related Content
- What if Gravitational Constant G Isn’t?
- Goodbye, Erlang; Hello, Gbps/km2/MHz
- Spinning spheres test relativity’s subtlety
- Goodbye, Fundamental Kilogram & Ampere
Mux switches deliver wide-bandwidth signal paths

A pair of 2:1 multiplexer/1:2 demultiplexer switches from Toshiba support PCIe 6.0 and USB4 Version 2.0 interfaces with bandwidths up to 34 GHz. The TDS5C212MX and TDS5B212MX are designed for reliable switching of high-speed differential signals in servers, industrial testers, robots, and PCs.

Manufactured using Toshiba’s TarfSOI (Toshiba advanced RF SOI) process, the TDS5C212MX and TDS5B212MX achieve typical differential 3-dB bandwidths of 34 GHz and 29 GHz, respectively. These wide bandwidths help suppress signal waveform distortion and improve reliability in high-speed data transmission.

The switches differ in their pin layouts. The TDS5C212MX minimizes signal path length to reduce reflections and losses, improving high-speed signal integrity. The TDS5B212MX retains the same pin layout as conventional products. Both devices operate over a temperature range of -40°C to +125°C and are now shipping.
Toshiba Electronic Devices & Storage
Micron ships 245-TB data center SSD

Micron Technology’s 245-TB 6600 ION SSD boosts rack-scale storage density for data centers and AI infrastructure. Now shipping, the company describes it as the industry’s highest-capacity commercially available SSD. Built with Micron’s G9 QLC NAND in an E3.L form factor, it requires 82% fewer racks than equivalent HDD-based deployments, while reducing power and cooling needs for large-scale, data-intensive workloads.

Micron lab testing showed significant gains over HDD-based systems. For AI workloads, the 245-TB Micron 6600 ION SSD achieved up to 84 times better energy efficiency, 8.6 times faster preprocessing, and 29 times lower latency. For object storage, it delivered up to 435 times better throughput per watt and 96 times faster time to first byte.
For 1-EB deployments, Micron says the drive requires 1.9 times less energy than HDD-based systems, reducing annual CO2 emissions by 438 metric tons and saving 921 MWh of energy. The drive consumes up to 30 W of peak power, about half that of comparable-capacity HDD deployments, supporting data center sustainability initiatives.
The 245-TB Micron 6600 ION SSD will be on display at Dell Technologies World 2026, May 18–21, 2026.
4D vision platform enhances perimeter monitoring

Eyeonic Vista from SiLC is a high-resolution 4D vision system that accurately detects and classifies small targets at distances exceeding 1 km. Designed for mission-critical applications including perimeter security, counter-UAS operations, and maritime monitoring, Vista identifies humans, animals, vehicles, drones, and unauthorized vessels in complex environments. The system is also suited for protecting sensitive infrastructure such as airports, borders, power stations, and military assets.

The 8-channel vision system uses 1550-nm FMCW LiDAR to generate a data-rich point cloud, while dynamic Region of Interest (RoI) scaling enhances resolution for improved clarity and responsiveness. Micro-Doppler velocity data enables motion-based analytics for rapid threat identification. The system features angular resolution down to 8 mdeg (0.008°)—about twice as fine as human vision—and dual polarization for remote material identification. It is also resistant to jamming and crosstalk in multi-sensor environments.
Housed in an all-weather, IP65-rated enclosure, Vista operates in ambient light with no impact at 100 klux (bright sunlight), as well as in cloudy or dusty conditions.
SiLC will display the Eyeonic Vista at XPONENTIAL 2026 from May 12–14, 2026. For more information, email contact@silc.com or connect with SiLC on LinkedIn.
32-Gbps redriver improves in-vehicle connectivity

The PI3EQX32904Q automotive four-channel redriver from Diodes optimizes signal integrity for smart cockpits combining ADAS, infotainment systems, and instrument clusters into a single unit. Designed for GPU+CPU SoCs, it supports data rates up to 32 Gbps for high-speed PCIe 5.0, SAS-4, and CXL interfaces.

The linear redriver is rate and coding agnostic without interfering with link setup. Four independent differential channels allow configuration of receiver equalization, output swing, and flat gain through an I2C interface. Designers can tune signal performance across various physical media and system configurations with minimal firmware overhead. In addition, the ability to extend PCB trace lengths helps reduce intersymbol interference.
Built using a 0.13-µm SiGe BiCMOS process, the PI3EQX32904Q delivers robust data transmission with high linearity and low jitter. It operates from a 3.3-V supply across a -40°C to +85°C temperature range. The device complies with Modern Standby (S0 Low Power Idle) requirements, consuming less than 5 mW in deep standby while maintaining readiness for rapid wakeup.
Prices for the PI3EQX32904Q start at $4.84 each in 1000-piece quantities.
Qualcomm advances Snapdragon mid-range tiers

Qualcomm’s Snapdragon 6 Gen 5 and Snapdragon 4 Gen 5 bring strong performance and extended battery life to the company’s mobile platforms. Both introduce Smooth Motion UI for more responsive device interactions and smoother navigation. Compared to the previous generation, Snapdragon 6 Gen 5 delivers 20% faster app launches and 18% less screen stutter, while Snapdragon 4 Gen 5 provides 45% faster app launches and 25% less screen stutter.

Snapdragon 6 Gen 5 adds AI-powered camera features and the Qualcomm Adaptive Performance Engine 4.0 to support extended gaming sessions. With 21% higher GPU performance, the platform enables responsive everyday interactions and richer graphics, backed by improved power efficiency plus 5G and Wi-Fi 7 connectivity.
Snapdragon 4 Gen 5 extends dual-SIM 5G connectivity and improved gaming features to entry-level smartphones. The platform delivers 77% higher GPU performance and supports 90-fps gameplay, a first for the Snapdragon 4 series.
Based on a Kryo CPU and Adreno GPU, both platforms are expected to power commercial devices in the second half of 2026 from global OEMs including Honor, OPPO, realme, and Redmi.
ΔVbe thermometer is switchable between °C and °F

Ordinary bipolar junction transistors can sometimes be precision sensors.
When you think of precision components, you usually don’t (and probably shouldn’t) think of general-purpose bipolar junction transistors. Are GP BJTs cheap and versatile? Unquestionably yes. But are their characteristics, current gain, bias voltage, etc., precise and predictable to a fraction of a percent? Sadly (and maybe even laughably) no. But not entirely so. A dramatic exception is the ΔVbe effect, in which ordinary small signal BJTs can function in simple circuits as 0.1% precision absolute temperature sensors, as shown in an earlier Design Idea.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The ΔVbe effect depends solely on the ratio of applied currents, independent of their absolute magnitudes. It has an amplitude of 1/5050 volts per Kelvin and 1/9090 volts per Rankine per current-ratio decade. Figure 1 shows how this simple math can be exploited to turn most any 3¾-digit multimeter with a 300mV range into a versatile and accurate 0.1° resolution thermometer switchable between Celsius and Fahrenheit scales:

Figure 1 Switch U1a and current mirror Q2Q3 apply an excitation current ratio of 10.23:1 to the 9-sensor transistor string. The string is tapped at 5 x 200uV/°C = 1mV/°C and 9 x 111uV/°F = 1mV/°F.
Here’s how it works. Multivibrator U1b and switch U1a drive current mirror Q2Q3 with a square wave current signal. Its two states have a precise ratio of 10^1.01 = 10.23:1. The current mirror applies this signal to the 9-transistor temperature sensing string. There, the ΔVbe effect causes each transistor to develop 200uV per Kelvin and 111uV per Rankine, summing to 1mV/K at the 5-transistor tap and 1mV/°R at the 9-transistor tap.
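For reference, the relationship behind those numbers is the standard ΔVbe expression; with the 10.23:1 current ratio used here it works out to the per-transistor slopes quoted above:

```latex
\Delta V_{BE} = \frac{kT}{q}\,\ln\!\left(\frac{I_1}{I_2}\right)
\quad\Rightarrow\quad
\frac{d(\Delta V_{BE})}{dT} = \frac{k}{q}\,\ln(10.23)
\approx 86.17\,\mu\mathrm{V/K} \times 2.325
\approx 200\,\mu\mathrm{V/K}
\;\left(\approx 111\,\mu\mathrm{V/^{\circ}R}\right)
```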
The S1a section of the DPDT switch S1 allows appropriate tap selection for the desired temperature scale. Meanwhile, the S1b section selects the appropriate Z1 derived 0° offset: 273mV for Celsius and 460mV for Fahrenheit. The D1R6 dummy load balances the currents passed by the two sides of the U1a switch, equalizing its Ron voltage losses. Current mirror lovers will no doubt notice that the Q2Q3 mirror, consisting as it does of unmatched transistors with no emitter degeneration, probably lacks an accurate gain ratio. But that’s okay. It doesn’t need one.
Remember that the ΔVbe effect depends solely on the ratio of applied currents and is unaffected by their absolute magnitudes. So the mirror’s gain can vary over a wide range without significantly affecting temperature measurement accuracy. V+ can likewise wander harmlessly from 7 to 20 volts. A simple 9-volt battery will therefore work well and, since the total current draw is less than 2mA, will last for hundreds of hours of continuous operation.
Multivibrator U1b provides asymmetrical ~7kHz timing for synchronous sensor excitation and precision AC signal rectification by U1c. Asterisked resistors should be +/- 0.1% precision types to preserve accuracy.
Yes. Those ordinary dime-a-throw GP BJTs are really that good.
Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- ΔVbe thermometer outputs 1mV/°C without calibration or op amps
- ΔVbe + DMM = Celsius, Kelvin, Fahrenheit, and Rankine thermometer
- BJT is accurate sensor for absolute temperature in Kelvin and Rankine
- Temperature compensation with a simple resistance temperature detector
- A temperature-compensated, calibration-free anti-log amplifier
Making the case for MRAM in software-defined vehicles

Implementation of software-defined vehicles (SDV) has changed significantly over the past decade, but the need for in-field upgrades and new features has remained constant. As OEMs move from legacy architectures to SDVs, they will need to add new capabilities over time to deliver a more differentiated user experience.
At the same time, ECU consolidation and the need for more headroom for future use cases are increasing compute demands. Microcontroller unit (MCU) manufacturers have responded by moving to smaller process nodes, enabling higher performance in a more cost-effective way.
However, while MCUs are evolving fast, memory—embedded non-volatile memory (eNVM) in particular—is being left behind. In many cases, memory still relies on outdated specifications from the days of distributed architectures, where most ECUs never saw firmware upgrades after release.
This creates an important question for the auto industry. If vehicles are expected to receive in-field bug fixes, performance improvements and entirely new features over time, is your SDV’s eNVM ready?
How SDVs shape the customer experience
Before we answer this question, it’s important to consider how SDVs shape the customer experience. Faster over-the-air (OTA) updates mean less vehicle downtime, lower power use during the update and a lower battery state-of-charge (SoC) requirement while starting an OTA upgrade process. When issues are found, the ability to deliver fixes quickly reduces customer frustration and improves confidence in the vehicle.
With the right technology, SDVs can also offer a lower total cost of ownership while improving the overall experience. But for that to be achieved, it needs to be easier for SDVs to support larger applications, more data-heavy features and ongoing software updates without driving up memory needs or development cost.
In short, the platform must support frequent improvements without getting in the way of the vehicle’s long-term success, and that means more efficient eNVM is required.
Specifications that need to be addressed
There are two eNVM specifications that impact user experience and total cost of ownership: endurance and write speed (write time and erase time).
Endurance determines how many times memory can be rewritten over the life of the vehicle. In today’s MCUs, code memory is often rated for about 1,000 write cycles, while data memory, which is usually a very small subset of total eNVM, is typically rated for around 100,000. Those limits have changed very little over time, even though SDVs now depend on frequent updates, bug fixes and new features delivered long after launch. As update demands increase, higher endurance becomes essential.
Page size also matters. Many eNVMs only support page-level writes, which means updating even a single byte requires rewriting an entire page, typically sized between 64 bytes and 512 bytes. That increases wear, wastes memory and adds software complexity, especially when page sizes are large.
For SDVs to support more data-intensive use cases over time, memory needs to offer much higher endurance along with smaller page sizes or byte-level write capability. That reduces memory overhead, simplifies software design, and makes future upgrades far more practical.
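As a rough illustration of the endurance overhead that page-level writes impose, with a hypothetical update pattern, page size, and wear-leveling pool:

```python
# Endurance cost of small updates on page-erase eNVM versus byte-writable memory.
# Hypothetical workload: a 16-byte parameter updated 20 times per day for 15 years.

updates_per_day = 20
years           = 15
total_updates   = updates_per_day * 365 * years          # 109,500 updates

wear_pool_pages = 4                                       # pages reserved for wear leveling (assumed)
page_cycles_per_page = total_updates / wear_pool_pages    # each small update burns a full page cycle

print(f"Updates over the vehicle's life: {total_updates:,}")
print(f"Program/erase cycles per page with page-level writes: {page_cycles_per_page:,.0f}")
print(f"Cell cycles with byte-level writes: {total_updates:,}")
# Compare against the ratings cited above: ~1,000 cycles for code flash,
# ~100,000 for data flash, and up to 1,000,000 for MRAM with byte-level writes.
```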
Impact of temperature on endurance and retention
In eNVM technologies, temperature matters just as much as raw endurance and retention. That’s because eNVM hardware can degrade when writes happen at high temperatures, which is a real concern for vehicles receiving OTA updates. A car parked in extreme summer heat may still need a firmware update, for example, and customers should not have to worry about whether the vehicle is too hot to update safely. For SDVs, memory needs to deliver reliable endurance and data retention across the full operating temperature range over the life of the vehicle.
Write and erase times also have a direct impact on the customer experience. In many eNVM technologies, memory must be erased before it can be rewritten, and erase times are often even longer than write times.
That may have been acceptable when programming mainly happened in the factory, but in SDVs it can mean longer update times, more downtime, and added software constraints during normal vehicle operation. Faster writes and eliminating the need for erase cycles would make updates quicker, reduce performance penalties, and simplify software design.
Why MRAM stands out
When comparing embedded memory options for SDVs, including embedded charge-trap flash, PCM, RRAM and MRAM, the key question is which technology can best support frequent updates, long life, and a good customer experience. MRAM stands out because it addresses many of the limitations of older embedded non-volatile memory technologies. It can support scalable memory sizes at smaller technology nodes like 16 nm, needed for zonal, domain and consolidated vehicle architectures, while remaining practical from a cost and reliability standpoint.
MRAM works differently from traditional memory technologies. Instead of storing data through charge, material movement or phase change, it stores data using magnetic states. That matters because magnetic storage does not wear out in the same way as many other non-volatile memory approaches.
As a result, MRAM is well suited for the durability, update frequency, and long-term reliability that SDVs require. MRAM supports 20 years of data retention at 150°C ambient temperature, well within the requirements of today’s automotive applications.

Figure 1 MRAM stands out because it addresses several limitations of older embedded non-volatile memory technologies. Source: NXP
A solution that meets the needs of SDVs
MRAM is also a strong fit for SDVs because it combines very high endurance with write speeds up to 20 times faster than traditional embedded memory. Unlike many other embedded memory technologies, it does not require an erase step before writing, which enables much faster updates and reduces vehicle downtime.
Its endurance (up to 1 million cycles) is high enough to support frequent firmware updates and heavy data writes with little or no need for wear leveling in most use cases. Just as importantly, its performance and retention remain reliable over the full life of the vehicle.
These strengths also make new SDV use cases more practical, especially data-intensive applications such as AI and machine learning. MRAM’s fast writes and high endurance also make it easier to load software dynamically based on how the vehicle is being used.
In short, MRAM-based MCUs help automakers deliver faster updates, support more flexible software architectures, and add new capabilities over time without compromising the customer experience.

Figure 2 MRAM-based MCUs like the S32K5 help automakers deliver faster updates, support more flexible software architectures, and add new capabilities. Source: NXP
Put simply, underlying hardware technology, and eNVM in particular, must evolve to unlock the true potential of SDVs. Memory write speed and endurance can be make-or-break capabilities for a competitive user experience and the ability to roll out new features consistently. MRAM, with its crucial improvements to endurance and speed, is the eNVM technology truly capable of bringing this SDV vision to life.
Sachin Gupta is senior director of sales and business development for automotive at NXP Semiconductors.
Related Content
- MRAM debut cues memory transition
- The Rise of MRAM in the Automotive Market
- MRAM, ReRAM Eye Automotive-Grade Opportunities
- MRAM Maker Everspin Remembers Its Industrial Roots
- Architectural opportunities propel software-defined vehicles forward
The next EDA wave: Lessons from DATE 2026

The Design, Automation & Test in Europe (DATE) Conference in Verona in April showed an EDA research community moving with real momentum into the AI era. The strongest signal from the conference was that AI is no longer a separate topic sitting beside chip design. It’s now shaping the workloads, architectures, design tools, verification flows, and security questions that will define the next phase of semiconductor development.
The conference was upbeat because the direction is clear and the opportunity is substantial. Heterogeneous compute, RISC-V, chiplets, AI accelerators, agentic EDA, structured specifications, and AI-assisted verification are all advancing at the same time. The challenge is significant: these systems must be designed, verified, secured, and trusted.
However, DATE 2026 showed that the research community is already developing the methods, tools, and flows needed to address that challenge. For Europe, the opportunity is not simply to catch up with existing EDA capability, but to help lead the next wave of AI-enabled, verification-aware, and trustworthy semiconductor design.
This also re-frames the European sovereignty discussion. There are three distinct parts: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability. Processor design is being opened up by RISC-V, chiplets and design-enablement platforms.
EDA-tool sovereignty is more challenging, because advanced-node signoff depends on mature commercial tools, process design kits (PDKs), verification IP, and foundry-qualified flows. The strongest near-term opportunity is therefore AI+EDA capability: building the methods, benchmarks, structured specifications, secure deployment models, and verification-aware AI flows that will define the next generation of design automation.
Conference context and program messaging
DATE 2026 provided a useful view of where semiconductor research is moving as AI, EDA, advanced architectures, verification, and security begin to converge. DATE is not the Design and Verification Conference (DVCon), with its practitioner focus on verification methodology and commercial tool use. It is not the Design Automation Conference (DAC), where the exhibition floor is often as important as the technical program. DATE is research-led, with the papers, focus sessions, tutorials, keynotes, and European project sessions forming the center of gravity.
That research-led character matters. It makes DATE a good indicator of topics that are still forming before they become mature tool flows or standard industry practice. The commercial ecosystem was clearly present with Cadence, Synopsys, Qualcomm, Arm, Infineon, Micron, STMicroelectronics, Tenstorrent, Axelera AI, Real Intent, and others represented in the sponsor list. However, the tone was less product marketing and more ecosystem development.
A key takeaway was that AI is now present as a workload, a design objective, a design-assistance technology, a verification challenge, and a security risk. The individual sessions differed in emphasis, but the common thread was the same: the next phase of EDA will be shaped by the interaction between AI, heterogeneous architectures, verification, security, and trust.
DATE 2026 included 325 regular papers and 91 extended abstracts across the D, A, T, and E research tracks, giving 416 accepted research-track outputs. The program offered 41 main technical sessions, three Best Paper Award candidate sessions, two late-breaking-result sessions, five keynotes, 10 focus sessions, five workshops, four special-day sessions, and four embedded tutorials.
The geographical distribution was also significant. DATE is European in location and culture, but the research paper base reflects the global semiconductor research map. By country-affiliated appearances in technical paper-like entries, China, plus Hong Kong and Taiwan, accounted for 247 appearances, or 44.7%. Europe, plus the U.K., accounted for 133 appearances, or 24.1%. The U.S. accounted for 94 appearances, or 17.0%, with the rest of the world at 79 appearances, or 14.2%.
Using a broad classification, roughly 27% of the technical country-affiliated appearances had some AI connection. Most of this was hardware-for-AI: accelerators, compute-in-memory, large language model (LLM) inference, edge AI, photonic AI, and memory systems. AI applied directly to verification, test generation, fuzzing, coverage, and security validation was closer to 2.7% of the technical program. This shows that AI-for-verification is currently a specialist part of the larger AI-related research activity.
AI as workload, tool, and risk
The opening keynote from Luc Van de Hove of IMEC set out one of the central pressures: AI models are evolving faster than semiconductor hardware development, creating bottlenecks that require new compute architectures and semiconductor platforms. In this framing, AI is a key demand changing the hardware stack.
At DATE, AI appeared in at least four roles. First, AI is the workload driving accelerators, compute-in-memory structures, chiplets, photonics, and energy-efficient platforms. Focus session FS02, “Architecting Intelligence: Next-Gen Acceleration for Generative AI,” and TS36, “Next-Generation Memory Systems for AI Acceleration,” were good examples. Second, AI is becoming a design tool, with LLMs, agents, and machine-learning-driven optimization applied to routing, placement, high-level synthesis (HLS), analog sizing, and lithography simulation.
Third, AI is changing the research process itself, as raised in the keynote from Rolf Drechsler from the University of Bremen in Germany. Fourth, AI is becoming a security and trust problem, since AI-guided verification tools can introduce risks such as adversarial manipulation, biased test generation, or hallucinated security guidance.
The AI-for-EDA message was therefore not simply that AI will automate design. AI can accelerate parts of the design and verification flow, while also creating systems and flows that are harder to verify, explain, secure, and certify.
Future platforms are heterogeneous
A repeated architectural message was that general-purpose compute is no longer sufficient for many target workloads. The program included strong content on AI accelerators, chiplets, 3D integrated circuits (3DIC), RISC-V vector extensions, photonic accelerators, quantum and high-performance computing (HPC) coupling, FPGAs, high level synthesis (HLS), open chiplet ecosystems, and domain-specific processors.
RISC-V appeared prominently as an instruction set architecture (ISA), especially where openness, customization, and verification interact. It appeared in open-source cores such as Rocket, BOOM, XiangShan, and Snitch; in vector-extension verification; in processor fuzzing; in cryptographic accelerators; in SoC security; and in lightweight wearable systems. This is consistent with the broader RISC-V opportunity: the open ISA makes architectural experimentation easier but also increases the verification responsibility for each implementation and extension.
The Cornell University keynote by Zhiru Zhang on accelerator design and programming described a familiar problem. Performance and efficiency increasingly come from specialized accelerators, but there is a widening gap between how accelerators are designed and how they are programmed. That gap is an EDA problem because the design flow needs to connect architecture, programmability, verification, performance estimation, and software maintenance.
Quantum was also treated as a systems topic rather than as isolated physics. Nvidia’s Bettina Heim described NVQLink, coupling GPU real-time processing with quantum processors at sub-microsecond latency for error correction and control. A focus session covered MLIR, QIR, and intermediate representations for quantum-classical compilation. The point for EDA is that quantum-classical systems create problems in compilation, control, architecture, timing, and verification. These are recognizable EDA problems, even if the devices are different.
Verification and security become first-class constraints
The third major theme was the convergence of verification, security, and open ecosystems. DATE treated verification and security as part of the same scalability problem. As systems become heterogeneous, AI-driven, and assembled from chiplets and third-party IP, functional correctness, security validation, explainability, and certification overlap.
The verification panel (session FS06), “Who Is Best Suited to Do Verification?”, framed rising re-spin rates and verification cost as a central industry problem. The hardware security focus session argued that heterogeneous SoCs, CPUs, and accelerators create attack surfaces too large for manual analysis alone. The AI-for-verification thread included coverage-driven test generation, reinforcement-learning-guided concolic (concrete + symbolic) testing, processor fuzzing, SystemVerilog Assertion (SVA) generation, and agentic security assistants.
This work is still emerging. However, the direction is clear: verification needs more automation, and that automation needs to be tool-grounded, measurable, and traceable. A generated test, assertion, or security recommendation is useful only if it connects to coverage, formal results, simulation results, reviewable traces, or other engineering evidence.
AI for RTL and verification
A specialist but important cluster was AI applied to register-transfer level (RTL) design. This included LLM-generated Verilog, closed-loop RTL repair, multi-agent design flows, HLS-to-RTL pathways, and benchmark contamination. The volume was small, roughly 2-3% of the technical program, but the technical direction was important.
The field has moved beyond asking an LLM to write Verilog. The more credible flows put verification in the loop: generate RTL, run checks, estimate correctness, repair errors, and preserve equivalence. VeriBToT (session TS07.1) combined self-decoupling and self-verification for modular Verilog generation.
EstCoder (TS22.9) used a collaborative agent flow with a functional-estimation agent scoring generated RTL before accepting or correcting it, reporting up to 9% improvement in RTL correctness. LiveVerilogEval (TS29.1) addressed benchmark contamination and found that LLM performance degraded significantly on dynamically generated benchmarks, suggesting that static benchmarks may have overstated current capability.
The sponsor-hosted executive session on EDA agentic AI provided a useful industrial view. Agentic AI is moving from demonstrations toward production flows with RTL checking and fixing, specification-to-testbench construction, and synthesis-to-GDSII flows identified as near-term use cases. The hard constraints are determinism, traceability, IP protection, tool integration, and signoff confidence.
The AI-for-verification work showed the same pattern. The best examples were closed-loop and tool-grounded, not generic prompt-based test generation. ChatTest (TS22.7) used a multi-agent LLM framework with a structured Verification Description Language (VDL), retrieval-augmented generation, and a coverage-feedback loop. It reported 1.46 times higher toggle coverage, 2.28 times higher line coverage, and a 24.23% improvement in functional coverage across 20 complex RTL designs. CoverAssert (TS40.10) used functional coverage feedback to guide LLM generation of SVAs.
Processor fuzzing gave another important example. SimFuzz (TS40.6) applied similarity-guided block-level mutation to RISC-V processors Rocket, BOOM, and XiangShan, finding 17 bugs, including 14 previously unknown issues and seven CVE-assigned bugs affecting decode and memory units.
This connects to GhostWrite (CVE-2024-44067), a RISC-V vector-extension implementation bug in T-Head XuanTie processors that allowed unprivileged code to write arbitrary physical memory. GhostWrite was not a side channel. It was a direct architectural flaw, and the mitigation required disabling the vector extension. This is a strong argument for structure-aware, security-directed processor verification.
AI-generated SVAs also appeared in several forms. PALM (TS07.6) investigated LLM assistance for valid SVAs in security verification, while CoverAssert (TS40.10) and AutoAssert (TS02.5) extended coverage-driven, LLM-assisted assertion generation with formal verification feedback. This seems to be the right near-term role for AI in formal verification: assistant and accelerator, not replacement for formal reasoning.
Agentic AI and structured specifications
The most visible emerging pattern in AI+EDA was the movement from single-shot prompting to multi-agent, tool-grounded, feedback-driven workflows. The focus session (FS07) “From Concept to Silicon: End-to-End Agentic AI for Smarter Chip Design” made this explicit across HLS, physical design, testing, and security verification.
The Nexus paper presented by PrimisAI (session SD01.1) framed the engineering problem clearly. EDA workflows need reliability and traceability, and weak coordination and unstructured communication are bottlenecks for multi-agent deployment. Nexus reported 100% accuracy on RTL generation tasks in VerilogEval-Human and nearly 30% average power savings on Verilog-to-routing (VTR) timing-optimization benchmarks.
AgenticTCAD (TS41.6) applied a natural-language-driven multi-agent system to TCAD device optimization, achieving IRDS-2024 specifications for a 2-nm nanosheet FET within 4.2 hours, compared with 7.1 days for human experts.
The key point is that agentic AI wraps the LLM in an engineering process. The flow is to decompose the task, call EDA tools, inspect reports, measure quality, repair errors, and iterate. That is much more credible for EDA than single-shot generation.
Two structured-language examples were also notable. The first was the Universal Specification Format (USF), a formal specification format (in session TS24.3) with unambiguous syntax and semantics able to generate formal properties and behavioral simulation models.
The second was Verification Description Language (VDL), introduced in ChatTest (TS22.7), which captures I/O pins, timing, functional coverage targets, stimulus sequences, checkpoints, and boundary conditions in YAML format. These are early signs that AI-assisted EDA may require better intermediate representations, not only better models.
European sovereignty and the next EDA wave
European semiconductor sovereignty was an undercurrent throughout DATE 2026, but it needs to be framed carefully. Semiconductor sovereignty is not about becoming completely self-sufficient; it is about reducing dangerous dependencies on other geographic regions. It breaks down into several separate questions: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability.
For processor design, the RISC-V activity, open chiplet ecosystems, and European design-enablement platforms such as the cloud-based makeChip point in a useful direction. However, first-time-right silicon still depends heavily on commercial EDA tools, qualified PDKs, verified sign-off flows, and high-quality verification IP. A realistic sovereignty strategy means sovereign design competence and secure access to the best tools, not an assumption that open-source-only flows can replace the commercial stack.
For EDA-tool sovereignty, open-source EDA is strategically valuable for education, research, reproducibility, open PDKs, and lowering barriers for small and medium-sized enterprises (SMEs) and universities. However, advanced-node commercial EDA represents decades of investment in algorithms, foundry relationships, sign-off maturity, and customer regression infrastructure.
The keynote by Luca Benini of the University of Bologna in Italy on democratizing silicon made the positive case for broader access, but open-source EDA is a supplemental and educational platform, not a near-term substitute for advanced-node sign-off.
The more compelling opportunity is next-generation AI+EDA. DATE 2026 showed that this area is still being defined. Agentic workflows, AI-assisted verification, coverage-driven test generation, formal and SVA support, open benchmarks, trustworthy AI, structured specification languages, and secure on-premise model deployment are all areas where research depth and engineering discipline matter.
Europe has strong universities, safety-critical application domains, active RISC-V and open-source hardware communities, and the policy framework of the EU Chips Act. That combination is well suited to shaping the next EDA wave.
The strongest form of European sovereignty is not isolation. It is capability: the ability to design, verify, secure, and understand the systems Europe depends on. DATE 2026 showed that the future of EDA will require new compute architectures, better verification methods, more automation, structured specifications, stronger security methods, and a clear understanding of where AI helps and where it introduces new risks. These are exactly the problems that a research-led, ecosystem-focused community should be able to address.
DATE 2026 was therefore not just an EDA conference about AI in chip design. It was a useful indication that the next phase of EDA will be defined by the interaction between AI, heterogeneous architectures, verification, security, and trust. The next step is to turn these research directions into reliable engineering flows.
Simon Davidmann is an EDA industry pioneer and serial technology entrepreneur with over 40 years of experience in simulation and verification. His career has been instrumental in shaping the foundational languages and methodologies used in modern chip design, particularly those now critical for AI/ML hardware. Davidmann was the co-creator of Superlog, which became SystemVerilog. After selling Imperas to Synopsys in 2023 and serving as Synopsys VP for Processor Modeling & Simulation, he left Synopsys and is now an AI + EDA researcher at Southampton University, UK.
Editor’s Note
DATE 2026 was held on 20-22 April 2026 in Verona, Italy. The conference program is available at https://www.date-conference.com/programme. Specific session labels are noted in parentheses in the article.
Related Content
- AI features in EDA tools: Facts and fiction
- EDA’s big three compare AI notes with TSMC
- What is the EDA problem worth solving with AI?
- DAC 2025: Towards Multi-Agent Systems In EDA
- How AI-based EDA will enable, not replace the engineer
The post The next EDA wave: Lessons from DATE 2026 appeared first on EDN.
Well-balanced gain, driven without pain

A subtle change to a standard circuit can enhance its usefulness—and even save a resistor.
If there were a prize for the most trivial Design Idea (DI) of the year, this one would likely be high on the shortlist (if not at the top). Most DIs involve adding components to circuits to improve them; this time we’re removing one. Circuits for line drivers, balanced or not, are ten a penny, but this variant has a surprising twist: surprising because it’s so simple and, when you look at it, obvious, though I can’t find it in any published schematic, even those from National Semiconductor’s golden days.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 presents it:

Figure 1 Resistors R1 and R2 help to set the gains of both the non-inverting and inverting stages, allowing for excellent matching of the anti-phased outputs with minimal components.
A1a is a non-inverting gain stage, utterly conventional except that its feedback network is referred to A1b’s virtual ground point. A1b is an inverting unity-gain stage, utterly conventional except that its input resistor is also A1a’s feedback network. A1a and A1b therefore work together to deliver perfectly matched anti-phase outputs (assuming perfectly matched components, of course). The gain can be set to anything above 1 (unity gain would revert the circuit to a simple buffer plus an inverting stage: nothing new).
At first glance, this circuit may look rather like part of a differential or instrumentation amplifier. But its function, as determined by the resistor ratios, is quite different. Those others have accurately-matched differential inputs; this is designed for balanced outputs.
Is that it?
Yup: ’fraid so, apart from some practical details. A CR network may be needed to remove DC from the input, and any remaining imbalance could be trimmed by bleeding some current into (or out of) A1b’s inverting input. Otherwise, the circuit is stable and well-behaved, and will happily drive a transformer directly, though series matching resistors should be added, perhaps with 300R in each output line if you want to be really picky about balance.
Trimming the frequency response is messy, and should be done before the signal gets this far. Any (HF-cutting) capacitor across R1 (call it C1) needs to be matched by (1 – 1 / Gain) × C1 across R3 if the responses in both output legs are to match.
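To put that matching rule into numbers, here is a minimal sketch; the capacitor value and gains are hypothetical, chosen only to illustrate the (1 − 1/Gain) scaling.

```python
# Minimal sketch of the matching rule above: a cap C1 across R1 is matched by
# (1 - 1/Gain) x C1 across R3. Values below are hypothetical, for illustration.

def matching_cap(c1_farads: float, gain: float) -> float:
    """Capacitance needed across R3 to match an HF-cutting cap C1 across R1."""
    return (1.0 - 1.0 / gain) * c1_farads

C1 = 100e-12  # assume a 100 pF cap across R1
for gain in (2.0, 4.0, 10.0):
    print(f"Gain {gain:4.1f}: cap across R3 = {matching_cap(C1, gain) * 1e12:.0f} pF")
```

As the formula implies, the two capacitors converge at high gains, while at a gain of 2 the R3 capacitor is only half of C1.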
The output drive differs from device to device. Using ±15 V rails and working into 600R, LM4562s delivered 26.3 V pk-pk and KA5532s gave 24.5 V, while TL072/082s disappointed at just 13.8 V. An MCP6022 (RRIO, unlike the others) with ±2.5 V supplies clipped at 4.7 V pk-pk into 600R.
And in the real world…
To paraphrase Bob Pease, “If a circuit’s never seen a soldering iron, it probably won’t work right” (although perhaps he’d make an exception for plug-in breadboards, at least at low frequencies). So, just to demonstrate that this doesn’t merely describe a simulation, Figure 2 shows it plugged in and “working right”:

Figure 2 This is how an LM4562 performs at 1 kHz with ±15 V rails and a 600R load. It is just clipping—cleanly and symmetrically—at a differential output level of 32.2 dBu.
As noted earlier, the circuit is well behaved as long as you avoid driving capacitive loads directly, as with all op-amp circuits (33–100R in series with an op-amp’s output pin is normally a good cure, limiting the peak current). Lacking any suitable audio transformers but wanting to check if such loading might cause problems, I hooked it up directly to the secondary winding of a small mains transformer, which seemed like a cruel enough (not to mention fun) test.
While the resulting >>300 V RMS output tolerated little loading, it could light a neon brightly (with its integral 220k series resistor) without affecting the distortion at the op-amps’ outputs. Although the HV output showed a nick in the waveform where the neon struck and went negative-resistance, this artifact wasn’t reflected back to the drive. Which is exactly what we’d expect, but should not take for granted.
For phase-splitting with gain (but no pain) and the ability to drive old-school 600Ω balanced lines, this circuit may be ideal. That said, there may be easier and cheaper ways of powering neons…
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- ΔVbe thermometer outputs 1mV/°C without calibration or op amps
- Newer, shinier DMM RTDs—part 1 and part 2
- Why modulate a power amplifier?—and how to do it
- Power amplifiers that oscillate—deliberately. Part 1: A simple start and Part 2: A crafty conclusion
The post Well-balanced gain, driven without pain appeared first on EDN.
AI inference accelerator bolsters efficiency in power modules

Power modules for data centers are being paired with AI inference accelerators serving applications such as agentic AI, response generation with large language models (LLMs), and predictive analytics in finance and healthcare. The pairing is mainly aimed at boosting energy efficiency on high-density boards.
Take the case of Infineon, which is incorporating d-Matrix’s Corsair inference accelerator in its OptiMOS TDM2254xx dual-phase power modules. According to Sid Sheth, founder and CEO of d-Matrix, Corsair was purpose-built for delivering the sub-2 ms token latency that interactive applications require.

The OptiMOS TDM2254xx dual-phase power module enables vertical power delivery while offering a density of 1.0 A/mm². Source: Infineon
Infineon has been working closely with d-Matrix to optimize the Corsair inference accelerator for its power semiconductors. “Infineon has been collaborating with customers specializing in inference processors, such as d-Matrix, from the early days when the industry was mostly focused on training hardware,” said Raj Khattoi, VP and GM of consumer, computing and communication at Infineon.
Infineon, which offers a broad portfolio of power semiconductors based on silicon (Si), silicon carbide (SiC), and gallium nitride (GaN), has also been working closely with AI companies in both the training and inference markets. These collaborations aim to improve energy efficiency at higher power density in data centers and other AI installations.
Related Content
- Solving AI’s Power Struggle
- TI launches power management devices for AI computing
- Taiwan’s Emerging Power Electronics Strategy in the AI Era
- Why AI Is Redefining the Future of Commercial Power Infrastructure
- Power Module Packaging Evolves as Materials and Supply Chains Redefine Power Electronics
The post AI inference accelerator bolsters efficiency in power modules appeared first on EDN.
NOCO’s Genius 1: A trickle charger that tries harder

Diminutive? Definitely. Flexible? Indubitably. Safety-cognizant? Thankfully…unless you activate “FORCE” mode, that is (hopefully intentionally).
A bit more than a year ago, within a blog post that talked about (potentially) resurrecting dead lead-acid batteries, I noted that I’d recently added additional members to my battery-charger stable. Historically, I’d relied on a legacy-design DieHard model, one of the two which, loudly humming and dubiously still working, I subsequently turned into a teardown target:

The others were all newer designs, solid-state (vs transformer-based) and both more flexible in their supported battery voltages and technologies and more feature-rich. Specifically, today I’ll be focusing on the NOCO Genius 1, a 1A trickle charger, two examples of which I’d acquired at a promo discount from Amazon’s Warehouse (now Resale) site, intending to tear one of ‘em down:

I’d teased the feature set a year-plus back, then focusing (given the overall writeup topic slant) on its battery-rejuvenating chops. Here’s the fuller feature-set list, requoted from the Amazon product page (from which, by the way, I’d acquired today’s dissection victim for only $20.12, ~1/3 off the current brand-new $29.95 price tag, which in and of itself also isn’t bad, or if you prefer, half off the $39.95 MSRP):
- MEET THE GENIUS 1 — Similar to our G750, just better. It’s 35% smaller and delivers over 35% more power. It’s the all-in-one charging solution – battery charger, battery maintainer, trickle charger, plus desulfator.
- DO MORE WITH GENIUS — Designed for 6-volt and 12-volt lead-acid (AGM, Gel, SLA, VRLA) and lithium-ion (LiFePO4) batteries, including flooded, maintenance-free, deep-cycle, marine and powersport batteries.
- ENJOY PRECISION CHARGING — An integrated thermal sensor dynamically adjusts the charge based on ambient temperature, preventing overcharging in hot weather and undercharging in cold, ensuring optimal battery performance.
- CHARGE DEAD BATTERIES — Charge batteries from as low as 1 volt, or use Force Mode to manually charge completely dead batteries down to zero volts. Perfect for recovering deeply discharged or neglected batteries.
- BEYOND MAINTENANCE — Keep your battery fully charged without worrying about overcharging. Our smart charger constantly monitors the battery, allowing you to leave it connected safely – indefinitely – for worry-free maintenance.
- RESTORE YOUR BATTERY — Precision pulse charging automatically detects and reverses battery sulfation and acid stratification, restoring your battery’s health for improved performance and extended lifespan.
- COMPATIBLE — Charges and maintains all types of vehicles, including cars, automobiles, motorcycles, mopeds, lawn mowers, ATVs, UTVs, tractors, trucks, SUVs, RVs, campers, trailers, boats, PWCs, jet skis, classic cars, and more.
- WHAT’S IN THE BOX — Includes a 1A charger, a direct wall plug-in, 110-inch DC cable with battery clamps, and integrated eyelet terminals, and 3-year warranty. Proudly designed in the USA.
It’s pretty tiny (that’s the aforementioned G750 behind it in the following photo, by the way); 3.62in (92mm) high, 2.32in (59mm) wide and 1.26in (32mm) deep, and weighing only 0.77lb (0.35kg):

And the manufacturer was even thoughtful enough to include a preparatory teardown diagram on the website product page:

Let’s see how close reality comes to matching that conceptual image, shall we? This charger arrived absent its packaging, so what you’ll see first (as usual accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes) is the other, ~$3-more-expensive charger’s box:




Wonder what happened to the original “tab” for retail-display hanging purposes?


Opening up the box…

you’ll find the user guide (also accessible here as a multi-language PDF, plus the product spec sheet) and promo literature, plus, in this particular case, the aforementioned formerly-MIA tab:

along with, of course, today’s two-part patient:

the base unit:

and the remainder of the cabling, including the battery terminal clamps:

Here’s the male-and-female connector pair that mates ‘em:


And what’s that lump partway down the “remainder of the cabling” span?

It’s a (user-replaceable, which is nice) fuse, as at least some of you may have already guessed. 2A is, IMHO at least, a reasonable choice considering the device’s 1A-max output specs:

Before putting the “remainder of the cabling” to the side, here’s a closeup of those “integrated eyelets” mentioned earlier in the bulletized feature list:

And this stock shot shows how to make ‘em usable:

Now for the base unit. Before diving inside, here are some real-life overview shots to augment the earlier stock ones:







You’ve probably already noticed the ultrasonic welds around the outside, holding the halves together. Regular readers may already recall that they’re a longstanding bane of mine. This time, since it was convenient to do so and I was under no delusions that the charger would be salvageable/reusable post-teardown anyway, I took a hacksaw to ‘em in conjunction with a vise:

Here’s what the inside of the back half looks like, revealing AC prong connections to the PCB:

And speaking of which, here’s our first look at the PCB itself, specifically the backside:
Nothing here is particularly surprising, nor is the broader fact that DC conversion circuitry dominates the landscape, given the physical proximity to the AC source. Most notable, probably, is the diminutive size of the two transformers, explained in part (but only in part) by this particular unit’s trickle-current characteristics. For the rest of the (hint: solid-state) story, we’ll need to see the other side of the PCB. No better time than the present:

With the normally-restraining screws now removed:
and in the process of lifting the PCB out of the remaining chassis half:
I happened to notice, down by the DC cable exit point, two more wires alongside an NTC1 notation on the PCB:
I’m (fairly confidently) assuming that they reference a negative temperature coefficient (NTC) thermistor. My initial reaction, and one that in retrospect I admittedly clung to far too long, was that it somehow was used to ascertain if the battery itself was overheating, a situation which would compel the charger to “cut the juice”. Problem being, though, that there are only two wires (DC positive and negative) in the cable running from the main unit to the battery, so the thermistor would end up being nowhere near the battery itself (PDF).
In grasping at straws, I surmised that perhaps the battery temperature was being indirectly determined by the transferred temperature of the connected cabling, which admittedly seemed increasingly silly the more I thought about it. But then I re-read the device specs prior to sitting down to write and realized that what the thermistor was actually measuring was (probably) just the ambient environmental temperature. “An integrated thermal sensor dynamically adjusts the charge based on ambient temperature, preventing overcharging in hot weather and undercharging in cold, ensuring optimal battery performance.” Yeah, that’s it. Ahem.
Onward. Interesting PCB topside two-level sandwich, eh?
And speaking of which:

here’s the inside of the front half of the chassis:

And the PCB topside itself:
The largest IC, the one with the white dot on it and located at lower right on the top (of the two-PCB sandwich) mini-PCB, is the “brains” of the operation, an ABOV Semiconductor A96G148GR 8-bit 8051-class microcontroller with integrated flash memory. On the other (top) end, toward the center, is the multi-function toggle switch, which puts the charger in various operating modes, surrounded by a ring of LEDs, including two more toward the bottom. And to its far left is the multi-pin connector that mates the mini-PCB with its larger sibling below it.
I almost stopped at this point, clinging to the delusion that maybe I’d glue everything back together again in fully-functional form. But curiosity-while-writing eventually got the better of me (and anyway, that was a silly idea), so I rotated the assembly by 90° so the PCB markings could be read right-side-up and let ‘er rip:
Ok, now I’m done!
A (potentially fatal?) forcing function
In closing, let’s revisit that just-referenced multi-function toggle switch, specifically in the context of the “unless you activate ‘FORCE’ mode (hopefully intentionally), that is” comment in this article’s subtitle. Quoting from the user guide:
| Mode | Explanation |
| --- | --- |
| Force Mode | For charging batteries with a voltage lower than 1V. Press and Hold for five (5) seconds to enter Force Mode. The selected charge mode will then operate under Force Mode for five (5) minutes before returning to standard charging in the selected mode. |
Here’s the ominous bit:
Force Mode. [Press & Hold for 5 seconds]
Force mode allow the charger to manually begin charging when the connected battery’s voltage is too low to be detected. If battery voltage is too low for the charger to detect, press and hold the mode button for 5 seconds to activate Force Mode, then select the appropriate mode. All available modes will flash. Once a charge mode is selected, the Charge Mode LED and Charge LED will alternate between each other, indicating Force Mode is active. After five (5) minutes the charger will return to the normal charge operation and low voltage detection will be reactivated.
CAUTION. USE THIS MODE WITH EXTREME CARE. FORCE MODE DISABLES SAFETY FEATURES AND LIVE POWER IS PRESENT AT THE CONNECTORS. ENSURE ALL CONNECTIONS ARE MADE PRIOR TO ENTERING FORCE MODE, AND DO NOT TOUCH CONNECTIONS TOGETHER. RISK OF SPARKS, FIRE, EXPLOSION, PROPERTY DAMAGE, INJURY, AND DEATH.
The entire quote, notably the all-caps portion, was 100% original, by the way, not “enhanced” in any way by editing from yours truly (explaining, among other things, the “creative” grammar in spots). Reminds you of Jason Hemphill’s “hack” that I highlighted back in mid-March, doesn’t it?
Death. I’ll just leave that for you to ponder as you wish. Memento Mori, my friends. And with that pleasant thought, I’ll wrap up for today and turn it over to you for your thoughts (feel free to skip posting the morbid ones, please) in the comments!
—Brian Dipert is the associate editor, as well as a contributing editor, at EDN.
Related Content
- Dead lead-acid batteries: Desulfation-resurrection opportunities?
- A battery charger that loudly hums: Dump it or just make it dumb?
- Resurrecting a 6-amp battery charger
The post NOCO’s Genius 1: A trickle charger that tries harder appeared first on EDN.
Strain gauges: Turning stress into signal

When structures bend, stretch, or compress, engineers need a way to translate that invisible mechanical stress into measurable data. Strain gauges do exactly that—tiny sensors that convert deformation into electrical signals with remarkable precision.
From monitoring bridges and aircraft wings to ensuring the reliability of everyday electronics, strain gauges are the quiet workhorses that make stress visible, quantifiable, and actionable.
How resistance reveals stress
At the heart of every strain gauge lies a deceptively simple principle: when a conductor or semiconductor is stretched, its electrical resistance changes. Engineers harness this effect by arranging strain gauges in a Wheatstone bridge circuit, amplifying tiny resistance shifts into measurable voltage signals.
It’s a clever translation—microscopic deformations become clear electrical outputs. Narratively, this is where the magic happens: the silent stress within a bridge girder or aircraft fuselage suddenly speaks in numbers, allowing designers to predict failures, validate models, and ensure safety long before cracks appear.
Stress signals in the real world
A strain gauge is the sensing element itself, while a strain gauge sensor is the complete packaged device that integrates the gauge with wiring, housing, and often signal conditioning for practical measurement. That distinction becomes critical when sensors are deployed in demanding environments.
Consider aerospace wing testing: engineers attach arrays of strain gauges across critical points of an aircraft wing. As the wing flexes under simulated flight loads, each gauge’s resistance shifts, feeding signals into a monitoring system. The sensor assemblies ensure those delicate gauges survive vibration, temperature swings, and handling. This is where theory meets reality—tiny resistance changes become the data that validates aerodynamic models, ensures passenger safety, and drives innovation in lighter, stronger aircraft designs.
Civil infrastructure offers another compelling example. Bridges endure constant stress from traffic, wind, and temperature cycles. Embedded strain gauge sensors provide early warnings of fatigue, helping engineers schedule maintenance before cracks or failures occur. In this narrative, strain gauges are not just measuring stress, they are safeguarding lives and economies by keeping critical structures resilient and reliable.
A technical note: A strain gauge directly measures strain (physical deformation). From this measurement, we determine the internal stress—the intensity of the forces resisting that deformation—using the material’s known stiffness.
Strain gauge vs. load cell vs. FSR
Since this post is focused on strain gauges, here is a quick distinction. A strain gauge measures material deformation as a resistance change, forming the basis of precise force sensing. A load cell builds on this, packaging strain gauges into a calibrated transducer for accurate weight and force measurement in industry. By contrast, a force-sensing resistor (FSR) is a low-cost sensor whose resistance shifts with pressure—handy for relative force detection in consumer and robotic applications, but far less precise.

Figure 1 Strain gauges and force-sensing resistors convert mechanical input into changes in electrical resistance, yet their responses vary in linearity, sensitivity, and application scope. Source: Author
So, in essence, when designers and engineers need to measure force, two of the most widely used technologies are force sensing resistors and strain gauges. Both convert mechanical input into changes in electrical resistance, yet their principles, accuracy, and applications differ greatly.
A force sensing resistor is a thin, flexible, polymer-based sensor whose resistance decreases as pressure is applied to its surface. A strain gauge, on the other hand, is made of fine metallic foil or wire arranged in a grid and bonded to a stable substrate. Rather than detecting direct pressure, it measures strain—the deformation of the material it is attached to. As the material stretches or compresses, the strain gauge deforms as well, producing a slight change in resistance. This change is typically measured using a Wheatstone bridge circuit for precise results.
Similarly, load cells build upon strain gauge technology by integrating one or more gauges into a mechanical structure that translates applied force into measurable strain. This makes load cells highly accurate and reliable devices for quantifying weight and force in industrial, commercial, and scientific applications.

Figure 2 A compact button-type load cell, based on strain-gauge technology, delivers compression measurements in space-limited applications. Source: ATO
Wheatstone bridge configurations for precision strain measurement
In practical applications, strain measurements typically involve very small changes rather than large strain values. Detecting these minute variations requires precise measurement of small resistance changes. A Wheatstone bridge circuit (WBC) is widely used for this purpose, as it translates subtle resistance shifts into measurable voltage outputs.
A standard Wheatstone bridge consists of four equal resistors arranged in a square. An excitation voltage is applied across one diagonal, while the output voltage is measured across the other. In its balanced state, the bridge produces zero output voltage. For strain measurement, one or more resistors are replaced with active strain gauges, whose resistance varies in response to external forces acting on the structure.
To achieve higher sensitivity and improved accuracy, different Wheatstone bridge configurations are employed: quarter-bridge, half-bridge, and full-bridge. In a quarter-bridge, a single resistor is replaced with a strain gauge. A half-bridge uses two strain gauges, while a full bridge replaces all four resistors. These configurations not only enhance measurement precision but also help compensate for temperature effects, making them essential in modern strain gauge instrumentation.

Figure 3 Diagram illustrates a quarter Wheatstone bridge, where one resistor is replaced by the strain gauge. Source: Author
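To give a feel for the signal levels involved, here is a minimal Python sketch of the standard small-strain approximation V_out ≈ (n/4) × V_ex × GF × strain, where n is the number of active arms; the excitation voltage, gauge factor, and strain used are illustrative assumptions, not values from any particular gauge.

```python
# Minimal sketch: small-strain Wheatstone bridge output,
# V_out ≈ (n/4) * V_ex * GF * strain, where n is the number of active arms
# (1 = quarter-bridge, 2 = half-bridge, 4 = full-bridge), assuming ideally
# placed gauges. All numeric values below are illustrative assumptions.

def bridge_output(v_ex: float, gauge_factor: float, strain: float, active_arms: int) -> float:
    """Approximate differential bridge output voltage in volts."""
    return (active_arms / 4.0) * v_ex * gauge_factor * strain

V_EX = 5.0        # bridge excitation voltage (V), assumed
GF = 2.0          # typical metal-foil gauge factor, assumed
STRAIN = 500e-6   # 500 microstrain, assumed

for arms, name in ((1, "quarter"), (2, "half"), (4, "full")):
    v_out = bridge_output(V_EX, GF, STRAIN, arms)
    print(f"{name:>7}-bridge output: {v_out * 1e3:.2f} mV")
```

Even at 500 microstrain, a quarter-bridge with 5 V excitation produces only about 1.25 mV, which is why the amplification discussed below matters.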
Selecting the right strain gauge
Selecting the right strain gauge requires balancing geometry, resistance, and environmental compatibility to achieve accurate measurements while controlling installation costs. Options range from simple linear gauges for uniaxial stress fields to rosette configurations—rectangular, delta, or tee—for analyzing complex or unknown stress directions, and bridge arrangements for enhanced sensitivity and thermal compensation.
The choice of grid orientation and gauge length must align with the material’s homogeneity and the stress distribution being measured. Equally important are electrical parameters such as the nominal resistance, which determines compatibility with the measurement circuitry, and self-temperature compensation, which offsets thermal effects to maintain accuracy and improve signal-to-noise ratios under fluctuating operating conditions.
Environmental and installation considerations in strain measurement
As stated before, strain gauges are inherently sensitive to temperature variations, and changes in temperature can alter their electrical resistance. If not properly compensated or controlled, this effect can introduce significant measurement errors.
Beyond temperature, external factors such as humidity, moisture, vibration, and electromagnetic interference can also degrade performance and accuracy. Appropriate protective measures—such as encapsulation, shielding, and environmental sealing—are therefore essential to ensure reliable operation.
Equally important is the bonding of the strain gauge to the surface of the substrate. A strong, uniform bond ensures that the gauge accurately follows the strain of the underlying material. Achieving this can be challenging when working with dissimilar materials or irregular surfaces. Poor bonding may result in signal instability or inaccurate readings, undermining the integrity of the measurement system.
Practical strain gauge systems: Bridges, amps, and test kits
In a Wheatstone bridge, the strain gauge serves as the variable resistor whose resistance shifts under mechanical deformation, producing a differential voltage proportional to strain. Because this resistance change is extremely small—often less than 0.1% of the gauge’s nominal value—the bridge must be energized with a stable excitation source and paired with an amplifier stage to extract the signal from noise.
For basic designs, a differential amplifier can provide initial signal conditioning, but for precision applications, an instrumentation amplifier (INA) is preferred due to its superior common-mode rejection and high input impedance.
Keep in mind that the bridge configuration depends on accuracy requirements: a quarter-bridge offers simplicity, a half-bridge adds temperature compensation, and a full-bridge delivers maximum sensitivity. The choice of amplifier ensures the bridge’s delicate balance is preserved while enabling reliable strain measurement.
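Working the same approximation backwards shows how a conditioned reading is turned back into strain. The sketch below, with assumed excitation, gauge factor, and amplifier gain, divides out the INA gain and inverts the quarter-bridge relation; it is an illustration, not a vendor formula.

```python
# Minimal sketch: recovering strain from an amplified quarter-bridge reading.
# Inverts the small-strain relation strain ≈ 4 * V_bridge / (V_ex * GF) after
# dividing out the instrumentation-amplifier gain. All values are assumptions.

def strain_from_reading(v_measured: float, amp_gain: float,
                        v_ex: float, gauge_factor: float) -> float:
    """Estimated strain from the INA output voltage (quarter-bridge, small strain)."""
    v_bridge = v_measured / amp_gain            # undo the amplifier gain
    return 4.0 * v_bridge / (v_ex * gauge_factor)

reading = 0.625  # volts seen at the ADC after amplification, assumed
strain = strain_from_reading(reading, amp_gain=500.0, v_ex=5.0, gauge_factor=2.0)
print(f"Estimated strain: {strain * 1e6:.0f} microstrain")  # ~500 microstrain
```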
Today’s compact strain gauge amplifiers make the entire measurement workflow far more straightforward by integrating multiple critical functions into a single, easy-to-use module. Not only do they provide clean signal gain and low-noise performance, but many also feature built-in excitation voltage sources, eliminating the need for external supplies.
They often include automatic bridge balancing to correct minor mismatches in resistance, ensuring the Wheatstone bridge remains stable and accurate. With high input impedance, filtering options, and sometimes digital outputs, these amplifiers reduce design complexity, accelerate setup, and deliver reliable strain data. For engineers, this means less time spent on circuit design and more confidence in capturing precise measurements across lab and field applications.

Figure 4 Compact strain gauge amplifier modules meet growing demand for industrial strain measurements, where miniature size and easy setup are essential. Source: Transmission Dynamics
Strain gauge test kits, meanwhile, offer a practical, all-in-one pathway for converting mechanical stress into precise electrical signals. These kits typically include gauges with standard resistances (120 Ω or 350 Ω), along with surface preparation tools, adhesives for secure bonding, and protective coatings to ensure durability in challenging environments.
Once integrated into a Wheatstone bridge, the kit enables detection of minute resistance changes defined by the gauge factor, directly linking strain to output voltage. Thus, strain gauge kits simplify what would otherwise be a complex measurement workflow, making them indispensable across fields ranging from structural health monitoring and aerospace stress testing to advanced biomechanics.
That wraps up today’s dive into strain gauges. From foil to semiconductors, the evolution continues—and now it’s your turn to engineer what comes next.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Strain Gauge Sensor Module Intro
- Industrial sensors and control–The basics
- Nanoscale strain sensors measure molecular force
- LXI-compatible sensor measurement unit packs built-in LAN controller
The post Strain gauges: Turning stress into signal appeared first on EDN.
Single switch controls sequential operation of multiple power supplies

Simple analog circuits manage multi-PSU powerup and shutdown sequences.
In projects containing digital and/or analog circuits, multiple power supplies are used, generally 5V DC for digital circuits and 15V DC for analog circuits. Some projects also use 24V or 48V DC as a third power supply. In many cases, these power supplies need to be switched on in sequence, commonly 5V DC first and 15V DC next, with a time delay in-between. Switching them off subsequently requires the reverse sequence, i.e., first in/last out (FILO): 15V DC is switched off first and 5V DC next, again with a time delay in-between.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In MCU-based projects, this sequencing can be achieved through an appropriate software routine. For non-MCU projects, conversely, Figure 1 shows a simple analog circuit that accomplishes this function for two power supplies:

Figure 1 A simple analog circuit controls the powerup and shutdown sequencing of two power supplies.
How does this circuit work? Fundamentally, it employs the charging and discharging of capacitor C1 to achieve both power supply sequencing and the interim time delay. SW1 is a two-pole ON/OFF switch. When it is switched on, 5V is applied first through one pole and then through the second pole; with 0V at the base of Q5, that transistor stays off (an open circuit), so C1 charges through R8.
The voltage at C1 rises per the following formula:
v = V(1 − e^(−t/T))
Here, V = 5V and T = R8 × C1. R9, R10, and R11 form a voltage-divider chain that sets the reference voltages for comparators U1B and U1A.
When the rising voltage v crosses the first reference voltage, set by R11, the U1B output goes HIGH, saturating Q1. Q2 then conducts, connecting 5V to the output. As v rises further, it crosses the second reference voltage, set by R10 + R11; the U1A output goes HIGH, saturating Q4, and Q3 also conducts, making 15V available at the output.
For switching off, although SW1 is now opened, 5V initially continues to be fed to the output through the ongoing conduction of Q2. The base of Q5 goes HIGH, causing it to saturate. C1 resultantly starts discharging through R12. The voltage v at C1 decreases as per the formula:
v = V·e^(−t/T), where T = R12 × C1 during the discharge phase.
When this voltage goes below the reference voltage 2 set as the input to U1A, its output goes LOW. Q4 and Q3 now turn OFF. Hence, the 15V DC output is switched OFF first. As the capacitor voltage further decreases with the passing of time, it goes below the reference 1 set at the input of U1B. Its output now also goes LOW, turning Q1 and Q2 OFF. The 5V output, switched OFF last, implements the desired FILO sequence.
Notably, this design doesn’t employ a constantly power-consuming watchdog circuit. For different time delays, select R9, R10, and R11 accordingly to set the desired reference voltages. High-current power supplies can be handled by using suitable MOS switches for Q2 and Q3.
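To help pick those values, the sketch below solves the charge and discharge equations for the times at which v crosses each reference; the capacitor, resistor, and reference voltages shown are hypothetical placeholders rather than the Figure 1 values.

```python
import math

# Minimal sketch: turn-on and turn-off delays for the FILO sequencer.
# Charging:    v = V * (1 - exp(-t / (R8 * C1)))  ->  t = -R8*C1 * ln(1 - Vref/V)
# Discharging: v = V * exp(-t / (R12 * C1))       ->  t = -R12*C1 * ln(Vref/V)
# All component and reference values below are hypothetical placeholders.

V = 5.0                  # charging voltage (V)
C1 = 100e-6              # timing capacitor (F), assumed
R8, R12 = 47e3, 47e3     # charge and discharge resistors (ohms), assumed
VREF1, VREF2 = 1.5, 3.0  # references from the R9/R10/R11 divider (V), assumed

def t_charge(v_ref: float) -> float:
    """Time for the rising capacitor voltage to reach v_ref."""
    return -R8 * C1 * math.log(1.0 - v_ref / V)

def t_discharge(v_ref: float) -> float:
    """Time for the falling capacitor voltage to drop to v_ref."""
    return -R12 * C1 * math.log(v_ref / V)

print(f"Power-up:   5V rail at {t_charge(VREF1):.2f} s, 15V rail at {t_charge(VREF2):.2f} s")
print(f"Power-down: 15V rail off at {t_discharge(VREF2):.2f} s, 5V rail off at {t_discharge(VREF1):.2f} s")
```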
You can expand this concept to cover any number of power supplies to be operated in a time-delay FILO sequence. For example, Figure 2 shows a derived analog circuit, this time supporting three power supplies:

Figure 2 An analog circuit derived from the previous one controls the powerup and shutdown sequencing of three power supplies, with the concept further as-needed expandable.
The video below demonstrates the operation of Figure 2’s circuit with three power supplies in a FILO sequence.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- Short push, long push for sequential operation of multiple power supplies
- Silly simple supply sequencing
- Vcc delay
The post Single switch controls sequential operation of multiple power supplies appeared first on EDN.
Robots: Why AI alone will not deliver the next leap in automation

The current robotics narrative is heavily weighted toward artificial intelligence (AI). The prevailing assumption is that more parameters, larger models, and better reinforcement learning pipelines will eventually grant machines human-like dexterity. This belief has shaped research agendas, funding priorities, and public expectations.
However, for engineers designing hardware that must survive millions of high-velocity cycles at companies like Amazon Robotics, a different truth is apparent. In the lab, the focus is on the brain, but on the production floor, robots fail for mechanical reasons far more often than algorithmic ones.
In high duty cycle environments, the primary drivers of unplanned downtime are wear, compliance, thermal drift, misalignment, and mechanical fatigue. These are not failures of perception or planning. No amount of neural network tuning can compensate for a linkage that deflects under load or an end effector that cannot maintain repeatability. As the industry continues to chase AI-centric solutions, it risks overlooking the fundamental engineering disciplines that determine whether a robot succeeds in the physical world.
The robotics community is at a crossroads. The last decade has delivered extraordinary advances in machine learning, but the physical reliability of robotic systems has not kept pace. The result is a widening gap between what robots can demonstrate in controlled environments and what they can sustain in real production settings.
Closing this gap requires a shift in mindset. The next leap in robotics will not come from larger models or more training data. It will come from better mechanisms, better actuation, and better physical architectures.
The reliability gap
The industry has spent a decade optimizing the brain while neglecting the body. This imbalance has created what can be described as the reliability gap. As a technical judge for MassChallenge and for university capstone programs at Worcester Polytechnic Institute and Boston University, I have observed a recurring pattern.
Startups and student teams often present systems that segment objects perfectly in simulation, classify scenes with remarkable accuracy, and demonstrate impressive reinforcement learning policies. Yet when these systems are deployed in the physical world, they fail after only a few hours of operation.
The reason is straightforward. AI amplifies a robot’s capability, but the mechanism defines the physical boundary. If a kinematic chain introduces unpredictable hysteresis, software cannot compensate its way to a reliable solution. If a transmission loses stiffness under load, no amount of perception accuracy will restore positional integrity. If an end effector cannot generate stable contact forces, even the most advanced grasping model will fail.
The robotics industry must acknowledge a practical reality. Software and AI are essential, but they cannot overcome fundamental mechanical limitations. The most successful robotic systems in history have not been those with the most advanced algorithms, but those with the most deterministic mechanical behavior. Reliability is not an emergent property of software. It’s engineered into the physical system from the beginning.
Determinism and the voyager philosophy
True industrial progress requires a return to mechanical rigor, specifically a focus on what can be called deterministic mechatronics. This philosophy suggests that the most successful robotic systems are those engineered for passive stability, predictable behavior, and graceful failure. A useful analogy comes from deep space engineering.
Voyager 1, launched nearly half a century ago, remains operational in one of the harshest environments imaginable. NASA has occasionally uploaded new command sequences, performed resets, and adjusted subsystems to extend its life. These interventions succeed because the underlying mechanical and electrical systems were engineered for extreme reliability. The spacecraft’s longevity is not the result of software alone or hardware alone, but the synergy between robust physical design and intelligent control.
Industrial robotics should adopt this same mindset. The next leap in automation will come from kinematic architectures that reduce inertia, precision transmissions that maintain sub-millimeter accuracy under load, and actuation strategies that prioritize physical determinism. The goal is not to diminish the role of AI, but to ensure that AI is built on a stable mechanical foundation.
A deterministic mechanism reduces the burden on perception and control. It narrows the solution space. It transforms a difficult control problem into a manageable one. When the physical system behaves predictably, the software becomes simpler, more robust, and more efficient.
Case study: The apparel challenge
The manipulation of non-rigid materials, such as apparel, provides a clear example of this principle. Handling folded fabric is traditionally viewed as an AI problem. The common assumption is that complex pose estimation, dense depth reconstruction, and advanced vision models are required to manage the noise introduced by folds and wrinkles.
However, breakthroughs in this field, including those protected under U.S. Patents 11268223 and 11939714, demonstrate that the solution is primarily mechanical. By designing a compliant yet deterministic gripping architecture, the physics of the material can be used to the machine’s advantage.
When the kinematic chain is engineered to minimize shear forces, the physical interaction becomes predictable. When the mechanism constrains the degrees of freedom in a way that aligns with the material’s natural behavior, the need for complex perception is reduced.
In these systems, AI still plays a meaningful role. It identifies features, guides sequencing, and handles variability. But it succeeds because the underlying mechanism provides a stable substrate. The machine does the heavy lifting so the software can remain efficient. This balanced approach is what the industry needs. Instead of using software to compensate for mechanical unpredictability, the mechanism is engineered to reduce the burden on software.
This approach scales. It is robust. It is repeatable. And it is the foundation on which industrial grade automation must be built.
A new hierarchy of design
To unlock the next stage of automation, the engineering community must rebalance its priorities. The hierarchy of design must shift.
First, the industry must invest in mechanism research and development with the same intensity it brings to AI. For every dollar spent on perception, equal resources should be allocated to transmissions, linkages, and end effectors. Mechanisms are not a solved problem. They are the frontier that will determine the next decade of progress.
Second, the industry must build reliability-first architectures. Robots should be engineered with the longevity of aerospace systems, not the lifecycle of consumer electronics. This requires a shift in mindset. Reliability is not a feature. It’s a design philosophy.
Third, the industry must foster a new breed of roboticists. The next generation of engineers must be equally proficient in kinematics and PyTorch, equally comfortable with finite element analysis and neural network training and equally invested in mechanical determinism and algorithmic efficiency. The future belongs to engineers who can bridge the physical and digital domains.
Finally, the industry must resist the temptation to chase demos. The goal is not to produce systems that perform well in controlled environments, but systems that operate reliably in the real world. The measure of success is not a viral video, but a robot that performs millions of cycles without failure.
The next decade of robotics
Artificial intelligence is an extraordinary amplifier, but it’s not the foundation of robotics. Intelligence can only be as effective as the physical vessel through which it acts. The next decade of robotics will be defined by the engineers who recognize that mechanisms, transmissions, and physical architectures are not secondary considerations. They are the core of the system.
The future of robotics does not belong to the AI-first approach or the mechanism-first approach. It belongs to the integration of both into a single, reliable, and deterministic system. When the body and the brain evolve together, automation will finally achieve the scale, reliability, and capability that the industry has been pursuing for years.
This is the mechanism-centric future of robotics. And it’s long overdue.
Santosh Yadav is a senior mechanical engineer and robotics researcher serving on the ASME MBE Standards Committee.
Special Section: Smart Factory
- Rethinking machine vision in industrial automation
- Smart factory: The rise of PoE in industrial environments
- Precision lasers boost safety and efficiency in smart factories
- Tale of 3 sensors operating in smart factory environments
- From edge AI to physical AI in smart factories: A shift in how machines perceive and act
The post Robots: Why AI alone will not deliver the next leap in automation appeared first on EDN.
The guardians inside: How radar is redefining in-cabin sensing

The evolution of automotive safety is moving from the exterior to the interior, opening a new frontier: in-cabin sensing. Its emergence marks a shift from passive vehicle shells to active systems capable of detecting and safeguarding occupants. However, implementing radar-based in-cabin sensing presents multifaceted engineering challenges, including privacy considerations, real-time data processing, and functional safety, all under a strict regulatory umbrella.
Radar has become the preferred modality for in-cabin applications, offering privacy by design, effectiveness through interior materials, and immunity to lighting conditions. Crucially, it detects micro-motions such as breathing and heartbeat.
Why in-cabin sensing is becoming mandatory
In-cabin sensing includes systems that monitor driver behavior, track occupant presence, detect vital signs, and recognize gestures within the vehicle. In response to global demand for higher safety standards, it is moving from a “nice-to-have” to a “must-have” feature set.

Figure 1 In-cabin sensing is increasingly becoming a must-have feature in modern vehicles. Source: Cadence Design Systems
Tragic incidents involving children left in hot cars and drowsy driving have prompted regulators and safety organizations to act, making in-cabin sensing essential for top safety ratings.
Regulatory bodies are shifting focus from external crash prevention to interior safety measures. Programs like Euro NCAP’s Child Presence Detection (CPD), effective in 2025, and the U.S. Hot Cars Act highlight the importance of interior monitoring to prevent child fatalities and assess driver alertness. While traditional camera systems face privacy and lighting challenges, radar technology, especially 60 GHz frequency-modulated continuous wave (FMCW) radar, offers a superior, privacy-preserving solution for next-generation intelligent cockpits.
Why radar is emerging as a preferred modality
Radar technology offers a unique set of capabilities that make it the optimal choice for the complex environment of a vehicle cabin. Unlike cameras, which can be obstructed by poor lighting or raise privacy concerns, radar provides robust, non-intrusive sensing and offers many benefits.
Privacy by design
In an era where data privacy is paramount, radar offers a distinct advantage. It does not capture detailed visual images of faces or bodies. Instead, it detects presence and movement through point clouds. This allows the system to monitor occupants effectively without recording sensitive personal visual data, making it far more acceptable to privacy-conscious consumers.
Seeing the unseen (non-line-of-sight)
One of the most profound advantages of radar is its ability to penetrate materials. A camera cannot see a child covered by a blanket or sleeping in a rear-facing car seat obstructed by the driver’s seat. Radar, however, can detect the micro-movements of breathing or a heartbeat through clothing, blankets, and even seat materials (excluding steel). This non-line-of-sight (NLOS) capability is crucial for reliable CPD.
Environmental robustness
Radar is immune to lighting conditions. It functions just as effectively in pitch-black darkness as it does in blinding sunlight, ensuring continuous protection day or night. Furthermore, its performance remains robust despite temperature fluctuations, humidity, or vibrations—common factors in the automotive environment.
Why 60-GHz FMCW radar specifically?
As OEMs and Tier 1 manufacturers evaluate their platform choices, the FMCW-versus-ultra-wideband (UWB) debate often arises. While UWB has had success in consumer electronics and certain automotive access systems, FMCW radar aligns more naturally with the requirements of high-volume automotive in-cabin sensing deployments.
FMCW offers a lower cost structure, simpler integration path, and superior feature scalability. It supports multi-use sensing—from occupant monitoring and CPD to vital signs and gesture recognition—all within a unified signal-processing pipeline.
FMCW also avoids security challenges such as relay or “man-in-the-middle” vulnerabilities sometimes associated with UWB applications. Taken together, these factors make FMCW at 60 GHz the “sweet spot” for OEMs targeting a multi-model rollout between 2026 and 2030.
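To give a rough sense of why a wideband 60-GHz FMCW chirp suits cabin-scale sensing, the short Python sketch below evaluates the standard FMCW resolution relationships. The chirp bandwidth, duration, and chirps-per-frame values are illustrative assumptions, not figures from any specific device.

```python
# Back-of-envelope FMCW parameters for a 60-GHz in-cabin radar.
# All chirp settings below are assumptions chosen for illustration.

C = 3.0e8            # speed of light, m/s

f_carrier = 60e9     # carrier frequency, Hz (60-GHz band)
bandwidth = 4.0e9    # chirp sweep bandwidth, Hz (assumed)
t_chirp   = 50e-6    # chirp duration, s (assumed)
n_chirps  = 128      # chirps per frame (assumed)

wavelength = C / f_carrier
range_resolution = C / (2 * bandwidth)                     # finer with wider sweep
velocity_resolution = wavelength / (2 * n_chirps * t_chirp)
max_unambiguous_velocity = wavelength / (4 * t_chirp)

print(f"Range resolution:       {range_resolution * 100:.2f} cm")
print(f"Velocity resolution:    {velocity_resolution * 100:.1f} cm/s")
print(f"Max unambiguous speed:  {max_unambiguous_velocity:.1f} m/s")
```

With these assumed numbers, a few gigahertz of sweep yields centimeter-scale range resolution and fine velocity bins, which is what makes it possible to separate small chest motions from the surrounding cabin clutter.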
Challenges in engineering the intelligent cabin
Implementing radar-based in-cabin sensing is not without its challenges. It represents a multifaceted engineering hurdle that requires the convergence of precision sensors, high-speed signal processing, and functional safety compliance.
The processing challenge
Detecting the subtle rise and fall of a sleeping infant’s chest amidst the noise of a moving vehicle requires immense computational precision. The radar processing pipeline involves complex stages, including the Range FFT (Fast Fourier Transform), the Doppler FFT, and sophisticated clutter-removal algorithms.
Reported results put the accuracy of radar-based CPD at 99.9%. To achieve this level of accuracy, engineers must employ advanced digital signal processing (DSP) technologies. Solutions like the Tensilica Vision 110 DSP are designed specifically for these high-performance, low-power requirements.

Figure 2 Here is a radar processing pipeline for a child presence detection use case. Source: Cadence Design Systems
By offloading complex mathematical operations such as 8-bit and 16-bit MACs to a dedicated DSP, automotive designers can achieve the required frame rates (around 50 FPS) while adhering to strict power and thermal constraints.
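To make the pipeline concrete, here is a minimal NumPy sketch of the range-FFT, clutter-removal, and Doppler-FFT stages described above. The frame dimensions and the simple mean-subtraction clutter filter are assumptions for illustration; a production implementation would run in fixed point on the DSP with vendor-optimized FFT kernels.

```python
# Minimal sketch of a range-Doppler processing chain for one radar frame.
# Array shapes and the static-clutter filter are illustrative assumptions.
import numpy as np

def range_doppler_map(adc_cube: np.ndarray) -> np.ndarray:
    """adc_cube: complex ADC samples shaped (n_chirps, n_samples_per_chirp)."""
    n_chirps, n_samples = adc_cube.shape

    # 1) Range FFT: one windowed FFT per chirp (fast-time axis).
    win = np.hanning(n_samples)
    range_fft = np.fft.fft(adc_cube * win, axis=1)

    # 2) Static-clutter removal: subtract the per-range-bin mean across
    #    chirps so seats, trim, and other fixed reflectors drop out.
    range_fft -= range_fft.mean(axis=0, keepdims=True)

    # 3) Doppler FFT across chirps (slow-time axis), shifted so zero
    #    velocity sits in the middle of the map.
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

    # 4) Magnitude map handed to detection and classification stages.
    return np.abs(doppler_fft)

# Example: a synthetic frame of 128 chirps x 256 samples of complex noise.
frame = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)
rd_map = range_doppler_map(frame)
print(rd_map.shape)   # (128, 256): Doppler bins x range bins
```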
Integrating AI and machine learning
The future of in-cabin sensing lies in the fusion of traditional signal processing with machine learning (ML). While traditional algorithms excel at determining distance and speed, ML is essential for classification. Is the object a bag of groceries or a child? Is the driver blinking due to fatigue or just natural movement? Object segmentation is performed by running AI models on radar data.
Advanced radar architectures now support AI-driven classification, allowing the system to learn and adapt. This capability enables features like gesture recognition for touchless control of infotainment systems, adding a layer of comfort and convenience alongside safety.
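As a rough illustration of this fusion, the sketch below feeds a range-Doppler map into a very small convolutional classifier. The network size, class labels, and input shape are assumptions chosen to fit a cabin-module compute budget; real systems are trained on large, vehicle-specific radar datasets and typically run quantized on the DSP.

```python
# Illustrative sketch: classifying a range-Doppler map (e.g., empty seat vs.
# object vs. child). Architecture and labels are assumptions, not a product.
import torch
import torch.nn as nn

class RDClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling keeps the head tiny
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):                   # x: (batch, 1, doppler_bins, range_bins)
        return self.head(self.features(x).flatten(1))

# One synthetic range-Doppler map standing in for real sensor data.
rd_map = torch.randn(1, 1, 128, 256)
logits = RDClassifier()(rd_map)
print(logits.softmax(dim=-1))               # class probabilities
```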
Applications beyond safety: Comfort and autonomy
While safety mandates are the primary driver, the potential of radar-based in-cabin sensing extends well beyond safety, into user experience and autonomous operation.
Health and wellbeing
The sensitivity of 60-GHz radar enables vital sign monitoring. Systems can continuously track heart and breathing rates without physical contact.

Figure 3 This radar processing pipeline serves vital signs monitoring (HR/BR). Source: Cadence Design Systems
In the event of a medical emergency, the vehicle could detect the driver’s distress and autonomously pull over or alert emergency services.
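The underlying idea is that chest displacement of a few millimeters modulates the phase of the range bin containing the occupant, and the dominant spectral peak of that phase signal gives the breathing rate. The sketch below demonstrates this on a synthetic signal; the frame rate, breathing rate, and noise level are all assumptions, and heart-rate extraction follows the same principle in a higher frequency band with more aggressive filtering.

```python
# Sketch: breathing-rate estimation from the phase of an occupant's range bin.
# The synthetic signal and every parameter here are assumptions for illustration.
import numpy as np

fps = 50.0                         # radar frame rate, Hz (assumed)
t = np.arange(0, 30, 1 / fps)      # 30-second observation window

# A ~3 mm chest displacement at 0.25 Hz (15 breaths/min) modulates the
# round-trip phase of the target's range bin: phi = 4*pi*d / lambda.
wavelength = 3e8 / 60e9
displacement = 0.003 * np.sin(2 * np.pi * 0.25 * t)
phase = 4 * np.pi * displacement / wavelength + 0.05 * np.random.randn(t.size)

# Unwrap, remove the mean, and find the dominant peak in a plausible
# respiration band (0.1 to 0.6 Hz).
phase = np.unwrap(phase)
phase -= phase.mean()
spectrum = np.abs(np.fft.rfft(phase))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs >= 0.1) & (freqs <= 0.6)
breathing_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {breathing_hz * 60:.1f} breaths/min")
```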
Enhancing autonomy
As we progress toward L3 and L4 autonomy, the vehicle needs to know not just where it is, but also how its occupants are doing. In a handover scenario where the car needs the driver to take control, the in-cabin sensing system must verify that the driver is alert, present, and ready. Radar provides this verification reliably, acting as a core intelligence layer that builds trust in machine-driven environments.
Operational efficiency
For emerging mobility models like robotaxis, radar offers practical benefits. It can detect the number of passengers for billing purposes, ensure no objects are left behind, and even automatically manage trunk operation.
The silicon imperative: Efficient DSPs and AI at the edge
In-cabin radar workloads demand a unique blend of high-throughput DSP operations and compact neural-inference capabilities. Traditional MCUs lack the parallelism required for FFT-heavy pipelines, while dedicated NPUs often exceed cost and power envelopes for cabin modules. A new category of radar-optimized DSPs has emerged as the right balance—programmable, efficient, and capable of supporting both classical signal processing and radar-trained neural networks.
These processors must deliver high MAC throughput, robust SIMD capabilities, and efficient memory architecture while operating within tight thermal constraints. Their flexibility enables quick algorithmic iteration, which is essential in a domain where radar datasets continue to expand across body sizes, seating layouts, and vehicle architectures.
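For a sense of scale, the sketch below estimates the multiply-accumulate budget of just the FFT stages for one frame and the sustained rate implied by a 50-FPS pipeline. The data-cube dimensions and the roughly 2*N*log2(N) real-MACs-per-FFT approximation are assumptions; detection, tracking, and neural-network stages add substantially on top of this.

```python
# Rough MAC-budget estimate for the FFT portion of one radar frame.
# Cube dimensions and the per-FFT cost model are assumptions for scale only.
import math

n_chirps, n_samples, n_rx = 128, 256, 3    # assumed data-cube dimensions
fps = 50                                   # target frame rate

def fft_macs(n: int) -> float:
    # Radix-2 complex FFT: ~(N/2)*log2(N) complex multiplies,
    # ~4 real MACs per complex multiply.
    return 2 * n * math.log2(n)

range_ffts   = n_chirps * n_rx * fft_macs(n_samples)   # one per chirp per antenna
doppler_ffts = n_samples * n_rx * fft_macs(n_chirps)   # one per range bin per antenna
per_frame = range_ffts + doppler_ffts

print(f"FFT MACs per frame: {per_frame / 1e6:.1f} M")
print(f"Sustained at {fps} FPS: {per_frame * fps / 1e9:.2f} GMAC/s "
      f"(before detection and AI stages)")
```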
The road ahead
As vehicles advance toward autonomous operation, in-cabin sensing will become a core intelligence layer that predicts occupant needs, safeguards their well-being, and builds trust in machine-driven environments. The integration of radar into the vehicle cabin is redefining what it means to be safe on the road.
For automotive OEMs and Tier 1 suppliers, mastering scalable, radar-based sensing architecture is no longer optional; it is a determinant of future leadership. By leveraging powerful DSP platforms and embracing the unique capabilities of FMCW radar, engineers are not just meeting regulations; they are designing a safer, more intuitive driving experience.
The guardians are no longer just on the bumper; they are inside, ensuring that every journey ends as safely as it began.
Amit Kumar is director of Automotive Product Management and Marketing for Tensilica DSPs at Cadence. He has more than 20 years of design experience in the semiconductor and IP segments. Amit has held product marketing, application engineering, business development, and key strategic management roles with a specialization in automotive ADAS/AD and robotics applications.
Related Content
- Automotive: The latest on in-cabin sensing designs
- Partnering to Advance Automotive In-Cabin Sensing Tech
- In-Cabin Monitoring: Time-of-Flight and Radar Take the Wheel
- How In-Cabin Monitoring Solutions Contribute to Overall Vehicle Safety
- Advancements in radar technology and the evolution of in-cabin sensing
The post The guardians inside: How radar is redefining in-cabin sensing appeared first on EDN.
Protected DrMOS ICs enable fast AI current limiting

SmartClamp DrMOS power devices from AOS are designed for the demanding power requirements of AI servers and high-end GPUs. Each device is a synchronous buck power stage with two asymmetrically optimized high-side and low-side MOSFETs and an integrated driver. They provide precise 100-A positive and 50-A negative current limiting during high di/dt transients. The flagship AOZ53228QI extends protection to multiphase voltage regulators, helping prevent failures during frequent high peak-current events.

In AI applications, fast load transients can drive current beyond the limits of standard inductors and power stages. Conventional overcurrent protection schemes may introduce response delays that allow short current overshoot events, which can stress the high-side MOSFET, particularly under inductor saturation conditions.
The SmartClamp family mitigates this risk by implementing current limiting directly within the power stage rather than relying solely on the controller, improving response to load transients that occur in tens of nanoseconds. An internal ramp-based sensing method continuously monitors inductor current in real time, enabling cycle-by-cycle current clamping instead of reacting after fault conditions develop. Cycle-by-cycle control reduces the likelihood of inductor saturation and MOSFET overstress during AI-style burst loads.
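Conceptually, cycle-by-cycle clamping terminates the high-side on-time within the same switching cycle once the sensed inductor current crosses the limit, rather than waiting for a controller-level fault response. The short simulation below is a simplified, generic illustration of that behavior during a load step; the component values, the 100-A clamp, and the ideal-inductor model are assumptions, not a model of the AOS devices.

```python
# Simplified, generic illustration of cycle-by-cycle current limiting in a
# synchronous buck power stage. All values are assumptions for illustration.

v_in, v_out, L = 12.0, 1.0, 100e-9   # input V, output V, inductance H (assumed)
f_sw = 1e6                           # switching frequency, Hz (assumed)
dt = 1e-9                            # simulation time step, s
i_limit = 100.0                      # positive current clamp, A
duty = 0.9                           # controller saturated during a load step (assumed)

i_l = 0.0
peaks = []
for cycle in range(10):
    elapsed, t_on = 0.0, duty / f_sw
    # High-side on: current ramps up, but the power stage ends the on-time
    # as soon as the sensed inductor current reaches the clamp level.
    while elapsed < t_on and i_l < i_limit:
        i_l = min(i_l + (v_in - v_out) / L * dt, i_limit)
        elapsed += dt
    peaks.append(i_l)
    # Low-side on for the rest of the cycle: current ramps back down.
    while elapsed < 1.0 / f_sw:
        i_l = max(i_l - v_out / L * dt, 0.0)
        elapsed += dt

print("Per-cycle peaks (A):", [round(p, 1) for p in peaks])
print(f"Maximum peak: {max(peaks):.1f} A, held at the {i_limit:.0f}-A clamp")
```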
SmartClamp devices, including the AOZ53228QI, AOZ53262QI, and AOZ53263QI, are available in production quantities with a 12-week lead time. The AOZ53228QI is priced at $1.40 each in lots of 1000 units.
The post Protected DrMOS ICs enable fast AI current limiting appeared first on EDN.