Feed collector
Mux switches deliver wide-bandwidth signal paths

A pair of 2:1 multiplexer/1:2 demultiplexer switches from Toshiba support PCIe 6.0 and USB4 Version 2.0 interfaces with bandwidths up to 34 GHz. The TDS5C212MX and TDS5B212MX are designed for reliable switching of high-speed differential signals in servers, industrial testers, robots, and PCs.

Manufactured using Toshiba’s TarfSOI (Toshiba advanced RF SOI) process, the TDS5C212MX and TDS5B212MX achieve typical differential 3-dB bandwidths of 34 GHz and 29 GHz, respectively. These wide bandwidths help suppress signal waveform distortion and improve reliability in high-speed data transmission.
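As a rough first-order rule of thumb (not from Toshiba's datasheets), a 3-dB bandwidth maps to a 10–90% rise time via t_r ≈ 0.35/f_3dB, which gives a feel for the edge rates these switches can pass:

```python
def rise_time_ps(f_3db_ghz: float) -> float:
    """Approximate 10-90% rise time (ps) from a 3-dB bandwidth (GHz),
    assuming a single-pole (first-order) channel response."""
    return 0.35 / f_3db_ghz * 1e3  # 0.35/GHz gives ns; x1000 -> ps

for part, bw_ghz in (("TDS5C212MX", 34.0), ("TDS5B212MX", 29.0)):
    print(f"{part}: ~{rise_time_ps(bw_ghz):.1f} ps")
```

For the 34-GHz part this works out to roughly 10 ps, comfortably fast for PCIe 6.0 symbol edges; the real channel is not strictly first-order, so treat this as an estimate only.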

The switches differ in their pin layouts. The TDS5C212MX minimizes signal path length to reduce reflections and losses, improving high-speed signal integrity. The TDS5B212MX retains the same pin layout as conventional products. Both devices operate over a temperature range of -40°C to +125°C and are now shipping.
Toshiba Electronic Devices & Storage
The post Mux switches deliver wide-bandwidth signal paths appeared first on EDN.
Micron ships 245-TB data center SSD

Micron Technology’s 245-TB 6600 ION SSD boosts rack-scale storage density for data centers and AI infrastructure. Now shipping, the company describes it as the industry’s highest-capacity commercially available SSD. Built with Micron’s G9 QLC NAND in an E3.L form factor, it requires 82% fewer racks than equivalent HDD-based deployments, while reducing power and cooling needs for large-scale, data-intensive workloads.

Micron lab testing showed significant gains over HDD-based systems. For AI workloads, the 245-TB Micron 6600 ION SSD achieved up to 84 times better energy efficiency, 8.6 times faster preprocessing, and 29 times lower latency. For object storage, it delivered up to 435 times better throughput per watt and 96 times faster time to first byte.
For 1-EB deployments, Micron says the drive requires 1.9 times less energy than HDD-based systems, reducing annual CO2 emissions by 438 metric tons and saving 921 MWh of energy. The drive consumes up to 30 W of peak power, about half that of comparable-capacity HDD deployments, supporting data center sustainability initiatives.
The 245-TB Micron 6600 ION SSD will be on display at Dell Technologies World 2026, May 18–21, 2026.
The post Micron ships 245-TB data center SSD appeared first on EDN.
4D vision platform enhances perimeter monitoring

Eyeonic Vista from SiLC is a high-resolution 4D vision system that accurately detects and classifies small targets at distances exceeding 1 km. Designed for mission-critical applications including perimeter security, counter-UAS operations, and maritime monitoring, Vista identifies humans, animals, vehicles, drones, and unauthorized vessels in complex environments. The system is also suited for protecting sensitive infrastructure such as airports, borders, power stations, and military assets.

The 8-channel vision system uses 1550-nm FMCW LiDAR to generate a data-rich point cloud, while dynamic Region of Interest (RoI) scaling enhances resolution for improved clarity and responsiveness. Micro-Doppler velocity data enables motion-based analytics for rapid threat identification. The system features angular resolution down to 8 mdeg (0.008°)—about twice as fine as human vision—and dual polarization for remote material identification. It is also resistant to jamming and crosstalk in multi-sensor environments.
Housed in an all-weather, IP65-rated enclosure, Vista operates with no performance impact in ambient light up to 100 klux (bright sunlight), as well as in cloudy or dusty conditions.
SiLC will display the Eyeonic Vista at XPONENTIAL 2026 from May 12–14, 2026. For more information, email contact@silc.com or connect with SiLC on LinkedIn.
The post 4D vision platform enhances perimeter monitoring appeared first on EDN.
32-Gbps redriver improves in-vehicle connectivity

The PI3EQX32904Q automotive four-channel redriver from Diodes optimizes signal integrity for smart cockpits combining ADAS, infotainment systems, and instrument clusters into a single unit. Designed for GPU+CPU SoCs, it supports data rates up to 32 Gbps for high-speed PCIe 5.0, SAS-4, and CXL interfaces.

The linear redriver is rate and coding agnostic without interfering with link setup. Four independent differential channels allow configuration of receiver equalization, output swing, and flat gain through an I2C interface. Designers can tune signal performance across various physical media and system configurations with minimal firmware overhead. In addition, the ability to extend PCB trace lengths helps reduce intersymbol interference.
Built using a 0.13-µm SiGe BiCMOS process, the PI3EQX32904Q delivers robust data transmission with high linearity and low jitter. It operates from a 3.3-V supply across a -40°C to +85°C temperature range. The device complies with Modern Standby (S0 Low Power Idle) requirements, consuming less than 5 mW in deep standby while maintaining readiness for rapid wakeup.
Prices for the PI3EQX32904Q start at $4.84 each in 1000-piece quantities.
The post 32-Gbps redriver improves in-vehicle connectivity appeared first on EDN.
Qualcomm advances Snapdragon mid-range tiers

Qualcomm’s Snapdragon 6 Gen 5 and Snapdragon 4 Gen 5 bring strong performance and extended battery life to the company’s mobile platforms. Both introduce Smooth Motion UI for more responsive device interactions and smoother navigation. Compared to the previous generation, Snapdragon 6 Gen 5 delivers 20% faster app launches and 18% less screen stutter, while Snapdragon 4 Gen 5 provides 45% faster app launches and 25% less screen stutter.

Snapdragon 6 Gen 5 adds AI-powered camera features and the Qualcomm Adaptive Performance Engine 4.0 to support extended gaming sessions. With 21% higher GPU performance, the platform enables responsive everyday interactions and richer graphics, backed by improved power efficiency plus 5G and Wi-Fi 7 connectivity.
Snapdragon 4 Gen 5 extends dual-SIM 5G connectivity and improved gaming features to entry-level smartphones. The platform delivers 77% higher GPU performance and supports 90-fps gameplay, a first for the Snapdragon 4 series.
Based on a Kryo CPU and Adreno GPU, both platforms are expected to power commercial devices in the second half of 2026 from global OEMs including Honor, OPPO, realme, and Redmi.
The post Qualcomm advances Snapdragon mid-range tiers appeared first on EDN.
Made some custom joystick caps for Arduino modules
Started designing a few joystick cap styles for KY-023/Arduino joystick modules and thought they turned out pretty nice. If you want the model: [link] [comments]
ΔVbe thermometer is switchable between °C and °F

Ordinary bipolar junction transistors can sometimes be precision sensors.
When you think of precision components, you usually don’t (and probably shouldn’t) think of general-purpose bipolar junction transistors. Are GP BJTs cheap and versatile? Unquestionably yes. But are their characteristics, current gain, bias voltage, etc., precise and predictable to a fraction of a percent? Sadly (and maybe even laughably) no. But not entirely so. A dramatic exception is the ΔVbe effect, in which ordinary small signal BJTs can function in simple circuits as 0.1% precision absolute temperature sensors, as shown in an earlier Design Idea.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The ΔVbe effect depends solely on the ratio of applied currents, independent of their absolute magnitudes. It has an amplitude of 1/5050 volts per Kelvin and 1/9090 volts per Rankine per current ratio decade. Figure 1 shows how this simple math can be exploited to turn most any 3¾-digit digital multimeter with a 300mV range into a versatile and accurate 0.1° resolution thermometer switchable between Celsius and Fahrenheit scales:

Figure 1 Switch U1a and current mirror Q2Q3 apply an excitation current ratio of 10.23:1 to the 9-sensor transistor string. The string is tapped at 5 x 200uV/°C = 1mV/°C and 9 x 111uV/°F = 1mV/°F.
Here’s how it works. Multivibrator U1b and switch U1a drive current mirror Q2Q3 with a square wave current signal. Its two states have a precise ratio of 10^1.01 = 10.23:1. The current mirror applies this signal to the 9-transistor temperature sensing string. There, the ΔVbe effect causes each transistor to develop 200uV per Kelvin and 111uV per Rankine, summing to 1mV/K at the 5-transistor tap and 1mV/°R at the 9-transistor tap.
The S1a section of the DPDT switch S1 allows appropriate tap selection for the desired temperature scale. Meanwhile, the S1b section selects the appropriate Z1 derived 0° offset: 273mV for Celsius and 460mV for Fahrenheit. The D1R6 dummy load balances the currents passed by the two sides of the U1a switch, equalizing its Ron voltage losses. Current mirror lovers will no doubt notice that the Q2Q3 mirror, consisting as it does of unmatched transistors with no emitter degeneration, probably lacks an accurate gain ratio. But that’s okay. It doesn’t need one.
Remember that the ΔVbe effect depends solely on the ratio of applied currents and is unaffected by their absolute magnitudes. So the mirror’s gain can vary over a wide range without significantly affecting temperature measurement accuracy. V+ can likewise wander harmlessly from 7 to 20 volts. A simple 9 volt battery will therefore work well and, since the total current draw is less than 2mA, will last for hundreds of hours of continuous operation.
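The slopes quoted above follow directly from the ΔVbe relation ΔVbe = (kT/q)·ln(ratio); a quick sketch checking the arithmetic from physical constants (the numbers are derived here, not taken from the circuit description):

```python
import math

K_OVER_Q = 1.380649e-23 / 1.602176634e-19  # Boltzmann k / charge q, approx 86.17 uV/K

ratio = 10.23                        # excitation current ratio (10**1.01)
dvbe_K = K_OVER_Q * math.log(ratio)  # delta-Vbe per transistor, per kelvin
dvbe_R = dvbe_K / 1.8                # per degree Rankine (1 K = 1.8 degR)

print(f"per transistor: {dvbe_K*1e6:.0f} uV/K, {dvbe_R*1e6:.0f} uV/degR")
print(f"5-transistor tap: {5*dvbe_K*1e3:.2f} mV/K   (Celsius slope)")
print(f"9-transistor tap: {9*dvbe_R*1e3:.2f} mV/degR (Fahrenheit slope)")
# S1b subtracts the scale zeros: 0 degC = 273.15 K and 0 degF = 459.67 degR,
# hence the ~273 mV and ~460 mV offsets derived from Z1.
```

Five transistors at ~200 uV/K and nine at ~111 uV/°R each land within 0.2% of the ideal 1 mV/degree slopes, which is why no calibration trim is needed.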
Multivibrator U1b provides asymmetrical ~7kHz timing for synchronous sensor excitation and precision AC signal rectification by U1c. Asterisked resistors should be ±0.1% precision types to preserve accuracy.
Yes. Those ordinary dime-a-throw GP BJTs are really that good.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- ΔVbe thermometer outputs 1mV/°C without calibration or op amps
- ΔVbe + DMM = Celsius, Kelvin, Fahrenheit, and Rankine thermometer
- BJT is accurate sensor for absolute temperature in Kelvin and Rankine
- Temperature compensation with a simple resistance temperature detector
- A temperature-compensated, calibration-free anti-log amplifier
The post ΔVbe thermometer is switchable between °C and °F appeared first on EDN.
Microchip Technology Launches Single-Pair Ethernet PHYs with Integrated Time and Security Functions
Microchip’s LAN878x and LAN888x PHY families enable secure, scalable and deterministic Ethernet connectivity for automotive and industrial systems. Microchip Technology announces the launch of the LAN878x and LAN888x families of Single Pair Ethernet (SPE) PHY transceivers. Available in 100BASE-T1, 1000BASE-T1 and dual-speed 100/1000BASE-T1 variants, the devices are designed to deliver secure, reliable and scalable Ethernet connectivity for automotive and other mission-critical applications.
The LAN878x and LAN888x PHYs integrate hardware-based MACsec security compliant with IEEE 802.1AE-2018, providing frame-level confidentiality, data integrity and replay protection without adding system latency or software complexity. Native Time-Sensitive Networking (TSN) support enables deterministic, low-latency communication required for ADAS, zonal gateways and safety-critical control networks.
The LAN878x and LAN888x families go beyond security and performance by delivering the latest functional safety engineered for ISO 26262 ASIL-B systems. Advanced on-chip diagnostics and link monitoring increase visibility, accelerate fault detection and support stronger system-level safety mechanisms than traditional SPE PHY solutions.
To simplify platform scalability and design reuse, the LAN878x and LAN888x families offer pin-compatible SKUs across 100BASE-T1 and 1000BASE-T1 variants, as well as SGMII and RGMII host interfaces. This compatibility allows designers to reuse existing hardware designs while scaling network bandwidth to meet evolving performance requirements.
“OEMs need a clear and efficient path to scale Ethernet performance as vehicle networks evolve,” said Charlie Forni, corporate vice president of Microchip’s networking and connectivity business unit. “The LAN878x and LAN888x families allow teams to reuse designs while supporting higher data rates and stronger security. By integrating MACsec directly in the PHY, we help designers enhance network protection without added system complexities.”
The LAN878x family includes LAN8781, LAN8781M, LAN8782 and LAN8782M, while the LAN888x family includes LAN8881, LAN8881M, LAN8882, LAN8882M, LAN8883, LAN8883M, LAN8884 and LAN8884M. Devices with the “M” suffix support MACsec security. All devices are designed for high reliability, with a maximum junction temperature of 150°C, supporting Automotive Grade 1 operating conditions (-40°C to +125°C).
Beyond automotive, the LAN878x and LAN888x families also support a wide range of industrial and mission-critical applications, including industrial automation, robotics, avionics and other systems that require deterministic Ethernet communication. The LAN878x and LAN888x PHY transceivers are supported by comprehensive hardware evaluation platforms; SGMII, USB and PCIe plug-in boards; and Linux software drivers.
The post Microchip Technology Launches Single-Pair Ethernet PHYs with Integrated Time and Security Functions appeared first on ELE Times.
Malta Government Venture Capital approves co-investment in Quinas
Nuvoton Launches NuML Studio: Tool to Build and Deploy AI on Microcontrollers
Nuvoton Technology, a leading global semiconductor provider, has announced the launch of “NuML Studio”. This is a graphic user interface (UI) tool designed specifically for machine learning applications on Nuvoton microcontrollers (MCUs). NuML Studio helps developers solve common problems when building Endpoint AI, providing a clear path from real-time data collection to automatic firmware project generation. This allows developers to focus more on improving their AI models and creating new applications.
Easy Setup to Start AI Development Immediately
NuML Studio is optimised for Windows and provides a “ready-to-use” version that does not require users to install Python or complex software libraries, making it significantly easier for beginners to set up their development environment.
This “download and run” approach allows developers to bypass tedious setup processes and immediately access intuitive project management features, where they can easily create projects for data collection, machine learning deployment, or a combination of both to facilitate rapid iteration.
Strong Data Collection and Conversion Features
Accurate data is the foundation of any AI model. NuML Studio provides full support for sensors and automatic data conversion:
- Support for Many Sensors: It supports 3-axis G-sensors, 16-kHz audio, and image collection using the NuMaker-M55M1 board.
- Automatic Data Conversion: Collected raw data can be converted into standard formats like .csv (for sensors), .wav (for audio), or .jpg (for images) with one click.
- Cloud Integration: With built-in machine learning platform API support, developers can upload their collected data directly to a cloud platform for model training.
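As an illustration only (the exact NuML Studio file layouts are not documented here), the sensor-to-.csv conversion step amounts to a transform along these lines:

```python
import csv
import io

def gsensor_to_csv(samples):
    """Serialise raw (x, y, z) accelerometer tuples to CSV text.
    Hypothetical column layout, not NuML Studio's actual format."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["x", "y", "z"])  # header row
    writer.writerows(samples)         # one row per 3-axis sample
    return buf.getvalue()

print(gsensor_to_csv([(0.01, -0.98, 0.02), (0.0, -1.01, 0.03)]))
```

The point of automating this in the tool is simply that collected raw buffers land in a format training platforms already ingest, with no hand-written glue code.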
Automatic Project Generation for Fast Deployment
The core technology of NuML Studio can automatically create firmware projects that follow industry standards:
- Support for Popular Models: It works with the TensorFlow Lite Micro (TFLM) framework and supports quantised models.
- Automatic Firmware Creation: It can automatically generate Keil MDK and VS Code CMSIS projects for tasks like image classification, object detection, and keyword spotting (KWS).
- Hardware Optimisation: For chips with an Arm Ethos-U55 NPU (such as the NuMicro M55M1), it provides special library support to get the best performance from the hardware.
With the launch of NuML Studio, Nuvoton reinforces its commitment to lowering the barriers to Endpoint AI development. By providing an integrated path from real-time data collection to automatic firmware generation, this tool allows developers to bypass complex environment setups and focus on AI model optimisation and innovation. Supporting industry standards and providing specialised library support for hardware acceleration on chips like the Arm Ethos-U55 NPU, NuML Studio empowers developers to deliver high-performance intelligent edge applications with unprecedented speed.
The post Nuvoton Launches NuML Studio: Tool to Build and Deploy AI on Microcontrollers appeared first on ELE Times.
Multibus Controller with Automotive Ethernet Expansion for Faster, Parallel Communication Testing
The Multibus Controller 6281 is a field-proven test system from GÖPEL electronic offering a wide range of applications and high flexibility. GÖPEL electronic launches a new generation of multibus communication controllers under “Series 62”. This Series 62 test device is specifically tailored to the needs and transmission standards of the automotive sector and is widely used in that field. The new architecture offers users up to 16 independent bus interfaces for CAN, CAN-FD, FlexRay, Automotive Ethernet and LIN. With the new expansion, the devices in the 62 Series become even more powerful: in addition to support for 100BASE-T1 and 1000BASE-T1, users now have access to up to eight independent 10BASE-T1S interfaces. This allows the Multibus Controller 6281 to cover all communication technologies currently used in vehicles with just a single hardware unit. Numerous configuration and application options are available to ensure optimal adaptation to the device under test or the test task.
The new Series 62 is suited for use in restbus simulations as well as test and flash programming of complex ECUs. With the advent of Ethernet in automotive electronics, the demand for reliable and high-performance test solutions for these communication networks is growing. With a bandwidth of 10 Mbit/s and the use of a multidrop topology, which allows a large number of nodes to be connected to a single twisted-pair cable, 10BASE-T1S competes directly with established vehicle buses such as CAN, CAN FD, CAN XL, LIN, and FlexRay. The PLCA (Physical Layer Collision Avoidance) arbitration, as specified in the standard, prevents collisions and thus enables full utilisation of the available bandwidth with low latency. The new expansion for the 62 Series, featuring up to eight independent 10BASE-T1S interfaces for the first time, now allows for the simultaneous parallel testing of up to eight DUTs. This pays off above all in significant time savings during endurance tests. In addition to its eight communication interfaces, the highly flexible 6281 Multibus Controller offers eight digital I/O interfaces (4 digital inputs, 4 digital outputs). The communication interfaces can be configured in a wide variety of ways, including as Automotive Ethernet, CAN FD, LIN, K-Line, or FlexRay interfaces.
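PLCA's collision avoidance can be pictured as a round-robin of transmit opportunities. The sketch below is a deliberate idealisation of the IEEE 802.3cg mechanism (beacons, burst limits and yield timers are omitted), showing only why no two nodes ever transmit at once:

```python
def plca_cycle(nodes_with_data, node_count):
    """One idealised PLCA cycle: each node ID gets a transmit opportunity
    in ascending order; nodes with nothing queued yield immediately.
    Returns the transmit order -- always collision-free by construction."""
    return [n for n in range(node_count) if n in nodes_with_data]

# Nodes 0, 3 and 5 of an 8-node 10BASE-T1S segment have frames queued:
print(plca_cycle({0, 3, 5}, node_count=8))
```

Because access is granted by turn rather than won by contention, the full 10 Mbit/s can be used under load with bounded, predictable latency, which is the property the endurance-test use case relies on.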
The Multibus Controller 6281 functions as a standalone embedded test system with its own real-time environment, in which the communication and simulation logic is executed entirely on the hardware. The host connection via PCIe, PXIe, or Ethernet is used for parameterisation, configuration, and result transmission. The G PCIe 6281 and G PXIe 6281 variants have been developed as plug-in cards for a PCIe or PXIe bus system, respectively; the G CAR 6281 is a standalone device with Gigabit Ethernet (1 GigE) as the host interface.
Two connector variants are available to the user for connecting the DUT to the communication interfaces: RJ Point Five or HARTING ix Industrial. The feature set of the Multibus Controller 6281 is identical for both variants, regardless of the connector type. The digital inputs and outputs of the Multibus Controller 6281 are located on a Molex connector. The Gigabit Ethernet host interface, which is also available on the PCIe and PXIe cards, supports PTP (Precision Time Protocol) and can therefore be used to synchronise multiple cards and devices.
The post Multibus Controller with Automotive Ethernet Expansion for Faster, Parallel Communication Testing appeared first on ELE Times.
Making the case for MRAM in software-defined vehicles

Implementation of software-defined vehicles (SDV) has changed significantly over the past decade, but the need for in-field upgrades and new features has remained constant. As OEMs move from legacy architectures to SDVs, they will need to add new capabilities over time to deliver a more differentiated user experience.
At the same time, ECU consolidation and the need for more headroom for future use cases are increasing compute demands. Microcontroller unit (MCU) manufacturers have responded by moving to smaller process nodes, enabling higher performance in a more cost-effective way.
However, while MCUs are evolving fast, memory—embedded non-volatile memory (eNVM) in particular—is being left behind. In many cases, memory still relies on outdated specifications from the days of distributed architectures, where most ECUs never saw firmware upgrades after release.
This creates an important question for the auto industry. If vehicles are expected to receive in-field bug fixes, performance improvements and entirely new features over time, is your SDV’s eNVM ready?
How SDVs shape the customer experience
Before we answer this question, it’s important to consider how SDVs shape the customer experience. Faster over-the-air (OTA) updates mean less vehicle downtime, lower power use during the update and a lower battery state-of-charge (SoC) requirement while starting an OTA upgrade process. When issues are found, the ability to deliver fixes quickly reduces customer frustration and improves confidence in the vehicle.
With the right technology, SDVs can also offer a lower total cost of ownership while improving the overall experience. But for that to be achieved, it needs to be easier for SDVs to support larger applications, more data-heavy features and ongoing software updates without driving up memory needs or development cost.
In short, the platform must support frequent improvements without getting in the way of the vehicle’s long-term success, and that means more efficient eNVM is required.
Specifications that need to be addressed
There are two eNVM specifications that impact user experience and total cost of ownership: endurance and write speed (write time and erase time).
Endurance determines how many times memory can be rewritten over the life of the vehicle. In today’s MCUs, code memory is often rated for about 1,000 write cycles, while data memory, which is usually a very small subset of total eNVM, is typically rated for around 100,000. Those limits have changed very little over time, even though SDVs now depend on frequent updates, bug fixes and new features delivered long after launch. As update demands increase, higher endurance becomes essential.
Page size also matters. Many eNVMs only support page-level writes, which means updating even a single byte requires rewriting an entire page, typically between 64 and 512 bytes. That increases wear, wastes memory and adds software complexity, especially when page sizes are large.
For SDVs to support more data-intensive use cases over time, memory needs to offer much higher endurance along with smaller page sizes or byte-level write capability. That reduces memory overhead, simplifies software design, and makes future upgrades far more practical.
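The write-amplification effect of page-level writes is easy to quantify; a minimal sketch using illustrative page sizes from the 64–512-byte range mentioned above:

```python
def bytes_rewritten(update_bytes: int, page_size: int) -> int:
    """Bytes physically rewritten when an update must go through whole pages."""
    pages = -(-update_bytes // page_size)  # ceiling division
    return pages * page_size

print(bytes_rewritten(1, 512))   # 1-byte change, 512-byte pages: whole page rewritten
print(bytes_rewritten(100, 64))  # 100-byte change spans two 64-byte pages
print(bytes_rewritten(1, 1))     # byte-writable memory: no amplification
```

A 1-byte parameter change on a 512-byte-page memory thus consumes 512x the wear of a byte-addressable one, which is why smaller pages or byte-level writes directly extend effective endurance.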
Impact of temperature on endurance and retention
In eNVM technologies, temperature matters just as much as raw endurance and retention. That’s because eNVM hardware can degrade when writes happen at high temperatures, which is a real concern for vehicles receiving OTA updates. A car parked in extreme summer heat may still need a firmware update, for example, and customers should not have to worry about whether the vehicle is too hot to update safely. For SDVs, memory needs to deliver reliable endurance and data retention across the full operating temperature range over the life of the vehicle.
Write and erase times also have a direct impact on the customer experience. In many eNVM technologies, memory must be erased before it can be rewritten, and erase times are often even longer than write times.
That may have been acceptable when programming mainly happened in the factory, but in SDVs it can mean longer update times, more downtime, and added software constraints during normal vehicle operation. Faster writes and eliminating the need for erase cycles would make updates quicker, reduce performance penalties, and simplify software design.
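A back-of-the-envelope model shows how much the erase step dominates update time. All timing numbers below are hypothetical placeholders, not vendor figures:

```python
def program_time_ms(image_bytes, page_bytes, write_us_per_page, erase_us_per_page=0.0):
    """Idealised time to program an image: optional per-page erase, then write."""
    pages = -(-image_bytes // page_bytes)  # ceiling division
    return pages * (erase_us_per_page + write_us_per_page) / 1000

image = 4 * 1024 * 1024  # 4-MiB firmware image (assumed size)
flash_ms = program_time_ms(image, 512, write_us_per_page=100, erase_us_per_page=3000)
mram_ms = program_time_ms(image, 512, write_us_per_page=5)  # no erase step
print(f"erase-then-write: {flash_ms:.0f} ms, write-only: {mram_ms:.0f} ms")
```

Even with generous assumptions, eliminating the erase cycle shrinks programming time by orders of magnitude, which translates directly into shorter OTA windows and less vehicle downtime.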
Why MRAM stands out
When comparing embedded memory options for SDVs, including embedded charge-trap flash, PCM, RRAM and MRAM, the key question is which technology can best support frequent updates, long life, and a good customer experience. MRAM stands out because it addresses many of the limitations of older embedded non-volatile memory technologies. It can support scalable memory sizes at smaller technology nodes like 16 nm, needed for zonal, domain and consolidated vehicle architectures, while remaining practical from a cost and reliability standpoint.
MRAM works differently from traditional memory technologies. Instead of storing data through charge, material movement or phase change, it stores data using magnetic states. That matters because magnetic storage does not wear out in the same way as many other non-volatile memory approaches.
As a result, MRAM is well suited for the durability, update frequency, and long-term reliability that SDVs require. MRAM supports 20 years of data retention at 150°C ambient temperature, well within the requirements of today’s automotive applications.

Figure 1 MRAM stands out because it addresses several limitations of older embedded non-volatile memory technologies. Source: NXP
A solution that meets the needs of SDVs
MRAM is also a strong fit for SDVs because it combines very high endurance with write speeds up to 20 times faster than traditional embedded memory. Unlike many other embedded memory technologies, it does not require an erase step before writing, which helps enable much faster updates and reduces vehicle downtime.
Its endurance is high enough to support frequent firmware updates and heavy data writes (up to 1 million cycles) with little or no need for wear leveling in most use cases. Just as importantly, its performance and retention remain reliable over the full life of the vehicle.
These strengths also make new SDV use cases more practical. MRAM, with its fast write and high endurance capabilities, can enable new use cases, especially data-intensive applications such as AI and machine learning. It also makes it easier to load software dynamically based on how the vehicle is being used.
In short, MRAM-based MCUs help automakers deliver faster updates, support more flexible software architectures, and add new capabilities over time without compromising the customer experience.

Figure 2 MRAM-based MCUs like the S32K5 help automakers deliver faster updates, support more flexible software architectures, and add new capabilities. Source: NXP
Put simply, underlying hardware technology, and eNVM in particular, must evolve to unlock the true potential of SDVs. Memory write speed and endurance can be make-or-break capabilities for a competitive user experience and the ability to roll out new features consistently. MRAM, with its crucial improvements to endurance and speed, is the eNVM technology truly capable of bringing this SDV vision to life.
Sachin Gupta is senior director of sales and business development for automotive at NXP Semiconductors.
Related Content
- MRAM debut cues memory transition
- The Rise of MRAM in the Automotive Market
- MRAM, ReRAM Eye Automotive-Grade Opportunities
- MRAM Maker Everspin Remembers Its Industrial Roots
- Architectural opportunities propel software-defined vehicles forward
The post Making the case for MRAM in software-defined vehicles appeared first on EDN.
Rohde & Schwarz Presents its Advanced Solutions for Power Electronics Testing at PCIM Expo 2026
Rohde & Schwarz presents its latest test and measurement solutions for power electronics systems at PCIM Expo 2026 in Nuremberg. The showcase highlights cutting-edge approaches that address the most demanding challenges of today’s wide-bandgap devices and drivetrain applications. Advanced testing and characterisation enable engineers to improve the performance, efficiency and reliability of SiC- and GaN-based power electronics in applications such as AI data centres, renewable energy and e-mobility.
“Power electronics are at the core of the energy and mobility transition. With our latest test and measurement solutions, we enable engineers to fully understand, optimise, and validate the performance of next-generation SiC and GaN devices, bringing higher efficiency, reliability and speed to their designs,” says Philipp Weigell, Vice President Market Segment Industry, Components, Research & Universities at Rohde & Schwarz.
3-Phase Analysis for Power and Drives
Rohde & Schwarz introduces the new 3-phase power analysis option (R&S MXO-K333) for the R&S MXO 3, 4, 5/5C series oscilloscopes. This option turns an MXO oscilloscope into a best-in-class waveform analysis tool for in-depth 3-phase AC power characterisation. The solution simplifies total power results, multiphase AC power qualities, harmonic standard testing and distortion measurements, while keeping the original transient waveforms in view for instant root-cause tracing.

At the PCIM Expo, visitors can explore how a guided setup wizard maps the eight available channels of the MXO 5 to three voltage and three current probes, validates the wiring (supporting two-wire, three-wire, and four-wire configurations: 2V2A, 3V3A, 3VN3A) and automatically configures the instrument. After the setup is complete, the software delivers per-cycle power calculations, RMS values, power factor, active and reactive power, total power, phasor/vector visualisation and harmonic/THD analysis. All of this is in line with IEC 61000-3-2, and the results are presented with power-waveform views, harmonic spectra, FFT statistics and phasor diagrams. Combined with the MXO oscilloscopes’ waveform view and trigger capabilities, the 3-phase power analysis option enables engineers to see beyond a conventional power analyser’s statistical data, supporting the debugging of power distribution, converters and industrial power systems.
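To make concrete what per-cycle power, RMS, power factor and THD calculations involve, here is a single-phase sketch over one 50-Hz cycle with synthetic waveforms. All numbers are invented for illustration; the instrument performs the equivalent math per phase on live acquisitions:

```python
import numpy as np

fs, f0, n = 100_000, 50, 2000  # 100 kS/s, 50 Hz mains, exactly one full cycle
t = np.arange(n) / fs
v = 325 * np.sin(2 * np.pi * f0 * t)                   # ~230 Vrms phase voltage
i = (14.1 * np.sin(2 * np.pi * f0 * t - np.pi / 6)     # lagging fundamental
     + 2.0 * np.sin(2 * np.pi * 3 * f0 * t))           # plus a 3rd harmonic

p_active = np.mean(v * i)                              # active power, W
v_rms = np.sqrt(np.mean(v ** 2))
i_rms = np.sqrt(np.mean(i ** 2))
pf = p_active / (v_rms * i_rms)                        # true power factor

spec = np.abs(np.fft.rfft(i)) * 2 / n                  # single-sided current spectrum
thd = np.sqrt(np.sum(spec[2:] ** 2)) / spec[1]         # bin spacing fs/n = 50 Hz -> bin 1 = fundamental

print(f"P = {p_active:.0f} W, PF = {pf:.3f}, THD(i) = {thd * 100:.1f} %")
```

Note that the true power factor (about 0.86 here) captures both the 30° phase lag and the harmonic distortion, which is exactly the distinction a waveform-based analysis can expose that a single displacement-factor readout cannot.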
Electric Drivetrain Efficiency
PCIM Expo visitors can also experience the LMG671 power analyser at the Rohde & Schwarz booth, as it demonstrates how to reliably measure efficiency and quantify losses in modern electric drivetrain power electronics. The analyser provides continuous, high-precision power measurement with exceptional dynamic range, delivering output-to-input efficiency for the drivetrain under test while simultaneously capturing the motor’s mechanical power through direct speed and torque sensing. Inverter output is examined in three distinct bandwidths: fundamentals, harmonics and wideband power, to extract derived values such as high-frequency losses. All relevant readings and graphs are presented on a dedicated CUSTOM menu, giving users a complete view of the system’s performance at a single glance. The LMG671 is now part of Rohde & Schwarz’s power electronics portfolio, following the recent acquisition of ZES ZIMMER Electronic Systems GmbH.
Double-Pulse Testing of SiC Automotive Power Modules (Hitachi Energy RoadPak)
In another setup, Rohde & Schwarz, together with PE-Systems, showcases an automated double-pulse tester that delivers precise, repeatable measurements while improving consistency and efficiency in power electronics characterisation. The solution provides fast insights into the dynamic switching behaviour of power modules, with automated parameter extraction that reduces human error and accelerates development.
The demo unit is based on the rack-optimised, next-generation MXO58 oscilloscope from Rohde & Schwarz. Leveraging its eight channels in combination with the R&S RT-ZISO isolated probing system, it enables stable and accurate double-pulse testing for SiC and GaN devices in a fully automated environment.
The post Rohde & Schwarz Presents its Advance Solutions for Power Electronics Testing at PCIM Expo 2026 appeared first on ELE Times.
Cute picture that is driven by my first custom PCB + ESP32-S3!
This is a HUB75 LED panel driven by an ESP32-S3! I am using this library as a driver and this library for the animation. I have a custom animation that I loaded up, and I plan to make more animations from here and turn it into a pet game! I'm looking for cute names though!
24GHz Radar Module Output!
After some hardware fixes as usual, some glorious resoldering, and a few lines of 1's and 0's later... I have data! With my DAC working, both receive antennas are working and able to read the I/Q outputs! Very pleased, and now to turn this into something more understandable!
Cohu receives orders for testing GaN power devices for AI data centers
Award to an FMF student for the best research paper
By decision of the Presidium of the National Academy of Sciences of Ukraine, Denys Oleksandrovych Lohvynov, a second-year master's student in the Insurance and Financial Mathematics programme, has been awarded the Academy's prize for the best student research paper for his work "Some Properties in a Stochastic Model with an Alpha Scheme and a Trend".
The next EDA wave: Lessons from DATE 2026

The Design, Automation & Test in Europe (DATE) Conference in Verona in April showed an EDA research community moving with real momentum into the AI era. The strongest signal from the conference was that AI is no longer a separate topic sitting beside chip design. It’s now shaping the workloads, architectures, design tools, verification flows, and security questions that will define the next phase of semiconductor development.
The conference was upbeat because the direction is clear and the opportunity is substantial. Heterogeneous compute, RISC-V, chiplets, AI accelerators, agentic EDA, structured specifications, and AI-assisted verification are all advancing at the same time. The challenge is significant: these systems must be designed, verified, secured, and trusted.
However, DATE 2026 showed that the research community is already developing the methods, tools, and flows needed to address that challenge. For Europe, the opportunity is not simply to catch up with existing EDA capability, but to help lead the next wave of AI-enabled, verification-aware, and trustworthy semiconductor design.
This also re-frames the European sovereignty discussion. There are three distinct parts: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability. Processor design is being opened up by RISC-V, chiplets and design-enablement platforms.
EDA-tool sovereignty is more challenging, because advanced-node signoff depends on mature commercial tools, process design kits (PDKs), verification IP, and foundry-qualified flows. The strongest near-term opportunity is therefore AI+EDA capability: building the methods, benchmarks, structured specifications, secure deployment models, and verification-aware AI flows that will define the next generation of design automation.
Conference context and program messaging
DATE 2026 provided a useful view of where semiconductor research is moving as AI, EDA, advanced architectures, verification, and security begin to converge. DATE is not the Design and Verification Conference (DVCon), with its practitioner focus on verification methodology and commercial tool use. It is not the Design Automation Conference (DAC), where the exhibition floor is often as important as the technical program. DATE is research-led, with the papers, focus sessions, tutorials, keynotes, and European project sessions forming the center of gravity.
That research-led character matters. It makes DATE a good indicator of topics that are still forming before they become mature tool flows or standard industry practice. The commercial ecosystem was clearly present with Cadence, Synopsys, Qualcomm, Arm, Infineon, Micron, STMicroelectronics, Tenstorrent, Axelera AI, Real Intent, and others represented in the sponsor list. However, the tone was less product marketing and more ecosystem development.
A key takeaway was that AI is now present as a workload, a design objective, a design-assistance technology, a verification challenge, and a security risk. The individual sessions differed in emphasis, but the common thread was the same: the next phase of EDA will be shaped by the interaction between AI, heterogeneous architectures, verification, security, and trust.
DATE 2026 included 325 regular papers and 91 extended abstracts across the D, A, T, and E research tracks, giving 416 accepted research-track outputs. The program offered 41 main technical sessions, three Best Paper Award candidate sessions, two late-breaking-result sessions, five keynotes, 10 focus sessions, five workshops, four special-day sessions, and four embedded tutorials.
The geographical distribution was also significant. DATE is European in location and culture, but the research paper base reflects the global semiconductor research map. By country-affiliated appearances in technical paper-like entries, China, plus Hong Kong and Taiwan, accounted for 247 appearances, or 44.7%. Europe, plus the U.K., accounted for 133 appearances, or 24.1%. The U.S. accounted for 94 appearances, or 17.0%, with the rest of the world at 79 appearances, or 14.2%.
Using a broad classification, roughly 27% of the technical country-affiliated appearances had some AI connection. Most of this was hardware-for-AI: accelerators, compute-in-memory, large language model (LLM) inference, edge AI, photonic AI, and memory systems. AI applied directly to verification, test generation, fuzzing, coverage, and security validation was closer to 2.7% of the technical program. This shows that AI-for-verification is currently a specialist part of the larger AI-related research activity.
AI as workload, tool, and risk
The opening keynote from Luc Van den hove of IMEC set out one of the central pressures: AI models are evolving faster than semiconductor hardware development, creating bottlenecks that require new compute architectures and semiconductor platforms. In this framing, AI is a key demand changing the hardware stack.
At DATE, AI appeared in at least four roles. First, AI is the workload driving accelerators, compute-in-memory structures, chiplets, photonics, and energy-efficient platforms. Focus session FS02, “Architecting Intelligence: Next-Gen Acceleration for Generative AI,” and TS36, “Next-Generation Memory Systems for AI Acceleration,” were good examples. Second, AI is becoming a design tool, with LLMs, agents, and machine-learning-driven optimization applied to routing, placement, high-level synthesis (HLS), analog sizing, and lithography simulation.
Third, AI is changing the research process itself, as raised in the keynote from Rolf Drechsler from the University of Bremen in Germany. Fourth, AI is becoming a security and trust problem, since AI-guided verification tools can introduce risks such as adversarial manipulation, biased test generation, or hallucinated security guidance.
The AI-for-EDA message was therefore not simply that AI will automate design. AI can accelerate parts of the design and verification flow, while also creating systems and flows that are harder to verify, explain, secure, and certify.
Future platforms are heterogeneous
A repeated architectural message was that general-purpose compute is no longer sufficient for many target workloads. The program included strong content on AI accelerators, chiplets, 3D integrated circuits (3DICs), RISC-V vector extensions, photonic accelerators, quantum and high-performance computing (HPC) coupling, FPGAs, HLS, open chiplet ecosystems, and domain-specific processors.
RISC-V appeared prominently as an instruction set architecture (ISA), especially where openness, customization, and verification interact. It appeared in open-source cores such as Rocket, BOOM, XiangShan, and Snitch; in vector-extension verification; in processor fuzzing; in cryptographic accelerators; in SoC security; and in lightweight wearable systems. This is consistent with the broader RISC-V opportunity: the open ISA makes architectural experimentation easier but also increases the verification responsibility for each implementation and extension.
The Cornell University keynote by Zhiru Zhang on accelerator design and programming described a familiar problem. Performance and efficiency increasingly come from specialized accelerators, but there is a widening gap between how accelerators are designed and how they are programmed. That gap is an EDA problem because the design flow needs to connect architecture, programmability, verification, performance estimation, and software maintenance.
Quantum was also treated as a systems topic rather than as isolated physics. Nvidia’s Bettina Heim described NVQLink, coupling GPU real-time processing with quantum processors at sub-microsecond latency for error correction and control. A focus session covered MLIR, QIR, and intermediate representations for quantum-classical compilation. The point for EDA is that quantum-classical systems create problems in compilation, control, architecture, timing, and verification. These are recognizable EDA problems, even if the devices are different.
Verification and security become first-class constraints
The third major theme was the convergence of verification, security, and open ecosystems. DATE treated verification and security as part of the same scalability problem. As systems become heterogeneous, AI-driven, and assembled from chiplets and third-party IP, functional correctness, security validation, explainability, and certification overlap.
The verification panel (session FS06), “Who Is Best Suited to Do Verification?”, framed rising re-spin rates and verification cost as a central industry problem. The hardware security focus session argued that heterogeneous SoCs, CPUs, and accelerators create attack surfaces too large for manual analysis alone. The AI-for-verification thread included coverage-driven test generation, reinforcement-learning-guided concolic (concrete + symbolic) testing, processor fuzzing, SystemVerilog Assertion (SVA) generation, and agentic security assistants.
This work is still emerging. However, the direction is clear: verification needs more automation, and that automation needs to be tool-grounded, measurable, and traceable. A generated test, assertion, or security recommendation is useful only if it connects to coverage, formal results, simulation results, reviewable traces, or other engineering evidence.
AI for RTL and verification
A specialist but important cluster was AI applied to register-transfer level (RTL) design. This included LLM-generated Verilog, closed-loop RTL repair, multi-agent design flows, HLS-to-RTL pathways, and benchmark contamination. The volume was small, roughly 2-3% of the technical program, but the technical direction was important.
The field has moved beyond asking an LLM to write Verilog. The more credible flows put verification in the loop: generate RTL, run checks, estimate correctness, repair errors, and preserve equivalence. VeriBToT (session TS07.1) combined self-decoupling and self-verification for modular Verilog generation.
EstCoder (TS22.9) used a collaborative agent flow with a functional-estimation agent scoring generated RTL before accepting or correcting it, reporting up to 9% improvement in RTL correctness. LiveVerilogEval (TS29.1) addressed benchmark contamination and found that LLM performance degraded significantly on dynamically generated benchmarks, suggesting that static benchmarks may have overstated current capability.
The sponsor-hosted executive session on EDA agentic AI provided a useful industrial view. Agentic AI is moving from demonstrations toward production flows with RTL checking and fixing, specification-to-testbench construction, and synthesis-to-GDSII flows identified as near-term use cases. The hard constraints are determinism, traceability, IP protection, tool integration, and signoff confidence.
The AI-for-verification work showed the same pattern. The best examples were closed-loop and tool-grounded, not generic prompt-based test generation. ChatTest (TS22.7) used a multi-agent LLM framework with a structured Verification Description Language (VDL), retrieval-augmented generation, and a coverage-feedback loop. It reported 1.46 times higher toggle coverage, 2.28 times higher line coverage, and a 24.23% improvement in functional coverage across 20 complex RTL designs. CoverAssert (TS40.10) used functional coverage feedback to guide LLM generation of SVAs.
Processor fuzzing gave another important example. SimFuzz (TS40.6) applied similarity-guided block-level mutation to RISC-V processors Rocket, BOOM, and XiangShan, finding 17 bugs, including 14 previously unknown issues and seven CVE-assigned bugs affecting decode and memory units.
This connects to GhostWrite (CVE-2024-44067), a RISC-V vector-extension implementation bug in T-Head XuanTie processors that allowed unprivileged code to write arbitrary physical memory. GhostWrite was not a side channel. It was a direct architectural flaw, and the mitigation required disabling the vector extension. This is a strong argument for structure-aware, security-directed processor verification.
AI-generated SVAs also appeared in several forms. PALM (TS07.6) investigated LLM assistance for valid SVAs in security verification, while CoverAssert (TS40.10) and AutoAssert (TS02.5) extended coverage-driven, LLM-assisted assertion generation with formal verification feedback. This seems to be the right near-term role for AI in formal verification: assistant and accelerator, not replacement for formal reasoning.
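To make the bounded-response pattern behind many such assertions concrete, here is a toy trace checker for a property of the shape `req |-> ##[1:3] ack` (every request must be acknowledged within one to three cycles). This is purely illustrative Python with an invented function name; real SVA evaluation is performed by simulators and formal tools:

```python
def check_bounded_response(req, ack, lo=1, hi=3):
    """Toy check of an SVA-style property `req |-> ##[lo:hi] ack`:
    every cycle where req is high must be followed by ack within
    lo..hi cycles. Returns the list of failing request cycles.
    Attempts whose response window runs past the end of the trace
    are counted as failures here, which is a simplification.
    """
    failures = []
    for t, r in enumerate(req):
        if r and not any(ack[t + d] for d in range(lo, hi + 1)
                         if t + d < len(ack)):
            failures.append(t)
    return failures
```

A trace where every request is acknowledged within the window yields an empty failure list; a request with no in-window acknowledgment reports its cycle index.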
Agentic AI and structured specifications
The most visible emerging pattern in AI+EDA was the movement from single-shot prompting to multi-agent, tool-grounded, feedback-driven workflows. The focus session (FS07) “From Concept to Silicon: End-to-End Agentic AI for Smarter Chip Design” made this explicit across HLS, physical design, testing, and security verification.
The Nexus paper presented by PrimisAI (session SD01.1) framed the engineering problem clearly. EDA workflows need reliability and traceability, and weak coordination and unstructured communication are bottlenecks for multi-agent deployment. Nexus reported 100% accuracy on RTL generation tasks in VerilogEval-Human and nearly 30% average power savings on Verilog-to-routing (VTR) timing-optimization benchmarks.
AgenticTCAD (TS41.6) applied a natural-language-driven multi-agent system to TCAD device optimization, achieving IRDS-2024 specifications for a 2-nm nanosheet FET within 4.2 hours, compared with 7.1 days for human experts.
The key point is that agentic AI wraps the LLM in an engineering process. The flow is to decompose the task, call EDA tools, inspect reports, measure quality, repair errors, and iterate. That is much more credible for EDA than single-shot generation.
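The decompose-check-repair loop described above can be sketched in a few lines. Everything here is hypothetical scaffolding: the three callables stand in for an LLM generation agent, a real EDA checking tool, and a repair agent, none of which correspond to any specific product:

```python
def agentic_repair_loop(generate, check, repair, max_iters=5):
    """Minimal sketch of a tool-grounded agentic flow: generate a
    candidate, run a tool check, and repair from the tool report
    until it passes or the iteration budget is exhausted.
    """
    candidate = generate()
    for _ in range(max_iters):
        report = check(candidate)       # tool-grounded feedback
        if report["passed"]:
            return candidate, report
        candidate = repair(candidate, report)
    return candidate, check(candidate)  # final state after budget
```

The point of the structure is that acceptance is decided by the tool report, not by the model's own confidence, which is what makes the flow measurable and traceable.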
Two structured-language examples were also notable. The first was the Universal Specification Format (USF), a formal specification format (in session TS24.3) with unambiguous syntax and semantics able to generate formal properties and behavioral simulation models.
The second was Verification Description Language (VDL), introduced in ChatTest (TS22.7), which captures I/O pins, timing, functional coverage targets, stimulus sequences, checkpoints, and boundary conditions in YAML format. These are early signs that AI-assisted EDA may require better intermediate representations, not only better models.
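As an illustration of what such a structured specification might capture, here is a hypothetical VDL-style spec for a small FIFO, written as a Python dictionary. The field names mirror the categories listed above but are invented for this sketch; they are not the published VDL schema:

```python
# Hypothetical VDL-style verification spec for a synchronous FIFO.
# Field names are illustrative, modeled on the categories ChatTest's
# VDL is described as capturing (I/O pins, timing, coverage targets,
# stimulus sequences, checkpoints, boundary conditions).
fifo_spec = {
    "module": "sync_fifo",
    "io_pins": {"clk": "input", "rst_n": "input",
                "wr_en": "input", "rd_en": "input",
                "full": "output", "empty": "output"},
    "timing": {"clock": "clk", "reset": "rst_n", "reset_active": "low"},
    "coverage_targets": ["full asserted", "empty asserted",
                         "simultaneous read/write"],
    "stimulus_sequences": ["fill to full", "drain to empty",
                           "random mixed read/write"],
    "checkpoints": ["no write accepted when full",
                    "no read accepted when empty"],
    "boundary_conditions": ["wraparound at depth-1 to depth"],
}
```

A structured intermediate form like this gives the generation loop something checkable: each coverage target and checkpoint can be mapped to concrete tests and assertions rather than left implicit in a prompt.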
European sovereignty and the next EDA wave
European semiconductor sovereignty was an undercurrent throughout DATE 2026, but it needs to be framed carefully. Semiconductor sovereignty is not about becoming completely self-sufficient; it is about reducing dangerous dependencies on other geographic regions. There are three separate questions: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability.
For processor design, the RISC-V activity, open chiplet ecosystems, and European design-enablement platforms such as the cloud-based makeChip point in a useful direction. However, first-time-right silicon still depends heavily on commercial EDA tools, qualified PDKs, verified sign-off flows, and high-quality verification IP. A realistic sovereignty strategy means sovereign design competence and secure access to the best tools, not an assumption that open-source-only flows can replace the commercial stack.
For EDA-tool sovereignty, open-source EDA is strategically valuable for education, research, reproducibility, open PDKs, and lowering barriers for small and medium-sized enterprises (SMEs) and universities. However, advanced-node commercial EDA represents decades of investment in algorithms, foundry relationships, sign-off maturity, and customer regression infrastructure.
The keynote by Luca Benini of the University of Bologna in Italy on democratizing silicon made the positive case for broader access, but open-source EDA is a supplemental and educational platform, not a near-term substitute for advanced-node sign-off.
The more compelling opportunity is next-generation AI+EDA. DATE 2026 showed that this area is still being defined. Agentic workflows, AI-assisted verification, coverage-driven test generation, formal and SVA support, open benchmarks, trustworthy AI, structured specification languages, and secure on-premise model deployment are all areas where research depth and engineering discipline matter.
Europe has strong universities, safety-critical application domains, active RISC-V and open-source hardware communities, and the policy framework of the EU Chips Act. That combination is well suited to shaping the next EDA wave.
The strongest form of European sovereignty is not isolation. It is capability: the ability to design, verify, secure, and understand the systems Europe depends on. DATE 2026 showed that the future of EDA will require new compute architectures, better verification methods, more automation, structured specifications, stronger security methods, and a clear understanding of where AI helps and where it introduces new risks. These are exactly the problems that a research-led, ecosystem-focused community should be able to address.
DATE 2026 was therefore not just an EDA conference about AI in chip design. It was a useful indication that the next phase of EDA will be defined by the interaction between AI, heterogeneous architectures, verification, security, and trust. The next step is to turn these research directions into reliable engineering flows.
Simon Davidmann is an EDA industry pioneer and serial technology entrepreneur with over 40 years of experience in simulation and verification. His career has been instrumental in shaping the foundational languages and methodologies used in modern chip design, particularly those now critical for AI/ML hardware. Davidmann was the co-creator of Superlog, the language that became SystemVerilog. After selling Imperas to Synopsys in 2023 and serving as Synopsys VP for Processor Modeling & Simulation, he left Synopsys and is now an AI + EDA researcher at the University of Southampton, UK.
Editor’s Note
DATE 2026 was held on 20-22 April 2026 in Verona, Italy. The conference program is available at https://www.date-conference.com/programme. Specific session labels are noted in parentheses in the article.
Related Content
- AI features in EDA tools: Facts and fiction
- EDA’s big three compare AI notes with TSMC
- What is the EDA problem worth solving with AI?
- DAC 2025: Towards Multi-Agent Systems In EDA
- How AI-based EDA will enable, not replace the engineer
The post The next EDA wave: Lessons from DATE 2026 appeared first on EDN.