Feed aggregator
Multibus Controller with Automotive Ethernet Expansion for Faster, Parallel Communication Testing
The Multibus Controller 6281 is a field-proven test system from GÖPEL electronic offering a wide range of applications and high flexibility. GÖPEL electronic has launched a new generation of multibus communication controllers under “Series 62”. The Series 62 test devices are specifically tailored to the needs and transmission standards of the automotive sector and are widely used in that field. The architecture offers users up to 16 independent bus interfaces for CAN, CAN FD, FlexRay, Automotive Ethernet and LIN. With the new expansion, the devices in the 62 Series become even more powerful: in addition to support for 100BASE-T1 and 1000BASE-T1, users now have access to up to eight independent 10BASE-T1S interfaces. This allows the Multibus Controller 6281 to cover all communication technologies currently used in vehicles with just a single hardware unit. Numerous configuration and application options are available to ensure optimal adaptation to the device under test or the test task.
The new Series 62 is suited for use in restbus simulations as well as for testing and flash programming of complex ECUs. With the advent of Ethernet in automotive electronics, the demand for reliable and high-performance test solutions for these communication networks is growing. With a bandwidth of 10 Mbit/s and a multidrop topology that allows a large number of nodes to be connected to a single twisted-pair cable, 10BASE-T1S competes directly with established vehicle buses such as CAN, CAN FD, CAN XL, LIN, and FlexRay. PLCA (Physical Layer Collision Avoidance) arbitration, as specified in the standard, prevents collisions and thus enables full utilisation of the available bandwidth with low latency. The new expansion for the 62 Series, featuring up to eight independent 10BASE-T1S interfaces for the first time, now allows up to eight DUTs to be tested simultaneously in parallel. This pays off above all in significant time savings during endurance tests. In addition to its eight communication interfaces, the highly flexible 6281 Multibus Controller offers eight digital I/O interfaces (four digital inputs, four digital outputs). The communication interfaces can be configured in a wide variety of ways, including as Automotive Ethernet, CAN FD, LIN, K-Line, or FlexRay interfaces.
The Multibus Controller 6281 functions as a standalone embedded test system with its own real-time environment, in which the communication and simulation logic is executed entirely on the hardware. The host connection via PCIe, PXIe, or Ethernet is used for parameterisation, configuration, and result transmission. The G PCIe 6281 and G PXIe 6281 variants have been developed as plug-in cards for a PCIe or PXIe bus system, respectively; the G CAR 6281 is a standalone device with Gigabit Ethernet (1 GigE) as the host interface.
Two connector variants are available to the user for connecting the DUT to the communication interfaces: RJ Point Five or HARTING ix Industrial. The feature set of the Multibus Controller 6281 is identical for both variants, regardless of the connector type. The digital inputs and outputs of the Multibus Controller 6281 are located on a Molex connector. The Gigabit Ethernet host interface, which is also available on the PCIe and PXIe cards, supports PTP (Precision Time Protocol) and can therefore be used to synchronise multiple cards and devices.
Making the case for MRAM in software-defined vehicles

Implementation of software-defined vehicles (SDV) has changed significantly over the past decade, but the need for in-field upgrades and new features has remained constant. As OEMs move from legacy architectures to SDVs, they will need to add new capabilities over time to deliver a more differentiated user experience.
At the same time, ECU consolidation and the need for more headroom for future use cases are increasing compute demands. Microcontroller unit (MCU) manufacturers have responded by moving to smaller process nodes, enabling higher performance in a more cost-effective way.
However, while MCUs are evolving fast, memory—embedded non-volatile memory (eNVM) in particular—is being left behind. In many cases, memory still relies on outdated specifications from the days of distributed architectures, where most ECUs never saw firmware upgrades after release.
This creates an important question for the auto industry. If vehicles are expected to receive in-field bug fixes, performance improvements and entirely new features over time, is your SDV’s eNVM ready?
How SDVs shape the customer experience
Before we answer this question, it’s important to consider how SDVs shape the customer experience. Faster over-the-air (OTA) updates mean less vehicle downtime, lower power use during the update and a lower battery state-of-charge (SoC) requirement while starting an OTA upgrade process. When issues are found, the ability to deliver fixes quickly reduces customer frustration and improves confidence in the vehicle.
With the right technology, SDVs can also offer a lower total cost of ownership while improving the overall experience. But for that to be achieved, it needs to be easier for SDVs to support larger applications, more data-heavy features and ongoing software updates without driving up memory needs or development cost.
In short, the platform must support frequent improvements without getting in the way of the vehicle’s long-term success, and that means more efficient eNVM is required.
Specifications that need to be addressed
There are two eNVM specifications that impact user experience and total cost of ownership: endurance and write speed (write time and erase time).
Endurance determines how many times memory can be rewritten over the life of the vehicle. In today’s MCUs, code memory is often rated for about 1,000 write cycles, while data memory, which is usually a very small subset of total eNVM, is typically rated for around 100,000. Those limits have changed very little over time, even though SDVs now depend on frequent updates, bug fixes and new features delivered long after launch. As update demands increase, higher endurance becomes essential.
Page size also matters. Many eNVMs only support page-level writes, which means updating even a single byte requires rewriting an entire page, typically between 64 and 512 bytes. That increases wear, wastes memory and adds software complexity, especially when page sizes are large.
For SDVs to support more data-intensive use cases over time, memory needs to offer much higher endurance along with smaller page sizes or byte-level write capability. That reduces memory overhead, simplifies software design, and makes future upgrades far more practical.
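To make that overhead concrete, here is a minimal Python sketch comparing the physical bytes rewritten by a page-based eNVM and by byte-addressable memory for the same small update. The numbers are purely illustrative and not tied to any specific MCU or device.

```python
def bytes_written(update_bytes, page_bytes, byte_addressable=False):
    """Physical bytes rewritten for a small, aligned in-place update.

    Page-based eNVM must rewrite every page the update touches;
    byte-addressable memory (e.g. MRAM) rewrites only what changed.
    Values are illustrative, not taken from any datasheet.
    """
    if byte_addressable:
        return update_bytes
    pages_touched = -(-update_bytes // page_bytes)  # ceiling division
    return pages_touched * page_bytes

# Updating one calibration byte with 512-byte pages rewrites 512 bytes...
print(bytes_written(1, 512))                          # -> 512
# ...while a byte-addressable memory rewrites exactly 1 byte.
print(bytes_written(1, 512, byte_addressable=True))   # -> 1
```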
Impact of temperature on endurance and retention
In eNVM technologies, temperature matters just as much as raw endurance and retention. That’s because eNVM hardware can degrade when writes happen at high temperatures, which is a real concern for vehicles receiving OTA updates. A car parked in extreme summer heat may still need a firmware update, for example, and customers should not have to worry about whether the vehicle is too hot to update safely. For SDVs, memory needs to deliver reliable endurance and data retention across the full operating temperature range over the life of the vehicle.
Write and erase times also have a direct impact on the customer experience. In many eNVM technologies, memory must be erased before it can be rewritten, and erase times are often even longer than write times.
That may have been acceptable when programming mainly happened in the factory, but in SDVs it can mean longer update times, more downtime, and added software constraints during normal vehicle operation. Faster writes and the elimination of erase cycles would make updates quicker, reduce performance penalties, and simplify software design.
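As a rough illustration of why the erase step matters, the sketch below estimates reprogramming time for a firmware image with and without a per-page erase. The image size, page size and per-page timings are hypothetical placeholders, not figures from any datasheet.

```python
def reprogram_time_s(image_bytes, page_bytes, t_write_page_s, t_erase_page_s=0.0):
    """Crude OTA reprogramming-time estimate (hypothetical timings).

    Erase-before-write memories pay an erase per page on top of the write;
    a write-in-place memory such as MRAM sets the erase term to zero.
    """
    pages = -(-image_bytes // page_bytes)  # ceiling division
    return pages * (t_erase_page_s + t_write_page_s)

# 4 MB image, 4 KB pages, assumed 3 ms page write and 10 ms page erase:
erase_then_write = reprogram_time_s(4 * 1024 * 1024, 4096, 0.003, 0.010)  # ~13.3 s
write_in_place = reprogram_time_s(4 * 1024 * 1024, 4096, 0.003)           # ~3.1 s
```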
Why MRAM stands out
When comparing embedded memory options for SDVs, including embedded charge-trap flash, PCM, RRAM and MRAM, the key question is which technology can best support frequent updates, long life, and a good customer experience. MRAM stands out because it addresses many of the limitations of older embedded non-volatile memory technologies. It can support scalable memory sizes at smaller technology nodes like 16 nm, needed for zonal, domain and consolidated vehicle architectures, while remaining practical from a cost and reliability standpoint.
MRAM works differently from traditional memory technologies. Instead of storing data through charge, material movement or phase change, it stores data using magnetic states. That matters because magnetic storage does not wear out in the same way as many other non-volatile memory approaches.
As a result, MRAM is well suited for the durability, update frequency, and long-term reliability that SDVs require. MRAM supports 20 years of data retention at 150°C ambient temperature, well within the requirements of today’s automotive applications.

Figure 1 MRAM stands out because it addresses several limitations of older embedded non-volatile memory technologies. Source: NXP
A solution that meets the needs of SDVs
MRAM is also a strong fit for SDVs because it combines very high endurance with fast write speeds, up to 20 times faster than traditional embedded memory. Unlike many other embedded memory technologies, it does not require an erase step before writing, which helps enable much faster updates and reduces vehicle downtime.
Its endurance of up to 1 million cycles is high enough to support frequent firmware updates and heavy data writes with little or no need for wear leveling in most use cases. Just as importantly, its performance and retention remain reliable over the full life of the vehicle.
These strengths also make new SDV use cases more practical. MRAM, with its fast write and high endurance capabilities, can enable new use cases, especially data-intensive applications such as AI and machine learning. It also makes it easier to load software dynamically based on how the vehicle is being used.
In short, MRAM-based MCUs help automakers deliver faster updates, support more flexible software architectures, and add new capabilities over time without compromising the customer experience.

Figure 2 MRAM-based MCUs like the S32K5 help automakers deliver faster updates, support more flexible software architectures, and add new capabilities. Source: NXP
Put simply, the underlying hardware technology, and eNVM in particular, must evolve to unlock the true potential of SDVs. Memory write speed and endurance can be make-or-break capabilities for a competitive user experience and the ability to roll out new features consistently. MRAM, with its crucial improvements to endurance and speed, is the eNVM technology truly capable of bringing this SDV vision to life.
Sachin Gupta is senior director of sales and business development for automotive at NXP Semiconductors.
Related Content
- MRAM debut cues memory transition
- The Rise of MRAM in the Automotive Market
- MRAM, ReRAM Eye Automotive-Grade Opportunities
- MRAM Maker Everspin Remembers Its Industrial Roots
- Architectural opportunities propel software-defined vehicles forward
Rohde & Schwarz Presents its Advanced Solutions for Power Electronics Testing at PCIM Expo 2026
Rohde & Schwarz presents its latest test and measurement solutions for power electronics systems at PCIM Expo 2026 in Nuremberg. The showcase highlights cutting-edge approaches that address the most demanding challenges of today’s wide-bandgap devices and drivetrain applications. Advanced testing and characterisation enable engineers to improve the performance, efficiency and reliability of SiC- and GaN-based power electronics in applications such as AI data centres, renewable energy and e-mobility.
“Power electronics are at the core of the energy and mobility transition. With our latest test and measurement solutions, we enable engineers to fully understand, optimise, and validate the performance of next-generation SiC and GaN devices, bringing higher efficiency, reliability and speed to their designs,” says Philipp Weigell, Vice President Market Segment Industry, Components, Research & Universities at Rohde & Schwarz.
3-Phase Analysis for Power and Drives
Rohde & Schwarz introduces the new 3-phase power analysis option (R&S MXO-K333) for the R&S MXO 3, 4, 5/5C series oscilloscopes. This option turns an MXO oscilloscope into a best-in-class waveform analysis tool for in-depth 3-phase AC power characterisation. The solution simplifies total-power measurements, multiphase AC power-quality analysis, harmonic standards testing and distortion measurements, while keeping the original transient waveforms in view for instant root-cause tracing. At the PCIM Expo, visitors can explore how a guided setup wizard maps the eight available channels of the MXO 5 to three voltage and three current probes, validates the wiring (supporting two-wire, three-wire, and four-wire configurations: 2V2A, 3V3A, 3VN3A) and automatically configures the instrument. After the setup is complete, the software delivers per-cycle power calculations, RMS values, power factor, active and reactive power, total power, phasor/vector visualisation and harmonic/THD analysis. All of this is in line with IEC 61000-3-2, and the results are presented with power-waveform views, harmonic spectra, FFT statistics and phasor diagrams. Because the 3-phase power analysis option runs on an oscilloscope, it retains full waveform view and trigger capabilities, enabling engineers to see beyond a conventional power analyser’s statistical data and supporting the debugging of power distribution, converters and industrial power systems.
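For readers who want to relate these quantities to raw samples, the following Python sketch shows one simple way to compute per-cycle RMS, active/reactive/apparent power, power factor and a current THD figure from a sampled voltage/current pair. It is a numerical illustration of the terms named above, not the instrument’s algorithm; the line frequency and windowing are assumptions.

```python
import numpy as np

def per_cycle_metrics(v, i, fs, f_line=50.0):
    """Per-cycle AC power metrics from one sampled voltage/current pair.

    v, i   : voltage (V) and current (A) sample arrays
    fs     : sample rate in Hz
    f_line : assumed line frequency in Hz
    """
    n = int(round(fs / f_line))      # samples in one line cycle
    v, i = v[:n], i[:n]              # analyse a single whole cycle

    v_rms = np.sqrt(np.mean(v**2))
    i_rms = np.sqrt(np.mean(i**2))
    p = np.mean(v * i)                        # active power, W
    s = v_rms * i_rms                         # apparent power, VA
    q = np.sqrt(max(s**2 - p**2, 0.0))        # reactive power, var
    pf = p / s if s else float("nan")         # power factor

    # Current harmonics for a simple THD figure (bin 1 is the
    # fundamental because the window is exactly one cycle)
    mag = np.abs(np.fft.rfft(i))
    thd_i = np.sqrt(np.sum(mag[2:]**2)) / mag[1]

    return {"Vrms": v_rms, "Irms": i_rms, "P": p, "S": s,
            "Q": q, "PF": pf, "THDi": thd_i}
```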
Electric Drivetrain Efficiency
PCIM Expo visitors can also experience the LMG671 power analyser at the Rohde & Schwarz booth, where it demonstrates how to reliably measure efficiency and quantify losses in modern electric drivetrain power electronics. The analyser provides continuous, high-precision power measurement with exceptional dynamic range, delivering output-to-input efficiency for the drivetrain under test while simultaneously capturing the motor’s mechanical power through direct speed and torque sensing. Inverter output is examined in three distinct bandwidths, fundamental, harmonics and wideband power, to extract derived values such as high-frequency losses. All relevant readings and graphs are presented on a dedicated CUSTOM menu, giving users a complete view of the system’s performance at a single glance. The LMG671 is now part of Rohde & Schwarz’s power electronics portfolio, following the recent acquisition of ZES ZIMMER Electronic Systems GmbH.
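The efficiency figure described here follows directly from torque and speed sensing. A minimal sketch of that arithmetic, with illustrative values only, looks like this:

```python
import math

def drivetrain_efficiency(p_electrical_in_w, torque_nm, speed_rpm):
    """Mechanical output power, efficiency and losses from speed/torque sensing."""
    omega = speed_rpm * 2.0 * math.pi / 60.0   # shaft speed, rad/s
    p_mech_out = torque_nm * omega             # mechanical power, W
    efficiency = p_mech_out / p_electrical_in_w
    losses = p_electrical_in_w - p_mech_out
    return p_mech_out, efficiency, losses

# Assumed example: 50 kW electrical input, 150 N*m at 3000 rpm
# -> about 47.1 kW mechanical output, ~94 % efficiency, ~2.9 kW of losses
p_out, eta, loss = drivetrain_efficiency(50_000, 150, 3000)
```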
Double-Pulse Testing of SiC Automotive Power Modules (Hitachi Energy RoadPak)
In another setup, Rohde & Schwarz, together with PE-Systems, showcases an automated double-pulse tester that delivers precise, repeatable measurements while improving consistency and efficiency in power electronics characterisation. The solution provides fast insights into the dynamic switching behaviour of power modules, with automated parameter extraction that reduces human error and accelerates development.
The demo unit is based on the rack-optimised, next-generation MXO58 oscilloscope from Rohde & Schwarz. Leveraging its eight channels in combination with the R&S RT-ZISO isolated probing system, it enables stable and accurate double-pulse testing for SiC and GaN devices in a fully automated environment.
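The core parameter extracted from a double-pulse capture is switching energy, the integral of instantaneous power across a switching edge. The sketch below is a minimal illustration of that post-processing step under stated assumptions about the capture format; it is not PE-Systems’ or Rohde & Schwarz’s implementation.

```python
import numpy as np

def switching_energy_j(t, v_ds, i_d, t_start, t_stop):
    """Energy dissipated across one switching edge of a double-pulse capture.

    t            : time vector, s
    v_ds, i_d    : device voltage (V) and current (A) samples
    t_start/stop : integration window bounding the turn-on or turn-off edge
    """
    mask = (t >= t_start) & (t <= t_stop)
    tt = t[mask]
    p = v_ds[mask] * i_d[mask]          # instantaneous power, W
    # Trapezoidal integration of power over the window -> energy in joules
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(tt)))
```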
Cohu receives orders for testing GaN power devices for AI data centers
ФМФ (Faculty of Physics and Mathematics) student receives award for best research paper
By decision of the Presidium of the National Academy of Sciences of Ukraine, Denys Oleksandrovych Lohvynov, a second-year master’s student in the “Insurance and Financial Mathematics” programme, has been awarded the Academy of Sciences of Ukraine prize for the best student research paper for his work “Some Properties of a Stochastic Model with an Alpha-Scheme and a Trend”.
The next EDA wave: Lessons from DATE 2026

The Design, Automation & Test in Europe (DATE) Conference in Verona in April showed an EDA research community moving with real momentum into the AI era. The strongest signal from the conference was that AI is no longer a separate topic sitting beside chip design. It’s now shaping the workloads, architectures, design tools, verification flows, and security questions that will define the next phase of semiconductor development.
The conference was upbeat because the direction is clear and the opportunity is substantial. Heterogeneous compute, RISC-V, chiplets, AI accelerators, agentic EDA, structured specifications, and AI-assisted verification are all advancing at the same time. The challenge is significant: these systems must be designed, verified, secured, and trusted.
However, DATE 2026 showed that the research community is already developing the methods, tools, and flows needed to address that challenge. For Europe, the opportunity is not simply to catch up with existing EDA capability, but to help lead the next wave of AI-enabled, verification-aware, and trustworthy semiconductor design.
This also re-frames the European sovereignty discussion. There are three distinct parts: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability. Processor design is being opened up by RISC-V, chiplets and design-enablement platforms.
EDA-tool sovereignty is more challenging, because advanced-node signoff depends on mature commercial tools, process design kits (PDKs), verification IP, and foundry-qualified flows. The strongest near-term opportunity is therefore AI+EDA capability: building the methods, benchmarks, structured specifications, secure deployment models, and verification-aware AI flows that will define the next generation of design automation.
Conference context and program messaging
DATE 2026 provided a useful view of where semiconductor research is moving as AI, EDA, advanced architectures, verification, and security begin to converge. DATE is not the Design and Verification Conference (DVCon), with its practitioner focus on verification methodology and commercial tool use. It is not the Design Automation Conference (DAC), where the exhibition floor is often as important as the technical program. DATE is research-led, with the papers, focus sessions, tutorials, keynotes, and European project sessions forming the center of gravity.
That research-led character matters. It makes DATE a good indicator of topics that are still forming before they become mature tool flows or standard industry practice. The commercial ecosystem was clearly present with Cadence, Synopsys, Qualcomm, Arm, Infineon, Micron, STMicroelectronics, Tenstorrent, Axelera AI, Real Intent, and others represented in the sponsor list. However, the tone was less product marketing and more ecosystem development.
A key takeaway was that AI is now present as a workload, a design objective, a design-assistance technology, a verification challenge, and a security risk. The individual sessions differed in emphasis, but the common thread was the same: the next phase of EDA will be shaped by the interaction between AI, heterogeneous architectures, verification, security, and trust.
DATE 2026 included 325 regular papers and 91 extended abstracts across the D, A, T, and E research tracks, giving 416 accepted research-track outputs. The program offered 41 main technical sessions, three Best Paper Award candidate sessions, two late-breaking-result sessions, five keynotes, 10 focus sessions, five workshops, four special-day sessions, and four embedded tutorials.
The geographical distribution was also significant. DATE is European in location and culture, but the research paper base reflects the global semiconductor research map. By country-affiliated appearances in technical paper-like entries, China, plus Hong Kong and Taiwan, accounted for 247 appearances, or 44.7%. Europe, plus the U.K., accounted for 133 appearances, or 24.1%. The U.S. accounted for 94 appearances, or 17.0%, with the rest of the world at 79 appearances, or 14.2%.
Using a broad classification, roughly 27% of the technical country-affiliated appearances had some AI connection. Most of this was hardware-for-AI: accelerators, compute-in-memory, large language model (LLM) inference, edge AI, photonic AI, and memory systems. AI applied directly to verification, test generation, fuzzing, coverage, and security validation was closer to 2.7% of the technical program. This shows that AI-for-verification is currently a specialist part of the larger AI-related research activity.
AI as workload, tool, and risk
The opening keynote from Luc Van de Hove of IMEC set out one of the central pressures: AI models are evolving faster than semiconductor hardware development, creating bottlenecks that require new compute architectures and semiconductor platforms. In this framing, AI is a key demand changing the hardware stack.
At DATE, AI appeared in at least four roles. First, AI is the workload driving accelerators, compute-in-memory structures, chiplets, photonics, and energy-efficient platforms. Focus session FS02, “Architecting Intelligence: Next-Gen Acceleration for Generative AI,” and TS36, “Next-Generation Memory Systems for AI Acceleration,” were good examples. Second, AI is becoming a design tool, with LLMs, agents, and machine-learning-driven optimization applied to routing, placement, high-level synthesis (HLS), analog sizing, and lithography simulation.
Third, AI is changing the research process itself, as raised in the keynote from Rolf Drechsler from the University of Bremen in Germany. Fourth, AI is becoming a security and trust problem, since AI-guided verification tools can introduce risks such as adversarial manipulation, biased test generation, or hallucinated security guidance.
The AI-for-EDA message was therefore not simply that AI will automate design. AI can accelerate parts of the design and verification flow, while also creating systems and flows that are harder to verify, explain, secure, and certify.
Future platforms are heterogeneous
A repeated architectural message was that general-purpose compute is no longer sufficient for many target workloads. The program included strong content on AI accelerators, chiplets, 3D integrated circuits (3DIC), RISC-V vector extensions, photonic accelerators, quantum and high-performance computing (HPC) coupling, FPGAs, HLS, open chiplet ecosystems, and domain-specific processors.
RISC-V appeared prominently as an instruction set architecture (ISA), especially where openness, customization, and verification interact. It appeared in open-source cores such as Rocket, BOOM, XiangShan, and Snitch; in vector-extension verification; in processor fuzzing; in cryptographic accelerators; in SoC security; and in lightweight wearable systems. This is consistent with the broader RISC-V opportunity: the open ISA makes architectural experimentation easier but also increases the verification responsibility for each implementation and extension.
The Cornell University keynote by Zhiru Zhang on accelerator design and programming described a familiar problem. Performance and efficiency increasingly come from specialized accelerators, but there is a widening gap between how accelerators are designed and how they are programmed. That gap is an EDA problem because the design flow needs to connect architecture, programmability, verification, performance estimation, and software maintenance.
Quantum was also treated as a systems topic rather than as isolated physics. Nvidia’s Bettina Heim described NVQLink, coupling GPU real-time processing with quantum processors at sub-microsecond latency for error correction and control. A focus session covered MLIR, QIR, and intermediate representations for quantum-classical compilation. The point for EDA is that quantum-classical systems create problems in compilation, control, architecture, timing, and verification. These are recognizable EDA problems, even if the devices are different.
Verification and security become first-class constraints
The third major theme was the convergence of verification, security, and open ecosystems. DATE treated verification and security as part of the same scalability problem. As systems become heterogeneous, AI-driven, and assembled from chiplets and third-party IP, functional correctness, security validation, explainability, and certification overlap.
The verification panel (session FS06), “Who Is Best Suited to Do Verification?”, framed rising re-spin rates and verification cost as a central industry problem. The hardware security focus session argued that heterogeneous SoCs, CPUs, and accelerators create attack surfaces too large for manual analysis alone. The AI-for-verification thread included coverage-driven test generation, reinforcement-learning-guided concolic (concrete + symbolic) testing, processor fuzzing, SystemVerilog Assertion (SVA) generation, and agentic security assistants.
This work is still emerging. However, the direction is clear: verification needs more automation, and that automation needs to be tool-grounded, measurable, and traceable. A generated test, assertion, or security recommendation is useful only if it connects to coverage, formal results, simulation results, reviewable traces, or other engineering evidence.
AI for RTL and verification
A specialist but important cluster was AI applied to register-transfer level (RTL) design. This included LLM-generated Verilog, closed-loop RTL repair, multi-agent design flows, HLS-to-RTL pathways, and benchmark contamination. The volume was small, roughly 2-3% of the technical program, but the technical direction was important.
The field has moved beyond asking an LLM to write Verilog. The more credible flows put verification in the loop: generate RTL, run checks, estimate correctness, repair errors, and preserve equivalence. VeriBToT (session TS07.1) combined self-decoupling and self-verification for modular Verilog generation.
EstCoder (TS22.9) used a collaborative agent flow with a functional-estimation agent scoring generated RTL before accepting or correcting it, reporting up to 9% improvement in RTL correctness. LiveVerilogEval (TS29.1) addressed benchmark contamination and found that LLM performance degraded significantly on dynamically generated benchmarks, suggesting that static benchmarks may have overstated current capability.
The sponsor-hosted executive session on EDA agentic AI provided a useful industrial view. Agentic AI is moving from demonstrations toward production flows with RTL checking and fixing, specification-to-testbench construction, and synthesis-to-GDSII flows identified as near-term use cases. The hard constraints are determinism, traceability, IP protection, tool integration, and signoff confidence.
The AI-for-verification work showed the same pattern. The best examples were closed-loop and tool-grounded, not generic prompt-based test generation. ChatTest (TS22.7) used a multi-agent LLM framework with a structured Verification Description Language (VDL), retrieval-augmented generation, and a coverage-feedback loop. It reported 1.46 times higher toggle coverage, 2.28 times higher line coverage, and a 24.23% improvement in functional coverage across 20 complex RTL designs. CoverAssert (TS40.10) used functional coverage feedback to guide LLM generation of SVAs.
Processor fuzzing gave another important example. SimFuzz (TS40.6) applied similarity-guided block-level mutation to RISC-V processors Rocket, BOOM, and XiangShan, finding 17 bugs, including 14 previously unknown issues and seven CVE-assigned bugs affecting decode and memory units.
This connects to GhostWrite (CVE-2024-44067), a RISC-V vector-extension implementation bug in T-Head XuanTie processors that allowed unprivileged code to write arbitrary physical memory. GhostWrite was not a side channel. It was a direct architectural flaw, and the mitigation required disabling the vector extension. This is a strong argument for structure-aware, security-directed processor verification.
AI-generated SVAs also appeared in several forms. PALM (TS07.6) investigated LLM assistance for valid SVAs in security verification, while CoverAssert (TS40.10) and AutoAssert (TS02.5) extended coverage-driven, LLM-assisted assertion generation with formal verification feedback. This seems to be the right near-term role for AI in formal verification: assistant and accelerator, not replacement for formal reasoning.
Agentic AI and structured specifications
The most visible emerging pattern in AI+EDA was the movement from single-shot prompting to multi-agent, tool-grounded, feedback-driven workflows. The focus session (FS07) “From Concept to Silicon: End-to-End Agentic AI for Smarter Chip Design” made this explicit across HLS, physical design, testing, and security verification.
The Nexus paper presented by PrimisAI (session SD01.1) framed the engineering problem clearly. EDA workflows need reliability and traceability, and weak coordination and unstructured communication are bottlenecks for multi-agent deployment. Nexus reported 100% accuracy on RTL generation tasks in VerilogEval-Human and nearly 30% average power savings on Verilog-to-routing (VTR) timing-optimization benchmarks.
AgenticTCAD (TS41.6) applied a natural-language-driven multi-agent system to TCAD device optimization, achieving IRDS-2024 specifications for a 2-nm nanosheet FET within 4.2 hours, compared with 7.1 days for human experts.
The key point is that agentic AI wraps the LLM in an engineering process. The flow is to decompose the task, call EDA tools, inspect reports, measure quality, repair errors, and iterate. That is much more credible for EDA than single-shot generation.
Two structured-language examples were also notable. The first was the Universal Specification Format (USF), a formal specification format (in session TS24.3) with unambiguous syntax and semantics able to generate formal properties and behavioral simulation models.
The second was Verification Description Language (VDL), introduced in ChatTest (TS22.7), which captures I/O pins, timing, functional coverage targets, stimulus sequences, checkpoints, and boundary conditions in YAML format. These are early signs that AI-assisted EDA may require better intermediate representations, not only better models.
European sovereignty and the next EDA wave
European semiconductor sovereignty was an undercurrent throughout DATE 2026, but it needs to be framed carefully. Semiconductor sovereignty is not about becoming completely self-sufficient; it is about reducing dangerous dependencies on other geographic regions. There are several separate questions, for example: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability.
For processor design, the RISC-V activity, open chiplet ecosystems, and European design-enablement platforms such as the cloud-based makeChip point in a useful direction. However, first-time-right silicon still depends heavily on commercial EDA tools, qualified PDKs, verified sign-off flows, and high-quality verification IP. A realistic sovereignty strategy means sovereign design competence and secure access to the best tools, not an assumption that open-source-only flows can replace the commercial stack.
For EDA-tool sovereignty, open-source EDA is strategically valuable for education, research, reproducibility, open PDKs, and lowering barriers for small and medium-sized enterprises (SMEs) and universities. However, advanced-node commercial EDA represents decades of investment in algorithms, foundry relationships, sign-off maturity, and customer regression infrastructure.
The keynote by Luca Benini of the University of Bologna in Italy on democratizing silicon made the positive case for broader access, but open-source EDA is a supplemental and educational platform, not a near-term substitute for advanced-node sign-off.
The more compelling opportunity is next-generation AI+EDA. DATE 2026 showed that this area is still being defined. Agentic workflows, AI-assisted verification, coverage-driven test generation, formal and SVA support, open benchmarks, trustworthy AI, structured specification languages, and secure on-premise model deployment are all areas where research depth and engineering discipline matter.
Europe has strong universities, safety-critical application domains, active RISC-V and open-source hardware communities, and the policy framework of the EU Chips Act. That combination is well suited to shaping the next EDA wave.
The strongest form of European sovereignty is not isolation. It is capability: the ability to design, verify, secure, and understand the systems Europe depends on. DATE 2026 showed that the future of EDA will require new compute architectures, better verification methods, more automation, structured specifications, stronger security methods, and a clear understanding of where AI helps and where it introduces new risks. These are exactly the problems that a research-led, ecosystem-focused community should be able to address.
DATE 2026 was therefore not just an EDA conference about AI in chip design. It was a useful indication that the next phase of EDA will be defined by the interaction between AI, heterogeneous architectures, verification, security, and trust. The next step is to turn these research directions into reliable engineering flows.
Simon Davidmann is an EDA industry pioneer and serial technology entrepreneur with over 40 years of experience in simulation and verification. His career has been instrumental in shaping the foundational languages and methodologies used in modern chip design, particularly those now critical for AI/ML hardware. Davidmann was the co-creator of Superlog, which became SystemVerilog. After selling Imperas to Synopsys in 2023 and serving as Synopsys VP for Processor Modeling & Simulation, he left Synopsys and is now an AI + EDA researcher at Southampton University, UK.
Editor’s Note
DATE 2026 was held on 20-22 April 2026 in Verona, Italy. The conference program is available at https://www.date-conference.com/programme. Specific session labels are noted in parentheses in the article.
Related Content
- AI features in EDA tools: Facts and fiction
- EDA’s big three compare AI notes with TSMC
- What is the EDA problem worth solving with AI?
- DAC 2025: Towards Multi-Agent Systems In EDA
- How AI-based EDA will enable, not replace the engineer
Applied Materials and TSMC partner at EPIC Center in Silicon Valley to accelerate AI scaling
Well-balanced gain, driven without pain

A subtle change to a standard circuit can enhance its usefulness—and even save a resistor.
If there were a prize for the most trivial Design Idea (DI) of the year, this one would likely be high on the shortlist (if not at the top). Most DIs involve adding components to circuits to improve them; this time we’re removing one. Circuits for line drivers, balanced or not, are ten a penny, but this variant has a surprising twist: surprising because it’s so simple and, when you look at it, obvious, though I can’t find it in any published schematic, even those from National Semiconductor’s golden days.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 presents it:

Figure 1 Resistors R1 and R2 help to set the gains of both the non-inverting and inverting stages, allowing for excellent matching of the anti-phased outputs with minimal components.
A1a is a non-inverting gain stage, utterly conventional except that its feedback network is referred to A1b’s virtual ground point. A1b is an inverting unity-gain stage, utterly conventional except that its input resistor is also A1a’s feedback network. A1a and A1b therefore work together to deliver perfectly matched anti-phase outputs (assuming perfectly matched components, of course). The gain can be set to anything above 1 (unity gain would revert the circuit to a simple buffer plus an inverting stage: nothing new).
At first glance, this circuit may look rather like part of a differential or instrumentation amplifier. But its function, as determined by the resistor ratios, is quite different. Those others have accurately-matched differential inputs; this is designed for balanced outputs.
Is that it? Yup: ’fraid so, apart from some practical details. A CR network may be needed to remove DC from the input, and any remaining imbalance could be trimmed by bleeding some current into (or out of) A1b’s inverting input. Otherwise, the circuit is stable and well-behaved, and will happily drive a transformer directly, though series matching resistors should be added, perhaps with 300R in each output line if you want to be really picky about balance.
Trimming the frequency response is messy, and should be done before the signal gets this far. Any (HF-cutting) capacitor across R1 (call it C1) needs to be matched by (1 – 1 / Gain) × C1 across R3 if the responses in both output legs are to match.
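As a quick worked example of that matching rule (the 100 pF value and gain of 4 are arbitrary choices, not part of the design):

```python
def matching_cap(c1_farads, gain):
    """Capacitor across R3 that keeps both output legs rolling off together.

    Applies the rule quoted above: C3 = (1 - 1/Gain) * C1.
    Only meaningful for gains above 1, as the circuit requires.
    """
    return (1.0 - 1.0 / gain) * c1_farads

# Assumed example: 100 pF across R1 at a gain of 4 -> 75 pF across R3
c3 = matching_cap(100e-12, 4.0)
```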
The output drive differs from device to device. Using ±15 V rails and working into 600R, LM4562s delivered 26.3 V pk-pk and KA5532s gave 24.5 V, while TL072/082s disappointed at just 13.8 V. An MCP6022 (RRIO, unlike the others) with ±2.5 V supplies clipped at 4.7 V pk-pk into 600R.
And in the real world… To paraphrase Bob Pease, “If a circuit’s never seen a soldering iron, it probably won’t work right” (although perhaps he’d make an exception for plug-in breadboards, at least at low frequencies). So, just to demonstrate that this doesn’t merely describe a simulation, Figure 2 shows it plugged in and “working right”:

Figure 2 This is how an LM4562 performs at 1 kHz with ±15 V rails and a 600R load. It is just clipping—cleanly and symmetrically—at a differential output level of 32.2 dBu.
As noted earlier, the circuit is well behaved as long as you avoid driving capacitive loads directly, as with all op-amp circuits (33–100R in series with an op-amp’s output pin is normally a good cure, limiting the peak current). Lacking any suitable audio transformers but wanting to check if such loading might cause problems, I hooked it up directly to the secondary winding of a small mains transformer, which seemed like a cruel enough (not to mention fun) test.
While the resulting >>300 V RMS output tolerated little loading, it could light a neon brightly (with its integral 220k series resistor) without affecting the distortion at the op-amps’ outputs. Although the HV output showed a nick in the waveform where the neon struck and went negative-resistance, this artifact wasn’t reflected back to the drive. Which is exactly what we’d expect, but should not take for granted.
For phase-splitting with gain (but no pain) and the ability to drive old-school 600Ω balanced lines, this circuit may be ideal. That said, there may be easier and cheaper ways of powering neons…
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- ΔVbe thermometer outputs 1mV/°C without calibration or op amps
- Newer, shinier DMM RTDs—part 1 and part 2
- Why modulate a power amplifier?—and how to do it
- Power amplifiers that oscillate—deliberately. Part 1: A simple start and Part 2: A crafty conclusion
I built something to control all my lab devices
I'm slightly lazy; if I need to do repeated work, I'd rather spend my time building something that will do that job for me. (It's not always faster, but it's certainly more fun.) In the last few months I went completely overboard and built something that connects to, well, pretty much everything in my workshop that communicates. I can now control my power supply, read data from a power meter, read temperatures, etc., all in a single tool. It can even control my 6-axis robot arm and watch and analyse my security cameras. Using JavaScript, I can now run automated tests or whatever. And of course, since it is 2026, I added AI, which is pretty awesome. Combined with voice recognition and text-to-speech, I can now say "Set the power supply to 15 V, 100 mA, turn the output on" while holding two probes. And it actually works. (Though on the first attempt it misheard 100 mA as 100 million 💀, so I built in a confirmation step.) The AI can also write scripts for you and help write the drivers for your equipment. The camera and AI can also be used inside a script; imagine you have an old analog voltmeter and want to use its value in your script: just point the camera at it and do something like `let value = ai("return the value of the analog meter in Volts", camera.snapshot());`. So I hope there are more fools like me who would love to play with something like this; if you want to give it a try, it's free! Though it's very much in beta, so I'm sure you'll find stuff I need to fix, or stuff I need to explain better... It should be able to connect to any SCPI device over serial/USB or TCP/IP. You'll need to run your own local LLM (like Ollama or LM Studio) to get the AI to work for now. I used LM Studio with Qwen3.5 9b, which worked perfectly for recognizing images. Let me know if you have any questions!
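For anyone curious what the underlying instrument traffic looks like, here is a minimal, generic Python sketch of a raw SCPI-over-TCP exchange with a bench supply, the kind of thing a tool like this automates. The host, port and exact command forms are assumptions; SCPI syntax varies by instrument, so check the programming manual.

```python
import socket

def set_psu(host, port, volts, amps, output_on=True):
    """Send common SCPI power-supply commands over a raw TCP socket.

    The VOLT/CURR/OUTP commands and port 5025 are typical conventions,
    but exact syntax is instrument-dependent (assumption, not universal).
    """
    with socket.create_connection((host, port), timeout=2.0) as s:
        for cmd in (f"VOLT {volts}", f"CURR {amps}",
                    "OUTP ON" if output_on else "OUTP OFF", "*IDN?"):
            s.sendall((cmd + "\n").encode())
        return s.recv(4096).decode().strip()   # instrument identity string

# e.g. set_psu("192.168.1.50", 5025, 15.0, 0.1)
```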
AI inference accelerator bolsters efficiency in power modules

Power modules for data centers are incorporating AI inference for applications such as agentic AI, response generation with large language models (LLMs), and predictive analytics in finance and healthcare. The use of AI accelerators is mainly aimed at boosting energy efficiency in high-density boards.
Take the case of Infineon, whose OptiMOS TDM2254xx dual-phase power modules are paired with d-Matrix’s Corsair inference accelerator. According to Sid Sheth, founder and CEO of d-Matrix, Corsair was purpose-built for delivering the sub-2 ms token latency that interactive applications require.

The OptiMOS TDM2254xx dual-phase power module enables vertical power delivery while offering a density of 1.0 A/mm². Source: Infineon
Infineon has been working closely with d-Matrix to optimize the Corsair inference accelerator for its power semiconductors. “Infineon has been collaborating with customers specializing in inference processors, such as d-Matrix, from the early days when the industry was mostly focused on training hardware,” said Raj Khattoi, VP and GM of consumer, computing and communication at Infineon.
Infineon, which offers a broad portfolio of power semiconductors, based on silicon (Si), silicon carbide (SiC), and gallium nitride (GaN), has also been working closely with AI companies in both the training and inference markets. And these liaisons have aimed to improve energy efficiency at higher power density in hardware at data centers and other AI installations.
Related Content
- Solving AI’s Power Struggle
- TI launches power management devices for AI computing
- Taiwan’s Emerging Power Electronics Strategy in the AI Era
- Why AI Is Redefining the Future of Commercial Power Infrastructure
- Power Module Packaging Evolves as Materials and Supply Chains Redefine Power Electronics
Thanks to VirginGrip for supporting the KPI climbing wall
We sincerely thank the Czech company VirginGrip for responding to our request and for its substantial support of the KPI climbing club and climbing wall following damage caused by shelling.
Re-purposing of a dead hard drive motor
I thought this recent project of mine could inspire people on how to reuse the spindle motor in obsolete or crashed hard drives. After all, it's a shame how these state-of-the-art motors often end up in the bin despite being in full working condition. I built a so-called "ringing table" for microscopy by creating a drop-in replacement for the original disk controller on a twenty-year-old WD drive. My board has a PIC processor, a three-phase spindle motor driver and a simple button-and-LED user interface right where the SATA and power connectors used to be. It actually worked pretty well. There must be other things one can build from this basic concept! More technical details about the project are laid out on my personal blog. https://espenandersen.no/ringing-table-from-a-dead-hard-drive/
An initiative for the professional adaptation of veterans and people with disabilities
🤝 Igor Sikorsky Kyiv Polytechnic Institute will work on the social project RE:LinkHUB, an initiative for the professional adaptation of veterans and people with disabilities. The project will combine engineering solutions, VR technologies, vocational training in the transport sector, and employment support.
🎥 KPI opens an exhibition marking the 250th anniversary of the U.S. Declaration of Independence
🇺🇸 The ceremonial opening was attended by a delegation from the U.S. Embassy in Ukraine headed by Jonas Stewart, Counselor for Press, Education and Culture of the Public Diplomacy Section. The event became another page in KPI’s more than 30-year history of cooperation with American institutions.



