ELE Times
US Tariffs Could Cost Chip Equipment Makers Over $1 Billion Annually
U.S. semiconductor equipment manufacturers are facing potential annual losses exceeding $1 billion due to new tariffs proposed by President Donald Trump’s administration. Major companies like Applied Materials, Lam Research, and KLA could each incur losses of approximately $350 million, while smaller firms such as Onto Innovation may also experience significant financial impacts.
These projected losses stem from anticipated declines in overseas sales of less advanced equipment, increased costs from sourcing alternative components, and expenses related to tariff compliance. The Trump administration has temporarily paused previously announced reciprocal tariffs but is considering further actions to promote domestic manufacturing, including initiating an import investigation.
This development adds to the challenges already faced by the industry following former President Biden’s export controls aimed at limiting advanced chip manufacturing in China, which have prompted China to bolster its domestic semiconductor capabilities. Industry representatives have been in discussions with U.S. officials, emphasizing the need to consider the broader implications of these tariffs on the global semiconductor supply chain.
The post US Tariffs Could Cost Chip Equipment Makers Over $1 Billion Annually appeared first on ELE Times.
TI enables automakers to advance vehicle autonomy and safety with new chips in its automotive portfolio
- The industry’s first high-speed, single-chip lidar laser driver can detect objects faster and more accurately than discrete solutions.
- New high-performance automotive bulk acoustic wave (BAW)-based clocks are 100 times more reliable than quartz-based clocks, enabling safer operation.
- Automotive manufacturers can enhance front and corner radar sensor functions with TI’s newest millimeter-wave (mmWave) radar sensor.
Texas Instruments (TI) introduced a new portfolio of automotive lidar, clock and radar chips to help automakers transform vehicle safety by bringing more autonomous features to a wider range of cars. TI’s new LMH13000, the industry’s first integrated high-speed lidar laser driver, delivers ultra-fast rise time to improve real-time decision-making. The industry’s first automotive BAW-based clocks, the CDC6C-Q1 oscillator and LMK3H0102-Q1 and LMK3C0105-Q1 clock generators, improve advanced driver assistance system reliability. Addressing evolving ADAS needs, TI’s new AWR2944P mmWave radar sensor offers advanced front and corner radar capabilities.
“Our latest automotive analog and embedded processing products help automakers both meet current safety standards and accelerate toward a collision-free future,” said Andreas Schaefer, TI general manager, ADAS and Infotainment. “Semiconductor innovation delivers the reliability, precision, integration and affordability automakers need to increase vehicle autonomy across their entire fleet.”
Real-time decision-making with 30% longer distance measurements
A crucial technology for the future of safe autonomous vehicles, lidar provides a detailed 3D map of the vehicle’s surroundings. This enables vehicles to accurately detect and quickly react to obstacles, traffic and road conditions to improve real-time decision-making. TI’s new LMH13000 is the industry’s first integrated high-speed laser driver to deliver an ultra-fast 800ps rise time, achieving up to 30% longer distance measurements than discrete solutions. With integrated low-voltage differential signaling (LVDS), complementary metal-oxide semiconductor (CMOS) and transistor-transistor logic (TTL) control signals, the device eliminates the need for large capacitors or additional external circuitry. This integration also supports an average 30% reduction in system costs while shrinking solution size to one-quarter, empowering design engineers to discreetly mount compact, affordable lidar modules in more areas and across more vehicle models.
As lidar technology reaches higher output currents, vast variations in pulse duration over temperature make it challenging to meet eye safety standards. TI’s LMH13000 laser driver provides up to 5A of adjustable output current with only 2% variation across its -40°C to 125°C ambient temperature range, compared to discrete solutions that can have up to 30% variation. The device’s short pulse-width generation and current control enable the system to meet Class 1 U.S. Food and Drug Administration eye safety standards.
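The range and rise-time figures above are easiest to appreciate through the basic time-of-flight relation d = c·t/2, which a direct lidar system uses to turn a measured round-trip delay into a distance. A minimal sketch with illustrative numbers (generic physics, not TI specifications):

```python
# Direct time-of-flight lidar converts a measured round-trip delay into
# distance via d = c * t / 2. Illustrative math only, not TI specifications.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to target implied by a round-trip time in seconds."""
    return C * round_trip_s / 2.0

print(round(tof_distance_m(1e-6), 1))     # a 1 us round trip is ~150 m
print(round(tof_distance_m(800e-12), 2))  # 800 ps of timing maps to ~0.12 m of range
```

The second line hints at why rise time matters: the sharper the laser pulse edge, the smaller the timing uncertainty and the finer the achievable range measurement.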
Design a reliable ADAS with the industry’s first automotive BAW-based clocks
Electronics in ADAS and in-vehicle infotainment systems must work reliably while facing temperature fluctuations, vibrations and electromagnetic interference. With TI’s BAW technology benefits, the new CDC6C-Q1 oscillator and LMK3H0102-Q1 and LMK3C0105-Q1 clock generators increase reliability by 100 times compared to traditional quartz-based clocks, with a failure-in-time (FIT) rate of 0.3. Enhanced clocking precision and resilience in harsh conditions enable safer operation, cleaner data communication, and higher-speed data processing across next-generation vehicle subsystems.
Additionally, the company unveiled a new front and corner radar sensor, the AWR2944P, building on TI’s widely adopted AWR2944 platform. The new radar sensor’s enhancements improve vehicle safety by extending detection range, improving angular accuracy, and enabling more sophisticated processing algorithms. Key enhancements include:
- An improved signal-to-noise ratio.
- Increased computational capabilities.
- A larger memory capacity.
- An integrated radar hardware accelerator that allows the microcontroller and digital signal processor to execute machine learning for edge artificial intelligence applications.
TI’s new automotive lidar, clock and radar solutions build on the company’s commitment to helping engineers design adaptable ADAS for a safer, more automated driving experience.
Pushing the Boundaries of Miniaturization with Texas Instruments’ New MCU
In an exclusive interaction with ELE Times, Jaya Singh, Director – MSP WW Development at Texas Instruments, delves into the breakthrough innovations behind the world’s smallest MCU. She highlights how TI’s advanced packaging technologies, global collaboration, and the pivotal role of the India R&D team enabled unprecedented miniaturization without compromising performance or efficiency. This conversation also explores TI’s strategic roadmap, real-world applications, and the future of ultra-low-power microcontrollers in next-gen electronics.

ELE Times: What breakthrough technologies and design innovations enabled Texas Instruments to develop the world’s smallest MCU without compromising performance, power efficiency, or functionality?
Jaya Singh: Despite its tiny size, the world’s smallest MCU offers robust features, including 16KB of flash memory, a 12-bit ADC with three channels, three timers, and 6 GPIOs. With an Arm® Cortex®-M0+ CPU running at 24 MHz, it empowers engineers to create compact, power-efficient designs without compromising on performance.
Texas Instruments achieved this by leveraging advanced wafer chip-scale packaging (WCSP) technology in combination with feature optimization efforts. WCSP offers the smallest possible form factor by directly connecting an array of solder balls to the silicon die, eliminating the need for a larger package. This results in a package size virtually equal to that of the silicon itself.
By fitting eight solder balls into a compact 1.38 mm² footprint, TI enabled higher feature integration per square millimeter. This miniaturization was further complemented by a deep understanding of customer needs, allowing TI to deliver a highly optimized embedded solution that balances size, cost, and functionality.
ELE Times: What role did the TI India R&D team play in this development? Can you highlight their key contributions, collaboration with the global team, and the specific engineering challenges they helped overcome?
Jaya Singh: The TI India R&D team was involved in the complete lifecycle of the product, playing an instrumental role in the end-to-end development of the world’s smallest MCU. The India team played a vital role in defining the product specifications and ensuring a highly cost-optimized and efficient solution.
The India team collaborated closely with global teams, which enabled the integration of deep technical expertise with real-world application insights. This partnership helped overcome key engineering challenges, such as achieving ultra-small packaging without sacrificing functionality or reliability.
In parallel, the India team focused on tailoring the MCU to address the specific needs of customers. This included optimizing features such as memory configuration, analog peripherals and power efficiency to align with the demands of embedded systems used in industrial, automotive and consumer electronics sectors, such as those in India.
By combining technical leadership with market localization, the TI India R&D team ensured that this innovation not only set a global benchmark but also delivered practical value across key applications.
ELE Times: Cost optimization is critical in semiconductor design. How did TI achieve the right balance between affordability, energy efficiency, and high performance in this ultra-miniature MCU?
Jaya Singh: Achieving the optimal balance between cost, efficiency and performance required close collaboration across TI’s global engineering, design and manufacturing teams. This cross-functional effort enabled a holistic approach to cost optimization, ensuring that every element of the product was purposefully engineered for value.
Key innovations in manufacturing technology, packaging and circuit design played a pivotal role in making the MCU both compact and powerful.
ELE Times: What are some real-world applications where this MCU will be a game-changer, particularly in industries like wearables, medical devices, and ultra-low-power IoT?
Jaya Singh: As electrical circuits and system designs become smaller, board space is increasingly considered a scarce and valuable resource. TI’s MSPM0C1104 WCSP MCU enables innovation in size-constrained applications such as personal electronics, medical wearables, and factory automation. Using an ultra-small, feature-rich MCU in high-density designs enables engineers to design solutions with more room for additional components and larger batteries for increased operational lifetimes.
ELE Times: How does this innovation fit into TI’s broader roadmap for ultra-low-power and miniaturized semiconductor solutions, and what can we expect next in this space?
Jaya Singh: Since the launch of our Arm® Cortex®-M0+ MCU portfolio in 2023, Texas Instruments has rapidly expanded its offering to over 100 devices for industrial, medical and automotive systems. The portfolio offers scalable configurations of on-chip analog peripherals and a range of computing options, including other small packages to help reduce board size and bill of materials. We support the portfolio with a comprehensive ecosystem of hardware and software resources, to help engineers reduce system cost and complexity while meeting diverse application needs. In the future, we plan to continue expanding the portfolio to meet the growing needs of the industry.
The introduction of the world’s smallest MCU demonstrates TI’s commitment to developing small packages across our embedded processing and analog portfolio that help our customers innovate in size-constrained applications. The trend of smaller, more compact design requirements will continue to grow. Packaging advancements enable engineers to integrate more functionality into smaller form factors while maintaining high levels of precision and performance, enhancing user experiences and creating new design possibilities.
TI’s investments in internal manufacturing and technology have given the company greater control of its entire manufacturing process, while also lowering costs. By optimizing packaging solutions for specific application needs, TI can explore new design approaches and technologies while achieving the highest levels of quality and reliability – driving innovation and meeting changing industry demands.
ELE Times: With semiconductor technology constantly evolving, what unique challenges and opportunities do you foresee in pushing the boundaries of MCU miniaturization further?
Jaya Singh: With each new generation of electronics, consumers expect continuous advancements in size and functionality. To meet these demands, engineers are challenged to add new features while maintaining or decreasing their products’ current form factors. TI is committed to helping our customers overcome their design challenges and get to market quickly with MCUs that are scalable, cost-optimized and easy to use.
Redefining Battery Intelligence: The Future of BMS in an 800V, AI-Driven EV Era
As the electric vehicle (EV) revolution accelerates globally, Battery Management Systems (BMS) have emerged as the unsung heroes—quietly orchestrating the health, performance, and safety of advanced battery packs that power the next generation of mobility as well as energy storage solutions.
With the global shift toward high-voltage architectures, solid-state chemistries, and connected ecosystems, BMS technology is undergoing a profound transformation. Today’s systems are not just passive controllers—they are intelligent, predictive, cyber-resilient platforms enabling faster charging, longer life, and circular energy applications.
In this feature, ELE Times explores the frontiers of BMS development, with a focus on innovations in ultra-high-voltage support, real-time AI integration, cybersecurity, thermal breakthroughs, and global standardization. Industry leaders like Delta Electronics offer valuable insights into the technological shifts shaping this pivotal domain.
From Lithium-Ion to Solid-State and Sodium-Ion: The Chemistry-Agnostic Evolution of BMS
Next-gen battery chemistries like solid-state and sodium-ion promise higher energy densities, safer designs, and reduced reliance on scarce materials. However, their diverse electrochemical properties require adaptable and highly intelligent BMS platforms.
Delta Electronics, an industry leader in power and thermal solutions, is playing a key role in enabling this transition. Their latest BMS innovations are designed to be chemistry-agnostic, capable of managing not only conventional lithium-ion batteries but also emerging formats.
“Our advanced BMS platforms are optimized for ultra-high-voltage architectures—800V and beyond,” Rajesh Kaushal, Energy Infrastructure & Industrial Solutions (EIS) Business Group Head, India & SAARC, Delta Electronics, shared. “This is critical for enabling faster charging and higher drivetrain efficiency. We’re also leveraging adaptive control algorithms and AI-driven analytics to achieve precise thermal management, voltage control, and SOC estimation across various battery chemistries, including solid-state and sodium-ion.”
AI at the Edge: Unlocking Real-Time Diagnostics and Predictive Intelligence
BMS systems are becoming smart, self-optimizing, and responsive—thanks to AI and edge computing.
Delta’s integration of AI-driven algorithms and edge computing empowers its BMS to perform real-time cell-level diagnostics, dynamically optimize charging protocols, and conduct predictive analytics for early fault detection and lifecycle management. This evolution from reactive to predictive management allows early detection of anomalies, degradation trends, and potential failure points.
“By processing battery data at the edge, our BMS platforms reduce latency and improve responsiveness,” Delta noted. “This ensures superior energy distribution, thermal safety, and charging efficiency—even under dynamic operating conditions.”
The result is a system that adapts on the fly—extending battery life, maximizing range, and enhancing user safety.
Cybersecurity and Functional Safety in the Era of OTA and Vehicle Connectivity
As EVs become increasingly connected, over-the-air (OTA) updates and cloud integration introduce new vectors for cybersecurity risks. Securing the BMS, which has access to critical vehicle and battery functions, becomes a top priority.
Leading system developers are engineering their BMS architectures in compliance with ISO 21434 for automotive cybersecurity and ISO 26262 to meet stringent functional safety requirements. Robust hardware encryption, secure boot mechanisms, and real-time anomaly detection algorithms are now standard features in next-gen BMS platforms.
Delta is among the innovators focusing on this dual mandate of cybersecurity compliance and functional reliability, ensuring that their BMS solutions remain resilient against evolving threat landscapes.
Enabling Ultra-Fast Charging with Intelligent Thermal Management
With growing consumer demand for sub-10 minute fast charging, thermal stress becomes a critical bottleneck. Charging a large battery at high currents within minutes can induce rapid temperature rise, risking thermal runaway if not managed effectively.
Breakthroughs in active liquid cooling, phase-change materials, and smart heat sinks are being integrated directly into the BMS ecosystem. With AI-assisted thermal forecasting, the system can predict potential heat buildup and adjust the charging cycle in advance to prevent overheating.
Delta is investing in advanced cooling technologies that work in tandem with AI-driven thermal models. “Thermal runaway mitigation is not just about removing heat—it’s about knowing where and when to intervene,” Mr. Kaushal explained.
Balancing Act: Improving Efficiency Through Active Balancing and Real-Time Impedance Tracking
Traditional passive cell balancing wastes energy as heat, especially in large battery systems. New-generation BMS solutions are increasingly adopting active balancing to redistribute charge dynamically and efficiently across cells.
Coupled with real-time impedance tracking, these systems can detect early signs of cell aging or imbalance, allowing preemptive corrections to preserve performance and extend battery lifespan.
Delta’s BMS leverages both techniques, resulting in better thermal uniformity, extended range, and improved charging cycles over the battery’s lifetime.
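The difference between passive and active balancing is easy to see in a toy simulation: instead of bleeding excess charge off as heat, an active balancer moves it from the strongest cell to the weakest. The sketch below is a conceptual model under simplified assumptions (fixed transfer step, lossless transfer), not Delta’s algorithm:

```python
# Toy active-balancing loop: shuttle charge from the highest-SoC cell to
# the lowest until the pack sits inside a tolerance band. Passive
# balancing would instead dissipate the excess as heat.
# A conceptual model only, not Delta's BMS algorithm.

def balance(socs, step=0.01, tol=0.02, max_iters=10_000):
    socs = list(socs)
    for _ in range(max_iters):
        hi = socs.index(max(socs))
        lo = socs.index(min(socs))
        if socs[hi] - socs[lo] <= tol:
            break
        socs[hi] -= step  # charge drawn from the strongest cell...
        socs[lo] += step  # ...delivered to the weakest (transfer losses ignored)
    return socs

cells = balance([0.90, 0.80, 0.70, 0.85])
print(cells)  # pack spread now within tolerance; total charge conserved
```

Note that the total state of charge is conserved, which is exactly the efficiency advantage over passive bleed resistors.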
Towards a Circular Economy: Interoperability and Second-Life Readiness
As the EV ecosystem expands beyond traditional use cases, modern BMS architectures are being reimagined to accommodate battery swapping, second-life deployment in stationary energy storage, and the broader goals of circular economy frameworks. This requires a high degree of interoperability, with standardized communication protocols and modular software layers.
Forward-looking companies are aligning with global standards like IEC 62984 and OpenBMS frameworks, ensuring that their systems can seamlessly integrate into a variety of energy use cases—from grid storage to micro-mobility.
Delta is actively pursuing interoperability across its BMS product lines, supporting modular deployment in vehicle and stationary storage contexts alike.
Conclusion: BMS as the Central Nervous System of Electrification
The future of mobility rests not just on battery cells but on the intelligence that governs them. As BMS platforms become more adaptive, predictive, and secure, they are evolving into the central nervous systems of modern EVs and energy storage systems.
With pioneering work by companies like Delta, the industry is on track to support higher voltages, faster charging, and longer battery life—while embracing sustainability and digital intelligence at every level.
At ELE Times, we will continue to track the cutting-edge of BMS and battery technology as the heartbeat of the global energy transition.
Engineering the Future of Compact Audio: A Deep Dive into the NAU82110YG Filter-Free Class-D Amplifier
As embedded consumer electronics evolve toward greater functionality and miniaturization, audio systems are challenged to deliver high performance while consuming less board space, less power, and generating minimal EMI. Whether it’s a Bluetooth speaker on a picnic table, a surveillance system mounted on an exterior wall, or a handheld gaming device running on a lithium-ion cell, the expectations for clear, powerful audio in compact, thermally constrained systems have never been higher.
Enter the NAU82110YG, a mono, analog-input, high-efficiency Class-D audio amplifier developed to meet the rigorous design needs of modern consumer and IoT electronics. With its 18W output capability, filter-free topology, and low-noise performance, this amplifier is optimized not only for output power, but for system-level design integration, EMI mitigation, and power-aware operation.
Class-D Amplification: Efficiency by Design
At the heart of the NAU82110YG is its Class-D amplifier topology—a PWM-based design that uses high-frequency switching (pulse-width modulation) to amplify audio signals. Unlike linear Class-AB amplifiers that operate transistors in the active region (and dissipate significant power as heat), Class-D amplifiers operate power MOSFETs in either saturation or cutoff, minimizing conduction losses.
This fundamental architecture results in power efficiency improvements of up to 66% over Class-AB designs, with typical system efficiencies exceeding 90% under moderate-to-high load conditions. The reduced thermal footprint of Class-D architectures allows for:
- Smaller heat sinks or passive cooling
- Longer battery life in portable systems
- Higher output power in thermally constrained designs
The NAU82110YG implements this efficiency to full effect, delivering:
- Up to 18W output into 4 Ω at 12V
- Up to 10W output into 8 Ω
- <6 mA quiescent current @ 12V supply
This makes it an excellent candidate for always-on or battery-powered applications that cannot afford high idle currents or thermal load.
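To see why low dissipation matters for always-on designs, the heat an amplifier must shed can be estimated from its efficiency. The efficiency figures in the sketch below are illustrative assumptions in line with the typical numbers above, not measured NAU82110YG data:

```python
# Amplifier heat dissipation at a given output power:
#   P_dissipated = P_out * (1/efficiency - 1)
# Efficiency values are illustrative, not measured NAU82110YG data.

def dissipated_w(p_out_w: float, efficiency: float) -> float:
    """Power an amplifier stage turns into heat while delivering p_out_w."""
    return p_out_w * (1.0 / efficiency - 1.0)

# At 10 W output: a ~90%-efficient Class-D stage vs a ~60%-efficient Class-AB stage.
class_d = dissipated_w(10.0, 0.90)
class_ab = dissipated_w(10.0, 0.60)
print(round(class_d, 1), round(class_ab, 1))
```

Roughly a sixfold reduction in waste heat at the same output power is what lets a Class-D design drop the heat sink and stretch battery life.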
Filter-Free Output: EMI Innovation at the Edge
Conventional Class-D amplifiers require low-pass LC filters at the output stage to smooth switching artifacts and limit electromagnetic interference (EMI). However, these components increase system cost, consume PCB space, and complicate layout—particularly in tightly integrated wireless products.
The NAU82110YG breaks this dependency through a filterless Class-D output powered by two key innovations:
- Spread-Spectrum Oscillator: Dynamically modulates the PWM switching frequency, spreading EMI energy across a broader spectral band to avoid regulatory test points (e.g., FCC/CE Class B).
- Slew-Rate Control: Softens the transitions at the output stage to reduce high-frequency harmonic energy, thereby suppressing radiated and conducted EMI.
The result is compliant EMI performance with no external filtering components required—an enormous benefit in space- and cost-constrained designs such as smart home nodes and compact audio devices.
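The spread-spectrum idea can be illustrated with a toy frequency schedule: rather than switching at one fixed frequency, the PWM clock is swept across a band so no single spectral line carries all the EMI energy. The triangular modulation shape and the numbers below are generic assumptions, not the NAU82110YG’s actual internals:

```python
# Spread-spectrum clocking sketch: sweep the PWM switching frequency over
# a band (triangular modulation here) so EMI energy is spread out rather
# than concentrated at one spectral peak.
# Modulation shape and numbers are illustrative, not NAU82110YG internals.

def dithered_freqs(f_center_hz: float, spread: float, steps: int):
    """Triangular sweep of the switching frequency over roughly +/- spread."""
    half = steps // 2
    up = [f_center_hz * (1.0 - spread + 2.0 * spread * i / half) for i in range(half)]
    return up + list(reversed(up))  # ramp up, then back down

freqs = dithered_freqs(400e3, 0.05, 20)  # 400 kHz center, 5% spread
print(min(freqs), max(freqs))  # the clock stays within the +/-5% band
```

Because each spectral bin now carries only a fraction of the total switching energy, the measured peak at any one frequency drops, which is what regulatory limits such as FCC/CE Class B actually test.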
High SNR and PSRR: Precision Meets Power
While output power and efficiency are critical, audio signal integrity is paramount. The NAU82110YG is engineered to maintain high-fidelity signal reproduction even in noisy electrical environments. It achieves this through:
- Signal-to-Noise Ratio (SNR): 103 dB — ensuring clean output with minimal background hiss or digital noise coupling
- Power Supply Rejection Ratio (PSRR): >83 dB @ 217 Hz — isolating audio performance from ripple and transients common in switched-mode power supplies (SMPS) or wireless SoCs
This makes the NAU82110YG particularly well-suited for:
- Wireless audio products, where RF-induced noise and digital switching transients can corrupt audio paths
- Devices powered by buck converters or USB power, where 5V/12V supplies are inherently noisy
One of the standout features of the NAU82110YG is its dual-mode input architecture:
- Single-ended input for simpler source configurations or legacy audio chains
- Differential input for improved common-mode noise rejection—ideal in environments with significant ground bounce or shared power rails
In addition, the amplifier provides programmable gain control via:
- I²C control: Up to 32 discrete gain levels, allowing firmware-based dynamic range adjustment or real-time volume control
- Pin-selectable preset gains: Five options (0 / 20 / 24 / 32 / 36 dB), allowing low-latency analog selection for fixed-function systems or GPIO-driven gain staging
This flexibility enables the same amplifier to support diverse product families, audio input standards, and user interface styles.
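For reference, the pin-selectable presets map to linear voltage ratios through the standard 20·log10 relation; the conversion helper below is generic audio math, not device driver code:

```python
# Voltage gain in dB -> linear amplitude ratio: ratio = 10 ** (dB / 20).
# The five pin-selectable presets come from the text; the helper is generic.
PRESET_GAINS_DB = (0, 20, 24, 32, 36)

def db_to_ratio(gain_db: float) -> float:
    """Linear voltage ratio corresponding to a gain expressed in dB."""
    return 10.0 ** (gain_db / 20.0)

# 0 dB passes the signal unchanged; 20 dB is a 10x voltage gain.
print([round(db_to_ratio(g), 1) for g in PRESET_GAINS_DB])
```

The same relation applies to the 32 I²C-programmable steps, letting firmware pick whichever output amplitude the product’s speaker and supply rail can support.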
Protection and Reliability: Built-In Intelligence
For designers targeting industrial, outdoor, or high-volume consumer applications, system-level protection is non-negotiable. The NAU82110YG incorporates a comprehensive suite of protections to safeguard both the amplifier and downstream components:
Protection Type | Description
--- | ---
Overcurrent Protection (OCP) | Prevents device damage under speaker short or overdrive conditions
Overvoltage Protection (OVP) | Shields the amplifier against input transients or power rail fluctuations
Undervoltage Lockout (UVLO) | Prevents operation below safe VDD levels
Overtemperature Protection (OTP) | Shuts down the amplifier if die temperature exceeds thermal limits
Anti-Clipping Protection (ACP) | Reduces the likelihood of speaker damage due to waveform distortion under dynamic loads
Combined, these features simplify system qualification under thermal, electrical, and fault conditions, accelerating product certification (e.g., CE, UL, IEC-60065) and reducing RMA rates.
Performance Summary: NAU82110YG Key Specifications
Parameter | Value
--- | ---
Output Power | 18W @ 4 Ω, 10W @ 8 Ω
Quiescent Current | <6 mA @ 12V
SNR | 103 dB
PSRR | >83 dB @ 217 Hz
Gain Control | 32-step I²C or 5 preset analog pins
Input Mode | Single-ended / Differential
Package | QFN20
Temp Range | -40°C to +105°C
EMI Control | Spread-Spectrum + Slew-Rate
Output Filter | Not required
Applications and Integration Scenarios
The NAU82110YG is optimized for a wide array of real-world applications:
- Bluetooth Speakers: Efficient amplification, dynamic gain, filter-free EMI compliance for compact designs
- Wireless Doorbells & Intercoms: Low idle current, fast startup (<5 ms), speaker protection for long-term use
- Outdoor Surveillance: Wide operating temperature, PSRR for SMPS isolation, differential input for long cable runs
- Handheld Game Consoles: Audio clarity with minimal power draw, quick response to sleep/resume cycles
The NAU82110YG represents a significant evolution in Class-D amplifier design, not just in raw performance, but in system-oriented integration. It addresses long-standing challenges—EMI compliance, board space constraints, thermal management, and dynamic audio control—through a highly integrated, filter-free, and protection-rich solution.
For engineers designing tomorrow’s connected devices, the NAU82110YG offers more than amplification: it provides an audio subsystem foundation that is efficient, flexible, and reliable by design.
For datasheets, application notes, and reference designs, visit:
https://www.nuvoton.com/products/smart-home-audio/audio-amplifiers/class-d-series/nau82110yg/
Nuvoton Introduces Excellent SNR, Filter-Free 18W Class-D Audio Amplifier
NAU82110YG – The New High-Efficiency Audio Device Ideal for Bluetooth Speakers, Wireless Doorbells, Outdoor Surveillance Systems, and Handheld Game Consoles
Nuvoton announced the NAU82110YG, a new Class-D audio amplifier. The NAU82110YG Class-D amplifier features high-efficiency mono, analog input, and delivers up to 10W (8 Ω load) or 18W (4 Ω load) output power. With multiple gain adjustment options, it is the ideal choice for consumer electronics applications such as Bluetooth speakers, wireless doorbells, outdoor surveillance systems, and handheld gaming consoles.
As the importance of quality of life grows, music plays an increasingly vital role in daily life, driving strong demand for superior sound. Consumers now seek high-quality audio and advanced products, making power efficiency and noise reduction crucial in the electronics market. To address these needs, Nuvoton has introduced the NAU82110YG, a next-generation Class-D amplifier. This innovative product offers lower power consumption, reduced noise, and a range of features designed to enhance user experience.
The NAU82110YG mono Class-D audio amplifier features low quiescent current (6 mA @ 12V), high output power, and comprehensive device protection, suitable for various consumer audio applications. Additionally, this new amplifier supports both single-ended and differential input signal modes, providing flexibility for audio setup.
NAU82110YG Key Features
- Multiple Gain Settings:
- Configurable via I²C interface with 32 gain levels
- Selectable via control pins with five preset gains: 0 dB / 20 dB / 24 dB / 32 dB / 36 dB
- Comprehensive Device Protection:
- Overcurrent Protection (OCP)
- Overvoltage Protection (OVP)
- Undervoltage Lockout (UVLO)
- Overtemperature Protection (OTP)
- Speaker Protection: Anti-Clipping Protection (ACP)
- Package: QFN20
- Operating Temperature Range: -40°C to +105°C
Superior EMI Performance, Filter-Free
The NAU82110YG amplifier stands out by eliminating the need for an external output filter, thanks to its spread-spectrum-oscillator technology and slew-rate control, effectively reducing electromagnetic interference (EMI). Moreover, it offers enhanced immunity and a power supply rejection ratio (PSRR) of >83 dB at 217 Hz. With an exceptional signal-to-noise ratio (SNR) of 103 dB, the NAU82110YG is an excellent fit for Class-D audio amplifiers in wireless and AM frequency band applications.
Leap Forward in Efficiency and Power
The Class-D topology represents a significant leap forward in both power efficiency and noise minimization in audio devices. By generating a binary square wave, Class-D amplifiers efficiently amplify the signal through power device switching. Compared to Class-AB devices, Class-D amplifiers deliver up to two-thirds better power efficiency.
The NAU82110YG Class-D audio amplifier excels in driving a 4 Ω load with an impressive output power of up to 18W and features a chip-enable pin for a fast start-up time of just 4.6 ms.
NAU82110YG Target Applications
The new Class-D audio amplifier is designed for consumer electronics applications including Bluetooth speakers, wireless doorbells, outdoor surveillance systems, and handheld gaming consoles.
Infineon launches world’s first industrial gallium nitride (GaN) transistor product family with integrated Schottky diode
Infineon Technologies AG has introduced the world’s first gallium nitride power transistors with integrated Schottky diode for industrial use. The product family of medium-voltage CoolGaN Transistors G5 with integrated Schottky diode increases the performance of power systems by reducing undesired deadtime losses, thereby further increasing overall system efficiency. Additionally, the integrated solution simplifies the power stage design and reduces BOM cost.
In hard-switching applications, GaN-based topologies may incur higher power losses due to the larger effective body diode voltage of GaN devices. This worsens with long controller dead-times, resulting in lower efficiency than targeted. Until now, power design engineers have often required an external Schottky diode in parallel with the GaN transistor, or have tried to reduce dead-times via their controllers, all of which adds extra effort, time and cost. The new CoolGaN Transistor G5 from Infineon significantly reduces these challenges by offering a GaN transistor with an integrated Schottky diode suitable for use in server and telecom IBCs, DC-DC converters, synchronous rectifiers for USB-C battery chargers, high-power PSUs, and motor drives.
“As gallium nitride technology becomes increasingly widespread in power designs, Infineon recognizes the need for continuous improvement and enhancement to meet the evolving demands of customers,” says Antoine Jalabert, Vice President of Infineon’s Medium-Voltage GaN Product Line. “The CoolGaN Transistor G5 with Schottky diode exemplifies Infineon’s dedication to an accelerated innovation-to-customer approach to further push the boundaries of what is possible with wide-bandgap semiconductor materials.”
Because GaN transistors lack a body diode, their reverse conduction voltage (VRC) depends on the threshold voltage (VTH) and the OFF-state gate bias (VGS). Moreover, the VTH of a GaN transistor is typically higher than the turn-on voltage of a silicon diode, which is a disadvantage during reverse conduction, also known as third-quadrant operation. The integrated Schottky diode in the new CoolGaN Transistor lowers reverse conduction losses, enables compatibility with a wider range of high-side gate drivers, and relaxes dead-time requirements, giving broader controller compatibility and a simpler design.
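A rough estimate shows why the third-quadrant voltage drop matters during dead-time. All numbers below are assumptions chosen for illustration, not CoolGaN G5 datasheet values.

```python
# Rough, illustrative estimate of reverse-conduction (third-quadrant) loss
# during controller dead-time, with and without a Schottky diode in parallel.
# All numbers are assumed for illustration, not CoolGaN G5 datasheet values.

def deadtime_loss(v_drop: float, i_load: float, t_dead: float, f_sw: float) -> float:
    """Average power lost while load current free-wheels through the device
    during both dead-time intervals of each switching period."""
    return v_drop * i_load * (2 * t_dead) * f_sw

I_LOAD = 20.0       # amperes through the device (assumed)
T_DEAD = 50e-9      # 50 ns dead-time per switching edge (assumed)
F_SW = 1e6          # 1 MHz switching frequency (assumed)

# GaN third-quadrant drop ~ VTH + |off-state gate bias| (no body diode)
loss_gan = deadtime_loss(v_drop=2.5, i_load=I_LOAD, t_dead=T_DEAD, f_sw=F_SW)
# A Schottky diode clamps the drop to roughly 0.5 V
loss_schottky = deadtime_loss(v_drop=0.5, i_load=I_LOAD, t_dead=T_DEAD, f_sw=F_SW)
```

With these assumed conditions, clamping the reverse drop from 2.5 V to 0.5 V cuts the dead-time conduction loss from about 5 W to about 1 W, which is the benefit the integrated diode targets.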
The first of several GaN transistors with integrated Schottky diode is the 100 V 1.5 mΩ transistor in 3 x 5 mm PQFN package.
The post Infineon launches world’s first industrial gallium nitride (GaN) transistor product family with integrated Schottky diode appeared first on ELE Times.
Keysight Introduces Next-Generation Embedded Security Testbench
- Scalable PXI-based solution delivers enhanced performance and simplifies security testing for modern chips and embedded devices
Keysight Technologies, Inc. announces the launch of the Next-Generation Embedded Security Testbench, a consolidated and scalable test solution designed to address the increasingly complex security testing demands of modern chips and embedded devices. This new solution offers enhanced flexibility, reduces test setup complexities, and improves the reliability and repeatability of critical security evaluations.
The proliferation of connected devices and the escalating sophistication of security threats create significant challenges for developers and security labs. Traditional security testing often involves cumbersome setups with multiple disparate instruments, leading to increased complexity, longer test times, and potential inconsistencies in results. The Next-Generation Embedded Security Testbench addresses these pain points by providing a unified and efficient comprehensive device security analysis platform.
The new testbench leverages a high-speed PXIe architecture designed to address the complexities of modern security testing needs. It represents a significant evolution of the Device Vulnerability Analysis product line. This robust architecture enables the Next-Generation Embedded Security Testbench to deliver up to 10 times more effective results in side-channel analysis and fault injection (FI) testing, crucial techniques for identifying and mitigating hardware-based vulnerabilities.
The Embedded Security Testbench is a modular solution that meets varying test needs. Integrating essential components such as oscilloscopes, interfacing equipment, amplifiers, and trigger generators into a single PXIe chassis significantly reduces the need for extensive cabling and enhances communication speed between modules.
The platform is powered by three core components – the M9046A PXIe Chassis, the M9038A PXIe High-Performance Embedded Controller, and Inspector Software. Solution packages can be extended depending on requirements to include additional tools for complex testing scenarios, incorporating oscilloscopes and extra electromagnetic components. Keysight is committed to the ongoing development of the Embedded Security Testbench, with plans to introduce further enhanced modules in the future.
Wei Yan Mao, Director of Operations at Applus+ Laboratories, said: “At Applus+ Laboratories, we see the technical opportunities and flexibility of this new platform and wanted to be one of the first to start using it in our accredited IT Security Evaluation Facilities (ITSEF).”
Erwin in ’t Veld, Product Manager, Device Security Research Lab at Keysight, said: “With the Next-Generation Embedded Security Testbench, we are setting a new standard for device security testing. By boosting performance and flexibility within a simplified workflow and with its inherent scalability, we are empowering our users to effectively address today’s security challenges and adapt to future advancements.”
The post Keysight Introduces Next-Generation Embedded Security Testbench appeared first on ELE Times.
Power and Thermal Management Concerns in AI: Challenges and Solutions
Courtesy: Arrow Electronics
Artificial Intelligence has rapidly become an innovative driver across industries, enabling everything from autonomous vehicle development to real-time healthcare diagnostics. However, as AI models grow in both complexity and scale, power and thermal management concerns are also rising. Companies must meet and overcome these challenges to help ensure sustainable and efficient AI operations.
Why Power and Thermal Management Matter in AI
AI systems are, at their core, computationally intensive and require large amounts of processing power to train and deploy models effectively. This intense computation drives up both energy consumption and heat output. Without addressing these issues, organizations are at risk of:
- System Overheating: Excessive heat can degrade hardware performance, cause unexpected failures, and shorten the lifespan of critical infrastructure.
- Operational Inefficiencies: Ineffective cooling strategies lead to higher energy costs, increased maintenance needs, and reduced system reliability.
- Environmental Impact: Escalating energy consumption increases carbon footprints, counteracting sustainability goals and regulatory requirements.
While AI is fundamentally a compute-heavy task, recent trends exacerbate heat and thermal concerns for artificial intelligence systems. Some of these trends include:
- Growing Compute Density: As AI models become larger and more complex, data centers must meet rack densities exceeding 50kW—a significant jump from traditional capacities.
- Edge Deployments: Deploying AI at the edge requires compact, energy-efficient systems that can handle extreme environmental conditions while still performing at high levels.
- Diverse Workloads: AI includes applications such as computer vision, NLP, and generative models, each with its own unique performance and cooling needs.
These challenges require a combination of advanced technologies and strategic planning to maintain performance and sustainability.
Strategies for Addressing Thermal Challenges
Liquid Cooling
While liquid cooling is not a new concept, it has seen rapid growth and adoption to combat heat and thermal issues in AI systems, especially at the edge. Unlike traditional air-based systems, liquid cooling directly removes heat from critical components, offering:
- Improved Efficiency: Direct-to-chip cooling systems enhance heat dissipation, allowing servers to handle workloads exceeding 50kW per rack without compromising reliability.
- Scalability: Liquid cooling is suitable for data centers, edge deployments, and hybrid environments and supports the growing compute density required for AI applications.
- Sustainability: Reduced reliance on energy-intensive air-cooling systems contributes to lower carbon emissions and aligns with environmental regulations.
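The scale of the cooling problem can be sized with a back-of-envelope calculation from the heat equation Q = ṁ·cp·ΔT. The 10 K supply/return temperature rise below is an assumed, illustrative design point, not a figure from any vendor.

```python
# Back-of-envelope sizing of direct-to-chip liquid cooling for a 50 kW rack:
# required coolant mass flow from Q = m_dot * cp * dT. Values are illustrative.

HEAT_LOAD_W = 50e3       # 50 kW rack, per the density figure cited above
CP_WATER = 4186.0        # J/(kg*K), specific heat of water
DELTA_T = 10.0           # K rise between supply and return (assumed)

m_dot = HEAT_LOAD_W / (CP_WATER * DELTA_T)   # kg/s of coolant required
flow_l_per_min = m_dot * 60.0                # water is ~1 kg per litre
```

Roughly 1.2 kg/s, or about 72 litres of water per minute, must circulate through a single 50 kW rack at this ΔT, which is why plumbing, manifolds, and heat exchangers dominate liquid-cooling designs.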
Arrow’s Intelligent Solutions business works with leading vendors and leverages advanced liquid cooling technologies, such as rear-door heat exchangers and immersion cooling, to provide tailored solutions that address the specific needs of OEMs and ISVs. These solutions enhance system stability, extend lifespan, and significantly lower energy consumption.
Innovations in Passive Cooling
In addition to active cooling systems, advancements in passive cooling techniques, such as optimized airflow management and heat pipe technology, are becoming increasingly relevant. Heat pipe cooling, in particular, offers numerous advantages for AI systems, including exceptional thermal efficiency, uniform heat distribution across the system, minimal maintenance needs, a lightweight design, and effective cooling for high-density computing components.
The Role of Right-Sized Computing
As seen in Ampere’s innovative GPU-free AI inference solutions, right-sized computing aligns hardware capabilities with workload requirements. This approach minimizes energy waste and reduces costs and operational complexity. Ampere’s cloud-native processors, for instance, deliver:
- Enhanced Efficiency: Up to 6.4x greater AI inference performance compared to traditional systems.
- Lower Power Consumption: Optimized for sustainability, these processors allow organizations to achieve more with less energy.
- Broad Application Support: Ampere’s solutions excel across diverse AI workloads from computer vision to natural language processing.
Integrating Ampere’s technology with Arrow’s thermal management expertise helps ensure that customers receive end-to-end solutions optimized for performance, cost, and sustainability.
Holistic Approaches to AI Deployment
In addition to hardware choice and usage strategies, more comprehensive approaches to AI deployment can help mitigate concerns over these systems’ significant energy usage and heat generation and their general sustainability.
Predictive Maintenance
Predictive maintenance tools can monitor system performance, identify potential thermal issues before they escalate, and reduce downtime. Our engineering team can help develop comprehensive maintenance frameworks that leverage machine learning for operational continuity.
Energy-Efficient Architectures
Transitioning to energy-efficient architectures, such as those based on ARM or custom-designed accelerators, can significantly reduce power consumption. Our ecosystem of cutting-edge suppliers enables OEMs to access these transformative technologies.
Lifecycle Management
Lifecycle management is critical for achieving sustainable AI deployments. Strategies such as hardware recycling, second-life battery integration, and modular system upgrades can extend the usability of AI infrastructure while minimizing waste.
Moving Towards Sustainable AI Deployment
Beyond addressing immediate thermal and power challenges, OEMs must focus on long-term sustainability. Strategies include:
- Integrated Design Approaches: Collaborating across hardware, software, and cooling technology providers to create cohesive systems that meet evolving demands.
- Regulatory Compliance: Adhering to emerging global standards for energy efficiency and environmental responsibility.
- Customer Education: Empowering end-users with tools and knowledge to optimize their AI deployments sustainably.
Arrow is at the forefront of these efforts, providing OEMs with the tools and expertise to navigate the complexities of power and thermal management in AI. By leveraging our network of robust technology collaborations, engineering expertise, and a commitment to innovation, Arrow’s Intelligent Solutions business helps organizations stay ahead in the race for sustainable AI solutions.
Conclusion
The demands of AI are pushing the boundaries of power and thermal management, but solutions like liquid cooling, passive cooling innovations, and right-sized computing are paving the way for a more sustainable future.
In collaboration with cutting-edge technology providers, Arrow helps you build a comprehensive strategy that balances performance, cost, and environmental responsibility. With these tactics, organizations can deploy their AI solutions in an efficient, reliable, and scalable way.
The post Power and Thermal Management Concerns in AI: Challenges and Solutions appeared first on ELE Times.
Infineon bolsters global lead in automotive semiconductors with number one position in microcontrollers driving this success
Infineon Technologies AG bolsters its global and regional market leadership positions in automotive semiconductors, including its very strong position in microcontrollers. According to the latest market research from TechInsights, Infineon achieved a market share of 13.5 percent in the global automotive semiconductor market in 2024. In Europe, the company climbed to the top spot with a 14.1 percent market share, up from second in 2023.
Infineon also strengthened its presence in North America to the second largest market participant with a 10.4 percent share, rising from last year’s number three position. The global market share in microcontrollers rose again, to 32.0 percent, increasing the lead over the second-placed competitor by 2.7 percentage points.
Furthermore, Infineon maintained its leading market positions in the largest market for automotive semiconductors, China, with a 13.9 percent market share as well as in South Korea with a 17.7 percent market share. In Japan, the company confirmed its strong second place with a share of 13.2 percent. In total, the global automotive semiconductor market accounted for US$ 68.4 billion in 2024 – a slight decline of 1.2 percent compared to US$ 69.2 billion in 2023.
“We are the global number one in automotive semiconductors for the fifth consecutive year and we are equally successful across the world. For the first time in our history, Infineon is among the top two automotive semiconductor companies in every region,” said Peter Schaefer, Executive Vice President and Chief Sales Officer Automotive at Infineon. “This global success is a token of our strong product portfolio, outstanding customer support and our dedication to the specific needs of our customers.”
Infineon’s semiconductors are essential in driving the digitalization and decarbonization of vehicles to make them clean, safe and smart. They serve all major automotive applications such as driver assistance and safety systems, powertrain and battery management as well as comfort and infotainment features. A key focus is to support the evolution of electrical/electronic vehicle architectures towards more centralized zonal designs as the basis for software-defined vehicles. This requires state-of-the-art connectivity and data security, smart power distribution and real-time computing power.
“It is the fifth time in a row that the ‘TechInsights Automotive Semiconductor Vendor Market Share Ranking’ confirms the Infineon lead, with microcontrollers largely contributing to this success,” said Asif Anwar, Executive Director of Automotive End Market Research at TechInsights. “Semiconductors for advanced driver assistance systems, especially SoCs and memories, were among the best performing product categories. Infineon did exceptionally well in microcontrollers used in advanced driver assistance systems and many other applications. With an increase of 3.6 percentage points to a 32.0 percent market share, Infineon has held up well in the automotive microcontroller market, which decreased by 8.2 percent year-over-year.” TechInsights, “2024 Automotive Semiconductor Vendor Market Share”, March 2025.
The post Infineon bolsters global lead in automotive semiconductors with number one position in microcontrollers driving this success appeared first on ELE Times.
Empowering the Next Generation: ESSCI’s Role in Promoting E-Mobility and Battery Technology Skills
Author : Saleem Ahmed, Officiating Head, ESSCI
India is undergoing a significant transformation in its transportation sector, with electric vehicles at the forefront of this revolution. The government’s proactive initiatives, such as the Faster Adoption and Manufacturing of Electric Vehicles scheme and the Production Linked Incentive program for Advanced Chemistry Cell battery manufacturing, underscore a strong commitment to sustainable mobility. These efforts aim to achieve a 30% EV market share by 2030, a goal that necessitates a highly skilled workforce proficient in battery technology and e-mobility systems.
ESSCI’s Targeted Training Programs for E-Mobility
To address the burgeoning demand for skilled professionals in the EV sector, the Electronics Sector Skills Council of India has developed specialized training programs. These programs are meticulously designed to equip learners with industry-relevant expertise, preparing them for roles in manufacturing, design, and maintenance within the e-mobility ecosystem.
One of ESSCI’s flagship qualifications is the Battery System Assembly Operator (ELE/Q6604) program.
This course focuses on training individuals in the precise assembly of battery packs, emphasizing a comprehensive understanding of battery construction, adherence to safety protocols, and stringent quality control measures. Given that battery packs constitute a significant portion of an EV’s total cost, their proper assembly is crucial for both performance and affordability.
For those inclined towards innovation and design, the Battery System Design Engineer (ELE/Q6701) qualification offers a robust foundation in battery architecture, chemistry, and optimization techniques.
Participants are trained to develop high-performance energy storage systems that enhance efficiency, extend battery lifespan, and minimize environmental impact. As India progresses towards advanced battery technologies, expertise in battery design becomes indispensable for realizing the nation’s e-mobility objectives.
Equally vital to the EV ecosystem is the Battery System Repair Technician (ELE/Q7001) program. This course concentrates on diagnosing, troubleshooting, and repairing battery systems. As battery degradation over time can affect vehicle performance and range, skilled repair technicians are essential to sustaining India’s EV adoption. The program ensures that professionals are proficient in battery diagnostics, cell balancing, thermal management, and safety protocols—skills that are increasingly in demand as India’s EV repair and servicing sector expands.
Collaborations and Industry Integration
To enhance practical learning, ESSCI collaborates with industry leaders and academic institutions, ensuring that its training programs align with the latest technological advancements and market demands. A notable initiative is ESSCI’s partnership with ABB India, which led to the establishment of a Smart Electrician Training Centre in Faridabad, Haryana. This center provides hands-on training in modern electrical systems, smart grid technologies, and EV charging infrastructure, equipping technicians with real-world expertise.
Additionally, ESSCI actively engages with automotive manufacturers, battery producers, and energy storage companies to integrate their insights into curriculum development. Collaborations with companies involved in lithium-ion battery manufacturing, battery recycling, and charging infrastructure development ensure that the training programs remain pertinent to industry needs. These partnerships also facilitate job placements for trainees, bridging the gap between skill acquisition and employment.
Rising Demand for E-Mobility Professionals in India
The Indian EV market is experiencing exponential growth, with projections estimating a Compound Annual Growth Rate of 49% from 2022 to 2030. This surge is driven by supportive government policies, increasing environmental awareness, and technological advancements. The adoption of EVs is anticipated to generate approximately 5 million direct and indirect jobs in India by 2030, with a significant portion of these roles emerging in battery technology, EV servicing, and charging infrastructure.
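To put the cited 49% CAGR in perspective, the compound-growth arithmetic below shows what it implies for total market size over the 2022-2030 window. This is purely arithmetic on the quoted rate, not an independent forecast.

```python
# What a 49% CAGR from 2022 to 2030 implies for overall market growth:
# multiple = (1 + rate) ** years. Purely arithmetic on the cited figure.

CAGR = 0.49
YEARS = 2030 - 2022          # 8 compounding periods

growth_multiple = (1 + CAGR) ** YEARS   # roughly 24x the 2022 market size
```

Compounding at 49% for eight years multiplies the market roughly 24-fold, which underlines why the projected 5 million jobs and the associated skilling demand are plausible consequences of the rate.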
Furthermore, the lithium-ion battery market in India is expected to reach substantial capacity by 2030, propelled by the escalating demand from the EV sector. This trend underscores the urgent need for skilled professionals capable of developing, maintaining, and optimizing battery systems for electric vehicles, energy storage solutions, and grid stabilization.
The Road Ahead: Preparing India’s Workforce for the E-Mobility Revolution
ESSCI’s emphasis on future-ready skill development is instrumental in India’s journey toward sustainable mobility. By aligning its training programs with evolving industry trends, ESSCI addresses the immediate demand for skilled professionals and prepares the next generation for long-term career opportunities in e-mobility and battery technology.
With continued government support, industry collaborations, and advancements in battery research, India is well-positioned to become a global leader in electric mobility. However, achieving this vision requires a robust skilling ecosystem that empowers individuals with the expertise needed to drive innovation and ensure the reliability of EV technology.
Through its specialized training initiatives, ESSCI is effectively bridging the skill gap, enhancing employability, and contributing to India’s clean energy goals. As the demand for battery engineers, EV service technicians, and energy storage specialists continues to rise, ESSCI’s programs will remain integral to shaping the future of e-mobility in India.
The post Empowering the Next Generation: ESSCI’s Role in Promoting E-Mobility and Battery Technology Skills appeared first on ELE Times.
Nuvoton NuMicro MA35D1 Microprocessor Dual-OS Solution: Revolutionizing Industrial Automation and AIoT Applications
Nuvoton Technology’s NuMicro MA35D1 is a high-performance, dual-core Arm Cortex-A35 microprocessor capable of running both RTOS and Linux simultaneously. In the latest application demonstration, it delivers instant boot-up within one second, real-time control, and versatile application support by leveraging the strengths of both operating systems.
Linux excels in networking, multimedia, and multitasking, offering rich software tools and broad hardware compatibility that simplifies integration, while RTOS ensures stable and predictable response times for real-time industrial control scenarios. To meet diverse application demands, the MA35D1 maximizes performance and efficiency, making it an ideal solution for industrial automation, AIoT applications, smart buildings, and home appliances.
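The division of labor between the two operating systems can be sketched conceptually: a "Linux side" handles networking and UI and hands commands to an "RTOS side" that acts on them deterministically. This is a pure-Python illustration of the pattern; on the MA35D1 the two sides are the Cortex-A35 and Cortex-M4 cores communicating over shared memory, not Python threads.

```python
# Conceptual sketch of the dual-OS split described above: a "Linux side"
# (networking/UI) hands work to an "RTOS side" (deterministic control) over
# a mailbox. Illustrative only; not Nuvoton's inter-core API.

import queue
import threading

mailbox = queue.Queue(maxsize=8)   # stands in for a shared-memory mailbox
results = []

def rtos_side():
    """Control loop: consume commands and act on each one immediately."""
    while True:
        cmd = mailbox.get()
        if cmd is None:            # shutdown sentinel
            break
        results.append(f"actuated:{cmd}")

def linux_side():
    """Networking/UI side: enqueue setpoints received 'over the network'."""
    for setpoint in (10, 20, 30):
        mailbox.put(setpoint)
    mailbox.put(None)

worker = threading.Thread(target=rtos_side)
worker.start()
linux_side()
worker.join()
```

The key property the real hardware provides, and this sketch only mimics, is that the control loop's response time is independent of whatever the Linux side is doing.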
Key Features
- Dual-OS Support: Runs RTOS and Linux simultaneously for optimal performance in real-time applications.
- Instant Boot-Up: Instantly operational within 1 second for mission-critical applications.
- Immediate Data Processing: Rapid sensor data acquisition and LCD display updates.
- Enhanced UI Experience: Powered by Qt for MCUs, the UI provides high-quality and smooth user interface experience.
- Networking Features: Smart devices can connect to the built-in MA35D1 web server for real-time monitoring and remote control.
Technical Specifications
- Processor: Dual-core 64-bit Cortex-A35, 800 MHz and Cortex-M4
- Memory: Multi-chip package (MCP) DDR SDRAM up to 512 MB
- Security Features: Integrated Nuvoton Trusted Secure Island (TSI) to enhance system security
- Multimedia: JPEG/H.264 decoder, TFT-LCD resolution up to 1080p
- Connectivity: 2 sets of Gigabit Ethernet, high-speed USB, SDIO 3.0, 4 sets of CAN FD, and 16 sets of UART
The post Nuvoton NuMicro MA35D1 Microprocessor Dual-OS Solution: Revolutionizing Industrial Automation and AIoT Applications appeared first on ELE Times.
Unlocking the Potential of 6G FR3
Courtesy: Keysight
6G aims to connect the physical, digital, and human worlds through emerging technology focusing on new spectrum utilization, artificial intelligence integration into networks and devices, digital twins, and new network architectures. These elements enhance network programmability and automation across various 6G use cases.
While the commercial deployment of 6G seems quite far away, the research needs for 6G are already here, including growing efforts around the spectrum. New frequency ranges are needed to satisfy the bandwidth needs of the ever-growing throughput requirements. Frequency range 3, or FR3, is one of the new spectrum ranges where 6G is evolving. 5G defined only two frequency ranges: FR1 and FR2. FR3 lies between FR1, often referred to as sub-6 GHz, and FR2, the so-called millimeter-wave range, and spans roughly 7 to 24 GHz.
6G Requirements: A Drive Towards Deterministic Channel Models
4G and 5G addressed radio channel modeling requirements using geometry-based stochastic channel models (GSCMs) for simulating and testing massive multiple-input and multiple-output (mMIMO) systems. However, 6G use cases and technologies are generating new requirements for radio channel modeling because of:
- Near-field (NF) and short-range communications using smaller cells.
- The need for accurate location-based services.
- Smart environments with multiple radio access technologies.
- Propagation challenges at sub-terahertz (THz) frequencies.
- Flexible adaptation and enhanced environmental awareness.
- Integrated sensing and communication (ISAC) and extreme MIMO (xMIMO).
Antenna arrays with substantial apertures are under investigation in the 6G FR3 band for MIMO technologies, including conventional MIMO, mMIMO, and xMIMO. xMIMO antenna arrays support narrow pencil beams and more MIMO layers than conventional mMIMO. Figure 1 shows an example of an extreme hybrid-beamforming FR3 upper mid-band base station. As a result, the design and validation of xMIMO wireless systems necessitate accurate intra- and inter-cluster angular characteristics from the channel model.

Ensuring that the channel model accurately captures the intra- and inter-cluster angular characteristics is essential when conducting system simulations or testing the actual performance of xMIMO base stations in RAN and Open RAN configurations. In addition, dynamic channel models are necessary for evaluating beam management and precoding adaptation over user equipment movement. These channel models consider transitions between line-of-sight / non-line-of-sight and blockage conditions.
However, the current 3GPP GSCM channel models lack the accuracy needed to develop extreme beamforming algorithms with highly directive pencil beams. To address that, a new 3GPP RP-234018 Release 19 technical study item on channel modeling enhancements for 7 to 24 GHz for New Radio (NR) was initiated.
To test the upper mid-band for FR3, test engineers need:
- Phase and time coherent multichannel emulation.
- Semi-deterministic and/or deterministic channel models.
- Accurately calibrated equipment for phase and amplitude measurements.
- Comprehensive measurement and analysis tools.
6G FR3 system testing requires phase and time-coherent multichannel emulation using semi-deterministic and deterministic channel models. From the physical layer (PHY) to the application layer, key performance metrics for 6G FR3 testing include:
- Beam weight estimation and pointing metrics.
- Beam shape and gain.
- Sidelobe levels.
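The three beam metrics above can be computed from a simple array model. The sketch below evaluates the array factor of a uniform 16-element linear array with half-wavelength spacing; the array size and weighting are assumed for illustration and do not represent Keysight's test methodology or any specific FR3 base station.

```python
import numpy as np

# Minimal sketch of the beam metrics listed above (gain, beamwidth, sidelobe
# level) for a uniform 16-element linear array at half-wavelength spacing.
# Illustrative assumptions only; not Keysight's test methodology.

N, D = 16, 0.5                                   # elements, spacing (wavelengths)
theta = np.linspace(-np.pi / 2, np.pi / 2, 20001)
n = np.arange(N)

# Broadside array factor with uniform weights
af = np.abs(np.exp(1j * 2 * np.pi * D * np.outer(np.sin(theta), n)).sum(axis=1))
af_db = 20 * np.log10(af / af.max())

gain_db = 20 * np.log10(N)                       # ~24.1 dB over one element

# Half-power (-3 dB) beamwidth of the main lobe
hpbw_deg = np.degrees(np.ptp(theta[af_db >= -3.0]))

# First sidelobe level: highest local maximum other than the main lobe
peaks = (af_db[1:-1] > af_db[:-2]) & (af_db[1:-1] > af_db[2:])
sll_db = np.sort(af_db[1:-1][peaks])[-2]         # ~-13.3 dB for uniform weights
```

For uniform weighting the first sidelobe sits near -13.3 dB regardless of array size, which is why tapered weightings, and channel models accurate enough to validate them, matter for the pencil-beam xMIMO systems discussed above.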

With a robust channel emulation solution, test engineers can reproduce a diverse propagation environment and emulate hardware impairments like phase noise and interference. They will need:
- Channel emulation capabilities to create realistic and highly accurate 6G FR3 environment models.
- Signal generation capabilities to provide the necessary 6G transmit waveform to the channel emulator.
- Performance metrics, including beamforming gain, beam width, and sidelobe levels.
- Software tools to perform phase and time-coherent multichannel emulation and create geometry-based stochastic channel models.
- Testing environment to reproduce propagation environments and ensure comprehensive testing.
Keysight 6G FR3 system test solution includes a channel emulator, channel emulation software, and an MXG signal generator. This solution creates realistic and highly accurate stochastic and deterministic models for mimicking 6G FR3 components and systems. The MXG signal generator provides the signal input to the FR3-capable channel emulator. The channel emulator also supports ISAC for detecting and tracking objects.

Precise channel emulation is crucial for 6G FR3. Accurate channel models are needed to support advanced 6G features and allow for realistic testing of 6G technologies, such as mMIMO and beamforming, under various conditions to understand how these technologies will perform in real-world scenarios. Additionally, it supports advanced use cases like ISAC, which require detailed knowledge of the channel characteristics to function effectively.
Be prepared for 6G and the challenges it brings onboard and accelerate your 6G prototyping before releasing standards using Keysight solutions for channel modeling and emulation for FR3 MIMO, mMIMO, xMIMO, and ISAC.
The post Unlocking the Potential of 6G FR3 appeared first on ELE Times.
NVIDIA Accelerated Quantum Research Center to Bring Quantum Computing Closer
Courtesy: Nvidia
As quantum computers continue to develop, they will integrate with AI supercomputers to form accelerated quantum supercomputers capable of solving some of the world’s hardest problems.
Integrating quantum processing units into AI supercomputers is key for developing new applications, helping unlock breakthroughs critical to running future quantum hardware and enabling developments in quantum error correction and device control.
The NVIDIA Accelerated Quantum Research Center, or NVAQC, announced today at the NVIDIA GTC global AI conference, is where these developments will happen. With an NVIDIA GB200 NVL72 system and the NVIDIA Quantum-2 InfiniBand networking platform, the facility will house a supercomputer with 576 NVIDIA Blackwell GPUs dedicated to quantum computing research. “The NVAQC draws on much-needed and long-sought-after tools for scaling quantum computing to next-generation devices,” said Tim Costa, senior director of computer-aided engineering, quantum and CUDA-X at NVIDIA. “The center will be a place for large-scale simulations of quantum algorithms and hardware, tight integration of quantum processors, and both training and deployment of AI models for quantum.”

Quantum computing innovators like Quantinuum, QuEra and Quantum Machines, along with academic partners from the Harvard Quantum Initiative and the Engineering Quantum Systems group at the MIT Center for Quantum Engineering, will work on projects with NVIDIA at the center to explore how AI supercomputing can accelerate the path toward quantum computing.
“The NVAQC is a powerful tool that will be instrumental in ushering in the next generation of research across the entire quantum ecosystem,” said William Oliver, professor of electrical engineering and computer science, and of physics, leader of the EQuS group and director of the MIT Center for Quantum Engineering. “NVIDIA is a critical partner for realizing useful quantum computing.”
There are several key quantum computing challenges where the NVAQC is already set to have a dramatic impact.
Protecting Qubits With AI Supercomputing
Qubit interactions are a double-edged sword. While qubits must interact with their surroundings to be controlled and measured, these same interactions are also a source of noise — unwanted disturbances that affect the accuracy of quantum calculations. Quantum algorithms can only work if the resulting noise is kept in check.
Quantum error correction provides a solution, encoding noiseless, logical qubits within many noisy, physical qubits. By processing the outputs from repeated measurements on these noisy qubits, it’s possible to identify, track and correct qubit errors — all without destroying the delicate quantum information needed by a computation.
The process of figuring out where errors occurred and what corrections to apply is called decoding. Decoding is an extremely difficult task that must be performed by a conventional computer within a narrow time frame to prevent noise from snowballing out of control.
A key goal of the NVAQC will be exploring how AI supercomputing can accelerate decoding. Studying how to collocate quantum hardware within the center will allow the development of low-latency, parallelized and AI-enhanced decoders, running on NVIDIA GB200 Grace Blackwell Superchips. The NVAQC will also tackle other challenges in quantum error correction. QuEra will work with NVIDIA to accelerate its search for new, improved quantum error correction codes, assessing the performance of candidate codes through demanding simulations of complex quantum circuits.
“The NVAQC will be an essential tool for discovering, testing and refining new quantum error correction codes and decoders capable of bringing the whole industry closer to useful quantum computing,” said Mikhail Lukin, Joshua and Beth Friedman University Professor at Harvard and a codirector of the Harvard Quantum Initiative.
Developing Applications for Accelerated Quantum Supercomputers
The majority of useful quantum algorithms draw equally from classical and quantum computing resources, ultimately requiring an accelerated quantum supercomputer that unifies both kinds of hardware.
For example, the output of classical supercomputers is often needed to prime quantum computations. The NVAQC provides the heterogeneous compute infrastructure needed for research on developing and improving such hybrid algorithms.
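The hybrid loop can be pictured with a deliberately minimal variational sketch, in which a classical optimizer steers a simulated one-parameter "quantum" expectation value. The cost function and parameter-shift rule below are standard textbook pieces, not any specific NVAQC workload:

```python
import math

# Hypothetical one-parameter "circuit": rotating a single qubit by
# theta gives expectation value <Z> = cos(theta). A classical
# optimizer (plain gradient descent via the parameter-shift rule)
# drives the quantum measurement toward the minimum energy of -1.

def energy(theta):
    """Expectation value a QPU would estimate by sampling."""
    return math.cos(theta)

def parameter_shift_grad(theta, shift=math.pi / 2):
    """Exact gradient from two extra 'circuit' evaluations."""
    return (energy(theta + shift) - energy(theta - shift)) / 2

theta, lr = 0.1, 0.5
for _ in range(100):                           # classical outer loop
    theta -= lr * parameter_shift_grad(theta)  # quantum inner calls

assert abs(energy(theta) - (-1.0)) < 1e-3      # converged near theta = pi
```

Each outer iteration is classical, while each call to `energy` stands in for a batch of quantum circuit executions, which is exactly the back-and-forth that heterogeneous infrastructure like the NVAQC is built to serve.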

New AI-based compilation techniques will also be explored at the NVAQC, with the potential to accelerate the runtime of all quantum algorithms, including through work with Quantinuum. Quantinuum will build on its previous integration work with NVIDIA, offering its hardware and emulators through the NVIDIA CUDA-Q platform. Users of CUDA-Q are currently offered access to Quantinuum’s System H1 QPU hardware and emulator for 90 days.
“We’re excited to collaborate with NVIDIA at this center,” said Rajeeb Hazra, president and CEO of Quantinuum. “By combining Quantinuum’s powerful quantum systems with NVIDIA’s cutting-edge accelerated computing, we’re pushing the boundaries of hybrid quantum-classical computing and unlocking exciting new possibilities.”
QPU Integration
Integrating quantum hardware with AI supercomputing is one of the major remaining hurdles on the path to useful quantum computing.
The requirements of such an integration can be extremely demanding. The decoding required by quantum error correction can only function if data from millions of qubits can be sent between quantum and classical hardware at ultralow latencies.
Quantum Machines will work with NVIDIA at the NVAQC to develop and hone new controller technologies supporting rapid, high-bandwidth interfaces between quantum processors and GB200 superchips.
“We’re excited to see NVIDIA’s growing commitment to accelerating the realization of useful quantum computers, providing researchers with the most advanced infrastructure to push the boundaries of quantum-classical computing,” said Itamar Sivan, CEO of Quantum Machines.

Key to integrating quantum and classical hardware is a platform that lets researchers and developers quickly shift context between these two disparate computing paradigms within a single application. The NVIDIA CUDA-Q platform will be the entry point for researchers to harness the NVAQC’s quantum-classical integration.
Building on tools like NVIDIA DGX Quantum — a reference architecture for integrating quantum and classical hardware — and CUDA-Q, the NVAQC is set to be an epicenter for next-generation developments in quantum computing, seeding the evolution of qubits into impactful quantum computers.
The post NVIDIA Accelerated Quantum Research Center to Bring Quantum Computing Closer appeared first on ELE Times.
ST’s Automotive MCU technology for next-generation vehicles
Author: STMicroelectronics
ST has been serving customers in the automotive market for over 30 years and provides them with a range of products and solutions covering most applications in a typical vehicle. As the market has evolved, so has ST’s offering, a key part of which is automotive microcontrollers (MCUs).
ST pioneered embedded non-volatile memory (eNVM) with ST10 and then introduced automotive microcontrollers with its SPC5 range based on PowerPC architecture, shipping more than one billion MCUs in automotive. Cost-effective automotive controllers from the STM8 family complemented this offer.
ST’s Stellar Family
ST’s latest generation of automotive microcontrollers is the Stellar family, which is the industry’s first Arm®-based portfolio that spans the entire automotive MCU spectrum, from low-end to high-end solutions. These advanced microcontrollers reduce complexity, ensure safety and security, and deliver optimal performance and efficiency for next-gen vehicle architectures and features. Customers can benefit from shorter development times and focus on bringing innovation and differentiation to their software-defined vehicles (SDVs) in this highly competitive market. For these reasons Stellar products are gaining momentum, particularly among our customers in Asia and Europe.
Stellar is the industry’s first MCU family built on an emerging memory technology beyond eFlash, representing the most mature and smallest-memory-cell automotive-grade solution on the market. The Stellar family is optimized for electrification, including X-in-1 vehicle motion control computing, new vehicle architectures (zonal and domain), and safety MCUs for safety-critical subsystems such as ADAS.
It fully supports automotive transformation by integrating multiple functions safely into a single device and allowing the continuous integration of new features in vehicles. This is made possible by key technologies, including the right choice of core technology, virtualization, Ethernet support, and ground-breaking memory technology embedded in the automotive MCUs, a game changer for customers facing challenges with application memory sizing.
Stellar MCUs are based on Arm® Cortex®-R52+ technology. This high-performance processor delivers real-time virtualization support for time-critical secure and safety systems. It can run multiple applications simultaneously with freedom from interference. And thanks to fully programmable auxiliary cores, it is possible to accelerate specific functions, such as routing, low-power management, and digital filtering, while offloading the main cores.
“As the driver experience continues to evolve in the age of AI and software-defined vehicles, advancing automotive functional safety, flexibility and real-time performance capabilities is essential,” said Dipti Vachani, senior vice president and general manager, Automotive Line of Business, Arm. “Built on Arm, the Stellar microcontroller family taps into the Arm compute platform’s advanced safety and real-time features, as well as the broad Arm software ecosystem. This enables car manufacturers to comply with strict safety regulations while implementing innovative features that keep them at the forefront of the automotive industry.”
Stellar MCUs enable the introduction of Ethernet capabilities in vehicles, and Stellar is the first ST MCU family to embed an Ethernet switch. Thanks to Ethernet, data exchange is more efficient and flexible, supporting the needed gigabit throughput with a higher level of security. By supporting various in-vehicle communication topologies, such as the Ethernet ring, automotive MCUs fulfill the promise of halving the length of cross-car wiring cables and cutting manufacturing costs.
Phase Change Memory is set to redefine what is possible in vehicle software
Stellar MCUs, with the embedded Phase Change Memory (PCM) technology and its flexibility, transform the process of Over-the-Air (OTA) updates. In the automotive industry, OTA updates are crucial for adding new features and safety or security patches without physical intervention. However, this flexibility often requires careful consideration of future memory needs, which can lead to increased costs and complex planning.
ST’s PCM innovation is no ordinary memory. Not only is it the industry’s smallest memory cell for automotive MCU, but it is pioneering a transformative breakthrough in automotive and set to redefine what is possible in vehicle software. Thanks to ST’s innovative PCM technology, memory capabilities are reaching a new level of sophistication. This is not just about memory performance. It is a forward-thinking solution that brings adaptability and lasting value to the automotive landscape enabling the final developers to continuously improve and upgrade functions.
As vehicles become increasingly software-defined, the ability to introduce new features and enhancements is essential. PCM’s groundbreaking technology will support the shift toward more adaptable, future-focused vehicles, giving automakers new ways to refine experiences as vehicles continue to advance.
Additionally, PCM delivers the ability to support uninterrupted OTA updates. PCM securely stores updates without impacting the vehicle’s current operations. Thanks to concurrent read and write capabilities, the new software download does not interfere with the application code already running on the MCU, ensuring continuous performance during the update process.
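The concurrent read/write pattern can be sketched with a generic dual-slot (A/B) update model. The class and slot names below are a hypothetical illustration of the general pattern, not ST’s actual OTA implementation:

```python
# Toy model of an A/B (dual-image) OTA flow enabled by memory that
# supports concurrent read and write: the running application keeps
# executing from the active slot while the new image streams into
# the inactive slot; a swap then activates it.

class DualSlotFlash:
    def __init__(self, image):
        self.slots = {"A": image, "B": None}
        self.active = "A"

    def running_image(self):
        # Reads (execution) always come from the active slot.
        return self.slots[self.active]

    def stream_update(self, new_image):
        # Writes target the inactive slot, so execution is untouched.
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = new_image
        return inactive

    def activate(self, slot):
        self.active = slot

flash = DualSlotFlash("fw-1.0")
slot = flash.stream_update("fw-1.1")
assert flash.running_image() == "fw-1.0"   # uninterrupted during download
flash.activate(slot)
assert flash.running_image() == "fw-1.1"   # new firmware after the swap
```

With flash that blocks reads during a write, this scheme needs extra banking tricks; concurrent read/write memory like PCM removes that constraint, which is the point the paragraph above makes.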
Stellar P, designed for the integration of multiple functions, and Stellar G, for the realization of Software-Defined Vehicles (SDV) zonal controllers, are two series leveraging ST’s internally developed eNVM. They are built on 28nm FD-SOI technology, allowing them to achieve maximum frequency with lower power consumption and enhanced radiation immunity. Stellar is the first 28nm product certified for functional safety and will enter production by the end of this year.
The Stellar family also enables the growing X-in-1 trend toward more affordable electromobility, supporting the decisive switch from fossil-fueled to electrically powered vehicles. X-in-1 powertrain solutions combine multiple components into a single ECU, allowing manufacturers to create efficient, compact, cost-effective, and high-performing vehicles.
Stellar offers scalable X-in-1 implementation, accommodating a growing number of ECUs from low to high integration levels. This solution supports increasingly complex X-in-1 systems by providing enhanced availability of cores, analog components, and I/O capabilities.

“As a global leader in lithium-ion batteries, Sunwoda provides stable and reliable electronic system solutions for automotive suppliers worldwide. Our new collaboration with STMicroelectronics focuses on developing solutions using ST’s advanced Stellar microcontrollers and proprietary production processes, which primarily include battery management systems, and VDC/Zonal and body control functions. Together, we aim to provide intelligent solutions that enhance the next generation of energy vehicles in China and globally,” said Wang Mingwang, founder, Sunwoda.
2. FD-SOI technology to achieve the maximum possible frequency at lower power consumption and to strengthen radiation immunity
3. New powerful over-the-air methodology and true EEPROM capabilities
4. A set of Ethernet-related IPs (MAC, MACsec, switches) that enable distribution of high-volume data, allow different topologies of in-vehicle communication, and reduce vehicle manufacturing cost
5. Fully programmable auxiliary cores that accelerate specific functions, such as routing, low-power management, and digital filtering, while offloading the main cores
As part of the expanded roadmap for automotive MCUs, ST will introduce its market-leading general-purpose STM32 microcontroller platform to the automotive sector. The STM32 platform is well recognized for its cost optimization, simplicity, and reliability. Augmented with automotive grade quality and safety, STM32A will achieve up to ASIL B standards. This platform will be designed to handle edge actuation, from the simplest functions to more sophisticated single tasks, all at optimized costs. It will be particularly well-suited for applications like motor control in vehicle systems, including windows, mirrors, and roofs.
Best of industrial and automotive worlds: towards a converged future
Over time, the convergence of the industrial and automotive hardware and software platforms will combine the best of both worlds. Automotive brings strong security expertise, and industrial is built on strong Internet of Things and artificial intelligence solutions. The converged future will share hardware technologies, cores, and a common ecosystem of tools and software support. Such convergence will enable customers to seamlessly transition between solutions, offering simplification and full scalability to innovate faster.
Edge AI is one example where technology being adopted now in industrial applications will benefit automotive in the future. Neural accelerator technology, and the associated tools that enable developers to easily implement AI in their applications whatever their level of data science expertise, will enhance automotive systems in the future. ST has spent approximately 10 years investing in the development of microcontrollers, smart sensors, and AI software tools to meet the needs of our customers and harness the power of edge AI.
While AI adoption in automotive—beyond autonomous driving—is still in its initial stages, there are emerging trends of promising use cases for system optimization, energy efficiency, and problem-solving. For example, virtual sensors can measure rotor temperatures, and predictive maintenance can ensure vehicle reliability. As the number of sensors in vehicles heavily increases, AI will play a key role in virtualizing many of them, further enhancing automotive performance. Security is another area where we see the convergence of industrial and automotive bringing significant benefits.
This future will be based on the most advanced and more efficient 18nm process technologies. ST’s advanced technology portfolio for automotive MCUs spans from 40 to 28 to 18nm, selected to optimize product performance and cost.
The benefits of the IDM model for our customers
As an integrated device manufacturer (IDM), ST develops fundamental semiconductor process technologies, creates core intellectual property (IP), designs products using these technologies and IP, and manufactures, tests and packages them using owned facilities or through partnerships. This brings several benefits for our customers:
- Processes designed and refined to meet the application needs of our customers in various markets.
- IP blocks optimized for specific functions and systems, owned by ST.
- Manufacturing processes are optimized for key performance and yield through the tight teamwork between process, product, and operational teams.
- Control of manufacturing capacity and creation of flexible, reliable supply chains.
This is particularly important for our automotive customers.
An example of these benefits is the combination of FD-SOI and PCM technologies that ST has developed for its Stellar microcontrollers. ST was one of the key innovators in both technologies, working with partners to bring them to market. ST’s ability to master the technology and tailor it to automotive applications has resulted in products with unique benefits. ST’s implementation of PCM technology has allowed the creation of the smallest physical memory cell in the industry delivering 2x the memory density of alternatives.
Thanks to the high-energy efficiency, high reliability, and radiation immunity of this memory technology, ICs designed in FD-SOI with embedded PCM meet the most stringent requirements of automotive applications. ST’s PCM technology achieves automotive requirements for AEC-Q100 Grade 0 with an operating temperature up to +165°C. The patented technology supports high-temperature data retention, including during solder reflow, so firmware can be uploaded before soldering.
Ecosystem plays a key role for transformation
An extensive partner ecosystem, from developer tools to specific libraries for safety, security, and data exchange and distribution, complements this leading technology and further augments ST’s portfolio capabilities. It also provides the necessary simplification, aiding our customers in their transformation journey towards software-defined vehicles.

“STMicroelectronics and Green Hills Software are working closely together to deliver innovative integrated hardware and software solutions that address the growing challenges automotive OEMs and Tier 1s face in next generation vehicle zonal architectures.” said Dan O’Dowd, Founder and CEO at Green Hills Software. “Green Hills production-proven safety-certified RTOS and tools, coupled with ST’s Stellar SR6’s unique communication IP, deliver advanced fault tolerant zonal networking that enables significant per-vehicle cost savings while reducing time-to-market.”

“With MICROSAR Classic, we enable our customers with safe and secure basic software for ECUs for a wide range of use cases. Thanks to many years of close cooperation with ST, the corresponding support for the new Stellar MCUs is already available,” says Jochen Rein, Director of the Product Line Embedded Software and Systems at Vector. “By integrating Stellar’s advanced hardware with Vector’s robust software, customers get the highest level of safety and reliability for both ADAS applications and to successfully manage the transition to Software-Defined Vehicles.”

“iSOFT is a leading developer of automotive operating systems in China and the premium partner of AUTOSAR for China’s infrastructure software. Since the introduction of its collaboration with ST in 2016, iSOFT has become ST MCAL agent in China. This includes multiple microcontrollers such as SPC58/SPC56/STM8A, and the companies will also engage in a deeper strategic cooperation on the newly introduced Stellar family that will support EasyXMen Open Source Operating System in the future.” Luo Tong, Vice President, iSOFT

“Neusoft Reach’s software platform, NeuSAR, leads the mass production of China’s full-stack “AUTOSAR +Middleware,” widely used in next-gen ADAS, chassis, power, and body control systems. Neusoft Reach provides complete solutions based on ST’s SPC5 and Stellar E series automotive MCU, including application/basic software, bootloader, refresh, and simulation, which will be complemented with the new gen of Stellar P and G series. Both companies will work together to create a higher level of automotive-grade software and hardware integrated solutions to help OEMs and tier 1s to bring efficient, personalized, and differentiated functions and accelerate SDV innovation.” Jipeng Wang, Director, NeuSAR CP Products BU, Neusoft Reach
Conclusion
By building on common foundations across product dimensions and focusing on robust automotive quality, ST serves a wide market with a comprehensive product range that is both “broad”—spanning from as low as 128 KB to 64 MB memory, single to multicore computation with virtualization—and “deep,” with each series tailored for specific functions: Stellar P and G series focus on integration, and STM32A will be optimized for value, targeting single-core applications that prioritize efficiency and simplicity.
ST’s expanded automotive microcontroller roadmap focuses on helping customers reduce complexity and improve efficiency while ensuring the highest security and safety standards for next-gen cars. It addresses electrification, personalization, automation, and connectivity.
The post ST’s Automotive MCU technology for next-generation vehicles appeared first on ELE Times.
Trump’s Tariff Surge Rattles Global Electronics Industry: Can India Rise Amid Disruption?
As the Trump administration proposes the introduction of “reciprocal tariffs”—taxing imports at the same rate foreign governments impose on U.S. goods—the global electronics and semiconductor industry is bracing for ripple effects that could redefine supply chains, trade routes, and strategic investments. With India emerging as a serious contender in global electronics manufacturing, these policy shifts could present both opportunities and challenges for the country’s growing semiconductor ambitions.
Global Supply Chains in Flux
The electronics and semiconductor sectors are among the most intricately connected global industries. From chip design in the U.S. and Taiwan, to wafer fabrication in South Korea and China, to final assembly in India and Vietnam—every component in a finished product typically crosses multiple borders. A tariff war threatens to disrupt these finely tuned global supply chains.
This risk was underscored recently when President Donald Trump announced a staggering 104% tariff on Chinese electric vehicles, shaking global markets. Reacting to the tariff hike, Tata-owned Jaguar Land Rover halted its shipments to the U.S. for a month in response to a separate 25% import tariff, citing sudden cost pressures and logistical uncertainty. These developments signal a volatile phase for companies relying heavily on cross-border operations.
For India, which imports up to 88% of its semiconductor requirements, disruptions in sourcing from East Asian hubs could cause short-term volatility in the availability and pricing of key components. Additionally, higher tariffs could push up input costs for Indian manufacturers exporting to the U.S., making pricing less competitive.
A Tailwind for ‘Make in India’?
However, there’s a silver lining. As geopolitical tensions rise and multinationals seek “China Plus One” strategies, India is increasingly viewed as a viable alternative. The Indian government’s push under the Production Linked Incentive (PLI) scheme, coupled with new semiconductor fabrication plans and robust demand for electronics, places the country in a favorable position to absorb diverted investments.
This shift is already underway. Apple and Samsung are reportedly accelerating plans to shift manufacturing to India, partly to hedge against Trump’s rising tariffs on Chinese goods. Apple, for instance, has already begun iPhone production at Foxconn’s Tamil Nadu facility, with plans to ramp up output in 2025. These strategic realignments bolster India’s role in the global value chain.
If reciprocal tariffs deter trade between the U.S. and China, Indian manufacturers may gain a competitive edge—particularly in segments like PCB assembly, mobile manufacturing, and back-end chip packaging. This could catalyze India’s ambition to become a $300 billion electronics manufacturing hub by 2026.
Impact on Competitiveness
Over 80% of the U.S. semiconductor industry’s production is destined for international markets, making it highly dependent on global exports. Imposing higher tariffs may weaken its global competitiveness, particularly if retaliatory measures by other countries kick in. On the flip side, Chinese manufacturers could double down on building self-reliant supply chains, while Indian firms may find new export opportunities if trade patterns realign.
Yet, uncertainties remain. India’s electronics industry still depends significantly on imports of chips and sub-components. Tariff-induced disruptions could lead to cost escalations, affecting price-sensitive consumer markets both in India and abroad.
Export Pressures & Strategic Realignment
With India expanding its export capabilities in smartphones, consumer electronics, and electric vehicle components, increased trade barriers may force a rethinking of pricing strategies. Electronics brands exporting to the U.S. could face squeezed margins or may need to reroute operations to avoid tariffs.
To stay competitive, global electronics firms may explore shifting part of their production from China to tariff-neutral zones such as India, Mexico, or Southeast Asia. This trend, already underway post-COVID-19, may accelerate under tariff-driven pressure.
Policy Implications for India
India must walk a fine line. While these global shifts could open up new export windows, there’s also a risk of becoming collateral damage in a broader trade conflict. To safeguard domestic manufacturers and leverage emerging opportunities, the Indian government should consider:
- Negotiating strategic trade agreements with the U.S., EU, and ASEAN nations.
- Providing greater ease-of-doing-business incentives for relocating manufacturers.
- Accelerating semiconductor ecosystem development to reduce import dependencies.
- Offering temporary tariff shelters or rebates for affected MSMEs in electronics exports.
The Trump administration’s proposed tariffs may well be a turning point for the global semiconductor and electronics industry. For India, this could serve as both a test and an opportunity—to deepen its electronics manufacturing base, attract foreign investments, and reposition itself as a trusted global partner in a rapidly changing trade environment. Strategic foresight, nimble policy responses, and continued innovation will be key to navigating the challenges ahead.
The post Trump’s Tariff Surge Rattles Global Electronics Industry: Can India Rise Amid Disruption? appeared first on ELE Times.
Transforming Edge Software Development with Arm-based Virtual Prototyping
Courtesy: Synopsys
We’re excited to announce Virtualizer Native Execution for Arm-based machines. This groundbreaking virtual prototyping technology will transform how software is developed for edge devices and applications — particularly in the automotive, HPC, IoT, and mobile industries.
Virtualizer Native Execution enables a paradigm shift in edge-focused software development via improved:
- Simulation performance. Virtualizer Native Execution significantly boosts the performance of virtual prototypes — speeding up software development, debugging, and testing.
- Development productivity. Virtualizer Native Execution leverages cloud-native approaches to enhance productivity, reduce toolchain silos, and make modern development workflows more accessible for embedded software engineers.
As the market-leading virtual prototyping solution with the largest library of models and IP, Virtualizer enables developers to work with virtual prototypes of target hardware instead of physical setups that are location-dependent and difficult to scale.
Virtualizer Native Execution extends the complete Virtualizer tool suite to the Arm ecosystem, allowing virtual prototypes to be built, executed, and tested directly on Arm-based machines. And because it can be leveraged across development and computing environments — on-premises, in the cloud, and at the edge — it eliminates toolchain and workflow silos and helps increase development flexibility and agility.
Virtualizer Native Execution significantly boosts development speed and efficiency via:
- Native execution. Instead of simulating the target hardware’s CPU on an Instruction Set Simulator (ISS), Virtualizer Native Execution enables virtual prototypes to be executed directly on the host CPU. This significantly reduces boot times (from 20 minutes to tens of seconds for a typical Android boot).
- Scalability. Modern Arm host machines offer more than 96 cores, and Virtualizer Native Execution can directly map each core of the virtual system-on-chip (SoC) to a physical core to greatly accelerate prototype performance.

Arm-based CPUs have long dominated the mobile market, and in recent years they’ve been increasingly used for automotive, IoT, consumer, and other edge-based applications. Those in the cloud and HPC markets have also embraced Arm CPUs and IP, which provide an alternative to traditional x86-based solutions and deliver exceptional performance, power, and cost benefits.
This widespread adoption is leading to greater alignment and uniformity of the CPUs and toolsets being used across on-premises, cloud, and edge environments. Often referred to as Instruction Set Architecture (ISA) parity, this uniformity provides new opportunities to streamline development efficiency and flexibility.
Virtualizer Native Execution supports the increased adoption and development of Arm-based solutions and takes advantage of ISA parity to supercharge software development and edge innovation.
Combining virtual prototyping with hardware-assisted verification (HAV)
Virtualizer Native Execution also supports hybrid emulation, which combines the unique strengths of virtual prototyping and hardware-assisted verification (HAV). Tightly integrated with Synopsys HAV solutions, Virtualizer Native Execution supports hybrid setups where the CPU subsystem is virtualized and the rest of the device under test (DUT) is emulated. And because it eliminates ISS overhead and runs natively on the host CPU, Virtualizer Native Execution is able to keep up with the fastest emulation systems (including the new ZeBu-200). The speed and scalability of Virtualizer Native Execution also enable new emulation use cases, like application-driven performance and power validation.
Taking embedded software development to the cloud
Developing embedded software for edge devices has long been a fragmented process involving complicated lab setups, delicate test boards and cables, and disparate toolsets. Not only has this hindered efficiency and scalability, but it has prevented the adoption of modern, agile development processes.
With Virtualizer Native Execution, developers can:
- Build and scale CI/CD pipelines in the cloud.
- Take advantage of higher performance and throughput as well as faster boot times.
- Replicate and align virtual prototypes across development and operating environments — on-premises, in the cloud, and at the edge.
Virtualizer Native Execution for Arm marks a significant leap forward in edge-focused software development. With better performance and scalability, native execution on Arm-based machines, and cloud-to-edge parity, developers can supercharge their virtual prototyping workflows.
The post Transforming Edge Software Development with Arm-based Virtual Prototyping appeared first on ELE Times.
High-Speed Data Centers Owe a Debt of Gratitude to DRAM Memory Interfaces
Courtesy: Renesas
High-performance AI data centers are reshaping semiconductor design and investment trajectories like no technology we’ve seen. As recently as 2022, spending on AI infrastructure was in the vicinity of $15 billion. This year, expenditures could easily top $60 billion. Yes, that sucking sound you hear is the oxygen being pulled from every investment plan and breathed into data centers.
We are clearly operating in an era of unprecedented artificial intelligence capital outlays – notwithstanding the potential impact of newcomers like DeepSeek. But while high-performance computing processors from Nvidia, AMD, and others are busy stealing the limelight, the high-bandwidth memory that stores training and inference models is having its day too – with 2024 DRAM revenue setting a record of nearly $116 billion.
Data center servers are driving a continuous increase in CPU core count, which requires more memory capacity to make higher-bandwidth data available to each processor core. But the laws of physics are quickly catching up: a CPU signal can only travel so fast and so far. That’s where memory interface devices such as registering clock drivers (RCDs) and data buffers come into play. By allowing the clock, command, address, and data signals to be re-driven with much-improved signal integrity, these interfaces enable the entire memory subsystem to scale in speed and capacity.
Today, RCDs enable registered DIMMs (RDIMMs) to operate at up to 8 gigatransfers per second (GT/s). Most data center servers use RDIMMs, although some HPC systems need even greater memory subsystem performance.
Memory Interfaces Further Accelerate DRAM and Processor Performance
Notable for the vital role it plays in data center server systems, DRAM architecture hasn’t actually changed dramatically over the past three decades. Increases in density, speed, and power efficiency can be attributed largely to deep-submicron semiconductor scaling, while new 2.5D and 3D stacked DRAM packaging allows for higher-capacity DIMM modules.
As explained above, advances in memory interface technology – beginning with synchronous DRAM and carrying across multiple generations of double data rate DRAM – have played an outsized role in helping the interface keep pace with processor speeds.
Multiplexed rank DIMMs (MRDIMMs) are an innovative technology designed for AI and HPC data center applications. Made possible through a partnership between Renesas, Intel, and memory suppliers, MRDIMMs allow the memory subsystem to scale to much higher bandwidths than RDIMMs on corresponding server systems. Specifically, MRDIMMs double the data transfer speed of the host interface by enabling two ranks of memory to fetch data simultaneously, which yields a six to 33 percent improvement in memory bandwidth.
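Conceptually, the buffer’s job can be pictured as interleaving two rank-level streams onto one faster host stream. The sketch below is a toy model of that multiplexing idea, not the actual JEDEC MRDIMM protocol:

```python
# Sketch of the MRDIMM idea: two ranks each fetch at the DRAM-side
# rate, and the buffer multiplexes their outputs onto a host
# interface running at twice that rate.

def mux_ranks(rank_a, rank_b):
    """Interleave two equal-length rank bursts onto one host stream."""
    out = []
    for a, b in zip(rank_a, rank_b):
        out += [a, b]   # host sees 2x the per-rank transfer rate
    return out

host_stream = mux_ranks(["A0", "A1"], ["B0", "B1"])
assert host_stream == ["A0", "B0", "A1", "B1"]
```

Because each rank still runs at a rate the DRAM devices can sustain, the host interface speed can double without requiring faster DRAM silicon, which is exactly the scaling lever described above.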
Renesas DRAM Interfaces Help Close the Processor-to-Memory Performance Gap
Late last year, Renesas released the first complete memory interface chipset solution for second-generation DDR5 MRDIMMs. With an operating speed of 12.8 GT/s, this represented a huge improvement in how fast we can drive the interface compared to the 8.0 GT/s maximum for a standard DIMM.
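For a rough sense of what that jump means, here is a back-of-the-envelope peak-bandwidth calculation. It assumes a standard 64-bit (8-byte) DIMM data path and ignores real-world overheads such as refresh cycles and controller efficiency, so the figures are illustrative only:

```python
# Peak per-DIMM bandwidth from the interface transfer rate.
# Assumes a 64-bit (8-byte) data path; sustained bandwidth in practice
# depends on rank interleaving, refresh overhead, and controller efficiency.

BUS_BYTES = 8  # 64-bit DIMM data bus (assumed)

def peak_bandwidth_gbs(transfer_rate_gts: float) -> float:
    """Peak bandwidth in GB/s for a given transfer rate in GT/s."""
    return transfer_rate_gts * BUS_BYTES

rdimm = peak_bandwidth_gbs(8.0)    # standard DDR5 RDIMM
mrdimm = peak_bandwidth_gbs(12.8)  # second-generation MRDIMM

print(f"RDIMM  @  8.0 GT/s: {rdimm:.1f} GB/s")
print(f"MRDIMM @ 12.8 GT/s: {mrdimm:.1f} GB/s, a {mrdimm / rdimm - 1:.0%} gain")
```

Per module, the raw transfer-rate increase alone is worth roughly 60 percent more peak bandwidth before any system-level effects are counted.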
How did we get there? Through a combination of highly orchestrated component technologies. Since the memory interface business's inception at Integrated Device Technology (IDT), before the company was acquired by Renesas, we have been on a mission to solve one problem confounding memory performance: signal integrity.
As the speed gap between DRAM and the CPU grew, the physical loading of the DRAM became a problem for system architects. We saw an opportunity to address the challenge through our analog and mixed-signal design competency. The first product in line was an RCD used to intercept and redrive the clock and command/address signals between the DRAM and the processor. Subsequently, we developed a line of fully buffered DIMMs, which buffered all signal types on the system memory interface, including clocks, command/address, and data.
Fast forward, and our newest DDR5 memory interfaces include second-generation RCDs and data buffers for MRDIMMs, in addition to a power management IC (PMIC), making us the only company to offer a complete chipset solution for the next generation of RDIMMs and MRDIMMs. Renesas has also made a significant contribution to power efficiency by championing a concept called “voltage regulation on DIMM.” Voltage regulation circuitry now sits directly on the DIMM rather than on the motherboard, allowing for a more efficient, distributed power model. This is done using PMICs that locally generate and regulate all the voltages needed by the various DIMM components.
Leveraging the Electronics Design Ecosystem for the Future
Renesas has amassed a vast base of in-house expertise by collaborating with a large design ecosystem of leading CPU and memory providers, hyperscale data center customers, and standards bodies such as JEDEC. That expertise gives us the freedom to remove the bottlenecks that limit how many DRAM components a DIMM can carry and how fast they can run, so we can continue increasing DIMM speeds and capacity.
It also opens opportunities to leverage technologies developed for AI data centers and redirect them to emerging use cases. That’s true for the higher processing and memory bandwidth requirements influencing designs at the edge of industrial network controls, where data must be captured and turned into actionable insights. And, it applies to the surging data volumes required by automotive safety and autonomous driving applications, which are quickly turning our vehicles into servers on wheels.
The post High-Speed Data Centers Owe a Debt of Gratitude to DRAM Memory Interfaces appeared first on ELE Times.
Mission Moon — how CubeRover makes autonomous docking for space possible
Courtesy : Bosch
The moon — an environment full of extremes that can push even the most advanced technologies to their limits. Abrasive dust blocks sensitive sensors, temperatures as low as -150°C challenge conventional electronics, and the complete absence of GPS makes precise navigation nearly impossible. Such conditions demand innovative solutions tailored to the unique requirements of this extraordinary environment.
Bosch brings its technological expertise to a visionary project funded by NASA’s Tipping Point program with $5.8 million. In collaboration with Astrobotic, WiBotic, the University of Washington, and NASA’s Glenn Research Center, the project unites contributions from leading innovators. CubeRover(TM), developed by Astrobotic, is the mission’s lightweight and modular exploration vehicle. WiBotic contributes wireless charging technology, enabling efficient energy transfer under lunar conditions. Bosch focuses on autonomous docking, providing critical systems that ensure the CubeRover(TM) can navigate and connect reliably in this extreme environment. The University of Washington and NASA Glenn Research Center contribute by offering performance characterization and testing of the wireless charging system.
Together, these efforts promise to revolutionize space exploration while paving the way for future innovations in autonomous systems development.
The minds behind the mission — Vivek Jain and his team
One of them is Vivek Jain, a lead expert at Bosch Research. Astrobotic serves as the principal investigator for this project, working closely with Bosch, which contributes its expertise in sensing, software, and autonomous docking for wireless power transmission.
Together, the partners are developing technologies that enable the rovers to navigate the moon with precision — without GPS. To achieve this, Bosch relies on a combination of camera data, Wi-Fi fingerprinting, and sensor fusion. These approaches ensure that the rovers operate reliably even under extreme conditions such as intense light or the presence of sticky lunar dust. With these innovative solutions, Bosch plays a crucial role in advancing the development of autonomous systems designed for the moon’s demanding environment.

CubeRover(TM) is the centerpiece of the lunar mission, designed specifically for operation on the moon’s surface. A modular, ultra-lightweight, and compact rover, its smallest form factor weighs less than 5 pounds and is roughly the size of a shoebox. These characteristics enable the simultaneous transport of multiple rovers on a central platform (lander), which lands on the lunar surface and serves as a base station for power and navigation.
This makes missions not only more flexible but also more cost-effective, as multiple rovers can be deployed with a single launch. In addition to its compact size, CubeRover(TM) impresses with its versatility. It can carry scientific instruments such as cameras or spectrometers, opening up new approaches to lunar exploration. With its innovative technology and ability to operate reliably even in extreme environments, it represents a turning point in the exploration of new worlds.
Reaching the destination without GPS — the challenges of navigating the moon
How Bosch develops creative solutions for navigation.
Orientation with visual markers and sensor fusion
How do you navigate on the moon, where GPS is not an option? Bosch has the answer with innovative technologies that guide the CubeRover(TM) safely through the extreme conditions of the lunar surface. The lander, a platform on the moon’s surface, serves as a central base station for the CubeRover(TM), providing energy and orientation. Special visual markers, known as AprilTags, are attached to the lander and function like QR codes. These markers are detected by the CubeRover(TM)’s camera, enabling it to accurately calculate its position and navigate securely.
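To illustrate the geometry involved, the sketch below estimates range and bearing to a marker from its apparent size and position in the image, using a simple pinhole-camera model. All numbers (focal length, marker size, image width) are invented for the example; the actual system recovers a full pose from the detected marker corners:

```python
import math

# Pinhole-camera range/bearing estimate from a detected fiducial marker.
# Hypothetical parameters: a 0.20 m tag on the lander, a camera with an
# 800 px focal length and a 640 px-wide image.

FOCAL_PX = 800.0      # focal length in pixels (assumed)
IMG_CENTER_X = 320.0  # principal point x for a 640 px-wide image (assumed)
TAG_SIZE_M = 0.20     # physical edge length of the marker (assumed)

def range_and_bearing(tag_width_px: float, tag_center_x_px: float):
    """Distance (m) and horizontal bearing (deg) to the marker."""
    distance = FOCAL_PX * TAG_SIZE_M / tag_width_px
    bearing = math.degrees(math.atan2(tag_center_x_px - IMG_CENTER_X, FOCAL_PX))
    return distance, bearing

dist, brg = range_and_bearing(tag_width_px=80.0, tag_center_x_px=400.0)
print(f"distance ~ {dist:.2f} m, bearing ~ {brg:.1f} deg")
```

A marker that appears 80 px wide at this focal length is about two meters away; its offset from the image center gives the heading correction.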
In addition, the CubeRover(TM) employs sensor fusion, combining camera data with information from motion sensors and wheel speed sensors. This technology ensures stability even on uneven or slippery surfaces — performing reliably amidst dust, intense light, or wheel slips.
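A minimal way to picture such fusion is a complementary filter that blends a drift-prone odometry estimate with an occasional camera fix. The weighting below is arbitrary, and a flight system would use a full Kalman-style filter over position and orientation; this is only a one-dimensional sketch:

```python
from typing import Optional

# Complementary-filter sketch: blend drift-prone wheel odometry with an
# intermittent camera (marker) fix. ALPHA is an arbitrary illustrative weight.

ALPHA = 0.9  # trust placed in odometry between camera fixes (assumed)

def fuse(odometry_pos: float, camera_pos: Optional[float]) -> float:
    """Blend odometry with a camera fix; fall back to odometry alone."""
    if camera_pos is None:  # marker occluded by dust or shadow
        return odometry_pos
    return ALPHA * odometry_pos + (1 - ALPHA) * camera_pos

# Wheel slip made odometry read 10.0 m; the camera fix pulls it back.
print(fuse(10.0, 9.5))
print(fuse(10.0, None))
```

The camera fix gradually corrects accumulated odometry error, while the filter degrades gracefully to dead reckoning whenever the marker is out of view.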
Wi-Fi fingerprinting as a backup solution
In addition to visual markers, Bosch uses Wi-Fi fingerprinting to ensure the CubeRover(TM)’s navigation. The lander, the central platform on the lunar surface, emits Wi-Fi signals that the rover receives. Based on the signal strength and characteristics, the CubeRover(TM) determines its position and creates a map of the surroundings.
This method acts as a backup when visual markers are obscured by dust or shadows, ensuring the CubeRover(TM) remains navigable even under challenging conditions. By combining visual markers, sensor fusion, and Wi-Fi fingerprinting, Bosch enables precise navigation – entirely without GPS.
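Conceptually, fingerprint localization reduces to matching observed signal strengths against a pre-surveyed map. The sketch below does a nearest-neighbor match in signal space; the survey positions and RSSI values are invented for illustration:

```python
import math

# Wi-Fi fingerprint localization sketch: match observed signal strengths
# (RSSI, in dBm) against a pre-surveyed database. All values are made up.

FINGERPRINTS = {
    (0.0, 0.0): [-40, -62, -75],  # RSSI from three hypothetical lander antennas
    (2.0, 0.0): [-48, -55, -70],
    (2.0, 2.0): [-55, -50, -60],
    (0.0, 2.0): [-52, -60, -58],
}

def locate(observed):
    """Nearest-neighbor match in signal space -> estimated (x, y) in meters."""
    return min(
        FINGERPRINTS,
        key=lambda pos: math.dist(FINGERPRINTS[pos], observed),
    )

print(locate([-47, -56, -71]))  # closest to the (2.0, 0.0) survey point
```

Because the match happens in signal space rather than image space, it keeps working when dust or shadow hides the visual markers.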
Wireless charging
The small rover presents unique challenges for energy supply. Being too small for large solar panels, the CubeRover(TM) employs an innovative solution: wireless charging. The lander collects solar energy and transfers it wirelessly to the rover.
An additional benefit of this technology is the heat generated during the charging process. This heat is used to protect the rover from the extreme temperatures of the lunar night. Intelligent charging algorithms ensure that the rover aligns its position optimally for efficient energy transfer.
The post Mission Moon — how CubeRover makes autonomous docking for space possible appeared first on ELE Times.
Delta Electronics Fuels India’s Digital Ambitions with Scalable, Sustainable ICT Solutions
In an exclusive conversation with Rashi Bajpai of ELE Times, Pankaj Singh, Head of Data Center & Telecom Business Solutions at Delta Electronics India, delves into the company’s groundbreaking strides in energy-efficient ICT infrastructure.

Highlighting innovations tailored for India’s unique needs and global scalability, he discusses Delta’s pivotal role in shaping sustainable, high-performance telecom and data center ecosystems aligned with the Digital India vision.
Here is the excerpt:
ELE Times: Delta Electronics has long been a leader in energy-efficient ICT solutions. Could you elaborate on the latest innovations in your telecom and data center products that optimize energy consumption without compromising on performance?
Mr. Pankaj Singh: Delta continues to innovate in the ICT sector by developing high-efficiency power and cooling solutions that optimize energy consumption while maintaining superior performance. Our telecom and data center solutions incorporate 97% efficiency rectifiers, modular UPS systems exceeding 97% efficiency, and AI-driven thermal management that dynamically adjusts cooling based on real-time data, significantly improving Power Usage Effectiveness (PUE). Additionally, our hybrid power systems seamlessly integrate renewable energy sources, reducing reliance on conventional power grids. These innovations help businesses enhance operational efficiency while reducing carbon footprints and energy costs, reinforcing Delta’s commitment to sustainability and technological advancement.
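PUE itself is a simple ratio: total facility power divided by the IT load, so the effect of smarter cooling is easy to quantify. The figures below are purely illustrative, not Delta product data:

```python
# Power Usage Effectiveness (PUE): total facility power / IT equipment power.
# A PUE of 1.0 would mean zero cooling and power-conversion overhead.
# The kW figures below are invented for illustration.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE for a facility drawing total_facility_kw to serve it_load_kw of IT."""
    return total_facility_kw / it_load_kw

# A 1,000 kW IT load with 400 kW of cooling/power-conversion overhead:
print(f"baseline PUE: {pue(1400, 1000):.2f}")
# Adaptive cooling trims the overhead to 200 kW:
print(f"improved PUE: {pue(1200, 1000):.2f}")
```

In this hypothetical case, halving the overhead moves PUE from 1.40 to 1.20, i.e. every kilowatt-hour delivered to IT costs 0.2 kWh of overhead instead of 0.4 kWh.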
ELE Times: As a company committed to both local and global markets, how does Delta ensure that its telecom and data center products are specifically tailored to meet India’s unique requirements while also being scalable for international use?
Mr. Pankaj Singh: Delta adopts a localization-with-scalability approach to develop telecom and data center solutions that address India’s unique challenges while remaining adaptable for global markets. Last year, we inaugurated our global R&D Center in India with the vision of “Design in India, for the World.” Our India-based R&D team develops products tailored to the country’s diverse climatic conditions, including extreme temperatures and humidity, while ensuring power reliability through high-efficiency rectifiers and advanced battery storage solutions. Our grid-resilient hybrid power systems enable seamless connectivity even in remote areas with unstable power supply. Additionally, Delta ensures compliance with Indian (BIS, TEC) and global (UL, CE, IEC) standards, making our products viable for both domestic and international markets. By integrating modular and scalable architectures, we deliver future-ready ICT solutions that evolve with business needs while maintaining high efficiency and reliability.
ELE Times: With the ambitious goal of powering 5 lakh telecom towers across India, how are Delta’s cutting-edge solutions addressing the increasing demand for reliable connectivity, and how does energy efficiency factor into this large-scale initiative?
Mr. Pankaj Singh: Delta is committed to supporting India’s telecom expansion by delivering energy-efficient and reliable power solutions for 5 lakh telecom towers across the country. Our advanced high-efficiency rectifiers, lithium-ion battery energy storage solutions, and hybrid power systems ensure uninterrupted connectivity, even in regions with unstable grid power. By integrating renewable energy sources such as solar and wind with intelligent power management systems, we help telecom operators reduce operational costs and carbon footprints. Additionally, our comprehensive telecom customer service ensures 24/7 technical support, proactive maintenance, and remote monitoring capabilities, enabling seamless network operations. With a strong focus on energy efficiency, grid resilience, and smart automation, Delta empowers telecom providers to enhance network uptime while meeting sustainability goals.
ELE Times: At ELECRAMA 2025, Delta introduced several new products designed to support India’s Digital India vision. Could you highlight the key technical features of these products and explain how they will contribute to the enhancement of India’s digital infrastructure?
Mr. Pankaj Singh: At ELECRAMA 2025, Delta unveiled a range of next-generation power and ICT solutions aimed at strengthening India’s digital infrastructure. Our new high-density UPS systems offer industry-leading >97% efficiency, ensuring maximum power protection for critical IT applications. We also introduced prefabricated modular data centers, which provide a plug-and-play, scalable approach to IT infrastructure expansion, allowing rapid deployment with optimized energy consumption. Our 5G-ready telecom power systems integrate solar energy, lithium-ion storage, and AI-based thermal management, reducing energy costs while enhancing network reliability. These solutions are set to support India’s Digital India vision by providing efficient, scalable, and sustainable infrastructure for the country’s growing data needs.
ELE Times: Sustainability is a key focus for Delta. How do your product development strategies balance the need for technological innovation with the imperative for environmental sustainability, particularly in the context of energy-efficient data centers and telecom networks?
Mr. Pankaj Singh: Sustainability is at the core of Delta’s product development strategy, ensuring that every innovation balances technological advancement with environmental responsibility. Our data center and telecom solutions are designed to minimize energy consumption by incorporating high-efficiency power conversion, intelligent thermal management, and renewable energy integration. We use recyclable materials, lead-free components, and eco-friendly manufacturing processes to reduce environmental impact. Additionally, our solar-powered energy solutions and AI-driven cooling systems significantly cut carbon emissions and operational costs. By prioritizing energy efficiency, sustainable materials, and intelligent automation, Delta is driving the ICT industry toward a greener, more sustainable future without compromising on performance or scalability.
ELE Times: The telecom and data center industries are evolving at an unprecedented pace. What are the most significant challenges Delta faces in designing solutions for this rapidly changing environment, and how do your latest technologies address these challenges to ensure long-term scalability and efficiency?
Mr. Pankaj Singh: The telecom and data center industries are evolving rapidly, driven by increasing data demand, emerging technologies like 5G and AI, and the need for energy-efficient infrastructure. Delta faces several key challenges in designing solutions that ensure long-term scalability and efficiency.
One major challenge is scalability, as networks and data centers must accommodate exponential growth in bandwidth, processing power, and storage. Delta addresses this by implementing modular and cloud-native architectures, allowing seamless expansion while maintaining cost efficiency. Another challenge is energy consumption, as data centers are among the largest consumers of electricity. Delta integrates high-efficiency power solutions, intelligent cooling systems, and renewable energy integration to minimize energy use and reduce environmental impact.
Delta’s latest technologies also focus on 5G, edge computing, and AI-driven network management, enabling faster connectivity, real-time data processing, and reduced latency. By integrating automation, energy efficiency, and scalable architectures, Delta ensures its telecom and data center solutions remain future-ready, adaptable, and optimized for performance in a rapidly evolving digital landscape.
The post Delta Electronics Fuels India’s Digital Ambitions with Scalable, Sustainable ICT Solutions appeared first on ELE Times.