FPGA learning made easy with a virtual university program

Altera University aims to give students an affordable, accessible introduction to FPGAs and digital logic design tools by offering curricula, tutorials, and lab exercises that bridge the gap between academia and real-world design work. The program comprises four courses focused on digital logic, digital systems, computer organization, and embedded systems.
This university program will offer professors, researchers, and students access to a broad collection of pre-developed curricula, software tools, and programmable hardware to help accelerate the FPGA learning experience. Moreover, professors can restructure the lab work with pre-developed curricula and hands-on lab kits.
The program aims to accelerate the FPGA learning experience by making available a broad collection of pre-developed curricula, software tools, and programmable hardware. Source: Altera
“We established Altera University with the singular goal of training the next generation of FPGA developers with both the AI and logic development skills needed to thrive in today’s modern workforce,” said Deepali Trehan, head of product management and marketing at Altera. “Through Altera University, we’re enabling professors to bring real-world experiences to their students using cutting-edge programmable solutions.”
Altera is also offering discounts on select FPGAs for developing custom hardware solutions, including a 20% discount on select Agilex 7 FPGA-based development kits and a 50% discount on LabsLand, a remote laboratory that provides access to Altera FPGAs.
Altera University also offers higher-level FPGA courses that include an AI curriculum to ensure that students can stay aligned with the latest industry trends and develop an understanding of usage models for FPGAs in the AI workflow.
Altera University’s academic program website provides more information on curricula, software tools, and programmable hardware.
Related Content
- All About FPGAs
- FPGAs for beginners
- Power-aware FPGA design
- FPGA programming step by step
- Embedded design with FPGAs: Development process
The post FPGA learning made easy with a virtual university program appeared first on EDN.
ST’s Automotive MCU technology for next-generation vehicles
Author: STMicroelectronics
ST has been serving customers in the automotive market for over 30 years and provides them with a range of products and solutions covering most applications in a typical vehicle. As the market has evolved, so has ST’s offering, a key part of which is automotive microcontrollers (MCUs).
ST pioneered embedded non-volatile memory (eNVM) with ST10 and then introduced automotive microcontrollers with its SPC5 range based on PowerPC architecture, shipping more than one billion MCUs in automotive. Cost-effective automotive controllers from the STM8 family complemented this offer.
ST’s Stellar Family
ST’s latest generation of automotive microcontrollers is the Stellar family, the industry’s first Arm®-based portfolio that spans the entire automotive MCU spectrum, from low-end to high-end solutions. These advanced microcontrollers reduce complexity, ensure safety and security, and deliver optimal performance and efficiency for next-gen vehicle architectures and features. Customers can benefit from shorter development times and focus on bringing innovation and differentiation to their software-defined vehicles (SDVs) in this highly competitive market. For these reasons, Stellar products are gaining momentum, particularly among our customers in Asia and Europe.
Stellar is the industry’s first MCU family built on an emerging memory technology beyond eFlash, one that represents the most mature and smallest-cell automotive-grade memory solution on the market. The Stellar family is optimized for electrification, including X-in-1 vehicle motion control computing, new vehicle architectures (zonal and domain), and safety MCUs for safety-critical subsystems such as ADAS.
It fully supports the automotive transformation by safely integrating multiple functions into a single device and allowing the continuous integration of new features in vehicles. This is made possible by key technologies, including the right choice of core technology, virtualization, Ethernet support, and ground-breaking memory technology embedded in the automotive MCUs, a game changer for customers facing challenges with application memory sizing.
Stellar MCUs are based on Arm® Cortex®-R52+ technology. This high-performance processor delivers real-time virtualization support for time-critical safety and security systems. It can run multiple applications simultaneously with freedom from interference. And thanks to fully programmable auxiliary cores, it is possible to accelerate specific functions, such as routing, low-power management, and digital filtering, while offloading the main cores.
“As the driver experience continues to evolve in the age of AI and software-defined vehicles, advancing automotive functional safety, flexibility and real-time performance capabilities is essential,” said Dipti Vachani, senior vice president and general manager, Automotive Line of Business, Arm. “Built on Arm, the Stellar microcontroller family taps into the Arm compute platform’s advanced safety and real-time features, as well as the broad Arm software ecosystem. This enables car manufacturers to comply with strict safety regulations while implementing innovative features that keep them at the forefront of the automotive industry.”
Stellar MCUs bring Ethernet capabilities into vehicles, and Stellar is the first ST MCU family to embed an Ethernet switch. Ethernet makes data exchange more efficient and flexible, supports the needed gigabit throughput, and raises the level of security. By supporting various in-vehicle communication topologies, such as Ethernet rings, these automotive MCUs fulfill the promise of halving the length of cross-car wiring cables and reducing manufacturing costs.
Phase Change Memory is set to redefine what is possible in vehicle software
Stellar MCUs, with the embedded Phase Change Memory (PCM) technology and its flexibility, transform the process of Over-the-Air (OTA) updates. In the automotive industry, OTA updates are crucial for adding new features and safety or security patches without physical intervention. However, this flexibility often requires careful consideration of future memory needs, which can lead to increased costs and complex planning.
ST’s PCM innovation is no ordinary memory. Not only is it the industry’s smallest memory cell for automotive MCU, but it is pioneering a transformative breakthrough in automotive and set to redefine what is possible in vehicle software. Thanks to ST’s innovative PCM technology, memory capabilities are reaching a new level of sophistication. This is not just about memory performance. It is a forward-thinking solution that brings adaptability and lasting value to the automotive landscape enabling the final developers to continuously improve and upgrade functions.
As vehicles become increasingly software-defined, the ability to introduce new features and enhancements is essential. PCM’s groundbreaking technology will support the shift toward more adaptable, future-focused vehicles, giving automakers new ways to refine experiences as vehicles continue to advance.
Additionally, PCM delivers the ability to support uninterrupted OTA updates. PCM securely stores updates without impacting the vehicle’s current operations. Thanks to concurrent read and write capabilities, the new software download does not interfere with the application code already running on the MCU, ensuring continuous performance during the update process.
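The idea of serving the running application from one region of memory while a new image lands in another can be sketched with a simple dual-bank model. This is an illustrative sketch only, not ST's actual firmware architecture or API; all class and method names here are hypothetical.

```python
# Hypothetical sketch of a dual-bank OTA scheme: the active bank keeps
# serving reads (the running application) while the inactive bank is
# written with the downloaded image, mirroring PCM's concurrent
# read/write behavior. Not ST's API; names are illustrative.
class DualBankFlash:
    def __init__(self, active_image):
        self.banks = {"A": active_image, "B": None}
        self.active = "A"

    def read_active(self):
        # Application code keeps executing from the active bank.
        return self.banks[self.active]

    def write_inactive(self, new_image):
        # The OTA download lands in the inactive bank, so the
        # running application is never interrupted.
        inactive = "B" if self.active == "A" else "A"
        self.banks[inactive] = new_image

    def swap(self):
        # After verification, a swap activates the new image.
        self.active = "B" if self.active == "A" else "A"

flash = DualBankFlash("fw-1.0")
flash.write_inactive("fw-2.0")
assert flash.read_active() == "fw-1.0"  # old firmware still running
flash.swap()
assert flash.read_active() == "fw-2.0"  # new firmware active after swap
```

In a real device the swap would typically happen across a reset after the new image is verified; the sketch only illustrates why concurrent read and write removes the need to halt the application during the download.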
Stellar P, designed for the integration of multiple functions, and Stellar G, for the realization of Software-Defined Vehicles (SDV) zonal controllers, are two series leveraging ST’s internally developed eNVM. They are built on 28nm FD-SOI technology, allowing them to achieve maximum frequency with lower power consumption and enhanced radiation immunity. Stellar is the first 28nm product certified for functional safety and will enter production by the end of this year.
The Stellar family also enables the growing X-in-1 trend toward more affordable electromobility and the decisive switch from fossil-fueled to electrically powered vehicles. X-in-1 powertrain solutions combine multiple components into a single ECU, allowing manufacturers to create efficient, compact, and cost-effective vehicles.
Stellar offers scalable X-in-1 implementation, accommodating a growing number of ECUs from low to high integration levels. This solution supports increasingly complex X-in-1 systems by providing enhanced availability of cores, analog components, and I/O capabilities.

“As a global leader in lithium-ion batteries, Sunwoda provides stable and reliable electronic system solutions for automotive suppliers worldwide. Our new collaboration with STMicroelectronics focuses on developing solutions using ST’s advanced Stellar microcontrollers and proprietary production processes, which primarily include battery management systems, and VDC/Zonal and body control functions. Together, we aim to provide intelligent solutions that enhance the next generation of energy vehicles in China and globally,” said Wang Mingwang, founder, Sunwoda.
2. FD-SOI technology to achieve the maximum possible frequency at lower power consumption and to strengthen radiation immunity
3. A powerful new over-the-air update methodology and true EEPROM capabilities
4. A set of Ethernet-related IPs (MAC, MACsec, switches) that enable distribution of high-volume data, allow different in-vehicle communication topologies, and reduce vehicle manufacturing cost
5. Fully programmable auxiliary cores that accelerate specific functions, such as routing, low-power management, and digital filtering, while offloading the main cores
As part of the expanded roadmap for automotive MCUs, ST will introduce its market-leading general-purpose STM32 microcontroller platform to the automotive sector. The STM32 platform is well recognized for its cost optimization, simplicity, and reliability. Augmented with automotive grade quality and safety, STM32A will achieve up to ASIL B standards. This platform will be designed to handle edge actuation, from the simplest functions to more sophisticated single tasks, all at optimized costs. It will be particularly well-suited for applications like motor control in vehicle systems, including windows, mirrors, and roofs.
Best of industrial and automotive worlds: towards a converged future
Over time, the convergence of industrial and automotive hardware and software platforms will combine the best of both worlds. Automotive brings strong security expertise, and industrial is built on strong Internet of Things and artificial intelligence solutions. The converged future will share hardware technologies, cores, and a common ecosystem of tools and software support. Such convergence will enable customers to seamlessly transition between solutions, offering simplification and full scalability to innovate faster.
Edge AI is one example of a technology being adopted now in industrial applications that will benefit automotive in the future. Neural accelerator technology, and the associated tools that let developers easily implement AI in their applications whatever their level of data science expertise, will enhance automotive systems. ST has spent approximately 10 years investing in the development of microcontrollers, smart sensors, and AI software tools to meet the needs of our customers and harness the power of edge AI.
While AI adoption in automotive—beyond autonomous driving—is still in its initial stages, there are emerging trends of promising use cases for system optimization, energy efficiency, and problem-solving. For example, virtual sensors can measure rotor temperatures, and predictive maintenance can ensure vehicle reliability. As the number of sensors in vehicles heavily increases, AI will play a key role in virtualizing many of them, further enhancing automotive performance. Security is another area where we see the convergence of industrial and automotive bringing significant benefits.
This future will be based on the most advanced and more efficient 18nm process technologies. ST’s advanced technology portfolio for automotive MCUs spans from 40 to 28 to 18nm, selected to optimize product performance and cost.
The benefits of the IDM model for our customers
As an integrated device manufacturer (IDM), ST develops fundamental semiconductor process technologies, creates core intellectual property (IP), designs products using these technologies and IP, and manufactures, tests and packages them using owned facilities or through partnerships. This brings several benefits for our customers:
- Processes designed and refined to meet the application needs of our customers in various markets.
- IP blocks optimized for specific functions and systems, owned by ST.
- Manufacturing processes optimized for key performance and yield through tight teamwork between process, product, and operational teams.
- Control of manufacturing capacity and creation of flexible, reliable supply chains.
This is particularly important for our automotive customers.
An example of these benefits is the combination of FD-SOI and PCM technologies that ST has developed for its Stellar microcontrollers. ST was one of the key innovators in both technologies, working with partners to bring them to market. ST’s ability to master the technology and tailor it to automotive applications has resulted in products with unique benefits. ST’s implementation of PCM technology has allowed the creation of the smallest physical memory cell in the industry delivering 2x the memory density of alternatives.
Thanks to the high-energy efficiency, high reliability, and radiation immunity of this memory technology, ICs designed in FD-SOI with embedded PCM meet the most stringent requirements of automotive applications. ST’s PCM technology achieves automotive requirements for AEC-Q100 Grade 0 with an operating temperature up to +165°C. The patented technology supports high-temperature data retention, including during solder reflow, so firmware can be uploaded before soldering.
Ecosystem plays a key role for transformation
An extensive partner ecosystem, spanning developer tools to specific libraries for safety, security, and data exchange and distribution, further complements this leading technology and augments ST’s portfolio capabilities. It will also provide the necessary simplification, aiding our customers in their transformation journey towards software-defined vehicles.

“STMicroelectronics and Green Hills Software are working closely together to deliver innovative integrated hardware and software solutions that address the growing challenges automotive OEMs and Tier 1s face in next generation vehicle zonal architectures.” said Dan O’Dowd, Founder and CEO at Green Hills Software. “Green Hills production-proven safety-certified RTOS and tools, coupled with ST’s Stellar SR6’s unique communication IP, deliver advanced fault tolerant zonal networking that enables significant per-vehicle cost savings while reducing time-to-market.”

“With MICROSAR Classic, we enable our customers with safe and secure basic software for ECUs for a wide range of use cases. Thanks to many years of close cooperation with ST, the corresponding support for the new Stellar MCUs is already available,” says Jochen Rein, Director of the Product Line Embedded Software and Systems at Vector. “By integrating Stellar’s advanced hardware with Vector’s robust software, customers get the highest level of safety and reliability for both ADAS applications and to successfully manage the transition to Software-Defined Vehicles.”

“iSOFT is a leading developer of automotive operating systems in China and the premium partner of AUTOSAR for China’s infrastructure software. Since beginning its collaboration with ST in 2016, iSOFT has become ST’s MCAL agent in China. This covers multiple microcontrollers, such as SPC58/SPC56/STM8A, and the companies will also engage in a deeper strategic cooperation on the newly introduced Stellar family, which will support the EasyXMen Open Source Operating System in the future.” Luo Tong, Vice President, iSOFT

“Neusoft Reach’s software platform, NeuSAR, leads the mass production of China’s full-stack “AUTOSAR +Middleware,” widely used in next-gen ADAS, chassis, power, and body control systems. Neusoft Reach provides complete solutions based on ST’s SPC5 and Stellar E series automotive MCU, including application/basic software, bootloader, refresh, and simulation, which will be complemented with the new gen of Stellar P and G series. Both companies will work together to create a higher level of automotive-grade software and hardware integrated solutions to help OEMs and tier 1s to bring efficient, personalized, and differentiated functions and accelerate SDV innovation.” Jipeng Wang, Director, NeuSAR CP Products BU, Neusoft Reach
Conclusion
By building on common foundations across product dimensions and focusing on robust automotive quality, ST serves a wide market with a comprehensive product range that is both “broad”—spanning from as low as 128 KB to 64 MB memory, single to multicore computation with virtualization—and “deep,” with each series tailored for specific functions: Stellar P and G series focus on integration, and STM32A will be optimized for value, targeting single-core applications that prioritize efficiency and simplicity.
ST’s expanded automotive microcontroller roadmap focuses on helping customers reduce complexity and improve efficiency while ensuring the highest security and safety standards for next-gen cars. It addresses electrification, personalization, automation, and connectivity.
The post ST’s Automotive MCU technology for next-generation vehicles appeared first on ELE Times.
Trump’s Tariff Surge Rattles Global Electronics Industry: Can India Rise Amid Disruption?
As the Trump administration proposes the introduction of “reciprocal tariffs”—taxing imports at the same rate foreign governments impose on U.S. goods—the global electronics and semiconductor industry is bracing for ripple effects that could redefine supply chains, trade routes, and strategic investments. With India emerging as a serious contender in global electronics manufacturing, these policy shifts could present both opportunities and challenges for the country’s growing semiconductor ambitions.
Global Supply Chains in Flux
The electronics and semiconductor sectors are among the most intricately connected global industries. From chip design in the U.S. and Taiwan, to wafer fabrication in South Korea and China, to final assembly in India and Vietnam—every component in a finished product typically crosses multiple borders. A tariff war threatens to disrupt these finely tuned global supply chains.
This risk was underscored recently when President Donald Trump announced a staggering 104% tariff on Chinese electric vehicles, shaking global markets. Separately, Tata-owned Jaguar Land Rover halted its shipments to the U.S. for a month in response to a 25% import tariff, citing sudden cost pressures and logistical uncertainty. These developments signal a volatile phase for companies relying heavily on cross-border operations.
For India, which imports up to 88% of its semiconductor requirements, disruptions in sourcing from East Asian hubs could cause short-term volatility in the availability and pricing of key components. Additionally, higher tariffs could push up input costs for Indian manufacturers exporting to the U.S., making pricing less competitive.
A Tailwind for ‘Make in India’?
However, there’s a silver lining. As geopolitical tensions rise and multinationals seek “China Plus One” strategies, India is increasingly viewed as a viable alternative. The Indian government’s push under the Production Linked Incentive (PLI) scheme, coupled with new semiconductor fabrication plans and robust demand for electronics, places the country in a favorable position to absorb diverted investments.
This shift is already underway. Apple and Samsung are reportedly accelerating plans to shift manufacturing to India, partly to hedge against Trump’s rising tariffs on Chinese goods. Apple, for instance, has already begun iPhone production at Foxconn’s Tamil Nadu facility, with plans to ramp up output in 2025. These strategic realignments bolster India’s role in the global value chain.
If reciprocal tariffs deter trade between the U.S. and China, Indian manufacturers may gain a competitive edge—particularly in segments like PCB assembly, mobile manufacturing, and back-end chip packaging. This could catalyze India’s ambition to become a $300 billion electronics manufacturing hub by 2026.
Impact on Competitiveness
Over 80% of the U.S. semiconductor industry’s production is destined for international markets, making it highly dependent on global exports. Imposing higher tariffs may weaken its global competitiveness, particularly if retaliatory measures by other countries kick in. On the flip side, Chinese manufacturers could double down on building self-reliant supply chains, while Indian firms may find new export opportunities if trade patterns realign.
Yet, uncertainties remain. India’s electronics industry still depends significantly on imports of chips and sub-components. Tariff-induced disruptions could lead to cost escalations, affecting price-sensitive consumer markets both in India and abroad.
Export Pressures & Strategic Realignment
With India expanding its export capabilities in smartphones, consumer electronics, and electric vehicle components, increased trade barriers may force a rethinking of pricing strategies. Electronics brands exporting to the U.S. could face squeezed margins or may need to reroute operations to avoid tariffs.
To stay competitive, global electronics firms may explore shifting part of their production from China to tariff-neutral zones such as India, Mexico, or Southeast Asia. This trend, already underway post-COVID-19, may accelerate under tariff-driven pressure.
Policy Implications for India
India must walk a fine line. While these global shifts could open up new export windows, there’s also a risk of becoming collateral damage in a broader trade conflict. To safeguard domestic manufacturers and leverage emerging opportunities, the Indian government should consider:
- Negotiating strategic trade agreements with the U.S., EU, and ASEAN nations.
- Providing greater ease-of-doing-business incentives for relocating manufacturers.
- Accelerating semiconductor ecosystem development to reduce import dependencies.
- Offering temporary tariff shelters or rebates for affected MSMEs in electronics exports.
The Trump administration’s proposed tariffs may well be a turning point for the global semiconductor and electronics industry. For India, this could serve as both a test and an opportunity—to deepen its electronics manufacturing base, attract foreign investments, and reposition itself as a trusted global partner in a rapidly changing trade environment. Strategic foresight, nimble policy responses, and continued innovation will be key to navigating the challenges ahead.
The post Trump’s Tariff Surge Rattles Global Electronics Industry: Can India Rise Amid Disruption? appeared first on ELE Times.
Transforming Edge Software Development with Arm-based Virtual Prototyping
Courtesy: Synopsys
We’re excited to announce Virtualizer Native Execution for Arm-based machines. This groundbreaking virtual prototyping technology will transform how software is developed for edge devices and applications — particularly in the automotive, HPC, IoT, and mobile industries.
Virtualizer Native Execution enables a paradigm shift in edge-focused software development via improved:
- Simulation performance. Virtualizer Native Execution significantly boosts the performance of virtual prototypes — speeding up software development, debugging, and testing.
- Development productivity. Virtualizer Native Execution leverages cloud-native approaches to enhance productivity, reduce toolchain silos, and make modern development workflows more accessible for embedded software engineers.
As the market-leading virtual prototyping solution with the largest library of models and IP, Virtualizer enables developers to work with virtual prototypes of target hardware instead of physical setups that are location-dependent and difficult to scale.
Virtualizer Native Execution extends the complete Virtualizer tool suite to the Arm ecosystem, allowing virtual prototypes to be built, executed, and tested directly on Arm-based machines. And because it can be leveraged across development and computing environments — on-premises, in the cloud, and at the edge — it eliminates toolchain and workflow silos and helps increase development flexibility and agility.
Virtualizer Native Execution significantly boosts development speed and efficiency via:
- Native execution. Instead of simulating the target hardware’s CPU on an Instruction Set Simulator (ISS), Virtualizer Native Execution enables virtual prototypes to be executed directly on the host CPU. This significantly reduces boot times (from 20 minutes to tens of seconds for a typical Android boot).
- Scalability. Modern Arm host machines offer more than 96 cores, and Virtualizer Native Execution can directly map each core of the virtual system-on-chip (SoC) to a physical core to greatly accelerate prototype performance.
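The core-to-core mapping described above can be sketched with a few lines of code. This is an illustrative model only, not Synopsys tooling or its API: when the host has at least as many cores as the virtual SoC, each virtual core gets a dedicated physical core; otherwise physical cores are shared.

```python
# Hypothetical sketch of mapping virtual SoC cores to physical host
# cores. A 1:1 mapping is possible when the host has enough cores;
# otherwise this simple policy shares them round-robin. Not the actual
# Virtualizer scheduling logic; purely illustrative.
def map_cores(n_virtual, n_physical):
    """Return a {virtual_core: physical_core} assignment."""
    return {v: v % n_physical for v in range(n_virtual)}

# A 16-core virtual SoC on a 96-core Arm host maps 1:1 ...
mapping = map_cores(16, 96)
assert len(set(mapping.values())) == 16  # no two vCPUs share a core

# ... while a 128-core virtual SoC on the same host must share cores.
oversub = map_cores(128, 96)
assert len(set(oversub.values())) == 96  # all physical cores in use
```

The point of the 1:1 case is that each virtual core's instruction stream runs on its own physical core, so the simulated SoC scales with the host rather than being serialized through a single simulator thread.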

Arm-based CPUs have long dominated the mobile market, and in recent years they’ve been increasingly used for automotive, IoT, consumer, and other edge-based applications. Those in the cloud and HPC markets have also embraced Arm CPUs and IP, which provide an alternative to traditional x86-based solutions and deliver exceptional performance, power, and cost benefits.
This widespread adoption is leading to greater alignment and uniformity of the CPUs and toolsets being used across on-premises, cloud, and edge environments. Often referred to as Instruction Set Architecture (ISA) parity, this uniformity provides new opportunities to streamline development efficiency and flexibility.
Virtualizer Native Execution supports the increased adoption and development of Arm-based solutions and takes advantage of ISA parity to supercharge software development and edge innovation.
Combining virtual prototyping with hardware-assisted verification (HAV)
Virtualizer Native Execution also supports hybrid emulation, which combines the unique strengths of virtual prototyping and hardware-assisted verification (HAV). Tightly integrated with Synopsys HAV solutions, Virtualizer Native Execution supports hybrid setups where the CPU subsystem is virtualized and the rest of the device under test (DUT) is emulated. And because it eliminates ISS overhead and runs natively on the host CPU, Virtualizer Native Execution is able to keep up with the fastest emulation systems (including the new ZeBu-200). The speed and scalability of Virtualizer Native Execution also enable new emulation use cases, like application-driven performance and power validation.
Taking embedded software development to the cloud
Developing embedded software for edge devices has long been a fragmented process involving complicated lab setups, delicate test boards and cables, and disparate toolsets. Not only has this hindered efficiency and scalability, but it has prevented the adoption of modern, agile development processes.
With Virtualizer Native Execution, developers can:
- Build and scale CI/CD pipelines in the cloud.
- Take advantage of higher performance and throughput as well as faster boot times.
- Replicate and align virtual prototypes across development and operating environments — on-premises, in the cloud, and at the edge.
Virtualizer Native Execution for Arm marks a significant leap forward in edge-focused software development. With better performance and scalability, native execution on Arm-based machines, and cloud-to-edge parity, developers can supercharge their virtual prototyping workflows.
The post Transforming Edge Software Development with Arm-based Virtual Prototyping appeared first on ELE Times.
High-Speed Data Centers Owe a Debt of Gratitude to DRAM Memory Interfaces
Courtesy: Renesas
High-performance AI data centers are reshaping semiconductor design and investment trajectories like no technology we’ve seen. As recently as 2022, spending on AI infrastructure was in the vicinity of $15 billion. This year, expenditures could easily top $60 billion. Yes, that sucking sound you hear is the oxygen being pulled from every investment plan and breathed into data centers.
We are clearly operating in an era of unprecedented artificial intelligence capital outlays – notwithstanding the potential impact of newcomers like DeepSeek. But while high-performance computing processors from Nvidia, AMD, and others are busy stealing the limelight, the high-bandwidth memory that stores training and inference models is having its day too – with 2024 DRAM revenue setting a record of nearly $116 billion.
Data center servers are driving a continuous increase in CPU core count, which requires more memory capacity to make higher-bandwidth data available to each processor core. But the laws of physics are quickly catching up. A CPU signal can only go so fast and far. That’s where memory interface devices such as registering clock drivers (RCDs) and data buffers come into play. By allowing the clock, command, address, and data signals to be re-driven with much-improved signal integrity, these interfaces enable the entire memory subsystem to scale in speed and capacity.
Today, RCDs empower registered DIMMs (RDIMMs) to operate at up to 8 gigatransfers per second (GT/s). Most data center servers use RDIMMs, although some HPC systems need even greater memory subsystem performance.
Memory Interfaces Further Accelerate DRAM and Processor Performance
Notable for the vital role it plays in data center server systems, DRAM architecture hasn’t actually changed dramatically over the past three decades. Increases in density, speed, and power efficiency can be attributed largely to deep-submicron semiconductor scaling, while new 2.5D and 3D stacked DRAM packaging allows for higher-capacity DIMM modules.
As explained above, advances in memory interface technology – beginning with synchronous DRAM and carrying across multiple generations of double data rate DRAM – have played an outsized role in helping the interface keep pace with processor speeds.
Multiplexed-rank DIMMs (MRDIMMs) are an innovative technology designed for AI and HPC data center applications. Made possible through a partnership between Renesas, Intel, and memory suppliers, MRDIMMs allow the memory subsystem to scale to much higher bandwidths than RDIMMs on corresponding server systems. Specifically, MRDIMMs double the data transfer speed of the host interface by enabling two ranks of memory to fetch data simultaneously, which yields a 6% to 33% improvement in memory bandwidth.
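The raw effect of a higher transfer rate is easy to quantify with simplified peak-bandwidth math. The sketch below assumes a standard 64-bit DDR5 data path (8 bytes per transfer, ECC excluded) and ignores protocol overheads, so it shows only the theoretical ceiling, not delivered bandwidth.

```python
# Simplified peak-bandwidth math: peak GB/s = transfer rate (GT/s)
# multiplied by the bus width in bytes per transfer. Assumes a 64-bit
# DDR5 data path and ignores protocol overheads; illustrative only.
BUS_BYTES = 8  # 64-bit data bus = 8 bytes per transfer (ECC excluded)

def peak_bandwidth_gbps(gt_per_s, bus_bytes=BUS_BYTES):
    return gt_per_s * bus_bytes

rdimm = peak_bandwidth_gbps(8.0)    # standard DDR5 RDIMM at 8 GT/s
mrdimm = peak_bandwidth_gbps(12.8)  # gen-2 MRDIMM at 12.8 GT/s
assert rdimm == 64.0                      # 64 GB/s per channel
assert abs(mrdimm - 102.4) < 1e-9         # 102.4 GB/s per channel
assert abs(mrdimm / rdimm - 1.6) < 1e-9   # a 1.6x raw uplift
```

The delivered 6% to 33% gain quoted above is smaller than this 1.6x ceiling because real workloads are limited by access patterns, rank scheduling, and protocol overheads, not just the raw transfer rate.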
Renesas DRAM Interfaces Help Close the Processor-to-Memory Performance Gap
Late last year, Renesas released the first complete memory interface chipset solution for second-generation DDR5 MRDIMMs. With an operating speed of 12.8 GT/s, this represented a huge improvement in how fast we can drive the interface compared to the 8.0 GT/s maximum for a standard DIMM.
How did we get there? Through a combination of highly orchestrated component technologies. Since this business’s inception at Integrated Device Technology, before it was acquired by Renesas, we’ve been on a mission to solve one problem confounding memory performance: signal integrity.
As the speed gap between DRAM and the CPU began to grow, the physical loading of the DRAM was becoming a problem for system architects. We saw an opportunity to address the challenge through our analog and mixed-signal design competency. The first in line was an RCD we used to intercept and redrive the clock signal and command address between the DRAM and processor. Subsequently, we developed a line of fully buffered DIMMs, which encapsulated all types of signals on the system memory interface, including clocks, command addresses, and data.
Fast forward, and our newest DDR5 memory interfaces include second-generation RCDs and data buffers for MRDIMMs, in addition to a power management IC (PMIC), making us the only company to offer a complete chipset solution for the next generation of RDIMMs and MRDIMMs. Renesas has also made a significant contribution to power efficiency by evangelizing a concept called “voltage regulation on DIMM.” Voltage regulation circuitry now sits directly on the DIMM, as opposed to the motherboard, which allows for a more efficient, distributed power model. This is done using PMICs that locally generate and regulate all the voltages needed by the various DIMM components.
Leveraging the Electronics Design Ecosystem for the Future
Renesas has amassed a vast base of in-house expertise by collaborating with a large design ecosystem of leading CPU and memory providers, hyperscale data center customers, and standards bodies like JEDEC. That expertise gives us the freedom to remove the bottlenecks, such as how many DRAM components can be populated and how fast they can run, that stand in the way of continuing to increase DIMM speeds and capacity.
It also opens opportunities to leverage technologies developed for AI data centers and redirect them to emerging use cases. That’s true for the higher processing and memory bandwidth requirements influencing designs at the edge of industrial network controls, where data must be captured and turned into actionable insights. And, it applies to the surging data volumes required by automotive safety and autonomous driving applications, which are quickly turning our vehicles into servers on wheels.
The post High-Speed Data Centers Owe a Debt of Gratitude to DRAM Memory Interfaces appeared first on ELE Times.
VIGO and PCO partner on type-II superlattice cooled IR detector arrays
Mission Moon — how CubeRover makes autonomous docking for space possible
Courtesy: Bosch
The moon — an environment full of extremes that can push even the most advanced technologies to their limits. Abrasive dust blocks sensitive sensors, temperatures as low as -150°C challenge conventional electronics, and the complete absence of GPS makes precise navigation nearly impossible. Such conditions demand innovative solutions tailored to the unique requirements of this extraordinary environment.
Bosch brings its technological expertise to a visionary project funded by NASA’s Tipping Point program with $5.8 million. In collaboration with Astrobotic, WiBotic, the University of Washington, and NASA’s Glenn Research Center, the project unites contributions from leading innovators. CubeRover™, developed by Astrobotic, is the mission’s lightweight and modular exploration vehicle. WiBotic contributes wireless charging technology, enabling efficient energy transfer under lunar conditions. Bosch focuses on autonomous docking, providing critical systems that ensure the CubeRover™ can navigate and connect reliably in this extreme environment. The University of Washington and NASA Glenn Research Center contribute by offering performance characterization and testing of the wireless charging system.
Together, these efforts promise to revolutionize space exploration while paving the way for future innovations in autonomous systems development.
The minds behind the mission — Vivek Jain and his team
One of them is Vivek Jain, a lead expert at Bosch Research. Astrobotic serves as the principal investigator for this project, working closely with Bosch, which contributes its expertise in sensing, software, and autonomous docking for wireless power transmission.
Together, the partners are developing technologies that enable the rovers to navigate the moon with precision — without GPS. To achieve this, Bosch relies on a combination of camera data, Wi-Fi fingerprinting and sensor fusion. These approaches ensure that the rovers operate reliably even under extreme conditions such as intense light or presence of sticky lunar dust. With these innovative solutions, Bosch plays a crucial role in advancing the development of autonomous systems designed for the moon’s demanding environment.

CubeRover™ is the centerpiece of the lunar mission, designed specifically for operation on the moon’s surface. A modular, ultra-lightweight, and compact rover, its smallest form factor weighs less than 5 pounds and is roughly the size of a shoebox. These characteristics enable the simultaneous transport of multiple rovers on a central platform (lander), which lands on the lunar surface and serves as a base station for power and navigation.
This makes missions not only more flexible but also more cost-effective, as multiple rovers can be deployed with a single launch. In addition to its compact size, CubeRover™ impresses with its versatility. It can carry scientific instruments such as cameras or spectrometers, opening up new approaches to lunar exploration. With its innovative technology and ability to operate reliably even in extreme environments, it represents a turning point in the exploration of new worlds.
Reaching the destination without GPS — the challenges of navigating the moon
How Bosch develops creative solutions for navigation.
Orientation with visual markers and sensor fusion
How do you navigate on the moon, where GPS is not an option? Bosch has the answer with innovative technologies that guide the CubeRover™ safely through the extreme conditions of the lunar surface. The lander, a platform on the moon’s surface, serves as a central base station for the CubeRover™, providing energy and orientation. Special visual markers, known as AprilTags, are attached to the lander and function like QR codes. These markers are detected by the CubeRover™’s camera, enabling it to accurately calculate its position and navigate securely.
In addition, the CubeRover™ employs sensor fusion, combining camera data with information from motion sensors and wheel speed sensors. This technology ensures stability even on uneven or slippery surfaces — performing reliably amidst dust, intense light, or wheel slips.
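As a toy illustration of the sensor-fusion idea, a one-dimensional complementary filter can blend drifting wheel-odometry increments with noisy but drift-free camera fixes. The 0.98/0.02 weighting and the sample readings below are illustrative assumptions, not mission parameters:

```python
# Toy 1-D sensor fusion via a complementary filter. Wheel odometry is smooth
# but drifts over time; camera fixes are noisy but drift-free. Blending both
# gives a stable position estimate.
def fuse(position, odometry_delta, camera_position, alpha=0.98):
    predicted = position + odometry_delta        # dead-reckoning prediction
    return alpha * predicted + (1 - alpha) * camera_position

pos = 0.0
# (odometry increment, camera position fix) per step, in meters
for odo, cam in [(0.10, 0.12), (0.10, 0.22), (0.10, 0.31)]:
    pos = fuse(pos, odo, cam)
print(round(pos, 3))   # -> 0.301
```

A real fusion stack would run a Kalman-style filter over full 3-D pose, but the trade-off is the same: trust the smooth sensor short-term and the absolute sensor long-term.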
Wi-Fi fingerprinting as a backup solution
In addition to visual markers, Bosch uses Wi-Fi fingerprinting to ensure the CubeRover™’s navigation. The lander, the central platform on the lunar surface, emits Wi-Fi signals that the rover receives. Based on the signal strength and characteristics, the CubeRover™ determines its position and creates a map of the surroundings.
This method acts as a backup when visual markers are obscured by dust or shadows, ensuring the CubeRover™ remains navigable even under challenging conditions. By combining visual markers, sensor fusion, and Wi-Fi fingerprinting, Bosch enables precise navigation – entirely without GPS.
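The fingerprinting principle can be sketched in a few lines: match a live signal-strength reading against a calibrated map and report the closest known position. The calibration points, RSSI values, and nearest-neighbor metric below are illustrative assumptions, not Bosch’s actual implementation:

```python
# Illustrative RSSI-fingerprint matching: each known position stores the
# signal strengths (dBm) it observed from two hypothetical transmitters.
import math

fingerprints = {
    (0.0, 0.0): [-40, -62],
    (5.0, 0.0): [-55, -58],
    (0.0, 5.0): [-60, -45],
}

def locate(observed):
    # Nearest neighbor in signal space: pick the calibrated position whose
    # stored RSSI vector is closest to the live reading.
    return min(fingerprints, key=lambda pos: math.dist(fingerprints[pos], observed))

print(locate([-54, -59]))   # -> (5.0, 0.0)
```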
Wireless charging
The small rover presents unique challenges for energy supply. Being too small for large solar panels, the CubeRover™ employs an innovative solution: wireless charging. The lander collects solar energy and transfers it wirelessly to the rover.
An additional benefit of this technology is the heat generated during the charging process. This heat is used to protect the rover from the extreme temperatures of the lunar night. Intelligent charging algorithms ensure that the rover aligns its position optimally for efficient energy transfer.
The post Mission Moon — how CubeRover makes autonomous docking for space possible appeared first on ELE Times.
Delta Electronics Fuels India’s Digital Ambitions with Scalable, Sustainable ICT Solutions
In an exclusive conversation with Rashi Bajpai of ELE Times, Pankaj Singh, Head of Data Center & Telecom Business Solutions at Delta Electronics India, delves into the company’s groundbreaking strides in energy-efficient ICT infrastructure.

Highlighting innovations tailored for India’s unique needs and global scalability, he discusses Delta’s pivotal role in shaping sustainable, high-performance telecom and data center ecosystems aligned with the Digital India vision.
Here is the excerpt:
ELE Times: Delta Electronics has long been a leader in energy-efficient ICT solutions. Could you elaborate on the latest innovations in your telecom and data center products that optimize energy consumption without compromising on performance?
Mr. Pankaj Singh: Delta continues to innovate in the ICT sector by developing high-efficiency power and cooling solutions that optimize energy consumption while maintaining superior performance. Our telecom and data center solutions incorporate 97% efficiency rectifiers, modular UPS systems exceeding 97% efficiency, and AI-driven thermal management that dynamically adjusts cooling based on real-time data, significantly improving Power Usage Effectiveness (PUE). Additionally, our hybrid power systems seamlessly integrate renewable energy sources, reducing reliance on conventional power grids. These innovations help businesses enhance operational efficiency while reducing carbon footprints and energy costs, reinforcing Delta’s commitment to sustainability and technological advancement.
ELE Times: As a company committed to both local and global markets, how does Delta ensure that its telecom and data center products are specifically tailored to meet India’s unique requirements while also being scalable for international use?
Mr. Pankaj Singh: Delta adopts a localization-with-scalability approach to develop telecom and data center solutions that address India’s unique challenges while remaining adaptable for global markets. Last year, we inaugurated our global R&D Center in India with the vision of “Design in India, for the World.” Our India-based R&D team develops products tailored to the country’s diverse climatic conditions, including extreme temperatures and humidity, while ensuring power reliability through high-efficiency rectifiers and advanced battery storage solutions. Our grid-resilient hybrid power systems enable seamless connectivity even in remote areas with unstable power supply. Additionally, Delta ensures compliance with Indian (BIS, TEC) and global (UL, CE, IEC) standards, making our products viable for both domestic and international markets. By integrating modular and scalable architectures, we deliver future-ready ICT solutions that evolve with business needs while maintaining high efficiency and reliability.
ELE Times: With the ambitious goal of powering 5 lakh telecom towers across India, how are Delta’s cutting-edge solutions addressing the increasing demand for reliable connectivity, and how does energy efficiency factor into this large-scale initiative?
Mr. Pankaj Singh: Delta is committed to supporting India’s telecom expansion by delivering energy-efficient and reliable power solutions for 5 lakh telecom towers across the country. Our advanced high-efficiency rectifiers, lithium-ion battery energy storage solutions, and hybrid power systems ensure uninterrupted connectivity, even in regions with unstable grid power. By integrating renewable energy sources such as solar and wind with intelligent power management systems, we help telecom operators reduce operational costs and carbon footprints. Additionally, our comprehensive telecom customer service ensures 24/7 technical support, proactive maintenance, and remote monitoring capabilities, enabling seamless network operations. With a strong focus on energy efficiency, grid resilience, and smart automation, Delta empowers telecom providers to enhance network uptime while meeting sustainability goals.
ELE Times: At ELECRAMA 2025, Delta introduced several new products designed to support India’s Digital India vision. Could you highlight the key technical features of these products and explain how they will contribute to the enhancement of India’s digital infrastructure?
Mr. Pankaj Singh: At ELECRAMA 2025, Delta unveiled a range of next-generation power and ICT solutions aimed at strengthening India’s digital infrastructure. Our new high-density UPS systems offer industry-leading >97% efficiency, ensuring maximum power protection for critical IT applications. We also introduced prefabricated modular data centers, which provide a plug-and-play, scalable approach to IT infrastructure expansion, allowing rapid deployment with optimized energy consumption. Our 5G-ready telecom power systems integrate solar energy, lithium-ion storage, and AI-based thermal management, reducing energy costs while enhancing network reliability. These solutions are set to support India’s Digital India vision by providing efficient, scalable, and sustainable infrastructure for the country’s growing data needs.
ELE Times: Sustainability is a key focus for Delta. How do your product development strategies balance the need for technological innovation with the imperative for environmental sustainability, particularly in the context of energy-efficient data centers and telecom networks?
Mr. Pankaj Singh: Sustainability is at the core of Delta’s product development strategy, ensuring that every innovation balances technological advancement with environmental responsibility. Our data center and telecom solutions are designed to minimize energy consumption by incorporating high-efficiency power conversion, intelligent thermal management, and renewable energy integration. We use recyclable materials, lead-free components, and eco-friendly manufacturing processes to reduce environmental impact. Additionally, our solar-powered energy solutions and AI-driven cooling systems significantly cut carbon emissions and operational costs. By prioritizing energy efficiency, sustainable materials, and intelligent automation, Delta is driving the ICT industry toward a greener, more sustainable future without compromising on performance or scalability.
ELE Times: The telecom and data center industries are evolving at an unprecedented pace. What are the most significant challenges Delta faces in designing solutions for this rapidly changing environment, and how do your latest technologies address these challenges to ensure long-term scalability and efficiency?
Mr. Pankaj Singh: The telecom and data center industries are evolving rapidly, driven by increasing data demand, emerging technologies like 5G and AI, and the need for energy-efficient infrastructure. Delta faces several key challenges in designing solutions that ensure long-term scalability and efficiency.
One major challenge is scalability, as networks and data centers must accommodate exponential growth in bandwidth, processing power, and storage. Delta addresses this by implementing modular and cloud-native architectures, allowing seamless expansion while maintaining cost efficiency. Another challenge is energy consumption, as data centers are among the largest consumers of electricity. Delta integrates high-efficiency power solutions, intelligent cooling systems, and renewable energy integration to minimize energy use and reduce environmental impact.
Delta’s latest technologies also focus on 5G, edge computing, and AI-driven network management, enabling faster connectivity, real-time data processing, and reduced latency. By integrating automation, energy efficiency, and scalable architectures, Delta ensures its telecom and data center solutions remain future-ready, adaptable, and optimized for performance in a rapidly evolving digital landscape.
The post Delta Electronics Fuels India’s Digital Ambitions with Scalable, Sustainable ICT Solutions appeared first on ELE Times.
The Revised Industrial Robot Safety Standard
Courtesy: ANALOG DEVICES
ISO 10218, third edition, was released at the start of 2025. This standard covers industrial robot safety. Typically, this means fixed industrial robots, including what are known as cobots. A HAS (harmonised standards) consultant has assessed the standard, which will hopefully be cited shortly in the OJEU (Official Journal of the European Union), giving a presumption of conformity with all relevant machinery directive clauses.
Note – ISO TC 299 WG 3 avoids using the word cobot, as there is no such thing. The assertion is that the application is collaborative, not the robot. Any robot can work in a collaborative application with the right external equipment; e.g., a laser scanner or a 3D TOF sensor may allow a collaborative implementation.

I think I started on this committee in 2018, and Ireland hosted a committee meeting in 2019, but I believe WG 3 was already working on this revision well before that time. The convenor was Roberta Nelson Shea of Universal Robots, which meant we had much experience right there, but there was a lot, and I mean a lot, of experience in both the application and design of robots within the group. It also included health and safety professionals, independent assessors, and a human factors expert. It was a well-attended group, with over 50 participants in some cases, leading to restrictions on how many people from each country were allowed to attend. Countries with big teams attending included Canada, Japan, Korea, Sweden, Germany, the USA, the UK, Ireland, Denmark, and Italy.

I must admit, I’m a robot safety expert who has never used a robot. My other major functional safety contribution is on the IEC 61508 committee, where I lead the semiconductor group, including the new IEC 61508-2-1. Therefore, once we strayed into the use of robots, as opposed to their design, I was out of my depth. For this reason, I am sure that the highlights I have chosen below from the new version would be very different from those chosen by someone with a different background.

My highlights are:
- Removal of mandatory redundancy requirements
- New security guidance
- New comms requirements
Removal of Mandatory Redundancy Requirements
Let’s start with the mandatory redundancy requirements. The older 2011 version required, by default, a SIL 2 with HFT = 1 or a PL d CAT 3 safety function. This offended me on several levels, including:
- With complex technology, systematic failure modes are more of a concern than random hardware failures
- HFT = 1 and CAT 3 are not even the same thing (CAT 3 allows the impact of diagnostics to be considered in achieving single-fault tolerance; HFT does not)
- Modern semiconductor-based technology can be far more reliable than older mechanical technology, with higher diagnostic coverage and shorter diagnostic test intervals

Even after many years of debate (yes, years, and I am not exaggerating), the following was agreed. If you weren’t involved, the number 4.43e-7/h might seem random. Both PL d and SIL 2 require a dangerous failure rate in the range 1e-6/h to 1e-7/h, so it’s just below the midpoint of the range. It’s in the better half, indicating lower-than-average risk. But if lower-than-average were all we required, we could have used 5.0e-7/h. Shown another way, it can be compared to what is traditionally achieved with a CAT 3 architecture. The graphic above also shows it exceeds what is generally considered possible with a CAT 2 (non-redundant) architecture. However, the best way to show it is by using ISO 13849-1:2015 Annex K. To get to this number with a CAT 2 architecture (single channel with diagnostics), you need an MTTFd (mean time to dangerous failure) of 62 years and a DC (diagnostic coverage) of 90%. Annex K also shows that previously, with your CAT 3 architecture, a DC of 60% would have been acceptable, but that would deliver a much worse PFHd unless you got your MTTFd to at least 43 years; and that with CAT 3 and a DC of 90%, you can easily reach down into PL e type performance.
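The arithmetic behind that reasoning is easy to check. The band limits below are the standard PL d / SIL 2 PFHd range; the rest is just the midpoint comparison made above:

```python
# Numeric sanity check on the agreed 4.43e-7/h figure. PL d and SIL 2 both
# correspond to a PFHd between 1e-7/h and 1e-6/h.
target = 4.43e-7                          # required PFHd in the new ISO 10218

band_low, band_high = 1e-7, 1e-6          # PL d / SIL 2 band
midpoint = (band_low + band_high) / 2     # 5.5e-7/h

assert band_low <= target < band_high     # inside the PL d / SIL 2 band
assert target < midpoint                  # in the better (lower-risk) half
print(f"margin below midpoint: {midpoint - target:.2e}/h")
```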
It’s good to write this down while I still remember the reasoning. These changes will make it easier to adopt new technologies into robots, reduce robot cost, and increase robot capabilities, which hopefully will all contribute to higher adoption of robots. Such new technologies might include 3D TOF sensors and novel encoders.
New Security Guidance

My next highlight is the new cyber security guidance. It’s always controversial whether safety standards should include anything on cyber security or whether the two disciplines should remain separate. However, it is good that we added something, as the new EU machinery regulation (replacing the old machinery directive) places more emphasis on cyber security than the old directive did. We also now have the CRA (Cyber Resilience Act). I would have liked to add more emphasis on IEC 62443 compliance, but what we got is good. A cyber security risk assessment is now required, and IEC TS 63074:2023 is called out, which in turn defers to IEC 62443. I spoke on cyber security for robots at last year’s international robot safety conference in Cincinnati. Unfortunately, the presentations from this excellent conference are not available on the web.
New Comms Requirements
Lastly, in a world with more and more requirements for always being connected, where data is the new oil, it is good that the standard now includes requirements for industrial communications. There was nothing in the old version on safety data transmitted over a network. Previously, the best guidance would have been IEC 61508:2010, 7.4.11, which mentions a white channel design with no further details and defers to either IEC 61784-3 or IEC 62280/EN 50159 for black channel designs. The new version of ISO 10218 concentrates on the more common black channel approach and, despite being short, shows how the black channel requirements can be tailored differently for the internal robot network (controller to the various axes) and the external robot network, e.g., controller to a PLC.
This is an area I continue to work on; we are revamping IEC 61508-2 7.4.11 with more details on the white channel in particular, which I think might be especially relevant for robot internal networks since, in my view, it is more suitable for hard real-time requirements. I have also, for my sins, been appointed as the liaison between IEC TC 65 SC 65A (system) and SC 65C. The black channel will continue to be the most important for the controller-to-PLC network, and the 1 km range offered by 10BASE-T1L and even Ethernet-APL / 2-WISE could be important here.
Other information I like in the new version includes:
- The long list of safety functions in Annex C
- The allowances for small robots in 5.1.17 (<10 kg, <250 mm/s, <50 N)
- The nice figure in Annex B showing maximum, restricted, operating, and safeguarded spaces
ISO TC 299 WG 3 is continuing to work on ISO 20218-3, which will give more guidance on cyber security, a topic on which ISO 10218 contains only limited information. I don’t know whether ISO 10218 already needs a refresh to allow for the new machinery regulation.
The post The Revised Industrial Robot Safety Standard appeared first on ELE Times.
Error Correction Status: A Powerful Reliability Multiplier for Stand-Alone EEPROM
Courtesy: Microchip
As EEPROM devices shrink, board space can be freed for exciting uses. However, smaller EEPROM means smaller cell sizes, which in turn means thinner cell oxide layers that can wear out more easily, raising reliability concerns. Many manufacturers defend against cell wear-out with Error Correcting Codes (ECC), which detect and correct errors. This solution is usually hidden, with no way of knowing whether it’s there or whether it was invoked. Thus, ECC should be seen as a safety feature, not a sole reliability solution. That’s why Microchip Technology, with over 30 years of EEPROM experience, has developed a new family of EEPROMs with Error Correction Status (ECS). ECS alerts users when error correction occurs, indicating that a memory block should be retired. This feature takes error correction one step further, acting as a powerful reliability multiplier for stand-alone EEPROM and giving designers transparency and control.
Error Correction
As mentioned, most new EEPROMs include Error Correcting Codes, typically designed to correct single-bit errors within each specified number of bytes. There are multiple types of ECC in use, the most common being Hamming codes. ECC adds parity bits calculated from the stored data. When the data is read back, the parity bits are recalculated and compared to the parity bits stored in memory. Discrepancies indicate errors, and the pattern of the discrepancy allows the system to pinpoint and correct single-bit errors, restoring the data and allowing the memory block to continue to be used. However, many EEPROMs don’t indicate when corrections occur, leaving you unaware of deteriorating blocks. ECC can then only be used as a safety feature, giving the application a marginal endurance extension so it may last a bit longer without corrupted data. If you need to know whether a block is truly worn out, you can add manual checks to verify memory after each write, but this is resource intensive. Error Correction Status, on the other hand, solves this challenge: it automatically flags when a block becomes worn out, enhancing reliability without a large investment of resources.
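To make the mechanism concrete, here is a minimal Hamming(7,4) sketch: parity bits are computed from the data, and on readback the recomputed parities (the syndrome) point directly at a flipped bit. Real EEPROM ECC protects larger words and runs in hardware; this only shows the principle:

```python
# Hamming(7,4): parity bits sit at positions 1, 2, 4; each covers the
# codeword positions whose 1-indexed binary representation contains that bit.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]   # positions 1..7

def correct(c):
    # Recompute each parity over its covered positions; the mismatches, read
    # as a binary number, give the 1-indexed position of a flipped bit
    # (0 means the word is clean).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:                # correction invoked: the event ECS would report
        c = c[:]
        c[syndrome - 1] ^= 1
    return c, syndrome != 0

word = encode(1, 0, 1, 1)
damaged = word[:]
damaged[4] ^= 1                 # single-bit error in a data cell
fixed, was_corrected = correct(damaged)
print(fixed == word, was_corrected)   # -> True True
```

The `was_corrected` flag is exactly the information a plain ECC device hides and an ECS device exposes.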
System Considerations
The advantage of ECS shines most within wear-leveling routines. Let’s begin with system design considerations and explain how wear-leveling works; then we will show how to use ECS to enhance a wear-leveling routine.
Regardless of whether your EEPROM has ECC or not, it’s crucial to consider its endurance, typically rated at 100,000 cycles for MCU-embedded EEPROM and 1 million cycles for standalone EEPROM at room temperature. Designers must account for this by estimating the number of write cycles over the typical lifetime of the application to determine what size of an EEPROM they need and how to allocate data within the memory.
For instance, consider an industrial power distribution system in a building with four sensors, one for each machine that workers can use when needed. Each sensor generates a data packet per usage session, recording things like energy consumption, session duration, and timestamps. Data is stored in the EEPROM until a central server requests a data pull. The system is designed to pull data frequently enough to avoid overwriting existing data within each packet. Assuming a 12-year application lifespan and an average of 400 daily packets per sensor, the total cycles per sensor will reach 1.752 million, greatly surpassing the typical EEPROM endurance rating. To address this, you can create a software routine to spread wear across additional blocks (assuming you have excess space). This is called wear-leveling.
Wear-Leveling: Dynamic and Static
To implement wear-leveling, you can purchase an EEPROM twice as large, allowing you to allocate two blocks per sensor and providing up to 2 million cycles per sensor. This offers a buffer of additional cycles if needed (an extra 248 thousand cycles per sensor).
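The sizing arithmetic above can be checked in a few lines:

```python
# Back-of-the-envelope check of the write-cycle budget described above.
packets_per_day = 400
years = 12
lifetime_cycles = packets_per_day * 365 * years
print(lifetime_cycles)                  # -> 1752000 (1.752 million per sensor)

endurance = 1_000_000                   # rated cycles per standalone EEPROM block
blocks_per_sensor = 2                   # after doubling the EEPROM size
budget = blocks_per_sensor * endurance
print(budget - lifetime_cycles)         # -> 248000 spare cycles per sensor
```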
You will then need some way to know where to write new data to spread the wear. While you could write each block to its 1-million-cycle limit before proceeding to the next, this approach may lead to premature wear if some sensors generate more data than others. If you spread the wear evenly across the EEPROM, the overall application will last longer. Figure 1 illustrates the example explained above, with four meters sending data packets (in purple) back to the MCU across the communication bus. The data is stored in blocks within the EEPROM. Each block has a counter in the top left indicating the number of erase-write cycles it has experienced.
Figure 1
There are two types of wear-leveling: dynamic and static. Dynamic wear-leveling is simpler, spreading wear over frequently changing memory blocks, but can result in uneven wear. Uneven wear caused by this type of wear-leveling is illustrated in Figure 2. The other type, static wear-leveling, spreads wear across the entire EEPROM, extending the life of the entire memory. Static wear-leveling requires more CPU overhead; however, it will produce the highest endurance for the life of the application.
Figure 2
Wear-leveling involves monitoring each memory block’s write cycles and allocation status, which itself can cause wear in non-volatile memory. To deal with this, one option is to store this information in your MCU’s RAM, which doesn’t wear out. Since RAM loses data on power loss, you may consider designing a circuit to detect power loss early, allowing time to transfer current register states to NVM.
Implement Wear-Leveling in Software
In general, a software approach to wear-leveling is to create an algorithm that directs each write to the block with the fewest previous writes. In static wear-leveling specifically, data is stored in the least-used location not currently allocated for anything else, and data is swapped to new locations if the cycle difference between blocks grows too large. Each block’s write cycles are tracked with a counter, and blocks are retired when they reach their maximum endurance rating.
Wear-leveling effectively reduces wear and improves reliability, allowing each block to reach its maximum specified endurance (Figure 3). However, endurance specifications are only rough indicators of the physical life of a block and don’t account for early failures. Also, many blocks will last much longer in the real world than their endurance ratings allow for. To ensure even higher reliability, additional checks are needed. One method is to read back the block just written and compare it to the original data, which requires bus time, CPU overhead, and additional RAM. This readback should occur for every write, especially as the number of writes approaches the endurance limit, to detect cell wear-out failures. Without readbacks, wear-out and data corruption may go undetected. The following software flowchart illustrates an example of static wear-leveling, including the readback and comparison necessary to ensure high reliability. This implementation has the disadvantage of spending significant system resources on reliability.
Figure 3
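A minimal sketch of that counter-plus-readback flow might look like the following. The `backend` object and its `write`/`read` methods are hypothetical stand-ins for a real EEPROM driver, and, as discussed, the counters live in RAM:

```python
# Static wear-leveling sketch with readback verification. Block count,
# endurance limit, and the backend API are illustrative assumptions.
ENDURANCE = 1_000_000

class WearLeveler:
    def __init__(self, num_blocks, backend):
        self.counts = [0] * num_blocks   # per-block erase/write counters (RAM)
        self.retired = set()
        self.backend = backend

    def write(self, data):
        # Static wear-leveling: always target the least-worn live block.
        live = [b for b in range(len(self.counts)) if b not in self.retired]
        if not live:
            raise RuntimeError("all blocks worn out")
        block = min(live, key=lambda b: self.counts[b])
        self.backend.write(block, data)
        self.counts[block] += 1
        if self.backend.read(block) != data:     # readback-and-compare check
            self.retired.add(block)              # early wear-out: retire block
            return self.write(data)              # retry on next-least-worn block
        if self.counts[block] >= ENDURANCE:
            self.retired.add(block)              # retire at rated endurance
        return block
```

The readback on every write is exactly the resource cost the article describes: every byte written crosses the bus a second time and must be held in RAM for the comparison.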
Using Error Correction Status with Wear-Leveling
Error Correction Status enables a new, data-driven approach to wear-leveling and significantly enhances reliability without the need for full data readbacks.
ECS indicates when a single-bit error has been detected and corrected. This allows you to check a status register to see if ECC was invoked, reducing the need for full memory block readbacks (Figure 4). When an error is detected, the block can be retired, providing data-based feedback on memory wear-out instead of relying on a blind counter. This eliminates the need to carefully estimate memory lifespan and is beneficial for systems that experience vast shifts in their environments over their life, like dramatic temperature and voltage variations which are common in the manufacturing, automotive and utilities industries. This approach allows you to extend memory cell life beyond the datasheet endurance specification all the way to true failure, potentially allowing you to use the device much longer than before.
Figure 4
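With ECS, the retirement decision can come from the device itself rather than a counter limit. In this hedged sketch, `ecs_flag(block)` stands in for reading the device’s Error Correction Status after a readback whose data is discarded; the actual register name and access sequence vary by device:

```python
# ECS-driven wear-leveling sketch: the backend API is a hypothetical
# stand-in for a real EEPROM driver with a status register.
def write_with_ecs(backend, counts, retired, data):
    live = [b for b in range(len(counts)) if b not in retired]
    if not live:
        raise RuntimeError("all blocks worn out")
    block = min(live, key=lambda b: counts[b])
    backend.write(block, data)
    counts[block] += 1
    backend.read(block)                 # readback only to latch the status bit
    if backend.ecs_flag(block):         # ECC corrected a bit: cell is wearing out
        retired.add(block)              # data-driven retirement, no counter limit
        return write_with_ecs(backend, counts, retired, data)
    return block
```

Compared with the counter-plus-readback version, the returned data never needs to be buffered or compared; a single status read decides whether the block stays in service.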
This data-driven approach to wear-leveling is more reliable than classic wear-leveling because it uses actual data instead of arbitrary counts—if one block lasts longer than another, you can continue using that block until cell wear out. It also reduces bus time, CPU overhead and required RAM, which in turn lowers power consumption and improves system performance. Your software flow can now be updated to accommodate this new status indicator (Figure 5).
Figure 5
As illustrated in the flowchart, using an ECS bit simplifies the software flow by eliminating the need to read back data, store it in RAM and perform comparisons, freeing resources to create conceptually simpler software. Although a data readback is still required to evaluate the status bit, the data can be ignored, reducing RAM and CPU overhead. The frequency of status bit checks depends on block size and the smallest file size the software handles.
The following devices offer ECS and are currently released and available for order:
- I2C EEPROMs: 24CSM01 (1 Mbit), 24CS512 (512 Kbit), 24CS256 (256 Kbit)
- SPI EEPROMs: 25CSM04 (4 Mbit), 25CS640 (64 Kbit), 25CS320 (32 Kbit)
The overall benefit of ECS is significant and will allow you to see cell health in a way you could not before. Some of the advantages are:
- Maximize EEPROM block lifespan by running cells to failure
- Option to remove full block reads to check for data corruption, freeing up time on the communication bus
- If wear-leveling is not necessary or too burdensome to the application, the ECS bit serves as a quick check of memory health, facilitating the extension of EEPROM block lifespan and helping to avoid tracking write cycles
Implementing error correction with a status indicator substantially improves reliability and extends device life, especially within wear-leveling schemes. This is particularly valuable for automotive, medical, and other safety-critical applications where dependability is paramount, and designers building long-lived systems will find the approach well worth adopting. For more information, be sure to check out our CS family of EEPROM products.
The post Error Correction Status: A Powerful Reliability Multiplier for Stand-Alone EEPROM appeared first on ELE Times.
The transformative force of ultra-wideband (UWB) radar

UWB radar is an augmentation of current ultra-wideband (UWB) ranging techniques. To understand the technical side and potential applications of UWB radar, let’s start at the beginning with the platform it builds on. UWB is a communication protocol that uses radio waves over a wide frequency bandwidth, using multiple channels anywhere within the 3.1 to 10.6 GHz spectrum. The most common frequency ranges for UWB are generally between 6 and 8 GHz.
While we’ve only recently seen its use in automotive and other industries, UWB has been around for a very long time, originally used back in the 1880s when the first radio-signal devices relied on spark-gap transmitters to generate radio waves.
Due to certain restrictions, UWB was mainly used for government and military applications in the intervening years. In 2002, however, the modulation technique was opened for public use at certain frequencies in the GHz range and has since proliferated into various applications across multiple industries.
The wide bandwidth delivers a host of benefits in the automotive world, not least that UWB is less susceptible to interference than narrowband technologies. What makes UWB truly transformative is its ability to measure distances precisely and accurately to perform real-time localization. When two devices directly connect and communicate using UWB, we can measure how long it takes for the radio wave pulses to travel between them, which is commonly referred to as Time-of-Flight (ToF).
Figure 1 For automotive applications, UWB radar provides greater precision for real-time localization with a single device. Source: NXP
This enables UWB to achieve hyper-accurate distance measurements in real-time. This accuracy, along with security features incorporated within the IEEE 802.15.4z standard, makes UWB particularly useful where security is paramount—such as keyless entry solutions.
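The ToF principle above can be illustrated with a minimal single-sided two-way ranging (SS-TWR) calculation, a common scheme for UWB ranging. The timestamps and the responder's turnaround delay below are assumed values, not from any specific chip.

```python
# Minimal single-sided two-way ranging (SS-TWR) sketch. Timestamps are
# illustrative; real UWB chips report them in device-specific time units.

C = 299_792_458.0  # speed of light, m/s


def ss_twr_distance(t_round_s, t_reply_s):
    """Round-trip time at the initiator, minus the responder's known reply
    delay, halved, gives the one-way time of flight; times c gives distance."""
    tof = (t_round_s - t_reply_s) / 2.0
    return C * tof


# Example: two devices 3 m apart -> one-way ToF of about 10 ns.
tof_true = 3.0 / C
t_reply = 200e-6                     # assumed responder turnaround delay
t_round = 2 * tof_true + t_reply     # what the initiator actually measures
print(round(ss_twr_distance(t_round, t_reply), 3))  # ~3.0 m
```

Because a 1 ns timing error corresponds to roughly 30 cm of distance, the wide bandwidth of UWB pulses, which allows very sharp timing, is what makes centimeter-level accuracy possible.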
Digging into the details
Where typical UWB applications require two sensors to communicate and operate, UWB radar only requires a single device. It uses an impulse radio technique similar to UWB’s ranging concept, where a sequence of short UWB pulses is sent, but in place of a second device actively returning the signal, a UWB radar sensor measures the time it takes for the initial series of pulses to be reflected by objects. The radar technology benefits from the underlying accuracy of UWB and provides extremely accurate readings, with the ability to detect movements measured in millimeters.
For a single UWB radar sensor to receive and interpret the reflected signal, it first must be picked up by the UWB antenna and then amplified by a low-noise amplifier (LNA). To process the frequencies, the signal is fed into an I/Q mixer driven by a local oscillator. The resulting baseband signal is digitized by an analog-to-digital converter (ADC), fed into a symbol accumulator, and the results are correlated with a known preamble sequence.
This generates a so-called channel impulse response (CIR), which represents the channel’s behavior as a function of time. This can be used to predict how the signal will distort as it travels. The sequence of CIR measurements over time are the raw data of a UWB radar device.
Additionally, the Doppler effect can be exploited: by measuring the shift in a wave’s frequency as the reflecting object moves, the sensor can calculate velocity and generate a range-Doppler plot.
Figure 2 Doppler effect turns UWB technology into a highly effective radar tool. Source: NXP
This process makes it possible to use UWB as a highly effective radar device which can detect not only that an object is present, but how it’s moving in relation to the sensor itself, opening a new world of applications over other wireless standards.
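The CIR-to-range-Doppler processing described above can be sketched in a few lines of NumPy. Everything here is illustrative — the frame rate, number of delay taps, and the simulated target are made-up values, not parameters of any real UWB device.

```python
import numpy as np

# Sketch: build a range-Doppler map from a stack of simulated CIR snapshots.
fs_frame = 100.0            # CIR snapshots per second (slow-time sample rate)
n_frames, n_taps = 64, 32   # snapshots x delay taps
f_doppler = 6.25            # simulated Doppler shift of a moving reflector, Hz
target_tap = 10             # delay tap (range bin) where the reflector sits

t = np.arange(n_frames) / fs_frame
cir = np.zeros((n_frames, n_taps), dtype=complex)
cir[:, target_tap] = np.exp(2j * np.pi * f_doppler * t)  # moving target

# FFT along slow time in each range bin -> range-Doppler map
rd_map = np.fft.fftshift(np.fft.fft(cir, axis=0), axes=0)
doppler_bins = np.fft.fftshift(np.fft.fftfreq(n_frames, d=1 / fs_frame))

peak_frame, peak_tap = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
print(peak_tap, doppler_bins[peak_frame])  # range bin 10, Doppler 6.25 Hz
```

The peak's delay-tap index gives the object's range, while its position along the Doppler axis gives its radial velocity relative to the sensor.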
How the automotive industry is unlocking new applications
UWB radar has huge potential, with specific attributes that deliver plenty of benefits. It operates at comparatively low frequencies, typically in the 6 to 8 GHz range, and the correspondingly longer wavelengths make it highly effective at passing through solid materials such as clothing, plastics, and even car seats.
What’s more, the combination of pinpoint accuracy, velocity detection, low latency, and a clear signal is very powerful. This enables a whole range of potential applications around presence and gesture detection, intrusion alerts, and integration with wider systems for reactive automation.
The automotive sector is one industry that stands to gain a lot from UWB ranging and radar. OEMs have previously struggled with weaker security standards when it comes to applications such as keyless entry, with consumers facing vehicle thefts and rising insurance premiums as a result.
Today’s key fob technologies are often the subject of relay station attacks, where the car access signals are intercepted and replicated to emulate a valid access permission signal. With UWB sensors, their ability to protect the integrity of distance estimation prevents the imitation of signals.
UWB is already found in many smartphones, providing another possibility that OEMs can use to increase connectivity, turning phones into secure state-of-the-art key fobs. This enables a driver to open and even start a car while leaving their phone in their pocket or bag, and the same secure functionality can be applied to UWB-enabled key fobs.
UWB radar goes one step further with applications such as gesture control, helping drivers to open the trunk or bonnet of a car without using their hands. Of course, such features are already available using kick sensors at the front or rear of the vehicle, but this requires additional hardware, which means additional costs.
UWB anchor points can either be used in Ranging Mode for features such as smart access and keyless entry, or in Radar Mode for features like kick sensing, helping to increase functionality without adding costs or weight.
UWB radar’s greater fidelity and ability to detect signs of life is arguably where the most pressing use case lies, however. Instances of infants and children accidentally left in vehicles and suffering heatstroke, and even death, from heat exposure have led the European New Car Assessment Programme (Euro NCAP) to introduce rating points for child presence detection systems, making them mandatory features from 2025 onward.
Figure 3 UWB radar facilitates child presence detection without additional hardware. Source: NXP
A UWB radar system can accurately scan the car’s interior using the same UWB anchor points as the vehicle’s digital key, without needing additional sensors. This helps OEMs implement child presence detection systems without having to invest in, or package, additional hardware. By detecting the chest movements of the child, a UWB radar system can alert the driver; its penetration capabilities help pulses pass easily through obstructions such as blankets, clothing, and even car seats.
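To illustrate the signal-processing side of chest-movement detection, the sketch below estimates a respiration rate from simulated slow-time variations in a single range bin. The sampling rate, observation window, and sinusoidal breathing model are assumptions made for the example, not a description of any production system.

```python
import numpy as np

# Sketch: estimate respiration rate from slow-time motion in one range bin.
fs = 20.0                       # slow-time sample rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)    # 30-second observation window
breaths_hz = 0.5                # simulated rate: 30 breaths per minute

# Chest motion modulates the reflection in one range bin; add sensor noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * breaths_hz * t) + 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 1.5)     # physiologically plausible band
rate_hz = freqs[band][spectrum[band].argmax()]
print(round(rate_hz * 60), "breaths per minute")
```

In practice the motion signal would come from phase changes in the CIR rather than a clean sinusoid, but the spectral-peak search over a plausible frequency band is the same idea.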
The art of mastering UWB radar
UWB radar has proven its effectiveness in detecting the presence of objects of interest with an emphasis on signs of life. The focus of UWB in the automotive sector is currently on short-range applications typically measured within meters, which makes it ideal for use within the cabin or trunk of a vehicle.
There are some interesting challenges when it comes to interpreting data with UWB radar. With automotive applications, the software and algorithms need to detect the required information from the provided signals, such as differentiating between a child and an adult, or even an animal.
Using UWB radar as a child presence detection solution is also more energy-hungry than other UWB applications because the radio for radar is on for longer periods. It’s still more energy efficient than other technologies, however, and it doesn’t necessarily pose a problem in the automotive sphere.
Research is currently being done to optimize the on-time of the UWB chip, along with enabling different power modes at the IC level, allowing the development of smarter and more effective core applications, particularly regarding how they use the energy budget. These updates can be carried out remotely over-the-air (OTA).
Interference is another area that needs to be considered when using UWB radar. If multiple applications in the vehicle are designed to use UWB, it’s important that they are coordinated to avoid interference. The goal is that all UWB applications can happily coexist without interference.
UWB radar outside automotive
Through child presence detection, UWB radar will save lives in the automotive sector, but its potential reaches far and wide, not least because of its ability to calculate velocity and accurately detect very small movements. Such abilities make UWB radar perfectly suited to the healthcare industry.
There is already literature available on how UWB radar can potentially be used in social and healthcare situations. It can recognize presence, movement, postures, and vital signs, including respiration rates and heartbeat detection.
These same attributes also make UWB radar an appealing proposition when it comes to search and rescue. The ability to detect the faintest of life signs through different materials can make a huge difference following earthquakes, where time is of utmost importance when it comes to locating victims buried under rubble.
UWB radar’s precise movement detection also enables highly effective gesture recognition capabilities, offering a whole host of potential applications outside of the automotive sector. When combined with computer vision and AI technologies, for example, UWB radar could provide improved accessibility and user experiences, along with more consumer-led applications in gaming devices.
One of the most readily accessible applications for UWB radar is the augmentation of smart home and Internet of Things (IoT) deployments. Once again, presence detection capabilities can provide a cost-effective alternative to vision or thermal cameras while affording the same levels of reliability.
Figure 4 UWB radar can be employed in smart home and IoT environments. Source: NXP
When combined with power management systems such as heating, lighting and displays, buildings can achieve far greater levels of power efficiency. UWB radar also has exciting potential when it comes to making smart homes even smarter. For example, with the ability to recognize where people are located within rooms, it can control spatial audio, delivering a more immersive audio experience as a result.
Such spatial awareness could also lead to additional applications within social care, offering the ability to monitor the movement of elderly people with cognitive impairments. This could potentially negate the need for wearables for monitoring purposes, which can easily be forgotten or lost.
Looking to the future
The sheer breadth of possibilities that UWB radar enables is what makes the technology such a compelling proposition. Being able to detect precise micro movements while penetrating solid materials opens the door to near endless applications.
UWB radar could provide more effective and accurate information for seatbelt reminder systems, for example, with the ability to detect where passengers are sitting. Combined with information about whether the seatbelt is plugged in or not, this can help to avoid setting off alarms by accident, such as when a bag is placed on a seat. The seat belt reminder is a natural extension to child presence detection, but where the position of the occupant also needs to be determined.
UWB radar could also be used for more accurate security and movement detection, not only outside the vehicle, but inside as well. It’s especially effective as an intrusion alert, detecting when somebody has smashed a window or entered the vehicle.
This extra accuracy can help to avoid falsely setting off alarms during bad weather, only alerting the owner to possible thefts when signs of life are detected alongside movement. It even opens the door to greater gesture recognition within the vehicle itself, enabling drivers or passengers to carry out additional functions without having to touch physical buttons.
The ability to integrate these features without requiring additional sensors, while using existing hardware, will make a huge difference for OEMs and eventually the end consumer. Through a combination of UWB ranging and UWB radar, there’s potential to embrace multiple uses for every sensor, from integrating smarter digital keys and child presence detection to kick sensing, seatbelt reminders, and intrusion alert. This will save costs, weight, and reduce packaging challenges.
Such integration can also impact the implementation of features. Manufacturers will be able to utilize OTA updates to deliver additional functionality, or increased efficiency, without any additional sensors or changes to hardware. In the spirit of software-defined vehicles (SDV), this also means that OEMs don’t need to decide during production which feature or technology needs to be implemented, with UWB radar helping to deliver maximum flexibility and reduced complexity.
We’re at the beginning of an exciting journey when it comes to UWB radar, with the first vehicles set to hit the road in 2025, and a whole lot more to come from the technology in the future. With the ability to dramatically cut down on sensors and hardware, it’s one of the most exciting and transformative wireless technologies we’ve seen yet, and as industry standards, integrations, and guides are put in place, adoption will rise and applications proliferate, helping UWB radar to meet its incredible potential.
Bernhard Großwindhager, Marc Manninger and Christoph Zorn are responsible for product marketing and business development at NXP Semiconductors.
Related Content
- UWB to target ticket-less passengers
- Ultra-wideband tech gets a boost in capabilities
- NXP’s Trimension SR250 Combines UWB Radar and Secure Ranging
- Advances in AI-Enabled Automotive Radar Sensors and Audio Processors
- UWB radar’s potential to drive digital key for safety, security and beyond
The post The transformative force of ultra-wideband (UWB) radar appeared first on EDN.
Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products

Electronics are everywhere. As daily life becomes more digital and more devices become software defined and interconnected, the prevalence of electronics will inevitably rise. Semiconductors are what makes this all possible. So, it is no surprise that the entire semiconductor industry is on a path to being a $1 trillion market by 2030.
While accelerating demand will help semiconductors reach impressive gains, many chip makers may be held back by the costs of semiconductor design and manufacturing. Already, building a cutting-edge fab costs about $19 billion and the design of each chip is around a $500 million investment on average. With AI integration on the rise in consumer devices also fueling growth, companies will need to push the boundaries of their electronic design and manufacturing processes to cost effectively supply chips at optimal performance and environmental efficiency.
Ensuring the semiconductor industry continues its aggressive growth will require organizations to approach both fab commissioning and operation as well as chip design with a more collaborative strategy. The three pillars of this strategy are:
- Collaborative semiconductor business platform
- Software-defined semiconductor enabled for software-defined products
- The comprehensive digital twin
First pillar: Collaborative semiconductor business platform
Creating next-generation semiconductors is expensive yet necessary as more products begin to rely heavily on software. Ensuring maximum efficiency within a business will be imperative. Consequently, many chip makers are striving to create metrics-driven environments for semiconductor lifecycle optimization. Typically, companies use antiquated methods to track roles and responsibilities, causing them to rely on information that can be weeks old. As a result, problem solving can become inefficient, negatively impacting the product lifecycle.
Chip makers must upgrade to a truly metrics-driven business platform that enables real-time analysis and facilitates the management of the entire process, from new product introduction through design and verification to final product delivery. By using semiconductor lifecycle management as the foundation and accessing the wealth of data generated during design and manufacturing, companies can take control of their new product introduction processes and have integrated traceability throughout the product lifecycle.
Figure 1 Semiconductor lifecycle optimization is driven by real-time metrics analysis, enabling seamless collaboration from design to final product delivery. Source: Siemens
With this collaborative business platform in place, businesses can know the status of their teams at any point during a project. For example, the design team can take advantage of real-time data to have an accurate status of the project anytime, without relying on manually generated status reports with weeks-old data. Meanwhile, manufacturing can focus on both the front and back ends of IC manufacturing planning with predictability based on actual data. Once all of this is in place, companies can feasibly build AI metric analysis and a business intelligence platform on top of it.
Second pillar: Software-defined semiconductor for the software-defined product (SDP)
Software is increasingly being used to define the customer experience with a product (Figure 2). Because of this, SDPs will become increasingly central to the evolution of the semiconductor industry. And as AI and ML workloads continue to drive requirements, the traditional boundaries between hardware and software will blur.
Figure 2 Software-defined products are driving the evolution of semiconductors, as AI and ML blur the lines between hardware and software for enhanced innovation and efficiency. Source: Vertigo3d
The convergence of software and hardware will force the semiconductor industry to rethink everything from design methodologies to verification processes. Success in this new landscape will require semiconductor companies to position themselves as enablers of software innovation through holistic co-optimization approaches. No longer will hardware and software teams work in siloed environments; they will become a holistic engineering team that works together to optimize products.
Improved product optimization from integrated teams works in tandem with the industry’s trend toward purpose-built compute platforms to handle the software workload. Consumers are already seeking out customizable chips and they will continue to do so in even greater numbers as general-purpose processors lag expectations. Simultaneously, companies are already creating specialized parts for their products. Apple has several different processors for its host of products; this will become even more important as software becomes more crucial to the functionality of a product.
Supporting software-defined products impacts not only the semiconductors that run the software but everything from semiconductor design through ECAD, E/E, and MCAD design. Chip makers need to create environments where they can handle these types of products while getting the requirements right, and then drive all requirements to all design domains to develop the product correctly moving forward.
Third pillar: The comprehensive digital twin
Part of creating improved environments to better fabricate next-generation semiconductors is making sure that the process remains affordable. To combat production costs that are likely to rise, semiconductor companies should lean into digitalization and leverage the comprehensive digital twin for both semiconductor design and fabrication.
The comprehensive, physics-based digital twin (cDT) addresses the challenge of weaving together the disparate engineering and process groups needed to design and create tomorrow’s software-defined semiconductor. To enable all these players to interact early and often, the cDT incorporates mechanical, electronic, electrical, semiconductor, software, and manufacturing models to fully capture today’s smart products and processes.
Specifically, the cDT merges the real and digital worlds by creating a set of consistent digital models representing different facets of the design that can be used throughout the entire product and production lifecycle and the supply chain (Figure 3). Now it is possible to do more virtually before committing to expensive prototypes or physically commissioning a fab. The result is higher quality products while meeting aggressive cost, timeline, and sustainability goals.
Figure 3 The comprehensive digital twin merges real and digital worlds, enabling faster product introductions, higher yields, and improved sustainability by simulating and optimizing semiconductor design and production processes. Source: Siemens
In design, this “shift-left” provides a physics-based virtual environment for all the engineering teams to interact and create, simulate, and improve product designs. Design and manufacturing iterations in the virtual world happen quickly and consume few resources outside of the engineer’s brain power, enabling them to explore a broader design space. Then in production, it empowers companies to virtually evaluate and optimize production lines, commission machines, and examine entire factories or networks of factories to improve production speed, efficiency, and sustainability. It can analyze and act on real data from the fab and then use that wealth of data for AI metrics analysis.
Businesses can also leverage the cDT to virtualize the entire product process design for the SW-defined product. This digital twin enables manufacturers to simulate and optimize everything from initial design concepts to manufacturing processes and final product integration, which dramatically reduces development cycles and improves outcomes. Companies can verify and test changes earlier in the design process while keeping teams across disciplines in sync and on track, leading to enhanced design exploration and optimization. And since sustainability starts at design, the digital twin can help chip makers meet sustainability metrics by enabling them to choose components that have lower carbon footprints, more thermal tolerance, and reduced power requirements.
The comprehensive digital twin for the semiconductor ecosystem helps businesses manage the complexities of the SDP as well as mechanical and production requirements while bolstering efficiency. Benefits of the digital twin include:
- Faster new product introductions: Virtualizing the entire semiconductor ecosystem allows faster time to yield. Along with the quest to pursue “More than Moore,” creating a virtual environment for heterogenous packaging allows for early verification and optimization of advanced packaging techniques.
- Faster path to higher yields: Simulating the production process makes enhancing IC quality easier, enabling workers to enact changes dynamically on the shop floor to quickly achieve higher yields for greater profitability
- Traceability and zero defects: It is now possible to update the digital twin of both the product and production in tandem with their real-world counterparts, enabling manufacturers to diagnose issues and detect anomalies before they happen in the pursuit of zero defects
- Dynamic planning and scheduling: Since the digital twin provides an adaptive comparison between the physical and digital counterparts, it can detect disturbances within systems and trigger rescheduling in a timely manner
Creating next-generation semiconductors is expensive. Yet, chip manufacturers must continue to develop and fabricate new designs that require ever-more advanced fabrication technology to efficiently create semiconductors for tomorrow’s software-defined products. To handle the changing landscape, businesses within the semiconductor industry will need to rely on the comprehensive digital twin and adopt a collaborative semiconductor business platform that enables them to partner both inside and outside of the industry.
The emergence of collaborative alliances within the semiconductor industry as well as across related industries will break down traditional organizational boundaries, enabling unprecedented levels of cooperation across and beyond the semiconductor industry. The result will be extraordinary innovation that leverages collective expertise and capabilities. Already, well-established semiconductor companies have begun partnering to move forward in this rapidly evolving ecosystem. When Tata Group wanted to build fabs in India, Analog Devices, Tata Electronics, and Tata Motors signed an agreement that allows Tata to use Analog Devices’ chips in applications like electric vehicles and network infrastructure. At the same time, Analog Devices will be able to take advantage of Tata’s plants to fab its next-generation chips.
And this is just one example of the many innovative collaborations starting to emerge. The marketplace is now moving toward cooperation and partnerships that have never existed before across different industries to develop the technology and capabilities needed to move forward. To ease this transition, the semiconductor industry is building a cross-industry collaboration environment that will facilitate these strategic partnerships.
Michael Munsey is the Vice President of Electronics & Semiconductors for Siemens Digital Industries Software. In this role, Munsey is responsible for setting the strategic direction for the company with a focus on helping customers drive unprecedented growth and innovation in the semiconductor and electronics industries through digital transformation.
Munsey began his career as a designer at IBM more than 35 years ago and has the distinction of contributing to products that are currently in use on two planets: Earth and Mars, the latter courtesy of his work on the Mars Rover.
Before joining Siemens in 2021, Munsey spent his career working in positions of increasing responsibility across the semiconductor and electronics industries where he did everything from leading cross-functional teams to driving product creation and executing business development in new regions to setting the vision for corporate strategy. Munsey holds a BSEE in Electrical and Electronics Engineering from Tufts University.
Related Content
- CES 2025: A Chat with Siemens EDA CEO Mike Ellow
- Shift in electronic systems design reshaping EDA tools integration
- EDA toolset parade at TSMC’s U.S. design symposium
- Overcoming challenges in electronics design landscape
The post Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products appeared first on EDN.
New cardboard Star Wars droid with Raspberry Pi Pico W
Improving DRAM Performance Using Dual Work-Function Metal Gate (DWMG) Structures
Courtesy : LAM RESEARCH
Gate-induced drain leakage (GIDL) presents a major challenge in scaling DRAM technology.
DRAM serves as the backbone of modern computing, enabling devices ranging from smartphones to high-performance servers. As the demand accelerates for higher density and lower power consumption in memory devices, innovation in reducing DRAM leakage currents and enhancing performance becomes essential. One significant challenge in scaling DRAM technology is GIDL, a primary source of standby charge loss. This article explores how a DWMG structure in DRAM buried word-line (BWL) can mitigate GIDL. By leveraging a full-scale process integration model that supports electrical analysis, we demonstrate how this approach reduces leakage current while maintaining robust device performance.
The Challenge of GIDL in Modern DRAM
GIDL is primarily caused by band-to-band tunneling (BTBT) at the drain junction under high electric field conditions. This phenomenon not only increases off-state leakage currents but also degrades memory state retention time in DRAM cells, particularly as feature sizes shrink below 20 nm.1
Factors such as thinner gate oxides and higher doping concentrations exacerbate GIDL, creating a synergistic effect that makes it a critical problem in designing low-power, high-density DRAM.2

The Solution
The introduction of a dual work-function metal gate structure provides a compelling solution to this challenge. By segmenting the buried word-line gate into regions with distinct work functions, the electric field along the channel is precisely controlled. Examples of some dual work-function metal gate structures are shown in Figure 2.
This structure suppresses BTBT generation, thereby reducing GIDL without compromising drive current or threshold voltage (Vt). As a result, this design is well-suited for advanced DRAM nodes.4,5

DWMG Alignment with Industry Trends
The DWMG approach aligns with broader semiconductor trends emphasizing advanced gate designs and channel engineering. Our study applies this innovation to DRAM technology, addressing GIDL challenges while preserving key performance metrics. Similar methods have been successfully implemented in FinFETs6 and tunnel FETs7 to reduce leakage and improve subthreshold slopes.
Leveraging Process Integration Modeling for Insights
Our process integration modeling platform (SEMulator3D) with built-in electrical analysis capabilities played a pivotal role in evaluating the DWMG design. This tool allowed us to:
- Simulate the full process flow of a DRAM cell array, from active area formation to capacitor integration (Figure 3a).
- Focus on the BWL transistor by extracting and refining a specific transistor for electrical characterization (Figure 3b–d).
- Analyze the interactions between process parameters—such as gate work-function, oxide thickness, and doping profiles—and their impact on electrical performance.
This simulation framework provided a holistic view of integration challenges and revealed the effectiveness of DWMG in reducing current leakage.

DWMG Design and Simulation Results
The DWMG structure is realized by splitting the gate into upper and lower regions with distinct work functions, with the upper region’s metal gate set to 3.5 eV, 4.1 eV, or 4.7 eV (Figure 4). The device simulation includes models of doping- and field-dependent mobility, Shockley-Read-Hall (SRH) generation/recombination, and trap-assisted band-to-band tunneling effects.
The drift-diffusion equation is solved to obtain Idrain vs. Vgate curves in both the linear and saturation regimes. The substrate current is measured (virtually) to determine the amount of GIDL leakage.
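As a purely qualitative illustration — not the SEMulator3D simulation described here — the toy model below shows why lowering the upper-gate work function can suppress a BTBT-style leakage term, since tunneling current depends roughly exponentially on a critical field divided by the local field at the drain overlap. Every constant in this sketch is arbitrary and chosen only to make the trend visible.

```python
import numpy as np

# Toy BTBT-style leakage model. All constants are arbitrary; this only
# illustrates the qualitative trend, not a calibrated device simulation.


def gidl_current(v_gate, wf_upper_eV, e_crit=30.0, i0=1e-6):
    """Crude leakage estimate: the field at the drain overlap grows as the
    gate is driven negative, and a lower upper-gate work function (relative
    to an assumed 3.5 eV reference) relaxes that field."""
    field = np.maximum(0.0, -v_gate + (wf_upper_eV - 3.5) * 2.0)
    # Kane-like exponential dependence on critical field over local field
    return i0 * np.exp(-e_crit / np.maximum(field, 1e-9))


vg = -2.0  # word-line held low in standby
for wf in (3.5, 4.1, 4.7):  # upper-gate work functions from the study
    print(f"WF={wf} eV  I_GIDL={gidl_current(vg, wf):.2e} A")
```

In this toy model the 3.5 eV upper gate yields orders of magnitude less leakage than the 4.7 eV case, mirroring the qualitative result that a relaxed drain-side field suppresses BTBT.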

Key results include the following:
- Leakage reduction (Figure 5): The low work-function region in the upper gate and the high work-function region in the lower gate create a more relaxed electric field distribution than a uniform work-function gate, which suppresses BTBT at the drain junction and in turn reduces leakage current.

- Preserved device performance (Figure 6): Despite the GIDL reduction (I_substrate), critical IV characteristics in both linear (Idlin_Vg) and saturation (Idsat_Vg) regimes remain intact when using the DWMG, ensuring reliable operation during read and write cycles.

- Process dependency (Figure 7): Gate oxide thickness and doping concentration significantly influence performance. For instance, thinner oxides improve field control but increase BTBT risk due to the reduced barrier width. Similarly, higher doping improves modulation capabilities but exacerbates BTBT by increasing the electric field intensity, which accelerates tunneling processes.
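The exponential field dependence behind these trade-offs can be illustrated with Kane's band-to-band tunneling model. The sketch below is a toy calculation, not the article's simulation: the A and B coefficients and the peak-field values are illustrative placeholders, not calibrated silicon parameters.

```python
import math

# Kane's band-to-band tunneling model: G = A * E^2 * exp(-B / E), where E
# is the local electric field. A and B below are illustrative placeholders,
# not calibrated silicon constants.
A = 4.0e14   # prefactor, cm^-3 s^-1 (V/cm)^-2 (illustrative)
B = 1.9e7    # critical field, V/cm (illustrative)

def btbt_rate(field_v_per_cm):
    """Band-to-band tunneling generation rate at a given peak field."""
    return A * field_v_per_cm ** 2 * math.exp(-B / field_v_per_cm)

# Hypothetical peak fields at the drain overlap: a uniform work-function
# gate versus a DWMG that relaxes the field in the upper (drain-side) region.
uniform_wf_field = 1.2e6  # V/cm
dwmg_field = 0.9e6        # V/cm

ratio = btbt_rate(uniform_wf_field) / btbt_rate(dwmg_field)
print(f"BTBT generation ratio (uniform WF / DWMG): {ratio:.0f}x")
```

Because the field sits in the exponent, even a ~25% reduction in peak field cuts tunneling generation by more than two orders of magnitude in this toy model, which is the mechanism behind the GIDL reduction shown in Figure 5.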

Advantages of Combining Device Electrical Analysis with Process Integration Modeling
Performing device electrical analysis during process integration modeling can enable the following types of advanced analyses that identify design-technology trade-offs:
- Electrical pathfinding: This type of analysis can be used to rapidly explore combinations of gate work-functions, oxide thicknesses, and doping profiles to pinpoint optimal designs. This approach has the potential to minimize the cost and time of physical experiments while reducing risks associated with late-stage failures.
- Variability analysis: Statistical simulations can identify the impact of process variations—such as gate oxide non-uniformity and doping fluctuations—on GIDL and IV characteristics. This type of analysis highlights critical design margins and has the potential to provide feedback on process optimization (such as active area formation) from very early process development stages.
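As a rough illustration of the variability analysis described above, the sketch below samples gate oxide thickness and doping around nominal values and propagates them through a toy leakage metric. The metric and the nominal/sigma values are hypothetical stand-ins for a real process-integration simulation.

```python
import math
import random
import statistics

# Toy Monte Carlo variability analysis: sample process parameters around
# nominal values and propagate them through a simplified leakage metric.
random.seed(42)

NOMINAL_TOX_NM = 4.0     # gate oxide thickness, nm (hypothetical)
NOMINAL_DOPING = 1.0e18  # drain doping, cm^-3 (hypothetical)

def leakage_metric(tox_nm, doping):
    """Thinner oxide and higher doping both raise the peak field, and
    leakage grows super-linearly (here, exponentially) with that field."""
    field = (doping / NOMINAL_DOPING) ** 0.5 / (tox_nm / NOMINAL_TOX_NM)
    return field ** 2 * math.exp(5.0 * (field - 1.0))  # arbitrary units

samples = [
    leakage_metric(random.gauss(NOMINAL_TOX_NM, 0.15),   # oxide non-uniformity
                   random.gauss(NOMINAL_DOPING, 5e16))   # doping fluctuation
    for _ in range(10_000)
]

mean = statistics.mean(samples)
p99 = sorted(samples)[int(0.99 * len(samples))]
print(f"mean leakage: {mean:.2f} a.u., 99th percentile: {p99:.2f} a.u.")
```

The gap between the mean and the tail is exactly what such an analysis uses to flag critical design margins.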
The Future of DWMG and DRAM
The dual work-function metal gate (DWMG) is a robust, scalable solution for mitigating GIDL in DRAM technology. By optimizing the electric field distribution, this design effectively reduces leakage currents while maintaining critical IV performance. Process integration modeling combined with electrical analysis capabilities is instrumental in demonstrating the ability to reduce leakage current using DWMG, offering a comprehensive framework for addressing design and integration challenges.
Future research efforts could include:
- Integrating DWMG designs with high-k dielectrics or advanced junction engineering to further enhance leakage control.
- Assessing the impact of scaling trends, such as smaller metal pitches and EUV lithography, on DWMG performance.
- Developing predictive models for variability in advanced DRAM nodes.
The post Improving DRAM Performance Using Dual Work-Function Metal Gate (DWMG) Structures appeared first on ELE Times.
EconoDUAL(TM) Power Kit Powers Up Commercial and Agricultural Vehicles
Courtesy : Infineon
As electric vehicles continue to gain traction in the agricultural, commercial, and construction sectors, the demand for efficient and reliable power systems grows. High-voltage traction systems ensure these vehicles operate effectively under heavy loads and demanding conditions, such as 60,000 hours of operation, up to 1.5 million km, and low FIT rates. Infineon’s EconoDUAL(TM) 250 kW Power Kit is a prime example that meets the evolving needs of inverter systems in commercial and agricultural vehicles.
This 250 kW three-phase inverter power kit is designed for eCAVs with an 800 V battery, addressing the increasing demand for reliable and efficient solutions. It provides a consistent platform for developers working on eCAVs, offering numerous benefits, including fast time to market via its system solution and a flexible design with scalable module currents up to 900 A nominal and an easy migration path toward higher voltage classes and SiC technology.
Key features
- High-power output: Specifically designed for 800 V traction-inverter systems in eCAVs.
- Accurate current measurement: It integrates our XENSIV TLE4973 Hall coreless current sensors in a compact and easy-to-mount Swoboda universal current sensor module.
- Custom Design Elements: The kit includes specially designed DC-link capacitors and a liquid-cooling system to maintain performance in challenging operating conditions.
- Component Integration: It features three FF900R12ME7 EconoDUAL(TM)3 IGBT7 power modules and 1ED3321MC12N EiceDRIVER gate drivers, ensuring compatibility and ease of assembly.
The EconoDUAL(TM) Power Kit includes three industrial-grade EconoDUAL(TM) 3 IGBT7 modules capable of handling high currents efficiently, as well as gate drivers mounted on gate drive boards with booster stages that ensure reliable operation in demanding applications. Additionally, this kit is equipped with an integrated cooling system, which prevents overheating and ensures thermal stability, and is optimized for 800 V systems, with all components, including busbars and capacitors, specifically tailored for high-voltage operation.
Application Development in commercial and agricultural vehicles
The EconoDUAL(TM) Power Kit provides essential tools for addressing the challenges of designing and developing eCAVs. It is particularly suitable for light and medium-duty vehicles such as eBuses and medium-duty eTrucks, while also being applicable to other vehicle types like construction equipment and agricultural vehicles. Its integrated design and advanced components help streamline prototyping and development processes. Additionally, our 32-bit AURIX microcontrollers can be used to enhance the overall system design and ensure functional safety up to the highest ASIL D level. AURIX microcontrollers also offer an integrated DS-ADC (delta-sigma ADC) to enable digital calculation of resolver position, eventually replacing external resolver ICs (e.g., Tamagawa) and reducing system complexity.
The microcontroller selection tree can be found below:
The XENSIV TLE4973 current sensor is based on coreless technology. It is highly accurate over temperature and lifetime due to its high linearity, stray-field robustness, and lack of hysteresis. There is also no need for a magnetic concentrator or a shield, which achieves space optimization and design flexibility.
The post EconoDUAL(TM) Power Kit Powers Up Commercial and Agricultural Vehicles appeared first on ELE Times.
Broadcom drives mass adoption of software-defined vehicles with expanded Ethernet switch portfolio
Courtesy : Broadcom
Broadcom’s portfolio of automotive Ethernet switches is built not only for today’s automotive network; it is also scalable for the network of the future.
Automakers have used Broadcom’s standard automotive switches for more than a decade to route data between various sensors, processing units, and actuators within the vehicle. As automakers transition from domain-based to zonal architectures, pre-planning allows the architectures to scale to newer features and benefits.
Software-defined vehicles, or SDVs, have the connectivity and processing power to secure, monitor, upgrade, and update vehicle capabilities. The software for different computing functions, such as driver assist, infotainment, body control, and instrumentation, can all be distributed across different boards and processors. Sensor data can flow to multiple zones/boards versus being directly connected. It is the scalability of Ethernet hardware that allows an SDV to be improved after purchase. So, what features should you look for in a switch to support SDVs?

The first item to examine is the type of system on chip, or SoC, that is being used for compute processing in your architecture. New classes of automotive SoCs allow application processing, real-time processing, AI compute, and safety functionality in a single device. Zonal and central compute electronic control units (ECUs) can take advantage of these scalable SoCs. These SoCs have multiple multi-gigabit interfaces to the network to gather and transmit all the data they need to process. For example, AI models for autonomous drive systems can be updated to improve camera recognition and safety. As new software features are added, the amount of data sent over these SoC interfaces will increase. Just as the SoCs are optimized and designed to scale over time to handle larger compute and network needs, the Ethernet network must be designed from the start to support future needs. The automotive Ethernet switch must support multiple connections to the SoCs at the maximum line rate needed. The switch should also be able to support the scalability of each interface from 1Gbps to 10Gbps. If the SoC supports PCIe interfaces with virtualization, then the switch needs to support virtualization as well.
As the software feature workloads get distributed between compute devices, there will be a need for network performance optimizations and time-sensitive provisioning. SDVs will collect data across the network for data analytics and health monitoring. The Ethernet switches will use their packet filters to monitor specific traffic flows at line rate. Captured motor efficiency data, Ethernet network health, and autonomous drive data for AI model improvement can all traverse the Ethernet backbone to the car’s cloud connection. Dynamic configuration of the automotive Ethernet switches allows the automaker to scale the needed resources efficiently over time. Automotive Ethernet switches need to have the bandwidth scalability and timing control to handle future network needs.
As port count requirements for an ECU increase, the automotive Ethernet switch chip must be able to handle all the ports with a single die. A switch chip that uses more than one smaller switch die in a single package can cause numerous issues. Stacked or cascaded switch cores have higher latency because Ethernet packets must be stored and forwarded through each switch die. The high-speed interface between the embedded dies becomes a bottleneck for traffic that must flow from a port on one die to a port on the other. Time synchronization becomes trickier as multiple gPTP protocol stacks run inside the single package. Scalability is a key feature enabled effectively with a monolithic-die-based switch.
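The latency penalty of cascaded dies comes largely from store-and-forward serialization: each extra die must buffer the whole frame before forwarding it. A back-of-the-envelope sketch, assuming illustrative 1 Gbps ports and full-size frames:

```python
# Back-of-the-envelope store-and-forward latency: a switch die must
# receive the entire frame before forwarding it, so every cascaded die
# adds one serialization delay per hop. Values are illustrative.
def store_and_forward_us(frame_bytes, link_gbps):
    """Time to serialize one frame onto a link, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

FRAME_BYTES = 1500  # full-size Ethernet frame
for dies, label in [(1, "monolithic die"), (2, "two cascaded dies")]:
    latency = dies * store_and_forward_us(FRAME_BYTES, 1.0)  # 1 Gbps ports
    print(f"{label}: ~{latency:.0f} us of store-and-forward latency")
```

At 1 Gbps a 1500-byte frame serializes in 12 µs, so each additional cascaded die adds another full serialization delay to cross-die traffic.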
As mentioned in our blog, “Securing software-defined vehicles with zonal E/E architectures,” protecting SDVs using zonal electrical/electronic architectures is critically important. The SDV architecture requires a multilayer security approach. The switches need to boot authenticated images securely, and they must allow only authenticated images to be loaded during over-the-air updates. Since software-based protection is challenging at faster Ethernet speeds, MACsec packet authentication and encryption allows line-rate protection in hardware at speeds up to 10 Gbps. In addition, both DOS protection and packet filtering are needed in hardware. Additional levels of protection can be taken in hardware that are unique to an automotive network architecture. An automotive network is fixed, unlike an SMB or enterprise Ethernet network. A port on the switch connected to a RADAR will always be connected to that RADAR in every car. If the unique address of the RADAR on an Ethernet packet is ever seen ingressing on another port, then it is known that someone is spoofing that address, and the port should be quarantined. The same can be said if a second address is seen on the RADAR port, as there should only be one device connected to that port. The security features should be implemented by dedicated hardware in the switch, with software running on the internal processor subsystem handling any exceptions. This enables all of the security functionality at line rate and makes the intrusion detection and prevention software clients efficient and effective.
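The fixed-topology check described above, where a known address appears on the wrong port or a second address appears on a single-device port, can be sketched in software as follows. Port names and MAC addresses are hypothetical, and a real switch would implement this in hardware tables rather than Python:

```python
# Static port-to-device bindings for a (hypothetical) in-vehicle switch.
# Unlike enterprise networks, these never change, so any deviation is an attack.
EXPECTED = {
    "port1": {"02:00:00:00:00:0a"},  # front RADAR, the only device on port1
    "port2": {"02:00:00:00:00:0b"},  # camera ECU, the only device on port2
}

quarantined = set()

def on_ingress(port, src_mac):
    """Quarantine the ingress port on any deviation from the fixed binding."""
    for known_port, macs in EXPECTED.items():
        if src_mac in macs and known_port != port:
            quarantined.add(port)       # a known address seen on the wrong port
            return
    if src_mac not in EXPECTED.get(port, set()):
        quarantined.add(port)           # an unknown (second) device on this port

on_ingress("port1", "02:00:00:00:00:0a")  # legitimate RADAR traffic: ignored
on_ingress("port2", "02:00:00:00:00:0a")  # RADAR address spoofed on port2
print(quarantined)                        # {'port2'}
```

In hardware, the same logic reduces to a per-port source-MAC table lookup at line rate, with the embedded processor handling only the quarantine exceptions.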
50G Auto Ethernet Switch Portfolio Expansion
In 2022, Broadcom unveiled the 50G automotive Ethernet switch product family to meet automakers’ needs and enable the future of SDVs. To drive mass adoption of SDVs, Broadcom is expanding the product family with a new cost-optimized 11-port version, the BCM89581MT. This device is a single die, lower-power, smaller-port count, 50G automotive Ethernet switch. To provide scalable flow of traffic, the BCM89581MT has multiple interfaces capable of 10Gbps connections to the latest SoCs and multi-gigabit automotive Ethernet PHYs. The high-speed interfaces can be 2.5G SGMII, USXGMII, PCIe Gen 4 single lane or XFI. This addition to Broadcom’s automotive Ethernet switch portfolio will allow for smaller port count central compute or zonal ECUs to fit into the SDV architecture. Broadcom’s automotive SDK can be seamlessly ported across the different family members.
The BCM89581MT enables original equipment manufacturers (OEMs) to realize the full network potential for smaller cost-optimized ECUs. With advanced security, scalable connections to SoCs, advanced time-synchronized networking features, and a full-feature SDK, the BCM89581MT easily allows the OEM to take advantage of the SDV features they need. Broadcom will showcase its expanded portfolio of 50G automotive Ethernet switches, including the new BCM89581MT, at the 2025 Automotive Ethernet Congress in Munich from February 18th-20th. Stop by our booth to learn more about our latest offerings and how our expanded portfolio of automotive Ethernet switch chips enable next-generation software-defined vehicles.
The post Broadcom drives mass adoption of software-defined vehicles with expanded Ethernet switch portfolio appeared first on ELE Times.
Co-packaged optics accelerating towards commercialization
Overcome the Current Challenges of Indirect Time-of-Flight (iToF) Technology with Technological Advancements
Courtesy : Onsemi
One secret behind the success of modern industrial automation is the power of 3D vision. Traditional 2D sensors can only provide flat images, creating limitations in their effectiveness in applications like device inspection. They can read a barcode which may contain the items’ dimensions but cannot independently gauge true shape and size, or any potential dents, defects or irregularities. In addition, 2D readings are at the mercy of lighting conditions, which may obfuscate or distort important areas of interest.
These constraints can be overcome with depth sensing, which processes the Z-axis in 3D, much like human vision. Now, depth cameras can gauge the fullness of an object, perform precise inspections on devices, and even detect subtle facial features for applications such as access control. Thanks to these capabilities, 3D vision is a game-changer across industries – from defense and aerospace to medical, automotive and micro-technology. Whether it’s obstacle detection, facial recognition, self-driving or robotic assistants, depth sensing is the key to modern industrial automation.
Depth sensing, whatever the type, relies on either passive or active techniques. Passive depth sensing requires highly calibrated stereo sensors and parallax, very similar to the human eye. Active sensing emits a light beam toward the target and uses the reflected energy to determine depth. This requires an energy emitter but offers advantages such as penetrating clouds and smoke, 24/7 operation and more deterministic behavior.
There are several active depth-sensing techniques: direct time-of-flight (dToF), indirect time-of-flight (iToF), structured light and active stereo. Indirect time-of-flight uses the phase shift between the transmitted and received signals to calculate distance – it is very accurate, and the underlying hardware is simple.
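The phase-shift principle can be sketched with the textbook four-phase scheme, in which the sensor correlates the return signal at 0°, 90°, 180° and 270° and recovers depth from the resulting phase. This is the generic method, not any vendor’s specific pipeline:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod_hz):
    """Recover distance from four correlation samples (0/90/180/270 degrees).

    phi = atan2(q90 - q270, q0 - q180) is the phase shift between the
    emitted and received modulation; distance = c * phi / (4 * pi * f_mod).
    """
    phi = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phi / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Beyond c / (2 * f_mod) the phase wraps and depth becomes ambiguous."""
    return C / (2 * f_mod_hz)

# Synthesize ideal, noise-free samples for a target at 3.0 m, 20 MHz modulation:
f_mod = 20e6
phi = 4 * math.pi * f_mod * 3.0 / C
q0, q90, q180, q270 = math.cos(phi), math.sin(phi), -math.cos(phi), -math.sin(phi)

print(f"recovered depth: {itof_depth(q0, q90, q180, q270, f_mod):.2f} m")  # 3.00 m
print(f"unambiguous range at 20 MHz: {unambiguous_range(f_mod):.2f} m")    # ~7.49 m
```

The unambiguous-range formula also shows why modulation frequency trades off against reach: higher frequencies sharpen precision but wrap sooner.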
In this blog you will learn how onsemi’s latest family addition, Hyperlux ID, has made significant advances in iToF technology, and how these advances can improve depth sensing in current industrial and commercial applications.
Existing iToF Technology Constraints Reduce Widespread Adoption
iToF sensing lies at the heart of many applications. One popular application is face recognition, as seen on various smartphones. However, this access control feature can only function at close range. Other applications that use iToF include machine vision (MV), robotics, augmented reality/virtual reality (AR/VR), biometrics and patient monitoring. Currently these applications are restricted to indoor use at close range (< 5 m) with stationary objects that do not require high resolution. Several challenges restrict the potential scope of iToF technology. Among these are motion, the overhead and complexity of the hardware and data processing architecture, and the need for meticulous calibration.
These significant hurdles force engineers either to implement complex and expensive 3D solutions to obtain depth, or simply not to acquire depth information at all. With remarkable innovations, onsemi introduces the Hyperlux ID family, which enables the benefits of iToF without the previously noted restrictions. Hyperlux ID’s iToF implementation can now enable more widespread adoption of this important technology.
Detailing the Hyperlux ID Advances
Onsemi’s Hyperlux ID sensing family initially consists of two 1.2-megapixel (MP) iToF products, the AF0130 and AF0131. This family provides advanced sensor performance and development in four critical areas:
- Receiving reliable depth information with moving objects
- Achieving optimal resolution/depth distance with high accuracy
- Reducing cost and size
- Decreasing calibration time
Each of these areas is detailed further below.
Minimizing Motion Artifacts
To enable more widespread adoption, iToF sensors need to function well with moving objects, so they can produce accurate results without motion artifacts. As mentioned, iToF sensing relies on light reflections captured at four or more different phases to calculate depth. Nearly all existing iToF sensing solutions in the marketplace do not capture and process these phases simultaneously, which causes issues with moving objects. Designed with a unique proprietary integration and readout structure, the Hyperlux ID depth sensor uses a global shutter with on-chip storage and real-time processing to enable fast-moving-object capture in applications such as conveyor belt operation, robot arms, surveillance, collision avoidance, object detection and more.

Most iToF sensors on the market today have only VGA resolution, which hinders their accuracy and, in turn, limits their applications. One reason VGA is more prevalent is the complex phase capture and data-intensive processing mentioned earlier. In contrast, the Hyperlux ID sensors are designed with 1.2 MP resolution (1280×960) using a high-performance 3.5 μm back-side illuminated (BSI) pixel. As a product of its increased resolution over VGA, the Hyperlux ID sensor offers the additional critical advantage of an expanded depth range. Additionally, at closer distances, high-precision accuracy is provided and wider-angle optics can be used.
With higher resolution, the Hyperlux ID sensors also deliver improved quantum efficiency and reduced depth jitter. Taken together, these enhancements mean new applications for iToF sensors where high resolution and expanded depth are paramount, such as gesture recognition, quality control/inspection and access control.

As a product of increased resolution, the Hyperlux ID depth sensor can measure depth over a much greater range compared to other iToF sensors currently available. While current iToF offerings have an indoor range of less than 10 meters, the Hyperlux ID iToF sensor family can reach up to 30 meters. The use of a high-performance global shutter pixel enables the full sensor array to align closely with the active infrared lighting, which in turn limits noise from other infrared sources such as common indoor lights and, most challenging of all, the sun.
Easier Calibration and Development
Accurately recording and calculating phase differences in iToF sensors requires precise calibration, an extremely time-consuming process. To ease this, onsemi has developed a proprietary method that makes Hyperlux ID sensors easier to calibrate and thus faster to set up.
To aid in development, onsemi has constructed an easy-to-use development kit that includes a baseboard, a sensor head board and a laser board. The kit can be used both indoors and outdoors with a range of 0.5 – 30 meters. It can produce depth maps, 3D point clouds, phase-out and depth-out data from an image.
Additionally, by using spread-spectrum techniques, many iToF sensors (and other infrared-enabled devices) can be used in the same system without concern about interference from other devices.
onsemi’s iToF Sensors Do More for Less
iToF sensors are excellent at making accurate 3D depth measurements, which has won them a solid place in industrial and commercial applications. With remarkable improvements in performance and design simplification, onsemi’s Hyperlux ID depth sensors open a new world of applications for iToF depth sensing.
Compared to iToF sensors on the market today, Hyperlux ID depth sensors work more effectively with objects in motion, outdoors and at greater distances. In addition, due to their novel design, Hyperlux ID depth sensors are more cost-effective, take up less board real estate and are easier to work with.
The Hyperlux ID family of depth sensors consists of two products: the AF0130 and AF0131. The AF0130 includes built-in depth processing while the AF0131 does not, for customers who prefer to use their own original algorithms.
The post Overcome the Current Challenges of Indirect Time-of-Flight (iToF) Technology with Technological Advancements appeared first on ELE Times.
Comptek launches Kontrox LASE 16 for industrial-scale edge-emitting laser facet passivation
Empower industrial IoT through integrated connectivity, precise positioning and value-added services with a new modem lineup from Qualcomm
Three new modems, purpose-built for IoT, bring an industry-first iSIM, cloud services and connectivity on NB-IoT and Cat 1bis networks for ubiquitous coverage.
The industrial Internet of Things (IIoT) is rapidly transforming industries, enabling businesses to achieve greater efficiency, productivity and visibility. However, deploying successful IIoT applications requires reliable connectivity, accurate positioning and cost-effective solutions. Three new modems from Qualcomm Technologies are purpose-built to address far-ranging use cases across industrial applications through an industry-first integrated SIM (iSIM), and LTE connectivity on Narrowband IoT (NB-IoT) and Cat 1bis networks, for coverage even in challenging signal environments.
The Qualcomm E41 4G Modem-RF
The Qualcomm E41 4G Modem-RF evolves IoT device capabilities by bringing integrated connectivity through an industry-first GSMA pre-certified iSIM. It offers device manufacturers the ability to simplify the device manufacturing process by reducing the need for additional parts and multiple models of the same device, helping accelerate the time to market of commercial devices, since those devices can be remotely provisioned to the desired network once manufactured through integrated connectivity capabilities. The E41 4G Modem-RF is also purpose-built for use with the Qualcomm Aware Platform so enterprises, OEMs, ODMs and developers can easily build, deploy and scale cloud-connected devices that can be tailored to solve various industrial challenges across businesses, through value-added, cloud-based services.
The Qualcomm E51 4G Modem-RF and Qualcomm E52 4G Modem-RF
Continuing the mission of advancing cellular connectivity for everyone and across every device, Qualcomm is proudly introducing a new generation of modem solutions for IoT, optimized for use on NB-IoT and Cat 1bis networks. Both the Qualcomm E51 4G Modem-RF and the Qualcomm E52 4G Modem-RF feature a highly integrated design that allows for power and cost optimizations for device manufacturers. These two low-power solutions contain an integrated power management unit, support for RF communications, and a rich array of peripherals.
The former of these two solutions also removes the need for dedicated GPS hardware through cloud-based GPS positioning services, further helping device manufacturers save on device costs, while reducing positioning error in open sky and dense urban environments. Regardless of which modem ODMs and OEMs choose, they can rest assured they can utilize low-power connectivity and intelligent power management capabilities, and NB-IoT or Cat 1bis connectivity, making these modems ideal for ultra-low power connectivity across a range of IoT devices including smart meters, smart city devices, intelligent parking solutions, healthcare devices, wearable devices, IP cameras, point-of-sale terminals and more.

The Qualcomm E41 4G Modem-RF and Qualcomm E52 4G Modem-RF are both Cat 1bis solutions that represent advancements in IIoT connectivity, including a breakthrough on the former of these modems, which features an industry-first, GSMA pre-certified iSIM solution that can be programmed during manufacturing or remotely via a SIM provisioning service. This will enable devices to more readily connect to a variety of cellular networks across the globe, thereby making it easier than ever for ODMs, OEMs, MNOs and MVNOs to integrate connectivity on devices across networks.
The potential applications for the E41 4G Modem-RF span across a variety of IoT devices, including smart meters that are placed in remote areas that have historically required frequent battery replacements or manual readings. Now, those meters can operate more efficiently by using integrated connectivity and remote management to send readings proactively over the air, and alert remote decision-makers when maintenance is needed.

IoT devices are deployed in a variety of environments, including where location technologies have traditionally been challenged, such as indoor areas like warehouses and retail stores. The E41 4G Modem-RF uses several positioning techniques to address the needs of industrial IoT applications, including in these difficult signal environments, using ambient signals from existing Wi-Fi access points and cellular towers. Positioning can be achieved either directly through the modem, or through Qualcomm Aware Positioning Services, which adds cloud-based positioning services and available GNSS assistance, when paired with the all-new optional dual-band GNSS receiver, the Qualcomm QCG110. This is an ideal solution for positioning devices in open-sky environments that require precise positioning, using multiple constellations, in a power-conscious way.
With its variety of positioning technologies, the E41 4G Modem-RF provides a robust solution for IIoT applications including asset tracking and fleet management, energy and utilities, retail and mobile network operators, powering continuous asset visibility, monitoring and management capabilities even in the most challenging conditions.
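The general idea behind positioning from ambient signals can be sketched as RSSI-based multilateration: convert each received signal strength to a range with a path-loss model, then find the point that best fits all ranges. Everything below (access-point coordinates, path-loss exponent, RSSI values) is hypothetical, and this is not Qualcomm Aware’s actual algorithm:

```python
import math

# Log-distance path-loss model (hypothetical calibration):
P0 = -40.0  # RSSI at the 1 m reference distance, dBm
N = 2.7     # path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert the path-loss model to estimate range in meters."""
    return 10 ** ((P0 - rssi_dbm) / (10 * N))

# Known (x, y) positions of ambient Wi-Fi access points, and the RSSI at
# which each was heard (values synthesized for a device near (8, 6)):
observations = {(0.0, 0.0): -67.0, (20.0, 0.0): -70.4, (10.0, 15.0): -66.0}

def locate(obs, step=0.25):
    """Coarse grid search minimizing squared range residuals."""
    ranges = {xy: rssi_to_distance(r) for xy, r in obs.items()}
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 20.0:
        y = 0.0
        while y <= 20.0:
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for (ax, ay), d in ranges.items())
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

x, y = locate(observations)
print(f"estimated position: ({x:.2f}, {y:.2f})")
```

A production system would replace the grid search with a least-squares solver and fuse in cellular and GNSS measurements, but the range-fitting idea is the same.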

All three new modems will help device manufacturers simplify the development process and reduce the time and costs to develop devices through a highly integrated design architecture. Because the E41 4G Modem-RF incorporates iSIM technology directly into the hardware design, it reduces the total cost of assembling a device, since the cost of the SIM card is included in the modem. OEMs can develop a single device model that can be remotely programmed to work in different regions around the globe, transforming the traditional manufacturing model in which it has been necessary to build multiple models of the same device, each using a different SIM, to work with different connectivity providers across regions. By utilizing the E41 4G Modem-RF’s compact design, businesses can unlock the full potential of IIoT without compromising on quality or performance, and reduce design complexity.

The capabilities of all three modems unlock a wide variety of possibilities across smart wearables in warehousing, industrial handheld devices in retail, smart metering in energy and utilities, guidance for autonomous robots across retail, warehouses and more.
In the energy and utilities sector, example uses for all three of these modems include:
- Improved operational efficiency and energy distribution on a localized grid level with reduced costs through less manual intervention.
- Long-lasting asset control capabilities for vital infrastructure, such as electric meters, through precise data collection and remote management capabilities.
- High-temperature support allows devices to be deployed and used in harsh environments that are typical of the energy and utilities space.
- Support for devices such as IP cameras, wearable devices, smart meters and industrial handheld devices.
In the retail sector, examples of solutions the E41 4G Modem-RF can power include:
- Real-time inventory management and security-focused payment processing to point-of-sale systems and industrial handheld devices.
- On-device AI capabilities and advanced security surveillance functionality on IP cameras with real-time alerts and remote monitoring capabilities.
For autonomous robots in manufacturing, logistics and retail applications, the E41 4G Modem-RF provides:
- Precise positioning and connectivity, delivering efficient navigation and automation.
- Low-latency and security-focused processing for enhanced reliability during use.
At its core, the integrated and compact design of these three modems supports a wide range of IoT applications that demand both precise, low-power positioning and seamless connectivity, within a single, versatile design that can be selected depending on the target application, empowering businesses across multiple industries to achieve growth and seize new opportunities.
The post Empower industrial IoT through integrated connectivity, precise positioning and value-added services with a new modem lineup from Qualcomm appeared first on ELE Times.