News from the world of micro- and nanoelectronics
OIF highlighting how interoperability enables scalable, AI-era networks through Market Focus sessions and live demos
TRUMPF demos linear performance of 850nm 100G VCSEL and PD in Optomind’s transceiver
TRUMPF unveils 850nm multimode 100G datacom VCSEL
The Motorola 68000: A 32-Bit Brain in a 16-Bit Body
Hybrid system resolves edge AI’s on-chip memory conundrum

Edge AI—enabling autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives—can now adopt learning models on the fly while keeping energy consumption and hardware wear under tight control.
It’s made possible by a hybrid memory system that combines the best traits of two previously incompatible technologies—ferroelectric capacitors and memristors—into a single, CMOS-compatible memory stack. This novel architecture has been developed by scientists at CEA-Leti in collaboration with other French microelectronics research centers.
Their work has been published in a paper titled “A Ferroelectric-Memristor Memory for Both Training and Inference” in Nature Electronics. It explains how it’s possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems.
The on-chip memory conundrum
Edge AI requires both inference—reading data to make decisions—and learning, a.k.a. training—updating models based on new data—on a chip, without burning through energy budgets or exceeding hardware constraints. However, the two tasks favor different on-chip memories: memristors are considered suitable for inference, while ferroelectric capacitors (FeCAPs) are better suited to learning.
Resistive random-access memories, or memristors, excel at inference because they can store analog weights; they are also energy-efficient during read operations and well suited to in-memory computing. However, while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.
On the other hand, ferroelectric capacitors allow rapid, low-energy updates, but their read operations are destructive, making them unsuitable for inference. Consequently, design engineers face the choice of either favoring inference and outsourcing training to the cloud or carrying out training with high costs and limited endurance.
This led French scientists to adopt a hybrid approach in which forward and backward passes use low-precision weights stored in analog form in memristors, while updates are achieved using higher-precision FeCAPs. “Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper on this new hybrid memory system.
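The update scheme Martemucci describes can be caricatured in a few lines of Python. The bit widths, transfer period, and array size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): 8-bit hidden weights
# accumulate small updates in FeCAPs; only the 4 most-significant
# bits are periodically transferred to the analog memristor weights
# used for the forward and backward passes.
HIDDEN_BITS = 8
MSB_BITS = 4
TRANSFER_PERIOD = 32  # reprogram memristors every N updates

hidden = np.zeros(16, dtype=np.int32)  # FeCAP-stored hidden weights
analog = np.zeros(16)                  # memristor conductance weights

for step in range(1, 129):
    # Small, progressive weight adjustments (a toy stand-in for
    # gradients computed from the low-precision analog weights).
    grad = rng.integers(-2, 3, size=16)
    hidden = np.clip(hidden + grad, 0, (1 << HIDDEN_BITS) - 1)
    if step % TRANSFER_PERIOD == 0:
        # Periodic reprogramming: keep only the MSBs of the hidden
        # weights and express them as normalized analog levels.
        analog = (hidden >> (HIDDEN_BITS - MSB_BITS)) / float(1 << MSB_BITS)
```

The point of the split is that the FeCAPs absorb many cheap, fine-grained updates, while the memristors are rewritten only occasionally, sparing their limited endurance.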
How the hybrid approach works
The CEA-Leti team developed this hybrid system by engineering a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode memory device can operate as a FeCAP or a memristor, depending on its electrical formation.
In other words, the same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state. Here, a digital-to-analog transfer method, requiring no formal DAC, converts hidden weights in FeCAPs into conductance levels in memristors.
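As a rough illustration of a DAC-free transfer, the most-significant bits of a FeCAP-held hidden weight can directly select a discrete conductance level. The function name, bit widths, and conductance range below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: the most-significant bits of an 8-bit hidden
# weight directly index one of 16 discrete memristor conductance
# levels, so no formal DAC is needed.
def msb_to_conductance(hidden_weight: int,
                       hidden_bits: int = 8,
                       msb_bits: int = 4,
                       g_min: float = 10e-6,
                       g_max: float = 100e-6) -> float:
    """Map a hidden weight's MSBs to a conductance level in siemens."""
    levels = (1 << msb_bits) - 1
    msb = hidden_weight >> (hidden_bits - msb_bits)
    return g_min + (g_max - g_min) * msb / levels

# Hidden weights that differ only in their low-order bits land on the
# same conductance level:
assert msb_to_conductance(0b10110001) == msb_to_conductance(0b10111111)
```

Discarding the low-order bits at transfer time is what makes the scheme tolerant of the memristors' limited analog precision.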
The hardware for this hybrid system was fabricated and tested on an 18,432-device array using standard 130-nm CMOS technology, integrating both memory types and their periphery circuits on a single chip.
CEA-Leti has acknowledged funding support for this design undertaking from the European Research Council and the French Government’s France 2030 grant.
Related Content
- Speak Up to Shape Next-Gen Edge AI
- AI at the edge: It’s just getting started
- Will Memory Constraints Limit Edge AI in Logistics?
- Two new runtime tools to accelerate edge AI deployment
- For Leti and ST, the Fastest Way to Edge AI Is Through the Memory Wall
The post Hybrid system resolves edge AI’s on-chip memory conundrum appeared first on EDN.
10s to 28s charger.
Took a 10s charger and slapped an 1800W boost converter on it that has CC/CV and goes up to 125 V DC. Just need to add XT30/60 and 90 contacts to it so that I have different options for different batteries. Going to swap out the 10s charger for a 1500W power supply, add a volt/amp/Wh display, change the pot on CC for one similar to what I've already changed the CV pot to, and put in an 800W buck converter with CC/CV to be able to charge batteries smaller than 10s as well.
