
Breaking through a technological roadblock that has long limited efficient edge-AI learning, a team of French scientists developed the first hybrid memory technology to support adaptive local training and inference of artificial neural networks.
In a paper titled “A Ferroelectric-Memristor Memory for Both Training and Inference,” published in Nature Electronics, the team presents a new hybrid memory system that combines the best traits of two previously incompatible technologies, ferroelectric capacitors and memristors, into a single, CMOS-compatible memory stack. This novel architecture delivers a long-sought solution to one of edge AI’s most vexing challenges: how to perform both learning and inference on a chip without burning through energy budgets or exceeding hardware constraints.
Led by CEA-Leti, and including scientists from several French microelectronic research centers, the project demonstrated that it is possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems. The team’s innovation enables edge systems and devices like autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives—adapting models on the fly while keeping energy consumption and hardware wear under tight control.
The Challenge: A No-Win Tradeoff
Edge AI demands both inference (reading data to make decisions) and learning (updating models based on new data). But until now, memory technologies could only do one well:
- Memristors (resistive random access memories) excel at inference because they can store analog weights, are energy-efficient during read operations, and support in-memory computing.
- Ferroelectric capacitors (FeCAPs) allow rapid, low-energy updates, but their read operations are destructive—making them unsuitable for inference.
As a result, hardware designers faced a choice: favor inference and outsource training to the cloud, or attempt on-chip training at high cost and with limited endurance.
Training at the Edge
The team’s guiding idea was that while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.
“Inspired by quantized neural networks, we adopted a hybrid approach: Forward and backward passes use low-precision weights stored in analog in memristors, while updates are achieved using higher-precision FeCAPs. Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper.
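To make that division of labor concrete, here is a minimal sketch of the hybrid scheme in Python. All names, sizes, and the toy gradient are illustrative assumptions, not the authors’ implementation: high-precision “hidden” weights stand in for FeCAP storage, and a few-level quantized copy stands in for memristor conductances.

```python
import numpy as np

# Minimal sketch of the hybrid training scheme (assumed names and sizes,
# not the authors' code): hidden weights emulate high-precision FeCAP
# storage; a quantized copy emulates low-precision memristor conductances.
ANALOG_LEVELS = 4     # assumed number of distinguishable conductance levels
TRANSFER_EVERY = 100  # assumed period for reprogramming memristors from FeCAPs

def quantize(w_hidden, levels=ANALOG_LEVELS):
    """Map high-precision hidden weights onto a few analog levels."""
    w = np.clip(w_hidden, -1.0, 1.0)
    step = 2.0 / (levels - 1)
    return np.round((w + 1.0) / step) * step - 1.0

rng = np.random.default_rng(0)
w_hidden = rng.uniform(-0.1, 0.1, size=(16, 4))  # "FeCAP" hidden weights
w_analog = quantize(w_hidden)                    # "memristor" inference weights

x = rng.normal(size=16)      # toy input
target = rng.normal(size=4)  # toy regression target

for step in range(1, 1001):
    # Forward/backward passes use the low-precision memristor weights...
    y = x @ w_analog
    grad = np.outer(x, y - target)  # gradient of 0.5 * ||y - target||^2
    # ...while small, progressive updates accumulate in the high-precision FeCAPs.
    w_hidden -= 1e-3 * grad
    # Periodically reprogram the memristors from the most significant bits
    # (here, simply the quantized value) of the hidden weights.
    if step % TRANSFER_EVERY == 0:
        w_analog = quantize(w_hidden)
```

The key point the sketch captures is that the coarse memristor weights are never nudged directly; only the high-precision hidden copy accumulates small gradients, and the memristors are refreshed from it at intervals.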
The Breakthrough: One Memory, Two Personalities
The team engineered a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode device can operate as a FeCAP or a memristor, depending on how it’s electrically “formed.”
- The same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state.
- A digital-to-analog transfer method, requiring no formal DAC, converts the hidden weights stored in FeCAPs into conductance levels in memristors, as illustrated in the sketch after this list.
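As a loose illustration of what a DAC-free transfer can look like, the sketch below derives a pulse count directly from the most significant bits of a digitally stored weight. The bit widths and the linear pulse-response model are assumptions for illustration, not the device’s measured behavior.

```python
# Illustrative DAC-free digital-to-analog transfer (assumed bit widths and a
# toy linear pulse response, not the device's measured behavior): the MSBs of
# a FeCAP-stored weight directly set how many identical SET pulses the
# memristor receives, so its conductance encodes the code with no formal DAC.

FECAP_BITS = 8  # assumed width of the digitally stored hidden weight
N_MSB = 2       # assumed number of most significant bits transferred

def msb_code(word: int, total_bits: int = FECAP_BITS, n_msb: int = N_MSB) -> int:
    """Keep only the n_msb most significant bits of an unsigned weight word."""
    return word >> (total_bits - n_msb)

def program_conductance(code: int, g_min: float = 1.0, g_step: float = 0.5) -> float:
    """Toy pulse model: each identical SET pulse raises conductance one step."""
    g = g_min
    for _ in range(code):  # 'code' pulses; the pulse counter replaces a DAC
        g += g_step
    return g

for word in (0b00011010, 0b01110001, 0b10100110, 0b11111111):
    code = msb_code(word)
    g = program_conductance(code)
    print(f"{word:08b} -> MSB code {code} -> conductance {g:.1f} (a.u.)")
```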
This hardware was fabricated and tested on an 18,432-device array using standard 130-nm CMOS technology, integrating both memory types and their peripheral circuits on a single chip.
In addition to CEA-Leti, the research team included scientists from Université Grenoble Alpes, CEA-List, the French National Centre for Scientific Research (CNRS), the University of Bordeaux, Bordeaux INP, IMS France, Université Paris-Saclay, and the Center for Nanosciences and Nanotechnologies (C2N).
The team acknowledges funding support from the European Research Council (Consolidator Grant DIVERSE: 101043854) and a France 2030 government grant (ANR-22-PEEL-0010).