Reviewed by Lexie Corner, Aug 1, 2024
Researchers from Korea University and POSTECH have demonstrated how analog hardware using ECRAM devices can maximize artificial intelligence's computational performance, showcasing its potential for commercialization. The study has been published in Science Advances.
The research team included Professor Hyung-Min Lee from Korea University's School of Electrical Engineering, along with Professor Seyoung Kim from the Department of Materials Science and Engineering and the Department of Semiconductor Engineering at POSTECH. POSTECH graduate Kyungmi Noh and Ph.D. student Hyunjeong Kwak, both from the Department of Materials Science and Engineering, were also involved in the study.
The rapid advance of AI technology, including applications such as generative AI, has pushed the scalability of current digital hardware (CPUs, GPUs, ASICs, and the like) to its limits. As a result, research into analog hardware tailored to AI computation is ongoing. To process AI computations in parallel, analog hardware uses a cross-point array structure in which memory devices sit at the intersections of perpendicular lines, and the resistance of each device is modified by an external voltage or current.
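In such a cross-point array, a matrix-vector multiplication happens physically: voltages applied along one set of lines produce currents on the crossing lines, and each output current is the conductance-weighted sum of all inputs. The minimal NumPy sketch below illustrates the computation the array performs in a single analog step; the sizes and values are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal sketch of the analog matrix-vector multiply performed by a
# cross-point array: input voltages drive the word lines, each device
# contributes a current I = G * V (Ohm's law), and the bit lines sum
# those currents (Kirchhoff's current law). Values are illustrative.
rng = np.random.default_rng(0)

n_rows, n_cols = 64, 64
G = rng.uniform(0.1, 1.0, size=(n_cols, n_rows))  # conductance of each device
v = rng.uniform(-0.2, 0.2, size=n_rows)           # input voltages on word lines

i = G @ v  # all 64 column currents appear at once: one fully parallel multiply
print(i.shape)  # (64,)
```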
While analog hardware offers advantages over digital hardware for specific computational tasks and for processing continuous data, it still struggles to meet the diverse requirements of both learning and inference.
The research team focused on electrochemical random-access memory (ECRAM), which controls electrical conductivity through the movement and concentration of ions, to overcome the limitations of existing analog memory devices. Unlike conventional semiconductor memory, ECRAM devices have a three-terminal structure with separate paths for reading and writing data, allowing them to operate at relatively low power.
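Conceptually, the third terminal means that writing (gate pulses that drive ions into or out of the channel) is decoupled from reading (sensing the channel conductance), so a read does not disturb the stored state. The toy model below sketches that separation; the linear update rule and all constants are assumptions for illustration, not device physics from the paper.

```python
class ECRAMCell:
    """Toy model of a three-terminal ECRAM cell (illustrative only).

    Writing drives ions via the gate terminal and shifts the channel
    conductance; reading senses the channel through a separate
    source-drain path and leaves the stored state unchanged.
    """

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.01):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def write(self, n_pulses):
        # Gate pulses nudge the conductance by a small amount per pulse,
        # saturating at the device limits (assumed linear update rule).
        self.g = min(self.g_max, max(self.g_min, self.g + n_pulses * self.step))

    def read(self, v_read=0.1):
        # Sensing the channel current does not move ions, so the
        # state is undisturbed by reads.
        return self.g * v_read

cell = ECRAMCell()
cell.write(+5)       # potentiate with 5 positive gate pulses
print(cell.read())   # read current; state remains as written
```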
For their study, the researchers fabricated three-terminal ECRAM devices in a 64 × 64 array. Experiments showed that hardware built from these devices achieved high yield and uniformity, along with excellent electrical and switching characteristics. On this high-yield hardware, the team also implemented the Tiki-Taka algorithm, a state-of-the-art analog-specific learning algorithm, to maximize the accuracy of AI neural network training computations.
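Tiki-Taka-style training splits the weights across two analog arrays: a fast array that accumulates noisy rank-1 gradient updates, and a slow array into which that information is periodically transferred. The sketch below shows only this two-array idea in schematic NumPy form; the variable names, hyperparameters, and round-robin transfer schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Schematic sketch of the two-array idea behind Tiki-Taka-style training.
# Array A receives gradient (outer-product) updates; slices of A are
# periodically transferred into array C, which holds the stable part of
# the weights. The effective weight is W = gamma * A + C.
rng = np.random.default_rng(0)

n_out, n_in = 4, 8
A = np.zeros((n_out, n_in))     # fast array: absorbs gradient updates
C = np.zeros((n_out, n_in))     # slow array: accumulates transfers
gamma, lr, transfer_lr = 0.5, 0.1, 0.05
transfer_every = 10             # transfer one column every N updates
col = 0                         # column pointer, cycled round-robin

for step in range(100):
    x = rng.standard_normal(n_in)     # input activation (stand-in)
    d = rng.standard_normal(n_out)    # backpropagated error (stand-in)
    A -= lr * np.outer(d, x)          # rank-1 analog-style update on A
    if (step + 1) % transfer_every == 0:
        C[:, col] += transfer_lr * A[:, col]  # read a slice of A, push into C
        col = (col + 1) % n_in

W = gamma * A + C   # effective weight read out from both arrays
```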
The researchers also demonstrated how the "weight retention" property of hardware training, that is, how well devices hold their programmed state, affects learning, and confirmed that their technique does not overburden the artificial neural network. This highlights the technology's potential for commercialization.
This research is significant because the largest ECRAM array previously reported for processing and storing analog signals was 10 × 10. With the 64 × 64 array, the researchers have now deployed these devices, each with its own characteristics, at the largest scale demonstrated to date.
“By realizing large-scale arrays based on novel memory device technologies and developing analog-specific AI algorithms, we have identified the potential for AI computational performance and energy efficiency that far surpass current digital methods.”
Seyoung Kim, Professor, Pohang University of Science and Technology
The research was supported by the Korea Semiconductor Industry Association, the Korea Planning & Evaluation Institute of Industrial Technology (KEIT), and the Public-Private Partnership for Semiconductor Talent Training Program, with EDA tools provided by the IC Design Education Center (IDEC).
Journal Reference:
Noh, K., et al. (2024) Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator. Science Advances. https://doi.org/10.1126/sciadv.adl3350