
One major issue facing artificial intelligence is the interaction between a computer’s memory and its processing unit. While an algorithm runs, data shuttles constantly between these two components, and because AI models rely on vast amounts of data, that traffic creates a bottleneck.

A new study, published on Monday in the journal Frontiers in Science by researchers at Purdue University and the Georgia Institute of Technology, suggests a novel approach to building computer architecture for AI models using brain-inspired algorithms. The researchers say that designing algorithms this way could reduce the energy costs associated with AI models.

“Language processing models have grown 5,000-fold in size over the last four years,” Kaushik Roy, a Purdue University computer engineering professor and the study’s lead author, said in a statement. “This alarmingly rapid expansion makes it crucial that AI is as efficient as possible. That means fundamentally rethinking how computers are designed.”


Most computers today are modeled on an idea from 1945 called the von Neumann architecture, which separates processing and memory. That separation is where the slowdown occurs. As more people around the world use data-hungry AI models, the split between a computer’s processor and its memory could become an even more significant issue.

Researchers at IBM called out this problem in a post earlier this year. The issue computer engineers are running up against is called the “memory wall.”

Breaking the memory wall

The memory wall refers to the growing gap between how quickly processors can compute and how quickly memory can supply them with data. Essentially, computer memory is struggling to keep up with processing speeds. This isn’t a new issue: a pair of researchers from the University of Virginia coined the term back in the 1990s.
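
To get a feel for why the wall matters so much for AI, consider a rough back-of-the-envelope estimate. The sketch below is an illustration, not the study’s own analysis: it uses frequently cited per-operation energy figures (Mark Horowitz’s widely referenced 2014 estimates for 45-nanometer chips, so treat the exact numbers as assumptions) to compare the cost of the arithmetic in a single neural-network layer with the cost of fetching its weights from off-chip memory.

```python
# Back-of-the-envelope sketch (not from the study) of the memory wall.
# Energy figures are rough, frequently cited estimates; real chips vary.
DRAM_READ_PJ = 640.0   # fetch one 32-bit word from off-chip DRAM
MAC_PJ = 4.6           # one 32-bit multiply-accumulate on-chip

def energy_breakdown(rows: int, cols: int) -> tuple[float, float]:
    """Estimate compute vs. memory energy for one matrix-vector multiply,
    assuming every weight must be fetched from DRAM (no reuse)."""
    macs = rows * cols                 # one multiply-accumulate per weight
    compute_pj = macs * MAC_PJ
    memory_pj = macs * DRAM_READ_PJ    # one DRAM fetch per weight
    return compute_pj, memory_pj

# A layer roughly the size of a small language-model projection.
compute, memory = energy_breakdown(4096, 4096)
print(f"compute: {compute / 1e6:.1f} uJ, memory: {memory / 1e6:.1f} uJ")
print(f"memory traffic accounts for ~{memory / (compute + memory):.0%} of the energy")
```

Under those assumptions, moving the data costs roughly 99% of the energy; the math itself is almost free. That imbalance is the memory wall in miniature.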

But now that AI is prevalent, the memory wall is sucking up time and energy in the computers that make AI models work. The paper’s researchers argue for a new computer architecture that integrates memory and processing.

Inspired by how our brains function, the AI algorithms referred to in the paper are known as spiking neural networks, or SNNs. A common criticism of these algorithms has been that they can be slow and inaccurate, but some computer scientists argue that they have improved significantly over the last few years.
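
For readers curious what “spiking” means in practice, here is a minimal sketch of the classic leaky integrate-and-fire neuron that underlies many SNNs. It’s an illustrative toy, not the researchers’ model: the neuron accumulates weighted input over time, leaks charge between steps and emits a binary spike only when its internal potential crosses a threshold, which keeps activity, and therefore energy use, sparse.

```python
import numpy as np

def lif_neuron(input_spikes, weights, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron (illustrative parameters).

    input_spikes : (timesteps, n_inputs) array of 0/1 input spikes
    weights      : (n_inputs,) synaptic weights
    Returns the timesteps at which the neuron fired.
    """
    potential = 0.0
    fired_at = []
    for t, spikes in enumerate(input_spikes):
        potential = leak * potential + float(np.dot(weights, spikes))
        if potential >= threshold:   # fire a spike...
            fired_at.append(t)
            potential = 0.0          # ...and reset the membrane potential
    return fired_at

rng = np.random.default_rng(0)
inputs = (rng.random((50, 8)) < 0.2).astype(float)   # sparse random input spikes
print(lif_neuron(inputs, weights=rng.uniform(0.1, 0.5, size=8)))
```

Because the output is a sparse train of spikes rather than a dense stream of numbers, hardware built around SNNs only has to do work when something actually fires.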

The researchers suggest pairing SNNs with a hardware concept known as compute-in-memory, or CIM, which is still relatively new in the field of AI.

“CIM offers a promising solution to the memory wall problem by integrating computing capabilities directly into the memory system,” the authors write in the paper’s abstract. 
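
In its most common form, compute-in-memory stores a layer’s weights in a grid, or crossbar, of memory cells and reads out the result in place, so the weights never travel to a separate processor. The toy simulation below is a digital illustration of that idea, not the paper’s hardware: the “array” holds the weights permanently, and the only data that moves during inference is the small input and output vectors.

```python
import numpy as np

class CrossbarArray:
    """Toy model of a compute-in-memory crossbar (illustrative only).

    Weights are programmed into the array once; after that, each
    inference moves only the input and output vectors, never the weights.
    """
    def __init__(self, weights: np.ndarray):
        self.weights = weights        # stays resident inside the memory array
        self.words_moved = 0          # counts data shuttled per inference

    def matvec(self, x: np.ndarray) -> np.ndarray:
        self.words_moved += x.size    # drive the input vector onto the rows
        y = self.weights @ x          # multiply-accumulate happens in place
        self.words_moved += y.size    # read the result off the columns
        return y

rng = np.random.default_rng(1)
array = CrossbarArray(rng.standard_normal((256, 256)))
array.matvec(rng.standard_normal(256))
print(f"words moved: {array.words_moved}, weights that never moved: {256 * 256}")
```

In this toy example, the 65,536 weight fetches that a conventional design would need simply disappear, and that kind of reduction in data movement is where the energy savings the authors describe would come from.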

Medical devices, transportation, and drones are a few areas where researchers believe improvements could be made if computer processing and memory were integrated into a single system. 

“AI is one of the most transformative technologies of the 21st century. However, to move it out of data centers and into the real world, we need to dramatically reduce its energy use,” Tanvi Sharma, co-author and researcher at Purdue University, said in a statement. 

“With less data transfer and more efficient processing, AI can fit into small, affordable devices with batteries that last longer,” Sharma said. 


