Can we learn to learn more effectively? This article delves into the world of learning, both artificial and natural, exploring the mechanisms behind the acquisition of new skills and knowledge.
Large language models (LLMs) have captivated the world with their performance, although they sometimes encounter notable failures in reasoning tasks. Active research aims to improve these models, both at the architectural level and through training techniques that strengthen reasoning skills. However, mysteries remain regarding the emergence of these skills. What does learning really involve? What does it mean for a neural network to learn? Can a better understanding of human learning pave the way for new learning methods for future AI?
“You don’t understand anything until you learn it more than one way.” —Marvin Minsky
“That is what learning is. You suddenly understand something you've understood all your life, but in a new way.” —Doris Lessing
We will discuss how current LLMs learn, their advantages and limitations, the distinction between learning and memorization, the factors influencing the emergence of effective learning, as well as a reflection on a new paradigm of learning.
The classic method of language models, illustrated by GPT, is based on the prediction of the next word in a sequence. This approach, although simple, allows models to learn the relationships between different parts of a sequence. Unsupervised learning, where the model learns on its own from large amounts of data, has been shown to be effective in understanding complex data.
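To make next-word prediction concrete, here is a minimal sketch using bigram counts rather than a neural network: the model simply learns, from a toy corpus, which word most often follows a given word. The corpus, function names, and example sentences are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies: a bare-bones stand-in for
    next-word prediction as done (at far greater scale) by LLMs."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for word, nxt in zip(words, words[1:]):
            counts[word][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the continuation most frequently seen after `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = ["the cat sat on the mat", "the cat ran"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice vs "mat" once)
```

A real LLM replaces the frequency table with a neural network over entire contexts, but the learning signal is the same: predict what comes next.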
However, this method raises the question of whether it reflects memorization more than true learning. Reinforcement Learning from Human Feedback (RLHF) was developed to better align LLMs with complex human values, transforming models like GPT-3 into ChatGPT. But is this enough to consider that the model has really learned?
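At the heart of RLHF is a reward model trained on human preference pairs, commonly with a Bradley-Terry-style pairwise loss: the loss is small when the model scores the human-preferred answer higher than the rejected one. The sketch below shows only that loss on made-up scalar rewards, not a full RLHF pipeline.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def preference_loss(r_chosen, r_rejected):
    """Pairwise preference loss used to train RLHF reward models:
    -log sigmoid(r_chosen - r_rejected).
    Low when the preferred answer gets the higher reward."""
    return -math.log(sigmoid(r_chosen - r_rejected))

# Hypothetical reward scores: loss is low when preferences are respected,
# high when they are violated.
print(round(preference_loss(2.0, 0.0), 3))  # ~0.127
print(round(preference_loss(0.0, 2.0), 3))  # ~2.127
```

The trained reward model then steers the LLM via reinforcement learning, which is what turns a raw next-word predictor into an assistant aligned with human feedback.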
The difference between learning and memorization remains unclear. Strategies like Chain-of-Thought (CoT) and self-consistency, inspired by the way students learn, attempt to increase the reasoning capabilities of models. However, these strategies are often seen as temporary solutions rather than definitive answers to LLM obstacles.
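Self-consistency is simple to state: sample several reasoning chains for the same question, then take a majority vote over their final answers. A minimal sketch, with hypothetical sampled answers standing in for real model outputs:

```python
from collections import Counter

def self_consistency(final_answers):
    """Majority vote over the final answers extracted from several
    independently sampled chain-of-thought completions."""
    return Counter(final_answers).most_common(1)[0][0]

# Hypothetical answers from five sampled reasoning chains:
sampled = ["42", "42", "17", "42", "17"]
print(self_consistency(sampled))  # "42"
```

The intuition mirrors a student checking work by re-deriving an answer several ways: a correct result tends to be reached by more paths than any particular error.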
True learning seems to require more than a simple accumulation of data. Studies of the "grokking" phenomenon show that models go through a memorization phase before beginning to generalize, suggesting that learning involves a form of generalization rather than simple retention of information. This realization raises the idea that effective learning might require overcoming certain constraints, similar to those faced by the human brain.
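The signature of grokking is a gap between the two curves tracked during training: training accuracy saturates early (memorization), while validation accuracy jumps much later (generalization). The sketch below just locates that transition in hypothetical accuracy curves; the numbers are illustrative, not real experimental data.

```python
def grokking_step(history, threshold=0.95):
    """Return the first step where validation accuracy catches up with
    training accuracy -- the transition from memorization to
    generalization. `history` is a list of (train_acc, val_acc) pairs."""
    for step, (train_acc, val_acc) in enumerate(history):
        if train_acc >= threshold and val_acc >= threshold:
            return step
    return None  # still memorizing: no generalization observed yet

# Hypothetical curves: training accuracy is perfect from step 0,
# but validation accuracy only jumps at step 3.
curves = [(1.0, 0.10), (1.0, 0.12), (1.0, 0.30), (1.0, 0.97)]
print(grokking_step(curves))  # 3
```

In the original grokking experiments this delay could span many thousands of steps, which is why the memorization phase was initially mistaken for the end of learning.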
Human learning, especially in children, offers unique perspectives on knowledge acquisition. Children learn to speak not by passively memorizing, but by actively interacting with their environment. This rich and varied interplay contrasts with the more homogeneous approach to LLM training. Research suggests that human learning, including the ability to understand the mental states of others (theory of mind), could inspire new learning methods for AIs.