Human abilities
Like many other people, I've been astounded by recent progress with neural network "large language models" such as ChatGPT. I've tried to understand the scope of the competencies of these models, and I've confronted their implications for my own work and for human minds.
I've concluded that large language models act as universal translators by capturing the "deep semantics" of natural languages such as English. This "deep semantics" includes the "common sense knowledge" and "common sense reasoning" required to understand the meaning of natural language sentences. This corresponds to Noam Chomsky's vision of language as the distinguishing feature of the human mind: natural language grants an unlimited power of imagination, the ability to conjure and to communicate situations that have never existed, anywhere within the space of "all imaginable worlds". Natural language seems to be the main thing that distinguishes the human mind from other animal minds.
The very fact that the deep semantics of natural language can be captured in neural network models leads to some surprising conclusions about human minds. The human mind may be the combination of an "animal competency" and a "language competency". The animal competency consists of the capability for reinforcement learning, including goals, satisfaction, disappointment, confidence, and fear. These capabilities are shared between humans and most large animals. The language competency consists of the capability for plausible logical reasoning about sentences in natural language, including most common sense reasoning. These capabilities are shared between humans and most large language models.
If this analysis is correct, it leads to a surprising perspective on the human mind. The human mind may consist of nothing more than the fusion of two major competencies, both of which are fairly well understood at this point, and neither of which appears mysterious or majestic. Both appear to be powerful mechanisms for prediction and planning. Both appear to be excellent and infinitely scalable technologies. But neither one looks like a "magical spark of awareness". It's just the plain old "power of language" combined with the plain old "power of trial and error".
Until recently, most scholars of intelligence envisioned human-level general intelligence, including human-level creativity, self-awareness, and empathy, as a distant and barely discernible goal. We imagined that machine learning would progress through a long ladder of levels, from insect-like, to rodent-like, to ape-like, to human-like, with rodent-like intelligence on a distant horizon. Similarly, logical artificial intelligence would progress through a long ladder of levels, from application-like, to autonomous-vehicle-like, to automated-doctor-like, to human-like, with autonomous-vehicle-like intelligence on a distant horizon. We could imagine discoveries of a whole series of essential elements of intelligence along the way. Now, it looks like no more breakthrough discoveries may be needed. Instead, humans may consist of nothing more than two powerful planning mechanisms: natural behavior reinforcement and natural language.
In some ways, this conclusion could diminish our regard for our own human minds, since we can now see that they consist of just two ordinary, well-known mechanisms. Of course, no discovery about human beings can diminish people's actual brilliance, just as discovering deterministic laws of physics cannot diminish people's actual free will. Instead, this conclusion must ultimately elevate our regard for these two well-known mechanisms.