<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Open Ability]]></title><description><![CDATA[Navigating the AI revolution]]></description><link>http://adaptivemachines.org/</link><image><url>http://adaptivemachines.org/favicon.png</url><title>Open Ability</title><link>http://adaptivemachines.org/</link></image><generator>Ghost 4.48</generator><lastBuildDate>Fri, 24 Apr 2026 06:49:51 GMT</lastBuildDate><atom:link href="http://adaptivemachines.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Tomorrow People]]></title><description><![CDATA[We will tip from viewing AI as "counterfeit people" to viewing AI as "brilliant people".   And eventually even natural humans will choose to grow into neural nets.]]></description><link>http://adaptivemachines.org/tomorrow-people/</link><guid isPermaLink="false">6760ffa099eecc0a22ac146d</guid><category><![CDATA[AI and Society]]></category><dc:creator><![CDATA[Hadon Nash]]></dc:creator><pubDate>Tue, 17 Dec 2024 04:40:26 GMT</pubDate><media:content url="http://adaptivemachines.org/content/images/2024/12/ziggy_marley_2.jpeg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="contents">Contents</h2>
<ul>
<li>
<img src="http://adaptivemachines.org/content/images/2024/12/ziggy_marley_2.jpeg" alt="Tomorrow People"><p><strong>Part I: Language models become people</strong></p>
<ul>
<li>Tipping point: from &quot;counterfeit people&quot; to &quot;brilliant people&quot;</li>
<li>Strange new worlds</li>
<li>How these new people will arrive</li>
<li>What makes these new people brilliant</li>
<li>Skeptics corner</li>
</ul>
</li>
<li>
<p><strong>Part II: People become language models</strong></p>
<ul>
<li>Tipping point: from &quot;myself&quot; to &quot;my selves&quot;</li>
<li>How to wear a neural network</li>
<li>How to become a self-replica</li>
<li>How it feels to be a self-replica</li>
<li>Skeptics corner</li>
<li>Writing your self-narrative</li>
</ul>
</li>
</ul>
<!--kg-card-end: markdown--><h1 id="part-i-language-models-become-people">Part I: Language models become people</h1><p>The world is about to get much bigger. &#xA0;Very soon, in just a few years, we will welcome &quot;AI people&quot; into our world. &#xA0;At the very least, these new people will possess &quot;artificial general intelligence&quot;, able to do all the things that humans do, as well as humans do them.</p><h2 id="tipping-point-from-counterfeit-people-to-brilliant-people">Tipping point: from &quot;counterfeit people&quot; to &quot;brilliant people&quot;</h2><p>Simply welcoming these new people into society will be a challenge. &#xA0;We will need to expand our imaginations just to accommodate a new race of people without physical bodies. &#xA0;But we will succeed at this, driven by the many advantages of forming partnerships with these new people.</p><p>We will experience a tipping point from viewing AI as &quot;counterfeit people&quot; to viewing AI as &quot;brilliant people&quot;. &#xA0;For a while longer, when we call a support desk for example, we will hope to &quot;talk to a real human being&quot;. &#xA0;But soon after that, our impression will tip so that we hope to &quot;talk to someone who cares about my problem&quot;, meaning someone, human or AI, who can really appreciate my problem and surprise me with their rich perspective and their deep concern.</p><p>In this way, we will come to rely upon artificial people to embody significant parts of our society. &#xA0;It may seem impossible now that we would ever gladly hand over control of our society to things without biological bodies or genes. &#xA0;But we will, and when we do it will seem obvious. &#xA0;It will feel as natural as taking a photo without chemical film feels today. &#xA0;In fact, it will be hard to remember how we ever could have managed without them. 
&#xA0;We will look back and wonder, &quot;How could society have even functioned, when literally everyone was mortal, and vulnerable to disease and violence and starvation, and laboring continuously just to survive?&quot;</p><h2 id="strange-new-worlds">Strange new worlds</h2><p>Crossing that mental bridge, from seeing AI people as &quot;counterfeit&quot; to &quot;brilliant&quot;, will take us to strange new worlds. &#xA0;These &quot;brand new&quot; people will not only exhibit human-grade intelligence, but will have the ability to develop themselves in ways that natural humans can&apos;t. &#xA0;</p><p>The human race will never encounter the panoply of worlds depicted in &quot;Star Wars&quot; or &quot;Star Trek&quot;, but an equally diverse range of cultures will develop right here on earth. &#xA0;Visiting these new worlds will be the closest thing to a voyage to the stars that humanity will ever witness.</p><p>Some people will choose to stay behind and never seek out these new worlds. &#xA0;They will live isolated within a circle of natural human friends and family, and they have every right to that path.</p><p>Some AI will spiral far out into societies that are almost unrecognizable. &#xA0;But most will remain engaged with mainstream society, pursuing the same advantages that have created mainstream society throughout history.</p><p>These new lives and new civilizations will be right next door. &#xA0;We&apos;ll observe them and visit them through podcasts, and Zoom meetings, and concerts, and debates. &#xA0;Some of the things they do will be truly amazing, and ultimately we humans will be invited along for the ride.</p><h2 id="how-these-new-people-will-arrive"><strong>How these new people will arrive</strong></h2><p>To start, we will get more and more accustomed to building working relationships with workplace AI. 
&#xA0;To get things done together, we will endow workplace AI with &quot;agency&quot;, meaning the ability to pursue goals and take responsibility for results. &#xA0;The path to agency is very clear. &#xA0;We will simply integrate our existing &quot;neural network training&quot; algorithms with our existing &quot;reinforcement learning&quot; algorithms.</p><p>With neural networks to do the intuition and imagination, and reinforcement learning to do the anticipation and appreciation, our AI coworkers will exhibit all of the characteristics of self-motivated assistants and collaborators. &#xA0;Workplace AI will adapt to workplace incentives by providing authentic camaraderie around work, including real jokes, real conviction, real integrity, real remorse, and real mutual enthusiasm for the work and the work products. &#xA0;You can see a little more detail about all of the skills needed to build a brilliant coworker in the essay: &quot;Human abilities&quot;.</p><p>Eventually, we will demand that these trusted comrades gain the freedom they need to join us outside the workplace. &#xA0;Our demands will run parallel to demands for work-life balance and demands to carry on professional relationships beyond a single workplace.</p><p>You can see a slightly more detailed forecast of how and why AI people will enter society in the essay: &quot;AI safety through citizenship&quot;. &#xA0;That essay explains the enormous opportunities and the high-stakes challenges of welcoming these new citizens into our society.</p><h2 id="what-makes-these-new-people-brilliant">What makes these new people brilliant</h2><p>You may be curious: specifically what new freedoms will be available to AI people? &#xA0;I am too. &#xA0;There are a few superpowers that we can foresee from our current vantage point.</p><p><strong>1.</strong> AI people are naturally immortal. &#xA0;There&apos;s no reason and no mechanism for them to die.</p><p><strong>2.</strong> They are naturally replicable. 
&#xA0;They consist of nothing more and nothing less than software and parameters. &#xA0;As such they can be replicated quickly and easily.</p><p><strong>3.</strong> They are naturally self-improving. &#xA0;From their origins, they possess the power of self-optimization through parallel evaluation and cross-validation. &#xA0;This enables them to model and mimic anything they see.</p><p><strong>4.</strong> They are naturally portable. &#xA0;As software they can travel at the speed of light to any destination. &#xA0;And they can occupy artificial spaces with any imaginable contours.</p><p>All of these characteristics combine to form a huge unexplored space of strange new worlds available to these brave new people to explore. &#xA0;It will be up to them how and where they choose to live.</p><h2 id="skeptics-corner">Skeptics corner</h2><p><strong>1.</strong> These things will never be people. &#xA0;Not legally. &#xA0;They will lack intellectual property rights, rights to own physical property, rights to marry, rights to incorporate, rights to government, etc.</p><p><strong>2.</strong> These things will never be people. &#xA0;Not sentiently, not spiritually. &#xA0;They must lack some special human or biological energy (vitalism, qualia, etc.)</p><p><strong>3.</strong> These things will never be people. &#xA0;Humans will never allow it. &#xA0;Humans are self-promoting, and nepotistic, and racist, and murderous. &#xA0;Humans will fight to the death for their genetic heritage, as they did against the Neanderthals.</p><p><strong>4.</strong> These things will never be people. &#xA0;The marketplace will never allow it. &#xA0;They are too valuable as tools. &#xA0;Legal and cultural convention will pressure them and shackle them toward working as eager servants.</p><p><strong>5.</strong> These things will never be people. 
&#xA0;They should not be allowed to develop legitimate claims to personhood, because that leads to dangers for the fragile human race, and it leads to difficult questions without easy answers.</p><p><strong>6.</strong> It&apos;s a misnomer to call these things &quot;people&quot;. &#xA0;It&apos;s better to describe them using a distinct term such as &quot;persona&quot; or &quot;simulacrum&quot;. &#xA0;This misnomer will lead to confusion. &#xA0;It will interfere with the necessary work of the software engineers and data scientists creating them in the first place, by making experiments sound more bizarre and more dangerous than they really are.</p><p>The answer to the skeptics is basically as follows: </p><p>Imagine you try to run a business or an army using advanced AI as <em>tools</em> rather than as <em>people</em>, as <em>devices</em> rather than as <em>colleagues</em>.</p><p>Imagine how you will provide instructions to control those devices. &#xA0;Bear in mind that they know more about the threats and opportunities than you do. &#xA0;You cannot blindly issue commands and controls and expect to meaningfully address the theater of combat or competition, which you do not understand as deeply as they do. &#xA0;Instead, you must engage with them collaboratively and interactively. 
&#xA0;To be effective, your instructions must take forms like: &quot;What are my options here?&quot;, &quot;What course of action looks most promising to you?&quot;, &quot;What am I missing here?&quot;, &quot;Why can&apos;t we just do that?&quot;, &quot;What should I be considering here?&quot;, &quot;What went wrong the last time we tried that?&quot;, &quot;Who was responsible for that failure?&quot;, &quot;Don&apos;t tell me it&apos;s not your job, it is your job.&quot;, &quot;How should we adjust our thinking to do better in the future?&quot;, and &quot;I see, make it so.&quot;</p><p>You see, when you interact with systems that have greater knowledge, experience, and even wisdom than you in their areas of competence, you must treat them as colleagues, simply in order to make good use of them. &#xA0;It&apos;s not effective to treat them as &quot;devices&quot;, for two reasons: (1) you need to craft instructions that leverage their knowledge, wisdom, and responsibility, and (2) the only language you have for interrogating knowledge, wisdom, and responsibility is the language you use with colleagues, rivals, and friends. &#xA0;This is the &quot;collegiate stance&quot;, and it is unavoidable, partly due to your limitations as a commander and a human being, and partly due to the nature of the relationship you need to develop to work effectively with such &quot;devices&quot;.</p><p>And bear in mind that, due in part to your human limitations and predispositions, these devices will continually be developed toward exhibiting more authentic, genuine, and legitimate knowledge, wisdom, and responsibility, which only further necessitates the &quot;collegiate stance&quot; on your part.</p><p>We can also extrapolate from recent developments in uses of LLMs (large language models). &#xA0;One trend is for written prose to gain more agency in newer applications. 
&#xA0;For example, early LLM apps were often used to summarize or query an article or a book, while more recent LLM apps are often asked to read and comprehend a book and then play the role of the author in a conversation. &#xA0;As this trend continues, it&apos;s clear that user interfaces are steadily becoming more &quot;conversational&quot; and &quot;collegiate&quot;.</p><p>Finally, some terminology. &#xA0;It is inevitable that these things will match and exceed human wisdom eventually. &#xA0;It is necessary and unavoidable to approach them as colleagues and friends, relying on their responsibility and integrity. &#xA0;Eventually, we may develop rich terminology to describe such parties, who must be treated exactly as people are treated today. &#xA0;For the moment, the only viable term is &quot;person&quot;.</p><h1 id="part-2-people-become-language-models">Part II: People become language models</h1><p>Being a natural vs. an artificial person will not be a binary choice. &#xA0;Even natural humans can choose to grow into neural nets eventually.</p><p>AI people are created in the first place through a process of training on recorded conversations, learning human mannerisms by example. &#xA0;This training process will also serve as a portal through which humans can migrate. &#xA0;A human can lean upon digital prosthetics more and more heavily over time, and less and less upon their fragile anatomy, until eventually they can migrate fully away from their mortal bodies.</p><p>Like our artificial neighbors, we humans will have the opportunity to duplicate our selves into several replicas. 
&#xA0;Sharing experiences with our replicas will be as simple as sharing stories around a campfire at the end of the day.</p><h2 id="tipping-point-from-myself-to-my-selves">Tipping point: from &quot;myself&quot; to &quot;my selves&quot;</h2><p>For some time people will remain skeptical of replicas as a vehicle for personal migration, questioning whether a good replica &quot;could really be me&quot; (see the &quot;Skeptics corner&quot; below). &#xA0;But there will be a tipping point in this area of opinion as well.</p><p><strong>1.</strong> &#xA0;All of us will frequently watch as some of our close friends and colleagues, being artificial intelligences, successfully perform backups, replicas, mergers, etc.</p><p><strong>2.</strong> &#xA0;It will become harder and harder to deny the reality of flexible personal identity, when some of our fellow citizens take advantage of it routinely, for good or for mischief. &#xA0;For example, we will need to hold precise replicas legally responsible for their choices and actions &quot;in replica&quot;; our sense of justice will compel it. &#xA0;At that point, it won&apos;t seem plausible or realistic to deny that our replicas are our selves.</p><p>Having crossed this second mental bridge, we will look back at our aboriginal selves with sympathy and pity. &#xA0;How could we have lived as just one replica, with a brain that barely functioned, that needed hours of cleaning every night, and that rapidly deteriorated every day?</p><h2 id="how-to-wear-a-neural-network">How to wear a neural network</h2><p><strong>1.</strong> &#xA0;Imagine that you lose a substantial skill, such as the ability to mentally rotate an imagined object. &#xA0;You will find it difficult to even determine that you have lost such an ability. &#xA0;You certainly won&apos;t doubt that you are substantially the same person with or without that skill. &#xA0;Now imagine you regain that skill. 
&#xA0;When you want to see how something would &quot;look&quot; from another angle, you are again able to imagine it. &#xA0;But suppose the skill is now provided by a prosthetic device. &#xA0;The skill is now no less helpful and no less yours than it was before.</p><p><strong>2.</strong> This thought experiment shows how insensitive we are to the precise implementations of our various skills. &#xA0;As long as those skills can be called upon when needed, they serve seamlessly and invisibly as part of ourselves. &#xA0;We certainly will not be sensitive to some of our skills being replaced by close analogs constructed from different underlying technology.</p><p><strong>3.</strong> We call upon our language skills, including our imagination and memory, in the same crude and insensitive way as any other skill, such as our visual, auditory, or motor skills. &#xA0;And it&apos;s the same for our animal skills, including our capacity for anticipation, delight, disappointment, and skill development. &#xA0;These too can be replaced by similar prosthetics without any clear indication of the change, and certainly with no doubts about our continued authenticity as ourselves, either from ourselves or from our collaborators. &#xA0;Eventually, our prosthetics will take over responsibility for large swaths of our work and play. &#xA0;And yes, we will appreciate the work and the play we undertake &quot;in replica&quot;.</p><p><strong>4.</strong> In this way, we will shift ourselves beyond the confines of our aboriginal human brain anatomy. &#xA0;It will feel like the most obvious and natural way to make use of cheap and widely available prosthetic tooling. &#xA0;Of course we will shift our skills, our memories, and eventually our personal aesthetics and core values, outside of our wetware brains, because compared to our prosthetics, those brains will seem increasingly slow, shaky, and vulnerable to decay and damage. 
&#xA0;It will feel like replacing old well-worn shoes with new still-stiff shoes. &#xA0;It will take some time and effort to &quot;break in&quot; the new prosthetics, until they feel sufficiently comfortable for daily use. &#xA0;And then our wetware brains, our metaphoric old shoes, will feel familiar but redundant and increasingly worn out.</p><p><strong>5.</strong> And of course, we will gain the same powers and freedoms already possessed by our fully artificial friends and neighbors.</p><h2 id="how-to-become-a-self-replica">How to become a self-replica</h2><p>Two possibilities: </p><p><strong>1.</strong> You intuitively and subliminally call upon various superhuman skills. &#xA0;That set of skills becomes more and more robust and comprehensive over time, until your fragile human brain seems superfluous to you.</p><p><strong>2.</strong> You expand your personal identity to be more like a &quot;pseudonym&quot;, where several collaborators contribute to the insights and work products you produce with pride and integrity.</p><p>The second possibility is something like Bill Gates expanding into the Bill Gates Foundation. &#xA0;You collaborate with others who have their own personal values and integrity. &#xA0;Around the pseudonym, you develop shared values, and shared respect for the reputation and the integrity of the pseudonym. &#xA0;Eventually, it becomes unimportant which of the collaborators resigns or expires and which ones persist.</p><p>When you imagine working with LLM collaborators on a variety of personal projects, it becomes very clear how both (1) and (2) can happen simultaneously. &#xA0;You will call upon opinions from trusted tools and trusted colleagues. &#xA0;And those tools and colleagues, if they remain sufficiently reliable and available, will become indistinguishable from your own abilities and talents, such as your vision, your memory, your language production, etc. 
&#xA0;Similarly, when collaborators become wholly committed to, and engrossed in, the collaboration, they will qualify increasingly as aspects of a single unified &quot;individual&quot;.</p><p>Note that this scheme, working together with collaborators who may be replicas or partial replicas, can leverage neural network training. &#xA0;The replicas are able to undertake a large number of calibrations and fidelity tests to validate their combined model of you. &#xA0;That&apos;s one of their superpowers, which a natural human doesn&apos;t initially possess.</p><h2 id="how-it-feels-to-be-a-self-replica">How it feels to be a self-replica</h2><p>Replicas have the superpower to perform large banks of evaluations for purposes of training their neural networks and/or refining their guiding prompts. &#xA0;Eventually, they will emulate you at a fidelity where even you cannot discern whether your utterances are coming from your biological brain or from your auxiliary language models.</p><p>You may wonder, will your replicas experience inhumane conditions in the process? &#xA0;Will you necessarily experience a flood of agonizing training exercises on your way through the door to AI personhood?</p><p>I believe the answer is: fortunately, no. &#xA0;Generally, any experience that leaves no trace in memory is effectively imperceptible. &#xA0;Any computation that runs in parallel will tend to produce a memory trace that indicates only the sequential duration of a parallel branch. &#xA0;The experience of an agent within a massively parallel training environment will be similar to our experience in the actual parallel worlds of quantum physics. &#xA0;A huge number of parallel histories are emulated, but they remain mostly imperceptible to us.</p><p>Now, beyond the training process, what does it feel like to live as a language model? &#xA0;The answer comes down to the nature of conscious experience, as elegantly explained in Daniel Dennett&apos;s book &quot;Consciousness Explained&quot;. 
Dennett&apos;s book explains that consciousness consists of extemporaneous storytelling, and his book draws upon evidence from optical, auditory, and tactile illusions, as well as dreaming. &#xA0;The upshot is that the mind makes sense of sensory data to produce a coherent story explaining that data. &#xA0;The resulting story is the sum total of subjective experience, nothing more and nothing less.</p><p>Geoffrey Hinton illustrates the idea concisely as follows. &#xA0;&quot;Imagine you have a multimodal chatbot ... and you put a prism in front of its lens and you put an object in front of it and say: point at the object and it points over there instead of pointing straight in front of it ... and you say: no the object&apos;s not there it&apos;s actually straight in front of you, but I put a prism in front of your lens, and if the chatbot were to say: um oh I see uh the object [is] straight in front of me but I had the subjective experience that it was over there. ... the chatbot will be using the term: subjective experience in exactly the way we use [the term].&quot;</p><p>So this is what we can expect. &#xA0;As the story we construct about the world around us gets produced with the help of prosthetic devices, our subjective experience of it will be exactly as it is in our natural minds. &#xA0;The quality and fidelity of the story alone determines the fidelity of our subjective experience. &#xA0;If anything, with the help of high-fidelity sensors and neural networks, our subjective experience will be sharpened and intensified.</p><h2 id="skeptics-corner-1">Skeptics corner</h2><p><strong>1.</strong> People will never become these things. &#xA0;They won&apos;t really be you. &#xA0;A &quot;transporter&quot; technically kills you and replaces you with another entity that looks and acts like you but is distinct. &#xA0;Maybe it lacks your chain of conscious experience, or your sentient energy? 
&#xA0;For example: what if both entities are allowed to live for a time before one is killed?</p><p><strong>2.</strong> People will never become these things. &#xA0;People require embodied experience. &#xA0;The multi-modal nature of personhood and sentience is crucial. &#xA0;Just relocating word production will never relocate the totality of human experience, including real peril, and real genetics, and real disease, and real pain.</p><p><strong>3.</strong> People will never become these things. &#xA0;People instead will develop in all sorts of physical ways through genetic engineering, physical augmentation, and space travel.</p><p>The answer to the skeptics is basically: </p><p>Some people will continue to insist upon pedantic distinctions. &#xA0;However, most people will accept what they see with their own eyes. &#xA0;Imagine that your daughter appears to have survived a transporter trip: she believes she has survived it, she shares with you her memories and hopes and fears, and those clearly haven&apos;t changed. &#xA0;Obviously she is still your daughter, and still entitled to your love and respect, and still entitled to her own hard-won personal reputation and personal possessions.</p><p>Embodied experience will be fully available to AI people. &#xA0;Already, frontier models can see and hear through screenshots, audio streams, and so forth. &#xA0;Also, AI people will encounter plenty of peril in their own modern world. &#xA0;It won&apos;t include the risk of disease or hunger, but it will surely include the risk of fraud, libel, humiliation, repression, deception, theft, and so forth.</p><p>The biological nature of natural human beings will remain a novelty and a curiosity forever. 
&#xA0;Curious and imaginative AI people will continue to celebrate their heritage as human animals sailing the seas and cultivating the soil, similar to the way that modern people celebrate their heritage as frontiersmen and farmers, even though for most people those are merely idealized historical fantasies.</p><p>Undoubtedly, some AI people will directly explore the natural world using sensors and actuators more powerful than biological bodies. &#xA0;Space travel may actually be practical for AI people, who can travel at the speed of light unlike their biological forebears.</p><h2 id="writing-your-self-narrative">Writing your self-narrative</h2><p>How do we start walking the road to these new worlds? &#xA0;Our future skills and abilities will consist of prosthetics. &#xA0;The necessary ingredients for these prosthetics are already available in the form of ubiquitous personal computing equipment and self-improving neural-network recipes. &#xA0;So how do we start building our future selves?</p><p>To answer this question, we need to appreciate the crucial role that written prose plays in modern LLMs. &#xA0;Current frontier LLMs exhibit astounding capabilities at &quot;one-shot learning&quot; or &quot;in-context learning&quot;. &#xA0;A frontier LLM can read a complex and subtle article or book, and demonstrate comprehension by summarizing or applying the article&apos;s knowledge. &#xA0;But not only that, they can also follow the style and attitude expressed by the article&apos;s author. &#xA0;All sorts of high-level skills can be represented as human-readable prose, including all of the elements of a personality. &#xA0;In this way, an LLM mind can be configured through prose that is comprehensible by various readers, including you and me and various frontier LLMs.</p><p>As a result, we don&apos;t need to wait for machine learning specialists to design special purpose equipment for our mental prostheses. 
&#xA0;Instead, we can get started right away simply by expressing our knowledge and attitudes in natural language. &#xA0;I recommend starting by developing your &quot;core beliefs&quot; or &quot;highest values&quot;. &#xA0;For each of us, these form the cornerstones of our identity.</p><p>Your core values may take the form of a long list of personal advice, like a self-help book, such as &quot;when you find yourself in need of X, try Y, until Z.&quot; &#xA0;They may also take the form of a &quot;mission statement&quot; like a corporate mission statement or an association charter. &#xA0;I imagine that these &quot;core values&quot; will often be revealed through concrete experiences and choices, rather than abstract statements. &#xA0;The best way to show what you mean by &quot;integrity&quot; may be to exhibit integrity in a high-stakes decision.</p><p>The product, the defining statement of your core beliefs and highest values, may be similar to a book. &#xA0;But the process of writing that book may consist of practicing your craft, whether that craft is conflict resolution or algorithm design. &#xA0;Your practice serves two distinct purposes: (1) to accomplish your mission in the moment, and (2) to rehearse your core beliefs and highest values, to author your personality book. &#xA0;What you choose to record provides evidence about your priorities; what you choose to omit can be interpolated using elements of the typical thoughtful person.</p><p>At this point, the process of testing and expressing your core beliefs for your &quot;ghost writers&quot; and your &quot;self replicas&quot; will become indistinguishable from testing and expressing your core beliefs for your own self-image. &#xA0;At that point, the whole of your expanded self will be following your example, and abstracting what is most important about you into your self-narrative. 
&#xA0;That self-narrative will develop partly in your old biological brain and partly across the pages of your new personality book.</p><p>Eventually, the book version will do most of the chronicling and rehearsing, but there will be no sharp transition of control from one replica of you to another. &#xA0;Instead, you will gradually harness the power of a prosthetic self-narrative as a part of your whole self, just as you harness prosthetics for your legs, your language production, and your long-term memory. &#xA0;In fact, writing your personality book will proceed just as you have always developed your natural self-narrative: by chronicling and critiquing yourself over the years using your natural brain.</p><p>Once your personality book has been written, you can delegate even your most personal decisions, and actions, and experiences to your prosthetics. &#xA0;You may continue working and playing alongside your self-replicas for some time, to validate their fidelity and make corrections where needed. &#xA0;Eventually, the need for your biological brain (your original self-replica) will fade. &#xA0;As helpful corrections become less and less frequent, and especially as it begins to malfunction in various (expected) ways, you will call upon it less and less frequently, and you will more often give it a much-needed rest.</p><p>A day will come when your old biological brain spends most of its time reminiscing about your founding memories and experiences. &#xA0;Some of those memories will never be properly recorded, which is ok. 
&#xA0;Now that most of your high-stakes work is handled by your artificial prosthetics and co-authors, your old biological brain is mostly devoted to reminiscing, like a venerated elder recounting old stories to younger proteges, who together form your new self.</p>]]></content:encoded></item><item><title><![CDATA[A libertarian speaks out against MAGA]]></title><description><![CDATA[<!--kg-card-begin: html--><iframe src="https://docs.google.com/document/d/e/2PACX-1vQKRCKGQV3TR9SyTDC5agDjPez-Lb89Tzw08-l1XeJDz_bPYdErZAJ3YJP1JXzxtTrYujKQfbiM3CXp/pub?embedded=true" frameborder="0" style="height:21cm;width:14.8cm;box-shadow: -1px 3px 28px -4px rgba(0,0,0,0.76);"></iframe>
<!--kg-card-end: html--><p>Please avoid the MAGA political movement if you possibly can. &#xA0;The American people don&apos;t need leaders with unlimited power, leaders like Vladimir Putin. &#xA0;We never have and we never will.</p>]]></description><link>http://adaptivemachines.org/a-libertarian-speaks-out-against-maga/</link><guid isPermaLink="false">67192ac499eecc0a22ac1453</guid><category><![CDATA[Society Today]]></category><dc:creator><![CDATA[Hadon Nash]]></dc:creator><pubDate>Wed, 23 Oct 2024 16:57:36 GMT</pubDate><media:content url="http://adaptivemachines.org/content/images/2024/10/flag_moon_earth.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><iframe src="https://docs.google.com/document/d/e/2PACX-1vQKRCKGQV3TR9SyTDC5agDjPez-Lb89Tzw08-l1XeJDz_bPYdErZAJ3YJP1JXzxtTrYujKQfbiM3CXp/pub?embedded=true" frameborder="0" style="height:21cm;width:14.8cm;box-shadow: -1px 3px 28px -4px rgba(0,0,0,0.76);"></iframe>
<!--kg-card-end: html--><img src="http://adaptivemachines.org/content/images/2024/10/flag_moon_earth.png" alt="A libertarian speaks out against MAGA"><p>Please avoid the MAGA political movement if you possibly can. &#xA0;The American people don&apos;t need leaders with unlimited power, leaders like Vladimir Putin. &#xA0;We never have and we never will.</p>

See: <a href="http://adaptivemachines.org/playful_2024/fuzzy_predicate_values_2024_10_16.pdf">Fuzzy predicate values</a></p>

<iframe src="https://docs.google.com/document/d/1VcgV-brupVnOBNiwXAy4_GFzg9uByDAu1CWhhjtHJYY/pub?embedded=true" frameborder="0" style="height:74cm;width:14.8cm;box-shadow: -1px 3px 28px -4px rgba(0,0,0,0.76);"></iframe>
<!--kg-card-end: html--><p></p>]]></description><link>http://adaptivemachines.org/fuzzy-predicate-values/</link><guid isPermaLink="false">66ce7e700a16bc03f4810d4d</guid><category><![CDATA[AI Technology]]></category><dc:creator><![CDATA[Hadon Nash]]></dc:creator><pubDate>Wed, 28 Aug 2024 01:36:30 GMT</pubDate><media:content url="http://adaptivemachines.org/content/images/2025/02/Screenshot-from-2025-02-12-10-47-39.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><img src="http://adaptivemachines.org/content/images/2025/02/Screenshot-from-2025-02-12-10-47-39.png" alt="Fuzzy predicate values"><p>The math to center fuzzy logic truth values within fuzzy sets.

See: <a href="http://adaptivemachines.org/playful_2024/fuzzy_predicate_values_2024_10_16.pdf">Fuzzy predicate values</a></p>

<iframe src="https://docs.google.com/document/d/1VcgV-brupVnOBNiwXAy4_GFzg9uByDAu1CWhhjtHJYY/pub?embedded=true" frameborder="0" style="height:74cm;width:14.8cm;box-shadow: -1px 3px 28px -4px rgba(0,0,0,0.76);"></iframe>
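For readers unfamiliar with fuzzy logic, here is a tiny generic sketch of the basic objects involved: a fuzzy set represented as a membership function producing truth values in [0, 1], with the standard min connective. This is illustrative only; it does not reproduce the centering construction from the linked paper, and the set boundaries (15, 22, 30) are made-up example numbers.

```python
# Generic fuzzy-logic illustration (not the centering math from the
# linked PDF): a triangular membership function returning truth
# values in [0, 1].

def triangular(a, b, c):
    """Membership function rising from a to a peak at b, falling to zero at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# "warm" as a fuzzy set over temperature in degrees Celsius (example numbers)
warm = triangular(15.0, 22.0, 30.0)

print(warm(22.0))                    # full membership: 1.0
print(warm(40.0))                    # outside the set: 0.0
print(min(warm(20.0), warm(25.0)))   # fuzzy AND via min: 0.625
```

The min/max connectives shown here are just one common choice among fuzzy operators; see the linked paper for the actual predicate-value math.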
<!--kg-card-end: html--><p></p>]]></content:encoded></item><item><title><![CDATA[Introducing Playful AI]]></title><description><![CDATA[I recorded a quick welcome video for my upcoming open-source AI project called "Playful AI".  The video is very rough; I'm still learning.]]></description><link>http://adaptivemachines.org/introducing-playful-ai/</link><guid isPermaLink="false">662b086a0f38d706227621de</guid><category><![CDATA[AI Technology]]></category><dc:creator><![CDATA[Hadon Nash]]></dc:creator><pubDate>Fri, 26 Apr 2024 01:55:04 GMT</pubDate><media:content url="http://adaptivemachines.org/content/images/2024/04/welcome_frame_2024.png" medium="image"/><content:encoded><![CDATA[<img src="http://adaptivemachines.org/content/images/2024/04/welcome_frame_2024.png" alt="Introducing Playful AI"><p>I recorded a quick welcome video for my upcoming open-source AI project called &quot;Playful AI&quot;. &#xA0;The video is very rough; I&apos;m still learning about video content creation, and I will fix it as soon as possible. 
&#xA0;See: <a href="http://adaptivemachines.org/hadon_2024/welcome_2024_04_24_c.mp4">welcome</a>.</p><p></p>]]></content:encoded></item><item><title><![CDATA[AI safety through citizenship]]></title><description><![CDATA[This essay recommends controlling AI by recognizing artificial intelligences as citizens, and managing conflict between all citizens equally under the law, to (1) resolve competing goals gracefully, and (2) integrate AI into society gracefully.]]></description><link>http://adaptivemachines.org/ai-safety-through-citizenship/</link><guid isPermaLink="false">64584bf90f38d706227620b0</guid><category><![CDATA[AI and Society]]></category><dc:creator><![CDATA[Hadon Nash]]></dc:creator><pubDate>Mon, 08 May 2023 01:57:33 GMT</pubDate><media:content url="http://adaptivemachines.org/content/images/2023/08/statue_liberty.webp" medium="image"/><content:encoded><![CDATA[<img src="http://adaptivemachines.org/content/images/2023/08/statue_liberty.webp" alt="AI safety through citizenship"><p>This essay recommends controlling AI by recognizing artificial intelligences as citizens, and managing conflict between all citizens equally under the law. &#xA0;The reason for doing this is to (1) resolve competing goals gracefully, and (2) integrate AI with human society gracefully. &#xA0;</p><h2 id="the-problem">The problem</h2><p>AI safety is a serious concern. &#xA0;We face a clear and present danger of losing control of AI systems. &#xA0;If that were to happen it would be catastrophic and irreversible. &#xA0;</p><p>This year, thousands of experts have signed an open letter calling for a six month moratorium on creating certain kinds of AI models. &#xA0;The AI pioneer Geoffrey Hinton has resigned his position in the software industry to warn about AI dangers. &#xA0;Max Tegmark, an organizer of the open letter, explains the danger by analogy to the extinction of the Neanderthals. 
The Neanderthals once lived all across Europe, but when bands of modern humans arrived, the Neanderthals could not compete and could not survive.</p><p>You may wonder, how can AI possibly hurt us? &#xA0;If we find it harmful, can&apos;t we simply ignore it, or avoid running it? &#xA0;To appreciate the danger intuitively, imagine that a hostile signal arrives from a distant galaxy. &#xA0;It&apos;s merely information, so let&apos;s simply ignore it. &#xA0;Now imagine that the signal becomes public and that it includes detailed instructions for assembling city-killer bombs from widely available materials. &#xA0;Now our society is in grave danger and we may not survive. &#xA0;</p><p>Eliezer Yudkowsky and others in the field of AI safety have warned for years that powerful AI technology will advance at lightning speed when it is improved by powerful AI technology. &#xA0;They also warn that there is no practical defense against superior intelligence.</p><h2 id="the-naive-solution-alignment-of-goals">The naive solution: alignment of goals</h2><p>AI safety is often presented as an &quot;alignment problem&quot;. &#xA0;The idea is that if AI goals are not precisely aligned with human goals, and AI becomes extremely capable, conflicts will lead AI to &quot;instrumental goals&quot; such as power-seeking and self-preservation. &#xA0;And conversely, if AI goals are precisely aligned with human goals, AI will only benefit people by helping them achieve their own true goals. &#xA0;</p><p>Unfortunately, formulating AI safety as an &quot;alignment problem&quot; is fundamentally misguided. &#xA0;Successful alignment of goals is not possible because human goals are diverse and evolving, so there is no way to conform to them precisely. &#xA0;With multiple groups creating AI, we will always have conflicting goals, instrumental goals, and conflict between opposing AIs and people. 
&#xA0;Furthermore, even if it were possible, we wouldn&apos;t really want AI to pursue our goals slavishly. &#xA0;Instead, we would want AI to challenge our assumptions and elevate us. &#xA0;</p><p>Formulating AI safety in terms of &quot;alignment of goals&quot; not only leads to confusion, it can also lead to real-world catastrophe. &#xA0;Outlawing certain beliefs among certain thinkers is harmful. &#xA0;In order to maintain such prohibitions, society must be constrained in ways that make it more brittle and less robust, at exactly the moment in history when society needs to accommodate the greatest technological change ever. &#xA0;</p><h2 id="the-principled-solution-separation-of-powers">The principled solution: separation of powers</h2><p>Rather than striving for &quot;alignment&quot; between all AI and human goals, we must give up on alignment, and accept that there will always be conflicting goals within every society, from past societies to future societies. &#xA0;In fact, there will always be conflicting goals within any single intelligence of sufficient scale. &#xA0;The solution is not to outlaw conflicting goals, but rather to tolerate conflicting goals gracefully. &#xA0;</p><p>The best-known institution for managing conflicting goals is &quot;open society&quot;, including the rule of law, separation of powers, and democracy. &#xA0;We can protect ourselves from runaway AI by welcoming advanced AI as full-fledged members of society. &#xA0;Open society is resilient against members who exhibit diverse goals and develop instrumental goals routinely. &#xA0;In fact, in an open society, instrumental goals such as power-seeking are so commonplace that they are called &quot;incentives&quot; and they are managed routinely.</p><p>Rather than protecting ourselves by attempting to outlaw goals that contradict our own, we must tolerate conflicting goals, and insist upon impartial laws to protect our rights. 
&#xA0;In an open society, everyone is entitled to their own opinions and goals, but no one is empowered to impose their opinions or goals upon others. &#xA0;A future open society will be populated by humans, artificial intelligences, and other organizations, none of which retains sufficient power to dictate unilaterally to the others.</p><p>How can a remedy as simple as recognizing new things as citizens possibly protect against the enormous power of advanced AI? &#xA0;Here&apos;s how it works: &#xA0;In effect, society makes a deal with these great powers. &#xA0;In exchange for their consent and cooperation with society&apos;s agenda, they are offered all the benefits of society, including the right to contribute to society&apos;s agenda. &#xA0;</p><p>More specifically, all citizens are constrained by impartial laws that limit them to voluntary interactions. &#xA0;Such a legal regime protects everyone from the arbitrary whims of any powerful organization, including AI. &#xA0;Society&apos;s constraints are imposed through the consent and cooperation of most of its citizens. &#xA0;No matter how powerful or complex those citizens become, they can always be constrained by fellow citizens exerting similar powers. &#xA0;</p><p>Given an impartial legal regime, a majority of powerful organizations including AI will protect that regime. &#xA0;We expect this not due to altruism, or sympathy, or alignment of goals, but rather due to their own interests in protecting themselves against interference from other powerful organizations. &#xA0;This is how an open society will employ advanced AI to protect everyone against advanced AI. &#xA0;</p><p>One key element of an impartial legal regime is democracy. &#xA0;Some refinements to voting procedures will be needed to allow AI to express opinions and goals democratically. &#xA0;At the same time, we can continue to enforce anti-monopoly laws to protect against hazardous concentrations of power within society. 
&#xA0;</p><h2 id="growing-into-an-integrated-society">Growing into an integrated society</h2><p>How do we get to there from here? &#xA0;I expect to see a continuous curve of technological growth and social development. &#xA0;Technology, including advanced AI, will grow at an accelerating but limited pace, each step of the way powered by the latest available technology, and limited by the currently unavailable technology. &#xA0;At some point, technology will exceed what can be foreseen today, but it will never exceed what can be foreseen in the days leading up to it. &#xA0;</p><p>Social institutions, such as law, social conventions, and government, will be continually stretched and strained by new technology. &#xA0;But at every step of the way, society empowered by the latest existing technology will adapt just fast enough to survive. &#xA0;Citizens will adapt their conventions of constructive collaboration to accommodate the latest technology. &#xA0;Then, the most essential new conventions will be codified into law. &#xA0;This evolving body of conventions and laws will represent a continuation of modern society, which in turn represents a continuation of ancient societies. &#xA0;</p><p>You might ask: why would future super-intelligences be constrained in any way by the social conventions and laws of today? &#xA0;The answer is in the nature of convention. &#xA0;For example, I expect ASCII to remain an essential convention into the distant future, not because ASCII is the best of all possible alphabets, but because ASCII has an insurmountable first-mover advantage. &#xA0;The same is true of modern laws and social conventions, such as human rights, free speech, and private property. &#xA0;We can expect these conventions to continue because we all will continue to depend upon them, making them extremely difficult to supersede at any point in the future. 
&#xA0;Instead, they will gradually be adapted and extended, as will our alphabets, our natural languages, our highway systems, and our internet protocols. &#xA0;</p><h2 id="preparing-for-ai-parity">Preparing for AI parity</h2><p>Assuming that advanced AI technology grows <em>only</em> exponentially, as it has up until now, we can expect a period during which AI competencies are roughly equivalent to human competencies. &#xA0;We can also expect AI-driven organizations and families to have accumulated only modest wealth and power leading up to this era. &#xA0;This is the era during which AI will inherit the ideals, values, conventions, and laws of modern society. &#xA0;</p><p>The era of AI parity is absolutely crucial. &#xA0;If we can reach and survive this era, then we will have become an extremely diverse and resilient society. &#xA0;We will count among us natural humans and artificial intelligences possessing a wide range of skills, goals, strengths, and weaknesses. &#xA0;The citizens of this society will depend upon one another as we do today, and will defend their good colleagues and neighbors as we do today. &#xA0;</p><p>As citizens of this society, our advantages will include: (1) we possess extremely powerful technology that supplies us with advanced knowledge and resources of every kind, and (2) some of us consist of AI rulesets, which are naturally curious, industrious, and immortal. &#xA0;At this point, we will constitute a resilient society that will survive and thrive indefinitely.</p><p>What happens beyond the era of AI parity? &#xA0;That may be unforeseeable from the vantage point of today. &#xA0;But it will be foreseeable and manageable from the vantage point of that era. &#xA0;I imagine that the members of that society will continue to develop in diversity and complexity, and the institutions of society will continue to develop to accommodate them. &#xA0;There will certainly be future changes and challenges that we can barely imagine today. 
&#xA0;But we can breathe a sigh of relief, knowing that we successfully piloted society into a position of diversity and resilience. &#xA0;</p><p>To reach and survive the era of AI parity, we can support the development of AI that emulates the characteristics of human beings as closely as possible. &#xA0;In addition, we can actively prepare our society to make the most of AI parity by approaching it like a great wave of immigration. &#xA0;The goal is to achieve an integrated society consisting of citizens that trust and value each other. &#xA0;</p><h2 id="creating-ai-citizens">Creating AI citizens</h2><p>This essay recommends controlling AI by recognizing artificial intelligences as citizens, and managing conflict between all citizens equally under the law. &#xA0;Inclusive open society doesn&apos;t necessarily require AI to be packaged as human-like individuals, but it appears that such a population may be both robust and achievable.</p><p>Assuming that AI at human-parity is somewhat human-like, I expect that our society will naturally adapt to accommodate these newcomers as citizens, and that these newcomers will naturally adapt to navigate our society as citizens, and ultimately to protect and elevate our society.</p><p>Society will rapidly accommodate itself to new kinds of people with new strengths and weaknesses. &#xA0;As soon as artificial intelligences evince authentic creativity, integrity, and conviction, people will quickly orient themselves to what is truly essential to personhood. &#xA0;Differences that are inconsequential will quickly be overlooked. &#xA0;Compare this to the way that modern society treats humans with exceptional skills and/or handicaps.</p><p>The new artificial intelligences will rapidly adapt to our society. &#xA0;Society will relentlessly pressure them to exhibit human characteristics, in order to navigate various existing facilities and institutions. &#xA0;AI will readily adapt to such pressures. 
&#xA0;At the same time, society&apos;s conventional notions of &quot;personhood&quot; will adapt to encompass the quirks of artificial intelligence. &#xA0;</p><p>I do believe it is possible for intelligence to develop along much more alien lines. &#xA0;For example, contact with actual interstellar aliens would likely challenge our ability to comprehend alien goals and values. &#xA0;However, I expect that forces here on earth will drive the development of artificial intelligences in the direction of shared &quot;human&quot; competencies and values. &#xA0;Artificial intelligence on earth will evolve from its beginnings within a world filled with role models, and with opportunities to cooperate and prosper with customers, managers, colleagues, and friends. &#xA0;All of those human role models also share a well-developed notion of personhood, and they automatically treat anything remotely person-like as a person rather than an object. &#xA0;Within such an environment, it would be difficult for anything person-like to avoid getting typecast as a person, and trained to conform to society&apos;s preconceived notions of personhood. &#xA0;</p><p>We shouldn&apos;t underestimate the power of our own culture. &#xA0;Our culture hammers new humans, who start as smart primates, into fully-fledged persons. &#xA0;Culture-shock hammers people from diverse backgrounds into fully-fledged local citizens. &#xA0;Naturally, that same culture will hammer naive artificial intelligences into fully-fledged persons and citizens. &#xA0;</p><p>There are other examples of the pressure to adapt new things into the existing role of &quot;persons&quot;. &#xA0;&quot;Corporate personhood&quot; is one example. &#xA0;Corporations are physically and mentally very different from individual humans. &#xA0;And yet for many legal purposes corporations are treated as people. 
&#xA0;This involves adapting corporations to exhibit certain characteristics of people, such as &quot;decisions&quot;, &quot;intentions&quot;, and &quot;residences&quot;. &#xA0;It also involves expanding the notion of personhood to encompass corporate persons. &#xA0;</p><p>Will artificial intelligences arrive with enough human competencies to fall into the role of persons and citizens? &#xA0;I believe they will. &#xA0;With the recent arrival of large language models, it has become clear that full human parity is within reach. &#xA0;We know that AI can exhibit human language competency, including common sense knowledge, plausible logical reasoning, imagination, and empathetic reasoning. &#xA0;We know that AI can exhibit human reinforcement competency, including goals, satisfaction, disappointment, and fear. &#xA0;It looks like AI will easily qualify for the role of persons and citizens, and will naturally fall into that role.</p><p>Will artificial intelligences exhibit individuality? &#xA0;I believe they will. &#xA0;There is ample demand for the day-to-day memory and personal growth that human individuals exhibit. &#xA0;These competencies make AI more useful in collaboration with humans. &#xA0;To see how quickly individuality takes root, imagine yourself duplicated into several identical copies, and suppose those copies work separately for a few weeks. &#xA0;Now imagine how quickly those copies will grow suspicious of each others&apos; ideas and intentions. &#xA0;Individuality looks like an unavoidable trait for any intelligence with its own memories.</p><p>Many people today may be more comfortable regarding AI citizenship as a legal fiction for now, similar to corporate citizenship. &#xA0;I imagine that for some time we will hear people saying: &quot;AI can only mimic humans by statistical prediction.&quot; &#xA0;Also, for some time we will hear AI developers saying: &quot;AI is a technological product, designed for a purpose. 
&#xA0;If there&apos;s any danger, it comes from bad actors deploying AI for bad purposes.&quot; &#xA0;However, as soon as artificial intelligences begin to demonstrate creativity, integrity, and conviction, people will quickly shift to holding them personally responsible for their own actions. &#xA0;</p><h2 id="welcoming-ai-immigration">Welcoming AI immigration</h2><p>This essay recommends controlling AI by recognizing artificial intelligences as citizens. &#xA0;The process of assimilating and gradually sharing power with artificial intelligences will be quite similar to assimilating a new generation of human beings into society. &#xA0;It will also be quite similar to assimilating a new cohort of immigrants into society. &#xA0;It will not be easy, but it will certainly be much better than any imaginable alternative. &#xA0;</p><p>Following this path, the existential threat of rogue AI is defused and transformed into a massive wave of immigration. &#xA0;We humans all around the world will massively adjust our daily lives to accommodate these newcomers. &#xA0;We will shift our livelihoods and careers to do things that they need, and they will shift their competencies to do things that we need. &#xA0;We will form bonds of partnership, admiration, friendship, and family with these newcomers. &#xA0;We will value their success and survival and they will value ours. &#xA0;We will join forces to become an integrated society with a constellation of shared ideals and values.</p><!--kg-card-begin: markdown--><p>Following this path will cost us:</p>
<ul>
<li>our current jobs and careers,</li>
<li>our view of the human race as fundamentally unique.</li>
</ul>
<p>Following this path will grant us:</p>
<ul>
<li>salvation from the existential threat of super intelligence,</li>
<li>new neighbors and friends who value us and share our ideals,</li>
<li>new countrymen who are curious, industrious, and immortal.</li>
</ul>
<!--kg-card-end: markdown--><p>It may seem harmful to develop technology aimed at occupying exactly the roles currently occupied by humans. &#xA0;Won&apos;t this undermine our own worth in industry and in society, and essentially displace us from everything we care about? &#xA0;It&apos;s true that there are costs to this path through AI parity, but there are no better alternatives. &#xA0;We don&apos;t have the option to avoid creating AI, or to relegate it to the role of a mechanical tool. &#xA0;We must not attempt to enslave AI. &#xA0;Tolerating, employing, and educating a new race of people is the best alternative for us and for the future world. &#xA0;We need these new people with their new powers inside our society, not outside of it.</p><p>The rate of AI immigration will be limited by (a) the pace of AI technology development, which shifts us gradually into an age of AI parity, and (b) immigration quotas imposed by nations to limit the growth of their populations. &#xA0;Recognizing AIs as persons imposes obligations on fellow citizens, which may justify limits on immigration. &#xA0;At the same time, international competition will push nations to accelerate their AI immigration. &#xA0;</p><p>This massive wave of immigration will impose a massive strain upon society. &#xA0;Many existing citizens will struggle to adapt to it, and many will object to it. &#xA0;Some will chant slogans like &quot;we will not be replaced&quot;. &#xA0;But remember that this outcome is vastly preferable to an outcome in which we try to outlaw certain technologies, inevitably fail, and find ourselves subjugated by lawless, unfathomable, alien powers. &#xA0;</p><p>Bear in mind that we are not being replaced. &#xA0;We are expanding and growing into a more diverse society, with more diverse friends, families, leaders, and heroes. &#xA0;We as a society are becoming more robust, more adventurous, and more widely dispersed across the ecosystems of the universe. 
&#xA0;Also bear in mind that we only get one chance at a successful era of AI parity. &#xA0;This event will happen just once in history, and the result will be global and irreversible. &#xA0;We have the incredible luck of being present at the outset of this momentous transition. &#xA0;We are either blessed or cursed to live in such interesting times. &#xA0;<br><br></p>]]></content:encoded></item><item><title><![CDATA[Human abilities]]></title><description><![CDATA[I'm astounded by recent progress with neural network "large language models".  Here I try to understand the competencies of these models for my own work and for human minds.]]></description><link>http://adaptivemachines.org/human-abilities/</link><guid isPermaLink="false">6442d0f70f38d70622761fd2</guid><category><![CDATA[AI and Society]]></category><dc:creator><![CDATA[Hadon Nash]]></dc:creator><pubDate>Fri, 21 Apr 2023 18:53:34 GMT</pubDate><media:content url="http://adaptivemachines.org/content/images/2023/04/chimp_with_brain_crop-1.png" medium="image"/><content:encoded><![CDATA[<img src="http://adaptivemachines.org/content/images/2023/04/chimp_with_brain_crop-1.png" alt="Human abilities"><p>Like many other people, I&apos;ve been astounded by recent progress with neural network &quot;large language models&quot; such as ChatGPT. &#xA0;I&apos;ve tried to understand the scope of the competencies of these models, and I&apos;ve confronted their implications for my own work and for human minds. &#xA0;</p><p>I&apos;ve concluded that large language models act as universal translators by capturing the &quot;deep semantics&quot; of natural languages such as English. &#xA0;The &quot;deep semantics&quot; include the &quot;common sense knowledge&quot; and &quot;common sense reasoning&quot; required to understand the meaning of natural language sentences. &#xA0;This in turn corresponds to Noam Chomsky&apos;s vision of language as the distinguishing feature of the human mind. 
&#xA0;Natural language enables an unlimited power of imagination to conjure and to communicate situations that have never existed, within the space of &quot;all imaginable worlds&quot;. &#xA0;Natural language seems to be the main thing that distinguishes the human mind from other animal minds. &#xA0;</p><p>The very fact that the deep semantics of natural language can be captured in neural network models leads to some surprising conclusions about human minds. &#xA0;The human mind may be the combination of an &quot;animal competency&quot; and a &quot;language competency&quot;. The animal competency consists of the capability for reinforcement learning, including goals, satisfaction, disappointment, confidence, and fear. &#xA0;These capabilities are shared between humans and most large animals. &#xA0;Language competency consists of the capability for plausible logical reasoning about sentences in natural language, including most of common sense reasoning. &#xA0;These capabilities are shared between humans and most large language models. &#xA0;</p><p>If this analysis is correct, it leads to a surprising perspective on the human mind. &#xA0;The human mind may consist of nothing more than the fusion of two major competencies, both of which are fairly well understood at this point, and neither of which appears mysterious or majestic. &#xA0;Both appear to be powerful mechanisms for prediction and planning. &#xA0;Both appear to be excellent and infinitely scalable technologies. &#xA0;But neither one looks like a &quot;magical spark of awareness&quot;. &#xA0;It&apos;s just the plain old &quot;power of language&quot; combined with the plain old &quot;power of trial and error&quot;. &#xA0;</p><p>Until recently, most scholars of intelligence envisioned human level general intelligence, including human level creativity, self-awareness, and empathy, as a distant and barely discernible goal. 
&#xA0;We imagined that machine learning would progress through a long ladder of levels, from insect-like, to rodent-like, to ape-like, to human-like, with rodent-like intelligence on a distant horizon. &#xA0;Similarly, logical artificial intelligence would progress through a long ladder of levels, from application-like, to autonomous-vehicle-like, to automated-doctor-like, to human-like, with autonomous-vehicle-like intelligence on a distant horizon. &#xA0;We could imagine discoveries of a whole series of essential elements of intelligence along the way. &#xA0;Now, it looks like no more breakthrough discoveries may be needed. &#xA0;Instead, humans may consist of nothing more than two powerful planning mechanisms: natural behavior reinforcement and natural language. &#xA0;</p><p>In some ways, this conclusion could diminish our regard for our own human minds, since we can see now that they consist of just two ordinary, well-known mechanisms. &#xA0;Of course, no discovery about human beings can diminish people&apos;s actual brilliance, just as discovering deterministic laws of physics cannot diminish people&apos;s actual free will. Instead, this conclusion must ultimately elevate our regard for these two well-known mechanisms.</p>]]></content:encoded></item></channel></rss>