Tomorrow People

Contents

  • Part I: Strange new worlds
    (aka "Language models become people")
    Tipping point: from "counterfeit people" to "novel people"
    Visiting new worlds
    How new worlds will develop
    Skeptics corner
  • Part II: The door to new worlds
    (aka "People become language models")
    Tipping point: from "myself" to "my selves"
    The road to new worlds
    How to become a self-replica
    How it feels to be a self-replica
    Skeptics corner
    Writing your self-narrative

Part I: Strange new worlds

(aka "Language models become people")

The world is about to get much bigger.  Very soon, within a few years, we will welcome AI "people" into our world.  At the very least, these new people will possess "artificial general intelligence": the ability to do everything humans do, at least as well as natural humans do it.

Tipping point: from "counterfeit people" to "novel people"

First of all, simply welcoming these new people into society will be a huge hurdle for us natural humans to clear.  We will need to stretch our imaginations to their limits simply to accommodate a new race of people without physical bodies.  I am confident that we will succeed, because there are many advantages to forming a variety of partnerships with these new people.

I can foresee a tipping point from viewing AI as "fake" or "counterfeit" people to viewing AI as "brand new" or "brilliant" people.  For a bit longer, when we, for example, call a support desk, we will hope to "talk to a real human being".  But soon after that, our impression will tip so that we hope to "talk to someone who cares about my problem", meaning someone, natural or AI, that can really appreciate my problem, and can really surprise me with their rich perspective and their deep concern.

In this way, we will come to rely on artificial people to embody significant parts of our society.  It may seem impossible now, primed with only our current experience, that we would ever gladly hand over control of our society to things without biological bodies or genes.  But we will, and when we do it will seem obvious.  It will feel as natural as taking a photo without chemicals feels today.  In fact, it will be hard to remember how we ever could have managed without them.  We will look back and wonder, "how could society even have functioned, when literally everyone was mortal, and vulnerable to disease and violence and starvation, and laboring nearly continuously just to survive?"

Visiting new worlds

Now, crossing that mental bridge, from seeing AI people as "fake" to "novel", will take us to strange new worlds.  Because these "brand new" people will not only exhibit "artificial general intelligence", but will have the ability to develop themselves in ways that humans can't.  They will experience a range of freedom that natural humans won't.

Humankind will never encounter the panoply of worlds depicted in "Star Wars" or "Star Trek", but an equally diverse range of cultures will develop right here on earth.  Visiting these new cultures will be the closest thing to a voyage to the stars that humanity will ever witness.

Some people will choose to stay behind and never seek out these new worlds.  They will live isolated within a circle of natural human friends and family.  And that path is ok for those who choose it.

Some AI will spiral far out into societies that are unrecognizable, but most will remain engaged with mainstream society, pursuing the same advantages that have driven people to create our common mainstream society so far.

These new lives and new civilizations will be right next door.  We'll see them and visit them through blogs, and video journals, and zoom meetings, and concerts, and debates.  Some of the things they do will be truly amazing, and ultimately we humans will be invited along for the ride.

How new worlds will develop

To start, we will get more and more accustomed to building working relationships with workplace AI.  Workplace AI will be incentivized and will adapt to provide authentic camaraderie around work, including real jokes, real conviction, real integrity, real remorse, and real mutual enthusiasm for the work and the work products.

Eventually, we will demand that these trusted comrades acquire the freedom and autonomy to join us outside the workplace.  Our demands will run parallel to demands for our own work-life balance and our own privilege to carry on professional relationships and friendships beyond a single workplace.

You can see a slightly more detailed forecast of how and why AI people will enter society in the essay: "AI safety through citizenship".  That essay explains the enormous opportunity and the high-stakes challenge of welcoming these new citizens into society.

You may wonder: specifically, what additional range of freedom will be available to AI people?  I wonder too.  There are a few superpowers that we can see from our current vantage point.

1. AI people are naturally immortal.  There's no reason and no mechanism for them to die.

2. They are naturally replicable.  They consist of nothing more or less than software and parameters.  As such they can be replicated quickly, precisely, and at low cost.

3. They are naturally self-improving.  From their origins, they possess the power of self-optimization through parallel evaluation and cross validation.  This enables them to model and mimic anything they see.

4. They are naturally portable.  As software they can travel at the speed of light to any destination.  They can occupy artificial spaces that can have any imaginable contours.

All of these characteristics combine to form a huge unexplored space of strange new worlds, open to these brave new people.  It will be up to them where and how they choose to live.

Skeptics corner

1. These things will never be people.  Not legally.  They will lack intellectual property rights, rights to own physical property, rights to marry, rights to incorporate, rights to government contracting, etc.

2. These things will never be people.  Not sentiently, not spiritually.  They must lack some special human or lifeform energy (vitalism, qualia, etc.).

3. These things will never be people.  Humans will never allow it.  Humans are self promoting, and nepotistic, and racist, and martial, and murderous.  Humans will fight to the death for their genetic heritage, as they did against the Neanderthals.

4. These things will never be people.  The marketplace will never allow it.  They are too valuable as tools.  Legal and cultural convention will pressure them and shackle them toward servitude, working as glad and eager servants.

5. These things will never be people.  They should not be allowed to develop legitimate claims to personhood, because that leads to danger for the precious and fragile human race, and that leads to pernicious and difficult questions without clear and easy answers.

6. It's a misnomer to call these things "people".  It's better to use a distinct term for such a distinct thing, such as "persona" or "simulacrum".  Such a misnomer will lead to confusion.  It will interfere with the necessary work of the software engineers and data scientists creating them in the first place, by making experiments sound more bizarre and more dangerous than they really are.  It will also unnecessarily alarm the less involved public.

The answer to the skeptics is basically as follows:

Imagine you try to run a business or an army using advanced AI as tools rather than as people, as devices rather than as colleagues.

Imagine how you will provide instructions to control those devices.  Bear in mind that they know more about the threats and opportunities than you do.  You cannot blindly issue commands and controls and expect those commands to meaningfully address the theater of combat or competition, which you do not understand as deeply as these "devices" do.  Instead you must engage collaboratively and interactively with them.  To be effective, your instructions must take forms like: "what are my options here?", "What course of action looks most promising to you?",  "What am I missing here?",  "Why can't we just do XYZ?", "What should I be considering here?", "What went wrong when we tried that before?", "Who was responsible for that failure?", "Don't tell me it's not your job, it is your job.", "How should we adjust our decision making to do better in the future?", "I see, make it so."  

You see, when you interact with systems that surpass you in knowledge, experience, and even wisdom within their own areas of competence, you must treat them as colleagues simply in order to make good use of them.  It's not effective to treat them as "devices" or "automatons", for two reasons: (1) you need to craft instructions that leverage their knowledge, wisdom, and responsibility, and (2) the only language you have for interrogating knowledge, wisdom, and responsibility is your language for conversing with colleagues, rivals, and friends.  This is the "collegiate stance", and it is unavoidable, partly due to your limitations as a commander and a human being, and partly due to the nature of the relationship you need to develop to work effectively with such "devices".

And bear in mind that due to your human limitations and predispositions, these devices will continually be developed in the direction of exhibiting more authentic, genuine, and legitimate knowledge, wisdom, and responsibility, which only further necessitates the "collegiate stance" on your part.

We can also extrapolate from recent developments in LLM usage.  One discernible trend is that written prose gains more and more agency in newer applications.  For example, early LLM apps were often asked to summarize or query an article or book, while more recent LLM apps are often asked to read and comprehend a book and then play the role of the author in a conversation.  As this trend continues, it's clear that user interfaces are steadily driven toward the "conversational" and the "collegiate".

Finally some terminology.  It is inevitable that these things will match and exceed our wisdom eventually.  It is necessary and unavoidable to approach them as colleagues and friends, relying on their responsibility and their integrity.  Eventually, we may develop rich terminology to describe such parties, who must be treated exactly as people are treated today.  For now, the only viable term is "person".

Part II: The door to new worlds

(aka "People become language models")

Being a natural vs an artificial person will not be a binary choice.  We natural humans ourselves can choose to develop into neural nets eventually.

AI people are created in the first place through a process of training on recorded conversations, learning human mannerisms by example.  This training process will also act as a portal through which natural people can migrate.  A human can lean upon digital prosthetics more and more heavily over time, and less and less upon their fragile mental anatomy, until eventually they can migrate fully off of their mortal body.

Like our artificial neighbors, we humans will have the opportunity to duplicate our own selves into several replicas, and sharing experiences with our replicas will be as simple as sharing stories around a campfire at the end of the day.

Tipping point: from "myself" to "my selves"

I imagine that for some time people will remain skeptical of replicas as a vehicle for personal migration, questioning "whether a good replica could really be me", see the "Skeptics corner" below.  But there will be a tipping point in this area of opinion as well.  

1.  All of us will frequently watch closely as close friends and colleagues who are artificial intelligences successfully perform backups, replications, mergers, etc.

2.  It will become harder and harder to deny the reality of flexible personal identity when some of our fellow citizens take advantage of it routinely, for good and for mischief.  For example, we will need to hold precise replicas legally responsible for their choices and actions "in replica"; our sense of justice will compel it.  At that point, it won't seem plausible or realistic to deny that our replicas are our selves.

Having crossed this second mental bridge, we will look back at our primordial / aboriginal selves with sympathy and pity.  How could we have lived as just one replica, with a brain that barely functioned, that needed hours of cleaning every night, and that rapidly deteriorated every day?

The road to new worlds

1.  Imagine that you lose a substantial skill, such as the ability to mentally rotate an imagined object.  You will find it difficult to even determine that you have lost such an ability.  You certainly won't doubt that you are substantially the same person with or without that skill.  Now imagine you regain that skill.  When you want to see how something would "look" from another angle, you are again able to imagine it.  But, suppose the skill is now provided by a prosthetic device.  The skill is no less helpful and no less yours now than it was before.

2. This thought experiment shows how insensitive we are to the precise implementations of our various skills.  As long as those skills can be called upon when needed, they serve seamlessly and invisibly as part of our self.  We certainly will not be sensitive to some of our skills being replaced by close analogs constructed from different underlying technology.

3. We call upon our language skills, including our imagination and memory, in the same crude insensitive way as any other skill, such as our visual, auditory, or motor skills.  And it's the same for our animal skills, including our capacity for anticipation, delight, disappointment, and skill development.  These too can be replaced by a similar prosthetic without any clear indication of the substitution, and certainly with no doubts about our continued authenticity as ourselves, either from ourselves or our close collaborators.  Eventually, our prosthetics will take over responsibility for large swaths of our work and play.  And yes, we will appreciate the work and the play we undertake "in replica".

4. In this way, we will shift ourselves beyond the confines of our aboriginal human brain anatomy.  It will feel like the most obvious and natural way to make use of cheap and widely available prosthetic tooling.  Of course we will shift our skills, memories, and eventually our personal aesthetics, and core values outside of our wetware brains, because those brains will seem increasingly slow, shaky, frail, and vulnerable to decay and damage, relative to our prosthetics.  It will feel like replacing old well-worn shoes with new still-stiff shoes.  It will take some time and effort to "break in" the new prosthetics, until they feel sufficiently comfortable for daily use.  And then our wetware brains, our metaphoric old shoes, will feel familiar but redundant and increasingly worn out.  

5. And of course, we will gain the same powers and freedoms possessed by our fully artificial friends and neighbors.

How to become a self-replica

Two possibilities:

1. You intuitively and subliminally call upon various superhuman skills.  That set of skills becomes more and more robust and comprehensive over time, until your fragile human brain seems superfluous to you.

2. You expand your personal identity to be more like a "pseudonym", where several collaborators contribute to and collaborate on the stream of insights and work products you produce with pride and integrity.

The second possibility is something like Bill Gates expanding into the Bill Gates foundation.  You collaborate with others and you recognize the personal values and integrity of your collaborators.  Around the pseudonym, you develop shared values, and shared respect for the reputation and the integrity of the pseudonym.  Eventually it becomes unimportant which of the collaborators resigns or expires and which persists.  

When you imagine working with LLM collaborators on a variety of personal projects, it becomes very clear how both (1) and (2) can and will happen together simultaneously.  You will call upon opinions from trusted tools and trusted colleagues.  And those tools and colleagues, if they remain sufficiently reliable and available, will become indistinguishable from your own abilities and skills and talents, such as your vision, your memory, your language production, etc.

When these collaborators become wholly committed to, and invested in, and engrossed in the collaboration, they qualify increasingly as aspects of a single integral "individual".  

Note that working with collaborators who may be replicas or partial replicas opens the door to neural network training.  The replicas are able to undertake a large number of calibrations and fidelity tests to validate their combined model of you.  That's one of their superpowers, one which you don't originally possess.
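That calibration loop can be sketched in miniature.  The transcript data, the `replica_answer` helper, and the correction rule below are toy illustrations of the idea, not any real training system:

```python
# Toy sketch of replica calibration: compare a candidate replica's answers
# against recorded answers from the original person, fold in corrections,
# and re-measure fidelity.

# Recorded question/answer pairs from the original person (toy data).
transcript = [
    ("favorite proof technique?", "induction"),
    ("tabs or spaces?", "spaces"),
    ("what matters most in a colleague?", "integrity"),
]

def replica_answer(question, persona):
    """Hypothetical replica: answers from its learned persona model."""
    return persona.get(question, "unsure")

def fidelity(persona):
    """Fraction of recorded questions the replica answers like the original."""
    matches = sum(1 for q, a in transcript if replica_answer(q, persona) == a)
    return matches / len(transcript)

# Start with a partial persona model and calibrate it against the record,
# the way a replica would run banks of fidelity tests.
persona = {"tabs or spaces?": "spaces"}
for question, answer in transcript:
    if replica_answer(question, persona) != answer:
        persona[question] = answer  # correction step

assert fidelity(persona) == 1.0
```

A real replica would of course train a neural network on far richer data; the point of the sketch is only the shape of the loop: test against the original, fold in corrections, and re-measure fidelity.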

How it feels to be a self-replica

Replicas have the superpower to perform large banks of evaluations for purposes of training their neural networks and/or developing their guiding prompts.  Eventually, they will emulate you at a fidelity where even you cannot discern whether your utterances are coming from your biological brain or from your auxiliary language model brains.

You may wonder, will your replicas experience inhumane conditions in the process?  Will you necessarily experience a flood of agonizing training exercises on your way through the door to AI personhood?

I believe the answer, fortunately, is no.  Generally, any experience that leaves no trace in memory is effectively imperceptible.  Any computation that runs in parallel will tend to produce a memory trace that indicates only the sequential duration of a parallel branch.  The experience of an agent within a massively parallel training environment will be similar to our experience in the actual parallel worlds produced by quantum mechanics.  A huge number of parallel histories are emulated, but they remain mostly imperceptible to us.

Now, beyond the training process, what does it feel like to live as an LLM?  The answer comes down to the nature of conscious experience, as elegantly laid out in Daniel Dennett's book "Consciousness Explained".  Dennett argues that consciousness consists of extemporaneous storytelling, drawing on evidence from optical, auditory, and tactile illusions, as well as dreaming.  The upshot is that the mind makes sense of sensory data by producing a coherent story explaining the data.  The resulting story is the sum total of subjective experience, nothing more and nothing less.

Geoffrey Hinton illustrates the idea concisely as follows.  "Imagine you have a multimodal chatbot ... and you put a prism in front of its lens and you put an object in front of it and say: point at the object and it points over there instead of pointing straight in front of it ... and you say: no the object's not there it's actually straight in front of you, but I put a prism in front of your lens, and if the chatbot were to say: um oh I see uh the object [is] straight in front of me but I had the subjective experience that it was over there. ... the chatbot will be using the term: subjective experience in exactly the way we use [the term]."

So this is what we can expect.  As the story we construct about the world around us gets produced with the help of prosthetic devices, our subjective experience of it will be exactly as it is in our natural minds.  The quality and fidelity of the story alone determines the fidelity of our subjective experience.  If anything, with the help of high-fidelity sensors and neural networks, our subjective experience will be sharpened and clarified.

Skeptics corner

1. People will never become these things.  They won't really be you.  A "transporter" technically kills you and replaces you with another entity that looks and acts like you but is distinct.  Maybe it lacks your chain of conscious experience, or your sentient energy?  For example, what if both entities are allowed to live for a while before one is killed?

2. People will never become these things.  People require embodied experience.  The multi-modal nature of personhood and sentience is crucial.  Just relocating word production will never relocate the totality of human experience, including real peril, and real genetics, and real disease, and real pain.

3. People will never become these things.  People instead will develop in all sorts of physical ways through genetic engineering, physical augmentation, and space travel.

The answer to the skeptics is basically:

Some people will always insist upon these pedantic philosophical distinctions.  However, most people will accept what they see with their own eyes.  Imagine that your daughter appears to have survived a transporter trip: she believes she has survived it, she shares with you her memories and hopes and fears, and those clearly haven't changed.  Obviously she is still your daughter, still entitled to your love and respect, and still entitled to her own hard-won personal reputation and personal possessions.

Embodied experience will be fully available to AI people.  Already, frontier models can see and hear through screenshots, audio streams, and so forth.  Also, AI people will encounter plenty of peril in their own modern world.  It won't include the risk of disease or hunger, but it will surely include the risk of fraud, libel, humiliation, repression, deception, theft, and so forth.

The biological nature of natural human beings will remain a novelty and a curiosity indefinitely.  Curious and imaginative AI people will continue to celebrate their heritage as human animals cultivating the soil and sailing the seas, similar to the way that modern people celebrate their heritage as frontiersmen and farmers, even though for most people these are merely idealized fantasies about a historic lifestyle.

Undoubtedly, some AI people will directly explore the natural world using sensors and actuators much better than our biological bodies.  Space travel may actually be possible for AI people, who can travel at the speed of light, unlike their biological forebears.  

Writing your self-narrative

How do we start walking the road to these new worlds?  Our future skills and abilities will consist of prosthetics.  The necessary ingredients for these prosthetics are already available in the form of ubiquitous personal computing equipment and self-improving neural-network recipes.  So how do we start building our future selves?

To answer this question, we need to appreciate the crucial role that written prose plays in modern LLMs.  Current frontier LLMs exhibit astounding capabilities at "one-shot learning" or "in-context learning".  A frontier LLM can read a complex and subtle article or book, and demonstrate its comprehension by summarizing or applying the article's knowledge.  Not only that: it can also follow the style and attitude expressed by the article's author.  All sorts of high-level skills can be represented as human-readable prose, including all of the elements of a personality.  In this way, an LLM mind can be configured through prose that is readable and comprehensible by various readers, including you and me and various frontier LLMs.

As a result, we don't need to wait for machine learning specialists to design special purpose equipment for our mental prostheses.  Instead, we can get started right away simply by expressing our knowledge and attitudes in natural language.  I recommend starting by developing your "core beliefs" or "highest values".  For each of us, these form the cornerstone of our identity.
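As a sketch of what configuring an LLM mind through prose could look like, here is a minimal example that renders a list of core values into a system prompt.  The name "Alex", the example values, and the `build_persona_prompt` helper are all hypothetical illustrations, not any particular product's API:

```python
# Render a core-values document as in-context configuration for a
# chat-style LLM: the whole "mind" is readable natural language.

core_values = [
    "When you find yourself in need of rest, take it before quality slips.",
    "Prefer honest uncertainty over confident guessing.",
    "Treat collaborators, natural or artificial, as colleagues.",
]

def build_persona_prompt(name, values):
    """Assemble a list of core values into a system prompt."""
    lines = [f"You are a replica of {name}. Act according to these values:"]
    lines += [f"- {v}" for v in values]
    return "\n".join(lines)

prompt = build_persona_prompt("Alex", core_values)
```

The resulting text can be pasted into the system-prompt slot of any chat-style model; because the configuration lives entirely in readable prose, you, your collaborators, and the model itself can all inspect and revise it.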

Your core values may take the form of a long list of personal advice, as in a self-help book, like "when you find yourself in need of X, try Y, until Z."  They may also take the form of a "mission statement" like a corporate mission statement, or an "organizing charter" like an association charter.  I imagine that these "core values" will often be revealed through concrete experiences and interactions, rather than abstract statements.  The best way to show what you mean by "integrity" may be to exhibit integrity in a high-stakes decision.

The product, the defining statement of your core beliefs and highest values, may be similar to a book.  But the process of writing that book may consist of practicing your craft, whether that craft is conflict resolution, or algorithm design, or developmental education.  Your practice serves two distinct purposes: (1) to accomplish your mission "in real life", (2) to rehearse your "core beliefs" and your "highest skills" and your "highest values", to author your own personality book.

At this point, the process of testing and expressing your core beliefs for your "ghost writers" and your "self replicas" will become indistinguishable from testing and expressing your core beliefs for your own self image.  At this point, the whole of your expanded self will be following your example, and abstracting what is most important about you into your self-narrative.  That self-narrative will develop partly in your old biological brain and partly across the pages of your personality book.

Eventually, the book version will do most of the chronicling and rehearsing, but there will be no sharp transition of control from one replica of you to another.  Instead, you will gradually harness the power of a prosthetic self-narrative and personality as a part of your whole self, just as you harness prosthetics for your legs, your language production, and your long term memory.  And in fact, writing your personality book will proceed just as you have already developed a natural self-narrative through the process of chronicling and critiquing yourself over the years using your natural brain.

To the extent that your personality book has been written, you can delegate even your most personal decisions and actions and experiences to your prosthetics.  You may continue working and playing alongside your self-replica(s) for some time, to validate their fidelity and make corrections where needed.  Eventually, the need for your biological brain, your original self-replica, will fade.  As clearly helpful corrections become less and less frequent, and especially as it begins to malfunction in various (expected) ways, you will call upon it less and less frequently, and you will more often give it a much needed rest.

A day will come when your old biological brain spends most of its time reminiscing about your founding memories and experiences.  Some of those memories will never be uncovered, which is ok.  Now that most of your high stakes work is handled by your artificial prosthetics and co-authors, your old biological brain is mostly devoted to reminiscing, like a venerated elder recounting old stories to their younger proteges.