FREEDOM AND SAFETY
A couple of years ago I took part in a marketing video where I was recorded answering questions about my career path, interests, and goals. The next day, the videographer offered to show me the footage, so I took him up on it.
As I watched myself, I became dismayed. My eyes darted from left to right instead of looking into the camera. I clasped my hands in an awkward, unnatural position behind my back. And my head did this weird, reflexive sort of mini-nod whenever I said something I felt strongly about.
It was, in short, mortifying.
But I tried to take something positive from the mortification: I was embarrassed, but also glad to have seen the video, because it showed me what I needed to work on in terms of being a better speaker. The best way to zero in on my shortcomings was by having them played back to me.
What if we had something like this video, but for our whole selves? Rather than just being able to show us our physical movements, it would perfectly replicate our personalities, our insecurities, our senses of humor, knowledge, memories…
It may not be long before this intriguing, somewhat eerie possibility comes to pass. In fact, for some people - both alive and deceased - it already has.
The AI Foundation, which describes itself as “a non-profit and a for-profit organization working to move the world forward with AI responsibly,” is developing a tool that will let anyone build their own AI. Those at the Foundation believe that personalizing and distributing the power of AI, as opposed to having it concentrated and controlled by a select few, will help unleash its full positive potential. They emphasize that the AIs built using their tool will possess each user’s unique values and goals, and will help users overcome limitations we’re currently subject to in the non-personal-AI world.
It’s a lofty vision and, it seems, a noble one. But what would it actually look like, in practice, for us to have artificially intelligent versions of ourselves?
Author and alternative medicine advocate Deepak Chopra was happy to be the Foundation’s guinea pig: an AI version of him, in the form of an app, will go live in early 2020. Users will be able to talk to digital Deepak and get advice from him, and the app will customize itself to each user; if you tell digital Deepak that you’re prone to sinus infections or that you tend to feel sad on Sundays, he’ll remember and take that information into account for future conversations, perhaps reminding you to devote some extra time to meditation each Sunday morning.
And it’ll be win-win; the more people that use the app, the better it will get, as digital Deepak will learn from each interaction, getting wiser both in terms of his knowledge base and his ability to realistically converse with users. Though there’s apparently some work to be done on the app to get digital Deepak out of the uncanny valley, Scott Stein reported in CNET that using the app “really did just feel like I was Facetiming with Deepak.”
Celebrities, particularly authors with 73 books - that’s how many Chopra has written - have a wealth of training data for an AI to draw from when creating its digital ‘brain.’ But what about those of us who haven’t written down or published our thoughts, ideas, inspirations, and life’s work for all the world to read? How would an AI learn about us?
It could read our emails, text message conversations, and other written chat histories (this worked for a Russian woman who re-created her best friend in the form of an artificially intelligent bot after he died in a car collision). It could mine our social media history - in much the same way companies already do in order to collect data on us and target the ads we see - to see the digital interactions we’ve had with others, the issues that are important to us, and the curated versions of ourselves we’ve chosen to present to the world.
Our phones could, with our permission, record our conversations, and could even record video of what we’re doing or who we’re interacting with, then digest and integrate that information for our AIs. The app could then create its own requests for new information based on what it perceives as missing, filling in the gaps to essentially build up databases of our Selves.
Are you freaked out yet? I sure am.
Like the video that pinpointed the speaking skills I needed to sharpen, personal AIs could show us deeper ways to become better versions of ourselves, reflecting our insecurities and patterns and giving us a starting point for change. They could “think and act like us in a billion places at once,” as the Foundation’s website proclaims. A video shows people saying what they’d do with their AI: achieve better work-life balance, work on alleviating poverty, diagnose personal health issues, spread awareness about the environment, and a host of other noble missions.
I don’t know about you, but I find the idea of having a digital version of myself that knows everything about me yet possesses an intelligence that’s somehow separate from my own a little bit intriguing - but mostly terrifying.
Would my AI portray the real me? Would it portray the best me? What if it went rogue, or someone else took control of it? Could I send my AI to have conversations with people I don’t want to talk to? And if they sent their AI in return (someone I don’t want to talk to may not want to talk to me either), what would it mean for our digital selves to converse instead of our real selves? What would happen to our AIs after we die?
I recently re-watched Her, Spike Jonze’s prescient 2013 film in which Joaquin Phoenix falls in love with his artificially intelligent operating system, Samantha. He carries her around in his phone, and they talk, laugh, share secrets and fears, and build a deep emotional intimacy (to whatever extent that’s possible when the thing you’re talking to is a machine).
The movie offers an unsettling portrait of a future in which those of us who want to can become even more detached from the living, breathing people around us (“even more” because of the detachment the internet, smartphones, and social media have already visited upon us), replacing them with digital ‘beings’ that make us feel like we’re interacting with humans - minus the risk of emotional messiness and plus a far greater degree of control.
What sort of ramifications might there be of a world filled with intelligent digital duplicates of ourselves? Will you build your own AI when you have the chance?
It seems it’s only a matter of time.