FREEDOM AND SAFETY

 

The AI hype would have you believe that we'll soon be enslaved by super-intelligent beings or hunted by killer robots. But before building that Soviet-era bunker to survive the AIpocalypse, consider the more immediate issues already affecting society today.

 

According to the 23 AI experts Martin Ford interviewed for his new book, Architects Of Intelligence, the real, imminent AI threats relate to politics, security, privacy, and the weaponization of AI.

 

To understand how these problems affect society today, it's helpful to see them from the perspective of the leaders who have helped shape the current AI revolution.

 

William Falcon: Why did you decide to write Architects Of Intelligence?

 

Martin Ford: When I started a software company in the 90s, I noticed the impact of technology on jobs. That's when I started thinking about the impact of AI and robotics on the job market. That led to two books, The Lights in the Tunnel and Rise of the Robots. 

 

The purpose of Architects Of Intelligence is to draw everyone - not just AI researchers - into the discussion of the immediate impacts AI is already having on our society. The book aims to highlight what some of those issues are and to teach readers a bit more about the technology.

 

William Falcon: Who is the audience for the book?

 

Martin Ford: Anyone! Especially people with a strong interest in AI. The book is intended for a wider audience than just researchers and engineers working in or interested in AI. I also included a section at the beginning explaining technical terms that come up in some of the discussions.

 

Everyone should be concerned with AI - how it has progressed and its impact on the economy and society. It's a bit like reading a book about relativity written by Einstein - some of these people invented key AI concepts in use today.

 

William Falcon: What is the main takeaway for those readers not directly working on AI research?

 

Martin Ford: The biggest takeaway is that everyone agrees AI is going to be disruptive. They don't agree on the details of what that looks like exactly, but this technology will have a massive impact on society.

 

William Falcon: After your conversations with these researchers, what are your views on AGI (artificial general intelligence)?

 

Martin Ford: The main takeaway is that everyone I interviewed has a different idea. At the end of the book, there's a survey asking what year AGI might arrive; the average answer is about 80 years from now. But on all these issues, not just AGI, there simply isn't a consensus.

 

Take even the question of a potential existential threat from superintelligence: is that something we should really worry about? Most of the people I interviewed were pretty dismissive and thought it's not something we should be concerned with right now. A few do take those concerns quite seriously, though, and believe we should be working on a solution now, even if the advent of super-intelligent machines lies far in the future.

 

William Falcon: What are more immediate concerns we should be thinking about?

 

Martin Ford: The impact on privacy, our political system, and security. We've already seen some of the impact on our political system from events like the Cambridge Analytica scandal. We also have to think about security as we rely more on AI-powered autonomous systems, which become more susceptible to hacking when humans are out of the loop. The big one everyone is really worried about is the weaponization of AI.

 

Recently, we've also started to see news about AI algorithms that have led to biased decisions based on gender and race in hiring and other areas. These are all issues that are happening today and will become much bigger in the coming years. Everyone should be concerned about them, not just AI researchers.

 

William Falcon: Now that we've identified some of these core issues, what are tangible next steps toward solving them?

 

Martin Ford: Some of this will have to enter the political sphere. We'll need to develop some form of regulation in certain areas. The people I spoke to generally don't believe we need to regulate AI research itself; instead, they suggest regulating its applications.

 

Self-driving cars are one obvious example. Another is how facial recognition should be used. Even emotion recognition can be used in all sorts of nefarious ways, such as manipulating shoppers or gaining an edge in negotiations.

 

Who are the researchers?

  1. Yoshua Bengio (MILA lab, Université de Montréal).
  2. Stuart Russell (BAIR lab, UC Berkeley).
  3. Geoffrey Hinton (Google Brain, University of Toronto).
  4. Nick Bostrom (Future of Humanity Institute, University of Oxford).
  5. Yann LeCun (CILVR lab, Chief AI Scientist at Facebook AI, New York University).
  6. Fei-Fei Li (SAIL lab, Google, Stanford University).
  7. Demis Hassabis (Co-Founder, Google DeepMind).
  8. Andrew Ng (SAIL lab, AI Fund. Stanford University).
  9. Rana El Kaliouby (Co-founder, Affectiva).
  10. Ray Kurzweil (Google).
  11. Daniela Rus (CSAIL lab, MIT).
  12. James Manyika (McKinsey Global Institute).
  13. Gary Marcus (Founder, Geometric Intelligence. New York University).
  14. Barbara Grosz (Harvard University).
  15. Judea Pearl (UCLA).
  16. Jeffrey Dean (Head of Google Brain).
  17. Daphne Koller (Co-founder, Insitro. Stanford University).
  18. David Ferrucci (Founder, Elemental Cognition. Bridgewater Associates).
  19. Rodney Brooks (Chairman, Rethink Robotics).
  20. Cynthia Breazeal (Founder, Jibo. MIT Media Lab).
  21. Oren Etzioni (CEO, The Allen Institute For Artificial Intelligence).
  22. Josh Tenenbaum (Department of Brain and Cognitive Sciences, MIT).
  23. Bryan Johnson (CEO of Kernel).

https://www.forbes.com/sites/williamfalcon/2018/11/30/this-is-the-future...