About this Episode
Episode 86 of Voices in AI features Byron speaking with fellow author Amir Husain about the nature of Artificial Intelligence and Amir's book The Sentient Machine.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today my guest is Amir Husain. He is the founder and CEO of SparkCognition Inc., and he's the author of The Sentient Machine, a fine book about artificial intelligence. In addition to that, he is a member of the AI task force with the Center for a New American Security. He is a member of the board of advisors at UT Austin's Department of Computer Science. He's a member of the Council on Foreign Relations. In short, he is a very busy guy, but he has found 30 minutes to join us today. Welcome to the show, Amir.
Amir Husain: Thank you very much for having me, Byron. It's my pleasure.
You and I had a cup of coffee a while ago, and you gave me a copy of your book, and I've read it and really enjoyed it. Why don't we start with the book? Talk about that a little bit, and then we'll talk about SparkCognition Inc. Why did you write The Sentient Machine: The Coming Age of Artificial Intelligence?
Byron, I wrote this book because I thought that there was a lot of writing on artificial intelligence and what it could be. There's a lot of sci-fi that has visions of artificial intelligence, and there's a lot of very technical material around where artificial intelligence is as a science and as a practice today. So there's a lot of that literature out there. But what I also saw was a lot of angst back in 2014 and 2015. I actually had a personal experience in that realm: outside of my South by Southwest talks there was an anti-AI protest.
So just watching those protesters and seeing what their concerns were, I felt that a lot of the philosophical, existential questions around the advent of AI deserved attention. If AI indeed ends up being like Commander Data, if it has sentience and becomes artificial general intelligence, then it will be able to do our jobs better than we can, and it will be more capable in, let's say, the "art of war" than we are. Does this mean that we will lose our jobs, that our lives will be lacking in meaning, and that maybe the AI will kill us?
These are the kinds of concerns that people have had around AI, and I wanted to reflect on notions of man's ability to create: the aspects of that which are embedded in our historical and religious traditions, our conception of Man versus he who can create, our creator, and how all of that influences how we see this age of AI, where man might be empowered to create something which can in turn create, which can in turn think.
There are a lot of folks who also feel that this is far away, and I am an AI practitioner and I agree: I don't think that artificial general intelligence is around the corner. It's not going to happen next May, even though I suppose some group could surprise us, but the likely outcome is that we are going to wait a few decades. I think waiting a few decades isn't a big deal, because in the grand scheme of things, in the history of the human race, what is a few decades? So ultimately the questions are still valid, and this book was written to address some of those existential questions, drawing on elements of philosophy, as well as science, as well as the reality of where AI stands at the moment.
So talk about those philosophical questions just broadly. What are those kinds of questions that will affect what happens with artificial intelligence?
Well, one question is a very simple one of self-worth. We tend to define ourselves by our capabilities and the jobs that we do. Many of our last names, in many cultures, are literally indicative of a profession: Goldsmith, for example, or Farmer. And this is not just a European thing; across the world you see this phenomenon of last names reflecting the profession of a woman or a man. We internalize the jobs that we do as essentially being our identity, literally to the point where we take it on as a name.
So now, when you de-link a man's or a woman's ability to produce, or to engage in that particular labor that is a part of their identity, then what's left? Are you still the human that you were with that skill? Are you less of a human being? Is humanity in any way linked to your ability to conduct this kind of economic labor? This is one question that I explored in the book, because I don't know whether people really contemplate this issue so directly and think about it in philosophical terms, but I do know that, subjectively, people get depressed when they're confronted with the idea that they might not be able to do the job that they have been comfortable doing for decades. So at some level it's obviously having an impact.
And the question then is: is our ability to perform a certain class of economic labor in any way intrinsically connected to identity? Is it part of humanity? I explore this concept and say, "OK, well, let's take this away, let's cut away all of the extra frills, let's take away everything that is not absolutely, fundamentally, uniquely human." And that was an interesting exercise for me. The conclusion that I came to (I don't know whether I should spoil the book by sharing it here, but in a nutshell, and this is no surprise) is that our cognitive function, our higher-order thinking, our creativity, these are the things which make us absolutely unique amongst the known creation. And it is that which makes us unique and different. So this is one question of self-worth in the age of AI, and another one is…
Just to put a pin in that for a moment: in the United States the workforce participation rate is only about 50% to begin with, because you've got adults who are retired, you have people who are unable to work, you have people who are independently wealthy… I mean, we already have about half of adults not working. Does it really rise to the level of a philosophical question when it's already something we have thousands of years of history with? What are the really meaty things that AI gets at? For instance, do you think a machine can be creative?
Absolutely, I think a machine can be creative.
You think people are machines?
I do think people are machines.
So then, if that's the case, how do you explain things like the mind? How do you think about consciousness? We don't just measure temperature, we feel warmth; we have a first-person experience of the universe. How can a machine experience the world?
Well, you know, look, there's this age-old discussion about qualia, and there's this discussion about subjective experience, and obviously that's linked to consciousness, because that kind of subjective experience requires you to first know of your own existence and then apply the feeling of that experience to yourself in your mind. Essentially, you are simulating not only the world, but you also have a model of yourself. And ultimately, in my view, consciousness is an emergent phenomenon.
You know the very famous Marvin Minsky hypothesis, The Society of Mind. I don't know that I agree with every last bit of it, but the basic concept is that there are a large number of processes, each specialized for different things, running in the mind (the software being the mind, the hardware being the brain), and that the complex interactions of all of these result in something that looks very different from any one of those processes independently. This, in general, is a phenomenon called emergence. It exists in nature, and it also exists in computers.
One of the first graphical programs that I wrote as a child, in BASIC, drew straight lines, and yet on a CRT display what I actually saw were curves. I'd never drawn curves, but it turns out that when you light a large number of pixels with certain gaps in the middle, on a CRT display there are all sorts of effects and interactions, like the Moiré effect and so on, where what you thought you were drawing as lines shows up, if you look at it from an angle, as curves.
So the process of drawing a line is nothing like drawing a curve; there was no active intent or design to produce a curve, and yet the curve just shows up. It's a very simple example, one that a kid writing a few lines of BASIC can try, but there are obviously more complex examples of emergence as well. And so consciousness to me is an emergent property, an emergent phenomenon. It's not about any one thing.
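[Editor's note: Amir's childhood example relied on CRT artifacts, which can't be reproduced in text. A closely related, reproducible version of the same idea is the "string art" effect, where a family of straight segments traces out a curved envelope no single segment contains. The sketch below (in Python rather than BASIC; the grid size N and the draw_line helper are illustrative, not from the conversation) draws only straight lines, yet the printed picture shows an arc.]

```python
# Emergence in miniature: draw only straight segments, see a curve.
# Each segment runs from (i, 0) to (0, N - i). None of them is curved,
# but together their envelope traces a smooth arc ("string art").

N = 20  # illustrative grid size

def draw_line(grid, x0, y0, x1, y1):
    """Plot a straight segment by sampling evenly spaced points along it."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for s in range(steps + 1):
        x = round(x0 + (x1 - x0) * s / steps)
        y = round(y0 + (y1 - y0) * s / steps)
        grid[y][x] = "#"

grid = [[" "] * (N + 1) for _ in range(N + 1)]
for i in range(0, N + 1, 2):
    draw_line(grid, i, 0, 0, N - i)

# Print with y increasing upward, as on graph paper.
for row in reversed(grid):
    print("".join(row))
```

No line in the program mentions a curve, yet the blank region's boundary is visibly curved: the property belongs to the interaction of the parts, not to any part.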
I don't think there is a consciousness gland. I think that there are a large number of processes that interact to produce this consciousness. And what does that require? It requires, for example, a complex simulation capability, which the human brain has: the ability to think about time, to think about objects, to model them, and to apply your knowledge of physical forces and other phenomena within your brain to try and figure out where things are going.
So that simulation capability is very important, and the other capability that's important is the ability to model yourself. When you model yourself and put yourself in a simulator and see all these different things happening, there is not the real pain that you would experience when you simulate, for example, being struck by an arrow, but there might be some fear. And why is that fear emanating? It's because you watch your own model, in your imagination, in your simulation, suffer some sort of a problem. And that is entirely internal, right? None of this has happened in the external world, but you're conscious of it happening. So to me, at the end of the day, consciousness has some fundamental requirements. I believe simulation and self-modeling are two of those requirements, but ultimately it's an emergent property.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.