"You can't think about thinking without thinking about thinking about something". --Seymour Papert
What do I think about machines that think? It depends on what they're supposed to be thinking about. I am clearly in the camp of people who believe that AI and machine learning will contribute greatly to society. I expect that we'll find machines to be exceedingly good at things that we're not--things that involve massive amounts of data, speed, accuracy, reliability, obedience, computation, distributed networking and parallel processing.
The paradox is that at the same time we've developed machines that behave more and more like humans, we've developed educational systems that push children to think like computers and behave like robots. It turns out that for our society to scale and grow at the speed we now require, we need reliable, obedient, hardworking, physical and computational units. So we spend years converting sloppy, emotional, random, disobedient human beings into meat-based versions of robots. Luckily, mechanical and digital robots and computers will soon help reduce if not eliminate the need for people taught to behave like them.
We'll still need to overcome the fear and even disgust evoked when robot designs bring us closer and closer to the "uncanny valley," in which robots and things demonstrate almost-human qualities without quite reaching them. This is true for computer animation, zombies and even prosthetic hands. But we may be approaching the valley from both ends. If you've ever modified your voice to be understood by a voice-recognition system on the phone, you understand how, as humans, we can edge into the uncanny valley ourselves.
There are a number of theories about why we feel this revulsion, but I think it has something to do with human beings feeling they're special--a kind of existential ego. This may have monotheistic roots. Right around the time Western factory workers were smashing robots with sledgehammers, Japanese workers were putting hats on the same robots in factories and giving them names. On April 7, 2003, Astro Boy, the Japanese robot character, was registered as a resident of the city of Niiza, Saitama.
If these anecdotes tell us anything, it's that animist religions may have less trouble dealing with the idea that maybe we're not really in charge. If nature is a complex system in which all things--humans, trees, stones, rivers and homes--are all animated in some way and all have their own spirits, then maybe it's okay that God doesn't really look like us or think like us or think that we're really that special.
So perhaps one of the most useful aspects of being alive in the period where we begin to ask this question is that it raises a larger question about the role of human consciousness. Human beings are part of a massively complex system--complex beyond our comprehension. Like the animate trees, stones, rivers and homes, maybe algorithms running on computers are just another part of this complex ecosystem.
As human beings we have evolved to have an ego and believe that there is such a thing as a self, but mostly, that's a self-deception to allow each human unit to work within the parameters of evolutionary dynamics in a useful way. Perhaps the morality that emerges from it is a self-deception of sorts, as well. For all we know, we might just be living in a simulation where nothing really matters. That doesn't mean we shouldn't have ethics and good taste. I just think we can exercise our sense of responsibility in being part of a complex and interconnected system without having to rely on an argument that "I am special." As machines become an increasingly important part of these systems, their prominence will make human arguments about being special increasingly fraught. Maybe that's a good thing.
Perhaps what we think about machines that think doesn't really matter--they will "think," and the system will adapt. As with most complex systems, the outcome is mostly unpredictable. It is what it is and will be what it will be. Most of what we think is going to happen is probably hopelessly wrong, and as we know from climate change, knowing that something is happening and doing something about it often have little in common.
That might sound extremely negative and defeatist, but I'm actually quite optimistic. I believe that the systems are quite adaptive and resilient and that whatever happens, beauty, happiness and fun will persist. Hopefully, human beings will have a role. My guess is that they will.
It turns out that we don't make great robots, but we're very good at doing random and creative things that would be impossibly complex--and probably a waste of resources--to code into a machine. Ideally, our educational system will evolve to more fully embrace our uniquely human strengths, rather than trying to shape us into second-rate machines. Human beings--though not necessarily our current form of consciousness and the linear philosophy around it--are quite good at transforming messiness and complexity into art, culture, and meaning. If we focus on what each of us is best at, I think that humans and machines will develop a wonderful yin-yang sort of relationship, with humans feeding off of the efficiency of our solid-state brethren, while they feed off of our messy, sloppy, emotional and creative bodies and brains.
We are descending not into chaos, as many believe, but into complexity. At the same time that the Internet connects everything outside of us into a vast, seemingly unmanageable system, we find an almost infinite amount of complexity as we dig deeper inside our own biology. We're convinced that our brains run the show, even as our microbiomes alter our drives, desires, and behaviors to support their own reproduction and evolution; in much the same way, it may never be clear who's in charge--us, or our machines. But maybe we've done more damage by believing that humans are special than we possibly could by embracing a more humble relationship with the other creatures, objects, and machines around us.