Joi Ito's Web

Joi Ito's conversation with the living web.

Recently in the Artificial Intelligence Category

Designing our Complex Future with Machines

While I had long been planning to write a manifesto against the technological singularity and launch it into the conversational sphere for public reaction and comment, the thoughts contained herein were also shaped by an invitation earlier this year from John Brockman to read and discuss The Human Use of Human Beings by Norbert Wiener with him and his illustrious group of thinkers as part of an ongoing collaborative book project.

The essay below is now phase 1 of an experimental, open publishing project in partnership with the MIT Press. In phase 2, a new version of the essay, enriched and informed by input from open commentary, will be published online, along with essay-length contributions by others inspired by the seed essay, as a new issue of the Journal of Design and Science. In phase 3, a revised and edited selection of these contributions will be published as a print book by the MIT Press.

Version 1.0

Cross-posted from the Journal of Design and Science, where a number of essays have been written in response and where competition-winning peer-reviewed essays will be compiled into a book to be published by MIT Press.


Nature's ecosystem provides us with an elegant example of a complex adaptive system where myriad "currencies" interact and respond to feedback systems that enable both flourishing and regulation. This collaborative model--rather than a model of exponential financial growth or the Singularity, which promises the transcendence of our current human condition through advances in technology--should provide the paradigm for our approach to artificial intelligence. More than 60 years ago, MIT mathematician and philosopher Norbert Wiener warned us that "when human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood." We should heed Wiener's warning.

INTRODUCTION: THE CANCER OF CURRENCY

As the sun beats down on Earth, photosynthesis converts water, carbon dioxide and the sun's energy into oxygen and glucose. Photosynthesis is one of the many chemical and biological processes that transform one form of matter and energy into another. These molecules then get metabolized by other biological and chemical processes into yet other molecules. Scientists often call these molecules "currencies" because they represent a form of power that is transferred between cells or processes to mutual benefit--"traded," in effect. The biggest difference between these and financial currencies is that there is no "master currency" or "currency exchange." Rather, each currency can only be used by certain processes, and the "market" of these currencies drives the dynamics that are "life."

As certain currencies became abundant as an output of a successful process or organism, other organisms evolved to take that output and convert it into something else. Over billions of years, this is how the Earth's ecosystem has evolved, creating vast systems of metabolic pathways and forming highly complex self-regulating systems that, for example, stabilize our body temperatures or the temperature of the Earth, despite continuous fluctuations and changes among the individual elements at every scale--from micro to macro. The output of one process becomes the input of another. Ultimately, everything interconnects.

We live in a civilization in which the primary currencies are money and power--where more often than not, the goal is to accumulate both at the expense of society at large. This is a very simple and fragile system compared to the Earth's ecosystems, where myriads of "currencies" are exchanged among processes to create hugely complex systems of inputs and outputs with feedback systems that adapt and regulate stocks, flows, and connections.

Unfortunately, our current human civilization does not have the built-in resilience of our environment, and the paradigms that set our goals and drive the evolution of society today have set us on a dangerous course which the mathematician Norbert Wiener warned us about decades ago. The paradigm of a single master currency has driven many corporations and institutions to lose sight of their original missions. Values and complexity are focused more and more on prioritizing exponential financial growth, led by for-profit corporate entities that have gained autonomy, rights, power, and nearly unregulated societal influence. The behavior of these entities is akin to that of cancers. Healthy cells regulate their growth and respond to their surroundings, even eliminating themselves if they wander into an organ where they don't belong. Cancerous cells, on the other hand, optimize for unconstrained growth and spread with disregard to their function or context.

THE WHIP THAT LASHES US

The idea that we exist for the sake of progress, and that progress requires unconstrained and exponential growth, is the whip that lashes us. Modern companies are the natural product of this paradigm in a free-market capitalist system. Norbert Wiener called corporations "machines of flesh and blood" and automation "machines of metal." The new species of Silicon Valley mega companies--the machines of bits--are developed and run in great part by people who believe in a new religion, Singularity. This new religion is not a fundamental change in the paradigm, but rather the natural evolution of the worship of exponential growth applied to modern computation and science. The asymptote of the exponential growth of computational power is artificial intelligence.1

The notion of Singularity--that AI will supersede humans with its exponential growth, and that everything we have done until now and are currently doing is insignificant--is a religion created by people who have the experience of using computation to solve problems heretofore considered impossibly complex for machines. They have found a perfect partner in digital computation--a knowable, controllable system of thinking and creating that is rapidly increasing in its ability to harness and process complexity, bestowing wealth and power on those who have mastered it. In Silicon Valley, the combination of groupthink and the financial success of this cult of technology has created a positive feedback system that has very little capacity for regulating through negative feedback. While they would resist having their beliefs compared to a religion and would argue that their ideas are science- and evidence-based, those who embrace Singularity engage in quite a bit of arm waving and make leaps of faith based more on trajectories than ground-truths to achieve their ultimate vision.

Singularitarians believe that the world is "knowable" and computationally simulatable, and that computers will be able to process the messiness of the real world just like they have every other problem that everyone said couldn't be solved by computers. To them, this wonderful tool, the computer, has worked so well for everything so far that it must continue to work for every challenge we throw at it, until we have transcended known limitations and ultimately achieve some sort of reality escape velocity. Artificial intelligence is already displacing humans in driving cars, diagnosing cancers, and researching court documents. The idea is that AI will continue this progress and eventually merge with human brains and become an all-seeing, all-powerful, super-intelligence. For true believers, computers will augment and extend our thoughts into a kind of "amortality." (Part of Singularity is a fight for "amortality," the idea that while one may still die and not be immortal, the death is not the result of the grim reaper of aging.)

But if corporations are a precursor to our transcendence, the Singularitarian view that with more computing and bio-hacking we will somehow solve all of the world's problems, or that the Singularity will solve us, seems hopelessly naive. As we dream of the day when we have enhanced brains and amortality and can think big, long thoughts, corporations already have a kind of "amortality." They persist as long as they are solvent and they are more than a sum of their parts--arguably an amortal super-intelligence.

More computation does not make us more "intelligent," only more computationally powerful.

For Singularity to have a positive outcome requires a belief that, given enough power, the system will somehow figure out how to regulate itself. The final outcome would be so complex that while we humans couldn't understand it now, "it" would understand and "solve" itself. Some believe in something that looks a bit like the former Soviet Union's master planning but with full information and unlimited power. Others have a more sophisticated view of a distributed system, but at some level, all Singularitarians believe that with enough power and control, the world is "tamable." Not all who believe in Singularity worship it as a positive transcendence bringing immortality and abundance, but they do believe that a judgment day is coming when all curves go vertical.

Whether you are on an S-curve or a bell curve, the beginning of the slope looks a lot like an exponential curve. To systems dynamics people, an exponential curve shows self-reinforcement, i.e., a positive feedback curve without limits. Maybe this is what excites Singularitarians and scares systems people. Most people outside the Singularity bubble believe in S-curves, namely that nature adapts and self-regulates and that even pandemics will run their course. Pandemics may cause an extinction event, but growth will slow and things will adapt. They may not be in the same state, and a phase change could occur, but the notion of Singularity--especially as some sort of savior or judgment day that will allow us to transcend the messy, mortal suffering of our human existence--is fundamentally a flawed one.
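To make the curve-shape point concrete, here is a minimal sketch in Python (with illustrative parameters of my own choosing, not anything from the essay) comparing pure exponential growth with a logistic S-curve: near the start the two are nearly indistinguishable, and only later does the S-curve's negative feedback bend it toward a limit.

```python
import numpy as np

# Illustrative parameters, chosen only for this sketch.
r, K = 0.5, 100.0                 # growth rate and carrying capacity
t = np.linspace(0, 20, 201)

exponential = np.exp(r * t)                       # positive feedback without limits
logistic = K / (1 + (K - 1) * np.exp(-r * t))     # same early slope, self-regulating

early = t < 2
print("max gap while t < 2:", np.max(np.abs(exponential[early] - logistic[early])))
print("at t = 20:", exponential[-1], "vs", logistic[-1])  # divergence vs saturation
```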

This sort of reductionist thinking isn't new. When B.F. Skinner discovered the principle of reinforcement and was able to describe it, we designed education around his theories. Learning scientists now know that behaviorist approaches only work for a narrow range of learning, but many schools continue to rely on drill and practice. Take, as another example, the eugenics movement, which greatly and incorrectly over-simplified the role of genetics in society. This movement helped fuel the Nazi genocide by providing a reductionist scientific view that we could "fix humanity" by manually pushing natural selection. The echoes of the horrors of eugenics exist today, making almost any research trying to link genetics with things like intelligence taboo.

We should learn from our history of applying over-reductionist science to society and try to, as Wiener says, "cease to kiss the whip that lashes us." While it is one of the key drivers of science--to elegantly explain the complex and reduce confusion to understanding--we must also remember what Albert Einstein said: "Everything should be made as simple as possible, but no simpler."2 We need to embrace the unknowability--the irreducibility--of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar with.

WE ARE ALL PARTICIPANTS

The Cold War era, when Wiener was writing The Human Use of Human Beings, was a time defined by the rapid expansion of capitalism and consumerism, the beginning of the space race, and the coming of age of computation. It was a time when it was easier to believe that systems could be controlled from the outside, and that many of the world's problems would be solved through science and engineering.

The cybernetics that Wiener primarily described during that period were concerned with feedback systems that can be controlled or regulated from an objective perspective. This so-called first-order cybernetics assumed that the scientist as the observer can understand what is going on, therefore enabling the engineer to design systems based on observation or insight from the scientist.

Today, it is much more obvious that most of our problems--climate change, poverty, obesity and chronic disease, or modern terrorism--cannot be solved simply with more resources and greater control. That is because they arise from complex adaptive systems, often themselves the product of the tools used to solve problems in the past, such as endlessly increasing productivity and attempts to control things. This is where second-order cybernetics comes into play--the cybernetics of self-adaptive complex systems, where the observer is also part of the system itself. As Kevin Slavin says in Design as Participation, "You're Not Stuck In Traffic--You Are Traffic."3

In order to effectively respond to the significant scientific challenges of our times, I believe we must view the world as many interconnected, complex, self-adaptive systems across scales and dimensions that are unknowable and largely inseparable from the observer and the designer. In other words, we are participants in multiple evolutionary systems with different fitness landscapes4 at different scales, from our microbes to our individual identities to society and our species. Individuals themselves are systems composed of systems of systems, such as the cells in our bodies that behave more like system-level designers than we do.

While Wiener does discuss biological evolution and the evolution of language, he doesn't explore the idea of harnessing evolutionary dynamics for science. Biological evolution of individual species (genetic evolution) has been driven by reproduction and survival, instilling in us goals and yearnings to procreate and grow. That system continually evolves to regulate growth, increase diversity and complexity, and enhance its own resilience, adaptability, and sustainability.5 As designers with growing awareness of these broader systems, we have goals and methodologies defined by the evolutionary and environmental inputs from our biological and societal contexts. But machines with emergent intelligence have discernibly different goals and methodologies. As we introduce machines into the system, they will not only augment individual humans, but they will also--and more importantly--augment complex systems as a whole.

Here is where the problematic formulation of "artificial intelligence" becomes evident, as it suggests forms, goals and methods that stand outside of interaction with other complex adaptive systems. Instead of thinking about machine intelligence in terms of humans vs. machines, we should consider the system that integrates humans and machines--not artificial intelligence, but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems. And we must question and adapt our own purpose and sensibilities as designers and components of the system for a much more humble approach: Humility over Control.

We could call it "participant design"--design of systems as and by participants--that is more akin to increasing a flourishing function, where flourishing is a measure of vigor and health rather than scale or power. We can measure the ability of systems to adapt creatively, as well as their resilience and their ability to use resources in an interesting way.

Better interventions are less about solving or optimizing and more about developing a sensibility appropriate to the environment and the time. In this way they are more like music than an algorithm. Music is about a sensibility or "taste" with many elements coming together into a kind of emergent order. Instrumentation can nudge or cause the system to adapt or move in an unpredictable and unprogrammed manner, while still making sense and holding together. Using music itself as an intervention is not a new idea; in 1707, Andrew Fletcher, a Scottish writer and politician, said, "Let me make the songs of a nation, I care not who makes its laws."

If writing songs instead of laws feels frivolous, remember that songs typically last longer than laws, have played key roles in various hard and soft revolutions, and end up being transmitted person-to-person along with the values they carry. It's not about music or code. It's about trying to effect change by operating at the level songs do. This is articulated by Donella Meadows, among others, in her book Thinking in Systems.

Meadows, in her essay Leverage Points: Places to Intervene in a System, describes how we can intervene in a complex, self-adaptive system. For her, interventions that involve changing parameters or even changing the rules are not nearly as powerful or as fundamental as changes in a system's goals and paradigms.

When Wiener discussed our worship of progress, he said:

Those who uphold the idea of progress as an ethical principle regard this unlimited and quasi-spontaneous process of change as a Good Thing, and as the basis on which they guarantee to future generations a Heaven on Earth. It is possible to believe in progress as a fact without believing in progress as an ethical principle; but in the catechism of many Americans, the one goes with the other.6

Instead of discussing "sustainability" as something to be "solved" in the context of a world where bigger is still better and more than enough is NOT too much, perhaps we should examine the values and the currencies of the fitness functions7 and consider whether they are suitable and appropriate for the systems in which we participate.

CONCLUSION: A CULTURE OF FLOURISHING

Developing a sensibility and a culture of flourishing, and embracing a diverse array of measures of "success" depend less on the accumulation of power and resources and more on diversity and the richness of experience. This is the paradigm shift that we need. This will provide us with a wealth of technological and cultural patterns to draw from to create a highly adaptable society. This diversity also allows the elements of the system to feed each other without the exploitation and extraction ethos created by a monoculture with a single currency. It is likely that this new culture will spread as music, fashion, spirituality or other forms of art.

As a native of Japan, I am heartened by a group of junior high school students I spoke to there recently who, when I challenged them about what they thought we should do about the environment, asked questions about the meaning of happiness and the role of humans in nature. I am likewise heartened to see many of my students at the MIT Media Lab and in the Principles of Awareness class that I co-teach with the Venerable Tenzin Priyadarshi using a variety of metrics (currencies) to measure their success and meaning and grappling directly with the complexity of finding one's place in our complex world.

I'm also heartened by organizations such as the IEEE, which is initiating design guidelines for the development of artificial intelligence around human wellbeing instead of around economic impact. The work by Peter Seligman, Christopher Filardi, and Margarita Mora from Conservation International is creative and exciting because it approaches conservation by supporting the flourishing of indigenous people--not undermining it. Another heartening example is that of the Shinto priests at Ise Shrine, who have been planting and rebuilding the shrine every twenty years for the last 1300 years in celebration of the renewal and the cyclical quality of nature.

In the 1960s and 70s, the hippie movement tried to pull together a "whole earth" movement, but then the world swung back toward the consumer and consumption culture of today. I hope and believe that a new awakening will happen and that a new sensibility will cause a nonlinear change in our behavior through a cultural transformation. While we can and should continue to work at every layer of the system to create a more resilient world, I believe the cultural layer is the layer with the most potential for a fundamental correction away from the self-destructive path that we are currently on. I think that it will yet again be about the music and the arts of the young people reflecting and amplifying a new sensibility: a turn away from greed to a world where "more than enough is too much," and we can flourish in harmony with Nature rather than through the control of it.



1. An asymptote is a line that continually approaches a given curve but does not meet it at any finite distance. In Singularity thinking, this is the vertical line that the exponential growth curve approaches as it goes vertical. There are more arguments among believers about where this asymptote is than about whether it is actually coming.

2. This is a common paraphrase. What Einstein actually said was, "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience."

3. Western philosophy and science are "dualistic" as opposed to the more "Eastern" non-dualistic approach. A whole essay could be written about this, but the idea of a subject/object or a designer/designee is partially linked to the notion of self in Western philosophy and religion.

4. Fitness landscapes arise when you assign a fitness value to every genotype. The genotypes are arranged in a high-dimensional sequence space. The fitness landscape is a function on that sequence space. In evolutionary dynamics, a biological population moves over a fitness landscape driven by mutation, selection and random drift. (Nowak, M. A. Evolutionary Dynamics: Exploring the Equations of Life. Harvard University Press, 2006.)

5. Nowak, M. A. Evolutionary Dynamics: Exploring the Equations of Life. Harvard University Press, 2006.

6. Norbert Wiener, The Human Use of Human Beings (1954 edition), p.42.

7. A fitness function is a function that is used to summarize, as a measure of merit, how close a solution is to a particular aim. It is used to describe and design evolutionary systems.
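To make footnote 7 concrete, here is a minimal sketch of a toy evolutionary system in Python; the bit-string genome, the all-ones target and the parameters are all invented for illustration, not drawn from Nowak's book.

```python
import random

GENOME_LEN = 20

def fitness(genome):
    # Measure of merit: how close the genotype is to an all-ones target (the "aim").
    return sum(genome) / len(genome)

def mutate(genome, rate=0.05):
    # Random drift: each bit flips with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# A population moving over the fitness landscape via mutation and selection.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                              # selection
    population = survivors + [mutate(g) for g in survivors]  # reproduction + mutation

print("best fitness after 100 generations:", max(map(fitness, population)))
```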

Credits

Review, research and editing team: Catherine Ahearn, Chia Evers, Natalie Saltiel, Andre Uhl

Andre and Karthik both took the Principles of Awareness class that Tenzin Priyadarshi and I taught twice over the last few years. They both independently became interested in connecting the idea of non-duality and artificial intelligence. We'd been Slacking and chatting and thinking about the topic, so I invited Andre over for lunch the other day, Skyped Karthik in from India, and did a Facebook Live about the topic.

The audio is available on iTunes and SoundCloud.

The next step is to write up a short post about the idea. :-)


Danny Hillis is the inventor of the Connection Machine, Co-Founder of the Long Now Foundation and visiting professor at the Media Lab. We were at a dinner recently where Danny asserted that the world could be simulated by a computer. I asked him to come to my office so I could extract this idea from him into a video.

We talked about the ability to simulate the universe digitally, which obviously leads into the future of artificial intelligence, quantum physics, "why are we here" and lots of other interesting questions.

Apologies for the crappy sound and video. My default setup didn't work on the network, so I had to use the camera on my laptop.

I streamed it on Facebook Live and have posted an edited video on YouTube and audio on SoundCloud and iTunes.


Martin Nowak runs the Program for Evolutionary Dynamics at Harvard. At a recent meeting at his lab, I heard him describe the history of life on earth in a fascinating way using evolutionary dynamics. At another meeting over dinner, he and Danny Hillis disagreed on whether you could model the universe on a Turing machine - in other words, whether we can simulate or "run" our brains or the universe digitally.

I decided to ask Martin over to my house to see if I could extract these two stories. I streamed the conversation on Facebook Live and tried to clean it up a bit and posted it on YouTube.

Black and White Gavel in Courtroom - Law Books
Photo by wp paarz via Flickr - CC BY-SA

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work, just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make - a modern version of what philosophers call "The Trolley Problem." The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test the ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled-up version of human-in-the-loop machine learning - something that Karthik Dinakar at the Media Lab has been working on and that is emerging as an important part of AI research.

Typically, machines are "trained" by AI engineers using huge amounts of data. The engineers tweak what data is used, how it's weighted, the type of learning algorithm used and a variety of parameters to try to create a model that is accurate and efficient, makes the right decisions and provides accurate insights. One of the problems is that because AI, or more specifically machine learning, is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts, and the completed model is often tested by experts after the machine is trained. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk - obviously targeted communities will appear to have more crime.
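As a toy illustration of that last point (with made-up numbers, not real crime data), the sketch below simulates two neighborhoods with identical underlying rates, where one is simply observed ten times as often; a model trained on the raw counts would "learn" that the heavily policed neighborhood has more crime.

```python
import random

random.seed(0)

TRUE_RATE = 0.05                      # identical underlying rate in both neighborhoods
stops = {"A": 10_000, "B": 1_000}     # biased data collection: A is patrolled 10x more

recorded = {hood: sum(random.random() < TRUE_RATE for _ in range(n))
            for hood, n in stops.items()}

# Raw counts make A look ten times more "criminal"; the per-stop rates are the same.
print("recorded offenses:", recorded)
print("per-stop rates:", {h: recorded[h] / stops[h] for h in stops})
```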

Human-in-the-loop machine learning is work that is trying to create systems to either allow domain experts to do the training or at least be involved in the training by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective of the data. Karthik calls this process "lensing": extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.
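"Lensing" itself is Karthik's technique and more specific than anything I can reproduce here, but one widely used human-in-the-loop pattern the paragraph gestures at is uncertainty-sampling active learning, where the machine repeatedly asks a domain expert to label the examples it is least sure about. A minimal sketch, with synthetic data and a stubbed-in "expert":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool of unlabeled examples, standing in for real domain data.
X_pool = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)

def expert_label(x):
    # Stub for the domain expert; in a real system this is a person.
    return int(x @ true_w > 0)

# Seed with expert-labeled examples until both classes are present.
labels = {}
i = 0
while len(labels) < 10 or len(set(labels.values())) < 2:
    labels[i] = expert_label(X_pool[i])
    i += 1

model = LogisticRegression()
for _ in range(20):
    idx = list(labels)
    model.fit(X_pool[idx], np.array([labels[j] for j in idx]))

    # Uncertainty sampling: ask about the example closest to the decision boundary.
    proba = model.predict_proba(X_pool)[:, 1]
    proba[idx] = 0.0                         # never re-query labeled examples
    query = int(np.argmin(np.abs(proba - 0.5)))
    labels[query] = expert_label(X_pool[query])

print("expert labels collected:", len(labels))
```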

At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data, and it's quite reasonable to assume that decisions that judges make, such as bail amounts or parole, could be made much more accurately by machines than by humans. In addition, there is research that shows expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after their lunch has a significant effect on the outcome, for instance. (There have been some critiques of the study cited in this article, and the authors of the paper have responded to them.)

In the discussion, some of us proposed the idea of replacing judges for certain kinds of decisions, bail and parole as examples, with machines. The philosopher and several clergy explained that while it might feel right from a utilitarian perspective, for society it was important that the judges were human - it was even more important than getting the "correct" answer. Putting aside the argument about whether we should be solving for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system, and it would be essential to address this perspective.

There are two ways that we could address this concern. One way would be to put a "human in the loop" and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experience in several other fields, such as medicine or flying airplanes, has shown that humans may overrule machines with the wrong decision often enough that it would make sense to prevent humans from overruling machines in some cases. It's also possible that a human would become complacent or conditioned to trust the results and just let the machine run the system.

The second way would be for the machine to be trained by the public - society in the loop - in a way that people felt the machine reliably and fairly represented their, most likely diverse, set of values. This isn't unprecedented - in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power, believe that it represented them, and accept that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being trainable by the public and transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious complex obstacles, including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain - it's impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.
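To make the "test and audit" idea concrete, here is a minimal sketch of one behavioral probe: treating the model as a black box and checking whether its decision is invariant when a protected attribute is flipped while everything else is held fixed. The decision rule, attribute names and thresholds are all hypothetical stand-ins, not any real system.

```python
def model_decision(case: dict) -> str:
    # Stub standing in for an opaque trained model we cannot inspect directly.
    score = case["prior_offenses"] * 2 + (0 if case["employed"] else 1)
    return "release" if score < 3 else "detain"

def counterfactual_probe(case: dict, attribute: str, values) -> bool:
    # True if the decision is unchanged across all values of the attribute.
    decisions = {v: model_decision({**case, attribute: v}) for v in values}
    return len(set(decisions.values())) == 1

test_case = {"prior_offenses": 1, "employed": True, "neighborhood": "A"}
print("decision invariant to neighborhood:",
      counterfactual_probe(test_case, "neighborhood", ["A", "B"]))
```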

If we were able to figure out how to take the input from, and then gain the buy-in of, the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem - the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could they also feel that the public, or the government representing the public, was responsible for its behavior and the potential damage it causes? Could that help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from and be audited and controlled by the public may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence, not just figuring out ways to train it to appear ethical.

Credits
  • Iyad Rahwan - The phrase “society in the loop” and many ideas.
  • Karthik Dinakar - Teaching me about “human in the loop” machine learning and being my AI tutor and many ideas.
  • Andrew McAfee - Citation and thinking on parole boards.
  • Natalie Saltiel - Editing.

Minerva Priory library
The library at the Minerva Priory, Rome, Italy.

I recently participated in a meeting of technologists, economists and European philosophers and theologians. Other attendees included Andrew McAfee, Erik Brynjolfsson, Reid Hoffman, Sam Altman and Father Eric Salobir. One of the interesting things about this particular meeting for me was having a theological (in this case Christian) perspective in our conversation. Among other things, we discussed artificial intelligence and the future of work.

The question about how machines will replace human beings and put many people out of work is well worn but persistently significant. Sam Altman and others have argued that the total increase in productivity will create an economic abundance that will enable us to pay out a universal "basic income" to those who are unemployed. Brynjolfsson and McAfee have suggested a "negative income tax"--a supplement, instead of a tax, for low-income workers--that would help with financial redistribution without disrupting the other important outcomes generated by the practice of work.
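As a rough sketch of the mechanism (a Friedman-style negative income tax with hypothetical threshold and rate, not the specific proposal Brynjolfsson and McAfee make), the supplement phases out as earnings rise, so every extra dollar earned still raises take-home income:

```python
def net_income(income, threshold=30_000, rate=0.5):
    # Hypothetical parameters: below the threshold, workers receive a supplement
    # instead of paying tax; above it, a flat tax applies for comparison.
    if income < threshold:
        return income + rate * (threshold - income)   # supplement phases out
    return income - rate * (income - threshold)       # ordinary (flat) tax

for income in (0, 15_000, 30_000, 60_000):
    print(f"earned {income:>6} -> net {net_income(income):>8,.0f}")
```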

Those supporting the negative income tax recognize that the importance of work is not just the income derived from it, but also the anchor that it affords us both socially and psychologically. Work provides a sense of purpose as well as a way to earn social status. The places we work give us both the opportunity for socializing as well as the structure that many of us need to feel productive and happy.

So while AI and other technologies may some day create a productivity abundance that allows us to eliminate the financial need to work, we will still need to find ways to obtain the social status--as well as a meaningful purpose--that we get from work. There are many people who work in our society who aren't paid. One of the largest groups is stay-at-home men and women whose work it is to care for their homes and children. Their labor is not currently counted toward the GDP, and they often do not earn the social status and value they deserve. Could we somehow change the culture and create mechanisms and institutions that provide dignity and social status to people who don't earn money? In some ways academia, religious institutions and non-profit service organizations have some of this structure: social status and dignity that isn't driven primarily by money. Couldn't there be a way to extend this value structure more broadly?

And how about creative communities? Why couldn't we develop some organizing principle that would allow amateur writers, dancers or singers to define success by measures other than financial returns? Could this open up creative roles in society beyond the small sliver of professionals who can be supported by mass-media distribution and consumption? Could we make "starving artist" a quaint metaphor of the past? Can we disassociate the notion of work from productivity as it has been commonly understood and accepted? Can "inner work" be considered more fruitful when seen in light of thriving and eudaemonia?

Periclean Athens seems to be a good example of a moral society where people didn't need to work to be engaged and productive.* Could we imagine a new age where our self-esteem and shared societal value are not associated with financial success or work as we know it? Father Eric asks, "What does it mean to thrive?" What is our modern-day eudaemonia? We don't know. But we do know that whatever it is, it will require a fundamental cultural change: change that is difficult, but not impossible. A good first step would be to begin work on our culture alongside our advances in technology and financial innovations so that the future looks more like Periclean Athens than a world of disengaged kids with nothing to do. If it was the moral values and virtues that allowed Periclean Athens to function, how might we develop them in time for a world without work as we currently know it?



* There were many slaves in Periclean Athens. For the future machine age, will we need to be concerned about the rights of machines? Will we be creating a new class of robot slaves?

Credits
  • Reid Hoffman - Ideas
  • Erik Brynjolfsson - Ideas
  • Andrew McAfee - Ideas
  • Tenzin Priyadarshi - Ideas
  • Father Eric Salobir - Ideas
  • Ellen Hoffman - Editing
  • Natalie Saltiel - Editing

This year's annual Edge question was "What do you think about machines that think?"

Here's my answer:

"You can't think about thinking without thinking about thinking about something". --Seymour Papert

What do I think about machines that think? It depends on what they're supposed to be thinking about. I am clearly in the camp of people who believe that AI and machine learning will contribute greatly to society. I expect that we'll find machines to be exceedingly good at things that we're not--things that involve massive amounts of data, speed, accuracy, reliability, obedience, computation, distributed networking and parallel processing.

The paradox is that at the same time we've developed machines that behave more and more like humans, we've developed educational systems that push children to think like computers and behave like robots. It turns out that for our society to scale and grow at the speed we now require, we need reliable, obedient, hardworking, physical and computational units. So we spend years converting sloppy, emotional, random, disobedient human beings into meat-based versions of robots. Luckily, mechanical and digital robots and computers will soon help reduce if not eliminate the need for people taught to behave like them.

We'll still need to overcome the fear and even disgust evoked when robot designs bring us closer and closer to the "uncanny valley," in which robots and things demonstrate almost-human qualities without quite reaching them. This is true for computer animation, zombies and even prosthetic hands. But we may be approaching the valley from both ends. If you've ever modified your voice to be understood by a voice-recognition system on the phone, you understand how, as humans, we can edge into the uncanny valley ourselves.

There are a number of theories about why we feel this revulsion, but I think it has something to do with human beings feeling they're special--a kind of existential ego. This may have monotheistic roots. Right around the time Western factory workers were smashing robots with sledgehammers, Japanese workers were putting hats on the same robots in factories and giving them names. On April 7, 2003, Astro Boy, the Japanese robot character, was registered as a resident of the city of Niiza, Saitama.

If these anecdotes tell us anything, it's that animist religions may have less trouble dealing with the idea that maybe we're not really in charge. If nature is a complex system in which all things--humans, trees, stones, rivers and homes--are all animated in some way and all have their own spirits, then maybe it's okay that God doesn't really look like us or think like us or think that we're really that special.

So perhaps one of the most useful aspects of being alive in the period where we begin to ask this question is that it raises a larger question about the role of human consciousness. Human beings are part of a massively complex system--complex beyond our comprehension. Like the animate trees, stones, rivers and homes, maybe algorithms running on computers are just another part of this complex ecosystem.

As human beings we have evolved to have an ego and to believe that there is such a thing as a self, but mostly, that's a self-deception that allows each human unit to work within the parameters of evolutionary dynamics in a useful way. Perhaps the morality that emerges from it is a self-deception of sorts, as well. For all we know, we might just be living in a simulation where nothing really actually matters. It doesn't mean we shouldn't have ethics and good taste. I just think we can exercise our sense of responsibility in being part of a complex and interconnected system without having to rely on an argument that "I am special." As machines become an increasingly important part of these systems, their prominence will make human arguments about being special increasingly fraught. Maybe that's a good thing.

Perhaps what we think about machines that think doesn't really matter--they will "think" and the system will adapt. As with most complex systems, the outcome is mostly unpredictable. It is what it is and will be what it will be. Most of what we think is going to happen is probably hopelessly wrong and as we know from climate change, knowing that something is happening and doing something about it often have little in common.

That might sound extremely negative and defeatist, but I'm actually quite optimistic. I believe that the systems are quite adaptive and resilient and that whatever happens, beauty, happiness and fun will persist. Hopefully, human beings will have a role. My guess is that they will.

It turns out that we don't make great robots, but we're very good at doing random and creative things that would be impossibly complex--and probably a waste of resources--to code into a machine. Ideally, our educational system will evolve to more fully embrace our uniquely human strengths, rather than trying to shape us into second-rate machines. Human beings--though not necessarily our current form of consciousness and the linear philosophy around it--are quite good at transforming messiness and complexity into art, culture, and meaning. If we focus on what each of us is best at, I think that humans and machines will develop a wonderful yin-yang sort of relationship, with humans feeding off of the efficiency of our solid-state brethren, while they feed off of our messy, sloppy, emotional and creative bodies and brains.

We are descending not into chaos, as many believe, but into complexity. At the same time that the Internet connects everything outside of us into a vast, seemingly unmanageable system, we find an almost infinite amount of complexity as we dig deeper inside our own biology. Much as we're convinced that our brains run the show, all while our microbiomes alter our drives, desires, and behaviors to support their own reproduction and evolution, it may never be clear who's in charge--us, or our machines. But maybe we've done more damage by believing that humans are special than we possibly could by embracing a more humble relationship with the other creatures, objects, and machines around us.