Joi Ito's Web

Joi Ito's conversation with the living web.

June 2016 Archives

Black and White Gavel in Courtroom - Law Books
Photo by wp paarz via Flickr - CC BY-SA

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work, just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make - a modern version of what philosophers call "The Trolley Problem." The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that society would consider ethical. We might also build a system that allows people to interact with the Artificial Intelligence (AI) and test its ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled-up version of human-in-the-loop machine learning - something that Karthik Dinakar at the Media Lab has been working on and that is emerging as an important part of AI research.

Typically, machines are "trained" by AI engineers using huge amounts of data. The engineers tweak what data is used, how it's weighted, the type of learning algorithm and a variety of parameters to try to create a model that is accurate and efficient, making the right decisions and providing accurate insights. One of the problems is that because AI, or more specifically machine learning, is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts, and the completed model, after the machine is trained, is often tested by experts. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop-and-frisk - obviously, targeted communities will appear to have more crime.
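To make the stop-and-frisk example concrete, here is a minimal sketch (with made-up numbers, purely for illustration) of how a model trained naively on arrest records inherits the bias of the policing rather than the underlying behavior: two regions with identical true crime rates, one patrolled five times as heavily.

```python
import random

random.seed(0)

# Two regions with the SAME underlying crime rate, but region "A"
# is patrolled five times as heavily, so offenses there are far
# more likely to end up in the arrest dataset.
TRUE_CRIME_RATE = 0.10
PATROL_RATE = {"A": 0.50, "B": 0.10}

def generate_arrest_records(n_people_per_region=10_000):
    records = []
    for region, patrol in PATROL_RATE.items():
        for _ in range(n_people_per_region):
            committed = random.random() < TRUE_CRIME_RATE
            # An offense only becomes an arrest record if it is observed.
            observed = committed and random.random() < patrol
            records.append((region, observed))
    return records

def naive_model(records):
    """'Train' by estimating a crime rate per region from arrests alone."""
    counts, arrests = {}, {}
    for region, arrested in records:
        counts[region] = counts.get(region, 0) + 1
        arrests[region] = arrests.get(region, 0) + int(arrested)
    return {r: arrests[r] / counts[r] for r in counts}

model = naive_model(generate_arrest_records())
# The model "learns" that region A is roughly 5x more criminal,
# even though the true crime rates are identical by construction.
print(model)
```

The model is faithful to the data it was given; the data, not the algorithm, carries the bias.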

Human-in-the-loop machine learning is work that tries to create systems that either allow domain experts to do the training, or at least involve them in it, by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective on the data. Karthik calls this process 'lensing': extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all at training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.
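As a rough illustration of the human-in-the-loop idea - this is a generic active-learning sketch, not Karthik's actual lensing method - a model can ask a human expert to label only the examples it is least certain about, so the expert's judgment directly shapes what the machine learns:

```python
class ThresholdModel:
    """Toy 1-D classifier: predicts 1 if x is above a learned threshold."""
    def __init__(self, threshold=0.0):
        self.threshold = threshold

    def predict(self, x):
        return int(x > self.threshold)

    def uncertainty(self, x):
        # Most uncertain near the decision boundary (higher = less sure).
        return -abs(x - self.threshold)

    def update(self, x, label):
        # Nudge the boundary just past a misclassified example.
        if self.predict(x) != label:
            self.threshold = x - 0.01 if label == 1 else x + 0.01

def active_learning_loop(model, unlabeled, ask_expert, budget=5):
    """Query the human expert only on the examples the model is least sure about."""
    for _ in range(budget):
        if not unlabeled:
            break
        query = max(unlabeled, key=model.uncertainty)
        unlabeled.remove(query)
        model.update(query, ask_expert(query))  # the "human in the loop"
    return model

# The "expert" knows the true boundary is 3.0; the model starts at 0.0.
expert = lambda x: int(x > 3.0)
pool = [0.5, 1.0, 2.5, 2.9, 3.1, 4.0, 5.5]
model = active_learning_loop(ThresholdModel(), pool, expert)
print(round(model.threshold, 2))  # ends up close to the true boundary
```

With only five expert labels, the boundary lands between 2.9 and 3.1 - the expert's effort goes exactly where the model needs it.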

At a recent meeting with philosophers, clergy, and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data, and it's quite reasonable to assume that decisions judges make, such as bail amounts or parole, could be made much more accurately by machines than by humans. In addition, there is research showing that expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after its lunch break has a significant effect on the outcome, for instance. (There have been some critiques of the study cited in this article, and the authors of the paper have responded to them.)

In the discussion, some of us proposed the idea of replacing judges with machines for certain kinds of decisions - bail and parole, for example. The philosophers and several clergy explained that while it might feel right from a utilitarian perspective, it was important to society that judges be human - even more important than getting the "correct" answer. Putting aside the argument about whether we should be optimizing for utility, having the buy-in of the public would be important for the acceptance of any machine learning system, and it would be essential to address this perspective.

There are two ways we could address this concern. One would be to put a "human in the loop" and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experience in several other fields, such as medicine or flying airplanes, has shown that humans may overrule machines with the wrong decision often enough that it would make sense to prevent humans from overruling machines in some cases. It's also possible that a human would become complacent, conditioned to trust the results, and just let the machine run the system.

The second way would be for the machine to be trained by the public - society in the loop - in a way that made people feel the machine reliably and fairly represented their, most likely, diverse set of values. This isn't unprecedented - in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power, believe that it represented them, and accept that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being trainable by the public and transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious and complex obstacles, including the fact that, unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain - it's impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.
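A toy sketch of the simplest possible society-in-the-loop signal - purely illustrative, and far simpler than Rahwan's actual methodology - would be to aggregate the public's answers to forced-choice dilemmas and use the majority preference as a training target:

```python
from collections import Counter

# Hypothetical survey data: each tuple is (chosen, rejected) from one
# respondent facing a Moral Machine-style forced choice about what a
# self-driving car should do. The option names are made up.
responses = [
    ("swerve", "stay"), ("stay", "swerve"), ("swerve", "stay"),
    ("swerve", "stay"), ("stay", "swerve"),
]

# Tally which outcome the public endorsed and take the majority
# preference as the training signal for that dilemma.
votes = Counter(chosen for chosen, _ in responses)
policy = votes.most_common(1)[0][0]
print(policy)
```

Real systems would need far richer aggregation - weighting conflicting values, handling minorities, and exposing the result for audit - which is exactly where the hard obstacles described above live.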

If we were able to figure out how to take the input from and then gain the buy-in of the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem - the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that the public, or the government representing the public, was responsible for the behavior and the potential damage caused by a self-driving car, and help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from, and be audited and controlled by, the public may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence - not just figuring out ways to train it to appear ethical.

Credits
  • Iyad Rahwan - The phrase “society in the loop” and many ideas.
  • Karthik Dinakar - Teaching me about “human in the loop” machine learning and being my AI tutor and many ideas.
  • Andrew McAfee - Citation and thinking on parole boards.
  • Natalie Saltiel - Editing.


Copyright xkcd CC BY-NC

Back when I first started blogging, the standard post took about 5 min and was usually written in a hurry after I thought of something to say in the shower. If it had mistakes, I'd add/edit/reblog any fixes.

As my posts have gotten longer and the institutions affected by my posts have gotten bigger, fussier and more necessary to protect - I've started becoming a bit more careful about what I say and how I say it.

Instead of blog first, think later - agile blogging - I now have a process that feels a bit more like blogging by committee. (Actually, it's not as bad as it sounds. You, the reader, are benefiting from better-thought-through blog posts because of this process.)

When I have an idea, I usually hammer out a quick draft, stick it in a Google Doc, and then invite in anyone who might be able to help, including experts, my team working on the particular topic, and editors and communications people. It's a different bunch of people depending on the post, but almost everything I've posted recently is the result of a group effort.

Jeremy Rubin, a recent MIT grad who co-founded the Digital Currency Initiative at MIT mentioned that maybe I should be giving people credit for helping - not that he wouldn't help if he didn't get credit, but he thought that as a general rule, it would be a good idea. I agreed, but I wasn't sure exactly how to do it elegantly. (See what I did here?)

I'm going to start adding contributors at the bottom of blog posts as sort of a "credits" section, but if anyone has any good examples or thoughts on how to give people credit for helping edit and contributing ideas to a post or an informal paper like my posts on my blog and pubpub, I'd really like to see them.

Credits
  • Jeremy Rubin came up with the idea.
  • I wrote this all by myself.

Leafy bubble
Photo by Martin Thomas via Flickr - CC-BY

In 2015, I wrote a blog post about how I thought that Bitcoin was similar in many ways to the Internet. The metaphor that I used was that Bitcoin was like email - the first killer app - and that the Bitcoin Blockchain was like The Internet - the infrastructure that was deployed to support it but that could be used for so many other things. I suggested that The Blockchain was to finance and law what the Internet was to media and advertising.

I still believe this is true, but the industry is out over its skis. Over a billion dollars has been invested in Bitcoin and fintech startups, tracking and exceeding investment in Internet companies in 1996. Many of the businesses look like startups from that period, but instead of pets.com, we have blockchain for X. I don't think today's blockchain is the Internet in 1996 - it's probably more like the Internet in 1990 or the late 80's: we haven't agreed on the IP protocol, and there is no Cisco or PSINet. Many of the application-layer companies are building on an infrastructure that isn't ready from a stability or scalability perspective, and they are either a bad idea or a good idea too early. Also, very few people actually understand the necessary combination of cryptography, security, finance and computer science to design these systems. Those who do are part of a very small community, and there are not enough of them to support the $1bn castle we are building on this immature infrastructure. Lastly, unlike content on the Internet, the assets that the blockchain will be moving around, and the irreversibility of many of its elements, do not lend the blockchain to the same level of agile software development - throw stuff out and see what sticks - that we can do for web apps and services.

There are startups and academics working on these basic layers, but I wish there were more. I have a feeling that we might be in a bit of a bubble and that bubble might pop or have a correction, but in the long run, hopefully we'll figure out the infrastructure and will be able to build something decentralized and open. Maybe a bubble pop will get rid of some of the noise from the system and let us focus like the first dot-com bust did for the Internet. On the other hand, we could end up with a crappy architecture and a bunch of fintech apps that don't really do much more than make existing things more efficient. We are at an important moment where decisions will be made about whether everyone will trust a truly decentralized system and where irresponsible deployments could scare people away. I think that as a community we need to increase our collaboration and diligently eliminate bugs and bad designs without slowing down innovation and research.

Instead of building apps, we need to be building the infrastructure. It's unclear whether we will end up with some version of Bitcoin becoming "The Internet" or whether some other project like Ethereum becomes the single standard. It's also possible that we end up with a variety of different systems that somehow interoperate. The worst case would be that we focus so much on the applications that we ignore the infrastructure, miss out on the opportunity to build a truly decentralized system, and end up with a system that resembles mobile Internet instead of wired Internet - one controlled by monopolies that charge you by the megabyte and have impossibly expensive roaming fees versus the flat fee and reasonable cost of wired Internet in most places.

There are many pieces of the infrastructure that need to be designed and tested. There are many ideas for different consensus protocols - the way in which a particular blockchain makes its public ledger tamper-proof and secure. Then there are arguments about how much scriptability should be built into the blockchain itself versus into a layer above it - there are good arguments on both sides. There is also the issue of privacy and anonymity versus identity and regulatory controls.
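The tamper-proof property that consensus protocols protect rests on a simple structure worth seeing in code. Here is a minimal, hypothetical hash-chain sketch (not any real blockchain's implementation): each block commits to the hash of the previous block, so editing history invalidates every later link. Real systems add proof-of-work or voting on top of this to decide whose chain wins.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    """Each new block commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})
    return chain

def verify(chain):
    """Recompute every link; any edit to history breaks a link."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for tx in ["alice->bob:5", "bob->carol:2", "carol->dan:1"]:
    append_block(chain, tx)

assert verify(chain)                        # intact chain checks out
chain[0]["data"] = "alice->mallory:500"     # tamper with old history
print(verify(chain))                        # False: the chain detects it
```

Consensus protocols are about the harder question this sketch skips: getting many mutually distrustful parties to agree on which valid chain is the real one.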

It looks like the Bitcoin Core developer team is making headway on Segregated Witness, which should address many concerns, including some of the scaling issues people have had. On the other hand, Ethereum, which has less history but a powerful and easier-to-use scripting/programming system, is getting a lot of traction and interest from people trying to design new uses for the blockchain. Other projects, like Hyperledger, are designing their own blockchain systems as well as code that is blockchain agnostic.

The Internet works because we have clear layers of open standards. TCP/IP, for instance, won over ATM - a competing standard in some ways - because it turned out that the end-to-end principle, where the core of the network is super-simple and "dumb," allowed the edges of the network to be very innovative. It took a while for the battle between the standards to play out to the point where TCP/IP was the clear winner, and a lot of investment in ATM-driven technology ended up being wasted. The problem with the blockchain is that we don't even know where the layers should be or how we will manage the process of agreeing on the standards.

The (Ethereum) Decentralized Autonomous Organization project, or "The DAO," is one of the more concerning projects I see right now.* The idea is to create "entities" that are written in code on Ethereum. These entities can sell units similar to shares in a company, invest and spend the money, and operate much like a fund or a corporation. Investors would look at the code, determine whether they thought the entity made sense, and buy tokens hoping for a return. This sounds like something from a science fiction novel, and we all dreamed about these sorts of things when, as cypherpunks in the early 90's, we dared to dream on mailing lists and at hacker meetups. The problem is, The DAO has attracted over $150M from investors and is "real," but it is built on top of Ethereum, which hasn't been tested as much as Bitcoin and is still working out its consensus protocol - even considering a completely new consensus protocol for its next version.

It appears that The DAO hasn't been fully described legally and may expose its investors to liabilities as partners in a partnership. Unlike contracts written by lawyers in English, if you screw up the code of a DAO, it's unclear how you could change it easily. Courts can deal with mistakes in contract language by trying to determine the intent, but in code enforced by distributed consensus rules, there is no such mechanism. Also, code can be attacked by malicious code, and there is a risk that a bug could create vulnerabilities. Recently, Dino Mark, Vlad Zamfir, and Emin Gün Sirer - key developers and researchers - published "A Call for a Temporary Moratorium on The DAO" describing vulnerabilities in The DAO. I fear that The DAO also raises red flags for a variety of regulators that we probably don't want at the table right now. The DAO could be the Mt. Gox of Ethereum - a project whose failure causes many people to lose their money and prompts the public and regulators to try to slam the brakes on blockchain development.
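To see why a bug in code-as-contract is so dangerous, here is a hedged Python simulation of the reentrancy class of vulnerability - an illustration only, not Solidity and not The DAO's actual code. The simulated fund pays out before updating its books, so a malicious callback can withdraw again and again before its balance is ever zeroed:

```python
class VulnerableFund:
    """Toy 'contract': pays out BEFORE updating the caller's balance."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.vault = sum(balances.values())

    def withdraw(self, who, notify):
        amount = self.balances[who]
        if amount > 0 and self.vault >= amount:
            self.vault -= amount
            notify(who, amount)        # external call happens first...
            self.balances[who] = 0     # ...bookkeeping happens too late

fund = VulnerableFund({"attacker": 10, "victim": 90})
stolen = []

def malicious_notify(who, amount):
    stolen.append(amount)
    # Re-enter withdraw() before the balance has been reset to zero.
    if fund.vault >= fund.balances[who] and fund.balances[who] > 0:
        fund.withdraw(who, malicious_notify)

fund.withdraw("attacker", malicious_notify)
print(sum(stolen))   # the attacker drains the whole 100-unit vault
```

The fix is as simple as the bug: update the balance before making the external call. But with code enforced by consensus, there may be no court and no easy upgrade path to apply even a one-line fix.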

Regardless of whether I rain on the parade, I'm sure that startups and investors in this space will continue to barrel forward, but I believe that as many of us as possible should focus on the infrastructure and the opportunities at the lowest layers of this stack we are trying to build. I think that getting the consensus protocol right, trying to figure out how to keep things decentralized, how to deal with the privacy issues without causing over-regulation, how we might completely reinvent the nature of money and accounting - these are the things that are exciting and important to me.

I believe there are some exciting areas for businesses to begin working and exploring practical applications - securitization of things that currently suffer a market failure, such as solar panels in developing countries, or applications, such as trade finance, where a lack of trust and standardized systems creates a very inefficient market.

Central banks and governments have begun exploring innovations as well. The Singapore government is considering issuing government bonds on a blockchain. Some papers have imagined central banks taking deposits and issuing digital cash directly to individuals. Some regulators have begun to plan sandboxes that allow people to innovate and test ideas in regulatory safety zones. It is ironic that some of the more interesting innovations may come from experiments by governments, given that Bitcoin was initially designed to avoid them. Having said that, governments are probably more likely to hinder than to help the development of a robust decentralized architecture.


* Just a few days after this post, The DAO was "attacked," as I feared. Here's an interesting post by the alleged "attacker." Reddit quickly determined that the signature in that post wasn't valid. There is also another post by the alleged attacker claiming that they're bribing the miners not to fork. Whether these are actually from the attacker or from epic trolls, they make very interesting arguments.


Minerva Priory library
The library at the Minerva Priory, Rome, Italy.

I recently participated in a meeting of technologists, economists, and European philosophers and theologians. Other attendees included Andrew McAfee, Erik Brynjolfsson, Reid Hoffman, Sam Altman, and Father Eric Salobir. One of the interesting things about this particular meeting for me was having a theological (in this case Christian) perspective in our conversation. Among other things, we discussed artificial intelligence and the future of work.

The question of how machines will replace human beings and put many people out of work is well worn but persistently significant. Sam Altman and others have argued that the total increase in productivity will create an economic abundance that will enable us to pay a universal "basic income" to those who are unemployed. Brynjolfsson and McAfee have suggested a "negative income tax" - a supplement instead of a tax for low-income workers - that would help with financial redistribution without disrupting the other important outcomes generated by the practice of work.
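The arithmetic of a negative income tax is simple enough to sketch. The cutoff and phase-out rate below are illustrative numbers, not any proposed policy: below the cutoff, the "tax" flips sign and becomes a supplement proportional to the shortfall, phasing out gradually so there is still an incentive to earn more.

```python
# Illustrative parameters (not a real policy proposal).
CUTOFF = 30_000      # income level at which the supplement fully phases out
PHASE_OUT = 0.50     # fraction of the shortfall paid out as a supplement

def nit_supplement(income):
    """Supplement paid to a worker under this toy negative income tax."""
    return max(0.0, (CUTOFF - income) * PHASE_OUT)

for income in (0, 10_000, 30_000, 50_000):
    # e.g. someone earning 10,000 gets 0.5 * (30,000 - 10,000) = 10,000
    print(income, nit_supplement(income))
```

Because the supplement shrinks by only fifty cents for each extra dollar earned, a recipient always ends up with more total income by working more - one of the design points in its favor over a flat benefit with a hard cliff.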

Those supporting the negative income tax recognize that the importance of work is not just the income derived from it, but also the anchor that it affords us both socially and psychologically. Work provides a sense of purpose as well as a way to earn social status. The places we work give us both the opportunity for socializing as well as the structure that many of us need to feel productive and happy.

So while AI and other technologies may someday create a productivity abundance that allows us to eliminate the financial need to work, we will still need to find ways to obtain the social status - as well as a meaningful purpose - we get from work. There are many people in our society who work but aren't paid. One of the largest groups is stay-at-home men and women whose work is to care for their homes and children. Their labor is not currently counted toward GDP, and they often do not earn the social status and value they deserve. Could we somehow change the culture and create mechanisms and institutions that provide dignity and social status to people who don't earn money? In some ways, academia, religious institutions and non-profit service organizations have some of this structure: social status and dignity that isn't driven primarily by money. Couldn't there be a way to extend this value structure more broadly?

And how about creative communities? Why couldn't we develop some organizing principle that would allow amateur writers, dancers or singers to define success by measures other than financial returns? Could this open up creative roles in society beyond the small sliver of professionals who can be supported by the distribution and consumption by the mass media? Could we make "starving artist" a quaint metaphor of the past? Can we disassociate the notion of work from productivity as it has been commonly understood and accepted? Can "inner work" be considered more fruitful when seen in light of thriving and eudaemonia?

Periclean Athens seems to be a good example of a moral society where people didn't need to work to be engaged and productive.* Could we imagine a new age where our self-esteem and shared societal value are not associated with financial success or work as we know it? Father Eric asks, "What does it mean to thrive?" What is our modern-day eudaemonia? We don't know. But we do know that whatever it is, it will require a fundamental cultural change: change that is difficult, but not impossible. A good first step would be to begin work on our culture alongside our advances in technology and financial innovation so that the future looks more like Periclean Athens than a world of disengaged kids with nothing to do. If it was moral values and virtues that allowed Periclean Athens to function, how might we develop them in time for a world without work as we currently know it?



* There were many slaves in Periclean Athens. For the future machine age, will we need to be concerned about the rights of machines? Will we be creating a new class of robot slaves?

Credits
  • Reid Hoffman - Ideas
  • Erik Brynjolfsson - Ideas
  • Andrew McAfee - Ideas
  • Tenzin Priyadarshi - Ideas
  • Father Eric Salobir - Ideas
  • Ellen Hoffman - Editing
  • Natalie Saltiel - Editing

Great_Dome,_MIT_-_IMG_8390.JPG
Photo by Daderot [Public domain], via Wikimedia Commons

When I was first appointed as the director of the MIT Media Lab, The New York Times said it was an "unusual choice" - which it was, since my highest academic degree was my high school diploma and, in fact, I had dropped out of undergraduate programs at both Tufts and the University of Chicago, as well as a doctoral program at Hitotsubashi University in Tokyo.

When first approached about the position, I was given advice that I shouldn't apply considering my lack of a degree. Months later, I was contacted again by Nicholas Negroponte, who was on the search committee, and who invited me to visit MIT for interviews. Turns out they hadn't come up with a final candidate from the first list.

The interviews with the faculty, students and staff went well - two of the most exciting days of my life - although quite painful as well, as a major earthquake struck Japan the night between the two days. In so many ways, those two days are etched into my mind.

The committee got back to me quickly. I was their first choice, and needed to come back and have meetings with the School of Architecture + Planning Dean Adele Santos, and possibly the provost (now MIT president) Rafael Reif, since I was such an unorthodox candidate. When I sat down to meet with Rafael in his fancy office, he gave me a bit of a "what are you doing here?" look and asked, "How can I help you?" I explained the unusual circumstance of my candidacy. He smiled and said, "Welcome to MIT!" in the warm and welcoming way he treats everyone.

As the director of the Media Lab, my job is to oversee the operations and research of the Lab. At MIT, the norm is for research labs and academic programs to be separated - like church and state - but the Media Lab is unique in that it has "its own" academic Program in Media Arts and Sciences within the School of Architecture + Planning, which is tightly linked to the research.

Since its inception, the Lab has always emphasized hands-on research: learning by doing, demoing and deploying our works rather than just publishing. The academic program is led by a faculty member, currently Pattie Maes, with whom I work very closely.

My predecessor, as well as Nicholas, the lab's founding director, both had faculty appointments. However, in my case, due to the combination of my not knowing any better and the Institute not being sure about whether I had the chops to advise students and be sufficiently academic, I was not given the faculty position when I joined.

In most cases, it didn't matter. I participated in all of the faculty meetings and, except on rare occasions, was made to feel completely empowered and supported. The only awkward moments were when I was mistakenly addressed as "Professor Ito," or when, after explaining my position to academics from other universities, I had to endure responses like, "Oh! I thought you were faculty but you're on the ADMINISTRATIVE side of the house!"

So I didn't feel like I NEEDED to be a professor. When I was offered the opportunity to submit a proposal to become a professor, I wasn't sure exactly how it would help. I asked a few of my mentors and they said that it would allow me to have a life at MIT after I was no longer Lab director. Frankly, I can't imagine ever leaving my role as director of the Lab, but that was a nice option. Also, becoming a professor makes me more formally part of the Institute itself. It is a vote of confidence since it requires approval by the academic council.

I am not interested in starting my own research group; rather, I have always viewed the entire Media Lab itself as my "research group," as well as my passion. However, as I help start new initiatives and support faculty, I have, from time to time, become more involved in thinking and doing things that require a more academic frame of mind. Lastly, I have begun to have more opinions about the academic program at the Media Lab and more broadly at MIT. Becoming a faculty member gives me a much better position from which to express these opinions.

With these thoughts in mind - and with advice from my wise mentors - I requested, and today received, appointment as a member of the MIT faculty, as a Professor of the Practice in Media Arts and Sciences.

I still remember when I used to argue with my sister, a double PhD, researcher, and faculty member, calling her "academic" as a derogatory term. I remember many people warning me when I took the role as the director of the Media Lab that I wouldn't fit in or that I'd get sick of it. I've now been at MIT approximately five years - longer than I've been at any other job - (and my sister, Mimi, is now an entrepreneur.) I feel like I've finally found my true calling and am happier than I've ever been with my work, my community and the potential for growth and impact for myself and the community in which I serve.

So thank you MIT and all of my mentors, peers, students, staff, and friends who have supported me so far. I look forward to continuing this journey to see where it goes.

I've posted the research statement that I submitted to MIT for the promotion case.

The appointment is effective July 1, 2016.