
A few weeks ago I was asked to make some remarks at the MIT-Harvard Conference on the Uyghur Human Rights Crisis. When the student organizer of the conference, Zuly Mamat, asked me to speak at the event, I wasn't sure what I would say because I'm definitely not an expert on this topic. But as I dove into researching what is happening to the Uyghur community in China, I realized that it connected to a lot of the themes I have run up against in my own work, particularly the importance of considering the ethical and social implications of technology early in the design and development process. The Uyghur human rights crisis demonstrates how the technology we build, even with the best of intentions, may be used to surveil and harm people. Many of my activities these days are focused on preventing the misuse of technology in the future, but that requires more than just bolting ethicists onto product teams. I think it involves a fundamental shift in our priorities and a redesign of the relationship of the humanities and social sciences with engineering and science in academia and society. As a starting point, I think it is critically important to facilitate conversations about this problem through events like this one. You can view the video of the event here and read my edited remarks below.


Edited transcript.

Hello, I'm Joi Ito, the Director of the MIT Media Lab. I'm probably the least informed about this topic of everyone here, so first of all, I'm very grateful to all of the people who have been working on it and who have helped me get more informed. I'm broadly interested in human rights, its relationship with technology, and the role of Harvard, MIT, and academia in general in intervening in these types of situations. So I want to talk mainly about that.

One of the things to think about, not just in this case but also more broadly, is the role of technology in surveillance and human rights. In the talks today, we've heard about some specific examples of how technology is being used to surveil the Uyghur community in China, but I thought I'd talk about it a little more generally. I specifically want to address the continuing investment in and ascendance of engineering and the sciences in the world, through ventures like MIT's new College of Computing, in terms of their influence and the scale at which they're being deployed. I believe that thinking about the ethical aspects of these investments is essential.

I remember when J.J. Abrams, one of our Director's Fellows and a film director for those of you who don't know, visited the Media Lab. We have 500 or so ongoing projects at the Media Lab, and he asked some of the students, "Do you do anything that involves things like war or surveillance or things that, you know, harm people?" And all of the students said, "No, of course we don't do that kind of thing. We make technology for good." And then he said, "Well, let me re-frame that question: can you imagine an evil villain in any of my shows or movies using anything here to do really terrible things?" And everybody went, "Yeah!"

What's important to understand is that most engineers and scientists are developing tools to try to help the world, whether it's trying to model the brains of children in order to increase the quality and effectiveness of education, or using sensors to help farmers grow crops. But what most people don't spend enough time thinking about is the dual-use nature of technology: the fact that technology can easily be used in ways that the designer did not intend.

Now, I think there are a lot of arguments about whose job it is to think about how technology can be used in unexpected and harmful ways. If I took the faculty in the Media Lab and put them on a line where, at one end, the faculty believe we should think about all the social implications before doing anything, and, at the other end, they believe we should just build stuff and society will figure it out, I think there would be a fairly even distribution along the line. I would say that at MIT that's also roughly true. My argument is that we actually have to think more about the social implications of technology before designing it. It's very hard to un-design things. I'm not saying that it's an easy task, or that we have to get everything perfect, but I think that having a more coherent view of the world and these implications is tremendously important.

The Media Lab is a little over 30 years old, and I've been there for 8 years, but I was very involved in the early days of the Internet. The other day, I was describing to Susan Silbey, the current faculty chair at MIT, how when we were building the Internet we thought that if we could just provide a voice to everyone, if we could just connect everyone together, we would have world peace. I really believed that when we started, and I was expressing to Susan how naïve I feel now that the Internet has become something more akin to the little girl in The Exorcist, for those of you who have seen the movie. But Susan, being an anthropologist and historian, said, "Well, when you guys talked about connecting everybody together, we knew. The social scientists knew that it was going to be a mess."

One of the really important things I learned from my conversation with Susan was the extent to which the humanities have thought about and fought over a lot of these issues. History has taught us a lot of these lessons. I know that it's somewhat taboo to invoke Nazi Germany in too many conversations, but if you look at the data that was collected in Europe to support social services, much of it was later used by the Nazis to round up and persecute the Jews. And it's not exactly the same situation, but a lot of the databases that we're creating to help poor and disadvantaged families are also being used by the immigration services to find and target people for deportation.

Even the databases and technology that we use and create with the best of intentions can be subverted depending on who's in charge. So thinking about these systems is tremendously important. At MIT we are working with tech companies (I think Zuly mentioned some of the specifics) that are working directly on surveillance technology or are in some way creating technologies that could be used for surveillance in China. Again, thinking about the ethical issues is very important. I will point out that there are whole disciplines that work on this. STS, science, technology, and society, is devoted to exactly that: thinking about the impact of science and technology on society. STS scholars consider it in a historical context and provide us with a framework for thinking about these things. Thinking about how to integrate anthropology and STS into both the curriculum and the research at MIT is tremendously important.

The other thing to think about is allowing engineers more freedom to explore the application and impact of their work. One of the problems with scholarship is that many researchers don't have the freedom to fully test their hypotheses. For example, in January, Eric Topol tweeted about his paper that showed that of the 15 most impactful machine learning and medicine papers that had been published, none had been clinically validated. In many cases in machine learning, you get some data, you tweak it, you get a very high effectiveness, and then you walk away. Then the clinicians come in and they say "oh, but we can't replicate this, and we don't have the expertise" or "we tried it but it doesn't seem to work in practice." If you're following an academic path, we're not providing the proper incentives for computer scientists to integrate with and work closely with the clinicians in the field. One of the other challenges we have is that our reward systems and the incentives that are in place don't encourage technologists to explore the social implications of the tech they produce. When this is the case, you fall a little bit short of actually getting to the question, "well, what does this actually mean?"

I co-teach a course at Harvard Law School called the Applied Challenges in Ethics and Governance of Artificial Intelligence, and through that class we've explored some research that considers the ethical and social impact of AI. To give you an example, one Media Lab project that we discussed was looking at risk scores used by the criminal justice system for sentencing and pre-trial assessments and bail. The project team initially thought "oh, we could just use a blockchain to verify the data and make the whole criminal sentencing system more efficient." But as the team started looking into it, they realized that the whole criminal justice system was somewhat broken. And as they started going deeper and deeper into the problem, they realized that while these prediction systems were making policing and judging possibly more efficient, they were also taking power away from the predictee and giving it to the predictor.

Basically, these automated systems were saying "okay, if you happen to live in this zip code, you will have a higher recidivism rate." But in reality, rearrest has more to do with policing, policy, and the courts than it does with the criminality of the individual. By saying that this risk score can accurately predict how likely it is that this person will commit another crime, you're attributing the agency to the individual when actually much of the agency lies with the system. And by focusing on making the prediction tool more accurate, you end up ignoring existing weaknesses and biases in the overall justice system and the causes of those weaknesses. It's reminiscent of Caley Horan's writing on the history of insurance and redlining. She looks at the way in which insurance pricing based on so-called actuarial fairness became a legitimate way to use math to discriminate against people, and how it took the debate away from feminists and civil rights leaders and made it an argument about the accuracy of algorithms.
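To make that point concrete, here is a small, purely illustrative simulation (my own toy sketch, not a project from the class and not any real risk-scoring system). It assumes two zip codes with identical underlying reoffending rates but very different policing intensity; a score trained on recorded rearrests then rates residents of the heavily policed zip code as "riskier" even though their behavior is the same.

    # Toy illustration only: two zip codes, identical behavior, different policing.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    zip_code = rng.integers(0, 2, size=n)      # zip 0 or zip 1
    reoffends = rng.random(n) < 0.30           # same 30% rate in both zips

    # Zip 1 is policed far more heavily, so reoffending there is far more
    # likely to be detected and recorded as a rearrest.
    detection_rate = np.where(zip_code == 1, 0.90, 0.30)
    rearrested = reoffends & (rng.random(n) < detection_rate)

    # Train a "risk score" on recorded rearrests, with zip code as the feature.
    model = LogisticRegression().fit(zip_code.reshape(-1, 1), rearrested)
    risk = model.predict_proba([[0], [1]])[:, 1]

    print(f"predicted risk, zip 0: {risk[0]:.2f}")   # roughly 0.09
    print(f"predicted risk, zip 1: {risk[1]:.2f}")   # roughly 0.27
    # The score is about three times higher in zip 1 even though behavior is
    # identical: it reflects the policing, not the person.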

The researchers who were trying to improve the criminal risk scoring system have completely pivoted to recommending that we stop using automated decision making in criminal justice. Instead they think we should use technology to look at the long term effects of policies in the criminal justice system and not to predict the criminality of individuals.

But this outcome is not common. I find that whether we're talking about tenure cases or publications or funding, we don't typically allow our researchers to end up in a place that contradicts where they started. So I think that's another thing that's really important. How do we create both research and curricular opportunities for people to explore their initial assumptions and hypotheses? As we continue this conversation, we should ask, "how can we integrate this into our educational system?" Our academic process is really important, and I love that we have scholars who are working on this, but how we bring this mentality to engineers and scientists is something I'd love to think about, and maybe we can work on that in the breakout sessions.

Now I want to pivot a little bit and talk about the role of academia in the Uyghur crisis. I know there are people who view this meeting as provocative or political, and it reminds me of the March for Science that we had several years ago. I gave a talk at the first March for Science. Before the talk, when I was at a dinner table with a bunch of faculty (I won't name them), someone said, "Why are you doing that? It's very political. We try not to be political, we're just scientists." And I said, "Well, when it becomes political to tell the truth, when being supportive of climate science is political, when trying to support fundamental scientific research is political, then I'm political." So I don't want to be partisan, but if the truth is political, then I think we need to be political.

And this is not a new concept. If you look at the history of MIT, or just the history of academic freedom (there's the Statement of Principles on Academic Freedom and Tenure), you will find a bunch of interesting MIT history. In the late 40s and 50s, during the McCarthy period, society was going after communists and left-wing people out of fear of Communism. Many institutions were turning over their left-wing, Marxist academics, or firing them under pressure from the government. But MIT was quite good about protecting its Marxist-affiliated faculty, and there's a very famous case that shows this. Dirk Struik, a math professor at MIT, was indicted by the Middlesex grand jury in 1951 on charges of advocating the overthrow of the US and Massachusetts governments. At the time, MIT suspended him with pay, but once the court abandoned the case due to lack of evidence and the fact that states shouldn't be ruling on this type of charge, MIT reinstated Professor Struik. This is a quote about the incident from James Killian, the president at the time.

"MIT believes that its faculty, as long as its members abide by the law, maintain the dignity and responsibility of their position, must be free to inquire, to challenge and to doubt in their search for what is true and good. They must be free to examine controversial matters, to reach conclusions of their own, to criticize and be criticized, and only through such unqualified freedom of thought and investigation can an educational institution, especially one dealing with science, perform its function of seeking truth."

Many of you may wonder why we have tenure at universities. We have tenure to protect our ability to question authority, to speak the truth and to really say what we think without fear of retribution.

There's another important case that demonstrates MIT's willingness to protect its faculty and students. In the early 1990s, MIT and a bunch of Ivy League schools came up with this idea to provide financial aid to low-income students on the basis of need. The schools got together to coordinate on how they would assess need and figure out how much financial aid to give to students. Weirdly, the United States government sued them, claiming that this coordination was an antitrust violation, which was ridiculous because it was a charity. Most of the other universities caved in after this lawsuit, but Chuck Vest, the president at the time, said, "MIT has a long history of admitting students based on merit and a tradition of ensuring these students full financial aid." He refused to deny students financial aid, and a multi-year lawsuit ensued, which MIT eventually won. This need-based scholarship system was then enshrined in actual policy in the United States.

Many of the people who are here at MIT today probably don't remember this, but there's a great documentary film that shows MIT students and faculty literally clashing with police on these streets in an anti-Vietnam War protest 50 years ago. So in the not-so-distant past, MIT has been a very political place when it meant protecting our freedom to speak up.

More recently, I personally experienced this support for academic freedom. When Chelsea Manning's fellowship at the Harvard Kennedy School was rescinded, she emailed me and asked if she could speak at the Media Lab. I was thinking about it, and I asked the administration what they thought, and they thought it was a terrible idea. And when they told me that, I said, "You know, now that means I have to invite her." I remember our Provost, Marty, saying, "I know." And that's what I think is wonderful about being here at MIT: the fact that the administration understands that faculty must be allowed to act independently of the Institute. Another example is when the administration was deciding what to do about funding from Saudi Arabia. The administration released a report, which has its critics, that basically said, "we're going to let people decide what they want to do." I think each group or faculty member at MIT is permitted to make their own decision about whether to accept funding from Saudi Arabia. MIT, in my experience, has always stood by the academic freedom of whatever unit at the Institute is trying to do what it wants to do.

I think we're in a very privileged place, and it's not only our freedom but our obligation to speak up. It's also our responsibility to fight for the academic freedom of people in our community as well as people in other communities, and to provide leadership. I really do want to thank the organizers of this conference for doing that. I think it's very bold, and very becoming of both MIT and Harvard. I read a very disturbing report from Human Rights Watch that talked about how Chinese scholars overseas are starting to have difficulty speaking up, which I think is somewhat unprecedented because of the capabilities of today's technology. And I think there are similar reports about scholars from Saudi Arabia. The ability of these countries to surveil their citizens overseas and impinge on their academic freedom is a tremendously important topic to discuss and think about, technically, legally, and otherwise. I think it's also very important for us to talk about how to protect the freedoms of students studying here.

Thank you again for making this topic very front of mind for me. On the panel, I'd love to try to describe some concrete steps that we can take to continue to protect this freedom that we have. Thank you.

Credits

Transcription and editing: Samantha Bates
