
I received a lot of excited feedback from people who saw the 60 Minutes segment on the Media Lab. I also got a few less congratulatory messages questioning the "gee-whiz-isn't-this-all-great" depiction of the Lab and asking why we seemed so relentlessly upbeat at a time when so many of the negative consequences of technology are coming to light. Juxtaposed with the first segment in the program, about Aleksandr Kogan, the academic who created the Cambridge Analytica app that mined Facebook users' data, the Media Lab segment appeared, to some, blithely upbeat. And perhaps it reinforced the sometimes unfair image of the Media Lab as a techno-utopian hype machine.

Of course, the piece clocked in at about 12 minutes and focused on a small handful of projects; it's to be expected that it didn't represent the full range of research or the full spectrum of ideas and questions that this community brings to its endeavors. In my interview, most of my comments focused on the need for more reflection on where science and technology have brought us over the 30-plus years that the Media Lab has been around. I also stressed that at the Lab we're thinking a lot more about the impact technology is having on society, climate, and other systems. But in such a short piece--and one that was intended to showcase technological achievements, not to question the ethical rigor applied to those achievements--it's no surprise that not much of what I said made it into the final cut.

What was particularly interesting about the 60 Minutes segment was the producers' choice of "Future Factory" for the title. I got a letter from one Randall G. Nichols, of Missouri, pointing out that "No one in the segment seems to be studying the fact that technology is creating harmful conditions for the Earth, worse learning conditions for a substantial number of kids, decreasing judgment and attention in many of us, and so on." If we're manufacturing the future here, shouldn't we be at least a little concerned about the far-reaching and unforeseen impact of what we create? I think most of us agree that, yes, absolutely, we should be! And what I'd say to Randall is, we are.

In fact, the lack of critical reflection in science and technology has been on my mind--I wrote about it in Resisting Reduction. Much of our work at the Lab helps us better understand and intervene responsibly in societal issues, including Deb Roy's Depolarization by Design class and almost all of the work in the Center for Civic Media. There's Kevin Esvelt's work that involves communities in the deployment of the CRISPR gene drive, and Danielle Wood's work generally and, more specifically, her interest in science and racial issues. And Pattie Maes is making her students watch Black Mirror to imagine how the work we do in the Lab might unintentionally go wrong. I'm also teaching a class on the ethics and governance of AI with Jonathan Zittrain from Harvard Law School, which aims to ensure that the generation now rising is more thoughtful about the societal impact of AI as it is deployed. I could go on.

It's not that I'm apologetic about the institutional optimism that the 60 Minutes piece captured. Optimism is a necessary part of our work at the Lab. Passion and optimism drive us to push the boundaries of science and technology. It's healthy to have a mix of viewpoints--critical, contemplative, and optimistic--in our ecosystem. Not all aspects of that can necessarily be captured in 12 minutes, though. I'm sure that our balance of caution and optimism isn't satisfactory for quite a few critical social scientists, but I think a quick look at some of the projects I mention will show a more balanced approach than the 60 Minutes segment suggests.

Having said that, I believe that we need to continue to integrate social sciences and reflection even more deeply into our science and technology work. While I have a big voice at the Lab, the Lab operates on a "permissionless innovation" model where I don't tell researchers what to do (and neither do our funders). On the other hand, we have safety and other codes that we have to follow--is there an equivalent ethical or social code that we or other institutions should have? Harrison Eiteljorg, II, thinks so. He wrote, "I would like to encourage you to consider adding to your staff at least one scholar whose job is to examine projects for the ethical implications for the work and its potential final outcome." I wonder: what would such a process look like?

Socially integrated technology work has continued to grow, both at the Lab and in the rest of society. One of my questions is whether the Lab is changing fast enough, and whether the somewhat emergent way this work is taking root at the Lab is the right one. Doing my own ethical and critical work and having these conversations is the easiest way for me to contribute, but I wonder if there is more that we as a Lab should be doing.

One of the main arcs of the 60 Minutes piece was showing how technologies built in the Lab's early days--touch screens, voice command, things that were so far ahead of their time in the '80s and '90s as to seem magical--have gone out into the world and become part of the fabric of our everyday lives. The idea of highlighting the Lab as a "future factory" was to suggest that the loftiest and "craziest" ideas we're working on now might one day be just as commonplace. But I'd like to challenge myself, and everyone at the Media Lab, to demonstrate our evolution in thoughtful critique, as well.