Joi Ito's Web

In 2011, when we announced that I would join the Media Lab as the new Director, many people thought it was an unusual choice, partly because I had never earned a higher degree, not even an undergraduate one. I had dropped out of both Tufts and the University of Chicago and had spent most of my life doing all sorts of weird jobs and building and running companies and nonprofits.

I think it took quite a bit of courage on the part of the Media Lab and MIT to hire a Director with no college degree, but once we got over the hump, some felt it was a kind of "badge of honor." (I'm also sure not everyone felt this way.)

Jun Murai, father of the Japanese Internet, my mentor in Japan, and Dean of the Graduate School of Media and Governance at Keio University, had been encouraging me to complete a PhD in his program. We had been discussing this in earnest since June 2010, when Keio confirmed that it would be OK with awarding a PhD to someone without a Bachelor's or a Master's degree. When I joined the Media Lab, I asked the co-founder and first Director of the Lab, Nicholas Negroponte, whether it would help me if I completed the PhD. He recommended (at the time) that I not complete it because it was more interesting that I didn't have a degree.

Eight years later, I am often referred to as "the academic" when I'm on panels; I advise and work with many students, including PhD students. It felt like it was time to finish the PhD. In other words, one product of my profession is degrees, and I felt like I needed to try the product. Even Nicholas agreed when I asked him.

The degree that I earned is a "Thesis PhD," a less common type of PhD that you rarely see in the US. It involves writing about and defending the academic value and contribution of work you have already done, rather than doing new work in residence at an institution. The sequencing and ordering are different from those of a typical PhD.

The process involved writing a dissertation and putting together a package that was accepted by the university. After that, a committee was formally constituted, with Jun Murai as the lead advisor and Rod Van Meter, Keiko Okawa, Hiroya Tanaka, and Jonathan Zittrain as committee members and thesis readers. They provided feedback and detailed critique on the thesis, which I rewrote based on this feedback. On June 6, I defended the thesis publicly at Keio University and, based on the questions and feedback from the defense, rewrote the dissertation again.

On June 21 I had a final exam, which involved a presentation to the committee of all of the changes and responses to the criticisms and suggestions. The committee had a closed-door discussion and formally accepted the dissertation. I rewrote, formatted, and polished the dissertation some more and submitted the final version in printed form on July 20.

Finally, on behalf of the committee, Jun Murai prepared and presented the case at a faculty meeting on July 30, 2018, where the faculty voted to award the PhD.

Although by definition and according to the rules the dissertation is entirely my own work, I couldn't have done it without the help of my advisors, collaborators, and all of the people I've worked with over the years.

While I started this project mostly to understand the process and "see what it was like" to work on a degree, I learned a lot while researching, reading, and talking to people about my dissertation. The dissertation, titled "The Practice of Change," is available online both as a PDF and as LaTeX source in a GitHub repo. It is a summary of much of the work I've done so far, an inquiry into how we understand, design solutions for, and try to address the current challenges to our society, and a look at how the work going on at the Media Lab might be applied to, or provide inspiration for, people trying to address these challenges.

In some ways, the dissertation feels like I've gone around and kicked a dozen hornets' nests. I've mostly stayed out of strictly academic discourse in the past, but trying to understand and describe the context of my work across a number of different disciplines has caused me to wade into many old and new arguments. I'm sure that many of my forays into various disciplines will annoy those well versed in them, but the constructive criticism I've received about my treatment of those disciplines has surfaced an exciting array of future work for me.

So while I do not believe that I have yet become a "serious academic" or that I will be focused primarily on research and academic output, I feel like I've discovered a new lens through which to look at things -- a new world to explore. It reminds me of entering a new zone in a game like World of Warcraft where there are new quests, new skills, new reps to grind, and lots of new things to learn. So fun.

Credits

To my late godfather Timothy Leary for “Question Authority and Think For Yourself.”

To Jun Murai for pushing me to do this dissertation.

To my thesis advisors: Hiroya Tanaka, Rodney D. Van Meter, Keiko Okawa and Jonathan L. Zittrain for their extensive feedback, guidance and encouragement.

To Nicholas Negroponte for the Media Lab and his mentorship.

To the late Kenichi Fukui for encouraging me to think about complex systems and the limits of reduction.

To the late John Perry Barlow for the “Declaration of Independence of Cyberspace.”

To Hashim Sarkis for sending me in the direction of Foucault.

To Martin Nowak for his guidance on Evolutionary Dynamics.

To my colleagues at MIT and particularly at the Media Lab for continuous inspiration and my raison d’être.

To my research colleagues Karthik Dinakar, Chia Evers, Natalie Saltiel, Pratik Shah, and Andre Uhl for helping me with everything, including this thesis.

To Yuka Sasaki, Stephanie Strom, and Mika Tanaka for their help in pulling this dissertation together.

To David Weinberger for “The final edit.”

To Sean Bonner, Danese Cooper, Ariel Ekblaw, Pieter Franken, Mizuko Ito, Mike Linksvayer, Pip Mothersill, Diane Peters, Deb Roy and Jeffrey Shapard for their feedback on various parts of the dissertation.

Finally, thanks to Kio and Mizuka for making room in our family life to work on this and for supporting me through the process.

In the summer of 1990, I was running a pretty weird nightclub in the Roppongi neighborhood of Tokyo. I was deeply immersed in the global cyberpunk scene and working to bring the Tokyo node of this fast-expanding, posthuman, science-fiction-and-psychedelic-drug-fueled movement online. The Japanese scene was more centered around videogames and multimedia than around acid and other psychedelics, and Timothy Leary, a dean of ’60s counterculture and proponent of psychedelia who was always fascinated with anything mind-expanding, was interested in learning more about it. Tim anointed the Japanese youth, including the 24-year-old me, “The New Breed.” He adopted me as a godson, and we started writing a book about The New Breed together, starting with “tune in, turn on, take over,” as a riff off Tim’s original and very famous “turn on, tune in, drop out.” We never finished the book, but we did end up spending a lot of time together. (I should dig out my old notes and finish the book.)

Tim introduced me to his friends in Los Angeles and San Francisco. They were a living menagerie of American counterculture since the ’60s: traditional New Age types, hippies, cyberpunks, and transhumanists, too. In my early twenties, I was an eager and budding techno-utopian, dreaming of the day when I would become immortal and ascend to the stars in cryogenic slumber, to awaken on a distant planet. Or perhaps I would have my brain uploaded into a computer network, to become part of some intergalactic superbrain.

Good times. Those were the days and, for some, still are.

We’ve been yearning for immortality at least since the Epic of Gilgamesh. In Greek mythology, Zeus grants Eos’s mortal lover Tithonus immortality—but the goddess forgets to ask for eternal youth as well. Tithonus grows old and decrepit, begging for death. When I hear about life extension today, I am often perplexed, even frustrated. Are we talking about eternal youth, eternal old age, or having our cryogenically frozen brains thawed out 2,000 years from now to perform tricks in a future alien zoo?

The latest enthusiasm for eternal life largely stems not from any acid-soaked, tie-dyed counterculture but from the belief that technology will enhance humans and make them immortal. Today’s transhumanist movement, sometimes called H+, encompasses a broad range of issues and diversity of belief, but the notion of immortality—or, more correctly, amortality—is the central tenet. Transhumanists believe that technology will inevitably eliminate aging or disease as causes of death and instead turn death into the result of an accidental or voluntary physical intervention.

As science marches forward, and age reversal and the elimination of disease become real possibilities, what once seemed like a science fiction dream is becoming more real. That progress is transforming the transhumanist movement and its role in society: once a crazy subculture, it is now a Silicon Valley money- and technology-fueled “shot on goal,” more a practical “hedge” than the sci-fi dream of its progenitors.

Transhumanism can be traced back to futurists in the ’60s, most notably FM-2030. As the development of new, computer-based technologies began to turn into a revolution to rival the Industrial Revolution, Max More defined transhumanism as the effort to become “posthuman” through scientific advances like mind “uploading.” He developed his own variant of transhumanism and named it Extropy; together with Tom Morrow, he founded the Extropy Institute, whose email list created a community of Extropians in the internet’s cyberpunk era. Its members discussed AI, cryonics, nanotech, and cryptoanarchy, among other things, and some reverted to transhumanism, creating an organization now known as Humanity+. As the tech revolution continued, Extropians and transhumanists began actively experimenting with technology’s ability to deliver amortality.

In fact, Timothy Leary planned to have his head frozen by Alcor, preserving his brain and, presumably, his sense of humor and unique intelligence. But as he approached his death—I happened to visit him the night before he died in 1996—the vibe of the Alcor team moving weird cryo-gear into his house creeped Tim out, and he ended up opting for the “shoot my ashes into space” path, which seemed more appropriate to me as well. All of his friends got a bit of his ashes, too, and having Timothy Leary’s ashes became “a thing” for a while. It left me wondering, every time I spoke to groups of transhumanists shaking their fists in the air and rattling their Alcor “freeze me when I die” bracelets: How many would actually go through with the freezing?

That was 20 years ago. The transhumanist and Extropian movements (and even the Media Lab) have gotten more sober since those techno-utopian days, when even I was giddy with optimism. Nonetheless, as science fiction gives way to real science, many of the “ZOMG, if only” conversations are becoming arguments about when and how, and the shift from Haight-Ashbury to Silicon Valley has stripped the movement of its tie-dye and beads and replaced them with Pied Piper shirts. Just as the road to hell is paved with good intentions, the road that brought us Cambridge Analytica and the Pizzagate conspiracy was paved with optimism and oaths to not be evil.

Renowned Harvard geneticist George Church once told me that breakthroughs in biological engineering are coming so fast we can’t predict how they will develop going forward. Crispr, a low-cost gene editing technology that is transforming our ability to design and edit the genome, was completely unanticipated; experts thought it was impossible ... until it wasn’t. Next-generation gene sequencing is decreasing in price far faster than Moore’s Law for processors. In many ways, bioengineering is moving faster than computing. Church believes that amortality and age reversal will seem difficult and fraught with issues ... until they aren’t. He is currently experimenting with age reversal in dogs, using gene therapy that has been successful in mice, a technique he believes is the most promising of nine broad approaches to mortality and aging: genome stability, telomere extension, epigenetics, proteostasis, caloric restriction, mitochondrial research, cell senescence, stem cell exhaustion, and intercellular communication.

Church’s research is but one of the key efforts giving us hope that we may someday understand aging and possibly reverse it. My bet is that we will significantly lengthen, if not eliminate, the notion of a “natural lifespan,” although it’s impossible to predict exactly when.

But what does this mean? Making things technically possible doesn’t always make them societally possible or even desirable, and just because we can do something doesn’t mean we should (as we’re increasingly realizing, watching the technologies we have developed transform into dark zombies instead of the wonderful utopian tools their designers imagined).

Human beings are tremendously adaptable and resilient, and we seem to quickly adjust to almost any technological change. Unfortunately, not all of our problems are technical and we are really bad at fixing social problems. Even the ones that we like to think we’ve fixed, like racism, keep morphing and getting stronger, like drug-resistant pathogens.

Don’t get me wrong—I think it’s important to be optimistic and passionate and to push the boundaries of understanding to improve the human condition. But there is a religious tone in some of the arguments; there is even a church, Way of the Future, which believes that “the creation of ‘super intelligence’ is inevitable.” As Yuval Harari writes in Homo Deus, “new technologies kill old gods and give birth to new gods.” Lord Martin Rees once told a group of us a story (which has been retold in various forms in various places) about how, back when he was still just Sir Martin, he was interviewed by what he called “the society for the abolition of involuntary death” in California. The members offered to put him in cryonic storage when he died, and when he politely told them he’d rather be dead than in a deep freeze, they called him a “deathist.”

Transhumanists correctly argue that every time you take a baby aspirin (or have open heart surgery), you’re intervening to make your life better and longer. They contend that there is no categorical difference between many modern medical procedures and the quest to beat death; it’s just a matter of degree. I tend to agree.

Yet we can clearly imagine the perils of amortality. Would dictators hold onto power endlessly? How would universities work if faculty never retired? Would the population explode? Would endless life be only for the wealthy, or would the poor be forced to toil forever? Clearly many of our social and philosophical systems would break. Back in 2003, in Our Posthuman Future: Consequences of the Biotechnology Revolution, Francis Fukuyama warned us of the perils of life extension and explained how biotech was taking us into a posthuman future with catastrophic consequences for civilization, even with the best of intentions.

I think it’s unlikely that we’ll be uploading our minds to computers any time soon, but I do believe changes that challenge what it means to be “human” are coming. Philosopher Nikola Danaylov, in his Transhumanist Manifesto, says, “We must all respect autonomy and individual rights of all sentience throughout the universe, including humans, non-human animals, and any future AI, modified life forms, or other intelligences.” That sounds progressive and good.

Still, in his manifesto Nikola also writes, “Transhumanists of the world unite—we have immortality to gain and only biology to lose.” That sounds a little scary to me. I poked Nikola about this, and he pointed out that he wrote the manifesto a while ago and that his position has become more nuanced. But many of his peers are as radical as ever. I think transhumanism, especially its strong, passionate base in exuberant Silicon Valley, could use an overhaul that makes it more attentive to and integrated with our complex societal systems. At the same time, we need to help the “left-behind” parts of society catch up and participate in, rather than simply be subjected to, the technological transformations that are looming. Now that the dog has caught the car, transhumanism has to transform our fantasy into a responsible reality.

I, for one, still dream of flourishing in the future through advances in science and technology, but I hope for a future that addresses societal inequities and retains the richness and diversity of our natural systems and indigenous cultures, rather than the somewhat simple and sterile futures depicted by many science fiction writers and futurists. Timothy Leary liked to remind us to remember our hippie roots, with their celebration of diversity and nature, and I hear him calling us again.

Everyone from the ACLU to the Koch brothers wants to reduce the number of people in prison and in jail. Liberals view mass incarceration as an unjust result of a racist system. Conservatives view the criminal justice system as an inefficient system in dire need of reform. But both sides agree: Reducing the number of people behind bars is an all-around good idea.

To that end, AI—in particular, so-called predictive technologies—has been deployed to support various parts of our criminal justice system. For instance, predictive policing uses data about previous arrests and neighborhoods to direct police to where they might find more crime, and similar systems are used to assess the risk of recidivism for bail, parole, and even sentencing decisions. Reformers across the political spectrum have touted risk assessment by algorithm as more objective than decision-making by an individual. Take the decision of whether to release someone from jail before their trial. Proponents of risk assessment argue that many more individuals could be released if only judges had access to more reliable and efficient tools to evaluate their risk.

Yet a 2016 ProPublica investigation revealed that not only were these assessments often inaccurate, the cost of that inaccuracy was borne disproportionately by African American defendants, whom the algorithms were almost twice as likely to label as a high risk for committing subsequent crimes or violating the terms of their parole.

We’re using algorithms as crystal balls to make predictions on behalf of society, when we should be using them as a mirror to examine ourselves and our social systems more critically. Machine learning and data science can help us better understand and address the underlying causes of poverty and crime, as long as we stop using these tools to automate decision-making and reinscribe historical injustice.

Training Troubles

Most modern AI requires massive amounts of data to train a machine to more accurately predict the future. When systems are trained to help doctors spot, say, skin cancer, the benefits are clear. But in a creepy illustration of the importance of the data used to train algorithms, a team at the Media Lab created what is probably the world’s first artificial intelligence psychopath and trained it on a notorious subreddit that documents disturbing, violent death. They named the algorithm Norman and began showing it Rorschach inkblots; they also trained a standard algorithm on more benign inputs. Where the standard algorithm saw birds perched on a tree branch, Norman saw a man electrocuted to death.

So when machine-based prediction is used to make decisions affecting the lives of vulnerable people, we run the risk of hurting people who are already disadvantaged—moving more power from the governed to the governing. This is at odds with the fundamental premise of democracy.

States like New Jersey have adopted pretrial risk assessment in an effort to minimize or eliminate the use of cash-based bail, which multiple studies have shown is not only ineffective but also often deeply punitive for those who cannot pay. In many cases, the cash bail requirement is effectively a means of detaining defendants and denying them one of their most basic rights: the right to liberty under the presumption of innocence.

While cash bail reform is an admirable goal, critics of risk assessment are concerned that such efforts might lead to an expansion of punitive nonmonetary conditions, such as electronic monitoring and mandatory drug testing. Right now, assessments provide little to no insight about how a defendant’s risk is connected to the various conditions a judge might set for release. As a result, judges are ill-equipped to ask important questions about how release with conditions such as drug testing or GPS-equipped ankle bracelets actually affects outcomes for defendants and society. Will, for instance, an ankle bracelet interfere with a defendant’s ability to work while awaiting trial? In light of these concerns, risk assessments may end up simply legitimizing new types of harmful practices. In this, we miss an opportunity: Data scientists should focus more deeply on understanding the underlying causes of crime and poverty, rather than simply using regression models and machine learning to punish people in high-risk situations.

Such issues are not limited to the criminal justice system. In her latest book, Automating Inequality, Virginia Eubanks describes several compelling examples of failed attempts by state and local governments to use algorithms to help make decisions. One heartbreaking example Eubanks offers is the use of data by the Office of Children, Youth, and Families in Allegheny County, Pennsylvania, to screen calls and assign risk scores to families that help decide whether case workers should intervene to ensure the welfare of a child.

To assess a child’s particular risk, the algorithm primarily “learns” from data that comes from public agencies, where a record is created every time someone applies for low-cost or free public services, such as the Supplemental Nutrition Assistance Program. This means that the system essentially judges poor children to be at higher risk than wealthier children who do not access social services. As a result, the symptoms of a high-risk child look a lot like the symptoms of poverty, the result of merely living in a household that has trouble making ends meet. Based on such data, a child could be removed from her home and placed into the custody of the state, where her outcomes look quite bleak, simply because her mother couldn’t afford to buy diapers.

Look for Causes

Rather than using predictive algorithms to punish low-income families by removing their children, Eubanks argues we should be using data and algorithms to assess the underlying drivers of poverty that exist in a child’s life and then ask better questions about which interventions will be most effective in stabilizing the home.

This is a topic that my colleague Chelsea Barabas discussed at length at the recent Conference on Fairness, Accountability, and Transparency, where she presented our paper, “Interventions Over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” In the paper, we argue that the technical community has used the wrong yardstick to measure the ethical stakes of AI-enabled technologies. By narrowly framing the risks and benefits of artificial intelligence in terms of bias and accuracy, we’ve overlooked more fundamental questions about how the introduction of automation, profiling software, and predictive models connects to outcomes that benefit society.

To reframe the debate, we must stop striving for “unbiased” prediction and start understanding causal relationships. What caused someone to miss a court date? Why did a mother keep a diaper on her child for so long without changing it? The use of algorithms to help administer public services presents an amazing opportunity to design effective social interventions—and a tremendous risk of locking in existing social inequity. This is the focus of the Humanizing AI in Law (HAL) work that we are doing at the Media Lab, along with a small but growing number of efforts involving the combined efforts of social scientists and computer scientists.

This is not to say that prediction isn’t useful, nor is it to say that understanding causal relationships in itself will fix everything. Addressing our societal problems is hard. My point is that we must use the massive amounts of data available to us to better understand what’s actually going on. This refocus could make the future one of greater equality and opportunity, and less a Minority Report–type nightmare.

On May 13, 2018, I innocently asked on Twitter why people don't cite blog posts in academic papers.

240 replies later, it was clear that blogs don't make it into the academic journalsphere. People cited two main reasons: the lack of longevity of links and the lack of peer review. I would like to point out that my blog URLs have been solid and permanent since I launched this version of my website in 2002, but it's a fairly valid point. There are a number of ideas about how to solve this, and several people pointed out that the Internet Archive does a pretty good job of keeping an archive of many sites.

There was quite a bit of discussion about peer review. Karim Lakhani posted a link to a study he did on peer review.

In the study, he says that, "we find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel."

Many people on Twitter mentioned pre-prints, an emerging practice of publishing drafts before peer review, since review can take so long. Many fields are skipping formal peer review and just focusing on pre-prints. In some fields, ad hoc and informal peer groups are reviewing pre-prints, and some journals are even referring to these informal review groups.

This sounds an awful lot like how we review each other's work on blogs. We cite, discuss, and share links, with the best blog posts getting the most links. In the early days of Google, this would guarantee being on the first page of search results. Some great blog posts, like Tim O'Reilly's "What Is Web 2.0," have ended up becoming canonical. So when people tell me that their professors don't want them to cite blogs in their academic papers, I'm not feelin' it.

It may be true that peer review is better than the alternatives, but it definitely could be improved. SCIgen, created in 2005 by MIT researchers, generates meaningless papers that have been successfully submitted to conferences. In 2014, Springer and IEEE removed more than 120 papers after a French researcher discovered that they were computer-generated fakes. Even peer review itself has been successfully imitated by machines.

At the Media Lab and the MIT Press, we are thinking about new ways to publish, with experiments like PubPub. There are discussions about the future of peer review, and people like Jess Polka at ASAPbio are working on these issues as well. I'm very excited about the progress, but there's a long way to go.

One thing we can do is make blogs more citation-friendly. Some people on Twitter mentioned that it's clearer who did what in an academic paper than in a blog post. At the urging of Jeremy Rubin, I started putting credits at the bottom of blog posts when I received a lot of help -- for example, on my post on the FinTech Bubble. Also, Boris just added a "cite" button at the bottom of each of my blog posts. Try it! I suppose the next step is to consider DOIs for each post, although it seems non-obvious how independent bloggers would get them without paying a bunch of money.

One annoying thing is that the citation format for blogs sucks. When you Google "cite blog post," you end up at... a blog post about "How to Cite a Blog Post in MLA, APA, or Chicago." According to that blog post, the APA citation for this post would be "Ito, J. (2018, May). Citing Blogs. [Blog post]. https://joi.ito.com/weblog/2018/05/28/citing-blogs.html" That's annoying. Isn't the name of my blog relevant? If you look at the Citing Electronic Sources section of the MIT Academic Integrity website, it links to the Purdue OWL page. Purdue gives a slightly more cryptic example, using a blog comment in the square brackets, but it's roughly similar. I don't see why the name of my blog is less important than some random journal, so I'm going to put it in italics - APA guidelines be damned. Who do we lobby to change the APA guidelines to lift blog names out of the URL and into the body of the citation?
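In the meantime, since so many academic papers are written in LaTeX anyway, one workaround is a hand-rolled BibTeX entry that keeps the blog's name in the body of the citation. Here is a minimal sketch for this post; the entry key and field choices are my own improvisation, not an official APA or BibTeX convention for blogs:

% Hypothetical BibTeX entry for citing this post; adjust fields to taste.
@misc{ito2018citing,
  author       = {Ito, Joichi},
  title        = {Citing Blogs},
  howpublished = {Blog post, \emph{Joi Ito's Web}},
  month        = may,
  year         = {2018},
  note         = {\url{https://joi.ito.com/weblog/2018/05/28/citing-blogs.html}}
}

With a standard bibliography style, the \emph{} renders the blog's name in italics right after the title, which is all I'm really asking the APA to do.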

Credits

Boris Anthony and Travis Rich for the work on citations for this blog and the discussion about the citation format.

Amy Brand for the link to the Peer Review Transparency site and the introduction to Jess Polka.

I received a lot of excited feedback from people who saw the 60 Minutes segment on the Media Lab. I also got a few less congratulatory messages questioning the "gee-whiz-isn't-this-all-great" depiction of the Lab and asking why we seemed so relentlessly upbeat at a time when so many of the negative consequences of technology are coming to light. Juxtaposed with the first segment in the program about Aleksandr Kogan, the academic who created the Cambridge Analytica app that mined Facebook, the Media Lab segment appeared, to some, blithely upbeat. And perhaps it reinforced the sometimes unfair image of the Media Lab as a techno-Utopian hype machine.

Of course, the piece clocked in at about 12 minutes and focused on a small handful of projects; it's to be expected that it didn't represent the full range of research or the full spectrum of ideas and questions that this community brings to its endeavors. In my interview, most of my comments focused on how we need more reflection on where science and technology have taken us over the 30-plus years that the Media Lab has been around. I also stressed how at the Lab we're thinking a lot more about the impact technology is having on society, climate, and other systems. But in such a short piece--and one that was intended to showcase technological achievements, not to question the ethical rigor applied to those achievements--it's no surprise that not much of what I said made it into the final cut.

What was particularly interesting about the 60 Minutes segment was the producers' choice of "Future Factory" for the title. I got a letter from one Randall G. Nichols, of Missouri, pointing out that "No one in the segment seems to be studying the fact that technology is creating harmful conditions for the Earth, worse learning conditions for a substantial number of kids, decreasing judgment and attention in many of us, and so on." If we're manufacturing the future here, shouldn't we be at least a little concerned about the far-reaching and unforeseen impact of what we create? I think most of us agree that, yes, absolutely, we should be! And what I'd say to Randall is: we are.

In fact, the lack of critical reflection in science and technology has been on my mind; I wrote about it in Resisting Reduction. Much of our work at the Lab helps us better understand and intervene responsibly in societal issues, including Deb Roy's Depolarization by Design class and almost all of the work in the Center for Civic Media. There's Kevin Esvelt's work that involves communities in the deployment of the CRISPR gene drive, and Danielle Wood's work generally and, more specifically, her interest in science and racial issues. And Pattie Maes is making her students watch Black Mirror to imagine how the work we do in the Lab might unintentionally go wrong. I'm also teaching a class on the ethics and governance of AI with Jonathan Zittrain from Harvard Law School, which aims to ensure that the generation now rising is more thoughtful about the societal impact of AI as it is deployed. I could go on.

It's not that I'm apologetic about the institutional optimism that the 60 Minutes piece captured. Optimism is a necessary part of our work at the Lab. Passion and optimism drive us to push the boundaries of science and technology. It's healthy to have a mix of critical, contemplative, and optimistic viewpoints in our ecosystem. Not all of that can be captured in 12 minutes, though. I'm sure our balance of caution and optimism isn't satisfactory for quite a few critical social scientists, but I think a quick look at some of the projects I mention will show a more balanced approach than the 60 Minutes segment suggests.

Having said that, I believe that we need to continue to integrate social sciences and reflection even more deeply into our science and technology work. While I have a big voice at the Lab, the Lab operates on a "permissionless innovation" model where I don't tell researchers what to do (and neither do our funders). On the other hand, we have safety and other codes that we have to follow--is there an equivalent ethical or social code that we or other institutions should adopt? Harrison Eiteljorg II thinks so. He wrote, "I would like to encourage you to consider adding to your staff at least one scholar whose job is to examine projects for the ethical implications for the work and its potential final outcome." I wonder, what would such a process look like?

Socially integrated technology work has continued to increase, both in the rest of society and at the Lab. One of my questions is whether the Lab is changing fast enough, and whether the somewhat emergent way this work is infusing itself into the Lab is the appropriate one. Doing my own ethical and critical work and having conversations is the easiest way for me to contribute, but I wonder if there is more that we as a Lab should be doing.

One of the main arcs of the 60 Minutes piece was showing how technologies built in the Lab's early days--touch screens, voice command, things that were so far ahead of their time in the '80s and '90s as to seem magical--have gone out into the world and become part of the fabric of our everyday lives. The idea of highlighting the Lab as a "future factory" was to suggest that the loftiest and "craziest" ideas we're working on now might one day be just as commonplace. But I'd like to challenge myself, and everyone at the Media Lab, to demonstrate our evolution in thoughtful critique as well.