Joi Ito's Web

Joi Ito's conversation with the living web.


This column is the second in a series about young people and screens. Read the first post, about connected parenting, here.

When I was in high school, I emailed the authors of the textbooks we used so I could better question my teachers; I spent endless hours chatting with the sysadmins of university computer systems about networks; and I started threads online for many of my classes where we had much more robust conversations than in the classroom. The first conferences I attended as a teenager were conferences with mostly adult communities of online networkers who eventually became my mentors and colleagues.

I cannot imagine how I would have learned what I have learned or met the many, many people who’ve enriched my life and work without the internet. So I know first-hand how, today, the internet, online games, and a variety of emerging technologies can significantly benefit children and their experiences.

That said, I also know that, in general, the internet has become a more menacing place than it was when I was in school. To take just one example, parents and other industry observers share a growing concern about the content that YouTube serves up to young people. A Sesame Street sing-along with Elmo leads to one of those weird colored-ball videos, which leads to a string of clips that keeps kids glued to their screens: increasingly strange but engaging content of questionable social or educational value, interspersed with material that looks like ordinary content but may be some sort of sponsored promotion for Play-Doh. The rise of commercial content aimed at young people is exemplified by YouTube kidfluencers, marketed as giving brands using YouTube "an added layer of kid safety"; their rampant marketing has many parents up in arms.

In response, Senator Ed Markey, a longtime proponent of children's online privacy protections, is cosponsoring a new bill to expand the Children's Online Privacy Protection Act (COPPA). It would, among other things, raise the age of protected children from 12 to 15 and ban online marketing videos targeted at them. The hope is that this will compel sites like YouTube and Facebook to manage their algorithms so that they do not serve up endless streams of content promoting commercial products to children. It gets a little complicated, though, because in today's world the kids themselves are brands, and they have product lines of their own. So the line between self-expression and endorsement is very blurry and confounds traditional regulations and delineations.

The proposed bill is well-intentioned and may limit exposure to promotional content, but it may also have unintended consequences. Take the existing version of COPPA, passed in 1998, which introduced a parental-permission requirement for children under 13 to participate in commercial online platforms. Most open platforms responded by excluding those under 13 rather than taking on the onerous parental-permission process and the challenges of serving children. This drove young people's participation underground on these sites, since they could easily misrepresent their age or use a friend's or caregiver's account. Research and everyday experience indicate that young people under 13 are all over YouTube and Facebook, and busy caregivers, including parents, are often complicit in letting this happen.

That doesn’t mean, of course, that parents aren’t concerned about the time their young people are spending on screens, and Google and Facebook have responded, respectively, with the kid-only “spaces” on YouTube and Messenger.

But these policy and tech solutions ignore the underlying reality that young people crave contact with older young people and grown-up expertise, and that mixed-age interaction is essential to their learning and development.

Not only is banning young people from open platforms an iffy, hard-to-enforce proposition, it's unclear whether it is even the best thing for them. It's possible that this new bill could damage the system as other well-intentioned efforts have in the past. I can't forget the overly stringent Computer Fraud and Abuse Act. Written a year after the movie WarGames, the law made it a felony to break the terms of service of an online service, so that, say, an investigative journalist couldn't run a script on Facebook to check whether the algorithm was doing what the company said it was. Regulating these technologies requires an interdisciplinary approach involving legal, policy, social, and technical experts working closely with industry, government, and consumers to get them to work the way we want them to.

Given the complexity of the issue, is the only way to protect young people to exclude them from the grown-up internet? Can algorithms be optimized for learning, high-quality content, and positive intergenerational communication for young people? What gets less attention than outright restriction is how we might optimize these platforms to provide joy, positive engagement, learning, and healthy communities for young people and families.

Children are exposed to risks at churches, schools, malls, parks, and anywhere adults and children interact. Even when harms and abuses happen, we don’t talk about shutting down parks and churches, and we don’t exclude young people from these intergenerational spaces. We also don’t ask parents to evaluate the risks and give written permission every time their kid walks into an open commercial space like a mall or grocery store. We hold the leadership of these institutions accountable, pushing them to establish positive norms and punish abuse. As a society, we know the benefits of these institutions outweigh the harms.

Based on a massive EU-wide study of children online, communication researcher Sonia Livingstone argues that internet access should be considered a fundamental right of children. She notes that risks and opportunities go hand in hand: “The more often children use the internet, the more digital skills and literacies they generally gain, the more online opportunities they enjoy and—the tricky part for policymakers—the more risks they encounter.” Shutting down children’s access to open online resources often most harms vulnerable young people, such as those with special needs or those lacking financial resources. Consider, for example, the case of a home- and wheelchair-bound child whose parents only discovered his rich online gaming community and empowered online identity after his death. Or Autcraft, a Minecraft server community where young people with autism can foster friendships via a medium that often serves them better than face-to-face interactions.

As I was working on my last column about young people and screen time, I spent some time talking to my sister, Mimi Ito, who directs the Connected Learning Lab at UC Irvine. We discussed how these problems and the negative publicity around screens were causing caregivers to develop unhealthy relationships with their children while trying to regulate their exposure to screens and the content they deliver. The messages caregivers are getting about the need to regulate and monitor screen time are much louder than messages about how they can actively engage with young people's online interests. Mimi's recent book, Affinity Online: How Connection and Shared Interest Fuel Learning, features a range of mixed-age online communities that demonstrate how young people can learn from other young people and adult experts online. Often it's the young people themselves who create communities, enforce norms, and insist on high-quality content. One of the cases, investigated by Rachel Cody Pfister as part of her PhD work at the University of California, San Diego, is Hogwarts at Ravelry, a community of Harry Potter fans who knit together on Ravelry, an online platform for fiber arts. A 10-year-old girl founded the community, and members ranged from 11 to 70-plus at the time of Rachel's study.

Hogwarts at Ravelry is just one of a multitude of free and open intergenerational online learning communities of different shapes and sizes. The MIT Media Lab, where I work, is home to Scratch, a project created in the Lifelong Kindergarten group that gives millions of young people around the world a safe, creative, and healthy space for creative coding. Some Reddit groups, like /r/aww for cute animal content or a range of subreddits on Pokémon Go, are lively spaces of intergenerational communication. As with Scratch, these massive communities thrive because of strict content and community guidelines, algorithms optimized to support those norms, and dedicated human moderation.

YouTube is also an excellent source of content for learning and discovering new interests. One now famous 12-year-old learned to dubstep just by watching YouTube videos, for example. The challenge is squaring the incentives of free-for-all commercial platforms like YouTube with the needs of special populations like young people and intergenerational sub-communities with specific norms and standards. We need to recognize that young people will make contact with commercial content and grown-ups online, and we need to figure out better ways to regulate and optimize platforms to serve participants of mixed ages. This means bringing young people’s interests, needs, and voices to the table, not shutting them out or making them invisible to online platforms and algorithms. This is why I’ve issued a call for research papers about algorithmic rights and protections for children together with my sister and our colleague and developmental psychologist, Candice Odgers. We hope to spark an interdisciplinary discussion of issues among a wide range of stakeholders to find answers to questions like: How can we create interfaces between the new, algorithmically governed platforms and their designers and civil society? How might we nudge YouTube and other platforms to be more like Scratch, designed for the benefit of young people and optimized not for engagement and revenue but instead for learning, exploration, and high-quality content? Can the internet support an ecosystem of platforms tailored to young people and mixed-age communities, where children can safely learn from each other, together with and from adults?

I know how important it is for young people to have connections to a world bigger and more diverse than their own. And I think that developers of these technologies (myself included) have a responsibility to design them based on scientific evidence and the participation of the public. We can’t leave it to commercial entities to develop and guide today’s learning platforms and internet communities—but we can’t shut these platforms down or prevent children from having access to meaningful online relationships and knowledge, either.

A few weeks ago I was asked to make some remarks at the MIT-Harvard Conference on the Uyghur Human Rights Crisis. When the student organizer of the conference, Zuly Mamat, asked me to speak at the event, I wasn't sure what I would say because I'm definitely not an expert on this topic. But as I dove into researching what is happening to the Uyghur community in China, I realized that it connected to a lot of the themes I have run up against in my own work, particularly the importance of considering the ethical and social implications of technology early on in the design and development process. The Uyghur human rights crisis demonstrates how the technology we build, even with the best of intentions, may be used to surveil and harm people. Many of my activities these days are focused on the prevention of misuse of technology in the future, but it requires more than just bolting ethicists onto product teams - I think it involves a fundamental shift in our priorities and a redesign of the relationship of the humanities and social sciences with engineering and science in academia and society. As a starting point, I think it is critically important to facilitate conversations about this problem through events like this one. You can view the video of the event here and read my edited remarks below.


Edited transcript.

Hello, I'm Joi Ito, the Director of the MIT Media Lab. I'm probably the least informed about this topic of everyone here, so first of all, I'm very grateful to all of the people who have been working on it and for helping me get more informed. I'm broadly interested in human rights, their relationship with technology, and the role of Harvard, MIT, and academia in general in intervening in these types of situations. So I want to talk mainly about that.

One of the things to think about, not just in this case but also more broadly, is the role of technology in surveillance and human rights. In the talks today, we've heard about some specific examples of how technology is being used to surveil the Uyghur community in China, but I thought I'd talk about it a little more generally. I specifically want to address the continuing investment in, and ascension of, engineering and the sciences in the world through ventures like MIT's new College of Computing, in terms of their influence and the scale at which they're being deployed. I believe that thinking about the ethical aspects of these investments is essential.

I remember when J.J. Abrams, one of our Director's Fellows and a film director for those of you who don't know, visited the Media Lab. We have 500 or so ongoing projects at the Media Lab and he asked some of the students, "Do you do anything that involves things like war or surveillance or things that you know, harm people?" And all of the students said, "No, of course we don't do that kind of thing. We make technology for good." And then he said, "Well let me re-frame that question, can you imagine an evil villain in any of my shows or movies using anything here to do really terrible things?" And everybody went, "Yeah!"

What's important to understand is that most engineers and scientists are developing tools to try to help the world, whether it's trying to model the brains of children in order to increase the quality and the effectiveness of education, or using sensors to help farmers grow crops. But what most people don't spend enough time thinking about is the dual use nature of the technology - the fact that technology can easily be used in ways that the designer did not intend.

Now, I think there are a lot of arguments about whose job it is to think about how technology can be used in unexpected and harmful ways. If I took the faculty in the Media Lab and put them on a line where at one end, the faculty believe we should think about all the social implications before doing anything, and at the other end they believe we should just build stuff and society will figure it out, I think there would be a fairly even distribution along the line. I would say that at MIT that's also roughly true. My argument is that we actually have to think more about the social implications of technology before designing it. It's very hard to un-design things, and I'm not saying that it's an easy task, and I'm not saying that we have to get everything perfect, but I think that having a more coherent view of the world and these implications is tremendously important.

The Media Lab is a little over 30 years old, and I've been there for 8 years, but I was very involved in the early days of the Internet. The other day, I was describing to Susan Silbey, the current faculty chair at MIT, how when we were building the Internet we thought that if we could just provide a voice to everyone, if we could just connect everyone together, we would have world peace. I really believed that when we started, and I was expressing to Susan how naïve I feel now that the Internet has become something more akin to the little girl in The Exorcist, for those of you who have seen the movie. But Susan, being an anthropologist and historian, said, "Well, when you guys talked about connecting everybody together, we knew. The social scientists knew that it was going to be a mess."

One of the really important things I learned from my conversation with Susan was the extent to which the humanities have thought about and fought about a lot of these things. History has taught us a lot of these things. I know that it's somewhat taboo to invoke Nazi Germany in too many conversations, but if you look at the data that was collected in Europe to support social services, much of it was later used by the Nazis to round up and persecute the Jews. And it's not exactly the same situation, but a lot of the databases that we're creating to help poor and disadvantaged families are also being used by immigration services to find and target people for deportation.

Even the databases and technology that we use and create with the best of intentions can be subverted depending on who's in charge. So thinking about these systems is tremendously important. At MIT, we are, as I think Zuly mentioned in some of the specifics, working with tech companies that are working directly on surveillance technology or are in some way creating technologies that could be used for surveillance in China. Again, thinking about the ethical issues is very important. I will point out that there are whole disciplines that work on this: STS, or science, technology, and society, is really about this. They think about the impact of science and technology on society, in a historical context, and provide us with a framework for thinking about these things. Thinking about how to integrate anthropology and STS into both the curriculum and the research at MIT is tremendously important.

The other thing to think about is allowing engineers more freedom to explore the application and impact of their work. One of the problems with scholarship is that many researchers don't have the freedom to fully test their hypotheses. For example, in January, Eric Topol tweeted about his paper showing that of the 15 most impactful machine learning and medicine papers that had been published, none had been clinically validated. In many cases in machine learning, you get some data, you tweak it, you get very high effectiveness, and then you walk away. Then the clinicians come in and say "oh, but we can't replicate this, and we don't have the expertise" or "we tried it but it doesn't seem to work in practice." If you're following an academic path, we're not providing the proper incentives for computer scientists to integrate and work closely with the clinicians in the field. One of the other challenges we have is that our reward systems and incentives don't encourage technologists to explore the social implications of the tech they produce. When this is the case, you fall a little bit short of actually getting to the question, "well, what does this actually mean?"
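To make that gap concrete, here is a minimal toy sketch of the pattern, entirely my own illustration with synthetic data and scikit-learn, not any of the papers Topol reviewed: a model "tweaked" on its development cohort looks excellent there, then loses much of its apparent effectiveness on a cohort drawn from a different clinical population.

```python
# Toy illustration (synthetic data, hypothetical cohorts): a model tuned on its
# development data degrades when the deployment population differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Synthetic patients: 5 features; the outcome depends on feature 0,
    and (only in the development cohort) on feature 1 as well."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    signal = X[:, 0] + (0.0 if shift else 2.0) * X[:, 1]
    y = (signal + rng.normal(size=n) > 0).astype(int)
    return X, y

X_dev, y_dev = make_cohort(2000)                 # data the ML team tuned on
X_clinic, y_clinic = make_cohort(2000, shift=1.0)  # a different clinical population

model = LogisticRegression().fit(X_dev, y_dev)
print("AUC on development data:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
print("AUC in the 'clinic':    ", roc_auc_score(y_clinic, model.predict_proba(X_clinic)[:, 1]))
```

The point of the sketch is only that internal numbers can be a poor proxy for what clinicians will see, which is exactly the validation work our incentives rarely reward.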

I co-teach a course at Harvard Law School called the Applied Challenges in Ethics and Governance of Artificial Intelligence, and through that class we've explored some research that considers the ethical and social impact of AI. To give you an example, one Media Lab project that we discussed was looking at risk scores used by the criminal justice system for sentencing and pre-trial assessments and bail. The project team initially thought "oh, we could just use a blockchain to verify the data and make the whole criminal sentencing system more efficient." But as the team started looking into it, they realized that the whole criminal justice system was somewhat broken. And as they started going deeper and deeper into the problem, they realized that while these prediction systems were making policing and judging possibly more efficient, they were also taking power away from the predictee and giving it to the predictor.

Basically, these automated systems were saying "okay, if you happen to live in this zip code, you will have a higher recidivism rate." But in reality, rearrest has more to do with policing and policy and the courts than it does with the criminality of the individual. By saying that this risk score can accurately predict how likely it is that this person will commit another crime, you're attributing the agency to the individual when actually much of the agency lies with the system. And by focusing on making the prediction tool more accurate, you end up ignoring existing weaknesses and biases in the overall justice system and the cause of those weaknesses. It's reminiscent of Caley Horan's writing on the history of insurance and redlining. She looks at the way in which insurance pricing, called actuarial fairness, became a legitimate way to use math to discriminate against people and how it took the debate away from the feminists and the civil rights leaders and made it an argument about the accuracy of algorithms.
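To illustrate that point, here is a small simulation, my own toy sketch rather than the Media Lab project's code, assuming numpy and scikit-learn: two hypothetical zip codes with identical underlying offense rates but different policing intensity produce a "risk score" that confidently ranks one neighborhood as riskier, even though the difference comes entirely from the system, not from the individuals.

```python
# Toy simulation (hypothetical numbers): rearrest data reflects policing
# intensity, so a score trained on zip code misattributes agency to individuals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 10_000

# Two zip codes with *identical* underlying offense rates...
offense = rng.random(n) < 0.10
zip_b = rng.random(n) < 0.5          # True = lives in zip code B

# ...but zip code B is policed three times as heavily, so offenses there are
# far more likely to end in a recorded rearrest.
detection_rate = np.where(zip_b, 0.60, 0.20)
rearrest = offense & (rng.random(n) < detection_rate)

# A "risk score" trained only on where the person lives.
model = LogisticRegression().fit(zip_b.reshape(-1, 1).astype(float), rearrest)
print("Predicted 'risk' in zip A:", model.predict_proba([[0.0]])[0, 1])
print("Predicted 'risk' in zip B:", model.predict_proba([[1.0]])[0, 1])
# The score is "accurate" about recorded rearrests, yet the gap it reports
# comes entirely from policing intensity, not from the individuals.
```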

The researchers who were trying to improve the criminal risk scoring system have completely pivoted to recommending that we stop using automated decision making in criminal justice. Instead they think we should use technology to look at the long term effects of policies in the criminal justice system and not to predict the criminality of individuals.

But this outcome is not common. I find that whether we're talking about tenure cases or publications or funding, we don't typically allow our researchers to end up in places that contradict the fundamental place where they started. So I think that's another thing that's really important. How do we create both research and curricular opportunities for people to explore their initial assumptions and hypotheses? As we continue this conversation, we should ask "how can we integrate this into our educational system?" Our academic process is really important, and I love that we have scholars working on this, but how we bring this mentality to engineers and scientists is something I'd love to think about, and maybe in the Breakout Sessions we can work on that.

Now I want to pivot a little bit and talk about the role of academia in the Uyghur crisis. I know there are people who view this meeting as provocative or political, and it reminds me of the March for Science that we had several years ago. I gave a talk at the first March for Science. Before the talk, when I was at a dinner table with a bunch of faculty (I won't name the faculty), someone said, "Why are you doing that? It's very political. We try not to be political, we're just scientists." And I said, "Well, when it becomes political to tell the truth, when being supportive of climate science is political, when trying to support fundamental scientific research is political, then I'm political." So I don't want to be partisan, but if the truth is political, then I think we need to be political.

And this is not a new concept. If you look at the history of MIT, or just the history of academic freedom (there's the Statement of Principles on Academic Freedom and Tenure), you will find a lot of interesting MIT history. In the late '40s and '50s, during the McCarthy period, society was going after communists and left-wing people out of fear of Communism. Many institutions were turning over their left-wing Marxist academics, or firing them under pressure from the government. But MIT was quite good about protecting its Marxist-affiliated faculty, and there's a very famous case that shows this. Dirk Struik, a math professor at MIT, was indicted by a Middlesex grand jury in 1951 on charges of advocating the overthrow of the US and Massachusetts governments. At the time, MIT suspended him with pay, but once the court abandoned the case due to lack of evidence and the fact that states shouldn't be ruling on this type of charge, MIT reinstated Professor Struik. This is a quote from James Killian, the president at the time, about the incident.

"MIT believes that its faculty, as long as its members abide by the law, maintain the dignity and responsibility of their position, must be free to inquire, to challenge and to doubt in their search for what is true and good. They must be free to examine controversial matters, to reach conclusions of their own, to criticize and be criticized, and only through such unqualified freedom of thought and investigation can an educational institution, especially one dealing with science, perform its function of seeking truth."

Many of you may wonder why we have tenure at universities. We have tenure to protect our ability to question authority, to speak the truth and to really say what we think without fear of retribution.

There's another important case that demonstrates MIT's willingness to protect its faculty and students. In the early 1990s, MIT and a group of Ivy League schools came up with the idea of providing financial aid to low-income students on the basis of need. The schools got together to coordinate how they would assess need and how they would figure out how much financial aid to give to students. Weirdly, the United States government sued them, claiming the coordination violated antitrust law, which was ridiculous because it was charity. Most of the other universities caved in after the lawsuit, but Chuck Vest, the president at the time, said, "MIT has a long history of admitting students based on merit and a tradition of ensuring these students full financial aid." He refused to deny students financial aid, and a multi-year lawsuit ensued, which MIT eventually won. This need-based scholarship system was then enshrined in actual policy in the United States.

Many of the people who are here at MIT today probably don't remember this, but there's a great documentary film that shows MIT students and faculty literally clashing with police on these streets in an anti-Vietnam War protest 50 years ago. So in the not so distant past, MIT has been a very political place when it meant protecting our freedom to speak up.

More recently, I personally experienced this support for academic freedom. When Chelsea Manning's fellowship at the Harvard Kennedy School was rescinded, she emailed me and asked if she could speak at the Media Lab. I was thinking about it, and I asked the administration what they thought, and they thought it was a terrible idea. And when they told me that, I said, "You know, now that means I have to invite her." I remember our Provost, Marty, saying, "I know." And that's what I think is wonderful about being here at MIT: the fact that the administration understands that faculty must be allowed to act independently of the Institute. Another example is when the administration was deciding what to do about funding from Saudi Arabia. The administration released a report, which had a few critics, that basically said, "we're going to let people decide what they want to do." I think each group or faculty member at MIT is permitted to make their own decision about whether to accept funding from Saudi Arabia. MIT, in my experience, has always stood by the academic freedom of whatever unit at the Institute is trying to do what it wants to do.

I think we're in a very privileged place, and I think that it's not only our freedom but our obligation to speak up. It's also our responsibility to fight for the academic freedom of people in our community as well as people in other communities, and to provide leadership. I really do want to thank the organizers of this conference for doing that. I think it's very bold, but I think it's very becoming of both MIT and Harvard. I read a very disturbing report from Human Rights Watch that talked about how Chinese scholars overseas are starting to have difficulties in speaking up, which I think is somewhat unprecedented because of the capabilities of today's technology. And I think there are similar reports about scholars from Saudi Arabia. The ability of these countries to surveil their citizens overseas and impinge on their academic freedom is a tremendously important topic to discuss and think about, technically, legally, and otherwise. I think it's also very important for us to talk about how to protect the freedoms of students studying here.

Thank you again for making this topic now very front of mind for me. On the panel I'd love to try to describe some concrete steps that we can take to continue to protect this freedom that we have. Thank you.

Credits

Transcription and editing: Samantha Bates