Joi Ito's Web

Joi Ito's conversation with the living web.


A few weeks ago I was asked to make some remarks at the MIT-Harvard Conference on the Uyghur Human Rights Crisis. When the student organizer of the conference, Zuly Mamat, asked me to speak at the event, I wasn't sure what I would say because I'm definitely not an expert on this topic. But as I dove into researching what is happening to the Uyghur community in China, I realized that it connected to a lot of the themes I have run up against in my own work, particularly the importance of considering the ethical and social implications of technology early on in the design and development process. The Uyghur human rights crisis demonstrates how the technology we build, even with the best of intentions, may be used to surveil and harm people. Many of my activities these days are focused on the prevention of misuse of technology in the future, but it requires more than just bolting ethicists onto product teams - I think it involves a fundamental shift in our priorities and a redesign of the relationship of the humanities and social sciences with engineering and science in academia and society. As a starting point, I think it is critically important to facilitate conversations about this problem through events like this one. You can view the video of the event here and read my edited remarks below.


Edited transcript.

Hello, I'm Joi Ito, the Director of the MIT Media Lab. I'm probably the least informed about this topic of everyone here, so first of all, I'm very grateful to all of the people who have been working on this topic and for helping me get more informed. I'm broadly interested in human rights, its relationship with technology, and the role of Harvard, MIT, and academia in general in intervening in these types of situations. So I want to talk mainly about that.

One of the things to think about, not just in this case but also more broadly, is the role of technology in surveillance and human rights. In the talks today, we've heard about some specific examples of how technology is being used to surveil the Uyghur community in China, but I thought I'd talk about it a little more generally. I specifically want to address the continuing investment in, and ascension of, engineering and the sciences through ventures like MIT's new College of Computing, in terms of their influence and the scale at which they're being deployed. I believe that thinking about the ethical aspects of these investments is essential.

I remember when J.J. Abrams, one of our Director's Fellows and a film director for those of you who don't know, visited the Media Lab. We have 500 or so ongoing projects at the Media Lab and he asked some of the students, "Do you do anything that involves things like war or surveillance or things that you know, harm people?" And all of the students said, "No, of course we don't do that kind of thing. We make technology for good." And then he said, "Well let me re-frame that question, can you imagine an evil villain in any of my shows or movies using anything here to do really terrible things?" And everybody went, "Yeah!"

What's important to understand is that most engineers and scientists are developing tools to try to help the world, whether it's trying to model the brains of children in order to increase the quality and the effectiveness of education, or using sensors to help farmers grow crops. But what most people don't spend enough time thinking about is the dual-use nature of the technology - the fact that technology can easily be used in ways that the designer did not intend.

Now, I think there are a lot of arguments about whose job it is to think about how technology can be used in unexpected and harmful ways. If I took the faculty in the Media Lab and put them on a line where at one end, the faculty believe we should think about all the social implications before doing anything, and at the other end they believe we should just build stuff and society will figure it out, I think there would be a fairly even distribution along the line. I would say that at MIT that's also roughly true. My argument is that we actually have to think more about the social implications of technology before designing it. It's very hard to un-design things, and I'm not saying that it's an easy task, and I'm not saying that we have to get everything perfect, but I think that having a more coherent view of the world and these implications is tremendously important.

The Media Lab is a little over 30 years old, and I've been there for 8 years, but I was very involved in the early days of the Internet. The other day, I was describing to Susan Silbey, the current faculty chair at MIT, how when we were building the Internet we thought if we could just provide a voice to everyone, if we could just connect everyone together, we would have world peace. I really believed that when we started, and I was expressing to Susan how naïve I feel now that the Internet has become something more akin to the little girl in The Exorcist, for those of you who have seen the movie. But Susan, being an anthropologist and historian, said, "Well when you guys talked about connecting everybody together, we knew. The social scientists knew that it was going to be a mess."

One of the really important things I learned from my conversation with Susan was the extent to which the humanities have thought about and fought about a lot of these things. History has taught us a lot of these things. I know that it's somewhat taboo to invoke Nazi Germany in too many conversations, but if you look at the data that was collected in Europe to support social services, much of it was later used by the Nazis to round up and persecute the Jews. And it's not exactly the same situation, but a lot of the databases that we're creating to help poor and disadvantaged families are also being used by the immigration services to find and target people for deportation.

Even the databases and technology that we use and create with the best of intentions can be subverted depending on who's in charge. So thinking about these systems is tremendously important. At MIT, we are, and I think that Zuly mentioned some of the specifics, working with tech companies that are working directly on surveillance technology or are in some way creating technologies that could be used for surveillance in China. Again, thinking about the ethical issues is very important. I will point out that there are whole disciplines that work on this - STS, science, technology, and society - that's really what they do. They think about the impact of science and technology in society. They think about it in a historical context and provide us with a framework for thinking about these things. Thinking about how to integrate anthropology and STS into both the curriculum and the research at MIT is tremendously important.

The other thing to think about is allowing engineers more freedom to explore the application and impact of their work. One of the problems with scholarship is that many researchers don't have the freedom to fully test their hypotheses. For example, in January, Eric Topol tweeted about his paper showing that of the 15 most impactful machine learning and medicine papers that had been published, none had been clinically validated. In many cases in machine learning, you get some data, you tweak it, you get a very high effectiveness, and then you walk away. Then the clinicians come in and they say, "oh, but we can't replicate this, and we don't have the expertise," or "we tried it but it doesn't seem to work in practice." If you're following an academic path, we're not providing the proper incentives for computer scientists to integrate with and work closely with the clinicians in the field. One of the other challenges we have is that our reward systems and incentives don't encourage technologists to explore the social implications of the tech they produce. When this is the case, you fall a little bit short of actually getting to the question, "well, what does this actually mean?"

I co-teach a course at Harvard Law School called the Applied Challenges in Ethics and Governance of Artificial Intelligence, and through that class we've explored some research that considers the ethical and social impact of AI. To give you an example, one Media Lab project that we discussed was looking at risk scores used by the criminal justice system for sentencing and pre-trial assessments and bail. The project team initially thought "oh, we could just use a blockchain to verify the data and make the whole criminal sentencing system more efficient." But as the team started looking into it, they realized that the whole criminal justice system was somewhat broken. And as they started going deeper and deeper into the problem, they realized that while these prediction systems were making policing and judging possibly more efficient, they were also taking power away from the predictee and giving it to the predictor.

Basically, these automated systems were saying "okay, if you happen to live in this zip code, you will have a higher recidivism rate." But in reality, rearrest has more to do with policing and policy and the courts than it does with the criminality of the individual. By saying that this risk score can accurately predict how likely it is that this person will commit another crime, you're attributing the agency to the individual when actually much of the agency lies with the system. And by focusing on making the prediction tool more accurate, you end up ignoring existing weaknesses and biases in the overall justice system and the cause of those weaknesses. It's reminiscent of Caley Horan's writing on the history of insurance and redlining. She looks at the way in which insurance pricing, called actuarial fairness, became a legitimate way to use math to discriminate against people and how it took the debate away from the feminists and the civil rights leaders and made it an argument about the accuracy of algorithms.

The researchers who were trying to improve the criminal risk scoring system have completely pivoted to recommending that we stop using automated decision making in criminal justice. Instead they think we should use technology to look at the long term effects of policies in the criminal justice system and not to predict the criminality of individuals.

But this outcome is not common. I find that whether we're talking about tenure cases or publications or funding, we don't typically allow our researchers to end up in places that contradict the fundamental place where they started. So I think that's another thing that's really important. How do we create both research and curricular opportunities for people to explore their initial assumptions and hypotheses? As we continue this conversation, we should ask, "how can we integrate this into our educational system?" Our academic process is really important and I love that we have scholars working on this, but how we bring this mentality to engineers and scientists is something that I'd love to think about, and maybe in the Breakout Sessions we can work on that.

Now I want to pivot a little bit and talk about the role of academia in the Uyghur crisis. I know there are people who view this meeting as provocative or political and it reminds me of the March for Science that we had several years ago. I gave a talk at the first March for Science. Before the talk, when I was at a dinner table with a bunch of faculty (I won't name the faculty), someone said, "Why are you doing that? It's very political. We try not to be political, we're just scientists." And I said, "Well when it becomes political to tell the truth, when being supportive of climate science is political, when trying to support fundamental scientific research is political, then I'm political." So I don't want to be partisan, but I think if the truth is political, then I think we need to be political.

And this is not a new concept. If you look at the history of MIT, or just the history of academic freedom (there's the Statement of Principles on Academic Freedom and Tenure), you will find a bunch of interesting MIT history. In the late 40s and 50s, during the McCarthy period, society was going after communists and left-wing people out of fear of Communism. And many institutions were turning over their left-wing Marxist academics, or firing them under pressure from the government. But MIT was quite good about protecting its Marxist-affiliated faculty, and there's a very famous case that shows this. Dirk Struik, a math professor at MIT, was indicted by the Middlesex grand jury in 1951 on charges of advocating the overthrow of the US and Massachusetts governments. At the time MIT suspended him with pay, but once the court abandoned the case due to lack of evidence and the fact that states shouldn't be ruling on this type of charge, MIT reinstated Professor Struik. This is a quote about the incident from the president at the time, James Killian.

"MIT believes that its faculty, as long as its members abide by the law, maintain the dignity and responsibility of their position, must be free to inquire, to challenge and to doubt in their search for what is true and good. They must be free to examine controversial matters, to reach conclusions of their own, to criticize and be criticized, and only through such unqualified freedom of thought and investigation can an educational institution, especially one dealing with science, perform its function of seeking truth."

Many of you may wonder why we have tenure at universities. We have tenure to protect our ability to question authority, to speak the truth and to really say what we think without fear of retribution.

There's another important case that demonstrates MIT's willingness to protect its faculty and students. In the early 1990s, MIT and a bunch of Ivy League schools came up with this idea to provide financial aid for low-income students on a need basis. The Ivy League schools got together to coordinate on how they would assess need and how they would figure out how much financial aid to give to students. Weirdly, the United States government sued the Ivy League schools, claiming an antitrust violation, which was ridiculous because it was a charity. Most of the other universities caved in after this lawsuit, but Chuck Vest, the president at the time, said, "MIT has a long history of admitting students based on merit and a tradition of ensuring these students full financial aid." He refused to deny students financial aid, and a multi-year lawsuit ensued, which MIT eventually won. And then this need-based scholarship system was enshrined in actual policy in the United States.

Many of the people who are here at MIT today probably don't remember this, but there's a great documentary film that shows MIT students and faculty literally clashing with police on these streets in an anti-Vietnam War protest 50 years ago. So in the not so distant past, MIT has been a very political place when it meant protecting our freedom to speak up.

More recently, I personally experienced this support for academic freedom. When Chelsea Manning's fellowship at the Harvard Kennedy School was rescinded, she emailed me and asked if she could speak at the Media Lab. I was thinking about it, and I asked the administration what they thought, and they thought it was a terrible idea. And when they told me that I said, "You know, now that means I have to invite her." I remember our Provost Marty saying, "I know." And that's what I think is wonderful about being here at MIT: the fact that the administration understands that faculty must be allowed to act independently of the Institute. Another example is when the administration was deciding what to do about funding from Saudi Arabia. The administration released a report, which has its critics, that basically said, "we're going to let people decide what they want to do." I think each group or faculty member at MIT is permitted to make their own decision about whether to accept funding from Saudi Arabia. MIT, in my experience, has always stood by the academic freedom of whatever unit at the Institute is trying to do what it wants to do.

I think we're in a very privileged place, and I think that it's not only our freedom, but our obligation to speak up. It's also our responsibility to fight for the academic freedom of people in our community as well as people in other communities, and to provide leadership. I really do want to thank the organizers of this conference for doing that. I think it's very bold, but I think it's very becoming of both MIT and Harvard. I read a very disturbing report from Human Rights Watch that talked about how Chinese scholars overseas are starting to have difficulties in speaking up, which I think is somewhat unprecedented because of the capabilities of today's technology. And I think there are similar reports about scholars from Saudi Arabia. The ability of these countries to surveil their citizens overseas and impinge on their academic freedom is a tremendously important topic to discuss and to think about technically, legally, and otherwise. I think it's also very important for us to talk about how to protect the freedoms of students studying here.

Thank you again for making this topic now very front of mind for me. On the panel I'd love to try to describe some concrete steps that we can take to continue to protect this freedom that we have. Thank you.

Credits

Transcription and editing: Samantha Bates

Applied Ethical and Governance Challenges in Artificial Intelligence (AI)

Part 3: Intervention

We recently completed the third and final section of our course that I co-taught with Jonathan Zittrain and TA'ed by Samantha Bates, John Bowers and Natalie Saltiel. The plan was to try to bring the discussion of diagnosis and prognosis in for a landing and figure out how to intervene.

The first class of this section (the eighth class of the course) looked at the use of algorithms in decision making. One paper that we read was the most recent in a series of papers by Jon Kleinberg, Sendhil Mullainathan and Cass Sunstein that supported the use of algorithms in decision making such as pretrial risk assessments - the particular paper we read focused on the use of algorithms for measuring the bias of the decision making. Sendhil Mullainathan, one of the authors of the paper, joined us in the class. The second paper was by Rodrigo Ochigame, a student in history and science, technology, and society (STS), who criticized the fundamental premise of reducing notions such as "fairness" to "computational formalisms" such as algorithms. The discussion, which at points took the form of a lively debate, was extremely interesting and helped us and the students see how important it is to question the framing of the questions and the assumptions that we often make when we begin working on a solution without coming to a societal agreement about the problem.

In the case of pretrial risk assessments, the basic question about whether rearrests are more of an indicator of policing practice or the "criminality of the individual" fundamentally changes whether the focus should be on the "fairness" and accuracy of the prediction of the criminality of the individual or whether we should be questioning the entire system of incarceration and its assumptions.

At the end of the class, Sendhil agreed to return to have a deeper and longer conversation with my Humanizing AI in Law (HAL) team to discuss this issue further.

In the next class, we discussed the history of causal inference and how statistics and correlation have dominated modern machine learning and data analysis. We discussed the difficulties and challenges in validating causal claims but also the importance of causal claims. In particular, we looked at how legal precedent has from time to time made references to the right to individualized sentencing. Clearly, risk scores used in sentencing that are protected by trade secrets and confidentiality agreements challenge the right to due process as expressed in the Wisconsin v. Loomis case as well as the right to an individualized sentence.

The last class focused on adversarial examples and technical debt - which helped us think about when and how policies and important "tests" and controls can and should be put in place, versus when, if ever, we should just "move quickly and break things." I'm not sure if it was the consensus of the class, but I felt that we needed a new design process that allowed for the creation of design stories and "tests" developed by the users and members of the affected communities, integrated into the development process - participatory design deeply woven into something that looks like agile development's story and test development processes. Fairness and other contextual parameters are dynamic and can only be managed through interactions with the systems in which the algorithms are deployed. Figuring out a way to integrate the dynamic nature of the social system seems like a possible approach for mitigating a category of technical debt that arises when systems are untethered from the normative environments in which they are deployed.

Throughout the course, I observed students learning from one another, rethinking their own assumptions, and collaborating on projects outside of class. We may not have figured out how to eliminate algorithmic bias or come up with a satisfactory definition of what makes an autonomous system interpretable, but we did find ourselves having conversations and coming to new points of view that I don't think would have happened otherwise.

It is clear that integrating the humanities and social sciences into the conversation about law, economics and technology is required for us to navigate ourselves out of the mess that we've created and to chart a way forward into our uncertain future with our increasingly algorithmic societal systems.

- Joi

Syllabus Notes

By Samantha Bates

In our final stage of the course, the intervention stage, we investigated potential solutions to the problems we identified earlier in the course. Class discussions included consideration of the various tradeoffs of implementing potential solutions and places to intervene in different systems. We also investigated the balance between waiting to address potential weaknesses in a given system until after deployment versus proactively correcting deficiencies before deploying the autonomous system.

Class Session 8: Intervening on behalf of fairness

This class was structured as a conversation involving two guests, University of Chicago Booth School of Business Professor Sendhil Mullainathan and MIT PhD student Rodrigo Ochigame. As a class we debated whether elements of the two papers were reconcilable given their seemingly opposite viewpoints.

  • "Discrimination in the Age of Algorithms" by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein (February 2019).

  • [FORTHCOMING] "The Illusion of Algorithmic Fairness" by Rodrigo Ochigame (2019)

The main argument in "Discrimination in the Age of Algorithms" is that algorithms make it easier to identify and prevent discrimination. The authors point out that current obstacles to proving discrimination are primarily caused by opacity around human decision making. Human decision makers can make up justifications for their decisions after the fact or may be influenced by bias without even knowing it. The authors argue that by making algorithms transparent, primarily through the use of counterfactuals, we can determine which components of the algorithm are causing a biased outcome. The paper also suggests that we allow algorithms to consider personal attributes such as race and gender in certain contexts because doing so could help counteract human bias. For example, if managers consistently give higher performance ratings to male workers over female workers, the algorithm won't be able to figure out that managers are discriminating against women in the workplace if it can't incorporate data about gender. But if we allow the algorithm to be aware of gender when calculating work productivity, it may be able to uncover existing biases and prevent them from being perpetuated.
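The counterfactual idea the authors describe can be made concrete with a small sketch. Everything below is a hypothetical illustration, not code or data from the paper: a toy model that improperly keys on a protected attribute, and an audit that flips that attribute to measure how often the decision changes.

```python
# Hedged sketch of a counterfactual audit: flip a protected attribute and
# measure how often the model's decision changes. The model, features, and
# data are invented for illustration; they are not from the paper.

def counterfactual_flip_rate(model, rows, attr="gender"):
    """Fraction of rows whose prediction changes when `attr` is flipped."""
    changed = 0
    for row in rows:
        original = model(row)
        flipped = dict(row, **{attr: 1 - row[attr]})  # the counterfactual input
        if model(flipped) != original:
            changed += 1
    return changed / len(rows)

# Toy "model" that (improperly) uses gender directly in its decision.
biased_model = lambda row: int(row["score"] + 2 * row["gender"] > 5)

rows = [{"score": 4, "gender": 1}, {"score": 4, "gender": 0},
        {"score": 6, "gender": 0}, {"score": 1, "gender": 1}]
print(counterfactual_flip_rate(biased_model, rows))  # 0.5: gender flips half the decisions
```

A nonzero flip rate localizes the bias to a specific input, which is the kind of transparency the authors argue human decision makers cannot offer.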

The second assigned reading, "The Illusion of Algorithmic Fairness," demonstrates that attempts to reduce elements of fairness to mathematical equations have persisted throughout history. Discussions about algorithmic fairness today mirror many of the same points of contention reached in past debates about fairness, such as whether we should optimize for utility or optimize for fair outcomes. Consequently, fairness debates today have inherited some assumptions from these past discussions. In particular, we "take many concepts for granted including probability, risk, classification, correlation, regression, optimization, and utility." The author argues that despite our technical advances, fairness remains "irreducible to a mathematical property of algorithms, independent from specific social contexts." He shows that any attempt at formalism will ultimately be influenced by the social and political climate of the time. Moreover, researchers frequently use misrepresentative, historical data to create "fair" algorithms. The way that the data is framed and interpreted can be misrepresentative and frequently reinforces existing discrimination (for example, predictive policing algorithms predict future policing, not future crime).

These readings set the stage for a conversation about how we should approach developing interventions. While "Discrimination in the Age of Algorithms" makes a strong case for using algorithms (in conjunction with counterfactuals) to improve the status quo and make it easier to prove discrimination in court, "The Illusion of Algorithmic Fairness" cautions against trying to reduce components of fairness to mathematical properties. The "Illusion of Algorithmic Fairness" paper shows that this is not a new endeavor. Humans have tried to standardize the concept of fairness as early as 1700 and we have proved time and again that determining what is fair and what is unfair is much too complicated and context dependent to model in an algorithm.

Class Session 9: Intervening on behalf of interpretability

In our second to last class, we discussed causal inference, how it differs from correlative machine learning techniques, and its benefits and drawbacks. We then considered how causal models could be deployed in the criminal justice context to generate individualized sentences and what an algorithmically informed individualized sentence would look like.

The Book of Why describes the emerging field of causal inference, which attempts to model how the human brain works by considering cause and effect relationships. The introduction delves a little into the history of causal inference and explains that it took time for the field to develop because it was nearly impossible for scientists to communicate causal relationships using mathematical terms. We've now devised ways to model what the authors call "the do-operator" (which indicates that there was some action/form of intervention that makes the relationship causal rather than correlative) through diagrams, mathematical formulas and lists of assumptions.

One main point of the introduction and the book is that "data are dumb" because they don't explain why something happened. A key component of causal inference is the creation of counterfactuals to help us understand what would have happened had certain circumstances been different. The hope with causal inference is that it will be less impacted by bias because causal inference models do not look for correlations in data, but rather focus on the "do-operator." A causal inference approach may also make algorithms more interpretable because counterfactuals will offer a better way to understand how the AI makes decisions.
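The gap between observing and intervening can be illustrated with a toy simulation. This is a sketch under invented assumptions, not an example from The Book of Why: a confounder drives both treatment and outcome, so the observed conditional probability overstates the effect that the do-operator reveals.

```python
# Hedged sketch of the do-operator: in a toy causal model, conditioning on
# treatment (observation) and setting treatment (intervention) give different
# answers. All probabilities here are invented for illustration.
import random

random.seed(0)

def sample(do_treatment=None):
    """One draw from confounder -> treatment -> outcome, confounder -> outcome.
    Passing do_treatment severs the confounder->treatment arrow (the "do")."""
    confounder = random.random() < 0.5
    treatment = confounder if do_treatment is None else do_treatment
    outcome = (0.2 + 0.3 * treatment + 0.5 * confounder) > random.random()
    return treatment, outcome

def mean_outcome(draws):
    return sum(o for _, o in draws) / len(draws)

n = 100_000
observed = [s for s in (sample() for _ in range(n)) if s[0]]  # P(outcome | treatment)
intervened = [sample(do_treatment=True) for _ in range(n)]    # P(outcome | do(treatment))
print(mean_outcome(observed))    # close to 1.0: treated units all carry the confounder
print(mean_outcome(intervened))  # close to 0.75: the true interventional effect
```

The observational estimate is inflated because, in this toy model, everyone who is treated also has the confounder; "doing" the treatment removes that entanglement, which is exactly the counterfactual reasoning the reading describes.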

The other assigned reading, State of Wisconsin v. Eric Loomis, is a 2016 case about the use of risk assessment tools in the criminal justice system. In Loomis, the court used a risk assessment tool, COMPAS, to determine the defendant's risk of pretrial recidivism, general recidivism, and violent recidivism. The key question in this case was whether the judge should be able to consider the risk scores when determining a defendant's sentence. The State Supreme Court decided that judges could consider the risk score because they also take into account other evidence when making sentencing decisions. For the purposes of this class, the case provided a segue into a discussion about the right to an individualized sentence and whether risk assessment scores can result in fairer outcomes for defendants. However, it turns out that risk assessment tools should not be employed if the goal is to produce individualized sentences. Despite their appearance of generating unique risk scores for defendants, risk assessment scores are not individualized: they compare information about an individual defendant to data about similar groups of offenders to determine that individual's recidivism risk.

Class Session 10: Intervening against adversarial examples and course conclusion

We opened our final class with a discussion about adversarial examples and technical debt before wrapping up the course with a final reflection on the broader themes and findings of the course.

The term "technical debt" refers to the challenge of keeping machine learning systems up to date. While technical debt is a factor in any type of technical system, machine learning systems are particularly susceptible to collecting a lot of technical debt because they tend to involve many layers of infrastructure (code and non code). Technical debt also tends to accrue more in systems that are developed and deployed quickly. In a time crunch, it is more likely that new features will be added without deleting old ones and that the systems will not be checked for redundant features or unintended feedback loops before they are deployed. In order to combat technical debt, the authors suggest several approaches including, fostering a team culture that encourages simplifying systems and eliminating unnecessary features and creating an alert system that signals when a system has run up against pre-programmed limits and requires review.

During the course retrospective, students identified several overarching themes of the class, including the effectiveness and importance of interdisciplinary learning, the tendency of policymakers and industry leaders to emphasize short-term outcomes over the long-term consequences of decisions, the challenge of teaching engineers to consider the ethical implications of their work during the development process, and the lack of input from diverse groups in system design and deployment.

Credits

Syllabus Notes by Samantha L. Bates

Syllabus by Samantha Bates, John Bowers and Natalie Saltiel

Like most parents of young children, I've found that determining how best to guide my almost 2-year-old daughter's relationship with technology--especially YouTube and mobile devices--is a challenge. And I'm not alone: One 2018 survey of parents found that overuse of digital devices has become the number one parenting concern in the United States.

Empirically grounded, rigorously researched advice is hard to come by. So perhaps it's not surprising that I've noticed a puzzling trend among the friends who provide me with unsolicited parenting advice. In general, my most liberal and tech-savvy friends exercise the most control and are weirdly technophobic when it comes to their children's screen time. What's most striking to me is how many of their opinions about children and technology are not representative of the broader consensus of research, but seem to be based on fearmongering books, media articles, and TED talks that amplify and focus on only the especially troubling outcomes of too much screen time.

I often turn to my sister, Mimi Ito, for advice on these issues. She has raised two well-adjusted kids and directs the Connected Learning Lab at UC Irvine, where researchers conduct extensive research on children and technology. Her opinion is that "most tech-privileged parents should be less concerned with controlling their kids' tech use and more about being connected to their digital lives." Mimi is glad that the American Academy of Pediatrics (AAP) dropped its famous 2x2 rule--no screens for the first two years, and no more than two hours a day until a child hits 18. She argues that this rule fed into stigma and parent-shaming around screen time at the expense of what she calls "connected parenting"--guiding and engaging in kids' digital interests.

One example of my attempt at connected parenting is watching YouTube together with Kio, singing along with Elmo as Kio shows off the new dance moves she's learned. Every day, Kio has more new videos and favorite characters that she is excited to share when I come home, and the songs and activities follow us into our ritual of goofing off in bed as a family before she goes to sleep. Her grandmother in Japan is usually part of this ritual in a surreal situation where she is participating via FaceTime on my wife's iPhone, watching Kio watching videos and singing along and cheering her on. I can't imagine depriving us of these ways of connecting with her.

The (Unfounded) War on Screens

The anti-screen narrative can sometimes read like the War on Drugs. Perhaps the best example is Glow Kids, in which Nicholas Kardaras tells us that screens deliver a dopamine rush rather like sex. He calls screens "digital heroin" and uses the term "addiction" when referring to children unable to self-regulate their time online.

More sober (and less breathlessly alarmist) assessments by child psychologists and data analysts offer a more balanced view of the impact of technology on our kids. Psychologist and baby observer Alison Gopnik, for instance, notes: "There are plenty of mindless things that you could be doing on a screen. But there are also interactive, exploratory things that you could be doing." Gopnik highlights how feeling good about digital connections is a normal part of psychology and child development. "If your friends give you a like, well, it would be bad if you didn't produce dopamine," she says.

Other research has found that the impact of screens on kids is relatively small, and even the conservative AAP says that cases of children who have trouble regulating their screen time are not the norm, representing just 4 percent to 8.5 percent of US children. This year, Andrew Przybylski and Amy Orben conducted a rigorous analysis of data on more than 350,000 adolescents and found a nearly negligible effect on psychological well-being at the aggregate level.

In their research on digital parenting, Sonia Livingstone and Alicia Blum-Ross found widespread concern among parents about screen time. They posit, however, that "screen time" is an unhelpful catchall term and recommend that parents focus instead on quality and joint engagement rather than just quantity. The Connected Learning Lab's Candice Odgers, a professor of psychological sciences, reviewed the research on adolescents and devices and found as many positive effects as negative ones. She points to the consequences of paying disproportionate attention to the negative ones: "The real threat isn't smartphones. It's this campaign of misinformation and the generation of fear among parents and educators."

We need to immediately begin rigorous, longitudinal studies on the effects of devices and of the underlying algorithms that guide their interfaces, their interactions, and their recommendations for children. Then we can make evidence-based decisions about how these systems should be designed, what they should be optimized for, and how they should be deployed among children, rather than putting all the burden on parents to do the monitoring and regulation.

My guess is that for most kids, this issue of screen time is statistically insignificant in the context of all the other issues we face as parents--education, health, day care--and for those outside my elite tech circles even more so. Parents like me, and other tech leaders profiled in a recent New York Times series about tech elites keeping their kids off devices, can afford to hire nannies to keep their kids off screens. Our kids are the least likely to suffer the harms of excessive screen time. We are also the ones least qualified to be judgmental about other families who may need to rely on screens in different ways. We should be creating technology that makes screen entertainment healthier and fun for all families, especially those who don't have nannies.

I'm not ignoring the kids and families for whom digital devices are a real problem, but I believe that even in those cases, focusing on relationships may be more important than focusing on controlling access to screens.

Keep It Positive

One metaphor for screen time that my sister uses is sugar. We know sugar is generally bad for you, has many side effects, and can be addictive to kids. However, the occasional bonding ritual over milk and cookies might benefit a family more than an outright ban on sugar. Bans can also backfire, fueling binges and shame as well as mistrust and secrecy between parents and kids.

When parents allow kids to use computers, they often use spying tools, and many teens feel parental surveillance is invasive to their privacy. One study showed that using screen time to punish or reward behavior actually increased net screen time use by kids. Another study by Common Sense Media shows what seems intuitively obvious: Parents use screens as much as kids. Kids model their parents--and have a laserlike focus on parental hypocrisy.

In Alone Together, Sherry Turkle describes how the attention that devices command fractures family cohesion and erodes family interaction. While I agree that there are situations where devices are a distraction--I often declare "laptops closed" in class, and I feel that texting during dinner is generally rude--I do not feel that iPhones necessarily draw families apart.

In the days before the proliferation of screens, I ran away from kindergarten every day until they kicked me out. I missed more classes than any other student in my high school and barely managed to graduate. I also started more extracurricular clubs in high school than any other student. My mother actively supported my inability to follow rules and my obsessive tendency to pursue my interests and hobbies over those things I was supposed to do. In the process, she fostered a highly supportive trust relationship that allowed me to learn through failure and sometimes get lost without feeling abandoned or ashamed.

It turns out my mother intuitively knew that it's more important to stay grounded in the fundamentals of positive parenting. "Research consistently finds that children benefit from parents who are sensitive, responsive, affectionate, consistent, and communicative," says education professor Stephanie Reich, another member of the Connected Learning Lab who specializes in parenting, media, and early childhood. One study shows measurable cognitive benefits from warm and less restrictive parenting.

When I watch my little girl learning dance moves from every earworm video that YouTube serves up, I imagine my mother looking at me while I spent every waking hour playing games online, which was my pathway to developing my global network of colleagues and exploring the internet and its potential early on. I wonder what wonderful as well as awful things will have happened by the time my daughter is my age, and I hope a good relationship with screens and the world beyond them can prepare her for this future.

2019 Applied Ethical and Governance Challenges in AI - Notes from Part 2: Prognosis »

This is the second of three parts of the syllabus and summaries prepared by Samantha Bates, who TAs the Applied Ethical and Governance Challenges in Artificial Intelligence course that I co-teach with Jonathan Zittrain. John Bowers and Natalie Saltiel are also TAs for the course. I posted Part I earlier in the month. My takeaways: In Part I, we defined the space and tried to frame and understand some of the problems. We left with concerns about the reductionist, poorly defined and oversimplified notions of fairness and explainability in much of the literature. We also left feeling quite challenged by...

2019 Applied Ethical and Governance Challenges in AI - Notes from Part 1: Diagnosis »

Jonathan Zittrain and I are co-teaching a class together for the third time. This year, the title of the course is Applied Ethical and Governance Challenges in Artificial Intelligence. It is a seminar, which means that we invite speakers for most of the classes and usually talk about their papers and their work. The speakers and the papers were mostly curated by our amazing teaching assistant team - Samantha Bates, John Bowers and Natalie Saltiel. One of the things that Sam does is help prepare for the class by summarizing the paper and the flow of the class and I...

Supposedly 'Fair' Algorithms Can Perpetuate Discrimination »

How the use of AI runs the risk of re-creating the insurance industry's inequities of the previous century.

The Quest to Topple Science-Stymying Academic Paywalls »

Scientific publishers charge so much that even Harvard can’t afford it anymore. A new publishing infrastructure could help.

What the California Wildfires Can Teach Us About Data Sharing »

Citizen collection of radiation information after Fukushima and of air quality information after the recent fires serve as a model for everyone.

What the Boston School Bus Schedule Can Teach Us About AI »

An MIT team built an algorithm to optimize bell times and bus routes. The furor around the plan offers lessons in how we talk to people when we talk to them about artificial intelligence.

The Next Great (Digital) Extinction »

How today's internet is rapidly and indifferently killing off many systems while allowing new types of organizations to emerge.

The Educational Tyranny of the Neurotypicals »

The current school system is too rigid, and it’s designed for a different world anyway.

Why Westerners Fear Robots and the Japanese Do Not »

The hierarchies of Judeo-Christian religions mean that those cultures tend to fear their overlords. Beliefs like Shinto and Buddhism are more conducive to faith in peaceful coexistence.