
Jonathan Zittrain and I are co-teaching a class for the third time. This year, the title of the course is Applied Ethical and Governance Challenges in Artificial Intelligence. It is a seminar, which means that we invite speakers for most of the classes and usually talk about their papers and their work. The speakers and the papers were mostly curated by our amazing teaching assistant team - Samantha Bates, John Bowers and Natalie Saltiel.

One of the things Sam does to help prepare for each class is summarize the papers and the flow of the session, and I realized it would be a waste for this work to remain mere crib notes for the instructors. I asked Sam for permission to publish the notes and the syllabus on my blog as a way for people to learn some of what we are learning and to start potentially interesting conversations.

The course is structured as three sets of three classes on three focus areas. Previous versions of the course were more general overviews of the space, but as the area of research has matured, we realized that it would be more interesting to go deep in key areas than to go over what a lot of people probably already know.

We chose three main topics: fairness, interpretability, and adversarial examples. We then organized the classes to hit each topic three times, starting with diagnosis (identifying the technical root of the problem), then prognosis (exploring the social impact of those problems), and finally intervention (considering potential solutions to the problems we've identified while weighing the costs and benefits of each proposed solution). See the diagram below for a visual of the structure.

The class is split evenly between MIT and Harvard students, with diverse areas of expertise including software engineering, law, policy and other fields. The class has really been great, and I feel that we're going deeper on many of the topics than I've ever gone before. The downside is that we are beginning to see how difficult the problems are. Personally, I'm feeling a bit overwhelmed by the scale of the work ahead of us to try to minimize the harm to society from the deployment of these algorithms.

We just finished the prognosis phase and are about to start intervention. I hope that we find something to be optimistic about as we enter that phase.

Please find below the summary and the syllabus for the introduction and the first phase - the diagnosis phase - by Samantha Bates along with links to the papers.

The tl;dr summary of the first phase is... we have no idea how to define fairness, and it probably isn't reducible to a formula or a law because it is dynamic. Interpretability sounds like a cool word, but as Zachary Lipton said in his talk to our class, it is a "wastebasket taxon" like the word "antelope," where we call anything that sort of looks like an antelope an antelope, even if it has no real relationship to other antelopes. A bunch of students from MIT made it very clear to us that we are not prepared for adversarial attacks and that it is unclear whether we can build algorithms that are both robust against these attacks and still functionally effective.

Part 1: Introduction and Diagnosis

By Samantha Bates

Syllabus Notes: Introduction and Diagnosis Stage

This first post summarizes the readings assigned for the first four classes, which encompass the introduction and the diagnosis stage. In the diagnosis stage, the class identified the core problems in AI related to fairness, interpretability, and adversarial examples and considered how the underlying mechanisms of autonomous systems contributed to those problems. As a result, our class discussions involved defining terminology and studying how the technology works. Included below is the first part of the course syllabus along with notes summarizing the main takeaways from each of the assigned readings.

Class Session 1: Introduction

In our first class session, we presented the structure and motivations behind the course, and set the stage for later class discussions by assigning readings that critique the current state of the field.

Both readings challenge the way Artificial Intelligence (AI) research is currently conducted and talked about, but from different perspectives. Michael Jordan's piece is mainly concerned with the need for more collaboration across disciplines in AI research. He argues that we are experiencing the creation of a new branch of engineering that needs to incorporate non-technical as well as engineering challenges and perspectives. "Troubling Trends in Machine Learning Scholarship" focuses more on falling standards and non-rigorous research practices in the academic machine learning community. The authors rightly point out that academic scholarship must be held to the highest standards in order to preserve public and academic trust in the field.

We chose to start out with readings that critique the current state of the field because they encourage students to think critically about the papers they will read throughout the semester. Just as the readings show that the use of precise terminology and explanation of thought are particularly important to prevent confusion, we challenge students to carefully consider how they present their own work and opinions. The readings set the stage for our deep dives into specific topic areas (fairness, interpretability, adversarial AI) and also set some expectations about how students should approach the research we will discuss throughout the course.

Class Session 2: Diagnosing problems of fairness

For our first class in the diagnosis stage, the class was joined by Cathy O'Neil, a data scientist and activist who has become one of the leading voices on fairness in machine learning.

Cathy O'Neil's book, Weapons of Math Destruction, is a great introduction to predictive models, how they work, and how they can become biased. She refers to flawed models that are opaque, scalable, and have the potential to damage lives (frequently the lives of the poor and disadvantaged) as Weapons of Math Destruction (WMDs). She explains that despite good intentions, we are more likely to create WMDs when we don't have enough data to draw reliable conclusions, use proxies to stand in for data we don't have, and try to use simplistic models to understand and predict human behavior, which is much too complicated to accurately model with just a handful of variables. Even worse, most of these algorithms are opaque, so the people impacted by these models are unable to challenge their outputs.

O'Neil demonstrates that the use of these types of models can have serious unforeseen consequences. Because WMDs are a cheap alternative to human review and decision-making, WMDs are more likely to be deployed in poor areas, and thus tend to have a larger impact on the poor and disadvantaged in our society. Additionally, WMDs can actually lead to worse behavior. In O'Neil's example of the Washington D.C. School District's model that used student test scores to identify and root out ineffective teachers, some teachers changed their students' test scores in order to protect their jobs. Although the WMD in this scenario was deployed to improve teacher effectiveness, it actually had the opposite effect by creating an unintended incentive structure.

The optional reading, "The Scored Society: Due Process for Automated Predictions," discusses algorithmic fairness in the credit scoring context. Like Cathy O'Neil, the authors contend that credit scoring algorithms exacerbate existing social inequalities and argue that our legal system has a duty to change that. They propose opening the credit scoring and credit sharing process to public review while also requiring that credit scoring companies educate individuals about how different variables influence their scores. By attacking the opacity problem that Cathy O'Neil identified as one of three characteristics of WMDs, the authors believe the credit scoring system can become more fair without infringing on intellectual property rights or requiring that we abandon the scoring models altogether.

Class Session 3: Diagnosing problems of interpretability

Zachary Lipton, an Assistant Professor at Carnegie Mellon University who is working intensively on defining and addressing problems of interpretability in machine learning, joined the class on Day 3 to discuss what it means for a model to be interpretable.

Class session three was our first day discussing interpretability, so both readings consider how best to define interpretability and why it is important. Lipton's paper asserts that interpretability reflects a number of different ideas and that its current definitions are often too simplistic. His paper primarily raises stage-setting questions: What is interpretability? In what contexts is interpretability most necessary? Does creating a model that is more transparent or can explain its outputs make it interpretable?

Through his examination of these questions, Lipton argues that the definition of interpretability depends on why we want a model to be interpretable. We might demand that a model be interpretable so that we can identify underlying biases and allow those affected by the algorithm to contest its outputs. We may also want an algorithm to be interpretable in order to provide more information to the humans involved in the decision, to give the algorithm more legitimacy, or to uncover possible causal relationships between variables that can then be tested further. By clarifying the different circumstances in which we demand interpretability, Lipton argues that we can get closer to a working definition of interpretability that better reflects its many facets.

Lipton also considers two types of proposals to improve interpretability: increasing transparency and providing post-hoc explanations. The increasing transparency approach can apply to the entire model (simulatability), meaning that a user should be able to reproduce the model's output if given the same input data and parameters. We can also improve transparency by making the different elements of the model (the input data, parameters, and calculations) individually interpretable, or by showing that during the training stage, the model will come to a unique solution regardless of the training dataset. However, as we will discuss further during the interventions stage of the course, providing more transparency at each level does not always make sense depending on the context and the type of model employed (for example a linear model vs. a neural network model). Additionally, improving the transparency of a model may decrease the model's accuracy and effectiveness. A second way to improve interpretability is to require post-hoc interpretability, meaning that the model must explain its decision-making process after generating an output. Post-hoc explanations can take the form of text, visuals, saliency maps, or analogies that show how a similar decision was reached in a similar context. Although post-hoc explanations can provide insight into how individuals affected by the model can challenge or change its outputs, Lipton cautions that these explanations can be unintentionally misleading, especially if they are influenced by our human biases.
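To make the distinction between transparency and post-hoc explanation concrete, here is a minimal sketch of one common post-hoc technique: training a simple, interpretable surrogate model to mimic a black-box model's predictions. This is an illustration with invented data, not a method from Lipton's paper, and the features are hypothetical.

```python
# A minimal sketch of one post-hoc explanation technique: fitting an
# interpretable surrogate (a shallow decision tree) to a black-box model's
# predictions. Hypothetical data; not the method from Lipton's paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # four anonymous features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # hidden "true" rule

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels,
# so the tree approximates how the black box behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))
```

The surrogate only approximates the black box; a high fidelity score does not guarantee the explanation is faithful everywhere, which is one version of Lipton's caution that post-hoc explanations can mislead.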

Ultimately, Lipton's paper concludes that it is extremely challenging to define interpretability given how much it depends on external factors like context and the motivations for making a model interpretable. Without a working definition of the term, it remains unclear how to determine whether a model is interpretable. While the Lipton paper focuses more on defining interpretability and considering why it is important, the optional reading, "Towards a Rigorous Science of Interpretable Machine Learning," dives deeper into the various methods used to determine whether a model is interpretable. The authors define interpretability as the "ability to explain or present in understandable terms to a human" and are particularly concerned about the lack of standards for evaluating interpretability.

Class Session 4: Diagnosing vulnerabilities to adversarial examples

In our first session on adversarial examples, the class was joined by LabSix, a student-run AI research group at MIT that is doing cutting-edge work on adversarial techniques. LabSix gave a primer on adversarial examples and presented some of its own work.

The Gilmer et al. paper is an accessible introduction to adversarial examples that defines them as "inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake." The main thrust of the paper is an examination of the different scenarios in which an attacker may employ adversarial examples. The authors develop a taxonomy to categorize these different types of attacks: "indistinguishable perturbation, content-preserving perturbation, non-suspicious input, content-constrained input, and unconstrained input." For each category of attack, the authors explore the different motivations and constraints of the attacker. The authors argue that by gaining a better understanding of the different types of attacks and the tradeoffs of each, the designers of machine learning systems will be better able to defend against them.
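To make the mechanism concrete, below is a deliberately simplified sketch of how a gradient-based perturbation can flip a model's prediction while changing each input feature only slightly. It uses an invented logistic-regression model and synthetic data; it is not the construction used in the Gilmer et al. or Szegedy et al. papers.

```python
# Toy illustration of an adversarial perturbation against a hand-rolled
# logistic-regression "model" using a fast-gradient-sign-style attack.
# Everything here is synthetic; it is not the setup from any of the papers.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=20)            # model weights (assume already trained)
b = 0.1                            # model bias

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=20)            # the input we will perturb
y = int(predict_proba(x) > 0.5)    # treat the clean prediction as the label

# Gradient of the logistic loss with respect to the *input* is (p - y) * w.
grad_x = (predict_proba(x) - y) * w

epsilon = 0.25                     # perturbation budget per feature
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", round(float(predict_proba(x)), 3))
print("adversarial prediction:", round(float(predict_proba(x_adv)), 3))
print("max per-feature change:", round(float(np.max(np.abs(x_adv - x))), 3))
```

Even though no single feature moves by more than the small budget, the perturbation is aligned with the loss gradient, so the model's confidence collapses; that asymmetry is what makes these attacks hard to defend against.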

The paper also includes an overview of the perturbation defense literature, which the authors criticize for failing to consider adversarial example attacks in plausible, real-world situations. For example, a common hypothetical situation posed in the defense literature is an attacker perturbing the image of a stop sign in an attempt to confuse a self-driving car. The Gilmer et al. paper, however, points out that the engineers of the car would have considered and prepared for naturally occurring misclassification errors caused by the system itself or real-world events (for example, the stop sign could be blown over by the wind). The authors also argue that there are likely easier, nontechnical methods that the attackers could use to confuse the car, so the hypothetical is not the most realistic test case. The authors' other main critique of the defense literature is that it does not acknowledge how improving certain aspects of a system's defense structure can make other aspects of the system less robust and thus more vulnerable to attack.

The recommended reading by Christian Szegedy et al. is much more technical and requires some machine learning background to understand all of the terminology. Although it is a challenging read, we included it in the syllabus because it introduced the term "adversarial examples" and laid some of the foundation for research on this topic.



Credits

Figure and Notes by Samantha Bates

Syllabus by Samantha Bates, John Bowers and Natalie Saltiel

During the Long Hot Summer of 1967, race riots erupted across the United States. The 159 riots--or rebellions, depending on which side you took--were mostly clashes between the police and African Americans living in poor urban neighborhoods. The disrepair of these neighborhoods before the riots began and the difficulty in repairing them afterward was attributed to something called redlining, an insurance-company term for drawing a red line on a map around parts of a city deemed too risky to insure.

In an attempt to improve recovery from the riots and to address the role redlining may have played in them, President Lyndon Johnson created the President's National Advisory Panel on Insurance in Riot-Affected Areas in 1968. The report from the panel showed that once a minority community had been redlined, the red line established a feedback cycle that continued to drive inequity and deprive poor neighborhoods of financing and insurance coverage--redlining compounded the poor economic conditions that already afflicted these areas in the first place. There was a great deal of evidence at the time that insurance companies were engaging in overtly discriminatory practices, including redlining, while selling insurance to racial minorities, and would-be home- and business-owners were unable to get loans because financial institutions require insurance when making loans. Even before the riots, people in these neighborhoods couldn't buy or build or improve or repair because they couldn't get financing.

Because of the panel's report, laws were enacted outlawing redlining and creating incentives for insurance companies to invest in developing inner-city neighborhoods. But redlining continued. To justify their discriminatory pricing or their refusal to sell insurance in urban centers, insurance companies developed sophisticated arguments about the statistical risks that certain neighborhoods presented.

The argument insurers used back then--that their job was purely technical and that it didn't involve moral judgments--is very reminiscent of the arguments made by some social network platforms today: That they are technical platforms running algorithms and should not be, and are not, involved in judging the content. Insurers argued that their job was to adhere to technical, mathematical, and market-based notions of fairness and accuracy and provide what was viewed--and is still viewed--as one of the most essential financial components of society. They argued that they were just doing their jobs. Second-order effects on society were really not their problem or their business.

Thus began the contentious career of the notion of "actuarial fairness," an idea that would spread in time far beyond the insurance industry into policing and paroling, education, and eventually AI, igniting fierce debates along the way over the push by our increasingly market-oriented society to define fairness in statistical and individualistic terms rather than relying on the morals and community standards used historically.

Risk spreading has been a central tenet of insurance for centuries. Risk classification has a shorter history. The notion of risk spreading is the idea that a community such as a church or village could pool its resources to help individuals when something unfortunate happened, spreading risk across the group--the principle of solidarity. Modern insurance began to assign a level of risk to an individual so that others in the pool with her had roughly the same level of risk--an individualistic approach. This approach protected individuals from carrying the expense of someone with a more risk-prone and costly profile. This individualistic approach became more prevalent after World War II, when the war on communism made anything that sounded too socialist unpopular. It also helped insurance companies compete in the market. By refining their risk classifications, companies could attract what they called "good risks." This saved them money on claims and forced competitors to take on more expensive-to-insure "bad risks."
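The difference between the two pricing philosophies comes down to a bit of arithmetic, sketched below with invented numbers: under solidarity pricing everyone pays the pool's average expected loss, while under risk classification each class pays its own.

```python
# Illustrative arithmetic only: contrast solidarity pricing (one pooled
# premium for everyone) with risk-classified pricing (each class pays its
# own expected loss). All numbers are invented.
pool = [
    # (group label, number of policyholders, expected annual loss per person)
    ("low-risk",  800, 100.0),
    ("high-risk", 200, 600.0),
]

total_people = sum(n for _, n, _ in pool)
total_expected_loss = sum(n * loss for _, n, loss in pool)

solidarity_premium = total_expected_loss / total_people
print(f"pooled (solidarity) premium: {solidarity_premium:.2f}")   # 200.00

for label, n, loss in pool:
    print(f"classified premium, {label}: {loss:.2f}")
# Classification saves each low-risk policyholder 100, but triples the cost
# for the high-risk group -- the "good risks" vs. "bad risks" dynamic above.
```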

(A research colleague of mine, Rodrigo Ochigame, who focuses on algorithmic fairness and actuarial politics, directed me to historian Caley Horan, who is working on an upcoming book titled Insurance Era: The Privatization of Security and Governance in the Postwar United States that will elaborate on many of the ideas in this article, which is based on her research.)

The original idea of risk spreading and the principle of solidarity was based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence. By the final decades of the 20th century, however, this vision had given way to the so-called actuarial fairness promoted by insurance companies to justify discrimination.

While discrimination was initially based on outright racist ideas and unfair stereotypes, insurance companies evolved and developed sophisticated-seeming calculations to show that their discrimination was "fair." Women should pay more for annuities because statistically they lived longer, and blacks should pay more for damage insurance when they lived in communities where crime and riots were likely to occur. While overt racism and bigotry still exist across American society, in insurance it has been integrated into and hidden from the public behind mathematics and statistics that are so difficult for nonexperts to understand that fighting back becomes nearly impossible.

By the late 1970s, women's activists had joined civil rights groups in challenging insurance redlining and risk-rating practices. These new insurance critics argued that the use of gender in insurance risk classification was a form of sex discrimination. Once again, insurers responded to these charges with statistics and mathematical models. Using gender to determine risk classification, they claimed, was fair; the statistics they used showed a strong correlation between gender and the outcomes they insured against.

And many critics of insurance inadvertently bought into the actuarial fairness argument. Civil rights and feminist activists in the late 20th century lost their battles with the insurance industry because they insisted on arguing about the accuracy of certain statistics or the validity of certain classifications rather than questioning whether actuarial fairness--an individualistic notion of market-driven pricing fairness--was a valid way of structuring a crucial and fundamental social institution like insurance in the first place.

But fairness and accuracy are not necessarily the same thing. For example, when Julia Angwin pointed out in her ProPublica report that risk scores used by the criminal justice system were biased against people of color, the company that sold the algorithmic risk score system argued that its scores were fair because they were accurate. The scores accurately predicted that people of color were more likely to reoffend. This likelihood of reoffending, called the recidivism rate, measures whether someone commits another crime after being released, and it is calculated primarily from arrest data. But this correlation contributes to discrimination, because using arrests as a proxy for reoffending means the algorithm codifies biases in arrests, such as a police officer's tendency to arrest more people of color or to patrol more heavily in poor neighborhoods. This risk of recidivism is used to set bail and determine sentencing and parole, and it informs predictive policing systems that direct police to neighborhoods likely to have more crime.

There are several obvious problems with this. If you believe the risk scores are accurate in predicting the future outcomes of a certain group of people, then it means it's "fair" that a person is more likely to spend more time in jail simply because they are black. This is actuarially "fair" but clearly not "fair" from a social, moral, or anti-discrimination perspective.

The other problem is that there are fewer arrests in rich neighborhoods, not because people there aren't smoking as much pot as in poor neighborhoods but because there is less policing. Obviously, one is more likely to be rearrested if one lives in an overpoliced neighborhood, and that creates a feedback loop--more arrests mean higher recidivism rates. In very much the same way that redlining in minority neighborhoods created a self-fulfilling prophecy of uninsurable communities, overpolicing and predictive policing may be "fair" and "accurate" in the short term, but the long-term effects on communities have been shown to be negative, creating self-fulfilling prophecies of poor, crime-ridden neighborhoods.
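A toy simulation makes the feedback loop concrete. In the sketch below (all numbers invented, the model deliberately crude), two neighborhoods have identical underlying offense rates, but the one that starts out more heavily policed shows a higher measured "recidivism" rate, and allocating next year's patrols by that measurement locks the disparity in.

```python
# Crude, invented simulation of the feedback loop described above: two
# neighborhoods with identical underlying offense rates, unequal policing,
# arrests used as the proxy for "recidivism," and next year's patrols
# allocated according to that proxy.
import random

random.seed(0)

TRUE_RATE = 0.10                   # identical true offense rate in both places
PATROL_BUDGET = 1.0                # total policing capacity to split
patrol = {"A": 0.2, "B": 0.8}      # initial, unequal allocation

for year in range(1, 6):
    measured = {}
    for hood, intensity in patrol.items():
        offenses = sum(random.random() < TRUE_RATE for _ in range(10_000))
        # An offense only becomes an arrest if patrols are there to see it.
        arrests = sum(random.random() < intensity for _ in range(offenses))
        measured[hood] = arrests / 10_000          # the "recidivism" proxy
    # Reallocate patrols in proportion to measured arrests, so the initial
    # disparity reproduces itself even though behavior is identical.
    total = sum(measured.values())
    patrol = {hood: PATROL_BUDGET * m / total for hood, m in measured.items()}
    print(f"year {year}:",
          {h: round(m, 3) for h, m in measured.items()},
          "-> next patrol", {h: round(p, 2) for h, p in patrol.items()})
```

The measured gap between the two neighborhoods never closes, even though the true behavior is the same in both.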

Angwin also showed in a recent ProPublica report that, despite regulations, insurance companies charge minority communities higher premiums than white communities, even when the risks are the same. The Spotlight team at The Boston Globe reported that the median household net worth in the Boston area was $247,500 for whites and $8 for nonimmigrant blacks--the result of redlining and unfair access to housing and financial services. So while redlining for insurance is not legal, when Amazon decides to provide Amazon Prime free same-day shipping to its "best" customers, it's effectively redlining--reinforcing the unfairness of the past in new and increasingly algorithmic ways.

Like the insurers, large tech firms and the computer science community also tend to frame "fairness" in a depoliticized, highly technical way involving only mathematics and code, which reinforces a circular logic. AI is trained to use the outcomes of discriminatory practices, like recidivism rates, to justify continuing practices such as incarceration or overpolicing that may contribute to the underlying causes of crime, such as poverty, difficulty getting jobs, or lack of education. We must create a system that requires long-term public accountability and understandability of the effects on society of policies developed using machines. The system should help us understand, rather than obscure, the impact of algorithms on society. We must provide a mechanism for civil society to be informed and engaged in the way in which algorithms are used, optimizations set, and data collected and interpreted.

The computer scientists of today are more sophisticated in many ways than the actuaries of yore, and they are often sincerely trying to build algorithms that are fair. The new literature on algorithmic fairness usually doesn't simply equate fairness with accuracy, but instead defines various trade-offs between fairness and accuracy. The problem is that fairness cannot be reduced to a simple self-contained mathematical definition--fairness is dynamic and social, not a statistical issue. It can never be fully achieved and must be constantly audited, adapted, and debated in a democracy. By merely relying on historical data and current definitions of fairness, we will lock in the accumulated unfairnesses of the past, and our algorithms and the products they support will always trail behind, reflecting past norms rather than future ideals and slowing social progress rather than supporting it.
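To see why fairness resists a single formula, the sketch below scores one invented set of predictions against two common definitions from the algorithmic-fairness literature: demographic parity (equal selection rates across groups) and equalized odds (equal error rates). The data is made up; the point is only that the same predictions can satisfy one definition while violating the other.

```python
# Invented example: the same set of predictions scored against two common
# fairness definitions. Nothing here reflects any real dataset or system.
import numpy as np

# group (0/1), true outcome, model's prediction -- all hypothetical
group = np.array([0]*10 + [1]*10)
y_true = np.array([1,1,1,1,0,0,0,0,0,0,  1,1,0,0,0,0,0,0,0,0])
y_pred = np.array([1,1,1,0,1,0,0,0,0,0,  1,1,1,1,0,0,0,0,0,0])

def rates(g):
    m = group == g
    positive_rate = y_pred[m].mean()
    tpr = y_pred[m][y_true[m] == 1].mean()        # true positive rate
    fpr = y_pred[m][y_true[m] == 0].mean()        # false positive rate
    return positive_rate, tpr, fpr

p0, tpr0, fpr0 = rates(0)
p1, tpr1, fpr1 = rates(1)

print("demographic parity difference:", abs(p0 - p1))   # 0.0: "fair" by this test
print("TPR gap (equal opportunity):  ", abs(tpr0 - tpr1))  # 0.25: unfair by this one
print("FPR gap:                      ", round(abs(fpr0 - fpr1), 3))
```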

Science is built, enhanced, and developed through the open and structured sharing of knowledge. Yet some publishers charge so much for subscriptions to their academic journals that even the libraries of the world’s wealthiest universities such as Harvard are no longer able to afford the prices. Those publishers’ profit margins rival those of the most profitable companies in the world, even though research is largely underwritten by governments, and the publishers don’t pay authors and researchers or the peer reviewers who evaluate those works. How is such an absurd structure able to sustain itself—and how might we change it?

When the World Wide Web emerged in the ’90s, people began predicting a new, more robust era of scholarship based on access to knowledge for all. The internet, which started as a research network, now had an easy-to-use interface and a protocol to connect all of published knowledge, making each citation just a click away … in theory.

Instead, academic publishers started to consolidate. They solidified their grip on the rights to prestigious journals, allowing them to charge for access and exclude the majority of the world from reading research publications—all while extracting billions of dollars in subscription fees from university libraries and corporations. This meant that some publishers, such as Elsevier, the science, technology, and medicine-focused branch of the RELX Group publishing conglomerate, are able today to extract huge margins—36.7 percent in 2017 in Elsevier’s case, more profitable than Apple, Google/Alphabet, or Microsoft that same year.

And in most scholarly fields, it’s the most important journals that continue to be secured behind paywalls—a structure that doesn’t just affect the spread of information. Those journals have what we call high “impact factors,” which can skew academic hiring and promotions in a kind of self-fulfilling cycle that works like this: Typically, anyone applying for an academic job is evaluated by a committee and by other academics who write letters of evaluation. In most fields, papers published in peer-reviewed journals are a critical part of the evaluation process, and the so-called impact factor, which is based on the citations that a journal gets over time, is important. Evaluators, typically busy academics who may lack deep expertise in a candidate’s particular research topic, are prone to skim the submitted papers and rely heavily on the number of papers published and the impact factor—as a proxy for journal prestige and rigor—in their assessment of the qualifications of a candidate.
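For readers unfamiliar with the metric, the standard two-year impact factor is just a ratio of citations to citable items; the calculation below uses invented numbers.

```python
# The standard two-year journal impact factor, computed with invented numbers:
# citations received in year Y to items the journal published in years Y-1
# and Y-2, divided by the number of citable items published in those years.
citations_in_2018_to = {2017: 1200, 2016: 1500}   # hypothetical counts
citable_items_published = {2017: 400, 2016: 350}

impact_factor_2018 = sum(citations_in_2018_to.values()) / sum(
    citable_items_published.values()
)
print(f"2018 impact factor: {impact_factor_2018:.2f}")   # 2700 / 750 = 3.60
```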

And so young researchers are forced to prioritize publication in journals with high impact factors, faulty as they are, if they want tenure or promotions. The consequence is that important work gets locked up behind paywalls and remains largely inaccessible to anyone not in a major research lab or university. This includes the taxpayers who funded the research in the first place, the developing world, and the emerging world of nonacademic researchers and startup labs.

Breaking Down the Walls

To bypass the paywalls, in 2011 Alexandra Elbakyan started Sci-Hub, a website that provides free access to millions of otherwise inaccessible academic papers. She was based in Kazakhstan, far from the courts where academic publishers can easily bring lawsuits. In the movie Paywall, Elbakyan says that Elsevier’s mission was to make “uncommon knowledge common,” and she jokes that she was just trying to help the company do that because it seemed unable to do so itself. While Elbakyan has been widely criticized for her blatant disregard for copyright, Sci-Hub has become a popular tool among academics, even at major universities, because it removes the friction of paywalls and provides links to collaborators beyond them. She was able to do what the late Aaron Swartz, my Creative Commons colleague and dear friend, envisioned but was unable to achieve in his lifetime.

But, kind of like the Berlin Wall, the academic journal paywall can crumble, and several efforts are underway to undermine it. The Open Access, or OA, movement—a worldwide effort to make scholarly research literature freely accessible online—began several decades ago. Essentially, researchers upload the unpublished version of their papers to a repository focused on subject matter or operated by an academic institution. The movement was sparked by services like arXiv.org, which Cornell University started in 1991, and became mainstream when Harvard established the first US self-archiving policy in 2008; other research universities around the world quickly followed.

Many publications have since found ways to allow open access in their journals by charging an expensive “article processing charge,” or APC (usually hundreds or thousands of dollars per article), paid by the institution or the author behind the research as a sort of cost of being published. OA publishers such as the Public Library of Science, or PLOS, charge APCs to make papers available without a paywall, and many traditional commercial publishers also allow authors to pay an APC so that their papers, though appearing in what is technically a paywalled journal, are publicly available.

When I was CEO of Creative Commons a decade ago, at a time when OA was beginning in earnest, one of my first talks was to a group of academic publishers. I remember trying to describe our proposal to give authors a way to mark their works with the rights they wished to grant, including use of their work without charge but with attribution. The first comment from the audience came from an academic publisher who declared my comments “disgusting.”

We’ve come a long way since then. Even RELX now allows open access for some of its journals and uses Creative Commons licenses to mark works that are freely available.

Many publishers I’ve talked to are preparing to make open access to research papers a reality. In fact, most journals already allow some open access through the expensive article processing charges I mentioned earlier.

So in some ways, it feels like “we won.” But has the OA movement truly reached its potential to transform research communication? I don't think so, especially if paid open access just continues to enrich a small number of commercial journal publishers. We have also seen the emergence of predatory OA journals with no peer review or other quality control measures, and that, too, has undermined the OA movement.

We can pressure publishers to lower APCs, but if they have control of the platforms and key journals, they will continue to extract high fees even in an OA world. So far, they have successfully prevented collective bargaining through confidentiality agreements and other legal means.

Another Potential Solution

The MIT Press, led by Amy Brand, and the Media Lab recently launched a collaboration called The Knowledge Futures Group. (I am director of the Media Lab and a board member at the press.) Our aim is to create a new open knowledge ecosystem. The goal is to develop and deploy infrastructure to allow free, rigorous, and open sharing of knowledge and to start a movement toward greater institutional and public ownership of that infrastructure, reclaiming territory ceded to publishers and commercial technology providers.

(In some ways, the solution might be similar to what blogging was to online publishing. Blogs were simple scripts, free and open source software, and a bunch of open standards that interoperate between services. They allowed us to create simple and very low cost informal publishing platforms that did what you used to have to buy multimillion-dollar Content Management Systems for. Blogs led the way for user generated content and eventually social media.)

While academic publishing is more complex, a refactoring and an overhaul of the software, protocols, processes, and business underlying such publishing could revolutionize it financially as well as structurally.

We are developing a new open source and modern publishing platform called PubPub and a global, distributed method of understanding public knowledge called Underlay. We have established a lab to develop, test, and deploy other technologies, systems, and processes that will help researchers and their institutions. They would have access to an ecosystem of open source tools and an open and transparent network to publish, understand, and evaluate scholarly work. We imagine developing new measures of impact and novelty with more transparent peer review; publishing peer reviews; and using machine learning to help identify novel ideas and people and mitigate systemic biases, among other things. It is imperative that we establish an open innovation ecosystem as an alternative to the control that a handful of commercial entities maintain over not only the markets for research information but also over academic reputation systems and research technologies more generally.

One of the main pillars of academic reputation is authorship, which has become increasingly problematic as science has become more collaborative. Who gets credit for research and discovery can have a huge impact on researchers and institutions. But the order of author names on a journal article has no standardized meaning. It is often determined more by seniority and academic culture than by actual effort or expertise. As a result, credit is often not given where credit is due. With electronic publishing, we can move beyond a “flat” list of author names, in the same way that film credits specify the contributions of those involved, but we have continued to allow the constraints of print to guide our practices. We can also experiment with and improve peer review to provide better incentives, processes, and fairness.

It’s essential for universities, and core to their mission, to assert greater control over systems for knowledge representation, dissemination, and preservation. What constitutes knowledge, the use of knowledge, and the funding of knowledge is the future of our planet, and it must be protected from twisted market incentives and other corrupting forces. The transformation will require a movement involving a global network of collaborators, and we hope to contribute to catalyzing it.

When a massive earthquake and tsunami hit the eastern coast of Japan on March 11, 2011, the Fukushima Daiichi Nuclear Power Plant failed, leaking radioactive material into the atmosphere and water. People around the country as well as others with family and friends in Japan were, understandably, concerned about radiation levels—but there was no easy way for them to get that information. I was part of a small group of volunteers who came together to start a nonprofit organization, Safecast, to design, build, and deploy Geiger counters and a website that would eventually make more than 100 million measurements of radiation levels available to the public.

We started in Japan, of course, but eventually people around the world joined the movement, creating an open global data set. The key to success was the mobile, easy-to-operate, high-quality but lower-cost kit that the Safecast team developed, which people could buy and build to collect data that they might then share on the Safecast website.

While Chernobyl and Three Mile Island spawned monitoring systems and activist NGOs as well, this was the first time that a global community of experts formed to create a baseline of radiation measurements, so that everyone could monitor radiation levels around the world and measure fluctuations caused by any radiation event. (Different regions have very different baseline radiation levels, and people need to know what those are if they are to understand if anything has changed.)
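As a rough illustration of why per-region baselines matter, the sketch below flags a reading only when it deviates sharply from that region's own historical baseline rather than from a global threshold. The regions, readings, and threshold are invented, and this is not Safecast's actual data pipeline.

```python
# Hypothetical illustration of baseline-and-anomaly logic for radiation
# readings (values in counts per minute); not Safecast's actual pipeline.
from statistics import median

history = {
    # region: past readings in CPM (invented numbers)
    "Tokyo":  [38, 41, 40, 39, 42, 40, 41],
    "Denver": [95, 98, 102, 97, 100, 99, 101],   # naturally higher baseline
}

def baseline(region):
    return median(history[region])

def is_anomalous(region, new_reading, factor=1.5):
    """Flag a reading well above the region's own baseline, not a global one."""
    return new_reading > factor * baseline(region)

print(is_anomalous("Tokyo", 75))    # True: almost double Tokyo's baseline of 40
print(is_anomalous("Denver", 105))  # False: normal against Denver's baseline of 99
```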

More recently Safecast, which is a not-for-profit organization, has begun to apply this model to air quality in general. The 2017 and 2018 fires in California were the air quality equivalent of the Daiichi nuclear disaster, and Twitter was full of conversations about N95 masks and how they were interfering with Face ID. People excitedly shared posts about air quality; I even saw Apple Watches displaying air quality figures. My hope is that this surge of interest in air quality among Silicon Valley elites will help advance a field, namely the monitoring of air quality, that has been steadily developing but has not yet been as successful as Safecast was with radiation measurements. I believe this lag stems in part from the fact that Silicon Valley believes so much in entrepreneurs that people there try to solve every problem with a startup. But that’s not always the right approach.

Hopefully, interest in data about air quality and the difficulty of getting a comprehensive view will drive more people to consider an open data approach over proprietary ones. Right now, big companies and governments are the largest users of data that we’ve handed to them—mostly for free—to lock up in their vaults. Pharmaceutical firms, for instance, use the data to develop drugs that save lives, but they could save more lives if their data were shared. We need to start using data for more than commercial exploitation, deploying it to understand the long-term effects of policy and to create transparency around those in power, not around private citizens. We need to flip the model from short-term commercial use to long-term societal benefit.

The first portable air sensors were the canaries that miners used to monitor for poison gases in coal mines. Portable air sensors that consumers could easily use were developed in the early 2000s, and since then the technology for measuring air quality has changed so rapidly that data collected just a few years ago is often now considered obsolete. Nor is “air quality” or the Air Quality Index standardized, so levels get defined differently by different groups and governments, with little coordination or transparency.
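To give a sense of what defining the Air Quality Index differently means in practice, here is a sketch of the piecewise-linear conversion from a 24-hour PM2.5 concentration to a US AQI value, using the breakpoint table the US EPA published around the time of writing (assumed here; other governments use different breakpoints and averaging periods, which is exactly the standardization problem).

```python
# Converting a 24-hour PM2.5 concentration (micrograms per cubic meter) to a
# US EPA AQI value by linear interpolation between published breakpoints.
# Breakpoints assumed to be the EPA's 2012 table; other countries differ.
PM25_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi)
    (0.0,    12.0,   0,  50),
    (12.1,   35.4,  51, 100),
    (35.5,   55.4, 101, 150),
    (55.5,  150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 350.4, 301, 400),
    (350.5, 500.4, 401, 500),
]

def pm25_to_aqi(concentration):
    c = round(concentration, 1)          # concentrations reported to 0.1 ug/m3
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("concentration outside the AQI scale")

print(pm25_to_aqi(8.0))     # 33  ("Good")
print(pm25_to_aqi(40.0))    # 112 ("Unhealthy for Sensitive Groups")
print(pm25_to_aqi(180.0))   # 230 ("Very Unhealthy")
```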

Yet right now, the majority of players are commercial entities that keep their data locked up, a business strategy reminiscent of software before we “discovered” the importance of making it free and open source. These companies are not coordinating or contributing data to the commons, and they are diverting important attention and financial resources away from nonprofit efforts to create standards and open data so we can conduct research and give the public real baseline measurements. It’s as if everyone is building and buying thermometers that measure temperatures in Celsius, Fahrenheit, Delisle, Newton, Rankine, Réaumur, and Rømer, or even making up their own bespoke measurement systems, without discussing or sharing conversion rates. While standardization would likely benefit the businesses themselves, competing companies have a difficult time coordinating on their own and instead treat proprietary, nonstandard improvements as a business advantage.

To attempt to standardize the measurement of small particulates in the air, a number of organizations have created the Air Sensor Workgroup. The ASW is working to build an Air Quality Data Commons to encourage sharing of data with standardized measurements, but there is little participation from the for-profit startups making the sensors that suddenly became much more popular in the aftermath of the fires in California.

Although various groups are making efforts to reach consensus on the science and process of measuring air quality, they are confounded by these startups that believe (or their investors believe) their business depends on big data that is owned and protected. Startups don’t naturally collaborate, share, or conduct open research, and I haven’t seen any air quality startups with a mechanism for making data collected available if the business is shut down.

Air quality startups may seem like a niche issue. But the issue of sharing pools of data applies to many very important industries. I see, for instance, a related challenge in data from clinical trials.

The lack of central repositories of data from past clinical trials has made it difficult, if not impossible, for researchers to look back at the science that has already been performed. The federal government spends billions of dollars on research, and while some projects like the Cancer Moonshot mandate data openness, most government funding doesn’t require it. Biopharmaceutical firms submit trial data evidence to the FDA—but not to researchers or the general public as a rule, in much the same way that most makers of air quality detection gadgets don’t share their data. Clinical trial data and medical research funded by government thus may sit hidden behind corporate doors at big companies. Withholding such data impedes the discovery of new drugs through novel techniques and prevents benefits and results from accruing to other trials.

Open data will be key to modernizing the clinical trial process and integrating AI and other advanced techniques used for analyses, which would greatly improve health care in general. I discuss some of these considerations in more detail in my PhD thesis.

Some clinical trials have already begun requiring the sharing of individual patient data for clinical analyses within six months of a trial’s end. And there are several initiatives sharing data in a noncompetitive manner, which lets researchers create promising ecosystems and data “lakes” that could lead to new insights and better therapies.

Overwhelming public outcry can also help spur the embrace of open data. Before the 2011 earthquake in Japan, only the government there and large corporations held radiation measurements, and those were not granular. People only began caring about radiation measurements when the Fukushima Daiichi site started spewing radioactive material, and the organizations that held that data were reluctant to release it because they wanted to avoid causing panic. However, the public demanded the data, and that drove the activism that fueled the success of Safecast. (Free and open source software also started with hobbyists and academics. Initially there was a great deal of fighting between advocacy groups and corporations, but eventually the business models clicked and free and open source software became mainstream.)

We have a choice about which sensors we buy. Before going out and buying a new fancy sensor or backing that viral Kickstarter campaign, make sure the organization behind it makes a credible case about the scholarship underpinning its technology; explains its data standards; and, most importantly, pledges to share its data using a Creative Commons CC0 dedication. For privacy-sensitive data sets that can’t be fully open, like those at Ancestry.com and 23andMe, advances in cryptography such as multiparty computation and zero-knowledge proofs would allow researchers to learn from data sets without the release of sensitive details.

We have the opportunity and the imperative to reframe the debate on who should own and control our data. The Big Data narrative sells the idea that whoever owns the data controls the market, and it is playing out as a tragedy of the commons, frustrating the use of information for the benefit of society and science.


When the Boston public school system announced new start times last December, some parents found the schedules unacceptable and pushed back. The algorithm used to set these times had been designed by MIT researchers, and about a week later, Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, emailed asking me to cosign an op-ed that would call on policymakers to be more thoughtful and democratic when they consider using algorithms to change policies that affect the lives of residents. Kade, who is also a Director's Fellow at the Media Lab and a colleague of mine, keeps close track of the key issues in digital liberties and is great at flagging things that I should pay attention to. (At the time, I had no contact with the MIT researchers who designed the algorithm.)

I made a few edits to her draft, and we shipped it off to the Boston Globe, which ran it on December 22, 2017, under the headline "Don’t blame the algorithm for doing what Boston school officials asked." In the op-ed, we joined in criticizing the changes but argued that people shouldn't blame the algorithm; they should blame the city’s political process, which prescribed the way in which the various concerns and interests would be optimized. That day, the Boston Public Schools decided not to implement the changes. Kade and I high-fived and called it a day.

Like the protesting families, Kade and I did what we thought was fair and just given the information that we had at the time. A month later, a more nuanced picture emerged, one that I think offers insights into how technology can and should provide a platform for interacting with policy—and how policy can reflect a diverse set of inputs generated by the people it affects. In what feels like a particularly dark period for democracy and during a time of increasingly out-of-control deployment of technology into society, I feel a lesson like this one has given me greater understanding of how we might more appropriately introduce algorithms into society. Perhaps it even gives us a picture of what a Democracy 2.0 might look like.

A few months later, having read the op-ed in the Boston Globe, Arthur Delarue and Sébastien Martin, PhD students in the MIT Operations Research Center and members of the team that built Boston’s bus algorithm, asked to meet me. In a very polite email, they told me that I didn’t have the whole story.

Kade and I met later that month with Arthur, Sébastien, and their adviser, MIT professor Dimitris Bertsimas. One of the first things they showed us was a photo of the parents who had protested against the schedules devised by the algorithm. Nearly all of them were white, while the majority of families in the Boston school system are not; white families represent only about 15 percent of the public school population in the city. Clearly something was off.

The MIT researchers had been working with the Boston Public Schools on adjusting bell times, including the development of the algorithm that the school system used to understand and quantify the policy trade-offs of different bell times and, in particular, their impact on school bus schedules. The main goal was to reduce costs and generate optimal schedules.

The MIT team described how the award-winning original algorithm, which focused on scheduling and routing, had started as a cost-calculation algorithm for the Boston Public Schools Transportation Challenge. Boston Public Schools had been trying to change start times for decades but had been stymied by the optimization problem: no one could find a way to improve the school schedule without tripling transportation costs, which is why the district organized the Transportation Challenge in the first place. The MIT team was the first to figure out a way to balance all of these factors and produce a solution. Until then, calculating the cost of the complex bus system had been such a difficult problem that it presented an impediment to even considering bell time changes.

After the Transportation Challenge, the team continued to work with the city, and over the previous year they had participated in a community engagement process and had worked with the Boston school system to build on top of the original algorithm, adding new features to produce a plan for new school start times. They factored in equity—existing start times were unfair, mostly to lower-income families—as well as recent research on teenage sleep that showed starting school early in the day may have negative health and economic consequences for high school students. They also tried to prioritize special education programs and prevent young children from leaving school too late. They wanted to do all this without increasing the budget, and ideally while reducing it.
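The actual MIT model is far more sophisticated (it jointly optimizes bus routes and bell times), but a toy weighted objective gives a feel for the kinds of trade-offs being balanced. The schools, weights, and scoring below are all invented for illustration and do not come from the Boston system.

```python
# Toy illustration of scoring candidate bell-time plans against multiple,
# partly competing objectives. The real system jointly optimized bus routes
# and start times; these schools, weights, and scores are all invented.
from itertools import product

HIGH_SCHOOLS = ["HS-1", "HS-2"]
ELEMENTARY = ["ES-1", "ES-2"]
CANDIDATE_STARTS = [7.5, 8.0, 8.5, 9.0]   # start times in hours

def plan_score(plan):
    # Health: reward high schools that start at or after 8:00 am.
    health = sum(1 for s in HIGH_SCHOOLS if plan[s] >= 8.0)
    # Young kids home before dark: reward elementary schools whose
    # 6.5-hour day ends well before 4:00 pm.
    daylight = sum(1 for s in ELEMENTARY if plan[s] + 6.5 <= 15.0)
    # Cost: buses can be reused only if schools are staggered, so penalize
    # plans where many schools share the same start time.
    cost_penalty = len(plan) - len(set(plan.values()))
    return 2.0 * health + 1.5 * daylight - 1.0 * cost_penalty

best = max(
    (dict(zip(HIGH_SCHOOLS + ELEMENTARY, times))
     for times in product(CANDIDATE_STARTS, repeat=4)),
    key=plan_score,
)
print(best, plan_score(best))
```

Constraints to rule out outlier schedules, like the 1:30 pm dismissals mentioned below, would show up here as hard filters applied to candidate plans before scoring.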

From surveys, the school system and the researchers knew that some families in every school would be unhappy with any change. They could have added additional constraints on the algorithm to limit some of the outlier situations, such as ending the school day at some schools at 1:30 pm, which was particularly exasperating for some parents. The solution that they were proposing significantly increased the number of high school students starting school after 8 am and significantly decreased the number of elementary school students dismissed after 4 pm so they wouldn’t have to go home after dark. Overall, it was much better for the majority of people. Although they were aware that some parents wouldn’t be happy, they weren't prepared for the scale of response from angry parents who ended up with start times and bus schedules that they didn't like.

Optimizing the algorithm for greater “equity” also meant many of the planned changes were “biased” against families with privilege. My view is that the fact that an algorithm was making decisions also upset people. And the families who were happy with the new schedule probably didn’t pay as much attention. The families who were upset marched on City Hall in an effort to overturn the planned changes. The ACLU and I supported the activist parents at the time and called “foul” on the school system and the city. Eventually, the mayor and the city caved to the pressure and killed off years of work and what could have been the first real positive change in busing in Boston in decades.

While I'm not sure privileged families would voluntarily give up their good start times to help poor families, I think that if people had understood what the algorithm was optimizing for—sleep health of high school kids, getting elementary school kids home before dark, supporting kids with special needs, lowering costs, and increasing equity overall—they would have agreed that the new schedule was, on the whole, better than the previous one. But when something becomes personal very suddenly, people tend to feel strongly and protest.

It reminds me a bit of a study, conducted by the Scalable Cooperation Group at the Media Lab based on earlier work by Joshua Greene, which showed people would support the sacrifice by a self-driving car of its passenger if it would save the lives of a large number of pedestrians, but that they personally would never buy a passenger-sacrificing self-driving car.

Technology is amplifying complexity and our ability to change society, altering the dynamics and difficulty of consensus and governance. But the idea of weighing trade-offs isn't new, of course. It's a fundamental feature of a functioning democracy.

While the researchers working on the algorithm and the plan surveyed and met with parents and school leadership, the parents were not aware of all of the factors that went into the final optimization of the algorithm. The trade-offs required to improve the overall system were not clear, and the potential gains sounded vague compared to the very specific and personal impact of the changes that affected them. And by the time the message hit the nightly news, most of the details and the big picture were lost in the noise.

A challenge in the case of the Boston Public Schools bus route changes was the somewhat black-box nature of the algorithm. The Center for Deliberative Democracy has used a process it calls deliberative polling, which brings together a statistically representative group of residents in a community to debate and deliberate policy goals over several days in hopes of reaching a consensus about how a policy should be shaped. If residents of Boston could have more easily understood the priorities being set for the algorithm, and hashed them out, they likely would have better understood how the results of their deliberations were converted into policy.

After our meeting with the team that invented the algorithm, for instance, Kade Crockford introduced them to David Scharfenberg, a reporter at the Boston Globe who wrote an article about them that included a very well done simulation allowing readers to play with the algorithm and see how changing cost, parent preferences, and student health interact as trade-offs—a tool that would have been extremely useful in explaining the algorithm from the start.

Boston’s effort to use technology to improve its bus routing system and start times provides a valuable lesson in how to ensure that such tools aren’t used to reinforce and amplify biased and unfair policies. They can absolutely make systems more equitable and fair, but they won’t succeed without our help.