Joi Ito's Web

Joi Ito's conversation with the living web.

Versions of these pieces appeared in WIRED Ideas.

If you looked at how many people check books out of libraries these days, you would see failure. Circulation, an obvious measure of success for an institution established to lend books to people, is down. But if you only looked at that figure, you'd miss the fascinating transformation public libraries have undergone in recent years. They've taken advantage of grants to become makerspaces, classrooms, research labs for kids, and trusted public spaces in every way possible. Much of this funding encouraged creative librarians to experiment, scale what worked, and iterate and share their learnings with others. If we had focused our funding on increasing just the number of books people were borrowing, we would have missed the opportunity to fund and witness these positive changes.

I serve on the boards of the MacArthur Foundation and the Knight Foundation, which have made grants that helped transform our libraries. I've also worked over the years with dozens of philanthropists and investors--those who put money into ventures that promise environmental and public health benefits in addition to financial returns. All of us have struggled to measure the effectiveness of grants and investments that seek to benefit the community, the environment, and so forth. My own research interest has been to analyze the ways in which people currently measure impact and perhaps to find better methods for measuring the impact of these investments.

As we see in the library example, simple metrics often aren't enough when it comes to quantifying success. They are typically easier to measure, and they're not unimportant. When it comes to health, for example, iron levels might be important for detecting anemia, but anemia isn't the only thing we care about. Being healthy is about being nourished and thus resilient so that when something does happen, we recover quickly.

Iron levels may be a proxy for this, but they aren't the proxy. Being happy is even more complicated; it involves health but also more abstract things such as feelings of purpose, belonging to a community, security, and many other things. Similarly, while I believe rigor and best practices are important, and while I support the innovation and thinking going into these metrics across all types of philanthropy, I think we risk oversimplifying problems and giving ourselves the false sense of clarity that quantitative metrics tend to create.

One of the reasons philanthropists sometimes fail to measure what really matters is that the global political economy primarily seeks what is efficient and scalable. Unfortunately, efficiency and scalability are not the same as a healthy system. In fact, many things that grow quickly and without constraints are far from healthy--consider cancer. Because of our belief in markets, we tend to accept that an economy has to be growing for society to be healthy--but this notion is misguided, particularly when it comes to things we consider social goods. If we examine a complex system like the environment, for instance, we can see that healthy rainforests don't grow in overall size but rather are extremely resilient, always changing and adapting.

There is more to assessing a complex system than looking at its growth, efficiency, and the handful of other qualities that can be quantified and thus measured.

As biologists know, healthy ecosystems are robust and resilient. They can tolerate reductions in certain species populations ... until they can't. Scholars in ecology and biology have tried to model robustness and resilience in an effort to understand how to build and maintain such systems. Scientists have also tried to apply these models to non-biological systems like the internet, asking questions such as "How many and which nodes can you remove from the internet before it stops functioning?" These models are different from the mathematics economists use. Instead of relying on aggregate numbers and formulae, they use network models of nodes and links to examine the dynamics among connections in a system, rather than the stocks and flows of an economy.
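As a toy illustration of the kind of question these network models ask (a sketch, not any ecologist's actual model; it assumes Python's networkx library and uses a scale-free graph as a rough stand-in for an internet-like topology), one can remove nodes one at a time and watch how the largest connected component shrinks:

```python
# Sketch: compare how an internet-like network degrades under random node
# failures versus targeted removal of its most-connected hubs.
import random
import networkx as nx

def robustness_curve(graph, targeted=False, steps=25):
    g = graph.copy()
    largest_components = []
    for _ in range(steps):
        if targeted:
            node = max(g.degree, key=lambda pair: pair[1])[0]  # biggest hub
        else:
            node = random.choice(list(g.nodes))                # random failure
        g.remove_node(node)
        largest = max((len(c) for c in nx.connected_components(g)), default=0)
        largest_components.append(largest)
    return largest_components

random.seed(0)
net = nx.barabasi_albert_graph(n=500, m=2, seed=0)  # scale-free test network
print("after random failures: ", robustness_curve(net)[-1])
print("after targeted attacks:", robustness_curve(net, targeted=True)[-1])
```

In scale-free graphs like this one, random failures tend to barely dent the largest component, while removing a handful of hubs fragments it much faster--the kind of structural insight that aggregate stocks and flows don't capture.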

Maybe there is something to learn from biologists and ecologists--the people who study the complex and messy real world of nature--when philanthropists are thinking about how to save the planet. We know from ecology and biology, for instance, that monocultures and simple approaches tend to be weak and fragile. The strongest systems are highly diverse and iterate quickly. When the immune system goes to war against a pathogen, the body engages in an arms race of mutations, deploying a diversity of approaches and constant iteration, communication, and coordination. Scientists also are learning that the microbiome, brain, and immune system are more integrated and complex than we ever imagined; together, these systems understand and tackle complex diseases in ways that are still beyond our scientific abilities. This research is pushing biology and computational models to a whole new and exciting level.

Many diseases, just like all of the systems that philanthropy tries to address, are complex networks of connected problems that go beyond any one specific pathway or molecule. Obesity is often described as simply a matter of managing one's calories and consequently cast as a lack of willpower on the part of an overweight individual. But it is probably more accurately understood in the context of a global food system that is incentivized by financial markets to produce low cost, high-calorie, unhealthy, and addictive foods. Calorie counting as the primary way to lose weight has been a rule of thumb, but we are learning that healthy fats are fine while sugar calories cause insulin resistance, which often leads to diabetes and obesity. So solving the obesity problem is going to require much more than increasing or reducing any one single thing like calories.
It's our food system that is unhealthy, and one result is overweight individuals.

In such a complex world, what are we to do? We need respect for plurality and heterogeneity. It's not that we shouldn't measure things, but rather that we should measure different things, take different approaches, and iterate and adapt. This is how nature builds resilient networks and systems. Because we as a society have an obsession with scale and other common measures of success, researchers and do-gooders have a natural tendency to want to use simple measures (as described in our blog post) and other "gold standards" to gauge the impact of the money spent and effort expended. I would urge us to instead support greater experimentation, smaller projects, more coordination, and better communication. We should surely measure indicators of negative effects--blood tests that measure what may be going wrong (or right) with our bodies are very useful, for instance.

We also need to consider that every change usually has multiple effects, some positive and others negative. We must constantly look for additional side effects and dynamically adapt whatever we do. Sticking with our obesity example, there is evidence that high fat, low sugar diets, generally known as ketogenic diets, are great for losing weight and preventing diabetes; the improvement can be assessed by measuring one's blood glucose levels. However, recent studies show that this diet might contribute to thyroid problems, so if we adhere to one, we must monitor thyroid function and occasionally take breaks from the diet.

Coming up with hypotheses about causal relationships, testing them and connecting them to larger complex models of how we think the world works is an important step. In addition, asking whether we are asking the right questions and solving the right problems, rather than prematurely focusing on solutions, is key. Jed Emerson, who pioneered early attempts to monetize the economic value of social impact, makes the same point in his recent book The Purpose of Capital.

For the last 1,300 years, the Ise Shrine in Japan has been ritually rebuilt by craftspeople every 20 years. The lumber mostly comes from the shrine's forest, managed on 200-year time scales as part of a national afforestation plan dating back centuries. The number of people working at Ise Shrine isn't growing, the shrine isn't trying to expand its business, and its workers are happy and healthy--the shrine is flourishing. Their primary concern is the resilience of the forest, rivers, and natural environment around the shrine. How would we measure their success, and what can we learn from their flourishing as we try to manage our society and our planet?

It is heartening to see impact investors developing evidence-based methods to tackle the complex and critical challenges that face us. It's also heartening that capital markets and investors are supportive of investing, and in some cases even accepting reduced returns, in an effort to help tackle our big, complex challenges. We must, however, change the way we fund potential solutions so that funding supports a diversity of disciplines and approaches. That, in turn, will require new methods of measurement--and perhaps we can take advantage of some very old ones, such as the data from Shinto priests who have been measuring the ice on a lake for centuries--to resist oversimplification. If we don't, we risk wasting these funds or, even worse, amplifying existing problems and creating new ones.

I would like to suggest a new word.

Anthropocosmos, n. and adj. Chiefly with "the." The epoch during which human activity is considered to be a significant influence on the balance, beauty, and ecology of the entire universe.

Based on ...

Anthropocene, n. and adj. Chiefly with "the." The era of geological time during which human activity is considered to be the dominant influence on the environment, climate, and ecology of the earth. --The Oxford English Dictionary

As we become painfully aware of the extent to which human activity is influencing the planet and its environment, we are also accelerating into the epoch of space exploration. Not only will our influence substantially affect the future of this blue dot we call Earth, but also our never-ending desire to explore and expand our frontiers is extending humanity's influence on the cosmos. I think of it as the Anthropocosmos, a term that captures the idea of how we must responsibly consider our role in the universe in the same way that Anthropocene expresses our responsibility for this world.

The struggle to protect the commons--the public spaces and resources we all depend on, like the oceans or Central Park--is not a new problem. Shepherds grazing sheep on shared land without consideration for other flocks will soon find grass growing thin. We already know that farming and the timber industry deplete the forests, and the destruction of that commons in turn affects the commons that is the air we breathe. These are versions of the same problem--the tragedy of the commons. It suggests that, left unchecked, self-interest can deplete resources that support the common good.


The early days of the internet were an amazing example of people and organizations from a variety of sectors coming together to create a global commons that was self-governed and well-managed by those who built it. Similarly, we're now in an internet-like moment in which we can imagine an explosion of innovation in space, our ultimate commons, as nongovernment groups, companies, and individuals begin to drive progress there. We can learn from the internet--its successes and failures--to create a generative and well-managed ecosystem in space as we grow into our responsibility as stewards of the Anthropocosmos.

Like the internet, space exploration has been mostly a government-vs.-government race and a government-with-government collaboration. The internet started out as Arpanet, which was funded by the Department of Defense's Advanced Research Projects Agency and operated by the military until 1990. A great deal of anxiety and deliberation went into the decision to allow commercial and nonresearch uses of the network, much as NASA extensively deliberated over opening the doors to "public-private partnership" leading up to the Commercial Crew Program launch in 2010. This year is the 50th anniversary of the Apollo 11 mission that put men on the moon, a multibillion-dollar effort funded by US taxpayers. Today, the private space industry is robust; private firms compete to deliver payloads and, soon, to put people into orbit and on the moon.

The state of development of the space industry reminds me of where the internet was in the early '90s. The cost of putting a satellite into orbit has gone from supercomputer-level costs and design cycles to just a few thousand dollars, similar to the cost of a fully loaded personal computer. In many ways, SpaceX, Blue Origin, and Rocket Lab are like UUNET and PSINet[1]--the first commercial internet service providers--doing more efficiently what government-funded research networks did in the past.

[1] Disclosure: I was at one point an employee of PSINet and the CEO of PSINet Japan.

When these private, for-profit ISPs took over the process of building out the internet into a global network, we saw an explosion of innovation--and a dot-com bubble, followed by a crash, and then another surge following the crash. When we were connecting everyone to the internet, we couldn't imagine all the possible things--good and bad--that it would bring. In the same way, space development will most likely expand far beyond the obvious--mining, human settlements, basic research--to many other ideas. The question now is, how can we direct the self-interested businesses that will undoubtedly power entrepreneurial expansion, growth, and innovation in space toward the shared, long-term health of the space commons?

In the early days of the internet, everyone pitched in like people tending a community garden. We were a band of jolly pirates on a newly discovered island paradise far away from the messiness of the real world. In "A Declaration of the Independence of Cyberspace," John Perry Barlow even declared cyberspace a new place, saying "We are forming our own social contract. This governance will arise according to the conditions of our world, not yours." His utopian idea, which I shared at the time, is now echoed by some of today's spacebound entrepreneurs who dream of settling Mars or deploying terraforming pods on planets across the galaxy.

While it wasn't obvious how life on the internet would play out when we were building the early infrastructure, back then academics, businesses, and virtually anyone else who was interested worked on its standards and resource allocation. We created governance mechanisms in communities like ICANN for coordination and dispute resolution, run by people dedicated to the protection and flourishing of the internet commons. In short, we built the foundations on which everyone could develop businesses and communities. At least in the beginning, the internet effectively harnessed the self-interest of commercial players and money from the markets to develop open protocols, free for everyone to use, that the communities designed. In the early 1990s, the internet was one of the best examples of a well-managed commons, with no one controlling it and everyone benefiting from it.

A quarter-century on, cyberspace hasn't evolved into the independent, self-organized utopia that Barlow envisioned. As the internet "democratized," new users and entrepreneurs who weren't involved in the genesis of the internet joined. It was overrun by people who didn't think of themselves as pirate gardeners tending the sacred network that supported this idealistic cyberspace--our newly created commons. They were more interested in products and services created by companies, and these companies often didn't care as much about ideals as about making returns for their investors. On the early internet, for example, people ran their own web servers, fees for connectivity were always flat--sometimes simply free--and almost all content was shared. Today, we have near-monopolies and walled-garden services; the mobile internet is metered and expensive; and copyright is vigorously enforced. From the perspective of this internet pioneer and others, cyberspace has become a much less hospitable place for users as well as developers--a tragedy of the commons.

Such disregard for the commons, if allowed to continue into planetary orbit and beyond, could have tangibly negative consequences. The decisions we make in the sociopolitical, economic, and architectural foundations of Earth's near-space cocoon will directly impact daily life on the surface--from debris falling in populated areas to advertisements that could block our view of the skies. A piece of space junk has already hit a woman in Oklahoma, and an out-of-control Chinese space station caused a great deal of anxiety before it luckily fell harmlessly into the Pacific Ocean.

So I think it is extremely important that we understand the rules and governance models for space in order to mitigate known problems such as space debris, set precedents for the unknown, and manage the race to lunar settlements. We already have the Outer Space Treaty, which governs our efforts and protects our resources in space as a shared commons. The International Space Station is a great example of a coordinated effort by many competing interests to develop standards and work together on a common project that benefits all participants.

However, recent announcements by Vice President Mike Pence of an "America First" agenda for the moon and space fail to acknowledge the fact that the US pursues space exploration and science in deep coordination and interdependence with other countries. As new opportunities emerge for humans to develop economic activities and communities in orbit around the Earth, on asteroids, and beyond, nationalistic actions by the Trump administration could undermine the opportunity to pursue a multistakeholder, internationally coordinated approach to designing future human space activities and to ensure that space benefits all humankind.

As space becomes more commercial and pedestrian, like the internet, we must not allow the cosmos to become a commercial and government free-for-all with disregard for the commons and shared values. In a recent Wall Street Journal article, Ariel Ekblaw, a Media Lab PhD student and director of the Media Lab Space Exploration Initiative[2], suggested we need a new generation of "space planners" and "space architects" to coordinate such expansive growth while enabling open innovation. Through such communities, we can build the space equivalents of ICANN and the Internet Engineering Task Force, in coordination with international policy and governance guidance from the UN Office for Outer Space Affairs.

[2] Disclosure: I am one of the two principal investigators on this initiative.

I am hopeful that Ariel and a new generation of space architects can learn from our successes and failures in protecting the internet commons and build a better paradigm for space, one that will robustly self-regulate and allow growth and generative creativity while developing strong norms that help us with our environmental and societal issues here on Earth. Already there are positive signs: SpaceX recently decided to fly low to limit space debris.

Fifty years ago, America "won" the moonshot. Today, we must "win" the Earthshot. The internet connected our world like never before, and as the iconic 1968 Earthrise photo shows, space helps us see our world like never before. Serving as responsible stewards of these crucial commons profoundly expands our circles of awareness. My dear friend Margarita Mora often asks, "What kind of ancestors do we want to be?" I want to be an ancestor who helped make the Anthropocene and the Anthropocosmos periods of history when humans helped the universe flourish with life and prosperity.

Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time.
--Winston Churchill

I was on the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2004 to 2007. This was a thankless task that I viewed as something like being on jury duty in exchange for being permitted to use the internet, upon which much of my life was built. Maybe people hate ICANN because it seems so bureaucratic, slow, and political, but I will always defend it as the best possible solution to something that is really hard--resolving the problem of allocating names and numbers for the internet when every country and every sector in the world has reasons for believing that it deserves a particular range of IP addresses or the rights to a domain name.

I view the early architecture of the internet as the most successful experiment in decentralized governance. The internet service providers and the people who ran the servers didn't need to know how the whole thing ran; they just needed to make sure that their corner of the internet was working properly and that people's email and packets magically found their way to the right places. Almost everything was decentralized except one piece--the determination of the unique names and numbers that identified every single thing connected to the internet. So it makes sense that this was the hardest piece for the open and decentralized idealists to get right.

After Reuters picked up the news on May 20 that ICANN had handed over the top-level domain (TLD) .amazon to Jeff Bezos' Amazon.com, pending a 30-day comment period, Twitter and the broader internet turned into a flurry of conversations criticizing the ICANN process. It brought out all of the usual conspiracy theorists and internet governance pundits, bringing back old memories and reminding me how some things are still the same, even though much on the internet is barely recognizable from the early days. And while it made me cringe and wish that the people of the Amazon basin had gotten control of that TLD, I agree with ICANN's decision. I remembered my time at ICANN and how hard it was to make the right decisions in the face of what, to the public, appeared to be obviously wrong.

Originally, early internet pioneer Jon Postel ran the root servers that managed the names and numbers, and he decided who got what. Generally speaking, the rule was first come, first served--but be reasonable about the names you ask for. A move to design a more formal governance process for managing these resources began as the internet became more important, and it included institutions such as the Berkman Center, where I am a faculty associate. The death of Jon Postel accelerated the process and triggered a somewhat contentious move by the US Commerce Department and others to step in and create ICANN.

ICANN is a multi-stakeholder nonprofit organization originally created under the US Department of Commerce that has since transitioned to become a global multi-stakeholder community. Its complicated organizational structure includes various supporting organizations to represent country-level TLD organizations, the public, businesses, governments, the domain name registrars and registries, network security, and so on. These constituencies are represented on the board of directors, which deliberates on and makes many of the key decisions that deal with names and numbers on the internet. One of the keys to the success of ICANN was that it wasn't controlled by governments the way the United Nations or the International Telecommunication Union (ITU) is; instead, governments were just part of an advisory function--the Governmental Advisory Committee (GAC). This allowed many more voices at the table as peers than in traditional intergovernmental organizations.

The difficulty of the process is that business and intellectual property interests believe international trademark law should govern who gets to control domain names; the "At Large" community, which represents users, has other views; and the GAC represents governments, which have completely different views on how things should be decided. It's like playing with a Rubik's Cube that doesn't actually have a solution.

The important thing was that everyone was in the room when we made decisions and got to have their say, and the board, which represented all of the various constituents, would vote and ultimately make decisions after each of the week-long deliberation sessions. Everyone walked away feeling that they had been heard and that, in the end, they were somehow committed to adhering to the consensus-like process.

When I joined the board, my view was that we should be extremely transparent about the process, stick to our commitments, and focus on good governance, even if some of the decisions made us feel uncomfortable.

During my tenure, we had two very controversial votes. One was the approval of the .xxx TLD. Some governments, such as Brazil, thought that it would be a kind of "sex pavilion" that would increase pornography on the internet. The US conservative Christian community engaged in a letter-writing campaign to ICANN and to politicians to block the approval. The ICM Registry, the company proposing the domain, suggested that .xxx would allow it to create best practices--including preventing copyright infringement and other illegal activity--and provide a way to enforce responsible adult entertainment.

It was first proposed in 2000 by the ICM Registry and resubmitted in 2004. ICM received a great deal of pushback and continued to fight for approval. The proposal came up for a vote while I was on the board and was struck down, 9 to 5--I voted in the minority, in favor of the proposal, because I didn't feel that we should deviate from our process and allow political pressure to sway us. In 2008, ICM filed an application with the International Centre for Dispute Resolution, and eventually, in 2011, ICANN approved the .xxx generic top-level domain.

In 2005, we approved .cat for the Catalan linguistic and cultural community, which also received a great deal of criticism and pushback because the community worried that it would be the beginning of a politicization of TLDs by various separatist movements and that ICANN would become the battleground for these disputes. But this concern never really materialized.

Then, on March 10, 2019, the board of ICANN approved the TLD .amazon, against the protests of the Amazon Cooperation Treaty Organization and the governments of South America representing the Amazon Basin. The vote was the result of seven years of deliberations and process, with governments arguing that a company shouldn't get the name of a geographic region and Jeff Bezos' Amazon arguing that it had complied with all of the required processes.

When I first joined MIT, we owned what was called net 18--in other words, any IP address that started with 18. The IP addresses 18.0.0.1 through 18.255.255.254 were all owned by MIT. You could recognize any MIT computer because its IP address started with 18. MIT, one of the early users of the internet, was allocated a whole "class A" segment of the internet, which adds up to 16,777,214 usable IP addresses--more than many entire countries have. Clearly this wasn't "fair," but it was consistent with the "first come, first served" style of early internet resource allocation. In April 2017, MIT sold 8 million of these addresses to Amazon and broke up our net 18, to the sorrow of many of us who so cherished this privilege and status. This also required us to renumber many things at MIT and turn our network into a much more "normal" one.
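As a quick sanity check on that arithmetic (a sketch using Python's standard ipaddress module, nothing MIT actually ran):

```python
# net 18 is the /8 block 18.0.0.0/8 -- every address beginning with 18.
import ipaddress

net18 = ipaddress.ip_network("18.0.0.0/8")
print(net18.num_addresses)                          # 16777216 addresses total (2**24)
print(net18[1], net18[-2])                          # 18.0.0.1 18.255.255.254
print(ipaddress.ip_address("18.1.2.3") in net18)    # True: it starts with 18
```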

Although I shook my fist at Amazon and capitalism when I heard this, in hindsight the elitist notion that MIT should have 16 million IP addresses was also wrong, and Amazon probably needed the addresses more.

So it was with similar ire that I read the tweet that said that Amazon got .amazon. I've been particularly involved in the protection of the rights of indigenous people through my conservation and cultural activities and my first reaction was that, yet again, Western capitalism and colonialism were treading on the rights of the vulnerable.

But then I remembered those hours and hours of deliberation and fighting over .xxx and the crazy arguments about why we couldn't let this happen. I also remembered fighting until I was red in the face about how we needed to stick to our principles and our self-declared guidelines and not allow pressure from US politicians and their constituents to sway us.

While I am not close to the ICANN process these days, I can imagine the pressure they must have come under. You can see the foot-dragging and years of struggle just by reading the board resolution approving .amazon.

So while it annoys me, and I wish that .amazon had gone to the people of the Amazon basin, I also feel that ICANN is probably working and doing its job. The job of ICANN is to govern the name space in an open and inclusive process and to steward this process in the best, but never perfect, way possible. And if you really care, we are in that 30-day public comment period, so speak up!

This column is the second in a series about young people and screens. Read the first post, about connected parenting, here.

When I was in high school, I emailed the authors of the textbooks we used so I could better question my teachers; I spent endless hours chatting with the sysadmins of university computer systems about networks; and I started threads online for many of my classes where we had much more robust conversations than in the classroom. The first conferences I attended as a teenager were conferences with mostly adult communities of online networkers who eventually became my mentors and colleagues.

I cannot imagine how I would have learned what I have learned or met the many, many people who’ve enriched my life and work without the internet. So I know first-hand how, today, the internet, online games, and a variety of emerging technologies can significantly benefit children and their experiences.

That said, I also know that, in general, the internet has become a more menacing place than when I was in school. To take just one example, parents and other industry observers share a growing concern about the content that YouTube serves up to young people. A Sesame Street sing-along with Elmo leads to one of those weird color ball videos, which leads to a string of clips that keeps kids glued to screens--increasingly strange but engaging content of questionable social or educational value, interspersed with stuff that looks like content but might be some sort of sponsored content for Play-Doh. The rise of commercial content for young people is exemplified by YouTube "kidfluencers," marketed to brands as offering "an added layer of kid safety," whose rampant marketing has many parents up in arms.

In response, Senator Ed Markey, a longtime proponent of children's online privacy protections, is cosponsoring a new bill to expand the Children's Online Privacy Protection Act (COPPA). It would, among other things, extend protections beyond children under 13 to cover those up to age 15 and ban online marketing videos targeted at them. The hope is that this will compel sites like YouTube and Facebook to manage their algorithms so that they do not serve up endless streams of content promoting commercial products to children. It gets a little complicated, though, because in today's world the kids themselves are brands, and they have product lines of their own. So the line between self-expression and endorsement is very blurry and confounds traditional regulations and delineations.

The proposed bill is well-intentioned and may limit exposure to promotional content, but it may also have unintended consequences. Take the existing version of COPPA, passed in 1998, which introduced a parental permission requirement for children under 13 to participate in commercial online platforms. Most open platforms responded by excluding those under 13, rather than taking on the onerous parental permission process and the challenges of serving children. This drove young people's participation underground on these sites, since they could easily misrepresent their age or use the account of a friend or caregiver. Research and everyday experience indicate that young people under 13 are all over YouTube and Facebook, and busy caregivers, including parents, are often complicit in letting this happen.

That doesn’t mean, of course, that parents aren’t concerned about the time their young people are spending on screens, and Google and Facebook have responded, respectively, with the kid-only “spaces” on YouTube and Messenger.

But these policy and tech solutions ignore the underlying reality that young people crave contact with older kids and grown-up expertise, and that mixed-age interaction is essential to their learning and development.

Not only is banning young people from open platforms an iffy, hard-to-enforce proposition, it's unclear whether it is even the best thing for them. It's possible that this new bill could damage the system the way other well-intentioned efforts have in the past. I can't forget the overly stringent Computer Fraud and Abuse Act. Written a year after the movie War Games, the law made it a felony to break the terms of service of an online service--so that, say, an investigative journalist couldn't run a script on Facebook to test whether the algorithm was doing what the company said it was. Regulating these technologies requires an interdisciplinary approach involving legal, policy, social, and technical experts working closely with industry, government, and consumers to get them to work the way we want them to.

Given the complexity of the issue, is the only way to protect young people to exclude them from the grown-up internet? Can algorithms be optimized for learning, high-quality content, and positive intergenerational communication for young people? What gets less attention than outright restriction is how we might optimize these platforms to provide joy, positive engagement, learning, and healthy communities for young people and families.

Children are exposed to risks at churches, schools, malls, parks, and anywhere adults and children interact. Even when harms and abuses happen, we don’t talk about shutting down parks and churches, and we don’t exclude young people from these intergenerational spaces. We also don’t ask parents to evaluate the risks and give written permission every time their kid walks into an open commercial space like a mall or grocery store. We hold the leadership of these institutions accountable, pushing them to establish positive norms and punish abuse. As a society, we know the benefits of these institutions outweigh the harms.

Based on a massive EU-wide study of children online, communication researcher Sonia Livingstone argues that internet access should be considered a fundamental right of children. She notes that risks and opportunities go hand in hand: “The more often children use the internet, the more digital skills and literacies they generally gain, the more online opportunities they enjoy and—the tricky part for policymakers—the more risks they encounter.” Shutting down children’s access to open online resources often most harms vulnerable young people, such as those with special needs or those lacking financial resources. Consider, for example, the case of a home- and wheelchair-bound child whose parents only discovered his rich online gaming community and empowered online identity after his death. Or Autcraft, a Minecraft server community where young people with autism can foster friendships via a medium that often serves them better than face-to-face interactions.

As I was working on my last column about young people and screen time, I spent some time talking to my sister, Mimi Ito, who directs the Connected Learning Lab at UC Irvine. We discussed how these problems and the negative publicity around screens were causing caregivers to develop unhealthy relationships with their children while trying to regulate their exposure to screens and the content they delivered. The messages caregivers are getting about the need to regulate and monitor screen time are much louder than messages about how they can actively engage with young people's online interests. Mimi's recent book, Affinity Online: How Connection and Shared Interest Fuel Learning, features a range of mixed-age online communities that demonstrate how young people can learn from other young people and adult experts online. Often it's the young people themselves who create communities, enforce norms, and insist on high-quality content. One of the cases, investigated by Rachel Cody Pfister as part of her PhD work at the University of California, San Diego, is Hogwarts at Ravelry, a community of Harry Potter fans who knit together on Ravelry, an online platform for fiber arts. A 10-year-old girl founded the community, and members ranged from 11 to 70-plus at the time of Rachel's study.

Hogwarts at Ravelry is just one of a multitude of examples of free and open intergenerational online learning communities of different shapes and sizes. The MIT Media Lab, where I work, is home to Scratch, a project created in the Lifelong Kindergarten group that gives millions of young people around the world a safe, creative, and healthy space for creative coding. Some Reddit groups, like /r/aww for cute animal content, or a range of subreddits on Pokemon Go, are lively spaces of intergenerational communication. As with Scratch, these massive communities thrive because of strict content and community guidelines, algorithms optimized to support these norms, and dedicated human moderation.

YouTube is also an excellent source of content for learning and discovering new interests. One now famous 12-year-old learned to dubstep just by watching YouTube videos, for example. The challenge is squaring the incentives of free-for-all commercial platforms like YouTube with the needs of special populations like young people and intergenerational sub-communities with specific norms and standards. We need to recognize that young people will make contact with commercial content and grown-ups online, and we need to figure out better ways to regulate and optimize platforms to serve participants of mixed ages. This means bringing young people’s interests, needs, and voices to the table, not shutting them out or making them invisible to online platforms and algorithms. This is why I’ve issued a call for research papers about algorithmic rights and protections for children together with my sister and our colleague and developmental psychologist, Candice Odgers. We hope to spark an interdisciplinary discussion of issues among a wide range of stakeholders to find answers to questions like: How can we create interfaces between the new, algorithmically governed platforms and their designers and civil society? How might we nudge YouTube and other platforms to be more like Scratch, designed for the benefit of young people and optimized not for engagement and revenue but instead for learning, exploration, and high-quality content? Can the internet support an ecosystem of platforms tailored to young people and mixed-age communities, where children can safely learn from each other, together with and from adults?

I know how important it is for young people to have connections to a world bigger and more diverse than their own. And I think that developers of these technologies (myself included) have a responsibility to design them based on scientific evidence and the participation of the public. We can’t leave it to commercial entities to develop and guide today’s learning platforms and internet communities—but we can’t shut these platforms down or prevent children from having access to meaningful online relationships and knowledge, either.

Like most parents of young children, I've found that determining how best to guide my almost 2-year-old daughter's relationship with technology--especially YouTube and mobile devices--is a challenge. And I'm not alone: One 2018 survey of parents found that overuse of digital devices has become the number one parenting concern in the United States.

Empirically grounded, rigorously researched advice is hard to come by. So perhaps it's not surprising that I've noticed a puzzling trend among the friends who provide me with unsolicited parenting advice. In general, my most liberal and tech-savvy friends exercise the most control and are weirdly technophobic when it comes to their children's screen time. What's most striking to me is how many of their opinions about children and technology are not representative of the broader consensus of research but seem to be based on fearmongering books, media articles, and TED talks that amplify and focus on only the especially troubling outcomes of too much screen time.

I often turn to my sister, Mimi Ito, for advice on these issues. She has raised two well-adjusted kids and directs the Connected Learning Lab at UC Irvine, where researchers conduct extensive research on children and technology. Her opinion is that "most tech-privileged parents should be less concerned with controlling their kids' tech use and more about being connected to their digital lives." Mimi is glad that the American Academy of Pediatrics (AAP) dropped its famous 2x2 rule--no screens for the first two years, and no more than two hours a day until a child hits 18. She argues that this rule fed into stigma and parent-shaming around screen time at the expense of what she calls "connected parenting"--guiding and engaging in kids' digital interests.

One example of my attempt at connected parenting is watching YouTube together with Kio, singing along with Elmo as Kio shows off the new dance moves she's learned. Every day, Kio has more new videos and favorite characters that she is excited to share when I come home, and the songs and activities follow us into our ritual of goofing off in bed as a family before she goes to sleep. Her grandmother in Japan is usually part of this ritual, too--a surreal scene in which she participates via FaceTime on my wife's iPhone, watching Kio watch videos, singing along, and cheering her on. I can't imagine depriving us of these ways of connecting with her.

The (Unfounded) War on Screens

The anti-screen narrative can sometimes read like the War on Drugs. Perhaps the best example is Glow Kids, in which Nicholas Kardaras tells us that screens deliver a dopamine rush rather like sex. He calls screens "digital heroin" and uses the term "addiction" when referring to children unable to self-regulate their time online.

More sober (and less breathlessly alarmist) assessments by child psychologists and data analysts offer a more balanced view of the impact of technology on our kids. Psychologist and baby observer Alison Gopnik, for instance, notes: "There are plenty of mindless things that you could be doing on a screen. But there are also interactive, exploratory things that you could be doing." Gopnik highlights how feeling good about digital connections is a normal part of psychology and child development. "If your friends give you a like, well, it would be bad if you didn't produce dopamine," she says.

Other research has found that the impact of screens on kids is relatively small, and even the conservative AAP says that cases of children who have trouble regulating their screen time are not the norm, representing just 4 percent to 8.5 percent of US children. This year, Andrew Przybylski and Amy Orben conducted a rigorous analysis of data on more than 350,000 adolescents and found a nearly negligible effect on psychological well-being at the aggregate level.

In their research on digital parenting, Sonia Livingstone and Alicia Blum-Ross found widespread concern among parents about screen time. They posit, however, that "screen time" is an unhelpful catchall term and recommend that parents focus instead on quality and joint engagement rather than just quantity. The Connected Learning Lab's Candice Odgers, a professor of psychological sciences, reviewed the research on adolescents and devices and found as many positive effects as negative ones. She points to the consequences of focusing attention disproportionately on the negative: "The real threat isn't smartphones. It's this campaign of misinformation and the generation of fear among parents and educators."

We need to immediately begin rigorous, longitudinal studies on the effects of devices and of the underlying algorithms that guide their interfaces, interactions, and recommendations for children. Then we can make evidence-based decisions about how these systems should be designed, what they should be optimized for, and how they should be deployed among children, rather than putting all the burden on parents to do the monitoring and regulation.

My guess is that for most kids, this issue of screen time is statistically insignificant in the context of all the other issues we face as parents--education, health, day care--and for those outside my elite tech circles even more so. Parents like me, and other tech leaders profiled in a recent New York Times series about tech elites keeping their kids off devices, can afford to hire nannies to keep their kids off screens. Our kids are the least likely to suffer the harms of excessive screen time. We are also the ones least qualified to be judgmental about other families who may need to rely on screens in different ways. We should be creating technology that makes screen entertainment healthier and fun for all families, especially those who don't have nannies.

I'm not ignoring the kids and families for whom digital devices are a real problem, but I believe that even in those cases, focusing on relationships may be more important than focusing on controlling access to screens.

Keep It Positive

One metaphor for screen time that my sister uses is sugar. We know sugar is generally bad for you, has many side effects, and can be addictive for kids. However, the occasional bonding ritual over milk and cookies might have more benefit to a family than an outright ban on sugar. Bans can also backfire, fueling binges and shame as well as mistrust and secrecy between parents and kids.

When parents allow kids to use computers, they often rely on spying tools, and many teens feel such parental surveillance is an invasion of their privacy. One study showed that using screen time to punish or reward behavior actually increased kids' net screen time. Another study, by Common Sense Media, shows what seems intuitively obvious: Parents use screens as much as kids do. Kids model their parents--and have a laserlike focus on parental hypocrisy.

In Alone Together, Sherry Turkle describes how the attention our devices demand has fractured family cohesion and eroded family interaction. While I agree that there are situations where devices are a distraction--I often declare "laptops closed" in class, and I feel that texting during dinner is generally rude--I do not feel that iPhones necessarily draw families apart.

In the days before the proliferation of screens, I ran away from kindergarten every day until they kicked me out. I missed more classes than any other student in my high school and barely managed to graduate. I also started more extracurricular clubs in high school than any other student. My mother actively supported my inability to follow rules and my obsessive tendency to pursue my interests and hobbies over those things I was supposed to do. In the process, she fostered a highly supportive trust relationship that allowed me to learn through failure and sometimes get lost without feeling abandoned or ashamed.

It turns out my mother intuitively knew that it's more important to stay grounded in the fundamentals of positive parenting. "Research consistently finds that children benefit from parents who are sensitive, responsive, affectionate, consistent, and communicative," says education professor Stephanie Reich, another member of the Connected Learning Lab, who specializes in parenting, media, and early childhood. One study shows measurable cognitive benefits from warm and less restrictive parenting.

When I watch my little girl learning dance moves from every earworm video that YouTube serves up, I imagine my mother looking at me while I spent every waking hour playing games online, which was my pathway to developing my global network of colleagues and exploring the internet and its potential early on. I wonder what wonderful as well as awful things will have happened by the time my daughter is my age, and I hope a good relationship with screens and the world beyond them can prepare her for this future.

During the Long Hot Summer of 1967, race riots erupted across the United States. The 159 riots--or rebellions, depending on which side you took--were mostly clashes between the police and African Americans living in poor urban neighborhoods. The disrepair of these neighborhoods before the riots began and the difficulty of repairing them afterward were attributed to something called redlining, an insurance-company term for drawing a red line on a map around parts of a city deemed too risky to insure.

In an attempt to improve recovery from the riots and to address the role redlining may have played in them, President Lyndon Johnson created the President's National Advisory Panel on Insurance in Riot-Affected Areas in 1968. The report from the panel showed that once a minority community had been redlined, the red line established a feedback cycle that continued to drive inequity and deprive poor neighborhoods of financing and insurance coverage--redlining had helped create the poor economic conditions that already afflicted these areas in the first place. There was a great deal of evidence at the time that insurance companies were engaging in overtly discriminatory practices, including redlining, while selling insurance to racial minorities, and would-be home- and business-owners were unable to get loans because financial institutions require insurance when making loans. Even before the riots, people there couldn't buy or build or improve or repair because they couldn't get financing.

Because of the panel's report, laws were enacted outlawing redlining and creating incentives for insurance companies to invest in developing inner-city neighborhoods. But redlining continued. To justify their discriminatory pricing or their refusal to sell insurance in urban centers, insurance companies developed sophisticated arguments about the statistical risks that certain neighborhoods presented.

The argument insurers used back then--that their job was purely technical and that it didn't involve moral judgments--is very reminiscent of the argument made by some social network platforms today: that they are technical platforms running algorithms and should not be, and are not, involved in judging the content. Insurers argued that their job was to adhere to technical, mathematical, and market-based notions of fairness and accuracy and to provide what was viewed--and is still viewed--as one of the most essential financial components of society. They argued that they were just doing their jobs. Second-order effects on society were really not their problem or their business.

Thus began the contentious career of the notion of "actuarial fairness," an idea that would spread in time far beyond the insurance industry into policing and paroling, education, and eventually AI, igniting fierce debates along the way over the push by our increasingly market-oriented society to define fairness in statistical and individualistic terms rather than relying on the morals and community standards used historically.

Risk spreading has been a central tenet of insurance for centuries. Risk classification has a shorter history. The notion of risk spreading is the idea that a community such as a church or village could pool its resources to help individuals when something unfortunate happened, spreading risk across the group--the principle of solidarity. Modern insurance began to assign a level of risk to an individual so that others in the pool with her had roughly the same level of risk--an individualistic approach. This approach protected individuals from carrying the expense of someone with a more risk-prone and costly profile. This individualistic approach became more prevalent after World War II, when the war on communism made anything that sounded too socialist unpopular. It also helped insurance companies compete in the market. By refining their risk classifications, companies could attract what they called "good risks." This saved them money on claims and forced competitors to take on more expensive-to-insure "bad risks."
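A stylized sketch makes the distinction concrete (the numbers below are invented, and real actuarial pricing is far more involved): under solidarity, everyone pays the pool's average expected loss; under risk classification, each class pays its own, which is exactly what makes "good risks" cheaper to attract.

```python
# Invented figures: expected annual loss per person for two risk classes.
groups = {
    "low risk":  {"people": 700, "expected_loss": 100},
    "high risk": {"people": 300, "expected_loss": 500},
}

total_loss = sum(g["people"] * g["expected_loss"] for g in groups.values())
total_people = sum(g["people"] for g in groups.values())

# Solidarity / risk spreading: one premium for everyone in the pool.
print(f"solidarity premium: {total_loss / total_people:.0f}")    # 220

# Risk classification: each class pays its own expected loss.
for name, g in groups.items():
    print(f"classified premium, {name}: {g['expected_loss']}")   # 100 / 500
```

Refining the classes lets an insurer offer the low-risk group a premium of 100 instead of 220, pulling them out of the shared pool and leaving competitors with the costlier risks.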

(A research colleague of mine, Rodrigo Ochigame, who focuses on algorithmic fairness and actuarial politics, directed me to historian Caley Horan, who is working on an upcoming book titled Insurance Era: The Privatization of Security and Governance in the Postwar United States that will elaborate on many of the ideas in this article, which is based on her research.)

The original idea of risk spreading and the principle of solidarity was based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence. By the final decades of the 20th century, however, this vision had given way to the so-called actuarial fairness promoted by insurance companies to justify discrimination.

While discrimination was initially based on outright racist ideas and unfair stereotypes, insurance companies evolved and developed sophisticated-seeming calculations to show that their discrimination was "fair": women should pay more for annuities because statistically they lived longer, and blacks should pay more for damage insurance when they lived in communities where crime and riots were likely to occur. While overt racism and bigotry still exist across American society, in insurance they have been integrated into, and hidden from the public behind, mathematics and statistics so difficult for nonexperts to understand that fighting back becomes nearly impossible.

By the late 1970s, women's activists had joined civil rights groups in challenging insurance redlining and risk-rating practices. These new insurance critics argued that the use of gender in insurance risk classification was a form of sex discrimination. Once again, insurers responded to these charges with statistics and mathematical models. Using gender to determine risk classification, they claimed, was fair; the statistics they used showed a strong correlation between gender and the outcomes they insured against.

And many critics of insurance inadvertently bought into the actuarial fairness argument. Civil rights and feminist activists in the late 20th century lost their battles with the insurance industry because they insisted on arguing about the accuracy of certain statistics or the validity of certain classifications rather than questioning whether actuarial fairness--an individualistic notion of market-driven pricing fairness--was a valid way of structuring a crucial and fundamental social institution like insurance in the first place.

But fairness and accuracy are not necessarily the same thing. For example, when Julia Angwin pointed out in her ProPublica report that risk scores used by the criminal justice system were biased against people of color, the company that sold the algorithmic risk-score system argued that its scores were fair because they were accurate: the scores accurately predicted that people of color were more likely to reoffend. This likelihood of reoffense--the recidivism rate, or the likelihood that someone commits another crime after being released--is calculated primarily using arrest data. But this correlation contributes to discrimination, because using arrests as a proxy for recommitting a crime means the algorithm codifies biases in arrests, such as police officers' bias toward arresting more people of color or toward patrolling poor neighborhoods more heavily. This risk of recidivism is used to set bail and determine sentencing and parole, and it informs predictive policing systems that direct police to neighborhoods likely to have more crime.

There are several obvious problems with this. If you believe the risk scores are accurate in predicting the future outcomes of a certain group of people, then it means it's "fair" that a person is more likely to spend more time in jail simply because they are black. This is actuarially "fair" but clearly not "fair" from a social, moral, or anti-discrimination perspective.

The other problem is that there are fewer arrests in rich neighborhoods, not because people there aren't smoking as much pot as in poor neighborhoods but because there is less policing. Obviously, one is more likely to be rearrested if one lives in an overpoliced neighborhood, and that creates a feedback loop--more arrests mean higher recidivism rates. In very much the same way that redlining in minority neighborhoods created a self-fulfilling prophecy of uninsurable communities, overpolicing and predictive policing may be "fair" and "accurate" in the short term, but the long-term effects on communities have been shown to be negative, creating self-fulfilling prophecies of poor, crime-ridden neighborhoods.
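To make the feedback loop concrete, here is a minimal simulation sketch in Python, using invented numbers and making no claim to model any real police department or risk-scoring product: two neighborhoods offend at exactly the same rate, but the one that is patrolled more heavily records more arrests, and a naive model that reads arrest counts as risk keeps sending more patrols there.

```python
# A toy sketch of an arrest-data feedback loop (illustrative numbers only).
# Both neighborhoods have the same true offense rate; only policing differs.
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.10                      # identical in both neighborhoods
policing_intensity = {"A": 0.9, "B": 0.3}     # chance an offense leads to an arrest
population = 10_000

for round_number in range(5):
    observed_arrest_rate = {}
    for hood, intensity in policing_intensity.items():
        arrests = sum(
            1 for _ in range(population)
            if random.random() < TRUE_OFFENSE_RATE and random.random() < intensity
        )
        observed_arrest_rate[hood] = arrests / population

    # A naive "risk model" trained on arrests sends more patrols to the
    # neighborhood with more recorded arrests, which raises its arrest rate.
    riskier = max(observed_arrest_rate, key=observed_arrest_rate.get)
    policing_intensity[riskier] = min(1.0, policing_intensity[riskier] + 0.02)

    print(round_number, observed_arrest_rate, policing_intensity)
```

The gap in recorded arrests persists and can only grow, even though the underlying behavior of the two neighborhoods never differs.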

Angwin also showed in a recent ProPublica report that, despite regulations, insurance companies charge minority communities higher premiums than white communities, even when the risks are the same. The Spotlight team at The Boston Globe reported that the household median net worth in the Boston area was $247,500 for whites and $8 for nonimmigrant blacks--the result of redlining and unfair access to housing and financial services. So while redlining for insurance is not legal, when Amazon decides to provide Amazon Prime free same-day shipping to its "best" customers, it's effectively redlining--reinforcing the unfairness of the past in new and increasingly algorithmic ways.

Like the insurers, large tech firms and the computer science community also tend to frame "fairness" in a depoliticized, highly technical way involving only mathematics and code, which reinforces a circular logic. AI is trained to use the outcomes of discriminatory practices, like recidivism rates, to justify continuing practices such as incarceration or overpolicing that may contribute to the underlying causes of crime, such as poverty, difficulty getting jobs, or lack of education. We must create a system that requires long-term public accountability and understandability of the effects on society of policies developed using machines. The system should help us understand, rather than obscure, the impact of algorithms on society. We must provide a mechanism for civil society to be informed and engaged in the way in which algorithms are used, optimizations set, and data collected and interpreted.

The computer scientists of today are more sophisticated in many ways than the actuaries of yore, and they are often sincerely trying to build algorithms that are fair. The new literature on algorithmic fairness usually doesn't simply equate fairness with accuracy, but instead defines various trade-offs between fairness and accuracy. The problem is that fairness cannot be reduced to a simple self-contained mathematical definition--fairness is dynamic and social, not a statistical issue. It can never be fully achieved and must be constantly audited, adapted, and debated in a democracy. By merely relying on historical data and current definitions of fairness, we will lock in the accumulated unfairnesses of the past, and our algorithms and the products they support will always trail society, reflecting past norms rather than future ideals and slowing social progress rather than supporting it.
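To see concretely why equal accuracy is not the same as equal treatment, here is a toy Python example with invented numbers, not drawn from any real data set: a classifier is exactly as accurate for two groups, yet wrongly flags a far larger share of one group as high risk.

```python
# Invented toy data: each record is (predicted_high_risk, actually_reoffended).

def rates(records):
    """Return (accuracy, false positive rate) for a list of records."""
    correct = sum(1 for pred, actual in records if pred == actual)
    negatives = sum(1 for _, actual in records if not actual)
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    return correct / len(records), false_positives / negatives

group_a = [(True, True)] * 40 + [(True, False)] * 20 + [(False, False)] * 40
group_b = [(True, True)] * 25 + [(True, False)] * 5 + [(False, False)] * 55 + [(False, True)] * 15

for name, group in (("group A", group_a), ("group B", group_b)):
    accuracy, fpr = rates(group)
    print(f"{name}: accuracy = {accuracy:.2f}, false positive rate = {fpr:.2f}")
# Both groups: accuracy = 0.80.
# Group A: 33% of people who never reoffend are flagged; group B: 8%.
```

Which of those numbers counts as "fair" is exactly the kind of question that cannot be settled inside the code.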

Science is built, enhanced, and developed through the open and structured sharing of knowledge. Yet some publishers charge so much for subscriptions to their academic journals that even the libraries of the world’s wealthiest universities such as Harvard are no longer able to afford the prices. Those publishers’ profit margins rival those of the most profitable companies in the world, even though research is largely underwritten by governments, and the publishers don’t pay authors and researchers or the peer reviewers who evaluate those works. How is such an absurd structure able to sustain itself—and how might we change it?

When the World Wide Web emerged in the ’90s, people began predicting a new, more robust era of scholarship based on access to knowledge for all. The internet, which started as a research network, now had an easy-to-use interface and a protocol to connect all of published knowledge, making each citation just a click away … in theory.

Instead, academic publishers started to consolidate. They solidified their grip on the rights to prestigious journals, allowing them to charge for access and exclude the majority of the world from reading research publications—all while extracting billions of dollars in subscription fees from university libraries and corporations. This meant that some publishers, such as Elsevier, the science, technology, and medicine-focused branch of the RELX Group publishing conglomerate, are able today to extract huge margins—36.7 percent in 2017 in Elsevier’s case, more profitable than Apple, Google/Alphabet, or Microsoft that same year.

And in most scholarly fields, it’s the most important journals that continue to be secured behind paywalls—a structure that doesn’t just affect the spread of information. Those journals have what we call high “impact factors,” which can skew academic hiring and promotions in a kind of self-fulfilling cycle that works like this: Typically, anyone applying for an academic job is evaluated by a committee and by other academics who write letters of evaluation. In most fields, papers published in peer-reviewed journals are a critical part of the evaluation process, and the so-called impact factor, which is based on the citations that a journal gets over time, is important. Evaluators, typically busy academics who may lack deep expertise in a candidate’s particular research topic, are prone to skim the submitted papers and rely heavily on the number of papers published and the impact factor—as a proxy for journal prestige and rigor—in their assessment of the qualifications of a candidate.
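For readers who haven't run into the metric, the commonly used two-year impact factor is a simple ratio: citations received this year to a journal's articles from the previous two years, divided by the number of articles it published in those two years. A minimal sketch with made-up numbers:

```python
# Two-year journal impact factor, computed from hypothetical numbers.

def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    """Citations in year Y to items from years Y-1 and Y-2, divided by the
    number of citable items published in Y-1 and Y-2."""
    return citations_this_year / articles_prev_two_years

# A hypothetical journal: 150 articles over two years, cited 1,200 times this year.
print(impact_factor(1200, 150))   # 8.0
```

Note that the number describes the average citation rate of a whole journal, not the quality of any individual paper or author, which is precisely why leaning on it as a hiring proxy is so distorting.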

And so young researchers are forced to prioritize publication in journals with high impact factors, faulty as they are, if they want tenure or promotions. The consequence is that important work gets locked up behind paywalls and remains largely inaccessible to anyone not in a major research lab or university. This includes the taxpayers who funded the research in the first place, the developing world, and the emerging world of nonacademic researchers and startup labs.

Breaking Down the Walls

To bypass the paywalls, in 2011 Alexandra Elbakyan started Sci-Hub, a website that provides free access to millions of otherwise inaccessible academic papers. She was based in Kazakhstan, far from the courts where academic publishers can easily bring lawsuits. In the movie Paywall, Elbakyan says that Elsevier’s mission was to make “uncommon knowledge common,” and she jokes that she was just trying to help the company do that because it seemed unable to do so itself. While Elbakyan has been widely criticized for her blatant disregard for copyright, Sci-Hub has become a popular tool among academics, even at major universities, because it removes the friction of paywalls and provides links to collaborators beyond them. She was able to do what the late Aaron Swartz, my Creative Commons colleague and dear friend, envisioned but was unable to achieve in his lifetime.

But, kind of like the Berlin Wall, the academic journal paywall can crumble, and several efforts are underway to undermine it. The Open Access, or OA, movement—a worldwide effort to make scholarly research literature freely accessible online—began several decades ago. Essentially, researchers upload the unpublished version of their papers to a repository focused on subject matter or operated by an academic institution. The movement was sparked by services like arXiv.org, which Cornell University started in 1991, and became mainstream when Harvard established the first US self-archiving policy in 2008; other research universities around the world quickly followed.

Many publishers have since found ways to allow open access in their journals, but only in exchange for an expensive “article processing charge,” or APC (usually hundreds or thousands of dollars per article), paid by the author or the author’s institution as a sort of cost of being published. OA publishers such as the Public Library of Science, or PLOS, charge APCs to make papers available without a paywall, and many traditional commercial publishers also allow authors to pay an APC so that their papers, though appearing in what is technically a paywalled journal, can be available publicly.

When I was CEO of Creative Commons a decade ago, at a time when OA was beginning in earnest, one of my first talks was to a group of academic publishers. I remember trying to describe our proposal to give authors a way to mark their works with the rights they wished to grant, including use without charge but with attribution. The first comment from the audience came from an academic publisher who declared my comments “disgusting.”

We’ve come a long way since then. Even RELX now allows open access for some of its journals and uses Creative Commons licenses to mark works that are freely available.

Many publishers I’ve talked to are preparing to make open access to research papers a reality. In fact, most journals already allow some open access through the expensive article processing charges I mentioned earlier.

So in some ways, it feels like “we won.” But has the OA movement truly reached its potential to transform research communication? I don't think so, especially if paid open access just continues to enrich a small number of commercial journal publishers. We have also seen the emergence of predatory OA journals with no peer review or other quality control measures, and that, too, has undermined the OA movement.

We can pressure publishers to lower APCs, but if they retain control of the platforms and key journals, they will continue to extract high fees even in an OA world. So far, they have successfully prevented collective bargaining through confidentiality agreements and other legal means.

Another Potential Solution

The MIT Press, led by Amy Brand, and the Media Lab recently launched a collaboration called The Knowledge Futures Group. (I am director of the Media Lab and a board member at the press.) Our aim is to create a new open knowledge ecosystem. The goal is to develop and deploy infrastructure to allow free, rigorous, and open sharing of knowledge and to start a movement toward greater institutional and public ownership of that infrastructure, reclaiming territory ceded to publishers and commercial technology providers.

(In some ways, the solution might be similar to what blogging was to online publishing. Blogs were simple scripts, free and open source software, and a bunch of open standards that interoperate between services. They allowed us to create simple and very low cost informal publishing platforms that did what you used to have to buy multimillion-dollar Content Management Systems for. Blogs led the way for user generated content and eventually social media.)

While academic publishing is more complex, a refactoring and an overhaul of the software, protocols, processes, and business underlying such publishing could revolutionize it financially as well as structurally.

We are developing a new open source and modern publishing platform called PubPub and a global, distributed method of understanding public knowledge called Underlay. We have established a lab to develop, test, and deploy other technologies, systems, and processes that will help researchers and their institutions. They would have access to an ecosystem of open source tools and an open and transparent network to publish, understand, and evaluate scholarly work. We imagine developing new measures of impact and novelty with more transparent peer review; publishing peer reviews; and using machine learning to help identify novel ideas and people and mitigate systemic biases, among other things. It is imperative that we establish an open innovation ecosystem as an alternative to the control that a handful of commercial entities maintain over not only the markets for research information but also over academic reputation systems and research technologies more generally.

One of the main pillars of academic reputation is authorship, which has become increasingly problematic as science has become more collaborative. Who gets credit for research and discovery can have a huge impact on researchers and institutions. But the order of author names on a journal article has no standardized meaning. It is often determined more by seniority and academic culture than by actual effort or expertise. As a result, credit is often not given where credit is due. With electronic publishing, we can move beyond a “flat” list of author names, in the same way that film credits specify the contributions of those involved, but we have continued to allow the constraints of print to guide our practices. We can also experiment with and improve peer review to provide better incentives, processes, and fairness.

It’s essential for universities, and core to their mission, to assert greater control over systems for knowledge representation, dissemination, and preservation. What constitutes knowledge, the use of knowledge, and the funding of knowledge is the future of our planet, and it must be protected from twisted market incentives and other corrupting forces. The transformation will require a movement involving a global network of collaborators, and we hope to contribute to catalyzing it.

When a massive earthquake and tsunami hit the eastern coast of Japan on March 11, 2011, the Fukushima Daiichi Nuclear Power Plant failed, leaking radioactive material into the atmosphere and water. People around the country as well as others with family and friends in Japan were, understandably, concerned about radiation levels—but there was no easy way for them to get that information. I was part of a small group of volunteers who came together to start a nonprofit organization, Safecast, to design, build, and deploy Geiger counters and a website that would eventually make more than 100 million measurements of radiation levels available to the public.

We started in Japan, of course, but eventually people around the world joined the movement, creating an open global data set. The key to success was the mobile, easy to operate, high-quality but lower-cost kit that the Safecast team developed, which people could buy and build to collect data that they might then share on the Safecast website.

While Chernobyl and Three Mile Island spawned monitoring systems and activist NGOs as well, this was the first time that a global community of experts formed to create a baseline of radiation measurements, so that everyone could monitor radiation levels around the world and measure fluctuations caused by any radiation event. (Different regions have very different baseline radiation levels, and people need to know what those are if they are to understand if anything has changed.)
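A small sketch, with illustrative numbers only, of why per-region baselines matter: an anomaly is a deviation from a region's own history, not from some universal threshold, so the same absolute reading can be unremarkable in one place and alarming in another. The region names and readings below are hypothetical.

```python
# Flag a reading as anomalous if it sits far outside the region's own history.
from statistics import mean, stdev

# Hypothetical historical readings in counts per minute (CPM) for two regions.
baseline = {
    "Region A": [45, 50, 48, 52, 47, 49, 51],   # naturally higher background
    "Region B": [18, 20, 19, 21, 17, 20, 19],
}

def is_anomalous(region: str, reading: float, sigmas: float = 3.0) -> bool:
    history = baseline[region]
    return abs(reading - mean(history)) > sigmas * stdev(history)

print(is_anomalous("Region A", 52))  # False: within this region's normal range
print(is_anomalous("Region B", 52))  # True: far above this region's baseline
```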

More recently Safecast, which is a not-for-profit organization, has begun to apply this model to air quality in general. The 2017 and 2018 fires in California were the air quality equivalent of the Daiichi nuclear disaster, and Twitter was full of conversations about N95 masks and how they were interfering with Face ID. People excitedly shared posts about air quality; I even saw Apple Watches displaying air quality figures. My hope is that this surge of interest in air quality among Silicon Valley elites will help advance a field, namely the monitoring of air quality, that has been steadily developing but has not yet been as successful as Safecast was with radiation measurements. I believe this lag stems in part from the fact that Silicon Valley believes so strongly in entrepreneurs that people there try to solve every problem with a startup. But that’s not always the right approach.

Hopefully, interest in data about air quality and the difficulty of getting a comprehensive view will drive more people to consider an open data approach over proprietary ones. Right now, big companies and governments are the largest users of data that we’ve handed to them—mostly for free—to lock up in their vaults. Pharmaceutical firms, for instance, use the data to develop drugs that save lives, but they could save more lives if their data were shared. We need to start using data for more than commercial exploitation, deploying it to understand the long-term effects of policy and to create transparency around those in power—not around private citizens. We need to flip the model from short-term commercial use to long-term societal benefit.

The first portable air sensors were the canaries that miners used to monitor for poison gases in coal mines. Portable air sensors that consumers could easily use were developed in the early 2000s, and since then the technology for measuring air quality has changed so rapidly that data collected just a few years ago is often now considered obsolete. Nor is “air quality” or the Air Quality Index standardized, so levels get defined differently by different groups and governments, with little coordination or transparency.

Yet right now, the majority of players are commercial entities that keep their data locked up, a business strategy reminiscent of software before we “discovered” the importance of making it free and open source. These companies are not coordinating or contributing data to the commons and are diverting important attention and financial resources away from nonprofit efforts to create standards and open data, so we can conduct research and give the public real baseline measurements. It’s as if everyone is building and buying thermometers that measure temperatures in Celsius, Fahrenheit, Delisle, Newton, Rankine, Réaumur, and Rømer, or even making up their own bespoke measurement systems, without discussing or sharing conversion rates. While standardization would likely benefit the businesses themselves, competing companies have a difficult time coordinating on their own and instead treat proprietary, nonstandard improvements as a business advantage.
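The analogy can be made concrete. Most air quality indexes map a pollutant concentration onto an index value by piecewise-linear interpolation between breakpoints, but the breakpoint tables differ across countries and vendors. The sketch below uses two invented tables, not any official standard, to show how the same PM2.5 reading produces two different "air quality" numbers.

```python
# Piecewise-linear index calculation over two illustrative breakpoint tables.

def index_value(concentration: float, breakpoints) -> float:
    """breakpoints: list of (conc_lo, conc_hi, index_lo, index_hi)."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= concentration <= c_hi:
            return (i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo
    raise ValueError("concentration outside the table")

# Two hypothetical national tables for 24-hour PM2.5 (micrograms per cubic meter).
scheme_x = [(0, 12, 0, 50), (12, 35, 50, 100), (35, 55, 100, 150)]
scheme_y = [(0, 25, 0, 50), (25, 50, 50, 100), (50, 75, 100, 150)]

print(round(index_value(30, scheme_x)))  # 89 under scheme X
print(round(index_value(30, scheme_y)))  # 60 under scheme Y, for the same air
```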

To attempt to standardize the measurement of small particulates in the air, a number of organizations have created the Air Sensor Workgroup. The ASW is working to build an Air Quality Data Commons to encourage sharing of data with standardized measurements, but there is little participation from the for-profit startups making the sensors that suddenly became much more popular in the aftermath of the fires in California.

Although various groups are making efforts to reach consensus on the science and process of measuring air quality, they are confounded by these startups that believe (or their investors believe) their business depends on big data that is owned and protected. Startups don’t naturally collaborate, share, or conduct open research, and I haven’t seen any air quality startups with a mechanism for making data collected available if the business is shut down.

Air quality startups may seem like a niche issue. But the issue of sharing pools of data applies to many very important industries. I see, for instance, a related challenge in data from clinical trials.

The lack of central repositories of data from past clinical trials has made it difficult, if not impossible, for researchers to look back at the science that has already been performed. The federal government spends billions of dollars on research, and while some projects like the Cancer Moonshot mandate data openness, most government funding doesn’t require it. Biopharmaceutical firms submit trial data to the FDA as evidence—but not, as a rule, to researchers or the general public, in much the same way that most makers of air quality detection gadgets don’t share their data. Clinical trial data and medical research funded by government thus may sit hidden behind corporate doors at big companies. Locking up such data impedes the discovery of new drugs through novel techniques and makes it impossible for benefits and results to accrue to other trials.

Open data will be key to modernizing the clinical trial process and integrating AI and other advanced techniques used for analyses, which would greatly improve health care in general. I discuss some of these considerations in more detail in my PhD thesis.

Some clinical trials have already begun requiring the sharing of individual patient data for clinical analyses within six months of a trial’s end. And there are several initiatives sharing data in a noncompetitive manner, which lets researchers create promising ecosystems and data “lakes” that could lead to new insights and better therapies.

Overwhelming public outcry can also help spur the embrace of open data. Before the 2011 earthquake in Japan, only the government there and large corporations held radiation measurements, and those were not granular. People only began caring about radiation measurements when the Fukushima Daiichi site started spewing radioactive material, and the organizations that held that data were reluctant to release it because they wanted to avoid causing panic. However, the public demanded the data, and that drove the activism that fueled the success of Safecast. (Free and open source software also started with hobbyists and academics. Initially there was a great deal of fighting between advocacy groups and corporations, but eventually the business models clicked and free and open source software became mainstream.)

We have a choice about which sensors we buy. Before going out and buying a new fancy sensor or backing that viral Kickstarter campaign, make sure the organization behind it makes a credible case about the scholarship underpinning its technology; explains its data standards; and most importantly, pledges to share its data using a Creative Commons CC0 dedication. For privacy-sensitive data sets that can’t be fully open, like those at Ancestry.com and 23andme, advances in cryptography such as multiparty computation and zero knowledge proofs would allow researchers to learn from data sets without the release of sensitive details.
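As a rough illustration of the kind of cryptography I mean, here is a toy Python sketch of additive secret sharing, one building block behind some multiparty-computation schemes. The hospitals, servers, and counts are invented, and a real deployment would add authentication, dropout handling, and formal security analysis.

```python
# Toy additive secret sharing: each participant splits a private value into
# random shares; no single party sees a raw value, yet the shares sum to the total.
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int):
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)  # shares sum to value mod PRIME
    return shares

# Three hypothetical hospitals, each holding a private patient count.
private_values = [120, 87, 203]
all_shares = [share(v, 3) for v in private_values]

# Each "server" receives one share from every hospital and sums only what it sees.
server_totals = [sum(column) % PRIME for column in zip(*all_shares)]

# Combining the servers' partial sums reveals only the aggregate, never the inputs.
print(sum(server_totals) % PRIME)  # 410
```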

We have the opportunity and the imperative to reframe the debate on who should own and control our data. The Big Data narrative sells the idea that whoever owns the data controls the market, and that idea is playing out as a tragedy of the commons, confounding the use of information for society and science.


When the Boston public school system announced new start times last December, some parents found the schedules unacceptable and pushed back. The algorithm used to set these times had been designed by MIT researchers, and about a week later, Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, emailed asking me to cosign an op-ed that would call on policymakers to be more thoughtful and democratic when they consider using algorithms to change policies that affect the lives of residents. Kade, who is also a Director's Fellow at the Media Lab and a colleague of mine, is always paying attention to the key issues in digital liberties and is great at flagging things that I should pay attention to. (At the time, I had no contact with the MIT researchers who designed the algorithm.)

I made a few edits to her draft, and we shipped it off to the Boston Globe, which ran it on December 22, 2017, under the headline "Don’t blame the algorithm for doing what Boston school officials asked." In the op-ed, we joined in criticizing the changes but argued that the blame lay not with the algorithm itself but with the city’s political process that prescribed how the various concerns and interests would be weighed and optimized. That day, the Boston Public Schools decided not to implement the changes. Kade and I high-fived and called it a day.

The protesting families, Kade, and I did what we thought was fair and just given the information that we had at the time. A month later, a more nuanced picture emerged, one that I think offers insights into how technology can and should provide a platform for interacting with policy—and how policy can reflect a diverse set of inputs generated by the people it affects. In what feels like a particularly dark period for democracy and during a time of increasingly out-of-control deployment of technology into society, I feel a lesson like this one has given me a greater understanding of how we might more appropriately introduce algorithms into society. Perhaps it even gives us a picture of what a Democracy 2.0 might look like.

A few months later, having read the op-ed in the Boston Globe, Arthur Delarue and Sébastien Martin, PhD students in the MIT Operations Research Center and members of the team that built Boston’s bus algorithm, asked to meet me. In a very polite email, they told me that I didn’t have the whole story.

Kade and I met later that month with Arthur, Sebastien, and their adviser, MIT professor Dimitris Bertsimas. One of the first things they showed us was a photo of the parents who had protested against the schedules devised by the algorithm. Nearly all of them were white. The majority of families in the Boston school system are not white; white families represent only about 15 percent of the public school population in the city. Clearly something was off.

The MIT researchers had been working with the Boston Public Schools on adjusting bell times, including the development of the algorithm that the school system used to understand and quantify the policy trade-offs of different bell times and, in particular, their impact on school bus schedules. The main goal was to reduce costs and generate optimal schedules.

The MIT team described how the award-winning original algorithm, which focused on scheduling and routing, had started as a cost-calculation algorithm for the Boston Public Schools Transportation Challenge. Boston Public Schools had been trying to change start times for decades but had been confounded by the optimization problem, lacking a way to improve the school schedule without tripling the costs, which is why it organized the Transportation Challenge to begin with. The MIT team was the first to figure out a way to balance all of these factors and produce a solution. Until then, calculating the cost of the complex bus system had been such a difficult problem that it presented an impediment to even considering bell time changes.

After the Transportation Challenge, the team continued to work with the city, and over the previous year they had participated in a community engagement process and had worked with the Boston school system to build on top of the original algorithm, adding new features in order to produce a plan for new school start times. They factored in equity—existing start times were unfair, mostly to lower-income families—as well as recent research on teenage sleep showing that starting school early in the day may have negative health and economic consequences for high school students. They also tried to prioritize special education programs and prevent young children from leaving school too late. They wanted to do all this without increasing the budget, and even reducing it.

From surveys, the school system and the researchers knew that some families in every school would be unhappy with any change. They could have added additional constraints on the algorithm to limit some of the outlier situations, such as ending the school day at some schools at 1:30 pm, which was particularly exasperating for some parents. The solution that they were proposing significantly increased the number of high school students starting school after 8 am and significantly decreased the number of elementary school students dismissed after 4 pm so they wouldn’t have to go home after dark. Overall it was much better for the majority of people. Although they were aware that some parents wouldn’t be happy, they weren't prepared for the scale of response from angry parents who ended up with start times and bus schedules that they didn't like.
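To give a flavor of what optimizing across several goals at once looks like, the sketch below is a deliberately tiny, invented formulation and not the MIT team's actual model: a handful of hypothetical schools, a few candidate start times, and a single weighted score that rewards later high school starts and earlier elementary dismissals while penalizing schedules that make it harder to reuse buses across tiers.

```python
# A toy multi-objective bell-time search (invented schools, times, and weights).
from itertools import product

schools = {"HS-1": "high", "HS-2": "high", "ES-1": "elementary", "ES-2": "elementary"}
candidate_starts = [7.5, 8.0, 8.5, 9.25]       # start times in hours, 7.5 == 7:30 am
weights = {"sleep": 1.0, "early_dismissal": 1.0, "cost": 0.5}

def score(assignment):
    # Reward high schools that start at or after 8 am (teen sleep health).
    sleep = sum(1 for s, t in assignment.items()
                if schools[s] == "high" and t >= 8.0)
    # Reward elementary schools dismissed by midafternoon (6.5-hour day).
    early = sum(1 for s, t in assignment.items()
                if schools[s] == "elementary" and t + 6.5 <= 15.5)
    # Crude cost proxy: identical start times prevent reusing the same buses.
    cost_penalty = len(assignment) - len(set(assignment.values()))
    return (weights["sleep"] * sleep
            + weights["early_dismissal"] * early
            - weights["cost"] * cost_penalty)

best = max(
    (dict(zip(schools, times)) for times in product(candidate_starts, repeat=len(schools))),
    key=score,
)
print(best, score(best))
```

Change the weights and the "best" schedule changes, which is the point: the contested question is not the solver but who sets the weights and how openly.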

Optimizing the algorithm for greater “equity" also meant many of the planned changes were "biased" against families with privilege. My view is that the fact that an algorithm was making decisions also upset people. And the families who were happy with the new schedule probably didn’t pay as much attention. The families who were upset marched on City Hall in an effort to overturn the planned changes. The ACLU and I supported the activist parents at the time and called "foul" on the school system and the city. Eventually, the mayor and the city caved to the pressure and killed off years of work and what could have been the first real positive change in busing in Boston in decades.

While I'm not sure privileged families would voluntarily give up their good start times to help poor families, I think that if people had understood what the algorithm was optimizing for—sleep health of high school kids, getting elementary school kids home before dark, supporting kids with special needs, lowering costs, and increasing equity overall—they would have agreed that the new schedule was, on the whole, better than the previous one. But when something becomes personal very suddenly, people tend to feel strongly and protest.

It reminds me a bit of a study, conducted by the Scalable Cooperation Group at the Media Lab based on earlier work by Joshua Greene, which showed people would support the sacrifice by a self-driving car of its passenger if it would save the lives of a large number of pedestrians, but that they personally would never buy a passenger-sacrificing self-driving car.

Technology is amplifying complexity and our ability to change society, altering the dynamics and difficulty of consensus and governance. But the idea of weighing trade-offs isn't new, of course. It's a fundamental feature of a functioning democracy.

While the researchers working on the algorithm and the plan surveyed and met with parents and school leadership, the parents were not aware of all of the factors that went into the final optimization of the algorithm. The trade-offs required to improve the overall system were not clear, and the potential gains sounded vague compared to the very specific and personal impact of the changes that affected them. And by the time the message hit the nightly news, most of the details and the big picture were lost in the noise.

A challenge in the case of the Boston Public Schools bus route changes was the somewhat black-box nature of the algorithm. The Center for Deliberative Democracy has used a process it calls deliberative polling, which brings together a statistically representative group of residents in a community to debate and deliberate policy goals over several days in hopes of reaching a consensus about how a policy should be shaped. If residents of Boston could have more easily understood the priorities being set for the algorithm, and hashed them out, they likely would have better understood how the results of their deliberations were converted into policy.

After our meeting with the team that invented the algorithm, for instance, Kade Crockford introduced them to David Scharfenberg, a reporter at the Boston Globe who wrote an article about them that included a very well done simulation allowing readers to play with the algorithm and see how changing cost, parent preferences, and student health interact as trade-offs—a tool that would have been extremely useful in explaining the algorithm from the start.

Boston’s effort to use technology to improve its bus routing system and start times provides a valuable lesson in how to ensure that such tools aren’t used to reinforce and increase biased and unfair policies. They can absolutely make systems more equitable and fair, but they won’t succeed without our help.

Somewhere between 2 and 3 billion years ago, what scientists call the Great Oxidation Event, or GOE, took place, causing the mass extinction of anaerobic bacteria, the dominant life form at the time. A new type of bacteria, cyanobacteria, had emerged, and it had the photosynthetic ability to produce glucose and oxygen out of carbon dioxide and water using the power of the sun. Oxygen was toxic to many of their anaerobic cousins, and most of them died off. In addition to being a massive extinction event, the oxygenation of the planet kicked off the evolution of multicellular organisms (620 to 550 million years ago), the Cambrian explosion of new species (540 million years ago), and an ice age that triggered the end of the dinosaurs and many cold-blooded species, leading to the emergence of the mammals as the apex group (66 million years ago) and eventually resulting in the appearance of Homo sapiens, with all of their social sophistication and complexity (315,000 years ago).

I’ve been thinking about the GOE, the Cambrian Explosion, and the emergence of the mammals a lot lately, because I’m pretty sure we’re in the midst of a similarly disruptive and pivotal moment in history that I’m calling the Great Digitization Event, or GDE. And right now we’re in that period where the oxygen, or in this case the internet as used today, is rapidly and indifferently killing off many systems while allowing new types of organizations to emerge.

As WIRED celebrates its 25th anniversary, the Whole Earth Catalog its 50th anniversary, and the Bauhaus its 100th anniversary, we’re in a modern Cambrian era, sorting through an explosion of technologies enabled by the internet that are the equivalent of the stunning evolutionary diversity that emerged some 500 million years ago. Just as in the Great Oxidation Event, in which early organisms that created the conditions for the explosion of diversity had to die out or find a new home in the mud on the ocean floor, the early cohort that set off the digital explosion is giving way to a new, more robust form of life. As Fred Turner describes in From Counterculture to Cyberculture, we can trace all of this back to the hippies in the 1960s and 1970s in San Francisco. They were the evolutionary precursor to the advanced life forms observable in the aftermath at Stoneman Douglas High School. Let me give you a first-hand account of how the hippies set off the Great Digitization Event.

From the outset, members of that movement embraced nascent technological change. Stewart Brand, one of the Merry Pranksters, began publishing the Whole Earth Catalog in 1968, which spawned a collection of other publications that promoted a vision of society that was ecologically sound and socially just. The Whole Earth Catalog gave birth to one of the first online communities, the Whole Earth ‘Lectronic Link, or WELL, in 1985.

Around that time, R.U. Sirius and Mark Frost started the magazine High Frontiers, which was later relaunched with Queen Mu and others as Mondo 2000. The magazine helped legitimize the burgeoning cyberpunk movement, which imbued the growing community of personal computer users and participants in online communities with an ‘80s version of hippie sensibilities and values. A new wave of science fiction, represented by William Gibson’s Neuromancer, added the punk rock dystopian edge.

Timothy Leary, a “high priest” of the hippie movement and New Age spirituality, adopted me as his godson when we met during his visit to Japan in 1990, and he connected me to the Mondo 2000 community that became my tribe. Mondo 2000 was at the hub of cultural and technological innovation at the time, and I have wonderful memories of raves advertising “free VR” and artist groups like Survival Research Labs that connected the hackers from the emerging Silicon Valley scene with Haight-Ashbury hippies.

I became one of the bridges between the Japanese techno scene and the San Francisco rave scene. Many raves in San Francisco happened in the then-gritty area south of Market Street, near Townsend and South Park. ToonTown, a rave producer, set up its offices (and living quarters) there, which attracted designers and others who worked in the rave business, such as Nick Philip, a British BMX'er and designer. Nick, who started out designing flyers for raves using photocopy machines and collages, created a clothing brand called Anarchic Adjustment, which I distributed in Japan and which William Gibson, Dee-Lite, and Timothy Leary wore. He began using computer graphics tools from companies like Silicon Graphics to create the artwork for T-shirts and posters.

In August 1992, Jane Metcalfe and Louis Rossetto rented a loft in the South Park area because they wanted to start a magazine to chronicle what had evolved from a counterculture into a powerful new culture built around hippie values, technology, and the new Libertarian movement. (In 1971, Louis had appeared on the cover of The New York Times Magazine as coauthor, with Stan Lehr, of “Libertarianism, The New Right Credo.”) When I met them, they had a desk and a 120-page laminated prototype for what would become WIRED. Nicholas Negroponte, who had cofounded the MIT Media Lab in 1985, was backing Jane and Louis financially. The founding executive editor of WIRED was Kevin Kelly, who was formerly one of the editors of the Whole Earth Catalog. I got involved as a contributing editor. I didn’t write articles at the time, but made my debut in the media in the third issue of WIRED, mentioned as a kid addicted to MMORPGs in an article by Howard Rheingold. Brian Behlendorf, who ran the SFRaves mailing list, which announced and discussed the SF rave scene, became the webmaster of HotWired, a groundbreaking exploration of the new medium of the Web.

WIRED came along just as the internet and the technology around it really began to morph into something much bigger than a science fiction fantasy, in other words, on the cusp of the GDE. The magazine tapped into the design talent around South Park, literally connecting to the design and development shop Cyborganic, with ethernet cables strung inside of the building where they shared a T1 line. It embraced the post-psychedelic design and computer graphics that distinguished the rave community and established its own distinct look that bled over into the advertisements in the magazine, like one Nick Philip designed for Absolut, with the most impact coming from people such as Barbara Kuhr and Erik Adigard.

Structured learning didn't serve me particularly well. I was kicked out of kindergarten for running away too many times, and I have the dubious distinction of having dropped out of two undergraduate programs and a doctoral program in business administration. I haven't been tested, but I have come to think of myself as "neuroatypical" in some way.

"Neurotypical" is a term used by the autism community to describe what society refers to as "normal." According to the Centers for Disease Control, one in 59 children, and one in 34 boys, are on the autism spectrum--in other words, neuroatypical. That's 3 percent of the male population. If you add ADHD--attention deficit hyperactivity disorder--and dyslexia, roughly one out of four people are not "neurotypicals."

In NeuroTribes, Steve Silberman chronicles the history of such non-neurotypical conditions, including autism, which was described by the Viennese doctor Hans Asperger and by Leo Kanner in Baltimore in the 1930s and 1940s. Asperger worked in Nazi-occupied Vienna, where institutionalized children were actively euthanized, and he defined a broad spectrum of children, some of whom were merely socially awkward while others had extraordinary abilities and a "fascination with rules, laws and schedules," to use Silberman's words. Leo Kanner, on the other hand, described children who were more disabled. Kanner's suggestion that the condition was activated by bad parenting made autism a source of stigma for parents and led to decades of work attempting to "cure" autism rather than developing ways for families, the educational system, and society to adapt to it.

Our schools in particular have failed such neurodiverse students, in part because they've been designed to prepare our children for typical jobs in a mass-production-based white- and blue-collar environment created by the Industrial Revolution. Students acquire a standardized skillset and an obedient, organized, and reliable nature that served society well in the past--but not so much today. I suspect that the quarter of the population who are diagnosed as somehow non-neurotypical struggle with the structure and the method of modern education, and many others probably do as well.

I often say that education is what others do to you and learning is what you do for yourself. But I think that even the broad notion of education may be outdated, and we need a completely new approach to empower learning: We need to revamp our notion of "education" and shake loose the ordered and linear metrics of the society of the past, when we were focused on scale and the mass production of stuff. Accepting and respecting neurodiversity is the key to surviving the transformation driven by the internet and AI, which is shattering the Newtonian predictability of the past and replacing it with a Heisenbergian world of complexity and uncertainty.

In Life, Animated, Ron Suskind tells the story of his autistic son Owen, who lost his ability to speak around his third birthday. Owen had loved the Disney animated movies before his regression began, and a few years into his silence it became clear he'd memorized dozens of Disney classics in their entirety. He eventually developed an ability to communicate with his family by playing the role, and speaking in the voices, of the animated characters he so loved, and he learned to read by reading the film credits. Working with his family, Owen recently helped design a new kind of screen-sharing app, called Sidekicks, so other families can try the same technique.

Owen's story tells us how autism can manifest in different ways and how, if caregivers can adapt rather than force kids to "be normal," many autistic children survive and thrive. Our institutions, however, are poorly designed to deliver individualized, adaptive programs to educate such kids.

In addition to schools poorly designed for non-neurotypicals, our society traditionally has had scant tolerance or compassion for anyone lacking social skills or perceived as not "normal." Temple Grandin, the animal welfare advocate who is herself somewhere on the spectrum, contends that Albert Einstein, Wolfgang Mozart, and Nikola Tesla would have been diagnosed on the "autistic spectrum" if they were alive today. She also believes that autism has long contributed to human development and that "without autism traits we might still be living in caves." She is a prominent spokesperson for the neurodiversity movement, which argues that neurological differences must be respected in the same way that diversity of gender, ethnicity or sexual orientation is.

Despite challenges with some of the things that neurotypicals find easy, people with Asperger's and other forms of autism often have unusual abilities. For example, the Israeli Defense Force's Special Intelligence Unit 9900, which focuses on analyzing aerial and satellite imagery, is partially staffed with people on the autism spectrum who have a preternatural ability to spot patterns. I believe at least some of Silicon Valley's phenomenal success stems from the fact that its culture places little value on the conventional social and corporate norms, prizing age-based experience and conformity, that dominate most institutions on the East Coast and much of society as a whole. It celebrates nerdy, awkward youth and has turned their super-human, "abnormal" powers into a money-making machine that is the envy of the world. (This new culture is wonderfully inclusive from a neurodiversity perspective but white-dude centric and problematic from a gender and race perspective.)

This sort of pattern recognition and many other unusual traits associated with autism are extremely well suited for science and engineering, often enabling a super-human ability to write computer code, understand complex ideas and elegantly solve difficult mathematical problems.

Unfortunately, most schools struggle to integrate atypical learners, even though it's increasingly clear that interest-driven learning, project-based learning, and undirected learning seem better suited for the greater diversity of neural types we now know exist.

Ben Draper, who runs the Macomber Center for Self Directed Learning, says that while the center is designed for all types of children, kids whose parents identify them as on the autism spectrum often thrive there after having had difficulty in conventional schools. Ben is part of the so-called unschooling movement, which believes that not only should learning be self-directed, but that we shouldn't even focus on guiding it. Children will learn in the process of pursuing their passions, the reasoning goes, and so we just need to get out of their way, providing support as needed.

Many, of course, argue that such an approach is much too unstructured and verges on irresponsibility. In retrospect, though, I feel I certainly would have thrived on "unschooling." In a recent paper, Ben and my colleague Andre Uhl, who first introduced me to unschooling, argue that it not only works for everyone, but that the current educational system, in addition to providing poor learning outcomes, impinges on the rights of children as individuals.

MIT is among a small number of institutions that, in the pre-internet era, provided a place for non-neurotypical types with extraordinary skills to gather and form community and culture. Even MIT, however, is still working to give these students the diversity and flexibility they need, especially in our undergraduate program.

I'm not sure how I'd be diagnosed, but I was completely incapable of being traditionally educated. I love to learn, but I go about it almost exclusively through conversations and while working on projects. I somehow kludged together a world view and life with plenty of struggle, but also with many rewards. I recently wrote a PhD dissertation about my theory of the world and how I developed it. Not that anyone should generalize from my experience--one reader of my dissertation said that I'm so unusual, I should be considered a "human sub-species." While I take that as a compliment, I think there are others like me who weren't as lucky and ended up going through the traditional system and mostly suffering rather than flourishing. In fact, most kids probably aren't as lucky as me, and while some types are more suited for success in the current configuration of society, a huge percentage of kids who fail in the current system have a tremendous amount to contribute that we aren't tapping into.

In addition to equipping kids for basic literacy and civic engagement, industrial age schools were primarily focused on preparing kids to work in factories or perform repetitive white-collar jobs. It may have made sense to try to convert kids into (smart) robotlike individuals who could solve problems on standardized tests alone, with no smartphone or internet and just a No. 2 pencil. Sifting out non-neurotypical types or trying to remediate them with drugs or institutionalization may have seemed important for our industrial competitiveness. The tools for instruction were also limited by the technology of the times. In a world where real robots are taking over many of those tasks, perhaps we need to embrace neurodiversity and encourage collaborative learning through passion, play, and projects, in other words, to start teaching kids to learn in ways that machines can't. We can also use modern technology for connected learning that supports diverse interests and abilities and is integrated into our lives and communities of interest.

At the Media Lab, we have a research group called Lifelong Kindergarten, and the head of the group, Mitchel Resnick, recently wrote a book by the same name. The book is about the group's research on creative learning and the four Ps--Passion, Peers, Projects, and Play. The group believes, as I do, that we learn best when we are pursuing our passion and working with others in a project-based environment with a playful approach. My memory of school was "no cheating," "do your own work," "focus on the textbook, not on your hobbies or your projects," and "there's time to play at recess, be serious and study or you'll be shamed"--exactly the opposite of the four Ps.

Many mental health issues, I believe, are caused by trying to "fix" some types of neurodiversity or by treating people who have them insensitively or inappropriately. Many mental "illnesses" can be "cured" by providing the appropriate interface to learning, living, or interacting for that person, focusing on the four Ps. My experience with the educational system, both as its subject and, now, as part of it, is not so unique. I believe, in fact, that at least the one-quarter of people who are diagnosed as somehow non-neurotypical struggle with the structure and the method of modern education. People who are wired differently should be able to think of themselves as the rule, not as an exception.


As a Japanese, I grew up watching anime like Neon Genesis Evangelion, which depicts a future in which machines and humans merge into cyborg ecstasy. Such programs caused many of us kids to become giddy with dreams of becoming bionic superheroes. Robots have always been part of the Japanese psyche—our hero, Astro Boy, was officially entered into the legal registry as a resident of the city of Niiza, just north of Tokyo, which, as any non-Japanese can tell you, is no easy feat. Not only do we Japanese have no fear of our new robot overlords, we’re kind of looking forward to them.

It’s not that Westerners haven’t had their fair share of friendly robots like R2-D2 and Rosie, the Jetsons’ robot maid. But compared to the Japanese, the Western world is warier of robots. I think the difference has something to do with our different religious contexts, as well as historical differences with respect to industrial-scale slavery.

The Western concept of “humanity” is limited, and I think it’s time to seriously question whether we have the right to exploit the environment, animals, tools, or robots simply because we’re human and they are not.

Sometime in the late 1980s, I participated in a meeting organized by the Honda Foundation in which a Japanese professor—I can’t remember his name—made the case that the Japanese had more success integrating robots into society because of their country’s indigenous Shinto religion, which remains the official national religion of Japan.

Followers of Shinto, unlike Judeo-Christian monotheists and the Greeks before them, do not believe that humans are particularly “special.” Instead, there are spirits in everything, rather like the Force in Star Wars. Nature doesn’t belong to us, we belong to Nature, and spirits live in everything, including rocks, tools, homes, and even empty spaces.

The West, the professor contended, has a problem with the idea of things having spirits and feels that anthropomorphism, the attribution of human-like attributes to things or animals, is childish, primitive, or even bad. He argued that the Luddites who smashed the automated looms that were eliminating their jobs in the 19th century were an example of that, and for contrast he showed an image of a Japanese robot in a factory wearing a cap, having a name and being treated like a colleague rather than a creepy enemy.

The general idea that Japanese accept robots far more easily than Westerners is fairly common these days. Osamu Tezuka, the Japanese cartoonist and the creator of Astro Boy, noted the relationship between Buddhism and robots, saying, ''Japanese don't make a distinction between man, the superior creature, and the world about him. Everything is fused together, and we accept robots easily along with the wide world about us, the insects, the rocks—it's all one. We have none of the doubting attitude toward robots, as pseudohumans, that you find in the West. So here you find no resistance, simply quiet acceptance.'' And while the Japanese did of course become agrarian and then industrial, Shinto and Buddhist influences have caused Japan to retain many of the rituals and sensibilities of a more pre-humanist period.

In Sapiens, Yuval Noah Harari, an Israeli historian, describes the notion of “humanity” as something that evolved in our belief system as we morphed from hunter-gatherers to shepherds to farmers to capitalists. When we were early hunter-gatherers, nature did not belong to us—we were simply part of nature—and many indigenous people today still live with belief systems that reflect this point of view. Native Americans listen to and talk to the wind. Indigenous hunters often use elaborate rituals to communicate with their prey and the predators in the forest. Many hunter-gatherer cultures, for example, are deeply connected to the land but have no tradition of land ownership, which has been a source of misunderstandings and clashes with Western colonists that continues even today.

It wasn’t until humans began engaging in animal husbandry and farming that we began to have the notion that we own and have dominion over other things, over nature. The notion that anything—a rock, a sheep, a dog, a car, or a person—can belong to a human being or a corporation is a relatively new idea. In many ways, it’s at the core of an idea of “humanity” that makes humans a special, protected class and, in the process, dehumanizes and oppresses anything that’s not human, living or non-living. Dehumanization and the notion of ownership and economics gave birth to slavery at scale.

In Stamped from the Beginning, the historian Ibram X. Kendi describes the colonial era debate in America about whether slaves should be exposed to Christianity. British common law stated that a Christian could not be enslaved, and many plantation owners feared that they would lose their slaves if they were Christianized. They therefore argued that blacks were too barbaric to become Christian. Others argued that Christianity would make slaves more docile and easier to control. Fundamentally, this debate was about whether Christianity—giving slaves a spiritual existence—increased or decreased the ability to control them. (The idea of permitting spirituality is fundamentally foreign to the Japanese because everything has a spirit and therefore it can’t be denied or permitted.)

This fear of being overthrown by the oppressed, or somehow becoming the oppressed, has weighed heavily on the minds of those in power since the beginning of mass slavery and the slave trade. I wonder if this fear is almost uniquely Judeo-Christian and might be feeding the Western fear of robots. (While Japan had what could be called slavery, it was never at an industrial scale.)

Lots of powerful people (in other words, mostly white men) in the West are publicly expressing their fears about the potential power of robots to rule humans, driving the public narrative. Yet many of the same people wringing their hands are also racing to build robots powerful enough to do that—and, of course, underwriting research to try to keep control of the machines they’re inventing, although this time it doesn’t involve Christianizing robots … yet.

Douglas Rushkoff, whose book, Team Human, is due out early next year, recently wrote about a meeting in which one of the attendees’ primary concerns was how rich people could control the security personnel protecting them in their armored bunkers after the money/climate/society armageddon. The financial titans at the meeting apparently brainstormed ideas like using neck control collars, securing food lockers, and replacing human security personnel with robots. Douglas suggested perhaps simply starting to be nicer to their security people now, before the revolution, but they thought it was already too late for that.

Friends express concern that, by making a connection between slaves and robots, I may have the effect of dehumanizing slaves or the descendants of slaves, thus exacerbating an already tense and advanced war of words and symbols. While fighting the dehumanization of minorities and underprivileged people is important and something I spend a great deal of effort on, focusing strictly on the rights of humans and not on the rights of the environment, animals, and even things like robots is one of the attitudes that got us into this awful mess with the environment in the first place. In the long run, maybe it’s not so much about humanizing or dehumanizing, but rather a problem of creating a privileged class—humans—that we use to arbitrarily justify ignoring, oppressing, and exploiting.

Technology is now at a point where we need to start thinking about what, if any, rights robots deserve and how to codify and enforce those rights. Simply imagining that our relationships with robots will be like those of the human characters in Star Wars with C-3PO, R2-D2 and BB-8 is naive.

As Kate Darling, a researcher at the MIT Media Lab, notes in a paper on extending legal rights to robots, there is a great deal of evidence that human beings are sympathetic to and respond emotionally to social robots—even non-sentient ones. I don’t think this is some gimmick; rather, it’s something we must take seriously. We have a strong negative emotional response when someone kicks or abuses a robot—in one of the many gripping examples Darling cites in her paper, a US military officer called off a test using a leggy robot to detonate and clear minefields because he thought it was inhumane. This is a kind of anthropomorphization, and, conversely, we should think about what effect abusing a robot has on the abusing human.

My view is that merely replacing oppressed humans with oppressed machines will not fix the fundamentally dysfunctional order that has evolved over centuries. As a follower of Shinto, I’m obviously biased, but I think that taking a look at “primitive” belief systems might be a good place to start. Thinking about the development and evolution of machine-based intelligence as an integrated “Extended Intelligence” rather than artificial intelligence that threatens humanity will also help.

As we make rules for robots and their rights, we will likely need to make policy before we know what their societal impact will be. Just as the Golden Rule teaches us to treat others the way we would like to be treated, abusing and “dehumanizing” robots prepares children and structures society to continue reinforcing the hierarchical class system that has been in place since the beginning of civilization.

It’s easy to see how the shepherds and farmers of yore could come up with the idea that humans were special, but I think AI and robots may help us begin to imagine that perhaps humans are just one instance of consciousness and that “humanity” is a bit overrated. Rather than just being human-centric, we must develop a respect for, and an emotional and spiritual dialogue with, all things.

Around the time I turned 40, I decided to address the trifecta of concerns I had about climate change, animal rights, and my health: I went hard vegan. My doctor had been warning me to cut down on red meat, and I had also moved to a rural Japanese farming village populated by farmers growing a wide variety of veggies. They were delicious.

After a while, the euphoria wore off and the culinary limitations of vegan food, especially when traveling, became challenging. I joined the legions of ex-vegans to become a cheating pescatarian. (I wonder if this article will get me bumped off of the Wikipedia Notable Vegans list.) Five years later, the great Tohoku earthquake of 2011 hit Japan, dumping a pile of radioactive cesium-137 on top of our organic garden and shattering the wonderful organic loop we had created. I took my job at the Media Lab and moved to the US the same year, thus starting my slow but steady reentry into the community of animal eaters.

Ten years after I proclaimed myself vegan, I met Isha Datar1, the executive director of New Harvest, an organization devoted to advancing the science of what she calls “cellular agriculture.” Isha is trying to figure out how to grow any agricultural product—milk, eggs, flavors, fragrances, fish, fruit—from cells instead of animals.

Art fans will remember Oron Catts and Ionat Zurr, who in 2003 served “semi-living steak” grown from the skeletal muscles of frogs as an art project called Disembodied Cuisine. Five years later, they presented “Victimless Leather” at MoMA in New York, an installation that involved tissue growing inside a glass container in the shape of a leather jacket. Protests broke out when the museum had to disconnect the life support system because the jacket grew too big.

Isha wasn’t trying to make provocative art. She was and is trying to solve our food problem, and New Harvest is supporting and coordinating research efforts at numerous labs and research groups. Thanks to technology that has given us an explosion of meat-like products that run the ethical gamut in their production processes, we now have more challenging choices to make than simply whether to be vegan, pescatarian, or carnivore.

Civilians often clump the alternative meat companies and labs together in some kind of big meatless meatball, but, just like different kinds of self-driving car systems, they’re quite distinct. The Society of Automotive Engineers identifies five levels of autonomy; similarly, I see six levels of cellular agriculture. While “driver assist” is nice, having a car pick me up and drive me home is a completely different deal, and the latter might not evolve from the former; they might have separate development paths. I think the different branches of cellular agriculture are developing the same way.

Level 0: Just Be Vegan

Some plants are very high in protein, like beans, and they taste great just the way they are.

Level 1: Go Alternative

As a vegan, I ate a lot of processed plant-based proteins like tofu that feel fleshy and taste savory. I call these Level 1 meat alternatives. Many vegan Chinese restaurants serve “fake meat,” which is usually some sort of seitan, a wheat gluten, or textured vegetable protein like textured soy. It’s flavored and has a texture similar to some sort of animal protein, say, shrimp. This kind of protein substitute is a meat alternative—a plant-based protein that starts to mimic the experience of eating meat. Veggie burgers fall into this category.

Level 2: Get Cultured

These meat alternatives are also plant-based, but they contain some “cultured” proteins that are produced using a new scientific process. Yeast or bacteria are engineered to ferment some plant substances and output products that mimic or even replicate the proteins that make a plant-based recipe taste, smell, look or feel more like meat. Impossible Foods’ Impossible Burger falls into this category because its key ingredient is a protein called heme that is produced by genetically engineered yeast. Heme imparts “bloodiness” and “meatiness” to the plant-based burger-like base. This process relies on the industrial biotechnology and large-scale fermentation systems that are already used in the food industry. JUST’s Just Scramble “scrambled eggs” uses a proprietary process to create a plant-based protein as well, combining processes used in the pharmaceutical business, food R&D labs, and chemistry labs.

Level 3: Post-Vegan

Foods at this level are made of plant-based ingredients combined with cultured animal cells (as opposed to the products of bacterial fermentation). In other words, cells as ingredient, plants for mass. The animal cells provide the color, smell, or taste of meat, but not the substance. This relies on industrial biotech and large-scale cell-culture production methods already used in the pharma industry. Level 3 is the first level that requires going beyond the tools and the science already available in the food business.

Level 4: That's a Spicy Meatball

Level 4 alternatives are pure cultured animal cells like the products Memphis Meats and others are working on. The texture and shape of a real steak come from the muscle cells that grow around the bones and otherwise self-organize into bundles of tissue. At Level 4, we aren’t really dealing with sophisticated texture yet, so we’re pretty much turning the cells we’ve grown into meatballs. (The difference between this and Level 3 is that most of the mass of the food here is animal cells, whereas Level 3 is mostly plant-based with cells sprinkled on top.)

Right now, the primary growth medium for cell cultures is fetal serum (the most common type is harvested from cow fetuses), and it currently takes roughly 50 liters of serum and costs about $6,000 to produce a single beef burger. A key breakthrough needed to push us into Level 4 at a reasonable cost is figuring out a viable way of feeding cells using non-animal sources of energy. This will involve new science on the cell side and on the growth-medium side. And we need to better understand and reproduce nutrients and flavor molecules in addition to producing pure calories.

Level 5: Tastes Like Chicken

Now we get to something that actually resembles a chicken thigh or T-bone steak. This is the Jetsons’ version that people imagine when they hear the phrase “lab-grown meat.” It is very much the goal of the alternative meat effort, and no one has achieved it yet. Scientifically, this requires the kind of advanced tissue science currently being developed to let us replace failed organs in our bodies with replacements grown outside them.

A beaker full of animal cells doesn’t give you the texture of a steak; with this technology, scientists can use 3-D scaffolding to encourage 3-D growth, and they can grow blood vessels in these tissues as well. We can even use plant-based materials as the scaffolding, but what we really want is for that scaffolding to also grow, which is how organs in our body grow. It turns out that research in regenerative medicine and tissue science is giving us a better understanding of how we might create the texture and scaffolding required to grow an actual kidney instead of just a petri dish full of kidney cells. Scientists have not really focused, however, on the idea of deploying tissue science for food ... yet.

Level 6: ZOMG What Is This?

Tasty fake meat is exciting, but not nearly as exciting as the idea of a completely new food system with a diversity of inputs and completely new outputs—a completely new food science. Imagine augmented meat tissue with novel nutritional profiles, texture, flavor and other characteristics—in other words, instead of just trying to recreate meat, scientists develop completely new ingredients that are actually “post-meat.”

Let me explain what investors and I find so exciting about all this activity. My dream, and Isha’s dream, is that we figure out a way to make use of extremely efficient “energy harvesters” like algae, kelp, fungi, or anything else that can take a renewable energy source like the sun and convert it into calories. The idea is to figure out a mechanism to convert these organic stores of energy into inputs for bioreactors, which would then transform these calories into anything we want.

Scientists have made so many advances in terms of using microbes as factories (including fermentation) as well as in genomics, tissue engineering, and stem cells, that it’s feasible to imagine a system that unleashes a culinary bonanza of nutritional, flavor and texture options for future chefs while also lowering the environmental impact of belching cows, concentrated animal-feeding operations, and expensive and energy-inefficient refrigerated supply chains. (The livestock industry uses 70 percent of all land suitable for agriculture, and livestock accounts for as much as 51 percent of greenhouse gas emissions.) Eating meat is one of the most environmentally negative things humans do. I can imagine a food supply system that is even more efficient than eating fresh plants, which still requires refrigeration: Move the materials and calories around in shelf-stable forms, and simply “just add water” at the end in the way that adding water magically spawned sea monkeys when I was a kid.

Such a food industry would also need to develop bioreactors—think bread machines with cell cartridges or breweries that make meat, not beer—that would intake the raw materials and spit out lamb chops. That feels like an engineering task to be undertaken once the cellular biology gets worked out.

So far, most of the investment in the companies trying to rethink meat has come from venture capitalists, and they are impatient. This puts the startups they underwrite under pressure to get products to market quickly and generate financial returns, and makes it highly unlikely that we’ll get to Level 4 or 5 with VC-backed science alone. Basic research funding from philanthropy and government needs to be increased, and biomedical researchers need to be convinced to apply their expertise and knowledge to cellular agriculture.

And, indeed, many labs that Isha is working with are working on the basic research. Some are focused on establishing cell cultures from agricultural animals; others are working to grow animal cells on plants by removing the plants’ own cells and replacing them with living muscle cells, effectively using the plant as a scaffold.

The work of Isha's small network of scientists reminds me of the early days of neuroscience, when there was almost no federal funding for brain research. Then, suddenly, it became “a thing.” I think we’re reaching that same moment for meat, as climate change becomes an ever more pressing concern; the health impact of eating meat becomes more clear; and our population approaches 10 billion people, threatening our food supply.

Most of the people currently supporting the cellular agriculture movement are animal rights advocates. That’s a fine motivation, but figuring out a completely new design for the creation of food is going to take some real science, and we need to start now. Not only might it save us from future starvation, make a major contribution to reversing climate change, help avert the antibiotic resistance armageddon, and help restore fish populations in the oceans, it might also unlock a culinary creative explosion.

1 Disclosure: After meeting Isha, I recruited her to be a Director’s Fellow at the Media Lab where she is inspiring us with her work and her vision.

In the summer of 1990, I was running a pretty weird nightclub in the Roppongi neighborhood of Tokyo. I was deeply immersed in the global cyberpunk scene and working to bring the Tokyo node of this fast-expanding, posthuman, science-fiction-and-psychedelic-drug-fueled movement online. The Japanese scene was more centered around videogames and multimedia than around acid and other psychedelics, and Timothy Leary, a dean of ’60s counterculture and proponent of psychedelia who was always fascinated with anything mind-expanding, was interested in learning more about it. Tim anointed the Japanese youth, including the 24-year-old me, “The New Breed.” He adopted me as a godson, and we started writing a book about The New Breed together, starting with “tune in, turn on, take over,” as a riff off Tim’s original and very famous “turn on, tune in, drop out.” We never finished the book, but we did end up spending a lot of time together. (I should dig out my old notes and finish the book.)

Tim introduced me to his friends in Los Angeles and San Francisco. They were a living menagerie of American counterculture since the ’60s: traditional New Age types and hippies, but also cyberpunks and transhumanists. In my early twenties, I was an eager and budding techno-utopian, dreaming of the day when I would become immortal and ascend to the stars in cryogenic slumber, to awake on a distant planet. Or perhaps I would have my brain uploaded into a computer network, to become part of some intergalactic superbrain.

Good times. Those were the days and, for some, still are.

We’ve been yearning for immortality at least since the Epic of Gilgamesh. In Greek mythology, Zeus grants Eos’s mortal lover Tithonus immortality—but the goddess forgets to ask for eternal youth as well. Tithonus grows old and decrepit, begging for death. When I hear about life extension today, I am often perplexed, even frustrated. Are we talking about eternal youth, eternal old age, or having our cryogenically frozen brains thawed out 2,000 years from now to perform tricks in a future alien zoo?

The latest enthusiasm for eternal life largely stems not from any acid-soaked, tie-dyed counterculture but from the belief that technology will enhance humans and make them immortal. Today’s transhumanist movement, sometimes called H+, encompasses a broad range of issues and diversity of belief, but the notion of immortality—or, more correctly, amortality—is the central tenet. Transhumanists believe that technology will inevitably eliminate aging or disease as causes of death and instead turn death into the result of an accidental or voluntary physical intervention.

As science marches forward and age reversal and the elimination of disease become real possibilities, what once seemed like a science fiction dream is becoming real, transforming the transhumanist movement and its role in society from a crazy subculture into a Silicon Valley money- and technology-fueled “shot on goal”: more of a practical “hedge” than the sci-fi dream of its progenitors.

Transhumanism can be traced back to futurists in the ’60s, most notably FM-2030. As the development of new, computer-based technologies began to turn into a revolution to rival the Industrial Revolution, Max More defined transhumanism as the effort to become “posthuman” through scientific advances like mind “uploading.” He developed his own variant of transhumanism and named it Extropy, and together with Tom Morrow he founded the Extropy Institute, whose email list created a community of Extropians in the internet’s cyberpunk era. Its members discussed AI, cryonics, nanotech, and cryptoanarchy, among other things, and some later returned to the transhumanist banner, creating an organization now known as Humanity+. As the Tech Revolution continued, Extropians and transhumanists began actively experimenting with technology’s ability to deliver amortality.

In fact, Timothy Leary planned to have his head frozen by Alcor, preserving his brain and, presumably, his sense of humor and unique intelligence. But as he approached his death—I happened to visit him the night before he died in 1996—the vibe of the Alcor team moving weird cryo-gear into his house creeped Tim out, and he ended up opting for the “shoot my ashes into space” path, which seemed more appropriate to me as well. All of his friends got a bit of his ashes, too, and having Timothy Leary’s ashes became “a thing” for a while. It left me wondering, every time I spoke to groups of transhumanists shaking their fists in the air and rattling their Alcor “freeze me when I die” bracelets: How many would actually go through with the freezing?

That was 20 years ago. The transhumanist and Extropian movements (and even the Media Lab) have gotten more sober since those techno-utopian days, when even I was giddy with optimism. Nonetheless, as science fiction gives way to real science, many of the ZOMG if only conversations are becoming arguments about when and how, and the shift from Haight-Ashbury to Silicon Valley has stripped the movement of its tie-dye and beads and replaced them with Pied Piper shirts. Just as the road to hell is paved with good intentions, the road that brought us Cambridge Analytica and the Pizzagate conspiracy was paved with optimism and oaths to not be evil.

Renowned Harvard geneticist George Church once told me that breakthroughs in biological engineering are coming so fast we can’t predict how they will develop going forward. Crispr, a low-cost gene editing technology that is transforming our ability to design and edit the genome, was completely unanticipated; experts thought it was impossible ... until it wasn’t. Next-generation gene sequencing is dropping in price far faster than Moore’s Law would predict for processors. In many ways, bioengineering is moving faster than computing. Church believes that amortality and age reversal will seem difficult and fraught with issues ... until they aren’t. He is currently experimenting with age reversal in dogs using gene therapy that has been successful in mice, a technique he believes is the most promising of nine broad approaches to mortality and aging—genome stability, telomere extension, epigenetics, proteostasis, caloric restriction, mitochondrial research, cell senescence, stem cell exhaustion, and intercellular communication.

Church’s research is but one of the key discoveries giving us hope that we may someday understand aging and possibly reverse it. My bet is that we will significantly lengthen, if not eliminate, the notion of “natural lifespan,” although it’s impossible to predict exactly when.

But what does this mean? Making things technically possible doesn’t always make them societally possible or even desirable, and just because we can do something doesn’t mean we should (as we’re increasingly realizing, watching the technologies we have developed transform into dark zombies instead of the wonderful utopian tools their designers imagined).

Human beings are tremendously adaptable and resilient, and we seem to quickly adjust to almost any technological change. Unfortunately, not all of our problems are technical and we are really bad at fixing social problems. Even the ones that we like to think we’ve fixed, like racism, keep morphing and getting stronger, like drug-resistant pathogens.

Don’t get me wrong—I think it’s important to be optimistic and passionate and push the boundaries of understanding to improve the human condition. But there is a religious tone in some of the arguments, and even a Way of the Future Church, which believes that “the creation of ‘super intelligence’ is inevitable.” As Yuval Harari writes in Homo Deus, “new technologies kill old gods and give birth to new gods.” Lord Martin Rees, back when he was still just Sir Martin Rees, once told a group of us a story (which has been retold in various forms in various places) about how he was interviewed by what he called “the society for the abolition of involuntary death” in California. The members offered to put him in cryonic storage when he died, and when he politely told them he’d rather be dead than in a deep freeze, they called him a “deathist.”

Transhumanists correctly argue that every time you take a baby aspirin (or have open heart surgery), you’re intervening to make your life better and longer. They contend that there is no categorical difference between many modern medical procedures and the quest to beat death; it’s just a matter of degree. I tend to agree.

Yet we can clearly imagine the perils of amortality. Would dictators hold onto power endlessly? How would universities work if faculty never retired? Would the population explode? Would endless life be only for the wealthy, or would the poor be forced to toil forever? Clearly many of our social and philosophical systems would break. Back in 2003, Francis Fukuyama, in Our Posthuman Future: Consequences of the Biotechnology Revolution, warned us of the perils of life extension and explained how biotech was taking us into a posthuman future with catastrophic consequences to civilization even with the best intentions.

I think it’s unlikely that we’ll be uploading our minds to computers any time soon, but I do believe changes that challenge what it means to be “human” are coming. Philosopher Nikola Danaylov in his Transhumanist Manifesto says, “We must all respect autonomy and individual rights of all sentience throughout the universe, including humans, non-human animals, and any future AI, modified life forms, or other intelligences.” That sounds progressive and good.

Still, in his manifesto Nikola also writes, “Transhumanists of the world unite—we have immortality to gain and only biology to lose.” That sounds a little scary to me. I poked Nikola about this, and he pointed out that he wrote this manifesto a while ago and his position has become more subtle. But many of his peers are as radical as ever. I think transhumanism, especially its strong, passionate base in exuberant Silicon Valley, could use an overhaul that makes it more attentive to and integrated with our complex societal systems. At the same time, we need to help the “left-behind” parts of society catch up and participate in, rather than just become subjected to, the technological transformations that are looming. Now that the dog has caught the car, transhumanism has to transform our fantasy into a responsible reality.

I, for one, still dream of flourishing in the future through advances in science and technology, but I hope for a future that addresses societal inequities and retains the richness and diversity of our natural systems and indigenous cultures, rather than the somewhat simple and sterile futures depicted by many science fiction writers and futurists. Timothy Leary liked to remind us to remember our hippie roots, with their celebration of diversity and nature, and I hear him calling us again.

Everyone from the ACLU to the Koch brothers wants to reduce the number of people in prison and in jail. Liberals view mass incarceration as an unjust result of a racist system. Conservatives view the criminal justice system as an inefficient system in dire need of reform. But both sides agree: Reducing the number of people behind bars is an all-around good idea.

To that end, AI—in particular, so-called predictive technologies—has been deployed to support various parts of our criminal justice system. For instance, predictive policing uses data about previous arrests and neighborhoods to direct police to where they might find more crime, and similar systems are used to assess the risk of recidivism for bail, parole, and even sentencing decisions. Reformers across the political spectrum have touted risk assessment by algorithm as more objective than decision-making by an individual. Take the decision of whether to release someone from jail before their trial. Proponents of risk assessment argue that many more individuals could be released if only judges had access to more reliable and efficient tools to evaluate their risk.

Yet a 2016 ProPublica investigation revealed that not only were these assessments often inaccurate, the cost of that inaccuracy was borne disproportionately by African American defendants, whom the algorithms were almost twice as likely to label as a high risk for committing subsequent crimes or violating the terms of their parole.

We’re using algorithms as crystal balls to make predictions on behalf of society, when we should be using them as a mirror to examine ourselves and our social systems more critically. Machine learning and data science can help us better understand and address the underlying causes of poverty and crime, as long as we stop using these tools to automate decision-making and reinscribe historical injustice.

Training Troubles

Most modern AI requires massive amounts of data to train a machine to more accurately predict the future. When systems are trained to help doctors spot, say, skin cancer, the benefits are clear. But, in a creepy illustration of the importance of the data used to train algorithms, a team at the Media Lab created what is probably the world’s first artificial intelligence psychopath and trained it with a notorious subreddit that documents disturbing, violent death. They named the algorithm Norman and began showing it Rorschach inkblots. They also trained an algorithm with more benign inputs. The standard algorithm saw birds perched on a tree branch; Norman saw a man electrocuted to death.

So when machine-based prediction is used to make decisions affecting the lives of vulnerable people, we run the risk of hurting people who are already disadvantaged—moving more power from the governed to the governing. This is at odds with the fundamental premise of democracy.

States like New Jersey have adopted pretrial risk assessment in an effort to minimize or eliminate the use of cash-based bail, which multiple studies have shown is not only ineffective but also often deeply punitive for those who cannot pay. In many cases, the cash bail requirement is effectively a means of detaining defendants and denying them one of their most basic rights: the right to liberty under the presumption of innocence.

While cash bail reform is an admirable goal, critics of risk assessment are concerned that such efforts might lead to an expansion of punitive nonmonetary conditions, such as electronic monitoring and mandatory drug testing. Right now, assessments provide little to no insight about how a defendant’s risk is connected to the various conditions a judge might set for release. As a result, judges are ill-equipped to ask important questions about how release with conditions such as drug testing or GPS-equipped ankle bracelets actually affects outcomes for the defendants and society. Will, for instance, an ankle bracelet interfere with a defendant’s ability to work while awaiting trial? In light of these concerns, risk assessments may end up simply legitimizing new types of harmful practices. In this, we miss an opportunity: Data scientists should focus more deeply on understanding the underlying causes of crime and poverty, rather than simply using regression models and machine learning to punish people in high-risk situations.

Such issues are not limited to the criminal justice system. In her latest book, Automating Inequality, Virginia Eubanks describes several compelling examples of failed attempts by state and local governments to use algorithms to help make decisions. One heartbreaking example Eubanks offers is the use of data by the Office of Children, Youth, and Families in Allegheny County, Pennsylvania, to screen calls and assign risk scores to families that help decide whether case workers should intervene to ensure the welfare of a child.

To assess a child’s particular risk, the algorithm primarily “learns” from data that comes from public agencies, where a record is created every time someone applies for low-cost or free public services, such as the Supplemental Nutrition Assistance Program. This means that the system essentially judges poor children to be at higher risk than wealthier children who do not access social services. As a result, the symptoms of a high-risk child look a lot like the symptoms of poverty, the result of merely living in a household that has trouble making ends meet. Based on such data, a child could be removed from her home and placed into the custody of the state, where her outcomes look quite bleak, simply because her mother couldn’t afford to buy diapers.

Look for Causes

Rather than using predictive algorithms to punish low-income families by removing their children, Eubanks argues we should be using data and algorithms to assess the underlying drivers of poverty that exist in a child’s life and then ask better questions about which interventions will be most effective in stabilizing the home.

This is a topic that my colleague Chelsea Barabas discussed at length at the recent Conference on Fairness, Accountability, and Transparency, where she presented our paper, “Interventions Over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” In the paper, we argue that the technical community has used the wrong yardstick to measure the ethical stakes of AI-enabled technologies. By narrowly framing the risks and benefits of artificial intelligence in terms of bias and accuracy, we’ve overlooked more fundamental questions about how the introduction of automation, profiling software, and predictive models connect to outcomes that benefit society.

To reframe the debate, we must stop striving for “unbiased” prediction and start understanding causal relationships. What caused someone to miss a court date? Why did a mother keep a diaper on her child for so long without changing it? The use of algorithms to help administer public services presents an amazing opportunity to design effective social interventions—and a tremendous risk of locking in existing social inequity. This is the focus of the Humanizing AI in Law (HAL) work that we are doing at the Media Lab, along with a small but growing number of efforts involving the combined efforts of social scientists and computer scientists.

This is not to say that prediction isn’t useful, nor is it to say that understanding causal relationships in itself will fix everything. Addressing our societal problems is hard. My point is that we must use the massive amounts of data available to us to better understand what’s actually going on. This refocus could make the future one of greater equality and opportunity, and less a Minority Report–type nightmare.

On December 15, 2017, the United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, issued a damning report on his visit to the United States. He cited data from the Stanford Center on Inequality and Poverty, which reports that “in terms of labor markets, poverty, safety net, wealth inequality, and economic mobility, the US comes in last of the top 10 most well-off countries, and 18th amongst the top 21.” Alston wrote that “the American Dream is rapidly becoming the American Illusion, as the US now has the lowest rate of social mobility of any of the rich countries.” Just a few days before, on December 11, The Boston Globe's Spotlight team ran a story showing that the median net worth of nonimmigrant African American households in the Boston area is $8, in contrast to the $247,500 net worth for white households in the Boston area.

Clearly income disparity is ripping the nation apart, and none of the efforts or programs seeking to address it seems to be working. I myself have been, for the past couple of years, engaged in a broad discussion about the future of work with some thoughtful tech leaders and representatives of the Catholic Church who have similar concerns, and the notion of a universal basic income (UBI) keeps coming up. Like many of my friends who fiddle with ideas about the future of work, I’ve avoided actually having a firm opinion about UBI for years. Now I have decided it’s time to get my head around it.

Touted as an elegant solution to the problem of poverty in America and the impending decimation of jobs by automation, UBI is a hot topic today in the “salons” hosted by tech and hedge-fund billionaires. UBI is in fact an old idea, older even than me: Either through direct cash payments or some sort of negative income tax, we should support people in need—or even everyone—to increase well-being and lift society overall.

Interestingly, this notion has had broad support from conservatives like Milton Friedman and progressives such as Martin Luther King Jr. On the other hand, UBI also has been criticized by conservatives as well as liberals.

Conservative proponents of UBI argue that it could shrink a huge array of costly social welfare services like health care, food assistance, and unemployment support by providing a simple, inexpensive way to let individuals, rather than the government, decide what to spend the money on. Liberals see it as a way to redistribute wealth and empower groups like stay-at-home parents, whose work doesn’t produce income—making them ineligible for unemployment benefits. In addition, these UBI advocates see it as a way to eliminate poverty.

Nevertheless, just as many conservatives and liberals don’t like the concept. Conservatives against UBI worry that it will decrease incentives to work and cost too much, racking up a bill that those who do work will have to pay. Skeptical liberals worry that employers will use it as an excuse to pay even lower wages. They also fear politicians will offer it as a rationale to gut existing social programs and unwind institutions that help those most in need. The result is that UBI is a partisan issue that, paradoxically, has bipartisan support.

I was on a panel at a recent conference when the moderator asked audience and panel members what they thought of UBI. The overwhelming consensus of the 500 or so people in the room appeared to be “we’re skeptical, but we should experiment.” UBI sounds like a good or not-so-good idea to different constituents because we have so little understanding of either how we would do it or how people would react. None of us really knows what we’re talking about when it comes to UBI; it’s akin to a drunken bar argument in the days before smartphones and Wikipedia. But there are a few basic principles and pieces of research that can help.


Universal Basic Income, In Theory


Much of the resurgent interest in UBI has come from Silicon Valley. Tech titans and the academics around them are concerned that the robots and artificial intelligence they’ve built will rapidly displace humans in the workforce, or at least push them into dead-end jobs. Some researchers say robots will replace the low-paying jobs people don’t want, while others maintain people will end up getting the worst jobs not worthy of robots. UBI may play a role in which scenario comes to pass.

Last year, Elon Musk told the National Governors Association that job disruption caused by technology was “the scariest problem to me,” admitting that he had no easy solution. Musk and other entrepreneurs see UBI as a way to provide a cushion and a buffer to give humans time to retrain themselves to do what robots can’t do. Some believe that it might even spawn a new wave of entrepreneurs, giving those displaced workers a shot at the American Dream.

They may be getting ahead of themselves. Luke Martinelli, a researcher at the University of Bath Institute for Policy Research, has written that “an affordable UBI is inadequate, and an adequate UBI is unaffordable.” I believe that is roughly true.

One of the biggest problems with UBI is that a base sum that would allow people to refuse work and look for something better (rather than just allowing employers to pay workers less) is around $1,000 per month, which would cost most countries somewhere between 5 percent and 35 percent of their GDP. That looks expensive compared with what it would cost any developed country to simply eradicate poverty, so the only way a nation could support this kind of UBI would be to eliminate all funding for social programs. That would be applauded by libertarians and some conservatives, but not by many others.
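To see where a range like that comes from, here is a rough back-of-the-envelope sketch. The numbers are round, illustrative assumptions of my own (roughly US-scale), not anyone’s official estimate:

```python
# Back-of-the-envelope UBI cost using round, illustrative numbers
# (assumptions for the sketch, not official figures): pay every adult
# $1,000 per month and compare the total with GDP.
adults = 250e6          # assumed number of adult recipients in a large economy
monthly_payment = 1000  # dollars per person per month
gdp = 20e12             # assumed GDP in dollars, roughly US-scale

annual_cost = adults * monthly_payment * 12
print(f"annual cost: ${annual_cost / 1e12:.1f} trillion")  # $3.0 trillion
print(f"share of GDP: {annual_cost / gdp:.0%}")            # ~15%, inside the 5-35 percent range
```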

Underpinning the Silicon Valley argument for UBI is the belief in exponential growth powered by science and technology, as described by Peter Diamandis in his book Abundance: The Future is Better Than You Think. Diamandis contends that technological progress, including gains in health, the power of computing, and the development of machine intelligence, among other things, will lead to a kind of technological transcendence that makes today’s society look like how we view the Dark Ages. He argues that the human mind is unable to intuitively grasp this idea, and so we constantly underestimate long-term effects. If you plot progress out a few decades, Diamandis writes, we end up with unimaginable abundance: “We will soon have the ability to meet and exceed the basic needs of every man, woman, and child on the planet. Abundance for all is within our grasp.” (What technologists often forget is that we actually already have enough food to feed the world; the problem is that it’s just not properly distributed.)

Many tech billionaires think they can have their cake and eat it too, that they are so rich and smart the trickle-down theory can lift the poor out of poverty without anyone or anything suffering. And why shouldn’t they think so? Their companies and their wealth have grown exponentially, and it doesn’t appear as though there is any end in sight, as Marc Andreessen prophetically predicted in his famous essay, “Why Software is Eating the World.” Most of Silicon Valley’s leaders gained their wealth in an exponentially growing market without having to engage in the aggressive tactics that marked the creation of wealth in the past. They feel their businesses inherently “do good,” and that, I believe, allows them to feel more charitable, broadly speaking.


Universal Basic Income, In Practice


If the technologists are correct and automation is going to substantially increase US GDP, then who better to figure out what to do about the associated problems than the technologists themselves—or so their thinking goes. Tech leaders are underwriting experiments and financing research on UBI to prepare for a future that will allow them and their companies to continue in ascendance while keeping society stable. (Various localities and organizations already have experimented with forms of UBI over the years. In some cases, they have produced evidence that people receiving UBI do in fact continue to work, and that UBI gives people the ability to quit lousy jobs and look for better ones, or complete or go back to school.) Sam Altman, president of Y Combinator, has a project to give people free money and see what happens to them over time, for instance.

Altman's experiment, prosaically named the Basic Income Project, will involve 3,000 people in two states over five years. Some 1,000 of them will be given $1,000 a month, and the rest will get just $50 a month and serve as a sort of control group. It should reveal some important information about how people will behave when given free money, providing an evidence-based way to think about UBI—we don’t have much of that evidence now. Among the questions hopefully to be answered: Will people use the cushion of free money to look for better work? Will they go back to school for retraining? Will neurological development of children improve? Will crime rates go down?

As with many ideas with diverse support at high levels, the particulars of execution can make or break UBI in practice. Take the recent, much heralded UBI experiment in Finland. A Finnish welfare agency, Kela, and a group of researchers proposed paying between 550 and 700 euros a month to both workers and nonworkers around that country. Finland’s conservative government then began tweaking the proposal, most importantly eliminating the part of the plan that paid people who had jobs, and only providing UBI for those receiving unemployment benefits instead. It had no interest in whether UBI would allow people to look for better jobs or to train themselves for the jobs of the future. The government declared that the “primary goal of the basic income experiment is related to promoting employment.” And so what started as a credible experiment in empowering labor and liberal values became a conservative program to get more people to go back to crappy jobs—and a great warning about the impact that politics can have on efforts to test or deploy UBI. (We must wait until 2019 to see the full extent of the outcome.)

Chris Hughes, a cofounder of Facebook and not-quite-billionaire, is the person I found with a plan for UBI that’s halfway between Silicon Valley’s techno-utopian vision and the vision held by the liberal East Coast types that I mostly hang out with these days.

His new book Fair Shot: Rethinking Inequality and How We Earn outlines his views on UBI, but here’s my brief version of what Hughes is thinking: He believes we can do UBI now. He says we can “provide every single American stability through cash” by providing a monthly $500 supplement to lower-middle income taxpayers through the Earned Income Tax Credit, or EITC. He would expand EITC to include child care, elder care, and education as types of work that would be eligible for EITC. (Currently if the jobs are unpaid jobs, they are not eligible.) Hughes contends that this would cut poverty in America by half. According to his numbers, right now the EITC costs the US $70 billion a year, and his UBI proposal would tack on an additional $290 billion. Citing research by Emmanuel Saez and Gabriel Zucman showing that less than 1 percent of Americans control as much wealth as 90 percent of Americans, Hughes' plan to pay for that expansion involves increasing the income tax for the top 1 percent, or people earning more than $250,000 a year, to 50 percent from 35 percent, and treating capital gains as income—moving long-term capital gains from 20 percent to 50 percent, hitting the wealthiest the hardest.

He’s putting his money where his mouth is too, underwriting a project that will give $500 a month to residents of Stockton, California.

Will UBI save America? Our Congress and president just passed a tax law that reduces taxes on the country’s wealthiest, but I still think Hughes' proposal is reasonable in part because EITC is a pretty popular program. My fear is that the current political climate and our ability to discuss things rationally are severely impaired, and that's without factoring in the usual challenges of turning rational ideas into law. In the meantime, it’s great that Silicon Valley billionaires have recognized the potential negative impact of their businesses and are looking at and funding experiments to provide better evidence-based understanding of UBI, even if evidence appears to have less and less currency in today’s world.

Am I optimistic? No. Should we get cracking on trying everything we can, and is UBI a decent shot on goal? Yes and yes.

When we look at a stack of blocks or a stack of Oreos, we intuitively have a sense of how stable it is, whether it might fall over, and in what direction it may fall. That’s a fairly sophisticated calculation involving the mass, texture, size, shape, and orientation of the objects in the stack.

Researchers at MIT led by Josh Tenenbaum hypothesize that our brains have what you might call an intuitive physics engine: The information that we are able to gather through our senses is imprecise and noisy, but we nonetheless make an inference about what we think will probably happen, so we can get out of the way or rush to keep a bag of rice from falling over or cover our ears. Such a “noisy Newtonian” system involves probabilistic understandings and can fail. Consider this image of rocks stacked in precarious formations.

Based on most of your experience, your brain tells you that it's not possible for them to remain standing. Yet there they are. (This is very similar to the physics engines inside videogames like Grand Theft Auto that simulate a player’s interactions with objects in their 3-D worlds.)
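To make the idea of an intuitive physics engine a little more concrete, here is a minimal sketch of my own, not the MIT group’s model: it treats a stack of blocks as simple 2-D rectangles and asks whether each block, together with everything above it, keeps its center of mass over the block beneath. A real physics engine, and presumably the brain, does something far richer and noisier.

```python
# Toy stability check for a stack of 2-D blocks (a hypothetical example, not
# the MIT group's model). Each block is (x_center, width, mass), listed from
# bottom to top. The stack counts as "stable" in this crude sense if, for every
# block, the combined center of mass of that block and everything above it lies
# over the block supporting it.

def is_stable(stack):
    for i in range(1, len(stack)):
        above = stack[i:]                       # block i plus everything on top of it
        total_mass = sum(m for _, _, m in above)
        com_x = sum(x * m for x, _, m in above) / total_mass
        support_x, support_w, _ = stack[i - 1]  # the block directly underneath
        if not (support_x - support_w / 2 <= com_x <= support_x + support_w / 2):
            return False
    return True

# A gently offset tower holds; shift the top block far enough and it topples.
print(is_stable([(0.0, 2.0, 1.0), (0.2, 2.0, 1.0), (0.4, 2.0, 1.0)]))  # True
print(is_stable([(0.0, 2.0, 1.0), (0.2, 2.0, 1.0), (1.6, 2.0, 1.0)]))  # False
```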

For decades, artificial intelligence with common sense has been one of the most difficult research challenges in the field—artificial intelligence that “understands” the function of things in the real world and the relationship between them and is thus able to infer intent, causality, and meaning. AI has made astonishing advances over the years, but the bulk of AI currently deployed is based on statistical machine learning that takes tons of training data, such as images on Google, to build a statistical model. The data are tagged by humans with labels such as “cat” or “dog”, and a machine’s neural network is exposed to all of the images until it is able to guess what the image is as accurately as a human being.

One of the things that such statistical models lack is any understanding of what the objects are—for example that dogs are animals or that they sometimes chase cars. For this reason, these systems require huge amounts of data to build accurate models, because they are doing something more akin to pattern recognition than understanding what’s going on in an image. It’s a brute force approach to “learning” that has become feasible with the faster computers and vast datasets that are now available.
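To make the brute-force, labeled-data approach concrete, here is a minimal sketch using scikit-learn’s small bundled handwritten-digits dataset. The choice of dataset and classifier is mine, for illustration only; the point is that the model learns a statistical mapping from pixels to labels, with no notion of what a digit is.

```python
# Minimal sketch of supervised, labeled-data pattern recognition using
# scikit-learn's bundled digits dataset (an illustrative choice, not a model
# from the researchers discussed here).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 grayscale images, each labeled 0-9 by humans
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple classifier stands in for a neural net
model.fit(X_train, y_train)                # "learning" = fitting weights to labeled examples

print("held-out accuracy:", model.score(X_test, y_test))  # typically around 0.95
```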

It’s also quite different from how children learn. Tenenbaum often shows a video by Felix Warneken, Frances Chen, and Michael Tomasello, of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, of a small child watching an adult walk repeatedly into a closet door, clearly wanting to get inside but failing to open it properly. After just a few attempts, the child pulls the door open, allowing the adult to walk through. What seems cute but obvious for humans to do—to see just a few examples and come up with a solution—is in fact very difficult for a computer to do. The child opening the door for the adult instinctively understands the physics of the situation: There is a door, it has hinges, it can be pulled open, the adult trying to get inside the closet cannot simply walk through it. In addition to the physics the child understands, he is able to guess after a few attempts that the adult has an intention to go through the door but is failing.

This requires an understanding that human beings have plans and intentions and might want or need help to accomplish them. The capacity to learn a complex concept and also learn the specific conditions under which that concept is realized is an area where children exhibit natural, unsupervised mastery.

Infants like my own 9-month-old learn through interacting with the real world, which appears to be training various intuitive engines or simulators inside of her brain. One is a physics engine (to use Tenenbaum's term) that learns to understand—through piling up building blocks, knocking over cups, and falling off of chairs—how gravity, friction, and other Newtonian laws manifest in our lives and set parameters on what we can do.

In addition, infants from birth exhibit a social engine that recognizes faces, tracks gazes, and tries to understand how other social objects in the world think, behave, and interact with them and each other. This “social gating hypothesis,” proposed by Patricia Kuhl, professor of speech and hearing sciences at the University of Washington, argues that our ability to speak is fundamentally linked to the development of social understanding through our social interactions as infants. Elizabeth Spelke, a cognitive psychologist at Harvard University, and her collaborators have been working to show how infants develop an “intuitive psychology” to infer people’s goals from as early as 10 months.

In his book, Thinking, Fast and Slow, Daniel Kahneman explains that the intuitive part of our brain is not so good at statistics or math. He proposes the following problem: A baseball bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? Our intuition wants to say 10 cents, but that’s wrong. If the ball is 10 cents and the bat is $1 more, the bat would be $1.10, which would make the total $1.20. The correct answer is that the ball is 5 cents and the bat is $1.05, bringing the total to $1.10. Clearly, you can fool our intuition about statistics, just as the stacked rocks existing in the natural world confuse our internal physics engine.
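Written out as the algebra that the intuitive answer skips:

```latex
% Bat-and-ball: let b be the price of the ball; the bat costs b + 1.00.
\begin{align*}
  b + (b + 1.00) &= 1.10\\
  2b &= 0.10\\
  b &= 0.05 \qquad \text{so the ball costs \$0.05 and the bat \$1.05.}
\end{align*}
```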

But academics and economists often use such examples as reasons to undervalue the role of intuition in science and academic study, and that’s a huge mistake. The intuitive engines that help us quickly assess physical or social situations are doing extremely complex computations that may not even be explainable; it may be impossible to compute them linearly. For example, an expert skier can’t explain what she does, nor can you learn to ski just by reading instructions. Your brain and your whole body learn to move, synchronize, and operate in a very complex way to enter a state of flow where everything works without linear thinking.

Your brain goes through a tremendous transformation in infancy. Infant brains initially grow twice as many connections between neurons as adults have, and the connections that don’t appear to be important are pruned back as a child’s brain matures. In the process, children develop an intuitive understanding of the complex systems they interact with—stairs, mom, dad, friends, cars, snowy mountains. Some will learn the differences between dozens of types of waves to help them navigate the seas, or between many types of snow.

While our ability to explain, argue, and understand each other using words is extremely important, it is also important to understand that words are simplified representations and can mean different things to different people. Many ideas or things that we know cannot be reduced to words; when they are, the words do not transmit more than a summary of the actual idea or understanding.

Just as we should not dismiss the expert skier who cannot explain how they ski, we should not dismiss the intuition of the shamans who hear nature telling them that things are out of balance. It may be that our view of many of the sensibilities of indigenous people and their relationships with nature as “primitive”—because they can’t explain it and we can’t understand—is in fact more about our lack of an environmental intuition engine. Our senses may have pruned those neurons because they weren’t needed in our urban worlds. We spend most of our lives with our noses in books and screens and sitting in cubicles becoming educated so that we understand the world. Does our ability to explain things mathematically or economically really mean that we understand things such as ecological systems better than the brains of those who were immersed in a natural environment from infancy, who understand them intuitively?

Maybe a big dose of humility and an effort to integrate the nonlinear and intuitive understanding of the minds of people we view as less educated—people who have learned through doing and observing instead of through textbooks—would substantially benefit our understanding of how things work and what we can do about the problems currently unsolvable with our modern tools. It’s also yet another argument for diversity. Reductionist mathematical and economic models are useful from an engineering point of view, but we should be mindful to appreciate our limited ability to describe complex adaptive systems using such models, which don’t really allow for intuition and run the risk of neglecting its role in human experience.

If Tenenbaum and his colleagues are successful in developing machines that can learn intuitive models of the world, it’s possible they will suggest things that either they can’t initially explain or that are so complex we are unable to comprehend them with current theories and tools. Whether we are talking about the push for more explainability in machine learning and AI models or we are trying to fathom how indigenous people interact with nature, we will reach the limits of explainability. It is this space, beyond the explainable, that is the exciting cutting edge of science, where we discover and press beyond our current understanding of the world.

Ever since my friends and I set up a Digicash server to sell music and artwork with a digital currency called eCash representing real gold, back in the ’90s, I’ve been waiting for the day when cryptocurrencies—digital currencies that operate independently of central banks by using encryption to generate units and verify transfers of funds—would transform the world. Cryptocurrencies are finally here, but not exactly in the way that I envisioned.

And so since last year, I’ve found myself issuing warnings instead of accolades about the latest trend in the frothy world of cryptocurrencies: ICOs, or initial coin offerings. The initial idea was a pretty good one—blockchain technology could be used to issue new cryptographically secure “tokens” or “coins” that are easy to transmit peer-to-peer. The coins could be sold to fund open-source software projects and other services that people find useful but are hard to finance with traditional structures. They could even function as shares and thus allow startups to finance themselves far more efficiently, from a broader range of people, and without the intermediaries that take fees and require a drawn-out process. Or the “coins” could represent some unit of utility, such as a gigabyte of storage or access to a network.

My concern with today’s ICOs is that they’re being fueled by the gold-rush mentality around cryptocurrencies, and so are deployed in irresponsible ways that are causing harm to individuals and damaging the ecosystem of developers and organizations. We haven’t set up the legal, technical, or normative controls yet, and many people are taking advantage of this.

Thus, ICOs are to cryptocurrencies what Trump is to American democracy: not what the founders of the institution envisioned.

It doesn’t have to be that way.

Think of an ICO as a means of creating digital certificates that have signatures, rules, programs, and other attributes controlled cryptographically. You could create a digital version of a check, a stock certificate, an IOU, or a gift card for a hamburger or a barrel of oil. That makes these certificates equivalent to a security, a commodity, or even just a simple financial transaction.
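To make the “digital certificate” framing concrete, here is a toy sketch. Real tokens rely on public-key signatures recorded on a shared ledger rather than a shared secret held by the issuer, and every name and value below is made up for illustration.

```python
# Toy "digital certificate": a claim (an IOU, a gift card for one hamburger,
# a share) plus a cryptographic signature proving who issued it and that it
# hasn't been altered. Real tokens use public-key signatures on a shared
# ledger; this sketch uses a shared-secret HMAC and invented names.
import hmac, hashlib, json

ISSUER_SECRET = b"demo-secret-key"  # stands in for the issuer's private signing key

def issue_certificate(claim: dict) -> dict:
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_certificate(cert: dict) -> bool:
    payload = json.dumps(cert["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

burger_token = issue_certificate({"unit": "1 hamburger", "holder": "alice", "serial": 42})
print(verify_certificate(burger_token))       # True: the certificate is untampered
burger_token["claim"]["unit"] = "100 hamburgers"
print(verify_certificate(burger_token))       # False: the claim was altered
```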

In their traditional forms, each of these elements has different risks and different regulatory bodies governing it. The Securities and Exchange Commission, the Treasury Department, and so on play a role in reducing financial risks and preventing financial crimes. In other words, some of the rules and regulations—the friction—in the existing system are there to protect investors, customers, and society.

But those regulators haven’t caught up with ICOs quite yet. Issuers are getting rich and unwitting investors are buying tokens of questionable value.

On July 25, 2017, the SEC announced that if a token looks like a security, it will regulate and treat the token as a security. It subsequently set up a task force to go after ICOs that are scamming investors and exploiting gray areas in securities laws. But many of the tokens issued through ICOs today are not shares in a company. Rather, they are “tokenized” versions of some sort of product, service, or asset, or a promise to invest funds in research or infrastructure. Issuers are calling the sale of such tokens a “crowd sale” instead of a “funding” to make it clear that people are buying a product rather than a security—and, intentionally or not, avoiding regulatory scrutiny.

A Swiss platform for posting jobs, for instance, used a crowd sale to sell what it calls Global Jobcoin, which buyers can use to pay for employment services. Meanwhile, someone—it is almost impossible to figure out who—is using a crowd sale to peddle Jesus Coins, which promise to forgive sins and fight corruption in “the church,” among other things.

I’m not saying all ICOs are sketchy. Some have legitimate uses, such as Filecoin, which aims to allow a token holder access to storage online and rewards people for hosting files.

The problem is that many of these tokens are traded on exchanges, and are thus viewed by investors as commodities or currencies to trade in and out of. Most tokens aren’t “pegged” to anything in the real world, so their exchange rates fluctuate. Most are currently going up in value, which has attracted a large number of speculators who aren’t looking for workers or forgiveness of sin. They don’t really care about the underlying asset linked to the tokens; they are betting on the Greater Fool Theory—the idea that someone dumber than they are will buy their tokens for more than they paid. This is a pretty good bet … until it isn’t.

Requiring companies to sell tokens only to accredited investors won’t solve the problem, because those investors will later sell them to speculators or, worse, to people who have seen the ads online promising to provide the secret of making a bundle on cryptocurrencies. And Wall Street has never been willing to end a rip-roaring party once the keg is tapped.

The regulatory intervention that has just begun will need to be much more sophisticated and technically informed, and in the meantime there’s a long line of people who’ve read about the skyrocketing price of Bitcoin (or Jesus Coin) and are waiting for a chance to buy into one of the myriad ICOs coming down the pipeline.

And volatility adds to the burdens of young companies issuing tokens, which will need to perform functions similar to those of a central bank and manage corporate-style investor relations on top of running their core businesses. If these companies fail, investors will get some benefit in a fire sale or liquidation, but token holders will end up with something akin to the Zimbabwean dollar in my scrapbook.

But a coin without volatility would be of little interest to such speculators, and it would be quite easy to design. We could start by simply pegging the value of tokens to something, say $1 or the price of one hamburger. A pegged token would fluctuate in “value” only to the extent that the underlying asset fluctuated. If the price of the underlying service is fixed, or you only ever eat hamburgers, there would be much less fluctuation or volatility.

For people hoping to make a fast buck, that linkage would remove the potential upside, narrowing the market for the coins to mostly just the people who would actually use the service. Having said that, even with a value pegged to some underlying asset, it’s possible that the current irrational market would still make prices go crazy. And if the issuer didn’t own or have the ability to produce the underlying asset, the owners of its coins might be in peril of holding a valueless proxy. For example, concerns have escalated recently that Tether, a cryptocurrency pegged to the dollar, may not actually hold the dollars needed to back its tokens. If it doesn’t, then it’s sort of like an uninsured bank printing its own version of dollar bills with nothing in its vaults. People have been buying Tether as a proxy for dollars on cryptocurrency exchanges, so its failure could cause the price of Bitcoin to plummet and, more broadly, do substantial damage to the market.
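
As a rough illustration of the pegging idea, and of the backing question it raises, here is a toy Python sketch (my own, not Tether’s or any real issuer’s mechanism). It assumes a hypothetical issuer that mints tokens only against dollars actually deposited and burns them on redemption, so the peg holds exactly as long as the reserve really exists.

```python
# Toy sketch of a reserve-backed peg: every token is minted against a dollar
# in the vault and burned on redemption. This is an illustration of the idea,
# not any real stablecoin's implementation.
class PeggedTokenIssuer:
    def __init__(self) -> None:
        self.reserve_usd = 0.0        # dollars actually held in the vault
        self.tokens_outstanding = 0.0

    def mint(self, usd_deposited: float) -> float:
        """Issue tokens 1:1 against dollars received."""
        self.reserve_usd += usd_deposited
        self.tokens_outstanding += usd_deposited
        return usd_deposited

    def redeem(self, tokens: float) -> float:
        """Buy tokens back at the peg, paying out from the reserve."""
        if tokens > self.tokens_outstanding:
            raise ValueError("cannot redeem more tokens than exist")
        self.tokens_outstanding -= tokens
        self.reserve_usd -= tokens
        return tokens                 # dollars returned to the holder

    def fully_backed(self) -> bool:
        """Is there really a dollar behind every outstanding token?"""
        return self.reserve_usd >= self.tokens_outstanding


issuer = PeggedTokenIssuer()
issuer.mint(1_000_000)
issuer.redeem(250_000)
print(issuer.fully_backed())  # True, so long as no unbacked tokens are printed
```

The peg fails in exactly one way in this sketch: if the issuer ever creates tokens without a matching deposit, fully_backed() turns false and holders are left with the valueless proxy described above.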

A lot of otherwise productive developers are devoting their expertise and attention to shallow, quick-money ICOs rather than to sorting out the underlying infrastructure and protocols in academic and more open, deliberative settings that aren’t fueled by warped financial interests.

It reminds me of the late-’90s dot-com bubble, when the now-defunct Pets.com was spending investor money on Super Bowl ads to sell products for 30 percent of what it cost the company to buy them. I understand the desire of venture capitalists to use blockchain and the other technologies underpinning ICOs, and of new companies to take this nearly “free money” to build their businesses. But there is, I feel, an ethical issue in such knowing exploitation. I’ve pleaded my case with entrepreneurs, investors, and developers, but it’s like trying to stand in front of a buffalo stampede.

ICO mania will no doubt run its course, as all such financial manias do. But in the meantime, people will be hurt and there will be a painful correction. The one upside is this: As in the wake of the dot-com implosion, serious developers and investors will continue to work to build what will be a more robust network and foundation for the future of the blockchain and cryptocurrencies.

My friend Bill Schoenfeld, along with a small number of other investors, made a lot of money when the Japanese real estate bubble popped. At some point the bubble was moving so fast that almost no one was assessing the underlying value of properties, but Bill insisted on doing just that. When the bubble popped and prices went into free fall, he bought a lot of property at rational prices. Bubbles make pricing irrational on the way up as well as on the way down. Maybe the clever thing to do right now is to assess the real underlying value of these tokens and be prepared to buy the ones that are actually valuable when the bubble pops.

Amara’s law famously states that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The largest and most successful companies on the internet were built after the first bubble, once the protocols and the technologies had matured. I’m holding my nose and squinting my eyes, trying to imagine the mountains beyond the dust storm of the ICO stampede, and running for them.

Note on conflicts of interest: When I helped found the MIT Media Lab Digital Currency Initiative, I sold my shares in all blockchain- and bitcoin-related companies, and I have not invested in any company whose primary activity involves cryptocurrencies. I do not hold any material amount of any cryptocurrency. I believe that in the current phase of our work, it is important for me to be clear of any conflicts of interest. You can see a more complete conflict of interest disclosure on my website.