Joi Ito's Web

Joi Ito's conversation with the living web.

Somewhere between 2 and 3 billion years ago, what scientists call the Great Oxidation Event, or GOE, took place, causing the mass extinction of anaerobic bacteria, the dominant life form at the time. A new type of bacteria, cyanobacteria, had emerged, and it had the photosynthetic ability to produce glucose and oxygen out of carbon dioxide and water using the power of the sun. Oxygen was toxic to many of their anaerobic cousins, and most of them died off. In addition to being a massive extinction event, the oxygenation of the planet kicked off the evolution of multicellular organisms (620 to 550 million years ago), the Cambrian explosion of new species (540 million years ago), and an ice age that triggered the end of the dinosaurs and many cold-blooded species, leading to the emergence of the mammals as the apex group (66 million years ago) and eventually resulting in the appearance of Homo sapiens, with all of their social sophistication and complexity (315,000 years ago).

I’ve been thinking about the GOE, the Cambrian Explosion, and the emergence of the mammals a lot lately, because I’m pretty sure we’re in the midst of a similarly disruptive and pivotal moment in history that I’m calling the Great Digitization Event, or GDE. And right now we’re in that period where the oxygen, or in this case the internet as used today, is rapidly and indifferently killing off many systems while allowing new types of organizations to emerge.

As WIRED celebrates its 25th anniversary, the Whole Earth Catalog its 50th anniversary, and the Bauhaus its 100th anniversary, we’re in a modern Cambrian era, sorting through an explosion of technologies enabled by the internet that are the equivalent of the stunning evolutionary diversity that emerged some 500 million years ago. Just as in the Great Oxidation Event, in which early organisms that created the conditions for the explosion of diversity had to die out or find a new home in the mud on the ocean floor, the early cohort that set off the digital explosion is giving way to a new, more robust form of life. As Fred Turner describes in From Counterculture to Cyberculture, we can trace all of this back to the hippies in the 1960s and 1970s in San Francisco. They were the evolutionary precursor to the advanced life forms observable in the aftermath of the shooting at Marjory Stoneman Douglas High School. Let me give you a first-hand account of how the hippies set off the Great Digitization Event.

From the outset, members of that movement embraced nascent technological change. Stewart Brand, one of the Merry Pranksters, began publishing the Whole Earth Catalog in 1968, which spawned a collection of other publications that promoted a vision of society that was ecologically sound and socially just. The Whole Earth Catalog gave birth to one of the first online communities, the Whole Earth ‘Lectronic Link, or WELL, in 1985.

Around that time, R.U. Sirius and Mark Frost started the magazine High Frontiers, which was later relaunched with Queen Mu and others as Mondo 2000. The magazine helped legitimize the burgeoning cyberpunk movement, which imbued the growing community of personal computer users and participants in online communities with an ’80s version of hippie sensibilities and values. A new wave of science fiction, represented by William Gibson’s Neuromancer, added the punk rock dystopian edge.

Timothy Leary, a “high priest” of the hippie movement and New Age spirituality, adopted me as his godson when we met during his visit to Japan in 1990, and he connected me to the Mondo 2000 community that became my tribe. Mondo 2000 was at the hub of cultural and technological innovation at the time, and I have wonderful memories of raves advertising “free VR” and artist groups like Survival Research Labs that connected the hackers from the emerging Silicon Valley scene with Haight-Ashbury hippies.

I became one of the bridges between the Japanese techno scene and the San Francisco rave scene. Many raves in San Francisco happened in the then-gritty area south of Market Street, near Townsend and South Park. ToonTown, a rave producer, set up its offices (and living quarters) there, which attracted designers and others who worked in the rave business, such as Nick Philip, a British BMX'er and designer. Nick, who started out designing flyers for raves using photocopy machines and collages, created a clothing brand called Anarchic Adjustment, which I distributed in Japan and which William Gibson, Dee-Lite, and Timothy Leary wore. He began using computer graphics tools from companies like Silicon Graphics to create the artwork for T-shirts and posters.

In August 1992, Jane Metcalfe and Louis Rossetto rented a loft in the South Park area because they wanted to start a magazine to chronicle what had evolved from a counterculture into a powerful new culture built around hippie values, technology, and the new Libertarian movement. (In 1971, Louis had appeared on the cover of The New York Times Magazine as coauthor, with Stan Lehr, of “Libertarianism, The New Right Credo.”) When I met them, they had a desk and a 120-page laminated prototype for what would become WIRED. Nicholas Negroponte, who had cofounded the MIT Media Lab in 1985, was backing Jane and Louis financially. The founding executive editor of WIRED was Kevin Kelly, who was formerly one of the editors of the Whole Earth Catalog. I got involved as a contributing editor. I didn’t write articles at the time, but made my debut in the media in the third issue of WIRED, mentioned as a kid addicted to MMORPGs in an article by Howard Rheingold. Brian Behlendorf, who ran the SFRaves mailing list, which announced and discussed the SF rave scene, became the webmaster of HotWired, a groundbreaking exploration of the new medium of the Web.

WIRED came along just as the internet and the technology around it really began to morph into something much bigger than a science fiction fantasy, in other words, on the cusp of the GDE. The magazine tapped into the design talent around South Park, literally connecting to the design and development shop Cyborganic, with Ethernet cables strung inside the building where they shared a T1 line. It embraced the post-psychedelic design and computer graphics that distinguished the rave community and established its own distinct look, shaped most of all by people such as Barbara Kuhr and Erik Adigard, that bled over into the advertisements in the magazine, like one Nick Philip designed for Absolut.

Structured learning didn't serve me particularly well. I was kicked out of kindergarten for running away too many times, and I have the dubious distinction of having dropped out of two undergraduate programs and a doctoral program in business administration. I haven't been tested, but have come to think of myself as "neuroatypical" in some way.

"Neurotypical" is a term used by the autism community to describe what society refers to as "normal." According to the Centers for Disease Control, one in 59 children, and one in 34 boys, are on the autism spectrum--in other words, neuroatypical. That's 3 percent of the male population. If you add ADHD--attention deficit hyperactivity disorder--and dyslexia, roughly one out of four people are not "neurotypicals."

In NeuroTribes, Steve Silberman chronicles the history of such non-neurotypical conditions, including autism, which was first described in the 1930s and 1940s by the Viennese doctor Hans Asperger and by Leo Kanner in Baltimore. Asperger worked in Nazi-occupied Vienna, where institutionalized children were actively euthanized, and he described a broad spectrum of socially awkward children, some of whom also had extraordinary abilities and a "fascination with rules, laws and schedules," to use Silberman's words. Kanner, on the other hand, described children who were more disabled. Kanner's suggestion that the condition was activated by bad parenting made autism a source of stigma for parents and led to decades of work attempting to "cure" autism rather than developing ways for families, the educational system, and society to adapt to it.

Our schools in particular have failed such neurodiverse students, in part because they've been designed to prepare our children for typical jobs in a mass-production-based white- and blue-collar environment created by the Industrial Revolution. Students acquire a standardized skillset and an obedient, organized, and reliable nature that served society well in the past--but not so much today. I suspect that the quarter of the population who are diagnosed as somehow non-neurotypical struggle with the structure and the method of modern education, and many others probably do as well.

I often say that education is what others do to you and learning is what you do for yourself. But I think that even the broad notion of education may be outdated, and we need a completely new approach to empower learning: We need to revamp our notion of "education" and shake loose the ordered and linear metrics of the society of the past, when we were focused on scale and the mass production of stuff. Accepting and respecting neurodiversity is the key to surviving the transformation driven by the internet and AI, which is shattering the Newtonian predictability of the past and replacing it with a Heisenbergian world of complexity and uncertainty.

In Life, Animated, Ron Suskind tells the story of his autistic son Owen, who lost his ability to speak around his third birthday. Owen had loved the Disney animated movies before his regression began, and a few years into his silence it became clear he'd memorized dozens of Disney classics in their entirety. He eventually developed an ability to communicate with his family by playing the role, and speaking in the voices, of the animated characters he so loved, and he learned to read by reading the film credits. Working with his family, Owen recently helped design a new kind of screen-sharing app, called Sidekicks, so other families can try the same technique.

Owen's story tells us how autism can manifest in different ways and how, if caregivers can adapt rather than force kids to "be normal," many autistic children survive and thrive. Our institutions, however, are poorly designed to deliver individualized, adaptive programs to educate such kids.

In addition to schools poorly designed for non-neurotypicals, our society traditionally has had scant tolerance or compassion for anyone lacking social skills or perceived as not "normal." Temple Grandin, the animal welfare advocate who is herself somewhere on the spectrum, contends that Albert Einstein, Wolfgang Mozart, and Nikola Tesla would have been diagnosed on the "autistic spectrum" if they were alive today. She also believes that autism has long contributed to human development and that "without autism traits we might still be living in caves." She is a prominent spokesperson for the neurodiversity movement, which argues that neurological differences must be respected in the same way that diversity of gender, ethnicity or sexual orientation is.

Despite challenges with some of the things that neurotypicals find easy, people with Asperger's and other forms of autism often have unusual abilities. For example, the Israel Defense Forces' Special Intelligence Unit 9900, which focuses on analyzing aerial and satellite imagery, is partially staffed with people on the autism spectrum who have a preternatural ability to spot patterns. I believe at least some of Silicon Valley's phenomenal success comes from a culture that places little value on the conventional social and corporate norms, prizing seniority and conformity, that dominate most institutions on the East Coast and much of society as a whole. It celebrates nerdy, awkward youth and has turned their super-human, "abnormal" powers into a money-making machine that is the envy of the world. (This new culture is wonderfully inclusive from a neurodiversity perspective but white-dude centric and problematic from a gender and race perspective.)

This sort of pattern recognition and many other unusual traits associated with autism are extremely well suited for science and engineering, often enabling a super-human ability to write computer code, understand complex ideas and elegantly solve difficult mathematical problems.

Unfortunately, most schools struggle to integrate atypical learners, even though it's increasingly clear that interest-driven learning, project-based learning, and undirected learning seem better suited for the greater diversity of neural types we now know exist.

Ben Draper, who runs the Macomber Center for Self Directed Learning, says that while the center is designed for all types of children, kids whose parents identify them as on the autism spectrum often thrive there after having had difficulty in conventional schools. Ben is part of the so-called unschooling movement, which holds not only that learning should be self-directed but that we shouldn't even focus on guiding it. Children will learn in the process of pursuing their passions, the reasoning goes, and so we just need to get out of their way, providing support as needed.

Many, of course, argue that such an approach is much too unstructured and verges on irresponsibility. In retrospect, though, I feel I certainly would have thrived on "unschooling." In a recent paper, Ben and my colleague Andre Uhl, who first introduced me to unschooling, argue that it not only works for everyone, but that the current educational system, in addition to providing poor learning outcomes, impinges on the rights of children as individuals.

MIT is among a small number of institutions that, in the pre-internet era, provided a place for non-neurotypical types with extraordinary skills to gather and form community and culture. Even MIT, however, is still working to give these kids the diversity and flexibility they need, especially in our undergraduate program.

I'm not sure how I'd be diagnosed, but I was completely incapable of being traditionally educated. I love to learn, but I go about it almost exclusively through conversations and while working on projects. I somehow kludged together a world view and life with plenty of struggle, but also with many rewards. I recently wrote a PhD dissertation about my theory of the world and how I developed it. Not that anyone should generalize from my experience--one reader of my dissertation said that I'm so unusual, I should be considered a "human sub-species." While I take that as a compliment, I think there are others like me who weren't as lucky and ended up going through the traditional system, mostly suffering rather than flourishing. In fact, most kids probably aren't as lucky as I was. While some types are better suited for success in the current configuration of society, a huge percentage of the kids who fail in the current system have a tremendous amount to contribute that we aren't tapping into.

In addition to equipping kids for basic literacy and civic engagement, industrial age schools were primarily focused on preparing kids to work in factories or perform repetitive white-collar jobs. It may have made sense to try to convert kids into (smart) robotlike individuals who could solve problems on standardized tests alone, with no smartphone or internet, just a No. 2 pencil. Sifting out non-neurotypical types or trying to remediate them with drugs or institutionalization may have seemed important for our industrial competitiveness. The tools for instruction were also limited by the technology of the times. In a world where real robots are taking over many of those tasks, perhaps we need to embrace neurodiversity and encourage collaborative learning through passion, play, and projects--in other words, to start teaching kids to learn in ways that machines can't. We can also use modern technology for connected learning that supports diverse interests and abilities and is integrated into our lives and communities of interest.

At the Media Lab, we have a research group called Lifelong Kindergarten, and the head of the group, Mitchel Resnick, recently wrote a book by the same name. The book is about the group's research on creative learning and the four Ps--Passion, Peers, Projects, and Play. The group believes, as I do, that we learn best when we are pursuing our passion and working with others in a project-based environment with a playful approach. My memory of school was "no cheating," "do your own work," "focus on the textbook, not on your hobbies or your projects," and "there's time to play at recess, be serious and study or you'll be shamed"--exactly the opposite of the four Ps.

Many mental health issues, I believe, are caused by trying to "fix" some types of neurodiversity, or by treating the people who have them insensitively or inappropriately. Many mental "illnesses" can be "cured" by providing that person the appropriate interface for learning, living, or interacting, with a focus on the four Ps. My experience with the educational system, both as its subject and, now, as part of it, is not so unique. I believe, in fact, that at least the one-quarter of people who are diagnosed as somehow non-neurotypical struggle with the structure and the method of modern education. People who are wired differently should be able to think of themselves as the rule, not as an exception.

Credits

Edits by Iyasu Nagata on July 8, 2021

As a Japanese, I grew up watching anime like Neon Genesis Evangelion, which depicts a future in which machines and humans merge into cyborg ecstasy. Such programs caused many of us kids to become giddy with dreams of becoming bionic superheroes. Robots have always been part of the Japanese psyche—our hero, Astro Boy, was officially entered into the legal registry as a resident of the city of Niiza, just north of Tokyo, which, as any non-Japanese can tell you, is no easy feat. Not only do we Japanese have no fear of our new robot overlords, we’re kind of looking forward to them.

It’s not that Westerners haven’t had their fair share of friendly robots like R2-D2 and Rosie, the Jetsons’ robot maid. But compared to the Japanese, the Western world is warier of robots. I think the difference has something to do with our different religious contexts, as well as historical differences with respect to industrial-scale slavery.

The Western concept of “humanity” is limited, and I think it’s time to seriously question whether we have the right to exploit the environment, animals, tools, or robots simply because we’re human and they are not.

Sometime in the late 1980s, I participated in a meeting organized by the Honda Foundation in which a Japanese professor—I can’t remember his name—made the case that the Japanese had more success integrating robots into society because of their country’s indigenous Shinto religion, which is still widely practiced in Japan.

Followers of Shinto, unlike Judeo-Christian monotheists and the Greeks before them, do not believe that humans are particularly “special.” Instead, there are spirits in everything, rather like the Force in Star Wars. Nature doesn’t belong to us, we belong to Nature, and spirits live in everything, including rocks, tools, homes, and even empty spaces.

The West, the professor contended, has a problem with the idea of things having spirits and feels that anthropomorphism, the attribution of human-like attributes to things or animals, is childish, primitive, or even bad. He argued that the Luddites who smashed the automated looms that were eliminating their jobs in the 19th century were an example of that, and for contrast he showed an image of a Japanese robot in a factory wearing a cap, having a name and being treated like a colleague rather than a creepy enemy.

The general idea that the Japanese accept robots far more easily than Westerners is fairly common these days. Osamu Tezuka, the Japanese cartoonist and the creator of Astro Boy, noted the relationship between Buddhism and robots, saying, “Japanese don't make a distinction between man, the superior creature, and the world about him. Everything is fused together, and we accept robots easily along with the wide world about us, the insects, the rocks—it's all one. We have none of the doubting attitude toward robots, as pseudohumans, that you find in the West. So here you find no resistance, simply quiet acceptance.” And while the Japanese did of course become agrarian and then industrial, Shinto and Buddhist influences have caused Japan to retain many of the rituals and sensibilities of an earlier, pre-humanist period.

In Sapiens, Yuval Noah Harari, an Israeli historian, describes the notion of “humanity” as something that evolved in our belief system as we morphed from hunter-gatherers to shepherds to farmers to capitalists. As early hunter-gatherers, nature did not belong to us—we were simply part of nature—and many indigenous people today still live with belief systems that reflect this point of view. Native Americans listen to and talk to the wind. Indigenous hunters often use elaborate rituals to communicate with their prey and the predators in the forest. Many hunter-gatherer cultures, for example, are deeply connected to the land but have no tradition of land ownership, which has been a source of misunderstandings and clashes with Western colonists that continues even today.

It wasn’t until humans began engaging in animal husbandry and farming that we began to have the notion that we own and have dominion over other things, over nature. The notion that anything—a rock, a sheep, a dog, a car, or a person—can belong to a human being or a corporation is a relatively new idea. In many ways, it’s at the core of an idea of “humanity” that makes humans a special, protected class and, in the process, dehumanizes and oppresses anything that’s not human, living or non-living. Dehumanization and the notion of ownership and economics gave birth to slavery at scale.

In Stamped from the Beginning, the historian Ibram X. Kendi describes the colonial era debate in America about whether slaves should be exposed to Christianity. British common law stated that a Christian could not be enslaved, and many plantation owners feared that they would lose their slaves if they were Christianized. They therefore argued that blacks were too barbaric to become Christian. Others argued that Christianity would make slaves more docile and easier to control. Fundamentally, this debate was about whether Christianity—giving slaves a spiritual existence—increased or decreased the ability to control them. (The idea of permitting spirituality is fundamentally foreign to the Japanese because everything has a spirit and therefore it can’t be denied or permitted.)

This fear of being overthrown by the oppressed, or somehow becoming the oppressed, has weighed heavily on the minds of those in power since the beginning of mass slavery and the slave trade. I wonder if this fear is almost uniquely Judeo-Christian and might be feeding the Western fear of robots. (While Japan had what could be called slavery, it was never at an industrial scale.)

Lots of powerful people (in other words, mostly white men) in the West are publicly expressing their fears about the potential power of robots to rule humans, driving the public narrative. Yet many of the same people wringing their hands are also racing to build robots powerful enough to do that—and, of course, underwriting research to try to keep control of the machines they’re inventing, although this time it doesn’t involve Christianizing robots … yet.

Douglas Rushkoff, whose book, Team Human, is due out early next year, recently wrote about a meeting in which one of the attendees’ primary concerns was how rich people could control the security personnel protecting them in their armored bunkers after the money/climate/society armageddon. The financial titans at the meeting apparently brainstormed ideas like using neck control collars, securing food lockers, and replacing human security personnel with robots. Douglas suggested perhaps simply starting to be nicer to their security people now, before the revolution, but they thought it was already too late for that.

Friends express concern that, by drawing a connection between slaves and robots, I risk dehumanizing slaves and the descendants of slaves, exacerbating an already tense and advanced war of words and symbols. Fighting the dehumanization of minorities and underprivileged people is important, and it is something I spend a great deal of effort on, but focusing strictly on the rights of humans, and not on the rights of the environment, the animals, and even of things like robots, is one of the things that has gotten us into this awful mess with the environment in the first place. In the long run, maybe it’s not so much about humanizing or dehumanizing, but rather about the problem of creating a privileged class—humans—that we use to arbitrarily justify ignoring, oppressing, and exploiting everything else.

Technology is now at a point where we need to start thinking about what, if any, rights robots deserve and how to codify and enforce those rights. Simply imagining that our relationships with robots will be like those of the human characters in Star Wars with C-3PO, R2-D2 and BB-8 is naive.

As Kate Darling, a researcher at the MIT Media Lab, notes in a paper on extending legal rights to robots, there is a great deal of evidence that human beings are sympathetic to and respond emotionally to social robots—even non-sentient ones. I don’t think this is some gimmick; rather, it’s something we must take seriously. We have a strong negative emotional response when someone kicks or abuses a robot—in one of the many gripping examples Darling cites in her paper, a US military officer called off a test using a leggy robot to detonate and clear minefields because he thought it was inhumane. This is a kind of anthropomorphization, and, conversely, we should think about what effect abusing a robot has on the abusing human.

My view is that merely replacing oppressed humans with oppressed machines will not fix the fundamentally dysfunctional order that has evolved over centuries. As a follower of Shinto, I’m obviously biased, but I think that taking a look at “primitive” belief systems might be a good place to start. Thinking about the development and evolution of machine-based intelligence as an integrated “Extended Intelligence” rather than artificial intelligence that threatens humanity will also help.

As we make rules for robots and their rights, we will likely need to make policy before we know what their societal impact will be. Just as the Golden Rule teaches us to treat others the way we would like to be treated, abusing and “dehumanizing” robots prepares children and structures society to continue reinforcing the hierarchical class system that has been in place since the beginning of civilization.

It’s easy to see how the shepherds and farmers of yore could come up with the idea that humans were special, but I think AI and robots may help us begin to imagine that perhaps humans are just one instance of consciousness and that “humanity” is a bit overrated. Rather than just being human-centric, we must develop a respect for, and emotional and spiritual dialogue with, all things.

As part of my work in developing the Knowledge Futures Group collaboration with the MIT Press, I'm doing a deep dive into trying to understand the world of academic publishing. One of the interesting things that I discovered as I navigated the different protocols and platforms was the Digital Object Identifier (DOI). There is a foundation that manages DOIs and coordinates a federation of registration agencies. DOIs are used for many things, but the general idea is to create a persistent identifier for some digital object, like a dataset or a publication, and manage it at a level above the URL, which might change over the lifetime of an academic journal article, from drafting through publication, or as a movie moves through a supply chain.
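To make the idea of an identifier that sits above the URL concrete, here is a minimal sketch, in Python and purely as an illustration rather than a piece of any real DOI tooling: you hand the doi.org resolver a DOI and it redirects you to whatever URL is currently registered for that object. The DOI used here happens to be the one assigned to this post, which comes up again below.

from urllib.request import urlopen

# A DOI is a persistent identifier; the doi.org resolver redirects it to
# whatever URL is currently registered for the underlying object.
doi = "10.31859/20180822.2140"  # the DOI assigned to this post (see below)

with urlopen(f"https://doi.org/{doi}") as response:
    # geturl() returns the final URL after the redirect chain, i.e. wherever
    # the registrant currently points this DOI.
    print(f"{doi} currently resolves to {response.geturl()}")

If the post ever moves, the registrant updates the record and the same DOI starts resolving to the new URL, which is the whole point.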

One registration agency, Crossref, focuses on DOIs for academic publications and the citations across them, and its service has made DOIs a convenient and effective way of rigorously managing and tracking citations. Many services, like ORCID, which manages affiliations and publications for academics, use DOIs as one way to import and manage publications.

Although DOIs can be used for many things, because they are somewhat nontrivial to get and set up, and because of the success of Crossref, which serves academic publishers, they have become somewhat synonymous with authority, trustworthiness, and formal publishing. Geoffrey Bilder from Crossref warns us that this is not true and that DOIs shouldn't signal that, but I think that, in fact, they do, for now.

Something I noted as I started playing with all of the various tools available to academics to manage their profiles and their citations, and having only one peer reviewed paper to my name so far (thanks Karthik, Chelsea and Madars for that!), was that my blog posts weren't getting indexed. Also, as I was doing research while working on my dissertation, I noticed that blogs generally weren't very heavily cited. Using my privilege and in the name of research, I started bugging Amy Brand, director of the MIT Press, who worked on the adoption of DOIs when she was at Crossref. I asked whether I could get DOIs for my blog posts.

It wasn't as easy as it sounds. First of all, you need a DOI prefix--sort of like a domain--registered through one of the registration providers. Amy helped me get one, under the MIT Press, via Crossref. Boris defined the DOI suffix format, set up a submission generator, and integrated everything into my blog. Alexa from MIT Press worked on getting the DOIs from my blog to Crossref. The next problem is that "blogs" are not a category of "thing" in the DOI world, so the closest category, according to the experts, was "dataset." So this thing that I'm writing, formerly known as a blog post, is now a dataset contribution to the scholarly world. I do believe that it meets the standard of something that someone might possibly want to cite, so I don't feel guilty having a DOI assigned to it. I hope that Crossref will consider adding a blog post "creationType" or extending the schema more broadly to other citable web resources.
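For the curious, here is a rough sketch in Python of what a date-based suffix scheme could look like. I'm inferring the format from this post's own DOI (10.31859/20180822.2140); the generator Boris actually built, and the metadata Crossref requires for the deposit, may well differ.

from datetime import datetime

PREFIX = "10.31859"  # the DOI prefix registered for this blog via the MIT Press and Crossref


def doi_for_post(published_at: datetime) -> str:
    """Build a DOI from a post's publication timestamp.

    Assumes a YYYYMMDD.HHMM suffix, inferred from this post's own DOI;
    the real suffix format may be different.
    """
    return f"{PREFIX}/{published_at.strftime('%Y%m%d.%H%M')}"


# Example: this post, published 2018-08-22 at 21:40, which then gets deposited
# with Crossref under the closest available type, "dataset," since "blog post"
# isn't a category in the DOI world.
print(doi_for_post(datetime(2018, 8, 22, 21, 40)))  # 10.31859/20180822.2140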

Also, I wish APA would update their blog citation format so that the name of the blog is part of the citation and not just the URL. In a rare act of disobedience, I've gone rogue and added the name of this blog in the APA citation template on this blog against their official guidelines. Strictly speaking, the APA citation for this post would be "Ito, J. (2018, August 22). Blog DOI enabled. [Blog post]. https://doi.org/10.31859/20180822.2140" but the citation tool here gives you: "Ito, J. (2018, August 22). Blog DOI enabled. Joi Ito's Web [Blog post]. https://doi.org/10.31859/20180822.2140". Sorry not sorry if you get dinged on your paper for using the modified format.
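Just to show how little separates the two forms, here is a toy version of the template in Python; it is not the actual citation tool running on this blog, just an illustration of the one optional field at stake.

def apa_blog_citation(author, date, title, doi_url, blog_name=None):
    # blog_name is the unofficial extra field this blog adds; leave it out
    # to get the strictly-by-the-book APA form.
    site = f" {blog_name}" if blog_name else ""
    return f"{author} ({date}). {title}.{site} [Blog post]. {doi_url}"


# Official APA form:
print(apa_blog_citation("Ito, J.", "2018, August 22", "Blog DOI enabled",
                        "https://doi.org/10.31859/20180822.2140"))

# This blog's modified form, with the blog name included:
print(apa_blog_citation("Ito, J.", "2018, August 22", "Blog DOI enabled",
                        "https://doi.org/10.31859/20180822.2140",
                        blog_name="Joi Ito's Web"))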

When I tweeted about the issue of blog posts not being cited, one of the concerns from the Twittersphere was the lack of peer review for blogs. I think this is a valid concern, but not everything worth citing needs to be peer reviewed. On the other hand, clearly citing others, noting any contributors and their contributions to a blog post, and having some sort of peer review when it makes sense are probably good ideas.

I'm not stuck on the use of the word "blog," although that's what I think this is. I just think that the ability to publish rapidly, as blogs enable us to do, and to have that work connect to the world of academic literature, is something worth considering.

Recently, academic preprint servers have become very popular, and a growing number of academics are skipping journal publishing altogether, putting their papers on preprint servers and presenting them at conferences instead of submitting them to journals.

My sense is that blogs can play a role in this ecosystem if we can tweak the academic publishing side, the culture on both sides, and some of the practices on the blogging side. Geoffrey suggests that DOIs should be assigned to anything that is citation-worthy, and I agree, but I think that blogs are, and could become even more like, informal publications rather than merely citation-worthy blobs of data.

Boris Anthony, who has been my partner in thinking about this stuff and has designed and maintained my blog for the last 15 years or so, has been thinking deeply about the semantic web and the creation of knowledge, and he was critical in getting DOIs sorted out on this blog. He was also the one who convinced me not to convert all of my blog posts into DOI'ed objects, but to pick the ones that might have some scholarly value. :-)

PS: There appears to be a DOI plugin for WordPress that uses a prefix registered by the developer.

Credits

Boris Anthony for doing the actual technical and design work to get the DOIs deployed on this site and for help with the ideas and the editing of the post.

Amy Brand for her guidance in getting, understanding and writing about DOIs.

Alexa Masi for helping us sort out how to get the DOIs properly formatted and sent over to Crossref.

Around the time I turned 40, I decided to address the trifecta of concerns I had about climate change, animal rights, and my health: I went hard vegan. My doctor had been warning me to cut down on red meat, and I had also moved to a rural Japanese farming village populated by farmers growing a wide variety of vegetables, which were delicious.

After a while, the euphoria wore off and the culinary limitations of vegan food, especially when traveling, became challenging. I joined the legions of ex-vegans to become a cheating pescatarian. (I wonder if this article will get me bumped off of the Wikipedia Notable Vegans list.) Five years later, the great Tohoku earthquake of 2011 hit Japan, dumping a pile of radioactive cesium-137 on top of our organic garden and shattering the wonderful organic loop we had created. I took my job at the Media Lab and moved to the US the same year, thus starting my slow but steady reentry into the community of animal eaters.

Ten years after I proclaimed myself vegan, I met Isha Datar1, the executive director of New Harvest, an organization devoted to advancing the science of what she calls “cellular agriculture.” Isha is trying to figure out how to grow any agricultural product—milk, eggs, flavors, fragrances, fish, fruit—from cells instead of animals.

Art fans will remember Oron Catts and Ionat Zurr, who in 2003 served “semi-living steak” grown from the skeletal muscles of frogs as an art project called Disembodied Cuisine. Five years later, they presented “Victimless Leather” at MoMA in New York, an installation that involved tissue growing inside a glass container in the shape of a leather jacket. Protests broke out when the museum had to disconnect the life support system because the jacket grew too big.

Isha wasn’t trying to make provocative art. She was, and is, trying to solve our food problem, and New Harvest is supporting and coordinating research efforts at numerous labs and research groups. Thanks to technology that has given us an explosion of meat-like products running the ethical gamut in their production processes, we now have more challenging choices to make than simply whether to be vegan, pescatarian, or carnivore.

Civilians often clump the alternative meat companies and labs together in some kind of big meatless meatball, but, just like different kinds of self-driving car systems, they’re quite distinct. The Society of Automotive Engineers defines levels of driving automation running from 0 to 5; similarly, I see levels of cellular agriculture running from 0 to 6. “Driver assist” is nice, but having a car pick me up and drive me home is a completely different deal, and the latter might not evolve from the former—they might have separate development paths. I think the different branches of cellular agriculture are developing the same way.

Level 0: Just Be Vegan

Some plants are very high in protein, like beans, and they taste great just the way they are.

Level 1: Go Alternative

As a vegan, I ate a lot of processed plant-based proteins like tofu that feel fleshy and taste savory. I call these Level 1 meat alternatives. Many vegan Chinese restaurants serve “fake meat,” which is usually some sort of seitan, a wheat gluten, or textured vegetable protein like textured soy. It’s flavored and has a texture similar to some sort of animal protein, say, shrimp. This kind of protein substitute is a meat alternative—a plant-based protein that starts to mimic the experience of eating meat. Veggie burgers fall into this category.

Level 2: Get Cultured

These meat alternatives are also plant-based, but they contain some “cultured” proteins that are produced using a new scientific process. Yeast or bacteria are engineered to ferment some plant substances and output products that mimic or even replicate the proteins that make a plant-based recipe taste, smell, look or feel more like meat. Impossible Foods’ Impossible Burger falls into this category because its key ingredient is a protein called heme that is produced by genetically engineered yeast. Heme imparts “bloodiness” and “meatiness” to the plant-based burger-like base. This process relies on the industrial biotechnology and large-scale fermentation systems that are already used in the food industry. JUST’s Just Scramble “scrambled eggs” uses a proprietary process to create a plant-based protein as well, combining processes used in the pharmaceutical business, food R&D labs, and chemistry labs.

Level 3: Post-Vegan

Foods at this level are made of plant-based ingredients combined with cultured animal cells (as opposed to the products of bacterial fermentation). In other words, cells as ingredient, plants for mass. The animal cells provide the color, smell, or taste of meat, but not the substance. This relies on industrial biotech and large-scale cell-culture production methods already used in the pharma industry. Level 3 is the first level that requires going beyond the tools and the science already available in the food business.

Level 4: That's a Spicy Meatball

Level 4 alternatives are pure cultured animal cells like the products Memphis Meats and others are working on. The texture and shape of a real steak comes from the muscle cells that grow around the bones and otherwise self-organize into bundles of tissue. At Level 4, we aren’t really dealing with sophisticated texture yet, so we’re pretty much turning the cells we’ve grown into meatballs. (The difference between this and Level 3 is that most of the mass of the food here is animal cells, whereas Level 3 is mostly plant-based with cells sprinkled on top.)

Right now, the primary “medium” for cell cultures is fetal serum (the most common type is harvested from cow fetuses), and it currently takes roughly 50 liters of serum, at a cost of about $6,000, to produce a single beef burger. A key breakthrough needed to get us to Level 4 at a reasonable cost is figuring out a viable way of feeding cells using non-animal sources of energy. This will involve new science on the cell side and on the media side. And we need to better understand and reproduce nutrients and flavor molecules in addition to producing pure calories.

Level 5: Tastes Like Chicken

Now we get something actually like a chicken thigh or T-bone steak. This is the Jetsons’ version that people imagine when they hear the phrase “lab-grown meat.” It is very much the goal of the alternative meat effort, and no one has achieved it yet. Scientifically, this requires the kind of advanced tissue science that is currently being developed to allow us to swap failed organs in our bodies with replacements grown outside of our bodies.

A beaker full of animal cells doesn’t give you the texture of a steak; with this technology, scientists can use 3-D scaffolding to encourage 3-D growth, and they can grow blood vessels in these tissues as well. We can even use plant-based materials as the scaffolding, but what we really want is for that scaffolding to also grow, which is how organs in our body grow. It turns out that research in regenerative medicine and tissue science is giving us a better understanding of how we might create the texture and scaffolding required to grow an actual kidney instead of just a petri dish full of kidney cells. Scientists have not really focused, however, on deploying tissue science for food ... yet.

Level 6: ZOMG What Is This?

Tasty fake meat is exciting, but not nearly as exciting as the idea of a completely new food system with a diversity of inputs and completely new outputs—a completely new food science. Imagine augmented meat tissue with novel nutritional profiles, texture, flavor and other characteristics—in other words, instead of just trying to recreate meat, scientists develop completely new ingredients that are actually “post-meat.”

Let me explain what investors and I find so exciting about all this activity. My dream, and Isha’s dream, is that we figure out a way to make use of extremely efficient “energy harvesters” like algae, kelp, fungi, or anything else that can take a renewable energy source like the sun and convert it into calories. The idea is to figure out a mechanism to convert these organic stores of energy into inputs for bioreactors, which would then transform these calories into anything we want.

Scientists have made so many advances in terms of using microbes as factories (including fermentation) as well as in genomics, tissue engineering, and stem cells, that it’s feasible to imagine a system that unleashes a culinary bonanza of nutritional, flavor and texture options for future chefs while also lowering the environmental impact of belching cows, concentrated animal-feeding operations, and expensive and energy-inefficient refrigerated supply chains. (The livestock industry uses 70 percent of all land suitable for agriculture, and livestock accounts for as much as 51 percent of greenhouse gas emissions.) Eating meat is one of the most environmentally negative things humans do. I can imagine a food supply system that is even more efficient than eating fresh plants, which still requires refrigeration: Move the materials and calories around in shelf-stable forms, and simply “just add water” at the end in the way that adding water magically spawned sea monkeys when I was a kid.

Such a food industry would also need to develop bioreactors—think bread machines with cell cartridges or breweries that make meat, not beer—that would intake the raw materials and spit out lamb chops. That feels like an engineering task to be undertaken once the cellular biology gets worked out.

So far, most of the investment in the companies trying to rethink meat has come from venture capitalists, and they are impatient. This puts the startups they underwrite under pressure to get products to market quickly and generate financial returns, and makes it highly unlikely that we’ll get to Level 4 or 5 with VC-backed science alone. Basic research funding from philanthropy and government needs to be increased, and biomedical researchers need to be convinced to apply their expertise and knowledge to cellular agriculture.

And, indeed, many of the labs that Isha is working with are doing this basic research. Some are focused on establishing cell cultures from agricultural animals; others are working to grow animal cells on plants by removing the plants’ own cells and replacing them with living muscle cells, effectively using the plant as scaffolding.

The work of Isha's small network of scientists reminds me of the early days of neuroscience, when there was almost no federal funding for brain research. Then, suddenly, it became “a thing.” I think we’re reaching that same moment for meat, as climate change becomes an ever more pressing concern; the health impact of eating meat becomes more clear; and our population approaches 10 billion people, threatening our food supply.

Most of the people currently supporting the cellular agriculture movement are animal rights advocates. That’s a fine motivation, but figuring out a completely new design for the creation of food is going to take some real science, and we need to start now. Not only might it save us from future starvation, make a major contribution to reversing climate change, help avert the antibiotic resistance armageddon, and help restore fish populations in the oceans, it might also unlock a culinary creative explosion.

1 Disclosure: After meeting Isha, I recruited her to be a Director’s Fellow at the Media Lab where she is inspiring us with her work and her vision.