Joi Ito's Web

Joi Ito's conversation with the living web.

Black and White Gavel in Courtroom - Law Books
Photo by wp paarz via Flickr - CC BY-SA

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work, just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make - a modern version of what philosophers call "The Trolley Problem." The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test its ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled up version of human-in-the-loop machine learning - something that Karthik Dinakar at the Media Lab has been working on and is emerging as an important part of AI research.

Typically, machines are "trained" by AI engineers using huge amounts of data. The engineers tweak what data is used, how it's weighted, the type of learning algorithm used, and a variety of parameters to try to create a model that is accurate and efficient, makes the right decisions, and provides accurate insights. One of the problems is that because AI, or more specifically machine learning, is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts, and the completed model is often tested by those same experts. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk - obviously, targeted communities will appear to have more crime.
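To make the bias point concrete, here is a toy sketch with hypothetical numbers (not real crime data): if two communities offend at the same underlying rate but one is policed ten times as heavily, the recorded counts - and any model trained naively on them - will make the heavily policed community look ten times worse.

```python
# Toy illustration: equal underlying offense rates, but stops are
# concentrated in community B, so the recorded data is skewed.
true_offense_rate = 0.05           # identical in both communities
stops = {"A": 1_000, "B": 10_000}  # policing effort is 10x higher in B

# What ends up in the training data:
recorded_incidents = {c: int(n * true_offense_rate) for c, n in stops.items()}

# A naive model trained on raw counts sees B as far more criminal;
# correcting for sampling effort recovers the identical true rates.
adjusted_rate = {c: recorded_incidents[c] / stops[c] for c in stops}

print(recorded_incidents)  # {'A': 50, 'B': 500} -- B looks 10x "worse"
print(adjusted_rate)       # {'A': 0.05, 'B': 0.05} -- same underlying rate
```

The point is not the arithmetic but that the correction requires knowing how the data was collected - exactly the kind of context a domain expert has and a machine learning engineer usually doesn't.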

Human-in-the-loop machine learning is work that is trying to create systems that either allow domain experts to do the training, or at least be involved in the training, by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective of the data. Karthik calls this process "lensing": extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.

At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data, and it's quite reasonable to assume that decisions that judges make, such as bail amounts or parole, could be made much more accurately by machines than by humans. In addition, there is research that shows expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after their lunch has a significant effect on the outcome, for instance. (There have been some critiques of the study cited in this article, and the authors of the paper have responded to them.)

In the discussion, some of us proposed the idea of replacing judges for certain kinds of decisions - bail and parole, for example - with machines. The philosophers and several clergy explained that while it might feel right from a utilitarian perspective, for society it was important that the judges were human - it was even more important than getting the "correct" answer. Putting aside the argument about whether we should be solving for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system, and it would be essential to address this perspective.

There are two ways that we could address this concern. One way would be to put a "human in the loop" and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experience in several other fields, such as medicine and flying airplanes, has shown that humans may overrule machines with the wrong decision often enough that it would make sense to prevent humans from overruling machines in some cases. It's also possible that a human would become complacent or conditioned to trust the results and just let the machine run the system.

The second way would be for the machine to be trained by the public - society in the loop - in a way that people felt the machine reliably and fairly represented their, most likely, diverse set of values. This isn't unprecedented - in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power, believe that it represented them, and accept that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being trainable by the public and transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious complex obstacles, including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain - it's impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.

If we were able to figure out how to take the input from and then gain the buy-in of the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem - the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that the public, or the government representing the public, was responsible for the behavior and the potential damage caused by a self-driving car, and help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from, and be audited and controlled by, the public may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence - not just figuring out ways to train it to appear ethical.

Credits
  • Iyad Rahwan - The phrase “society in the loop” and many ideas.
  • Karthik Dinakar - Teaching me about “human in the loop” machine learning and being my AI tutor and many ideas.
  • Andrew McAfee - Citation and thinking on parole boards.
  • Natalie Saltiel - Editing.


Copyright xkcd CC BY-NC

Back when I first started blogging, the standard post took about 5 min and was usually written in a hurry after I thought of something to say in the shower. If it had mistakes, I'd add/edit/reblog any fixes.

As my posts have gotten longer and the institutions affected by my posts have gotten bigger, fussier and more necessary to protect, I've started becoming a bit more careful about what I say and how I say it.

Instead of blog first, think later - agile blogging - I now have a process that feels a bit more like blogging by committee. (Actually, it's not as bad as it sounds. You, the reader, are benefiting from better-thought-through blog posts because of this process.)

When I have an idea, I usually hammer out a quick draft, stick it in a Google Doc, and then invite in anyone who might be able to help, including experts, my team working on the particular topic, and editors and communications people. It's a different bunch of people depending on the post, but almost everything I've posted recently is the result of a group effort.

Jeremy Rubin, a recent MIT grad who co-founded the Digital Currency Initiative at MIT mentioned that maybe I should be giving people credit for helping - not that he wouldn't help if he didn't get credit, but he thought that as a general rule, it would be a good idea. I agreed, but I wasn't sure exactly how to do it elegantly. (See what I did here?)

I'm going to start adding contributors at the bottom of blog posts as sort of a "credits" section, but if anyone has any good examples or thoughts on how to give people credit for helping edit and contributing ideas to a post or an informal paper like my posts on my blog and pubpub, I'd really like to see them.

Credits
  • Jeremy Rubin came up with the idea.
  • I wrote this all by myself.

Leafy bubble
Photo by Martin Thomas via Flickr - CC-BY

In 2015, I wrote a blog post about how I thought that Bitcoin was similar in many ways to the Internet. The metaphor that I used was that Bitcoin was like email - the first killer app - and that the Bitcoin Blockchain was like The Internet - the infrastructure that was deployed to support it but that could be used for so many other things. I suggested that The Blockchain was to finance and law what the Internet was to media and advertising.

I still believe it is true, but the industry is out over its skis. Over a billion dollars have been invested in Bitcoin and fintech startups, tracking and exceeding Internet investment in 1996. Looking at many of the businesses, they look like startups from that period, but instead of pets.com, we have blockchain for X. I don't think today's blockchain is the Internet in 1996 - it's probably more like the Internet in 1990 or the late 80's - we haven't agreed on the IP protocol and there is no Cisco or PSINet. Many of the application layer companies are building on an infrastructure that isn't ready from a stability or a scalability perspective, and they are either bad ideas or good ideas too early. Also, very few people actually understand the necessary combination of cryptography, security, finance and computer science to design these systems. Those that do are part of a very small community, and there are not enough of them to go around to support the $1bn castle we are building on this immature infrastructure. Lastly, unlike content on the Internet, the assets that the blockchain will be moving around, and the irreversibility of many of the elements, do not lend the blockchain to the same level of agile software development - throw stuff out and see what sticks - that we can do for web apps and services.

There are startups and academics working on these basic layers, but I wish there were more. I have a feeling that we might be in a bit of a bubble and that bubble might pop or have a correction, but in the long run, hopefully we'll figure out the infrastructure and will be able to build something decentralized and open. Maybe a bubble pop will get rid of some of the noise from the system and let us focus like the first dot-com bust did for the Internet. On the other hand, we could end up with a crappy architecture and a bunch of fintech apps that don't really do much more than make existing things more efficient. We are at an important moment where decisions will be made about whether everyone will trust a truly decentralized system and where irresponsible deployments could scare people away. I think that as a community we need to increase our collaboration and diligently eliminate bugs and bad designs without slowing down innovation and research.

Instead of building apps, we need to be building the infrastructure. It's unclear whether we will end up with some version of Bitcoin becoming "The Internet" or whether some other project like Ethereum becomes the single standard. It's also possible that we end up with a variety of different systems that somehow interoperate. The worst case would be that we focus so much on the applications that we ignore the infrastructure, miss out on the opportunity to build a truly decentralized system, and end up with a system that resembles mobile Internet instead of wired Internet - one controlled by monopolies that charge you by the megabyte and have impossibly expensive roaming fees versus the flat fee and reasonable cost of wired Internet in most places.

There are many pieces of the infrastructure that need to be designed and tested. There are many ideas for different consensus protocols - the way in which a particular blockchain makes its public ledger tamper-proof and secure. Then there are arguments about how much scriptability should be built into the blockchain itself versus in a layer above it - there are good arguments on both sides. There is also the issue of privacy and anonymity versus identity and regulatory controls.
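The tamper-evidence itself is conceptually simple, even though the consensus protocols built on top of it are not. Here is a toy sketch in plain Python - illustrative transactions, not any real protocol - of how chaining each block to the hash of the previous one makes a ledger tamper-evident:

```python
# Toy hash chain: each block commits to the hash of its predecessor,
# so altering any earlier block breaks every later link.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for tx in ["alice->bob:5", "bob->carol:2"]:
    block = {"prev": prev, "tx": tx}
    prev = block_hash(block)
    chain.append(block)

# Tampering with an early transaction invalidates the chain:
chain[0]["tx"] = "alice->bob:500"
assert block_hash(chain[0]) != chain[1]["prev"]  # the link no longer matches
```

What consensus protocols then add - and where the hard design arguments live - is agreement among mutually distrustful parties about which chain of such blocks is the authoritative one.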

It looks like the Bitcoin Core developer team is making headway on Segregated Witness, which should address many concerns, including some of the scaling issues that people have had. On the other hand, it looks like Ethereum, which has less history but a powerful and easier-to-use scripting/programming system, is getting a lot of traction and interest from people trying to design new uses for the blockchain. Other projects like Hyperledger are designing their own blockchain systems as well as code that is blockchain agnostic.

The Internet works because we have clear layers of open standards. TCP/IP, for instance, won over ATM - a competing standard in some ways - because it turned out that the end-to-end principle, where the core of the network was super-simple and "dumb," allowed the edges of the network to be very innovative. It took a while for the battle between the standards to play out to the point where TCP/IP was the clear winner. A lot of investment in ATM-driven technology ended up being wasted. The problem with the blockchain is that we don't even know where the layers should be or how we will manage the process of agreeing on the standards.

The (Ethereum) Decentralized Autonomous Organization project, or "The DAO," is one of the more concerning projects I see right now.* The idea is to create "entities" that are written in code on Ethereum. These entities can sell units similar to shares in a company, and invest and spend the money and operate much like a fund or a corporation. Investors would look at the code, determine whether they thought the entity made sense, and buy tokens hoping for a return. This sounds like something from a science fiction novel, and we all dreamed about these sorts of things when, as cypherpunks in the early 90's, we dared to dream on mailing lists and at hacker meetups. The problem is, The DAO has attracted over $150M from investors and is "real," but is built on top of Ethereum, which hasn't been tested as much as Bitcoin and is still working out its consensus protocol - even considering a completely new consensus protocol for its next version.

It appears that The DAO hasn't been fully described legally and may expose its investors to liabilities as partners in a partnership. Unlike contracts written by lawyers in English, if you screw up the code of a DAO, it's unclear how you could change it easily. Courts can deal with mistakes in contract language by trying to determine the intent, but in code enforced by distributed consensus rules, there is no such mechanism. Also, code can be attacked by malicious code, and there is a risk that a bug could create vulnerabilities. Recently, Dino Mark, Vlad Zamfir, and Emin Gün Sirer - key developers and researchers - published "A Call for a Temporary Moratorium on The DAO" describing vulnerabilities in The DAO. I fear that The DAO also raises red flags for a variety of regulators that we probably don't want at the table right now. The DAO could be the Mt. Gox of Ethereum - a project whose failure may cause many people to lose their money and cause the public and regulators to try to slam the brakes on blockchain development.

Regardless of whether I rain on the parade, I'm sure that startups and investors in this space will continue to barrel forward, but I believe that as many of us as possible should focus on the infrastructure and the opportunities at the lowest layers of this stack we are trying to build. I think that getting the consensus protocol right, trying to figure out how to keep things decentralized, how to deal with the privacy issues without causing over-regulation, how we might completely reinvent the nature of money and accounting - these are the things that are exciting and important to me.

I believe there are some exciting areas for businesses to begin working and exploring practical applications - securitization of things that currently have a market failure, such as solar panels in developing countries, or applications where a lack of trust creates a very inefficient market, such as trade finance.

Central banks and governments have begun exploring innovations as well. The Singapore government is considering issuing government bonds on a blockchain. Some papers have imagined central banks taking deposits and issuing digital cash directly to individuals. Some regulators have begun to plan sandboxes to allow people to innovate and test ideas in regulatory safety zones. It is ironically possible that some of the more interesting innovations may come from experiments by governments, despite Bitcoin having initially been designed to avoid them. Having said that, it's quite likely that governments will hinder rather than help the development of a robust decentralized architecture.


* Just a few days after this post, The DAO was "attacked" as I feared. Here's an interesting post by the alleged "attacker". Reddit quickly determined that the signature in that post wasn't valid. There is also another post by the alleged attacker claiming that they're bribing the miners not to fork. Whether these posts are actually from the attacker or from epic trolls, they make very interesting arguments.

Minerva Priory library
The library at the Minerva Priory, Rome, Italy.

I recently participated in a meeting of technologists, economists and European philosophers and theologians. Other attendees included Andrew McAfee, Erik Brynjolfsson, Reid Hoffman, Sam Altman, and Father Eric Salobir. One of the interesting things about this particular meeting for me was having a theological (in this case Christian) perspective in our conversation. Among other things, we discussed artificial intelligence and the future of work.

The question of how machines will replace human beings and put many people out of work is well worn but persistently significant. Sam Altman and others have argued that the total increase in productivity will create an economic abundance that will enable us to pay out a universal "basic income" to those who are unemployed. Brynjolfsson and McAfee have suggested a "negative income tax"-a supplement instead of a tax for low-income workers that would help with financial redistribution without disrupting the other important outcomes generated by the practice of work.

Those supporting the negative income tax recognize that the importance of work is not just the income derived from it, but also the anchor that it affords us both socially and psychologically. Work provides a sense of purpose as well as a way to earn social status. The places we work give us both the opportunity for socializing as well as the structure that many of us need to feel productive and happy.

So while AI and other technologies may some day create a productivity abundance that allows us to eliminate the financial need to work, we will still need to find ways to obtain the social status-as well as a meaningful purpose-we get from work. There are many people who work in our society who aren't paid. One of the largest groups are stay-at-home men and women whose work it is to care for their homes and children. Their labor is not currently counted toward the GDP, and they often do not earn the social status and value they deserve. Could we somehow change the culture and create mechanisms and institutions that provided dignity and social status to people who don't earn money? In some ways academia, religious institutions and non-profit service organizations have some of this structure: social status and dignity that isn't driven primarily by money. Couldn't there be a way to extend this value structure more broadly?

And how about creative communities? Why couldn't we develop some organizing principle that would allow amateur writers, dancers or singers to define success by measures other than financial returns? Could this open up creative roles in society beyond the small sliver of professionals who can be supported by the distribution and consumption by the mass media? Could we make "starving artist" a quaint metaphor of the past? Can we disassociate the notion of work from productivity as it has been commonly understood and accepted? Can "inner work" be considered more fruitful when seen in light of thriving and eudaemonia?

Periclean Athens seems to be a good example of a moral society where people didn't need to work to be engaged and productive.* Could we imagine a new age where our self-esteem and shared societal value are not associated with financial success or work as we know it? Father Eric asks, "What does it mean to thrive?" What is our modern-day eudaemonia? We don't know. But we do know that whatever it is, it will require a fundamental cultural change: change that is difficult, but not impossible. A good first step would be to begin work on our culture alongside our advances in technology and financial innovations so that the future looks more like Periclean Athens than a world of disengaged kids with nothing to do. If it was the moral values and virtues that allowed Periclean Athens to function, how might we develop them in time for a world without work as we currently know it?



* There were many slaves in Periclean Athens. For the future machine age, will we need to be concerned about the rights of machines? Will we be creating a new class of robot slaves?

Credits
  • Reid Hoffman - Ideas
  • Erik Brynjolfsson - Ideas
  • Andrew McAfee - Ideas
  • Tenzin Priyadarshi - Ideas
  • Father Eric Salobir - Ideas
  • Ellen Hoffman - Editing
  • Natalie Saltiel - Editing

Great_Dome,_MIT_-_IMG_8390.JPG
Photo by Daderot [Public domain], via Wikimedia Commons

When I was first appointed as the director of the MIT Media Lab, The New York Times said it was an "unusual choice" - which it was, since my highest academic degree was my high school diploma and, in fact, I had dropped out of undergraduate programs at both Tufts and the University of Chicago, as well as a doctoral program at Hitotsubashi University in Tokyo.

When first approached about the position, I was given advice that I shouldn't apply considering my lack of a degree. Months later, I was contacted again by Nicholas Negroponte, who was on the search committee, and who invited me to visit MIT for interviews. Turns out they hadn't come up with a final candidate from the first list.

The interviews with the faculty, students and staff went well - two of the most exciting days of my life - although quite painful as well, as a major earthquake struck Japan the night between the two days. In so many ways, those two days are etched into my mind.

The committee got back to me quickly. I was their first choice, and needed to come back and have meetings with the School of Architecture + Planning Dean Adele Santos, and possibly the provost (now MIT president) Rafael Reif, since I was such an unorthodox candidate. When I sat down to meet with Rafael in his fancy office, he gave me a bit of a "what are you doing here?" look and asked, "How can I help you?" I explained the unusual circumstance of my candidacy. He smiled and said, "Welcome to MIT!" in the warm and welcoming way he treats everyone.

As the director of the Media Lab, my job is to oversee the operations and research of the Lab. At MIT, the norm is for research labs and academic programs to be separated-like church and state-but the Media Lab is unique in that it has "its own" academic Program in Media Arts and Sciences within the School of Architecture + Planning, which is tightly linked to the research.

Since its inception, the Lab has always emphasized hands-on research: learning by doing, demoing and deploying our works rather than just publishing. The academic program is led by a faculty member, currently Pattie Maes, with whom I work very closely.

My predecessor, as well as Nicholas, the lab's founding director, both had faculty appointments. However, in my case, due to the combination of my not knowing any better and the Institute not being sure about whether I had the chops to advise students and be sufficiently academic, I was not given the faculty position when I joined.

In most cases, it didn't matter. I participated in all of the faculty meetings and, except for rare occasions, was made to feel completely empowered and supported. The only awkward moments were when I was mistakenly addressed as "Professor Ito," or when, after explaining my position to academics from other universities, I had to endure responses like, "Oh! I thought you were faculty but you're on the ADMINISTRATIVE side of the house!"

So I didn't feel like I NEEDED to be a professor. When I was offered the opportunity to submit a proposal to become a professor, I wasn't sure exactly how it would help. I asked a few of my mentors and they said that it would allow me to have a life at MIT after I was no longer Lab director. Frankly, I can't imagine ever leaving my role as director of the Lab, but that was a nice option. Also, becoming a professor makes me more formally part of the Institute itself. It is a vote of confidence since it requires approval by the academic council.

I am not interested in starting my own research group, but rather have always viewed the entire Media Lab itself as my "research group," as well as my passion. However, as I help start new initiatives and support faculty, from time to time I have become more involved in thinking and doing things that require a more academic frame of mind. Lastly, I have begun to have more opinions about the academic program at the Media Lab and more broadly at MIT. Becoming a faculty member would give me a much better position from which to express these opinions.

With these thoughts in mind-and with advice from my wise mentors-I requested, and today received, appointment as a member of the MIT faculty, as a Professor of the Practice in Media Arts and Sciences.

I still remember when I used to argue with my sister, a double PhD, researcher, and faculty member, calling her "academic" as a derogatory term. I remember many people warning me when I took the role as the director of the Media Lab that I wouldn't fit in or that I'd get sick of it. I've now been at MIT approximately five years - longer than I've been at any other job - (and my sister, Mimi, is now an entrepreneur.) I feel like I've finally found my true calling and am happier than I've ever been with my work, my community and the potential for growth and impact for myself and the community in which I serve.

So thank you MIT and all of my mentors, peers, students, staff, and friends who have supported me so far. I look forward to continuing this journey to see where it goes.

I've posted the research statement that I submitted to MIT for the promotion case.

The appointment is effective July 1, 2016.

Accounting underlies finance and business, and enables the levying of taxes for raising armies, building cities, and managing resources at scale. In fact, it is the way that the world keeps track of almost everything of value.

Accounting predates money, and was originally used by ancient communities to track and manage their limited resources. There are accounting records from Mesopotamia dating back more than 7,000 years, listing the exchange of goods. Over time, accounting became the language and information infrastructure for trade. Accounting and auditing enabled the creation of vast empires, such as those built by the Egyptians and the Romans.

As accounting scaled, it made sense to go from counting sheep, bushels of grain, and cords of wood, to calculating and managing resources using their exchange value in terms of an abstract unit: money. In addition to exchange, money allowed for recording and managing obligations. So where earlier bookkeeping just kept records of promises and exchanges between individuals (Alice lent Bob a goat on this date), money opened up a new realm of accounting by dramatically simplifying the management of accounts and allowing markets, companies, and governments to scale. However, through the centuries, this once powerful simplification has resulted in a surprising downside-a downside made worse in today's digitally connected world.

Defining Value

While companies today use enterprise resource planning (ERP) systems to keep track of widgets, contractual obligations, and employees, the accounting system-and the laws that support it-require us to convert just about everything into monetary value, and enter it into a ledger system based on the 700-year-old double-entry bookkeeping method. This is the very same system used by the Florentine merchants of the 13th century and described by Luca Pacioli, the "father of accounting," in his book Summa de Arithmetica, Geometria, Proportioni et Proportionalità, published in 1494.
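The double-entry method itself is simple enough to sketch in a few lines of code. In this minimal sketch (the account names and amounts are illustrative, not from any real system), every transaction posts equal and offsetting amounts to two accounts, so the ledger always sums to zero:

```python
# Minimal double-entry ledger sketch (illustrative accounts and amounts).
from collections import defaultdict

ledger = defaultdict(float)

def post(debit_account, credit_account, amount):
    """Record one transaction: the debit and credit always offset exactly."""
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

post("cash", "sales_revenue", 1200.0)  # sell goods for cash
post("equipment", "cash", 500.0)       # buy a printing press with cash

assert sum(ledger.values()) == 0       # the books always balance
print(dict(ledger))  # {'cash': 700.0, 'sales_revenue': -1200.0, 'equipment': 500.0}
```

The balancing invariant is what made the method so durable: any arithmetic slip shows up immediately as books that don't balance.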

When you take, for instance, a contract that pays out $1 million if it rains tomorrow, and put it into your accounts, you will be required to guess the chance of rain-maybe 50 percent-and value that asset at something like $500,000. The contract will actually never pay out $500,000; in the end, it will either be worth zero (no rain) or $1 million (rain). But if you were forced to trade it today, you'd probably sell it for something close to $500,000; so for tax and management purposes, you "value" the contract at $500,000. On the other hand, if you are unable to sell it because there are no buyers, it might actually be valued at zero today by regulators interested in liquidity, but then suddenly valued at $1 million tomorrow if it rains.
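The arithmetic behind that valuation is worth spelling out, using the hypothetical numbers from the example above: the book value is an expected value that the contract itself can never actually pay.

```python
# The rain contract from the example above (hypothetical numbers).
payout = 1_000_000
p_rain = 0.5                   # your guess at the chance of rain

book_value = p_rain * payout   # what the ledger says the contract is "worth"
realized = {True: payout, False: 0}  # the only outcomes that can ever occur

print(book_value)        # 500000.0 -- a value the contract will never pay
print(realized[True])    # 1000000 if it rains
print(realized[False])   # 0 if it doesn't
```

The ledger cell holds 500,000, yet every possible tomorrow holds either 0 or 1,000,000 - the single number is a guess frozen into the books.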

Basically, a company's accounts are an aggregate of cells in various ledgers, each holding a number denominated in some currency-yen, dollars, euros, etc.-and those numbers are added up and organized into both a balance sheet and an income statement that show the health of the company to management and investors. They are also used to calculate profits and the amount of tax owed to governments. The balance sheet is a list of assets and liabilities. If you looked in the assets column, you'd find a number of items reported as having value, including things like printing presses, lines of code, intellectual property, obligations from people who may or may not pay you in the future, cash in various countries' currencies, and best guesses on things like the future price of a commodity or the value of another company.
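A sketch of that aggregation, with made-up line items, shows how one headline number can hide very different kinds of guesses:

```python
# Illustrative asset cells (made-up items and amounts), all already
# reduced to one currency before they reach the balance sheet.
assets = {
    "cash_usd": 2_000_000,
    "cash_jpy_in_usd": 450_000,  # converted at today's exchange rate
    "printing_press": 150_000,   # a best-guess resale value
    "receivables": 300_000,      # obligations that may or may not be paid
    "ip_portfolio": 1_000_000,   # a guess at intangible value
}

total_assets = sum(assets.values())
print(total_assets)  # 3900000 -- one number hiding very different assumptions
```

Hard cash, an exchange-rate snapshot, a resale guess, a promise, and an intangible all collapse into indistinguishable addends - which is precisely the lossiness the next paragraphs describe.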

As an auditor, investor, or trading partner, you might want to drill down and try to test the assumptions that the company is making and see what would happen if those were incorrect at the time they were recorded, or turned out to be wrong sometime in the future. You might also want to understand how buying another company would change your own company based on the way your obligations and bets interacted with theirs. You could rack up millions of dollars in auditor fees to "get to the bottom" of any number of assumptions. The process would involve manually reviewing the legal contracts, and also the assumptions made in every cell of every spreadsheet. That's because standard accounting is a very "lossy" process that reduces complex and context-dependent functions and transforms them into static numbers at every step. The underlying information is somewhere, but only exposed with a lot of manual digging.

The modern complex financial system is full of companies that have figured out ways to guess when investors and the companies themselves have made mistakes in their assumptions. These companies bet against a company with inaccurate pricing, or take advantage of the gap in information, and convert this into financial returns for themselves. When these mistakes are duplicated across the system, they can amplify fluctuations, which allows companies to make money as markets rise as well as when they fall, provided they can successfully predict those fluctuations. In fact, as long as the whole system doesn't collapse, smart traders make more money on fluctuation than on stability.

Just like rodent exterminators aren't excited about the idea of rodents being completely eliminated because they would no longer have jobs, those financial institutions that make money by "making the system more efficient and eliminating waste" don't really want a stable system that isn't wasteful.

Right now, the technology of the financial system is built on top of a way of thinking about money and value that was designed back when all we had were pen and paper, and when reducing the complexity of the web of dependencies and obligations was the only way to make the system functionally efficient. The way we reduce complexity is to use a common method of pricing, put elements into categories, and add them up. This just builds on 700-year-old building blocks, trying to make the system "better" by doing very sophisticated analysis of the patterns and information without addressing the underlying problem of a lossy and oversimplified view of the world: a view where everything of "value" should be recorded as a number as quickly as possible.

The standard idea of the "value" of things is a reductionist view of the world that is useful for scaling the trading of commodities that are roughly of equal worth to a large set of people. But, in fact, most things have very different values to different people at different times, and I would argue that much, if not most, of what we value can't and probably shouldn't be reduced to numbers on a spreadsheet. Financial "value" has a very specific meaning. A home clearly has "value" because someone can live in it and it's useful. However, if no one wants to buy it and no one is buying similar homes on the market, you can't set a price for it; it is illiquid, and it is impossible to determine its "fair market value." Some contracts and financial instruments are nonnegotiable, may not have a "fair market value," and may even have no value to you if you needed money (or an apple) RIGHT NOW. Part of the confusion comes from the difficulty of describing legal and mathematical ideas in plain English, and from the role of context and timing.

One example is exchange rates. My wife moved to Boston from Japan several years ago, but she still converts prices into yen. She sometimes comments on how expensive something has gotten because the value of the yen has diminished. Because most of our earnings and spending are in dollars, I always have to remind her that the "value" in yen is irrelevant to her now, although it is not irrelevant to her mother, who is in Japan.

We have become accustomed to the notion that things have a "price," and that this "price" is equivalent to their "value." But an email from you to me about a feeling that you had about our last conversation is probably valuable to me at a particular time, and probably not valuable to most people. A single apple is worth a lot more to a hungry person than to the owner of an apple orchard. Context is everything.

"Can't Buy Me Love" - The Beatles

The notion in economics of consumers making financial decisions to maximize "utility," as a kind of proxy for happiness, is another example of how a universal system of "value" oversimplifies complexity, so much so that the models which assume humans are "economically rational" actors in a marketplace simply don't work. The simplest version of this model would mean that the more money you had, the happier you would be, which Daniel Kahneman and Angus Deaton argue is true up to about $75,000 in annual income.[1]

Today, we have the technology and the computational power to create a system of accounts that could retain and deal with a lot of the complexity that the current system was designed to avoid. There is, for example, no reason that every entry in our books needs to be a number. Each cell could be an algorithmic representation of the obligations and dependencies that it represents. In fact, using machine learning, accounts could become sophisticated probabilistic models for what might happen depending on how things around them change. This would mean that the "value" of any system would change depending on who was asking, their location, and the time parameters.
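The idea that a ledger entry could be an algorithm rather than a number can be sketched concretely. The following is a toy illustration under invented assumptions: each "cell" is a small function whose value depends on a context (who is asking, under what scenario), and the balance is computed by evaluating every cell in that context. None of these names come from a real accounting system.

```python
from dataclasses import dataclass
from typing import Callable

# Toy sketch: a ledger "cell" is a program, not a static number.
# Its value depends on the context in which it is evaluated.

@dataclass
class Context:
    currency: str
    scenario: dict  # e.g. {"rain": True} or {"p_rain": 0.5}

Cell = Callable[[Context], float]

def rain_contract(ctx: Context) -> float:
    """Worth $1M if the scenario says it rained; otherwise, book the
    probability-weighted guess, just as a static ledger would."""
    if "rain" in ctx.scenario:
        return 1_000_000.0 if ctx.scenario["rain"] else 0.0
    return 1_000_000.0 * ctx.scenario.get("p_rain", 0.5)

def balance(cells: list[Cell], ctx: Context) -> float:
    """The 'balance' is recomputed from the cells for each context."""
    return sum(cell(ctx) for cell in cells)

ledger = [rain_contract, lambda ctx: 250_000.0]  # second cell: plain cash

print(balance(ledger, Context("USD", {"rain": True})))   # 1250000.0
print(balance(ledger, Context("USD", {"p_rain": 0.5})))  # 750000.0
```

The same ledger reports different totals depending on who asks and under which scenario, which is the point: the answer to "what is this worth?" is a function of context, not a constant.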

Today, when a bank regulator conducts a stress test, it gives a bank a scenario: changes in the credit markets or in the prices of certain assets. The bank is then required to return a report on whether it would crash or remain solvent. This requires a lot of human labor to go through the accounts and run simulations. But what if the accounts were all algorithmic, and instead you could instantly run a program to provide the answer? What if you had a learning model that could answer a more important question: "What sets of changes to the market WOULD make it crash, and why?" That's really what we want to know. We want to know this not just for one bank, but for the whole system of banks, investors, and everything that interacts.
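If accounts were algorithmic, a stress test becomes a search over scenarios for the ones that break the balance sheet. The sketch below is an invented toy, not a real regulatory tool: a bank's assets are a function of a scenario, and we simply ask which scenarios would make it insolvent.

```python
# Toy sketch: with algorithmic accounts, "would you crash?" becomes a
# program, and "what WOULD make you crash?" becomes a search.

def solvent(assets_fn, liabilities, scenario) -> bool:
    """Solvent if assets cover liabilities under this scenario."""
    return assets_fn(scenario) - liabilities >= 0

def failing_scenarios(assets_fn, liabilities, scenarios):
    """Return the scenarios under which the bank would crash."""
    return [s for s in scenarios if not solvent(assets_fn, liabilities, s)]

# Invented toy bank: cash plus mortgage bonds whose value scales with a
# house-price index supplied by the scenario.
def assets(scenario):
    return 100.0 + 900.0 * scenario["house_price_index"]

scenarios = [{"house_price_index": x / 10} for x in range(5, 13)]
print(failing_scenarios(assets, liabilities=800.0, scenarios=scenarios))
```

A real version would search a far richer scenario space (and a learning model could search it intelligently), but the shape of the question is the same: enumerate conditions, evaluate the accounts, and report the ones that fail.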

When I'm buying something from a company (let's say a credit default swap from your company, AIG), what I would want to know is whether, in the unlikely chance that the AA mortgage-backed bonds I was betting against defaulted, your company would be able to pay when the day came to pay the obligation. Right now, there is no easy way to find out. However, what if all of the obligations and contracts, instead of being written on paper and recorded as numbers, were actually computable and "visible"? You'd immediately be able to see that, in fact, in the scenario in which you'd have to pay me, you'd actually have no money, since you'd written similar contracts to so many people that you'd be broke. Right now, even the banks themselves can't see this unless an internal investigator thinks to look for it ahead of time.
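The thought experiment above can be made concrete with a toy sketch. Everything here is invented for illustration: if each contract were a computable obligation, a counterparty could sum what the seller owes under a given scenario and compare it to the seller's capital.

```python
# Toy sketch of the counterparty question: under the scenario where I
# get paid, can the seller actually pay everyone it owes?

def total_obligation(contracts, scenario):
    """Sum what the seller owes across all contracts in this scenario."""
    return sum(c(scenario) for c in contracts)

def can_pay(capital, contracts, scenario):
    """True if the seller's capital covers its total obligation."""
    return capital >= total_obligation(contracts, scenario)

# The seller wrote the same swap to many buyers: each contract pays 10
# if the AA mortgage-backed bonds default in the scenario.
swap = lambda s: 10.0 if s.get("aa_mbs_default") else 0.0
written = [swap] * 200   # 200 identical contracts outstanding
capital = 500.0

print(can_pay(capital, written, {"aa_mbs_default": True}))   # False
print(can_pay(capital, written, {"aa_mbs_default": False}))  # True
```

The seller looks fine in the no-default scenario and is hopelessly short in the default one; with computable contracts, any buyer could see this before signing rather than after the crash.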

Rethinking the Fundamentals of Accounting

With cutting-edge cryptography like zero-knowledge proofs and secure multiparty computation, there are ways we might be able to keep these accounts open to each other without compromising business and personal privacy. Computing every contract as a cell in a huge set of accounts every time anyone asked a question would exceed even today's computing capacity, but with machine learning and the creation of models, we might be able to dampen, if not stabilize, the massive amplification of fluctuations. These bubbles and collapses occur today, in part, because we have built our whole system on an oversimplified house of cards, with the handlers having an incentive to keep it fragile and opaque so they can introduce inefficiencies to exploit later for their own gain.

I think the current excitement about Bitcoin and distributed ledgers has created a great opportunity to take advantage of their flexible and reprogrammable nature, allowing us to rethink the fundamental system of accounts. I'm much more interested in this than in apps for banks, or even new ideas in finance, which address some of the symptoms without taking a shot at one of the root causes: the impossibly complex and outdated system we've built on a 700-year-old double-entry bookkeeping method, the very same system used by the Florentine merchants of the 13th century. It feels like we are using integers when we should be using imaginary numbers. Reinventing accounting should be more like discovering a new number theory than tweaking the algorithms, which is what I feel we've been doing for the last several hundred years.

--

Originally posted on PubPub.ito.com. Please read and post comments there.

References

[1] Daniel Kahneman and Angus Deaton, "High Income Improves Evaluation of Life but Not Emotional Well-Being," Proceedings of the National Academy of Sciences (2010).

Our website circa 1996

Thanks to Boris Anthony and Daiji Hirata for helping to upgrade and clean up my blog.

We upgraded the platform to Movable Type Pro 6.2.4. (Yes, I still use Movable Type!) Daiji and Boris got Facebook Instant Articles working, inspired by Dave Winer and with help from the folks over at Facebook. (Thanks!) Boris cleaned up the design of the blog and also made it responsive, so it's much more mobile friendly.

What's amazing to me is how well the design has held up over the years.

We (the founding team of Eccosys) set up our web server in 1993. I have a feeling this item, dated July 1, 1993, is the first journal-like thing I posted on the Internet: "Howard Mentioned me in Wired!" In 2002, with Justin Hall's help, I converted my static website to a "blog." I had a journal on my static website, but this new "blogging" thing made updating much easier.

Boris took over my blog design in the summer of 2003 and we relaunched the site in July of 2003. In 2008, we did a major redesign including a shiny new logo from Susan Kare.

It is quite amazing to me that, with all of the various changes in technology, most of the content on my website has been able to migrate through all of these upgrades. I'm also happy that Archive.org keeps archives of this site with its original designs, as well as of eccosys.com, which preceded it, although the very first versions from 1993 are lost as far as I know. The web and its standards are very robust, and I hope they stay that way.

LibrePlanet 2016 and the World Wide Web Consortium (W3C) happened to be having meetings at MIT at the same time, so Harry Halpin from the W3C thought it would be a great opportunity to have a public discussion about Digital Restrictions Management* (DRM). The W3C was discussing DRM and the World Wide Web and considering Encrypted Media Extensions (EME), which would build DRM support into the Web standards, and various parties were trying to argue against it. They didn't have room over at CSAIL, so he approached me about having it at the Media Lab, and I agreed to host it as long as it was clear that this didn't signal some official position by the Lab.

We were able to pull together an interesting panel with Richard Stallman from the Free Software Foundation, Danny O'Brien from the Electronic Frontier Foundation and Harry Halpin from the World Wide Web Consortium as the moderator. Harry and I were speaking on behalf of ourselves and not our (in my case various) organizations and affiliations.

As you might imagine with this group, it wasn't a debate, but a set of arguments against DRM from various perspectives and at various levels of intensity. :-)

Here's the blurb from Harry.

Will the future of the Web include Digital Rights Management? The World Wide Web Consortium (W3C), the MIT-based international standards body in charge of "bringing the Web to its full potential" is in process of deciding if they should continue their work on Encrypted Media Extensions (EME). The recommendation of EME by W3C would standardize the use of Digital Rights Management (DRM) across browsers. The Free Software Foundation (FSF) has petitioned W3C to stop all work on EME and DRM-related technologies. The W3C will consider adopting a DRM non-aggression covenant drafted by the Electronic Frontier Foundation (EFF) at its Advisory Committee meeting at MIT next week.

This is an open invitation for genuine person-to-person dialogue with people from MIT, FSF, EFF, and W3C about DRM on the Web (and any other topics of importance to the Web).

Speakers:

- Joi Ito (Media Lab)
- Richard Stallman (Free Software Foundation)
- Danny O'Brien (EFF)
- W3C Team Member(s)
- Moderator: Harry Halpin (W3C)

March 20th 2016, 8 PM

* Richard Stallman insists we call it Digital Restrictions Management although industry more commonly refers to it as "Digital Rights Management."

I wrote a bit about DRM in my PubPub post, "Why anti-money laundering laws and poorly designed copyright laws are similar and should be revised."


Sitting at home and looking out the window was a bit other-worldly. A snowy day in April is rare even in Boston. I seem to have gotten myself sick again. (After being mostly immune to everything for years, I've had a series of colds and flus this year. More on my theories about this in another post.)

For the last few days, Boris, Daiji and I have been following in the footsteps of Dave Winer and have been trying to get my RSS feed from my Movable Type Blog to become compatible with Facebook Instant Articles so that it would be approved. We have been going back and forth with the Facebook team who have been friendly and responsive. I THINK we finally have it working.

So here we go. If you read this on Facebook in the app, you should see the thunderbolt mark, and it should load really easily.

Thanks to Dave for getting me going on this thread and to Boris, Daiji and the folks at Facebook who helped out. My Open Web feels a bit more loved tonight than it did before.


This is what this page looks like on my iPhone.

We've been talking a lot about the importance of the Open Web and where Medium fits into the ecosystem of walled gardens and the Open Web. Evan Williams, founder and CEO of Medium, was nice enough to chat with me on Skype and to allow me to post the conversation. I've known Ev since the Blogger days and the Twitter days, and I have been a user of every one of his products; the conversation reminded me how much I enjoy having product conversations with Ev.

It sounds like, while Medium has been and remains focused on creating a great authoring platform, Ev is more open to supporting the Open Web than some might have feared. I look forward to seeing support for more interoperability and to working with them on it.
