There seems to be a general rule that systems like conversations on the Internet, US democracy (and its capture by powerful financial interests), the Arab Spring movement, and many other things that were wonderfully optimistic and positive at the beginning begin to regress and fail as they scale or age. Most of these systems evolve to become resistant to redesign and overthrow, adapting like some sophisticated virus or cancer. It's related to, but harder to fix than, the tragedy of the commons.
I want to write a longer post trying to understand this trend/effect, but I was curious about whether there was already some work on understanding it and whether there was already a name for the idea. If not, what should we call it, assuming people agree that it's a "thing"?
Accounting underlies finance and business, and enables the levying of taxes for raising armies, building cities, and managing resources at scale. In fact, it is the way the world keeps track of almost everything of value.
Accounting predates money, and was originally used by ancient communities to track and manage their limited resources. There are accounting records from Mesopotamia dating back more than 7,000 years, listing the exchange of goods. Over time, accounting became the language and information infrastructure for trade. Accounting and auditing enabled the creation of vast empires, such as those built by the Egyptians and the Romans.
As accounting scaled, it made sense to go from counting sheep, bushels of grain, and cords of wood to calculating and managing resources using their exchange value in terms of an abstract unit: money. In addition to exchange, money allowed for recording and managing obligations. So where earlier bookkeeping just kept records of promises and exchanges between individuals (Alice lent Bob a goat on this date), money opened up a new realm of accounting by dramatically simplifying the management of accounts and allowing markets, companies, and governments to scale. However, through the centuries, this once powerful simplification has resulted in a surprising downside -- a downside made worse in today's digitally connected world.
While companies today use enterprise resource planning (ERP) systems to keep track of widgets, contractual obligations, and employees, the accounting system -- and the laws that support it -- requires us to convert just about everything into monetary value and enter it into a ledger system based on the 700-year-old double-entry bookkeeping method. This is the very same system used by the Florentine merchants of the 13th century and described by Luca Pacioli, the "father of accounting," in his book Summa de Arithmetica, Geometria, Proportioni et Proportionalità, published in 1494.
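To make the 700-year-old method concrete, here is a minimal, invented sketch of double-entry bookkeeping in Python. The accounts and amounts are illustrative, not taken from any real system:

```python
# A minimal illustration of the double-entry method: every transaction is
# recorded twice, as a debit in one account and a matching credit in another,
# so the books always balance. All accounts and amounts are invented.

journal = []

def record(debit_account, credit_account, amount):
    journal.append((debit_account, amount))    # debit entry
    journal.append((credit_account, -amount))  # matching credit entry

record("inventory", "cash", 500)  # buy goods for cash
record("cash", "sales", 800)      # sell goods for cash

# The defining invariant of double-entry bookkeeping: entries sum to zero.
assert sum(amount for _, amount in journal) == 0
```

The invariant in the final line is the whole point of the method: an error in one entry shows up as an imbalance, which is why it survived seven centuries.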
When you take, for instance, a contract that pays out $1 million if it rains tomorrow, and put it into your accounts, you will be required to guess the chance of rain -- maybe 50 percent -- and value that asset at something like $500,000. The contract will actually never pay out $500,000; in the end, it will either be worth zero (no rain) or $1 million (rain). But if you were forced to trade it today, you'd probably sell it for something close to $500,000; so for tax and management purposes, you "value" the contract at $500,000. On the other hand, if you are unable to sell it because there are no buyers, it might actually be valued at zero today by regulators interested in liquidity, but then suddenly valued at $1 million tomorrow if it rains.
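The rain-contract valuation above is just an expected-value calculation. A tiny sketch, using the same invented numbers:

```python
# Hypothetical sketch of mark-to-model valuation of a binary contract.
# The numbers match the rain example above; nothing here comes from a
# real accounting system.

def book_value(payout, probability):
    """The expected value used for tax and management 'valuation'."""
    return payout * probability

rain_contract = {"payout": 1_000_000, "probability_of_rain": 0.5}

value = book_value(rain_contract["payout"],
                   rain_contract["probability_of_rain"])
print(value)  # 500000.0 -- a number the contract itself will never pay
```

The ledger keeps only `value`; the payout function and the probability estimate that produced it are discarded, which is exactly the lossiness the post is describing.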
Basically, a company's accounts are an aggregate of cells in various ledgers with numbers that represent a numerical value denominated in some currency -- yen, dollars, euros, etc. -- and those numbers are added up and organized into both a balance sheet and an income statement that show the health of the company to management and investors. They are also used to calculate profits and the amount of tax owed to governments. The balance sheet is a list of assets and liabilities. If you looked in the assets column, you'd find a number of items reported as having value, including things like printing presses, lines of code, intellectual property, obligations from people who may or may not pay you in the future, cash in various countries' currencies, and best guesses on things like the future price of a commodity or the value of another company.
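As an illustration of how a balance sheet flattens everything into currency-denominated cells, here is a toy aggregation. All of the items and figures are invented:

```python
# Illustrative only: a balance sheet as a flat aggregation of ledger cells.
# Each cell is just (description, kind, dollar amount) -- the guesses and
# context behind each number have already been thrown away.

ledger = [
    ("printing press",      "asset",     120_000),
    ("receivable from Bob", "asset",      40_000),  # a best guess -- Bob may never pay
    ("cash (USD)",          "asset",      25_000),
    ("bank loan",           "liability",  90_000),
]

assets = sum(v for _, kind, v in ledger if kind == "asset")
liabilities = sum(v for _, kind, v in ledger if kind == "liability")
equity = assets - liabilities
```

Note that the receivable and the printing press add together as if they were the same kind of thing; the summation is what makes the report legible, and also what makes it lossy.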
As an auditor, investor, or trading partner, you might want to drill down and test the assumptions that the company is making, and see what would happen if those were incorrect at the time they were recorded, or turned out to be wrong sometime in the future. You might also want to understand how buying another company would change your own company, based on the way your obligations and bets interacted with theirs. You could rack up millions of dollars in auditor fees to "get to the bottom" of any number of assumptions. The process would involve manually reviewing the legal contracts, as well as the assumptions made in every cell of every spreadsheet. That's because standard accounting is a very "lossy" process that reduces complex and context-dependent functions to static numbers at every step. The underlying information is somewhere, but it's only exposed with a lot of manual digging.
The modern complex financial system is full of companies that have figured out ways to guess when investors and the companies themselves have made mistakes in their assumptions. These companies bet against a company with inaccurate pricing or take advantage of the gap in information to convert it into financial returns for themselves. When these mistakes are duplicated across the system, they can amplify fluctuations, which also allows companies to make money both as markets rise and as they fall, if they can successfully predict those fluctuations. In fact, as long as the whole system doesn't collapse, smart traders make more money on fluctuation than on stability.
Just like rodent exterminators aren't excited about the idea of rodents being completely eliminated because they would no longer have jobs, those financial institutions that make money by "making the system more efficient and eliminating waste" don't really want a stable system that isn't wasteful.
Right now, the technology of the financial system is built on top of a way of thinking about money and value that was designed back when all we had were pen and paper, and when reducing the complexity of the web of dependencies and obligations was the only way to make the system functionally efficient. The way we reduce complexity is to use a common method of pricing, put elements into categories, and add them up. This just builds on 700-year-old building blocks, trying to make the system "better" by doing very sophisticated analysis of the patterns and information without addressing the underlying problem of a lossy and oversimplified view of the world: a view where everything of "value" should be as quickly as possible recorded as a number.
The standard idea of the "value" of things is a reductionist view of the world that is useful for scaling the trading of commodities that are roughly of equal worth to a large set of people. But, in fact, most things have very different values to different people at different times, and I would argue that many -- if not most -- things of value can't and probably shouldn't be reduced to numbers on a spreadsheet. Financial "value" has a very specific meaning. A home clearly has "value" because someone can live in it and it's useful. However, if no one wants to buy it and no one is buying similar homes on the market, you can't set a price for it; it is illiquid and it is impossible to determine its "fair market value." Some contracts and financial instruments are nonnegotiable, may not have a "fair market value," and may even have no value to you if you needed money (or an apple) RIGHT NOW. Part of the confusion comes from the difficulty of describing legal and mathematical ideas in plain English, and the role of context and timing.
One example is exchange rates. My wife moved to Boston from Japan several years ago, but still converts prices into yen. She sometimes comments on how expensive something has gotten because the value of the yen has diminished. Because most of our earnings and spending are in dollars, I always have to remind her that the "value" in yen is irrelevant to her now, although not irrelevant to her mother, who is in Japan.
We have become accustomed to the notion that things have a "price," and that "price" is equivalent to its "value." But an email from you to me about a feeling that you had about our last conversation is probably valuable to me at a particular time and probably not valuable to most people. A single apple is worth a lot more to a hungry person than the owner of an apple orchard. Context is everything.
"Can't Buy Me Love" - The Beatles
The economics notion of consumers making financial decisions to maximize "utility" as a kind of proxy for happiness is another example of how the notion of a universal system of "value" oversimplifies reality -- so much so that models that assume humans are "economically rational" actors in a marketplace simply don't work. The simplest version of this model would mean that the more money you had, the happier you would be, which Daniel Kahneman and Angus Deaton argue is true only up to about $75,000 in annual income.
Today, we have the technology and the computational power to create a system of accounts that could retain and deal with a lot of the complexity that the current system was designed to avoid. There is, for example, no reason that every entry in our books needs to be a number. Each cell could be an algorithmic representation of the obligations and dependencies that it represents. In fact, using machine learning, accounts could become sophisticated probabilistic models for what might happen depending on how things around them change. This would mean that the "value" of any system would change depending on who was asking, their location, and the time parameters.
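One way to picture "algorithmic cells" is as functions of a scenario rather than static numbers. A hypothetical sketch, with names and figures of my own invention rather than a proposal for a real system:

```python
# Sketch of the idea: each ledger "cell" is a function of context (who asks,
# when, under what scenario), not a static number. All names and figures
# here are hypothetical.

def rain_contract_cell(scenario):
    """The $1M rain contract's value depends on the scenario you feed it."""
    return 1_000_000 if scenario.get("rain") else 0

def receivable_cell(scenario):
    """A receivable whose value depends on the debtor's assumed solvency."""
    return 40_000 * scenario.get("bob_solvency", 0.9)

cells = [rain_contract_cell, receivable_cell]

def value_of_books(scenario):
    """'Value' is only defined relative to a scenario, never in the abstract."""
    return sum(cell(scenario) for cell in cells)

print(value_of_books({"rain": True, "bob_solvency": 1.0}))   # 1040000.0
print(value_of_books({"rain": False, "bob_solvency": 0.5}))  # 20000.0
```

The same books yield wildly different totals depending on the scenario, which is the point: the answer to "what is this worth?" becomes a query, not a stored constant.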
Today, when a bank regulator conducts a stress test, it gives a bank a scenario -- changes in the credit markets or the prices of certain things. The bank is then required to return a report on whether it would crash or remain solvent. This requires a lot of human labor to go through the accounts and run simulations. But what if the accounts were all algorithmic, and instead you could instantly run a program to provide the answer? What if you had a learning model that could answer a more important question: "What sets of changes to the market WOULD make it crash, and why?" That's really what we want to know. We want to know this not just for one bank, but for the whole system of banks, investors, and everything that interacts.
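With algorithmic accounts, the inverted stress test -- searching for the scenarios that would cause a crash, rather than checking one scenario at a time -- becomes a computation. A deliberately toy model, with made-up coefficients:

```python
# Hedged sketch: once the books are a function of market variables, a stress
# test is a search over scenarios. The bank model here is a two-variable toy,
# not a real balance sheet.
from itertools import product

def bank_equity(scenario):
    """Toy model: equity as a function of housing prices and interest rates."""
    housing, rates = scenario
    assets = 100 + 80 * housing      # asset values track housing prices
    liabilities = 120 + 30 * rates   # funding costs track interest rates
    return assets - liabilities

# Enumerate a grid of scenarios and keep the ones that make the bank insolvent.
scenarios = product([0.4, 0.7, 1.0], [0.0, 0.5, 1.0])
crashes = [s for s in scenarios if bank_equity(s) < 0]
```

In a real system the grid would be replaced by a learned model over many correlated variables, but the shape of the question is the same: invert the books to find the failure surface.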
When I'm buying something from a company -- let's say a credit default swap from your company, AIG -- what I would want to know is whether, when the day comes to pay the obligation, in the unlikely chance that the AA mortgage-backed bonds I was betting against defaulted, your company would be able to pay. Right now, there is no easy way to do this. However, what if all of the obligations and contracts, instead of being written on paper and recorded as numbers, were actually computable and "visible"? You'd immediately be able to see that, in fact, in the scenario in which you'd have to pay me, you'd actually have no money, since you'd written similar contracts to so many people that you'd be broke. Right now, even the banks themselves can't see this unless an internal investigator thinks to look for it ahead of time.
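The idea of computable, "visible" obligations could be sketched as a network you can query under a scenario. Everything below is invented for illustration; "AIG" is just a label and the amounts are arbitrary:

```python
# Sketch of computable obligations: a toy network of contracts where anyone
# can ask whether a counterparty could actually pay in a given scenario.
# All parties and amounts are invented.

obligations = [
    # (payer, payee, amount owed if the AA mortgage bonds default)
    ("AIG", "me",     1_000_000),
    ("AIG", "fund_a", 2_000_000),
    ("AIG", "fund_b", 3_000_000),
]

cash = {"AIG": 1_500_000}

def can_pay_all(payer, bonds_default):
    """Would `payer` cover every triggered obligation in this scenario?"""
    owed = sum(amount for p, _, amount in obligations
               if p == payer and bonds_default)
    return cash.get(payer, 0) >= owed

# In the default scenario, "AIG" owes $6M against $1.5M of cash:
print(can_pay_all("AIG", bonds_default=True))  # False
```

The answer that today requires an internal investigator who happens to look becomes a one-line query over the shared, privacy-preserving network of contracts.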
Rethinking the Fundamentals of Accounting
With cutting-edge cryptography like zero-knowledge proofs and secure multiparty computation, there are ways we might be able to keep these accounts open to each other without compromising business and personal privacy. Computing every contract as a cell in a huge set of accounts every time anyone asked a question would exceed even today's computing capacity, but with machine learning and the creation of models, we might be able to dampen, if not stabilize, the massive amplification of fluctuations. These bubbles and collapses occur today, in part, because we are building our whole system on an oversimplified house of cards, with the handlers having an incentive to make them fragile and opaque in order to introduce inefficiencies they can exploit later to make money for themselves.
I think the current excitement about Bitcoin and distributed ledgers has created a great opportunity to take advantage of their flexible and reprogrammable nature, allowing us to rethink the fundamental system of accounts. I'm much more interested in this than in apps for banks, or even new ideas in finance, which will address some of the symptoms without taking a shot at one of the root causes: the impossibly complex and outdated system that we've built on the 700-year-old double-entry bookkeeping method -- the very same system used by the Florentine merchants of the 13th century. It feels like we are using integers when we should be using imaginary numbers. Reinventing accounting should be more like discovering a new number theory than tweaking the algorithms, which is what I feel like we've been doing for the last several hundred years.
Daniel Kahneman and Angus Deaton. "High Income Improves Evaluation of Life But Not Emotional Well-Being". Proceedings of the National Academy of Sciences. (2010).
LibrePlanet 2016 and the World Wide Web Consortium (W3C) happened to be having meetings at MIT at the same time, so Harry Halpin from the W3C thought it would be a great opportunity to have a public discussion about Digital Restrictions Management* (DRM). The W3C was discussing DRM and the World Wide Web and considering Encrypted Media Extensions (EME), which would build DRM support into Web standards, and various parties were trying to argue against it. They didn't have room over at CSAIL, so he approached me about having it at the Media Lab, and I agreed to host it as long as it was clear that this didn't signal some official position by the Lab.
We were able to pull together an interesting panel with Richard Stallman from the Free Software Foundation, Danny O'Brien from the Electronic Frontier Foundation and Harry Halpin from the World Wide Web Consortium as the moderator. Harry and I were speaking on behalf of ourselves and not our (in my case various) organizations and affiliations.
As you might imagine with this group, it wasn't a debate, but a set of arguments against DRM from various perspectives and levels of intensity. :-)
Here's the blurb from Harry.
Will the future of the Web include Digital Rights Management? The World Wide Web Consortium (W3C), the MIT-based international standards body in charge of "bringing the Web to its full potential," is in the process of deciding whether to continue its work on Encrypted Media Extensions (EME). The recommendation of EME by W3C would standardize the use of Digital Rights Management (DRM) across browsers. The Free Software Foundation (FSF) has petitioned W3C to stop all work on EME and DRM-related technologies. The W3C will consider adopting a DRM non-aggression covenant drafted by the Electronic Frontier Foundation (EFF) at its Advisory Committee meeting at MIT next week.
This is an open invitation for genuine person-to-person dialogue with people from MIT, FSF, EFF, and W3C about DRM on the Web (and any other topics of importance to the Web).
- Joi Ito (Media Lab)
- Richard Stallman (Free Software Foundation)
- Danny O'Brien (EFF)
- W3C Team Member(s)
- Moderator: Harry Halpin (W3C)
March 20th 2016, 8 PM
* Richard Stallman insists we call it Digital Restrictions Management although industry more commonly refers to it as "Digital Rights Management."
I wrote a bit about DRM in my PubPub post, "Why anti-money laundering laws and poorly designed copyright laws are similar and should be revised."
Sitting at home and looking out the window was a bit other-worldly. A snowy day in April is rare even in Boston. I seem to have gotten myself sick again. (After being mostly immune to everything for years, I've had a series of colds and flus this year. More on my theories about this in another post.)
For the last few days, Boris, Daiji and I have been following in the footsteps of Dave Winer and have been trying to get my RSS feed from my Movable Type Blog to become compatible with Facebook Instant Articles so that it would be approved. We have been going back and forth with the Facebook team who have been friendly and responsive. I THINK we finally have it working.
So here we go. If you read this on Facebook in the app, you should see the thunderbolt mark and it should load really quickly.
Thanks to Dave for getting me going on this thread and to Boris, Daiji and the folks at Facebook who helped out. My Open Web feels a bit more loved tonight than it did before.
This is what this page looks like on my iPhone.
In the post that follows I'm trying to develop what I see to be strong analogues to another crucial period/turning point in the history of technology, but like all such comparisons, the differences are as illuminating as the similarities. I'm still not sure how far I should be stretching the metaphors, but it feels like we might be able to learn a lot about the future of Bitcoin from the history of the Internet. This is my first post about Bitcoin and I'm really looking more for reactions and new ideas than trying to prove a point. Feedback and links to things I should read would be greatly appreciated.
I'm fundamentally an Internet person -- my real business life started around the dawn of the Internet and for most of my adult life, I've been involved in building layers and pieces of the Internet, from helping start the first commercial Internet service provider in Japan to investing in Twitter and helping bring it to Japan. I've also served on the boards of the Open Source Initiative, the Internet Corporation for Assigned Names and Numbers (ICANN), the Mozilla Foundation, Public Knowledge, and the Electronic Privacy Information Center (EPIC), and been the CEO of Creative Commons. Given my experiences in the early days of the net, it's possible that I'm biased and everything new looks like the Internet.
Having said that, I believe that there are many parallels between the Internet and Bitcoin and there are many lessons from the Internet that can help provide guidance in thinking about Bitcoin and its future, but there are also some important differences.
The similarity is that Bitcoin is a transportation infrastructure that is decentralized, efficient, and based on an open protocol. Where the Internet moves packets of data over a dynamic network, in contrast to the circuits and leased lines that preceded it, Bitcoin's protocol, the blockchain, allows trust to be established between mutually distrusting parties in an efficient and decentralized way. Although you could argue that the ledger is "centralized," it's created through mechanical decentralized consensus.
The Internet has a root -- in other words, just because you use the Internet Protocol doesn't mean that you're necessarily part of the Internet. To be part of THE Internet, you have to agree to the names and numbers protocol and root servers that are administered by ICANN and its consensus process. You can use the Internet Protocol and make your own network, using your own rules for names and numbers, but then you're just a network and not The Internet.
Similarly, you can use the blockchain protocol to create alternative bitcoins or alt.coins. This allows you to innovate and use many of the technological benefits of Bitcoin, but you are no longer technically interoperable with Bitcoin and do not benefit from the network effect or the trust that Bitcoin has.
Also like the beginning of the Internet, there are competing ideas at each of the levels. AOL created a dialup network and really helped to popularize email. It eventually dumped its dialup network, its core business, but survived as an Internet service. Many people still have AOL email accounts.
With crypto-currencies, there are coins that don't connect to the "genesis block" of Bitcoin -- alt.coins that use fundamentally the same technology. There are alt.coins that use slightly different protocols and some that are fundamentally different.
On top of the coin layer, there are various services such as wallets, exchanges, service providers with varying levels of vertical integration -- some agnostic to whichever cryptocurrency ends up "winning" and some tightly linked. There are technologies and services being built on top of the infrastructure that use the network for fundamentally different things than transacting units of value, just as voice over IP used the same network in a very different way.
In the early days of the Internet, most online services were a combination of dialup and X.25, a competing packet-switching protocol developed by the Comité Consultatif International Téléphonique et Télégraphique (CCITT), the predecessor to the International Telecommunication Union (ITU), a standards body that hangs off of the United Nations. Many services like The Source or CompuServe used X.25 before they started offering their services over the Internet.
I believe the first killer app for the Internet was email. On most of the early online services, you could only send email to other people on the same service. When Internet email came to these services, suddenly you could send email to anyone. This was quite amazing and notably, email is still one of the most important applications on the Internet.
As the Internet proliferated, the TCP/IP stack, free software that anyone could download for free and install on their computer to connect it to the Internet, was further developed and deployed. This allowed applications that ran on your computer to use the Internet to talk to other programs running on other computers. This created the machine-to-machine network. It was no longer just about typing text into a terminal window. The file transfer protocol (FTP) and later Gopher, a text-based browsing and downloading service popular before the web was invented, allowed you to download music and images and create a world wide web of content. Eventually, permissionless innovation on top of this open architecture gave birth to the World Wide Web, Napster, Amazon, eBay, Google and Skype.
I remember twenty years ago, giving a talk to advertising agencies, media companies and banks explaining how important and disruptive the Internet would be. Back then, there were satellite photos of the earth and a webcam pointing at a coffee pot on the Internet. Most people didn't have the imagination to see how the Internet would fundamentally disrupt commerce and media, because Amazon, eBay and Google hadn't been invented -- just email and Usenet-news. No one in these big companies believed that they had to learn anything about the Internet or that the Internet would affect their business -- I mostly got blank stares or snores.
Similarly, I believe that Bitcoin is the first "killer app" of The Blockchain as email was the killer app for the beginning of the Internet. We are in the process of inventing eBay, Amazon and Google. My hunch is that The Blockchain will be to banking, law and accountancy as The Internet was to media, commerce and advertising. It will lower costs, disintermediate many layers of business and reduce friction. As we know, one person's friction is another person's revenue.
One of the main things we worked on when I was on the board of ICANN was trying to keep the Internet from forking. There were many organizations that didn't agree with ICANN's policies or didn't like the US's excessive influence over the Internet. Our job was to listen to everyone and create an inclusive and consensus-based process so that people felt that the benefits of the network effect outweighed the energy and cost of dealing with this process. In general we succeeded. It helped that almost all of the founders and key technical minds and technical standards organizations that designed and ran the Internet worked together with ICANN. This interface between the policy makers and the technologists -- however painful -- was viewed as something that wasn't great but worked better than any of the other alternatives.
One question is whether there is an ICANN equivalent needed for Bitcoin. Is Bitcoin email and The Blockchain TCP/IP?
One argument about why it might not be the same is that ICANN fundamentally had to deal with the centralization caused by the name space problem created by domain names. Domain names are essential for the way we think the Internet works and you need a standards body to deal with the conflicts. The solutions to Bitcoin's centralization problems will look nothing like a domain name system (DNS), because although there is currently centralization in the form of mining pools and core development, the protocol is fundamentally designed to need decentralization to function at all. You could argue that the Internet requires a degree of decentralization, but it has so far survived its relationship with ICANN.
One other important function that ICANN provides is a way to discuss changes to the core technology. It also coordinates the policy conversation between the various stakeholders: the technology people, the users, business and governments. The registrars and registries were the main stakeholders since they ran the "business" that feeds ICANN and provides a lot of the infrastructure together with the ISPs.
For Bitcoin it's the miners -- the people and companies that do the computation required to secure the network by producing the cryptographically secure blockchain at the core of Bitcoin -- all in exchange for bitcoin rewards from the network itself. Any technical changes that the developers want to make to Bitcoin will not be adopted unless the miners adopt them, and the developers and the miners have different incentives. It's possible that the miners have some similarities to the registrars and registries, but they are fundamentally different in that they are not customer-facing and don't really care what you think.
As with ICANN, the users do matter and are key for the network effect value of Bitcoin, but without the miners the engine doesn't run. The miners aren't as easy to identify as the registrars and registries and it's unclear how the dynamics of incentives for the miners will develop with the value of bitcoin fluctuating, the difficulty of mining increasing and the transaction fees being market driven. It's possible that they will develop into a community with a user interface and a governance function, but they are mostly hidden and independent for a variety of reasons that are unlikely to change for now. Having said that, one of the first publicly traded Bitcoin companies is a miner.
The core developers are different as well. The founders of the Internet may have been slightly hippy-like, but they were mostly government-funded and fairly government-friendly. Cutting a deal with the Department of Commerce seemed like a pretty good idea to them at the time.
The core Bitcoin developers are cypherpunks who do what they do because they don't trust governments or the global banking system and are trying to build a distributed and autonomous system, one that is impervious to regulation and meddling by anyone at any time. At some level, Bitcoin was designed to not care what regulators think. The miners have an economic interest in Bitcoin having value, since that's what they're paid in, and they care about scale and the network effect, but the miners probably don't care if it's Bitcoin or an alt.coin that ends up winning, as long as their investments in hardware and plant don't disappear before they make a return on their investment.
Regulators clearly have an incentive to influence the rules of the network, but it's unclear whether the core developers really need to care what the regulators think. Having said that, without some sort of buy-in by regulators, it's unlikely to scale or have the mainstream impact that the Internet did.
Very much like the early days of the Internet, when we saw the power of Internet email but hadn't yet invented the Web, we are just imagining the potential uses of concepts such as crypto-equity and smart contracts ... to name just a few.
I believe it's possible that over-regulation could cause Bitcoin or the blockchain to never achieve its full potential and remain a feature of the side-economy, much in the same way that the Tor anonymizing system is extremely valuable to people who really need privacy but not really used by "normal people"... yet.
What helped make the Internet successful was the lack of regulation and the generally inclusive and permissionless nature of innovation. This was driven in large part by free and open source software and the venture capital community. The question I have is whether this becomes a completely different game now that we're talking about "money" and not "content," now that we seem to be innovating at a much higher speed (venture capital investment in Bitcoin is outpacing early Internet investments), now that the dialog in popular media is growing, and now that governments are very interested in Bitcoin. I think ideas like the five-year moratorium on Bitcoin regulation proposed by US Representative Steve Stockman are a good idea. We really have no idea what this whole thing is going to turn into, so a focus on dialog versus regulation is key.
I also believe that layer unbundling and innovation at each layer, assuming that the other layers will sort themselves out, is a good idea. In other words, exchanges and wallets that are coin-agnostic or experiments with colored coins, side chains and other innovations that are "unbundled" as much as possible allow the learnings and the systems created to survive regardless of exactly how the architecture turns out.
It feels a lot to me like when we were arguing over Ethernet and Token Ring -- for the average user, it doesn't really matter which we end up with as long as in the end it's all interoperable. What's different is that there is more at stake and it's moving really fast, so the shape of failure and the cost of failure might be much more severe than when we were trying to figure out the Internet, and a lot more people are watching.
A few weeks ago, Stewart Brand emailed me and asked if I was still playing World of Warcraft and if I had read DAEMON. I was still playing World of Warcraft and hadn't read DAEMON. A few days later, thanks to Amazon, I was reading DAEMON.
Years ago, I remember thinking about Multi User Dungeons (MUDs) and how much they affected people in the real world. I knew people who were obsessed with MUDs, the first multi-user online role-playing games and the precursors of today's MMORPGs. I was obsessed. (I think the first time I ever appeared in Wired was in 1993, when Howard Rheingold with Kevin Kelly wrote about MUDs and mentioned my obsession.) In MUDs, people got married, people got divorced, people lost their jobs, people shared ideas... The MUDs I played touched the real world through all of the people in the game.
Unlike World of Warcraft and more like Second Life, MUDs allowed players to create rooms, monsters and objects. When you entered a MUD, it was like entering the collective intelligence of all of the people who played the game. There were quests that were designed by people using their knowledge of Real Life™. Playing in their worlds was like walking through their brains. These worlds merged and collided as people from everywhere collaborated in creating MUDs of various themes with various objectives.
At some point in the evolution of MMORPGs, MUDs forked and we ended up with most of the people who liked creating objects and worlds in places like Second Life where, while you CAN make games, most of what happens is world creation. The "gamers" ended up in games like World of Warcraft where the game play aspect has been honed to a fine art, but the player content creation aspect has been completely lost. (Although most of the developers are former obsessive players.)
What I envisioned back when I was playing and hacking MUDs more was that if you turned the world a bit inside out and imagined that YOU were the MUD, the people who played your game were like little pawns or interfaces for you in the real world. They inputted content and created worlds and taught you about the real world. They promoted you to their friends. They played obsessively increasing experience points and commitment to the game so that they would forever feed you and keep you alive. They would set up servers and pay for hosting just to feed their obsession and protect their investment. If you became extremely popular, a group of your players would spawn a new MUD with your DNA-code and there would be another one of you.
The hardcore players would hack your open source code and keep you evolving. The Wizards would educate and add character to each instance of your code. The players would be your footprint in Real Life™.
When most of the gamers moved to corporation owned closed source games designed by a team of developers, I stopped having this dream. The games were no longer "alive" in the same way I had envisioned them evolving.
After reading DAEMON, this dream is back. Leinad Zeraus depicts a world where a colossal computer daemon designed by a genius MMO designer begins to take over the world after his death. In many ways, the vision is similar to the vision I had, but the author adds a macabre twist and many more orders of magnitude to make this one of the most inspiring books I've read in a long time. The author is "an independent systems consultant to Fortune 100 companies. He has designed enterprise software for the defense, finance and entertainment industries." He uses his experience to make the book extremely believable and realistic and still mind-blowing.
It was super fun to read and is a book I'd recommend to anyone who loves the Net and gaming. I'd also recommend it to anyone who doesn't. It's a great book to learn about the importance of understanding all of this - before it's too late.
Yesterday, we started planning our veggie garden and started a compost bin. I'm trying to figure out what percentage of my total food intake I can grow at home. We have a relatively large yard by Japanese standards so most of this will be a matter of personal energy. I'm going to start small this year but try to increase my nutritional independence from commercial networks every year.
My goal is to be able to cover nearly all of our fertilizer needs through the composting of all of our biodegradable garbage this year.
Thinking through the various scenarios, I realized that I could significantly reduce inputs and outputs from our house by going this route. When I imagine walking over to the garden every morning, picking my veggies, then chucking the waste into the compost bin, I get a happy feeling inside. I realize this is pretty simple and not so significant, but "just add water and sunlight" is very appealing.
I think that I can also make a significant impact on my energy inputs through photovoltaics and maybe some day get off of the power grid. This requires a larger financial investment, but it is an area I've already done a bit of work in from my time at ECD.
In my lab/office/Tokyo pad we just finished setting up (thanks to the folks at WIDE) a dark fiber connection to the WIDE box at the Japanese Internet exchange. It is currently a 1G connection. WIDE is a research project and I'm only paying for the dark fiber. WIDE is routing for me. I am not going through a single licensed telecom provider for my Internet connectivity. Consequently, going from 1G to 10G is just a matter of buying more hardware and has no impact on the running cost. More bandwidth is just about more hardware. The way it SHOULD be.
It's exciting to think about making my footprint smaller and smaller in nutrition and energy and thinking about nutrition, energy and bandwidth more and more as assets that I operate rather than services from big companies.
I was going to Twitter this as I was sitting here drinking my morning tea, but it turned into a blog post. Thanks Twitter. ;-)
News on sina.com.cn, in which China Telecom, one of the biggest ISPs in China, released an official statement (with my rough translation).

This really throws the notion of "cyberspace" into the physical world. My sympathies to everyone affected. Hope they figure out how to fix those cables quickly.
China Telecom has confirmed that, according to the China earthquake monitoring institute, on Dec 26 at 20:26-20:34 Beijing Time, earthquakes of magnitude 7.2 and 6.7 occurred in the South China Sea. Affected by the earthquakes, the Sino-US cable, Asia-Pacific Cable 1, Asia-Pacific Cable 2, the FLAG cable, the Asia-Europe cable and the FNAL cable were severed. The break point is located 15 km south of Taiwan, which severely affected international and national telecommunications in neighboring regions.
It was also reported that communications between mainland China, Taiwan, the US and Europe were all massively interrupted. Internet connections to countries and regions outside of mainland China became very difficult. Voice communication and telephone services were also affected.
China Telecom has said that due to the aftershocks, the repair work will be very difficult. In addition, undersea operations are not easy to carry out, so the outage is expected to last for some time.
I DID NOT KNOW that the Internet was a series of tubes.

AlterNet
Senator Ted Stevens: The Remix
Posted by Melissa McEwan at 6:57 AM on July 11, 2006.
Last month, Senator Ted Stevens (R-Alaska) gave a rather stunning speech on the issue of net neutrality, in which he made such clueless statements as: "I just the other day got, an internet was sent by my staff at 10 o’clock in the morning on Friday and I just got it yesterday," and "[T]he internet is not something you just dump something on. It’s not a truck. It’s a series of tubes."
Now, the good folks at Boldheaded have turned his "skillful fusion of political doublespeak and perplexing ignorance on how the Internet works" into the DJ Ted Stevens Techno Remix: "A Series of Tubes." [Stream or Download above]
All I can say is just go listen. And then laugh and laugh and laugh.
Very funny. ;-)
via Scott via Deb
I just got a new Vodafone Japan phone to mess around with the network. In particular, I'm curious about how SMS evolves or fails to evolve in Japan.
So here's what I tested. I have a T-Mobile US SIM in a Nokia phone and was able to send and receive SMSs over both the Vodafone 3G network and the NTT DoCoMo 3G network. I was able to send an SMS to my Vodafone Japan phone, but not to my NTT DoCoMo phone. However, I was NOT able to reply to the SMS. As far as I can tell, Vodafone Japan and DoCoMo disable sending SMSs to any network other than their own, but Vodafone Japan allows you to receive an SMS from outside the network. This is for people with accounts on those networks. Their networks DO allow people who roam on them to send and receive SMS freely.
I am going to Finland tomorrow so I will try to use my Vodafone Japan phone there and see if it still blocks my SMS. I have a feeling that since the SMS server is probably where they block it, that it probably won't change anything.
The good news is that the 3G networks in Japan allow 3G phones and 3G subscribers from outside of Japan to roam on the Japanese networks. The bad news is that the Japanese networks are bringing their old-fashioned closed network philosophy and crippling connectivity between their networks. How stupid.
Benetech creates technology that serves humanity by blending social conscience with Silicon Valley expertise. We build innovative solutions that have lasting impact on critical needs around the world.

Webcams and other digital communication could give ordinary people feedback on results achieved through the donation of their money and time.
This would give ordinary donors the power of oversight formerly reserved for wealthy philanthropists.
Does this hint toward disruptive digital technology undermining the NGO world with individualized philanthropy that cuts out the middlemen?
In France bloggers have been investigated by police for inciting the riots.
Also, my audiocast on the riots for the New York Times website. (My first podcast-style effort)
Blogs and sms messages were apparently used to coordinate violent action on a large scale.
What should authorities do?
Is there an alternative to censorship?
As the Web 2.0 bandwagon gets bigger and faster, more and more people seem to be blogging about it. I am increasingly confronted by people who ask me what it is. Just as I don't like "blogging" and "blogosphere", I don't like the word. However, I think it's going to end up sticking. I don't like it because it coincides with another bubbly swell in consumer Internet (the "web") and it sounds like "buzz 2.0". I think all of the cool things that are going on right now shouldn't be swept into some name that sounds like a new software version number for a re-written presentation by venture capitalists to their investors from the last bubble.
What's going on right now is about open standards, open source, free culture, small pieces loosely joined, innovation on the edges and all of the good things that WE FORGOT when we got greedy during the last bubble. These good Internet principles are easily corrupted when you bring back "the money". (As a VC, I realize I'm being a bit hypocritical here.) On the other hand, I think/hope Web 2.0 will be a bit better than Web 1.0. Both Tiger and GTalk use Jabber, an open standard, instead of the insanity of MSN Messenger, AOL IM and Yahoo IM using proprietary standards that didn't interoperate. At least Apple and Google are TRYING to look open and good.
I think blogging, web services, content syndication, AJAX, open source, wikis, and all of the cool new things that are going on shouldn't be clumped together into something that sounds like a Microsoft product name. On the other hand, I don't have a better solution. Web 2.0 is probably a pretty good name for a conference and probably an easy way to explain why we're so excited to someone who doesn't really care.
While we're labeling the web x.0: Philip Torrone jokingly mentioned to me the other day (inside Second Life) that 3D was Web 3.0. I agree. 3D and VR have been around for a long time and there is a lot of great work going on, but I think we're finally getting to the phase where they're integrated with the web and widely used. The first step for me was seeing World of Warcraft (WoW) with its 4M users and its extensible client. The only machine I have where I can turn on all of the video features is my dual-CPU G5. On my PowerBook I have to limit the video features and can't concurrently use other applications while playing. Clearly there is a hardware limit, which is a good sign, since hardware getting faster is a development we can count on.
Second Life (SL) is sort of the next step in development. Instead of trying to control all real-money and real-world relationship with things in the game like Blizzard does with WoW, SL encourages it. SL is less about gaming and more about building and collaboration. However, SL is not open source and is a venture capital backed for-profit company that owns the platform. I love it, but I think there's one more step.
Croquet, which I've been waiting for for a long time, appears to be in the final phases of a real release. Croquet, if it takes off, should let you build things like SL, but in a distributed and open source way. It is basically a 3D collaborative operating system. If it takes off, it should allow us to take our learning from WoW and SL and do to them what "Web 2.0" is doing to traditional consumer Internet services.
However, don't hold your breath. WoW blows away SL in terms of snappy graphics and response time and has a well designed addictive and highly-tuned gaming environment. Croquet is still in development and is still way behind SL in terms of being easy to use. It will take time for the more open platforms to catch up to the closed ones, but I think they're coming.
Web 3.0 is on its way! Actually, let's not call it Web 3.0.
I don't know how much deep thought was involved when George Bush called the Internet "the internets" but this reflects a real risk that we face today. If you look at the traffic of many large countries with non-English languages, you will find that the overwhelming majority of the traffic stays inside the country. In countries like China and Japan where there is sufficient content in the local language and most people can't or don't like to read English this is even more so. I would say that the average individual probably doesn't really notice the Internet outside of their country or really care about content not in their native language.
Physical mail inside these countries is delivered with addressing in the local language. It's not surprising that on the issue of Internationalized Domain Names (IDNs) there is a strong and emotional position inside these countries that people should be able to write URLs in their native scripts. Take my name, for example: the same Chinese characters for my name can be transliterated into English as either Johichi Itoh or Joichi Ito. The problem is aggravated in languages such as Chinese, where there are more dialects and many more readings for the same set of characters. Why should these people be forced to learn some sort of Roman transliteration in order to reach a company's page when they already know the official Chinese characters for its name?
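The mechanism that addresses this, IDNA with Punycode encoding, maps Unicode labels onto an ASCII "xn--" form that the existing DNS can already carry, so native-script names and the single root can coexist. A quick illustration with Python's standard library (using a German label only because its well-known encoding makes the example easy to check; the same machinery handles CJK labels):

```python
# An internationalized domain name is encoded label-by-label into an
# ASCII-compatible "xn--" form for DNS, and decoded back for display.
name = "bücher.example"            # hypothetical domain with a non-ASCII label
ascii_form = name.encode("idna")   # what actually goes into DNS queries
print(ascii_form)                  # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))   # bücher.example
```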
Similarly, there are people who don't like the policies of the Internet and either want to censor or otherwise manage differently THEIR internet. Others who don't like the way DNS works, have proposed alternative roots. This is possible and easy to do, but you end up with "the internets".
It is the fact that we have a single root and that we have global policies and protocols which allows the Internet to be a single network and allows anyone to reach anyone else in the world. Clearly, allowing anyone in the world to reach anyone else in the world with a single click introduces a variety of problems, but it creates a single global network which allows dialog and innovation to be shared worldwide without going through gateways or filters. This attribute of the Internet is a key to the future of a global democracy and I believe we need to fight to preserve this.
Since more and more people are using the Internet, there are more and more diverse views about its policies and control. This is clearly making consensus more difficult, and ICANN is one of the groups having to adapt to the increasing number of inputs in the consensus process. This is all the more reason to work harder to keep everything together. Please, let's fight to keep the Internet and not let it turn into the internets... It is a difficult process with various flaws, but if we give up, it will be very difficult, if not impossible, for all of us to keep talking to each other.
Micah Sifry has written a nice piece about why wifi and cheap broadband are essential enablers, and more important than direct aid, for communities which need help. He references various examples and sources. I completely agree. I remember speaking to a UN diplomat who said that the Internet has changed the face of global policy making. He told us that the Anti-Personnel Landmine Treaty would not have happened if it weren't for email and the ability of NGOs to get information, organize and pressure governments and the UN using the Internet. I believe that at every level it is essential to empower individuals and communities with a voice, and the Internet is in a position to enable people for the first time at a reasonable cost. It is about global voices.
I believe that it is easy enough to run a basic Wifi, Internet and Voice over IP network that in many cases municipal governments can run them. I realize this hurts competition, and this is what Verizon argued when they tried to stop Philadelphia from setting up their own Wifi network, but I think it would be better than what we have now. In many places broadband is controlled by organizations that are effectively monopolies anyway. See for example the new ruling in the US that cable companies don't have to allow others to provide access through their network. Would you rather have the network run by a monopoly that is controlled by a bunch of greedy shareholders, or by a local government that the people at least have some control over?
People will argue that allowing local governments to operate networks will stifle innovation because of lack of competition. I think that the benefit is worth the cost of providing cheaper and more universal access. The network is becoming less and less a "service" and more and more a "thing". You can buy a bunch of routers and hook them together and you have a pretty good network. You do need maintenance, but you don't need some huge company with a bunch of bell-heads running the thing. Simple access is more like a road than a full-service hotel. It just has to be cheap and work.
I agree that this isn't for all municipal governments, but I think the central governments of the world should try very hard not to give in to the pressure of the telco lobbies and stifle the attempts of municipal governments to provide network services including voice. I also believe that non-profits and NGOs can play a huge role in helping provide access in addition to municipal governments as well as helping municipal governments set up such networks.
At the Internet Association Japan meeting yesterday, the folks from Impress gave a summary of their 10th annual Internet survey.
I took notes based on a verbal presentation so there could be some mistakes. If anyone notices any, please let me know.

Impress 2005 Internet White Paper
There are 32,244,000 broadband households, which is 36.2%.
There are 70,072,000 Internet users.
72.5% of people have heard of blogs, up from 39% last year.
25% of women in their teens and 20's have blogs.
9.5% of Internet users use RSS Readers.
46.5% of Internet users have decreased spending in physical shops because of online shopping.
29.6% of offices have wifi up from 10.7% last year.
2.8% of companies have corporate blogs and over 50% express no intention of ever having corporate blogs.
5.5% of companies have corporate web pages for mobile phone users.
I'm sure most people know what BitTorrent is: the most popular peer-to-peer protocol for sharing large files. Until now you needed a "tracker" to create "torrents" and coordinate this sharing, but now you don't. This should make it even easier for people to put BitTorrent enclosures in blog entries and otherwise use BitTorrent to share files. Having said that, there are value-added trackers like Prodigem, which I'm sure people will use to charge for and otherwise track their files.
BitTorrent
BitTorrent Goes Trackerless: Publishing with BitTorrent gets easier!
As part of our ongoing efforts to make publishing files on the Web painless and disruptively cheap, BitTorrent has released a 'trackerless' version of BitTorrent in a new release.
In prior versions of BitTorrent, publishing was a 3 step process. You would:
1. Create a ".torrent" file -- a summary of your file which you can put on your blog or website
2. Create a "tracker" for that file on your webserver so that your downloaders can find each other
3. Create a "seed" copy of your download so that your first downloader has a place to download from
Many of you have blogs and websites, but don't have the resources to set up a tracker. In the new version, we've created an optional 'trackerless' method of publication. Anyone with a website and an Internet connection can host a BitTorrent download!
Although still in Beta release, the trackerless version of BitTorrent, and the latest production version are available at http://www.bittorrent.com/
This may be a bit geeky for some people, but for anyone who's been worried about how we're going to get IPv6 everywhere, this should be good news. Congratulations Earthlink R&D! I'm going to get a WRT54G router and try this out right away...

Mr Blog
Practical IPv6
We finally released a project we've been working on in EarthLink R&D for some time now. I was not the lead engineer on this project but it's perhaps one of the most exciting things we've done in R&D to date, if not the most exciting thing.
Basically it's a demonstration of a practical IPv6 migration strategy. There is a sandbox that allows users to obtain their own /64 IPv6 subnet of real routable addresses (Goodbye NAT -- YEAH!)
Here's how it works: Simply get an account at http://www.research.earthlink.net/ipv6/accounts.html to get your own personal block of 18,446,744,073,709,551,616 IPv6 addresses; install the firmware onto your standard Linksys WRT54G router, and blamo, you have IPv6. With this special code installed on your Linksys router, your IPv4 works as normal; you'll still have your NAT IPv4 LAN. But in addition to that, any IPv6 capable machine on the LAN will get a real, honest to goodness, routable IPv6 address too. It couldn't be easier. This works for Mac OS X, Linux/UNIX, as well as Windows XP. You don't have to do anything special on the machines on the LAN. They just work, as they say.
So with this code installed on the router and your IPv6 accounts setup, nothing breaks. You continue to use your LAN as before, but you suddenly also get real IPv6 addresses. Easy migration. No forklift required.
Technorati Tags: IPv6
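To get a feel for the size of one of those /64 blocks, Python's ipaddress module (just an illustration, using the reserved 2001:db8 documentation prefix) confirms the number quoted above:

```python
import ipaddress

# A /64 IPv6 subnet: the first 64 bits are the routed prefix, and hosts
# fill in the remaining 64 bits themselves -- so each subscriber gets
# 2**64 real, routable addresses. No NAT required.
subnet = ipaddress.ip_network("2001:db8:1234:5678::/64")
print(subnet.num_addresses)  # 18446744073709551616, i.e. 2**64
```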
I had dinner with Steve Crocker last night. I met him before through David Isenberg, but since he is the Security and Stability Advisory Committee Liaison to the ICANN board, I am getting a chance to hang out with him more these days. Among other things, he's well known for being the author of RFC 1.
He explained the software that his company Shinkuro produces, and I tried it today. It solves a BUNCH of needs that I had. It's basically a very cryptographically robust, cross-platform collaboration tool. It allows you to create groups and share folders of files, has a shared chat space (like IRC) and allows you to share your desktop screen with other members of the group (yes, across platforms). The shared files are transferred in the background and edits to files are sent as diffs which can be accepted into the original by the recipient. There is also standard IM with your buddy list. The great thing is that all of the traffic is encrypted: 256-bit AES with 2048-bit RSA keys. Each message is encrypted with a unique key, and the key is transmitted under the RSA key. This is very important since I know for a fact that people sniff IM and other traffic at many conferences and public places.
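That per-message-key design is the classic envelope-encryption pattern. Here's a deliberately toy-sized sketch of just the pattern (my illustration, not Shinkuro's code: a keyed-hash XOR "cipher" stands in for AES, and textbook RSA over tiny primes stands in for real 2048-bit RSA):

```python
import hashlib
import os

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Textbook RSA with tiny primes (p=61, q=53): n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753

def envelope_encrypt(plaintext: bytes):
    session_key = os.urandom(1)                               # fresh key per message (toy size)
    wrapped = pow(int.from_bytes(session_key, "big"), E, N)   # wrap key under "RSA" public key
    return wrapped, stream_xor(session_key, plaintext)

def envelope_decrypt(wrapped: int, ciphertext: bytes) -> bytes:
    session_key = pow(wrapped, D, N).to_bytes(1, "big")       # unwrap with private exponent
    return stream_xor(session_key, ciphertext)

w, ct = envelope_encrypt(b"meet at the usual place")
print(envelope_decrypt(w, ct))  # b'meet at the usual place'
```

Only the shape of the protocol is the point: an eavesdropper on the wire sees the wrapped key and the ciphertext, neither of which is useful without the recipient's private key.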
The folder in the groups is really nifty. You drop files into a folder and you can see who has received the files and see any changes that are waiting for you. This seems so much more organized than the tons of attachments and updates I receive before board meetings and conference calls.
It seems similar to Groove in some ways, but is more lightweight and most importantly cross-platform. (Mac, Windows, Linux.)
You can download it at www.shinkuro.com and for now it's free. If you register it, you will get all of the features. My id is jito!shinkuro.com if you want to invite me to be your friend or into a group. As I've said before, I think email is dead and I'm always looking for things like this that help me survive the post-email era.
I'm listening to Andrew Odlyzko giving a talk right now about why Quality of Service (QoS) and real-time streaming is stupid. He showed a slide showing that P2P and other traffic are generally transmitting files at faster speeds than their bit rates. Basically, if you cache and buffer, you can have outages in the downloads and you'll usually be fine. I agree. I can see why carriers would want to spread the rumor that QoS is some feature that we have to have, but it's strange that so many researchers seem to think we will need QoS supported video streaming. Maybe they need to stop watching cable TV.
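The buffering argument can be sketched as a toy simulation (mine, not Odlyzko's): as long as the average download rate beats the stream's bit rate, a modest startup buffer rides out outages with no QoS machinery at all.

```python
# Toy model of buffered playback: each tick, we download some amount of the
# stream and play back one second. If the buffer ever runs dry, we stall.
def playback_ok(download_kbps, bitrate_kbps, startup_buffer_s):
    buffered_s = float(startup_buffer_s)    # seconds of video in hand
    for rate in download_kbps:              # one entry per second of wall clock
        buffered_s += rate / bitrate_kbps   # seconds of video fetched this tick
        buffered_s -= 1.0                   # one second played back
        if buffered_s < 0:
            return False                    # buffer underrun: visible stall
    return True

# A 500 kbps stream over a 1000 kbps link with a 4-second total outage:
# a 3-second startup buffer absorbs it completely.
print(playback_ok([1000, 1000, 0, 0, 0, 0, 1000, 1000], 500, 3))  # True
```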
I'm getting an error that says "Your account is currently suspended" when I try to log into my joiitosk AIM account. Does anyone know what this error means and how I resolve it?
Update: Article in eWeek about this, but it doesn't say whether AOL is going to give us our accounts back. Thanks to Cours for the link.
If this is true, this is VERY bad behavior. 5060 is the port that SIP uses. I can understand why a phone company wouldn't want "free phone calls over the Internet" running on their system, but this is exactly the kind of behavior that makes Internet folks dislike telephone company control.

David Beckemeyer
BT appears to be blocking third-party VoIP
I've been biting my tongue on this since I first ran across it several months back. But now I have to say something. If someone can prove me wrong on this, fine, I'll post a retraction, but now I'm going to say it: British Telecom appears to be explicitly blocking VoIP for their DSL subscribers.
I've worked with an associate to examine the situation and all signs point to an explicit blocking of VoIP. In Cisco ACL-speak, it appears there is a rule somewhere in the BT network being applied to inbound packets of the form:

deny udp any eq 5060 any
Can anyone else corroborate this fact?
VoIP stands for "Voice over IP" and SIP is the open standard "Session Initiation Protocol" used to set up calls over the Internet
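One hedged way a subscriber could gather a data point (my own sketch, not David's tooling): send a minimal SIP OPTIONS request to a known SIP server's UDP port 5060 and see whether anything comes back. Silence is consistent with filtering, but could also just mean nothing is listening, so it's evidence, not proof.

```python
import socket

def build_sip_options(host: str) -> str:
    # A minimal, syntactically plausible SIP OPTIONS request (placeholder
    # addresses; a real client would use its own IP and a random branch/tag).
    return (
        f"OPTIONS sip:probe@{host} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP 0.0.0.0:5060;branch=z9hG4bK-probe\r\n"
        f"From: <sip:probe@0.0.0.0>;tag=probe\r\n"
        f"To: <sip:probe@{host}>\r\n"
        f"Call-ID: probe@0.0.0.0\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

def probe_udp_5060(host: str, timeout: float = 3.0) -> bool:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_sip_options(host).encode(), (host, 5060))
        sock.recvfrom(4096)   # any reply means port 5060 traffic got through
        return True
    except socket.timeout:
        return False          # no reply: possibly filtered, possibly nobody home
    finally:
        sock.close()
```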
UPDATE: Looks like it is a customer router issue, but still may be BT driven. Update on Mr. Blog.
I'm sitting in the Italian Parliament (I think.) The panel I was on was dealing with the impact of digital/Internet on content creation and distribution. It started yesterday and continued today. I think it lasted about seven hours or so in total. I found myself in violent disagreement at the beginning because they kept talking about piracy. The interesting thing about this panel (probably more common in other cultures, but new for me) was that we had to come to a written consensus by the end of the session and present it in the Parliament building. It would then be distributed to politicians across Europe as a recommendation.
I found myself negotiating like some UN diplomat.
In the end, here is where we ended up on a few of my "hot buttons".
We agreed that organized, for-profit, commercial piracy is different from P2P file sharing by individuals. We could not agree on the impact of P2P file sharing, but we agreed that punishing file sharing was not the only or best way to deal with the issue. I pushed for a stronger stance, my position being that, as Chris Anderson says in The Long Tail, it's a matter of price and convenience: people will pay if the experience is better. That was not included in the statement; "education" was used instead. Blah. I just made a statement that I disagree with this and that there is not enough evidence that P2P file sharing of music is really bad for the music industry.
It appeared that people had a VERY bad image of Creative Commons. For some reason they thought that CC was trying to force people to share and was anti-copyright. I explained that CC was built upon copyright and was trying to help artists choose their copyright.
This part turned out quite well in the statement. They said that CC was a tool, not to steal from artists, but to give them the choice to share and to lower the parasitic (legal) costs of choosing a license. They concluded that CC was NOT a threat as they had originally envisioned, but complementary and a good thing. The tone was very pro-artist and less tolerant of distributors; the idea of giving more control to artists seemed to be quite attractive.
I'm about to have a chance to object to some of the issues I see in the statement and give an address about my thoughts. I'm going to talk about the value of the Long Tail and Creative Commons.
I've just been nominated to the board of ICANN (Internet Corporation For Assigned Names and Numbers) and will be officially joining already seated members at the conclusion of the ICANN Meeting in Cape Town, South Africa, December 1-5. ("Nominated" technically because I officially join in December, but the selection process is completed.)
This is the end of a two or so year process of people telling me I should get involved and others warning me against it. Some of my wisest advisors urged me not to join saying things like, "you will make 3 mistakes in your life... this is one of them..." or "friends don't let friends do ICANN."
ICANN has its share of problems and a negative image associated with it in many circles. I've even taken my fair share of cheap shots at ICANN.
I am joining ICANN for two reasons: ICANN is changing, and it's critical that ICANN succeed.
I've talked on the phone with, and met, a great number of people involved in ICANN in a variety of capacities. I realized that ICANN today is not what ICANN was a few years ago. Please reset your biases and pay attention to what they are doing. Yes, there are still problems, but they are being addressed by an extremely committed team of people who are doing amazing work. Also, take a look at the board. It's very geographically and professionally diverse. It's not some puppet of the US or special interests.
Why is ICANN important? If ICANN is not successful in proving that it can manage some of the critical elements of the Internet such as the name space and IP addresses, ICANN will be dissolved and the ITU will step in. Why would that be bad? I am generally in favor of multi-lateral approaches, but in the case of the ITU, I believe it is biased towards the telephone monopolies. The ITU was built by telcos to set technical standards for telcos. That suits the telephone system architecture, which is highly centralised and is structured as a patchwork of geographic monopolies. The Internet is decentralised, and there are many small companies and individuals working at the peripheries to develop new applications for the overall network. The governance process has to reflect the diversity and the needs of these companies, as well as the needs of the network providers.
I believe that many of the things that ICANN is doing are important, but the single biggest factor leading to my decision to try to participate in ICANN is to try to prove that the people of the Internet can govern themselves without direct involvement from nation-states and to try to help build an organization that can deliver that promise.
People have been reporting about the FBI ordering a hosting provider, Rackspace, with offices in the US and the UK, to seize at least two servers from Indymedia's UK datacenter. Indymedia is a well known edgy alternative news site which was established to provide grassroots coverage of the WTO protests in Seattle. It has grown into a multinational resource for some hardcore journalism, including a lot of work on the Diebold and Patriot Act issues. The reports, as well as Indymedia's page on this story, say that the FBI has not provided a reason for the seizure to Indymedia. The statement from Rackspace says:
In the past, Indymedia has done stuff like posting photos of undercover police officers. However, according to Indymedia, the "FBI asked for the Nantes post on swiss police to be removed, but admitted no laws were violated". This time the FBI has not told them what they've done wrong, and Rackspace is under a gag order so they can't even tell Indymedia exactly what hardware they removed.

Rackspace
In the present matter regarding Indymedia, Rackspace Managed Hosting, a U.S. based company with offices in London, is acting in compliance with a court order pursuant to a Mutual Legal Assistance Treaty (MLAT), which establishes procedures for countries to assist each other in investigations such as international terrorism, kidnapping and money laundering. Rackspace responded to a Commissioner's subpoena, duly issued under Title 28, United States Code, Section 1782 in an investigation that did not arise in the United States. Rackspace is acting as a good corporate citizen and is cooperating with international law enforcement authorities. The court prohibits Rackspace from commenting further on this matter.
This implies that some non-US entity had the FBI force an action in the UK under MLAT, which means that Indymedia is suspected of engaging in international terrorism, kidnapping or money laundering. I've seen some extreme reporting on Indymedia, but terrorism, kidnapping or money laundering? I guess the definition of "terrorism" has been expanded to meet popular demand these days, but come on... really?
This reminds me of toywar. etoy is a group of Swiss artists established in 1994 who won a Golden Nica from my Ars Electronica jury in 1996. Later, eToys, a toy retailer founded in 1998, tried to take the etoy.com domain by force. They got a temporary injunction against the web site because a judge in LA agreed that it was confusing to customers of eToys. Network Solutions complied, and went beyond the call of duty and shut down etoy.com email as well for good measure. So Swiss artists can be sued in a US court and have their email shut down by a US registrar.
My point is, be careful where your data lives...
UPDATE: nyc.indymedia.org is speculating that it is because Indymedia published the identities of the RNC delegates.
UPDATE2: It appears that maybe it wasn't the RNC, but the photos of the police officers according to Cryptome.
UPDATE3: imajes has written a letter to his MPs. Maybe others should do the same.
Dr. Mark Petrovic and David Beckemeyer at Earthlink R&D have developed a proof of concept P2P application using SIP called SIPshare. SIP stands for Session Initiation Protocol and is one of the key technologies for the open standards around Voice over IP (VoIP). This application is pure P2P use of SIP. It is completely decentralized. According to David Beckemeyer this project is quite important.
SIP has been waylaid by regulatory and execution problems in the past and many people have written it off as a non-starter. I'm seeing more and more companies who are actually using it for cool stuff and proving that it's ready for prime time now. If you've written off SIP and haven't taken a look at what people are doing with it for the last six months, I suggest you take another look.
David Beckemeyer, in email:
This may not sound like that big of a deal, as file sharing has been done, but I think this is a really big event. It's not about file sharing. Nobody is really going to use the demo app Mark built. It's about demonstrating that pure P2P can be done over SIP and that SIP is about more than just voice and video.
In some sense, the SIP wars to me are about sneaking some aspects of the original "stupid network" back into the NAT hell we've created. If we can do what it takes to get NAT boxes to support SIP (be consistent in how they do NAT so the edges can use STUN et al), then we have reclaimed the ability to have individually addressable nodes, using SIP almost as the new IP network. This may be getting carried away, but anyway...
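For flavor, here's a minimal sketch of the classic STUN (RFC 3489) wire format that lets an edge node behind NAT learn its public address. The attribute layout follows the RFC, but the example is offline; no real server is queried and the address is made up.

```python
import os
import struct

BINDING_REQUEST = 0x0001
MAPPED_ADDRESS = 0x0001  # attribute type in classic STUN (RFC 3489)

def build_binding_request():
    # 20-byte header: message type, message length, 128-bit transaction ID
    return struct.pack("!HH", BINDING_REQUEST, 0) + os.urandom(16)

def parse_mapped_address(attrs):
    # Walk the TLV-encoded attributes looking for MAPPED-ADDRESS
    i = 0
    while i + 4 <= len(attrs):
        atype, alen = struct.unpack_from("!HH", attrs, i)
        if atype == MAPPED_ADDRESS:
            _pad, family, port = struct.unpack_from("!BBH", attrs, i + 4)
            ip = ".".join(str(b) for b in attrs[i + 8:i + 12])
            return ip, port
        i += 4 + alen
    return None

# What a server's MAPPED-ADDRESS attribute for 192.0.2.5:3478 looks like:
attr = struct.pack("!HHBBH", MAPPED_ADDRESS, 8, 0, 0x01, 3478) + bytes([192, 0, 2, 5])
print(parse_mapped_address(attr))  # ('192.0.2.5', 3478)
```

In real use you'd send `build_binding_request()` over UDP to a STUN server and run `parse_mapped_address` over the attribute bytes of the response.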
via David Beckemeyer
I want to start playing with BitTorrent and integrating it into blogging more. I think I need a BitTorrent tracker. Can anyone recommend a respectable public tracker, or does anyone have a machine they'd be willing to run a public tracker on? I want to experiment with a variety of legal uses of BitTorrent.
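As background on what a tracker actually does: it answers a client's /announce request with a bencoded dictionary containing a re-announce interval and a list of peers. A toy sketch of that encoding (the peer address here is invented):

```python
def bencode(obj):
    # Minimal bencoding: ints, byte strings, lists, dicts with sorted keys
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        return b"d" + b"".join(
            bencode(k) + bencode(v) for k, v in sorted(obj.items())
        ) + b"e"
    raise TypeError(obj)

def announce_response(peers, interval=1800):
    # A tracker's reply to /announce: when to re-announce, plus a peer list
    return bencode({
        b"interval": interval,
        b"peers": [{b"ip": ip.encode(), b"port": port} for ip, port in peers],
    })

print(announce_response([("198.51.100.7", 6881)]))
```

A real tracker also records each announcing client so it can hand that list back to the next peer that asks; this sketch only shows the wire format.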
Today, an associate professor at Tokyo University, the most prestigious university in Japan, was arrested for developing a tool that enables piracy. The program is a P2P system called Winny. Previously, two of its users had been arrested. I got a call from the Asahi Shimbun (a Japanese newspaper) today asking me for a comment for tomorrow's morning news. I hope they print it. I think it's an absolute disgrace to Japan. While the US is fighting in Congress, with Hollywood pushing to ban P2P and Boucher et al fighting for the DMCRA, Japanese police go and arrest someone for developing P2P software on a VERY sketchy case. The thing is, it's quite likely he will be found guilty.
I once served as an expert witness in the FLMASK case. FLMASK was a program that allowed password-protected scrambling of areas of an image, so that porn sites could post pictures that passed the Japanese censors but let users unscramble them. The police were so upset that they cracked down on the hardcore porn sites with the argument that even FLMASK'ed "clean" images would be deemed hardcore. The problem was, this left the developer of FLMASK free from claims that his software enabled anything illegal. So they busted him for LINKING to the porn sites that got busted as users of his software. They deemed linking to a porn site the same as actually running a porn site. I was the chairman of Infoseek Japan at the time, so I obviously had a lot to say about that. The amazing thing is... after overwhelming evidence of the stupidity of the allegations, the guy was found guilty.
Anyway, Japan is yet again leading the world in stupid Internet policing.
Here are some thoughts on where I think things are going in the mobile and content space.
Several crucial shifts in technology are emerging that will drastically affect the relationship between users and technology in the near future. Wireless Internet is becoming ubiquitous and economically viable. Internet capable devices are becoming smaller and more powerful.
Alongside technological shifts, new social trends are emerging. Users are shifting their attention from packaged content to social information about location, presence and community. Tools for identity, trust, relationship management and navigating social networks are becoming more popular. Mobile communication tools are shifting away from a 1-1 model, allowing for increased many-to-many interactions; such a shift is even being used to permit new forms of democracy and citizen participation in global dialog.
While new technological and social trends are occurring, it is not without resistance, often by the developers and distributors of technology and content. In order to empower the consumer as a community member and producer, communication carriers, hardware manufacturers and content providers must understand and build models that focus less on the content and more on the relationships.
Computing started out as large mainframe computers, with software developers and companies "time sharing" for slices of computing time on the large machines. The mini-computer was cheaper and smaller, allowing companies and labs to own their own computers. The mini-computer gave a much greater number of people access to computers and even let them use them in real time. The mini-computer led to a burst in software and networking technologies. In the early 80's, the personal computer increased the number of computers by an order of magnitude and again led to an explosion in new software and technology while lowering the cost even more. Console gaming companies proved once again that unit costs could be decreased significantly by dramatically increasing the number of units sold. Today, we have over a billion cell phones in the market. There are tens of millions of camera phones. The incredible number of these devices has continued to lower the unit cost of computing, as well as the cost of components embedded in these devices, such as small cameras. High-end phones have the computing power of the personal computers of the 80's and the game consoles of the 90's.
History repeats with WiFi
There are parallels in the history of communications and computing. In the 1980’s the technology of packet switched networks became widely deployed. Two standards competed. X.25 was a packet switched network technology being promoted by CCITT (a large, formal international standards body) and the telephone companies. It involved a system run by telephone companies including metered tariffs and multiple bilateral agreements between carriers to hook up.
Concurrently, universities and research labs were promoting TCP/IP and the Internet: loosely organized standards meetings, networks operated with flat-rate tariffs, and little or no formal agreement between the carriers. People just connected to the closest node and everyone agreed to freely carry traffic for others.
There were several “free Internet” services such as “The Little Garden” in San Francisco. Commercial service providers, particularly the telephone company operators such as SprintNet tried to shut down such free services by threatening not to carry this free traffic.
Eventually, large ISPs began providing high quality Internet connectivity, and finally the telephone companies realized that the Internet was the dominant standard and shut down or acquired the ISPs.
A similar trend is happening in wireless data services. GPRS is currently the dominant technology among mobile telephone carriers. GPRS allows users to transmit packets of data across the carrier network to the Internet. One can roam to other networks as long as the mobile operators have agreements with each other. Just like in the days of X.25, the system requires many bilateral agreements between the carriers; their goal is to track and bill for each packet of information.
Competing with this standard is WiFi. WiFi is just a simple wireless extension to the current Internet, and many hotspots provide people with free access to the Internet in cafes and other public areas. WiFi service providers have emerged, while telephone operators, such as T-Mobile and Vodafone, are capitalizing on paid WiFi services. Just as with the Internet, network operators are threatening to shut down free WiFi providers, citing violations of terms of service.
Just as with X.25, the GPRS data network and the future data networks planned by the telephone carriers (e.g. 3G) are crippled with unwieldy standards bodies, bilateral agreements, and inherently complicated and expensive plant operations.
It is clear that the simplicity of WiFi and the Internet is more efficient than the networks planned by the telephone companies. That said, the availability of low cost phones is controlled by mobile telephone carriers, their distribution networks and their subsidies.
Content vs Context
Many of the mobile telephone carriers are hoping that users will purchase branded content manufactured in Hollywood and packaged and distributed by the telephone companies using sophisticated technology to thwart copying.
Broadband in the home will always be cheaper than mobile broadband. Therefore it will be cheaper for people to download content at home and use storage devices to carry it with them rather than downloading or viewing content over a mobile phone network. Most entertainment content is not so time sensitive that it requires real time network access.
The mobile carriers are making the same mistake that many of the network service providers made in the 80s. Consider Delphi, a joint venture between IBM and Sears Roebuck. Delphi assumed that branded content was going to be the main use of its system and designed the architecture of the network to provide users with such content. Instead, the users ended up using it primarily for email and communications, and the system failed to provide those services effectively because of the mis-design.
Similarly, it is clear that mobile computing is about communication. Not only are mobile phones being used for 1-1 communications, as expected through voice conversations; people are learning new forms of communication because of SMS, email and presence technologies. Often, the value of these communication processes is the transmission of "state" or "context" information; the content of the messages is less important.
Copyright and the Creative Commons
In addition to the constant flow of traffic keeping groups of people in touch with each other, significant changes are emerging in multimedia creation and sharing. The low cost of cameras and the near television-studio-quality capability of personal computers have caused an explosion in the amount and quality of content being created by amateurs. Not only is this content easier to develop, people are using the power of weblogs and phones to distribute their creations to others.
The network providers and many of the hardware providers are trying to build systems that make it difficult for users to share and manipulate multimedia content. Such regulation drastically stifles the users’ ability to produce, share and communicate. This is particularly surprising given that such activities are considered the primary “killer application” for networks.
It may seem unintuitive to argue that packaged commercial content can co-exist alongside consumer content while concurrently stimulating content creation and sharing. In order to understand how this can work, it is crucial to understand how the current system of copyright is broken and can be fixed.
First of all, copyright in the multimedia digital age is inherently broken. Historically, copyright worked because it was difficult to copy or edit works and because only a few people produced new works over a very long period of time. Today, technology allows us to find, sample, edit and share very quickly. The problem is that the current notion of copyright is not capable of addressing the complexity and the speed of what technology enables artists to create. Large copyright holders, notably Hollywood studios, have aggressively extended and strengthened their copyright protections to try to keep the ability to produce and distribute creative works in the realm of large corporations.
Hollywood asserts "all rights reserved" on works that it owns. Sampling music, having a TV show running in the background of a movie scene, or quoting lyrics to a song in a book about the history of music all require payment to and negotiation with the copyright holder. Even though the Internet makes it possible to build a wide palette of wonderful works based on content from all over the world, current copyright practices forbid most such creation.
However, most artists are happy to have their music sampled if they receive attribution. Most writers are happy to be quoted or have their books copied for non-commercial use. Most creators of content realize that all content builds on the past and the ability for people to build on what one has created is a natural and extremely important part of the creative process.
Creative Commons tries to give artists that choice. By providing a more flexible copyright than the standard "all rights reserved" copyright of commercial content providers, Creative Commons allows artists to attach a variety of rights to their works, including permission to reuse for commercial purposes, copy, sample, or require attribution. Such an approach allows artists to decide how their work can be used, while providing people with the materials necessary for increased creation and sharing.
Creative Commons also provides for a way to make the copyright of pieces of content machine-readable. This means that a search engine or other tool to manipulate content is able to read the copyright. As such, an artist can search for songs, images and text to use while having the information to provide the necessary attribution.
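"Machine-readable" is concrete here: Creative Commons marks the license link in a page's HTML with rel="license", which any tool can pick out. A minimal sketch of such a tool (the sample page and its author are invented):

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect href targets of links marked rel="license", the convention
    Creative Commons uses to make a page's license machine-readable."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "a" and "license" in (d.get("rel") or "").split():
            self.licenses.append(d.get("href"))

page = '''<p>Photo by a hypothetical author, licensed under
<a rel="license" href="https://creativecommons.org/licenses/by/2.0/">CC BY</a>.</p>'''
finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)  # ['https://creativecommons.org/licenses/by/2.0/']
```

A search engine doing this at crawl time is all it takes to let an artist filter results down to works they're allowed to sample.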
Creative Commons can co-exist with the stringent copyright regimes of the Hollywood studios while allowing professional and amateur artists to take more control of how much they want their works to be shared and integrated into the commons. Until copyright law itself is fundamentally changed, the Creative Commons will provide an essential tool to provide an alternative to the completely inflexible copyright of commercial content.
Content is not like some lump of gold to be hoarded and owned, which diminishes in value each time it is shared. Content is a foundation upon which community and relationships are formed. Content is the foundation for culture. We must evolve beyond the current copyright regime, which was developed in a world where the creation and transmission of content was unwieldy and expensive, reserved for those privileged artists who were funded by commercial enterprises. This will provide the emerging wireless networks and mobile devices with the freedom necessary for them to become the community-building tools of sharing that is their destiny.
David Isenberg blogs about the "Bellhead" background of Roy Neel, Howard Dean's new campaign manager.
Sony and Docomo have announced that they are working together to put contactless IC chips in phones. Sony's FeliCa (a type C contactless IC chip) is slowly becoming a de facto standard in Japan. (The government is backing a different standard, type B.) Currently Japan Railways, AM/PM and others are using it for payments. Many companies use it for company IDs. The problem is that you can't see how much money is left on your card, and it's a pain to "charge" the card with more money. Putting it on a phone lets you download money from your bank and see how much is left. I worry about the privacy and security issues, but connecting an RF payment system with a phone totally makes sense.
I have a theory that Docomo has to become an identity/payment company and dump the voice and other bit-pushing businesses and go flat rate or free on the network. Docomo should buy a credit card company and use the bit-pushing business as a stick when collecting money. There are some regulations regarding payment businesses that make it difficult, but I'm sure the government would waive this if there was enough of a social need. Right now, the transaction business that credit card companies do doesn't make money. This has driven credit card companies to become loan companies that lobby the government to allow them to charge crazy interest rates. These interest rates cause people to end up in debt hell and commit suicide. If Docomo replaced credit cards as the primary non-cash transaction, credit system and could use network service termination to lower the collection costs, I bet they could make enough money on the transaction business to cover the bit-pushing.
Docomo is Japan's biggest mobile carrier that does about $8B / yr in data revenues.
Dan Gillmor writes about how censorware blocks his site. It's blocking mine too.
Dan Gillmor:
Simon Phipps alerts me that one of the big censorware outfits, SurfControl, is blocking this and other blogs as a default setting for some customers. He points to Jon Udell's report of a surrealistic conversation with a company salesdroid upon his own such discovery. Good grief.
SurfControl puts all blogs under Usenet, a fairly bizarre characterization of the genre, but par for the course for the censorware mavens. They tend to sweep big categories into their filter, and then let you try to find your own way to escape.
Speaking of false positives, I'm also against blacklists because they can also cause false positives that are difficult to correct. Smartmobs was blacklisted by Verio and it took Roland two months of hell to get it sorted out.
I know I use a blacklist for my comment filtering. It's a stop-gap measure until someone figures out a better solution.
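For the curious, the stop-gap amounts to something like this (the patterns are made up for illustration). A pattern that's too broad is exactly how the false positives described above happen:

```python
import re

# Hypothetical blacklist patterns; real lists are much longer and messier
BLACKLIST = [r"cheap-pills\.example", r"casino-payouts\.example"]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLACKLIST]

def allow_comment(text):
    # Reject any comment whose body matches a blacklisted pattern
    return not any(p.search(text) for p in PATTERNS)

print(allow_comment("Great post, more at http://cheap-pills.example/"))  # False
print(allow_comment("I agree about the broadcast flag."))                # True
```

The weakness is the same as Verio's: once a legitimate site matches a pattern, getting off the list is the hard part.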
If this entry is cryptic to you, you need to learn more about the broadcast flag and why it is bad. Click on the links.
I was noodling around trying to organize "the space" in my head and put this picture together. The x axis is "context"; i.e., low context is stuff like CDs and books which don't change, are worth approximately the same amount to most people, and don't have much timing or personal context. The far right is very personal, very timing-sensitive, high-context information, such as information about your current "state". Then there is everything in between. The top layer is the type of content, sorted by how much context it involves. The next layer is how they are aggregated and syndicated. Below that are substrates that are currently segmented vertically, but could be unified horizontally with open standards. Anyway, just a first pass. Thoughts and feedback appreciated.
UPDATE: Changed color to red and edited the examples to be brand agnostic.
This sounds really bad. How can a company that tries to sell trust act in such a blatantly untrustworthy way...
In response to a question, he basically indicates that ICANN doesn't have the power to keep VeriSign from doing what it's done. The company will have a dialogue with whoever wants to talk, but it plans to "reintroduce" Site Finder.
I think VeriSign has already won the key part of this war. It has persuaded reporters to call Site Finder a "service" instead of what it truly is, a misuse of its monopoly.
Andrew Fried, Senior Special Agent for the US Treasury Department, posted this on the NANOG list regarding VeriSign and the SiteFinder thing. Very cool that someone "patched" BIND to fix the "bug". Also very cool that someone like Andrew is speaking in his own voice in a public forum about this issue.
Andrew Fried:
I have been following the various threads relating to Verisign and wanted to make one comment that I feel has been missing. Simply put, I would like to publicly express my appreciation to Mr. Vixie for taking the time to add the "root-delegation-only" patch for Bind. I'm fairly new to NANOG, but I'm sure that others besides myself also feel a thank you is appropriate.
If I were Microsoft I would probably like micro-content and metadata. IE and the browser wars were the pits for them. They should hate html by now. Microsoft also hates Google. Google hates metadata. Google likes scraping html, mixing it with their secret sauce and creating the all-mighty page ranking. Anything that detracts value from this rocket science or makes things complicated for Google or easy for other people is probably a bad thing for Google.
If the Net started to look more and more like XML based syndication and subscriptions with lots of links in the feeds to metadata and other namespaces, it would be more and more difficult to create page ranking out of plain old html.
My guess is that Microsoft knows this and intends to be there when it happens instead of totally missing it at the beginning like when the Internet got started. I have a feeling they will embrace a lot of the open standards that we are creating in the blog space now, but that they will add their usual garbage afterwards in the name spaces and metadata so that at the end of the day it all turns funky and Microsoft.
Just a thought...
I blogged earlier about SiteFinder and everyone agreed it was a "bad thing." VeriSign just got sued for it.
Reuters:
VeriSign Sued Over Controversial Web Service
Thu September 18, 2003 09:13 PM ET
SAN FRANCISCO (Reuters) - An Internet search company on Thursday filed a $100 million antitrust lawsuit against VeriSign Inc., accusing the Web address provider of hijacking misspelled and unassigned Web addresses with a service it launched this week.
Thanks for the link Peggy!
I wonder if someone at Verisign thought this was a clever hack. It's stupid stuff like this that makes it very clear to everyone that Verisign is in a position to abuse their power.
dejah420 @ MetaFilter:
Verisign modifies the infrastructure of the net to point back to themselves. Verisign has rigged all .com and .net mistyped domains to reroute to their branded search page. This makes them effectively the biggest cybersquatter on the net, and will make it impossible for most spam filters at the network level to operate as well as seriously complicating the lives of network administrators everywhere. posted by dejah420 at 8:07 PM PST
Two of my emails to ado got blocked by SpamAssassin today. According to the SpamAssassin message, my server was an open relay. I asked about this on #joiito, and crysflame pointed to an article that explains that Osirusoft, which SpamAssassin uses to check for open relays, is broken. "Apparently, after having been DDOS'ed, the Osirusoft people have 'given up the ghost' and are now returning back every IP as a spam source when queried!"
So if you want to get mail from me, please reconfigure SpamAssassin as explained on the use PERL; site.
UPDATE: The Inquirer has an article about this.
via Scott Mace
Internet News:
Report: ISPs Block 17 Percent of Legit E-mail
By Brian Morrissey
Top Internet service providers blocked 17 percent of legitimate permission-based e-mail in the first half of the year, according to a report issued by Return Path.
I pronounce email officially broken. If 17 percent of legit email is being blocked by spam filters, it's officially not working. No wonder I'm using blogs, IRC and IM as my primary modes of connecting with important people these days.
I don't care what excuses people give. The people who made smtp should have thought more about host authentication and the people who made IPv4 should have made longer IP addresses. My guess is that there were people who were voicing concerns who had more vision.
I have a feeling we are going to be kicking ourselves in the same way when we realize we "forgot" to put privacy into ID systems.
Mitch Kapor and Tim O'Reilly are among advisory board members of Nutch, a new open source search engine project which will try to:
- fetch several billion pages per month
- maintain an index of these pages
- search that index up to 1000 times per second
- provide very high quality search results
- operate at minimal cost

Sounds good to me!
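The heart of any such engine is an inverted index mapping terms to the documents that contain them. A toy sketch, leaving out everything (ranking, compression, sharding) that makes real engines fast at scale:

```python
from collections import defaultdict

class TinyIndex:
    """A toy inverted index: the core structure behind a search engine."""
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        # AND query: intersect the posting lists of every query term
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

idx = TinyIndex()
idx.add(1, "open source search engine")
idx.add(2, "open standards for the web")
print(sorted(idx.search("open search")))  # [1]
```

Searching becomes a set intersection over precomputed posting lists, which is why an index can answer a thousand queries a second while a raw scan of billions of pages never could.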
John Battelle at Business 2.0 says, "Watch Out, Google".
Yesterday, I had dinner with Robert Kaye. He is the founder of MusicBrainz, a metadata project that is creating a database of album artist, title and track information, similar to what CDDB did before it became a corporation. Many people were upset by CDDB's move to use the commons created by the community for commercial purposes. Robert was so angry with this betrayal of the community that he started MusicBrainz. MusicBrainz will be set up as a non-profit, and Robert swears that he will never "sell out". In fact, we talked about using some sort of emergent democracy that would allow the users to force a shift in control in the event that something like this were to happen. We talked about the value of escrow agents holding, perhaps, the DNS and domain name, with some sort of tool to allow the users to discuss and trigger a shift in control. This could be a way to force projects like this to stick to their original principles and help build trust at the same time.
Robert seemed like an extremely dedicated, smart and visionary guy and I think his focus and commitment to deliver this service is extremely admirable.
His service is unique in many ways. He is using a sound-fingerprint key method to identify the songs. (He got beat up a bit on Slashdot because he was using patented technology for this, but I think this is fine. He can always switch later if someone decides to make an open source version.) Basically, his client software scans all of your mp3s, looks them up in his database and fixes all of your bad tags. If you have data that isn't in his database, you can submit it. It is a much more automatic and viral approach than what CDDB does.
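The mechanics are roughly: compute a fingerprint from the audio itself rather than the tags, and use it as the lookup key into a shared catalog. This sketch uses a plain hash as a stand-in for the real (patented) acoustic fingerprint, and all the names and data are invented, so it's illustrative only:

```python
import hashlib

def fingerprint(pcm_bytes):
    # Stand-in fingerprint: a real acoustic fingerprint hashes perceptual
    # features so different encodings of the same song still match.
    return hashlib.sha1(pcm_bytes).hexdigest()[:16]

CATALOG = {}  # fingerprint -> canonical (artist, title) metadata

def register(pcm_bytes, artist, title):
    CATALOG[fingerprint(pcm_bytes)] = (artist, title)

def fix_tags(pcm_bytes, tags):
    """Replace whatever tags the file carries with catalog metadata."""
    match = CATALOG.get(fingerprint(pcm_bytes))
    if match:
        tags["artist"], tags["title"] = match
    return tags

register(b"fake-audio-data", "Some Artist", "Some Song")
print(fix_tags(b"fake-audio-data", {"artist": "unknwon", "title": "track01"}))
```

Because the key comes from the audio, a file with mangled or missing tags still resolves to the right record, which is what makes the approach automatic rather than manual like CDDB submissions.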
So far it is only available on Windows, but he's working on an OS X version now...
Cory is talking about the broadcast flag issue, which he has been quite active in resisting. He blogged about it on Boing Boing, but it is basically a flag that can be set in broadcast video to prevent redistribution of it on the Net. The idea is to get commodity hardware and software companies to implement this. The broadcast flag is part I of a three-part plan. Part II is to force all analog-to-digital converters to include technology that senses watermarks and disables the conversion of anything carrying a copyrighted watermark. Part III is to redesign the Internet so that every packet is examined for infringement and infringing packets are discarded.
Sean thinks that the media industry has been bashed so much recently that things are much better than the past. He thinks that there is a viable model that allows people to rip and discover music...
Morgan says that Tivo will be profitable next year... Customers are "happy as clams..." Morgan is talking to the advertising industry about how to use the "real estate" in the living room where families in the US spend 7 hours a day. Wrestling with lots of issues such as copying content between Tivo's. The idea of attacking this without support of the industry didn't make sense to Tivo.
This amazing project involves archiving everything using low cost technology. The data center originally ran on a Connection Machine, but now it all runs on PCs with UNIX. There are over 150 terabytes of data in the data center. There is room for a petabyte. Brewster is on the board of the Library of Congress and is also working with the Library of Alexandria in Egypt on this project. He is trying to recruit other libraries to swap content and mirror the archives. It is such a huge and important project that I couldn't HELP MYSELF... I'm involved. I'm going to try to figure out how to get Japan involved.
Brewster, for those of you who don't know him, was one of the founders of WAIS (a great pre-web tool for indexing and publishing information that I used A LOT on my Mac) and Thinking Machines, which created the Connection Machine, a massively parallel processing computer. He's quite a legend, and it was a great honor and a lot of fun to meet him.
We talked about spam filters earlier. I use TMDA, which is based on whitelisting. The Controllable Regex Mutilator is a technical filtering technology. These technologies keep getting smarter. It sort of reminds me of the convolutions we used to go through at Infoseek to get rid of spam sites from our indexes. I remember that some site used to produce different pages for the Infoseek search bot by looking at the id... Anyway, this "CRM114" looks interesting.
CRM114 - the Controllable Regex Mutilator:
CRM-114 is a system to examine incoming e-mail, system log streams, data files or other data streams, and to sort, filter, or alter the incoming files or data streams according to whatever the user desires. Criteria for categorization of data can be by satisfaction of regexes, by sparse spectra, or by other means. Accuracy of the sparse spectra function has been seen in excess of 99 per cent, for 1/4 megabyte of learning text. In other words, CRM114 learns, and it learns fast.
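To give a feel for the "sparse spectra" idea, here's a toy learning filter that counts word-pair features per category and scores new text against them. CRM114's actual feature extraction and statistics are considerably more sophisticated; this is just the shape of the approach, with invented training data:

```python
from collections import Counter

class SparseFilter:
    """Toy learning filter in the spirit of CRM114: count sparse features
    (word pairs) per category, then score new text by feature overlap."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}

    def features(self, text):
        words = text.lower().split()
        # sparse pairs: each word combined with its next two neighbors
        return [(a, b) for i, a in enumerate(words)
                for b in words[i + 1:i + 3]]

    def learn(self, category, text):
        self.counts[category].update(self.features(text))

    def classify(self, text):
        feats = self.features(text)
        scores = {c: sum(cnt[f] for f in feats) + 1
                  for c, cnt in self.counts.items()}
        return max(scores, key=scores.get)

f = SparseFilter()
f.learn("spam", "buy cheap pills now buy now")
f.learn("ham", "meeting notes for the search project")
print(f.classify("cheap pills buy"))  # spam
```

Word pairs catch phrases that single-word filters miss, which is part of why this family of filters learns quickly from small amounts of text.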
This is totally amazing. An open source, P2P, email, IM, calendar... total personal information management system with "The Dream Team." Even Andy Hertzfeld is on the team. We've been talking about how cool something like this would be for years. Finally someone is doing it. Where do I sign up? This totally relates to blogs as well. Dan told me about it this weekend, but I waited until his article came out before I blogged it. The web site for the Open Source Applications Foundation has more information.
Posted on Sun, Oct. 20, 2002
Software idea may be just crazy enough to work
By Dan Gillmor
Mercury News Technology Columnist
this is an excerpt from the middle
If the software lives up to the developers' plans, it will have wide appeal. It should be highly adaptable to personal tastes, with robust collaborative features. I'm especially hopeful about a feature to build in strong encryption in a way that lets users protect their privacy without having to think about it.
The Chandler architecture builds on other open-source projects. These include Python, a development language and environment that's gaining more and more fans among programmers, and Jabber, a communications infrastructure that started life as an instant-messaging alternative but has evolved into a robust platform of its own.
One of the Chandler developers, Andy Hertzfeld, is volunteering his services. Hertzfeld is well-known in the software community, partly for his key role in creating Apple's original Macintosh and Mac operating system. An open-source company he co-founded a few years ago, Eazel, died during the Internet bubble's immediate aftermath.
``I hope we make a great application that I love to use myself, and that eventually millions of people will enjoy using,'' he says. ``Hopefully, we'll be able to make e-mail a lot more secure, without encumbering the user with technical detail. We can make accessing and managing information of all kinds more convenient if we're lucky. And we'll be helping to pave the way for free software to displace proprietary operating systems at the center of the commercial software industry.''
When Barak was visiting a few weeks ago, he was raving about it as well. GoodContacts is basically a contact management package that talks to Outlook or Act! and sends email to your contacts asking them to update their info. The good thing about GoodContacts is that they don't keep your contact list; they just enable you to send from your own computer. That's why I thought about using it, until I realized I would have to switch to Outlook. (And why I am still drooling.) It is viral, useful and cool. It triggered a "flashbulb moment" for Stewart.
And that leads me to the flashbulb. Imagine that we all have one phone number and one e-mail address that knows where we are. Imagine that the network keeps track of our location and our personal data, and automatically updates anyone who might be interested. Imagine that we don't have to think about whether the right phone number or address is stored in the network or our PC or our PDA or our phone. Imagine that all these little details of personal life are just handled. Yeah, yeah, I'm dreaming. But if that stuff happens, it will start with dumb little programs like GoodContacts. That's enlightening.
boldface added by Joi for emphasis
I have great respect for Stewart and all this SOUNDS good, but the lightbulb that flashed for me was: OUTLOOK? PERSONAL DATA? Ack! I would like something with similar functionality. It would be great, but I still can't imagine using a Microsoft product for contact management considering all of the security and privacy problems they have. I also would HATE for all of this information to ever end up not being local. Be careful when you ask "the network" to do stuff for you. I envision something similar, but with a much different architecture.
Think IM buddy lists. Everyone should be able to have identities that are separate from their "entities". (See my paper for more thoughts on this.) You should be able to have multiple identities for your various roles. Each identity would be attached to different attributes such as memberships, age, corporate roles, or writing pseudonyms. Locally, you would be able to attach current information such as shipping address, home address, current phone, voicemailbox, etc. to each of the identities, and manage which identity was "active" or capable of routing to you at any given time. At work you would want your personal phone calls screened and your business contacts let through. At home, you could reverse them.
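A minimal sketch of this identity model, just to make the idea concrete. All of the class names, fields, and identity names here are hypothetical illustrations, not any real system's API:

```python
# Sketch of the multi-identity idea above: one person ("entity") holds
# several identities, each with its own attributes and contact info,
# and toggles which identities are allowed to route to them right now.

from dataclasses import dataclass, field


@dataclass
class Identity:
    name: str                                        # e.g. a role or pseudonym
    attributes: dict = field(default_factory=dict)   # memberships, roles, etc.
    contact: dict = field(default_factory=dict)      # phone, address, voicemail
    active: bool = False                             # can this identity reach me?


@dataclass
class Entity:
    identities: dict = field(default_factory=dict)

    def add(self, ident: Identity) -> None:
        self.identities[ident.name] = ident

    def set_active(self, *names: str) -> None:
        """Switch routing: only the named identities can reach me."""
        for ident in self.identities.values():
            ident.active = ident.name in names

    def route(self, identity_name: str):
        """Return contact info if that identity is active, else None."""
        ident = self.identities.get(identity_name)
        return ident.contact if ident and ident.active else None


# At work: screen personal calls, let business contacts through.
me = Entity()
me.add(Identity("work", contact={"phone": "x100"}))
me.add(Identity("personal", contact={"phone": "555-0100"}))
me.set_active("work")
```

After `set_active("work")`, `me.route("work")` returns the work contact info while `me.route("personal")` returns nothing; calling `me.set_active("personal")` reverses them, which is the work/home flip described above.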
Managing our identities and personal information in this age of privacy destruction will be essential. I truly believe that privacy underpins democracy, and that "viral" solutions that give people like Microsoft or their software access to our contact info should be watched carefully. Peer-to-peer, multi-vendor, multi-id, hash/digital-signature-based connectivity is much more interesting to me.
But maybe Stewart was going to get to the architecture next. I think it's a great idea, but the architecture discussion has to happen NOW.
Jabber hits critical PR mass, interop finally hits IM
News.com: Out with AOL, in with Jabber. It had to happen eventually. Now it has. The non-interoperable closed doors on IM systems from AOL, MSN and Yahoo are now fated to open. The sooner those companies realize this is a Good Thing that their customers have always wanted, the better off they'll be. Apple should take the lead in opening up IM, as it has with so many other standards (USB, SCSI, FireWire, wi-fi and now Rendezvous).
The company's new iChat already makes some use of the Jabber IM protocol. I suspect the only reason iChat is closed (except to AIM) is due to some contractual agreement with AOL. But that also puts Apple in a unique position to tell AOL the jig is up.
Marc Canter: DUUUUUUUUDE! Apple's new iChat IS AIM. It's licensed technology. That's the only way Apple can link into the AIM universe. That's what AOL announced is their inter-op strategy - let others license THEIR engine. So - no - I don't think you'll see little Stevie taking any leadership steps here.........
And BTW - it should be noted that the ONLY way to get Rendezvous to work is to open it up. It wouldn't do much - if all it did was configure Apple products - right? Apple is using Open Source as a puppet to achieve their own ends. Whenever Apple does something good, it's more by strategic manipulation than anything else.
So I heard from a VERY reliable source that ICQ does not really mind people plugging into their network. For instance, there is a client called Trillian that lets you: "Communicate with Flexibility and Style. Trillian is everything you need for instant messaging. Connect to ICQ®, AOL Instant Messenger(SM), MSN Messenger, Yahoo! Messenger and IRC in a single, sleek and slim interface."
So I agree that Jabber seems cool and maybe the next big thing, but what do I do with all of my old buddy lists? Also, if you're going to make me switch again, I'd like IP telephony seamlessly built into IM so that I don't have to have a phone number any more. It's stupid that the government in Japan allocates phone numbers when all you really need is a buddy list and an IM account.
I have to figure out a cooler way of formatting quotes from various people... How's this?
I just got the beta of OpenCola. (Thanks Howard!) On the surface, it looks like a bookmarking, meta-searching, relevance-tracking front end. Very useful just for meta-searching various search engines and news sources and filing your information. You have folders for different topics, you mark the relevance of various documents, and you can continue to search for more stuff similar to what you like. The cool thing is that you can add peers who can look at your public folders and share recommendations with you. It is similar to a company we invested in that unfortunately didn't end up making it past "beta" called FatBubble... Howard talks about OpenCola in Smart Mobs. I think it was started by Cory Doctorow of BoingBoing. Anyway, so far it looks great. The only problem is I have no PEERS! If someone else can download the beta and post their id here as a comment or email me their id, we can be peers. (I do have the choice of rating the relevance of peers. ;-) ) Anyway, definitely worth a look.
Social Network Diagram for ITO JOICHI
Found this strange site that has extracted data from conference attendance lists and created graphical maps of social networks. Pretty scary. I attended an Open Source Solutions conference organized by Robert Steele, a former CIA expert on Open Source Intelligence. There were a bunch of CIA and KGB folks at the conference. Anyway, the list of attendees of this conference, among other lists, seems to have made it into this database...
Spam is an issue that has been discussed and discussed. Laws have even been passed about it. The reason I decided to write something about it now is that I've been using a spam filter for a while and I think it is working. Usually. I also think it represents the proper way of thinking about spam.
Sen and I have talked about spam a lot, and we often talk about how it is yet another basic mistake in the way the Internet was designed. If only SMTP let you authenticate the sender before you received mail, we could make much better mail filters. Alas, this cannot be changed now. (Or it would be very difficult.) So we have to come up with other solutions.
The best solution we have found so far is whitelisting. It is a way to make a filter that only lets mail from certain addresses through. Originally Sen had made a script for me where I had a separate mailbox for mail from people who were in my address book. Now we have moved over to TMDA, which lets you create whitelists, blacklists and a variety of other things. I have mine set up so that I can create and maintain my own list of addresses and domains that I want to receive mail from. We also have it set up so that if someone sends mail from an address not on the list, they get a message asking them to reply and confirm that they are a human being. Once we receive the confirmation, the mail comes through. To filter out humans I don't want mail from, or the occasional intelligent spam robot, I can use blacklists.
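The flow described above can be sketched in a few lines. This is only an illustration of the whitelist-plus-challenge idea, not TMDA's actual implementation or configuration; all addresses and function names are made up:

```python
# Sketch of a whitelist/blacklist filter with a human-confirmation
# challenge, in the spirit of TMDA. All names here are illustrative.

WHITELIST = {"alice@example.com", "*@trusted.example.org"}
BLACKLIST = {"robot@spam.example"}
PENDING = {}  # sender -> held message, awaiting confirmation


def matches(addr, patterns):
    """True if addr matches an exact address or a *@domain pattern."""
    domain = addr.split("@")[-1]
    return any(p == addr or p == "*@" + domain for p in patterns)


def handle_mail(sender, message):
    if matches(sender, BLACKLIST):
        return "drop"                   # known spammer or robot
    if matches(sender, WHITELIST):
        return "deliver"                # trusted address or domain
    PENDING[sender] = message           # hold the mail and challenge
    return "challenge"                  # "please reply to confirm"


def handle_confirmation(sender):
    """Sender replied to the challenge: deliver and whitelist them."""
    if sender in PENDING:
        WHITELIST.add(sender)
        del PENDING[sender]
        return "deliver"
    return "ignore"
```

A first mail from a stranger gets held and challenged; once they confirm, the held mail is delivered and future mail from that address passes straight through, while blacklisted addresses are dropped without a challenge.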
There are still various problems with the system, but it works quite well. I still have it set up so that I have a mailbox for all of the mail that is rejected, and I go through this periodically to make sure I didn't miss something important.
In Japan, spam has become a huge issue because the recipient has to pay for the mail on i-mode phones. NTT Docomo is trying very hard to deal with this issue with filters of their own, but there are still major problems.
I do think that spam should be solved by certificates, authentication, keys, etc. on the user side and not by some huge central server...