Joi Ito's Web

Joi Ito's conversation with the living web.

I'm listening to Andrew Odlyzko giving a talk right now about why Quality of Service (QoS) and real-time streaming are stupid. He showed a slide demonstrating that P2P and other traffic generally transmit files at speeds faster than their bit rates. Basically, if you cache and buffer, you can have outages in the downloads and you'll usually be fine. I agree. I can see why carriers would want to spread the rumor that QoS is a feature we have to have, but it's strange that so many researchers seem to think we will need QoS-supported video streaming. Maybe they need to stop watching cable TV.
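Odlyzko's slide reduces to simple arithmetic. Here's a toy sketch (all rates and numbers invented for illustration) showing that when the average download rate beats the bit rate, a couple of seconds of pre-buffering rides out complete outages without a stall:

```python
# Toy model of buffered playback: per-second download rates (kbps), with
# complete outages, against a constant playback bit rate. All numbers
# here are invented for illustration.

BITRATE = 500  # kbps consumed by playback each second
downloads = [800, 800, 0, 0, 800, 800, 800, 0, 800, 800]  # kbps received

def stalls(downloads, bitrate, prebuffer_s=2):
    """Count the seconds playback would stall, given a short pre-buffer."""
    buffered = sum(downloads[:prebuffer_s])  # fill the buffer, then play
    underruns = 0
    for rate in downloads[prebuffer_s:]:
        buffered += rate - bitrate           # net buffer change this second
        if buffered < 0:
            underruns += 1
            buffered = 0
    return underruns

print(stalls(downloads, BITRATE))  # 0: the buffer absorbs every outage
```

The download averages 640 kbps against a 500 kbps stream, so the buffer built up during good seconds covers the outages, with no QoS involved.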


You're listening in realtime? Is this intentionally ironic?

In the beginning, for MP3, there was Constant Bit Rate (CBR) encoding, and life was good. Then, folks set out to improve the process and introduced Variable Bit Rate (VBR) encoding, and life was better: smaller overall size for equal or better quality.

I think the idea here is that to maximize quality and minimize bandwidth, real-time streaming wants to use VBR and thus needs QoS support in the underlying network to help guide/parameterize the VBR encoding. A stream should be able to say "oh, the stream needs to up the bitrate, so ensure this connection gets enough throughput" - etc. With QoS and traffic shaping, the infrastructure can decide to throttle back a lower priority network flow to allow the higher priority real-time stream to transmit smoothly.
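In practice, the feedback loop described above is often implemented at the endpoints rather than in the network: the player measures its own throughput and picks the highest encoding that fits, with no network-level QoS at all. A minimal sketch, with an invented bitrate ladder and headroom factor:

```python
# Hypothetical client-side rate adaptation: instead of asking the network
# for guarantees, the player measures throughput and picks the highest
# encoding that fits. Ladder and headroom values are illustrative only.

LADDER_KBPS = [200, 400, 800, 1500]  # available encodings, lowest to highest

def pick_bitrate(measured_kbps, headroom=0.8):
    """Choose the highest rung whose bitrate fits within a safety margin."""
    usable = measured_kbps * headroom
    chosen = LADDER_KBPS[0]            # always fall back to the lowest rung
    for rate in LADDER_KBPS:
        if rate <= usable:
            chosen = rate
    return chosen

print(pick_bitrate(1000))  # 800: with 20% headroom, the 800 kbps rung fits
```

The headroom factor keeps the stream below the measured capacity so short dips don't immediately cause a stall.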

The point isn't about "streaming doesn't use 100% of the pipe" so much as "there needs to be a mechanism to prevent other users of bandwidth (i.e., P2P) from hogging the pipe and for bandwidth-sensitive apps (i.e., realtime streaming) to assert how much of the pipe it needs."

All that said, it still doesn't change the fact that real-time streaming is stupid. Even "live TV" is broadcast on a delay ...

I want QoS for things like XBox Live and VoIP. If I'm uploading all of the pictures from my latest photo expedition (gigabytes worth) to some server I don't want that to kill my DSL connection so that I can't use a VoIP phone or play my XBox.

I agree that I would much rather deal with buffered media than "real time" media. I have a Comcast cable box that does the "On Demand" stuff and I'd rather have it on a DVR any day of the week.

The only "QoS" that matters is "Quantity of Service".

I'd have to agree with Andrew. It's why I hate streaming video.

For the recreational use of the Net which seems to dominate this web page, Joi is theoretically correct in his assertion. For large IP networks, or for time-sensitive business applications which traverse the public Internet, QoS and certain applications of real-time streaming are not stupid; they are in fact quite important. Caches and buffers are workarounds, not solutions, in many cases. Add to this the fact that lots of data streams don't model well to P2P or multicast, and you begin to appreciate QoS options on your IP network.

If I've completely mis-interpreted Joi's comments, please forgive me.

Chris, can you give me an example of such an application? An application where it isn't cheaper just to throw more bandwidth at it than to manage QoS?

Ian, I was listening to him speak in real life.

I would imagine that real-time streaming would be essential in cases where there is never a large number of users online at the same time, or willing to serve the same file at the same time, in order to support p2p models. I can think of a number of other cases where it's necessary. P2P only gets fast for things where there's a demand sufficient to engage many people in serving a file...

But Trevor, you still don't need "streaming," do you? Basically, as long as your bandwidth is fast, you can play a file while it is downloading. Like on most browsers... if you are downloading an MP3, it will start playing once it figures out that the download will finish in time. It feels like streaming, but it doesn't use stupid QoS. If you have enough bandwidth for streaming, you should have enough for download.
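The "figures out when the download will finish" trick is just arithmetic. A minimal sketch (hypothetical function name, illustrative numbers): how long to buffer before hitting play so that playback never overtakes the download.

```python
def startup_delay(duration_s, bitrate_kbps, download_kbps):
    """Seconds to pre-buffer so playback never overtakes the download.

    If the download is at least as fast as the bit rate, no delay is
    needed; otherwise, wait long enough that the download finishes
    exactly when playback reaches the end.
    """
    if download_kbps >= bitrate_kbps:
        return 0.0
    download_time = duration_s * bitrate_kbps / download_kbps
    return download_time - duration_s

# A 60-second clip encoded at 500 kbps over a 400 kbps link:
print(startup_delay(60, 500, 400))  # 15.0 seconds of buffering up front
```

When the pipe is faster than the stream, the delay is zero, which is exactly the "feels like streaming" case described above.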

Ok, I see what you're saying now.

As to that point, I think managing quality is just another tool in the bag of tricks to make the experience good for a user.

You can cache a file and play it while it's downloading, but usually unless you're getting multiple streams concurrently, you have to cache for a perceptible amount of time before being able to smoothly play video. You can ameliorate that by reducing the audio quality a bit, or the video quality, or both, until there's a significant enough cache buildup...

Sure, it's unnecessary when you have enough bandwidth, but you just won't always have enough bandwidth in every case, IMHO... So it's not going to be a showstopper, but it will likely be something that differentiates the very best user experience from the 'usually fine, but sometimes laggy' one...

File vs. stream ;-) Yeah, I am with the file side of this debate.

But if you don't have enough bandwidth Trevor, how is QoS going to help?

Joi -- it's going to help because it can give what is subjectively a better experience to the user, although they're not getting the highest 'quality' images...

i.e. it's much better to have the audio and _some_ video continue smoothly, and cut way back on the detail of the frames of a video (from a user's perspective), than to have to stop the playback and make the user wait for sufficient cache to build up again...

I mean, you could do that with many multiple versions of a file for different bandwidths, or you can do it dynamically, but it's really the same thing...

By analogy, in games, the user would rather have a decent frame rate and cut-back on complexity of graphics than stop gameplay while it renders particle effects...

Trevor Hill wrote@14:
i.e. it's much better to have the audio and _some_ video continue smoothly, and cut way back on the detail of the frames of a video (from a user's perspective), than to have to stop the playback and make the user wait for sufficient cache to build up again...
A telecommunication device is generally unaware of the semantics of the data carried in the data packets it forwards. It's thus unlikely that a "QoS" router will be able to sufficiently buffer and transcode — e.g. from MPEG-2 to AVC — or re-compress audio or video data to lower the stream's bandwidth requirements...
Though a hypothetical Cell-based "application stream processor" router might be able to perform such magic, by privileging bandwidth instead of latency characteristics ;-)

I get the feeling you may be talking about something different than I am though...

I'm just talking about managing quality within streaming algorithms. I get the feeling you're talking about something specific like a lower level network protocol I'm not aware of...


I wish you would qualify what you are talking about before making blanket accusations of "stupidity". QoS has accepted meanings and implications for those of us in networking/telco/server management. I'd like to know if you are talking about something else.

I tried to hedge my statement by saying that if you are talking about consumer/dead-end user level stuff, then I won't completely agree with you, but I won't argue the point either. For free (no money due||taken without rights) content, or non-time-sensitive information, there really is little reason to bother with QoS or traffic prioritization. To me this seems obvious, but since I'm not sure if you are talking about this or something else, I raised a flag in my previous post.

If, OTOH, you have real-time data (price feeds, analysts' calls, earnings reports, etc.) on a congested IP network, or data which customers are paying for (any of the above, or in the case of end users things like PPV events, VOD|MOD, or any subscription-based content), it is then essentially the responsibility of the network manager to ensure that the traffic has the best chance of reaching the receiver over the aforementioned congested IP network. This is true of internal large-scale IP networks as well as the public Internet.

And as to throwing more money at the problem by increasing the size of the pipes: the money may not be there, or the price may be higher than the return. In any case, larger pipes often lead to more waste of bandwidth. I'd rather make the best effort to prioritize my traffic before throwing money at the problem blindly.

As before, If you and I are talking about two different things, please excuse me.

Trevor and Chris: I guess there are two things. My main gripe is the attempt to make the network "smarter" by creating infrastructure-level QoS technology at the network layer. I'm not too concerned about the end-to-end stuff since it doesn't affect me as much, but I'm sometimes not sure whether it wouldn't be cheaper just to get more bandwidth, and I don't know if there is really a market for it.

Chris, what time sensitive stuff is actually broadband?

To be clear: my main gripe is implementing QoS at the network layer to try to create streaming-content businesses for things that appear to me relatively non-time-sensitive. Why stream on 3G when you can get the podcast? Why do you need Bart Simpson in real time instead of with a 30-second delay?

Chris's and Trevor's comments are right on.

Have you ever had a voice conversation on the phone with someone with a 30-second delay?

Now imagine this with your iChat AV.

You're probably using a LOT of QoS now without realizing it.

I'm still not sure exactly what you are talking about, but from the last paragraph I'm guessing you mean for-pay services intended for end users. Since I'm reading meaning into your words, as always, please excuse me if I'm going in the wrong direction.

Two problems with your base assumptions:

1) Regarding "podcasting" (assuming here you mean downloadable audio content, either fee||free):

- You do know that this is way outside the reach of "normal" customers, right? "Normal" people are the ones who really sustain large-scale businesses. Go to Yodobashi/Bic Camera/Sakuraya and look at the customers buying computers/keitais/PDAs (any content-capable terminal device). Try to explain "podcasting" to them in 30 seconds (a long TV ad in Japan). I'm not saying that people are stupid; I'm saying that "podcasting" is a very, very long way away from mass-market friendly.

- Who in their right mind would want to build out infrastructure to stream content that is intended for time-shifted consumption anyway? If it's not in a time-shift format, then it's intended for real-time consumption (fee||free) in the first place, so the "podcasting" assumption is out.

2) "Bart Simpson in real time" (assumption of subscription/fee-based video services, VOD-type stuff):

- QoS isn't always about real time as opposed to a short delay. If a customer is paying for a video stream, be it a PPV event or perhaps a subscription to a TV show, there is a level of expectation on the customer's part. People don't want to pay for "buffering...", especially not in the middle of a play or just before a punch line. Unreliable service leads to loss of paying customers (remember the first year or so of DoCoMo's 3G service?). It's also not easy to sell the idea of "download overnight, watch it tomorrow" to unsophisticated customers.

- As I understand it, lots of the PPV-type services planned for 3G phones will be delivered over private telco networks rather than over the public Internet. In the case of private networks, QoS is a built-in assumption for the network designers. Over-congested networks lead to bad service, which leads to loss of customers.

As to the question of what time-sensitive content is actually served to a target market of broadband customers, I'd say darn little at this point. I know there has been discussion for years of trying various things like that (analysts' conference calls and quarterly reports come to mind first), but these always stall after brief trials because of a network chicken-and-egg problem: not enough customers have reliable, fast connections to the public Internet to create demand, and there is no good way to ensure a clear path on the network which might help stimulate demand (see the buffering comment above).

IMNSHO, as a self-identified network guy, smart networks are good. I want optimal routing protocols and I want my data to get where it's supposed to go as quickly as it can. Increasing the pipes also increases the amount of crap that goes over them (some smart person has probably already said it, but "data expands to fill available bandwidth"). If someone is paying to access data on my network, I want to give that data the best chance of reaching the customer that I can.

NB that I do not advocate levels of traffic prioritization which would give all of the public Internet a better chance to see the latest episode of "Survivor", but I do agree that well-designed networks (which may include QoS, distributed caches, and scalable bandwidth, among other things) make a hell of a lot more sense than just throwing money at the problem to buy fatter pipes. Guess I'm just all about the mottainai spirit ^_^

Count me in the QoS-sceptic camp. I've also implemented QoS architectures on WANs, and understand their value, but these were on infrastructures over which we had full end-to-end control.

I'm very dubious about the feasibility and reasonableness of attempting to implement a uniform semantics e.g. for MPLS tags over a network that you do not fully control like the Internet, where traffic might be carried over the networks of totally independent ISP entities.

How do we realistically negotiate, or try to enforce, that our MPLS-tagged video stream should actually have a higher priority than the other MPLS traffic criss-crossing the Internet exchange points? Logic would dictate that we should introduce differentiated pricing of traffic according to its priority, but how we'd do that with the mind-boggling combination of inter-ISP patterns is something I've yet to see a solution for...

(Of course, the likes of MCI^H^H^HVerizon etc. would love to use that as an argument to sell you end-to-end services on their infrastructure, shutting out competitive carriers. This obviously won't work with the Internet)

Well, let's say we had an all-IP world, and there was plenty of backbone bandwidth, and let's say we used files as much as possible.

Wouldn't we still need to prioritize on something like a mobile phone, where the bandwidth will necessarily be constricted to some degree? I obviously want my voice call to take priority over my podcast downloads.

Of course for this application, I only need prioritization at the edge of the network, between me and the base station, not between me and the person I am speaking to.
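For what it's worth, edge prioritization of this kind already has a standard hook: an application can mark its packets with a DiffServ code point and let the access network decide whether to honor it. A minimal sketch using the standard socket API (whether any router actually respects the mark is entirely up to the operator):

```python
import socket

# Mark a UDP socket's packets with the Expedited Forwarding (EF) code
# point, conventionally used for voice traffic. The TOS byte carries the
# DSCP in its top six bits, hence the shift. Routers along the path are
# free to ignore the marking; this only expresses a preference.
EF_DSCP = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
```

This is exactly the "prioritization at the edge" case: the base station or DSLAM can queue EF-marked voice packets ahead of bulk downloads without any end-to-end coordination.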

There may also be a case for something like multicast streaming. I think this is what Trevor was getting at. So if we want to send an identical TV program, or a stream of market prices, to many different hosts, then you do need some sort of stream. However, with multicast, the latency problems are likely to arise at the edge of the network rather than in the middle. There doesn't appear to be that much need for sophisticated QoS. You might decide you needed a segregated multicast backbone, though.

The point about voice and game stuff is that you don't really need very much bandwidth compared to what people are talking about when they talk about streaming content. I have a crappy ADSL line at home and I have never had problems with Vonage or Skype. I think we have plenty of bandwidth for games and voice.

Smart caching can do most of what multicast can do if you are talking about time-shifted content. You just need to hide the "buffering"... You are already getting a delay when you watch a broadcast; you just don't know it.

Give me a good example of a broadband, must-be-real-time (without even a 30-second delay) application.

PPV sports events. What do you want to bet that within a year, Livedoor is going to be trying to sell just that here in Japan?

Well, for game stuff, you sometimes hear about issues with DSL. But the problem isn't with the 'core' network, the problem is at the edge, i.e., with the link between the exchange and the home, which may have interleaving switched on, resulting in better bandwidth but higher latency (i.e., less responsive).

Chris, do you think Livedoor will be able to compete with digital satellite broadcasting for those sorts of mass-appeal events? Aren't 30-second delays for PPV sports events only important for widely viewed events? For these events, aren't there already competing channels available via satellite, etc.? In the long term, aren't we going to have enough bandwidth for the public "unmanaged" Internet to deliver the same quality just as reliably?

It might be a terminology issue, but I think that fast caching looks very similar to multicast to the user. What I'm arguing against here is not making the ends a bit smarter; I'm arguing against complicated "optimization" or throttling at the network level to "stream" video when you can just pass it as a file and let the ends deal with what they do with it. Basically, optimizing the network for one type of traffic de-optimizes it for another. It goes against the stupid-network idea, and I still haven't seen a good reason why we can't do it as a file transfer.


In Japan it's not about competition anyway; it's about perception and market share. You should know that better than I. Livedoor won't "compete"; they will merely collude, as everyone else does. I figure they are buying the baseball team and Fuji TV as a way to have something to resell. I guess they are betting on time-shifted streams. I'm betting the typical customers will be OLs who work late but want to watch their favorite drama when they get home. I'm even betting the streams will include ads, just like the broadcast. With more and more homes here signing up for FTTH, delivery to the edge will not require as much specialization, provided they can optimize their bit of the core enough to get the traffic close to the edge in a reasonable time.

Don't get me wrong; I've tried to point out that I'm not advocating optimizing the public Internet at the expense of other traffic. I do recognize the commercial reality that while peering agreements exist and are honored at the interconnection points, the providers' networks belong to them, and they can do what they want inside them. I've never read my EULA with Tepco/Point, but I bet it says they guarantee me JFS.

As far as passing files goes, with what we have now it just won't work commercially for video, and I can't see why that's not obvious to you. The file formats and transport protocols support it, but it boils down to "pay now, watch later", which I really don't believe anyone would go for. PPV-type events that people want to see "now", like sports, don't support that idea. "Now" means streaming; there's no way around it with the transports and protocols we have today. Time shifting does support "pay now, watch later", but people like "pay now, watch now", which comes back to sending them the file in chunks. Whether it's done by tweaking some of the network or using an Akamai-style distributed cache, their perception is that they are watching in real time.

BTW, caching and multicast are very different animals. If multicast were so easy on public networks, we'd have lots of cool stuff on the MBone already.

While P2P technologies can deliver bits to a user quickly, there is a subtle point about which bits the user is getting. Namely, the user does not get the first bits in the file first; the user gets random bits. This is very much intentional.

When Bram Cohen was asked last Wednesday at Stanford about the possibility of altering BitTorrent to better enable streaming, his response surprised the audience: he claimed it would be a really bad idea. If everyone has the beginning bits, then the file becomes harder and harder to stream as it goes on, as fewer and fewer people have the later chunks. And with the BitTorrent protocol (or any other tit-for-tat protocol), it is maximally useless to progress through the file linearly, since every peer you download from is guaranteed to have a superset of the information you have, ensuring that you'll be choked; and rightly so, as you have by definition nothing to give back to those from whom you've downloaded.
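Bram's point can be illustrated with a toy version of rarest-first piece selection (peers and pieces invented here): by always fetching the piece the fewest peers hold, the swarm keeps everyone holding something tradeable, which is exactly what sequential, stream-friendly ordering would destroy.

```python
from collections import Counter

def rarest_first(my_pieces, peer_piece_sets):
    """Pick the piece I lack that the fewest peers have (toy illustration)."""
    counts = Counter()
    for pieces in peer_piece_sets:
        counts.update(pieces)
    # Candidate pieces are those I don't already hold, in index order.
    wanted = sorted(p for p in counts if p not in my_pieces)
    return min(wanted, key=lambda p: counts[p]) if wanted else None

# Three hypothetical peers; piece 0 is everywhere, pieces 2 and 3 are rare.
peers = [{0, 1, 2}, {0, 1}, {0, 3}]
print(rarest_first({0}, peers))  # 2: a rarest piece (ties break to low index)
```

A sequential downloader would ask for piece 1 next; rarest-first deliberately skips ahead to the scarce pieces, so it works against in-order playback by design.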

If you're downloading, say, Eyes on the Prize, with tit-for-tat P2P, you may not be able to play it at all until the entire file has been downloaded. BitTorrent in particular has the efficient (if peculiar) habit of writing data chunks to disk as they arrive, only descrambling the hodgepodge when the entire file has been completed. (Try playing an MP3 file only partially downloaded by BT; it's pretty amusing.) So many P2P technologies have the disadvantage of making you wait until the whole deal is done. Now, I'll grant you that there are some exceptions, like Kazaa, which are not tit-for-tat, download files linearly, save to disk sequentially, and DO permit streaming. And I admit there are private P2P networks like Grouper that have peer streaming explicitly built in.

But I think that in the long run, tit-for-tat P2P is the most robust of the public solutions and most likely to be with us for the long haul...and THIS kind of P2P is unlikely to ever be efficient at streaming, as it works directly against its model of efficient peered information exchange.

So for the consumer hoping to start watching in seconds rather than in hours or days, I would suggest that streaming may have some benefit in immediate gratification that many P2P networks cannot offer.

Humbly Yours,


3 TrackBacks


Nah - QoS has its place for high-availability, high-value, and/or paid content. As I think about it, even high-value paid content is time-shifted more and more these days, a la podcasting.