Dr. Dobb's TechNetcast





audio/video
What's Wrong with HTTP and Why It Doesn't Matter, Part 1 (45:00) PLAY
What's Wrong with HTTP and Why it Doesn't Matter, Part 2 (50:00) PLAY
discussion forum (5 messages)


What's Wrong with HTTP and Why It Doesn't Matter

An overview of HTTP and a thorough review of the issues surrounding the protocol that carries over three-fourths of Internet traffic, by HTTP/1.1 Working Group member Jeffrey Mogul.

Following a careful overview of how it works, Jeffrey Mogul gives a step-by-step critique of HTTP from a protocol design perspective and makes the case that the benefits of HTTP ultimately outweigh its limitations. Covers data models, transactions, caching, multiplexing, streaming, transport...

More programs from USENIX 1999.


Transcript Excerpts

Clearly HTTP is the killer application protocol for the Internet. Studies show that about 75% of the bytes on the Internet backbone are HTTP traffic. [Many] of the other bytes, for DNS, are probably HTTP-related. We know that .com stocks are worth billions, even if the underlying businesses aren't actually worth that much. Even my mother uses the Web. So it's clearly made its way in the world. At the same time the protocol design is kind of a mess. It's complicated. The specifications are sometimes confusing. Many of the basic concepts in the specifications are wrong from a protocol design point of view (which I'll talk about). There are a number of inefficiencies, and there's even a spelling error in the specification that we can't get rid of. That's a bit of a problem. So what I'm going to do is to talk about the flaws in the HTTP design, then at the end of the talk explain why it doesn't matter. [] I think my main purpose in choosing this topic is to lead people to a better understanding of HTTP as a protocol design. [] I will mostly be concentrating on some of the bad issues because that's what you can learn from: the mistakes to avoid, both for future protocol designers and for people who are implementing things on top of HTTP and need to know what its limitations are. []

[] The meat of this talk [is about] the mistakes I see in the protocol. [] When I gave this talk a few weeks ago at work, people [indicated that] some of those mistakes look like big ones and [others] like small ones, but [there was no way to] tell from [the] slides which ones [I believed were] big mistakes. So I came up with a rating system: three mosquitoes for a really annoying problem, two flies for something that bugs me that we didn't solve but [that may not be] fundamentally annoying, and a single ant for something that looks like it might become trouble but that we can get away without solving. It's amazing what you can find on the Web with a clip art search.

[] Caching was basically an afterthought. I don't think it was in the 0.9 protocol at all. There's really no way to do cache coherency, meaning the ability to make sure that two caches holding responses for the same resource actually keep them identical. It doesn't seem to be extensible to new methods. In other words, you can't really add new methods [to the HTTP protocol] because caches wouldn't understand what to do with them. And in 1.0 there were a number of transparency failures that went undetected. Let me define what I mean by transparency. A cache behaves transparently when the response that you get from that cache is exactly the same response, maybe with some slightly different time stamps, as you would [obtain] if the request were sent directly to the origin server. In other words, you can't tell that a cache is there except that it sped things up a bit. Now if you don't have transparency, applications can't trust the caches because they don't want people to get wrong answers, so they bypass the caches and you lose the ability to cache altogether.

In order to talk about caching I first have to be a little more detailed about how caching works. The basic model of HTTP caching is that we have a client A that starts by requesting a resource, call it R, via a proxy, P. P then forwards A's request to the server, gets back the response, forwards the response and also stores it for later use. Later on, client B requests the same resource via the same proxy, and the proxy realizes that it has that resource in its cache and returns it directly. Now you can complicate things a bit. The cache can actually be in your browser, for example. In many cases the server's response includes what we call a 'validator'. In 1.0 it was the Last-Modified date. In 1.1 we added the ability to send back what's called an 'entity tag' as well. I'll use the Last-Modified date as the example. Suppose that a client [wants to check] whether its cached value for a resource is still valid, in other words, whether it hasn't been changed on the server. It can supply a header called 'If-Modified-Since' with the date the resource was previously received from the server. The server then has two choices (aside from errors): it can send back a '200 OK' status and the entire contents of the resource. It would typically do this if the resource had changed. However, if the resource has not been modified since that date, it can send back a '304' response with no body, basically saying 'not modified'. This is faster because you're not transmitting the bits over the network. You still have a round trip, however. This allows the client to make sure its cache is up to date without actually retransmitting a large file. In 1.1 we added some improvements. We added some ability to detect various kinds of transparency failures. A cache that's operating disconnected from the network can tell you it's doing that. And we added some mechanisms to make it less likely that transparency failures occur, generally trying to be more explicit about what's going on. But there still are some problems. One is the ability to deal with [concurrent] updates of resources. Two different clients can talk to two different proxies. One client does an update via PUT and the other one then reads the file and doesn't realize it's been updated because the two caches aren't coherent.
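
As an illustration of the validation exchange just described, here is a minimal Python sketch (not from the talk); the host name is a placeholder, and it assumes the server returns a Last-Modified header.

    # Cache validation with If-Modified-Since (hypothetical host).
    import http.client

    conn = http.client.HTTPConnection("www.example.com")

    # First request: fetch the resource and remember its validator.
    conn.request("GET", "/")
    resp = conn.getresponse()
    cached_body = resp.read()
    last_modified = resp.getheader("Last-Modified")
    assert last_modified is not None, "server sent no validator"

    # Revalidation: ask for the body only if it has changed since then.
    # (HTTP/1.1 persistent connections let us reuse the same connection.)
    conn.request("GET", "/", headers={"If-Modified-Since": last_modified})
    resp = conn.getresponse()
    resp.read()

    if resp.status == 304:
        print("304 Not Modified: cached copy still valid, no body sent")
    elif resp.status == 200:
        print("200 OK: resource changed, full body retransmitted")

    conn.close()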

We don't really have a good handle on how caches are supposed to deal with [protocol] extensions. The caching rules are extremely complicated in the specification. I have to take some blame for that since I wrote a lot of them, but some of them are just inherently complicated because we started by fixing something that was already in progress. And there are some performance problems because of what we can and can't do with caching.

So I'll go into these in a little detail. One of the problems with cache coherence is that we force people to use relatively conservative expiration deadlines if there's any possibility that something may change on the server or in another cache. Conservative expiration deadlines then cause more cache validation traffic, which reduces the value of caching. It also means that if you're doing anything like distributed authoring, you have to disable caching entirely, at least in this model. Some people have proposed various solutions; I don't think I'm going to go into them for lack of time. Right now they all have drawbacks and limitations, but there are potential research projects here. Security is an especially difficult problem because you have to be able to define who is allowed to modify a cache entry, and we don't have a really strong identification mechanism to do that with. As I said, caching in 1.1 is complicated.
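
To make the trade-off concrete, here is a rough, simplified sketch (mine, not the speaker's) of the freshness check a cache performs before reusing an entry; the shorter the lifetime a server dares to advertise, the more often this check fails and a conditional request has to go out.

    # Simplified HTTP/1.1 freshness check (ignores Age, s-maxage, etc.).
    import time
    from email.utils import parsedate_to_datetime

    def is_fresh(stored_headers, stored_at, now=None):
        """Return True if the cached response may be reused without revalidation."""
        now = time.time() if now is None else now
        age = now - stored_at                      # simplified current age

        for directive in stored_headers.get("Cache-Control", "").split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                return age < int(directive.split("=", 1)[1])

        expires, date = stored_headers.get("Expires"), stored_headers.get("Date")
        if expires and date:
            lifetime = (parsedate_to_datetime(expires)
                        - parsedate_to_datetime(date)).total_seconds()
            return age < lifetime

        return False   # no explicit lifetime: a conservative cache revalidates

    # A 60-second lifetime means a busy cache revalidates roughly every minute.
    print(is_fresh({"Cache-Control": "max-age=60"}, stored_at=time.time() - 30))   # True
    print(is_fresh({"Cache-Control": "max-age=60"}, stored_at=time.time() - 120))  # False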

[]

There are a few problems with cache performance. There are basically three reasons for caching. One is to improve response time from the user's point of view. One is to lower the bandwidth requirements, which is basically dollars for whoever is provisioning the bandwidth. And a third and somewhat less well-known reason is to provide some availability during disconnection.

So the basic simple caching mechanisms have pretty much hit their limits. Proxies aren't going to get much better than they are now if we don't improve things. There are a number of research and slightly-beyond-research approaches that people have tried in order to get around some of these limitations. One, for example, is prefetching. This allows you to hide latency, although it usually increases your bandwidth requirements. You fetch something by guessing in advance that somebody might use it. Then if you guess right, they get it faster. If you guess wrong, you've transmitted bits over the network that you've had to pay for but didn't use. Another approach is to try to take the bits that you have in your cache and make better use of them. Delta encoding [uses this approach]; this is a research project that I worked on. It wasn't my initial idea, but I did some follow-up studies on it. That seems to work pretty well. I don't know if Fred Douglis is in the audience, but he and some of the people from AT&T talked about decomposing complicated pages into static and dynamic parts. For example, a stock quote page can have one stock quote that varies frequently while the rest of the page is fairly static. If you can turn that into a static part that you load once and a dynamic part that you continually update, you only have to update a much smaller piece. All these [solutions] require some amount of HTTP enhancements. Then there are also some cache-management issues: trying to decide, for example, what to replace when you run out of space in your cache, or how to balance the cost of disk I/O. There's a lot of work that's been done on cache cooperation. Some of it is actually valuable.
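
As a toy illustration of the delta-encoding idea (this is not the actual HTTP extension, just a sketch of the principle): the server ships only the differences against the copy the client already has, so in the stock-quote example only the changed quote crosses the network.

    # Toy delta encoding with difflib; the formats and names are made up.
    import difflib

    def make_delta(old, new):
        """Server side: encode 'new' as edit operations against 'old'."""
        ops = []
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
            if tag == "equal":
                ops.append(("copy", i1, i2))        # bytes the client already has
            else:
                ops.append(("data", new[j1:j2]))    # only the changed bytes are sent
        return ops

    def apply_delta(old, ops):
        """Client side: rebuild the new version from the cached copy plus the delta."""
        return "".join(old[op[1]:op[2]] if op[0] == "copy" else op[1] for op in ops)

    cached  = "<html><body>Quote: 41.50  Volume: 1200</body></html>"
    current = "<html><body>Quote: 42.25  Volume: 1200</body></html>"

    delta = make_delta(cached, current)
    assert apply_delta(cached, delta) == current
    print(delta)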

[]

Back to another 'two fly' problem: the original broad goal of the Web was to make it international and multilingual. That's what the content negotiation mechanism was for. I would say this was a semi-failure because there's a fair amount of fuzziness [here]. Beyond the basic goal, things got fuzzy. It wasn't clear who was in charge: the user's preference (for example, I would prefer to see things in this font rather than that font) or the site designer's preference (I'd rather have my users see my identity in this font rather than that font). There's really no technology that you can use to resolve these competing preferences, and we never got into how to get around this problem. [] Part of the problem is that people tried to [agglomerate] together under one general mechanism [resources] that actually have different characteristics. [] For example, a document that's in French is in some sense fundamentally different from a document that's in English, at least to those of us who don't actually do artificial intelligence. It's very hard to translate between them. On the other hand, there are some things like presentation, fonts or spacing, where you might be able to do more mechanical transformations. Then there are implementation parameters like the capabilities of the screen --black and white or color-- and the ability to display images at all. These three things are sufficiently different that they really ought to be thought of at least partly differently. The mistake was to try to force them all into the same general mechanism. People had different goals because they were actually solving different problems.
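
For readers unfamiliar with the mechanism being criticized, here is a simplified sketch of server-driven negotiation on the Accept-Language header (it ignores wildcards and language ranges, which the real rules allow).

    # Pick the available variant the client rates highest via q-values.
    def pick_language(accept_language, available):
        prefs = {}
        for item in accept_language.split(","):
            parts = item.strip().split(";")
            lang, q = parts[0].strip().lower(), 1.0
            for p in parts[1:]:
                p = p.strip()
                if p.startswith("q="):
                    q = float(p[2:])
            prefs[lang] = q
        best = max(available, key=lambda lang: prefs.get(lang, 0.0))
        return best if prefs.get(best, 0.0) > 0 else None

    print(pick_language("fr, en; q=0.5", ["en", "de"]))   # 'en' (French unavailable)
    print(pick_language("fr", ["en", "de"]))              # None: no acceptable variant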

Now there are some issues about transport. The goal of any network protocol is efficient and reliable message transport. We have a number of issues [here]. In 1.1 we added some fixes. As I said earlier, we have this persistent connection model that allows us to reuse the same TCP connection for multiple requests. That definitely is a big win. And pipelining allows you to avoid latency. That's also a big win. We added the ability to negotiate compression between clients and servers, which encourages the use of compression, although I don't know how successful that's been in practice. We have what's called chunked encoding, which allows you to send the length of a message in chunks rather than having to precompute the entire message before you can state its length. We made some fairly careful rules about how to use that. We added a mechanism --far too complicated for me to describe here-- the '100 Continue' response and the Expect header, which allows you to avoid transmitting a large request body to a server that's not about to accept the request in the first place. As I said, it's complicated, and it's not clear that it will actually work. I don't know whether anybody has really succeeded in making it work in practice. We also provided a weak form of an atomic read-modify-write mechanism: the ability to read a file, then write it only if it hasn't changed since you last read it. But it hasn't really been tested, and it only applies to single resources.
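
The chunked encoding mentioned above is easy to show; this small sketch (an illustration, not a normative encoder) frames a body whose total length is not known in advance.

    # HTTP/1.1 chunked transfer encoding: each chunk is prefixed by its size in
    # hex, and a zero-length chunk terminates the message, so the sender never
    # needs to know the total Content-Length up front.
    def encode_chunked(pieces):
        for piece in pieces:
            data = piece.encode("utf-8") if isinstance(piece, str) else piece
            if data:                              # an empty chunk would end the message early
                yield b"%X\r\n" % len(data) + data + b"\r\n"
        yield b"0\r\n\r\n"                        # last-chunk plus empty trailer

    wire = b"".join(encode_chunked(["Hello, ", "world!"]))
    print(wire)   # b'7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n'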

We have an efficiency problem. Because we're using ASCII headers, some of them are extremely verbose. For example, there is a header named 'If-Unmodified-Since'. That's a nice long name by itself. Then we have this time stamp which, first of all, gives the day of the week. This is redundant because you should be able to deduce it from [the date itself]. There's no reason to send 'Thursday' in this message. Also [the date includes] 'GMT'. That's another four bytes (with the space) that are useless because the spec requires you to send GMT, and yet we still send the time zone. As you can see, this is a little too verbose. The verbosity actually is a problem, especially if you're dealing with things like wireless paths where the bandwidths aren't that high. Some traces I did a few years ago suggested that the mean request size, including the URL (URLs are usually several dozen bytes), is about 300 bytes. Response headers average about 160 bytes. That's 460 bytes just to get an average message back and forth, which is a lot of bits over a wireless link. Also, these verbose headers require a fairly complicated parser, and in many cases this has been a problem. You could imagine a simpler mechanism [such as a] one- or two-byte binary format with a code that effectively implies the header name, maybe tokens to specify the data type and the length of the header, and binary codes for things like numbers, enumerated types and dates. [] The date [could be] turned into a standard Unix 32-bit binary number represented in hex. This may actually save a large number of bits over, say, several billion retrievals per day.
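
A back-of-the-envelope comparison of the verbose ASCII form with the compact binary date he describes (a 32-bit Unix time in hex); the header value and the one extra header-code byte are illustrative.

    import calendar, time

    ascii_header = "If-Unmodified-Since: Thu, 10 Jun 1999 18:30:00 GMT\r\n"

    ts = calendar.timegm(time.strptime("10 Jun 1999 18:30:00", "%d %b %Y %H:%M:%S"))
    compact = "%08X" % ts     # 32-bit Unix time as 8 hex digits

    print(len(ascii_header), "bytes on the wire today")         # 52
    print(len(compact) + 1, "bytes with a 1-byte header code")  # 9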

[]

We [also] have a reliability problem with our transport, in several ways. First of all, there really is no way of cleanly stopping a message transfer without killing the TCP connection. If you hit the stop button, then you have to basically close the TCP connection. In many cases the user may want to stop something but then continue [interacting] with the same server. You now have to reopen the connection and reestablish context. A fair amount of buffered data may actually have been in transit in the connection and made it most of the way through the network. It may even have made it to your receiving host without being received by your application. [That data] is all thrown away. I think this is a mistake. It would have been nice to have something like Telnet has, where you can send an attention signal and have it clean up what's going on without losing the connection and all the buffered data. That's a small problem, I would say, but it's certainly an efficiency problem.

[Here's] another problem that we ran into, pretty much at the last minute. You're not supposed to send more data in a message than your recipient can actually buffer. You would think that we all learned that in protocol kindergarten. The second piece of that is that you ought to have a mechanism for finding out how much data your recipient can buffer. If you look at the other protocols we use, such as IP or TCP, they all have either rules or negotiation mechanisms to prevent this from being a problem. Unfortunately, we kind of neglected this in HTTP. About a year ago somebody [discovered] a complicated scenario involving proxies, chunked encoding and the use of trailers, where you end up with a situation in which a proxy is required to buffer a message that it has no buffer space for. So we added a last-minute spec change that's a bit of a kluge. I think the spec actually says, "trust us -- you have to do it like this because otherwise you'll get it wrong." I think most people will find this mysterious unless they actually work through the specific problem. What [HTTP needs] is end-to-end and hop-by-hop buffer limits that are explicit in the protocol, with reasonable defaults so you don't actually have to send bytes back and forth, and a negotiation mechanism for getting larger buffers if you want them, such as what TCP has. We don't have that, so as a result we're stuck with kluges and the hope that people don't use arbitrarily small buffers.

Another problem [has to do with] message structure. [HTTP has] no multiplexing. This is partly because we started out with a single-request-per-connection model. We moved to multiple requests per connection, but we didn't allow you to do anything but send them completely sequentially. So if you send a bunch of requests on one connection and one of them stalls, everything behind it stalls as well. What you'd like to be able to do is to let a later request bypass an earlier one that's stalled and deliver things out of order. But that would require a number of things, including some sort of ID. Right now there's no way of identifying that a given response matches a given request except by the order in which they arrive.
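
Purely to illustrate what such an ID would buy (HTTP/1.1 has nothing like this; the framing below is made up), responses tagged with an ID can be matched to their requests regardless of arrival order.

    pending = {}                                  # request id -> URL we asked for

    def send_request(req_id, url):
        pending[req_id] = url
        return {"id": req_id, "method": "GET", "url": url}

    def receive_response(frame):
        url = pending.pop(frame["id"])            # match by ID, not by arrival order
        print("response for", url, "->", frame["status"])

    send_request(1, "/slow-report")
    send_request(2, "/logo.png")

    # The slow response no longer holds up the fast one queued behind it:
    receive_response({"id": 2, "status": 200})
    receive_response({"id": 1, "status": 200})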

Another problem is that there really aren't any multi-resource operations in HTTP. You can't, for example, revalidate a whole list of cache entries or get all the images of a page in one operation. These would seem to be more efficient ways of doing things. You can kluge some of this using additional message headers, but I think there are interoperation problems if you do anything elaborate like that. So basically most people have ignored multi-resource operations.

There's also really no way to atomically group updates to a set of resources. For example, I want to say "change this resource A as long as resource B hasn't changed since I read it." In a database you might consider this to be like a debit/credit transaction with a lock around the whole thing. We don't really have a way to do this consistently. You also can't do an atomic rename. You can do a GET and then a PUT and then a DELETE, but that's not necessarily atomic. []
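
For contrast, here is a sketch of the single-resource conditional update that HTTP/1.1 does provide, using an entity tag with If-Match; the host and path are placeholders, and it assumes the server returns an ETag and accepts PUT. Note that the precondition can only refer to the resource being written, never to a different one, which is exactly the gap described above.

    import http.client

    conn = http.client.HTTPConnection("webdav.example.com")

    # Read the resource and remember its entity tag.
    conn.request("GET", "/notes.txt")
    resp = conn.getresponse()
    original = resp.read()
    etag = resp.getheader("ETag")
    assert etag is not None, "server sent no entity tag"

    # Write it back only if nobody else changed it in the meantime.
    conn.request("PUT", "/notes.txt", body=original + b"\nanother line",
                 headers={"If-Match": etag})
    resp = conn.getresponse()
    resp.read()

    if resp.status == 412:
        print("412 Precondition Failed: the resource changed since we read it")
    else:
        print("update applied, status", resp.status)

    conn.close()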

Then there's this little issue about cookies. I'm sure most of you have heard at least something about cookies because you're often warned to distrust them. A cookie is basically a way of encapsulating server-specific state but storing it on the client, so the server doesn't have to store it. Originally we started out with an ad hoc extension that Netscape came up with. Other people then more or less implemented the same thing, but because Netscape's specification isn't all that specific, they didn't get it quite the same. RFC 2109 actually was a proposed standard. We had some problems. First of all, because there were multiple, not necessarily identical, imitations of the original version, we had some interoperation problems that certainly made it more complicated to define a new standard. More importantly perhaps, we had some problems with privacy concerns --as you are probably aware, cookies can lead to privacy problems. There is a three-way conflict between technology, which is basically what we're [involved in], policy issues, which the IETF [deals with], and the profit motive, which is what really drives things. The IETF requires standards documents to have a security considerations section. So the people who wrote the standard for cookies thought they should try to not only write down something about security, but maybe even solve some of these privacy problems [as well]. But there was no real consensus about how far you should go to meet the IETF's concerns. The problem is that a lot of the ad-supported sites have so much to lose from strong privacy that the vendors had no choice but to go along with the ad-supported sites rather than the privacy fanatics. So I don't think they really resolved [this issue]. I'm not even sure that the cookie standard has really progressed in any sense.
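
A toy sketch of the basic mechanism (the names and values are made up): the server pushes a small piece of state out to the client, and the client echoes it back on later requests, so the server itself stores nothing.

    def server_handle(request):
        """The state (here a language preference) lives entirely in the cookie."""
        cookie = request.get("Cookie")
        if cookie is None:
            # First visit: hand the state to the client.
            return {"status": 200, "Set-Cookie": "lang=fr", "body": "choose a language"}
        prefs = dict(item.split("=", 1) for item in cookie.split("; "))
        return {"status": 200, "body": "page rendered in " + prefs["lang"]}

    first = server_handle({})                               # server sets the cookie
    second = server_handle({"Cookie": first["Set-Cookie"]}) # client echoes it back
    print(second["body"])                                   # page rendered in fr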

There are some other social issues which we didn't really deal with. Here's an interesting question for those of you who are lawyers: if a document ends up in a cache and then [is retrieved] from the cache [by] somebody else, is that a copyright violation, and is the cache operator legally liable for it? Another way of looking at this is, "how does the proxy know what's legal and what isn't?" We actually talked about adding a header saying in effect "don't cache this because if you did you'd be violating the law." Then a lawyer told us, "this isn't a formal opinion, but you guys writing the spec could get into trouble for putting stuff in there that could later be misused." So we did the obvious thing -- we ran as far away as we could from this. As a result, there's nothing [about this] in [the spec]. Hopefully there won't be a rash of lawsuits once the copyright lawyers figure out what's going on.

Another social problem is advertising. There are two competing goals here. The ad vendors and the people who buy ads want accurate counting. They don't want to pay for more ads than actually get delivered. At the same time, those of us who use the network don't want excess overhead. We don't want to pay for the shipment of lots of bits of ads. There are some trust issues involved here. Are the counts honest, and are proxies perhaps being subverted, replacing the ads from one advertiser with ads of their own? You could do that in most cases today. We don't really have any good technical solutions for these trust issues. Also, there are some issues about wanting the user to be able to refuse ads versus wanting the content provider, who is supplying the valuable content along with the ad, to be able to support their site. As I said, these are basically social issues, but I think there are some technical mechanisms that might come into play later on to resolve some of these.


Related Programs:
• A Brief History of Unix and the Internet
• 1999 USENIX Technical Conference
• Linux BOF at USENIX 1999

FORUM


Jeffrey Mogul on HTTP
posted by anonymous 1999-08-06 [#212]

It's really amazing that the web is built on such a shoddy
protocol. And that holds for other Internet protocols too.
Think how much richer and faster the web would be if HTTP was
a binary protocol and supported transactions, multiplexing etc...

Jeffrey Mogul on HTTP
posted by Dale Wick 1999-08-09 [#213]

In my experience it is much better to create a standard and
distribute it than it is to start with the ultimate protocol
but take years longer than the market is willing to wait.
Really, an ASCII-based protocol header is an order of magnitude
easier to code to, and debug, than a binary one is, and also
easier to extend in a decentralized manner.

When it comes to optimizing, binary encoding makes things
faster, but hellish to deal with. The better solution is the
suggested compressed headers. As for transactions, the odds
that there would be reliable implementations with respect to
inter-operability are doubtful. Just look at databases for
comparison, which have endless issues. Simpler, faster, easier
to design [esp. limited scope], easier to implement makes for a
fast-growing technology.


Jeffrey Mogul on HTTP: Compression
posted by Nathan Fain 1999-08-14 [#219]

I agree... compression seems better than switching the headers from plain text to binary. Compression in the protocol would be something that I would like to see more discussion about.

A few questions concerning compression:
1) What would be the disadvantages of having the connection compressed from the proxy to the destination?
2) What would be the disadvantages of having the connection compressed from the client (w/o proxy) to the destination?

I can only think of one: it would be harder to implement the protocol in our programs. I mean, I can easily write an HTTP client or server in Perl or any other language. I can do it directly. Whereas, with compression, I would have to work through another module or be a genius on the compression protocol being used.
Another small problem would be that I could no longer just open a telnet session to port 80 and test my server.

So maybe compression should be optional.


Jeffrey Mogul on HTTP
posted by Terry Cumming 1999-08-24 [#241]

I find the issue of compressed or binary headers interesting. I've implemented a web server on MVS and also created a protocol that had to deal with these issues.

Our clients wanted ASCII all the way and didn't want compact binary type and length fields. It added overhead but allowed any language to generate the protocol via

Jeffrey Mogul on HTTP: the slides
posted by Jeff Mogul 1999-10-07 [#278]

Finally, after only a few months of prodding on my
part, USENIX has made my slides available on the
Web without requiring a password.

Find them at
http://www.usenix.org/events/usenix99/invitedtalks.html



(c) 1999-2000, Dr. Dobb's TechNetCast