Where did the megabits go?
Nowadays people have super-hyper-fast Internet connections. For example, I have a fiber optic cable coming into my apartment (lucky me!), which gives me a 100/100 megabit connection. What does this 100/100 mean? Well, literally it means 100 megabits per second in and 100 megabits per second out.
To me it is "engineering porn": big numbers, so it just has to be something cool. But to put the numbers in perspective, we used to have 56 kbit modem connections (= 0.056 Mbit). That was considered so fast that people wondered what anyone would ever do with such speed. And nowadays you might not even manage to download some webpages at that speed, because they have grown huge.
The first reason your connection seems slower than it should be is that you are viewing sites (or downloading something) hosted on a computer with a slow response time. Either it is really far away, or the server is inefficient (or it is giving you limited bandwidth because of heavy traffic, or for some other excuse).
The second reason is that your target is HUGE. It doesn't mean the connection is slow, but if the site has loads and loads of stuff in it, there is simply a lot that has to be downloaded.
After these obvious reasons comes something more interesting and not so well known:
Normally we use TCP to transfer data over the Internet. Mostly we talk about webpages, images, music, videos… The thing with TCP is that when the server sends us packets of data, we need to acknowledge them and say "yes, they came through, send me more!". Only after that acknowledgment will the server send more, and not a second earlier. This is how TCP makes sure the data gets through.
You can probably see that these messages take time. This is the round-trip delay time, which comes from the fact that the speed of light has its limits (here at least ;) ), that routers have finite processing speed (so the trip takes longer than just the time the signal spends in the cables!), and of course the server also has a processor that needs to take action.
This round-trip delay is the same thing as the ping time, which is something every computer can measure: a computer sends another computer a "ping" message, and the other computer answers if it is listening. It is like saying "hi" to someone: if they wish to talk to you (and are not giving you the silent treatment), they will answer "hi" back.
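To see the round-trip delay yourself, here is a small sketch of my own (not any particular tool) that times a message bouncing off a tiny echo server over a loopback TCP connection. Over the real network the round trip would of course be far longer:

```python
import socket
import threading
import time

def echo_server(sock):
    """Accept one connection and echo whatever arrives back to the sender."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# A one-shot "pong" responder on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# "Ping": send a message and time how long until the reply comes back.
client = socket.create_connection(server.getsockname())
start = time.perf_counter()
client.sendall(b"ping")
reply = client.recv(1024)
rtt = time.perf_counter() - start
client.close()
server.close()

print(f"reply={reply!r}, round-trip {rtt * 1000:.3f} ms")
```

On the loopback interface this prints a round trip well under a millisecond; across an ocean the same exchange can easily take a couple of hundred milliseconds.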
The packets sent through TCP have a maximum size, and more importantly TCP only allows a limited amount of data to be "in flight", unacknowledged at once (the window). Together with the round-trip delay this puts a maximum speed on a single transfer, roughly the window size divided by the round-trip time, no matter how fat your pipe is. Tough luck.
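A back-of-the-envelope sketch of that limit, with assumed numbers (the classic 64 KB TCP window and a 100 ms round trip):

```python
# Rough ceiling for one TCP transfer: at most one window's worth of data
# can be in flight per round trip.
window_bytes = 64 * 1024   # assumed: classic 64 KB TCP window
rtt_seconds = 0.100        # assumed: 100 ms round-trip time

max_bits_per_second = window_bytes * 8 / rtt_seconds
print(f"max throughput = {max_bits_per_second / 1e6:.2f} Mbit/s")
```

That comes out to about 5.24 Mbit/s. So even on my 100 Mbit fiber, a single connection with those numbers tops out around 5 Mbit/s unless the window is scaled up.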
Because of this speed limit, the more complicated the route from the server to the client is, the longer it takes (there are then usually also more routers along the way). A rule of thumb is that the geographically longer the route, the longer the transfer takes, and not just because the bits spend more time in the cables. So sometimes it might actually be faster to send storage drives via airmail (this type of transfer can be called Sneakernet) than to try to push huge amounts of data through the network… :) (I'm not kidding! Unfortunately.)
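To see why airmail can win, here is a quick estimate with made-up but plausible numbers: a 2 TB drive delivered in one day, treated as a data link:

```python
# Effective bandwidth of shipping a drive: total bits divided by travel time.
drive_bytes = 2 * 10**12        # assumed: a 2 TB drive
shipping_seconds = 24 * 3600    # assumed: one-day delivery

sneakernet_bits_per_second = drive_bytes * 8 / shipping_seconds
print(f"sneakernet = {sneakernet_bits_per_second / 1e6:.0f} Mbit/s")
```

That is roughly 185 Mbit/s of sustained bandwidth, which beats my 100 Mbit fiber. The latency is terrible, of course: the first bit arrives a day late.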
But there is also good news! We now have so much spare processor capacity that we can afford not-so-reliable methods of transfer: the processor can check the incoming data with error correction, so we can skip those response messages and achieve faster transfers. Then we get more bits through simply by having more bandwidth, which wouldn't be possible with TCP and its round-trip delay.
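As a sketch of "not demanding those response messages", here is UDP on the loopback: the sender just fires datagrams without waiting for any acknowledgment, and it would be up to the application on the receiving end to notice and repair anything missing:

```python
import socket

# Receiver: a plain UDP socket. No handshake, and it never sends ACKs back.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

# Sender: fire the datagrams one after another; nobody tells us whether
# they arrived, so there is no round-trip wait between them.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
addr = receiver.getsockname()
for i in range(3):
    sender.sendto(f"chunk {i}".encode(), addr)

# On loopback these normally all arrive; over a real network some could be
# lost or reordered, which is exactly what error correction would handle.
received = [receiver.recv(1024).decode() for _ in range(3)]
sender.close()
receiver.close()
print(received)
```

The point is not that raw UDP is the answer, but that once you stop waiting for acknowledgments, the round-trip delay no longer caps your throughput.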
What are these new methods? Umm… Gnutella at least. There should be more; we just aren't really using them. Yet.
- 21.06.2012 Fixed typos (again....) and added the link to the Sneakernet.
By the way, check this out: http://blog.cloudflare.com/the-bandwidth-of-a-boeing-747-and-its-impact Someone wrote about the same thing just one day after me :D It has more specific info about delay times etc.