Really a big story?
**sbergman27** | May 02, 2006 12:04 PM EDT

As I read it, some guy somewhere has DSL and is getting better speed out of Linux than XP with whatever hardware he has. I'll bet there are others who get better speed out of XP on their hardware, and still more who are having trouble getting their DSL to work at all in Linux and/or Windows.

FWIW, back when internet access meant a 56k modem, Windows 9x walked all over Linux for browsing responsiveness while simultaneously downloading a file. The reason is really "the way tcp/ip works". (This was covered on LKML.) There are some tricks you can apply to make the situation better, but Linux did not apply them by default. I guess Windows did, because way back when I was dual booting, the difference was not just noticeable but striking.

Linux may walk all over XP for ADSL access today. I don't know. But this story doesn't prove it one way or the other.
**jdixon** | May 02, 2006 1:28 PM EDT

> FWIW, back when internet access meant a 56k modem, Windows 9x walked all over Linux for browsing responsiveness while simultaneously downloading a file.

Strange. That was never the case for me. What distribution? I was running Slackware.
**sbergman27** | May 02, 2006 2:19 PM EDT

Multiple: Slackware '96, RedHat 4.x and 5.0, Mandrake. (After moving from SCO Unix 3.2v4.2/DOS-Merge as my primary desktop OS, I used Win95 for 2.5 years, dual booting Linux for about a year until it was ready to assume all my desktop tasks.)

I had observed the behavior for a long time before a question was asked on LKML. Alan Cox explained how tcp/ip handles transient connections, like http requests, in the presence of a long-running, bandwidth-consuming stream like an FTP download. One of the things that can help is reducing the tx queue length. But I never got Linux working as well as Windows in that respect. And, believe me, I wanted to.

At any rate, my main point is that anyone on any side could go out and scan the help sites to find someone getting suboptimal DSL performance on Operating System Q and then publicize it. If a Microsoft-friendly site did the same to Linux, we'd be in an uproar.
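For reference, the tx queue length tweak mentioned in the post above was typically applied along these lines (a sketch: `eth0` and the value 10 are placeholder examples, the commands need root, and period guides used `ifconfig` rather than `ip`):

```shell
# Show the current transmit queue length (the "qlen" field)
ip link show dev eth0

# Shorten the transmit queue so a bulk transfer buffers fewer packets
# ahead of interactive traffic (the older equivalent was
# `ifconfig eth0 txqueuelen 10`)
ip link set dev eth0 txqueuelen 10
```

A shorter queue trades a little bulk throughput for lower latency on new, interactive connections.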
**jdixon** | May 02, 2006 7:45 PM EDT

> Alan Cox explained how tcp/ip handles transient connections like http requests in the presence of a long running, bandwidth consuming stream like an FTP download.

Oh, you mean the FTP download was hogging the bandwidth and not letting the other tcp/ip connections through in a reasonable time. Yes, this could happen, though I never had a significant problem with it. I forget what the recommended fix was, but it didn't seem complicated. It tells you how little it affected me that I never bothered to try fixing it.
**sbergman27** | May 03, 2006 7:11 AM EDT

Basically (and I'm sure some kind soul will correct any errors), it's not just a matter of "hogging bandwidth". The interactive application, which makes many small, transitory connections, is actually penalized, while the ftp transfer effectively gets priority.

If I understand correctly, when a connection has a higher percentage of packets that don't make it through, or are substantially delayed, that connection backs off on its sending rate as a congestion control measure. The ftp transfer has a very low percentage of lost/delayed packets and so it is going full speed. But if a new http connection is brought up and a packet or two don't make it, it backs off, effectively lowering its priority.

I found it to be quite noticeable, and I observed it on my own machine as well as my modem-using clients' machines, though mine was the only one on which I actually had a side-by-side comparison of Linux and Windows.

Researching the matter at the time, there were some recommended "tricks", like the tx queue length setting which Alan recommended. But none of them really helped that much that I could tell.
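The backoff dynamic described in the post above can be sketched with a toy AIMD (additive-increase, multiplicative-decrease) model. This is a deliberate simplification for illustration, not Linux's actual TCP implementation: window sizes are in packets, and each loop iteration stands in for one round trip.

```python
# Toy AIMD model of two TCP flows sharing a link: a long-running bulk
# transfer that sees no loss, and a new connection that loses a packet
# early on.  On loss a flow halves its window; otherwise it grows by one.

def aimd(window, lost):
    """One round trip of additive increase / multiplicative decrease."""
    return max(1, window // 2) if lost else window + 1

bulk = 40   # established bulk flow, already at a large window
web = 4     # fresh interactive connection, still small

for rtt in range(10):
    bulk = aimd(bulk, lost=False)       # big flow never loses a packet
    web = aimd(web, lost=(rtt == 2))    # newcomer drops one in round 2

print(bulk, web)  # prints: 50 10
```

The established flow grows steadily, while a single early loss sets the new connection back, which is the "effectively lowering its priority" effect described above.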
**number6x** | May 03, 2006 9:00 AM EDT

Older versions of IE and IIS used to play a trick instead of following tcp/ip standards: the server would not wait for the 'ack' from the browser, but would start sending right away. At 14.4, 28.8, and even 56k, this caused an improvement in response time. There were other undocumented features as well, but they followed pretty much the same pattern.

Of course, all of these features left IIS open to being more easily exploited in a DDoS attack. The software was so quick to start spewing out data and setting up multiple threads before even the most basic authentication that it would quickly overwhelm itself. They have made IIS better. It takes its time and crosses the t's and dots the i's now. Those were such innocent times, were they not?

The point of the article was that the user could actually explain what was happening on Linux. You can examine speed settings in configuration files and see that they match the data throughput actually observed. In Windows, however, you set things in some GUI and hope they have meaning to your hardware. He even bought third-party software that was supposed to examine his hardware and optimize the mysterious settings somewhere behind the curtain in the land of Oz.

The old saying "You get what you pay for" needs to be updated: "Sure, you paid for it, but you really don't know what you get any more."