Stability Issues, are they kidding?
Author | Content |
---|---|
Sander_Marechal Apr 27, 2007 9:17 AM EDT |
> Stability Issues, are they kidding? Talk about a talented salesperson.. Oh well, at least its not M$ I don't think they're kidding. You're talking about AIX here. That's one helluva stable monstrosity. Linux is far, far more stable than 98% of the stuff that's out there, but it's (still) no match for AIX or HP-UX in some areas. It's just that usually Linux is still the better choice, because it's so much easier to work with. I can understand that for, e.g., an airline or a bank they'll sacrifice ease of use for that slight gain in stability. |
jdixon Apr 27, 2007 10:22 AM EDT |
Actually, I think this line explains things better than the stability one: > The airline's original plans called for a totally Oracle-based solution, but that was subsequently shifted to a multivendor approach to better match Qantas' specific needs, Translation: Not everything we want to run is available on Linux yet. |
dinotrac Apr 27, 2007 5:54 PM EDT |
jdixon - Maybe so, but I have to agree with sander. AIX -- like the machines upon which it runs -- is one stable beast. Linux skunks Windows, but not AIX. Now -- when bang for the buck matters, things change. For that matter, if you use a lot of free software, Linux looks good because it has become the primary development platform for most free software. Still, it ain't all things to all people and that's OK. |
jdixon Apr 27, 2007 6:50 PM EDT |
> Maybe so, but I have to agree with sander. AIX -- like the machines upon which it runs -- is one stable beast. Yes, the top line Unix machines, including Solaris, have better stability and scalability than Linux. As you said though, it comes at a significant cost. For most people and most applications, Linux is more than they need. I will grant that for someone like Qantas, it may be worth the extra cost; but reading between the lines, I still think my analysis is accurate. In this case, there's no reason both can't be true. |
dcparris Apr 27, 2007 7:03 PM EDT |
My company recently announced it was switching to a Windows-based web server solution because the "Linux programming language" had issues. That announcement had been reviewed something like a dozen times by several in-house and vendor experts. Marketing just passed along what was handed to them. So JDixon might have a point - we all know the many cases where one problem was pointed to, while the real problem lay elsewhere, frequently visible in the article text. |
jsusanka Apr 27, 2007 8:08 PM EDT |
"Yes, the top line Unix machines, including Solaris, have better stability and scalability than Linux." I think this is all subjective, and I have seen metrics at our work where linux on hp servers blow the water out of solaris/aix boxes at a 1/5th of the cost, stability and scalability wise. put linux on a Z machine and it will be right there with the big boys while you access it from your nokia 880 - linux is pretty remarkable and can be made to do just about anything with the proper time, expertise, and money. you just can't say that with the aix's and solaris's. |
Sander_Marechal Apr 28, 2007 2:05 AM EDT |
Quoting:I have seen metrics at our work where linux on hp servers blow the water out of solaris/aix boxes at a 1/5th of the cost. What was the work done? That matters. The old Unices shine under heavy stress with traditional workloads (lots of transaction processing for example) but Linux would kick AIX's butt on e.g. renderfarms. |
dinotrac Apr 28, 2007 3:01 AM EDT |
>linux is pretty remarkable and can be made to do just about anything with the proper time, expertise, and money. you just can't say that with the aix's and solaris's. Ummm.....why not? That statement sounds pretty silly to me. Time, expertise, and money will go a long way on AIX and, I presume, on Solaris. |
jsusanka Apr 28, 2007 6:13 AM EDT |
"Ummm.....why not? That statement sounds pretty silly to me. Time, expertise, and money will go a long way on AIX and, I presume, on Solaris." then why hasn't there been any embedded aix or solaris? |
jdixon Apr 28, 2007 6:35 AM EDT |
> ...then why hasn't there been any embedded aix or solaris. Not enough demand for the vendor to create one. They're commercial products, and the vendor has to see an ROI great enough to cover the cost of developing the product. Anyone else who wants to won't have access to the code. That's Linux's great advantage. There is no up front cost and the code is freely available, so someone can develop something just because they want to. |
dinotrac Apr 28, 2007 6:52 AM EDT |
>then why hasn't there been any embedded aix or solaris. And what, exactly, has that got to do with stable systems? I don't remember seeing an embedded OS/400, either, and that's one of the most stable operating systems for one of the most stable platforms ever sold. On the other hand, there IS an embedded Windows. |
jsusanka Apr 28, 2007 8:14 AM EDT |
guess I should explain it a little clearer. "Ummm.....why not? That statement sounds pretty silly to me. Time, expertise, and money will go a long way on AIX and, I presume, on Solaris." but will it go as far as it can with linux? my point being that linux can do the tasks of aix and solaris, and it can be just as stable if not more stable and fault tolerant with less money. secondly, "And what, exactly, has that got to do with stable systems?" see above - nothing really, just making a point that linux can do more and scale more on more platforms than any of the oses that were mentioned. I still like the other systems and have been a system administrator (still am on aix and solaris) on all of them including os400 and they have all been a joy to administer. these systems were good at doing their single intended tasks and linux can be just as good if not better at those tasks for far less money. "What was the work done?" mostly oracle databases along with middleware servers along with some transaction processing - bill of lading, shipping etc. |
hkwint Apr 28, 2007 2:04 PM EDT |
IMHO Linux is more scalable than Solaris or AIX, since Linux runs from being an RTOS in embedded devices (think MontaVista) like traffic lights (talking about stability...), imaging tools in hospitals, to top-10 supercomputers. It may lack in some specific areas where AIX or Solaris may be the better choice, but in general I think Linux is more scalable. Nonetheless, there _is_ an RTOS version of Solaris, I found out. Maybe it's just a gut feeling. |
dcparris Apr 29, 2007 9:00 AM EDT |
Now I'm confused. I thought "scalable" refers to the range of applications and users the system can handle, while "portability" refers to the number of hardware platforms it can run on. I would say that GNU/Linux is more portable - or at least flexible. I cannot verify how well it scales in comparison to AIX or Solaris. There seems to be a perception that BSD is more stable than GNU/Linux. Using the same jargon helps me to keep clear what we're talking about. |
jdixon Apr 29, 2007 11:49 AM EDT |
> Now I'm confused. The terms are defined in a variety of ways, usually to suit the purposes of whatever person happens to be making a specific claim at that time. They're also widely misused. So you're not the only one. Your definition of scalable is a valid one. Scalable can also be used to refer to the number of processors/size of machine(s) the system can run on, as well as the generic amount of load it can handle. > There seems to be a perception that BSD is more stable than GNU/Linux. At least one network load study in the past has backed this up. I have no idea if it's still true or not. For the loads most people will have, the difference wasn't significant. |
herzeleid Apr 29, 2007 12:46 PM EDT |
Quoting: jdixon: Yes, the top line Unix machines, including Solaris, have better stability and scalability than Linux. Just to be clear about the point you're trying to make here: when you say "top line Unix machines", do you mean hardware, software, or the system as a whole? And when you say "linux", do you mean hardware, software, or the system as a whole? Are you comparing apples to hamburgers? Surely you can see how silly it is to compare hpux running on superdome hardware to linux running on your old dell. I work in a data center populated with hundreds of unix servers - linux, hpux, solaris, and aix - and I see no evidence for the claim that linux is somehow less stable than the old school proprietary unices. Let's be realistic here: all of the flavors I mentioned have hundreds of days of uptime in our data center, so it's a little hard to pick one and say "this is more stable". I will say that linux seems to be every bit as robust as, and in fact requires far less rebooting than, either solaris or hpux. On linux we do library upgrades, network changes, and kernel parameter tuning, all on the fly, without any interruption of service, while adjusting the same kernel parameter on e.g. solaris requires editing /etc/system, followed by a reboot. hpux is stable once you have it set up, but if you change anything all bets are off - I've seen the hpux network stack get so horked up by simply changing an IP address that the machine had to be rebooted. The recent DST time zone changes are another example - with linux, the update was quick and easy, requiring just a few seconds to install the updated timezone package, with no downtime. With solaris, patching and a reboot were necessary. I'm not sure about hpux - I was fortunate enough not to be involved in that maintenance, but it wouldn't surprise me if that required a reboot too. IOW, I don't buy the hype that the old proprietary unices are somehow more stable than linux. 
I've worked with them all for years, and linux is in their league. You have to consider the system as a whole, however. |
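herzeleid's on-the-fly tuning point can be made concrete. A minimal sketch of the Linux side, using the real `net.ipv4.ip_forward` tunable as the example; the specific parameter and values are illustrative only, not a tuning recommendation:

```shell
# /proc/sys is the live kernel-parameter interface; reading a file
# shows the current value in the running kernel - no reboot involved.
cat /proc/sys/net/ipv4/ip_forward

# sysctl(8) is the usual front end for the same files
# (shown commented in case procps isn't installed):
#   sysctl net.ipv4.ip_forward

# Changing a value takes effect immediately (root required):
#   sysctl -w net.ipv4.ip_forward=1
#   echo 1 > /proc/sys/net/ipv4/ip_forward

# Contrast the Solaris of that era: edit /etc/system,
# then reboot before the new setting applies.
```

The persistent form of such a change goes in /etc/sysctl.conf, which is reapplied at boot, so a host can be tuned live and still come back up with the same settings.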
Sander_Marechal Apr 29, 2007 2:29 PM EDT |
herzeleid, we're mostly comparing Linux vs Unix on the big uber-hardware here. After all, that's the kind of hardware a company that requires the extra percentage point of stability at great cost usually runs. There's a very simple explanation why old Unix runs better on big old mainframes: testing. There aren't too many mainframes around. Old unix was built to run on them. Linux was ported to them and isn't tested as much as, say, Linux on x86_64 or Linux on ARM, because there simply aren't that many machines around. But guess what, we can still dominate the world if we beat old Unix 99% of the time :-) |
jdixon Apr 30, 2007 3:21 AM EDT |
> do you mean hardware, software or the system as a whole? System as a whole. > So, are you comparing apples to hamburgers? In this case, you'd have to ask Qantas. They're the ones doing the comparison. Yes, it's probably an apples to oranges comparison, but isn't that the comparison most businesses are going to be making? > Lets be realistic here, all of the flavors I mentioned have hundreds of days of uptime in our data center, so it's a little hard to pick one and say "this is more stable". Didn't I say the difference wouldn't matter to most people? Anyway, we're talking top end systems and high loads here. Linux simply hasn't been used enough in that environment to give us valid data. And guess what, most of the companies that are using it probably won't give out that data. They consider it a "competitive advantage". The data I've seen tends to be several years old and tends to confirm that, at that time, Linux was not yet up to handling that environment. I suspect the 2.6 kernels have gone a long way to correcting those problems, so that may not be the case now. |
dinotrac Apr 30, 2007 5:31 AM EDT |
Stability means a lot of things, and the Linux developers do some things that aren't real friendly to system stability. Perversely, that includes improving the linux kernel. For example, there was a change in the wireless API for 2.6.20. That change required that wireless device drivers be updated to match. No big deal in the overall scheme of things, but a temporary annoyance for me, using a vendor-provided (but free) driver that hasn't yet been updated. I expect that the vendor will update the driver shortly, somebody else will update it, or I will. For the moment, I can't move to 2.6.20 (which means I can't use the stock Ubuntu 7.04 kernel, btw). Take this tiny and insignificant bit and realize that it is SOP for the Linux kernel, and you have a destabilizing influence on Linux. API changes mean percolating rewrites and more opportunities for bugs and vulnerabilities. It's definitely good news/bad news, and the good news, I think, outweighs the bad. Still, you have to consider whether moving forward for kernel improvements (or bug fixes) takes more deliberation with Linux than it would with AIX or Solaris. That must be weighed against the fact that Linux could not improve so rapidly or accommodate so many uses had the developers chosen to follow more traditional philosophies of preserving APIs. Linux does a lot of things very, very well. A few things it does a bit less well. C'est la vie. |
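The driver breakage dinotrac describes is visible from userspace: a module is built against one kernel's API/ABI and records it as a "vermagic" string, and the kernel refuses to load modules whose vermagic doesn't match the running kernel. A small sketch of checking both sides; the module name here is hypothetical:

```shell
# The running kernel's version string; an out-of-tree module must be
# rebuilt whenever this changes in an API-incompatible way.
uname -r

# What a given module was built against (hypothetical module name,
# shown commented since it won't exist on an arbitrary machine):
#   modinfo -F vermagic mywifi.ko
```

If the two strings disagree, the module load fails, which is exactly why a vendor-provided driver lags behind a fresh kernel until someone rebuilds it against the new API.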
pat Apr 30, 2007 9:24 AM EDT |
So, IBM made a sale, good for them. This is better than reading that they are switching to that toy database that runs on that toy operating system. |
dinotrac Apr 30, 2007 9:29 AM EDT |
Pat - No kidding! Since when does a success for IBM (or Sun or HP, etc) equal a knock on Linux? Last I looked, they don't run Lexus Sedans at Indy or in Formula 1. Lexus seems untarnished by the fact. |
herzeleid Apr 30, 2007 10:25 AM EDT |
Quoting: dino: Stability means a lot of things, and the Linux developers do some things that aren't real friendly to system stability. Perversely, that includes improving the linux kernel. That's a red herring. Tell me, what effect do the wireless API changes in 2.6.21 have on the long-running SLES 9 servers in our data center? I'll give you a hint: it has absolutely no effect. The stability of deployed systems is completely independent of any upstream kernel changes. IOW, what does a manager of such systems care about current kernel development activity? He doesn't care, since it's of no relevance, and his SLES servers will continue to run, rock solid and dependable, regardless of what changes are made in 2.6.2x. |
azerthoth Apr 30, 2007 10:39 AM EDT |
herzeleid, there is a fairly large hole in your argument there, that being the difference between deploy(ing) and deployed. Not many people change the engine in their car after they buy it, but lots of people will look at maintenance issues before they buy it. |
jimf Apr 30, 2007 10:55 AM EDT |
> Not many people change the engine in their car after they buy it, lots of people though wil look at maintenance issues before they buy it. Sure will, and if the parts aren't going to be available, can't be retrofitted, or personnel aren't going to be competent to work with it, the deal may not fly. Problem is, I don't see Linux as the loser in that area, so herzeleid's still right. |
herzeleid Apr 30, 2007 11:06 AM EDT |
Quoting: azerthoth: herzeleid, there is a fairly large hole in your argument there, that being the difference between deploy(ing) and deployed. I'm sure the vendor of choice will be quite happy to handle all those details and shield the customer from having to deal with any of the API churn. That's what linux vendors do. So, a non-issue for anyone who obtains their distro from a vendor. (IOW all enterprise customers) |
number6x Apr 30, 2007 11:13 AM EDT |
Older established institutions that deal with many transactions have a long history of computer use that leads them to their definition of 'stability'. This definition is mainly based on years of IBM Big Iron mainframe use. Even Solaris and HP still don't match this kind of stability. They are used to very few central computers that rarely need maintenance. Any downtime is scheduled months in advance. AIX on power, HP and Solaris do come close, however. Linux can too, but the most successful Linux users seem to have come up with a new definition of 'stability'. Even though Linux can be used in this way, it often is not. People generally do not have a few spare Z/OS machines sitting in a closet they can throw a copy of Linux on. They might be willing to create an LPAR for Linux, but they'll probably want to do it with IBM and not on their own. They will do nothing to jeopardize the multi-million dollar investment they already have in hardware and licensing. How can Linux provide legendary stability in a 'new' way? Think of how Amazon or Google uses Linux. They embrace the hardware failure rate in cheap off the shelf boxes. They build server farms where a few machines failing a day has no effect. It's really a new way of thinking about achieving a stable infrastructure. Free software like Linux and BSD makes it possible for less. However, the old established guys will keep thinking in their old established ways for some time. Their entire support infrastructure is built around vendor support contracts purchased along with server equipment as part of the equation. Getting them to change will take time. The fact that IBM will sell support for Linux used like Z/OS or Solaris on high end machines is a good start, but it is still frightening to many established businesses. The young upstarts can take the risks because they aren't as calcified as the old timers yet. Barring a major economic situation (like bankruptcy), Qantas will probably remain slow to change. |
jdixon Apr 30, 2007 11:20 AM EDT |
> Barring a major economic situation (like bankruptcy), Qantas will probably remain slow to change. Or the success of a number of companies embracing Linux as their stable platform of choice. Nothing convinces the old guard like success. They just don't want to be the guinea pigs. |
number6x Apr 30, 2007 12:54 PM EDT |
jdixon, Good point. More companies showing how to lower costs and increase profits through FOSS would help get some older companies out of the rut they are stuck in. I keep thinking back to the anti-microcomputer policies of the late 70's and early 80's. By the mid-80's most companies started using what were starting to be called PC's in more ways. By 1990, it was acceptable even for bosses to have PC's on their desks. More Linux use will lead to more Linux use, and so on... |
dinotrac Apr 30, 2007 1:05 PM EDT |
OK, herzeleid, I will try to make this simple -- A) The wireless change is what we call an example. Shocking though it may seem, wireless devices are not some island unto themselves. This strange attitude of "improving the kernel is more important than maintaining APIs" shows up in other places as well. Go figure. And...we don't even need to bring up things like virtual memory or dispatching algorithms. B) If you read my post, you will see that I was not criticizing the approach. It is why Linux has matured so much so rapidly and why it does so many things so well. It is not, however, the best way to get ultimate reliability. Talk to folks who worked on the old pre-deregulation telephone switches. Ask them how easy it was to get changes approved for that software. C) IOW, what does a manager of such systems care about current kernel development activity? That manager can go merrily about his way forever. However, should he ever need a new distribution/kernel -- say, to maximize throughput, to use a new device, etc. -- he (or she) will care very much about current kernel development activity. Every change introduces new potential for bugs -- bugs that don't bite others, but that his installation, because of its size or the way that it pushes the system, may encounter. Her distributions may be patched with local patches to support hardware that isn't generally on the market. The drivers for that hardware may be upgraded due to the API changes and they may have bugs that don't show up in testing, etc. Again, I've never said that Linux is unstable. Not even remotely. I have sold a few jobs based on its stability. However, when you get into the rarefied air of 5 9s and better -- 5 minutes of downtime or less in a year -- the Linux development process is not the best. |
dinotrac Apr 30, 2007 1:12 PM EDT |
jdixon - >They just don't want to be the guinea pigs. Close, but not quite right. Lots of folks are deploying Linux in lots of ways in mission-critical systems. I don't think deploying Linux makes you a guinea pig. In fact -- look at the actual article!!! Qantas had no problem putting its systems on Linux. They were happy to be "guinea pigs", if you will. They are switching from Linux to AIX, not choosing between the two for a green-field project. |
jdixon Apr 30, 2007 1:35 PM EDT |
> Qantas had no problem putting its systems on Linux. Point taken. I was arguing the more general case in my response to number6x however, not something specific to Qantas. |
dinotrac Apr 30, 2007 2:40 PM EDT |
>I was arguing the more general case in my response to number6x however, not something specific to Qantas. Sure. I think, however, that Linux really has grown beyond the guinea pig stage. More likely a matter of businesses being VERY conservative with their bread and butter systems. Same factor will apply to companies who won't move from mainframes to, say, Solaris -- the safest thing to do is nothing. When you can't do nothing, the next-safest thing to do is whatever most resembles what you're doing now. When you've got billions riding on your systems, you handle them with care. |
jdixon Apr 30, 2007 3:35 PM EDT |
> More likely a matter of businesses being VERY conservative with their bread and butter systems. Same thing from two different viewpoints, Dino. You say tomato, I say... |
herzeleid Apr 30, 2007 3:38 PM EDT |
Quoting: dino: When you've got billions riding on your systems, you handle them with care. My point exactly. So why on earth would the manager of such a system ever decide to "download the latest kernel" and compile it himself? Answer: no way he would ever do anything like that. Mr. Manager runs the official, vendor supplied kernels, thank you very much! So I'm still having a hard time seeing any concrete way in which the admittedly rapid pace of kernel development could have any bearing whatsoever on already deployed systems, such as the 30 SLES servers which run our infrastructure here. |
dinotrac Apr 30, 2007 4:17 PM EDT |
>So why on earth would the manager of such a system ever decide to "download the latest kernel" and compile it himself? Ummm.... 1. Using a distribution's official kernel is no guarantee against the kind of scenarios I have described. 2. Your experience in the great wide world seems to be pretty limited. There are shops who do things because they have little choice in the matter. |
herzeleid Apr 30, 2007 6:18 PM EDT |
Quoting: dino: 1. Using a distribution's official kernel is no guarantee against the kind of scenarios I have described. The kind of scenario you describe would never arise in a serious production environment. Think about it. You've got the company's business riding on your transaction server, and you're telling me that one fine day the manager will decide that he wants better device support for some newfangled gadget, so he'll roll the dice with a new self-compiled kernel? No, I don't think so. Quoting: dino: 2. Your experience in the great wide world seems to be pretty limited. There are shops who do things because they have little choice in the matter. Thanks for the vote of confidence - perhaps I can't boast as many years of IT experience as some of the dinosaurs here, but I've deployed linux (and other unix) solutions in a number of businesses, from one-man shops selling discount software online, to multinational corporations with data centers spread out over 400 miles. I hold down a day job as a unix system administrator with a fortune 100 company, and I consult on the side for small and medium businesses. I have to say, this is the first mention I've ever heard about my lack of experience. |
robntina Apr 30, 2007 8:05 PM EDT |
Something to consider:
I read a book recently that said "There are often two reasons to do (or not do) something. One that sounds good, and the real reason." Perhaps they got a new IT boss who, rather than look unknowledgeable about Linux, or perhaps being too lazy to learn the Linux system (the real reason), pushes to replace it with something he is more familiar with because it is more "stable" (sounds good). This happened in my company after I sold it, except in that case they hired a Windows guy to run the system (the first stupid decision - why would you hire a Windows guy to run a non-Windows system?). Months later I stopped in to visit and he had converted the whole business to his familiar Windows rather than have to look stupid (apparently it's ok to be stupid, just not look stupid) or learn anything. He can now justify his existence by dealing with all the Windows problems he is comfortable with that I never had to deal with. (I have to admit that after I switched over to Linux, I came to realize that most of the issues I had with "computers" were really issues with Windows. The problems just disappeared.) Now whatever their situation is, I would be willing to bet that the stability thing is just something that sounds good. Just a thought. |
dinotrac May 01, 2007 12:49 AM EDT |
>I have to say, this is the first mention I've ever heard about my lack of experience. Well, glad to be the first to make you aware that the world is a large and varied place. It sounds like you fall victim to a common conceit -- so common that it afflicts us all: if you ain't seen it, it don't exist. It's not a knock on you. It's unavoidable. It's been 20 years since I worked at EDS, and it still frames the way I look at IT shops. |
jdixon May 01, 2007 5:10 AM EDT |
> The kind of scenario you describe would never arise in a serious production environment. If they're already running a custom kernel, and the hardware changes, they'll probably have to compile a new custom kernel. Or are you arguing that no one would ever run a custom kernel in a "serious production environment"? That argument doesn't sound plausible to me. |
dinotrac May 01, 2007 6:29 AM EDT |
>That argument doesn't sound plausible to me. It's not. I'd forgotten about it before, but I remember reading an article about some sort of sorting machine used by the post office (and, I believe, only by the post office) that required custom drivers. Though not the norm, there are many companies/organizations throughout the IT universe that have "outside of the norm" needs. Heck, Google! They modify the OS pretty heavily. Of course, their mods are aimed at surviving unreliability, but, hey!, it's an example of custom kernels in a production environment -- and, yes, a web site/search engine IS a production environment if it is your company's product. |
herzeleid May 01, 2007 8:14 AM EDT |
Quoting: jdixon: If they're already running a custom kernel, and the hardware changes, they'll probably have to compile a new custom kernel. Or are you arguing that no one would ever run a custom kernel in a "serious production environment"? That argument doesn't sound plausible to me. Nice try, but how can you say on the one hand that these serious production environments are ultra conservative and schedule changes months in advance, and then turn around and say that they fly by the seat of their pants and run experimental kernels? You can't have it both ways. If they are conservative and uptight, they can be just as conservative and uptight with SLES running on HP/Compaq hardware as they can on a hpux superdome or a sunfire e12k. Such a linux server will stay up for 1500 days, or until you take it down. Your call. OTOH, if they are technically skilled and confident, and have no problem tweaking the kernel and writing custom code (google, slashdot, amazon), linux is an even better fit than its proprietary cousins, since it is ideal for such scenarios. So, no matter how you cut it, linux works. I say the Qantas thing was a PR stunt and the result of some sort of agenda, not a genuine technical issue. |
jdixon May 01, 2007 9:00 AM EDT |
> and then turn around and say that they fly by the seat of their pants and run experimental kernels? Where did I say they might run experimental kernels? I said they might run "custom" kernels. That's not even remotely the same thing. A custom kernel is one compiled to your specifications, nothing more or less. > OTOH, if they are technically skilled and confident, and have no problem tweaking the kernel and writing custom code (google, slashdot, amazon), linux is an even better fit than its proprietary cousins, since it is ideal for such scenarios. Well, at least we agree on something. > So, no matter how you cut it, linux works. I say the Qantas thing was a PR stunt and the result of some sort of agenda, not a genuine technical issue. That's always possible, but there's simply no way we can know for certain. It's always possible that Qantas hit on some obscure kernel bug and weren't able to get it resolved to their satisfaction. It's also possible they were lying through their teeth. Or anything in between. We have absolutely no way of knowing. |
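jdixon's definition ("a custom kernel is one compiled to your specifications") can be sketched in commands. It's just the stock source built with a site-specific configuration; the build lines below are the standard sequence, shown commented since they assume an unpacked kernel source tree:

```shell
# What we're running now: /proc/version names the running kernel,
# the compiler it was built with, and the build date.
cat /proc/version

# From a kernel source tree, the usual custom-build sequence is:
#   make oldconfig        # reuse the existing .config, answer only new questions
#   make -j"$(nproc)"     # build the kernel image and modules
#   sudo make modules_install install   # install under /lib/modules and /boot
```

Nothing experimental about it: the same stable source everyone else runs, configured for the site's hardware and workload.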
herzeleid May 01, 2007 9:18 AM EDT |
Quoting: jdixon: That's always possible, but there's simply no way we can know for certain. It's always possible that Qantas hit on some obscure kernel bug and weren't able to get it resolved to their satisfaction. It's also possible they were lying through their teeth. Or anything in between. We have absolutely no way of knowing. Call it an intuitive gut feeling backed by years of experience. |
jdixon May 01, 2007 9:50 AM EDT |
> Call it an intuitive gut feeling backed by years of experience. OK. But stating that it's an opinion doesn't really back up your argument. :) My intuition agrees that there's more to the story than was covered in the article. We're just disagreeing about the details. |
herzeleid May 01, 2007 10:37 AM EDT |
Quoting: jdixon: OK. But stating that it's an opinion doesn't really back up your argument. :) Not an opinion - a flash of intuition. Big difference. I won't even try to explain that, but it works for me. End of story. OTOH, the arguments I made were an effort to debunk some of the straw men I saw bandied about here (e.g. that the rapid pace of kernel development somehow affects the stability of already deployed linux systems) |
dinotrac May 01, 2007 12:08 PM EDT |
> the rapid pace of kernel development somehow affects the stability of already deployed linux systems Who made that point? Sure wasn't me. |
herzeleid May 01, 2007 12:58 PM EDT |
Quoting: dino: Who made that point? Sure wasn't me. Wonderful! We're on the same page then. |
dinotrac May 01, 2007 6:16 PM EDT |
>Wonderful! we're on the same page then. Page ain't the problem. It's the chapter I'm not sure about... |
herzeleid May 01, 2007 6:24 PM EDT |
Quoting: dino: Page ain't the problem. It's the chapter I'm not sure about... Thank you. It takes a big man to admit when he's wrong. |
dinotrac May 02, 2007 3:34 AM EDT |
>Thank you. It takes a big man to admit when he's wrong. Sure does. When I am, I will. |