Virtualization is a symptom of OS failure
Grishnakh May 19, 2011 8:30 PM EDT

Maybe I'm missing something, but this whole idea of "virtualization" seems like something that shouldn't exist, and is really a work-around for the failure of operating systems to do what they're supposed to do. OSes are supposed to interact with the hardware directly, and partition it and control access to it, so that multiple users can all use the same machine and run multiple processes on it without anything interfering with anything else (neither users nor processes).

Back in the days of the mainframe, it was perfectly normal for one single computer to have dozens or hundreds of people logged into it at once, all doing different things, and none interfering with the others beyond having to share the limited resources available. No one was able to "crash" the system, so the only way you could tell anyone else was on there, besides looking at a "who" command (or whatever the equivalent was for that OS), was if your jobs ran really slowly because there were too many users. Years later, we got some of that technology as multiuser, multiprocessing operating systems became common on consumer-grade PC hardware. Lots of people can log into a single Linux machine, for instance, and shouldn't (in theory) have any trouble beyond having to share limited resources.

So why do we need virtualization? As far as I can tell, two reasons:

1) Crappy application software that only runs on certain OSes. If you need App A which only runs on OS X, App B which only runs on OS Y, and App C which only runs on OS Z, then you set up a system with a VM so that you can run all three OSes on the same machine. But this isn't some kind of triumph of technology; it's something that should never have been needed in the first place. If the stupid app vendors wrote their apps decently and made versions available for all OSes, this wouldn't be necessary.

2) Users who demand their "own" machine, so they don't have to worry about other users screwing it up. This demand doesn't seem based on reality, just blind fear (except maybe with Windows, where OS installations "degrading" does seem to be a fact of life), but there are lots of places where Linux VMs are actively used so different customers can have their own machines. Again, I don't see why this is necessary. Just use the same machine everyone else is using. If you really must have your own, to use a particular distro or whatever, either rent your own separate machine (assuming this is for hosting) or buy one.
Sander_Marechal May 20, 2011 1:02 AM EDT

Sorry, but you are totally missing the reason for VM technology. The reasons are machine utilisation and availability (and, in the end, money).

There are many sound technical reasons to distribute functions over multiple servers. But it's a waste of money to buy 10 servers that each run at only 25% utilisation. Virtualisation means that you only need 3 servers in that case, saving you the cost of 7 machines. And that's a lot (when you factor in rackspace, bandwidth, maintenance, etcetera).

Availability is a related problem. When you have a really important server, you can't afford to have some hardware component fail and take it offline. To solve that in a non-virtual way, you could buy special hardware where everything is redundant (expensive! Starts at $15,000 or so), or you need extra servers for redundancy (assuming that your application can even run in such a way). Virtualisation means a separation between your OS/software and the physical hardware. When hardware fails, OSes can migrate smoothly to other hardware and continue running without a hitch. Literally. There's this really cool video of two servers running VMware. They are showing a remote desktop where they are watching some video in a video player. Then they yank the cable on the one server, triggering a migration to the other server. The video they are watching on the remote desktop keeps running and doesn't drop a single frame. That's cool stuff.

Virtualisation technology is about saving boatloads in hardware cost while ensuring availability in the face of hardware failure. Everything else is just icing on the cake.
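The consolidation arithmetic above can be sketched in a few lines. This is only an illustration of the back-of-the-envelope calculation in the comment (the 10-servers-at-25% figures are the ones quoted there; `hosts_needed` is a hypothetical helper, and real capacity planners would also leave headroom for load spikes):

```python
import math

def hosts_needed(servers, avg_utilization):
    """Total load in whole-machine units, rounded up to whole hosts."""
    return math.ceil(servers * avg_utilization)

# The figures quoted in the comment: 10 servers, each only ~25% utilized.
# 10 * 0.25 = 2.5 machine-units of load, so 3 consolidated hosts suffice,
# saving the cost of 7 machines (plus rackspace, power, cooling, etc.).
print(hosts_needed(10, 0.25))  # -> 3
```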
krisum May 20, 2011 3:56 AM EDT

> And that's a lot (when you factor in rackspace, bandwidth, maintenance, etcetera).

And most significantly: power and cooling.
dinotrac May 20, 2011 6:01 AM EDT

@Grishnakh - Yes, you are missing something, including an appreciation of history. VMs are not some new-fangled way to make up for the shortcomings of new OSes. The first commercially marketed VM (that I'm aware of) was IBM's VM/370, all the way back in 1972. Back in the bad old mainframe days (big old multi-user systems monitored by professional systems people and costing bazillions of dollars), we found plenty of good reasons to use VM, including running OSes optimized for different tasks on the same hardware, OS configuration testing, etc.

The isolation offered by a VM (to those folks you sneer at as operating out of blind fear) is a valuable commodity. This may come as a shock to you, but there are people out there who try to do bad things to computer systems. There are also people who do the systems equivalent of smoking in bed. There is nothing stupid or blind about not wishing to be at the mercy of either group.

A little homework is in order.
helios May 20, 2011 9:53 AM EDT

> A little homework is in order.

Awwww, Dad... I wanted to watch Doogie Howser....
dinotrac May 20, 2011 10:07 AM EDT

Sorry, son, you'll just have to get your Doogie fix another day.
humdinger70 May 20, 2011 11:58 AM EDT

Ah yes! VM/370 (or VM/SP or z/VM these days). The halcyon days. You could run the interactive CP/CMS, a batch OS (DOS/VS, MFT/MVT, MVS, OS/390), or even (gasp!) VM/370 itself. Right: the first OS that could run a copy of itself while running on the bare metal. What did it mean? It meant you could apply the patches IBM supplied (PTFs and APARs in IBM terminology) in a test environment, and if the OS crashed, only the VM was affected, not your production stuff. Brilliant! IBM really didn't want to market it, and wanted to withdraw it from active service to push its cash cows like DOS/VS and OS/MVS, but the shops that installed it complained bitterly.
hkwint May 20, 2011 1:04 PM EDT

But if VM/370 was 'visionary', why didn't it last? From history lessons, I only remember CTSS from '62, and later in the sixties Multics and Unics. Seems I haven't reached the 70's yet or so.
dinotrac May 20, 2011 1:17 PM EDT

Hans: Why didn't it last? The same reason DOS, Linux 1.0, and others didn't last. New versions came along! z/VM is the modern descendant of VM/370.
gus3 May 20, 2011 4:56 PM EDT

Also, HP's PA-RISC machines did hardware partitioning (enabling multiple OSes) from the OpenBoot PROM level, IIRC. At least, I think I saw something like that a couple of times...
Koriel May 20, 2011 6:40 PM EDT

VMs are about saving money, regardless of whether you're a server farm or a developer. As a developer, my customers expect me to support and develop apps for whatever platform they are using. At the moment that means I have to support Linux, Win XP, Win 2000, Win NT and Windows 7, and I expect OS X to get added to that list soonish.

Now, the problem is I'm not rich; I can't afford to buy all the hardware to support all those configs. So what do I do? The answer is simple: the machines I do have all run Linux. I then load up a copy of VMware Workstation for the princely sum of about $190 and have a VM for whatever OSes I am currently supporting. Cheap and efficient!

It would be even cheaper if the VirtualBox crowd would get its act together, but I won't hold my breath on that score. Sun didn't fix the limitations in VBox and I don't expect Oracle will either.
Grishnakh May 20, 2011 6:49 PM EDT

No one's yet explained why, in theory, VMs are really needed.

Yes, if your customers demand support for various OSes, then VMs are indeed a valuable work-around, but a work-around for the failure of OSes to do their job and of application software to not be dependent upon particular OSes. If you need VMs to achieve redundancy and fail-over, again, you're using the VM as a work-around for a failure of the OS to provide these functions, or a failure of the OS to be reliable. If you're using the VM to partition the hardware, so that you can use fewer machines to do the work of more machines, then either you're not using the machines right (why are you only running one service per machine?), the application software isn't working right, or the OS isn't working right.

The OS is supposed to be the software that communicates directly with the hardware, with all other software running on top. There isn't supposed to be a middle-man. If there is, that's because the OS isn't working right.
hkwint May 20, 2011 7:56 PM EDT

Grishnakh: Maybe you know CNC machines. These days it seems I work with one, though I don't touch it. Anyway: in the ideal world, the milling cutter you fit them with never breaks or wears out. Reality is different, though. Sometimes it's a human error, sometimes just a bad tool, sometimes a software error, and if not, it just wears out. That's why you always have a spare one.

Some CNC machines also have some kind of 'tool repository', because with one tool you drill, with one tool you mill, and with another one you saw. Sure, one could devise a "universal tool" which does it all. I can use the milling cutter to saw, and the milling cutter can dig holes. But it's not efficient; it's better to choose the right tool. For example, with the saw I can cut larger sizes, with less loss of material, and faster.

I think that pretty much sums up why VMs are needed: because reality is not as perfect as theory.
azerthoth May 20, 2011 8:51 PM EDT

There are also isolation considerations, on top of automatic fail-over and resource conservation. Then add in sandboxing, playpens, and testing/development flexibility.
krisum May 22, 2011 2:50 AM EDT

@Grishnakh

> Yes, if your customers demand support for various OSes, then VMs are indeed a valuable work-around for this failure of OSes to do their job and for application software to not be dependent upon particular OSes.

That is not the primary application (except for a few desktop users). Read Sander's response again.

> If you need VMs to achieve redundancy and fail-over, again, you're using the VM as a work-around for a failure of the OS to provide these functions, or a failure of the OS to be reliable.

OSes can hardly provide hardware redundancy without a special underlying isolation layer (like a VM).

> (why are you only running one service per machine?)

One reason is network/service setup and the desired isolation characteristics in any medium/large setup.