Wednesday 26 September 2007

Virtualization – it’s really clever

I’ve recently been taking a look at virtualization and the IBM Virtualization Engine platform, and I’ve got to say that I am very impressed with the concept behind it. I’d really like to hear from people who have implemented it to see how successful they have found it to be.

Virtualization started life in the late 1960s with the original implementation of VM/CMS. The problem that VM/CMS solved was how to let lots of people work at the same time on the fairly limited hardware that was available. It was not unknown in those days for developers to book slots on the hardware to do their work. CMS (Conversational Monitor System) was developed at IBM’s Cambridge Scientific Center and gave each person sitting at a terminal their own virtual computer. They each appeared to have disks, memory, processing power, and devices like card readers and card punches available to them. They would do their work, and VM (Virtual Machine) would run as a hypervisor (rather than an operating system as such), dispatching the different virtual machines according to the priorities it was given.
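
To give a feel for what that dispatching amounts to, here is a toy sketch in Python of a hypervisor handing out CPU time slices in proportion to each virtual machine’s priority. The class names and the weighted round-robin scheme are my own invention, purely for illustration; real VM dispatching is far more sophisticated than this.

    import itertools

    # Purely illustrative: each "virtual machine" is just a record with a
    # priority, and the "hypervisor" hands out time slices in proportion
    # to that priority.
    class VirtualMachine:
        def __init__(self, name, priority):
            self.name = name
            self.priority = priority   # higher number = more CPU time
            self.cpu_slices = 0

        def run_slice(self):
            self.cpu_slices += 1       # stand-in for actually running the guest

    def dispatch(vms, total_slices):
        # Weighted round-robin: a priority-3 machine gets three slices for
        # every one slice a priority-1 machine receives.
        schedule = itertools.cycle(vm for vm in vms for _ in range(vm.priority))
        for _ in range(total_slices):
            next(schedule).run_slice()

    guests = [VirtualMachine("cms-user-1", 1),
              VirtualMachine("cms-user-2", 1),
              VirtualMachine("batch", 3)]
    dispatch(guests, total_slices=100)
    for vm in guests:
        print(vm.name, vm.cpu_slices)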

In the late 1980s and early 1990s, the story took the next step forward with the introduction of PR/SM (Processor Resource/System Manager) and LPARs (Logical PARtitions). This worked by having something like the VM hypervisor running in microcode on the hardware. Users could then divide up the available processor power and channels among different LPARs. The reason for doing so was that the same physical hardware could then run multiple operating systems: one large partition for production work and smaller ones for development and testing, all on the same machine. It made management and control much easier.
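
As a rough illustration of what dividing a machine into LPARs means, here is a small Python sketch that splits a notional pool of processors and channels into named partitions and checks that the plan actually fits on the box. The partition names and numbers are invented; real PR/SM configuration is done through IBM’s own tooling, not code like this.

    # A notional machine and a plan for dividing it into logical partitions.
    MACHINE = {"processors": 16, "channels": 64}

    LPARS = {
        "PROD": {"processors": 10, "channels": 40},   # large production partition
        "DEV":  {"processors": 4,  "channels": 16},   # development
        "TEST": {"processors": 2,  "channels": 8},    # testing
    }

    def validate(machine, lpars):
        # Check that the partitions don't claim more than the box actually has.
        for resource, capacity in machine.items():
            allocated = sum(lpar[resource] for lpar in lpars.values())
            if allocated > capacity:
                raise ValueError(f"{resource}: {allocated} allocated, only {capacity} available")

    validate(MACHINE, LPARS)
    print("LPAR plan fits on the physical machine")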

The next big leap forward came in the middle of 2005 with the introduction of the z9 processor. This took the idea of processors and peripherals appearing to be available a step further: rather than physically dividing the processor and channels into LPARs, everything was divided logically. The resulting virtual machines were then prioritized and dispatched accordingly. What it also did, and this is the really clever part, was allow insufficient or non-existent resources to be simulated so that they appeared to be available.
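
The difference between that physical division and this logical division can be sketched as follows: guests are given weighted shares of the processor pool rather than dedicated engines, so the virtual CPUs they appear to have can add up to more than physically exists. Again, the names and numbers below are invented, just to illustrate the idea.

    PHYSICAL_PROCESSORS = 8

    # Each guest is defined by how many CPUs it *appears* to have and a
    # weight that decides its share of the real processors under contention.
    virtual_machines = {
        "prod": {"virtual_cpus": 6, "weight": 60},
        "dev":  {"virtual_cpus": 4, "weight": 25},
        "test": {"virtual_cpus": 4, "weight": 15},
    }

    total_virtual = sum(vm["virtual_cpus"] for vm in virtual_machines.values())
    print(f"{total_virtual} virtual CPUs presented on {PHYSICAL_PROCESSORS} real ones")

    total_weight = sum(vm["weight"] for vm in virtual_machines.values())
    for name, vm in virtual_machines.items():
        entitled = PHYSICAL_PROCESSORS * vm["weight"] / total_weight
        print(f"{name}: sees {vm['virtual_cpus']} CPUs, entitled to about {entitled:.1f}")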

If that was the end of the story, it would still be quite clever, but it’s not. IBM long ago realized that it couldn’t pretend it was the only supplier of computer equipment. Most companies, through takeovers, mergers, and anomalous decisions, have ended up with a mish-mash of server hardware, most of which eventually becomes the IS department’s responsibility. IBM has cleverly extended its virtualization concept to cover all the servers that exist at a site, combining them all into one large unit. Now you might think this would be large and unwieldy and completely the wrong thing to do, but in fact the opposite is true. It becomes possible to control these disparate servers from a single console and monitor them from one place (which could be a browser), and management becomes much easier. It’s not only possible to manage System z, System i, and System p components; it’s also possible to manage x86-based servers. It can also manage virtual machines created by Microsoft Virtual Server, VMware, and open source Xen.
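
The single-console idea boils down to putting a common interface in front of each kind of hypervisor so everything can be listed and monitored from one place. The little Python sketch below shows the shape of that pattern; none of these classes correspond to the real IBM Virtualization Engine, VMware, or Xen APIs, they are just stand-ins to illustrate the idea.

    from dataclasses import dataclass

    @dataclass
    class Guest:
        name: str
        host: str
        platform: str
        running: bool

    class HypervisorAdapter:
        # Common interface each platform-specific adapter implements.
        def list_guests(self):
            raise NotImplementedError

    class XenAdapter(HypervisorAdapter):
        def list_guests(self):
            # In reality this would query a Xen host; here it's canned data.
            return [Guest("web01", "xen-host-1", "Xen", True)]

    class VMwareAdapter(HypervisorAdapter):
        def list_guests(self):
            return [Guest("db01", "esx-host-2", "VMware", True),
                    Guest("build01", "esx-host-2", "VMware", False)]

    def console_view(adapters):
        # One consolidated view across every managed hypervisor.
        for adapter in adapters:
            for guest in adapter.list_guests():
                state = "up" if guest.running else "down"
                print(f"{guest.name:10} {guest.platform:8} {guest.host:12} {state}")

    console_view([XenAdapter(), VMwareAdapter()])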

So, basically, IBM has come up with a way of making virtual components appear to be available to virtual computer systems running across almost any server that currently exists. This makes the best use of the available resources to suit the workload. And it has done it in a way that makes management of such a complex system fairly straightforward. Really clever, eh?
