
Server Virtualization – what, why & how

So, you think you know what “virtualization” is? According to the Collins on-line dictionary, it isn’t a real word. Of course not. In this business, we like to turn verbs into nouns. Collins does, however, define the verb “virtualize”: to transform (something) into an artificial, computer-generated version of itself which functions as if it were real.

Virtual goods, virtual services, and so on, are all part of the on-line gaming lexicon. Anything bought or sold in a “virtual” world is, as the dictionary states, made to seem as though it is real. But it isn’t. Of course not.

What does any of this have to do with data networks? In the networking world – some might argue that it isn’t real either – virtualization refers to the logical partitioning of a physical server, located in a data center or cloud computing center, into a number of “virtual machines”. This is done to permit the server’s hardware to be shared by a number of applications that may require distinct, and potentially different, run-time systems and services.
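To make that idea a little more concrete, here is a small, purely illustrative sketch in Python of what “carving up” a physical server into virtual machines amounts to. Every name and number in it is hypothetical, and a real hypervisor does vastly more than this – the point is simply that each VM reserves a predictable slice of the box.

```python
# Illustrative sketch only: modeling the idea of partitioning one physical
# server into several virtual machines, each with its own guest OS and a
# reserved share of CPU, memory and I/O. All names and figures are made up.
from dataclasses import dataclass, field


@dataclass
class VirtualMachine:
    name: str
    guest_os: str
    vcpus: int        # virtual CPUs presented to the guest
    memory_gb: int    # memory reserved for this VM
    io_mbps: int      # I/O bandwidth budget for this VM


@dataclass
class PhysicalServer:
    cores: int
    memory_gb: int
    io_mbps: int
    vms: list = field(default_factory=list)

    def can_host(self, vm: VirtualMachine) -> bool:
        """Admit a new VM only if unreserved capacity remains."""
        used_cpu = sum(v.vcpus for v in self.vms)
        used_mem = sum(v.memory_gb for v in self.vms)
        used_io = sum(v.io_mbps for v in self.vms)
        return (used_cpu + vm.vcpus <= self.cores
                and used_mem + vm.memory_gb <= self.memory_gb
                and used_io + vm.io_mbps <= self.io_mbps)

    def add(self, vm: VirtualMachine) -> None:
        if not self.can_host(vm):
            raise RuntimeError(f"insufficient capacity for {vm.name}")
        self.vms.append(vm)


server = PhysicalServer(cores=32, memory_gb=256, io_mbps=10_000)
server.add(VirtualMachine("social-app", "Linux", vcpus=8, memory_gb=64, io_mbps=2_000))
server.add(VirtualMachine("legacy-erp", "Windows", vcpus=4, memory_gb=32, io_mbps=1_000))
print([vm.name for vm in server.vms])
```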

For example, we might want to run Oracle’s new Social Networking application in its own virtual environment to provide it with a predictable amount of CPU cycles, memory space, I/O bandwidth, etc., to satisfy a requirement for more predictable user response times. How do we do this?

Let’s examine the diagram below:

The heart of server virtualization lies in the “hypervisor” (or “virtual machine manager” in non-IBM System/360 lingo), a layer of software that directly controls the server’s hardware – CPU, memory, I/O devices, control registers (below) – while supplying a software emulation of that same hardware to the operating system (above). Note that the drawing calls out the physical I/O interfaces at the bottom; there is also a hardware emulation layer above (see the smaller hardware icons located in the layer above the “VMware Virtualization Layer”).

We want the operating system supporting the applications to run without modification, so any hardware resources that the operating system relies upon – timers, memory, control registers, I/O devices – have to be provided by way of specialized software, supplied by the hypervisor vendor, which emulates those hardware resources. One or more instances of potentially different operating systems run on top of the emulated hardware layer. Each operating system supplies the run-time services for the applications (above) as if it were actually controlling the underlying hardware, just as it would in a non-virtualized computing environment.
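Here is a deliberately simplified, hypothetical sketch of that “trap and emulate” idea: the guest OS believes it is reading real hardware (a timer, in this toy example), but the access is intercepted by the hypervisor and satisfied from software-emulated state. Real hypervisors rely on processor support and are far more involved; this is only meant to show where the emulation layer sits.

```python
# Hypothetical, toy-scale illustration of "trap and emulate".
# Names, classes and behavior are invented for illustration only.

class EmulatedTimer:
    """Software stand-in for a hardware timer register."""
    def __init__(self):
        self.ticks = 0

    def read(self):
        self.ticks += 1  # pretend time advances on each read
        return self.ticks


class Hypervisor:
    """Owns the real hardware; presents emulated devices to each guest."""
    def __init__(self):
        self.devices = {}  # per-guest emulated device sets

    def register_guest(self, guest_name):
        self.devices[guest_name] = {"timer": EmulatedTimer()}

    def handle_trap(self, guest_name, device, op):
        # Invoked when a guest's privileged hardware access traps.
        if op == "read":
            value = self.devices[guest_name][device].read()
            print(f"[hypervisor] {guest_name}: read {device} -> {value}")
            return value
        raise NotImplementedError(op)


class GuestOS:
    """An unmodified OS image; it 'reads hardware' without knowing it traps."""
    def __init__(self, name, hypervisor):
        self.name = name
        self.hv = hypervisor
        hypervisor.register_guest(name)

    def read_timer(self):
        return self.hv.handle_trap(self.name, "timer", "read")


hv = Hypervisor()
linux_vm = GuestOS("linux-vm", hv)
windows_vm = GuestOS("windows-vm", hv)
linux_vm.read_timer()
windows_vm.read_timer()
linux_vm.read_timer()
```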

But why go to all this trouble? The motivation is really very simple – modern (Intel) processors are extremely fast, capable of executing as many as 12 instructions per clock cycle (Intel’s latest effort in its Itanium processor family, code-named “Poulson”). At a clock rate of 3.3 GHz, that is a peak rate of roughly 40 billion instructions per second – about one instruction every 25 picoseconds. That is a lot of computing power, more than can typically be used even by scores of applications running simultaneously. Virtualization is a technology that allows us to more fully utilize the computing power available, while providing a very flexible environment for accommodating applications that may require different operating systems in which to run.
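For the arithmetic-minded, here is the back-of-the-envelope calculation spelled out; the clock rate and issue width are simply the figures quoted above, and this is a peak-throughput estimate, not a sustained one.

```python
# Back-of-the-envelope peak-throughput arithmetic from the paragraph above.
clock_hz = 3.3e9     # 3.3 GHz clock (figure quoted in the text)
issue_width = 12     # up to 12 instructions per clock cycle (Poulson)

peak_ips = clock_hz * issue_width    # peak instructions per second
time_per_instr_s = 1 / peak_ips      # average time per instruction at peak

print(f"peak throughput: {peak_ips / 1e9:.1f} billion instructions/second")
print(f"time per instruction: {time_per_instr_s * 1e12:.1f} picoseconds")
```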

This means that we can efficiently deploy many applications within a single physical server, creating potentially several “virtual machines” which can then support several applications at once. This saves rack space, power and cooling – always at a premium in a data or cloud computing center.
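To put some numbers on that – entirely made-up numbers, purely for illustration – here is a quick consolidation estimate showing where the rack-space and power savings come from when lightly loaded applications share physical servers instead of each getting their own box.

```python
# Hypothetical consolidation arithmetic; every figure below is invented.
apps = 24                 # application workloads to host
vcpus_per_app = 2         # reserved vCPUs per application VM
cores_per_server = 32     # physical cores per server
watts_per_server = 500    # rough power draw per server

servers_unvirtualized = apps  # one application per physical box
servers_virtualized = -(-apps * vcpus_per_app // cores_per_server)  # ceiling division

print(f"physical servers without virtualization: {servers_unvirtualized}")
print(f"physical servers with virtualization:    {servers_virtualized}")
saved_w = (servers_unvirtualized - servers_virtualized) * watts_per_server
print(f"approximate power saved: {saved_w} W")
```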

More applications supporting more users in a smaller footprint – that’s the “so what” of server virtualization. Today, virtualization is a requirement, not an option, in the data center and cloud computing markets – our customers expect and demand it.

Next time we will take a look at network virtualization, and examine the ins and outs of Avaya VENA – Virtual Enterprise Network Architecture – and its relationship to server virtualization.

wms

William Seifert is the former Chief Technology Officer of Data Solutions at Avaya. He has served on numerous corporate boards, and holds Bachelor and Master of Science degrees in Electrical Engineering from Michigan State University.
