Virtualisation and its Evolution over the Years

Early on, IT administrators began to realise that conventional methods of managing IT environments were no longer effective, because requirements in agile business environments were changing rapidly. The demand for faster time-to-market for applications, the stream of installation and upgrade requests, the need to quickly apply security patches to operating systems and applications, and many other management complications led to a new strategy for server handling and management.
IT organisations needed a more nimble strategy for managing environments, one that could easily adapt to rapidly changing needs and allow new functions to be deployed in days rather than weeks. Given these problems, it is natural that organisations progressively turned to newer technologies such as virtualisation.

IT challenges that led to virtualisation

Virtualisation, a technology long associated with mainframe computer systems, has been changing IT infrastructure thanks to its capability to consolidate hardware resources and decrease energy costs. This has led it to grow into a practical technology for mobile phones and individual personal systems, as well as a foundation for rethinking agile and cloud computing.

In this new era of cloud computing, virtualisation is driven by the need to spend budgets effectively, to achieve agility, and to meet the other challenges of the traditional environment.

Advances over the decades

Figure 1: Advances in virtualisation over the decades [figure1.png]

In the 1960s, time-sharing systems came to be preferred over batch processing. Virtualisation became a means to fully utilise hardware components and to make optimum use of systems on a time-sharing basis.

The term ‘hypervisor’ was first used in 1965, referring to software that accompanied an IBM RPQ for the IBM 360/65. A hypervisor, or Virtual Machine Monitor (VMM), allows multiple guest operating systems to run on a single host computer.

In the mid-60s, IBM's Cambridge Scientific Center developed the CP-40, the first version of CP/CMS. It went into production in January 1967 and was designed to implement full virtualisation. IBM mainframes have supported fully virtualised operating systems since the early 70s.
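
The hypervisor concept described above is still how guest operating systems are managed on a modern Linux host. As a brief, hedged illustration (assuming the libvirt Python bindings and a local QEMU/KVM hypervisor at qemu:///system, neither of which this article specifies), the following sketch lists the guests a hypervisor is currently hosting:

    # Hedged sketch: list the guest operating systems managed by a local hypervisor.
    # Assumes the libvirt-python bindings and a QEMU/KVM host at qemu:///system.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")   # read-only connection to the hypervisor
    try:
        for dom in conn.listAllDomains():           # every defined guest, running or not
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():<20} {state}")
    finally:
        conn.close()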

Figure 2: Resource utilisation before virtualisation [figure2.png]

In the 80s and into the 90s, virtualisation was largely overlooked as affordable PCs and Intel-based servers became popular. Over time, the expenditure on physical facilities, failover and disaster protection needs, the high cost of systems servicing, and low server utilisation became problems that required a new solution.

Figure 3: Resource utilisation after virtualisation [figure 3.png]

In the late 90s, x86 virtualisation was achieved through complicated software techniques that compensated for the processor's lack of virtualisation support and delivered acceptable performance. Virtualisation of Intel-based devices became a practical possibility. This was followed by the arrival of VMware, which overcame the hardware limitations that had blocked virtualisation on Intel-based architectures. Since then, virtualisation achievements have led to what could be called a virtualisation renaissance: the current generation is rediscovering what had already been done long ago, but is applying the benefits in today's technical landscape. In 1999, VMware used these techniques to address many of these difficulties and to convert x86 machines into a general-purpose, shared hardware infrastructure offering full isolation, flexibility and a choice of operating systems for application environments.

After 2000, there has been a lot of progress in the area of virtualisation. In the mid-2000s, both Intel and AMD added hardware-assisted virtualisation (Intel VT-x and AMD-V) to their processors, making virtualisation software simpler, and later hardware revisions brought considerable speed improvements.
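
On a Linux system, you can see whether a processor offers this hardware assistance by looking at its CPU flags. The short sketch below, a purely illustrative Python reading of /proc/cpuinfo rather than anything prescribed in this article, checks for the vmx (Intel VT-x) and svm (AMD-V) flags:

    # Hedged sketch: detect hardware virtualisation support on a Linux host
    # by looking for the Intel VT-x (vmx) or AMD-V (svm) CPU flags.
    def has_hw_virtualisation(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False  # no flags line found

    if __name__ == "__main__":
        print("Hardware virtualisation supported:", has_hw_virtualisation())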

Subsequently, software vendors have developed virtualisation solutions, and organisations have implemented virtualisation to solve business needs.

The benefits of virtualisation

There has been a natural progression towards organisations wanting to virtualise their own servers, data centres, networks and desktop environments, and to manage them more easily. The preferred end-state would be an environment in which resources anywhere on the network can be dynamically provisioned and consumed, based on application and user requirements. All this would run on a dynamic IT infrastructure that is highly automated, interconnected and structured to support business processes, instead of data being isolated in silos.

Typically, organisations advance through various stages of virtualisation. Virtualisation drives agile business solutions by resolving specific issues such as chargebacks, as well as broader concerns such as workload balancing and time-to-market.

Figure 4: Virtualisation and its benefits [figure 4.png]

The first stage involves virtualising the ‘low-hanging fruit’. Consolidation and disaster recovery strategies are targeted first, to earn returns on capital investment by virtualising applications that have a low business impact. Server virtualisation is also a very popular practice, with many organisations wanting to bring down both CAPEX and OPEX.

The second level of virtualisation is where things can get confusing, mainly due to the complexity of the design created in Level 1. Many CIOs now have a ‘Virtualisation first’ policy in order to enjoy the cost benefits. At this level, companies begin to treat applications, servers, storage and networks as pools of resources that can be managed in aggregate, rather than as isolated silos. Organisations may have experienced unanticipated below-par performance during Level 1, so they will need help to guarantee the efficiency of the business-critical applications that are virtualised in Level 2 and beyond.

Many enterprise-level implementations of Level 2 virtualisation store the ‘virtualised’ desktop on a remote server instead of on local storage. Thus, when users work from their local machines, all the applications, processes and data they use are kept on the hosting server and run centrally. This allows users on mobile phones or thin clients with very rudimentary hardware specifications to run OSs and applications that would normally be beyond their capabilities.

Mobile virtualisation is a technology that allows several OSs or virtual machines to run simultaneously on a mobile phone or connected wireless device. It uses a hypervisor to create a secure separation between the hardware and the software that runs on top of it. In 2008, the telecom industry became interested in bringing the benefits of virtualisation to mobile phones and other devices such as tablets, netbooks and machine-to-machine (M2M) modules. With mobile virtualisation, manufacturing feature-rich phones has become easier through the re-use of applications and hardware, which shortens development time. One example is the use of mobile virtualisation to build low-cost Android smartphones; semiconductor companies such as ST-Ericsson have adopted mobile virtualisation as part of their low-cost Android platform strategy.
The next step

Figure 6: Virtualisation and cloud computing [figure 5.png]

With the advent of cloud computing as an infrastructure choice, the evolution wheel is turning again, and companies are beginning to investigate the technology options available to them. Private cloud solutions, whether on-premise in your own data centre or off-premise (hosted by a partner), could provide even greater gains through enhanced IT automation, elasticity of resources, and deeper capabilities for tracking usage.

An important stage in the evolution of this technology has been reached with the virtualisation of mission-critical applications, and with the transformation of virtualised resources into a shared pool. At this point, infrastructure is shared and IT must adopt a service management focus to deliver the private cloud, in which physical and logical resources are made available through a virtual service layer across the enterprise. Whether an organisation selects embedded or third-party management applications, one thing is certain: implementing better control over business virtual machines will give it the flexibility to take virtualisation to the next stage.

Anyone already using a virtualised server environment that provides basic IT elasticity and scalability will be aware of the extra advantages offered by private cloud implementations, such as self-service, increased automation and metering, and the delivery of IT Infrastructure as a Service (IaaS). With cloud computing, workloads are allocated to software and services that run on a network of servers in various locations, collectively known as a cloud. These are accessed over a network connection from a thin client or other access point, such as an iPhone or laptop, and users can draw on cloud resources on demand.
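
To make ‘resources on demand’ concrete, here is a minimal sketch using Apache Libcloud; the provider, credentials, image and size choices are purely illustrative placeholders and not something this article prescribes.

    # Hedged sketch: provisioning an IaaS server on demand with Apache Libcloud.
    # The provider, credentials, image and size below are illustrative placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.EC2)       # any Libcloud-supported IaaS provider works
    driver = Driver("ACCESS_KEY_ID", "SECRET_KEY")

    size = driver.list_sizes()[0]           # pick an instance size offered by the provider
    image = driver.list_images()[0]         # pick an OS image offered by the provider

    node = driver.create_node(name="demo-vm", size=size, image=image)
    print("Provisioned:", node.name, node.state)

The same pattern applies to other drivers Libcloud supports, such as OpenStack-based private clouds.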

Originally Posted on Linux For You (OSFY) in May 2012
