In August 2007, EMC Corp’s 10% IPO of market-leading server virtualisation vendor, VMWare, raised $957m in spite of wider market turmoil. The shares rose by 75% on a day when the Dow Jones fell 207 points. The same month, Citrix, the thin client service provider, bought XenSource, an open source virtualisation company, for $500m. A little over a year earlier, to complement its growing virtualisation offering, Microsoft had bought Softricity, a leading provider of application virtualisation and dynamic streaming technology. These developments brought considerable attention to the virtualisation community and are set against a background where many experts predict that over half of computer systems will be virtualised in one form or another by 2015. There is little doubt that virtualisation is now part of mainstream information technology management.

Virtualisation in information technology is not new – it has been used in relation to mainframes since the 1960s – but it has undoubtedly undergone a renaissance in recent years and is being utilised in new ways to meet new concerns. It is likely to be of increasing significance over the next few years as its weaknesses are countered and its strengths leveraged, but what must organisations considering virtualisation technologies take into account to ensure they reap the desired benefits?

This briefing seeks to provide an overview of the virtualisation landscape. We will look at the technologies that are available and the ways in which the licences and contracts required to govern this growing marketplace differ from traditional models of software sourcing and support. The topics covered are:

  • What is Virtualisation?
  • The Business Benefits of Virtualisation
  • Common Problems with Virtualisation
  • Licensing for Virtualisation Infrastructures
  • Legal & Contractual Considerations

What is Virtualisation?

Virtualisation is a broad term referring to the deployment of computing resources in a way that allows the underlying physical devices to be represented as software objects and hidden from the end user. Virtualisation uses software to abstract, divide, combine or allocate hardware resources among one or more virtual environments. It can be applied to storage (to combine a number of silos into one virtual store), servers (to run multiple operating systems on one physical server), hardware (to run multiple operating systems or applications on a single desktop/laptop) and even software or applications (whether by thin client, streaming or web-hosting). The virtual partitions are hidden from the user because the virtual machines or virtual environments behave in the same manner as a physical system would, while virtualisation can help to deliver benefits in the reliability, security and manageability of IT.

IBM started using virtualisation in relation to mainframes during the 1960s to optimise expensive and relatively scarce computing resources. The development in the 1980s and 1990s of PCs with operating systems and adequate hardware at reasonable prices removed the need to virtualise systems. However, the continuing proliferation of computing resources since the 1990s has created a need to manage and utilise these resources more effectively, and virtualisation is marketed as one of the best ways of achieving this.

The main categories of virtualisation are set out below, although there is not always a clear defining line between the techniques. Organisations often use one or more of the technologies to develop coherent infrastructures that meet their business needs.

A. Storage Virtualisation

Examples include – IBM, EMC

Storage virtualisation has existed for some time. RAID (Redundant Array of Independent Disks) allowed a number of physical disks to be grouped and presented to operating systems as one or more logical, virtual disks. Administrators for an operating system have no need to know the physical components of a RAID volume to be able to format and partition it. Storage virtualisation now includes fibre channel and iSCSI (Internet Small Computer Systems Interface) storage area networks but the intention is still to ease the burden of administering storage and data.
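The short sketch below is conceptual only (it is not a real storage driver or any product named in this briefing); it illustrates the idea described above of striping logical blocks across several physical disks so that the operating system sees a single virtual disk and never deals with the physical layout.

```python
# Conceptual sketch only: a "virtual disk" that stripes logical blocks across
# several underlying physical disks, presenting one address space to callers.

class PhysicalDisk:
    def __init__(self, block_count):
        self.blocks = [b"\x00" * 512 for _ in range(block_count)]


class StripedVirtualDisk:
    """Presents several physical disks as one logical, virtual disk."""

    def __init__(self, disks):
        self.disks = disks

    def _locate(self, logical_block):
        # Round-robin mapping: which disk holds the block, and where on it.
        disk = self.disks[logical_block % len(self.disks)]
        return disk, logical_block // len(self.disks)

    def write(self, logical_block, data):
        disk, physical_block = self._locate(logical_block)
        disk.blocks[physical_block] = data

    def read(self, logical_block):
        disk, physical_block = self._locate(logical_block)
        return disk.blocks[physical_block]


# An administrator can format and partition the logical volume without knowing
# which physical disk any given block lands on.
volume = StripedVirtualDisk([PhysicalDisk(1024), PhysicalDisk(1024)])
volume.write(0, b"boot sector".ljust(512, b"\x00"))
print(volume.read(0)[:11])   # b'boot sector'
```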

B. Network Virtualisation

Examples include – Cisco, Hewlett Packard

Virtual local area networks have also existed for many years, emulating a local area network regardless of the physical location of the hosts. This enables large corporate networks to be created between systems across a number of separate local networks, or a local network to be sub-divided into several separate virtual local area networks. This provides benefits in terms of network scalability, security and management.

Network Interface Card teaming is used to improve fault tolerance and performance by grouping the multiple physical cards that enable communication between computers into a single virtual network card.

C. Server Virtualisation

Examples include – VMWare, IBM, Intel, NEC, Sun Microsystems

Server virtualisation creates virtual machines on a physical server, with each virtual machine presenting emulated hardware so that it behaves as if it were a physical server. This provides a mechanism to consolidate multiple servers, as multiple virtual machines can be created on any given physical server. Advanced features allow for automatic failover, dynamic relocation, load balancing and consolidated back-up. The vast majority of server virtualisation is deployed using host-based server virtualisation, but an operating system virtualisation approach can also be adopted.
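As a minimal sketch of what "managing virtual machines as software objects" looks like in practice, the example below assumes the open-source libvirt toolkit and its Python bindings (libvirt-python) – a management interface this briefing does not itself discuss – to list the guests consolidated onto one physical host.

```python
# A minimal sketch, assuming the libvirt toolkit and its Python bindings are
# installed and a local hypervisor is running; not any vendor's product API.
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor

model, mem_mb, cpus, *_ = conn.getInfo()   # the single physical host underneath
print(f"Physical host: {cpus} CPUs, {mem_mb} MB RAM ({model})")

# Each "domain" is a virtual machine with its own emulated hardware, isolated
# from the others but sharing the same physical server.
for dom in conn.listAllDomains():
    state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
    print(f"  VM {dom.name()}: {vcpus} vCPUs, {mem_kb // 1024} MB RAM")

conn.close()
```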

Host-based Server Virtualisation

Examples include – VMWare ESX Server

Host-based server virtualisation allows the creation of a series of virtual machines on the same physical host system with different operating systems working on each. Full virtualisation involves complete hardware emulation, which therefore offers total portability for each virtual machine – allowing a virtual machine to be easily moved between servers for simple platform migration and disaster recovery staging.

Each virtual machine interfaces with its host system via a virtual machine monitor (VMM). The VMM:

  • presents the emulated hardware to the virtual machine;

  • isolates the virtual machines from the underlying hardware and each other;

  • prevents one unstable virtual machine from affecting overall performance; and

  • passes instructions between the virtual machine and the software layer sitting directly on the hardware, known as the hypervisor.

The main role of the hypervisor is to communicate with the VMM to coordinate access to the hardware resources.
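The toy sketch below is conceptual only (no hypervisor is written in Python, and the names are illustrative); it shows the pattern commonly called "trap and emulate", in which the VMM intercepts a guest's privileged operations and carries them out against that guest's emulated hardware, keeping virtual machines isolated from the physical devices and from each other.

```python
# Conceptual sketch of the VMM's trap-and-emulate role; purely illustrative.

class EmulatedHardware:
    """Per-guest device state presented to the virtual machine by the VMM."""
    def __init__(self):
        self.registers = {}


class VirtualMachineMonitor:
    def __init__(self):
        self.guests = {}   # guest name -> its own emulated hardware

    def add_guest(self, name):
        self.guests[name] = EmulatedHardware()

    def trap(self, guest, operation, register, value=None):
        # A privileged instruction from the guest "traps" here instead of
        # reaching the real hardware; the VMM emulates the result.
        hardware = self.guests[guest]
        if operation == "write":
            hardware.registers[register] = value
        elif operation == "read":
            return hardware.registers.get(register, 0)


vmm = VirtualMachineMonitor()
vmm.add_guest("vm1")
vmm.add_guest("vm2")
vmm.trap("vm1", "write", "timer", 100)
print(vmm.trap("vm2", "read", "timer"))   # 0 -- vm1's change never reaches vm2
```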


Paravirtualisation

Examples include – VMWare’s open Virtual Machine Interface (VMI), Xen, IBM

Virtualisation software uses a hypervisor and VMM as a virtualisation layer to emulate, and co-ordinate access to, the underlying computer system. In full virtualisation, a guest operating system runs unmodified on this virtualisation layer. However, improved performance and efficiency can be achieved by enabling the guest operating system and the VMM to communicate and cooperate when running on a virtual machine. This type of communication is referred to as paravirtualisation; it allows the VMM to be simpler and the virtual machines that run on it to achieve performance closer to that of non-virtualised hardware.

Paravirtualisation requires the presentation of an application programming interface (API) to the virtual machines. Operating systems must therefore either be built in line with one of the open APIs used by the virtualisation suppliers, or the operating system vendor must release or license the API for its operating system.
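The sketch below is conceptual only – the interface and hypercall names are invented for illustration and are not VMI, Xen or any vendor's actual API. It shows the idea that a paravirtualisation-enabled guest asks the virtualisation layer to do privileged work through a published API, rather than relying on the VMM to trap and translate every instruction.

```python
# Conceptual sketch of a paravirtualised "hypercall" interface; the call names
# ("set_timer", "flush_tlb") are illustrative assumptions only.

class ParavirtualAPI:
    """Stand-in for the API the virtualisation layer presents to guests."""

    def hypercall(self, name, **arguments):
        handlers = {
            "set_timer": lambda deadline: f"timer armed for {deadline}",
            "flush_tlb": lambda: "translation buffers flushed",
        }
        return handlers[name](**arguments)


# A paravirtualisation-enabled guest kernel calls the layer directly, which is
# cheaper than having each privileged instruction trapped and emulated.
layer = ParavirtualAPI()
print(layer.hypercall("set_timer", deadline=100))
print(layer.hypercall("flush_tlb"))
```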

Hardware-assisted Virtualisation

Examples include – Intel, AMD, NEC

Hardware developers have recognised the need to make their equipment virtualisation-friendly and have added features to aid server virtualisation. Processors and operating systems work on the basis of privilege levels that define the actions that can be performed. There are typically four privilege levels (or rings), numbered 0 to 3. The operating system uses ring 0 – the highest level of privilege; applications use ring 3. However, for virtualisation to work, the hypervisor/VMM needs to run at ring 0 and the operating system has to be de-privileged to ring 1. This creates considerable work for the VMM, which must monitor hardware accesses and system calls by the operating system, executing them itself and emulating the results.

Paravirtualisation is one technique for reducing this privileged-instruction processing overhead, but hardware virtualisation technologies instead create two classes of ring (privileged/root for VMMs and de-privileged/non-root for operating systems) that allow the guest operating system to run at its expected ring level. This allows virtualised guest operating systems to process privileged instructions without the VMM having to translate them. Hardware-assisted virtualisation also removes the need for a paravirtualisation-enabled operating system, but can sacrifice the virtual machine portability offered by full virtualisation.
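The following conceptual sketch (with illustrative operation names only) captures the effect described above: in "non-root" mode the guest keeps its expected privilege level and most instructions run directly, with only selected operations "exiting" to the VMM running in "root" mode.

```python
# Conceptual sketch of hardware-assisted virtualisation; operation names are
# invented for illustration, not taken from any processor manual.

EXIT_CAUSES = {"configure_io_device"}   # operations that still hand control to the VMM

def vmm_root_mode_handler(instruction):
    return f"VMM emulated '{instruction}' in root mode"

def run_in_guest(instruction):
    if instruction in EXIT_CAUSES:
        # VM exit: the hardware transfers control to the VMM.
        return vmm_root_mode_handler(instruction)
    # Everything else executes directly at the guest's expected privilege
    # level, with no translation by the VMM -- the source of the speed-up.
    return f"guest executed '{instruction}' directly"

print(run_in_guest("load_page_table"))
print(run_in_guest("configure_io_device"))
```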

Operating System Virtualisation

Examples include – Sun Microsystems, SWsoft Virtuozzo

With operating system virtualisation, the virtualisation layer sits on top of a host operating system and allows multiple virtual environments to share a common operating system. This allows each virtual environment to run with less overhead (disk space, RAM, etc) than a fully virtualised host. Full hardware emulation is not required as the operating system sits directly on the physical system and this allows for near-native performance.

Operating system virtualisation can reduce the number of operating systems and, in turn, the number of operating system licences required; with server virtualisation, the number of licensed operating systems remains the same.
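The conceptual sketch below (class names are illustrative) shows why the overhead and licence count fall: every virtual environment shares the single host operating system, so each environment carries only a private filesystem and process view rather than a whole emulated PC with its own operating system.

```python
# Conceptual sketch of operating system virtualisation; purely illustrative.

class HostKernel:
    """The one operating system instance shared by all virtual environments."""
    def syscall(self, environment, call):
        return f"kernel handled '{call}' for '{environment}'"


class VirtualEnvironment:
    def __init__(self, name, kernel):
        self.name = name
        self.kernel = kernel      # shared, not duplicated per environment
        self.filesystem = {}      # private view -- the isolation boundary
        self.processes = []

    def run(self, program):
        self.processes.append(program)
        return self.kernel.syscall(self.name, f"exec {program}")


kernel = HostKernel()
web = VirtualEnvironment("web", kernel)
db = VirtualEnvironment("db", kernel)
print(web.run("httpd"))
print(db.run("database"))
# One licensed operating system serves both environments, unlike server
# virtualisation where each virtual machine would carry its own.
```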

The downsides for operating system virtualisation are that:

  • legacy operating systems are rarely supported;

  • server consolidation is more difficult where multiple operating systems are in use;

  • changes and upgrades made to the host operating system need to be isolated from each running virtual environment so that they can be tested prior to deployment; and

  • there are questions over the ability to isolate virtual environments from each other and the underlying hardware.

D. Desktop Virtualisation

Examples include – VMWare Workstation

Desktop virtualisation allows users to create and run multiple virtual machines on a desktop PC or laptop (note that Citrix and others use the same term in a slightly different context, to refer to some forms of centralised computing – see below). Each virtual machine represents a complete PC, including the processor, memory, network connections and peripheral ports. Desktop virtualisation lets users run Windows, Linux and a host of other operating systems side-by-side on the same computer, switch between operating systems instantly, share files between virtual machines and access any connected peripheral devices.

Desktop virtualisation is used to:

  • Host legacy applications and overcome platform migration issues;

  • Configure and test new software or patches in an isolated environment;

  • Automate tasks for software development and testing; and

  • Switch between multi-tier versions of enterprise applications on a single PC.

E. Centralised Computing/Software Virtualisation

Software virtualisation, also called application or presentation virtualisation, can take two forms: centralised computing and web-hosted applications. Web-hosted applications are dealt with in the next section; we will first look at centralised computing.

Centralised computing seeks to use software virtualisation to decouple the operating system and/or applications from the physical desktops and laptops within an organisation, so that the software application or operating system sits on a central server rather than on individual computers or laptops. Only a virtual interface is sent over the network to the client device and data is sent to and from the server – it is not stored on the local machine. This delivers benefits in terms of data security, as all data is held centrally, and supports remote/flexible access as users are not tied to any one device (see also the table under the section ‘The Business Benefits of Virtualisation’).

There are two main ways to achieve this virtualisation – terminal server computing and application streaming. These technologies are increasingly used alongside one another, so they should not necessarily be viewed as competing with each other.

The principal difference between terminal server computing and application streaming is the point of execution. For terminal server computing, the applications are physically installed and executed only on the back-end servers. For application streaming, software is streamed to the device as it is needed but is executed there, running as if it were installed on the client.
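The sketch below is conceptual only (no real remote-display or streaming protocol is modelled); it simply contrasts the point of execution: a terminal server executes the application on the back-end and sends only screen updates, while a streaming server sends the code itself for the client to execute locally.

```python
# Conceptual sketch contrasting terminal server computing with application
# streaming; class and method names are illustrative assumptions.

class TerminalServer:
    def run(self, application, user_input):
        result = f"output of {application}({user_input})"   # executed on the server
        return f"screen update: {result}"                   # only display data crosses the network


class StreamingServer:
    def fetch(self, application):
        # The application code crosses the network instead of screen updates.
        return lambda user_input: f"output of {application}({user_input})"


class ClientDevice:
    def use_thin_client(self, server, application, user_input):
        return server.run(application, user_input)            # nothing executed locally

    def use_streamed_application(self, server, application, user_input):
        program = server.fetch(application)
        return program(user_input)                             # executed on the client device


client = ClientDevice()
print(client.use_thin_client(TerminalServer(), "word_processor", "draft.doc"))
print(client.use_streamed_application(StreamingServer(), "word_processor", "draft.doc"))
```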

Terminal Server/Thin Client

Examples include – Citrix Presentation Server, Microsoft Terminal Server, VMWare Virtual Desktop Infrastructure, NEC VPCC Solution

In the context of centralised computing, there are two applications of terminal servers. In the first, a central terminal server provides an operating system desktop (Windows or Linux) to multiple user terminals (called “thin clients”). All processing and data storage is on the back-end server(s). In the second model, an ordinary computer acts as a temporary terminal server allowing another device using a remote desktop application to access its desktop via the internet or a wide area network (WAN).

Thin client systems can work over ISDN connections given their lower bandwidth requirements, but network demand remains constant, so large organisations require substantial bandwidth and significant investment in the back-end infrastructure where all processing takes place.

With products like VMWare Virtual Desktop Infrastructure, users access their own complete desktop environment in a virtual machine stored in a central data centre. The virtual machine running on a server in the data centre is an image of a complete PC – operating system, applications and configurations. The client machine (thin client or PC) used to access the virtual desktop image on the server only needs to run a remote display protocol.

Application Streaming

Examples include – Altiris Software Virtualisation Solution, AppStream

Virtualisation by streaming is based on the fact that the majority of applications and operating systems do not require all of their code or functionality in order to start up – the requirement can be as low as 10% of the software. Application streaming exploits this by streaming only the required functionality from the central server on the initial call-up of an application or operating system. Further functionality is streamed when, or if, required, but this is invisible to the user provided their network/internet connection is functioning.

Application streaming requires a good broadband connection given the initial streaming load but the demands placed on the network are low after that. As is also the case with terminal server computing and web-hosted software, application streaming faces what is known as the “airplane problem” – what happens if you want to work offline because you do not have access to a network connection? However, the main barrier to application streaming deployment is the need to package each application individually, which is a time-consuming process.

Packaging software for streaming involves a sequencer or integrated packaging application running a start-up of the software to be packaged and recording the sequence of events required at start-up. The code required for these steps can then be prioritised and the software packaged so that the correct elements are streamed to the remote client device first. The virtualisation software (sandbox) also records the order of executions made and the responses required, so that it can emulate the installation when the application is streamed to the user device.
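The sketch below is conceptual only – the package layout and block names are invented for illustration. It shows the principle described above: only the prioritised start-up blocks are streamed on initial call-up, with further functionality fetched from the central server when, or if, it is first used, while execution remains on the client.

```python
# Conceptual sketch of application streaming; the package contents are
# illustrative assumptions, not any vendor's packaging format.

PACKAGE = {
    # Produced by the sequencer: blocks ordered by when start-up needs them.
    "startup": ["loader", "ui_shell", "recent_files"],
    "on_demand": {"spellcheck": "spell.blk", "printing": "print.blk"},
}


class CentralServer:
    def fetch(self, block):
        return f"<bytes of {block}>"


class StreamedApplication:
    def __init__(self, package, server):
        self.server = server
        self.on_demand = package["on_demand"]
        # Initial call-up: stream only the start-up subset (often ~10%).
        self.local_cache = {b: server.fetch(b) for b in package["startup"]}

    def use_feature(self, feature):
        # Remaining functionality streams invisibly the first time it is used.
        if feature not in self.local_cache:
            self.local_cache[feature] = self.server.fetch(self.on_demand[feature])
        return f"running '{feature}' on the client"   # execution stays local


app = StreamedApplication(PACKAGE, CentralServer())
print(app.use_feature("spellcheck"))
```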


F. Web Hosted Software/Software as a Service

Examples include – Google Apps, Buzzword (Adobe), IBM Lotus Symphony, Microsoft Live, Salesforce

Software applications can also be web-hosted. Some companies use their web servers to host software applications – supporting remote access, flexible working and business continuity processes. Web technology is cheap, robust and easy to use but has security issues because it is based on a public network, so encryption and authentication become vital.

Public web-hosted software applications are another area of rapid growth and are part of the trend towards on-demand or utility computing. This is also widely referred to as Software as a Service. The office suite market (word processor, spreadsheet, email, presentations, calendar, etc) is especially active with Google, Microsoft, IBM and Adobe all involved.

Software delivered in this way is accessed via a website and is often provided free of charge (with corporate support packages and value-added services often available at a price). Funding is often also provided indirectly through advertising placed on the website. Many providers of these services operate on a "long tail" model – offering a standardised service, at low cost, to a large number of users, from large-scale server farms with state-of-the-art disaster recovery facilities. In terms of benefits, providers are able to offer the latest versions of software immediately as they can constantly update, modify and develop their websites (this is known as "permanent beta"). In addition, this sort of web-hosting has obvious portability benefits as software can be accessed through a browser from any computer (Windows, Linux, Mac) in any location.

These low cost models are undeniably attractive to businesses looking to reduce the total cost of ownership of their IT systems. However, there are a number of drawbacks to web-hosted software. Firstly, users will require an internet connection to access the software and, although the applications offer an increasing amount of offline functionality (usually only after an initial log-in), users will need to remain online to retrieve web forms (blank documents, spreadsheets, etc) or to save or retrieve their data. This weakness is being mitigated by the "web on the go" capabilities of tools such as Google Gears, Microsoft Silverlight and Adobe Air, which allow online functionality to be used offline, but it is unlikely to be eliminated.

Secondly, web-hosted applications tend to be short on functionality when compared to existing desktop tools.

Thirdly, there are significant data protection and privacy concerns around allowing a third party to hold an organisation’s data, documents and records – particularly in unknown locations on standard terms of business with little recourse to the service provider in the event of problems.

The Business Benefits of Virtualisation

The marketing materials for virtualisation products identify numerous potential benefits of virtualisation and there is little doubt that there are gains to be made from deploying virtualisation technologies.

The table below identifies some of the possible benefits but should be considered alongside the issues raised in the sections ‘Common Problems with Virtualisation’, ‘Licensing for Virtualisation Infrastructures’ and ‘Legal & Contractual Considerations’ that follow.

Key Benefit

What Virtualisation Offers

Hardware Utilisation

  • Server utilisation in large organisations can be as low as 5-15%, as a result of applications requiring a dedicated environment for reliability and support or using a different operating system to other applications. Often these are legacy systems that cannot be easily upgraded to run on newer systems. Virtualisation allows the consolidation of server use by creating a series of virtual machines that provide dedicated environments on one physical server.

  • Savings can be achieved in reduced hardware costs from the reduction in servers required, plus decreased energy usage from running and cooling the servers.

  • Similarly with desktop virtualisation, desktop resources can be consolidated as multiple operating systems can be run on a single PC.

  • Thin client solutions reduce client side hardware costs as only dumb terminals are required.

Software Licensing

  • Operating system server virtualisation can reduce the number of software licences required as each virtual environment is hosted on the same operating system.

  • Application streaming could allow licensing savings as applications are only streamed when required, rather than having software locally installed in case it is needed.

Workload Management

  • Virtual machine portability means that workload-balancing tools can help spread computing tasks between physical resources on a network automatically depending on demand, helping them to run more efficiently.

IT Support

  • Removing the need for local installation of applications or operating systems on client devices (through thin client or streaming), reduces the support costs for deployment across an estate.

  • Where data and applications are centralised, bugs and errors can be fixed centrally - greatly reducing the costs of IT support and the duration of interruptions.

Flexibility

  • Virtualisation allows an organisation to run a number of applications (including legacy versions) at the same time, avoiding tie-in, improving choice and flexibility, and mitigating the risks of deploying upgrades (releases can be more easily tested and, if necessary, stepped-back).

Business Continuity

  • Virtualisation tools provide reliable failover and system back-up schemes between virtual machines on a single physical server.

  • Where applications and data can be stored on central servers, they can be accessed from anywhere. Users can access their applications and data remotely through mobile devices and home PCs, as if they were in the workplace.

  • In addition, the risk of data loss can be reduced by physically moving central servers out of high risk areas, such as major cities. This change in physical location will not be noticed by end users.

Scalability

  • Virtualisation allows for rapid addition of virtual machines and environments (subject to physical capacity and network bandwidth).

  • Software virtualisation allows for simple expansion of an estate (e.g. new employees or offices coming online) as local installations are not required – all that is needed is an internet connection and a PC, laptop or dumb terminal.

Monitoring & Access

  • Centrally stored applications and data can be monitored, maintained and managed more easily, since it is all in one place.

  • Monitoring of data/software access is useful in supervising employees and monitoring licensing compliance but also in ensuring compliance with data protection legislation.

  • In addition, data can be uploaded to the network connected server from any remote location. This supports flexible and mobile working, as all other users on the network can have instant access to that data.

Security

  • Isolation between different virtual machines ensures one virtual machine cannot be corrupted by a virus or malware present in another virtual machine.

  • Using software virtualisation, data and applications can be stored on central servers behind a company’s firewall and in a company’s physical control. Since the data is not stored on the local machines, it is secure if a device is lost, stolen or attacked.

  • With software virtualisation, there is a reduced risk of a company’s central applications and data being adversely affected by individual local machines.

  • Centralisation of data and applications also has the potential to make backing up easier, since all the data and applications are stored in one place.

Regulatory Compliance

  • The centralisation of data may aid companies’ compliance with legislation such as Basel II or Sarbanes-Oxley, as the data and processes that business leaders are responsible for tracking under this legislation are captured and held centrally.

  • Centralisation of data is also a potential answer to current issues around public sector data security following recent high profile breaches in the UK.

Outsourcing

  • The risks of outsourcing can be reduced by virtualisation as data, applications and intellectual property can be held centrally within the company, and remotely accessed by the supplier of the services.

  • Virtualisation can also reduce some risks of outsourcing services as it is easier to switch between suppliers if all data is centrally held. This will reduce the costs of changing suppliers.

Testing & Development

  • Virtualisation can be used to run multiple versions of software at the same time, in separate environments, on the same physical machine. This is especially useful for IT developers.

  • A new virtual machine session can be established on a platform to test a new or upgraded application on the same physical environment, in isolation from other virtual machines, prior to release into live.

  • Companies customising software for numerous clients (e.g. banks) can run the different versions from the same desktop.

Energy Efficiency

  • Decreasing hardware requirements through consolidation reduces the energy consumed in running (and cooling) an organisation’s servers.

Common Problems with Virtualisation

This section discusses some of the common problems observed with virtualisation technologies and projects.

  • Measuring Return on Investment – The inability to measure the return on investment of virtualisation deployments is a common reason why virtualisation projects cannot be shown to have succeeded. This is because it is more difficult to measure CPU utilisation (and in turn the utilisation ratio achieved by virtualisation). It is also harder to identify performance bottlenecks in virtual systems than in physical ones, as they can be caused by a greater number of factors, including inefficient hardware emulation, problems with the host platform or the virtualisation settings. Some vendors offer tools to help locate these bottlenecks, but these are yet more specialist tools made necessary by virtualisation.

  • Performance – With full server virtualisation, performance can suffer as the VMM translates instructions between the emulated hardware and the actual system device drivers. Such degradation varies greatly, from nominal levels to latency of up to 20% for certain activities. Latency of up to 10% may have little noticeable impact on users, but it does have a direct effect on the time taken to complete back-ups. Paravirtualisation and hardware-assisted virtualisation have been developed to address this weakness, but these technologies are less mature.

  • Planning Deployment – It is difficult for a potential customer to evaluate in advance exactly what a virtualisation deployment will look like and what virtualisation ratio (virtual machines per physical server) can be achieved. Some vendors offer their own tools to assist in capacity planning (e.g. VMWare Capacity Planner) or there is third party software available (e.g. PowerRecon from PlateSpin) but this tends to be expensive, adding to the upfront investment costs.

  • Support – Some software vendors are unwilling to support software used within a virtualised environment and require a problem to be replicated on physical hardware first. Some software vendors only offer support with premium packages or their most recent releases. Virtualisation vendors suggest this is a means to discourage virtualisation and the reduced licence fees it might lead to, but the software vendors maintain that they cannot underwrite all of their software programs when they cannot be certain of the environment or configuration on which the software will be deployed.

  • Security – Despite the security benefits of virtualisation (see above), there is concern that the hypervisor introduces a new layer of software susceptible to malicious attack. This threat is mitigated by the small amount of code used in hypervisors but users will need to add security updates to hypervisors as a regular task within their IT management policies. There have been very few reported attacks so far but this is unlikely to remain the case as virtualisation spreads.

  • Network and Bandwidth – When several virtual machines have to share one or two network or storage controllers, performance bottlenecks are likely – consolidating computing does not add capacity to the network. Application streaming, in particular, demands a fast connection for the initial start up of software but thin client and web-hosted services also require an ‘always on’ connection. Software virtualisation, in effect, makes internet connectivity a business necessity.

  • IT Management/Asset Control – Server consolidation does not reduce the number of managed systems, only the number of physical systems. In fact, it adds complexity by adding layers of software and virtual partitioning. Maintaining asset registers becomes more difficult as virtual machines are so easily created, moved or deleted – and there is no fallback of sending staff out to count physical hardware manually.

  • Training – Full virtualisation platforms require significant training for the staff who will be tasked with maintaining them. This training can be expensive and the shortage of experts means it is not always available.

  • No ‘one application/vendor’ solution – Adoption of virtualisation requires a shift in culture for IT management, and the technologies are complex. Deploying virtualisation successfully requires a number of tools to be implemented (capacity planning, workload balancing and utilisation measurement, on top of the virtualisation software itself). This is a substantial undertaking in terms of upfront investment of time and money.

  • Disaster Recovery Planning – Redundant hardware needs to be maintained on physical host systems to allow for disaster recovery, but this is not always accounted for in virtualisation projects (or customers’ expectations). The server virtualisation product must allow for dynamic virtual machine failover.

  • Lack of Functionality – One of web-hosted software’s principal issues in securing widespread uptake has been vendors’ inability to offer compelling functionality in comparison to traditional offline applications.

Licensing for Virtualisation Infrastructures

It should be clear that any technology that shifts where and how software is used will have a significant impact on licensing frameworks for software. Any business investing in virtualisation technologies is likely to see increased complexity in its licensing arrangements. This is because of:

  • additional licences with virtualisation vendors for virtualisation software and management tools;

  • the portability of licences as software is easily moved between virtual machines and environments or streamed to remote locations on demand;

  • resistance from some software vendors to virtualisation; and

  • the mixture of open source and proprietary licensing models used.

This section looks at the licensing issues that arise within the virtualisation market today.

A. Virtualisation Vendors

Most virtualisation software is licensed on similar terms and conditions to traditional software, typically on a per-processor basis. The software may be provided free of charge, with the vendor making money from support and consultancy or from selling the complementary management tools required to use virtualisation effectively. Some vendors use open source licensing models and others use a proprietary model. An open source model allows third parties to develop management tools, or hardware vendors to provide the equipment for hardware-assisted virtualisation. However, the proprietary vendors argue that openly licensing software code can increase security risks as you cannot control who has access.

B. Traditional Software Vendors

When a business is considering a virtualisation deployment it should consider the impact on its existing licensing arrangements. The terms of traditional software licences (particularly for operating systems) will normally deal expressly with use within virtualised infrastructures, but some may not. Where virtualisation is dealt with, use of the software will often be restricted in some way, or support will be limited to problems that can be replicated in non-virtualised environments. This is because vendors are unwilling to underwrite the performance of their software in such a wide range of unknown environments, although future versions are likely to be built for use within virtualised infrastructures. Microsoft’s approach is illustrative of that of proprietary software vendors.

Until January 2008, when it reversed its position in seeming recognition of the inevitable expansion of virtualisation, Microsoft had refused permission for its consumer versions of Vista to be used on virtual machines or environments. It argued that the introduction of a hypervisor layer created a security risk and that virtualisation was inappropriate for most consumers, who relied on Microsoft to minimise their security risks. However, a business that bought Microsoft’s Ultimate Edition could use it within a virtualised or emulated hardware system, provided the user did not access or play media or applications protected by any Microsoft digital, information or enterprise rights management technology.

Similarly, for Windows Server 2003, Microsoft permits virtualisation only with its Enterprise and Datacenter editions. For any physical server licensed for Enterprise (which is four times as expensive as the Standard edition), Microsoft allows up to four instances of Windows Server to be run in virtual machines at no extra charge. For Datacenter users, an unlimited number of instances of Windows Server can be run at no extra cost.

For its premier-level support customers, Microsoft will use commercially reasonable efforts to investigate issues with its software running on virtualised systems. For non-premier level customers, any issues must be reproduced independently from the hardware virtualisation software (unless, of course, it is Microsoft’s virtualisation solution being deployed).

Many vendors have moved away from a per processor licensing approach and offer a range of licensing options – processors, concurrent users, named users, time, unlimited. However, greater flexibility and choice also creates greater complexity and increased demands on management.

Any requirement to upgrade software versions or support tiers in order to facilitate a virtualisation programme may undermine any forecast cost savings.

C. Policing for Software Vendors

Licensing compliance becomes more complex within a virtualised world, particularly where on-demand application streaming is deployed. Although dynamic software management tools (from AppStream, VMWare and Microsoft, among others) help customers to maintain compliance, software vendors will find it increasingly difficult to review their customers’ compliance because virtual machines are harder to count than physical ones. Most virtualisation management tools provide reporting functions that can produce snapshots, but the ease with which software is deployed, removed and shifted means that an organisation’s software use is less constant.

D. Web-Hosted Application Vendors

Web-based application providers are offering a vanilla service to millions of users at very low cost. As such, the licensing terms they offer have no warranties (e.g. around data security) and no service levels. The risk is taken nearly entirely by the customer in return for the dramatic cost reductions. This may be acceptable to some businesses but probably not in relation to business critical applications or for businesses in heavily regulated environments. Those customers who do pursue a web-based approach will surely start to look at insuring against internet downtime and data loss.

E. Partnering Agreements

The last year has seen an increase in the number of tie-ups between organisations within the virtualisation marketplace. On top of the high-profile takeovers (Microsoft’s acquisitions of Softricity and Calista, Citrix’s of XenSource), deals are being done between hardware vendors (Hewlett Packard, Intel, NEC) and virtualisation vendors (VMWare, XenSource) to provide servers with virtualisation software pre-installed. In November 2006, Microsoft and Novell entered into a series of strategic agreements aimed at improving interoperability and manageability between Windows and Linux, and creating a virtualisation solution for the two platforms. In January 2008, Microsoft and Citrix announced plans to co-market desktop virtualisation solutions in an effort to curb VMWare’s dominance in this area.

Microsoft is committed to its proprietary licensing approach and argues that it has legitimate security and service quality concerns and that the tie-up with Novell proves that its terms for licensing its APIs are not unreasonable or obstructive to the market. Without access to the APIs, which are not based on any of the open standards used by VMWare and others, third parties cannot provide Windows paravirtualisation solutions. This makes it difficult for third party hardware virtualisation solutions to achieve near-native performance so we are likely to see further developments in this area in the near future.

Legal & Contractual Considerations

Virtualisation can be managed in much the same way as traditional IT deployments – it can be handled in-house or outsourced. The same key legal and contractual issues arise with virtualisation projects as with other IT projects but this section deals only with the issues specific to virtualisation.

  • Proof-of-Concept/Due Diligence – Virtualisation is likely to be a new consideration for most organisations and they will need to take care to fully understand the issues and implications before proceeding. Virtualisation vendors will offer scoping services to provide customers with an insight into what their virtualised infrastructure might look like, but this will only be part of the picture. Will the customer need to upgrade its existing licensing and support arrangements? What staff retraining will be required? How much additional central management is likely to be required?
    As has been seen earlier, network reliability and bandwidth are key to supporting software virtualisation and, as devices no longer work offline, customers will normally be left with the risk of network unavailability. This means that the initial scoping exercise needs to include an audit of the company’s network performance and capacity. If the virtualisation vendor carries out this assessment, the customer should try to obtain a warranty in respect of it.

  • Licensing Audit – Software vendors have been reviewing their licensing terms to consider the implications of virtualisation but they may forget to review any audit provisions that enable them to police licensing compliance. Software vendors may want reports on deployment of virtual machines or environments or instances of application streaming.

  • Data Access – Virtualisation, particularly web-hosting applications or outsourced software virtualisation, will lead to considerable amounts of data being held centrally by a third party. A business will need to ensure it has access rights to this data in all circumstances (particularly in the event of termination of the agreement). Strong provisions on data retention and security are equally important.

  • Support – As has been mentioned previously, virtualisation increases the complexity of IT infrastructure: it involves more software and a series of highly mobile virtual machines and environments. Against this background it may be difficult to identify where responsibility for a problem lies (see the insistence of some software vendors that an issue be replicated in a non-virtualised environment), and the customer will want to ensure that it is not left caught between two or more providers, each denying responsibility. Outsourcing management responsibility to a third party may help to manage this risk.

  • Back-Up/Disaster Recovery – Virtualisation can enable greatly enhanced back-up and disaster recovery functionality, but this needs to be properly deployed and managed. For example, in the rush to maximise server consolidation, businesses may overlook the need to maintain redundant or remote servers to support disaster recovery. This is particularly important when outsourcing a virtualisation project.

  • Data Protection – For software virtualisation, the central storage of data may give rise to data protection questions. This is particularly likely if the information is stored in a different jurisdiction or by a third party, because the storage of data on a third party’s facilities is likely to constitute a transfer of data if any of the documents, spreadsheets, etc, contain personal data. In the UK, the seventh principle of the Data Protection Act 1998 requires a "data controller" to choose only a third party "data processor" that offers sufficient guarantees in respect of the technical and organisational security measures governing the processing to be carried out, and to take reasonable steps to ensure compliance with those measures (such as conducting regular audits and reviews).
    This can be a very complex question and is a particular barrier to the uptake of web-hosted applications by businesses, as the locations of the providers’ server farms are not visible to the end user. Furthermore, the standardised terms normally exclude any liability for data loss and do not provide audit access to the server farms.

About the authors

Howard Rubin


Howard is a commercial lawyer with over 20 years’ experience advising clients in the IT industry. He specialises in areas of law which are directly relevant to clients buying IT goods or services or to IT companies selling them. As a lawyer experienced in the IT sector, he not only has considerable experience in contract law but also has a specific interest in intellectual property law and licensing. Howard has particular expertise in negotiating agreements between major international companies and government bodies.

For two years, Howard was the IT functional partner responsible for the delivery (policy, strategy and budget) of ICT across the firm on an international basis. He now chairs the firm’s internal IT policy group and is a member of its International Finance Committee.

Howard has written on international copyright, contributed to Database Law published by Jordans and lectures on contract, copyright and competition law in relation to IT both in the UK and abroad.

Tel: +44 (0)20 7415 6000
Fax: +44 (0)20 7415 6111
Direct: +44 (0)20 7415 6187

Barry Jennings


Barry trained at Bird & Bird and qualified into the commercial department in September 2004. From May 2004 to May 2005, Barry worked on full-time secondment as Contract Manager for VOSA on its MOT Computerisation PFI Agreement with Siemens IT Solutions & Services. Since May 2005, Barry has continued to work in a legal assurance role for VOSA on a part-time basis, whilst spending the remainder of his time working for other public and private sector clients, primarily on IT-related projects and contracts.

Barry is deputy editor of Bird & Bird’s IT & E-Commerce Bulletin. He also chairs the firm’s Technology Knowledge Group, helping keep Bird & Bird lawyers briefed on key technology trends and topics, and co-chairs the Commercial Group’s legal know-how meetings.

Barry has written articles on PFI/PPP agreements, intelligent transport systems and telematics, plus internal briefing papers on a range of IT topics (including web 2.0, software as a service, net neutrality and IPTV).

Tel: +44 (0)20 7415 6000

Fax: +44 (0)20 7415 6111

Direct: +44 (0)20 7905 6382