Virtualisation has been with us for a very long time – since IBM added it to mainframes in 1967; it came to Unix servers in the 1990s and to x86 servers from 2003 onwards. Utility computing models of the late 1990s tended to identify ‘pools’ of resources, including servers, storage, clients and networking. In the server space, virtualisation has the dual value of increasing performance and reducing hardware costs, even though users need new budgets for hypervisors and virtualisation management software.
In the storage systems space, virtualisation started with the splitting of files from blocks in the shift from ‘direct’ to ‘network’ attached storage. Server virtualisation is now having a dramatic effect here: running multiple VMs on a server, or implementing VDI, increases spending on storage. The difficulty of mixing and matching physical and virtual components has changed storage systems, with innovative suppliers offering thin provisioning, snapshots and other ways to manage the boot storms that occur when many office workers log into their virtual desktops at the same time. But something is missing. While VMware and others freed us from having to choose a single hardware vendor in the server space, typical approaches in the storage space remain proprietary, requiring users to buy the portfolio of a single vendor rather than take a heterogeneous approach. Almost all users today chase down the bottlenecks here by picking a single hardware supplier. As always, being tied to a single vendor leads to the suspicion that you may be paying too much.
Storage hypervisors such as IBM’s SVC, Virsto, DataCore and FalconStor promise to free users from single hardware choices and, if successful, will help reduce hardware spending and change the economics of storage in the process. That’s why we’re spending so much time researching the subject – and why you should too. As with the server market at the beginning of the 2000s, vested interests may delay the start of the revolution, but there is huge potential for a vendor to become ‘the VMware of storage’.
Implementing server virtualisation also increases spending on networking and data communications, which has even further to go than storage before it can be considered a pool with open vendor choices. Users will therefore need to chase this bottleneck down once they have cleared the storage one. There are other bottlenecks in systems management, where advanced automation can help, and in applications, where virtualisation is still not well catered for. Many users are also looking at alternatives in server virtualisation to get away from single-supplier software – so chasing the bottlenecks looks like a continuous circle. We may never get there, but the aim is to arrive at balanced virtualisation, with performance, cost and single-vendor bottlenecks chased out.