Server virtualisation makes sense on almost every level – better utilisation of hardware, the ability to split server components amongst different VMs to improve resiliency and troubleshooting, and abstraction of server software from the hardware to ease migrating subsystems “into the cloud”. But it’s only half the job. You not only have to virtualise the servers, you have to virtualise the infrastructure that supports them.
Imagine going to all the time and expense of building Soccer City. You can pack 94,000 people into it to watch a game – but it’s not often you have a game so momentous you’ll fill the whole place. That’s money down the tubes. So, you virtualise the pitch – you now have multiple games running at the same time, making your infrastructure utilisation so much better. You can see fans flooding in and out, coursing through the access tunnels to the stands, and every seat is full.
But which fans are there to see which games? Which seats can they use? Which tunnels should be reserved for the biggest crowds of fans? And how do you work out which teams pay what for the floodlights they use?
Virtualisation cannot stop with the servers – the entire access infrastructure also needs to be virtualised. Some of this can be ‘quasi-virtualised’: routers and firewalls generally need to know where connections are coming from and going to, but not much more. The main data pipes are rather like the roads to the stadium: pretty much anyone can use them to get near – the problems arise closer to the action. This is the domain of the Application Delivery Controller (ADC), which acts as traffic marshal and shuttle service, ensuring data flows to the right place more smoothly (load balancing, session management) and faster (acceleration through SSL offload and the like).
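To make the load-balancing role concrete, here is a minimal sketch in Go of the round-robin distribution an ADC performs at its simplest. It is purely illustrative – the backend addresses are hypothetical, and a real ADC does this in optimised hardware alongside session persistence and SSL offload:

```go
// Minimal round-robin load balancer sketch (illustrative only, not an
// actual ADC implementation). Backend addresses are hypothetical.
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

var backends = []*url.URL{
	mustParse("http://10.0.0.11:8080"),
	mustParse("http://10.0.0.12:8080"),
	mustParse("http://10.0.0.13:8080"),
}

var counter uint64

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Pick the next backend in round-robin order and
			// rewrite the request to point at it.
			next := backends[atomic.AddUint64(&counter, 1)%uint64(len(backends))]
			req.URL.Scheme = next.Scheme
			req.URL.Host = next.Host
		},
	}
	http.ListenAndServe(":80", proxy)
}
```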
Until recently, ADCs were pretty much in the same camp as firewalls and routers – often blindingly fast in sheer horsepower, but fairly monolithic in their approach to traffic management. They have since started to feature clustered multiprocessing, which allows processing power to be scaled up or down as needed by adding more blades (rather like previous-generation server clusters).
Now this hardware-scaling capability has been dramatically improved through Virtual Clustered Multiprocessing (vCMP), which allows the ADC to handle different application traffic streams inside separate instances of a “virtual Application Delivery Controller”. This gives the ADC exactly the same benefits of granular session and traffic management that server virtualisation gives the server hardware.
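One rough way to picture this kind of partitioning – and this is a conceptual sketch only, not F5’s actual API or resource model – is a chassis whose cores, memory and traffic domains are carved into isolated guest instances, each tenant seeing only its own slice:

```go
// Conceptual sketch of vCMP-style partitioning: one physical ADC chassis
// carved into isolated virtual ADC guests, each with its own slice of
// CPU, memory and traffic domains. Names and figures are illustrative.
package main

import "fmt"

type VirtualADC struct {
	Name     string
	CPUCores int   // dedicated cores carved out of the chassis
	MemoryGB int   // dedicated memory for this guest
	VLANs    []int // traffic domains this guest is allowed to see
}

type Chassis struct {
	TotalCores int
	TotalMemGB int
	Guests     []VirtualADC
}

// Provision adds a guest only if the chassis still has spare capacity.
func (c *Chassis) Provision(g VirtualADC) error {
	usedCores, usedMem := 0, 0
	for _, existing := range c.Guests {
		usedCores += existing.CPUCores
		usedMem += existing.MemoryGB
	}
	if usedCores+g.CPUCores > c.TotalCores || usedMem+g.MemoryGB > c.TotalMemGB {
		return fmt.Errorf("insufficient capacity for guest %q", g.Name)
	}
	c.Guests = append(c.Guests, g)
	return nil
}

func main() {
	chassis := &Chassis{TotalCores: 32, TotalMemGB: 128}
	// Two tenants share the same hardware but see only their own traffic.
	if err := chassis.Provision(VirtualADC{Name: "tenant-a", CPUCores: 8, MemoryGB: 32, VLANs: []int{101}}); err != nil {
		panic(err)
	}
	if err := chassis.Provision(VirtualADC{Name: "tenant-b", CPUCores: 16, MemoryGB: 64, VLANs: []int{202}}); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", chassis)
}
```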
This means multi-tenant operations (whether cloud providers or large corporates with distinct business units) can virtualise their servers for accounting, billing, resource-allocation, security and portability reasons, and now also virtualise connectivity to those resources for better control over application response times, security and resource allocation.
This radical change in how large data centres are architected and provisioned has come about through developments in software virtualisation techniques, especially hypervisor technology. That was only partly useful in the ADC space, as much of the hypervisor technology was designed around general-purpose, commoditised hardware and guest operating systems. ADCs are not general purpose, however: they must deliver enormous amounts of computing power, not only to inspect and dynamically route packets according to the application concerned at close to zero latency, but also to handle processor-intensive tasks such as SSL encryption/decryption and compression.
This led F5 to develop ADC hardware-specific hypervisor technology, bringing it to market over the last two years. It was a necessary step: at the same time, data encryption requirements have jumped radically – SSL/TLS key lengths are moving from 1024 to 2048 bits, which demands roughly five to seven times as much processing power. Only hypervisor software created specifically to optimise custom-developed ADC hardware can deliver the horsepower required for true on-the-fly, full wire-speed traffic management and acceleration.
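The cost of that key-length jump is easy to check for yourself. The Go sketch below times RSA private-key signing – the expensive handshake operation that ADCs offload – at both key lengths. The exact ratio depends on the CPU, but software implementations typically land in the same five-to-sevenfold region quoted above:

```go
// Rough benchmark of the 1024- vs 2048-bit RSA cost gap. Measures
// private-key signing, the expensive operation in an SSL/TLS handshake.
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
	"time"
)

func signsPerSecond(bits int) float64 {
	key, err := rsa.GenerateKey(rand.Reader, bits)
	if err != nil {
		panic(err)
	}
	digest := sha256.Sum256([]byte("hello"))
	const n = 200
	start := time.Now()
	for i := 0; i < n; i++ {
		if _, err := rsa.SignPKCS1v15(rand.Reader, key, crypto.SHA256, digest[:]); err != nil {
			panic(err)
		}
	}
	return n / time.Since(start).Seconds()
}

func main() {
	r1024 := signsPerSecond(1024)
	r2048 := signsPerSecond(2048)
	fmt.Printf("1024-bit: %.0f signs/s, 2048-bit: %.0f signs/s (ratio %.1fx)\n",
		r1024, r2048, r1024/r2048)
}
```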
Virtual servers can be brought up, dropped or moved… and now the same approach can be taken with connection resources: application traffic streams can be assigned more or less bandwidth and re-routed on the fly between virtual servers, physical servers or even data centres, without any interruption to sessions.
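As a sketch of the principle (illustrative only – the pool names and addresses are made up), the active route can be swapped atomically, so new requests follow the new path while anything already in flight completes undisturbed:

```go
// Sketch of shifting an application traffic stream between pools on the
// fly. The route is swapped atomically: new requests pick up the new
// pool on their next route() call, while requests already dispatched
// keep their existing backend. Pool names and addresses are hypothetical.
package main

import (
	"fmt"
	"sync/atomic"
)

type Pool struct {
	Name     string
	Backends []string
}

var active atomic.Pointer[Pool]

func route() *Pool { return active.Load() }

func main() {
	active.Store(&Pool{Name: "dc1", Backends: []string{"10.1.0.11", "10.1.0.12"}})
	fmt.Println("routing via", route().Name)

	// Operator or policy decision: drain traffic towards data centre 2,
	// with no restart and no dropped sessions.
	active.Store(&Pool{Name: "dc2", Backends: []string{"10.2.0.21", "10.2.0.22"}})
	fmt.Println("routing via", route().Name)
}
```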
With server virtualisation, data centre managers were able to let several football teams play on the same field at the same time. Now, with Virtual Clustered Multiprocessing on the ADC, you can also have multiple different groups of fans coming and going as they need, through access tunnels dedicated to them on demand. On top of that, you can now choose to charge the different groups different rates, and re-route them instantly if the game they’re watching runs over or you would rather they left Soccer City by another exit.
That’s the difference between multi-tenancy and true virtualisation.
Martin Walshaw, Senior Systems Engineer at F5 Networks and speaker at the upcoming IP Expo