Date: Wed, 17 Oct 2007 17:51:48 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Marcel Moolenaar <xcllnt@mac.com>
Cc: freebsd-arch@freebsd.org
Subject: Re: kernel level virtualisation requirements.
Message-ID: <200710180051.l9I0pmWH068335@apollo.backplane.com>
References: <470E5BFB.4050903@elischer.org> <470FD0DC.5080503@gritton.org> <20071013004539.R1002@10.0.0.1> <47107996.5090607@elischer.org> <ff5vdh$jol$1@ger.gmane.org> <200710172216.l9HMGhbd067251@apollo.backplane.com> <EB16056D-F5C1-4F0E-A97A-CAA17BB75F8E@mac.com>
:I thought virtualization was used to address under-utilization of
:hardware and to provide better control over usage parameters that may
:be covered by SLAs...
:
:--
:Marcel Moolenaar
:xcllnt@mac.com

Well, those are related topics, but not really the primary motivation for a service provider. The bottom line for any provider is always going to be money, and the biggest money burn is always going to be man-power, not computing power. What virtual kernels give the provider is a way to manage customer resources generically. Turning the resources into a big black box means they can be managed like a big black box.

Don't put a lot of stock in SLAs; the vast majority of service providers play fast and loose with SLA terms. I remember all the SLAs we provided for various levels of service at BEST Internet all those years ago... what a joke. If you want to know how serious a provider is about an SLA, look at the part of the contract which outlines the penalties for violating one. Then calculate the actual cost to the provider for occasionally violating a customer SLA.

The reality is that supporting most SLAs does not require sophisticated resource management; you just need to be able to shuffle the customer instance to available physical resources and make sure you have enough physical resources to handle all your customers. Shuffling the resources looks like nothing more than a quick 10-second reboot to the customer. That's how fast it can be.

I suppose we could argue over chicken-and-egg, but I will tell you that in my experience the driver is money. Customers who demand very specific, hard-to-implement SLAs are few and far between. Providers never start with 'what kind of SLA can we provide the customer' and then build resources to support it. Providers always start with 'how can I implement this generic service as cheaply as possible' and then build SLAs that fit the service.
Only once the service has got momentum will a provider look to see how they can expand the SLA to get more customers (but also remember that this sort of expansion is going after a very small customer population... that's why it isn't priority #1).

Here's the crux of the SLA: almost to a one they guarantee a high degree of uptime, but most don't really say anything much about reboots or minor rollbacks, and most customers either don't ask or don't care. If a customer needs something more sophisticated then he's going to pay through the nose for it. The vast majority of these services are provided to customers who don't care about anything other than uptime. That is what drives the market. High-end niche providers deal with the remainder.

--

You do get a lot of SLA potential for free with virtual kernel instances. When a customer instance is nothing more than a disk image, and managing that disk image requires nothing more than copying it to another machine, or basing it on a networked filesystem so it can be run on any machine, or replicating it to provide downtime guarantees by simply booting the instance on another machine when the primary goes down... well, think about how much work that would be for one of us to automate? I can't imagine it would take more than a day to put together and automate the management infrastructure.

When I first started using virtual kernels for testing I got bogged down in traditional thinking... I was thinking of the virtual kernel kinda like an application running on a machine (which is how a jailed environment is normally viewed). But that isn't the right way to think about it. The right way to think about a virtual kernel is to simply treat it like a physical machine that happens to be portable. It's as easy as putting the filesystem image on an NFS server and then running the virtual kernel on any physical box I desire.
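To give a feel for how little automation that workflow takes, here is a minimal sketch. Every specific in it is an illustrative assumption: the hostnames, NFS paths, and image names are made up, and the vkernel invocation is modeled loosely on DragonFly's vkernel(7) options (check that man page for the real flags). The script only prints the commands it would issue, rather than executing them, since it is a sketch and not a deployment tool:

```shell
#!/bin/sh
# Sketch: boot a customer's vkernel instance on whichever physical
# host has capacity.  Because the instance is just a disk image on an
# NFS server, "failover" is nothing more than mounting the image on a
# different box and booting it there.
#
# All names below (imagehost, /images/customer42, kernel.VKERNEL,
# the -m/-r/-I flags) are hypothetical placeholders.

NFS_SERVER="imagehost"            # assumed NFS server holding images
IMAGE_DIR="/images/customer42"    # assumed per-customer image directory
MNT="/var/vkernel/customer42"     # assumed mount point on the host

run_instance() {
    host=$1
    # Print (rather than execute) the commands a management daemon
    # would run: mount the shared image, then boot the virtual kernel.
    echo "ssh $host mount -t nfs $NFS_SERVER:$IMAGE_DIR $MNT"
    echo "ssh $host $MNT/kernel.VKERNEL -m 256m -r $MNT/root.img -I auto:bridge0"
}

# Primary box died?  Just boot the same image somewhere else.
run_instance host-b
```

The point of the sketch is that the per-customer state lives entirely in the image directory, so "migrate" and "recover" collapse into the same one-line operation aimed at a different host.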
No matter where I actually run it, POOF, it boots up on my network and can be accessed as if it were a physical machine. No matter what I stuff into that virtual machine... no matter how sophisticated it is or what applications and services are run inside it, my external management of the image stays the same. You can't get much better than that.

						-Matt
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?200710180051.l9I0pmWH068335>