Date:      Sat, 20 Mar 2010 16:19:50 +0000
From:      Bruce Simpson <bms@incunabulum.net>
To:        freebsd-xen@freebsd.org
Subject:   Re: FreeBSD on Xen with hw virtualization support
Message-ID:  <4BA4F5A6.508@incunabulum.net>
In-Reply-To: <ade45ae91003192110r774050dbld840aebcdfe7cb17@mail.gmail.com>
References:  <20100318204746.GA57903@cons.org>	<e8f0b581003190547u42de13c5u2613d76913af1db5@mail.gmail.com> <ade45ae91003192110r774050dbld840aebcdfe7cb17@mail.gmail.com>

On 03/20/10 04:10, Tim Judd wrote:
> This is the first time I've heard of any penalty on HVM systems.  What
> I'd like to know, given that I now have some googl'ing I need to do
> about this, is that for those who have already done this; how big is
> the impact?  Is it so much that general usability and patience a
> sysadmin does not normally have would drive them insane?
>    

Nobody's measuring it, to my knowledge, because nobody's had to -- if 
it's 'good enough' for them, they'll stick with what's there.

Of course, blue-chip-quality engineering is normally what I pitch for at 
the beginning. Then we identify what the client can actually get away with.

This is largely born of experience subcontracting for companies in 
the 3GPP space here in the UK, and it does require excellent 
communication between client and consultant. Any pitch to the mass 
consumer market usually requires the same level of engineering and 
project management expertise as a blue-chip B2B sale.

There is a performance penalty because Xen has to emulate real hardware 
for HVM, using code largely cribbed from QEMU.
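
To make that concrete: with the xm toolstack of that era, an HVM guest's 
configuration points at the QEMU-derived qemu-dm as its device model, and 
its disk and network devices are emulated ('ioemu') rather than 
paravirtualised. This is only a rough sketch; the names and paths below 
are typical defaults, not anything from this thread.

kernel = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"   # the QEMU-derived emulator
name = "freebsd-hvm"
memory = 1024
vcpus = 1
disk = [ "phy:/dev/vg0/freebsd,hda,w" ]     # emulated IDE disk
vif = [ "type=ioemu, bridge=xenbr0" ]       # emulated NIC, not PV netfront
boot = "c"

Every access to those emulated devices has to trap out of the guest and 
into that device model, which is where the overhead comes from.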

Normally this emulation happens within the hypervisor itself. However, 
this is problematic, because there is then no good way to book the 
CPU/memory/I/O involved to the domU doing the I/O, which in turn affects 
scheduling parameters.
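
To illustrate what 'booking' means here, a purely hypothetical sketch in 
C (not Xen code; every name is illustrative): the point is simply that 
work done emulating a guest's I/O should be charged to that guest, so 
the scheduler can see it.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-domU accounting record; not a Xen structure. */
struct domu_io_account {
	uint16_t	domid;		/* guest being charged */
	uint64_t	bytes;		/* I/O bytes handled on its behalf */
	uint64_t	cpu_ns;		/* CPU time spent emulating for it */
};

#define MAX_DOMUS	64
static struct domu_io_account accounts[MAX_DOMUS];

/* Charge one emulated I/O to the guest that caused it. */
static void
book_io(uint16_t domid, uint64_t bytes, uint64_t cpu_ns)
{
	struct domu_io_account *a = &accounts[domid % MAX_DOMUS];

	a->domid = domid;
	a->bytes += bytes;
	a->cpu_ns += cpu_ns;
}

int
main(void)
{
	/* e.g. a 4 KiB disk read emulated for domU 3, costing ~120us. */
	book_io(3, 4096, 120000);
	printf("domU %u: %ju bytes, %ju ns CPU\n", accounts[3].domid,
	    (uintmax_t)accounts[3].bytes, (uintmax_t)accounts[3].cpu_ns);
	return (0);
}

When the emulation is done anonymously inside the hypervisor, nothing 
like this exists per guest, so the true cost never feeds back into 
scheduling.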

Using the grant_table abstraction, though, it's possible to shuffle that 
work into a 'driver domain' (another stub Xen domain which hosts the 
drivers). I/O to the driver domain can still benefit from grant_table's 
page flipping in shared memory, whilst being booked in a way that 
reflects the true cost of running that HVM domU.
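
For reference, the sharing side works roughly like this: the guest 
publishes a grant entry naming the domain that may access a given 
machine frame, and the backend (here, the driver domain) then maps or 
flips that frame via the grant-table hypercalls instead of the data 
being copied. The struct below is paraphrased from memory from Xen's 
public grant_table.h (v1 entries); the helper function and array are my 
own illustration, not Xen code.

#include <stdint.h>

typedef uint16_t domid_t;

/* Paraphrased from xen/include/public/grant_table.h (v1 entry). */
struct grant_entry_v1 {
	uint16_t	flags;	/* GTF_* type and permission bits */
	domid_t		domid;	/* domain granted access, e.g. driver domain */
	uint32_t	frame;	/* machine frame the grantee may map */
};

#define GTF_permit_access	1		/* grantee may map 'frame' */
#define GTF_readonly		(1 << 2)	/* map read-only */

/* Illustrative only: a guest's grant table as a flat array. */
static struct grant_entry_v1 grant_table[32];

/*
 * Publish frame 'mfn' to the driver domain via grant reference 'ref'.
 * The driver domain can then map it (GNTTABOP_map_grant_ref) or have it
 * flipped into its own address space, rather than copying the data.
 * Real code writes 'flags' last, behind a memory barrier.
 */
static void
grant_page(uint32_t ref, domid_t driver_dom, uint32_t mfn, int readonly)
{
	grant_table[ref].domid = driver_dom;
	grant_table[ref].frame = mfn;
	grant_table[ref].flags = GTF_permit_access |
	    (readonly ? GTF_readonly : 0);
}

Because the I/O crosses into the driver domain through these shared 
pages, the work done on a guest's behalf can be attributed to that domU 
rather than disappearing into the hypervisor.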

I strongly feel pv_ops is the right way to go; however, this innovation 
is happening outside of Citrix itself -- it's mostly the Fedora camp who 
are pushing it.

You could consider it a Xen fork. No one is doing this work 'for free'; 
the cost of the innovation is borne by their employers in their line of 
business.




