Date:      Wed, 07 Jun 2017 20:01:00 +0200
From:      Harry Schmalzbauer <freebsd@omnilan.de>
To:        freebsd-virtualization@freebsd.org
Subject:   PCIe passthrough really that expensive?
Message-ID:  <59383F5C.8020801@omnilan.de>

 Hello,

some might have noticed my numerous posts recently, mainly in
freebsd-net@, all around the same story – replacing ESXi. So I hope
nobody minds if I ask for help again to fill in some gaps in my
knowledge about PCIe passthrough.
As a last resort for special VMs, I have always used dedicated NICs via
PCIe passthrough.
But with bhyve (besides other strange side effects I haven't tracked
down yet) I don't understand the results I get with bhyve-passthru.

Simple test: copy an ISO image from an NFSv4 mount via 1GbE (to /dev/null).
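(Roughly what I run, with path and interface name just as placeholders
here:
    dd if=/mnt/nfs/image.iso of=/dev/null bs=1m
    vmstat -i | grep em0    # interrupt rate on the NIC
    top -SH                 # Sys vs. idle split
the same commands on the host and inside the guest respectively.)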

Host, using if_em (hartwell): 4-8 kirqs/s (8k @ MTU 1500), system idle
~99-100%.
Passing this same hartwell device to the guest, which runs the identical
FreeBSD version as the host, I see 2x8 kirqs/s, independent of MTU, and
only 80% idle, while almost all cycles are spent in Sys (vmm).
Running the same guest with if_bridge(4)+vtnet(4) or vale(4)+vtnet(4)
delivers identical results: about 80% of the attainable throughput, only
80% idle cycles.

So interrupts triggered by PCI devices controlled via bhyve-passthru are
just as expensive as interrupts triggered by emulated devices?
I thought I would save these expensive VM exits by using the passthru
path. Am I completely wrong about that?
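(I guess the per-cause vmexit counters would show where the cycles go;
something like
    bhyvectl --vm=<vmname> --get-stats
on the host – I haven't broken the numbers down that way yet.)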

I have never done authoritative measurements on ESXi, but I remember
that there was a significant saving with VMDirectPath – big enough that
I never felt the need to measure it. Is there some implementation
difference? Some kind of intermediate interrupt moderation, maybe?
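(One more thing I'd like to rule out, in case it matters: how the
passed-through hartwell signals interrupts inside the guest (MSI-X vs.
plain MSI); I'd look at the capability lines of
    pciconf -lc
in the guest, but I haven't checked that yet.)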

Thanks for any hints/links,

-harry


