Date:      Thu, 17 Oct 2002 13:02:13 +0200
From:      Marko Zec <zec@tel.fer.hr>
To:        "J. 'LoneWolf' Mattsson" <lonewolf-freebsd@earthmagic.org>
Cc:        freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re: RFC: BSD network stack virtualization
Message-ID:  <3DAE98B4.4058023A@tel.fer.hr>
References:  <3DADD864.15757E4E@tel.fer.hr> <5.1.0.14.2.20021017184945.02958380@helios.earthmagic.org>

"J. 'LoneWolf' Mattsson" wrote:

> At 08:59 17/10/2002 +0200, Ruben van Staveren wrote:
> >Isn't this something that can overcome the current shortcomings of jail(2) ?
> >(the no other stacks/no raw sockets problem)

It should even be possible to run multiple jails within each virtual image, if
one wishes to do so :)
Actually, my code reuses the jail framework to provide separation (hiding)
between user processes, so the behavior in that area will be very similar.
Everything else done at the networking layer is free of "jail" legacy, as my
concept is completely different: providing multiple truly independent network
stacks, instead of hiding parts of a single monolithic stack, which was the
approach taken by the jail implementation.
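
To illustrate the difference (a rough sketch only; the structure and function
names below are made up for illustration and are not taken from either the
jail code or my patch): jail keeps one shared set of kernel data structures
and filters what each prison may see, while a virtual image simply owns a
private instance of those structures, so there is nothing to filter.

struct entry_sketch {
	struct entry_sketch	*next;
	void			*owner;		/* jail-style owner tag */
};

/* jail-style: one shared list; each jail sees only the entries it owns */
static int
count_visible_jail_style(struct entry_sketch *shared_head, void *my_jail)
{
	struct entry_sketch *e;
	int n = 0;

	for (e = shared_head; e != NULL; e = e->next)
		if (e->owner == my_jail)	/* skip everything foreign */
			n++;
	return (n);
}

/* vimage-style: each image walks its own list; no filtering is needed,
 * because nothing belonging to another image is reachable from here */
static int
count_visible_vimage_style(struct entry_sketch *my_head)
{
	struct entry_sketch *e;
	int n = 0;

	for (e = my_head; e != NULL; e = e->next)
		n++;
	return (n);
}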

An additional goodie is the introduction of a soft-limit option on average CPU
usage per virtual image. This can be very useful in virtual hosting
applications, to prevent runaway or malicious processes in a single virtual
image from starving the other images of CPU resources.
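
Very roughly, the idea can be sketched like this (purely illustrative, with
made-up names; this is not the actual code): each image keeps a decaying
average of the CPU share its processes have recently consumed, and once that
average exceeds the configured limit the image's processes can be penalized,
for example by lowering their scheduling priority, rather than being killed.

struct vi_cpu_sketch {
	int	limit_pct;	/* allowed average share in percent, 0 = none */
	int	avg_pct;	/* decayed average of recent usage             */
};

/* Called periodically (say once per second) with the share of CPU the
 * image consumed during the last interval. */
static int
vi_cpu_over_limit(struct vi_cpu_sketch *vc, int sample_pct)
{
	vc->avg_pct = (3 * vc->avg_pct + sample_pct) / 4;	/* decay */
	return (vc->limit_pct != 0 && vc->avg_pct > vc->limit_pct);
}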

> I've been tempted at looking into jail-ifying raw sockets as well, but time
> has precluded me from doing so (and from tracking -stable regularly). I
> must say that this virtualization sounds very promising in making the jail
> even more useful! And of course all the other avenues that are made
> possible with this. I guess the main/traditional question to ask first
> would be:
> This change adds abstraction, therefore it probably reduces performance -
> by how much?

In most parts of the code the virtualization is achieved by introducing a
single additional level of indirection for all virtualized symbols/variables,
which are now contained in the new struct vimage, unique to each virtual image.
Therefore, calls to most of the networking functions within the kernel had to
be extended with an additional argument, which passes a pointer to the current
vimage struct.
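
A simplified sketch of what this looks like (field and function names are made
up for illustration, not copied from the patch):

struct vimage {
	struct vimage	*vi_next;	/* global list of virtual images     */
	void		*vi_ifnet;	/* this image's own interface list   */
	void		*vi_rt;		/* its own routing table(s)          */
	void		*vi_tcb;	/* its own TCP control block list    */
	long		 vi_ips_total;	/* per-image counters, sysctls, ...  */
};

/*
 * A function that used to bump a global counter now gets the current
 * image passed in and goes through one extra pointer dereference --
 * that indirection is where the (small) overhead comes from.
 */
static void
ip_input_stat_sketch(struct vimage *vip)
{
	vip->vi_ips_total++;		/* was: ipstat.ips_total++; */
}

All the formerly global stack variables end up behind a pointer like this, and
the pointer to the current image is simply handed down the call chain, so the
cost is essentially one extra dereference per access.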
This additional overhead shouldn't present too significant a problem,
particularly not on current 1+ GHz CPUs with fast memory. Some preliminary
tests (netperf on TCP flows) show that the performance penalty is generally
minimal, somewhere around 1-2%, compared to normal maximum throughput (not
limited by the media speed). As I perform more systematic and accurate
measurements, I'll post them on my web page.

Best regards,

Marko

