From: Alex <joovke@joovke.com>
To: freebsd-xen@freebsd.org
Date: Sun, 30 Jan 2011 22:36:08 +1100
Message-ID: <4D454D28.8050106@joovke.com>
Subject: terrible performance with xn0 interface and PF

Hi guys,

I managed to get the XENHVM kernel working. Obviously I had to adjust my pf.conf, as the network interface is now xn0 instead of re0. All I did was edit the config and replace every instance of re0 with xn0.

The performance seems to be awful. I was wondering why network connectivity was so slow; a download test struggled to do 2 KB/s. I disabled pf and suddenly the speed skyrocketed. Any ideas where to look?
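The blanket re0 -> xn0 swap described above can be done with sed; this is just a sketch of that step (the /tmp paths are illustrative, not from the original message, and it deliberately works on a scratch copy rather than the live /etc/pf.conf):

```shell
# Sketch only: perform the re0 -> xn0 rename on a scratch copy of a ruleset.
# The sample rule and /tmp paths are illustrative.
printf 'pass in quick on re0 inet proto tcp from any to (re0) port 80\n' > /tmp/pf.before.conf
sed 's/re0/xn0/g' /tmp/pf.before.conf > /tmp/pf.after.conf
cat /tmp/pf.after.conf
```

Against the real file, following this with `pfctl -nf` would parse the rewritten ruleset without loading it.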
I have the following in my kernel for PF:

device pf
device pflog
device pfsync
options ALTQ
options ALTQ_CBQ        # Class Based Queueing (CBQ)
options ALTQ_RED        # Random Early Detection (RED)
options ALTQ_RIO        # RED In/Out
options ALTQ_HFSC       # Hierarchical Packet Scheduler (HFSC)
options ALTQ_PRIQ       # Priority Queueing (PRIQ)
options ALTQ_NOPCC      # Required for SMP build

and pf.conf (very basic setup):

mailblocklist = "{ 69.6.26.0/24 }"
#blacklist = "{ 202.16.0.11 }"

# Rule 0 (xn0)
#pass in quick on xn0 inet proto icmp from any to (xn0) label "RULE 0 -- ACCEPT "
# block mail server(s) that continue to try and send me junk
block in quick on xn0 inet proto tcp from $mailblocklist to (xn0) port 25
# block anyone else who's in the blacklist
#block in quick on xn0 inet from $blacklist to (xn0)
pass in quick on xn0 inet proto tcp from any to (xn0) port { 110, 25, 80, 443, 21, 53 } flags any label "RULE 0 -- ACCEPT "
pass in quick on xn0 inet proto udp from any to (xn0) port 53 label "RULE 0 -- ACCEPT "
#
# Rule 1 (lo0)
pass quick on lo0 inet from any to any no state label "RULE 1 -- ACCEPT "
#
# Rule 2 (xn0) -- allow all outbound connectivity
pass out quick on xn0 inet from any to any label "RULE 2 -- ACCEPT "
# Rule 3 (xn0)
# deny all not matched by above
block in quick on xn0 inet from any to any no state label "RULE 3 -- DROP "

--------------------------

Any ideas why I would be seeing such a performance hit? The only thing that's changed is the kernel + network interface type.
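One cheap sanity check on a ruleset like the one above is to write it to a scratch file and let pfctl parse it without loading it. This is only a sketch: the /tmp path is hypothetical, the rules are a cut-down subset of those shown, and the pfctl step is guarded so it is a no-op on a host without pf:

```shell
# Hypothetical smoke test: write a cut-down copy of the ruleset above to a
# scratch file, then ask pfctl to parse it (-n = parse only, do not load).
cat > /tmp/pf.sketch.conf <<'EOF'
mailblocklist = "{ 69.6.26.0/24 }"
block in quick on xn0 inet proto tcp from $mailblocklist to (xn0) port 25
pass out quick on xn0 inet from any to any
block in quick on xn0 inet from any to any no state
EOF
# Guarded so this stays harmless on hosts without pf installed:
command -v pfctl >/dev/null 2>&1 && pfctl -nf /tmp/pf.sketch.conf || echo "pfctl not found; skipped parse check"
```

A parse error here would point at the config; a clean parse with the slowdown still present points back at the interaction between pf and the xn driver.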