Date: Thu, 14 Jan 2010 19:42:42 -0800
From: Tim Bisson <bissont@gmail.com>
To: Deploy IS INFO <info@deployis.eu>
Cc: freebsd-xen@freebsd.org
Subject: Re: Can't boot 8/CURRENT hvm on Quad-Core Opteron 2352
Message-ID: <1A8F87C2-3790-42CA-B98C-2017B3F8EF9C@gmail.com>
In-Reply-To: <4B4C703A.3000501@deployis.eu>
References: <BE5A8730-B0EA-4CE8-8C2F-B3DC7A5899EA@gmail.com> <20100112093628.GH62907@deviant.kiev.zoral.com.ua> <4B4C703A.3000501@deployis.eu>
On Jan 12, 2010, at 4:51 AM, Deploy IS INFO wrote:

> Kostik Belousov wrote:
>> On Mon, Jan 11, 2010 at 11:36:00PM -0800, Timothy Bisson wrote:
>>> Hi,
>>>
>>> I'm trying to run a FreeBSD 8/CURRENT HVM on Xen, but FreeBSD
>>> currently panics on a quad-core Opteron 2352 box while booting from
>>> the ISO (disabling ACPI doesn't help).
>>>
>>> However, I can successfully run a FreeBSD 8/CURRENT HVM on Xen on an
>>> Intel Xeon Nehalem box. I tried booting the same installed disk image
>>> (from the Nehalem box) on the Opteron box, but that also resulted in
>>> a panic while booting.
>>>
>>> The CURRENT ISO I'm using is from:
>>> ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/201001/FreeBSD-9.0-CURRENT-201001-amd64-bootonly.iso
>>>
>>> I'm using Xen-3.3.1 on both physical boxes, and a FreeBSD 6 HVM works
>>> on both the Nehalem and Opteron boxes...
>>>
>>> Here's the backtrace from the Opteron box:
>>>
>>> kernel trap 9 with interrupts disabled
>>>
>>> Fatal trap 9: general protection fault while in kernel mode
>>> cpuid             = 0; apic id = 00
>>> instruction pointer  = 0x20:0xffffffff80878193
>>> stack pointer        = 0x28:0xffffffff81044bb0
>>> frame pointer        = 0x28:0xffffffff81044bc0
>>> code segment         = base 0x0, limit 0xfffff, type 0x1b
>>>                      = DPL 0, pres 1, long 1, def32 0, gran 1
>>> processor eflags     = resume, IOPL = 0
>>> current process      = 0 ()
>>> [thread pid 0 tid 0 ]
>>> Stopped at   pmap_invalidate_cache_range+0x43:   clflushl (%rdi)
>>> db> bt
>>> Tracing pid 0 tid 0 td 0xffffffff80c51fc0
>>> pmap_invalidate_cache_range() at pmap_invalidate_cache_range+0x43
>>> pmap_change_attr_locked() at pmap_change_attr_locked+0x368
>>> pmap_change_attr() at pmap_change_attr+0x43
>>> pmap_mapdev_attr() at pmap_mapdev_attr+0x112
>>> lapic_init() at lapic_init+0x29
>>> madt_setup_local() at madt_setup_local+0x26
>>> apic_setup_local() at apic_setup_local+0x13
>>> mi_startup() at mi_startup+0x59
>>> btext() at btext+0x2c
>>> I took a look through the bug database and didn't see any similar
>>> problem reports. Is it reasonable to file a bug report? Is there
>>> additional information that I should be reporting?
>>
>> Set hw.clflush_disable=1 at the loader prompt.
>
> Hi,
>
> I'm also trying FreeBSD 8 on a Nehalem box. When I configure more than
> 2 vcpus for the HVM guest, I run into the following:
>
> - With the XENHVM kernel, the boot simply stops when the message about
>   WITNESS performance comes up. No debug messages, nothing; it just
>   stops there, and the PV drivers attach just before that message.
>
> - With the GENERIC kernel, the re driver somehow fails to receive
>   packets when 4 vcpus are configured. Tcpdump showed that packets are
>   going out, but somehow none are received. I'd say it's not really a
>   FreeBSD problem, but it's weird enough.
>
> Could you verify these two problems on your Opteron and Nehalem
> machines? We are also using Xen-3.3.1.
>
> Regards,
> Andras

The XENHVM kernel works fine (both boxes) with more than 2 vcpus. I
received a panic regarding the xn driver, but that went away once I
configured the vif to use netfront.

I don't know what model you specify to get the re driver, but the ed
and em drivers work fine.
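For reference, Kostik's workaround can also be made persistent instead of typed at the loader prompt each boot. A minimal sketch of the loader.conf fragment (a standard FreeBSD loader tunable; the comment text is my own summary of what it does):

```
# /boot/loader.conf
# Work around the GPF in pmap_invalidate_cache_range() by telling the
# kernel not to use the CLFLUSH instruction:
hw.clflush_disable=1
```

The same `set hw.clflush_disable=1` at the loader "OK" prompt applies it for a single boot only, which is handy for testing before committing it to loader.conf.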
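In case it helps, here is roughly what I mean by configuring the vif to use netfront, as a guest config sketch for Xen 3.3's xm toolstack. The file path and bridge name are examples from my own notes, not necessarily what you have:

```
# Hypothetical HVM guest config fragment (e.g. /etc/xen/freebsd-hvm.cfg).
# PV network path, handled by the xn/netfront driver in the guest:
vif = [ 'type=netfront, bridge=xenbr0' ]
# Alternatively, an emulated NIC that FreeBSD's em(4) driver attaches to:
# vif = [ 'type=ioemu, model=e1000, bridge=xenbr0' ]
```

That should let you sidestep the re driver entirely while you narrow down the 4-vcpu receive problem.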