Date:      Fri, 4 Oct 2013 15:59:03 +0530
From:      Shanker Balan <mail@shankerbalan.net>
To:        "freebsd-xen@freebsd.org" <freebsd-xen@freebsd.org>
Subject:   Re: Latest current -CURRENT (rev 255904) panics with "device hyperv" on XenServer 6.2
Message-ID:  <7D0E64DD-7545-4B53-AEF8-173CDED63F44@shankerbalan.net>
In-Reply-To: <11704071.7RatWqnjQ8@snasonovnbwxp.bcc>
References:  <11704071.7RatWqnjQ8@snasonovnbwxp.bcc>

On 04-Oct-2013, at 1:02 PM, Sergey Nasonov <snasonov@bcc.ru> wrote:

>
> > I just tried FreeBSD 10 ALPHA4 ISO. The ISO fails to boot on
> > XenServer 6.2 resulting with the same HyperV panic
> >
> > Regards.
> > @shankerbalan
>
> Hi,
> You can disable viridian support for that VM by command:
> xe vm-param-set platform:viridian=false uuid=<vm_uuid>
>

Hi Sergey,

Thank you very much for suggesting the workaround.

My use case is users uploading the ISO to a CloudStack private cloud and then
being unable to install FreeBSD 10. As an end user of CloudStack, I don't have
access to the hypervisor (XenServer) to make the required param changes.

End users can only upload ISOs and create VMs, so they will hit the panic.


> after that, check the platform parameter for the FreeBSD VM by command:
> xe vm-param-get param-name=platform uuid=<vm_uuid>
> viridian: false; timeoffset: 0; nx: true; acpi: 1; apic: true; pae: true
>
> It lets me run FreeBSD 10 ALPHA 4 without problems.
>
> I have tested live migration with both static and dynamic memory
> configuration. If I set a static configuration, for example 512 MB RAM,
> then VM migration ends fine. VM migration with an active dynamic memory
> configuration (512 MB min - 1024 MB max) triggers a huge amount of
> console messages:
>
> KDB: stack backtrace:
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe003d291900
> kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe003d2919b0
> witness_warn() at witness_warn+0x4a8/frame 0xfffffe003d291a70
> uma_zalloc_arg() at uma_zalloc_arg+0x3b/frame 0xfffffe003d291ae0
> malloc() at malloc+0x101/frame 0xfffffe003d291b30
> balloon_process() at balloon_process+0x44a/frame 0xfffffe003d291bb0
> fork_exit() at fork_exit+0x84/frame 0xfffffe003d291bf0
> fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe003d291bf0
> --- trap 0, rip = 0, rsp = 0xfffffe003d291cb0, rbp = 0 ---
> uma_zalloc_arg: zone "16" with the following non-sleepable locks held:
> exclusive sleep mutex balloon_lock (balloon_lock) r = 0 (0xffffffff816e7158) locked @ /usr/src/sys/dev/xen/balloon/balloon.c:339
> exclusive sleep mutex balloon_mutex (balloon_mutex) r = 0 (0xffffffff816e7138) locked @ /usr/src/sys/dev/xen/balloon/balloon.c:373
> [the same backtrace and witness warning repeat many more times]
>
> After migration, the VM runs on another physical host and the SSH session
> was not interrupted. But I lost console access over XenCenter.

Interesting. I will be creating a FreeBSD template on CloudStack and will
update with my experience shortly.

It would be wonderful if FreeBSD 10 worked out of the box on
CloudStack+XenServer. Currently, the CloudStack deployments I know of use
VMware ESXi to host FreeBSD.

Thank you.
@shankerbalan



