Date:      Sun, 14 Jan 2018 15:01:28 +0000
From:      Grzegorz Junka <list1@gjunka.com>
Cc:        freebsd-questions@freebsd.org, freebsd-drivers@freebsd.org
Subject:   Re: Server doesn't boot when 3 PCIe slots are populated
Message-ID:  <b362cc2f-6bcc-bf50-683c-fd53b3d18ca3@gjunka.com>
In-Reply-To: <60145.108.68.169.115.1515941410.squirrel@cosmo.uchicago.edu>
References:  <ecce3fa6-3909-0947-685c-8a412684e99c@gjunka.com> <061ccfb3-ee6a-71a7-3926-372bb17b3171@kicp.uchicago.edu> <4cd39c52-9bf0-ef44-8335-9b4cf6eb6a6b@gjunka.com> <60145.108.68.169.115.1515941410.squirrel@cosmo.uchicago.edu>


On 14/01/2018 14:50, Valeri Galtsev wrote:
> On Sun, January 14, 2018 8:34 am, Grzegorz Junka wrote:
>> On 13/01/2018 18:31, Valeri Galtsev wrote:
>>>
>>> On 01/13/18 10:21, Grzegorz Junka wrote:
>>>> Hello,
>>>>
>>>> I am installing a FreeBSD server based on Supermicro H8SML-iF. There
>>>> are three PCIe slots, in which I installed 2 NVMe drives and one
>>>> network card, an Intel I350-T4 (with 4 Ethernet ports).
>>>>
>>>> I am observing a strange behavior where the system doesn't boot if
>>>> all three PCIe slots are populated. It shows this message:
>>>>
>>>> nvme0: <Generic NVMe Device> mem 0xfd8fc000-0xfd8fffff irq 24 at
>>>> device 0.0 on pci1
>>>> nvme0: controller ready did not become 1 within 30000 ms
>>>> nvme0: did not complete shutdown within 5 seconds of notification
>>>>
>>>> Then I see a kernel panic/dump and the system reboots after 15 seconds.
>>>>
>>>> If I remove one card, either one of the NVMe drives or the network
>>>> card, the system boots fine. Also, if in BIOS I set PnP OS to YES
>>>> then sometimes it boots (but not always). If I set PnP OS to NO, and
>>>> all three cards are installed, the system never boots.
>>>>
>>>> When the system boots OK I can see that the network card is reported
>>>> as 4 separate devices on one of the PCIe slots. I tried different
>>>> NVMe drives as well as changing which device is installed to which
>>>> slot but the result seems to be the same in any case.
>>>>
>>>> What could the issue be? The amount of power drawn by the hardware?
>>>> Too many devices not supported by the motherboard? Too many
>>>> interrupts for the FreeBSD kernel to handle?
>>> That would be my first suspicion: either the total power drawn from
>>> the power supply, or the total power drawn from the PCIe bus power
>>> leads. Check whether any of the add-on cards has an extra power port
>>> (many video cards do). A card will likely work without the extra
>>> power connected, but connecting it may solve your problem. Next:
>>> borrow a more powerful power supply and see if that resolves the
>>> issue. Or temporarily disconnect everything else (like all hard
>>> drives) and boot with all three cards off a live CD; if that doesn't
>>> crash, then the power supply is marginally insufficient.
>> Thanks for the suggestion. The power supply was able to power two NVMe
>> disks and 6 spinning HDDs without issues in another server, so the
>> total power should be fine. It may be the PCI bus power leads causing
>> problems, but then the two NVMe drives wouldn't take more than 5-9 W
>> and the network card even less; the PCI Express specification allows
>> much more to be drawn from each slot. In total the server shouldn't
>> take more than 50-70 W.
>>
>> I am not saying that it's not the power supply, but I think that is
>> the least likely cause at this point. I will try another power supply
>> when I find one.
> Another shot in the dark: some PCI-Express slots may be "a pair", i.e.
> they can only take cards with the same number of signal lanes. Then you
> may have trouble if one of the cards is, say, x8 and the other is x4.
> The system board ("motherboard") manual may shed light on this.
>
> Incidentally, a power supply that successfully powers a different
> machine is not necessarily sufficient to power this one. As you said.
>

The manual states:

Slot 7: One (1) PCI-Express x8 (in x16) Gen. 2
Slot 6: One (1) PCI-Express x4 (in x8 slot) Gen. 2
Slot 5: One (1) PCI-Express x8 Gen. 2

I tried all combinations (the pair of NVMe drives in slots 5/6, 5/7, or
6/7, with the network card in the remaining slot) but none worked. Going
by the manual, wouldn't slots 5/7 for the NVMe drives and slot 6 for the
network card be the safest bet?

Is there any utility to verify how many of the CPU's PCIe lanes are in
use or free?
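
(I don't know of a utility that reports free lanes directly, but pciconf
can at least show the link width each device actually negotiated. A
minimal sketch, assuming the drives attach as nvme0/nvme1 and the I350
ports as igb0-igb3:

    # List every PCI device together with its capability registers; the
    # PCI-Express capability line reports the negotiated link width
    # against the device's maximum, e.g. "link x4(x8)" would mean the
    # card trained at x4 although it is capable of x8.
    pciconf -lc

Comparing those negotiated widths with the slot widths quoted from the
manual should show whether any card is training at fewer lanes than
expected.)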

GregJ


