Date:      Sun, 29 Jan 2017 16:34:41 -0800
From:      David Christensen <dpchrist@holgerdanske.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: FreeBSD 11.0-RELEASE-p7 i386 system drive imaging and migration
Message-ID:  <2973d1ea-202f-60fa-2930-eec05b626cfb@holgerdanske.com>
In-Reply-To: <86bmupg0gi.fsf@WorkBox.homestead.org>
References:  <df0c81d7-fd2b-852f-4007-5fb4b24100e0@holgerdanske.com> <86bmupg0gi.fsf@WorkBox.homestead.org>

On 01/29/17 10:55, Brandon J. Wandersee wrote:
>
> David Christensen writes:
>
>> What is the proper way to clone a FreeBSD system image from one drive to
>> another?
>
> In my personal opinion, the "proper" way is to back up your data, create
> a fresh partition table and filesystems on the new disk, and restore the
> backup. Using `dd` to clone an entire disk byte-for-byte works, but it's
> the painfully slow, tedious, and potentially dangerous way of doing
> this. The native backup utilities---dump(8) and restore(8) for UFS, `zfs
> send` and `zfs receive` for ZFS---will copy the data from an existing
> filesystem and write to a new filesystem at speeds exponentially greater
> than anything you'll get from `dd`.

Thanks for the reply.


Here's my FreeBSD system drive:

toor@freebsd:/root # gpart show
=>      63  31277169  ada0  MBR  (15G)
         63         1        - free -  (512B)
         64  31277160     1  freebsd  [active]  (15G)
   31277224         8        - free -  (4.0K)

=>       0  31277160  ada0s1  BSD  (15G)
          0   4194304       1  freebsd-zfs  (2.0G)
    4194304   4194304       2  freebsd-swap  (2.0G)
    8388608  22888544       4  freebsd-zfs  (11G)
   31277152         8          - free -  (4.0K)

toor@freebsd:/root # zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
bootpool  1.98G   101M  1.89G         -     6%     4%  1.00x  ONLINE  -
zroot     10.9G  4.30G  6.58G         -    33%    39%  1.00x  ONLINE  -

toor@freebsd:/root # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
bootpool             101M  1.82G  99.4M  /bootpool
zroot               4.30G  6.24G    96K  /zroot
zroot/ROOT          2.68G  6.24G    96K  none
zroot/ROOT/default  2.68G  6.24G  2.68G  /
zroot/tmp            164K  6.24G   164K  /tmp
zroot/usr           1.61G  6.24G    96K  /usr
zroot/usr/home       399M  6.24G   399M  /usr/home
zroot/usr/ports      641M  6.24G   641M  /usr/ports
zroot/usr/src        609M  6.24G   609M  /usr/src
zroot/var            812K  6.24G    96K  /var
zroot/var/audit       96K  6.24G    96K  /var/audit
zroot/var/crash       96K  6.24G    96K  /var/crash
zroot/var/log        308K  6.24G   308K  /var/log
zroot/var/mail       120K  6.24G   120K  /var/mail
zroot/var/tmp         96K  6.24G    96K  /var/tmp


As I understand it, taking an image involves the following (a rough command 
sketch follows the list):

1.  Back up the MBR (dd?).

2.  Back up the slice 1 partition table (?).

3.  Back up bootpool file system ('zfs send').

4.  Back up the swap partition encryption container header (?).

5.  Back up the zroot partition encryption container header (?).

6.  Back up zroot file system ('zfs send').

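Here is a minimal sketch of those steps, assuming the layout shown above 
(ada0s1a = bootpool, ada0s1b = swap, ada0s1d = zroot in a GELI container); 
the snapshot and output file names are made up, and this would be run from 
FreeBSD itself (or a FreeBSD live/rescue system), not from the Debian 
backup host:

         # 1. MBR (boot code plus slice table in sector 0)
         dd if=/dev/ada0 of=ada0-mbr.bin bs=512 count=1

         # 2. Slice table and BSD label, in gpart's own backup format
         gpart backup ada0   > ada0.gpart
         gpart backup ada0s1 > ada0s1.gpart

         # 3. and 6. Recursive snapshots, then full replication streams
         zfs snapshot -r bootpool@image
         zfs snapshot -r zroot@image
         zfs send -R bootpool@image | gzip > bootpool-image.zfs.gz
         zfs send -R zroot@image    | gzip > zroot-image.zfs.gz

         # 4. and 5. GELI metadata; a one-time-key encrypted swap has no
         # persistent metadata, so only the zroot container may need this
         geli backup ada0s1d ada0s1d-geli.backup
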

Restoring an image involves (again, a rough sketch follows the list):

1.  Restore MBR ('dd').

2.  Restore slice 1 partition table (?).

3.  Create bootpool ZFS pool and file system (?).

4.  Restore bootpool file system ('zfs receive').

5.  Create an encryption container in the second partition.

6.  Restore headers.

7.  Initialize swap.

8.  Create an encryption container in the third partition.

9.  Restore headers.

10. Create zroot ZFS pool and file system (?).

11. Restore zroot file system ('zfs receive').

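And a matching sketch for the restore side, with the same caveats: 
hypothetical file names, the target disk assumed to appear as ada1 in a 
FreeBSD live/rescue environment, and the recreated slice/partition sizes 
must match the originals for the GELI metadata to restore cleanly:

         # 1. and 2. Boot sector, slice table, and BSD label
         dd if=ada0-mbr.bin of=/dev/ada1 bs=512 count=1
         gpart restore -F ada1   < ada0.gpart
         gpart restore -F ada1s1 < ada0s1.gpart
         # (Reinstalling the boot blocks inside the slice is not covered here.)

         # 3. and 4. Recreate bootpool and receive its replication stream
         zpool create -f -o altroot=/mnt bootpool /dev/ada1s1a
         zcat bootpool-image.zfs.gz | zfs receive -F bootpool

         # 5.-7. A one-time-key encrypted swap has no persistent header to
         # restore; it is re-keyed at each boot via the .eli entry in fstab

         # 8. and 9. Put the GELI metadata back and attach the container
         # (attach prompts for the passphrase; a keyfile kept on bootpool
         # may also be needed via -k)
         geli restore ada0s1d-geli.backup ada1s1d
         geli attach ada1s1d

         # 10. and 11. Recreate zroot and receive its replication stream;
         # -F lets the stream overwrite the new pool's empty root dataset
         zpool create -f -o altroot=/mnt zroot /dev/ada1s1d.eli
         zcat zroot-image.zfs.gz | zfs receive -F zroot
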

(I'm a FreeBSD noob, so I'm sure there are errors in the above.)


These processes are complex enough to warrant automation.  Can 
Clonezilla handle FreeBSD 11.0 with MBR and encrypted ZFS root?


> However, I'm not sure that addresses the actual problem in this
> particular case. I can't say exactly what the error message you're
> getting means, but while it might stem from how you copied the system

I have a Debian 7 computer with a Pentium D 945 processor, 4 GB RAM, 16 
GB USB flash system drive, a Seagate ST3000DM01 3 TB backup HDD, and an 
assortment of drive docking bays.


How Debian sees the original FreeBSD system drive:

         2017-01-28 13:44:37 root@p43200 ~
         # parted /dev/sdb u s p free
         Model: ATA SAMSUNG SSD UM41 (scsi)
         Disk /dev/sdb: 31277232s
         Sector size (logical/physical): 512B/512B
         Partition Table: msdos

         Number  Start      End        Size       Type     File system  Flags
                 63s        63s        1s                  Free Space
          1      64s        31277223s  31277160s  primary  zfs          boot
                 31277224s  31277231s  8s                  Free Space


Take an image of the original FreeBSD system drive:

         2017-01-28 13:44:55 root@p43200 ~
         # cd /mnt/q/image/holgerdanske.com/freebsd/

         2017-01-28 13:48:05 root@p43200 /mnt/q/image/holgerdanske.com/freebsd
         # time dd if=/dev/sdb count=31277224 | gzip | tee i72600s-20170128-1346-freebsd-11.0-release-i386-op.img.gz | sha256sum -b > i72600s-20170128-1346-freebsd-11.0-release-i386-op.img.gz.sha256
         31277224+0 records in
         31277224+0 records out
         16013938688 bytes (16 GB) copied, 1032.32 s, 15.5 MB/s

         real    17m12.330s
         user    12m7.961s
         sys     1m42.726s


Restore image onto a cloned system drive (Intel 520 Series 60 GB SSD):

         2017-01-28 20:56:58 root@p43200 /mnt/q/image/holgerdanske.com/freebsd
         # time zcat i72600s-20170128-1346-freebsd-11.0-release-i386-op.img.gz > /dev/sdb

         real    3m42.036s
         user    1m35.854s
         sys     0m27.406s


> it might also imply a problem with the disk itself. Unrecoverable read
> errors, maybe.

1.  Putting the original system drive into another computer broke Xfce 
applications.

2.  Everything works as before when the original system drive is put 
back into the original computer (I am typing this message on that system).

3.  The cloned drive works and has passed Intel SSD Toolbox checks.

4.  Putting the cloned drive into the original computer broke Xfce 
applications in exactly the same way.


I doubt it's an SSD hardware problem.


Again, here are the error messages seen on the console when I attempt to 
launch the Terminal Emulator in Xfce, both when the original system drive 
is placed into another computer and when the cloned drive is placed into 
the original computer:

         vm_fault: pager read error: pid 1023 (python2.7)
         vm_fault: pager read error: pid 1040 (xfce4-terminal)


Does anyone have any ideas why Xfce applications would generate such errors?


David



