trying out freebsd and failing at it

Dec 14, 2014  

Wait, what? Why?! No reason, really. I saw the new FreeBSD book[1] lying around in someone’s office, and separately I was reminded of the week I spent getting Gentoo up and running a decade ago, and decided I missed all that and wanted to repeat a similar experience.

So, this is an attempt to both (1) install it within a VM on the Google Compute Engine[2], and (2) slowly learn more about it. Here follows a log of everything I did, based on the original instructions from the mailing list[3]. (Meta-note: if running remotely, be sensible; use tmux or screen.)

Step 1: Install the emulator

$ sudo apt-get install qemu

Step 2: Get the FreeBSD version to install – in my case, I picked the “disc1” version corresponding to the “RELEASE” image[4]

$ wget ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/10.1/FreeBSD-10.1-RELEASE-amd64-disc1.iso

Step 3: Create the disk image for the emulator

$ qemu-img create disk.raw 100g
Formatting 'disk.raw', fmt=raw size=107374182400 

Step 4: Boot from the ISO downloaded earlier

Notes:

  • The original instructions also mention -enable-kvm, but that didn’t work for me. I tried sudo modprobe kvm-intel, but dmesg | grep kvm informed me that it had been disabled by the BIOS, so I skipped it here.
  • If you are running this remotely (like I was), you’ll have to add -curses to the qemu command line.
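To make the KVM decision mechanical rather than a dmesg archaeology session, a small check like the following works on a Linux host. The memory size and the commented-out invocation are my assumptions about a reasonable command line, not the exact one from the original instructions:

```shell
# Pick -enable-kvm only if /dev/kvm is actually usable. On the machine in
# this post, VT-x was disabled in the BIOS, so the flag stays empty and
# qemu falls back to (much slower) pure emulation.
KVM_FLAG=""
if [ -w /dev/kvm ]; then
  KVM_FLAG="-enable-kvm"
fi

# Hypothetical full invocation (shown for shape; adjust paths/memory):
#   qemu-system-x86_64 $KVM_FLAG -m 1024 \
#     -cdrom FreeBSD-10.1-RELEASE-amd64-disc1.iso \
#     -drive file=disk.raw,format=raw \
#     -curses
echo "kvm flag: '${KVM_FLAG}'"
```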

Step 5: Install FreeBSD

At this point, if all went well, the emulator should boot up and you should see the “Welcome” screen, where you can hit “Install” and begin.

Notes:

  • Pretty standard stuff, if you’ve done a Linux install before.
  • When it came to the disk setup I chose UFS for no particular reason[5], but you can choose any of the options; it’ll show you the suggested table for the “entire disk” as split between freebsd-boot, freebsd-ufs and freebsd-swap; just hit “Finish” and then “Commit” to proceed.
  • Take a nice long break when the “Archive Extraction” step begins; this is computationally intensive, and (especially without KVM enabled) takes a long time in the emulator.
  • DNS settings are google.internal. for the “search” domain, and 169.254.169.254 for the DNS IP.
  • Include sshd and ntpd in the list of services to start at boot.

Step 6: Further configuration (as root)

Once you hit “Exit” at the end, choose the option to drop into a shell and then run the following:

echo 'console="comconsole"' >> /etc/rc.conf
echo 'console="comconsole"' > /boot/loader.conf
sed -i '' -e '/hostname/d' /etc/rc.conf
echo '-Dh' > /boot.config

cat <<EOF >/etc/dhclient.conf
interface "vtnet0" {
  supersede subnet-mask 255.255.0.0;
}
EOF

sed -i '' -e '/server/d' /etc/ntp.conf
echo 'server 169.254.169.254' >> /etc/ntp.conf

echo '169.254.169.254 metadata.google.internal metadata' >> /etc/hosts
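These edits are easy to get wrong when copy-pasting (smart quotes break the shell, and GNU and BSD sed disagree on in-place flags). Here is the same delete-line/append/heredoc pattern rehearsed against scratch files so the effect can be checked without touching /etc — the file contents below are stand-ins, not a real rc.conf:

```shell
# Rehearse the Step 6 edits on scratch copies instead of the real files.
tmp=$(mktemp -d)
printf 'hostname="freebsd"\nsshd_enable="YES"\n' > "$tmp/rc.conf"

# Drop the hostname line (portable sed: write to a temp file, then move),
# then point the console at the serial port:
sed -e '/hostname/d' "$tmp/rc.conf" > "$tmp/rc.conf.tmp"
mv "$tmp/rc.conf.tmp" "$tmp/rc.conf"
echo 'console="comconsole"' >> "$tmp/rc.conf"

# The dhclient stanza, written with a quoted heredoc so nothing expands:
cat <<'EOF' > "$tmp/dhclient.conf"
interface "vtnet0" {
  supersede subnet-mask 255.255.0.0;
}
EOF

cat "$tmp/rc.conf"
```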

Step 7: Add yourself as a user

  • Run adduser and follow the prompts[6]
  • Add yourself to the wheel group. E.g. in my case: pw user mod agam -G wheel
  • Allow yourself to log in via ssh: sed -i '' -e 's/#PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config

Step 8: Setup GCE (within the image)

  • Enable either OpenDNS or Google Public DNS. E.g. for the latter: echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
  • Install sudo, python[7], and wget[8]
  • Get gcloud[9][10]
  • Remove the DNS record added earlier: sed -i '' -e '/8.8.8.8/d' /etc/resolv.conf
  • Turn off FreeBSD (run poweroff)

Step 9: Setup GCE (on your workstation)

  • Install gcloud (as before)
  • Authenticate: Run gcloud auth login and follow the url to enter the code
  • Prepare the image for upload: tar -Szcf freebsd.tar.gz disk.raw
  • Create a bucket[11] and upload[12] the image there (Note: I was shocked, shocked, by how fast this upload went!)
  • Prepare the image for use in your VM and insert it

    $ gcutil addimage freebsd gs://<bucket>/<object>
    $ gcutil --project <project_id> addimage freebsd gs://<bucket>/<object>
    
  • Add a VM and SSH to it (both these operations can be done either through the “Google Cloud Console” or the command-line client[13]). E.g. in my case the latter is gcloud compute ssh myvm[14]

Step 10: Nope

Yeah this didn’t work for me. Luckily, the “serial console” can be viewed through the dashboard, and what I saw was a repeated failure to boot.

Mounting local file systems:.
Writing entropy file:.
vtnet0: link state changed to UP
Starting Network: lo0 vtnet0.
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
     options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
     inet6 ::1 prefixlen 128 
     inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2 
     inet 127.0.0.1 netmask 0xff000000 
     nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vtnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=6c01bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
	ether 42:01:0a:f0:c8:6c
	nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
	media: Ethernet 10Gbase-T <full-duplex>
	status: active
Starting devd.
Starting dhclient.
DHCPREQUEST on vtnet0 to 255.255.255.255 port 67
DHCPNAK from 169.254.169.254
DHCPDISCOVER on vtnet0 to 255.255.255.255 port 67 interval 8
DHCPOFFER from 169.254.169.254
DHCPREQUEST on vtnet0 to 255.255.255.255 port 67
DHCPACK from 169.254.169.254
panic: ffs_write: type 0xfffff80008a1cb10 0 (0,150)
cpuid = 2
KDB: stack backtrace:
#0 0xffffffff80963000 at kdb_backtrace+0x60
#1 0xffffffff80928125 at panic+0x155
#2 0xffffffff80b7f825 at ffs_write+0x5b5
#3 0xffffffff80e428f5 at VOP_WRITE_APV+0x145
#4 0xffffffff809d96f9 at vn_write+0x259
#5 0xffffffff809d598b at vn_io_fault+0x10b
#6 0xffffffff8097a437 at dofilewrite+0x87
#7 0xffffffff8097a168 at kern_writev+0x68
#8 0xffffffff8097a0f3 at sys_write+0x63
#9 0xffffffff80d25851 at amd64_syscall+0x351
#10 0xffffffff80d0aa6b at Xfast_syscall+0xfb
Uptime: 7s

Looks like this may be a corrupted disk image? Not sure. But pretty bummed :(

Update: I see people having luck with either (1) building a “rescue disk” on a running FreeBSD machine (not an option for me), or (2) using the mfsbsd[15] remote install. I think I’ll try the latter when I get time.

Update: Here’s how that went :-

  • Get the mfsbsd image[16]
$ curl -o disk.raw http://mfsbsd.vx.sk/files/iso/10/amd64/mfsbsd-se-10.1-RELEASE-amd64.iso
  • Tar it, upload it
$ tar -Szcf mfs-freebsd.tar.gz disk.raw
$ gsutil cp mfs-freebsd.tar.gz gs://<bucket_name>
$ gcutil addimage mfs-freebsd gs://<bucket_name>/mfs-freebsd.tar.gz
  • Create a VM with this instance

Aaand … NOPE again; this time it fails to even show the serial console :(

Error: The resource 'projects/algol-c/zones/us-central1-a/instances/myvm' is not ready

  • Deleted the VM and created it again[17]

Failed again, but this time I grabbed the output before it vanished:

Unable to lock ram - bridge not found
Changing serial settings was 3/2 now 3/0
enter handle_19:
  NULL
Booting from Hard Disk...
Boot failed: not a bootable disk

enter handle_18:
  NULL
Booting from Floppy...
Boot failed: could not read the boot disk

enter handle_18:
  NULL
No bootable device.  Powering off VM.
END OF LINE
  Retrying in 60 seconds.
  • Of course! I have to make a bootable image using this ISO (!). But, but, but … that needs a running FreeBSD system?!

Update: Damn, looks like GCE is behind both Amazon and (wtf!) Microsoft on this. From the release notes for v10.1:

FreeBSD 10.1-RELEASE is also available on these cloud hosting platforms:

  • Amazon® EC2™ FreeBSD/amd64
  • Microsoft® Azure™ FreeBSD/amd64, FreeBSD/i386

Conclusion: The qemu path should have worked; I don’t yet understand why it didn’t. Another option might be to try the vhd image and get that to work. Or try EC2/Azure. Or wait for someone to figure this out and publicly share a working image. Or give up on FreeBSD and get back “to real work” :)


  1. The newer edition of “The Design and Implementation of the FreeBSD Operating System” [return]
  2. Yes, I’ve heard the Amazon experience is easier, but that would be … uh … slightly disloyal right now :P [return]
  3. https://groups.google.com/forum/#!msg/gce-discussion/YWoa3Aa_49U/FYAg9oiRlLUJ [return]
  4. From this list [return]
  5. Don’t necessarily care too much about the “Cult of ZFS”; besides, UFS is faster. [return]
  6. Shell choices are sh/csh/tcsh, with the first being the default option. Take your pick; I’m going to replace sh with bash later anyway. [return]
  7. If you try to guess and install Python 3, you’ll see this error: ERROR: Python 3 is not supported by the Google Cloud SDK. Please use a Python 2.x version that is 2.6 or greater (Do as it says! (Also, you might not want to put bets on python 3?)) [return]
  8. Either run pkg install for each tool (FreeBSD 10 replaced the old pkg_add -r with pkg(8)), or compile it from source with ports. For the latter, first run portsnap fetch, portsnap extract, and portsnap update, in that order (this is a one-time setup), then cd to the appropriate directory under /usr/ports and run make install for each tool (warning: this takes a long, long, long time! If you’re running the emulator without KVM support, you have to be extremely masochistic to try compiling from source). [return]
  9. Original instructions refer to gsutil and gcutil separately, but they were (you guessed it!) deprecated. [return]
  10. More information here [return]
  11. gsutil mb -p <project_id> gs://<bucket_name> [return]
  12. gsutil cp <gzipped_file> gs://<bucket_name> [return]
  13. https://cloud.google.com/sdk/gcloud/reference/compute/instances/create [return]
  14. You also need a --project flag, but you can set a global value for this by running gcloud config set project <project_id> [return]
  15. https://www.freebsd.org/doc/en_US.ISO8859-1/articles/remote-install/preparation.html [return]
  16. In case you were wondering, the file name inside the uploaded image has to be disk.raw [return]
  17. “Terminated” VMs have to be dealt with this way, AFAICS [return]