Wednesday, October 29, 2014

Building bhyve Images using makefs and mkimg

Recently Neel Natu committed work to enable bhyve to run on AMD processors.  My main development machine is an AMD A10-5700, so this commit lets me use bhyve for testing.

EDIT: Anish Gupta did the work that Neel Natu committed.  Thanks Anish!

I had previously used makefs and mkimg to build images for a CF card in another machine, so building images to use with bhyve is a natural extension.

First, make sure that you have a complete source checkout along with a completed buildworld and buildkernel.  Then follow these steps:
  1. Install world and distribution into a temporary directory using the NO_ROOT option:
    make installworld DESTDIR=<tmpdir> -DDB_FROM_SRC -DNO_ROOT
    make distribution DESTDIR=<tmpdir> -DDB_FROM_SRC -DNO_ROOT
    This preps everything with the defaults as necessary.
  2. Install a kernel either into a different directory (I do this) or into the same directory above:
    make installkernel DESTDIR=<tmpkerndir> -DNO_ROOT KERNCONF=<conf>
  3. Make a directory with your custom configuration files.  The basics are /etc/rc.conf and /etc/fstab and you might want /firstboot on there too.  You will also need a METALOG file which contains the permissions for the files.  This is just a standard mtree file, so you could use mtree to generate this instead of creating it by hand.  The file contents are below.
  4. Build the ufs image using the script in the src tree at tools/tools/makeroot/
    /usr/src/tools/tools/makeroot/  -e <custdir>/METALOG -e <tmpkerndir>/METALOG -p <tmpdir>/etc/master.passwd -s 2g ufs.img root
  5. Build the disc image:
    mkimg -s gpt -b <tmpdir>/boot/pmbr -p freebsd-boot:=<tmpdir>/boot/gptboot -p freebsd-swap::1G -p freebsd-ufs:=ufs.img -o disc.img
  6. Run the image:
    sh /usr/share/examples/bhyve/ -d disc.img vm0
There you have it.   Besides running the image, all the other steps can be done as a normal user w/o root access.

EDIT: You also might want to include an /entropy file (populated with 4k from /dev/random) in your custom directory so that the image has a good seed for entropy at first boot for things such as sshd key generation.
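A minimal sketch of creating that seed file in Python (the "custom" directory name below is a placeholder for your <custdir>; the shell equivalent would be dd if=/dev/random of=<custdir>/entropy bs=4k count=1):

```python
import os

# Placeholder for your custom configuration directory (<custdir> above).
custdir = "custom"
os.makedirs(custdir, exist_ok=True)

# Seed the image with 4k of randomness from the OS so the first boot
# has good entropy for things such as sshd key generation.
with open(os.path.join(custdir, "entropy"), "wb") as f:
    f.write(os.urandom(4096))
```

If you do this, remember to add a matching ./entropy line to your custom METALOG.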

File contents:
  • /etc/fstab:
    /dev/vtbd0p3    /               ufs     rw              1 1
  • Custom METALOG:
    #mtree 2.0
    ./etc/rc.conf type=file uname=root gname=wheel mode=0644
    ./etc/fstab type=file uname=root gname=wheel mode=0644
    ./firstboot type=file uname=root gname=wheel mode=0644
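Rather than writing the METALOG by hand, you can generate it; mtree is the canonical tool for this, but a sketch like the following hypothetical helper works for a flat set of config files (it assumes every file gets the uniform root:wheel 0644 ownership shown above):

```python
import os

def metalog_lines(custdir, uname="root", gname="wheel", mode="0644"):
    # Emit mtree 2.0 entries for every regular file under custdir,
    # assuming uniform ownership and permissions.
    yield "#mtree 2.0"
    for root, _dirs, files in os.walk(custdir):
        for name in sorted(files):
            rel = "./" + os.path.relpath(os.path.join(root, name), custdir)
            yield "%s type=file uname=%s gname=%s mode=%s" % (
                rel, uname, gname, mode)
```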

Saturday, March 15, 2014

Python ctypes wrapper for FLAC

As many people know, I'm a fan of Python and have been using it for over 15 years now.

One of the recent improvements to Python is the inclusion of ctypes, which allows you to write a wrapper around a shared library in pure Python, making it much easier to integrate libraries.  Previously you had to know Python's C API to produce a module.  There was SWIG, but if the library was complicated, it would often not produce working code, and even when it did, you had to hand-tweak the output to get it to work the way you expected.
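As a minimal illustration of the idea (unrelated to FLAC), here is ctypes wrapping a plain libc function; declaring argtypes and restype is what takes the place of hand-written C glue:

```python
import ctypes
import ctypes.util

# Load the C library; on POSIX systems, CDLL(None) falls back to the
# symbols already linked into the interpreter if find_library fails.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C prototype: size_t strlen(const char *).
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"hello")  # 5
```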

One of the projects I've worked on is a UPnP media server.  One of its features is the ability to decode a FLAC file and support seeking within the file.

I have now released a package of the code.

The one issue w/ ctypes is that some code can be very slow in Python.  The FLAC library presents the sample data as separate arrays for each channel, though most libraries expect the channel data interleaved.  I have written a very small optional library, interleave.c, that does this interleaving in faster C code.  In my tests, using the C code results in about a third of the CPU usage.
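For illustration, the interleaving step looks roughly like this in pure Python (a sketch, not the actual interleave.c; the per-sample loop below is exactly the kind of work that is cheap in C and slow in Python):

```python
def interleave(channels):
    # Merge per-channel sample sequences into one interleaved stream:
    # [[L0, L1, L2], [R0, R1, R2]] -> [L0, R0, L1, R1, L2, R2]
    out = []
    for frame in zip(*channels):
        out.extend(frame)
    return out

interleave([[1, 2, 3], [10, 20, 30]])  # [1, 10, 2, 20, 3, 30]
```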

Hope this is useful for others!

Wednesday, March 5, 2014

CTF + ARMeb + debugging

I've been working on making the AVILA board work again with FreeBSD.  Thanks to Jim from Netgate for sending me a board to do this work.

I still have a pending patch (waiting to go through bde) that fixes an unaligned off_t store and gets things farther, but with the patch I'm getting: panic: vm_page_alloc: page 0xc0805db0 is wired shortly after the machine launches the daemons.

I did work to get cross gdb working for armeb (committed in r261787 and r261788), but that didn't help as there is no kernel gdb support on armeb.  As I'm doing this debugging over the network, I can't dump a core.

I didn't feel like hand-decoding a struct vm_page, so I thought of other methods, and one is to use CTF to parse the data type and decode the data.  I know Python and ctypes, so I decided to wrap libctf and see what I could do.

Getting the initial Python wrapper working was easy, but my initial test data was the kernel on the amd64 box that I develop on.  Now I needed real armeb CTF data.  I pointed it at my kernel, and got: "File uses more recent ELF version than libctf".  OK, extract the CTF data from the kernel (CTF data is stored in a section named .SUNW_ctf) and work on that directly:
$ objcopy -O binary --set-section-flags optfiles=load,alloc -j .SUNW_ctf /tftpboot/kernel.avila.avila /dev/null
objcopy: /tftpboot/kernel.avila.avila: File format not recognized

Well, OK, that's not too surprising since it's an ARMEB binary, so let's try:
$ /usr/obj/arm.armeb/usr/src.avila/tmp/usr/bin/objcopy -O binary --set-section-flags optfiles=load,alloc -j .SUNW_ctf /tftpboot/kernel.avila.avila /tmp/test.avila.ctf     
$ ls -l /tmp/test.avila.ctf 
-rwxr-xr-x  1 jmg  wheel  0 Mar  5 17:59 /tmp/test.avila.ctf

Hmm, that didn't work too well either.  OK, let's just use dd to extract the data, using the offset and size from objdump -x.
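The dd step just copies size bytes starting at offset, both taken from the objdump -x section listing.  A hypothetical helper doing the same:

```python
def extract_section(path, offset, size, outpath):
    # Copy `size` bytes starting at `offset` out of `path`, the same
    # job as: dd if=path of=outpath bs=1 skip=offset count=size
    with open(path, "rb") as src:
        src.seek(offset)
        data = src.read(size)
    with open(outpath, "wb") as dst:
        dst.write(data)
    return len(data)
```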

Ok, now that I've done that, I get:
ValueError: '/tmp/avila.ctf': File is not in CTF or ELF format

Hmm, why is that?  Well, it turns out that the endianness of the CTF data is wrong.  The magic here is cf f1, but the magic on amd64 is f1 cf: it's byte-swapped.  That's annoying.  After spending some time trying to build a cross shared version of libctf, I found that it has the same issue.

After a bit of looking around, I discovered that libctf can only read CTF data in its native endianness, but ctfmerge has a magic option that will write out byte-swapped data if necessary, depending upon the ELF file it's embedding the data in.  This means that the CTF data in an armeb object file will differ depending upon the endianness of the machine you compiled it on, so the object file isn't cross-compatible.  But it does mean that the data in the object files will be readable by libctf, just not the data written into the kernel.
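The endianness check itself is simple: the CTF magic is the 16-bit value 0xcff1, so the first two bytes read cf f1 when the data was written big-endian and f1 cf when written little-endian.  A hypothetical sniffing helper:

```python
import struct

CTF_MAGIC = 0xCFF1

def ctf_endianness(data):
    # Peek at the leading 16-bit magic to see which byte order the
    # raw CTF data was written in; None means it isn't CTF at all.
    if struct.unpack_from("<H", data)[0] == CTF_MAGIC:
        return "little"
    if struct.unpack_from(">H", data)[0] == CTF_MAGIC:
        return "big"
    return None
```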

So, I create a sacrificial amd64 binary:
$ echo 'int main() {}' | cc -o /tmp/avila2.ctf -x c -

And use ctfmerge to put the data in it:
$ ctfmerge -L fldkj -o /tmp/avila2.ctf /usr/obj/arm.armeb/usr/src.avila/sys/AVILA/*.o

and again use dd to extract the .SUNW_ctf section into a separate file.

With all this work, I finally have the CTF data in a format that libctf can parse, so I try to parse some data.  The interesting thing is that CTF does encode the sizes of integers, but it uses the native arch's pointer size for CTF_K_POINTER types, which means that pointers appear to be 8 bytes in size instead of the correct 4.  A little more hacking on the script to force all pointers to be 4 bytes, a little helper to convert ddb output to a string, and finally I have the dump of the struct vm_page that I was trying to get all along:
{'act_count': '\x00',
 'aflags': '\x00',
 'busy_lock': 1,
 'dirty': '\xff',
 'flags': 0,
 'hold_count': 0,
 'listq': {'tqe_next': 0xc0805e00, 'tqe_prev': 0xc06d18a0},
 'md': {'pv_kva': 3235856384,
        'pv_list': {'tqh_first': 0x0, 'tqh_last': 0xc0805de0},
        'pv_memattr': '\x00',
        'pvh_attrs': 0},
 'object': 0xc06d1878,
 'oflags': '\x04',
 'order': '\t',
 'phys_addr': 17776640,
 'pindex': 3572,
 'plinks': {'memguard': {'p': 0, 'v': 3228376932},
            'q': {'tqe_next': 0x0, 'tqe_prev': 0xc06d1f64},
            's': {'pv': 0xc06d1f64, 'ss': {'sle_next': 0x0}}},
 'pool': '\x00',
 'queue': '\xff',
 'segind': '\x01',
 'valid': '\xff',
 'wire_count': 1}

So, the above was produced w/ the final script.
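The pointer-size fix comes down to unpacking values with the target's width and byte order rather than the host's.  A hypothetical helper, using the 'object' pointer from the dump above as an example:

```python
import struct

def decode_ptr(raw, offset, ptrsize=4, byteorder=">"):
    # Unpack a pointer using the target's size and endianness (4-byte
    # big-endian for armeb) instead of the host's 8-byte little-endian.
    code = {4: "I", 8: "Q"}[ptrsize]
    return struct.unpack_from(byteorder + code, raw, offset)[0]

decode_ptr(b"\xc0\x6d\x18\x78", 0)  # 0xc06d1878, the 'object' field above
```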