Moved blog

April 14th, 2009 by lulf

After setting up my own domain and ikiwiki to test it out, I figured I might as well move everything there in order to keep most of my homepage/blog stuff in the same place. I was able to import a WordPress export of this blog using this script (I did have to modify it a bit to make it work, though):

http://chris-lamb.co.uk/projects/ikiwiki-wordpress-import/

Until the Planet RSS link is updated, the FreeBSD-related posts can be found here:

http://lulf.geeknest.org/tags/freebsd/

Setting up a ZFS-only system

December 16th, 2008 by lulf

After loader support for ZFS was imported into FreeBSD around a month ago, I’ve been thinking of installing a ZFS-only system on my laptop. I also decided to try out the GPT layout instead of using disklabels etc.

The first thing I did was grab a snapshot of FreeBSD CURRENT. Since sysinstall doesn’t support setting up ZFS, it can’t be used, so one has to use the Fixit environment on the FreeBSD install CD to set things up. I started out by removing the existing partition table on the disk (just writing zeros to the start of the disk will do). If you’re reading this before the January 2009 snapshot of CURRENT comes out, you have to create your own ISO image in order to get a loader with the latest fixes. Look in src/release/Makefile and src/release/i386/mkisoimages.sh for how to do this.

Then, the next step was to set up the GPT with the partitions I wanted to have. Using GPT on FreeBSD, one should create one partition to contain the initial gptzfsboot loader. In addition, I wanted a swap partition, as well as a partition to use for a zpool for the whole system.

To set up the GPT, I used gpart(8) and looked at examples from the man page. The first thing to do is to set up the GPT partition scheme: first create the partition table, then add the appropriate partitions.

# gpart create -s GPT ad4
# gpart add -b 34 -s 128 -t freebsd-boot ad4
# gpart add -b 162 -s 5242880 -t freebsd-swap ad4
# gpart add -b 5243042 -s 125829120 -t freebsd-zfs ad4
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4

This creates the initial GPT and adds three partitions. The first partition contains the gptzfsboot loader, which is able to recognize and load the loader from a ZFS partition. The second partition is the swap partition (I used 2.5 GB for swap in this case). The third partition contains the zpool (60 GB). Sizes and offsets are specified in sectors (1 sector is typically 512 bytes). The last command puts the needed bootcode into ad4p1 (freebsd-boot).

Having set up the partitions, the hardest part should be done. Since we are in the Fixit environment, we can now create the zpool as well.

# zpool create data /dev/ad4p3

The zpool should now be up and running. I then decided to create the different filesystems I wanted to have in this pool. I created /usr, /home and /var (I use tmpfs for /tmp).
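
For completeness, creating those filesystems is just a matter of zfs create; the dataset names below are the ones the mountpoint settings further down assume:

# zfs create data/usr
# zfs create data/var
# zfs create data/home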

Then, FreeBSD must be installed on the system. I did this by copying all the folders from /dist in the Fixit environment into the zpool. In addition, the /dev directory has to be created. For more details on this, you can follow http://wiki.freebsd.org/AppleMacbook. At least /dist/boot should be copied in order to be able to boot.
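
Roughly, that step amounts to something like this (just a sketch; I’m assuming the pool got mounted at /data, which should be the default for a pool named data):

# cd /dist
# cp -Rp * /data
# mkdir /data/dev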

Then, the boot configuration has to be set up. First, boot/loader.conf has to contain:

zfs_load="YES"
vfs.root.mountfrom="zfs:data"

Any additional filesystems or swap have to be entered into etc/fstab, in my case:

/dev/ad4p2 none swap sw 0 0

I also entered the following into etc/rc.conf:

zfs_enable="YES"

In addition, boot/zfs/zpool.cache has to exist in order for the zpool to be imported automatically when ZFS is loaded at system boot. To get /boot/zfs/zpool.cache populated in the Fixit environment, I had to run:

# mkdir /boot/zfs
# zpool export data && zpool import data

Then I copied zpool.cache to boot/zfs on the zpool:

# cp /boot/zfs/zpool.cache /data/boot/zfs

Finally, a basic system should be installed. The last thing to do is to unmount the filesystem(s) and set a few properties:

# zfs set mountpoint=legacy data
# zfs set mountpoint=/usr data/usr
# zfs set mountpoint=/var data/var
# zfs set mountpoint=/home data/home
# zpool set bootfs=data data
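
The unmounting itself is nothing fancy; something like this (done before setting the properties above) should do it:

# cd /
# zfs unmount -a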

To get all the quirks right, such as permissions etc., you should do a real install by making world or by using sysinstall once you’ve booted into the system. Reboot, and you might be as lucky as me and boot right into your ZFS-only system :) For further information, take a look at:

http://wiki.freebsd.org/ZFSOnRoot
which contains some information on how to use ZFS as root while still booting from UFS, and:
http://wiki.freebsd.org/AppleMacbook
which has a nice section on setting up the zpool in a Fixit environment.

Update:

When rebuilding FreeBSD after this type of install, it’s also important that you build with LOADER_ZFS_SUPPORT=YES so that the loader is able to read zpools.
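
That is, I believe having the following in /etc/make.conf (or passing it on the make command line) when building is enough:

LOADER_ZFS_SUPPORT=YES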

Csup status

September 16th, 2008 by lulf

Finally, I have been able to resolve all currently known issues with cvsmode support in csup. I just sent out a new announcement with the patch, and I hope to get some more testing and perhaps some reviews soon, but it is a big patch and few people are familiar with the code base.

The remaining issues with the patch are support for using the status file when reading (not critical at all), and rsync support (which is only significant for a few files in the FreeBSD repository).

I hope as many people as possible are able to test it:

http://people.freebsd.org/~lulf/patches/csup/csup_09_16_2008.diff

Testing csup

March 6th, 2008 by lulf

The last couple of weeks I’ve been very busy with school (and I expected this to be a quiet semester). However, I’ve found some of the last few bugs lurking around in csup:

  • Deltas that had a ‘hand-hacked’ date would have their deltatexts misplaced in the RCS file.
  • When adding a new diff, a ‘..’ would be converted to ‘.’ twice, meaning it disappeared.

Now, there are only these issues left, but I’m not sure if I really want to fix them:

  • Some RCS files have an extra space between desc and the deltas. CVSup fixes this by _counting_ the lines and then writing them out when writing out the RCS file. I think this is silly, since it doesn’t really matter according to the RCS standard.
  • Some files appear to display garbage values, such as src/share/examples/kld/firmware/fwimage/firmware.img,v. This disappears for some reason in csup, but I’m not sure how to handle it. Comments are welcome.
  • It has quite high memory usage, which might be due to some leaks that I’ve been unable to find. I’ll do a more thorough audit of the code and run valgrind to investigate this further.
  • Does not support md5 of RCS stream, so it can’t detect errors yet.
  • Statusfile file attributes might not be correct.
  • Some RCS parts, such as newphrases (man rcsfile), are not supported yet.
  • Some hardcoded limits that may break it.
  • Some things are done in a silly way, such as sorting and comparing, which I have plans to improve later.

So, finally, you can try out the patches if you’d like:

http://people.freebsd.org/~lulf/patches/csup/cvsmode

Currently, I’m including the tokenizer generated by flex, since the flex file itself can’t be compiled with csup.

Improving csup

January 31st, 2008 by lulf

Hello there!

Huntin’ them bugs

August 17th, 2007 by lulf

More status updates… I’ve been fixing many small gvinum bugs over the last couple of weeks:

  • The state of gvinum objects was changed after reloading. This meant that objects got the wrong state when gvinum was brought up.
  • Made gvinum always use the most recent configuration it finds when setting object states.
  • Make sure the newest drive really is treated as the newest, instead of assuming it is the first one in the drive list, as was previously done.
  • Add “growable”-state to be used when a plex is ready to be grown.
  • Allow a plex to be rebuilt even though it’s also growable.
  • Do not change the size of the volume until the plex is completely grown.
  • Add status of growing and rebuild of a plex in the list output.
  • Prevent the rebuild from taking over the I/O system by increasing the access count at the start and end of the rebuild.

There were probably a couple of other fixes as well. Also, I’ve updated the vinum-examples page in the handbook to reflect new features and to give more practical examples. I’ve posted a “call for testers” on current@, arch@ and geom@, and have received some response from people who are willing to help me test. Thanks to them. I’ve uploaded the code sample that I’ll be delivering to Google here: http://folk.ntnu.no/lulf/gvinum_soc2007.tar.gz

Cleaning up

August 6th, 2007 by lulf

The last couple of weeks I’ve tested and done bugfixing and cleanup of the gvinum code. I refactored some parts to make the code belong where it seems logical. I also implemented growing for striped plexes, but that was quite easy since I could reuse most of the code for growing RAID-5 plexes. Unfortunately, I was sick for a week and unable to work.

What remains now is to do more testing (can’t get enough), and to write and update documentation on gvinum. I have updated patches for gvinum at http://folk.ntnu.no/lulf/patches/freebsd/gvinum for both RELENG_6 and CURRENT. I appreciate reports from brave users who try it out, even if it works :)

Also, I created a new Perforce branch called gvinum_cache. I’ve currently implemented a read/write cache to check if this would give much of a speed-up for gvinum. It’s not very nice for reliability, but it could be an option for those who want better performance. Anyway, I’ll write more on this later.

Growing up

July 17th, 2007 by lulf

Since the last post I haven’t really done that much to gvinum, apart from a few things:

  • I added a few automated test scripts to check that a volume behaves properly.
  • I went through the test plan and made sure that gvinum passes the tests.
  • I’ve been thinking a lot on how to best implement growing RAID-5 plexes.
  • I’ve implemented growing of RAID-5 plexes.

Now, the first and second points were quite boring to do, but I had to do them. The last points were trickier, since I didn’t really know where I should start. Finally I decided the best way was to let the plex overwrite itself! A more detailed explanation can be found in the TODO of my Perforce branch. I need to test the implementation a bit now. Other than that, I’ve been a bit lazy with my own work this week, and have tried to help other students with reviews etc.

Bugathon-week

July 4th, 2007 by lulf

Since the last post, there have been many small bugfixes to gvinum. After some debating with myself on how I should implement concat/stripe/mirror, I think I got it pretty much right. The event system changed gvinum a lot, so I had to rewrite most of the code I already had for this.

I have done a lot of testing this week, and I made a test plan that I’m going to follow. Hopefully, I’ll also be able to create some automatic tests for this.

I’ve even been a good boy and updated the gvinum manpage! I added some examples to the manpage as well, so that it’s easier for inexperienced users to get into gvinum (not sure if we want gvinum to live even longer, but :) ).

A lot of small problems with weird states being set were also fixed, since this can be very confusing if you haven’t used gvinum much.

What I’d like to do next is create a set of test scripts that I can use to test quickly and easily. I also noticed that it would be nice to have a command similar to ‘mirror’ for RAID-5 volumes. It could be used like this: ‘gvinum raid5 <disk1> <disk2> <disk3>’. Other than that, I’ve started thinking about how I’m going to implement RAID-5 resizing and the other goals in my proposal.

Bug-monster dying slowly

June 28th, 2007 by lulf

Finally, an update on what I’ve been doing since last time. This time there are a lot of small changes that have been made:

  • Implement initialization of RAID-5 arrays. This basically writes zeros over everything and makes sure parity is correct.
  • Fix a bug with mirror code. The length of the completed requests got doubled if you have a mirror with two plexes, tripled if you have a mirror with three plexes etc.
  • When a mirrored plex is syncing, all requests after and including the first _write request_ are delayed until syncing is finished.
  • Allow rebuilding a RAID-5 array while it is in use (e.g. mounted). Delay requests that are in conflict with the rebuild, but allow requests on the already rebuilt part to be run.
  • Allow subdisks to come up automagically after rebuild.
  • Allow stripe sizes not divisible by the subdisk size. A regression in the new gvinum code prevented this.
  • Modify the event system to contain two intmax_t fields, so we won’t have to allocate/deallocate pointers all the time when passing args to gv_post_event.
  • Add support for the rename and move commands to new gvinum. The code has been rewritten for the new gvinum.
  • Fix a bug in the code for degraded writes to a RAID-5 array, where only zeros were written.
  • Other minor bug/style fixes.

Next, I’m going to implement concat/stripe/mirror functionality. I already have some code from previous work, so I just need to adapt it to the new gvinum, as well as change some ugly parts. There are some small facade changes left, but I will do those after the last of the original vinum features is completed. Also, I will try to write a nice status report and get a testable patch out by the time the reports are finished.