more on zfs…

18 October 2007

This time I ran iozone with an 8GB file on:

  1. UFS2 (newfs defaults) over a gstripe (stripe size 64k) volume
  2. ZFS with 3 disks striped
  3. UFS2 (newfs defaults) over a zvol (equal to the zpool size)
  4. ZFS over a gstripe (stripe size 64k) volume
  5. UFS2 async + gjournal over a gstripe (stripe size 64k) volume
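For reference, setups along these lines could be created roughly as follows. This is a sketch, not the exact commands used: the device names (da0–da2), pool/volume names, and the zvol size are all assumptions.

```sh
# Assumed disks: da0, da1, da2 -- adjust to your hardware.

# 1. UFS2 over a gstripe volume with 64k stripe size
gstripe label -s 65536 st0 /dev/da0 /dev/da1 /dev/da2
newfs /dev/stripe/st0

# 2. ZFS striping the three disks directly
zpool create tank da0 da1 da2

# 3. UFS2 over a zvol sized to (roughly) the whole pool
zfs create -V 600g tank/vol          # size is a placeholder
newfs /dev/zvol/tank/vol

# 4. ZFS on top of the gstripe volume
zpool create tank /dev/stripe/st0

# 5. UFS2 + gjournal over the gstripe volume, mounted async
gjournal label /dev/stripe/st0
newfs -J /dev/stripe/st0.journal
mount -o async /dev/stripe/st0.journal /mnt
```

With gjournal, mounting async is the usual recommendation, since data is already journaled below the filesystem.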

[graph: zfs read]
[graph: zfs write]

I’m confused about what to expect from these values…
As usual, the complete set of graphs is here. (Note: I also split the iozone results by record size.)
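Since the results are split by record size, the run was presumably in iozone’s automatic mode; the invocation would have been of this general shape (only the 8GB file size is stated above, the flags and file path are assumptions):

```sh
# -a : automatic mode, varies record sizes (which is what allows
#      splitting the results per record size afterwards)
# -s 8g : 8GB test file, large enough to defeat RAM/ARC caching
# -f : test file placed on the filesystem under test (path assumed)
iozone -a -s 8g -f /mnt/iozone.tmp
```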

5 comments on “more on zfs…”

  1. ivoras

    Can you retry the tests with gjournal UFS “async”? I feel like the last result (write tests, UFS on zvol) could be similar to it, because of the constant journaling of data and metadata that both subsystems (zvol and gjournal) do. Either that, or you’re seeing an interaction between the caches of ZFS/zvol and UFS, which possibly makes the writes unsafe (w.r.t. crashes).

    AFAIK “zvol” is not a raw device, it still has all the “smart” stuff from ZFS, like caching and journaling.

  2. clement

    The gjournal UFS async benchmark is running, but I don’t expect impressive results since iozone’s behavior isn’t gjournal-friendly. I personally don’t trust the UFS2-on-zvol results; I also suspect a cache interaction: write speed drops exactly when the ARC cache gets exhausted.
    The “hybrid” tests (UFS on zvol / ZFS over gstripe) were attempts to find out whether the read speed drop was due to the volume layer or the filesystem layer.

  3. ivoras

    It’s not that difficult to explain the results now – the curve for UFS2 async gjournal writes is similar to ZVOL – obviously it’s the curve of UFS being journaled “completely”, both metadata and data. It looks like ZFS journaling is more effective, though I suspect you might improve gjournal performance in this case by increasing the journal size and/or decreasing journal switch frequency.

    The read result is more problematic – I can’t say why gstripe would result in such low performance with small files. Maybe it’s broken in some way? I have three ideas that might help gstripe performance: 1) use the (undocumented) geom_cache / gcache layer below the file system, 2) try increasing vfs.read_max (read-ahead), and 3) try fiddling with the kern.geom.stripe.maxmem sysctl.
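    The suggestions above translate roughly into the following commands. A sketch only: the read_max value, maxmem value, cache size, and device/label names are placeholders, and gcache syntax should be checked against gcache(8).

    ```sh
    # 2) increase read-ahead (value is a guess, in filesystem blocks)
    sysctl vfs.read_max=32

    # 3) let gstripe use more memory for fast request processing
    sysctl kern.geom.stripe.maxmem=16777216   # 16 MB, a placeholder

    # 1) insert the gcache layer between gstripe and the filesystem,
    #    then build the filesystem on the cached provider
    gcache label -s 268435456 cached /dev/stripe/st0   # 256 MB cache, assumed
    ```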

