clement vs 7.0-PRERELEASE

16 October 2007

No, I’m not dead (yet?).

[Disclaimer]: I’m far from being an expert in VM or storage. Don’t use this post if you need to feed various trolls.

To make a long story short, I'm confused by my ZFS benchmarks (yeah, I know, benchmarks suck): write performance is quite impressive, but I feel uncomfortable with read performance once it hits the ARC limits…

Most of my workstations and personal servers run CURRENT, and I was waiting for RELENG_7 to test it on my test servers at work. Why? Just because I'm lazy and I want to update my servers with my freebsd-update recipe (it _almost_ works ;)).
I was also waiting for RELENG_7 because of ZFS, to use it on the "low cost" storage server we received.
It's an HP DL 365 G1 dual core with 5GB of RAM. The storage part is an MSA 20 with twelve 500GB disks attached to a Smart Array 4602 (ciss(4)) controller with 192MB of BBW cache (50/50 read/write split). Due to limitations of the latter, volumes can't exceed 2TB, so we split the disk pool into 3 RAID 5 volumes of 1.4TB each. [dmesg here]

ZFS will stripe the volumes for us.
# zpool create test da0 da1 da2
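A quick zpool status / zpool list is enough to confirm the three volumes end up as top-level vdevs striped by ZFS (output omitted here):
# zpool status test
# zpool list test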

Out of the box, the RAID volumes' speed is not that bad, and applying scottl@'s ciss patch helps a little.
Once the zpool was created, I was impressed by the "irrelevant dd benchmark": 180MB/s read, 110MB/s write. I couldn't resist launching iozone. As expected, it panic()'d :)
All iozone runs are performed with the following command:
# iozone -aRcWe -g 2G -f /test/testfile
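(For those who don't read iozone(1): -a runs the automatic test matrix, -R produces an Excel-style report, -c counts close() in the timing, -e includes flush (fsync/fflush) in the timing, -W locks the file during reads and writes, and -g 2G caps the maximum file size at 2GB.)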

I raised vm.kmem* to 1GB and kern.maxvnodes to 400000, re-ran iozone and played with postmark. A few hours later: Yet Another Panic. I tried setting vm.kmem* up to 1.5GB or 2GB, with no luck. Anyway, I'll investigate later. I also decided to test without prefetch. No surprise: YAP.
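For reference, that tuning goes into /boot/loader.conf, roughly like this (I'm assuming vm.kmem_size and vm.kmem_size_max are the vm.kmem* knobs in question; values are for the 1GB run):
vm.kmem_size="1073741824"
vm.kmem_size_max="1073741824"
kern.maxvnodes="400000"
vfs.zfs.prefetch_disable="1"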
I thought it was time to give pjd's vm_kern.c hack a chance. Once the kernel was recompiled, I restarted iozone (without prefetch). Six hours later, still no panic. Great! After a lot of successful rsync runs and some disappointing tar over nc transfers (that requires more investigation; I'll "blog" about it later), I still felt uncomfortable with read performance.

I ran my benchmark with and without prefetch. Read performance was disappointing (compared to write performance).
The last time I saw this kind of dramatic performance hit was on Linux, with buffer cache starvation. I decided to compare it to a plain striped volume, so I destroyed my zpool and created a gstripe with a 64k stripe size.
# zpool destroy test
# gstripe label -s 64k data da0 da1 da2
# newfs /dev/stripe/data
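(For completeness, the stripe then gets mounted and benchmarked the same way; the mount point and flags below are assumed to match the ZFS run.)
# mount /dev/stripe/data /test
# iozone -aRcWe -g 2G -f /test/testfile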

I finally got the read performance I expected.

[Graphs: read and write throughput]

As you can see, the drop appears when the ARC gets full.
Here are the ARC settings (the same for every run except the one where vfs.zfs.arc_{min,max} was set to 32MB):
vfs.zfs.arc_min: 33554432
vfs.zfs.arc_max: 805306368
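For the 32MB run, the corresponding loader.conf tunables would be something like:
vfs.zfs.arc_min="33554432"
vfs.zfs.arc_max="33554432"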

Backing out pjd's patch didn't help. I'm currently running the same benchmark with an 8GB iozone file.
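(That is, presumably the same invocation with a bigger maximum file size:)
# iozone -aRcWe -g 8G -f /test/testfile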

You can get all the graphs here.

