ivoras’ FreeBSD blog

December 10, 2007

finstall alpha2

Filed under: FreeBSD — ivoras @ 11:18 am

I’ve created a new livecd+finstall ISO image containing FreeBSD 7.0-BETA4. This release of finstall fixes most of the bugs present in earlier versions and introduces only one new feature: file systems are created on glabel devices. You can fetch the alpha2 version of finstall here. It was good to get back to this project, even in the limited time I have for it.

It looks like I can now create a realistic schedule for 7.0-RELEASE. It will probably contain the following new features (i.e. in addition to those already in alpha2):

* ZFS
* Installing on already partitioned drives
* Some kind of rudimentary remote install

Yes, this means that software RAID and volume management support will not make it into this release, but there really wasn’t time, as I spent most of it (during and after SoC) chasing bugs in the kernel (and, as it turns out, the compiler). I hope I’ll have support for software RAID and volume management in 7.1-RELEASE.

I’d appreciate bug reports, and also help from interested developers. One area where it definitely needs work is text strings: language proofing, help texts, etc. I18n is not yet supported – only English is used for now.

In case you missed it, here’s what’s new in FreeBSD 7.0.

December 7, 2007

WINE on FreeBSD

Filed under: FreeBSD — ivoras @ 12:48 am

For those not following the mailing lists, here’s some encouraging news about running Windows applications on WINE on FreeBSD 7:

* MS Office 2000
* 3D benchmark

It seems that benchmark results are a bit lower (by less than 10%) on FreeBSD compared to Linux.

November 27, 2007

Perils of .0 releases

Filed under: FreeBSD — ivoras @ 5:01 pm

As 7.0 approaches release, a recurring question on the mailing lists (in its many forms) is “how stable is it?”. The answer really depends on what you plan to do with it, but there are several known errors, bugs and misfeatures which will surely be present in 7.0-RELEASE. If your workload includes some of those, you had better wait for the next release before putting 7.x into production. If not, go ahead: by all means, it’s stable enough.

Here’s the list of problems currently known to me, as of 7.0-BETA3. The list is probably not complete (so it may grow over time), and some of the problems listed may not be relevant to your workload, so take it with a grain of salt.

* ZFS is mostly unstable (or at least not as stable as UFS), especially under low memory conditions, on both i386 and amd64
* tmpfs is sometimes unstable in subtle ways (not very repeatable)
* unionfs doesn’t work over cd9660 (this one is obscure and only hurts LiveCD makers)
* removing mounted USB drives still doesn’t work (and USB support in general has most of the old problems)
* gcc program profiling doesn’t work
* Java is not stable with some applications (Tomcat sometimes crashes)
* while performance has greatly improved for database-like workloads, there are reports that complex workloads such as heavy web applications may have performance problems

In addition to these, there are some “non-bugs” which get reported often; some of them actually are bugs and problems, but they have always existed and people have grown accustomed to them:

* BETA ISO images don’t contain packages (though sysinstall offers to install them, and fails with weird errors)
* Support for 3D in X.Org is very limited, mostly due to lack of drivers (yes, there’s an NVIDIA driver for i386, but there’s no equivalent amd64 version).

Of course, there is also plenty of good news.

November 5, 2007

A little bit longer…

Filed under: FreeBSD — ivoras @ 9:55 pm

I thought I’d upload a new version of the finstall ISO image tonight, with 7.0-BETA2 and X.Org 7.3 – but no cookie. It seems there’s a bug / regression in BETA2 which prevents mounting a unionfs file system over a cd9660 file system. So, I’ll have to wait some more.

October 15, 2007

VirtualBox

Filed under: FreeBSD — ivoras @ 7:14 pm

I’m trying out the VirtualBox VM software and so far I’m pleased. It seems to have significantly better performance than VMWare Server, and it may even be better than VMWare ESX 3:

INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 6479416.5 555.2
Double-Precision Whetstone 55.0 1636.4 297.5
Execl Throughput 43.0 355.2 82.6
File Copy 1024 bufsize 2000 maxblocks 3960.0 71113.0 179.6
File Copy 256 bufsize 500 maxblocks 1655.0 29474.0 178.1
File Copy 4096 bufsize 8000 maxblocks 5800.0 114875.0 198.1
Pipe Throughput 12440.0 244627.7 196.6
Pipe-based Context Switching 4000.0 9561.1 23.9
Process Creation 126.0 926.1 73.5
Shell Scripts (8 concurrent) 6.0 23.0 38.3
System Call Overhead 15000.0 117599.8 78.4
=========
FINAL SCORE 122.1

(these results are comparable with the VMWare Server benchmark from a few days ago; the VMWare ESX benchmark in the same post was done on slower hardware)

Its only shortcoming is that it doesn’t seem to support “background” VM instances :( If that gets implemented, VirtualBox could become the very best VM choice for FreeBSD.

There have been some noncommittal talks about maybe porting VirtualBox to FreeBSD (in “host” mode), so the product is becoming very exciting.

Update: It might be fast, but apparently it doesn’t work yet. The guest OS doesn’t survive a buildworld. After some time the kernel complained four times about unexpected eflags in sigreturn (like 0x80283), and I had four unrelated processes stuck in a tight CPU-using loop.

October 14, 2007

Unionfs patches are in the tree!

Filed under: FreeBSD — ivoras @ 7:30 pm

Finally!

Unionfs was unusable (at least for me) without these patches and today they were finally committed to 8-CURRENT. They should be MFC-ed to 7-STABLE after a week.

(If you didn’t know 7-STABLE was branched, then — surprise!)

October 7, 2007

Bounties?

Filed under: FreeBSD — ivoras @ 10:44 pm

I’m wondering how open FreeBSD users are to the idea of providing funding for some FreeBSD-specific development. I’m specifically interested in “bounties” such as those from rsync.net for certain specific projects. The FreeBSD Foundation always accepts donations, but this is something different and more targeted.

Let’s take an example (nothing definite right now, no obligations; even the main developer(s) haven’t been contacted about the idea): are there any people or organizations that would fund BLUFFS? The motivation is clear: a clean journaling file system compatible with UFS. Though ZFS is nice, it’s still not stable enough (and requires so much memory that it’s not suitable for smaller machines), and softupdates is in bad shape anyway.

If you’re interested, have an idea or actually anything to contribute to this topic, please post a reply.

September 29, 2007

How slow is VMWare (Server)?

Filed under: FreeBSD — ivoras @ 9:47 pm

VMWare is slow. But how slow is it? Here are two runs of benchmarks/unixbench on the same machine: first in a guest under VMWare Server 1.0 on Windows XP, then under the native OS on the same machine.

Here are the results on VMWare:


INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 6330202.6 542.4
Double-Precision Whetstone 55.0 1606.8 292.1
Execl Throughput 43.0 468.4 108.9
File Copy 1024 bufsize 2000 maxblocks 3960.0 36722.0 92.7
File Copy 256 bufsize 500 maxblocks 1655.0 11696.0 70.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 49643.0 85.6
Pipe Throughput 12440.0 95945.5 77.1
Pipe-based Context Switching 4000.0 21320.3 53.3
Process Creation 126.0 1209.9 96.0
Shell Scripts (8 concurrent) 6.0 1.0 1.7
System Call Overhead 15000.0 47093.0 31.4
=========
FINAL SCORE 70.1

And here on the raw hardware:

INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 6467105.1 554.2
Double-Precision Whetstone 55.0 1633.7 297.0
Execl Throughput 43.0 2030.9 472.3
File Copy 1024 bufsize 2000 maxblocks 3960.0 63783.0 161.1
File Copy 256 bufsize 500 maxblocks 1655.0 57489.0 347.4
File Copy 4096 bufsize 8000 maxblocks 5800.0 53476.0 92.2
Pipe Throughput 12440.0 930715.9 748.2
Pipe-based Context Switching 4000.0 204248.8 510.6
Process Creation 126.0 5373.3 426.5
Shell Scripts (8 concurrent) 6.0 563.7 939.5
System Call Overhead 15000.0 720641.0 480.4
=========
FINAL SCORE 387.4

Both guests are FreeBSD 7-CURRENT with debugging disabled. The results are not 100% comparable since the VMWare image was run without SMP, but in this benchmark SMP positively influences only the “shell scripts” result (parallel execution) – other results are either comparable or negatively influenced by SMP (the CPU is a dual-core Athlon 64, in i386 mode).

Draw your own conclusions, but I consider the I/O and context-switch performance so bad that it makes the whole system unusable in production (at least where performance is important).
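As a sanity check, the unixbench FINAL SCORE is just the geometric mean of the per-test indices, so the two scores above can be recomputed directly (index lists copied from the tables above):

```python
import math

# Per-test INDEX columns copied from the two unixbench runs above.
vmware_indices = [542.4, 292.1, 108.9, 92.7, 70.7, 85.6,
                  77.1, 53.3, 96.0, 1.7, 31.4]
native_indices = [554.2, 297.0, 472.3, 161.1, 347.4, 92.2,
                  748.2, 510.6, 426.5, 939.5, 480.4]

def geomean(xs):
    # unixbench's FINAL SCORE is the geometric mean of the indices.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

print(round(geomean(vmware_indices), 1))  # close to the reported 70.1
print(round(geomean(native_indices), 1))  # close to the reported 387.4
```

Note how the single catastrophic “shell scripts” index (1.7) drags the whole VMWare score down – which is exactly the behavior a geometric mean is meant to capture.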

Update:

In defense of VMWare, I’ve run unixbench on a VMWare ESX 3 server (though on a system not at all comparable to the one in the benchmarks above – a 3 GHz Xeon from the NetBurst era, running 6.2-RELEASE as a guest) and the results are better:


INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 5113310.0 438.2
Double-Precision Whetstone 55.0 935.0 170.0
Execl Throughput 43.0 555.5 129.2
File Copy 1024 bufsize 2000 maxblocks 3960.0 55662.0 140.6
File Copy 256 bufsize 500 maxblocks 1655.0 17818.0 107.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 66604.0 114.8
Pipe Throughput 12440.0 132556.6 106.6
Pipe-based Context Switching 4000.0 18074.1 45.2
Process Creation 126.0 1414.9 112.3
Shell Scripts (8 concurrent) 6.0 130.7 217.8
System Call Overhead 15000.0 62919.9 41.9
=========
FINAL SCORE 121.2

I still wouldn’t use it where performance is important, but at least these results look half-usable. The major improvement seems to be in context switching and parallel execution.

Second update:

Here’s the same setup as in the first VMWare Server benchmark (same machine, Windows XP host, 7-CURRENT), with QEmu+kqemu (kernel+user code acceleration):


TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 5456588.4 467.6
Double-Precision Whetstone 55.0 1492.1 271.3
Execl Throughput 43.0 166.5 38.7
File Copy 1024 bufsize 2000 maxblocks 3960.0 13744.0 34.7
File Copy 256 bufsize 500 maxblocks 1655.0 4426.0 26.7
File Copy 4096 bufsize 8000 maxblocks 5800.0 23832.0 41.1
Pipe Throughput 12440.0 23079.7 18.6
Pipe-based Context Switching 4000.0 2159.5 5.4
Process Creation 126.0 409.8 32.5
Shell Scripts (8 concurrent) 6.0 8.6 14.3
System Call Overhead 15000.0 9728.4 6.5
=========
FINAL SCORE 33.3

Compared to this, VMWare server doesn’t look bad at all :(

August 29, 2007

finstall alpha version

Filed under: FreeBSD — ivoras @ 8:17 am

As some of you might know, I’ve been working on a GUI installer for FreeBSD as a Google Summer of Code 2007 project that would one day, with some luck, replace the aging sysinstall. The SoC is now officially over, and it’s time to make a public release of what’s been done so far.

But first, I’d like to write a bit about the project itself. Here are some of the more important ideas planned for the project, in no particular order:

  • Make a modern installer for a modern FreeBSD system, with support for advanced features not present in sysinstall.
  • Make the installer run directly off a FreeBSD LiveCD.
  • Separate the installer (as a tool) into the front-end and the back-end.
  • Make both the front-end and the back-end extensible enough so new features can be easily added.
  • Make the front-end and the back-end interchangeable so people can write their own replacements.
  • Make the back-end network-aware so it can support network (remote) installations. Use a service announcement and configuration technology such as Zeroconf.
  • Eventually, make the back-end a part of FreeBSD base to be used for regular system configuration.
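As a rough illustration of the service-announcement idea in the list above, here is a minimal Python sketch – plain UDP with a JSON payload rather than real Zeroconf/mDNS, with a made-up port number and service name, so treat it only as the shape of the idea, not as finstall code:

```python
import json
import socket

DISCOVERY_PORT = 14321  # arbitrary port, chosen only for this sketch

def announce(service, rpc_port, sock, dest="127.0.0.1"):
    # The back-end would periodically broadcast a small description of
    # itself; front-ends listen and learn where the RPC server lives.
    msg = json.dumps({"service": service, "port": rpc_port}).encode()
    sock.sendto(msg, (dest, DISCOVERY_PORT))

# Front-end side: listen for announcements.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", DISCOVERY_PORT))

# Back-end side: announce itself once (a real service would use the
# LAN broadcast address and repeat periodically).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
announce("finstall-backend", 8888, sender)

data, _ = listener.recvfrom(1024)
info = json.loads(data)
print(info["service"], info["port"])
```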

I’m happy to say the project is coming along nicely, and that I’ve created infrastructure that can support the above features (as well as some other goodies discussed at the recent BSDCan). The project itself is hosted in FreeBSD’s Perforce depot. It consists of three parts / subprojects:

  • installer – the installer application, created in Python with PyGTK as the GUI library
  • pybackend – the backend daemon. For speedy coding, it was also implemented in Python, but if the project gains popularity, a C version that can be included in the FreeBSD base system is planned.
  • makeimage – a Python script that creates a LiveCD ISO image with the installer. It can be customized to produce generic FreeBSD LiveCDs.

The back-end is a mostly stateless XML-RPC server that provides two kinds of services: synchronous RPC calls and asynchronous “jobs” (which can take a long time and have progress-checking infrastructure). The front-end is a modular PyGTK application that provides the user interface and uses the back-end for all “real” work.
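To make that split concrete, here is a hedged Python sketch of such a server: one synchronous call plus one asynchronous job with progress polling. The function names (`get_version`, `start_job`, `job_status`) are invented for illustration and are not the actual pybackend API:

```python
import threading
import time
import uuid
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

jobs = {}  # job_id -> {"progress": int, "done": bool}

def get_version():
    # Synchronous RPC call: computes its answer and returns immediately.
    return "0.1-sketch"

def start_job():
    # Asynchronous "job": start a worker thread and hand the front-end
    # a job id it can poll for progress.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"progress": 0, "done": False}

    def work():
        for p in range(0, 101, 25):  # pretend to do five chunks of work
            jobs[job_id]["progress"] = p
            time.sleep(0.01)
        jobs[job_id]["done"] = True

    threading.Thread(target=work, daemon=True).start()
    return job_id

def job_status(job_id):
    return jobs[job_id]

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
for fn in (get_version, start_job, job_status):
    server.register_function(fn)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Front-end side: one synchronous call, then poll a job to completion.
proxy = ServerProxy("http://127.0.0.1:%d/" % port)
print(proxy.get_version())
job = proxy.start_job()
while not proxy.job_status(job)["done"]:
    time.sleep(0.02)
print(proxy.job_status(job)["progress"])
```

Because the server keeps only a job table and no session state, any front-end (GUI, text, or remote) can drive the same back-end, which is the point of the design.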

I’ve created an ISO image with the installer embedded in it that can be used primarily for testing. The LiveCD is a fully working FreeBSD 7-CURRENT installation (i386) with X.Org 7.0, Xfce 4.2 desktop environment, Firefox, Thunderbird and a couple of supporting utilities. The image was built with mtune=generic CPU optimizations (pentium-m, pentium-4 and others). The installer version included in this LiveCD is a test version, more like a technology preview than a usable application. It can only install the system on a blank, unpartitioned drive and has only been tested on VMWare so far. I’ve disabled features that I know will not work (yet) but there’s still a chance there are bugs in the existing/enabled options.

Speaking of bugs, the overall state of FreeBSD 7-CURRENT is not very stable right now, and there are several known bugs and panics that will hopefully be resolved before 7.0. The system on the LiveCD (which is also used for the new installation) is *not* the “official” system that can be downloaded from FreeBSD source repositories. It contains several local patches I’ve made (or found from other developers and applied) to resolve some of the bugs and instabilities. I submitted the patches to re@ some time ago, but they have still not been applied to the official source tree. Even so, there are several more-or-less known problems in the kernel I’m using, so you can expect random panics (you can imagine it’s hard to develop something like this with random panics happening all the time :( ). In particular, the LiveCD kernel might panic during the last phase (configuration), which is a problem I currently can’t solve.

I think that’s all that needs to be said. Download the installer image, try it out and see for yourself how it works. As I’ve said, there’s still a lot to be done and some planned features are not present in this release – but have fun trying it out. I won’t make an official announcement on current@ because I think it’s too early for that, so if you have any suggestions or questions you can post them as comments to this blog post. Please post both successes and failures (but keep the reports short :) ). (n.b. I’ll delete all comments that don’t have something useful to say, e.g. “I’m downloading it and I’ll try it tomorrow” type of posts).

P.S. If you don’t want to try it yet, I’ve created several screenshots of finstall.

August 24, 2007

RAID flash

Filed under: FreeBSD — ivoras @ 11:00 am

I’ve just had a bit of a shock at how cheap USB flash drives have become (versus how pricey they used to be a long time ago). I hadn’t looked at their prices for a long time (since I didn’t need any new flash drives), so I was pleasantly surprised by the per-MB prices that have become “normal”. On the other hand, the capacity of consumer USB drives hasn’t gone up much (8 GB seems to be the top of the range now), and speed is still not great (8-10 MB/s seems to be the norm). But it’s relatively cheap technology now.

It seems that now is a good time to start experimenting with flash memory even in production, especially where seek times are important (flash drives lack the seek latency present in mechanical drives). Having no seek time has a nice side effect when the drives are used in RAID1 with gmirror, which can be configured to split large read requests across the mirrored drives. With mechanical drives this mode couldn’t produce a significant performance gain, because the drive heads still needed to seek through the unread portions (for sequential requests), but the situation is ideal for seek-less drives.

I think the ideal solution here would be RAID 1+0 with four drives. Admittedly, this would only give 16 GB of storage space (if each individual drive is 8 GB), but the (read) performance should scale almost linearly to ~40 MB/s (it would double once in gmirror/RAID1, then again in gstripe/RAID0), which is decent. 16 GB is relatively small, but flash memory is much cheaper than server memory (e.g. FB-DIMM), and most databases are relatively small, so it might no longer be necessary to keep the whole database in RAM.
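The scaling estimate above is simple arithmetic, under the idealized assumption that each RAID layer doubles sequential read throughput (the per-drive speed is the 8-10 MB/s figure mentioned earlier):

```python
per_drive = 10.0                # MB/s, the optimistic end of 8-10 MB/s
mirror_read = 2 * per_drive     # gmirror split mode reads from both mirror halves
array_read = 2 * mirror_read    # gstripe then interleaves across the two mirrors

drives, drive_gb = 4, 8
usable_gb = drives * drive_gb // 2  # mirroring halves the raw capacity

print(array_read)  # -> 40.0 MB/s aggregate reads
print(usable_gb)   # -> 16 GB usable space
```

In practice the doubling won’t be perfect (bus contention alone will eat into it), so ~40 MB/s is a ceiling, not a promise.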

A couple of questions remain: how many IOPS can be had from flash memory (unconfirmed information says around 2000), and does having all the flash drives on the same host USB bus (e.g. one USB hub with four drives, connected to a motherboard port) introduce too much latency?

