Category Archives: Admins

Varnish to the rescue

Some months ago the mailman web interface was getting slower and slower, and increasingly taking resources away from mail processing to the point where it started to be a problem.

Without knowing too much about how mailman works, it seems each load of the listinfo ‘front page’, which lists all 150 or so mailing lists, requires mailman to lock and read the config files for every list. This might not be a problem with few lists or a non-busy site, but here it resulted in the listinfo page taking 2-3 seconds to load in the best case (this is still the case), and when the system got busy it frequently took > 20 seconds to load the listinfo page, since there were “many” requests at the same time.

It is possible this could be fixed in mailman, but I wasn’t too keen on trying to optimize mailman considering I knew very little about it, and some web searching didn’t reveal any obvious solutions. This was when I remembered Poul-Henning Kamp’s goal for the Varnish reverse caching proxy project: you should be able to drop Varnish in front of an overloaded CMS or similar and be up and running in 5 minutes (OK, it may not have been exactly 5 minutes, but something like that). I had been meaning to look at Varnish for years, but never gotten around to it before.

Without knowing much about how to use Varnish I read a bit of the docs and did a basic install in a fresh jail. The setup was the most basic one, which just proxied every request it received to the ‘real’ mailman web server, and of course cached when possible. The default Varnish configuration is rather conservative and does not cache pages for long when no expiry information is available from the backend web server. The default expiration time is 2 minutes, as I recall.
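For reference, a setup that basic needs little more than a one-line varnishd invocation — no VCL file at all. The listen address, backend address, and cache size below are made up for illustration, not the actual cluster configuration:

```shell
# Minimal Varnish "drop-in" sketch: listen on port 80 and proxy
# everything to the real mailman web server on the backend.
# -b     backend address (hypothetical here) -- implies a trivial
#        pass-through VCL, so no config file is needed
# -t 120 default TTL for objects with no expiry info from the
#        backend, matching the 2 minute default mentioned above
# -s     cache storage (size chosen arbitrarily for the example)
varnishd -a 0.0.0.0:80 \
         -b 192.0.2.10:80 \
         -t 120 \
         -s malloc,256M
```

Once real caching policy is needed, the `-b` flag gets replaced by `-f /path/to/default.vcl`, but for "be up and running in 5 minutes" the above is the whole setup.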

Even this rather short caching of pages by Varnish made the difference between the mailman web interface being very unresponsive and unacceptably slow, and it “just working” and responding as fast as can be expected. Before setting up Varnish I was getting regular (several times a day) Nagios mails about the mailman interface not responding in a reasonable time; after installing Varnish I haven’t had a single one :-). I also haven’t needed to touch the Varnish setup – it has just taken care of itself.

Over the last couple of days I decided to learn more about Varnish and use it more, with the result that today I switched the FreeBSD wiki over to also running behind Varnish. The wiki (MoinMoin) doesn’t support explicitly purging changed pages from Varnish, so the wiki pages can’t be cached for very long, since we would risk returning out-of-date pages for a long time. This means I don’t expect Varnish to change the responsiveness of the wiki much, but it does mean that should a wiki page suddenly get a lot of hits, e.g. from a slashdotting or similar, I won’t have to scramble to keep the wiki from stopping working due to the load.

starts using 8.0

As people reading this blog probably know, the FreeBSD 8.0 release cycle is well under way. To help with the release testing, the first “real” production system has now been upgraded to FreeBSD 8.0 (close to what will become 8.0-BETA3). The first system to be upgraded is one of the internal recursive nameservers, since we have a backup server running 7.2, just in case.

Unless the new 8.0 server has any problems over the coming time, more systems will be upgraded as time permits.

Oh, and in case people wonder why I wrote “real” above, it’s because most (all?) of the package building systems have actually been running various versions of 8.0 for a while, but they just aren’t quite as publicly noticeable when they crash.

Cluster Update

The FreeBSD project has three racks hosted by ISC with various servers. The use of the systems has been limited by the fact that there was no firewall in front of them, so each host had to have local firewall and/or access control rules.

To use the systems better we have now installed a firewall in front of the systems. This means that the FreeBSD project can better use the facilities provided by ISC and the servers donated by various people and companies.

The firewall is in fact two separate Soekris net5501-70 systems running FreeBSD 7. They use pf for packet filtering and CARP to provide redundancy between the two systems. The redundant setup is done to reduce the risk of taking all the systems offline due to hardware or software failure in one firewall.

The two Soekris net5501 systems were sponsored by the FreeBSD Foundation. The 1U rack mount case and flash cards were donated by Brad Davis. Brad also handled the initial configuration and installation of the systems at ISC. Peter Losher helped out from the ISC side with getting additional IP addresses, DNS, and other logistics. So a big thanks to all those mentioned above for helping make this possible, and to ISC in general for hosting the servers.

Netperf Backup Project

I wrote a bit about one of my current projects for the FreeBSD Status Report; unfortunately, due to some mixup it didn’t get included, so I’m posting the text here instead:

Recently NetApp donated one of their Filer storage systems and Sentex donated hosting of it, with the FreeBSD Foundation helping with various administrative tasks. See the FreeBSD Foundation July 2008 Newsletter for details of the donation. As of this writing around 1 TB of data has been transferred to the off-site storage system, and critical systems are being set up for periodic backup as time permits. The 1 TB includes the FreeBSD ftp-archive containing old FreeBSD releases, which gets extra backups to avoid losing the historical data of the FreeBSD project.

The actual backups are done with rsync over ssh, glued together by some custom scripts. As I’m not that creative with naming, the “system” is called qsbp, or “Q Simple Backup Push”. File system snapshots are used to preserve old data while still allowing a relatively simple system to be used.

On behalf of the FreeBSD Admins Team, I would like to thank NetApp, Sentex, and the FreeBSD Foundation for making this possible.

The FreeBSD IPv6 Wiki – now fully public

The main DNS record for the FreeBSD wiki has now been updated to include the IPv6 address record.
The server hosting the wiki has been running with bz’s latest IPv6 jail patch for 10 days without issues.
See the freebsd-jail list for the patches.

The FreeBSD IPv6 Wiki

A month or so before the EuroBSDCon 2007 conference the systems at Yahoo! had gotten IPv6 connectivity, with the main web server and mail servers being accessible via IPv6. The FreeBSD wiki was still IPv4 only, as it was (and still is) running in a jail.

At the conference I talked to Bjoern A. Zeeb (AKA bz@) about the issue with IPv4 only jails and he was interested in making a patch so FreeBSD jails could support IPv6 and the FreeBSD wiki could be accessible via IPv6.

I was supposed to poke Bjoern regularly about making the patch, which I failed miserably at, but he got work done on the patch anyway. A few weeks ago he sent me the IPv6 jail patch to try out. Since life should be interesting I didn’t try it on a test server, but on the production web server sky, which hosts the FreeBSD wiki and more. Just in case there were any problems I made sure I was around to recover things if the system blew up, but none of that happened. In fact, since I installed the patch on sky a week ago there haven’t been any problems (that I know of, at least). Granted, there isn’t much IPv6 traffic, but the IPv4 side has been under its normal load.

So far the main DNS record for the wiki has not been updated to include the AAAA records, so people will not get IPv6 automatically even if they have it, but that is expected to come soon. For now people can try out the IPv6 wiki by accessing it directly. It has a slight (100%) likeness to the IPv4 wiki, but… IPv6!

For people interested in the patch, the work is being done in the FreeBSD Perforce repository at //depot/user/bz/jail/…. I am sure Bjoern will post an appropriate public patch when he thinks it is ready. Credit should also go to Pawel Jakub Dawidek (AKA pjd@), who made the multi-IP(v4) jail patch which Bjoern based his patch on. Thanks to Bjoern and Pawel for the work making this possible!

Now I just need to actually get around to setting up IPv6 at home, so I can actually try out the IPv6 wiki myself in anything other than lynx from other hosts… any year now.

Web server fun

When I started the “sky” project (the new jail based setup) I never expected how much magic was involved, or how long it would take to set up a new server from scratch, so that’s why the project has been going slowly for a while.

Over the last month or so the current server has had severe hardware problems which caused it to crash often. That is of course rather annoying, but the positive thing is that it has given me motivation to finish up the setup on sky. Now that EuroBSDCon 2007 is over I will also have more time to do other FreeBSD stuff again.

I am currently at the FreeBSD Developers summit and I’m mainly working on sky. It’s been a while since I messed with it so just upgrading ports etc. in the jails has taken some time, but I’m not done yet – so stay tuned for more updates.

Oh, and if the server is down, try the main FreeBSD website instead.

wiki goes into the sky, and more

I finally got tired of looking at the hostname “wikitest”, so I decided to move the FreeBSD wiki to sky. This also means that the wiki can now be fully “official” and has been renamed accordingly. I took the opportunity to familiarize myself some more with how a MoinMoin installation works, so I did spend a good part of a weekend doing the migration, but now there are fewer direct hacks in the wiki and I actually somewhat know where the files are. The small downside to moving the wiki, and the main reason I hadn’t done this before, is that I have a bit less freedom configuring the jails on sky, since I now have to be a bit careful not to accidentally break the wiki. The move actually happened over a week ago, I just didn’t get around to writing about it before.

The current “monitoring system” consists of running “ruptime | grep down” from cron every hour. This is actually very effective considering its simplicity, but it doesn’t catch e.g. when squid dies due to the disk being full. To better detect this kind of error I have been working on setting up Nagios, to be able to find out quickly when stuff crashes. The configuration of the Nagios installation still isn’t complete, but at least it does warn me about major outages now. Thanks to the Nagios install by Erwin Lansing I also get mails if the FreeBSD Nagios crashes, so that part is also covered.

In unrelated news, FreeBSD 4.X is no longer supported by the FreeBSD Security Team. It is very nice that we could finally drop the support, since FreeBSD 4.x has diverged quite a lot from FreeBSD 5/6/7 by now (or rather the other way around), and it was getting increasingly difficult to backport fixes etc. for Security Advisories. RIP FreeBSD 4.