Category Archives: FreeBSD

OpenSSH, PAM and user names

FreeBSD just published a security advisory for, amongst other issues, a piece of code in OpenSSH's PAM integration which could allow an attacker to use one user's credentials to impersonate another (original patch here). I would like to clarify two things, one that is already mentioned in the advisory and one that isn't.

The first is that in order to exploit this, the attacker must not only have valid credentials but also first compromise the unprivileged pre-authentication child process through a bug in OpenSSH itself or in a PAM service module.

The second is that this behavior, which is universally referred to in advisories and the trade press as a bug or flaw, is intentional and required by the PAM spec (such as it is). There are multiple legitimate use cases for this, such as:

  • Letting PAM, rather than the application, prompt for a user name; the spec allows passing NULL instead of a user name to pam_start(3), in which case it is the service module's responsibility (in pam_sm_authenticate(3)) to prompt for a user name using pam_get_user(3); see the sketch after this list. Note that OpenSSH does not support this.

  • Mapping multiple users with different identities and credentials in the authentication backend to a single “template” user when the application they need to access does not need to distinguish between them, or when this determination is made through other means (e.g. an environment variable, which service modules are allowed to set).

  • Mapping Windows user names (which can contain spaces and non-ASCII characters that would trip up most Unix applications) to Unix user names.
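
To illustrate the first case, here is a minimal sketch of both halves of that contract (my illustration, not code from OpenSSH or the patch; the service name and prompt string are made up): the application passes NULL to pam_start(3), and the module obtains the user name via pam_get_user(3), which prompts through the conversation function if necessary.

#include <stddef.h>
#include <security/pam_appl.h>
#include <security/pam_modules.h>

/* Application side: start a PAM transaction without knowing the user. */
int
start_auth(const struct pam_conv *conv, pam_handle_t **pamh)
{
    /* NULL user name: prompting is deferred to the service module. */
    return (pam_start("myservice", NULL, conv, pamh));
}

/* Module side: retrieve (prompting if necessary) the user name. */
int
pam_sm_authenticate(pam_handle_t *pamh, int flags, int argc, const char **argv)
{
    const char *user;

    if (pam_get_user(pamh, &user, "login: ") != PAM_SUCCESS)
        return (PAM_AUTH_ERR);
    /* ... verify credentials for user ... */
    return (PAM_SUCCESS);
}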

That being said, I do not object to the patch, only to its characterization. Regarding the first issue, it is absolutely correct to consider the unprivileged child as possibly hostile; this is, after all, the entire point of privilege separation. Regarding the second issue, there are other (and probably better) ways to achieve the same result—performing the translation in the identity service, i.e. nsswitch, comes to mind—and the percentage of users affected by the change lies somewhere between zero and negligible.

One could argue that instead of silently ignoring the user name set by PAM, OpenSSH should compare it to the original user name and either emit a warning or drop the connection if it does not match, but that is a design choice which is entirely up to the OpenSSH developers.

Building ARM Packages with Poudriere (the simple way)..

The current directions for building ARM packages are quite long and need to be updated. These are my work-in-progress directions; once I get everything right, I will update the documentation.

  1. Install poudriere and qemu-user-static: pkg install poudriere qemu-user-static
  2. Enable qemu-user-static in rc.conf: qemu_user_static_enable="YES"
  3. Run the startup script to configure your system for building different architectures: /usr/local/etc/rc.d/qemu_user_static start
  4. Create a ports tree to build: poudriere ports -c -m svn+https -p svn
  5. Create an ARM build jail. Note, this will take a while: poudriere jail -c -j 11armv6 -v head -a arm.armv6 -m svn+https

Now you can test build whatever packages you want for your ARM device:
poudriere testport -j 11armv6 -p svn -o x11-wm/lxsession
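
Once you're happy with the test build, the same jail and tree can build a whole package set with poudriere bulk (a hedged example; ~/pkglist is a hypothetical file listing one port origin per line):

poudriere bulk -j 11armv6 -p svn -f ~/pkglist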

Official Vagrant FreeBSD Images

I am very proud to announce that FreeBSD Vagrant images are now available.

Usage:
For VMware, create a Vagrantfile like so:

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
  config.vm.box = "freebsd/FreeBSD-11.0-CURRENT"
  config.ssh.shell = "sh"
end

For VirtualBox, create a Vagrantfile like:

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
  config.vm.box = "freebsd/FreeBSD-11.0-CURRENT"
  config.ssh.shell = "sh"
  config.vm.base_mac = "080027D14C66"
end

Then run:
vagrant up

On first boot the machine will come up, install missing packages, and run freebsd-update if needed. Note that this can take a few minutes. If it fails to boot, try using: vagrant up --no-destroy-on-error. On my 2004 iMac with a spinning disk it takes just over 3 minutes. On my mid-2014 MBP with an SSD it takes about 1 minute and 45 seconds. In the future we will reevaluate installing the missing packages on boot vs when the VM is built.

Note that you can replace 'FreeBSD-11.0-CURRENT' with 'FreeBSD-10.0-RC2' or others. To see a full list of the versions available, check the Hashicorp Atlas website here: https://atlas.hashicorp.com/FreeBSD/

Going forward:

  • All snapshots will include Vagrant images, so weekly updates of FreeBSD -STABLE branches and -CURRENT.
  • All future releases will include Vagrant images.

Essen Hackathon 2015 — last day status

I committed the 64bit support for the linux base ports (disabled by default, check the commit message), but this broke the INDEX build. Portmgr was faster to revert it than I was to fix it. All errors are mine. I think most of the work is done; I just need to find out the correct way to handle this make/fmake difference (malformed conditional).


Essen Hackathon Status report — 2nd day

I had a look at the open PRs for a quick win and found one where the dependencies were incomplete. Fixed.

Then I reviewed Allan Jude’s patch for 64bit linux_base-c6 ports (on amd64). Looks good so far, just a few minor issues. I took the time to get familiar with reviews.FreeBSD.org and the arc command line tool, applied the patch to my source tree, worked a while on merge conflicts, added some minor changes, and validated the download of the 32bit RPMs of the linux_base-c6 port.

In between I also discussed/reviewed some fixes for docs with Dru, signed some PGP keys, and served as a source for a funny picture (at least what geeks/nerds consider a funny picture). I also checked how to allow multicast in jails. There is a PR with a patch inside, but it’s IPv6-only. I did something similar for IPv4 and compiled a kernel. No compile-time issues, but as the system where I can easily test this is at home, I prefer to be in front of the box in case it panics (that says something about my confidence in my patch… no idea if what I do there is actually correct… ENOCLUE about the network code in the kernel).

TODO for the last day of the Hackathon:

  • validate all RPMs (download / distinfo) of the ports which changed
  • validate the install/deinstall of the 32bit version of the ports for regressions
  • validate the 64bit install/deinstall for at least the linux base port (more if time permits tomorrow)


FDT overlays in FreeBSD

An FDT overlay is an extension to the FDT format that lets the user modify the base FDT at run time: add new nodes, add new properties to existing nodes, or modify existing properties. It’s useful when you have a base board and some extension units, like a cape/shield for the Pi/BBB or loadable FPGA logic for the Zynq. I will not go into details; you can find the internals described on the Adafruit or Raspberry Pi websites.

When dealing with overlays there are two options for where to handle them: the loader or the kernel. Managing overlays at the kernel level gives more flexibility but requires more related logic, e.g. re-initializing the pinmux after applying an overlay, or re-running the newbus probe/attach. On the other hand, loader-level support is quite straightforward, involving nothing but DTB modifications, and it’s a natural first step toward adding FDT overlays to FreeBSD.

The proposed solution is to add an fdt_overlays variable that contains a comma-separated list of dtbo files, e.g. “bbb-no-hdmi.dtbo,bbb-4dcape-43.dtbo”. This variable can be defined either as a loader(8) variable or as a u-boot environment variable. During boot, ubldr loads the base DTB, and right before passing control to the kernel it goes through these files, loads them from the /boot/dtb/ directory on the root partition, and applies them to the base blob. The final DTB is then passed to the kernel.
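
For example, when using loader(8), the setting would go in /boot/loader.conf (reusing the overlay names from above):

fdt_overlays="bbb-no-hdmi.dtbo,bbb-4dcape-43.dtbo"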

You can find the patch and review comments on the Differential site: D3180. It contains:
- an extension to dtc to generate dynamic symbols and fixup info;
- fdt_overlays support in ubldr.

As Warner Losh mentioned, it’s not clear yet how to deal with the dynamic symbols support patch. It’s not part of the official dtc tree, though it has been accepted by the RPi and BBB communities.

Essen Hackathon status report — 1st day/evening

The Essen Hackathon 2015 has started. More or less around 6pm people started to show up (including myself). The socializing session (BBQ) had some funny/interesting stories, and already provided some interesting topics to have a closer look at.

Possible candidates where I can provide some input are around DTrace: how to use it (but probably Sean Chittenden has some much more interesting DTrace things to show) and how to add SDT probes to the kernel.

On the ports side I want to get some insight into the USES framework, to see whether it would be easy to convert the linuxulator ports to it. Maybe I can also have a deeper look into the patches for the 64bit side of the linux_base ports.


FreeBSD now has NUMA? Why’d it take so long?

I just committed "NUMA" to FreeBSD. Well, no, I didn't. I did almost no actual NUMA-y work in FreeBSD. I just exposed the existing NUMA support in FreeBSD and re-enabled it.

FreeBSD-9 introduced basic NUMA awareness in the physical allocator (sys/vm/vm_phys.c). It implemented first-touch page allocation, falling back to searching through the domains round-robin style. It wasn't perfect, but for some workloads it was apparently okay. It had some shortcomings though - it wasn't configurable, UMA and other subsystems didn't know about NUMA domains, and the scheduler really didn't know about NUMA domains. So I'm sure there are plenty of workloads it didn't work for.

That was all ripped out before FreeBSD-10. FreeBSD-10 NUMA just implements round-robin physical page allocation. It still tracks the per-domain physical memory regions, but it doesn't do any kind of NUMA aware allocation. From what I can gather, it was removed until something 'better' would land.

However, nothing (yet) has landed. So I decided I'd take a look into it. I found that for a lot of simple workloads (ie, where you're doing lots of anonymous memory allocation - eg, you're doing math crunching) the FreeBSD-9 model works fine. It's also a perfectly good starting point for experimenting.

So all my NUMA work in -HEAD does is provide an API to exactly the above. It doesn't teach the kernel APIs about domain aware allocations - there's currently no way to ask for memory from a specific domain when calling UMA, or contigmalloc, etc. The scheduler doesn't know about NUMA, so threads/processes will migrate off-socket very quickly unless you explicitly limit things. Devices don't yet do NUMA local work - the ACPI code is in there to enumerate which NUMA domain they're in, but it's not used anywhere just yet.

Then what is it good for?

If you're doing math workloads where you read data into memory, do a bunch of work, and spit it out - it works fine. If you're running bhyve instances, you can run them using numactl and have them pinned to a local NUMA domain. Those coarse-grained things work fine. You can also change the system default back to round-robin and use first-touch or fixed-domain for specific processes. It's useful for exactly the same subset of tasks as it was in FreeBSD-9, but now it's at least configurable.
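
As an illustration of the coarse-grained usage, here's a minimal sketch of the first-touch pattern (my own example, not from the commit; the mapping of domain 0 to CPUs 0..7 and the helper's name are assumptions): pin the current thread to one domain's CPUs, then touch each page so it is backed by that domain's memory.

#include <sys/param.h>
#include <sys/cpuset.h>
#include <sys/mman.h>
#include <unistd.h>

int
pin_and_touch(size_t len)
{
    cpuset_t mask;
    size_t i, pg = (size_t)getpagesize();
    char *buf;

    CPU_ZERO(&mask);
    for (i = 0; i < 8; i++)     /* assumption: domain 0 = CPUs 0..7 */
        CPU_SET(i, &mask);
    /* Pin the current thread (id -1 with CPU_WHICH_TID). */
    if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
        sizeof(mask), &mask) != 0)
        return (-1);

    buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0);
    if (buf == MAP_FAILED)
        return (-1);
    for (i = 0; i < len; i += pg)
        buf[i] = 0;             /* first touch allocates the page locally */
    return (0);
}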

So what's next?

Well, my main aim is to get the minimum done so kernel side work is NUMA aware. This includes UMA, contigmalloc, malloc, mbuf allocation and such. It'd be nice to tag VM objects with a domain allocation policy, but that's currently out of scope. I'd also like to plumb in domain configuration into devices and allow devices to allocate memory for different driver threads with different policies.

But the first thing that showed up is that KVA allocation and superpages get in the way of malloc/contigmalloc working. Allocating memory in FreeBSD first allocates KVA space, then back-fills it with pages. As far as malloc/contigmalloc is concerned, KVA is KVA: it quickly finds the first available space, then backfills it with physical pages. The superpage reservation bits (sys/vm/vm_reserv.[ch]) join together regions that are contiguous and within the same superpage and turn them into an allocation from that superpage. None of this has any idea about NUMA domains. So, if you allocate a 4KiB page via malloc() from domain 0 and then try to allocate a 4KiB page from domain 1, it will likely mess it up:

  • First page gets allocated - first KVA, then the underlying 2mb superpage is allocated and a 4k page is returned - from physical memory domain 0;
  • Second page gets allocated - first KVA, and if it's adjacent or within the same 2mb superpage as the above allocation, it'll "fake" the page allocation via refcounting and it'll really be that same underlying superpage - which is from physical memory domain 0, not domain 1 as requested.
I have to teach both vm_reserv and the KVA allocator about NUMA domains, enough so that domain-specific allocations don't use adjacent KVA. It was suggested that I create a second layer of KVA allocators that allocate KVA from the main resource allocator in superpage-sized chunks (here, 2mb) and then do domain-specific allocations from those. It'll change how things get fragmented a bit, but it does mean that I won't fall afoul of the superpage reservation behaviour above.

So, I'll do the above as an experiment and I'll push the VM policy evaluation up a little into malloc/contigmalloc. I'll see how that experiment goes and I'll post diffs for testing/evaluation.

The importance of mentoring, or "how I got involved in FreeBSD"..

Here's how I was introduced into this UNIX world, or "wait, WHO was your WHAT?"

So, here's 11ish or so year old Adrian. It's the early 90s. I was hiding in my bedroom, trying to make another crystal set out of random parts and scraping away the paint at my windowsill. In walks my Aunty, who introduces her new boyfriend.

"Hi, I'm Julian." he said. That wasn't all that interesting.

"Oh, are you making a crystal set?" .. ok, so that was interesting.

And, that was that. Suddenly, someone role-model-y shows up in my life out of the blue. There I was, an 11 year old who felt mostly alone most of the time, and someone shows up who I can look up to and think I can relate to. So, I'm a sponge for everything he shows me. Whenever he comes over, he has some new story to tell, some new thing to show me. He would show me better ways of building transistor switch circuits when I was in the "make large arcs with car alternator" phase of my early teens. And, when I saved up and bought a PC, he started to show me programming.

Now, I was already programming. My parents had saved up and bought me an Amstrad CPC464. We had a second-hand Commodore 64 for a short while, but that eventually somehow stopped working and I didn't have the clue to fix it. But I was programming Locomotive BASIC and dabbling in Z80 assembly when I was 12, and had "upgraded" to Turbo Pascal 6 when I hit high school. (Yes, school taught Turbo Pascal at Grade 10 level, and I decided to learn it a bit earlier. That's .. wow, that dates me.) I hadn't really stumbled into C yet. I had heard about it, but I didn't have anything I could write it with.

Julian explained task switching to me one day during a walk along the beach. He explained that computers can just appear to be doing multiple things at once - but the CPU only does one thing at a time, and you can just switch things really quickly to give the appearance that it's multitasking. With that bright spark planted in my head, I went home and started dreaming up ways to make my Z80 based CPC do something like this.

My mother dragged me to McDonalds to apply for a job the moment I was legally able to (14 years, 9 months) and I saw a computer at a second hand shop - it was a $500 IBM PC/AT, with EGA monitor, two floppy disks and a printer. We put down a down-payment and I paid it off myself with my minimum wage money. Once I had that home I quickly erm, "acquired" a copy of Turbo Pascal for home and was off drawing funny little fractals.

So yes - it's Julian's fault I discovered FreeBSD. Yes, this is Julian Elischer. One day he showed me his computer, running something called BSD. He was trying to explain bourne shell scripting and the installer. I nodded, very confused, and eventually went back to the VGA programming book he lent me. He also showed me fractint running in X on his monochrome 486 DX2-50 laptop. I had no idea what was going on behind the scenes, only that the fractals were much more interesting than the ones I was drawing. So I took the VGA book home and started learning how to use the higher resolutions available. One thing stuck in my mind: so much bit-plane work. Ugh. One other thing stuck in my mind - reading from VGA memory is one of the slowest things you can do. Don't do it. Ever. (Do you hear that, console driver authors? Don't do it. It's bad.)

One day he explained pointers to me. I had erm, "acquired" a copy of Turbo C 2.0 from a friend after failing to gain much traction with the less friendly versions (Tiny C, for example.) I had coded up a few things, but I didn't really "get" it. So he sat me down with a pen and paper, and drew diagrams to explain what was going on. I remember that lightbulb going off in the back of my mind, as I dimly connected the whole idea of types and sizes together - and that was it. I was off and doing bad things to C code.

I eventually saved up enough for an updated 286 motherboard, then an updated graphics card (full VGA!), then a sound blaster card, and finally a 486-DX33 motherboard. He introduced me to his friend Peter (who had, and I believe still has, a rather extensive electronics collection) and handed me a FreeBSD-1.1 CDROM. I took it home, put it in, and .. it didn't do anything. My 486 had a soundblaster pro + CD-ROM, and .. well, FreeBSD-1.1 didn't speak to that hardware. So, I eventually put Slackware Linux 3.0 on the thing, and became a Linux nerd for a bit.

I did eventually try FreeBSD-1.1 on it - after putting a lot of FreeBSD bits on a lot of floppies - but I couldn't figure out what to do when it booted. This is going to sound silly - but the lack of colorls turned me off. I know, it seems silly now, but that's honestly why I went back to Slackware.

I eventually went back to FreeBSD in the 2.x era once I had an IDE CDROM and I was working part time at an ISP after (high) school finished. Yes, I figured out how to get colorls to work, I got in trouble disagreeing with a Michael (O, not M) at iiNet about Squid on Linux versus FreeBSD, and well.. stuff. Here was this 17yo kid disagreeing with things and acting like he knew everything. I'm sure it was endearing.

Fast-forward a couple years, and I had been hacking on FreeBSD here and there. I got in a little erm, "trouble" before I finished high school, which phk reminded me of - when they granted me a commit bit. I forget when this was, but I wouldn't have been much older than 20.

So - this is why mentoring kids is important. It may seem like a waste of time; it may seem like they don't understand, but we were all there once. We wanted someone to relate to, someone to look up to, and something interesting to do. Julian was that person for me, and I owe both him and my mother (of course) pretty much everything about my existence in this silly little computer industry.

(This is also why you don't skimp on hardware support for popular, if cheaper, platforms and "shiny" looking features if you want people to adopt your stuff - but that's a different rant.)

Ok, that's done. I'm going back to hacking on VGA/VESA boot loader support for FreeBSD-HEAD. That's long overdue, and I want my pretty splash screen.

RTL-SDR on FreeBSD, or "hey, cool, I live near an airport, I wonder if ADSB works.."

I bought one of those cheap RTL-SDR units a few months ago. There's no real kernel code required for it - all of the rtl-sdr code just uses the generic USB userland API which is shared between many operating systems.

So, getting it going was pretty easy:

# pkg install rtl-sdr

Then, using it to test ADSB is pretty easy:

# rtl_adsb -V -S 

.. this is verbose and listens to short packets.

Where I live (near San Jose Airport!) I receive a lot of ADSB transmissions. It's quite interesting.

Ok, so next - what about something more GUI-like? Someone's already done it - https://github.com/antirez/dump1090 . There's already a package for it:

# pkg install dump1090
# dump1090 --net --aggressive

Then, point a web browser at http://localhost:8080/ and watch!

OpenStack on FreeBSD/Xen Proof of Concept

In my previous post I described how to run libvirt/libxl on a FreeBSD Xen dom0 host. Today we're going a little further and running OpenStack on top of that.

Screenshot showing the Ubuntu guest running on OpenStack on the FreeBSD host.

Setup Details

I'm running a slightly modified OpenStack stable/kilo version. Everything is deployed on two hosts: controller and compute.

Controller

The controller host is running FreeBSD -CURRENT. It has the following components:

  • MySQL 5.5
  • RabbitMQ 3.5
  • glance
  • keystone through apache httpd 2.4 w/ mod_wsgi

Everything here is installed through FreeBSD ports (except glance and keystone) and required no modifications.

For glance I wrote rc.d scripts to have a convenient way to start it:
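
The scripts follow the standard rc.subr pattern; a minimal sketch (not the actual script - the glance-api path and the use of daemon(8) are assumptions on my part) looks like:

#!/bin/sh
#
# PROVIDE: glance_api
# REQUIRE: NETWORKING
#
. /etc/rc.subr

name="glance_api"
rcvar="glance_api_enable"
pidfile="/var/run/${name}.pid"
command="/usr/sbin/daemon"
command_args="-p ${pidfile} /usr/local/bin/glance-api"

load_rc_config $name
: ${glance_api_enable:="NO"}
run_rc_command "$1"

With that in place, the services report their status as expected: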

(18:19) novel@kloomba:~ %> sudo service glance-api status
glance_api is running as pid 792.
(18:19) novel@kloomba:~ %> sudo service glance-registry status
glance_registry is running as pid 796.
(18:19) novel@kloomba:~ %>

Compute

The compute node is running the following:

  • libvirt from the git repo
  • nova-compute
  • nova-scheduler
  • nova-conductor
  • nova-network

This host is running FreeBSD -CURRENT as well. I also wrote some rc.d scripts for the nova services, except for nova-network and nova-compute, because I start those by hand and want to see the logs right on the screen.

Nova-network is running in FlatDHCP mode. For Nova I had to implement a FreeBSD version of linux_net.LinuxNetInterfaceDriver, which is responsible for bridge creation and for plugging devices into it. It doesn't support vlans at this point though.

Additionally, I implemented NoopFirewallManager to be used instead of linux_net.IptablesManager, and modified nova to allow specifying the firewall driver to use.

A few more things I modified: fixing the network.l3.NullL3 class's mismatched interface, and modifying virt.libvirt to use the 'phy' driver for disks in libvirt domain XML.

And of course I had to disable a few things in nova.conf that obviously do not work on FreeBSD.

I hope to put everything together, upload the code to github, and create a wiki page documenting the deployment. It's definitely worth noting that things are very, very far from stable. There are some traces here and there, VMs sometimes fail to start, xenlight for some reason can start failing at VM startup, etc. So if you're looking at it as a production tool, you should definitely forget about it; at this point it's just a thing to hack on.

libvirt/libxl on FreeBSD

A few months ago FreeBSD Xen dom0 support was announced. There's even a guide available on how to run it: http://wiki.xen.org/wiki/FreeBSD_Dom0.

I will not duplicate the stuff described in that document; I'll just suggest that if you're going to try it, it'd probably be better to use the emulators/xen port instead of compiling things manually from the git repo. I'll also share some bits that could save some of your time.

X11 and Xen dom0

I wasn't able to make X11 work under dom0. When I startx with x11/nvidia-driver enabled in xorg.conf, the kernel panics. I tried to use the integrated Intel Haswell video, but it's not supported by x11-drivers/xf86-video-intel. It works with x11-drivers/xf86-video-vesa; however, the vesa driver causes a system lockup on shutdown that triggers fsck on every boot, which is very annoying. Apparently, this behavior is the same even when not under Xen. I decided to stop wasting my time trying to fix it and just started using the box in headless mode.

IOMMU

You should really not ignore the IOMMU requirement; check that your CPU supports it. If you boot the Xen kernel without IOMMU support, it will fail to boot and you'll have to perform some boot loader tricks to disable Xen so your system can boot (i.e. unload xen and unset xen_kernel). Just google your CPU name, e.g. 'i5-4690', and follow the link to ark.intel.com. Make sure it lists VT-d as supported under the 'Advanced Technologies' section, and make sure it's enabled in the BIOS as well.

UEFI

At the time of writing (May / June 2015), Xen doesn't work with the UEFI loader.

xl cannot allocate memory

You will most likely have to modify /etc/login.conf to set memorylocked=unlimited for your login class, otherwise the xl tool will fail with a 'cannot allocate memory' error.
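
For example, add the capability to your class (shown here as a two-entry fragment of the default class; your class will have many more entries), then rebuild the login database with cap_mkdb and log in again for it to take effect:

default:\
        :memorylocked=unlimited:\
        :umask=022:

cap_mkdb /etc/login.conf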

libvirt

It's very good that Xen provides the libxl toolkit. It should have been installed as a dependency when you installed the emulators/xen port; the actual port that installs it is sysutils/xen-tools. As the libvirt Xen driver supports libxl, not much work was required to make it work on FreeBSD. I made only a minor change to disable some Linux-specific /proc checks inside libvirt, and pushed that to the 'master' branch of libvirt today.

If you want to test it, you'd need to checkout libvirt source code using git:

git clone git://libvirt.org/libvirt.git

and then run ./bootstrap. It will tell you if it needs something that's not installed.

For my libxl test setup I configure libvirt this way:

./configure --without-polkit --with-libxl --without-xen --without-vmware --without-esx --without-bhyve CC=gcc48 CFLAGS=-I/usr/local/include LIBS=-L/usr/local/lib

The only really important part here is '--with-libxl'; the other flags are more or less specific to my setup. After configure, just run gmake and it should build fine. Now you can install everything and run the libvirtd daemon.

If everything went fine, you should be able to connect to it using:

virsh -c "xen://"

Now we can define some domains. Let's check these two examples:

The first one is for a simple pre-configured FreeBSD guest image. The second one defines a CDROM device and hard disk devices; it's set to boot from CDROM to be able to install Linux. Both domains are configured to attach to the default libvirt network on the virbr0 bridge, and both support VNC.
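
For a rough idea of the shape of such a definition, a minimal libxl domain might look something like this (a sketch, not one of the two examples above; the name, memory size and disk path are made up):

<domain type='xen'>
  <name>freebsd-guest</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='block' device='disk'>
      <driver name='phy'/>
      <source dev='/dev/zvol/tank/freebsd-guest'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <graphics type='vnc' port='-1' autoport='yes'/>
  </devices>
</domain>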

You can get a domain's VNC display number using the vncdisplay command in virsh and then connect to the VM with your favorite VNC client.

I've been using this setup for a couple of days and it works fine. However, more testers are welcome; if you're using it and have some issues, please drop me an email at novel@`uname -s`.org or poke me on twitter.

freebsd-wifi-build, or "wait, you can run freebsd on atheros MIPS access points? where do I get that?"

I've been running FreeBSD at home as my primary internet/wifi access for a few years now. It's cheap, it's easy to do, and I've tried very hard to wrap up the whole process into a mostly-simple build system that spits out a useful image to use.

It's pretty simple in concept - I take FreeBSD-HEAD, build it with some cut-down options, create a custom filesystem image with some custom boot scripts and a custom configuration file, and provide an image that you can TFTP (using a serial console and ethernet cable) or upload directly to the AP if it supports it.

The supported hardware list is here:

https://github.com/freebsd/freebsd-wifi-build/wiki/Supported-Boards

Now, it's not a huge list like OpenWRT, but that's mostly because I don't have an infinite supply of Atheros MIPS based routers. I think I'll get some of the TP-Link Archer series stuff next.

Building it is pretty simple:

https://github.com/freebsd/freebsd-wifi-build/wiki

You check out the build repo, check out FreeBSD-HEAD, install a couple of packages, and run the build for your board. Once it's done, the images for your board appear in ../tftpboot/. There's a wiki page for each of the supported boards with a walkthrough of how to get FreeBSD going on it.

It comes up on 192.168.1.20/24 with 'user' and 'root' users, with no password. So, the first thing you should do after installation is telnet in, configure /etc/cfg/rc.conf with your actual LAN IPs, set the user/root passwords, and then run 'cfg_save' to save things. Then, reboot and voila!

The configuration file format looks like FreeBSD's but it isn't. I'm keeping the naming somewhat hierarchical-looking but the implementation flat, so I can migrate it to something like a sqlite or luci backend in the future.

https://github.com/freebsd/freebsd-wifi-build/wiki/Config-Overview

It's good enough for me to be able to set up an AP to be a bridge with a management IP address and configure the ethernet switch. Others have added ipfw support to do NAT and firewalling - I'm going to add configuration rules for NAT, IPFW and routing soon so it's all integrated.

It's FreeBSD, all the way through:

$ uname -a
FreeBSD tl-wdr3600 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r282406M: Wed May 6 22:27:16 PDT 2015 adrian@lucy-11i386:/usr/home/adrian/work/freebsd/head-embedded/obj/mips/mips.mips/usr/home/adrian/work/freebsd/head-embedded/src/sys/TL-WDR4300 mips
$ ifconfig wlan0 list sta
ADDR AID CHAN RATE RSSI IDLE TXSEQ RXSEQ CAPS FLAG
18:ee:69:15:f4:12 2 1 26M 37.0 45 2703 51888 EPS AQEHTRM RSN HTCAP WME
04:e5:36:0d:1b:0d 1 1 19M 23.0 15 1524 47072 EPS AQEPHTR RSN HTCAP WME
cc:3a:61:0e:33:a0 3 1 19M 32.0 30 2585 43072 EPS AQEPHTR RSN HTCAP WME
40:0e:85:1a:f1:69 4 1 19M 25.0 30 1138 54800 EPS AQEPHTR RSN HTCAP WME
00:0f:13:97:14:54 5 1 54M 30.0 45 1808 57424 EPS AE RSN
00:22:fa:c2:d1:20 6 1 26M 24.5 0 574 57776 EPS AQEHTRS RSN HTCAP WME

So if you'd like a FreeBSD based device to act as your home gateway, this is where you can start. It's not pfsense, but it's designed to run on things much smaller than pfsense supports, and it's a good introduction to the world of FreeBSD embedded.

Intel DDIO, LLC cache, buffer alignment, prefetching, shared locks and packet rates.

I've been digging into the low level behaviour of high throughput packet classification and pushing for my job. The initial suggestion from everyone was "use netmap!" Which was cool, but it only seems to do fast packet work if you're only ever flipping packets between receive and transmit rings. Once you start actually looking into the payload, you start having to take memory misses and things can slow down quite a bit. An L3 miss (ie, RAM access) on Sandy Bridge is ~50ns. (There's also costs involved in walking the TLB, but I won't cover that here.)

For background: http://7-cpu.com/cpu/SandyBridge.html .

But! Intel has this magical thing called DDIO. In theory (and there's a lot of theory here), DMA is done via a small (~10%) fraction of LLC (L3) cache, which is shared between all cores. If the data is already in cache when the CPU accesses it, it will be quick. Also, if you then wish to DMA out data from something in cache, it doesn't have to get flushed to memory first - it's just DMAed straight out of cache.

However! When I was doing packet bridge testing (using netmap + bridge, 64 byte payloads), I noticed that I was using a significant amount of memory bandwidth. It wasn't quite at the rate of 10G worth of bridged data, but DDIO should be doing almost all of that work for me at 64 byte payloads.

So, to reproduce: run netmap bridge (eg 'bridge -i netmap:ix0 -i netmap:ix1') and run pkt-gen between two nodes.

This is the output of 'pcm-memory.x 1' from the intel-pcm toolkit (which is available as a binary package on FreeBSD.)

---------------------------------------||---------------------------------------
--                   System Read Throughput(MB/s):    300.68                  --
--                  System Write Throughput(MB/s):    970.81                  --
--                 System Memory Throughput(MB/s):   1271.48                  --
---------------------------------------||---------------------------------------

The first theory - the bridging isn't occurring fast enough to service what's in LLC before it gets flushed out by other packets. So, assume:

  1. It's 1/10th of the LLC - which is 1/10th of an 8 core * 2.5MB per core setup, is ~ 2MB.
  2. 64 byte payloads are being cached.
  3. Perfect (!) LLC use.
That's 32,768 packets at a time (2MB / 64 bytes). Now, netmap is doing ~1000 packets a batch and it's keeping up with line rate bridging on one core (~14 million packets per second), so it's not likely that.

Ok, so what if it's not perfect LLC usage?

Then I thought back to cache line aliasing and other issues that I've previously written about. What if the buffers are perfectly aligned (say, 2048 byte aligned) - the cache line aliasing effects should also manifest themselves as low LLC utilisation.

Luckily netmap has a twiddle - 'dev.netmap.buf_size' / 'dev.netmap.priv_buf_size'. They're both .. 2048. So yes, the default buffer sizes are aligned, and there's likely some very poor LLC utilisation going on.

So, I tried 1920 - that's 2048 - (2 * 64) - ie, two cache lines less than 2048.


---------------------------------------||---------------------------------------
--                   System Read Throughput(MB/s):    104.92                  --
--                  System Write Throughput(MB/s):    382.32                  --
--                 System Memory Throughput(MB/s):    487.24                  --
---------------------------------------||---------------------------------------

It's now using significantly less memory bandwidth to do the same thing. I'm guessing this is because I'm now using the LLC much more efficiently.

Ok, so that's nice - but what about when it comes time to actually look at the packet contents to make decisions?

I've modified a copy of bridge to do a few things, mostly inspired by netmap-ipfw:
  • It does batch receive from netmap;
  • but it then looks at the ethernet header to decap that;
  • then it gets the IPv4 src/dst addresses;
  • .. and looks them up in a (very large) traditional hash table.
I also have a modified copy of pkt-gen that will use completely random source and destination IPv4 addresses and ports, so as to elicit some very terrible behaviour.

With an empty hash set, but still dereferencing the ethernet header and IPv4 source/destination, handling a packet at a time, no batching, no prefetching and only using one core/thread to run:

buf_size=2048:
  • Bridges about 6.5 million pps;
  • .. maxes out the CPU core;
  • Memory access: 1000MB/sec read; 423MB/sec write (~1400MB/sec in total).
buf_size=1920:
  • Bridges around 10 million pps;
  • 98% of a CPU core;
  • Memory access: 125MB/sec read, 32MB/sec write, ~ 153MB/sec in total.
So, it's a significant drop in memory throughput and a massive increase in pps for a single core.

Ok, so most of the CPU time is now spent looking at the ethernet header in the demux routine and in the hash table lookup. It's a blank hash table, so it's just the memory access needed to see if the bucket has anything in it. I'm guessing it's because the CPU is loading in the ethernet and IP header into a cache line, so it's not already there from DDIO.

I next added in prefetching the ethernet header. I don't have the code to do that, so I can't report numbers at the moment. But what I did there was I looped over everything in the netmap RX ring, dereferenced the ethernet header, and then did per-packet processing. This was interesting, but I wanted to try batching out next. So, after some significant refactoring, I arranged the code to look like this:
  1. Pull in up to 1024 entries from the netmap receive ring;
  2. Loop through, up to 16 at a time, and place them in a batch
  3. For each packet in a batch do:
    1. For each packet in the batch: optional prefetch on the ethernet header
    2. For each packet in the batch: decapsulate ethernet/IP header;
    3. For each packet in the batch: optional prefetch on the hash table bucket head;
    4. For each packet in the batch: do hash table lookup, decide whether to forward/block
    5. For each packet in the batch: forward (ie, ignore the forward/block for now.)
I had things be optional so I could turn on/off prefetching and control the batch size.
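
In pseudocode-ish C, the refactored loop looks roughly like this (a sketch of the structure described above, not the actual code; frame, decap, lookup_bucket, hash_lookup and forward are all hypothetical helpers):

#include <sys/param.h>  /* for MIN() */
#include <stdint.h>

/* Hypothetical frame layout and helpers - only the loop shape matters. */
struct frame { char *buf; uint32_t src, dst; };
void decap(struct frame *);
void *lookup_bucket(uint32_t addr);
int hash_lookup(const struct frame *);
void forward(struct frame *);

#define BATCH 16

void
process_ring(struct frame *frame, int nframes, int prefetch_eth,
    int prefetch_bucket)
{
    int i, j, n, verdict[BATCH];

    for (i = 0; i < nframes; i += n) {
        n = MIN(BATCH, nframes - i);
        if (prefetch_eth)
            for (j = 0; j < n; j++)     /* warm ethernet headers */
                __builtin_prefetch(frame[i + j].buf);
        for (j = 0; j < n; j++)         /* decap ethernet/IP headers */
            decap(&frame[i + j]);
        if (prefetch_bucket)
            for (j = 0; j < n; j++)     /* warm hash bucket heads */
                __builtin_prefetch(lookup_bucket(frame[i + j].dst));
        for (j = 0; j < n; j++)         /* forward/block decision */
            verdict[j] = hash_lookup(&frame[i + j]);
        for (j = 0; j < n; j++)         /* forward; verdict ignored for now */
            forward(&frame[i + j]);
    }
}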

So, with an empty hash table, no prefetching and only changing the batch size, at buf_size=1920:
  • Batch size of 1: 10 million pps;
  • Batch size of 2: 11.1 million pps;
  • Batch size of 4: 11.7 million pps.
Hm, that's cute. What about with prefetching of ethernet header? At buf_size=1920:
  • Batch size of 1: 10 million pps;
  • Batch size of 2: 10.8 million pps;
  • Batch size of 4: 11.5 million pps.
Ok, so that's not that useful. Prefetching on the bucket header here isn't worthwhile, because the buckets are all empty (and thus NULL pointers.)

But, I want to also be doing hash table lookups. I loaded in a reasonably large hash table set (~ 6 million entries), and I absolutely accept that a traditional hash table is not exactly memory or cache footprint happy. I was specifically after what the performance was like for a traditional hash table. Said hash table has 524,288 buckets, and each points to an array of IPv4 addresses to search. So yes, not very optimal by any measure, but it's the kind of thing you'd expect to find in an existing project.

With no prefetching, and a 6 million entry hash table:

At 2048 byte buffers:
  • Batch size of 1: 3.7 million pps;
  • Batch size of 2: 4.5 million pps;
  • Batch size of 4: 4.8 million pps.
At 1920 byte buffers:
  • Batch size of 1: 5 million pps;
  • Batch size of 2: 5.6 million pps;
  • Batch size of 4: 5.6 million pps.
That's a very inefficient hash table - each bucket is going to have around 11 IPv4 entries in it, and that's checking almost a cache line worth of IPv4 addresses in it. Not very nice. But, it's within a cache line worth of data, so in theory it's not too terrible.

What about with prefetching? All at 1920 byte buffers:
  • Batch size of 4, ethernet prefetching: 5.5 million pps
  • Batch size of 4, hash bucket prefetching: 7.7 million pps
  • Batch size of 4, ethernet + hash bucket prefetching: 7.5 million pps
So in this instance, there's no real benefit from doing prefetching on both.

For one last test, let's bump the bucket count from 524,288 to 2,097,152. These again are all at buf_size=1920:
  • Batch size of 1, no prefetching: 6.1 million pps;
  • Batch size of 2, no prefetching: 7.1 million pps;
  • Batch size of 4, no prefetching: 7.1 million pps;
  • Batch size of 4, hash bucket prefetching: 8.9 million pps.
Now, I didn't quite predict this. I figured that since I was reading in the full cache line anyway, having up to 11 entries in it to linearly check would be cheap. It turns out that no, that's not exactly true.

The difference between the naive way (no prefetching, no batching) and 4-packet batching with hash bucket prefetching is not trivial - it's ~50% faster. Going all the way to the larger bucket count was ~75% faster. Now, this hash implementation is not exactly cache footprint friendly - it's bigger than the LLC, so with random flows, and thus no real useful cache behaviour, it's going to degrade to quite a few memory accesses.

This has been quite a fun trip down the optimisation peephole. I'm going to spend a bunch of time writing down the hardware performance counters involved in analysing this stuff and I'll look to write a follow-up post with details about that.

One final thing: threads and locking. I wanted to clearly demonstrate the cost of shared read locks on a setup like this. There have been lots of discussions about the right kind of locking and concurrency strategies, so I figured I'd just do a simple test in this setup and explain how terrible it can get.

So, no read-locks between threads on the hash table, batch size of 4, hash bucket prefetching, buf_size=1920:
  • 1 thread: 8.9 million pps;
  • 4 threads: 12 million pps.
But with a read lock on the hash table lookups:
  • 1 thread: 7 million pps;
  • 4 threads: 4.7 million pps.
I'm guessing that as I add more threads, the performance will drop.

Even taking a rwlock as a reader lock in pthreads is expensive - it's purely just an atomic increment/decrement in FreeBSD, but it's still not free. I'm getting the lock once for two hash table lookups - ie, the source and destination IP hash table lookups are done under one lock. I'm sure if I took the lock for the whole batch hash table lookup it'd work out a little better on a small number of CPU cores, but I think this demonstrates my point - read locks aren't going to cut it when you have a frequently accessed thing to protect.

The best bit about this post? The prefetching, terrible (large) hash table performance and general cache abuse is not new. Doing batching on superscalar Intel CPUs is not new. Documenting DDIO effectiveness using non-power-of-two-aligned buffer sizes is new, but it's just a rehash of the existing cache aliasing effect. But, I now have a little test bed to experiment with these things without having to try and involve the rest of a kernel.

Yes, I'll publish code soon.

Using the arswitch ethernet switch on FreeBSD

I sat down a few weeks ago to make the AR8327 ethernet switch work, and in doing so I wanted to add per-port and 802.1q VLAN support. It turned out that I .. didn't know as much as I thought I did about the etherswitch support. So, after a whole bunch of trial and error, I wrapped my head around things. This post is mostly a braindump so that I have something written down in case I do forget - at least until I turn it into a FreeBSD manpage.

There's three modes:
  • default - all ports are in the same VLAN;
  • per-port - each port can be in a VLAN 'group';
  • dot1q - each port can be in multiple VLAN groups, with 802.1q tagging going on.
The per-port VLAN group is for switches that don't have an arbitrary VLAN table - you just assign each port an ID from some low set of values (say, 16), and then the VLAN tag can either be added or not added. I think the RTL8366 switch is like this, but I'd have to check.

The dot1q VLAN is for switches that support multiple VLANs, each can have an arbitrary VLAN ID (0..4095) with optional other VLAN options (like tag-in-tag support.)

The etherswitch configuration side has a few options and they're supported by different hardware:
  • Each port has a port VLAN ID - this is the "native" VLAN for dot1q support. I don't think it has any particular meaning in the per-port VLAN code in arswitch, but I could be terribly wrong. I thought it did when I initially did the port, but the documentation is .. lacking.
  • Then there's a set of per-port flags - eg q-in-q, 802.1q tagging, etc.
  • Then there's the vlangroup - each vlangroup has a vlan ID, and then a set of port members. Each port member can be tagged or untagged.
This is where things get odd.

Firstly - the AR934x SoC switch support doesn't include VLANs. I need to add that. I'm not sure which side of the wall this falls on.

The switches prior to the AR8327 support per-port and 802.1q VLAN configuration, but they don't support per-port-per-VLAN tagging. Ie, you can configure 802.1q VLANs, and you can enable tagging on the port - but it tags all packets that aren't the port's VLAN ID.

The per-port VLAN ID seems ignored by the arswitch code - it's only used by the dot1q support.

So I think (and it hasn't yet been tested) that on the earlier switches, I can use per-port VLANs with tagging by:
  • Configuring per port vlans - "etherswitch config vlan_mode port"
  • Adding vlangroups as appropriate with membership - tag/untag doesn't matter
  • Set the CPU port up to have tagging - "etherswitch port0 addtag"
When configuring dot1q VLANs, the mode is "config vlan_mode dot1q" and the 802.1q VLAN IDs are used, but the above still holds - the port is tagged or untagged.

But on the AR8327, the VLAN map hardware actually supports enabling/disabling tagging on a per-port-per-VLAN basis. Ie, when the VLAN table is programmed with the port membership, it takes a list of both the ports and whether the ports are tagged/untagged/open/filtered. So, I don't think per-port VLAN tagging works - only dot1q tagging. Maybe I can make it work, but I haven't really sat down for long enough with the documentation to see what combinations are required. So, for the AR8327, the dot1q configuration goes like this:
  • Configure the hardware - "etherswitch config vlan_mode dot1q"
  • Add vlangroups as appropriate, set pvid as appropriate
  • For each vlangroup membership, the port can be tagged or untagged - eg to tag the cpu port 0, you'd use '0t' as the port member. That says "port0 is a member, and it's tagged."
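
Putting the dot1q case together on the command line, it might look like this (a sketch; VLAN 10 and the port numbers are made up, and the utility may be installed as etherswitchcfg on your system):

# etherswitch config vlan_mode dot1q
# etherswitch vlangroup1 vlan 10 members 1,2,0t
# etherswitch port1 pvid 10
# etherswitch port2 pvid 10
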
I still have a whole lot more to add - the ingress/egress filters aren't configurable, the per-port vlan stuff needs to be made much more sensible and consistent - and the AR934x SoC switch needs to support VLANs. Oh, and much more documentation. But, hey, I can get the thing spitting out VLAN tags, so when it's time to set up my home network with some VLANs, I'll be sure to document what I did and share it with everyone.

Cache Line Aliasing #2, or "What happens when you page align everything"

After a little more digging into the Intel performance side of things, I discovered one of the big reasons for the performance drop on this particular workload: how Intel CPUs do memory reordering.

The TL;DR is this - there's some hardware inside the Intel CPUs that tracks memory ordering and cache contents - but they don't use all the address bits.

The relevant chapter in the intel optimisation guide is 3.6.8 - Capacity Limits and Aliasing in Caches. The specific thing I was hitting was in 3.6.8.2 - Store Forwarding Aliasing.

Assembly/Compiler Coding Rule 56. (H impact, M generality) Avoid having a store followed by a non-dependent load with addresses that differ by a multiple of 4 KBytes. Also, lay out data or order computation to avoid having cache lines that have linear addresses that are a multiple of 64 KBytes apart in the same working set. Avoid having more than 4 cache lines that are some multiple of 2 KBytes apart in the same first-level cache working set, and avoid having more than 8 cache lines that are some multiple of 4 KBytes apart in the same first-level cache working set.

So, given this, what can be done? In this workload, a bunch of large matrices were allocated via jemalloc, which page aligns large allocations. In the default invocation of the benchmark (where the allocation padding size is 0), the memory access patterns showed a very large number of counter events on "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS" - which is the number of 64k address aliases on the Sandy Bridge Xeon processors I've been testing on. (The same occurs on Westmere, Ivy Bridge and Haswell.) As I vary the padding size, the address aliasing value drops, the memory access counters increase, and the general performance increases.

On the test boxes I have, running pmcstat -w 120 -C -p LD_BLOCKS_PARTIAL.ADDRESS_ALIAS ./himenobmtxpa M gives:

padding (bytes)    ADDRESS_ALIAS events    benchmark score
0                  217799413                830.995025
64                 18138386                1624.296713
96                 8876469                 1662.486298
128                19281984                1645.370750
192                18247069                1643.119908
256                18511952                1661.426341
320                19636951                1674.154119
352                19716236                1686.694053
384                19684863                1681.110499
448                18189029                1683.163673
512                19380987                1691.937818

So there's still plenty of aliasing going on at the different padding offsets; however, there's a very marked drop between 0 and, well, anything else.

It turns out that someone's gone and done a bunch more digging into the effects of various CPU magic under the hood. The last paper in the list (Analysing Contextual Bias..) looks at Aliasing and Cache Effects and the effect of memory layout. There's some cute (and sobering!) analysis of the performance changes due to something as simple as the length of your login name in the UNIX environment. It's worth reading.

The summary? Maybe page alignment of all of your memory accesses isn't the way to go.

For further reading:

cache line aliasing effects, or "why is freebsd slower than linux?"

There were some threads on the FreeBSD/DragonflyBSD mailing lists a few years ago (2012?) which talked about some math benchmarks being much slower on FreeBSD/DragonflyBSD versus Linux.

When the same benchmark is run on FreeBSD/DragonflyBSD using the Linux layer (ie, a linux binary compiled for linux, but run on BSD) it gives the same or better behaviour.

Some digging was done, and it turned out it was due to memory allocation patterns and memory layout. The jemalloc library allocates large chunks at page aligned boundaries, whereas the allocator in glibc under Linux does not.

I've put the code online in the hope that others can test and verify this:

https://github.com/erikarn/himenobmtxpa

The branch 'local/freebsd' has my local change to allow the allocator offset to be specified. The offset compounds on each allocation - so with an 'n' byte offset, the first allocation is 0 bytes offset from the page boundary, the next is 'n' bytes offset from the page boundary, the next is '2n' bytes offset, etc.
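
In code, the compounding offset looks roughly like this (my sketch, not the actual change in that branch; a real version would also have to remember the base pointer so the memory can be freed):

#include <stdlib.h>

static size_t offset_step;      /* the 'n' byte offset, e.g. 127 */
static size_t offset_cur;

/*
 * Allocate page-aligned memory, then shift the returned pointer by a
 * compounding offset: 0 bytes for the first allocation, n for the
 * next, 2n for the one after that, and so on.
 */
void *
alloc_with_offset(size_t size)
{
    void *base;
    char *p;

    if (posix_memalign(&base, 4096, size + 4096) != 0)
        return (NULL);
    p = (char *)base + offset_cur;
    offset_cur = (offset_cur + offset_step) % 4096;
    return (p);
}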

You can experiment with different values and get completely different behavioural results. It's non-trivial: there's a 100% speedup by using a 127 byte offset for each allocation, versus a 0 byte offset.

I'd like to investigate cache line aliasing effects further. There was work done a few years ago to offset mbuf headers in the FreeBSD kernel so they weren't all page-aligned or 256/512/1024 byte aligned - and apparently this gave a significant performance improvement. But it wasn't folded into FreeBSD. What I'd like to do is come up with some better strategies / profiling guides for identifying when this is actually happening so the underlying objects being accessed can be adjusted.

So - if anyone out there has any tips, hints or suggestions on how to do this, please let me know. I'd like to document and automate this testing.

FreeBSD on the POWER8: it’s alive!

A post to freebsd-ppc from a couple of months ago asked if we had support for POWER8 and offered to provide remote access to anyone interested in working on it. I was sufficiently intrigued that I approached the FreeBSD powerpc hackers to ask about it, and was informed that it'd be nice, but we didn't have hardware.

After a bit of wrangling of hardware logistics and with the FreeBSD Foundation purchasing a box, a Tyan POWER8 evaluation server appeared. Nathan Whitehorn started poking at it and managed to get a basic "hello world" going, but stalled on issues with the Linux KVM virtualisation environment.

Fast forward a few weeks - he's figured out the KVM issues, their lack of support for some mandated hypervisor APIs and other bugs - FreeBSD now boots inside of the hypervisor environment and seems stable enough to do development on.

He then found the existing powerpc pmap (physical memory management) code wasn't very SMP friendly - it works fine on one and two CPU powerpc machines, but this POWER8 evaluation board is a 4-core, 32-thread CPU. So a few days of development went by and he rewrote most of the pmap code to be much more finely locked and to scale much, much better than the existing code. (He also found the PS3 hypervisor layer isn't thread-safe.)

What's been done thus far?

  • FreeBSD boots inside the hypervisor environment;
  • Virtualised console, networking and storage all work;
  • (in progress) new, scalable pmap implementation;
  • Initial support for the Vector-Scalar Extension (VSX) that's found on POWER7 and POWER8.
So, I'm impressed. Nathan's done a fantastic job bringing the whole thing up. There's some further work on the new powerpc technology that needs doing (things like the new vector processing units, performance counter support and such) and I'm sure Justin and Nathan will poke powerpc dtrace support into further good shape. I'm going to see if we can fit a chelsio 40G NIC into one of these and work with their developers to fix any endian/busdma issues that creep up, and then do some network stack scaling testing with it. There's also the missing hardware/hypervisor support needed to run FreeBSD on the bare metal, which would be a fantastic achievement.

Now I kind of want some larger POWER8 hardware.

TDMA (somewhat) working on AR9380 chips

(Wow, I have a lot of posts to write to catch up on things.)

I've just brought up FreeBSD's TDMA support on the AR9380 chipset. Specifically, the AR9331, since I have a Carambola 2 on me today.

It was pretty simple to bring up - I was missing the beacon configuration HAL call that the TDMA code expected. It's only used by the TDMA code - the STA and AP modes rely on the normal HAL beacon methods that date back to the Atheros HAL.

The only problem - it seems something is up with ANI (noise immunity) and sensitivity on at least the AR9331. It doesn't seem to behave well on slightly loaded channels and thus the beacons don't always go out when they're supposed to.

But, if you've been wanting to play with TDMA on the later Atheros chips, now you can!

pkg(8) passes coverity scans

At FOSDEM, phk@ reminded me that one should always, on a regular basis, run static analysis of the code with all the tools available.

We did, but on an irregular basis, and we only paid attention to the most critical reports, not all of them.

That is now fixed. I relaunched a few scans via Coverity and I'm happy to say that the latest scan on master claims 0 defects!

Meaning that all known defects have been fixed.

I was also planning to use lint(1) as well; unfortunately, lint on FreeBSD does not support C99...

If I'm brave enough I may synchronise lint(1) with NetBSD's, which seems to have added C99 support to that tool. Or maybe someone will volunteer to do it? :)