Category Archives: tech

OpenSSH, PAM and user names

FreeBSD just published a security advisory for, amongst other issues, a piece of code in OpenSSH's PAM integration which could allow an attacker to use one user's credentials to impersonate another (original patch here). I would like to clarify two things, one that is already mentioned in the advisory and one that isn't.

The first is that in order to exploit this, the attacker must not only have valid credentials but also first compromise the unprivileged pre-authentication child process through a bug in OpenSSH itself or in a PAM service module.

The second is that this behavior, which is universally referred to in advisories and the trade press as a bug or flaw, is intentional and required by the PAM spec (such as it is). There are multiple legitimate use cases for this, such as:

  • Letting PAM, rather than the application, prompt for a user name; the spec allows passing NULL instead of a user name to pam_start(3), in which case it is the service module's responsibility (in pam_sm_authenticate(3)) to prompt for a user name using pam_get_user(3). Note that OpenSSH does not support this.

  • Mapping multiple users with different identities and credentials in the authentication backend to a single “template” user when the application they need to access does not need to distinguish between them, or when this determination is made through other means (e.g. environment variable, which service modules are allowed to set).

  • Mapping Windows user names (which can contain spaces and non-ASCII characters that would trip up most Unix applications) to Unix user names.

That being said, I do not object to the patch, only to its characterization. Regarding the first issue, it is absolutely correct to consider the unprivileged child as possibly hostile; this is, after all, the entire point of privilege separation. Regarding the second issue, there are other (and probably better) ways to achieve the same result—performing the translation in the identity service, i.e. nsswitch, comes to mind—and the percentage of users affected by the change lies somewhere between zero and negligible.

One could argue that instead of silently ignoring the user name set by PAM, OpenSSH should compare it to the original user name and either emit a warning or drop the connection if it does not match, but that is a design choice which is entirely up to the OpenSSH developers.


UPDATE 2014-10-14 23:40 UTC The details have been published: meet the SSL POODLE attack.

UPDATE 2014-10-15 11:15 UTC Simpler server test method, corrected info about browsers

UPDATE 2014-10-15 16:00 UTC More information about client testing

El Reg posted an article earlier today about a purported flaw in SSL 3.0 which may or may not be real, but it’s been a bad year for SSL, we’re all on edge, and we’d rather be safe than sorry. So let’s take it at face value and see what we can do to protect ourselves. If nothing else, it will force us to inspect our systems and make conscious decisions about their configuration instead of trusting the default settings. What can we do?

The answer is simple: there is no reason to support SSL 3.0 these days. TLS 1.0 is fifteen years old and supported by every browser that matters and over 99% of websites. TLS 1.1 and TLS 1.2 are eight and six years old, respectively, and are supported by the latest versions of all major browsers (except for Safari on Mac OS X 10.8 or older), but are not as widely supported on the server side. So let’s disable SSL 2.0 and 3.0 and make sure that TLS 1.0, 1.1 and 1.2 are enabled.

What to do next

Test your server

The Qualys SSL Labs SSL Server Test analyzes a server and calculates a score based on the list of supported protocols and algorithms, the strength and validity of the server certificate, which mitigation techniques are implemented, and many other factors. It takes a while, but is well worth it. Anything less than a B is a disgrace.

If you’re in a hurry, the following command will attempt to connect to your server using SSL 3.0 (substitute your server’s name and port for the server:443 placeholder):

:|openssl s_client -ssl3 -connect server:443

If the last line it prints is DONE, you have work to do.

Fix your server

Disable SSL 2.0 and 3.0 and enable TLS 1.0, 1.1 and 1.2 and forward secrecy (ephemeral Diffie-Hellman).

For Apache users, the following line goes a long way:

SSLProtocol ALL -SSLv3 -SSLv2

It disables SSL 2.0 and 3.0, but does not modify the algorithm preference list, so your server may still prefer older, weaker ciphers and hashes over more recent, stronger ones. Nor does it enable Forward Secrecy.
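A slightly fuller sketch addresses both points; the directives are mod_ssl’s, but the cipher string is only an illustration of the idea (prefer ECDHE/DHE suites, ban the obviously broken ones), not a vetted recommendation:

```apache
# Disable SSL 2.0 and 3.0, enforce the server's cipher preference,
# and put forward-secret (ECDHE/DHE) suites at the front of the list.
SSLProtocol ALL -SSLv3 -SSLv2
SSLHonorCipherOrder on
SSLCipherSuite "EECDH+AESGCM:EDH+AESGCM:EECDH+AES:EDH+AES:!aNULL:!eNULL:!MD5:!RC4"
```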

The Mozilla wiki has an excellent guide for the most widely used web servers and proxies.

Test your client

The Poodle Test website will show you a picture of a poodle if your browser is vulnerable and a terrier otherwise. It is the easiest, quickest way I know of to test your client.

Qualys SSL Labs also have an SSL Client Test which does much the same for your client as the SSL Server Test does for your server; unfortunately, it is not able to reliably determine whether your browser supports SSL 3.0.

Fix your client

On Windows, use the Advanced tab in the Internet Properties dialog (confusingly not searchable by that name, search for “internet options” or “proxy server” instead) to disable SSL 2.0 and 3.0 for all browsers.

On Linux and BSD:

  • Firefox: open about:config and set security.tls.version.min to 1. You can force this setting for all users by adding lockPref("security.tls.version.min", 1); to your system-wide Mozilla configuration file. Support for SSL 3.0 will be removed in the next release.

  • Chrome: open chrome://settings and select “show advanced settings”. There is an HTTPS/SSL section, but apparently no way to disable SSL 3.0 from there. Support for SSL 3.0 will be removed in the next release.

I do not have any information about Safari and Opera. Please comment (or email me) if you know how to disable SSL 3.0 in these browsers.

Good luck, and stay safe.

DNS improvements in FreeBSD 11

Erwin Lansing just posted a summary of the DNS session at the FreeBSD DevSummit that was held in conjunction with BSDCan 2014 in May. It gives a good overview of the current state of affairs, including known bugs and plans for the future.

I’ve been working on some of these issues recently (in between $dayjob and other projects). I fixed two issues in the last 48 hours, and am working on two more.

Reverse lookups in private networks

Fixed in 11.

In its default configuration, Unbound 1.4.22 does not allow reverse lookups for private addresses (RFC 1918 and the like). NLNet backported a patch from the development version of Unbound which adds a configuration option, unblock-lan-zones, which disables this filtering. But that alone is not enough, because the reverse zones are actually signed (EDIT: the problem is more subtle than that, details in comments); Unbound will attempt to validate the reply, and will reject it because the zone is supposed to be empty. Thus, for reverse lookups to work, the reverse zones for all private address ranges must be declared as insecure:

    # Unblock reverse lookups for LAN addresses
    unblock-lan-zones: yes
    # ...
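Declaring the zones insecure looks roughly like this; a sketch of the sort of directives the setup script generates, with the zone list abbreviated (172.16/12 alone spans sixteen such reverse zones):

```
    server:
        # skip DNSSEC validation for the signed-but-empty reverse zones
        domain-insecure: "10.in-addr.arpa."
        domain-insecure: "16.172.in-addr.arpa."
        domain-insecure: "168.192.in-addr.arpa."
```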

FreeBSD 11 now has both the unblock-lan-zones patch and an updated local-unbound-setup script which sets up the reverse zones. To take advantage of this, simply run the following command to regenerate your configuration:

# service local_unbound setup

This feature will be available in FreeBSD 10.1.

Building libunbound writes to /usr/src (#190739)

Fixed in 11.

The configuration lexer and parser were included in the source tree instead of being generated at build time. Under certain circumstances, make(1) would decide that they needed to be regenerated. At best, this inserted spurious changes into the source tree; at worst, it broke the build.

Part of the reason for this is that Unbound uses preprocessor macros to place the code generated by lex(1) and yacc(1) in its own separate namespace. FreeBSD’s lex(1) is actually Flex, which has a command-line option to achieve the same effect in a much simpler manner, but to take advantage of this, the lexer needed to be cleaned up a bit.

Allow local domain control sockets

Work in progress

An Unbound server can be controlled by sending commands (such as “reload your configuration file”, “flush your cache”, “give me usage statistics”) over a control socket. Currently, this can only be a TCP socket. Ilya Bakulin has developed a patch, which I am currently testing, that allows Unbound to use a local domain (aka “unix”) socket instead.

Allow unauthenticated control sockets

Work in progress

If the control socket is a local domain socket instead of a TCP socket, there is no need for encryption and little need for authentication. In the local resolver case, only root on the local machine needs access, and this can be enforced by the ownership and file permissions of the socket. A second patch by Ilya Bakulin makes encryption and authentication optional so there is no need to generate and store a client certificate in order to use unbound-control(8).

Dark Patterns

The term dark pattern was coined (I believe) by Harry Brignull to describe practices in user interface design intended to make it easy for your users to accidentally select a more profitable (for you) option and hard for them to revert, cancel or unsubscribe.

This is not news. We all know how, for instance, low-cost airlines try to trick you into ordering travel insurance, or software installers try to trick you into installing browser toolbars. But it’s something we usually associate with slightly dodgy outfits like RyanAir or Oracle.

I recently learned that Adobe really, really wants you to buy Acrobat XI Pro. It costs twice as much as Acrobat XI Standard and is loaded with features that few people need. Arguably, few people need Acrobat XI Standard either—but I have a lot of papers to scan and if there’s one thing Acrobat does better than any other software I’ve encountered, it’s scan and post-process documents.

Go to Adobe’s front page and select Acrobat from the Products drop-down. You get your average product page with product information, testimonials, awards, stock photos of happy people (presumably, Acrobat is what makes them happy), and a sidebar that offers you the Pro and Standard versions. The sidebar doesn’t list the full price, however, because Acrobat is pretty expensive. Instead, they show you the price of an upgrade from a previous version. They also offer you a free evaluation license, but there’s no evaluation license for Acrobat XI Standard. So you download and install an evaluation copy of Acrobat XI Pro and use it for a while and start to really like it. It keeps popping up a dialog nagging you to buy a full license, and finally you decide to do so and click the “Buy” button. It brings up the Adobe Store in a browser window with Acrobat XI Pro already in your shopping cart, and you think “No! I wanted the standard version!” and try to change your order, but you can’t actually convert the item in your cart from Pro to Standard, so you end up having to empty your cart and navigate through the store until you find the correct version. You pay and download and run the installer, but it refuses to run because you already have a better version installed (i.e. your unlicensed evaluation copy of Acrobat XI Pro) which you have to manually uninstall before installing your licensed copy of Acrobat XI Standard.

But at least you can scan.

On standards (and testing)

RFC 4648 defines the Base16, Base32 and Base64 encodings. Base16 (aka hex) and Base64 are widely known and used, but Base32 is an odd duck. It is rarely used, and there are several incompatible variants, of which the RFC acknowledges two: [A-Z2-7] and [0-9A-V].

One of the uses of Base32, and the reason for my interest in it, is in Google’s otpauth URI scheme for exchanging HOTP and TOTP keys. I needed a Base32 codec for my OATH library, so when a cursory search for a lightweight permissive-licensed implementation failed to turn up anything, I wrote my own.

My OATH implementation is currently deployed in an environment in which OTP keys for new users (or new OTP keys for existing users) are generated by the primary provisioning system, which passes them on to a smaller provisioning system in charge of firewalls and authentication (codenamed Nexus), which passes them on to a RADIUS server, which uses my code to validate user responses. When we transitioned from generating OTP keys manually to having the provisioning system generate them for us, we ran into trouble: some keys worked, others didn’t. It turned out to be a combination of factors:

  • The keys generated by the provisioning system were syntactically correct but out of spec. Most importantly, their length was not always a multiple of 40 bits, so their Base32 representation included padding.
  • Nexus performed only cursory validation of the keys it received from the provisioning system, so it accepted the out-of-spec keys.
  • The Google Authenticator app (at least the Android version, but possibly the iOS version as well) does not handle padded keys well. If I recall correctly, the original Android app rejected them outright; the current version simply rounds them down. (Why don’t the Android system libraries provide Base32 encoding and decoding?)
  • My Base32 decoder didn’t handle padding correctly either… and of course, I only had tests for the encoder, because I was in a rush when I wrote it and I didn’t need decoding until later. Yes, this is stupid. Yes, I fixed it and now have 100% condition/decision coverage (thanks to BullseyeCoverage, with a caveat: 100% C/D coverage of table-driven code does not guarantee correctness, because it only checks the code, not the table).

Having fixed both the provisioning system and the OATH verification tool, I decided to add stronger input validation to Nexus. The easiest way to validate a Base32-encoded key, I figured, is to decode it. And wouldn’t you know, there are not one but two Perl implementations of Base32!

Unfortunately, they’re both broken, and have been for years.

  • MIME::Base32 (the latest release is dated 2010-08-25, but the code hasn’t changed since the original release on 2003-12-10) does not generate padding, and decodes it into garbage. In addition, it does not accept lower-case code.
  • Convert::Base32 (the latest release is dated 2012-04-22, but the code hasn’t changed since the original release on 2001-07-17) does not generate padding, and dies when it encounters what it calls “non-base32 characters”. In addition, while it accepts lower-case code (which is commendable, even though the RFC specifies an upper-case alphabet), it also generates lower-case code, which is wrong.

Both packages ship with tests. MIME::Base32’s tests simply encode a string, decode the result, and check that they got the original string back.

Convert::Base32’s tests are more complex and include length and padding tests, but it defines padding as the lower, unused bits of the last non-padding character in the output.

MIME::Base32 references RFC 3548 (the predecessor to RFC 4648) but does not come close to implementing it correctly. Convert::Base32 predates the RFC and conforms to the old RACE Internet draft, which is small consolation since RACE was never standardized and was eventually replaced by Punycode.

I wrote a script which runs the RFC 4648 test vectors through either or both MIME::Base32 and Convert::Base32, depending on what’s available. The first two columns are the input and output to and from the encoder, and the last two are the input and output to and from the decoder. Note that the script adds the correct amount of padding before feeding the encoded string back to the decoder.

 1 f            |  2 MY               |  8 MY======         |  7 fOOOOO     
 2 fo           |  4 MZXQ             |  8 MZXQ====         |  6 fo����
 3 foo          |  5 MZXW6            |  8 MZXW6===         |  6 foo���
 4 foob         |  7 MZXW6YQ          |  8 MZXW6YQ=         |  5 foob       
 5 fooba        |  8 MZXW6YTB         |  8 MZXW6YTB         |  5 fooba       
 6 foobar       | 10 MZXW6YTBOI       | 16 MZXW6YTBOI====== | 12 foobarOOOOO
Data contains non-base32 characters at line 16
 1 f            |  2 my               |  8 my======         | %

(the final % is my shell indicating that the output did not end with a line feed).

The same test, with forced conversion to upper-case before decoding:

 1 f            |  2 MY               |  8 MY======         |  7 fOOOOO     
 2 fo           |  4 MZXQ             |  8 MZXQ====         |  6 fo����
 3 foo          |  5 MZXW6            |  8 MZXW6===         |  6 foo���
 4 foob         |  7 MZXW6YQ          |  8 MZXW6YQ=         |  5 foob       
 5 fooba        |  8 MZXW6YTB         |  8 MZXW6YTB         |  5 fooba       
 6 foobar       | 10 MZXW6YTBOI       | 16 MZXW6YTBOI====== | 12 foobarOOOOO
Data contains non-base32 characters at line 17
 1 f            |  2 my               |  8 MY======         | %

Once again, with forced conversion to lower-case:

 1 f            |  2 MY               |  8 my======         |  8 my======    
 2 fo           |  4 MZXQ             |  8 mzxq====         |  7 mz{����
 3 foo          |  5 MZXW6            |  8 mzxw6===         |  7 mz{�O
 4 foob         |  7 MZXW6YQ          |  8 mzxw6yq=         |  6 mz{��^
 5 fooba        |  8 MZXW6YTB         |  8 mzxw6ytb         |  6 mz{��]
 6 foobar       | 10 MZXW6YTBOI       | 16 mzxw6ytboi====== | 14 mz{��]���zzzzz
Data contains non-base32 characters at line 17
 1 f            |  2 my               |  8 my======         | %


We can patch it for you wholesale

…but remembering costs extra.

Every once in a while, I come across a patch someone sent me, or which I developed in response to a bug report I received, but it’s been weeks or months and I can’t for the life of me remember where it came from, or what it’s for.

Case in point—I’m typing this on a laptop I haven’t used in over two months, and one of the first things I found when I powered it on and opened Chrome was a tab with the following patch:

diff --git a/lib/libpam/modules/pam_login_access/pam_login_access.c b/lib/libpam/modules/pam_login_access/pam_login_access.c
index 945d5eb..b365aee 100644
--- a/lib/libpam/modules/pam_login_access/pam_login_access.c
+++ b/lib/libpam/modules/pam_login_access/pam_login_access.c
@@ -79,20 +79,23 @@ pam_sm_acct_mgmt(pam_handle_t *pamh, int flags __unused,

        gethostname(hostname, sizeof hostname);

-       if (rhost == NULL || *(const char *)rhost == '\0') {
+       if (tty != NULL && *(const char *)tty != '\0') {
                PAM_LOG("Checking login.access for user %s on tty %s",
                    (const char *)user, (const char *)tty);
                if (login_access(user, tty) != 0)
                        return (PAM_SUCCESS);
                PAM_VERBOSE_ERROR("%s is not allowed to log in on %s",
                    user, tty);
-       } else {
+       } else if (rhost != NULL && *(const char *)rhost != '\0') {
                PAM_LOG("Checking login.access for user %s from host %s",
                    (const char *)user, (const char *)rhost);
                if (login_access(user, rhost) != 0)
                        return (PAM_SUCCESS);
                PAM_VERBOSE_ERROR("%s is not allowed to log in from %s",
                    user, rhost);
+       } else {
+               PAM_VERBOSE_ERROR("neither host nor tty is set");
+               return (PAM_SUCCESS);
        }

        return (PAM_AUTH_ERR);

The patch fixes a long-standing bug in pam_login_access(8) (the code assumes that either PAM_TTY or PAM_RHOST is defined, and crashes if they are both NULL), but I only have the vaguest recollection of the conversation that led up to it. If you’re the author, please contact me so I can give proper credit when I commit it.

On testing, part III

I just got word of an embarrassing bug in OpenPAM Nummularia. The is_upper() macro, which is supposed to evaluate to true if its argument is an upper-case letter in the ASCII character set, only evaluates to true for the letter A:

#define is_upper(ch)                            \
        (ch >= 'A' && ch <= 'A')

This macro is never used directly, but it is referenced by is_letter(), which is referenced by is_pfcs(), which is used to validate paths and path-like strings, i.e. service names and module names or paths. As a consequence, OpenPAM does not support services or modules which contain an upper-case letter other than A. I never noticed because a) none of the services or modules in use on the systems I use to develop and test OpenPAM have upper-case letters in their names and b) there are no unit or regression tests for the character classification macros, nor for any code path that uses them (except openpam_readword(), which uses is_lws() and is_ws()).

The obvious course of action is to add unit tests for the character classification macros (r760) and then fix the bug (r761). In this case, complete coverage is easy to achieve since there are only 256 possible inputs for each predicate.

I have merged the fix to FreeBSD head (r262529 and r262530). Impatient users can fix their system by running the following commands:

% cd /usr/src/contrib/openpam
% svn diff -r758:762 svn:// | patch
% cd /usr/src/lib/libpam/libpam
% make && make install

Unsurprisingly, writing more unit tests for OpenPAM is moving up on my TODO list. Please contact me if you have the time and inclination to help out.


The Internet Society likes my work. I aim to please…

One of the things I did in the process of importing LDNS and Unbound into FreeBSD 10 was to change the default value for VerifyHostKeyDNS from “no” to “yes” in our OpenSSH when compiled with LDNS support (which can be turned off by adding WITHOUT_LDNS=YES to /etc/src.conf before buildworld).

The announcement the ISOC blog post refers to briefly explains my reasons for doing so:

I consider this a lesser evil than “ask” (aka “train the user to type ‘yes’ and hit enter”) and “no” (aka “train the user to type ‘yes’ and hit enter without even the benefit of a second opinion”).

There were objections to this (which I’m too lazy to dig up and quote) along the lines of:

  • Shouldn’t OpenSSH tell you that it found and used an SSHFP record?
  • Shouldn’t known_hosts entries take precedence over SSHFP records?
  • Shouldn’t OpenSSH store the key in known_hosts after verifying it against an SSHFP record?

The answer to all of the above is “yes, but…”

Here is how host key verification should work, ideally:

  1. Obtain host key from server
  2. Gather cached host keys from various sources (known_hosts, SSHFP, LDAP…)
  3. If we found one or more cached keys:
    1. Check for and warn about inconsistencies between these sources
    2. Check for and warn about inconsistencies between the cached key and what the server sent us
    3. If we got a match from a trusted source, continue connecting
    4. Inform the user of any matches from untrusted sources
  4. Display the key’s fingerprint
  5. Ask the user whether to:
    1. Store the server’s key for future reference and continue connecting
    2. Continue connecting without storing the key
    3. Disconnect

The only configuration required here is a list of trusted and untrusted sources, the difference being that a match or mismatch from a trusted source is normative while a match or mismatch from an untrusted source is merely informative.

Unfortunately, in OpenSSH, SSHFP support seems to have been grafted onto the existing logic rather than integrated into it. Here’s how it actually works:

  1. Obtain host key from server
  2. If VerifyHostKeyDNS is “yes” or “ask”, look for SSHFP records in DNS
  3. If an SSHFP record was found:
    1. If it matches the server’s key:
      1. If it has a valid DNSSEC signature and VerifyHostKeyDNS is “yes”, continue connecting
      2. Otherwise, set a flag to indicate that a matching SSHFP record was found
    2. Otherwise, warn about the mismatch
  4. Look for cached keys in the user and system host key files
  5. If we got a match from the host key files, continue connecting
  6. If we did not find anything in the host key files:
    1. If we found a matching SSHFP record, tell the user
    2. Ask the user whether to:
      1. Store the server’s key for future reference and continue connecting
      2. Disconnect
  7. If we found a matching revoked key in the host key files, warn the user and terminate
  8. If we found a different key in the host key files, warn the user and terminate

Part of the problem is that at the point where we tell the user that we found a matching SSHFP record, we no longer know whether it was signed. By switching the default for VerifyHostKeyDNS to “yes”, I’m basically saying that I trust DNSSEC more than I trust the average user’s ability to understand the information they’re given and make an informed decision.

DNS again: a clarification

There are a few points I’d like to clarify regarding my previous post about DNS in FreeBSD 10.

Some people were very quick to latch on to it and claim that “FreeBSD announced that Unbound and LDNS will replace BIND as the system’s DNS resolver” or words to that effect. This is, at best, a misunderstanding.

First of all: this is my personal blog. I speak only for myself, not for the FreeBSD project. I participated in the discussions and decision-making and did most of the work related to the switch, but I am neither a leader of nor a spokesperson for the project. As the current Security Officer, I sometimes speak on behalf of the project in security matters, but this is not one of those times. If this had been an official announcement, it would have been posted on the project’s website and / or on the freebsd-announce mailing list, not on my blog (or anybody else’s).

Second: BIND is a very mature, complex and versatile piece of software which implements pretty much every aspect of the DNS protocol and related standards, and is at the forefront of its field. It is developed and maintained by the Internet Systems Consortium, which is a major actor in the development and standardization of the DNS protocol. If you need an authoritative nameserver, or a caching resolver for a large and complex network, BIND is the natural choice. I use it myself, both privately and at work (note: I do not speak for the University of Oslo either). Most of the root servers run BIND. Unbound, on the other hand, is primarily a caching (recursing or forwarding) resolver. Although it has limited support for local zones (serving A, AAAA and PTR records only), which is mostly useful for overlaying information about machines on your RFC1918 SOHO network on top of the data served by a “real” nameserver, it is not capable of running as a full-fledged authoritative nameserver.

Third: due to its key role in Internet infrastructure, BIND is one of the most intensely scrutinized pieces of software. A tiny flaw in BIND can have major consequences for the Internet as a whole. The number and frequency of BIND-related security advisories are more a reflection of its importance than of its quality. Frankly, if you want to talk about code quality and BIND vs LDNS / Unbound… let’s just say that people who live in glass houses shouldn’t throw stones.

Fourth: FreeBSD has shipped with BIND for many years, but BIND was never FreeBSD’s “system resolver” except in the loosest definition of the term. Most applications that need to perform DNS lookups use either gethostbyname(3) or, preferably, getaddrinfo(3), which are implemented entirely in libc (with code that goes back at least 25 years); I haven’t touched that code, and I don’t plan to. A few applications—notably host(1) and dig(1), which are actually part of BIND—need more fine-grained control over the queries they send and more details about the answers they receive, and use the BIND lightweight resolver library (lwres(3)); these applications have either been replaced by LDNS-based equivalents or deprecated. It is, of course, entirely possible to set up BIND as a local caching resolver; in fact, the default configuration we ship is perfectly suited to that purpose. It’s a little bit more work if you want it to forward rather than recurse—especially on a laptop or a broadband connection without a fixed IP, because you have to set up the resolvconf(8) magic yourself—but it’s not rocket surgery.
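The libc interface in question is easy to demonstrate; a minimal getaddrinfo(3) sketch (resolving localhost so it works without network access):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	struct addrinfo hints, *res, *p;
	char addr[INET6_ADDRSTRLEN];

	memset(&hints, 0, sizeof hints);
	hints.ai_family = AF_UNSPEC;	/* both IPv4 and IPv6 */
	hints.ai_socktype = SOCK_STREAM;
	if (getaddrinfo("localhost", NULL, &hints, &res) != 0)
		return (1);
	for (p = res; p != NULL; p = p->ai_next) {
		/* print each address the resolver returned */
		void *a = p->ai_family == AF_INET ?
		    (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr :
		    (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
		inet_ntop(p->ai_family, a, addr, sizeof addr);
		printf("%s\n", addr);
	}
	freeaddrinfo(res);
	return (0);
}
```

An application using this API neither knows nor cares whether the answer came from /etc/hosts, BIND, or Unbound; that is the whole point.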

Fifth: a major part of the decision to remove BIND, which I stupidly forgot to mention, is that BIND 10 has been completely rewritten in C++ and Python. Importing Python into the base system is out of the question, so we would have been forced to switch sooner or later: at the earliest when users started complaining that we shipped an outdated version, and at the latest when the ISC discontinued BIND 9 entirely.

Sixth: Unbound is not a long-term solution. We needed a caching resolver for FreeBSD 10 and decided to use Unbound because it’s fairly mature and we know it well, but it is a stopgap measure to address the DNSSEC issue while we work on a long-term solution. For FreeBSD 11, we see DNS as only one of several services provided by the Capsicum service daemon called Casper; no decision has yet been made as to which validating resolver library Casper will use as its back-end. In any case, we will continue to provide both authoritative nameserver daemons and caching resolver daemons, such as BIND, NSD, Unbound, dnsmasq, etc. through the ports system, which can provide better support, access to newer versions, and faster updates than we can in the base system.

Finally, I should add that the ISC has supported the FreeBSD project for many years, both directly and indirectly. Although I haven’t been directly involved in that part of the project, I’m very grateful for their contribution and bear no ill will against them, and I was very unhappy to see my previous post misconstrued as an attack against BIND and the ISC.

DNS in FreeBSD 10

Yesterday, I wrote about the local caching resolver we now have in FreeBSD 10. I’ve fielded quite a few questions about it (in email and on IRC), and I realized that although this has been discussed and planned for a long time, most people outside the 50 or so developers who attended one or both of the last two Cambridge summits (201208 and 201308) were not aware of it, and may not understand the motivation.

There are two parts to this. The first is that BIND is a support headache with frequent security advisories and a lifecycle that aligns poorly with our release schedule, so we end up having to support FreeBSD releases containing a discontinued version of BIND. The second part is the rapidly increasing adoption of DNSSEC, which requires a caching DNSSEC-aware resolver both for performance reasons (DNSSEC validation is time-consuming) and to avoid having to implement DNSSEC validation in the libc resolver.

We could have solved the DNSSEC issue by configuring BIND as a local caching resolver, but for the reasons mentioned above, we really want to remove BIND from the base system; hence the adoption of a lightweight caching resolver. An additional benefit of importing LDNS (which is a prerequisite for Unbound) is that OpenSSH can now validate SSHFP records.

Note that the dns/unbound port is not going away, and that users who want to run Unbound as a caching resolver for an entire network rather than just a single machine have the option of either moving their configuration into /var/unbound/unbound.conf, or running the base and port versions side-by-side. This should not be a problem as long as the port version doesn’t try to listen on 127.0.0.1 or ::1.

I’d like to add that since my previous post on the subject, and with the help of readers, developers and users, I have identified and corrected several issues with the initial commit:

  • /etc/unbound is now a symlink to /var/unbound. My original intention was to have the configuration files in /etc/unbound and the root anchor, unbound-control keys etc. in /var/unbound, but the daemon needs to access both locations at run-time, not just on start-up, so they must all be inside the chroot. Running the daemon un-chrooted is, of course, out of the question.
  • The init script ordering has been amended so the local_unbound service now starts before most (hopefully all) services that need functioning DNS.
  • resolvconf(8) is now blocked from updating /etc/resolv.conf to avoid failing over from the DNSSEC-aware local resolver to a potentially non-DNSSEC-aware remote resolver in the event of a request returning an invalid record.
  • The configure command line and date / time are no longer included in the binary.

Finally, I just flipped the switch so that BIND is now disabled by default and the LDNS utilities are enabled. The BIND_UTILS and LDNS_UTILS build options are mutually exclusive; in hindsight, I should probably have built and installed the new host(1) as ldns-host(1) so both options could have been enabled at the same time. We don’t yet have a dig(1) wrapper for drill(1), so host(1) is the only actual conflict.
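For those who build their own world, the flip amounts to something like the following in src.conf(5); the option names here are from memory and may not match the commit exactly:

```
# /etc/src.conf (hedged sketch)
WITHOUT_BIND_UTILS=yes
WITH_LDNS_UTILS=yes
```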

Local caching resolver in FreeBSD 10

As of a few hours ago, all it takes to set up a local caching resolver in FreeBSD 10 is:

# echo local_unbound_enable=yes >>/etc/rc.conf
# service local_unbound start

Yes, it really is that simple—and it works fine with DHCP, too. Hold my beer and watch this:

# pgrep -lf dhclient
1316 dhclient: vtnet0
1265 dhclient: vtnet0 [priv]
# cat /etc/resolv.conf
# Generated by resolvconf
…
# time host …
… is an alias for …
… has address …
… has IPv6 address 2001:1900:2254:206a::50:0
… mail is handled by 0 …
        0.02 real         0.00 user         0.01 sys

As you can see, we’re running DHCP on a VirtIO network interface. Let’s work our magic:

# echo local_unbound_enable=yes >>/etc/rc.conf
# service local_unbound start
Performing initial setup.
Extracting forwarders from /etc/resolv.conf.
/var/unbound/forward.conf created
/var/unbound/unbound.conf created
/etc/resolvconf.conf created
original /etc/resolv.conf saved as /etc/resolv.conf.20130923.075319
Starting local_unbound.

And presto:

# pgrep -lf unbound
3799 /usr/sbin/unbound -c/var/unbound/unbound.conf
# cat /var/unbound/unbound.conf 
# Generated by local-unbound-setup
server:
        username: unbound
        directory: /var/unbound
        chroot: /var/unbound
        pidfile: /var/run/
        auto-trust-anchor-file: /var/unbound/root.key

include: /var/unbound/forward.conf
# cat /var/unbound/forward.conf
# Generated by local-unbound-setup
forward-zone:
        name: .
# cat /etc/resolv.conf
# Generated by resolvconf
nameserver 127.0.0.1

options edns0

We can see the cache at work; the first request takes significantly longer than before, but the second is served from cache:

# time host …
… is an alias for …
… has address …
… has IPv6 address 2001:1900:2254:206a::50:0
… mail is handled by 0 …
        0.07 real         0.01 user         0.00 sys
# time host …
… is an alias for …
… has address …
… has IPv6 address 2001:1900:2254:206a::50:0
… mail is handled by 0 …
        0.01 real         0.00 user         0.00 sys

Finally, let’s see how this interacts with DHCP:

# resolvconf -u
# cat /etc/resolv.conf
# Generated by resolvconf
options edns0

# cat /var/unbound/forward.conf 
# Generated by resolvconf

forward-zone:
        name: ""

forward-zone:
        name: "."

Note that resolvconf(8) re-added the entry. It doesn’t really matter, as long as the local resolver comes first.

[ETA: it does matter—see Jakob Schlyter’s comment below and my reply.]

[ETA: see my followup about the motivation for importing Unbound.]

Growing a VirtualBox disk with ZFS on it

I have a VirtualBox VM on a Windows host with a 32 GB disk. That disk is partitioned with GPT and has four partitions: a boot partition, a swap partition, a smallish UFS root partition, and a ZFS partition. I need more space in the latter, so let’s grow it.

The first step is to shut down the VM and resize the virtual disk. This cannot be done in the GUI—we have to use the command-line utility:

C:\Users\des\VirtualBox VMs\FreeBSD\CrashBSD 9>"\Program Files\Oracle\VirtualBox\VBoxManage.exe" showhdinfo "CrashBSD 9.vdi"
UUID:                 4a088148-72ef-4737-aae6-0a39e05aee06
Accessible:           yes
Logical size:         32768 MBytes
Current size on disk: 14484 MBytes
Type:                 normal (base)
Storage format:       VDI
Format variant:       dynamic default
In use by VMs:        CrashBSD 9 (UUID: 06bbe99d-9118-4c11-b29b-4ffd175ad06c)
Location:             C:\Users\des\VirtualBox VMs\FreeBSD\CrashBSD 9\CrashBSD 9.vdi

C:\Users\des\VirtualBox VMs\FreeBSD\CrashBSD 9>"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd "CrashBSD 9.vdi" --resize 65536

C:\Users\des\VirtualBox VMs\FreeBSD\CrashBSD 9>"\Program Files\Oracle\VirtualBox\VBoxManage.exe" showhdinfo "CrashBSD 9.vdi"
UUID:                 4a088148-72ef-4737-aae6-0a39e05aee06
Accessible:           yes
Logical size:         65536 MBytes
Current size on disk: 14485 MBytes
Type:                 normal (base)
Storage format:       VDI
Format variant:       dynamic default
In use by VMs:        CrashBSD 9 (UUID: 06bbe99d-9118-4c11-b29b-4ffd175ad06c)
Location:             C:\Users\des\VirtualBox VMs\FreeBSD\CrashBSD 9\CrashBSD 9.vdi

Next, we boot the VM into single-user mode. It will repeatedly complain about the secondary GPT table, which is supposed to be located at the end of the disk but is now in the middle, since we doubled the size of the disk:

GEOM: ada0: the secondary GPT header is not in the last LBA.
# gpart list ada0
Geom name: ada0
modified: false
state: CORRUPT
fwheads: 16
fwsectors: 63
last: 67108830
first: 34
entries: 128
scheme: GPT

Thankfully, this is trivial to fix. In fact, this exact use case is mentioned in the gpart(8) man page:

# gpart recover ada0
ada0 recovered.
# gpart list ada0
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 134217694
first: 34
entries: 128
scheme: GPT

Next, we resize the ZFS partition to fill all available space:

# gpart list ada0
4. Name: ada0p4
   Mediasize: 25769803776 (24G)
# gpart resize -i 4 ada0
ada0p4 resized
# gpart list ada0
4. Name: ada0p4
   Mediasize: 60129442304 (56G)

Let’s see what our pool looks like now:

# zpool import crash
# zfs list crash
NAME   USED  AVAIL  REFER  MOUNTPOINT
crash  14.2G  9.31G    31K  /

Hmm, no cigar. The pool hasn’t grown because the underlying vdev hasn’t automatically expanded to fill the resized partition. That’s easy to fix, though:

# zpool online -e crash ada0p4

And there we go:

# zfs list crash
NAME   USED  AVAIL  REFER  MOUNTPOINT
crash  14.2G  40.8G    31K  /
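As an aside, if I expected to resize this disk again, I could spare myself the manual step by setting the pool’s autoexpand property ahead of time, so the vdev grows automatically whenever the partition does; a hedged sketch:

```
# zpool set autoexpand=on crash
```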

Challenges in Identity Management and Authentication

This was my presentation at the 2012 EuroBSDCon in Warsaw, Poland. I’ve been meaning to write more extensively on this subject, but never got around to it. I just watched through the video twice, and it was a lot less cringe-inducing than I expected (especially when you consider that I was sick and sleep-deprived when I gave it).

Towards the end, I got a question about Apple’s security framework. In my answer, I referred to it as CDDL. That was a slip of the tongue; I was referring to CDSA, which is actually an Open Group specification that Apple implemented and open-sourced. Furthermore, CDSA does not do everything I said it does. However, Apple built their Security Services Framework (described in their Authentication, Authorization and Permissions Guide and various other documents) on top of CDSA; so the combination of CDSA and what Apple added on top does everything from key management to authentication and authorization.

My presentation at the 2013 EuroBSDCon in St Julians, Malta will continue where I left off last year, outlining a concrete solution based on the principles set forth in the second part of last year’s presentation (starting at 32:06).

pkgng without ports: addenda

Two things I forgot to mention in my previous post:

  1. In order to use OpenPAM from svn instead of the version that comes with FreeBSD, you need to copy security/pam_mod_misc.h and pam_debug_log.c into the OpenPAM source tree and adjust the Makefiles accordingly, otherwise FreeBSD’s service modules won’t run and you won’t be able to log in. I don’t plan to include this code in OpenPAM; I’d rather overhaul FreeBSD’s modules so they no longer need it.
  2. What I actually wanted to do, but didn’t because I needed a solution there and then, was patch automake itself to add a pkgng target so gmake pkgng creates a package with no additional input required (except possibly a +DESC file).

Creating pkgng packages without ports

Lately, I’ve been working on expanding the scope of OpenPAM to more than just a PAM library. Specifically, I’ve added support (in a separate library) for the OATH HOTP and TOTP one-time password algorithms. In the long term, I also intend to implement PSKC and OCRA, the ultimate goal being full compliance with the OATH client and server certification profiles. Part of the reason I’m doing this is that my employer needs it, which is why the University of Oslo holds the copyright on most of the OATH code, but it is also something I’ve been wanting to do for a long time, and which I believe will greatly benefit FreeBSD.

This is a large undertaking, though. I’m not comfortable rolling a new OpenPAM release with the OATH code at this time—and I probably won’t be for quite a while. I’ve created a “nooath” branch and may roll a release from that branch in order to get the many other OpenPAM improvements into FreeBSD 10.0, but that’s a different story.

In the meantime, I need a way to test my code; not just on a development machine, but also on semi-production systems such as my desktop and my home router. Once it’s tested, I also need a way to deploy it on mission-critical systems. All these systems have one thing in common: they are binary installations, maintained with freebsd-update rather than built from source. So I need a way to install a newer version of OpenPAM without disturbing the base version.

The easy answer is to install in /usr/local:

# ./configure --prefix=/usr/local
# gmake
# gmake install

We also need to make sure that everything that uses PAM uses the new version (which is 100% backward compatible with older applications and modules). Conveniently, for historical reasons, OpenPAM installs its library under a different soname than the one FreeBSD 8.x and newer install, so it’s a simple matter of mapping one to the other in libmap.conf(5):

# echo "" >>/etc/libmap.conf

That doesn’t address deployment, though. I don’t want to have to compile OpenPAM on every machine, and I already have a mechanism for distributing and updating software across multiple machines: my two pkgng repositories. Let’s take advantage of them by creating pkgng packages for OpenPAM.

I could create an OpenPAM port and build packages from there. There is even precedent for creating a port that obtains sources directly from a repository rather than from a release tarball, so I could test individual revisions. I would however need a copy of the ports tree—it used to be possible to build a port independently of the ports tree, but that time is long gone. Another drawback is that I would have to jump through hoops to create packages from a modified source tree (for pre-commit testing). Finally, I would not be able to create a package of a specific version without first installing that version locally. So creating a port is a less-than-ideal solution.

What I did instead was write a script which installs OpenPAM into a temporary directory and creates the package from there. Well—nearly: there is a small hitch due to a bug in pkg which I expect will be fixed in the near future.

Let’s take a look at some of the juicier parts of the script.

First, we need to determine the package name and version. The name is taken directly from the build system’s metadata, and so is the version—at first. The thing is, I’d rather not have to continuously update the hardcoded version, so @PACKAGE_VERSION@ is normally “trunk” (or “nooath”) until I roll a release. Therefore, if @PACKAGE_VERSION@ is a word rather than a number, I use Subversion’s svnversion utility to retrieve the current revision number. If I can successfully extract a number from the output, I append it to the original value.

if ! expr "$version" : "[0-9]\{1,\}$" >/dev/null ; then
    svnversion="$(svnversion 2>&1)"
    svnversion=$(expr "$svnversion" : '\([0-9][0-9]*\)[A-Z]\{0,1\}$')
    if [ -n "$svnversion" ] ; then
        version="$version-$svnversion"
    fi
fi

For reasons which will become clear later, we also need to know which version of pkg is installed, as well as the ABI:

pkgver=$(pkg query %v pkg)
[ -n "$pkgver" ] || error "Unable to determine pkgng version."
pkgabi=$(pkg -vv | awk '$1 == "ABI:" { print $2 }')
[ -n "$pkgabi" ] || error "Unable to determine package ABI."

Next, we create a temporary directory into which we will install the software, so we can create a package without touching the host system. The traps ensure that the temporary directory is deleted when the script exits or is interrupted (SIGINT). Two separate traps are needed, because if we install the same trap for both EXIT and INT, it will run twice in the SIGINT case: once due to the SIGINT itself and once because the script exits. Clearing the trap from within the trap handler doesn’t work, because traps are local to the block in which they were set.

info "Creating the temporary directory."
tmproot=$(mktemp -d "${TMPDIR:-/tmp}/$package-$version.XXXXXX")
[ -n "$tmproot" -a -d "$tmproot" ] || \
    error "Unable to create the temporary directory."
trap "exit 1" INT
trap "info Deleting the temporary directory. ; rm -rf '$tmproot'" EXIT
set -e

We can now install our software into the temporary directory ($make evaluates to either make or gmake with a few options to reduce the amount of noise GNU make generates):

info "Installing into the temporary directory."
$make install DESTDIR="$tmproot"

We need a manifest for the package. Most of it can be automatically generated from information the build system already provides; the only hardcoded OpenPAM-specific information in my script is the comment and the description. The latter can easily be avoided: if no description is provided, pkg create will look for a +DESC file in the same directory as the manifest and use its contents instead. The former is not so easily avoided; there is no autoconf macro for a short description of the package, and while PACKAGE_COMMENT="foo"; AC_SUBST(PACKAGE_COMMENT) would work, it feels sort of dirty. I’ll probably end up writing a custom macro that does just that.

Anyway, we start out with a stub:

info "Generating the stub manifest."
cat >"$manifest" <<EOF
name: $package
version: $version
origin: local/$package
comment: [...]
arch: $pkgabi
prefix: @prefix@
categories: local, security
EOF

The rest of the manifest consists of a list of files to be included in the package. We generate it automatically from the contents of our temporary directory, which shouldn’t contain anything that we don’t want to include.

info "Generating the file list."
    echo "files:"
    find "$tmproot" -type f | while read file ; do
        [ "$file" = "$manifest" ] && continue
        mode=$(stat -f%p "$file" | cut -c 3-)
        echo "  $file: { uname: root, gname: wheel, perm: $mode }"

We hardcode the ownership as root:wheel, which is correct 99% of the time. This allows us to run the entire package creation process as an unprivileged user. Or it would, except that pkg contains some rather advanced logic to determine whether a package installs shared libraries or depends on shared libraries provided by other packages, and that logic doesn’t take into account the case where the package root is not /. Setting LD_LIBRARY_PATH doesn’t help, since pkg doesn’t use the run-time linker, but reads and interprets the ELF headers itself. I haven’t yet managed to untangle that logic to the point where I can figure out where to insert the package root so it will find the correct libraries. The only workaround is to install the package to /, which requires root privileges.

info "Packaging."
if [ "$pkgver" \< "1.1.5" ] ; then
    info "pkg 1.1.4 or older detected."
    yesno "We must now install to /.  Proceed?" || error "Chicken."
    $make install
    pkg create -m "$tmproot" -o "$builddir"
    pkg create -r "$tmproot" -m "$tmproot" -o "$builddir"

Note that I have optimistically predicted that the bug will be fixed in pkg 1.1.5…
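One caveat of my own about that version check: test(1)’s \&lt;-style string operator compares lexicographically, which breaks down as soon as a version component reaches two digits. expr(1) demonstrates the problem:

```shell
# Lexicographic comparison claims 1.1.10 sorts before 1.1.5,
# which is wrong as a version comparison; expr prints 1 for "true".
result=$(expr "1.1.10" \< "1.1.5")
echo "$result"   # 1
```

A proper version comparison would need to split on the dots and compare the components numerically.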

This script should be easily adaptable to any other project that uses GNU autotools and doesn’t deviate too far from automake’s standard installation procedure. As previously mentioned, the only part of the script specific to OpenPAM is the stub manifest, and that can easily be changed.

Managing your own pkgng repository

[edit 2013-08-05: fixed a typo in the two command lines used to create the repo definition files, spotted by swills@]

Say you have your own poudriere and your own pkgng repo. You’ve set up Apache to point at your poudriere’s package directory:

<VirtualHost *>
  ServerAdmin [email protected]
  DocumentRoot /poudriere/data/packages
  <Directory "/poudriere/data">
    Options +Indexes +SymLinksIfOwnerMatch
    IndexOptions +FancyIndexing +FoldersFirst
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

The 91amd64-default and 91i386-default directories are so named by poudriere because they contain the output of the 91amd64 and 91i386 jails, respectively, based on the default ports tree. These are details which you don’t necessarily want your clients to know (or need to know), so you create symlinks which match your clients’ ABIs:

# cd /poudriere/data/packages
# ln -s 91amd64-default freebsd:9:x86:64
# ln -s 91i386-default freebsd:9:x86:32

All you need to do on the client side now is:

# cat >/usr/local/etc/pkg.conf <<EOF
…
EOF

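As a hedged sketch of what that minimal pkg.conf might contain under pkg 1.1 (the hostname is invented, and the key name is from memory):

```
# /usr/local/etc/pkg.conf (hedged sketch)
PACKAGESITE: http://pkg.example.net/${ABI}
```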
Now, let’s think about this for a while. Every time you install a new machine, you have to copy or type in that pkg.conf, and while this is a pretty minimal example, your real pkg.conf could be much larger: you could have multiple repos, multiple servers with failover, etc. How about we fetch it from a central location?

# fetch -o/usr/local/etc/

But what if it changes? Well, why not use the package system itself to distribute and maintain it?

We want to distribute our pkg.conf as a package, and since we want pkg to update it when it changes, we need to place it in a repo. We can’t stick it in the FreeBSD ports tree, and while it is possible to sneak it into the local copy of the ports tree that poudriere builds from, it’s not very convenient. So what we do is create an additional pkgng repo with only one package, which contains two pkg.conf files: one for our real pkgng repo, and one for the repo that contains our configuration package.

First, we create the contents of our package:

% mkdir des-repos
% cd des-repos
% mkdir -p usr/local/etc/pkg/repos
% cat >usr/local/etc/pkg/repos/des-packages.conf <<EOF
…
EOF
% cat >usr/local/etc/pkg/repos/des-repos.conf <<EOF
…
EOF

Now we need a manifest:

% cat >+MANIFEST <<EOF
name: des-repos
version: 20130715
origin: local/des-repos
comment: Repository definitions for …
arch:
maintainer: [email protected]
prefix: /usr/local
desc: Repository definitions for …
categories: local, ports-mgmt
deps:
  pkg: { name: pkg, origin: ports-mgmt/pkg, version: 1.1 }
files:
  /usr/local/etc/pkg/repos/des-packages.conf: { uname: root, gname: wheel, perm: 0644 }
  /usr/local/etc/pkg/repos/des-repos.conf: { uname: root, gname: wheel, perm: 0644 }
EOF

Note that arch is intentionally left blank, as this package is architecture-neutral.

Once we have contents and a manifest, we can create the package file:

% pkg create -r $PWD -m $PWD
% tar tf des-repos-20130715.txz 

All that remains (on the server) is to create the repo:

# mkdir /poudriere/data/packages/repos
# cp des-repos-20130715.txz /poudriere/data/packages/repos
# pkg repo /poudriere/data/packages/repos
# cd /poudriere/data/packages
# ln -s repos/des-repos-20130715.txz des-repos.txz

Then, on each client (presumably including the server itself):

# rm /var/db/pkg/repo*sqlite
# rm /usr/local/etc/pkg.conf
# pkg add
# pkg update


Benchmark: WD Red NAS

My wife is in the market for large, cheap drives with decent performance to store sequencing data, so I ordered and tested a 2 TB Western Digital Red NAS (WD20EFRX—no link because the manufacturer’s site is broken at the moment). The Red series seems to be a halfway point between the WD Green and WD Black series: like the Green series, they have 4096-byte sectors and IntelliPower (i.e. variable rpm), but they are designed for 24×7 operation and seem to have far more consistent performance—although not quite on par with the Black series.

The big news is that this is the first Advanced Format disk I’ve seen that correctly reports its physical sector size:

protocol              ATA/ATAPI-9 SATA 3.x
device model          WDC WD20EFRX-68AX9N0
firmware revision     80.00A80
serial number         WD-WMC301592199
WWN                   50014ee6adf1fbaf
cylinders             16383
heads                 16
sectors/track         63
sector size           logical 512, physical 4096, offset 0
LBA supported         268435455 sectors
LBA48 supported       3907029168 sectors
PIO supported         PIO4
DMA supported         WDMA2 UDMA6

As shown below, random-access performance is decent, but not mind-blowing—with the important caveat that I tested it on a machine that only has SATA I. I will update the numbers if and when I get the chance to test it on a machine with a SATA II or SATA III controller.

   count    size  offset    step        msec     tps    kBps

   32768    4096       0   16384       10222    3205   12822
   32768    4096     512   16384       33900     966    3866
   32768    4096    1024   16384       35417     925    3700
   32768    4096    2048   16384       36207     905    3620

   16384    8192       0   32768        8298    1974   15794
   16384    8192     512   32768       31238     524    4195
   16384    8192    1024   32768       31666     517    4139
   16384    8192    2048   32768       31547     519    4154
   16384    8192    4096   32768        8037    2038   16307

    8192   16384       0   65536        6471    1265   20252
    8192   16384     512   65536       27815     294    4712
    8192   16384    1024   65536       27201     301    4818
    8192   16384    2048   65536       27607     296    4747
    8192   16384    4096   65536        6722    1218   19497
    8192   16384    8192   65536        6396    1280   20489

    4096   32768       0  131072        5199     787   25210
    4096   32768     512  131072       22564     181    5808
    4096   32768    1024  131072       23349     175    5613
    4096   32768    2048  131072       20816     196    6296
    4096   32768    4096  131072        5540     739   23655
    4096   32768    8192  131072        5307     771   24693
    4096   32768   16384  131072        5303     772   24716
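As a sanity check on the table (my own arithmetic, not part of the benchmark tool): tps is simply count divided by elapsed time, and kBps follows from tps times the block size. For the first row:

```shell
# First row of the table: 32768 reads of 4096 bytes in 10222 ms.
count=32768 size=4096 msec=10222
tps=$((count * 1000 / msec))                   # transactions per second
kBps=$((count * size / 1024 * 1000 / msec))    # kilobytes per second
echo "$tps $kBps"   # 3205 12822
```

Both results match the tps and kBps columns printed by the benchmark.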

Sequential performance is also pretty decent:

# dd if=/dev/zero of=/dev/ada2 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 8.881374 secs (120898164 bytes/sec)


Amazon‘s recent acquisition of Liquavista has engendered speculation about a color Kindle. It also made me go “I told you so”.

There is an oft-repeated but apocryphal story about USPTO Commissioner Henry Ellsworth (some say Charles Duell) suggesting that the Patent Office should be shut down because “everything that can be invented has been invented”. While neither of these men ever made any such claim, a similar sentiment is surprisingly common even among technically literate people.

A while ago, I got into a discussion about emissive (CRT, LED) vs transmissive (LCD) vs reflective (eInk) display technologies. My position was that a) emissive and transmissive displays are stopgap technologies and b) high-resolution, low-latency, full-color reflective displays will be commercially available within a few years. This was immediately dismissed because, and I paraphrase, “there’s no way you’ll get the ink beads to turn fast enough”.

Chew on this for a bit.

Imagine that it’s 1880 and I tell you that “within a few years, it will be possible to travel a hundred kilometers in mere hours”, and you answer “no horse could possibly run that fast”.

Now imagine the same scenario in 1890, a few years after automobiles became commercially available.

Now imagine the same scenario in 1900, when high-end automobiles were capable of sustaining speeds of 50 km/h and above.

In the first scenario, I am looking at experiments and proofs-of-concept and hoping, fingers crossed, that a breakthrough is imminent. In the second, I am extrapolating from currently available technology and recent advances. In the third, I am simply predicting that today’s bleeding-edge technology will soon become widely available and affordable.

When I had that conversation about display technologies, black-and-white electrowetting displays were already in production, and, although I did not know this at the time, Liquavista had started shipping color EWD devkits to OEMs. They are expected to enter production this year.

I told you so.

Hurtigruten considers itself above Norwegian law

For several years, at irregular intervals, Hurtigruten has been sending advertising to my Gmail address. So far this year, I have received four so-called newsletters from them.

I have never traveled with Hurtigruten. Nor have I ever asked them for a quote, a brochure or anything else that could be construed as a wish to receive advertising. I have never contacted them at all, except for the times I have asked them to stop sending me advertising.

This is a clear violation of §15 of the Norwegian Marketing Control Act:

 In the course of trade, it is prohibited, without the recipient’s prior consent, to direct marketing communications at natural persons using electronic communication methods which permit individual communication, such as electronic mail, fax or automated calling systems (voice machines). […] Nor does the requirement of prior consent under the first paragraph apply to marketing by electronic mail in existing customer relationships where the trader has received the customer’s electronic address in connection with a sale.

Hurtigruten evidently considers itself above Norwegian law.

I filed a complaint with the Consumer Ombudsman a good two months ago, but have not received a reply. Nor has my complaint been entered in the public records, which is in itself a violation of §10 of the Freedom of Information Act and the associated regulation no. 1119 of 17 October 2008. Sic transit gloria mundi; the Consumer Ombudsman used to be good at following up spam complaints, but about a year and a half or two years ago they stopped processing them “due to the large volume of cases”.


1996. The Spice Girls rock (pop?) the world with Wannabe. Will Smith kicks alien butt in Independence Day. DVDs become commercially available. Scientists clone the first mammal. eBay opens. Three important standards are either released or reshaped into their current form: MIME, Unicode and IPv6. 17 years later, a shocking amount of software still does not support these standards.