Category Archives: testing

PC-BSD 10.1.2-RC1, Lumina Desktop 0.8.4 Released!

The PC-BSD team is pleased to announce the availability of RC1 images for the upcoming quarterly 10.1.2 release. Please test these images out and report any issues found on our bug tracker.

What else is new in PC-BSD 10.1.2? A new version of the Lumina Desktop Environment! Users who stick to the “Production” branch of packages will find that Lumina has improved enormously since the last quarterly update (10.1.1), so I highly recommend giving it a try. The release notes for this new version are listed at the bottom of this announcement for those of you who have been tracking its development; please try it out and let us know what you think!

 

PC-BSD 10.1.2 Notable Changes

  • New PersonaCrypt Utility
    • Allows moving all of a user's $HOME directory to an encrypted USB drive. This drive can be connected at login and used across different systems
    • Stealth Mode — Allows login to a blank $HOME directory, which is encrypted with a one-time GELI key. This $HOME directory is then discarded at logout, or rendered unreadable after a reboot
  • Tor mode — Switches the firewall to a transparent proxy, blocking all traffic except what is routed through Tor.
  • Migrated to the IPFW firewall in preparation for enabling VIMAGE in 10.2
  • Added sound configuration via the first boot utility
  • Support for encrypted iSCSI backups via Life-Preserver, including support for bare-metal restores via installer media
  • New HTML handbook, updated via normal package updates
  • Media Center support allowing direct login to Kodi and PlexHomeTheater for the 10-foot user experience
  • Switched to the new AppCafe interface, with remote access via a web browser
  • Improvements to the online updater, along with nested GRUB menus for Boot Environments
  • Migrated all ports to LibreSSL instead of OpenSSL
  • Switched from NTP to OpenNTPD
  • Lumina desktop 0.8.4
  • Chromium 42.0.2311.90
  • Firefox 37.0.2
  • NVIDIA Driver 346.47
  • Pkg 1.5.1

Package Availability:

  • Users currently running the EDGE package repo can now update their packages via the updater GUI or “pc-updatemanager” utility to be brought up to date with RC1. Updates for users on the 10.1.1 / PRODUCTION repo will be available once 10.1.2-RELEASE is announced.

Getting media

Reporting Bugs

  • Found a bug in PC-BSD 10.1.2 or Lumina 0.8.4? Please report it (with as much detail as possible) to our bugs database. https://bugs.pcbsd.org

 

Lumina Desktop changes since version 0.8.3:

Panel Improvements:

  • Add mouse tracking support
  • Add support for variable-length panels (a percentage of the screen edge length).
  • Add support for pinning the panel to a particular location on the screen edge (either corner, or centered)
  • Automatically re-scale the panel size if the monitor used in the previous session had a different screen resolution.
  • For hidden panels, have 1% of the panel size be visible on the screen while it is “hidden” (rather than a hard-coded pixel size). This is better for high-resolution screens.
  • Remove the restriction that panels be on opposite screen edges.

New options/usage for lumina-search:

  • Easily change file/dir search preferences on a temporary basis
  • New command-line flags for starting searches instantly
  • Search functionality integrated into the Insight file manager. The Ctrl-F keyboard shortcut or the “Search” menu option will start a search for a file/directory with the current directory as the starting point.
  • A “Search” button has been added to the home directory browser in the user menu. This will allow the user to easily start searching for a file/dir within the selected directory.

New “Favorites” system backend:

  • This new backend is much faster and more reliable than the old system of sym-links.
  • Your favorites should be automatically converted to the new format when you log into the new version of Lumina.

New Utility: lumina-fileinfo

  • This utility allows the user to view basic file information, such as timestamps, owner/group info, file size, and read/write permissions.
  • If the file is a XDG desktop shortcut (that the user has permission to modify), this utility also provides the ability to make changes to that shortcut.
  • This can easily be used by right-clicking on files in the desktop view plugin or within the Insight file manager and selecting the “Properties” option.
  • A big thank you to contributor William (william-os4y on GitHub) for writing this utility!!

Other Random Improvements:

  • Better application recommendations for files/URLs (especially for web browsers or email clients).
  • Major cleanup of XCB library usage.
  • Hardware-brightness controls now used for PC-BSD by default (if supported by the system hardware).
  • Putting the system into the suspend state is now supported for PC-BSD/Debian.
  • New clock display formats.
  • A large number of session cleanup improvements
  • A large number of session initialization improvements (including resetting the user’s previous screen brightness and audio volume settings).
  • New default keyboard shortcuts for tiling the open windows on the screen (new user configurations only)
  • Better support for the URL input format when required by an application.
  • Make the user’s “log out” window appear much faster when activated.

Errata:

  • There is a known bug in Lumina 0.8.4 regarding “unlocked” desktop plugins. The close/maximize buttons for the plugin are unresponsive when using Qt 5.4.1, preventing the user from easily removing/maximizing a desktop plugin. We are still looking into this, but at the moment it appears to be a bug in Qt itself. As a temporary workaround, you can simply right-click on the titlebar for the unlocked plugin and select close/maximize from the menu instead.

On standards (and testing)

RFC 4648 defines the Base16, Base32 and Base64 encodings. Base16 (aka hex) and Base64 are widely known and used, but Base32 is an odd duck. It is rarely used, and there are several incompatible variants, of which the RFC acknowledges two: [A-Z2-7] and [0-9A-V].

One of the uses of Base32, and the reason for my interest in it, is in Google’s otpauth URI scheme for exchanging HOTP and TOTP keys. I needed a Base32 codec for my OATH library, so when a cursory search for a lightweight permissive-licensed implementation failed to turn up anything, I wrote my own.
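
The encoding itself is simple: each group of 5 input bytes (40 bits) becomes 8 characters from a 32-character alphabet, and a final partial group is filled out with '=' padding. Here is a minimal sketch in C of the RFC 4648 rules (an illustration, not the actual code from my library):

#include <stdio.h>

/* RFC 4648 Base32 alphabet; the "base32hex" variant uses [0-9A-V] instead. */
static const char b32[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

/*
 * Encode len bytes from in into out.  The caller must provide room for
 * 8 output characters per 5 input bytes (rounded up) plus a terminating NUL.
 */
static void
base32_encode(const unsigned char *in, size_t len, char *out)
{
        unsigned long long buf;
        size_t i, n;

        while (len > 0) {
                n = len < 5 ? len : 5;
                buf = 0;
                for (i = 0; i < 5; i++)
                        buf = buf << 8 | (i < n ? in[i] : 0);
                /*
                 * n input bytes yield ceil(n * 8 / 5) significant characters;
                 * the rest of the 8-character group is '=' padding.
                 */
                for (i = 0; i < 8; i++)
                        *out++ = i < (n * 8 + 4) / 5 ?
                            b32[(buf >> (35 - 5 * i)) & 0x1f] : '=';
                in += n;
                len -= n;
        }
        *out = '\0';
}

int
main(void)
{
        char out[32];

        base32_encode((const unsigned char *)"foobar", 6, out);
        printf("%s\n", out);    /* prints MZXW6YTBOI====== */
        return 0;
}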

My OATH implementation is currently deployed in an environment in which OTP keys for new users (or new OTP keys for existing users) are generated by the primary provisioning system, which passes them on to a smaller provisioning system in charge of firewalls and authentication (codenamed Nexus), which passes them on to a RADIUS server, which uses my code to validate user responses. When we transitioned from generating OTP keys manually to having the provisioning system generate them for us, we ran into trouble: some keys worked, others didn’t. It turned out to be a combination of factors:

  • The keys generated by the provisioning system were syntactically correct but out of spec. Most importantly, their length was not always a multiple of 40 bits, so their Base32 representation included padding (see the short illustration after this list).
  • Nexus performed only cursory validation of the keys it received from the provisioning system, so it accepted the out-of-spec keys.
  • The Google Authenticator app (at least the Android version, but possibly the iOS version as well) does not handle padded keys well. If I recall correctly, the original Android app rejected them outright; the current version simply rounds them down. (Why don’t the Android system libraries provide Base32 encoding and decoding?)
  • My Base32 decoder didn’t handle padding correctly either… and of course, I only had tests for the encoder, because I was in a rush when I wrote it and I didn’t need decoding until later. Yes, this is stupid. Yes, I fixed it and now have 100% condition/decision coverage (thanks to BullseyeCoverage, with a caveat: 100% C/D coverage of table-driven code does not guarantee correctness, because it only checks the code, not the table).
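
To make the padding issue concrete (the key lengths below are purely illustrative, not our actual provisioning data), the amount of '=' padding follows directly from the key length modulo 5 bytes:

#include <stdio.h>

/*
 * Illustrative only: how many '=' characters an RFC 4648 Base32 encoding
 * needs for a given input length.  A length that is a multiple of 5 bytes
 * (40 bits), such as a 20-byte key, needs none; anything else needs 1, 3,
 * 4 or 6.
 */
int
main(void)
{
        static const int pad[5] = { 0, 6, 4, 3, 1 };
        int len;

        for (len = 16; len <= 20; len++)
                printf("%2d-byte key -> %d padding character(s)\n",
                    len, pad[len % 5]);
        return 0;
}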

Having fixed both the provisioning system and the OATH verification tool, I decided to add stronger input validation to Nexus. The easiest way to validate a Base32-encoded key, I figured, is to decode it. And wouldn’t you know, there are not one but two Perl implementations of Base32!

Unfortunately, they’re both broken, and have been for years.

  • MIME::Base32 (the latest release is dated 2010-08-25, but the code hasn’t changed since the original release on 2003-12-10) does not generate padding, and decodes it into garbage. In addition, it does not accept lower-case input.
  • Convert::Base32 (the latest release is dated 2012-04-22, but the code hasn’t changed since the original release on 2001-07-17) does not generate padding, and dies when it encounters what it calls “non-base32 characters”. In addition, while it accepts lower-case input (which is commendable, even though the RFC specifies an upper-case alphabet), it also generates lower-case output, which is wrong.

Both packages ship with tests. MIME::Base32’s tests simply encode a string, decode the result, and check that they get the original string back.

Convert::Base32’s tests are more complex and include length and padding tests, but they define padding as the lower, unused bits of the last non-padding character in the output.

MIME::Base32 references RFC 3548 (the predecessor to RFC 4648) but does not come close to implementing it correctly. Convert::Base32 predates the RFC and conforms to the old RACE Internet draft, which is small consolation since RACE was never standardized and was eventually replaced by Punycode.

I wrote a script which runs the RFC 4648 test vectors through either or both of MIME::Base32 and Convert::Base32, depending on which are available. The first two columns are the input and output to and from the encoder, and the last two are the input and output to and from the decoder. Note that the script adds the correct amount of padding before feeding the encoded string back to the decoder.

MIME::Base32
 1 f            |  2 MY               |  8 MY======         |  7 fOOOOO     
 2 fo           |  4 MZXQ             |  8 MZXQ====         |  6 fo����
 3 foo          |  5 MZXW6            |  8 MZXW6===         |  6 foo���
 4 foob         |  7 MZXW6YQ          |  8 MZXW6YQ=         |  5 foob       
 5 fooba        |  8 MZXW6YTB         |  8 MZXW6YTB         |  5 fooba       
 6 foobar       | 10 MZXW6YTBOI       | 16 MZXW6YTBOI====== | 12 foobarOOOOO
Convert::Base32
Data contains non-base32 characters at base32-test.pl line 16
 1 f            |  2 my               |  8 my======         | %

(the final % is my shell indicating that the output did not end with a line feed).

The same test, with forced conversion to upper-case before decoding:

MIME::Base32
 1 f            |  2 MY               |  8 MY======         |  7 fOOOOO     
 2 fo           |  4 MZXQ             |  8 MZXQ====         |  6 fo����
 3 foo          |  5 MZXW6            |  8 MZXW6===         |  6 foo���
 4 foob         |  7 MZXW6YQ          |  8 MZXW6YQ=         |  5 foob       
 5 fooba        |  8 MZXW6YTB         |  8 MZXW6YTB         |  5 fooba       
 6 foobar       | 10 MZXW6YTBOI       | 16 MZXW6YTBOI====== | 12 foobarOOOOO
Convert::Base32
Data contains non-base32 characters at base32-test.pl line 17
 1 f            |  2 my               |  8 MY======         | %

Once again, with forced conversion to lower-case:

MIME::Base32
 1 f            |  2 MY               |  8 my======         |  8 my======    
 2 fo           |  4 MZXQ             |  8 mzxq====         |  7 mz{����
 3 foo          |  5 MZXW6            |  8 mzxw6===         |  7 mz{�O
 4 foob         |  7 MZXW6YQ          |  8 mzxw6yq=         |  6 mz{��^
 5 fooba        |  8 MZXW6YTB         |  8 mzxw6ytb         |  6 mz{��]
 6 foobar       | 10 MZXW6YTBOI       | 16 mzxw6ytboi====== | 14 mz{��]���zzzzz
Convert::Base32
Data contains non-base32 characters at base32-test.pl line 17
 1 f            |  2 my               |  8 my======         | %

sigh

On testing, part III

I just got word of an embarrassing bug in OpenPAM Nummularia. The is_upper() macro, which is supposed to evaluate to true if its argument is an upper-case letter in the ASCII character set, only evaluates to true for the letter A:

#define is_upper(ch)                            \
        (ch >= 'A' && ch <= 'A')

This macro is never used directly, but it is referenced by is_letter(), which is referenced by is_pfcs(), which is used to validate paths and path-like strings, i.e. service names and module names or paths. As a consequence, OpenPAM does not support services or modules which contain an upper-case letter other than A. I never noticed because a) none of the services or modules in use on the systems I use to develop and test OpenPAM have upper-case letters in their names and b) there are no unit or regression tests for the character classification macros, nor for any code path that uses them (except openpam_readword(), which uses is_lws() and is_ws()).

The obvious course of action is to add unit tests for the character classification macros (r760) and then fix the bug (r761). In this case, complete coverage is easy to achieve since there are only 256 possible inputs for each predicate.
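
Since each predicate takes an 8-bit input, an exhaustive test is just a loop over all 256 values; a minimal sketch (not the actual test code added in r760) looks like this:

#include <ctype.h>
#include <stdio.h>

/* The corrected macro; the broken version compared against 'A' at both ends. */
#define is_upper(ch) \
        ((ch) >= 'A' && (ch) <= 'Z')

int
main(void)
{
        int ch, failures = 0;

        /*
         * Exhaustive check: compare the macro against isupper(), which in
         * the default C locale is true exactly for 'A'..'Z', for all 256
         * possible 8-bit inputs.
         */
        for (ch = 0; ch < 256; ch++) {
                if (!is_upper(ch) != !isupper(ch)) {
                        printf("is_upper(0x%02x) disagrees with isupper()\n",
                            ch);
                        failures++;
                }
        }
        printf("%d failure(s)\n", failures);
        return (failures != 0);
}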

I have merged the fix to FreeBSD head (r262529 and r262530). Impatient users can fix their system by running the following commands:

% cd /usr/src/contrib/openpam
% svn diff -r758:762 svn://svn.openpam.org/openpam/trunk | patch
% cd /usr/src/lib/libpam/libpam
% make && make install

Unsurprisingly, writing more unit tests for OpenPAM is moving up on my TODO list. Please contact me if you have the time and inclination to help out.

FreeBSD Development Snapshots – Testers Wanted

I am pleased to announce the re-availability of FreeBSD development snapshots provided by the FreeBSD Project.

As with any development branch, these snapshots are not intended for use on production systems. However, we do encourage testing on non-production systems as much as possible.

Users interested in testing the development branches are also encouraged to subscribe to the freebsd-snapshots@ mailing list, where new snapshot availability will be announced, including the corresponding installation image checksums and any additional noteworthy information about the images.

The list subscription URL is: http://lists.freebsd.org/mailman/listinfo/freebsd-snapshots

Snapshots may be downloaded from the corresponding architecture subdirectory over FTP: ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/

Problems, bug reports, or regression reports should be reported through the GNATS PR system or the appropriate mailing list, such as -current@ or -stable@ .

This directory contains FreeBSD installation snapshots for the head/ and stable/ branches. The snapshot installation media are organized in the same hierarchy as the FreeBSD release installers, for example:

snapshots/i386/i386/ISO-IMAGES/
snapshots/amd64/amd64/ISO-IMAGES/

The FTP structure is also available for bootonly.iso installers.

These snapshots do not include a package distribution, and do not include the release notes documentation.

Snapshots are expected to be generated weekly. Snapshot retention is expected to be the current week and three prior weeks. The FTP directory will always contain the distribution sets for the most recent snapshot.

Snapshots are named as:

FreeBSD-{BRANCH}-{ARCH}-{DATE}-{SVNREV}

where BRANCH is the official branch name (such as 10.0-CURRENT or 9.2-STABLE), ARCH is the target architecture the snapshot was built for, DATE is the day the snapshot was built in YYYYMMDD format, and SVNREV is the svn revision number at the point in time the snapshot was generated.
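
For example, a snapshot of head for amd64 would be named along these lines (the date and revision here are purely hypothetical):

FreeBSD-10.0-CURRENT-amd64-20131007-r256123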

The CHECKSUM.MD5 and CHECKSUM.SHA256 files are suffixed with the date the snapshots were generated.

In general, it can be expected that all snapshots will be built from the same subversion revision, so as to avoid inconsistencies between images built on the same day.

Be sure to check the mailing lists for those architectures (e.g., the freebsd-sparc64 mailing list for the sparc64 architecture) before taking one of those snapshots. The specific testing purpose that snapshot was generated for should be mentioned there. The snapshot ISOs in those architecture directories may not be suitable for general use.

Those who wish to keep up-to-date on the latest snapshot availability are advised to subscribe to the freebsd-snapshots@ mailing list, where new snapshots will be announced, including the image checksums and any notable information regarding a particular snapshot set.

First PC-BSD 9.0 Alpha Snapshot Available for Testing

Kris Moore has just announced that the first testing snapshot is available for download (both 32 and 64 bit versions). You can help us make 9.0 an awesome release by trying out the snapshots (there will be many between now and the first beta some time next spring) and providing feedback about any bugs you find. Since these are testing snapshots, it is recommended that you try them out on a spare system or using a virtual environment such as VirtualBox. If you're planning on trying out all of the new desktop environments, you should use a virtual machine of at least 2

Unit Testing Uncovers Bugs

As part of the ‘utility’ library in one of the projects we are using at work, I wrote two small wrappers around strtol() and strtoul(). These two functions support a much more useful error reporting mechanism than the plain atoi() and atol() functions, but getting the error checking right in all the places they are called is a bit boring and cumbersome. This is probably part of the reason why there are still programs out there that use atoi() and atol().

For example, here’s how I usually check for errors in calls to the strtol() and strtoul() functions:

char *endp;
long x;

endp = NULL;
errno = 0;
x = strtol(str, &endp, base);
if (errno != 0 || (endp != NULL && *endp != '\0' &&
    (isdigit(*endp) != 0 || isspace(*endp) == 0))) {
        /* Return 'endp' if possible. */
        return -1;
}
/* At this point 'x' contains the parsed value. */

This is a lot of code for parsing a single long value. For one or two input strings it may be OK to repeat it wherever numeric parsing is needed, but for more than a couple of input strings it quickly becomes tedious to repeat the same boilerplate again and again.

When I set out to write the wrapper code for strtol() and strtoul(), my goal was to make it very easy to parse input strings. A typical call to the parsing function should be a single line of code; it should be very clear whether the parsing attempt succeeded or failed; it should be possible to get both the parsing result and the numeric value that was parsed; and it should be possible to get hold of the last character we managed to parse, so that strings like "100 200 300" can be parsed efficiently without manually finding where the textual representation of one number ends and the next begins.

That's quite a list of goals for a single function, but the function call style I envisioned looked something like this:

long value;
char *endp = NULL;

if (parselong("0x12345678", &endp, 16, &value) != 0) {
        err(1, "parse error");
}

The return value of parselong() makes it very clear if the parsing attempt succeeded or failed. A return value of zero means success. Any other return value means failure.

The parsed value is returned through the &value pointer. If the parsing attempt fails, parselong() can leave the value unmodified, so a failed attempt to parse an input string does not inflict spurious side effects on the calling code.

If the parsing attempt has succeeded, &endp may be set to point right after the last character that was successfully parsed. This is actually part of the documented interface of strtol() and strtoul(), so it comes for free by wrapping these functions.

Finally, parsing a long value is a single function call. It is a lot easier to call the parsing function without having to repeat all the error checking boilerplate at each calling site. It's even easy to "chain" multiple parsing attempts using a style similar to:

long value1, value2, value3;

if (parselong("0x12345678", NULL, 16, &value1) != 0 ||
    parselong("0xdeadbeef", NULL, 16, &value2) != 0 ||
    parselong("0xf00fc0de", NULL, 16, &value3) != 0)
        err(1, "parse error");

Not that this is a good style of reporting errors, but it is possible, just because it's now easy to parse a value and check if it was parsed correctly with a single line of code.
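
As an illustration of the endp mechanism, here is how one might walk through a string like "100 200 300" using the same hypothetical parselong() interface (a fragment in the style of the examples above):

long value;
const char *str = "100 200 300";
char *endp = NULL;

/* Parse consecutive numbers, resuming after the last parsed character. */
while (parselong(str, &endp, 10, &value) == 0) {
        printf("parsed %ld\n", value);
        if (endp == NULL || *endp == '\0')
                break;
        str = endp;     /* continue from where the last parse stopped */
}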

The Unit Tests Fail on Linux

Several months passed after I wrote the initial parselong() and parseulong() functions. In the meantime I had to port the program using them to other platforms. The initial target platform was FreeBSD.

The bug described below lurked for a few months in the initial parselong() code, until I had to port the function to another platform and started writing unit tests to verify that it worked the way I expected on all systems. In retrospect I should have started by writing the unit tests; I can say that now because I finally got around to doing it, and they served a very useful purpose.

When I had to port my 'utility' functions to work on several Linux versions too, I wrote a collection of unit tests for parselong() and parseulong(). The testing framework I used was CUnit because of the way it nicely integrates with plain ANSI C code.

One of the test functions I wrote was supposed to check for failures returned by parselong() for invalid input strings. The bulk of the test function was something like this:

#include "CUnit/Basic.h"

void
test_parselong_failures(void)
{
        long value = TEST_VALUE_ULONG_MAGIC;

        CU_ASSERT_EQUAL(parselong("xxx", NULL, 0, &value), -1);
        CU_ASSERT_EQUAL(value, TEST_VALUE_ULONG_MAGIC);

        CU_ASSERT_EQUAL(parselong("+", NULL, 0, &value), -1);
        CU_ASSERT_EQUAL(value, TEST_VALUE_ULONG_MAGIC);

        CU_ASSERT_EQUAL(parselong("-", NULL, 0, &value), -1);
        CU_ASSERT_EQUAL(value, TEST_VALUE_ULONG_MAGIC);
        ...
        CU_PASS("parselong() failures for invalid values look ok");
}

Running the unit tests on FreeBSD seemed to work fine; after all, the initial version of the parselong() function had already been manually tested with the same input strings.

When I tried running the same test cases on Linux, though, they failed. Apparently parselong() was not detecting that strtol() had failed to parse the input string "xxx" or any of the other input strings tested in the test_parselong_failures() function!

The Bug Uncovered

Adding a couple of debugging printf() calls to parselong() itself showed that on Linux parselong() was returning zero for invalid input strings when strtol() could parse no character at all from the input string.

The initial version of the error checking code for strtol() was similar to:

char *endp;
long x;

endp = NULL;
errno = 0;
x = strtol(str, &endp, base);
if (errno != 0 || (endp != NULL && endp != str && *endp != '\0' &&
    (isdigit(*endp) != 0 || isspace(*endp) == 0))) {
        /* Return 'endp' if possible. */
        return -1;
}
/* At this point 'x' contains the parsed value. */

The endp != str part of the error checking code assumes that strtol() will move the 'endp' pointer at least one character past the start of the input string. Apparently on Linux this is not the case: the Linux strtol() does not move 'endp' at all if it cannot parse even a single character of the input string. This seems to be the correct behavior for strtol(), but the assumption lurked in the original parselong() code until I ran the function's unit tests on Debian GNU/Linux.

The CUnit driver program that I used to run the test cases failed on Linux with error messages like:

  1. test_parselong.c:63  - CU_ASSERT_EQUAL(parselong("xxx", NULL, 0, &value),-1)
  2. test_parselong.c:64  - CU_ASSERT_EQUAL(value, TEST_VALUE_ULONG_MAGIC)
  3. test_parselong.c:66  - CU_ASSERT_EQUAL(parselong("+", NULL, 0, &value), -1)
  4. test_parselong.c:67  - CU_ASSERT_EQUAL(value, TEST_VALUE_ULONG_MAGIC)

The culprit for these test case failures was the assumption that Linux would set errno to a non-zero value for an invalid input string... Apparently, it doesn't. The following small program prints different output on BSD vs. Linux:

$ cat -n strtest.c
     1  #include <errno.h>
     2  #include <limits.h>
     3  #include <stdio.h>
     4  #include <stdlib.h>
     5
     6  int
     7  main(void)
     8  {
     9          long value;
    10          const char *input = "xxx";
    11          char *endp = NULL;
    12
    13          errno = 0;
    14          value = strtol(input, &endp, 0);
    15          printf("str = %p = \"%s\"\n", input, input);
    16          printf("endp = %p \"%s\"\n", endp, endp ? endp : "(null)");
    17          if (endp != NULL) {
    18                  printf("endp[0] = '%c' (%d 0%03o #x%02x)\n",
    19                    *endp, *endp, *endp, *endp);
    20          }
    21          printf("errno = %d\n", errno);
    22          printf("value = %ld 0%lo #x%lx\n", value, value, value);
    23          return EXIT_SUCCESS;
    24  }

On FreeBSD the output of this program includes an errno value of EINVAL:

freebsd$ cc strtest.c
freebsd$ ./a.out
str = 0x8048604 = "xxx"
endp = 0x8048604 "xxx"
endp[0] = 'x' (120 0170 #x78)
errno = 22
value = 0 00 #x0
freebsd$ fgrep 22 /usr/include/sys/errno.h
#define EINVAL          22              /* Invalid argument */
freebsd$

On a recent update of Debian GNU/Linux "testing" the output is slightly different:

debian$ cc strtest.c
debian$ ./a.out
str = 0x8048630 = "xxx"
endp = 0x8048630 "xxx"
endp[0] = 'x' (120 0170 #x78)
errno = 0
value = 0 00 #x0
debian$

This means that the only indication we have that the Linux version of strtol() failed to parse some of the input text is the value of 'endp': it's the same as the input string. The error-checking code of the original parselong() wrapper was:

        x = strtol(str, &endp, base);
        if (errno != 0 || (endp != NULL && endp != str && *endp != '\0' &&
            (isdigit(*endp) != 0 || isspace(*endp) == 0)))
                error(...);

But on Linux both of the following are true:

  • errno is not set to a non-zero value.
  • If strtol() could not parse even one input character, endp == str.

This caused parselong() to bypass the error checking code and return a 'valid' result even though the Linux version of strtol() had failed. Hence the failure of the unit tests.

Removing the (endp != str) conditional expression means that the error checking code works equally well on Linux and BSD. The BSD version of strtol() sets errno to a non-zero value, triggering the first part of the error checking code. The Linux version leaves endp pointing at the unparsed text, which is non-null and fails the '\0' check later on. The new parselong() function is slightly shorter, and it passes the unit tests on both BSD and Linux.
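
For completeness, here is a sketch of what the corrected wrapper can look like (not the exact code from the 'utility' library, but it implements the error checks described above):

#include <ctype.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Parse a long value from 'str'.  Returns 0 on success and -1 on failure.
 * On success the result is stored in '*valp' and, if 'endp' is not NULL,
 * a pointer to the first unparsed character is stored in '*endp'.
 */
int
parselong(const char *str, char **endp, int base, long *valp)
{
        char *ep;
        long x;

        ep = NULL;
        errno = 0;
        x = strtol(str, &ep, base);
        /*
         * Fail if strtol() reported an error, or if the conversion stopped
         * at a character that is neither NUL nor whitespace.  The 'ep != str'
         * check is gone: on Linux, strtol() leaves ep == str and errno
         * unchanged when it cannot parse anything, so it is the check on
         * *ep that rejects such input.
         */
        if (errno != 0 || (ep != NULL && *ep != '\0' &&
            (isdigit((unsigned char)*ep) != 0 ||
            isspace((unsigned char)*ep) == 0)))
                return -1;
        if (endp != NULL)
                *endp = ep;
        *valp = x;
        return 0;
}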

Conclusions

There is something thrilling about fixing bugs by removing code. This bug was one of the few cases I've come across during the last couple of months where removing code was an improvement. There's probably a joke about "writing too much code" and the bug-resolving debt each line of new code introduces. I think I'll leave that for another time though.

The most important conclusion of today's bug hunting session was that unit testing really does work and pays back in real, quite tangible ways. Had I not spent a bit of time thinking about what the parselong() and parseulong() functions are supposed to do, when they are supposed to fail, and how they are allowed to fail, I would not have spent the time to write test cases for them. Had I not written the test cases, I wouldn't have noticed that a test case fails on Linux. Had I not seen that, I wouldn't have realized that the two functions sometimes returned completely bogus results on Linux systems.

The central place the unit testing code has in this story is an important and serious lesson for me:

KEEP TESTING!
