Today’s “wow, this really works!” moment is brought to you by: NFS, sshfs (from FUSE) and Samba.
There’s an NFS file server in my office, from which I mount stuff into my home directory on my workstation. There’s a small FreeBSD server in my living room which, among other things, serves Samba to my Windows desktop machine. Using sshfs on the home server, I mounted my home directory from work as a subdirectory of my local user, and I’m accessing it from my Windows desktop over Samba.
Before the bytes hit the drives on the server, here’s the path they must take:
[Home desktop, Windows] -- Samba -- [Home server, FreeBSD] -- sshfs -- [Work desktop, Linux] -- NFS -- [Work server, FreeBSD]
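The chain above boils down to three mounts. Roughly like this — the hostnames, paths and the Samba share name here are made up for illustration, not the actual setup:

```shell
# On the work desktop (Linux): mount the office NFS server's export
# into the local home directory.
mount -t nfs workserver:/export/home/user /home/user/work

# On the home server (FreeBSD): pull the work desktop's home
# directory in over ssh with sshfs (FUSE).
sshfs user@workdesktop:/home/user /home/user/workhome

# Finally, re-export that subdirectory to the Windows desktop as a
# Samba share, e.g. in smb.conf:
#   [workhome]
#     path = /home/user/workhome
#     read only = no
```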
And it works. Really. I’m editing OpenOffice files on my Work server right now.
Of course it should work – all of the individual components in the chain are tested and known to work, so there’s practically no real concern. But seeing it all in operation made me think about how many standards, interoperability specs and how much engineering went into making this possible, especially since the links between the components vary widely: ADSL, Ethernet of various speeds, and I’m sure there’s still ATM somewhere in the telco’s infrastructure. The number of different operating systems the bytes pass through (counting the “embedded” ones on routers and similar equipment) is probably huge.
We live in great times.
(Of course, I won’t try anything that depends on file locking.)
The only problem is that sshfs effectively hangs the system when the IP address changes on the ADSL side: file system lookups just block.
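I haven’t verified that it cures this particular hang, but sshfs does have a reconnect option, and OpenSSH’s keepalive settings make a dead link fail after a bounded time instead of blocking indefinitely. Something along these lines may help (hostname and paths hypothetical):

```shell
# -o reconnect: transparently re-establish the SSH connection if it drops.
# ServerAliveInterval / ServerAliveCountMax: declare the link dead after
# about 15 seconds of silence instead of hanging lookups forever.
sshfs -o reconnect \
      -o ServerAliveInterval=5 -o ServerAliveCountMax=3 \
      user@workdesktop:/home/user /home/user/workhome
```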
It was a really great conference! I have so many good impressions it’s hard to sort them all out and write them down. Instead, here’s a treat for the geeky-minded.
In accordance with the specification expressed here:
… I’ve created a device driver that implements the functionality in the kernel. In honour of the operating system by which this work was inspired, I name the driver random.debian. The kernel module creates a device entry (/dev/random.debian) which is an infinite source of random data with entropy compatible with the above specification. The source code tarball for the Debian-like random data source is of course available under the BSD license. It will work on any recent version of FreeBSD.
As much as I would like to have thought of this first, the idea was actually put out by PHK or Robert Watson while we were waiting for dinner, so that part of the credit goes to them.
This DevSummit+BSDCan was very fun and educational, and I’ll definitely try to be here again in the coming years.
Background: I’m developing something that should eventually become a high performance network server with a high transactions-per-second rate (basically a database cache). Currently I’m experimenting with various ways of using SMP facilities for the server (thread usage, binding, etc.). A big problem is that, while I temporarily have a server on which to test it, I don’t have a client machine that could push the server to its limits. I currently have a “dumb” multithreaded benchmark client, spawning N threads (N is 40 in these tests), each of which is a blocking network client (i.e. one thread per connection). This setup, when run over local Unix sockets on the server itself, can achieve 125,000 trans/s, but I believe the result would be much better if the client didn’t also tax the server’s CPU.
Marko Zec helped me with that, temporarily providing a machine which dual-boots 7.x and 4.x with his VIMAGE patches, as well as without the patches. Originally I just used the 7.0 system, and achieved something like 62,000 trans/s, which is too low for me. At his insistence I booted 4.11 and ported the client-side benchmark to it. Without any significant modifications except those needed for the differences between gcc 2.9x and gcc 4, the same client code rocketed to 81,000 trans/s! This is using libc_r, meaning the whole 40-threaded thing is visible to the kernel as a single process (4.x doesn’t have kernel support for multithreading)! This number is still too low, and I’ll probably need to find several machines working in parallel to overtax the server (which will be very hard), but just the raw difference between 4.x and 7.x is staggering. The network card is a bge gigabit card, directly connected to the server via a crossover cable.
On the bright side, VIMAGE patches don’t influence the performance noticeably.