Programming odds and ends — InfiniBand, RDMA, and low-latency networking for now.


gcsfs, a FUSE driver for Google Cloud Storage, now available

gcsfs is a FUSE driver for Google Cloud Storage with much the same functionality as s3fuse. It isn’t quite a fork: for the moment the two drivers are very similar, but a separate project makes it easier to develop Google Cloud Storage-specific features in the future. Some key features:

  • Binaries for Debian 7, RHEL/CentOS 6, Ubuntu, and OS X 10.9.
  • Compatible with GCS web console.
  • Caches file/directory metadata for improved performance.
  • Verifies file consistency on upload/download.
  • Optional file content encryption.
  • Supports multi-part (resumable) uploads.
  • Allows setting Cache-Control header (via extended attributes on files).
  • Maps file extensions to known MIME types (to set Content-Type header).
  • (Mostly) POSIX compliant.

Binaries (and source packages) are in gcsfs-downloads (on Google Drive).

Users of CentOS images on Google Compute Engine can install pre-built binaries by downloading gcsfs-0.15-1.x86_64.rpm, then using yum:

[user@gce-instance-centos ~]$ sudo yum localinstall gcsfs-0.15-1.x86_64.rpm
[user@gce-instance-centos ~]$ gcsfs -V
gcsfs, 0.15 (r792M), FUSE driver for cloud object storage services
enabled services: aws, fvs, google-storage
[user@gce-instance-centos ~]$

For Debian, download gcsfs_0.15-1_amd64.deb, then:

user@gce-instance-debian:~$ sudo dpkg -i gcsfs_0.15-1_amd64.deb
... a bunch of dependency errors ...
user@gce-instance-debian:~$ sudo apt-get install -f
user@gce-instance-debian:~$ gcsfs -V
gcsfs, 0.15 (r792M), FUSE driver for cloud object storage services
enabled services: aws, fvs, google-storage

s3fuse 0.15 released

s3fuse 0.15 is now available. This release contains fairly minor fixes and packaging updates. Highlights from the change log:

Removed libxml++ dependency.
libxml++ was pulling in many unnecessary package dependencies and wasn’t really providing much added value over libxml2, so as of 0.15 it’s gone. As a bonus, it’s no longer necessary to enable the EPEL repository on RHEL/CentOS before installing s3fuse.

Fixed libcurl init/cleanup bug.
0.14 and earlier versions had a bug that sometimes prevented establishment of SSL connections if s3fuse ran in daemonized (background) mode. 0.15 addresses this.

Binaries for RHEL/CentOS, Debian, and OS X, as well as source archives, are now hosted in s3fuse-downloads (on Google Drive).

Ubuntu packages are at the s3fuse PPA.

Sample Code Repository

Updated code for the RDMA tutorials is now hosted on GitHub:

Special thanks to SeongJae P. for the Makefile fixes.

Importing VHD Disk Images into XCP

Earlier today I needed to move a VHD disk image from VirtualBox into XCP, but couldn’t find an obvious way to do this. The xe vdi-import command only seems to work with raw disk images, and I didn’t want to convert my VHD into a raw image only to have XCP convert it back to a VHD. What I ended up doing was creating a new VDI in an NFS storage repository, then overwriting the image.

The main caveat to this approach is that it only works with VHD images, and only with NFS repositories (at least that’s what I gather from the documentation).

Also, don’t blame me if something goes wrong and you accidentally wipe out your mission-critical disk image, or your XCP host, etc.

So here’s what I did:

  1. Get the SR ID: SR_ID=$(xe sr-list type=nfs --minimal)
  2. Get the associated PBD: PBD_ID=$(xe sr-param-list uuid=$SR_ID | grep 'PBDs' | sed -e 's/.*: //')
  3. Get the NFS path: xe pbd-param-list uuid=$PBD_ID | grep 'device-config'
  4. Create an image, specifying a virtual size at least as large as the VHD: VDI_ID=$(xe vdi-create sr-uuid=$SR_ID name-label='new image' type=user virtual-size=<image-size>)
  5. Mount the NFS export locally: mount <nfs-server>:<nfs-path> /mnt
  6. Replace the VHD: cp <your-vhd-file> /mnt/$SR_ID/$VDI_ID.vhd
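For convenience, the steps above can be rolled into a single shell function. This is a sketch, not a tested tool: the xe invocations are taken from the steps above, but the function name is mine, and it assumes a single NFS SR whose export you have already mounted (steps 3 and 5).

```sh
# import_vhd <vhd-file> <virtual-size> <nfs-mountpoint>
# Create a VDI in the NFS SR, then overwrite its backing file with the
# VHD. Assumes one NFS SR and that its export is mounted at <nfs-mountpoint>.
import_vhd() {
  local vhd=$1 size=$2 mnt=$3
  local sr vdi
  sr=$(xe sr-list type=nfs --minimal)             # step 1: SR ID
  vdi=$(xe vdi-create sr-uuid="$sr" name-label='new image' \
        type=user virtual-size="$size")           # step 4: create the VDI
  cp "$vhd" "$mnt/$sr/$vdi.vhd"                   # step 6: replace the VHD
}
```

For example: import_vhd my-disk.vhd 20GiB /mnt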

s3fuse 0.14 released

Over the weekend I posted version 0.14 of s3fuse. Highlights from the change log:

NEW: Multipart (resumable) uploads for Google Cloud Storage.
With this most recent release, Google Cloud Storage support is on par with S3 support. Multipart/resumable uploads and downloads work reliably, and performance is similar. Many thanks to Eric J. at Google for all the help improving GCS support in 0.14.

NEW: Support for FV/S.
With the help of Hiroyuki K. at IIJ, s3fuse now supports IIJ’s FV/S cloud storage service.

NEW: Set file content type by examining extension.
s3fuse will now set the HTTP Content-Type header according to the file extension using the system MIME map.

NEW: Set Cache-Control header with extended attribute.
If a Cache-Control header is returned for an object, it will be available in the s3fuse_cache_control extended attribute. Setting the extended attribute (with, say, setfattr) will update the Cache-Control header for the object.
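As a sketch, here are two hypothetical helper functions wrapping this. The attribute name comes from the release notes; setfattr and getfattr are from the attr package, and the file must of course live on an s3fuse-mounted bucket.

```sh
# set_cache_control <file> <value>: update the object's Cache-Control
# header by writing the s3fuse_cache_control extended attribute.
set_cache_control() {
  setfattr -n s3fuse_cache_control -v "$2" "$1"
}

# get_cache_control <file>: read the current Cache-Control value back.
get_cache_control() {
  getfattr --only-values -n s3fuse_cache_control "$1"
}
```

For example: set_cache_control /mnt/bucket/logo.png "max-age=3600"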

NEW: Allow creation of “special” files (FIFOs, devices, etc.).
mkfifo and mknod now work in s3fuse-mounted buckets, with semantics similar to NFS-mounted filesystems (in particular: FIFOs do not form IPC channels between hosts).

Various POSIX compliance fixes.
From the README:

s3fuse is mostly POSIX compliant, except that:

  • Last-access time (atime) is not recorded.
  • Hard links are not supported.

Some notes on testing:

All tests should pass, except:

  chown/00.t: 141, 145, 149, 153
    [FUSE doesn't call chown when uid == -1 and gid == -1]

  mkdir/00.t: 30
  mkfifo/00.t: 30
  open/00.t: 30
    [atime is not recorded]

  rename/00.t: 7-9, 11, 13-14, 27-29, 31, 33-34
  unlink/00.t: 15-17, 20-22, 51-53
    [hard links are not supported]

As with 0.13, Ubuntu packages are at the s3fuse PPA.

s3fuse 0.13 released

I’ve just uploaded version 0.13 of s3fuse, my FUSE driver for Amazon S3 (and Google Cloud Storage) buckets. 0.13 is a near-complete rewrite of s3fuse, and brings a few new features and vastly improved (I hope) robustness. From the change log:

NEW: File encryption
Operates at the file level and encrypts the contents of files with a key (or set of keys) that you control. See the README.

NEW: Glacier restore requests
Allows for the restoration of files auto-archived to Glacier. See this AWS blog post and the README for more information.

NEW: OS X binaries
A disk image (.dmg) is now available on the downloads page containing pre-built OS X binaries (built on OS X 10.8.2, so compatibility may be limited).

NEW: Size-limited object cache
The object attribute cache now has a fixed size. This addresses the memory utilization issues reported by Gregory C. and others. The default maximum size is 1,000 objects, but this can be changed by tweaking the max_objects_in_cache configuration option.

IMPORTANT: Removed auth_data configuration option
For AWS, use aws_secret_file instead. For Google Storage, use gs_token_file. This will require a change to existing configuration files.

IMPORTANT: Default configuration file locations
s3fuse now searches for a configuration file in ~/.s3fuse/s3fuse.conf before trying $sysconfdir/s3fuse.conf (usually /etc/s3fuse.conf or /usr/local/etc/s3fuse.conf).
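Putting the options from this release together, a minimal configuration file for Google Cloud Storage might look like the following. Only the option names mentioned in this post are confirmed; service and bucket_name are my guesses at the remaining keys, so check the README before copying this.

```ini
# hypothetical sketch of ~/.s3fuse/s3fuse.conf
service = google-storage                     # guess: backend selector
bucket_name = my-bucket                      # guess: bucket to mount
gs_token_file = /home/me/.s3fuse/gs.token    # replaces auth_data
max_objects_in_cache = 1000                  # attribute cache size (default)
stats_file = /tmp/s3fuse-stats               # optional: counters on unmount
```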

File Hashing
SHA256 is now used for file integrity checks. The file hash, if available, will be in the “s3fuse_sha256” extended attribute. A standalone SHA256 hash generator (“s3fuse_sha256_hash”) that uses the same hash-list method as s3fuse is now included.
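The post doesn’t spell out the hash-list construction, but one plausible sketch is to hash fixed-size chunks and then hash the concatenation of their hex digests. The 128 MiB chunk size below is a guess, not s3fuse’s actual block size, so don’t expect this to reproduce the s3fuse_sha256 attribute byte-for-byte.

```sh
# hash_list <file> [chunk-bytes]: sketch of a hash-list digest.
# Hashes each chunk, then hashes the concatenated hex digests.
hash_list() {
  local file=$1 chunk=${2:-$((128 * 1024 * 1024))}
  local size parts i
  size=$(stat -c %s "$file")
  parts=$(( (size + chunk - 1) / chunk ))
  for ((i = 0; i < parts; i++)); do
    dd if="$file" bs="$chunk" skip="$i" count=1 2>/dev/null \
      | sha256sum | cut -d' ' -f1
  done | tr -d '\n' | sha256sum | cut -d' ' -f1
}
```

For a file smaller than one chunk this reduces to the SHA256 of the hex SHA256 of its contents.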

Statistics file
Set the stats_file configuration option to a valid path and s3fuse will write statistics (mainly event counters) to that path when unmounting.

OS X default options
noappledouble and daemon_timeout=3600 are now default FUSE options on OS X.

KNOWN ISSUE: Google Cloud Storage large file uploads
Multipart GCS uploads are not implemented. Large files will time out unless the transfer_timeout_in_s configuration option is set to something very large.
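Until multipart GCS uploads land, a workaround sketch (assuming the same key/value configuration syntax as the other options named in this post; the one-day value is arbitrary):

```ini
# raise the transfer timeout so large GCS uploads can finish;
# tune to your file sizes and bandwidth
transfer_timeout_in_s = 86400
```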

RDMA tutorial PDFs

In cooperation with the HPC Advisory Council, I’ve reformatted three of my RDMA tutorials for easier offline reading. You can find them, along with several papers on InfiniBand, GPUs, and other interesting topics, at the HPC Training page. For easier access, here are my three papers: