s3fuse 0.13 released
I’ve just uploaded version 0.13 of s3fuse, my FUSE driver for Amazon S3 (and Google Cloud Storage) buckets. 0.13 is a near-complete rewrite of s3fuse, and brings a few new features and vastly improved (I hope) robustness. From the change log:
NEW: File encryption
Operates at the file level and encrypts the contents of files with a key (or set of keys) that you control. See the README.
NEW: OS X binaries
A disk image (.dmg) is now available on the downloads page containing pre-built OS X binaries (built on OS X 10.8.2, so compatibility may be limited).
NEW: Size-limited object cache
The object attribute cache now has a fixed size. This addresses the memory utilization issues reported by Gregory C. and others. The default maximum size is 1,000 objects, but this can be changed by tweaking the max_objects_in_cache configuration option.
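A sketch of what raising that limit might look like in s3fuse.conf (the value below is illustrative, not a recommendation):

```ini
# s3fuse.conf: cap the object attribute cache at 5,000 objects
# instead of the default 1,000 (value is illustrative)
max_objects_in_cache=5000
```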
IMPORTANT: auth_data configuration option
The auth_data configuration option has been replaced. For AWS, use aws_secret_file instead; for Google Storage, use gs_token_file. This will require a change to existing configuration files.
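Assuming the option names above, an updated configuration might look like this (the file paths are placeholders):

```ini
# s3fuse.conf: auth_data is gone; point at a credential file instead.
# For AWS buckets (path is a placeholder):
aws_secret_file=/home/user/.s3fuse/aws.secret
# For Google Storage buckets (path is a placeholder):
# gs_token_file=/home/user/.s3fuse/gs.token
```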
IMPORTANT: Default configuration file locations
s3fuse now searches for a configuration file in ~/.s3fuse/s3fuse.conf before trying %sysconfdir/s3fuse.conf (usually /etc/s3fuse.conf or /usr/local/etc/s3fuse.conf).
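To take advantage of the new search order, you can keep a per-user copy of the configuration; a minimal sketch:

```shell
# Create a per-user configuration file, which s3fuse now checks
# before falling back to %sysconfdir/s3fuse.conf
mkdir -p "$HOME/.s3fuse"
touch "$HOME/.s3fuse/s3fuse.conf"
```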
NEW: SHA256 integrity checks
SHA256 is now used for file integrity checks. The file hash, if available, will be in the “s3fuse_sha256” extended attribute. A standalone SHA256 hash generator (“s3fuse_sha256_hash”) that uses the same hash-list method as s3fuse is now included.
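As a rough sanity check, you can compute a whole-file SHA-256 locally and compare it against the stored value (read the “s3fuse_sha256” attribute with a tool such as getfattr on Linux or xattr on OS X). Note that s3fuse uses a hash-list method, so for large multi-part files the stored value may not equal a plain whole-file digest; the file path below is illustrative:

```shell
# Compute a plain whole-file SHA-256 for comparison with the
# "s3fuse_sha256" extended attribute (file contents are a demo)
printf 'hello' > /tmp/sha_demo.txt
hash=$(sha256sum /tmp/sha_demo.txt | awk '{print $1}')
echo "$hash"
```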
NEW: Statistics file
Set the stats_file configuration option to a valid path and s3fuse will write statistics (event counters, mainly) to the given path when unmounting.
NEW: OS X default options
daemon_timeout=3600 is now a default FUSE option on OS X.
KNOWN ISSUE: Google Cloud Storage large file uploads
Multipart GCS uploads are not implemented. Large files will time out unless the transfer_timeout_in_s configuration option is set to something very large.
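Until multipart GCS uploads are implemented, a workaround sketch (the timeout value is illustrative; pick one large enough for your biggest files):

```ini
# s3fuse.conf: allow very long single-request uploads to GCS
# (86400 seconds = 24 hours; value is illustrative)
transfer_timeout_in_s=86400
```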