One of the nicest developments in the cloud arena is the increasing adoption of standards. This, of course, has a positive impact on the maturity of, and market confidence in, such technologies.
Amazon, as one of the pioneers, made a good choice in the design of their offering by making their API public. Now, vendors such as Eucalyptus, with its private/hybrid cloud offering, and many other providers can leverage and build upon the specs to offer compatible services, sparing their customers the hassle of learning a new technology/tool.
I have bare-metal servers sitting in my provider’s data center. A couple of months ago I learned about their new cloud storage offering. Since I’m working a lot on cloud lately, I checked the service out. It was nice to learn they are not re-inventing the wheel but instead implementing Amazon’s Simple Storage Service (S3) API, the de facto standard for cloud storage.
Currently there are many S3-compatible tools available, both FLOSS and freeware/closed source. I’ve been using s3cmd, which is already available in the Debian archive, to interact with S3-compatible services. Usage is pretty straightforward.
For my use case I intend to store copies of certain files on my provider’s S3-compatible service. Before being able to store files I’ll need to create buckets. If you are not very familiar with S3 terminology, buckets can be seen as containers or folders (in the desktop paradigm).
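To make the terminology a bit more concrete, here is how a bucket and an object stored in it look in s3cmd’s s3:// URI notation (the names below are just illustrative):

s3://mybucket                      # a bucket: the top-level container
s3://mybucket/backups/etc.tar.gz   # an object (file) stored inside that bucket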
The first thing to do is configure your keys and credentials for accessing S3 from your provider. I recommend using the --configure option to create the $HOME/.s3cfg file, because it will fill in all the available options for a standard S3 service, leaving you just the work of tweaking them based on your needs. You can of course create the file all by yourself if you prefer.
$ sudo aptitude install s3cmd
$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: ...
You’ll be required to enter the access key and the secret key. You’ll also be asked for an encryption password (enter one only if you plan to use this feature). Finally, the tool will test the configuration against Amazon’s service; since that is not our case, the test will fail. When prompted, tell it not to retry the configuration and answer Y to Save configuration.
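After saving, $HOME/.s3cfg holds, among many other options, your credentials. As a rough sketch (the values here are placeholders, and the exact set of options depends on your s3cmd version):

access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY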
Now, edit the $HOME/.s3cfg file and set the address for your private/third-party S3 provider. This is done here:
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
Change s3.amazonaws.com to your provider’s address, and adjust the host_bucket configuration accordingly. In my case I had to use:
host_base = rs1.connectria.com
host_bucket = %(bucket)s.rs1.connectria.com
Now, save the file and test the service by listing the available buckets (of course, there are none yet).
$ s3cmd ls
If you don’t get an error, the tool is properly configured. Now you can create buckets, put files, list them, etc.
$ s3cmd mb s3://testbucket
Bucket 's3://testbucket/' created
$ s3cmd put testfile.txt s3://testbucket
testfile.txt -> s3://testbucket/testfile.txt  [1 of 1]
 8331 of 8331   100% in    1s     7.48 kB/s  done
$ s3cmd ls s3://testbucket
2012-12-26 22:09      8331   s3://testbucket/testfile.txt
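From here on, the other basic s3cmd operations follow the same pattern. A short sketch of a few of them, using the same test bucket from above (commands only, output omitted):

$ s3cmd get s3://testbucket/testfile.txt testfile-copy.txt   # download an object
$ s3cmd del s3://testbucket/testfile.txt                     # delete an object
$ s3cmd rb s3://testbucket                                   # remove the (now empty) bucket

Check s3cmd --help or the man page for the full list of commands and options.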