Add this line to your application's Gemfile:
gem 'fog-aliyun'
And then execute:
$ bundle
Or install it yourself as:
$ gem install fog-aliyun
Before you can use fog-aliyun, you must require it in your application:
require 'fog/aliyun'
Since it is bad practice to keep your credentials in source code, you should load them from the default fog configuration file, `~/.fog`. This file could look like this:
default:
  aliyun_accesskey_id: "<YOUR_ACCESS_KEY_ID>"
  aliyun_accesskey_secret: "<YOUR_SECRET_ACCESS_KEY>"
  aliyun_region_id: "<YOUR_TARGET_REGION>"
With the configuration file in place, a connection can be created without passing any credentials:
conn = Fog::Storage[:aliyun]
If you haven't modified your default fog configuration file or you don't want to use it, you can pass your credentials directly:
opt = {
  :provider                => 'aliyun',
  :aliyun_accesskey_id     => '<YOUR_ACCESS_KEY_ID>',
  :aliyun_accesskey_secret => '<YOUR_SECRET_ACCESS_KEY>',
  :aliyun_oss_bucket       => '<YOUR_OSS_BUCKET>',
  :aliyun_region_id        => '<YOUR_TARGET_REGION>',
  :aliyun_oss_endpoint     => '<YOUR_OSS_ENDPOINT>'
}
conn = Fog::Storage.new(opt)
Note: `:aliyun_region_id` is optional and defaults to "cn-hangzhou".

Note: `:aliyun_oss_endpoint` is optional. If it is not specified, it will be generated automatically from `:aliyun_region_id`. Its basic format is `oss-<region-id>.aliyuncs.com`, with the default scheme "http" and default port "80". If you want to use https or port 443, use the format `<scheme>://oss-<region-id>.aliyuncs.com:<port>`.
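For example, here is a minimal sketch of connecting over HTTPS by setting the endpoint explicitly; the credentials, bucket and region values are placeholders:

```ruby
require 'fog/aliyun'

# Placeholder credentials; the explicit endpoint is optional and is shown
# here only to illustrate the https/443 endpoint format described above.
conn = Fog::Storage.new(
  :provider                => 'aliyun',
  :aliyun_accesskey_id     => '<YOUR_ACCESS_KEY_ID>',
  :aliyun_accesskey_secret => '<YOUR_SECRET_ACCESS_KEY>',
  :aliyun_oss_bucket       => '<YOUR_OSS_BUCKET>',
  :aliyun_region_id        => 'cn-hangzhou',
  :aliyun_oss_endpoint     => 'https://oss-cn-hangzhou.aliyuncs.com:443'
)
```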
Fog::Aliyun provides both a model and a request abstraction. The request abstraction provides the most efficient interface, and the model abstraction wraps the request abstraction to provide a convenient ActiveModel-like interface.
The Fog::Storage object supports a number of methods that wrap individual HTTP requests to the OSS API.
To see a list of requests supported by the storage service:
conn.requests
This returns:
[[nil, :copy_object], [nil, :delete_bucket], [nil, :delete_object], [nil, :get_bucket], [nil, :get_object], [nil, :get_object_http_url], [nil, :get_object_https_url], [nil, :head_object], [nil, :put_bucket], [nil, :put_object], [nil, :list_buckets], [nil, :list_objects], [nil, :get_containers], [nil, :get_container], [nil, :delete_container], [nil, :put_container]]
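For example, here is a short sketch of calling two of these requests directly; it assumes, as is typical for fog storage providers, that `put_bucket` and `delete_bucket` take the bucket name as their first argument (check the request source files if in doubt):

```ruby
# Hypothetical bucket name; the request signatures are assumed, not documented here.
conn.put_bucket('fog-demo-bucket')
conn.delete_bucket('fog-demo-bucket')
```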
To list all buckets:
conn.list_buckets
This returns something like the following:
[{"Location"=>"oss-cn-beijing", "Name"=>"dt1", "CreationDate"=>"2015-07-30T08:38:02.000Z"}, {"Location"=>"oss-cn-shenzhen", "Name"=>"ruby1", "CreationDate"=>"2015-07-30T02:22:34.000Z"}, {"Location"=>"oss-cn-qingdao", "Name"=>"yuanhang123", "CreationDate"=>"2015-05-18T03:06:31.000Z"}]
You can also pass optional parameters to the request:
conn.list_buckets(:prefix=>"pre")
Here is a summary of the optional parameters:

Parameters | Description |
---|---|
`:prefix` | Only buckets whose names start with the given prefix are returned; no prefix filtering is applied if not set. Data type: String. Default: none. |
`:marker` | Results start from the marker alphabetically; they start from the first bucket if not set. Data type: String. Default: none. |
`:maxKeys` | The maximum number of results to return; 100 if not set. The maximum value of `:maxKeys` is 1000. Data type: String. Default: 100. |
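For example, a small sketch combining these parameters (the values are illustrative):

```ruby
# Return at most 10 buckets whose names start with "fog",
# beginning alphabetically after the marker "fog-a".
conn.list_buckets(:prefix => "fog", :marker => "fog-a", :maxKeys => "10")
```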
To learn more about `Fog::Aliyun` request methods, you can refer to our source code. To learn more about the OSS API, refer to the AliYun OSS API documentation.
Fog models behave in a manner similar to ActiveModel. Models will generally respond to the `create`, `save`, `destroy`, `reload` and `attributes` methods. Additionally, fog will automatically create attribute accessors.
Here is a summary of common model methods:
Method | Description |
---|---|
`create` | Accepts a hash of attributes and creates the object. Note: creation is a non-blocking call and you will be required to wait for a valid state before using the resulting object. |
`save` | Saves the object. Note: not all objects support updating. |
`destroy` | Destroys the object. Note: this is a non-blocking call and object deletion might not be instantaneous. |
`reload` | Updates the object with the latest state from the service. |
`attributes` | Returns a hash containing the list of model attributes and values. |
`identity` | Returns the identity of the object. Note: this might not always be equal to `object.id`. |
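For example, here is a brief sketch exercising these methods on a directory model; the bucket name is hypothetical:

```ruby
dir = conn.directories.create(:key => 'fog-demo-bucket') # create
dir.reload                                               # refresh state from the service
dir.attributes                                           # => hash of attributes and values
dir.identity                                             # identity of the object (typically its key)
dir.destroy                                              # note: the directory must be empty
```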
The remainder of this document details the model abstraction.
Note: Fog sometimes refers to OSS containers as directories.
To retrieve a list of directories:
dirs = conn.directories
This returns a collection of `Fog::Storage::Aliyun::Directory` models.
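For example, a minimal sketch of iterating over the collection (the output depends on your buckets):

```ruby
dirs.each do |dir|
  puts dir.key   # each entry is a Fog::Storage::Aliyun::Directory
end
```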
To retrieve a specific directory:
dir = dirs.get "dir"
This returns a `Fog::Storage::Aliyun::Directory` instance.
To create a directory:
dirs.create :key => 'backups'
To delete a directory:
directory.destroy
Note: Directory must be empty before it can be deleted.
To get a directory's URL:
directory.public_url
To list files in a directory:
directory.files
Note: File contents are not downloaded until the `body` attribute is called.
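For example, a small sketch of listing file keys and sizes without downloading their contents; it assumes the usual fog file attributes such as `content_length`:

```ruby
directory.files.each do |file|
  # Metadata only; the body is not fetched here.
  puts "#{file.key} (#{file.content_length} bytes)"
end
```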
To upload a file into a directory:
file = directory.files.create :key => 'space.jpg', :body => File.open "space.jpg"
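As a lighter-weight variant, here is a sketch of uploading in-memory content, assuming (as with other fog storage providers) that `:body` also accepts a String:

```ruby
file = directory.files.create(
  :key  => 'hello.txt',
  :body => 'Hello from fog-aliyun'  # in-memory string instead of a File handle
)
```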
Note: For files larger than 5 GB please refer to the Upload Large Files section.
OSS requires files larger than 5 GB (the OSS default limit) to be uploaded into segments along with an accompanying manifest file. All of the segments must be uploaded to the same container.
Segmented files are downloaded like ordinary files. See Download Files section for more information.
The most efficient way to download files from a private or public directory is as follows:
File.open('downloaded-file.jpg', 'w') do |f|
  directory.files.get("my_big_file.jpg") do |data, remaining, content_length|
    f.syswrite data
  end
end
This will download and save the file.
Note: The `body` attribute of the file will be empty if the file has been downloaded using this method.
If a file object has already been loaded into memory, you can save it as follows:
File.open('germany.jpg', 'w') {|f| f.write(file_object.body) }
Note: This method is more memory intensive than the streaming approach above, as the entire object is loaded into memory before the file is saved.
To get a file's URL:
file.public_url
OSS supports copying files. To copy a file into a container named "trip" with a name of "europe.jpg", do the following:
file.copy("trip", "europe.jpg")
To move or rename a file, perform a copy operation and then delete the old file:
file.copy("trip", "germany.jpg")
file.destroy
To delete a file:
file.destroy
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to rubygems.org.
To run test suite use the following command:
rake spec
To run test suite with code coverage:
export COVERAGE=true
rake spec
The result will be generated in the `coverage` folder.
To run the integration tests, please prepare a set of AliCloud credentials to be used by the integration tests. Define the credentials and bucket in the `~/.fog` file using the following format:
default:
  aliyun_accesskey_id: "...access key..."       # You can create a set of credentials
  aliyun_accesskey_secret: "...secret..."       # using the Alicloud console portal
  aliyun_region_id: "...region name..."         # Example: cn-shanghai
  aliyun_oss_bucket: "...name of the bucket..." # Example: fog-integration-test-bucket
WARNING: Do NOT use production account credentials or buckets for testing; it may be harmful to your data!
The tests use the [Aliyun CLI](https://github.com/aliyun/aliyun-cli#installation) to set up the integration bucket and content for the tests; please install it locally before running the integration tests.
Aliyun CLI will be configured automatically as part of test execution using the credentials provided for fog connection.
Then run the test suite with the `INTEGRATION` environment variable set to activate the integration tests:
export INTEGRATION=true
rake spec
Performance tests provide a memory consumption report for download/upload operations. To run them:
export PERFORMANCE=true
rake spec
The gem is available as open source under the terms of the MIT License.