4. Ceph Basics - Object Storage Usage

Article reposted from: https://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247485256&idx=1&sn=39e072156c87c639e0c64236d3c2d25d&chksm=e9fdd2bcde8a5baa1da7583a34d94ba6c7311d1c354ede8b3fec57b61f9f42d5fd48e9152d6b&scene=178&cur_album_id=1600845417376776197#rd

Object Storage (RGW)

Basic Concepts

Ceph Object Gateway is an object storage interface built on top of librados to provide applications with a RESTful gateway to Ceph Storage Clusters. Ceph Object Storage supports two interfaces:

1.S3-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.

2.Swift-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.

Ceph Object Storage uses the Ceph Object Gateway daemon (radosgw), which is an HTTP server for interacting with a Ceph Storage Cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management. Ceph Object Gateway can store data in the same Ceph Storage Cluster used to store data from Ceph File System clients or Ceph Block Device clients. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve it with the other.

The Ceph Object Gateway works by having radosgw accept user requests and then talk to librados on the back end, so it acts as the bridge between clients and the cluster. Externally it exposes two compatible interfaces, S3 and OpenStack Swift. Each of those interfaces has its own authentication mechanism, so Ceph also provides an independent user-management layer that works with both S3 and Swift. The data ultimately lands on the OSDs, and whether it arrives through S3 or Swift it ends up in the same namespace, so an object stored through S3 can also be accessed through Swift. That is the basic architecture of the object store; to use it we have to deploy radosgw ourselves before the cluster can be reached this way, because it is not installed by default.

What is a bucket? Think of it as a container that holds objects, backed by practically unlimited, scalable storage that is secure and reliable. Ceph object storage relies on Ceph RADOS underneath for data redundancy and disaster recovery. So what features does it offer?

Basic Features

  • RESTful Interface # a RESTful-style interface for uploading, downloading and managing objects
  • S3- and Swift-compliant APIs # two API flavours, compatible with S3 and Swift
  • S3-style subdomains
  • Unified S3/Swift namespace # a flat, unified namespace shared by S3 and Swift
  • User management # built-in user management; objects can be public or require authorized access
  • Usage tracking # per-user usage tracking (see rados df)
  • Striped objects # objects are striped; multipart upload is supported
  • Cloud solution integration # integrates with cloud solutions
  • Multi-site deployment # supports multi-site deployment
  • Multi-site replication # supports multi-site replication

Installing RGW

1. Install the software

[root@ceph-node01 ~]# rpm -qa |grep ceph
ceph-base-14.2.11-0.el7.x86_64
ceph-mon-14.2.11-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
python-ceph-argparse-14.2.11-0.el7.x86_64
libcephfs2-14.2.11-0.el7.x86_64
ceph-common-14.2.11-0.el7.x86_64
ceph-selinux-14.2.11-0.el7.x86_64
ceph-mds-14.2.11-0.el7.x86_64
ceph-14.2.11-0.el7.x86_64
python-cephfs-14.2.11-0.el7.x86_64
ceph-osd-14.2.11-0.el7.x86_64
ceph-mgr-14.2.11-0.el7.x86_64
ceph-radosgw-14.2.11-0.el7.x86_64 # can be installed directly with yum
[root@ceph-node01 ~]#

2. Create the RGW instance and start the service; it listens on port 7480 by default

[root@ceph-node01 ceph-deploy]# ceph-deploy rgw create ceph-node01

3. Check the service

[root@ceph-node01 ceph-deploy]# systemctl status ceph-radosgw@rgw.ceph-node01
● ceph-radosgw@rgw.ceph-node01.service - Ceph rados gateway
   Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-10-05 20:34:36 EDT; 14s ago
 Main PID: 33574 (radosgw)
   CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-node01.service
           └─33574 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph

Oct 05 20:34:36 ceph-node01 systemd[1]: Started Ceph rados gateway.
Oct 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:13] Unknown lvalue 'LockPersonality' in section 'Service'
Oct 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:14] Unknown lvalue 'MemoryDenyWriteExecute' in ...Service'
Oct 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:17] Unknown lvalue 'ProtectControlGroups' in se...Service'
Oct 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:19] Unknown lvalue 'ProtectKernelModules' in se...Service'
Oct 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:20] Unknown lvalue 'ProtectKernelTunables' in s...Service'
Hint: Some lines were ellipsized, use -l to show in full.
[root@ceph-node01 ceph-deploy]# netstat -antp |grep 7480
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 33574/radosgw
[root@ceph-node01 ceph-deploy]# ceph -s
  cluster:
    id: cc10b0cb-476f-420c-b1d6-e48c1dc929af
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 2d)
    mgr: ceph-node01(active, since 2d), standbys: ceph-node02, ceph-node03
    osd: 3 osds: 3 up (since 2d), 3 in (since 2d)
    rgw: 1 daemon active (ceph-node01)

  task status:

  data:
    pools: 5 pools, 256 pgs
    objects: 507 objects, 1.1 GiB
    usage: 5.3 GiB used, 395 GiB / 400 GiB avail
    pgs: 256 active+clean

  io:
    client: 23 KiB/s rd, 0 B/s wr, 35 op/s rd, 23 op/s wr

[root@ceph-node01 ceph-deploy]#

4. Access the service for the first time

The request runs as the anonymous user and returns the (empty) list of all buckets; getting this response means the installation is working.

[root@ceph-node01 ceph-deploy]# curl http://ceph-node01:7480/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
[root@ceph-node01 ceph-deploy]#

5. Change the RGW port from the default 7480 to 80

[root@ceph-node01 ceph-deploy]# cat ceph.conf
。。。

[client.rgw.ceph-node01]
rgw_frontends = "civetweb port=80"
[root@ceph-node01 ceph-deploy]#

Why edit this ceph.conf (the copy under the ceph-deploy directory)? Because when new nodes are added later, this is the file that gets copied by default, so editing it here keeps the whole cluster on a single, consistent configuration. Next, push the configuration file to all nodes:

[root@ceph-node01 ceph-deploy]# ceph-deploy --overwrite-conf config push ceph-node01 ceph-node02 ceph-node03

Note that the --overwrite-conf option is required; otherwise ceph-deploy complains that it cannot overwrite the existing configuration.

6. Restart the service

[root@ceph-node01 ceph-deploy]# systemctl restart ceph-radosgw@rgw.ceph-node01
[root@ceph-node01 ceph-deploy]# netstat -antp |grep 80|grep radosgw
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 34352/radosgw
tcp 0 0 100.73.18.152:36100 100.73.18.153:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:53018 100.73.18.152:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:36118 100.73.18.153:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:39680 100.73.18.152:6802 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:56320 100.73.18.128:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:39666 100.73.18.152:6802 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:56336 100.73.18.128:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:53034 100.73.18.152:6800 ESTABLISHED 34352/radosgw
[root@ceph-node01 ceph-deploy]#

7. Verify port 80

[root@ceph-node01 ceph-deploy]# curl http://ceph-node01/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
[root@ceph-node01 ceph-deploy]#

At this point, the RGW service deployment is complete.

Accessing RGW with S3

1. Create an S3-compatible user

[root@ceph-node01 ceph-deploy]# radosgw-admin user create --uid ceph-s3-user --display-name "Ceph S3 User Demo"
{
    "user_id": "ceph-s3-user",
    "display_name": "Ceph S3 User Demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "V3J9L4M1WKV5O5ECAKPU",
            "secret_key": "f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

[root@ceph-node01 ceph-deploy]#

Note that the access_key and secret_key above are important; write them down for later use. If you forget to record them, that is not a problem either; they can be looked up with the following command:

[root@ceph-node01 ceph-deploy]# radosgw-admin user info --uid ceph-s3-user
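
If you prefer to grab the keys programmatically rather than copying them by hand, a small sketch like the one below works. It is not part of the original walkthrough; it assumes radosgw-admin is available on the local node and that the user already has at least one S3 key pair, matching the JSON layout shown above.

#!/usr/bin/env python
# Read an RGW user's S3 credentials from radosgw-admin's JSON output.
# A sketch only; assumes radosgw-admin runs locally and the user has
# at least one S3 key pair, as in the output above.
import json
import subprocess

def get_s3_keys(uid):
    out = subprocess.check_output(
        ["radosgw-admin", "user", "info", "--uid", uid])
    info = json.loads(out)
    key = info["keys"][0]            # first S3 key pair of this user
    return key["access_key"], key["secret_key"]

if __name__ == "__main__":
    access_key, secret_key = get_s3_keys("ceph-s3-user")
    print("access_key=%s secret_key=%s" % (access_key, secret_key))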

2. Access the Ceph cluster with the Ceph SDK

Official SDK documentation: https://docs.ceph.com/en/latest/radosgw/s3/python/#using-s3-api-extensions

[root@ceph-node01 ~]# cat s3client.py
import boto
import boto.s3.connection

access_key = 'V3J9L4M1WKV5O5ECAKPU'
secret_key = 'f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw'

# connect to the RGW endpoint over plain HTTP on port 80,
# using path-style (ordinary) bucket addressing
conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = 'ceph-node01', port = 80,
        is_secure = False,              # False because we are not using SSL
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )

# create a bucket, then list every bucket owned by this user
bucket = conn.create_bucket("ceph-s3-bucket")
for bucket in conn.get_all_buckets():
        print "{name}\t{created}".format(
                name = bucket.name,
                created = bucket.creation_date,
        )
[root@ceph-node01 ~]#
[root@ceph-node01 ~]# python s3client.py
ceph-s3-bucket 2020-10-06T04:13:10.629Z
[root@ceph-node01 ~]#
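
Besides creating buckets, the same boto connection can upload and read back individual objects. The short sketch below is not part of the original article; the object name hello.txt and its contents are made up for illustration, and the connection settings are the ones used in s3client.py.

#!/usr/bin/env python
# Minimal object I/O against RGW through the S3 API with boto.
# A sketch only: the object name and contents are illustrative;
# the connection settings match s3client.py above.
import boto
import boto.s3.connection

conn = boto.connect_s3(
        aws_access_key_id='V3J9L4M1WKV5O5ECAKPU',
        aws_secret_access_key='f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw',
        host='ceph-node01', port=80, is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat())

bucket = conn.get_bucket('ceph-s3-bucket')

# upload: create a key (object) and write a string into it
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello rgw\n')

# download: read the object back
print(bucket.get_key('hello.txt').get_contents_as_string())

# list all objects in the bucket with their sizes
for k in bucket.list():
    print("%s\t%s" % (k.name, k.size))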

After the RGW is installed, several pools are created automatically: .rgw.root, default.rgw.control, default.rgw.meta and default.rgw.log. Once we create a bucket, a default.rgw.buckets.index pool is created as well:

[root@ceph-node01 ~]# ceph osd lspools
1 ceph-demo
2 .rgw.root
3 default.rgw.control
4 default.rgw.meta
5 default.rgw.log
6 default.rgw.buckets.index
[root@ceph-node01 ~]#

3. Operate RGW from the command line

Install the command-line tool

[root@ceph-node01 ~]# yum -y install s3cmd

Configure the command-line tool

[root@ceph-node01 ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: V3J9L4M1WKV5O5ECAKPU
Secret Key: f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 100.73.18.152:80

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 100.73.18.152:80/%(bucket)s

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: V3J9L4M1WKV5O5ECAKPU
  Secret Key: f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw
  Default Region: US
  S3 Endpoint: 100.73.18.152:80
  DNS-style bucket+hostname:port template for accessing a bucket: 100.73.18.152:80/%(bucket)s
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
[root@ceph-node01 ~]#

Basic usage of the command-line tool

[root@ceph-node01 ~]# s3cmd ls
2020-10-06 04:13 s3://ceph-s3-bucket
[root@ceph-node01 ~]# s3cmd mb s3://s3cmd-demo
ERROR: S3 error: 403 (SignatureDoesNotMatch)

This happens because of the signature version; enabling signature v2 in the s3cmd configuration is enough:

[root@ceph-node01 ~]# sed -i '/signature_v2/s/False/True/g' /root/.s3cfg
[root@ceph-node01 ~]#

Create the bucket again

[root@ceph-node01 ~]# s3cmd mb s3://s3cmd-demo
Bucket 's3://s3cmd-demo/' created
[root@ceph-node01 ~]#

Upload a single file

[root@ceph-node01 ~]# s3cmd put /etc/fstab s3://s3cmd-demo/fatab-demo
upload: '/etc/fstab' -> 's3://s3cmd-demo/fatab-demo' [1 of 1]
 465 of 465 100% in 0s 1751.66 B/s done
ERROR: S3 error: 416 (InvalidRange)
[root@ceph-node01 ~]#

The error ERROR: S3 error: 416 (InvalidRange) appears because uploading an object requires the RGW data pool to be created, and creating a pool needs PGs; when the cluster cannot accommodate more PGs, the request fails. You can either shrink the PG counts of existing pools or adjust the relevant parameters in the configuration file and restart the mon processes. There are three ways to handle it:
1. Adjust pg_num and pgp_num; the default value for both is 8;
2. Increase the mon_max_pg_per_osd parameter (250 by default in Nautilus); the error is raised when the number of PGs per OSD would exceed this value; see https://www.suse.com/support/kb/doc/?id=000019402 (a quick check of the current per-OSD PG count is sketched below);
3. Add more OSDs to the cluster;
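
To get a feel for how close the cluster already is to that per-OSD limit before changing anything, a rough check like the one below can be run on a node with an admin keyring. This is only a sketch, not part of the original walkthrough; it assumes the JSON output of ceph osd df exposes a per-OSD pgs field, as Nautilus does.

#!/usr/bin/env python
# Rough check of how many PGs each OSD currently carries, to compare
# against mon_max_pg_per_osd. A sketch only: assumes "ceph osd df"
# with JSON output reports a "pgs" field per OSD (Nautilus does).
import json
import subprocess

out = subprocess.check_output(["ceph", "osd", "df", "-f", "json"])
nodes = json.loads(out)["nodes"]

for n in nodes:
    print("osd.%s carries %s PGs" % (n["id"], n["pgs"]))

print("max PGs on a single OSD: %s" % max(n["pgs"] for n in nodes))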

Here we take the second approach:

[root@ceph-node01 ceph-deploy]# cat ceph.conf
[global]
fsid = cc10b0cb-476f-420c-b1d6-e48c1dc929af
public_network = 100.73.18.0/24
cluster_network = 100.73.18.0/24
mon_initial_members = ceph-node01
mon_host = 100.73.18.152
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_max_pg_per_osd = 1000

[client.rgw.ceph-node01]
rgw_frontends = "civetweb port=80"
[root@ceph-node01 ceph-deploy]#

Restart the monitor daemon processes

[root@ceph-node01 ceph-deploy]# systemctl restart ceph-mon@ceph-node01
[root@ceph-node01 ceph-deploy]# systemctl restart ceph-mon@ceph-node02
[root@ceph-node01 ceph-deploy]# systemctl restart ceph-mon@ceph-node03

Upload the file again to test

[root@ceph-node01 ceph-deploy]# s3cmd put /etc/fstab s3://s3cmd-demo/
upload: '/etc/fstab' -> 's3://s3cmd-demo/fstab' [1 of 1]
 465 of 465 100% in 1s 337.90 B/s done
[root@ceph-node01 ceph-deploy]#

4. Common operations

# 1. List all buckets
[root@ceph-node01 ~]# s3cmd ls
2020-10-06 04:13 s3://ceph-s3-bucket
2020-10-06 04:34 s3://s3cmd-demo
2020-10-06 08:07 s3://swift-demo

# 2. Create a bucket
[root@ceph-node01 ~]# s3cmd mb s3://gwj-demo/
Bucket 's3://gwj-demo/' created

# 3. Delete an empty bucket
[root@ceph-node01 ~]# s3cmd rb s3://gwj-demo/
Bucket 's3://gwj-demo/' removed

# 4. Upload a file to a bucket
[root@ceph-node01 ~]# s3cmd put ip s3://s3cmd-demo/
upload: 'ip' -> 's3://s3cmd-demo/ip' [1 of 1]
 78 of 78 100% in 0s 2.21 KB/s done

# 5. Upload a directory to a bucket
[root@ceph-node01 ~]# s3cmd put ./ s3://s3cmd-demo/
ERROR: Parameter problem: Use --recursive to upload a directory: ./

[root@ceph-node01 ~]# s3cmd put ./ s3://s3cmd-demo/ --recursive
。。。
upload: './ceph-deploy/get-pip.py' -> 's3://s3cmd-demo/ceph-deploy/get-pip.py' [36 of 39]
 1885433 of 1885433 100% in 0s 18.00 MB/s done
upload: './ip' -> 's3://s3cmd-demo/ip' [37 of 39]
 78 of 78 100% in 0s 4.60 KB/s done
upload: './s3client.py' -> 's3://s3cmd-demo/s3client.py' [38 of 39]
 655 of 655 100% in 0s 10.23 KB/s done
upload: './size.log' -> 's3://s3cmd-demo/size.log' [39 of 39]
 2448 of 2448 100% in 0s 33.82 KB/s done
[root@ceph-node01 ~]#

# 6. List the contents of a bucket
[root@ceph-node01 ~]# s3cmd ls s3://s3cmd-demo/
                          DIR s3://s3cmd-demo/.cache/
                          DIR s3://s3cmd-demo/.ssh/
                          DIR s3://s3cmd-demo/ceph-deploy/
2020-10-06 10:24 19887 s3://s3cmd-demo/.bash_history
2020-10-06 10:24 18 s3://s3cmd-demo/.bash_logout
2020-10-06 10:24 176 s3://s3cmd-demo/.bash_profile
2020-10-06 10:24 176 s3://s3cmd-demo/.bashrc
2020-10-06 10:24 1077 s3://s3cmd-demo/.cephdeploy.conf
2020-10-06 10:24 100 s3://s3cmd-demo/.cshrc
2020-10-06 10:24 0 s3://s3cmd-demo/.history
2020-10-06 10:24 2140 s3://s3cmd-demo/.s3cfg
2020-10-06 10:24 12288 s3://s3cmd-demo/.swp
2020-10-06 10:24 129 s3://s3cmd-demo/.tcshrc
2020-10-06 10:24 5864 s3://s3cmd-demo/.viminfo
2020-10-06 10:24 974 s3://s3cmd-demo/anaconda-ks.cfg
2020-10-06 10:24 3454 s3://s3cmd-demo/ceph-deploy-ceph.log
2020-10-06 08:57 465 s3://s3cmd-demo/fstab
2020-10-06 10:24 78 s3://s3cmd-demo/ip
2020-10-06 10:24 655 s3://s3cmd-demo/s3client.py
2020-10-06 10:24 2448 s3://s3cmd-demo/size.log
[root@ceph-node01 ~]#

# 7. Download a single file
[root@ceph-node01 gwj]# s3cmd get s3://s3cmd-demo/size.log
download: 's3://s3cmd-demo/size.log' -> './size.log' [1 of 1]
 2448 of 2448 100% in 0s 242.11 KB/s done
[root@ceph-node01 gwj]# ls
size.log
[root@ceph-node01 gwj]#

# 8. Delete an object from a bucket
[root@ceph-node01 gwj]# s3cmd del s3://s3cmd-demo/size.log
delete: 's3://s3cmd-demo/size.log'
[root@ceph-node01 gwj]# s3cmd get s3://s3cmd-demo/size.log
ERROR: Parameter problem: File ./size.log already exists. Use either of --force / --continue / --skip-existing or give it a new name.
[root@ceph-node01 gwj]#

# 9. Get the space used by a bucket
[root@ceph-node01 gwj]# s3cmd du -H s3://s3cmd-demo/
   3M 39 objects s3://s3cmd-demo/
[root@ceph-node01 gwj]# s3cmd du -H s3://s3cmd-demo/.ssh
   3K 4 objects s3://s3cmd-demo/.ssh
[root@ceph-node01 gwj]#

# 10. Show information about an object in a bucket
[root@ceph-node01 gwj]# s3cmd info s3://s3cmd-demo/ip
s3://s3cmd-demo/ip (object):
   File size: 78
   Last mod: Tue, 06 Oct 2020 10:24:29 GMT
   MIME type: text/plain
   Storage: STANDARD
   MD5 sum: fd3066a2b8b805e905aeb073afd970cf
   SSE: none
   Policy: none
   CORS: none
   ACL: Ceph S3 User Demo: FULL_CONTROL
   x-amz-meta-s3cmd-attrs: atime:1601969067/ctime:1601287013/gid:0/gname:root/md5:fd3066a2b8b805e905aeb073afd970cf/mode:33188/mtime:1601287013/uid:0/uname:root
[root@ceph-node01 gwj]#

# 11. Copy objects between two buckets
[root@ceph-node01 gwj]# s3cmd cp s3://s3cmd-demo/ip s3://test-demo/
remote copy: 's3://s3cmd-demo/ip' -> 's3://test-demo/ip'
[root@ceph-node01 gwj]# s3cmd cp --recursive s3://s3cmd-demo/.ssh s3://test-demo/
remote copy: 's3://s3cmd-demo/.ssh/authorized_keys' -> 's3://test-demo/.ssh/authorized_keys'
remote copy: 's3://s3cmd-demo/.ssh/id_rsa' -> 's3://test-demo/.ssh/id_rsa'
remote copy: 's3://s3cmd-demo/.ssh/id_rsa.pub' -> 's3://test-demo/.ssh/id_rsa.pub'
remote copy: 's3://s3cmd-demo/.ssh/known_hosts' -> 's3://test-demo/.ssh/known_hosts'
[root@ceph-node01 gwj]#

# 12. Move objects between two buckets
[root@ceph-node01 gwj]# s3cmd ls s3://s3cmd-demo/.swp
2020-10-06 10:24 12288 s3://s3cmd-demo/.swp
[root@ceph-node01 gwj]# s3cmd mv s3://s3cmd-demo/.swp s3://test-demo/
move: 's3://s3cmd-demo/.swp' -> 's3://test-demo/.swp'
[root@ceph-node01 gwj]# s3cmd ls s3://test-demo/.swp
2020-10-06 10:36 12288 s3://test-demo/.swp
[root@ceph-node01 gwj]# s3cmd ls s3://s3cmd-demo/.swp
[root@ceph-node01 gwj]#

# 13. List the files and directories that would be synced, without syncing them (dry run)
[root@ceph-node01 ~]# s3cmd sync --dry-run ./ s3://s3cmd-demo
upload: './.swp' -> 's3://s3cmd-demo/.swp'
upload: './ip' -> 's3://s3cmd-demo/ip'
upload: './.cache/abrt/lastnotification' -> 's3://s3cmd-demo/.cache/abrt/lastnotification'
remote copy: 'size.log' -> 'gwj/size.log'
WARNING: Exiting now because of --dry-run
[root@ceph-node01 ~]#

# 14. Delete files from the bucket that no longer exist locally
[root@ceph-node01 a]# ls
10.txt 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt
[root@ceph-node01 a]#
[root@ceph-node01 a]# s3cmd ls s3://test2-demo/
2020-10-06 11:17 43 s3://test2-demo/1.txt
2020-10-06 11:17 43 s3://test2-demo/10.txt
2020-10-06 11:17 43 s3://test2-demo/2.txt
2020-10-06 11:17 43 s3://test2-demo/3.txt
2020-10-06 11:17 43 s3://test2-demo/4.txt
2020-10-06 11:17 43 s3://test2-demo/5.txt
2020-10-06 11:17 43 s3://test2-demo/6.txt
2020-10-06 11:17 43 s3://test2-demo/7.txt
2020-10-06 11:17 43 s3://test2-demo/8.txt
2020-10-06 11:17 43 s3://test2-demo/9.txt
[root@ceph-node01 a]# rm -rf 10.txt
[root@ceph-node01 a]# s3cmd sync --delete-removed ./ s3://test2-demo/
delete: 's3://test2-demo/10.txt'
[root@ceph-node01 a]# s3cmd ls s3://test2-demo/
2020-10-06 11:17 43 s3://test2-demo/1.txt
2020-10-06 11:17 43 s3://test2-demo/2.txt
2020-10-06 11:17 43 s3://test2-demo/3.txt
2020-10-06 11:17 43 s3://test2-demo/4.txt
2020-10-06 11:17 43 s3://test2-demo/5.txt
2020-10-06 11:17 43 s3://test2-demo/6.txt
2020-10-06 11:17 43 s3://test2-demo/7.txt
2020-10-06 11:17 43 s3://test2-demo/8.txt
2020-10-06 11:17 43 s3://test2-demo/9.txt
[root@ceph-node01 a]#

Accessing RGW with Swift

1. Create a Swift subuser

[root@ceph-node01 ceph-deploy]# radosgw-admin subuser create --uid ceph-s3-user --subuser=ceph-s3-user:swift --access=full
{
    "user_id": "ceph-s3-user",
    "display_name": "Ceph S3 User Demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "ceph-s3-user:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "V3J9L4M1WKV5O5ECAKPU",
            "secret_key": "f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw"
        }
    ],
    "swift_keys": [
        {
            "user": "ceph-s3-user:swift",
            "secret_key": "ZIOOU8Xcfe3m6ZZapK5P2rU0GGPaiS31chy9yvMW"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

[root@ceph-node01 ceph-deploy]#

2. Create a secret key for the Swift subuser

[root@ceph-node01 ceph-deploy]# radosgw-admin key create --subuser=ceph-s3-user:swift --key-type=swift --gen-secret
{
    "user_id": "ceph-s3-user",
    "display_name": "Ceph S3 User Demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "ceph-s3-user:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "V3J9L4M1WKV5O5ECAKPU",
            "secret_key": "f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw"
        }
    ],
    "swift_keys": [
        {
            "user": "ceph-s3-user:swift",
            "secret_key": "0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

[root@ceph-node01 ceph-deploy]#

3. Install the Swift client tool with pip

# Note: if pip is already installed, skip this step
[root@ceph-node01 ceph-deploy]# curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
。。。
[root@ceph-node01 ceph-deploy]# python get-pip.py
。。。
[root@ceph-node01 ceph-deploy]# pip install python-swiftclient
。。。

4. Use the swift command-line tool

[root@ceph-node01 ceph-deploy]# swift -A http://100.73.18.152/auth -U ceph-s3-user:swift -K 0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE list
ceph-s3-bucket
s3cmd-demo
[root@ceph-node01 ceph-deploy]#

5. Configure the credentials as environment variables

[root@ceph-node01 ceph-deploy]# cat /etc/profile
。。。
export ST_AUTH=http://100.73.18.152/auth
export ST_USER=ceph-s3-user:swift
export ST_KEY=0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE
[root@ceph-node01 ceph-deploy]# source /etc/profile
[root@ceph-node01 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
[root@ceph-node01 ceph-deploy]#
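
The same Swift credentials can also be used from Python through python-swiftclient, which was installed with pip above. The sketch below is not part of the original article: it lists the containers and reads back the fstab object uploaded to s3cmd-demo with s3cmd earlier, which also illustrates the unified S3/Swift namespace.

#!/usr/bin/env python
# List containers and read an object through RGW's Swift API.
# A sketch only: assumes the swift subuser key created above and the
# fstab object uploaded to the s3cmd-demo bucket earlier with s3cmd.
from swiftclient.client import Connection

conn = Connection(
    authurl='http://100.73.18.152/auth',
    user='ceph-s3-user:swift',
    key='0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE')

# list every container (bucket) visible to this account
headers, containers = conn.get_account()
for c in containers:
    print("%s\t%s objects\t%s bytes" % (c['name'], c['count'], c['bytes']))

# objects written through the S3 API live in the same namespace,
# so the fstab uploaded with s3cmd can be read back via Swift
headers, body = conn.get_object('s3cmd-demo', 'fstab')
print(body)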

6. Create a bucket

[root@ceph-node01 ceph-deploy]# swift post swift-demo
[root@ceph-node01 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
[root@ceph-node01 ceph-deploy]#

7. Upload a single file (test)

[root@ceph-node01 ceph-deploy]# swift upload swift-demo /etc/fstab
Object HEAD failed: http://100.73.18.152/swift/v1/swift-demo/etc/fstab 416 Requested Range Not Satisfiable
[root@ceph-node01 ceph-deploy]#

8. Retry the upload

[root@ceph-node01 a]# swift upload swift-demo /etc/fstab
etc/fstab
[root@ceph-node01 a]#

9. Common operations

# 1. List all buckets
[root@ceph-node01 a]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
test-demo
test2-demo
[root@ceph-node01 a]#

# 2. List all buckets with object counts and sizes (--lh)
[root@ceph-node01 a]# swift list --lh
    0 0 2020-10-06 04:13:10 ceph-s3-bucket
   37 3.6M 2020-10-06 04:34:49 s3cmd-demo
 2360 33M 2020-10-06 08:07:55 swift-demo
    7 16K 2020-10-06 10:32:02 test-demo
    9 387 2020-10-06 11:17:00 test2-demo
 2.4K 36M
[root@ceph-node01 a]#

# 3. List the contents of a single bucket
[root@ceph-node01 a]# swift list swift-demo

# 4. Upload a single file to a bucket
[root@ceph-node01 a]# swift upload swift-demo /etc/fstab
etc/fstab
[root@ceph-node01 a]#

# 5. Upload a directory to the specified bucket
[root@ceph-node01 a]# swift upload swift-demo /etc/

# 6. Show account status (swift stat)
[root@ceph-node01 a]# swift stat
                                    Account: v1
                                 Containers: 5
                                    Objects: 2413
                                      Bytes: 38701415
Objects in policy "default-placement-bytes": 0
  Bytes in policy "default-placement-bytes": 0
   Containers in policy "default-placement": 5
      Objects in policy "default-placement": 2413
        Bytes in policy "default-placement": 38701415
                     X-Openstack-Request-Id: tx000000000000000001302-005f7c5afd-a638-default
                X-Account-Bytes-Used-Actual: 45948928
                                 X-Trans-Id: tx000000000000000001302-005f7c5afd-a638-default
                                X-Timestamp: 1601985277.38095
                               Content-Type: text/plain; charset=utf-8
                              Accept-Ranges: bytes
[root@ceph-node01 a]#

# 7. Create a bucket
[root@ceph-node01 a]# swift post swift-test
[root@ceph-node01 a]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
swift-test
test-demo
test2-demo
[root@ceph-node01 a]#

# 8. Delete a bucket
[root@ceph-node01 a]# swift delete swift-demo

# 9. Delete a specific object
[root@ceph-node01 a]# swift delete swift-test root/a/1.txt
root/a/1.txt
[root@ceph-node01 a]#

# 10. Use -S to upload large files in segments (an S3-side multipart sketch follows this block)
[root@ceph-node01 a]# swift upload swift-test /home/log.txt
home/log.txt
[root@ceph-node01 a]# swift upload swift-test -S 102400000 /home/log2.txt
home/log2.txt segment 5
home/log2.txt segment 3
home/log2.txt segment 1
home/log2.txt segment 0
home/log2.txt segment 2
home/log2.txt segment 4
home/log2.txt
[root@ceph-node01 a]#
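
On the S3 side, the counterpart of segmented upload is multipart upload, which boto supports as well. Below is a minimal sketch, not from the original article: the file name bigfile.bin and the 100 MB part size are made up for illustration, and the connection settings are reused from s3client.py.

#!/usr/bin/env python
# Multipart upload through the S3 API with boto, the S3-side counterpart
# of "swift upload -S <size>". A sketch: bigfile.bin and the 100 MB part
# size are illustrative; connection settings are reused from s3client.py.
import math
import os

import boto
import boto.s3.connection

conn = boto.connect_s3(
        aws_access_key_id='V3J9L4M1WKV5O5ECAKPU',
        aws_secret_access_key='f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw',
        host='ceph-node01', port=80, is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat())

bucket = conn.get_bucket('s3cmd-demo')

source = 'bigfile.bin'
part_size = 100 * 1024 * 1024                       # 100 MB per part
total = os.path.getsize(source)
parts = int(math.ceil(total / float(part_size)))

mp = bucket.initiate_multipart_upload('bigfile.bin')
with open(source, 'rb') as fp:
    for i in range(parts):
        # each part reads at most part_size bytes from the current offset
        remaining = total - fp.tell()
        mp.upload_part_from_file(fp, part_num=i + 1,
                                 size=min(part_size, remaining))
mp.complete_upload()
print("uploaded %s in %d parts" % (source, parts))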

Summary

From concepts through installation to hands-on usage, this article has given a brief introduction to the Ceph object storage command-line tools.