Hypertable + Ceph Distributed File System
阿新 • Published: 2021-06-05
I have recently been studying Hypertable. Hypertable can be installed either standalone or on top of a distributed file system; the standalone installation is mainly for development environments, while production deployments generally use a distributed file system. Hypertable supports the following combinations:
Hypertable + HDFS(Hadoop)
Hypertable + KFS
Hypertable + MapR
Hypertable + ThriftBroker
For the first option: I chose Hypertable over HBase precisely because I dislike Java, so I am not going to use Hadoop.
For the second, Hypertable + KFS: I looked at the KFS Subversion repository and the last commit was in 2011, so I dropped KFS.
As for the third, MapR: I had never heard of it and there is little material about it.
That left the last option: ThriftBroker can work with Ceph, so I decided to give that a try.
Hypertable installation reference:
http://netkiller.github.io/nosql/hypertable/index.html
Below is the Ceph installation.
Ceph
6.1. Installation on Ubuntu
$ apt-cache search ceph
ceph - distributed storage
ceph-common - common utilities to mount and interact with a ceph filesystem
ceph-common-dbg - debugging symbols for ceph-common
ceph-dbg - debugging symbols for ceph
ceph-fs-common - common utilities to mount and interact with a ceph filesystem
ceph-fs-common-dbg - debugging symbols for ceph-fs-common
ceph-mds-dbg - debugging symbols for ceph
gceph - Graphical ceph cluster status utility
gceph-dbg - debugging symbols for gceph
libcephfs-dev - Ceph distributed file system client library (development files)
libcephfs1 - Ceph distributed file system client library
libcephfs1-dbg - debugging symbols for libcephfs1
librados-dev - RADOS distributed object store client library (development files)
librados2 - RADOS distributed object store client library
librados2-dbg - debugging symbols for librados2
librbd-dev - RADOS block device client library (development files)
librbd1 - RADOS block device client library
librbd1-dbg - debugging symbols for librbd1
ceph-mds - distributed filesystem service
ceph-resource-agents - OCF-compliant resource agents for Ceph
obsync - synchronize data between cloud object storage providers or a local directory
python-ceph - Python libraries for the Ceph distributed filesystem
$ sudo apt-get install ceph
$ sudo apt-get install ceph-mds
Create the data directories:
sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a
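The mkcephfs step below reads /etc/ceph/ceph.conf, which the original post does not show. A minimal sketch matching the directories above and the addresses/hostname that appear in the logs later (one monitor at 192.168.6.2, two OSDs and one MDS on host "ubuntu") might look like the following; the exact option values are assumptions based on Ceph documentation of that era, not the author's actual file:

```ini
[global]
        auth supported = cephx
        keyring = /etc/ceph/ceph.keyring

[mon]
        mon data = /var/lib/ceph/mon/ceph-$id

[mon.a]
        host = ubuntu
        mon addr = 192.168.6.2:6789

[mds.a]
        host = ubuntu

[osd]
        osd data = /var/lib/ceph/osd/ceph-$id
        osd journal = /var/lib/ceph/osd/ceph-$id/journal
        filestore xattr use omap = true

[osd.0]
        host = ubuntu

[osd.1]
        host = ubuntu
```

The `$id` metavariable expands to each daemon's id (0, 1, a), which is why the directory names above follow the ceph-0 / ceph-1 / ceph-a pattern.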
Create the key files:
$ cd /etc/ceph
$ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
The key creation process looks like this:
$ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
temp dir is /tmp/mkcephfs.4rUAn1MJYV
preparing monmap in /tmp/mkcephfs.4rUAn1MJYV/monmap
/usr/bin/monmaptool --create --clobber --add a 192.168.6.2:6789 --print /tmp/mkcephfs.4rUAn1MJYV/monmap
/usr/bin/monmaptool: monmap file /tmp/mkcephfs.4rUAn1MJYV/monmap
/usr/bin/monmaptool: generated fsid a5afe011-bfde-4784-8d3d-e488418897d6
epoch 0
fsid a5afe011-bfde-4784-8d3d-e488418897d6
last_changed 2013-04-10 18:05:46.409761
created 2013-04-10 18:05:46.409761
0: 192.168.6.2:6789/0 mon.a
/usr/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.4rUAn1MJYV/monmap (1 monitors)
=== osd.0 ===
2013-04-10 18:05:46.899898 7f8b26ec8780 -1 filestore(/var/lib/ceph/osd/ceph-0) limited size xattrs -- filestore_xattr_use_omap enabled
2013-04-10 18:05:47.303918 7f8b26ec8780 -1 filestore(/var/lib/ceph/osd/ceph-0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-04-10 18:05:47.658550 7f8b26ec8780 -1 created object store /var/lib/ceph/osd/ceph-0 journal /var/lib/ceph/osd/ceph-0/journal for osd.0 fsid a5afe011-bfde-4784-8d3d-e488418897d6
2013-04-10 18:05:47.659360 7f8b26ec8780 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2013-04-10 18:05:47.659489 7f8b26ec8780 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
=== osd.1 ===
2013-04-10 18:05:48.039253 7f27289be780 -1 filestore(/var/lib/ceph/osd/ceph-1) limited size xattrs -- filestore_xattr_use_omap enabled
2013-04-10 18:05:48.338222 7f27289be780 -1 filestore(/var/lib/ceph/osd/ceph-1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-04-10 18:05:48.734861 7f27289be780 -1 created object store /var/lib/ceph/osd/ceph-1 journal /var/lib/ceph/osd/ceph-1/journal for osd.1 fsid a5afe011-bfde-4784-8d3d-e488418897d6
2013-04-10 18:05:48.734992 7f27289be780 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
2013-04-10 18:05:48.735294 7f27289be780 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring
=== mds.a ===
creating private key for mds.a keyring /var/lib/ceph/mds/ceph-a/keyring
creating /var/lib/ceph/mds/ceph-a/keyring
Building generic osdmap from /tmp/mkcephfs.4rUAn1MJYV/conf
/usr/bin/osdmaptool: osdmap file '/tmp/mkcephfs.4rUAn1MJYV/osdmap'
/usr/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.4rUAn1MJYV/osdmap
Generating admin key at /tmp/mkcephfs.4rUAn1MJYV/keyring.admin
creating /tmp/mkcephfs.4rUAn1MJYV/keyring.admin
Building initial monitor keyring
added entity mds.a auth auth(auid = 18446744073709551615 key=AQB8OWVR0JMKMhAAZNnl4D2JkWIppS7gkdYkhw== with 0 caps)
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQB7OWVRIFdNJxAAHjgfc+J1uVTMj4uVLtTSaQ== with 0 caps)
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQB8OWVROCLPKxAAJ/Jim86K7Ip1PGnCw3Fb/g== with 0 caps)
=== mon.a ===
/usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
placing client.admin keyring in ceph.keyring
$ ls
ceph.conf ceph.keyring
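The generated ceph.keyring stores each entity's secret in an INI-like format. If you later need the raw base64 key (for example to pass as a mount option), it can be pulled out with awk. A small sketch, using a sample keyring file written to /tmp with the mds.a key from the output above:

```shell
# Write a sample keyring in the format Ceph uses (key taken from the
# mkcephfs transcript above; in practice you would read /etc/ceph/ceph.keyring).
cat > /tmp/sample.keyring <<'EOF'
[mds.a]
	key = AQB8OWVR0JMKMhAAZNnl4D2JkWIppS7gkdYkhw==
EOF

# Lines look like "key = <base64>", so the secret is the third field.
SECRET=$(awk '$1 == "key" {print $3}' /tmp/sample.keyring)
echo "$SECRET"
```

The same extraction works against the real /etc/ceph/ceph.keyring, filtered to the section of the entity you need.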
Start Ceph:
$ sudo service ceph -a start
$ sudo ceph health
The startup process looks like this:
$ sudo service ceph -a start
=== mon.a ===
Starting Ceph mon.a on ubuntu...
starting mon.a rank 0 at 192.168.6.2:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid a5afe011-bfde-4784-8d3d-e488418897d6
=== mds.a ===
Starting Ceph mds.a on ubuntu...
starting mds.a at :/0
=== osd.0 ===
Starting Ceph osd.0 on ubuntu...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
=== osd.1 ===
Starting Ceph osd.1 on ubuntu...
starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
$ sudo ceph health
HEALTH_OK
$ sudo mkdir /mnt/ceph
$ sudo mount -t ceph 192.168.6.2:6789:/ /mnt/ceph
Check how the filesystem is mounted:
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/ubuntu-root ext4 49263424 8860876 37900100 19% /
udev devtmpfs 2014956 4 2014952 1% /dev
tmpfs tmpfs 809808 1612 808196 1% /run
none tmpfs 5120 0 5120 0% /run/lock
none tmpfs 2024516 0 2024516 0% /run/shm
none tmpfs 102400 0 102400 0% /run/user
/dev/vda1 ext2 233191 80600 140150 37% /boot
192.168.6.2:6789:/ ceph 98526208 22726656 75799552 24% /mnt/ceph
Try creating a file:
$ sudo touch /mnt/ceph/hello
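Beyond touching a file, a scripted way to confirm that a path really is a separately mounted filesystem is to look for it in /proc/mounts, which is the same information `df -T` reads. A small generic sketch (the /mnt/ceph path is from the walkthrough above; the check itself works for any mount point):

```shell
# Report whether a directory appears as a mount point in /proc/mounts.
# /proc/mounts fields are: device, mountpoint, fstype, options, ... so a
# mounted directory shows up surrounded by spaces in the second column.
is_mounted() {
    if grep -qs " $1 " /proc/mounts; then
        echo "$1 is mounted"
    else
        echo "$1 is not mounted"
    fi
}

is_mounted /mnt/ceph
```

On the machine from this walkthrough this would report `/mnt/ceph is mounted`; on a machine without the Ceph mount it reports `not mounted`.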
Reposted from: https://my.oschina.net/neochen/blog/121840