FastDFS Cluster Deployment
FastDFS is an open-source, lightweight distributed file system made up of three parts: tracker servers, storage servers, and clients. It targets mass file storage and is especially well suited to online services built around small and medium files (recommended range: 4KB < file_size < 500MB). In production, FastDFS is normally deployed as a cluster to improve availability and concurrency.
Deployment architecture:
Environment IPs (with the firewall disabled on every machine):
Tracker: 192.168.18.43
Group 1:
S1:192.168.110.71
S2:192.168.110.91
Group 2:
S3:192.168.100.90
S4:192.168.100.194
Note: multiple trackers can be deployed for load balancing; resources are limited here, so only one is used.
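These are CentOS-era systems (nginx 1.8 and a 2.6.x kernel are mentioned later), so a plausible way to disable the firewall on each machine is the classic iptables service commands:
service iptables stop
chkconfig iptables off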
Since nginx will be installed later, install its build dependencies on every machine:
yum -y install zlib pcre pcre-devel zlib-devel
I. Install the tracker
1. Install the dependency libfastcommon
unzip libfastcommon-master.zip
cd libfastcommon
./make.sh
./make.sh install
2. Install FastDFS
unzip fastdfs.zip
cd fastdfs
./make.sh
./make.sh install
Default installation directory: /usr/bin
Copy the configuration files from the source folder to /etc/fdfs: cp ./conf/* /etc/fdfs/
3. Configure
Edit tracker.conf in the configuration directory.
Usually only the following parameters need to be changed:
disabled=false # enable this configuration file
port=22122 # tracker port
base_path=/home/fastdfs # tracker data and log directory (must be created in advance)
http.server_port=8080 # HTTP port
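The base_path directory does not exist by default; create it before starting the tracker:
mkdir -p /home/fastdfs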
4. Start
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
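As a quick sanity check (assuming net-tools is installed), confirm the tracker is listening on its port:
netstat -lntp | grep 22122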
II. Install nginx as a proxy on the tracker
The nginx instance on the tracker provides a reverse proxy, load balancing, and caching for HTTP access.
1. Install nginx
tar -zxvf nginx-1.8.0.tar.gz
tar -zxvf ngx_cache_purge-2.3.tar.gz
cd nginx-1.8.0
./configure --prefix=/usr/local/nginx --add-module=/root/ngx_cache_purge-2.3
make
make install
If ./configure reports an error, a required dependency is probably missing; install it and run ./configure again.
nginx and the ngx_cache_purge module are now installed under /usr/local/nginx.
2. Configure nginx
user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    # cache-related parameters
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    sendfile on;
    tcp_nopush on;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    # cache path, directory levels, shared-memory zone size, max disk usage, and inactivity expiry
    proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=http-cache:500m max_size=10g inactive=30d;
    proxy_temp_path /var/cache/nginx/proxy_cache/tmp;
    keepalive_timeout 65;
    # upstreams for the storage groups
    upstream fdfs_group1 {
        server 192.168.110.71:8090 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.110.91:8090 weight=1 max_fails=2 fail_timeout=30s;
    }
    upstream fdfs_group2 {
        server 192.168.100.90:8090 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.100.194:8090 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 80;
        server_name localhost;
        charset utf-8;
        #access_log /usr/local/nginx/logs/localhost.access.log main;
        location /group1/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group1;
            expires 30d;
        }
        location /group2/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group2;
            expires 30d;
        }
        # restrict who may purge cache entries
        location ~ /purge(/.*) {
            allow 127.0.0.1;
            allow 172.16.1.0/24;
            deny all;
            proxy_cache_purge http-cache $1$is_args$args;
        }
    }
}
Create the cache directory /var/cache/nginx/proxy_cache/tmp (nginx does not create it automatically).
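For example:
mkdir -p /var/cache/nginx/proxy_cache/tmp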
3. Start
/usr/local/nginx/sbin/nginx
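If startup fails, the configuration can be syntax-checked first with:
/usr/local/nginx/sbin/nginx -t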
III. Install storage
1. Install
Follow the first two steps of the tracker installation (libfastcommon, then FastDFS).
2. Configure
Edit storage.conf in the configuration directory.
Only the following parameters need to be changed:
disabled=false # enable this configuration file
group_name=group1 # group name; adjust to match the actual server
port=23000 # storage port
base_path=/home/fastdfs # storage log directory (must be created in advance)
store_path_count=1 # number of store paths; must match the number of store_pathN entries
store_path0=/home/fastdfs # store path
tracker_server=192.168.18.43:22122 # tracker IP address and port
http.server_port=8080 # HTTP port
Create the directory: mkdir /home/fastdfs
3. Run
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
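To confirm the storage has started and registered with the tracker, watch its log, which is written under the base_path configured above:
tail -n 50 /home/fastdfs/logs/storaged.log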
Note:
Install storage on all the other machines and confirm each runs normally. Adjust the group_name parameter in the configuration file to match the machine:
group1:192.168.110.71,192.168.110.91
group2:192.168.100.90,192.168.100.194
All storages within a group must also use the same port.
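For example, on the group2 machines (192.168.100.90 and 192.168.100.194) the only change to the storage.conf shown above is:
group_name=group2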
IV. Install nginx on the storage servers
The nginx instance on each storage provides HTTP access to the files and, via the fastdfs-nginx-module, works around the synchronization delay between storage servers in the same group.
1. Unpack the fastdfs-nginx-module plugin
unzip fastdfs-nginx-module.zip
2. Install nginx
tar -zxvf nginx-1.8.0.tar.gz
cd nginx-1.8.0
./configure --prefix=/usr/local/nginx --add-module=../fastdfs-nginx-module/src
make
make install
Installation directory: /usr/local/nginx
If nginx fails with the error: [emerg] 13513#0: eventfd() failed (38: Function not implemented)
Cause: nginx was compiled with --with-file-aio, which requires Linux kernel 2.6.22 or later, but the server runs 2.6.18. Either rebuild without that module or use an older nginx version.
3. Configure
1) Configure the FastDFS nginx module
Copy the module's configuration file into the FastDFS configuration directory:
cp fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
Edit mod_fastdfs.conf in the /etc/fdfs configuration directory, set the storage information, and save.
Usually only the following parameters need to be changed:
base_path=/home/fastdfs # log directory
tracker_server=192.168.18.43:22122 # tracker IP address and port
storage_server_port=23000 # storage port
group_name=group1 # group name of this server
url_have_group_name = true # whether file URLs contain the group name
store_path_count=1 # number of store paths; must match the number of store_pathN entries
store_path0=/home/fastdfs # store path
http.need_find_content_type=true # derive the content type from the file extension (true when used with nginx)
group_count = 2 # number of groups
Append the details of the two groups at the end:
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs
Create an M00 symbolic link to the storage data directory:
ln -s /home/fastdfs/data /home/fastdfs/data/M00
2) Configure nginx
vi nginx.conf
Set the worker user, and add a location for the group paths inside the server block, which must listen on port 8090 (the port referenced by the tracker's upstream configuration):
user root;
server {
    listen 8090;
    location ~ /group[1-2]/M00 {
        root /home/fastdfs/data;
        ngx_fastdfs_module;
    }
}
4. Start
/usr/local/nginx/sbin/nginx
Note:
Install nginx on all the other storage machines and confirm each runs normally. The group name in the configuration file must be adjusted to match the machine:
group1:192.168.110.71,192.168.110.91
group2:192.168.100.90,192.168.100.194
Also note that every storage nginx listens on port 8090, matching the upstream configuration on the tracker.
At this point all configuration is complete.
V. Test
Configure /etc/fdfs/client.conf:
base_path=/home/fastdfs # log directory
tracker_server=192.168.18.43:22122 # tracker IP address and port
http.tracker_server_port=8080 # tracker HTTP port
Upload a file to FastDFS with fdfs_upload_file; the program returns the file's URL:
#fdfs_upload_file /etc/fdfs/client.conf 40-15052PZK5.jpg
group1/M00/00/00/wKhuR1Vmh_2ADmdfAAF1dmVtk4w934.jpg
Then open it in a browser; the file is served correctly:
http://192.168.18.43/group1/M00/00/00/wKhuR1Vmh_2ADmdfAAF1dmVtk4w934.jpg
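The same check can be scripted (assuming curl is available); an HTTP 200 response shows the tracker-side nginx proxying the file from the group1 storages:
curl -I http://192.168.18.43/group1/M00/00/00/wKhuR1Vmh_2ADmdfAAF1dmVtk4w934.jpg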
Note: fdfs_monitor can be used to check the status of the tracker and all groups:
# fdfs_monitor /etc/fdfs/client.conf
[2015-05-27 20:19:59] DEBUG - base_path=/home/fastdfs, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=1, server_index=0
tracker server is 192.168.18.43:22122
group count: 2
Group 1:
group name = group1
disk total space = 45438 MB
disk free space = 33920 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8080
store path count = 1
subdir count per path = 256
current write server index = 1
current trunk file id = 0
Storage 1:
id = 192.168.110.71
ip_addr = 192.168.110.71 (localhost) ACTIVE
http domain =
version = 5.06
join time = 2015-05-27 01:37:04
up time =
total storage = 95217 MB
free storage = 47563 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 8080
current_write_path = 0
source storage id =
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 2
total_upload_count = 1
success_upload_count = 1
total_append_count = 0
success_append_count = 0
total_modify_count = 0
success_modify_count = 0
total_truncate_count = 0
success_truncate_count = 0
total_set_meta_count = 0
success_set_meta_count = 0
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
total_upload_bytes = 95606
success_upload_bytes = 95606
total_append_bytes = 0
success_append_bytes = 0
total_modify_bytes = 0
success_modify_bytes = 0
stotal_download_bytes = 0
success_download_bytes = 0
total_sync_in_bytes = 0
success_sync_in_bytes = 0
total_sync_out_bytes = 0
success_sync_out_bytes = 0
total_file_open_count = 1
success_file_open_count = 1
total_file_read_count = 0
success_file_read_count = 0
total_file_write_count = 1
success_file_write_count = 1
last_heart_beat_time = 2015-05-27 20:19:49
last_source_update = 2015-05-27 20:14:04
last_sync_update = 1969-12-31 16:00:00
last_synced_timestamp = 1969-12-31 16:00:00
Storage 2:
id = 192.168.110.91
ip_addr = 192.168.110.91 (localhost) ACTIVE
http domain =
version = 5.06
join time = 2015-05-27 19:35:36
up time = 2015-05-27 19:35:36
total storage = 45438 MB
free storage = 33920 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 8080
current_write_path = 0
source storage id = 192.168.110.71
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 1
total_upload_count = 0
success_upload_count = 0
total_append_count = 0
success_append_count = 0
total_modify_count = 0
success_modify_count = 0
total_truncate_count = 0
success_truncate_count = 0
total_set_meta_count = 0
success_set_meta_count = 0
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
total_upload_bytes = 0
success_upload_bytes = 0
total_append_bytes = 0
success_append_bytes = 0
total_modify_bytes = 0
success_modify_bytes = 0
stotal_download_bytes = 0
success_download_bytes = 0
total_sync_in_bytes = 95606
success_sync_in_bytes = 95606
total_sync_out_bytes = 0
success_sync_out_bytes = 0
total_file_open_count = 1
success_file_open_count = 1
total_file_read_count = 0
success_file_read_count = 0
total_file_write_count = 1
success_file_write_count = 1
last_heart_beat_time = 2015-05-27 20:19:51
last_source_update = 1969-12-31 16:00:00
last_sync_update = 2015-05-27 20:14:07
last_synced_timestamp = 2015-05-27 20:14:05 (-1s delay)
Group 2:
group name = group2
disk total space = 9916 MB
disk free space = 7434 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8080
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0
Storage 1:
id = 192.168.100.194
ip_addr = 192.168.100.194 (localhost) ACTIVE
http domain =
version = 5.06
join time = 2015-05-27 20:03:37
up time = 2015-05-27 20:03:37
total storage = 47368 MB
free storage = 37371 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 8080
current_write_path = 0
source storage id = 192.168.100.90
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 1
total_upload_count = 0
success_upload_count = 0
total_append_count = 0
success_append_count = 0
total_modify_count = 0
success_modify_count = 0
total_truncate_count = 0
success_truncate_count = 0
total_set_meta_count = 0
success_set_meta_count = 0
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
total_upload_bytes = 0
success_upload_bytes = 0
total_append_bytes = 0
success_append_bytes = 0
total_modify_bytes = 0
success_modify_bytes = 0
stotal_download_bytes = 0
success_download_bytes = 0
total_sync_in_bytes = 0
success_sync_in_bytes = 0
total_sync_out_bytes = 0
success_sync_out_bytes = 0
total_file_open_count = 0
success_file_open_count = 0
total_file_read_count = 0
success_file_read_count = 0
total_file_write_count = 0
success_file_write_count = 0
last_heart_beat_time = 2015-05-27 20:19:44
last_source_update = 1969-12-31 16:00:00
last_sync_update = 1969-12-31 16:00:00
last_synced_timestamp = 1969-12-31 16:00:00
Storage 2:
id = 192.168.100.90
ip_addr = 192.168.100.90 (localhost) ACTIVE
http domain =
version = 5.06
join time = 2015-05-27 19:50:27
up time = 2015-05-27 19:50:27
total storage = 9916 MB
free storage = 7434 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 8080
current_write_path = 0
source storage id =
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 1
total_upload_count = 0
success_upload_count = 0
total_append_count = 0
success_append_count = 0
total_modify_count = 0
success_modify_count = 0
total_truncate_count = 0
success_truncate_count = 0
total_set_meta_count = 0
success_set_meta_count = 0
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
total_upload_bytes = 0
success_upload_bytes = 0
total_append_bytes = 0
success_append_bytes = 0
total_modify_bytes = 0
success_modify_bytes = 0
stotal_download_bytes = 0
success_download_bytes = 0
total_sync_in_bytes = 0
success_sync_in_bytes = 0
total_sync_out_bytes = 0
success_sync_out_bytes = 0
total_file_open_count = 0
success_file_open_count = 0
total_file_read_count = 0
success_file_read_count = 0
total_file_write_count = 0
success_file_write_count = 0
last_heart_beat_time = 2015-05-27 20:19:48
last_source_update = 1969-12-31 16:00:00
last_sync_update = 1969-12-31 16:00:00
last_synced_timestamp = 1969-12-31 16:00:00