Shipping AWS S3 log files to ELK via a server
阿新 • Posted 2018-05-16
Goal: view the large volume of logs that S3 produces through ELK.
The overall approach: first sync the logs down from S3 with the s3cmd tool, then write them into a single file, and finally display them in ELK.
1. Install the s3cmd tool
Installing and using s3cmd. References:

https://www.cnblogs.com/xd502djj/p/3604783.html
https://github.com/s3tools/s3cmd/releases

First download the s3cmd package from GitHub:
```shell
mkdir -p /home/tools/ && cd /home/tools/
wget https://github.com/s3tools/s3cmd/releases/download/v2.0.1/s3cmd-2.0.1.tar.gz
tar xf s3cmd-2.0.1.tar.gz
mv s3cmd-2.0.1 /usr/local/
mv /usr/local/s3cmd-2.0.1 /usr/local/s3cmd
ln -s /usr/local/s3cmd/s3cmd /usr/bin/s3cmd
```
After installation, run `s3cmd --configure` to set the keys. The two that matter are the access key and the secret key; once the prompts are finished, the configuration file below is generated.
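If you would rather skip the interactive prompts, the two required keys can be written to `~/.s3cfg` directly. This is a minimal sketch with placeholder key values; the endpoint lines match the defaults in the generated file below.

```shell
# Minimal non-interactive alternative to `s3cmd --configure`.
# The key values are placeholders -- substitute your own.
cat > ~/.s3cfg <<'EOF'
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
use_https = False
EOF
```

Any option not listed falls back to s3cmd's built-in default, so a file this small is enough to start syncing.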
```shell
[root@prod-sg s3cmd]# cat /root/.s3cfg
[default]
access_key = AKIAI4Q3PTOQ5xxxxxxx        # AWS S3 access key (required)
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = aviagames
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = 0uoniJrn9qQhAnxxxxxxCZxxxxxxxxxxxx  # AWS S3 secret key (required)
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
```
2. With s3cmd installed, write the sync script
```shell
#!/bin/bash
# Enter the S3 sync directory
mkdir -p /server/s3dir/logs/ && cd /server/s3dir/logs/
# Every 5 minutes, dump the S3 log listing into S3.log
#while true
#do
/usr/bin/s3cmd ls s3://bigbearsdk/logs/ > S3.log
# Run the sync so the server's logs match what is on S3
/usr/bin/s3cmd sync --skip-existing s3://bigbearsdk/logs/ ./
#done
# Sort today's entries and collect the filenames into one file
grep $(date +%F) S3.log | sort -nk1,2 | awk -F [/] '{print $NF}' > date.log
sed -i 's#\_#\\_#g' date.log
sed -i 's#<#\\\<#g' date.log
sed -i 's#\ #\\ #g' date.log
sed -i 's#>#\\\>#g' date.log
##[ -f ELK.log ] &&
#{
#  cat ELK.log >> ELK_$(date +%F).log
#  echo > ELK.log
#  find /home/tools/ -name ELK*.log -mtime +7 | xargs rm -f
#}
# Append the contents of every file to the combined S3 upload log
while read line
do
    echo "$line" | sed 's#(#\\\(#g' | sed 's#)#\\\)#g' | sed 's#\_#\\_#g' | sed 's#<#\\\<#g' | sed 's#>#\\\>#g' | sed 's#\ #\\ #g' > while.log
    head -1 while.log | xargs cat >> /server/s3dir/s3elk.log
done < date.log
```
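The chain of `sed` substitutions in the script exists because S3 object names can contain spaces, parentheses, underscores, and angle brackets, which the shell would otherwise mangle when the name is handed to `xargs cat`. A minimal reproduction of the same escaping on a sample filename (the name here is invented for illustration):

```shell
# Escape the characters the script targets: ( ) _ < > and spaces.
name='app_log (prod) <v1>.gz'
escaped=$(printf '%s\n' "$name" | sed 's#(#\\(#g; s#)#\\)#g; s#_#\\_#g; s#<#\\<#g; s#>#\\>#g; s# #\\ #g')
echo "$escaped"   # app\_log\ \(prod\)\ \<v1\>.gz
```

With every special character backslash-escaped, `xargs` receives the name as a single argument instead of splitting it or interpreting the brackets.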
With this in place, everything from the S3 logs ends up in the single file s3elk.log, which ELK can then monitor.
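The post stops before the ELK side. One common way to ship `s3elk.log` into the stack is a Filebeat input pointing at the file; the fragment below is a sketch assuming Filebeat forwards to a Logstash instance on the same host (the host and port are assumptions, not part of this setup's configuration).

```yaml
# /etc/filebeat/filebeat.yml (sketch)
filebeat.inputs:
  - type: log
    paths:
      - /server/s3dir/s3elk.log

output.logstash:
  hosts: ["127.0.0.1:5044"]
```

To get the 5-minute cadence the script's commented-out `while true` loop hints at, the script can instead be run from cron (e.g. a `*/5 * * * *` entry).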
To be continued...
If interested, add me on WeChat: Dellinger_blue