
Docker daily management

Linux download URL: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
Windows download URL: https://download.docker.com/win/static/stable/x86_64/

A container can be pictured as a memory bubble with holes in it: through those holes it directly accesses the physical host's resources.

Image naming:

  1. If no registry is involved, an image can be named anything
  2. If the image will be pushed to a registry, the name must follow a pattern:
    • server IP:port/category/image name:tag
    • port defaults to 80
    • tag defaults to latest
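The naming rule above can be sketched as a small helper, hypothetical and for illustration only, that splits a full reference and fills in the default tag:

```shell
# Hypothetical helper: split a reference of the form
# server:port/category/name:tag, defaulting the tag to "latest".
parse_image() {
    ref=$1
    tag=latest
    last=${ref##*/}                 # component after the last '/'
    if [ "${last#*:}" != "$last" ]; then
        tag=${ref##*:}              # text after the final ':' is the tag
        ref=${ref%:*}
    fi
    echo "repo=$ref tag=$tag"
}

parse_image 192.168.108.101:5000/cka/centos:v1
parse_image docker.io/nginx
```

Note that the tag separator is only the colon that comes after the last `/`; a colon in the server part (the port) is left alone.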

Image management:

  1. docker pull image #pull an image
  2. docker push image #push an image
  3. docker rmi image #delete an image
  4. docker tag image new_name #tag an image
  5. docker images #list local images
  6. docker save docker.io/mysql > mysql.tar #export an image
  7. docker load -i mysql.tar #import an image
  8. docker save docker.io/nginx docker.io/mysql hub.c.163.com/mengkzhaoyun/cloud/ansible-kubernetes hub.c.163.com/public/centos > all.tar #export several images into a single archive
  9. docker history docker.io/mysql:latest #show the image's layers
  10. docker history docker.io/mysql:latest --no-trunc #show the full, untruncated output

Container management:

  1. docker ps #list running containers
    • -a list all containers
  2. Lifecycle (by default, a container holds exactly one process):
    • the process from the image is the soul, the container is the body
    • docker run docker.io/nginx — the container exits the instant it starts
    • docker ps shows nothing
    • docker ps -a shows the container that ran
  3. docker run -t -i -d docker.io/nginx #start a container
    • -t allocate a terminal
    • -i keep stdin open for interaction
    • -d run in the background
  4. docker run -t -i -d --restart=always docker.io/nginx #the container keeps running after you exit it
  5. docker attach 0d182c82cc13 #enter a container running in the background
  6. docker rm -f 0d182c82cc13 #force-delete a container
  7. docker run -dit --name=c1 --restart=always docker.io/nginx #assign a name
  8. docker stop c1 #stop container c1
  9. docker start c1 #start container c1
  10. docker run -dit --name=c1 --rm docker.io/nginx #temporary container, deleted automatically once it exits
  11. docker run -dit --name=c1 docker.io/nginx sleep 20
  12. docker run -it --name=c2 --restart=always -e name1=tom1 -e name2=tom2 docker.io/tomcat
    • -e sets variables that are passed into the container (echo $name1; echo $name2)
  13. docker run -it --name=db --restart=always -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=devin -e MYSQL_PASSWORD=redhat -e MYSQL_DATABASE=students docker.io/mysql #variables understood by the mysql image
  14. docker inspect db #show details of container db
  15. docker exec db ip a #run the "ip a" command inside db (the command and its arguments must be separate words, not one quoted string)
  16. docker exec -it db bash #open an extra bash process inside the container
  17. docker cp 1.txt db:/opt #copy a file into the container
  18. docker exec db ls /opt
  19. docker cp db:/etc/hosts . #copy a file out of the container
  20. docker attach db #enter the container running in the background
  21. docker run -dit --name=db -p 3306 docker.io/mysql bash #expose container port 3306 on a randomly assigned host port
  22. docker run -dit --name=db -p 8080:3306 docker.io/mysql bash #map container port 3306 to host port 8080
  23. docker top db #show which processes run inside the container
  24. docker logs -f db
    • -f keep following the output
  25. Script to delete every image
    #!/bin/bash
    # collect every repo:tag into a temp file, then remove them one by one
    file=$(mktemp)
    docker images | grep -v TAG | awk '{print $1 ":" $2}' > "$file"
    while read -r image
    do
        docker rmi "$image"
    done < "$file"
    rm -f "$file"
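The pipeline inside the cleanup script can be previewed without touching the daemon. Here it runs against a captured `docker images` listing (the sample text below is an assumption for illustration; the real script reads the live output):

```shell
# Sample `docker images` output, saved to a file
cat > /tmp/images.txt <<'EOF'
REPOSITORY        TAG     IMAGE ID      CREATED       SIZE
docker.io/nginx   latest  605c77e624dd  2 years ago   141MB
docker.io/mysql   5.7     c20987f18b13  2 years ago   448MB
EOF
# drop the header line, then join columns 1 and 2 into repo:tag
grep -v TAG /tmp/images.txt | awk '{print $1 ":" $2}'
```

The result is exactly the `repo:tag` list the `while read` loop feeds to `docker rmi`.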

Data volume management:

  1. docker run -it --name=web -v /data hub.c.163.com/public/centos bash #/data inside the container, backed by a randomly assigned host directory
  2. docker run -it --name=web -v /xx:/data hub.c.163.com/public/centos bash #/data inside the container, mapped to /xx on the host
  3. docker run -it --name=web -v /xx:/data:rw hub.c.163.com/public/centos bash #same mapping, explicitly read-write
  4. docker inspect web #the Mounts entry shows the mappings
    • "Source": "/xx"
    • "Destination": "/data"
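What the Mounts entry looks like can be shown on a trimmed sample of `docker inspect` output (the JSON fragment below is assumed for illustration, not captured from a live container):

```shell
# Trimmed sample of the Mounts section from `docker inspect web`
cat > /tmp/mounts_sample.json <<'EOF'
"Mounts": [
    {
        "Type": "bind",
        "Source": "/xx",
        "Destination": "/data",
        "Mode": "rw"
    }
]
EOF
# Source is the host side of the mapping, Destination the path inside the container
grep -E '"(Source|Destination)"' /tmp/mounts_sample.json
```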

Network management:

   The networks in VMware Workstation are a useful analogy.

  1. docker network list
    • bridge #like the NAT network in Workstation
    • host #reuses the physical host's network stack
  2. docker run -it --name=db --restart=always --net bridge -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=devin -e MYSQL_PASSWORD=redhat -e MYSQL_DATABASE=students docker.io/mysql #specify the bridge network
  3. man -k docker
  4. man docker-network-create
  5. docker network create --driver=bridge --subnet=10.254.0.0/16 --ip-range=10.254.97.0/24 --gateway=10.254.97.254 br0
  6. docker network inspect br0
  7. docker network rm br0

Building a personal blog with wordpress + mysql

  • docker run -dit --name=db --restart=always -v /db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_DATABASE=wordpress hub.c.163.com/library/mysql:5.7
  • docker run -dit --name=blog -v /web:/var/www/html -p 80:80 -e WORDPRESS_DB_HOST=172.17.0.2 -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=redhat -e WORDPRESS_DB_NAME=wordpress hub.c.163.com/library/wordpress #binds by IP address; once the db container stops, its IP is released and may be taken by another container, so the next command is recommended instead.
  • docker run -it --name=blog -v /web:/var/www/html -p 80:80 --link db:xx -e WORDPRESS_DB_HOST=xx -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=redhat -e WORDPRESS_DB_NAME=wordpress hub.c.163.com/public/wordpress bash #gives the db container the alias xx; connecting by alias survives IP reuse, which solves the released-IP problem. With a custom alias such as xx, "-e WORDPRESS_DB_HOST=xx -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=redhat -e WORDPRESS_DB_NAME=wordpress" must be given; if the alias is mysql, the variables can be omitted.

Custom images

Base image + dockerfile produces a temporary container; once that temporary container has been exported as the new image, it is deleted automatically.

  1. Write the dockerfile
    FROM hub.c.163.com/library/centos
    MAINTAINER devin
    
    RUN rm -rf /etc/yum.repos.d/*
    COPY CentOS-Base.repo /etc/yum.repos.d/
    ADD epel.repo /etc/yum.repos.d/
    ENV aa=xyz
    RUN yum makecache
    RUN yum install openssh-clients openssh-server -y
    RUN ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key
    RUN ssh-keygen -t ecdsa -N '' -f /etc/ssh/ssh_host_ecdsa_key
    RUN ssh-keygen -t ed25519 -N '' -f /etc/ssh/ssh_host_ed25519_key
    RUN sed -i '/UseDNS/cUseDNS no' /etc/ssh/sshd_config
    RUN useradd devin
    RUN echo 'root:redhat' | chpasswd
    VOLUME /data1
    USER devin
    EXPOSE 22
    CMD ["/usr/sbin/sshd","-D"]
    • ADD copy; archives are also extracted
    • COPY plain copy only
    • RUN run an OS command
    • ENV set a variable
    • VOLUME declare a directory inside the container
    • USER run as devin from this point on
    • EXPOSE only a marker; it publishes nothing by itself
  2. docker build -t centos:ssh -f dockerfile_v3 .
    • -t: name for the new image
    • -f: which dockerfile to use
    • .: the build context (current directory)
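An end-to-end sketch of step 2, using a throwaway context directory (the paths and the stripped-down dockerfile are assumptions; the `docker build` itself needs a running daemon, so it stays commented out):

```shell
# assemble a minimal build context
mkdir -p /tmp/ctx
cat > /tmp/ctx/dockerfile_v3 <<'EOF'
FROM hub.c.163.com/library/centos
RUN yum install -y openssh-server
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]
EOF

# with a daemon available, the build would be:
# docker build -t centos:ssh -f dockerfile_v3 /tmp/ctx

cat /tmp/ctx/dockerfile_v3
```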

Configure a local docker registry

  1. docker pull hub.c.163.com/library/registry #pull the registry image
  2. docker run -dit --name=myregistry -p 5000:5000 -v /myregistry:/var/lib/registry hub.c.163.com/library/registry #run the registry container
  3. docker tag docker.io/centos:latest 192.168.108.101:5000/cka/centos:v1 #retag the image to push; which server it is pushed to is encoded in the image name
  4. docker push 192.168.108.101:5000/cka/centos:v1 #fails as shown below, because docker uses HTTPS by default while our registry speaks HTTP; there are two fixes, see step 5.
    The push refers to a repository [192.168.108.101:5000/cka/centos]
    Get https://192.168.108.101:5000/v1/_ping: http: server gave HTTP response to HTTPS client
  5. Allow plain-HTTP communication (either way works)
    vim /etc/docker/daemon.json
    {
    "insecure-registries": ["192.168.108.101:5000"]
    }
    or: vim /etc/sysconfig/docker
    and append to the end of OPTIONS:
    --insecure-registry=192.168.108.101:5000
  6. systemctl restart docker
  7. docker start myregistry
  8. curl -s 192.168.108.101:5000/v2/_catalog | json_reformat #list every image in the registry
  9. curl -s 192.168.108.101:5000/v2/cka/centos/tags/list #list the tags of an image already in the registry
  10. Point clients at a default registry to pull from (the default is docker.io)
  11. Edit the config
    vim /etc/sysconfig/docker
    ADD_REGISTRY="--add-registry 192.168.108.101:5000"
  12. The run command above (-v /myregistry:/var/lib/registry) tells us the images are physically stored under /myregistry
  13. Delete an image from the registry with delete_docker_registry_image
    #!/usr/bin/env python
    """
    Usage:
    Shut down your registry service to avoid race conditions and possible data loss
    and then run the command with an image repo like this:
    delete_docker_registry_image.py --image awesomeimage --dry-run
    """
    
    import argparse
    import json
    import logging
    import os
    import sys
    import shutil
    import glob
    
    logger = logging.getLogger(__name__)
    
    
    def del_empty_dirs(s_dir, top_level):
        """recursively delete empty directories"""
        b_empty = True
    
        for s_target in os.listdir(s_dir):
            s_path = os.path.join(s_dir, s_target)
            if os.path.isdir(s_path):
                if not del_empty_dirs(s_path, False):
                    b_empty = False
            else:
                b_empty = False
    
        if b_empty:
            logger.debug("Deleting empty directory '%s'", s_dir)
            if not top_level:
                os.rmdir(s_dir)
    
        return b_empty
    
    
    def get_layers_from_blob(path):
        """parse json blob and get set of layer digests"""
        try:
            with open(path, "r") as blob:
                data_raw = blob.read()
                data = json.loads(data_raw)
                if data["schemaVersion"] == 1:
                    result = set([entry["blobSum"].split(":")[1] for entry in data["fsLayers"]])
                else:
                    result = set([entry["digest"].split(":")[1] for entry in data["layers"]])
                    if "config" in data:
                        result.add(data["config"]["digest"].split(":")[1])
                return result
        except Exception as error:
            logger.critical("Failed to read layers from blob:%s", error)
            return set()
    
    
    def get_digest_from_blob(path):
        """parse file and get digest"""
        try:
            with open(path, "r") as blob:
                return blob.read().split(":")[1]
        except Exception as error:
            logger.critical("Failed to read digest from blob:%s", error)
            return ""
    
    
    def get_links(path, _filter=None):
        """recursively walk `path` and parse every link inside"""
        result = []
        for root, _, files in os.walk(path):
            for each in files:
                if each == "link":
                    filepath = os.path.join(root, each)
                    if not _filter or _filter in filepath:
                        result.append(get_digest_from_blob(filepath))
        return result
    
    
    class RegistryCleanerError(Exception):
        pass
    
    
    class RegistryCleaner(object):
        """Clean registry"""
    
        def __init__(self, registry_data_dir, dry_run=False):
            self.registry_data_dir = registry_data_dir
            if not os.path.isdir(self.registry_data_dir):
                raise RegistryCleanerError("No repositories directory found inside " \
                                           "REGISTRY_DATA_DIR '{0}'.".
                                           format(self.registry_data_dir))
            self.dry_run = dry_run
    
        def _delete_layer(self, repo, digest):
            """remove blob directory from filesystem"""
            path = os.path.join(self.registry_data_dir, "repositories", repo, "_layers/sha256", digest)
            self._delete_dir(path)
    
        def _delete_blob(self, digest):
            """remove blob directory from filesystem"""
            path = os.path.join(self.registry_data_dir, "blobs/sha256", digest[0:2], digest)
            self._delete_dir(path)
    
        def _blob_path_for_revision(self, digest):
            """where we can find the blob that contains the json describing this digest"""
            return os.path.join(self.registry_data_dir, "blobs/sha256",
                                digest[0:2], digest, "data")
    
        def _blob_path_for_revision_is_missing(self, digest):
            """for each revision, there should be a blob describing it"""
            return not os.path.isfile(self._blob_path_for_revision(digest))
    
        def _get_layers_from_blob(self, digest):
            """get layers from blob by digest"""
            return get_layers_from_blob(self._blob_path_for_revision(digest))
    
        def _delete_dir(self, path):
            """remove directory from filesystem"""
            if self.dry_run:
                logger.info("DRY_RUN: would have deleted %s", path)
            else:
                logger.info("Deleting %s", path)
                try:
                    shutil.rmtree(path)
                except Exception as error:
                    logger.critical("Failed to delete directory:%s", error)
    
        def _delete_from_tag_index_for_revision(self, repo, digest):
            """delete revision from tag indexes"""
            paths = glob.glob(
                os.path.join(self.registry_data_dir, "repositories", repo,
                             "_manifests/tags/*/index/sha256", digest)
            )
            for path in paths:
                self._delete_dir(path)
    
        def _delete_revisions(self, repo, revisions, blobs_to_keep=None):
            """delete revisions from list of directories"""
            if blobs_to_keep is None:
                blobs_to_keep = []
            for revision_dir in revisions:
                digests = get_links(revision_dir)
                for digest in digests:
                    self._delete_from_tag_index_for_revision(repo, digest)
                    if digest not in blobs_to_keep:
                        self._delete_blob(digest)
    
                self._delete_dir(revision_dir)
    
        def _get_tags(self, repo):
            """get all tags for given repository"""
            path = os.path.join(self.registry_data_dir, "repositories", repo, "_manifests/tags")
            if not os.path.isdir(path):
                logger.critical("No repository '%s' found in repositories directory %s",
                                 repo, self.registry_data_dir)
                return None
            result = []
            for each in os.listdir(path):
                filepath = os.path.join(path, each)
                if os.path.isdir(filepath):
                    result.append(each)
            return result
    
        def _get_repositories(self):
            """get all repository repos"""
            result = []
            root = os.path.join(self.registry_data_dir, "repositories")
            for each in os.listdir(root):
                filepath = os.path.join(root, each)
                if os.path.isdir(filepath):
                    inside = os.listdir(filepath)
                    if "_layers" in inside:
                        result.append(each)
                    else:
                        for inner in inside:
                            result.append(os.path.join(each, inner))
            return result
    
        def _get_all_links(self, except_repo=""):
            """get links for every repository"""
            result = []
            repositories = self._get_repositories()
            for repo in [r for r in repositories if r != except_repo]:
                path = os.path.join(self.registry_data_dir, "repositories", repo)
                for link in get_links(path):
                    result.append(link)
            return result
    
        def prune(self):
            """delete all empty directories in registry_data_dir"""
            del_empty_dirs(self.registry_data_dir, True)
    
        def _layer_in_same_repo(self, repo, tag, layer):
            """check if layer is found in other tags of same repository"""
            for other_tag in [t for t in self._get_tags(repo) if t != tag]:
                path = os.path.join(self.registry_data_dir, "repositories", repo,
                                    "_manifests/tags", other_tag, "current/link")
                manifest = get_digest_from_blob(path)
                try:
                    layers = self._get_layers_from_blob(manifest)
                    if layer in layers:
                        return True
                except IOError:
                    if self._blob_path_for_revision_is_missing(manifest):
                        logger.warn("Blob for digest %s does not exist. Deleting tag manifest: %s", manifest, other_tag)
                        tag_dir = os.path.join(self.registry_data_dir, "repositories", repo,
                                               "_manifests/tags", other_tag)
                        self._delete_dir(tag_dir)
                    else:
                        raise
            return False
    
        def _manifest_in_same_repo(self, repo, tag, manifest):
            """check if manifest is found in other tags of same repository"""
            for other_tag in [t for t in self._get_tags(repo) if t != tag]:
                path = os.path.join(self.registry_data_dir, "repositories", repo,
                                    "_manifests/tags", other_tag, "current/link")
                other_manifest = get_digest_from_blob(path)
                if other_manifest == manifest:
                    return True
    
            return False
    
        def delete_entire_repository(self, repo):
            """delete all blobs for given repository repo"""
            logger.debug("Deleting entire repository '%s'", repo)
            repo_dir = os.path.join(self.registry_data_dir, "repositories", repo)
            if not os.path.isdir(repo_dir):
                raise RegistryCleanerError("No repository '{0}' found in repositories "
                                           "directory {1}/repositories".
                                           format(repo, self.registry_data_dir))
            links = set(get_links(repo_dir))
            all_links_but_current = set(self._get_all_links(except_repo=repo))
            for layer in links:
                if layer in all_links_but_current:
                    logger.debug("Blob found in another repository. Not deleting: %s", layer)
                else:
                    self._delete_blob(layer)
            self._delete_dir(repo_dir)
    
        def delete_repository_tag(self, repo, tag):
            """delete all blobs only for given tag of repository"""
            logger.debug("Deleting repository '%s' with tag '%s'", repo, tag)
            tag_dir = os.path.join(self.registry_data_dir, "repositories", repo, "_manifests/tags", tag)
            if not os.path.isdir(tag_dir):
                raise RegistryCleanerError("No repository '{0}' tag '{1}' found in repositories "
                                           "directory {2}/repositories".
                                           format(repo, tag, self.registry_data_dir))
            manifests_for_tag = set(get_links(tag_dir))
            revisions_to_delete = []
            blobs_to_keep = []
            layers = []
            all_links_not_in_current_repo = set(self._get_all_links(except_repo=repo))
            for manifest in manifests_for_tag:
                logger.debug("Looking up filesystem layers for manifest digest %s", manifest)
    
                if self._manifest_in_same_repo(repo, tag, manifest):
                    logger.debug("Not deleting since we found another tag using manifest: %s", manifest)
                    continue
                else:
                    revisions_to_delete.append(
                        os.path.join(self.registry_data_dir, "repositories", repo,
                                     "_manifests/revisions/sha256", manifest)
                    )
                    if manifest in all_links_not_in_current_repo:
                        logger.debug("Not deleting the blob data since we found another repo using manifest: %s", manifest)
                        blobs_to_keep.append(manifest)
    
                    layers.extend(self._get_layers_from_blob(manifest))
    
            layers_uniq = set(layers)
            for layer in layers_uniq:
                if self._layer_in_same_repo(repo, tag, layer):
                    logger.debug("Not deleting since we found another tag using digest: %s", layer)
                    continue
    
                self._delete_layer(repo, layer)
                if layer in all_links_not_in_current_repo:
                    logger.debug("Blob found in another repository. Not deleting: %s", layer)
                else:
                    self._delete_blob(layer)
    
            self._delete_revisions(repo, revisions_to_delete, blobs_to_keep)
            self._delete_dir(tag_dir)
    
        def delete_untagged(self, repo):
            """delete all untagged data from repo"""
            logger.debug("Deleting utagged data from repository '%s'", repo)
            repositories_dir = os.path.join(self.registry_data_dir, "repositories")
            repo_dir = os.path.join(repositories_dir, repo)
            if not os.path.isdir(repo_dir):
                raise RegistryCleanerError("No repository '{0}' found in repositories "
                                           "directory {1}/repositories".
                                           format(repo, self.registry_data_dir))
            tagged_links = set(get_links(repositories_dir, _filter="current"))
            layers_to_protect = []
            for link in tagged_links:
                layers_to_protect.extend(self._get_layers_from_blob(link))
    
            unique_layers_to_protect = set(layers_to_protect)
            for layer in unique_layers_to_protect:
                logger.debug("layer_to_protect: %s", layer)
    
            tagged_revisions = set(get_links(repo_dir, _filter="current"))
    
            revisions_to_delete = []
            layers_to_delete = []
    
            dir_for_revisions = os.path.join(repo_dir, "_manifests/revisions/sha256")
            for rev in os.listdir(dir_for_revisions):
                if rev not in tagged_revisions:
                    revisions_to_delete.append(os.path.join(dir_for_revisions, rev))
                    for layer in self._get_layers_from_blob(rev):
                        if layer not in unique_layers_to_protect:
                            layers_to_delete.append(layer)
    
            unique_layers_to_delete = set(layers_to_delete)
    
            self._delete_revisions(repo, revisions_to_delete)
            for layer in unique_layers_to_delete:
                self._delete_blob(layer)
                self._delete_layer(repo, layer)
    
    
        def get_tag_count(self, repo):
            logger.debug("Get tag count of repository '%s'", repo)
            repo_dir = os.path.join(self.registry_data_dir, "repositories", repo)
            tags_dir = os.path.join(repo_dir, "_manifests/tags")
    
            if os.path.isdir(tags_dir):
                tags = os.listdir(tags_dir)
                return len(tags)
            else:
                logger.info("Tags directory does not exist: '%s'", tags_dir)
                return -1
    
    def main():
        """cli entrypoint"""
        parser = argparse.ArgumentParser(description="Cleanup docker registry")
        parser.add_argument("-i", "--image",
                            dest="image",
                            required=True,
                            help="Docker image to cleanup")
        parser.add_argument("-v", "--verbose",
                            dest="verbose",
                            action="store_true",
                            help="verbose")
        parser.add_argument("-n", "--dry-run",
                            dest="dry_run",
                            action="store_true",
                            help="Dry run")
        parser.add_argument("-f", "--force",
                            dest="force",
                            action="store_true",
                            help="Force delete (deprecated)")
        parser.add_argument("-p", "--prune",
                            dest="prune",
                            action="store_true",
                            help="Prune")
        parser.add_argument("-u", "--untagged",
                            dest="untagged",
                            action="store_true",
                            help="Delete all untagged blobs for image")
        args = parser.parse_args()
    
    
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(u'%(levelname)-8s [%(asctime)s]  %(message)s'))
        logger.addHandler(handler)
    
        if args.verbose:
            logger.setLevel(logging.DEBUG)
        else:
            logger.setLevel(logging.INFO)
    
    
        # make sure not to log before logging is setup. that'll hose your logging config.
        if args.force:
            logger.info(
                "You supplied the force switch, which is deprecated. It has no effect now, and the script defaults to doing what used to be only happen when force was true")
    
        splitted = args.image.split(":")
        if len(splitted) == 2:
            image = splitted[0]
            tag = splitted[1]
        else:
            image = args.image
            tag = None
    
        if 'REGISTRY_DATA_DIR' in os.environ:
            registry_data_dir = os.environ['REGISTRY_DATA_DIR']
        else:
            registry_data_dir = "/opt/registry_data/docker/registry/v2"
    
        try:
            cleaner = RegistryCleaner(registry_data_dir, dry_run=args.dry_run)
            if args.untagged:
                cleaner.delete_untagged(image)
            else:
                if tag:
                    tag_count = cleaner.get_tag_count(image)
                    if tag_count == 1:
                        cleaner.delete_entire_repository(image)
                    else:
                        cleaner.delete_repository_tag(image, tag)
                else:
                    cleaner.delete_entire_repository(image)
    
            if args.prune:
                cleaner.prune()
        except RegistryCleanerError as error:
            logger.fatal(error)
            sys.exit(1)
    
    
    if __name__ == "__main__":
        main()
  14. export REGISTRY_DATA_DIR=/myregistry/docker/registry/v2 #root path under which the images are stored
  15. ./delete_docker_registry_image -i cka/centos:v1 #delete the image cka/centos:v1
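`json_reformat` may not be installed; the v2 API answers from steps 8 and 9 are small enough to pick apart with plain shell. The responses below are assumed samples shaped like what the registry returns:

```shell
# assumed sample responses from /v2/_catalog and /v2/cka/centos/tags/list
catalog='{"repositories":["cka/centos","library/nginx"]}'
tags='{"name":"cka/centos","tags":["v1","v2"]}'

# one repository per line: break on , [ ] then keep quoted name/with/slash tokens
echo "$catalog" | tr ',[]' '\n\n\n' | grep -o '"[^"]*/[^"]*"' | tr -d '"'

# one tag per line (quick and dirty: tags here all start with "v")
echo "$tags" | tr ',[]' '\n\n\n' | grep -o '"v[^"]*"' | tr -d '"'
```

For anything beyond a quick look, a real JSON tool (`jq`, `python -m json.tool`) is the safer choice.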

Monitoring containers

  1. docker stats #text-mode display of the resources each container uses
  2. cadvisor is a graphical monitoring tool from Google that itself runs as a container; at heart it works through volume mounts. The host directories behind containers c1, c2 and c3 are mounted into the cadvisor container, which analyzes them to monitor those containers.
    docker pull hub.c.163.com/xbingo/cadvisor:latest

    docker run \
    -v /var/run:/var/run \
    -v /sys/:/sys:ro \
    -v /var/lib/docker:/var/lib/docker:ro -d \
    -p 8080:8080 \
    --name=mon hub.c.163.com/xbingo/cadvisor:latest

    192.168.108.101:8080 #open the web UI

The compose orchestration tool

  1. yum install docker -y
  2. systemctl enable docker --now
  3. yum install docker-compose -y
  4. vim docker-compose.yaml #format
    blog:
            image: hub.c.163.com/public/wordpress:4.5.2
            restart: always
            links:
                    - db:mysql
            ports:
                    - "80:80"
    
    db:
            image: hub.c.163.com/library/mysql
            restart: always
            environment:
                    - MYSQL_ROOT_PASSWORD=redhat
                    - MYSQL_DATABASE=wordpress
            volumes:
                    - /xx:/var/lib/mysql
  5. echo 1 > /proc/sys/net/ipv4/ip_forward
  6. docker-compose up [-d]
  7. 192.168.108.102:80 #open the wordpress web UI
  8. docker-compose ps
  9. docker-compose stop
  10. docker-compose start
  11. docker-compose rm (the containers must be stopped first)

Using harbor (managed through a web UI, built with compose)

  1. harbor (harbor-offline-installer-v1.10.4.tgz) download: https://github.com/goharbor/harbor/releases
  2. tar zxvf harbor-offline-installer-v1.10.4.tgz
  3. cd harbor
  4. docker load -i harbor.v1.10.4.tar.gz
  5. vim harbor.yml (set hostname: 192.168.108.103; everything else depends on your environment, defaults are used here)
  6. ?????

Container resource limits (built on the Linux cgroup mechanism)

  1. docker run -dit -m 512m --name=c1 centos:v1 #give container c1 512 MB of memory
  2. docker run -dit --cpuset-cpus=0,1 --name=c1 centos:v1 #pin the container to CPUs 0 and 1
  3. ps mo pid,comm,psr $(pgrep cat) #show which CPUs the processes are running on
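The `psr` column in step 3 works for any pid, not only container processes. A quick way to see what it reports, run here against the current shell (output varies per machine, so none is shown):

```shell
# psr = processor the task last ran on; a trailing '=' in -o suppresses the header
ps -o pid=,comm=,psr= -p $$
```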