Docker Operations and Management
- 1. Swarm Cluster Management
- 1.1 Core Concepts of Swarm
- 1.1.1 Cluster
- 1.1.2 Nodes
- 1.1.3 Services and Tasks
- 1.1.4 Load Balancing
- 1.2 Installing Swarm
- Preparation
- Creating the cluster
- Adding worker nodes to the cluster
- Deploying a service to the cluster
- Scaling one or more services
- Removing a service from the cluster
- Passwordless SSH login
- 2. Docker Compose
- Using Compose with Swarm
- 3. Configuring a Private Registry (Harbor)
- 3.1 Environment preparation
- 3.2 Installing Docker
- 3.3 Installing docker-compose
- 3.4 Preparing Harbor
- 3.5 Configuring certificates
- 3.6 Deploying and configuring Harbor
- 3.7 Configuring and starting the service
- 3.8 Customizing the local registry
- 3.9 Testing the local registry
1. Swarm Cluster Management
Docker Swarm is Docker's official container orchestration system and container cluster platform, implemented in Go.
The architecture is as follows:
1.1 Core Concepts of Swarm
1.1.1 Cluster
A cluster consists of multiple Docker hosts that run in swarm mode and act as managers and workers.
1.1.2 Nodes
A swarm is a collection of nodes. A node can be a bare-metal machine or a virtual machine, and it can play one or both of two roles: Manager or Worker.
- Manager nodes

A Docker Swarm cluster needs at least one manager node; managers coordinate with each other using the Raft consensus protocol.
Usually, the first node to enable docker swarm becomes the leader, and nodes that join later are followers. If the current leader goes down, the remaining managers elect a new leader. Every manager keeps a complete copy of the current cluster state, which makes the managers highly available.

- Worker nodes

Worker nodes are where the containers that run the actual application services live. In theory, a manager node can also act as a worker at the same time, but this is not recommended in production. Worker nodes communicate with one another over the control plane using the gossip protocol, and this communication is asynchronous.
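Node roles are not fixed; they can be changed later from any manager. A brief sketch of the relevant commands, using this document's node names:

```shell
# promote a worker to a manager (run on an existing manager)
docker node promote worker1
# demote it back to a worker
docker node demote worker1
# drain a node before maintenance so its tasks are rescheduled elsewhere
docker node update --availability drain worker1
```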
1.1.3 Services and Tasks
- services

A swarm service is an abstraction: it is just a description of the desired state of an application service running on the swarm cluster. It works like a manifest that lists:
- the service name
- which image to create the containers from
- how many replicas to run
- which network the service's containers attach to
- which ports should be published

- task

In Docker Swarm, a task is the smallest unit of deployment; a task and a container have a one-to-one relationship.

- stack

A stack describes a collection of related services. A stack is defined in a single YAML file.
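As an illustration, a minimal stack file for the nginx image used later in this chapter might look like the following (the file name, service name, and replica count are hypothetical):

```yaml
# stack.yaml -- deploy with: docker stack deploy -c stack.yaml web
services:
  nginx:
    image: nginx:1.27.4
    ports:
      - "80:80"
    deploy:
      replicas: 3
```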
1.1.4 Load Balancing
The swarm manager can automatically assign a published port to a service, or you can configure one for it yourself. Any unused port can be specified; if no port is specified, the swarm manager assigns the service a port in the 30000-32767 range.
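A sketch of the two publishing options (the service name webtest is hypothetical):

```shell
# publish a specific port: host port 8080 routes to container port 80
docker service create --name webtest -p 8080:80 nginx:1.27.4
# publish only the target port: the manager picks a host port from 30000-32767
docker service create --name webtest -p 80 nginx:1.27.4
# show which port was assigned
docker service inspect --format '{{.Endpoint.Ports}}' webtest
```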
1.2 Installing Swarm
Preparation
[root@docker ~]# hostnamectl hostname manager
[root@docker ~]# exit
[root@manager ~]# nmcli c modify ens160 ipv4.addresses <IP-address>/24
[root@manager ~]# init 6
[root@manager ~]# cat >> /etc/hosts <<EOF
> 192.168.98.47 manager1
> 192.168.98.48 worker1
> 192.168.98.49 worker2
> EOF
[root@manager ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.678 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.461 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.353 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2035ms
rtt min/avg/max/mdev = 0.353/0.497/0.678/0.135 ms
[root@manager ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.719 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.300 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.417 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2089ms
rtt min/avg/max/mdev = 0.300/0.478/0.719/0.176 ms
[root@manager ~]# ping -c 3 manager1
PING manager1 (192.168.98.47) 56(84) bytes of data.
64 bytes from manager1 (192.168.98.47): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=3 ttl=64 time=0.035 ms

--- manager1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.034/0.035/0.038/0.001 ms
[root@worker1 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.035 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2056ms
rtt min/avg/max/mdev = 0.023/0.027/0.035/0.005 ms
[root@worker1 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.405 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.509 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.381 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.381/0.431/0.509/0.055 ms
[root@worker2 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.304 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.346 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.460 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.304/0.370/0.460/0.065 ms
[root@worker2 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.189 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.055 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.055/0.107/0.189/0.058 ms

# check the version
docker run --rm swarm -v
swarm version 1.2.9 (527a849)
- Image:
[root@manager ~]# ls
anaconda-ks.cfg
[root@manager ~]# ls
anaconda-ks.cfg swarm_1.2.9.tar.gz
[root@manager ~]# docker load -i swarm_1.2.9.tar.gz
6104cec23b11: Loading layer 12.44MB/12.44MB
9c4e304108a9: Loading layer 281.1kB/281.1kB
a8731583ab53: Loading layer 2.048kB/2.048kB
Loaded image: swarm:1.2.9
[root@manager ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
swarm 1.2.9 1a5eb59a410f 4 years ago 12.7MB
[root@manager ~]# ls
anaconda-ks.cfg swarm_1.2.9.tar.gz

# transfer to worker1 and worker2
scp swarm_1.2.9.tar.gz root@192.168.98.48:~
[root@worker1 ~]# ls
anaconda-ks.cfg swarm_1.2.9.tar.gz
Creating the cluster
Syntax:
docker swarm init --advertise-addr <MANAGER-IP>   # the manager host's IP
- manager
[root@manager ~]# docker swarm init --advertise-addr 192.168.98.47
# --advertise-addr specifies the IP address that other swarm worker nodes use to contact this manager
Swarm initialized: current node (bsicfg4mvo18a0tv0z17crtnh) is now a manager.

To add a worker to this swarm, run the following command:

# token: the join token (tokens are not valid forever; they expire)
docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
# run this command on the worker hosts

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# check the node status (all Ready)
[root@manager ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh * manager Ready Active Leader 28.0.4
xtc6pax5faoobzqo341vf72rv worker1 Ready Active 28.0.4
ik0tqz8axejwu82mmukwjo47m worker2 Ready Active 28.0.4
[root@manager ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
[root@manager ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

# list the nodes
docker node ls
# check the cluster status
docker info
# force this node to leave the cluster
docker swarm leave --force
Adding worker nodes to the cluster
Once the cluster has been created with a manager node, worker nodes can be added.
- Add worker node worker1
[root@worker1 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
- Add worker node worker2
[root@worker2 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
- If you forget the token value, run the following on the management node 192.168.98.47 (manager)
(tokens are not valid forever; they expire):
# run on the manager node
[root@manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
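If a token has expired or may have leaked, it can also be rotated from a manager; a brief sketch:

```shell
# invalidate the old worker join token and print a new one
docker swarm join-token --rotate worker
# managers have their own token, rotated the same way
docker swarm join-token --rotate manager
```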
Deploying a service to the cluster
# check the node status (all Ready)
[root@manager ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh * manager Ready Active Leader 28.0.4
xtc6pax5faoobzqo341vf72rv worker1 Ready Active 28.0.4
ik0tqz8axejwu82mmukwjo47m worker2 Ready Active 28.0.4
- manager
# create a docker service; --replicas 1: number of replicas
# run on the management node 192.168.98.47 (manager)
#                                                          service name     image name
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
# creates a container as a service named nginx2 (similar to docker run); if the nginx:1.27.4 image is missing, it is pulled automatically
If it is not pulled automatically, run the following (all three hosts need nginx_1.27.4):
[root@manager ~]# docker load -i nginx_1.27.4.tar.gz
7914c8f600f5: Loading layer 77.83MB/77.83MB
9574fd0ae014: Loading layer 118.3MB/118.3MB
17129ef2de1a: Loading layer 3.584kB/3.584kB
320c70dd6b6b: Loading layer 4.608kB/4.608kB
2ef6413cdcb5: Loading layer 2.56kB/2.56kB
d6266720b0a6: Loading layer 5.12kB/5.12kB
1fb7f1e96249: Loading layer 7.168kB/7.168kB
Loaded image: nginx:1.27.4
[root@manager ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx 1.27.4 97662d24417b 2 months ago 192MB
swarm 1.2.9 1a5eb59a410f 4 years ago 12.7MB
[root@manager ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
o9atmzaj9on8 nginx2 replicated 1/1 nginx:1.27.4 *:80->80/tcp
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
z0j2olxqwrlpm84uho1w1m20p
overall progress: 1 out of 1 tasks
1/1: running
verify: Service z0j2olxqwrlpm84uho1w1m20p converged

# on a worker, fetch the image archive from the manager:
scp root@manager:~/nginx_1.27.4.tar.gz ~
scp root@192.168.98.47:~/nginx_1.27.4.tar.gz ~
# --pretty: print in a human-readable format
[root@manager ~]# docker service inspect --pretty nginx2
ID:             o9atmzaj9on8tf9ffmp8k7bun    # service ID
Name:           nginx2                       # service name
Service Mode:   Replicated                   # mode: Replicated
 Replicas:      1                            # number of replicas
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:                               # image name: nginx:1.27.4
 Image:         nginx:1.27.4@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress

[root@manager ~]# docker service ps nginx2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
id3xjfro1bdl nginx2.1 nginx:1.27.4 manager Running Running 5 minutes ago
# ID, name, image, node, desired state, current state and uptime
Scaling one or more services
# scale: grow or shrink the number of replicas
[root@manager ~]# docker service scale nginx2=5
nginx2 scaled to 5
overall progress: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service nginx2 converged
[root@manager ~]# docker service ps nginx2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
rbmlfw6oiyot nginx2.1 nginx:1.27.4 worker1 Running Running 9 minutes ago
uvko24qqf1ps nginx2.2 nginx:1.27.4 manager Running Running 2 minutes ago
kwjys4eldv2d nginx2.3 nginx:1.27.4 manager Running Running 2 minutes ago
onnmzewg4ou4 nginx2.4 nginx:1.27.4 worker2 Running Running 2 minutes ago
v0we91sbj7p5 nginx2.5 nginx:1.27.4 worker1 Running Running 2 minutes ago
[root@manager ~]# docker service scale nginx2=1
nginx2 scaled to 1
overall progress: 1 out of 1 tasks
1/1: running
verify: Service nginx2 converged
[root@manager ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
z0j2olxqwrlp nginx2 replicated 1/1 nginx:1.27.4 *:80->80/tcp
[root@manager ~]# docker service scale nginx2=3
nginx2 scaled to 3
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service nginx2 converged
[root@manager ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
o9atmzaj9on8 nginx2 replicated 3/3 nginx:1.27.4 *:80->80/tcp
[root@manager ~]# docker service ps nginx2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
id3xjfro1bdl nginx2.1 nginx:1.27.4 manager Running Running 15 minutes ago
hx7sqjr88ac8 nginx2.2 nginx:1.27.4 worker2 Running Running 2 minutes ago
p7x2lyo6vdk8 nginx2.5 nginx:1.27.4 worker1 Running Running 2 minutes ago
- Update the service:
[root@manager ~]# docker service update --publish-rm 80:80 --publish-add 88:80 nginx2
nginx2
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service nginx2 converged
[root@manager ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78adf323559b nginx:1.27.4 "/docker-entrypoint.…" 20 seconds ago Up 17 seconds 80/tcp nginx2.1.ds6dy33b6m62kzajs9frvdw4g
[root@manager ~]# docker service ps nginx2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ds6dy33b6m62 nginx2.1 nginx:1.27.4 manager Running Running 19 seconds ago
id3xjfro1bdl \_ nginx2.1 nginx:1.27.4 manager Shutdown Shutdown 20 seconds ago
rpnodskwk4f7 nginx2.2 nginx:1.27.4 worker2 Running Running 23 seconds ago
hx7sqjr88ac8 \_ nginx2.2 nginx:1.27.4 worker2 Shutdown Shutdown 24 seconds ago
shdko7kq7vvw nginx2.5 nginx:1.27.4 worker1 Running Running 16 seconds ago
p7x2lyo6vdk8 \_ nginx2.5 nginx:1.27.4 worker1 Shutdown Shutdown 16 seconds ago
[root@manager ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
o9atmzaj9on8 nginx2 replicated 3/3 nginx:1.27.4 *:88->80/tcp
- worker1
[root@worker1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
21c117b44558 nginx:1.27.4 "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp nginx2.5.shdko7kq7vvwqsmzgtm5790s2
- worker2
[root@worker2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a761908c5f2d nginx:1.27.4 "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp nginx2.2.rpnodskwk4f79yxpf0tctb5n8
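Beyond remapping ports, docker service update can also roll out a new image version and roll it back. A brief sketch using this chapter's nginx2 service (the 1.27.5 tag here is hypothetical):

```shell
# rolling update to a new image, one task at a time with a 10s delay between tasks
docker service update --image nginx:1.27.5 --update-parallelism 1 --update-delay 10s nginx2
# revert to the previous service definition if the update misbehaves
docker service rollback nginx2
```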
Removing a service from the cluster
[root@manager ~]# docker service rm nginx2
nginx2
[root@manager ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@manager ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
[root@worker1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@worker2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Passwordless SSH login
- ssh-keygen
- ssh-copy-id root@worker1
- ssh-copy-id root@worker2
- ssh worker1
- exit
[root@manager ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):   # press Enter
Enter passphrase (empty for no passphrase):   # press Enter
Enter same passphrase again:   # press Enter
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:6/aBJGgBhXrJ5l541rWVPguq1ao5QLDqe6LfztIxBAw root@manager
The key's randomart image is:
+---[RSA 3072]----+
|Eo.o. |
|. +. |
| = o. . |
|o * .o . o |
|.= oo...S+ |
|. +.* .+ooo |
|.. * o..+..o |
| oo+oo.o. .. |
|oo=oB+..... |
+----[SHA256]-----+
[root@manager ~]# ssh-copy-id root@worker1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@worker1'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh-copy-id root@worker2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'worker2 (192.168.98.49)' can't be established.
ED25519 key fingerprint is SHA256:I3/lsrnTEnXOE3LFvTLRUXAJ+AhSVrIEWtqTnleRz9w.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: manager
    ~/.ssh/known_hosts:4: worker1
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@worker2'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh worker1
Register this system with Red Hat Insights: rhc connect

Example:
# rhc connect --activation-key <key> --organization <org>

The rhc client and Red Hat Insights will enable analytics and additional
management capabilities on your system.

View your connected systems at https://console.redhat.com/insights

You can learn more about how to register your system
using rhc at https://red.ht/registration
Last login: Thu May 8 16:36:42 2025 from 192.168.98.1
[root@worker1 ~]# exit
logout
Connection to worker1 closed.
2. Docker Compose
Download the docker-compose-linux-x86_64 binary.
- Create the docker-compose.yaml file
[root@docker ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@docker ~]# ll
total 4
-rw-------. 1 root root 989 Feb 27 16:19 anaconda-ks.cfg
[root@docker ~]# chmod +x /usr/bin/docker-compose
[root@docker ~]# ll /usr/bin/docker-compose
-rwxr-xr-x. 1 root root 73699264 May 10 09:55 /usr/bin/docker-compose
[root@docker ~]# docker-compose --version
Docker Compose version v2.35.1
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# ls
anaconda-ks.cfg docker-compose.yaml
- In a second session, create the host directories that are needed
[root@docker ~]# mkdir -p /opt/{nginx,mysql,redis}
[root@docker ~]# mkdir /opt/nginx/{conf,html}
[root@docker ~]# mkdir /opt/mysql/data
[root@docker ~]# mkdir /opt/redis/data
[root@docker ~]# tree /opt
- Run it
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 26/26
 ✔ mysql Pulled  91.7s
 ✔ redis Pulled  71.0s
 ✔ nginx Pulled  53.6s
 (per-layer "Pull complete" progress lines omitted)
[+] Running 3/4
 ✔ Network root_default  Created  0.0s
 ✔ Container redis       Started  0.3s
 ✔ Container nginx       Started  0.3s
 ✘ Container mysql       Starting 0.3s
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/opt/mysql/my.cnf" to rootfs at "/etc/my.cnf": create mountpoint for /etc/my.cnf mount: cannot create subdirectories in "/var/lib/docker/overlay2/1a5867fcf9c1f650da4bc51387cbd7621f1464eb2a1ce8d90f90f43733b34602/merged/etc/my.cnf": not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
# /opt/mysql/my.cnf did not exist on the host, so Docker created it as a directory and then could not mount a directory onto the container's /etc/my.cnf file; the edit below simply comments that volume out
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# cat docker-compose.yaml
version: "3.9"
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 3/3
 ✔ Container redis  Running  0.0s
 ✔ Container mysql  Started  0.2s
 ✔ Container nginx  Running
[root@docker ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
redis 8.0.0 d62dbaef1b81 5 days ago 128MB
nginx 1.27.5 a830707172e8 3 weeks ago 192MB
mysql 9.3.0 2c849dee4ca9 3 weeks ago 859MB
[root@docker ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4390422bf4b1 mysql:9.3.0 "docker-entrypoint.s…" About a minute ago Restarting (127) 7 seconds ago mysql
18713204332c nginx:1.27.5 "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, [::]:80->80/tcp nginx
be2489a46e42 redis:8.0.0 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp redis
- Access it
[root@docker ~]# curl localhost
curl: (56) Recv failure: Connection reset by peer
# with the empty /opt/nginx/conf mounted over /etc/nginx/conf.d, the image's default server config is hidden and no site is being served yet
[root@docker ~]# echo "index.html" > /opt/nginx/html/index.html
[root@docker ~]# curl http://192.168.98.149
curl: (7) Failed to connect to 192.168.98.149 port 80: Connection refused
[root@docker ~]# cd /opt/nginx/conf/
[root@docker conf]# ls
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf
server {
    listen 80;
    server_name 192.168.98.149;
    root /opt/nginx/html;
}
[root@docker conf]# curl http://192.168.98.149
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf
server {
    listen 80;
    server_name 192.168.98.149;
    root /usr/share/nginx/html;
}
# root must be the path inside the container (/usr/share/nginx/html, the mount target), not the host path /opt/nginx/html; that is why the first version returned 404
[root@docker conf]# docker restart nginx
nginx
[root@docker conf]# curl http://192.168.98.149
index.html
- docker-compose must be run from the directory that contains the docker-compose.yaml file
[root@docker ~]# docker-compose ps
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
mysql mysql:9.3.0 "docker-entrypoint.s…" mysql 21 minutes ago Restarting (127) 52 seconds ago
nginx nginx:1.27.5 "/docker-entrypoint.…" nginx 23 minutes ago Up 11 minutes 0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis redis:8.0.0 "docker-entrypoint.s…" redis 23 minutes ago Up 23 minutes 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
- To fix the warning, delete the first line of docker-compose.yaml (version: "3.9")
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# cat docker-compose.yaml
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
mysql mysql:9.3.0 "docker-entrypoint.s…" mysql 24 minutes ago Restarting (127) 3 seconds ago
nginx nginx:1.27.5 "/docker-entrypoint.…" nginx 26 minutes ago Up 41 seconds 0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis redis:8.0.0 "docker-entrypoint.s…" redis 26 minutes ago Up 26 minutes 0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
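Note that mysql is still stuck in Restarting (127) in the output above. Exit code 127 means "command not found": the bare command entries are handed to the image entrypoint as a program name rather than as mysqld options. A likely fix, which is my reading of the exit code and not shown in the original session, is to prefix each entry with --:

```yaml
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
```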
[root@docker ~]# mkdir composetest
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim app.py
[root@docker composetest]# cat app.py
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)

[root@docker composetest]# pip freeze > requirements.txt
[root@docker composetest]# cat requirements.txt
flask
redis
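The retry loop in get_hit_count can be exercised without a real Redis server by swapping in a stub cache. A minimal sketch; FlakyCache and its ConnectionError are hypothetical stand-ins that only mimic redis.Redis.incr:

```python
import time

class ConnectionError(Exception):
    """Stands in for redis.exceptions.ConnectionError."""

class FlakyCache:
    """Fails the first few incr() calls, then succeeds, like a slow-starting Redis."""
    def __init__(self, failures):
        self.failures = failures
        self.hits = 0

    def incr(self, key):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("not ready yet")
        self.hits += 1
        return self.hits

def get_hit_count(cache, retries=5, delay=0.0):
    # same shape as in app.py: retry on connection errors, then give up
    while True:
        try:
            return cache.incr('hits')
        except ConnectionError:
            if retries == 0:
                raise
            retries -= 1
            time.sleep(delay)

print(get_hit_count(FlakyCache(failures=3)))  # prints 1: survives three failures
```

The pattern matters because Compose starts containers together: the web container may come up before Redis is accepting connections, and the retry loop papers over that startup race.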
[root@docker yum.repos.d]# mount /dev/sr0 /mnt
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

BaseOS 2.7 MB/s | 2.7 kB 00:00
AppStream 510 kB/s | 3.2 kB 00:00
Docker CE Stable - x86_64 4.2 kB/s | 3.5 kB 00:00
Docker CE Stable - x86_64 31 B/s | 55 B 00:01
Errors during downloading metadata for repository 'docker-ce-stable':
- Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
- Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/65c4f66e2808d328890505c3c2f13bb35a96f457d1c21a6346191c4dc07e6080-updateinfo.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
- Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
Error: Failed to download metadata for repo 'docker-ce-stable': Yum repo downloading error: Downloading error(s): repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz - Cannot download, all mirrors were already tried without success; repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz - Cannot download, all mirrors were already tried without success
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

Docker CE Stable - x86_64 4.5 kB/s | 3.5 kB 00:00
Docker CE Stable - x86_64
......
Complete!
[root@docker ~]# ls
anaconda-ks.cfg composetest docker-compose.yaml
[root@docker ~]# docker-compose down
[+] Running 4/4
 ✔ Container redis       Removed  0.1s
 ✔ Container nginx       Removed  0.1s
 ✔ Container mysql       Removed  0.0s
 ✔ Network root_default  Removed  0.1s
[root@docker ~]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim Dockerfile
[root@docker composetest]# vim docker-compose.yaml
[root@docker composetest]# cat Dockerfile
FROM python:3.12-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
[root@docker composetest]# cat docker-compose.yaml
services:
  web:
    build: .
    ports:
      - 5000:5000
  redis:
    image: redis:alpine

# in a second session, access it
[root@docker composetest]# curl http://192.168.98.149:5000
Hello World! I have been seen 1 times.
- Start it with the command below; once the access from the second session succeeds, exit
[root@docker composetest]# docker-compose up
Compose can now delegate builds to bake for better performance.
To do so, set COMPOSE_BAKE=true.
[+] Building 50.1s (10/10) FINISHED
 => [web internal] load build definition from Dockerfile                    0.0s
 => [web internal] load metadata for docker.io/library/python:3.12-alpine  3.6s
 => [web internal] load .dockerignore                                       0.0s
 => [web internal] load build context                                       0.0s
 => [web 1/4] FROM docker.io/library/python:3.12-alpine@sha256:a664b68d141849d28fd0992b9aa88b4eab7a21d258e2e7ddb9d1b  6.9s
 => [web 2/4] ADD . /code                                                   0.1s
 => [web 3/4] WORKDIR /code                                                 0.0s
 => [web 4/4] RUN pip install -r requirements.txt                          39.3s
 => [web] exporting to image                                                0.1s
 => => writing image sha256:32303d47b57e0c674340d90a76bb82776024626e64d95e04e0ff5d7e893caa80  0.0s
 => => naming to docker.io/library/composetest-web                          0.0s
 (layer download details omitted)
[+] Running 4/4
 ✔ web Built                                                                0.0s
 ✔ Network composetest_default    Created                                   0.0s
 ✔ Container composetest-redis-1  Created                                   0.0s
 ✔ Container composetest-web-1    Created                                   0.0s
Attaching to redis-1, web-1
redis-1 | Starting Redis Server
redis-1 | 1:C 10 May 2025 08:28:20.677 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-1 | 1:C 10 May 2025 08:28:20.680 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-1 | 1:C 10 May 2025 08:28:20.680 * Redis version=8.0.0, bits=64, commit=00000000, modified=1, pid=1, just started
redis-1 | 1:C 10 May 2025 08:28:20.680 * Configuration loaded
redis-1 | 1:M 10 May 2025 08:28:20.683 * monotonic clock: POSIX clock_gettime
redis-1 | 1:M 10 May 2025 08:28:20.693 * Running mode=standalone, port=6379.
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> RedisBloom version 7.99.90 (Git=unknown)
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> Registering configuration options: [
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { bf-error-rate : 0.01 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { bf-initial-size : 100 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { bf-expansion-factor : 2 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { cf-bucket-size : 2 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { cf-initial-size : 1024 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { cf-max-iterations : 20 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { cf-expansion-factor : 1 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> { cf-max-expansions : 32 }
redis-1 | 1:M 10 May 2025 08:28:20.709 * <bf> ]
redis-1 | 1:M 10 May 2025 08:28:20.709 * Module 'bf' loaded from /usr/local/lib/redis/modules//redisbloom.so
redis-1 | 1:M 10 May 2025 08:28:20.799 * <search> Redis version found by RedisSearch : 8.0.0 - oss
redis-1 | 1:M 10 May 2025 08:28:20.800 * <search> RediSearch version 8.0.0 (Git=HEAD-61787b7)
redis-1 | 1:M 10 May 2025 08:28:20.803 * <search> Low level api version 1 initialized successfully
redis-1 | 1:M 10 May 2025 08:28:20.808 * <search> gc: ON, prefix min length: 2, min word length to stem: 4, prefix max expansions: 200, query timeout (ms): 500, timeout policy: fail, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results: 1000000,
redis-1 | 1:M 10 May 2025 08:28:20.810 * <search> Initialized thread pools!
redis-1 | 1:M 10 May 2025 08:28:20.810 * <search> Disabled workers threadpool of size 0
redis-1 | 1:M 10 May 2025 08:28:20.815 * <search> Subscribe to config changes
redis-1 | 1:M 10 May 2025 08:28:20.815 * <search> Enabled role change notification
redis-1 | 1:M 10 May 2025 08:28:20.815 * <search> Cluster configuration: AUTO partitions, type: 0, coordinator timeout: 0ms
redis-1 | 1:M 10 May 2025 08:28:20.816 * <search> Register write commands
redis-1 | 1:M 10 May 2025 08:28:20.816 * Module 'search' loaded from /usr/local/lib/redis/modules//redisearch.so
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> RedisTimeSeries version 79991, git_sha=de1ad5089c15c42355806bbf51a0d0cf36f223f6
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> Redis version found by RedisTimeSeries : 8.0.0 - oss
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> Registering configuration options: [
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-compaction-policy : }
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-num-threads : 3 }
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-retention-policy : 0 }
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-duplicate-policy : block }
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-chunk-size-bytes : 4096 }
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-encoding : compressed }
redis-1 | 1:M 10 May 2025 08:28:20.828 * <timeseries> { ts-ignore-max-time-diff: 0 }
redis-1 | 1:M 10 May 2025 08:28:20.829 * <timeseries> { ts-ignore-max-val-diff : 0.000000 }
redis-1 | 1:M 10 May 2025 08:28:20.829 * <timeseries> ]
redis-1 | 1:M 10 May 2025 08:28:20.833 * <timeseries> Detected redis oss
redis-1 | 1:M 10 May 2025 08:28:20.837 * Module 'timeseries' loaded from /usr/local/lib/redis/modules//redistimeseries.so
redis-1 | 1:M 10 May 2025 08:28:20.886 * <ReJSON> Created new data type 'ReJSON-RL'
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> version: 79990 git sha: unknown branch: unknown
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V1 API
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V2 API
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V3 API
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V4 API
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V5 API
redis-1 | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Enabled diskless replication
redis-1 | 1:M 10 May 2025 08:28:20.904 * <ReJSON> Initialized shared string cache, thread safe: false.
redis-1 | 1:M 10 May 2025 08:28:20.904 * Module 'ReJSON' loaded from /usr/local/lib/redis/modules//rejson.so
redis-1 | 1:M 10 May 2025 08:28:20.904 * <search> Acquired RedisJSON_V5 API
redis-1 | 1:M 10 May 2025 08:28:20.907 * Server initialized
redis-1 | 1:M 10 May 2025 08:28:20.910 * Ready to accept connections tcp
web-1 | * Serving Flask app 'app'
web-1 | * Debug mode: on
web-1 | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
web-1 | * Running on all addresses (0.0.0.0)
web-1 | * Running on http://127.0.0.1:5000
web-1 | * Running on http://172.18.0.3:5000
web-1 | Press CTRL+C to quit
web-1 | * Restarting with stat
web-1 | * Debugger is active!
web-1 | * Debugger PIN: 102-900-685
web-1 | 192.168.98.149 - - [10/May/2025 08:31:06] "GET / HTTP/1.1" 200 -w Enable Watch
failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1
[root@docker composetest]#
[root@docker composetest]# pip install --upgrade pip
Requirement already satisfied: pip in /usr/lib/python3.9/site-packages (21.3.1)
Using Compose with Swarm
With docker service create we can only deploy one service at a time; with a docker-compose.yml we can start several related services at once.
[root@docker ~]# ls
anaconda-ks.cfg composetest docker-compose.yaml
[root@docker ~]# mkdir dcswarm
[root@docker ~]# ls
anaconda-ks.cfg  composetest  dcswarm  docker-compose.yaml
[root@docker ~]# cd dcswarm/
[root@docker dcswarm]# vim docker-compose.yaml
[root@docker dcswarm]# cat docker-compose.yaml
services:
  nginx:
    image: nginx:1.27.5
    ports:
      - 80:80
      - 443:443
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
    deploy:
      mode: replicated
      replicas: 2
  mysql:
    image: mysql:9.3.0
    ports:
      - 3306:3306
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
    volumes:
      - /opt/mysql/data:/var/lib/mysql
    deploy:
      mode: replicated
      replicas: 2
[root@docker dcswarm]# docker stack deploy -c docker-compose.yaml web
docker swarm init
docker swarm join --token
[root@docker ~]# scp dcswarm/* root@192.168.98.47:~
root@192.168.98.47's password:
docker-compose.yaml
[root@manager ~]# ls
anaconda-ks.cfg docker-compose.yaml nginx_1.27.4.tar.gz swarm_1.2.9.tar.gz
[root@manager ~]# mkdir dcswarm
[root@manager ~]# mv docker-compose.yaml dcswarm/
[root@manager ~]# cd dcswarm/
[root@manager dcswarm]# ls
docker-compose.yaml
[root@manager dcswarm]# docker stack deploy -c docker-compose.yaml web
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
Creating network web_default
Creating service web_mysql
Creating service web_nginx
三、Configuring a Private Registry (Harbor)
3.1 Environment Preparation
Harbor ("harbor" as in a port) is an enterprise-grade Registry server for storing and distributing Docker images.
- Change the hostname and IP address
- Enable IP forwarding in /etc/sysctl.conf
net.ipv4.ip_forward = 1
- Configure host mappings in /etc/hosts
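Before editing /etc/sysctl.conf it can help to see the kernel's current forwarding state, which is exposed under /proc. A minimal sketch (the state-changing commands are left commented because they need root; the persistent variant is the one used later in this section):

```shell
# Check the current IP-forwarding state (0 = off, 1 = on)
cat /proc/sys/net/ipv4/ip_forward

# Enable it for the running kernel only (does not survive a reboot):
# sysctl -w net.ipv4.ip_forward=1

# Persist it, then reload all sysctl settings:
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl -p
```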
# Change the hostname and IP address
[root@docker ~]# hostnamectl hostname harbor
[root@docker ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.132/24 brd 192.168.86.255 scope global dynamic noprefixroute ens160
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.159/24 brd 192.168.98.255 scope global dynamic noprefixroute ens224
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@docker ~]# nmcli c show
NAME UUID TYPE DEVICE
Wired connection 1 110c742f-bd12-3ba3-b671-1972a75aa2e6 ethernet ens224
ens160 d622d6da-1540-371d-8def-acd3db9bd38d ethernet ens160
lo d20cef01-6249-4012-908c-f775efe44118 loopback lo
docker0 b023990a-e131-4a68-828c-710158f77a50 bridge docker0
[root@docker ~]# nmcli c m "Wired connection 1" connection.id ens224
[root@docker ~]# nmcli c show
NAME UUID TYPE DEVICE
ens224 110c742f-bd12-3ba3-b671-1972a75aa2e6 ethernet ens224
ens160 d622d6da-1540-371d-8def-acd3db9bd38d ethernet ens160
lo d20cef01-6249-4012-908c-f775efe44118 loopback lo
docker0 b023990a-e131-4a68-828c-710158f77a50 bridge docker0
[root@docker ~]# nmcli c m ens224 ipv4.method manual ipv4.addresses 192.168.98.20/24 ipv4.gateway 192.168.98.2 ipv4.dns 223.5.5.5 connection.autoconnect yes
[root@docker ~]# nmcli c up ens224
[root@harbor ~]# nmcli c m ens160 ipv4.method manual ipv4.addresses 192.168.86.20/24 ipv4.gateway 192.168.86.200 ipv4.dns "223.5.5.5 8.8.8.8" connection.autoconnect yes
[root@harbor ~]# nmcli c up ens160
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@harbor ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.20/24 brd 192.168.86.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.20/24 brd 192.168.98.255 scope global noprefixroute ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

# Enable IP forwarding
[root@harbor ~]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@harbor ~]# sysctl -p
net.ipv4.ip_forward = 1
# Configure host mappings
[root@harbor ~]# vim /etc/hosts
[root@harbor ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.86.11 k8s-master01 m1
192.168.86.12 k8s-node01 n1
192.168.86.13 k8s-node02 n2
192.168.86.20 harbor.registry.com harbor
3.2 Installing Docker
- Add the Docker repository
- Install Docker
- Configure Docker
- Start Docker
- Verify Docker
[root@harbor ~]# vim /etc/docker/daemon.json
[root@harbor ~]# cat /etc/docker/daemon.json
{"default-ipc-mode": "shareable", #ipc模式打开"data-root": "/data/docker", #指定docker数据放在哪个目录"exec-opts": ["native.cgroupdriver=systemd"], #指定cgroup的驱动方式是systemd"log-driver": "json-file", #格式json"log-opts": {"max-size": "100m","max-file": "50"},"insecure-registries": ["https://harbor.registry.com"], #自己的仓库的地址(私有仓库)"registry-mirrors":[ #拉取镜像(公共仓库)"https://docker.m.daocloud.io","https://docker.imgdb.de","https://docker-0.unsee.tech","https://docker.hlmirror.com","https://docker.1ms.run","https://func.ink","https://lispy.org","https://docker.xiaogenban1993.com"]
}
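Note that dockerd parses /etc/docker/daemon.json as strict JSON, so any `#` annotations are for the reader only and must not appear in the real file. A minimal sketch that checks the syntax of a comment-free version before restarting Docker (written to a temporary path for illustration; the real file lives at /etc/docker/daemon.json):

```shell
# Write a comment-free daemon.json to a temporary path (illustrative copy)
cat > /tmp/daemon.json <<'EOF'
{
  "default-ipc-mode": "shareable",
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "50" },
  "insecure-registries": ["https://harbor.registry.com"],
  "registry-mirrors": ["https://docker.m.daocloud.io"]
}
EOF

# Validate the JSON before restarting dockerd; a syntax error here
# would stop the daemon from coming back up
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```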
[root@harbor ~]# mkdir -p /data/docker
[root@harbor ~]# systemctl restart docker
[root@harbor ~]# ls /data/docker/
buildkit containers engine-id image network overlay2 plugins runtimes swarm tmp volumes
3.3 Installing docker-compose
- Download
- Install
- Grant execute permission
- Verify
[root@harbor ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@harbor ~]# chmod +x /usr/bin/docker-compose
[root@harbor ~]# docker-compose --version
Docker Compose version v2.35.1
[root@harbor ~]# cd /data/
[root@harbor data]# mv /root/harbor-offline-installer-v2.13.0.tgz .
[root@harbor data]# ls
docker harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# tar -xzf harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# ls
docker harbor harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# rm -f *.tgz
[root@harbor data]# ls
docker harbor
[root@harbor data]# cd harbor/
[root@harbor harbor]# ls
common.sh harbor.v2.13.0.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
3.4 Preparing Harbor
- Download Harbor
- Extract the files
3.5 Configuring Certificates
- Generate a CA certificate
- Generate a server certificate
- Provide the certificates to Harbor and Docker
3.6 Deploying and Configuring Harbor
- Configure Harbor
- Load the Harbor images
- Check the installation environment
- Start Harbor
- View the started containers
3.7 Configuring a Startup Service
- Stop Harbor
- Write the service unit file
- Start the Harbor service
3.8 Customizing the Local Registry
- Configure the host mapping
- Configure the registry
3.9 Testing the Local Registry
- Pull an image
- Tag the image
- Log in to the registry
- Push the image
- Pull the image
- Configure the certificates
https://goharbor.io/docs/2.13.0/install-config/installation-prereqs/
[root@harbor harbor]# mkdir ssl
[root@harbor harbor]# cd ssl
[root@harbor ssl]# openssl genrsa -out ca.key 4096
[root@harbor ssl]# ls
ca.key
[root@harbor ssl]# openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=MyPersonal Root CA" \
  -key ca.key \
  -out ca.crt
[root@harbor ssl]# ls
ca.crt  ca.key
[root@harbor ssl]# openssl genrsa -out harbor.registry.com.key 4096
[root@harbor ssl]# ls
ca.crt ca.key harbor.registry.com.key
[root@harbor ssl]# openssl req -sha512 -new \
  -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=harbor.registry.com" \
  -key harbor.registry.com.key \
  -out harbor.registry.com.csr
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key
[root@harbor ssl]# cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.registry.com
DNS.2=harbor.registry
DNS.3=harbor
EOF
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key  v3.ext
[root@harbor ssl]# openssl x509 -req -sha512 -days 3650 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in harbor.registry.com.csr \
  -out harbor.registry.com.crt
Certificate request self-signature ok
subject=C=CN, ST=Chongqing, L=Banan, O=example, OU=Personal, CN=harbor.registry.com
[root@harbor ssl]# ls
ca.crt ca.key ca.srl harbor.registry.com.crt harbor.registry.com.csr harbor.registry.com.key v3.ext
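Before handing the files to Harbor and Docker, the chain can be sanity-checked. A minimal standalone sketch of the same signing flow, using throwaway file names (demo-*) so it does not touch the real keys, ending with the two checks that matter: the server certificate verifies against the CA, and the SAN carries the registry hostname:

```shell
# Throwaway CA
openssl genrsa -out demo-ca.key 2048
openssl req -x509 -new -nodes -sha512 -days 1 \
  -subj "/CN=Demo Root CA" -key demo-ca.key -out demo-ca.crt

# Throwaway server key + CSR
openssl genrsa -out demo-server.key 2048
openssl req -sha512 -new -subj "/CN=harbor.registry.com" \
  -key demo-server.key -out demo-server.csr

# SAN extension file (same shape as the v3.ext above)
cat > demo-v3.ext <<-EOF
basicConstraints=CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.registry.com
EOF

# Sign the CSR with the CA
openssl x509 -req -sha512 -days 1 -extfile demo-v3.ext \
  -CA demo-ca.crt -CAkey demo-ca.key -CAcreateserial \
  -in demo-server.csr -out demo-server.crt

# 1) The server cert must chain back to the CA
openssl verify -CAfile demo-ca.crt demo-server.crt

# 2) The SAN must carry the registry hostname, or TLS clients reject it
openssl x509 -in demo-server.crt -noout -ext subjectAltName
```

The same two checks can be run against the real ca.crt and harbor.registry.com.crt generated above.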
[root@harbor ssl]# mkdir /data/cert
[root@harbor ssl]# cp harbor.registry.com.crt /data/cert/
[root@harbor ssl]# cp harbor.registry.com.key /data/cert/
[root@harbor ssl]# ls /data/cert/
harbor.registry.com.crt harbor.registry.com.key
[root@harbor ssl]# openssl x509 -inform PEM -in harbor.registry.com.crt -out harbor.registry.com.cert
[root@harbor ssl]# ls
ca.crt ca.srl harbor.registry.com.crt harbor.registry.com.key
ca.key  harbor.registry.com.cert  harbor.registry.com.csr  v3.ext
[root@harbor ssl]# mkdir -p /etc/docker/certs.d/harbor.registry.com:443
[root@harbor ssl]# cp harbor.registry.com.cert /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp harbor.registry.com.key /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp ca.crt /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# systemctl restart docker
[root@harbor ssl]# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: active (running) since Sun 2025-05-11 10:42:37 CST; 1min 25s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 3322 (dockerd)
      Tasks: 10
     Memory: 29.7M
        CPU: 353ms
     CGroup: /system.slice/docker.service
             └─3322 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

May 11 10:42:36 harbor dockerd[3322]: time="2025-05-11T10:42:36.466441227+08:00" level=info msg="Creating a contai>
[root@harbor ssl]# cd ..
[root@harbor harbor]# ls
common.sh harbor.v2.13.0.tar.gz harbor.yml.tmpl install.sh LICENSE prepare ssl
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.registry.com        # modified

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/harbor.registry.com.crt    # modified
  private_key: /data/cert/harbor.registry.com.key    # modified
...............
[root@harbor harbor]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
[root@harbor harbor]# docker load -i harbor.v2.13.0.tar.gz
874b37071853: Loading layer [==================================================>]
........
832349ff3d50: Loading layer [==================================================>] 38.95MB/38.95MB
Loaded image: goharbor/harbor-exporter:v2.13.0
[root@harbor harbor]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
goharbor/harbor-exporter v2.13.0 0be56feff492 4 weeks ago 127MB
goharbor/redis-photon v2.13.0 7c0d9781ab12 4 weeks ago 166MB
goharbor/trivy-adapter-photon v2.13.0 f2b4d5497558 4 weeks ago 381MB
goharbor/harbor-registryctl v2.13.0 bbd957df71d6 4 weeks ago 162MB
goharbor/registry-photon v2.13.0 fa23989bf194 4 weeks ago 85.9MB
goharbor/nginx-photon v2.13.0 c922d86a7218 4 weeks ago 151MB
goharbor/harbor-log v2.13.0 463b8f469e21 4 weeks ago 164MB
goharbor/harbor-jobservice v2.13.0 112a1616822d 4 weeks ago 174MB
goharbor/harbor-core v2.13.0 b90fcb27fd54 4 weeks ago 197MB
goharbor/harbor-portal v2.13.0 858f92a0f5f9 4 weeks ago 159MB
goharbor/harbor-db v2.13.0 13a2b78e8616 4 weeks ago 273MB
goharbor/prepare v2.13.0 2380b5a4f127 4 weeks ago 205MB
[root@harbor harbor]# ls
common.sh harbor.v2.13.0.tar.gz harbor.yml harbor.yml.tmpl install.sh LICENSE prepare ssl
[root@harbor harbor]# ./prepare
prepare base dir is set to /data/harbor
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

# Start Harbor
[root@harbor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...
Note: docker version: 28.0.4

[Step 1]: checking docker-compose is installed ...
Note: Docker Compose version v2.34.0

[Step 2]: loading Harbor images ...
Loaded image: goharbor/harbor-db:v2.13.0
Loaded image: goharbor/harbor-jobservice:v2.13.0
Loaded image: goharbor/harbor-registryctl:v2.13.0
Loaded image: goharbor/redis-photon:v2.13.0
Loaded image: goharbor/trivy-adapter-photon:v2.13.0
Loaded image: goharbor/nginx-photon:v2.13.0
Loaded image: goharbor/registry-photon:v2.13.0
Loaded image: goharbor/prepare:v2.13.0
Loaded image: goharbor/harbor-portal:v2.13.0
Loaded image: goharbor/harbor-core:v2.13.0
Loaded image: goharbor/harbor-log:v2.13.0
Loaded image: goharbor/harbor-exporter:v2.13.0

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /data/harbor
Clearing the configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

Note: stopping existing Harbor instance ...

[Step 5]: starting Harbor ...
[+] Running 10/10    # all 10 containers started successfully
 ✔ Network harbor_harbor  Created    0.0s
......... 1.3s
✔ ----Harbor has been installed and started successfully.----
- Configure the startup service
# Removes the containers, but not the images
[root@harbor harbor]# docker-compose down
[+] Running 10/10
 ✔ Container harbor-jobservice  Removed    0.1s
 ✔ Container registryctl        Removed    0.1s
 ✔ Container nginx              Removed    0.1s
 ✔ Container harbor-portal      Removed    0.1s
 ✔ Container harbor-core        Removed    0.1s
 ✔ Container registry           Removed    0.1s
 ✔ Container redis              Removed    0.1s
 ✔ Container harbor-db          Removed    0.2s
 ✔ Container harbor-log         Removed   10.1s
 ✔ Network harbor_harbor        Removed    0.1s
[root@harbor harbor]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@harbor harbor]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
goharbor/harbor-exporter v2.13.0 0be56feff492 4 weeks ago 127MB
goharbor/redis-photon v2.13.0 7c0d9781ab12 4 weeks ago 166MB
goharbor/trivy-adapter-photon v2.13.0 f2b4d5497558 4 weeks ago 381MB
goharbor/harbor-registryctl v2.13.0 bbd957df71d6 4 weeks ago 162MB
goharbor/registry-photon v2.13.0 fa23989bf194 4 weeks ago 85.9MB
goharbor/nginx-photon v2.13.0 c922d86a7218 4 weeks ago 151MB
goharbor/harbor-log v2.13.0 463b8f469e21 4 weeks ago 164MB
goharbor/harbor-jobservice v2.13.0 112a1616822d 4 weeks ago 174MB
goharbor/harbor-core v2.13.0 b90fcb27fd54 4 weeks ago 197MB
goharbor/harbor-portal v2.13.0 858f92a0f5f9 4 weeks ago 159MB
goharbor/harbor-db v2.13.0 13a2b78e8616 4 weeks ago 273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB

# Write the service unit file (it must contain these three sections)
[root@harbor harbor]# vim /usr/lib/systemd/system/harbor.service
[root@harbor harbor]# cat /usr/lib/systemd/system/harbor.service
[Unit]
# Defines the service's dependencies, ordering, and description
Description=Harbor
# "After" only sets startup ordering (start after these units), not a hard dependency
After=docker.service systemd-networkd.service systemd-resolved.service
# "Requires" is a hard dependency: docker.service must be started first
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
# Defines how the service is started, stopped, and restarted
# Process startup type
Type=simple
# Restart the service if it fails
Restart=on-failure
# Seconds to wait before restarting
RestartSec=5
# Command executed on startup
ExecStart=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml up
# /usr/bin/docker-compose is the path we installed docker-compose to earlier
ExecStop=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
- /usr/bin/docker-compose is the path docker-compose was installed to earlier
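With the unit file in place, the service is registered and started in the usual systemd way. A minimal sketch: the `systemctl` lines are commented since they need a live systemd; the rest writes an illustrative local copy of the unit and re-checks that it carries the three required sections:

```shell
# Illustrative local copy of the unit file (the real one lives at
# /usr/lib/systemd/system/harbor.service)
cat > ./harbor.service <<'EOF'
[Unit]
Description=Harbor
After=docker.service
Requires=docker.service

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

# Confirm the three sections a unit file needs: [Unit], [Service], [Install]
for section in Unit Service Install; do
  grep -q "^\[$section\]" ./harbor.service && echo "[$section] present"
done

# On the real host, register and start the service:
# systemctl daemon-reload
# systemctl enable --now harbor
# systemctl status harbor
```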
[root@harbor ~]# cd /data/harbor/
[root@harbor harbor]# ls
common docker-compose.yml harbor.yml install.sh prepare
common.sh harbor.v2.13.0.tar.gz harbor.yml.tmpl LICENSE ssl
[root@harbor harbor]# cat docker-compose.yml
services:
  log:
    image: goharbor/harbor-log:v2.13.0
    container_name: harbor-log
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
      - type: bind
        source: ./common/config/log/logrotate.conf
        target: /etc/logrotate.d/logrotate.conf
      - type: bind
        source: ./common/config/log/rsyslog_docker.conf
        target: /etc/rsyslog.d/rsyslog_docker.conf
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: goharbor/registry-photon:v2.13.0
    container_name: registry
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: /data/secret/registry/root.crt
        target: /etc/registry/root.crt
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "registry"
  registryctl:
    image: goharbor/harbor-registryctl:v2.13.0
    container_name: registryctl
    env_file:
      - ./common/config/registryctl/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: ./common/config/registryctl/config.yml
        target: /etc/registryctl/config.yml
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "registryctl"
  postgresql:
    image: goharbor/harbor-db:v2.13.0
    container_name: harbor-db
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /data/database:/var/lib/postgresql/data:z
    networks:
      harbor:
    env_file:
      - ./common/config/db/env
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "postgresql"
    shm_size: '1gb'
  core:
    image: goharbor/harbor-core:v2.13.0
    container_name: harbor-core
    env_file:
      - ./common/config/core/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
    volumes:
      - /data/ca_download/:/etc/core/ca/:z
      - /data/:/data/:z
      - ./common/config/core/certificates/:/etc/core/certificates/:z
      - type: bind
        source: ./common/config/core/app.conf
        target: /etc/core/app.conf
      - type: bind
        source: /data/secret/core/private_key.pem
        target: /etc/core/private_key.pem
      - type: bind
        source: /data/secret/keys/secretkey
        target: /etc/core/key
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      harbor:
    depends_on:
      - log
      - registry
      - redis
      - postgresql
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "core"
  portal:
    image: goharbor/harbor-portal:v2.13.0
    container_name: harbor-portal
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - type: bind
        source: ./common/config/portal/nginx.conf
        target: /etc/nginx/nginx.conf
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "portal"
  jobservice:
    image: goharbor/harbor-jobservice:v2.13.0
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/job_logs:/var/log/jobs:z
      - type: bind
        source: ./common/config/jobservice/config.yml
        target: /etc/jobservice/config.yml
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - core
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "jobservice"
  redis:
    image: goharbor/redis-photon:v2.13.0
    container_name: redis
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/redis:/var/lib/redis
    networks:
      harbor:
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "redis"
  proxy:
    image: goharbor/nginx-photon:v2.13.0
    container_name: nginx
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - ./common/config/nginx:/etc/nginx:z
      - /data/secret/cert:/etc/cert:z
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    ports:
      - 80:8080
      - 443:8443
    depends_on:
      - registry
      - core
      - portal
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "proxy"
networks:
  harbor:
    external: false