Deploying a Harbor Registry on K8s: Hands-On
Create the data directories
- chartmuseum directory: /var/nfs/data/harbor/chartmuseum
- database directory: /var/nfs/data/harbor/database
- jobservice directory: /var/nfs/data/harbor/jobservice
- redis directory: /var/nfs/data/harbor/redis
- registry directory: /var/nfs/data/harbor/registry
- trivy directory: /var/nfs/data/harbor/trivy
- Script:
mkdir -p /var/nfs/data/harbor/chartmuseum
mkdir -p /var/nfs/data/harbor/database
mkdir -p /var/nfs/data/harbor/jobservice
mkdir -p /var/nfs/data/harbor/redis
mkdir -p /var/nfs/data/harbor/registry
mkdir -p /var/nfs/data/harbor/trivy
chmod -R 777 /var/nfs/data/harbor
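The directory creation above can be compacted into one loop. The sketch below uses a scratch base directory so it can be dry-run safely; replace `$base` with `/var/nfs/data` to apply it for real:

```shell
# Create all Harbor data directories in one loop.
# $base is a scratch dir here for a dry run; use /var/nfs/data in production.
base=$(mktemp -d)
for d in chartmuseum database jobservice redis registry trivy; do
  mkdir -p "$base/harbor/$d"
done
chmod -R 777 "$base/harbor"
ls "$base/harbor"
```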
Add the Helm repository
helm repo add harbor https://helm.goharbor.io
helm repo update
Create the namespace
kubectl create namespace harbor
Create a directory for the config files
mkdir -p /root/harbor
cd /root/harbor
Create harbor-pv.yaml with the following content
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  labels:
    app: harbor-registry
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "nfs-client"
  mountOptions:
    - hard
  nfs:
    path: /mydata/k8s/public/harbor/registry
    server: 192.168.5.22
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-chartmuseum
  labels:
    app: harbor-chartmuseum
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "nfs-client"
  mountOptions:
    - hard
  nfs:
    path: /mydata/k8s/public/harbor/chartmuseum
    server: 192.168.5.22
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-jobservice
  labels:
    app: harbor-jobservice
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "nfs-client"
  mountOptions:
    - hard
  nfs:
    path: /mydata/k8s/public/harbor/jobservice
    server: 192.168.5.22
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-database
  labels:
    app: harbor-database
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "nfs-client"
  mountOptions:
    - hard
  nfs:
    path: /mydata/k8s/public/harbor/database
    server: 192.168.5.22
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-redis
  labels:
    app: harbor-redis
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "nfs-client"
  mountOptions:
    - hard
  nfs:
    path: /mydata/k8s/public/harbor/redis
    server: 192.168.5.22
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-trivy
  labels:
    app: harbor-trivy
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "nfs-client"
  mountOptions:
    - hard
  nfs:
    path: /mydata/k8s/public/harbor/trivy
    server: 192.168.5.22
Substitute the values for this environment:
sed -i 's/192.168.5.22/k8s-master01/g' /root/harbor/harbor-pv.yaml
sed -i 's/\/mydata\/k8s\/public/\/var\/nfs\/data/g' /root/harbor/harbor-pv.yaml
sed -i 's/nfs-client/nfs-sc/g' /root/harbor/harbor-pv.yaml
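The three substitutions above can also be combined into one sed invocation. A quick way to sanity-check them before touching the real file is to run the combined command against a scratch copy with representative lines:

```shell
# Dry-run the combined substitutions on a scratch file with sample lines
# taken from harbor-pv.yaml, then inspect the result.
tmp=$(mktemp)
printf 'server: 192.168.5.22\npath: /mydata/k8s/public/harbor/registry\nstorageClassName: "nfs-client"\n' > "$tmp"
sed -i 's/192.168.5.22/k8s-master01/g; s/\/mydata\/k8s\/public/\/var\/nfs\/data/g; s/nfs-client/nfs-sc/g' "$tmp"
cat "$tmp"
rm -f "$tmp"
```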
Apply the PVs
kubectl apply -f /root/harbor/harbor-pv.yaml
If something went wrong, delete them:
kubectl delete -f /root/harbor/harbor-pv.yaml
Create harbor-pvc.yaml with the following content
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-registry
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      app: harbor-registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-chartmuseum
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: harbor-chartmuseum
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-jobservice
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: harbor-jobservice
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-database
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: harbor-database
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-redis
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: harbor-redis
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-trivy
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: harbor-trivy
sed -i 's/nfs-client/nfs-sc/g' /root/harbor/harbor-pvc.yaml
Apply the PVCs
kubectl apply -f /root/harbor/harbor-pvc.yaml -n harbor
If something went wrong, delete them:
kubectl delete -f harbor-pvc.yaml -n harbor
Create harbor-values.yaml and substitute the values
expose:
  type: ingress
  tls:
    enabled: true
  clusterIP:
    name: harbor
    annotations: {}
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  ingress:
    hosts:
      core: harbor-core.public.192.168.4.224.nip.io
      notary: harbor-notary.public.192.168.4.224.nip.io
    controller: default
    kubeVersionOverride: ""
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    notary:
      annotations: {}
    harbor:
      annotations: {}
externalURL: https://harbor-core.public.192.168.4.224.nip.io:31839
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "harbor-registry"
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 50Gi
    chartmuseum:
      existingClaim: "harbor-chartmuseum"
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "harbor-jobservice"
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    database:
      existingClaim: "harbor-database"
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    redis:
      existingClaim: "harbor-redis"
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    trivy:
      existingClaim: "harbor-trivy"
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
sed -i 's/nfs-client/nfs-sc/g' /root/harbor/harbor-values.yaml
If you have no domain name, expose the service with a NodePort instead of an Ingress, and disable HTTPS.
Replace the expose section of harbor-values.yaml with:
expose:
  type: nodePort
  tls:
    enabled: false
  clusterIP:
    name: harbor
    annotations: {}
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
externalURL: http://yourip:3xxxx
Reference article on exposing via NodePort:
helm安装harbor后登陆一直提示账户或密码错误_harbor默认密码不对-CSDN博客
Exposing via nginx
externalURL configuration
- To handle internal vs. external networks: inside the cluster, use the internal IP plus one port; external clients use the external IP plus a different port.
- Set externalURL to the internal IP.
- Then expose the harbor-nginx Deployment as a second Service on another port, so external clients can use it.
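The second Service mentioned above could be sketched as follows. Note this is a hypothetical manifest: the selector labels (`app: harbor`, `component: nginx`), target port, and NodePort number are assumptions based on a typical Harbor chart deployment, not values from this walkthrough; check the labels of your actual harbor-nginx Deployment before applying it.

```yaml
# Hypothetical second NodePort Service for external clients.
# Selector labels and ports are assumptions -- verify against
# your actual harbor-nginx Deployment.
apiVersion: v1
kind: Service
metadata:
  name: harbor-nginx-external
  namespace: harbor
spec:
  type: NodePort
  selector:
    app: harbor
    component: nginx
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30880
```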
If a domain name is available, use an Ingress
Install ingress-nginx
Reference: K8S Helm 安装ingress-nginx (CSDN blog)
Modify the expose section of harbor-values.yaml:
expose:
  type: ingress
  tls:
    enabled: true
  clusterIP:
    name: harbor
    annotations: {}
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  ingress:
    controller: default
    kubeVersionOverride: ""
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    notary:
      annotations: {}
    harbor:
      annotations: {}
persistence:
Install Harbor (externalURL=https://harbor.david.org)
helm install harbor harbor/harbor --namespace harbor --create-namespace \
  --values harbor-values.yaml \
  --set expose.ingress.className=nginx \
  --set expose.ingress.hosts.core=harbor.david.org \
  --set expose.ingress.hosts.notary=notary.david.org \
  --set externalURL=https://harbor.david.org \
  --set harborAdminPassword="Harbor12345"
Inspect harbor-ingress
[root@k8s-master01 harbor]# kubectl describe ingress -n harbor harbor-ingress
Name:             harbor-ingress
Labels:           app=harbor
                  app.kubernetes.io/managed-by=Helm
                  chart=harbor
                  heritage=Helm
                  release=harbor
Namespace:        harbor
Address:
Ingress Class:    nginx
Default backend:  <default>
TLS:
  harbor-ingress terminates harbor.david.org
Rules:
  Host              Path         Backends
  ----              ----         --------
  harbor.david.org
                    /api/        harbor-core:80 (10.244.1.102:8080)
                    /service/    harbor-core:80 (10.244.1.102:8080)
                    /v2/         harbor-core:80 (10.244.1.102:8080)
                    /chartrepo/  harbor-core:80 (10.244.1.102:8080)
                    /c/          harbor-core:80 (10.244.1.102:8080)
                    /            harbor-portal:80 (10.244.1.100:8080)
Annotations:      ingress.kubernetes.io/proxy-body-size: 0
                  ingress.kubernetes.io/ssl-redirect: true
                  meta.helm.sh/release-name: harbor
                  meta.helm.sh/release-namespace: harbor
                  nginx.ingress.kubernetes.io/proxy-body-size: 0
                  nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    11m   nginx-ingress-controller  Scheduled for sync
Install
helm install harbor harbor/harbor -f harbor-values.yaml -n harbor
If something went wrong, uninstall:
helm uninstall harbor -n harbor
Check that everything is up
kubectl get pods -owide -n harbor
kubectl get ingress -owide -n harbor
Log in via the web UI
Username: admin
Password: Harbor12345
http://yourip:3xxxx
If using an Ingress, configure a hosts entry
https://harbor-core.myharbor.io:3xxxx
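On the client machine, the hosts entry might look like the sketch below; `<node-ip>` is a placeholder (not a value from this walkthrough) for the IP of any node running the ingress controller:

```text
# /etc/hosts on the client machine; replace <node-ip> with the
# address of a node where the ingress controller is reachable.
<node-ip>  harbor-core.myharbor.io
```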
Troubleshooting an earlier error
Error: INSTALLATION FAILED: Unable to continue with install:
PersistentVolumeClaim "harbor-jobservice" in namespace "harbor" exists and cannot be imported into the current release:
invalid ownership metadata;
label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "harbor"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "harbor"
The pre-created jobservice PVC is not needed here, so delete it:
kubectl delete pvc harbor-jobservice -n harbor
Check the status
kubectl get pods -owide -n harbor
Inspect a problematic pod
kubectl describe pods -n harbor <pod-name>
Modify the Docker config file to add the Harbor registry address
[root@master01 harbor]# vi /etc/docker/daemon.json
[root@master01 harbor]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["yourip:3xxxx", "0.0.0.0/0"]
}
Restart Docker
[root@master01 harbor]# systemctl daemon-reload
[root@master01 harbor]# systemctl restart docker.service
Log in to the Harbor registry
[root@master01 harbor]# docker login yourip:3xxxx
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
docker login | Docker Docs
Login Succeeded
Tag a new image
docker images
docker tag hello-world:latest yourip:3xxxx/yourlib/hello-world:latest
Push it
docker push yourip:3xxxx/yourlib/hello-world:latest
In the web UI, confirm that hello-world:latest now appears under the yourlib project.
- Configure K8s to pull images from the private registry
- Errors to fix:
- container failed to do request Head https
- http: server gave HTTP response to HTTPS client
- HTTPS verification needs to be skipped
- Modify the following configuration
vi /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry.auths]

[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."yourip:3xxxx".tls]
    insecure_skip_verify = true

[plugins."io.containerd.grpc.v1.cri".registry.headers]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."yourip:3xxxx"]
    endpoint = ["http://yourip:3xxxx"]
# Reload the config and restart
systemctl daemon-reload && systemctl restart containerd
# You can write a Deployment to test pulling an image
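A minimal test Deployment along those lines; the image path reuses the placeholder from the push step above. Note that the hello-world container exits immediately, so the point is only whether the image pull succeeds (i.e. the pod does not sit in ImagePullBackOff):

```yaml
# Test Deployment that pulls from the private Harbor registry.
# If the image pull succeeds, the CRI insecure-registry config works;
# the hello-world container itself exits right after printing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pull-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pull-test
  template:
    metadata:
      labels:
        app: pull-test
    spec:
      containers:
        - name: hello
          image: yourip:3xxxx/yourlib/hello-world:latest
```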
- After the changes above, pulling with the ctr command still throws the same error, but K8s itself pulls images normally:
- ctr -n=k8s.io images pull yourip:3xxxx/test/hello-world:latest
- Error: http: server gave HTTP response to HTTPS client
- The root cause is that ctr does not go through the CRI plugin, so it never reads this config:
- [plugins."io.containerd.grpc.v1.cri".registry.configs]
- For the same reason, configuring /etc/containerd/certs.d does not take effect for it either.
- Setting --plain-http=true in configuration does not solve this, but passing it on the ctr command line makes the pull succeed:
- ctr -n=k8s.io images pull yourip:3xxxx/docker.io/busybox:latest --platform linux/amd64 --plain-http=true
- Related configuration docs:
- https://github.com/containerd/containerd/blob/release/1.5/docs/hosts.md
- Related issues:
- https://github.com/containerd/containerd/issues/2758
- https://github.com/containerd/containerd/issues/3800
- https://github.com/containerd/containerd/issues/6285
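For containerd releases that honor the hosts.d layout described in the hosts.md link above (i.e. `config_path` is set under the CRI registry section), the same plain-HTTP registry can be declared per host. A sketch, with the caveat that the walkthrough above found certs.d ineffective on its containerd version, so treat this as an alternative for newer releases only:

```toml
# /etc/containerd/certs.d/yourip:3xxxx/hosts.toml
# Requires config_path = "/etc/containerd/certs.d" under
# [plugins."io.containerd.grpc.v1.cri".registry] in config.toml.
server = "http://yourip:3xxxx"

[host."http://yourip:3xxxx"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
```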