One-Click Install Script for a Highly Available Kubernetes 1.29 Cluster


Original blog post

Table of Contents

    • Cluster Configuration
      • Version List
      • Cluster Layout
      • Cluster Network Plan
    • Environment Initialization
      • Host Configuration
    • Configuring a Highly Available API Server
      • Installing nginx
      • Installing Keepalived
    • Install Scripts
      • Script That Requires a Proxy
      • Script That Does Not Require a Proxy
      • Initialize master01
      • Configuring Shell Completion
      • Joining the Remaining Nodes
    • Verifying the Cluster

Cluster Configuration

Version List

  • OS: Ubuntu 20.04
  • Kubernetes: 1.29.1
  • Container Runtime: containerd 1.7.11
  • OCI Runtime: runc 1.1.10
  • CNI: cni-plugins 1.4.0

Cluster Layout

IP               Hostname   Spec
192.168.254.130  master01   2C 4G 30G
192.168.254.131  master02   2C 4G 30G
192.168.254.132  node01     2C 4G 30G

Cluster Network Plan

  • Pod network: 10.244.0.0/16
  • Service network: 10.96.0.0/12
  • Node network: 192.168.254.0/24
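The three ranges above must be disjoint, or pods, services, and nodes will route into each other. A minimal bash sketch of that sanity check (`ip2int` and `cidr_overlap` are illustration helpers invented here, not part of any Kubernetes tooling):

```shell
# Hypothetical check that two CIDRs do not overlap: mask both network
# addresses to the shorter prefix and compare.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
cidr_overlap() {  # usage: cidr_overlap 10.0.0.0/8 10.96.0.0/12  -> yes|no
  local n1=${1%/*} l1=${1#*/} n2=${2%/*} l2=${2#*/} i1 i2 minlen mask
  i1=$(ip2int "$n1"); i2=$(ip2int "$n2")
  minlen=$(( l1 < l2 ? l1 : l2 ))
  mask=$(( (0xFFFFFFFF << (32 - minlen)) & 0xFFFFFFFF ))
  if (( (i1 & mask) == (i2 & mask) )); then echo yes; else echo no; fi
}
cidr_overlap 10.244.0.0/16 10.96.0.0/12       # pod vs service
cidr_overlap 10.244.0.0/16 192.168.254.0/24   # pod vs node
cidr_overlap 10.96.0.0/12  192.168.254.0/24   # service vs node
```

All three calls print `no` for the plan above; if you change any CIDR, rerun the check before initializing the cluster.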

Environment Initialization

Host Configuration

Generate an SSH key on master01, distribute it to the other nodes, and add all nodes to /etc/hosts:

ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.254.131
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.254.132

# Add the nodes to /etc/hosts
cat << EOF >> /etc/hosts
192.168.254.130 master01
192.168.254.131 master02
192.168.254.132 node01
EOF
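For more than three nodes, the same entries can be generated from a single list instead of a hand-typed heredoc. A small sketch (writes to a temp file for illustration; on a real node you would append to /etc/hosts):

```shell
# Generate hosts entries from one list of ip:hostname pairs.
nodes="192.168.254.130:master01 192.168.254.131:master02 192.168.254.132:node01"
hosts_file=$(mktemp)
for n in $nodes; do
  # ${n%%:*} is the IP, ${n##*:} is the hostname
  printf '%s %s\n' "${n%%:*}" "${n##*:}" >> "$hosts_file"
done
cat "$hosts_file"
```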

Configuring a Highly Available API Server

Installing nginx

Run this on every master node:

apt install nginx -y
systemctl status nginx

# Edit the nginx configuration file
cat /etc/nginx/nginx.conf
user user;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

# Add the stream block below; everything else keeps its defaults
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
        server 192.168.254.130:6443;    # master01 IP and port 6443
        server 192.168.254.131:6443;    # master02 IP and port 6443
    }
    server {
        listen 16443;               # listen on 16443: nginx shares the machine with the API server, so 6443 is already taken
        proxy_pass k8s-apiserver;   # reverse proxy via the stream module's proxy_pass
    }
}
......

# Restart the nginx service
systemctl restart nginx && systemctl enable nginx && systemctl status nginx

# Port check
# netstat -lntup | grep 16443
nc -l -p 16443
# nc: Address already in use  (nginx already owns the port, as expected)

Installing Keepalived

Run this on every master node:

apt install keepalived -y

# Write the nginx health-check script
# (quote the heredoc delimiter so $counter and the command substitutions
# are written literally instead of being expanded now)
cat << 'EOF' > /etc/keepalived/nginx_check.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -C nginx --no-header | wc -l)
if [ $counter -eq 0 ]; then
    # 2. If not, try to start nginx (installed via apt, so use systemctl)
    systemctl start nginx
    sleep 2
    # 3. After 2 seconds, check nginx again
    counter=$(ps -C nginx --no-header | wc -l)
    # 4. If nginx is still down, stop Keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        killall keepalived
    fi
fi
EOF
chmod +x /etc/keepalived/nginx_check.sh

Update the keepalived configuration on master01:

cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"  ## path of the nginx health-check script
    interval 2              ## check interval in seconds
    weight -20              ## subtract 20 from priority when the check fails
}
vrrp_instance VI_1 {
    state MASTER                ## MASTER on the primary node, BACKUP on the standby
    interface ens33             ## interface that carries the VIP; same interface as the host IP
    virtual_router_id 100       ## virtual router id; must match on both nodes
    priority 100                ## node priority, 0-254; MASTER must be higher than BACKUP
    advert_int 1
    authentication {            ## authentication settings; must match on both nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx               ## run the nginx health check
    }
    virtual_ipaddress {
        192.168.254.100         ## VIP; must be identical on both nodes (multiple allowed)
    }
}
EOF
systemctl restart keepalived && systemctl enable keepalived.service
ip a | grep 192.168.254.100

Update the keepalived configuration on master02:

cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"  ## path of the nginx health-check script
    interval 2              ## check interval in seconds
    weight -20              ## subtract 20 from priority when the check fails
}
vrrp_instance VI_1 {
    state BACKUP                ## MASTER on the primary node, BACKUP on the standby
    interface ens33             ## interface that carries the VIP; same interface as the host IP
    virtual_router_id 100       ## virtual router id; must match on both nodes
    priority 90                 ## node priority, 0-254; MASTER must be higher than BACKUP
    advert_int 1
    authentication {            ## authentication settings; must match on both nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx               ## run the nginx health check
    }
    virtual_ipaddress {
        192.168.254.100         ## VIP; must be identical on both nodes (multiple allowed)
    }
}
EOF
systemctl restart keepalived && systemctl enable keepalived.service
ip a | grep 192.168.254.100

Install Scripts

**Prerequisite:** the script pulls resources hosted outside China, so you need to configure a proxy first ==> [如何让虚拟机拥有愉快网络环境](https://ai-feier.github.io/p/%E5%A6%82%E4%BD%95%E8%AE%A9%E8%99%9A%E6%8B%9F%E6%9C%BA%E6%8B%A5%E6%9C%89%E6%84%89%E5%BF%AB%E7%BD%91%E7%BB%9C%E7%8E%AF%E5%A2%83/)

You will need:

  • a proxy for the VM
  • a proxy for apt downloads

Script That Requires a Proxy

Run the following script on every node.

What the script does:

  • time synchronization
  • disable swap
  • enable the required kernel modules
  • install ipvs and enable its kernel parameters
  • install containerd, runc, and the CNI plugins
  • switch the containerd sandbox image and cgroup driver, and configure registry mirrors
  • install the latest kubelet, kubeadm, and kubectl

Note: first set the current node's hostname via export name=master01 (adjust per node).

install.sh:

export name=master01  # set to this node's hostname; remove this line from the script itself

#!/bin/bash
hostnamectl set-hostname $name

# Aliyun apt mirror
mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat <<EOF > /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update

# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony to sync time over the network
apt install chrony -y && systemctl enable --now chronyd

# Disable swap
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab

# Install ipvs tools
apt install -y ipset ipvsadm

# Configure the required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the modules
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters
sudo sysctl --system
# Verify the configuration
#lsmod | grep br_netfilter
#lsmod | grep overlay
#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# Configure the ipvs kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Load the ipvs modules
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
# Confirm the ipvs modules are loaded
#lsmod | grep -e ip_vs -e nf_conntrack

# Install containerd
wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
# The archive unpacks into a bin/ directory holding the containerd binaries
mv bin/* /usr/local/bin/
rm -rf bin

# Manage containerd with systemd
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now containerd
#systemctl status containerd

# Install runc
# runc is the low-level OCI runtime: it implements init, run, create, ps, ... for containers
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 && \
install -m 755 runc.amd64 /usr/local/sbin/runc

# Install the CNI plugins
wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Per the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/

# Adjust the containerd configuration
# (by default containerd pulls images from registry.k8s.io, which may be unreachable)
# Create a directory for the containerd config file
mkdir -p /etc/containerd
# Dump the default containerd config to a file
containerd config default | sudo tee /etc/containerd/config.toml

# Change the sandbox (pause) image
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Switch the cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Enable per-registry mirror configuration
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml

# Configure containerd registry mirrors
# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
  capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF

# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# Restart containerd
systemctl restart containerd
#systemctl status containerd

# Install kubeadm, kubelet, and kubectl
# Install dependencies
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Start kubelet on boot
systemctl enable --now kubelet

# Point crictl at the containerd socket
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
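The three `sed` substitutions in the script can fail silently if a pattern does not match, so it is worth rehearsing them on a toy snippet first. The miniature config below is a made-up stand-in for the relevant lines of /etc/containerd/config.toml, not the real file:

```shell
# Miniature stand-in for /etc/containerd/config.toml, used only to
# verify that the three sed patterns match and rewrite as intended.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
sandbox_image = "registry.k8s.io/pause:3.9"
SystemdCgroup = false
config_path = ""
EOF

# The same three substitutions the install script applies to the real file
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$cfg"
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' "$cfg"
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' "$cfg"

cat "$cfg"
```

After a real run, `grep -E 'sandbox_image|SystemdCgroup|config_path' /etc/containerd/config.toml` should show all three values changed; if any line kept its old value, the pattern did not match your containerd version's default config.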

Script That Does Not Require a Proxy

Prerequisite:

Download the resource bundle I prepared:

  • CSDN resource – free

  • Aliyun OSS

  • GitLab

Resource list:

Resource                                Original URL
Container Runtime: containerd 1.7.11    https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
OCI Runtime: runc 1.1.10                https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64
CNI: cni-plugins 1.4.0                  https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
calico 3.27: tigera-operator.yaml       https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
calico 3.27: custom-resources.yaml      https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml

Download the resources:

wget -O k8s1.29.tar.gz https://blog-source-mkt.oss-cn-chengdu.aliyuncs.com/resources/k8s/kubeadm%20init/k8s1.29.tar.gz
tar xzvf k8s1.29.tar.gz
cd workdir
export name=master01  # set to this node's hostname

Run the following script on every node.

What the script does:

  • time synchronization
  • disable swap
  • enable the required kernel modules
  • install ipvs and enable its kernel parameters
  • install containerd, runc, and the CNI plugins
  • switch the containerd sandbox image and cgroup driver, and configure registry mirrors
  • install the latest kubelet, kubeadm, and kubectl

Note: first set the current node's hostname via export name=master01 (adjust per node).

install.sh:

#!/bin/bash
hostnamectl set-hostname $name

# Aliyun apt mirror
mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat <<EOF > /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update

# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony to sync time over the network
apt install chrony -y && systemctl enable --now chronyd

# Disable swap
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab

# Install ipvs tools
apt install -y ipset ipvsadm

# Configure the required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the modules
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters
sudo sysctl --system
# Verify the configuration
#lsmod | grep br_netfilter
#lsmod | grep overlay
#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# Configure the ipvs kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Load the ipvs modules
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
# Confirm the ipvs modules are loaded
#lsmod | grep -e ip_vs -e nf_conntrack

# Install containerd (already in the resource bundle, no download needed)
#wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
# The archive unpacks into a bin/ directory holding the containerd binaries
mv bin/* /usr/local/bin/
rm -rf bin

# Manage containerd with systemd
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now containerd
#systemctl status containerd

# Install runc
# runc is the low-level OCI runtime: it implements init, run, create, ps, ... for containers
#curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

# Install the CNI plugins
#wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Per the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/

# Adjust the containerd configuration
# (by default containerd pulls images from registry.k8s.io, which may be unreachable)
# Create a directory for the containerd config file
mkdir -p /etc/containerd
# Dump the default containerd config to a file
containerd config default | sudo tee /etc/containerd/config.toml

# Change the sandbox (pause) image
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Switch the cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Enable per-registry mirror configuration
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml

# Configure containerd registry mirrors
# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
  capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF

# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF

# Restart containerd
systemctl restart containerd
#systemctl status containerd

# Install kubeadm, kubelet, and kubectl
# Install dependencies
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Start kubelet on boot
systemctl enable --now kubelet

# Point crictl at the containerd socket
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock

Then run it:

chmod +x install.sh
./install.sh

Initialize master01

Export the environment variables:

export K8S_VERSION=1.29.1                    # k8s cluster version
export POD_CIDR=10.244.0.0/16                # pod network CIDR
export SERVICE_CIDR=10.96.0.0/12             # service network CIDR
export APISERVER_MASTER01=192.168.254.130    # master01 IP
export APISERVER_HA=192.168.254.100          # cluster VIP
export APISERVER_HA_PORT=16443               # cluster VIP port

Initialize the cluster on your primary master node (still inside workdir/):

# Command-line initialization; afterwards you would have to switch kube-proxy to ipvs mode by hand
# kubeadm init --apiserver-advertise-address=$APISERVER_MASTER01 --apiserver-bind-port=6443 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.29.1 --service-cidr=$SERVICE_CIDR --pod-network-cidr=$POD_CIDR --upload-certs

# kubeadm config print init-defaults > Kubernetes-cluster.yaml  # kubeadm default config
cat << EOF > Kubernetes-cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Replace with the primary master's IP; etcd binds to this address and fails if the host does not have it
  advertiseAddress: $APISERVER_MASTER01
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: $name  # node hostname
  taints: null
---
# controlPlaneEndpoint enables the highly available API server
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:  # master node IPs
  - $APISERVER_HA
  - $APISERVER_MASTER01
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: "$APISERVER_HA:$APISERVER_HA_PORT"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:  # an external etcd cluster can be used instead
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # mirror inside China
kind: ClusterConfiguration
kubernetesVersion: $K8S_VERSION
networking:
  dnsDomain: cluster.local
  # specify the pod network CIDR
  podSubnet: $POD_CIDR
  serviceSubnet: $SERVICE_CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy uses ipvs
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF

kubeadm init --config Kubernetes-cluster.yaml --upload-certs

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install calico
sed -i 's#cidr.*#cidr: '$POD_CIDR'#' custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

--upload-certs uploads the control-plane certificates to the kubeadm-certs Secret.

In short: you no longer need to copy the cluster certificates to the other master nodes by hand.

Configuring Shell Completion

apt install bash-completion -y
cat << EOF >> ~/.profile
alias k='kubectl'
source <(kubectl completion bash)
complete -F __start_kubectl k
EOF
source ~/.profile

Joining the Remaining Nodes

master02:

kubeadm join 192.168.254.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c9f43be739919e1e03abaa3d0deae00bc2400f77dc7574e338dc6460be2eab6 \
    --control-plane --certificate-key 02feec260870e7145d69b65d0252f1067768c193d9e8c4aba31ed1b1fa7aaba8

node01:

kubeadm join 192.168.254.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c9f43be739919e1e03abaa3d0deae00bc2400f77dc7574e338dc6460be2eab6
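If the token has expired, `kubeadm token create --print-join-command` on a master prints a fresh join command. The `--discovery-token-ca-cert-hash` value can also be recomputed from the cluster CA with the standard openssl recipe from the kubeadm docs. On a real master the input would be /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed certificate as a stand-in so the pipeline is runnable anywhere:

```shell
# Throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=kubernetes' \
    -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null

# sha256 of the CA public key in DER form, as expected by --discovery-token-ca-cert-hash
hash=$(openssl x509 -pubkey -noout -in "$tmpdir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Point the same pipeline at the real ca.crt and the printed value should match the hash in the join commands above.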

Verifying the Cluster

$ k get po -A
NAMESPACE         NAME                                       READY   STATUS              RESTARTS   AGE
calico-system     calico-kube-controllers-75f84bf8b4-96hht   0/1     ContainerCreating   0          6m19s
calico-system     calico-node-4cd7c                          0/1     PodInitializing     0          105s
calico-system     calico-node-7z22c                          0/1     PodInitializing     0          109s
calico-system     calico-node-pcq8m                          0/1     Running             0          6m19s
calico-system     calico-typha-65b78b8f8d-r2qjn              1/1     Running             0          100s
calico-system     calico-typha-65b78b8f8d-vv4ph              1/1     Running             0          6m19s
calico-system     csi-node-driver-bsd66                      0/2     ContainerCreating   0          105s
calico-system     csi-node-driver-h465x                      0/2     ContainerCreating   0          109s
calico-system     csi-node-driver-htqj2                      0/2     ContainerCreating   0          6m19s
kube-system       coredns-857d9ff4c9-nk4kx                   1/1     Running             0          6m40s
kube-system       coredns-857d9ff4c9-w6zff                   1/1     Running             0          6m40s
kube-system       etcd-master01                              1/1     Running             0          6m53s
kube-system       etcd-master02                              1/1     Running             0          97s
kube-system       kube-apiserver-master01                    1/1     Running             0          6m53s
kube-system       kube-apiserver-master02                    1/1     Running             0          98s
kube-system       kube-controller-manager-master01           1/1     Running             0          6m53s
kube-system       kube-controller-manager-master02           1/1     Running             0          97s
kube-system       kube-proxy-7mwpd                           1/1     Running             0          109s
kube-system       kube-proxy-gfcqb                           1/1     Running             0          6m40s
kube-system       kube-proxy-vkkm4                           1/1     Running             0          105s
kube-system       kube-scheduler-master01                    1/1     Running             0          6m53s
kube-system       kube-scheduler-master02                    1/1     Running             0          99s
tigera-operator   tigera-operator-55585899bf-xssq5           1/1     Running             0          6m40s

References:

  1. https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  2. https://ai-feier.github.io/p/keepalived-nginx%E5%AE%9E%E7%8E%B0%E9%AB%98%E5%8F%AF%E7%94%A8apiserver/
  3. https://blog.csdn.net/m0_51964671/article/details/135256571
