Installing a Highly Available Kubernetes v1.32 Cluster with kubeadm (Rocky 9.5, 3 control-plane nodes and 2 node/worker nodes)
1. Basic Configuration
Plan the network segments and IP addresses before installing.
1). Environment
Five Rocky 9.2 machines, each with 2 vCPUs, 4 GB RAM, and a 20 GB disk. If the machines were cloned from a VM template, make sure cat /sys/class/dmi/id/product_uuid is unique on every one of them, for example:
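A minimal uniqueness check, following the standard kubeadm prerequisite (run these on every machine by hand and compare the outputs; both the MAC addresses and the product_uuid must differ between hosts):
ip link # MAC addresses of the network interfaces
cat /sys/class/dmi/id/product_uuid # the machine's product uuid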
2). Static IP Address Configuration
Static addressing keeps the IPs from changing when a server reboots. You can either configure static IPs in the OS or create static reservations on the DHCP server; DHCP reservations are used here. For example:
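If you configure the address in the OS instead, a minimal NetworkManager sketch looks like the following (it assumes the connection is named after the interface ens160 and that the gateway/DNS is 192.168.1.1; substitute your own values):
nmcli con mod ens160 ipv4.method manual ipv4.addresses 192.168.1.101/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli con up ens160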
3). Server IP Plan
Hostname | IP Address | Description |
---|---|---|
k8s-master01 ~ 03 | 192.168.1.101 ~ 103 | IP range of the 3 master nodes |
/ | 192.168.1.136 | keepalived virtual IP (does not occupy a separate machine) |
k8s-node01 ~ 02 | 192.168.1.132 ~ 133 | 2 worker (node) nodes |
Configuration | Notes |
---|---|
OS version | Rocky Linux 9.2 (updated) |
Containerd | latest version |
Pod CIDR | 172.16.0.0/16 |
Service CIDR | 10.96.0.0/16 |
Notes:
- If your test environment uses different IP ranges, substitute them consistently everywhere below.
- The Pod CIDR, Service CIDR, and host network must not overlap!!!
- Confirm that the VIP and the reserved server addresses above are not already in use before proceeding! On a public cloud, the VIP here is an SLB/NLB address.
- On a company private cloud, check with the administrators whether VIPs and address reservations are supported.
2. Server Initialization
1). Change the hostname on all nodes
Note: all five hosts need their hostname changed, e.g.:
hostnamectl set-hostname k8s-master01
The remaining hostnames are k8s-master02, k8s-master03, k8s-node01, and k8s-node02 in turn.
2). Configure hosts on all nodes; edit /etc/hosts as follows:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.101 k8s-master01
192.168.1.102 k8s-master02
192.168.1.103 k8s-master03
192.168.1.132 k8s-node01
192.168.1.133 k8s-node02
3). Configure the yum repositories on all nodes
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
    -i.bak \
    /etc/yum.repos.d/*.repo
yum makecache
4). Install the required tools on all nodes
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git rsyslog -y
5). Basic configuration on all nodes:
Disable firewalld, selinux, dnsmasq, and swap; enable rsyslog; configure limits.
systemctl disable --now firewalld
systemctl disable --now dnsmasq
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
systemctl enable --now rsyslog
# disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
# install ntpdate on all nodes
dnf install epel-release -y
dnf config-manager --set-enabled epel
dnf install ntpsec -y
# set the timezone to Asia/Shanghai and sync the time
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# add the sync job to crontab
crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
# configure limits on all nodes
ulimit -SHn 65535
cat /etc/security/limits.conf
# append the following at the end of the file
cat >> /etc/security/limits.conf << 'EOF'
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
# upgrade the system on all nodes
yum update -y
6). Passwordless SSH login (key-based)
Master01 logs into the other nodes without a password. The configuration files and certificates generated during installation, as well as cluster management, are all handled on Master01:
cd /root
ssh-keygen -t rsa # generate the key pair; just press Enter at every prompt
# push the public key to the other cluster hosts
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Note: in a public cloud environment, kubectl may need to be placed on a non-Master node.
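As a quick sanity check (a small sketch reusing only the hostnames defined above), confirm that every node is now reachable without a password:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done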
7). Kernel Configuration
Install ipvsadm on all nodes and load the ipvs modules:
yum install ipvsadm ipset sysstat conntrack libseccomp -y
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
Create /etc/modules-load.d/ipvs.conf on all nodes so the modules load automatically at boot:
# add the following content
cat >> /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
Then run systemctl enable --now systemd-modules-load.service on all nodes (any errors at this point can be ignored).
Kernel tuning configuration on all nodes:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
Apply the configuration on all nodes and reboot, then check whether the kernel modules are loaded automatically:
# sysctl --system
# reboot
Check after the reboot:
# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
8). Download the cluster installation files
cd /root/ ; git clone https://gitee.com/dukuan/k8s-ha-install.git
After completing the above, shut down all nodes and take a snapshot (system initialization is complete!).
3. Installing the High-Availability Components
This section installs the HA components for the cluster: haproxy and keepalived.
Note: on a public cloud, use the cloud provider's own load balancer (e.g. Alibaba Cloud SLB/NLB, Tencent Cloud ELB) in place of haproxy and keepalived, because most public clouds do not support keepalived (no floating VIP is possible).
1). Install Haproxy and KeepAlived on all Master nodes
yum install keepalived haproxy -y
2). Configure Haproxy on all Master nodes
Note: the IP:6443 entries in the configuration file are the apiserver addresses and port.
[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s
defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.1.101:6443 check
  server k8s-master02 192.168.1.102:6443 check
  server k8s-master03 192.168.1.103:6443 check
After editing on master01, sync the file to master02 and master03:
scp haproxy.cfg k8s-master02:/etc/haproxy/
scp haproxy.cfg k8s-master03:/etc/haproxy/
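Before starting the service, it is worth validating the configuration syntax on each master (haproxy's standard check flag):
# haproxy -c -f /etc/haproxy/haproxy.cfg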
3). Configure KeepAlived on all Master nodes
Pay attention to the interface setting: it must be the network interface of your own host. For example, if ifconfig shows enp0s3, change interface to enp0s3. The configurations are as follows:
k8s-master01 keepalived configuration:
[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    mcast_src_ip 192.168.1.101
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.136
    }
    track_script {
        chk_apiserver
    }
}
k8s-master02 configuration:
[root@k8s-master02 etc]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    mcast_src_ip 192.168.1.102
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.136
    }
    track_script {
        chk_apiserver
    }
}
k8s-master03 configuration:
[root@k8s-master03 etc]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    mcast_src_ip 192.168.1.103
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.136
    }
    track_script {
        chk_apiserver
    }
}
4). Configure the KeepAlived health-check script on all master nodes
[root@k8s-master01 keepalived]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Make the script executable and sync it to master02 and master03:
# chmod +x /etc/keepalived/check_apiserver.sh
# scp /etc/keepalived/check_apiserver.sh k8s-master02:/etc/keepalived/
# scp /etc/keepalived/check_apiserver.sh k8s-master03:/etc/keepalived/
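The script stops keepalived (forcing the VIP to fail over) once haproxy has been down for three consecutive checks. It can be dry-run by hand; exit code 0 means haproxy was detected (haproxy is not started yet at this point, so a 1 is expected for now, and note that a run while haproxy is down will also stop keepalived):
# bash /etc/keepalived/check_apiserver.sh; echo $?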
5). Start haproxy and keepalived on all master nodes
# systemctl daemon-reload
# systemctl enable --now haproxy
# systemctl enable --now keepalived
Test whether the keepalived service works correctly.
Run the test on all nodes (masters and nodes):
# ping 192.168.1.136 -c 4
PING 192.168.1.136 (192.168.1.136) 56(84) bytes of data.
64 bytes from 192.168.1.136: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.1.136: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.1.136: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.1.136: icmp_seq=4 ttl=64 time=0.063 ms
--- 192.168.1.136 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms
# telnet 192.168.1.136 16443
Trying 192.168.1.136...
Connected to 192.168.1.136.
Escape character is '^]'.
Connection closed by foreign host.
If the ping fails or telnet does not show the ']' escape prompt, consider the VIP unusable and do not continue. Troubleshoot keepalived first: the firewall and selinux, the haproxy and keepalived service status, the listening ports, and so on.
On all nodes the firewall status must be disabled and inactive: systemctl status firewalld
On all nodes selinux must be disabled: getenforce
On the master nodes check haproxy and keepalived status: systemctl status keepalived haproxy
On the master nodes check the listening ports:
# netstat -ntpul
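For a more targeted check on the masters, confirm haproxy is listening on the apiserver frontend port 16443 and the monitor port 33305 (ss is equivalent to netstat here):
# ss -lntp | grep -E '16443|33305'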
If all of the above checks pass, the haproxy and keepalived high-availability setup for the three master (control-plane) nodes is complete!
4. Runtime Installation - containerd
The Kubernetes version installed here is 1.32.2, which is greater than 1.24, so Docker is no longer the recommended runtime. Docker is still needed later for building test images, however, so it can be installed as well but does not have to be started.
1). Configure the installation repository
On all nodes:
# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2). Configure the kernel modules Containerd requires
On all nodes:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the modules on all nodes and configure and load the kernel parameters Containerd needs:
# modprobe -- overlay
# modprobe -- br_netfilter
# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system
Generate the containerd configuration file on all nodes:
# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml
3). Configure Containerd's Cgroup driver and Pause image
On all nodes:
# sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
# sed -i 's#registry.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
# sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
Start Containerd on all nodes and enable it at boot:
# systemctl daemon-reload
# systemctl enable --now containerd
Verify on all nodes that containerd is working and usable by Kubernetes:
# ctr plugin ls | egrep 'cri|overlayfs'
Configure the crictl client to point at the containerd runtime on all nodes (optional):
# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Once crictl is installed and configured, crictl ps replaces docker ps for viewing the containers running on the local machine.
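A few standard crictl subcommands cover the common docker equivalents (they will only show containers once the cluster is running):
# crictl ps # list running containers (replaces docker ps)
# crictl images # list local images
# crictl info # show runtime status and configuration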
5. Installing the Kubernetes Cluster
1). Configure the repository and install kubeadm, kubelet, and kubectl
Mind the version number; this walkthrough uses v1.32. Outside mainland China the packages can be fetched from the official Kubernetes repositories.
The following configures the Alibaba mirror:
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF
Check on the Master01 node what the latest available Kubernetes version is:
[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r
Install the latest 1.32.x kubeadm, kubelet, and kubectl on all nodes:
# yum install kubeadm-1.32* kubelet-1.32* kubectl-1.32* -y
Enable kubelet at boot on all nodes (the cluster is not initialized yet, so there is no kubelet configuration file and kubelet cannot start; don't worry about it):
# systemctl daemon-reload
# systemctl enable --now kubelet
kubelet cannot come up at this point; the errors in its log are harmless!
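If you want to watch what kubelet is logging anyway, the systemd journal shows it (standard journalctl usage):
# journalctl -u kubelet -f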
2). Cluster initialization with kubeadm
Operate on k8s-master01.
Create the kubeadm configuration file (official configuration fields):
[root@k8s-master01 ~]# vim kubeadm-config.yaml # run :set paste in vim before pasting
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.101
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  certSANs:
  - 192.168.1.136 # for a non-HA cluster, change this to the master's IP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.136:16443 # for a non-HA cluster, change the IP to the master's IP and the port to 6443
controllerManager: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.32.2 # change this to match the output of kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16 # this CIDR must not overlap with the service or host networks
  serviceSubnet: 10.96.0.0/16 # this CIDR must not overlap with the pod or host networks
scheduler: {}
Convert the kubeadm config file to the current schema:
[root@k8s-master01 ~]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
Copy the new.yaml file to the other master nodes:
[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done
Then pre-pull the images on all Master nodes to save initialization time (no configuration changes are needed on the other nodes, including IP addresses):
# kubeadm config images pull --config /root/new.yaml
A successful pull looks similar to this (versions may differ):
[root@k8s-master01 ~]# kubeadm config images pull --config /root/new.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.32.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.32.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.32.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.32.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.16-0
Initialize the master01 node. Initialization generates the certificates and configuration files in /etc/kubernetes; afterwards the other master nodes simply join Master01:
[root@k8s-master01 ~]# kubeadm init --config /root/new.yaml --upload-certs
On success, initialization outputs a Token used when other nodes join, so record the generated token values. The output looks similar to this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf # kubectl operates the cluster through this config file
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
# do not copy the command from this document; use the one generated on your own node
kubeadm join 192.168.1.136:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:98f5c123ddca271b729b7e39c3a6e0aa63d8148f5e3000a6173705a46394a224 \
--control-plane --certificate-key 2a04d65cfd7e0a481d3d1b4d44abd100a04bb1436a37863277fc071902859e75
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.136:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:98f5c123ddca271b729b7e39c3a6e0aa63d8148f5e3000a6173705a46394a224
Configure environment variables on Master01 for accessing the Kubernetes cluster:
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
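With KUBECONFIG set, a quick sanity check that kubectl reaches the apiserver through the VIP (standard kubectl commands):
# kubectl cluster-info
# kubectl get ns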
Check the etcd status:
[root@k8s-master01 ~]# kubectl exec -it etcd-k8s-master01 -n kube-system -- sh
# etcdctl --endpoints="192.168.1.101:2379,192.168.1.102:2379,192.168.1.103:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status --write-out=table
Check the node status on the Master01 node (NotReady is fine at this point):
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane 24s v1.32.2
With a kubeadm-based installation, all system components run as containers in the kube-system namespace. Check the Pod status with kubectl get pod -A:
[root@k8s-master01 ~]# kubectl get cs
3). Troubleshooting a failed cluster initialization
If initialization fails, reset and then initialize again with the following command (do not run it if initialization succeeded):
[root@k8s-master01 ~]# kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
Also check the system logs. On CentOS/Rocky Linux the log path is /var/log/messages; on Ubuntu it is /var/log/syslog:
tail -f /var/log/messages | grep -v "not found"
Possible causes:
- the system uuid is not unique (cloned machines)
- the Containerd configuration file was modified incorrectly; re-check it against the "Runtime Installation - containerd" section
- a new.yaml problem, e.g. forgetting to change port 16443 to 6443 for a non-HA cluster
- a new.yaml problem, e.g. the three network segments overlap, causing IP address conflicts
- the VIP is unreachable, so initialization cannot succeed; in this case the messages log shows VIP timeout errors
4). High-availability master configuration
Join the other masters to the cluster by running the join command on master02 and master03 respectively (never run it on master01 again, and do not copy the command from this document; use the one produced by your own master01 initialization):
kubeadm join 192.168.1.136:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:98f5c123ddca271b729b7e39c3a6e0aa63d8148f5e3000a6173705a46394a224 \
--control-plane --certificate-key 2a04d65cfd7e0a481d3d1b4d44abd100a04bb1436a37863277fc071902859e75
Check the current state (NotReady is fine):
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane 4m23s v1.32.2
k8s-master02 NotReady control-plane 66s v1.32.2
k8s-master03 NotReady control-plane 14s v1.32.2
6. Node Configuration
Node (worker) machines host the actual business applications. In production, do not schedule anything other than system components on the Master nodes; in a test environment, the Masters may run Pods to save resources.
Now join k8s-node01 and k8s-node02 to the cluster.
Note: use the worker join command produced by your own initialization, similar to:
kubeadm join 192.168.1.136:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:98f5c123ddca271b729b7e39c3a6e0aa63d8148f5e3000a6173705a46394a224
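If the bootstrap token has expired (the default ttl is 24h) or the command was lost, print a fresh worker join command on master01 with the standard kubeadm helper:
# kubeadm token create --print-join-command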
After all nodes have joined, check the cluster state (NotReady is fine):
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane 4m23s v1.32.2
k8s-master02 NotReady control-plane 66s v1.32.2
k8s-master03 NotReady control-plane 14s v1.32.2
k8s-node01 NotReady <none> 13s v1.32.2
k8s-node02 NotReady <none> 10s v1.32.2
7. Installing Calico
On all nodes (including any node added later), stop NetworkManager from managing Calico's network interfaces to prevent conflicts and interference:
cat >>/etc/NetworkManager/conf.d/calico.conf<<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
EOF
systemctl daemon-reload
systemctl restart NetworkManager
The following steps run only on master01 (the .x in the branch name does not need to be changed to match a specific patch version):
[root@k8s-master01 ~]# cd /root/k8s-ha-install && git checkout manual-installation-v1.32.x && cd calico/
Set the Pod CIDR:
[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
[root@k8s-master01 ~]# kubectl get pod -n kube-system
Check the status (don't worry if some Pods are not yet Running; they are still pulling images). In the final state all nodes become Ready:
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 28m v1.32.2
k8s-master02 Ready control-plane 27m v1.32.2
k8s-master03 Ready control-plane 27m v1.32.2
k8s-node01 Ready <none> 26m v1.32.2
k8s-node02 Ready <none> 26m v1.32.2
8. Deploying Metrics
In recent Kubernetes versions, system resource metrics are collected by Metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.
Copy front-proxy-ca.crt from the Master01 node to all Node nodes (also required for any node added later):
[root@k8s-master01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
[root@k8s-master01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
The following runs on the master01 node to install the metrics server:
[root@k8s-master01 ~]# cd /root/k8s-ha-install/kubeadm-metrics-server
# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the status:
[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=metrics-server
View node and Pod resource usage:
[root@k8s-master01 ~]# kubectl top node
[root@k8s-master01 ~]# kubectl top pod -n kube-system
9. Deploying the Dashboard
The Dashboard shows the various resources in the cluster; it can also be used to view Pod logs in real time and run commands inside containers.
[root@k8s-master01 ~]# cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 ~]# kubectl create -f .
Change the svc type from ClusterIP to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Check the installation:
[root@k8s-master01 ~]# kubectl get pod -n kubernetes-dashboard
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
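To read the assigned NodePort directly instead of picking it out of the svc output (a standard kubectl jsonpath query; 31621 below is just the port this particular run happened to get):
# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'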
Access:
https://192.168.1.101:31621
Log in with a token.
Generate a temporary token:
[root@k8s-master01 ~]# kubectl create token admin-user -n kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6IkNmYUVQQXVvLWRmUjk5eDBfODV5TkZHN2hSbkwyUXJkUV9uMFd0Vm9QMkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQzNjUxNzQ4LCJpYXQiOjE3NDM2NDgxNDgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMTVhNjQ4YzAtYmM2Zi00OWVjLWFiYjYtZjI4N2EwYWI1NWViIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMjg4ZGU1ODctZTE2ZS00YzQ5LWE4Y2QtN2I2YzgzMjUzYWVlIn19LCJuYmYiOjE3NDM2NDgxNDgsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.sKlUYtFUIGQ3ey6_8OBljHqgUP3565kchyaagE2nFftP-nbTWz7e_ch0BvGXDda5fR90qcDe1DUmrU8Mv7DPZ4XU-ZgFcIbjBTDNesrnV9DhITve-lYP-bdIv6qpa2H_R00LQtyQpRuF23nL7l4y1IaVZ43RBEOAugqduS87emyBGvN0XSfneOHg262ACDpBDiA993VaL3uMmj4K2mNIn3pQJ0VG8qFKtmjbklH22CwoZVtQysuHqMn1ZrlUHn0NokjtkL0LhRWp_RCCMu5AkF1dDHLBVUrDgqK5-MBFifGkWH_4pxnnfGimog9cbYg_fKrNgpI2S24f4keLkW9FVQ
After a successful login, resources are shown per namespace, e.g. kubernetes-dashboard:
This completes the Kubernetes v1.32 deployment on Rocky 9.5 with 3 control-plane and 2 node machines!
10. Supplementary Notes
Note: in a kubeadm-installed cluster, certificates are valid for one year by default. kube-apiserver, kube-scheduler, kube-controller-manager, and etcd on the master nodes all run as containers; view them with kubectl get po -n kube-system.
Unlike a binary installation:
kubelet's configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml; restart the kubelet process after changing them.
The other components' configuration files live under /etc/kubernetes/manifests,
e.g. kube-apiserver.yaml. When such a yaml file changes, kubelet refreshes the configuration automatically, i.e. the pod restarts; do not create the file a second time. Also, because this is an HA cluster, the same change must be applied on the other master nodes.
kube-proxy's configuration lives in a configmap in the kube-system namespace and can be changed with
kubectl edit cm kube-proxy -n kube-system
After the change, kube-proxy can be restarted via a patch:
kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
After a kubeadm installation, the master nodes do not allow Pod scheduling by default. Remove the Taint as follows to allow Pods to be scheduled there:
[root@k8s-master01 ~]# kubectl taint node -l node-role.kubernetes.io/control-plane node-role.kubernetes.io/control-plane:NoSchedule-
View the taint configuration on the nodes:
[root@k8s-master01 ~]# kubectl describe node |grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: <none>
Taints: <none>
Add the taint back to a node so that it is no longer schedulable:
kubectl taint nodes k8s-master03 node-role.kubernetes.io/control-plane=:NoSchedule
node/k8s-master03 tainted
[root@k8s-master01 manifests]# kubectl describe node |grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: <none>
Taints: <none>
Document Info
- Author: sysnat
- Link: https://sysant.github.io/2025/04/03/kubeadm%E9%83%A8%E7%BD%B2k8s%E9%9B%86%E7%BE%A4/
- License: free to redistribute, non-commercial, no derivatives, keep attribution (Creative Commons 3.0 license)