
k8s 1.29.0 Deployment Reference

Deployment Environment

Two virtual machines
CPU: Loongson 3A6000
OS: openEuler 24.03 (LTS)

Preparation

Set the hostnames

Name the Master node k8s-master and the Worker node k8s-node.

Note: unless otherwise stated, the commands below are run on both machines.

hostnamectl set-hostname k8s-master # run on the Master node
hostnamectl set-hostname k8s-node   # run on the Worker node

Disable SELinux

setenforce 0
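Note that setenforce 0 only switches SELinux to permissive mode until the next reboot. To keep it from re-enabling afterwards, you can also edit /etc/selinux/config (an optional step beyond the original instructions; adjust to your policy requirements):

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config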

Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

Disable swap

swapoff -a
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
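To confirm that swap is fully disabled (optional verification, not in the original steps):

swapon --show   # should print nothing
free -h         # the Swap line should show 0B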

Flush iptables rules

iptables -F
iptables -X
iptables -Z
iptables -t nat -F
iptables -t nat -X
iptables -t nat -Z

Configure the hosts file

Run the following command:

Note: replace the Master/Worker IPs in the command with the IPs of your own nodes.

cat >>/etc/hosts<< EOF
<Master node IP>   k8s-master
<Worker node IP>   k8s-node
EOF
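For example, with the hypothetical addresses 192.168.1.10 (Master) and 192.168.1.11 (Worker), the block would look like:

cat >>/etc/hosts<< EOF
192.168.1.10   k8s-master
192.168.1.11   k8s-node
EOF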

Forward IPv4 and let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting:
sudo sysctl --system
Confirm that the br_netfilter and overlay modules are loaded by running:
lsmod | grep br_netfilter
lsmod | grep overlay
Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration by running:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
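If everything is configured correctly, each variable should be reported as 1, for example:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1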

Install the containerd container runtime

Download the containerd tarball:

wget https://github.com/Loongson-Cloud-Community/containerd/releases/download/v1.7.13/containerd-1.7.13-static-abi2.0-bin.tar.gz
tar -xf containerd-1.7.13-static-abi2.0-bin.tar.gz
mv containerd-1.7.13-static-abi2.0-bin/* /usr/local/bin/
Generate the default configuration file:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
Download the containerd.service file:
wget -O /usr/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

systemctl daemon-reload
systemctl enable --now containerd
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
Override the sandbox (pause) image in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "lcr.loongnix.cn/kubernetes/pause:3.9"
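For convenience, both edits above can be applied with sed (a sketch that assumes the defaults produced by containerd config default; verify the resulting file before restarting containerd):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's|sandbox_image = ".*"|sandbox_image = "lcr.loongnix.cn/kubernetes/pause:3.9"|' /etc/containerd/config.toml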
After updating this configuration file, restart containerd:
systemctl restart containerd

Install runc

wget https://github.com/Loongson-Cloud-Community/runc/releases/download/v1.1.12/runc-seccomp-1.1.12-abi2.0-bin.tar.gz
tar -xf runc-seccomp-1.1.12-abi2.0-bin.tar.gz
mv runc-seccomp-1.1.12-abi2.0-bin/runc-static /usr/local/bin/runc
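Verify the installation (optional):

runc --version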

Install the kubeadm, kubelet, kubectl, and related RPM packages

mkdir -p /tmp/rpms
cd /tmp/rpms
wget http://cloud.loongnix.cn/releases/loongarch64/kubernetes/kubernetes/v1.29.0/cri-tools-1.29.0-0.loongarch64.rpm
wget http://cloud.loongnix.cn/releases/loongarch64/kubernetes/kubernetes/v1.29.0/kubeadm-1.29.0-0.loongarch64.rpm
wget http://cloud.loongnix.cn/releases/loongarch64/kubernetes/kubernetes/v1.29.0/kubectl-1.29.0-0.loongarch64.rpm
wget http://cloud.loongnix.cn/releases/loongarch64/kubernetes/kubernetes/v1.29.0/kubelet-1.29.0-0.loongarch64.rpm
wget http://cloud.loongnix.cn/releases/loongarch64/kubernetes/kubernetes/v1.29.0/kubernetes-cni-1.3.0-0.loongarch64.rpm
yum install -y ./*.rpm
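Depending on the RPM post-install scripts, the kubelet service may not be enabled automatically; the upstream install guide enables it before running kubeadm init, so doing the same here is a safe assumption:

systemctl enable --now kubelet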

Configure crictl

cat <<EOF|tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
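You can check that crictl can reach containerd (optional):

crictl version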

Initialize the cluster with kubeadm

Run the following command:

kubeadm init \
--image-repository lcr.loongnix.cn/kubernetes \
--kubernetes-version v1.29.0 \
--cri-socket=/run/containerd/containerd.sock \
--pod-network-cidr=10.244.0.0/16 -v=5
Note: you can also dump the default initialization configuration to a YAML file, modify it, and initialize from that file: kubeadm config print init-defaults > kubeadm-config.yaml; kubeadm init --config kubeadm-config.yaml
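A partial sketch of the fields you would typically change in kubeadm-config.yaml to match the command-line flags above (field names follow the kubeadm.k8s.io/v1beta3 API; treat the values as an illustration rather than a verified file):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: lcr.loongnix.cn/kubernetes
kubernetesVersion: v1.29.0
networking:
  podSubnet: 10.244.0.0/16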

After a successful init, configure kubectl access as prompted:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
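Alternatively, if you are the root user, you can point KUBECONFIG at the admin config (this matches the hint kubeadm itself prints):

export KUBECONFIG=/etc/kubernetes/admin.conf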

Join the worker node

On the Worker node, run the join command printed when the Master node initialization succeeded; it looks similar to:

kubeadm join Master_Node_IP:6443 --token wngmgg.sxr4n542r4y4pji7 \
    --discovery-token-ca-cert-hash sha256:a0a15138d3d2149c3cca6e6fc9ead95439e00f60fda314d9cef31ff1a6baa461
If the printed join command has been lost or scrolled away, regenerate it with:
kubeadm token create --print-join-command

Install a network plugin

calico

Download the calico manifest:

wget https://github.com/Loongson-Cloud-Community/calico/releases/download/v3.26.1/calico.yaml
The CALICO_IPV4POOL_CIDR field in the YAML file must match the pod-network-cidr used at initialization (an example of the edited entry is sketched after the apply command below). After modifying it, run:
kubectl apply -f calico.yaml
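For reference, after uncommenting and editing, the relevant environment variable in calico.yaml should read roughly as follows (a sketch; the entry is typically commented out by default and its surrounding indentation may differ, so check the file itself):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"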

flannel

Download the flannel manifest:

wget https://github.com/Loongson-Cloud-Community/flannel/releases/download/v0.24.3/kube-flannel.yml
Apply it:
kubectl apply -f kube-flannel.yml

Check node status

kubectl get nodes -o wide
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   2d20h   v1.29.0
k8s-node     Ready    <none>          3m55s   v1.29.0

Troubleshooting

If the network plugin pods are already running but coredns stays in the Pending state, check the kubelet logs:

journalctl -f -u kubelet.service
If you see a message similar to:
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
and the network interfaces printed by ip a do not include cni0, rebooting that node is recommended.
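A quick way to confirm the symptom before rebooting (a sketch):

ip a | grep cni0 || echo "cni0 interface not found"
kubectl get pods -n kube-system -o wide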