k8s (4): Installing a k8s Cluster
- June 8, 2022
kubeadm init, kubeadm join
Author: lomtom
Personal website: lomtom.cn 🔗
WeChat official account: 博思奥园 🔗
Your support is my greatest motivation.
k8s series:
There are three ways to install a cluster using deployment tools.
How It Works
This article uses kubeadm as the example and installs Kubernetes 1.23.1.
At its core, installing Kubernetes with kubeadm takes only two steps:
# Create a Master node
$ kubeadm init
# Join a Node to the current cluster
$ kubeadm join <Master IP and port>
That is: create the Master node, then join Node machines to that cluster. Every other step is preparation for these two.
kubeadm init
While running the kubeadm init command, kubeadm does a lot of work for us: pre-flight checks, certificate generation, bringing up the required components, and so on.
- The pre-flight checks cover many steps:
  - Check that the Linux kernel version is 3.10 or above (view it with cat /proc/version).
  - Check whether the Linux cgroups modules are available.
  - Check whether the installed kubeadm and kubelet versions are compatible.
  - Check whether containerd/Docker is installed.
  - …
- Certificate generation: the certificates are normally stored under /etc/kubernetes/pki on the Master node.
- Installation of the required components: the API Server, Scheduler, Controller Manager, and etcd on the Master node are deployed by kubeadm as pods at this point.
  kubeadm first generates the corresponding YAML manifests (stored under /etc/kubernetes/manifests) and starts them in a special way, as "static pods".
  [root@master ~]# ls /etc/kubernetes/manifests
  etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
  Static pods are always created by the kubelet and always run on the node where that kubelet runs (here, the Master).
  You can also customize the startup parameters of these pods and point init at a config file, for example:
  kubeadm init --config=kubeadm-init.yaml
- After the Master components pass their health checks, kubeadm generates a bootstrap token for the cluster; with this token, other nodes can be allowed to join.
- As the last step, kubeadm installs the kube-proxy and DNS add-ons by default; they provide service discovery and DNS for the whole cluster, respectively.
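If you want to watch these stages yourself, kubeadm exposes them as individual phases. A minimal sketch (run on a prepared Master host; paths assume the defaults described above):

# run only the pre-flight checks, without changing anything else on the node
kubeadm init phase preflight
# generate just the certificates under /etc/kubernetes/pki
kubeadm init phase certs all
# inspect the static pod manifests kubeadm writes for the control plane
ls /etc/kubernetes/manifests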
kubeadm join
The idea behind kubeadm join is simply to connect to the Master's apiserver in a secure way, using a token issued by the Master.
But why does kubeadm join need such a token at all?
Because any machine that wants to become a node of a Kubernetes cluster must register itself with the cluster's kube-apiserver. To talk to the apiserver, the machine first needs the corresponding certificate file (the CA file). For a one-command installation, though, we cannot ask users to copy these files from the Master node by hand. So kubeadm must make at least one "insecure-mode" request to the kube-apiserver to fetch cluster-info, a ConfigMap that holds the apiserver's authorization information. The bootstrap token is what provides the security check during this step.
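On a running cluster you can look at this ConfigMap yourself; it lives in the kube-public namespace and is what joining nodes read. A quick check (assumes a working kubeconfig on the Master):

# the CA bundle and apiserver endpoint that joining nodes rely on
kubectl get configmap cluster-info -n kube-public -o yaml
# the bootstrap tokens currently valid on this cluster
kubeadm token list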
Installation
Prerequisites
- This walkthrough builds the k8s cluster on two hosts, and the two hosts can reach each other.
- Both hosts can access the internet, since the required images must be pulled.
- Both run Linux with a kernel version of 3.10 or above.
OS | Hostname | IP |
---|---|---|
CentOS 7.9.2003 (minimal install) | master | 8.16.0.67 |
CentOS 7.9.2003 (minimal install) | slaver | 8.16.0.66 |
Installation Steps
- Install the master
  - Configure the environment
  - Install containerd
  - Install the kubectl, kubelet, and kubeadm tools
  - Run kubeadm init
  - Install the network add-on
- Install the node (same as the master, with a few small differences)
- Join the node to the master
Master
- Configure the environment

Environment configuration covers hosts resolution and turning off services that would get in the way.

#Set the hostname
hostnamectl set-hostname master
#Add hosts entries
cat >> /etc/hosts << EOF
8.16.0.67 master
8.16.0.66 slaver
EOF
ping -c4 master
#Sync the time
yum -y install ntp
systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd
#Stop the firewall
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
#Permanently disable SELinux (takes effect after a reboot)
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
#Disable swap
swapoff -a # temporary
sed -i 's/.*swap.*/#&/g' /etc/fstab
#Load the IPVS modules
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
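Before moving on, it is worth confirming these changes actually took effect. A quick verification sketch (the expected outputs in the comments assume the commands above all succeeded):

getenforce                                    # Permissive now, Disabled after a reboot
swapon --show                                 # prints nothing once swap is off
systemctl is-active firewalld                 # inactive
lsmod | grep -e ip_vs -e nf_conntrack_ipv4    # the IPVS modules should be listed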
- Install containerd

containerd can be swapped out for docker, since docker ships with containerd (a Docker-based version is attached at the end of this article), but containerd is becoming the trend.

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# Load the required kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
yum install -y wget
# Configure the package repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum list |grep containerd
# Install containerd
yum -y install containerd.io.x86_64
mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Use the systemd cgroup driver
sed -i '/runc.options/a\ SystemdCgroup = true' /etc/containerd/config.toml && grep 'SystemdCgroup = true' -B 7 /etc/containerd/config.toml
# Change sandbox_image
sed -ri 's#k8s.gcr.io\/pause:3.2#registry.aliyuncs.com\/google_containers\/pause:3.6#' /etc/containerd/config.toml
# Add the Aliyun mirror as the registry endpoint
sed -ri 's#https:\/\/registry-1.docker.io#https:\/\/registry.aliyuncs.com#' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd --now
systemctl status containerd
Tips: containerd may fail to start if the sed commands above did not take effect. In that case edit /etc/containerd/config.toml by hand: add the endpoint and sandbox_image values and check that SystemdCgroup is not duplicated.

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://registry.aliyuncs.com"]
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
cgroups is one of the core technologies Docker relies on. The name is short for control groups, a Linux kernel feature used to limit, control, and isolate the resources of a group of processes (CPU, memory, disk I/O, and so on).
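To check which cgroup setup your host is actually running (the configuration above assumes cgroup v1 with the systemd driver, which is what a stock CentOS 7 install provides), a quick check:

stat -fc %T /sys/fs/cgroup/                      # "tmpfs" means cgroup v1, "cgroup2fs" means cgroup v2
grep SystemdCgroup /etc/containerd/config.toml   # should print SystemdCgroup = true exactly once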
- Install the kubectl, kubelet, and kubeadm tools

kubeadm: the command used to bootstrap the cluster.
kubelet: runs on every node in the cluster and starts Pods and containers.
kubectl: the command-line tool used to talk to the cluster.

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum list kubeadm --showduplicates | sort -r
yum -y install kubeadm-1.23.1-0 kubelet-1.23.1-0 kubectl-1.23.1-0
systemctl enable --now kubelet
systemctl status kubelet
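A quick sanity check that all three tools landed at the expected version (1.23.1 here):

kubeadm version -o short             # expect v1.23.1
kubelet --version                    # expect Kubernetes v1.23.1
kubectl version --client --short     # expect Client Version: v1.23.1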
Tips: after running systemctl enable --now kubelet, the kubelet may fail to stay up; it restarts every few seconds because it is stuck in a loop waiting for instructions from kubeadm. Check its status again after completing step 4.
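If you want to see that crash loop for yourself while waiting, the standard systemd tools are enough (the behavior described in the comments is the typical pre-init state, not an error you need to fix):

journalctl -u kubelet -f     # follow the kubelet logs; stop with Ctrl+C
systemctl status kubelet     # typically shows "activating (auto-restart)" until kubeadm init has run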
- Run kubeadm init and install the network add-on

#Configure crictl
cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
mkdir ~/kubeadm_init && cd ~/kubeadm_init
# kubeadm init: define the initialization file
kubeadm config print init-defaults > kubeadm-init.yaml
cat > kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 8.16.0.67
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: master
  taints:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# List the required images
kubeadm config images list --config kubeadm-init.yaml
# Pre-pull the images
kubeadm config images pull --config kubeadm-init.yaml
ctr -n k8s.io i ls -q
crictl images
crictl ps -a
# Run the kubeadm initialization
kubeadm init --config=kubeadm-init.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install the network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Tips: crictl is a command-line interface for CRI-compatible container runtimes such as containerd; you can use it to inspect and debug the container runtime and applications on a k8s node.

After init finishes it prints a join statement:
kubeadm join 8.16.0.67:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:db42cac7470d6df1bb187db2aa89f9204f7de13d1ca28abc435d462bf72e651c
Slaver
- Refer to Master (configure the environment)
- Refer to Master (install containerd)
- Refer to Master (install the kubectl, kubelet, and kubeadm tools)
- Refer to Master, with the following differences:

#Configure crictl
cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Pull the images
crictl pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
crictl pull registry.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io i ls -q
crictl images
crictl ps -a
mkdir -p $HOME/.kube
sudo scp master:/etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Join
The join operation adds the slaver node to the cluster.

- Generate a permanent token. The generated token looks like q7zp90.hywock1tphsdrvkq, and the command also prints a ready-to-use join statement.
kubeadm token create --ttl 0 --print-join-command
Use kubeadm token list to view all tokens:

[root@master k8s]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION   EXTRA GROUPS
q7zp90.hywock1tphsdrvkq   <forever>   <never>   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
- Run the join

Run the join statement generated in the first step directly on the node:
kubeadm join 8.16.0.67:6443 --token q7zp90.hywock1tphsdrvkq --discovery-token-ca-cert-hash sha256:266cfd8963dbefe1cfa0b2c965896d185182133b908d2d24c6f214356e1822fc
For the available authentication methods, see: kubeadm join 🔗
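If you still have a valid token but have lost the CA certificate hash, it can be recomputed on the Master from the cluster CA (this is the procedure documented for kubeadm join; the resulting hash will of course differ per cluster):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'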
Testing
And with that, the installation is complete. 🎉🎉🎉
Use the kubectl get node -o wide command to view all nodes:
[root@master k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 2d v1.23.1
slaver Ready <none> 44h v1.23.1
Let's install an nginx to test the cluster.
Create an nginx deployment and expose it externally:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
After a short wait, one deployment, one pod, and one service will each have been created.
They can be viewed with kubectl get svc,pod,deploy -o wide:
[root@master k8s]# kubectl get svc,pod,deploy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
service/nginx NodePort 10.109.59.210 <none> 80:31732/TCP 33h
NAME READY STATUS RESTARTS AGE
pod/nginx-85b98978db-9z9ck 1/1 Running 0 33h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 33h
Open the nginx address: success.
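You can also hit nginx from the command line. The NodePort below (31732) is simply what this cluster happened to assign, as shown in the service listing above; substitute your own:

curl -I http://8.16.0.67:31732    # any node IP works for a NodePort service; expect HTTP/1.1 200 OK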
Tips
- Reset the kubeadm installation state

kubeadm reset
rm -fr $HOME/.kube/config
- If pulling images times out, you can switch to a domestic mirror (Aliyun, for example) by adding the --image-repository=registry.aliyuncs.com/google_containers parameter:

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
kubeadm init --image-repository=registry.aliyuncs.com/google_containers
- After kubeadm init, the node is NotReady; this is expected. kubectl describe node master shows the master node details and reports: network plugin is not ready: cni config uninitialized.

Because no network add-on is installed yet, the DNS pods cannot start either; installing the network add-on (flannel here) fixes both.

[root@master kubeadm_init]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS        AGE
kube-system   coredns-6d8c4cb4d-2fbpp          0/1     Pending   0               2m40s
kube-system   coredns-6d8c4cb4d-g8jzg          0/1     Pending   0               2m40s
kube-system   etcd-master                      1/1     Running   1 (6m23s ago)   3m2s
kube-system   kube-apiserver-master            1/1     Running   1 (6m32s ago)   2m58s
kube-system   kube-controller-manager-master   1/1     Running   1 (3m26s ago)   2m55s
kube-system   kube-proxy-tdz44                 1/1     Running   0               2m40s
kube-system   kube-scheduler-master            1/1     Running   1 (3m53s ago)   2m59s
# After installing flannel
[root@master kubeadm_init]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS        AGE
kube-system   coredns-6d8c4cb4d-2fbpp          1/1     Running   0               8m48s
kube-system   coredns-6d8c4cb4d-g8jzg          1/1     Running   0               8m48s
kube-system   etcd-master                      1/1     Running   1 (12m ago)     9m10s
kube-system   kube-apiserver-master            1/1     Running   1 (12m ago)     9m6s
kube-system   kube-controller-manager-master   1/1     Running   1 (9m34s ago)   9m3s
kube-system   kube-flannel-ds-7gm8n            1/1     Running   0               57s
kube-system   kube-proxy-tdz44                 1/1     Running   0               8m48s
kube-system   kube-scheduler-master            1/1     Running   1 (10m ago)     9m7s
1) Fetch flannel: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 🔗
2) Find the required images: grep image kube-flannel.yml

[root@master k8s]# grep image kube-flannel.yml
        #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
3) Master: kubectl create -f kube-flannel.yml
4) Node:

crictl pull flannelcni/flannel-cni-plugin:v1.0.1
crictl pull flannelcni/flannel:v0.17.0
- On the Master node, the kubelet may not come up after systemctl enable --now kubelet; once kubeadm init completes successfully, it starts on its own.
- Let the node use the kubectl command-line tool

# Option 1: copy the ready-made kubeconfig from the Master's home directory
mkdir -p $HOME/.kube
scp master:/$HOME/.kube/config $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Option 2: copy admin.conf from the Master
mkdir -p $HOME/.kube
scp master:/etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
- If you want a visual dashboard, there are two options:
  - kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
  - Use lens 🔗 (recommended)
- On an Alibaba Cloud ECS instance with no public IP configured, etcd cannot start, so kubeadm init fails with a "timeout" error. The fix is to bind a public IP to your server.
- To use a short k command elegantly instead of typing kubectl every time, run the following after each ssh connection:

function k() {
    cmdline=`HISTTIMEFORMAT="" history | awk '$2 == "kubectl" && (/-n/ || /--namespace/) {for(i=2;i<=NF;i++)printf("%s ",$i);print ""}' | tail -n 1`
    regs=('\-n [\w\-\d]+' '\-n=[\w\-\d]+' '\-\-namespace [\w\-\d]+' '\-\-namespace=[\w\-\d]+')
    for i in "${!regs[@]}"; do
        reg=${regs[i]}
        nsarg=`echo $cmdline | grep -o -P "$reg"`
        if [[ "$nsarg" == "" ]]; then
            continue
        fi
        cmd="kubectl $nsarg $@"
        echo "$cmd"
        $cmd
        return
    done
    cmd="kubectl $@"
    echo "$cmd"
    $cmd
}
If you don't want to run that by hand every time, make it run automatically on ssh login by editing ~/.bashrc:

if [[ -n $SSH_CONNECTION ]] ; then
    function k() {
        cmdline=`HISTTIMEFORMAT="" history | awk '$2 == "kubectl" && (/-n/ || /--namespace/) {for(i=2;i<=NF;i++)printf("%s ",$i);print ""}' | tail -n 1`
        regs=('\-n [\w\-\d]+' '\-n=[\w\-\d]+' '\-\-namespace [\w\-\d]+' '\-\-namespace=[\w\-\d]+')
        for i in "${!regs[@]}"; do
            reg=${regs[i]}
            nsarg=`echo $cmdline | grep -o -P "$reg"`
            if [[ "$nsarg" == "" ]]; then
                continue
            fi
            cmd="kubectl $nsarg $@"
            echo "$cmd"
            $cmd
            return
        done
        cmd="kubectl $@"
        echo "$cmd"
        $cmd
    }
fi
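How the k function behaves in practice (assuming an interactive shell with history enabled; the kube-system namespace here is just an example): it reuses the namespace flag from your last plain kubectl command.

kubectl get pods -n kube-system    # run one normal kubectl command with a namespace first
k get pods                         # expands to: kubectl -n kube-system get pods
k get svc                          # still reuses -n kube-system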
Docker version
- master
#!/bin/bash
startTime=`date +%Y%m%d-%H:%M:%S`
startTime_s=`date +%s`
#1. Prepare the environment
#Set the hostname
hostnamectl set-hostname master
#Add hosts entries
cat >> /etc/hosts << EOF
8.16.0.100 master
8.16.0.101 slaver
EOF
ping -c2 master
ping -c2 slaver
#Sync the time
yum -y install ntp
systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd
#Stop the firewall
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
#Permanently disable SELinux (takes effect after a reboot)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#Disable swap
swapoff -a # temporary
sed -i 's/.*swap.*/#&/g' /etc/fstab
#Load the IPVS modules
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
EOF
kernel_version=$(uname -r | cut -d- -f1)
echo $kernel_version
if [ `expr $kernel_version \> 4.19` -eq 1 ]
then
modprobe -- nf_conntrack
else
modprobe -- nf_conntrack_ipv4
fi
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
#2. Deploy docker
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# 1.20+ requires br_netfilter to be enabled
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
yum install -y wget
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15
systemctl daemon-reload
systemctl enable docker --now
systemctl status docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://7vnz06qj.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
systemctl daemon-reload && systemctl restart docker
#3. Deploy Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum -y install kubeadm-1.19.16-0 kubelet-1.19.16-0 kubectl-1.19.16-0
systemctl enable --now kubelet
systemctl status kubelet
mkdir ~/kubeadm_init && cd ~/kubeadm_init
kubeadm config print init-defaults > kubeadm-init.yaml
cat > kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 8.16.0.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.16
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# Pre-pull the images
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config=kubeadm-init.yaml | tee kubeadm-init.log
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node -owide
endTime=`date +%Y%m%d-%H:%M:%S`
endTime_s=`date +%s`
sumTime=$[ $endTime_s - $startTime_s ]
echo "$startTime ---> $endTime" "Total:$sumTime seconds"
- slaver
#!/bin/bash
startTime=`date +%Y%m%d-%H:%M:%S`
startTime_s=`date +%s`
#1. Prepare the environment
#Set the hostname
hostnamectl set-hostname slaver
#Add hosts entries
cat >> /etc/hosts << EOF
8.16.0.100 master
8.16.0.101 slaver
EOF
ping -c2 master
ping -c2 slaver
#Sync the time
yum -y install ntp
systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd
#Stop the firewall
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
#Permanently disable SELinux (takes effect after a reboot)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#Disable swap
swapoff -a # temporary
sed -i 's/.*swap.*/#&/g' /etc/fstab
#Load the IPVS modules
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
EOF
kernel_version=$(uname -r | cut -d- -f1)
echo $kernel_version
if [ `expr $kernel_version \> 4.19` -eq 1 ]
then
modprobe -- nf_conntrack
else
modprobe -- nf_conntrack_ipv4
fi
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
#2. Deploy docker
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# 1.20+ requires br_netfilter to be enabled
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
yum install -y wget
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15
systemctl daemon-reload
systemctl enable docker --now
systemctl status docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://7vnz06qj.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
systemctl daemon-reload && systemctl restart docker
#3. Deploy Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum -y install kubeadm-1.19.16-0 kubelet-1.19.16-0 kubectl-1.19.16-0
systemctl enable --now kubelet
systemctl status kubelet
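Note that the slaver script stops after installing the tools; the node still has to be joined to the cluster. A minimal sketch of the remaining step (the token and hash are placeholders, use the values printed by your master):

# on the master
kubeadm token create --print-join-command
# on the slaver, run the command it prints, e.g.:
kubeadm join 8.16.0.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>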
The k8s installation scripts in this section were provided by this contributor 🔗.