How to Fix DNS Resolution Problems in a Kubernetes Cluster


Problem Description

After the MongoDB replica set is deployed, the Pods start successfully, but the replica set cannot be initialized: the error indicates that the other two MongoDB Pods cannot be connected to, and the connection attempts time out.

The error message is as follows:

{
	"ok" : 0,
	"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongodb-1.mongodb-hs.default.svc.cluster.local:27017 failed with Couldn't get a connection within the time limit of 1000ms, mongodb-2.mongodb-hs.default.svc.cluster.local:27017 failed with Couldn't get a connection within the time limit of 1000ms",
	"code" : 74,
	"codeName" : "NodeNotFound"
}
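
For context, this failure typically comes from a replica set initialization like the sketch below, run from inside mongodb-0; the replica set name rs0 is a hypothetical placeholder, while the member hostnames are the ones that appear in the error message:

bin/mongo --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-0.mongodb-hs.default.svc.cluster.local:27017" },
    { _id: 1, host: "mongodb-1.mongodb-hs.default.svc.cluster.local:27017" },
    { _id: 2, host: "mongodb-2.mongodb-hs.default.svc.cluster.local:27017" }
  ]
})'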

Problem Analysis

Connection Analysis

From inside one of the Pods, an attempt to reach another MongoDB Pod through its Kubernetes DNS name fails, which suggests the problem is DNS-related.

root@mongodb-0:/# bin/mongo mongodb-1.mongodb-hs.default.svc.cluster.local:27017
MongoDB shell version v4.4.11
connecting to: mongodb://mongodb-1.mongodb-hs.default.svc.cluster.local:27017/test?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server mongodb-1.mongodb-hs.default.svc.cluster.local:27017, connection attempt failed: HostNotFound: Could not find address for mongodb-1.mongodb-hs.default.svc.cluster.local:27017: SocketException: Host not found (non-authoritative), try again later :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1

Since the MongoDB Pods lack networking tools, an additional Pod is needed for network testing:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep","3600"]

After entering the newly created container and running ping, neither baidu.com nor Kubernetes.default can be resolved:

/ # ping baidu.com
ping: bad address 'baidu.com'

/ # ping Kubernetes.default
ping: bad address 'Kubernetes.default'

/ # nslookup kubernetes.default
;; connection timed out; no servers could be reached

This points to a network or DNS resolution problem, so the cluster's DNS service and network configuration need further inspection.
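
Before digging into the components, two quick checks are worth doing; this is a sketch of the usual steps, assuming the default kube-dns Service name used by kubeadm:

# Inside the test Pod: the nameserver should be the cluster DNS Service IP
cat /etc/resolv.conf

# On the master: confirm that the DNS Service exists and note its ClusterIP
kubectl get svc kube-dns -n kube-system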

Check Component Status

First, verify that the CoreDNS and kube-proxy components of the cluster are running normally:

[root@master ~]# kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-rzz7m         1/1     Running   13         5d23h
coredns-6d56c8448f-zvbs2         1/1     Running   13         5d23h
etcd-master                      1/1     Running   13         5d23h
kube-apiserver-master            1/1     Running   7          2d20h
kube-controller-manager-master   1/1     Running   14         5d23h
kube-flannel-ds-qn7bs            1/1     Running   12         5d22h
kube-proxy-vbrvv                 1/1     Running   6          3h6m
kube-scheduler-master            1/1     Running   14         5d23h

The output above shows that all components are running. Next, look at the log output of CoreDNS and kube-proxy.

Check CoreDNS

The CoreDNS log output is as follows:

[root@master ~]# kubectl logs -f coredns-6d56c8448f-rzz7m -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03
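
The logs look normal. If needed, the CoreDNS configuration (the Corefile) can also be inspected; in a kubeadm-managed cluster it is stored in the coredns ConfigMap:

kubectl get configmap coredns -n kube-system -o yaml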

Check kube-proxy

The kube-proxy log output is as follows:

[root@master ~]# kubectl logs -f kube-proxy-llwsb -n kube-system
I0718 01:49:32.894434       1 node.go:136] Successfully retrieved node IP: 10.56.47.244
I0718 01:49:32.894504       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.56.47.244), assume IPv4 operation
W0718 01:49:33.007267       1 server_others.go:584] Unknown proxy mode "", assuming iptables proxy
I0718 01:49:33.007385       1 server_others.go:185] Using iptables Proxier.
W0718 01:49:33.007400       1 server_others.go:461] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0718 01:49:33.007405       1 server_others.go:472] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
W0718 01:49:33.007774       1 proxier.go:280] missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended
I0718 01:49:33.008534       1 server.go:650] Version: v1.19.16
I0718 01:49:33.009190       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0718 01:49:33.009259       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0718 01:49:33.009329       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0718 01:49:33.010066       1 config.go:315] Starting service config controller
I0718 01:49:33.010118       1 config.go:224] Starting endpoint slice config controller
I0718 01:49:33.010222       1 shared_informer.go:240] Waiting for caches to sync for service config
I0718 01:49:33.010222       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0718 01:49:33.110416       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0718 01:49:33.110416       1 shared_informer.go:247] Caches are synced for service config

This is where the problem shows up: kube-proxy is not running in IPVS mode but has fallen back to iptables ("Using iptables Proxier."). The most likely reason is that the proxy mode was never set to ipvs. Edit the kube-proxy ConfigMap to change the mode, then restart the kube-proxy Pod:

[root@master ~]# kubectl edit configmap kube-proxy -n kube-system


apiVersion: v1
data:
  config.conf: |-
  ....
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs" #修改为ipvs即可
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
  ....  
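
The ConfigMap change only takes effect after the kube-proxy Pods are recreated. One way to restart them, assuming the standard kubeadm label k8s-app=kube-proxy:

kubectl delete pod -l k8s-app=kube-proxy -n kube-system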

Check the kube-proxy logs again:

[root@master ~]# kubectl logs -f kube-proxy-vbrvv -n kube-system
I0718 02:55:57.130360       1 node.go:136] Successfully retrieved node IP: 10.56.47.244
I0718 02:55:57.130441       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.56.47.244), assume IPv4 operation
I0718 02:55:57.143730       1 server_others.go:258] Using ipvs Proxier.
W0718 02:55:57.143747       1 server_others.go:461] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0718 02:55:57.143754       1 server_others.go:472] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0718 02:55:57.144010       1 proxier.go:364] missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended
E0718 02:55:57.144068       1 proxier.go:381] can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1
W0718 02:55:57.144136       1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0718 02:55:57.144297       1 server.go:650] Version: v1.19.16
I0718 02:55:57.144528       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0718 02:55:57.144884       1 config.go:315] Starting service config controller
I0718 02:55:57.144897       1 shared_informer.go:240] Waiting for caches to sync for service config
I0718 02:55:57.144926       1 config.go:224] Starting endpoint slice config controller
I0718 02:55:57.144931       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0718 02:55:57.245024       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0718 02:55:57.245041       1 shared_informer.go:247] Caches are synced for service config 

The line Using ipvs Proxier shows that IPVS is now in use, but one key message remains: can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1. The kernel version is too old.

Solution

Upgrade the Kernel

The nodes run CentOS with kernel 3.10, whose IPVS modules are outdated and lack dependencies required by newer Kubernetes IPVS support, so the fix is to install a newer kernel.
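
The running kernel can be confirmed first; it should show the stock CentOS 7 kernel mentioned above:

# Confirm the running kernel version (expected to be in the 3.10 series)
uname -r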

There are two methods; either one works:

Method 1:

# Import the ELRepo public key
[root@master ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

# Install the latest ELRepo release package
[root@master ~]# yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# List the kernel packages available from the elrepo-kernel repository
[root@master ~]# yum list available --disablerepo=* --enablerepo=elrepo-kernel

# Install the chosen kernel version
[root@master ~]# yum install -y kernel-lt-5.4.206-1.el7.elrepo.x86_64 --enablerepo=elrepo-kernel

# Check the installed kernel packages
[root@master ~]# rpm -q kernel
kernel-3.10.0-1160.el7.x86_64

[root@master ~]# rpm -q kernel-lt
kernel-lt-5.4.206-1.el7.elrepo.x86_64

# Set the default kernel
[root@master ~]# grub2-set-default "CentOS Linux (5.4.206-1.el7.elrepo.x86_64) 7 (Core)"
[root@master ~]# grub2-editenv list
saved_entry=CentOS Linux (5.4.206-1.el7.elrepo.x86_64) 7 (Core)

Method 2:

[root@master ~]# rpm -Uvh https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-5.4.258-1.el7.elrepo.x86_64.rpm


[root@master ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg 
CentOS Linux (5.4.258-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.92.1.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-4253311876aa40debedd52dd1547f16a) 7 (Core)

# 0 selects the first menuentry listed above, i.e. the new kernel
[root@master ~]# grub2-set-default 0

Reboot to make the change take effect; alternatively, select the new kernel in the boot menu. Note that when using kernel-lt or kernel-ml you need to enter the BIOS and disable Secure Boot, otherwise the kernel cannot be loaded and the system reports "you need to load the kernel first".

[root@master ~]# reboot

[root@master ~]# uname -r
5.4.206-1.el7.elrepo.x86_64

Load br_netfilter

After the kernel upgrade, check the kube-proxy logs again:

[root@master ~]# kubectl logs kube-proxy-vbrvv -n kube-system -f
I0718 04:22:49.332759       1 node.go:136] Successfully retrieved node IP: 10.56.47.244
I0718 04:22:49.332880       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.56.47.244), assume IPv4 operation
I0718 04:22:49.352797       1 server_others.go:258] Using ipvs Proxier.
W0718 04:22:49.352810       1 server_others.go:461] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0718 04:22:49.352814       1 server_others.go:472] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0718 04:22:49.353067       1 proxier.go:364] missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended
W0718 04:22:49.353167       1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0718 04:22:49.353502       1 server.go:650] Version: v1.19.16
I0718 04:22:49.353651       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0718 04:22:49.353687       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0718 04:22:49.353708       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0718 04:22:49.354255       1 config.go:315] Starting service config controller
I0718 04:22:49.354323       1 config.go:224] Starting endpoint slice config controller
I0718 04:22:49.354392       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0718 04:22:49.354392       1 shared_informer.go:240] Waiting for caches to sync for service config
I0718 04:22:49.454832       1 shared_informer.go:247] Caches are synced for service config 
I0718 04:22:49.454953       1 shared_informer.go:247] Caches are synced for endpoint slice config 

The log still contains missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended: the br_netfilter module is not loaded. Load the br_netfilter module and set the br-nf-call-iptables sysctl:

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
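
Note that modprobe does not persist across reboots; the modules are commonly made permanent with a modules-load.d entry (the file name k8s.conf below is just a convention):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF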

The kube-proxy logs now display normally:

[root@master ~]# kubectl logs kube-proxy-vbrvv -n kube-system -f
I0718 04:50:23.329436       1 node.go:136] Successfully retrieved node IP: 10.56.47.244
I0718 04:50:23.329475       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.56.47.244), assume IPv4 operation
I0718 04:50:23.370927       1 server_others.go:258] Using ipvs Proxier.
W0718 04:50:23.370939       1 server_others.go:461] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0718 04:50:23.370942       1 server_others.go:472] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
W0718 04:50:23.371241       1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0718 04:50:23.371558       1 server.go:650] Version: v1.19.16
I0718 04:50:23.371710       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0718 04:50:23.371749       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0718 04:50:23.371770       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0718 04:50:23.372293       1 config.go:315] Starting service config controller
I0718 04:50:23.372300       1 shared_informer.go:240] Waiting for caches to sync for service config
I0718 04:50:23.372310       1 config.go:224] Starting endpoint slice config controller
I0718 04:50:23.372312       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0718 04:50:23.472396       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0718 04:50:23.472396       1 shared_informer.go:247] Caches are synced for service config

Enter the test Pod again and verify that name resolution works:

/ # ping baidu.com
PING baidu.com (220.181.38.251): 56 data bytes
64 bytes from 220.181.38.251: seq=0 ttl=51 time=37.177 ms
64 bytes from 220.181.38.251: seq=1 ttl=51 time=35.746 ms

--- baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 35.746/36.461/37.177 ms
/ # ping Kubernetes.default
PING Kubernetes.default (10.96.0.1): 56 data bytes
64 bytes from 10.96.0.1: seq=0 ttl=64 time=0.022 ms
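
As a final check against the original problem, the MongoDB hostnames from the error message should now resolve from the test Pod, after which the replica set initialization can be retried:

nslookup mongodb-1.mongodb-hs.default.svc.cluster.local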

Other Causes (CIDR Conflict)

Note: if the DNS server addresses and the Pod IPs fall within the same network range, name resolution will also fail. The errors look like this:

[root@user-cluster-0001 ~]# kubectl logs -n kube-system coredns-5d78c9869d-rb2md -f
...
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 baidu.com. A: read udp 100.100.175.74:49661->100.125.129.250:53: i/o timeout
[ERROR] plugin/errors: 2 baidu.com. AAAA: read udp 100.100.175.74:58488->100.125.1.250:53: i/o timeout
[ERROR] plugin/errors: 2 baidu.com. AAAA: read udp 100.100.175.74:32817->100.125.1.250:53: i/o timeout
[ERROR] plugin/errors: 2 baidu.com. A: read udp 100.100.175.74:54280->100.125.1.250:53: i/o timeout
...

These errors show that DNS resolution is failing and applications cannot resolve domain names. The root cause is that the Pod CIDR of the Kubernetes cluster (100.64.0.0/10) overlaps with the DNS server addresses used by the Huawei Cloud hosts in the Beijing-4 region (100.125.129.250 and 100.125.1.250), so the Pods can never reach the real DNS servers.

The Pod CIDR can be checked with the following command:

kubectl get cm kubeadm-config -n kube-system -o yaml | grep -i podsub
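
To confirm the overlap, compare that range with the nameservers the node itself uses; a quick sketch of the check:

# On the node: the upstream DNS servers that CoreDNS forwards to by default
cat /etc/resolv.conf

# If these addresses fall inside the Pod CIDR reported above (100.64.0.0/10), queries never reach them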

The Pod CIDR can be changed by editing the ippool. Once the Pod CIDR is changed, all Pods need to be deleted so that they are re-assigned IP addresses. The following commands change the CIDR and then delete all Pods:

# Edit the Pod CIDR
kubectl edit ippool
# Delete all Pods so they are re-assigned IPs
kubectl get pods --no-headers=true --all-namespaces |sed -r 's/(\S+)\s+(\S+).*/kubectl --namespace \1 delete pod \2/e'
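
In the second command, sed rewrites each line of the Pod listing into a kubectl --namespace <namespace> delete pod <name> command, and the trailing e flag (a GNU sed extension) executes that command, so every Pod in the cluster is deleted and comes back with an address from the new range.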

Title: How to Fix DNS Resolution Problems in a Kubernetes Cluster

Author: lomtom

Link: https://lomtom.cn/cqoquyxmr4nta