Problems after upgrading to Kubernetes 1.17.0

Summary: after upgrading the cluster to Kubernetes 1.17.0, the kubelet fails to start. journalctl -f -u kubelet shows errors such as "Couldn't get secret kube-system/flannel-token-...: failed to sync secret cache".

The problem:


Finding the cause:

[root@localhost k8s1.15.1-master]# journalctl -f -u kubelet
-- Logs begin at Wed 2020-01-29 10:32:33 CST. --
Jan 29 13:22:09 k8s1 kubelet[95442]: E0129 13:22:09.758834   95442 secret.go:195] Couldn't get secret kube-system/flannel-token-4g7bs: failed to sync secret cache: timed out waiting for the condition
Jan 29 13:22:09 k8s1 kubelet[95442]: E0129 13:22:09.758868   95442 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/96cbc74a-58d5-4d10-b7df-52732ba63938-flannel-token-4g7bs" ("96cbc74a-58d5-4d10-b7df-52732ba63938")" failed. No retries permitted until 2020-01-29 13:22:10.258852307 +0800 CST m=+3.230320911 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "flannel-token-4g7bs" (UniqueName: "kubernetes.io/secret/96cbc74a-58d5-4d10-b7df-52732ba63938-flannel-token-4g7bs") pod "kube-flannel-ds-amd64-j7bl2" (UID: "96cbc74a-58d5-4d10-b7df-52732ba63938") : failed to sync secret cache: timed out waiting for the condition"
Jan 29 13:22:09 k8s1 kubelet[95442]: E0129 13:22:09.758890   95442 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 29 13:22:09 k8s1 kubelet[95442]: E0129 13:22:09.758926   95442 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/8167b71d-457f-48d3-85bd-ef793c18a2a6-kube-proxy" ("8167b71d-457f-48d3-85bd-ef793c18a2a6")" failed. No retries permitted until 2020-01-29 13:22:10.258909639 +0800 CST m=+3.230378243 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8167b71d-457f-48d3-85bd-ef793c18a2a6-kube-proxy") pod "kube-proxy-tvvth" (UID: "8167b71d-457f-48d3-85bd-ef793c18a2a6") : failed to sync configmap cache: timed out waiting for the condition"
Jan 29 13:22:09 k8s1 kubelet[95442]: E0129 13:22:09.758943   95442 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 29 13:22:09 k8s1 kubelet[95442]: E0129 13:22:09.758980   95442 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/eb8dcc9b-b573-4e4a-a292-f8fcce423783-config-volume" ("eb8dcc9b-b573-4e4a-a292-f8fcce423783")" failed. No retries permitted until 2020-01-29 13:22:10.258961581 +0800 CST m=+3.230430190 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb8dcc9b-b573-4e4a-a292-f8fcce423783-config-volume") pod "coredns-5c98db65d4-qdp89" (UID: "eb8dcc9b-b573-4e4a-a292-f8fcce423783") : failed to sync configmap cache: timed out waiting for the condition"
Jan 29 13:22:11 k8s1 kubelet[95442]: E0129 13:22:11.463974   95442 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Jan 29 13:22:16 k8s1 kubelet[95442]: E0129 13:22:16.463855   95442 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Jan 29 13:22:21 k8s1 kubelet[95442]: E0129 13:22:21.471563   95442 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Jan 29 13:22:26 k8s1 kubelet[95442]: E0129 13:22:26.665812   95442 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Jan 29 13:22:32 k8s1 kubelet[95442]: E0129 13:22:32.266202   95442 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Jan 29 13:22:53 k8s1 kubelet[95442]: E0129 13:22:53.483584   95442 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Jan 29 13:22:53 k8s1 kubelet[95442]: F0129 13:22:53.483602   95442 csi_plugin.go:281] Failed to initialize CSINodeInfo after retrying
Jan 29 13:22:53 k8s1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jan 29 13:22:53 k8s1 systemd[1]: Unit kubelet.service entered failed state.
Jan 29 13:22:53 k8s1 systemd[1]: kubelet.service failed.

https://github.com/kubernetes/kubernetes/issues/86094

The GitHub issue above reports the same problem: after the upgrade to v1.17, the kubelet fails to initialize CSINodeInfo because it cannot update the CSINode resource it expects (likely a CSI migration / version-skew problem between the kubelet and the API server), and it exits fatally after retrying.


Solution:

After hitting the same issue, edit /var/lib/kubelet/config.yaml and add:

featureGates:
  CSIMigration: false

Then restart kubelet on all three nodes, and the problem is resolved.
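The two steps above (append the feature-gate override, then restart kubelet) can be sketched as a shell snippet. This is a minimal sketch that works on a throwaway copy of the config so it can be run safely anywhere; it naively assumes the config has no featureGates block yet, and on a real node the path would be /var/lib/kubelet/config.yaml:

```shell
# Work on a throwaway copy of the kubelet config
# (real path on a node: /var/lib/kubelet/config.yaml).
cfg=$(mktemp)
printf 'kind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\n' > "$cfg"

# Append the featureGates override unless one is already present.
grep -q '^featureGates:' "$cfg" || printf 'featureGates:\n  CSIMigration: false\n' >> "$cfg"

cat "$cfg"

# On each of the three nodes, kubelet must then be restarted (as root)
# to pick up the change:
#   systemctl restart kubelet
```

If the config already contains a featureGates block, the new gate should be merged into it by hand rather than appended as a second block.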




