Deploying a Kubernetes v1.17.0 Cluster with kubeadm

Environment
  • OS: CentOS Linux release 7.7.1908 (Core)
  • Docker: 19.03.8
  • Kubernetes: v1.17.0
Cluster information

hostname     IP
k8s-master   192.168.87.10
k8s-node01   192.168.87.11

1. Preparation (run on all nodes)

1.1 Set the hostname

hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node01   # on the node

hostnamectl status   # verify the change took effect

1.2 Update the hosts file

cat <<EOF >>/etc/hosts
192.168.87.10 k8s-master
192.168.87.11 k8s-node01
EOF

1.3 Disable the firewall & SELinux

systemctl stop firewalld
systemctl disable firewalld

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

1.4 Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
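
The fstab substitution can be tried against a throwaway copy before touching the real file; a quick sketch (the sample swap line below is illustrative):

```shell
# copy a sample fstab swap line and run the same substitution on it
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.test
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.test
cat /tmp/fstab.test   # the swap line should now start with '#'
```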

1.5 Passwordless SSH login

1.5.1 Generate a key pair on the master node

ssh-keygen -t rsa

[root@K8S00 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:c9mUKafubFrLnpZGEnQflcSYu7KPc64Mz/75WPwLvJY root@K8S00
The key's randomart image is:
+---[RSA 2048]----+
|             *o. |
|        . . +oo  |
|       . ...=o   |
|        .  Bo    |
|        S.+ ..   |
|        .+o o.   |
|        .oo+ o+  |
|         XB+.Eo. |
|        .B#BBo..o|
+----[SHA256]-----+

1.5.2 Distribute the public key

ssh-copy-id root@k8s-node01

1.6 Install dependency packages

CentOS:

yum install -y epel-release
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

Ubuntu:
apt-get install -y conntrack ipvsadm ntp ipset jq iptables curl sysstat libseccomp

1.7 Synchronize time

ntpdate time1.aliyun.com

1.8 Load kernel modules

modprobe ip_vs_rr
modprobe br_netfilter
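
These modprobe calls do not survive a reboot. A minimal sketch to also load the modules at boot (the file name kubernetes.conf is an arbitrary choice):

```shell
# load the required modules on every boot via systemd-modules-load
cat > /etc/modules-load.d/kubernetes.conf << EOF
ip_vs_rr
br_netfilter
EOF

# confirm the modules are currently loaded
lsmod | grep -E 'ip_vs_rr|br_netfilter'
```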

1.9 Tune kernel parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# avoid using swap; it is only used when the system would otherwise OOM
vm.swappiness=0
# do not check whether enough physical memory is available
vm.overcommit_memory=1
# on OOM, invoke the OOM killer instead of panicking
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
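
Writing the file alone changes nothing until it is loaded. Apply it immediately; note the bridge keys require the br_netfilter module from section 1.8 to already be loaded, and keys unsupported by the running kernel may print errors that can be ignored:

```shell
# apply the new kernel parameters now instead of waiting for a reboot
sysctl -p /etc/sysctl.d/kubernetes.conf
```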
2. Install Docker

2.1 Add the Docker yum repository

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.2 Install Docker

yum install -y docker-ce
Note: when uninstalling Docker, be sure to use yum -y remove docker*; otherwise leftover packages can interfere with installing another Docker version.

2.3 Start Docker

systemctl enable docker
systemctl start docker
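
Optional but worth doing here: kubeadm's preflight checks warn when Docker uses the cgroupfs cgroup driver (the warning appears in the init output later). A hedged sketch for switching Docker to the recommended systemd driver before continuing:

```shell
# point Docker at the systemd cgroup driver recommended by kubeadm
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl restart docker
docker info | grep -i 'cgroup driver'   # should now report systemd
```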
3. Install kubeadm, kubelet, and kubectl

3.1 Add the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.2 Install kubeadm, kubelet, and kubectl

yum -y install kubelet kubeadm kubectl
systemctl enable kubelet
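
The command above installs the latest packages from the repository, which may be newer than the v1.17.0 this guide targets. To pin the tools to v1.17.0 instead (version strings assumed to follow the repo's usual package naming):

```shell
# install the specific versions matching the cluster being deployed
yum -y install kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
systemctl enable kubelet
```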
4. Deploy the Kubernetes master

Run this step on the master node:

kubeadm init --apiserver-advertise-address=192.168.87.10 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

Output:

W1223 19:51:05.051907    5167 validation.go:28] Cannot validate kubelet config - no validator is available
W1223 19:51:05.051988    5167 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s00 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 172.22.34.34]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s00 localhost] and IPs [172.22.34.34 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s00 localhost] and IPs [172.22.34.34 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1223 19:58:54.479214    5167 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1223 19:58:54.480599    5167 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.092926 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s00 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s00 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5w8vmp.zpuwn9chde7vq9j2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.87.10:6443 --token 5w8vmp.zpuwn9chde7vq9j2 \
    --discovery-token-ca-cert-hash sha256:b577acf7412994b84809120b5a0ba40c27ef0b950838a731964df16a62ef2dc9

As the output indicates, a few more steps are required:
1. Before using the cluster, run the following on the master node (this configures kubectl):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. Install the network plugin (flannel)

Typical networks cannot reach quay.io directly. Workarounds include using a domestic mirror or pulling the flannel image from Docker Hub; this guide uses the second option.

5.1 Pull the flannel image manually

Run on every machine in the cluster:

# pull the flannel Docker image manually
docker pull easzlab/flannel:v0.11.0-amd64
# retag the image to the name the manifest expects
docker tag easzlab/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

5.2 Download and apply the flannel manifest (on the master node)

wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml 
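
The flannel pods take a moment to start; they can be watched until every pod reaches Running (the app=flannel label is assumed from the upstream manifest):

```shell
# watch the flannel DaemonSet pods; Ctrl-C once they are all Running
kubectl get pods -n kube-system -l app=flannel -w
```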
6. Join worker nodes to the cluster

Use kubeadm join to register each node with the master.
(The full join command was already generated; it is printed at the end of the kubeadm init output above.)
Run this on each worker node:

kubeadm join 192.168.87.10:6443 --token h21v01.ca56fof5m8myjy3e \
    --discovery-token-ca-cert-hash sha256:4596521eed7d2daf11832be58b03bee46b9c248829ce31886d40fe2e997b1919

Check the cluster's node status. After the network plugin is installed, the output should look like the following; wait until every node is Ready before continuing:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   10m     v1.17.0
node01     Ready    <none>   4m44s   v1.17.0

[root@master01 ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS              RESTARTS   AGE
coredns-9d85f5447-279k7            1/1     Running             0          10m
coredns-9d85f5447-lz8d8            0/1     ContainerCreating   0          10m
etcd-master01                      1/1     Running             0          10m
kube-apiserver-master01            1/1     Running             0          10m
kube-controller-manager-master01   1/1     Running             0          10m
kube-flannel-ds-amd64-5f769        1/1     Running             0          36s
kube-flannel-ds-amd64-gl5lm        1/1     Running             0          36s
kube-flannel-ds-amd64-ttbdk        1/1     Running             0          36s
kube-proxy-tgs9j                   1/1     Running             0          5m11s
kube-proxy-vpgng                   1/1     Running             0          10m
kube-proxy-wthxn                   1/1     Running             0          5m8s
kube-scheduler-master01            1/1     Running             0          10m

This completes the kubeadm-based installation of Kubernetes v1.17.

7. Test the Kubernetes cluster
## create a deployment running the nginx image
[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
## describe the pod; the Events section shows the creation process
[root@master01 ~]# kubectl describe pod nginx-86c57db685-9xbn6 
Name:         nginx-86c57db685-9xbn6
Namespace:    default
Priority:     0
Node:         node02/192.168.1.242
Start Time:   Thu, 02 Jan 2020 11:49:52 +0800
Labels:       app=nginx
              pod-template-hash=86c57db685
Annotations:  <none>
Status:       Running
IP:           10.244.2.2
IPs:
  IP:           10.244.2.2
Controlled By:  ReplicaSet/nginx-86c57db685
Containers:
  nginx:
    Container ID:   docker://baca9e4f096278fbe8851dcb2eed794aefdcebaa70509d38df1728c409e73cdb
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Jan 2020 11:51:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4ghv8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-4ghv8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4ghv8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m43s  default-scheduler  Successfully assigned default/nginx-86c57db685-9xbn6 to node02
  Normal  Pulling    3m42s  kubelet, node02    Pulling image "nginx"
  Normal  Pulled     106s   kubelet, node02    Successfully pulled image "nginx"
  Normal  Created    106s   kubelet, node02    Created container nginx
  Normal  Started    106s   kubelet, node02    Started container nginx
## get the pod IP
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-86c57db685-9xbn6   1/1     Running   0          2m18s   10.244.2.2   node02   <none>           <none>
## access nginx
[root@master01 ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
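
Beyond curling the pod IP from a node, the deployment can also be exposed as a NodePort service so it is reachable from outside the cluster; a quick sketch:

```shell
# expose port 80 of the nginx deployment on a NodePort (30000-32767 range)
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx   # the PORT(S) column shows the mapped NodePort
```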
Additional notes

1. If more nodes need to join the cluster later: the default bootstrap token is valid for 24 hours, after which it can no longer be used. The fix:

Generate a new token ==> kubeadm token create

# 1. list the current tokens
[root@K8S00 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
7mjtn4.9kds6sabcouxaugd   23h         2019-12-24T15:44:58+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# 2. generate a new token
[root@K8S00 ~]# kubeadm token create
369tcl.oe4punpoj9gaijh7

# 3. list the tokens again
[root@K8S00 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
369tcl.oe4punpoj9gaijh7   23h         2019-12-24T16:05:18+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
7mjtn4.9kds6sabcouxaugd   23h         2019-12-24T15:44:58+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# 4. compute the sha256 hash of the CA certificate
[root@K8S00 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
7ae10591aa593c2c36fb965d58964a84561e9ccd416ffe7432550a0d0b7e4f90

# 5. join the node to the cluster, using the new token and the CA certificate sha256 hash
[root@k8s-node03 ~]# kubeadm join --token 369tcl.oe4punpoj9gaijh7 --discovery-token-ca-cert-hash sha256:7ae10591aa593c2c36fb965d58964a84561e9ccd416ffe7432550a0d0b7e4f90 172.22.34.31:6443 --skip-preflight-checks
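
Steps 2 through 5 above can also be collapsed into a single command: kubeadm (including v1.17) can print a ready-to-use join command containing a fresh token and the CA certificate hash:

```shell
# generates a new token and prints the complete kubeadm join command
kubeadm token create --print-join-command
```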

2. kubectl command auto-completion

## install the package
yum install -y bash-completion*
## run once in the current shell
source <(kubectl completion bash)
## persist it across logins
echo "source <(kubectl completion bash)" >> ~/.bashrc
## source this once, otherwise tab completion fails with "-bash: _get_comp_words_by_ref: command not found"
source /usr/share/bash-completion/bash_completion
## reload the environment
source /etc/profile
## kubectl tab completion now works
