Setting Up KubeSphere 3.0 on Virtual Machines (Detailed)


Preface: Continuous learning is a programmer's destiny.

Compared with Rancher, I personally prefer KubeSphere, mostly because its interface appeals to me. Without further ado, let's get started.

1. Environment Preparation

1. Prerequisites

Virtual machines: CentOS 7.6 to 7.8.

Below are the verification results from https://kuboard.cn/install/install-k8s.html#%E6%A3%80%E6%9F%A5-centos-hostname

2. Network Configuration

Edit the network interface config file (e.g. ifcfg-ens33):

vi /etc/sysconfig/network-scripts/ifcfg-ens33

Restart the network service:

/etc/init.d/network restart   # or: service network restart

3. Virtual Machine Environment

Notes:

1) The CentOS version is 7.6 or 7.7, with at least 2 CPU cores and at least 4 GB of memory.

2) The hostname is not localhost and contains no underscores, dots, or uppercase letters.

3) Every node has a fixed private IP address (all cluster machines on the same private network).

4) All node IPs can reach each other (mutually accessible without NAT mapping), with no firewall or security-group isolation.

5) No node runs containers directly with docker run or docker-compose; containers run only as Pods.
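The hostname rules in 2) can be checked with a small shell sketch (the valid_hostname helper is made up for illustration; POSIX locale assumed for the character class):

```shell
#!/bin/sh
# Reject "localhost" and any name containing an underscore, a dot,
# or an uppercase letter, per the requirements above.
valid_hostname() {
  case "$1" in
    localhost|*_*|*.*|*[A-Z]*) return 1 ;;
    *) return 0 ;;
  esac
}

valid_hostname k8s4 && echo "k8s4 is acceptable"
```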

2. Base Environment Setup (all 3 nodes)

2.1 Install basic tools (all 3 nodes)

yum install -y wget vim lsof net-tools

2.2 Configure hosts

vim /etc/hosts

192.168.6.30 k8s4
192.168.6.31 k8s5
192.168.6.32 k8s6

To set each node's hostname, run hostnamectl set-hostname <newhostname>, then run su - so the current shell picks up the new name.
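The hosts entries above can be appended idempotently, so re-running the setup does not duplicate lines. A minimal sketch (the add_host helper is hypothetical; it targets a scratch file unless HOSTS_FILE is pointed at /etc/hosts as root):

```shell
#!/bin/sh
# Append each mapping only if the hostname is not already present.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host() {
  grep -qw "$2" "$HOSTS_FILE" || printf '%s %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 192.168.6.30 k8s4
add_host 192.168.6.31 k8s5
add_host 192.168.6.32 k8s6
add_host 192.168.6.30 k8s4   # second call is a no-op

cat "$HOSTS_FILE"
```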

2.3 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

2.4 Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
cat /etc/selinux/config

2.5 Disable swap

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -l -h
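To see what the sed above actually does, here is a demo against a scratch copy of an fstab (the sample entries are hypothetical); it comments out every line that mentions swap:

```shell
#!/bin/sh
# Build a scratch fstab, then apply the same sed used in 2.5.
FSTAB="${FSTAB:-$(mktemp)}"
printf '/dev/mapper/centos-root / xfs defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$FSTAB"
sed -ri 's/.*swap.*/#&/' "$FSTAB"
cat "$FSTAB"
```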

2.6 Pass bridged IPv4 traffic to iptables chains

If /etc/sysctl.conf does not contain these keys yet, simply append them:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf

If the file already contains the keys, update them in place instead:

sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
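The two branches above (append when missing, sed when present) can be folded into one idempotent helper. A minimal sketch; the set_kv name is made up for illustration, and it writes to a scratch file unless CONF is pointed at /etc/sysctl.conf:

```shell
#!/bin/sh
# Update the key in place if it already exists, append it otherwise.
# Note: dots in the key are matched loosely by the grep/sed patterns.
CONF="${CONF:-$(mktemp)}"

set_kv() {
  if grep -q "^$1" "$CONF"; then
    sed -i "s|^$1.*|$1 = $2|" "$CONF"
  else
    printf '%s = %s\n' "$1" "$2" >> "$CONF"
  fi
}

set_kv net.ipv4.ip_forward 1
set_kv net.bridge.bridge-nf-call-iptables 1
set_kv net.ipv4.ip_forward 1   # second call updates in place, no duplicate

cat "$CONF"
```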

2.7 Apply the settings

sysctl -p

3. Install Docker (all 3 nodes)

3.1 Remove older Docker versions

sudo yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine

3.2 Install base dependencies

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

3.3 Configure the Docker yum repository

sudo yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.4 Install and start Docker

yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8 containerd.io
systemctl enable docker
systemctl start docker
docker version

3.5 Configure a Docker registry mirror

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://0v8k2rvr.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

4. Kubernetes Environment Setup

4.1 Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.2 Remove older versions

yum remove -y kubelet kubeadm kubectl

4.3 Install kubelet, kubeadm, and kubectl (all 3 nodes)

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

4.4 Enable kubelet on boot

systemctl enable kubelet && systemctl start kubelet

4.5 Initialization: pull the required images (all 3 nodes)

vi images.sh   # create a new script

#!/bin/bash
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

sh ./images.sh   # run the script

4.6 Initialize the master node (k8s4)

kubeadm init \
  --apiserver-advertise-address=192.168.6.30 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

################### Note #######################
--apiserver-advertise-address=192.168.6.30 is the master node's IP.

4.7 Configure kubectl (master node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

######### These commands must be taken from the output of the previous step.

4.8 Deploy the network plugin (master)

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

4.9 Check the node and Pod status

kubectl get nodes

4.10 Run the join command (worker nodes)

Note: all Pods must be in the Running state before you proceed to this step.

kubeadm join 192.168.6.30:6443 --token 7s5qgp.6f4r4u0fqq2jz5sf \
    --discovery-token-ca-cert-hash sha256:83ddca99f2916795170efad4c45a85e3b397e8859604c429531a92f29a711156

###### This command is printed when the master node finishes initializing. If the token has since expired (tokens are valid for 24 hours by default), regenerate the full command on the master with: kubeadm token create --print-join-command

4.11 Check status from the master node

kubectl get nodes

kubectl get pods -A

5. Set Up NFS as the Default StorageClass (all 3 nodes)

5.1 Configure the NFS server

yum install -y nfs-utils

echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

5.2 Create the NFS server directory and start NFS (the master node acts as the server; run on the master)

mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
exportfs   #### check that the export configuration took effect

5.3 Set up the NFS client (run on worker nodes)

5.3.1 Verify the server export

showmount -e 192.168.6.30
#### this IP is the master node's IP

5.3.2 Create the mount directory (run on worker nodes)

mkdir /root/nfsmount
ls /root

5.3.3 Mount the server's /nfs/data/ to the client's /root/nfsmount (run on worker nodes)

mount -t nfs 192.168.6.30:/nfs/data/ /root/nfsmount

5.3.4 Verify

Create a test file under /root/nfsmount on a worker node and confirm it appears under /nfs/data on the master.

6. Set Up Dynamic Provisioning

6.1 Create a provisioner (the NFS environment was set up above)

6.2 On the master node

vim nfs-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: storage.pri/nfs
            - name: NFS_SERVER
              value: 192.168.6.30
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.6.30
            path: /nfs/data

6.3 Apply the NFS yaml file

kubectl apply -f nfs-rbac.yaml

kubectl get pods -A

If a Pod reports an error, inspect it with this command:

kubectl describe pod xxx -n kube-system

7.1 Create the StorageClass yaml

vim storageclass-nfs.yaml
 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete

7.2 Apply the storageclass-nfs.yaml file

kubectl apply -f storageclass-nfs.yaml

7.3 Set it as the default StorageClass

kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

kubectl get sc

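The patch in 7.3 just sets an annotation on the StorageClass; equivalently, the annotation could be written directly into the storageclass-nfs.yaml from 7.1, e.g.:

```yaml
# storageclass-nfs.yaml with the default-class annotation set directly
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: storage.pri/nfs
reclaimPolicy: Delete
```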

8. Install metrics-server (master)

8.1 Prepare the metrics-server.yaml file

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

8.2 Apply the manifest

kubectl apply -f metrics-server.yaml

kubectl get pod -A

8.3 Check the cluster's resource metrics

kubectl top nodes

If kubectl top nodes reports "metrics not available yet", the metrics pipeline is simply not ready; wait a moment and try again.

9. Install KubeSphere v3.0.0

Official site: https://kubesphere.com.cn/

Deployment docs: https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/

9.1 Installation (master node)

9.1.1 Prepare cluster-configuration.yaml

vim cluster-configuration.yaml

Add the following content, changing "192.168.6.30" to your master node's IP:

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the host cluster. Retrieve it by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
  etcd:
    monitoring: true        # Whether to enable the etcd monitoring dashboard. You have to create a secret for etcd before you enable it.
    endpointIps: 192.168.6.30  # etcd cluster endpoint IPs; it can be a list of IPs.
    port: 2379              # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi     # MySQL PVC size.
    minioVolumeSize: 20Gi     # Minio PVC size.
    etcdVolumeSize: 20Gi      # etcd PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi       # Redis PVC size.
    es:                       # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # total number of master nodes; an even number is not allowed
      # elasticsearchDataReplicas: 1     # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes.
      logMaxAge: 7                         # Log retention time in the built-in Elasticsearch; 7 days by default.
      elkPrefix: logstash                  # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  console:
    enableMultiLogin: true  # enable/disable multiple sign-on; allows one account to be used by different users at the same time.
    port: 30880
  alerting:                # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install the KubeSphere alerting system, with customizable alerting policies, intervals and levels.
    enabled: true
  auditing:                # Whether to install the KubeSphere audit log system: a security-relevant chronological record of activities on the platform, initiated by different tenants.
    enabled: true
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install the KubeSphere DevOps System: an out-of-box CI/CD system based on Jenkins, with Source-to-Image & Binary-to-Image workflows.
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Whether to install the KubeSphere events system: a graphical console for Kubernetes event exporting, filtering and alerting in multi-tenant clusters.
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Whether to install the KubeSphere logging system: log query, collection and management in a unified console. Additional collectors such as Elasticsearch, Kafka and Fluentd can be added.
    enabled: true
    logsidecarReplicas: 2
  metrics_server:          # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1          # Prometheus replicas monitor different segments of the data source and provide high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus memory request.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1        # AlertManager replicas.
  multicluster:
    clusterRole: none  # host | member | none  # Install a solo cluster, or specify the role of a host or member cluster.
  networkpolicy:       # Network policies allow network isolation within the same cluster (firewalls between Pods). Make sure the cluster's CNI plugin supports NetworkPolicy (Calico, Cilium, Kube-router, Romana and Weave Net do).
    enabled: true
  notification:        # Email notification support for the legacy alerting system; enable/disable together with the alerting option above.
    enabled: true
  openpitrix:          # (2 Core, 3.6 G) Whether to install the KubeSphere Application Store: an app store for Helm-based applications with lifecycle management.
    enabled: true
  servicemesh:         # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based): fine-grained traffic management, observability, tracing, and traffic topology visualization.
    enabled: true

9.1.2 Prepare kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubespheredev/ks-installer:latest
        imagePullPolicy: "Always"
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

9.2 Apply the manifests

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

9.3 Watch the installer logs (a long wait)

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Check the Pod startup status:

kubectl get pods -A

9.4 Verify access

Access URL:

http://192.168.6.30:30880/login

Account: admin

Password: P@88w0rd

All components should be running normally.

10. Troubleshooting

10.1 Prometheus never reaches Running: a missing certificate

If after about half an hour the Pods are still not Running, in particular the two monitoring Pods (they are used for monitoring), describe one of them:

kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system

The events say the secret kube-etcd-client-certs does not exist.

Take a look at the kube-apiserver process:

ps -ef | grep kube-apiserver

The apiserver command line prints the certificate locations.

So the certificate files do exist, but KubeSphere does not know about them; it depends on the ones already in our system.

The certificate files are at these locations:

--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt

--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt

--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key

Solution: copy this command and run it on the master node:

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

This creates the secret.

You can confirm it was created with:

kubectl get secret -A   // lists the secrets in all namespaces

Once the certificate secret exists, Prometheus should come up quickly.

If it still does not, delete the prometheus-k8s-0 Pod so a new one is pulled up:

kubectl delete pod prometheus-k8s-0 -n kubesphere-monitoring-system

Then delete the prometheus-k8s-1 Pod as well so it is also recreated:

kubectl delete pod prometheus-k8s-1 -n kubesphere-monitoring-system

Questions and discussion are welcome.
