k8s ZooKeeper installation (cluster and standalone versions)

Abstract:
Steps to install the cluster version of ZooKeeper. Step 1: add the Helm chart repository (helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator). Step 2: download the ZooKeeper chart (helm fetch incubator/zookeeper). Step 3: edit … persistence: enabled: true ## zookeeper data Persi…

Installing the cluster version of ZooKeeper

Step 1: add the Helm chart repository

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator

Step 2: download the ZooKeeper chart

helm fetch incubator/zookeeper
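helm fetch downloads the chart as a .tgz archive into the current directory. A minimal sketch of unpacking it so the values can be edited in the next step (the archive name depends on the chart version):

tar -zxvf zookeeper-*.tgz   # unpacks to a ./zookeeper directory
cd zookeeper                # values.yaml with the persistence section shown below lives here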

Step 3: edit the persistence section of the chart's values.yaml

...
persistence:
  enabled: true
  ## zookeeper data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 5Gi
...

Notes:

1. If storage already exists, you can skip the following steps and simply substitute your existing storageClass.

List the StorageClasses and substitute the corresponding NAME:

kubectl get sc -n <namespace>

[root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
NAME         PROVISIONER                                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner    Delete          Immediate           true                   16d

2. If no storage exists, pay attention to the storage type and address when performing the steps below.

Set up the storage (the storageClass name is the shared storage volume listed by kubectl get sc -n <namespace>; if none exists, install one by following the steps below).

1. Cluster version: for Kubernetes 1.19+
# Replace x.x.x.x with the storage address; for example, for an NFS share, fill in its IP, e.g. 192.168.8.158
helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner

If you see the error:

Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
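The stable chart repository has since been deprecated, which is a common cause of this error. A hedged alternative, assuming the cluster can reach the kubernetes-sigs chart repository, is to install the maintained nfs-subdir-external-provisioner chart instead (the release name nfs-client is an example; x.x.x.x and the path are your NFS server details):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-client nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=x.x.x.x --set nfs.path=/exported/path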
2. For Kubernetes versions below 1.19, apply the following YAML files:
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
$ kubectl create -f nfs-client.yaml

Note the storage address in nfs-client.yaml!

nfs-client-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

nfs-client-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs

nfs-client.yaml

Change the value of the NFS_SERVER environment variable (spec.containers.env) to match your environment; 192.168.8.158 below is only an example address.

kind: Deployment
apiVersion: apps/v1 
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.8.158
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.8.158
            path: /data/k8s
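With the provisioner in place, the cluster chart downloaded in step 2 can be installed and checked. This is a sketch only: the release name zookeeper, the ./zookeeper chart directory, and the xxxxxx namespace are assumptions, and the exact helm install syntax depends on whether you run Helm 2 (helm install --name zookeeper ./zookeeper) or Helm 3:

kubectl get pods --all-namespaces | grep nfs-client-provisioner   # the provisioner pod should be Running
helm install zookeeper ./zookeeper -n xxxxxx                       # Helm 3 syntax; installs the chart unpacked in step 2
kubectl get pods -n xxxxxx | grep zookeeper                        # the StatefulSet pods (typically 3 replicas) should reach Running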

Installing the standalone (non-cluster) version of ZooKeeper

Note: adjust the storage addresses in zookeeper.yaml to your environment (there are three PVs to modify).

kubectl apply -f zookeeper.yaml -n xxxxx

zookeeper.yaml

## Create the Service
---
apiVersion: v1
kind: Service
metadata:
 name: zookeeper
 labels:
  name: zookeeper
spec:
 type: NodePort
 ports:
 - port: 2181
   protocol: TCP
   targetPort: 2181
   name: zookeeper-2181
   nodePort: 30000
 - port: 2888
   protocol: TCP
   targetPort: 2888
   name: zookeeper-2888
 - port: 3888
   protocol: TCP
   targetPort: 3888
   name: zookeeper-3888
 selector:
   name: zookeeper
---

## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
 name: zookeeper-data-pv
 labels:
   pv: zookeeper-data-pv

spec:
 capacity:
   storage: 10Gi
 accessModes:
   - ReadWriteMany
 persistentVolumeReclaimPolicy: Retain
 #################### Note: adjust the PV's NFS storage address to your environment ####################
 nfs:            # NFS settings
   server: 192.168.8.158
   path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: zookeeper-data-pvc
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 10Gi
 selector:
   matchLabels:
     pv: zookeeper-data-pv
---
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
 name: zookeeper-datalog-pv
 labels:
   pv: zookeeper-datalog-pv

spec:
 capacity:
   storage: 10Gi
 accessModes:
   - ReadWriteMany
 persistentVolumeReclaimPolicy: Retain
 #################### Note: adjust the PV's NFS storage address to your environment ####################
 nfs:            # NFS settings
   server: 192.168.8.158
   path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: zookeeper-datalog-pvc
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 10Gi
 selector:
   matchLabels:
     pv: zookeeper-datalog-pv

---
apiVersion: v1
kind: PersistentVolume
metadata:
 name: zookeeper-logs-pv
 labels:
   pv: zookeeper-logs-pv

spec:
 capacity:
   storage: 10Gi
 accessModes:
   - ReadWriteMany
 persistentVolumeReclaimPolicy: Retain
 #################### Note: adjust the PV's NFS storage address to your environment ####################
 nfs:
   server: 192.168.8.158
   path: /data/k8s

## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: zookeeper-logs-pvc
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 10Gi
 selector:
   matchLabels:
     pv: zookeeper-logs-pv

---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: zookeeper
 labels:
   name: zookeeper
spec:
 replicas: 1
 selector:
    matchLabels:
        name: zookeeper
 template:
   metadata:
     labels:
      name: zookeeper
   spec:
     containers:
     - name: zookeeper
       image: zookeeper:3.4.13
       imagePullPolicy: Always
       volumeMounts:
       - mountPath: /logs
         name: zookeeper-logs
       - mountPath: /data
         name: zookeeper-data
       - mountPath: /datalog
         name: zookeeper-datalog
       ports:
       - containerPort: 2181
       - containerPort: 2888
       - containerPort: 3888
     volumes:
     - name: zookeeper-logs
       persistentVolumeClaim:
         claimName: zookeeper-logs-pvc
     - name: zookeeper-data
       persistentVolumeClaim:
         claimName: zookeeper-data-pvc
     - name: zookeeper-datalog
       persistentVolumeClaim:
         claimName: zookeeper-datalog-pvc
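After applying zookeeper.yaml, a quick verification sketch (the namespace and node IP are placeholders); the Service exposes client port 2181 on NodePort 30000:

kubectl get pv,pvc,pods,svc -n xxxxx | grep zookeeper   # PVs/PVCs Bound, pod Running, Service with NodePort 30000
echo ruok | nc <node-ip> 30000                          # a healthy ZooKeeper 3.4.x answers imok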

Installing Nimbus

Step 1: create the Nimbus configuration ConfigMap

Note: zookeeper in nimbus-cm.yaml is the name of the ZooKeeper Service.

kubectl apply -f nimbus-cm.yaml -n xxxxxx

nimbus-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nimbus-cm
data:
  storm.yaml: |
    # DataSource
    storm.zookeeper.servers: [zookeeper]
    nimbus.seeds: [nimbus]
    storm.log.dir: "/logs"
    storm.local.dir: "/data"

Step 2: create the Deployment
kubectl apply -f nimbus.yaml -n xxxxxx

nimbus.yaml

Note: when creating the PVs, adjust the storage addresses to your environment.

## Create the Service
apiVersion: v1
kind: Service
metadata:
 name: nimbus
 labels: 
  name: nimbus
spec:
 ports:
 - port: 6627
   protocol: TCP
   targetPort: 6627
   name: nimbus-6627
 selector:
   name: storm-nimbus
---
## Create the PVs; note: adjust the NFS storage addresses to your environment
---
apiVersion: v1
kind: PersistentVolume
metadata:
 name: storm-nimbus-data-pv
 labels:
   pv: storm-nimbus-data-pv
spec:
 capacity:
   storage: 5Gi
 accessModes:
   - ReadWriteMany
 persistentVolumeReclaimPolicy: Retain
 nfs:
   server: 192.168.8.158
   path: /data/k8s

## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: storm-nimbus-data-pvc
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 5Gi
 selector:
   matchLabels:
     pv: storm-nimbus-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
 name: storm-nimbus-logs-pv
 labels:
   pv: storm-nimbus-logs-pv
spec:
 capacity:
   storage: 5Gi
 accessModes:
   - ReadWriteMany
 persistentVolumeReclaimPolicy: Retain
 nfs:
   server: 192.168.8.158
   path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: storm-nimbus-logs-pvc
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 5Gi
 selector:
   matchLabels:
     pv: storm-nimbus-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: storm-nimbus
 labels:
   name: storm-nimbus
spec:
 replicas: 1
 selector:
    matchLabels:
        name: storm-nimbus
 template:
   metadata:
     labels: 
      name: storm-nimbus
   spec:
     hostname: nimbus
     imagePullSecrets:
     - name: e6-aliyun-image
     containers:
     - name: storm-nimbus
       image: storm:1.2.2
       imagePullPolicy: Always
       command:
       - storm
       - nimbus
       #args:
       #- nimbus
       volumeMounts:
       - mountPath: /conf/
         name: configmap-volume
       - mountPath: /logs
         name: storm-nimbus-logs
       - mountPath: /data
         name: storm-nimbus-data
       ports:
       - containerPort: 6627
     volumes:
     - name: storm-nimbus-logs
       persistentVolumeClaim:
         claimName: storm-nimbus-logs-pvc
     - name: storm-nimbus-data
       persistentVolumeClaim:
         claimName: storm-nimbus-data-pvc
     - name: configmap-volume
       configMap:
         name: nimbus-cm
#     hostNetwork: true
#     dnsPolicy: ClusterFirstWithHostNet    
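A short verification sketch after applying nimbus.yaml (the namespace is a placeholder); Nimbus should connect to ZooKeeper through the zookeeper Service name configured in nimbus-cm:

kubectl get pods -n xxxxxx -l name=storm-nimbus        # the storm-nimbus pod should be Running
kubectl logs -n xxxxxx deploy/storm-nimbus --tail=50   # look for a successful connection to zookeeper:2181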

Installing nimbus-ui (Storm UI)

Step 1: create the Deployment
kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx
Step 2: create the Service
kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
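kubectl expose with --type=NodePort assigns a random NodePort for container port 8080; a sketch for finding it:

kubectl get svc stormui -n xxxxxx   # note the NodePort mapped to 8080, then browse to http://<node-ip>:<nodePort>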
Step 3: create the ConfigMap

Installing zk-ui

Installation:
kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx
Configuration file:

zookeeper-program-ui.yaml

## Create the Service
---
apiVersion: v1
kind: Service
metadata:
 name: zookeeper-ui
 labels: 
  name: zookeeper-ui
spec:
 type: NodePort
 ports:
 - port: 9090
   protocol: TCP
   targetPort: 9090
   name: zookeeper-ui-9090
   nodePort: 30012
 selector:
   name: zookeeper-ui

---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: zookeeper-ui
 labels:
   name: zookeeper-ui
spec:
 replicas: 1
 selector:
    matchLabels:
        name: zookeeper-ui
 template:
   metadata:
     labels: 
      name: zookeeper-ui
   spec:
     containers:
     - name: zookeeper-ui
       image: maauso/zkui
       imagePullPolicy: Always
       env:
       - name: ZKLIST
         value: 192.168.8.158:30000
       ports:
       - containerPort: 9090
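Once the pod is up, zkui should be reachable on NodePort 30012 (a sketch; <node-ip> is a placeholder, and the ZKLIST value above must point to a ZooKeeper address reachable from inside the pod):

kubectl get pods,svc -n xxxxxxx | grep zookeeper-ui   # pod Running, Service exposing 9090 on NodePort 30012
# then open http://<node-ip>:30012 in a browser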
