Zookeeper 3.5.7 Setup (with Kerberos Enabled)

Tags (space-separated): zookeeper


I. Zookeeper 3.5.7 Component Setup Steps

1. Zookeeper setup (the cluster-wide hosts file and the profile environment-variable file have already been synchronized to every machine), with the following appended to /etc/profile:

# HADOOP CONFIG
export HADOOP_HOME=/app/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib/native"
export YARN_CONF_DIR=$HADOOP_CONF_DIR
export SQOOP_HOME=/app/sqoop
export HIVE_HOME=/app/hive
export PRESTO_HOME=/app/presto
export SCALA_HOME=/app/scala
export SPARK_HOME=/app/spark
export ZOOKEEPER_HOME=/app/zookeeper
export HBASE_HOME=/app/hbase
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$ANT_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$PRESTO_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

export JAVA_HOME CLASSPATH PATH
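
Since the rest of this article distributes files with SaltStack, the hosts and profile files can be pushed the same way. A minimal sketch, assuming the profile file visible later in the salt file root (salt://profile) is the one containing the exports above (the author may instead have appended it rather than copying the whole file):

salt '*' cp.get_file salt://profile /etc/profile
salt '*' cmd.run "grep ZOOKEEPER_HOME /etc/profile"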

2. JDK installation (the AES-256 encryption policy files are already included in the JDK package).

Install the JDK across the whole cluster:

salt '*' cp.get_file salt://jdk1.8.0_92.tar.gz /app/jdk1.8.0_92.tar.gz

salt '*' cmd.run "cd /app && tar -zxf jdk1.8.0_92.tar.gz"

salt '*' cmd.run "cd /app && ln -s  jdk1.8.0_92 jdk" 

salt '*' cmd.run "cd /app && rm -rf jdk1.8.0_92.tar.gz"

salt '*' cmd.run "cd /app && ls -al" 

The result is as follows:
VECS02589:
    total 12
    drwxr-xr-x 3 root root 4096 Feb 22 22:15 .
    drwxr-xr-x 3 root root 4096 Feb 22 22:10 ..
    lrwxrwxrwx 1 root root   11 Feb 22 22:14 jdk -> jdk1.8.0_92
    drwxr-xr-x 8 uucp  143 4096 Apr  1  2016 jdk1.8.0_92
VECS02590:
    total 12
    drwxr-xr-x 3 root root 4096 Feb 22 22:15 .
    drwxr-xr-x 3 root root 4096 Feb 22 22:10 ..
    lrwxrwxrwx 1 root root   11 Feb 22 22:14 jdk -> jdk1.8.0_92
    drwxr-xr-x 8 uucp  143 4096 Apr  1  2016 jdk1.8.0_92
......
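
Optionally verify that every node resolves the same JDK through the /app/jdk symlink created above:

salt '*' cmd.run "/app/jdk/bin/java -version"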

3. Install the KDC:

For the detailed procedure, see: https://www.cnblogs.com/hit-zb/p/12534426.html
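
As an optional sanity check once the KDC is up, you can list the existing principals non-interactively on the KDC host as root (a sketch; the principals shown later in this article should appear here):

kadmin.local -q "listprincs"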

4. Install Zookeeper:

I. Download the package and do some basic configuration

Log in to vecs02583 and download the ZooKeeper 3.5.x package; the latest version at the time of setup is used here as an example:

cd /etc/salt/salt

wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz

Extract the downloaded package and make a few simple changes to the configuration files.


root@VECS02583:/etc/salt/salt# ls
apache-zookeeper-3.5.7-bin.tar.gz  bin_package  hadoop-3.2.1  hosts  jdk1.8.0_92.tar.gz  profile
root@VECS02583:/etc/salt/salt# tar -zxf apache-zookeeper-3.5.7-bin.tar.gz 
root@VECS02583:/etc/salt/salt# 
root@VECS02583:/etc/salt/salt# cd apache-zookeeper-3.5.7-bin
root@VECS02583:/etc/salt/salt/apache-zookeeper-3.5.7-bin# ls
bin  conf  docs  lib  LICENSE.txt  NOTICE.txt  README.md  README_packaging.txt
root@VECS02583:/etc/salt/salt/apache-zookeeper-3.5.7-bin# cd conf/
root@VECS02583:/etc/salt/salt/apache-zookeeper-3.5.7-bin/conf# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
root@VECS02583:/etc/salt/salt/apache-zookeeper-3.5.7-bin/conf# cp zoo_sample.cfg zoo.cfg

Note: the official download site is https://archive.apache.org/dist/zookeeper/.
Important: starting with version 3.5.5, you must download the package whose name contains bin; it holds the compiled binaries, while the plain tar.gz contains only source code and cannot be used directly. Starting from a package without bin will fail with an error.

Before ZooKeeper 3.5.0, all members of an ensemble and their configuration parameters were loaded statically at startup and were immutable at runtime. Whenever the ensemble needed to grow or shrink, the only option was to edit the configuration files by hand and do a rolling restart. With many machines in the cluster, this manual process increases the chance of operator error.

Starting with ZooKeeper 3.5.0, the ensemble supports dynamic modification of server configuration, done entirely through the reconfig command provided by the zk client. With reconfig you can add and remove servers in the ensemble, change a server's port configuration, and change a server's role in the ensemble (participant/observer).

An important prerequisite for dynamic configuration is that ZooKeeper separates the dynamic configuration from the static configuration; the static file links to the dynamic file through the dynamicConfigFile keyword. ZooKeeper 3.5+ also remains compatible with old-style ensemble configuration: when an old-style config is used, the server automatically splits the dynamic part out of the static file.
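
For illustration, a minimal sketch of what dynamic reconfiguration looks like from zkCli once the ensemble is running: config shows the current ensemble configuration, reconfig -add adds a fourth participant, and reconfig -remove takes it out again. It assumes reconfigEnabled=true (set below) and an authorized client; the host VECS04854 is hypothetical:

config
reconfig -add server.4=VECS04854:2888:3888:participant;2181
reconfig -remove 4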

Edit the zoo.cfg configuration file.

vim zoo.cfg

tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data1/data/zookeeper/data
dataLogDir=/data1/data/zookeeper/logs

autopurge.purgeInterval=1
autopurge.snapRetainCount=10

extendedTypesEnabled=true
reconfigEnabled=true
standaloneEnabled=false

dynamicConfigFile=/app/zookeeper/conf/zoo.cfg.dynamic

Important note:

Starting with 3.5.0, the clientPort and clientPortAddress parameters should no longer be used here; these settings belong in the dynamic configuration.

Configuration notes:

ZooKeeper has many optional configuration items; everything else can be left at its default, and only the options above need to be set.

tickTime: the heartbeat interval between servers, and between servers and clients. It is also the basic time unit; initLimit and syncLimit are both measured in ticks.
initLimit: the maximum number of ticks (of tickTime) that followers (F) may take for their initial connection to the leader (L). Here, if the leader has not received a follower's response after 5 ticks, that follower's connection is considered failed.
syncLimit: the maximum number of ticks allowed for a request/response exchange between the leader and a follower; here the total allowance is 2*2000 ms = 4 seconds.
dataDir: the directory where ZooKeeper stores its snapshots; by default the transaction log is also written here.
dataLogDir: the directory for transaction log files. If it is not set, the transaction log is written under dataDir.
autopurge.snapRetainCount: the number of recent snapshots (and the matching transaction logs) to retain during automatic cleanup; the default is 3.
autopurge.purgeInterval: the interval, in hours, at which old snapshots and transaction logs are purged automatically. The default is 0, meaning no automatic cleanup; in that case you can clean up manually with zkCleanup.sh, otherwise disk usage keeps growing.
extendedTypesEnabled: enables ZooKeeper's extended features; set this to true if you need TTL nodes.
reconfigEnabled: from 3.5.0 until 3.5.3 the dynamic reconfiguration feature could not be disabled. Because of security concerns, 3.5.3 introduced the reconfigEnabled flag, which defaults to false, meaning server configuration cannot be modified and every attempt to reconfigure the ensemble fails. Set it to true if you need to reconfigure servers.
standaloneEnabled: before 3.5.0, ZooKeeper ran either in standalone mode or in distributed mode; these were separate implementation stacks and could not be switched at runtime. By default (for backward compatibility) standaloneEnabled is true, with the result that an ensemble started as a single server cannot grow, and one started with multiple servers cannot shrink below two participants. Setting the flag to false runs the distributed stack even when there is only one participant in the ensemble.
dynamicConfigFile: the path to this server's dynamic configuration file.

ZooKeeper dynamic configuration file

Here the configuration files are all kept under conf.

touch /app/zookeeper/conf/zoo.cfg.dynamic

Put the following entries into the dynamic configuration file.
In the conf directory:
vim zoo.cfg.dynamic

server.1=VECS04851:2888:3888:participant;2181
server.2=VECS04852:2888:3888:participant;2181
server.3=VECS04853:2888:3888:participant;2181

Parameter descriptions:

server.<positive id> = <address1>:<port1>:<port2>[:role];[<client port address>:]<client port>

positive id: the server id within ZooKeeper (matching the node's myid)
address1: the server's hostname or IP address
port1: the port used to exchange data with the ensemble leader
port2: the port dedicated to leader election
role: the server's role in the ensemble, either participant or observer (default is participant); observers do not take part in elections
client port address: the address the client port binds to, default 0.0.0.0
client port: the port clients connect to, 2181 here
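
For example, a hypothetical fourth server running as an observer and binding the client port explicitly to all interfaces would be written as:

server.4=VECS04854:2888:3888:observer;0.0.0.0:2181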

bin/zkEnv.sh configuration changes

ZooKeeper's runtime log goes to zookeeper.out by default; since zookeeper.out neither rolls over nor cleans itself up, it grows without bound. Modify zkEnv.sh so that the runtime log is forced into rolling log files instead.

# Set the log directory; append the following line at the end of zkEnv.sh
export ZOO_LOG_DIR=/app/zookeeper/logs
# Set the log output mode; find ZOO_LOG4J_PROP in zkEnv.sh and change its value to:
ZOO_LOG4J_PROP="INFO,ROLLINGFILE"

conf/log4j.properties configuration changes

Set each log file to 1000 MB and roll over 10 files:

log4j.appender.ROLLINGFILE.MaxFileSize=1000MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
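
For context, these two lines tune the ROLLINGFILE appender that already exists in the stock conf/log4j.properties; the surrounding definition looks roughly like the following (a sketch of the shipped defaults with the sizes above applied, shown only so it is clear which appender is being modified):

log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.ROLLINGFILE.MaxFileSize=1000MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
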
II. Configure ZooKeeper to enable Kerberos.

On the Kerberos server vecs02583, as the root user, create the ZooKeeper principals in the /root/keytabs directory:

kadmin.local:  
kadmin.local:  listprincs
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/vecs02583@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
kadmin.local:  addprinc -randkey zookeeper/vecs04851@HADOOP.COM
WARNING: no policy specified for zookeeper/vecs04851@HADOOP.COM; defaulting to no policy
Principal "zookeeper/vecs04851@HADOOP.COM" created.
kadmin.local:  addprinc -randkey zookeeper/vecs04852@HADOOP.COM
WARNING: no policy specified for zookeeper/vecs04852@HADOOP.COM; defaulting to no policy
Principal "zookeeper/vecs04852@HADOOP.COM" created.
kadmin.local:  addprinc -randkey zookeeper/vecs04853@HADOOP.COM
WARNING: no policy specified for zookeeper/vecs04853@HADOOP.COM; defaulting to no policy
Principal "zookeeper/vecs04853@HADOOP.COM" created.
kadmin.local:  xst -k zookeeper.keytab zookeeper/vecs04851@HADOOP.COM
Entry for principal zookeeper/vecs04851@HADOOP.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04851@HADOOP.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04851@HADOOP.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04851@HADOOP.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04851@HADOOP.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04851@HADOOP.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:zookeeper.keytab.
kadmin.local:  xst -k zookeeper.keytab zookeeper/vecs04852@HADOOP.COM
Entry for principal zookeeper/vecs04852@HADOOP.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04852@HADOOP.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04852@HADOOP.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04852@HADOOP.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04852@HADOOP.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04852@HADOOP.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:zookeeper.keytab.
kadmin.local:  xst -k zookeeper.keytab zookeeper/vecs04853@HADOOP.COM
Entry for principal zookeeper/vecs04853@HADOOP.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04853@HADOOP.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04853@HADOOP.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04853@HADOOP.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04853@HADOOP.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:zookeeper.keytab.
Entry for principal zookeeper/vecs04853@HADOOP.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:zookeeper.keytab.

This generates the following file. (Note: give the file 400 permissions, and its owner and group must match the user that starts the ZooKeeper process.)

root@VECS02583:~/keytabs# ls
zookeeper.keytab
root@VECS02583:~/keytabs# chown -R zookeeper:zookeeper zookeeper.keytab
root@VECS02583:~/keytabs# chmod 400 zookeeper.keytab 
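
Optionally verify that the keytab contains entries for all three principals before distributing it:

klist -kt zookeeper.keytab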

Copy this file into ZooKeeper's conf directory.

Modify zoo.cfg and append the following security-related configuration:

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000

Create a java.env file in the conf directory and add the following:

export JVMFLAGS="-Djava.security.auth.login.config=/app/zookeeper/conf/jaas.conf"

Create a jaas.conf file in the conf directory:

Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/app/zookeeper/conf/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/vecs4851@HADOOP.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/app/zookeeper/conf/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/vecs04851@HADOOP.COM";
};

Note: if you change jaas.conf, you must restart zkServer, otherwise zkClient will fail to connect.
This is probably because zkClient and zkServer share the same jaas configuration; in practice the client should use its own keytab for access
rather than reusing the server's. You can create a separate user on another machine to act as the client.
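
A minimal sketch of such a standalone client-side jaas.conf, assuming a separate, hypothetical zkclient principal and keytab created the same way as the server principals above:

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/zkclient/zkclient.keytab"
storeKey=true
useTicketCache=false
principal="zkclient@HADOOP.COM";
};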

Copy ZooKeeper to the deployment nodes

tar -zcf apache-zookeeper-3.5.7-bin.tar.gz apache-zookeeper-3.5.7-bin

salt -N zk cp.get_file salt://apache-zookeeper-3.5.7-bin.tar.gz /app/apache-zookeeper-3.5.7-bin.tar.gz

salt -N zk cmd.run "cd /app && tar -zxf apache-zookeeper-3.5.7-bin.tar.gz && ln -s apache-zookeeper-3.5.7-bin zookeeper"

salt -N zk cmd.run "cd /app && chown -R zookeeper:zookeeper apache-zookeeper-3.5.7-bin"

salt -N zk cmd.run "useradd zookeeper && mkdir -p /data1/data/zookeeper/data && mkdir -p /data1/data/zookeeper/logs && chown -R zookeeper:zookeeper  /data1/data/zookeeper/"
You also need to write each node's integer id into the myid file under the ZooKeeper data directory:
salt VECS04851 cmd.run "su - zookeeper -c 'echo 1 > /data1/data/zookeeper/data/myid'"
salt VECS04852 cmd.run "su - zookeeper -c 'echo 2 > /data1/data/zookeeper/data/myid'"
salt VECS04853 cmd.run "su - zookeeper -c 'echo 3 > /data1/data/zookeeper/data/myid'"
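
Optionally confirm that each node received its own id:

salt -N zk cmd.run "cat /data1/data/zookeeper/data/myid"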

Start the ZooKeeper process as the zookeeper user:

salt -N zk cmd.run "su - zookeeper - 'zkServer.sh start '"


Once ZooKeeper is up, you can open zkCli on one of the zk nodes:

[18:58:53zookeeper@VECS04852 ~]$ zkCli.sh -server vecs04852:2181
Connecting to vecs04852:2181
2020-03-21 18:59:05,931 [myid:] - INFO  [main:Environment@109] - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
2020-03-21 18:59:05,935 [myid:] - INFO  [main:Environment@109] - Client environment:host.name=vecs04852
2020-03-21 18:59:05,935 [myid:] - INFO  [main:Environment@109] - Client environment:java.version=1.8.0_92
2020-03-21 18:59:05,938 [myid:] - INFO  [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2020-03-21 18:59:05,938 [myid:] - INFO  [main:Environment@109] - Client environment:java.home=/data1/app/jdk1.8.0_92/jre
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:java.class.path=/app/zookeeper/bin/../zookeeper-server/target/classes:/app/zookeeper/bin/../build/classes:/app/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/app/zookeeper/bin/../build/lib/*.jar:/app/zookeeper/bin/../lib/zookeeper-jute-3.5.7.jar:/app/zookeeper/bin/../lib/zookeeper-3.5.7.jar:/app/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/app/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/app/zookeeper/bin/../lib/netty-transport-native-unix-common-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-transport-native-epoll-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-transport-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-resolver-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-handler-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-common-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-codec-4.1.45.Final.jar:/app/zookeeper/bin/../lib/netty-buffer-4.1.45.Final.jar:/app/zookeeper/bin/../lib/log4j-1.2.17.jar:/app/zookeeper/bin/../lib/json-simple-1.1.1.jar:/app/zookeeper/bin/../lib/jline-2.11.jar:/app/zookeeper/bin/../lib/jetty-util-9.4.24.v20191120.jar:/app/zookeeper/bin/../lib/jetty-servlet-9.4.24.v20191120.jar:/app/zookeeper/bin/../lib/jetty-server-9.4.24.v20191120.jar:/app/zookeeper/bin/../lib/jetty-security-9.4.24.v20191120.jar:/app/zookeeper/bin/../lib/jetty-io-9.4.24.v20191120.jar:/app/zookeeper/bin/../lib/jetty-http-9.4.24.v20191120.jar:/app/zookeeper/bin/../lib/javax.servlet-api-3.1.0.jar:/app/zookeeper/bin/../lib/jackson-databind-2.9.10.2.jar:/app/zookeeper/bin/../lib/jackson-core-2.9.10.jar:/app/zookeeper/bin/../lib/jackson-annotations-2.9.10.jar:/app/zookeeper/bin/../lib/commons-cli-1.2.jar:/app/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/app/zookeeper/bin/../zookeeper-*.jar:/app/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/app/zookeeper/bin/../conf:.:/app/jdk//lib/tools.jar:/app/jdk//lib/dt.jar
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:java.compiler=<NA>
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:os.name=Linux
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:os.arch=amd64
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:os.version=2.6.32-754.27.1.el6.x86_64
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:user.name=zookeeper
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:user.home=/home/zookeeper
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:user.dir=/home/zookeeper
2020-03-21 18:59:05,939 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.free=115MB
2020-03-21 18:59:05,941 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.max=228MB
2020-03-21 18:59:05,941 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.total=121MB
2020-03-21 18:59:05,945 [myid:] - INFO  [main:ZooKeeper@868] - Initiating client connection, connectString=vecs04852:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@58d25a40
2020-03-21 18:59:05,954 [myid:] - INFO  [main:X509Util@79] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2020-03-21 18:59:05,963 [myid:] - INFO  [main:ClientCnxnSocket@237] - jute.maxbuffer value is 4194304 Bytes
2020-03-21 18:59:05,973 [myid:] - INFO  [main:ClientCnxn@1653] - zookeeper.request.timeout value is 0. feature enabled=
Welcome to ZooKeeper!
JLine support is enabled
[zk: vecs04852:2181(CONNECTING) 0] 2020-03-21 18:59:06,305 [myid:vecs04852:2181] - INFO  [main-SendThread(vecs04852:2181):Login@302] - Client successfully logged in.
2020-03-21 18:59:06,307 [myid:vecs04852:2181] - INFO  [Thread-1:Login$1@135] - TGT refresh thread started.
2020-03-21 18:59:06,312 [myid:vecs04852:2181] - INFO  [Thread-1:Login@320] - TGT valid starting at:        Sat Mar 21 18:59:06 CST 2020
2020-03-21 18:59:06,312 [myid:vecs04852:2181] - INFO  [Thread-1:Login@321] - TGT expires:                  Sun Mar 22 18:59:06 CST 2020
2020-03-21 18:59:06,313 [myid:vecs04852:2181] - INFO  [Thread-1:Login$1@193] - TGT refresh sleeping until: Sun Mar 22 15:03:57 CST 2020
2020-03-21 18:59:06,313 [myid:vecs04852:2181] - INFO  [main-SendThread(vecs04852:2181):SecurityUtils$1@128] - Client will use GSSAPI as SASL mechanism.
2020-03-21 18:59:06,327 [myid:vecs04852:2181] - INFO  [main-SendThread(vecs04852:2181):ClientCnxn$SendThread@1112] - Opening socket connection to server vecs04852/10.111.30.248:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2020-03-21 18:59:06,335 [myid:vecs04852:2181] - INFO  [main-SendThread(vecs04852:2181):ClientCnxn$SendThread@959] - Socket connection established, initiating session, client: /10.111.30.248:53206, server: vecs04852/10.111.30.248:2181
2020-03-21 18:59:06,368 [myid:vecs04852:2181] - INFO  [main-SendThread(vecs04852:2181):ClientCnxn$SendThread@1394] - Session establishment complete on server vecs04852/10.111.30.248:2181, sessionid = 0x2004e17d3a30000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

WATCHER::

WatchedEvent state:SaslAuthenticated type:None path:null

[zk: vecs04852:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: vecs04852:2181(CONNECTED) 1] 
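
As an optional end-to-end check that Kerberos authentication is actually in effect, you can create a znode protected by a SASL ACL and read the ACL back from the same zkCli session (a sketch; the znode name /kerberos_test is arbitrary, and the ACL id must match the principal the client authenticated with):

create /kerberos_test "hello" sasl:zookeeper/vecs04851@HADOOP.COM:cdrwa
getAcl /kerberos_test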
