ZooKeeper + Kafka
I. Deploy the ZooKeeper cluster
192.168.100.10 zookeeper1
192.168.100.20 zookeeper2
192.168.100.30 zookeeper3
1. Check that firewalld and SELinux are disabled (all three nodes)
[root@stw3 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@stw3 ~]# getenforce
Disabled
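If firewalld or SELinux is still active on any node, the following commands (not part of the original transcript, but standard on CentOS 7) disable both. The sed edit only takes effect after a reboot, so setenforce 0 is used to drop SELinux out of enforcing mode immediately:
[root@node1 ~]# systemctl stop firewalld
[root@node1 ~]# systemctl disable firewalld
[root@node1 ~]# setenforce 0
[root@node1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config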
2. Set the hostname and configure the /etc/hosts file
[root@stw ~]# hostnamectl set-hostname node1.example.com
[root@stw ~]# bash
[root@stw ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 node1.example.com node1
192.168.100.20 node2.example.com node2
192.168.100.30 node3.example.com node3
3. Set up passwordless SSH (worth doing on all three nodes)
[root@node1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:nbfUSdfvO6UIx254Wfa28+1JW7e4V8JawSZtmyPS97o root@node1.example.com
The key's randomart image is:
+---[RSA 2048]----+
| |
| .|
| o. o|
| . ..o*o.|
| S ooo=o+.|
| oo=.@.o|
| *.X B*|
| . B ++%|
| o oE@O|
+----[SHA256]-----+
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.100.20)' can't be established.
ECDSA key fingerprint is SHA256:R7/1dpul7cu8SnefsN2wQw5hKDL+xekk0ffasLS6OGI.
ECDSA key fingerprint is MD5:81:88:a1:16:52:83:c0:d5:59:ad:2b:3a:d5:52:02:bc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node3 (192.168.100.30)' can't be established.
ECDSA key fingerprint is SHA256:R7/1dpul7cu8SnefsN2wQw5hKDL+xekk0ffasLS6OGI.
ECDSA key fingerprint is MD5:81:88:a1:16:52:83:c0:d5:59:ad:2b:3a:d5:52:02:bc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@node3'"
and check to make sure that only the key(s) you wanted were added.
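A quick way to confirm passwordless login works (not in the original transcript; both commands should print the remote hostname without asking for a password):
[root@node1 ~]# ssh root@node2 hostname
[root@node1 ~]# ssh root@node3 hostname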
4. Copy node1's /etc/hosts file directly to node2 and node3
[root@node1 ~]# scp /etc/hosts root@node2:/etc/hosts
hosts 100% 275 80.1KB/s 00:00
[root@node1 ~]# scp /etc/hosts root@node3:/etc/hosts
hosts 100% 275 80.6KB/s 00:00
5. Synchronize clocks (all three nodes)
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# systemctl enable chronyd
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
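To verify that chrony is actually syncing, you can list its time sources (the output depends on your environment and the NTP servers configured in /etc/chrony.conf):
[root@node1 ~]# chronyc sources -v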
6. Install Java (all three nodes)
(1) Create /opt/software on all three hosts
[root@node1 ~]# mkdir /opt/software
[root@node1 ~]# cd /opt/software
(2) Remove the bundled OpenJDK, then download and install jdk-8u181
[root@node1 ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[root@node1 ~]# rpm -qa | grep java
python-javapackages-3.4.1-11.el7.noarch
java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.181-7.b13.el7.x86_64
tzdata-java-2018e-3.el7.noarch
javapackages-tools-3.4.1-11.el7.noarch
[root@node1 ~]# rpm -e java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64 --nodeps
[root@node1 ~]# rpm -e java-1.8.0-openjdk-headless-1.8.0.181-7.b13.el7.x86_64 --nodeps
[root@node1 ~]# java -version
bash: java: command not found...
[root@node1 ~]# cd /opt/software/
[root@node1 software]# rz -E
rz waiting to receive.
[root@node1 software]# ls
jdk-8u181-linux-x64.tar.gz
[root@node1 software]# tar -zxf jdk-8u181-linux-x64.tar.gz
[root@node1 software]# ls
jdk1.8.0_181 jdk-8u181-linux-x64.tar.gz
[root@node1 software]# vim /etc/profile
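The profile edit itself is not shown above; assuming the JDK was extracted to /opt/software/jdk1.8.0_181 as in the previous step, the appended lines would look like this:
// Append at the end of /etc/profile
export JAVA_HOME=/opt/software/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin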
(3) Send the Java directory and /etc/profile to node2 and node3
[root@node1 software]# scp -r jdk1.8.0_181/ root@node2:/opt/software/
[root@node1 software]# scp -r jdk1.8.0_181/ root@node3:/opt/software/
[root@node1 software]# scp /etc/profile root@node2:/etc/profile
profile 100% 1963 757.4KB/s 00:00
[root@node1 software]# scp /etc/profile root@node3:/etc/profile
profile 100% 1963 969.7KB/s 00:00
(4) Run source /etc/profile on all three hosts
[root@node1 software]# source /etc/profile
[root@node2 ~]# source /etc/profile
[root@node3 ~]# source /etc/profile
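As a sanity check on each host, java -version should now report 1.8.0_181 again, this time from the JDK under /opt/software rather than the removed OpenJDK packages:
[root@node1 software]# java -version
[root@node1 software]# echo $JAVA_HOME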
7. Install ZooKeeper
[root@node1 software]# rz -E
rz waiting to receive.
[root@node1 software]# ls
jdk1.8.0_181 jdk-8u181-linux-x64.tar.gz zookeeper-3.4.8.tar.gz
[root@node1 software]# tar -zxf zookeeper-3.4.8.tar.gz
[root@node1 software]# mv zookeeper-3.4.8 zookeeper
[root@node1 software]# cd zookeeper/
[root@node1 zookeeper]# mkdir data logs
[root@node1 zookeeper]# ls
bin dist-maven logs zookeeper-3.4.8.jar
build.xml docs NOTICE.txt zookeeper-3.4.8.jar.asc
CHANGES.txt ivysettings.xml README_packaging.txt zookeeper-3.4.8.jar.md5
conf ivy.xml README.txt zookeeper-3.4.8.jar.sha1
contrib lib recipes
data LICENSE.txt src
[root@node1 zookeeper]# cd conf
[root@node1 conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
[root@node1 conf]# cp zoo_sample.cfg zoo.cfg
[root@node1 conf]# ls
configuration.xsl log4j.properties zoo.cfg zoo_sample.cfg
[root@node1 conf]# vim zoo.cfg
[root@node1 conf]# cat zoo.cfg | grep -v "#"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/software/zookeeper/data
clientPort=2181
server.1=192.168.100.10:2888:3888
server.2=192.168.100.20:2888:3888
server.3=192.168.100.30:2888:3888
8. Write each server's ID into the myid file in the data directory just created (the ID must match the N in the corresponding server.N line of zoo.cfg; 2888 is the quorum port and 3888 the leader-election port)
[root@node1 conf]# echo 1 > /opt/software/zookeeper/data/myid
[root@node1 conf]# cat /opt/software/zookeeper/data/myid
1
9. Send the zookeeper directory to the other two hosts
[root@node1 software]# scp -r zookeeper root@node2:/opt/software/
[root@node1 software]# scp -r zookeeper root@node3:/opt/software/
10. Change the ID in the myid file on the other two hosts
node2:
[root@node2 ~]# echo 2 > /opt/software/zookeeper/data/myid
[root@node2 ~]# cat /opt/software/zookeeper/data/myid
2
node3:
[root@node3 ~]# echo 3 > /opt/software/zookeeper/data/myid
[root@node3 ~]# cat /opt/software/zookeeper/data/myid
3
11. Configure the ZooKeeper environment variables
[root@node1 software]# vim /etc/profile
// Append these two lines at the end
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
[root@node1 software]# scp /etc/profile root@node2:/etc/profile
profile 100% 2047 643.4KB/s 00:00
[root@node1 software]# scp /etc/profile root@node3:/etc/profile
profile 100% 2047 627.3KB/s 00:00
12. Run source /etc/profile on all three hosts so the change takes effect
[root@node1 software]# source /etc/profile
13. Start ZooKeeper on all three hosts and check its status
node1:
[root@node1 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower
node2:
[root@node2 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: leader
node3:
[root@node3 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower
As you can see, node2 is the leader, while node1 and node3 are followers.
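As an extra check (not in the original), you can connect to the ensemble with the bundled CLI from any node; ls / should succeed and show at least the built-in zookeeper znode:
[root@node1 ~]# zkCli.sh -server 192.168.100.10:2181
[zk: 192.168.100.10:2181(CONNECTED) 0] ls /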
II. Install Kafka
1. Download and extract the Kafka package
[root@node1 software]# ls
jdk1.8.0_181 kafka_2.11-2.4.0.tgz zookeeper-3.4.8.tar.gz
jdk-8u181-linux-x64.tar.gz zookeeper zookeeper.out
[root@node1 software]# tar -zxf kafka_2.11-2.4.0.tgz
2. Edit the configuration file
[root@node1 software]# vim kafka_2.11-2.4.0/config/server.properties
// Find the following two lines in the config file and comment them out (prefix them with #), as shown
#broker.id=0
#zookeeper.connect=localhost:2181
// Add
broker.id=1
zookeeper.connect=192.168.100.10:2181,192.168.100.20:2181,192.168.100.30:2181
// Uncomment and modify
listeners=PLAINTEXT://192.168.100.10:9092
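A simple way to confirm the three settings took effect (a supplementary check, not from the original transcript):
[root@node1 software]# grep -E '^(broker.id|zookeeper.connect|listeners)=' kafka_2.11-2.4.0/config/server.properties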
3. Send the Kafka directory to the other two hosts
[root@node1 software]# scp -r kafka_2.11-2.4.0 root@node2:/opt/software/
[root@node1 software]# scp -r kafka_2.11-2.4.0 root@node3:/opt/software/
4. Modify the Kafka configuration file on the other two hosts
node2:
[root@node2 ~]# vim /opt/software/kafka_2.11-2.4.0/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.100.20:9092
node3:
[root@node3 ~]# vim /opt/software/kafka_2.11-2.4.0/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.100.30:9092
5. Start Kafka on all three hosts
[root@node1 software]# cd /opt/software/kafka_2.11-2.4.0/bin
[root@node1 bin]# ./kafka-server-start.sh -daemon /opt/software/kafka_2.11-2.4.0/config/server.properties
Check with jps:
node1:
[root@node1 bin]# jps
11761 Kafka
11842 Jps
10671 QuorumPeerMain
node2:
[root@node2 bin]# jps
10722 QuorumPeerMain
11821 Jps
11743 Kafka
node3:
[root@node3 bin]# jps
10626 QuorumPeerMain
11241 Kafka
11310 Jps
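Besides jps, you can confirm each broker is listening on its advertised port by running this on each host (a supplementary check, not in the original):
[root@node1 bin]# ss -tlnp | grep 9092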
6. Test Kafka
node1:
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper 192.168.100.10:2181 --replication-factor 1 --partitions 1 --topic test
Created topic test.
node2:
[root@node2 bin]# ./kafka-topics.sh --list --zookeeper 192.168.100.20:2181
test
node3:
[root@node3 bin]# ./kafka-topics.sh --list --zookeeper 192.168.100.30:2181
test
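To go beyond listing topics, you can push a message through the cluster with the bundled console producer and consumer (the flags below are valid for Kafka 2.4; whatever you type into the producer on node1 should appear in the consumer on node2):
// On node1
[root@node1 bin]# ./kafka-console-producer.sh --broker-list 192.168.100.10:9092 --topic test
// On node2
[root@node2 bin]# ./kafka-console-consumer.sh --bootstrap-server 192.168.100.20:9092 --topic test --from-beginning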
