ZooKeeper + Kafka Cluster Deployment

Host plan:

10.200.3.85    Kafka+ZooKeeper

10.200.3.86    Kafka+ZooKeeper

10.200.3.87    Kafka+ZooKeeper

Software download URLs:

  wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
  wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.12-1.1.0.tgz

The /etc/hosts file is identical on all three hosts (a quick resolution check is sketched below the listing):

  # cat /etc/hosts
  10.200.3.85 ZooKeeper-Kafka-01
  10.200.3.86 ZooKeeper-Kafka-02
  10.200.3.87 ZooKeeper-Kafka-03
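A minimal sanity check, assuming bash and the standard ping tool are available, that every hostname resolves and answers:

  # Run from any node: verify hostname resolution and reachability
  for h in ZooKeeper-Kafka-01 ZooKeeper-Kafka-02 ZooKeeper-Kafka-03; do
    getent hosts "$h" && ping -c 1 -W 1 "$h" > /dev/null && echo "$h reachable"
  done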

I. Installing ZooKeeper

1. On the master node:

  [root@ZooKeeper-Kafka-01 src]# tar -zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
  [root@ZooKeeper-Kafka-01 src]# cd ..
  [root@ZooKeeper-Kafka-01 local]# ln -s zookeeper-3.4.10 zookeeper
  [root@ZooKeeper-Kafka-01 local]# cd zookeeper/conf/
  [root@ZooKeeper-Kafka-01 conf]# cp zoo_sample.cfg zoo.cfg

Key settings in zoo.cfg:

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/tmp/zookeeper
  clientPort=2181
  server.1=ZooKeeper-Kafka-01:2888:3888
  server.2=ZooKeeper-Kafka-02:2888:3888
  server.3=ZooKeeper-Kafka-03:2888:3888
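Note that on many distributions /tmp is cleared on reboot, so dataDir=/tmp/zookeeper is fine for a test bed but not for anything long-lived. A minimal sketch for relocating it, assuming a hypothetical /data/zookeeper path is acceptable (the rest of this guide keeps /tmp/zookeeper):

  # Assumption: /data/zookeeper is a persistent, writable path
  mkdir -p /data/zookeeper
  sed -i 's|^dataDir=.*|dataDir=/data/zookeeper|' /usr/local/zookeeper/conf/zoo.cfg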

2. Create the dataDir directory /tmp/zookeeper and the myid file

# On the master node

  [root@ZooKeeper-Kafka-01 conf]# mkdir /tmp/zookeeper
  [root@ZooKeeper-Kafka-01 conf]# touch /tmp/zookeeper/myid
  [root@ZooKeeper-Kafka-01 conf]# echo 1 > /tmp/zookeeper/myid

3. Copy the zookeeper directory to the other two nodes:

  [root@ZooKeeper-Kafka-01 local]# scp -r zookeeper-3.4.10/ 10.200.3.86:/usr/local/
  [root@ZooKeeper-Kafka-01 local]# scp -r zookeeper-3.4.10/ 10.200.3.87:/usr/local/

4. Create the directory and myid file on the two slave nodes (a verification sketch follows the listings)

# ZooKeeper-Kafka-02 node:

  [root@ZooKeeper-Kafka-02 local]# ln -s zookeeper-3.4.10 zookeeper
  [root@ZooKeeper-Kafka-02 local]# mkdir /tmp/zookeeper
  [root@ZooKeeper-Kafka-02 local]# touch /tmp/zookeeper/myid
  [root@ZooKeeper-Kafka-02 local]# echo 2 > /tmp/zookeeper/myid

# ZooKeeper-Kafka-03 node:

  [root@ZooKeeper-Kafka-03 local]# ln -s zookeeper-3.4.10 zookeeper
  [root@ZooKeeper-Kafka-03 local]# mkdir /tmp/zookeeper
  [root@ZooKeeper-Kafka-03 local]# touch /tmp/zookeeper/myid
  [root@ZooKeeper-Kafka-03 local]# echo 3 > /tmp/zookeeper/myid
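A quick check, run on each node, that the myid value matches that node's server.N entry in zoo.cfg:

  # The number in myid must equal N in the matching server.N line
  cat /tmp/zookeeper/myid
  grep '^server\.' /usr/local/zookeeper/conf/zoo.cfg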

5. Start zookeeper on every node and test:

  [root@ZooKeeper-Kafka-01 zookeeper]# ./bin/zkServer.sh start
  [root@ZooKeeper-Kafka-02 zookeeper]# ./bin/zkServer.sh start
  [root@ZooKeeper-Kafka-03 zookeeper]# ./bin/zkServer.sh start

6. Check the status (a quick quorum check is sketched after the output):

  [root@ZooKeeper-Kafka-01 zookeeper]# ./bin/zkServer.sh status
  ZooKeeper JMX enabled by default
  Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
  Mode: follower
  [root@ZooKeeper-Kafka-02 zookeeper]# ./bin/zkServer.sh status
  ZooKeeper JMX enabled by default
  Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
  Mode: leader
  [root@ZooKeeper-Kafka-03 zookeeper]# ./bin/zkServer.sh status
  ZooKeeper JMX enabled by default
  Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
  Mode: follower
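A minimal sketch of the same check from a single host, assuming nc is installed and the stat four-letter command is not blocked by a 4lw whitelist:

  # Print each node's role; expect one leader and two followers
  for h in ZooKeeper-Kafka-01 ZooKeeper-Kafka-02 ZooKeeper-Kafka-03; do
    echo "== $h =="; echo stat | nc "$h" 2181 | grep Mode
  done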

At this point the ZooKeeper cluster has been installed successfully.

Kafka Cluster Installation and Configuration

1. Install:

  [root@ZooKeeper-Kafka-01 src]# tar -zxvf kafka_2.12-1.1.0.tgz -C /usr/local/
  [root@ZooKeeper-Kafka-01 src]# cd ..
  [root@ZooKeeper-Kafka-01 local]# ln -s kafka_2.12-1.1.0 kafka

2. Edit the server.properties file (a note on log.dirs follows the listing)

  # broker.id is 0 on the master
  broker.id=0
  listeners=PLAINTEXT://ZooKeeper-Kafka-01:9092
  advertised.listeners=PLAINTEXT://ZooKeeper-Kafka-01:9092
  num.network.threads=3
  num.io.threads=8
  socket.send.buffer.bytes=102400
  socket.receive.buffer.bytes=102400
  socket.request.max.bytes=104857600
  log.dirs=/tmp/kafka-logs
  num.partitions=5
  num.recovery.threads.per.data.dir=1
  offsets.topic.replication.factor=1
  transaction.state.log.replication.factor=1
  transaction.state.log.min.isr=1
  log.retention.hours=24
  log.segment.bytes=1073741824
  log.retention.check.interval.ms=300000
  # ZooKeeper connection string
  zookeeper.connect=ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181
  zookeeper.connection.timeout.ms=6000
  group.initial.rebalance.delay.ms=0
  # allow topics to be deleted
  delete.topic.enable=true
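As with the ZooKeeper dataDir, log.dirs=/tmp/kafka-logs can be wiped on reboot on many distributions. A minimal sketch for relocating it, assuming a hypothetical /data/kafka-logs path (this guide keeps /tmp/kafka-logs for brevity):

  # Assumption: /data/kafka-logs is a persistent, writable path
  mkdir -p /data/kafka-logs
  sed -i 's|^log.dirs=.*|log.dirs=/data/kafka-logs|' /usr/local/kafka/config/server.properties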

3. Copy the kafka_2.12-1.1.0 directory to the other two nodes

  [root@ZooKeeper-Kafka-01 local]# scp -r kafka_2.12-1.1.0/ 10.200.3.86:/usr/local/
  [root@ZooKeeper-Kafka-01 local]# scp -r kafka_2.12-1.1.0/ 10.200.3.87:/usr/local/

Then modify broker.id and listeners in each node's server.properties file (a sed-based sketch follows the listing):

  [root@ZooKeeper-Kafka-02 config]# cat server.properties
  broker.id=1
  listeners=PLAINTEXT://ZooKeeper-Kafka-02:9092
  advertised.listeners=PLAINTEXT://ZooKeeper-Kafka-02:9092
  [root@ZooKeeper-Kafka-03 config]# cat server.properties
  broker.id=2
  listeners=PLAINTEXT://ZooKeeper-Kafka-03:9092
  advertised.listeners=PLAINTEXT://ZooKeeper-Kafka-03:9092
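A minimal sed sketch for the same edits, assuming the file copied from the master already contains these keys uncommented; adjust ID and HOST per node (ID=2 and HOST=ZooKeeper-Kafka-03 on the third node):

  # Example for ZooKeeper-Kafka-02
  ID=1; HOST=ZooKeeper-Kafka-02
  cd /usr/local/kafka/config
  sed -i "s/^broker.id=.*/broker.id=${ID}/" server.properties
  sed -i "s|^listeners=.*|listeners=PLAINTEXT://${HOST}:9092|" server.properties
  sed -i "s|^advertised.listeners=.*|advertised.listeners=PLAINTEXT://${HOST}:9092|" server.properties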

4. Start the service on every node (a daemon-mode variant is sketched below)

  bin/kafka-server-start.sh config/server.properties &
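A sketch of starting the broker in daemon mode and then confirming that the brokers have registered in ZooKeeper (the expected id list below is an assumption based on the broker.id values configured above):

  # Start detached instead of backgrounding with &
  bin/kafka-server-start.sh -daemon config/server.properties
  # Once all three brokers are up, this should list [0, 1, 2]
  /usr/local/zookeeper/bin/zkCli.sh -server ZooKeeper-Kafka-01:2181 ls /brokers/ids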

ZooKeeper + Kafka Cluster Testing

Create a topic:

  [root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --create --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --replication-factor 3 --partitions 3 --topic test

Describe the topic:

  [root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --describe --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test

List topics:

  [root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --list --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181
  test

Create a producer (a non-interactive variant is sketched after the output):

# Produce test messages on the master node

  [root@ZooKeeper-Kafka-01 kafka]# bin/kafka-console-producer.sh --broker-list ZooKeeper-Kafka-01:9092 --topic test
  >hello world
  >[2018-04-03 12:18:25,545] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
  this is example ...
  >[2018-04-03 12:19:16,342] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-2. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
  welcome to china
  >[2018-04-03 12:20:53,141] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
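For scripted tests, the console producer also reads from stdin, so a message can be piped in without the interactive prompt (a minimal sketch):

  # Send a single message non-interactively
  echo "hello from a pipe" | bin/kafka-console-producer.sh --broker-list ZooKeeper-Kafka-01:9092 --topic test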

Create a consumer:

# Consume test messages on the ZooKeeper-Kafka-02 node

  [root@ZooKeeper-Kafka-02 kafka]# bin/kafka-console-consumer.sh --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test --from-beginning
  Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
  this is example ...
  hello world
  [2018-04-03 12:20:53,145] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
  welcome to china

# Consume test messages on the ZooKeeper-Kafka-03 node

  [root@ZooKeeper-Kafka-03 kafka]# bin/kafka-console-consumer.sh --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test --from-beginning
  Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
  welcome to china
  hello world
  this is example ...

Anything typed into the producer now shows up in both consumers, which confirms that consumption works.
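As the deprecation warning above suggests, the same test can be run with the new consumer by pointing it at the brokers instead of ZooKeeper (a sketch):

  # New-style consumer using --bootstrap-server
  bin/kafka-console-consumer.sh --bootstrap-server ZooKeeper-Kafka-01:9092,ZooKeeper-Kafka-02:9092,ZooKeeper-Kafka-03:9092 --topic test --from-beginning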

Delete the topic and stop the service

  [root@ZooKeeper-Kafka-01 kafka]# bin/kafka-topics.sh --delete --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181 --topic test
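Because delete.topic.enable=true was set earlier, the topic is actually removed rather than only marked for deletion; a quick way to confirm is to list topics again (it may still appear briefly while deletion completes):

  # The test topic should no longer be listed
  bin/kafka-topics.sh --list --zookeeper ZooKeeper-Kafka-01:2181,ZooKeeper-Kafka-02:2181,ZooKeeper-Kafka-03:2181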

Start the service:

  bin/kafka-server-start.sh config/server.properties &

Stop the service:

  bin/kafka-server-stop.sh
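When tearing down the whole cluster, a sensible order is to stop Kafka on every broker first and only then stop ZooKeeper (a sketch):

  # On each Kafka node
  bin/kafka-server-stop.sh
  # Then on each ZooKeeper node
  /usr/local/zookeeper/bin/zkServer.sh stop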

At this point the ZooKeeper + Kafka cluster has been configured successfully.
