Setting up a Kafka test cluster

Requesting machines

Contact the Ops team to request Linux servers: one, three, five, i.e. 2n+1 machines. A ZooKeeper ensemble can only serve requests while more than half of its nodes are alive; with 3 nodes, 2 form a majority, so 1 node may fail. An even count is possible but pointless: with 4 nodes you can still only afford to lose 1, since losing a second would leave 2 of 4, which is not more than half. The thing to remember is "more than half".
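The majority rule can be stated as a formula: an ensemble of n nodes tolerates floor((n-1)/2) failures. A quick sketch (the function name is just for illustration):

```shell
# Failures a majority-quorum ensemble of n nodes survives: floor((n-1)/2)
tolerated() { echo $(( ($1 - 1) / 2 )); }

tolerated 3   # 1 failure tolerated
tolerated 4   # still 1 -- the 4th node buys nothing
tolerated 5   # 2
```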
We obtained the following 3 machines:

10.159.1.40
10.159.1.41
10.159.1.42

Installing the base environment

Install Java support: sun-java8 is required; the installation itself is not covered here.

Directory layout

First, note that in a production environment the directory structure should be planned up front, so that projects stay easy to locate once there are many of them.

$ pwd
/home/work
$ mkdir opt
$ cd opt
$ mkdir zookeeper
$ mkdir kafka
$ tree -L 1
.
├── kafka
└── zookeeper

2 directories, 0 files

Setting up the zookeeper cluster

Installing and configuring zookeeper

Installing zookeeper

$ pwd
/home/work
$ wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
# if the machine has no external network access, fetch zookeeper-3.4.13.tar.gz via scp or similar
$ cd /home/work/opt/zookeeper
$ tar -zxvf /home/work/zookeeper-3.4.13.tar.gz
$ mkdir zkdata    # holds snapshot files
$ mkdir zkdatalog # holds transaction logs

Configuring zookeeper

Look at the conf directory: zoo_sample.cfg there is the sample configuration file that ships with zookeeper. Copy it to zoo.cfg (cp zoo_sample.cfg zoo.cfg); zoo.cfg is the file name zookeeper expects.

$ ll /home/work/opt/zookeeper/zookeeper-3.4.13/conf/ -tr
total 16
-rw-r--r-- 1 work work 922 Jun 30 01:04 zoo_sample.cfg
-rw-r--r-- 1 work work 2161 Jun 30 01:04 log4j.properties
-rw-r--r-- 1 work work 535 Jun 30 01:04 configuration.xsl

Edit zoo.cfg so that it reads:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/work/opt/zookeeper/zkdata
dataLogDir=/home/work/opt/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=12181
server.1=10.159.1.40:12888:13888
server.2=10.159.1.41:12888:13888
server.3=10.159.1.42:12888:13888
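The two ports after each server entry are the quorum (follower-to-leader) port and the leader-election port, and the number after `server.` must match that machine's myid:

```
# server.<myid>=<ip>:<quorum port>:<election port>
server.1=10.159.1.40:12888:13888
```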

Create the myid files

# run on server1
echo "1" > /home/work/opt/zookeeper/zkdata/myid
# run on server2
echo "2" > /home/work/opt/zookeeper/zkdata/myid
# run on server3
echo "3" > /home/work/opt/zookeeper/zkdata/myid
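Instead of remembering a different echo per box, the id can be derived from the server list in zoo.cfg; a minimal sketch with this guide's three IPs hard-coded (myid_for is a hypothetical helper, not part of zookeeper):

```shell
# Map each zookeeper host to the myid assigned in zoo.cfg
myid_for() {
  case "$1" in
    10.159.1.40) echo 1 ;;
    10.159.1.41) echo 2 ;;
    10.159.1.42) echo 3 ;;
    *) return 1 ;;   # unknown host: not part of the ensemble
  esac
}

# On each machine: myid_for <own ip> > /home/work/opt/zookeeper/zkdata/myid
myid_for 10.159.1.41   # 2
```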

Verifying the zookeeper deployment

Starting zookeeper

# change into Zookeeper's bin directory
$ cd /home/work/opt/zookeeper/zookeeper-3.4.13/bin
# start the service (do this on all 3 machines)
$ ./zkServer.sh start

Checking that it started

# check the server status
$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/work/opt/zookeeper/zookeeper-3.4.13/bin/../conf/zoo.cfg # the config file in use
Mode: follower # this node's role: leader or follower

Note: a cluster has exactly one leader and several followers. The leader responds to client read and write requests, while the followers sync data from the leader; when the leader goes down, a new leader is elected from among the followers.

You can also check the zk process with jps; QuorumPeerMain is the main class of the whole zk service:

$ jps
48642 QuorumPeerMain
19551 Jps
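Besides zkServer.sh status and jps, ZooKeeper 3.4 answers four-letter-word probes on the client port; a small check (check_zk is a hypothetical helper, and it assumes nc is installed):

```shell
# Probe a node with "ruok"; a healthy server answers "imok"
check_zk() {
  [ "$(echo ruok | nc -w 2 "$1" "$2" 2>/dev/null)" = "imok" ]
}

check_zk 10.159.1.40 12181 && echo "node is up" || echo "node is down"
```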

Setting up the kafka cluster

Installing and configuring kafka

Installing kafka

$ cd ~
$ pwd
/home/work

# closer.cgi is a mirror-picker page, not the tarball itself; download from the archive instead
$ wget https://archive.apache.org/dist/kafka/2.0.0/kafka_2.12-2.0.0.tgz
$ cd /home/work/opt/kafka
$ tar -zxvf /home/work/kafka_2.12-2.0.0.tgz
$ mkdir kafkalogs # directory for kafka's message data

Configuring kafka

Change into the config directory:

cd /home/work/opt/kafka/kafka_2.12-2.0.0/config

The file we care about here is server.properties. Listing the directory:

$ ll
total 68
-rw-r--r-- 1 work work 906 Jul 24 22:17 connect-console-sink.properties
-rw-r--r-- 1 work work 909 Jul 24 22:17 connect-console-source.properties
-rw-r--r-- 1 work work 5321 Jul 24 22:17 connect-distributed.properties
-rw-r--r-- 1 work work 883 Jul 24 22:17 connect-file-sink.properties
-rw-r--r-- 1 work work 881 Jul 24 22:17 connect-file-source.properties
-rw-r--r-- 1 work work 1111 Jul 24 22:17 connect-log4j.properties
-rw-r--r-- 1 work work 2262 Jul 24 22:17 connect-standalone.properties
-rw-r--r-- 1 work work 1221 Jul 24 22:17 consumer.properties
-rw-r--r-- 1 work work 4727 Jul 24 22:17 log4j.properties
-rw-r--r-- 1 work work 1919 Jul 24 22:17 producer.properties
-rw-r--r-- 1 work work 7025 Oct 24 11:42 server.properties
-rw-r--r-- 1 work work 1032 Jul 24 22:17 tools-log4j.properties
-rw-r--r-- 1 work work 1169 Jul 24 22:17 trogdor.conf
-rw-r--r-- 1 work work 1023 Jul 24 22:17 zookeeper.properties

Among these files you will notice zookeeper.properties: Kafka ships with a built-in zookeeper that could be started from here, but using a standalone zookeeper cluster is recommended.
Edit server.properties as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# host.name/port are deprecated; the equivalent listeners form is preferred
listeners=PLAINTEXT://10.159.1.40:19092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/home/work/opt/kafka/kafkalogs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.159.1.40:12181,10.159.1.41:12181,10.159.1.42:12181
message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
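One hedged suggestion: this file leaves the internal topics (__consumer_offsets, __transaction_state) at replication factor 1 even though default.replication.factor is 2. With three brokers, the file's own comment recommends raising them for availability, e.g.:

```
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
```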

Verifying the kafka deployment

Starting kafka

$ cd /home/work/opt/kafka/kafka_2.12-2.0.0/bin
$ ./kafka-server-start.sh -daemon ../config/server.properties

Checking that it started

$ jps # check whether kafka started successfully
48642 QuorumPeerMain
55722 Jps
25868 Kafka

Tip: the steps above set up zookeeper and kafka on a single machine; repeat the same steps on the other two machines, adjusting myid, broker.id and the broker's listen address for each.
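Only a couple of values differ per machine; with this guide's addresses, broker.id can even be derived from the last octet of the IP (a sketch, with the .40 → 1 mapping assumed):

```shell
# Derive this broker's id from its IP (the only per-machine values are
# broker.id and the listen address in server.properties)
ip=10.159.1.41                                # this machine's address
id=$(( $(echo "$ip" | cut -d. -f4) - 39 ))    # .40 -> 1, .41 -> 2, .42 -> 3
echo "broker.id=$id"
```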

Testing kafka

Creating a topic

$ ./kafka-topics.sh --create --zookeeper 10.159.1.41:12181 --replication-factor 2 --partitions 1 --topic test_mq

# --replication-factor 2  keep 2 replicas of each partition
# --partitions 1          create 1 partition
# --topic                 name the topic test_mq

$ ./kafka-topics.sh --list --zookeeper localhost:12181
test_mq

Producing & consuming messages

# run on 10.159.1.40 and type a message at the > prompt
$ ./kafka-console-producer.sh --broker-list 10.159.1.40:19092 --topic test_mq
> test mq message

# run on 10.159.1.41; the message comes through
$ ./kafka-console-consumer.sh --bootstrap-server PLAINTEXT://10.159.1.41:19092 --topic test_mq --from-beginning
test mq message

# run on 10.159.1.42; the message comes through
$ ./kafka-console-consumer.sh --bootstrap-server PLAINTEXT://10.159.1.42:19092 --topic test_mq --from-beginning
test mq message

At this point the kafka cluster has been set up successfully. Further optimization is still needed.

TODO

  1. Investigate multi-broker deployments
  2. Parameter tuning