This blog post explains how to start 3 Kafka servers on a single machine and how to visualise the way the partitions of a topic are managed by these three servers.

Steps to Start 3 Kafka Servers

 

1. Create two more copies of the server.properties file present in the config directory, and name the copies server1.properties and server2.properties.
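The copies can be made from the Kafka installation directory, for example (a sketch, assuming the standard Kafka layout with a config subdirectory):

```shell
# Run from the Kafka installation root (assumption: standard layout).
cp config/server.properties config/server1.properties
cp config/server.properties config/server2.properties
```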

2. Open the two copied files one by one and change the configurations discussed below.

a) broker.id

broker.id is the unique ID given to a Kafka broker; it uniquely identifies the broker within a cluster. The default value is 0, as set in server.properties. In server1.properties and server2.properties, set broker.id to 1 and 2 respectively, as shown below.

server1.properties

server2.properties
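Assuming the default broker.id=0 line shipped with Kafka, the change can also be scripted with GNU sed (on macOS, use sed -i '' instead of sed -i):

```shell
# Give each copied config a unique broker id; server.properties keeps 0.
sed -i 's/^broker.id=0/broker.id=1/' config/server1.properties
sed -i 's/^broker.id=0/broker.id=2/' config/server2.properties
```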

 

b) Port Number

The default port on which a Kafka broker runs is 9092, as set in server.properties. The port must be changed for the other two brokers; otherwise all three brokers would read and write on the same port. To change it, simply increment the default value by 1 and 2. Keep in mind that when the brokers run on different machines, there is no need to change the port. The port is set to 9093 in server1.properties and 9094 in server2.properties, as shown below.

server1.properties

 

server2.properties
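Note that the exact property name depends on your Kafka version: older releases use a plain port property, while newer releases configure the port through the listeners URI. A sketch of both, assuming GNU sed and the defaults described above:

```shell
# Older Kafka versions: a plain port property.
sed -i 's/^port=9092/port=9093/' config/server1.properties
sed -i 's/^port=9092/port=9094/' config/server2.properties

# Newer Kafka versions: set the listeners URI instead, e.g.
#   server1.properties:  listeners=PLAINTEXT://:9093
#   server2.properties:  listeners=PLAINTEXT://:9094
```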

c) log.dirs

log.dirs is the broker's log directory, which is the main data directory of a broker. You have to change its path; otherwise all three brokers would write data to the same directory. The following images show the changed values for server1.properties and server2.properties.

server1.properties

 

server2.properties
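The paths below are only examples; any three distinct directories will do (the Kafka default is /tmp/kafka-logs):

```shell
# Point each broker at its own data directory (example paths).
sed -i 's|^log.dirs=.*|log.dirs=/tmp/kafka-logs-1|' config/server1.properties
sed -i 's|^log.dirs=.*|log.dirs=/tmp/kafka-logs-2|' config/server2.properties
```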

3. Start ZooKeeper

Go to the directory where Kafka is installed and enter the command ./bin/zookeeper-server-start.sh config/zookeeper.properties to start ZooKeeper.

4. Start all three Kafka servers one by one as shown below

a) Enter the command: ./bin/kafka-server-start.sh config/server.properties

server.properties

b) Enter the command: ./bin/kafka-server-start.sh config/server1.properties

server1.properties

c) Enter the command: ./bin/kafka-server-start.sh config/server2.properties

server2.properties
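Rather than keeping three terminals open, the same three commands can be run from one terminal with the script's -daemon flag, which starts each broker in the background:

```shell
# Start the three brokers in the background (ZooKeeper must already be up).
./bin/kafka-server-start.sh -daemon config/server.properties
./bin/kafka-server-start.sh -daemon config/server1.properties
./bin/kafka-server-start.sh -daemon config/server2.properties
```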

All three Kafka brokers have now started successfully on a single machine. Now it's time to visualise how the partitions of your topic are managed by these running brokers.

1) Create a Kafka topic

Go to the Kafka directory and enter the command ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic dixit --replication-factor 2 --partitions 2 to create a topic named "dixit" with a replication factor of 2 and 2 partitions, as shown below.

topic creation

2) Describe topic

Go to the Kafka directory and enter the command ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic dixit. You will see the output shown in the image below.

topic details
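For readers who cannot see the screenshot, the describe output looks roughly like this (exact formatting varies by Kafka version; the leader and replica assignments shown here match the values discussed in the points that follow):

```
Topic:dixit  PartitionCount:2  ReplicationFactor:2  Configs:
    Topic: dixit  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
    Topic: dixit  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2,0
```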

The above image shows detailed information about the topic. The following points explain how Kafka manages it.

a) The main thing to focus on is the leader. For partition 0 the leader is 1, and for partition 1 the leader is 2. This means the broker with ID 1 is responsible for storing the first partition of the topic, and the broker with ID 2 is responsible for storing the second partition.

b) The next term is replicas, which shows the broker IDs on which the copies of a partition are kept. As shown in the Replicas column, the copies of the first partition are maintained by the brokers with IDs 1 and 2, and the copies of the second partition by the brokers with IDs 2 and 0.

c) Another term to notice is Isr (in-sync replicas). It contains the list of broker IDs that are in sync with the leader (the main broker for that partition).

Hopefully you now have an idea of how to start 3 Kafka brokers on a single machine. The next part of this blog will show you how to write a Java program to start a Kafka producer. Till then, keep reading about Kafka.

 


About the author

Dixit Khurana