[SOLVED] Accessing Kafka in Docker: A Simple Guide to External and Internal Connectivity
Apache Kafka is a key tool for managing real-time data streams in cloud-native environments, but accessing Kafka from inside and outside Docker containers can be tricky for developers and data engineers. This guide shows simple ways to connect to Kafka, with clear steps and best practices to help us manage Kafka connections better.
In this chapter, we will look at these solutions:
- Part 1 - Configuring Kafka Docker Container for External Access: Here we will learn how to change your Kafka setup to let external connections in.
- Part 2 - Using Docker-Compose for Multi-Container Setup: We will see how to use Docker Compose for a more complex Kafka setup.
- Part 3 - Setting Up Host Networking for Kafka Access: In this part, we will explore how to set up host networking for easier access to your Kafka.
- Part 4 - Accessing Kafka from Another Docker Container: We will find out how to connect different Docker containers to your Kafka service.
- Part 5 - Configuring Firewall Rules for Kafka: We will get tips on how to set up firewall rules to keep your Kafka connections secure.
- Part 6 - Testing Kafka Access with Command Line Tools: Here we will learn some good command-line tools to test your Kafka access and connectivity.
By following this guide, we will be ready to access Kafka easily, both inside and outside Docker. This will help us build strong data streaming in our applications.
Part 1 - Configuring Kafka Docker Container for External Access
To access Kafka from outside a Docker container, we need to configure the Kafka server settings and the Docker container properly. Here is how we can set up our Kafka Docker container for external access.
Docker Run Command: We can use this command to run our Kafka container. This command will expose the necessary ports:
docker run -d --name=kafka \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<YOUR_HOST_IP>:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=<YOUR_ZOOKEEPER_IP>:2181 \
  -p 9092:9092 \
  wurstmeister/kafka:latest
We should replace <YOUR_HOST_IP> with the IP address of our host machine, and <YOUR_ZOOKEEPER_IP> with the Zookeeper IP address.

Kafka Configuration: We must make sure our server.properties file has these settings:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<YOUR_HOST_IP>:9092
This setting lets Kafka listen on all interfaces. It also advertises the correct host IP for clients that connect from outside.
Network Mode: If we want to use the host network, we can run the container in host network mode. This allows the container to share the host’s network stack:
docker run -d --name=kafka \
  --network host \
  -e KAFKA_ZOOKEEPER_CONNECT=<YOUR_ZOOKEEPER_IP>:2181 \
  wurstmeister/kafka:latest
Testing External Access: After we start the Kafka container, we can test the external access. We can use Kafka command-line tools or a Kafka client from our host machine. To produce a message to a topic, we can use this command:
kafka-console-producer.sh --broker-list <YOUR_HOST_IP>:9092 --topic test
Firewall Configuration: We must check that our firewall allows traffic on the Kafka port, which is 9092 by default. We can use this command to allow the traffic:
sudo ufw allow 9092
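Before debugging Kafka itself, it can help to check plain TCP reachability of the broker port from outside. Here is a minimal Python sketch using only the standard library; the host IP below is a placeholder for your own:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your host IP):
# port_open("<YOUR_HOST_IP>", 9092)
```

If this returns False, recheck the port mapping and firewall rules before looking at the Kafka configuration.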
For more details about managing Kafka topics, we can refer to how to create a topic in Kafka and how to delete a topic in Kafka.
Part 2 - Using Docker-Compose for Multi-Container Setup
To set up Apache Kafka with Docker-Compose in a multi-container way, we need to create a docker-compose.yml file. This file will have the right settings for Kafka and Zookeeper, so they can communicate easily.
version: "3.8"
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
    networks:
      - kafka-net
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
      - "9094:9094"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    networks:
      - kafka-net
networks:
  kafka-net:
    driver: bridge

Note that port 9094 must be published as well, because that is the port the OUTSIDE listener advertises to external clients.
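The KAFKA_LISTENERS and KAFKA_ADVERTISED_LISTENERS values above are comma-separated NAME://host:port entries. A small illustrative Python helper (not part of any Kafka API) shows how such a listener string breaks down:

```python
def parse_listeners(value: str) -> dict:
    """Split 'NAME://host:port,NAME2://host2:port2' into {name: (host, port)}."""
    listeners = {}
    for entry in value.split(","):
        name, endpoint = entry.split("://", 1)
        host, port = endpoint.rsplit(":", 1)
        listeners[name] = (host, int(port))
    return listeners

# The advertised listeners from the compose file above:
parse_listeners("INSIDE://kafka:9092,OUTSIDE://localhost:9094")
# → {'INSIDE': ('kafka', 9092), 'OUTSIDE': ('localhost', 9094)}
```

Clients inside the Docker network are given the INSIDE address, while clients connecting from the host get the OUTSIDE address advertised back to them.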
Instructions to Deploy the Setup
Create a Docker-Compose File: Save the YAML settings above in a file called docker-compose.yml.

Start Services: We run this command in the folder where the docker-compose.yml file is:

docker-compose up -d
Verify the Setup: We can check if Kafka is running right by listing the running containers:
docker ps
Access Kafka: External clients on our host machine connect on port 9094, the OUTSIDE listener. If the Kafka command-line tools are installed on the host, we can send messages with:

kafka-console-producer.sh --broker-list localhost:9094 --topic test-topic

We can also run the producer from inside the container:

docker exec -it <kafka_container_id> kafka-console-producer.sh --broker-list localhost:9094 --topic test-topic
This setup helps us use Docker-Compose for a multi-container environment. It makes sure that we can reach Kafka both inside Docker and from outside. For more info on managing Kafka topics, check this guide on creating topics.
Part 3 - Setting Up Host Networking for Kafka Access
To access Kafka that runs in a Docker container from outside, we can set up host networking. This setup lets Kafka listen on the host's network directly, so we can connect without worrying about port mappings. Note that host networking works this way on Linux; Docker Desktop on macOS and Windows does not share the host network stack in the same way.
Steps to Configure Host Networking for Kafka:
Run Kafka with Host Network Mode:
We need to use the --network host option when we start the Kafka container. Here is an example:

docker run -d --name kafka --network host \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=localhost:2181 \
  wurstmeister/kafka
Configure Kafka Properties:
We have to make sure the Kafka properties are set correctly. KAFKA_ADVERTISED_LISTENERS should match the hostname or IP address that clients will use to connect. If we are using localhost, we set it as shown above.

Verify Kafka is Running:
We should check that Kafka is running and reachable on the host network. We can list the running containers with:

docker ps

Testing the Connection:
After Kafka is running, we can test the connection using a Kafka client or command-line tools:

kafka-topics.sh --list --bootstrap-server localhost:9092
By doing these steps, we allow Kafka access over the host network. This makes the connection process easier for applications that run outside Docker. For more Kafka settings, check Kafka Server Configuration.
Part 4 - Accessing Kafka from Another Docker Container
To access Kafka in one Docker container from another Docker container, we need to set up the network correctly. We can use Docker’s default bridge network or make a custom network. Let’s see how we can do this.
Create a Custom Network (Optional):
This step is not required, but it gives us better control and separation.

docker network create kafka-network

Run Kafka and Zookeeper:
When we start Kafka and Zookeeper, we must connect them to the same network.

docker run -d --name zookeeper --network kafka-network \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  wurstmeister/zookeeper

docker run -d --name kafka --network kafka-network \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  -p 9092:9092 \
  wurstmeister/kafka

Run Your Application Container:
We also need to connect our application container to the same network.

docker run -it --network kafka-network --rm your-application-image

Access Kafka from Your Application:
In our application, we use the Kafka container's name (kafka) in the Kafka client settings:

bootstrap.servers=kafka:9092
Example Kafka Producer Code:
Here is a simple Java producer that sends one message:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "kafka:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("your-topic", "key", "value"));
producer.close();
By following these steps, we can easily access Kafka from another Docker container. If you want to know more about how to set up Kafka topics and do other tasks, check out how to create a topic in Kafka.
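The docker run steps above can also be captured in a single compose file so that Kafka, Zookeeper, and our application share one network. Here is a sketch; your-application-image and the BOOTSTRAP_SERVERS variable name are placeholders for our own application:

```yaml
version: "3.8"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    networks:
      - kafka-network
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
    networks:
      - kafka-network
  app:
    image: your-application-image
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
    networks:
      - kafka-network
networks:
  kafka-network:
    driver: bridge
```

With this layout, the application reaches the broker by its service name, kafka:9092, exactly as in the manual setup.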
Part 5 - Configuring Firewall Rules for Kafka
We need to set up firewall rules to access Kafka inside and outside Docker. It is important to let the right traffic go through. This means we allow incoming and outgoing traffic on the ports that Kafka uses.
Steps to Configure Firewall Rules for Kafka:
Identify Kafka Ports: Kafka normally uses these ports:
- Broker Port: 9092 (or the port you set in your Kafka config)
- Zookeeper Port: 2181 (if you use Zookeeper)
Open Ports on the Firewall: We can use the commands below based on the operating system.
For Ubuntu (using UFW):
sudo ufw allow 9092/tcp
sudo ufw allow 2181/tcp
For CentOS (using Firewalld):
sudo firewall-cmd --zone=public --add-port=9092/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2181/tcp --permanent
sudo firewall-cmd --reload
For Windows:
- Open Windows Firewall.
- Click on Advanced settings.
- Create a new Inbound Rule for TCP port 9092.
- Create another Inbound Rule for TCP port 2181.
Check Firewall Status: We need to make sure the rules are correct and the firewall is on.
sudo ufw status                 # For Ubuntu
sudo firewall-cmd --list-all    # For CentOS
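On Ubuntu, the two UFW rules can also be bundled into an application profile so they can be enabled with a single command. Here is a sketch saved as /etc/ufw/applications.d/kafka; the profile name Kafka is our own choice:

```ini
[Kafka]
title=Apache Kafka
description=Kafka broker and Zookeeper ports
ports=9092/tcp|2181/tcp
```

After saving the file, sudo ufw allow Kafka applies both rules at once.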
Testing Connectivity: After we set up the firewall, we should test whether we can connect to our Kafka broker from inside and outside the Docker container. We can use tools like telnet or nc:

telnet [Kafka_IP] 9092
By doing these steps, we can make sure that our Kafka setup is reachable both inside and outside Docker. If we want to learn more about Kafka settings, we can check understanding Apache Kafka or Kafka server configuration.
Part 6 - Testing Kafka Access with Command Line Tools
We can check if we can access Kafka from both inside and outside Docker. We can use the command line tools that Kafka gives us. Here are the simple steps to test Kafka access.
Accessing Kafka from the Host:
First, we need to make sure Kafka is running, and we need the command-line tools available. We will use the kafka-console-producer and kafka-console-consumer commands for testing.

Producing Messages:
docker exec -it <kafka_container_name> kafka-console-producer --broker-list localhost:9092 --topic test-topic
After we run this command, we can type some messages and hit Enter to send them.
Consuming Messages:
docker exec -it <kafka_container_name> kafka-console-consumer --bootstrap-server localhost:9092 --topic test-topic --from-beginning
This command reads messages from the start of test-topic.

Accessing Kafka from Another Docker Container:
If we want to test from another container, we need to connect using that container’s network.
Producing Messages:
docker run -it --network <your_network> --rm confluentinc/cp-kafka:latest kafka-console-producer --broker-list <kafka_container_name>:9092 --topic test-topic
Consuming Messages:
docker run -it --network <your_network> --rm confluentinc/cp-kafka:latest kafka-console-consumer --bootstrap-server <kafka_container_name>:9092 --topic test-topic --from-beginning
Testing Kafka Access from the Host Machine:
If we set up Kafka for external access, we can test it right from our host machine.
Producing Messages:
kafka-console-producer --broker-list <host_ip>:9092 --topic test-topic
Consuming Messages:
kafka-console-consumer --bootstrap-server <host_ip>:9092 --topic test-topic --from-beginning
Remember to change <kafka_container_name> to your actual Kafka container name, and replace <host_ip> with your host's IP address. For more details about configurations, we can look at the Kafka server configuration.
By doing these steps, we can test Kafka access using command line tools in different places.
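When we repeat these checks often, it can help to build the docker exec command line in a small script instead of retyping it. Here is an illustrative Python sketch; the container, broker, and topic names are placeholders we pass in:

```python
import shlex

def console_producer_cmd(container: str, broker: str, topic: str) -> str:
    """Build the docker exec line that starts a console producer in a container."""
    argv = [
        "docker", "exec", "-it", container,
        "kafka-console-producer",
        "--broker-list", broker,
        "--topic", topic,
    ]
    # shlex.join quotes any argument that needs shell escaping
    return shlex.join(argv)

print(console_producer_cmd("kafka", "localhost:9092", "test-topic"))
# → docker exec -it kafka kafka-console-producer --broker-list localhost:9092 --topic test-topic
```

The same pattern works for the consumer command by swapping in kafka-console-consumer and --bootstrap-server.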
Frequently Asked Questions
1. How do we connect to a Kafka broker running in a Docker container from our host machine?
To connect to a Kafka broker in a Docker container from our host machine, we need to set the advertised.listeners property in the Kafka config. This property should point to the host's IP address or name. For more steps, check our guide on how to access Kafka inside and outside Docker.
2. What are the common issues when we access Kafka from a Docker container?
Common issues when we access Kafka from a Docker container are wrong listener settings and network issues. We should make sure our docker-compose file is correct, and that Kafka's advertised listeners allow external access. For more tips on troubleshooting, look at our article on understanding Apache Kafka.
3. Can we use Docker Compose to set up a multi-container Kafka environment?
Yes, we can use Docker Compose to create a multi-container Kafka environment. We can define our Kafka and Zookeeper services in a docker-compose.yml file, which makes it easy to manage and scale. For a full guide, read our section on using Docker-Compose for multi-container setup.
4. How do we configure firewall rules to allow access to Kafka running in Docker?
To allow access to Kafka running in Docker, we need to open the right ports. The default broker port is 9092. This means we must set inbound rules on our host machine and in any network security groups if we are using cloud services. For more details, check our guide on configuring firewall rules for Kafka.
5. What tools can we use to test Kafka access from the command line?
We can use command line tools like kafka-console-producer and kafka-console-consumer to test Kafka access. These tools let us produce and consume messages from the command line, which helps to check our Kafka setup inside and outside Docker. For a full list of commands, please see our section on testing Kafka access with command line tools.