How Can You Connect to Kafka Running in Docker?

To connect to Kafka in Docker, we need to make sure our Docker container is configured correctly for networking, and we need to use the right Kafka bootstrap server address. This usually means using the container's hostname or IP address and the port we mapped to the host. If we follow the right steps, we can connect to our Kafka easily even when it runs in Docker.

In this article, we will look at different ways to connect to Kafka in Docker. We will talk about how to set up Docker networking for Kafka. We will also see how to use the Kafka CLI to connect, how to connect Java applications, how to use Spring Boot, and how to access Kafka from external tools. The solutions we will cover include:

  • How to Connect to Kafka Running in Docker
  • Configuring Docker Networking for Kafka Connections
  • Using Kafka CLI to Connect to Dockerized Kafka
  • Connecting Java Applications to Kafka in Docker
  • Using Spring Boot to Connect to Kafka Running in Docker
  • Accessing Kafka from External Tools while Running in Docker
  • Frequently Asked Questions

Configuring Docker Networking for Kafka Connections

To connect to Kafka running in Docker, we need to set up Docker networking correctly. Kafka listens on one or more ports, and the network must let the Kafka broker talk to its clients, whether those clients run in other containers or outside of Docker.

Docker Network Setup

  1. Create a Docker Network: We should create a dedicated Docker network for our Kafka and Zookeeper containers.

    docker network create kafka-net
  2. Run Zookeeper and Kafka: When we start our Kafka and Zookeeper containers, we must connect them to the network we created.

    Run Zookeeper:

    docker run -d --name zookeeper --network kafka-net \
      -e ZOOKEEPER_CLIENT_PORT=2181 \
      wurstmeister/zookeeper

    Run Kafka:

    docker run -d --name kafka --network kafka-net \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
      -p 9092:9092 \
      wurstmeister/kafka

Exposing Ports

  • Host Access: To let outside applications connect to Kafka, we need to map the Kafka port to the host. We do this with the -p 9092:9092 option in the command above.

  • Advertised Listeners: We use KAFKA_ADVERTISED_LISTENERS to tell clients how they should connect. For outside access, we can set it to our host's IP or domain:

      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<host_ip>:9092

Verifying Docker Network

  • Inspect the Network: To check that our containers are properly connected to the network, we run:

      docker network inspect kafka-net

Accessing from Other Containers

  • For apps or clients in other Docker containers, we can use the service name in the Docker network. For example, we can connect using kafka:9092.
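The choice of bootstrap address depends on where the client runs. This small helper sketches that decision for the docker run setup above (the class name BootstrapConfig and the IN_DOCKER_NETWORK variable are our own illustrations, not part of any Kafka API):

```java
import java.util.Properties;

public class BootstrapConfig {
    // Picks the bootstrap address based on where the client runs:
    // containers on kafka-net resolve the service name "kafka", while
    // processes on the host use the port mapped with -p 9092:9092.
    static String bootstrapServers(boolean insideDockerNetwork) {
        return insideDockerNetwork ? "kafka:9092" : "localhost:9092";
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // A real client might detect its environment via an env var we define.
        boolean inside = "true".equals(System.getenv("IN_DOCKER_NETWORK"));
        props.put("bootstrap.servers", bootstrapServers(inside));
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```

The same Properties object can then be passed to a KafkaProducer or KafkaConsumer.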

Using Bridge Network Mode

If we are using the default bridge network mode, we might need to set the advertised listeners to our host machine's IP address so that external connections work correctly.
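One way to find the host IP to plug into the advertised listener is to ask the JDK for the machine's outbound address. This is only an illustrative sketch (the HostIp class is our own, not part of Kafka); the value it prints would be substituted into KAFKA_ADVERTISED_LISTENERS:

```java
import java.net.DatagramSocket;
import java.net.InetAddress;

public class HostIp {
    // Discovers the host's outbound IPv4 address by "connecting" a UDP
    // socket toward a public address; no packet is actually sent.
    static String outboundIp() {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.connect(InetAddress.getByName("8.8.8.8"), 53);
            return socket.getLocalAddress().getHostAddress();
        } catch (Exception e) {
            // Fall back to loopback when no outbound route is available.
            return "127.0.0.1";
        }
    }

    public static void main(String[] args) {
        System.out.println("PLAINTEXT://" + outboundIp() + ":9092");
    }
}
```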

By setting up Docker networking correctly for our Kafka, we can make sure that Kafka and client apps, whether they are in containers or outside, can talk to each other easily.

Using Kafka CLI to Connect to Dockerized Kafka

To connect to a Kafka instance running in Docker using the Kafka Command Line Interface (CLI), we can follow these steps:

  1. Run Kafka and Zookeeper in Docker: First, we need to make sure that Kafka and Zookeeper are running in Docker. We can use this docker-compose.yml file:

    version: '2'
    
    services:
      zookeeper:
        image: wurstmeister/zookeeper:3.4.6
        ports:
          - "2181:2181"
    
      kafka:
        image: wurstmeister/kafka:latest
        ports:
          - "9092:9092"
        environment:
          KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
          KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
          KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

    We run it with this command:

    docker-compose up -d
  2. Access Kafka CLI: We can access the Kafka CLI by running a command inside the Kafka container. Use this command to open a bash shell in the running Kafka container:

    docker exec -it <kafka_container_name> /bin/bash

    We need to replace <kafka_container_name> with the name of our Kafka container. It is usually kafka if we use the above docker-compose.yml.

  3. Produce Messages: To produce messages to a Kafka topic, we use the kafka-console-producer.sh script. For example, to send messages to a topic named test, we run:

    kafka-console-producer.sh --broker-list localhost:9092 --topic test

    Now we can type our messages and press Enter to send them.

  4. Consume Messages: To consume messages from the same topic, we use the kafka-console-consumer.sh script. To read messages from the test topic, we run:

    kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
  5. Check Available Topics: We can list all available topics with this command:

    kafka-topics.sh --list --bootstrap-server localhost:9092

These steps help us connect to Kafka running in Docker using the CLI. We can produce and consume messages easily. If we want more information on Docker networking, we can refer to how Docker networking works for multi-container applications.

Connecting Java Applications to Kafka in Docker

To connect Java applications to Kafka running in Docker, we must make sure the Kafka broker can be reached from our Java app. Here is a simple guide to help us do this.

  1. Docker Compose Setup: We can use Docker Compose to run Kafka and Zookeeper. Below is a sample docker-compose.yml file. Note that each listener needs its own port, and the OUTSIDE port 9094 is the one we map to the host:

    version: '3.8'
    
    services:
      zookeeper:
        image: wurstmeister/zookeeper:3.4.6
        ports:
          - "2181:2181"
    
      kafka:
        image: wurstmeister/kafka:latest
        ports:
          - "9094:9094"
        environment:
          KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9094
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
          KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9094
          KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
        depends_on:
          - zookeeper
  2. Run Docker Compose: We run this command to start our Kafka and Zookeeper services.

    docker-compose up -d
  3. Java Dependencies: We should add the needed dependencies to our pom.xml if we use Maven. We can include the Kafka client library like this:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>3.4.0</version>
    </dependency>
  4. Producer Example: Here is a simple Java code for a producer to send messages to Kafka:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    
    import java.util.Properties;
    
    public class KafkaJavaProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9094");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.close();
        }
    }
  5. Consumer Example: Here is a simple Java code for a consumer to read messages from Kafka:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    
    public class KafkaJavaConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"));
    
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(record -> {
                    System.out.printf("Consumed message: %s%n", record.value());
                });
            }
        }
    }
  6. Networking Considerations: We need to ensure our Java application can reach the Kafka broker at its advertised listener address, which is localhost:9094 in this setup. If our Java application runs in another Docker container instead, it must join the same Docker network as the Kafka container and connect through the INSIDE listener at kafka:9092.

  7. Testing the Connection: After we run the producer and consumer code, we should see messages sent and received if everything is set up correctly.

For more information about Docker and how it works, we can check what is Docker and why should you use it.

Using Spring Boot to Connect to Kafka Running in Docker

We can connect a Spring Boot application to Kafka that runs in Docker by following these steps:

  1. Add Dependencies: We need to add these dependencies in our pom.xml if we are using Maven:

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
  2. Configure Application Properties: In our application.properties file, we must set the Kafka bootstrap server. If Kafka runs on Docker with default settings, we can use:

    spring.kafka.bootstrap-servers=localhost:9092
    spring.kafka.consumer.group-id=my-group
    spring.kafka.consumer.auto-offset-reset=earliest
  3. Create Kafka Producer Configuration: We will define a configuration class for the Kafka producer:

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.core.ProducerFactory;
    
    import java.util.HashMap;
    import java.util.Map;
    
    @Configuration
    public class KafkaProducerConfig {
        @Bean
        public ProducerFactory<String, String> producerFactory() {
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            return new DefaultKafkaProducerFactory<>(configProps);
        }
    
        @Bean
        public KafkaTemplate<String, String> kafkaTemplate() {
            return new KafkaTemplate<>(producerFactory());
        }
    }
  4. Create a Kafka Consumer: We will define a Kafka consumer configuration:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.EnableKafka;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
    
    import java.util.HashMap;
    import java.util.Map;
    
    @Configuration
    @EnableKafka
    public class KafkaConsumerConfig {
        @Bean
        public ConsumerFactory<String, String> consumerFactory() {
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            configProps.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
            configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            return new DefaultKafkaConsumerFactory<>(configProps);
        }
    
        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            return factory;
        }
    }
  5. Sending Messages: We can use the KafkaTemplate to send messages:

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;
    
    @Service
    public class KafkaProducerService {
        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;
    
        public void sendMessage(String topic, String message) {
            kafkaTemplate.send(topic, message);
        }
    }
  6. Consuming Messages: We create a listener to consume messages from Kafka:

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;
    
    @Component
    public class KafkaConsumerService {
        @KafkaListener(topics = "your_topic_name", groupId = "my-group")
        public void listen(String message) {
            System.out.println("Received Message: " + message);
        }
    }
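Note that spring.kafka.bootstrap-servers=localhost:9092 works when the Spring Boot application runs on the Docker host. If the application itself runs as a container on the same Docker network, we would point at the Kafka service name instead. A sketch, assuming the compose service is named kafka and an in-network listener on port 9092:

    spring.kafka.bootstrap-servers=kafka:9092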

Now we can run our Spring Boot application, and it will connect to Kafka running in Docker, as long as Docker is running and Kafka is up and reachable. For more details about Docker setup, we can check How to Connect Docker Containers to Different Networks.

Accessing Kafka from External Tools while Running in Docker

To access Kafka in Docker from outside tools, we need to make sure that Kafka is set up to allow external connections. We also need to expose the right ports. Here is how we can do it:

  1. Docker Run Command: When we start our Kafka container, we must expose the needed ports. Kafka usually runs on port 9092. We can use this command to run Kafka in Docker:

    docker run -d --name kafka \
    -p 9092:9092 \
    -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='PLAINTEXT:PLAINTEXT' \
    -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://<YOUR_PUBLIC_IP>:9092' \
    -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' \
    wurstmeister/kafka:latest
  2. Update Your Kafka Configuration: The environment variables above correspond to these properties in Kafka's server.properties file, which we would edit if we managed the broker configuration directly:

    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://<YOUR_PUBLIC_IP>:9092

    Here, we must change <YOUR_PUBLIC_IP> to the real IP address of our Docker host.

  3. Accessing from External Tools: Now we can access Kafka using different tools like:

    • Kafka CLI: We can run this command to send or get messages:

      kafka-console-producer --broker-list <YOUR_PUBLIC_IP>:9092 --topic test
    • Kafka GUI Tools: We can use tools like Kafka Tool, Confluent Control Center, or Kafdrop. We just need to give <YOUR_PUBLIC_IP>:9092 as the bootstrap server.

  4. Firewall and Security Groups: We should check that our firewall or any security groups (if we use a cloud service) allow incoming connections on port 9092.

  5. Testing Connection: We can use this command to check the connection from an outside tool:

    kafka-console-consumer --bootstrap-server <YOUR_PUBLIC_IP>:9092 --topic test --from-beginning

By following these steps, we can access our Kafka instance running in Docker from external tools. If we need more help with Docker setup or networking, we can look at resources like this Docker networking guide.
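Before debugging at the Kafka level, a plain TCP probe can confirm that the port mapping and firewall actually allow a connection. This small sketch (our own PortCheck class, using only the JDK) does not speak the Kafka protocol; it only verifies that something accepts connections on the broker port:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Attempts a raw TCP connection to host:port within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        System.out.println(host + ":9092 open? " + isReachable(host, 9092, 3000));
    }
}
```

If this prints false, the problem is networking (port mapping, firewall, or security groups) rather than the Kafka configuration itself.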

Frequently Asked Questions

1. How do we connect to Kafka running in Docker?

To connect to Kafka in Docker, we need to make sure our Docker container is set up right for networking. We can connect using the Kafka command line interface (CLI) from another container or from our local machine. This depends on how we have set it up. Make sure the right ports are open. The default port is 9092. For detailed steps, we can check our guide on how to connect Docker containers to different networks.

2. What are the best practices for configuring Docker networking for Kafka?

When we configure Docker networking for Kafka, it is important to use a bridge network. This helps with keeping our containers separate and allows them to talk to each other. This setup lets Kafka brokers and clients communicate well. Also, we should set the advertised listeners in the Kafka configuration. This helps external clients connect properly. For more information on Docker networking, we can read our article on how Docker networking works for multi-container applications.

3. Can we use the Kafka CLI to manage Kafka in Docker?

Yes, we can use the Kafka CLI to manage Kafka when it runs in Docker. We can run CLI commands directly from the Kafka container. Also, we can use Docker’s exec command to run CLI commands against the running Kafka instance. This way, we can create topics, send messages, and manage consumer groups easily. For more on using the CLI, we can check our guide on how to work with Docker containers using the Docker CLI.

4. How can we integrate Java applications with Dockerized Kafka?

To connect Java applications to Kafka in Docker, we need to make sure our application can reach the Kafka broker’s advertised listeners. We should use the Kafka client libraries in our Java application and set the bootstrap servers properly. Make sure the right ports are open and can be reached from our application. For more details on connecting applications to Docker services, we can read our article on how to use Docker Compose to connect services.

5. What tools can we use to access Kafka from outside Docker?

To access Kafka from outside tools while it runs in Docker, we can use GUI clients like Kafka Tool, Confluent Control Center, or any HTTP-based management tools. We need to make sure the Kafka ports are open and our Docker networking settings allow outside connections. For more help, we can look at our resource on how to expose Docker container ports to the host.

These FAQs give a good overview of connecting to Kafka in Docker and cover common issues we might face. For more reading on Docker and what it can do, we can explore more resources on what is Docker and why should you use it and how to install Docker on different operating systems.