Docker - Working with Kubernetes

Docker and Kubernetes are two of the most important tools in modern software development. They let us create, deploy, and manage applications with ease. Docker - Working with Kubernetes shows how teams can use containerization to make applications scalable and portable while keeping environments consistent across every stage of development. Understanding how these tools work together helps us improve our workflows and boost productivity.

In this chapter, we look at the basics of Docker and Kubernetes. We learn how to set up the environment, create Docker images, and manage deployments in Kubernetes. We also cover scaling applications and configuring persistent storage. Together this gives us a solid understanding of Docker - Working with Kubernetes. For more details, we can check out Kubernetes Architecture and Docker Continuous Integration.

Introduction to Docker and Kubernetes

Docker and Kubernetes are essential tools for building and running modern applications. Docker packages an application together with everything it needs into small units called containers, so the application behaves the same way everywhere. This solves the classic “it works on my machine” problem.

Kubernetes manages these containers when we run many of them. It automates how we deploy, scale, and operate application containers across multiple machines. Used together, Docker and Kubernetes give us a strong system for building, deploying, and managing applications.

Here are some key features of Docker:

  • Containerization: It isolates applications in lightweight containers.
  • Portability: It keeps environments consistent across development, testing, and production.
  • Efficiency: It uses fewer resources than traditional virtual machines.

Now, let’s look at some key features of Kubernetes:

  • Self-healing: It automatically restarts containers that fail.
  • Scaling: It quickly adjusts the number of application instances based on demand.
  • Load balancing: It spreads network traffic evenly across containers.

If you want to learn more about Docker’s architecture, check out Docker Architecture. Learning about Kubernetes Architecture will also help us deploy applications more effectively.

Setting Up Your Environment

To work well with Docker and Kubernetes, we need a properly prepared environment. First, we check that Docker is installed on our machine; we can follow the installation guide at Docker Installation. After Docker is set up, we install Kubernetes.

The easiest way to run Kubernetes locally is with Minikube or Docker Desktop; both include Kubernetes support. Here is a simple setup guide:

  1. Install Minikube:

    • We need to download and install Minikube from the official website.

    • To start Minikube, we will use this command:

      minikube start
  2. Install kubectl:

    • Next, we install kubectl, the command-line tool for working with Kubernetes. On macOS (Intel), we can do this by running (adjust the OS and architecture in the download URL for other platforms):

      curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
      chmod +x ./kubectl
      sudo mv ./kubectl /usr/local/bin/kubectl
  3. Verify Installation:

    • Now we check if Kubernetes is running with this command:

      kubectl cluster-info
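    • We can also list the nodes to confirm the cluster is ready; the Minikube node should report a Ready status:

      kubectl get nodes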

With this setup, we can run Docker containers inside a Kubernetes cluster. If we want to learn more about Kubernetes architecture, we can look at Kubernetes Architecture.

Creating a Docker Image for Your Application

We start by creating a Docker image. This is a key step when using Docker with Kubernetes. An image acts as a blueprint for our application and includes everything needed to run it. To create a Docker image, we usually write a Dockerfile, which tells Docker how to build the image.

Here is a simple example of a Dockerfile for a Node.js application:

# Use the official Node.js image as a parent image
FROM node:14

# Set the working directory
WORKDIR /usr/src/app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application code
COPY . .

# Expose the application port
EXPOSE 3000

# Command to run the application
CMD ["node", "app.js"]

After we create our Dockerfile, we build the image with this command:

docker build -t my-node-app .

This command builds a Docker image named my-node-app. After the build, we can run the image on our local machine to check that it works correctly.
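
For example, we can run it like this, mapping the container’s port 3000 (the port in the EXPOSE line above) to the same port on our machine:

docker run -p 3000:3000 my-node-app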

For more details about managing Docker images, we can check what are Docker images. Once we have a Docker image, we can use it in a Kubernetes deployment, which makes Docker and Kubernetes a strong combination for modern application development.

Understanding Kubernetes Architecture

Kubernetes is an open-source platform for managing containers. It helps us deploy, scale, and operate applications that run in containers. The Kubernetes architecture consists of several key components that work together to create a robust, flexible environment for running Docker containers.

  1. Master Node: This is the control plane of Kubernetes. It manages the cluster and includes:

    • API Server: The front end of the control plane; users, tools, and other components all talk to the cluster through it.
    • Scheduler: Decides which node should run each workload based on available resources.
    • Controller Manager: Runs the controllers that keep the cluster in its desired state.
  2. Worker Nodes: These nodes run the application containers. They include:

    • Kubelet: An agent that communicates with the master node and makes sure containers are running in the desired state.
    • Kube-Proxy: Handles network routing for services so pods can communicate with each other.
    • Container Runtime: The software that actually runs the containers, for example Docker.
  3. Pods: These are the smallest deployable units in Kubernetes. A pod can hold one or more containers.

  4. Services: These group a set of pods and provide a stable way to access them.
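
To make these ideas concrete, here is a minimal Pod manifest; the names and image are placeholders for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx:latest
      ports:
        - containerPort: 80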

If we want to learn more about how these components work together, we can visit Kubernetes Architecture. This architecture lets Kubernetes manage Docker containers reliably, providing high availability and scalability in cloud environments.

Deploying Docker Containers on Kubernetes

Deploying Docker containers on Kubernetes involves a few steps that help our applications run well in a container environment. Kubernetes manages our Docker containers and gives us useful features like scaling and automated rollouts.

  1. Create a Deployment: We use a YAML file to set up a Kubernetes Deployment. Here is an example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-docker-image:latest
              ports:
                - containerPort: 80
  2. Apply the Deployment: We run this command to create the deployment:

    kubectl apply -f deployment.yaml
  3. Expose the Deployment: To let others access our application, we expose it with a Service:

    kubectl expose deployment my-app --type=LoadBalancer --port=80
  4. Check the Status: We need to monitor our deployment and pods:

    kubectl get deployments
    kubectl get pods
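
Once the Service exists, we can look up its external address. On a local cluster such as Minikube, a LoadBalancer Service may stay in the pending state unless we run minikube tunnel in a separate terminal:

kubectl get service my-app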

By following these steps, we can deploy Docker containers on Kubernetes and take advantage of its management features. For more details about Kubernetes architecture, you can check this article.

Managing Pods and Deployments

In Kubernetes, managing Pods and Deployments well is important for keeping our applications stable and scalable. A Pod is the smallest deployable unit in Kubernetes and can hold one or more containers. A Deployment manages Pods declaratively, making sure the current state matches the desired state.

To manage Pods and Deployments, we can follow these steps:

  1. Creating a Deployment: We can use this YAML setup to create a Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-container
              image: my-image:latest

    This configuration creates three replicas of the my-app Deployment.

  2. Updating a Deployment: To update a Deployment, we run the kubectl apply command with the new configuration file. Kubernetes handles the rollout automatically (see the rollout commands after this list).

  3. Scaling Deployments: We can scale our application by changing the number of replicas in our Deployment YAML. We can also use this command:

    kubectl scale deployment my-app --replicas=5
  4. Rolling Back: If an update does not work, we can go back to an earlier version with this command:

    kubectl rollout undo deployment/my-app
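
During an update, we can watch the rollout progress and review past revisions with these commands:

kubectl rollout status deployment/my-app
kubectl rollout history deployment/my-app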

For more information on Kubernetes structure, we can check Kubernetes Architecture. When we manage Pods and Deployments well, our applications can handle change and stay resilient in a dynamic environment.

Scaling Applications with Kubernetes

Scaling applications is one of Kubernetes’ strongest features, and it helps us manage our application’s load easily. Kubernetes supports both vertical scaling (giving pods more resources) and horizontal scaling (changing the number of replicas of our application).

Horizontal Pod Autoscaler (HPA)

The Horizontal Pod Autoscaler automatically adjusts the number of pods in a deployment based on CPU usage or other chosen metrics. To set up HPA, we can use this command:

kubectl autoscale deployment your-deployment-name --cpu-percent=50 --min=1 --max=10

This command keeps the number of pods between 1 and 10 based on CPU usage.
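
The HPA needs CPU metrics to make its decisions, so the Kubernetes Metrics Server must be running in the cluster; on Minikube we can enable it with minikube addons enable metrics-server. We can then watch the autoscaler’s current state:

kubectl get hpa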

Manual Scaling

We can also scale our application manually using this command:

kubectl scale deployment your-deployment-name --replicas=5

Best Practices for Scaling

  • Load Testing: Load testing helps us find the right number of replicas for expected traffic.
  • Monitoring: Monitoring tools help us track application performance and resource use.
  • Resource Requests and Limits: Defining resource requests and limits in our pod specs helps the scheduler place pods efficiently (a sketch follows this list).
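
As a sketch of that last point, here is how requests and limits might look inside a container spec; the values are illustrative and should be tuned based on load testing:

resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"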

For more information about Kubernetes architecture and how it helps with scaling, check out Kubernetes architecture.

Using Kubernetes Services for Networking

Kubernetes Services are important for managing networking in a Kubernetes cluster. They provide stable IP addresses and DNS names for applications, which helps different components communicate reliably. We need to know how to use Kubernetes Services well when deploying Docker containers on Kubernetes.

There are different types of Kubernetes Services:

  • ClusterIP: The default service type. It exposes the service on a cluster-internal IP, so it is only reachable from inside the cluster.
  • NodePort: Exposes the service on each node’s IP at a static port, which allows external traffic to reach the service.
  • LoadBalancer: Creates an external load balancer on cloud providers that support it and assigns a public IP to the service.
  • ExternalName: Maps the service to the DNS name given in the externalName field.

To make a basic Kubernetes Service, we can use the following YAML configuration:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30000
  selector:
    app: my-app

This configuration maps a Docker container listening on port 8080 to an external NodePort (30000). For more details on Kubernetes networking, we can check out Docker Networking. Understanding Kubernetes Services is essential for managing Docker containers in a Kubernetes environment.
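
Assuming a Minikube cluster, we can then reach the service from our machine like this (minikube ip prints the node’s IP address):

curl http://$(minikube ip):30000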

Configuring Persistent Storage in Kubernetes

We set up persistent storage in Kubernetes for applications that must keep their data after a container stops. Kubernetes manages storage through Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).

  1. Persistent Volume (PV): A PV is a piece of storage in the cluster, provisioned by an administrator or dynamically through a Storage Class. It acts as the cluster’s storage resource.

  2. Persistent Volume Claim (PVC): A PVC is a user’s request for storage. It specifies the size and access modes required, so users can request exactly the storage they need.

Example Configuration:

Here is a simple YAML example to show how to set up a PV and PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-pv

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

In this example, we create a PV with 10Gi of storage, and the PVC requests 5Gi. Once the PVC is bound to the PV, the application can use the storage, and the data persists beyond the life of any single pod.
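
To actually use the claim, a pod mounts it as a volume. Here is a minimal sketch; the pod name, image, and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: my-app
      image: my-image:latest
      volumeMounts:
        - name: my-storage
          mountPath: /data
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc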

For more details on Docker and Kubernetes architecture and Docker volumes, please check these links. Configuring persistent storage is essential for keeping data safe in our Kubernetes applications.

Monitoring and Logging in Kubernetes

We need good monitoring and logging to keep our applications healthy and performing well in Kubernetes. Tools like Prometheus help us monitor our apps, while Fluentd or the ELK stack handle logging. These tools give us useful insight into how our apps behave and how the system performs.

Monitoring with Prometheus:

  • Installation: We can install Prometheus using Helm or set it up as a Kubernetes deployment.
  • Configuration: We create a ServiceMonitor to tell Prometheus which services to scrape metrics from.
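
For the installation step, one common approach (assuming the community Helm chart) looks like this:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack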

Here is a simple ServiceMonitor configuration:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics

Logging with ELK Stack:

  • Elasticsearch: This stores logs and lets us search them.
  • Logstash: It gathers logs from many places.
  • Kibana: This helps us see log data in a visual way.

We can deploy Fluentd as a DaemonSet so that it collects logs from every node, and configure it to send the logs to Elasticsearch for storage and analysis.
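
Here is a trimmed sketch of such a DaemonSet, assuming the upstream fluent/fluentd image; a real setup also needs an output configuration pointing at Elasticsearch:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log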

For more details about logging, check out Docker Logging, where we can learn how to connect these tools for better observability. Good monitoring and logging in Kubernetes help keep our applications reliable.

Docker - Working with Kubernetes - Full Example

In this section, we walk through a complete example of using Docker with Kubernetes to deploy a basic web app. We create a Docker image, deploy it on a Kubernetes cluster, and expose it to outside traffic.

  1. Create a Docker Image

    First, we make a Dockerfile for a simple Node.js app:

    # Use the official Node.js image
    FROM node:14
    
    # Set the working directory
    WORKDIR /usr/src/app
    
    # Copy package.json and install dependencies
    COPY package*.json ./
    RUN npm install
    
    # Copy the application code
    COPY . .
    
    # Expose the application port
    EXPOSE 3000
    
    # Start the application
    CMD ["node", "app.js"]

    Now, we build the Docker image:

    docker build -t my-node-app .
  2. Deploy on Kubernetes

    Next, we create a Kubernetes Deployment configuration in deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-node-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-node-app
      template:
        metadata:
          labels:
            app: my-node-app
        spec:
          containers:
            - name: my-node-app
              image: my-node-app:latest
              ports:
                - containerPort: 3000

    Then, we deploy the application (if the image was built locally, see the note after these steps):

    kubectl apply -f deployment.yaml
  3. Expose the Deployment

    To access the app from outside, we create a Service:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-node-app-service
    spec:
      type: LoadBalancer
      ports:
        - port: 80
          targetPort: 3000
      selector:
        app: my-node-app

    Finally, we apply the service configuration:

    kubectl apply -f service.yaml
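
A note for local clusters: the Deployment references my-node-app:latest, but a locally built image is not automatically visible inside the cluster. Assuming Minikube, one way to make it available is to load it into the cluster and set imagePullPolicy: IfNotPresent on the container:

minikube image load my-node-app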

This complete example shows how we can use Docker with Kubernetes. For more information about Kubernetes architecture and Docker networking, you can check the links given.

Conclusion

In this article on Docker - Working with Kubernetes, we covered the key ideas: Kubernetes architecture, building Docker images, and deploying applications.

Understanding how to manage pods, scale applications, and configure persistent storage strengthens our ability to build robust applications. With this knowledge, we can integrate Docker with Kubernetes and work with containers more effectively.

For more information, we can check our guides on Docker - Kubernetes Architecture and Docker Logging.
