
[SOLVED] A Simple Guide to Monitoring Kubernetes Pod CPU and Memory Use

In Kubernetes, monitoring pod CPU and memory use is very important. It helps us keep performance good, manage our resources well, and stop apps from running out of resources. When we know how to check these numbers, we can make sure our apps run smoothly and keep our Kubernetes cluster healthy. In this chapter, we will look at different ways to check Kubernetes pod CPU and memory use. Here are the ways we will cover:

  • Solution 1: Using kubectl top command
  • Solution 2: Querying Metrics Server for Real-Time Metrics
  • Solution 3: Accessing Resource Metrics via Prometheus
  • Solution 4: Using Grafana to Visualize Resource Use
  • Solution 5: Setting Resource Requests and Limits in Pod Specs
  • Solution 6: Monitoring with Kubernetes Dashboard

By the end of this chapter, we will understand how to monitor resource use in our Kubernetes setup. This will help our apps stay efficient and responsive. If you want to read more on related topics, you can look at our articles on how to handle Kubernetes pod warning messages and understanding Kubernetes resource limits.

Solution 1 - Using kubectl top command

To check how much CPU and memory Kubernetes pods are using, we can use the kubectl top command. This command gives us real-time resource usage for pods and for nodes in the cluster.

Prerequisites

First, we need to make sure that the Metrics Server is running in our cluster. The kubectl top command needs the Metrics Server to get the resource data. If we do not have the Metrics Server installed, we can follow the Metrics Server installation guide to set it up.

Checking Pod Utilization

To see the current CPU and memory usage for all pods in a specific namespace or all namespaces, we can run this command:

# For all pods in the default namespace
kubectl top pods

# For all pods in a specific namespace
kubectl top pods -n <namespace>

# For all pods in all namespaces
kubectl top pods --all-namespaces

Output Explanation

The output will show a table with these columns:

  • NAME: The name of the pod.
  • CPU(cores): The current CPU usage, shown in millicores (for example, 50m means 0.05 cores).
  • MEMORY(bytes): The current memory usage, shown with binary suffixes (for example, 128Mi).

Here is an example output:

NAME                      CPU(cores)   MEMORY(bytes)
nginx-5c7b8c8b46-abcdef   50m          128Mi
mysql-7f8c6c6c6f-123456   100m         256Mi

Filtering and Sorting

We can also sort the output. Newer kubectl versions support sorting directly with the --sort-by flag, or we can pipe the output through regular shell commands:

# Sort by CPU using kubectl itself
kubectl top pods --all-namespaces --sort-by=cpu

# Or with shell tools; with --all-namespaces, CPU is the third column
kubectl top pods --all-namespaces | tail -n +2 | sort -k3 -h -r
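As a concrete illustration of shell-side sorting, here is a self-contained sketch that parses a captured kubectl top pods snapshot (reusing the sample output above) and orders pods by CPU; it runs without a live cluster:

```shell
# Sample `kubectl top pods` output captured into a variable so the sketch
# runs without a live cluster; pod names are the examples from above.
sample='NAME                      CPU(cores)   MEMORY(bytes)
nginx-5c7b8c8b46-abcdef   50m          128Mi
mysql-7f8c6c6c6f-123456   100m         256Mi'

# Skip the header, strip the "m" (millicore) suffix so the CPU column sorts
# numerically, then list pods from highest to lowest CPU usage.
sorted=$(echo "$sample" | awk 'NR > 1 { cpu = $2; sub(/m$/, "", cpu); print $1, cpu }' | sort -k2 -n -r)
echo "$sorted"
```

The same idea extends to sorting by memory by pointing awk and sort at the third column instead.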

Troubleshooting

If we see problems with the kubectl top command, we should check that the Metrics Server is running and set up correctly. We can check its status by running:

kubectl get deployment metrics-server -n kube-system

If the Metrics Server is not running, we can look at the Metrics Server troubleshooting guide for more help.

Using the kubectl top command is a quick and easy way to see how much CPU and memory our Kubernetes pods are using. This helps us manage resources better.

Solution 2 - Querying Metrics Server for Real-Time Metrics

To check the CPU and memory use of Kubernetes pods, we can query the Metrics Server. It gives us resource usage data for pods and nodes in real-time. The Metrics Server collects data from the kubelet on each node. Then it shows this data through the Kubernetes API.

Prerequisites

  1. Metrics Server Installation: First, we need to make sure that the Metrics Server is installed in our cluster. We can install it using this command:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  2. Permissions: Next, our user needs RBAC permission to read resources in the metrics.k8s.io API group.

Querying Metrics

After the Metrics Server is running, we can use the kubectl command to get the real-time metrics of our pods. Here is how we can do it:

  1. Check Pod Metrics: To see the CPU and memory usage of all pods in the current namespace, we run this command:

    kubectl top pods

    This command will give us output like this:

    NAME          CPU(cores)   MEMORY(bytes)
    my-pod-1     100m         200Mi
    my-pod-2     200m         300Mi
  2. Check Specific Pod Metrics: If we want to see the metrics for a specific pod, we can give the pod name:

    kubectl top pod my-pod-1
  3. Check Node Metrics: Also, if we want to see the resource use of nodes, we can use this command:

    kubectl top nodes

    This will show us CPU and memory usage for each node in the cluster:

    NAME            CPU(cores)   MEMORY(bytes)
    node-1         250m         500Mi
    node-2         300m         600Mi
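The kubectl top output comes from the Metrics Server's aggregated API, which we can also query directly, for example with kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/my-pod-1. The sketch below parses a sample response (the pod and container names are hypothetical) so it runs without a live cluster:

```shell
# Against a live cluster, the raw metrics for one pod come from:
#   kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/my-pod-1
# Here we parse a sample PodMetrics response instead, so this runs anywhere.
response='{"kind":"PodMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"my-pod-1","namespace":"default"},"containers":[{"name":"app","usage":{"cpu":"100m","memory":"200Mi"}}]}'

# Pull the CPU and memory figures of the first container out of the JSON.
cpu=$(echo "$response" | grep -o '"cpu":"[^"]*"' | head -n 1 | cut -d'"' -f4)
mem=$(echo "$response" | grep -o '"memory":"[^"]*"' | head -n 1 | cut -d'"' -f4)
echo "cpu=$cpu mem=$mem"
```

For anything beyond a quick check, a proper JSON parser like jq is a better fit than grep, but the endpoint and response shape are the same.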

Troubleshooting

If we have problems with the Metrics Server, we can check its logs to find the issue:

kubectl logs -n kube-system deployment/metrics-server

This streams logs from the Metrics Server deployment. We can also target a single pod by passing its full name, which we can find with kubectl get pods -n kube-system.

Additional Resources

For more information on using metrics in Kubernetes, we can check how to check Kubernetes pod metrics. This will help us understand how to use metrics better for managing resources in our Kubernetes environment.

Solution 3 - Accessing Resource Metrics via Prometheus

Prometheus is a strong tool for monitoring and alerting. We can use it with our Kubernetes cluster to collect and see metrics. This includes how much CPU and memory our pods are using. To access resource metrics through Prometheus, we follow these steps:

Step 1: Install Prometheus on your Kubernetes Cluster

We can deploy Prometheus using the Prometheus Operator or the kube-prometheus-stack Helm chart. Here is how we install it with Helm:

  1. Add the Helm repository:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
  2. Install the kube-prometheus-stack:

    helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace

This installs Prometheus along with related monitoring tools, including Grafana and Alertmanager.

Step 2: Expose Prometheus Server

To see the Prometheus UI, we need to expose the Prometheus server. We can do this by port-forwarding:

kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n monitoring 9090

After we run this command, we can access the Prometheus dashboard at http://localhost:9090 in our web browser.

Step 3: Querying Metrics

Once we access the Prometheus dashboard, we can query metrics for CPU and memory use. Here are some useful queries:

  • CPU Utilization:

    sum(rate(container_cpu_usage_seconds_total{pod=~"your-pod-name.*"}[5m])) by (pod)
  • Memory Utilization:

    sum(container_memory_usage_bytes{pod=~"your-pod-name.*"}) by (pod)

We need to replace your-pod-name with the real name of our pod, or use a regex pattern to match many pods. For memory, container_memory_working_set_bytes is often preferred over container_memory_usage_bytes because it excludes reclaimable page cache.
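Once the port-forward from Step 2 is active, the same queries can be run against the Prometheus HTTP API with curl. The sketch below parses a sample API response (the pod name and value are made up) so it runs without a live server:

```shell
# Against a live server, a query goes through the HTTP API like this:
#   curl -G http://localhost:9090/api/v1/query \
#     --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{pod=~"your-pod-name.*"}[5m])) by (pod)'
# Here we parse a sample response instead, so this runs anywhere.
result='{"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod":"my-app-1"},"value":[1700000000,"0.25"]}]}}'

# Extract the pod label and its CPU value (in cores) from the first series.
pod=$(echo "$result" | grep -o '"pod":"[^"]*"' | head -n 1 | cut -d'"' -f4)
value=$(echo "$result" | grep -o '"value":\[[^]]*\]' | head -n 1 | cut -d'"' -f4)
echo "$pod uses $value cores"
```

This is handy for scripting checks outside of Grafana; for real tooling, a JSON parser like jq is more robust than grep.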

Step 4: Visualizing Metrics

Prometheus has a built-in graph interface, but for richer visualizations we can use Grafana. If we installed the kube-prometheus-stack, Grafana is already included.

  1. Access Grafana:

    We can port-forward the Grafana service:

    kubectl port-forward svc/prometheus-grafana -n monitoring 3000:80

    We can access Grafana at http://localhost:3000. The default login is admin/prom-operator.

  2. Create Dashboards:

    We can create dashboards to see CPU and memory metrics from the Prometheus queries we used. Grafana helps us create panels to show these metrics in different ways like graphs or gauges.

Step 5: Set Up Alerts (Optional)

We can also define alerts in Prometheus so we get notified when CPU or memory use goes over certain limits. Here is an example alert rule for high CPU use:

groups:
  - name: pod-alerts
    rules:
      - alert: HighCpuUsage
        expr: sum(rate(container_cpu_usage_seconds_total{pod=~"your-pod-name.*"}[5m])) by (pod) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage detected in pod {{ $labels.pod }}"
          description: "CPU usage is above 0.8 cores (80% of one CPU) for more than 5 minutes."

We can connect this with Alertmanager to send notifications by email, Slack, or other ways.
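With the kube-prometheus-stack, rules like the one above are usually delivered as a PrometheusRule custom resource instead of a raw rules file. A minimal sketch: the metadata name and namespace are our own choices, and the release: prometheus label assumes the Helm release name used in the install step, which is what the default rule selector matches.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-alerts
  namespace: monitoring
  labels:
    release: prometheus   # assumed Helm release name from the install step
spec:
  groups:
    - name: pod-alerts
      rules:
        - alert: HighCpuUsage
          expr: sum(rate(container_cpu_usage_seconds_total{pod=~"your-pod-name.*"}[5m])) by (pod) > 0.8
          for: 5m
          labels:
            severity: warning
```

Applying this with kubectl apply lets the Prometheus Operator pick up the rule without restarting Prometheus.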

By using Prometheus, we can monitor our Kubernetes pod CPU and memory use well. We gain insights into how we use resources and how our performance is. For more details on Kubernetes monitoring, we can check the Kubernetes monitoring documentation.

Solution 4 - Using Grafana to Visualize Resource Utilization

Grafana is a powerful open-source tool for visualization and monitoring. It helps us see how much CPU and memory our Kubernetes pods are using. By connecting Grafana to a data source like Prometheus, we can build helpful dashboards that show real-time performance data for our pods.

Prerequisites

  1. Kubernetes Cluster: We need a running Kubernetes cluster.
  2. Prometheus: We must install Prometheus in our cluster to gather metrics. We can use Helm or do it manually. For Helm installation, look at the Helm documentation.
  3. Grafana: We should deploy Grafana in our Kubernetes cluster.

Step-by-Step Guide

  1. Deploy Prometheus: If we have not installed Prometheus yet, we can use this Helm command:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/kube-prometheus-stack

    This command will set up Prometheus with all needed settings.

  2. Deploy Grafana: We can install Grafana with Helm like this:

    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    helm install grafana grafana/grafana

    After we install it, we can access Grafana by using port-forwarding:

    kubectl port-forward service/grafana 3000:80

    Then we go to http://localhost:3000 and log in as the admin user. The Grafana Helm chart generates a random admin password, which we can read from the grafana secret:

    kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode

  3. Add Prometheus as a Data Source:

    • In the Grafana user interface, go to Configuration → Data Sources.
    • Click on Add data source and pick Prometheus.
    • Set the URL to your Prometheus service. Usually, it is http://prometheus-kube-prometheus-prometheus:9090.
    • Click Save & Test to check if Grafana can connect to Prometheus.
  4. Create Dashboards:

    • Click on the + icon in the left sidebar and choose Dashboard.

    • Add a new panel by clicking on Add new panel.

    • In the panel editor, put a Prometheus query to get CPU or memory usage. For example:

      • CPU Usage:

        sum(rate(container_cpu_usage_seconds_total{job="kubelet", cluster="", image!=""}[5m])) by (pod)
      • Memory Usage:

        sum(container_memory_usage_bytes{job="kubelet", cluster="", image!=""}) by (pod)
    • We can change the type of visualization, like graphs or tables, and set the time range and refresh rate as we want.

  5. Save the Dashboard: After we have set up the panels, we click on Save to keep our dashboard for later.
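Instead of adding the data source through the UI, Grafana can also pick it up from a provisioning file. A minimal sketch, assuming the kube-prometheus-stack service name from Solution 3; the file path is Grafana's default provisioning directory:

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-kube-prometheus-prometheus:9090
    isDefault: true
```

Provisioning keeps the data source configuration in version control, so a redeployed Grafana comes up already connected to Prometheus.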

Monitoring and Alerts

Grafana also lets us set up alerts based on the data we collect. We can make alerting rules in our dashboards. This way, we will get notified when resource use goes above certain limits.

Using Grafana helps us visualize Kubernetes pod CPU and memory usage. It gives us a clear and interactive way to check our workloads. If we want to learn more about monitoring, we can read this resource on monitoring with Kubernetes Dashboard.

Solution 5 - Setting Resource Requests and Limits in Pod Specs

To manage CPU and memory usage in Kubernetes, we need to set resource requests and limits in our Pod specs. This helps containers get the resources they need to run well, and it stops any single container from using too many resources, which can cause slow performance or crashes elsewhere in the cluster.

Setting Resource Requests and Limits:

  1. Understanding Resource Requests and Limits:

    • Requests tell us the minimum amount of CPU and memory a container needs. Kubernetes uses this to place the container on a node that has enough resources.
    • Limits set the maximum amount of CPU and memory a container can use. If a container exceeds its CPU limit, Kubernetes throttles it; if it exceeds its memory limit, Kubernetes terminates the container (OOMKilled).
  2. Defining Resource Requests and Limits in Pod Specs: We can set resource requests and limits in our Pod or Deployment definition in the YAML file. Here is an example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: app-container
          image: my-app-image:latest
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"

    In this example:

    • The container asks for 256 MiB of memory and 500 milliCPU (which is 0.5 CPU).
    • The limits are 512 MiB of memory and 1 CPU.
  3. Best Practices:

    • Set realistic resource requests based on what the application needs and the data we see. This helps Pods get scheduled well.
    • Check and improve resource requests and limits regularly based on how things are used. We can use tools like kubectl top to see resource use in real-time.
    • Do not over-provision. Setting requests higher than needed wastes cluster capacity, because the scheduler reserves the full requested amount.
  4. Applying Changes: After we change the Pod specs, we can apply the changes using kubectl:

    kubectl apply -f my-app-pod.yaml
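As a quick check on the units used above, here is a small shell sketch that converts the example's CPU and memory quantities into millicores and bytes. The helper names are our own, for illustration only:

```shell
# Kubernetes CPU quantities use millicores ("500m" = 0.5 CPU) and memory
# commonly uses binary suffixes ("256Mi" = 256 * 1024 * 1024 bytes).
# These helper names are hypothetical, not part of any Kubernetes tooling.
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;            # already in millicores
    *)  echo "$(( $1 * 1000 ))" ;;  # whole cores -> millicores
  esac
}

mi_to_bytes() {
  echo "$(( ${1%Mi} * 1024 * 1024 ))"
}

to_millicores "500m"   # request from the example: prints 500
to_millicores "1"      # limit from the example: prints 1000
mi_to_bytes "256Mi"    # request from the example: prints 268435456
```

Seeing the numbers spelled out this way makes it easier to sanity-check that a limit of "1" CPU really is twice the "500m" request.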

By setting the right resource requests and limits, we can manage CPU and memory usage in our Kubernetes environment. This helps our applications run well while using resources smartly. For more details about Kubernetes Pods and resource management, check out how to access Kubernetes API.

Solution 6 - Monitoring with Kubernetes Dashboard

We can use the Kubernetes Dashboard as a web-based tool to see our applications running in the cluster. It shows us how much CPU and memory our pods use. To monitor CPU and memory usage of Kubernetes pods with the Dashboard, let’s follow these steps.

Step 1: Deploy the Kubernetes Dashboard

If we have not deployed the Kubernetes Dashboard yet, we can do it with this command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

This command deploys version v2.5.1 of the Kubernetes Dashboard. We can change the version number in the URL if we need a different one.

Step 2: Access the Dashboard

To see the Kubernetes Dashboard, we need to set up a proxy. We run this command:

kubectl proxy

This starts a local proxy server at http://localhost:8001. We can access the dashboard by going to:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Step 3: Log in to the Dashboard

We need a token to log in. Assuming an admin-user service account with suitable permissions already exists in the kubernetes-dashboard namespace, we can create a token for it:

kubectl -n kubernetes-dashboard create token admin-user

We should copy the generated token and use it to log into the dashboard.
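The token command above assumes an admin-user ServiceAccount already exists. A minimal sketch of creating one with cluster-admin rights, following the naming used in the Dashboard's sample access-control docs (use a narrower role in production):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```

After applying this manifest with kubectl apply, the create token command will work for the admin-user account.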

Step 4: Monitor Pod CPU and Memory Utilization

Once we are logged in, we go to the “Workloads” section. Here, we can see all pods running in our cluster. We can click on any pod to check its details, including CPU and memory usage.

  • CPU Utilization: This shows how many CPU cores (or millicores) the pod is currently using, which we can compare with the CPU requests and limits we set.
  • Memory Utilization: This shows how much memory the pod is using compared to the memory limits set in the pod specs.

Step 5: Set Resource Requests and Limits

To monitor and manage effectively, we should set resource requests and limits in our pod specs. This helps the Kubernetes scheduler to make smart choices about resource allocation. Here is an example of how we can set requests and limits in our pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: example-container
      image: nginx
      resources:
        requests:
          memory: "256Mi"
          cpu: "500m"
        limits:
          memory: "512Mi"
          cpu: "1"

In this example, the pod asks for 256Mi of memory and 500m of CPU. The limits are set to 512Mi for memory and 1 CPU.

Step 6: Explore Metrics and Insights

The Kubernetes Dashboard gives us different metrics and insights about our cluster’s performance. We can use these metrics to look for trends and improve our workloads.

For more details on monitoring resources in Kubernetes, we can look at tools like Prometheus and Grafana. These tools can help us monitor even better.

By following these steps, we can monitor Kubernetes pod CPU and memory usage with the Kubernetes Dashboard. This helps our applications run smoothly. For more details on monitoring solutions, we can check a linked article about setting resource requests and limits in Kubernetes.

Conclusion

In this article, we looked at several good ways to check how much CPU and memory Kubernetes pods use. We covered the kubectl top command, querying the Metrics Server for data, and advanced tools like Prometheus and Grafana.

It is also very important to set resource requests and limits in pod specs. When we do this, we manage resources better and help our Kubernetes environment run well.

If we want to learn more about managing Kubernetes, we can check our guide on why container memory usage is important and how to set resource limits well.
