How can you list recently deleted pods in Kubernetes?

To list recently deleted pods in Kubernetes, we can use the Kubernetes API or turn on audit logging. Kubernetes does not provide a single command to show deleted pods, because deleted objects are removed from the API server. However, we can use kubectl together with events, audit logs, or direct API access to track pod lifecycle events, including deletions.

In this article, we look at different ways to list recently deleted pods in Kubernetes. We cover using kubectl, the Kubernetes API, enabling and using audit logs, and writing custom controllers for better tracking. We also discuss building a pod lifecycle monitoring solution for better observability. Here is what we will cover:

  • How to List Recently Deleted Pods in Kubernetes Using kubectl
  • How to Use the Kubernetes API to Retrieve Recently Deleted Pods
  • How to Enable and Use Kubernetes Audit Logs for Deleted Pods
  • How to Leverage Custom Controllers to Track Deleted Pods
  • How to Implement a Pod Lifecycle Monitoring Solution
  • Frequently Asked Questions

For more information about Kubernetes, we can check related articles like What is Kubernetes and How Does it Simplify Container Management? and How Do I Use Kubernetes for Machine Learning?.

How to Use the Kubernetes API to Retrieve Recently Deleted Pods

We can retrieve recently deleted pods in Kubernetes using the Kubernetes API. We can use the kubectl command or make direct HTTP requests to the API server. Here is how we can do it:

Using kubectl with API Query

We can access the Kubernetes API directly with kubectl to find traces of deleted pods. Deleted pods do not show up in the regular pod listing, but the events associated with them are kept for a short time (one hour by default), and we can inspect those:

kubectl get events --sort-by='.metadata.creationTimestamp' -n <namespace>

We need to replace <namespace> with our target namespace. This command shows all events in that namespace sorted by creation time, and among them we can find events related to pod deletions.
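To narrow the output to deletion-related events, we can filter by event reason. A minimal sketch, assuming the kubelet emitted its usual Killing event when it stopped the pod's containers:

kubectl get events -n <namespace> --field-selector involvedObject.kind=Pod,reason=Killing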

Making Direct API Requests

We can also make direct HTTP requests to the Kubernetes API server. Use this endpoint:

GET /api/v1/namespaces/<namespace>/pods

This returns the pods that still exist in the chosen namespace. Fully deleted pods are removed from the API server, so they do not appear here. However, the ?fieldSelector parameter can surface pods that terminated but were not yet cleaned up, such as pods in the Failed phase.

Here is an example using curl:

curl -X GET "https://<k8s-api-server>/api/v1/namespaces/<namespace>/pods?fieldSelector=status.phase=Failed&labelSelector=<your-label-selector>" -H "Authorization: Bearer <your-token>"
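To authenticate, we need a bearer token for a service account that can list pods. A quick sketch using kubectl create token (available since Kubernetes v1.24); the service account name default is just an example:

TOKEN=$(kubectl create token default -n <namespace>)
curl -s "https://<k8s-api-server>/api/v1/namespaces/<namespace>/pods?fieldSelector=status.phase=Failed" -H "Authorization: Bearer $TOKEN"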

Example with kubectl proxy

If we want to access the API without handling authentication ourselves, we can start kubectl proxy and then query the API:

  1. Start the proxy:
kubectl proxy
  2. Query the pods endpoint with your web browser or a tool like curl:
curl http://localhost:8001/api/v1/namespaces/<namespace>/pods?fieldSelector=status.phase=Failed
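Because fully deleted pods no longer appear in the pods endpoint, querying the events endpoint through the proxy is often more useful. A minimal sketch, assuming jq is installed and the kubelet emitted a Killing event for the pod:

curl -s http://localhost:8001/api/v1/namespaces/<namespace>/events \
  | jq '.items[] | select(.reason == "Killing") | {pod: .involvedObject.name, time: .lastTimestamp}'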

Custom Resource Definitions (CRDs)

If we need longer-lived tracking, we can consider a Custom Resource Definition (CRD). A controller can record pod lifecycle events, including deletions, as custom resources. This way, we keep a durable record of deleted pods that we can query later, as sketched below.
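For example, if our controller writes each deletion as a custom resource of a hypothetical DeletedPod kind, we could list the history like any other resource (the deletedpods resource name is an assumption, not a built-in):

kubectl get deletedpods -n <namespace>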

Final Note

For more details on managing Kubernetes resources, we can check the article on how to interact with the Kubernetes API. It gives more insight into working with different Kubernetes resources effectively.

How to Enable and Use Kubernetes Audit Logs for Deleted Pods

To track deleted pods in Kubernetes, we need to enable and use audit logs. Kubernetes audit logs give us records of all requests made to the API server. This includes actions related to pod deletions. Here’s how we can enable and set up audit logging for our Kubernetes cluster.

Step 1: Configure the Audit Policy

First, we create a YAML file called audit-policy.yaml. This file tells the API server which events to log. For deleted pods, we add a rule like this:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
    verbs: ["delete"]

Step 2: Enable Audit Logging

Next, we need to change the Kubernetes API server configuration to use the audit policy we just made. On kubeadm-based clusters, we usually edit the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml and add these flags (the policy file and the log directory must also be mounted into the API server pod, for example as hostPath volumes):

spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kube-apiserver-audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100

Step 3: Restart the API Server

After we change the API server configuration, the API server must restart for the changes to take effect. With a static pod manifest, the kubelet restarts the API server automatically when the file changes. If we use a managed Kubernetes service, the API server is usually not directly accessible, so we should check our provider's documentation for how audit logging is enabled there.
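A quick sketch to confirm the flags are active; the component=kube-apiserver label is what kubeadm puts on the API server static pod:

kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -- '--audit'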

Step 4: Access the Audit Logs

Now that we have enabled audit logging, we can access the logs at the path we set. We can use this command to view the logs:

grep '"verb":"delete"' /var/log/kube-apiserver-audit.log

The audit log stores each event as a JSON object, and the verb field is lowercase, so this filter shows only delete operations and lets us monitor deleted pods. (Searching for an uppercase DELETE would miss the entries.)
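For more structured output, we can parse the newline-delimited JSON with jq, assuming it is installed:

jq 'select(.verb == "delete" and .objectRef.resource == "pods")
    | {pod: .objectRef.name, user: .user.username, time: .requestReceivedTimestamp}' \
  /var/log/kube-apiserver-audit.log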

Step 5: Analyze the Audit Logs

The audit log entries give us detailed info about the deletion requests. This includes who did the action, when it happened, and which resource was affected. Here is an example of a log entry for a deleted pod:

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/pods/my-pod",
  "verb": "delete",
  "user": {
    "username": "system:admin",
    "groups": ["system:masters"]
  },
  "objectRef": {
    "resource": "pods",
    "namespace": "default",
    "name": "my-pod"
  },
  "responseStatus": {
    "code": 200
  },
  "requestReceivedTimestamp": "2023-10-01T12:00:00Z"
}

By following these steps, we can enable and use Kubernetes audit logs to track deleted pods. For more info on managing Kubernetes resources, we can check this article on Kubernetes components.

How to Leverage Custom Controllers to Track Deleted Pods

We can use custom controllers in Kubernetes to track deleted pods. We do this by using the Kubernetes client-go library to watch for events. This way, we can keep a record of deleted pods. This is important for auditing and monitoring.

Steps to Make a Custom Controller

  1. Set Up Your Go Environment: Make sure you have Go installed. Set up your workspace.

  2. Import Important Packages: In your Go application, import the Kubernetes client-go packages plus the standard library packages the examples below use.

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )
  3. Initialize Kubernetes Client: Load your kubeconfig. Then create a Kubernetes client.

    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }
  4. Watch for Pod Deletion Events: We use a watch on the pods resource to listen for delete events.

    watchPods, err := clientset.CoreV1().Pods("your-namespace").Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    
    for event := range watchPods.ResultChan() {
        switch event.Type {
        case watch.Deleted:
            pod := event.Object.(*v1.Pod)
            fmt.Printf("Deleted pod: %s\n", pod.Name)
            // Add logic to save pod info in storage
        }
    }
  5. Store Deleted Pod Information: We need to log deleted pod details in a database or file so we can get it later.

Example of Storing Deleted Pods

Here is a simple example that appends the deleted pod's name and deletion time to a log file. It needs the standard library "log", "os", and "time" packages in addition to the imports above:

func logDeletedPod(podName string) {
    f, err := os.OpenFile("deleted_pods.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0666)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if _, err := f.WriteString(fmt.Sprintf("Deleted pod: %s at %s\n", podName, time.Now().Format(time.RFC3339))); err != nil {
        log.Fatal(err)
    }
}

Deploying the Custom Controller

When the custom controller is ready, we can deploy it as a pod in our cluster. Make sure it has permission to watch pod events by assigning a proper Role and RoleBinding (or a ClusterRole if it watches pods across namespaces), as sketched below.
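A minimal RBAC sketch using imperative kubectl commands; the pod-watcher names and the default namespace are assumptions:

kubectl create serviceaccount pod-watcher -n default
kubectl create clusterrole pod-watcher --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding pod-watcher-binding \
  --clusterrole=pod-watcher --serviceaccount=default:pod-watcher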

Conclusion

Using custom controllers helps us track deleted pods in Kubernetes. We can change the setup based on our monitoring and auditing needs. This way, we keep good visibility over pod lifecycle events. For more information about Kubernetes, you can check what are the key components of a Kubernetes cluster.

How to Implement a Pod Lifecycle Monitoring Solution

To monitor the lifecycle of pods in Kubernetes, we can use custom controllers, webhooks, and monitoring tools. Here is a simple step-by-step guide:

  1. Custom Controller:
    • We create a custom Kubernetes controller with the client-go library. This controller will watch for pod events.
    • It will listen for ADDED, DELETED, and MODIFIED events related to pods.
    package main
    
    import (
        "context"
    
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )
    
    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err.Error())
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err.Error())
        }
    
        watchPods(clientset)
    }
    
    func watchPods(clientset *kubernetes.Clientset) {
        // An empty namespace string watches pods in all namespaces.
        watcher, err := clientset.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err.Error())
        }
        for event := range watcher.ResultChan() {
            switch event.Type {
            case watch.Added:
                // Handle pod addition
            case watch.Deleted:
                // Handle pod deletion
            case watch.Modified:
                // Handle pod modification
            }
        }
    }
  2. Kubernetes Events:
    • We use Kubernetes events to track pod lifecycle changes. We can query events with kubectl:
    kubectl get events --sort-by='.metadata.creationTimestamp'
  3. Prometheus and Grafana:
    • We deploy Prometheus to collect metrics from our Kubernetes cluster. We use the kube-state-metrics service to gather pod lifecycle metrics (see the query sketch after this list).
    • Then, we set up Grafana dashboards to show pod states over time.
    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus
    spec:
      ports:
        - port: 9090
      selector:
        app: prometheus
  4. Alerting with Prometheus:
    • We create alert rules in Prometheus. This helps us know when pods are failing or restarting often.
    groups:
    - name: pod-alerts
      rules:
      - alert: PodRestarting
        expr: rate(kube_pod_container_status_restarts_total[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.pod }} is restarting frequently"
  5. Logging:
    • We can use tools like Fluentd or Elasticsearch. These tools help us collect logs from our pods. We set up log shipping to track pod events and errors.
  6. Webhook for Custom Actions:
    • We can also add a webhook to trigger custom actions when pod events happen. For example, it can send notifications or log information.
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: pod-lifecycle-webhook
    webhooks:
    - name: podlifecycle.example.com
      clientConfig:
        service:
          name: pod-lifecycle-service
          namespace: default
          path: "/validate"
        caBundle: <ca-bundle>
      rules:
      - operations: ["CREATE", "DELETE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
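Following up on the Prometheus step above, once Prometheus is scraping kube-state-metrics we can query pod lifecycle series directly. A minimal sketch, assuming the prometheus Service shown earlier and the kube_pod_deletion_timestamp metric exposed by recent kube-state-metrics versions:

kubectl port-forward svc/prometheus 9090:9090 &
curl -s 'http://localhost:9090/api/v1/query?query=kube_pod_deletion_timestamp'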

By using these methods, we can monitor the lifecycle of pods in Kubernetes. This helps us see their states and react to changes quickly. For more information on Kubernetes configurations, check the article on Kubernetes Pod Management.

Frequently Asked Questions

1. How can we view recently deleted pods in Kubernetes?

To see traces of recently deleted pods in Kubernetes, we can query events with kubectl and the --field-selector option to filter for deletion-related actions. However, Kubernetes does not keep information about deleted pods by default, so we may need to turn on auditing or use a monitoring tool to track deletions in detail.

2. Are we able to get deleted pod logs in Kubernetes?

When a pod is deleted in Kubernetes, its logs usually cannot be retrieved unless we have set up a logging pipeline that keeps logs after the pod is gone. Tools like Fluentd or the ELK Stack can preserve logs from deleted pods so we can review them later.

3. How can we enable Kubernetes audit logs for deleted pods?

To turn on audit logs for deleted pods in Kubernetes, we need to set up the API server with an audit policy file. This file tells the server what events to log, including deletions of pods. We should also make sure our Kubernetes cluster saves these logs in a place where we can access them later.

4. What are custom controllers in Kubernetes and how can they help us track deleted pods?

Custom controllers in Kubernetes can help us watch pod lifecycle events, like deletions. By making a custom controller, we can keep a record of deleted pods. This helps us track and audit them. It gives us a solution that suits our specific monitoring needs.

5. What is a pod lifecycle monitoring solution and how can it help us track deleted pods?

A pod lifecycle monitoring solution helps us watch the state changes of pods in a Kubernetes cluster. With this solution, we can see when pods are created, deleted, or restarted. This is very helpful for keeping track of operations and fixing problems related to pod management. For more details, check out our article on monitoring Kubernetes events.