[SOLVED] Easy Ways to Restart Pods in Kubernetes Without Changing Deployment YAML

In Kubernetes, managing deployments well is important for keeping our applications running smoothly. One common task is restarting pods without changing the deployment YAML, so we can refresh our application pods with minimal disruption to users. In this chapter, we look at several simple methods to do this, show the benefits of each, and give clear steps to follow.

Here are the methods we can use to restart our Kubernetes pods:

  • Solution 1 - Use kubectl rollout restart
  • Solution 2 - Patch the deployment with a dummy annotation
  • Solution 3 - Use kubectl scale to trigger a restart
  • Solution 4 - Change the environment variables for a short time
  • Solution 5 - Use kubectl edit to add a dummy label
  • Solution 6 - Trigger a restart using a command in a CronJob

By using these methods, we can manage our Kubernetes deployments without editing our YAML files by hand. For more tips, we can check out our articles on debugging image pull errors and service accounts in Kubernetes.

Solution 1 - Use kubectl rollout restart

One of the easiest and most reliable ways to restart Pods in Kubernetes is the kubectl rollout restart command. It performs a rolling restart of all Pods managed by a specific deployment without changing the deployment YAML.

To use this command, we can follow these steps:

  1. Identify the Deployment: First, we need to find the name of the deployment we want to restart. We can see all deployments in our current namespace by using this command:

    kubectl get deployments
  2. Execute the Rollout Restart: After we find the deployment name, we can start the rolling restart by running this command:

    kubectl rollout restart deployment/<deployment-name>

    Here, we replace <deployment-name> with the real name of our deployment. For example:

    kubectl rollout restart deployment/my-app
  3. Verify the Rollout Status: After we start the restart, we can check the status of the rollout to make sure the Pods are restarting correctly:

    kubectl rollout status deployment/<deployment-name>

    This command will give us updates about the current state of the rollout. It will tell us if it finished successfully or if there are any problems.

Using kubectl rollout restart is a simple and safe way to restart Pods: we do not need to change our deployment settings, and the rolling update strategy keeps the application available during the restart. This approach follows Kubernetes best practices for managing deployments.
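As a sketch, the restart and status commands above can be wrapped in one small helper script. The deployment name my-app is a placeholder, and by default the script only prints the commands; set RUN=1 to actually execute them against a cluster.

```shell
#!/bin/sh
# Sketch: rolling-restart a deployment and wait for it to finish.
# "my-app" is a placeholder; pass the real deployment name as $1.
# By default the commands are only printed; set RUN=1 to execute them.
DEPLOYMENT="${1:-my-app}"

run() {
  echo "+ $*"                      # show the command that would run
  if [ "${RUN:-0}" = "1" ]; then "$@"; fi
}

run kubectl rollout restart "deployment/$DEPLOYMENT"
run kubectl rollout status "deployment/$DEPLOYMENT" --timeout=120s
```

The --timeout flag makes the status check give up after two minutes instead of waiting forever if the rollout gets stuck.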

For more about managing deployments, we can check the Kubernetes documentation or look at solutions for similar issues, like how to restart Pods when a ConfigMap changes.

Solution 2 - Patch the deployment with a dummy annotation

We can restart our Kubernetes pods without editing the deployment YAML file by patching the deployment with a dummy annotation. Any change under the deployment's spec.template triggers a new rollout, so adding or updating a pod-template annotation makes Kubernetes replace the pods.

To use this solution, we follow these steps:

  1. Identify Your Deployment: First, we need to find the name of the deployment we want to update. We can list all the deployments in a namespace with this command:

    kubectl get deployments -n <namespace>
  2. Patch the Deployment: Next, we use the kubectl patch command to add or update a dummy annotation. Here is an example command. We replace <deployment-name> and <namespace> with our specific deployment name and namespace:

    kubectl patch deployment <deployment-name> -n <namespace> \
    -p '{"spec": {"template": {"metadata": {"annotations": {"kubectl.kubernetes.io/restartedAt": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}}}}}'

    In this command, we set an annotation named kubectl.kubernetes.io/restartedAt to the current time; this is the same annotation that kubectl rollout restart itself uses. Because the pod template changed, Kubernetes triggers a rolling restart of the pods.

  3. Verify the Restart: We can check the status of the pods to make sure they are rolling out correctly:

    kubectl rollout status deployment <deployment-name> -n <namespace>
  4. Check the Pods: We can see the current state of our pods after the restart by running:

    kubectl get pods -n <namespace>
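Quoting the patch JSON inline is error-prone, so it can help to build it in a variable first. This is a sketch: my-app and default are placeholder values, and the script only prints the final kubectl command so it can be reviewed before running (pipe the output to sh to execute it).

```shell
#!/bin/sh
# Sketch: build the restartedAt patch in a variable so the quoting stays
# readable. "my-app" and "default" are placeholders. The final command is
# printed, not executed.
DEPLOYMENT="${DEPLOYMENT:-my-app}"
NAMESPACE="${NAMESPACE:-default}"

# ISO 8601 UTC timestamp, e.g. 2024-01-31T12:00:00Z
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)

# printf keeps the JSON template free of shell-quoting noise.
PATCH=$(printf '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}}}' "$TS")

echo "kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p '$PATCH'"
```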

This method works well and does not need us to change the deployment YAML file directly. For more details on managing deployments, we can look at debugging Kubernetes pods or checking Kubernetes pod status.

Solution 3 - Use kubectl scale to trigger a restart

Another way to restart pods in a Kubernetes deployment without changing the deployment YAML file is the kubectl scale command, which changes the number of replicas. Scaling the deployment down to zero and back up replaces all of its pods.

Steps to Use kubectl scale

  1. Identify Your Deployment: First, we need to know the name of our deployment and the namespace it is in. If it is not in the default namespace, we should check that. We can list our deployments with this command:

    kubectl get deployments
  2. Scale Down and Up: To start a rolling restart, we can scale down the number of replicas to zero. After that, we can scale it back up to the number we want. For example, if our deployment is called my-deployment and we want 3 replicas, we can run these commands:

    # Scale down to 0 replicas
    kubectl scale deployment my-deployment --replicas=0
    
    # Scale back up to 3 replicas
    kubectl scale deployment my-deployment --replicas=3

    This stops the existing pods and creates new ones, so any changes to the container image or configuration are picked up. Note that this is not a true rolling restart: while the replica count is zero, no pods are serving traffic, so we should expect a short downtime.

  3. Check the Status: After we scale the deployment, we should watch the status of our pods to make sure they restart properly. We can use this command:

    kubectl get pods -l app=my-deployment

    We need to change app=my-deployment to match the correct label to see the pods for our deployment.
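The scale-down and scale-up steps above can be wrapped in a small script that remembers the replica count instead of hard-coding it. This is a sketch: my-deployment is a placeholder, and the script prints the commands rather than running them so they can be checked first.

```shell
#!/bin/sh
# Sketch: restart a deployment by scaling to zero and back, preserving the
# current replica count. "my-deployment" is a placeholder name. The commands
# are printed, not executed; pipe the output to sh to run them.
DEPLOYMENT="${DEPLOYMENT:-my-deployment}"

# Against a real cluster, read the live replica count first, e.g.:
#   REPLICAS=$(kubectl get deployment "$DEPLOYMENT" -o jsonpath='{.spec.replicas}')
REPLICAS="${REPLICAS:-3}"

echo "kubectl scale deployment $DEPLOYMENT --replicas=0"
echo "kubectl scale deployment $DEPLOYMENT --replicas=$REPLICAS"
```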

Additional Information

  • Graceful Termination: When we scale down to zero, Kubernetes terminates the current pods gracefully, giving them their termination grace period to finish in-flight work before shutting down.

  • Rollback: If the new pods have problems, we can scale the deployment down to zero again and back up to the previous replica count. It is good to check logs and events while doing this.

Using kubectl scale is an easy way to restart our Kubernetes pods without changing the deployment settings. For more information on managing deployments, we can look at the official Kubernetes documentation.

If we are having problems with pod restarts or settings, we can check resources like how to debug ImagePullBackOff or how to restart pods when ConfigMap changes.

Solution 4 - Change the environment variables for a short time

To restart Kubernetes pods without editing the deployment YAML by hand, we can temporarily change an environment variable. Because environment variables live in the pod template, Kubernetes notices the change and starts a new rollout of the pods.

Steps to Change Environment Variables

  1. Find the Deployment: First, we need the name of the deployment and the namespace where the pods are running (check this if it is not the default namespace).

  2. Get the Current Deployment Settings: We use this command to get the current settings of the deployment, which includes the environment variables:

    kubectl get deployment <deployment-name> -n <namespace> -o yaml
  3. Patch the Deployment: We can change an existing environment variable or add a new one with the kubectl patch command. For example, if we want to change an environment variable called ENV_VAR, we can do it like this:

    kubectl patch deployment <deployment-name> -n <namespace> \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"<container-name>","env":[{"name":"ENV_VAR","value":"new-value"}]}]}}}}'

    If we want to add a temporary dummy environment variable, we can use this:

    kubectl patch deployment <deployment-name> -n <namespace> \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"<container-name>","env":[{"name":"DUMMY_ENV_VAR","value":"dummy-value"}]}]}}}}'
  4. Check the Rollout Status: After we apply the patch, we can check how the rollout is going with this command:

    kubectl rollout status deployment/<deployment-name> -n <namespace>
  5. Undo the Changes (Optional): If we want to go back to the original environment variable, we patch it back to its previous value or re-apply the original manifest. Note that simply re-running the same patch does not undo it.
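As an alternative to hand-writing the patch JSON, kubectl set env can change or add an environment variable in one command. This is a sketch: my-app and DUMMY_RESTART are placeholder names, and the script prints the command instead of running it.

```shell
#!/bin/sh
# Sketch: use `kubectl set env` to bump a dummy environment variable, which
# edits the pod template and therefore triggers a rollout. "my-app" and
# DUMMY_RESTART are placeholders. The command is printed, not executed.
DEPLOYMENT="${DEPLOYMENT:-my-app}"

# A changing value (here, a Unix timestamp) guarantees the pod template
# differs from the previous revision.
STAMP=$(date +%s)

echo "kubectl set env deployment/$DEPLOYMENT DUMMY_RESTART=$STAMP"
```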

Important Things to Think About

  • Effect on Running Pods: Changing environment variables will stop the old pods and start new ones. Make sure your application can handle this well.

  • Rolling Update Strategy: Check that the deployment uses a rolling update strategy (the default). This lets the change roll out without downtime.

This approach is good for restarting Kubernetes pods without editing the deployment YAML file, which helps when we need a quick change.

For more tips on managing Kubernetes deployments, we can look at this article on Kubernetes rolling updates.

Solution 5 - Use kubectl edit to add a dummy label

One easy way to do a rolling restart of our Kubernetes pods is to use the kubectl edit command to add a dummy label to the deployment's pod template. Because the pod template changes, Kubernetes sees the deployment as updated and rolls out new pods.

Here’s what we can do:

  1. Identify the Deployment: First, we need to know the name of our deployment and the namespace it is in. If we are not sure, we can list all deployments in the current namespace by using:

    kubectl get deployments
  2. Edit the Deployment: Next, we can use the kubectl edit command to change the deployment. For example, if our deployment is named my-deployment, we run:

    kubectl edit deployment my-deployment
  3. Add a Dummy Label to the Pod Template: When the editor opens (usually vi or nano), we go to the spec.template.metadata section. Labels under the Deployment's top-level metadata do not restart anything; only changes under spec.template trigger a rollout. We add a simple label like restart: "true", leaving the existing selector labels untouched. It should look like this:

    spec:
      template:
        metadata:
          labels:
            restart: "true" # Add this line
  4. Save and Exit: After we add the label, we save our changes and exit the editor. This will start a rolling restart of the pods that the deployment manages.

  5. Verify the Restart: We can check the status of the pods to see if they are restarting by running:

    kubectl get pods -l restart=true

This method is helpful for making a quick change without rewriting the whole deployment file. It lets us keep a clean deployment YAML on disk while still getting the effect of restarting the pods.
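If we want the same effect without an interactive editor, the label can be added with kubectl patch. This is a sketch: my-deployment is a placeholder name, and the script prints the command so it can be reviewed before running.

```shell
#!/bin/sh
# Sketch: add a dummy label to the pod template non-interactively, with the
# same effect as the `kubectl edit` steps above. "my-deployment" is a
# placeholder name. The command is printed, not executed.
DEPLOYMENT="${DEPLOYMENT:-my-deployment}"

# The label must go under spec.template.metadata.labels to trigger a rollout.
PATCH='{"spec":{"template":{"metadata":{"labels":{"restart":"true"}}}}}'

echo "kubectl patch deployment $DEPLOYMENT -p '$PATCH'"
```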

If we want to learn more about managing our Kubernetes resources, we can check out how to set dynamic values with Helm or look at how to check Kubernetes pod CPU and memory usage.

Solution 6 - Trigger a restart using a command in a CronJob

We can use a Kubernetes CronJob to restart our pods on a schedule. The CronJob runs a kubectl rollout restart command against the deployment, refreshing the pods without us editing the deployment YAML file by hand.

Step-by-Step Implementation

  1. Create a CronJob: First, we need to set up a CronJob in YAML format. It will use the kubectl rollout restart command to restart our deployment. Here is an example of a CronJob that restarts a deployment called your-deployment every day at midnight.

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: restart-deployment-cronjob
    spec:
      schedule: "0 0 * * *" # This schedule runs the job every day at midnight
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: restart-deployment
                  image: bitnami/kubectl:latest # Using a lightweight image with kubectl
                  command:
                    - /bin/sh
                    - -c
                    - |
                      kubectl rollout restart deployment/your-deployment -n your-namespace
              restartPolicy: OnFailure
  2. Deploy the CronJob: Next, we save the YAML configuration to a file. We can name it cronjob-restart.yaml. Then we apply it to our Kubernetes cluster with this command:

    kubectl apply -f cronjob-restart.yaml
  3. Verify the CronJob: We should check if the CronJob is created and running on schedule:

    kubectl get cronjobs
  4. Monitor Job Execution: We can keep an eye on the CronJob by watching the jobs it creates:

    kubectl get jobs --watch
  5. Check Pod Status: After the CronJob runs, we can check if the pods have restarted. We do this by checking the status of our deployment:

    kubectl get pods -n your-namespace

Considerations

  • We need to make sure the image for the CronJob (bitnami/kubectl in this case) has the kubectl command.
  • We can change the CronJob schedule to fit our needs. The example runs daily at midnight but we can customize the cron expression.
  • We also need to make sure the CronJob's pod has permission to perform the rollout restart on the target deployment. This usually means creating a service account with the right RBAC role and binding.
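As a sketch of those permissions, a ServiceAccount, Role, and RoleBinding like the following could be created in the target namespace. All names here, such as deployment-restarter, are hypothetical placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restarter
  namespace: your-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
  namespace: your-namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"] # rollout restart patches the pod template
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restarter
  namespace: your-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restarter
subjects:
  - kind: ServiceAccount
    name: deployment-restarter
    namespace: your-namespace
```

The CronJob's pod spec would then reference the account with serviceAccountName: deployment-restarter.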

Using a CronJob for rolling restarts is a good way to refresh our Kubernetes pods regularly without touching the deployment settings. For more on this topic, we can check the Kubernetes documentation on managing deployments and CronJobs.

Conclusion

In this article, we looked at different ways to do a rolling restart of Kubernetes pods without changing the deployment YAML: the kubectl rollout restart command, dummy annotations and labels, scaling, temporary environment variables, and a scheduled CronJob. These approaches let us refresh our pods while keeping our manifests stable.

Knowing these methods improves our Kubernetes management skills and helps our applications run reliably. If we need more help, we can check out our guides on debugging image pull backoff issues and Kubernetes persistent volumes.
