[SOLVED] Mastering Shared Storage Between Kubernetes Pods

In Kubernetes, sharing storage between pods is essential for many applications, especially those that need to persist state or exchange data. In this article, we will look at different ways to share storage among Kubernetes pods so that our applications can reliably access the data they need. Whether we are running databases, caching layers, or other stateful workloads, knowing how to share storage can improve our deployment strategies in Kubernetes.

In this article, we will talk about these solutions for sharing storage between Kubernetes pods:

  • Solution 1 - Using Persistent Volumes (PV) and Persistent Volume Claims (PVC)
  • Solution 2 - Using StatefulSets for Shared Storage
  • Solution 3 - Configuring Shared File Systems with NFS
  • Solution 4 - Using a Cloud Provider’s Managed File System
  • Solution 5 - Implementing ReadWriteMany Access Modes
  • Solution 6 - Using Storage Classes for Dynamic Provisioning

For more insights into related topics, check out our guides on installing Helm and on the difference between targetPort and port in a Kubernetes Service. Now, let us dive into the solutions and see how to manage shared storage in our Kubernetes environment.

Solution 1 - Using Persistent Volumes (PV) and Persistent Volume Claims (PVC)

One of the best ways to share storage between Kubernetes pods is by using Persistent Volumes (PV) and Persistent Volume Claims (PVC). This way, we can define storage resources separately from the pods that use them. It helps us manage storage more easily in our Kubernetes cluster.

Step 1: Create a Persistent Volume (PV)

First, we need to create a Persistent Volume. A PV can use different storage options like NFS, AWS EBS, GCE Persistent Disk, or Azure Disk. Here is an example of a PV configuration using NFS:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /path/to/nfs
    server: nfs-server.example.com

In this example:

  • capacity specifies the total size of the volume.
  • accessModes defines how the volume can be mounted (ReadWriteMany lets many pods, even on different nodes, read and write it at the same time).
  • nfs points to the NFS server and exported path where the data lives.

Step 2: Create a Persistent Volume Claim (PVC)

Next, we will create a PVC that asks for storage from the PV. The PVC states the needed size and access mode. Here’s an example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Step 3: Use the PVC in Your Pods

After we create the PVC, we can reference it in our pod specification. Below is an example of a pod that mounts the PVC we just defined:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
      volumeMounts:
        - mountPath: /data
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc

In this setup:

  • volumeMounts defines where the volume is mounted inside the container.
  • volumes references the PVC by name, which gives the pod access to the shared storage.
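
Because the claim is defined once and referenced by name, other pods can mount the same claim. Here is a minimal sketch of a second pod that shares the volume with my-pod (my-second-pod is a placeholder name); this works because the NFS-backed PV in this example uses the ReadWriteMany access mode:

apiVersion: v1
kind: Pod
metadata:
  name: my-second-pod
spec:
  containers:
    - name: my-container
      image: my-image
      volumeMounts:
        - mountPath: /data # same shared directory as in my-pod
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc # same claim as my-pod, so both pods see the same files

Both pods now read and write the same files under /data.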

Additional Considerations

  • Make sure the access modes in the PV and PVC match.
  • We can check the status of PV and PVC using these commands:
kubectl get pv
kubectl get pvc
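
If a PVC stays in the Pending state, describing it usually explains why in its Events section:

kubectl describe pvc my-pvc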

This method gives us a strong solution for sharing storage between Kubernetes pods using PVs and PVCs. For more advanced cases, we can look into Storage Classes for dynamic storage or StatefulSets for managing applications that need to keep their state.

Solution 2 - Using StatefulSets for Shared Storage

StatefulSets in Kubernetes help us manage applications that need to keep state. They give each pod a stable network identity and its own persistent storage, which is very useful for apps whose data must survive pod restarts and rescheduling. In this solution, we will see how to use StatefulSets and Persistent Volume Claims to manage storage for Kubernetes pods.

Setting Up a StatefulSet with Persistent Storage

With StatefulSets, storage is handled through Persistent Volume Claims (PVCs). Each pod in a StatefulSet gets its own PVC, created automatically from a volumeClaimTemplate, so its storage stays stable even when the individual pod is deleted or rescheduled.

  1. Create a Persistent Volume: First, we need to make a Persistent Volume for our StatefulSet. This will depend on what storage we are using, like AWS EBS, GCE PD, or NFS.

    Here is an example of a Persistent Volume setup:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      nfs: # This is for NFS
        path: /path/to/nfs
        server: nfs-server.example.com
  2. Make a Storage Class (optional): If we use a dynamic provisioner, we should define a Storage Class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: my-storage-class
    provisioner: nfs-provisioner # Replace this with your provisioner
    parameters:
      archiveOnDelete: "false"
  3. Define a StatefulSet: Now, we will define a StatefulSet that uses the PVCs for each pod.

    Here is an example of a StatefulSet setup:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: my-statefulset
    spec:
      serviceName: "my-service"
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-container
              image: my-image:latest
              volumeMounts:
                - name: my-storage
                  mountPath: /usr/share/mydata
      volumeClaimTemplates:
        - metadata:
            name: my-storage
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi

Accessing Shared Storage

With this setup, each pod in the StatefulSet gets its own PVC created from the volumeClaimTemplate, so each replica writes to its own volume rather than to one shared volume. Because the access mode is ReadWriteOnce, each volume can only be mounted read-write by a single node. If all replicas must read and write the same data, back the claims with a ReadWriteMany-capable file system such as NFS (see Solution 3), or mount one shared RWX claim in the pod template as sketched under Considerations below.
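
To see what the StatefulSet created, we can list the pods and the automatically generated claims. PVCs made from a volumeClaimTemplate are named <template-name>-<statefulset-name>-<ordinal>, so the example above produces my-storage-my-statefulset-0 through my-storage-my-statefulset-2:

    kubectl get pods -l app=my-app                             # my-statefulset-0, my-statefulset-1, my-statefulset-2
    kubectl get pvc                                            # one claim per replica, e.g. my-storage-my-statefulset-0
    kubectl exec my-statefulset-0 -- df -h /usr/share/mydata   # inspect the volume mounted in the first replica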

Benefits of Using StatefulSets

  • Stable Identity: Each pod gets a stable, predictable name (for example my-statefulset-0), which makes it easy to address.
  • Stable Storage: Each claim stays bound to its pod ordinal even when the pod is deleted and recreated, so the data persists.
  • Orderly Deployment and Scaling: StatefulSets create, scale, and delete pods in a defined, predictable order.

Considerations

  • Make sure your storage solution can support the access modes that your app needs.
  • If you need multiple pods to read and write the same data at the same time, use a shared file system like NFS with the ReadWriteMany access mode, as sketched below.
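
One way to get truly shared data while keeping a StatefulSet is to mount a single ReadWriteMany PVC in the pod template in addition to (or instead of) the per-pod volumeClaimTemplates. Below is a minimal sketch, assuming a pre-existing RWX claim named shared-data-pvc (a hypothetical name; it could be created like the NFS-backed claims in Solution 1 or Solution 3):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: my-statefulset
    spec:
      serviceName: "my-service"
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-container
              image: my-image:latest
              volumeMounts:
                - name: shared-data
                  mountPath: /usr/share/shared
          volumes:
            - name: shared-data
              persistentVolumeClaim:
                claimName: shared-data-pvc # hypothetical RWX claim shared by all replicas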

For more info about setting up Persistent Volumes and StatefulSets, you can check this guide on Persistent Volumes and Claims.

Solution 3 - Configuring Shared File Systems with NFS

We can share storage between Kubernetes pods using NFS (Network File System). We set up an NFS server and then create a Persistent Volume (PV) that points to the NFS export. This way, multiple pods can access the same storage at the same time, which is great for applications that need shared data.

Step 1: Set Up an NFS Server

First, we need to set up an NFS server. If we already have one, we can skip this step. Here is how to set it up on a Linux machine:

  1. Install NFS utilities:

    sudo apt-get update
    sudo apt-get install nfs-kernel-server
  2. Create a directory to share:

    sudo mkdir -p /srv/nfs/kubedata
  3. Set permissions:

    sudo chown nobody:nogroup /srv/nfs/kubedata
    sudo chmod 777 /srv/nfs/kubedata
  4. Configure exports: We add this line to /etc/exports to let Kubernetes nodes access the NFS share:

    /srv/nfs/kubedata *(rw,sync,no_subtree_check)
  5. Export the shared directory:

    sudo exportfs -a
  6. Start the NFS server:

    sudo systemctl restart nfs-kernel-server
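
To confirm the export is visible before wiring it into Kubernetes, we can check it from the server and from a worker node (on Debian/Ubuntu-based nodes, the nfs-common package must be installed so the kubelet can mount NFS volumes):

    sudo exportfs -v                        # on the NFS server: list active exports
    showmount -e nfs-server.example.com     # on a client or worker node: list exports offered by the server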

Step 2: Create a Persistent Volume (PV)

Now that the NFS server is ready, we create a Persistent Volume in our Kubernetes cluster that points to the NFS share.

  1. Define the PV: We create a YAML file named nfs-pv.yaml with this content:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        path: /srv/nfs/kubedata
        server: <NFS_SERVER_IP>

    We replace <NFS_SERVER_IP> with the IP address of our NFS server.

  2. Apply the PV configuration:

    kubectl apply -f nfs-pv.yaml

Step 3: Create a Persistent Volume Claim (PVC)

Next, we create a Persistent Volume Claim to ask for storage from the PV.

  1. Define the PVC: We create a YAML file named nfs-pvc.yaml with this content:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
  2. Apply the PVC configuration:

    kubectl apply -f nfs-pvc.yaml

Step 4: Use the PVC in Your Pods

Finally, we can use the PVC in our pod definitions to share storage between multiple pods.

  1. Define a Pod: We create a YAML file named nfs-pod.yaml that uses the PVC:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-pod
    spec:
      containers:
        - name: nfs-container
          image: nginx
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: nfs-storage
      volumes:
        - name: nfs-storage
          persistentVolumeClaim:
            claimName: nfs-pvc
  2. Apply the Pod configuration:

    kubectl apply -f nfs-pod.yaml

Now we can create as many pods as we need that reference the same PVC, and they will all see the same NFS-backed files. A quick way to confirm the sharing works is shown below.
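
For example, if we create a second pod from the same manifest with only metadata.name changed (say nfs-pod-2, a hypothetical name), a file written by one pod is immediately visible to the other:

    kubectl exec nfs-pod -- sh -c 'echo "hello from nfs" > /usr/share/nginx/html/index.html'
    kubectl exec nfs-pod-2 -- cat /usr/share/nginx/html/index.html   # prints: hello from nfs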

For more details on setting up Persistent Volumes, we can check the Kubernetes documentation.

Solution 4 - Using a Cloud Provider’s Managed File System

Using a cloud provider’s managed file system is a simple way to share storage between Kubernetes pods. Managed file systems give us highly available storage that can grow on demand, and many pods can mount it at the same time. This is helpful for applications that need shared data, such as web servers and content management systems.

Example with Amazon EFS (Elastic File System)

Amazon EFS gives us a managed file system that works well with Kubernetes. Here’s how we can set it up:

  1. Create an EFS File System:

    • We go to the AWS Management Console.
    • We find the EFS service and create a new file system.
    • We write down the File System ID.
  2. Install the EFS CSI Driver: The Amazon EFS CSI driver lets Kubernetes mount EFS file systems through Persistent Volumes. We can install it with Helm (chart names follow the driver’s documentation):

    helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
    helm repo update
    helm install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver --namespace kube-system
  3. Create a Storage Class:

    We need to create a StorageClass that uses the EFS CSI driver. We save the following YAML in a file called efs-sc.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    volumeBindingMode: WaitForFirstConsumer

    We apply the storage class:

    kubectl apply -f efs-sc.yaml
  4. Create a Persistent Volume (PV) and Persistent Volume Claim (PVC):

    Next, we create a PersistentVolume and PersistentVolumeClaim to use EFS storage. We save the following YAML in a file named efs-pv-pvc.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-sc
      csi:
        driver: efs.csi.aws.com
        volumeHandle: <Your-EFS-File-System-ID>
    
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: efs-sc
      resources:
        requests:
          storage: 5Gi

    We replace <Your-EFS-File-System-ID> with our EFS File System ID. Then, we apply the configuration:

    kubectl apply -f efs-pv-pvc.yaml
  5. Mount the EFS PVC in Pods:

    Now we can reference this PersistentVolumeClaim in our workloads. Here is an example Deployment that mounts it in every replica:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app-container
              image: my-app-image
              volumeMounts:
                - mountPath: /mnt/efs
                  name: efs-storage
          volumes:
            - name: efs-storage
              persistentVolumeClaim:
                claimName: efs-pvc

By following these steps, we share storage between Kubernetes pods using Amazon EFS: every replica of the Deployment mounts the same file system at /mnt/efs, which is ideal for apps that need to share data.
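
Because EFS supports ReadWriteMany, scaling the Deployment simply adds more pods that mount the same file system:

    kubectl scale deployment my-app --replicas=4
    kubectl get pods -l app=my-app -o wide   # every replica mounts the same EFS volume at /mnt/efs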

For more details on using managed file systems with Kubernetes, we can check the Kubernetes documentation.

Solution 5 - Implementing ReadWriteMany Access Modes

We can share storage effectively between Kubernetes pods by using the ReadWriteMany (RWX) access mode. This mode lets many pods read and write to the same Persistent Volume at the same time. Here’s how we can implement it:

  1. Choose a Storage Backend That Supports RWX:
    Not every storage solution supports RWX access mode. Some common choices are NFS, GlusterFS, and cloud provider file systems like Amazon EFS, Google Filestore, or Azure Files. We need to pick a backend that fits our needs.

  2. Define a Persistent Volume (PV):
    We must create a Persistent Volume and set the access mode to ReadWriteMany. Below is an example of how to define an NFS-backed PV with RWX access.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        path: /path/to/nfs
        server: nfs-server.example.com
  3. Create a Persistent Volume Claim (PVC):
    Next, we will create a Persistent Volume Claim that asks for the storage defined in our PV. Here’s an example PVC that claims the RWX volume.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
  4. Use the PVC in Your Pods:
    After we create the PVC, we can use it in our pod specifications. Here’s how:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-container
              image: my-image
              volumeMounts:
                - mountPath: /mnt/data
                  name: shared-storage
          volumes:
            - name: shared-storage
              persistentVolumeClaim:
                claimName: my-pvc
  5. Verify Access:
    After we deploy our pods, we should confirm that they can all read and write the shared volume, for example by checking the claim’s access mode and creating a file from inside a pod, as shown below.
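
A minimal check, assuming the Deployment above (fill in real pod names from the first command; the placeholder below is hypothetical):

    kubectl get pvc my-pvc                              # the ACCESS MODES column should show RWX
    kubectl get pods -l app=my-app -o wide              # note the pod names and the nodes they run on
    kubectl exec <pod-name> -- touch /mnt/data/probe    # each replica should be able to write here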

Using the ReadWriteMany access mode gives us a flexible way to share storage among many Kubernetes pods so they can work on the same data. For more info on Kubernetes storage setup, please check the Kubernetes documentation.

Solution 6 - Using Storage Classes for Dynamic Provisioning

Kubernetes can provision storage for our pods dynamically through Storage Classes. A Storage Class lets us request different types of storage based on what our application needs while hiding the details of the underlying storage provider. With Storage Classes, managing storage resources in a Kubernetes cluster becomes much easier.

Step-by-Step Guide to Using Storage Classes

  1. Define a Storage Class: First, we need to make a YAML file for our Storage Class. This file describes what kind of storage we want. Here is an example of a Storage Class definition using the GCE Persistent Disk provisioner, which is common on Google Cloud clusters.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: my-storage-class
    provisioner: kubernetes.io/gce-pd # Example for Google Cloud
    parameters:
      type: pd-standard # Specify disk type
    reclaimPolicy: Delete # Options: Delete, Retain
    allowVolumeExpansion: true # Allow resizing of volumes

    Now we apply the Storage Class configuration:

    kubectl apply -f my-storage-class.yaml
  2. Create a Persistent Volume Claim (PVC): After we define the Storage Class, we can create a PVC that uses this Storage Class. The PVC asks for storage based on what we set in the Storage Class.

    Here is an example of a PVC that uses the Storage Class we just made:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi # Asking for 5 GiB of storage
      storageClassName: my-storage-class # Reference the Storage Class

    We apply the PVC configuration:

    kubectl apply -f my-pvc.yaml
  3. Using the PVC in a Pod: After we create the PVC, we can use it in our pod specifications. Here is an example of how to mount the PVC in a pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: my-container
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: my-storage
      volumes:
        - name: my-storage
          persistentVolumeClaim:
            claimName: my-pvc # Reference the PVC

    Now we apply this pod configuration:

    kubectl apply -f my-app-pod.yaml
  4. Checking the Setup: We can check the status of our PVC to see if it is bound and if the storage is ready:

    kubectl get pvc my-pvc
  5. Looking at Storage Class Options: Different cloud providers and on-premises solutions give us many choices for Storage Classes. For example, on AWS we could use provisioner: kubernetes.io/aws-ebs and set parameters for the volume type (a sketch follows this list). For more details on how to set up dynamic provisioning in Kubernetes, please check the Kubernetes documentation.
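
As a sketch of what step 5 describes, a Storage Class for AWS using the in-tree EBS provisioner could look like this (gp2 is one possible volume type; note that EBS volumes are ReadWriteOnce, so they suit per-pod storage rather than the shared RWX setups from earlier solutions):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: aws-ebs-sc
    provisioner: kubernetes.io/aws-ebs # in-tree EBS provisioner; newer clusters use ebs.csi.aws.com
    parameters:
      type: gp2 # EBS volume type
    reclaimPolicy: Delete
    allowVolumeExpansion: true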

Using Storage Classes for dynamic provisioning is a flexible and low-effort way to manage storage in Kubernetes. For more help on how to set up storage in Kubernetes, you can look at related articles like how to set dynamic values with Helm.

Conclusion

In this article, we looked at different ways to share storage between Kubernetes pods: Persistent Volumes and Claims, StatefulSets, NFS, cloud-managed file systems, ReadWriteMany access modes, and Storage Classes for dynamic provisioning. These methods let pods access and exchange the data they need and keep operations smooth in our Kubernetes setup.

To learn more about managing Kubernetes, check out our guides on how to access the Kubernetes API and how to manage your pods well.
