
CephFS

Below we give examples of how to use CephFS with different container engines.

Check here for information on how to create and manage CephFS shares using Manila.

Docker Swarm

Not yet supported.

Kubernetes

There are two ways to integrate with CephFS storage:

  • Auto Provision: You only define the claim, and Kubernetes provisions the storage automatically in OpenStack Manila / CephFS
  • Existing Shares: You have defined your Manila shares, and pass the information explicitly to Kubernetes

NOTE: For Kubernetes versions <= 1.13, the provisioner field in the StorageClass should be csi-cephfsplugin.

Auto Provisioning

In this case you do not need to create the Manila shares in advance: just specify how much storage you need and Kubernetes handles the rest.

Example for an nginx server:

# vim nginx-cephfs.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: manila-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: geneva-cephfs-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /var/lib/www/html
            name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: manila-cephfs-pvc
          readOnly: false

Things that might need to be customized:

  • storageClassName: the example uses geneva-cephfs-testing; replace it with another share type as needed (share types map to storage classes in Kubernetes). To list the available storage classes:
# kubectl get storageclass
NAME                    PROVISIONER          AGE
geneva-cephfs-testing   manila-provisioner   1d
meyrin-cephfs           manila-provisioner   1d
  • storage: 1G in the example; set it to the storage size you need
  • osSecretName: os-trustee in the kube-system namespace in the example. This default creates shares in the same project as the cluster
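For reference, a pre-created auto-provisioning class such as geneva-cephfs-testing is expected to look roughly like the explicit StorageClass shown later on this page for existing shares, just without the share-specific IDs. This is a sketch under that assumption, not the exact definition on your cluster:

```yaml
# Sketch of a pre-created auto-provisioning StorageClass (assumed layout;
# the actual definition on your cluster may differ).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: geneva-cephfs-testing
provisioner: manila-provisioner   # csi-cephfsplugin for Kubernetes <= 1.13
parameters:
  type: "Geneva CephFS Testing"
  zones: nova
  osSecretName: os-trustee
  osSecretNamespace: kube-system
  protocol: CEPHFS
  backend: csi-cephfs
  csi-driver: cephfs.csi.ceph.com
  # No osShareID / osShareAccessID here: shares are provisioned on demand.
```

With such a class in place, the PersistentVolumeClaim only needs to reference it by name in storageClassName.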

If you want to create Manila shares in a different project, you'll need to create an appropriate secret and pass it in osSecretName. Instructions and the required fields of the secret are available here.

Existing Manila Share

You might want to mount an existing Manila share instead of auto provisioning one.

Note: A share created with size 1 is 1,073,741,824 bytes (1 GiB), but there is currently an issue that prevents the persistent volume claim from binding when the requested size is greater than 1,000,000,000 bytes (1 GB). Therefore in the following examples the claim size is given with units G instead of Gi. Once attached to a pod the full size is available to use.
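The GB vs GiB distinction above can be checked with GNU coreutils numfmt (assuming it is available on your machine):

```shell
# Kubernetes resource quantities: "G" is decimal (10^9), "Gi" is binary (2^30).
numfmt --from=auto 1G    # -> 1000000000 bytes (1 GB)
numfmt --from=auto 1Gi   # -> 1073741824 bytes (1 GiB)
```

A claim of 1G (1,000,000,000 bytes) stays under the threshold described above, while 1Gi (1,073,741,824 bytes) exceeds it, which is why the examples request 1G.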

Here's an example:

$ manila list
+--------------------------------------+-----------+------+-------------+-----------+-----------+-----------------------+------+-------------------+
| ID                                   | Name      | Size | Share Proto | Status    | Is Public | Share Type Name       | Host | Availability Zone |
+--------------------------------------+-----------+------+-------------+-----------+-----------+-----------------------+------+-------------------+
| a9f56250-bd30-4e88-9285-3a4eea064725 | myshare01 | 1    | CEPHFS      | available | False     | Geneva CephFS Testing |      | nova              |
+--------------------------------------+-----------+------+-------------+-----------+-----------+-----------------------+------+-------------------+

$ manila access-list myshare01
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| id                                   | access_type | access_to | access_level | state  | access_key                               |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| 314fe7f7-1ba5-4c36-9ce3-73868f888b18 | cephx       | testid    | rw           | active | AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+

And a corresponding nginx deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manila-csicephfs-share
provisioner: manila-provisioner
parameters:
  type: "Geneva CephFS Testing"
  zones: nova
  osSecretName: os-trustee
  osSecretNamespace: kube-system
  protocol: CEPHFS
  backend: csi-cephfs
  csi-driver: cephfs.csi.ceph.com
  osShareID: a9f56250-bd30-4e88-9285-3a4eea064725
  osShareAccessID: 314fe7f7-1ba5-4c36-9ce3-73868f888b18
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: manila-csicephfs-share
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /var/lib/www/html
            name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: manila-cephfs-pvc
          readOnly: false

Existing CephFS Share

It is also possible to skip the Manila provisioner and reference the CephFS share directly (though if the share is managed by Manila, this has no real benefit over the previous approach).

Here's an example:

$ manila share-export-location-list myshare01
+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+
| ID                                   | Path                                                                                                                                                    | Preferred |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+
| b9cbceb9-9574-4fa8-9a04-306ef748e3f5 | 128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790:/volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce | False     |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+
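The Path column above packs the monitor list and the share root into a single string; the CSI PersistentVolume expects them separately as the monitors and rootPath volume attributes. A small shell sketch to split them:

```shell
# Export location as reported by `manila share-export-location-list`.
path='128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790:/volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce'

# Everything before the ":/": the comma-separated Ceph monitor list.
monitors="${path%:/*}"
# Everything from the ":/": the CephFS root path of the share.
rootPath="/${path##*:/}"

echo "monitors: $monitors"
echo "rootPath: $rootPath"
```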

$ manila access-list myshare01
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| id                                   | access_type | access_to | access_level | state  | access_key                               |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| 314fe7f7-1ba5-4c36-9ce3-73868f888b18 | cephx       | testid    | rw           | active | AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+

You will need a PersistentVolume, a PersistentVolumeClaim that binds to it, and a secret with the access credentials:

$ kubectl -n default create secret generic csi-cephfs-secret \
    --from-literal=userKey=AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== \
    --from-literal=userID=testid
$ vim nginx-cephfs.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprovisioned-cephfs-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com

    # Unique identifier of the volume.
    # Can be anything, but we recommend setting it to the PersistentVolume name.
    volumeHandle: preprovisioned-cephfs-pv

    # Secret name and namespace in both nodeStageSecretRef
    # and nodePublishSecretRef must match the Secret created above.
    nodeStageSecretRef:
      name: csi-cephfs-secret
      namespace: default
    nodePublishSecretRef:
      name: csi-cephfs-secret
      namespace: default

    volumeAttributes:
      # The volume attributes below are passed to the cephfs-csi driver.
      # For complete list of available volume parameters please see
      # https://github.com/ceph/ceph-csi/blob/devel/docs/deploy-cephfs.md

      monitors: 128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790
      rootPath: /volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce
      provisionVolume: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprovisioned-cephfs-pvc
spec:
  accessModes:
   - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: preprovisioned-cephfs-pv # must match PV name above
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
   - name: web-server
     image: nginx
     volumeMounts:
       - mountPath: /var/lib/www/html
         name: mypvc
  volumes:
   - name: mypvc
     persistentVolumeClaim:
       claimName: preprovisioned-cephfs-pvc # must match PVC name above
       readOnly: false

$ kubectl create -f nginx-cephfs.yaml