CephFS
Warning
This documentation is deprecated; please check here for its new home.
Below we give examples of how to use CephFS with different container engines.
Check here for information on how to create and manage CephFS shares using Manila.
Docker Swarm
Not yet supported.
Kubernetes
There are two ways to integrate with CephFS storage:
- Auto Provision: You only define the claim, and Kubernetes provisions the storage automatically in OpenStack Manila / CephFS
- Existing Shares: You have already created your Manila shares, and pass their information explicitly to Kubernetes
NOTE: For Kubernetes versions <= 1.13, the provisioner field in the StorageClass should be csi-cephfsplugin
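For illustration, a minimal sketch of such a StorageClass on a legacy cluster (the metadata.name is hypothetical; the parameters would otherwise match the StorageClass example further down this page):
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: manila-cephfs-legacy   # hypothetical name, for illustration only
provisioner: csi-cephfsplugin  # instead of manila-provisioner on <= 1.13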
Auto Provisioning
In this case you do not need to explicitly create the Manila shares in advance, just specify how much storage you need and Kubernetes handles the rest.
Example for an nginx server:
# vim nginx-cephfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: manila-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: geneva-cephfs-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/lib/www/html
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: manila-cephfs-pvc
            readOnly: false
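After adjusting the fields listed below as needed, the manifest can be applied and the claim checked; a typical sequence (output omitted) looks like:
$ kubectl apply -f nginx-cephfs.yaml
$ kubectl get pvc manila-cephfs-pvc   # should reach status Bound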
Things that might need to be customized:
- storageClassName: the example uses geneva-cephfs-testing; replace it with another share type as needed (share types map to storage classes in Kubernetes). To list the available storage classes:
# kubectl get storageclass
NAME                    PROVISIONER          AGE
geneva-cephfs-testing   manila-provisioner   1d
meyrin-cephfs           manila-provisioner   1d
- storage: 1G in the example; set it to the storage size you want
- osSecretName: os-trustee in the kube-system namespace in the example. This default creates shares in the same project as the cluster
If you want to create Manila shares in a different project, you'll need to create an appropriate secret and pass its name in osSecretName. Instructions and the required fields for the secret are available here.
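As a rough sketch only (the key names and values below are assumptions for illustration; the authoritative list of required fields is in the instructions linked above), such a secret could be created along these lines and then referenced via osSecretName: my-os-secret and osSecretNamespace:
$ kubectl -n kube-system create secret generic my-os-secret \
    --from-literal=os-authURL=https://keystone.example.org:5000/v3 \
    --from-literal=os-trustID=<trust-id-scoped-to-the-target-project>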
Existing Manila Share
You might want to mount an existing Manila share instead of auto-provisioning one.
Note: A share created with size 1 is 1,073,741,824 bytes (1 GiB), but there is currently an issue that prevents the persistent volume claim from binding when the requested size exceeds 1,000,000,000 bytes (1 GB) for a size 1 volume. Therefore, in the following examples the claim size is given in units of G instead of Gi. Once attached to a pod, the full size is available for use.
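The arithmetic behind this:
1Gi = 2^30 bytes = 1,073,741,824 bytes   (actual capacity of a size 1 share)
1G  = 10^9 bytes = 1,000,000,000 bytes   (largest request that currently binds)
A claim requesting 1Gi therefore exceeds the limit and stays Pending, while a claim requesting 1G binds.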
Here's an example:
$ openstack share list
+--------------------------------------+---------------------------+------+-------------+-----------+-----------+-----------------------+-----------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+---------------------------+------+-------------+-----------+-----------+-----------------------+-----------------------+-------------------+
| a9f56250-bd30-4e88-9285-3a4eea064725 | myshare01 | 1 | CEPHFS | available | False | Geneva CephFS Testing | manila@cephfs1#cephfs | nova |
+--------------------------------------+---------------------------+------+-------------+-----------+-----------+-----------------------+-----------------------+-------------------+
$ openstack share access list myshare01
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
| id | access_type | access_to | access_level | state | access_key | created_at | updated_at |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
| 314fe7f7-1ba5-4c36-9ce3-73868f888b18 | cephx | testid | rw | active | AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== | 2021-06-04T14:14:50.000000 | 2021-06-04T14:14:51.000000 |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
These two IDs are used to mount the share in the cluster.
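If you script this, both IDs can be captured with the openstack CLI's value formatter (assuming the share has a single access rule):
$ SHARE_ID=$(openstack share show myshare01 -f value -c id)
$ ACCESS_ID=$(openstack share access list myshare01 -f value -c id)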
For Kubernetes v1.21 and higher
In clusters running Kubernetes v1.21 or higher, an existing Manila share is mounted by creating a PersistentVolume directly, instead of a StorageClass.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-volume
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1G
  csi:
    driver: cephfs.manila.csi.openstack.org
    volumeHandle: a9f56250-bd30-4e88-9285-3a4eea064725
    nodeStageSecretRef:
      name: os-trustee
      namespace: kube-system
    nodePublishSecretRef:
      name: os-trustee
      namespace: kube-system
    volumeAttributes:
      shareID: a9f56250-bd30-4e88-9285-3a4eea064725
      shareAccessID: 314fe7f7-1ba5-4c36-9ce3-73868f888b18
In addition to shareID and shareAccessID, which are set to the IDs of the share and of its access rule, volumeHandle must also be set to the share ID, as its value must be unique among the PersistentVolumes in the cluster.
A PersistentVolumeClaim can then be created for this PersistentVolume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-manila-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: ""
  volumeName: existing-volume
The storageClassName value must be set to an empty string, as this PVC is not using a storage class.
Once the PVC has changed status to Bound, it can be mounted to a pod in the same way as before.
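For example, a minimal pod using this claim (the pod and container names are illustrative, mirroring the earlier nginx examples):
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: existing-manila-pvc
        readOnly: false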
For Kubernetes v1.20 and lower
In clusters running Kubernetes v1.20 or lower, a PersistentVolumeClaim is created through a custom StorageClass.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: manila-csicephfs-share
provisioner: manila-provisioner
parameters:
  type: "Geneva CephFS Testing"
  zones: nova
  osSecretName: os-trustee
  osSecretNamespace: kube-system
  protocol: CEPHFS
  backend: csi-cephfs
  csi-driver: cephfs.csi.ceph.com
  osShareID: a9f56250-bd30-4e88-9285-3a4eea064725
  osShareAccessID: 314fe7f7-1ba5-4c36-9ce3-73868f888b18
The share and access IDs are set on the last two lines.
A PersistentVolumeClaim can then be created using that StorageClass, and mounted to a pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: manila-csicephfs-share
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/lib/www/html
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: manila-cephfs-pvc
            readOnly: false
Existing CephFS Share
It is also possible to skip the Manila Provisioner and directly reference the CephFS Share (but if the share is in Manila, there's no reason to do this).
Here's an example:
$ openstack share show myshare01 -c export_locations
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| export_locations | |
| | id = b9cbceb9-9574-4fa8-9a04-306ef748e3f5 |
| | path = 128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790:/volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce |
| | preferred = False |
| | share_instance_id = f01d729f-615b-4ef9-a954-b3dc6361d0ce |
| | is_admin_only = False |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
$ openstack share access list myshare01
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
| id | access_type | access_to | access_level | state | access_key | created_at | updated_at |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
| 314fe7f7-1ba5-4c36-9ce3-73868f888b18 | cephx | testid | rw | active | AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== | 2021-06-04T14:14:50.000000 | 2021-06-04T14:14:51.000000 |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
You will need a PersistentVolume, a PersistentVolumeClaim that binds to it, and a secret with the access credentials:
$ kubectl -n default create secret generic csi-cephfs-secret \
--from-literal=userKey=AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== \
--from-literal=userID=testid
$ vim nginx-cephfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprovisioned-cephfs-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    # Unique identifier of the volume.
    # Can be anything, but we recommend setting it to the PersistentVolume name.
    volumeHandle: preprovisioned-cephfs-pv
    # The secret name and namespace in both nodeStageSecretRef
    # and nodePublishSecretRef must match the Secret created above.
    nodeStageSecretRef:
      name: csi-cephfs-secret
      namespace: default
    nodePublishSecretRef:
      name: csi-cephfs-secret
      namespace: default
    volumeAttributes:
      # The volume attributes below are passed to the cephfs-csi driver.
      # For a complete list of available volume parameters please see
      # https://github.com/ceph/ceph-csi/blob/devel/docs/deploy-cephfs.md
      monitors: 128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790
      rootPath: /volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce
      provisionVolume: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprovisioned-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: preprovisioned-cephfs-pv # must match the PV name above
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: preprovisioned-cephfs-pvc # must match the PVC name above
        readOnly: false
$ kubectl create -f nginx-cephfs.yaml
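Once the pod is running, the claim and the mount can be verified, for example:
$ kubectl get pvc preprovisioned-cephfs-pvc    # should be Bound
$ kubectl exec web-server -- df -h /var/lib/www/html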