CephFS
Below we give examples of how to use CephFS with different container engines.
Check here for information on how to create and manage CephFS shares using Manila.
Docker Swarm
Not yet supported.
Kubernetes
There are two ways to integrate with CephFS storage:
- Auto Provision: You only define the claim, and Kubernetes provisions the storage automatically in OpenStack Manila / CephFS
- Existing Shares: You have defined your Manila shares, and pass the information explicitly to Kubernetes
NOTE: For Kubernetes versions <=1.13, the provisioner field in the StorageClass should be csi-cephfsplugin.
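For such clusters only the provisioner value changes; a minimal sketch (the metadata name is illustrative, the remaining fields are as in the examples further down):
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: my-cephfs-share
provisioner: csi-cephfsplugin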
Auto Provisioning
In this case you do not need to explicitly create the Manila shares in advance: just specify how much storage you need and Kubernetes handles the rest.
Example for an nginx server:
# vim nginx-cephfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: manila-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: geneva-cephfs-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: manila-cephfs-pvc
          readOnly: false
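To try it out, apply the manifest and check that the claim binds and the pod starts (a typical check, assuming the manifest was saved as nginx-cephfs.yaml as above):
# kubectl create -f nginx-cephfs.yaml
# kubectl get pvc manila-cephfs-pvc
# kubectl get pods -l app=nginx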
Things that might need to be customized:
- storageClassName: the example uses geneva-cephfs-testing; replace it with another share type as needed (share types map to storage classes in Kubernetes). To list the available storage classes:
# kubectl get storageclass
NAME                    PROVISIONER          AGE
geneva-cephfs-testing   manila-provisioner   1d
meyrin-cephfs           manila-provisioner   1d
- storage: 1G in the example above; set this to the storage size you want
- osSecretName: os-trustee in the kube-system namespace in the example. This default creates shares in the same project as the cluster.
If you want to create Manila shares in a different project, you'll need to create an appropriate secret and pass it in osSecretName. Instructions and the required fields in the secret are available here.
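As a rough sketch only (the key names below are illustrative; check the linked instructions for the actual required fields), such a secret is created like any other generic secret and then referenced from the StorageClass:
$ kubectl -n kube-system create secret generic my-manila-secret \
    --from-literal=os-authURL=<keystone-auth-url> \
    --from-literal=os-trustID=<trust-id>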
Existing Manila Share
You might want to mount an existing Manila share instead of auto provisioning one.
Note: A share created with size 1 is 1,073,741,824 bytes (1 GiB), but there is currently an issue that prevents the persistent volume claim from binding when the requested size is greater than 1,000,000,000 bytes (1 GB). Therefore, in the following examples the claim size is given with units of G instead of Gi. Once attached to a pod, the full size is available for use.
Here's an example:
$ manila list
+--------------------------------------+-----------+------+-------------+-----------+-----------+-----------------------+------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+-----------+------+-------------+-----------+-----------+-----------------------+------+-------------------+
| a9f56250-bd30-4e88-9285-3a4eea064725 | myshare01 | 1 | CEPHFS | available | False | Geneva CephFS Testing | | nova |
+--------------------------------------+-----------+------+-------------+-----------+-----------+-----------------------+------+-------------------+
$ manila access-list myshare01
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| id | access_type | access_to | access_level | state | access_key |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| 314fe7f7-1ba5-4c36-9ce3-73868f888b18 | cephx | testid | rw | active | AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
And a corresponding nginx deployment:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: manila-csicephfs-share
provisioner: manila-provisioner
parameters:
  type: "Geneva CephFS Testing"
  zones: nova
  osSecretName: os-trustee
  osSecretNamespace: kube-system
  protocol: CEPHFS
  backend: csi-cephfs
  csi-driver: cephfs.csi.ceph.com
  osShareID: a9f56250-bd30-4e88-9285-3a4eea064725
  osShareAccessID: 314fe7f7-1ba5-4c36-9ce3-73868f888b18
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: manila-csicephfs-share
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: manila-cephfs-pvc
          readOnly: false
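As in the auto provisioning case, apply the manifest and verify that the claim binds to the pre-existing share (the osShareID above) rather than triggering a new one:
$ kubectl create -f nginx-cephfs.yaml
$ kubectl get pvc manila-cephfs-pvc
$ kubectl get pv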
Existing CephFS Share
It is also possible to skip the Manila provisioner and reference the CephFS share directly (though if the share is managed by Manila, there is no reason to do this).
Here's an example:
$ manila share-export-location-list myshare01
+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+
| ID | Path | Preferred |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+
| b9cbceb9-9574-4fa8-9a04-306ef748e3f5 | 128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790:/volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce | False |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+
$ manila access-list myshare01
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| id | access_type | access_to | access_level | state | access_key |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
| 314fe7f7-1ba5-4c36-9ce3-73868f888b18 | cephx | testid | rw | active | AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== |
+--------------------------------------+-------------+-----------+--------------+--------+------------------------------------------+
You will need a StorageClass, a PersistentVolumeClaim, and a Secret with the access credentials:
$ kubectl -n default create secret generic csi-cephfs-secret \
--from-literal=userKey=AQDCVu5ZwQ6EJBABttVTlFDraH6ZARwnk7EZEA== \
--from-literal=userID=testid
$ vim nginx-cephfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  monitors: 128.142.39.77:6790,128.142.39.144:6790,188.184.86.25:6790,188.184.94.56:6790,188.185.66.208:6790
  provisionVolume: "false"
  # Required if provisionVolume is set to false
  rootPath: /volumes/_nogroup/f01d729f-615b-4ef9-a954-b3dc6361d0ce
  # The secret has to contain user and/or admin credentials.
  csiProvisionerSecretName: csi-cephfs-secret
  csiProvisionerSecretNamespace: default
  csiNodeStageSecretName: csi-cephfs-secret
  csiNodeStageSecretNamespace: default
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: csi-cephfs
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc
      readOnly: false
$ kubectl create -f nginx-cephfs.yaml
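A quick way to confirm the share is mounted where expected (the path matches the mountPath in the pod spec above):
$ kubectl exec web-server -- df -h /var/lib/www/html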