Using Ceph RBD block storage with an OpenShift 3.11 cluster
OpenShift Enterprise clusters can be provisioned with persistent storage using Ceph RBD.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project.
While the Ceph RBD-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
This topic presumes some familiarity with OpenShift Enterprise and Ceph RBD.
See the Persistent Storage concept topic for details on the OpenShift Enterprise persistent volume (PV) framework in general.
Provisioning
To provision Ceph volumes, the following are required:
An existing storage device in your underlying infrastructure.
The Ceph key to be used in an OpenShift Enterprise secret object.
The Ceph image name.
The file system type on top of the block storage (e.g., ext4).
ceph-common installed on each schedulable OpenShift Enterprise node in your cluster (see the install command just below).
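On RHEL/CentOS nodes, ceph-common is typically installed with yum (this assumes the Ceph repositories are already configured on the node):
# yum install -y ceph-common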
1) Create a new pool in Ceph (run on a Ceph mon node).
[root@ceph1 ~]# ceph osd pool create kube 128
pool 'kube' created
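On Ceph Luminous and later, you may also want to tag the new pool for RBD use so it does not raise a health warning about a pool without an associated application:
[root@ceph1 ~]# ceph osd pool application enable kube rbd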
2) Create a client keyring for use with OpenShift.
[root@ceph1 ~]# ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
Now run ceph auth get-key to get the generated Ceph client token in base64:
[root@ceph1 ~]# ceph auth get-key client.admin | base64
Note the value; it will be used for the admin secret (ceph-secret) in step 3.
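The ceph-user-secret in step 5 needs the key of the client.kube user created above; retrieve it the same way:
[root@ceph1 ~]# ceph auth get-key client.kube | base64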
3) Create the Ceph secret YAML on OCP.
# vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
data:
  key: QVFENktpUmVFQmdKSmhBQTZtMGxXVlhxNGVpQ0tlVGRKRlg5a3c9PQ==
type: kubernetes.io/rbd
- Use the base64 admin key value generated in step 2 as the data key. The namespace is default so that it matches the adminSecretNamespace set on the storage class in step 6.
4) Create the ceph-secret in the default namespace so that the storage class can use it to provision dynamic volumes for every project.
# oc create -f ceph-secret.yaml
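You can verify that the secret landed in the default namespace:
# oc get secret ceph-secret -n default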
5) Create a new file for the Ceph user secret.
# vim ceph-user-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
data:
  key: QVFBK2F5UmVMNTFFT0JBQVJtTnV0QnJpWFh1dTRTK0Y5LzMrdnc9PQ==
type: kubernetes.io/rbd
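Note that the project template in step 7 only covers newly created projects; for an existing project, create the user secret manually (myproject below is a placeholder for your project name):
# oc create -f ceph-user-secret.yaml -n myproject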
6) Now create a storage class for Ceph RBD.
# vim storageClass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: cephblock
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.21:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
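Apply the file and confirm that cephblock shows up as the default storage class:
# oc create -f storageClass.yaml
# oc get storageclass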
7) The most important part: make the ceph-user-secret available by default in every newly created namespace/project.
To do this, we will create a default project template and change master-config.yaml to bootstrap new projects with it.
Get the current default project template in YAML:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
Now edit this template YAML and add the ceph-user-secret content to its objects section, after the Project object:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: QVFBK2F5UmVMNTFFT0JBQVJtTnV0QnJpWFh1dTRTK0Y5LzMrdnc9PQ==
  type: kubernetes.io/rbd
Save the file and create the template in the default namespace:
# oc create -f template.yaml -n default
Now perform the below steps on each master node, one at a time:
- Edit the master-config.yaml file (/etc/origin/master/master-config.yaml) as below:
...
projectConfig:
  projectRequestTemplate: "default/project-request"
...
Save the file and restart the below services:
# systemctl restart origin-master-api origin-master-controllers
Note: if using OKD (the open source version), restart the services using the below commands instead:
# master-restart api
# master-restart controllers
8) Now create a new project and list its secrets. You should see the ceph-user-secret created automatically from the project template.
# oc new-project rbdtest
# oc get secret
Furthermore, you can try creating a claim and a pod that uses the PVC. The claim below does not set a storageClassName, so it binds to cephblock, which was annotated as the default storage class in step 6.
[root@occontrol ~]# oc new-project testrbdceph
Now using project "testrbdceph" on server "https://occontrol.mylab.local:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
to build a new example application in Ruby.
[root@occontrol ~]# oc create -f claim2.yaml
persistentvolumeclaim/ceph-claim created
[root@occontrol ~]# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ceph-claim Bound pvc-d3c7d1df-3ad3-11ea-ac41-525400d82ea3 2Gi RWO cephblock 4s
[root@occontrol ~]# oc create -f busybox.yaml
pod/ceph-pod1 created
[root@occontrol ~]# oc get po
NAME READY STATUS RESTARTS AGE
ceph-pod1 1/1 Running 0 32s
--
[root@occontrol ~]# cat claim2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
[root@occontrol ~]# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
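As a quick sanity check, you can confirm the RBD volume is mounted at the path defined above from inside the pod:
# oc exec ceph-pod1 -- df -h /usr/share/busybox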
Done! Enjoy!