Rook (Ceph) Deployment on OpenShift 4
OpenShift 4 (hereafter referred to as OCP 4, for OpenShift Container Platform 4) is right on the horizon, and it comes with some major changes and cool features. In this article, I will walk through installing Ceph using the Rook operator on OpenShift 4.
Before installing anything we need infrastructure; for this we will use AWS EC2 instances, with EBS volumes for Ceph. One of the cool features of OpenShift 4 is that you don't need to worry about provisioning infrastructure resources yourself. Let's see how.
Step — 1: Installing OpenShift 4.0
- Create your account on https://cloud.openshift.com/clusters/install and follow the instructions (they are straightforward).
- Before you start the OCP 4 deployment, be aware that it needs 6 EC2 instances and 6 Elastic IPs (EIPs), so request an EIP limit increase for your account from AWS support first.
./openshift-install --dir=cluster-1 create install-config
./openshift-install --dir=cluster-1 create cluster
- It should take 30–40 minutes for your OCP 4.0 cluster to be ready, and the installer will print the login details. Download the OpenShift client CLI (oc) if you don't already have it.
- Log in to your OCP 4.0 cluster
# export KUBECONFIG=/Users/karasing/ocp4.0/cluster-1/auth/kubeconfig
# oc login -u system:admin
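- Once logged in, it's worth a quick sanity check; with the default footprint you should see 3 masters and 3 workers, and oc get clusterversion shows overall cluster health:
$ oc get nodes
$ oc get clusterversion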
Step — 2: Preparation for Ceph deployment
- Create EBS volumes for each of the OCP worker nodes. By default, OCP 4.0 provisions 3 worker EC2 instances, one in each AZ of the region (e.g. us-east-1). EBS volumes are AZ-bound, so make sure you create an EBS volume in each AZ and attach one to the worker node in that AZ (a CLI sketch follows).
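- If you prefer the AWS CLI over the console, creating and attaching a volume looks roughly like this (a sketch; the AZ, volume ID, instance ID, and device name are placeholders you must adapt for each worker):
$ aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp2
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf
- Verify the attached volumes from inside the worker nodes: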
# ssh-add <private key corresponding to the public key provided during OCP deployment>
# ssh-add -L
# ssh -A core@<ocp-master-ip>
$ ssh core@<ocp-worker-1-ip>
[core@ip-10-0-167-245 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 120G 0 disk
├─xvda1 202:1 0 300M 0 part /boot
└─xvda2 202:2 0 119.7G 0 part /sysroot
xvdf 202:80 0 100G 0 disk
- Clone my version of the Ceph Rook operator configuration files. You can certainly use the upstream config files too; however, don't forget to follow the OpenShift prerequisites. My versions are tried and tested and should help you move faster.
# git clone https://github.com/ksingh7/ocp4-rook.git
# cd ocp4-rook/ceph
Step — 3: Deploying Ceph cluster using Rook Operator on OCP 4.0
- List the default Security Context Constraints (SCCs)
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
node-exporter false [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
- Create the security context constraints needed by the Rook pods
$ oc create -f scc.yaml
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
node-exporter false [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
rook-ceph true [] MustRunAs RunAsAny MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir flexVolume hostPath persistentVolumeClaim projected secret]
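- For reference, the rook-ceph SCC defined in scc.yaml looks roughly like the sketch below. I reconstructed it from the oc get scc output above and the upstream Rook OpenShift prerequisites; the users list in particular is an assumption, so treat the repo's scc.yaml as authoritative.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: rook-ceph
allowPrivilegedContainer: true
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowHostPorts: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - flexVolume
  - hostPath
  - persistentVolumeClaim
  - projected
  - secret
users:
  # assumption: service accounts used by the Rook operator and daemon pods
  - system:serviceaccount:rook-ceph-system:rook-ceph-system
  - system:serviceaccount:rook-ceph:default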
- Deploy the Rook operator
$ oc create -f operator.yaml
namespace/rook-ceph-system created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
deployment.apps/rook-ceph-operator created
- Verify Operator pod creation
$ watch "oc get pods -n rook-ceph-system"
- Once the operator is ready, you should have the following pods
$ oc get pods -n rook-ceph-system
NAME READY STATUS RESTARTS AGE
rook-ceph-agent-6crgf 1/1 Running 0 55s
rook-ceph-agent-h694k 1/1 Running 0 55s
rook-ceph-agent-nd86j 1/1 Running 0 55s
rook-ceph-operator-769d6854b9-8l96z 1/1 Running 0 95s
rook-discover-h9rnk 1/1 Running 0 55s
rook-discover-pb5xr 1/1 Running 0 55s
rook-discover-v97lz 1/1 Running 0 55s
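- If any of these pods get stuck in Pending or CrashLoopBackOff, the operator log is the first place to look:
$ oc -n rook-ceph-system logs deploy/rook-ceph-operator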
- Create the Rook cluster, which is nothing less than a full-fledged Ceph cluster including all the daemons
$ oc create -f cluster.yaml
$ watch "oc get pods -n rook-ceph"
$ oc get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-mgr-a-7f6f66b8bf-jt9q4 1/1 Running 0 95s
rook-ceph-mon-a-6f64d98cb8-f6rrm 1/1 Running 0 3m7s
rook-ceph-mon-b-5cf6d94d64-8qrs7 1/1 Running 0 2m31s
rook-ceph-mon-c-85d74649d4-r99ng 1/1 Running 0 112s
rook-ceph-osd-0-7795cd8696-nhwgx 1/1 Running 0 57s
rook-ceph-osd-1-cdcc9cb7b-886w6 1/1 Running 0 58s
rook-ceph-osd-2-7bdd4fb84c-8gjbw 1/1 Running 0 56s
rook-ceph-osd-prepare-ip-10-0-142-20-v2fsh 0/2 Completed 0 71s
rook-ceph-osd-prepare-ip-10-0-159-59-277vl 0/2 Completed 0 70s
rook-ceph-osd-prepare-ip-10-0-175-128-jvtmw 0/2 Completed 0 69s
$ oc -n rook-ceph get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr ClusterIP 172.30.155.0 <none> 9283/TCP 2m40s
rook-ceph-mgr-dashboard ClusterIP 172.30.139.121 <none> 8443/TCP 2m40s
rook-ceph-mon-a ClusterIP 172.30.113.108 <none> 6790/TCP 4m34s
rook-ceph-mon-b ClusterIP 172.30.52.164 <none> 6790/TCP 3m58s
rook-ceph-mon-c ClusterIP 172.30.70.225 <none> 6790/TCP 3m19s
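- For context, cluster.yaml defines a CephCluster custom resource. A minimal sketch in the spirit of the Rook v0.9 examples follows (the image tag and field values are illustrative; the repo's cluster.yaml is authoritative):
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2      # illustrative Ceph image tag
  dataDirHostPath: /var/lib/rook  # host path backing mon/osd state
  mon:
    count: 3
  dashboard:
    enabled: true
  storage:
    useAllNodes: true    # run OSDs on every worker
    useAllDevices: false
    deviceFilter: xvdf   # assumption: match the EBS device attached earlier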
- Create a toolbox container to verify your Ceph cluster
$ oc create -f toolbox.yaml
$ oc get pods -n rook-ceph
- Log in to the container to access your Ceph cluster
$ oc -n rook-ceph exec -it rook-ceph-tools bash
[root@rook-ceph-tools /]# ceph -s
  cluster:
    id:     83cf0021-54f9-4e97-8f41-e2ffc8796573
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum b,c,a
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   31 GiB used, 328 GiB / 359 GiB avail
    pgs:
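- A few more standard Ceph commands you can run from the toolbox to inspect the cluster:
[root@rook-ceph-tools /]# ceph osd tree
[root@rook-ceph-tools /]# ceph df
[root@rook-ceph-tools /]# ceph osd pool ls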
Step — 4: Accessing Ceph as S3 Object Storage
- Object storage CRD configuration files are kept in the object-access directory of my repo
- Ask the Rook operator to create the object storage service
$ oc create -f object.yaml
$ oc -n rook-ceph get pod -l app=rook-ceph-rgw
NAME READY STATUS RESTARTS AGE
rook-ceph-rgw-object-649f8b8955-vdjzd 1/1 Running 0 48s
$ oc -n rook-ceph get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr ClusterIP 172.30.155.0 <none> 9283/TCP 4h18m
rook-ceph-mgr-dashboard ClusterIP 172.30.139.121 <none> 8443/TCP 4h18m
rook-ceph-mon-a ClusterIP 172.30.113.108 <none> 6790/TCP 4h20m
rook-ceph-mon-b ClusterIP 172.30.52.164 <none> 6790/TCP 4h19m
rook-ceph-mon-c ClusterIP 172.30.70.225 <none> 6790/TCP 4h18m
rook-ceph-rgw-object ClusterIP 172.30.56.17 <none> 8081/TCP 71m
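- For context, object.yaml defines a CephObjectStore resource. Here is a sketch consistent with the service name (rook-ceph-rgw-object) and port (8081) shown above; the replication sizes are illustrative:
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: object
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3    # illustrative replica count
  dataPool:
    replicated:
      size: 3
  gateway:
    type: s3
    port: 8081   # matches the rook-ceph-rgw-object service above
    instances: 1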
- Create an S3 user
$ oc create -f object-user.yaml
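- object-user.yaml defines a CephObjectStoreUser; a sketch along these lines (the user name and display name are assumptions, so check the repo for the actual values):
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: s3-user          # assumption: user name implied by the secret below
  namespace: rook-ceph
spec:
  store: object          # must match the CephObjectStore name
  displayName: "S3 user" # illustrative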
- Get S3 user Access/Secret key
$ oc -n rook-ceph get secrets
$ oc -n rook-ceph describe secret rook-ceph-object-user-object-s3-user-secret
Name:         rook-ceph-object-user-object-s3-user-secret
Namespace:    rook-ceph
Labels:       app=rook-ceph-rgw
              rook_cluster=rook-ceph
              rook_object_store=object
              user=s3-user-secret
Annotations:  <none>

Type:  kubernetes.io/rook

Data
====
AccessKey:  20 bytes
SecretKey:  40 bytes
$ oc -n rook-ceph get secret rook-ceph-object-user-object-object -o yaml | grep AccessKey | awk '{print $2}' | base64 --decode
$ oc -n rook-ceph get secret rook-ceph-object-user-object-object -o yaml | grep SecretKey | awk '{print $2}' | base64 --decode
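- A tidier alternative to the grep/awk pipelines is jsonpath (using the secret name from the describe output above):
$ oc -n rook-ceph get secret rook-ceph-object-user-object-s3-user-secret -o jsonpath='{.data.AccessKey}' | base64 --decode
$ oc -n rook-ceph get secret rook-ceph-object-user-object-s3-user-secret -o jsonpath='{.data.SecretKey}' | base64 --decode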
- Let's try it out!
- Log in to the Ceph toolbox container
oc -n rook-ceph exec -it rook-ceph-tools bash
- Create an /etc/hosts entry for rook-ceph-rgw-object, which is the service name for RGW (there is probably a better way that I still need to find)
172.30.56.17 rook-ceph-rgw-object
- Export the s3cmd environment variables (the keys are the ones retrieved earlier)
export AWS_HOST=rook-ceph-rgw-object:8081
export AWS_ENDPOINT=172.30.56.17:8081
export AWS_ACCESS_KEY_ID=I2ZZ40QTI573JGLJ4KAI
export AWS_SECRET_ACCESS_KEY=lbnTNvfnnXW9vVlt1aD6qFXWTUFsE8mgSOp6Klyt
- List S3 buckets
s3cmd ls --no-ssl --host=${AWS_HOST}
- Create S3 Bucket
s3cmd mb --no-ssl --host=${AWS_HOST} --host-bucket= s3://rookbucket
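- To smoke-test the bucket, upload an object and list it back (the file name is hypothetical):
echo "hello rook" > /tmp/hello.txt
s3cmd put /tmp/hello.txt --no-ssl --host=${AWS_HOST} --host-bucket= s3://rookbucket
s3cmd ls --no-ssl --host=${AWS_HOST} --host-bucket= s3://rookbucket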
Step — 5: Exposing the Ceph Object Storage service
- Get services
$ oc -n rook-ceph get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr ClusterIP 172.30.155.0 <none> 9283/TCP 30h
rook-ceph-mgr-dashboard ClusterIP 172.30.139.121 <none> 8443/TCP 30h
rook-ceph-mon-a ClusterIP 172.30.113.108 <none> 6790/TCP 30h
rook-ceph-mon-b ClusterIP 172.30.52.164 <none> 6790/TCP 30h
rook-ceph-mon-c ClusterIP 172.30.70.225 <none> 6790/TCP 30h
rook-ceph-rgw-object ClusterIP 172.30.56.17 <none> 8081/TCP 27h
- Create an OpenShift route to expose the rook-ceph-rgw-object service
$ oc -n rook-ceph expose svc/rook-ceph-rgw-object
route.route.openshift.io/rook-ceph-rgw-object exposed
$ oc -n rook-ceph get route | awk '{ print $2 }'
HOST/PORT
rook-ceph-rgw-object-rook-ceph.apps.cluster-1.ceph-s3.com
- Your Ceph S3 service is now internet-accessible
$ curl http://rook-ceph-rgw-object-rook-ceph.apps.cluster-1.ceph-s3.com
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
- You can now access Ceph Object Storage S3 service from any s3 client such as s3cmd
$ s3cmd --access_key=I2ZZ40QTI573JGLJ4KAI --secret_key=lbnTNvfnnXW9vVlt1aD6qFXWTUFsE8mgSOp6Klyt --no-ssl --host=rook-ceph-rgw-object-rook-ceph.apps.cluster-1.ceph-s3.com --host-bucket="%(bucket)s.rook-ceph-rgw-object-rook-ceph.apps.cluster-1.ceph-s3.com" ls
2019-02-19 19:42 s3://rookbucket
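- To avoid repeating all those flags, the same settings can live in ~/.s3cfg (a sketch; substitute your own keys and route hostname):
host_base = rook-ceph-rgw-object-rook-ceph.apps.cluster-1.ceph-s3.com
host_bucket = %(bucket)s.rook-ceph-rgw-object-rook-ceph.apps.cluster-1.ceph-s3.com
access_key = <your AccessKey>
secret_key = <your SecretKey>
use_https = False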
Notes
- I have deployed OCP 4.0 on AWS (consuming 6 m4.large instances). On top of OCP, I have deployed the Ceph cluster using the Rook operator. This is my dev cluster and doesn't need to run round the clock; without any special preparation you can stop all of the EC2 instances and bring them up again when you need them. OpenShift takes care of bringing the cluster services and apps (rook-ceph) back up.
To-do List
- Create an internet-accessible Ceph dashboard service [blocker]
- Create an OCP storage class and consume Ceph RBD volumes (RWO PVs)
- Deploy CephFS and consume it through RWX PVs
Miscellaneous Commands
- Destroy the cluster and clean up local state (note you must cd back out before removing the directory)
$ cd cluster-1 ; ../openshift-install destroy cluster ; cd .. ; rm -rf cluster-1