Posts

Showing posts from January, 2020

Set up your own Grafana on the OpenShift 3.11 platform to monitor cluster resources and objects.

Follow the steps below to perform the setup.

1. Create a new project:

   oc new-project mygrafana

2. Export the existing Grafana datasource secret from the openshift-monitoring project:

   oc get secrets grafana-datasources -n openshift-monitoring --export -o yaml > grafana-datasources.yaml

3. Create the secret in your own Grafana project:

   oc create -f grafana-datasources.yaml -n mygrafana

4. Find the Grafana image version used in the openshift-monitoring namespace:

   oc get deployment grafana -n openshift-monitoring --export -o yaml | grep 'image: grafana'
           image: grafana/grafana:5.2.1

5. Deploy Grafana in the new project using the same image version:

   oc new-app --name=grafana grafana/grafana:5.2.1 -n mygrafana

6. Mount the grafana-datasources secret into the new deployment:

   oc set volume dc/grafana --add --name=grafana-dashsources --type=secret --secret-name=grafana-datasources --mount-path=/etc/grafana/provisioning/da...
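The excerpt cuts off at the volume mount above. As a hedged continuation (assuming the service created by oc new-app is also named grafana), the remaining steps are typically to expose the deployment through a route and log in:

# Expose the Grafana service outside the cluster via a route
oc expose svc/grafana -n mygrafana

# Fetch the route hostname, then open it in a browser and log in
# (the grafana/grafana image defaults to admin/admin unless overridden)
oc get route grafana -n mygrafana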

Create and set up a Ceph storage cluster in a few easy steps.

Ceph is a widely used open source storage platform. It provides high performance, reliability, and scalability. The Ceph free distributed storage system provides an interface for object, block, and file-level storage. Ceph is built to provide a distributed storage system without a single point of failure. A Ceph cluster requires these components:

    Ceph OSDs (ceph-osd) - Handle data storage, replication, and recovery. A Ceph cluster needs at least two Ceph OSD servers. I will use three CentOS 7 OSD servers here.
    Ceph Monitor (ceph-mon) - Monitors the cluster state, the OSD map, and the CRUSH map. I will use one server.
    Ceph Metadata Server (ceph-mds) - Needed to use Ceph as a file system.

For a newcomer to Ceph, installing and setting up a cluster is a big challenge. Here I will show you a very simple way to set up a Ceph storage cluster by performing f...
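The excerpt is cut off, but to make the simple setup concrete, here is a minimal sketch of the classic ceph-deploy flow on CentOS 7. The hostnames ceph-mon1 and ceph-osd1 through ceph-osd3 and the /dev/sdb data disks are placeholders, and the exact "osd create" syntax depends on your ceph-deploy version:

# Run from an admin node with passwordless SSH to every cluster node
mkdir my-cluster && cd my-cluster
ceph-deploy new ceph-mon1                          # write the initial ceph.conf and monitor keyring
ceph-deploy install ceph-mon1 ceph-osd1 ceph-osd2 ceph-osd3
ceph-deploy mon create-initial                     # deploy the monitor and gather keys
ceph-deploy osd create --data /dev/sdb ceph-osd1   # one OSD per raw data disk
ceph-deploy osd create --data /dev/sdb ceph-osd2
ceph-deploy osd create --data /dev/sdb ceph-osd3
ceph-deploy admin ceph-mon1                        # push the admin keyring
ssh ceph-mon1 ceph -s                              # verify cluster health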

Using Ceph RBD block storage with an OpenShift 3.11 cluster

OpenShift Enterprise clusters can be provisioned with persistent storage using Ceph RBD. Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the Ceph RBD-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, which makes the volume more susceptible to conflicts. This topic presumes some familiarity with OpenShift Enterprise and Ceph RBD; see the Persistent Storage concept topic for details on the OpenShift Enterprise persistent volume (PV) framework in general.

Provisioning: To provision Ceph volumes, the following are required:

    An existing storage device in your underlying infrastructure.
    The Ceph key to be used in an OpenShift Enterprise secret object.
    The Ceph image name.
    The file system type ...
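To make the provisioning list concrete, here is a hedged sketch of the two objects involved. The monitor address 192.168.1.1:6789, the rbd pool, the ceph-image image name, and the key value are placeholders for your own environment, and the RBD image must already exist in the pool:

# Store the Ceph client key in a secret of type kubernetes.io/rbd
oc create secret generic ceph-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key='AQA9...'   # your Ceph client key here

# Create a PV that points at the RBD image and references the secret
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.1.1:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
EOF

A pod or PVC in the project can then claim this volume through the normal PV/PVC binding flow.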