Setting up a shared volume using GlusterFS on CentOS 7

        
I came to this solution because of a requirement where I wanted to set up a shared volume for my two-node KVM virtualization hosts. I wanted a shared storage pool on both KVM nodes so that I could migrate VMs between them.
Normally this design is used for an oVirt HA cluster. My setup was not intended to work as an HA cluster, but as a manual cluster to move VMs from one host node to another.
This setup is also useful when you need a multi-master solution where you don't want to use NFS but still need data redundancy.

So here is the setup:
I have two nodes, VM1 and VM2, each with a 5 GB partition /dev/vdb1.

On both VMs I created a mount point /gb1, formatted the partition with XFS, and mounted it there:

# mkfs.xfs /dev/vdb1

# mkdir /gb1

# mount /dev/vdb1 /gb1

The above steps need to be done on both VM nodes.
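If you want the brick mount to survive a reboot, you can also add an fstab entry on both nodes (a minimal sketch; adjust the device and mount point if your layout differs):

# echo '/dev/vdb1  /gb1  xfs  defaults  0 0' >> /etc/fstab
# mount -a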

Next, I will install the CentOS Gluster 5 release repository on both nodes:

# yum install -y centos-release-gluster5

Now install the GlusterFS server package:

# yum install glusterfs-server -y

Start and enable the gluster service:

# systemctl start glusterd  &&  systemctl enable glusterd
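To confirm the daemon actually came up on each node, you can ask systemd:

# systemctl is-active glusterd
active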

Now set up the firewall rules on both nodes (run the same commands on each node, then reload firewalld):

[root@vm2 data]# firewall-cmd --permanent --add-port=49152/tcp
success
[root@vm2 data]# firewall-cmd --permanent --add-port=24007/tcp
success
[root@vm2 data]# firewall-cmd --permanent --add-port=111/tcp
success
[root@vm1 data]# firewall-cmd --reload
success
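Gluster assigns one TCP port per brick starting at 49152, so if you plan to add more volumes or bricks later you may want to open a range instead (a sketch; size the range to your needs). Recent firewalld versions also ship a predefined glusterfs service you can enable in place of individual ports:

# firewall-cmd --permanent --add-port=49152-49251/tcp
# firewall-cmd --permanent --add-service=glusterfs
# firewall-cmd --reload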

Now probe the VMs as peers of each other. A single probe from vm1 is enough to form the cluster; probing back from vm2 just ensures vm1 is recorded by hostname rather than by IP address:

[root@vm1 data]# gluster peer probe vm2

[root@vm2 data]# gluster peer probe vm1
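At this point you can quickly verify cluster membership from either node:

# gluster pool list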

Now I will create the gluster volume gfs_data. From here on, gluster commands can be run on either node.

[root@vm1 data]# gluster volume create gfs_data replica 2 vm1:/gb1/b1 vm2:/gb1/b1

Here b1 is the brick directory that will be replicated as the gluster volume; you can choose any name. Note that Gluster warns that replica 2 volumes are prone to split-brain and recommends an arbiter or replica 3; for a lab setup like this you can confirm the prompt and continue.

Now check the volume status

# gluster volume status gfs_data detail

If the volume is stopped, start it:

# gluster volume start gfs_data

Now the gluster volume is ready to mount. I need to mount it at /data on both nodes (create the /data directory with mkdir first if it does not exist):

[root@vm2] # mount -t glusterfs vm1:/gfs_data /data
[root@vm1] # mount -t glusterfs vm2:/gfs_data /data
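Either hostname works as the mount source, because the client only fetches the volume layout from it and then talks to all bricks directly. To make the mounts persistent and tolerant of one server being down, you could add an fstab entry like this on each node (a sketch, assuming the hostnames above):

vm1:/gfs_data  /data  glusterfs  defaults,_netdev,backup-volfile-servers=vm2  0 0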

I can check the volume status at any time:

[root@vm2 ~]# gluster volume status gfs_data detail
Status of volume: gfs_data
------------------------------------------------------------------------------
Brick                : Brick vm1:/gb1/b1  
TCP Port             : 49152              
RDMA Port            : 0                  
Online               : Y                  
Pid                  : 1705               
File System          : xfs                
Device               : /dev/vdb1          
Mount Options        : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size           : 512                
Disk Space Free      : 5.0GB              
Total Disk Space     : 5.0GB              
Inode Count          : 2620928            
Free Inodes          : 2620874            
------------------------------------------------------------------------------
Brick                : Brick vm2:/gb1/b1  
TCP Port             : 49152              
RDMA Port            : 0                  
Online               : Y                  
Pid                  : 1772               
File System          : xfs                
Device               : /dev/vdb1          
Mount Options        : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size           : 512                
Disk Space Free      : 5.0GB              
Total Disk Space     : 5.0GB              
Inode Count          : 2621440            
Free Inodes          : 2621385   


[root@vm2 ~]# gluster volume info all

Volume Name: gfs_data
Type: Replicate
Volume ID: 946fa059-68f8-4685-a041-87a9e181c41c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vm1:/gb1/b1
Brick2: vm2:/gb1/b1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


You can check the peer status from any node:

[root@vm1 gb1]# gluster peer status
Number of Peers: 1

Hostname: vm2
Uuid: afdef858-f7e3-4e82-8f23-5d109834d620
State: Peer in Cluster (Connected)

[root@vm2 ~]# gluster peer status
Number of Peers: 1

Hostname: vm1
Uuid: 8daaa23d-d3bc-489d-81c5-f70b88abaceb
State: Peer in Cluster (Connected)
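
As a final check, you can write a file through the gluster mount on one node and confirm it appears on the other (testfile is just an illustrative name):

[root@vm1 ~]# touch /data/testfile
[root@vm2 ~]# ls /data
testfile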
