Multipath for Persistent iSCSI LUNs on a Single-Wired Service

After rebooting one of our iSCSI-based Oracle servers one day, we found ourselves in a bit of a dilemma. Through our cloud vendor's SAN engineering support, we discovered that persistent iSCSI device mapping is NOT part of NetApp's hardware-guaranteed functionality. The engineer also stated that "all existing iSCSI presentations are currently not using multipathing." Another engineer added, "relying on them [iSCSI LUNs] to stay the same in my experience is sketchy: it works until it doesn't." As it turned out, a VMware upgrade had caused the problem.
Prior to the VMware VM hardware upgrade from version 4 to version 7, the iSCSI devices were mapped in order, even when new LUNs were presented. This is no longer the case. All multipathed iSCSI LUNs are now at risk once upgraded to the new VM hardware version, due to the way VMware realigns the IRQs for the PCI devices.
Our 'wonderful' vendor's SAN engineering team recommended that we either use UUIDs or labels for the LUNs within /etc/fstab for each partition mount, use a UDEV configuration on the RHEL VMs, or reconfigure all of the Oracle RHEL VMs to use a single iSCSI LVM mounted configuration. Using a labeled/UUID path configuration is quick, but has limitations when resizing or when new LUNs are added to the VM. (You would have to re-extract the new UUID each time something was modified on the iSCSI side.) The UDEV configuration is just nasty in itself (I promise you don't want to see the configuration for it) and an even bigger pain to maintain going forward.
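To illustrate why the UUID approach is brittle, here is what such an /etc/fstab entry looks like (the UUID shown is a made-up placeholder, not one of ours). Every time a LUN is recreated or resized, you would have to re-run 'blkid' against the partition and paste the new UUID back in:

```
# Hypothetical example only -- get the real value with: blkid /dev/sdb1
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /u02  ext3  _netdev  0 0
```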
Per the vendor's SAN engineering team: "should the NetApp 6040 HA filer ever fail, lock up, lose network connectivity or otherwise be unable to serve data -- the entire vFiler would be failed over to the alternate filer head (another NetApp 6040) -- IP address and all. Therefore at no time would a LUN suddenly appear to be presented from a different IP address, and the UUIDs and WWIDs would appear seamless as well." Given this context, a former co-worker/fellow SAN engineer and I worked on a solution to see whether it is possible to configure multipath to allow for persistent iSCSI LUNs on a single wired service presented to the RHEL VM. Guess what... it IS possible, and it actually handles LUN traversal better and delivers greater performance overall!
The steps in the configuration might seem a bit long, but the overall solution is very easy to maintain once established.

NOTE: Applications that are going to access mount points must be turned off/disabled prior to doing steps below!

1) If mounted, umount all previous iSCSI LUN mount points:
    Type: 'umount /u02 /u03 /u04 /u05 /u06'

2) Install the multipath and lsscsi packages.
    Type: 'yum install device-mapper-multipath lsscsi'

3) Get a list of available iSCSI LUNs.
    Type: 'lsscsi -l'

4) Extract the World Wide Identifier (WWID) from the SCSI IDs.

    Type: 'grep sd /proc/partitions | while read -r a b c d; do echo -n "$d"$'\t'" scsi_id="; echo "$d" | tr -d "[:digit:]" | xargs -i scsi_id -g -s /block/{}; done'

    NOTE: Make note ONLY of the entries with a '1' after them (e.g. sdb1) -- these are the active partitions. Ignore /dev/sda (the root disk).
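The filtering in that note can be scripted. Below is a minimal sketch that picks out the active partitions (names ending in a digit, skipping sda) from /proc/partitions-style output; the sample data is illustrative, not from a real host -- on a live system you would read /proc/partitions directly instead of the here-string.

```shell
# Sample /proc/partitions-style data (illustrative only).
sample='major minor  #blocks  name

   8        0   8388608 sda
   8        1   8386560 sda1
   8       16  20971520 sdb
   8       17  20969472 sdb1
   8       32  20971520 sdc
   8       33  20969472 sdc1'

# Keep only partition entries (sd device followed by a partition number),
# excluding sda and its partitions.
active=$(echo "$sample" | awk '$4 ~ /^sd[b-z][0-9]+$/ {print $4}')
echo "$active"
```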

5) Rename the old /etc/multipath.conf out of the way.
    Type: 'mv /etc/multipath.conf /etc/multipath.conf-OLD'

6) Create new /etc/multipath.conf and use the following template:

defaults {
        user_friendly_names no
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^sda"
        devnode "^sda[0-9]"
        #wwid "*"
}

devices {
       device {
               vendor                  "NETAPP"
               product                 "LUN"
               path_grouping_policy    group_by_prio
               getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
               path_checker            readsector0
               features                "1 queue_if_no_path"
               failback                0
       }
}

multipaths {
        multipath {
                wwid    360a98000486e6165644a2f6130682d6c
                alias   u02
        }
        multipath {
                wwid    360a98000486e6165644a2f6130694d4f
                alias   u03
        }
        multipath {
                wwid    360a98000486e6165644a2f61306a4a64
                alias   u04
        }
        multipath {
                wwid    360a98000486e61626b4a2f61306b752d
                alias   u05
        }
        multipath {
                wwid    360a98000486e6165644a2f61306d3047
                alias   u06
        }
}

7) Using the above template, replace the wwid values with the WWIDs extracted via the scsi_id loop in step 4.
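If you have more than a handful of LUNs, the multipaths{} stanzas can be generated rather than hand-edited. A small sketch, assuming a list of "wwid alias" pairs (the WWIDs below are placeholders, not real values):

```shell
# "wwid alias" pairs -- placeholder WWIDs for illustration only.
pairs='360a98000deadbeef00000001 u02
360a98000deadbeef00000002 u03'

# Emit a multipaths{} section in the same layout as the template above.
conf=$(
    echo 'multipaths {'
    echo "$pairs" | while read -r wwid alias; do
        printf '        multipath {\n                wwid    %s\n                alias   %s\n        }\n' "$wwid" "$alias"
    done
    echo '}'
)
echo "$conf"
```

Redirect the output into /etc/multipath.conf after the defaults, blacklist, and devices sections.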

8) Configure the multipath dm-* devices.
    Type: 'multipath'       (build the multipath maps)
    Type: 'multipath -F'    (flush unused maps)
    Type: 'multipath -r'    (force a reload)

9) Start up multipath daemon.
    Type '/etc/init.d/multipathd start'

10) Verify that the multipath devices found match the correct iSCSI LUN sizes.
    Type 'multipath -v2 -l'

11) Verify that multipath devices were created correctly using dm-* tags.
    Type 'ls -l /dev/mpath'

12) Match up /dev/sd* devices to multipath device numbering.
    Type: 'lsscsi -l'
    Type: 'multipath -l'
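The matching can be done with a short awk filter. This sketch parses sample text that mimics 'multipath -l' output on RHEL 5 (aliases, WWIDs, and dm numbers below are made up); on a live host you would pipe the real command's output through the same awk script:

```shell
# Illustrative 'multipath -l'-style output (placeholder values).
sample='u02 (360a98000deadbeef00000001) dm-2 NETAPP,LUN
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:0 sdb 8:16  [active][ready]
u03 (360a98000deadbeef00000002) dm-3 NETAPP,LUN
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:1 sdc 8:32  [active][ready]'

# Print "alias dm-device sd-device" for each path line.
mapping=$(echo "$sample" | awk '
    $2 ~ /^\(/ { alias = $1; dm = $3; next }   # header: alias (wwid) dm-N ...
    $3 ~ /^sd/ { print alias, dm, $3 }         # path:   \_ H:C:T:L sdX ...
')
echo "$mapping"
```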

13) Using the aliases from step 6 and the LUNs verified in step 10,
    create new /etc/fstab entries for the iSCSI mount points (append 'p1' for the active partition):

    /dev/mpath/u02p1      /u02    ext3    _netdev 0 0
    /dev/mpath/u03p1      /u03    ext3    _netdev 0 0
    /dev/mpath/u04p1      /u04    ext3    _netdev 0 0
    /dev/mpath/u05p1      /u05    ext3    _netdev 0 0
    /dev/mpath/u06p1      /u06    ext3    _netdev 0 0

14) Attempt to mount the iSCSI devices under the multipath daemon instance.
    Type: 'mount -a -v'

15) Auto-start the multipath daemon on reboot.
    Type 'chkconfig multipathd on'

16) Reboot the VM.

17) Verify that the iSCSI and multipath daemons start, and that the iSCSI LUNs mount correctly.

18) Contact the DBA or application engineer to start the application and verify that the mount points are correctly configured.
    Also check that the data they contain is accessible and preserved.
