
Monday, May 20, 2013

Redhat Linux - How-to for GFS Filesystems


You can use either of the following formats to create a clustered GFS file system:
#gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice
#mkfs -t gfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice

You can use either of the following formats to create a local GFS file system:
#gfs_mkfs -p lock_nolock -j NumberJournals BlockDevice
#mkfs -t gfs -p lock_nolock -j NumberJournals BlockDevice

At each node, mount the GFS file systems.
Command usage:
mount BlockDevice MountPoint
mount -o acl BlockDevice MountPoint
The -o acl mount option allows manipulation of file ACLs. If a file system is mounted without the -o acl option, users are allowed to view ACLs (with getfacl), but are not allowed to set them (with setfacl).
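For example (a sketch; the mount point /mnt/gfs1, the file name, and the user name are made up for illustration):

```shell
# Mount with ACL support, then set and inspect an ACL.
# /dev/vg01/lvol0 is the LV from the mkfs example further down;
# /mnt/gfs1, shared.dat and user "oracle" are hypothetical names.
mount -o acl /dev/vg01/lvol0 /mnt/gfs1
setfacl -m u:oracle:rw /mnt/gfs1/shared.dat   # grant read/write to one user
getfacl /mnt/gfs1/shared.dat                  # display the resulting ACL
```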

Formatting the logical Volume
#gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0

The gfs_jadd command, which adds journals to a GFS file system, must be run on a mounted file system, but it only needs to be run on one node in the cluster. All the other nodes sense that the expansion has occurred.
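For example, to add two journals (needed before the file system can be mounted on more nodes than the journal count chosen at mkfs time). A sketch using the mount point from the grow example below; note that journals are placed in free space past the end of the file system, so the underlying LV must be extended first:

```shell
# Make room for the journals first, then add them from one (mounted) node
lvextend -L +1G /dev/vgcl_gfs_san_eos/vol01_lv   # journal space; size is an example
gfs_jadd -v -j 2 /db/eospr1/vol01                # add 2 journals to the mounted fs
```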

#lvextend -L +80G /dev/vgcl_gfs_san_eos/vol01_lv
#gfs_grow -v /dev/vgcl_gfs_san_eos/vol01_lv /db/eospr1/vol01
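After growing, it is worth confirming the new size (same mount point as above; gfs_tool ships with the GFS utilities):

```shell
df -h /db/eospr1/vol01         # size as seen by the OS
gfs_tool df /db/eospr1/vol01   # GFS view: resource groups, journals, free blocks
```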

Sunday, May 19, 2013

Redhat Linux 5,6 - How to Scan and Configure New LUNs from Storage


SCAN AND CONFIGURE NEW LUNS on RHEL 5 and RHEL 6

If you have 4 FC ports to which the storage LUNs are assigned,
run the for loop below to scan the LUNs on all 4 ports.

for i in host0 host1 host2 host3
do
    echo "1" > /sys/class/fc_host/$i/issue_lip
    echo "- - -" > /sys/class/scsi_host/$i/scan
done

After this you can run fdisk -l or multipath -ll to see the new LUNs. Once the new LUNs are visible, you can use them via LVM or raw partitioning.
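If you would rather not hard-code the host numbers, a sketch that rescans every FC host present on the system (the SYSFS variable just makes the path overridable; it defaults to the real /sys):

```shell
# Rescan all FC hosts found under sysfs
SYSFS=${SYSFS:-/sys}
for h in "$SYSFS"/class/fc_host/host*; do
    [ -d "$h" ] || continue          # skip if no FC hosts exist
    i=$(basename "$h")
    echo "1"     > "$SYSFS/class/fc_host/$i/issue_lip"
    echo "- - -" > "$SYSFS/class/scsi_host/$i/scan"
done
```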

If you want to rescan an already-presented LUN (for example, after it has been resized on the array), run the commands below.


echo 1 > /sys/block/sda/device/rescan
echo 1 > /sys/block/sdb/device/rescan
echo 1 > /sys/block/sdc/device/rescan
echo 1 > /sys/block/sdd/device/rescan
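The same four commands as a loop; if the resized LUN backs an LVM physical volume, follow up with pvresize so LVM sees the extra space (device names are examples):

```shell
for d in sda sdb sdc sdd; do
    echo 1 > /sys/block/$d/device/rescan
done
pvresize /dev/sdd   # only if this disk is an LVM PV that was grown on the array
```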

Thursday, April 14, 2011

Redhat Cluster Commands - Linux

Checking status of the cluster
# clustat
Moving a service/package over to another node
# clusvcadm -r <service> -m <node>
Starting a service/package
# clusvcadm -e <service> -m <node>
Stopping/disabling a service/package
# clusvcadm -d <service>

Resource Group Locking (for cluster Shutdown / Debugging):

clusvcadm -l --Lock the local resource group manager. This prevents resource groups from starting on the local node.
clusvcadm -S --Show lock state.
clusvcadm -u --Unlock the local resource group manager. This allows resource groups to start on the local node.
clusvcadm -Z <group> --Freeze the group in place.
clusvcadm -U <group> --Unfreeze/thaw the group.
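A typical freeze/thaw sequence, using one of the service names from the clustat output below as an example:

```shell
clusvcadm -Z service:fibrbase   # freeze: rgmanager stops monitoring/failing over
# ... perform maintenance on the service's resources ...
clusvcadm -U service:fibrbase   # thaw: normal monitoring and failover resume
```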

[root@host1 ~]# clustat -l
Cluster Status for host1cl @ Thu Apr 14 02:25:29 2011
Member Status: Quorate

Member Name                  ID   Status
------ ----                  ---- ------
host1.com                    1    Online, Local, rgmanager
host2.com                    2    Online, rgmanager

Service Information
------- -----------

Service Name : service:fibrbase
Current State : failed (114)
Flags : none (0)
Owner : none
Last Owner : host1.com
Last Transition : Thu Apr 14 01:39:21 2011

Service Name : service:fsgprod
Current State : failed (114)
Flags : none (0)
Owner : none
Last Owner : host1.com
Last Transition : Thu Apr 14 01:39:27 2011

Service Name : service:wcmnrocp
Current State : failed (114)
Flags : none (0)
Owner : none
Last Owner : host1.com
Last Transition : Thu Apr 14 01:30:20 2011

[root@host1~]#


Gracefully halting the cluster
# clusvcadm -d <service>   (disable each running service)
Do the following on each node:
# umount <GFS mount points>
# service rgmanager stop
# service gfs stop
# service clvmd stop
# service fenced stop
# cman_tool status
# cman_tool leave
# service ccsd stop

Gracefully starting the cluster (Done on each node)
# service ccsd start
# service cman start
# service fenced start
# service clvmd start
# service gfs start
# service rgmanager start
# cman_tool nodes (shows status of nodes)
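To have these services come up in the same order at boot (RHEL 5 style init scripts; on some releases the cman script starts ccsd and fenced itself, so adjust the list to match your release):

```shell
for s in ccsd cman fenced clvmd gfs rgmanager; do
    chkconfig $s on
done
```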