
Mount Global File System 2, avoid corruption in RHEL High Availability

You have a working cluster, now what? Red Hat's GFS2 will help you avoid file system corruption. Learn how to mount the default cluster file system.

Not every type of file system resource is created equally when it comes to RHEL High Availability. Ext4 is fine, but to prevent corruption in clusters, mount Global File System 2.

You need a file system resource when you create an Apache Web service that is managed by a Red Hat Enterprise Linux cluster with the RHEL High Availability add-on. But which type of file system resource is most appropriate depends on the specific environment.

Your working RHEL cluster with an Apache Web service has an Ext4 file system. For services that fail over between nodes, Ext4 is fine, but you have to be sure that two nodes don't try to write to the Ext4 file system simultaneously, as that will corrupt the file system.

If multiple nodes in the cluster need access to the same file system simultaneously, you'll need a clustered file system. In a clustered file system, the file system cache is synchronized among all the participating nodes, avoiding file system corruption. Red Hat offers the Global File System 2 (GFS2) as the default cluster file system.

Mounting Global File System 2

You need to have a running cluster to use GFS2. Install the cluster version of Logical Volume Management 2 (LVM2), and make sure the accompanying service is started on all nodes that will run the GFS2 file system. Next, create a cluster-aware LVM2 volume and create the GFS2 file system on it. Once created, you can mount the GFS2 file system from /etc/fstab on the nodes involved, or create a cluster resource that mounts it automatically.

On one of the cluster nodes, use the fdisk utility to create a partition on the storage-area network (SAN) device and make sure to mark it as partition type 0x8e. Reboot both nodes to ensure the partitions are seen on each one, and verify they are available before continuing.
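For illustration, a minimal fdisk session could look like the following, assuming the SAN device shows up as /dev/sdb and the new partition becomes /dev/sdb3 as in the later commands (the device and partition numbers are examples; adjust them to your environment):

fdisk /dev/sdb
   n     # new partition: choose primary, number 3, accept the size defaults or size as needed
   t     # change the partition type
   3     # select the new partition
   8e    # 0x8e is the Linux LVM partition type
   w     # write the partition table and exit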

On both nodes, use yum install -y lvm2-cluster gfs2-utils to install Cluster Logical Volume Management (cLVM), as well as the GFS2 software. On both nodes, use service clvmd start to start the cLVM service, and chkconfig clvmd on to enable it.
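Pulled together, the preparation on each node looks like this. The lvmconf line is an assumption on my part (the article doesn't call it out); it sets locking_type = 3 in /etc/lvm/lvm.conf so that LVM commands use clvmd for cluster-wide locking instead of falling back to local file-based locking:

yum install -y lvm2-cluster gfs2-utils
lvmconf --enable-cluster     # assumption: enable cluster-wide LVM locking (locking_type = 3)
service clvmd start
chkconfig clvmd on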

On one node, use pvcreate /dev/sdb3 to mark the LVM partition on the SAN device as a physical volume. Always verify that the name of the partition is correct.

Use vgcreate -c y clusgroup /dev/sdb3 to create a cluster-enabled volume group, then use lvcreate -l 100%FREE -n clusvol clusgroup to create a cluster-enabled volume with the name clusvol.

On both nodes, use lvs to verify that the cluster-enabled LVM volume has been created.
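Illustrative output, using the names from the commands above; the trailing c in the volume group attributes confirms the group is clustered:

vgs -o vg_name,vg_attr clusgroup
  VG         Attr
  clusgroup  wz--nc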

Use mkfs.gfs2 -p lock_dlm -t name_of_your_cluster:gfs -j 2 /dev/clusgroup/clusvol to format the clustered LVM volume as a GFS2 file system. The -p option tells mkfs to use the lock_dlm locking protocol, which instructs the file system to use the distributed lock manager so that file locks are synchronized across all nodes in the cluster. The -t option is equally important; it sets the lock table name, which consists of the name of your cluster, followed by a colon and a unique name for this GFS2 file system (gfs in this example). The option -j 2 tells mkfs to create two GFS2 journals; you'll need one for each node that accesses the GFS2 volume.
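Note that the journal count ties the file system to the number of nodes that can mount it. If you later add a third node, you can add a journal to the mounted file system with gfs2_jadd, shown here against the /gfsvol mount point that is set up below:

gfs2_jadd -j 1 /gfsvol     # adds one more journal, for a total of three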

On both nodes, mount the GFS2 file system temporarily on /mnt, using mount /dev/clusgroup/clusvol /mnt. On both nodes, create some files on the file system; you'll notice that the files appear immediately on the other nodes as well.
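For example, with an arbitrary file name:

on node1:  touch /mnt/testfile-from-node1
on node2:  ls /mnt     # testfile-from-node1 appears right away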

Use mkdir /gfsvol to create a directory on which you can mount the GFS volume.

Make the mount persistent by adding a line to /etc/fstab:

/dev/clusgroup/clusvol     /gfsvol     gfs2 _netdev  0 0

Use chkconfig gfs2 on to enable the GFS2 service, which is needed to mount GFS2 volumes from /etc/fstab at boot.
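Before rebooting, you can check the setup by starting the service by hand; the gfs2 init script mounts the GFS2 file systems it finds in /etc/fstab:

service gfs2 start
mount | grep gfs2     # /dev/mapper/clusgroup-clusvol should now be mounted on /gfsvol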

Reboot both nodes to verify that the GFS2 file system is mounted automatically. At this point, GFS2 is available on all cluster nodes. When you use GFS2 as the shared file system this way, you no longer need to set up a shared file system resource in the RHEL cluster service. Convenient as it is, GFS2 isn't required in every scenario: if only one node at a time needs access to the shared file system, Ext4 is good enough.
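If you would rather have the cluster mount the volume as a resource (the alternative mentioned earlier) instead of using /etc/fstab, a rough sketch of a clusterfs resource in the <resources> section of /etc/cluster/cluster.conf could look like this; the resource name and mount point are examples, and the attributes follow the rgmanager clusterfs resource agent:

<clusterfs name="gfsvol" device="/dev/clusgroup/clusvol" mountpoint="/gfsvol" fstype="gfs2" force_unmount="0"/>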

Sander van Vugt is an independent trainer and consultant based in the Netherlands. He is an expert in Linux high availability, virtualization and performance. He has authored many books on Linux topics, including Beginning the Linux Command Line, Beginning Ubuntu LTS Server Administration and Pro Ubuntu Server Administration.

This was last published in February 2014


Join the conversation

1 comment

Performed the below steps on Node2:

Step 1. yum install -y gfs2-utils
Step 2. pvcreate /dev/dm-0 /dev/dm-1 /dev/dm-2
Step 3. vgcreate vol_group /dev/dm-0 /dev/dm-1 /dev/dm-2
step 4. lvcreate -L 1G vol_group -n lun

[root@node2-emulex Desktop]# lvdisplay
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  --- Logical volume ---
  LV Path                /dev/vol_group/lun
  LV Name                lun
  VG Name                vol_group
  LV UUID                PWF1HB-R1ku-Hkmw-SdyG-d3Cg-7lc0-niWjmL
  LV Write Access        read/write
  LV Creation host, time node2-emulex, 2017-03-31 15:55:31 +0530
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Step 5. [root@node2-emulex Desktop]# lvs
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  LV   VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lun  vol_group -wi-a----- 1.00g


Step 6. mkfs.gfs2 -p lock_dlm -t mycluster:gfs -j 2 /dev/vol_group/lun

[root@node2-emulex Desktop]# pvs
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  PV                 VG        Fmt  Attr PSize   PFree
  /dev/mapper/mpathb vol_group lvm2 a--  232.83g 232.83g
  /dev/mapper/mpathc vol_group lvm2 a--  465.66g 464.66g
  /dev/mapper/mpathd vol_group lvm2 a--  698.49g 698.49g

Step 7. [root@node2-emulex Desktop]#  mount -t gfs2 /dev/vol_group/lun /mnt/
mount: mount /dev/mapper/vol_group-lun on /mnt failed: Transport endpoint is not connected

Issue:

1. Unable to mount the FS.
2. On the first node, the lvs command doesn't show the same output as on the second node (Step 5).

Query:
Currently the lvm2-cluster package is not installed on either node.
Is lvm2-cluster required for the logical volumes to be shared across both nodes?
