If a server goes down, essential services don't have to shut down along with it. To prevent downtime, Linux administrators can set up a Heartbeat cluster on Linux. Heartbeat adds the advantage of a cluster to Xen virtual machines (VMs), thus maintaining VMs' uptime when a server crashes.
This series explains how to configure and use a Heartbeat cluster on SUSE Linux Enterprise Server, using a storage area network (SAN) and a Xen VM as a cluster resource. I'll also discuss the Linux Heartbeat project, whose goal is to keep critical network services available when servers fail. In this first installment, I cover installation of a SAN. In the next tip, I'll cover configuration of the Oracle Cluster File System (OCFS2).
Configuring the SAN
We begin building our Heartbeat cluster by configuring shared storage on the SAN.
Before we start, let's list the components required to build a complete cluster solution with Heartbeat:
- A storage volume that allows simultaneous write access to all nodes in the cluster
- A file system that allows for simultaneous writes
- The Heartbeat software for high availability clustering
- One or more Xen VMs
The first part of installing a Xen VM high availability environment (which we'll refer to as Xen HA) involves configuring a SAN. This is done with services included in SUSE Linux Enterprise Server. In a Xen HA environment, you need the SAN to store the Xen disk images and configuration files on a centralized location where they can be reached by both nodes simultaneously. If one node goes down, you still want the other node to be able to reach the required configuration files.
To prepare your cluster for storage, create two logical unit numbers (LUNs) on the SAN. If a SAN is already in place, you can use that, but if you don't yet have a SAN, SUSE Linux Enterprise Server (SLES) 10.1 includes everything you need to create one based on iSCSI. In this setup, one server running SLES 10.1 acts as the storage server, and two other servers running SLES 10.1 access the LUNs that the storage server offers.
Creating a shared storage device
A shared device needs to be created on the storage server: it can be a complete hard disk, a partition, a logical volume or a file created as a disk image. Since you'll need at least four gigabytes to store the Xen virtual disk image, I recommend using a real device such as a partition or volume for better performance. Make sure this device is available before you proceed.
If you don't have the opportunity to use a real device, you can create a 4 GB disk image file for storing the Xen disk image instead:
dd if=/dev/zero of=/var/xenimages bs=1M count=4096
Of course, 4 GB isn't much room for Xen VM images; increase the number used with the count parameter to make the file bigger.
You also need a small device available for storing the Xen VM configuration files. For example, create a 1 GB disk image file:
dd if=/dev/zero of=/var/xenconfig bs=1M count=1024
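The dd pattern used above is easy to sanity-check before committing gigabytes of disk space. This sketch uses a throwaway path in /tmp and a deliberately small size (16 MB) purely for illustration; substitute the real paths and count values from the commands above:

```shell
# Create a small zero-filled image file the same way the real
# /var/xenimages and /var/xenconfig files are created
dd if=/dev/zero of=/tmp/test-image bs=1M count=16 2>/dev/null

# Verify the size in bytes: 16 MiB = 16 * 1024 * 1024 = 16777216
stat -c %s /tmp/test-image
```

If the reported size matches what you asked dd for, the same command with the real path and count will give you a correctly sized backing file.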
Defining an iSCSI target
Next, you have to configure an iSCSI target on the storage server that shares the disk devices. Start YaST, enter the password for the user root, and from the Miscellaneous category, start the iSCSI Target module. On the service tab, make sure that "when booting" is selected. On the targets tab, delete the example target that already exists. Proceed by clicking "add" to add a new target. This brings up a window as shown in Figure 1:
figure 1: From this interface, you specify what LUNs to offer using the iSCSI target.
Click "add" to define the LUNs. Every LUN is offered as a device to the nodes in the cluster, and every LUN you create needs a separate device to share. Select the LUN number for your first LUN and, in the path field, specify the name of the device that you want to share, for example,
/var/xenimages. Click OK to add the device, and close the iSCSI Target configuration program.
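Behind the scenes, the YaST iSCSI Target module on SLES 10 stores its settings in /etc/ietd.conf, the configuration file of the iSCSI Enterprise Target daemon. For a target offering the two disk image files created earlier, the result looks roughly like this sketch (the target IQN shown is an example; YaST generates its own):

```
Target iqn.2008-01.com.example:xenstorage
    # LUN 0: 4 GB image holding the Xen virtual disks
    Lun 0 Path=/var/xenimages,Type=fileio
    # LUN 1: 1 GB image holding the Xen configuration files
    Lun 1 Path=/var/xenconfig,Type=fileio
```

Knowing where this file lives is handy when you want to verify or script the target configuration instead of clicking through YaST.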
Connecting the shared device
Now that your server is offering shared storage, go to the console of one of the nodes and start the iSCSI Initiator module from YaST. On the service tab, make sure that "when booting" is selected. Next, select the discovered targets tab and, from there, click the discovery button. Enter the IP address of the iSCSI target server, leave the authentication options blank and click "next". This brings up a window (see figure 2) that shows the name of the iSCSI target service. Click the link, and select "log in" to make sure that you are connected.
figure 2: Before proceeding, establish a connection between the iSCSI initiator and the iSCSI target.
Click the connected targets tab, then click "toggle start-up". This ensures that the connection is established automatically the next time that your server reboots. Click Finish to complete the wizard. To connect all other nodes in the network that need access to the shared storage, repeat the steps for connecting the shared device.
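If you prefer the command line over YaST, the same discovery and login can be performed with the iscsiadm tool from the open-iscsi package. A sketch, assuming the storage server is at 192.168.1.10 and using the example IQN from earlier (substitute your own values):

```
# Discover the targets offered by the storage server
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the discovered target
iscsiadm -m node -T iqn.2008-01.com.example:xenstorage \
    -p 192.168.1.10 --login

# Re-establish the connection automatically at boot,
# equivalent to "toggle start-up" in YaST
iscsiadm -m node -T iqn.2008-01.com.example:xenstorage \
    -p 192.168.1.10 --op update -n node.startup -v automatic
```

Run the same commands on every node that needs access to the shared storage.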
Testing the SAN
To verify that the SAN is working, you can use the
lsscsi command on all nodes in the cluster. This command should list two new available disk devices. Also, you can use the
iscsiadm -m session command to display the session that exists from the initiator to the target.
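For reference, on a node connected to an iSCSI Enterprise Target server, the output of these checks looks roughly like this sketch (device names, the target address and the IQN are examples and will differ on your systems):

```
node1:~ # lsscsi
[0:0:0:0]  disk  ATA  LOCAL-DISK        /dev/sda
[1:0:0:0]  disk  IET  VIRTUAL-DISK  0   /dev/sdb
[1:0:0:1]  disk  IET  VIRTUAL-DISK  0   /dev/sdc

node1:~ # iscsiadm -m session
tcp: [1] 192.168.1.10:3260,1 iqn.2008-01.com.example:xenstorage
```

The two IET VIRTUAL-DISK entries are the LUNs exported by the storage server; note their device names (here /dev/sdb and /dev/sdc), because you'll need them when creating the OCFS2 file system in Part 2.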
Now you've learned how to configure an iSCSI SAN using SUSE Linux Enterprise Server 10.1, an affordable alternative to proprietary SANs. In Part 2, you'll learn how to create a cluster-safe OCFS2 file system on this SAN and provide high availability to VMs.
About the author: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Vugt is also a technical consultant for high availability (HA) clustering and performance optimization, as well as an expert on SUSE Linux Enterprise Desktop 10 (SLED 10) administration.