
Setting up a mirrored Linux DRBD configuration over the network

Administrators can use the distributed replicated block device (DRBD) in Linux to set up basic data redundancy by mirroring the storage of multiple servers over the network.

When you're creating an environment where multiple nodes can access your data simultaneously, a distributed replicated block device (DRBD) is an excellent choice. This is particularly true if you set up a two-node cluster where one node needs to take over the exact state of the other node as quickly as possible. In this article you'll learn how to configure it.

Creating a DRBD configuration
The purpose of setting up a DRBD configuration is to create a storage device that is synchronized over the network. To accomplish this setup, you'll need two servers, each with a storage device that is as similar as possible to the other's (aim for the same disk type and size on both nodes). For our purposes, a disk device with the name /dev/sdb is used as an example.
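Package names differ per distribution; drbd-utils is what Debian and Ubuntu call the userland tools, for example, so verify the correct name for your distribution first. It's also worth confirming on both nodes that the dedicated disk is visible and carries nothing you still need:

# Debian/Ubuntu; the package name may differ elsewhere
apt-get install drbd-utils

# confirm the backing disk exists and is unused
lsblk /dev/sdb
blkid /dev/sdb    # prints nothing when the disk has no file system signature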

With the DRBD packages installed on your preferred Linux distribution, it's time to create the configuration files manually:

To start, assume that you're using two different servers that have the names drbd1 and drbd2, and that each server includes a dedicated hard disk named /dev/sdb as the DRBD backing device. Once you make sure that the default DRBD port 7780 is open on the firewall, you're ready to start.
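If your nodes run firewalld, opening the port looks like the sketch below; adjust for iptables, ufw, or whatever firewall tool your distribution uses:

firewall-cmd --permanent --add-port=7780/tcp
firewall-cmd --reload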

  1. The name of the default DRBD configuration file is /etc/drbd.conf. This file serves as a starting point from which the additional configuration files are found. Verify that the following two include lines are in the drbd.conf file:

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

  2. The real configuration lives in the /etc/drbd.d/global_common.conf file. In its startup section, include two lines that minimize startup time for DRBD:

startup {
    ...
    wfc-timeout 1;
    degr-wfc-timeout 1;
}

  3. Define the DRBD resource itself. Do this by creating several configuration files, one for each resource. Just make sure that each file uses the extension .res so that it is included in the configuration. Below you can see what the configuration file would look like for a DRBD resource, which we'll call drbd0, that is used to create a device on the /dev/sdb disk:

resource drbd0 {
    device /dev/drbd0 minor 0;
    disk /dev/sdb;
    meta-disk internal;
    on drbd1 {
        address 10.0.0.10:7780;
    }
    on drbd2 {
        address 10.0.0.20:7780;
    }
    syncer {
        rate 7M;
    }
}

A little more explanation is in order here. The name of the resource is defined in the first part of this file. Again, we're using drbd0, but you're free to choose any name you like. Next, the name of the device node as it will occur in the /dev directory is specified and includes the minor number that is used for this device. Make sure the combination of name and device node is unique in all cases — otherwise the kernel won't be able to distinguish between different DRBDs.

Now you'll specify which local device is going to be replicated between nodes. This device is going to be wiped during initialization of the DRBD, so make sure there's nothing on it that you need. Following the name of the device, include the configuration for the different nodes. The node names must be equal to the kernel names as returned by the uname command. Finally, set the synchronization speed. Don't set this too high if you don't have a dedicated network connection for the DRBD; otherwise you'll consume all your bandwidth and risk choking off other network traffic.
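Because the on sections must match the kernel host names, a quick check on each node is worthwhile. This is a minimal sketch, using the example host names drbd1 and drbd2 from this article:

uname -n                     # must print drbd1 on the first node, drbd2 on the second
getent hosts drbd1 drbd2     # both names must resolve, via DNS or /etc/hosts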

  4. After creating the initial configuration on one node, it's a good idea to verify it. To do this, use the command drbdadm dump all. If this command shows the contents of all configuration files instead of complaining about missing parts, everything is OK and you can proceed.
  5. At this point, you can transfer the configuration from the first node to the second. Make sure you can perform the transfer using the node name of the other node; if the nodes cannot reach each other by node name, your DRBD is going to fail. Configure your /etc/hosts or DNS if necessary before continuing.

scp /etc/drbd.conf drbd2:/etc/
scp /etc/drbd.d/* drbd2:/etc/drbd.d/
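Once the files are copied, you may want to repeat the verification on the second node to confirm that both nodes see an identical configuration. A sketch, assuming SSH access to drbd2:

ssh drbd2 drbdadm dump all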

  6. Let’s create the DRBD metadata on both nodes. First, use the drbdadm command as in the example below, and then you can start the DRBD service:

# drbdadm -- --ignore-sanity-checks create-md drbd0
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
# service drbd start

  7. Use the service drbd status command to verify the current status of the DRBD. You'll see at this point that both devices have the status connected, but also that both are set as secondary devices and that they're inconsistent. That is because you haven't started the synchronization yet, which you're about to do now using the following command:

drbdadm -- --overwrite-data-of-peer primary drbd0

If you use the service drbd status command again to monitor the current synchronization status, you'll see that the status is now set to synchronized (sync'ed) and that you have established a primary/secondary relationship. You'll now have to wait until the status on both nodes is UpToDate.
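On classic DRBD 8.x versions, the synchronization progress can also be followed in /proc/drbd; a simple way to keep an eye on it is:

watch cat /proc/drbd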

Using and testing the DRBD configuration
It can take a long time for the two devices to synchronize, but once the synchronization is finished, you can assign a primary node and create a file system on that node. To do this, use the following commands:

drbdadm primary drbd0
mkfs.ext3 /dev/drbd0
mount /dev/drbd0 /mnt

The device should now be mounted on the primary node in the directory /mnt. If you now create files in that directory, the data blocks these files are using will immediately be synchronized to the other node. But since we’re using a primary/secondary setup, it's not possible to access these files directly on the other node.
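A quick way to watch replication at work is to write a test file on the primary and check the status afterward; the file name here is just a hypothetical example:

dd if=/dev/zero of=/mnt/testfile bs=1M count=10
service drbd status     # both nodes should remain UpToDate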

If all was successful, you can perform a test that makes the other node the primary (a consolidated command sketch follows the list). To do this:

  1. Unmount the DRBD on node drbd1.
  2. Use the command drbdadm secondary drbd0 to make node drbd1 the secondary.
  3. Go to node drbd2 and promote the DRBD to primary using the command drbdadm primary drbd0.
  4. On node drbd2, use the command service drbd status to verify that all went well. Congratulations! Your DRBD is now operational. It's time to move on and integrate it into a high availability cluster resource manager such as Pacemaker, if you wish.
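Put together, the manual failover amounts to the following sequence. Mounting the device on the new primary is an assumption about what you'll want to do next, not part of the test itself:

# on drbd1
umount /mnt
drbdadm secondary drbd0

# on drbd2
drbdadm primary drbd0
mount /dev/drbd0 /mnt
service drbd status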

Next steps with your DRBD configuration
You’ve successfully created an active/passive DRBD configuration and you know how to fail over the primary device to another node. Now what? There is much more to using DRBD in a data center environment. For example, you can use it in dual primary mode, which is useful if you want read-write access to the DRBD on two nodes simultaneously. Or integrate DRBD in a Pacemaker cluster to ensure that automatic failover happens if the current primary node fails. In our next article in this series we'll talk about some common DRBD troubleshooting scenarios.
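As a taste of dual primary mode: on DRBD 8.x it is enabled with the allow-two-primaries option in the resource's net section. Treat the snippet below as a sketch only; the exact syntax differs between DRBD versions, and running two primaries safely also requires a cluster file system such as OCFS2 or GFS2 on top of the device:

resource drbd0 {
    net {
        allow-two-primaries;   # sketch; DRBD 8.4 writes this as "allow-two-primaries yes;"
    }
    ...
}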

About the author: Sander van Vugt is an independent trainer and consultant living in the Netherlands. Van Vugt is an expert in Linux high availability, virtualization and performance, and has completed several projects that implement all three. Sander is also a regular speaker at many Linux conferences all over the world. He is also the writer of various Linux-related books, such as Beginning the Linux Command Line, Beginning Ubuntu Server Administration and Pro Ubuntu Server Administration.
