1. Configure iSCSI target server and initiator clients
1.1. Servers
Install three servers with the following IP addresses. The storage server will use an additional disk, /dev/sdb, for iSCSI.
Add these lines to /etc/hosts on all servers:
10.0.0.200 storage
10.0.0.201 node1
10.0.0.202 node2
On all servers, disable SELinux to simplify this example setup:
# cat /etc/selinux/config
...
SELINUX=disabled
...
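If you prefer to script this change, a minimal sketch is shown below; setenforce 0 only switches the running system to permissive mode, while the edit to /etc/selinux/config takes full effect after the reboot in the next step:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0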
On all servers, disable firewalld and reboot to simplify this example setup:
systemctl disable firewalld
reboot
1.2. iSCSI Target on storage server
Install targetcli, then start and enable the target service:
yum install targetcli -y
systemctl start target
systemctl enable target
Configure target server:
targetcli
cd backstores
cd block
create my_device /dev/sdb
cd /iscsi
create iqn.2020-02.localhost.storage:target1
cd iqn.2020-02.localhost.storage:target1/tpg1/luns
create /backstores/block/my_device
cd ../acls
create iqn.2020-02.localhost.storage:node1
create iqn.2020-02.localhost.storage:node2
cd /
ls
exit
The last ls command prints the whole configuration so you can verify it:
/> ls
o- / ............................................................. [...]
  o- backstores .................................................. [...]
  | o- block ...................................... [Storage Objects: 1]
  | | o- my_device ........... [/dev/sdb (8.0GiB) write-thru activated]
  | |   o- alua ..................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ......... [ALUA state: Active/optimized]
  | o- fileio ..................................... [Storage Objects: 0]
  | o- pscsi ...................................... [Storage Objects: 0]
  | o- ramdisk .................................... [Storage Objects: 0]
  o- iscsi ................................................ [Targets: 1]
  | o- iqn.2020-02.localhost.storage:target1 ................ [TPGs: 1]
  |   o- tpg1 .................................. [no-gen-acls, no-auth]
  |     o- acls ............................................. [ACLs: 2]
  |     | o- iqn.2020-02.localhost.storage:node1 ..... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ................ [lun0 block/my_device (rw)]
  |     | o- iqn.2020-02.localhost.storage:node2 ..... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................ [lun0 block/my_device (rw)]
  |     o- luns ............................................. [LUNs: 1]
  |     | o- lun0 ..... [block/my_device (/dev/sdb) (default_tg_pt_gp)]
  |     o- portals ....................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ........................................ [OK]
  o- loopback ............................................ [Targets: 0]
/> exit
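The same configuration can also be scripted, since targetcli accepts a command as arguments; a sketch of the equivalent non-interactive calls (saveconfig persists the setup to /etc/target/saveconfig.json):
targetcli /backstores/block create my_device /dev/sdb
targetcli /iscsi create iqn.2020-02.localhost.storage:target1
targetcli /iscsi/iqn.2020-02.localhost.storage:target1/tpg1/luns create /backstores/block/my_device
targetcli /iscsi/iqn.2020-02.localhost.storage:target1/tpg1/acls create iqn.2020-02.localhost.storage:node1
targetcli /iscsi/iqn.2020-02.localhost.storage:target1/tpg1/acls create iqn.2020-02.localhost.storage:node2
targetcli saveconfig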
Check that the target is listening on port 3260:
# ss -na | grep 3260
tcp LISTEN 0 256 *:3260 *:*
1.3. iSCSI initiator client on server node1
Install the iSCSI initiator utilities:
yum install iscsi-initiator-utils -y
Update the file /etc/iscsi/initiatorname.iscsi:
InitiatorName=iqn.2020-02.localhost.storage:node1
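For example, the file can be written with a single command:
echo "InitiatorName=iqn.2020-02.localhost.storage:node1" > /etc/iscsi/initiatorname.iscsi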
Run iscsiadm in discovery mode:
# iscsiadm -m discovery -t st -p 10.0.0.200
10.0.0.200:3260,1 iqn.2020-02.localhost.storage:target1
Enable and start iscsid:
systemctl enable iscsid
systemctl start iscsid
Connect to the target:
iscsiadm -m node -T iqn.2020-02.localhost.storage:target1 -p 10.0.0.200 -l
Verify that the new disk appeared:
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0    8G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0    7G  0 part
  ├─centos-root 253:0    0  6.2G  0 lvm  /
  └─centos-swap 253:1    0  820M  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom
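Optionally, confirm the connection from the initiator side as well; iscsiadm lists the active sessions and should show the target at 10.0.0.200:3260:
iscsiadm -m session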
1.4. iSCSI initiator client on server node2
The steps are the same as for node1, with one difference: the initiator name.
Install the iSCSI initiator utilities:
yum install iscsi-initiator-utils -y
Update the file /etc/iscsi/initiatorname.iscsi:
InitiatorName=iqn.2020-02.localhost.storage:node2
Run iscsiadm in discovery mode:
# iscsiadm -m discovery -t st -p 10.0.0.200
10.0.0.200:3260,1 iqn.2020-02.localhost.storage:target1
Enable and start iscsid:
systemctl enable iscsid
systemctl start iscsid
Connect to the target:
iscsiadm -m node -T iqn.2020-02.localhost.storage:target1 -p 10.0.0.200 -l
Verify that the new disk appeared:
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0    8G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0    7G  0 part
  ├─centos-root 253:0    0  6.2G  0 lvm  /
  └─centos-swap 253:1    0  820M  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom
2. Installing a High Availability Cluster with Pacemaker, Corosync, and pcsd
On node1 and node2, install the packages, enable the services, and set a password for the hacluster user:
yum install corosync pacemaker pcs -y
systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
systemctl start pcsd
passwd hacluster
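Use the same hacluster password on both nodes. If you want to script it, passwd on CentOS accepts the password on stdin (replace the placeholder password):
echo "YourHaclusterPassword" | passwd --stdin hacluster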
From now on, the cluster can be configured from a single node (node1 in this example).
Authorize the nodes in the cluster:
pcs cluster auth node1 node2
Username: hacluster
Password:
Create a new cluster named storage_cluster:
pcs cluster setup --name storage_cluster node1 node2
Start and enable the cluster on all nodes:
pcs cluster start --all
pcs cluster enable --all
There is no fencing device in this setup. Disable STONITH and ignore the quorum policy:
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs property list
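Both nodes should now be reported online; a quick check:
pcs status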
The cluster is ready.
3. Configuring the cluster LVM resource and mount point
Create a partition on the iSCSI device:
[root@node1 ~]# fdisk /dev/sdb
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1):
First sector (65528-16777215, default 65528):
Using default value 65528
Last sector, +sectors or +size{K,M,G} (65528-16777215, default 16777215):
Using default value 16777215
Partition 1 of type Linux and of size 8 GiB is set
Command (m for help): w
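If you prefer to avoid the interactive prompts, a sketch of an equivalent non-interactive approach with parted (one primary partition spanning the disk; the alignment differs slightly from the fdisk defaults above):
parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
partprobe /dev/sdb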
Create a logical volume with an XFS filesystem:
pvcreate /dev/sdb1
vgcreate cluster_vg /dev/sdb1
lvcreate -L 1G -n cluster_lv cluster_vg
mkfs.xfs /dev/cluster_vg/cluster_lv
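Optionally verify the new logical volume and filesystem before continuing:
lvs cluster_vg
blkid /dev/cluster_vg/cluster_lv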
On node1 and node2, make sure /etc/lvm/lvm.conf contains the following settings:
# grep -e 'locking_type =' -e 'use_lvmetad =' /etc/lvm/lvm.conf
locking_type = 1
use_lvmetad = 0
On node1 and node2, apply the lvm.conf changes and stop the lvmetad service:
lvmconf --enable-halvm --services --startstopservices
On node1 and node2, exclude cluster_vg from the volume_list in /etc/lvm/lvm.conf, but include the other volume groups that must be activated outside the cluster. The volume groups in this example:
[root@node1 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  centos       1   2   0 wz--n- <7.00g     0
  cluster_vg   1   1   0 wz--n- <7.94g <6.94g
So on node1 and node2, list only centos in volume_list in /etc/lvm/lvm.conf and leave cluster_vg out:
# Example
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
volume_list = [ "centos" ]
On node1 and node2, rebuild the initramfs and reboot to guarantee that the boot image will not try to activate a volume group controlled by the cluster:
dracut -f -v
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
The reboot can take longer than usual.
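After the reboot, the cluster logical volume should show as inactive on both nodes, because only the cluster is allowed to activate it; lvscan displays the activation state:
lvscan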
Create the cluster resources in one group; the group keeps them on the same node and starts the LVM resource before the filesystem:
pcs resource create my_lvm LVM volgrpname=cluster_vg exclusive=true --group my_group
pcs resource create my_fs Filesystem device="/dev/mapper/cluster_vg-cluster_lv" directory="/mnt" fstype="xfs" --group my_group
Verify:
[root@node1 ~]# pcs resource show
Resource Group: my_group
my_lvm (ocf::heartbeat:LVM): Started node2
my_fs (ocf::heartbeat:Filesystem): Started node2
Now the filesystem is mounted on node2. Power off, reboot, or disconnect node2 to test the cluster.
In this example the resources moved to node1:
[root@node1 ~]# pcs resource show
Resource Group: my_group
my_lvm (ocf::heartbeat:LVM): Started node1
my_fs (ocf::heartbeat:Filesystem): Started node1
[root@node1 ~]# df -h /mnt
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cluster_vg-cluster_lv 1014M   33M  982M   4% /mnt
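Instead of powering a node off, failover can also be tested in a controlled way by putting the node that currently runs the resources (node1 at this point) into standby and bringing it back afterwards, for example:
pcs cluster standby node1
pcs resource show
pcs cluster unstandby node1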