
Pacemaker Cluster Stack

Install Packages On Controller Nodes

[root@controller ~]# yum install corosync pacemaker pcs fence-agents resource-agents -y

Set Up the Cluster

[root@controller ~]# systemctl enable pcsd
[root@controller ~]# systemctl start pcsd

[root@controller ~]# echo myhaclusterpwd | passwd --stdin hacluster
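
The hacluster account (created when the pcs package is installed) must have the same password on every node, because pcs cluster auth below authenticates against each node's local pcsd. A minimal sketch for the remaining nodes, assuming root SSH access from controller1:

# Sketch only: assumes root SSH access to controller2 and controller3
[root@controller1 ~]# for node in controller2 controller3; do ssh $node "echo myhaclusterpwd | passwd --stdin hacluster"; done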

[root@controller ~]# pcs cluster auth controller1 controller2 controller3 -u hacluster -p myhaclusterpwd --force

[root@controller1 ~]# pcs cluster setup --force --name my-cluster controller1 controller2 controller3
Destroying cluster on nodes: controller1, controller2, controller3...
controller1: Stopping Cluster (pacemaker)...
controller2: Stopping Cluster (pacemaker)...
controller3: Stopping Cluster (pacemaker)...
controller3: Successfully destroyed cluster
controller1: Successfully destroyed cluster
controller2: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'controller1', 'controller2', 'controller3'
controller1: successful distribution of the file 'pacemaker_remote authkey'
controller3: successful distribution of the file 'pacemaker_remote authkey'
controller2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
controller1: Succeeded
controller2: Succeeded
controller3: Succeeded

Synchronizing pcsd certificates on nodes controller1, controller2, controller3...
controller3: Success
controller2: Success
controller1: Success
Restarting pcsd on the nodes in order to reload the certificates...

controller3: Success
controller2: Success
controller1: Success
[root@controller1 ~]# pcs cluster start --all
controller1: Starting Cluster...
controller2: Starting Cluster...
controller3: Starting Cluster...
[root@controller1 ~]# pcs cluster enable --all
controller1: Cluster Enabled
controller2: Cluster Enabled
controller3: Cluster Enabled
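
pcs cluster enable --all configures the cluster services to start at boot on every node. On any single node this can be double-checked through systemd, for example:

[root@controller1 ~]# systemctl is-enabled corosync pacemaker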
[root@controller1 ~]# pcs cluster status
Cluster Status:
Stack: unknown
Current DC: NONE
Last updated: Fri Dec 15 00:21:36 2017
Last change: Fri Dec 15 00:21:24 2017 by hacluster via crmd on controller1
3 nodes configured
0 resources configured
PCSD Status:
controller3: Online
controller2: Online
controller1: Online
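
Stack: unknown and Current DC: NONE at this point usually just mean that Pacemaker has not yet elected a Designated Controller on this node; the next steps make sure corosync and pacemaker are running, after which crm_mon reports Stack: corosync. Once corosync is up, its membership can also be viewed through pcs itself:

[root@controller1 ~]# pcs status corosync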

Start Corosync On Controllers

[root@controller ~]# systemctl start corosync

[root@controller1 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.220.21
status = ring 0 active with no faults
[root@controller2 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
id = 192.168.220.22
status = ring 0 active with no faults
[root@controller3 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 3
RING ID 0
id = 192.168.220.23
status = ring 0 active with no faults

[root@controller ~]# corosync-cmapctl runtime.totem.pg.mrp.srp.members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.220.21)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.220.22)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(192.168.220.23)
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined
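
corosync-quorumtool, shipped with the corosync package, gives a more compact view of the same membership and also reports whether the cluster is quorate:

# Expect three members and "Quorate: Yes" once all controllers have joined
[root@controller1 ~]# corosync-quorumtool -s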

Start Pacemaker

[root@controller1 ~]# systemctl start pacemaker
[root@controller1 ~]# crm_mon -1
Stack: corosync
Current DC: controller1 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Fri Dec 15 00:34:25 2017
Last change: Fri Dec 15 00:21:45 2017 by hacluster via crmd on controller1

3 nodes configured
0 resources configured

Online: [ controller1 controller2 controller3 ]

No active resources
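
crm_mon reads the Pacemaker status directly; the pcs wrapper offers an equivalent combined report (nodes, resources, and daemon status) that becomes convenient once resources are added:

[root@controller1 ~]# pcs status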

Set Basic Cluster Properties

[root@controller1 ~]# pcs property set pe-warn-series-max=1000 \
    pe-input-series-max=1000 \
    pe-error-series-max=1000 \
    cluster-recheck-interval=5min
[root@controller1 ~]# pcs property set stonith-enabled=false
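
To confirm the settings took effect and the configuration is still valid, pcs can list the non-default properties and crm_verify can check the live CIB. Keep in mind that stonith-enabled=false turns fencing off entirely, which is generally only acceptable for a test or lab environment:

[root@controller1 ~]# pcs property list
[root@controller1 ~]# crm_verify -L -V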