Pacemaker Cluster Stack

Install Packages On Controller Nodes

  [root@controller ~]# yum install corosync pacemaker pcs fence-agents resource-agents -y

Set Up the Cluster

  [root@controller ~]# systemctl enable pcsd
  [root@controller ~]# systemctl start pcsd

  [root@controller ~]# echo myhaclusterpwd | passwd --stdin hacluster

  [root@controller ~]# pcs cluster auth controller1 controller2 controller3 -u hacluster -p myhaclusterpwd --force

  [root@controller1 ~]# pcs cluster setup --force --name my-cluster controller1 controller2 controller3
  Destroying cluster on nodes: controller1, controller2, controller3...
  controller1: Stopping Cluster (pacemaker)...
  controller2: Stopping Cluster (pacemaker)...
  controller3: Stopping Cluster (pacemaker)...
  controller3: Successfully destroyed cluster
  controller1: Successfully destroyed cluster
  controller2: Successfully destroyed cluster

  Sending 'pacemaker_remote authkey' to 'controller1', 'controller2', 'controller3'
  controller1: successful distribution of the file 'pacemaker_remote authkey'
  controller2: successful distribution of the file 'pacemaker_remote authkey'
  controller3: successful distribution of the file 'pacemaker_remote authkey'
  Sending cluster config files to the nodes...
  controller1: Succeeded
  controller2: Succeeded
  controller3: Succeeded

  Synchronizing pcsd certificates on nodes controller1, controller2, controller3...
  controller3: Success
  controller2: Success
  controller1: Success
  Restarting pcsd on the nodes in order to reload the certificates...
  controller3: Success
  controller2: Success
  controller1: Success

  [root@controller1 ~]# pcs cluster start --all
  controller1: Starting Cluster...
  controller2: Starting Cluster...
  controller3: Starting Cluster...

  [root@controller1 ~]# pcs cluster enable --all
  controller1: Cluster Enabled
  controller2: Cluster Enabled
  controller3: Cluster Enabled

  [root@controller1 ~]# pcs cluster status
  Cluster Status:
   Stack: unknown
   Current DC: NONE
   Last updated: Fri Dec 15 00:21:36 2017
   Last change: Fri Dec 15 00:21:24 2017 by hacluster via crmd on controller1
   3 nodes configured
   0 resources configured
  PCSD Status:
    controller3: Online
    controller2: Online
    controller1: Online
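If a firewall is active on the controllers, the cluster ports must be open before the nodes can authenticate and synchronize. A minimal sketch with firewalld (its built-in high-availability service covers pcsd on TCP 2224, corosync on UDP 5404-5406, and pacemaker_remote on TCP 3121); this step is an assumption about the environment, not part of the output above:

  [root@controller ~]# firewall-cmd --permanent --add-service=high-availability
  [root@controller ~]# firewall-cmd --reload

Run it on each controller before `pcs cluster auth`, or the authentication step will time out.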

Start Corosync On Controllers

  [root@controller ~]# systemctl start corosync

  [root@controller1 ~]# corosync-cfgtool -s
  Printing ring status.
  Local node ID 1
  RING ID 0
          id      = 192.168.220.21
          status  = ring 0 active with no faults

  [root@controller2 ~]# corosync-cfgtool -s
  Printing ring status.
  Local node ID 2
  RING ID 0
          id      = 192.168.220.22
          status  = ring 0 active with no faults

  [root@controller3 ~]# corosync-cfgtool -s
  Printing ring status.
  Local node ID 3
  RING ID 0
          id      = 192.168.220.23
          status  = ring 0 active with no faults

  [root@controller ~]# corosync-cmapctl runtime.totem.pg.mrp.srp.members
  runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
  runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.220.21)
  runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
  runtime.totem.pg.mrp.srp.members.1.status (str) = joined
  runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
  runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.220.22)
  runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
  runtime.totem.pg.mrp.srp.members.2.status (str) = joined
  runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
  runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(192.168.220.23)
  runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
  runtime.totem.pg.mrp.srp.members.3.status (str) = joined
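The membership shown above is driven by /etc/corosync/corosync.conf, which `pcs cluster setup` generated and distributed to all three nodes. A fragment along these lines (reconstructed from the cluster name and node IDs above, not copied from the running cluster) maps each node name to a ring address:

  totem {
      version: 2
      cluster_name: my-cluster
      transport: udpu
  }

  nodelist {
      node {
          ring0_addr: controller1
          nodeid: 1
      }
      node {
          ring0_addr: controller2
          nodeid: 2
      }
      node {
          ring0_addr: controller3
          nodeid: 3
      }
  }

  quorum {
      provider: corosync_votequorum
  }

The `ring0_addr` hostnames must resolve to the 192.168.220.x addresses printed by `corosync-cfgtool -s`, whether via DNS or /etc/hosts.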

Start Pacemaker

  [root@controller1 ~]# systemctl start pacemaker

  [root@controller1 ~]# crm_mon -1
  Stack: corosync
  Current DC: controller1 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
  Last updated: Fri Dec 15 00:34:25 2017
  Last change: Fri Dec 15 00:21:45 2017 by hacluster via crmd on controller1

  3 nodes configured
  0 resources configured

  Online: [ controller1 controller2 controller3 ]

  No active resources

Set Basic Cluster Properties

  [root@controller1 ~]# pcs property set pe-warn-series-max=1000 \
  >   pe-input-series-max=1000 \
  >   pe-error-series-max=1000 \
  >   cluster-recheck-interval=5min

  [root@controller1 ~]# pcs property set stonith-enabled=false