Corosync and Pacemaker Cluster Setup (rev1)


In the case of two nodes, for example, if the first node is already active, the second node must be passive or on standby.

The passive (a.k.a. failover) server serves as a backup that's ready to take over as soon as the active (a.k.a. primary) server gets disconnected or is unable to serve.

Active-Passive

[Figure: active-passive high availability cluster]

When clients connect to a 2-node cluster in active-passive configuration, they only connect to one server. In other words, all clients connect to the same server. As in the active-active configuration, it's important that the two servers have exactly the same settings (i.e., they are fully redundant).

If changes are made to the settings of the primary server, those changes must be cascaded to the failover server, so that when the failover does take over, the clients won't be able to tell the difference.

Node 1: 192.168.1.76

Node 2: 192.168.1.68

Machines: two CentOS 6.5 servers that can reach each other over the network

Add these entries to the hosts file on both nodes:

vi /etc/hosts

192.168.1.76 node1

192.168.1.68 node2

1. Set up NTP and DNS for both your Linux Cluster nodes.
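For example, a minimal sketch of the time synchronization part on CentOS 6.5 (assuming the stock ntp package; the /etc/hosts entries above can stand in for DNS name resolution between the two nodes):

yum install ntp -y

chkconfig ntpd on

service ntpd start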

2. Add repository

Add the HA Clustering repository for CentOS 6.5 on both nodes. You will need this repository to install the CRM shell (crmsh) used to manage Pacemaker resources:

vi /etc/yum.repos.d/ha-clustering.repo

[haclustering]

name=HA Clustering

baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/

enabled=1

gpgcheck=0

3. Install packages

Install Corosync, Pacemaker and CRM Shell. Run this command on both Linux Cluster nodes:

yum install pacemaker corosync crmsh -y
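To confirm the packages are present on both nodes, a quick check (a sketch using the package names installed above):

rpm -q corosync pacemaker crmsh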

4. Create configuration

Create the Corosync configuration file, which must be located in the “/etc/corosync/” directory. You can copy/paste the following configuration, adjusting it to the IP address of your first Linux Cluster node:

vi /etc/corosync/corosync.conf

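A minimal corosync.conf sketch for this setup (an assumption based on the stock CentOS 6 corosync 1.x example, with secauth enabled for the authkey generated below and the Pacemaker plugin loaded via the service block; bindnetaddr is the value to adjust per node):

compatibility: whitetank

totem {
    version: 2
    secauth: on
    threads: 0
    interface {
        ringnumber: 0
        # network (or address) of this node's cluster interface - adjust per node
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    debug: off
    timestamp: on
}

service {
    # load the Pacemaker plugin; ver: 1 because pacemaker is started as a separate service below
    name: pacemaker
    ver: 1
}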

Copy the Corosync configuration file to the second Linux Cluster node (node2) and change the bindnetaddr line there so it matches node2.
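A sketch of the copy step, assuming root SSH access from node1 to node2 (edit bindnetaddr on node2 afterwards):

[root@node1 /]# scp /etc/corosync/corosync.conf root@node2:/etc/corosync/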

5. Generate Auth Key

Generate the Corosync authentication key by running “corosync-keygen” – this might take some time! The key is created in the “/etc/corosync” directory in a file named “authkey”:

[root@node1 /]# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Press keys on your keyboard to generate entropy (bits = 176).

Press keys on your keyboard to generate entropy (bits = 240).

Press keys on your keyboard to generate entropy (bits = 304).

Press keys on your keyboard to generate entropy (bits = 368).

Press keys on your keyboard to generate entropy (bits = 432).

Press keys on your keyboard to generate entropy (bits = 496).

Press keys on your keyboard to generate entropy (bits = 560).

Press keys on your keyboard to generate entropy (bits = 624).

Press keys on your keyboard to generate entropy (bits = 688).

Press keys on your keyboard to generate entropy (bits = 752).

Press keys on your keyboard to generate entropy (bits = 816).

Press keys on your keyboard to generate entropy (bits = 880).

Press keys on your keyboard to generate entropy (bits = 944).

Press keys on your keyboard to generate entropy (bits = 1008).

Writing corosync key to /etc/corosync/authkey.

Transfer the “/etc/corosync/authkey” file to the second Linux Cluster node.
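A sketch of the transfer, again assuming root SSH access; the key should remain readable by root only:

[root@node1 /]# scp -p /etc/corosync/authkey root@node2:/etc/corosync/

[root@node2 /]# chmod 400 /etc/corosync/authkey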

6. Start Corosync service on both nodes:

[root@node1 /]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

[root@node2 /]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

7. Start Pacemaker service on both nodes:

[root@node1 /]# service pacemaker start

Starting Pacemaker Cluster Manager: [ OK ]

[root@node2 ~]# service pacemaker start

Starting Pacemaker Cluster Manager: [ OK ]
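To have the cluster stack come back automatically after a reboot, the standard CentOS 6 service tooling can be used on both nodes (a sketch):

chkconfig corosync on

chkconfig pacemaker on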

8. Check cluster status

After a few seconds you can check your Linux Cluster status with the “crm status” command:

[root@node1 /]# crm status

Last updated: Thu Sep 19 15:28:49 2013

Last change: Thu Sep 19 15:11:57 2013 via crmd on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.9-2.2-2db99f1

2 Nodes configured, 2 expected votes

0 Resources configured.

Online: [ node1 node2 ]

As we can see, the status shows that 2 nodes are configured in this Linux Cluster – node1 and node2. Both nodes are online, and the current DC is node1.

The NEXT STEP is to configure Pacemaker resources – the applications and IP addresses managed by the cluster. The CRM shell's built-in help is a good starting point:

[root@node1 ~]# crm help

View Linux Cluster Configuration

[root@node1 ~]# crm configure show

node node1

node node2

property $id="cib-bootstrap-options"

dc-version="1.1.9-2.6-2db99f1"

cluster-infrastructure="classic openais (with plugin)"

expected-quorum-votes="2"

Before we start adding Resources to our Cluster we need to disable STONITH (Shoot The Other Node In The Head), since we are not using it in our configuration:

[root@node1 ~]# crm configure property stonith-enabled=false

In a 2-node cluster, if one of the two nodes is stopped, the remaining node stops serving resources as well, because with only one vote left the cluster loses quorum.

So we disable the quorum policy:

[root@node1 ~]# crm configure property no-quorum-policy=ignore

Adding Floating IP Address Resource

Let’s add an IP address resource to our Linux Cluster. The information we need to configure the IP address is:

Cluster Resource Name: CLUSTERIP

Resource Agent: ocf:heartbeat:IPaddr2 (get this info with “crm ra meta IPaddr2”)

IP address: 192.168.1.150

Netmask: 24

Monitor interval: 30 seconds (get this info with “crm ra meta IPaddr2”)

Run the following command on a Linux Cluster node to configure the CLUSTERIP resource:

[root@node1 ~]# crm configure primitive CLUSTERIP ocf:heartbeat:IPaddr2 params ip=192.168.1.150 cidr_netmask="24" op monitor interval="30s"

Check Cluster Configuration with:

[root@node1 ~]# crm configure show

node node1

node node2

primitive CLUSTERIP ocf:heartbeat:IPaddr2

params ip="192.168.1.150" cidr_netmask="24"

op monitor interval="30s"

property $id="cib-bootstrap-options"

dc-version="1.1.9-2.6-2db99f1"

cluster-infrastructure="classic openais (with plugin)"

expected-quorum-votes="2"

stonith-enabled="false"

last-lrm-refresh="1381240623"

[root@node1 ~]# crm status

Last updated: Tue Oct 8 15:59:19 2013

Last change: Tue Oct 8 15:58:11 2013 via cibadmin on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.9-2.6-2db99f1

2 Nodes configured, 2 expected votes

1 Resources configured.

Online: [node1 node2]

CLUSTERIP (ocf::heartbeat:IPaddr2): Started node1

As we can see, a new resource called CLUSTERIP is configured in the Cluster and started on node1.
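To double-check that the floating address is really active on node1, it can be looked up directly (a sketch; the address is the one configured above):

[root@node1 ~]# ip addr show | grep 192.168.1.150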

Adding Apache (httpd) Resource

Install the httpd package with "yum install httpd" on both servers.

The next resource is an Apache Web Server. Prior to Apache Cluster Resource configuration, the httpd package must be installed and configured on both nodes! The information we need to configure the Apache Web Server is:

Cluster Resource Name: Apache

Resource Agent: ocf:heartbeat:apache (get this info with “crm ra meta apache”)

Configuration file location: /etc/httpd/conf/httpd.conf

Monitor interval: 30 seconds (get this info with “crm ra meta apache”)

Start timeout: 40 seconds (get this info with “crm ra meta apache”)

Stop timeout: 60 seconds (get this info with “crm ra meta apache”)

Run the following command on a Linux Cluster node to configure the Apache resource:

[root@node1 ~]# crm configure primitive Apache ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval="30s" op start timeout="40s" op stop timeout="60s"
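Note that the ocf:heartbeat:apache agent's monitor operation checks Apache's status URL, so mod_status should answer on localhost on both nodes; a minimal httpd.conf sketch for that (assuming the default /server-status location on Apache 2.2):

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>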

Check Cluster Configuration with:

[root@node1 ~]# crm configure show

node node1

node node2

primitive Apache ocf:heartbeat:apache

params configfile="/etc/httpd/conf/httpd.conf"

op monitor interval="30s"

op start timeout="40s" interval="0"

op stop timeout="60s" interval="0"

meta target-role="Started"

primitive CLUSTERIP ocf:heartbeat:IPaddr2

params ip="192.168.1.150" cidr_netmask="24"

op monitor interval="30s"

property $id="cib-bootstrap-options"

dc-version="1.1.9-2.6-2db99f1"

cluster-infrastructure="classic openais (with plugin)"

expected-quorum-votes="2"

stonith-enabled="false"

last-lrm-refresh="1381240623"

Check Cluster Status with:

[root@node1 ~]# crm status

Last updated: Thu Oct 10 11:13:59 2013

Last change: Thu Oct 10 11:07:38 2013 via cibadmin on node1

Stack: classic openais (with plugin)

Current DC: node1 - partition with quorum

Version: 1.1.9-2.6-2db99f1

2 Nodes configured, 2 expected votes

2 Resources configured.

Online: [ node1 node2 ]

CLUSTERIP (ocf::heartbeat:IPaddr2): Started node1

Apache (ocf::heartbeat:apache): Started node2

As we can see, both Cluster Resources (Apache and CLUSTERIP) are configured and started – CLUSTERIP is started on node1 and Apache is started on node2.

Apache and CLUSTERIP are at the moment running on different Cluster nodes, but we will fix this later by setting Resource Constraints like colocation (colocating resources), order (the order in which resources start and stop), … – a sketch follows below.
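As a preview, a sketch of such constraints (the constraint names Apache-with-IP and IP-before-Apache are made up for illustration): a colocation rule keeps Apache on the node holding CLUSTERIP, and an order rule starts CLUSTERIP before Apache:

[root@node1 ~]# crm configure colocation Apache-with-IP inf: Apache CLUSTERIP

[root@node1 ~]# crm configure order IP-before-Apache inf: CLUSTERIP Apache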
