HP Integrity rx5670 Windows Integrity Cluster Installation and Configuration G - Page 16



Introduction
Cluster terminology
Chapter 1
16
Majority node set (MNS) quorum
A majority node set quorum appears as a single quorum resource from the perspective of
the server cluster. However, the data is actually stored by default on the system disk of
each node of the cluster. The clustering software ensures that the configuration data
stored on the MNS is kept consistent across the different disks. Majority node set
quorums are available in Windows Server 2003 Enterprise Edition and Windows Server
2003 Datacenter Edition.
As Figure 1-3 shows, majority node set clusters require only that the cluster nodes be
connected by a network. That network doesn't need to be a local area network (LAN),
either. It can be a wide area network (WAN) or a virtual private network (VPN)
connecting cluster nodes in different buildings or even cities. This allows the cluster to
overcome the geographic restrictions imposed by its storage connections.
The following is a sample diagram of an MNS quorum in a four-node cluster:
Figure 1-3: MNS quorum example
While the disks that make up the MNS could in theory be disks on a shared storage
fabric, the MNS implementation provided as part of Windows Server 2003 uses a
directory on each node's local system disk to store the quorum data. If the configuration
of the cluster changes, that change is reflected across the different disks. The change is
only considered committed, or made persistent, once it has been written to at least the
following number of disks (using integer division):
(<Number of nodes configured in the cluster>/2) + 1
This ensures that a majority of the nodes have an up-to-date copy of the data. The
cluster service itself will only start up, and therefore bring resources online, if a
majority of the nodes configured as part of the cluster are up and running the cluster
service. With fewer nodes, the cluster is said not to have quorum, so the cluster service
waits (periodically trying to restart) until more nodes join. Only when a majority, or
quorum, of nodes is available will the cluster service start up and bring the resources
online. Because the up-to-date configuration is always written to a majority of the
nodes, regardless of node failures, the cluster is guaranteed to start up with the latest
configuration.
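The majority rule can be sketched numerically. This is an illustrative helper only (the names `majority` and `has_quorum` are hypothetical, not part of the Windows clustering software):

```python
def majority(node_count: int) -> int:
    """Minimum number of nodes (or local quorum disks) that must
    hold a configuration change before it is considered committed:
    (<Number of nodes configured in the cluster> / 2) + 1,
    using integer division."""
    return node_count // 2 + 1

def has_quorum(nodes_up: int, node_count: int) -> bool:
    """The cluster service only starts up, and brings resources
    online, when a majority of the configured nodes are running."""
    return nodes_up >= majority(node_count)

# A four-node MNS cluster needs 3 nodes for quorum:
print(majority(4))        # 3
print(has_quorum(2, 4))   # False: the cluster service waits
print(has_quorum(3, 4))   # True: resources can come online
```

Note that for an even number of nodes, integer division still yields a strict majority: a four-node cluster requires three nodes, not two, which is why two overlapping majorities can never exist at once.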
In the case of a failure or split-brain, all partitions that do not contain a majority of
the nodes are terminated. If a partition containing a majority of the nodes is running,
it can therefore safely start up any resources not already running on it, knowing that it
is the only partition in the cluster running resources (because all other partitions have
been terminated).
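The split-brain behavior described above can be sketched as follows (a hypothetical model, not the actual clustering implementation):

```python
def surviving_partitions(partitions, node_count):
    """In a split-brain, every partition that does not contain a
    majority of the configured nodes terminates itself. Since at
    most one partition can hold a strict majority, at most one
    partition survives to run resources."""
    quorum = node_count // 2 + 1
    return [p for p in partitions if len(p) >= quorum]

# Four-node cluster splits 3 / 1: only the 3-node side survives.
print(surviving_partitions([{1, 2, 3}, {4}], 4))   # [{1, 2, 3}]

# An even 2 / 2 split: neither side has a majority; both terminate,
# so the cluster stops rather than risk two active partitions.
print(surviving_partitions([{1, 2}, {3, 4}], 4))   # []
```

The even-split case shows the trade-off MNS makes: it prefers a total outage over the possibility of two partitions independently bringing the same resources online.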