HP OpenVMS Systems

ask the wizard

Cluster Quorum, Partitioning, and Corruptions?


The Question is:

 
I have a cluster composed of two GS60s and a shared interconnect for disk
 access.  Both nodes also have local disks.  A quorum disk has been set up
 for each node on one of the local disks (this means that there are two
 quorum disks in the cluster, each accessible only to the local node at
 boot time).  Does this have any negative consequences as compared to a
 single quorum disk on the shared disk resources?  What is the preferred
 method of using a quorum disk?
 


The Answer is:

 
  Please do not attempt to defeat the cluster quorum mechanism and the
  associated voting scheme.
 
  These mechanisms are the "blade guards", and they are explicitly
  implemented to prevent data loss and data corruption.  The "quorum hang"
  is better thought of as a "user data integrity interlock", not as a
  mechanism that exists solely to irritate a clever system manager into
  defeating it.
 
  In the specific case of the quorum disk, the OpenVMS documentation
  is quite explicit.  There can be either no quorum disk, or one quorum
  disk.  Two or more quorum disks are not permitted.
 
  Consider, if you will, what can happen when there are two quorum disks:
  each node can then reach quorum entirely on its own.  This can result in
  a partitioned cluster, with shared resources then accessed in an
  uncoordinated fashion.  Data corruptions.
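
  To make the failure mode concrete, here is a hedged sketch of roughly how
  such a configuration might appear in each node's MODPARAMS.DAT.  The vote
  values and the device name are assumptions chosen for illustration, not
  values taken from the configuration described in the question:

    ! Sketch only -- each node pointing at its own, private quorum disk
    VOTES = 1                    ! this node contributes one vote
    DISK_QUORUM = "DKA100:"      ! hypothetical LOCAL disk; unsupported layout
    QDSKVOTES = 1                ! the private quorum disk adds one vote
    EXPECTED_VOTES = 2           ! node vote plus local quorum disk vote

  OpenVMS derives the quorum value as (EXPECTED_VOTES + 2) / 2, truncated,
  which is 2 here.  Should the interconnect between the two nodes fail,
  each node still sees its own quorum disk and counts 1 + 1 = 2 votes, so
  both halves retain quorum and keep writing to the shared disks
  independently of one another.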
 
  Further, a quorum disk cannot be located on a volume that is a member
  of a host-based shadowset.  (Quorum disks that are resident on
  controller-based RAID devices are usually permissible.)
 
  Consider what might happen if the member volumes of a shadowset are
  located across hosts, and the connection between the hosts fails.  If
  the quorum disk could be located on a shadowset, this failure could
  then lead to a partitioned cluster.  And data corruptions.
 
  When a partitioned cluster does occur -- and it is surprisingly simple to
  partition a cluster when incorrect VOTES and EXPECTED_VOTES values are
  combined with cluster storage connections such as multi-host SCSI -- the
  severity and the scale of the resulting data corruptions can be very
  large.
 
  For details on the closely-related topic of establishing correct values
  for VOTES and EXPECTED_VOTES, please see the OpenVMS FAQ.
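
  As a point of contrast, here is a minimal sketch of the commonly
  documented two-node arrangement: a single quorum disk on storage that
  both nodes can access directly.  The device name ($1$DGA100:) is a
  hypothetical placeholder; substitute the actual shared device, and run
  AUTOGEN on each node after making the change.

    ! Sketch only -- identical entries in MODPARAMS.DAT on BOTH nodes
    VOTES = 1                    ! each node contributes one vote
    DISK_QUORUM = "$1$DGA100:"   ! the single, shared quorum disk
    QDSKVOTES = 1                ! the quorum disk contributes one vote
    EXPECTED_VOTES = 3           ! two node votes plus one quorum disk vote

  With these values the quorum is (3 + 2) / 2 = 2, so either node together
  with the quorum disk can continue operating after the other node fails,
  while a node that loses both its partner and the quorum disk hangs rather
  than risking a partition.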
 

answer written or last revised on ( 6-MAY-2002 )
