HP OpenVMS Systems

ask the wizard

OpenVMS Clusters and Interconnects?


The Question is:

 
I have two Alpha 2100 systems in a CI cluster and a TA90 tape drive on an
HSC. There is another standalone AlphaStation 255 which needs to have
access to the TA90 tape drive. I know that the only way to do this is by
adding the AlphaStation to the existing cluster, but this is not possible
since an AlphaStation is not supported on a CI cluster. The only option I
have is to go for a LAN (Ethernet) based cluster with the AlphaStation 255
configured as a satellite client. My question is, how can I make this
configuration on a DEC TCP/IP network? I've checked the VMScluster manual
and I cannot find any discussion about this. I would appreciate your
assistance.
 
 
 


The Answer is:

 
  An OpenVMS Cluster operates using a protocol known as System
  Communication Services (SCS), and SCS can operate over a variety
  of communications ports.  Available ports include CI, Ethernet,
  DSSI, Memory Channel, FDDI, Galaxy shared memory, etc.  Specific
  cluster interconnects have specific capabilities and prerequisites.
 
  SCS is compliant with Ethernet and IEEE 802.3 requirements and can
  thus share an Ethernet or 802.3 network with other protocols, such
  as DECnet, IP, and LAT.
 
  SCS does NOT operate over the DECnet or IP protocols; it shares the
  network with these protocols.
 
  An OpenVMS Cluster can operate (standalone) when there are no cluster
  interconnects present, and can also obviously operate when there are
  one or more cluster interconnects present.  The central requirement
  of interconnections is that every node must have a direct connection
  to every other node -- this is certainly easiest with a uniform
  connection to a broadcast medium, but it can also be met through
  other means.  (That is, connecting all members to the same Ethernet,
  802.3, CI, or FDDI medium is the easiest way to meet the requirement
  for total connectivity, but many other configurations are equally
  valid.  For example, total connectivity can be achieved with three
  hosts when each node has two DSSI buses, each shared with one other
  node -- this particular configuration is often drawn as a triangle
  with the hosts represented as the vertices and the three DSSI buses
  as the legs.)
 
  An OpenVMS Cluster must have at least one system disk.  Multiple
  system disks are permissible, and at least two system disks are
  required for mixed-architecture cluster configurations -- one for
  OpenVMS VAX and one for OpenVMS Alpha.
 
  An individual OpenVMS Cluster member can have local or direct access
  to the system disk, or it can be configured as a satellite of another
  member.  Systems with local or direct access operate the same as
  standalone OpenVMS nodes.  Satellite systems differ in the initial
  system bootstrap -- the system console requests a download using
  the maintenance and operations protocol (MOP).  MOP is available as
  part of DECnet Phase IV, DECnet-Plus, and (in OpenVMS V6.2 and later)
  in the OpenVMS LANCP utility.  Once the satellite download request is
  made and then serviced, all subsequent OpenVMS Cluster traffic uses
  SCS protocols.
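
  As a minimal sketch -- the LAN device name EWA0 below is an assumption,
  not a value from your configuration -- the MOP downline-load service
  can be enabled in LANCP along the following lines; CLUSTER_CONFIG_LAN
  normally performs the equivalent steps when a satellite is added:

    $ MCR LANCP
    LANCP> SET DEVICE EWA0/MOPDLL=ENABLE
    LANCP> DEFINE DEVICE EWA0/MOPDLL=ENABLE
    LANCP> EXIT

  (SET changes the volatile setting used by the running LAN server;
  DEFINE records the same setting in the permanent LANCP device
  database, so it survives a reboot.)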
 
  An OpenVMS Cluster includes a mechanism known as the connection
  manager, and this connection manager is what prevents user and
  system data corruptions that can result from a situation known as
  "partitioning", in which systems modify resources without the
  necessary coordination.  Extensive
  information on the specifics of the connection manager and on
  quorum is included in the OpenVMS Frequently Asked Questions (FAQ),
  in the section of the FAQ describing the appropriate settings of
  the SYSGEN parameters VOTES and EXPECTED_VOTES.
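
  As a worked example of the arithmetic involved (the FAQ remains the
  authoritative reference): quorum is computed as (EXPECTED_VOTES + 2) / 2,
  using integer division.  With the two CI-connected members each set to
  VOTES = 1 and EXPECTED_VOTES = 2, quorum is (2 + 2) / 2 = 2, so both
  voting members must be present for the cluster to operate; a non-voting
  member (VOTES = 0) neither adds to nor subtracts from that figure.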
 
  Typically only members with direct access to a system disk are
  configured as voting members of the cluster.  While satellites can
  vote, they cannot bootstrap to contribute a vote when the cluster
  is forming, or when the cluster is in a state known as a "user
  data integrity interlock" or "quorum hang".
 
  In your specific case, the CI nodes can be configured to use the
  Ethernet or IEEE 802.3 network as an additional cluster communications
  port, and the AlphaStation 255 can be configured with the same cluster
  group number and cluster password.  The values of the SYSGEN parameters
  VOTES and EXPECTED_VOTES will need to be set per the description in
  the OpenVMS FAQ, and the SCSNODE and SCSSYSTEMID parameter values will
  also need to be set per the OpenVMS Cluster documentation.  (SCSNODE
  is typically set to six or fewer alphanumeric characters, usually the
  same as the node name.  SCSSYSTEMID is set to a unique value, usually
  derived from the DECnet Phase IV host address when DECnet Phase IV is
  in use.)
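
  For illustration only -- the node name, DECnet address, and vote counts
  below are assumptions, not values from your configuration -- the
  relevant additions to SYS$SYSTEM:MODPARAMS.DAT on the AlphaStation 255
  might look roughly as follows:

    ! hypothetical MODPARAMS.DAT additions for the new LAN cluster member
    SCSNODE = "ALPHA3"       ! six or fewer characters, usually the node name
    SCSSYSTEMID = 1026       ! unique; for DECnet Phase IV address 1.2, 1024*1 + 2
    VOTES = 0                ! non-voting member
    EXPECTED_VOTES = 2       ! sum of the votes of all voting members
    VAXCLUSTER = 2           ! always participate in a cluster
    NISCS_LOAD_PEA0 = 1      ! load PEDRIVER for SCS over the LAN

  AUTOGEN would then be run (for example, @SYS$UPDATE:AUTOGEN GETDATA
  REBOOT NOFEEDBACK) to validate and apply the new parameter values.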
 
  In particular, you will be using CLUSTER_CONFIG or CLUSTER_CONFIG_LAN
  to configure the Cluster nodes.  The latter is identical to the former,
  save that it uses LANCP for the satellite MOP operations.  (LANCP is available in
  OpenVMS V6.2 and later, and eliminates the need for the installation
  of DECnet Phase IV or DECnet-Plus for access to the MOP protocol used
  for the initial satellite bootstrap.  Once the initial MOP download
  request is performed, and the image(s) are downloaded via MOP, MOP is
  not used again.)
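
  A minimal sketch of the invocation (the exact menu text and prompts
  vary with the OpenVMS version):

    $ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM

  The procedure presents a menu that includes adding, removing, and
  changing cluster members; the ADD path prompts for the new member's
  node name and, for a satellite, its LAN hardware address and system
  root.  Once the new member has joined, SHOW CLUSTER (for example,
  $ SHOW CLUSTER/CONTINUOUS) can be used to watch the members and the
  SCS circuits in use.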
 
  For various PCI-based systems, there have been PCI CI adapters made
  available.  The Wizard would recommend checking for support on the
  particular system before proceeding down this path, and would recommend
  considering the differences in the adapter cost and the available
  bandwidth.  (CI has a higher hardware cost and generally lower system
  overhead than other adapters, and provides two paths, each with 70
  megabits per second of throughput.)  Other PCI options include Memory
  Channel, FDDI, and others.  (Classic Ethernet at 10 megabits per second
  is usually considered quite slow by current performance standards.)
 

answer written or last revised on (13-JAN-1999)
