HP OpenVMS Systems

ask the wizard

Tuning OpenVMS RMS I/O Performance?


The Question is:

 
I have written a testing program to write 10000 records to a file.  It
 takes less than 4s.  However, if I run 48 copies of the testing program
 at the same time (by submitting them to a job queue), it takes >47s to
 write the 10000 records.
 
How can I avoid this?
 
I am using standard C to open a file with some VMS-specific attributes;
 the code is as follows.
 
::open(filename.c_str(), O_CREAT | O_APPEND | O_WRONLY, 00644, "shr=get", "rop=asy");
 
I have not called fsync in the testing program, as I intend to let the
 system fsync the file.
 
I use CXX to compile and link the testing program.
 
Thanks,
 
Wing
 
 


The Answer is:

 
  All file access performed by C programs on OpenVMS is performed using
  RMS.  You will thus want to review the available RMS documentation for
  options that can provide you with improved performance.
 
  The original design principle of RMS -- fitting with the OpenVMS design
  philosophy -- is one of extreme reliability and reproducibility.  For
  unshared file access, RMS can and will buffer appended records, but for
  shared files, RMS will write every change through to the disk -- in the
  terminology of UNIX, you could say that RMS will perform an fsync call
  on every write to a shared file.  It does this because the data can be
  seen by other processes, which can then read and use the data immediately
  -- RMS provides shared and synchronized access to files.
 
  You could ask RMS to DEFER WRITES (via the C keyword "fop=dfw") to stop
  this if you (as the application designer) can accept the potential loss
  of data involved.  This can have a potentially large beneficial effect
  on RMS file performance, but the OpenVMS Wizard will warn you that in
  the test you propose, the effect of this change will be minimal.  The
  reason is that RMS does not share buffers in memory: it 'pings'.  If
  stream 1 adds a record, it will hold the dirty buffer.  In order for
  stream 2 to add its record, RMS will have to tell stream 1 (through a
  blocking AST) to write out that buffer, and then re-read it into
  stream 2's private buffer space.
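 
  As an illustration, here is a minimal sketch of requesting deferred
  write through the C RTL open() call.  The file name is hypothetical,
  and the keyword list otherwise follows the code in the question:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* "fop=dfw" asks RMS to defer writing modified buffers back
           to disk; the other keywords match the question's code.  */
        int fd = open("records.dat", O_CREAT | O_APPEND | O_WRONLY, 0644,
                      "shr=get",    /* permit shared read access      */
                      "rop=asy",    /* asynchronous record operations */
                      "fop=dfw");   /* deferred write                 */

        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        /* ... append the records here ... */

        close(fd);
        return 0;
    }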
 
  So from an RMS perspective, in a shared, high-contention mode each
  record added may cause both a write and a read.  You can verify this
  in the ACCOUNTING information for the batch jobs, or in the I/O rates
  to the disk.  The price of this I/O can potentially be mitigated
  through the use of a file-system cache (XFC, VCC (VIOC), or a
  third-party caching product) or through caching hardware controllers
  (HSZ, HSG, etc).  You can also examine the disk I/O queue length
  using the MONITOR DISK/ITEM=QUEUE command.  (A value of 0.5 in this
  display means that half of all I/O requests to the disk are waiting;
  the disk I/O is effectively saturated.)
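 
  One way to observe this from within the application itself is to
  sample the process direct I/O count around the append loop.  The
  following is a minimal sketch (the helper name is our own, not part
  of any API) using the $GETJPIW system service and the JPI$_DIRIO
  item:

    #include <stdio.h>
    #include <jpidef.h>
    #include <starlet.h>

    /* Return this process's cumulative direct I/O count. */
    static unsigned int dirio_count(void)
    {
        unsigned int dirio = 0;
        unsigned short retlen = 0;
        unsigned short iosb[4];
        struct
        {
            unsigned short len, code;
            void *buf;
            unsigned short *rlen;
        } items[2] = {
            { sizeof dirio, JPI$_DIRIO, &dirio, &retlen },
            { 0, 0, 0, 0 }              /* item list terminator */
        };

        /* Zero pidadr and prcnam select the current process. */
        if (!(sys$getjpiw(0, 0, 0, items, iosb, 0, 0) & 1))
            fprintf(stderr, "sys$getjpiw failed\n");
        return dirio;
    }

  Calling dirio_count() before and after the 10000-record loop and
  comparing the two values will show roughly how many direct I/O
  operations each appended record is costing.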
 
  RMS application performance tuning can and should consider any and all
  of the following:
 
    - Use Deferred Write (likely to be good enough for real life usage).
    - Create an append slave server task and ship it records to add.
    - Use an application-managed lock (SYS$ENQ) to group multiple adds.
    - Courageous folks who don't mind venturing into unsupported territory
      could consider using the RMS file lock through SYS$MODIFY.
      (Hints: FAB$M_ESC, $RMEDEF, RME$C_KEEP_LOCK_ON)
    - If the file is not shared and access is for sequential write, the
      FAB$M_SQO flag (in FAB$L_FOP) option can help performance -- this
      option turns off some of the locking that can sometimes be overly
      cautious and unnecessary.
    - Correct sizing for the multibuffer and multiblock counts.
    - Appropriate process quotas.
    - Correct allocation and extend sizes -- the default sizes can
      be far too small for data-intensive applications.  (Several of
      these sizing options appear in the C sketch following this list.)
    - Enable and use RMS global buffers.
    - Sufficient system parameter settings, and available physical memory.
      (Recent OpenVMS releases will require more physical memory.)
    - Tools such as AMDS and Availability Manager can help you monitor
      system activity during testing, and tools such as PCA and SCA
      can be useful in locating bottlenecks within an application.
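 
  Several of the sizing-related items above can be set directly through
  C RTL open() keywords.  Here is a minimal sketch; the file name and
  the particular values are illustrative only, not recommendations:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("records.dat", O_CREAT | O_APPEND | O_WRONLY, 0644,
                      "mbc=127",    /* multiblock count: blocks per buffer  */
                      "mbf=4",      /* multibuffer count: number of buffers */
                      "alq=10000",  /* initial allocation, in disk blocks   */
                      "deq=5000",   /* extend quantity, in disk blocks      */
                      "fop=sqo");   /* sequential-only access; omit this
                                       keyword if the file is shared        */
        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        /* ... sequential record appends here ... */

        close(fd);
        return 0;
    }

  The appropriate values depend on the record size, the access pattern,
  and the process and system memory available.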
 
  And again, you will want to review the available documentation on RMS
  and file operations in the OpenVMS manual set.
 
  You will also want to review the available ECO kits for OpenVMS,
  applying the mandatory kits and other relevant kits -- in the case
  of V7.3, this includes disabling XFC (by setting the VCC_FLAGS system
  parameter to 1, prior to the XFC V2.0 kit) and installing various ECO
  kits.  In general, you will also want to consider OpenVMS upgrades as
  they become available, as there have been a series of improvements
  made in the I/O system and related components in recent OpenVMS
  releases -- the specific source of the performance problem in this
  case is not clear.
 
  When posting questions such as this, please consider including example
  code -- without it, the OpenVMS Wizard cannot easily provide, nor
  easily tailor, (generic) suggestions for your specific code.
 

answer written or last revised on ( 1-APR-2002 )
