HP OpenVMS Systems

ask the wizard

Third-party products? (Sybase, HENCO)


The Question is:

 
Hello wizard,
 
Can you help me with the following question?
1) My customer's RMS files reside on a Digital Model 6520 running VAX/VMS 6.2
(platform A), and the customer requires a copy of Sybase OMNIConnect to access
these data files. We proposed running OMNI on a DS620 running VAX/OpenVMS 7.1
(platform B). We did NOT propose VAX clustering because (a) the customer has a
budget constraint, as clustering costs three times as much as a single-server
architecture, and (b) the customer refused to upgrade the VAX/VMS 6.2 operating
system on platform A.
 
2) After some testing on site and advice from Digital Support, I gathered that
our initial proposal may fail to handle concurrent access and updates to the
RMS file. I therefore need to seek advice from you. Please see the example
below.
 
3) I tested concurrent access with a HENCO application (the programs are on
platform A). HENCO is a development tool whose language looks similar to
COBOL.
 
At platform B:
Step 1) Through OMNI:
begin tran
update RMS-tab set field_1 = "CCCC"
go
select * from RMS-tab
go
Result: I see field_1 = "CCCCCCC".
Note that I have not issued a commit or rollback.
 
At platform A:
Step 2) Through HENCO:
open RMS-tab and move "BBBBB" to field_1
 
Result: the status returned is successful.
 
At platform B:
Step 3) Through OMNI:
 
commit tran
go
select * from RMS-tab
go
Result: I see field_1 = "BBBBBB"; OMNI failed to lock RMS-tab at step 1.
The isolation level was set to 1 all the while.
 
Questions: I would like confirmation from you that
1) these locking problems can ONLY be resolved by VAX clustering; and
2) if not, whether there is any tuning on the VAX that can be done.
 
 


The Answer is:

 
  Please contact the vendors -- Sybase and the vendor of the Henco tool
  or language -- for assistance with the application packages in use here.
  Neither is part of OpenVMS, and neither is familiar to the OpenVMS Wizard.
 
  Single-node file locking is fully operational, fully supported, enabled
  by default, and can fully coordinate shared (write) file access.
  Cross-node file locking is a direct extension of single-node locking,
  and operates transparently across all members of an OpenVMS Cluster.
 
  Cross-node file locking is supported (only) within an OpenVMS Cluster
  configuration.  In a collection of OpenVMS nodes that are not configured
  into a single OpenVMS Cluster, there is no cross-node locking nor
  other cross-node data synchronization performed by any OpenVMS component.
  (This capability is one of the core benefits that the OpenVMS Cluster
  configuration provides.)
 
  Applications can, however, implement their own locking schemes, or other
  schemes to perform synchronization, either locally, across nodes in an
  OpenVMS Cluster, or in a distributed environment.  In addition,
  applications can implement data caching or other mechanisms that could
  bypass the available locking, leading to the visibility of stale or
  unexpected data.
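  As a rough illustration of such an application-level locking scheme, the
  following sketch uses POSIX advisory locks via fcntl().  This is an
  illustrative example only, not OpenVMS code; on OpenVMS the analogous
  facilities would be RMS record locking or the $ENQ/$DEQ lock-manager
  services, and the file name used here is hypothetical.
 
```c
/* Sketch: application-level advisory locking with POSIX fcntl().
 * Illustrates the general idea of an application coordinating its own
 * shared file access; it is NOT the OpenVMS mechanism discussed above. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Acquire an exclusive whole-file lock, perform an update while the
 * lock is held, then release it.  Returns 0 on success, -1 on error. */
int update_under_lock(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    struct flock lk;
    memset(&lk, 0, sizeof lk);
    lk.l_type = F_WRLCK;      /* exclusive (write) lock */
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;             /* 0 means "lock the whole file" */

    if (fcntl(fd, F_SETLKW, &lk) < 0) {   /* block until granted */
        close(fd);
        return -1;
    }

    /* ... update the shared data while holding the lock ... */
    if (write(fd, "CCCC", 4) != 4) {
        close(fd);
        return -1;
    }

    lk.l_type = F_UNLCK;      /* release the lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}
```
 
  A second process attempting the same F_SETLKW on that file would block
  until the first process releases the lock; both sides must use the same
  scheme, which is exactly the cooperation the Wizard describes.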
 
  Depending on the implementation language, it is also possible for data
  caching to occur within the language run-time library support.  For
  instance, the caching of data can potentially cause problems for
  applications written in C if the file access is via C I/O calls -- and
  the data is being shared without the knowledge and participation of the
  C run-time -- when the setvbuf(_IONBF) call is not used.  Other C
  functions of interest include fflush and fsync.  (RMS services provide
  fully integrated and fully distributed cache management on an OpenVMS
  node and across members of an OpenVMS Cluster, and explicit flush calls
  are not needed for the sharing of data.)
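  As a minimal sketch of the setvbuf(_IONBF) technique mentioned above
  (the file name and function name here are hypothetical), disabling the
  stdio buffer ensures writes are not held in a per-process cache:
 
```c
/* Sketch: disable C run-time stream buffering with setvbuf(_IONBF),
 * so each write goes to the file rather than sitting in a private
 * stdio cache where other processes cannot see it. */
#include <stdio.h>

/* Write text to a file with stdio buffering disabled.
 * Returns 0 on success, -1 on error. */
int write_unbuffered(const char *path, const char *text)
{
    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return -1;

    /* _IONBF: no buffering; fputs() output reaches the file
     * immediately instead of waiting for fflush() or fclose(). */
    if (setvbuf(fp, NULL, _IONBF, 0) != 0) {
        fclose(fp);
        return -1;
    }

    if (fputs(text, fp) == EOF) {
        fclose(fp);
        return -1;
    }

    fclose(fp);
    return 0;
}
```
 
  On a buffered stream, an explicit fflush() after each write would be
  the alternative; with _IONBF no explicit flush is required.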
 
  As for what the application products implement with respect to locking or
  caching, you will need to contact the vendor(s) directly.
 

answer written or last revised on ( 8-SEP-1999 )
