HP OpenVMS Systems

ask the wizard

Disk and File Fragmentation Considerations?


The Question is:

 
Hi Wizard...
How do I identify fragmentation that is at the danger level?
I have been advised to check fragmentation by doing a dump header on a file
and looking at the number of 'Header areas'  and the number of extents, or
retrieval pointers under Map area.
OK, so I count them. What is the point at which we should do a backup and
restore? 5 headers, 18 extents? 10 headers, 30 extents? Is there a 'rule of
thumb'?
thanks,
Laura
 


The Answer is :

 
  There is no particular rule of thumb for disk fragmentation -- some level
  of disk fragmentation can be both beneficial and entirely normal, while
  excessive levels of fragmentation tend to reduce performance.  Where this
  normal level becomes a problem depends greatly on your particular local
  requirements, and (sometimes) on whether or not you are being sold a disk
  defragmentation tool.
 
  Without knowing the total size and the typical access pattern(s) for a
  file, providing a specific value for the maximum number of extents is
  not possible.
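  For reference, the header and map information mentioned in the question
  can be displayed with the DUMP command; a minimal sketch, with a
  hypothetical file name:

```
$ DUMP/HEADER/BLOCK=(COUNT:0) DKA100:[DATA]EXAMPLE.DAT
```

  Each retrieval pointer listed in the map area describes one extent, and
  additional header areas indicate further sets of retrieval pointers.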
 
  The DFU tool on the OpenVMS Freeware can check disk fragmentation, and
  the disk fragmentation monitoring portion of the DFO package can also be
  used (and without requiring a DFO product license).
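  As a sketch of invoking DFU, assuming the Freeware DFU kit has been
  installed (the image path and device name here are assumptions, not part
  of the original answer):

```
$ DFU :== $SYS$SYSTEM:DFU.EXE    ! foreign-command symbol; path is an assumption
$ DFU REPORT DKA100:             ! disk statistics, including fragmentation
```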
 
  If you have specific files that are severely fragmented and that are
  also affecting performance, you can look at setting (more) appropriate
  file extent sizes and at using COPY to recreate the file.  (As an
  example of a file that tends to be fragmented, but that does not
  generally affect performance, consider the OPERATOR.LOG file.)
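  A minimal sketch of that approach, with hypothetical file names and
  extent size; COPY/CONTIGUOUS requests a contiguous copy, and SET
  FILE/EXTENSION adjusts the extent size used for future growth:

```
$ SET FILE/EXTENSION=1000 DKA100:[DATA]BIG.DAT   ! larger extents from now on
$ COPY/CONTIGUOUS DKA100:[DATA]BIG.DAT DKA100:[DATA]BIG_NEW.DAT
```

  Once the copy has been verified, the original can be deleted and the new
  file renamed into its place.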
 
  Other considerations include appropriate tuning of indexed files (with
  EDIT/FDL, and CONVERT and CONVERT/FDL passes as required), as well as
  monitoring the length of the disk I/O queues (MONITOR DISK/ITEM=QUEUE),
  and watching for increases in the numbers of split I/Os and window turns
  (which can indicate fragmentation).  Also of interest will be the cache
  hit rates, and the use of global buffers on active shared files.
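  As a sketch of the monitoring commands involved (the MONITOR class names
  are as documented; intervals and display options are left at their
  defaults):

```
$ MONITOR DISK/ITEM=QUEUE       ! disk I/O queue lengths
$ MONITOR IO                    ! includes split transfer and window turn rates
$ MONITOR FILE_SYSTEM_CACHE     ! XQP cache hit rates
```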
 
  Process quotas are also of interest -- insufficient process quotas
  can throttle application performance.
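  The current quotas for a process can be displayed with, for example:

```
$ SHOW PROCESS/QUOTAS
```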
 
  One way to establish your baseline is to CONVERT/RECLAIM and EDIT/FDL
  and CONVERT/FDL any indexed files to remove cruft, then examine the
  number of extents on the critical files (indexed or other), then measure
  application performance, then use BACKUP/IMAGE (or a purchased tool) to
  defragment your disk(s), then measure performance again.  (BACKUP/IMAGE
  has an advantage of also leaving you with a known-good backup copy, too.)
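  One possible sequence for the baseline pass described above -- a sketch
  only, with hypothetical file, device, and saveset names:

```
$ CONVERT/RECLAIM DKA100:[DATA]MASTER.IDX
$ ANALYZE/RMS_FILE/FDL=MASTER.FDL DKA100:[DATA]MASTER.IDX
$ EDIT/FDL/NOINTERACTIVE/ANALYSIS=MASTER.FDL MASTER.FDL
$ CONVERT/FDL=MASTER.FDL DKA100:[DATA]MASTER.IDX DKA100:[DATA]MASTER.IDX
$ BACKUP/IMAGE DKA100: MKA500:FULL.BCK/SAVE_SET   ! also a known-good backup
$ BACKUP/IMAGE MKA500:FULL.BCK/SAVE_SET DKA100:   ! restore defragments
```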
 
  The general rule of thumb for tuning is to first find the bottleneck and
  then remove it -- don't focus first on any particular area such as disk
  fragmentation -- and systematically evaluate overall performance looking
  for the limiting factor(s).  Then remove them.
 
  The OpenVMS Performance Management manual (general tuning) and the OpenVMS
  File Applications manual (RMS topics) will be of interest here.
 

answer written or last revised on ( 2-JUL-1999 )
