HP OpenVMS Systems: Ask the Wizard
The Question is:

I have a total disk size of 33501920 blocks. How can I maximize the number of files allowed in my volume? We run a cluster with two nodes; the disk volume is shared between the two.

The Answer is:
The OpenVMS Wizard will assume references to the queue manager are
unrelated to the problem, save for the propensity of batch jobs to
create log files.
You will have to delete files, reinitialize the volume with a larger
maximum file count specified, or both. You can also consider setting
file version limits on the directories on the disk volume, to help
control the number of back versions maintained.
Also consider defragmenting the disk: this can improve performance,
and it can also release the extra file headers that are used for
highly fragmented files.
Each file will take at least one disk block for the file header.
As empty files are not particularly interesting, each file will
require one block for the header and at least one disk volume
cluster for the file data.
In theory, there can be at most 33501920/2 = 16,750,960 individual
non-empty files -- less some minor overhead -- created on this disk
volume. Given that many files on a volume are larger than one disk
cluster in size, this theoretical value is typically greatly reduced.
On all OpenVMS versions, the disk volume cluster size -- the cluster
factor -- will default to 33501920/(255*4096) = 33 (rounded up) blocks
for a volume of this size. Thus with the default cluster factor, the
smallest (non-empty) file will require 34 blocks, and you can have
at most 33501920/34 = 985350 files on this volume.
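The arithmetic above can be sketched as follows (Python is used here purely for illustration; the divisor of 255*4096 clusters is the default calculation quoted above):

```python
import math

volume_blocks = 33_501_920  # total size of the volume in 512-byte blocks

# Default cluster factor: the volume size divided by the number of
# clusters a 255-block storage bitmap can map (4096 bits per block),
# rounded up.
cluster_factor = math.ceil(volume_blocks / (255 * 4096))

# Smallest non-empty file: one block for the header plus one cluster
# of file data.
min_file_blocks = 1 + cluster_factor

max_files = volume_blocks // min_file_blocks
print(cluster_factor, min_file_blocks, max_files)  # 33 34 985350
```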
The initialization command for the maximal number of files for this
and other disk volumes of this capacity is thus:
$ initialize ddcu: volume-label -
/headers = 985350 -
/maximum_files = 985350 -
/system
The /headers qualifier preallocates blocks to indexf.sys, to reserve
the storage and to speed the file creation operations.
With OpenVMS V7.2 and later, you can explicitly select a volume cluster
factor smaller than the default using the /CLUSTER_SIZE qualifier.
In practice, you can select a cluster factor of one block for volumes
with capacities of up to approximately 137 gigabytes on V7.2 and later.
Disks larger than this will require a correspondingly larger cluster
factor. On versions prior to V7.2, the default calculation for the
/CLUSTER_SIZE cluster factor is also the minimum cluster factor
permitted on the volume.
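As a rough sketch of where the roughly 137 gigabyte limit comes from, assuming a maximum storage bitmap of 65535 blocks on V7.2 and later versus the 255 blocks implied by the default calculation above (these bitmap constants are assumptions for illustration, not stated in the text):

```python
import math

BITMAP_BLOCKS_V72 = 65_535   # assumed maximum storage-bitmap size, V7.2+
BITMAP_BLOCKS_PRE = 255      # assumed bitmap size behind the pre-V7.2 default
BITS_PER_BLOCK = 4_096       # one allocation bit per cluster, 4096 bits/block

def min_cluster_factor(volume_blocks: int, bitmap_blocks: int) -> int:
    """Smallest cluster factor whose allocation bitmap still fits."""
    return math.ceil(volume_blocks / (bitmap_blocks * BITS_PER_BLOCK))

# A ~137 GB volume (about 268 million 512-byte blocks) can still use a
# cluster factor of 1 on V7.2 and later, while the older bitmap limit
# would force a much larger cluster factor.
blocks_137gb = 137 * 10**9 // 512
print(min_cluster_factor(blocks_137gb, BITMAP_BLOCKS_V72))  # 1
print(min_cluster_factor(blocks_137gb, BITMAP_BLOCKS_PRE))  # 257
```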