This is really only a partial summary since the response was
rather sparse, but for what it's worth:
> [...background detail removed...]
>        My speculation is many of the physical reads on this domain were
>        satisfied by UBC even though they were not satisfied within
>        Oracle's buffers.
>        Can anybody offer an opinion?
Yes, satisfied by UBC.
Since an Oracle "physical" read really means a read passed to
the operating system, tuning of UBC can affect Oracle performance.
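As a rough illustration of how one might quantify this (a sketch of my
own, with made-up numbers rather than figures from the monitored runs):
difference Oracle's "physical reads" statistic against the read
transfers iostat/uaiostat reports for the same interval; whatever is
left over never reached the disks.

/*
 * ubc_absorb.c -- back-of-the-envelope sketch (mine, not from any
 * response): estimate how many Oracle "physical" reads never reached
 * the disks, i.e. were satisfied from UBC.  Both inputs are interval
 * deltas; the values below are made up purely for illustration.
 */
#include <stdio.h>

int main(void)
{
    double oracle_phys_reads = 250000.0; /* hypothetical: delta of v$sysstat "physical reads" */
    double device_reads      = 100000.0; /* hypothetical: delta of iostat/uaiostat read transfers */

    double absorbed = oracle_phys_reads - device_reads;

    printf("reads satisfied by UBC: %.0f (%.1f%% of Oracle physical reads)\n",
           absorbed, 100.0 * absorbed / oracle_phys_reads);
    return 0;
}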
>  *3    Does anybody know why hsz dstat failed to report the
>        write command rates?
No response on this.  
This workload was primarily reads.  When running dstat on a more
active system, the write command rates were reported.
Since dstat is unofficial, particularly for the hsz40, this isn't
worth pursuing further.
>  *4    For the two active domains on the monitored hsz40's, the HSZ
>        read command rate significantly exceeded the uaiostat transfer rate.
>        During many of the intensive 5 minute reporting intervals
>        the ratio was close to exactly 1.50 hsz read commands to
>        UNIX transfers.  Likewise, if you subtract out the Oracle write
>        from the uaiostat transfers the ratio also approaches 1.50 for
>        the entire period.
>        My only guess is the hsz40 'read-ahead' algorithm may have been
>        invoked on these domains since the tests primarily reflect
>        serial reads through large data files.
>        Can anybody offer an opinion?
No response on this.
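For clarity, here is the arithmetic behind the 1.50 figure as a sketch
(my own, with hypothetical interval numbers, not values from the
monitored intervals):

/*
 * hsz_ratio.c -- sketch of the ratio discussed in *4 (my illustration,
 * hypothetical numbers): hsz read commands per UNIX transfer, after
 * subtracting Oracle writes from the uaiostat transfer count.  A value
 * near 1.5 would be consistent with controller read-ahead issuing
 * extra read commands beyond what the host actually requested.
 */
#include <stdio.h>

int main(void)
{
    double hsz_read_cmds  = 30000.0;  /* hypothetical: dstat read commands for the interval    */
    double uaiostat_xfers = 22000.0;  /* hypothetical: uaiostat transfers for the interval     */
    double oracle_writes  =  2000.0;  /* hypothetical: Oracle writes during the same interval  */

    printf("hsz read commands per host read transfer: %.2f\n",
           hsz_read_cmds / (uaiostat_xfers - oracle_writes));
    return 0;
}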
>  *5    Does anybody know where there is a good discussion
>        (short of reading the UNIX source code) on the vpf_ubc* fields?
No response on this.
We have started reading the source code to augment the manuals.
It appears that some of the other tuning parameters we've previously
left alone merit further examination,
including ubc_maxdirtywrites, vm_ubcbuffers, vm_syncswapbuffers,
vm_asyncswapbuffers, and particularly vm_ubcseqstartpercent.
We're starting to look at this, but first we're working on a
monitoring tool to help gauge how effective tweaking these will be.
If anybody has some examples and discussion on adjusting these,
I'd be interested in hearing your experience.
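To show the sort of monitoring tool I have in mind, here is a minimal
sketch (my own, far from final): it simply differences two snapshots of
"name value" counter pairs (for instance the vpf_ubc* fields, in
whatever form they can be extracted) taken one interval apart.  The
input format here is an assumption; the point is only the interval
differencing of cumulative counters.

/*
 * ctrdiff.c -- minimal sketch: read two snapshot files of
 * "name value" pairs and print the per-interval delta for every
 * counter present in both.  The snapshot format is assumed, not
 * anything an existing tool is known to produce.
 */
#include <stdio.h>
#include <string.h>

#define MAXCTR 64

struct ctr { char name[64]; long value; };

static int load(const char *file, struct ctr c[])
{
    FILE *fp = fopen(file, "r");
    int n = 0;

    if (fp == NULL) {
        perror(file);
        return -1;
    }
    while (n < MAXCTR && fscanf(fp, "%63s %ld", c[n].name, &c[n].value) == 2)
        n++;
    fclose(fp);
    return n;
}

int main(int argc, char *argv[])
{
    struct ctr before[MAXCTR], after[MAXCTR];
    int nb, na, i, j;

    if (argc != 3) {
        fprintf(stderr, "usage: %s snapshot1 snapshot2\n", argv[0]);
        return 1;
    }
    nb = load(argv[1], before);
    na = load(argv[2], after);
    if (nb < 0 || na < 0)
        return 1;

    /* print the interval delta for each counter found in both snapshots */
    for (i = 0; i < na; i++)
        for (j = 0; j < nb; j++)
            if (strcmp(after[i].name, before[j].name) == 0)
                printf("%-24s %ld\n", after[i].name,
                       after[i].value - before[j].value);
    return 0;
}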
>  *6    The values reported by vmubc were meaningless,
>        Table#3 came from vm_perfsum.
>        Can anybody offer an opinion on vmubc and UBC usage analysis,
>        specifically as it relates to AdvFS?
AdvFS does utilize UBC somewhat differently from UFS
(per a colleague who has been reading the source).
>  #7    Does anybody know of any good reference material on breaking
>        out IO performance components (Oracle, UNIX, AdvFS, UBC, kzpsa,
>        hsz40, devices) along the entire path?
This question was previously asked by another poster, who also got no response.
>        Does anybody know of good io benchmarking materials for
>        Oracle under Digital UNIX?
>        (I would be happy to sign non-disclosures if necessary).
No response on this.
On a side note, while waiting for installupdate runs for
v3.2g->v4.0b last weekend I passed the time by rewriting iostat
from scratch.  Besides the enhancements we had previously added
as uaiostat for hsz support and summaries, I've added
a horizontal display, since we now have systems which cannot
show all their disks in 132 columns.  If anybody else has pet
gripes with iostat which they'd like tucked into a replacement
utility, let me know and I'll see if I can work them in.
Basic command syntax is compatible with iostat.
A sample:
sxkac@glacier> uaio -St -h6 60 4
Boot:  97/04/20 97/04/30    |  Seconds    :       60       60       60       60
       10:05:58 11:14:14    |      240    | 11:15:14 11:16:14 11:17:14 11:18:14
glacier         transfer  tps transfer  tps transfer transfer transfer transfer
Bus#   0:    1:        .    .        .    .        .        .        .        .
Bus#   3:    4: 17271465   20    19823   83     4897     5299     5196     4431
Bus#   6:    4:  1066812    1        .    .        .        .        .        .
Bus#   7:    5: 16297545   19    17559   73     6223     1875     1947     7514
Total  4:   14: 34635822   40    37382  156    11120     7174     7143    11945
Disk rz0      :        .    .        .    .        .        .        .        .
Disk rz25     :  5575342    6     5546   23     1422     1456     1393     1275
Disk rz26     :  3178761    4     1727    7     1265      179      167      116
Disk rz27     :  6515573    8    10701   45     1798     3329     3208     2366
Disk rz28     :  2001789    2     1849    8      412      335      428      674
Disk rz48     :   305967    0        .    .        .        .        .        .
Disk rz49     :   294580    0        .    .        .        .        .        .
Disk rz50     :   290914    0        .    .        .        .        .        .
Disk rz51     :   175351    0        .    .        .        .        .        .
Disk rzb57    :  3464076    4     5278   22     2262      346       61     2609
Disk rzc57    :  4320239    5     4164   17      787      548     1434     1395
Disk rz58     :  2709392    3     1617    7      671      117       99      730
Disk rz59     :  2726999    3     3518   15     1304      699      152     1363
Disk rz60     :  3076839    4     2982   12     1199      165      201     1417
cpu User        410028.4  12%    477.3  50%    117.5    109.7    125.7    124.4
cpu Nice         17157.0   0%      2.1   0%      0.1      0.0      1.9      0.1
cpu System      330583.1  10%    175.6  18%     40.9     39.3     45.9     49.5
cpu Wait        630270.0  18%      0.0   0%      0.0      0.0      0.0      0.0
cpu Idle       2084341.3  60%    305.1  32%     81.6     90.9     66.6     66.0
I'll make uaio available for ftp when I'm done testing changes to summarize 
previously archived runs of the program.  
_____________________________________________________________________
Kurt Carlson,      University of Alaska SOIS/TS,        (907)474-6266
sxkac@alaska.edu      910 Yukon Drive #105.63, Fairbanks,  AK 99775-6200