HP OpenVMS Systems

ask the wizard

System Buffer Unavailable?


The Question is:

 
I am running DECnet Phase IV, and have noticed that my line counters
show "system buffer unavailable" increasing at an alarming
rate.
 
The documentation is unclear as to which network and account
parameters I should be looking at to correct this.
 
Can you suggest which things I should be looking at?
 
Peter
 
 


The Answer is:

 
 
    The first item to check would be DECnet Executor Pipeline Quota.
 
    $ MCR NCP
    NCP> sho exec pipeline quota
    NCP> set exec pipeline quota 4032
    NCP> def exec pipeline quota 4032
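 
    To see whether the counter keeps climbing after the change, the line
    counters can be inspected and zeroed from NCP. A sketch; the line name
    EWA-0 is an assumption, substitute whatever SHOW KNOWN LINES reports:
 
    $ MCR NCP SHOW KNOWN LINES           ! list the lines on this system
    $ MCR NCP ZERO LINE EWA-0 COUNTERS   ! reset the counters (assumed name)
    $ MCR NCP SHOW LINE EWA-0 COUNTERS   ! watch "system buffer unavailable"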
 
    There are several logical names that can affect DECnet performance;
    check the settings of:
 
      "NETACP$BUFFER_LIMIT" = "131070"
      "NETACP$ENQUEUE_LIMIT" = "600"
      "NETACP$EXTENT" = "16384"
      "NETACP$MAXIMUM_WORKING_SET" = "500"
      "NETACP$PAGE_FILE" = "25000"
 
    These are normally defined in sys$manager:sylogicals.com.
    (The values above are an example from a local Alpha system)
 
    e.g.
    $ define/system/exec NETACP$BUFFER_LIMIT 131070
 
 
    The high rate of "system buffer unavailable" could be due to
    hardware/network problems, or the CPU being very busy at high IPL, so
    the driver can't get around to servicing interrupts and emptying its
    buffers. I would encourage you to discuss this with your local MCS
    hardware/software support folks.
 
    For some background on why 4032 is a good value for pipeline quota, read
    on.
 
 
    James	:-)
 
    
 
    DECnet Phase IV performance can be optimized by tuning the number of
    transmitter buffers to mesh with the speed and window size of the
    Ethernet receiver.
 
    For the majority of LAN-based systems, set DECnet Executor Pipeline
    quota to 4032.
 
    For example:
 
    $ MCR NCP
    NCP> set exec pipeline quota 4032
    NCP> define exec pipeline quota 4032
 
    DECnet Phase IV Executor Pipeline quota usage
 
    The DECnet Executor Pipeline quota determines the maximum transmit
    window size. This is the maximum number of packets that will be
    transmitted before asking the receiver for an ACK (Implicit flow
    control).
 
    The maximum transmit window size is determined by dividing Pipeline
    quota by the Executor Buffer size.
 
    Although Ethernet packets can be 1498 bytes and FDDI packets can be 4468
    bytes, the default DECnet Executor Buffer size is 576. This is
    independent of the packet size sent on the wire.
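 
    The buffer size (and pipeline quota) currently in effect can be read
    from NCP; "Buffer size" appears in the characteristics display:
 
    $ MCR NCP SHOW EXECUTOR CHARACTERISTICS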
 
    Given the default of 576 for exec_buffer_size:
 
    Max_transmit_window_size = exec_pipeline_quota/exec_buffer_size
 
    The initial transmit window size = ((Max_transmit_window_size / 3)*2)+1
 
    From there, the NSP flow control algorithms raise and lower transmit
    window size between 1 and Max_transmit_window_size.
 
    The minimum is 1 buffer, Pipeline quota = 576.
 
    The maximum is 40 buffers, Pipeline quota = 23040.
 
    Values for Pipeline quota larger than 23040 have no effect.
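 
    The arithmetic is easy to check with a few lines of DCL; a sketch using
    the default buffer size (the quota value is illustrative):
 
    $ quota = 4032                       ! executor pipeline quota
    $ bufsize = 576                      ! default executor buffer size
    $ maxwin = quota / bufsize           ! integer divide: 4032/576 = 7
    $ initwin = ((maxwin / 3) * 2) + 1   ! ((7/3)*2)+1 = 5
    $ write sys$output "Max window:     ''maxwin'"
    $ write sys$output "Initial window: ''initwin'"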
 
    In DECnet Phase IV there are several layers of buffers for Ethernet
    communication.
 
    The Ethernet Adaptor has a pool of buffers. This number is determined by
    the Adaptor and the driver, and cannot be tuned by a system or network
    manager.
 
    When incoming packets are dropped due to "Insufficient space in the
    Ethernet Adapter" or "Insufficient speed of the Ethernet Adapter", you
    get either "Local Buffer Errors" or "Device Overrun Errors". In this
    case the Ethernet Adapter itself is not fast enough or doesn't have
    enough onboard memory. This could be caused by design limitations of the
    adapter itself, or an I/O bound system where the adapter can't get at
    main system memory fast enough to empty its internal buffers.
 
    When packets are dropped due to "insufficient buffering" in the driver
    on the main system, you'll get "system buffer unavailable". This can
    be due to the CPU being very busy at high IPL, so the driver can't get
    around to servicing interrupts and emptying its buffers. This can be
    caused, or aggravated, by the system's total environment, such as memory
    usage, disk usage, maladjustment of system parameters, runaway
    applications, CPU speeds, etc.
 
    Since a system or network manager cannot increase the Ethernet Adaptor
    buffer pool, the only actions that can be taken in this case are to use
    a different (faster, or with more buffer space) Ethernet Adaptor on the
    receiving node, tune the VMS system to best advantage, understand the
    implications of memory and disk loading, and limit the "bursts" of
    packets other systems transmit to the overrun system. For DECnet
    Phase IV, lowering the EXECUTOR PIPELINE QUOTA on transmitting nodes
    lowers the number of packets the overrun system needs to handle at a
    given time, making communication smoother and more efficient.
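 
    For example, to hold a transmitting node to a four-buffer burst
    (4 * 576 = 2304; the value is illustrative, not a recommendation):
 
    $ MCR NCP
    NCP> set exec pipeline quota 2304
    NCP> def exec pipeline quota 2304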
 
 
    The next layer is a pool of buffers held for DECnet by the Ethernet
    driver; it determines the length of the line's receive queue. A system
    or network manager can tune this pool by adjusting the LINE RECEIVE
    BUFFER count in the range of 1 to 32. When a significant
    user_buffer_unavailable count occurs, packets
    are being dropped by the Ethernet Driver because DECnet is not
    processing them fast enough.
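 
    A sketch of raising the count; the line name EWA-0 is an assumption:
 
    $ MCR NCP
    NCP> show line EWA-0 characteristics
    NCP> set line EWA-0 receive buffers 32
    NCP> define line EWA-0 receive buffers 32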
 
    If the user_buffer_unavailable CIRCUIT counter increments, it's because
    DECnet is not processing packets fast enough. When the
    user_buffer_unavailable LINE counter increments, it could be any
    Ethernet application that is not processing packets fast enough.
 
    A system or network manager can try to limit the user_buffer_unavailable
    count by increasing the LINE RECEIVE BUFFER count, or by limiting the
    "burstiness" of transmission by limiting the EXECUTOR PIPELINE QUOTA of
    transmitting nodes.
 
    The last set of buffers is the receive window. DECnet Phase IV provides
    a 7-buffer (hard-coded) receive window per connection (link). This
    cannot be tuned by the system or network manager. The window size
    specifies the maximum number of frames that may be received before the
    data is transferred from system buffers to process buffers. The real
    limit can be lower: when two packets arrive without an associated
    receive IRP queued, a "backpressure" congestion control message is sent
    back telling the sending system to "XOFF" the logical link.
 

answer written or last revised on ( 18-JUN-1998 )
