HP OpenVMS Systems

ask the wizard

Application-reported IP data loss?


The Question is:

 
I am programming a client-server application using UCX V4.1 and the DEC C++
compiler. Both ends use non-blocking sockets. The server sends packets of up
to about 8 KB and checks the total byte count transmitted (returned by
send()), retrying on error until the whole buffer has been sent. The client
calls recv() until the total number of bytes has been received or the buffer
is empty (by checking the recv() return value). This works for small packets
(up to a few hundred bytes), but when the packet size is increased to several
KB it starts to lose some data (not whole packets, but portions of them). For
example, the server may send two packets of 2000 bytes each and the client
will receive them as one chunk 3700 bytes long.
Any idea?
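
(For reference, the send and receive loops described above might look roughly
like the sketch below. This is only an illustrative outline using standard BSD
socket calls; the names send_all and recv_all are made up, the header names and
error-handling policy may need adjusting for DEC C on OpenVMS, and retrying
immediately on EWOULDBLOCK is shown only for brevity -- a real non-blocking
program would wait with select() instead of spinning.)

    #include <errno.h>
    #include <socket.h>     /* or <sys/socket.h>, depending on the environment */

    /* Send the whole buffer, retrying on partial writes.  On a
       non-blocking socket send() may also fail with EWOULDBLOCK;
       this sketch simply retries. */
    int send_all(int sock, const char *buf, unsigned int len)
    {
        unsigned int done = 0;
        while (done < len) {
            int n = send(sock, buf + done, len - done, 0);
            if (n > 0)
                done += n;
            else if (n < 0 && (errno == EWOULDBLOCK || errno == EINTR))
                continue;           /* transient condition: try again */
            else
                return -1;          /* hard error */
        }
        return 0;
    }

    /* Receive exactly len bytes.  TCP is a byte stream, so one
       recv() may return part of one send() or pieces of several;
       looping until the expected count arrives preserves the
       application's message boundaries. */
    int recv_all(int sock, char *buf, unsigned int len)
    {
        unsigned int done = 0;
        while (done < len) {
            int n = recv(sock, buf + done, len - done, 0);
            if (n > 0)
                done += n;
            else if (n == 0)
                return -1;          /* peer closed the connection */
            else if (errno == EWOULDBLOCK || errno == EINTR)
                continue;           /* transient condition: try again */
            else
                return -1;          /* hard error */
        }
        return 0;
    }

(The 3700-byte chunk described above is consistent with this stream behaviour:
the remaining bytes would typically be returned by a later recv() call.)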
 


The Answer is :

 
  First, move to TCP/IP Services (UCX) V5.0A.
 
  Then -- if the problem persists -- please contact the Compaq Customer
  Support Center.  An example of some source code that demonstrates the
  failure will greatly simplify and speed the effort involved in locating
  the underlying cause.
 
  That said, use of the sys$qio interface to TCP/IP Services is often the
  easiest approach for event-driven programming.  Also, please see topic
  1661 here for a discussion of common OpenVMS programming mistakes.
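 
  By way of illustration only, a minimal sketch of queuing one asynchronous
  read with sys$qio and an AST completion routine follows.  It assumes a
  channel has already been assigned to the TCP/IP device and the connection
  established; the IOSB layout shown is the conventional one, the buffer size
  and AST logic are arbitrary, and casts may be needed to match the exact
  starlet.h prototypes of a given compiler version.

    #include <starlet.h>    /* sys$qio */
    #include <iodef.h>      /* IO$_READVBLK */
    #include <stdio.h>

    /* I/O status block: first word is the completion status,
       second word is the number of bytes transferred. */
    struct iosb_t {
        unsigned short status;
        unsigned short count;
        unsigned int   devinfo;
    };

    static char          rdbuf[8192];
    static struct iosb_t rd_iosb;

    /* AST routine: runs asynchronously when the read completes.
       A real program would typically queue the next read here. */
    static void read_done_ast(int param)
    {
        if (rd_iosb.status & 1)
            printf("read completed: %u bytes\n", rd_iosb.count);
        else
            printf("read failed, status %u\n", rd_iosb.status);
    }

    /* Queue one asynchronous read on an already-connected channel. */
    int queue_read(unsigned short chan)
    {
        int status = sys$qio(0,                   /* event flag          */
                             chan,                /* channel (BG device) */
                             IO$_READVBLK,        /* read function code  */
                             &rd_iosb,            /* I/O status block    */
                             read_done_ast, 0,    /* AST routine, param  */
                             rdbuf, sizeof rdbuf, /* P1 buffer, P2 size  */
                             0, 0, 0, 0);
        return (status & 1) ? 0 : -1;
    }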

answer written or last revised on ( 30-SEP-1999 )
