HP Availability Manager Version 2.3 Release Notes

The following notes address late-breaking information and known problems
for the HP Availability Manager Version 2.3. These notes fall into the
following categories:

o  Installation note
o  Problems corrected
o  New and changed features
o  Operation notes
o  Display notes

1  Installation Note

This note pertains to the installation of Availability Manager Version 2.3.

1.1  Uninstall Prior Versions Before Installing the New Kit

Before you install the kit, you need to uninstall any previous versions of
the software, as explained in the Version 2.3 installation instructions.
Prior to installation, you might want to make a copy of your AVAILMAN.INI
file as a reminder of the names of the groups you usually monitor. Also,
delete any desktop shortcuts for previous versions of the Availability
Manager because they will be invalid with the new version.

2  Problems Corrected in Version 2.3

The following sections discuss key problems that have been corrected since
the release of the Availability Manager Version 2.2-1.

2.1  Corrected Host Node Page/Swap File Display

OpenVMS Version 7.3-1 and higher do not have a page or swap file "Reserved"
field. Availability Manager displays have been updated to reflect this
change.

2.2  Wait States on Single Process Are Now Explained

In previous versions of the Availability Manager, explanations of wait
states were omitted from the description of the Single Process Wait States
page. Wait state calculations are now explained in Chapter 3 of the
Availability Manager User's Guide and in tooltips.

2.3  Out-of-Memory Problem

In previous versions, a memory leak eventually caused the graphical user
interface to become unresponsive. This problem has been corrected.

2.4  Data Collector Errors

In previous versions, the Data Collector would, on rare occasions, cause a
systemwide failure due to divide-by-zero and range-check errors. These
problems have been corrected.

2.5  Most Events Trigger Color Scheme

Any event that is not classified as an informational message causes a node
to be displayed in red, as described in the Getting Started chapter of the
Availability Manager User's Guide.

2.6  Problem with Seasonal Time Changes Corrected

Previous versions of the Availability Manager used a version of the Java
runtime environment that had problems with seasonal time changes.
Availability Manager Version 2.3 uses a version of the Java runtime
environment that corrects this problem. For OpenVMS systems, make sure that
the time zone differential logical name SYS$TIMEZONE_DIFFERENTIAL is
defined correctly (a brief check is sketched after Section 3.3).

2.7  Additional Problems Corrected

The following problems have also been corrected:

o  Tooltips now appear in node displays.
o  Single disk display windows now display consistently.
o  Various font size problems have been corrected on lock and cluster
   pages.

3  New and Changed Features in Version 2.3

The following sections discuss new and changed features introduced in this
version of the Availability Manager.

3.1  DECamds Parity

The Availability Manager has now reached functional parity with DECamds;
all features supported by DECamds are now supported by the Availability
Manager. The Availability Manager also contains many additional
enhancements and new features.

3.2  Memory Utilization

Memory utilization in the Data Analyzer has been improved when the
Availability Manager loads program libraries.

3.3  Performance

There has been a moderate improvement in the overall performance of the
Data Analyzer.
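As a supplement to the time-zone note in Section 2.6, the following
commands illustrate one way to inspect the logical name on OpenVMS. The
value shown is only a sample (it corresponds to US Eastern Standard Time,
-18000 seconds from UTC); your system should show the offset, in seconds,
for your local time zone:

   $ SHOW LOGICAL SYS$TIMEZONE_DIFFERENTIAL
      "SYS$TIMEZONE_DIFFERENTIAL" = "-18000" (LNM$SYSTEM_TABLE)

If the value does not match your local offset from UTC, it can usually be
corrected by rerunning the system time zone setup procedure (for example,
SYS$MANAGER:UTC$TIME_SETUP.COM); consult the OpenVMS system management
documentation for the authoritative procedure.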
3.4  Window Turn Rate

The window turn rate for disks is now supported on the OpenVMS I/O Summary
page.

3.5  NOPROC Event Support; Watch Process Customization Page

The NOPROC event has been implemented in this release. You can now monitor
up to eight processes on a node using the new Watch Process Customization
page. If you enter a process name and that process disappears, the
Availability Manager signals a NOPROC event and displays the following
message in the Events pane:

   NOPROC node-name cannot find process named: process-name

If the process then reappears, the following message is displayed in the
Events pane:

   PRCFND node-name has recently discovered process process-name

This feature requires the latest version of the Availability Manager
Version 2.3 Data Collector on the OpenVMS node being monitored.

3.6  LOVOTE and LOVLSP Events

LOVOTE and LOVLSP events have been implemented. Both events are explained
in Appendix B of the Availability Manager User's Guide.

3.7  Lock Log

In previous versions, there was no way to see lock contention history,
which made resolving lock contention difficult. To facilitate lock
contention investigation, locks under contention are now written to a log
file called AvailManLock.Log.

3.8  LAN Adapters Renamed to LAN Devices

In cluster displays, the term "LAN adapters" has been renamed to "LAN
devices" to be consistent with other OpenVMS utilities such as SCACP.

3.9  CPU Process State Summary Display

This new display, on the OpenVMS CPU Modes Summary and Process States page,
allows you to easily monitor process states on the system. Refer to
Chapter 3 of the Availability Manager User's Guide.

3.10  How to Print a Screen

Documentation has been added to explain how to print a screen. Refer to the
Getting Started chapter in the Availability Manager User's Guide.

3.11  Event Counts and List of Events

A count of events has been added to the Node pane of the main Application
window. Also, if you hold the cursor over a node name or the event count,
the Availability Manager displays a list of the events included in that
count.

4  Operation Notes

The following sections contain notes pertaining to the operation of the
Availability Manager Version 2.3.

4.1  Administrator Account Required

On Windows 2000 and Windows XP platforms, the Data Analyzer must be run
from an account in the Administrator group. This restriction will be
removed in the next major release of the Availability Manager.

4.2  Problem Displaying Large Numbers of Processes or Disks

Very busy networks can sometimes interfere with the transfer of data
between the Data Analyzer and the Data Collector. This problem is
noticeable when you display large numbers of disks or processes: the number
of disks or processes shown might change temporarily because of a lost data
message. This problem will be corrected in a future release.

4.3  Event Reporting Problems

The following list contains known event-reporting problems:

o  Unimplemented threshold event: LOSTVC
o  Event reporting irregularities:
   -  Some posted events may not be canceled promptly when the condition
      goes away.
   -  LOVOTE and LOVLSP events are posted for every node in the cluster
      rather than once per cluster.

4.4  Out-of-Memory Problems on Long Runs

If a session runs for many days and the Data Analyzer is collecting data on
many nodes, the Data Analyzer might run out of virtual memory (object
heap). See the Availability Manager installation instructions for Windows
or OpenVMS for details on how to modify the heap size; an illustrative
example follows.
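For illustration only (the file that starts the Data Analyzer differs by
platform and is identified in the installation instructions), the maximum
object heap of the Java virtual machine is normally raised with the -Xmx
option on the java command line that launches the Data Analyzer, for
example:

   java -Xmx128m <existing Data Analyzer options and arguments>

The value 128m (128 MB) is an arbitrary sample; choose a size appropriate
to the number of nodes being monitored and the memory available on the
system.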
When this out-of-memory condition occurs on Windows systems, the Data
Analyzer does not report the problem. On OpenVMS systems, the Data Analyzer
displays an "OutOfMemoryException" error in the window in which the Data
Analyzer was started. On either system, one or more parts of the display
might stop updating. The only solution is to restart the Data Analyzer.

5  Display Notes

The following sections contain display notes pertaining to the Data
Analyzer.

5.1  Position of Main Application Window

The Availability Manager saves the position and size of the main
Application window and restores them when you restart the application.

5.2  Problems Using the Data Analyzer on All Platforms

The following sections contain notes about the display of the Data Analyzer
on Windows and OpenVMS platforms.

5.2.1  What to Do If a Node Is Displayed Twice

A node can be displayed twice in the Node pane when the Data Collector
(RMDRIVER) is started before the network transports are started. To avoid
this problem, always start your network transports (DECnet) before starting
the Availability Manager Data Collector (a sample startup sequence appears
at the end of these notes).

5.2.2  Events Sometimes Displayed After Background Collection Stops

The Data Analyzer sometimes displays events after users customize their
systems to stop collecting a particular kind of data. This is most likely
to occur when the Data Analyzer is monitoring many nodes. Under these
conditions, a data handler sometimes clears events before all pending
packets have been processed. The events based on the data in these packets
are displayed even though users have requested that this data not be
collected.

5.2.3  Truncated LAN Channel Summary Display

The LAN Channel Summary display might be disabled for some OpenVMS nodes if
there are more than seven channels for that virtual circuit. This problem
results from a restriction in the OpenVMS Version 7.3 PEDRIVER. For this
condition, the following error message is displayed:

   Error retrieving ChSumLAN data, error code=0x85
   (Continuation data disallowed for request)

This problem has been corrected in the OpenVMS Version 7.3-1 PEDRIVER.

5.3  Problems Using the Data Analyzer on OpenVMS Systems

The following sections contain notes about the display of the Data Analyzer
on OpenVMS platforms.

5.3.1  Exiting Field on Data Collection Customization Page

While using the Data Collection Customization page on OpenVMS, if you
change a data collection interval and press Enter to exit the field, the
value is not entered as expected. You must use the mouse to move the cursor
out of the field.

5.3.2  Long Runs Exhaust XLIB Resource ID

The version of Motif currently shipping with OpenVMS is based on X11R5.
That release of X11 uses a resource ID allocation scheme that works poorly
with the Motif support in Java for OpenVMS. As a result, long-running
Availability Manager sessions might stop updating the display at a time
that depends on the speed of the OpenVMS machine. For example, a session
running on a dual-processor 275 MHz system reported the following after
14 hours:

   Xlib: resource ID allocation space exhausted!

On faster machines, this message was reported after only 8 hours. This
problem is expected to be corrected in a later version of DECwindows Motif.
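The following DCL fragment illustrates the startup order recommended in
Section 5.2.1. The procedure names shown are typical defaults (STARTNET.COM
for DECnet Phase IV and AMDS$STARTUP.COM for the Data Collector) and are
given only as an assumption; use the procedures and parameters appropriate
to your network software and your Availability Manager installation:

   $! Start the network transport first, for example DECnet Phase IV:
   $ @SYS$STARTUP:STARTNET.COM
   $!
   $! Then start the Availability Manager Data Collector:
   $ @SYS$STARTUP:AMDS$STARTUP.COM START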