OpenVMS Alpha IP PPP does not presently support authentication, and the Microsoft Windows NT option to disable authentication during a RAS connection apparently does not currently work; RAS connections will thus require authentication, and this will prevent RAS connections to OpenVMS.
Future versions of OpenVMS and TCP/IP Services may add authentication
support, and future versions of Microsoft Windows may permit operation
without authentication.
15.3 OpenVMS and DECnet Networking?
The following sections contain information on OpenVMS and DECnet networking.
15.3.1 Can DECnet-Plus operate over IP?
Yes. To configure DECnet-Plus to operate over IP transport and over IP
backbone networks, install and configure DECnet-Plus, and install and
configure the PWIP
mechanism available within the currently-installed IP stack. Within
TCP/IP Services, this is a PWIPDRIVER configuration option within the
UCX$CONFIG (versions prior to V5.0) or TCPIP$CONFIG (with V5.0 and
later) configuration tool.
15.3.2 What does "failure on back translate address request" mean?
The error message:
BCKTRNSFAIL, failure on the back translate address request
indicates that the destination node is running DECnet-Plus, and that its naming service (DECnet-Plus DECdns, LOCAL node database, etc) cannot locate a name to associate with the source node's address. In other words, the destination node cannot determine the node name for the node that is the source of the incoming connection.
Use the DECNET_REGISTER mechanism (on the destination node) to register or modify the name(s) and the address(es) of the source node. Check the namespace on the source node, as well.
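Conceptually, the failing step is a reverse lookup: the destination node tries to map the source node's address back to a name in its namespace. The following Python sketch illustrates the idea; the table contents and address formats are invented for illustration and do not represent actual DECdns or LOCAL database internals:

```python
# Hypothetical sketch of DECnet-Plus back-translation: mapping a source
# node's address back to a node name via a LOCAL-style namespace.
# The addresses and names below are invented for illustration.
local_namespace = {
    "49::00-01:AA-00-04-00-01-04:20": "NODE1",
    "49::00-01:AA-00-04-00-02-04:20": "NODE2",
}

def back_translate(source_address):
    """Return the node name for an address, or raise if unregistered."""
    try:
        return local_namespace[source_address]
    except KeyError:
        # This is the situation the BCKTRNSFAIL message reports: no
        # name can be associated with the source node's address.
        raise LookupError("BCKTRNSFAIL: no name registered for "
                          + source_address)

print(back_translate("49::00-01:AA-00-04-00-01-04:20"))  # NODE1
```

Registering the source node with DECNET_REGISTER corresponds to adding the missing entry to this table.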
Typically, the nodes involved are using a LOCAL namespace, and the node name and address settings are not coherent across all nodes. Also check to make sure that the node is entered into its own LOCAL namespace. This can be a problem elsewhere, however. Very rarely, a cache corruption has been known to cause this error. To flush the cache, use the command:
$ RUN SYS$SYSTEM:NCL
FLUSH SESSION CONTROL NAMING CACHE ENTRY "*"
Also check to see that you are using the latest DECnet-Plus ECO kit for the version you are running. DECnet-Plus can use the following namespaces:
Of these, searching DNS/BIND and LocalFile, respectively, is often the
most appropriate configuration.
15.3.3 Performing SET HOST/MOP in DECnet-Plus?
First, issue the NCL command SHOW MOP CIRCUIT *:
$ RUN SYS$SYSTEM:NCL
SHOW MOP CIRCUIT *
Assume that you have a circuit known as FDDI-0 displayed. Here is an example of the SET HOST/MOP command syntax utilized for this circuit:
$ SET HOST/MOP/ADDRESS=08-00-2B-2C-5A-23/CIRCUIT=FDDI-0
Also see Section 15.6.3.
15.3.4 How to flush the DECnet-Plus session cache?
$ RUN SYS$SYSTEM:NCL
FLUSH SESSION CONTROL NAMING CACHE ENTRY "*"
Most Alpha and most VAX systems have a console command that displays the network hardware address. Many systems will also have a sticker identifying the address, either on the enclosure or on the network controller itself.
The system console power-up messages on a number of VAX and Alpha systems will display the hardware address, particularly on those systems with an integrated Ethernet network adapter present.
If you cannot locate a sticker on the system, if the system powerup message is unavailable or does not display the address, and if the system is at the console prompt, a console command similar to one of the following is typically used to display the hardware address:
SHOW DEVICE
SHOW ETHERNET
SHOW CONFIG
On the oldest VAX Q-bus systems, the following console command can be used to read the address directly off the (DELQA, DESQA, or the not-supported-in-V5.5-and-later DEQNA) Ethernet controller:
E/P/W/N:5 20001920
Look at the low byte of the six words displayed by the above command. (The oldest VAX Q-bus systems---such as the KA630 processor module used on the MicroVAX II and VAXstation II series---lack a console HELP command, and these systems typically have the primary network controller installed such that the hardware address value is located at the system physical address 20001920.)
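As a worked example of assembling the address from the six examined words, the following Python sketch takes the low byte of each word and formats the result as a 48-bit hardware address. The word values shown are invented for illustration:

```python
# Six 16-bit words as displayed by the console EXAMINE command; only the
# low byte of each word is significant. These example values are invented.
words = [0x0008, 0x0000, 0x002B, 0x002C, 0x005A, 0x0023]

def hardware_address(words):
    """Format the low bytes of six 16-bit words as an Ethernet address."""
    return "-".join("%02X" % (w & 0xFF) for w in words)

print(hardware_address(words))  # 08-00-2B-2C-5A-23
```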
If the system is a VAX system, and another VAX system on the network is configured to answer Maintenance and Operations Protocol (MOP) bootstrap requests (via DECnet Phase IV, DECnet-Plus, or LANCP), the MOM$SYSTEM:READ_ADDR.EXE tool can be requested:
B/R5:100 ddcu
Bootfile: READ_ADDR
where ddcu is the name of the Ethernet controller used in the above command. The primary local DELQA, DESQA, and DEQNA Q-bus controllers are usually named XQA0. An attempt to MOP download the READ_ADDR program will ensue, and (if the download is successful) READ_ADDR will display the hardware address.
If the system is running, you can use DECnet or TCP/IP to display the hardware address with one of the following commands.
$! DECnet Phase IV
$ RUN SYS$SYSTEM:NCP
SHOW KNOWN LINE CHARACTERISTICS

$! DECnet-Plus
$ RUN SYS$SYSTEM:NCL
SHOW CSMA-CD STATION * ALL STATUS

$! TCP/IP Services versions prior to V5.0
$ UCX SHOW INTERFACE/FULL

$! TCP/IP Services V5.0 and later
$ TCPIP SHOW INTERFACE/FULL
A program can be created to display the hardware address, reading the necessary information from the network device drivers. A complete example C program for reading the Ethernet or IEEE 802.3 network controller hardware address (via sys$qio calls to the OpenVMS network device driver(s)) is available at the following URL:
To use the DECnet Phase IV configurator tool to watch for MOP SYSID activity on the local area network:
$ RUN SYS$SYSTEM:NCP
SET MODULE CONFIGURATOR KNOWN CIRCUIT SURVEILLANCE ENABLED
Let the DECnet Phase IV configurator run for at least 20 minutes, and preferably longer. Then issue the following commands:
$ RUN SYS$SYSTEM:NCP
SHOW MODULE CONFIGURATOR KNOWN CIRCUIT STATUS TO filename.txt
SET MODULE CONFIGURATOR KNOWN CIRCUIT SURVEILLANCE DISABLED
The resulting file (named filename.txt) can now be searched for the information of interest. Most DECnet systems will generate MOP SYSID messages identifying items such as the controller hardware address and the controller type, and these messages are generated and multicast roughly every ten minutes.
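Once the surveillance report is in filename.txt, a small script can extract the hardware addresses of interest. The Python sketch below simply collects every token matching the usual hyphenated address format; the exact report layout varies by DECnet version, so the sample text here is an assumption:

```python
import re

# Ethernet hardware addresses as printed in NCP configurator reports,
# e.g. "08-00-2B-2C-5A-23". The surrounding report text varies, so we
# collect every token that matches the six-byte hyphenated pattern.
ADDRESS = re.compile(r"\b([0-9A-F]{2}(?:-[0-9A-F]{2}){5})\b", re.I)

def addresses_in(text):
    """Return the unique hardware addresses found in a report, sorted."""
    return sorted(set(m.group(1).upper() for m in ADDRESS.finditer(text)))

sample = ("Physical Address = 08-00-2B-2C-5A-23\n"
          "Physical Address = AA-00-04-00-0F-04\n")
print(addresses_in(sample))
```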
Information on the DECnet MOP SYSID messages and other parts of the
maintenance protocols is included in the DECnet network architecture
specifications referenced in section DOC9.
15.4.1 How do I reset the LAN (DECnet-Plus NCL) error counters?
On recent OpenVMS releases:
$ RUN SYS$SYSTEM:LANCP SET DEVICE/DEVICE_SPECIFIC=FUNCTION="CCOU" devname
On OpenVMS V7.1, all DECnet binaries were relocated into separate installation kits---you can selectively install the appropriate network: DECnet-Plus (formerly known as DECnet OSI), DECnet Phase IV, and HP TCP/IP Services (often known as UCX).
On OpenVMS versions prior to V7.1, DECnet Phase IV was integrated, and there was no installation question. You had to install the DECnet-Plus (DECnet/OSI) package on the system, after the OpenVMS upgrade or installation completed.
During an OpenVMS V7.1 installation or upgrade, the installation procedure will ask whether DECnet-Plus should be installed. If you are upgrading to V7.1 from an earlier release or are installing V7.1 from a distribution kit, simply answer "NO" to the question asking if you want DECnet-Plus. Then, after the OpenVMS upgrade or installation completes, use the PCSI PRODUCT INSTALL command to install the DECnet Phase IV binaries from the kit provided on the OpenVMS software distribution media.
If you already have DECnet-Plus installed and wish to revert, you must reconfigure OpenVMS. You cannot reconfigure the "live" system, hence you must reboot the system using the V7.1 distribution CD-ROM. Then select the DCL ($$$ prompt) option. Then issue the commands:
$$$ DEFINE/SYSTEM PCSI$SYSDEVICE DKA0:
$$$ DEFINE/SYSTEM PCSI$SPECIFIC DKA0:[SYS0.]
$$$ PRODUCT RECONFIGURE VMS /REMOTE/SOURCE=DKA0:[VMS$COMMON]
The above commands assume that the target system device and system root are "DKA0:[SYS0.]". Replace this with the actual target device and root, as appropriate. The RECONFIGURE command will then issue a series of prompts. You will want to reconfigure DECnet-Plus off the system, obviously. You will then want to use the PCSI command PRODUCT INSTALL to install the DECnet Phase IV kit from the OpenVMS distribution media.
Information on DECnet support, and on the kit names, is included in the OpenVMS V7.1 installation and upgrade documentation.
Subsequent OpenVMS upgrade and installation procedures can and do offer
both DECnet Phase IV and DECnet-Plus installations.
15.5 How can I send (radio) pages from my OpenVMS system?
There are third-party products available to send messages to radio paging devices (pagers), communicating via various protocols such as TAP (Telocator Alphanumeric Protocol). Available paging packages include the following:
RamPage (Ergonomic Solutions) is one of the available packages that can generate and transmit messages to radio pagers. Target Alert (Target Systems; formerly the DECalert product) is another. Networking Dynamics Corp has a product called Pager Plus. The System Watchdog package can also send pages. The Process Software package PMDF can route specific email addresses to a paging service, as well.
Many commercial paging services provide email contact addresses for their paging customers---you can simply send or forward email directly to the email address assigned to the pager.
Some people implement the sending of pages to radio pagers by sending commands to a modem: taking the "phone" off the "hook", dialing the same number that a human would dial to send a numeric page, waiting for a delay, and then sending the paging digit sequence. (This is not entirely reliable, as the modem lacks "call progress detection", and the program could simply send the paging sequence when not actually connected to the paging company's telephone-based dial-up receiver.)
See Section 13.1 for information on the available catalog of products.
15.6 OpenVMS, Clusters, Volume Shadowing?
The following sections contain information on OpenVMS and Clusters,
Volume Shadowing, and Cluster-related system parameters.
15.6.1 OpenVMS Cluster Communications Protocol Details?
The following sections contain information on the OpenVMS System
Communications Services (SCS) protocol. Cluster terminology is
covered in Section 15.6.1.2.1.
15.6.1.1 OpenVMS Cluster (SCS) over DECnet? Over IP?
The OpenVMS Cluster environment operates over various network protocols, but the core of clustering uses the System Communications Services (SCS) protocols, and SCS-specific network datagrams. Direct (full) connectivity is assumed.
An OpenVMS Cluster does not operate over DECnet, nor over IP.
No SCS protocol routers are available.
Many folks have suggested operating SCS over DECnet or IP over the years, but SCS is too far down in the layers, and any such project would entail a major or complete rewrite of SCS and of the DECnet or IP drivers. Further, the current DECnet and IP implementations have large tracts of code that operate at the application level, while SCS must operate in the rather more primitive contexts of the system and particularly the bootstrap---to get SCS to operate over a DECnet or IP connection would require relocating major portions of the DECnet or IP stack into the kernel. (And it is not clear that the result would even meet the bandwidth and latency expectations.)
The usual approach for multi-site OpenVMS Cluster configurations
involves FDDI, Memory Channel (MC2), or a point-to-point remote bridge,
brouter, or switch. The connection must be transparent, and it must
operate at 10 megabits per second or better (Ethernet speed), with
latency characteristics similar to that of Ethernet or better. Various
sites use FDDI, MC2, ATM, or point-to-point T3 link.
15.6.1.2 Configuring Cluster SCS for path load balancing?
This section discusses OpenVMS Cluster communications, cluster
terminology, related utilities, and command and control interfaces.
15.6.1.2.1 Cluster Terminology?
SCS: Systems Communication Services. The protocol used to communicate between VMSCluster systems and between OpenVMS systems and SCS-based storage controllers. (SCSI-based storage controllers do not use SCS.)
PORT: A communications device, such as DSSI, CI, Ethernet or FDDI. Each CI or DSSI bus is a different local port, named PAA0, PAB0, PAC0 etc. All Ethernet and FDDI busses make up a single PEA0 port.
VIRTUAL CIRCUIT: A reliable communications path established between a pair of ports. Each port in a VMScluster establishes a virtual circuit with every other port in that cluster.
All systems and storage controllers establish "Virtual Circuits" to enable communications between all available pairs of ports.
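The "every pair of ports" rule means the number of virtual circuits grows with the product of the port counts on the communicating nodes. This Python sketch enumerates the circuits between two nodes; the node names, port names, and the simplifying assumption that a circuit forms only between ports on the same interconnect are all illustrative:

```python
from itertools import product

# Invented example: SCS ports on two cluster members, tagged with the
# interconnect each sits on. A virtual circuit is modeled here as forming
# between each pair of ports that share an interconnect.
ports = {
    "NODEA": [("PAA0", "DSSI"), ("PEA0", "LAN")],
    "NODEB": [("PAA0", "DSSI"), ("PEA0", "LAN")],
}

def virtual_circuits(a, b):
    """Enumerate port pairs between nodes a and b that share a medium."""
    return [(pa, pb)
            for (pa, ta), (pb, tb) in product(ports[a], ports[b])
            if ta == tb]

print(virtual_circuits("NODEA", "NODEB"))
# [('PAA0', 'PAA0'), ('PEA0', 'PEA0')]
```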
SYSAP: A "system application" that communicates using SCS. Each SYSAP communicates with a particular remote SYSAP. Example SYSAPs include:
VMS$DISK_CL_DRIVER connects to MSCP$DISK
The disk class driver is on every VMScluster system. MSCP$DISK is on all disk controllers and on all VMScluster systems that have the SYSGEN parameter MSCP_LOAD set to 1.
VMS$TAPE_CL_DRIVER connects to MSCP$TAPE
The tape class driver is on every VMScluster system. MSCP$TAPE is on all tape controllers and on all VMScluster systems that have the SYSGEN parameter TMSCP_LOAD set to 1.
VMS$VAXCLUSTER connects to VMS$VAXCLUSTER
This SYSAP contains the connection manager, which manages cluster connectivity, runs the cluster state transition algorithm, and implements the cluster quorum algorithm. This SYSAP also handles lock traffic, and various other cluster communications functions.
SCS$DIR_LOOKUP connects to SCS$DIRECTORY
This SYSAP is used to find SYSAPs on remote systems.
MSCP and TMSCP
The Mass Storage Control Protocol (MSCP) and Tape MSCP (TMSCP) servers are SYSAPs that provide access to disk and tape storage, typically operating over SCS protocols. MSCP and TMSCP SYSAPs exist within OpenVMS (for OpenVMS hosts serving disks and tapes), within CI- and DSSI-based storage controllers, and within host-based MSCP and TMSCP storage servers. MSCP and TMSCP can be used to serve MSCP and TMSCP storage devices, and can also be used to serve SCSI and other non-MSCP/non-TMSCP storage devices.
SCS CONNECTION: A SYSAP on one node establishes an SCS connection to
its counterpart on another node. This connection will be on ONE AND
ONLY ONE of the available virtual circuits.
15.6.1.2.2 Cluster Communications Control?
When there are multiple virtual circuits between two OpenVMS systems it is possible for the VMS$VAXCLUSTER to VMS$VAXCLUSTER connection to use any one of these circuits. All lock traffic between the two systems will then travel on the selected virtual circuit.
Each port has a "LOAD CLASS" associated with it. This load class helps to determine which virtual circuit a connection will use. If one port has a higher load class than all others then this port will be used. If two or more ports have equally high load classes then the connection will use the first of these that it finds. Prior to enhancements found in V7.3-1 and later, the load class is static and normally all CI and DSSI ports have a load class of 14(hex), while the Ethernet and FDDI ports will have a load class of A(hex). With V7.3-1 and later, the load class values are dynamic.
For instance, if you have multiple DSSI busses and an FDDI, the VMS$VAXCLUSTER connection will choose a DSSI bus, typically the bus with the system disk, as that will always be the first DSSI bus discovered when the OpenVMS system boots.
To force all lock traffic off the DSSI and on to the FDDI, for instance, an adjustment to the load class value is required, or the DSSI SCS port must be disabled.
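The selection rule described above, highest load class wins and ties go to the first circuit found, can be sketched as follows. The port names and discovery order are illustrative, and the class values are the pre-V7.3-1 static defaults (14 hex for CI/DSSI, A hex for Ethernet/FDDI):

```python
# Illustrative sketch of pre-V7.3-1 static circuit selection: the highest
# load class wins; among equals, the first circuit discovered is used.
circuits = [
    ("PAA0 (DSSI)", 0x14),   # discovered first at boot (system disk bus)
    ("PAB0 (DSSI)", 0x14),
    ("PEA0 (FDDI)", 0x0A),
]

def select_circuit(circuits):
    """Pick the first circuit having the highest load class."""
    best = max(load for _, load in circuits)
    for name, load in circuits:
        if load == best:
            return name

print(select_circuit(circuits))  # PAA0 (DSSI)
```

This is why, in the example above, lock traffic stays on the first DSSI bus until the load class values are adjusted or the DSSI SCS port is disabled.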
In addition to the load class mechanisms, you can also use the "preferred path" mechanisms of MSCP and TMSCP services. This allows you to control the SCS connections used for serving remote disk and tape storage. The preferred path mechanism is most commonly used to explicitly spread cluster I/O activity over hosts and/or storage controllers serving disk or tape storage in parallel. This can be particularly useful if your hosts or storage controllers individually lack the necessary I/O bandwidth for the current I/O load, and must thus aggregate bandwidth to serve the cluster I/O load.
For related tools, see various utilities including LAVC$STOP_BUS and
LAVC$START_BUS, and see DCL commands including SET PREFERRED_PATH.
15.6.1.2.3 Cluster Communications Control Tools and Utilities?
In most OpenVMS versions, you can use the tools:
These tools permit you to disable or enable all SCS traffic on the specified paths.
You can also use a preferred path mechanism that tells the local MSCP disk class driver (DUDRIVER) which path to a disk should be used. Generally, this is used with dual-pathed disks, forcing I/O traffic through one of the controllers instead of the other. This can be used to implement a crude form of I/O load balancing at the disk I/O level.
Prior to V7.2, the preferred path feature uses the tool:
In OpenVMS V7.2 and later, you can use the following DCL command:
$ SET PREFERRED_PATH
The preferred path mechanism does not disable nor affect SCS operations on the non-preferred path.
With OpenVMS V7.3 and later, please see the SCACP utility for control
over cluster communications, SCS virtual circuit control, port
selection, and related.
15.6.2 Cluster System Parameter Settings?
The following sections contain details of configuring cluster-related system parameters.
15.6.2.1 What is the correct value for EXPECTED_VOTES in a VMScluster?
The VMScluster connection manager uses the concept of votes and quorum to prevent disk and memory data corruptions---when sufficient votes are present for quorum, then access to resources is permitted. When sufficient votes are not present, user activity will be blocked. The act of blocking user activity is called a "quorum hang", and is better thought of as a "user data integrity interlock". This mechanism is designed to prevent a partitioned VMScluster, and the resultant massive disk data corruptions. The quorum mechanism is expressly intended to prevent your data from becoming severely corrupted.
On each OpenVMS node in a VMScluster, one sets two values in SYSGEN: VOTES, and EXPECTED_VOTES. The former is how many votes the node contributes to the VMScluster. The latter is the total number of votes expected when the full VMScluster is bootstrapped.
Some sites erroneously attempt to set EXPECTED_VOTES too low, believing that this will allow the cluster to operate when only a subset of voting nodes is present. It does not. Further, an erroneously low EXPECTED_VOTES setting is automatically corrected once VMScluster connections to other nodes are established; user data is thus at risk of severe corruption during the earliest and most vulnerable portion of the system bootstrap, before those connections have been established.
One can operate a VMScluster with one, two, or many voting nodes. With any but the two-node configuration, keeping a subset of the nodes active when some nodes fail can be easily configured. With the two-node configuration, one must use a primary-secondary configuration (where the primary has all the votes), a peer configuration (where, when either node is down, the other hangs), or (preferably) a shared quorum disk.
Use of a quorum disk does slow down VMScluster transitions somewhat (adding a third voting node that contributes the vote(s) otherwise assigned to the quorum disk makes for faster transitions), but the use of a quorum disk does mean that either node in a two-node VMScluster configuration can operate when the other node is down.
The quorum disk cannot be a host-based shadowed disk, though it can be protected with controller-based RAID. Because host-based volume shadowing depends on the lock manager, the lock manager depends on the connection manager, and the connection manager depends on quorum, it is not technically feasible (nor particularly reliable) to permit host-based volume shadowing to protect the quorum disk.
If you choose to use a quorum disk, a QUORUM.DAT file is automatically created the first time OpenVMS boots with a quorum disk specified; more precisely, the file is created when OpenVMS can boot without also needing the votes from the quorum disk.
In a two-node VMScluster with a shared storage interconnect, typically each node has one vote, and the quorum disk also has one vote. EXPECTED_VOTES is set to three.
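The arithmetic behind this example can be sketched in a few lines. The quorum value OpenVMS computes is (EXPECTED_VOTES + 2) divided by two, rounded down; with EXPECTED_VOTES set to three, quorum is two, so either node plus the quorum disk can keep running:

```python
def quorum(expected_votes):
    """OpenVMS quorum: (EXPECTED_VOTES + 2) // 2, rounded down."""
    return (expected_votes + 2) // 2

def can_operate(present_votes, expected_votes):
    """User activity proceeds only while present votes >= quorum."""
    return present_votes >= quorum(expected_votes)

# Two-node VMScluster with a quorum disk: each node contributes one
# vote, and the quorum disk contributes one vote.
EXPECTED_VOTES = 3
print(quorum(EXPECTED_VOTES))              # 2
print(can_operate(1 + 1, EXPECTED_VOTES))  # True: one node + quorum disk
print(can_operate(1, EXPECTED_VOTES))      # False: a lone node hangs
```

A lone node with a single vote falls below quorum and enters the quorum hang ("user data integrity interlock") described above, which is exactly the partitioning protection the mechanism is designed to provide.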
Using a quorum disk on a non-shared interconnect is unnecessary---the use of a quorum disk does not provide any value, and the votes assigned to the quorum disk should be assigned to the OpenVMS host serving access to the disk.
For information on quorum hangs, see the OpenVMS documentation. For information on changing the EXPECTED_VOTES value on a running system, see the SET CLUSTER/EXPECTED_VOTES command, and see the documentation for the AMDS and Availability Manager tools. Also of potential interest is the OpenVMS system console documentation for the processor-specific console commands used to trigger the IPC (Interrupt Priority Level %x0C; IPL C) handler. (IPC is not available on OpenVMS I64 V8.2.) AMDS, Availability Manager, and the IPC handler can each be used to clear a quorum hang. Use of AMDS and Availability Manager is generally recommended over IPC, particularly because IPC can cause CLUEXIT bugchecks if the system should remain halted beyond the cluster sanity timer limits, and because some Alpha consoles and most (all?) Integrity consoles do not permit a restart after a halt.
The quorum scheme is a set of "blade guards" deliberately implemented by OpenVMS Engineering to provide data integrity---remove these blade guards at your peril. OpenVMS Engineering did not implement the quorum mechanism to make a system manager's life more difficult---the quorum mechanism was specifically implemented to keep your data from getting scrambled.