Additional System Configuration Issues

New networking driver

This section contains notes on the previous (V2.10) IRAF networking interface as implemented for VMS. Further information can be found in the discussion of the hosts file elsewhere in this manual, in the IRAF Version 2.10 Revisions Summary, and in the VMS/IRAF system notes file, iraf$local/notes.vms.

The VMS/IRAF networking code was extensively revised for V2.10 to permit client access to a VMS node via TCP/IP, and runtime selection of TCP/IP or DECNET for the transport method.

Selection of the transport protocol is made in the dev$hosts file, as in the examples below.
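
The entries below are a sketch of what such hosts file entries might look like; the node names, aliases, field layout, and the remote irafks command paths are illustrative.

    robur   r : robur::0=irafks
    ursa    u : ursa!/usr/iraf/lib/irafks.e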

The first example above states that logical node robur, with alias r, is a DECNET domain node named robur. The DECNET connection is to be made by executing the command "irafks" on the remote node. User login information is supplied by the user's .irafhosts file if any, otherwise by dev$irafhosts. A numbered DECNET object could be used instead, in which case the syntax on the right would change to "robur::n", where n is the number of the DECNET object to connect to.

The second example states that logical node ursa, with alias u, is a TCP/IP domain node with network name ursa. The network connection is made by executing, on the remote node ursa, the host command given after the "!". VMS/IRAF supports only the IRAF rexec connect protocol for outgoing connections to TCP/IP servers. TCP/IP connections are discussed in more detail below.

DECNET networking

In the simplest form of a DECNET connection the hosts file entry for the server node will reference a zero-numbered network object as in the example, e.g., "hostname::0=irafks" (or "hostname::task=irafks"). If "irafks" is not a defined DECNET network object then the user must have a file IRAFKS.COM in their login directory, similar to the following:
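
    $! Minimal sketch of a per-user IRAFKS.COM; it assumes the "iraf"
    $! command is available at network login time to define the IRAF
    $! logical names and symbols (see IRAFUSER.COM, discussed below).
    $ iraf                          ! define IRAF logicals and symbols
    $ irafks                        ! run the IRAF kernel server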

This works, but has the disadvantage that every user account must have an IRAFKS.COM file in the login directory, or IRAF DECNET-based networking will not work. A better approach is to define IRAFKS as a known network object which will execute the file IRAFKS.COM supplied in the IRAFHLIB directory with V2.11 VMS/IRAF. This eliminates the need for each user to have a private copy of the IRAFKS.COM file, and makes proxy logins possible, which can eliminate the need for password prompts.

The following is an example of how to install irafks as a known network object. This must be done on each DECNET host. The device name USR$3: shown in the example should be replaced by the value of IRAFDISK: on the local system.
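
A sketch of the NCP commands involved is shown below; the directory path is illustrative and should point to the IRAFKS.COM file in the IRAF hlib directory on the local system.

    $ RUN SYS$SYSTEM:NCP
    NCP> SET OBJECT IRAFKS NUMBER 0 FILE USR$3:[IRAF.VMS.HLIB]IRAFKS.COM
    NCP> DEFINE OBJECT IRAFKS NUMBER 0 FILE USR$3:[IRAF.VMS.HLIB]IRAFKS.COM
    NCP> EXIT

The SET command updates the running (volatile) database, while DEFINE records the object in the permanent database.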

In normal operation the NCP configuration is automatically preserved when the system is rebooted. If the local system has a network configuration procedure to redefine the NCP settings in the event of damage to the network database, then an NCP DEFINE command similar to that shown should be added to this file.

An alternative to the above scheme is to define IRAFKS as a numbered system object. This works as above, except that the "number 0" becomes "number n", where n is the network object number, and the dev$hosts file syntax for the server becomes "hostname::n".

Proxy logins

The DECNET proxy login capability allows DECNET connections to be made between a set of well-known cooperating systems (e.g., the local cluster) without the need to supply login authentication information every time a network connection is made. The effect is to eliminate password prompts, making networking much more transparent to the user. In some cases eliminating password prompts is necessary for the software to function correctly: for example, when a VMS host is used as a gateway machine, interactive prompting for passwords is not possible, and the gateway cannot function without some way to disable the prompts. For IRAF networking this can be done by having the user put password information in their .irafhosts file, but the use of proxy logins is preferred since it avoids storing plaintext passwords and avoids the need for the .irafhosts file altogether.

To enable proxy logins for IRAF networking one must first define IRAFKS (the IRAF kernel server) as a known network object, as outlined in the previous section. The VMS authorize utility is then used to enable proxy logins.

For example, assume we have two nodes ROBUR and DRACO.
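
The sketch below shows the sort of AUTHORIZE commands involved. These would be issued on ROBUR; the corresponding commands, with the node name changed to ROBUR, would be issued on DRACO.

    $ SET DEFAULT SYS$SYSTEM
    $ RUN AUTHORIZE
    UAF> ADD/PROXY DRACO::* */DEFAULT
    UAF> EXIT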

This would enable proxy logins between nodes draco and robur for all user accounts that have the same user login name on both systems. Alternatively one could do this for only a specified set of user accounts. The authorize utility automatically updates the affected disk files so that the change will be preserved the next time the system is booted.

To eliminate the password prompts during IRAF networking connections one must also edit the user's .irafhosts file, or the system dev$irafhosts file, to disable login authentication. For example, suppose the dev$irafhosts file contains entries of roughly the following form (the syntax shown is only a sketch: each entry lists the nodes it applies to, then the login name and optional password to use, with "?" meaning prompt for the password)
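
    draco robur : username
    *           : username ?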

then authentication will be disabled for connections made between nodes draco and robur, but a password will be required to connect to any other node. With such an irafhosts file the DECNET login request would take roughly the following form (the access control string shown below is illustrative),
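
    robur"username"::0=irafks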

which forces a login as "username" on the remote host. If the user name is omitted, the default proxy login on the remote node is used. The user's .irafhosts file, if present, will be used instead and should be configured similarly.

These changes make IRAF DECNET-based networking transparent between cooperating nodes in a local network, without requiring users to place password information in .irafhosts files. This is especially desirable if a routing node is used to route IRAF networking connections between TCP/IP and DECNET networks.

Configuring a VMS host as a TCP/IP server

For incoming TCP/IP connections (VMS node acting as server) VMS/IRAF supports two connect protocols, rexec and rexec-callback. The rsh protocol, the default connect protocol for V2.11 UNIX/IRAF systems, is not supported by VMS/IRAF. To act as a TCP/IP server a VMS node must run a host-level networking package (such as Multinet) which supports the rexecd networking daemon.

On the UNIX side the dev$hosts entry for a directly-connected VMS node should be similar to the following example.
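
The entry below is only a sketch; the node name and remote command are illustrative, and the exact way the rexec-callback protocol is selected in the entry may differ at your site.

    robur   r : robur!irafks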

This specifies that for logical node robur, alias r, the rexec-callback connect protocol (TCP/IP) should be used to connect to the network node robur. The command to be executed on the remote node is whatever is to the right of the "!", i.e., the command "irafks" in our example.

When the rexecd daemon executing on the remote VMS node responds, the first thing it does is log in using the username and password supplied with the rexec request and execute the user's LOGIN.COM file. For an IRAF user this must contain the command iraf, which causes the IRAFUSER.COM file in [IRAF.VMS.HLIB] to be interpreted by DCL. This defines all the IRAF logical names and symbols. One of the symbols defined is the irafks command, which is subsequently executed to complete the rexec network request. The irafks command is defined in IRAFUSER.COM along the following lines:
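
    $! (sketch; the actual definition in IRAFUSER.COM may differ in detail)
    $ irafks :== $irafbin:irafks.exe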

Hence, as might be expected, the result of the rexec connect is to run the IRAF kernel server irafbin:irafks.exe on the server node. In the case of the rexec-callback connect protocol, the actual command passed to DCL includes command line arguments to the IRAF kernel server telling it how to call the client back on a private network socket. What happens is that the client reserves a port and creates a socket on this port, then issues a rexec call to the VMS system to run the kernel server. The port and host name are passed on the rexec command line. The rexecd daemon on the server runs the irafks.exe process on the VMS node, and irafks calls the client back on the socket reserved by the client. The end result is a high bandwidth direct socket connection between the client and server processes.

See the discussion of the hosts file elsewhere in this manual for more on TCP/IP and DECNET internetworking, including examples of how to configure a VMS node as an Internet gateway for IRAF networking.

Network security alarms

If IRAF networking is not properly configured (either at the system level or at the user level), network connection attempts may fail, and IRAF in its normal operation may repeatedly generate failed connection attempts. Repeated failed connection attempts can trigger network security alarms on a VMS system even though no real security problem exists. VMS system managers should be aware of this possibility so that they do not waste time trying to track down a security problem which does not exist. Should this occur, the best solution is to properly configure IRAF networking so that the connection attempts succeed, e.g., by setting up the user's .irafhosts file or by enabling proxy logins for the user.

New magtape driver

This section contains notes on the previous (V2.10) IRAF magtape driver as implemented for VMS, which remain valid for V2.11. The reader is assumed to be familiar with the basics of magtape access in IRAF, including remote access to a drive via IRAF networking. Further information can be found in the discussion of the tapecap file elsewhere in this manual, in the IRAF Version 2.10 Revisions Summary, and in the VMS/IRAF system notes file, iraf$local/notes.vms.

Allocation

Explicit allocation of magtape devices is now optional. If the drive is not allocated with the CL allocate command it will be automatically allocated and mounted when the first tape access occurs. Once allocated, it stays allocated even if you log out of the CL and back in, allowing successive IRAF or other processes to access the drive without it being deallocated.
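
For example (the device name mta is illustrative):

    cl> mtexamine mta       # drive is auto-allocated and mounted on first access
    cl> deallocate mta      # explicitly frees the drive when you are done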

The tape drive can be allocated at the VMS level (e.g., "allocate tu4:") before starting the CL. This reserves the drive but if you run devstatus in the CL it will report that the drive has not been "allocated", because IRAF doesn't consider the drive allocated in the IRAF sense unless it has been both allocated and mounted at the VMS level. One can either allocate the drive in the CL, or run an IRAF magtape task which accesses the drive to cause the drive to be automatically allocated and mounted.

If the drive is allocated in DCL before starting IRAF, and is then allocated and deallocated in the CL, the drive will still be allocated at the VMS level when you log out of the CL, even though it was deallocated in the CL. If the drive is not allocated in DCL before starting the CL, allocating and deallocating it in the CL and then exiting will leave the drive unallocated at the VMS level.

Density

When a drive is automounted it is possible to set the density. The V2.11 implementation supports this (you simply write to mta1600, or whichever density-specific device name applies), but this does not work reliably, since VMS requires that the first operation following the mount be a write in order for the density to be changed. Often the write is preceded by a file positioning operation such as a rewind or a seek to end of tape, which prevents the density from being changed.

To reliably change the tape density, do an "init/density=NNN device" at the DCL level and verify the new density with "show magtape" ("!init", "!show" in IRAF). It may be necessary to repeat the operation before it succeeds.
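
For example, at the DCL level (the device name and volume label are illustrative):

    $ INIT/DENSITY=6250 TU4: SCRATCH
    $ SHOW MAGTAPE TU4:

or, equivalently from within the CL, "!init/density=6250 tu4: scratch" followed by "!show magtape tu4:".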

Status output logging

Status output logging is fully implemented in VMS/IRAF, i.e., one can enable status output and direct the messages to a text file, a terminal, or a network socket.

For VMS/IRAF the magtape naming conventions have been extended slightly to allow the different types of status output logging devices to be indicated.

For example, "mtexamine mta[:so=orion]" would examine the files on drive mta, optionally sending status output messages to a tape status daemon running on network node orion. Logging to a named terminal device will usually fail due to permission problems, but logging to ">tt" will direct status output to the current terminal window.

The VMS version of the driver will print a device type string of roughly the following form (the exact text depends upon the drive and its tapecap entry),
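
    TU79 (6250) - Generic 9-track reel tape drive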

where the first part of the message comes from VMS and the " - ..." is from tapecap. The density field can be either the tapecap value or the runtime GETDVI value. For example, if the density is shown as "6250" this is the value from the tapecap entry. If the density is shown as "(6250)", i.e., in parentheses, this is the actual device value as determined by a GETDVI call to VMS. Note that it is possible for the density given in the tapecap entry to differ from the actual device density setting. In this case the tape will be written correctly, but the "Device Type", "Capacity" and "Tape Used (%)" status fields will be incorrect. To avoid this problem be sure the density of the tapecap entry you are using to access the drive matches the density value of the physical device.

To enable status output logging to a socket a network magtape status display server such as xtapemon must be started on the remote node before accessing the tape drive. To permanently direct status output to a node a statement such as 'set tapecap = ":so=hostname"' may be added to the login.cl or loginuser.cl file.

Using tape drives over the network

Changes in the VMS/IRAF V2.10 magtape and networking interfaces make it possible to remotely use VMS tape drives from Internet (UNIX) or DECNET hosts. In practice this is straightforward, but there are two caveats to keep in mind: 1) do not explicitly allocate the device, 2) if you run IRAF tasks in different packages that access the same tape drive, type flpr before running a task in a different package. Aside from these two items, everything should work as when accessing a magtape device directly on a UNIX or VMS system. Both problems arise from the way VMS handles device allocation.

Explicitly allocating a device does not work in VMS/IRAF when remotely accessing a tape drive because the effect is to allocate the drive to the kernel server process for the client CL, thereby preventing other client IRAF processes from accessing the drive. In practice this is not a problem since it is not necessary to explicitly allocate the device; the VMS/IRAF kernel server will automatically allocate and mount the device when it is accessed over the network.

An example of the problem of trying to access a remote VMS drive from two different IRAF packages occurs when the rewind task, which is in the SYSTEM package, is used in conjunction with DATAIO tasks such as rfits, wfits, or mtexamine. You can run tasks in a given package at will, but if you try to access the drive with a task in a different package there will be a conflict. Again, the problem is that when remotely accessing a device, the VMS device allocation facilities cause the device to be allocated to the kernel server process, rather than to the user. There is a one-to-one correspondence between client-side IRAF processes and kernel server processes. In IRAF each package has its own executable, and hence its own kernel server, leading to the allocate conflict.

For example, if you have been using rfits (DATAIO) and you want to do a rewind (SYSTEM), type "flpr rfits" and wait 5-10 seconds; then you should be able to do the rewind. To run another DATAIO task, type "flpr rewind" and again wait 5-10 seconds before running the DATAIO task. The delay is necessary to allow VMS to shut down the kernel server for the package which previously had the device allocated.

Configuring user accounts for IRAF

User accounts should be loosely modeled after the IRAF account. All that is required for a user to run IRAF is that they run mkiraf in their desired IRAF login directory before starting up the CL. Each user should review the resulting login.cl file and make any changes they wish. Any user wanting to run batch jobs, including printer access, must execute the `iraf' command in their LOGIN.COM file, making sure it is executed for non-interactive jobs. Individual process quotas for IRAF users should be set as described in the next section.
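
For example, a user's LOGIN.COM might begin with something like the following sketch (the interactive-mode test is only one way to arrange this):

    $ iraf                          ! define IRAF logicals and symbols for all job types
    $ if f$mode() .nes. "INTERACTIVE" then exit
    $! ... interactive-only setup follows ...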

VMS quotas and privileges required to run IRAF

The only privilege required by IRAF is TMPMBX, which is probably already standard on your system. Systems with DECNET capabilities should also give their users NETMBX privilege, although it is not required to run IRAF. No other privileges are required or useful for normal activities.

Although privileges are not a problem for VMS/IRAF, it is essential that the IRAF user have sufficient VMS quotas, and that the system tuning parameters be set correctly; otherwise IRAF may not function well, or may not function at all. If a quota is exceeded, or if the system runs out of some limited resource, the affected VMS system service will return an error code to IRAF and the operation will fail (this frequently happens when trying to spawn a connected subprocess). The current recommended ranges of per-user quotas are summarized below. Users running DECwindows/Motif may already have, and may well need, larger values for these quotas, so do not reduce them to the values given here.

The significance of most of these quotas is no different for IRAF than for any other VMS program, hence we will not discuss them further here. The PRCLM quota is especially significant for IRAF since an IRAF job typically executes as several concurrent processes. The PRCLM quota determines the maximum number of subprocesses a root process (user) may have. Once the quota has been reached process spawns will fail causing the IRAF job or operation to abort.

The minimum number of subprocesses a CL process can have is 1 (x_system.e). As soon as a DCL command is executed via OS escape a DCL subprocess is spawned, and we have 2 subprocesses. The typical process cache limit is 3, one slot in the cache being used by x_system.e, hence with a full cache we have 4 subprocesses (the user can increase the process cache size if sufficient quota is available to avoid excessive process spawning when running complex scripts). It is common to have one graphics kernel connected, hence in normal use the typical maximum subprocess count is 5. However, it is conceivable to have up to 3 graphics kernel processes connected at any one time, and whenever a background job is submitted to run as a subprocess a whole new subprocess tree is created. Hence, it is possible to run IRAF with a PRCLM of 5, but occasional process spawn failures can be expected. Process spawn failures are possible even with a PRCLM of 10 if subprocess type batch jobs are used (the default), but in practice such failures are rare. If all batch jobs are run in batch queues it should be possible to work comfortably with a PRCLM of 5-6, but in practice users seem to prefer to avoid the use of batch queues, except for very large jobs.
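
A user's PRCLM quota can be raised with the AUTHORIZE utility, as in the following sketch (the user name and value are illustrative):

    $ SET DEFAULT SYS$SYSTEM
    $ RUN AUTHORIZE
    UAF> MODIFY SMITH/PRCLM=10
    UAF> EXIT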

Since IRAF uses memory efficiently the working set parameters do not seem critical to IRAF, provided the values are not set unrealistically low, and provided WSEXTENT is set large enough to permit automatic growth of a process working set when needed. Configuring VMS to steal pages from inactive processes is not recommended as it partially cancels the effect of the process cache, causing process pagein whenever a task is executed. It is better to allow at least a minimum size working set to each process. However, this is not a hard and fast rule, being dependent on individual system configurations and workloads.

In addition to sufficient per user authorized quota, the system tuning parameters must be set to provide enough dynamically allocatable global pages and global sections to handle the expected load. If these parameters are set too small, process connects will fail intermittently, usually when the system load is high. Each subprocess needs about 8 global pages when activated (IRAF uses global pages and shared memory for interprocess communications, due to the relatively low bandwidth achievable with the VMS mailbox facilities).

With IRAF in heavy use (i.e., a dozen simultaneous users) this can easily amount to several hundred additional global pages. Each installed image and subprocess also needs at least one, and usually two, global sections. Note that the size of an executable, as reported by a DIR/SIZE=ALL on [IRAF.BIN_ALPHA]*.EXE, can be taken as an upper bound on the number of pages needed to install it (the actual requirement is typically about 50-70 percent of this size). Currently, for V2.11, we have CL=640, S_IRAF=2730, IRAFKS=54, X_SYSTEM=431, X_PLOT=312, and X_IMAGES=2865. The system parameters on our DEC 3000/300 (AXP) are currently set to GBLPAGES = 205538 and GBLSECTIONS = 610. For every increment of 512 in GBLPAGES, GBLSECTIONS must be increased by 4. After making any of these changes, we recommend running AUTOGEN to ensure correct relationships among the many SYSGEN parameters.
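
For example, the current values can be inspected with SYSGEN, and permanent increases are best made by adding entries to MODPARAMS.DAT and then running AUTOGEN (the increments shown below are illustrative and preserve the 512:4 ratio noted above):

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> SHOW GBLPAGES
    SYSGEN> SHOW GBLSECTIONS
    SYSGEN> EXIT

    ! additions to SYS$SYSTEM:MODPARAMS.DAT
    ADD_GBLPAGES = 2048
    ADD_GBLSECTIONS = 16

    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK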

Interfacing new graphics devices

There are three types of graphics devices that concern us here: graphics terminals, graphics plotters, and image displays.

Graphics terminals

The IRAF system as distributed is capable of talking to just about any conventional graphics terminal or terminal emulator, using the stdgraph graphics kernel supplied with the system. All one need do to interface to a new graphics terminal is add new graphcap and termcap entries for the device. This can take anywhere from a few hours to a few days, depending on one's level of expertise and the characteristics of the device. Be sure to check the contents of the dev$graphcap file to see if the terminal is already supported before trying to write a new entry. Useful documentation for writing graphcap entries includes the GIO reference manual and the HELP pages for the showcap and stty tasks (see the discussion of the graphcap file elsewhere in this manual). Assistance with interfacing new graphics terminals is available via the IRAF Hotline.

Graphics plotters

The current IRAF system comes with several graphics kernels used to drive graphics plotters. The standard plotter interface is the SGI graphics kernel, which is interfaced as the tasks sgikern and stdplot in the PLOT package. Further information on the SGI plotter interface is given in the paper The IRAF Simple Graphics Interface, a copy of which is available from the network archive.

SGI device interfaces for most plotter devices already exist, and adding support for new devices is straightforward. Sources for the SGI device translators supplied with the distributed system are maintained in the directory iraf$vms/gdev/sgidev. NOAO serves as a clearinghouse for new SGI plotter device interfaces; contact us if you do not find support for a local plotter device in the distributed system, and if you plan to implement a new device interface let us know so that we may help other sites with the same device.

The older NCAR kernel generates NCAR metacode, which can be passed to a host-level NCAR metacode translator to produce plots on any device supported by such a translator. The host-level NCAR metacode translators are not included in the standard IRAF distribution, but public domain versions of the NCAR implementation for VMS systems are widely available. A site which already has the NCAR software may wish to go this route, but the SGI interface will provide a more efficient and simpler solution in most cases.

The remaining possibility with the current system is the calcomp kernel. Many sites will have a Calcomp or Versaplot library (or Calcomp compatible library) already available locally. To make use of such a library to get plotter output on any devices supported by the interface, one may copy the library to the hlib directory and relink the Calcomp graphics kernel.

A graphcap entry for each new device will also be required. Information on preparing graphcap entries for graphics devices is given in the GIO design document, and many actual working examples will be found in the graphcap file. The best approach is usually to copy one of these and modify it.

Image display devices

The majority of VMS/IRAF users will use a networked UNIX or VMS workstation running some version of the X window system (DECwindows/VMS or Motif/VMS in the case of VMS) for IRAF graphics and image display. X clients for graphics and image display are available for all IRAF platforms. See the discussion of X terminal support elsewhere in this manual for information about the xterm graphics terminal emulator and the saoimage image display server.

Those VMS/IRAF sites that have VAX/VMS workstations running the VWS display system can use the UISDISP.EXE display task in [IRAF.VMS.UIS] for image display. This is a standalone IMFORT program, i.e. it does not communicate with the tasks in the TV.DISPLAY package. See the file uisdisp.txt in the same directory for information on using the task.

Some interfaces for hardware image display devices are also available, although a general display interface is not yet included in the system. Only the IIS model 70 and 75 are currently supported by NOAO. Interfaces for other devices are possible using the current datastream interface, which is based on the IIS model 70 datastream protocol with extensions for passing the WCS, image cursor readback, etc. (see the ZFIOGD driver in iraf$vms/gdev). This is how all the current displays, e.g., imtool and saoimage, and the IIS devices, are interfaced, and there is no reason why other devices could not be interfaced to IRAF via the same interface. Eventually this prototype interface will be obsoleted and replaced by a more general interface.

If there is no IRAF interface for your device, the best approach at present is to use the IMFORT interface and whatever non-IRAF display software you currently have to construct a host level Fortran or C display program. The IMFORT library provides host system Fortran or C programs with access to IRAF images on disk. Documentation on the IMFORT interface is available in A User's Guide to Fortran Programming in IRAF -- The IMFORT Interface, Doug Tody, September 1986, a copy of which is included in the IRAF User Handbook, Volume 1A. If you do not have an existing image display program into which to insert IMFORT calls, it is not recommended that the TV.DISPLAY package code be used as a template. Rather, a standalone image display server should be constructed using the datastream protocol in the iraf$vms/gdev/zfiogd.x driver mentioned above (but that could be a very lengthy job; contact the Hotline).