24. Configuring the MultiNet NFS v2 Server

This chapter describes how to configure and maintain the MultiNet NFS v2 server, the MultiNet software that allows an OpenVMS system to export its files to a variety of client computers.

This chapter refers to the MultiNet NFS server and NFS client software as the NFS Server and NFS Client, to the OpenVMS server host as the server, and to NFS client hosts as clients.

Understanding the MultiNet NFS Server

The NFS server is a high-performance OpenVMS implementation of Sun Microsystems' Network File System (NFS) protocol. It allows client computers running a variety of operating systems to remotely access files on an OpenVMS server. To users on a client system, all mounting and access operations are transparent, and the mounted directories and files appear to reside on the client.

After the NFS server is installed, the system manager configures the server and its clients to allow network file access. The NFS server includes configuration utilities for this purpose.

The NFS server is exceptionally fast due to parallel request processing, a file and directory cache between the file systems and the network, and an optional writeback cache feature. For example, because the NFS server can process many client requests simultaneously, a single client does not interfere with the requests of others.

Servers and Clients

An NFS server system is an OpenVMS system that makes its local files available to the network. A client is a host that accesses these files as if they were its own. Typical clients include:

·         Systems running UNIX or Linux

·         PCs running Microsoft Windows or MacOS

·         OpenVMS computers running the MultiNet NFS client (V4.1 or greater).

The OpenVMS server can make any of its file systems available to the network. A file system is a hierarchy of devices, directories, and/or files stored as a FILES-11 ODS-2 or ODS-5 on-line disk structure. File systems can include bound volumes and shadow sets.

The OpenVMS server exports the file systems, making them available to the network. Authorized clients mount the exported file systems, making them available to their users as if they were local directories and files.

Each file system is identified by the name of its mount point; that is, the name of the device or directory at the top of its hierarchy. When a client mounts a file system, it connects the mount point to a mount directory in its own file system. Through this connection, all files below the mount point on the server are available to client users as if they were below the client mount directory.

Note: Exported file system names cannot be longer than 14 characters. The NFS server allows NFS clients access to only the highest version of OpenVMS files.

Each client automatically converts mounted directory and file structures, contents, and names to the format required by its own operating system. For example, an OpenVMS file named:

USERS:[JOE_NOBODY]LOGIN.COM

might appear to a UNIX end user as:

/vmsmachine/users/joe_nobody/login.com

and to a Windows end user as:

E:\users\joe_nobody\login.com

Note: The NFS server can convert all valid UNIX, Windows, or Linux file names to valid OpenVMS names. Similarly, the server can convert those OpenVMS file names back to valid UNIX, Windows, or Linux names.

Security

The NFS server provides two levels of security:

·         Access to individual file systems can be restricted to specific clients listed in mount restriction lists for those file systems, as described in the Restricting Access to a Mount Point section.

·         Access to individual directories and files is controlled on a per-user basis. The NFS server consults a database that maps users on NFS client systems to OpenVMS userids. When the NFS server receives an NFS file access request, it maps the client user identifier in the request to an OpenVMS userid/UIC and compares the UIC to the owner, protection mask, and any directory or file ACLs. The NFS server either grants or denies access, as described in the Mapping Between Users' OpenVMS and Client Identifiers section.

When granting access, the NFS server honors default privileges defined by the user's UAF entry that can override OpenVMS protection codes (SYSPRV, BYPASS, READALL, and GRPPRV). However, because UNIX clients do not understand OpenVMS privileges, the client may prevent an operation that the server would otherwise have allowed. If the UNIX user root (uid 0) is mapped to an OpenVMS user with BYPASS privilege, root can access all files.

To get GROUP protection access to a file from UNIX clients, a user must pass both the client and the server protection check. The client check is done using the UNIX GID; the server check is done using the Group portion of the OpenVMS UIC. For GROUP access to be granted, a user must be in the same UIC group on the OpenVMS system and have the same GID on the UNIX system.
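For example (hypothetical values): if OpenVMS user JOHN has UIC [300,12] and is mapped to UID 10/GID 15, JOHN receives GROUP access to a file owned by another member of UIC group [300,*] only when the file owner's mapping also carries GID 15, so that the client's GID check and the server's UIC group check both succeed.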

Note: All NFS security relies on trusting the client to provide the server with truthful authentication parameters. Nothing in the NFS protocol specification prevents a client from using fraudulent parameters to bypass the security system.

VMS DELETE access does not translate directly to NFS. Under NFS, a user with WRITE access to a directory can delete files in that directory, and the NFS server implements DELETE access the same way. With this in mind, it is important for the system manager to review protection settings on exported file systems.

Mapping Between Users' OpenVMS and Client Identifiers

Clients identify each user to the network by a pair of UNIX or UNIX-style user-ID (UID) and group-ID (GID) codes. During an access operation, the server translates back and forth between the user's OpenVMS UIC and UID/GID pair. Whenever the server starts up, it reads the NFS.CONFIGURATION file, which includes a UID translation database that maps each user's OpenVMS user name to their client UID/GID pair. The server translates each user name to its UIC and builds a translation table that maps between each UID/GID pair and UIC.

As described in the following sections, you must create and maintain the UID translation list, which maps each user's OpenVMS user name to a UID/GID pair.

Note: For file protections to work properly, each mapping must be both unique and consistent in each direction (see the Grouping NFS Client Systems for UID/GID Mappings section for a description of exceptions to this rule). You cannot map a single UID to multiple OpenVMS user names, nor can you use a single user name for multiple UIDs.

For a PC-NFSD client user, you must create a UNIX-type UID/GID pair when you specify the mapping. Whenever the user provides the correct access information to the server, the server provides the client with the user's UID/GID.

To display the current UID translation list, use the SHOW command described in the Invoking the NFS Configuration Utility (NFS-CONFIG) and Displaying Configuration Information section.

Grouping NFS Client Systems for UID/GID Mappings

In the MultiNet UID/GID to OpenVMS user name translation database, each entry is associated with a particular NFS group. An NFS group is a collection of NFS systems that share a single set of UID/GID mappings. Within an NFS group, the mapping between UID/GID pairs and OpenVMS user names on the server system must be one-to-one. You cannot map a single UID/GID to multiple user names, nor can you use a single user name for multiple UID/GIDs. However, duplicate translations may exist between NFS groups.

If no NFS group is specified when a UID/GID translation is added to the configuration, the translation is placed in the "default" NFS group. Translations in this group are used only for client systems not specified in an NFS group.

Note: A client system must not reside in more than one NFS group.

When the NFS server receives an NFS request from a client, it consults the local NFS group database to determine which group the client is associated with. If the client is not specified explicitly in a group, it is assumed to be in the default group. Once the NFS server has determined the NFS group to which the client belongs, it uses the UID/GID translation list for that group to determine the OpenVMS user name (and hence, OpenVMS UIC) to use when accessing local files.

If there is no UID/GID mapping for a user in the NFS group to which the client system belongs, the user is treated as unknown, and the UID/GID -2/-2 is used. Any translations in the default group are not considered if the client is specified in an NFS group.

UNIX password files may be copied from client systems for UID/GID translations when OpenVMS user names are the same as those on the client. UNIX password files may also be placed in an NFS group. With the addition of a password file, be sure the UIDs within the NFS group remain unique.

Consider the following example. At Flowers Inc., the engineering department has a group of UNIX hosts, the sales department has a collection of PCs, and the marketing department has a mix of PCs and UNIX hosts. Each group also has its own UNIX system acting as an NFS server for the group. Unfortunately, the groups did not coordinate with each other when they assigned user names and UID/GID pairs, and none of the groups are willing to change their current configurations. The accounting department, on the other hand, recently purchased a VAX 4000 computer running OpenVMS and the NFS server and wishes to make certain personnel data available via NFS to the other groups.

The accounting system manager configures the NFS server on the VAX system as follows:

1. Using the NFS-CONFIG ADD NFS-GROUP command, the system manager creates the three NFS groups ENGINEERING, SALES, and MARKETING, placing the NFS systems in each department in the appropriate NFS group. The default group is used for NFS systems in the accounting department.

2. The system manager obtains UID/GID mappings from each department and creates OpenVMS user names for each NFS client user who needs access to the NFS server on the OpenVMS system.

3. Finally, the system manager uses the NFS-CONFIG ADD UID-TRANSLATION and ADD NFS-PASSWD-FILE commands to create the mappings between OpenVMS user names and the UID/GID pairs for each NFS group, as shown in the sketch below. See the Creating UID/GID Mappings section for details on specifying these mappings.
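A hypothetical NFS-CONFIG session implementing these steps might look like the following (the host names, user name, UID/GID values, and passwd file name are illustrative):

NFS-CONFIG>ADD NFS-GROUP ENGINEERING CONTROL.EXAMPLE.COM, FANG.EXAMPLE.COM
NFS-CONFIG>ADD NFS-GROUP SALES SMALL-BERRIES.EXAMPLE.COM, WHORFIN.EXAMPLE.COM
NFS-CONFIG>ADD NFS-GROUP MARKETING MKTG-PC.EXAMPLE.COM, MKTG-UNIX.EXAMPLE.COM
NFS-CONFIG>ADD UID-TRANSLATION JOHN 10 15 ENGINEERING
NFS-CONFIG>ADD NFS-PASSWD-FILE MULTINET:NFS_SALES.PASSWD SALES
NFS-CONFIG>RELOAD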

If all systems in your environment share the same UID/GID pairs, you do not need to create or specify NFS groups. All translations are automatically placed in the default group (which has no group name associated with it).

Handling Incomplete Mappings

When mappings are incomplete or nonexistent, access operations are severely limited or denied.

If any OpenVMS files or directories are owned by a UIC for which there is no mapping, the server handles them as though they were owned by the UNIX user "nobody," whose identifiers are UID -2, GID -2. Similarly, if any client users have UIDs for which there are no mappings, the server grants them access to OpenVMS files as if they were the OpenVMS user DEFAULT, whose UIC is [200,200]. In either case, only the WORLD-READ and WORLD-EXECUTE file protection settings are granted.

WRITE access is never granted to unmapped users even in those cases where OpenVMS protections allow WORLD WRITE access.

UNIX File System Semantics

This section describes the techniques the NFS server uses to map UNIX file system semantics to OpenVMS file system semantics.

Mapping UNIX File Links

The NFS server provides primitive support for symbolic and hard link operations under OpenVMS.

Because the OpenVMS file system has no support for symbolic links, symbolic links created by an NFS client are stored under OpenVMS in a file with undefined record attributes which begins with *SYMLINK*. Using this method of storing link contents, the NFS server appears to support symbolic links, but the links cannot be used directly by OpenVMS applications.

The NFS server supports the hard link operation by making additional directory entries for a file under OpenVMS. These hard links can be used directly by OpenVMS applications. Unlike the UNIX file system, the OpenVMS file system does not keep a link reference count on files that have multiple links, although the NFS server attempts to simulate one by keeping a reference count in memory. After this count is lost through a reboot or a restart of the NFS server, deleting a file that has multiple hard links can destroy the file's data even though other directory entries for it remain.

Likewise, deleting a file's remaining directory entry (if the file previously had hard links) can leave the file's contents undeleted, resulting in a lost file that can later be found by disk analysis. These limitations are inherent in the OpenVMS file system.

Mapping UNIX Device Special Files

Device block and character special files created by an NFS client are stored under OpenVMS in a file with undefined record attributes which begins with *SPECIAL*. Using this method of storing special files, the NFS server appears to support them, but the files cannot be used directly by OpenVMS applications.

Mapping UNIX setuid, setgid, and "sticky" File Modes

The NFS server appears to support the setuid, setgid, and sticky (VTX) file modes by using the reserved-to-customer bits in the user characteristics field of the file header. Although these protection modes have no meaning to OpenVMS, the NFS server still stores them.

Mapping UNIX File Names

The NFS server attempts to store files with any file name, even when client file names contain characters not permitted by OpenVMS. To accomplish this, the NFS server performs a mapping between OpenVMS and NFS client file names, using the inverse mapping of the NFS client. This mapping ensures consistency between other NFS clients accessing and creating files using the NFS server, and the NFS client accessing and creating files using other NFS servers. All mapping sequences on the OpenVMS server begin with the "$" escape character. This file name mapping can be disabled as described in the Mount Point Option Summary section.

As "$" is the mapping sequence escape character, a real "$" in a file name as seen by the client is mapped to "$$" on the OpenVMS server. For example, the client file name foo$bar.c maps to FOO$$BAR.C on the OpenVMS server.

A "$" followed by a letter (A to Z) in a file name on the server indicates a case-shift in the file name on the client. For client systems like UNIX which support case-sensitive file names, a file name can begin in lowercase and change back and forth between uppercase and lowercase. For example, the client file name "aCaseSENSITIVEFilename" would map to "A$C$ASE$SENSITIVEF$ILENAME" on the OpenVMS NFS server.

A "$" followed by any digit 4 to 9 indicates a mapping as shown in the table below.

VMS Chars.     Client Char.     Hex Value
$4A            ^A               01
$4B            ^B               02
$4C            ^C               03
$4D            ^D               04
$4E            ^E               05
$4F            ^F               06
$4G            ^G               07
$4H            ^H               08
$4I            ^I               09
$4J            ^J               0A
$4K            ^K               0B
$4L            ^L               0C
$4M            ^M               0D
$4N            ^N               0E
$4O            ^O               0F
$4P            ^P               10
$4Q            ^Q               11
$4R            ^R               12
$4S            ^S               13
$4T            ^T               14
$4U            ^U               15
$4V            ^V               16
$4W            ^W               17
$4X            ^X               18
$4Y            ^Y               19
$4Z            ^Z               1A
$5A            !                21
$5B            "                22
$5C            #                23
$5E            %                25
$5F            &                26
$5G            '                27
$5H            (                28
$5I            )                29
$5J            *                2A
$5K            +                2B
$5L            ,                2C
$5N            .                2E
$5O            /                2F
$5Z            :                3A
$6A            ^@               00
$6B            ^[               1B
$6C            ^\               1C
$6D            ^]               1D
$6E            ^^               1E
$6F            ^_               1F
$7A            Space            20
$7B            ;                3B
$7C            <                3C
$7D            =                3D
$7E            >                3E
$7F            ?                3F
$8A            @                40
$8B            [                5B
$8C            \                5C
$8D            ]                5D
$8E            ^                5E
$9A            `                60
$9B            {                7B
$9C            |                7C
$9D            }                7D
$9E            ~                7E
$9F            DEL              7F

The digit after the dollar sign and the trailing letter indicate the character in the client file name. In the special case of the dot character (.), the first dot in the client file name maps directly to a dot in the server OpenVMS file name; any subsequent dots in the client file name are mapped to the sequence $5N on the OpenVMS server. In directory files, every dot in the client file name maps to $5N on the server. For example, the client file name foo.bar#1.old maps to FOO.BAR$5C1$5NOLD on the OpenVMS server. If foo.bar#1.old were a directory, it would instead map to FOO$5NBAR$5C1$5NOLD.DIR.

Finally, a "$" followed by a three-digit octal number indicates a character in the file name on the server that has the binary value of that three-digit octal number. As all character binary values from 0 to 177 (octal) already have mappings, only characters from 200 to 377 are mapped in this fashion. Thus, the leading digit of the octal number must be either 2 or 3.
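For example, a client file name containing the single byte with hexadecimal value E9 (octal 351, the ISO Latin-1 character é) would be stored on the server with the sequence $351 in place of that character.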

Mapping OpenVMS Text Files to UNIX Text Files

The NFS server attempts to make access to text files as transparent as possible. Most OpenVMS files containing ASCII text have RMS record attributes of Variable Length Records with Carriage Return Carriage Control (VAR-CR). When VAR-CR files are read via the NFS server, the NFS server automatically converts the contents of the file into the equivalent UNIX byte stream. Because of this conversion, there are a number of restrictions imposed on VAR-CR files:

·         VAR-CR files cannot be written to unless the file is first truncated to a size of zero.

·         Access to extremely large VAR-CR files (those larger than the size of the cache specified by MAXIMUM-CACHE-BUFFERS and MAXIMUM-FILESYSTEM-BUFFERS) is extremely slow, and may result in NFS timeouts if the file system is not mounted with a large enough timeout value. This happens because the NFS server may need to read the entire file to convert a small portion near the end of the file, and because the entire file must be read to determine its size when represented as a Stream file.

The NFS server does not convert any other file types; for those files, the client sees the raw disk blocks directly.

NFS Server Architecture

The NFS server includes seven top-level protocols that run parallel to each other over a stack of lower-level protocols. The top-level protocols are:

·         The Network File System protocol (NFS) is an IP-family protocol that provides remote file system access, handling client queries.

·         The RPC (Remote Procedure Call) mount protocol (RPCMOUNT) is used by clients to mount file systems and get mount-point information.

·         The RPC-protocol port mapper (RPCPORTMAP) performs the mapping between RPC program numbers and UDP and TCP port numbers.

·         The RPC quota daemon (RPCQUOTAD) returns disk quota information.

·         The RPC status monitor (RPCSTATUS) and the RPC lock manager (RPCLOCKMGR) together coordinate access to sections of files.

·         The PC-NFSD protocol provides authentication and remote-printing functions specific to PC-NFS. Only PC and PC-compatible clients use this protocol.

Underlying the NFS, RPCLOCKMGR, RPCMOUNT, RPCPORTMAP, RPCQUOTAD, RPCSTATUS, and PC-NFSD protocols is a stack of protocols:

·         The Remote Procedure Call (RPC) protocol allows clients to make procedure calls across the network to the server.

·         The External Data Representation (XDR) protocol handles architectural differences between the client and server systems, allowing the NFS protocol to communicate between systems of dissimilar architectures.

·         The User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Internet Protocol (IP) are used for the lowest levels of communication.

Traditionally, NFS runs over UDP. The NFS server also supports communication over TCP, which may improve performance when communicating with NFS clients across slow network links or wide area networks (WANs) that suffer from packet loss and delay.
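For example, many UNIX clients let you request an NFS mount over TCP with a mount option (the exact option name varies by client; tcp and proto=tcp are common forms):

# mount -o soft,rw,tcp vmsmachine:sys\$sysdevice: /mnt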

The list of queues that PCNFSD returns in a single UDP datagram is limited to the first 45,000 bytes.

NFS Server Configuration Overview

There are three main aspects to configuring the NFS server system:

1.      Enabling the NFS server on the server host.

2.      Configuring the NFS server.

3.      Configuring the clients.

These operations are normally performed in the following sequence:

1.      Enable the NFS server, using SERVER-CONFIG.

2.      Make sure that each user who will access the server has an OpenVMS user account on the server and an account on the client.

3.      Invoke NFS-CONFIG to perform Steps 4 through 6.

4.      Provide the NFS server with a basis for translating between the OpenVMS and client identifiers for each user.

5.      Export each file system:

a.       Choose a name for the mount point.

b.      Export the mount point and reload the server to make the change effective.

c.       Mount and test the file system on each client.

d.      If you want to restrict access to the file system to specific clients, create a mount restriction list for the mount point, restart the server, and retest the mount operation from each client.

6.      Only when necessary, change global parameter settings (followed by a server restart), and retest the configuration. The default parameter settings are sufficient in most cases.

The following sections describe these operations.

Enabling the NFS Server

Enable the NFS server by enabling the following services:

·         NFS server

·         RPCMOUNT mount server

·         RPCQUOTAD quota server

·         RPCLOCKMGR lock manager

·         RPCSTATUS status monitor

·         RPCPORTMAP RPC-protocol port mapper

For networks that include PC or PC-compatible clients with PC-NFS software, you should also enable the PC-NFSD server. Use the MultiNet Server Configuration Utility (SERVER-CONFIG) to enable these services.

The following sample sessions show how to enable protocols with SERVER-CONFIG. The first example pertains to systems that do not include PC clients and do not use the PC-NFSD protocol:

$ MULTINET CONFIGURE/SERVER
MultiNet Server Configuration Utility 5.6
[Reading in configuration from MULTINET:SERVICES.MASTER_SERVER]
SERVER-CONFIG>ENABLE NFS
SERVER-CONFIG>ENABLE RPCMOUNT
SERVER-CONFIG>ENABLE RPCQUOTAD
SERVER-CONFIG>ENABLE RPCPORTMAP
SERVER-CONFIG>ENABLE RPCLOCKMGR
SERVER-CONFIG>ENABLE RPCSTATUS
SERVER-CONFIG>RESTART
Configuration modified, do you want to save it first ? [YES] YES
[Writing configuration to
SYS$COMMON:[MULTINET]SERVICES.MASTER_SERVER]
%RUN-S-PROC_ID, identification of created process is 0000017A
SERVER-CONFIG>EXIT
[Configuration not modified, so no update needed]
$

The following example pertains to systems that do include PC clients and use PC-NFSD.

$ MULTINET CONFIGURE/SERVER
MultiNet Server Configuration Utility 5.6
[Reading in configuration from MULTINET:SERVICES.MASTER_SERVER]
SERVER-CONFIG>ENABLE NFS
SERVER-CONFIG>ENABLE RPCMOUNT
SERVER-CONFIG>ENABLE RPCQUOTAD
SERVER-CONFIG>ENABLE RPCPORTMAP
SERVER-CONFIG>ENABLE RPCLOCKMGR
SERVER-CONFIG>ENABLE RPCSTATUS
SERVER-CONFIG>ENABLE PCNFSD
SERVER-CONFIG>RESTART
Configuration modified, do you want to save it first ? [YES] YES
[Writing configuration to
SYS$COMMON:[MULTINET]SERVICES.MASTER_SERVER]
%RUN-S-PROC_ID, identification of created process is 0000017A
SERVER-CONFIG>EXIT
[Configuration not modified, so no update needed]
$

Creating OpenVMS User Accounts for Client Users

An OpenVMS user account must exist for each client user who will have access to the OpenVMS file systems. In addition, the account must have access to those file systems.

As described in the Creating UID/GID Mappings section, you must also provide the server with a basis for mapping between each user's client and OpenVMS accounts.
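For example, a minimal AUTHORIZE session creating such an account might look like the following (the user name, UIC, password, device, and directory are illustrative):

$ RUN SYS$SYSTEM:AUTHORIZE
UAF>ADD JOHN /UIC=[200,10] /PASSWORD=SECRET123 /DEVICE=USERS /DIRECTORY=[JOHN]
UAF>EXIT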

Invoking the NFS Configuration Utility (NFS-CONFIG) and Displaying Configuration Information

To invoke the NFS Configuration Utility (NFS-CONFIG), enter:

$ MULTINET CONFIGURE/NFS

In response, NFS-CONFIG reads its current configuration file, NFS.CONFIGURATION, as shown in the following example. All configuration operations that use NFS-CONFIG change this file.

$ MULTINET CONFIGURE/NFS
MultiNet NFS Configuration Utility 5.6
[Reading in configuration from MULTINET:NFS.CONFIGURATION]
NFS-CONFIG>

You can display various information about the current configuration in the NFS.CONFIGURATION file. The following examples include these lists:

·         The file system export list, which is a list of the file systems available to the network. A mount restrictions list appears next to the entry for each file system, showing the clients that can access the file system (unless all clients can access it).

·         The UID/GID to OpenVMS user name translation list.

·         The global parameter list, which contains the server's global parameters.

These lists and corresponding aspects of server configurations are explained in subsequent sections of this chapter.

NFS-CONFIG>SHOW
Filesystem        Restrictions
----------        ------------
SYS$SYSDEVICE:    brown.example.com localhost
UID Translations: VMS Username     Unix UID     Unix GID
                  ------------     --------     --------
                   JOHN              10            15
                   BANZAI             2            40
NFS-CONFIG>

The following example shows a full display:

NFS-CONFIG>SHOW /FULL
Exported Filesystem "SYS$SYSDEVICE:":
     Mounts restricted to:
      brown.example.com
         localhost
UID Translations: VMS Username     Unix UID     Unix GID
                  ------------     --------     --------
                     JOHN             10           15
                     BANZAI           2            40
Kernel-Mode NFS server.
Kernel-Mode exceptions will cause NFS to hibernate for debugging.
Number of RPC Transports:             100 simultaneous requests
Size of duplicate request cache:      250 entries
File cache timer interval:             30 seconds
Read-Only flush age:                50000 seconds
Read/Write flush age:               50000 seconds
File info flush age:                 1200 seconds
Directory info flush age:             300 seconds
File info idle flush age:             600 seconds
Directory info idle flush age:        150 seconds
Use Directory Blocking ASTs for cache consistency
Use File Blocking ASTs for cache consistency
Maximum cache files:                 3000 files
Maximum cache buffers:                500 buffers
Maximum open channels:                 50 channels
Maximum file system files:           3000 files
Maximum file system buffers:          500 buffers
Maximum file system channels:          50 channels
Maximum Queued Removes:                25 files
Seconds Before Writeback:               3 seconds
  Maximum Dirty Buffers:                0 buffers (no limit)
  Maximum Write Jobs:                   0 operations (no limit)
NFS-CONFIG>

Creating UID/GID Mappings

The following sections describe how to create and manipulate UID/GID mappings.

Adding and Deleting Mappings

There are two methods for adding and deleting mappings of user names to UID/GID pairs. You can combine these methods as needed:

·         Add and delete individual mappings and NFS groups with NFS-CONFIG.

·         If the system includes UNIX clients with users with the same UNIX and OpenVMS user names, use one or more /etc/passwd files as the basis for multiple mappings and add those mappings to the configuration with NFS-CONFIG.

After creating or modifying the UID translation list, reload the server to make the changes take effect, as described in the Reloading the NFS Server Configuration and Restarting the Server section.

Adding and Deleting Individual Mappings

The ADD UID-TRANSLATION command creates an individual mapping between an OpenVMS user name and a UID/GID pair. For example:

NFS-CONFIG>ADD UID-TRANSLATION JOHN 10 15

To create a mapping between an OpenVMS user name and a UID/GID pair associated with the NFS group MARKETING, for example:

NFS-CONFIG>ADD UID-TRANSLATION JOHN 10 15 MARKETING

If you are creating UID/GID pairs, each code must be a positive integer or zero, and each user must have a unique UID, independent of the operating system the client is running. Someone who uses multiple clients must have the same UID for each of the clients, or use NFS groups to group together systems sharing the same UID mappings. To delete an individual mapping, use the DELETE UID-TRANSLATION command:

NFS-CONFIG>DELETE UID-TRANSLATION JOHN

To delete a mapping associated with an NFS group:

NFS-CONFIG>DELETE UID-TRANSLATION MARKETING/JOHN

Adding and Deleting NFS Groups

Use the ADD NFS-GROUP command to create an NFS group. For example:

NFS-CONFIG>ADD NFS-GROUP SALES WHORFIN.EXAMPLE.COM, CC.EXAMPLE.COM

Note: Client names must be fully qualified.

To delete a system from an NFS group, use the DELETE NFS-GROUP command:

NFS-CONFIG>DELETE NFS-GROUP SALES WHORFIN.EXAMPLE.COM

To delete the NFS group itself, use an asterisk (*) for the host specification:

NFS-CONFIG>DELETE NFS-GROUP SALES *

Adding Multiple Mappings

The /etc/passwd files from UNIX NFS clients can be used to create multiple mappings only when the user names on the UNIX and OpenVMS systems are the same. To create a multi-user mapping, use FTP (or another file transfer utility) to copy each applicable /etc/passwd file from the UNIX system to the OpenVMS system running the NFS server. Use the NFS-CONFIG ADD NFS-PASSWD-FILE command to create the mapping. For example:

NFS-CONFIG>ADD NFS-PASSWD-FILE MULTINET:NFS.PASSWD

To create a multi-user mapping associated with the NFS group MARKETING, you might use the command:

NFS-CONFIG>ADD NFS-PASSWD-FILE MULTINET:NFS.PASSWD1 MARKETING

CAUTION! If you add or delete users, or change the mapping between user name and UID/GID in an /etc/passwd file on an NFS client, be sure to make the same change in the NFS passwd file on the server.

The following example shows a UID translation list that includes both individual mappings and passwd file entries created with the NFS-CONFIG ADD UID-TRANSLATION and ADD NFS-PASSWD-FILE commands (excerpted from the output of a SHOW command).

NFS Passwd Files: MULTINET:NFS.PASSWD, MULTINET:NFS.PASSWD2
UID Translations: VMS Username     Unix UID     Unix GID
                  ------------     --------     --------
                     JOHN             10           15
                     BANZAI            2           40

The next example shows a UID translation list that includes NFS group entries, individual mappings, and passwd file entries created with the NFS-CONFIG ADD NFS-GROUP, ADD UID-TRANSLATION, and ADD NFS-PASSWD-FILE commands (excerpted from the output of a SHOW command).

NFS Group Name      Members
--------------      -------
ENGINEERING         control.example.com,fang.example.com
SALES               small-berries.example.com,whorfin.example.com
NFS Passwd Files: ENGINEERING/MULTINET:NFS.PASSWD, MULTINET:NFS.PASSWD2

UID Translations: VMS Username     Unix UID     Unix GID
                  ------------     --------     --------
                     JOHN             10           15
                     BANZAI            2           40
                     ENGINEERING/MAX  30           10
                     SALES/TOMMY      30           10

To delete an NFS passwd file entry, use the DELETE NFS-PASSWD-FILE command. For example:

NFS-CONFIG>DELETE NFS-PASSWD-FILE MULTINET:NFS.PASSWD

Exporting File Systems

To make a file system available to the network, you must export it by adding the name of the file system's mount point to the NFS.CONFIGURATION file system export list, then reloading the NFS server. To display the current list, use the SHOW command. A sample list is included in the Invoking the NFS Configuration Utility and Displaying Configuration Information section.

Note: All directories accessible via NFS must have at least READ and EXECUTE access set for the desired level of access (SYSTEM, OWNER, GROUP, or WORLD). In particular, the root directory file 000000.DIR (and possibly other directories below it) must have WORLD READ and EXECUTE access set. Otherwise, users on UNIX and PC systems may not be able to access files in their directories below 000000.DIR, even if they own those files and directories. If the directory protections are set incorrectly, directories that have files in them may appear to be empty.

Naming Mount Points

You must specify the names of the server mount points that are to be available to the network. The server accepts the following formats for mount point names:

·         A device name (for example, DUA0:)

·         A device and directory name (for example, DUA0:[USERS] or SYS$SYSDEVICE:[USERS])

·         A logical name (for example, SYS$SYSDEVICE: or SYS$SYSTEM:)

When a mount point can be specified with more than one name (for example, SYS$SYSDEVICE:, DISK$VAXVMSRL4:, and DUA0:) you can use any of them. However, note that the name you choose is also the name the client uses to access the file system when mounting it.

Although exported file systems can overlap, this practice is not recommended because information is duplicated in the NFS server cache. Overlapping information in the cache will often cause undesirable behavior by the NFS server.

For example, if you use the device name DUA0: as a mount point, you make the entire file system available for access. If you use the directory name DUA0:[USERS] as a mount point, you make DUA0:[USERS...]*.* available. If you specify both DUA0: and DUA0:[USERS] as mount points, the DUA0:[USERS] mount point is redundant and will result in unexpected behavior by the NFS server.

Note: To export a directory whose specification requires more than 13 characters, you must create a logical name that points to the directory and export the logical name instead of the directory name.

Adding File Systems to the Export List

Use the NFS-CONFIG ADD EXPORT command to export a mount point. Remember that you must reload the NFS server to make the new mount point available to the network.

Note: Do not export search paths.

The following example adds the file system SYS$SYSDEVICE: to the export list.

$ MULTINET CONFIGURE/NFS
MultiNet NFS Configuration Utility 5.6
[Reading in configuration from MULTINET:NFS.CONFIGURATION]
NFS-CONFIG>ADD EXPORT SYS$SYSDEVICE:
[Added new Exported file system "SYS$SYSDEVICE:"]
[Current Exported File System set to "SYS$SYSDEVICE:"]
NFS-CONFIG>RESTART

Once exported, a file system is "open" or available to all clients. You can restrict access to a mount point (as described in the Restricting Access to a Mount Point section); however, you should first configure the clients that will access it and test the resulting configuration before defining restrictions. Performing configuration operations in this sequence facilitates the verification of file system exports and mounts, since the server will not reject mount requests to an open file system.

Similarly, although you can change the settings of several global parameters (as described in the Modifying NFS Server Global Parameters section), wait until you have tested your initial configuration before making such changes.

If your network includes PC clients, you may want to configure the remote printing service of the PC-NFSD protocol (as described in the Configuring PC-NFSD Remote Printing Service section).

Removing File Systems from the Export List

To remove a file system from the network, use the NFS-CONFIG DELETE EXPORT command.  For example:

NFS-CONFIG>DELETE EXPORT SYS$SYSDEVICE:

Establishing Cluster-wide Aliases

MultiNet allows the system manager to declare a cluster-wide IP address serviced by a single node at any given time. If that node should fail, servicing of the cluster-wide IP address will "fail-over" to another node, allowing NFS clients to continue to access cluster disks even if the host running the NFS server crashes. For more information about creating cluster-wide IP addresses, refer to Chapter 11.

Reloading the NFS Server Configuration and Restarting the Server

Whenever you change the server configuration, you alter the NFS.CONFIGURATION file. Most of the remaining procedures described in this chapter change the configuration. Before you can use a new or revised configuration, you must reload the NFS server, either from within NFS-CONFIG or from DCL.

Reloading the server involves reloading the NFS and RPCMOUNT services:

·         Enter the following command to reload both protocols:

NFS-CONFIG>RELOAD

·         Enter the following command from DCL to reload only the NFS server:

$ MULTINET NETCONTROL NFS RELOAD

·         Enter the following command from DCL to reload only the RPCMOUNT server:

$ MULTINET NETCONTROL RPCMOUNT RELOAD

You may also restart the NFS server by killing the current process and running a new one. The following NFS-CONFIG command does this:

NFS-CONFIG>RESTART

Restarting the server causes the file cache to be flushed and the new server will need to rebuild it. Therefore, it is recommended that you use the RELOAD command whenever possible.

Shutting Down the NFS Server

You can edit your SYS$MANAGER:SYSHUTDWN.COM procedure to include commands that stop the NFS server. For example:

$ MULTINET NETCONTROL NFS SHUTDOWN

Testing the System Configuration

Test the configuration at these times:

·         After your initial configuration, when you have:

o   Specified the mappings between UIDs/GIDs and user names

o   Configured the NFS server

o   Restarted the NFS server

o   Configured one or more clients for the NFS server

·         After you modify the configuration by reconfiguring the NFS server, adding clients, or reconfiguring existing clients

To test a configuration, check all file systems from one client, and at least one file system from every client:

1. Log in as one of the client's users. For example, on a Linux host client, you might log in as "joebob" (be sure your system includes a mapping for "joebob's" UID/GID and a user name on the server system).

2. Mount a file system the user can access. For instructions on mounting file systems, see the Configuring Clients section.

3. Check the mount as described in the next steps.

a. Check the contents of the file system's mount directory. For example, on a Sun host client, use the cd command to change to the mount directory, and the ls -l command to list the names of the directory's files.

b. Verify that files in the mount directory can be read. For example, on a Sun host client, use the cp command to copy a file from directories under the mount point to /dev/null.

c. Verify that files can be written to the OpenVMS server. For example, on a Sun host client, use the following command to copy a file to the current directory:

$ cp /vmunix .

Note: Process Software recommends using the cp utility to test the server because it is better at reporting protection problems than most other UNIX utilities, including cat.

4. Repeat this process until you have mounted and checked all file systems that the client's users wish to access.

5. Log in from each of the other clients and check file system mounts as described in Steps 1 through 4.

Checking for Errors

After exporting file systems and restarting the server, but before configuring clients, enter the following command:

$ REPLY/ENABLE=NETWORK/TEMP

This command causes network event messages to be displayed on your terminal, including error messages from the NFS and RPCMOUNT servers. See the MultiNet Messages, Logicals, and DECnet Applications book for lists of error messages and the conditions that generate them.

Configuring Clients

After configuring the NFS server, you must configure each client that will access OpenVMS server file systems. Different types of clients require different configuration procedures. There are, however, two general guidelines for configuring clients:

·         Each client must explicitly mount each file system to which it requires access. For most types of clients, a mount directory must be created for each file system.

·         For most types of clients, you must change the default settings of some configuration parameters in the client's MOUNT command. These settings control how the client accesses the OpenVMS server.

The following section explains how to configure hosts running UNIX. Chapter 28 explains how to configure OpenVMS systems as clients using the NFS client software.

Configuring UNIX Host Clients

As part of the configuration process, you must log into each client and mount all file systems the client will have access to. For each client, you may also need to adjust the wsize, timeo, and retrans parameters for the client's mount command. Before a file system is mounted, you must also ensure that the directory under which a file system will be mounted exists on the client.

Mounting File Systems on UNIX Hosts

Mount each file system as follows:

1. Use the mkdir command to create the mount directories. For example, enter the following command while logged in as root to create a directory called /mnt:

# mkdir /mnt

2. Mount each file system by executing the mount command with conservative values for wsize, timeo, and retrans, using this syntax:

# mount -o options server:file-system mount-directory

For example:

# mount -o soft,rw,timeo=50,retrans=5 vmsmachine:sys\$sysdevice: /mnt

Once you have mounted the remote file system, you can experiment with other wsize, timeo, and retrans values to improve performance, as described in the Explicitly Specifying Mount Parameter Settings section.
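For example, once the initial mount works, you might remount with explicit transfer sizes to see whether throughput improves (rsize and wsize are standard NFS mount options; the values shown are only a starting point):

# umount /mnt
# mount -o soft,rw,timeo=50,retrans=5,rsize=8192,wsize=8192 vmsmachine:sys\$sysdevice: /mnt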

Note: When the mount point name is specified with OpenVMS syntax, any special characters (for example, $, [, and ]) must be delimited with a backslash (\) for proper processing by the UNIX shell.

3. If a mount is not successful, errors may be reported to the user's display or to the OpenVMS console via OPCOM. (NUL characters no longer appear in the OPCOM output.)

4. After performing a successful mount, and after adjusting the wsize, timeo, and retrans values, add the file system and its mount parameters to the client's /etc/fstab file so file system mounts will occur automatically.
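For example, a corresponding /etc/fstab entry might look like the following (the exact fstab format varies between UNIX variants, so check your client's documentation):

vmsmachine:sys$sysdevice: /mnt nfs soft,rw,timeo=50,retrans=5 0 0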

Explicitly Specifying Mount Parameter Settings

As part of configuring a UNIX host client, you may need to change the number of block I/O daemon (biod) processes or the values of one or more mount-parameter settings to correct the two problems discussed next. Make these corrections after performing the first mount of a file system as described in Step 2 in the procedure in the preceding section.

The OpenVMS XQP is relatively slow. There are times when the NFS server must perform many operations before returning the answer to a seemingly simple query. The resulting delay can cause a client to report "RPC timeout errors" and unnecessarily retransmit its query.

For example, accessing a large directory file can cause an unexpected delay in processing an NFS request. Process Software recommends you keep fewer than 1,000 files in each directory, especially when you frequently add and delete files.

Such problems usually occur sporadically, and are often not reproducible because the server has cached the result and can answer the query quickly when it is made a second time.

In Step 2 of Mounting File Systems on UNIX Hosts the problem was avoided by mounting the file system with larger than normal timeo (timeout) and/or retrans (retransmission) parameter settings. The higher timeo value increases the length of delay the server will tolerate before timing out. However, if a packet is lost during transmission, a large timeo value means a long delay before retransmission.

The higher retrans value increases the number and rate of retransmissions a client makes before timing out, hence decreasing the delay between retransmissions. Retransmissions do not adversely affect the server, however, as each new request is recorded in the duplicate-requests cache (described in the Modifying NFS Server Global Parameters section). The server discards all retransmissions (which are duplicates of the original request) as it processes the original.

The timeo and retrans values can be adjusted to achieve an appropriate tradeoff for your network. A high timeo value with a low retrans value is an appropriate solution for a reliable network that requires few retransmissions. In contrast, although specifying a high retrans value and a low timeo value can create significant overhead in unnecessary queries, this solution is appropriate for an unreliable network because it minimizes the delay when a packet is lost.

The total time available to the server to complete an operation is the product of the timeo and retrans values. For most systems, appropriate values are 50 for timeo (5 seconds; timeo is usually specified in tenths of a second) and 5 for retrans.
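With the values suggested above, for example, the server has roughly 5 seconds × 5 retransmissions = 25 seconds to complete an operation before the client gives up entirely.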

Restricting Access to a Mount Point

By default, when you export a file system, its mount point is "open" and available for mounting by any client on your network. However, for each exported file system, you can create a list of clients permitted to mount it. This list, called the mount restriction list, appears next to the name of the file system's mount point in the export list. The presence of a mount restriction list prevents all unlisted clients from mounting the mount point.

You can export a mount point for read-only access using the NFS-CONFIG command ADD MOUNT-RESTRICTION with the -ro (read-only) keyword in place of a client or NFS group name. Any attempt to write to the disk specified by this mount point fails. This restriction affects any NFS group associated with that particular mount point. This example shows how to export a disk so that all users are restricted to read-only access:

NFS-CONFIG>ADD MOUNT-RESTRICTION DISK$ONE -ro

The next example shows how to restrict one group of users (those on BOOTE.EXAMPLE.COM) to read-only access, at the same time denying access to everyone else:

NFS-CONFIG>ADD MOUNT-RESTRICTION DISK$USERS BOOTE.EXAMPLE.COM
NFS-CONFIG>ADD MOUNT-RESTRICTION DISK$USERS -ro

Use NFS-CONFIG to create and modify mount restriction lists. Use the following procedure to create a mount restriction list for a file system's mount point or add a client to the list. You must reload the server before a new or changed list goes into effect.

1. Select the mount point by entering SELECT.

2. Add a client to the mount point's list by entering ADD MOUNT-RESTRICTION. (If no list exists, specify the first client to automatically create a list.)

3. In addition to a client name, you can specify the name of an NFS group as described in the Grouping NFS Client Systems for UID/GID Mappings section. Specifying a group name is equivalent to individually listing each of the clients in that group.

The following example shows the client SALES.EXAMPLE.COM being added to the mount restriction list for the SYS$SYSDEVICE: mount point.

NFS-CONFIG>SELECT SYS$SYSDEVICE:
[Current Exported File System set to "SYS$SYSDEVICE:"]
NFS-CONFIG>ADD MOUNT-RESTRICTION SALES.EXAMPLE.COM
[Added Mount restriction to "SYS$SYSDEVICE:" allowing host
"sales.example.com"]
NFS-CONFIG>RESTART
$

4. To remove a client from the mount restriction list, use the DELETE MOUNT-RESTRICTION command followed by the RESTART command.

5. To display the mount restriction list for a mount point, use the SHOW command.

Controlling NFS File Access with OpenVMS Access Control Lists (ACLs)

Because of the differences between OpenVMS security and UNIX system security (after which NFS is modeled), configuring the NFS server to handle ACLs properly requires a thorough understanding of both systems.

The NFS server handles ACL mappings by allowing the addition of VMS rights identifiers to the NFS UID/GID translation table. The syntax is the same as that for adding a user name to the table.
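For example, a hypothetical rights identifier PROJECT_X could be mapped exactly as a user name would be (the identifier name and UID/GID values are illustrative):

NFS-CONFIG>ADD UID-TRANSLATION PROJECT_X 500 100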

Some effort is required to determine the necessary correlation between UIC groups and rights identifiers in OpenVMS and GIDs on the NFS client. The network administrator must scan the owners and ACLs of the files being exported and make sure all UICs and rights identifiers associated with the file system have a valid UID/GID translation. This is necessary to make sure the NFS server's representation of security information on all files is accurate.

Although file access is determined by the server based on the user's UIC, the correct representation of security information to the client can be critical. Many multi-user clients grant access to data in the cache locally, without making a request to the server. Therefore, it is imperative that the representation of a file's protection and security information is accurate.

Single-user clients do not usually have this problem. However, on any client that denies access based on returned security information, improper mapping may deny access unnecessarily.

To make sure the NFS server can handle requests for files with ACLs:

1. Make sure there are UID/GID translations for all rights identifiers in all ACLs associated with exported files.

2. For each rights identifier, make sure the appropriate users on NFS client systems have the same GID.

To illustrate this solution, consider an environment in which the files belonging to a project are exported as part of a single file system and you need to control access to each project's files by an ACL. Perform the following tasks for each project:

1. On the NFS client, select a GID for the project members.

2. Then, on the NFS server (a command sketch follows this list):

a. Create UID/GID mappings for each NFS client user who needs to access the project files (see the Creating UID/GID Mappings section). These mappings must match the GIDs on the client for the project.

b. Use AUTHORIZE to create a new identifier for the project.

c. Add a UID mapping for the project identifier. The GID associated with the project identifier must be the same as the project GID assigned to the NFS client. The choice of UID can be arbitrary, but the UID must not conflict with any other currently assigned UIDs.

d. Modify the protection of the project files to allow no WORLD or GROUP access. If the OpenVMS group is significant, however, you may want to allow GROUP access.

e. Add ACLs to the project files and directories that grant READ and WRITE access to holders of the project identifier that you created in step b.

f. Use AUTHORIZE to grant the project identifier you created in step b to users with the project GID (selected in Step 1).

3. Now, to add new users to the project:

·         Assign the project GID to the user on the NFS client. This mapping must match the GID on the client for the project.

·         Add a UID/GID mapping for the user on the NFS server.

·         Grant the OpenVMS project identifier to the new user.
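A hypothetical command sketch for steps b through f might look like the following (the identifier name, UID/GID values, file specifications, and user name are illustrative):

$ RUN SYS$SYSTEM:AUTHORIZE
UAF>ADD/IDENTIFIER PROJECT_X
UAF>GRANT/IDENTIFIER PROJECT_X JOHN
UAF>EXIT
$ MULTINET CONFIGURE/NFS
NFS-CONFIG>ADD UID-TRANSLATION PROJECT_X 500 100
NFS-CONFIG>RELOAD
NFS-CONFIG>EXIT
$ SET FILE/PROTECTION=(S:RWED,O:RWED,G,W) USERS:[PROJX...]*.*
$ SET FILE/ACL=(IDENTIFIER=PROJECT_X,ACCESS=READ+WRITE) USERS:[PROJX...]*.*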

Note: The preceding procedure is the supported method for using ACLs to control access to files exported via NFS. If you cannot use this method, refer to the How the NFS Server Interprets ACL and UIC Protection section for details on how the NFS server converts UIC and ACL protection information into UID/GID-style file protection masks for NFS clients.

Idiosyncrasies of ACL Support over NFS

When using ACLs, OpenVMS lets the NFS server assign different access masks for many different groups of users. When a file's attributes are transmitted to the client, the NFS protocol only lets the server return an access mask for the owner's GID; the protocol does not allow the NFS server to return multiple GIDs and their associated access masks. Because some NFS clients grant or deny access based on the protections returned with the file's attributes, the NFS server's responses to attribute requests sometimes change the owner's GID and associated access mask to properly represent access for the client user.

One anomaly these dynamic responses produce is that a directory listing on the client (for example, an ls -l command on a UNIX client) shows files accessed through ACLs as being owned by different GIDs at different times, depending on who accessed them most recently.

If the client grants or denies access based on the protection information in the cache, users may experience intermittent access failures when more than one user tries to gain access to the same file via an ACL. This phenomenon happens when the user would normally receive access through the group or through an ACE (access control list entry).

While world access can always be consistently mapped, owner access is only consistently mapped if the ACL does not contain ACEs that cannot be mapped to a GID. For details, see the How the NFS Server Handles ACLs section. If the UID/GID translation table is configured correctly, users should never have access to files to which they have no legitimate access on the server. However, they may intermittently be denied access.

How the NFS Server Interprets ACL and UIC Protection

The main difficulty facing NFS server administrators is how to coordinate NFS use of the UNIX-style UID/GID protection model with OpenVMS ACL support.

Note: Consulting ACLs as part of an NFS server's access-checking scheme is necessary, but not sufficient, to adequately support the presence of ACLs assigned to files.

Consider the case where an OpenVMS system manager wants to grant access to files based on project groups without having to make sure that all client UIDs map to the same OpenVMS group. A single user may be a member of several projects, a concept incompatible with the single-group model used by VMS.

It might seem that the system manager only needs to add rights identifiers to the mapped accounts and then set up ACLs on the appropriate files. The problem with this scheme is that file protections would normally be set to WORLD=NOACCESS, allowing file accesses to be granted only by the ACL. However, because the file protections deny access on a UID basis, any local access checks performed at the client will fail, bypassing the ACLs.

This problem can be solved if the NFS server makes intelligent use of the NFS GID. The NFS protocol allows a single user to be identified with up to 10 groups (projects), consistent with UNIX. In this model, the NFS client checks the list of GIDs assigned to the local user to see if it matches the group ID associated with the file. If there is a match, the GROUP field of the file protection mask is used to determine accessibility.

The NFS server takes advantage of this model by selectively modifying the returned group ownership for files based upon applicable ACEs. The NFS server processes ACLs in the following manner. To determine whether the NFS server will grant access:

1. The NFS server obtains rights identifiers for the OpenVMS account associated with the requester's UID.

2. The NFS server selects the first (if any) ACE assigned to the file (matching one of the rights identifiers held by the OpenVMS account). The protection specified in the ACE is used in place of the protection mask associated with the file.

3. If there are no matching ACEs, the NFS server performs the standard UIC protection check.

When asked by the NFS client for the protection mask and ownership for a file, the NFS server does the following:

1. The NFS server obtains rights identifiers for the OpenVMS account associated with the requester's UID.

2. If one of the ACE identifiers matches the file owner's UIC, the NFS server uses the protection mask in the ACE to calculate the OWNER field of the protection mask returned to the NFS client. Otherwise, the NFS server uses the OWNER field of the protection mask associated with the file to calculate the OWNER field returned to the NFS client.

3. The NFS server selects the first (if any) protection ACE assigned to the file (matching one of the rights identifiers held by the requester's VMS account).

4. If the NFS server encounters a matching ACE whose identification is a UIC, and the identifier:

·         Is in the same OpenVMS group as the file owner, the server ignores the ACE.

·         Is not in the same OpenVMS group as the file owner, the server maps the requester's UIC and GID, along with the ACE's protection mask, as the owner of the file when returning NFS attribute information.

5. If the NFS server selects an ACE, the group ownership it returns to the NFS client is taken from the GID associated with the matching identifier.

If no matching ACE is found, the NFS server obtains the GID for the file from the file owner.

To assign GIDs to UICs and identifiers, use the NFS-CONFIG ADD UID-TRANSLATION command described in the Creating UID/GID Mappings section.

Under this scheme, the system manager sets file protections as needed (for example, W:NOACCESS, G:RWE), and creates an ACL to grant access to processes holding a specific rights identifier.

When the NFS client performs local access checking, it compares the list of GIDs (associated with the user) against the file's group ownership, which the NFS server bases on the ACL information for the file. This scheme prevents the client's caching mechanism from defeating the ACLs associated with the file.

How the NFS Server Handles ACLs

The key to understanding how ACLs affect file access is in the exchange that takes place when an NFS client requests attributes for a file or directory it wants to access. The client sends the server the user's UID/GID pair when it identifies the file it wants to access. The server must respond with the UID/GID pair of the file's owner along with the protections on that file in UNIX format (R/W/E for owner, group, and world/others). To accomplish this, the NFS server must translate the OpenVMS protection mask, applicable ACEs, and UIC-based file owner into a UNIX-style protection mask and UID/GID-based owner.

If the file being requested has no ACL associated with it, the NFS server simply returns the OpenVMS file owner's UID/GID pair, which it obtains from the NFS server's UID translation table, and the file's owner, group, and world protections.

If the file has an ACL, the NFS server scans the ACL for ACEs in a format that the server cannot map to group protections. Such ACLs must be handled in a special way (see the Handling ACLs with Unmappable ACEs section).

If there are no unmappable ACEs, the client's UID is translated, and the ACL is scanned for a match based on the associated UIC. At the same time, the list is also scanned for ACEs that should be mapped to world or owner protections. Based on the scan, the server returns attributes as follows:

·         The OWNER protection mask returned is the owner default protection mask logically OR'd with the access mask of the first ACE matching the owner's UIC and associated rights identifiers. This emulates OpenVMS behavior and prevents the owner of the file from being denied access because of an ACE.

·         The WORLD protection mask returned is the access mask associated with the first "wildcard" ACE, if one exists. Otherwise, the WORLD protection mask returned is the default WORLD protection.

·         The GROUP protection mask returned is the access mask associated with the first ACE matching the requester's UIC and associated rights identifiers. The GID returned is the GID translation of the rights identifier or UIC that matched this ACE. If no such ACE is found, the GROUP protection mask returned is the default group protection mask and the GID returned is the GID translation of the file's owner.

·         The UID returned is the UID translation of the file's owner.
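As an illustration (all names and values here are hypothetical), suppose a file has the OpenVMS protection (O:RWED,G:RE,W:) and a single ACE (IDENTIFIER=PROJ_DEV,ACCESS=READ+WRITE), where PROJ_DEV is mapped to GID 2000 and the file's owner is mapped to UID 150. For a requester holding PROJ_DEV, the server returns an OWNER mask of rwx, a GROUP mask of rw- with GID 2000, and a WORLD mask taken from the default WORLD protection (here, no access). An ls -l on the client would therefore show the file with mode -rwxrw----, owner 150, and group 2000.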

Handling ACLs with Unmappable ACEs

Occasionally, ACE access cannot be mapped to a GID as described in the previous section. This happens when the ACE identifier is specified in the following manner:

[*,member]

This also happens in cases of multiple identifiers on a single ACE, such as:

ACE

Description

A+B

A and B represent rights identifiers.

[a,*]+[b,*]

a and b represent UIC groups.

A+[a,*]

A is a rights identifier and a is a UIC group.

 

If the ACL associated with the file contains any ACEs that cannot be mapped to a GID, file attributes are returned as follows:

·         The owner protection mask returned is the access mask associated with the first ACE matching the requester's UIC and associated rights identifiers. If no such ACE is found, the owner protection mask returned is the default protection mask appropriate for the requester; that is, the owner's default protection mask if the requester owns the file, the group protection mask if appropriate, and so on.

·         The owner UID/GID returned is the UID/GID translation of the requester.

·         The group protection mask returned is NONE.

·         The world protection mask returned is NONE.

The NFS server cannot accurately represent OpenVMS protections in this case. This technique ensures that no user is granted access through the client's cache to data they could not normally access on the server. However, on multi-user clients where access is denied based on cached file attributes, this mapping may result in intermittent access failures for other users trying to access the file simultaneously.

Disabling the NFS Server's ACL Support

You can disable the NFS server's ACL support by defining the logical name MULTINET_NFS_SERVER_NFS_ACL_SUPPORT_DISABLED as TRUE or YES in the system-wide logical name table.
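For example, using the same form as the other logical name definitions in this chapter:

$ DEFINE/SYSTEM/EXECUTIVE MULTINET_NFS_SERVER_NFS_ACL_SUPPORT_DISABLED "TRUE"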

After defining the logical name, restart the NFS server so the definition takes effect.

/VMS_STYLE_CREATE Mount Point Option

The /VMS_STYLE_CREATE mount point option instructs the NFS server to use OpenVMS semantics to determine the file owner when a file is created by an NFS client. This mechanism lets NFS clients create files and charge disk quota to a rights identifier, just as OpenVMS users do.

Normally, the NFS server sets the file owner field exactly as specified by the NFS client software.

Limitations and Restrictions

Alarm ACEs are not supported.

Configuring PC-NFSD Remote Printing Service

The MultiNet PC-NFSD server includes support for a remote printing service used by PC and PC-compatible clients running PC-NFS. This section describes how to configure the service.

Before you can configure the service, the PC-NFSD server must be enabled (usually performed when enabling the NFS server). Before PC users can use the service, you must set up OpenVMS accounts, UID/GID names, and UID/GID-user name translations for them. The cache-interrupt parameters (described later in the Modifying NFS Server Global Parameters section) must also be set to 1 (the default setting).

To configure the service, you must specify a generic mount directory name the server can use to create individual mount directories for spool areas on the clients. Each of these directories must be below an exported mount point available to all of the PCs.

The exported mount point directory must allow write access by the PC client users so their subdirectories and print files can be created.

For example, if the PC-NFSD spool area is specified as SYS$SYSDEVICE:[TMP], all PC-NFS user print files are placed in this directory, and all PC-NFS users must have write access to the SYS$SYSDEVICE:[TMP] directory.

If the PC-NFSD spool area is specified as SYS$SYSDEVICE:[TMP.%], PC-NFS user print files are placed in individual subdirectories of SYS$SYSDEVICE:[TMP], and all PC-NFS users must have write access to the SYS$SYSDEVICE:[TMP] directory.

The generic name can include a percent (%) character, which is replaced by the names of individual clients when the server creates their spool directories.

For example, if all clients can access either SYS$SYSDEVICE: or SYS$SYSDEVICE:[TMP] exported file systems, the name SYS$SYSDEVICE:[TMP.%] could be used for the generic spool directory. When the server receives a client's first remote printing request, the server creates a mount spool directory for that client. The server defines the name for this directory by using the generic directory name and replaces "%" with the client's name. The server supplies the name to the client, and the client mounts the directory. From then on, the client uses the directory to hold all files to be printed remotely, and the server performs all of the printing operations.

Configure the remote-printing service as described in the following steps, and as illustrated in the example that follows them.

 

Note: You use SERVER-CONFIG (not NFS-CONFIG) for this task. You also use SERVER-CONFIG to restart the server after you have finished.

 

 

1. Check the following settings before configuring the remote printing service:

·         Make sure file protections on the exported directories and subdirectories are set as desired for the specified user names, and that the exported directory allows at least WORLD:RE access.

·         For PCNFS printing, make sure the PC-NFSD spool directory allows write access by the user and the user has sufficient disk quota if disk quotas are enabled on the OpenVMS volume that contains the PC-NFSD spool area.

2. Invoke the SERVER-CONFIG utility:

$ MULTINET CONFIGURE/SERVER

3. Select the PC-NFSD protocol:

SERVER-CONFIG>SELECT PCNFSD

4. Invoke the parameter-editing procedure:

SERVER-CONFIG>SET PARAMETER

5. If the SPOOL-DIRECTORY parameter is set, the utility asks you if you want to delete it. Respond by entering YES.

Delete parameter "spool-directory" ? [NO] Y

6. The utility prompts you to add parameters or exit the utility. Set the SPOOL-DIRECTORY parameter and specify the generic directory name for it:

You can now add new parameters for PCNFSD. An empty line terminates.

Add Parameter: SPOOL-DIRECTORY directory-name

directory-name is the generic spool directory name.

7. In response, the utility prompts you again to add parameters or exit the utility. Press RETURN to exit.

8. Restart the MultiNet master server. The following example illustrates the complete procedure:

$ MULTINET CONFIGURE/SERVER
MultiNet Server Configuration Utility 5.6
[Reading in configuration from MULTINET:SERVICES.MASTER_SERVER]
SERVER-CONFIG>SELECT PCNFSD
[The Selected SERVER entry is now PCNFSD]
SERVER-CONFIG>SET PARAMETER
Delete parameter "spool-directory" ? [NO] Y
You can now add new parameters for PCNFSD.  An empty line terminates.
Add Parameter: SPOOL-DIRECTORY SYS$SYSDEVICE:[TMP.%]
Add Parameter:
[Service specific parameters for PCNFSD changed]
Restart the server to make these changes take effect.
SERVER-CONFIG>RESTART
Configuration modified, do you want to save it first ? [YES] Y
[Writing configuration to
SYS$COMMON:[MULTINET]SERVICES.MASTER_SERVER]
%RUN-S-PROC_ID, identification of created process is 000002CD
SERVER-CONFIG>EXIT
$

Use the logical name MULTINET_PCNFSD_QUEUE_TYPES to select the types of queues you want returned. Define the logical name as a comma-separated list of these valid queue types: GENERIC, PRINTER, SERVER, SYMBIONT, and TERMINAL. For example:

$ DEFINE/SYSTEM/EXECUTIVE MULTINET_PCNFSD_QUEUE_TYPES "PRINTER"

Use the logical name MULTINET_PCNFSD_PRINTER_LIMIT to limit the size of the returned packet; the logical name takes a number of bytes as its value. If this logical name is not defined, MultiNet determines the size of the packet at run time. For example:

$ DEFINE/SYSTEM/EXECUTIVE MULTINET_PCNFSD_PRINTER_LIMIT 45000

Modifying NFS Server Mount Point Options

By default, the NFS server maps OpenVMS file system semantics to UNIX file system semantics. File names undergo a special mapping and only the top version of files is accessible through NFS. This section describes the mount point options that control or disable this conversion.

The NFS protocol specification requires the NFS server to act like a UNIX file system. If you use any of the options described in this section, the NFS server acts more like an OpenVMS file system, and may be incompatible with some NFS clients.

The mount point options are specified as "qualifiers" to the mount point name, separated from the mount point name with a "#" character. With the NFS client, the switches are part of the directory specification being mounted. Because the switches are passed in the name, these options cannot be used with the automounter.

Mount Point Option Summary

The following table shows the qualifiers used to control the behavior of the NFS server.

Qualifier

Description

/APPROXIMATE_TEXT_SIZE

Allows UNIX commands such as ls -l to execute faster by returning an approximate size, rather than computing the exact size, for files whose OpenVMS file length exceeds the specified threshold.

/CREATE_NEW_VERSION

Causes an NFS create() operation on an already existing file to create a new version of the file instead of overwriting the old version. This qualifier has no effect on ULTRIX clients, because they do not send the correct NFS operation.

/DISPLAY_TOP_VERSION

Causes the NFS server to display the OpenVMS version number at the end of a file name when that file name is the highest version number available. The version number is usually not displayed.

/DISPLAY_VERSION

Causes the NFS server to display files that are not the highest version number. These files are usually not displayed.

/VMS_FILENAMES

Disables the file name mapping described in the Mapping UNIX File Names section.

/VMS_LOWERCASE_FILENAMES

Disables the file name mapping described in the Mapping UNIX File Names section, but changes the OpenVMS name to lowercase for display.

 

 

Note: This applies to ODS-2 exports only.

 

/VMS_STYLE_CREATE

Enables the use of OpenVMS semantics instead of NFS semantics for determining ownership of created files. With NFS semantics, the NFS client specifies everything explicitly. With OpenVMS semantics, file ownership may be inferred from the parent directory, ACLs, previous versions, and so on.

Examples of Mount Point Option Usage

The following example shows mounting and accessing a file system from UNIX using mount point options.

# mount -o soft,rw kaos:/users\#/vms_filenames/display_version /mnt
# ls /mnt/SMITH.DIR
BIN.DIR                     TMP.DIR
LOGIN.COM                   TODO
LOGIN.COM;32                TODO.;508
MAIL.MAI
#

Modifying NFS Server Global Parameters

Global parameters affect NFS server operations. Their default settings are appropriate for almost all configurations. The following sections describe the NFS global parameters and explain how to change their values.

Global parameters can be set with the NFS-CONFIG SET command. For a complete list of SET commands, refer to the MultiNet Administrator's Reference. Descriptions of the global parameters are also available online with the NFS-CONFIG HELP command.

 

Note: Change the settings only if absolutely necessary.

 

 

The NFS global parameters control:

·         Operations of the directory and file cache, including its size and discard rate

·         Operations of the duplicate-requests cache

·         Special operations for debugging the server

Most of the parameters control the operation of the directory and file cache that exists between the server's file systems and the network. The following sections describe the cache and the parameters.

If you must change the settings, wait until after you complete the initial system configuration and test. It is much easier to test and debug specific aspects of the NFS server before you have changed global characteristics.

NFS Mode of Operation

The default mode for the server is kernel mode. If the server becomes unresponsive, reboot the system and change the server to user mode; this, however, gives slower performance. If the server becomes unresponsive in user mode, issue the following command, where pid is the process ID of the NFS server process:

$ STOP/ID=pid

If the server crashes when in user mode, restart the server. The commands for changing to the two modes follow:

·         To put the server into user mode:

$ mu conf/nfs
NFS-CONFIG>SET USER-MODE 1
[Global NFS parameter "user-mode-server" set to 1]
NFS-CONFIG>restart
Configuration modified, do you want to save it first ? [YES] YES
[Writing NFS file server configuration to
MULTINET_COMMON_ROOT:[MULTINET]NFS.CONFIGURATION]
Connected to NETCONTROL server on "127.0.0.1"
< pc4.example.net Network Control V5.6 at Thu 26-Oct-2019 4:29PM-EST
< NFS/RPCLockMgr Server Started
< RPCMOUNT database reloaded
NFS Client UID mappings reloaded.
NFS-CONFIG>exit

·         To put the server into kernel mode (the default):

NFS-CONFIG>SET USER-MODE 0
[Global NFS parameter "user-mode-server" set to 0]
NFS-CONFIG>restart
Configuration modified, do you want to save it first ? [YES] YES
[Writing NFS file server configuration to
MULTINET_COMMON_ROOT:[MULTINET]NFS.CONFIGURATION]
Connected to NETCONTROL server on "127.0.0.1"
< pc4.example.net Network Control V5.6 at Thu 26-Oct-2019 4:29PM-EST
< NFS/RPCLockMgr Server Started
< RPCMOUNT database reloaded
NFS Client UID mappings reloaded.
NFS-CONFIG>exit

NFS Server Memory Considerations

The NFS server uses memory to cache files and directories for faster access, to hold various internal state, and to buffer requests that arrive from the clients, as follows:

·         The NFS code and fixed data structures require about 1400 pages.

·         The server's file and directory cache consumes the most memory: by default, a little under 20,000 pages (or 10 megabytes) of virtual memory.

As you install and configure OpenVMS and the NFS server, make sure that no OpenVMS limitation will interfere with these server requirements. If possible, specify a value of at least 30,000 for the SYSGEN VIRTUALPAGECNT parameter, and provide at least 30,000 pages of space in the system page file. If these resources are not available, adjust the settings of the server's parameters to decrease the maximum size of the cache, allowing it to fit within the limits of the available memory.
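For example, one conventional way to raise VIRTUALPAGECNT is to add a minimum value to MODPARAMS.DAT and run AUTOGEN. This is a sketch; adapt the value and AUTOGEN phases to your system and site procedures.

! In SYS$SYSTEM:MODPARAMS.DAT:
MIN_VIRTUALPAGECNT = 30000

$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS FEEDBACK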

The following equation defines the maximum amount of memory the server can use at one time, as a function of the global parameters.

total_memory_consumption = fixed_consumption + variable_cache_consumption

where:

fixed_consumption = 1400 pages

variable_cache_consumption = (1 page x MAXIMUM-CACHE-FILES)
                           + (17 pages x MAXIMUM-CACHE-BUFFERS)
                           + (1 page x MAXIMUM-OPEN-CHANNELS)

MAXIMUM-CACHE-FILES is the maximum number of file headers that can be cached.

MAXIMUM-CACHE-BUFFERS is the maximum number of data buffers that can be cached.

MAXIMUM-OPEN-CHANNELS is the maximum number of channels that can be open at a time between the disk and the cache.

Process Memory

Process memory requirements and limitations are defined by:

·         SYSGEN parameters you set when configuring OpenVMS

·         NFS server process quotas

·         NFS server global parameters you specify when configuring the server

Virtual and Physical Memory

OpenVMS imposes two memory limits: virtual and physical. As indicated in the following discussion, you must make sure that OpenVMS provides the server with adequate resources for both virtual and physical memory and that the relationship between the amounts of the two is appropriate.

The amount of virtual memory available to the server is defined by the lesser of:

·         The SYSGEN VIRTUALPAGECNT parameter

·         The NFS server PAGEFILE quota (by default, 65,536 pages)

The amount of physical memory available to the NFS server is defined by the lesser of:

·         The SYSGEN WSMAX parameter

·         The NFS server WSQUOTA and WSEXTENT quotas (by default, 2,000 and 20,000 pages, respectively)

When you install and configure an OpenVMS server, be sure to provide the server with enough virtual and physical memory. If the server runs out of virtual memory, it returns ENOBUFS error messages to clients whose requests cannot be satisfied. See the MultiNet Messages, Logicals, and DECnet Applications book for information about ENOBUFS messages.

It is equally important to provide enough physical memory to the server to prevent excessive page faulting under normal operation. If physical memory is scarce, reduce the cache's default size so enough memory is available to hold it without heavy page faulting, or allow the page faulting.

In general, a disk read performed to satisfy a page fault requires far fewer resources than a disk read performed to replace part of the cache that has been removed. This removal occurs as the server reaches the cache size limit specified in the configuration parameters. However, during a page fault, the server can perform no other activity for any client.

Under conditions of high load from many clients, better performance usually results from reducing the size of the cache to eliminate or reduce page faulting. Under high load from a few clients, better performance usually results from allowing the page faulting.

OpenVMS Channels

When a client requests information not in the cache, the server uses OpenVMS channels to access the required directories and files from disk. When you install and configure an OpenVMS server, you must ensure enough channels are available for server requirements. If the server runs out of OpenVMS channels, it returns an ENOBUFS error message to the client that requested the additional channel.

By default, the server can use up to 50 channels at once. (This number is appropriate for almost all systems, because channels are generally deassigned shortly after they have been used to read data into the cache.) You can increase or decrease the maximum number of channels by adjusting the setting of the server's MAXIMUM-OPEN-CHANNELS global parameter.

If you plan to increase the MAXIMUM-OPEN-CHANNELS value, you might need to increase the setting of the SYSGEN CHANNELCNT parameter when you install the server because this OpenVMS parameter must always have a value that is at least 10% greater than that of MAXIMUM-OPEN-CHANNELS. Do not increase the value for MAXIMUM-OPEN-CHANNELS above 450; this limit is set by the server's open file limit (FILLM) quota.
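For example, the following sketch raises the limit using the same NFS-CONFIG pattern shown elsewhere in this chapter. The value 100 is illustrative; verify the CHANNELCNT setting first.

$ MULTINET CONFIGURE /NFS
NFS-CONFIG>SET MAXIMUM-OPEN-CHANNELS 100
[Global NFS parameter "maximum-open-channels" set to 100]
NFS-CONFIG>RESTART
Configuration modified, do you want to save it first ? [YES] YES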

For more information about the global parameters that affect channel availability and operation, see the Directory and File Cache Parameters section.

Directory and File Cache Parameters

The directory and file cache holds data from directories and files that has been requested by client users. When answering repeated requests for the same data, the server uses the cache rather than the disk, greatly improving response time.

Channels, File Headers, and Data Buffers

When an NFS client user first requests information about a directory or file, the server assigns a channel to access it, and creates a cache entry to hold the contents of a file header. A cached file header contains information about characteristics of the directory or file (for example, its size or owner).

As the user requests data from the directory or file, the server creates 8-kilobyte data buffers for it, using the channel to read the data from disk.

If a channel remains inactive for a pre-specified length of time, the server deassigns the channel. However, the cached header and data buffers remain in the cache, and user requests can be satisfied without accessing the file on disk again.

Directory and File Times

OpenVMS does not correctly update modification dates for directories on disk. However, because clients rely on modification dates when they use their own caches, the NFS server provides clients with modification dates; the date reported for a directory is the time when that directory was brought into the NFS server's directory and file cache.

Concurrency Parameters

The NUMBER-OF-RPC-TRANSPORTS parameter controls the number of simultaneous requests the NFS server can process.

When the set limit is reached, no new requests are processed until one of the requests in progress completes. Processing multiple requests simultaneously prevents a single client from locking out other clients while it is performing a slow operation.

The default setting for this parameter (10) allows the server to process 10 requests simultaneously. This value may be changed to adjust the tradeoff between concurrency and memory requirements.
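For example, to raise the concurrency limit, a sketch following the same NFS-CONFIG pattern used for the other global parameters (the value 20 is illustrative):

NFS-CONFIG>SET NUMBER-OF-RPC-TRANSPORTS 20
[Global NFS parameter "number-of-rpc-transports" set to 20]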

Cache Interrupt Parameters

You can set the cache-interrupt parameters to cause the server to automatically discard cached information about a directory or file when an OpenVMS user tries to access the directory or file on the disk. The USE-DIRECTORY-BLOCKING-ASTS and USE-FILE-BLOCKING-ASTS parameters control whether the server flushes the cache in this situation.

These parameters can be set to 1 (on) or 0 (off). By default, they are both on, causing the server to discard the cached file header and all data buffers for a directory or file whenever an OpenVMS user attempts to access it on disk. These parameters must be set to 1 (one) to allow PC clients to use the PC-NFSD remote printing function. A setting of 1 also ensures that client users almost always receive the directory or file as it exists on disk. This consistency comes at the expense of the overhead of the additional interrupts and disk reads.

Cache-Timing Parameters

Because cached information may not be automatically updated if the directory or file is changed on the disk, the server periodically discards cached information. This requires a reread from disk the next time the information is needed.

The cache-timing parameters control the intervals at which channels are deassigned, and at which cached headers and cached data buffers are discarded. One of the parameters controls the polling interval at which the other parameters are checked.

Several of the cache-timing parameters distinguish between idle and active channels and cached data. An idle entity is one that is not being accessed by any client; an active entity is one that is in use.

Some of the cache timing parameters apply only to directories or only to files. Directory settings affect the speed at which local OpenVMS users see files created and deleted; file settings affect the speed at which users see file contents created and deleted.

Cache Maintenance Interval Parameters

The FILE-CACHE-TIMER-INTERVAL parameter determines how often the NFS server scans the cache, polls the other parameters to see if their timers have expired, and processes those that have.

The default setting for the FILE-CACHE-TIMER-INTERVAL parameter (30 seconds) is normally not changed during configuration.

Channel Deassignment Parameters

The READ-ONLY-FLUSH-AGE and READ-WRITE-FLUSH-AGE parameters determine how long idle channels can remain assigned to a file.

The READ-ONLY-FLUSH-AGE parameter applies to files that have been opened for read operations only; the READ-WRITE-FLUSH-AGE parameter applies to files that have been opened for both read and write operations. Closing a channel does not discard the data in the file headers and data buffers; clients can continue to access the cached data without requiring that the file be reopened.

The default values are 180 seconds for read-only channels and 60 seconds for read-write channels. You can shorten or lengthen the timer intervals to adjust tradeoffs between improved response time and the overhead of keeping channels assigned.

Cache Refresh Parameters

The DIRECTORY-INFO-IDLE-FLUSH-AGE, DIRECTORY-INFO-FLUSH-AGE, FILE-INFO-IDLE-FLUSH-AGE, and FILE-INFO-FLUSH-AGE parameters control how long cached headers and data buffers for a directory or file can remain in the cache.

As previously indicated, unless the cache-interrupt parameters are on, cached headers and buffers are not automatically discarded whenever an OpenVMS user attempts to access directories and files on disk. The cache-flush parameters specify a period after which the server discards cached information (requiring rereads from disk if the information is needed again). Two of the parameters control idle intervals, and two control active intervals.

Each setting for the four parameters represents a tradeoff between response time and concurrency between information stored in the cache and on the disk.

The default setting for the DIRECTORY-INFO-IDLE-FLUSH-AGE parameter is 150 seconds. The default setting for the DIRECTORY-INFO-FLUSH-AGE is 300 seconds. (In combination, these settings specify that cached directory information is discarded after 300 seconds if the information is in use, but discarded after 150 seconds if the information is not in use.)

The default setting for the FILE-INFO-IDLE-FLUSH-AGE parameter is 600 seconds. The default setting for the FILE-INFO-FLUSH-AGE parameter is 1200 seconds.

You can raise or lower any of the default settings, but do not set either directory parameter below 15 seconds or the server will be unable to complete directory operations.

Cache Size Parameters

The cache-size parameters for the directory and file cache determine the maximum numbers of channels, file headers, and data buffers that can simultaneously exist for the cache or for a given file system. The settings for each of these parameters reflect tradeoffs between response time and memory requirements.

In addition, in combination with the SYSGEN VIRTUALPAGECNT parameter and the available page file space, three of these parameters affect the maximum amount of memory that the variable portion of the cache can use at a time. As described in the NFS Server Memory Considerations section, those parameters are MAXIMUM-CACHE-FILES, MAXIMUM-CACHE-BUFFERS, and MAXIMUM-OPEN-CHANNELS.

OpenVMS Channel Usage Parameters

The MAXIMUM-OPEN-CHANNELS and MAXIMUM-FILESYSTEM-CHANNELS parameters determine the maximum number of open channels allowed simultaneously for the cache as a whole and for single file systems on a per-mount-point basis.

When a set limit is reached, and a request to access a new directory or file is received, the server deassigns the oldest open channel and uses the channel to complete the new request, ignoring the setting of the READ-ONLY-FLUSH-AGE or READ-WRITE-FLUSH-AGE parameter.

The default setting for both parameters is 50. You can change either value to adjust the tradeoff between response time and memory requirements. Do not increase them to greater than 90% of the value for the OpenVMS SYSGEN parameter CHANNELCNT, which determines the maximum number of channels available to the OpenVMS system as a whole. The 10% buffer between the values is required to handle the OpenVMS channels used for operations other than server operations, and to handle the times when the server is briefly allowed to exceed the MAXIMUM-OPEN-CHANNELS value (for example, the period between the time when the server opens a channel that causes the limit to be exceeded and the time when it closes another channel to observe the limit).

If the server runs out of OpenVMS channels, an ENOBUFS error is returned to the client that requested the additional channel (see the MultiNet Messages, Logicals, and DECnet Applications book).

Cache Memory Requirements Parameters

The MAXIMUM-CACHE-FILES, MAXIMUM-FILESYSTEM-FILES, MAXIMUM-CACHE-BUFFERS, and MAXIMUM-FILESYSTEM-BUFFERS parameters determine the maximum number of cached file headers and data buffers allowed simultaneously for the cache as a whole and for single file systems on a per-mount-point basis.

As described earlier, cached file headers contain attribute information for directories and files, and data buffers contain data from directories and files. A cached file header requires about 128 bytes of memory; a data buffer requires about 8 kilobytes.

The default setting for both cached-header parameters is 3000; the default setting for both data-buffer parameters is 500. You can change these settings to adjust the tradeoffs between response time and memory requirements, but increasing any of the values may require increasing the size of the server's virtual address space or page file quota, as described in the NFS Server Memory Considerations section.

 

Note: Unless the settings for MAXIMUM-CACHE-BUFFERS and MAXIMUM-FILESYSTEM-BUFFERS are high enough to allow the cache to hold the largest files the client will access, performance will be severely degraded for those files. Each cached data buffer holds 16 disk blocks.
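For example, a 16,000-block file occupies 1,000 data buffers when fully cached (16,000 blocks divided by 16 blocks per buffer), twice the default MAXIMUM-CACHE-BUFFERS setting of 500.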

 

 

Auto Server Cache Sizing

The NFS server limits the number of files it has open at one time based on the BYTLM process quota. For the server to operate properly, it must have sufficient BYTLM to have 30 files open simultaneously. If the server process BYTLM quota is too small to accommodate this requirement, the server issues the following OPCOM messages:

%%%%%%%%%% OPCOM 11-APR-2020 14:52:04.87 %%%%%%%%%%
Message from user SMITH on NODE1
NFS Server: Max Accessed files: 11.

"11" is the calculated number of simultaneous file accesses the server can have, based on the available BYTLM quota.

If this number is less than 30, the following message also appears:

%%%%%%%%%% OPCOM 11-APR-2020 14:52:04.88 %%%%%%%%%%
Message from user SMITH on NODE1
NFS Server: Increase BYTLM to new_value.

new_value is the recommended value for the BYTLM process quota.

Use SERVER-CONFIG to set the BYTLM process quota for the NFS server process:

$ MULTINET CONFIGURE/SERVER
SERVER-CONFIG>SELECT NFS
SERVER-CONFIG>SET PQL-BYTLM nnnnn
SERVER-CONFIG>EXIT

Restart the NFS server to put the new process quota into effect:

$ MULTINET NETCONTROL NFS RESTART

Writeback Cache Parameters

The SECONDS-BEFORE-WRITEBACK, MAXIMUM-DIRTY-BUFFERS, and MAXIMUM-WRITE-JOBS parameters control the functions of the optional writeback feature of the directory and file cache.

The directory and file cache normally functions as a write-through cache. In this case, whenever a client is notified that a write request has completed, the data is stored on the disk, and data integrity is guaranteed.

The optional writeback feature greatly increases the speed of write operations (as perceived by the user) by notifying the client that write operations are complete when the data is stored in cache memory on the server, but before it has been written to disk.

This increase in perceived write performance is achieved at the risk of data loss if the OpenVMS server system crashes while a write operation is in progress. During a write operation, data may also be lost if the server encounters an error such as insufficient disk space, an exhausted disk quota, or a hardware write error.

If the server cannot complete a writeback write operation, it discards the write operation, flags the file's cached header to indicate the error, and sends an error message in response to the next request for the file. However, if no new request arrives before the affected header is discarded, or if the next request comes from another user, data can be lost.

If you enable the writeback cache feature, you can prevent data losses from occurring during system shutdowns by adding the following line to the server's SYS$MANAGER:SYSHUTDWN.COM file:

$ MULTINET NETCONTROL NFS SHUTDOWN

 

Note: You can also use this command to perform a simple shutdown of the NFS server.

 

 

The writeback cache parameters have the following meanings:

·         The SECONDS-BEFORE-WRITEBACK parameter determines whether the writeback feature is enabled, and specifies how long the server will delay initiating a write operation after receiving data for a write request. The longer the delay, the greater the chance that the server can coalesce multiple small write operations into fewer, larger, and more efficient operations.

 

Note: This timing parameter is not affected by the FILE-CACHE-TIMER-INTERVAL parameter.

 

 

The default setting (0) disables the writeback feature. Any other value enables the feature. The recommended value for writeback delay is five seconds; little performance is gained from longer delays.

·         If the writeback cache is enabled, the MAXIMUM-DIRTY-BUFFERS and MAXIMUM-WRITE-JOBS parameters control its operation.

·         MAXIMUM-DIRTY-BUFFERS sets a limit on the number of buffers that can remain in the cache awaiting writeback before the SECONDS-BEFORE-WRITEBACK time has expired. As soon as this limit is reached, the server begins writeback of the oldest buffer. The default setting for this parameter (0) sets no limit.

·         MAXIMUM-WRITE-JOBS sets a limit on the number of write operations that can be simultaneously processed. When this limit is reached, the server defers starting a new write operation until a current operation completes. The default setting for this parameter (0) sets no limit.
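For example, to enable the writeback cache with the recommended five-second delay, a sketch using the same NFS-CONFIG pattern as the other global parameters:

NFS-CONFIG>SET SECONDS-BEFORE-WRITEBACK 5
[Global NFS parameter "seconds-before-writeback" set to 5]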

Duplicate Request Detection Cache Parameters

The server has a duplicate-request detection cache to store the most recent responses it has sent to clients requesting directory and file access. The NUMBER-OF-DUPLICATE-REQUESTS-CACHED parameter defines the number of responses that can be cached.

The duplicate-request detection cache operates in conjunction with the cache the RPC protocol module keeps of the transaction IDs (XIDs) of the last 400 requests it has seen. The RPC layer uses its cache to detect duplicate requests.

For example, if the network layer drops a UDP packet containing a response to a client, the client repeats the request after an interval, and the RPC protocol notifies the NFS server that the request was a duplicate. The server then looks in its duplicate-request detection cache for the response so it can resend it without repeating the original operation.

By default, the cache stores the last 250 responses sent.
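For example, to raise the cache to its effective maximum of 400 entries (a sketch using the same NFS-CONFIG pattern as the other global parameters; see the note below before changing this value):

NFS-CONFIG>SET NUMBER-OF-DUPLICATE-REQUESTS-CACHED 400
[Global NFS parameter "number-of-duplicate-requests-cached" set to 400]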

 

Note: Too low a value causes the following error message to be printed frequently on the OpenVMS console: "Duplicate Detected but not in cache." Too low a value can also cause an incorrect answer to be sent. A value above 400 has the same effect as 400, which is the maximum number of XIDs stored by the RPC protocol.

 

 

Delete-Behind Cache Parameters

The MAXIMUM-QUEUED-REMOVES parameter affects the way client users perceive the speed at which directories and files are deleted.

The OpenVMS file deletion operation is very slow. The NFS server uses its delete-behind queue to hide some of this delay from the client user; when a request to delete a directory or file arrives, the request is answered immediately, but the delete request is usually only queued to the OpenVMS file system.

The MAXIMUM-QUEUED-REMOVES parameter limits the number of requests that can be queued; when that number is reached, the next delete request must wait until the oldest queued request has completed. This delay can be significant if the next request is to delete a large directory; directory deletions always occur synchronously, and each file in a directory must be deleted before the directory itself is deleted.

Therefore, the parameter's setting defines when, in a series of deletions, the client user perceives the delay in the OpenVMS deletion. The default setting is 25.

Time Zone Parameters

Although OpenVMS does not track time zones, the NFS server requires this information. The TIMEZONE parameter identifies the local time zone for the OpenVMS server. This parameter is also a MultiNet global parameter. If the parameter is set appropriately there, you do not need to set it again as a server global parameter.

The server uses the TIMEZONE setting to calculate the offset between Greenwich Mean Time and the local time recorded for directories and files when they are cached and modified in the cache.

Valid TIMEZONE settings are the time zone abbreviations; for example, PST (Pacific Standard Time). When the setting defines a U.S. time zone, the server automatically adjusts the time zone to conform to the U.S. federal Daylight Saving Time rules.

The default setting is GMT (Greenwich Mean Time). If your local time and the time to which your OpenVMS clock is set differ, set the TIMEZONE parameter to correspond to the OpenVMS clock.
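For example, a server whose OpenVMS clock runs on Pacific Standard Time might use the following (a sketch; this assumes the TIMEZONE parameter is set with the same NFS-CONFIG SET pattern used for the other global parameters):

NFS-CONFIG>SET TIMEZONE PST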

For more information about MultiNet's handling of time, see Chapter 14.

Special Debugging Parameters

The following special debugging parameters exist only to debug the NFS server under unusual circumstances. Do not use them without instructions from Process Software Technical Support.

CRASH-ON-EXCEPTION
DEBUG-MESSAGE-CACHE-SIZE
EXIT-ON-EXCEPTION
FILECACHE-DEBUG
HIBERNATE-ON-EXCEPTION
MAXIMUM_DEBUG_PRINTS
NFSDEBUG
PRINT-TO-STDOUT
RPCDEBUG

 

 

NFS Troubleshooting Tips

This section describes workarounds for common problems encountered when using the NFS server.

Approximate Text Size Threshold

If you have many large files, UNIX commands that expect file sizes to be available may take a long time to execute. To set the approximate text size threshold so UNIX commands like ls -l execute faster:

1.      Set the threshold on the server system.

2.      Use the mount option on the client system so the threshold takes effect.

The following example shows how to set the threshold on the server:

$ MULTINET CONFIGURE /NFS
NFS-CONFIG>SET APPROXIMATE-TEXT-SIZE-THRESHOLD 250
[Global NFS parameter "approximate-text-size-threshold" set to 250]
NFS-CONFIG>SAVE
[Writing NFS file server config to MULTINET_ROOT:[MULTINET]NFS.CONFIGURATION]
NFS-CONFIG>RELOAD
Connected to NETCONTROL server on "127.0.0.1"
< Code-Z.EXAMPLE.COM Network Control 5.6 at Mon 18-Aug-2019 9:25PM-PDT
< OK: NFS/RPCLockMgr server configuration reloading
< RPCMOUNT database reloaded

To take advantage of the approximate text size threshold, NFS clients must mount the file system with the /APPROXIMATE_TEXT_SIZE option. The following example shows how to use the mount option on a UNIX NFS client.

% mount -o soft hq:/altusers/alex/test\#/approximate_text

NFS Stream_LF File Conversion

If you receive an error message related to incompatible file attributes, the following information will help. When you use the COPY command to copy a non-Stream_LF format file to a disk mounted by the NFS client, MultiNet converts the file to Stream_LF format (by default) to ensure that text files can be shared between OpenVMS and UNIX systems. To preserve the non-Stream_LF format, use the /SEMANTICS=NOSTREAM_CONVERSION qualifier as part of the NFSMOUNT command.
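For example, a sketch of such a mount (the node name, exported path, and mount point shown here are placeholders, and the argument order is illustrative; see the NFSMOUNT command page for the exact syntax):

$ NFSMOUNT HQ "/altusers/alex" NFS1: /SEMANTICS=NOSTREAM_CONVERSION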

For more information on the NFSMOUNT command, refer to the command page in the DCL Commands section of the MultiNet Administrator's Reference. For more information on NFS default file attributes, see Chapter 28.

Performance Problems with Large Directories

Because of XQP limitations, you may experience performance problems when processing certain requests on large directory trees. Process Software recommends that you keep fewer than 1,000 files in each directory.