Revision: $Revision: 947 $ ($Date: 2012-07-19 14:03:32 +0200 (Thu, 19 Jul 2012) $)
Candidates should be able to export filesystems using NFS. This objective includes access restrictions, mounting an NFS filesystem on a client and securing NFS.
NFS configuration files
NFS tools and utilities
Access restrictions to certain hosts and/or subnets
Mount options on server and client
|The file |
|The exportfs command|
|The showmount command|
|The nfsstat command|
|The file |
|The file |
|The rpcinfo command|
|The mountd command|
|The portmapper command|
The NFS protocol is currently being reworked, a process which has, so far, taken several years. This has consequences for those using NFS. Modern NFS daemons will currently run in kernel space (part of the running kernel) and support version 3 of the NFS protocol (version 2 will still be supported for compatibility with older clients). Older NFS daemons running in user space (which is almost independent of the kernel) and accepting only protocol version 2 NFS requests, will still be around. This section will primarily describe kernel-space NFS-servers supporting protocol version 3 and compatible clients. Differences from older versions will be pointed out when appropriate.
Details about this NFS work-in-progress are in the section called “NFS protocol versions” below.
To run NFS, the following is needed:
Each is discussed in detail below.
When configuring a kernel for NFS, it must be decided whether the system will be an NFS client, a server, or both. A system whose kernel contains NFS-server support can also be used as an NFS client.
The situation described here is valid for the 2.4.x kernel series. Specifications described here may change in the future.
NFS-related kernel options
If you want to use NFS as a client, select this.
If this is the only NFS option selected, the system
will support NFS protocol version 2 only. To use protocol
version 3 you will also need to select CONFIG_NFS_V3.
When CONFIG_NFS_FS is selected, support for
an old-fashioned user-space NFS-server
(protocol version 2) is also present.
You can do without this option when the system is a
kernel-space NFS-server only (i.e., neither client nor
user-space NFS-server).
CONFIG_NFSD (NFS server support). Kernel space only.
When you select this, you get a
kernel-space NFS-server supporting NFS
protocol version 2.
Additional software is needed to control the kernel-space
NFS-server (as will be shown later).
To run an old-fashioned user-space NFS-server this option is not needed.
This option adds support for version 3 of the NFS
protocol to the kernel-space NFS-server. The kernel-space
NFS-server will support both version 2 and 3 of the NFS
protocol.
You can only select this if you also select
NFS server support (CONFIG_NFSD).
When configuring during a kernel build (i.e., make menuconfig, make xconfig, etc.), the options listed above can be found in the File Systems section, subsection Network File Systems.
Table 9.1, “Kernel options for NFS” provides an overview of NFS support in the kernel.
Table 9.1. Kernel options for NFS
|Description||option(s)||allows / provides|
|NFS file system support||CONFIG_NFS_FS||allows both NFS (v2) client and user space NFS (v2) server|
|NFSv3 client support||CONFIG_NFS_FS and CONFIG_NFS_V3||allows NFS (v2 + v3) client|
|NFS server support||CONFIG_NFSD||provides NFS (v2) kernel server|
|NFSv3 server support||CONFIG_NFSD and CONFIG_NFSD_V3||provides NFS (v2 + v3) kernel server|
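Expressed as a kernel .config fragment, a machine built as both NFSv3 client and kernel-space NFSv3 server might contain the following. CONFIG_NFS_FS and CONFIG_NFSD appear in the text above; the CONFIG_NFS_V3 and CONFIG_NFSD_V3 names are the conventional ones for the version-3 options and are an assumption here:

```
# NFS (v2 + v3) client support
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
# kernel-space NFS (v2 + v3) server support
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
```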
Selecting at least one of the NFS kernel options turns on
Sun RPC (Remote Procedure Call) support automatically.
This results in a kernel-space RPC input/output daemon.
It can be recognised as [rpciod] in the process listing.
The portmapper is needed for all NFS traffic.
Most distributions will install the portmapper if NFS software (other than the kernel) is being installed.
First, make sure the portmapper has support for the tcp wrapper built in. You can test this by running ldd /sbin/portmap. The result could be something like:
libwrap.so.0 => /lib/libwrap.so.0 (0x40018000)
libnsl.so.1 => /lib/libnsl.so.1 (0x40020000)
libc.so.6 => /lib/libc.so.6 (0x40036000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
The line with libwrap.so.0
(libwrap belongs to the tcp wrapper) shows that this portmapper is
compiled with tcp-wrapper support.
If that line is missing, get a better portmapper or compile one
yourself.
A common security-strategy is blocking incoming portmapper requests by default, but allowing specific hosts to connect. This strategy will be described here.
Start by editing the file /etc/hosts.deny
and adding the following line:

portmap: ALL

This denies every system access to the portmapper. It can be extended with a command:
portmap: ALL: (echo illegal rpc request from %h | mail root) &
Now all portmapper requests are denied. In the second example, requests are denied and reported to root.
The next step is allowing access only to those systems that should have it.
This is done by putting lines in /etc/hosts.allow.
A line listing an IP-address prefix allows each host whose IP address starts with those numbers to connect to the portmapper and, therefore, use NFS. Another possibility is specifying part of a hostname, which allows all hosts inside that domain (e.g., example.com) to connect. An NIS workgroup name allows all hosts of that workgroup, and a subnet specification (a network address with netmask) allows hosts with IP addresses in that subnet: for instance, 192.168.24.16 with a netmask of 255.255.255.248 allows all hosts from 192.168.24.16 to 192.168.24.23 to connect (examples from [Zadok01]).
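A sketch of what such /etc/hosts.allow lines could look like, one per access form described above. The address prefix, the workgroup name trusted and the subnet values are illustrative, not from the original text; example.com is the domain used above:

```
portmap: 192.168.24.                      # IP-address prefix
portmap: .example.com                     # all hosts in the example.com domain
portmap: @trusted                         # all hosts of a NIS workgroup (hypothetical name)
portmap: 192.168.24.16/255.255.255.248    # subnet: hosts 192.168.24.16 - 192.168.24.23
```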
NFS is implemented as a set of daemons. These
can be recognized by their name: they all start with the
rpc. prefix followed by the name of the daemon.
Among these are:
rpc.nfsd (only a support program in systems
with a kernel NFS server), rpc.mountd, rpc.lockd and rpc.statd.
The source for these daemons
can be found in the nfs-utils package
(see [NFS] for more information on nfs-utils).
It will also contain the source of other support programs,
such as exportfs,
showmount and nfsstat.
These will be discussed later in
the section called “Exporting filesystems” and the section called “Testing NFS”.
Distributions may provide
nfs-utils as a
ready-to-use package, sometimes under different names. Debian,
for example, provides the lock and status daemons in a special
nfs-common package, and the
NFS and mount daemons in separate server packages
(which come in user-space and kernel-space versions).
Each of the daemons mentioned here can also be secured using the tcp wrapper. Details in the section called “Securing NFS”.
When implementing an NFS server, you can install support for a kernel-space or a user-space NFS server, depending on the kernel configuration. The rpc.nfsd command (sometimes called nfsd) is the complete NFS server in user space. In kernel space, however, it is just a support program that can start the NFS server in the kernel.
A kernel-space or a user-space NFS server
The kernel-space NFS server is part of the running kernel.
A kernel NFS server appears as [nfsd] in the process list.
The version of rpc.nfsd that supports the NFS server inside the kernel is just a support program to control the NFS kernel server(s).
The rpc.nfsd program can also contain an
old-fashioned user-space NFS server (version 2 only).
A user-space NFS server is a complete
NFS server. It can be recognized as
rpc.nfsd in the process list.
The mountd (or rpc.mountd) mount-daemon handles incoming NFS (mount) requests. It is required on a system that provides NFS server support.
The configuration of the mount daemon includes exporting (making available) a filesystem to certain hosts and specifying how they can use this filesystem.
Exporting filesystems, using the
file and the exportfs command
will be discussed in the section called “Exporting filesystems”.
A lock daemon for NFS is implemented in rpc.lockd.
You won't need lock-daemon support when using modern (2.4.x)
kernels. These kernels provide one
internally, which can be recognized as [lockd] in the process list.
Since the internal kernel lock-daemon takes precedence, starting
rpc.lockd accidentally will do no harm.
There is no configuration for rpc.lockd.
According to the manual page, status daemon rpc.statd
implements only a reboot notification service.
It is a user-space daemon - even on systems with NFS version 3
support. It can be recognized as
rpc.statd in the process listing.
It is used on systems with NFS client and NFS server support.
There is no configuration for rpc.statd.
Exporting a filesystem, or part of it, makes it available for use by another system. A filesystem can be exported to a single host, a group of hosts or to everyone.
In the examples below the system called
nfsshop will be the NFS server and the system called
clientN one of the clients.
/etc/exports contains the definition(s)
of filesystem(s) to be exported, the name of the host that is
allowed to access it and how the host can access it.
Each line in
/etc/exports has the following format:
the name of the directory to be exported,
followed by the name of the system (host) that is allowed to access it,
with the options between parentheses. The options will be discussed further on.
More than one system with options can be listed:
/home/ftp/pub clientN(rw) *.example.com(ro)
/home/ftp/pub clientN (rw)
The first allows clientN
read and write access. The second allows
clientN access with default options
(see man 5 exports) and all
systems read and write access! Note the space before (rw) in the second line: it makes those options apply to all hosts, not to clientN.
Several export options can be used in
/etc/exports. Only the most important will be
discussed here. See the exports(5) manual page for a full list.
Two types of options will be listed here:
general options and user/group id options.
Also relevant is the way NFS handles user and group permissions across systems. NFS software considers users with the same UID and the same username as the same users. The same is true for GIDs.
Without restrictions, user root on a client can read (and write) everything on the exported filesystem;
root permission over NFS is therefore considered dangerous.
User/group id squashing
root_squash (default): all requests by user
root on
clientN (the client) will be done as user
nobody on
nfsshop (the server). This implies, for
instance, that user root on
the client can only read files on the server that are
world-readable.
no_root_squash: all requests as
root on the
client will be done as root
on the server.
This is necessary when, for instance, backups are to be made over NFS.
This implies that root on
nfsshop completely trusts user root on
clientN.
all_squash: requests of any user other than root on
clientN are performed as user
nobody on nfsshop.
Use this if you cannot map usernames and UID's easily.
no_all_squash (default): all requests of a non-root user on
clientN are attempted as the same user on
nfsshop.
Example entry in
/etc/exports on system
nfsshop (the server system):

/ client5(ro,no_root_squash) *.example.com(ro)

System nfsshop allows system
client5 read-only access
to everything, and reads by user root are done as root on
nfsshop.
Systems from the example.com domain
are allowed read-only access, but requests
from root are done as user nobody, because
root_squash is true by default.
Here is an example file:
# /etc/exports on nfsshop
# the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
/        client2.exworks(ro,root_squash)
/        client3.exworks(ro,root_squash)
/        client4.exworks(ro,root_squash)
/home    client9.exworks(ro,root_squash)
The clients client2, client3 and client4 are
allowed to mount the complete filesystem
(/).
But they have read-only access and requests are done as user
nobody.
The host client9 is only allowed to mount the
/home directory with the same rights
as the other three hosts.
After /etc/exports is configured, the export
list in it can be activated using the
exportfs command. It can also be used to
reload the list after a change or to deactivate the export list.
Table 9.2, “Overview of exportfs” shows some of the functionality of exportfs.
Table 9.2. Overview of exportfs
|exportfs -r||reexport all directories|
|exportfs -a||export or unexport all directories|
|exportfs -ua||de-activate the export list (unexport all)|
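As a command sketch of that functionality (exportfs -r is confirmed later in this section; -a and -ua are the conventional exportfs flags for the other operations and should be checked against exportfs(8) on your system):

```shell
exportfs -r    # reexport all directories after editing /etc/exports
exportfs -a    # export (or, combined with -u, unexport) all directories
exportfs -ua   # unexport all: de-activate the export list
```

These commands must be run as root on the NFS server.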
Older (user-space) NFS systems may not have the
exportfs command.
On these systems the export list will be installed automatically
by the mount daemon when it is started.
Reloading after a change is done by sending a
SIGHUP signal to the running mount-daemon process.
The export list is activated (or reactivated) with the following command:

exportfs -r

Before the exportfs -r is issued, no filesystems are exported and no other system can connect.
When the export list is activated, the kernel export table will be filled. The following command will show the kernel export table:

cat /proc/fs/nfs/exports
The output will look something like:
# Version 1.1
# Path Client(Flags) # IPs
/       client4.exworks(ro,root_squash,async,wdelay) # 192.168.72.4
/home   client9.exworks(ro,root_squash,async,wdelay) # 192.168.72.9
/       client2.exworks(ro,root_squash,async,wdelay) # 192.168.72.2
/       client3.exworks(ro,root_squash,async,wdelay) # 192.168.72.3
Explanation: all named hosts are allowed to mount the root
directory (client9: the /home directory) of this machine with the
listed options. The IP addresses are listed for convenience.
Also use exportfs -r
after you have made changes to /etc/exports
on a running system.
When running exportfs -r, some
things will be done in the directory
/var/lib/nfs. Files there are easy to corrupt
by human intervention with far-reaching consequences, as I have
learned from personal experience.
The showmount command shows information about the exported filesystems and active mounts on the host. Table 9.3, “Overview of showmount” shows how showmount can be used.
Table 9.3. Overview of showmount
|showmount --exports||show active export list|
|showmount||show names of clients with active mounts|
|showmount --directories||show directories that are mounted by remote clients|
|showmount --all||show both client-names and directories|
showmount accepts a host name as its last
argument. If present, showmount will query the
NFS-server on that host. If omitted, the current host will be
queried (as in the examples below, where the current host is
nfsshop).

# showmount --exports
Export list for nfsshop:
/     client2.exworks,client3.exworks,client4.exworks
/home client9.exworks
The information is more sparse than the output of cat /proc/fs/nfs/exports shown earlier.
Without options. Without parameters, showmount will show names of hosts currently connected to the system:
# showmount
Hosts on nfsshop:
client9.exworks
# showmount --directories
Directories on nfsshop:
/home
# showmount --all
All mount points on nfsshop:
client9.exworks:/home
An NFS client system is a system that does a mount-attempt, using the mount command. The mount command needs to have support for NFS built in. This will generally be the case.
The NFS client-system needs to have appropriate NFS support in the kernel, as shown earlier (see the section called “Configuring the kernel for NFS”). Next, it needs a running portmapper. Last, software is needed to perform the remote mount attempt: the mount command.
Familiarity with the mount command and
/etc/fstab is assumed in this
paragraph. If in doubt, consult the appropriate manual pages.
Two things must be specified: the remote filesystem
and the local mount point.
Example: to mount the
/usr filesystem, which is on machine
nfsshop, onto the local
/usr directory, use:

mount -t nfs nfsshop:/usr /usr
Fine-tuning of the mount request is done through options.
Several options are possible after the -o flag:
Mount options for NFS
When ro is specified, the remote
NFS filesystem will be mounted
read-only. With the
rw option the remote
filesystem will be made available for both reading and
writing (if the NFS server agrees).
The default on the NFS server side
(in /etc/exports) is
ro, but the default on the client side
(mount -t nfs) is
rw. The server setting takes
precedence, so such mounts will effectively be read-only.
The option -o ro can also be written as -r;
-o rw can also be written as -w.
The rsize option specifies the size for
read transfers (from server to client). The
wsize option specifies the opposite
direction. A higher number makes data transfers faster on a
reliable network. On a network where many retries are needed,
transfers may become slower.
Default values are either
1024 or 4096, depending on your kernel
version. Current kernels accept a maximum of up to 8192.
NFS version 3 over
tcp, which will probably be
production-ready by the time you read this, allows a maximum
of 32768. This size is defined with
NFSSVC_MAXBLKSIZE in an include file
found in the kernel source-archive.
Specifies the transport-layer protocol for the NFS connection.
Most NFS version 2 implementations support only
udp, but tcp
implementations do exist.
NFS version 3 will allow both udp and
tcp (the latter is under active
development).
Future version 4 will allow only tcp.
See the section called “NFS protocol versions”.
Specifies the NFS version used for the transport
(see the section called “NFS protocol versions”).
Modern versions of mount will use version
3 by default. Older implementations that
still use version
2 are probably still around.
With the default (hard) behaviour, the system will try indefinitely.
A mount-attempt can be interrupted by the user if
intr is specified.
A mount-attempt cannot be interrupted by a user if
nointr is set. The mount request can seem
to hang for days if
retry has its default
value (10000 minutes).
These options control the background mounting facility. It is off by default.
Background mounting is also affected
by other options. When
intr is specified,
the mount attempt will be interrupted by an RPC timeout.
This happens, for example, when either the remote host is down
or the portmapper is not running. In a test setting the
backgrounding was only done when a “connection
refused” occurred.
Options can be combined using commas:
mount -t nfs -o ro,rsize=8192 nfsshop:/usr/share /usr/local/share
A preferred combination of options might be: hard, intr and bg.
The mount will then be tried indefinitely, with retries in the background,
but can still be interrupted by the user that started the mount.
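One plausible reading of that combination, reusing the host and paths from the earlier example (treat the exact option list as an assumption and check it against your mount(8) and nfs(5) pages):

```shell
# hard: retry indefinitely; bg: retry in the background after the
# first timeout; intr: the user may still interrupt the attempt
mount -t nfs -o hard,bg,intr nfsshop:/usr/share /usr/local/share
```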
Of course, all these options can also be specified in
/etc/fstab. Be sure to specify the
noauto option if the filesystem
should not be mounted automatically
at boot time.
The user option will allow non-root
users to perform the mount. This is not the default.
Example entry in /etc/fstab:
nfsshop:/home /homesOnShop nfs ro,noauto,user 0 0
Now every user can do mount /homesOnShop.
After NFS has been set up, it can be tested. The following tools can help: showmount, rpcinfo and nfsstat.
As shown in the section called “The showmount command”, the showmount --exports command lists the current exports of a server system. This can be used as a quick indication of the health of the created NFS system. Nevertheless, there are more sophisticated ways of doing this.
To see which file systems are mounted, check /proc/mounts. It will also show NFS-mounted filesystems.
$ cat /proc/mounts
The rpcinfo -p command lists all registered services the portmapper knows about. Each rpc... program registers itself at startup with the portmapper, so the names shown correspond to real daemons (or the kernel equivalents, as is the case for NFS version 3).
It can be used on the server system
to see if the portmapper is functioning:
   program vers proto  port
    100003    3   udp  2049  nfs
This selection of the output shows that this portmapper will accept connections for nfs version 3 on udp.
A full sample output of rpcinfo -p on a server system:
rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    757  status
    100024    1   tcp    759  status
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100021    1   udp  32770  nlockmgr
    100021    3   udp  32770  nlockmgr
    100021    4   udp  32770  nlockmgr
    100005    1   udp  32771  mountd
    100005    1   tcp  32768  mountd
    100005    2   udp  32771  mountd
    100005    2   tcp  32768  mountd
    100005    3   udp  32771  mountd
    100005    3   tcp  32768  mountd
As can be seen in the listing above, the portmapper will accept RPC requests for versions 2 and 3 of the NFS protocol, both on udp.
As can be seen, each RPC service has its own version number. The
mountd service, for instance,
supports incoming connections for versions 1, 2 or 3 of mountd on
both udp and tcp.
It is also possible to probe
nfsshop (the server
system) from a client system, by specifying the name of the server
system after -p:
rpcinfo -p nfsshop
The output, if all is well, of course, will be the same.
It is possible to test a connection without doing any real work:
rpcinfo -u remotehost program
This is like the ping command to test a network
connection. However, rpcinfo -u works like a real
rpc/nfs connection, sending a so-called null
pseudo request. The -u option forces
rpcinfo to use udp transport.
The result of the test on nfsshop:

rpcinfo -u nfsshop nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
The -t option will do the same for tcp transport:

rpcinfo -t nfsshop nfs
rpcinfo: RPC: Program not registered
program 100003 is not available
This system obviously does have support for nfs on udp, but not on tcp.
In the example output, the number 100003 is used instead of
or together with the name
nfs. Name or number
can be used in each other's place. That is, we could also have used:

rpcinfo -u nfsshop 100003
Table 9.4, “Some options for the nfsstat program” provides an overview of relevant options for nfsstat.
Sample output from nfsstat -sn on the server host nfsshop:
Server nfs v2:
null       getattr    setattr    root       lookup     readlink
1       0% 3       0% 0       0% 0       0% 41      0% 0       0%
read       wrcache    write      create     remove     rename
5595   99% 0       0% 0       0% 1       0% 0       0% 0       0%
link       symlink    mkdir      rmdir      readdir    fsstat
0       0% 0       0% 0       0% 0       0% 7       0% 2       0%

Server nfs v3:
null       getattr    setattr    lookup     access     readlink
1     100% 0       0% 0       0% 0       0% 0       0% 0       0%
read       write      create     mkdir      symlink    mknod
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
remove     rmdir      rename     link       readdir    readdirplus
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
fsstat     fsinfo     pathconf   commit
0       0% 0       0% 0       0% 0       0%
The 1's under both null headings (v2 and v3)
are the result of the rpcinfo -u nfsshop nfs
command shown earlier.
NFS security has several unrelated issues. First, the NFS protocol and implementations have some known weaknesses. NFS file-handles are numbers that should be random, but are not, in reality. This opens the possibility of making a connection by guessing file-handles. Another problem is that all NFS data transfer is done as-is. This means that anyone able to listen to a connection can tap the information (this is called sniffing). Bad mount-point names combined with human error can be a completely different security risk.
Both sniffing and unwanted connection requests can be prevented by limiting access to each NFS server to a set of known, trusted hosts containing trusted users: within a small workgroup, for instance. Tcp-wrapper support or firewall software can be used to limit access to an NFS server.
The tcp wrapper.
Earlier on (see the section called “Securing the portmapper”) it was shown how to
limit connections to the portmapper from specific hosts.
The same can be done for the NFS related daemons, i.e.,
rpc.mountd and rpc.statd.
If your system runs an old-fashioned user-space NFS server (i.e.,
rpc.nfsd in the process list), consider
protecting rpc.nfsd and possibly
rpc.lockd, as well. If, on the other hand, your
system is running a modern kernel-based NFS implementation
([nfsd] in the process list), you
cannot do this, since the rpc.nfsd program is not
the one accepting the connections.
Make sure tcp-wrapper support is built into each daemon you wish to protect.
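Following the portmapper example from earlier, the corresponding tcp-wrapper entries could look like the sketch below. The bare daemon names mountd and statd are assumptions: the name the tcp wrapper matches on depends on how the daemon was built, and the address prefix is illustrative:

```
# /etc/hosts.deny: deny everything by default
mountd: ALL
statd: ALL

# /etc/hosts.allow: then allow the trusted hosts
mountd: 192.168.24.
statd: 192.168.24.
```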
Firewall software. The problem with tcp-wrapper support is that there already is a connection inside the host at the time that the connection-request is refused. If a security-related bug exists within either the tcp-wrapper library (not very likely) or the daemon that contains the support, unwanted access could be granted. Or worse. Firewall software (e.g., iptables) can make the kernel block connections before they enter the host. You may consider blocking unwanted NFS connections at each NFS server host or at the entry point of a network to all but acceptable hosts. Block at least the portmapper port (111/udp and 111/tcp). Also, consider blocking 2049/udp and 2049/tcp (NFS connections). You might also want to block other ports like the ones shown with the rpcinfo -p command: for example, the mount daemon ports 32771/udp and 32768/tcp (at least on my system). How to set up a firewall is shown in detail in Chapter 12, System Security (212).
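A minimal iptables sketch of that blocking, assuming a trusted subnet of 192.168.24.16/29 (the subnet, and the choice to add DROP rules rather than use a default-deny policy, are illustrative; mount-daemon ports vary per system, as noted, so check rpcinfo -p):

```shell
# drop portmapper traffic from everyone outside the trusted subnet
iptables -A INPUT -p udp --dport 111 ! -s 192.168.24.16/29 -j DROP
iptables -A INPUT -p tcp --dport 111 ! -s 192.168.24.16/29 -j DROP
# drop NFS traffic as well
iptables -A INPUT -p udp --dport 2049 ! -s 192.168.24.16/29 -j DROP
iptables -A INPUT -p tcp --dport 2049 ! -s 192.168.24.16/29 -j DROP
```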
Simple human error in combination with bad naming may also result in a security risk. You would not be the first person to remove a remote directory tree because the mount point was not easily recognized as such and the remote system was mounted with read-write permissions.
Mount read-only. Mounting a remote filesystem read-only can prevent accidental erasure. So, mount read-only if at all possible. If you do need to mount a part read-write, make the part that can be written (erased) as small as possible.
Design your mountpoints well. Also, name a mount point so that it can easily be recognized as a mount point. One of the possibilities is to use a special name that marks the directory as a mount point.
Progress has been made in NFS software. Although no software can prevent human error, other risks (e.g., guessable file-handles and sniffing) can be prevented with better software.
NFS version 4 is a new version of the NFS protocol intended to fix all existing problems in NFS. It is still in the design phase. More about NFS version 4 and differences between the versions in the section called “NFS protocol versions” in a moment.
Guessable file handles. One of the ways to break into an NFS server is to guess so-called file-handles. The old (32-bit) file-handles (used in NFS version 2) were rather easy to guess. Version 3 of the NFS protocol offers improved security by using 64-bit file-handles that are considerably harder to guess.
Version 4 security enhancements. The upcoming version 4 of the NFS protocol defines encrypted connections. When the connection is encrypted, getting information by sniffing is made much harder or even impossible.
Table 9.5, “Overview of NFS-related programs and files” provides an overview of the most important files and software related to NFS.
Table 9.5. Overview of NFS-related programs and files
|program or file||description|
|The kernel||provides NFS support|
|The portmapper||handles RPC requests|
|rpc.nfsd||NFS server control (kernel space) or software (user space)|
|rpc.mountd||handles incoming (un)mount requests|
|The file ||defines which filesystems are exported|
|The exportfs command||(un)exports filesystems|
|showmount --exports||shows current exports|
|The rpcinfo command||reports RPC information|
|The nfsstat command||reports NFS statistics|
|showmount --all||shows active mounts to me (this host)|
|mount -t nfs||mounts a remote filesystem|
|umount -t nfs -a||unmounts all remote filesystems|
Currently, there are a lot of changes in the NFS protocol that can affect the way the system is set up. Table 9.6, “Overview of NFS protocol versions” provides an overview.
Table 9.6. Overview of NFS protocol versions
|Protocol version||Current status||kernel or user space||udp or tcp transport|
|2||becoming obsolete||user, kernel||udp, some tcp impl. exist|
|3||new standard||kernel||udp, tcp: under development|
The trends that can be seen in table Table 9.6, “Overview of NFS protocol versions” are: kernel space instead of user space and tcp instead of udp.
A note on the transport protocol.
Connections over tcp (NFS v3, v4, some v2) are
considered better than connections over
udp (NFS v2, v3).
The udp option might be the best on a small, fast
network. However, tcp
allows considerably larger packet sizes (rsize,
wsize) to be set. With sizes of 64k,
tcp connections are reported to be 10% faster
than connections over udp, which
does not allow sizes that large.
See Zadok01 for a
discussion about this.