
NFS Servers and Clients

Network File System (NFS) is a legacy network file system protocol that allows a user on a client computer to access files over a network as easily as if they were on the client's local disks. Linux can be configured to serve NFS exports, and it can mount exports from remote systems. This document describes both processes in brief.

Note that there are better ways to achieve this; NFS is one of the less secure approaches to file sharing. Using SSH (Secure Shell) may be more prudent in many scenarios.

Using NFS server mounts on an Open Client Workstation is in violation of ITCS300 security rules and should never be practiced.

Considerations

  • NFS may have problems starting on some systems due to the order in which the services start; this is partly due to how they were installed.
  • Portmap is required (provided via the xinetd service on these systems); if it is killed at any time, NFS stops working.
  • NFS is affected by the firewall; during setup, temporarily turn off iptables (i.e. service iptables stop ).
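The registered RPC services (including portmap itself) can be checked with rpcinfo -p. As a sketch, the snippet below parses sample rpcinfo output, embedded here as a string for illustration; on a live host, pipe the real command instead:

```shell
# Sample `rpcinfo -p localhost` output, embedded for illustration;
# on a live system use: rpcinfo -p localhost | awk 'NR > 1 { print $5 }' | sort -u
rpcinfo_sample='   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp    918  mountd
    100003    2   tcp   2049  nfs'

# Print the unique names of the registered RPC services
# (the first line of rpcinfo output is a header, so skip NR == 1).
registered=$(echo "$rpcinfo_sample" | awk 'NR > 1 { print $5 }' | sort -u)
echo "$registered"
```

If portmapper does not appear in the list, NFS will not function until it is started.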

Configurations and commands

  • Short list:
    Config/Command Details
    /etc/exports
    • where the shared directories are defined on Linux systems serving exports.
    NFS over UDP or TCP
    • NFS requires several ports (see the firewall section below).
    showmount -e
    • shows the available mounts on a remote host, e.g. showmount -e server1 (server1 being the host).
    xinetd
    • the service which provides the portmap service required by NFS (among other things).
    nfs
    • the nfs-utils package provides the NFS server daemon. Again, this is only needed when setting up an NFS server.
    chkconfig
    • sets services to run at boot.
    service
    • the command that starts/stops/reports status on a particular service (i.e. service xinetd status ).
    su -
    • switches to the root user and inherits root's PATH values.


NFS Client Configuration Steps


(For users who simply want to mount filesystems from another system serving NFS-accessible mounts. In short, the user wants to reach a remote NFS server.)

  1. Open a terminal session and switch to the root user (i.e. su - )
  2. Confirm remote mounts are available using showmount:
    showmount -e remote.host.ibm.com
  3. Create a local mount point directory:
    mkdir /g39data
  4. Then mount the system using the mount command:
    /usr/sbin/mount -o rw,soft,bg,intr remote.host.ibm.com:/g39data /g39data


    The following are options commonly used for NFS mounts:

    • hard or soft - Specifies whether the program using a file via an NFS connection should stop and wait (hard) for the server to come back online if the host serving the exported file system is unavailable, or whether it should report an error (soft). If hard is specified, the user cannot terminate the process waiting for the NFS communication to resume unless the intr option is also specified. If soft is specified, the user can set an additional timeo=<value> option, where <value> specifies the time to wait before the error is reported.
    • intr - Allows NFS requests to be interrupted if the server goes down or cannot be reached.
    • nfsvers=2 or nfsvers=3 - Specifies which version of the NFS protocol to use.
    • nolock - Disables file locking. This setting is occasionally required when connecting to older NFS servers.
    • noexec - Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system via NFS containing incompatible binaries.
    • nosuid - Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
    • rsize=8192 and wsize=8192 - These settings speed up NFS communication for reads (rsize) and writes (wsize) by setting a larger data block size, in bytes, to be transferred at one time. Be careful when changing these values; some older Linux kernels and network cards do not work well with larger block sizes.
    • tcp - Specifies for the NFS mount to use the TCP protocol instead of UDP.


  5. Now test the mount (e.g. ls /g39data or df -h /g39data )
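To make the mount from step 4 persist across reboots, an equivalent entry can be added to /etc/fstab, using the same host, mount point, and options as in the example above (a sketch; verify the option names against your mount documentation):

```
# device                        mountpoint   type   options          dump fsck
remote.host.ibm.com:/g39data    /g39data     nfs    rw,soft,bg,intr  0    0
```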

NFS Server Configuration Steps

(For users looking to set up mount points to share from their local Linux host. In short, the user wants to set up an NFS server.)

  1. Install the nfs-utils and xinetd packages.
  2. Turn off the iptables firewall for testing purposes.
  3. Edit the /etc/exports file and set up the exports:
    # /etc/exports
    # mount dir the allowed incoming connections
    # ----------- -----------------------------------------
    /pub *.ibm.com(ro,sync) 192.168.15.0/255.255.255.0(ro,sync)
    /home/support 9.0.0.0/255.0.0.0(rw,sync) 192.168.15.0/255.255.255.0(rw,sync)


    Users can also set up exports using graphical tools. The Red Hat-based Open Client for Linux systems requires the system-config-nfs package.

  4. Start the xinetd service and then the nfs service. On Red Hat-based systems that would be:
    service xinetd start
    service nfs start



    If you have problems, try starting the services manually (Note: the following commands are based on Red Hat (Open Client)):

    [root@funbox rc5.d]# /etc/rc.d/init.d/portmap start
    Starting portmap: [ OK ]



    Start the NFS service using the service command:

    [root@funbox rc5.d]# service nfs start
    Starting NFS services: [ OK ]
    Starting NFS quotas: [ OK ]
    Starting NFS daemon: [ OK ]
    Starting NFS mountd: [ OK ]
    Starting RPC idmapd: [ OK ]



  5. Then, to show that the exports are active, use showmount -e :
    [root@funbox rc5.d]# showmount -e localhost
    Export list for localhost:
    /pub *.ibm.com,192.168.15.0/255.255.255.0
    /home/support 192.168.15.0/255.255.255.0,9.0.0.0/255.0.0.0


  6. Then, to set the nfs service to start at boot, place the commands in rc.local or issue the following:
    chkconfig --level 35 nfs on


  7. Read documentation on setting up firewall rules.
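Once the server is running, later edits to /etc/exports can be applied without restarting the nfs service by using the exportfs command, which ships with nfs-utils:

```
exportfs -ra    # re-read /etc/exports and re-export all directories
exportfs -v     # list the current exports with their active options
```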


Securing NFS

NFS works well for sharing entire filesystems with a large number of known hosts in a largely transparent manner. Many users accessing files over an NFS mount may not be aware that the filesystem they are using is not local to their system. However, with ease of use comes a variety of potential security problems.

The following points should be considered when exporting NFS filesystems on a server or mounting them on a client. Doing so will minimize NFS security risks and better protect your data and equipment.
Host Access

NFS controls who can mount an exported filesystem based on the host making the mount request, not the user who will utilize the filesystem. Hosts must be given explicit rights to mount the exported filesystem. Access control is not possible for users, other than file and directory permissions. In other words, when you export a filesystem via NFS to a remote host, you are trusting not only the host you allow to mount the filesystem, but also every user with access to that host. The risks can be limited, for example by requiring read-only mounts and squashing users to a common user and group ID, but these restrictions may prevent the mount from being used in the way originally intended.

Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS filesystem, the hostname or fully qualified domain name of an authorized system can be pointed at an unauthorized machine. At that point, the unauthorized machine is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount. The same risks hold true for compromised NIS servers, if NIS netgroups are used to allow certain hosts to mount an NFS share. Using IP addresses in /etc/exports makes this kind of attack more difficult.

Wildcards should be used sparingly when granting host access to an NFS share, since the scope of a wildcard may encompass systems you do not know exist and that should not be allowed to mount the filesystem.
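For example, a wildcard such as *.ibm.com matches every host that resolves into that domain, while naming hosts or subnets explicitly keeps the exposure narrow (the hostnames below are illustrative):

```
# broad: any host in the domain may mount /pub
/pub *.ibm.com(ro,sync)
# narrower: only two named hosts may mount /pub
/pub host1.ibm.com(ro,sync) host2.ibm.com(ro,sync)
```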
File Permissions

Once the NFS filesystem is mounted read-write by a remote host, the only protection for each shared file is its permissions and its user and group ID ownership. If two users who share the same user ID value mount the same NFS filesystem, they will be able to modify each other's files. Additionally, anyone logged in as root on the client system can use the su command to become any user who could access particular files via the NFS share.

The default behavior when exporting a filesystem via NFS is to use root squashing. This maps the user ID of anyone accessing the NFS share as the root user of their local machine to the server's nobody account. You should never turn off root squashing unless you are comfortable with multiple users having root access to your server.

If you are only allowing users to read files via your NFS share, consider using the all_squash option, which makes every user accessing your exported filesystem take the user ID of the nobody user.
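For instance, a read-only export combining these protections might look like the following (the path and network are illustrative):

```
# /etc/exports -- read-only; root and all other users mapped to nobody
/pub 192.168.15.0/255.255.255.0(ro,sync,root_squash,all_squash)
```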
- (Information obtained from http://redhat.com/)


NFS Firewall Settings (IPTABLES)

How to (securely) open your Open Client when running an NFS server

With the firewall running, NFS clients will hang trying to connect to your NFS mount points if the communication ports are not made available. However, opening these ports adds vulnerabilities to your system and should only be done on secure networks.

To create an iptables firewall rule:

  1. Open a terminal session and change to the /etc/iptables.d/filter/INPUT folder.
  2. Then create a file named "27-allow-nfs-mounts.rule" with the content below:
    # portmapper
    -A INPUT -p udp -m udp --dport 111 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
    # or
    #-A INPUT -p udp -m udp --dport portmapper -j ACCEPT
    #-A INPUT -p tcp -m tcp --dport portmapper -j ACCEPT
    #
    # status (the ports vary and are not in /etc/services)
    -A INPUT -p udp -m udp --dport 840 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 843 -j ACCEPT
    # rquotad (the ports vary and are not in /etc/services)
    -A INPUT -p udp -m udp --dport 902 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 905 -j ACCEPT
    # mountd (the ports vary and are not in /etc/services)
    -A INPUT -p udp -m udp --dport 918 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 921 -j ACCEPT
    #nfs
    -A INPUT -p udp -m udp --dport 2049 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
    # nlockmgr (the ports vary and are not in /etc/services)
    -A INPUT -p udp -m udp --dport 32792 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 42621 -j ACCEPT

    Some of the ports above may vary; use rpcinfo -p localhost to identify the ports in use on your system.
    In addition, you may have to edit the "xx-c4eb-drop.rule" file and remove the entry that drops port 111 (portmap).

  3. After editing or adding these rule files, run the IBM script (/opt/ibm/c4eb/firewall/create-rule-file.sh on the Open Client 2.0) and restart the iptables service.

An alternative is a rule that accepts all packets from a specific IP address:

# 192.168.1.34 in example below
-A INPUT -s 192.168.1.34 -j ACCEPT
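Because the mountd, status, rquotad, and nlockmgr ports vary, the ACCEPT rules can also be derived from rpcinfo -p output instead of hard-coding them. A sketch, with sample rpcinfo output embedded as a string; on a live host, pipe rpcinfo -p localhost directly:

```shell
# Sample `rpcinfo -p localhost` output; substitute the real command on a live host.
rpcinfo_sample='   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp    918  mountd
    100005    1   tcp    921  mountd
    100003    2   tcp   2049  nfs'

# Emit one ACCEPT rule per unique protocol/port pair
# (skip the header line, then read proto from field 3 and port from field 4).
rules=$(echo "$rpcinfo_sample" \
  | awk 'NR > 1 { printf "-A INPUT -p %s -m %s --dport %s -j ACCEPT\n", $3, $3, $4 }' \
  | sort -u)
echo "$rules"
```

The generated lines can then be pasted into the rule file described in step 2 above.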
