Linux NFS
The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems. It allows a client computer to “mount” network folders from a server so that the resulting mount appears and behaves like a local file system on the client. NFS builds on the Open Network Computing Remote Procedure Call (ONC RPC) system and is an open standard defined in RFCs, which allows anyone to implement it.
RHEL 8
Install
sudo yum install -y nfs-utils
sudo systemctl start rpcbind   # start required services
showmount -e SERVER_IP
sudo mount -t nfs SERVER_IP:/server/dir /client/dir
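For completeness, a minimal server-side counterpart on RHEL (a sketch; /server/dir, the client network, and the firewall services are assumptions, not from the original):

sudo systemctl enable --now nfs-server
echo '/server/dir 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -rav   # publish the export
sudo firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd} && sudo firewall-cmd --reload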
Create a collaborative directory with the set-GID bit: new files inherit the parent directory's group; otherwise they would get the creating user's primary group.
The setgid bit affects both files and directories. When set on an executable file, the program runs with the privileges of the file's group instead of the group of the user who executed it.
When the bit is set on a directory, files created in that directory get the same group as the parent directory, not the primary group of the user who created them. This is useful for file sharing, since the files can then be modified by every user who is part of the parent directory's group.
mkdir /new/dir
chmod g+s /new/dir    # set the setgid bit on the directory
touch /new/dir/file   # the new file inherits the directory's group
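To verify the inheritance, a minimal check (assuming a group named devs exists; the name is hypothetical):

sudo chgrp devs /new/dir
sudo chmod 2775 /new/dir        # the leading 2 is the setgid bit, same effect as chmod g+s
touch /new/dir/report.txt
ls -ld /new/dir                 # drwxrwsr-x: the lowercase "s" confirms setgid is set
ls -l /new/dir/report.txt       # group is devs, not the creator's primary group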
Virtual Data Optimizer (VDO), introduced in RHEL 7.5, is a transparent compression/deduplication layer. Use cases like running multiple similar VMs on a single VDO volume let VDO shine, since the VM disk images share most of their blocks.
yum install vdo

# Create a VDO volume
vdo create --name=vdo_vol --device=/dev/devName --vdoLogicalSize=vol_size
# --vdoLogicalSize=50G -> the size we want to present to the operating system
# e.g. a 50G disk can be presented as 100G because deduplication optimization is enabled
vdostats --human-readable
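To actually use the volume, a file system goes on top; a sketch, with device and mount-point names following the example above (VDO volumes appear under /dev/mapper):

mkfs.xfs -K /dev/mapper/vdo_vol    # -K skips the initial discard pass, which is slow on VDO
mkdir -p /mnt/vdo_vol
mount /dev/mapper/vdo_vol /mnt/vdo_vol
vdostats --human-readable          # watch the space savings as data is written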
Ubuntu NFS version 4
- Single server scenario
In this example we will install an NFS server on Ubuntu 14.04 LTS and mount its exported file system on another Ubuntu host. This is the most common scenario: a single server allows one or more individual clients or networks to access one or more folders that can be mounted locally.
NFS server
One of the requirements is that the NFS server MUST have at least one static IP address that we can bind the NFS service to. Next, make sure the hostname (short and fully qualified) exists as an entry in your local hosts file.
vi /etc/hosts
# add the line below, so the static IP has short and FQDN names
10.0.0.100 nfs-server nfs-server.example.com

hostname -f   # verify FQDN
Install packages
- nfs-common - common NFS client library
- nfs-kernel-server - NFS server daemon/service
- rpcbind - tells other networked machines at what location to find a service
sudo apt-get install nfs-common nfs-kernel-server rpcbind
Create the default rpcbind config file, to explicitly call out that we are not passing any options to the daemon
vi /etc/default/rpcbind
# create a file with only the line below
OPTIONS=""
Allow other hosts on the network to contact our server. Here all hosts on the 10.0.0.0/24 network can use the portmap service and, in turn, the NFS shares.
vi /etc/hosts.allow
portmap: 10.0.0.
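The allow rule is usually paired with a default deny in /etc/hosts.deny; this is a common TCP wrappers convention rather than something this setup strictly requires:

vi /etc/hosts.deny
# deny portmap access to everyone not matched in hosts.allow
portmap: ALL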
Enable idmapd; this is required for NFSv4
vi /etc/default/nfs-common
# add the line below
NEED_IDMAPD=YES
Configure idmapd; the file contains the user mapping and can usually be left as it is
vi /etc/idmapd.conf
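The key setting inside is the NFSv4 domain, which must be identical on the server and all clients for user-name mapping to work (example.com is a placeholder):

[General]
Domain = example.com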
Ubuntu 18, quick setup
NFS will translate any root operations on the client to the nobody:nogroup credentials as a security measure. Therefore, we need to change the directory ownership to match those credentials.
export DEBIAN_FRONTEND=noninteractive
sudo apt-get update && sudo apt-get install -y nfs-kernel-server
# sudo useradd -m -d /home/nas nas
# sudo groupadd -f -g 1001 nas
# sudo usermod -a -G nas nas
# sudo mkdir -p /nas && sudo chown nas:nas /nas
sudo mkdir -p /nas
sudo mkfs -t ext4 /dev/nvme0n1
sudo cp /etc/fstab /etc/fstab.bak
echo '/dev/nvme0n1 /nas ext4 defaults,nofail 0 2' | sudo tee --append /etc/fstab >/dev/null
sudo mount -a                    # mount the new volume before changing ownership
sudo chown nobody:nogroup /nas   # match the credentials NFS squashes root to
# sudo chmod 0777 /nas
echo '/nas *(rw,sync,no_subtree_check,all_squash)' | sudo tee --append /etc/exports >/dev/null
sudo systemctl restart nfs-server.service
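A quick sanity check after the restart:

sudo exportfs -v         # list active exports and their effective options
showmount -e localhost   # confirm /nas is advertised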
NFS Server - export configuration
Create NFS general share directory
mkdir /exports
Make the directory available by adding it to /etc/exports, the access control list for file systems which may be exported to NFS clients
vi /etc/exports
# add the last line
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/exports 10.0.0.0/255.255.255.0(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)
This simple file does a number of things. It defines the base directory of our share (in our case the previously created /exports), gives read and write access to anyone on our allowed client network, gives remote root/admin users full control over local root-owned files (no_root_squash), skips checking that each requested file actually still lives inside the exported subtree (no_subtree_check), lets file systems mounted below the exported folder be seen as subfolders (crossmnt), and marks the directory as the root of the NFSv4 pseudo file system (fsid=0), which is why clients later mount 10.0.0.100:/ rather than 10.0.0.100:/exports.
Flags:
- rw: This option gives the client computer both read and write access to the volume.
- sync: This option forces NFS to write changes to disk before replying. This results in a more stable and consistent environment since the reply reflects the actual state of the remote volume. However, it also reduces the speed of file operations.
- no_subtree_check: This option disables subtree checking, a process where for every request the host must verify that the file is actually still inside the exported tree. Subtree checking causes problems when a file is renamed while the client has it open; in almost all cases, it is better to disable it.
- no_root_squash: By default, NFS translates requests from a remote root user into a non-privileged user on the server. This is intended as a security feature to prevent a root account on the client from using the file system of the host as root. no_root_squash disables this behavior for certain shares.
- all_squash: tells NFS to ignore the actual UID/GID of every connecting user and treat them all as anonuid/anongid (nobody:nogroup unless overridden); see the sketch after this list. Beware of combining it with anonuid=0,anongid=0: that maps every remote user to root, effectively bypassing all security on the export and leaving it wide open to abuse from any allowed client address.
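A minimal way to watch squashing happen, using the all_squash export from the Ubuntu 18 quick setup (the mount point is assumed):

# on a client that has the export mounted at /mnt/share, as root:
sudo touch /mnt/share/test-file
ls -l /mnt/share/test-file   # owner appears as nobody:nogroup because root was squashed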
Start the NFS daemon and verify rpcbind status
sudo service nfs-kernel-server start
sudo service rpcbind status

rpcinfo -p                     # check that NFS is running on the server, run on nfs-server
rpcinfo -p <nfsServerIp|DNS>   # run from nfs-client

exportfs -a     # apply exports changes, each time the exports config file gets changed
exportfs -rav   # re-export the shares
Server operations
# List all exports
[root@nfs-server-5bqrf /]$ showmount -e localhost
Export list for localhost:
/        *
/exports *

# Show supported protocol versions
[root@nfs-server-5bqrf /]$ rpcinfo -s localhost
   program version(s) netid(s)                  service    owner
    100000 2,3,4      local,udp,tcp,udp6,tcp6   portmapper superuser
    100005 3          tcp6,udp6,tcp,udp         mountd     superuser
    100003 4,3        udp6,tcp6,udp,tcp         nfs        superuser   # <- NFS
    100227 3          udp6,tcp6,udp,tcp         nfs_acl    superuser
    100021 4,3,1      tcp6,udp6,tcp,udp         nlockmgr   superuser
    100024 1          tcp6,udp6,tcp,udp         status     29
Configure Client
Install packages
sudo apt-get install nfs-common rpcbind
Mount NFS remote export
sudo mkdir /mnt/share
sudo mount.nfs4 10.0.0.100:/ /mnt/share
Verify mount
mount | grep nfs
10.0.0.100:/ on /mnt/share type nfs4 (rw,addr=10.0.0.100,clientaddr=10.0.0.11)

ls /mnt/share   # should show the NFS server files within the exported directory
Make the export permanent in fstab
vi /etc/fstab
# add the line below
10.0.0.100:/ /mnt/share nfs4 rw 0 0
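On systemd-based releases, a slightly more robust variant waits for the network and mounts lazily on first access (these options are a suggestion, not part of the original setup):

10.0.0.100:/ /mnt/share nfs4 rw,_netdev,x-systemd.automount 0 0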
Troubleshooting
Packet capture, run on the nfs-server
tshark -tad -nr client.pcap -Y 'frame.number == 500' -O rpc | sed '/^Re/,$ !d'   # keep only the 'Reply State/Auth State/client auth' part of the decode
tshark -tad -nr client.pcap -Y 'nfs.status != 0'   # show only NFS replies with a non-zero (error) status
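Beyond packet captures, a few quick server-side checks (a general-purpose sketch, not tied to the capture above):

nfsstat -s                 # server-side per-operation NFS counters
rpcinfo -p localhost       # confirm nfs, mountd and portmapper are registered
journalctl -u nfs-server   # recent NFS server log entries (systemd systems)
dmesg | grep -i nfs        # kernel-level NFS messages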