Set Up NFS Server with Proper Permissions and Security

When NFS quietly becomes a security problem

In most enterprises, NFS doesn’t arrive as a “project.” It arrives as a convenience. A team needs a shared directory for build artifacts. Another team needs a place for logs. A third team needs a shared dataset for analytics. At first, it’s one export, one subnet, one server. Then the environment grows: more clients, more VLANs, more automation, more service accounts, more compliance questions. And suddenly the same thing that made NFS useful—simple shared access—becomes the thing that can quietly widen risk if we don’t control it.

Production-grade NFS is not about making it work. It’s about making it predictable: controlled permissions, explicit client scope, hardened defaults, auditable changes, and verification at every step. That’s what we are going to build here on Linux, with enterprise expectations in mind.

Prerequisites and assumptions we are building on

Before we touch a command, we need to be explicit about the environment assumptions. NFS is sensitive to identity, network boundaries, and stateful services, so “almost correct” tends to fail in confusing ways.

  • Server OS: A modern Linux distribution with systemd. The commands below are written for Debian/Ubuntu-style package management (apt). If we are on RHEL/Rocky/Alma, the service concepts are identical, but package names and firewall tooling differ.
  • Clean baseline: We assume the server is not already exporting the same paths, and there is no conflicting NFS configuration. If NFS was previously configured, we should review and remove stale exports before proceeding.
  • Network design: We assume NFS clients are on private networks (VLANs/subnets) and that we can define exactly which subnets are allowed. NFS should not be exposed to the public internet.
  • Identity model: NFS permissions are enforced by numeric UID/GID. In enterprises, we typically rely on centralized identity (LDAP/SSSD/AD) or consistent local UID/GID mapping. If UID/GID differs between server and clients, permissions will look “random.”
  • Time sync: We assume NTP/chrony is in place. Kerberos-based NFS (not covered here) requires tight time sync, and even without Kerberos, consistent time helps with auditing.
  • Firewall control: We assume we can manage host firewall rules (UFW or nftables/iptables) and that upstream network ACLs exist. We will still harden at the host level.
  • Storage: We assume the export path is on a stable filesystem (local disk, RAID, SAN LUN, etc.) and that we understand its performance and backup model. NFS will faithfully share whatever storage behavior we give it.
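The identity assumption above is worth seeing concretely: NFS carries numeric UID/GID values on the wire, and names are resolved locally on whichever host displays them. A quick, root-free sketch of the difference between the name view and the numeric view of a file:

```shell
# NFS enforces access by numeric UID/GID; user and group *names* are
# resolved locally on each host. Compare both views of a scratch file:
f=$(mktemp)
by_name=$(ls -l "$f" | awk '{print $3 ":" $4}')   # e.g. alice:alice
by_id=$(ls -n "$f" | awk '{print $3 ":" $4}')     # e.g. 1000:1000
echo "names=$by_name numeric=$by_id"
rm -f "$f"
```

If the numeric pair differs between server and client for the "same" username, NFS permissions will look inconsistent even though the names match on both sides.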

Step 1: Confirm the server identity and network boundaries

Before configuring NFS, we need to know which interface and IP address the server uses for client traffic. This matters for firewall rules, for troubleshooting, and for ensuring we are not accidentally serving on an unintended interface.

hostnamectl
ip -br addr
ip route

We have now confirmed the server hostname, active interfaces, assigned IPs, and default routing. This gives us the facts we need to scope firewall rules and to validate that clients will reach the correct address.

Define the client subnets as shell variables

In production, we should explicitly list the client subnets that are allowed to mount. We will store them as variables so the commands remain copy/paste-safe and consistent.

CLIENT_NET_1="10.10.0.0/16"
CLIENT_NET_2="10.20.0.0/16"

We have now defined two allowed client networks. We will reference these consistently in exports and firewall rules so the scope stays intentional and reviewable.
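Because these variables feed directly into /etc/exports and firewall rules, a malformed value fails in confusing ways later. A small sanity check (a sketch using a simple regex, a typo guard rather than a full validator) catches obvious mistakes before they propagate:

```shell
# Rough CIDR shape check: four dotted octets plus a /0-32 prefix length.
# (It does not reject octets > 255; it is a typo guard, not a parser.)
valid_cidr() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/([0-9]|[12][0-9]|3[0-2])$'
}

CLIENT_NET_1="10.10.0.0/16"
CLIENT_NET_2="10.20.0.0/16"
for net in "$CLIENT_NET_1" "$CLIENT_NET_2"; do
  valid_cidr "$net" && echo "ok: $net" || echo "MALFORMED: $net"
done
```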

Step 2: Install NFS server components

Next we install the NFS server packages. This provides the kernel NFS server support and the user-space tooling that manages exports and RPC services.

sudo apt-get update
sudo apt-get install -y nfs-kernel-server

NFS server components are now installed. The systemd unit files and default configuration directories are present, and the server is capable of exporting directories once we define exports.

Verify the NFS service state

Before we configure anything, we confirm the service is present and manageable under systemd. This avoids chasing configuration issues when the service is not running or not installed correctly.

systemctl status nfs-server --no-pager || systemctl status nfs-kernel-server --no-pager

We have now verified the service status. Depending on distribution naming, we will see either nfs-server or nfs-kernel-server. If it is inactive, that is fine for now; it will become active once exports are applied and the service is started.

Step 3: Create a controlled export directory with predictable ownership

Now we create the directory we will export. The key here is that NFS permissions are not “NFS permissions”—they are standard Linux filesystem permissions enforced using UID/GID. So we must decide who owns the data and how clients will access it.

For enterprise environments, a common pattern is to create a dedicated group for shared access and to use the setgid bit so new files inherit the group. This keeps collaboration predictable without relying on fragile client-side umask behavior.
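Before applying this to the real export, the setgid behavior can be sketched safely in a throwaway directory, with no root and no NFS involved:

```shell
# Demonstrate setgid propagation: subdirectories created inside a
# 2770 directory inherit the setgid bit (and the directory's group).
umask 007                      # so new directories come out group-writable
demo=$(mktemp -d)
chmod 2770 "$demo"
mkdir "$demo/team-a"
mode=$(stat -c '%a' "$demo/team-a")
echo "new subdirectory mode=$mode"   # leading 2 = inherited setgid
rm -rf "$demo"
```

The same inheritance applies on the exported path, which is what keeps group ownership consistent as teams create nested directories.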

Create a shared group and a service account

We will create a group for NFS-shared access and a non-login service account that owns the directory. This makes ownership explicit and auditable.

sudo groupadd --system nfs_shared 2>/dev/null || true
sudo useradd --system --no-create-home --shell /usr/sbin/nologin --gid nfs_shared nfs_svc 2>/dev/null || true

We now have a system group (nfs_shared) and a system user (nfs_svc) that we can use to own exported directories. If they already existed, the commands safely continued without breaking the run.

Create the export path and apply permissions

We will create the export directory, assign ownership to our service account and group, and set permissions that support group collaboration. We will also set the setgid bit so new files inherit the group automatically.

sudo mkdir -p /srv/nfs/projects
sudo chown -R nfs_svc:nfs_shared /srv/nfs/projects
sudo chmod 2770 /srv/nfs/projects

The directory /srv/nfs/projects now exists, is owned by nfs_svc:nfs_shared, and has permissions 2770. The leading 2 sets the setgid bit, which helps keep group ownership consistent as teams create files and directories.

Verify filesystem permissions

We confirm ownership and mode so we know the server-side filesystem is correct before we involve NFS exports.

stat -c "path=%n owner=%U group=%G mode=%a" /srv/nfs/projects
getent passwd nfs_svc
getent group nfs_shared

We have now verified the directory’s owner/group/mode and confirmed that the service account and group exist in the local identity database.

Step 4: Configure NFS exports with explicit scope and safe defaults

This is where production-grade discipline matters. We will define exports in /etc/exports with explicit client networks and conservative options. The goal is to reduce surprise: clients can only mount what we intend, from where we intend, with behavior that aligns with enterprise change control.

We will export using NFSv4 style (single export root) to simplify client mounts and reduce legacy complexity. We will also avoid risky patterns that broaden access beyond what we can justify.

Back up the current exports file

Before changing exports, we take a timestamped backup. This makes rollback straightforward during incident response or change windows.

sudo cp -a /etc/exports /etc/exports.bak.$(date +%F_%H%M%S) 2>/dev/null || true

We have now preserved the previous exports configuration (if it existed). This reduces risk during iterative hardening.

Write a complete, controlled /etc/exports

We will define an NFSv4 export root at /srv/nfs and a specific export for /srv/nfs/projects. We will restrict access to the client subnets we defined earlier. We will also use options that support stable behavior:

  • rw: clients can write (adjust to ro for read-only datasets).
  • sync: server replies after data is committed to stable storage, improving integrity.
  • no_subtree_check: reduces subtree-related edge cases when exporting directories.
  • root_squash: prevents client root from becoming server root on the export.
  • fsid=0: defines the NFSv4 export root.

We will generate the file using a heredoc so it is copy/paste-safe and complete.

sudo tee /etc/exports >/dev/null <<EOF
# Production-grade NFSv4 exports
# Explicitly scoped to approved client networks only.

# NFSv4 export root
/srv/nfs ${CLIENT_NET_1}(ro,fsid=0,sync,no_subtree_check,root_squash) ${CLIENT_NET_2}(ro,fsid=0,sync,no_subtree_check,root_squash)

# Project share (mounted by clients as: server:/projects)
/srv/nfs/projects ${CLIENT_NET_1}(rw,sync,no_subtree_check,root_squash) ${CLIENT_NET_2}(rw,sync,no_subtree_check,root_squash)
EOF

We have now replaced /etc/exports with a controlled configuration that exports an NFSv4 root and a specific share. Access is limited to the two defined client networks, and root privileges from clients are squashed to reduce impact if a client is compromised.
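One defect worth linting for explicitly: a stray space between the client spec and the option list (`host (rw)` instead of `host(rw)`) exports to that host with default options and to the entire world with the parenthesized ones. A quick check (a sketch, not part of nfs-utils) that can be run against any exports file after editing:

```shell
# Flag "client (options)" lines, where the space silently broadens access.
lint_exports() {
  grep -En '^[^#]*[[:space:]]\(' "$1" || echo "OK: no stray spaces before options"
}

# Demonstrate on a deliberately broken sample:
sample=$(mktemp)
printf '/srv/nfs/projects 10.10.0.0/16 (rw,sync)\n' > "$sample"
finding=$(lint_exports "$sample")
echo "$finding"
rm -f "$sample"
```

In practice we would run `lint_exports /etc/exports` as part of the change review.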

Apply exports and verify what the kernel is serving

Next we apply the exports and verify the active export table. This ensures the configuration is syntactically correct and actually loaded.

sudo exportfs -ra
sudo exportfs -v

The exports have now been reloaded. The verbose export listing shows exactly which paths are exported, to which client networks, and with which options. This is the first place we catch mistakes before clients start mounting.

Step 5: Lock NFS to v4 and reduce legacy exposure

In enterprise environments, we want to reduce the number of moving parts. NFSv4 consolidates behavior and typically reduces reliance on older RPC patterns. We will configure the server to serve NFSv4 and disable older versions where possible.

Configure NFS server protocol versions

We will enable NFSv4 and disable NFSv2/v3. This reduces legacy protocol exposure and simplifies firewalling. Rather than overwriting /etc/nfs.conf wholesale, which would discard packaged defaults (including its includedir directive), we write a drop-in file under /etc/nfs.conf.d/, which modern nfs-utils reads automatically. On recent nfs-utils releases, NFSv2 support has been removed entirely, so the vers2 line is simply a no-op there.

sudo mkdir -p /etc/nfs.conf.d
sudo tee /etc/nfs.conf.d/90-versions.conf >/dev/null <<EOF
[nfsd]
vers2 = n
vers3 = n
vers4 = y
vers4.0 = y
vers4.1 = y
vers4.2 = y
EOF

We have now explicitly configured the NFS daemon to serve only NFSv4.x. This reduces the attack surface and avoids accidental fallback to older protocol versions.

Restart services and verify listeners

Now we restart NFS services so the protocol settings and exports are applied. Then we verify which ports are listening. For NFSv4, we expect TCP 2049 to be the primary service port.

sudo systemctl restart nfs-server || sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-server --no-pager || sudo systemctl status nfs-kernel-server --no-pager
sudo ss -lntp | awk 'NR==1 || /:2049 /'

The NFS service has now been restarted with the new protocol configuration. The socket check confirms whether port 2049 is listening, which is the key indicator that NFSv4 is available.
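While nfsd is loaded, the kernel also reports its protocol versions in /proc/fs/nfsd/versions as a line of +/- tokens such as `-2 -3 +4 +4.1 +4.2`. A small parser (a sketch; the file only exists while the NFS server module is active) makes that verification machine-checkable:

```shell
# Reduce the +/- token line to a comma-separated list of enabled versions.
enabled_versions() {
  echo "$1" | tr ' ' '\n' | grep '^+' | tr -d '+' | paste -sd, -
}

# On the server we would run:
#   enabled_versions "$(cat /proc/fs/nfsd/versions)"
enabled_versions "-2 -3 +4 +4.1 +4.2"
```

Seeing only 4.x entries confirms that legacy protocol fallback is actually disabled, not just configured.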

Step 6: Harden host firewall rules for NFS

Even when upstream firewalls exist, host-level firewalling is a practical control: it prevents accidental exposure during network changes and provides a last line of defense. We will allow NFS only from the approved client networks.

Because Linux environments vary, we will provide UFW steps (common on Ubuntu) and an nftables approach (common in modern enterprise builds). We should use one, not both.

Option A: UFW (Ubuntu/Debian environments that use UFW)

We will first confirm whether UFW is installed and active. Then we will allow TCP 2049 only from the approved client networks and deny everything else by default.

sudo apt-get install -y ufw
sudo ufw status verbose

We have now ensured UFW is present and we have visibility into its current state. If it is inactive, we will enable it after adding rules to avoid locking ourselves out unexpectedly.

Next we set a conservative default policy and add explicit allow rules for NFS from the client networks.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from "${CLIENT_NET_1}" to any port 2049 proto tcp
sudo ufw allow from "${CLIENT_NET_2}" to any port 2049 proto tcp
sudo ufw enable
sudo ufw status verbose

UFW is now enforcing a deny-by-default inbound policy, with explicit allowances for NFSv4 (TCP 2049) from the approved client networks only. The status output confirms the active rules.

Option B: nftables (enterprise environments standardizing on nft)

If we are using nftables, we will create a minimal ruleset that allows established traffic, SSH (so we keep administrative access), and NFS from the approved client networks. We will also ensure the rules persist across reboots.

Before applying a firewall ruleset, we should confirm the primary interface name so we can reason about traffic flow. We will detect it from the default route.

EXT_IFACE=$(ip route show default | awk '{print $5; exit}')
echo "EXT_IFACE=${EXT_IFACE}"

We have now captured the primary interface used for default routing. This helps us validate that firewall rules are applied on the expected traffic path.

Now we will install nftables (if needed), write a complete ruleset, enable it, and verify the active rules.

sudo apt-get install -y nftables

sudo tee /etc/nftables.conf >/dev/null <<EOF
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
  chain input {
    type filter hook input priority 0;
    policy drop;

    iif "lo" accept
    ct state established,related accept

    # Keep administrative access (adjust source restrictions per enterprise policy)
    tcp dport 22 accept

    # NFSv4 only, restricted to approved client networks
    ip saddr ${CLIENT_NET_1} tcp dport 2049 accept
    ip saddr ${CLIENT_NET_2} tcp dport 2049 accept

    # Optional: allow ICMP/ICMPv6 for diagnostics
    meta l4proto { icmp, ipv6-icmp } accept
  }

  chain forward {
    type filter hook forward priority 0;
    policy drop;
  }

  chain output {
    type filter hook output priority 0;
    policy accept;
  }
}
EOF

sudo systemctl enable --now nftables
sudo systemctl status nftables --no-pager
sudo nft list ruleset

nftables is now enforcing a drop-by-default inbound policy, allowing only loopback, established traffic, SSH, and NFSv4 from the approved client networks. The ruleset listing confirms exactly what is permitted.

Step 7: Validate exports from the server side

Before involving clients, we validate locally that the server believes it is exporting what we intended. This reduces the troubleshooting surface area.

sudo exportfs -v
sudo showmount -e localhost 2>/dev/null || true

The export list confirms the active exports. On some hardened NFSv4-only configurations, showmount may not provide meaningful output because it is more aligned with older RPC-based discovery; that is expected. The authoritative view remains exportfs -v.

Step 8: Client-side mount approach and verification pattern

In enterprises, we want mounts to be explicit, repeatable, and resilient across reboots. Even if client configuration is managed by automation (Ansible, Puppet, SCCM for Linux, etc.), the underlying pattern remains the same: verify reachability, mount explicitly, then persist via /etc/fstab with safe options.

Confirm server reachability from a client network

From a client host on an approved subnet, we first confirm that TCP 2049 is reachable. This isolates network/firewall issues before we touch mount configuration.

SERVER_IP="10.10.1.10"
nc -vz "${SERVER_IP}" 2049

If the connection succeeds, the network path and firewall rules are aligned for NFSv4. If it fails, we should focus on routing, security groups/ACLs, and host firewall rules before changing NFS configuration.

Mount the NFSv4 export explicitly

Now we mount the export using NFSv4. We will create a mount point, mount the share, and verify it is mounted with the expected filesystem type.

sudo mkdir -p /mnt/projects
sudo mount -t nfs4 -o rw,hard,timeo=600,retrans=2 "${SERVER_IP}:/projects" /mnt/projects
mount | grep -E ' /mnt/projects ' || true
df -hT /mnt/projects

The share is now mounted at /mnt/projects. The mount and filesystem checks confirm that the client sees an NFSv4 mount and that the path is accessible.

Persist the mount across reboots

To make the mount persistent, we add an /etc/fstab entry with options that behave well in production. We will use _netdev so the mount waits for networking, and we will keep the mount “hard” so applications do not silently corrupt workflows during transient network issues.

grep -qE '^[^#].*[[:space:]]/mnt/projects[[:space:]]' /etc/fstab || echo "${SERVER_IP}:/projects /mnt/projects nfs4 rw,hard,timeo=600,retrans=2,_netdev 0 0" | sudo tee -a /etc/fstab >/dev/null
sudo umount /mnt/projects
sudo mount -a
mount | grep -E ' /mnt/projects ' || true

The mount is now defined in /etc/fstab and has been validated by unmounting and remounting via mount -a. This confirms it will survive reboots and standard mount workflows.
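Because a malformed fstab line can stall boot or mount with the wrong semantics, it is worth asserting the shape of the entry as well. A small check (a sketch; field positions follow fstab(5)) that the line uses nfs4, has all six fields, and keeps `_netdev`:

```shell
# Validate one fstab entry: type nfs4, six fields, _netdev in options.
check_fstab_nfs4() {
  echo "$1" | awk '
    $3 == "nfs4" && NF == 6 && $4 ~ /_netdev/ { print "ok"; found = 1 }
    END { if (!found) print "bad" }'
}

line='10.10.1.10:/projects /mnt/projects nfs4 rw,hard,timeo=600,retrans=2,_netdev 0 0'
check_fstab_nfs4 "$line"
```

In practice we would feed it the real entry, e.g. `check_fstab_nfs4 "$(grep /mnt/projects /etc/fstab)"`.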

Troubleshooting: symptoms, causes, and fixes

Symptom: Mount hangs or times out

  • Likely causes: Firewall blocking TCP 2049, routing issues between VLANs, NFS service not listening, or upstream ACLs blocking traffic.
  • Fix: Verify listener and firewall on the server, then verify reachability from the client.
# On the server
sudo ss -lntp | awk 'NR==1 || /:2049 /'
sudo systemctl status nfs-server --no-pager || sudo systemctl status nfs-kernel-server --no-pager
sudo exportfs -v

# On the client
nc -vz "${SERVER_IP}" 2049

These checks confirm whether NFS is listening, whether exports are loaded, and whether the network path is open. If nc fails, the issue is network/firewall, not mount syntax.

Symptom: “Permission denied” when accessing files

  • Likely causes: UID/GID mismatch between client and server, filesystem permissions too restrictive, or the client is not in an allowed subnet.
  • Fix: Confirm export scope, then confirm UID/GID mapping and directory permissions.
# On the server
sudo exportfs -v
stat -c "path=%n owner=%U group=%G mode=%a" /srv/nfs/projects

# Check numeric IDs (critical for NFS)
id nfs_svc
getent group nfs_shared

If the client user’s UID/GID does not match what the server expects, access will fail even if names look the same. Align identity via centralized directory services or consistent local UID/GID allocation.
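A compact way to compare identities across hosts is to reduce each account to a `name:uid:gid` tuple and diff the output gathered from server and client:

```shell
# Emit "name:uid:gid" for an account, suitable for diffing between hosts.
id_tuple() {
  getent passwd "$1" | awk -F: '{printf "%s:%s:%s\n", $1, $3, $4}'
}

# Run the same command on both server and client, then compare the lines.
id_tuple root    # root is uid 0 / gid 0 on any standard Linux build
```

Any divergence in the numeric fields for the same name is the mismatch that makes NFS permissions look "random."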

Symptom: “Stale file handle” errors

  • Likely causes: Export path moved/renamed, filesystem remounted, or underlying storage changed while clients were mounted.
  • Fix: Avoid moving exported directories. If it happened, unmount and remount on clients after restoring stable paths.
# On the client
sudo umount -f /mnt/projects 2>/dev/null || true
sudo mount -a

Unmounting and remounting refreshes file handles. The long-term fix is operational discipline: exported paths should be treated as stable interfaces.

Symptom: Service is running but exports are not visible

  • Likely causes: Syntax error in /etc/exports, exports not reloaded, or export root misconfigured for NFSv4.
  • Fix: Re-apply exports and check for parsing errors.
sudo exportfs -ra
sudo exportfs -v
sudo journalctl -u nfs-server -n 100 --no-pager 2>/dev/null || sudo journalctl -u nfs-kernel-server -n 100 --no-pager

Reloading exports and reviewing logs typically reveals syntax issues or path problems. The verbose export list is the authoritative view of what the server is actually serving.

Common mistakes

Mistake: Clients can mount, but writes fail unexpectedly

Symptom: Mount succeeds, reads work, but writes fail with “Permission denied” or files are created with odd ownership.

Fix: Confirm server-side directory mode and group inheritance, then confirm UID/GID alignment.

# On the server
stat -c "path=%n owner=%U group=%G mode=%a" /srv/nfs/projects
sudo chmod 2770 /srv/nfs/projects

This ensures group inheritance is enforced at the directory level. If ownership still looks wrong, the root cause is usually UID/GID mismatch across systems.

Mistake: Firewall rules allow NFS from everywhere

Symptom: Security review flags the server because TCP 2049 is open broadly, or unexpected clients can connect.

Fix: Restrict firewall rules to approved client subnets and verify with a port scan from an unapproved network segment.

# Server-side quick check
sudo ss -lntp | awk 'NR==1 || /:2049 /'

# If using UFW
sudo ufw status verbose

# If using nftables
sudo nft list ruleset | sed -n '1,200p'

These checks confirm whether access is scoped. In production, we should treat “open to all” as a defect, not a convenience.
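When a security review asks "could host X reach this export?", it also helps to test an address against the approved CIDRs directly. A pure-shell membership check (a sketch, IPv4 only):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  set -- $(printf '%s' "$1" | tr '.' ' ')
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if IPv4 address $2 falls inside CIDR $1.
in_cidr() {
  net=${1%/*}; bits=${1#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$2") & mask )) -eq $(( $(ip2int "$net") & mask )) ]
}

in_cidr "10.10.0.0/16" "10.10.5.7" && echo "10.10.5.7 is in scope"
in_cidr "10.10.0.0/16" "10.30.1.1" || echo "10.30.1.1 is out of scope"
```

This turns "is that client supposed to mount us?" into a yes/no answer against the same CIDRs used in the exports and firewall rules.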

Mistake: Changes made but not applied

Symptom: /etc/exports looks correct, but clients still see old behavior.

Fix: Reload exports and restart NFS services, then verify the active export table.

sudo exportfs -ra
sudo systemctl restart nfs-server || sudo systemctl restart nfs-kernel-server
sudo exportfs -v

This forces the kernel export table and the service state to match the configuration on disk.

How we at NIILAA look at this

This setup is not impressive because it is complex. It is impressive because it is controlled. Every component is intentional. Every configuration has a reason. This is how infrastructure should scale — quietly, predictably, and without drama.

At NIILAA, we help organizations design, deploy, secure, and maintain production-grade NFS and broader file services across Linux estates—aligning identity, network boundaries, hardening standards, and operational runbooks so shared storage stays reliable under growth, audits, and real incident pressure.

Website: https://www.niilaa.com
Email: [email protected]
LinkedIn: https://www.linkedin.com/company/niilaa
Facebook: https://www.facebook.com/niilaa.llc
