Securing Linux Servers Against Brute-Force Attacks

Why brute-force becomes a “slow burn” problem on Linux servers

In the beginning, a Linux server is quiet. We deploy it for a single app, a small team, maybe a staging environment. SSH is open because we need access, sudo is enabled because we need to administer, and PAM is doing its job in the background without much attention.

Then time passes. The server gets a second service. A third admin. A CI runner. A vendor account. A new subnet. Logs grow. Access patterns change. And somewhere in that noise, brute-force attempts start showing up—first as a few failed SSH logins, then as steady background radiation, and eventually as a real operational risk: account lockouts, noisy auth logs, elevated CPU from repeated handshakes, and the uncomfortable question of whether one weak credential will eventually slip through.

We are going to defend Linux servers against brute-force attacks in a way that holds up in real environments: controlled SSH exposure, hardened sudo behavior, PAM-based protections, and automated banning of abusive sources—without relying on cloud-only tools. The goal is not “more security knobs.” The goal is predictable, testable control.

Prerequisites and assumptions

Before we touch configuration, we need to be explicit about the environment we are securing. These assumptions keep the steps copy/paste-safe and reduce the chance of locking ourselves out.

  • Platform: Linux. The steps below target modern systemd-based distributions. We will provide separate sections for Debian/Ubuntu-family and RHEL-family (RHEL, Rocky, Alma, CentOS Stream).
  • Access: We have console access (hypervisor console, iDRAC/iLO, or physical) or an out-of-band method. This matters because SSH hardening can lock us out if we make a mistake.
  • Privileges: We can run commands as root. If we normally use sudo, we will use sudo -i to get a root shell for consistency.
  • SSH already installed: OpenSSH server is installed and running. If it is not, we will install it in the OS-specific section.
  • Firewall: We will use the host firewall (nftables/iptables via UFW or firewalld). If there is an upstream firewall, we still keep host-level controls because they provide defense-in-depth and local visibility.
  • Change control: We will back up configuration files before editing. We will validate syntax before restarting services.
  • Authentication model: We will prefer SSH keys for interactive admin access. Password authentication will be disabled where feasible. If an environment requires passwords (legacy automation), we will still enforce rate limits and lockouts.

We will start by collecting a small baseline so we can verify improvements and troubleshoot quickly.

Baseline: confirm OS, SSH, and current exposure

We are going to capture OS identity, confirm SSH is listening, and record the current firewall state. This gives us a “before” snapshot and helps us avoid guessing later.

sudo -i

set -eu

cat /etc/os-release

systemctl status ssh 2>/dev/null || systemctl status sshd 2>/dev/null || true

ss -tulpn | awk 'NR==1 || /:22[[:space:]]/ {print}'

(ufw status verbose 2>/dev/null || true)
(firewall-cmd --state 2>/dev/null || true)

We now have the OS family, whether the SSH service is named ssh or sshd, confirmation that port 22 is listening (or not), and which firewall tool is active (if any). We will use this information to apply the correct steps without assumptions.
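When we manage more than a handful of hosts, the OS-family decision above can be scripted instead of eyeballed. The helper below is a sketch of our own (the function name os_family is not a standard tool); it classifies a host by reading ID and ID_LIKE from an os-release style file:

```shell
# Classify a Linux install as debian-family, rhel-family, or unknown
# by reading ID and ID_LIKE from an os-release style file.
os_family() {
  # $1: path to an os-release file (normally /etc/os-release)
  ids="$(. "$1"; echo "${ID:-} ${ID_LIKE:-}")"
  case " ${ids} " in
    *" debian "*|*" ubuntu "*) echo "debian-family" ;;
    *" rhel "*|*" fedora "*|*" centos "*) echo "rhel-family" ;;
    *) echo "unknown" ;;
  esac
}

if [ -r /etc/os-release ]; then os_family /etc/os-release; fi
```

The output tells us which of the two implementation sections below to follow on a given host.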

Strategy: what we are protecting and how

Brute-force is not a single problem; it is a pattern of pressure against authentication. So we will apply layered controls that each reduce risk in a different way:

  • Reduce the attack surface: restrict SSH exposure at the firewall and in sshd configuration.
  • Make authentication harder to guess: disable password auth where possible, disallow root SSH login, and limit who can log in.
  • Slow down and block abusive sources: use Fail2ban to ban IPs that repeatedly fail authentication.
  • Protect privilege escalation: harden sudo behavior and add PAM controls to reduce password guessing and limit repeated failures.
  • Verify and persist: ensure changes survive reboots and are observable via logs and status checks.

Implementation for Debian/Ubuntu-family systems

This section applies to Ubuntu Server and Debian on systemd. We will install required packages, harden SSH, configure the firewall, enable Fail2ban, and apply PAM protections for both SSH and sudo.

Install required packages

We are going to install OpenSSH server (if missing), Fail2ban for automated banning, and supporting tools. This is foundational: without these packages, later configuration either won’t apply or won’t persist.

sudo -i

set -eu

apt-get update

DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server fail2ban python3-systemd iptables nftables rsyslog

The system now has OpenSSH server, Fail2ban, and logging support. Even if the distribution uses nftables under the hood, Fail2ban can still enforce bans via its backend. We also ensured rsyslog is present so authentication logs are reliably written.

Harden SSH server configuration

We are going to harden sshd_config to reduce brute-force success probability and limit who can authenticate. We will back up the current configuration, apply a controlled set of changes, validate the configuration, and only then reload the service.

First, we will detect the SSH service name and the active sshd configuration path.

sudo -i

set -eu

SSH_SERVICE=""
if systemctl list-unit-files | awk '{print $1}' | grep -qx "ssh.service"; then
  SSH_SERVICE="ssh"
elif systemctl list-unit-files | awk '{print $1}' | grep -qx "sshd.service"; then
  SSH_SERVICE="sshd"
else
  echo "ERROR: Neither ssh.service nor sshd.service found."
  exit 1
fi
echo "SSH service: ${SSH_SERVICE}"

SSHD_CONFIG="/etc/ssh/sshd_config"
test -f "${SSHD_CONFIG}"
echo "sshd_config: ${SSHD_CONFIG}"

We now know which systemd unit controls SSH and confirmed the configuration file exists. Next, we will back up the file and write a hardened configuration snippet using sshd_config directives that are widely supported.

We are going to create a backup and then apply changes in-place in a safe way: we will append a managed block at the end of the file. One important detail: sshd uses the first value it reads for each keyword, so an appended directive only takes effect when that keyword is not already set earlier in the file or in a file pulled in by an Include line (such as /etc/ssh/sshd_config.d/*.conf on newer distributions). After reloading, we should confirm the effective values with sshd -T rather than trusting the file alone.

sudo -i

set -eu

SSHD_CONFIG="/etc/ssh/sshd_config"
cp -a "${SSHD_CONFIG}" "${SSHD_CONFIG}.bak.$(date +%F_%H%M%S)"

cat >> "${SSHD_CONFIG}" <<'EOF'

# --- NIILAA managed hardening block: brute-force defense ---
# Reduce remote attack surface and tighten authentication behavior.
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
UsePAM yes

# Limit authentication attempts and sessions.
MaxAuthTries 3
MaxSessions 5
LoginGraceTime 20

# Reduce information leakage.
X11Forwarding no
AllowTcpForwarding no
PermitTunnel no
PrintMotd no

# Keep connections healthy but not chatty.
ClientAliveInterval 300
ClientAliveCountMax 2

# Logging for visibility.
LogLevel VERBOSE
# --- end NIILAA managed hardening block ---
EOF

sshd -t

We backed up the SSH configuration, appended a controlled hardening block, and validated syntax with sshd -t. If validation passes, we can reload SSH safely. If validation fails, we should restore the backup before proceeding.

Now we will reload the SSH service so the new policy takes effect without dropping existing sessions.

sudo -i

set -eu

SSH_SERVICE=""
if systemctl list-unit-files | awk '{print $1}' | grep -qx "ssh.service"; then
  SSH_SERVICE="ssh"
else
  SSH_SERVICE="sshd"
fi

systemctl reload "${SSH_SERVICE}" || systemctl restart "${SSH_SERVICE}"

systemctl status "${SSH_SERVICE}" --no-pager
ss -tulpn | awk 'NR==1 || /:22[[:space:]]/ {print}'

SSH is now running with stricter authentication rules. Password-based logins are disabled, root login over SSH is disabled, and repeated guessing is limited by MaxAuthTries and LoginGraceTime. We also confirmed the service is healthy and still listening.
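Because sshd honors the first value it reads for each keyword, the file contents alone do not prove the policy is active; `sshd -T` (run as root) prints the effective configuration as lowercase "keyword value" pairs. The helper below is our own sketch (the function name check_effective and the checked subset of values are assumptions mirroring the block we appended); it reads those pairs on stdin and flags any that differ from the intended policy:

```shell
# Compare effective sshd settings (as printed by `sshd -T`, one
# lowercase "keyword value" pair per line) against the intended policy.
check_effective() {
  awk '
    BEGIN {
      want["permitrootlogin"]        = "no"
      want["passwordauthentication"] = "no"
      want["maxauthtries"]           = "3"
      bad = 0
    }
    $1 in want {
      if ($2 != want[$1]) { printf "MISMATCH: %s is %s, want %s\n", $1, $2, want[$1]; bad = 1 }
      else                { printf "OK: %s = %s\n", $1, $2 }
    }
    END { exit bad }
  '
}

# Usage on a live host (as root): sshd -T | check_effective
```

A non-zero exit means at least one directive was overridden somewhere earlier in the configuration and needs attention.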

Ensure we have at least one key-based admin path

We are going to confirm that at least one administrative account can authenticate using SSH keys. This is not optional when password authentication is disabled. We will create a dedicated admin group and restrict SSH access to that group, but only after we confirm membership and keys exist.

First, we will create an sshadmins group and identify our current sudo-capable users.

sudo -i

set -eu

getent group sshadmins >/dev/null || groupadd --system sshadmins

echo "Users in sudo group (if present):"
getent group sudo 2>/dev/null || true

echo "Users in wheel group (if present):"
getent group wheel 2>/dev/null || true

We now have a dedicated group available. Next, we will add an existing admin user to this group. Because usernames vary, we will safely list human users and then we will explicitly choose one by exporting a variable before applying changes.

We are going to print candidate usernames (UID ≥ 1000) and then set ADMIN_USER once we decide. This keeps commands copy/paste-safe while still requiring an intentional choice.

sudo -i

set -eu

awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd

echo "Set ADMIN_USER to an existing admin username before continuing."

Now we will set ADMIN_USER and add it to sshadmins. We will also verify that the user has an authorized_keys file in place.

sudo -i

set -eu

ADMIN_USER=""

if [ -z "${ADMIN_USER}" ]; then
  echo "ERROR: ADMIN_USER is empty. Edit this command block and set ADMIN_USER to a real username."
  exit 1
fi

usermod -aG sshadmins "${ADMIN_USER}"

HOME_DIR="$(getent passwd "${ADMIN_USER}" | awk -F: '{print $6}')"
test -d "${HOME_DIR}"

mkdir -p "${HOME_DIR}/.ssh"
chmod 700 "${HOME_DIR}/.ssh"
touch "${HOME_DIR}/.ssh/authorized_keys"
chmod 600 "${HOME_DIR}/.ssh/authorized_keys"
chown -R "${ADMIN_USER}:${ADMIN_USER}" "${HOME_DIR}/.ssh"

id "${ADMIN_USER}"
ls -ld "${HOME_DIR}/.ssh"
ls -l "${HOME_DIR}/.ssh/authorized_keys"

The admin user is now in the sshadmins group, and the SSH key directory and file permissions are correct. This matters because incorrect permissions are a common reason key-based login fails, which becomes critical once passwords are disabled.
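With the directory and permissions in place, the public key itself still has to be installed. A small idempotent helper (a sketch of our own; add_authorized_key is not a standard tool) appends a key only when that exact line is not already present, which keeps re-runs and automation safe:

```shell
# Append a public key line to an authorized_keys file only when the
# exact line is not already present (idempotent, safe to re-run).
add_authorized_key() {
  key_line="$1"; keys_file="$2"
  touch "${keys_file}"
  chmod 600 "${keys_file}"
  grep -qxF "${key_line}" "${keys_file}" || printf '%s\n' "${key_line}" >> "${keys_file}"
}
```

Typical usage: add_authorized_key "ssh-ed25519 AAAA... admin@workstation" "${HOME_DIR}/.ssh/authorized_keys", followed by the chown to the admin user shown above.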

Now we will restrict SSH access to the sshadmins group. This reduces the number of accounts exposed to brute-force attempts.

sudo -i

set -eu

SSHD_CONFIG="/etc/ssh/sshd_config"

# Append an access control rule if not already present.
grep -qE '^[[:space:]]*AllowGroups[[:space:]]+sshadmins' "${SSHD_CONFIG}" || 
  echo "AllowGroups sshadmins" >> "${SSHD_CONFIG}"

sshd -t

SSH_SERVICE=""
if systemctl list-unit-files | awk '{print $1}' | grep -qx "ssh.service"; then
  SSH_SERVICE="ssh"
else
  SSH_SERVICE="sshd"
fi

systemctl reload "${SSH_SERVICE}" || systemctl restart "${SSH_SERVICE}"
systemctl status "${SSH_SERVICE}" --no-pager

SSH is now restricted to members of sshadmins. This is a strong control in enterprise environments because it turns “any local user account” into “only explicitly approved admin accounts.”

Host firewall: allow SSH only from trusted networks

We are going to reduce exposure by limiting who can reach port 22. This is one of the highest-leverage brute-force defenses: if attackers cannot connect, they cannot guess.

Because firewall tooling differs, we will implement this using UFW on Debian/Ubuntu. We will first detect the primary network interface and current SSH port, then apply rules.

We are going to detect the default route interface and the SSH listening port. Even if we keep port 22, we should not assume it.

sudo -i

set -eu

EXT_IFACE="$(ip route show default 0.0.0.0/0 | awk 'NR==1{for(i=1;i<=NF;i++) if($i=="dev"){print $(i+1); exit}}')"
echo "External interface: ${EXT_IFACE}"

SSH_PORT="$(ss -tln | awk '/LISTEN/ && $4 ~ /:([0-9]+)$/ {print $4}' | awk -F: '$NF ~ /^[0-9]+$/ {print $NF}' | sort -n | uniq | awk 'NR==1{print $1}')"
# If multiple ports are listening, we prefer 22 if present.
if ss -tln | awk '{print $4}' | grep -qE ':22$'; then SSH_PORT="22"; fi
echo "SSH port: ${SSH_PORT}"

We now have the interface and SSH port. Next, we will define a trusted management CIDR and apply UFW rules. Because every environment differs, we will print current IPs and routes first so we choose the correct management network.

We are going to display local addresses and routes, then we will set MGMT_CIDR explicitly. This keeps the commands safe while still being realistic.

sudo -i

set -eu

ip -br addr
ip route

echo "Set MGMT_CIDR to the trusted admin network (example: 192.0.2.0/24)."
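A typo in MGMT_CIDR can lock us out just as surely as a missing rule, so it is worth sanity-checking the value before using it. The validator below is a sketch of our own (valid_cidr is not a standard tool); it checks the syntactic shape of an IPv4 CIDR only, not whether the network is actually our management network:

```shell
# Rough IPv4 CIDR shape check: four octets 0-255 and a prefix 0-32.
valid_cidr() {
  echo "$1" | awk -F'[./]' '
    NF != 5 { exit 1 }
    {
      for (i = 1; i <= 4; i++) if ($i !~ /^[0-9]+$/ || $i > 255) exit 1
      if ($5 !~ /^[0-9]+$/ || $5 > 32) exit 1
    }
  '
}

valid_cidr "192.0.2.0/24" && echo "looks valid"
valid_cidr "192.0.2.0/99" || echo "rejected bad prefix"
```

Calling valid_cidr "${MGMT_CIDR}" before applying firewall rules turns a silent typo into an explicit failure.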

Now we will apply UFW rules: default deny inbound, allow established outbound, allow SSH only from MGMT_CIDR, and enable the firewall. We will also ensure UFW is enabled at boot.

sudo -i

set -eu

MGMT_CIDR=""

if [ -z "${MGMT_CIDR}" ]; then
  echo "ERROR: MGMT_CIDR is empty. Edit this command block and set MGMT_CIDR to a real CIDR."
  exit 1
fi

SSH_PORT="22"
if ! ss -tln | awk '{print $4}' | grep -qE ":${SSH_PORT}$"; then
  SSH_PORT="$(ss -tln | awk '{print $4}' | awk -F: '$NF ~ /^[0-9]+$/ {print $NF}' | sort -n | uniq | awk 'NR==1{print $1}')"
fi

ufw --force reset
ufw default deny incoming
ufw default allow outgoing

ufw allow from "${MGMT_CIDR}" to any port "${SSH_PORT}" proto tcp

ufw --force enable
systemctl enable ufw

ufw status verbose

The host firewall is now enforcing a simple rule: SSH is reachable only from the trusted management network. This dramatically reduces brute-force noise and risk. We also ensured the firewall persists across reboots.

Fail2ban: automatically ban repeated authentication failures

We are going to configure Fail2ban to watch SSH authentication logs and ban IPs that repeatedly fail. This does not replace strong authentication; it reduces the impact of sustained guessing and keeps logs and CPU calmer.

We will create a dedicated jail.local so our configuration survives package updates. We will also choose a backend that works well on systemd systems.

sudo -i

set -eu

cat > /etc/fail2ban/jail.local <<'EOF'
[DEFAULT]
# Ban time increases operational calm during sustained attacks.
bantime = 1h
findtime = 10m
maxretry = 5

# Use systemd journal when available; fall back to log files if needed.
backend = systemd

# Email notifications are intentionally not configured here to keep this self-contained.
# In production, integrate with your alerting pipeline.

[sshd]
enabled = true
port = ssh
mode = aggressive
EOF

systemctl enable --now fail2ban
systemctl status fail2ban --no-pager

fail2ban-client ping
fail2ban-client status
fail2ban-client status sshd

Fail2ban is now enabled and monitoring SSH. The sshd jail is active, and we verified the daemon is responsive and the jail is loaded. When repeated failures occur, Fail2ban will add firewall rules to block the offending IPs for the configured ban time.
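For monitoring, it is useful to extract the banned-source count from the jail status programmatically rather than reading it by eye. The parser below is a sketch of our own (banned_count is not a fail2ban tool); it assumes the "Currently banned:" label that current fail2ban versions print in their status output:

```shell
# Extract the "Currently banned" count from `fail2ban-client status <jail>`
# output read on stdin; prints a single integer (0 when absent).
banned_count() {
  awk -F': *' '/Currently banned/ { gsub(/[[:space:]]/, "", $2); print $2; found = 1 }
               END { if (!found) print 0 }'
}

# Usage on a live host: fail2ban-client status sshd | banned_count
```

Feeding that number into an existing metrics pipeline gives early warning when ban volume spikes.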

PAM protection: lockouts and delay for authentication abuse

We are going to add PAM-level protections that apply even when attackers rotate IPs or when brute-force targets local authentication paths. This is especially relevant for sudo and any environment where password authentication is still used somewhere.

On Debian/Ubuntu, we can use pam_faillock (preferred on modern systems) or pam_tally2 (legacy). We will check which module exists and then apply the correct configuration.

We are going to detect whether pam_faillock is available.

sudo -i

set -eu

if ldconfig -p 2>/dev/null | grep -q pam_faillock || find /lib /usr/lib -name 'pam_faillock.so' 2>/dev/null | grep -q .; then
  echo "pam_faillock is available."
else
  echo "pam_faillock not found; we will use pam_tally2 if available."
fi

Now we will implement a controlled lockout policy. The intent is not to punish legitimate admins; it is to stop repeated guessing. We will set a reasonable threshold and unlock time, and we will apply it to both SSH and sudo authentication paths via common-auth.

We are going to back up PAM configuration and then add pam_faillock rules to /etc/pam.d/common-auth when available.

sudo -i

set -eu

PAM_COMMON_AUTH="/etc/pam.d/common-auth"
cp -a "${PAM_COMMON_AUTH}" "${PAM_COMMON_AUTH}.bak.$(date +%F_%H%M%S)"

PAM_COMMON_ACCOUNT="/etc/pam.d/common-account"
cp -a "${PAM_COMMON_ACCOUNT}" "${PAM_COMMON_ACCOUNT}.bak.$(date +%F_%H%M%S)"

if find /lib /usr/lib -name 'pam_faillock.so' 2>/dev/null | grep -q .; then
  if ! grep -q "pam_faillock.so" "${PAM_COMMON_AUTH}"; then
    # Wrap pam_unix with faillock: preauth before it, authfail after it,
    # and bump the success jump so a correct password skips the authfail line.
    awk '
      !done && /pam_unix\.so/ && /success=1/ {
        print "auth required pam_faillock.so preauth silent deny=5 unlock_time=900 fail_interval=600"
        sub(/success=1/, "success=2")
        print
        print "auth [default=die] pam_faillock.so authfail deny=5 unlock_time=900 fail_interval=600"
        done = 1
        next
      }
      {print}
    ' "${PAM_COMMON_AUTH}" > "${PAM_COMMON_AUTH}.new"
    mv "${PAM_COMMON_AUTH}.new" "${PAM_COMMON_AUTH}"
  fi
  # The account-phase rule belongs in common-account, not common-auth.
  grep -q "pam_faillock.so" "${PAM_COMMON_ACCOUNT}" ||
    echo "account required pam_faillock.so" >> "${PAM_COMMON_ACCOUNT}"
else
  echo "pam_faillock.so not present; no changes applied here."
fi

# Show the effective lines for review
grep -nE "pam_faillock.so" "${PAM_COMMON_AUTH}" "${PAM_COMMON_ACCOUNT}" || true

We backed up the PAM file and, when available, added pam_faillock rules that lock an account after 5 failed attempts within 10 minutes, unlocking after 15 minutes. This affects authentication flows that use common-auth, including sudo on Debian/Ubuntu.

Now we will verify that sudo still works in the current session and that PAM configuration is syntactically intact. PAM does not have a single universal “test” command, so we validate by checking logs and performing a controlled sudo check.

sudo -i

set -eu

# Confirm sudo can still validate credentials (this should succeed for authorized admins).
sudo -n true 2>/dev/null || true

# Check recent auth-related logs for PAM errors.
journalctl -n 50 --no-pager | grep -Ei "pam|sudo|sshd" || true

If we see PAM errors in the journal, we should revert the backup immediately. If logs are clean, we have added a meaningful layer of protection that applies beyond SSH alone.
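Because every edit in this guide leaves a timestamped .bak file behind, rolling back can be mechanical rather than improvised. The helper below is a sketch of our own (restore_latest is not a standard tool); it restores the newest backup for a given file:

```shell
# Restore the newest "<file>.bak.<timestamp>" backup over <file>.
# Prints the backup used, or fails when no backup exists.
restore_latest() {
  target="$1"
  latest="$(ls -1t "${target}".bak.* 2>/dev/null | head -n 1)"
  [ -n "${latest}" ] || { echo "no backup found for ${target}" >&2; return 1; }
  cp -a "${latest}" "${target}"
  echo "restored ${target} from ${latest}"
}

# Usage: restore_latest /etc/pam.d/common-auth
```

The same helper works for sshd_config or any other file we backed up with the date-suffixed naming used throughout this guide.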

Harden sudo behavior

We are going to tighten sudo defaults to reduce abuse and improve auditability. The goal is to make privilege escalation explicit, logged, and less forgiving of repeated guessing.

We will create a dedicated sudoers drop-in file under /etc/sudoers.d with correct permissions, and we will validate it with visudo before it takes effect.

sudo -i

set -eu

cat > /etc/sudoers.d/niilaa-hardening <<'EOF'
# NIILAA sudo hardening: safer defaults and better auditing
Defaults        use_pty
Defaults        logfile="/var/log/sudo.log"
Defaults        log_input,log_output
Defaults        timestamp_timeout=5
Defaults        passwd_tries=3
Defaults        badpass_message="Sorry, try again."
EOF

chmod 0440 /etc/sudoers.d/niilaa-hardening
visudo -cf /etc/sudoers.d/niilaa-hardening
visudo -c

We created a controlled sudo policy file, locked down its permissions, and validated sudoers syntax. Sudo will now run commands in a pseudo-terminal (reducing certain attack techniques), log to a dedicated file, and limit password retries. This directly supports brute-force resistance on privilege escalation paths.

Now we will verify that sudo logging is working.

sudo -i

set -eu

# Generate a sudo event (should succeed for authorized admins).
sudo true

# Verify log file exists and has entries.
test -f /var/log/sudo.log
tail -n 20 /var/log/sudo.log

Sudo is now producing an audit trail in /var/log/sudo.log. In production, we would forward this to a central log system, but even locally it improves incident response and accountability.
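The flat logfile is also easy to summarize locally. The snippet below is a sketch of our own (sudo_summary is not a standard tool) and assumes sudo's default logfile line layout of "date : user : TTY=... ; PWD=... ; USER=... ; COMMAND=..."; it counts invocations per invoking user:

```shell
# Count sudo invocations per invoking user from a sudo logfile
# (Defaults logfile layout: "date : user : TTY=... ; ... ; COMMAND=...").
sudo_summary() {
  awk -F' : ' '/COMMAND=/ { count[$2]++ }
               END { for (u in count) printf "%s\t%d\n", u, count[u] }'
}

# Usage: sudo_summary < /var/log/sudo.log
```

Even this rough count makes anomalies, such as one account suddenly dominating privilege escalations, visible at a glance.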

Implementation for RHEL-family systems

This section applies to RHEL, Rocky Linux, AlmaLinux, and CentOS Stream. We will use dnf, firewalld, and the RHEL-style PAM stack. The security intent is the same: hardened SSH, controlled firewall exposure, Fail2ban bans, and PAM protections for SSH and sudo.

Install required packages

We are going to install OpenSSH server (if missing), Fail2ban, and ensure logging is available. On some RHEL-family systems, Fail2ban may be in EPEL. We will detect availability and install accordingly.

sudo -i

set -eu

dnf -y install openssh-server rsyslog || true

# Try to install fail2ban directly; if not available, enable EPEL (common in enterprise).
if ! dnf -y install fail2ban; then
  dnf -y install epel-release
  dnf -y install fail2ban
fi

systemctl enable --now rsyslog

The system now has SSH, Fail2ban, and rsyslog enabled. This ensures authentication events are recorded and Fail2ban has a reliable source of truth.

Harden SSH server configuration

We are going to apply the same SSH hardening principles, but we will use the RHEL service name (sshd) and validate configuration before restarting.

sudo -i

set -eu

SSHD_CONFIG="/etc/ssh/sshd_config"
cp -a "${SSHD_CONFIG}" "${SSHD_CONFIG}.bak.$(date +%F_%H%M%S)"

cat >> "${SSHD_CONFIG}" <<'EOF'

# --- NIILAA managed hardening block: brute-force defense ---
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
UsePAM yes

MaxAuthTries 3
MaxSessions 5
LoginGraceTime 20

X11Forwarding no
AllowTcpForwarding no
PermitTunnel no
PrintMotd no

ClientAliveInterval 300
ClientAliveCountMax 2

LogLevel VERBOSE
# --- end NIILAA managed hardening block ---
EOF

sshd -t

systemctl enable --now sshd
systemctl reload sshd || systemctl restart sshd
systemctl status sshd --no-pager

ss -tulpn | awk 'NR==1 || /:22[[:space:]]/ {print}'

SSH is now hardened and running with the updated policy. We validated syntax before reload, and we confirmed the service is active and listening.

Restrict SSH access to an admin group

We are going to create an sshadmins group and restrict SSH logins to that group. This reduces the number of accounts exposed to brute-force attempts.

sudo -i

set -eu

getent group sshadmins >/dev/null || groupadd --system sshadmins

awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd
echo "Set ADMIN_USER to an existing admin username before continuing."

Now we will add a chosen admin user to the group and ensure key permissions are correct.

sudo -i

set -eu

ADMIN_USER=""

if [ -z "${ADMIN_USER}" ]; then
  echo "ERROR: ADMIN_USER is empty. Edit this command block and set ADMIN_USER to a real username."
  exit 1
fi

usermod -aG sshadmins "${ADMIN_USER}"

HOME_DIR="$(getent passwd "${ADMIN_USER}" | awk -F: '{print $6}')"
mkdir -p "${HOME_DIR}/.ssh"
chmod 700 "${HOME_DIR}/.ssh"
touch "${HOME_DIR}/.ssh/authorized_keys"
chmod 600 "${HOME_DIR}/.ssh/authorized_keys"
chown -R "${ADMIN_USER}:${ADMIN_USER}" "${HOME_DIR}/.ssh"

id "${ADMIN_USER}"

The admin user is now eligible for SSH access under the new policy, and key permissions are correct.

Now we will enforce the group restriction in sshd and reload.

sudo -i

set -eu

SSHD_CONFIG="/etc/ssh/sshd_config"
grep -qE '^[[:space:]]*AllowGroups[[:space:]]+sshadmins' "${SSHD_CONFIG}" || 
  echo "AllowGroups sshadmins" >> "${SSHD_CONFIG}"

sshd -t
systemctl reload sshd || systemctl restart sshd
systemctl status sshd --no-pager

SSH is now restricted to sshadmins. This is a clean, auditable control that scales well as teams grow.

Firewalld: allow SSH only from trusted networks

We are going to configure firewalld to allow SSH only from a trusted management CIDR. This is one of the most effective brute-force defenses because it prevents unsolicited connections entirely.

First, we will ensure firewalld is installed and running, then we will identify the active zone and set a trusted source network.

sudo -i

set -eu

dnf -y install firewalld
systemctl enable --now firewalld
firewall-cmd --state

ACTIVE_ZONE="$(firewall-cmd --get-active-zones | awk 'NR==1{print $1}')"
echo "Active zone: ${ACTIVE_ZONE}"

ip -br addr
ip route

echo "Set MGMT_CIDR to the trusted admin network (example: 192.0.2.0/24)."

We now know firewalld is active and which zone is in use. Next, we will set MGMT_CIDR, remove broad SSH exposure from the zone if present, and then allow SSH only from that source network.

sudo -i

set -eu

MGMT_CIDR=""
if [ -z "${MGMT_CIDR}" ]; then
  echo "ERROR: MGMT_CIDR is empty. Edit this command block and set MGMT_CIDR to a real CIDR."
  exit 1
fi

ACTIVE_ZONE="$(firewall-cmd --get-active-zones | awk 'NR==1{print $1}')"

# Remove generic SSH service exposure if it exists (safe if not present).
firewall-cmd --permanent --zone="${ACTIVE_ZONE}" --remove-service=ssh || true

# Allow SSH only from the management CIDR.
firewall-cmd --permanent --zone="${ACTIVE_ZONE}" --add-rich-rule="rule family=\"ipv4\" source address=\"${MGMT_CIDR}\" service name=\"ssh\" accept"

firewall-cmd --reload
firewall-cmd --zone="${ACTIVE_ZONE}" --list-all

Firewalld now permits SSH only from the trusted management network. This change persists across reboots because we used --permanent and reloaded the firewall.

Fail2ban: enable SSH jail

We are going to configure Fail2ban with a local jail file so updates do not overwrite our settings. On RHEL-family systems, the backend may be log-file based or systemd-journal based depending on configuration. We will use systemd when available.

sudo -i

set -eu

cat > /etc/fail2ban/jail.local <<'EOF'
[DEFAULT]
bantime = 1h
findtime = 10m
maxretry = 5
backend = systemd

[sshd]
enabled = true
port = ssh
mode = aggressive
EOF

systemctl enable --now fail2ban
systemctl status fail2ban --no-pager

fail2ban-client ping
fail2ban-client status
fail2ban-client status sshd

Fail2ban is now active and monitoring SSH failures. When repeated failures occur, it will ban the source IP for the configured duration, reducing sustained brute-force pressure.

PAM protection for sudo and SSH

We are going to enforce account lockouts after repeated failures using pam_faillock, which is standard on RHEL-family systems. This protects both SSH and sudo authentication paths when they use PAM.

We will configure /etc/security/faillock.conf (preferred on modern systems) so the policy is centralized and consistent.

sudo -i

set -eu

FAILLOCK_CONF="/etc/security/faillock.conf"
if [ -f "${FAILLOCK_CONF}" ]; then
  cp -a "${FAILLOCK_CONF}" "${FAILLOCK_CONF}.bak.$(date +%F_%H%M%S)"
fi

cat > "${FAILLOCK_CONF}" <<'EOF'
# NIILAA PAM faillock policy: brute-force resistance
deny = 5
fail_interval = 600
unlock_time = 900
silent
EOF

# Verify module presence
find /lib64 /usr/lib64 /lib /usr/lib -name 'pam_faillock.so' 2>/dev/null | head -n 5 || true

# Show effective config
cat "${FAILLOCK_CONF}"

We set a centralized lockout policy: 5 failures within 10 minutes triggers a 15-minute lock. This reduces the chance of successful guessing and limits repeated abuse against sudo and SSH.

Now we will verify that PAM is using faillock in the system-auth stack (common on RHEL-family). We will not blindly edit PAM stack files unless necessary, but we will confirm the expected references exist.

sudo -i

set -eu

grep -R --line-number "pam_faillock.so" /etc/pam.d/system-auth /etc/pam.d/password-auth 2>/dev/null || true

# Check for recent PAM errors
journalctl -n 80 --no-pager | grep -Ei "pam|faillock|sudo|sshd" || true

If the PAM stack already references pam_faillock.so, our centralized configuration will take effect. If it does not, we should integrate faillock carefully via the distribution’s authselect tooling (common in RHEL 8/9) rather than manual edits, to avoid breaking managed PAM profiles.
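The authselect route can itself be scripted defensively. The guarded sketch below assumes the active authselect profile supports the with-faillock feature (the stock sssd and minimal profiles on RHEL 8/9 do) and deliberately degrades to a notice rather than failing on hosts without authselect:

```shell
# Enable pam_faillock through authselect instead of editing PAM stack
# files by hand; prints a notice when authselect is unavailable.
if command -v authselect >/dev/null 2>&1; then
  authselect current || echo "no authselect profile selected"
  authselect enable-feature with-faillock || echo "could not enable with-faillock (check the active profile)"
else
  echo "authselect not installed; skipping"
fi
```

After enabling the feature, the faillock.conf policy we wrote above governs thresholds centrally.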

Verification checklist

We are going to verify the defenses as a system, not as isolated settings. The goal is to confirm that SSH is hardened, firewall exposure is controlled, Fail2ban is active, and sudo/PAM protections are in place.

  1. SSH configuration is valid and active

    sudo -i
    sshd -t
    systemctl status ssh 2>/dev/null || systemctl status sshd 2>/dev/null
    ss -tlnp | awk 'NR==1 || /:22[[:space:]]/ {print}'

    We confirmed sshd syntax, service health, and that the port is listening.

  2. Firewall is enforcing restricted SSH access

    sudo -i
    (ufw status verbose 2>/dev/null || true)
    (firewall-cmd --get-active-zones 2>/dev/null || true)
    (firewall-cmd --list-all 2>/dev/null || true)

    We confirmed the active firewall policy and that SSH is not broadly exposed.

  3. Fail2ban is running and the sshd jail is enabled

    sudo -i
    systemctl status fail2ban --no-pager
    fail2ban-client status
    fail2ban-client status sshd

    We confirmed Fail2ban is active and monitoring SSH.

  4. Sudo hardening is applied and logging works

    sudo -i
    visudo -c
    sudo true
    test -f /var/log/sudo.log && tail -n 20 /var/log/sudo.log

    We confirmed sudoers syntax is valid and that sudo events are logged.

Troubleshooting

When brute-force defenses are deployed correctly, failures tend to be predictable. We will focus on symptoms we actually see in production and the fastest safe fixes.

Symptom: SSH login fails immediately after hardening

  • Likely cause: Password authentication was disabled but no working SSH key is installed for the admin account.
  • Fix: Use console/out-of-band access, add the correct public key to ~/.ssh/authorized_keys, and verify permissions.
sudo -i

set -eu

ADMIN_USER=""

if [ -z "${ADMIN_USER}" ]; then
  echo "ERROR: Set ADMIN_USER to the affected username."
  exit 1
fi

HOME_DIR="$(getent passwd "${ADMIN_USER}" | awk -F: '{print $6}')"
ls -ld "${HOME_DIR}" "${HOME_DIR}/.ssh" || true
ls -l "${HOME_DIR}/.ssh/authorized_keys" || true

chmod 700 "${HOME_DIR}/.ssh"
chmod 600 "${HOME_DIR}/.ssh/authorized_keys"
chown -R "${ADMIN_USER}:${ADMIN_USER}" "${HOME_DIR}/.ssh"

We corrected the most common permission issues that prevent key-based authentication.

Symptom: SSH service fails to reload or restart

  • Likely cause: A syntax error in /etc/ssh/sshd_config.
  • Fix: Validate with sshd -t, revert to the backup, then reload.
sudo -i

set -eu

SSHD_CONFIG="/etc/ssh/sshd_config"
sshd -t || true

echo "If sshd -t reports errors, revert to the most recent backup:"
ls -1t /etc/ssh/sshd_config.bak.* 2>/dev/null | head -n 3 || true

Once we identify the correct backup, we can restore it and reload SSH.

sudo -i

set -eu

LATEST_BAK="$(ls -1t /etc/ssh/sshd_config.bak.* | head -n 1)"
cp -a "${LATEST_BAK}" /etc/ssh/sshd_config

sshd -t

systemctl reload ssh 2>/dev/null || systemctl reload sshd 2>/dev/null || 
systemctl restart ssh 2>/dev/null || systemctl restart sshd 2>/dev/null

systemctl status ssh 2>/dev/null || systemctl status sshd 2>/dev/null

We restored a known-good configuration, validated it, and brought SSH back cleanly.

Symptom: Fail2ban is running but does not ban attackers

  • Likely cause: Fail2ban is reading the wrong log source (journal vs file), or the sshd jail is not enabled.
  • Fix: Confirm jail status, confirm backend, and inspect Fail2ban logs.
sudo -i

set -eu

fail2ban-client status
fail2ban-client status sshd || true

journalctl -u fail2ban -n 200 --no-pager | tail -n 200

If the jail is not enabled, we should re-check /etc/fail2ban/jail.local and restart Fail2ban. If the backend is wrong, switching between systemd and auto is often enough.
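Switching the backend is a one-line edit of jail.local. The helper below is a sketch of our own (set_f2b_backend is not a fail2ban tool); it rewrites the backend line in a given jail file and echoes the result for confirmation:

```shell
# Rewrite the "backend = ..." line in a Fail2ban jail file in place
# and print the resulting line for confirmation.
set_f2b_backend() {
  new_backend="$1"; jail_file="$2"
  sed -i "s/^backend[[:space:]]*=.*/backend = ${new_backend}/" "${jail_file}"
  grep '^backend' "${jail_file}"
}

# Usage: set_f2b_backend auto /etc/fail2ban/jail.local && systemctl restart fail2ban
```

Remember to restart Fail2ban after the change so the new backend takes effect.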

Symptom: Legitimate admin account gets locked out after a few mistakes

  • Likely cause: PAM lockout thresholds are too aggressive for the operational reality, or automation is using wrong credentials repeatedly.
  • Fix: Identify the source of failures, then tune deny/unlock_time and clear the lock for the affected user.
sudo -i

set -eu

# Check recent auth failures
journalctl -n 200 --no-pager | grep -Ei "authentication failure|pam|faillock|sudo|sshd" || true

On RHEL-family systems, we can clear faillock counters for a user like this:

sudo -i

set -eu

ADMIN_USER=""

if [ -z "${ADMIN_USER}" ]; then
  echo "ERROR: Set ADMIN_USER to the affected username."
  exit 1
fi

faillock --user "${ADMIN_USER}" --reset || true
faillock --user "${ADMIN_USER}" || true

We reset the lockout state for the user and confirmed the current faillock status.

Common mistakes

Mistake: Disabling password authentication before confirming key access

  • Symptom: SSH prompts for a key, then disconnects; no password prompt appears.
  • Fix: Use console access to add a valid public key to authorized_keys, fix permissions, then retry.

Mistake: Restricting SSH with AllowGroups but forgetting to add admins to the group

  • Symptom: SSH login fails with “Permission denied” even with a valid key.
  • Fix: Add the admin user to sshadmins, then reconnect (new group membership requires a new session).
sudo -i

set -eu

ADMIN_USER=""

if [ -z "${ADMIN_USER}" ]; then
  echo "ERROR: Set ADMIN_USER to the affected username."
  exit 1
fi

usermod -aG sshadmins "${ADMIN_USER}"
id "${ADMIN_USER}"

We ensured the user is in the allowed group. A new SSH session will pick up the updated group membership.

Mistake: Enabling the firewall without allowing SSH from the management network

  • Symptom: Existing SSH session stays alive, but new SSH connections time out.
  • Fix: Use console access to add the correct allow rule for the management CIDR, then verify firewall status.

Mistake: Fail2ban bans the wrong IP (or bans internal jump hosts)

  • Symptom: Admin access from a jump host suddenly stops after a few typos.
  • Fix: Unban the IP, then add trusted ranges to ignoreip in Fail2ban’s configuration.
sudo -i

set -eu

# Show currently banned IPs for sshd
fail2ban-client status sshd | sed -n 's/.*Banned IP list:[[:space:]]*//p' || true

If we need to unban a specific IP, we can do it explicitly.

sudo -i

set -eu

BAD_IP=""

if [ -z "${BAD_IP}" ]; then
  echo "ERROR: Set BAD_IP to the IP address to unban."
  exit 1
fi

fail2ban-client set sshd unbanip "${BAD_IP}"
fail2ban-client status sshd

We removed the ban for the specified IP and confirmed the jail status.

How we at NIILAA look at this

This setup is not impressive because it is complex. It is impressive because it is controlled. Every component is intentional. Every configuration has a reason. This is how infrastructure should scale — quietly, predictably, and without drama.

At NIILAA, we help organizations design, deploy, secure, and maintain Linux security baselines like this in real production environments—across home labs, professional teams, and enterprise fleets. We focus on repeatable hardening, safe rollout plans that avoid lockouts, verification that stands up to audits, and operational practices that keep security effective months later, not just on day one.
