Configure Site-to-Site VPN Using WireGuard

When a “temporary” network becomes permanent

Most enterprise networks do not start complicated. We connect a branch office because it is “just one site.” We add a second site because the business grows. Then a third appears after an acquisition. Suddenly, the “simple” connectivity request becomes a recurring operational risk: inconsistent routing, ad-hoc firewall rules, brittle remote access, and a growing blast radius when something breaks.

Site-to-site VPN is where that story either stabilizes or spirals. If we treat it as a controlled, repeatable system, we get predictable connectivity, clear security boundaries, and a path to scale. In this guide, we will build a production-grade site-to-site VPN using WireGuard on Linux, designed for on-prem and hybrid networks, with strong defaults, persistence across reboots, and verification at every step.

Architecture we are implementing

We will connect two sites:

  • Site A (HQ / On-Prem): Linux gateway with a public IP (or reachable via port-forwarding).
  • Site B (Branch / On-Prem or Hybrid edge): Linux gateway with a public IP (or reachable via port-forwarding).

Each site has an internal LAN behind the gateway. WireGuard will create a routed tunnel between the gateways so that hosts on each LAN can reach the other LAN without installing VPN software on every endpoint.

Example addressing plan (we will use these values consistently)

  • Site A LAN: 10.10.0.0/16
  • Site B LAN: 10.20.0.0/16
  • WireGuard tunnel network: 10.99.0.0/24
    • Site A tunnel IP: 10.99.0.1/24
    • Site B tunnel IP: 10.99.0.2/24
  • WireGuard UDP port: 51820
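
Because overlapping subnets are the most common way site-to-site designs go wrong, the plan above can be sanity-checked before any configuration is written. This is a sketch (the `check_overlap` helper name is ours); it assumes python3 is available on the gateways, which holds for the distributions this guide targets:

```shell
# Sanity-check the addressing plan: none of the three subnets may overlap.
check_overlap() {
  python3 - "$@" <<'PY'
import ipaddress, sys
nets = [ipaddress.ip_network(a) for a in sys.argv[1:]]
for i in range(len(nets)):
    for j in range(i + 1, len(nets)):
        if nets[i].overlaps(nets[j]):
            sys.exit("OVERLAP: {} and {}".format(nets[i], nets[j]))
print("OK: no overlap")
PY
}

check_overlap 10.10.0.0/16 10.20.0.0/16 10.99.0.0/24
```

If any pair overlaps, the function prints the offending pair and exits nonzero, which makes it usable as a gate in a deployment script.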

We will also implement:

  • Kernel IP forwarding for routing between LAN and tunnel
  • Firewall rules that allow only what is needed
  • NAT only if required (we will prefer routing)
  • Persistent systemd service via wg-quick
  • Verification steps after each major change

Prerequisites and assumptions

Before we touch commands, we need to be explicit about what we are assuming. This is where most “it worked in the lab” deployments fail in production.

  • OS: Linux servers acting as gateways. This guide assumes a modern distribution with systemd (Ubuntu 22.04/24.04 LTS, Debian 12, RHEL 9 derivatives). Commands are written to be broadly compatible.
  • Access: Root shell access (or a user with passwordless sudo). We will write to /etc, manage system services, and modify firewall rules.
  • Network role: Each WireGuard node is a gateway with:
    • One interface facing the internet (public IP or NAT with port-forwarding)
    • One interface facing the internal LAN
  • Routing: We will route between 10.10.0.0/16 and 10.20.0.0/16 through the gateways. Internal networks must not overlap. If they overlap, we must renumber or use NAT with careful policy design.
  • Firewall: nftables is the modern default, but to keep this guide copy/paste-safe across enterprise Linux, we implement firewall rules with iptables (on current distros this is usually the nft-backed iptables compatibility layer) and persist them with iptables-persistent (Debian/Ubuntu) or via a systemd unit (generic). If the environment uses firewalld, the same rules can be translated, but we keep one consistent approach here.
  • DNS: Not required for the tunnel itself. We can use IPs. If we use DNS names for endpoints, they must resolve reliably from each gateway.
  • Time: System clocks should be correct (NTP enabled). WireGuard is tolerant, but good time hygiene matters for logs and incident response.

What we will collect first

We will collect interface names, default routes, and the public endpoint IPs. We will do this with commands that print values, then store them in shell variables so subsequent commands remain copy/paste-safe.

Step 1: Identify interfaces and network parameters on each gateway

We are going to detect the default outbound interface (internet-facing) and the LAN interface. This matters because firewall and forwarding rules must be bound to the correct interfaces. Guessing here is how we accidentally open the wrong path or break production traffic.

On Site A gateway

set -euo pipefail

EXT_IFACE=$(ip -o -4 route show to default | awk '{print $5}' | head -n1)
echo "Site A external interface: ${EXT_IFACE}"

ip -o -4 addr show | awk '{print $2, $4}' | sort -u

We just captured the default-route interface into EXT_IFACE and printed all IPv4 addresses per interface. From the output, we should identify the LAN-facing interface (for example, lan0 or ens192) and confirm the LAN subnet is 10.10.0.0/16.

Now we will set the LAN interface variable explicitly based on what we saw. We do this explicitly because “auto-detect LAN” is unreliable in multi-NIC servers.

LAN_IFACE="REPLACE_WITH_SITE_A_LAN_INTERFACE"
echo "Site A LAN interface: ${LAN_IFACE}"

We set LAN_IFACE for Site A. We must replace the value once, and then we can safely copy/paste the rest of the Site A section without further edits.

On Site B gateway

We repeat the same process on Site B. Consistency is what keeps operations calm later.

set -euo pipefail

EXT_IFACE=$(ip -o -4 route show to default | awk '{print $5}' | head -n1)
echo "Site B external interface: ${EXT_IFACE}"

ip -o -4 addr show | awk '{print $2, $4}' | sort -u

We now have Site B’s external interface and a list of interface addresses. We will again set the LAN interface explicitly.

LAN_IFACE="REPLACE_WITH_SITE_B_LAN_INTERFACE"
echo "Site B LAN interface: ${LAN_IFACE}"

At this point, both gateways have a known external interface and a known LAN interface. That is the foundation for correct routing and firewalling.
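
To catch a typo in an interface variable immediately rather than later when a firewall rule silently binds to nothing, a small guard can be pasted right after setting the variables. This is a sketch (the `require_iface` name is ours); it checks /sys/class/net so it works even on a minimal system:

```shell
# Fail fast if an interface variable points at a NIC that does not exist.
require_iface() {
  if [ -e "/sys/class/net/$1" ]; then
    echo "OK: $1 exists"
  else
    echo "ERROR: interface $1 not found" >&2
    return 1
  fi
}

require_iface lo   # example; in practice: require_iface "${EXT_IFACE}"; require_iface "${LAN_IFACE}"
```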

Step 2: Install WireGuard and baseline tooling

We are going to install WireGuard and a few utilities used for verification. We do this before generating keys so we can validate the kernel module and tools are present.

On both Site A and Site B

set -euo pipefail

if command -v apt-get >/dev/null 2>&1; then
  apt-get update
  apt-get install -y wireguard iproute2 iptables iptables-persistent
elif command -v dnf >/dev/null 2>&1; then
  dnf install -y wireguard-tools iproute iptables
elif command -v yum >/dev/null 2>&1; then
  yum install -y wireguard-tools iproute iptables
else
  echo "Unsupported package manager. Install wireguard-tools, iproute2, and iptables manually."
  exit 1
fi

wg --version || true

We installed WireGuard tooling and networking utilities. On Debian/Ubuntu, we also installed iptables-persistent to keep firewall rules across reboots. The final line prints the WireGuard tool version, confirming the binary is available.

Step 3: Generate WireGuard keys with correct permissions

We are going to generate a private/public keypair on each gateway. The private key must be protected because it is the identity of the node. We will store it under /etc/wireguard with strict permissions.

On Site A

set -euo pipefail
umask 077

install -d -m 700 /etc/wireguard

wg genkey | tee /etc/wireguard/sitea.key | wg pubkey > /etc/wireguard/sitea.pub

chmod 600 /etc/wireguard/sitea.key
chmod 644 /etc/wireguard/sitea.pub

echo "Site A public key:"
cat /etc/wireguard/sitea.pub

We created /etc/wireguard with secure permissions, generated Site A’s private key and public key, and printed the public key. We will copy the public key into Site B’s configuration later. The private key never leaves Site A.

On Site B

set -euo pipefail
umask 077

install -d -m 700 /etc/wireguard

wg genkey | tee /etc/wireguard/siteb.key | wg pubkey > /etc/wireguard/siteb.pub

chmod 600 /etc/wireguard/siteb.key
chmod 644 /etc/wireguard/siteb.pub

echo "Site B public key:"
cat /etc/wireguard/siteb.pub

We repeated the same secure key generation on Site B and printed Site B’s public key. We will copy this public key into Site A’s configuration.

Step 4: Decide how endpoints will reach each other

We are going to define the public endpoint for each site. In production, this is usually a static public IP or a stable DNS name. If a gateway is behind NAT, we must configure port-forwarding of UDP 51820 to the gateway and use the public IP of the NAT device as the endpoint.

We will store endpoint values in variables so we can reuse them safely.

On Site A: set Site B endpoint

We are going to set the remote endpoint (Site B) as a variable. We must replace it once with the real public IP or DNS name of Site B.

SITEB_ENDPOINT_HOST="REPLACE_WITH_SITE_B_PUBLIC_IP_OR_DNS"
WG_PORT="51820"
echo "Site B endpoint: ${SITEB_ENDPOINT_HOST}:${WG_PORT}"

We now have a consistent reference to Site B’s endpoint for configuration.

On Site B: set Site A endpoint

We do the same on Site B for Site A’s endpoint.

SITEA_ENDPOINT_HOST="REPLACE_WITH_SITE_A_PUBLIC_IP_OR_DNS"
WG_PORT="51820"
echo "Site A endpoint: ${SITEA_ENDPOINT_HOST}:${WG_PORT}"

We now have a consistent reference to Site A’s endpoint for configuration.
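
If DNS names are used as endpoints, it is worth confirming they actually resolve from each gateway before writing any configuration. A small sketch (the `resolve_endpoint` name is ours):

```shell
# Resolve an endpoint hostname (or pass an IP through) and print the first
# address; empty output means the name does not resolve from this gateway.
resolve_endpoint() {
  getent ahosts "$1" | awk 'NR==1 {print $1}'
}

resolve_endpoint localhost   # example; use "${SITEB_ENDPOINT_HOST}" or "${SITEA_ENDPOINT_HOST}" in practice
```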

Step 5: Configure WireGuard interfaces with routed site-to-site policy

We are going to create wg0 on both gateways using wg-quick. We will define:

  • The tunnel IP for each side
  • The peer public key
  • AllowedIPs for the remote LAN and the remote tunnel IP
  • Keepalive to maintain NAT mappings (important for hybrid edges and NATed sites)

We will also include firewall hooks in PostUp and PostDown so rules are applied when the tunnel comes up and removed when it goes down. This keeps changes controlled and reversible.

On Site A: create /etc/wireguard/wg0.conf

We are going to build the full configuration file. Before we write it, we will load Site A’s private key into a variable and we will paste Site B’s public key into a variable. This avoids editing the file multiple times and keeps the process repeatable.

set -euo pipefail

# EXT_IFACE and LAN_IFACE (from Step 1) and SITEB_ENDPOINT_HOST (from Step 4)
# must still be set in this shell session; with set -u the heredoc below will
# fail on any unset variable, which is the behavior we want.
SITEA_PRIV_KEY=$(cat /etc/wireguard/sitea.key)
SITEB_PUB_KEY="REPLACE_WITH_SITE_B_PUBLIC_KEY"

SITEA_TUN_IP="10.99.0.1/24"
SITEB_TUN_IP="10.99.0.2/32"

SITEA_LAN_CIDR="10.10.0.0/16"
SITEB_LAN_CIDR="10.20.0.0/16"

WG_PORT="51820"

We loaded the local private key, defined tunnel and LAN CIDRs, and prepared variables for a complete config. Now we will write the configuration file in one shot.

install -m 600 /dev/null /etc/wireguard/wg0.conf

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = ${SITEA_TUN_IP}
ListenPort = ${WG_PORT}
PrivateKey = ${SITEA_PRIV_KEY}

# Forwarding rules are handled at the OS level; these firewall rules only allow the intended flows.
PostUp = iptables -A INPUT -i ${EXT_IFACE} -p udp --dport ${WG_PORT} -j ACCEPT
PostUp = iptables -A INPUT -i wg0 -j ACCEPT
PostUp = iptables -A FORWARD -i wg0 -o ${LAN_IFACE} -d ${SITEA_LAN_CIDR} -j ACCEPT
PostUp = iptables -A FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEA_LAN_CIDR} -j ACCEPT
PostUp = iptables -A FORWARD -i wg0 -o ${LAN_IFACE} -s ${SITEB_LAN_CIDR} -d ${SITEA_LAN_CIDR} -j ACCEPT
PostUp = iptables -A FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEA_LAN_CIDR} -d ${SITEB_LAN_CIDR} -j ACCEPT

PostDown = iptables -D INPUT -i ${EXT_IFACE} -p udp --dport ${WG_PORT} -j ACCEPT
PostDown = iptables -D INPUT -i wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o ${LAN_IFACE} -d ${SITEA_LAN_CIDR} -j ACCEPT
PostDown = iptables -D FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEA_LAN_CIDR} -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o ${LAN_IFACE} -s ${SITEB_LAN_CIDR} -d ${SITEA_LAN_CIDR} -j ACCEPT
PostDown = iptables -D FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEA_LAN_CIDR} -d ${SITEB_LAN_CIDR} -j ACCEPT

[Peer]
PublicKey = ${SITEB_PUB_KEY}
Endpoint = ${SITEB_ENDPOINT_HOST}:${WG_PORT}
AllowedIPs = ${SITEB_TUN_IP}, ${SITEB_LAN_CIDR}
PersistentKeepalive = 25
EOF

chmod 600 /etc/wireguard/wg0.conf

We created Site A’s WireGuard interface configuration. The interface will listen on UDP 51820, and it will route traffic to Site B’s LAN (10.20.0.0/16) through the peer. The firewall rules allow the WireGuard UDP port inbound on the external interface and allow forwarding between the tunnel and the LAN for only the defined subnets.

On Site B: create /etc/wireguard/wg0.conf

We will mirror the configuration on Site B, swapping tunnel IPs and LAN CIDRs. We will load Site B’s private key and paste Site A’s public key.

set -euo pipefail

# EXT_IFACE and LAN_IFACE (from Step 1) and SITEA_ENDPOINT_HOST (from Step 4)
# must still be set in this shell session; with set -u the heredoc below will
# fail on any unset variable, which is the behavior we want.
SITEB_PRIV_KEY=$(cat /etc/wireguard/siteb.key)
SITEA_PUB_KEY="REPLACE_WITH_SITE_A_PUBLIC_KEY"

SITEB_TUN_IP="10.99.0.2/24"
SITEA_TUN_IP="10.99.0.1/32"

SITEB_LAN_CIDR="10.20.0.0/16"
SITEA_LAN_CIDR="10.10.0.0/16"

WG_PORT="51820"

We prepared Site B’s variables. Now we will write the full config file.

install -m 600 /dev/null /etc/wireguard/wg0.conf

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = ${SITEB_TUN_IP}
ListenPort = ${WG_PORT}
PrivateKey = ${SITEB_PRIV_KEY}

PostUp = iptables -A INPUT -i ${EXT_IFACE} -p udp --dport ${WG_PORT} -j ACCEPT
PostUp = iptables -A INPUT -i wg0 -j ACCEPT
PostUp = iptables -A FORWARD -i wg0 -o ${LAN_IFACE} -d ${SITEB_LAN_CIDR} -j ACCEPT
PostUp = iptables -A FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEB_LAN_CIDR} -j ACCEPT
PostUp = iptables -A FORWARD -i wg0 -o ${LAN_IFACE} -s ${SITEA_LAN_CIDR} -d ${SITEB_LAN_CIDR} -j ACCEPT
PostUp = iptables -A FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEB_LAN_CIDR} -d ${SITEA_LAN_CIDR} -j ACCEPT

PostDown = iptables -D INPUT -i ${EXT_IFACE} -p udp --dport ${WG_PORT} -j ACCEPT
PostDown = iptables -D INPUT -i wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o ${LAN_IFACE} -d ${SITEB_LAN_CIDR} -j ACCEPT
PostDown = iptables -D FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEB_LAN_CIDR} -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o ${LAN_IFACE} -s ${SITEA_LAN_CIDR} -d ${SITEB_LAN_CIDR} -j ACCEPT
PostDown = iptables -D FORWARD -i ${LAN_IFACE} -o wg0 -s ${SITEB_LAN_CIDR} -d ${SITEA_LAN_CIDR} -j ACCEPT

[Peer]
PublicKey = ${SITEA_PUB_KEY}
Endpoint = ${SITEA_ENDPOINT_HOST}:${WG_PORT}
AllowedIPs = ${SITEA_TUN_IP}, ${SITEA_LAN_CIDR}
PersistentKeepalive = 25
EOF

chmod 600 /etc/wireguard/wg0.conf

We created Site B’s WireGuard configuration. It mirrors Site A’s intent: route Site A’s LAN (10.10.0.0/16) through the tunnel and allow only the necessary forwarding flows.

Step 6: Enable IP forwarding and keep it persistent

We are going to enable IPv4 forwarding on both gateways. Without this, the gateways can establish a tunnel but will not route traffic between LAN and tunnel. We will apply it immediately and persist it via /etc/sysctl.d.

On both Site A and Site B

set -euo pipefail

cat > /etc/sysctl.d/99-wireguard-s2s.conf <<EOF
net.ipv4.ip_forward=1
EOF

sysctl --system

sysctl net.ipv4.ip_forward

We wrote a persistent sysctl configuration, applied it, and verified that net.ipv4.ip_forward is now set to 1. This change survives reboots.
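
The same flag can be read straight from /proc, which is useful in monitoring or runbook scripts. A sketch (the `forwarding_state` name is ours):

```shell
# Read the live kernel forwarding flag; "1" means this box will route packets.
forwarding_state() {
  cat /proc/sys/net/ipv4/ip_forward
}

case "$(forwarding_state)" in
  1) echo "IPv4 forwarding: enabled" ;;
  *) echo "IPv4 forwarding: DISABLED - the tunnel will come up but LANs will not route" ;;
esac
```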

Step 7: Bring up the tunnel and enable it at boot

We are going to start the WireGuard interface using wg-quick and then enable the systemd unit so it comes up automatically after reboots. This is where the configuration becomes operational, not just theoretical.

On Site A

set -euo pipefail

systemctl enable --now wg-quick@wg0

systemctl status --no-pager wg-quick@wg0
wg show
ip -4 addr show dev wg0
ip -4 route show | grep -E '10.20.0.0/16|10.99.0.0/24' || true

We enabled and started the wg0 interface. The status output confirms the service is active, wg show displays peer state, and the route check confirms the kernel has a route to Site B’s LAN via wg0.

On Site B

set -euo pipefail

systemctl enable --now wg-quick@wg0

systemctl status --no-pager wg-quick@wg0
wg show
ip -4 addr show dev wg0
ip -4 route show | grep -E '10.10.0.0/16|10.99.0.0/24' || true

We brought up the tunnel on Site B and verified the interface, peer visibility, and routes. At this stage, the tunnel should be established if UDP 51820 is reachable between endpoints.

Step 8: Verify UDP reachability and handshake health

We are going to verify that WireGuard is listening on the expected UDP port and that handshakes are occurring. This separates “service is running” from “traffic can actually flow.”

On both sites

set -euo pipefail

ss -lunp | grep -E ':51820\b' || true
wg show

ss confirms the system is listening on UDP 51820. wg show should show a latest handshake timestamp that updates when traffic flows. If it says “never,” we likely have a firewall/NAT/endpoint issue.
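
For scripted checks, `wg show wg0 latest-handshakes` prints one `<peer-pubkey> <epoch>` line per peer, with 0 meaning no handshake yet. A small helper can turn that epoch into something a runbook can act on; this is a sketch (the function name and the 180-second staleness threshold are our assumptions, chosen because WireGuard rekeys roughly every two minutes under traffic):

```shell
# Convert a latest-handshakes epoch into a human verdict.
# 0 = never; anything much older than a rekey interval suggests a stale session.
handshake_verdict() {
  local epoch=$1 now age
  now=$(date +%s)
  if [ "$epoch" -eq 0 ]; then
    echo "never"
  else
    age=$(( now - epoch ))
    if [ "$age" -gt 180 ]; then echo "stale (${age}s ago)"; else echo "healthy (${age}s ago)"; fi
  fi
}

handshake_verdict 0
# In practice: wg show wg0 latest-handshakes | while read -r _ ts; do handshake_verdict "$ts"; done
```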

Step 9: Ensure LAN routing is correct (the part people skip)

We are going to make sure internal hosts know how to reach the remote LAN. In a clean routed design, internal hosts use their local gateway (the WireGuard gateway) as the next hop for the remote LAN.

There are two common enterprise patterns:

  • The WireGuard gateway is already the default gateway for the LAN. In this case, no additional LAN routing is needed.
  • The WireGuard gateway is not the default gateway. In this case, we must add a static route on the LAN’s default router pointing the remote LAN to the WireGuard gateway.

Verify whether the WireGuard gateway is the default gateway for the LAN

We are going to check the gateway’s LAN IP and confirm whether LAN clients use it as their default route. First, we will print the LAN IP on each gateway.

On Site A gateway

ip -o -4 addr show dev "${LAN_IFACE}" | awk '{print $4}'

This prints Site A gateway’s LAN address/prefix. If LAN clients do not use this device as their default gateway, we must add a route on the LAN’s actual router: route 10.20.0.0/16 via this IP.

On Site B gateway

ip -o -4 addr show dev "${LAN_IFACE}" | awk '{print $4}'

This prints Site B gateway’s LAN address/prefix. If LAN clients do not use this device as their default gateway, we must add a route on the LAN’s actual router: route 10.10.0.0/16 via this IP.
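
If the LAN's default router happens to be a Linux box (an assumption; vendor routers have their own CLI for this), the static route on Site A's LAN router looks like the following, with the mirror-image route on Site B's router:

```
# On Site A's LAN router: send Site B's LAN toward the WireGuard gateway.
# Replace the next-hop with the Site A gateway's LAN IP printed above.
ip route add 10.20.0.0/16 via REPLACE_WITH_SITE_A_WG_GATEWAY_LAN_IP
```

Remember to persist this route in whatever mechanism the router uses (netplan, NetworkManager, or vendor config), or it disappears on reboot.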

Step 10: End-to-end connectivity tests

We are going to test in layers: tunnel IP to tunnel IP, then LAN to LAN. This helps us pinpoint where a failure lives.

Test 1: Tunnel reachability

We will ping the remote tunnel IP from each gateway. This validates the WireGuard tunnel itself.

On Site A

ping -c 3 10.99.0.2

If this succeeds, Site A can reach Site B over the tunnel. If it fails, we focus on endpoints, keys, firewall, and handshake state.

On Site B

ping -c 3 10.99.0.1

If this succeeds, Site B can reach Site A over the tunnel.

Test 2: LAN-to-LAN reachability from gateways

Now we will test routing to the remote LAN from each gateway. We should pick a stable host on each LAN that responds to ICMP (or use TCP checks if ICMP is blocked). We will first show how to discover a likely target via ARP/neigh tables, but in production we typically choose a known server.

On Site A: pick a target in Site B LAN

ip neigh show dev "${LAN_IFACE}" | head -n 20 || true

This shows local neighbors, not remote. For the remote side, we should use a known host IP in 10.20.0.0/16. We will set it as a variable and test.

SITEB_TEST_HOST="REPLACE_WITH_A_REAL_HOST_IN_10.20.0.0/16"
ping -c 3 "${SITEB_TEST_HOST}"

If this succeeds, routing and forwarding are working from Site A gateway to a host in Site B LAN. If it fails but tunnel ping works, the issue is usually LAN routing on the far side, host firewall, or missing return route.

On Site B: pick a target in Site A LAN

SITEA_TEST_HOST="REPLACE_WITH_A_REAL_HOST_IN_10.10.0.0/16"
ping -c 3 "${SITEA_TEST_HOST}"

This validates Site B gateway can reach Site A LAN. Again, if tunnel ping works but this fails, we focus on LAN routing and return paths.
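
When ICMP is filtered, a TCP connect test distinguishes "host unreachable" from "ping blocked". If nc is not installed on a gateway, bash's /dev/tcp redirection works as a fallback; this is a sketch (the `tcp_probe` name is ours, and the target port is whatever service the remote host actually runs):

```shell
# Pure-bash TCP probe: "open" if the port accepts a connection within 2 s.
tcp_probe() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed/filtered"
  fi
}

tcp_probe 127.0.0.1 9   # example target; use a real remote-LAN host and port
```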

Step 11: Persist firewall rules safely

We already attached iptables rules to the interface lifecycle via PostUp/PostDown. That is good because it keeps rules aligned with the tunnel state. However, in enterprise environments we also want to ensure the base firewall policy is not accidentally blocking forwarding or UDP 51820 before WireGuard comes up.

We will verify current iptables policy and then persist rules depending on the platform.

On both sites: inspect current iptables state

iptables -S
iptables -S FORWARD
iptables -L -n -v

We now see the active policy and counters. If the default FORWARD policy is DROP (common on hardened gateways), our explicit FORWARD accept rules in PostUp are essential. The counters will increment when traffic flows, which is a practical way to confirm packets are traversing the expected path.

Persisting rules on Debian/Ubuntu (iptables-persistent)

If we installed iptables-persistent, we can save the current ruleset. This is useful if we also maintain baseline rules outside WireGuard. We will save the current state now.

if command -v netfilter-persistent >/dev/null 2>&1; then
  netfilter-persistent save
  systemctl enable --now netfilter-persistent
  systemctl status --no-pager netfilter-persistent
fi

We saved the current iptables rules and enabled persistence. Note that WireGuard’s PostUp rules are applied dynamically when the interface starts; persistence here is about the broader firewall posture.

Generic persistence approach (systemd unit to restore iptables)

On systems without iptables-persistent, we can snapshot rules to a file and restore them at boot with a small systemd unit. We will only do this if needed, because many enterprises manage firewall centrally.

set -euo pipefail

iptables-save > /etc/iptables.rules

cat > /etc/systemd/system/iptables-restore.service <<EOF
[Unit]
Description=Restore iptables rules
DefaultDependencies=no
Before=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /etc/iptables.rules
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now iptables-restore.service
systemctl status --no-pager iptables-restore.service

We saved the current iptables rules and created a systemd unit to restore them at boot. This ensures firewall posture is consistent across reboots.

Security considerations that matter in production

  • Key protection: Private keys are stored with 600 permissions under /etc/wireguard. We should restrict root access and include these files in secure backup policies only if required.
  • Least privilege routing: AllowedIPs is the policy boundary. We only allow the remote LAN and the remote tunnel IP, not 0.0.0.0/0.
  • Firewall scope: We only open UDP 51820 on the external interface and only allow forwarding between the two LAN CIDRs through wg0.
  • Auditability: wg show, iptables counters, and systemd status provide a clean operational story during incidents.
  • Change control: Configuration is file-based and deterministic. We can version-control wg0.conf templates (without private keys) and apply changes predictably.
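
The change-control point can be made concrete: keep a key-free template in version control and render it at deploy time, so private keys never enter the repo. A minimal sketch (the `render_conf` helper, the `__PRIVATE_KEY__` token, and the paths are all ours):

```shell
# Render a wg0.conf from a template, splicing in the local private key.
# The template lives in git; the key only ever exists on the gateway.
render_conf() {
  # $1 = template path, $2 = private key file
  # '|' is safe as the sed delimiter: base64 keys never contain it.
  sed "s|__PRIVATE_KEY__|$(cat "$2")|" "$1"
}

# Demonstration with throwaway files:
printf 'PrivateKey = __PRIVATE_KEY__\n' > /tmp/wg0.conf.tmpl
printf 'KEYMATERIAL' > /tmp/demo.key
render_conf /tmp/wg0.conf.tmpl /tmp/demo.key
```

On a gateway this would become something like `render_conf wg0.conf.tmpl /etc/wireguard/sitea.key > /etc/wireguard/wg0.conf` (hypothetical paths), run under the same umask discipline as Step 3.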

Troubleshooting

When site-to-site VPN fails, the fastest path to resolution is to treat it like a pipeline: endpoint reachability, handshake, routes, forwarding, and return path.

Symptom: wg show shows “latest handshake: never”

  • Likely causes:
    • UDP 51820 blocked by firewall on one side
    • NAT port-forwarding missing or pointing to the wrong host
    • Wrong endpoint IP/DNS
    • Peer public key mismatch
  • Fix:
    • Confirm listening port: ss -lunp | grep -E ':51820\b'
    • Confirm endpoint resolves: getent ahosts REPLACE_WITH_ENDPOINT_DNS (if using DNS)
    • Confirm keys: compare the configured PublicKey with cat /etc/wireguard/site*.pub
    • Temporarily add a packet capture to confirm UDP arrives:
      tcpdump -ni "${EXT_IFACE}" udp port 51820

Symptom: Handshake works, but LAN-to-LAN traffic fails

  • Likely causes:
    • IP forwarding not enabled
    • Missing static route on the LAN router (if the WireGuard gateway is not the default gateway)
    • Host firewall on the destination LAN blocks traffic
    • Overlapping subnets between sites
  • Fix:
    • Verify forwarding: sysctl net.ipv4.ip_forward should be 1
    • Verify routes:
      ip -4 route show | grep -E '10.10.0.0/16|10.20.0.0/16'
    • Check iptables counters to see if packets are forwarded:
      iptables -L FORWARD -n -v
    • Confirm the LAN router has a route to the remote LAN via the WireGuard gateway LAN IP.

Symptom: Tunnel works for a while, then stops until we restart

  • Likely causes:
    • NAT mapping expires on one side
    • Endpoint IP changes (dynamic public IP) without DNS update
  • Fix:
    • Keepalive is already set to 25 seconds. Confirm it is present in both configs.
    • If endpoint IP changes, use a stable DNS name and ensure it updates quickly, or use a fixed IP.
    • Check logs:
      journalctl -u wg-quick@wg0 --no-pager -n 200

Symptom: We can ping remote tunnel IP, but not remote LAN hosts

  • Likely causes:
    • Remote LAN hosts do not have a return route to our LAN (common when the WireGuard gateway is not the default gateway)
    • Remote LAN firewall blocks ICMP or the specific protocol
  • Fix:
    • Add/verify static route on the remote LAN router: route our LAN via the remote WireGuard gateway LAN IP.
    • Test with TCP instead of ICMP if ICMP is blocked:
      nc -vz -w 2 REPLACE_WITH_REMOTE_LAN_HOST 22 || true

Common mistakes

Mistake: Wrong interface variables cause firewall rules to apply to the wrong NIC

  • Symptom: WireGuard service is active, but no handshake; or LAN forwarding never works.
  • Fix: Re-check interface names and update EXT_IFACE and LAN_IFACE correctly. Then restart:
    systemctl restart wg-quick@wg0
    wg show
    iptables -L -n -v

Mistake: AllowedIPs is too broad or too narrow

  • Symptom: Handshake works, but traffic to the remote LAN is dropped or routed incorrectly.
  • Fix: Ensure each peer’s AllowedIPs includes the remote LAN CIDR and the remote tunnel /32. Then restart:
    systemctl restart wg-quick@wg0
    wg show

Mistake: Overlapping subnets between sites

  • Symptom: Traffic goes to the wrong place, or routes never behave predictably even though the tunnel is up.
  • Fix: Renumber one site, or implement NAT with strict policy (only as a last resort). In enterprise environments, renumbering is usually the correct long-term move.

Mistake: The WireGuard gateway is not the LAN default gateway and no static route exists

  • Symptom: Gateways can reach each other and even reach remote LAN hosts, but LAN clients cannot reach the remote LAN.
  • Fix: Add a static route on the LAN’s actual router pointing the remote LAN to the WireGuard gateway LAN IP.

How we at NIILAA look at this

This setup is not impressive because it is complex. It is impressive because it is controlled. Every component is intentional. Every configuration has a reason. This is how infrastructure should scale — quietly, predictably, and without drama.

At NIILAA, we help organizations design, deploy, secure, and maintain site-to-site connectivity that holds up under real operational pressure: clean routing, hardened gateways, auditable change control, and verification that fits enterprise runbooks. When networks grow, we keep the connectivity layer boring in the best possible way.
