When “just expose the app” stops being safe
Most public-facing services do not start as “internet infrastructure.” They start as a single VM, a single port, and a single team trying to ship. Then the service grows. A second app appears. A third team needs access. A quick firewall rule becomes a permanent exception. TLS gets renewed by hand “just this once.” Logs scatter across hosts. And one day, an incident report forces an uncomfortable question: why are our application servers directly reachable from the internet?
A reverse proxy is where we regain control. Not because it is fancy, but because it gives us a single, hardened choke point: one place to terminate TLS, enforce security headers, rate-limit abusive clients, centralize logs, and keep backend servers private. In this guide, we will implement a production-grade reverse proxy security posture using NGINX, and we will also include an equivalent HAProxy section because real environments often standardize differently.
Architecture we are implementing
- Reverse proxy (public): the only host that accepts inbound traffic from the internet on TCP 80/443.
- Backend application servers (private): not publicly reachable; only accept traffic from the reverse proxy on specific ports.
- Security controls at the edge: TLS termination, strict forwarding rules, request size limits, timeouts, security headers, rate limiting, and centralized access/error logging.
- Network controls: firewall rules that enforce “proxy-only” access to backends and reduce the proxy’s exposed surface.
Prerequisites and assumptions
We are going to assume a clean, production-oriented baseline so the steps are predictable and safe to apply:
- Operating system: Ubuntu Server 22.04 LTS or Debian 12 on both the reverse proxy and backend servers.
- Privileges: we have shell access and can run commands as root (either via sudo -i or by prefixing commands with sudo).
- DNS: we control a public DNS name that will point to the reverse proxy’s public IP (for example, app.example.com).
- Backend reachability: the reverse proxy can reach the backend servers over a private network (VPC/subnet/VLAN) or a secured routed network.
- Backend service: the backend application is already listening on a known port (we will verify it). We will not expose that port publicly.
- Time sync: NTP is enabled (important for TLS and log correlation). On most modern installs, systemd-timesyncd is present.
- No “CDN-only” dependency: we are not relying on a CDN as the security boundary. The reverse proxy is the boundary.
We will implement two platform sections:
- NGINX reverse proxy (primary platform)
- HAProxy reverse proxy (alternative platform)
Step 1: Collect the values we will use safely
Before we touch configuration, we will capture the values that tend to vary between environments: the external interface, the proxy’s public IP, and the backend’s private IP and port. We do this first so our firewall and proxy rules are explicit and repeatable.
On the reverse proxy server, we will detect the default outbound interface and the primary IP address. This avoids guessing interface names like eth0 or ens3.
sudo -i
EXT_IFACE=$(ip -o -4 route show to default | awk '{print $5}' | head -n1)
PROXY_IP=$(ip -o -4 addr show dev "$EXT_IFACE" | awk '{print $4}' | cut -d/ -f1 | head -n1)
echo "EXT_IFACE=$EXT_IFACE"
echo "PROXY_IP=$PROXY_IP"
We now have two shell variables in the current root shell: EXT_IFACE and PROXY_IP. We will use them in firewall rules and verification steps so the commands remain copy/paste-safe. Note that these variables exist only in this shell session: running sudo -i again starts a fresh login shell and loses them, so either stay in one root session for the remaining steps or re-set the variables after opening a new one.
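Before using these values in firewall rules, it is worth failing fast if any of them came back empty (for example, on a host with an unusual routing table). The following is a small sketch; the check_vars helper name is ours, not part of any tool:

```shell
# check_vars: abort if any of the named shell variables is empty or unset
# (hypothetical helper; POSIX sh compatible).
check_vars() {
  for name in "$@"; do
    # Indirect lookup via eval: fetch the value of the variable named $name.
    value=$(eval "printf '%s' \"\${$name}\"")
    if [ -z "$value" ]; then
      echo "ERROR: $name is empty or unset" >&2
      return 1
    fi
  done
  echo "all required variables are set"
}

# Example usage in our flow:
#   check_vars EXT_IFACE PROXY_IP || exit 1
```

Running this before the firewall steps turns a silent empty-variable bug into an immediate, explicit error.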
Next, we will define the DNS name and backend target. We will keep these as variables so we can reuse them consistently across NGINX/HAProxy and verification commands.
FQDN="app.example.com"
BACKEND_IP="10.0.1.10"
BACKEND_PORT="8080"
echo "FQDN=$FQDN"
echo "BACKEND_IP=$BACKEND_IP"
echo "BACKEND_PORT=$BACKEND_PORT"
We have now established the minimum set of values needed to build a controlled reverse proxy. In production, these values should come from your CMDB/IaC inventory, but the variables keep the steps deterministic.
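Because each new sudo -i starts a fresh shell, one way to keep these values consistent across sessions is to persist them to a small file that later shells can source. The file location and name below are our suggestion, not a convention, and the default values are only placeholders for a standalone run:

```shell
# Persist the working variables so a later root shell can re-load them.
# The defaults below are placeholders; in the real flow these are already set.
FQDN="${FQDN:-app.example.com}"
BACKEND_IP="${BACKEND_IP:-10.0.1.10}"
BACKEND_PORT="${BACKEND_PORT:-8080}"

cat > "$HOME/edge-vars.sh" <<VARS
FQDN="$FQDN"
BACKEND_IP="$BACKEND_IP"
BACKEND_PORT="$BACKEND_PORT"
VARS

# Restrict to the owner: these values describe our network layout.
chmod 0600 "$HOME/edge-vars.sh"

# Restore later in any new root shell with:
#   . "$HOME/edge-vars.sh"
```

This keeps the copy/paste steps deterministic even if a session is interrupted mid-procedure.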
Step 2: Lock down the backend so it is not public-facing
Before we harden the reverse proxy, we will make sure the backend is not accidentally reachable from the internet. This is the most common “reverse proxy” failure mode: the proxy exists, but the backend is still exposed, so attackers simply bypass the proxy.
2.1 Verify the backend is listening only where we expect
On the backend server, we will confirm the service is listening on the expected port and see whether it is bound to all interfaces (0.0.0.0) or only a private IP. Binding to a private IP is preferred, but firewall enforcement is still required.
sudo -i
ss -ltnp | awk 'NR==1 || $4 ~ /:8080$/'
This shows whether the backend is listening on TCP 8080 and which address it is bound to. If it is bound to 0.0.0.0:8080, it can accept traffic from any interface, so the firewall becomes the enforcement point.
2.2 Enforce “proxy-only” access to the backend port with UFW
We will use UFW as a readable front-end to nftables/iptables on Debian/Ubuntu. The goal is simple: allow the reverse proxy’s private IP to reach the backend port, and deny everyone else. We will also keep SSH access intact.
On the backend server, we will set the reverse proxy’s private IP and the backend port. The variables from Step 1 live on the proxy, not here, so we define BACKEND_PORT again on this host. If the reverse proxy has a different private IP than its public IP, we should use the private one here. We will explicitly set it to avoid accidental trust of the wrong address.
sudo -i
PROXY_PRIVATE_IP="10.0.1.5"
BACKEND_PORT="8080"
echo "PROXY_PRIVATE_IP=$PROXY_PRIVATE_IP"
echo "BACKEND_PORT=$BACKEND_PORT"
Now we will apply firewall rules. We will default-deny inbound traffic, allow SSH, and allow the backend port only from the reverse proxy.
sudo -i
ufw --force reset
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow from "$PROXY_PRIVATE_IP" to any port "$BACKEND_PORT" proto tcp
ufw --force enable
ufw status verbose
The backend is now protected by a clear policy: only the reverse proxy can reach the application port. SSH remains available for administration. The ufw status verbose output confirms the effective rules.
2.3 Verify the backend is reachable from the proxy and blocked from elsewhere
We will verify from the reverse proxy that the backend responds, and we will also verify locally on the backend that UFW is active.
On the backend server:
sudo -i
ufw status verbose
systemctl is-enabled ufw
systemctl is-active ufw
This confirms the firewall is enabled and persistent across reboots.
On the reverse proxy server, we will test connectivity to the backend. We will use a simple TCP check and an HTTP request if the backend speaks HTTP.
sudo -i
timeout 3 bash -c "cat </dev/null >/dev/tcp/$BACKEND_IP/$BACKEND_PORT" && echo "TCP OK" || echo "TCP FAILED"
curl -sS -o /dev/null -w "HTTP %{http_code}\n" "http://$BACKEND_IP:$BACKEND_PORT/" || true
If TCP fails, the proxy cannot reach the backend and we should fix routing/security groups/VLAN ACLs before proceeding. If HTTP returns a code (even 404), that is still a successful reachability test.
NGINX platform implementation
Step 3: Install and baseline NGINX on the reverse proxy
We will install NGINX from the OS repository for stability and security updates. Then we will ensure the service is enabled at boot and confirm it is listening.
sudo -i
apt-get update
apt-get install -y nginx
systemctl enable nginx
systemctl start nginx
systemctl status --no-pager nginx
NGINX is now installed, started, and configured to persist across reboots. The status output should show active (running).
Next, we will confirm which ports are listening. At this stage, NGINX typically listens on TCP 80.
sudo -i
ss -ltnp | awk 'NR==1 || $4 ~ /:(80|443)$/'
This confirms the current exposure. We will keep it minimal: only 80/443 should be reachable from the internet once we finish.
Step 4: Configure firewall on the reverse proxy
We will enforce a tight inbound policy: allow SSH for administration, allow HTTP/HTTPS for public traffic, and deny everything else. This reduces the blast radius if a service is accidentally installed later.
sudo -i
ufw --force reset
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable
ufw status verbose
The reverse proxy now only accepts inbound connections on SSH, HTTP, and HTTPS. This is a foundational control that stays effective even if application configuration drifts.
Step 5: Prepare TLS with Let’s Encrypt (Certbot) in a controlled way
We will terminate TLS at the reverse proxy. That means the proxy holds the certificate and private key, and backends can remain HTTP-only on the private network (or we can add mTLS later if required). We will use Certbot with the NGINX plugin so renewal is automated and configuration is consistent.
First, we will install Certbot and the NGINX integration.
sudo -i
apt-get update
apt-get install -y certbot python3-certbot-nginx
certbot --version
Certbot is now installed. The version output confirms the binary is available.
Next, we will ensure DNS resolves to this reverse proxy before requesting a certificate. We will query public DNS and compare it to our proxy IP. This prevents issuing certificates that cannot validate.
sudo -i
getent ahosts "$FQDN" | awk '{print $1}' | head -n5
echo "Proxy IP: $PROXY_IP"
If the resolved IP does not match the proxy’s public IP, we should fix DNS first. Certificate issuance will fail otherwise.
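The comparison can be scripted so it fails loudly instead of relying on eyeballing two lines of output. The dns_matches helper name is our own:

```shell
# dns_matches NAME EXPECTED_IP: succeed only if NAME's first resolved IPv4
# address equals EXPECTED_IP (getent ahosts may also list IPv6 entries,
# which we skip by matching dotted-quad lines).
dns_matches() {
  name=$1
  expected=$2
  resolved=$(getent ahosts "$name" | awk '$1 ~ /\./ {print $1; exit}')
  if [ "$resolved" = "$expected" ]; then
    echo "DNS OK: $name -> $resolved"
  else
    echo "DNS MISMATCH: $name -> '$resolved' (expected $expected)" >&2
    return 1
  fi
}

# Usage in our flow:
#   dns_matches "$FQDN" "$PROXY_IP" || echo "fix DNS before requesting a certificate"
```

Wiring this into the procedure makes “DNS points at the proxy” an explicit precondition rather than a visual check.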
Step 6: Create a production-grade NGINX reverse proxy configuration
We will create a dedicated server block for our application. The configuration will do the following:
- Redirect HTTP to HTTPS.
- Proxy requests to the backend over the private network.
- Preserve client context with
X-Forwarded-For,X-Forwarded-Proto, andHost. - Apply sane timeouts and request size limits to reduce abuse and prevent resource exhaustion.
- Add security headers that are safe for most web apps.
- Enable basic rate limiting to slow down brute-force and noisy clients.
- Log in a way that supports incident response.
First, we will create a reusable snippet for proxy headers. This keeps the server block clean and consistent across multiple apps.
sudo -i
cat > /etc/nginx/snippets/proxy-headers.conf <<'EOF'
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Prevent upstream from seeing hop-by-hop headers
proxy_set_header Connection "";
EOF
We have created a snippet that standardizes forwarding headers. This is critical for accurate application logs, correct redirects, and security decisions based on scheme and client IP.
Next, we will create the site configuration. We will start with HTTP and HTTPS blocks, and we will reference the backend IP and port directly. In larger environments, we would use upstream blocks and health checks, but this is a clean baseline.
sudo -i
cat > /etc/nginx/sites-available/app.conf <<EOF
# Rate limiting zone: 10 requests/second per IP with a small burst.
limit_req_zone \$binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    listen [::]:80;
    server_name $FQDN;

    access_log /var/log/nginx/${FQDN}_access.log;
    error_log /var/log/nginx/${FQDN}_error.log warn;

    # Keep HTTP only for ACME validation and redirect everything else.
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        return 301 https://\$host\$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name $FQDN;

    access_log /var/log/nginx/${FQDN}_access.log;
    error_log /var/log/nginx/${FQDN}_error.log warn;

    # Certificate paths that Certbot will create in Step 7.
    ssl_certificate /etc/letsencrypt/live/$FQDN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$FQDN/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Security headers (adjust CSP per application needs).
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

    # HSTS should be enabled only after we confirm HTTPS is stable.
    # We will enable it in a later step.
    # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Basic hardening
    client_max_body_size 10m;
    server_tokens off;

    # Timeouts to protect the proxy under slow-client conditions
    client_body_timeout 15s;
    client_header_timeout 15s;
    send_timeout 30s;

    # Proxy timeouts
    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    location / {
        include /etc/nginx/snippets/proxy-headers.conf;

        # Rate limit requests per client IP
        limit_req zone=perip burst=20 nodelay;

        proxy_http_version 1.1;
        proxy_pass http://$BACKEND_IP:$BACKEND_PORT;
    }
}
EOF
Note the escaping: because the heredoc is unquoted, NGINX variables ($binary_remote_addr, $host, $request_uri) are written with a backslash so the shell does not expand them, while $FQDN, $BACKEND_IP, and $BACKEND_PORT are expanded intentionally.
We have created a complete NGINX site configuration with redirect logic, proxying, rate limiting, and baseline security headers. The HTTPS server block references certificate paths that do not exist yet, so nginx -t will fail on it until the certificate is issued. We will therefore enable the site now but defer validation and reload until Step 7.
Now we will enable the site and disable the default site to avoid accidental exposure of the default NGINX page.
sudo -i
ln -sf /etc/nginx/sites-available/app.conf /etc/nginx/sites-enabled/app.conf
rm -f /etc/nginx/sites-enabled/default
We intentionally skip nginx -t and the reload here. The running NGINX instance still holds its previously loaded configuration, which serves /var/www/html on port 80 — exactly what the ACME webroot challenge in the next step needs.
Step 7: Issue the TLS certificate and activate HTTPS cleanly
We will now request a certificate for the DNS name using Certbot’s webroot mode, which answers the ACME challenge through the web root that NGINX is already serving on port 80. (The --nginx plugin would try to reload NGINX while the HTTPS block still references missing certificate files, so webroot is the safer path here.) Once the certificate exists, the paths we configured become valid and we can activate the new configuration. This is the point where the reverse proxy becomes the controlled TLS boundary.
sudo -i
certbot certonly --webroot -w /var/www/html -d "$FQDN"
nginx -t
systemctl reload nginx
Certbot has written the certificate and key under /etc/letsencrypt/live/, matching the paths in our server block, so nginx -t now passes and the reload activates HTTPS for the domain.
Now we will verify that renewal is scheduled and that the certificate is present.
sudo -i
systemctl status --no-pager certbot.timer || true
systemctl list-timers --all | awk 'NR==1 || /certbot/'
certbot certificates
This confirms automated renewal is in place (via a systemd timer on most modern installs) and shows the certificate details. Because we issued via webroot and the port-80 server block keeps serving /.well-known/acme-challenge/, renewals succeed without stopping NGINX.
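Beyond certbot certificates, openssl can read the validity window directly from a certificate. The snippet below demonstrates on a throwaway self-signed certificate so it is self-contained; against the live endpoint you would instead pipe the output of openssl s_client -connect "$FQDN:443" -servername "$FQDN" into the same openssl x509 command:

```shell
# Generate a short-lived self-signed certificate purely to demonstrate how
# openssl x509 reports the subject and validity dates of a certificate file.
DEMO_DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=app.example.com" \
  -keyout "$DEMO_DIR/key.pem" -out "$DEMO_DIR/cert.pem" 2>/dev/null

# The fields we care about when checking an issued certificate:
# subject, notBefore, and notAfter.
openssl x509 -in "$DEMO_DIR/cert.pem" -noout -subject -dates

rm -rf "$DEMO_DIR"
```

Checking notAfter against the current date is a quick way to confirm the renewal machinery is actually keeping the certificate fresh.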
Step 8: Enable HSTS after confirming stability
HSTS is powerful: it tells browsers to only use HTTPS for our domain. That is good security, but it also means mistakes become sticky in client browsers. We will enable it only after we confirm HTTPS works reliably.
First, we will verify HTTPS response headers and status.
sudo -i
curl -sS -D- -o /dev/null "https://$FQDN/" | awk 'NR==1 || tolower($0) ~ /server:|strict-transport-security|x-frame-options|x-content-type-options|referrer-policy|permissions-policy/'
If we see a successful HTTP status line (200/301/302/404 are all acceptable depending on the app) and the expected headers, we are ready to enable HSTS.
Now we will uncomment the HSTS header line in our site config in a safe, non-interactive way. We will create a backup first.
sudo -i
cp -a /etc/nginx/sites-available/app.conf /etc/nginx/sites-available/app.conf.bak.$(date +%F)
sed -i 's|# add_header Strict-Transport-Security|add_header Strict-Transport-Security|' /etc/nginx/sites-available/app.conf
nginx -t
systemctl reload nginx
HSTS is now enabled. We backed up the configuration before changing it, validated syntax, and reloaded NGINX.
Step 9: Verify end-to-end behavior and logging
We will verify that traffic flows through the proxy, that the backend is not exposed publicly, and that logs are being written for operational visibility.
9.1 Verify NGINX is listening on 80/443
sudo -i
ss -ltnp | awk 'NR==1 || $4 ~ /:(80|443)$/'
This confirms the proxy is listening on the expected public ports.
9.2 Verify HTTP redirects to HTTPS
sudo -i
curl -sS -I "http://$FQDN/" | awk 'NR==1 || tolower($0) ~ /^location:|^server:/'
We should see a 301/308 redirect and a Location: header pointing to https://.
9.3 Verify HTTPS reaches the backend through the proxy
sudo -i
curl -sS -o /dev/null -w "HTTPS %{http_code}\n" "https://$FQDN/"
A non-zero HTTP code confirms the proxy is serving HTTPS and forwarding to the backend. If the backend returns 404, that still proves the path works.
9.4 Verify logs are being written
We will generate a request and then tail the access log to confirm we have traceability.
sudo -i
curl -sS -o /dev/null "https://$FQDN/" || true
tail -n 5 "/var/log/nginx/${FQDN}_access.log"
This confirms that requests are recorded. In production, these logs should be shipped to a central system, but local logging is still essential for first-response debugging.
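For quick first-response triage, plain awk over the access log already answers “who is hitting us, and with what result.” The sketch below runs against hypothetical sample lines so it is self-contained; in production, point the same commands at /var/log/nginx/${FQDN}_access.log. It assumes the default NGINX combined format, where the client IP is field 1 and the status code is field 9:

```shell
# Hypothetical sample lines in NGINX "combined" log format for demonstration.
LOG=$(mktemp)
cat > "$LOG" <<'SAMPLE'
203.0.113.5 - - [01/Jan/2024:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.0"
203.0.113.5 - - [01/Jan/2024:10:00:01 +0000] "GET /login HTTP/1.1" 404 150 "-" "curl/8.0"
198.51.100.9 - - [01/Jan/2024:10:00:02 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.0"
SAMPLE

# Top client IPs (field 1), busiest first.
echo "Top client IPs:"
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn

# Status code distribution (field 9 in the combined format).
echo "Status codes:"
awk '{print $9}' "$LOG" | sort | uniq -c | sort -rn

rm -f "$LOG"
```

Spikes from a single IP or a sudden wall of 4xx/5xx responses are usually the first signal of abuse or a broken backend.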
HAProxy platform implementation
Step 3 (HAProxy): Install and baseline HAProxy on the reverse proxy
In environments that standardize on HAProxy, we still want the same outcome: one controlled edge, private backends, and explicit security behavior. We will install HAProxy from the OS repository and ensure it starts at boot.
sudo -i
apt-get update
apt-get install -y haproxy
systemctl enable haproxy
systemctl start haproxy
systemctl status --no-pager haproxy
HAProxy is now installed and running. The service is enabled for persistence across reboots.
Step 4 (HAProxy): Configure firewall on the reverse proxy
The firewall posture is the same: only SSH, HTTP, and HTTPS should be reachable from the internet.
sudo -i
ufw --force reset
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable
ufw status verbose
The reverse proxy is now constrained to the minimum required inbound ports.
Step 5 (HAProxy): Prepare TLS certificates for HAProxy
HAProxy typically uses a PEM file that contains the full certificate chain and the private key. We will use Certbot in standalone mode to obtain the certificate, then build the PEM file for HAProxy.
First, we will stop anything that might be listening on port 80 during issuance. If HAProxy is already bound to 80, we will temporarily stop it for the ACME challenge.
sudo -i
systemctl stop haproxy
HAProxy is now stopped, freeing port 80 for Certbot’s standalone validation.
Now we will install Certbot and request the certificate.
sudo -i
apt-get update
apt-get install -y certbot
certbot certonly --standalone -d "$FQDN"
Certbot has obtained the certificate under /etc/letsencrypt/live/. Next we will build the HAProxy PEM bundle.
We will create a directory for HAProxy certificates with strict permissions and generate the PEM file.
sudo -i
install -d -m 0750 /etc/haproxy/certs
cat "/etc/letsencrypt/live/$FQDN/fullchain.pem" "/etc/letsencrypt/live/$FQDN/privkey.pem" > "/etc/haproxy/certs/$FQDN.pem"
chmod 0640 "/etc/haproxy/certs/$FQDN.pem"
chown root:haproxy "/etc/haproxy/certs/$FQDN.pem"
ls -l "/etc/haproxy/certs/$FQDN.pem"
HAProxy now has a single PEM file containing the certificate chain and private key, readable by the HAProxy group but not world-readable.
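A bundled PEM is only useful if the private key actually matches the certificate. One way to check is to compare public-key digests: the certificate’s embedded public key and the public half derived from the private key must hash identically. The sketch below builds a throwaway self-signed pair so it is self-contained; in production, point both openssl commands at /etc/haproxy/certs/$FQDN.pem instead:

```shell
# Build a throwaway cert+key bundle purely to demonstrate the check.
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$TMP/key.pem" -out "$TMP/cert.pem" 2>/dev/null
cat "$TMP/cert.pem" "$TMP/key.pem" > "$TMP/bundle.pem"

# Digest of the public key inside the certificate...
cert_pub=$(openssl x509 -in "$TMP/bundle.pem" -noout -pubkey | openssl sha256)
# ...and of the public key derived from the private key in the same file.
key_pub=$(openssl pkey -in "$TMP/bundle.pem" -pubout 2>/dev/null | openssl sha256)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH: certificate and key do not belong together" >&2
fi
rm -rf "$TMP"
```

A mismatched bundle is one of the more common reasons HAProxy refuses to start after a manual certificate change, so this check is worth running after every rebuild of the PEM file.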
Finally, we will ensure renewal can rebuild the PEM file. We will add a deploy hook that runs after successful renewal.
sudo -i
cat > /etc/letsencrypt/renewal-hooks/deploy/haproxy-pem.sh <<'EOF'
#!/bin/sh
set -eu
FQDN="app.example.com"
PEM_DIR="/etc/haproxy/certs"
LIVE_DIR="/etc/letsencrypt/live/${FQDN}"
PEM_FILE="${PEM_DIR}/${FQDN}.pem"
install -d -m 0750 "${PEM_DIR}"
cat "${LIVE_DIR}/fullchain.pem" "${LIVE_DIR}/privkey.pem" > "${PEM_FILE}"
chmod 0640 "${PEM_FILE}"
chown root:haproxy "${PEM_FILE}"
systemctl reload haproxy
EOF
chmod 0750 /etc/letsencrypt/renewal-hooks/deploy/haproxy-pem.sh
We have created a deploy hook that rebuilds the PEM bundle and reloads HAProxy after renewal. One caveat: because issuance used standalone mode, Certbot will also need port 80 at renewal time, and HAProxy will be holding it. Either add --pre-hook "systemctl stop haproxy" and --post-hook "systemctl start haproxy" to the renewal configuration (accepting a brief interruption), or have HAProxy forward /.well-known/acme-challenge/ to a local Certbot standalone listener on an alternate port (certbot renew --http-01-port with a matching HAProxy ACL). Either way, verify with certbot renew --dry-run before relying on it. This makes TLS maintenance predictable and hands-off.
Step 6 (HAProxy): Configure HAProxy for secure reverse proxying
We will configure HAProxy to:
- Redirect HTTP to HTTPS.
- Terminate TLS on 443 using our PEM bundle.
- Forward to the backend over the private network.
- Preserve client IP and scheme via standard headers.
- Apply timeouts and basic request hardening.
We will back up the existing configuration and then write a complete, clean config.
sudo -i
cp -a /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak.$(date +%F)
cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log /dev/log local0
    log /dev/log local1 notice
    user haproxy
    group haproxy
    daemon
    # Reasonable TLS defaults; keep compatibility in mind for enterprise clients.
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384

defaults
    log global
    mode http
    option httplog
    option dontlognull
    # Append the real client IP to X-Forwarded-For on every forwarded request
    option forwardfor
    timeout connect 5s
    timeout client 30s
    timeout server 60s
    timeout http-request 10s
    timeout http-keep-alive 10s

frontend fe_http
    bind :80
    http-request redirect scheme https code 301

frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/$FQDN.pem alpn h2,http/1.1
    # Preserve client context for the backend
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Forwarded-Port 443
    http-request set-header X-Real-IP %[src]
    # Basic hardening: limit request body size (bytes).
    # req.body_size requires request buffering to be meaningful.
    option http-buffer-request
    http-request deny deny_status 413 if { req.body_size gt 10485760 }
    default_backend be_app

backend be_app
    balance roundrobin
    option httpchk GET /
    http-check expect rstatus (2|3|4)[0-9][0-9]
    server app1 $BACKEND_IP:$BACKEND_PORT check
EOF
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl start haproxy
systemctl status --no-pager haproxy
We have replaced the HAProxy configuration with a controlled reverse proxy setup, validated it with a config check, and started the service. HAProxy is now terminating TLS and forwarding to the backend.
Step 7 (HAProxy): Verify end-to-end behavior
We will confirm that HAProxy is listening on 80/443 and that HTTPS works for the domain.
sudo -i
ss -ltnp | awk 'NR==1 || $4 ~ /:(80|443)$/'
curl -sS -I "http://$FQDN/" | awk 'NR==1 || tolower($0) ~ /^location:|^server:/'
curl -sS -o /dev/null -w "HTTPS %{http_code}\n" "https://$FQDN/"
This confirms the listener ports, the HTTP-to-HTTPS redirect, and a successful HTTPS response path through the proxy.
Finally, we will confirm Certbot renewal is present. On some systems it is a timer; on others it is a cron job. We will check both patterns safely.
sudo -i
systemctl status --no-pager certbot.timer || true
systemctl list-timers --all | awk 'NR==1 || /certbot/' || true
test -x /etc/cron.daily/certbot && echo "certbot daily cron present" || true
certbot renew --dry-run
The dry-run confirms that renewal can succeed without waiting for the real renewal window.
Troubleshooting
Symptom: HTTPS works on the proxy, but the app behaves as if it is on HTTP
- Likely cause: missing or incorrect X-Forwarded-Proto header, or the backend is not configured to trust proxy headers.
- Fix (NGINX): confirm the proxy headers snippet is included and contains X-Forwarded-Proto.
sudo -i
nginx -T 2>/dev/null | awk '/proxy_set_header X-Forwarded-Proto/ {print; found=1} END{if(!found) exit 1}' && echo "Header present" || echo "Header missing"
If the header is missing, we should re-check the include /etc/nginx/snippets/proxy-headers.conf; line in the site config and reload NGINX.
Symptom: 502 Bad Gateway from NGINX or HAProxy
- Likely cause: backend is down, wrong backend IP/port, firewall blocks proxy-to-backend, or routing is missing.
- Fix: verify backend reachability from the proxy and confirm the backend is listening.
sudo -i
timeout 3 bash -c "cat </dev/null >/dev/tcp/$BACKEND_IP/$BACKEND_PORT" && echo "TCP OK" || echo "TCP FAILED"
curl -sS -o /dev/null -w "HTTP %{http_code}\n" "http://$BACKEND_IP:$BACKEND_PORT/" || true
If TCP fails, we should check the backend firewall rule allowing the proxy IP, and confirm security groups/VLAN ACLs allow the flow.
Symptom: Certbot fails with connection or challenge errors
- Likely cause: DNS does not point to the proxy, port 80 is blocked, or another service is already bound to port 80 during standalone validation.
- Fix: confirm DNS resolution and firewall rules, and confirm port 80 is reachable and free when using standalone mode.
sudo -i
getent ahosts "$FQDN" | awk '{print $1}' | head -n5
ufw status verbose
ss -ltnp | awk 'NR==1 || $4 ~ /:80$/'
If something is listening on port 80 during standalone issuance, we should stop it temporarily (as we did for HAProxy) or use the NGINX plugin method when NGINX is running.
Symptom: Backend logs show the proxy IP instead of the real client IP
- Likely cause: backend is not using X-Forwarded-For or is not configured to trust the proxy.
- Fix: ensure the proxy sends X-Forwarded-For and configure the backend framework to trust proxy headers only from the proxy’s private IP.
sudo -i
# NGINX: confirm X-Forwarded-For is set
nginx -T 2>/dev/null | awk '/proxy_set_header X-Forwarded-For/ {print; found=1} END{if(!found) exit 1}' && echo "Header present" || echo "Header missing"
We should never configure the backend to trust forwarded headers from “anywhere.” Trust should be limited to the reverse proxy’s private IP range.
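To see why, consider what the backend actually receives when a client fabricates its own header: the proxy appends the genuine source address, so only the entry added by the trusted hop is reliable. A small illustration with a hypothetical header value:

```shell
# A client-supplied X-Forwarded-For with a spoofed first entry; the reverse
# proxy then appends the genuine client address (203.0.113.50) at the end.
xff="10.99.99.99, 203.0.113.50"

# Naive parsing trusts the first entry (attacker-controlled). The last entry
# is the one written by our proxy and is the only value worth trusting.
echo "$xff" | awk -F', *' '{print "first (spoofable): " $1; print "last (proxy-written): " $NF}'
```

This is exactly why backend frameworks ask for the number (or addresses) of trusted proxies: they walk the list from the right, stopping at the first untrusted hop.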
Common mistakes
Mistake: Leaving the backend publicly reachable
- Symptom: the backend responds directly when hitting its public IP/port, bypassing the proxy.
- Fix: enforce firewall rules on the backend to only allow the reverse proxy’s private IP to the application port.
sudo -i
# Run on backend
ufw status verbose
If the rule is missing, we should add ufw allow from PROXY_PRIVATE_IP to any port BACKEND_PORT proto tcp and keep default-deny inbound.
Mistake: Not validating configuration before reload
- Symptom: service fails to reload, downtime occurs, or the proxy serves the default site.
- Fix: always run a config test before reload and keep a dated backup.
sudo -i
# NGINX
nginx -t
# HAProxy
haproxy -c -f /etc/haproxy/haproxy.cfg
These checks prevent avoidable outages caused by syntax errors.
Mistake: Enabling HSTS too early
- Symptom: browsers refuse HTTP and users get stuck if HTTPS is misconfigured.
- Fix: enable HSTS only after HTTPS is stable and verified, and start without includeSubDomains if the domain hosts mixed services.
sudo -i
curl -sS -D- -o /dev/null "https://$FQDN/" | awk 'NR==1 || tolower($0) ~ /strict-transport-security/'
If HSTS needs to be rolled back, we should remove the header and reload, but we should remember that clients may cache HSTS for the configured duration.
How we at NIILAA look at this
This setup is not impressive because it is complex. It is impressive because it is controlled. Every component is intentional. Every configuration has a reason. This is how infrastructure should scale — quietly, predictably, and without drama.
At NIILAA, we help organizations design, deploy, secure, and maintain reverse proxy architectures that hold up under real production pressure: clear network boundaries, hardened edge configuration, safe certificate automation, observable traffic flows, and operational runbooks that teams can actually use during incidents.
Website: https://www.niilaa.com
Email: [email protected]
LinkedIn: https://www.linkedin.com/company/niilaa
Facebook: https://www.facebook.com/niilaa.llc