Why Use Nginx as a Reverse Proxy for Multiple Websites?
If you are running multiple websites or web applications on a single VPS with one public IP address, an Nginx reverse proxy is the most efficient and reliable way to serve them all in 2026. Instead of paying for a separate server for each project, you route traffic to the correct backend based on the domain name visitors use.
A reverse proxy sits between your users and your backend servers (or local application ports). Nginx listens on ports 80 and 443, inspects the incoming request’s Host header, and forwards that request to the appropriate backend. This means you can run WordPress on port 8080, a Node.js app on port 3000, and a Python API on port 5000, all behind a single Nginx instance serving clean URLs with SSL.
In this tutorial, we will walk through every step: installing Nginx, creating virtual host configurations for each domain, setting up SSL certificates with Let’s Encrypt, and fixing the most common proxy pass errors that trip people up.
Prerequisites
Before you begin, make sure you have the following ready:
- A VPS running Ubuntu 22.04, 24.04, or Debian 12 (the commands below are Debian/Ubuntu-based, but the Nginx config syntax is universal)
- Root or sudo access to the server
- Two or more domain names pointed to your server’s public IP address via DNS A records
- Your backend applications already running on different local ports (e.g., localhost:3000, localhost:8080)
Step 1: Install Nginx
Connect to your server via SSH and update your package list:
sudo apt update
sudo apt upgrade -y
sudo apt install nginx -y
Verify that Nginx is running:
sudo systemctl status nginx
You should see active (running) in the output. If it is not running, start it with:
sudo systemctl start nginx
sudo systemctl enable nginx
Step 2: Configure the Firewall
If you use UFW (Uncomplicated Firewall), allow HTTP and HTTPS traffic:
sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo ufw status
This opens ports 80 and 443 so visitors can reach your websites.
Step 3: Understand the Nginx Configuration Structure
Before creating virtual hosts, it helps to understand how Nginx organizes its configuration files:
| Path | Purpose |
|---|---|
| /etc/nginx/nginx.conf | Main config file. Contains the http block and includes other config files. |
| /etc/nginx/sites-available/ | Directory where you store individual server block config files (one per site). |
| /etc/nginx/sites-enabled/ | Directory with symlinks to configs in sites-available. Only configs linked here are active. |
| /etc/nginx/conf.d/ | Alternative directory. Files ending in .conf are auto-included. |
Make sure your /etc/nginx/nginx.conf file contains these include lines inside the http block:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
On most default Ubuntu/Debian installations, both lines are already present.
Step 4: Set Up Virtual Host Configurations for Each Website
This is the core of the entire setup. We will create a separate server block file for each domain. Let’s say you want to proxy two websites:
- siteA.com running on localhost:3000
- siteB.com running on localhost:8080
Configuration for siteA.com
Create the config file:
sudo nano /etc/nginx/sites-available/siteA.com
Paste the following:
server {
listen 80;
listen [::]:80;
server_name siteA.com www.siteA.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
Configuration for siteB.com
Create a second file:
sudo nano /etc/nginx/sites-available/siteB.com
Paste the following:
server {
listen 80;
listen [::]:80;
server_name siteB.com www.siteB.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
Enable Both Sites
Create symbolic links in the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/siteA.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/siteB.com /etc/nginx/sites-enabled/
Remove the default config if it is still enabled (optional but recommended to avoid conflicts):
sudo rm /etc/nginx/sites-enabled/default
Test and Reload
Always test your configuration before reloading:
sudo nginx -t
If the output says syntax is ok and test is successful, reload Nginx:
sudo systemctl reload nginx
At this point, visiting http://siteA.com should forward to your app on port 3000, and http://siteB.com should forward to port 8080.
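If DNS has not propagated yet, you can still verify the routing from the command line by setting the Host header explicitly, since that header is what Nginx matches against server_name. The sketch below spins up a throwaway Python server as a stand-in backend so it is self-contained; against a live setup you would point curl at Nginx on port 80 instead:

```shell
# Stand-in backend so the example runs anywhere; on a real server,
# point curl at Nginx (port 80) rather than the backend port.
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
BACKEND=$!
sleep 1

# -H overrides the Host header, which is what Nginx compares
# against server_name to pick the right server block.
code=$(curl -s -o /dev/null -w '%{http_code}' -H 'Host: siteA.com' http://127.0.0.1:8080/)
kill "$BACKEND"
echo "$code"
```

A `200` here confirms the request path works end to end; a `000` or connection error means nothing is listening on the port you targeted.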
Step 5: Add SSL Certificates with Let’s Encrypt (Certbot)
Serving your sites over HTTPS is non-negotiable in 2026. Let’s Encrypt provides free SSL certificates, and Certbot makes the process automatic.
Install Certbot
sudo apt install certbot python3-certbot-nginx -y
Generate Certificates for Each Domain
Run Certbot for each site:
sudo certbot --nginx -d siteA.com -d www.siteA.com
sudo certbot --nginx -d siteB.com -d www.siteB.com
Certbot will ask a few questions (your email, whether to redirect HTTP to HTTPS). Choose to redirect all HTTP traffic to HTTPS when prompted. Certbot will automatically modify your Nginx config files to add the SSL directives.
Verify Auto-Renewal
Certbot installs a systemd timer that auto-renews certificates. Verify it is active:
sudo systemctl status certbot.timer
You can also do a dry run:
sudo certbot renew --dry-run
What Your Config Looks Like After Certbot
After running Certbot, your siteA.com config file will look something like this:
server {
listen 80;
listen [::]:80;
server_name siteA.com www.siteA.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name siteA.com www.siteA.com;
ssl_certificate /etc/letsencrypt/live/siteA.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/siteA.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
Step 6: Adding More Websites
The beauty of this setup is that scaling is straightforward. For every new website or application you want to add:
- Deploy your application on a new local port (e.g., localhost:4000).
- Point the new domain’s DNS A record to your server IP.
- Create a new config file in /etc/nginx/sites-available/.
- Symlink it to /etc/nginx/sites-enabled/.
- Run sudo nginx -t, then sudo systemctl reload nginx.
- Run Certbot for the new domain.
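Those steps can be sketched as a small shell helper that stamps out a new server block from a template. Everything below writes to a scratch directory so it is safe to try as-is; on a real server you would target /etc/nginx/sites-available, then symlink, test, reload, and run Certbot as described above. The domain and port are illustrative:

```shell
# DOMAIN and PORT are the only inputs; SITES_DIR is a scratch directory
# for this sketch (use /etc/nginx/sites-available on a real server).
DOMAIN=site-c.example
PORT=4000
SITES_DIR=$(mktemp -d)

# Backslash-escaped variables ($host, $scheme, ...) are Nginx runtime
# variables and must land in the file literally; $DOMAIN and $PORT expand now.
cat > "$SITES_DIR/$DOMAIN" <<EOF
server {
    listen 80;
    listen [::]:80;
    server_name $DOMAIN www.$DOMAIN;

    location / {
        proxy_pass http://127.0.0.1:$PORT;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;
        proxy_cache_bypass \$http_upgrade;
    }
}
EOF

# Quick sanity check: show the generated proxy target
grep 'proxy_pass' "$SITES_DIR/$DOMAIN"
```

Always finish with `sudo nginx -t` before reloading, exactly as in the manual workflow.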
You can realistically host 10, 20, or even more lightweight sites on a single VPS this way, depending on your server resources.
Step 7: Useful Proxy Headers Explained
The proxy_set_header directives in your config are critical. Here is what each one does:
| Header | Purpose |
|---|---|
| Host $host | Passes the original domain name to the backend so the app knows which site was requested. |
| X-Real-IP $remote_addr | Sends the visitor's real IP address to the backend instead of 127.0.0.1. |
| X-Forwarded-For $proxy_add_x_forwarded_for | Appends the client IP to the forwarded-for chain, useful when multiple proxies are involved. |
| X-Forwarded-Proto $scheme | Tells the backend whether the original request was HTTP or HTTPS. |
| Upgrade / Connection | Required for WebSocket support. Without these, real-time features will break. |
Troubleshooting Common Nginx Reverse Proxy Errors
Even with a clean configuration, things can go wrong. Here are the most common issues and how to fix them.
502 Bad Gateway
This is the most frequent error. It means Nginx tried to contact the backend but did not get a valid response (often the connection was refused outright).
- Cause: The backend application is not running on the specified port.
- Fix: Check whether your app is actually listening. Run `sudo ss -tlnp | grep 3000` (replace 3000 with your port). If nothing shows up, start your application.
504 Gateway Timeout
Nginx waited too long for the backend to respond.
- Cause: The backend is overloaded or a long-running process exceeded the default timeout.
- Fix: Increase the timeout values in your location block:
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
403 Forbidden
- Cause: File permission issues or the backend is rejecting the connection.
- Fix: Check the Nginx error log at `/var/log/nginx/error.log` for details. Also confirm your backend is bound to an address Nginx can reach, typically `127.0.0.1` or `0.0.0.0`.
“No server is defined to handle the request” or Wrong Site Loads
- Cause: The `server_name` does not match the domain, or DNS is not pointing to your server.
- Fix: Double-check your `server_name` directive. Verify DNS with `dig siteA.com` or `nslookup siteA.com` and make sure it returns your server's IP.
Mixed Content Warnings (HTTPS)
- Cause: The backend app generates HTTP links even though the site is served over HTTPS.
- Fix: Ensure you are passing the `X-Forwarded-Proto` header. Many frameworks (Express, Django, Rails) use it to generate the correct URLs.
Checking Logs
When something breaks, always check the logs first:
sudo tail -f /var/log/nginx/error.log
sudo tail -f /var/log/nginx/access.log
For site-specific logs, you can add custom log paths in each server block:
access_log /var/log/nginx/siteA.access.log;
error_log /var/log/nginx/siteA.error.log;
Performance Tips for Running Multiple Sites Behind Nginx
Once everything is working, consider these optimizations:
- Enable Gzip compression in your `nginx.conf` to reduce bandwidth usage.
- Set up proxy caching for static assets if your backend apps serve images, CSS, or JS.
- Use HTTP/2 by adding `http2` to your listen directive (`listen 443 ssl http2;`); on Nginx 1.25.1 and later, the standalone `http2 on;` directive is preferred.
- Monitor resource usage with tools like `htop`, `netdata`, or Prometheus + Grafana to ensure your VPS can handle the load.
- Set `worker_processes` to `auto` in `nginx.conf` so Nginx uses all available CPU cores.
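As an illustration of the first tip, here is a minimal gzip snippet for the http block of nginx.conf. The compression level and MIME types below are reasonable starting points, not tuned values:

```nginx
gzip on;
gzip_comp_level 5;     # balance CPU cost against compression ratio
gzip_min_length 256;   # skip tiny responses where gzip adds overhead
gzip_proxied any;      # also compress responses fetched from backends
gzip_types text/css application/javascript application/json image/svg+xml;
```

Note that text/html is always compressed when gzip is on, so it does not need to appear in gzip_types.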
Nginx Reverse Proxy vs. Nginx Proxy Manager
You may have come across Nginx Proxy Manager (NPM), a popular GUI tool built on top of Nginx. Here is a quick comparison:
| Feature | Manual Nginx Config | Nginx Proxy Manager |
|---|---|---|
| Setup complexity | Moderate (command line) | Easy (web UI) |
| Flexibility | Full control over every directive | Limited to UI options |
| SSL management | Certbot CLI | Built-in Let’s Encrypt |
| Best for | Production servers, advanced configs | Home labs, quick setups |
| Runs via | Native package | Docker container |
For production environments where you need fine-grained control, the manual approach described in this article is the better choice. Nginx Proxy Manager is excellent for quick home lab setups or for users who prefer a graphical interface.
Complete Example: Three Sites on One Server
To tie everything together, here is a real-world scenario. You have one VPS at IP 203.0.113.50 and you want to host:
- blog.example.com – a Ghost blog on port 2368
- app.example.com – a Node.js app on port 3000
- api.example.com – a Python FastAPI service on port 8000
All three DNS records point to 203.0.113.50. You create three config files in /etc/nginx/sites-available/, each with the appropriate server_name and proxy_pass value. Symlink all three, run nginx -t, reload, then run Certbot for each subdomain. Done.
The total time to set this up from scratch is roughly 15 to 20 minutes once you are familiar with the process.
FAQ
Can I use Nginx reverse proxy to serve both static sites and dynamic applications?
Yes. For static sites, you can either use a root directive to serve files directly from a folder, or run a lightweight static file server and proxy to it. For dynamic apps (Node.js, Python, PHP, Ruby), the proxy_pass approach described above works perfectly.
How many websites can I run behind one Nginx reverse proxy?
There is no hard limit in Nginx itself. The constraint is your server’s CPU, RAM, and bandwidth. A small VPS with 2 GB of RAM can comfortably handle 10 or more low-traffic sites. For high-traffic scenarios, you will want to scale your backend resources.
Do I need a separate IP address for each website?
No. That is the entire point of using server_name-based virtual hosts. Nginx reads the Host header from the incoming HTTP request and routes it to the correct server block. All sites share the same IP address.
Can I proxy to backend servers on different machines instead of localhost?
Absolutely. Replace http://127.0.0.1:3000 with the internal IP of the other machine, like http://192.168.1.10:3000. Just make sure the machines can communicate over the network and the necessary ports are open.
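For example, a location block proxying to a backend on another machine differs only in the proxy_pass target (the 192.168.1.10 address is illustrative, matching the example above):

```nginx
location / {
    # Backend running on a separate machine on the private network
    proxy_pass http://192.168.1.10:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```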
What if I want to use Docker containers as backends?
This works great with Docker. Each container exposes a port on the host (e.g., -p 3000:3000), and you configure Nginx to proxy_pass to that port. You can also use Docker networks and reference container names if Nginx itself runs in Docker.
How do I redirect www to non-www (or vice versa)?
Add a separate server block that catches the version you want to redirect and issues a 301:
server {
listen 80;
server_name www.siteA.com;
return 301 https://siteA.com$request_uri;
}
Is Nginx better than Apache as a reverse proxy?
For reverse proxy use cases, Nginx is generally preferred due to its event-driven architecture, lower memory usage, and high concurrency handling. Apache can do the same job with mod_proxy, but Nginx is the more popular choice for this specific role in 2026.
Wrapping Up
Setting up Nginx as a reverse proxy for multiple websites on one server is one of the most cost-effective and practical skills for anyone managing web infrastructure. With the configuration approach above, you get clean domain-based routing, free SSL certificates, WebSocket support, and a setup that scales simply by adding new config files.
If you run into issues or need managed hosting where this is handled for you, feel free to reach out to our team at Kelio Host. We help businesses deploy and manage multi-site server configurations every day.