Basics

An nginx reverse proxy is a popular alternative to exposing an application directly. Besides, it ships with handy request rate limiting, configured through the limit_req_zone and limit_req directives.
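A minimal sketch of the two directives (the zone name, size, and limits here are placeholder values, not a recommendation):

```nginx
http {
    # track clients by IP; "demo" is a 10 MB shared zone, 5 requests/second per client
    limit_req_zone $binary_remote_addr zone=demo:10m rate=5r/s;

    server {
        location / {
            # allow short bursts of up to 10 extra requests, reject the rest immediately
            limit_req zone=demo burst=10 nodelay;
        }
    }
}
```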

Experiment

Let's set up a simple experiment to test this configuration. As the web server, prepare a dummy HTTP request handler listening on port 8080:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func requestHandler(res http.ResponseWriter, req *http.Request) {
    fmt.Fprint(res, "Everything's Gonna Be Alright!")
}

func main() {
    // register requestHandler for incoming requests to "/"
    http.HandleFunc("/", requestHandler)

    // run the http server on port 8080
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Then put an nginx proxy in front of it with the following config:

events {
}

http {
    limit_req_zone $binary_remote_addr zone=webSrv:10m rate=5r/s;

    server {
        listen 80;

        location / {
            limit_req zone=webSrv burst=10 nodelay;
            proxy_pass http://web-srv:8080;
        }
    }
}

A docker compose file is sufficient for orchestration. Notice that containers attached to the same Docker network can reach each other by container name, e.g. http://web-srv:8080, which is exactly what the proxy config above uses.

version: '3'
services:
  proxy:
    image: nginx:latest
    container_name: proxy-srv
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - 9090:80
    networks:
      - proxy-net
      - web-net

  web:
    image: loadtest-web:latest
    container_name: web-srv
    expose:
      - "8080"
    networks:
      - web-net

networks:
  proxy-net:
  web-net:

We can use the docker compose file to spin up the experiment setup with docker compose up -d.

With docker ps we should see the two containers running with the expected network settings.

Then we can write a basic script to test the rate limit functionality.
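A minimal sketch in Go, assuming the stack above is up and the proxy is published on localhost:9090 (the port mapping from the compose file); the tally helper just splits the results into passed vs. rate-limited:

```go
package main

// Basic load-test client (a sketch, not part of the original setup): fires 20
// back-to-back requests at the proxy, assumed reachable on localhost:9090,
// and tallies the results.
import (
    "fmt"
    "net/http"
)

// tally splits status codes into passed (200) and rate-limited (503) counts.
func tally(codes []int) (ok, limited int) {
    for _, c := range codes {
        switch c {
        case http.StatusOK:
            ok++
        case http.StatusServiceUnavailable:
            limited++
        }
    }
    return
}

func main() {
    var codes []int
    for i := 0; i < 20; i++ {
        resp, err := http.Get("http://localhost:9090/")
        if err != nil {
            fmt.Println("request failed:", err)
            continue
        }
        resp.Body.Close()
        codes = append(codes, resp.StatusCode)
    }
    ok, limited := tally(codes)
    fmt.Printf("passed: %d, rate-limited: %d\n", ok, limited)
}
```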

From the nginx logs we will see that the first 11 requests succeed, while most of the remaining ones are rejected with 503.
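Why 11? With nodelay, nginx grants one slot for the in-rate request plus a burst allowance of 10, and the allowance refills at the zone rate (here 5 slots per second). The following sketch models that accounting as a simple token bucket (a simplification: real nginx tracks per-client state in the shared zone with millisecond resolution):

```go
package main

import "fmt"

// bucket models limit_req accounting with nodelay: capacity is burst + 1
// immediate slot, refilling at `rate` slots per second.
type bucket struct {
    tokens   float64 // currently available slots
    capacity float64
    rate     float64 // refill rate, slots per second
    lastSec  float64 // time of the previous request, in seconds
}

// allow reports whether a request arriving at nowSec passes the limiter.
func (b *bucket) allow(nowSec float64) bool {
    // refill proportionally to the elapsed time, capped at capacity
    b.tokens += (nowSec - b.lastSec) * b.rate
    if b.tokens > b.capacity {
        b.tokens = b.capacity
    }
    b.lastSec = nowSec
    if b.tokens >= 1 {
        b.tokens--
        return true // request passes
    }
    return false // would be rejected with 503
}

func main() {
    // rate=5r/s, burst=10 -> capacity 11; fire 20 requests "instantly"
    b := &bucket{tokens: 11, capacity: 11, rate: 5}
    passed := 0
    for i := 0; i < 20; i++ {
        if b.allow(0) {
            passed++
        }
    }
    fmt.Println("passed:", passed) // prints "passed: 11"
}
```

A second later the bucket would have refilled 5 slots, so another short burst of 5 would pass.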

References

a good nginx reverse proxy tutorial

nginx rate limiting