NGINX Load Balancer - Secure gRPC

This guide extends our previous blog post on NGINX Load Balancer for WCF & gRPC by adding SSL connections to the gRPC protocol. The steps are similar; just update the config file bpserver-loadbalancer.conf.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

```nginx
# NGINX Load Balancer Configuration for Blue Prism Enterprise

# Defining two upstream blocks for different ports
upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000 ssl;  # Add ssl here
    http2 on;
    server_name d11-lnx-alb01.gcs.cloud;

    # Add SSL certificate configuration
    ssl_certificate /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        grpc_pass grpcs://bpserver_backend_10000;  # Change to grpcs:// for SSL

        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;

        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

See also: NGINX Load Balancer for WCF & gRPC ...
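After reloading NGINX, it is worth confirming that the secured listener actually negotiates TLS and HTTP/2 before pointing Blue Prism clients at it. The commands below are a minimal sketch, assuming the hostname and port from the config above and that ca-bundle.crt is available locally; adjust paths for your environment.

```bash
# Validate the configuration and reload NGINX on the load balancer.
sudo nginx -t && sudo systemctl reload nginx

# Confirm the TLS handshake on port 10000 and that HTTP/2 (required by gRPC)
# is negotiated via ALPN. "Verify return code: 0 (ok)" means the chain checks out.
openssl s_client -connect d11-lnx-alb01.gcs.cloud:10000 \
  -CAfile ca-bundle.crt -alpn h2 </dev/null 2>/dev/null | \
  grep -E 'ALPN|Verify return code'
```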

April 24, 2025

NGINX Load Balancer for WCF & gRPC

This guide extends our previous blog post on NGINX Load Balancing for WCF Applications by adding gRPC protocol support on port 10000. While the setup process remains similar, we’ll focus on the specific configuration changes needed in the bpserver-loadbalancer.conf file.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

```nginx
# NGINX Load Balancer Configuration for Blue Prism Enterprise

# Defining two upstream blocks for different ports
upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000;
    http2 on;  # Add this line to enable HTTP/2
    server_name d11-lnx-alb01.gcs.cloud;

    location / {
        grpc_pass grpc://bpserver_backend_10000;  # Use grpc_pass instead of proxy_pass

        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;

        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

See also: NGINX Load Balancer - Secure gRPC ...
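To sanity-check the new listener without a full gRPC client, you can verify that NGINX accepts cleartext HTTP/2 on port 10000. This is a minimal sketch assuming the load balancer hostname from the config above; a bare request will not return a valid gRPC response, but a successful HTTP/2 exchange confirms the listener and upstream routing are in place.

```bash
# Validate and reload the configuration.
sudo nginx -t && sudo systemctl reload nginx

# Speak HTTP/2 without the h2c upgrade dance (prior knowledge), as gRPC clients do.
# Any HTTP/2 response, even an error status, confirms the port 10000 listener works.
curl -v --http2-prior-knowledge http://d11-lnx-alb01.gcs.cloud:10000/
```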

April 23, 2025

NGINX Load Balancer for WCF App

This guide demonstrates how to implement a high-performance NGINX load balancer for WCF applications with the following features:

- Enhanced security through SSL/TLS encryption
- Reliable session management using IP-based persistence
- Custom-tuned configurations for WCF service optimisation
- Advanced timeout and buffer settings to handle complex WCF payloads

The configuration ensures reliable, secure, and efficient load balancing specifically optimised for WCF service applications, with built-in session persistence and performance tuning.

1. Install required packages and SSL certificates ...
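The excerpt stops at step 1, so here is a rough sketch of that step, mirroring the bare-metal post later on this page. The package commands assume a Debian/Ubuntu host; adjust for your distribution, and the certificate filenames follow the other posts in this series.

```bash
# Install NGINX.
sudo apt update && sudo apt install nginx -y

# Stage the SSL certificates where the load balancer config expects them.
sudo mkdir -p /etc/nginx/ssl
sudo cp certs/* /etc/nginx/ssl/

# Tighten permissions on the private key (hardening assumption, not from the original post).
sudo chmod 600 /etc/nginx/ssl/server.key
```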

April 21, 2025

NGINX Container - Load Balancer

Let’s build a Dockerized NGINX setup with:

- SSL termination using a wildcard cert
- Reverse proxy + Load balancing to 2 backend servers
- Mounted volumes for certs and config

1. Updated Step for CA Chain

```bash
# Create the CA chain file:
cat mid-ca.crt ca.crt > ca-bundle.crt
```

| Cert file | Purpose |
| --- | --- |
| server_001.crt | Wildcard cert for your domain |
| server.key | Private key for the wildcard cert |
| ca-bundle.crt | Combined mid-ca.crt + ca.crt (in that order) |

2. Directory Structure (suggested)

```
sh-5.2$ tree
.
└── nginx-lb
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── docker-compose.yml
    ├── nginx
    │   └── nginx.conf
    └── nginx-log
```

3. Create Dockerfile

```dockerfile
FROM nginx:alpine

# Create the log directory inside the container
RUN mkdir -p /var/log/nginx

# Copy NGINX config and certs into the image (will be overridden by volume)
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY certs/ /etc/nginx/certs/

# Expose port 443 for HTTPS
EXPOSE 443
```

4. Create nginx.conf

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Log format definition
    log_format detailed '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$proxy_host" "$upstream_addr"';

    # Access and error logs
    access_log /var/log/nginx/access.log detailed;
    error_log /var/log/nginx/error.log debug;

    ssl_certificate /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_verify_client off;

    upstream backend_apis {
        server d11-api-demo1.gcs.cloud:443;
        server d11-api-demo2.gcs.cloud:443;
    }

    server {
        listen 443 ssl;
        server_name d11-alb-ngx01.gcs.cloud;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;

        location / {
            proxy_pass https://backend_apis;

            proxy_ssl_protocols TLSv1.2 TLSv1.3;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
            proxy_ssl_name $host;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
```

...
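The docker-compose.yml content is truncated in this excerpt, but bringing the stack up and checking it end to end looks roughly like the sketch below. It assumes the compose file publishes port 443, mounts the certs, nginx, and nginx-log directories from the tree above, that the service name is nginx-lb (an assumption), and that d11-alb-ngx01.gcs.cloud resolves to the Docker host.

```bash
# Build and start the load balancer (run from the nginx-lb directory).
docker compose up -d --build

# Tail the container logs; the detailed access log also lands in the mounted nginx-log volume.
docker compose logs -f nginx-lb   # "nginx-lb" service name is an assumption

# Verify SSL termination and proxying through the load balancer.
curl --cacert certs/ca-bundle.crt https://d11-alb-ngx01.gcs.cloud/
```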

April 13, 2025

NGINX Load Balancer - Bare Metal

Install NGINX:

```bash
sudo apt update
sudo apt install nginx -y
```

Set SSL Certificates

```
sh-5.2$ sudo mkdir -p /etc/nginx/ssl
sh-5.2$ sudo cp certs/* /etc/nginx/ssl/
sh-5.2$ sudo ls -l /etc/nginx/ssl/
total 32
-rw-r--r--. 1 root root 3830 Apr 13 15:08 ca-bundle.crt
-r--r--r--. 1 root root 1911 Apr 13 15:08 ca.crt
-r--r--r--. 1 root root 1919 Apr 13 15:08 mid-ca.crt
-rw-r--r--. 1 root root 6082 Apr 13 15:08 server-bundle.crt
-rw-------. 1 root root 1704 Apr 13 15:08 server.key
-rw-r--r--. 1 root root 2252 Apr 13 15:08 server_001.crt
-rw-------. 1 root root 3363 Apr 13 15:08 server_001.pfx
sh-5.2$
```

Create the NGINX Load Balancing Config

Edit /etc/nginx/nginx.conf or (preferably) add a new file in /etc/nginx/conf.d/iis-loadbalancer.conf:

...
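Once the config file is in place under /etc/nginx/conf.d/, validate and activate it. A minimal sketch; the firewall line is an assumption that only applies if ufw is active on the host and the listener uses port 443.

```bash
# Check the syntax of all loaded config files, including conf.d/*.conf.
sudo nginx -t

# Enable NGINX at boot and apply the new configuration.
sudo systemctl enable --now nginx
sudo systemctl reload nginx

# If a host firewall is active, open the listener port (assumes ufw and port 443).
sudo ufw allow 443/tcp
```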

April 13, 2025

NGINX Container - Secure Web Page

Why Choose NGINX for Your Web Server?

- It’s lightweight and high-performance
- Excellent for serving static content and as a reverse proxy
- Simple configuration syntax
- Very popular in containerized environments

So, let’s create a Docker container with Nginx and SSL!

First, create a directory structure:

```bash
cd ~
aws s3 cp s3://BUCKET NAME/ . --recursive
sudo yum install unzip tree -y
mkdir nginx-ssl
unzip certs.zip
mv certs nginx-ssl/
unzip html.zip
mv html nginx-ssl/
cd nginx-ssl
mkdir conf
```

Create nginx.conf in the conf directory. Change server_name to match your host:

```nginx
server {
    listen 443 ssl;
    server_name d11-vdi-lin04.gcs.cloud;
    root /usr/share/nginx/html;

    location / {
        index index.html;
    }

    ssl_certificate /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_protocols TLSv1.2 TLSv1.3;
}
```

Create the full certificate chain by concatenating the certificates in the correct order:

```bash
cd certs
cat mid-ca.crt ca.crt > ca-bundle.crt
cat server_001.crt mid-ca.crt ca.crt > server-bundle.crt
```

Create Dockerfile:

```dockerfile
FROM nginx:alpine

RUN mkdir -p /etc/nginx/certs

# Copy SSL certificates
COPY certs/ca-bundle.crt /etc/nginx/certs/
COPY certs/server_001.crt /etc/nginx/certs/
COPY certs/server.key /etc/nginx/certs/

COPY conf/nginx.conf /etc/nginx/conf.d/default.conf
COPY html /usr/share/nginx/html

EXPOSE 443

CMD ["nginx", "-g", "daemon off;"]
```

Make sure your HTML content is organized in a directory structure like this:

```
.
└── nginx-ssl
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── conf
    │   └── nginx.conf
    └── html
        ├── colour.conf
        ├── img
        │   └── GCS-AWS-logo_32_v02.png
        ├── index.html
        └── swagger
            └── ui
                └── index
                    ├── img
                    │   └── Tech-Task-v07.png
                    └── index.html
```

Build and run the container:

```bash
# Build the image
sudo docker build -t my-secure-nginx:latest .

# Run the container
sudo docker run -d --name secure-nginx \
  -p 443:443 \
  --restart always \
  my-secure-nginx:latest
```

Check the status using the curl command:

```bash
# -k flag to allow insecure connections
curl -k https://localhost

# Or specify your domain
curl -k https://your-domain.com

# For more detail, add the -v (verbose) flag
curl -kv https://localhost
```

See also: Deploy a Amazon Linux 2023 ...
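Beyond curl -k, it can be useful to confirm that the container is actually presenting the expected certificate chain. A minimal sketch, assuming it is run from the nginx-ssl directory on the Docker host:

```bash
# Verify the server certificate against the CA bundle before building the image.
openssl verify -CAfile certs/ca-bundle.crt certs/server_001.crt

# Inspect the certificate served by the running container and check chain validation.
openssl s_client -connect localhost:443 -servername d11-vdi-lin04.gcs.cloud \
  -CAfile certs/ca-bundle.crt </dev/null 2>/dev/null | \
  grep -E 'subject=|issuer=|Verify return code'
```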

April 10, 2025