NGINX Load Balancer - Secure gRPC
This guide extends our previous blog post on NGINX Load Balancer for WCF & gRPC by adding SSL connections to the gRPC protocol. The steps are similar; only the bpserver-loadbalancer.conf configuration file needs to be updated.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

```nginx
# NGINX Load Balancer Configuration for Blue Prism Enterprise

# Defining two upstream blocks for different ports
upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate        /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key    /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client      off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000 ssl;  # Add ssl here
    http2 on;
    server_name d11-lnx-alb01.gcs.cloud;

    # Add SSL certificate configuration
    ssl_certificate        /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key    /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client      off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        grpc_pass grpcs://bpserver_backend_10000;  # Change to grpcs:// for SSL

        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;

        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

See also: NGINX Load Balancer for WCF & gRPC ...
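Once the config is reloaded, a quick way to sanity-check the secured gRPC listener is to run a TLS handshake against port 10000 and confirm that HTTP/2 is offered via ALPN. This assumes OpenSSL 1.1.0 or later on the client; the hostname is the one used in the example above.

```bash
# Negotiate TLS against the gRPC port and request the "h2" ALPN protocol
openssl s_client -connect d11-lnx-alb01.gcs.cloud:10000 -alpn h2 -brief </dev/null
```

A successful handshake should report the negotiated protocol as h2 along with the TLS version and cipher.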
OpenSSL - Verify Certificate
Verify the certificate

```bash
openssl x509 -in server/certs/client.crt -text -noout
openssl x509 -in server/certs/server.crt -text -noout
```

Verify the certificate chain

```bash
# First, concatenate the CA certificates (intermediate first, then root)
cat mid-ca.crt ca.crt > ca-bundle.crt

# Then verify using the chain file
openssl verify -CAfile ca-bundle.crt server/certs/client.crt
openssl verify -CAfile ca-bundle.crt server/certs/server.crt
```

See also: OpenSSL - Initial Setup, OpenSSL (1) - Root CA, OpenSSL (2) - Intermediate CA, OpenSSL (3) - Server Certificate ...
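Two further checks are often useful at this stage, assuming the same file layout as above: the validity window of each certificate, and the chain a live endpoint actually presents once deployed. The hostname below is illustrative only.

```bash
# Show the notBefore/notAfter validity window
openssl x509 -in server/certs/server.crt -noout -dates

# Inspect the chain a running server actually sends (hostname is illustrative)
openssl s_client -connect d11-lnx-alb01.gcs.cloud:443 -CAfile ca-bundle.crt -showcerts </dev/null
```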
OpenSSL - Revoke Certificate
Revoke a certificate

```bash
openssl ca -config mid-ca/mid-ca.conf -revoke server/certs/server.crt

# Confirm the entry is now marked as revoked in the CA database
cat mid-ca/index
```

See also: OpenSSL - Initial Setup, OpenSSL (1) - Root CA, OpenSSL (2) - Intermediate CA, OpenSSL (3) - Server Certificate, OpenSSL (4) - Client Certificate, OpenSSL - Verify Certificate, OpenSSL - Revoke Certificate
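Revoking only updates the CA database; relying parties will not notice until a fresh CRL is published. The commands below are a minimal sketch, assuming mid-ca.conf defines the CRL settings; the output path is an assumption, not part of the original setup.

```bash
# Publish an updated CRL so the revocation takes effect (output path is an assumption)
openssl ca -config mid-ca/mid-ca.conf -gencrl -out mid-ca/crl/mid-ca.crl

# Inspect the revoked serial numbers in the new CRL
openssl crl -in mid-ca/crl/mid-ca.crl -noout -text
```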
OpenSSL (4) - Client Certificate
Create a Client Certificate

1. Generate a client key file

```bash
openssl genrsa -out server/private/client.key 2048
```

2. Generate a client Certificate Signing Request (CSR)

```bash
openssl req -config mid-ca/mid-ca.conf -key server/private/client.key -new -sha256 -out server/csr/client.csr
```

e.g., CN=GCS-Client-Certificate-v0x

3. Sign the client CSR using the client_cert extension

```bash
openssl ca -config mid-ca/mid-ca.conf -extensions client_cert -days 3650 -notext -in server/csr/client.csr -out server/client-certs/client.crt
```

4. Generate client PFX (if needed)

```bash
openssl pkcs12 -inkey server/private/client.key -in server/client-certs/client.crt -export -out server/client-certs/client.pfx -passout pass:
```

See also: Download from CloudShell ...
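To confirm the new client certificate actually works for mutual TLS, you can inspect the exported PFX and present the PEM pair to an mTLS-protected endpoint. The URL below is illustrative only and assumes curl is available.

```bash
# Inspect the exported PFX (an empty export password was used above)
openssl pkcs12 -info -in server/client-certs/client.pfx -passin pass: -nodes | head

# Present the client certificate to an mTLS-protected endpoint (URL is illustrative)
curl --cert server/client-certs/client.crt --key server/private/client.key \
     --cacert ca-bundle.crt https://d11-api-demo1.gcs.cloud/
```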
NGINX Load Balancer for WCF & gRPC
This guide extends our previous blog post on NGINX Load Balancing for WCF Applications by adding gRPC protocol support on port 10000. While the setup process remains similar, we’ll focus on the specific configuration changes needed in the bpserver-loadbalancer.conf file.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

```nginx
# NGINX Load Balancer Configuration for Blue Prism Enterprise

# Defining two upstream blocks for different ports
upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate        /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key    /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client      off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000;
    http2 on;  # Add this line to enable HTTP/2
    server_name d11-lnx-alb01.gcs.cloud;

    location / {
        grpc_pass grpc://bpserver_backend_10000;  # Use grpc_pass instead of proxy_pass

        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;

        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

See also: NGINX Load Balancer - Secure gRPC ...
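Before relying on the new port 10000 block, validate and reload NGINX, then confirm the cleartext HTTP/2 listener answers. curl's --http2-prior-knowledge flag speaks HTTP/2 without an upgrade, which is what gRPC expects; a gRPC-style error response is enough to prove the proxy path works.

```bash
# Validate the new server blocks, then reload without dropping connections
sudo nginx -t && sudo systemctl reload nginx

# gRPC rides on HTTP/2; confirm the listener speaks it in cleartext
curl -v --http2-prior-knowledge http://d11-lnx-alb01.gcs.cloud:10000/
```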
NGINX Load Balancer for WCF App
This guide demonstrates how to implement a high-performance NGINX load balancer for WCF applications with the following features:

- Enhanced security through SSL/TLS encryption
- Reliable session management using IP-based persistence
- Custom-tuned configurations for WCF service optimisation
- Advanced timeout and buffer settings to handle complex WCF payloads

The configuration ensures reliable, secure, and efficient load balancing specifically optimised for WCF service applications, with built-in session persistence and performance tuning.

1. Install required packages and SSL certificates ...
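The excerpt above truncates before the configuration steps, but the listed features map onto a handful of NGINX directives. The snippet below is a rough sketch of that mapping, with illustrative backend names; it is not the full configuration from the original post.

```nginx
# Sketch only: IP-based persistence plus WCF-friendly timeouts and buffers
upstream wcf_backend {
    ip_hash;                          # same client IP -> same backend server
    server app01.example.local:8199;  # illustrative backend names
    server app02.example.local:8199;
}

server {
    listen 8199 ssl;
    # ssl_certificate / ssl_certificate_key as installed in step 1

    location / {
        proxy_pass https://wcf_backend;
        proxy_read_timeout 300s;      # allow long-running WCF calls
        proxy_buffers 4 256k;         # larger buffers for big SOAP payloads
    }
}
```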
HAProxy Container - Load Balancer
HAProxy Load Balancer with SSL Termination

1. Install Docker

```bash
sudo yum update -y
sudo yum install docker -y
sudo systemctl start docker
sudo systemctl enable docker
```

2. Install Docker Compose

```bash
# Download Docker Compose binary
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make it executable
sudo chmod +x /usr/local/bin/docker-compose

# Verify the installation
docker-compose --version
```

3. Create a Docker Compose file (docker-compose.yml):

```yaml
version: '3'
services:
  haproxy:
    image: haproxy:latest
    ports:
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./certs:/etc/ssl/certs:ro
    restart: always
```

4. Create SSL certificates directory and copy certificates:

```bash
mkdir certs
cp ~/certs/server-bundle.crt certs/
cp ~/certs/server.key certs/
cat certs/server.key certs/server-bundle.crt > certs/server.pem
```

5. Create HAProxy configuration file (haproxy.cfg):

```
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 2000

defaults
    log     global
    mode    http
    option  httplog
    option  forwardfor
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend https_front
    bind *:443 ssl crt /etc/ssl/certs/server.pem
    mode http

    # Add URL path rule for Swagger
    use_backend servers if { path_beg /swagger }
    default_backend servers

backend servers
    mode http
    balance roundrobin
    server win1 d11-api-demo1.gcs.cloud:443 ssl verify none check
    server win2 d11-api-demo2.gcs.cloud:443 ssl verify none check
```

This configuration will route any requests starting with /swagger to your backend servers. The only change needed is adding the path rule in the frontend section. ...
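Before bringing the stack up, the configuration can be checked with HAProxy's built-in -c flag, and the Swagger rule exercised from the Docker host afterwards. These commands assume the files live in the current directory; the request goes to localhost with -k because the certificate will not match that name.

```bash
# Syntax-check haproxy.cfg with the same image used by the stack
docker run --rm \
    -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
    -v "$(pwd)/certs:/etc/ssl/certs:ro" \
    haproxy:latest haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# Start the load balancer and watch its logs
docker-compose up -d
docker-compose logs -f haproxy

# Exercise the /swagger path rule through the SSL frontend
curl -vk https://localhost/swagger/index.html
```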
NGINX Container - Load Balancer
Let’s build a Dockerized NGINX setup with:

- SSL termination using a wildcard cert
- Reverse proxy + load balancing to 2 backend servers
- Mounted volumes for certs and config

1. Updated Step for CA Chain

```bash
# Create the CA chain file:
cat mid-ca.crt ca.crt > ca-bundle.crt
```

| Cert file | Purpose |
| --- | --- |
| server_001.crt | Wildcard cert for your domain |
| server.key | Private key for the wildcard cert |
| ca-bundle.crt | Combined mid-ca.crt + ca.crt (in that order) |

2. Directory Structure (suggested)

```
sh-5.2$ tree
.
└── nginx-lb
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── docker-compose.yml
    ├── nginx
    │   └── nginx.conf
    └── nginx-log
```

3. Create Dockerfile

```dockerfile
FROM nginx:alpine

# Create the log directory inside the container
RUN mkdir -p /var/log/nginx

# Copy NGINX config and certs into the image (will be overridden by volume)
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY certs/ /etc/nginx/certs/

# Expose port 443 for HTTPS
EXPOSE 443
```

4. Create nginx.conf

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Log format definition
    log_format detailed '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$proxy_host" "$upstream_addr"';

    # Access and error logs
    access_log /var/log/nginx/access.log detailed;
    error_log  /var/log/nginx/error.log debug;

    ssl_certificate        /etc/nginx/certs/server_001.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_verify_client      off;

    upstream backend_apis {
        server d11-api-demo1.gcs.cloud:443;
        server d11-api-demo2.gcs.cloud:443;
    }

    server {
        listen 443 ssl;
        server_name d11-alb-ngx01.gcs.cloud;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;

        location / {
            proxy_pass https://backend_apis;

            proxy_ssl_protocols TLSv1.2 TLSv1.3;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
            proxy_ssl_name $host;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
```

...
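The directory tree above lists a docker-compose.yml whose contents are not part of this excerpt. The file below is a minimal sketch that matches that layout and the Dockerfile, offered as an assumption rather than the original file.

```yaml
# Sketch only: compose file matching the nginx-lb layout above
version: '3'
services:
  nginx-lb:
    build: .
    ports:
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
      - ./nginx-log:/var/log/nginx
    restart: always
```

With that in place, `docker-compose up -d --build` builds the image and starts the HTTPS listener on port 443.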
F5 BIG-IP - Sample Configuration
- Certificate Management
- Profile
- Node
- Pool
- Virtual Server
NGINX Load Balancer - Bare Metal
Install NGINX:

```bash
sudo apt update
sudo apt install nginx -y
```

Set SSL Certificates

```
sh-5.2$ sudo mkdir -p /etc/nginx/ssl
sh-5.2$ sudo cp certs/* /etc/nginx/ssl/
sh-5.2$ sudo ls -l /etc/nginx/ssl/
total 32
-rw-r--r--. 1 root root 3830 Apr 13 15:08 ca-bundle.crt
-r--r--r--. 1 root root 1911 Apr 13 15:08 ca.crt
-r--r--r--. 1 root root 1919 Apr 13 15:08 mid-ca.crt
-rw-r--r--. 1 root root 6082 Apr 13 15:08 server-bundle.crt
-rw-------. 1 root root 1704 Apr 13 15:08 server.key
-rw-r--r--. 1 root root 2252 Apr 13 15:08 server_001.crt
-rw-------. 1 root root 3363 Apr 13 15:08 server_001.pfx
sh-5.2$
```

Create the NGINX Load Balancing Config

Edit /etc/nginx/nginx.conf or (preferably) add a new file in /etc/nginx/conf.d/iis-loadbalancer.conf:

...
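The excerpt cuts off before the configuration itself. As a rough sketch only, an iis-loadbalancer.conf along these lines would tie the certificates installed above to a simple HTTPS upstream; the backend and server names are illustrative, not taken from the original post.

```nginx
# Sketch only: /etc/nginx/conf.d/iis-loadbalancer.conf (names are illustrative)
upstream iis_backend {
    ip_hash;
    server d11-api-demo1.gcs.cloud:443;
    server d11-api-demo2.gcs.cloud:443;
}

server {
    listen 443 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass https://iis_backend;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```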