NGINX Load Balancer - Secure gRPC

This guide extends our previous blog post on NGINX Load Balancer for WCF & gRPC by adding SSL connections to the gRPC protocol. The steps are similar; the only change is to the config file bpserver-loadbalancer.conf.

Configuration file location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

# NGINX Load Balancer Configuration for Blue Prism Enterprise
# Defining two upstream blocks for different ports

upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
        proxy_pass_request_headers on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000 ssl;    # Add ssl here
    http2 on;
    server_name d11-lnx-alb01.gcs.cloud;

    # Add SSL certificate configuration
    ssl_certificate /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        grpc_pass grpcs://bpserver_backend_10000;    # Change to grpcs:// for SSL

        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;

        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

See also: NGINX Load Balancer for WCF & gRPC ...
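Before pointing Blue Prism clients at the new listener, a quick smoke test helps; this isn't part of the original config, and it assumes openssl is available on the client and the hostname matches the server_name above:

# Check the syntax and reload NGINX without dropping existing connections
sudo nginx -t && sudo systemctl reload nginx

# Confirm TLS and HTTP/2 (ALPN "h2", required by gRPC) on the secured gRPC port
openssl s_client -connect d11-lnx-alb01.gcs.cloud:10000 -alpn h2 \
  -CAfile /etc/nginx/ssl/ca-bundle.crt </dev/null | grep -E "ALPN|Verify return code"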

April 24, 2025

NGINX Load Balancer for WCF & gRPC

This guide extends our previous blog post on NGINX Load Balancing for WCF Applications by adding gRPC protocol support on port 10000. While the setup process remains similar, we’ll focus on the specific configuration changes needed in the bpserver-loadbalancer.conf file.

Configuration file location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

# NGINX Load Balancer Configuration for Blue Prism Enterprise
# Defining two upstream blocks for different ports

upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
        proxy_pass_request_headers on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000;
    http2 on;    # Add this line to enable HTTP/2
    server_name d11-lnx-alb01.gcs.cloud;

    location / {
        grpc_pass grpc://bpserver_backend_10000;    # Use grpc_pass instead of proxy_pass

        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;

        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

See also: NGINX Load Balancer - Secure gRPC ...
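After reloading NGINX, a rough connectivity check for both listeners might look like the following; this is only a sketch, and a plain HTTP request to the gRPC port will be rejected at the application level, but a successful HTTP/2 exchange confirms the proxy path works:

sudo nginx -t && sudo systemctl reload nginx

# WCF listener (TLS) on 8199
curl -kv https://d11-lnx-alb01.gcs.cloud:8199/

# gRPC listener (plaintext HTTP/2) on 10000; expect a gRPC-level error rather than a connection failure
curl -v --http2-prior-knowledge http://d11-lnx-alb01.gcs.cloud:10000/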

April 23, 2025

NGINX Load Balancer for WCF App

This guide demonstrates how to implement a high-performance NGINX load balancer for WCF applications with the following features:

- Enhanced security through SSL/TLS encryption
- Reliable session management using IP-based persistence
- Custom-tuned configurations for WCF service optimisation
- Advanced timeout and buffer settings to handle complex WCF payloads

The configuration ensures reliable, secure, and efficient load balancing specifically optimised for WCF service applications, with built-in session persistence and performance tuning.

1. Install required packages and SSL certificates ...
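For context, the IP-based persistence listed above comes from NGINX’s ip_hash directive in the upstream block; a minimal sketch (the hostnames are placeholders, not the actual backends from this post) looks like this:

upstream wcf_backend {
    ip_hash;                               # the same client IP always reaches the same backend
    server wcf-backend1.example.com:8199;
    server wcf-backend2.example.com:8199;
}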

April 21, 2025

HAProxy Container - Load Balancer

HAProxy Load Balancer with SSL Termination

1. Install Docker

sudo yum update -y
sudo yum install docker -y
sudo systemctl start docker
sudo systemctl enable docker

2. Install Docker Compose

# Download Docker Compose binary
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make it executable
sudo chmod +x /usr/local/bin/docker-compose

# Verify the installation
docker-compose --version

3. Create a Docker Compose file (docker-compose.yml):

version: '3'
services:
  haproxy:
    image: haproxy:latest
    ports:
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./certs:/etc/ssl/certs:ro
    restart: always

4. Create SSL certificates directory and copy certificates:

mkdir certs
cp ~/certs/server-bundle.crt certs/
cp ~/certs/server.key certs/
cat certs/server.key certs/server-bundle.crt > certs/server.pem

5. Create HAProxy configuration file (haproxy.cfg):

global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 2000

defaults
    log global
    mode http
    option httplog
    option forwardfor
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend https_front
    bind *:443 ssl crt /etc/ssl/certs/server.pem
    mode http
    # Add URL path rule for Swagger
    use_backend servers if { path_beg /swagger }
    default_backend servers

backend servers
    mode http
    balance roundrobin
    server win1 d11-api-demo1.gcs.cloud:443 ssl verify none check
    server win2 d11-api-demo2.gcs.cloud:443 ssl verify none check

This configuration routes any request whose path starts with /swagger to your backend servers; the only change needed is the path rule in the frontend section. ...
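Once the files are in place, one way to bring the stack up and exercise the Swagger rule is shown below; the service name follows the compose file above, and haproxy -c only checks the configuration syntax:

# Start HAProxy and validate the mounted configuration
sudo docker-compose up -d
sudo docker-compose exec haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# Hit the Swagger path rule through the load balancer
curl -k https://localhost/swagger/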

April 13, 2025

NGINX Container - Load Balancer

Let’s build a Dockerized NGINX setup with:

- SSL termination using a wildcard cert
- Reverse proxy + load balancing to 2 backend servers
- Mounted volumes for certs and config

1. Updated Step for CA Chain

# Create the CA chain file:
cat mid-ca.crt ca.crt > ca-bundle.crt

Cert file        Purpose
server_001.crt   Wildcard cert for your domain
server.key       Private key for the wildcard cert
ca-bundle.crt    Combined mid-ca.crt + ca.crt (in that order)

2. Directory Structure (suggested)

sh-5.2$ tree
.
└── nginx-lb
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── docker-compose.yml
    ├── nginx
    │   └── nginx.conf
    └── nginx-log

3. Create Dockerfile

FROM nginx:alpine

# Create the log directory inside the container
RUN mkdir -p /var/log/nginx

# Copy NGINX config and certs into the image (will be overridden by volume)
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY certs/ /etc/nginx/certs/

# Expose port 443 for HTTPS
EXPOSE 443

4. Create nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Log format definition
    log_format detailed '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$proxy_host" "$upstream_addr"';

    # Access and error logs
    access_log /var/log/nginx/access.log detailed;
    error_log /var/log/nginx/error.log debug;

    ssl_certificate /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_verify_client off;

    upstream backend_apis {
        server d11-api-demo1.gcs.cloud:443;
        server d11-api-demo2.gcs.cloud:443;
    }

    server {
        listen 443 ssl;
        server_name d11-alb-ngx01.gcs.cloud;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;

        location / {
            proxy_pass https://backend_apis;
            proxy_ssl_protocols TLSv1.2 TLSv1.3;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
            proxy_ssl_name $host;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
...
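The directory tree above lists a docker-compose.yml that this excerpt doesn’t show; a minimal sketch consistent with the Dockerfile and mounts described here (the service name and log mount are assumptions) could be:

services:
  nginx-lb:
    build: .
    ports:
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
      - ./nginx-log:/var/log/nginx
    restart: always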

April 13, 2025

F5 BIG-IP - Sample Configuration

Certificate Management, Profile, Node, Pool, Virtual Server

April 13, 2025

NGINX Load Balancer - Bare Metal

Install NGINX:

sudo apt update
sudo apt install nginx -y

Set SSL Certificates

sh-5.2$ sudo mkdir -p /etc/nginx/ssl
sh-5.2$ sudo cp certs/* /etc/nginx/ssl/
sh-5.2$ sudo ls -l /etc/nginx/ssl/
total 32
-rw-r--r--. 1 root root 3830 Apr 13 15:08 ca-bundle.crt
-r--r--r--. 1 root root 1911 Apr 13 15:08 ca.crt
-r--r--r--. 1 root root 1919 Apr 13 15:08 mid-ca.crt
-rw-r--r--. 1 root root 6082 Apr 13 15:08 server-bundle.crt
-rw-------. 1 root root 1704 Apr 13 15:08 server.key
-rw-r--r--. 1 root root 2252 Apr 13 15:08 server_001.crt
-rw-------. 1 root root 3363 Apr 13 15:08 server_001.pfx
sh-5.2$

Create the NGINX Load Balancing Config

Edit /etc/nginx/nginx.conf or (preferably) add a new file in /etc/nginx/conf.d/iis-loadbalancer.conf: ...
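The excerpt stops before the load-balancer config itself; whatever ends up in iis-loadbalancer.conf, the usual follow-up on a bare-metal install is to validate and reload, for example:

sudo nginx -t                      # syntax-check everything under /etc/nginx, including conf.d
sudo systemctl enable --now nginx  # start NGINX now and on boot
sudo systemctl reload nginx        # pick up the new load-balancer config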

April 13, 2025

NGINX Container - Secure Web Page

Why Choose NGINX for Your Web Server?

- It’s lightweight and high-performance
- Excellent for serving static content and as a reverse proxy
- Simple configuration syntax
- Very popular in containerized environments

So, let’s create a Docker container with Nginx and SSL!

First, create a directory structure:

cd ~
aws s3 cp s3://BUCKET NAME/ . --recursive
sudo yum install unzip tree -y
mkdir nginx-ssl
unzip certs.zip
mv certs nginx-ssl/
unzip html.zip
mv html nginx-ssl/
cd nginx-ssl
mkdir conf

Create nginx.conf in the conf directory (change server_name to match your host):

server {
    listen 443 ssl;
    server_name d11-vdi-lin04.gcs.cloud;
    root /usr/share/nginx/html;

    location / {
        index index.html;
    }

    ssl_certificate /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_protocols TLSv1.2 TLSv1.3;
}

Create the full certificate chain by concatenating the certificates in the correct order:

cd certs
cat mid-ca.crt ca.crt > ca-bundle.crt
cat server_001.crt mid-ca.crt ca.crt > server-bundle.crt

Create Dockerfile:

FROM nginx:alpine

RUN mkdir -p /etc/nginx/certs

# Copy SSL certificates
COPY certs/ca-bundle.crt /etc/nginx/certs/
COPY certs/server_001.crt /etc/nginx/certs/
COPY certs/server.key /etc/nginx/certs/

COPY conf/nginx.conf /etc/nginx/conf.d/default.conf
COPY html /usr/share/nginx/html

EXPOSE 443

CMD ["nginx", "-g", "daemon off;"]

Make sure your HTML content is organized in a directory structure like this:

.
└── nginx-ssl
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── conf
    │   └── nginx.conf
    └── html
        ├── colour.conf
        ├── img
        │   └── GCS-AWS-logo_32_v02.png
        ├── index.html
        └── swagger
            └── ui
                └── index
                    ├── img
                    │   └── Tech-Task-v07.png
                    └── index.html

Build and run the container:

# Build the image
sudo docker build -t my-secure-nginx:latest .

# Run the container
sudo docker run -d --name secure-nginx \
  -p 443:443 \
  --restart always \
  my-secure-nginx:latest

Check the status using the curl command:

# -k flag to allow insecure connections
curl -k https://localhost

# Or specify your domain
curl -k https://your-domain.com

# To get more detail, use the -v (verbose) flag
curl -kv https://localhost

See also: Deploy a Amazon Linux 2023 ...
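Beyond the curl checks, openssl can confirm the container presents the expected certificate chain; this is an extra step not in the original walkthrough and assumes the ca-bundle.crt built earlier is still in the certs directory:

# Inspect the served certificate and verify it against the private CA bundle
openssl s_client -connect localhost:443 -CAfile certs/ca-bundle.crt </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates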

April 10, 2025

SQL Server Container with Tools

File and Folder Structure at the end

Create mssql.conf:

[network]
tlscert = /var/opt/mssql/secrets/server-bundle.crt
tlskey = /var/opt/mssql/secrets/server.key
tlsprotocols = 1.2
forceencryption = 1

Create Dockerfile:

# Use the official Microsoft SQL Server 2022 image as base
FROM mcr.microsoft.com/mssql/server:2022-latest

# Switch to root to install packages
USER root

# Install required dependencies
RUN apt-get update && \
    apt-get install -y curl apt-transport-https gnupg2 && \
    mkdir -p /etc/apt/keyrings && \
    curl -sSL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > /etc/apt/keyrings/microsoft.gpg && \
    chmod 644 /etc/apt/keyrings/microsoft.gpg && \
    echo "deb [signed-by=/etc/apt/keyrings/microsoft.gpg] https://packages.microsoft.com/ubuntu/22.04/prod jammy main" > /etc/apt/sources.list.d/mssql-release.list && \
    apt-get update && \
    ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev && \
    ln -s /opt/mssql-tools/bin/sqlcmd /usr/bin/sqlcmd && \
    ln -s /opt/mssql-tools/bin/bcp /usr/bin/bcp && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Switch back to mssql user
USER mssql

Build an image

# Build new image
sudo docker build -t mssql-with-tools .

Run commands

# Run new container
sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Password123' \
  -p 1433:1433 \
  -v /data/containers/sql1/data:/var/opt/mssql/data \
  -v /data/containers/sql1/log:/var/opt/mssql/log \
  -v sql-certs:/var/opt/mssql/secrets:ro \
  -v /data/mssql.conf:/var/opt/mssql/mssql.conf:ro \
  --restart always \
  --name sql1 \
  -d mssql-with-tools

Verify installation:

# Test sqlcmd
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -?
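As a follow-up check (not in the original post), the same sqlcmd build can confirm that forceencryption is actually in effect; -N requests an encrypted connection, -C trusts the server certificate (handy when the private CA isn’t installed inside the container), and the password matches the run command above:

sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'Password123' -N -C \
  -Q "SELECT session_id, encrypt_option FROM sys.dm_exec_connections WHERE session_id = @@SPID"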

April 2, 2025

RabbitMQ Container - SSL

Create a container (SSL)

First, create a new working directory and prepare your certificate files:

mkdir gcs-secure-rabbit
cd gcs-secure-rabbit
mkdir certs

# Copy your certificates to gcs-secure-rabbit/certs:
# - ca.crt
# - mid-ca.crt
# - server-001.crt
# - server-001.key

Set permissions 644 on these certificates:

chmod 644 certs/*

Create a rabbitmq.conf (gcs-secure-rabbit/rabbitmq.conf):

# RabbitMQ Configuration File

# Disable non-SSL listeners
listeners.tcp = none
listeners.ssl.default = 5671

# SSL configuration
ssl_options.cacertfile = /etc/rabbitmq/ssl/ca-bundle.crt
ssl_options.certfile = /etc/rabbitmq/ssl/server.crt
ssl_options.keyfile = /etc/rabbitmq/ssl/server.key
ssl_options.verify = verify_peer
ssl_options.depth = 2
ssl_options.fail_if_no_peer_cert = true

# Management SSL configuration
management.ssl.port = 15671
management.ssl.cacertfile = /etc/rabbitmq/ssl/ca-bundle.crt
management.ssl.certfile = /etc/rabbitmq/ssl/server.crt
management.ssl.keyfile = /etc/rabbitmq/ssl/server.key

Create a Dockerfile (gcs-secure-rabbit/Dockerfile):

FROM rabbitmq:3.11.10-management

# Create SSL directory
RUN mkdir -p /etc/rabbitmq/ssl

# Copy certificates
COPY certs/ca.crt certs/mid-ca.crt /etc/rabbitmq/ssl/
COPY certs/server-001.crt /etc/rabbitmq/ssl/server.crt
COPY certs/server-001.key /etc/rabbitmq/ssl/server.key

# Create bundle certificate
RUN cat /etc/rabbitmq/ssl/mid-ca.crt /etc/rabbitmq/ssl/ca.crt > /etc/rabbitmq/ssl/ca-bundle.crt

# Copy config file
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf

# Expose SSL ports
EXPOSE 5671 15671

CMD ["rabbitmq-server"]

Build and run the container:

# Build the image
sudo docker build -t gcs-secure-rabbit:latest .

# Run the container
sudo docker run -d --hostname secure-rabbit --name secure-rabbit \
  -p 15671:15671 \
  -p 5671:5671 \
  --restart always \
  gcs-secure-rabbit:latest

Check the container logs after running it:

sudo docker logs secure-rabbit

See also: RabbitMQ Container - HTTP ...
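A hedged way to test the TLS listener from the host: because fail_if_no_peer_cert is true, the broker will reject a handshake that doesn’t present a client certificate, so client.crt and client.key below are placeholders for any certificate issued by the same CA (the post’s excerpt doesn’t include one):

# Build the same CA bundle on the client side, then probe port 5671
cat certs/mid-ca.crt certs/ca.crt > certs/ca-bundle.crt
openssl s_client -connect localhost:5671 -CAfile certs/ca-bundle.crt \
  -cert client.crt -key client.key </dev/null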

March 30, 2025