NGINX Plus

NGINX Plus is a commercial version of the popular open-source NGINX web server and reverse proxy. NGINX provides performance, scalability, and reliability for serving web content and handling web traffic efficiently. NGINX Plus builds on the open-source feature set with additional enterprise-grade features and support. Traceable supports NGINX Plus R16 and later.


Deployment steps

Deploying Traceable’s Tracing agent for NGINX Plus consists of the following steps:

  1. Verify NGINX deployment type 
  2. Download the Tracing agent 
  3. Untar the downloaded file 
  4. Copy the *.so file to the modules directory 
  5. Configure nginx.conf  file 
  6. Reload NGINX

Note
Traceable supports blocking on Alpine Linux 3.9 and later and requires Traceable Platform agent 1.21.2 or later. Make sure that libstdc++ is installed on your Alpine Linux deployment.
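
On Alpine Linux, for example, the runtime library can be added with apk (the command is a no-op if the package is already installed):

# install libstdc++, which the blocking feature depends on
apk add libstdc++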

Installation

Complete the following steps to deploy Traceable's NGINX tracing agent:

Step 1 - Verify NGINX deployment type

Enter the nginx -V command to verify that your NGINX build is compiled with the --compat flag. NGINX compiled with --compat allows loading dynamic modules.

# nginx -V
nginx version: nginx/1.19.0
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d  10 Sep 2019
TLS SNI support enabled
  • If you do not see the --compat flag in the output, obtain or build an NGINX version that is compiled with --compat.
  • Take note of the value of --modules-path in the output, for example, --modules-path=/usr/lib/nginx/modules or --modules-path=/usr/lib64/nginx/modules.
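
nginx -V writes its output to stderr, so redirect it before filtering. For example, to pull out the compat flag and the modules path in one go:

# split the configure arguments onto separate lines, then keep the relevant ones
nginx -V 2>&1 | tr ' ' '\n' | grep -E 'compat|modules-path'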

Step 2 - Download the Tracing agent

To download the Tracing agent, navigate to Traceable’s download site and click agent > nginx > latest, then search for your NGINX version. If you are running Alpine, download the Alpine build. Otherwise, select the one prefixed with Linux.
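
If you prefer the command line, copy the download link from the site and fetch it with curl. The URL below is only a placeholder, not an actual path:

# fetch the tar file for your NGINX version (placeholder URL; copy the real link from the download site)
curl -fLO https://<traceable-download-site>/agent/nginx/latest/<your-nginx-version>.so.tgz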

Step 3 - Untar the downloaded file

Enter the following command to untar the downloaded file:

tar -xvzf <filename>.so.tgz

After you untar the file, you get the following two files:

  • ngx_http_traceableai_module.so - This is the dynamic module file.
  • libtraceable.so - NGINX's blocking feature depends on this library file.

Important:
Blocking is not supported with NGINX on Alpine, and this file is not included in the Alpine tar file (tgz).

Step 4 - Copy the *.so files

Enter the following command to copy the *.so files to the modules directory:

sudo cp *.so /path/to/modules

You can find the path to your modules directory by running the nginx -V command. For example, the path could be: --modules-path=/usr/lib64/nginx/modules.

Step 5 - Configure nginx.conf file

You can use vi editor or any editor of your choice to edit the nginx.conf file. The following command uses vi editor:

sudo vi /etc/nginx/nginx.conf

Add the load_module directive to the top of the file (after the user directive):

load_module modules/ngx_http_traceableai_module.so;

Add the traceableai block inside the http block. Replace YOUR-SERVICE-NAME with the service name for your environment.

traceableai {
  service_name nginx-YOUR-SERVICE-NAME;
  collector_host <collector_hostname>; #NOTE: prefixing with protocol (http[s]) will fail silently. Spans will not show in Traceable.
  collector_port 9411;
  blocking on; # To enable blocking
}

Content capture

You can configure the types of content you wish to capture by setting capture_content_types inside the traceableai section. Capturing the following content types is supported:

  • xml
  • json
  • grpc
  • x-www-form-urlencoded

For example, to capture xml and json, specify the directive as capture_content_types xml json;. If the directive is not specified, Traceable by default captures json, grpc, and x-www-form-urlencoded content, as these are the most common content types.

traceableai {
    service_name "nginx";
    collector_host "host.docker.internal";
    collector_port 9411;
    blocking on;
    blocking_log_to_console on;
    config_endpoint host.docker.internal:5441;
    config_polling_period 30;
    #sampling off;
    api_discovery on;
    # capture content type
    capture_content_types xml json;
    #blocking_skip_internal_request off;
    blocking_status_code 472;
}

opentracing and opentracing_propagate_context

Add the opentracing and opentracing_propagate_context directives after the traceableai block. For more information on opentracing_propagate_context, see Broken Correlation.

...
load_module modules/ngx_http_traceableai_module.so;
...
http {
    traceableai {
        service_name nginx-YOUR-SERVICE-NAME;
        collector_host <collector_hostname>;
        collector_port 9411;
        blocking on;
    }

    opentracing on;
    opentracing_propagate_context;

    server {
        ...
    }
}
...truncated nginx.conf...

Optional - disable tracing

You can disable tracing for individual location blocks. To do so, add opentracing off; inside the location block that you want to exclude from tracing.

location /static {
    # disable monitoring for the /static location
    opentracing off;
    proxy_pass http://static-content:8080;
}

location /api {
    proxy_pass http://apiserver:9090;
}

Optional - Add custom error code

You can set a custom error code in the 4xx series as the response status code when a request is blocked. Configure this in the nginx.conf file by setting the blocking_status_code field in the traceableai section. The default value is 403. Following is an example snippet with the last line setting blocking_status_code to 472.

traceableai {
    service_name "nginx";
    collector_host "host.docker.internal";
    collector_port 9411;
    blocking on;
    blocking_log_to_console on;
    config_endpoint host.docker.internal:5441;
    config_polling_period 30;
    #sampling off;
    api_discovery on;
    #blocking_skip_internal_request off;
    blocking_status_code 472;
}

Following is a sample nginx.conf file:

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

load_module modules/ngx_http_traceableai_module.so;

events {
    worker_connections  1024;
}

http {
    traceableai {
        service_name "nginx";
        collector_host "host.docker.internal";
        collector_port 9411;
        blocking on;
        blocking_log_to_console on;
        config_endpoint host.docker.internal:5441;
        config_polling_period 30;
        #sampling off;
        api_discovery on;
        #blocking_skip_internal_request off;
        blocking_status_code 472;
    }

    opentracing on;
    opentracing_trace_locations on;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }

    include /etc/nginx/conf.d/*.conf;
}

Step 6 - Reload NGINX

Enter the following command to reload NGINX:

sudo nginx -s reload
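
You can validate the edited configuration before reloading; nginx -t checks the syntax and exits with an error if the module or configuration cannot be loaded:

sudo nginx -t && sudo nginx -s reload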

Definitions

  • traceableai - syntax: traceableai {}; context: http.
  • service_name - syntax: service_name <identifiable_name_of_the_service>; default: nginx; context: traceableai. Give a meaningful service name so that you can identify the service on Traceable’s dashboard.
  • collector_host - syntax: collector_host <hostname_of_collector>; default: collector.traceableai; context: traceableai.
  • collector_port - syntax: collector_port <port_of_collector>; default: 9411; context: traceableai.
  • blocking - syntax: opa on|off; default: off; context: traceableai. OPA is disabled by default; it is required if you wish to enable blocking in an NGINX deployment.
  • opa_server - syntax: opa_server <opa_server_address>; default: http://opa.traceableai:8181; context: traceableai. URL of the OPA service with which the NGINX module communicates.
  • opentracing - syntax: opentracing on|off; default: off; context: http, server, location. Enables or disables OpenTracing for NGINX requests.
  • opentracing_propagate_context - syntax: opentracing_propagate_context; context: http, server, location. Propagates the active span context to upstream requests. For more information, see Inject and Extract.

Verification

Complete the following steps to verify that the installation was successful:

  1. Exercise the applications routing through NGINX.
  2. Visit the Traceable Platform and confirm that traces are showing up.

Generate traffic (optional)

Traceable discovers APIs and displays metrics based on user traffic, so you need to proxy your application through NGINX. For example, if you have an application running at http://YourApplication.com:9191/order, add the following block to the /etc/nginx/conf.d/default.conf file.

location /order {
    proxy_pass http://YourApplication.com:9191/order;
    # set any proxy header you want to pass on
}

After configuring the above, you can access your application at http://nginx-proxy.YourApplication.com/order. This assumes that the NGINX proxy installed on nginx-proxy.YourApplication.com is configured to accept traffic on port 80. You can also use HTTPS instead of HTTP.
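
For example, assuming the proxy accepts traffic on port 80 as described above, the following request should produce a trace:

curl http://nginx-proxy.YourApplication.com/order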


Upgrade

To upgrade the NGINX tracing agent:

  1. Install the module as described in the Installation section.
  2. Update the configuration in nginx.conf if needed.
  3. Restart NGINX, as shown below.
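
For example, on a systemd-managed host (an assumption; adjust to however NGINX is supervised in your environment):

sudo nginx -t && sudo systemctl restart nginx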

Uninstall

Complete the following steps to uninstall the NGINX tracing agent:

  1. Remove the dynamic library files ngx_http_traceableai_module.so and libtraceable.so (see the example below).
  2. Remove the Traceable configuration lines from nginx.conf.
  3. Restart NGINX.
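
For example, assuming the modules directory from Step 4 and a systemd-managed NGINX (adjust the paths and the restart command to your environment):

# run this only after the Traceable lines have been removed from nginx.conf
sudo rm /usr/lib64/nginx/modules/ngx_http_traceableai_module.so \
        /usr/lib64/nginx/modules/libtraceable.so
sudo nginx -t && sudo systemctl restart nginx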

Troubleshooting

No Traces in Traceable

Tail the NGINX error logs:

sudo tail -f /var/log/nginx/error.log
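
If nothing stands out, you can filter for agent-related messages. This assumes the module's log lines mention "traceable", which may not hold for every version:

sudo grep -i traceable /var/log/nginx/error.log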

Broken correlation

Identify all the location blocks in your NGINX configuration files. If they have the proxy_set_header directive defined, you need to add opentracing_propagate_context to those location blocks. Also check for any include statements that could be pulling in that directive. See the note below.

opentracing_propagate_context internally uses proxy_set_header to pass the request context to upstream. proxy_set_header directives are inherited from the previous configuration level if and only if there are no proxy_set_header directives defined on the current level.

When requests are proxied upstream, the request context is not propagated if both of the following are true:

  • opentracing_propagate_context is defined in the http or server block, AND
  • a proxy_set_header directive exists in the location block.

If proxy_set_header exists inside the location block, define opentracing_propagate_context in the location block only.
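
A minimal sketch of the fix, reusing the hypothetical /api location from earlier (adapt the header and upstream to your configuration):

location /api {
    proxy_set_header Host $host;     # a proxy_set_header here stops inheritance from outer levels,
    opentracing_propagate_context;   # so propagation must be declared in this location block
    proxy_pass http://apiserver:9090;
}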

