Running with Nginx
Stacks do not exist. As soon as you change your database you're no longer LAMP or MEAN. Drop the term. Even then, the term only applies to an application component; it doesn't describe you. If you're a "Windows guy", learn Linux. If you're a "LAMP guy", you should at least have some clue about Windows. Don't marry yourself to only AWS or Azure. Learn both. Use both. They're tools. Only a fool deliberately limits his toolbox.
No matter your interests, you really should learn Nginx.
So, what is it? The older marketing says it's a "reverse proxy". In reality, the purposes and use cases for Nginx have changed over the years as other technologies have grown. Today, it's a tool for decoupling application TCP connectivity. You put it anywhere you want to decouple the outside from the inside. A load balancer does this decoupling by sitting between incoming connections and a series of servers. A web server does this by sitting between an HTTP call and internal web content. A TLS terminator does this by sitting between everything external and unencrypted internal resources. If Nginx isn't a fit for one specific scenario, it's likely a perfect fit for another in the same infrastructure.
In older web hosting models, your web server handles BOTH the processing of the content AND the HTTP serving. It does too much. As much as IIS7 is an improvement over IIS6 (no more ISAPI), it still suffers from this: it's both running .NET and serving the content. The current web server model handles this differently: uWSGI runs Python, PM2 runs Node, and Kestrel runs .NET Core. Nginx handles the HTTP traffic and deals with all the TLS certs.
The days of having to deal with IIS and Apache are largely over. Python, Node, and .NET Core each know how to run their own code, and Nginx knows TCP. The concepts have always been separate; now the processes are separate.
Let's run through some use cases and design patterns...
Adding Authentication
I'm going to start off with a classic example: adding username / password authentication to an existing web API.
Elasticsearch is one of my favorite database systems; yet, it doesn't have native support for SSL (more on SSL later) or authentication. There's a tool called Shield for that, but it's overkill when I don't care about multiple users. Nginx came to the rescue. Below is a basic Nginx config. You should be able to look at it and get an idea of what's going on.
server {
    listen 10.1.60.3;
    auth_basic "ElasticSearch";
    auth_basic_user_file /etc/nginx/es-password;
    location / {
        proxy_pass http://127.0.0.1:9200;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
I'm passing all traffic on to port 9200. This port is only bound locally, so HTTP isn't even publicly accessible. You can also see I'm setting some optional headers.
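For that to hold, Elasticsearch itself should be bound to loopback. In elasticsearch.yml that's typically a one-liner (the exact property name varies by ES version):

network.host: 127.0.0.1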
The es-password file was created using the Apache tool htpasswd:
sudo htpasswd -c /etc/nginx/es-password searchuser
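A quick sanity check from another machine, with yourpassword standing in for whatever you chose:

curl -u searchuser:yourpassword http://10.1.60.3/_cluster/health

Without valid creds, Nginx answers 401 before anything reaches Elasticsearch.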
SSL/TLS Redirect
Let's lighten up a bit with a simpler example...
There are myriad ways to redirect from HTTP to HTTPS. Nginx is my new favorite way:
server {
    listen 222.222.222.222:80;
    server_name example.net;
    server_name www.example.net;
    return 301 https://example.net$request_uri;
}
Avoid the temptation of keeping a separate SSL config like Apache did. Your Nginx configurations should be organized by domain, not by function. Your configuration file will be example.net.conf, and it will house all the configuration for example.net: the core functionality and the SSL/TLS redirect alike.
Accessing localhost-only services
There was a time when I needed to download some files from my Google Drive to a Linux server. rclone seemed to be an OK way to do that. During setup, it wanted me to go through the OpenID/OAuth stuff to give it access. Good stuff, but...
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
127.0.0.1?!? That's a remote server! Using the Lynx browser over SSH wasn't going to cut it. Then I realized the answer: Nginx.
Here's what I did:
server {
    listen 111.111.111.111:8080;
    location / {
        proxy_pass http://127.0.0.1:53682;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host 127.0.0.1;
    }
}
Then I could access the server in my own browser using http://111.111.111.111:8080/auth.
Boom! I got the Google authorization screen right away and everything came together.
Making services local only
This brings up an interesting point: what if you had a public service you didn't want to be public, but didn't have a way to secure it -- or, perhaps, you just wanted to change the port?
In a situation where I had to cheat, I'd cheat by telling iptables (the Linux firewall) to block that port, then use Nginx to open the new one.
For example:
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -j DROP
This says: allow loopback traffic and traffic to port 8080, but drop everything else.
If you do this, you need to save the rules using something like iptables-save > /etc/iptables/rules.v4. On Ubuntu, you can get this via apt-get install iptables-persistent.
Then, you can do something like the previous Nginx example to take traffic from a different port.
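Picking up the iptables example: say the service originally listened publicly on a hypothetical port 9000. Rebind it to 127.0.0.1 and let Nginx own the new public port:

server {
    listen 8080;
    location / {
        proxy_pass http://127.0.0.1:9000;
    }
}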
Better yet, use firewalld. Using iptables directly is as old and obsolete as Apache.
TCP/UDP Connectivity
The examples here are just snippets. They actually go inside an http block like the following:
http {
    upstream myservers {
        ip_hash;
        server server1.example.com;
        server server2.example.com;
        server server3.example.com;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myservers;
        }
    }
}
What about just UDP/TCP? Nginx isn't merely for HTTP. You can and should use it for raw UDP/TCP too. You just wrap it differently:
stream {
    upstream myservers {
        server server1.example.com:8452;
        server server2.example.com:8452;
        server server3.example.com:8452;
    }
    server {
        listen 8452;
        proxy_pass myservers;
    }
}
No http:// and no location. Now you're load balancing TCP connections without needing to be HTTP aware.
How about UDP?
listen 8452 udp;
Done.
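As a concrete (hypothetical) instance, here's DNS load balanced across two internal resolvers; the addresses are placeholders:

stream {
    upstream dnsservers {
        server 10.0.0.1:53;
        server 10.0.0.2:53;
    }
    server {
        listen 53 udp;
        proxy_pass dnsservers;
    }
}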
File Serving
These days you want to keep your static content on a CDN. Serving static files from your own server can be expensive; where possible, avoid it.
Some clear cases where you'd serve static files are robots.txt and favicon.ico. In your existing Nginx config, you'd add the following...
location /robots.txt {
    alias /srv/robots.txt;
}
location /favicon.ico {
    alias /srv/netfx/netfxdjango/static/favicon.ico;
}
For a pure SPA application, you'd throw index.html in there as well. The SPA assets would load from the CDN. At this point your server would only serve three files.
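A common way to wire that up, assuming your build output lives in a hypothetical /srv/app, is a try_files fallback: serve the file if it exists, otherwise hand back index.html and let the SPA router take it from there:

location / {
    root /srv/app;
    try_files $uri /index.html;
}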
Inline Content
If you don't want to deal with a separate file, you can just return the content directly:
location /robots.txt {
    add_header Content-Type text/plain always;
    return 200 'User-agent: ia_archiver
Disallow: /';
}
No big deal.
Setting the Host Header (Azure App Example)
So, you've got a free/shared Azure Web App. You've got your free hosting, free subdomain, and even free SSL. Now you want your own domain and your own SSL. What do you do? Throw money at it? Uh... no. Not if you were proactive and kept a Linux server around.
This is actually a true story of how I run some of my websites. You only get so much free bandwidth and computing with the free Azure Web Apps, so you have to be careful. The trick to being careful is Varnish.
The marketing for Varnish says it's a caching server. As with all marketing, they're trying to make something sound less cool than it really is (though that's never their goal). Varnish can also be a load balancer or handle failover. In this case, though, yeah, it's a caching server.
Basically: I tell Varnish to listen on port 8080 on localhost. It takes traffic and provides responses. If it needs something, it goes back to the source server to get the content. Most hits to the server are handled by Varnish; Azure breathes easy.
Because the Varnish config is rather verbose and because it's only tangentially related to this topic, I really don't want to dump a huge Varnish config here. So, I'll give snippets:
backend mydomain {
    .host = "mydomain.azurewebsites.net";
    .port = "80";
    .probe = {
        .interval = 300s;
        .timeout = 60s;
        .window = 5;
        .threshold = 3;
    }
    .connect_timeout = 50s;
    .first_byte_timeout = 100s;
    .between_bytes_timeout = 100s;
}
sub vcl_recv {
    #++ more here
    if (req.http.host == "123.123.123.123" || req.http.host == "www.example.net" || req.http.host == "example.net") {
        set req.http.host = "mydomain.azurewebsites.net";
        set req.backend = mydomain;
        return (lookup);
    }
    #++ more here
}
This won't make much sense without the Nginx piece:
server {
    listen 123.123.123.123:443 ssl;
    server_name example.net;
    server_name www.example.net;
    ssl_certificate /srv/cert/example.net.crt;
    ssl_certificate_key /srv/cert/example.net.key;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host mydomain.azurewebsites.net;
    }
}
Here's what to look for in this:
proxy_set_header Host mydomain.azurewebsites.net;
Nginx sets up a listener for SSL on the public IP. It will send requests to localhost:8080.
On the way, it will make sure the Host header says "mydomain.azurewebsites.net". This does two things:
* First, Varnish will be able to detect that and send it to the proper backend configuration (above it).
* Second, Azure will give you a website based on the `Host` header. That needs to be right. That one line is the difference between getting your correct website or getting the standard Azure website template.
In this example, Varnish is checking the host because Varnish is handling multiple IP addresses, multiple hosts, and caching for multiple Azure websites. If you have only one, then these Varnish checks are superfluous.
A lot of systems rely on the Host header. Because raw HTTP is largely deprecated, you're going to be using SSL/TLS everywhere, and you need to make sure your server's name matches the Host header. You'll see proxy_set_header Host SOMETHING a lot.
Load-balancing
A very common use case for Nginx is as a load-balancer.
For each instance of load-balancing, you need to examine your scenario to see if Nginx, HAProxy, your cloud's load balancer, or another product is called for. Some intelligent load-balancing features are only available with Nginx Plus.
Nginx load balancing is simple:
upstream myservers {
    server server1.example.com;
    server server2.example.com;
    server server3.example.com;
}
server {
    listen 80;
    location / {
        proxy_pass http://myservers;
    }
}
Of course, that's a bit naive. If you have systems where a connection must always return to the same backend system, you need to set some type of session persistence. Also stupid simple:
upstream myservers {
    ip_hash;
    server server1.example.com;
    server server2.example.com;
    server server3.example.com;
}
There are other modes than ip_hash, but those are in the docs.
Sometimes some systems are more powerful than others and can thus handle more traffic. Just set weights:
upstream myservers {
    server server1.example.com weight=4;
    server server2.example.com;
    server server3.example.com;
}
There's not a lot to it.
What if you wanted to send traffic to whatever server had the least number of connections?
upstream myservers {
    least_conn;
    server server1.example.com;
    server server2.example.com;
    server server3.example.com;
}
You can't really brag about your ability to do this.
Failover
Similar to a load-balancer is a server used for failover. As I've said, Nginx is about decoupling application TCP connectivity. This is yet another instance of that.
Say you had an active/passive configuration where traffic goes to server A, but you want server B used when server A is down. Easy:
upstream myservers {
    server a.example.com fail_timeout=5s max_fails=3;
    server b.example.com backup;
}
Done.
Verb Filter
Back to Elasticsearch...
It uses various HTTP verbs to get the job done: you can POST, PUT, and DELETE to insert, update, or delete respectively, or you can use GET to do your searches. How about a security model where I only allow searches?
Here's a poor man's method that works:
server {
    listen 222.222.222.222:80;
    location / {
        limit_except GET {
            deny all;
        }
        proxy_pass http://127.0.0.1:9200;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
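Here's a quick way to prove it to yourself, assuming a hypothetical index named myindex; note that allowing GET in limit_except implicitly allows HEAD as well:

curl -XGET 'http://222.222.222.222/myindex/_search?q=*'
curl -XDELETE 'http://222.222.222.222/myindex'

The GET returns search results; the DELETE comes back 403 Forbidden from Nginx without ever touching Elasticsearch.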
Verb Filter (advanced)
When using Elasticsearch, you have the option of accessing your data directly without the need for a server-side anything. In fact, your AngularJS (or whatever) applications can get data directly from ES. How? It's just an HTTP endpoint.
But, what about updating data? Surely you need some type of .NET/Python bridge to handle security, right? Nah.
Check out the following location blocks:
location ~ /_count {
    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}
location ~ /_search {
    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}
location ~ /_ {
    limit_except OPTIONS {
        auth_basic "Restricted Access";
        auth_basic_user_file /srv/es-password;
    }
    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}
location / {
    limit_except GET HEAD {
        auth_basic "Restricted Access";
        auth_basic_user_file /srv/es-password;
    }
    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}
Here I'm saying: you can access anything with _count (this is how you get counts from ES) and anything with _search (this is how you query), but if you're accessing something else containing an underscore, you need to provide creds (unless it's an OPTIONS request, which allows CORS to work). Finally, if you're accessing / directly, you can send GET and HEAD, but you need creds to do a POST, PUT, or DELETE.
You can add credential handling to your AngularJS/JavaScript application by sending creds via https://username:password@example.net.
Domain Unification
In the previous example, we have an Elasticsearch service. What about our website? Do we really want to deal with both domain.com and search.domain.com, and the resulting CORS nonsense? Do we really, REALLY want to deal with multiple SSL certs?
No, we don't.
In this case, you can use Nginx to unify your infrastructure to use one domain.
Let's just update the / in the previous example:
location / {
    limit_except GET HEAD {
        auth_basic "Restricted Access";
        auth_basic_user_file /srv/es-password;
    }
    proxy_pass http://myotherwebendpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}
Now / gets its content from a different place than the other locations.
Let's really bring it home:
location /api {
    proxy_pass http://myserviceendpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}
Now /api points to your API service.
Now you only have to deal with domain.com while having three different services / servers internally.
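Put together, the unified front ends up looking something like this sketch (the upstream names are the placeholders from above, and /search needs the path rewrite covered in the next section):

server {
    listen 443 ssl;
    server_name example.net;
    # ... ssl stuff here ...
    location / {
        proxy_pass http://myotherwebendpoint;
    }
    location /api {
        proxy_pass http://myserviceendpoint;
    }
    location /search {
        proxy_pass http://elastic;
    }
}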
Unification with Relative Paths
Building on the previous example, what if you wanted to unify Elasticsearch with your domain?
This seems simple enough, but it's not the following:
location /search {
    proxy_pass http://127.0.0.1:9200;
}
This would send traffic to http://127.0.0.1:9200/search whereas Elasticsearch listens on http://127.0.0.1:9200.
You want to map your /search to its /.
That's doable:
location ~ ^/search {
    rewrite ^/search/?(.*) /$1 break;
    proxy_pass http://127.0.0.1:9200;
}
This says: everything that starts with /search goes to / because the pattern "starts with /search with an optional trailing slash and optional trailing characters" gets replaced with "slash plus those optional trailing characters".
For notes on break, see "nginx url rewriting: difference between break and last".
General Rewriting
You can use the idea in the previous example for local rewriting as well. This isn't just about mapping logical files to physical ones; it also effectively gives you aliases of aliases:
rewrite ^/robots.txt /static/robots.txt;
location /static {
    alias /srv/common;
}
This lets robots.txt be accessed via /robots.txt as well as /static/robots.txt.
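A quick check, assuming the above is live and /srv/common/robots.txt exists:

curl -I http://example.net/robots.txt
curl -I http://example.net/static/robots.txt

Both should return the same 200 with the same content length.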
Killing 1990s "www."
Nobody types "www."; it's never on business cards; nobody says it; and most people forgot it exists. Why? This isn't 1997. The most important part of getting a pretty URL is removing this nonsense. Nginx to the rescue:
server {
    listen 222.222.222.222:80;
    server_name example.net;
    server_name www.example.net;
    return 301 https://example.net$request_uri;
}
server {
    listen 222.222.222.222:443 ssl http2;
    server_name www.example.net;
    # ... ssl stuff here ...
    return 301 https://example.net$request_uri;
}
server {
    listen 222.222.222.222:443 ssl http2;
    server_name example.net;
    # ... handle here ...
}
All three server blocks listen on the same IP. The first listens on port 80 to redirect to the actual domain (there's no such thing as a "naked domain"; it's just the domain, and "www." is a subdomain). The second listens for the "www." subdomain on the HTTPS port (in this case using HTTP2). The third is where everyone is being directed.
SSL/TLS Termination
This example simply expands the previous one by showing the SSL and HTTP2 implementation.
Your application will likely not have SSL/TLS on every node; that's not something people do. If you have a requirement to secure communication between nodes, you're likely going to do it at a much lower level with IPSec.
At the application level, most people will use SSL/TLS termination: you add SSL/TLS termination at the ingress point of your application domain. This is one of the things you see in application gateways, for example. The application might be an army of systems and web APIs that talk to each other across multiple systems (or within the same system), but they're exposed externally via an application gateway that provides SSL termination. This gateway / terminator is usually Nginx.
Think about this in the context of some of the other use cases. When you merge this use case with the load-balancing one, you've optimized your infrastructure so the backend servers don't need SSL/TLS at all. Then there's the Varnish example... Varnish does not support SSL/TLS; it forces you to use SSL/TLS termination.
server {
    listen 222.222.222.222:80;
    server_name example.net;
    server_name www.example.net;
    return 301 https://example.net$request_uri;
}
server {
    listen 222.222.222.222:443 ssl http2;
    server_name www.example.net;
    ssl_certificate /srv/_cert/example.net.chained.crt;
    ssl_certificate_key /srv/_cert/example.net.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /srv/_cert/dhparam.pem;
    return 301 https://example.net$request_uri;
}
server {
    listen 222.222.222.222:443 ssl http2;
    server_name example.net;
    ssl_certificate /srv/_cert/example.net.chained.crt;
    ssl_certificate_key /srv/_cert/example.net.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /srv/_cert/dhparam.pem;
    location / {
        add_header Strict-Transport-Security max-age=15552000;
        add_header Content-Security-Policy "default-src 'none'; font-src fonts.gstatic.com; frame-src accounts.google.com apis.google.com platform.twitter.com; img-src syndication.twitter.com bible.logos.com www.google-analytics.com 'self'; script-src api.reftagger.com apis.google.com platform.twitter.com 'self' 'unsafe-eval' 'unsafe-inline' www.google.com www.google-analytics.com; style-src fonts.googleapis.com 'self' 'unsafe-inline' www.google.com ajax.googleapis.com; connect-src search.jampad.net jampadcdn.blob.core.windows.net example.net";
        include uwsgi_params;
        uwsgi_pass unix:///srv/example.net/mydomaindjango/content.sock;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host example.net;
    }
}
Request Redaction
Sometimes when an application makes a call, it sets the wrong Content-Type. When that's wrong, the results can be unpredictable. So, you may need to remove it:
proxy_set_header Content-Type "";
You can also use this to enforce certain data access. You might be sick and tired of people constantly using the wrong Content-Type. Just override it. Another situation is when a header contains authorization information that you'd like to entirely ignore. Just strip it off:
proxy_set_header Authorization "";
That says: I don't care if you have a token. You have no power here. You're anonymous now.
That brings up another point: you can use this to redact server calls.
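In context, both redactions might sit in a block like this sketch; the /reports path and backend port are hypothetical:

location /reports {
    proxy_pass http://127.0.0.1:5000;
    # strip whatever the client claimed
    proxy_set_header Content-Type "";
    proxy_set_header Authorization "";
}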
Decorate Requests
In addition to removing security, you can add it:
location /api {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header X-AccessRoles "Admin";
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 3000;
    proxy_set_header Host $host;
}
Decorate Responses
If you're doing local development without a server (e.g. SPA development), perhaps you still want to make sure your calls have a Cookie for testing.
location ^~ / {
    proxy_pass http://127.0.0.1:4200;
    add_header Set-Cookie SESSION=Zm9sbG93LW1lLW9uLXR3aXR0ZXItQG5ldGZ4aGFybW9uaWNz;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 4200;
    proxy_set_header Host $host;
}
This will send a Cookie back. If called with a web browser, it will automagically be added to future calls by the browser.
Config Standardization
The previous example had a lot of security setup: TLS, CSP, and HSTS. Instead of trying to figure out how to set up TLS on each of your applications, just let Nginx handle it. Your 10,000 servers are now easy to set up. You're welcome.
Some applications are also a pain to set up for a specific IP address. You want 10.2.1.12, not 10.4.3.11, but your application has a horrible Java UI from 2005 that never runs right. Just have it listen on 127.0.0.1 and let Nginx listen on the external IP.
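A sketch of that front, assuming the legacy app was rebound to a hypothetical 127.0.0.1:8080:

server {
    listen 10.2.1.12:80;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}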
Verb Routing
Speaking of verbs, you could whip out a pretty cool CQRS infrastructure by splitting GET from POST.
This is more of a play-along than a visual aid. You can actually try this one at home.
Here's a demo using a quick node server:
http = require('http')
port = parseInt(process.argv[2])
server = http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'})
    res.end(req.method + ' server ' + port)
})
host = '127.0.0.1'
server.listen(port, host)
Here's our Nginx config:
server {
    listen 222.222.222.222:8192;
    location / {
        limit_except GET {
            proxy_pass http://127.0.0.1:6001;
        }
        proxy_pass http://127.0.0.1:6002;
    }
}
Use nginx -s reload to quickly reload config without doing a full service restart.
Now, to spin up two of them:
node server.js 6001 &
node server.js 6002 &
& runs something as a background process.
Now to call them (PowerShell and curl examples provided)...
(wget -method Post http://222.222.222.222:8192/).content
curl -XPOST http://222.222.222.222:8192/
Output:
POST server 6001
(wget -method Get http://222.222.222.222:8192/).content
curl -XGET http://222.222.222.222:8192/
Output:
GET server 6002
Cancel background tasks with fg, then CTRL-C. Do this twice to kill both servers.
There we go: your inserts go to one location and your reads come from a different one.
Development Environments
Another great thing about Nginx is that it's not Apache ("a patchy" web server, as the joke goes). Aside from Apache simply trying to do far too much, it's an obsolete product from the 90s that needs to be dropped. It's also often very hard to set up. The security permissions in Apache, for example, make no sense, and the documentation is horrible.
Setting up Apache in a dev environment almost never happens, but Nginx is seamless enough not to interfere with day-to-day development.
The point: don't be afraid to use Nginx in your development setup.
Unix Sockets
When using Python you serve content with WSGI: the Web Server Gateway Interface. It's literally a single function signature that enables web access. You run your application with something that executes WSGI content. One popular option is the tool called uWSGI. With it you can expose your Python web application as a Unix socket, and Nginx will listen on HTTP and bridge the gap for you.
The single interface function signature is as follows (with an example):
def name_does_not_matter(environment, start_response):
    # start_response takes a status and a list of header tuples
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # WSGI expects an iterable of bytes
    return ['Your content type was {}'.format(environment['CONTENT_TYPE']).encode('utf-8')]
This is even what Django does deep down.
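To actually serve that function, you hand it to uWSGI and have it expose a socket. A minimal sketch, assuming the function above lives in a hypothetical content.py:

uwsgi --socket /srv/raw_python_awesomeness/content/content.sock --wsgi-file content.py --callable name_does_not_matter --chmod-socket=666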
Here's the Nginx server config:
server {
    listen 222.222.222.222:80;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/srv/raw_python_awesomeness/content/content.sock;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
You can see UWSGI in action in my WebAPI for Python project at https://github.com/davidbetz/pywebapi.