Stream multiple RTMP IP Cameras with OBS and ffmpeg

I created a streaming server using Nginx and the RTMP module on a VPS running CentOS 7, and I am using OBS to broadcast the streams from multiple IP cameras. My question is how to create a separate m3u8 playlist for each camera, using different applications in the nginx.conf file. I tried running multiple instances of OBS to achieve this, but I am running out of CPU power. I found that ffmpeg can relay multiple streams, but I don't know the exact commands. The broadcast bitrate of each camera is approximately 512-1024 Kbps. My nginx.conf is the following:
# RTMP configuration
rtmp {
    server {
        listen 1935;        # Listen on standard RTMP port
        chunk_size 4000;

        # Define the application
        application camera1 {
            live on;
            exec ffmpeg -i rtmp://123.123.123.123/folder/$name -vcodec libx264 -vprofile baseline -x264opts keyint=40 -acodec aac -strict -2 -f flv rtmp://123.123.123.123/hls/$name;

            # Turn on HLS
            hls on;
            hls_path /mnt/hls/camera1;
            hls_fragment 3s;
            hls_playlist_length 60s;

            # disable consuming the stream from nginx as rtmp
            # deny play all;
        }

        application camera2 {
            live on;
            exec ffmpeg -i rtmp://123.123.123.123/$app/$name -vcodec libx264 -vprofile baseline -x264opts keyint=40 -acodec aac -strict -2 -f flv rtmp://123.123.123.123/hls/$name;

            # Turn on HLS
            hls on;
            hls_path /mnt/hls/camera2;
            hls_fragment 3s;
            hls_playlist_length 60s;

            # disable consuming the stream from nginx as rtmp
            # deny play all;
        }
    }
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    aio on;
    directio 512;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        listen 80;
        server_name 123.123.123.123;

        location / {
            root /var/www/html;
            index index.html index.htm;
        }

        location /hls {
            # Disable cache
            add_header 'Cache-Control' 'no-cache';

            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length';

            # allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }

            types {
                application/dash+xml mpd;
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }

            root /mnt/;
        }
    }
}
Using 2 instances of OBS with different stream names I was able to stream 2 cameras simultaneously, but I want to stream over 50 cameras, and with that method it is impossible. I think it could be done with ffmpeg. The format of the RTSP streams is rtsp://username:password@hostname:port, but I need a little help with the commands. Any help would be appreciated. Thanks in advance.
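For what it's worth, here is a minimal sketch of the kind of ffmpeg relay command that could replace OBS in this setup, assuming a hypothetical camera at rtsp://user:pass@camera-ip:554/stream1 and the camera1 application above (the stream key cam1 is also made up); you would run one such process per camera, e.g. from a shell loop or a systemd template unit:

# Copy the camera stream as-is (no re-encode, minimal CPU) into the camera1 application
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@camera-ip:554/stream1" \
       -c:v copy -c:a aac -strict -2 -f flv rtmp://123.123.123.123/camera1/cam1

# Or, if the camera's codec is not HLS-friendly, re-encode at roughly 512-1024 Kbps
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@camera-ip:554/stream1" \
       -c:v libx264 -profile:v baseline -b:v 768k -x264opts keyint=40 \
       -c:a aac -strict -2 -f flv rtmp://123.123.123.123/camera1/cam1

With -c:v copy there is essentially no transcoding load, so 50 cameras at these bitrates is mostly a bandwidth question rather than a CPU one.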

Related

How to set up Nginx as an SMTP proxy to an external SMTP server?

I am trying to set up an nginx mail proxy to the Gmail SMTP server. The nginx server runs in Docker.
During tests I use openssl s_client -connect mail.localhost.com:12465 -starttls smtp (the ports differ from the config for port availability reasons).
Openssl connects successfully, and I manage to get EHLO and run AUTH LOGIN. Still, I receive the error 502 5.5.1 Unrecognized command. u6sm12628547ejn.14 - gsmtp. The same message appears when "less secure apps" is enabled on the account.
Running the command above directly against smtp.gmail.com works great, and the error is displayed correctly if "less secure apps" is not allowed.
My current nginx.conf is below.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 9101;
        server_name localhost;

        location /smtpauthssl {
            if ($http_auth_user ~* "(?<=@)(\S+$)") {
                set $domain $1;
            }
            set $smtp_port '587';
            add_header Auth-Status "OK";
            add_header Auth-Port $smtp_port;
            add_header Auth-Server "108.177.119.108";
            return 200;
        }

        location = / {
            return 403;
        }
    }
}
mail {
    server_name mail.mydomain.com;
    auth_http localhost:9101/smtpauthssl;

    proxy_pass_error_message on;
    proxy_smtp_auth on;
    xclient on;

    ssl_certificate /etc/ssl/certs/certificate.pem;
    ssl_certificate_key /etc/ssl/certs/key.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen 587 ssl;
        protocol smtp;
        smtp_auth login plain cram-md5;
    }
}
My assumption is that something about the login is not being passed on to the Gmail SMTP server, but I am not sure what.
One more thing: what should the Auth-Server header contain, an FQDN or an IP address?
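A quick way to see exactly what the auth endpoint hands back to nginx is to call it directly with the headers the mail module sends (header names as in the nginx auth_http protocol; the credential values here are just placeholders):

curl -i http://localhost:9101/smtpauthssl \
     -H "Auth-Method: plain" \
     -H "Auth-User: someone@example.com" \
     -H "Auth-Pass: secret" \
     -H "Auth-Protocol: smtp"
# The response headers should include Auth-Status, Auth-Server and Auth-Port.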

Azure Kubernetes Nginx-Ingress client certificate auth problem

I've been struggling with this all weekend, and I am now on my knees hoping one of you geniuses can solve my problem.
In short: I have an ingress-nginx controller (Image: nginx/nginx-ingress:1.5.8) with which I'm trying to achieve self-signed mutual authentication.
The https aspect works fine, but the problem I'm having (I think) is that the ingress controller reroutes the request with the default cert, and the ingress validates against the default CA (because it can't find my CA).
So.. Help!
Steps I've gone through on this cluster-f*** of a journey (pun intended):
I've tested it in a local Minikube cluster and it all works like a charm. When I exec -it into the ingress-controller pod and cat the nginx.conf for both my clusters (Minikube and Azure), I found large differences; I have just realised that I'm comparing apples and pears in terms of the Minikube vs. Azure k8s nginx ingresses.
This is the ingress setup that worked as a charm for my minikube cluster (the ingress I'm using is more or less a duplicate of the file you'll find in the link): https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/
In addition, I found this, which largely describes the problem I'm having: https://success.docker.com/article/how-to-configure-a-default-tls-certificate-for-the-kubernetes-nginx-ingress-controller
From the link above the solution is simple: nuke the ingress from orbit and create a new one. Well... here's the thing, this is a production cluster, and my bosses would be anything but pleased if I did that.
Another discovery I made whilst "exec -it bash"-roaming around inside the Azure ingress controller is that there is no public root cert folder (/etc/ssl/) to be found. I do not know why, but thought I'd mention it.
I've also discovered the param --default-ssl-certificate=default/foo-tls, but that only sets a default. As there will be other needs for different client auths later, I have to be able to specify a CA cert per ingress.
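For reference, the linked ingress-nginx client-certs example comes down to per-Ingress annotations along these lines (the host, secret and service names here are placeholders, and these particular annotations belong to kubernetes/ingress-nginx, not to the NGINX Inc. controller image mentioned above), which is also how a different CA can be set per ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-ingress
  annotations:
    # verify client certificates against the CA stored in the referenced secret (key: ca.crt)
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: tls-secret
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80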
I'll paste the nginx.conf that I think is the problem below. I'm hoping to hear back from some of you, because at this point I'm thoroughly lost. Hit me up if additional information is needed.
user nginx;
worker_processes auto;
daemon off;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65s;
    keepalive_requests 100;

    #gzip on;

    server_names_hash_max_size 512;
    variables_hash_bucket_size 256;
    variables_hash_max_size 1024;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80 default_server;
        listen 443 ssl default_server;

        ssl_certificate /etc/nginx/secrets/default;
        ssl_certificate_key /etc/nginx/secrets/default;

        server_name _;
        server_tokens "on";
        access_log off;

        location / {
            return 404;
        }
    }

    # stub_status
    server {
        listen 8080;

        allow 127.0.0.1;
        deny all;

        location /stub_status {
            stub_status;
        }
    }

    server {
        listen unix:/var/run/nginx-status.sock;
        access_log off;

        location /stub_status {
            stub_status;
        }
    }

    include /etc/nginx/config-version.conf;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen unix:/var/run/nginx-502-server.sock;
        access_log off;

        location / {
            return 502;
        }
    }
}

stream {
    log_format stream-main '$remote_addr [$time_local] '
                           '$protocol $status $bytes_sent $bytes_received '
                           '$session_time';

    access_log /var/log/nginx/stream-access.log stream-main;
So the problem came down to the ingress controller being old and outdated. I didn't have the original Helm chart it was deployed with, so I was naturally worried about rollback options. Anyhoo, I took a leap of faith in the middle of the night local time and nuked the namespace, recreated the namespace, and ran helm install stable/nginx-ingress.
There was minimal downtime, 1 minute at most, but be sure to lock down the public IP attached to the load balancer before going all third-world-war on your services.
I had to add an argument to the standard Azure helm install command to imperatively set the public IP for the resource; I'm pasting it below in case any poor soul finds himself in the unfortunate situation of a fresh helm CLI and lost charts.
That's it; keep your services up to date and make sure to save your charts!
helm install nginx stable/nginx-ingress --namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.service.loadBalancerIP=*YourVeryPreciousPublicIP*
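Once the chart is installed, watching the controller service is an easy way to confirm the pinned IP came up (the exact service name depends on the release name used above):

kubectl get services -n ingress-basic -w
# wait for the controller service's EXTERNAL-IP to show *YourVeryPreciousPublicIP*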

Nginx websocket proxy uses three connections per socket

I am trying to create an Nginx configuration that will serve as a proxy for incoming websocket connections (mainly for SSL offloading), but I am running into connection limits. I followed several guides and SO answers to accommodate more connections, but something weird caught my attention. I currently have 18K clients connected, and when I run ss -s on the Nginx machine, this is the report:
Total: 54417 (kernel 54537)
TCP:   54282 (estab 54000, closed 280, orphaned 0, synrecv 0, timewait 158/0), ports 18263

Transport Total     IP        IPv6
*         54537     -         -
RAW       0         0         0
UDP       1         1         0
TCP       54002     36001     18001
INET      54003     36002     18001
FRAG      0         0         0
I understand how there can be 36K IP connections, but what I do not get is where those additional IPv6 connections come from. I am having problems scaling above 25K connections and I think part of that comes from the fact that somehow there are three connections set up for each socket. So, my question is this: does anyone know where those extra connections are coming from?
The entire system is running within a Kubernetes cluster, with the configuration as follows:
nginx.conf:
user nginx;
worker_processes auto;
worker_rlimit_nofile 500000;

error_log /dev/stdout warn;
pid /var/run/nginx.pid;

# Increase worker connections to accommodate more sockets
events {
    worker_connections 500000;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log off;   # don't use it, so don't waste cpu, i/o and other resources.

    tcp_nopush on;
    tcp_nodelay on;

    include /etc/nginx/conf.d/*.conf;
}
proxy.conf (included via conf.d):
server {
    listen 0.0.0.0:443 ssl backlog=100000;

    # Set a big keepalive timeout to make sure no connections are dropped by nginx.
    # This should never be less than the MAX_CLIENT_PING_INTERVAL + MAX_CLIENT_PING_TIMEOUT in the ws-server config!
    keepalive_timeout 200s;
    keepalive_requests 0;
    proxy_read_timeout 200s;

    ssl_certificate /app/secrets/cert.chain.pem;
    ssl_certificate_key /app/secrets/key.pem;
    ssl_prefer_server_ciphers On;
    ssl_protocols TLSv1.2;

    location / {
        proxy_pass http://127.0.0.1:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I also set the following kernel options on the host:
/etc/sysctl.d/custom.conf:
fs.file-max = 1000000
fs.nr_open = 1000000
net.ipv4.netfilter.ip_conntrack_max = 1048576
net.core.somaxconn = 1048576
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 3240000
net.nf_conntrack_max = 1048576
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
/etc/security/limits.d/custom.conf:
root soft nofile 1000000
root hard nofile 1000000
* soft nofile 1000000
* hard nofile 1000000
With the help of some colleagues I found out that this is actually Kubernetes confusing things by joining the containers within a Pod into a single network namespace (so that each container can reach the others via localhost (link)). So what I see there is:
Incoming connections from the proxy
Outgoing connections from the proxy
Incoming connections from the server
Although this does not help me to achieve more connections on a single instance, it does explain the weird behaviour.
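If you want to confirm that breakdown from inside the Pod, the three legs can be counted separately with ss socket filters (ports as in the config above):

ss -tn state established '( sport = :443 )'  | wc -l   # clients -> nginx (TLS side)
ss -tn state established '( dport = :8443 )' | wc -l   # nginx -> websocket server (proxied side)
ss -tn state established '( sport = :8443 )' | wc -l   # the websocket server's end of those same connections

Each count should sit near the client count, which is where the roughly 3x figure in ss -s comes from.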

Elastic Beanstalk disable health state change based on 4xx responses

I have a rest api running on Elastic Beanstalk, which works great. Everything application-wise is running well and working as expected.
The application is a rest api used to look up different users.
example url: http://service.com/user?uid=xxxx&anotherid=xxxx
If a user with either id is found, the api responds with 200 OK; if not, it responds with 404 Not Found, as per the HTTP/1.1 status code definitions.
It is not uncommon for our api to answer 404 Not Found on a lot of requests, and Elastic Beanstalk transitions our environment from OK into Warning or even into Degraded because of this. It also looks like nginx refuses connections to the application because of this degraded state. (There seems to be a threshold of 30%+ for the Warning state and 50%+ for the Degraded state.) This is a problem, because the application is actually working as expected, but Elastic Beanstalk's default settings treat it as a problem when it really is not.
Does anyone know of a way to edit the thresholds for the 4xx warnings and state transitions in EB, or to disable them completely?
Or should I really resort to symptom treatment and stop using 404 Not Found on a call like this? (I really do not like this option.)
Update: AWS EB finally includes a built-in setting for this:
https://stackoverflow.com/a/51556599/1123355
Old Solution: Upon diving into the EB instance and spending several hours looking for where EB's health check daemon actually reports the status codes back to EB for evaluation, I finally found it, and came up with a patch that can serve as a perfectly fine workaround for preventing 4xx response codes from turning the environment into a Degraded environment health state, as well as pointlessly notifying you with this e-mail:
Environment health has transitioned from Ok to Degraded. 59.2 % of the requests are erroring with HTTP 4xx.
The status code reporting logic is located within healthd-appstat, a Ruby script developed by the EB team that constantly monitors /var/log/nginx/access.log and reports the status codes to EB, specifically in the following path:
/opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.2.0/gems/healthd-appstat-1.0.1/lib/healthd-appstat/plugin.rb
The following .ebextensions file will patch this Ruby script to avoid reporting 4xx response codes back to EB. This means that EB will never degrade the environment health due to 4xx errors because it just won't know that they're occurring. This also means that the "Health" page in your EB environment will always display 0 for the 4xx response code count.
container_commands:
  01-patch-healthd:
    command: "sudo /bin/sed -i 's/\\# normalize units to seconds with millisecond resolution/if status \\&\\& status.index(\"4\") == 0 then next end/g' /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.2.0/gems/healthd-appstat-1.0.1/lib/healthd-appstat/plugin.rb"
  02-restart-healthd:
    command: "sudo /usr/bin/kill $(/bin/ps aux | /bin/grep -e '/bin/bash -c healthd' | /usr/bin/awk '{ print $2 }')"
    ignoreErrors: true
Yes, it's a bit ugly, but it gets the job done, at least until the EB team provide a way to ignore 4xx errors via some configuration parameter. Include it with your application when you deploy, in the following path relative to the root directory of your project:
.ebextensions/ignore_4xx.config
Good luck, and let me know if this helped!
There is a dedicated Health monitoring rule customization called
Ignore HTTP 4xx (screenshot attached)
Just enable it and EB will not degrade instance health on 4xx errors.
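If you prefer setting this in code rather than in the console, the same rule can be toggled from .ebextensions via the enhanced-health ConfigDocument; a minimal sketch, assuming enhanced health reporting is enabled (double-check the current ConfigDocument schema against the linked answer and the AWS docs):

# .ebextensions/ignore-4xx.config
option_settings:
  - namespace: aws:elasticbeanstalk:healthreporting:system
    option_name: ConfigDocument
    value: '{"Rules": {"Environment": {"Application": {"ApplicationRequests4xx": {"Enabled": false}}}}, "Version": 1}'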
Thank you for your answer Elad Nava, I had the same problem and your solution worked perfectly for me!
However, after opening a ticket with AWS Support, they recommended that I modify the nginx configuration to ignore 4xx on the health check instead of modifying the Ruby script. To do that, I also had to add a config file to the .ebextensions directory in order to overwrite the default nginx.conf file:
files:
  "/tmp/nginx.conf":
    content: |
      # Elastic Beanstalk Managed

      # Elastic Beanstalk managed configuration file
      # Some configuration of nginx can be by placing files in /etc/nginx/conf.d
      # using Configuration Files.
      # http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html
      #
      # Modifications of nginx.conf can be performed using container_commands to modify the staged version
      # located in /tmp/deployment/config/etc#nginx#nginx.conf

      # Elastic_Beanstalk
      # For more information on configuration, see:
      #   * Official English Documentation: http://nginx.org/en/docs/
      #   * Official Russian Documentation: http://nginx.org/ru/docs/

      user nginx;
      worker_processes auto;

      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      worker_rlimit_nofile 1024;

      events {
          worker_connections 1024;
      }

      http {
          ###############################
          # CUSTOM CONFIG TO IGNORE 4xx #
          ###############################

          map $status $loggable {
              ~^[4] 0;
              default 1;
          }

          map $status $modstatus {
              ~^[4] 200;
              default $status;
          }

          #####################
          # END CUSTOM CONFIG #
          #####################

          port_in_redirect off;

          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          # This log format was modified to ignore 4xx status codes!
          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

          access_log /var/log/nginx/access.log;

          log_format healthd '$msec"$uri"'
                             '$modstatus"$request_time"$upstream_response_time"'
                             '$http_x_forwarded_for' if=$loggable;

          sendfile on;

          include /etc/nginx/conf.d/*.conf;

          keepalive_timeout 1200;
      }

container_commands:
  01_modify_nginx:
    command: cp /tmp/nginx.conf /tmp/deployment/config/#etc#nginx#nginx.conf
Although this solution is quite a bit more verbose, I personally believe it is safer to implement, as it does not depend on any AWS proprietary script. What I mean is that if, for some reason, AWS decides to remove or modify their Ruby script (believe me or not, they love to change scripts without prior notice), there is a big chance the workaround with sed will not work anymore.
Here is a solution based on Adriano Valente's answer. I couldn't get the $loggable bit to work, although skipping logging for the 404s seems like it would be a good solution. I simply created a new .conf file that defined the $modstatus variable, and then overwrote the healthd log format to use $modstatus in place of $status. This change also required nginx to be restarted. This is working on Elastic Beanstalk's 64bit Amazon Linux 2016.09 v2.3.1 running Ruby 2.3 (Puma).
# .ebextensions/nginx.conf
files:
  "/tmp/nginx.conf":
    content: |
      # Custom config to ignore 4xx in the health file only
      map $status $modstatus {
          ~^[4] 200;
          default $status;
      }

container_commands:
  modify_nginx_1:
    command: "cp /tmp/nginx.conf /etc/nginx/conf.d/custom_status.conf"
  modify_nginx_2:
    command: sudo sed -r -i 's#\$status#$modstatus#' /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
  modify_nginx_3:
    command: sudo /etc/init.d/nginx restart
Based on Elad Nava's answer, I think it's better to use Elastic Beanstalk's healthd control script directly instead of a kill:
container_commands:
  01-patch-healthd:
    command: "sudo /bin/sed -i 's/\\# normalize units to seconds with millisecond resolution/if status \\&\\& status.index(\"4\") == 0 then next end/g' /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.2.0/gems/healthd-appstat-1.0.1/lib/healthd-appstat/plugin.rb"
  02-restart-healthd:
    command: "sudo /opt/elasticbeanstalk/bin/healthd-restart"
Finally, when investigating this issue, I noticed that healthd and Apache log status codes differently, the former using %s while the latter uses %>s, resulting in discrepancies between them. I also patched this using:
  03-healthd-logs:
    command: sed -i 's/^LogFormat.*/LogFormat "%{%s}t\\"%U\\"%>s\\"%D\\"%D\\"%{X-Forwarded-For}i" healthd/g' /etc/httpd/conf.d/healthd.conf
Solution provided by AWS support as of April 2018:
files:
  "/tmp/custom-site-nginx.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      map $http_upgrade $connection_upgrade {
          default     "upgrade";
          ""          "";
      }

      # Elastic Beanstalk Modification(EB_INCLUDE)
      # Custom config
      # HTTP 4xx ignored.
      map $status $loggable {
          ~^[4] 0;
          default 1;
      }

      server {
          listen 80;

          gzip on;
          gzip_comp_level 4;
          gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
              set $year $1;
              set $month $2;
              set $day $3;
              set $hour $4;
          }

          access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd if=$loggable;
          access_log /var/log/nginx/access.log;

          location / {
              proxy_pass http://docker;
              proxy_http_version 1.1;
              proxy_set_header Connection $connection_upgrade;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }

container_commands:
  override_beanstalk_nginx:
    command: "mv -f /tmp/custom-site-nginx.conf /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf"
Like you, I recently ran into the same issue of being bombarded with 4xx errors. I tried the suggestions listed above, but nothing worked for me. I reached out to AWS Support, and here is what they suggested; it solved my problem. I have an Elastic Beanstalk application with 2 instances running.
Create a folder called .ebextensions
Inside this folder, create a file called nginx.config (make sure it has the .config extension. ".conf" won't do!)
If you are deploying your application with a Docker container, then please make sure this .ebextensions folder is included in the deployment bundle. For me, the bundle included the folder as well as the Dockerrun.aws.json
Here is the entire content of the nginx.config file:
files:
  "/etc/nginx/nginx.conf":
    content: |
      # Elastic Beanstalk Nginx Configuration File
      user nginx;
      worker_processes auto;

      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          # Custom config
          # HTTP 4xx ignored.
          map $status $loggable {
              ~^[4] 0;
              default 1;
          }

          # Custom config
          # HTTP 4xx ignored.
          map $status $modstatus {
              ~^[4] 200;
              default $status;
          }

          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          access_log /var/log/nginx/access.log;

          log_format healthd '$msec"$uri"$modstatus"$request_time"$upstream_response_time"$http_x_forwarded_for';

          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }

Nginx and Flask-socketio Websockets: Alive but not Messaging?

I've been having a bit of trouble getting Nginx to play nicely with the Python Flask-socketio library (which is based on gevent). Currently, since we're actively developing, I'm trying to get Nginx to just work as a proxy. For sending pages, I can get this to work, either by directly running the flask-socketio app, or by running through gunicorn. One hitch: the websocket messaging does not seem to work. The pages are successfully hosted and displayed. However, when I try to use the websockets, they do not work. They are alive enough that the websocket thinks it is connected, but they will not send a message. If I remove the Nginx proxy, they do work. Firefox gives me this error when I try to send a message:
Firefox can't establish a connection to the server at ws://<web address>/socket.io/1/websocket/<unique id>.
Here <web address> is where the server is located and <unique id> is just a bunch of random-ish digits. The connection seems to be kept alive (e.g., the client thinks it is connected), but no message can be sent over the websocket. I have to think the issue lies in some part of the proxy, but I am having mighty trouble debugging what it might be (in part because this is my first go-round with both Flask-SocketIO and nginx). The configuration file I am using for nginx is:
user <user name>;   ## This is set to the user name for the remote SSH session
worker_processes 5;

events {
    worker_connections 1024;   ## Default: 1024
}

http {
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    server_names_hash_bucket_size 128;   # this seems to be required for some vhosts

    server {
        listen 80;
        server_name _;

        location / {
            proxy_pass http://localhost:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
}
I made the config file as an amalgam of a general example and a websocket-specific one, but fiddling with it has not solved the issue. Also, I am using the werkzeug ProxyFix call on my Flask app.wsgi_app when I use it in WSGI mode. I've tried it with and without that, to no avail, however. If anyone has some insight, I will be all ears/eyes.
I managed to fix this. The issues were not specific to flask-socketio, but they were specific to Ubuntu, NginX, and gevent-socketio. Two significant issues were present:
Ubuntu 12.04 has a truly ancient version of nginx (1.1.19 vs 1.6.x for stable versions). Why? Who knows. What we do know is that this version does not support websockets in any useful way, as 1.3.13 is about the earliest you should be using.
By default, gevent-socketio expects your sockets to be at the location /socket.io . You can upgrade the whole HTTP connection, but I had some trouble getting that to work properly (especially after I threw SSL into the mix).
I fixed #1, but while fiddling with it I purged my nginx and apt-get installed... the default version of nginx on Ubuntu. Then I was mysteriously confused as to why things worked even worse than before. Many .conf files valiantly lost their lives in this battle.
If trying to debug websockets in this configuration, I would recommend the following steps:
Check your nginx version via 'nginx -v'. If it is anything less than 1.4, upgrade it.
Check your nginx.conf settings. You need to make sure the connection upgrades.
Check that your server IP and port match your nginx.conf reverse proxy.
Check that your client (e.g., socketio.js) connects to the right location and port, with the right protocol.
Check your blocked ports. I was on EC2, so you have to manually open 80 (HTTP) and 443 (SSL/HTTPS).
Having just checked all of these things, there are takeaways.
Upgrading to the latest stable nginx version on Ubuntu (full ref) can be done by:
sudo apt-get install python-software-properties
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
In systems like Windows, you can use an installer and will be less likely to get a bad version.
Many config files for this can be confusing, since nginx officially added sockets in about 2013, making earlier workaround configs obsolete. Existing config files don't tend to cover all the bases for nginx, gevent-socketio, and SSL together, but have them all separately (Nginx Tutorial, Gevent-socketio, Node.js with SSL). A config file for nginx 1.6 with flask-socketio (which wraps gevent-socketio) and SSL is:
user <user account, probably optional>;
worker_processes 2;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;

    sendfile on;
    # tcp_nopush on;

    keepalive_timeout 3;
    # tcp_nodelay on;
    # gzip on;

    client_max_body_size 20m;
    index index.html;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        # Listen on 80 and 443
        listen 80 default;
        listen 443 ssl;    # only needed if you want SSL/HTTPS
        server_name <your server name here, optional unless you use SSL>;

        # SSL certificate (only needed if you want SSL/HTTPS)
        ssl_certificate     <file location for your unified .crt file>;
        ssl_certificate_key <file location for your .key file>;

        # Optional: redirect all non-SSL traffic to SSL (if you want ONLY SSL/HTTPS)
        # if ($ssl_protocol = "") {
        #     rewrite ^ https://$host$request_uri? permanent;
        # }

        # Split off basic traffic to backends
        location / {
            proxy_pass http://localhost:8081;   # 127.0.0.1 is preferred, actually.
            proxy_redirect off;
        }

        location /socket.io {
            proxy_pass http://127.0.0.1:8081/socket.io;   # 127.0.0.1 is preferred, actually.
            proxy_redirect off;
            proxy_buffering off;    # optional
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
Checking that your Flask-socketio is using the right port is easy. This is sufficient to work with the above:
from flask import Flask, render_template, session, request, abort
import flask.ext.socketio

FLASK_CORE_APP = Flask(__name__)
FLASK_CORE_APP.config['SECRET_KEY'] = '12345'  # Luggage combination

SOCKET_IO_CORE = flask.ext.socketio.SocketIO(FLASK_CORE_APP)

@FLASK_CORE_APP.route('/')
def index():
    return render_template('index.html')

@SOCKET_IO_CORE.on('message')
def receive_message(message):
    return "Echo: %s" % (message,)

SOCKET_IO_CORE.run(FLASK_CORE_APP, host='127.0.0.1', port=8081)
For a client such as socketio.js, connecting should be easy. For example:
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/0.9.16/socket.io.min.js"></script>
<script type="text/javascript">
    var url = window.location.protocol + '//' + document.domain + ':' + location.port,
        socket = io.connect(url);
    socket.on('message', alert);
    socket.emit("message", "Test");
</script>
Opening ports is really more of a server-fault or a superuser issue, since it will depend a lot on your firewall. For Amazon EC2, see here.
If trying all of this does not work, cry. Then return to the top of the list. Because you might just have accidentally reinstalled an older version of nginx.