I've been struggling with this all weekend, and I'm now on my knees hoping one of you geniuses can solve my problem.
In short: I have an ingress-nginx controller (image: nginx/nginx-ingress:1.5.8) with which I'm trying to achieve self-signed mutual authentication.
The HTTPS aspect works fine, but the problem I'm having (I think) is that the ingress controller reroutes the request with the default cert and the ingress validates against the default CA (because it can't find my CA).
So.. Help!
Steps I've gone through on this cluster-f*** of a journey (pun intended):
I've tested it in a local Minikube cluster, and there it all works like a charm. When I exec into the ingress-controller pod and cat the nginx.conf for both my clusters (Minikube and Azure), I find large differences; hence I just found out that I'm working with apples and pears in terms of Minikube vs. Azure Kubernetes nginx ingresses.
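For reference, this is roughly how I compared the two rendered configs (pod and namespace names are placeholders):

kubectl exec -it <ingress-controller-pod> -n <ingress-namespace> -- cat /etc/nginx/nginx.conf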
This is the ingress setup that worked like a charm in my Minikube cluster (the ingress I'm using is more or less a duplicate of the file you'll find in the link): https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/
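For context, the client-cert part of that example boils down to a few annotations on the Ingress. A minimal sketch on my part (names, host, and apiVersion are placeholders and may differ per cluster version); note these are kubernetes/ingress-nginx annotations, which, as far as I can tell, the NGINX Inc controller image does not understand:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mtls-test
  annotations:
    # secret "default/ca-secret" must contain the ca.crt used to verify clients
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  tls:
  - hosts:
    - mtls.example.com
    secretName: tls-secret
  rules:
  - host: mtls.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80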
In addition, I found this, which goes a long way toward describing the problem I'm having: https://success.docker.com/article/how-to-configure-a-default-tls-certificate-for-the-kubernetes-nginx-ingress-controller
According to the link above the solution is simple: nuke the ingress from orbit and create a new one. Well, here's the thing: this is a production cluster, and my bosses would be anything but pleased if I did that.
Another discovery I made whilst "exec -it bash"-roaming around inside the Azure ingress controller is that there is no public root cert folder (/etc/ssl/) to be found. I don't know why, but I thought I'd mention it.
I've also discovered the param --default-ssl-certificate=default/foo-tls, but this is only a default. As there will be other client-auth needs later, I have to be able to specify different CA certs for different ingresses.
I'll paste the nginx.conf that I think is the problem below. Hoping to hear back from some of you, because at this point I'm thoroughly lost. Hit me up if additional information is needed.
user nginx;
worker_processes auto;
daemon off;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65s;
    keepalive_requests 100;

    #gzip on;

    server_names_hash_max_size 512;
    variables_hash_bucket_size 256;
    variables_hash_max_size 1024;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 80 default_server;
        listen 443 ssl default_server;

        ssl_certificate /etc/nginx/secrets/default;
        ssl_certificate_key /etc/nginx/secrets/default;

        server_name _;
        server_tokens "on";

        access_log off;

        location / {
            return 404;
        }
    }

    # stub_status
    server {
        listen 8080;

        allow 127.0.0.1;
        deny all;

        location /stub_status {
            stub_status;
        }
    }

    server {
        listen unix:/var/run/nginx-status.sock;
        access_log off;

        location /stub_status {
            stub_status;
        }
    }

    include /etc/nginx/config-version.conf;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen unix:/var/run/nginx-502-server.sock;
        access_log off;

        location / {
            return 502;
        }
    }
}

stream {
    log_format stream-main '$remote_addr [$time_local] '
                           '$protocol $status $bytes_sent $bytes_received '
                           '$session_time';
    access_log /var/log/nginx/stream-access.log stream-main;
Update: the problem came down to the ingress controller being old and outdated. I didn't have the original Helm chart it was deployed with, so I was naturally worried about rollback options. Anyhoo, I took a leap of faith in the middle of the night local time and nuked the namespace, recreated the namespace, and ran helm install stable/nginx-ingress.
There was minimal downtime, 1 minute at most, but beware: lock down the public IP that's attached to the load balancer before going all World War 3 on your services.
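For the locking-down part, a sketch of what I mean with the az CLI, assuming the IP is a dynamically allocated resource in the cluster's MC_* resource group (names are placeholders):

az network public-ip update \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --name <public-ip-resource-name> \
  --allocation-method Static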
I had to add an argument to the standard Azure helm install command to imperatively set the public IP for the resource; I'm pasting it below in case any poor soul finds himself in the unfortunate situation of a new helm CLI and lost charts.
That's it; keep your services up to date and make sure to save your charts!
helm install nginx stable/nginx-ingress --namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.service.loadBalancerIP=*YourVeryPreciousPublicIP*
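Afterwards you can watch the controller service until it picks up your precious IP (the service name is derived from the release name, so yours may differ):

kubectl get service nginx-nginx-ingress-controller -n ingress-basic -w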
I tried to configure nginx as an image server as below. I created myapp.conf and put it at /etc/nginx/conf.d:
server {
    listen 80;
    listen [::]:80;

    # here you could also use a subdomain
    server_name image.mydomain.com;

    # here you could also use a context, e.g. location /<context>
    location / {
        root /myapp/imageServer/;
        autoindex on;
    }
}
The file exists at /myapp/imageServer/card/3cdad37c5a394567b53283321f6af9e9.png, but when I browse to it via https://image.mydomain.com/card/3cdad37c5a394567b53283321f6af9e9.png I get 403 Forbidden from nginx. Is there any mistake in my nginx config?
I found the reason. Go to /etc/nginx/nginx.conf and edit the line as below:
#user www-data;
user root;
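For what it's worth, running workers as root is usually avoidable. A sketch of the permission-based alternative, assuming the workers run as www-data (check the user directive): give that user read access to the files and traversal access to the directories instead:

# allow the worker user to traverse the parent directory
sudo chmod o+x /myapp
# allow read on files and traversal on directories below the image root
sudo chmod -R o+rX /myapp/imageServer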
I have created the LoadBalancer service in AKS, and my container works fine when I run it with the docker run command. Here is a screenshot of my Kubernetes dashboard; the service is also working, but I am not able to access the IP.
[Screenshot: Kubernetes service is up and running]
I am using my own image, ashishrajput194/finreg-frontend:v1, which contains an Angular project, and I am using nginx as the web server. Here is the nginx config file:
map $sent_http_content_type $expires {
    default                 off;
    text/html               epoch;
    text/css                max;
    application/json        max;
    application/javascript  max;
    ~image/                 max;
}

server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    expires $expires;
    gzip on;
}
Please help to resolve my issue.
Need to use NGINX without docker
I have tried gRPC-web integration using the Envoy proxy, which had Docker dependencies, so I moved to NGINX. How do I use NGINX without Docker dependencies?
Since there is no direct support for grpc-web in nginx, we can use the hack below in the nginx config file to make it work with both grpc-web and gRPC calls.
server {
    listen 1449 ssl http2;
    server_name `domain-name`;

    ssl_certificate `pem-file`; # managed by Certbot
    ssl_certificate_key `key-file`; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        #
        ## Any request with the content-type application/grpc+(json|proto|customType) will not enter the
        ## if condition block and makes a grpc_pass, while the rest of the requests enter the if block
        ## and make a proxy_pass request. Explicitly, grpc-web will also enter the if block.
        #
        if ($content_type !~ 'application\/grpc(?!-web)(.*)') {
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Transfer-Encoding,Custom-Header-1,X-Accept-Content-Transfer-Encoding,X-Accept-Response-Streaming,X-User-Agent,X-Grpc-Web,content-type,snet-current-block-number,snet-free-call-user-id,snet-payment-channel-signature-bin,snet-payment-type,x-grpc-web';
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            proxy_pass http://reroute_url;
        }
        grpc_pass grpc://reroute_url;
    }
}
The above config works based on the content type: whenever grpc-web makes a call to nginx, the content type is application/grpc-web, and this content type is not handled by nginx's grpc_pass.
Hence, only requests with content type application/grpc+(proto|json|customType) go through grpc_pass, while the rest of the requests go through proxy_pass.
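As a quick sanity check of the two branches (service, method, and payload names here are hypothetical; adjust them to your API, and note grpcurl needs server reflection or -proto flags):

# application/grpc-web+proto fails the (?!-web) lookahead, so the request
# enters the if block and is forwarded with proxy_pass:
curl -k https://domain-name:1449/my.package.MyService/MyMethod \
  -H 'Content-Type: application/grpc-web+proto' --data-binary @request.bin

# application/grpc+proto matches the regex, so the if block is skipped
# and the call is forwarded with grpc_pass:
grpcurl -insecure -d '{}' domain-name:1449 my.package.MyService/MyMethod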
You can look at what the Dockerfile does and basically do it yourself outside of the Docker image: https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/docker/nginx/Dockerfile
The main thing is basically to run make standalone-proxy, then run the result as ./gConnector_static/nginx.sh. You will need an nginx.conf config file to specify where Nginx should receive and forward the gRPC-Web requests.
I have a REST API running on Elastic Beanstalk, which works great. Everything application-wise is running well and working as expected.
The application is a REST API used to look up different users.
example url: http://service.com/user?uid=xxxx&anotherid=xxxx
If a user with either ID is found, the API responds with 200 OK; if not, it responds with 404 Not Found, as per the HTTP/1.1 status code definitions.
It is not uncommon for our API to answer 404 Not Found on a lot of requests, and Elastic Beanstalk transitions our environment from OK into Warning or even into Degraded because of this. And it looks like nginx has refused connections to the application because of this degraded state (it looks like there is a threshold of 30%+ for Warning and 50%+ for Degraded). This is a problem, because the application is actually working as expected, but Elastic Beanstalk's default settings think it is a problem when it's really not.
Does anyone know of a way to edit the thresholds for the 4xx warnings and state transitions in EB, or to completely disable them?
Or should I really do symptom-treatment and stop using 404 Not Found on a call like this? (I really do not like this option.)
Update: AWS EB finally includes a built-in setting for this:
https://stackoverflow.com/a/51556599/1123355
Old Solution: Upon diving into the EB instance and spending several hours looking for where EB's health check daemon actually reports the status codes back to EB for evaluation, I finally found it, and came up with a patch that can serve as a perfectly fine workaround for preventing 4xx response codes from turning the environment into a Degraded environment health state, as well as pointlessly notifying you with this e-mail:
Environment health has transitioned from Ok to Degraded. 59.2 % of the requests are erroring with HTTP 4xx.
The status code reporting logic is located within healthd-appstat, a Ruby script developed by the EB team that constantly monitors /var/log/nginx/access.log and reports the status codes to EB, specifically in the following path:
/opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.2.0/gems/healthd-appstat-1.0.1/lib/healthd-appstat/plugin.rb
The following .ebextensions file will patch this Ruby script to avoid reporting 4xx response codes back to EB. This means that EB will never degrade the environment health due to 4xx errors because it just won't know that they're occurring. This also means that the "Health" page in your EB environment will always display 0 for the 4xx response code count.
container_commands:
  01-patch-healthd:
    command: "sudo /bin/sed -i 's/\\# normalize units to seconds with millisecond resolution/if status \\&\\& status.index(\"4\") == 0 then next end/g' /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.2.0/gems/healthd-appstat-1.0.1/lib/healthd-appstat/plugin.rb"
  02-restart-healthd:
    command: "sudo /usr/bin/kill $(/bin/ps aux | /bin/grep -e '/bin/bash -c healthd' | /usr/bin/awk '{ print $2 }')"
    ignoreErrors: true
Yes, it's a bit ugly, but it gets the job done, at least until the EB team provides a way to ignore 4xx errors via some configuration parameter. Include it with your application when you deploy, in the following path relative to the root directory of your project:
.ebextensions/ignore_4xx.config
Good luck, and let me know if this helped!
There is a dedicated health monitoring rule customization called "Ignore HTTP 4xx".
Just enable it and EB will not degrade instance health on 4xx errors.
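If you would rather set it in code than in the console, the equivalent .ebextensions option should look roughly like this (my sketch; double-check the rule names against the enhanced health rules documentation):

option_settings:
  aws:elasticbeanstalk:healthreporting:system:
    ConfigDocument:
      Version: 1
      Rules:
        Environment:
          Application:
            ApplicationRequests4xx:
              Enabled: false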
Thank you for your answer, Elad Nava; I had the same problem and your solution worked perfectly for me!
However, after opening a ticket with the AWS Support Center, they recommended that I modify the nginx configuration to ignore 4xx on the health check instead of modifying the Ruby script. To do that, I also had to add a config file to the .ebextensions directory, in order to overwrite the default nginx.conf file:
files:
  "/tmp/nginx.conf":
    content: |
      # Elastic Beanstalk Managed
      # Elastic Beanstalk managed configuration file
      # Some configuration of nginx can be done by placing files in /etc/nginx/conf.d
      # using Configuration Files.
      # http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html
      #
      # Modifications of nginx.conf can be performed using container_commands to modify the staged version
      # located in /tmp/deployment/config/etc#nginx#nginx.conf
      # Elastic_Beanstalk
      # For more information on configuration, see:
      #   * Official English Documentation: http://nginx.org/en/docs/
      #   * Official Russian Documentation: http://nginx.org/ru/docs/

      user nginx;
      worker_processes auto;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      worker_rlimit_nofile 1024;

      events {
          worker_connections 1024;
      }

      http {
          ###############################
          # CUSTOM CONFIG TO IGNORE 4xx #
          ###############################
          map $status $loggable {
              ~^[4] 0;
              default 1;
          }
          map $status $modstatus {
              ~^[4] 200;
              default $status;
          }
          #####################
          # END CUSTOM CONFIG #
          #####################

          port_in_redirect off;
          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          # This log format was modified to ignore 4xx status codes!
          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
          access_log /var/log/nginx/access.log main;

          log_format healthd '$msec"$uri"'
                             '$modstatus"$request_time"$upstream_response_time"'
                             '$http_x_forwarded_for' if=$loggable;

          sendfile on;
          include /etc/nginx/conf.d/*.conf;
          keepalive_timeout 1200;
      }

container_commands:
  01_modify_nginx:
    command: cp /tmp/nginx.conf /tmp/deployment/config/#etc#nginx#nginx.conf
Although this solution is quite a bit more verbose, I personally believe it is safer to implement, since it does not depend on any AWS proprietary script. What I mean is that if, for some reason, AWS decides to remove or modify their Ruby script (believe it or not, they love to change scripts without prior notice), there is a big chance the workaround with sed will not work anymore.
Here is a solution based on Adriano Valente's answer. I couldn't get the $loggable bit to work, although skipping logging for the 404s seems like it would be a good solution. I simply created a new .conf file that defined the $modstatus variable, and then overwrote the healthd log format to use $modstatus in place of $status. This change also required nginx to be restarted. This works on Elastic Beanstalk's 64bit Amazon Linux 2016.09 v2.3.1 running Ruby 2.3 (Puma).
# .ebextensions/nginx.conf
files:
  "/tmp/nginx.conf":
    content: |
      # Custom config to ignore 4xx in the health file only
      map $status $modstatus {
          ~^[4] 200;
          default $status;
      }

container_commands:
  modify_nginx_1:
    command: "cp /tmp/nginx.conf /etc/nginx/conf.d/custom_status.conf"
  modify_nginx_2:
    command: sudo sed -r -i 's#\$status#$modstatus#' /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
  modify_nginx_3:
    command: sudo /etc/init.d/nginx restart
Based on Elad Nava's answer, I think it's better to use the Elastic Beanstalk healthd control script directly instead of kill:
container_commands:
  01-patch-healthd:
    command: "sudo /bin/sed -i 's/\\# normalize units to seconds with millisecond resolution/if status \\&\\& status.index(\"4\") == 0 then next end/g' /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.2.0/gems/healthd-appstat-1.0.1/lib/healthd-appstat/plugin.rb"
  02-restart-healthd:
    command: "sudo /opt/elasticbeanstalk/bin/healthd-restart"
Finally, when investigating this issue, I noticed that healthd and Apache log status codes differently, the former using %s while the latter uses %>s, resulting in discrepancies between them. I patched this as well using:
  03-healthd-logs:
    command: sed -i 's/^LogFormat.*/LogFormat "%{%s}t\\"%U\\"%>s\\"%D\\"%D\\"%{X-Forwarded-For}i" healthd/g' /etc/httpd/conf.d/healthd.conf
Solution provided by AWS support as of April 2018:
files:
  "/tmp/custom-site-nginx.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      map $http_upgrade $connection_upgrade {
          default "upgrade";
          "" "";
      }

      # Elastic Beanstalk Modification(EB_INCLUDE)
      # Custom config
      # HTTP 4xx ignored.
      map $status $loggable {
          ~^[4] 0;
          default 1;
      }

      server {
          listen 80;

          gzip on;
          gzip_comp_level 4;
          gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
              set $year $1;
              set $month $2;
              set $day $3;
              set $hour $4;
          }

          access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd if=$loggable;
          access_log /var/log/nginx/access.log;

          location / {
              proxy_pass http://docker;
              proxy_http_version 1.1;
              proxy_set_header Connection $connection_upgrade;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }

container_commands:
  override_beanstalk_nginx:
    command: "mv -f /tmp/custom-site-nginx.conf /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf"
I recently ran into the same issue of being bombarded with 4xx errors, as you have. I tried the suggestions listed above, but nothing worked for me. I reached out to AWS Support, and here is what they suggested; it solved my problem. I have an Elastic Beanstalk application with 2 instances running.
1. Create a folder called .ebextensions.
2. Inside this folder, create a file called nginx.config (make sure it has the .config extension; ".conf" won't do!).
3. If you are deploying your application with a Docker container, make sure this .ebextensions folder is included in the deployment bundle. For me, the bundle included the folder as well as the Dockerrun.aws.json.
Here is the entire content of the nginx.config file:
files:
  "/etc/nginx/nginx.conf":
    content: |
      # Elastic Beanstalk Nginx Configuration File
      user nginx;
      worker_processes auto;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          # Custom config
          # HTTP 4xx ignored.
          map $status $loggable {
              ~^[4] 0;
              default 1;
          }

          # Custom config
          # HTTP 4xx ignored.
          map $status $modstatus {
              ~^[4] 200;
              default $status;
          }

          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          access_log /var/log/nginx/access.log;
          log_format healthd '$msec"$uri"$modstatus"$request_time"$upstream_response_time"$http_x_forwarded_for';

          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }
I've been having a bit of trouble getting Nginx to play nicely with the Python Flask-SocketIO library (which is based on gevent). Currently, since we're actively developing, I'm just trying to get Nginx to work as a proxy. For serving pages, I can get this to work, either by directly running the flask-socketio app or by running through gunicorn. One hitch: the websocket messaging does not seem to work. The pages are successfully hosted and displayed, but when I try to use the websockets, they do not work. They are alive enough that the websocket thinks it is connected, but they will not send a message. If I remove the Nginx proxy, they do work. Firefox gives me this error when I try to send a message:
Firefox can't establish a connection to the server at ws://<web address>/socket.io/1/websocket/<unique id>.
Here <web address> is where the server is located and <unique id> is just a bunch of random-ish digits. It seems to be doing enough to keep the connection alive (e.g., the client thinks it is connected), but it can't send a message over the websocket. I have to think the issue has to do with some part of the proxy, but I'm having mighty trouble debugging what it might be (in part because this is my first go-round with both Flask-SocketIO and nginx). The configuration file I am using for nginx is:
user <user name>; ## This is set to the user name for the remote SSH session
worker_processes 5;

events {
    worker_connections 1024; ## Default: 1024
}

http {
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts

    server {
        listen 80;
        server_name _;

        location / {
            proxy_pass http://localhost:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
}
I made the config file as an amalgam of a general example and a websocket-specific one, but fiddling with it has not solved the issue. Also, I am using the werkzeug ProxyFix on my Flask app.wsgi_app when I use it in wsgi mode. I've tried it with and without that, to no avail, however. If anyone has some insight, I will be all ears/eyes.
I managed to fix this. The issues were not specific to flask-socketio; they were specific to Ubuntu, Nginx, and gevent-socketio. Two significant issues were present:
1. Ubuntu 12.04 has a truly ancient version of nginx (1.1.19 vs. 1.6.x for stable versions). Why? Who knows. What we do know is that this version does not support websockets in any useful way, as 1.3.13 is about the earliest you should be using.
2. By default, gevent-socketio expects your sockets to be at the location /socket.io. You can upgrade the whole HTTP connection, but I had some trouble getting that to work properly (especially after I threw SSL into the mix).
I fixed #1, but in fiddling with it I purged my nginx and apt-get installed... the default version of nginx on Ubuntu. Then I was mysteriously confused as to why things worked even worse than before. Many .conf files valiantly lost their lives in this battle.
If trying to debug websockets in this configuration, I would recommend the following steps:
1. Check your nginx version via nginx -v. If it is anything less than 1.4, upgrade it.
2. Check your nginx.conf settings. You need to make sure the connection upgrades (a quick spot check follows this list).
3. Check that your server IP and port match your nginx.conf reverse proxy.
4. Check that your client (e.g., socketio.js) connects to the right location and port, with the right protocol.
5. Check your blocked ports. I was on EC2, so I had to manually open 80 (HTTP) and 443 (SSL/HTTPS).
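One way to spot-check step 2 from the command line (hostname is a placeholder; /socket.io/1/ is the handshake path of the socket.io 0.9 protocol in use here):

# A healthy proxy path returns a handshake line like
# "<sid>:60:60:websocket,xhr-polling"; an nginx 502/504 or an immediate
# close points at the proxy config rather than at the app.
curl -i http://your-server/socket.io/1/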
Having just checked all of these things, there are takeaways.
Upgrading to the latest stable nginx version on Ubuntu can be done by:
sudo apt-get install python-software-properties
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
On systems like Windows, you can use an installer and will be less likely to get a bad version.
Many config files for this can be confusing, since nginx officially added websocket support in about 2013, making earlier workaround configs obsolete. Existing config files don't tend to cover all the bases for nginx, gevent-socketio, and SSL together, but have them all separately (nginx tutorials, gevent-socketio docs, Node.js-with-SSL examples). A config file for nginx 1.6 with flask-socketio (which wraps gevent-socketio) and SSL is:
user <user account, probably optional>;
worker_processes 2;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    sendfile on;
    # tcp_nopush on;

    keepalive_timeout 3;
    # tcp_nodelay on;
    # gzip on;

    client_max_body_size 20m;
    index index.html;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        # Listen on 80 and 443
        listen 80 default;
        listen 443 ssl; # only needed if you want SSL/HTTPS
        server_name <your server name here, optional unless you use SSL>;

        # SSL Certificate (only needed if you want SSL/HTTPS)
        ssl_certificate <file location for your unified .crt file>;
        ssl_certificate_key <file location for your .key file>;

        # Optional: Redirect all non-SSL traffic to SSL. (if you want ONLY SSL/HTTPS)
        # if ($ssl_protocol = "") {
        #     rewrite ^ https://$host$request_uri? permanent;
        # }

        # Split off basic traffic to backends
        location / {
            proxy_pass http://localhost:8081; # 127.0.0.1 is preferred, actually.
            proxy_redirect off;
        }

        location /socket.io {
            proxy_pass http://127.0.0.1:8081/socket.io; # 127.0.0.1 is preferred, actually.
            proxy_redirect off;
            proxy_buffering off; # optional
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
Checking that your Flask-socketio is using the right port is easy. This is sufficient to work with the above:
from flask import Flask, render_template, session, request, abort
import flask.ext.socketio

FLASK_CORE_APP = Flask(__name__)
FLASK_CORE_APP.config['SECRET_KEY'] = '12345'  # Luggage combination

SOCKET_IO_CORE = flask.ext.socketio.SocketIO(FLASK_CORE_APP)

@FLASK_CORE_APP.route('/')
def index():
    return render_template('index.html')

@SOCKET_IO_CORE.on('message')
def receive_message(message):
    return "Echo: %s" % (message,)

SOCKET_IO_CORE.run(FLASK_CORE_APP, host='127.0.0.1', port=8081)
For a client such as socketio.js, connecting should be easy. For example:
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/0.9.16/socket.io.min.js"></script>
<script type="text/javascript">
    var url = window.location.protocol + '//' + document.domain + ':' + location.port,
        socket = io.connect(url);
    socket.on('message', alert);
    socket.emit("message", "Test");
</script>
Opening ports is really more of a Server Fault or Super User issue, since it will depend a lot on your firewall. For Amazon EC2, see here.
If trying all of this does not work, cry. Then return to the top of the list, because you might just have accidentally reinstalled an older version of nginx.