Chrome: http://prntscr.com/coliya
Opera: http://prntscr.com/coljez
NGINX
server {
    listen 0.0.0.0:80;
    listen 0.0.0.0:443 ssl;
    root /usr/share/nginx/html;
    index index.html index.htm;

    ssl on;
    ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/private/budokai-onlinecom.key;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kED$
    ssl_dhparam /etc/ssl/private/dhparmas.pem;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    if ($ssl_protocol = "") {
        rewrite ^ https://$host$request_uri? permanent;
    }

    large_client_header_buffers 8 32k;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Accept-Encoding "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X_FORWARDED_PROTO https;
        proxy_set_header X-NginX-Proxy true;
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_read_timeout 86400;
        proxy_pass http://budokai-online.com:8080;
    }
}
The problem I'm having is that on some computers and in some browsers, the WebSocket connection attempt gets redirected. When that 302 shows up, the '/*' route has been activated; that route redirects the user to the login page, as you can see in the redirect response. Somewhere along the way the WebSocket upgrade request is being turned into an ordinary HTTP request, and that seems to be where the problem is. What could be causing this?
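The config above also sends Connection "Upgrade" on every request through /, whether or not the client actually asked to upgrade. The pattern in the nginx WebSocket documentation uses a map so that only genuine upgrade requests carry the header. A minimal sketch adapted to this config; the map must sit at http level, and everything else is taken from the block above:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    # ... existing listen/ssl directives from the block above ...

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        # "upgrade" only when the client sent an Upgrade header, "close" otherwise.
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 86400;
        proxy_pass http://budokai-online.com:8080;
    }
}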
I had the same problem, but it was related to Varnish settings. If you are using Varnish, add:
sub vcl_recv {
    # WebSocket handshakes carry an Upgrade header; switch to pipe mode
    # so Varnish passes the bytes through instead of treating it as HTTP.
    if (req.http.upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}
sub vcl_pipe {
    # Copy the Upgrade/Connection headers onto the backend request,
    # otherwise the backend never sees the handshake.
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}
Check this link for reference:
https://varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html
When I access https://my.server.com, it returns a blank page.
When I access http://myserver_external_ip:9100, it returns DevTools as expected.
I inspected "my.server.com" with Chrome DevTools and no errors were logged.
server {
server_name my.server.com; # my domain
location / {
proxy_pass http://localhost:9100;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
}
listen 443 ssl;
}
server {
if ($host = my.server.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 ;
listen [::]:80 ;
server_name my.server.com; # my domain
return 404;
}
I would expect both URLs to return the same page.
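One hedged thing to try: the location above forwards Host $host (i.e. my.server.com), and some debugging backends only accept localhost or plain-IP Host headers as DNS-rebinding protection. A minimal sketch with the Host header pinned to the upstream instead; this is only an assumption about whatever is listening on 9100:

location / {
    proxy_pass http://localhost:9100;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    # Assumption: the service on 9100 rejects unfamiliar Host headers,
    # so forward its own host:port rather than $host.
    proxy_set_header Host localhost:9100;
}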
Nginx installed successfully on CentOS 7 and serves the default page, but when I try to browse the application it gives the nginx error:
The page you are looking for is temporarily unavailable.
Please try again later.
When I checked error.log it says:
2020/10/30 10:25:01 [crit] 355290#0: *1 connect() to 127.0.0.1:7823 failed (13: Permission denied) while connecting to upstream, c$
Adding nginx.conf details:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name xxxxx.in;
ssl_certificate /etc/nginx/ssl/xxx.crt;
ssl_certificate_key /etc/nginx/ssl/xxx.key;
root /usr/share/nginx/html;
#root /var/www/html;
#index index.php index.html index.htm;
index index.php index.html index.htm index.nginx-debian.html;
location / {
}
location /hoh-bot
{
allow all;
proxy_pass http://127.0.0.1:7823;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /hoh-bot-socket
{
proxy_pass http://127.0.0.1:7823/socket.io;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /hoh2 {
return 200 'Testng';
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I am trying to create a Facebook Messenger bot. Everything worked with Heroku; then I moved it to my own server and got the error "curl errno = 35". It works fine on the server through ngrok, but not directly through my server.
I'm using Debian with nginx and Let's Encrypt.
The URL is preetombot.bddevwork.net.
My settings:
server {
listen 80;
server_name preetombot.bddevwork.net www.preetombot.bddevwork.net;
#root /usr/share/nginx/www/preetombot.bddevwork.net;
#return 301 https://$server_name$request_uri;
}
server {
listen 443 default_server ssl http2;
server_name preetombot.bddevwork.net
www.preetombot.bddevwork.net;
ssl_stapling on;
ssl_stapling_verify on;
ssl_certificate /etc/letsencrypt/live/preetombot.bddevwork.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/preetombot.bddevwork.net/privkey.pem;
ssl_trusted_certificate /test/ca-certs.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM$
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_dhparam /test/dhparam.pem;
root /usr/share/nginx/www/preetombot.bddevwork.net;
index index.php index.html index.htm;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For
$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://localhost:5000$request_uri;
proxy_redirect off;
proxy_http_version 1.1;
}
location ~ /.well-known{
allow all;
}
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
expires 365d;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME
$document_root$fastcgi_script_name;
include fastcgi_params;
}
}
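One side note on the config above: the port-80 server block has both its root and its 301 redirect commented out, so plain-HTTP requests fall through to nginx defaults. Since Facebook requires an HTTPS callback URL anyway, a minimal sketch of that block with the redirect re-enabled, using the same names as above:

server {
    listen 80;
    server_name preetombot.bddevwork.net www.preetombot.bddevwork.net;

    # Send all plain-HTTP traffic to the HTTPS server block below.
    return 301 https://$server_name$request_uri;
}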
I have multiple secure ports listening within a single server; in this case, for the FB bot, I'm using port 8083.
upstream botd {
server application_1:8083 max_fails=3 fail_timeout=30s;
keepalive 64;
}
server {
listen 443 default_server;
listen [::]:443 default_server;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
rewrite_log on;
ssl on;
server_name _;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;
ssl_certificate /etc/ssl/techie8.io/api.techie8.io.bundle;
ssl_certificate_key /etc/ssl/techie8.io/api.techie8.io.key;
# Botd skill.
location /botd {
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://botd;
break;
}
}
# Techie8 API.
location / {
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
#Timeout after 8 hours
proxy_read_timeout 43200000;
proxy_connect_timeout 43200000;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://application;
break;
}
}
}
In Flask App:
@app.route('/botd', methods=['GET'])
def handle_verification():
    """Handle Token verification."""
    print "Handling Verification."
    if request.args.get('hub.verify_token') == VERIFY_TOKEN:
        print "Verification successful!"
        return request.args.get('hub.challenge')
    else:
        print "Verification failed!"
        return 'Error, wrong validation token'

@app.route('/botd', methods=['POST'])
def handle_messages():
    print "Handling Incoming Messages\n"
    payload = request.get_data()
    print payload
    for sender, message in messaging_events(payload):
        print "Incoming Message from %s: %s" % (sender, message)
        print ("Access Token: %s" % ACCESS_TOKEN)
        send_message(ACCESS_TOKEN, sender, message)
    return "ok"
In the Facebook webhook callback URL, I have my host configured:
https://api.mycompany.io/botd
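One note on the nginx config above: the /botd location wraps proxy_pass in if (!-f $request_filename), a pattern the nginx documentation discourages inside location blocks. A minimal sketch of that location without the if, assuming nothing under /botd needs to be served as a static file (separately, proxy_read_timeout takes seconds by default, so 43200000 is far longer than the 8 hours the comment mentions; 28800 would match it):

location /botd {
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://botd;   # the upstream defined at the top of the config
}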
I have two domains running on my server; nginx just proxies them to Node apps. I have a certificate for one, but for the other I'm just using Cloudflare to provide HTTPS. I want to ensure that when users visit either domain, they always get redirected to the HTTPS version of the domain, without a www. This is my current configuration; uncommenting the block in the domain2 configuration file seems to break both sites :(
domain1 config file:
upstream domain1.com {
server 127.0.0.1:8000;
keepalive 8;
}
server {
listen 0.0.0.0:80;
server_name domain1.com www.domain1.com;
return 301 https://domain1.com$request_uri;
}
server {
#listen 80;
listen 443 ssl http2;
server_name domain1.com;
access_log /var/log/nginx/domain1.com.log;
root /var/www/domain1.com/client/public;
include /etc/nginx/global/cloudflare-allow.conf;
ssl_certificate /etc/nginx/ssl/domain1.crt;
ssl_certificate_key /etc/nginx/ssl/domain1.key;
if ($bad_referer) {
return 444;
}
location / {
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Connection "";
proxy_pass http://domain1.com;
proxy_redirect off;
}
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|webp)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
# CSS and Javascript
location ~* \.(?:css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
location ~* \.(?:rss|atom)$ {
expires 1h;
add_header Cache-Control "public";
}
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
}
}
server {
listen 443 ssl http2;
server_name www.domain1.com;
return 301 https://domain1.com$request_uri;
}
domain2 config file:
upstream domain2.com {
server 127.0.0.1:9000;
keepalive 8;
}
#server {
# listen 80;
# server_name domain2.com www.domain2.com;
# return 301 https://$server_name$request_uri;
#}
server {
listen 80;
#listen 443 ssl http2;
server_name domain2.com;
access_log /var/log/nginx/domain2.com.log;
root /var/www/domain2.com;
include /etc/nginx/global/cloudflare-allow.conf;
if ($bad_referer) {
return 444;
}
location / {
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Connection "";
proxy_pass http://domain2.com;
proxy_redirect off;
}
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|webp)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
# CSS and Javascript
location ~* \.(?:css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
location ~* \.(?:rss|atom)$ {
expires 1h;
add_header Cache-Control "public";
}
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
}
}
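A likely reason that uncommenting listen 443 ssl http2 in the domain2 file breaks both sites is that the block defines no ssl_certificate; nginx then typically fails its configuration test (or the TLS handshake), and the failed reload takes domain1 down with it. With Cloudflare in Flexible mode the origin only ever sees port 80, so the block can stay HTTP-only. If Full (strict) mode is wanted instead, a minimal sketch with a Cloudflare Origin CA certificate; the paths are hypothetical:

server {
    listen 443 ssl http2;
    server_name domain2.com www.domain2.com;

    # Hypothetical paths: a Cloudflare Origin CA certificate pair would go here.
    ssl_certificate     /etc/nginx/ssl/domain2-origin.pem;
    ssl_certificate_key /etc/nginx/ssl/domain2-origin.key;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://domain2.com;   # the upstream defined above
    }
}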
When SSL is done through CloudFlare's Flexible SSL mode, communication to the origin is HTTP traffic over port 80.
In order to detect whether this traffic was originally HTTPS, you can't use the HTTPS environment variable; instead, you must check whether the X-Forwarded-Proto header is set to https.
You can do this in Nginx as follows:
if ($http_x_forwarded_proto != "https") {
rewrite ^(.*)$ https://$server_name$1 permanent;
}
The easier way to do this is to simply set an "Always use HTTPS" Page Rule in CloudFlare.
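For the same check, an equivalent form using return rather than rewrite, which the nginx documentation generally prefers for simple redirects; this sketch assumes it sits at server level in domain2's port-80 block:

if ($http_x_forwarded_proto != "https") {
    return 301 https://$server_name$request_uri;
}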
This is my Nginx config:
upstream app_server {
# Bindings to the Gunicorn server
server 127.0.0.1:8002 fail_timeout=0;
}
server {
listen 80;
server_name "~^www\.(.*)$";
return 301 https://$host$request_uri;
}
server {
access_log path_to_nginx-access.log;
error_log path_to_nginx-error.log;
listen 443 ssl;
server_name _;
ssl_certificate path_to_nginx.crt;
ssl_certificate_key path_to_nginx.key;
client_max_body_size 4G;
keepalive_timeout 5;
root path_to_root;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server;
break;
}
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root path_to_templates;
}
}
My goal is to have all of these addresses redirect to https://domain.com:
http://domain.com
https://domain.com
http://www.domain.com
https://www.domain.com
What should I change?
Keep in mind that I need to handle multiple domains with the same Nginx server (see server_name).
Thanks!
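For reference, a minimal sketch of one way to get all four forms onto the bare https:// domain while still handling multiple domains: a regex capture in server_name pulls out the host without its www., so the redirect target drops the www. It reuses the placeholder certificate paths from the question and assumes one certificate covers both the www and bare names of each domain:

# Port 80: everything goes to HTTPS on the bare domain.
server {
    listen 80;
    server_name ~^(www\.)?(?<domain>.+)$;
    return 301 https://$domain$request_uri;
}

# Port 443: strip a leading www and land on the bare domain.
server {
    listen 443 ssl;
    server_name ~^www\.(?<domain>.+)$;
    ssl_certificate path_to_nginx.crt;
    ssl_certificate_key path_to_nginx.key;
    return 301 https://$domain$request_uri;
}

# The existing 443 server with server_name _ then serves the app for the bare domains.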