I'm unable to post any pictures or other media on Facebook. I have used the debugger (http://developers.facebook.com/tools/debug) and I always get:
Scrape Information
Response Code: 502
I use nginx 1.2.0 with php-fpm over a Unix socket rather than TCP port 9000.
My error log does not show any errors. The access log shows:
69.171.237.14 - - [23/Mar/2013:19:00:29 +0100] "GET /video/X1KAW64412WH1OO/5123 HTTP/1.1" 200 11715 "-" "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" "-"
I have currently disabled iptables. In php.ini, most of the timeouts are set to 3600.
nginx.conf part:
location ~ \.php$ {
    root /home/blabla/www;
    # fastcgi_pass 127.0.0.1:9000;
    try_files $uri =404;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /home/blabla/blabla/$fastcgi_script_name;
    # fastcgi_param REQUEST_URI $request_uri;
    #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
    access_log logs/access._php.log main;
    fastcgi_send_timeout 5m;
    fastcgi_read_timeout 5m;
    fastcgi_connect_timeout 5m;
}
I have tested using Cloudflare and it works great, but when I point directly to my server it stops working. It happens to all the other websites hosted on this machine, so I guess it must be a web server configuration problem. I use CentOS x64.
I was running into the same issue.
My nginx logs indicated that a 200 response was being served, and requests in the browser showed a 200 response. Facebook was insistent that it was a 502 error.
It turns out that '502' can mean either 'Your server returned a 502' or 'I ran into a difficulty parsing the response'.
In my case, I had a non-compliant HTTP header (it contained a single question mark), which was causing Facebook to reject the response as invalid. Removing this header fixed the issue.
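One quick way to hunt for such a header is to dump the raw response headers (e.g. with curl -sI) and scan the header names for illegal characters. The sketch below only checks for a stray question mark, the culprit in this case; the sample headers are invented for illustration:

```shell
# Sample headers standing in for a real response; the "?" in the second
# header name is the kind of thing that makes parsers reject a response.
headers='Content-Type: text/html
X-Weird?Header: oops'

# Split each line at the first ":" and flag names containing "?".
printf '%s\n' "$headers" | while IFS= read -r line; do
  name=${line%%:*}
  case $name in
    *\?*) echo "invalid header name: $name" ;;
  esac
done
```

With a real site you would feed the loop from `curl -sI http://yoursite/` instead of the sample string.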
Facebook uses the IPv6 address by default if one is available. To solve this problem, you have to enable IPv6 in the nginx config file for each virtual host (if many sites are hosted) so that it listens on any IPv6 address on port 80.
This will solve the issue with Facebook opengraph.
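A minimal sketch of such a vhost (server_name is a placeholder):

```nginx
server {
    listen 80;
    listen [::]:80;   # also listen on any IPv6 address, port 80
    server_name example.com;

    # ... rest of the virtual host configuration ...
}
```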
I'm trying to use the Woocommerce (v 3.5.4) Rest Api on my VPS (debian 9, Nginx).
Everything works well in my local machine (windows 10, XAMPP).
wpbop/ is the folder (/var/www/wpbop/) where the WordPress files are stored.
The following basic URL in a browser should return the endpoints of the API (no authentication needed for this first step):
http://my-public-ip/wpbop/wp-json/wc/v3
Or with curl on the command line:
curl http://127.0.0.1/wpbop/wp-json/wc/v3
In both cases, I get a 404 Not Found error.
I can access the blog / blog admin without any problems (http://my-public-ip/wpbop).
My permalinks are set to "Post name" in the WordPress admin panel, as recommended by many people in the same situation.
EDIT - SOLUTION:
Since my WordPress installation is in a subdirectory,
try_files $uri $uri/ /index.php$is_args$args;
can't find index.php. Just change this line to:
try_files $uri $uri/ /wpbop/index.php$is_args$args;
and it works!
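For reference, a sketch of the relevant part of the location block after the fix (paths as in the question):

```nginx
location /wpbop {
    index index.php;
    # The fallback must include the /wpbop/ subdirectory, otherwise
    # nginx looks for index.php in /var/www instead of /var/www/wpbop:
    try_files $uri $uri/ /wpbop/index.php$is_args$args;
}
```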
Perhaps the problem is coming from my nginx conf file?
server {
    server_name localhost;
    listen 80;
    root /var/www;

    location /wpbop {
        index index.php;
        access_log /var/log/nginx/blog.access.log;
        error_log /var/log/nginx/blog.error.log;
        try_files $uri $uri/ /index.php$is_args$args;
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:7000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
    }
}
I have tried many things without any results and have been stuck for several days. Can someone help me?
Thanks for reading me.
This case needs a simple fix in the nginx configuration file, related to the path of my WordPress installation.
Since my WordPress installation is in a subdirectory,
try_files $uri $uri/ /index.php$is_args$args;
can't find index.php. Just change this line to:
try_files $uri $uri/ /wpbop/index.php$is_args$args;
When you get a 404 code, try to access http://yoursite/?rest_route=/wp/v2/posts
Official documentation: https://developer.wordpress.org/rest-api/key-concepts
Move root /var/www; up by one level (to the server context). It is not being inherited.
I use this command to start my server:
fastcgi-mono-server4 -v /applications=www.testjet123.com:/:/usr/share/nginx/TestJet/ /socket=unix:/tmp/fastcgi.socket
Everything is OK and the server does not stop, but at the end of the output I get this error:
[2017-11-13 06:29:00.445497] Notice : Adding applications 'www.testjet123.com:/:/usr/share/nginx/TestJet/'...
[2017-11-13 06:29:00.454111] Notice : Registering application:
[2017-11-13 06:29:00.454177] Notice : Host: www.testjet123.com
[2017-11-13 06:29:00.454193] Notice : Port: any
[2017-11-13 06:29:00.454204] Notice : Virtual path: /
[2017-11-13 06:29:00.454216] Notice : Physical path: /usr/share/nginx/TestJet/
[2017-11-13 06:29:00.466032] Error : Error parsing permissions "". Use octal.
The fastcgi_params are the defaults:
#ASP.NET
#fastcgi_param PATH_INFO "/usr/share/nginx/TestJet/";
#fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
This is my nginx configuration:
server {
    listen 80;
    server_name testjet123.com www.testjet123.com;

    location / {
        root /var/www/UI/html;
        index index.html index.htm;
        try_files $uri $uri/ =404;
    }

    location ~ \.(aspx|asmx|ashx|asax|ascx|soap|rem|axd|cs|config|dll)$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
However, when I use a TCP socket instead of a UNIX socket I do not get the error; see below:
fastcgi-mono-server4 -v /applications=www.testjet123.com:/:/usr/share/nginx/TestJet/ /socket=tcp:9000
[2017-11-13 06:36:09.160760] Notice : Adding applications 'www.testjet123.com:/:/usr/share/nginx/TestJet/'...
[2017-11-13 06:36:09.169121] Notice : Registering application:
[2017-11-13 06:36:09.169187] Notice : Host: www.testjet123.com
[2017-11-13 06:36:09.169202] Notice : Port: any
[2017-11-13 06:36:09.169213] Notice : Virtual path: /
[2017-11-13 06:36:09.169225] Notice : Physical path: /usr/share/nginx/TestJet/
I am using RHEL 7.
Try using the /filename=/tmp/fastcgi.socket argument together with /socket=unix, in place of the combined /socket=unix:/tmp/fastcgi.socket argument. I have found that this resolves the issue on Debian-based distros.
You'll need to set the permissions so that the nginx (www-data) group has read and write access to the socket, using chmod 660 /tmp/fastcgi.socket and chgrp www-data /tmp/fastcgi.socket.
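A sketch of those permission commands, using a plain file as a stand-in since the real socket is created when the service starts (the chgrp line assumes a www-data group exists, so it is left commented):

```shell
# Stand-in for the socket created by fastcgi-mono-server4.
sock=/tmp/fastcgi.socket
touch "$sock"

# Give the owning user and group read/write access, nobody else.
chmod 660 "$sock"
# chgrp www-data "$sock"   # requires the www-data group to exist

stat -c '%a' "$sock"   # prints 660
```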
Although a bit off topic: my personal experience, and the reason for initially moving over to Unix sockets from TCP/IP sockets, was that the connection between nginx and the mono service would suddenly become disconnected after some period of time or number of requests, causing inbound requests to hang or fail.
I experienced the same issue with HyperFastCGI however, using unix sockets didn't help with either HyperFastCGI or fastcgi-mono-server4.
It turned out that it was due to the fact I was using the latest packages directly from the mono developers' repo, which installed mono version 5.10.1. Re-building the machine and only using the mono packages supplied by the Debian-based distro, which installed mono version 4.6.2, resolved my issue. No longer did the asp.net process detach or lose its connection to the nginx service.
Additionally, according to https://www.nginx.com/resources/wiki/start/topics/examples/mono/ it's recommended to use Unix sockets:
You could also bind it to a UNIX socket which is recommended.
Due to this recommendation I've continued to use Unix sockets to communicate between nginx and the fastcgi-mono-server4 service, and opted not to use HyperFastCGI. I no longer experience any loss of connectivity or outages between nginx and the mono process; the connection has been stable for many days and any number of requests. In fact, I've yet to experience any loss to date. There is also none of the memory leakage with fastcgi-mono-server that other users have reported.
I have a server that runs nginx (Ubuntu 16). I also have a domain name that redirects to the IP of this server. Naturally, I want to show the user the domain name in the address bar, not the IP (as it is now). To do this, I changed the site configuration in the /etc/nginx/sites-available folder to the following (the project is written in Symfony; the location blocks are mostly from its docs):
server {
    listen 80;
    server_name **.***.***.***; #My server ip
    return 301 $scheme://example.com$request_uri;
}
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com/web;
    index app.php app_dev.php;

    location / {
        try_files $uri /app.php$is_args$args;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    # DEV
    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    # PROD
    location ~ ^/app\.php(/|$) {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    # Phpmyadmin Configurations
    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;

        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            #fastcgi_pass 127.0.0.1:9000;
            #fastcgi_param HTTPS on; # <-- add this line
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }

    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }

    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;
}
As a result, the user now sees the domain name in the address bar, but it brings no joy: the browsers report ERR_TOO_MANY_REDIRECTS and do not show the content.
As I understand it, there is a recursive redirect somewhere. Apart from */nginx/sites-available/example.com there are no other configs in this folder (the default file is fully commented out).
Could it be that the server, receiving a request at **.***.***.***:80, redirects it to example.com, and the domain service, catching that request, redirects back to **.***.***.***:80, and so on in a loop?
What should I do then? Or is the problem somewhere in the local configuration?
UPD: this is the contents of the access.log file after a single attempt to open the site
(the line is repeated 9 times; **.***.***.*** is the IP of my server):
**.***.***.*** - - [03/Oct/2017:11:59:07 +0300] "GET / HTTP/1.1" 301 194 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0"
UPD 2
I tried curl -L -I http://mysite
Result of curl:
HTTP/1.1 302 Moved Temporarily
Server: nginx
Date: Tue, 03 Oct 2017 09:49:32 GMT
Content-Type: text/html
Content-Length: 154
Connection: keep-alive
Location: http://**.***.***.*** //(my server IP)
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.3 (Ubuntu)
Date: Tue, 03 Oct 2017 09:49:32 GMT
Content-Type: text/html
Content-Length: 194
Connection: keep-alive
Location: http://example.com //(my site)
....
// some repeats this
....
curl: (52) Empty reply from server
The 301 redirect is the one described in my configuration.
Why there is a 302 redirect, I do not know. Is this the result of the DNS service?
Try to debug using curl:
For example:
curl -L -I http://yoursite
the option -L will follow redirects and the -I will only show the headers.
In the output search for HTTP/1.1 301 Moved Permanently and the Location: htt.....
Also try changing your conf to use an explicit http or https; in many cases this is where the loop happens:
return 301 $scheme://
to
return 301 https://
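Applied to the catch-all server block from the question, that change would look something like this (domain and masked IP as in the question):

```nginx
server {
    listen 80;
    server_name **.***.***.***;   # the bare-IP catch-all from the question
    # Pin the scheme so the redirect cannot bounce between http and https:
    return 301 https://example.com$request_uri;
}
```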
The error was not on the side of my server or the nginx configuration; I had not configured DNS correctly when I got the domain name. Instead of creating an A record, I had set up a redirect to the IP of my server.
I'm currently testing a Perl CGI application with nginx and fcgiwrap. It's partially working; however, I'm having issues getting errors back in the response.
All requests return 200. If the CGI errors, it just returns blank content.
I'm running both nginx and fcgiwrap from supervisord.
This is my supervisord.conf file...
[supervisord]
logfile=/tmp/supervisord.log
nodaemon=true
[fcgi-program:fcgiwrap]
command = /usr/sbin/fcgiwrap
user = www-data
socket = unix:///var/run/supervisor/%(program_name)s.sock
socket_owner = www-data:www-data
socket_mode = 0770
autorestart=true
autostart=true
startsecs=1
startretries=3
stopsignal=QUIT
stopwaitsecs=10
environment=PATH='/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin'
redirect_stderr=false
stdout_logfile=/var/log/fcgiwrap_out.log
stderr_logfile=/var/log/fcgiwrap_err.log
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
I get the errors appearing in /var/log/fcgiwrap_err.log. However the error message, and more importantly the status, aren't returned to nginx.
This is my nginx config...
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/web;
    index index.html index.cgi;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.cgi$ {
        gzip off;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/supervisor/fcgiwrap.sock;
        fastcgi_index index.cgi;
        fastcgi_param SCRIPT_FILENAME /var/www/web$fastcgi_script_name;
    }

    # deny access to .htaccess files
    location ~ /\.ht {
        deny all;
    }
}
I'm not sure whether the issue is due to a misconfiguration of fcgiwrap, nginx, or supervisord.
You told supervisord to manage a socket (/var/run/supervisor/fcgiwrap.sock) but didn't tell fcgiwrap to use that socket.
So connections through this empty socket will never reach the fcgiwrap process, and hence you get no error message from fcgiwrap.
You need to change the fcgiwrap command to specify the socket, using the -s parameter.
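In the supervisord.conf from the question, that would look roughly like the following; whether this socket path matches your setup, and which socket options your fcgiwrap build supports, are assumptions to verify against fcgiwrap's man page:

```ini
[fcgi-program:fcgiwrap]
; Tell fcgiwrap itself which socket to serve on (the -s flag), matching
; the socket path that supervisord manages:
command = /usr/sbin/fcgiwrap -s unix:/var/run/supervisor/fcgiwrap.sock
user = www-data
socket = unix:///var/run/supervisor/%(program_name)s.sock
socket_owner = www-data:www-data
socket_mode = 0770
```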
"upstream sent too big header while reading response header from upstream"
I keep getting this when I try to authenticate with Facebook. I've increased my buffers:
proxy_buffer_size 256k;
proxy_buffers 8 256k;
proxy_busy_buffers_size 512k;
fastcgi_buffers 8 256k;
fastcgi_buffer_size 128k;
But it doesn't seem to help. Any thoughts as to why this might occur?
nginx.conf file:
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
    # multi_accept on;
}
http {
    include /etc/nginx/mime.types;

    proxy_buffer_size 256k;
    proxy_buffers 8 256k;
    proxy_busy_buffers_size 512k;
    fastcgi_buffers 8 256k;
    fastcgi_buffer_size 128k;

    access_log /var/log/nginx/access.log;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/default
server {
    listen 80 default;
    server_name localhost;
    access_log /var/log/nginx/localhost.access.log;

    location / {
        root /var/www/nginx-default;
        index index.html index.htm;
    }

    location /doc {
        root /usr/share;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }

    location /images {
        root /usr/share;
        autoindex on;
    }
}
In CodeIgniter I had the same error. This worked for me:
http://forum.nginx.org/read.php?2,192785,196003#msg-196003
In .conf
location ~* \.php$ {
    fastcgi_pass 127.0.0.1:9001;
    fastcgi_index index.php;
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # 16-Sept-2012: parameters to avoid the 502
    fastcgi_temp_file_write_size 10m;
    fastcgi_busy_buffers_size 512k;
    fastcgi_buffer_size 512k;
    fastcgi_buffers 16 512k;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_intercept_errors on;
    fastcgi_next_upstream error invalid_header timeout http_500;
}
I had the exact same issue this morning; however, increasing the buffer size worked for me. These are the settings that I used:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
The only setting I don't see in your config is
proxy_temp_file_write_size 256k;
Also, I added these values just for that vhost. I don't think it should matter, but might be worth trying.
Turns out CodeIgniter sets its own max size. I haven't figured out how to limit that, but changing nginx won't change anything, unfortunately. Thanks for all the help VBart and gsharma.
We are moving our production environment; the old one works without problems, but the new one had the same "upstream sent too big header while reading response header from upstream" problem. This is a CodeIgniter 2.x application.
As @gsharma said, after changing the server config with this, the errors disappeared:
fastcgi_buffers 256 4k;
fastcgi_buffer_size 8k;
However, I still had some problems: login didn't work anymore.
The problem was around $config['sess_encrypt_cookie']=TRUE;
When using sess_encrypt_cookie, CodeIgniter tries to use the mcrypt library, but if it doesn't exist it falls back to a method called '_xor_encode'. I think this method is buggy.
After installing php-mcrypt, everything worked without problems.
(sorry for my English)
I was getting this error on a page that was 800 bytes long with 4 headers. It was a sign-out page that deletes cookies. To expire the cookies I was setting them back to my birthday. This did not work in nginx: the expiry date must be less than a month in the past to pass validation and remove the cookies.
I ran a check on a few more different but invalid headers and got the same result. If nginx cannot validate the header, it throws: upstream sent too big header while reading response header from upstream
2015: more information from experience:
upstream sent too big header while reading response header from upstream is nginx's generic way of saying "I don't like what I'm seeing". Possible causes:
- Your upstream server thread crashed
- The upstream server sent an invalid header back
- The Notices/Warnings sent back via STDERR overflowed their buffer, and both it and STDOUT were closed
3: Look at the error logs above the message. Is it streaming with logged lines preceding the message, such as PHP message: PHP Notice: Undefined index:?
Example snippet from a loop in my log file:
2015/11/23 10:30:02 [error] 32451#0: *580927 FastCGI sent in stderr: "PHP message: PHP Notice: Undefined index: Firstname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice: Undefined index: Lastname in /srv/www/classes/data_convert.php on line 1090
... // 20 lines of same
PHP message: PHP Notice: Undefined index: Firstname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice: Undefined index: Lastname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice:
2015/11/23 10:30:02 [error] 32451#0: *580927 FastCGI sent in stderr: "ta_convert.php on line 1090
PHP message: PHP Notice: Undefined index: Firstname
You can see in the third line (after the 20 previous errors) that the buffer limit was hit and broke, and the next thread wrote in over it. nginx then closed the connection and returned 502 to the client.
2: Log all the headers sent per request, review them, and make sure they conform to standards (nginx does not permit anything older than 24 hours to delete/expire a cookie; an invalid Content-Length can be sent because error messages were buffered before the content was counted; ...).
examples include:
<?php
//expire cookie
setcookie ( 'bookmark', '', strtotime('2012-01-01 00:00:00') );
// nginx will refuse this header response, too far past to accept
....
?>
and this:
<?php
header('Content-type: image/jpg');
?>
<?php //a space was injected into the output above this line
header('Content-length: ' . filesize('image.jpg') );
echo file_get_contents('image.jpg');
// error! the response is now 1-byte longer than header!!
?>
1: Verify, or make the script log, that your thread is reaching the correct end point and not exiting before completion.