I have installed Apache Traffic Server and configured records.config using:
CONFIG proxy.config.http.cache.http INT 1
CONFIG proxy.config.reverse_proxy.enabled INT 1
CONFIG proxy.config.url_remap.remap_required INT 1
CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
CONFIG proxy.config.http.server_ports STRING 8080 8080:ipv6
I have also added a remap.config line because I read it is essential:
regex_map http://(.*)/ http://localhost:80/
But when I try to access localhost:8080, I get this output:
Not Found on Accelerator
Description: Your request on the specified host was not found. Check the location and try again
Why can't I access the server? I have followed the installation guide...
EDIT: Here is the curl output:
curl localhost:8080
<HTML>
<HEAD>
<TITLE>Not Found on Accelerator</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Not Found on Accelerator</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Your request on the specified host was not found.
Check the location and try again.
</B></FONT>
<HR>
</BODY>
I had the same issue today. It turned out to be caused by file permissions: the trafficserver user had no write access to the /etc/trafficserver folder (on Debian) or its contents. I changed ownership to trafficserver and now everything works.
To take ownership recursively, run the following in /etc:
chown -R trafficserver:trafficserver trafficserver
Also make sure remap_required is set to 0 in /etc/trafficserver/records.config:
CONFIG proxy.config.url_remap.remap_required INT 0
From the documentation: set this variable to 1 if you want Traffic Server to serve requests only from origin servers listed in the mapping rules of the remap.config file. If a request does not match, the browser will receive an error.
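Alternatively, if you would rather keep remap_required at 1, a plain map rule for the host you actually request is usually what is needed instead of the regex_map above. A minimal sketch, assuming Traffic Server listens on port 8080 and the origin runs on localhost:80 (adjust the hostnames to your setup):
# /etc/trafficserver/remap.config
# forward requests arriving at the proxy to the origin
map http://localhost:8080/ http://localhost:80/
# rewrite origin redirects so they point back at the proxy
reverse_map http://localhost:80/ http://localhost:8080/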
I have done extensive searches on the Internet for a solution to this issue, but all that I can find is always related to making timeout adjustments on a Linux machine running Apache. I am running IIS version 10 on Windows Server 2019. When Facebook changed its website approximately 30 days ago, the Open Graph image sharing protocol stopped working properly. When I attempt to use the Facebook Developer scraper, I get the following timeout error:
Curl Timeout
The request to scrape the URL timed out.
Curl Error
Curl error: 28 (OPERATION_TIMEOUTED)
I also filed a bug report with Facebook, but they simply closed the report, stating that the problem is with my server or network connection. I opened and inspected the server's error logs and found no issues. I then set up and inspected the IIS logs, and found that Facebook did hit the server properly, fetched an image, and received a 200 response. But the timeout error still occurs, and the image is not shared when I attempt to share it. Here are the records from the IIS logs that seem to indicate that Facebook is indeed contacting my server correctly, except for the fact that they are using "http" rather than "https." This has been reported to Facebook.
2020-12-24 18:31:51 W3SVC3 EDENUSA-FS11 10.1.252.250 GET /images/qr_code/edenusa_qr_code.png - 443 - 69.171.249.113 facebookexternalhit/1.1+(+http://www.facebook.com/externalhit_uatext.php) - www.edenusa.com 200 0 0 70
2020-12-24 18:32:02 W3SVC3 EDENUSA-FS11 10.1.252.250 GET /rent-lighting/lighting/rent_lighting.asp - 443 - 69.171.249.111 facebookexternalhit/1.1+(+http://www.facebook.com/externalhit_uatext.php) - www.edenusa.com 200 0 0 21410
And the following is a snippet of the required meta tags in our header area, from the home page:
<!DOCTYPE html>
<head>
<title>Rent a Stage | Rent a Sound System | Rent Lighting System | Rent Up Lighting</title>
<meta prefix="fb: https://ogp.me/ns/fb#" property="fb:app_id" content="1376081292633720">
<meta property="og:url" content="https://www.edenusa.com/index.asp" />
<meta property="og:image:type" content="image/jpeg" />
<meta property="og:title" content="Rent a Stage | Rent a Sound System | Rent Lighting System | Rent Up Lighting" />
<meta property="og:image" content="https://www.edenusa.com/images/homepage/compressed/indian_temple_in_chino_hills.jpg" />
<meta property="fb:app_id" content="1376081292633720" />
I've worked on this for over a week now, without resolution. Anybody else having this issue, or know of a way to resolve the timeout issue?
This issue was resolved as follows:
1. We had to remove REST code in the GLOBAL.ASA that goes out and fetches geographic info (city and state only) based upon the client's IP address. The service endpoint is a bit slow and required a longer timeout than might be considered "normal". When this code branch was commented out, the Facebook CURL timeout error no longer occurred. We are looking at another IP geographic info service that is faster.
2. After completing step 1, we found that on the home page ONLY, we had to leave the INDEX.ASP portion of the URL in place. We had code that stripped "index.asp" off the canonical URL. For unknown reasons, Facebook looks at the HTTP header, sees that the original URL has "index.asp" included, and then compares that to the URL specified in the "og:url" meta tag.
In conclusion, the most recent rollout of Facebook includes new code that configured a shorter timeout value for CURL. This causes websites with a somewhat longer startup time to experience this issue. So for now, the only fix is to monitor a site's startup time and shorten it enough for the Facebook debugger/scraper to function as it did before the most recent changes.
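If you want to measure how long your page takes to respond to the scraper, curl can report the total request time; a quick check, borrowing the user agent string from the IIS logs above and this site's home page URL:
curl -A "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" \
  -o /dev/null -s -w "time_total: %{time_total}s\n" \
  https://www.edenusa.com/index.asp
A total time of more than a few seconds is a red flag for the scraper timing out.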
I updated the constants of my template in the TYPO3 web editor. Each time I click Save or Close+Save, I get a pop-up from my browser to download a file. Its content looks like this:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
</body></html>
The minimal example to get this is:
page.theme {
socialmedia.channels {
facebook.url = https://www.facebook.com/typo3/
}
}
It seems that TYPO3 has a problem with the dots in the URL. If I remove all of them, or escape them with a backslash "\", everything works. (But the backslash remains in the URL and therefore produces invalid URLs.)
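For clarity, this is the escaped variant described above that saves without the 503, though the backslashes then end up in the rendered URL:
page.theme {
socialmedia.channels {
facebook.url = https://www\.facebook\.com/typo3/
}
}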
A few months ago everything worked fine. Other templates in the same installation also have URLs in their configuration, and they work (the page is rendered normally). But if I try to save them now without any changes, I get the same error.
This is the system I use:
Typo3-Version: 9.5.20
Webserver: Apache/2.4.43 (Unix)
PHP-Version: 7.3.21
Database: MySQL 5.6.42
Application context: Production
OS: SunOS localhost 5.10 Generic_150401-49 i86pc
Bootstrap Package: 11.0.2
We have a Kubernetes cluster which has a Dropwizard-based web application running as a service. This application has a REST URI to upload files, and it cannot upload files larger than 1 MB. I get the following error:
ERROR [2017-07-27 13:32:47,629] io.dropwizard.jersey.errors.LoggingExceptionMapper: Error handling a request: ea812501b414f0d9
! com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
! at [Source: <html>
! <head><title>413 Request Entity Too Large</title></head>
! <body bgcolor="white">
! <center><h1>413 Request Entity Too Large</h1></center>
! <hr><center>nginx/1.11.3</center>
! </body>
! </html>
I have tried the suggestions given in https://github.com/nginxinc/kubernetes-ingress/issues/21. I have edited the Ingress to set the proxy-body-size annotation. I have also tried using the ConfigMap, without any success. We are using Kubernetes version 1.5. Please let me know if you need additional information.
Had this on my setup as well. Two pieces of advice here:
1. Switch to the official Kubernetes nginx ingress controller; it's awesome (https://github.com/kubernetes/ingress-nginx).
2. With the above ingress controller, you can add an annotation to your Ingresses to control the body size limit on a per-Ingress basis, like this:
annotations:
  ingress.kubernetes.io/proxy-body-size: 10m
Works great.
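For context, a minimal sketch of a complete Ingress carrying that annotation; the name, host, and backend service are placeholders, and the extensions/v1beta1 API group matches the Kubernetes 1.5 era (newer ingress-nginx releases expect the nginx.ingress.kubernetes.io/proxy-body-size prefix instead):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: upload-api
  annotations:
    # allow request bodies up to 10 MB
    ingress.kubernetes.io/proxy-body-size: 10m
spec:
  rules:
  - host: upload.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: dropwizard-upload-svc
          servicePort: 8080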
I'm setting up an Ubuntu server using nginx and uWSGI. Yesterday, running
sudo service nginx restart
and
sudo service uwsgi restart
would generate this socket: /run/uwsgi/app/recoapi/recoapi.socket
I installed uWSGI using pip rather than apt-get, and ever since around that time, the recoapi.socket file hasn't been generated. I find the following error in my nginx error.log when I try to curl my server:
2013/09/01 13:59:12 [crit] 29712#0: *1 connect() to unix:///run/uwsgi/app/recoapi/recoapi.socket failed (2: No such file or directory) while connecting to upstream
The result of this error is that the output of my curl is:
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.2.6 (Ubuntu)</center>
</body>
</html>
My uWSGI config file looks like this. The lines regarding the socket permissions seem to have no effect:
<uwsgi>
  <plugin>python</plugin>
  <uid>www-data</uid>
  <gid>www-data</gid>
  <chmod-socket>777</chmod-socket>
  <chown-socket>www-data</chown-socket>
  <socket>/run/uwsgi/app/recoapi/recoapi.socket</socket>
  <pythonpath>/var/www/recoapi/application/</pythonpath>
  <wsgi-file>/var/www/recoapi/application/wsgi_configuration_module.py</wsgi_file>
  <app mountpoint="/">
    <script>wsgi_configuration_module</script>
  </app>
  <processes>4</processes>
  <harakiri>60</harakiri>
  <reload-mercy>8</reload-mercy>
  <cpu-affinity>1</cpu-affinity>
  <stats>/tmp/stats.socket</stats>
  <max-requests>2000</max-requests>
  <limit-as>512</limit-as>
  <reload-on-as>256</reload-on-as>
  <reload-on-rss>192</reload-on-rss>
  <no-orphans/>
  <vacuum/>
</uwsgi>
I'm working from this tutorial.
This is my nginx configuration file:
server {
    listen 80;
    server_name $hostname;
    access_log /var/www/recoapi/logs/access.log;
    error_log /var/www/recoapi/logs/error.log;
    location / {
        #uwsgi_pass 127.0.0.1:9001;
        uwsgi_pass unix:///run/uwsgi/app/recoapi/recoapi.socket;
        include uwsgi_params;
        uwsgi_param UWSGI_SCHEME $scheme;
        uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
    }
    location /static {
        root /var/www/recoapi/public_html/static/;
    }
}
The problem was invalid syntax in my uWSGI XML file.
The socket wasn't being created because the server wasn't starting: uWSGI couldn't parse its config file because of the mismatched XML tags <wsgi-file> and </wsgi_file>. That line was unnecessary anyway, so I deleted it, and the socket was created again.
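For reference, if you did want to keep the line, the matched pair would read:
<wsgi-file>/var/www/recoapi/application/wsgi_configuration_module.py</wsgi-file>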
We'd like to have the following configuration:
one server replying to GWT-RPC: x.com (the one running Java)
another server serving JS/CSS/images: y.com (for bandwidth optimization)
So the main page is http://x.com/index.html
and contains this line: <script type="text/javascript" language="javascript" src="http://x.com/my-app.nocache.js"></script>
We're getting a SOP error: Unsafe JavaScript attempt to access frame with URL ...
Any suggestions or help with that?
Add the following to your gwt.xml:
<add-linker name="xsiframe" />
This will generate slightly different code that can be loaded cross-origin. Your "host page" will still have to be loaded from the same server you run your GWT-RPC servlets on, so as not to hit the SOP.
See this FAQ entry (the "xs" linker predates the "xsiframe" one; the latter is now preferred and could eventually even replace the default "std" linker).
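In context, the module descriptor would look something like this; the module, package, and entry-point names here are placeholders:
<!-- MyApp.gwt.xml -->
<module rename-to="myapp">
  <inherits name="com.google.gwt.user.User" />
  <entry-point class="com.example.client.MyApp" />
  <!-- emit bootstrap code that can be loaded from another origin -->
  <add-linker name="xsiframe" />
</module>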
You have hit the Same-Origin Policy, which prevents making an XMLHttpRequest to servers other than the origin server. This effectively prevents cross-domain GWT-RPC.
The possible workarounds are described in Making cross-site requests:
Run a proxy on your server
Load the JSON response into a <script> tag (a sketch follows below)
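For the second workaround, a minimal JSONP-style sketch; the endpoint on y.com and the callback name are hypothetical, and the server must wrap its JSON response in a call to that callback:
<script type="text/javascript">
  // the cross-origin server responds with: handleData({...});
  function handleData(data) {
    alert(data.message);
  }
</script>
<script type="text/javascript" src="http://y.com/api/data?callback=handleData"></script>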