Where should I set 'Header set Access-Control-Allow-Origin "*"' in my Apache 2 server?

I want to access other servers from my server.
When I send a GET/POST request to www.posttestserver.com, the connection is established successfully.
In its response, that server provides these response headers:
Access-Control-Allow-Origin:*
Connection:Keep-Alive
Content-Encoding:gzip
Content-Length:129
Content-Type:text/html; charset=UTF-8
Date:Tue, 13 Jun 2017 07:24:27 GMT
Keep-Alive:timeout=5, max=100
Server:Apache/2.4.18 (Ubuntu)
Vary:Accept-Encoding
Now, how do I set this same header:
Access-Control-Allow-Origin:*
on my server, so that other websites accessing my server receive it in their response headers?
My server is Apache 2, hosted on Ubuntu 16.04.
Note:
I have set this header:
Header set Access-Control-Allow-Origin "*"
in a section of /etc/apache2/apache2.conf,
and in the .htaccess file in /var/www/html.

Since you're on Ubuntu, it would be preferable to create a short config file in /etc/apache2/conf-available/ and then use a2enconf to enable it.
This allows you to keep the shipped configuration files unmodified.
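For example, a minimal sketch (the file name cors.conf is arbitrary; the Header directive requires mod_headers):

# /etc/apache2/conf-available/cors.conf
Header set Access-Control-Allow-Origin "*"

sudo a2enmod headers
sudo a2enconf cors
sudo systemctl reload apache2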

Related

How can I find out why ISAPI is returning a 302 status for a specific file?

I have a website served by IIS 10 on Windows Server 2019 running Plesk. The site is mainly Classic ASP. I have a staging subdomain at staging.example.com, with the production site at www.example.com.
The two are fairly strictly separated, except that I don’t store image files, PDFs and such things on the staging server; I have a URL rewrite directive that redirects to the production site with a 302 status based on the URL not matching the following regex:
\.(php|asp|js|css|csv|json|htm|html|svg|svgz)(\?.+)?$
This generally works well: ASP pages are served from the staging site when the staging URL is called, but images on the page are pulled from the production site.
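That directive presumably corresponds to a URL Rewrite rule in the staging site's web.config roughly like the following (a hypothetical sketch based on the description above, not the actual configuration):
<rewrite>
  <rules>
    <rule name="Redirect static assets to production" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <!-- redirect only when the requested URL does NOT match the extension list -->
        <add input="{URL}" pattern="\.(php|asp|js|css|csv|json|htm|html|svg|svgz)(\?.+)?$" negate="true" />
      </conditions>
      <action type="Redirect" url="https://www.example.com{URL}" redirectType="Found" />
    </rule>
  </rules>
</rewrite>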
Except that there’s one ASP file which – for some reason – gives a 302 and redirects to the production site no matter what I do. The file exists in both locations. I’ve tested the URL in the pattern tester provided in the IIS URL-rewrite section, and it matches the pattern (meaning it shouldn’t redirect).
When I trace the request (that is, the initial request to the staging URL) in Firefox’s browser console, I get the following response headers (redacted):
HTTP/2 302 Found
cache-control: no-cache
content-type: text/html
location: https://www.example.com/path/to/file.asp
server: Microsoft-IIS/10.0
set-cookie: ASPSESSION****=********; secure; path=/
x-powered-by: ASP.NET
x-powered-by-plesk: PleskWin
date: Sun, 19 Dec 2021 18:52:05 GMT
content-length: 201
The request headers (also redacted) were:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Authorization: Basic *************
Connection: keep-alive
Cookie: [cookies]
Host: staging.example.com
Referer: https://staging.example.com/path/to/file.asp
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: same-origin
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:96.0) Gecko/20100101 Firefox/96.0
I’ve painstakingly gone through the entire file and all the file includes within it, and I can’t find any kind of Response.Redirect in any of them that might be responsible.
So it seems it’s IIS that’s redirecting with a 302… despite the fact that there doesn’t seem to be a directive that tells it to do this.
Is there a way to trace exactly what on the server is causing this 302 for one specific file? Some sort of tracing mechanism that tells me where the request gets passed on to before the 302 response is returned?
 
 
Update 26 Dec
Based on samwu’s comment, I’ve enabled Failed Request Tracing for the page, and looking through the resulting .frb file, it’s clear that none of the rewrite conditions are met – they all have succeed: false. It seems the redirect is not happening in the WWW Server at all, in fact, but in the ISAPI extension. This is the only place that the production site URL is mentioned at all in the request trace (except of course in the GENERAL_RESPONSE_HEADER section at the very end):
ISAPI_START
MODULE_SET_RESPONSE_SUCCESS_STATUS ModuleName="IsapiModule", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus="302", HttpReason="Object moved"
GENERAL_SET_RESPONSE_HEADER HeaderName="Location", HeaderValue="https://www.example.com/path/to/file.asp", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Content-Length", HeaderValue="201", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Content-Type", HeaderValue="text/html", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Cache-control", HeaderValue="no-cache", Replace="false"
NOTIFY_MODULE_COMPLETION ModuleName="IsapiModule", Notification="EXECUTE_REQUEST_HANDLER", fIsPostNotificationEvent="false", CompletionBytes="0", ErrorCode="The operation completed successfully. (0x0)"
ISAPI_END
In the ISAPI Filters section in IIS Manager, there are four filters: a 32-bit and a 64-bit version for ASP.Net 2.0 and the same for ASP.Net 4.0, all called aspnet_filter.dll. I’m guessing these are standard filters – I know for certain, at least, that we haven’t mucked about with any ISAPI filters at all.
As should be obvious by now, I’m not really a server admin, and ISAPI filters are definitely above my level of knowledge.
So how do I proceed from here? How do I figure out why ISAPI is redirecting?

301 moved permanently with socket.http

In Python (and my browser), I am able to send a request to https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0 and get a response, as expected, but with Lua I get HTTP/1.1 301 Moved Permanently. Here is what I have tried so far:
http = require("socket.http");
print(http.request("https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0"))
which outputs an HTTP error page (moved permanently) and
301 table: 0x8f32470 http/1.1 301 Moved Permanently
the table's contents are:
location https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0
content-type text/html
server nginx/1.10.0 (Ubuntu)
content-length 194
connection close
date Mon, 11 Dec 2017 01:41:35
Why does only Lua get this error? If I make a request to Google, I get the Google home page HTML. If I make a request to status.mojang.com, I get the Mojang server statuses as a JSON string, so the socket library is certainly functional.
It's because you are using socket.http to request an https URL. Since socket.http doesn't handle HTTPS, it sends the request to port 80; the server answers with a redirect to the https URL, but the socket library doesn't follow it, as it doesn't "know" what to do with https, so it simply reports the 301.
You need to install luasec and use ssl.https instead of socket.http; that will make it work.
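A minimal sketch, assuming luasec is installed (e.g. via luarocks install luasec):
-- ssl.https exposes the same request() interface as socket.http,
-- but speaks TLS on port 443 instead of plain HTTP on port 80
local https = require("ssl.https")

local body, code, headers, status = https.request(
    "https://www.devrant.com/api/devrant/rants?app=3&sort=algo&limit=10&skip=0")
print(code, status)
print(body)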

406: not acceptable response received using LWP::UserAgent/File::Download

Edit: it seems the issue was caused by a dropped cookie. There should have been a session id cookie as well.
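In LWP terms, keeping that session cookie means attaching a cookie jar to the user agent so Set-Cookie headers from earlier responses are replayed on later requests; a minimal sketch (the URL and file name are placeholders):
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

# The cookie jar persists session cookies (auth_token, session id, ...) across requests
my $jar = HTTP::Cookies->new(file => "cookies.txt", autosave => 1, ignore_discard => 1);
my $ua  = LWP::UserAgent->new(cookie_jar => $jar);

my $res = $ua->get("https://example.com/some/file?param=1");
print $res->status_line, "\n";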
For posterity, here's the original question
When sending a request formed as this
GET https://<url>?<parameters>
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: iso-8859-1,utf-8,UTF-8
Accept-Encoding: gzip, x-gzip, deflate, x-bzip2
Accept-Language: en-US,en;q=0.5
If-None-Match: "6eb7d55abfd0546399e3245ad3a76090"
User-Agent: Mozilla/5.0 libwww-perl/6.13
Cookie: auth_token=<blah>; __cfduid=<blah>
Cookie2: $Version="1"
I receive the following response
response-type: text/html
charset=utf-8
HTTP/1.1 406 Not Acceptable
Cache-Control: no-cache
Connection: keep-alive
Date: Fri, 12 Feb 2016 18:34:00 GMT
Server: cloudflare-nginx
Content-Type: text/html; charset=utf-8
CF-RAY: 273a62969a9b288e-SJC
Client-Date: Fri, 12 Feb 2016 18:34:00 GMT
Client-Peer: <IP4>:443
Client-Response-Num: 10
Client-SSL-Cert-Issuer: /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO ECC Domain Validation Secure Server CA 2
Client-SSL-Cert-Subject: /OU=Domain Control Validated/OU=PositiveSSL Multi-Domain/CN=ssl<blah>.cloudflaressl.com
Client-SSL-Cipher: <some value>
Client-SSL-Socket-Class: IO::Socket::SSL
Client-SSL-Warning: Peer certificate not verified
Client-Transfer-Encoding: chunked
Status: 406 Not Acceptable
X-Runtime: 9
I'm not entirely sure why the response is 406 Not Acceptable. When downloaded with Firefox, the file in question is 996 KB (as reported by Windows 8.1's Explorer). It looks like I have a partially transferred file from my Perl script at 991 KB (again, Windows Explorer size), so it got MOST of the file before throwing the Not Acceptable response. Using the same URL pattern and request style, I was able to successfully download a 36 MB file from the server with this Perl library and request form, so the size of the file should not be magically past some max (chunk) size. As these files are being updated on approximately 15-minute intervals, I suppose it's possible that a write was performed on the server, invalidating the ETag before all chunks were complete for this file?
I tried adding chunked to Accept-Encoding, but that's not for transfer encoding and it appears to have no effect on the server's behavior. Additionally, as I've been able to download larger files (same format) from the same server, size alone shouldn't be the cause of my woes. LWP is supposed to be able to handle chunked data returned by a response to GET (as per this newsgroup post).
The server in question is running nginx with Rack::Lint. The particular server configuration (which I in no way control) throws 500 errors on its own attempts to send 304: not modified. This caused me to write a workaround for File::Download (sub lintWorkAround here), so I'm not above putting blame on the server in this instance also, if warranted. I don't believe I buggered up the chunk-handling code from File::Download 0.3 (see diff), but I suppose that's also possible. Is it possible to request a particular chunk size from the server?
I'm using LWP and libwww versions 6.13 in perl 5.18.2.
File::Download version is my own 0.4_050601.
So, what else could the 406 error mean? Is there a way to request that the server temporarily cache/version control the entire file so that I can download a given ETag'd file once the transfer begins?

serving gzipped files on Firebase Hosting

I am interested in serving gzipped html/css/js files using Firebase Hosting. I tried setting the Content-Encoding header in firebase.json, but it errors on deploy.
Purportedly, the only headers you can set are: Cache-Control, Access-Control-Allow-Origin, X-UA-Compatible, X-Content-Type-Options, X-Frame-Options, X-XSS-Protection.
Any ideas out there?
By default, Firebase Hosting already gzips all of your files. Here, for example, are the response headers for a CSS file I have hosted on Firebase. Note the Content-Encoding header:
Accept-Ranges:bytes
Cache-Control:max-age=7178000
Connection:keep-alive
Content-Encoding:gzip
Content-Length:3483
Content-Type:text/css; charset=utf-8
Date:Sun, 10 Jan 2016 02:09:57 GMT
ETag:"4c94283e07340e9cc0237fc2a349c94d"
Last-Modified:Sun, 10 Jan 2016 00:10:31 GMT
Server:nginx
Strict-Transport-Security:max-age=31556926; includeSubDomains; preload
Vary:Accept-Encoding
Via:1.1 varnish
X-Cache:HIT
X-Cache-Hits:1
X-Powered-By:Express
X-Served-By:cache-lax1432-LAX
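So there is no need to set Content-Encoding yourself (and it isn't in the allowed list anyway). If you want to set one of the allowed headers listed above, that goes in the hosting.headers section of firebase.json, roughly like this (a sketch; the source glob and values are placeholders):
{
  "hosting": {
    "public": "public",
    "headers": [
      {
        "source": "**/*.@(js|css|html)",
        "headers": [
          { "key": "Cache-Control", "value": "max-age=604800" },
          { "key": "Access-Control-Allow-Origin", "value": "*" }
        ]
      }
    ]
  }
}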

Facebook links to my site resolve as 403 forbidden

Hi, I'm experiencing a super weird problem.
Whenever I post links to my website on Facebook, they come up as Forbidden.
The site itself works great, and I have not seen this when linking from other sites.
Could this be a server misconfiguration? Any thoughts on where to look?
Here's some info:
I have a dedicated server running WHM 11.25.0
I have 2 sites hosted there using cPanel 11.25.0
The error message:
Forbidden
You don't have permission to access /blog/deepwater-horizon-11/ on this server.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at www.offshoreinjuries.com Port 80
UPDATE:
Here is a sample link if it helps (notice that going to the linked page directly works fine):
http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
UPDATE and ANSWER:
Found the issue and added a complete answer below.
You must have a rule somewhere that reads the HTTP_REFERER and rejects incoming links from Facebook. Seriously. This is what happens behind the scenes:
No referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:19:45 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK, good.
Facebook referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
HTTP/1.1 403 Forbidden
Date: Fri, 28 May 2010 09:21:04 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
Content-Type: text/html; charset=iso-8859-1
403 Forbidden, bad.
Any other referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://alvaro.es/
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:20:36 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK again.
Your server is actively rejecting visitors from Facebook.
I was finally able to get to the bottom of this behavior.
The default mod_security settings of my host, HostGator, include a set of whitelists and blacklists. Upon inspecting these I found .facebook.com/l.php blacklisted.
l.php is a wrapper page that warns you that you are leaving Facebook. As I understand it, since this can be easily exploited, HostGator chose to blacklist essentially all outbound Facebook links.
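Such a blacklist entry typically boils down to a ModSecurity rule along these lines (a hypothetical sketch, not HostGator's actual rule; the id is arbitrary):
# Reject any request whose Referer points at Facebook's outbound-link wrapper
SecRule REQUEST_HEADERS:Referer "facebook\.com/l\.php" "id:1000001,phase:1,deny,status:403"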
I fixed my problem by removing .facebook.com/l.php from the mod_security blacklist; however, I could have also just reset my mod_security settings to Default (vs the HostGator config) via a single click in WHM.