Serving gzipped files on Firebase Hosting

I am interested in serving gzipped html/css/js files using Firebase Hosting. I tried setting the Content-Encoding header in firebase.json, but it errors on deploy.
Purportedly, the only headers you can set are: Cache-Control, Access-Control-Allow-Origin, X-UA-Compatible, X-Content-Type-Options, X-Frame-Options, X-XSS-Protection.
Any ideas out there?

By default, Firebase Hosting already gzips all of your files. Here, for example, are the response headers for a CSS file I have hosted on Firebase. Note the Content-Encoding header:
Accept-Ranges: bytes
Cache-Control: max-age=7178000
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 3483
Content-Type: text/css; charset=utf-8
Date: Sun, 10 Jan 2016 02:09:57 GMT
ETag: "4c94283e07340e9cc0237fc2a349c94d"
Last-Modified: Sun, 10 Jan 2016 00:10:31 GMT
Server: nginx
Strict-Transport-Security: max-age=31556926; includeSubDomains; preload
Vary: Accept-Encoding
Via: 1.1 varnish
X-Cache: HIT
X-Cache-Hits: 1
X-Powered-By: Express
X-Served-By: cache-lax1432-LAX
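If you do need to customize one of the headers Firebase allows, a minimal firebase.json sketch looks like the following (the glob pattern and max-age value are illustrative, and the "hosting" key assumes a current firebase-tools version):
{
  "hosting": {
    "headers": [
      {
        "source": "**/*.@(css|js)",
        "headers": [
          { "key": "Cache-Control", "value": "max-age=7178000" }
        ]
      }
    ]
  }
}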

Related

Uploading a file with the Google Cloud API with a PUT at the root of the server?

I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) at http://myserver/test.txt. As you can see, I did the PUT request at the root of my server. The response I get is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way; I did that for testing purposes. I understand every header returned, but I can't figure out whether my file has been uploaded, because I don't have enough knowledge of this API.
My question is very simple:
Just by looking at the response, can you tell me if my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code traditionally indicates, for any given request, whether it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 200 range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
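For example, a quick way to check only the status code from the command line (using the illustrative URL from the question):
$ curl -s -o /dev/null -w "%{http_code}\n" -X PUT --data-binary @test.txt http://myserver/test.txt
200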

Where should I set the 'Header set Access-Control-Allow-Origin "*"' header in my Apache2 server?

I want to access other servers from my server.
When I send a GET/POST request to www.posttestserver.com, it completes successfully.
In response, that server provides these response headers:
Access-Control-Allow-Origin: *
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 129
Content-Type: text/html; charset=UTF-8
Date: Tue, 13 Jun 2017 07:24:27 GMT
Keep-Alive: timeout=5, max=100
Server: Apache/2.4.18 (Ubuntu)
Vary: Accept-Encoding
Then, how do I set this same type of header:
Access-Control-Allow-Origin: *
on my server, so that other websites accessing my server receive it in their response headers?
My server is Apache2 hosted on Ubuntu 16.04.
Note:
I have set this header:
Header set Access-Control-Allow-Origin "*"
in /etc/apache2/apache2.conf in a <Directory> section,
and in the .htaccess file in /var/www/html.
Since you're on Ubuntu, it would be preferable to create a short config file in /etc/apache2/conf-available/ and then use a2enconf to enable it.
This allows you to keep the shipped configuration files unmodified.
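For example, a minimal sketch (the file name cors.conf is illustrative; the Header directive requires mod_headers):
# /etc/apache2/conf-available/cors.conf
<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
</IfModule>
Then enable it and reload Apache:
$ sudo a2enmod headers
$ sudo a2enconf cors
$ sudo systemctl reload apache2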

406 Not Acceptable response received using LWP::UserAgent/File::Download

Edit: it seems the issue was caused by a dropped cookie; there should have been a session ID cookie as well.
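For anyone hitting the same problem, a minimal sketch of the fix, assuming the dropped cookie just needs an in-memory cookie jar to be retained (the URL is illustrative):
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

# Keep a cookie jar so cookies set by the server (e.g. a session ID)
# are sent back on subsequent requests instead of being dropped.
my $ua = LWP::UserAgent->new(cookie_jar => HTTP::Cookies->new);
my $res = $ua->get('https://example.com/file');
die $res->status_line unless $res->is_success;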
For posterity, here's the original question
When sending a request formed like this:
GET https://<url>?<parameters>
Cache-Control: max-age=0
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: iso-8859-1,utf-8,UTF-8
Accept-Encoding: gzip, x-gzip, deflate, x-bzip2
Accept-Language: en-US,en;q=0.5
If-None-Match: "6eb7d55abfd0546399e3245ad3a76090"
User-Agent: Mozilla/5.0 libwww-perl/6.13
Cookie: auth_token=<blah>; __cfduid=<blah>
Cookie2: $Version="1"
I receive the following response:
response-type: text/html
charset=utf-8
HTTP/1.1 406 Not Acceptable
Cache-Control: no-cache
Connection: keep-alive
Date: Fri, 12 Feb 2016 18:34:00 GMT
Server: cloudflare-nginx
Content-Type: text/html; charset=utf-8
CF-RAY: 273a62969a9b288e-SJC
Client-Date: Fri, 12 Feb 2016 18:34:00 GMT
Client-Peer: <IP4>:443
Client-Response-Num: 10
Client-SSL-Cert-Issuer: /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO ECC Domain Validation Secure Server CA 2
Client-SSL-Cert-Subject: /OU=Domain Control Validated/OU=PositiveSSL Multi-Domain/CN=ssl<blah>.cloudflaressl.com
Client-SSL-Cipher: <some value>
Client-SSL-Socket-Class: IO::Socket::SSL
Client-SSL-Warning: Peer certificate not verified
Client-Transfer-Encoding: chunked
Status: 406 Not Acceptable
X-Runtime: 9
I'm not entirely sure why the response is 406 Not Acceptable. When downloaded with Firefox, the file in question is 996 KB (as reported by Windows 8.1's Explorer). It looks like I have a partially transferred file from my Perl script at 991 KB (again, Windows Explorer size), so it got MOST of the file before throwing the Not Acceptable response. Using the same URL pattern and request style, I was able to successfully download a 36 MB file from the server with this Perl library and request form, so the size of the file should not be magically past some maximum (chunk) size. As these files are being updated at approximately 15-minute intervals, I suppose it's possible that a write was performed on the server, invalidating the ETag before all chunks of this file were complete?
I tried adding chunked to Accept-Encoding, but that header isn't for transfer encodings, and it appears to have no effect on the server's behavior. Additionally, as I've been able to download larger files (same format) from the same server, size alone shouldn't be the cause of my woes. LWP is supposed to be able to handle chunked data returned in response to a GET (as per this newsgroup post).
The server in question is running nginx with Rack::Lint. The particular server configuration (which I in no way control) throws 500 errors on its own attempts to send 304 Not Modified. This caused me to write a workaround for File::Download (sub lintWorkAround here), so I'm not above putting blame on the server in this instance also, if warranted. I don't believe I buggered up the chunk-handling code from File::Download 0.3 (see diff), but I suppose that's also possible. Is it possible to request a particular chunk size from the server?
I'm using LWP and libwww version 6.13 in Perl 5.18.2. File::Download is my own version, 0.4_050601.
So, what else could the 406 error mean? Is there a way to request that the server temporarily cache/version-control the entire file so that I can download a given ETag'd file once the transfer begins?

How to correctly set Expires headers on Google Cloud Storage?

The Google Cloud Storage Developer Guide explains how to set Cache-Control headers and explains their critical impact on the consistency behavior of the API, yet the Expires header isn't mentioned, nor does it appear to inherit from the Cache-Control configuration.
The Expires header appeared to always be equal to request time plus one year, regardless of the Cache-Control setting, e.g.:
$ gsutil setmeta -h "Cache-Control:300" gs://example-bucket/doc.html
A request was made to a document (doc.html) in the Google Cloud Storage bucket (example-bucket) via
$ curl -I http://example-bucket.storage.googleapis.com/doc.html
which produced the following headers
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Oct 3 2012 16:52:30 (1349308350)
Date: Sat, 13 Oct 2012 00:51:13 GMT
Cache-Control: 300, no-transform
Expires: Sun, 13 Oct 2013 00:51:13 GMT
Last-Modified: Fri, 12 Oct 2012 20:08:41 GMT
ETag: "28fafe4213ae34c7d3ebf9ac5a6aade8"
x-goog-sequence-number: 82
x-goog-generation: 1347601001449082
x-goog-metageneration: 1
Content-Type: text/html
Accept-Ranges: bytes
Content-Length: 7069
Vary: Origin
Not sure why you say the Expires header shows request time plus 1 year. In your example, the Expires header shows a timestamp one hour after the request date, which is to be expected.
I just did an experiment where I set an object's max age to 3600 and then 7200 via this command:
gsutil setmeta "Cache-Control:max-age=7200" gs://marc-us/xyz.txt
Then I retrieved the object using the gsutil cat command with the -D option to see the request/response details, like this:
gsutil -D cat gs://marc-us/xyz.txt
In both experiments, the Expires header produced the expected timestamp, as per the object's max-age setting (i.e. one hour after request time and two hours after request time).
Looks like this was caused by a malformed header. Duh.
Cache-Control: 300, no-transform
should be
Cache-Control: public, max-age=300, no-transform
When things are set correctly, they work. See RFC 2616 (HTTP/1.1) Section 14.9 (Cache-Control).
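Applied to the example above, the corrected setmeta call would be:
$ gsutil setmeta -h "Cache-Control:public, max-age=300, no-transform" gs://example-bucket/doc.html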

iOS (iPhone/iPad): downloading a big PDF via Safari doesn't work

I have a small site designed to sell an HTTP-downloadable, ~300 MB, DRM-free, page-scanned-images PDF e-book (download the test copy here: http://test.magicmedicine.eu/get/ac123457965d0d4b4d17557a73cf2fe8 ).
It works flawlessly on PC, Mac, and Android, but I'm experiencing issues on iOS: when the customer opens the download URL in Safari (I tried via broadband Wi-Fi+DSL), the page loads for ~45 seconds (the page stays blank but the activity indicator spins), then Safari exits with no error message at all.
I tried creating the PDF with the "Fast web view" (= progressive download) attribute, and I also lowered the compatibility to the minimum (PDF version 1.3), with no results.
On the application side, the download is sent from Apache+PHP via mod_xsendfile ( https://tn123.org/mod_xsendfile/ ) to the client with the following headers (my intent is to avoid the PDF-in-the-browser-via-plugin view):
HTTP/1.1 200 OK
Date: Wed, 23 May 2012 09:50:13 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.13
Expires: Thu, 24 May 2012 11:50:13 +0200
Cache-Control: must-revalidate, post-check=0, pre-check=0
Pragma: public
Content-Disposition: attachment; filename="book.pdf"
Last-Modified: Sun, 20 May 2012 11:26:54 GMT
ETag: "2e01b4-dde8a9b-4c07610070008"
Content-Length: 232688283
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: application/octet-stream
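For reference, the PHP side of the setup described above looks roughly like this sketch (the file path is hypothetical; mod_xsendfile performs the actual transfer):
<?php
// Send download headers, then let mod_xsendfile stream the file.
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="book.pdf"');
header('X-Sendfile: /path/to/book.pdf'); // hypothetical path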
Any ideas?
Note: I asked this on SuperUser a couple of days ago and it was closed as "off topic". I hope it's OK to repost it here.