Setting Jersey to allow caching?

I have the following returned from a Jersey @GET method. It works fine, but the response always includes a no-cache header. I'd like to allow the client to cache this data since it rarely changes.
ResponseBuilder rb = Response.ok(c);
CacheControl cc = new CacheControl();
cc.setMaxAge(60);
cc.setNoCache(false);
return rb.cacheControl(cc).build();
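(For context, that snippet sits inside an ordinary JAX-RS resource method. A fuller sketch, with the class name, @Path value, and entity as placeholders, would be:)
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.ResponseBuilder;

@Path("/things")                    // placeholder path
public class ThingResource {

    @GET
    @Produces("application/xml")
    public Response getThing() {
        String c = "<thing/>";      // stands in for whatever entity is actually returned
        ResponseBuilder rb = Response.ok(c);
        CacheControl cc = new CacheControl();
        cc.setMaxAge(60);           // let caches keep the response for 60 seconds
        cc.setNoCache(false);       // false is already the default; shown for emphasis
        return rb.cacheControl(cc).build();
    }
}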
The response is always:
Server: Apache-Coyote/1.1
Pragma: No-cache
Cache-Control: no-cache, no-transform, max-age=60
Expires: Wed, 31 Dec 1969 19:00:00 EST
Content-Type: application/xml
Content-Length: 291
Date: Tue, 16 Feb 2010 01:54:02 GMT
What am I doing wrong here?

This was caused by having BASIC auth turned on.
Specifying this in the context will correct the issue:
<Valve className="org.apache.catalina.authenticator.BasicAuthenticator"
disableProxyCaching="false" />
Hope this helps someone else out.

Your code looks okay.
Which container are you using? Make sure caching is not disabled on it. Also verify that no downstream response handlers or filters are setting the no-cache directive.
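As an example of the kind of thing to look for, a response filter along these lines (a hypothetical JAX-RS 2.x sketch, not taken from your setup) would silently override whatever the resource method put in Cache-Control:
import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

// Hypothetical filter: if something like this is registered anywhere in the
// application, it replaces the Cache-Control value built by the resource method.
@Provider
public class NoCacheFilter implements ContainerResponseFilter {
    @Override
    public void filter(ContainerRequestContext request,
                       ContainerResponseContext response) throws IOException {
        response.getHeaders().putSingle("Cache-Control", "no-cache");
    }
}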

Do Google Chrome and similar browsers support range headers for standard downloads?

My initial response headers - notice the Accept-Ranges header
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin
Access-Control-Allow-Credentials: true
X-RateLimit-Limit: 1
X-RateLimit-Remaining: 0
Date: Thu, 08 Apr 2021 06:14:19 GMT
X-RateLimit-Reset: 1617862461
Accept-Ranges: bytes
Content-Length: 100000000
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="some_file.txt"
Connection: keep-alive
Keep-Alive: timeout=5
I then restart the server and click resume download in Chrome, but Chrome doesn't send back any Range request headers.
I'm following the documentation on Mozilla's website.
Am I missing a header or misunderstanding how this works, especially with Chrome and other browsers? Is there another way I can manually support resuming downloads by sending the right response and understanding the right request? From a technical perspective, if Chrome sends back which range it now needs, I will be able to resume the download.
According to this article, Chrome should support something like this. I just need to be pointed in the right direction.
Thanks!
Chrome needs some way to know that the file it's trying to download at that URL is indeed the same file when it tries to resume.
If you add support for an ETag header, this will likely work.
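As a rough sketch of the exchange (the ETag value, byte offset, and request path below are made up), the initial response would then carry a validator alongside Accept-Ranges:
HTTP/1.1 200 OK
Accept-Ranges: bytes
ETag: "abc123"
Content-Length: 100000000
Content-Disposition: attachment; filename="some_file.txt"
and when the user resumes, Chrome can send a conditional range request for the part it's missing:
GET /some_file.txt HTTP/1.1
Range: bytes=52428800-
If-Range: "abc123"
to which the server should reply with 206 Partial Content and a matching Content-Range header. Without a validator such as ETag or Last-Modified, the browser has no safe way to splice the two downloads together, so it starts over instead.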

Uploading a file with the Google Cloud API with a PUT at the root of the server?

I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) to http://myserver/test.txt. As you can see, I did the PUT request at the root of my server. The response I get is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way; I did that for testing purposes. I understand every header returned, but I can't figure out whether my file has been uploaded because I don't have enough knowledge of this API.
My question is very simple:
Just by looking at the response, can you tell me if my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code traditionally indicates, for any given request, whether it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 200 range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
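If it helps, here is a minimal sketch (not part of the original exchange) that reads the status code programmatically and branches on its class; the URL is the placeholder one from the question:
import java.net.HttpURLConnection;
import java.net.URL;

public class StatusCheck {
    public static void main(String[] args) throws Exception {
        // "myserver" is the placeholder host from the question.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://myserver/test.txt").openConnection();
        int status = conn.getResponseCode();
        // 2xx = success, 3xx = redirection, 4xx = client error, 5xx = server error
        if (status / 100 == 2) {
            System.out.println("Success: " + status);
        } else {
            System.out.println("Not successful: " + status);
        }
    }
}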

How to log in to RQM using the REST API?

I'm trying to communicate with an IBM Rational Quality Manager server using its REST API. I'm using the RESTClient browser plugin, and while the browser is logged in, everything works as expected. For the record, my requests look like
https://server/qm/service/com.ibm.rqm.integration.service.IIntegrationService/resources/project/testscript/urn:com.ibm.rqm:testscript:42
However, if I wait long enough for RQM to log me out, the REST API says I need to log back in to proceed (see below). I'm pretty sure this is possible to do via the API itself, because RQM ships with RQMUrlUtility, which accepts a username and password and runs basically the same REST requests I'm using:
java -jar RQMUrlUtility.jar -command GET -user JazzUserID -password JazzPassword -filepath pathtoFile -url REST_URL
So far, I have found this topic explaining how to log in using HTTP basic authentication. Following this advice, I have added Authorisation: Basic dXNlcm5hbWU6cGFzc3dvcmQ= (not my real password) to the request, but RQM still fails to log me in. I have also tried setting User-Agent to a bogus value, as well as sending the value from JSESSIONID in the X-Jazz-CSRF-Prevent header as described here, but regardless of whether these headers are present or not, I get the same response:
Status Code: 200 OK
Cache-Control: no-cache="set-cookie, set-cookie2"
Connection: Keep-Alive
Content-Encoding: gzip
Content-Language: en-US
Content-Type: text/html; charset=UTF-8
Date: Tue, 26 Jan 2016 15:48:02 GMT
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Keep-Alive: timeout=10, max=100
Set-Cookie: JazzFormAuth=Form; Path=/qm; Secure
x-com-ibm-team-scenario=ac55f959-c738-4ef0-854d-6e37648edcba%3Bname%3DInitial+Page+Load%3Bextras%3D%2Fqm%2Fauth%2Fauthrequired%2C1453823282026; Path=/
Transfer-Encoding: chunked
X-Powered-By: Servlet/3.0
X-com-ibm-team-repository-web-auth-msg: authrequired
Can anyone with experience with the RQM API tell me what's wrong? Or perhaps I'm missing something basic, common to most REST APIs out there?
Could it be your header name?
Authorisation: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Should probably be:
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Notice the "z".

How to correctly set Expires headers on Google Cloud Storage?

The Google Cloud Storage Developer Guide explains how to set Cache-Control headers and their critical impact on the consistency behavior of the API, yet the Expires header isn't mentioned, nor does it appear to inherit from the Cache-Control configuration.
The Expires header always appeared to be the request time plus one year, regardless of the Cache-Control setting, e.g.
$ gsutil setmeta -h "Cache-Control:300" gs://example-bucket/doc.html
A request was made to a document (doc.html) in the Google Cloud Storage bucket (example-bucket) via
$ curl -I http://example-bucket.storage.googleapis.com/doc.html
which produced the following headers
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on Oct 3 2012 16:52:30 (1349308350)
Date: Sat, 13 Oct 2012 00:51:13 GMT
Cache-Control: 300, no-transform
Expires: Sun, 13 Oct 2013 00:51:13 GMT
Last-Modified: Fri, 12 Oct 2012 20:08:41 GMT
ETag: "28fafe4213ae34c7d3ebf9ac5a6aade8"
x-goog-sequence-number: 82
x-goog-generation: 1347601001449082
x-goog-metageneration: 1
Content-Type: text/html
Accept-Ranges: bytes
Content-Length: 7069
Vary: Origin
Not sure why you say the Expires header shows request time plus 1 year. In your example, the Expires header shows a timestamp one hour after the request date, which is to be expected.
I just did an experiment where I set an object's max age to 3600 and then 7200 via this command:
gsutil setmeta "Cache-Control:max-age=7200" gs://marc-us/xyz.txt
Then I retrieved the object using the gsutil cat command with the -D option to see the request/response details, like this:
gsutil -D cat gs://marc-us/xyz.txt
In both experiments, the Expires header produced the expected timestamp, as per the object's max-age setting (i.e. one hour after request time and two hours after request time).
Looks like this was caused by a malformed header. Duh.
Cache-Control: 300, no-transform
should be
Cache-Control: public, max-age=300, no-transform
When things are set correctly, they work. See RFC 2616 (HTTP/1.1) Section 14.9 (Cache-Control).
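Applied with gsutil, the corrected value from the question's example would be set like this:
$ gsutil setmeta -h "Cache-Control:public, max-age=300, no-transform" gs://example-bucket/doc.html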

What is `ff.im`?

When we visit ff.im, we are redirected to http://friendfeed.com.
Here are some other examples:
ff.im/abc
ff.im/efg
How is FriendFeed able to do this?
.im is the Isle of Man top-level domain (ccTLD). The registry normally requires names to be at least three characters long, unless you pay considerably more.
Two-character domains look cool but aren't particularly useful since IE rejects their cookies (old article, but still mostly true for newer IE versions).
When your browser requests ff.im:
GET / HTTP/1.1
Host: ff.im
their webserver responds with a redirect, either to the main FriendFeed site:
HTTP/1.1 302 Found
Date: Sat, 09 Apr 2011 12:29:38 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Content-Length: 0
Location: http://friendfeed.com/
Server: FriendFeedServer/0.1
or to some other place (when using their URL-shortener).