I have a Perl-generated page. The contents of this page change every 30 minutes, so I'm setting $r->set_last_modified() to the time the contents last changed.
That all works well and I can see the correct header arriving at my browser.
When I refresh the page, I see my browser sends the correct "If-Modified-Since" header in the request to the server, but Apache2 ignores it and re-sends the entire page.
How can I get Apache2 to behave correctly and respond with an "HTTP/1.x 304 Not Modified"?
(The "last-modified" / "if-modified-since" headers are handled correctly when requesting static content from the same Apache2 process.)
Thanks for any help.
EDIT: Are my expectations wrong? Do I have to explicitly handle inbound If-Modified-Since headers in my Perl script?
Sadly, yes, your expectations are wrong.
At the point where you basically say to Apache "OK, I'm dealing with this request...", Apache is going to hand over responsibility for everything to you. If you want the request to honour If-Modified-Since, it's down to your code.
Face it, this is the right behaviour: there's no way Apache can know what you /really/ mean by 'modified' in a Perl handler. It might be that the best check is to query your backend DB for a timestamp on a record, for example.
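In mod_perl2 (which $r->set_last_modified suggests you're using), the usual tool for this is $r->meets_conditions. A minimal sketch - the timestamp and page body here are placeholders:

    use Apache2::RequestRec ();
    use Apache2::RequestIO ();
    use Apache2::Response ();
    use Apache2::Const -compile => qw(OK);

    sub handler {
        my $r = shift;

        my $last_change_time = time() - 600;   # placeholder: when content last changed

        # Tell Apache when the content last changed, as you already do.
        $r->set_last_modified($last_change_time);

        # meets_conditions() compares the inbound If-Modified-Since (and
        # friends) against the headers set above. If the client's copy is
        # current, it returns HTTP_NOT_MODIFIED, which we pass straight
        # back so Apache sends a 304 with no body.
        my $rc = $r->meets_conditions;
        return $rc unless $rc == Apache2::Const::OK;

        # Otherwise generate and send the full page as usual.
        $r->content_type('text/html');
        $r->print('<html>...</html>');         # placeholder content
        return Apache2::Const::OK;
    }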
Apache won't store your last-modified value when it processes a request. So in order to decide whether something was modified it will have to run your application.
Related
We're designing a REST service with server-side caching. We'd like to give the client an option to specifically ask for the latest data even if the cached data has not expired. I'm looking into the HTTP 1.1 spec to see if there is a standard way to do this, and the Cache Revalidation and Reload Controls appear to fit my need.
Questions:
Should we just use Cache Revalidation and Reload Controls?
If not, is it acceptable to include an If-Modified-Since header with epoch time, causing the server to always consider the resource as having changed? The spec doesn't preclude this, but I'm wondering if I'm abusing the intent of the header :)
What would be a good way to identify the resource to refresh? In our case the URL path alone is not enough, and I'm not sure whether query or matrix parameters are considered part of a unique URL. What about using an ETag?
If your client wants a completely fresh representation of a resource, it can send Cache-Control: max-age=0. That expresses exactly the intent: to receive a response no older than 0 seconds.
All the other mechanisms you mention (If-Modified-Since, ETag, If-Match, etc.) work with caches to make sure the resource is in some particular state. They only work if you definitely have a valid state of the resource - you can think of it as optimistic locking. You can make requests conditional on the resource having changed, or on it not having changed, but you have to know which of the two you are expecting.
You could potentially misuse the If-Modified-Since as you say, but max-age communicates your intent better.
Also note that, by design, there may be multiple caches along the way, not just your server-side cache. Most often the client caches too, and there may be other transparent caches in between.
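For illustration, the client just sends the directive as a request header. A minimal Perl/LWP sketch - the endpoint URL is made up:

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;

    # max-age=0 tells every cache along the way to revalidate with the
    # origin server before answering, rather than serving a stored copy.
    my $res = $ua->get(
        'http://api.example.com/orders/42',   # hypothetical resource
        'Cache-Control' => 'max-age=0',
    );
    print $res->status_line, "\n";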
According to RFC 7234, section 5.2.1.4, it appears that the no-cache request directive best fits my need.
The "no-cache" request directive indicates that a cache MUST NOT use a
stored response to satisfy the request without successful validation
on the origin server.
Nothing is said about subsequent requests, which is exactly what I want. There is also a no-cache response directive in section 5.2.2.2, but that one applies to subsequent requests as well, which is not what I want.
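Note that your own server-side cache is just another cache here, so the service has to honour the directive itself. A rough PSGI-style sketch, where cache_lookup() and fetch_fresh() are hypothetical stand-ins for your real cache and backend:

    # Hypothetical helpers - replace with your real cache and data source.
    sub cache_lookup { return undef }           # e.g. look up in memcached
    sub fetch_fresh  { return '{"fresh":1}' }   # e.g. query the database

    my $app = sub {
        my $env = shift;
        my $cc  = $env->{HTTP_CACHE_CONTROL} // '';
        my $key = $env->{PATH_INFO};

        # "no-cache" means: don't answer from the cache without first
        # revalidating against the source of truth.
        my $data = $cc =~ /\bno-cache\b/
            ? fetch_fresh($key)
            : (cache_lookup($key) // fetch_fresh($key));

        return [200, ['Content-Type' => 'application/json'], [$data]];
    };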
First time posting a question. I'm trying to call some SOAP web services from inside a BlackBerry app using the ksoap2 library. I've successfully managed to get a response from one service, which uses an HTTP URL, but now that I'm trying to get a response from a (different) HTTPS URL, I've run up against a brick wall.
The response dump I'm getting has the following fault message:
"An error occurred while routing the message for element value : (country option I specified in my request). Keep-Alive and Close may not be set using this property. Parameter name: value."
The weird thing is that replaying the same XML request dump through Oxygen XML's SOAP tools works just fine. Any ideas where to start looking? This has taken up a full day already.
Update:
Responding to your comment below - it turns out the double quoting of the SOAPAction value is part of the SOAP spec. Some servers are more relaxed in their implementation and will work without the quotes.
ksoap2 doesn't force the quotes onto your actions - you may want to patch your ksoap2 library to ensure the quotes are always there.
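ksoap2 itself is Java, but the fix is visible at the HTTP level from any client. For example, a hand-rolled Perl/LWP call would send the quoted form like this - the endpoint, action URI and envelope are all made up:

    use LWP::UserAgent;

    my $soap_envelope = '<soap:Envelope>...</soap:Envelope>';  # placeholder XML request
    my $ua  = LWP::UserAgent->new;
    my $res = $ua->post(
        'https://example.com/service',    # hypothetical endpoint
        'Content-Type' => 'text/xml; charset=utf-8',
        # Note the embedded double quotes: SOAP 1.1 expects the
        # SOAPAction value itself to be a quoted URI.
        'SOAPAction'   => '"http://tempuri.org/DoStuffs"',
        Content        => $soap_envelope,
    );
    print $res->status_line, "\n";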
ymmv
Original:
I don't think this is a SOAP-related problem, nor a BlackBerry one.
I think the problem lies on the server side, since that error string is not a common one (googling it turns up no hits on the whole internet other than this question).
Looks like this is a job for the network guy on the server side to tell you what he's seeing on his end.
The only other thing I can think of is to make the call using HTTP instead of HTTPS. You can then use a network sniffer to see what the difference between the messages is. Alternatively, install an SSL proxy with something like "Charles" and sniff the packets that way.
In Fiddler, is there any way of knowing whether a resource (JavaScript, jQuery, CSS) is being loaded from the local cache or downloaded from the server? I think this may be represented by different colors in the Web Sessions list, but I wasn't able to find a legend for these colors.
If you see 304 Not Modified responses, those mean that the client made a conditional request and the server is signalling "no need to download, you have the newest version cached". That's one "class" of cached responses.
However, for some entities, not even a conditional request is sent (the Expires header is in the future, etc. - see RFC 2616). Those won't show up in Fiddler at all, since there is no request at all - the client may assume that its cached version is fresh.
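You can reproduce the browser's conditional request by hand to see the 304 behaviour. A small Perl sketch, assuming the server sends a Last-Modified header (the URL is made up):

    use LWP::UserAgent;

    my $ua  = LWP::UserAgent->new;
    my $url = 'http://www.example.com/static/app.js';   # hypothetical resource

    # First fetch: remember the validator the server hands back.
    my $first = $ua->get($url);
    my $since = $first->header('Last-Modified');

    # Conditional re-fetch: a 304 means "your cached copy is still good".
    my $second = $ua->get($url, 'If-Modified-Since' => $since);
    print $second->code == 304 ? "304 - use the cache\n" : "full body re-sent\n";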
What you can certainly see are the non-cached resources - anything coming back with a response code from the 2xx range should be non-cached (unless there's a seriously misconfigured caching proxy upstream, but those are rare nowadays).
You could clear your caches and open the page, saving those results. Then open the page again and see what's missing compared to the first load; whatever is missing is being served from cache.
Fiddler is an HTTP proxy, so it does not show cached content at all.
I'm attempting to move a web app we have (written in Perl) from an IIS6 server to an IIS7.5 server.
Everything seems to be parsing correctly, I'm just having some issues getting the app to actually work.
The app is basically a couple of forms. You fill the first one out, click submit, and it presents you with another form based on which checkboxes you selected (using includes and such).
I can get past the first form once... but after that it stops working and pops up the generated error message. After looking into the code, the message basically states that there aren't any checkboxes selected.
I know the app writes data into .dat files... (at what point, I'm not sure yet), but I don't see those being created. I've looked at file/directory permissions and seemingly I have MORE permissions on the new server than I did on the last. The user/group for the files/dirs are different though...
Would that have anything to do with it? Why would it pass me on to the next form, displaying the correct "modules" I checked the first time and then not any other time after that? (it seems to reset itself after a while)
I know this is complicated so if you have any questions for me, please ask and I'll answer to the best of my ability :).
Btw, total idiot when it comes to Perl.
EDIT AGAIN
I've removed the source as to not reveal any security vulnerabilities... Thanks for pointing that out.
I'm not sure what else to do to show exactly what's going on with this though :(.
I'd recommend verifying, step by step, that what you think is happening is really happening. Start by watching the HTTP request from your browser to the web server - are the arguments your second Perl script expects actually being passed to the server? If not, you'll need to fix the first script.
(start edit)
There are lots of tools for watching the network traffic.
Wireshark will read the traffic as it passes over the network (you can run it on the sending or receiving system, or any system on the collision domain).
You can use a proxy server, like WebScarab (free), Burp, Paros, etc. You'll have to configure your browser to send traffic to the proxy server, which will then forward the requests to the server. These particular proxies are intended to aid testing, in that you'll be able to mess with the requests as they go by (and much more).
As Sinan indicates, you can use browser add-ons like Firefox's LiveHTTPHeaders or Tamper Data, or Internet Explorer's developer tools (IIRC).
(end edit)
Next, you should print out all the CGI arguments that the second Perl script receives. That way, you'll know what the script really thinks it is getting.
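A throwaway dump at the top of the second script will show exactly what arrived. A minimal sketch using the classic CGI module:

    use strict;
    use warnings;
    use CGI;

    my $q = CGI->new;
    print $q->header('text/plain');

    # Print every parameter that actually arrived, one per line;
    # list context picks up all values of checkbox groups.
    for my $name ($q->param) {
        print "$name = ", join(', ', $q->param($name)), "\n";
    }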
Then, you can enable verbose logging in IIS, so that it logs the full HTTP request.
This will get you closer to the source of the problem - you'll know whether it's (a) the first script not creating correct HTML, resulting in an incomplete HTTP request from the browser, (b) the IIS server not receiving the CGI arguments for some odd reason, or (c) the arguments not making it from the IIS server into the Perl script (or, possibly, the Perl script not correctly accessing the arguments).
Good luck!
What you need to do is clear.
There is a lot of weird excess baggage in the script. There seem to be no subroutines - just one long series of commands using global variables.
It is time to start refactoring.
Get one thing running at a time.
I saw HTML::Template in there, but you still had raw HTML mixed in with the code. Separate code from presentation.
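For instance, with HTML::Template the markup moves into its own file and the script only supplies data. A minimal sketch - the template file name, loop name and parameter name are all made up:

    use strict;
    use warnings;
    use CGI;
    use HTML::Template;

    my $q    = CGI->new;
    my $tmpl = HTML::Template->new(filename => 'second_form.tmpl');  # hypothetical template

    # Hand the template plain data instead of printing raw HTML inline;
    # a <TMPL_LOOP NAME="modules"> in the template does the rendering.
    $tmpl->param(
        modules => [ map { { name => $_ } } $q->param('module') ],
    );

    print $q->header, $tmpl->output;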
I've spent the last 5 months developing a GWT app, and it's now time for third-party people to start using it. In preparation for this, one of them has set up my app behind a reverse proxy, and this immediately resulted in problems with the browser's same-origin policy. I guess there's a problem in the response headers, but I can't seem to rewrite them in any way that makes the problem go away. I've tried this
response.setHeader("Server", request.getRemoteAddress());
in some sort of naive attempt to mimic the behaviour I want. Didn't work (to the surprise of no-one).
Anyone knowing anything about this will most likely snicker and shake their heads when reading this, and I do not blame them. I would snicker too, if it was me... I know nothing at all about this, and that naturally makes this problem awfully hard to solve. Any help at all will be greatly appreciated.
How can I get the header rewrite to work and get away from the SOP issues I'm dealing with?
Edit: The exact problem I'm getting is a pop-up saying:
"SmartClient can't directly contact
URL
'https://localhost/app/resource?action='doStuffs'"
due to browser same-origin policy.
Remove the host and port number (even
if localhost) to avoid this problem,
or use XJSONDataSource protocol (which
allows cross-site calls), or use the
server-side HttpProxy included with
SmartClient Server."
But I shouldn't need the SmartClient HttpProxy, since I have a proxy on top of the server, should I? I've gotten no indications that this could be a serialisation problem, but maybe this message is hiding the real issue...
Solution
chris_l and saret both helped find the solution, but since I can only mark one, I marked the answer from chris_l. Readers are encouraged to bump them both up; they really came through for me here. The solution was quite simple: just remove any absolute paths to your server and use only relative ones. That did the trick for me. Thanks guys!
The SOP (for AJAX requests) applies when the URL of the HTML page and the URL of the AJAX requests differ in their "origin". The origin includes host, port and protocol.
So if the page is http://www.example.com/index.html, your AJAX requests must also point to something under http://www.example.com. For the SOP, it doesn't matter if there is a reverse proxy - just make sure that the URL as it appears to the browser (including port and protocol) isn't different. The URL you use internally is irrelevant - but don't use that internal URL in your GWT app!
Note: The solution in the special case of SmartClient turned out to be using relative URLs (instead of absolute URLs to the same origin). Since relative URLs aren't an SOP requirement in browsers, I'd say that's a bug in SmartClient.
What issue are you having exactly?
Having previously had to write a reverse proxy for a GWT app, I can't remember hitting any SOP issues. One thing you do need to do, though, is make sure response headers and URIs are rewritten to the reverse proxy's URL - this includes AJAX callback URLs.
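If the proxy happens to be Apache httpd, for example, ProxyPassReverse is the directive that rewrites the backend's redirect/Location headers back to the public URL (host names here are made up); URLs embedded in the page body still need to be relative or rewritten separately:

    # Hypothetical Apache httpd reverse-proxy config
    ProxyPass        /app/ http://internal-gwt-host:8888/app/
    ProxyPassReverse /app/ http://internal-gwt-host:8888/app/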
One issue I did hit when running behind a reverse proxy (and which you might also experience) was with the GWT server's serialization policy.
Fixing this required writing an implementation of RemoteServiceServlet. While this was in early/mid 2009, it seems the issue still exists.
Seems like others have hit this as well - see this question for further details (the answer by Michele Renda in particular).