500 read timeout with Selenium when opening a website with large records - Perl

I am using Perl with Selenium. I have set $sel->set_timeout("86400000");.
When opening a website with large content, a "500 read timeout" message is displayed. Can someone please help me?

It seems to me that it is not the Selenium WebDriver (the client) that issued the timeout, but rather that the web server took too long to respond.
What do you want to accomplish? Maybe you can just make an HTTP HEAD request to check that your URL is valid? (A HEAD request does not give you any content back, just the HTTP header with the status code and, optionally, the "Content-Length" header, among other fields. A HEAD request is much faster than a GET or POST request, and you won't have problems with timeouts. You might get more than one HEAD response, e.g. if your request is redirected to another server.)
Or do you want to check the large content itself? In that case I cannot help you at this point; there is not enough information.
You can use a Test::WWW::Mechanize object to create the HEAD request (it is a subclass of WWW::Mechanize, which in turn builds on LWP::UserAgent). Not sure whether Selenium supports HEAD requests.
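For illustration, here is a minimal sketch of such a HEAD request using plain LWP::UserAgent (which Test::WWW::Mechanize builds on); the URL is a placeholder for your own:
use LWP::UserAgent;
my $ua  = LWP::UserAgent->new(timeout => 30);             # client-side timeout in seconds
my $res = $ua->head('http://example.com/large-page');     # placeholder URL
print $res->status_line, "\n";                            # e.g. "200 OK"
print $res->header('Content-Length') // 'unknown', "\n";  # body size, if the server reports it
If the HEAD request succeeds quickly but the full GET still times out, the problem is most likely the time the server needs to generate the large response.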

Related

How to mock HTTP Error response with Charles?

Is it possible to intercept a request going through Charles and immediately return a 500 error code without sending the request to the server?
I can't find any information on this. All resources suggest waiting for the response and then changing the HTTP response code to 500.
I assume you have already tried adding a rewrite rule so that the request is returned with a 500 status. Have you tried combining this with a Map Local, for instance to an empty file on your disk? It may work.
If that doesn't work either, I would do a Map Remote to another path on my localhost (for instance http://localhost:8081/exected-response-500) and make that URL return the 500 status (in my case I would use a basic Spring Boot app to achieve this).
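As a hedged illustration (not part of the original answer), a tiny Perl HTTP::Daemon server can stand in for that Spring Boot app; the port and behavior are assumptions matching the Map Remote example above:
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;

# Listen locally on port 8081 and answer every request with a 500,
# so Charles can Map Remote the real URL to this endpoint.
my $d = HTTP::Daemon->new(LocalAddr => '127.0.0.1', LocalPort => 8081)
    or die "cannot listen on 8081: $!";
while (my $conn = $d->accept) {
    while (my $req = $conn->get_request) {
        $conn->send_response(HTTP::Response->new(500, 'Internal Server Error'));
    }
    $conn->close;
}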

REST API - Created resource redirect

I'm building a REST API, and when a resource is created I normally return HTTP 201 Created along with a Location header to specify where that resource is located. But for some reason the HTTP client is not redirecting.
I'm using Postman for this. Does anyone have an idea about this problem?
In short, a Location header is not sufficient to trigger a client redirect. It must be used in conjunction with a 3xx HTTP status code.
References:
https://en.m.wikipedia.org/wiki/HTTP_location
Redirecting with a 201 created
This is one of those things where expectation does not match what actually happens, and the first thing people think is "well, that doesn't work properly", as has been suggested in other comments.
Location is just another header, and clients such as Postman, curl, or anything else need to be instructed to follow it. Most won't do this by default, as that would be an unreasonable default.
YouTube, for example, returns a body for some responses and a Location header too. One example would be video uploads: the original metadata for the video is sent with a POST, and the response carries a Location URL which is the endpoint to upload the video to. If clients just automatically redirected to that, you'd be having a bad time.
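To illustrate (a hedged sketch, not from the original answers; the endpoint and payload are made up), a client has to read Location out of a 201 response and fetch it explicitly, because libraries like LWP only ever auto-follow 3xx responses (and for POST, not even those by default):
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new;
my $res = $ua->post(
    'http://localhost:8080/api/widgets',   # hypothetical resource collection
    Content_Type => 'application/json',
    Content      => '{"name":"example"}',
);

if ($res->code == 201) {
    # 201 is not a redirect status, so the client is NOT redirected automatically;
    # follow the Location header yourself (assuming it holds an absolute URL).
    my $created = $ua->get($res->header('Location'));
    print $created->decoded_content, "\n";
}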
You can use Paw to make a "sequence", which I believe will let you take values from headers and reuse them. This is also possible with Runscope Ghost Inspector.

SiteCatalyst image request onreadystatechange

Is there any way to "listen" to the state of SiteCatalyst GET image requests?
I'd like to run a callback function only when the requests are over, or to be more precise, when they receive the 200 status code and I'm sure they're done. I'm confident no "built-in" method is available, and maybe I should hack the core s.track / s.t() function...? Thanks a lot.
You are right, there is no global "built-in" callback method for when the Adobe Analytics request is complete.
A couple of notes about attempting to hack the core code:
1) If you are using AppMeasurement library version 1.4.1+, in some circumstances a POST request may be made instead of an image request.
2) Responses that are not 200/OK or otherwise completed/successful do not necessarily mean the data failed to be sent to Adobe. The most common scenario is an NS_BINDING_ABORTED error being returned.
The main bad effect I'm getting here is what I previously thought was a double XHR request.
It wasn't. In reality the first request gets redirected, as if it were the first visit of a new visitor (302 status), and a new visitor ID is sent back by the Adobe server.
Then the redirected "200 status" request is made with this new visitor ID inside. This is bad because every XHR request would result in a new visit by a new visitor, even though a previously set "s_vi" cookie is there in the browser, and the data previously collected for that user is lost. I know that XHR redirects can't be blocked, so I'm wondering whether there is a way to "tell" the Adobe server that this is not the first request ever made, in order to stop the redirect and avoid using a new visitor ID.

HTTP response code for stale pagination

I have a web service that runs a SQL query, sorted by one of several columns, and returns a single requested page (as in skip M limit N). The UI in front of it follows a 'load more' pattern, accumulating all loaded pages of results in one view.
The problem is new and deleted records, which can appear at any time, causing the results of a 'load more' to be wrongly aligned and, depending on the sort being used, even obscuring new records that should be shown. In addition to an automatic refresh on a timer in the frontend, I'm going to add a timestamp field to the RESTful request and response format to allow the web app to detect that the view should be completely reloaded on a 'load more' call.
My question is, what HTTP status code is the best fit for this signal? In reviewing the codes I don't see an exact fit. I thought of using 302 Found with a link to 'page 1', but I wonder whether that might cause unwanted caching of the redirect. I also thought of 400 Bad Request, but there's nothing actually wrong with the request; it's just that the data needs to be reloaded.
Pages are served from a POST /path call where the requested page is provided in a JSON body.
I'm not a complete purist, so anything that would make it work without caching or other side effects is acceptable, but I would like to adhere to REST principles as much as possible.
anything that would make it work without caching or other side effects is acceptable
Responses to POST requests are not cacheable unless you explicitly mark them as such. So you can use any combination of status code, response headers and response entity to communicate the “please reload” message to the client.
You can use conditional requests. Include your client’s timestamp in the If-Unmodified-Since header, and respond with 412 Precondition Failed if the client is stale. The client will have to know how to reload.
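A hedged sketch of what that could look like from the client side (the endpoint, field names, and timestamp are invented for illustration):
use LWP::UserAgent;
use JSON::PP qw(encode_json);

my $ua  = LWP::UserAgent->new;
my $res = $ua->post(
    'http://localhost:8080/path',                              # hypothetical paging endpoint
    'If-Unmodified-Since' => 'Tue, 01 Aug 2017 00:00:00 GMT',  # timestamp of the client's first page load
    Content_Type          => 'application/json',
    Content               => encode_json({ skip => 40, limit => 20 }),
);

if ($res->code == 412) {
    # Data set changed since the client's snapshot: discard accumulated rows
    # and reload from page 1.
}
elsif ($res->is_success) {
    # Append the returned rows to the existing view.
}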
You can try 307 Temporary Redirect, but only if you encode pagination in /path, because upon receiving a 307 (I’m assuming you’re doing AJAX here), XMLHttpRequest will transparently re-submit the same POST request to the new Location (at least this is what my Chromium does). Your actual page JSON will have to include meta-information on the range it covers, so that the client knows it has to replace the rows, not append them; see the sketch below.
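For example, the response body could carry its own range alongside the rows (field names are hypothetical):
{ "skip": 0, "limit": 20, "total": 613, "rows": [ ... ] }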

How to disable caching of HTTP GET in a Metro app; I am using IXMLHTTPRequest2

I am doing an HTTP GET to fetch data, using IXMLHTTPRequest2.
If I GET the URL "http://foo.com" (curl "http://foo.com") and then GET the same URL a second time, even though the content on the server has actually changed, what I get is the cached result.
Caching seems to honor only the URL, so a request with different headers but the same URL still returns the same cached result.
I have tried "Cache-Control: no-cache", "Cache-Control: no-store", and "Pragma: no-cache". None of them are honored by the API.
Is there a way to turn the cache off, or a workaround? (One workaround I am using is appending garbage to the end of the URL, but I don't feel good about it.)
My question was answered here by Prashant: http://social.msdn.microsoft.com/Forums/en-US/winappswithnativecode/thread/1df95d3e-68c9-4351-822a-c2cfde380248/#1df95d3e-68c9-4351-822a-c2cfde380248
You can force XHR to retrieve the latest content by setting the "If-Modified-Since" HTTP header in the request to a time in the past.
If you have control over the server response, you could send back an Expires HTTP response header with a value of 0 or a date in the past. That should make XHR retrieve the latest response for you.
You only need to do one of the above; changing both the client-side and server-side code is not necessary.
The client side code could be changed to something like this:
xhr->Open(...)   // e.g. Open(L"GET", url, callback, nullptr, nullptr, nullptr, nullptr), where url and callback are your own
xhr->SetRequestHeader(L"If-Modified-Since", L"Sat, 01 Jan 2000 00:00:01 GMT")   // any date safely in the past works
xhr->Send(...)   // e.g. Send(nullptr, 0) for a GET with no request body
To change the server-side behavior, if your server-side code is based on ASP.NET you could set the response header like this:
Response.Headers.Add("Expires", "0")
I think you need to use sockets. I think these two links should help:
C# WebClient disable cache
How do I Create an HTTP Request Manually in .Net?