Katalon Studio: Retry URL Until Successful

I am using Katalon Studio with the Navigate To Url action.
It would be useful to be able to retry this action until there is no error loading the page, whether a 'connection refused' or a 404-type error.
After the web page loads successfully, it is OK to go ahead and execute the rest of my script.
Is there any example of a way to do this?

Try something like this; just change the CSS selector to match the error message your application shows:
TestObject errorMessage = new TestObject().addProperty('css', ConditionType.EQUALS, 'span.error-message-example')

// Keep reloading the page as long as the error message is still visible
WebUI.navigateToUrl('https://example.com')
while (WebUI.verifyElementVisible(errorMessage, 3, FailureHandling.OPTIONAL)) {
    WebUI.navigateToUrl('https://example.com')
}

I also have this issue. I found that I kept getting 404 or connection reset errors when the CPU was over-utilized, but since I need to run a number of test cases it is normal for my CPU to be overloaded.
I wrote the following Chrome retry logic and have been using it for a year or so; it has served me well:
for (times in 1..5) {
    try {
        WebUI.openBrowser('')
        WebUI.navigateToUrl(url)
        // Chrome-specific: fail this attempt if the connection-reset error page is shown
        WebUI.verifyTextNotPresent('ERR_CONNECTION_RESET', false)
        break
    } catch (Exception e) {
        println 'Cannot navigate to URL on attempt ' + times + ', trying again'
        WebUI.closeBrowser()
        WebUI.delay(1)
        if (times >= 5) {
            // Deliberately failing assertion to mark the test case as failed after the last attempt
            WebUI.verifyMatch('1', '0', false)
        }
    }
}
Basically, if anything fails during the initial startup it will retry up to 5 times before giving up.
See if this is something that helps you as well.

Related

Apache Camel - Getting a list of files from FTP as a result of a GET request

As the title suggests, I'm trying to get a list of files from an FTP directory to send as the response to a GET request.
I have the following REST route implementation:
rest().get("/files")
    .produces(MediaType.APPLICATION_JSON_VALUE)
    .route()
    .routeId("restRouteId")
    .to("direct:getAllFiles");
On the other side of the direct route I have the following routes:
from("direct:getAllFiles")
.routeId("filesDirectId")
.to("controlbus:route" +
"?action=start" +
"&routeId=ftpRoute");
from([ftpurl])
.noAutoStartup()
.routeId("ftpRoute")
.aggregate(constant(true), new FileAggregationStrategy())
.completionFromBatchConsumer()
.process(filesProcessor)
.to("controlbus:route" +
"?action=stop" +
"&routeId=" + BESTANDEN_ROUTE_ID);
The issue at hand is that with this approach the request does not wait for the complete process to finish; it almost instantly returns an empty response with status code 200.
I've tried multiple solutions, but they all fail in one of two ways: either the request gets a response even though the route hasn't finished yet, or the route gets stuck waiting for inflight exchanges at some point and waits for the 5-minute timeout to continue.
Thanks in advance for your advice and/or help!
Note: I'm working in a Spring Boot application (2.0.5) and Apache Camel (2.22.1).
I think the problem here is that your two routes are not connected. You are using the control bus to start the second route but it doesn't return the value back to the first route - it just completes, as you've noted.
What I think you need (I've not tested it) is something like:
from("direct:getAllFiles")
.routeId("filesDirectId")
.pollEnrich( [ftpurl], new FileAggregationStrategy() )
.process( filesProcessor );
as this will synchronously consume from your FTP endpoint, do the post-processing, and return the values to your REST route.
With the help of #Screwtape's answer I managed to get it working for my specific issue. A few adjustments were needed; here is a list of what you need:
Add the option "sendEmptyMessageWhenIdle=true" to the ftp url
In the AggregationStrategy add an if (exchange == null) clause
In the clause set a property "finished" to true
Wrap the pollEnrich with a loopDoWhile that checks the finished property
In its entirety it looks something like:
from("direct:ftp")
.routeId("ftpRoute")
.loopDoWhile(!finished)
.pollEnrich("ftpurl...&sendEmptyMessageWhenIdle=true", new FileAggregationStrategy())
.choice()
.when(finished)
.process(filesProcessor)
.end()
.end();
In the AggregationStrategy the aggregate method looks something like:
@Override
public Exchange aggregate(Exchange currentExchange, Exchange newExchange) {
    if (currentExchange == null) {
        return init(newExchange);
    }
    if (newExchange == null) {
        // The idle/empty poll signals that there are no more files, so mark the loop as finished
        currentExchange.setProperty("finished", true);
        return currentExchange;
    }
    return update(currentExchange, newExchange);
}

Failed assertions inside a for loop are not failing the Gatling scenario

I'm trying to validate whether the error messages returned by the API are correct.
So I stored all the local error message strings in a HashMap, errorMessages:
.doIf(errorMessages.size() > 1) {
    exec(session => {
        assert(ResponseJSON.contains(errorMessages.get("errorMessage1")))
        for ((k, v) <- errorMessages) {
            assert(ResponseJSON.contains(v))
        }
        session
    })
}
I could see the error on the console as:
hook-3' crashed with 'java.lang.AssertionError: assertion failed', forwarding to the next one
But the Gatling scenario is not failing here. What am I missing?
Try using exitHereIfFailed to exit the scenario.

"ERROR: 57014: canceling statement due to user request" Npgsql

I am having this phantom problem in my application where one in every 5 requests on a specific page (in an ASP.NET MVC application) throws this error:
Npgsql.NpgsqlException: ERROR: 57014: canceling statement due to user request
at Npgsql.NpgsqlState.<ProcessBackendResponses>d__0.MoveNext()
at Npgsql.ForwardsOnlyDataReader.GetNextResponseObject(Boolean cleanup)
at Npgsql.ForwardsOnlyDataReader.GetNextRow(Boolean clearPending)
at Npgsql.ForwardsOnlyDataReader.Read()
at Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb)
...
On the Npgsql GitHub page I found the following bug report: 615
It says there:
Regardless of what exactly is happening with Dapper, there's
definitely a race condition when cancelling commands. Part of this is
by design, because of PostgreSQL: cancel requests are totally
"asynchronous" (they're delivered via an unrelated socket, not as part
of the connection to be cancelled), and you can't restrict the
cancellation to take effect only on a specific command. In other
words, if you want to cancel command A, by the time your cancellation
is delivered command B may already be in progress and it will be
cancelled instead.
Although they have made "changes to hopefully make cancellations much safer" in Npgsql 3.0.2, my current code is incompatible with that version because of the migration effort described here.
My current workaround (admittedly crude): I have commented out the code in Dapper that calls command.Cancel(), and the problem seems to be gone.
if (reader != null)
{
    if (!reader.IsClosed && command != null)
    {
        //command.Cancel();
    }
    reader.Dispose();
    reader = null;
}
Is there a better solution to the problem? And secondly, what am I losing with the current fix (except that I have to remember to reapply the change every time I update Dapper)?
Configuration:
.NET 4.5,
Npgsql 2.2.5,
PostgreSQL 9.3
I found out why my code didn't dispose the reader, which resulted in command.Cancel() being called. This only happens with the QueryMultiple method when not every refcursor is read.
Changing the code from:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    if (client != null)
    {
        client.Address = multipleResults.Read<Address>().Single();
    }
    return client;
}
To:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    var address = multipleResults.Read<Address>().SingleOrDefault();
    if (client != null)
    {
        client.Address = address;
    }
    return client;
}
This fixed the issue and now the reader is properly disposed and command.Cancel() is not invoked.
Hope this helps anyone else!
UPDATE
The Npgsql docs for version 2.2 state:
Npgsql is able to ask the server to cancel commands in progress. To do
this, call the NpgsqlCommand’s Cancel method. Note that another thread
must handle the request as the main thread will be blocked waiting for
command to finish. Also, the main thread will raise an exception as a
result of user cancellation. (The error code is 57014.)
I have also posted an issue on the Dapper github page.

Protractor: how to catch an error or timeout on browser.driver.get(url) if the remote url fails to load?

It seems that when Protractor executes "browser.driver.get(...)" it waits until the page is loaded or throws an "Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL" message. The remote URL fails to load (followed by a freeze) roughly 1 time in 10. A workaround for that is to refresh/reload the page. Is there any way to implement that behaviour in Protractor? (Say, repeat the action 4-8 times and then continue.)
You should be able to catch errors thrown through the promises API, like this:
browser.driver.get(...).then(function(result) {
    // do something when the page is found
}).thenCatch(function(error) {
    // do something with the error
});

Execution stops on Cookies.getCookie() call with no exceptions

I'm trying to read a cookie value in my server-side implementation class. After some debugging my code now looks like this:
logger.info("Initiating login");
String oracleDad;
try {
logger.info("inside try");
oracleDad = Cookies.getCookie("dad");
logger.info("Read dad from cookie: " + oracleDad);
} catch (Exception e) {
logger.error("Failed to read dad from cookie", e);
oracleDad = "gtmd";
}
When I execute this code, my onFailure block is fired with a StatusCodeException:
com.google.gwt.user.client.rpc.StatusCodeException: 500 The call
failed on the server; see server log for details
My logging output on the server looks like this:
[INFO] c.g.e.server.rpc.MyProjectImpl - Initiating login
[INFO] c.g.e.server.rpc.MyProjectImpl - inside try
How is it possible that neither logger call, the INFO nor the ERROR, fires after the Cookies.getCookie() call? I'd hoped that by adding the catch (Exception e) I'd get some error message explaining why the code fails, but execution just seems to stop silently.
I'm using com.google.gwt.user.client.Cookies. I thought client code can be run on the server, just not vice versa. Is that correct? Is there something else I'm missing?
I'm using com.google.gwt.user.client.Cookies. I thought client code can be run on the server, just not vice versa. Is that correct? Is there something else I'm missing?
No, that's not correct, and yes, there is something you're missing: server code can't run on the client, and client code can't run on the server.
You are not getting an Exception. You are getting an Error or some other Throwable.
Try catching Throwable in your try/catch block, and you'll see that you are getting an error where the server JVM is unable to load the Cookies class, because something is wrong. The JVM thinks that a native library is missing (because it doesn't know what JSNI is or how to run it), so it throws an UnsatisfiedLinkError:
Thrown if the Java Virtual Machine cannot find an appropriate native-language definition of a method declared native.
In GWT, the Cookies class is meant to interact with the browser itself to see what cookies have been defined on the currently loaded page. To use cookies on a J2EE server, ask the HttpServletRequest object for the cookies it knows about, and their values.
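As a minimal sketch (assuming your RPC implementation class extends GWT's RemoteServiceServlet, and reusing the "dad" cookie name and "gtmd" default from the question; the readDadCookie helper name is made up for illustration), reading the cookie on the server side could look something like this:
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

// Inside the server-side RPC implementation class (which extends RemoteServiceServlet)
private String readDadCookie() {
    // getThreadLocalRequest() exposes the HttpServletRequest of the current RPC call
    HttpServletRequest request = getThreadLocalRequest();
    String oracleDad = "gtmd"; // fall back to the default used in the question
    if (request.getCookies() != null) {
        for (Cookie cookie : request.getCookies()) {
            if ("dad".equals(cookie.getName())) {
                oracleDad = cookie.getValue();
                break;
            }
        }
    }
    return oracleDad;
}
Because this goes through the servlet API rather than GWT's client-side Cookies class, no JSNI is involved and the UnsatisfiedLinkError goes away.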