I have a model and wrote the tests for it before I started implementing. Now my problem is: while the functionality works, my tests are non-deterministic. Most of the time they pass, but sometimes they don't. I assume it's because of the Futures involved.
Let me show you what I mean with an example:
before {
  db.run(animals.createTable)
}

after {
  db.run(animals.dropTable)
}

"An animal" must "have a unique id" in {
  val setup = DBIO.seq(
    animals.insert(Animal("Ape")),
    animals.insert(Animal("Dog"))
  )
  db.run(setup)

  val result = db.run(animals.tableQuery.result).futureValue
  result shouldBe a[Seq[_]]
  result.distinct.length shouldEqual 2
  result(0).id should not equal result(1).id
}
I assume that sometimes db.run(setup) finishes in time and sometimes it doesn't, hence I then get an assertion error: "expected length was 2, actual 0". As I said, to me it looks like a "race condition" (I know that's not the correct term ;)).
So what I tried was simply awaiting the result of the insert statement, like so:
Await.ready(db.run(setup), Duration.Inf)
But that doesn't change a thing. So why is that? Can somebody explain to me why Await does not block here? I assumed it would block and only execute the lines that come after once the inserts had been executed.
I also tried wrapping the assertions in an onComplete block, but no luck either.
Any hints for me?
I suspect your issue is that sometimes your before hook has not finished either, since it's also asynchronous. If you add an Await.ready to the future in the before block, as well as to the one for your setup block, the problem should go away.
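A minimal sketch of what that could look like, assuming db, animals, Animal and the ScalaTest futureValue/before/after pieces are exactly as in your snippets (blocking with Await is usually acceptable in test code):

import scala.concurrent.Await
import scala.concurrent.duration._

before {
  // block until the schema has actually been created
  Await.ready(db.run(animals.createTable), 10.seconds)
}

after {
  Await.ready(db.run(animals.dropTable), 10.seconds)
}

"An animal" must "have a unique id" in {
  val setup = DBIO.seq(
    animals.insert(Animal("Ape")),
    animals.insert(Animal("Dog"))
  )
  // block until both inserts have run before querying
  Await.ready(db.run(setup), 10.seconds)

  val result = db.run(animals.tableQuery.result).futureValue
  result shouldBe a[Seq[_]]
  result.distinct.length shouldEqual 2
  result(0).id should not equal result(1).id
}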
This is my first question ever on this website, please bear with me patiently.
I'm trying to build an HTTP long-polling test for a project using Gatling. By crawling through many questions on Stack Overflow I've been able to combine separate concepts into a piece of syntactically correct code, but sadly it doesn't do what it's intended to do.
When a status code of 200 is obtained after any request, the loop should break and the test should be considered passed. If the status code is different from 200, it should keep the connection alive and keep polling, without failing the test.
When the .tryMax value is reached and all responses returned a status different from 200, the loop should break and the test should be considered failed.
Using the inequality operator (!=) doesn't work either, so I decided to use .equals() instead and test the loop, to no avail.
Being new to both Gatling and Scala, I'm still trying to figure out what's wrong with this code, execution-wise:
def HttpPollingAsync() = {
  asLongAs(session => session("statuss").validate[String].equals("200")) {
    exec(
      polling
        .every(10 seconds)
        .exec(
          http("polling-async-response")
            .post("/" + BaseURL + "/resource-async-response")
            .headers(headers)
            .body(RawFileBody("requestdata.json"))
            .check(
              status.is(200),
              jsonPath("$.status").is("200"),
              jsonPath("$.status").saveAs("statuss")
            )
        )
    ).exec(polling.stop)
  }
}

val scn = scenario("asyncpolling")
  .tryMax(60) {
    exec(HttpPollingAsync())
  }

setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
The exception I get when running this piece of code (which, again, is only syntactically correct) is:
Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
at io.gatling.charts.report.ReportsGenerator.generateFor(ReportsGenerator.scala:49)
at io.gatling.app.RunResultProcessor.generateReports(RunResultProcessor.scala:59)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:38)
at io.gatling.app.Gatling$.start(Gatling.scala:81)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:46)
at io.gatling.app.Gatling$.main(Gatling.scala:38)
at io.gatling.app.Gatling.main(Gatling.scala)
So some part of it is never reached or used.
Any bit of help or pointing me in the right direction would be appreciated.
Thank you!
An asLongAs loop condition is evaluated at the start of the loop, so on your first execution the condition fails because there is no session value for statuss yet.
The doWhile loop, on the other hand, checks its condition at the end of the loop.
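A rough sketch of that idea (untested; it assumes the same BaseURL, headers, request body and imports as in your simulation, that your Gatling version accepts a Session => Boolean condition the same way your original asLongAs does, and it uses a plain pause-based loop instead of the polling.every(...) helper):

def httpPollingAsync() =
  // doWhile evaluates its condition *after* each iteration, so the first request
  // always runs and "statuss" exists in the session by the time the check happens
  doWhile(session => !session("statuss").asOption[String].contains("200")) {
    exec(
      http("polling-async-response")
        .post("/" + BaseURL + "/resource-async-response")
        .headers(headers)
        .body(RawFileBody("requestdata.json"))
        // no status.is(200) check here: a non-200 poll should keep polling,
        // not mark the request as failed
        .check(jsonPath("$.status").saveAs("statuss"))
    ).pause(10.seconds)
  }

Leaving out the status.is(200) check matches your requirement that a non-200 response keeps polling without failing the test; if you also want a hard cap on the number of polls, I believe doWhile accepts a counter name whose value you can check inside the condition.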
I am trying to execute code after repeat exhaustion using onErrorResume, but onErrorResume is not being triggered.
Here is the code sample:
Mono.just(request)
    .filter(this::isConditionSatified)
    .map(aBoolean -> performSomeOperationIfConditionIsSatified(request))
    .repeatWhenEmpty(Repeat.onlyIf(i -> true)
        .exponentialBackoff(Duration.ofSeconds(5), Duration.ofSeconds(10))
        .timeout(Duration.ofSeconds(30)))
    .delaySubscription(Duration.ofSeconds(10))
    .onErrorResume(throwable -> {
        log.warn("Max timeout reached", throwable);
        return Mono.just(false);
    });
onErrorResume is never triggered. I am trying to use it as a fallback: my goal is that, when the repeat exhaustion is hit, false is returned.
My unit test complains with:
expectation "expectNext(false)" failed (expected: onNext(false); actual: onComplete())
Any help or suggestion would be helpful.
Since an empty source is valid by itself, repeatWhenEmpty doesn't necessarily propagate an exception after exhausting its attempts. The Repeat util from reactor-addons doesn't, even when the "timeout" triggers (as hinted at in the timeout parameter's javadoc: "timeout after which no new repeats are initiated", which admittedly could be clearer).
Since you're using repeatWhenEmpty, I'm guessing that the empty case is always "irrelevant" to you, and thus defaultIfEmpty(false) should be an acceptable solution.
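To illustrate why the fallback has to sit on the empty path rather than the error path, here is a minimal sketch (written in Scala against reactor-core, not your exact Java chain): once the repeats are exhausted the Mono simply completes empty, no error is signalled, so onErrorResume never fires, while defaultIfEmpty turns that empty completion into a value.

import reactor.core.publisher.Mono

// Stand-in for your chain after repeatWhenEmpty has given up: it completes
// empty, without an error, so onErrorResume would never be invoked.
val exhausted: Mono[Boolean] = Mono.empty[Boolean]()

// defaultIfEmpty converts "completed without a value" into onNext(false),
// which is what your StepVerifier expectation expectNext(false) needs.
val withFallback: Mono[Boolean] = exhausted.defaultIfEmpty(false)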
In my Flutter code, I have logic that does this:
final jsonString = await rootBundle.loadString('AssetManifest.json');
And I have tests in which I want a fake AssetManifest.json to be returned when this line is reached.
To mock it, I do this in the test:
ServicesBinding.instance.defaultBinaryMessenger
.setMockMessageHandler('flutter/assets', (message) {
final Uint8List encoded =
utf8.encoder.convert('{"Foo.ttf":["Foo.ttf"]}');
return Future.value(encoded.buffer.asByteData());
});
The weird thing is that this works, but any tests that run after it hang (they all get stuck when the code reaches the await rootBundle.loadString('AssetManifest.json') line).
I've tried adding
ServicesBinding.instance.defaultBinaryMessenger
.setMockMessageHandler('flutter/assets', null);
But this doesn't seem to properly "clean up" the mocked behavior. In fact, if I run the above line in my setUp, the first test to run hangs.
So am I mocking the behavior wrong? Or am I not cleaning it up properly?
I ran into the same issue, and I believe it's due to caching by the bundle: the cached result means the message never gets sent again, which causes the test above to fail. When calling loadString you can specify whether to cache the result, e.g. loadString('AssetManifest.json', cache: false).
Note that if you use loadStructuredData, implementations may cache the result and you can't tell them not to.
In Protractor tests I call the browser.wait method many times, for example to wait until a particular element appears on the screen or becomes clickable.
In many cases the tests pass on my local machine, but they don't on other machines.
I receive very generic information about the timeout, which doesn't help me much to debug and find the source of the issue.
Is it possible to make browser.wait more verbose? For example:
if the defaultTimeoutInterval elapses while waiting for a particular element, console.log information about the element it tried to wait for,
take a screenshot when the timeout error occurs,
provide a full call stack when a timeout occurs in browser.wait.
If the main issue is that you don't know which element the wait timed out on, I would suggest writing a helper function for wait and using it instead, something like:
// log which element we are waiting for, then wait for it to become clickable
wait = function(variable, variableName, waitingTime) {
  console.log('Waiting for ' + variableName);
  browser.wait(protractor.ExpectedConditions.elementToBeClickable(variable), waitingTime);
  console.log('Success');
}
Because Protractor stops executing the test after the first failure, if the wait times out the console won't print the success message for the element that failed to load.
For screenshots I suggest trying out protractor-jasmine2-screenshot-reporter: it generates an easily readable HTML report with screenshots and debug information on failed tests (for example, the line of code where the failure occurred).
Look into using Protractor's ExpectedConditions; you can specify what to wait for and how long to wait for it.
For screenshots there are npm modules out there that can take a screenshot when a test fails. This might help.
browser.wait returns a promise, so catch the error and print/throw something meaningful like:
await browser.wait(ExpectedConditions.visibilityOf(css), waitingTime).catch((error) => {
  throw new CustomError(`Could not find ${css} ${error.message}`);
});
This is my block, which contains an element: element(by.model("$ctrl.benchmark.name"));
This element is not present in the DOM. Protractor gives me an error that the element is not on the page, but it still executes all the lines of code written after it. I want to handle this sequentially: only if the step above passes should it move on to the next one. How can I handle these types of problems in Protractor?
it("Test BenchMark",function(){
browser.getTitle().then(function (name) {
console.log(name);
browser.sleep(2000);
element(by.linkText("Manage Benchmarks")).click();
browser.sleep(4000)
//element(by.xpath("//main[#class='ng-scope']//a[text()='Create Benchmark']")).click();
console.log("megha");
element(by.model("$ctrl.benchmark.name")).sendKeys("bench");
element(by.buttonText("Save")).click();
console.log(megha);
element(by.xpath("//button[#class='dropdown-toggle']")).click();
console.log("dropdown clicked")
});
The behavior you are expecting is not handled by Protractor but by the testing framework (e.g. Jasmine). But:
"Jasmine doesn't support failing early, in a single spec. The idea is to give
you all of the failures in case that helps figure out what is really wrong
in your spec"
You can use browser.wait() combined with Expected Conditions.
browser.wait() blocks control flow execution until a promise is resolved, and Expected Conditions all evaluate to a promise.
So in your case, you could use presenceOf() and/or visibilityOf().
var EC = protractor.ExpectedConditions;
var el = element(by.model("$ctrl.benchmark.name"));
var present = EC.presenceOf(el); // wait for it to be added to DOM
var visible = EC.visibilityOf(el); // wait for it to be visible on page
browser.wait(EC.and(present, visible), 10000); // wait maximum of 10 seconds
// rest of code