Karma - custom adapter with dynamically updated number of total tests - karma-runner

I'm creating a custom adapter for Karma. In other adapters I looked at (e.g. karma-mocha), the total number of tests is reported at the beginning of the execution (via an info call); in my case this cannot be done at the beginning because the total number of tests is not known yet.
I tried invoking the info method after every test suite is executed (sending the number of tests executed so far), but it seems to have no effect (it looks like the total is only updated at the beginning of the whole test run). Is it possible to achieve this with some other approach, or maybe a custom reporter?
P.S. I know that updating the total number of tests after executing every test suite doesn't have much value, as the user still won't see the total number of tests before they run. However, "Executed 550 of 550" still looks better than "Executed 550 of 0".

Looks like Karma does something special when info is called with a total before any test results are reported.
this.info = function (info) {
  // TODO(vojta): introduce special API for this
  if (!startEmitted && util.isDefined(info.total)) {
    socket.emit('start', info)
    startEmitted = true
  } else {
    socket.emit('info', info)
  }
}
You could try keeping track of your test results/errors yourself; then, after the test run has finished, call info with the total number of tests before calling result, error, and complete respectively.
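A rough sketch of that idea (illustrative adapter code, not Karma API apart from info/result/complete; here karma stands for the injected window.__karma__ client object):

var bufferedResults = []

function onTestDone(result) {
  // whatever object you would normally pass straight to karma.result()
  bufferedResults.push(result)
}

function onRunComplete(karma) {
  // now the total is known, so Karma emits 'start' with the correct count
  karma.info({ total: bufferedResults.length })
  bufferedResults.forEach(function (r) { karma.result(r) })
  karma.complete({})
}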

Related

How to further debug this dropped record in Apache Beam?

I am seeing intermittently dropped records (only for error messages though, not for success ones). We have a test case that intermittently fails/passes because of a lost record. We are using "org.apache.beam.sdk.testing.TestPipeline.java" in the test case. This is the relevant setup code, which is where I have traced the dropped record to:
PCollectionTuple processed = records
    .apply("Process RosterRecord", ParDo.of(new ProcessRosterRecordFn(factory))
        .withOutputTags(TupleTags.OUTPUT_INTEGER, TupleTagList.of(TupleTags.FAILURE))
    );
errors = errors.and(processed.get(TupleTags.FAILURE));
PCollection<OrderlyBeamDto<Integer>> validCounts = processed.get(TupleTags.OUTPUT_INTEGER);
PCollection<OrderlyBeamDto<Integer>> errorCounts = errors
    .apply("Flatten Roster File Error Count", Flatten.pCollections())
    .apply("Publish Errors", ParDo.of(new ErrorPublisherFn(factory)));
The relevant code in ProcessRosterRecordFn.java is this:
if (dto.hasValidationErrors()) {
    RosterIngestError error = new RosterIngestError(record.getRowNumber(), record.toTitleValue());
    error.getValidationErrors().addAll(dto.getValidationErrors());
    error.getOldValidationErrors().addAll(dto.getOldValidationErrors());
    log.info("Tagging record row number=" + record.getRowNumber());
    c.output(TupleTags.FAILURE, new OrderlyBeamDto<>(error));
    return;
}
I see this "Tagging record row number" log for both of the 2 rows that fail. After that, however, inside the first line of ErrorPublisherFn.java we log immediately after receiving each message, and SOMETIMES we only receive 1 of the 2 rows. When we receive both, the test passes. The test is very flaky in this regard.
Apache Beam is really annoying in its naming of threads (they all have the same name), so I added a logback thread hashcode to get more insight, but I don't see anything conclusive there, and ErrorPublisherFn could publish #4 on any thread anyway.
Ok, so now the big question: how can I add more instrumentation to figure out why this record is being dropped INTERMITTENTLY?
Do I have to debug Apache Beam itself? Can I insert other functions or make changes to figure out why this error is 'sometimes' lost on some test runs and not others?
EDIT: Thankfully, this set of tests is not testing errors upstream, so the line "errors = errors.and(processed.get(TupleTags.FAILURE));" can be removed, which in turn forces me to remove ".apply("Flatten Roster File Error Count", Flatten.pCollections())". After removing those 2 lines, the issue goes away for 10 test runs in a row (i.e. with this flaky behaviour I can't completely say it is gone). Are we doing something wrong in the join and flattening? I checked the Error structure and rowNumber is part of equals and hashCode, so there should be no duplicates, and I am not sure why duplicate objects would cause an intermittent failure anyway.
What more can be done to debug here and figure out why this join is not working in the TestPipeline?
How can I get insight into the flatten and join so I can debug why we are losing an event, and why we only lose it 'sometimes'?
Is this a windowing issue, even though our job started with a file to read in and we want to process that file? We wanted a constant Dataflow stream available as Google kept running into limits, but perhaps this was the wrong decision?
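One kind of instrumentation that might help narrow this down (a sketch only, using Beam's Metrics API; the counter names are made up and this is not from the original post): increment a counter right where the failure is tagged and another one where ErrorPublisherFn receives it, then query both counters from the TestPipeline result to see which hop loses the element.

// Counter, Metrics and MetricsFilter come from org.apache.beam.sdk.metrics
// In ProcessRosterRecordFn: a field, incremented just before the failure output
private final Counter taggedFailures =
    Metrics.counter(ProcessRosterRecordFn.class, "taggedFailures");
// ... inside processElement(): taggedFailures.inc(); then c.output(TupleTags.FAILURE, ...);

// In ErrorPublisherFn: a field, incremented in the first line of processElement()
private final Counter receivedFailures =
    Metrics.counter(ErrorPublisherFn.class, "receivedFailures");

// In the test, after running the pipeline, print both counters
PipelineResult result = pipeline.run();
result.waitUntilFinish();
result.metrics()
    .queryMetrics(MetricsFilter.builder().build())
    .getCounters()
    .forEach(c -> System.out.println(c.getName() + " = " + c.getCommitted()));

If taggedFailures ends up at 2 but receivedFailures at 1, the element is lost between the tagging and the flatten/publish step rather than inside the DoFns.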

(Laravel 5) Monitor and optionally cancel an ALREADY RUNNING job on queue

I need to achieve the ability to monitor and be able to cancel an ALREADY RUNNING job on queue.
There's a lot of answers about deleting QUEUED jobs, but not on an already running one.
This is the situation: I have a "job" which consists of HUNDREDS OF THOUSANDS of rows in a database that need to be queried ONE BY ONE against a web service.
Every row needs to be picked up, queried against the web service, its response stored, and its status updated.
I already had that working as a Command (launching from / outputting to the console), but now I need to implement queues in order to allow piling up more jobs from more users.
So far I've looked at Horizon (which doesn't run on Windows due to missing process control libs). However, judging from the demos I've seen around, it lacks (I believe) a couple of things I need:
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
Ability to CANCEL an ALREADY RUNNING job.
I also considered generating EACH REQUEST as a new job instead of treating the whole collection of rows as one "job" (this would get around the timeout issue), but that would give me a Horizon "pending jobs" list of hundreds of thousands of records per job, and that would kill the browser (I know Redis can handle this without breaking a sweat). Furthermore, I guess it's not possible to cancel "all jobs belonging to X tag".
I've also been thinking about hitting an API route, firing the job and decoupling it from the app, but I'm seeing that this requires forking processes.
For the ability to cancel, I would implement a database table keyed by job_id, and when the user hits an API endpoint to cancel a job, I'd mark it as "halted". On every loop iteration the job would check its status and, if it finds "halted", kill itself.
If I've missed any aspect, just holler and I'll add or clarify it.
So I'm asking for advice here since I'm new to Laravel: how could I achieve this?
So I finally came up with this (a bit clunky) solution:
In Controller:
public function cancelJob()
{
    $jobs = DB::table('jobs')->get();
    # I could use a specific ID and user owner filter, etc.
    foreach ($jobs as $job) {
        DB::table('jobs')->delete($job->id);
    }
    # This is a file that... well, it's self explaining
    touch(base_path(config('files.halt_process_signal')));
    return "Job cancelled - It will stop soon";
}
In the job class (inside the Model::chunk() callback):
# CHECK FOR HALT SIGNAL AND [OPTIONALLY] STOP THE PROCESS
if ($this->service->shouldHaltProcess()) {
    # build stats, do some cleanup, log, etc...
    $this->halted = true;
    $this->service->stopProcess();
    # This FALSE is what makes the chunk() method stop looping
    return false;
}
In the service class:
/**
 * Checks the existence of the 'Halt Process Signal' file
 *
 * @return bool
 */
public function shouldHaltProcess(): bool
{
    return file_exists($this->config['files.halt_process_signal']);
}

/**
 * Stop the batch process
 *
 * @return void
 */
public function stopProcess(): void
{
    logger()->info("=== HALT PROCESS SIGNAL FOUND - STOPPING THE PROCESS ===");
    $this->deleteHaltProcessSignalFile();
    return;
}
It doesn't look very elegant, but it works.
I've searched the whole web and most answers go for Horizon or other tools that don't fit my case.
If anyone has a better way to achieve this, you're welcome to share it.
Laravel queues have 3 important config options:
1. retry_after
2. timeout
3. tries
See more: https://laravel.com/docs/5.8/queues
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
I think you can configure timeout + retry_after to around 24 hours.
Ability to CANCEL an ALREADY RUNNING job.
Delete the job from the jobs table
Kill the process by its process ID on your server
Hope it helps you :)
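As a rough illustration of where those three settings live in Laravel 5.x (the 24-hour figures are only this answer's assumption, the job class name is made up, and use statements are omitted):

In config/queue.php (retry_after is per connection, in seconds, and should stay larger than the job's timeout so a still-running job isn't handed to another worker):

'connections' => [
    'redis' => [
        'driver'      => 'redis',
        'connection'  => 'default',
        'queue'       => 'default',
        'retry_after' => 90000, // ~25 h, above the 24 h timeout below
    ],
],

In the (hypothetical) job class:

class ProcessRosterRows implements ShouldQueue
{
    public $timeout = 86400; // seconds the worker lets the job run (~24 h)
    public $tries   = 1;     // don't automatically re-run a half-finished 12-hour batch
}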

ignore.synchronization=true / browser.waitForAngularEnabled(true) takes so long when compared to browser.sleep()

While executing e2e tests in Protractor, using ignore.synchronization = true / browser.waitForAngularEnabled(true) to handle waits is very slow compared to simply calling browser.sleep(10000) before proceeding to the next step. How can I address these kinds of wait issues to make script execution faster?
Difference:
browser.waitForAngularEnabled(true) makes Protractor wait until Angular has finished its pending work before each step (ignore.synchronization = true is the older setting that turns that synchronization off).
browser.sleep(timeInMs) is a raw way of pausing Protractor for the given number of milliseconds.
Solution:
To handle wait issues:
use browser.waitForAngularEnabled(false) after getting your base URL. Then you can use expected conditions, which make Protractor wait until that condition is met (see the sketch below the link).
Refer https://www.protractortest.org/#/api?view=ProtractorExpectedConditions for more details
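A minimal sketch of that pattern (the spec and the selector are made up for illustration):

const EC = protractor.ExpectedConditions;

describe('dashboard page', function () {
  beforeAll(async function () {
    await browser.waitForAngularEnabled(false); // stop waiting for Angular to settle
    await browser.get(browser.baseUrl);
  });

  it('shows the summary panel', async function () {
    const panel = element(by.css('.summary-panel')); // hypothetical selector
    // wait only as long as actually needed instead of a fixed browser.sleep(10000)
    await browser.wait(EC.visibilityOf(panel), 5000, 'summary panel never became visible');
    expect(await panel.isDisplayed()).toBe(true);
  });
});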
Hope it helps you

How to make Protractor's browser.wait() more verbose?

In Protractor tests I call the browser.wait method many times, for example to wait until a particular element appears on the screen or becomes clickable.
In many cases the tests pass on my local machine but not on other machines.
I receive very generic information about the timeout, which doesn't help me much in debugging / finding the source of the issue.
Is it possible to make browser.wait more verbose? For example:
if defaultTimeoutInterval elapses while waiting for a particular element, console.log information about the element it was waiting for,
take a screenshot when the timeout error occurs,
provide a full call stack when a timeout occurs in browser.wait.
If the main issue is that you don't know which element the wait timed out on, I would suggest writing a helper function and using it instead of wait, something like:
wait = function (variable, variableName, waitingTime) {
  console.log('Waiting for ' + variableName);
  browser.wait(protractor.ExpectedConditions.elementToBeClickable(variable), waitingTime);
  console.log('Success');
}
Because Protractor stops executing a test after the first failure, if the wait times out the console won't print the success message for the element that failed to load.
For screenshots I suggest trying out protractor-jasmine2-screenshot-reporter; it generates an easily readable HTML report with screenshots and debug information on failed tests (for example, in which code line the failure occurred).
Look into using Protractor's ExpectedConditions; you can specify what to wait for and how long to wait for it.
For screenshots there are npm modules out there that can take a screenshot when a test fails. This might help.
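A small sketch of both suggestions combined (the helper name, selector handling and screenshot path are illustrative, not an existing library API; it assumes a screenshots/ directory exists):

const fs = require('fs');
const EC = protractor.ExpectedConditions;

// The optional third argument to browser.wait() is appended to the timeout
// error, which already makes the failure message far more specific.
async function waitUntilClickable(el, timeoutMs) {
  await browser.wait(EC.elementToBeClickable(el), timeoutMs,
    'Element ' + el.locator() + ' was not clickable after ' + timeoutMs + ' ms');
}

// Capture a screenshot after every spec; a custom reporter could limit this
// to failed specs only.
afterEach(async function () {
  const png = await browser.takeScreenshot();
  fs.writeFileSync('screenshots/' + Date.now() + '.png', png, 'base64');
});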
browser.wait returns a promise, so catch the error and print/throw something meaningful like:
await browser.wait(ExpectedConditions.visibilityOf(css), waitingTime).catch((error) => {
  throw new CustomError(`Could not find ${css} ${error.message}`)
});

Not recovering from an f5 drop during a Spock test

Has anyone ever created a successful Spock test against an f5 dropped connection?
In my f5 rule, if a condition is satisfied - say a bad cookie - I drop the connection:
if { [HTTP::cookie exists "badCookie"] } {
    if { not ([HTTP::cookie "badCookie"] matches_regex {^([A-Z0-9_\s]+)$}) } {
        drop
    }
}
Testing this manually in a browser results in a slow but eventual timeout, with the time limit depending on the browser. But rather than manual tests for each of the f5 rules, I'd like to incorporate my tests into our Spock functional test library.
Using Spock, @Timeout() or @Timeout(value = 5) just ends up in a never-ending increase of the timeout, like:
[spock.lang.Timeout] Method 'abc' has not yet returned - interrupting. Next try in 0.50 seconds.
[spock.lang.Timeout] Method 'abc' has not yet returned - interrupting. Next try in 1.00 seconds.
[spock.lang.Timeout] Method 'abc' has not yet returned - interrupting. Next try in 2.00 seconds.
[spock.lang.Timeout] Method 'abc' has not yet returned - interrupting. Next try in 4.00 seconds.
[spock.lang.Timeout] Method 'abc' has not yet returned - interrupting. Next try in 8.00 seconds.
[spock.lang.Timeout] Method 'abc' has not yet returned - interrupting. Next try in 16.00 seconds.
Using the waitFor approach from http://fbflex.wordpress.com/2010/08/25/geb-and-grails-tips-tricks-and-gotchas/ or https://github.com/hexacta/weet/blob/master/weet/src/groovy/com/hexacta/weet/pages/AjaxPage.groovy does not close out the method within a 5 second limit either.
An example of the code using each of those approaches (timeout class, timeout method, and waitFor) is at https://gist.github.com/ledlogic/b152370b95e971b3992f
My question is has anyone found a way to successfully run a Spock test to verify f5 rules are dropping connections?
For me, using the @ThreadInterrupt annotation alongside the @Timeout annotation worked:
import groovy.transform.ThreadInterrupt
import spock.lang.Timeout
import static java.util.concurrent.TimeUnit.MILLISECONDS

@ThreadInterrupt
@Timeout(value = 100, unit = MILLISECONDS)
def 'timeout test'() {
    expect:
    while (1) { true }
}
You'll find the full documentation here: http://docs.groovy-lang.org/docs/next/html/documentation/#GroovyConsole-Interrupt
However, this may not be sufficient to interrupt a script: clicking the button will interrupt the execution thread, but if your code doesn't handle the interrupt flag, the script is likely to keep running without you being able to effectively stop it. To avoid that, you have to make sure that the Script > Allow interruption menu item is flagged. This will automatically apply an AST transformation to your script which will take care of checking the interrupt flag (@ThreadInterrupt). This way, you guarantee that the script can be interrupted even if you don't explicitly handle interruption, at the cost of extra execution time.