This is a question more about the architecture of a program that runs Karma in a CI pipeline.
I have a set of web components. They use Karma to run their tests (following the open-wc.org recommendations). I also have a custom CI pipeline that lets me schedule a test run for a selected group of components.
When a run is scheduled, it executes the tests for each component one by one. However, in my logs I am getting messages like
MaxListenersExceededWarning: Possible EventEmitter memory leak
detected. 12 exit listeners added to [process]. Use
emitter.setMaxListeners() to increase limit
or sometimes
listen EADDRINUSE: address already in use 127.0.0.1:9877
which breaks the test (exits the process).
I can't really pinpoint the problem, so I am guessing that I am not running the tests correctly.
On the server I use the Server class to initialize Karma and then call start() on it. When the callback function passed to the Server constructor is invoked, I assume the server has stopped and I can start over with the next component. But, judging by the errors I am getting, that is clearly not the case.
So the question is: what is the right way to run Karma tests in a loop, one by one, using the Node API instead of the CLI?
Update
To be specific about how I am running the tests.
I am:
Creating the configuration by calling config.parseConfig with the component's Karma config file as the argument
Calling new Server(opts, (code) => {}), where opts is the configuration generated in step 1
Adding listeners for browser_complete and browser_error to generate a report and store it in the data store
Cleaning up (removing the reference to the server) when the constructor callback is called
Getting the next component from the queue and going back to step 1 (a sketch of this loop follows the list)
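For reference, a minimal sketch of that loop, assuming Karma's public Node API (the report handling is a placeholder, and the exact parseConfig signature depends on the Karma version; newer releases also offer a promise-based form):

const { config, Server } = require('karma');

function runOne(karmaConfigPath) {
  return new Promise((resolve) => {
    const opts = config.parseConfig(karmaConfigPath, { singleRun: true });
    const server = new Server(opts, (exitCode) => {
      // constructor callback: this run has finished
      resolve(exitCode);
    });
    server.on('browser_complete', (browser, result) => {
      // generate the report and store it in the data store
    });
    server.on('browser_error', (browser, error) => {
      // store the error report
    });
    server.start();
  });
}

async function runAll(queue) {
  for (const configPath of queue) {
    await runOne(configPath); // one component at a time
  }
}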
To answer my own question:
I have moved the whole logic of executing a single test into a child process. After a test finishes, but before the next one is run, I make sure the child process is killed. No more error messages show up.
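A rough sketch of that approach, where runner.js is a hypothetical script that performs a single Karma run for the config path it receives and signals when it is done:

const { fork } = require('child_process');

function runInChild(karmaConfigPath) {
  return new Promise((resolve) => {
    const child = fork('./runner.js', [karmaConfigPath]);
    child.once('message', () => child.kill()); // make sure the child is gone before the next run
    child.once('exit', (code) => resolve(code));
  });
}

async function runAll(queue) {
  for (const configPath of queue) {
    await runInChild(configPath); // the previous child has fully exited at this point
  }
}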
Related
I use SoapUI for testing a REST API. I have a few test cases which are independent of each other and can be executed in any order.
I know that aborting the whole run can be prevented by disabling the Fail on error option, as shown in this answer on SO. However, it can happen that TestCase1 prepares certain data first and then breaks in the middle of its run because an assertion fails or for some other reason. TestCase2 then starts running and tests other things, but because TestCase1 did not execute all of its steps (including the ones that clean up), TestCase2 may fail.
I would like to run all of the tests even if a certain test fails, but I also want to execute a number of test-case-specific steps whenever a test fails. In programming terms, I would like a finally: each test case has a number of steps that are executed regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a TearDown Script at the test case level.
Even if a test step fails, the teardown script still runs, so it behaves much like a finally block.
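A minimal sketch of such a test-case-level TearDown Script (the logging is illustrative; the cleanup itself is whatever your test case needs):

// TearDown Script at test case level: runs whether the steps passed or failed
testRunner.results.each { stepResult ->
    log.info "${stepResult.testStep.name}: ${stepResult.status}"
}
// put the test-case-specific cleanup here, e.g. delete the data the case created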
Alternatively, you can create your own soft assertions, which will not stop the test case when they fail. For example:
def err = []
Then, whenever there is an error, you can collect it:
err.add("Values did not match")
At the end, log what was collected and assert that nothing went wrong:
log.info err
assert err.size() == 0, "There were errors: ${err}"
This way you can capture errors as they happen and do the actual assertion at the end, or alternatively fall back to the test-case-level teardown script described above.
Related

The system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The tests being executed live in a test assembly (a .dll), and a lot of them call an API.
In the problematic method there are two API calls with an await on them: one writes a record to an external interface, and the other extracts all records and reads the last one from that interface, both via the API. The test simply checks whether writing the last record succeeded in an end-to-end context, which is why there is both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also tested it manually by running vstest.console.exe from the command line, and the expected results always come out as well.
However, when it runs through the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we printed the list returned by the 'read' part. It turns out the last record we inserted isn't in the data we pulled, yet if we check the external interface via a different method, the write actually happened. What gives? Why is vstest getting what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that pass, none of the Console.WriteLine output appears in the logs; it only shows up for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE those lines are printed! And even then, the printing should happen after reading the list of records, yet when the prints do appear we are still missing the record we just wrote.
Is there some bottom-to-top behaviour we're missing here? It really seems as though vstest on VSTS is executing the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th-to-last), and we need them to run in that order because some of the later tests depend on the earlier ones succeeding.
Anything we're missing here? I'd post the source code, but there are a bunch of things I would need to scrub first.
Turns out we were sorely misunderstanding what await does. We're now using .Wait() on the culprit call instead, and will also go back through the other tests to check them for the same issue.
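For anyone hitting the same thing, the shape of the problem and of our fix looks roughly like this; IRecordApi and the method names are made up for illustration:

using System.Threading.Tasks;

public interface IRecordApi
{
    Task WriteRecordAsync(string record);
    Task<string[]> ReadAllRecordsAsync();
}

public static class WriteThenReadExample
{
    public static void Run(IRecordApi api)
    {
        // Broken: the async write is started but nothing waits for it to finish,
        // so the read below can run before the record is actually written.
        // api.WriteRecordAsync("new record");

        // Fix we applied: block until the write has really completed.
        api.WriteRecordAsync("new record").Wait();

        string[] all = api.ReadAllRecordsAsync().Result;
        // the newly written record should now be present in 'all'
    }
}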
Related

I'm trying to automate a lengthy process that can be broken down into several steps (say, steps 1-5).
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails at any one of the steps, rerunning the script should cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options aside from using a log file:
Use the registry
You can set a registry value to a number indicating which step you stopped on. This removes the need for a log file, though it is still a similar kind of 'external' storage.
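A rough sketch of the registry approach (the key path, value name and step names are arbitrary, and each step function is assumed to throw a terminating error when it fails):

$ErrorActionPreference = 'Stop'
$regPath = 'HKCU:\Software\MyRestartableScript'
if (-not (Test-Path $regPath)) { New-Item -Path $regPath -Force | Out-Null }

# read which step completed last (0 if the script has never run or finished cleanly)
$lastDone = (Get-ItemProperty -Path $regPath -Name LastCompletedStep -ErrorAction SilentlyContinue).LastCompletedStep
if (-not $lastDone) { $lastDone = 0 }

$steps = 'Step1', 'Step2', 'Step3', 'Step4', 'Step5'   # your step functions
for ($i = $lastDone; $i -lt $steps.Count; $i++) {
    & $steps[$i]                                        # a throw here leaves LastCompletedStep at $i
    Set-ItemProperty -Path $regPath -Name LastCompletedStep -Value ($i + 1)
}
Remove-Item -Path $regPath -Recurse                     # all steps done, clear the checkpoint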
Check the task status on each run
Depending on the tasks, you could have the script 'test' whether, for example, step 3 has already been completed, then check steps 4, 5, etc. until it finds one that still needs to run, and continue from there. This may be impossible for some tasks, or require a lot of overhead code for not much payoff.
Allow the user to continue from within the script.
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue and press Enter to re-run the previous block. This also makes it easy to report what failed.
The main thing here is that once a script quits, it needs an external source of information (or some other mechanism) to know what happened during its last run.
Related

I'm running in circles. I have a webpage that creates a huge file. The file takes forever to create, and the work is done in a subroutine.
What is the best way for my page to run this subroutine without waiting for the file to be created/processed? Are there any issues with Apache processes, since I'm doing this from a webpage?
The simplest way to perform this task is to use fork() and have the long-running subroutine run in the child process, while the parent returns to Apache. You indicate that you've tried this already, but without more information on exactly what the code looks like and what is failing, it's hard to help you move forward on this path.
Another option is to run a separate process that is responsible for managing the long-running task. Have the webpage send a unit of work to the long-running process over a local socket (or by creating a file with the necessary input data); your web script can then return immediately while the separate process takes care of completing the long-running task.
This way of decoupling execution is fairly common and is often called a "task queue" (when there is some mechanism in place for queuing requests as they come in). There are a number of tools out there that will help you design this sort of solution, but for simple cases with filesystem-based communication you may be fine without them.
I think you want to create a worker grandchild of Apache -- that is:
Apache -> child -> grandchild
where the child dies right after forking the grandchild, and the grandchild closes STDIN, STDOUT, and STDERR. (The grandchild then creates the file.) These are the basic steps in creating a daemon: a parent-less worker process unconnected with the webserver.
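A hedged sketch of that pattern in Perl, where build_huge_file stands in for whatever actually creates the file:

use strict;
use warnings;
use POSIX qw(setsid);

sub spawn_worker {
    my ($build_huge_file) = @_;   # coderef that does the slow work
    my $pid = fork();
    die "first fork failed: $!" unless defined $pid;
    if ($pid) {
        waitpid($pid, 0);         # reap the short-lived child, then return to Apache
        return;
    }

    # child: detach from the webserver session, then fork the real worker
    setsid();
    my $pid2 = fork();
    die "second fork failed: $!" unless defined $pid2;
    exit 0 if $pid2;              # child exits immediately after forking the grandchild

    # grandchild: close the handles tied to the request and do the work
    close STDIN;
    close STDOUT;
    close STDERR;
    $build_huge_file->();
    exit 0;
}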
Related

I'm a newbie to WF and rather lost. Here's what I have so far:
I've created a workflow service app (.xamlx) and added the needed variables.
I've created a custom NativeActivity that calls CreateBookmark from within Execute; the activity sits between the service's Receive and SendReply activities. (Ultimately it will actually do something besides creating the bookmark.)
The bookmark gets created just fine, but after stepping out of the Execute method nothing happens for one minute, until the service times out with the message "The request channel timed out while waiting for a reply after 00:00:59.9699970. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout." (I tried posting an image of the xamlx, but as a newbie I'm not allowed to; suffice it to say the workflow gets from my Receive into my custom native activity, but never as far as the SendReply.)
I assume I'm missing something rather fundamental, but I can't see what. I originally tried using NativeActivity<T> to return what I want, but it behaves the same way.
Found out what I was doing wrong: I needed to use the overload of CreateBookmark that takes a BookmarkOptions parameter and pass BookmarkOptions.NonBlocking.
Strangely, I did not find a single example anywhere that mentioned this.
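For anyone else who runs into this, a minimal sketch of such an activity (the bookmark name and callback are illustrative):

using System.Activities;

public sealed class CreateNonBlockingBookmark : NativeActivity
{
    // An activity that creates bookmarks must report that it can induce idle.
    protected override bool CanInduceIdle => true;

    protected override void Execute(NativeActivityContext context)
    {
        // NonBlocking lets the workflow continue on to the SendReply activity
        // instead of going idle and letting the request channel time out.
        context.CreateBookmark("MyBookmark", OnBookmarkResumed, BookmarkOptions.NonBlocking);
    }

    private void OnBookmarkResumed(NativeActivityContext context, Bookmark bookmark, object value)
    {
        // handle the resumed bookmark here
    }
}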