With pytest, is it possible to get coverage information from a subprocess run by Celery?

I am using Pytest to test a Python application with:
pytest -s --cov=myApp
However, part of my app runs asynchronously with Celery. The test module runs the Celery process properly, but I don't get any coverage information for it.
Is it possible to get coverage from a process run by Celery?
I had a look at Celery testing, but I don't want to test the asynchronous process directly/separately, because I want to check how a process in myApp performs other actions with the task ID.
I also added the task module to --cov, but I still don't get any coverage.
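For context, coverage.py documents a general pattern for measuring code in subprocesses, and a separately started Celery worker is exactly that. A minimal sketch of the pattern, assuming the worker process inherits the test environment (the file names follow the coverage.py docs; myApp is the package from the question):

# .coveragerc -- each process writes its own data file
[run]
parallel = true
source = myApp

# sitecustomize.py -- must be importable in the worker process;
# starts coverage in any Python process where COVERAGE_PROCESS_START is set
import coverage
coverage.process_startup()

With COVERAGE_PROCESS_START=/path/to/.coveragerc exported before the Celery worker starts, each process leaves its own .coverage.* data file, and coverage combine merges them after the run. If the worker is spawned from the test process via multiprocessing instead, concurrency = multiprocessing in the [run] section is the documented switch.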

Related

Get test execution logs during a test run by NUnit Test Engine

We are using NUnit Test Engine to run tests programmatically.
It looks like after we add FrameworkPackageSettings.NumberOfTestWorkers to the runner code, the test run for our UI tests hangs during execution. I'm not able to see at what time or event the execution hangs, because the test runner returns test result logs (in XML) only when the entire execution ends.
Is there a way to get test execution logs for each test?
I've added InternalTraceLevel and InternalTraceWriter, but these logs are something different (BTW, it looks like ParallelWorker#9 even hangs when writing to the console :) )
_package.AddSetting(FrameworkPackageSettings.InternalTraceLevel, "Debug");
var nunitInternalLogsPath = Path.GetDirectoryName(Uri.UnescapeDataString(new Uri(Assembly.GetExecutingAssembly().CodeBase).AbsolutePath)) + "\\NunitInternalLogs.txt";
Console.WriteLine("nunitInternalLogsPath: "+nunitInternalLogsPath);
StreamWriter writer = File.CreateText(nunitInternalLogsPath);
_package.AddSetting(FrameworkPackageSettings.InternalTraceWriter, writer);
The result file, with default name TestResult.xml is not a log. That is, it is not a file produced, line by line, as execution proceeds. Rather, it is a picture of the result of your entire run and therefore is only created at the end of the run.
InternalTrace logs are actual logs in that sense. They were created to allow us to debug the internal workings of NUnit. We often ask users to create them when an NUnit bug is being tracked. Up to four of them may be produced when running a test of a single assembly under nunit3-console:
1. A log of the console runner itself.
2. A log of the engine.
3. A log of the agent used to run tests (if an agent is used).
4. A log received from the test framework running the tests.
In your case, #1 is not produced, of course. Based on the content of the trace log, we are seeing #4, triggered by the package setting passed to the framework. I have seen the situation where the log is incomplete in the past but not recently. The logs normally use auto-flush to ensure that all output is actually written.
If you want to see a complete log of #2 (the engine log), set the WorkDirectory and InternalTrace properties of the engine when you create it.
However, as stated, these logs are all intended for debugging NUnit, not for debugging your tests. The console runner produces another "log" even though it isn't given that name. It's the output written to the console as the tests run, especially that produced when using the --labels option.
If you want some similar information from your own runner, I suggest producing it yourself. Create either console output or a log file of some kind, by processing the various events received from the tests as they execute. To get an idea of how to do this, I suggest examining the code of the NUnit3 console runner. In particular, take a look at the TestEventHandler class, found at https://github.com/nunit/nunit-console/blob/version3/src/NUnitConsole/nunit3-console/TestEventHandler.cs

Robot Framework: Is there a way of checking the report.html although the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (manually started).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html
If you stop the run, it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want the files output.xml, log.html and report.html).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created when you run the tests, so when you break the run you will probably not have all the resources you need.
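For reference, a minimal rebot invocation (assuming the default output file name) rebuilds log.html and report.html from an existing output.xml:

rebot output.xml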
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test will be stopped automatically and you should have all reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes

How to run locust in pytest

I have many test cases written with pytest. I am now going to use Locust to write some concurrency-related test cases. I found that Locust needs to be started from the command line first, and then some parameters entered in the web user interface to execute these cases. Can I execute Locust directly through pytest code? If an error is reported during execution, the test case should be terminated directly and marked as failed. I would like to run all test cases (the ones I wrote before and the ones written with Locust) using the pytest command directly.
You can use Locust as a library to run it via code instead of command line.
https://docs.locust.io/en/stable/use-as-lib.html
Then for doing any pytest asserts, you could use Locust event hooks.
https://docs.locust.io/en/stable/extending-locust.html
I haven't done this with pytest, but I think it should work.
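A rough sketch of how that could look as a pytest test (untested; the user class, host and timings are hypothetical, while the Environment/runner calls are the ones from the use-as-lib docs linked above):

import gevent
from locust import HttpUser, task, between
from locust.env import Environment

class QuickUser(HttpUser):
    host = "http://localhost:8080"  # hypothetical target service
    wait_time = between(0.1, 0.5)

    @task
    def index(self):
        self.client.get("/")

def test_small_load():
    # Run Locust in-process instead of through the locust CLI
    env = Environment(user_classes=[QuickUser])
    runner = env.create_local_runner()
    runner.start(user_count=5, spawn_rate=5)
    gevent.spawn_later(10, runner.quit)  # stop the run after ~10 seconds
    runner.greenlet.join()
    # Fail the pytest test if any request failed during the run
    assert env.stats.total.num_failures == 0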
First and foremost, welcome to Stack Overflow, zhoujiazhi. I believe you will find the answer to your question in this already answered question.

Using Release Manager to kick off tests in MTM.

I am having an issue with kicking off test cases in Microsoft Test Manager from a script kicked off in Microsoft Release Manager. I can duplicate the issue when just running this command from PowerShell or the command line. Here is the script:
C:\CODE\TCM\TCM.exe run /create /title:"Chads Example Test Case (Run from PowerShell)" /planid:31 /suiteid:2743 /configid:67 /settingsname:"DevWelisRemoteExecution" /testenvironment:"STAR_Regression" /collection:"http://tfssrv64:8080/tfs/DefaultCollection" /teamproject:QA /builddir:"\\tfssrv64\Builds" /include
Running this script returns a test run ID with no errors. I can immediately look in MTM and see the test run has started. It has a state of "pending". It eventually (some 20 minutes later) fails with the error "The test automation associated with the following test case could not be found: [48667]. Run the test case again using a build that contains the binary with the test automation.".
Facts: I can run the same test successfully, with a successful completion, from Microsoft Test Manager (using the same settings specified in the script). Here are screenshots of the test runs from MTM.
Here is the MTM log of the successful test run.
Here is the same screen from the failed test run.
Here is the MTM log of the failed test run: the same test, run from the above script.
Both test runs are using the same build number. Both test runs use the same test settings and configuration.
Any help would be greatly appreciated.....

JUnit fails because the Server$StartJob is still running

I am running a JUnit test in Eclipse; all it does is try to start the server in debug mode. However, I am getting the following error:
"Job found still running after platform shutdown. Jobs should be canceled by the plugin that scheduled them during shutdown: org.eclipse.wst.server.core.internal.Server$StartJob "
There is a piece of code that is triggered from a thread, and it won't be hit until the JUnit tests are all complete. But once the tests are done, the workbench is closed, and hence the server startup job is never completed.
Is there a way to wait for the job to complete and then run the JUnit test?
Join the job family to make the executing thread block until all jobs in the family are done.
See org.eclipse.core.runtime.jobs.IJobManager#join(Object, IProgressMonitor) and org.eclipse.wst.server.core.ServerUtil#SERVER_JOB_FAMILY.