pytest --forked flag processes don't die causing GitLab build to hang - pytest

We are using pytest-xdist to run pytest tests with the --forked flag (we are writing integration tests for code that uses Scrapy). However, we noticed that when the tests finish running, some of the created child processes remain alive, which causes our GitLab build to hang.
The Python version we're using is 3.7.9.
I couldn't find other mentions of the issue online. Is anyone familiar with it? Are there any solutions/fixes/workarounds?
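A workaround worth trying (a minimal sketch only, assuming psutil is available in the test environment; it is not something pytest-xdist provides) is to reap any leftover child processes in a session-finish hook in conftest.py:

# conftest.py -- kill child processes that outlive the forked test runs
import os
import signal

import psutil


def pytest_sessionfinish(session, exitstatus):
    parent = psutil.Process(os.getpid())
    children = parent.children(recursive=True)
    for child in children:
        try:
            child.send_signal(signal.SIGTERM)
        except psutil.NoSuchProcess:
            pass
    # Give them a moment to exit, then force-kill any stragglers.
    _, alive = psutil.wait_procs(children, timeout=5)
    for child in alive:
        try:
            child.kill()
        except psutil.NoSuchProcess:
            pass

With pytest-xdist this hook should also run in each worker process, so it can catch children forked there as well; anything already re-parented away from the workers may still need a hard timeout on the CI job itself.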

Related

Best practice for invoke-ing a tool that internally calls `exec`?

pipenv uses click, and uses click's invoke method in its unit test suite to test itself.
This does not work well when unit testing the pipenv run command, which internally uses execve. In the context of a single-threaded test run this means our test runner (pytest) process gets completely replaced by whatever we're exec-ing, and the test suite immediately and silently exits.
(This bug on the pipenv issue tracker talks about this in more detail: https://github.com/pypa/pipenv/issues/4909)
Is there a best practice for how to handle situations like this with click? Should we not be using click's invoke at all, and instead use regular old subprocess? Or perhaps invoke could optionally run the command in a subprocess rather than the current process? Or maybe there's a best practice for CLI apps that use click to detect whether they're being run via invoke and avoid doing anything in the exec family (this is basically what pipenv does right now: if the CI env var is set, then pipenv run spawns a subprocess rather than calling execve).
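For the exec case specifically, one option (a sketch only; the exact pipenv invocation below is illustrative, not taken from pipenv's test suite) is to skip click's in-process invoke for those tests and run the CLI in a real child process, so the execve happens in that child instead of in pytest:

# Run the CLI in a child process so execve() replaces that child, not pytest.
import subprocess
import sys


def test_run_survives_exec():
    result = subprocess.run(
        [sys.executable, "-m", "pipenv", "run", "python", "-c", "print('ok')"],
        capture_output=True,
        text=True,
        timeout=60,
    )
    assert result.returncode == 0
    assert "ok" in result.stdout

The trade-off is that these tests no longer exercise the command in-process (no coverage of the invoked code by default, and slower startup), so it probably makes sense to reserve this for the handful of commands that actually exec.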

Using nx run-many shows: Another process, with id ..., is currently running ngcc

We have an NX monorepo with 10+ Angular apps and 150+ libs. Our CI server runs all builds in Docker containers using Ubuntu. We store and share the computation cache across all build agents. We currently use nx affected:apps to detect which apps need to be built, and nx affected:libs to create a list of affected libraries for each app. This approach enables us to run distributed builds. We now have a dedicated build plan for each app and its dependent libraries.
So, we are using nx affected, computation caching and distributed builds but we are still struggling with long build durations because of the large number of tests we need to run.
The next step we took was to use nx run-many to run those tests in parallel but this did not work for us. Even with 2 parallel processes we see the following error:
Another process, with id ..., is currently running ngcc. Waiting up to 250s for it to finish.
We have tried all the workarounds without any success.
If I run the same command inside the same Docker container, but on my local machine, everything works fine.
So, instead of reducing the build time, this approach is adding to the total build duration (if we want to run 4 parallel processes we need to wait for 16min before the tests actually start).
Any ideas why this is happening?
I had a similar problem with NX throwing "ngcc is already running".
What helped me was setting the --parallel flag, going
from:
npx nx run-many --target=build --prod --all
to:
npx nx run-many --target=build --prod --all --parallel=1
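Another workaround I've seen suggested (an assumption on my part, not something confirmed in this thread) is to run ngcc once up front so the parallel tasks find the compiled entry points already in place, for example in the CI script before the run-many call:
npx ngcc
npx nx run-many --target=test --all --parallel=2
That keeps the parallelism for the actual targets while avoiding the lock contention on ngcc itself.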

Jodconverter: Fails randomly when running tests with libreoffice (docker)

While running conversions in test suites using jodconverter, it would randomly crash and the tests failed.
We are using LibreOffice with jodconverter, running the tests in Docker. It took too much time to figure this out, so I created this question.
Solution :
Use -PuseLibreOffice with the test command to signal jodconverter to use the LibreOffice libraries. The default is OpenOffice.
./gradlew test -PuseLibreOffice

How to integrate Perl automation test script with TeamCity build

I have a Rest API. I wrote my test automation in Perl which sends curl commands. I want to integrate the tests with TeamCity build so that any change in the code will be pulled, installed in a machine and the tests will be run. If all the tests pass then only the build will be green in TeamCity.
Now I don't know how to integrate Perl with TeamCity. Is there any plugins available for this?
You can use the TeamCity plugin for Perl to integrate your Perl tests with TeamCity. If you use this:
The test results are displayed in a nice TeamCity Tests tab with a breakdown of successful, failed and ignored tests.
You can go into the history of tests to know exactly when a change started breaking someone's tests.
You get log output for each test, which is useful for debugging when you have multiple tests.
The documentation on the plugin's CPAN page has good examples of how to implement this.
You can use the Command Line Runner to execute a Perl script. If it returns a non-zero exit code the build will fail. See https://confluence.jetbrains.com/display/TCD8/Configuring+Build+Steps:
The build step status is considered failed if the build process returned a non-zero exit code and the Fail build if build process exit code is not zero build failure condition is enabled (see Build Failure Conditions); otherwise build step is successful.
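If you go the Command Line Runner route and still want per-test reporting, TeamCity also understands "service messages" printed to stdout; the format is language-agnostic, so a Perl script can print the same lines. A small sketch of the message format (in Python purely for illustration; the test names are made up):

def tc_escape(value):
    # TeamCity service messages escape these characters with a leading '|'.
    return (value.replace("|", "||").replace("'", "|'")
                 .replace("\n", "|n").replace("\r", "|r")
                 .replace("[", "|[").replace("]", "|]"))


def report_test(name, failed=False, message=""):
    print("##teamcity[testStarted name='%s']" % tc_escape(name))
    if failed:
        print("##teamcity[testFailed name='%s' message='%s']"
              % (tc_escape(name), tc_escape(message)))
    print("##teamcity[testFinished name='%s']" % tc_escape(name))


report_test("GET /users returns 200")
report_test("POST /users rejects a bad payload", failed=True,
            message="expected 400, got 500")

With messages like these on stdout, the Tests tab gets populated even from a plain command-line build step.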

How to deactivate django plugin for some tests?

I'm running some tests for Django, and some other tests for the website using Selenium.
My testing framework of choice is the amazing pytest.
For testing Django I've installed the pytest-django plugin, and the Django tests run as expected; however, now I'm back to my previous tests that don't need the Django plugin.
I start tests and the Django plugin is picked up automatically.
I've checked the documentation and found the article explaining how to disable/deactivate plugins, however when I run this command:
py.test -p no:django
I get an error that my "DJANGO_SETTINGS_MODULE" is not on sys.path.
Also, commands like:
py.test --traceconfig
or
py.test --version
throw me the same error.
It looks like the Django plugin hooks in too deep. Why is it invoked when I'm just checking the version or the installed plugins?
QUESTION: Is there any way to temporarily deactivate this plugin without uninstalling it?
This should work. When I install pytest-2.3.4 and run py.test -p no:django --version I don't get the DJANGO_SETTINGS_MODULE issue. I only get it when I leave out the -p no:django disabling.
If it doesn't work, please link to a full trace on a pastebin.
IIRC this is because of a combination of things:
On startup py.test looks for the setuptools entry point "pytest11"
Entrypoints get imported before they get activated or deactivated
pytest-django (as released, currently 1.4) does a load of Django imports upfront
A lot of Django needs the settings module configured even at import time
Unfortunately this is unavoidable in the released version of pytest-django. And the answer originally was: no, run the pytest-django and other tests in different virtualenvs.
However, it is also the reason we started work on a version of the plugin which avoids these problems. What I consider the best version right now is the pytest23 branch at https://github.com/flub/pytest_django. This version is pretty feature-complete, certainly compared to the released version; it just needs a little more polishing, mainly on the tests and documentation.
I believe/hope that within the next few weeks this branch will be merged and released; I just need to get Andreas to have a look through and agree. I consider it certainly stable enough to start using.
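For reference, the import-before-deactivation behaviour comes from how plugins register themselves. A generic setup.py sketch of the "pytest11" entry point (not pytest-django's actual packaging, just an illustration of the mechanism):

from setuptools import setup

setup(
    name="pytest-example-plugin",
    py_modules=["pytest_example_plugin"],
    # pytest discovers plugins through the "pytest11" entry point group;
    # at the time of this answer the plugin module was imported before
    # -p no:NAME had a chance to deactivate it.
    entry_points={
        "pytest11": [
            "example = pytest_example_plugin",
        ],
    },
)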