Why does pytest not discover my fixtures?

I'm having trouble with pytest not discovering fixtures. The situation is a bit messy, but I'll do my best to explain. I expect this question to be incomplete initially, so I'm hoping that we can work through it together.
I have a couple of git repos. In each of them, I use nox to (among other things) set up virtual environments and run tests with pytest.
Previously, I had logic in repo A that was unit tested there. This logic was then imported in repo B and used to define fixtures there. Those fixtures were then used in tests in repo B and elsewhere.
Now, I want to bundle the fixtures and the logic in repo A, test the fixtures there, and expose them as a pytest plugin for use in repo B.
The trouble I'm having is that the tests that I've moved from repo B to repo A cannot discover the fixtures.
What is the right way to do this?
Below is an attempt to show only the relevant parts of my current situation. Please ask if something is not clear.
repo A
-setup.py
-/repo_a
--__init__.py
--/pytest
---__init__.py
---config.py
---fixture.py
-/tests
--__init__.py
--conftest.py
--test_fixture.py
repo_a/pytest/fixture.py
import pytest
from repo_a.foo import bar

@pytest.fixture
def fixture_a():
    return bar
repo_a/pytest/config.py
pytest_plugins = ["repo_a.pytest.fixture.py"]
tests/conftest.py
pytest_plugins = ["repo_a.pytest.config"]
tests/test_fixture.py
def test_fixture_a(fixture_a):
    fixture_a.run()
When running pytest tests/test_fixture.py, this fails in the setup phase with
ImportError: Error importing plugin "repo_a.pytest.config": No module named 'repo_a.pytest'
I would have expected this if there was no __init__.py in the pytest folder, but there is.
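For reference, and only as an assumption about what "expose it as a plugin" could look like rather than a confirmed fix for the error above: the usual way to ship fixtures in an installable package is a pytest11 entry point in repo A's setup.py, which makes pytest load the plugin automatically once the package is installed, with no pytest_plugins chaining at all. (As an aside, entries in pytest_plugins are importable module paths, so the trailing .py in "repo_a.pytest.fixture.py" would itself be a problem.) A sketch, reusing the names from the layout above:

```python
# setup.py in repo A -- sketch of the standard pytest plugin
# registration via the "pytest11" entry point group. Names mirror
# the question's layout; this is illustrative, not a verified fix.
from setuptools import setup, find_packages

setup(
    name="repo-a",
    packages=find_packages(),
    # pytest discovers installed plugins through this entry point,
    # so repo B only needs to install repo A -- no pytest_plugins
    # declarations are required in repo B's conftest.py.
    entry_points={
        "pytest11": ["repo_a = repo_a.pytest.fixture"],
    },
)
```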

Related

GitHub: get a consolidated view of test cases from all the repos

I have 5-6 GitHub Python repos. These repos have Python test cases as well, which are run using pytest and GitHub Actions. I want to see the results of the test cases from all the repos in a single place/tool.
Please suggest a way to do this.
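One low-tech approach (an assumption, not something from the thread): have each repo's GitHub Action run pytest --junitxml=report.xml and upload the report as an artifact, then aggregate the collected files with a small script. A sketch using only the standard library; the artifacts/*/report.xml path is made up for illustration:

```python
# Sketch: aggregate pytest JUnit XML reports collected from several
# repos into one summary. The glob pattern is hypothetical.
import glob
import xml.etree.ElementTree as ET

def summarize(report_paths):
    """Return (total, failures, errors) across all reports."""
    total = failures = errors = 0
    for path in report_paths:
        root = ET.parse(path).getroot()
        # pytest writes either a <testsuite> root or a <testsuites>
        # wrapper, depending on version; handle both shapes.
        suites = [root] if root.tag == "testsuite" else list(root)
        for suite in suites:
            total += int(suite.get("tests", 0))
            failures += int(suite.get("failures", 0))
            errors += int(suite.get("errors", 0))
    return total, failures, errors

if __name__ == "__main__":
    reports = glob.glob("artifacts/*/report.xml")
    print(summarize(reports))
```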

How to prevent some files from being modified in GitHub?

Let's say I have a GitHub repository that has a config file for a CI/CD tool, e.g. Jenkins. In my CI/CD pipeline I have a unit test step, in which all unit tests inside the tests directory of the repository are run. Now, someone malicious who has access to the repository might add a malicious script inside the tests directory. Is there a way to tell GitHub to reject pushes that change the tests directory?
You don't let malicious people have write access to your repository. Git isn't the right solution here.

How to run tests using Buildbot without compiling/building the project

Hi, I have two questions regarding Buildbot:
1. I want to run the tests I wrote without compiling the project (I have a different mechanism for compiling/building it), but I can't find any way to configure that. The Buildbot documentation only explains how to build and then run tests; I want to skip the build part. Has anyone tried this?
2. How do I configure Buildbot to work with a local repository? I have a computer that syncs the repository with the main one once a day at night, and I want to run Buildbot on this computer so that it runs the tests locally. (The tests are a separate project within the same solution.)
My environment is: Win7, Visual Studio, a git repository.
Appreciate your help!
1) Buildbot just runs a sequence of steps, and they can be whatever you want. If you don't have a build step, you simply don't add one.
2) The Git step just passes the repourl it is given to git clone/git fetch, so if the repository is local, you can just pass it the path.
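Putting both points together, a master.cfg fragment along these lines would fetch from a local path and go straight to a test step, with no build step at all. This is a hedged sketch: the class names follow the Buildbot 0.8.x-era API, and the Windows paths and test runner command are invented for illustration:

```python
# master.cfg fragment (sketch): local repository, no compile step.
# Class names follow the Buildbot 0.8.x API; adjust for your version.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.steps.shell import ShellCommand

f = BuildFactory()
# "repourl" can be a plain local path; Git passes it straight
# to git clone/git fetch.
f.addStep(Git(repourl=r"C:\repos\main", mode="incremental"))
# Jump straight to the tests -- no build step is added.
f.addStep(ShellCommand(
    name="run tests",
    command=["run_tests.bat"],  # hypothetical test runner script
))
```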

Hudson: Can't email upstream committers when downstream builds fail

I have a set of unit test projects (one per customer), a set of config projects (one per customer), and a core Java project. The unit tests are just JUnit tests, the config projects are just customer-specific XML files, etc., and the core project is the runtime. So the unit tests are testing the specific config for each customer.
So my unit test projects depend upon the core project and their corresponding config project, all as Maven 2 dependencies.
Now, what I want to happen is: if a developer updates a config project in a way that breaks the unit test project, the build should fail, even if the unit test and core projects are unchanged.
However, even though Hudson has registered that the core and config projects are upstream of the unit test projects, it still only emails developers when the unit tests fail after a check-in to the unit test project itself.
I have tried using the "Blame Upstream Committers" plugin, and also the Email-ext plugin with the Committers and Culprits recipients enabled. But none of these work, even though I don't see why not.
One thing I could do is merge the unit test and config projects. This is a drastic move, as they like the customer config isolated, but it is possible. Still, I would like to know why the above doesn't work.
Thanks if you can help,
Justin
Did you have fingerprinting enabled when you tried the Blame plugin? (Sorry to post this as an answer, can't comment yet)
I am struggling with this same issue. According to the docs for the plugin, you need to ensure that fingerprinting is turned on for both the upstream and downstream projects, and that they fingerprint files that "Hudson (Jenkins) can determine came from an upstream build". The easiest way to do this is to fingerprint only the files that are built in the upstream project.
I think the files need to vary between builds in a unique way (i.e. change every build), as several people report that upstream projects from earlier builds get blamed (when it does work).
The above is true for either the Blame plugin or the Email-ext plugin with hudson.upstreamCulprits=true enabled.

How to scale buildbot in a company

I've been looking into Buildbot lately, and the lack of good documentation and sample configurations makes it hard to understand how Buildbot is commonly used.
According to the Buildbot manual, each buildmaster is responsible for one codebase. That means a company that wants to use Buildbot on, say, 10 projects needs to maintain 10 separate Buildbot installations (master/slave configurations, open ports, websites with output, etc.). Is this really the way things are done? Am I missing an option that creates a mash-up that is easy to maintain and monitor?
Thanks!
At my place of work we use Buildbot to test a single program over several architectures and versions of Python. I use one build master to oversee about 16 slaves. Each set of slaves pulls from a different repo and tests it against Python 2.X.
From my experience, it would be easy to configure a single build master to run a mash-up of projects. This might not be a good idea because the waterfall page (where the build slaves report results) can get very congested with more than a few slaves. If you are comfortable scrolling through a long waterfall page, then this will not be an issue.
EDIT:
The update command in master.cfg:
test_python26_linux.addStep(ShellCommand, name="update pygr",
    command=["/u/opierce/PygrBuildBot/update.sh", "000-buildbot", "ctb"],
    workdir=".")
000-buildbot and ctb are additional parameters that specify which branch and repo to pull from. The script update.sh is something I wrote to avoid an unrelated git problem. If you wanted to run different projects, you could write something like:
builder1.addStep(ShellCommand, name="update project 1",
    command=["git", "pull", "git://github.com/your_id/project1.git"],
    workdir=".")
(the rest of builder1 steps)
builder2.addStep(ShellCommand, name="update project 2",
    command=["git", "pull", "git://github.com/your_id/project2.git"],
    workdir=".")
(the rest of builder2 steps)
The two projects don't have to be related. Buildbot creates a directory for each builder and runs all the steps in that directory.
FYI, BuildBot 0.8.x supports several repositories on one master, simplifying things a bit.
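To make the one-master/many-projects setup concrete, a master.cfg can register one builder per project. The sketch below uses the 0.8.x-era BuilderConfig API as an assumption; project names, slave names, and repo URLs are invented, and c is the standard BuildmasterConfig dict that master.cfg defines:

```python
# master.cfg fragment (sketch): several unrelated projects on one
# master, one builder each. API names follow Buildbot 0.8.x.
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.steps.shell import ShellCommand

def make_factory(repourl, test_command):
    # Each builder gets its own directory, so unrelated projects
    # can share the master without interfering with each other.
    f = BuildFactory()
    f.addStep(Git(repourl=repourl, mode="incremental"))
    f.addStep(ShellCommand(name="test", command=test_command))
    return f

c['builders'] = [
    BuilderConfig(name="project1", slavenames=["slave1"],
                  factory=make_factory(
                      "git://github.com/your_id/project1.git",
                      ["python", "-m", "pytest"])),
    BuilderConfig(name="project2", slavenames=["slave2"],
                  factory=make_factory(
                      "git://github.com/your_id/project2.git",
                      ["python", "-m", "pytest"])),
]
```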