How to run celery and celerybeat from the same command

I have a project which has two tasks.py files. The structure looks something like this:
root/
    webproject/
        __init__.py
        models.py
        views.py
    celerytasks/
        __init__.py
        celeryconfig.py
        tasks.py
    scheduledJobs/
        celeryconfig.py
        tasks.py
Tasks in celerytasks are queued by web requests, and scheduledJobs contains, well, scheduled jobs for the website (deleting old files, etc.).
I have to run celery twice from the command line now. Our code is not in production yet, so I am using nohup to do this. My question: can I somehow run both with a single command?
I have just started using celery.

I use the worker -B arguments, but it's best to check the documentation: http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#starting-the-scheduler.
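For example, a minimal sketch of a combined invocation (assuming your Celery app instance lives in the celerytasks package; the module name is taken from your tree and may need adjusting):

celery -A celerytasks worker -B --loglevel=info

The -B flag embeds the beat scheduler inside the worker process, which is convenient for development; the documentation recommends running a dedicated beat process in production.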

Related

Distribute shell scripts using setuptools and pyproject.toml

I'm trying to distribute a shell script along with a Python package. Ideally, the shell script is installed when I run pip install my_package. I read from this SO answer that my expected behavior is exactly what the scripts keyword of setuptools.setup provides. E.g., the script my_script will be installed with the following setup.py script:
setup(
    ...
    scripts=['my_script'],
    ...
)
However, I cannot use the above method for two reasons:
1. The official doc does not mention this behavior. I don't know if I can keep doing it this way.
2. My whole project is built on pyproject.toml, without setup.py. Although pyproject.toml provides a [project.scripts] table, as explained in the setuptools official doc, the scripts can only be Python functions, not shell scripts.
For completeness, in my case, the shell script reads git status and sets environment variables, which are read from within my Python project. The shell script and my Python project are so tightly coupled that I would rather not split them into two projects.
I have also tried using a Python function to execute the shell script, e.g.
[project.scripts]
my_script = 'my_project:my_func'

def my_func():
    subprocess.run(...)
The problem with this solution is that every time I run my_script, my_project is loaded and the loading process is really slow.
Maybe a link in the comments leads to this information already. Anyway, I think it is worth posting that scripts = [...] in setup.py can be written in pyproject.toml as:
[tool.setuptools]
script-files = ["scripts/myscript1", "scripts/myscript2"]
However, this feature is deprecated. I hope the authors of the packaging tools will recognize the problem with shell scripts and deal with it.
Link: setuptools docs
I'm not exactly sure it will work for your case, but I solved this by creating a "shim" setup.py file (it has the added benefit of letting you install your project in editable mode).
It usually just calls setup(), but it is possible to pass the scripts argument:
"""Shim setup file to allow for editable install."""
from setuptools import setup
if __name__ == "__main__":
setup(scripts=["myscript"])
Everything else was loaded from pyproject.toml.
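For reference, a minimal pyproject.toml to pair with such a shim might look like the sketch below (the project name and metadata are hypothetical; setuptools combines the setup() arguments with the [project] metadata):

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "my_project"
version = "0.1.0"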

Run all files in a directory to measure coverage

I want to run coverage for all files in a directory.
For instance, I have the following directory structure:
root_dir/
    tests/
        test1.py
        test2.py
    code_dir/
There are some Python files in the tests directory. I want to run them together using coverage run and generate a report.
Individually, I can run them like this:
coverage run tests/test1.py
coverage run tests/test2.py
and generate a report.
How can I do this with a single command?
Thanks.
You should use a test runner to find and run those tests. Either pytest or python -m unittest discover will do that for you.
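For instance, a sketch with each runner (assuming your tests live under tests/ as in the question):

coverage run -m pytest tests/
coverage report

or, with the standard library runner:

coverage run -m unittest discover -s tests
coverage report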

How do I get pytest to do discovery based on module name, and not path

I'm looking at moving from unittest to pytest. One thing I like to do is run setup.py install and then run the tests from the installed modules, rather than directly from the source code. This way I catch any files I've forgotten to include in MANIFEST.in.
With unittest, I can get the test runner to do test discovery by specifying the root test module. e.g. python -m unittest myproj.tests
Is there a way to do this with pytest?
I'm using the following hack, but I wish there were a built-in, cleaner way.
pytest $(python -c 'import myproj.tests; print(myproj.tests.__path__[0])')
The "Tests as part of application" section of the pytest good practices documentation says that if your tests are available at myproj.tests, run:
py.test --pyargs myproj.tests
With pytest you can instead specify the path to the root test directory. It will run all the tests that pytest is able to discover. You can find more details in the pytest good practices documentation.
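For example (assuming the tests live in a tests/ directory; adjust the path to your layout):

pytest tests/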

How can I combine the PyDev unit test runner with Web2py?

I'm using Eclipse/PyDev on a Web2py application, and I'd like to create a launch configuration that runs a unit test using web2py.
Normally, Web2py wants you to run a unit test with a test runner script, like so:
python web2py.py -S testa -M -R testRunner.py
testRunner.py includes a main method that runs:
unittest.TextTestRunner(verbosity=2).run(suite)
However, in PyDev, the test running is managed outside of your source, in pysrc\runfiles.py.
PyDev's test runner doesn't even take -S, -M, and -R as arguments, and it has no way of passing them on to web2py.py, which it expects to be a suite of tests rather than a runner.
Is there a way to test Web2py using a PyDev unittest configuration, and if so, how?
My suggestion in this case is to use the pytest runner (configure it in the PyUnit preferences)... I haven't searched, but I bet there's a plugin for running web2py with pytest.

How does `rake` know where to look for Rakefiles?

I'm trying to better understand how rake works. I've looked on the rake website to see how it works but there isn't a clear explanation for how rake searches for Rakefiles and the steps it goes through in resolving dependencies. Can someone explain how rake works?
By default rake will look for one of these files under the directory you execute it from:
rakefile
Rakefile
rakefile.rb
Rakefile.rb
You can look at Rake's Application docs to see this list.
Additionally, any Ruby file (including other rakefiles) can be loaded with a standard Ruby require statement:
require 'rake/loaders/external-rakefile'
Alternatively, you can import them:
import 'rake/loaders/external-rakefile'
To make a set of Rake tasks available for use from any directory, create a .rake subdirectory within your home directory, and place the appropriate Rake files there. Any rake command with the -g option will use these global Rake files (read more here):
rake -g -T
Additionally, if the -g option is set, Rake will first try to load the files from the directory given by the RAKE_SYSTEM environment variable; if that is not set, it defaults to the user's home directory, i.e. ~/.rake/*.rake. These files are loaded/imported in addition to one of the default files listed above.
Otherwise it will load the first default file (from the list above), and additionally import all the .rake files from the rakelib directory (under the location you run rake from); alternatively, this directory can be specified using:
--rakelibdir=RAKELIBDIR or -R RAKELIBDIR: Auto-import any .rake files in RAKELIBDIR. (default is 'rakelib')
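As an illustration, a global task file might look like this sketch (the file name and task are hypothetical):

# ~/.rake/hello.rake
task :hello do
  puts "hello from a global rake task"
end

which you could then run from any directory with:

rake -g hello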