For some reason, --autoreload doesn't work even though pyinotify is installed.
Is there a way to tell all workers to restart and load the new version of the modules?
I tried:
celery multi restart \*
But all it does is create a bunch of processes with no task modules loaded.
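What I'm now considering instead (not yet verified, and the app name, worker name and paths below are placeholders) is starting and restarting the workers through celery multi with the full argument list, since restart apparently has to be given the same -A and options for the replacement processes to load the task modules:
celery multi start worker1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n.log
celery multi restart worker1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n.log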
We are using pytest-xdist to run pytest tests with the --forked flag (we are writing integration tests for code that uses Scrapy). However, we noticed that when the tests finish running, some of the created child processes remain alive, which causes our GitLab build to hang.
The Python version we're using is 3.7.9.
I couldn't find other mentions of the issue online. Is anyone familiar with it? Are there any solutions/fixes/workarounds?
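One workaround I'm considering (an untested sketch for the CI script; the test path is a placeholder) is to run the suite in its own process group and then kill whatever is left of that group once pytest exits, so the job can finish:
# run pytest in a fresh session so its PID is also its process-group ID
setsid pytest -n auto --forked tests/ &
PYTEST_PID=$!
wait "$PYTEST_PID"; RC=$?
# a negative PID signals the whole group, i.e. any forked children still alive
kill -- "-$PYTEST_PID" 2>/dev/null || true
exit "$RC"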
In Maximum RPM, under the section on the %pre install script, it mentions that it's rare to use the %pre script. In fact, it further states that (at that time, anyway) none of the 400+ RedHat packages used the %pre script.
I would think the %pre script would be the ideal location to stop the existing service before installing files over top of the currently installed version.
Is my thinking wrong? How is it that RedHat got away with never using %pre during upgrade for this purpose in any of their service packages?
Yes, %pre is used much more commonly than it was when "Maximum RPM" was written in 1997. That doesn't change the fact that %pre should be used "rarely".
The reason is that %pre prevents installation (and may cause an entire transaction to fail if there are needed install-time dependencies). Stopping a service in %pre and restarting it in %post also opens a larger window where the service is not running than simply restarting the service in %post does.
The already-running service typically reads its configuration files only on startup (so rpm can replace files while the daemon is running), and running executables hold a reference count on the file system, so they continue to run even if the file that was executed has been removed or replaced by a newer package.
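As a rough illustration of that last point (the service name below is made up, not from any real package), the upgrade can be handled with no %pre at all, restarting only after the new files are in place:
%post
# $1 is the count of installed instances: 1 = fresh install, 2 or more = upgrade
if [ "$1" -ge 2 ]; then
    /sbin/service mydaemon condrestart >/dev/null 2>&1 || :
fi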
Well, I went and did the research I should have done before asking this question. I downloaded several service packages from RedHat 7.1 and ran:
rpm -qp --scripts <package-name>.rpm
I found out 1) that it's no longer true that %pre is unused (even among the few packages I checked, a couple of them used %pre), and 2) that most services apparently just let rpm overwrite their data files and binaries during the upgrade, and then use the upgrade branch of the %postun (post-uninstall) scriptlet to restart (or try-restart) the service.
I would have thought this rather unsafe: while you're writing over data files (especially) during the upgrade, the old running service might get confused. It seems to me it's ultimately safer to stop the service in %pre during the upgrade and start it again in %postun... but that's just me.
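For reference, the typical scriptlet pattern along those lines looks roughly like this (a sketch with a placeholder service name, not copied from any specific package): the service is stopped only when the package is really erased, and on upgrade it is restarted after the old package's files are gone:
%preun
# $1 is 0 on erase, 1 or more on upgrade
if [ "$1" -eq 0 ]; then
    /sbin/service mydaemon stop >/dev/null 2>&1
    /sbin/chkconfig --del mydaemon
fi

%postun
if [ "$1" -ge 1 ]; then
    /sbin/service mydaemon condrestart >/dev/null 2>&1 || :
fi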
I have a Gradle build that runs a few tests on our application. Currently, the tests that store assets in MongoDB fail if the developer forgets to run mongod first. So I want any build that uses MongoDB to fail with a message that clearly tells the developer to start MongoDB. Ideally, we would later start MongoDB from Gradle itself.
I already found this nice article about how to check whether MongoDB is running under Linux, which is quite simple. I am sure something similar can be done under Windows using tasklist /FI "IMAGENAME eq mongod", etc. But I need to know how to use this correctly in Gradle.
Is there a cross-platform way to check from Gradle whether a service or normal process is running?
The suggestion provided by Orid to use the Gradle Mongo Plugin should work if you set the necessary Gradle tasks to be dependent on a startManagedMongoDb task.
While that may seem to be the easiest way, it may not match how MongoDB will be used in non-development environments or on a continuous-integration build server, where the MongoDB service will already be running.
A very simple solution would be to add the MongoDB check to the top of a customized gradlew.bat (and of the gradlew Bash script if it will be run on a *nix operating system).
Another simple solution that wouldn't require changing the gradlew.bat script would be to create your own MongoDB-checking script that then calls gradlew.bat, passing the command-line arguments along. I'm not sure whether Windows has an equivalent of Bash's "$@" for all positional arguments, but looping through the arguments with SHIFT and %1 can be used to rebuild the gradlew.bat command line.
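On the *nix side such a wrapper can be very small (the script name and the pgrep check below are just a sketch); a Windows .bat counterpart would do the same with tasklist and SHIFT:
#!/bin/sh
# hypothetical ./gradlew-checked: refuse to build when mongod isn't running,
# otherwise hand every argument straight through to the real wrapper
if ! pgrep -x mongod >/dev/null 2>&1; then
    echo "MongoDB is not running - start mongod before building." >&2
    exit 1
fi
exec ./gradlew "$@"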
I have created a Python virtual environment to run an application, using these instructions:
git clone http://github.com/MediaCrush/MediaCrush && cd MediaCrush
Create a virtual environment
Note: you'll need to use Python 2. If Python 3 is your default python interpreter (python --version), add --python=python2 to the virtualenv command.
virtualenv . --no-site-packages
Activate the virtualenv
source bin/activate
Install pip requirements
pip install -r requirements.txt
Install coffeescript
npm install -g coffee-script
Configure MediaCrush
cp config.ini.sample config.ini
Review config.ini and change any details you like. The default place to store uploaded files is ./storage, which you'll need to create (mkdir storage) and set the storage_folder variable in the config to an absolute path to this folder.
Compile static files
If you make a change to any of the scripts, you will need to run the compile_static.py script.
python compile_static.py
Start the services
You'll want to make sure Redis is running at this point. It's probably best to set it up to run when you boot up the server (systemctl enable redis.service on Arch).
MediaCrush requires the daemon and the website to be running concurrently to work correctly. The website is app.py, and the daemon is celery. The daemon is responsible for handling media processing. Run the daemon, then the website:
celery worker -A mediacrush -Q celery,priority
python app.py
This runs the site in debug mode. If you want to run this on a production server, you'll probably want to run it with gunicorn, and probably behind an nginx proxy like we do.
gunicorn -w 4 app:app
I am trying to set this up on a remote server which is hosting 2 other websites.
I haven't actually got it to work properly yet, but what I don't understand is: does this virtual environment have to be running continuously?
If I close my remote connection or exit the environment, does the application cease to function?
And if not, how do I exit the virtual environment and continue to work on the server?
The virtual environment isn't something that needs to be running. It's basically a directory where Python libraries and executables can be installed, and a handful of environment variables to ensure that:
new libraries are installed in the virtual environment;
when a Python program looks for a library, it looks in the virtual environment;
when the system looks for a program to run, it looks in the virtual environment first.
One of the things that happens when you activate the virtual environment is that it defines a shell function called deactivate which unsets all of those environment variables. So, to get out of the virtual environment, you just type deactivate.
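For example (paths are illustrative):
source bin/activate   # puts the venv's bin/ first on PATH and sets VIRTUAL_ENV
which python          # now resolves to .../MediaCrush/bin/python
deactivate            # shell function defined by activate; restores the old PATH
which python          # back to the system interpreter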
If I close my remote connection or exit the environment, does the application cease to function?
It depends on how you've started your application. If you just launch it from the command line, then when you close your connection the application will be stopped. Typically you want to use a service like upstart to start and manage your application (the particular service manager you choose usually depends on your server's OS). When you configure that service, you'll want to make sure it runs source $your_environment_dir/bin/activate before starting your app, so that your app runs in the virtual environment.
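If you need something quick while you set that up, a rough sketch (paths are placeholders, and it assumes celery and gunicorn are installed inside the virtualenv) is to start both processes with nohup using the virtualenv's own executables, which works without activating it first:
cd /path/to/MediaCrush
nohup bin/celery worker -A mediacrush -Q celery,priority > celery.log 2>&1 &
nohup bin/gunicorn -w 4 app:app > gunicorn.log 2>&1 &
Because those executables live in the virtualenv's bin/ directory, they use the virtualenv's Python automatically; a proper service manager is still the more robust option.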
So I've been playing with Akka Actors for a while now, and have written some code that can distribute computation across several machines in a cluster. Before I run the "main" code, I need to have an ActorSystem waiting on each machine I will be deploying over, and I usually do this via a Python script that SSH's into all the machines and starts the process by doing something like cd /into/the/proper/folder/ and then sbt 'run-main ActorSystemCode'.
I run this Python script on one of the machines (call it "Machine X"), so I will see the output of SSH'ing into all the other machines in my Machine X SSH session. Whenever I do run the script, it seems all the machines are re-compiling the entire code before actually running it, making me sit there for a few minutes before anything useful is done.
My question is this:
Why do they need to re-compile at all? The same JVM is available on all machines, so shouldn't it just run immediately?
How do I get around this problem of making each machine compile "its own copy"?
sbt is a build tool, not an application runner. Use sbt-assembly to build an all-in-one jar, put that jar on each machine, and run it with the scala or java command.
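For example (the jar name, Scala version directory and target path are placeholders, and this assumes the sbt-assembly plugin is already added to the project):
sbt assembly                                      # build the fat jar once, on one machine
scp target/scala-2.11/myapp-assembly-1.0.jar machineX:/opt/myapp/
ssh machineX 'java -cp /opt/myapp/myapp-assembly-1.0.jar ActorSystemCode'   # no sbt, no recompile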
It's usual for a cluster to have a single partition mounted on every node (via NFS or Samba). You just need to copy the artifact onto that partition and it will be directly accessible on each node. If that's not the case, you should ask your sysadmin to set it up.
Then you will need to launch the application. Again, most clusters come with MPI, and the tools mpirun (or mpiexec) are not restricted to real MPI applications: they will launch any script or program you want on several nodes.
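For instance, assuming the assembly jar sits on a shared mount at /shared and hosts.txt lists the nodes (both of these are assumptions, not part of the original setup):
# mpirun is used here purely as a remote launcher; the program itself is not an MPI app
mpirun --hostfile hosts.txt -np 4 java -cp /shared/myapp-assembly-1.0.jar ActorSystemCode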