According to the Celery documentation:
librabbitmq
If you’re using RabbitMQ (AMQP) as the broker then you can install the librabbitmq module to use an optimized client written in C:
$ pip install librabbitmq
The ‘amqp’ transport will automatically use the librabbitmq module if it’s installed, or you can also specify the transport you want directly by using the pyamqp:// or librabbitmq:// prefixes.
I installed librabbitmq and changed the BROKER_URL setting so that it starts with librabbitmq://.
How do I verify that Celery is now using librabbitmq (i.e., that I did everything correctly)?
1. Uninstall librabbitmq.
2. Ensure that BROKER_URL starts with librabbitmq://.
3. Try to do something with Celery (e.g., python manage.py celery worker if using djcelery).
4. The command will fail with ImportError: No module named librabbitmq.
5. Reinstall librabbitmq.
6. Repeat step 3.
7. The command should now work without any problems.
It's not 100% conclusive, but it does give a reasonably good indication that Celery is using librabbitmq.
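A less disruptive check, if you'd rather not uninstall anything, is to confirm that the C extension actually imports in the same environment Celery runs in, and to ask Kombu (Celery's messaging layer) what the URL scheme resolves to. A minimal sketch; transport_cls is a Kombu Connection attribute, though its exact output can vary between versions:
# If this import fails, a librabbitmq:// BROKER_URL cannot work at all:
python -c "import librabbitmq; print('librabbitmq importable')"
# Ask Kombu which transport class the URL resolves to:
python -c "from kombu import Connection; print(Connection('librabbitmq://').transport_cls)"
Run both with the same Python interpreter that runs your workers; a mismatch there is the most common source of confusion.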
I am attempting to debug some C# / .NET 5 code in WSL 2 with Ubuntu on Windows. I have WSL 2 set up on Windows 10 and want to test out creating a Systemd service. Unfortunately, it appears Systemd is not enabled in WSL 2 by default, even though a standard Ubuntu install does have it enabled by default. Is there any way to get Systemd enabled in WSL 2?
Note: See footnote at bottom of this answer for background on this Community Wiki.
There are several possible paths to enabling Systemd on WSL2 (but not WSL1). These are summarized here, with more detail provided below.
Option 1: Upgrade WSL to the latest application release (if supported by your system) and opt-in to the Systemd feature
Option 2: Run a Systemd-helper script designed for WSL2
Option 3: Manually run Systemd in its own namespace
And while not part of this question, for those simply looking to run certain applications that require Systemd, there are alternatives:
On WSL1 and WSL2:
Alternative 1: SysVInit scripts (e.g. sudo service <service_name> start) where available
Alternative 2: Manually configuring and running the service
On WSL2-only:
Alternative 3: Docker
Should you enable Systemd in WSL?
First, consider whether you should or need to enable Systemd in WSL. Enabling Systemd will automatically start a number of background services and tasks that you really may not need under WSL. As a result, it will also increase WSL startup times, although the impact will be dependent on your system. Check the Alternatives section below to see if there may be a better option that fits your needs. For example, the service command may do what you need without any additional effort.
More detail on each option:
Option 1: Upgrade WSL to the latest application release (if supported by your system) and opt-in to the Systemd feature
Microsoft has now integrated Systemd support in the WSL2 application release (as opposed to the older "Windows feature" implementation).
Starting with WSL Application Release 1.0.0, this feature is available on both Windows 10 and Windows 11. Windows 10 users do need to be on UBR (update build revision) 2311 or later. The UBR is the last 4 digits of your full Windows build number (e.g. 10.0.19045.2311 for Windows 10 22H2). 2311 is installed with KB5020030, an optional Preview update, although if you are reading this later, it will likely be a later (non-Preview) monthly servicing update.
If you are on a supported Windows release, the WSL application with Systemd support can be installed:
Through the Microsoft Store (as "Windows Subsystem for Linux").
Or from the Releases page in the Github repo. To install a release manually:
1. Reboot (to make sure that WSL is not in use at all). A simple wsl --shutdown may work, but often will not.
2. Download the 1.0.0 (or later) release from the link above.
3. Start an Administrator PowerShell and run:
Add-AppxPackage <path.to>/Microsoft.WSL_1.0.0.0_x64_ARM64.msixbundle
wsl --version # to confirm
To enable, start your Ubuntu (or other Systemd) distribution under WSL (typically just wsl ~ will work).
sudo -e /etc/wsl.conf
Add the following:
[boot]
systemd=true
Exit Ubuntu and once again run:
wsl --shutdown
Then restart Ubuntu.
sudo systemctl status
... should show your Systemd services.
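If you want an extra sanity check beyond status, standard systemctl queries work once Systemd is PID 1 inside the distribution (note that is-system-running may report "degraded" on WSL even when things are largely working):
systemctl list-units --type=service --state=running
systemctl is-system-running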
Option 2: Run a Systemd-helper script designed for WSL2
There are a number of Systemd-enablement scripts available from various sources. Given the complexities involved in running Systemd under WSL, it is recommended that you:
Use one that is actively maintained
Attempt to understand, as much as possible, how they operate, and how they may impact other features and applications in your distribution(s) under WSL
When asking questions here or on any other site, disclose in the question which script you are using so that others can attempt to understand and/or reproduce your issue in the proper context
Several of the more popular projects that enable Systemd under WSL2 are:
Genie: 1.8k stars, last commit September 2022
Distrod: 1.4k stars, last commit July 2022
WSL2-Hacks: 1.1k stars, mostly instructional, with a supporting script example. Last commit January 2022
At the core, all of them operate on the same principles covered in the next option ...
Option 3: Manually run Systemd in its own namespace
One of the main issues with running Systemd in earlier versions of WSL is that both inits need to be PID 1. To get around this, it is possible to create a new namespace or container where Systemd can run as PID 1.
To see how this is done (at a very basic level):
Run:
sudo -b unshare --pid --fork --mount-proc /lib/systemd/systemd --system-unit=basic.target
This starts Systemd in a new namespace with its own PID mapping. Inside that namespace, Systemd will be PID 1 (as it must be, to function) and own all other processes. However, the "real" PID mapping still exists outside that namespace.
Note that this is a "bare minimum" command-line for starting Systemd. It will not have support for, at least:
Windows Interop (the ability to run Windows .exe)
The Windows PATH (which isn't necessary without Windows Interop anyway)
WSLg
The scripts and projects listed above do extra work to get these things working as well.
Wait a few seconds for Systemd to start up, then:
sudo -E nsenter --all -t $(pgrep -xo systemd) runuser -P -l $USER -c "exec $SHELL"
This enters the namespace, and you can now use ps -efH to see that systemd is running as PID 1 in that namespace.
At this point, you should be able to run systemctl.
And after proving to yourself that it's possible, it is recommended that you exit all WSL instances completely and then run wsl --shutdown. Otherwise, some things will be "broken" until you do. They can likely be "fixed", but that's beyond the scope of this answer. If you are interested, please refer to the projects listed above to see how they handle these situations.
Alternative 1: SysVInit scripts (e.g. sudo service <service_name> start) where available
In Ubuntu, Debian, and some other distributions on WSL, many of the common system services still have the "old" init.d scripts available to be used in place of systemctl with Systemd units. You can see these by using ls /etc/init.d/.
So, for example, you can start ssh with sudo service ssh start, and it will run the /etc/init.d/ssh script with the start argument.
Even some non-default packages such as MySql/MariaDB will install both the Systemd unit files and the old init.d scripts, so you can still use the service command for them as well.
On the other hand, some packages, like Elasticsearch, only install Systemd units. And some distributions only provide Systemd units for most (if not all) packages in their repositories.
Alternative 2: Manually configuring and running the service
For those services that don't have an init-script equivalent, it may be possible to run them "manually".
For simplicity, let's assume that the ssh init.d script wasn't available.
In this case, the "answer" is to figure out what the Systemd unit files are doing and attempt to replicate that manually. This can vary widely in complexity. But I'd start with looking at the Systemd unit file that you are trying to run:
less /lib/systemd/system/ssh.service
# Trimmed
[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755
Some of the less relevant lines have been trimmed to make it easier to parse, but you can man systemd.exec, man systemd.service, and others to see what most of the options do.
In this case, when you sudo systemctl start ssh, it:
Reads environment variables (the $SSHD_OPTS) from /etc/default/ssh
Tests the config, exits if there is a failure
Makes sure the RuntimeDirectory exists with the specified permissions. This translates to /run/sshd (from man systemd.exec). This also removes the runtime directory when you stop the service.
Runs /usr/sbin/sshd with options
So, if you don't have any environment-based config, you could just set up a script to:
Make sure the runtime directory exists. Note that, since it is in /run, which is a tmpfs mount, it will be deleted after every restart of the WSL instance.
Set the permissions to 0755
Start /usr/sbin/sshd as root
... And you would have done the same thing manually without Systemd.
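For instance, a minimal sketch of such a script, with paths taken from the unit file above (the script itself is hypothetical, and it drops -D so sshd daemonizes instead of running in the foreground):
#!/bin/bash
# Run as root. Replicates ssh.service by hand.
set -e
# EnvironmentFile=-/etc/default/ssh (the '-' means "ignore if missing"):
[ -f /etc/default/ssh ] && . /etc/default/ssh
/usr/sbin/sshd -t            # ExecStartPre: test the config first
mkdir -p /run/sshd           # RuntimeDirectory=sshd
chmod 0755 /run/sshd         # RuntimeDirectoryMode=0755
/usr/sbin/sshd $SSHD_OPTS    # ExecStart, minus -D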
Again, this is probably the simplest example. You might have much more to work through for more complex tasks.
Alternative 3: Docker
Many packages/services are available as Docker images. Docker typically runs very well under Ubuntu on WSL2 (specifically WSL2; it will not run on WSL1). If there's not a SysVinit "service" script for the service you are trying to start, there may very well be a Docker image available that runs in a containerized environment.
Example: Elasticsearch, as in this question.
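For instance, a sketch of running it as a container (the image tag is illustrative; pick a current one for real use):
docker run -d --name elasticsearch -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0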
Bonus #1: Doesn't interfere with other packages already installed (no dependency issues).
Bonus #2: The Docker images themselves pretty much never use Systemd, so you can often inspect the Dockerfile to see how the service is started without Systemd. For more information, see Alternative 2 above ("Manually configuring and running the service").
Microsoft recommends Docker Desktop for Windows for running Docker containers under WSL2.
Footnote This answer is being posted as a Community Wiki because it can apply to multiple Stack Overflow questions. It is originally based on answers to this Ask Ubuntu question. However, it is hoped that this wiki-answer can be continuously updated by the community as Systemd evolves on WSL.
This question has been chosen since:
It appears to be the most canonical, straightforward, "How do I enable Systemd on WSL?" question.
It is on-topic, as creating Systemd services is (or at least can be) unique to programming.
I run a CentOS 8 distro on Docker and I would like to have bash TAB completion with the dnf package manager. According to other posts, I did the following once my Docker container started:
dnf clean all && rm -r /var/cache/dnf && dnf upgrade -y && dnf update -y
and then
dnf install bash-completion sqlite -y
After doing that I restart the container but there is still no bash completion. I also tried to source directly the bash completion file by doing:
source /etc/profile.d/bash_completion.sh
but without any better effect.
Would you know what I am doing wrong?
You shouldn't need Bash completion in a Docker container. The only time you should be manually connecting to a shell inside a Linux container is to troubleshoot why the process running in the container is behaving abnormally. In fact, some container design advice might even go as far as suggesting you not include a shell inside your base OS at all!
The reason this isn't working for you is due to the way in which Linux containers operate. A Container is simply a namespaced process that is managed by the kernel installed on the Host OS. This process cannot be modified or interrupted or the container will be destroyed since the process will be sent a SIGTERM. When you attempt to source the bash_completion.sh script, you are attempting to pass new configuration arguments to your existing namespaced process managed by Docker.
If you really wanted to do this, the best way would be to create a new Docker container image based on the original CentOS 8 base image, install the bash-completion package in it, and append a line sourcing the completion script to your user's .bashrc file.
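As a sketch, assuming the stock centos:8 image and root as the user (image and tag names are illustrative):
# Write a minimal Dockerfile and build a derived image:
cat > Dockerfile <<'EOF'
FROM centos:8
RUN dnf install -y bash-completion sqlite && dnf clean all
RUN echo "source /etc/profile.d/bash_completion.sh" >> /root/.bashrc
EOF
docker build -t centos8-bashcomp .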
EDIT:
With regards to the additional question asked by the OP in the comments of this answer, I have added additional information below.
Why should not I need bash completion in a container
The reason you do not need bash completion in a container is because containers are not meant to be attached to with a shell. A container is simply supposed to be a single instance of a process running under specific configured criteria. Containers aren't meant to be used to create dev environments for you to connect to; they're meant to run processes and applications in software infrastructure.
Manually updating & installing packages
You mention that one of the first things you do when you spin up a container is install packages. This is also alarming to me because you are not supposed to be manually interacting with a container at all. This includes package installation. Instead, you should generate a new Container Image from the older Base Image and add additional RUN statements to the Dockerfile to update the system and install these desired packages.
Cannot believe it is not possible
It is possible if you create a new Dockerfile that purposely installs it on a new layer of the base image and produces a new container image for you to use. BUT the point is that you shouldn't be connecting to Docker containers in the first place to even get to a point where you could need something like bash completion!
Here is a great summary on the difference between a container and a virtual machine that might help clarify some of this for you. In a nutshell, containers are supposed to run, and only run, processes.
I have a 1-worker-node Spark HDInsight cluster. I need to use the scikit-neuralnetwork and vaderSentiment modules in Pyspark Jupyter.
Installed the library using commands below:
cd /usr/bin/anaconda/bin/
export PATH=/usr/bin/anaconda/bin:$PATH
conda update matplotlib
conda install Theano
pip install scikit-neuralnetwork
pip install vaderSentiment
Next, I open a pyspark terminal and I am able to successfully import the module.
However, when I open a Jupyter Pyspark notebook, the same import fails.
Just to add, I am able to import pre-installed modules from Jupyter, like "import pandas".
The installation goes to:
admin123@hn0-linuxh:/usr/bin/anaconda/bin$ sudo find / -name "vaderSentiment"
/usr/bin/anaconda/lib/python2.7/site-packages/vaderSentiment
/usr/local/lib/python2.7/dist-packages/vaderSentiment
For pre-installed modules:
admin123@hn0-linuxh:/usr/bin/anaconda/bin$ sudo find / -name "pandas"
/usr/bin/anaconda/pkgs/pandas-0.17.1-np19py27_0/lib/python2.7/site-packages/pandas
/usr/bin/anaconda/pkgs/pandas-0.16.2-np19py27_0/lib/python2.7/site-packages/pandas
/usr/bin/anaconda/pkgs/bokeh-0.9.0-np19py27_0/Examples/bokeh/compat/pandas
/usr/bin/anaconda/Examples/bokeh/compat/pandas
/usr/bin/anaconda/lib/python2.7/site-packages/pandas
sys.executable path is same in both Jupyter and terminal.
print(sys.executable)
/usr/bin/anaconda/bin/python
Any help would be greatly appreciated.
The issue is that while you are installing it on the headnode (one of the VMs), you are not installing it on all the other VMs (worker nodes). When the Pyspark app for Jupyter gets created, it gets run in YARN cluster mode, and so the application master starts in a random worker node.
One way of installing the libraries in all worker nodes would be to create a script action that runs against worker nodes and installs the necessary libraries:
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-customize-cluster-linux/
Do note that there are two Python installations in the cluster, and you have to refer to the Anaconda installation explicitly. Installing scikit-neuralnetwork would look something like this:
sudo /usr/bin/anaconda/bin/pip install scikit-neuralnetwork
The second way of doing this is to simply ssh into the workernodes from the headnode. First, ssh into the headnode, then figure out the workernode IPs by going to Ambari at: https://YOURCLUSTER.azurehdinsight.net/#/main/hosts. Then, ssh 10.0.0.# and execute the installation commands yourself for all worker nodes.
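As a sketch from the headnode (the IPs are illustrative; take the real ones from the Ambari hosts page):
# Install on each worker node, using the Anaconda pip explicitly.
# -t allocates a tty in case sudo prompts for a password:
for ip in 10.0.0.4 10.0.0.5; do
  ssh -t "$ip" 'sudo /usr/bin/anaconda/bin/pip install scikit-neuralnetwork vaderSentiment'
done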
I did this for scikit-neuralnetwork and while it does import correctly, it throws an error saying it cannot create a file in ~/.theano. Because YARN runs Pyspark sessions as the nobody user, Theano cannot create its config file. Doing a little bit of digging around, I see that there's a way to change where Theano writes/looks for its config file. Please also take care of that while doing the installation: http://deeplearning.net/software/theano/library/config.html#envvar-THEANORC
Forgot to mention, to modify an env var, you need to set the variable when creating the pyspark session. Execute this in the Jupyter notebook:
%%configure -f
{
"conf": {
"spark.executorEnv.THEANORC": "{YOURPATH}",
"spark.yarn.appMasterEnv.THEANORC": "{YOURPATH}"
}
}
Thanks!
Easy way to resolve this was:
Create a bash script
cd /usr/bin/anaconda/bin/
export PATH=/usr/bin/anaconda/bin:$PATH
conda update matplotlib
conda install Theano
pip install scikit-neuralnetwork
pip install vaderSentiment
Copy the bash script created above to any container in your Azure storage account.
While creating HDInsight Spark cluster, use script action and mention the above path in URL. Ex: https://sa-account-name.blob.core.windows.net/containername/path-of-installation-file.sh
Install it in both HeadNodes and WorkerNodes.
Now, open Jupyter and you should be able to import the modules.
I am taking my first steps with ipython notebook and I installed it successfully on a remote server of mine (over SSH) and I started it using the following command:
ipython notebook --ip='*' --pylab=inline --port=7777
I then checked on http://myserver.sth:7777/ and the notebook was running just fine. I then wanted to close the SSH connection with the server and keep ipython running in the background. When I did this, I couldn't connect to myserver.sth:7777 anymore. Once I connected again to the remote server by SSH, I could connect again to the notebook. I then tried to use screen to start ipython: I created a new screen by screen -S ipy, I started ipython notebook as above and I used Ctrl+A,D to detach the screen and exit to the TTY. I could still connect remotely to the notebook. I then closed the SSH connection and I got a 404 NOT FOUND error when I tried to access my previously stored notebook and I couldn't see it on the list of notebook at http://myserver.sth:7777/. I tried to create a new notebook, but I got a 500 Internal Server Error.
I also tried running ipython notebook with and without using sudo.
Any ideas?
Rather than use screen, perhaps you could switch to an init script or supervisord to keep IPython notebook up and running.
Let's assume you go the supervisord route:
Install supervisord
Install supervisord using your package manager. For Ubuntu it's named supervisor.
apt-get install supervisor
If you decide to install supervisor through pip, you'll have to set up its init.d script yourself.
Write a supervisor configuration file for IPython
The configuration file tells supervisor what to run and how.
After you install supervisor, it should have created /etc/supervisor/supervisord.conf. These lines should exist in the file:
[include]
files = /etc/supervisor/conf.d/*.conf
If the file contains these lines, you're in good shape. I only show them to demonstrate where it expects new configuration files. Your configuration file can go there, named something like /etc/supervisor/conf.d/ipynb.conf.
Here's a sample configuration that was generated by Chef, via an ipython-notebook-cookbook, that runs the notebook in a virtualenv:
[program:ipynb]
command=/home/ipynb/.ipyvirt/bin/ipython notebook --profile=cooked
process_name=%(program_name)s
numprocs=1
numprocs_start=0
autostart=true
autorestart=true
startsecs=1
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=ipynb
redirect_stderr=false
stdout_logfile=AUTO
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stdout_capture_maxbytes=0
stdout_events_enabled=false
stderr_logfile=AUTO
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
stderr_capture_maxbytes=0
stderr_events_enabled=false
environment=HOME="/home/ipynb",SHELL="/bin/bash",USER="ipynb",PATH="/home/ipynb/.ipyvirt/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games",VIRTUAL_ENV="/home/ipynb/.ipyvirt"
directory=/home/ipynb
serverurl=AUTO
The above supervisor config also relies on an IPython notebook configuration (located at /home/ipynb/.ipython/profile_cooked/ipython_notebook_config.py). This makes configuration much easier (as you can also set up your password hash and many other configurables):
c = get_config()
# Kernel config
# Make matplotlib plots inline
c.IPKernelApp.pylab = 'inline'
# The IP address the notebook server will listen on.
# If set to '*', will listen on all interfaces.
# c.NotebookApp.ip= '127.0.0.1'
c.NotebookApp.ip='*'
# Port to host on (e.g. 8888, the default)
c.NotebookApp.port = 8888 # If you want it on 80, I recommend iptables rules
# Open browser (probably want False)
c.NotebookApp.open_browser = False
Re-read and update, now that you have the configuration file
supervisorctl reread
supervisorctl update
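You can then confirm the process is up; the name matches the [program:ipynb] section above:
supervisorctl status ipynb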
Reality
In reality, I used to use a Chef cookbook to do the entire installation and configuration. However, using configuration management with tiny stuff like this is a bit of overkill (unless you're orchestrating these in automation).
Nowadays I use Docker images for IPython notebook, orchestrating via JupyterHub or tmpnb.
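For instance (the image name is just one common example):
docker run -d -p 8888:8888 jupyter/base-notebook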
I'm using Fabric 1.6.0 on OS X 10.8.2, running commands on a remote host on Ubuntu Lucid 10.04.
On the server, I can run sudo /etc/init.d/celeryd restart to restart the Celery service.
I pass the same command through fabric using:
from fabric.api import task, run, sudo

@task
def restart():
    run('sudo /etc/init.d/celeryd restart')
Or
@task
def restart2():
    sudo('/etc/init.d/celeryd restart')
Or use the command line form fab <task_that_sets_env.host> -- sudo /etc/init.d/celeryd restart
The command always fails silently - meaning that fabric returns no errors, but celeryd reports that it's not running.
I'm tearing my hair out here! There's nothing relevant in the Celery log file, and AFAIK Fabric should just pass the commands straight through.
Maybe I'm pretty late to the party, and you can downvote me if this doesn't work, but I've had similar problems running other programs in /etc/init.d with Fabric. My solution (it works with Tomcat and MySQL) is to add pty=False:
@task
def restart():
    sudo('/etc/init.d/celeryd restart', pty=False)
There's documentation on the option here (in short: with a pty allocated, the daemon the init script backgrounds tends to be killed when Fabric's command finishes, which pty=False avoids):
http://docs.fabfile.org/en/1.7/api/core/operations.html#fabric.operations.run
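With that in place, running the task is the usual Fabric 1.x invocation:
# -H supplies the host if your fabfile doesn't set env.hosts:
fab -H user@myserver restart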