Systemd - daemonizing celery - Failed at step CHDIR spawning /bin/sh: No such file or directory - celery

This is basically the same service file that the Celery docs tell you to use as a basic beginner's file.
With the configuration below, journalctl -ex shows the error "Failed at step CHDIR spawning /bin/sh: No such file or directory".
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=apache
Group=apache
#Environment=PATH=/opt/python39/lib:/home/ec2-user/DjangoProjects/myproj
#Environment=PATH=/home/ec2-user/DjangoProjects/myproj
EnvironmentFile=/etc/conf.d/celery
#WorkingDirectory=/opt/python39
WorkingDirectory=/home/ec2-usuer/DjangoProjects/myproj
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
/etc/conf.d/celery
# Name of nodes to start
# here we have a single node
#CELERYD_NODES="w1"
# or we could have three nodes:
CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/home/ec2-user/.local/bin/celery"
CELERY_BIN="/opt/python39/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
CELERYD_CHDIR="/home/ec2-user/DjangoProjects/myproj"
# App instance to use
# comment out this line if you don't use an app
#CELERY_APP="myproj"
CELERY_APP="myproj.celery_tasks"
#CELERY_APP="myproj.celery_tasks:myapp"
# ^^ ??? confusion ??? ^^
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
export DJANGO_SETTINGS_MODULE="myproj.settings"
If I leave WorkingDirectory out of the service file, it throws this error instead: "ModuleNotFoundError: No module named 'myproj'".
I've spent the last two days trying different configurations, and I haven't been able to get past one of these two errors. What am I missing?

I was able to solve it!
I found a tutorial which said that WorkingDirectory and CELERYD_CHDIR serve the same purpose.
I also read a suggestion on SO to use a virtual environment, so I did that too :).
The updated files:
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=apache
Group=apache
#Environment=PATH=/opt/python39/lib:/home/ec2-user/DjangoProjects/myproj
#Environment=PATH=/home/ec2-user/DjangoProjects/myproj
EnvironmentFile=/etc/conf.d/celery
#WorkingDirectory=/opt/python39
WorkingDirectory=/home/ec2-user/DjangoProjects/myproj
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
/etc/conf.d/celery
# Name of nodes to start
# here we have a single node
#CELERYD_NODES="w1"
# or we could have three nodes:
CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/home/ec2-user/.local/bin/celery"
#CELERY_BIN="/opt/python39/bin/celery"
CELERY_BIN="/home/ec2-user/.virtualenvs/myproj_prod/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
CELERYD_CHDIR="/home/ec2-user/DjangoProjects/myproj"
# App instance to use
# comment out this line if you don't use an app
#CELERY_APP="myproj"
#CELERY_APP="myproj.celery_tasks"
CELERY_APP="myproj.celery_tasks"
#CELERY_APP="myproj.celery_tasks:myapp"
# ^^ ??? confusion ??? ^^
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
DJANGO_SETTINGS_MODULE="myproj.settings"
After creating the /var/run/celery/ and /var/log/celery/ folders, I ran chmod to give the user and group that run the service - apache - access to those folders.
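For reference, a minimal sketch of that directory setup and of (re)starting the service, assuming the apache user/group and the celery.service unit name used above:
# create the runtime and log directories and hand them to apache
sudo mkdir -p /var/run/celery /var/log/celery
sudo chown -R apache:apache /var/run/celery /var/log/celery
sudo chmod 0755 /var/run/celery /var/log/celery
# note: /var/run is usually a tmpfs on systemd hosts, so these directories
# disappear on reboot; a tmpfiles.d entry or RuntimeDirectory= avoids that
# reload the unit files and start the worker
sudo systemctl daemon-reload
sudo systemctl enable --now celery.service
sudo journalctl -u celery -xe    # check the log if anything fails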

Related

PostgreSQL: how can I install it in a different location on a Linux server?

I would like to install PostgreSQL on a Linux (Ubuntu) server.
If I do the following:
sudo apt-get install postgresql
it is going to install it in
/var/lib/postgresql/9.5/main
while I would like to have it in
/home/database/postgresql/9.5/main
Postgres uses the PGDATA environment variable to determine where the data directory should live. Edit the init script (either /etc/init.d/postgresql or /usr/lib/systemd/system/postgresql.service) and set PGDATA accordingly. For example:
# Note: changing PGDATA will typically require adjusting SELinux
# configuration as well.
# Note: do not use a PGDATA pathname containing spaces, or you will
# break postgresql-setup.
[Unit]
Description=PostgreSQL database server
After=syslog.target
After=network.target
[Service]
Type=forking
User=postgres
Group=postgres
# Note: avoid inserting whitespace in these Environment= lines, or you may
# break postgresql-setup.
# Location of database directory
Environment=PGDATA=/var/sql/pgsql/
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
#ExecStartPre=/usr/local/pgsql/bin/postgresql95-check-db-dir ${PGDATA}
ExecStart=/usr/local/pgsql/bin/pg_ctl start -D ${PGDATA} -s -w -t 300
ExecStop=/usr/local/pgsql/bin/pg_ctl stop -D ${PGDATA} -s -m fast
ExecReload=/usr/local/pgsql/bin/pg_ctl reload -D ${PGDATA} -s
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300
[Install]
WantedBy=multi-user.target
You may also try to symlink /var/lib/postgresql/9.5/main to /home/database/postgresql/9.5/main - might be simpler.
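If you go the symlink route, a rough sketch might look like this (paths and the service name are assumptions; stop the server first and keep the old directory as a backup):
sudo systemctl stop postgresql
sudo mkdir -p /home/database/postgresql/9.5
sudo rsync -a /var/lib/postgresql/9.5/main/ /home/database/postgresql/9.5/main/
sudo mv /var/lib/postgresql/9.5/main /var/lib/postgresql/9.5/main.bak
sudo ln -s /home/database/postgresql/9.5/main /var/lib/postgresql/9.5/main
sudo chown -R postgres:postgres /home/database/postgresql
sudo systemctl start postgresql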

How to run celery flower with a config file?

For my project I want to use a Flower config file instead of command-line options.
So I wrote a file named flowerconfig.py, as follows:
# RabbitMQ management
broker_api = 'http://user:passwd@localhost:15672/api/'
# Enable debug logging
logging = 'DEBUG'
# view address
address = '0.0.0.0'
port = 10006
basic_auth = ["user:passwd"]
persistent = True
db = "var/flower_db"
But when I run Flower with the command flower --conf=flowerconfig, the broker doesn't work.
If I replace the command with celery flower -A celery_worker.celery_app --conf=flowerconfig (celery_worker is my Celery file), the broker runs normally, but the basic_auth from flowerconfig still doesn't work.
So I don't know whether Flower supports file-based config, or whether there's some other method.
The versions:
flower==0.9.2
celery==4.2.1
You can create a bash script to run. For example:
#!/bin/bash
celery -A project flower \
--basic_auth=monitor:password \
--persistent=True \
--max_tasks=9999 \
-l info \
--address=0.0.0.0 \
--broker=redis://localhost:6379/0
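Save it as, say, run_flower.sh (the filename is an assumption), make it executable, and run it in the background:
chmod +x run_flower.sh
nohup ./run_flower.sh > flower.log 2>&1 &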

docker varnish cmd error - no such file or directory

I'm trying to get a Varnish container running as part of a multicontainer Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, Varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Looking around the running container directly, Varnish is where it thinks it should be and the VCL file is where it thinks it should be... so what's stopping it from running under docker-compose?
Thanks!
I didn't get to the bottom of why I was getting this error, but "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
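For anyone hitting the same error: the JSON (exec) form of CMD does not go through a shell, so the whole quoted string is treated as one executable name - which is exactly what the "stat exec /usr/local/sbin/varnishd ...: no such file or directory" message shows. A sketch of two forms that avoid this (untested against this particular image):
# shell form: the string is run via /bin/sh -c, so a long command line is fine
CMD exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384
# exec (JSON) form: every argument must be its own array element
# CMD ["/usr/local/sbin/varnishd", "-j", "unix,user=varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-s", "malloc,64m", "-a", "0.0.0.0:80", "-p", "http_req_hdr_len=16384", "-p", "http_resp_hdr_len=16384"]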

How do I get pcp to automatically attach nodes to postgres pgpool?

I'm using Postgres 9.4.9 and pgpool 3.5.4 on CentOS 6.8.
I'm having a really hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky-dory.
So I figured until I could properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
it'll only work when I use parameters like
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no parameter for the password; I have to enter it manually at the prompt.
Any suggestions for sorting this other than using Expect?
You have to create a PCPPASSFILE. Search the pgpool documentation for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should then run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Example 2:
Create a PCPPASSFILE (vi /usr/local/etc/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 /usr/local/etc/.pcppass); set the PCPPASSFILE variable (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should then run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
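Spelled out as commands, Example 2 looks roughly like this (hostname, port, user and password are placeholders):
cat > /usr/local/etc/.pcppass <<'EOF'
127.0.0.1:9897:user:pass
EOF
chmod 0600 /usr/local/etc/.pcppass
export PCPPASSFILE=/usr/local/etc/.pcppass
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1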
Script to auto-attach the nodes
You can schedule this script with, for example, crontab; see the example entry after the script.
#!/bin/bash
#pgpool status
#0 - This state is only used during the initialization. PCP will never display it.
#1 - Node is up. No connections yet.
#2 - Node is up. Connections are pooled.
#3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0;
if (( $STATUS_0 == 3 ))
then
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1;
if (( $STATUS_1 == 3 ))
then
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
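For example, a crontab entry for the postgres user could look like this (the script path and log path are assumptions):
* * * * * /appl/scripts/attach_node.sh >> /var/log/pgpool/attach_node.log 2>&1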
Yes, you can trigger execution of this command using a customised failover_command (failover.sh in your /etc/pgpool).
Automated way to bring a down pgpool node back up:
Copy this script into a file with execute permission, owned by postgres, at your desired location on all nodes.
Run the crontab -e command as the postgres user and set the script to run every minute.
To execute it every second you would have to create your own service to run it.
#!/bin/bash
# This script will up all pgpool down node
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#******SCRIPT*******
#************************
server_node_list=(0 1 2)
for server_node in ${server_node_list[@]}
do
source $HOME/.bash_profile
export PCPPASSFILE=/var/lib/pgsql/.pcppass
node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n $server_node -w | cut -d ' ' -f 3);
if [[ $node_status == 3 ]]
then
pcp_attach_node -n $server_node -U pgpool -p 9898 -w -v
fi
done

Varnish DAEMON_OPTS Options Errors

When using inline C with Varnish I've not been able to get /etc/default/varnish to be happy at startup.
I've tested inline C with varnish for two things: GeoIP detection and Anti-Site-Scraping functions.
DAEMON_OPTS always complains, even though I'm following what others seem to indicate works fine.
My problem is that this command line start up works:
varnishd -f /etc/varnish/varnish-default.conf -s file,/var/lib/varnish/varnish_storage.bin,512M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -p 'cc_command=exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'
But it errors out with trying to start up from default start scripts:
/etc/default/varnish has this in it:
DAEMON_OPTS="-a :8080 \
-T localhost:2000 \
-f /etc/varnish/varnish-default.conf \
-s file,/var/lib/varnish/varnish_storage.bin,512M \
-p 'cc_command=exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'"
The error is:
# /etc/init.d/varnish start
Starting HTTP accelerator: varnishd failed!
storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB.
Error:
Unknown parameter "'cc_command".
If I try changing the last line to:
-p cc_command='exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'"
Its error is now:
# /etc/init.d/varnish start
Starting HTTP accelerator: varnishd failed!
storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB.
Error: Unknown storage method "hared"
It's trying to interpret '-shared' as -s hared, and 'hared' is not a storage type.
For both GeoIP and the anti-site-scraping setup I've used the exact recommended daemon options, plus I've tried all sorts of variations like adding ' and '', but no joy.
Here is a link to the instructions I've followed, which work fine except for the DAEMON_OPTS part:
http://drcarter.info/2010/04/how-fighting-against-scraping-using-varnish-vcl-inline-c-memcached/
I'm using Debian and the exact DAEMON_OPTS as stated in the instructions.
Can anyone help with a pointer on what's going wrong here?
Even if Jacob will probably never read this, visitors from the future might appreciate what I'm going to write.
I believe I know what's wrong, and it looks like a Debian-specific problem, at least verified on Ubuntu 11.04 and Debian Squeeze.
I traced the execution from my /etc/default/varnish that contains the $DAEMON_OPTS to the init script.
In the init script /etc/init.d/varnish, the start_varnishd() function is:
start_varnishd() {
log_daemon_msg "Starting $DESC" "$NAME"
output=$(/bin/tempfile -s.varnish)
if start-stop-daemon \
--start --quiet --pidfile ${PIDFILE} --exec ${DAEMON} -- \
-P ${PIDFILE} ${DAEMON_OPTS} > ${output} 2>&1; then
log_end_msg 0
else
log_end_msg 1
cat $output
exit 1
fi
rm $output
}
So I modified it to print the full start-stop-daemon command line, like:
start_varnishd() {
log_daemon_msg "Starting $DESC" "$NAME"
output=$(/bin/tempfile -s.varnish)
+ echo "start-stop-daemon --start --quiet --pidfile ${PIDFILE} --exec ${DAEMON} -- -P ${PIDFILE} ${DAEMON_OPTS} > ${output} 2>&1"
if start-stop-daemon \
--start --quiet --pidfile ${PIDFILE} --exec ${DAEMON} -- \
-P ${PIDFILE} ${DAEMON_OPTS} > ${output} 2>&1; then
log_end_msg 0
So I got a command line echoed on STDOUT, and copied-pasted it into my shell. And, surprise! It worked. WTF?
Repeated again to be sure. Yes, it works. Mmh. Could it be another of those bash/dash corner cases?
Let's try feeding the start-stop-daemon command line to bash, and see how it reacts:
start_varnishd() {
log_daemon_msg "Starting $DESC" "$NAME"
output=$(/bin/tempfile -s.varnish)
if bash -c "start-stop-daemon \
--start --quiet --pidfile ${PIDFILE} --exec ${DAEMON} -- \
-P ${PIDFILE} ${DAEMON_OPTS} > ${output} 2>&1"; then
log_end_msg 0
else
log_end_msg 1
cat $output
exit 1
fi
rm $output
}
Yes, it works just fine, at least for my case.
Here's the relevant part of my /etc/default/varnish:
...
## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request. Use a 1GB
# fixed-size cache file.
#
DAEMON_OPTS="-a :6081 \
-T localhost:6082 \
-f /etc/varnish/geoip-example.vcl \
-S /etc/varnish/secret \
-s malloc,100M \
-p 'cc_command=exec cc -fpic -shared -Wl,-x -L/usr/include/GeoIP.h -lGeoIP -o %o %s'"
...
I've seen posts where someone tried to work around this problem by moving the compile command into a separate shell script. Unfortunately that doesn't change the fact that start-stop-daemon is going to pass the $DAEMON_OPTS var through dash, and that will result in mangled options.
It would be something along the lines of:
-p 'cc_command=exec /etc/varnish/compile.sh %o %s'"
And then the compile.sh script as:
#!/bin/sh
cc -fpic -shared -Wl,-x -L/usr/include/GeoIP.h -lGeoIP -o "$@"
but it doesn't work, so just patch your init scripts, and you're good to go!
Hope you can find this information useful.
You can try using:
DAEMON_OPTS="-a :8080 \
-T localhost:2000 \
-f /etc/varnish/varnish-default.conf \
-s file,/var/lib/varnish/varnish_storage.bin,512M \
-p cc_command='exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'"
Obviously, the startup script interpreting DAEMON_OPTS is not prepared for whitespace (even within single quotes). On my Fedora (15) installation the suggested solution works fine; the arguments get interpreted correctly because the "$*" bash parameter is passed in /etc/init.d/varnish and in daemon() in /etc/init.d/functions.
Did you get your startup scripts from a package or did you make custom scripts?
This isn't directly related to the question, but you may find yourself here if you are working through the Varnish Tutorial - Put Varnish on port 80.
For recent installs of Varnish on Debian systems the configuration for varnishd startup options can be found in /etc/systemd/system/multi-user.target.wants/varnish.service. The documented way of changing the port via /etc/default/varnish still exists, but is no longer functional unless you change your system to use init scripts rather than systemd.
After you've changed your options in /etc/systemd/system/multi-user.target.wants/varnish.service, don't forget to run systemctl daemon-reload so that systemd picks up the changes before the service is restarted.
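For example, one way to change the listen port under systemd (the ExecStart line shown is only illustrative; keep the rest of your unit's options as they are):
sudo systemctl edit --full varnish.service
# ...then adjust the -a option in ExecStart, e.g.:
# ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
sudo systemctl daemon-reload
sudo systemctl restart varnish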