In OctoberCMS using Laradock, nohup php artisan queue:work --daemon & does not work, but running it manually works. How to resolve? - queue

We use
'default' => env('QUEUE_DRIVER', 'database')
Running php artisan queue:work --timeout 3600 works,
but running nohup php artisan queue:work --tries=3 --daemon & only processes the currently queued job (a single row) once and then stops.

Defining the queue in Laravel is not enough; you also need to set up a cron job entry for OctoberCMS.
See the installation guide section on setting up the scheduler.
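For reference, the scheduler entry from that guide is a single crontab line; a sketch is below (the artisan path is a placeholder, and under Laradock it must run wherever PHP and your application are reachable, typically the workspace container):
* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1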

Related

Automating pg_basebackup command for recovery using cron and script

I have a 2 node system running pacemaker, corosync and postgresql 9.4. I am carrying out pgsql replication using virtual IP and am able to successfully recover a downed machine using manual commands. Now to automate stuff, I want to run the following commands in a script to get my recovered master back on cluster.
`#su -postgres
$rm -rf /var/lib/pgsql/9.4/data/* //To delete old data files
$pg_basebackup -h 192.XX.XX.XX -U postgres -D /var/lib/pgsql/9.4/data -X stream -P // To recover the latest data from standby PC running latest entries
$rm /var/lib/pgsql/tmp/PGSQL.lock
$exit //exit from postgresql shell
#pcs resource cleanup msPostgresql`
Now when I run these commands as a script, it hangs right after the first command (su -postgres): the cursor just blinks at the bash$ prompt and the commands below it are never executed. I want to automate this process using cron, but the script itself is not working for me. Can someone help me out here?
Regards
As far as I know, "su -postgres" is wrong. You can use either "su postgres" or "sudo -i -u postgres".
Regarding the scripts: here you can find tested, working scripts; the one you are interested in is called "initiate_replication.sh".
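The hang itself is expected: su starts an interactive shell and the calling script simply waits for that shell to exit, so the lines after it are never run inside it. A hedged sketch of the same steps run non-interactively (the standby address and paths are copied from the question; run it as root, e.g. from root's crontab):
#!/bin/bash
# Sketch only: perform the recovery as the postgres user without an interactive su shell.
set -e
sudo -u postgres bash -c '
set -e
rm -rf /var/lib/pgsql/9.4/data/*        # delete old data files
pg_basebackup -h 192.XX.XX.XX -U postgres -D /var/lib/pgsql/9.4/data -X stream -P   # re-sync from the standby
rm -f /var/lib/pgsql/tmp/PGSQL.lock
'
# back as root
pcs resource cleanup msPostgresql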

running service elasticsearch start fails, but running the command manually succeeds

Context:
I'm testing an Elasticsearch 1.7.1 configuration that's set up by Chef, and testing it in Kitchen.
The Chef script and configuration do work, since they're somehow already running in production.
Running service elasticsearch start as the elasticsearch user fails, but the actual call it delegates to does not.
From what I've learned, Chef scripts are run as root. So, when the test fails (it checks whether Elasticsearch is running by running service elasticsearch status), I log into the Vagrant machine. As root, if I run service elasticsearch start, I get an OK (which is incorrect, but that's another issue), and a subsequent service elasticsearch status gives the error: elasticsearch dead but pid file exists.
Digging further, I added debug statements to the init.d script that's run by service and saw that the actual command was basically a call to the daemon function from init.d/functions, which just calls:
runuser -s /bin/bash elasticsearch -c 'ulimit -S -c 0 >/dev/null 2>&1 ; /usr/share/elasticsearch/bin/elasticsearch -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch/ -Des.default.path.data=/data/elasticsearch/data/ -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch/'
So I tried a sudo su - elasticsearch and then ran the part in quotes:
[elasticsearch@default-centos ~]$ ulimit -S -c 0 >/dev/null 2>&1 ; \
/usr/share/elasticsearch/bin/elasticsearch \
-p /var/run/elasticsearch/elasticsearch.pid -d \
-Des.default.path.home=/usr/share/elasticsearch \
-Des.default.path.logs=/var/log/elasticsearch/ \
-Des.default.path.data=/data/elasticsearch/data/ \
-Des.default.path.work=/tmp/elasticsearch \
-Des.default.path.conf=/etc/elasticsearch/
A subsequent service elasticsearch status shows that elasticsearch is running just fine! I've even set the logging to TRACE, and there's no indication that elasticsearch has crashed.
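A hedged diagnostic sketch (not from the original post; assumes you are root, mirroring the runuser invocation above): dump the limits and environment that the init script's runuser call sees and compare them with the interactive login shell, since a difference there is often what makes the service path behave differently:
runuser -s /bin/bash elasticsearch -c 'ulimit -a; env | sort' > /tmp/es_service_env
su - elasticsearch -c 'ulimit -a; env | sort' > /tmp/es_login_env
diff /tmp/es_service_env /tmp/es_login_env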

docker build does not sustain processes

So this might be my Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y mysql-server-5.6
RUN service mysql start
RUN service mysql status
The build fails with an error that MySQL is not running, even though the previous command finished successfully. Daemons don't seem to keep running between different commands in the Dockerfile.
This is an artificial example, but in my real Dockerfile I have lines that configure the database, and they need a daemon running in the background. The only workaround I found is to run:
RUN service mysql start && ./database_configure1.sh
RUN service mysql start && ./do_something_else_with_db.sh
and so on
But this is probably not the way to do it. Is there any better way to go about this?
Each RUN command within your Dockerfile runs within a different container, so here's the actual sequence of events:
service mysql start starts MySQL.
Then the container is stopped (MySQL is stopped).
Then a snapshot is taken.
Then a new container is launched using that snapshot.
service mysql status is run in the new container.
Of course, mysql isn't actually running in the latter container, so that fails.
So, instead, you need to do everything in a single build step. Usually, you'll want to do this by running a shell script within your container.
Here goes.
Your directory tree should look like this:
Dockerfile
do_stuff_with_mysql.sh
Then, in your Dockerfile, do:
ADD do_stuff_with_mysql.sh /
RUN chmod 755 /do_stuff_with_mysql.sh
RUN /do_stuff_with_mysql.sh
And, in do_stuff_with_mysql.sh, you should have something that looks like this:
#!/bin/bash
set -o errexit
set -o nounset
service mysql start
./database_configure1.sh
./do_something_else_with_db.sh
service mysql stop
# you should loop on `service mysql status` to confirm MySQL is done shutting down
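That last loop could be as simple as the sketch below (the exact behaviour of service mysql status varies between packages, so treat the exit-code check as an assumption):
# wait for MySQL to finish shutting down
while service mysql status >/dev/null 2>&1; do
sleep 1
done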

Running PostgreSQL with Supervisord

I want to run PostgreSQL 9.1 using Supervisor on Ubuntu 10.04. At the moment, I manually start PostgreSQL using the init script:
/etc/init.d/postgresql start
According to this post: http://nicksergeant.com/using-postgresql-with-supervisor-on-ubuntu-1010/, I need to modify the PostgreSQL config to make it listen on a TCP port instead of a Unix socket in order to make PostgreSQL work with Supervisor.
I have two questions regarding this approach:
Considering this is more of a hack, are there any implications (e.g. security/permissions, performance, etc.) of doing this?
Why can't we just run the same postgresql init script from the Supervisor config? Instead, as shown in the link above, it runs postmaster.
UPDATE:
Thanks to the useful suggestions in both answers below, I have set up a script for Supervisor to invoke PostgreSQL directly:
#!/bin/sh
# This script is run by Supervisor to start PostgreSQL 9.1 in foreground mode
if [ -d /var/run/postgresql ]; then
chmod 2775 /var/run/postgresql
else
install -d -m 2775 -o postgres -g postgres /var/run/postgresql
fi
exec su postgres -c "/usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c config_file=/etc/postgresql/9.1/main/postgresql.conf"
I also set /etc/postgresql/9.1/main/start.conf to manual so that PostgreSQL does not start automatically on boot (however, it's not clear to me whether this config is actually loaded). I then set up the Supervisor config for postgres as:
[program:postgres]
user=root
group=root
command=/usr/local/bin/run_postgresql.sh
autostart=true
autorestart=true
stderr_logfile=/home/www-data/logs/postgres_err.log
stdout_logfile=/home/www-data/logs/postgres_out.log
redirect_stderr=true
stopsignal=QUIT
So now, I can start PostgreSQL in supervisorctl by doing start postgres, which runs fine. However, after I issue stop postgres, although supervisorctl declares postgres is stopped, the server apparently is still running as I can psql into it.
I wonder if this is a Supervisor config issue, or a PostgreSQL issue. Any suggestion welcome!
The blog post is rather badly written. There is no "TCP mode": the post's suggested method will still listen on a Unix socket, just in a different directory. Comments in the post such as "external pid file - not needed for TCP mode" are very misleading.
postmaster is the traditional name for the postgresql executable (to distinguish the master dispatching process from the backend slaves). For some time now there has been no separate executable, and now it is installed as simply "postgres".
Assuming that Supervisor is broadly similar to the qmail/daemontools supervise scheme, it would be entirely possible (in fact, quite normal) to have it run a script that sets up the directories and environment and then execs postgres with the requisite arguments (or propagates arguments given to the wrapper script, which would be unusual with supervise but makes more sense when you have a config file to put arguments into).
The way supervise worked (and I'm going to continue to assume "Supervisor" is the same) is to have the supervisor process run a subprocess as specified, and simply relaunch a new subprocess if it exits. This is based on the idea that the process being launched is a long-lived daemon process that only exits when something goes badly wrong, and that simply restarting it is a valid fix. By contrast, init scripts such as in /etc/init.d run the subprocess and detach it, and return control to their caller- if the subprocess exits, nothing special happens, and it must be restarted manually. If you tried to simply run /etc/init.d/postgresql start from supervise, it would continuously keep spawning postgresql daemons, as the return from the init script would be interpreted as the daemon process having exited, when in fact it had been started and detached.
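To illustrate that last point in Supervisor's own config format (a deliberately wrong example, matching the behaviour described above), a stanza like
[program:postgres-wrong]
command=/etc/init.d/postgresql start
autorestart=true
would have Supervisor treat every quick exit of the init script as the program dying, so it keeps re-running the script instead of supervising the actual postgres process.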
To avoid auto-starting the service with the /etc/init.d scripts, the package for postgresql 9.1 provides a file /etc/postgresql/9.1/main/start.conf that contains:
# Automatic startup configuration
# auto: automatically start/stop the cluster in the init script
# manual: do not start/stop in init scripts, but allow manual startup with
# pg_ctlcluster
# disabled: do not allow manual startup with pg_ctlcluster (this can be easily
# circumvented and is only meant to be a small protection for
# accidents).
auto
This is the file to modify to avoid auto-start, rather than moving /etc/init.d/postgresql out of the way as the blog post suggests.
Also, changing the Unix socket parameters just because /var/run/postgresql is missing doesn't look like the best idea: that directory is the default for any program linked with libpq, and there's no difficulty in creating it with the proper permissions, just as the package's start sequence does in /usr/share/postgresql-common/init.d-functions:
# create socket directory
if [ -d /var/run/postgresql ]; then
chmod 2775 /var/run/postgresql
else
install -d -m 2775 -o postgres -g postgres /var/run/postgresql
fi
And although the default value shouldn't cause a problem, note that whether postmaster ultimately stays in the foreground or forks and runs in the background is controlled by the silent_mode parameter in postgresql.conf. Make sure that it is off.
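In other words, /etc/postgresql/9.1/main/postgresql.conf should either omit silent_mode or set it explicitly (off is already the default in 9.1):
silent_mode = off    # keep postgres in the foreground so Supervisor can manage it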
I am trying to make both Tomcat and Postgres run under Supervisor, and found some hints here: https://serverfault.com/questions/425132/controlling-tomcat-with-supervisor
Here is my modified run_postgresql.sh, using bash:
#!/bin/bash
# This script is run by Supervisor to start PostgreSQL 9.1 in foreground mode
function shutdown()
{
echo "Shutting down PostgreSQL"
pkill postgres
}
if [ -d /var/run/postgresql ]; then
chmod 2775 /var/run/postgresql
else
install -d -m 2775 -o postgres -g postgres /var/run/postgresql
fi
# Allow any catchable signal which would kill a process to stop PostgreSQL
# (SIGKILL cannot be trapped, so it is not listed)
trap shutdown HUP INT QUIT ABRT ALRM TERM TSTP
exec sudo -u postgres /usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main --config-file=/etc/postgresql/9.1/main/postgresql.conf
With this script postgresql stops correctly after supervisorctl stop postgres.

How to start a play2 application on a remote machine using capistrano

I'm trying to deploy a play2 application with capistrano, but I can't figure out how to (re)start the play2 application after a successful deployment. Just triggering 'play start' leaves the process hanging, waiting for me to press Ctrl+D.
I've created a start script in the play app root folder
#!/bin/bash
nohup bash -c "/var/lib/play2/play start &>> /tmp/myapp.log 2>&1" &> /dev/null &
It works great when I run this on the server. When I try to call this from my local machine over ssh it also works. But when I am using capistrano, it doesn't seem to do anything. My capistrano config looks like this:
namespace :deploy do
task :restart do
stop
sleep 1
start
end
task :start do
run "cd #{current_release}/trip-api && ./start.sh"
end
task :stop do
run "cd #{current_release}/trip-api && ./stop.sh"
end
end
What's the best way to start a play2 application on a remote machine? How to get it working with capistrano?
Have a look at the Play documentation on deploying your application to production.
The recommended way is to package your app with
play clean compile stage
And then run it with
$ target/start
To stop it, have a look at the docs:
The server’s process id is displayed at bootstrap and written to the RUNNING_PID file. To kill a running Play server, it is enough to send a SIGTERM to the process to properly shut down the application.
In this quickstart for Openshift, it shows another way to start play as a service and how to stop it.
Basically you do something like this to start:
APP_COMMAND="${OPENSHIFT_REPO_DIR}target/start $PLAY_PARAMS "\
"-Dhttp.port=${OPENSHIFT_INTERNAL_PORT} "\
"-Dhttp.address=${OPENSHIFT_INTERNAL_IP} "\
"-Dconfig.resource=openshift.conf"
echo $APP_COMMAND &>> $LOG_FILE
nohup bash -c "${APP_COMMAND} &>> ${LOG_FILE} 2>&1" &> /dev/null &
And to stop it:
pid=`cat RUNNING_PID`
echo "Stopping play application" >> $LOG_FILE
kill -SIGTERM $pid
There are a few fresh topics about running the application available on Google Groups:
Start an application as a background process
When deployed on Ubuntu 10.04 cant detach from console
It's a good idea to follow or join them.
I would suggest using runit. We are currently running a bunch of services in production and it works great.
It only involves creating a simple shell script named run, pointing runit at its containing directory, and then starting the service. Services should not daemonize themselves; runit handles pid files, etc.
There is a command (sv) to start, stop and query services (sv start|stop|status|restart yourapp).
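For illustration, a minimal run script for the Play app discussed above might look like this sketch (the application path and port are placeholders; note there is no nohup or &, because runit expects the process to stay in the foreground):
#!/bin/sh
# Hypothetical runit run script (sketch only)
exec 2>&1
cd /path/to/your-play-app || exit 1
exec ./target/start -Dhttp.port=9000
With something like that in place, the Capistrano start and stop tasks can shell out to sv start yourapp and sv stop yourapp instead of the custom start.sh/stop.sh.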
A cursory Google search turned up http://rubygems.org/gems/capistrano-runit, though I do not use capistrano at all so I can't vouch for its usefulness.
http://smarden.org/runit/
The FAQ is the best place to start: http://smarden.org/runit/faq.html
In Debian you just apt-get install runit and you are good to go.
update-service --add /your/service/dir/ will register the service with runit.
On deployment we stop services, change binaries and start services; it is really simple.