Rails, Deploying With Capistrano to a VPS on Unicorn - deployment

I need help debugging the following issue. It's my first time deploying, and I haven't been able to come up with the solution.
* 2012-12-05 18:37:44 executing `deploy:start'
* executing "/etc/init.d/unicorn_blog start"
executing command
/etc/init.d/unicorn_blog: 24: kill: No such process
master failed to start, check stderr log for details
Here's the stderr
/.../unicorn/socket_helper.rb:140:in `initialize': Address already in use - /tmp/unicorn.my_app.sock (Errno::EADDRINUSE)

It looks like you have a leftover Unicorn process running with a PID different from the one that was recorded by init.d. I would try running $ ps aux | grep unicorn to find the orphaned process, then kill it.

I'm unsure why it works, but the following solution did.
lsof /tmp/unicorn.my_app.sock
lists the pids
kill -9 pid
(replace 'pid' with one of those listed)
Then cap deploy:start from the local terminal.
source: Unicorn/Nginx process missing, socket open

I had to
sudo rm /tmp/unicorn.my_app.sock
and
sudo /etc/init.d/unicorn_myapp start

I got the same error, and I fixed it as below:
SSH to the server your project deploys to, and run this command:
ps -ef | grep unicorn => lists the Unicorn PIDs. Find the process ID of the Unicorn master.
Kill that master process.
Try to deploy again with Capistrano.
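The answers above boil down to the same cleanup sequence. Here is a sketch combining them; the socket path is taken from the EADDRINUSE error above, and will differ per app:

```shell
# Sketch: clear a stale Unicorn socket so a new master can bind.
# The path matches the error message above; adjust for your app.
SOCK=/tmp/unicorn.my_app.sock

# Kill any process still holding the socket (-t prints bare PIDs).
for pid in $(lsof -t "$SOCK" 2>/dev/null); do
    kill "$pid"    # try SIGTERM first; use kill -9 only if it survives
done

# Remove the stale socket file if it is still there.
rm -f "$SOCK"
```

After this, cap deploy:start should let the new Unicorn master bind cleanly.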

Related

Running Python Script in Background Infinitely

I am trying to write a Python script which runs on another server, such that even if I close the server connection in my PC's terminal, it keeps running on that server. While alive, the script runs indefinitely, listening for events from a website (UI); when an event occurs it starts the appropriate Docker containers and keeps listening for PostgreSQL events.
When I tried to use nohup (to run the script in the background) it did run in the background but was unable to listen to any of the events. Has anyone worked on something similar before? Please share your thoughts.
I am sharing a part of my script.
self.pool = await asyncpg.create_pool(user='alg_user', password='algy',
                                      database='alg', host='brain', port=6543)
async with self.pool.acquire() as conn:
    def enqueue_listener(*args):
        self.queue.put_nowait(args)

    await conn.add_listener('task_created', enqueue_listener)
    print("Added the listener")

    while True:
        print("---- Listening for new job ----")
        conn2, pid, channel, payload = await self.queue.get()
        x = re.sub(r"[^\w]", " ", payload).split()
        print(x)
        if x[5] == '1':
            tsk = 'TASK_ID=%s' % str(x[1])
            if x[3] == '1':
                command = "docker run --rm -it -e ALGORITHM_ID=1 -e TASK_ID=%s --network project_default project/docked_prol:1.0" % str(x[1])
                subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
            if x[3] == '8':
                command = "docker run --rm -it -e ALGORITHM_ID=8 -e TASK_ID=%s --network project_default project/docked_pro:1.0" % str(x[1])
                subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
The script is running absolutely fine when kept on running manually, but just need some advice with implementation methodology.
First of all, I am here 3 years later.
To run a script infinitely as a background task, you need a process manager tool. PM2 is my favorite process manager; it is made in Node.js, but because it is a CLI it can run any terminal task.
Basically, you can install Node.js and npm to get pm2. (You can visit nodejs.org to download the installer.)
You need to install pm2 as a global module using npm install -g pm2 in your terminal.
You can check that it is installed simply with pm2 -v.
Then you can start your Python script in your terminal using pm2 start file_name.py.
It will create a background process to run your script and will restart it forever.
If you are running something that takes a long time and you don't want it restarted when it finishes, you can disable restarting by adding the parameter --no-autorestart to the command (pm2 start file_name.py --no-autorestart).
If you want to see the logs or the state of the task, you can use pm2 status, pm2 logs and pm2 monit.
If you want to stop the task, you can use pm2 stop task_name.
You can use pm2 reload all or pm2 update to start all the tasks back up.
You can kill the task using pm2 kill.
For more information you can visit the PM2 Python documentation.
Running something in the background via nohup will only work if the process/script runs without needing external input, because there is no way to provide manual input to a background process.
First, try checking whether the process is still running in the background (ps -fe | grep processname).
If it is running, check the nohup.out file to see where the process is getting stuck. It is generated in the directory where you started the process, and will give you some idea of what is going on inside it.
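A common reason a nohup'd listener dies when the terminal closes is that its stdin/stdout are still attached to the terminal. A sketch with explicit redirections; listener.py and listener.log are placeholder names, not from the question:

```shell
# Sketch: detach the listener from the terminal with explicit
# redirections, so a closed SSH session cannot break its stdio.
# listener.py and listener.log are placeholder names.
nohup python3 listener.py > listener.log 2>&1 < /dev/null &
echo "started in background; output goes to listener.log"
```

If the script still stops, the log file is the first place to look for the real error.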

Writing a bash file that runs in the background and checks the connection?

I have to solve a problem at my firm: we use Raspbian (Unix-based) Raspberry Pi machines to connect remotely to Windows 7 machines and work from there. The problem is that all the easy-to-use, free, Unix-based rdesktop applications can't handle disconnects. They freeze up, and the "not so talented" employees don't know how to stop rdesktop and reconnect.
I need to write something, preferably a bash application, which can run in the background on the Raspberry and check the connection. If the connection is down it should kill rdesktop and start a new one as the connection comes back up. I don't know where to start: the examples I found all used ping to check the connection, but my boss said that all the Raspberries sending ping packets all the time would overload our gateways. Is there a way to check the connection without ping?
One way to solve it is to create a daemon which continuously checks the connection to the host machine.
Doing it this way involves creating two files
the script which pings the host /usr/local/bin/checkconnection.sh
the daemon file /etc/init.d/checkconnectiond
Create daemon file:
$ sudo touch /etc/init.d/checkconnectiond
$ sudo nano /etc/init.d/checkconnectiond
and paste the following:
#!/bin/sh
# /etc/init.d/checkconnectiond
### BEGIN INIT INFO
# Provides: checkconnectiond
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Script for checking connection for remote desktop
# Description: Script for checking connection for remote desktop
### END INIT INFO
case "$1" in
    start)
        while sleep 30; do (/usr/local/bin/checkconnection.sh &) ; done
        ;;
    stop)
        killall -q checkconnectiond
        ;;
    *)
        echo "Usage: /etc/init.d/checkconnectiond {start|stop}"
        exit 1
        ;;
esac
exit 0
Create the script:
$ sudo nano /usr/local/bin/checkconnection.sh
The script:
#!/bin/bash
if ping -c 1 host_ip > /dev/null 2>&1
then
    :   # do nothing, host is up
else
    killall remotedesktop-pid
fi
Remember to change host_ip and remotedesktop-pid. killall takes a process name, so if the remote desktop client is called "rdp" you would do killall rdp.
Now we have a daemon that will automatically start when the Raspberry boots. This daemon runs checkconnection.sh every 30 seconds.
The checkconnection.sh script runs a ping command against the host. If the ping is not successful, it kills the remote desktop process so the user must manually restart it.
Sources:
I wrote the daemon script earlier for a raspberry project
Ping test: Checking host availability by using ping in bash scripts
Repeat shell command infinite: https://unix.stackexchange.com/questions/10646/repeat-a-unix-command-every-x-seconds-forever/111484#111484
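Since the question specifically asks for a check without ping, one alternative is a TCP connect to the port rdesktop already uses (3389 for RDP), which avoids ICMP entirely. A sketch; the placeholder IP, port, and the 2-second timeout are assumptions, and /dev/tcp is a bash (not POSIX sh) feature:

```shell
# Sketch: test reachability with a TCP connect instead of ICMP ping.
# Returns 0 (success) if a connection to port $2 on host $1 opens.
check_host() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: probe the RDP port on the Windows machine (placeholder IP).
if check_host 192.0.2.10 3389; then
    echo "host is up"
else
    echo "host is down"
fi
```

Dropping this in place of the ping line in checkconnection.sh keeps the rest of the daemon setup unchanged.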

How do I delete a virtualbox machine in the GURU_MEDITATION error state?

How do I delete a VirtualBox machine in the GURU_MEDITATION error state? Is it enough just to delete the directory while VirtualBox is not running?
EDIT: After posting, I deleted the entire directory that "Show in File Manager" navigates to.
It looks like:
Note that there is no power off, and even remove is greyed out. I believe this is the exact same as it looked even before I deleted the directory.
EDIT 2: I tried the command line poweroff after deleting the files. It hangs:
vboxmanage controlvm wmf-vagrant_1354733432 poweroff 0%...10%...20%...
EDIT 3: It also fails to unregister it from the command-line:
VBoxManage unregistervm wmf-vagrant_1354733432 --delete VBoxManage:
error: Cannot unregister the machine 'wmf-vagrant_1354733432' while it
is locked VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE
(0x80bb0007), component Machine, interface IMachine, callee
nsISupports Context: "Unregister(fDelete ?
(CleanupMode_T)CleanupMode_DetachAllReturnHardDisksOnly :
(CleanupMode_T)CleanupMode_DetachAllReturnNone,
ComSafeArrayAsOutParam(aMedia))" at line 160 of file
VBoxManageMisc.cpp
Kill the VBoxHeadless process and run "vagrant destroy"
Destroying vagrant and sending the kill signal with the "killall" command looks like:
killall -9 VBoxHeadless && vagrant destroy
If you can't power off the machine from VirtualBox GUI, then try from the command line using vboxmanage command (VBoxManage on OS X), e.g.:
vboxmanage controlvm NAMEOFVM poweroff
Change NAMEOFVM with the name from vboxmanage list vms command.
then unregister and delete the VM:
vboxmanage unregistervm NAMEOFVM --delete
Or delete it manually:
rm -fr ~/"VirtualBox VMs/NAMEOFVM"
I hit this problem. Everything I read recommended that you should always manage the boxes via VirtualBox, not access the files directly. But when I had an invalid box, the unregistervm command refused to delete it and vagrant destroy did not work. In the end the following process worked.
Kill all running VBox* processes
Delete the folder "boxname" from the folder "VirtualBox VMs"
Edit the file "VirtualBox.xml" and remove the tag corresponding to the invalid box.
I then ran this command to verify the box was gone.
VBoxManage list vms
After that I was able to create a new vm with the same name.
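The manual steps above can be double-checked from the shell before recreating the VM. A sketch; "boxname" is a placeholder for the real VM name:

```shell
# Sketch: confirm no trace of the broken VM remains after cleanup.
# "boxname" is a placeholder for the real VM name.
VM_NAME=boxname

# Step 1 check: no VirtualBox processes should still be running.
pgrep -fl VBox || echo "no VirtualBox processes running"

# Step 2 check: the VM folder should be gone from "VirtualBox VMs".
if [ ! -d ~/"VirtualBox VMs/$VM_NAME" ]; then
    echo "VM folder removed"
fi
```

Only after both checks pass is it worth editing VirtualBox.xml, since a running process can rewrite the file behind you.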
I had a VM that got into a similar state:
$ vagrant up
Bringing machine 'tempu' up with 'virtualbox' provider...
==> mms: Checking if box 'hashicorp/precise64' is up to date...
==> mms: Resuming suspended VM...
==> mms: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "9fcf2203-d4b3-47a1-a307-61bfa580bd28", "--type", "headless"]
Stderr: VBoxManage: error: The machine 'temp-ubuntu' is already locked by a session (or being locked or unlocked)
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component Machine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "LaunchVMProcess(a->session, sessionType.raw(), env.raw(), progress.asOutParam())" at line 592 of file VBoxManageMisc.cpp
I looked for a process called VBoxHeadless, but it wasn't running.
I then ran ps and found this process with the same vm id:
$ ps aux | grep -i virtualbox
user 63466 0.0 0.1 2523608 8396 ?? S 9:36am 0:02.67 /Applications/VirtualBox.app/Contents/MacOS/VBoxManage showvminfo 9fcf2203-d4b3-47a1-a307-61bfa580bd28 --machinereadable
Killing that process fixed the problem and VM started correctly after running vagrant up
This is a script I use when I get desperate. It wipes as much trace of any VM from the machine as I can find:
VBoxManage list runningvms | awk '{print $2}' | xargs --no-run-if-empty -t -n1 -IXXX VBoxManage controlvm XXX poweroff
VBoxManage list vms | awk '{print $2}' | xargs --no-run-if-empty -t -n1 VBoxManage unregistervm
killall -9 VBoxHeadless
rm -rf ~/VirtualBox\ VMs/*
I am using Debian Wheezy on a 64-bit multiple-processor host. I was able to solve it eventually by removing all VirtualBox data (though you did not need to delete the Vagrant base box):
Close Virtualbox if running
sudo apt-get remove --purge virtualbox
Move or delete ~/.VirtualBox and ~/VirtualBox\ VMs/. If you're not sure, back them up to a safe place.
Restart.
Reinstall virtualbox.
Use virtualbox/vagrant as normal.
There may be a less disruptive way (e.g. removing only parts of these directories). In my case, fortunately I was using only one VM at the time.
In my case, I wanted to delete ALL Vagrant boxes I currently have on my system by a command line, I did that by:
$ vagrant box list | cut -f 1 -d ' ' | xargs -L 1 vagrant box remove -f --all
Of course, after making sure no further process is attached any more:
killall -9 VBoxHeadless && vagrant destroy
No matching processes belonging to you were found
On Windows 10, I solved this problem by restoring the default firewall configuration.
Hope it helps...
I've been struggling with frozen VirtualBox instances created earlier using Vagrant.
Luckily I found a solution mentioned in a similar ticket.
So, to recap: if you're getting a timeout error, or Vagrant complains it can't provision, or any other related VirtualBox issue, try:
List the VirtualBox instances first: VBoxManage list vms
Stop the instances using the ids/names from the previous command: VBoxManage startvm VMNAME/id --type emergencystop
List the Vagrant boxes with vagrant box list
Remove the Vagrant boxes causing issues: vagrant box remove ${box-name}
Afterwards, try vagrant up again and hopefully you will be back in business.
Good luck!
Open the task manager or system monitor and hover with the mouse over the VBoxHeadless to see the name of the VM and kill the process.
Now you can remove the VM with the VirtualBox Manager GUI.
Run vagrant global-status
Identify the vm id
Run vagrant destroy [vm id]
You can use the command below to delete the VM from VirtualBox:
vagrant destroy
And use the command below to create the VM and start it again:
vagrant up

Disown shell once a process get started using shell script

I am trying to write a script for starting the Tomcat server which gets disassociated from the shell once the execution of the script completes. For example, please see the below snapshot of the screen.
bash-3.00# ./startup.sh
Using CATALINA_BASE: /opt/tomcat/6.0.32
Using CATALINA_HOME: /opt/tomcat/6.0.32
Using CATALINA_TMPDIR: /opt/tomcat/6.0.32/temp
Using JRE_HOME: /opt/jdk1.6.0_26/
Using CLASSPATH: /opt/tomcat/6.0.32/bin/bootstrap.jar
bash-3.00# ps -eaf | grep tomcat
root 4737 2945 0 02:45:53 pts/24 0:00 grep tomcat
root 4734 29777 1 02:45:42 pts/24 0:19 /opt/jdk1.6.0_26//bin/java -Djava.util.logging.config.file=/opt/tomcat/6.0.32/c
Now as you can see, once the execution of the script completes, the tomcat process is associated with pts/24 till I close the shell.
But what I want is that even if the shell is kept open, the process should show behavior like below:
bash-3.00# ps -eaf | grep tomcat
root 13985 2945 0 22:40:13 pts/24 0:00 grep tomcat
root 13977 29777 1 22:40:01 ? 0:22 /opt/jdk1.6.0_26//bin/java -Djava.util.logging.config.file=/opt/tomcat/6.0.32//
The operating system is Solaris. The options I tried to accomplish this were nohup and disown, but the process is still associated with the shell.
Another mechanism is to put it in crontab, or use svc (SMF) to make the process start as the system comes up, i.e. as a daemon; or we could write a small C program which forks a child process and exits.
Note that the process is already running in the background.
But I want to achieve the same using a shell or Perl script, so any thoughts on this would help me a lot.
Thanks in advance.
Well, you could go and do all the hard work yourself, but why do that when there's a module for it: Proc::Daemon (not sure whether it works on Solaris).
The documentation also describes the process used, which is useful to understand anyhow if you decide to go ahead and craft your own daemonizing code.
( nohup ./script.bash & )
The parenthesized sub-shell exits immediately and ps -ef |grep script.bash returns:
501 59614 1 0 0:00.00 ttys005 0:00.00 /bin/bash ./script.bash
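The parenthesized subshell works because the backgrounded process is reparented to init when the subshell exits. On Linux the same effect can be had explicitly with setsid, which starts the command in a new session with no controlling terminal; note setsid is a util-linux tool and may not exist on Solaris, where the subshell trick above is the portable fallback. A sketch (./startup.sh stands in for Tomcat's startup script):

```shell
# Sketch: fully detach a command from the current shell and terminal.
# setsid puts it in a new session, so ps shows "?" instead of a tty.
# ./startup.sh is a placeholder for the real startup script.
setsid nohup ./startup.sh > /dev/null 2>&1 < /dev/null &
echo "detached; check with: ps -ef | grep startup"
```

The explicit stdin/stdout/stderr redirections matter: a detached process that still holds the terminal open keeps its tty association.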

Run a perl script at startup in Ubuntu

I have a perl script that I need to run once at startup with an argument under my user account.
So when I boot the system up, it needs to execute a command like this:
./path/to/script.pl start
Any ideas?
You could use a line in your crontab (crontab -e).
To run a command at startup:
edit /etc/crontab
Add the following line:
@reboot root perl /path/to/script.pl start
^^^ Runs as root. Change "root" to "BlackCow" to run as BlackCow. (Note it must be @reboot, not #reboot; a leading # makes the line a comment. Use an absolute path, since cron does not run from your working directory.)
Or, you could use upstart (add a .conf file to /etc/init/). Here's a copy and paste from my notes:
Use upstart to run a daemon at reboot/start
e.g. /etc/init/prestocab.conf:
#!upstart
description "node.js server"
author "BlackCow"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on shutdown
script
export HOME="/root"
exec sudo -u root /usr/local/bin/node /home/prestocab/prestocab.com/www/socket.io/server.js 2>&1 >> /var/log/prestocab.log
end script
To use:
start prestocab
stop prestocab
restart prestocab
You might want to use some sort of process monitor to restart the daemon if it crashes
It depends on what init you are using: if your version of Ubuntu uses Upstart, you have to configure the appropriate Upstart start scripts; if not, the rc scripts based on your runlevel. Check update-rc.d.
On Ubuntu, the simplest way is to add this line to your /etc/rc.local file (before the exit 0 line, substituting username with your own user name):
su -c "./path/to/script.pl start" username &
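Since the question asks for the script to run under your own account, an @reboot entry in your per-user crontab also works, without touching system files. This sketch only prints the merged crontab; pipe the output to crontab - to actually install it (the script path is a placeholder):

```shell
# Sketch: add an @reboot line to the invoking user's crontab.
# Dry run: prints the would-be crontab; pipe to `crontab -` to apply.
NEW='@reboot /path/to/script.pl start'
{ crontab -l 2>/dev/null; echo "$NEW"; }
```

A per-user crontab entry runs as that user automatically, so no su or root entry is needed.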