I'm using thin as the server for my Sinatra app. It is started thusly:
thin -C config/environment.yml -R config/config.ru start
Where environment.yml has thin stuff and config.ru has general stuff you'd find in a rackup file.
I would like to be able to daemonize (easy enough with thin's config file) and stop and restart this much like one does with apache/tomcat/etc.
When I try thin stop or restart or various other things, I get:
Can't stop process, no PID found in tmp/pids/thin.pid
Indeed, there is no such file. I have tried specifying a pid file and location (e.g. /tmp/thin.pid, to keep it simple) in various places in the thin configuration YAML. All this does is change the directory named in the "no PID found in" message; still no pid file is created.
Any ideas?
The pid file is only created when thin is daemonized, so double-check your config for the daemonize: true option. Since it's YAML, whitespace can make things go wrong. Alternatively, pass the --daemonize switch on the command line.
If location of your pid file is non-default, you should also specify config file when issuing stop:
thin -C config/environment.yml stop
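For example, a minimal start/stop cycle with an explicit pid file might look like this (the pid path is illustrative, and assumes the same settings are used for both commands):
# start daemonized, writing the pid file thin will later look for
thin -C config/environment.yml -R config/config.ru --daemonize --pid tmp/pids/thin.pid start
# stop with the same config and pid settings so thin can find the pid file
thin -C config/environment.yml --pid tmp/pids/thin.pid stop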
I have a web page running on Apache which uses a mature set of Perl files for monitoring our workplace servers and applications. One of those tests goes through Cygwin's SFTP, lists the files there and assesses them.
The problem I have is with SFTP itself. When I run that part of the test from cmd, either manually as D:\cygwin\bin\bash.exe -c "/usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]" or by invoking the very same set of Perl files, it works OK (returns the list of files as it should). When exactly the same code is run through the web page, it fails quickly and does not tell me anything. All I have is error code 255 and "Connection closed". No error stream, no verbose output, nothing, no matter what I have used to capture any error.
To cut a long story short, the culprit was the HOME path.
When run manually, either directly from cmd or through Perl, D:\cygwin\bin\bash.exe -c "env" would report HOME as HOME=/cygdrive/c/Users/[username]/, BUT the same command, when run through the web page, reports HOME=/, i.e. root, apparently losing the home somewhere along the way.
With this knowledge the solution is simple: prepend the SFTP command with the proper home path (e.g. D:\cygwin\bin\bash.exe -c "export HOME=/cygdrive/c/Users/%USERNAME%/ ; /usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]") and you are good to go.
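The same fix as a small standalone wrapper that the web-facing Perl code could call instead (a sketch only; the bracketed values are the question's placeholders, not real paths):
#!/bin/sh
# set HOME explicitly so the web-server context matches the interactive one
export HOME=/cygdrive/c/Users/[username]/
exec /usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no \
    -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]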
I have a Perl script file that was running fine in crontab, but it suddenly stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The output expected from this script is changes in the database.
I have another script file with the same modification and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried putting perl at the beginning but it didn't work.
I ran the command from SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there are no errors at all.
I created a temporary log file with cd /home/user/public_html/crons && ./script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty.
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed that process and it's fixed.
You can find your script's process like this:
ps aux | grep 'your cron file here'
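For example, assuming the script is the script.pl from the question:
# find any running instance; the [s] keeps grep from matching itself
ps aux | grep '[s]cript.pl'
# then kill the stuck instance by PID, or by matching the command line:
pkill -f script.pl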
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (And also test your mail setup using a temporary cron job like 1 * * * * echo "Test".)
The next better solution is to change it to >> /var/log/myscript/current.log, set up something to rotate the log files (like logrotate), and make sure the directory is created with permissions that allow the script's user to write to it. By only redirecting STDOUT of the script, any errors or warnings it writes to STDERR cause you to get an email, and if there are no errors/warnings the output goes to the log file and no email gets sent.
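A sketch of what the adjusted crontab entry could look like (the schedule is an assumption, since the question doesn't show it):
0 * * * * cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log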
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
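A rough sketch of that setup with systemd (the unit name my_custom.service is from the paragraph above; its contents are an assumption):
# assumes a oneshot unit at /etc/systemd/system/my_custom.service whose
# ExecStart runs the script; the crontab entry then becomes something like
#     0 3 * * * systemctl start my_custom.service
# and you can test by hand, with the same environment cron will get:
systemctl start my_custom.service
journalctl -u my_custom.service --since today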
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit : http://smarden.org/runit/runsvdir.8.html
S6 : https://skarnet.org/software/s6/
Perp : http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
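With runit, for instance, the "run once" directive could look roughly like this (the service name myscript is illustrative):
# assumes a service directory at /etc/service/myscript whose run script
# executes the job once and exits
sv once myscript     # from the crontab, or by hand when testing
sv status myscript   # check whether it is still running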
Back to your original problem: once you get some logging, you are likely to discover it is a permissions problem or an upgraded module in the system perl.
I have an embedded system where user management in the /etc/passwd file is usually done automatically with a Bourne shell script. However, it can happen that the /etc/passwd file is edited with a text editor by the root user, or by the passwd utility. Is there a way to write the Bourne shell script so that it locks the /etc/passwd file during its execution, so that other programs are not able to edit the file at that time? Also, the script should detect whether the /etc/passwd file is currently opened by another process. I could use the following solution from the Wooledge wiki:
# locking example -- CORRECT
# Bourne
lockdir=/tmp/myscript.lock
if mkdir "$lockdir"
then    # directory did not exist, but was created successfully
    echo >&2 "successfully acquired lock: $lockdir"
    # continue script
else
    echo >&2 "cannot acquire lock, giving up on $lockdir"
    exit 0
fi
However, this only ensures that two instances of this script are not running simultaneously. I also have a BusyBox lock available, which behaves similarly to flock, but again, as far as I can tell, I can't prevent other processes from editing the /etc/passwd file.
The vipw command may provide this for you, and you can customize the editor using the EDITOR environment variable.
See man vipw for details.
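For example, something as simple as this should do it (the editor choice is just an illustration):
EDITOR=vi vipw    # vipw takes the passwd lock while the editor is open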
Is there a way to program Bourne shell script in a way that it locks the /etc/passwd file during its execution so that other programs are not able to edit the file at the time?
That is called mandatory file locking and the answer is probably no. In Linux that requires the mand option when the file system is mounted. I would guess that's a nonstarter in your environment, but if it's an option (so to speak) have a look at your favorite resource for how to proceed from there.
It's not shell-script functionality you need. For one process to prevent another from opening a file requires kernel support. Unix programs traditionally use advisory locks, or cooperate in some other way. vipw(8) is an example of how that's done.
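If all of your own tools can be made to cooperate, an advisory-lock sketch with flock looks roughly like this (the lock file name is only a convention and protects nothing against programs that don't take the same lock):
#!/bin/sh
# open a lock file on fd 9 and try to take an exclusive lock without blocking
exec 9> /var/lock/passwd-update.lock
if flock -n 9
then
    # ... safely modify /etc/passwd here ...
    echo "lock acquired, updating passwd" >&2
else
    echo "another updater holds the lock, giving up" >&2
    exit 1
fi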
I've ended up in a situation where a directory is being monitored, apparently by inotify, to trigger a process that doesn't exist.
I simply want to stop inotify from monitoring the directory, but after searching and reading the man page I can't find how to do this. The manual mentions inotify_rm_watch, but running that, or int inotify_rm_watch, or inotify, is not recognised on the command line.
The directory is still monitored after rebooting the system, so it's not as simple as just killing a process.
How do I permanently stop a directory being monitored by inotify? Is there some inotify config file that lists what is monitored that I should remove it from?
inotify_rm_watch is a programming interface that needs to be called from the same process that called inotify_add_watch in the first place.
Inotify is used by programs to react to file changes. To stop the monitoring you have to stop the specific program that is using inotify. But in most cases you probably don't want to stop programs from watching for file changes, because it is part of their intended behavior.
You can list all programs using inotify with the following shell command:
ps -p $(find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print 2> /dev/null | sed -e 's/^\/proc\/\([0-9]*\)\/.*/\1/')
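To dig a little further, a Linux-specific sketch that also shows how many watches each of those processes holds (fdinfo only exposes inode numbers, not paths):
# for every process with an inotify fd, print its command line and the
# per-fd count of inotify watch entries from /proc/<pid>/fdinfo
for pid in $(find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null |
             sed 's|^/proc/\([0-9]*\)/.*|\1|' | sort -u)
do
    echo "== PID $pid: $(tr '\0' ' ' < /proc/$pid/cmdline)"
    grep -c '^inotify' /proc/$pid/fdinfo/* 2>/dev/null
done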
inotify watches are NOT preserved past the termination of the process that added the watch, let alone reboots. inotify_rm_watch is not a command-line utility, but a function meant to be called by the process that owns the watch.
If the directory is being monitored, it's because there's a process running that is monitoring it. Stop running that process, and the directory will no longer be monitored.
I am using a VPN service on a certain server. I was given a root account, and when I connect with the root account, the prompt looks like this:
root@xa9g82:/etc/#
Then I used useradd to add an account called 'temp'.
When I connect to the server as temp, the prompt is only a single character:
$
Neither the user information nor the path is shown. Also, note that in root's command line I can use Tab to automatically complete filenames, whereas temp's command line inserts a tab character when I press Tab. It is very inconvenient.
I am using Ubuntu 10.04. How can I resolve this issue?
I usually edit ~/.bashrc. Being root, you might want to change the system-wide preferences at /etc/bash.bashrc instead. Personally, I changed some lines in ~/.bashrc to look like:
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    ## PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"   # default
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\h: \W\a\]$PS1"         # How I like it
    ;;
*)
    ;;
esac
Use the shell's prompt setting to set the prompt (see your shell's man page).
It depends on what shell you run; each one has its own tricks, but you can make it look as you wish.
BASH
TCSH
It is likely that the default shell for the new temp user is set to /bin/sh, which does not provide many of the features that you may be used to from a shell like bash. To check if this is the case, run the following command:
cat /etc/passwd | grep ^temp
The last field of the line that this command outputs is that user's shell (which, as stated previously, I'm guessing is /bin/sh). If this is not the shell you want (it probably isn't), then edit /etc/passwd (using nano or whatever editor you're most comfortable with) and change the shell to something more palatable, like /bin/bash. After doing this, you'll need to log out and then log back in.
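If you'd rather not hand-edit /etc/passwd, a sketch of the same fix with chsh (the account name temp comes from the question; run it as root):
# switch temp's login shell to bash without opening /etc/passwd in an editor
chsh -s /bin/bash temp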