Does it make sense to create a configuration like this, or is it possible that the spool files can get corrupted?
I want to cap each mail spool file at 500MB or keep it for a month, whichever comes first, and archive the rotated copies:
/var/spool/mail/* {
    monthly
    size 500M
    missingok
    rotate 24
    notifempty
    sharedscripts
}
I didn't use logrotate for this. What I did instead was truncate the spool files (by redirecting /dev/null into them) to clean them out. I also stopped my email service before the cleanup and started it again afterwards. You can tailor this as you wish:
echo "emailclean.sh starting on $(date)"
echo "Stopping fetchmail service"
#/sbin/initctl stop <service>
pkill fetchmail
sleep 10
echo "Cleaning up old mail"
cat /dev/null > /var/spool/mail/root
cat /dev/null > /var/spool/mail/webalert
sleep 10
echo "Starting fetchmail Service as user webalert"
sudo -u webalert /usr/bin/fetchmail
#/sbin/initctl start <service>
echo "Cleanup Complete! /var/spool/mail/root and webalert files sent to dev0"
echo "emailclean.sh finished cleanup on $(date)"
I would like to connect to a VPN on start-up of OSMC.
Environment:
installed OSMC on a Raspberry Pi 2
downloaded, compiled and installed Shrew Soft VPN on the device
As user 'osmc' over ssh:
> sudo iked starts the daemon successfully
> ikec -r "test.vpn" -a starts the client, loads the config and connects successfully
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &
exit 0
After the Raspberry Pi starts, iked is visible as a process with ps -e,
but ikec is not running.
Running /etc/rc.local manually (osmc@osmc:~$ /etc/rc.local) starts the script and connects to the VPN successfully.
Problem:
Why does the script not work correctly on start-up?
Thank you for your help!
I was also looking to do the same thing as you and ran into the same problem. I'm no Linux expert, but I did figure out a workaround.
I created a script called ikec_after_reboot.sh and it looks like this...
$ cat ikec_after_reboot.sh
#!/bin/bash
echo "Starting ikec"
ikec -r test.vpn -a
I then installed cron.
sudo apt-get update
sudo apt-get install cron
Edit the crontab as root to run the ikec script 60 seconds after reboot.
sudo crontab -e
SHELL=/bin/bash
@reboot sleep 60 && /home/osmc/ikec_after_reboot.sh >> /home/osmc/ikec.log 2>&1
Now edit your /etc/rc.local file and add the following.
sudo iked >> /home/osmc/iked.log 2>> /home/osmc/iked.error.log &
exit 0
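If you would rather keep everything in /etc/rc.local, one possible cause of the original failure is that the network is not yet up when rc.local runs. A rough, untested sketch that waits for connectivity before starting the client could look like this (the ping target is just an example):
# untested sketch: wait for the network before starting the VPN client,
# since rc.local can run before the interface is up
until ping -c1 -W1 8.8.8.8 >/dev/null 2>&1; do
    sleep 2
done
ikec -a -r "test.vpn" >> /home/osmc/ikec.log 2>> /home/osmc/ikec.error.log &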
Hopefully, this is helpful to you.
Hi great people of stackoverflow,
We're hosting a Docker container on EB (Elastic Beanstalk) with Node.js-based code running on it.
When redeploying our Docker container, we'd like the old one to do a graceful shutdown.
I've found help & guides on how our code could receive the SIGTERM signal produced by the 'docker stop' command.
However, further investigation into the EB machine running Docker, at
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current to the new staged container, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to docker stop?
Or in general a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self-answering, as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script makes sure whatever is inside the Docker container shuts down gracefully.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon it.
This is totally unrelated to Docker; you can test it running anywhere (locally, for example).
There is a lot of info on the net about getting this kind of behaviour for different kinds of apps (be it Ruby, Node.js, etc.).
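Purely as an illustration of the idea (this is not the author's Node.js code), a tiny shell script that catches SIGINT and exits cleanly might look like this:
#!/bin/sh
# hypothetical stand-in for an app: trap SIGINT, finish up, then exit cleanly
cleanup() {
    echo "caught SIGINT, finishing current work before exiting"
    # ... flush queues, close connections, etc. ...
    exit 0
}
trap cleanup INT

while true; do
    sleep 1    # stand-in for real work
done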
Second,
your EB/Docker-based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put two custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh

      # find currently running docker
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO GRACEFUL SHUTDOWN FOUND"
        exit 0
      fi

      # give graceful kill command to all running .js files (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"

      # wait (max 5 mins) until processes are done and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          let TRIES-=1
          sleep 3
        fi
      done

      echo "failed to graceful shutdown, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small util that backs up the original 01flip.sh and deletes our custom script if it already exists.
gracefulshutdown_02.config is where the magic happens.
It creates the 05gracefulshutdown enact script and makes sure the flip happens afterwards by renaming 01flip.sh to 10flip.sh.
05gracefulshutdown, the custom script, basically does this:
find the currently running docker container
find all processes that need to be sent a SIGINT (for us, processes with 'workers' in their name)
send a SIGINT to those processes
loop:
check whether the processes from before have terminated
continue looping for a limited number of tries
if the tries run out, exit with status "1" and don't continue to 10flip; manual intervention is needed.
This assumes you only have one docker container running on the machine, and that you are able to hop on manually and check what went wrong in case it fails (which has never happened for us yet).
I imagine it can also be improved in many ways, so have fun.
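One way to sanity-check the result after a deploy is to list the enact hooks; they run in lexical order, so the custom script should sort before the renamed flip hook:
# 05gracefulshutdown.sh should appear before 10flip.sh in the listing
ls -1 /opt/elasticbeanstalk/hooks/appdeploy/enact/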
I'm looking for the right way to run a shell script on the first boot of Solaris.
I need to run a resize command; here is my script:
#!/bin/sh -ux
echo "#!/bin/sh -ux" > /etc/rc3.d/S90scale
echo "/sbin/zpool set autoexpand=on rpool" >> /etc/rc3.d/S90scale
echo "/sbin/zpool online -e rpool c1d0" >> /etc/rc3.d/S90scale
echo "rm /etc/rc3.d/S90scale" >> /etc/rc3.d/S90scale
echo "/sbin/shutdown -y -i6 -g0" >> /etc/rc3.d/S90scale
chmod a+x /etc/rc3.d/S90scale
Actually the script runs properly, but unfortunately the resize does not work. When I do the same things from a user session everything is just fine.
What exactly am I doing wrong?
Your method is not the "right" one to run a script once after boot, as it uses the legacy approach. The correct way would be to create an smf service that runs once. However, it does work anyway with Solaris 10 and 11, as rc scripts, while deprecated, are still processed, so I won't elaborate more about smf.
The main issue is that you don't check for errors, and whatever happens, the script removes itself and reboots, preventing any analysis from taking place.
I would suggest modifying your script to log what is happening to a file and quit on error:
#!/bin/ksh
cat > /etc/rc3.d/S90scale <<%EOF%
exec > /var/tmp/S90scale.log 2>&1 # logs everything to file
set -xe # show commands and exits on error
/sbin/zpool set autoexpand=on rpool
/sbin/zpool online -e rpool c1d0
mv /etc/rc3.d/S90scale /etc/rc3.d/_S90scale
/sbin/shutdown -y -i6 -g0
%EOF%
chmod a+x /etc/rc3.d/S90scale
After the next reboot completes, have a look at the /var/tmp/S90scale.log file; you may see an error message there.
I'm trying to create a service / script to automatically start and control my nodejs server, but it doesn't seem to work at all.
First of all, I used this source as my main reference: http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
When trying to start the script (as root, or as a normal user), I get:
root@iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
This is what our typical startup script looks like. As you can see we're running our node processes as user nodejs. We're also using the pre-start script to make sure all of the log file directories and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log
end script
As of Ubuntu 15.04, upstart is no longer used; see systemd.
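For systemd-based releases, a rough equivalent of the job above might be a unit file along these lines (a sketch only; the unit name, user, and paths mirror the upstart example and are not from the original answer):
# /etc/systemd/system/grabagadget.service (hypothetical)
[Unit]
Description=grabagadget node.js server
After=mysql.service

[Service]
User=nodejs
ExecStart=/usr/bin/node /var/nodejs/grabagadget/app.js --environment production
Restart=always

[Install]
WantedBy=multi-user.target
It would then be enabled with systemctl enable grabagadget and started with systemctl start grabagadget.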
I'm scripting up my amazon deployment, and I haven't managed to automate a step in it.
The step is between setting up RAID (via mdadm) and installing my db (mongo) on the newly mounted directory. This is because I have to wait for mdadm to finish in the background before installing mongo. I know when mdadm is finished by running the following command:
sudo mdadm --detail /dev/md0
When mdadm is still in progress this command will produce a progress indicator e.g.:
Rebuild Status : 2% complete
When mdadm is finished this status will be gone.
Does anyone have a clean solution for being able to tell when mdadm is finished, so that the script can run entirely on its own, and then continue on to install mongo once mdadm is done?
At the moment I'm contemplating placing a script of sorts on the box using boto, running the script from boto, and having the script exit once it parses and reads that mdadm is finished...
Thanks a lot for your help!
I am using:
mdadm --wait /dev/md0
Note that the above command will return a non-zero exit status if it did not have to wait...you might need to take that into account in the script.
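For example, in a deployment script this could sit between the mdadm setup and the mongo install (a sketch; the || true simply swallows the non-zero status mentioned above):
# block until any resync/rebuild on /dev/md0 has finished;
# mdadm --wait exits non-zero when there was nothing to wait for, so ignore that
sudo mdadm --wait /dev/md0 || true
echo "RAID resync finished (or was not needed); continuing with the mongo install"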
Ok... so as I said I'll post up my solution (I'm completely new to writing bash scripts, so if you have any advice for improvement I'm all ears!!!)
#!/bin/bash
false=1
true=0

function drives_are_ready {
    RAID_INFO=`sudo mdadm --detail /dev/md0`
    rebuild_status_line_count=`echo "$RAID_INFO" | grep "Rebuild Status" | wc -l`
    echo `echo "$RAID_INFO" | grep "Rebuild Status"`
    if (( rebuild_status_line_count == 0 )); then
        return $true
    else
        return $false
    fi
}

drives_are_ready
raid_is_finished=$?
while (( raid_is_finished == $false )); do
    echo "RAID isn't finished yet... will check again in 10s"
    sleep 10s
    drives_are_ready
    raid_is_finished=$?
done
echo "RAID is done."
I scp the file to my instance, and then chmod and run the script via boto.
You don't necessarily need to wait for the superblock resynchronization before using the disks, but in my experience (and I'm sure yours as well), it is a very good idea with EC2 instances.
You could simply check for it in a bash while loop:
#!/bin/bash
# ... stuff in your script that doesn't require RAID ...

# Pause until mdadm --detail no longer reports a rebuild in progress
while [[ `sudo mdadm --detail /dev/md0 | grep 'Rebuild Status'` != '' ]]; do
    sleep 30
done
echo "Raid superblock resynchronization complete"

# ... stuff in your script that requires RAID synchronization to be complete ...