I'm looking to automate fsck on my FreeBSD server. I have an idea of how to do this, but because it runs some pretty powerful commands, I'd like some more eyes on it before I set it to run.
Step 1. Cron job.
My cron entry will look something like this: 0 17 * * 0 myfsckscript.sh > /usr/local/var/log/fscklog/$(date).log, to run at 5 PM every Sunday. It will be run from root's crontab, because what I'm doing requires root permissions.
The script goes something like this:
init 1 # Run single-user mode, so fsck can run correctly
fsck -y # Run fsck
fsck -y # Run again, to clean up. Makes my machine act better
init 5 # bring it back up.
My main concerns follow:
Does running this pose any substantial dangers I should know about?
Are there any errors in my script?
Anything I should add?
Did I actually get it right?
I'm sorry this is mostly a confirmation question, but with my level of skill with sh I'm not comfortable setting this to run without someone more experienced taking a look first.
What about just adding this to /etc/rc.conf:
fsck_y_enable="YES"
background_fsck="NO"
Basically, this means run fsck -y at boot and don't run the check in the background, so depending on the size of your disks, it could take a while to finish.
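If you'd rather not edit /etc/rc.conf by hand, FreeBSD's sysrc utility can set the same knobs; a minimal sketch:
sysrc fsck_y_enable="YES"     # pass -y to fsck at boot
sysrc background_fsck="NO"    # run the check in the foreground before going multi-user
sysrc fsck_y_enable background_fsck   # verify the current values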
I have a Perl script that was running fine from crontab, but it suddenly stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The expected output from this script is changes in the database.
I have another script file with the same modifications and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried putting perl at the beginning of the command but it didn't work.
I ran the command over SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there are no errors at all.
I created a temporary log with cd /home/user/public_html/crons/script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty.
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed that process and it's fixed.
You can find your script's process like this:
ps aux | grep 'your cron file here'
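For example, assuming the script from the question, pgrep avoids matching the grep command itself:
pgrep -fl script.pl    # list PIDs whose full command line matches the script
kill 12345             # replace 12345 with the stuck PID reported above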
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null so that you start receiving the email. (Also, test your mail setup with a temporary cron job like 1 * * * * echo "Test".)
The next better solution is to change it to >> /var/log/myscript/current.log, then also set up something to rotate the log files (such as logrotate), and make sure to create that directory with permissions that allow the script's user to write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR still cause you to get an email, while normal output goes to the log file and no email gets sent.
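For instance, a minimal logrotate sketch for that log (the path comes from the line above; the rotation options are purely illustrative, and the user name is hypothetical):
# /etc/logrotate.d/myscript
/var/log/myscript/current.log {
    weekly
    rotate 8
    compress
    copytruncate
    missingok
    notifempty
}
# create the directory so the script's user can write to it:
install -d -o scriptuser -g scriptuser -m 0755 /var/log/myscript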
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
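As a rough sketch of that approach (the unit name comes from the suggestion above, the paths are the ones from the question, and the schedule is illustrative), a unit file at /etc/systemd/system/my_custom.service might look like:
[Unit]
Description=One-shot Perl cron job with a fixed environment

[Service]
Type=oneshot
User=user
WorkingDirectory=/home/user/public_html/crons
ExecStart=/home/user/public_html/crons/script.pl
and the crontab entry then just becomes:
0 3 * * * systemctl start my_custom.service
Output ends up in the journal (journalctl -u my_custom.service) whether cron or you started the job.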
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit : http://smarden.org/runit/runsvdir.8.html
S6 : https://skarnet.org/software/s6/
Perp : http://b0llix.net/perp/site.cgi?page=perpd.8
(Installing and configuring a separate service manager is a bigger task than just using systemd if your distro is already based on systemd, though.) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
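For example, with runit (a sketch only; the service name is invented and the paths come from the question), the run script at /etc/sv/my_custom/run could be:
#!/bin/sh
exec 2>&1
cd /home/user/public_html/crons
exec ./script.pl
Create an empty /etc/sv/my_custom/down file so the supervisor never starts or restarts it on its own, link the directory into /var/service, and let cron issue the one-shot run:
0 3 * * * sv once my_custom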
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system Perl.
I have this Dockerfile that works fine, but I was told that maybe it wasn't the best way to do what I wanted:
FROM debian:jessie
RUN apt-get update && apt-get install -y lighttpd php5-cgi php5-common php5 php5-mysql php5-gd
RUN echo server.modules += \(\"mod_rewrite\"\) >> /etc/lighttpd/lighttpd.conf
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]
RUN lighty-enable-mod fastcgi-php
RUN service lighttpd restart
RUN chown -R www-data:www-data /var/www/html
As you can see, I am creating an image for a container with lighttpd and php.
My question is about the place of the CMD part in my Dockerfile. I was told that it was better to put it at the end of the file, but as you can see, I did it in the middle of mine and it worked just fine.
It does not stop the build, nor does it interfere with the service lighttpd restart in the RUN instruction below it.
Is there any best practice regarding this or is this normal? Could I create a Dockerfile with my CMD just after the apt-get install?
Thanks for your answers regarding my question, and sorry for my English if there are any big mistakes.
I believe it's more of a logical preference: there's no need to define the command when the image isn't ready yet. There is also an added convenience in that leaving CMD or ENTRYPOINT at a shell until the end may make debugging a failed build a little easier. But otherwise, the last ENTRYPOINT and/or CMD modifies the config of the image and is inherited by all child images (each line of your Dockerfile creates one).
It shouldn't matter where you put the CMD entry in terms of Docker using that as the default command (plus Docker will use the last one if there is more than one). Where it might make a difference is if you were trying to structure your build in order to optimize caching the layers. I.e., you want to put anything that is likely to change lower down in the Dockerfile.
I think it's more of a convention to put it last, and it makes the Dockerfile easier to read. Is there a specific reason in your case that you don't want to put it last?
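Putting the answers together, a reordered sketch of the Dockerfile from the question might look like the following. The RUN service lighttpd restart line is dropped because each RUN executes in a throwaway build container, so restarting the service there has no effect on the final image; whether you need a COPY for your site content depends on your setup:
FROM debian:jessie
RUN apt-get update && apt-get install -y lighttpd php5-cgi php5-common php5 php5-mysql php5-gd
RUN echo 'server.modules += ("mod_rewrite")' >> /etc/lighttpd/lighttpd.conf && \
    lighty-enable-mod fastcgi-php
# COPY your PHP files here if they are not mounted as a volume, e.g.:
# COPY ./html /var/www/html
RUN chown -R www-data:www-data /var/www/html
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]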
I am able to execute the script from the command line.
I'm executing it like this:
/path/to/script run
But when executing it from cron like below, the page is not coming back:
55 11 * * 2-6 /path/to/script.pl run >> /tmp/script.log 2>&1
The line which is getting a webpage uses LWP::Simple:
my $site = get("http://sever.com/page") ;
I'm not modifying anything. The page is valid and accessible.
I'm getting an empty page only when I execute this script from crontab. I am able to execute it from the command line!
The crontab is owned by root, and the job is executed as root.
Thanks in advance for any clue!
It's difficult to say what might be causing this, but there are differences between your environment and the environment created by crontab.
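One quick way to see those differences (a throwaway illustration; the temporary file path is arbitrary) is to capture cron's environment and compare it with your interactive one:
* * * * * env > /tmp/cron_env.txt 2>&1
# after it has run once, from a bash/zsh shell:
diff <(sort /tmp/cron_env.txt) <(env | sort)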
You could try running it through a shell with appropriate args to construct your user environment:
55 11 * * 2-6 /bin/tcsh -l /path/to/script.pl run >> /tmp/script.log 2>&1
I'm assuming you are running it by cron with your own user ID of course. If you aren't, then obviously you should try running it manually with the user ID that cron is using to run it.
If it's not a difference in environment variables (e.g. those that specify a proxy to use), I believe you are running afoul of SELinux. Among other things, it can prevent background applications (e.g. cron jobs) from accessing the internet unless you explicitly allow them to do so. I don't know the exact steps offhand, but you should be able to find out quite easily.
I have to take quite a few steps before I get to the file I need to edit, which is why I'm trying to set up an alias in my terminal that takes me to the file when I run it.
The following steps are needed to arrive where I have to be:
cd Sites
vagrant ssh
cd /var/www/miniportal.billetten.dk/logs/
sudo -s
cd /etc/apache2/sites-available/
nano 25-av_miniportal.conf
Edit line 33 in that file (I guess it's possible to jump to that line)
I tried setting up an alias like this, but the problem is that it stops running the rest of the commands once it has SSH'd into Vagrant. If I manually exit Vagrant, it continues the command (and of course returns an error, because there is no such folder locally).
The question is: How do I make sure that everything from step 3 is executed AFTER step 2 is done logging in through SSH?
My ultimate goal is to set up an Apple Automator program that lets me put in a value that gets entered on line 33, but I'm fine with just an alias for now.
I know I asked this question a long time ago, but in the meantime I found a solution and forgot I had posted this question.
My alias in my .zshrc-file looks like this:
alias changeCust='ssh -t root@192.168.56.101 "nano +32 /etc/apache2/sites-enabled/25-av_miniportal.conf && service apache2 reload"'
In other words, it SSHs into the Vagrant box as root (it asks for my password), opens the file in nano at line 32 (or whatever line you need), and then, when the file is saved and nano exits, it reloads apache2 so the changes are applied.
Just use the one below and change the values.
alias AliasName='ssh -t root@your.ip.address.here "nano +lineNumber /path/to/file"'
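If you want the line number to be an argument (for example so an Automator workflow can pass it in), a small function in your .zshrc can wrap the same command; this is just a sketch reusing the host, path, and reload command from the accepted alias:
changeCust() {
    local line="${1:-33}"   # line number to jump to; defaults to 33
    ssh -t root@192.168.56.101 \
        "nano +${line} /etc/apache2/sites-enabled/25-av_miniportal.conf && service apache2 reload"
}
# usage: changeCust 33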
I'm not sure how to title that more succinctly and still have it be meaningful.
(Note that this works fine when run mid-day, via cron or manually, so I "know" the script itself is sound.)
I have a cron job (Ubuntu 13.04).
It runs as my user (not root).
The job itself runs at 6:00 in the morning. It's the first 'business level' job to run each day.
1 6 * * 1-5 /home/me/bin/run_perl_job
run_perl_job is just:
#!/bin/bash
cd /home/me/bin
./script.pl
The script copies a file to "/mnt/shared_drive/outputfile.xls"
The mount point is defined in fstab as:
//fileserver/share /mnt/shared_drive cifs user=domain/me%password,iocharset=utf8,gid=1000,uid=1000,sec=ntlm,file_mode=0777,dir_mode=0777 0 0
Now. Given that:
When I run the script in a normal shell, it works fine.
When I look at the mount point first thing in the morning (via a normal terminal) it shows up (and is writeable) without event.
When I copy the crontab line and set it to run in a couple of minutes, to see the symptom, it works fine (creates the file quite happily).
The ONLY time this fails is when it runs in its normal time slot (6:01). The rest of the script functions (the file itself has to be pulled down via sftp, etc.), so I know it's not dying.
It's driving me batty because the test cycle is 24 hours.
I just added the following couple of lines to the beginning of the run_perl_job script, hoping it will expose something tomorrow:
cd /mnt/shared_drive
ls -lrt >> /home/me/bin/process.log
But I'm stumped. It's almost as though the mount point has gone stale overnight and is waiting for some kind of access attempt before remounting. I'd run mount -a at the top of the run_perl_job script if I could reasonably do it, but given that it would have to be run with sudo, that doesn't seem reasonable to me.
Thoughts? I'm running out of ideas and this test cycle is awful.
How about putting
umount -f -v /mnt/shared_drive
mount -v -a
into a root cron job just before your script runs? That way you don't need sudo in your script with the password in plain sight. The -v flag might give you a hint about what is happening to make the mount go stale.
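A sketch of what that root crontab could look like, given the 6:01 schedule from the question (the exact times and the log path are illustrative):
# root's crontab: force a clean remount a few minutes before the 6:01 user job
55 5 * * 1-5 umount -f -v /mnt/shared_drive >> /var/log/remount_share.log 2>&1
57 5 * * 1-5 mount -v -a >> /var/log/remount_share.log 2>&1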