Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check MD5 checksums of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start in parsing this output, and how could I trigger some sort of response from the system if there is a 'FAILED'?
I've worked a bit with Perl trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I want to put this script into a cron job that runs every minute. Some people have told me that an inotify job or script (which I'm not familiar with) would be better than polling like this.
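For reference, the inotify approach they suggested would presumably look something like this (a minimal sketch, assuming inotify-tools is installed; it reacts to changes immediately instead of polling):
#!/bin/bash
# sketch: watch the honeypot tree and re-check on any change
inotifywait -m -r -e modify,delete,move /share/ConfData |
while read -r dir event file
do
    echo "change detected: $event on $dir$file"
    /share/homes/admin/cryptocheck.sh   # re-run the md5 check
done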
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs grep -q on 'FAILED', and if it picks anything up it sounds the alarm (what the alarm will be is TBD).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
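Combining the two, a variant of cryptocheck.sh that exits non-zero when anything fails might look like this (a sketch; cryptocheckinit.sh could then just test the exit status instead of grepping):
#!/bin/bash
# sketch: exit 1 if any honeypot check fails
locations=("/share/ConfData" "/share/ConfData/Archive"
           "/share/ConfData/Application" "/share/ConfData/Graphics")
status=0
for i in "${locations[@]}"
do
    # a missing honeypot directory is itself suspicious, so flag it
    cd "$i/aaaCryptoAudit" || { status=1; continue; }
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk || status=1
done
exit $status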
I'm having trouble getting the 'complete' function in the fish shell to behave as I would like, and I've been searching for an answer for days now.
Summary
Essentially I need to provide tab directory auto-completion as if I were in a different directory to the one I am currently in. It should behave exactly as 'cd' and 'ls' do, but with the starting point in another directory. It seems like such a trivial thing to be able to do, but I can't find a way to make it work.
Explanation
Example folder structure below
- root
  - foo
    - a
      - dir1
        - subdir1
      - dir2
        - subdir2
    - b
      - dir3
        - subdir3
      - dir4
        - subdir4
I am running these scripts whilst in the 'root' directory, but I need tab auto-complete to behave as if I was in the 'foo' directory.
testfunc -d a/dir2/subdir2
Instead of
testfunc -d foo/a/dir2/subdir2
There are a lot of directories inside 'foo' and a lot of sub-directories within them, and this auto-complete behaviour is necessary to speed up our process (this script is used extensively throughout the day).
Attempted Solution
I've tried using the 'complete' builtin to get this working by specifying the directory to use, but all this managed to do was auto-complete the first level of directories with a space after the argument instead of continuing to auto-complete like 'cd' would.
complete -x -c testfunc -a "(__fish_complete_directories ./foo/)"
Working bash version
I have already got this working in Bash and I am trying to port it over to fish. See below for the Bash version.
_testfunc()
{
local cur prev words cword
_init_completion || return
compopt +o default
case $prev in
testfunc)
COMPREPLY=( $( compgen -W '-d' -- "$cur" ) )
compopt +o nospace
return
;;
-d)
local curdir=$(pwd)
cd foo/ 2>/dev/null && _filedir -d
COMPREPLY=( $( compgen -d -S / -- "$cur" ) )
cd "$curdir"
return
;;
esac
} &&
complete -o nospace -F _testfunc testfunc
This is essentially stepping into the folder that I want, doing the autocompletion, then stepping back into the original folder that the script was run in. I was hoping this would be easier in Fish after getting it working in Bash (I need to support these two shells), but I'm just pulling my hair out.
Any help would be really appreciated!
I am not a bash completions expert, but it looks like the bash completions are implemented by changing directories, running completions, and then changing back. You can do the same in fish:
function complete_testfunc
set prevdir $PWD
cd foo
__fish_complete_directories
cd $prevdir
end
complete -x -c testfunc -a "(complete_testfunc)"
does that work for you?
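As a usage note, if you save both the function and the complete line as ~/.config/fish/completions/testfunc.fish, fish will autoload them the first time you complete testfunc.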
I'm trying to write a script for my Pi's cron that checks for network connectivity every 10 seconds. If a ping to Google fails, it writes "false" to a text file; the next time the ping succeeds, it restarts a program, because that specific program has issues with reconnecting to the network automatically.
The script seemed to be working when I was executing it from the terminal in the same directory. Then I cd'd back to / and added a bunch of comments, and now it just exits without any output, and for the life of me I can't figure out where I messed it up. I'm still relatively new to scripting, so I could be missing something absolutely obvious here, but I couldn't find anything useful on Google.
File hierarchy:
/home/pi/WEB_UI/
Inside the WEB_UI folder are both of the scripts I'm running here.
nonet.sh - the script in question
pianobar.sh - a simple script to pkill a program and reload it after 5 seconds (sketched below)
var.txt - a text file that will only ever contain "true" or "false"
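A sketch of what pianobar.sh presumably does (reconstructed from the description; the program name is assumed):
#!/bin/sh
# kill the program, wait, and relaunch it
pkill pianobar
sleep 5
pianobar &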
I've tried removing all of the comments, changing the file locations to ./, and making the while/do commands a single line, but I can't figure out where the issue is. If I run the script with sh -x, it returns:
pi@raspberrypi:~/WEB_UI $ sh -x nonet.sh
+ ping -q -c 1 -W 1 google.com
+ read line
Interestingly, I get the same result from a test script I was using that was basically:
"if var.txt says 'true', echo 'up', else echo 'down'"
I wonder if something is wrong with my sh interpreter?
#!/bin/sh
#ping google, if successful return true
if ping -q -c 1 -W 1 google.com >/dev/null; then
#read each line of var.txt into $line, then run the loop body
while read line
do
#$line expands to the value just read; if it is false:
if [ "$line" = "false" ]
then
#return network up text, run pianobar script, set var.txt to true.
echo "the network is back up"
sh /home/pi/WEB_UI/pianobar.sh
echo true > /home/pi/WEB_UI/var.txt
else
#otherwise return network is up, set var.txt to true
echo "the network is up"
echo true > /home/pi/WEB_UI/var.txt
#fi ends an if statement, done ends a while loop.
#text after done tells the while loop where to get the line variable
fi
done < /home/pi/WEB_UI/var.txt
else
while read line
do
if [ "$line" = "false" ]
then
#if var.txt is already false, ping google again
if ping -q -c 1 -W 1 google.com >/dev/null; then
#if ping works, the network is back, restart pianobar, set var to true
echo "the network is back up"
sh /home/pi/WEB_UI/pianobar.sh
echo true > /home/pi/WEB_UI/var.txt
else
#if var.txt is false, network is still down. wait.
echo "the network is still down"
fi
else
echo "the network is down"
echo false > /home/pi/WEB_UI/var.txt
fi
done < /home/pi/WEB_UI/var.txt
fi
The script SHOULD just echo a simple line saying whether the network is up, down, back up, or still down, depending on which checks it passes or fails. Any assistance would be greatly appreciated!
As Shellter said in the comments above, the issue was that I needed to add a \n to the end of the line in my var.txt.
I think I saw another post recently where while read... was frustrated by a missing \n char, so maybe you want to do printf "false\n" > file instead. Good luck.
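In other words, recreate the state file with the trailing newline:
printf 'false\n' > /home/pi/WEB_UI/var.txt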
When executing a job on LSF you can specify the working directory and create an output directory, e.g.
bsub -cwd /home/workDir -outdir /home/%J program inputfile
where it will look for inputfile in the specified working directory. The -outdir will create a new directory based on the JobId.
What I'm wondering is how you get the results created by the run in the working directory into the newly created output directory.
You can't add a command like
mv * /home/%J
as the underlying OS has no understanding of the %J identifier. Is there an option in LSF for moving the data inside the job, where it knows the job ID?
You can use the environment variable $LSB_JOBID.
mv * /data/${LSB_JOBID}/
If you copy the data inside your job script, then it will hold the compute resource during the data copy. If you're copying a small amount of data, that's not a problem. But if it's a large amount of data, you can use bsub -f so that other jobs can start while the data copy is ongoing.
bsub -outdir "/data/%J" -f "/data/%J/final < bigfile" sh script.sh
bigfile is the file that your job creates on the compute host. It will be copied to /data/%J/final after the job finishes. It even works on a non-shared filesystem.
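For the first approach, the job script itself can move its results (a sketch; program and results.out are placeholders):
#!/bin/bash
# script.sh: run the program, then move the output into the
# per-job directory created with bsub -outdir "/data/%J"
./program inputfile
mv results.out "/data/${LSB_JOBID}/"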
I use a Makefile to create PDFs of papers I'm working on. I'd also like to use make to upload the latest version to my website, which requires sftp. I thought I could do something like this (which works on the command line), but it seems that in make the EOF is getting ignored, i.e., this
website:
sftp -oPort=2222 me@mywebsite.com << EOF
cd papers
put research_paper.pdf
EOF
generates an error message
cd papers
/bin/sh: line 0: cd: papers: No such file or directory
which I think is saying "papers" doesn't exist on your local machine, i.e., the 'cd' is being executed locally, not remotely.
Couple of ideas:
use ncftp, which every Linux distro as well as brew should have: it remembers state, so the cd becomes unnecessary
use scp instead of sftp if possible
write a trivial shell script doing the EOF business and call that (see the sketch after this list)
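That third idea might look like this (a sketch using the host and file names from the question):
#!/bin/sh
# upload.sh: keep the here-document out of the Makefile
sftp -oPort=2222 me@mywebsite.com <<'EOF'
cd papers
put research_paper.pdf
EOF
The Makefile rule then reduces to calling ./upload.sh.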
For what it is worth, here is my script to push tarballs to the CRAN winbuilder -- it passes the target directory and the file as arguments to ncftpput.
#!/bin/bash
function errorexit () {
echo "Error: $1"
exit 1
}
if [ "$#" -lt 1 ]; then
errorexit "Need to specify argument file"
fi
if [ ! -f "${1}" ]; then
errorexit "File ${1} not found, aborting."
fi
ncftpput win-builder.r-project.org /R-release "${1}"
ncftpput win-builder.r-project.org /R-devel "${1}"
I then just do wbput.sh foo_1.2-3.tar.gz and off it goes...
You cannot (normally) put a single command on multiple lines in a Make recipe, because each recipe line runs in its own shell, so here-documents are a no-go. Try this instead:
website: research_paper.pdf
printf 'cd papers\nput $<\n' \
| sftp -oPort=2222 me@mywebsite.com
The target obviously depends on the PDF, so I made it an explicit dependency, as well.
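sftp's batch mode (-b) is another way to keep the commands out of the recipe lines; a sketch (recipe lines are tab-indented, and batch mode assumes non-interactive, key-based authentication):
website: research_paper.pdf
	printf 'cd papers\nput %s\n' '$<' > sftp.batch
	sftp -oPort=2222 -b sftp.batch me@mywebsite.com
	rm -f sftp.batch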
I have a small bash script that downloads files from another server; sometimes the download gets interrupted. How can I check whether wget completed the download successfully?
If it gets interrupted, then it may have part of the file?
If it has part of the file, knowing whether the file is complete or not comes down to two different checks.
Either you have the actual file already, maybe from another attempt of the same script, and you can compare the two using md5 to ensure they're identical.
The other, less accurate method can be done in a single attempt: run du -sk on the file, and if it's above a certain size it passes. This by no means ensures the file is 100% there; it could have been cut off at 99%.
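For example (a sketch with a hypothetical 10000 KB threshold):
# du -sk prints "size<TAB>path"; keep the first field
size=$(du -sk downloaded_file | cut -f1)
[ "$size" -ge 10000 ] && echo "size check passed"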
But you could also look into wget -c, which resumes downloads, so maybe run it twice with this option:
wget --help 2>&1 |grep "\-\-continue"
-c, --continue resume getting a partially-downloaded file.
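wget also sets a non-zero exit status when a download fails, so you can drive the retry from that (a sketch; $url is a placeholder):
# try the download; on failure, resume it with -c
if ! wget "$url"; then
    wget -c "$url" || echo "download still incomplete" >&2
fi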
If it is a web server you are in control of, you could install:
https://metacpan.org/pod/Apache::OpenIndex
I think this displays the md5 sum in the directory index, so you can then parse that and compare it to the local md5 sum of your downloaded file; if there is a mismatch, run wget -c.
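The comparison itself could look something like this (a sketch; how $remote_md5 is parsed out of the index is left open, and $url is a placeholder):
# compare the server-published checksum against the local file
remote_md5="...parsed from the server's index..."
local_md5=$(md5sum downloaded_file | awk '{print $1}')
if [ "$remote_md5" != "$local_md5" ]; then
    wget -c "$url"   # mismatch: resume/retry the download
fi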