Bash shell script: infinite loop login script - sh

Currently trying to write a script to display user logins and logouts on a network. The current code is as follows:
echo "The current users are:"
who | awk '{print $1}' | sort > tempfile1
cp tempfile1 tempfile2
more tempfile1
while true
do
    who | awk '{print $1}' | sort > temp2
    cmp -s tempfile1 tempfile2
    case "$?" in
        0)
            echo "No user has logged in/out in the last 3 seconds."
            ;;
        1)
            user=`comm -23 tempfile1 tempfile2`
            file=`grep $user tempfile1 tempfile2 | cut -c 1-5`
            [ $file == "tempfile1" ]
            echo "User "$user" has logged out."
            [ $file == "tempfile2" ];
            echo "User "$user" has logged in."
            ;;
    esac
    rm tempfile1
    mv tempfile2 tempfile1
    sleep 3
done
Running the script, I get the following:
The current users are:
No user has logged in/out in the last 3 seconds.
mv: cannot stat ‘tempfile2’: No such file or directory
rm: cannot remove ‘tempfile1’: No such file or directory
mv: cannot stat ‘tempfile2’: No such file or directory
I am fairly certain there is a syntax issue within this code somewhere, but I am blind to it. I have compared it to other similar examples of this type of script to no avail. If anyone can help point out how much of an idiot I am, that would be super helpful. Cheers.

At the end of the first time through the loop you rm tempfile1 and then mv tempfile2 to tempfile1. When you get back to the top of the loop and do the cmp, you don't have both files.
Is who | awk '{print $1}' | sort > temp2 supposed to be who | awk '{print $1}' | sort > tempfile2? (temp2 is never referenced anywhere else...)
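Putting both fixes together, a minimal corrected sketch of the loop (writing to tempfile2 instead of temp2, and letting mv recreate tempfile1 each pass so both files exist when cmp runs) might look like:
while true
do
    who | awk '{print $1}' | sort > tempfile2        # fixed: was temp2
    if cmp -s tempfile1 tempfile2; then
        echo "No user has logged in/out in the last 3 seconds."
    else
        logged_out=`comm -23 tempfile1 tempfile2`    # lines only in tempfile1 = logged out
        logged_in=`comm -13 tempfile1 tempfile2`     # lines only in tempfile2 = logged in
        [ -n "$logged_out" ] && echo "User $logged_out has logged out."
        [ -n "$logged_in" ] && echo "User $logged_in has logged in."
    fi
    mv tempfile2 tempfile1                           # tempfile2 is recreated next pass
    sleep 3
done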


How to send an email alert on a log file entry with awk in Linux?

I'm working on a script to monitor a log of executed jobs, and I want to receive a mail notification with the line where the job appears in the body of the mail. This is what I have so far, but it keeps throwing an error; I can make it work, but only with an empty body. Could you please help?
Job="jobname"
tail -fn0 logfile.log | awk -v Jobs="$Job"'/jobname/
{
system("grep -i "Jobs" logfile.log | mail -s "Jobs Is Completed" mail@mail.com")
exit
}'
What's wrong with just:
Job="jobname"
tail -fn0 logfile.log |
grep --line-buffered -i "jobname.*$Job" |
mail -s "$Job Is Completed" mail@mail.com
Your use of jobname as a literal in two places, plus a shell variable named Job and an awk variable named Jobs populated from it (both containing jobname anyway), was very confusing, so hopefully you can tweak the above to do whatever you need if the variable usage is not quite right.
watchdog.sh
#!/bin/bash

mailWorker(){
    while read -r line; do
        if [[ $line == *$match* ]]
        then
            # mailing
            grep -i "$match" "$logfile" | mail -s "$match Is Completed" mail@mail.com
            break
        fi
    done
}

logfile="/path/to/logfile.log"
match="jobname"

if [ ! -f "$logfile" ]; then
    touch "$logfile"
fi

tail -f --lines=0 "$logfile" | mailWorker
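To keep the watcher running after you log out (assuming you saved the script as watchdog.sh, per the heading above, and made it executable):
chmod +x watchdog.sh
nohup ./watchdog.sh &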

How to remove some text in long filenames from a bunch of files in a directory

I can't boot my Windows PC today, so I am on my second OS, Linux Mint. With my limited knowledge of Linux and shell scripts, I really have no idea how to do this.
I have a bunch of files in a directory, generated by my system, and I need to remove the last 12 characters to the left of ".txt".
Sample filenames:
filename1--2c4wRK77Wk.txt
filename2-2ZUX3j6WLiQ.txt
filename3-8MJT42wEGqQ.txt
filename4-sQ5Q1-l3ozU.txt
filename5--Way7CDEyAI.txt
Desired result:
filename1.txt
filename2.txt
filename3.txt
filename4.txt
filename5.txt
Any help would be greatly appreciated.
Here is a programmatic way of doing this while still trying to account for pesky edge cases:
#!/bin/sh
set -e

find . -name "filename*" > /tmp/filenames.list

while read -r FILENAME; do
    NEW_FILENAME="$(
        echo "$FILENAME" | \
        awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}' | \
        awk -F '/' '{print $NF}' | \
        awk -F '-' '{print $1}'
    )"
    EXTENSION="$(echo "$FILENAME" | awk -F '.' '{print $NF}')"
    if [[ "$EXTENSION" == "backup" ]]; then
        continue
    else
        cp "$FILENAME" "${FILENAME}.backup"
    fi
    if [[ -z "$EXTENSION" ]]; then
        mv "$FILENAME" "$NEW_FILENAME"
    else
        mv "$FILENAME" "${NEW_FILENAME}.${EXTENSION}"
    fi
done < /tmp/filenames.list
Create a List of Files to Edit
First up, create a list of the files that you would like to edit (assuming that they all start with filename) under the current working directory (.):
find . -name "filename*" > /tmp/filenames.list
If they don't start with filename, fret not, you can always use a find command like:
find . -type f > /tmp/filenames.list
Iterate over a list of files
To accomplish this we use a while read loop:
while read -r LINE; do
    # perform action
done < file
If you have the ability to use bash, you could always use process substitution:
while read -r LINE; do
    # perform action
done < <(
    find . -type f
)
Create a rename variable
Next, we create a variable NEW_FILENAME, using awk to strip off the file extension and any spaces left behind when the fields are rejoined:
awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}'
We could just use the following, though, if you know for certain that there aren't multiple periods in the filename:
awk -F '.' '{print $1}'
The leading ./ is stripped off via
awk -F '/' '{print $NF}'
although this could easily have been done via basename.
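For example, basename strips the leading directory components:
basename "./filename1--2c4wRK77Wk.txt"   # prints: filename1--2c4wRK77Wk.txt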
With the following command, we strip everything after the first -:
awk -F '-' '{print $1}'
Creating backups
Feel free to remove this if you deem it unnecessary:
if [[ "$EXTENSION" == "backup" ]]; then
    continue
else
    cp "$FILENAME" "${FILENAME}.backup"
fi
One thing that we definitely don't want is to make backups of backups. The above logic accounts for this.
Renaming the files
One thing that we don't want to do is append a period to a filename that doesn't have an extension. This accounts for that.
if [[ -z "$EXTENSION" ]]; then
    mv "$FILENAME" "$NEW_FILENAME"
else
    mv "$FILENAME" "${NEW_FILENAME}.${EXTENSION}"
fi
Other things of note
Odds are that your Linux Mint installation has a bash shell, so you could simplify some of these commands. For instance, you could use parameter expansion: echo "$FILENAME" | awk -F '.' '{print $NF}' would become "${FILENAME##*.}"
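A quick illustration of the two expansions involved, using one of the sample filenames:
FILENAME="filename1--2c4wRK77Wk.txt"
echo "${FILENAME##*.}"   # strips the longest *. prefix: prints txt
echo "${FILENAME%.*}"    # strips the shortest .* suffix: prints filename1--2c4wRK77Wk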
[[ is not defined in POSIX sh, so you will likely just need to replace [[ with [, but review this document first:
https://mywiki.wooledge.org/BashFAQ/031
From the pattern of the filenames, it looks like the first token can be picked off before the "-". Use the following command to rename these files after changing to the directory where the files are located:
for srcFile in `ls -1`; do fileN=`echo "$srcFile" | cut -d"-" -f1`; targetFile="$fileN.txt"; mv "$srcFile" "$targetFile"; done
If the above observation is wrong, the following command can be used to remove exactly the 12 characters before .txt (4 chars):
for srcFile in `ls -1`; do fileN=`echo "$srcFile" | rev | cut -c17- | rev`; targetFile="$fileN.txt"; mv "$srcFile" "$targetFile"; done
In ls -1, a pattern can be added to filter the files from the current directory if that is required.
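For example, to restrict the loop to the sample files shown above:
ls -1 filename*.txt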

Why can't I filter tail's output multiple times through pipes?

Unexpectedly, this fails (no output; tried in sh, zsh, bash):
echo "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played | sed 's#pl#st#g'
Note that using grep twice also fails, which indicates that the specific commands used are quite irrelevant:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played | grep played
grep alone works:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played
played
sed alone works:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | sed 's#pl#st#g'
foo
stayed
bar
With cat instead of tail, it works:
# echo -e "foo\nplayed\nbar" > /tmp/t && cat /tmp/t | grep played | sed 's#pl#st#g'
stayed
With journalctl --follow, it fails just like with tail.
What's the reason for being unable to pipe twice?
It's a buffering issue - the first grep buffers its output when it is writing to a pipe, but not when it is writing to a terminal. See http://mywiki.wooledge.org/BashFAQ/009 for additional info.
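With GNU grep you can force line buffering, which makes the original pipeline work as expected:
echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep --line-buffered played | sed 's#pl#st#g'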

How do I use shell script output as input for a Perl script (to delete files with permissions)?

I am running a shell script on the mailq to create a list of files to delete, but I cannot delete them with the shell script due to permissions (with root permissions the script works, but I cannot always give the root password to a user). I would like to send the output list of files to Perl in order to delete them, with the Perl program having root privileges.
The shell script is:
#!/usr/bin/ksh
WORKFILE="/tmp/check.mq"
MAILLIST="yagyavalkbhatt@yahoo.com"
mailq|grep -B1 -i temporarily |grep -iv deferred |egrep -i 'jan|feb|mar|apr|may|june|jul|aug|sept|oct|nov|dec' |awk -F" " '{print $1}' |awk '{print substr($0,10,14)}'|tee $WORKFILE |awk '{print "*" $1}'|tee mail.mq
mailq|grep -B1 -i unknown |egrep -i 'jan|feb|mar|apr|may|june|jul|aug|sept|oct|nov|dec' |awk -F" " '{print $1}' |awk '{print substr($0,10,14)}'|tee $WORKFILE |awk '{print "*" $1}'|tee mail.mq
mailq|grep -B1 -i lookup |grep -iv deferred |egrep -i 'jan|feb|mar|apr|may|june|jul|aug|sept|oct|nov|dec' |awk -F" " '{print $1}' |awk '{print substr($0,10,14)}'|tee $WORKFILE |awk '{print "*" $1}'|tee mail.mq
cat mail.mq | while read file; do rm /var/spool/mqueue/$file;done
find . -type f -name "mail.mq" |rm -rf mail.mq
which creates output like:
*##### where ##### is a unique 5-digit number identifying a file in the mailq.
I want to know how I can delete these files with root privileges from any user.
One good way of handling this is to allow users to create lists like this and save the output to files that goes in a given directory. You can then have a cron job periodically (frequently or infrequently) go through the lists submitted by users, check the content to make sure it's legit (you only want to allow certain files in certain directories to be listed for deletion), delete the files, then delete the list.
Giving root permissions to users is not a good idea. There are usually better ways to get the job done.
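A rough sketch of what such a root cron job could look like (the drop directory /var/spool/mq-delete and the .list suffix are assumptions for illustration, not part of the original setup):
#!/bin/sh
# Run from root's crontab. Users drop lists of queue IDs (one per line)
# into /var/spool/mq-delete/; this job validates each entry, deletes the
# corresponding queue file, then deletes the list.
for list in /var/spool/mq-delete/*.list; do
    [ -e "$list" ] || continue
    while read -r id; do
        # Only accept plain alphanumeric queue IDs - no paths, no globs
        case "$id" in
            *[!A-Za-z0-9]*|"") continue ;;
        esac
        rm -f "/var/spool/mqueue/$id"
    done < "$list"
    rm -f "$list"    # the list has been processed, so remove it
done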
Redirect STDERR to STDOUT and read it together.
Example:
$ls_output = `ls -l 2>&1`;

tail and grep log and mail (linux)

I want to tail a log file with grep and send it via mail, like:
tail -f /var/log/foo.log | grep error | mail -s subject name@example.com
How can I do this?
You want to send an email when emailing errors occur? That might fail ;)
You can however try something like this:
tail -f "$log" |
grep --line-buffered error |
while read line
do
    echo "$line" | mail -s subject "$email"
done
This sends an email for every line in the grep output.
Run the above shell script with
nohup ./monitor.sh &
so it will keep running in the background.
I'll have a go at this. Perhaps I'll learn something if my icky bash code gets scrutinised. There is a chance there are already a gazillion solutions for this, but I am not going to find out, as I am sure you have trawled the depths and widths of the cyberocean.

It sounds like what you want can be separated into two bits: 1) at regular intervals, obtain the 'latest tail' of the file, and 2) if the latest tail actually exists, send it by e-mail. For the regular intervals in 1), use cron. For obtaining the latest tail in 2), you'll have to keep track of the file size.

The bash script below does that - it's a solution to 2) that can be invoked by cron. It uses the cached file size to compute the chunk of the file it needs to mail. Note that for a file myfile, another file .offset.myfile is created. Also, the script does not allow path components in the file name. Rewrite, or fix it in the invocation [e.g. (cd /foo/bar && segtail.sh zut), assuming it is called segtail.sh].
#!/usr/local/bin/bash

file=$1
size=0
offset=0

if [[ $file =~ / ]]; then
    echo "$0 does not accept path components in the file name" >&2
    exit 1
fi

if [[ -e .offset.$file ]]; then
    offset=$(<".offset.$file")
fi

if [[ -e $file ]]; then
    size=$(stat -c "%s" "$file")   # this assumes GNU stat, possibly present as gstat. CHECK!
                                   # (gstat can also be Ganglia's status tool - careful).
fi

if (( $size < $offset )); then     # file might have been reduced in size
    echo "reset offset to zero" >&2
    offset=0
fi

echo $size > ".offset.$file"

if [[ -e $file && $size -gt $offset ]]; then
    tail -c +$(($offset+1)) "$file" | head -c $(($size - $offset)) | mail -s "tail $file" foo@bar
fi
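For the cron part, a hypothetical crontab entry running the script every five minutes (using the invocation suggested above) could be:
*/5 * * * * cd /foo/bar && ./segtail.sh zut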
How about:
grep ERROR catalina.out | mail -s "catalina.out errors" blah@myaddress.com