I have a job in gitlab-ci.yml that looks like this:
job_name:
  script:
    - .../ExeName.exe > Output.txt
  needs:
    - ...
  stage: ...
  tags:
    - ...
Edit: the jobs are using PowerShell.
ExeName.exe is an executable created by Visual Studio. Output.txt contains the output of the program and is created when ExeName.exe is run. I want to know if a string exists in the Output.txt file. If the string exists, the job should fail; if it does not exist, the job should pass. How can I do that?
I guess the job in question runs an image that contains standard POSIX tools.
So in particular, you may want to rely on grep:
either writing:
job_name:
  script:
    - .../ExeName.exe > Output.txt
    - '! grep -e "forbidden string" Output.txt'
(as by default, grep succeeds if it finds the string, while you are interested in the opposite behavior, hence the shell negation operator !) or:
job_name:
  script:
    - .../ExeName.exe > Output.txt
    - grep -q -v -e "forbidden string" Output.txt
(Note that -v inverts matching line by line rather than for the file as a whole, so this second variant only fails when every line contains the forbidden string.) Or you may want to manually use an if, if you want to display more text in the logs:
job_name:
  script:
    - .../ExeName.exe > Output.txt
    - if grep -q -e "forbidden string" Output.txt; then echo "Found forbidden string"; false; else echo "OK."; fi
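All of these rely on grep's exit status; if you want to sanity-check that behaviour locally first, here is an illustrative shell session (using the same file name and string as above):
$ echo "nothing to see here" > Output.txt
$ grep -q -e "forbidden string" Output.txt; echo $?
1
$ ! grep -q -e "forbidden string" Output.txt; echo $?
0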
As an aside, you might be interested in setting your generated text file Output.txt as a job artifact.
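For example, a minimal sketch (artifacts:paths and artifacts:when are standard GitLab CI keywords; when: always keeps the file even when the grep check makes the job fail):
job_name:
  script:
    - .../ExeName.exe > Output.txt
    - '! grep -e "forbidden string" Output.txt'
  artifacts:
    paths:
      - Output.txt
    when: always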
First let me explain that I have very little expertise with bash scripting. I only use it for very simple applications.
My script is used to generate a grep command.
I use the echo command as an interim debug tool. I figure that if I can get the echo command to show the command I want to execute, all I have to do is remove the echo and the quotes and the command inside the echo should do what I want.
Here is my script (called grepper3.sh). Again, I am not an expert at this:
#!/bin/bash
echo "what should I grep?"
read this
echo "grep -Ri \"$this\" > \"$this\""
Here is what happens when I execute:
master@master-Latitude-E6440:~$ ./grepper3.sh
what should I grep?
all that
grep -Ri "all that" > "all that"
The grep command being echoed by the code is exactly what I want. But when I remove the echo and the surrounding double quotes:
was: echo "grep -Ri \"$this\" > \"$this\""
changed to: grep -Ri \"$this\" > \"$this\"
I get this:
master@master-Latitude-E6440:~$ ./grepper3.sh
what should I grep?
all that
./grepper3.sh: line 5: \"$this\": ambiguous redirect
I'm guessing that there is a simple fix, but I can't figure it out.
You can add . as the starting directory for the recursive search, and quote $this so patterns with spaces stay intact (once the outer echo quotes are gone, the \" are literal characters, so the redirection target expands to two words, which is where the "ambiguous redirect" error comes from):
#!/bin/bash
echo "what should I grep?"
read this
grep -Ri "$this" . > "$this"
All you need to do is change the quotation marks to backticks.
Your new code would be:
#!/bin/bash
echo "what should I grep?"
read this
echo `grep -Ri \"$this\" > \"$this\"`
Here is what I finally got to work. Thanks for all of your help!
#!/bin/bash
echo "what should I grep?"
read this
DEST="/home/master/results/$this"
grep -Ri "$this" > "$DEST"
The output of a certain command contains:
>> ..................546 Jobs Retrieved
List of jobs Retrieved: 1-4,6-12,14,2017-2018 ............
>>> 30 Jobs Done
Jobs terminated: retrieve them with: crab -getoutput <List of jobs>
List of jobs: 203,376,578,765,803,809,811
.....................
And I want to extract only 203,376,578,765,803,809,811, which occurs after the line 30 Jobs Done. After that I need to put this list as a string into a variable, so I can use it in some command. How can I do it?
I tried it this way:
I put the output in a status.log file and ran:
$ sed -e '1,/Jobs Done/d' status.log | grep "List of jobs:"
Then I got only this line:
List of jobs: 578,765,811,836,1068,1096,1128
but I don't need the phrase "List of jobs"
Please help me.
Thank you very much in advance.
You can use this:
awk '/30 Jobs Done/ {f=1;next} f && /List of jobs:/ {print $4;exit}' file
203,376,578,765,803,809,811
When it finds 30 Jobs Done, it sets flag f to true.
If it then finds List of jobs: and flag f is true, it prints field 4 and exits.
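Since you also said you need the list in a variable for a later command, here is a small sketch building on that awk (assuming the output was saved to status.log, and reusing the crab call from your question):
jobs=$(awk '/30 Jobs Done/ {f=1;next} f && /List of jobs:/ {print $4;exit}' status.log)
crab -getoutput "$jobs"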
Using simple tools:
egrep '^\s+List of jobs: [0-9,]+$' status.log | cut -d: -f2
The pattern for egrep matches the whole line and the cut returns everything after the :.
That means you will get a leading space in the result. If that's a problem:
egrep '^\s+List of jobs: [0-9,]+$' status.log | cut -d: -f2 | cut -c2-
You could do this:
grep -A2 "Jobs Done" yourfile | awk '/List of jobs:/{print $4}'
Grab the two lines following "Jobs Done" (-A2), then look for "List of jobs" with awk and print the 4th field.
I want to tail a log file, filter it with grep, and send the result via mail, like:
tail -f /var/log/foo.log | grep error | mail -s subject name@example.com
How can I do this?
You want to send an email when emailing errors occur? That might fail ;)
You can however try something like this:
tail -f $log |
  grep --line-buffered error |
  while read line
  do
    echo "$line" | mail -s subject "$email"
  done
This sends an email for every line in the grep output.
Run the above shell script with
nohup ./monitor.sh &
so it will keep running in the background.
I'll have a go at this. Perhaps I'll learn something if my icky bash code gets scrutinised. There is a chance there are already a gazillion solutions to do this, but I am not going to find out, as I am sure you have trawled the depths and widths of the cyberocean.
It sounds like what you want can be separated into two bits: 1) at regular intervals obtain the 'latest tail' of the file, and 2) if the latest tail actually exists, send it by e-mail. For the regular intervals in 1), use cron. For obtaining the latest tail in 2), you'll have to keep track of the file size.
The bash script below does that - it's a solution to 2) that can be invoked by cron. It uses the cached file size to compute the chunk of the file it needs to mail. Note that for a file myfile another file .offset.myfile is created. Also, the script does not allow path components in the file name. Rewrite, or fix it in the invocation [e.g. (cd /foo/bar && segtail.sh zut), assuming it is called segtail.sh].
#!/usr/local/bin/bash

file=$1
size=0
offset=0

if [[ $file =~ / ]]; then
    echo "$0 does not accept path components in the file name" >&2
    exit 1
fi

if [[ -e .offset.$file ]]; then
    offset=$(<".offset.$file")
fi

if [[ -e $file ]]; then
    size=$(stat -c "%s" "$file")  # this assumes GNU stat, possibly present as gstat. CHECK!
                                  # (gstat can also be Ganglia's status tool - careful).
fi

if (( $size < $offset )); then    # file might have been reduced in size
    echo "reset offset to zero" >&2
    offset=0
fi

echo $size > ".offset.$file"

if [[ -e $file && $size -gt $offset ]]; then
    tail -c +$(($offset+1)) "$file" | head -c $(($size - $offset)) | mail -s "tail $file" foo@bar
fi
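To cover the "regular intervals" half, a crontab entry along these lines would do (illustrative only; it reuses the made-up /foo/bar and zut from above and assumes the script was saved there as segtail.sh):
*/10 * * * * cd /foo/bar && ./segtail.sh zut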
How about:
grep ERROR catalina.out | mail -s "catalina.out errors" blah@myaddress.com
I'm trying to run a perl script from within a bash script (I'll change this design later on, but for now, bear with me). The bash script receives the argument that it will run. The argument to the script is as follows:
test.sh "myscript.pl -g \"Some Example\" -n 1 -p 45"
Within the bash script, I simply run the argument that was passed:
#!/bin/sh
$1
However, in my perl script the -g argument only gets "Some (that is, with the quote included), instead of Some Example. Even if I quote it, it gets cut off because of the whitespace.
I tried escaping the whitespace, but it doesn't work... any ideas?
To run it as posted test.sh "myscript.pl -g \"Some Example\" -n 1 -p 45" do this:
#!/bin/bash
eval "$1"
This causes the $1 argument to be parsed by the shell so the individual words will be broken up and the quotes removed.
Or if you want you could remove the quotes and run test.sh myscript.pl -g "Some Example" -n 1 -p 45 if you changed your script to:
#!/bin/bash
"$#"
The "$#" gets replaced by all the arguments $1, $2, etc., as many as were passed in on the command line.
Quoting is normally handled by the parser, which isn't seeing the quotes when you substitute the value of $1 in your script.
You may have more luck with:
#!/bin/sh
eval "$1"
which gives:
$ sh test.sh 'perl -le "for (@ARGV) { print; }" "hello world" bye'
hello world
bye
Note that simply forcing the shell to interpret the quoting with "$1" won't work because then it tries to treat the first argument (i.e., the entire command) as the name of the command to be executed. You need the pass through eval to get proper quoting and then re-parsing of the command.
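A tiny illustration of that difference, using echo as a stand-in for the real script (hypothetical session):
$ cmd='echo "Some Example"'
$ $cmd
"Some Example"
$ eval "$cmd"
Some Example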
This approach is (obviously?) dangerous and fraught with security risks.
I would suggest you pass the perl script name as a separate word; then you can quote the parameters when referring to them, and still easily extract the script name without needing the shell to split the words, which is the fundamental problem you have.
test.sh myscript.pl "-g \"Some Example\" -n 1 -p 45"
and then
#!/bin/sh
$1 "$2"
If you really have to do this (for whatever reason), why not just do:
sh test.sh "'Some Example' -n 1 -p 45"
in test.sh:
RUN=myscript.pl
echo `$RUN $1`
I have a file that contains paths like these:
C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c
And I would like to get the following file:
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
Note that non-matching paths should not be modified.
I know how to do this with sed for a fixed number of subdirs, but a variable number is giving me trouble. Actually, I would have to use many s/x/y/ expressions (as many as the max depth... not very elegant).
Maybe with awk, but this kind of magic is beyond my skills.
FYI, I need this trick to correct some gcov binary files on a cygwin platform.
I am dealing with binary files; therefore, I might have the following kind of data:
bindata\bindata%bindataC:\good\foo.c
which should be translated as:
bindata\bindata%bindataC:/good/foo.c
The first \ must not be translated, even though it is on the same line.
However, I have just checked my .gcno files while editing this text and it looks like all the paths are flanked with zeros, so most of the answers below should fit.
sed -e '/^C:\\good/ s/\\/\//g' input_file.txt
I would recommend you look into the cygpath utility, which converts path names from one format to another. For instance on my machine:
$ cygpath `pwd`
/home/jericson
$ cygpath -w `pwd`
D:\root\home\jericson
$ cygpath -m `pwd`
D:/root/home/jericson
Here's a Perl implementation of what you asked for:
$ echo 'C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c' | perl -pe 's|\\|/|g if /good/'
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
It works directly with the string, so it will work anywhere. You could combine it with cygpath, but it only works on machines that have that path:
perl -pe '$_ = `cygpath -m $_` if /good/'
(Since I don't have C:\good on my machine, I get output like C:goodfoo.c. If you use a real path on your machine, it ought to work correctly.)
You want to substitute '/' for all '\' but only on the lines that match the good directory path. Both sed and awk will let you do this by having a LHS (matching) expression that only picks the lines with the right path.
A trivial sed script to do this would look like:
/[Cc]:\\good/ s/\\/\//g
For a file:
c:\bad\foo
c:\bad\foo\bar
c:\good\foo
c:\good\foo\bar
You will get the output below:
c:\bad\foo
c:\bad\foo\bar
c:/good/foo
c:/good/foo/bar
Here's how I would do it in awk:
# fixpaths.awk
/C:\\good/ { gsub(/\\/, "/") }   # flip \ to / only on lines containing the good path
{ print > "outfile" }            # write every line, converted or not, so nothing is dropped
Then run it using the command:
awk -f fixpaths.awk paths.txt; mv outfile paths.txt
Or with some help from good ol' Bash:
#!/bin/bash
cat file | while read -r LINE
do
    if <bad_condition>
    then
        echo "$LINE" >> newfile
    else
        echo "$LINE" | sed -e 's/\\/\//g' >> newfile
    fi
done
Try this:
sed -re '/\\good\\/ s/\\/\//g' temp.txt
or this:
awk -F'\\' '{ if ($2 == "good") { OFS = "/"; $1 = $1 } print $0 }' temp.txt