Remove old Elasticsearch indices with curl when ELK is installed in a Docker container

ELK is installed in Docker. Old logs and indices fill the server's hard disk, which eventually crashes the ELK container.

Run the shell script below inside the container (or on the host) where ELK is installed:
#!/bin/bash
# Cutoff stamp (YYYYMMDD) for indices older than 200 days
DAYSAGO=$(date --date="200 days ago" +%Y%m%d)
ALLLINES=$(/usr/bin/curl -s -XGET "http://127.0.0.1:9200/_cat/indices?v" | egrep logstash)
echo
echo "THIS IS WHAT SHOULD BE DELETED FOR ELK:"
echo
echo "$ALLLINES" | while read LINE
do
    # Index name is column 3; its date part follows the first '-', with dots removed
    FORMATEDLINE=$(echo "$LINE" | awk '{ print $3 }' | awk -F'-' '{ print $2 }' | sed 's/\.//g')
    if [ "$FORMATEDLINE" -lt "$DAYSAGO" ]
    then
        TODELETE=$(echo "$LINE" | awk '{ print $3 }')
        echo "http://127.0.0.1:9200/$TODELETE"
    fi
done
echo
echo -n "If this makes sense, Y to continue, N to exit [Y/N]: "
read INPUT
if [ "$INPUT" == "Y" ] || [ "$INPUT" == "y" ] || [ "$INPUT" == "yes" ] || [ "$INPUT" == "YES" ]
then
    echo "$ALLLINES" | while read LINE
    do
        FORMATEDLINE=$(echo "$LINE" | awk '{ print $3 }' | awk -F'-' '{ print $2 }' | sed 's/\.//g')
        if [ "$FORMATEDLINE" -lt "$DAYSAGO" ]
        then
            TODELETE=$(echo "$LINE" | awk '{ print $3 }')
            /usr/bin/curl -XDELETE "http://127.0.0.1:9200/$TODELETE"
            sleep 1
        fi
    done
else
    echo "SCRIPT CLOSED BY USER, BYE ..."
    echo
    exit
fi
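The per-line parsing the script relies on can be checked in isolation. A minimal sketch on one hypothetical `_cat/indices` row (the index name and stats below are made-up sample data):

```shell
# Sample row in the shape `_cat/indices` returns (hypothetical data)
LINE="yellow open logstash-2020.01.15 abc123 5 1 1000 0 1mb 1mb"

# Column 3 is the index name; its date part follows the first '-', dots removed
INDEX=$(echo "$LINE" | awk '{ print $3 }')
STAMP=$(echo "$INDEX" | awk -F'-' '{ print $2 }' | sed 's/\.//g')

echo "$INDEX $STAMP"    # logstash-2020.01.15 20200115
```

Because the dots are stripped, the stamp becomes a plain number and `[ "$STAMP" -lt "$DAYSAGO" ]` compares dates numerically.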

Related

How to compare a string value in an IF else condition using shell script

#!/bin/sh
if [ $# -lt 2 ]
then
echo "Usage $0 servername envtype"
exit 1
fi
servername=$1
envtype=$2
srv=$(echo $servername | tr "a-z" "A-Z")
srvtype=$(echo $envtype | tr "a-z" "A-Z")
echo $srv
echo $srvtype
if [ "${srv}" <> "SCP" ];
then
if [" ${srv}" <> "RENTD" ];
then
echo "Invalid servername"
exit 1;
fi
fi
I am getting this error when running the shell script
[ 2 -lt 2 ]
servername=scp
envtype=prd
echo scp
tr a-z A-Z
srv=SCP
echo prd
tr a-z A-Z
srvtype=PRD
echo SCP
SCP
echo PRD
PRD
[ SCP ]
[ SCP ]
/tmp/testupper.sh: 19: [ SCP: not found
Found my issue.
[" ${srv}" <> "RENTD" ]
there is a space being prefixed to the srv variable, between the " and the $: " ${srv}"
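Worth noting (my addition, not part of the original answer): `<>` is not a string operator in `test`/`[` either; POSIX `test` uses `=` and `!=`. A sketch of the intended check:

```shell
srv="SCP"
# Valid only if srv is SCP or RENTD; `!=` is the POSIX string inequality operator
if [ "$srv" != "SCP" ] && [ "$srv" != "RENTD" ]; then
    result="Invalid servername"
else
    result="Valid servername"
fi
echo "$result"    # Valid servername
```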

Bash or Python efficient substring matching and filtering

I have a set of filenames in a directory, some of which are likely to share substrings, though these are not known in advance. This is a sorting exercise: I want to move the files with the longest ordered-letter substring match together into a subdirectory named for that number of letters, then proceed with progressively shorter matches until no matches of two or more letters remain. Ignore extensions. Case insensitive. Ignore special characters.
Example.
AfricanElephant.jpg
elephant.jpg
grant.png
ant.png
el_gordo.tif
snowbell.png
Starting from maximum length matches to minimum length matches will result in:
./8/AfricanElephant.jpg and ./8/elephant.jpg
./3/grant.png and ./3/ant.png
./2/snowbell.png and ./2/el_gordo.tif
Completely lost on an efficient bash or python way to do what seems a complex sort.
I found some awk code which is almost there:
{
count=0
while ( match($0,/elephant/) ) {
count++
$0=substr($0,RSTART+1)
}
print count
}
where temp.txt contains a list of the files and is invoked as eg
awk -f test_match.awk temp.txt
The drawbacks are that a) this is hardwired to look for "elephant" as a string (I don't know how to make it take an input string, rather than a file, plus a test string to count against), and
b) I really just want to call a bash function to do the sort as specified.
If I had this I could wrap some bash script around this core awk to make it work.
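For drawback a), awk can take the test string as a variable via `-v` and the subject string on stdin, so neither needs to be hardwired. A minimal sketch (the `count_matches` name is my own, not from the post; the pattern is treated as a regex by `match()`):

```shell
count_matches() {
    # $1 = string to search in, $2 = substring/regex to count
    printf '%s\n' "$1" | awk -v pat="$2" '{
        count = 0
        # Same scan as the hardwired version: advance past each match start
        while (match($0, pat)) { count++; $0 = substr($0, RSTART + 1) }
        print count
    }'
}

count_matches "AfricanElephant" "ant"    # prints 1
count_matches "banana" "an"              # prints 2 (overlap-aware)
```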
function longest_common_substrings () {
shopt -s nocasematch
for file1 in * ; do for file in * ; do \
if [[ -f "$file1" ]]; then
if [[ -f "$file" ]]; then
base1=$(basename "$file" | cut -d. -f1)
base2=$(basename "$file1" | cut -d. -f1)
if [[ "$file" == "$file1" ]]; then
echo -n ""
else
echo -n "$file $file1 " ; $HOME/Scripts/longest_common_substring.sh "$base1" "$base2" | tr -d '\n' | wc -c | awk '{$1=$1;print}' ;
fi
fi
fi
done ;
done | sort -r -k3 | awk '{ print $1, $3 }' > /tmp/filesort_substring.txt
while IFS= read -r line; do \
file_to_move=$(echo "$line" | awk '{ print $1 }') ;
directory_to_move_to=$(echo "$line" | awk '{ print $2 }') ;
if [[ -f "$file_to_move" ]]; then
mkdir -p "$directory_to_move_to"
\gmv -b "$file_to_move" "$directory_to_move_to"
fi
done < /tmp/filesort_substring.txt
shopt -u nocasematch
where $HOME/Scripts/longest_common_substring.sh is
#!/bin/bash
shopt -s nocasematch
if ((${#1}>${#2})); then
long=$1 short=$2
else
long=$2 short=$1
fi
lshort=${#short}
score=0
for ((i=0;i<lshort-score;++i)); do
for ((l=score+1;l<=lshort-i;++l)); do
sub=${short:i:l}
[[ $long != *$sub* ]] && break
subfound=$sub score=$l
done
done
if ((score)); then
echo "$subfound"
fi
shopt -u nocasematch
Kudos to the original solution for computing the match in the script above, which I found elsewhere on this site.
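For reference, the helper's quadratic scan can be exercised on the example pair. A self-contained sketch (the `lcs` name is mine, and `tr`-based lowercasing stands in for `shopt -s nocasematch` so it also runs under plain `sh`):

```shell
lcs() {
    # Case-insensitive longest common substring of $1 and $2
    a=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    b=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
    if [ ${#a} -ge ${#b} ]; then long=$a short=$b; else long=$b short=$a; fi
    lshort=${#short} score=0 subfound=""
    i=0
    while [ $i -lt $((lshort - score)) ]; do
        l=$((score + 1))
        while [ $l -le $((lshort - i)) ]; do
            # Candidate substring of length l starting at offset i
            sub=$(printf '%s' "$short" | cut -c $((i + 1))-$((i + l)))
            case $long in
                *"$sub"*) subfound=$sub score=$l ;;
                *) break ;;
            esac
            l=$((l + 1))
        done
        i=$((i + 1))
    done
    printf '%s\n' "$subfound"
}

lcs "AfricanElephant" "elephant"    # prints: elephant (length 8)
```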

How do I fix 'command not found' that popped out when I tried 'egrep' from a variable?

I wanted to make a program that searches for all the lines that contain all the factors given, from a file mydata. I tried to egrep the first factor from mydata and save the result in a variable a. Then I tried to egrep the next factor from a and save the result back to a, repeating until all the factors have been applied. But when I executed the program, it said "command not found" on line 14.
if [ $# -eq 0 ]
then
echo -e "Usage: phoneA searchfor [...searchfor]\n(You didn't tell me what you want to search for.)"
else
a=""
for i in $*
do
if [ -z "$a" ]
then
a=$(egrep "$i" mydata)
else
a=$("$a" | egrep "$i")
fi
done
awk -f display.awk "$a"
fi
I expected all the lines including all the factors outputted on the screen in the pattern that I made in display.awk.
$a contains data, not a command. You need to write that data to the pipe.
if [ $# -eq 0 ]
then
printf '%s\n' "Usage: phoneA searchfor [...searchfor]" "(You didn't tell me what you want to search for.)" >&2
exit 1
fi
a=""
for i in "$@"; do
if [ -z "$a" ]; then
a=$(egrep "$i" mydata)
else
a=$(printf '%s' "$a" | egrep "$i")
fi
done
printf '%s\n' "$a" | awk -f display.awk
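The same chained-filter idea, demonstrated end to end on made-up data (the temp file and its contents are hypothetical stand-ins for `mydata`):

```shell
# Build a throwaway data file standing in for mydata (hypothetical contents)
tmpdata=$(mktemp)
printf '%s\n' "alice 555-1234 NYC" "bob 555-9999 NYC" "carol 555-1234 LA" > "$tmpdata"

a=""
for i in "555-1234" "NYC"; do
    if [ -z "$a" ]; then
        a=$(grep -F "$i" "$tmpdata")            # first factor: filter the file
    else
        a=$(printf '%s\n' "$a" | grep -F "$i")  # later factors: filter prior results
    fi
done
printf '%s\n' "$a"    # only the line matching every factor remains
rm -f "$tmpdata"
```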

Parsing AutoSys JIL with perl

I have an assignment to parse out AutoSys JIL files. A JIL job definition is a config file that the AutoSys scheduler reads in and runs. Imagine a file formatted like this, with thousands of job definitions like the one below stacked on top of each other in exactly the same format, all beginning with the header comment and ending with the timezone line.
/* ----------------- COME_AND_PLAY_WITH_US_DANNY ----------------- */
insert_job: COME_AND_PLAY_WITH_US_DANNY job_type: CMD
command: /bin/bash -ls
machine: capser.com
owner: twins
permission: foo,foo
date_conditions: 1
days_of_week: mo,tu,we,th,fr
start_times: "04:00"
description: "Forever, and ever and ever"
std_in_file: "/home/room217"
std_out_file: "${CASPERSYSLOG}/room217.out"
std_err_file: "${CASPERSYSLOG}/room217.err
alarm_if_fail: 1
profile: "/autosys_profile"
timezone: US/Eastern
This is the script. I need to extract the job, machine and command from the job definition above. It works fine, but eventually I will want to store the information in some kind of container and send it somewhere, whereas this script writes the results line by line to the terminal. Right now I am redirecting the results to a temporary file.
#!/foo/bar/perl5/core/5.10/exec/bin/perl
use strict;
use warnings;
use File::Basename ;
my($job, $machine, $command) ;
my $filename = '/tmp/autosys.jil_output.padc';
open(my $fh, '<:encoding(UTF-8)', $filename)
or die "Could not open file '$filename' $!";
my $count = 0;
while (my $line = <$fh>) {
#chomp $line;
if($line =~ /\/\* -{17} \w+ -{17} \*\//) {
$count = 1; }
elsif($line =~ /(alarm_if_fail:)/) {
$count = 0 ; }
elsif ($count) {
if ($line =~ m/insert_job: (\w+).*job_type: CMD/) {
$job = $1 ;
}
elsif($line =~ m/command:(.*)/) {
$command = $1 ;
}
elsif($line =~ m/machine:(.*)/) {
$machine = $1 ;
print "$job\t $machine\t $command \n ";
}
}
#sleep 1 ;
}
My question is: when I place the print $job, $machine, $command statement within the last elsif block, it works fine. However, when I place it outside the last elsif block, as in the example below, the output is duplicated over and over again: each line appears four or five times. I do not understand why I have to put the print statement within the last elsif block to get the script to print one line at a time, correctly.
elsif ( $line =~ m/machine:(.*)/ ) {
$machine = $1;
}
print "$job\t $machine\t $command \n ";
Reformat of above code for readability
#!/foo/bar/perl5/core/5.10/exec/bin/perl
use strict;
use warnings;
use File::Basename;
my ( $job, $machine, $command );
my $filename = '/tmp/autosys.jil_output.padc';
open( my $fh, '<:encoding(UTF-8)', $filename )
or die "Could not open file '$filename' $!";
my $count = 0;
while ( my $line = <$fh> ) {
#chomp $line;
if ( $line =~ /\/\* -{17} \w+ -{17} \*\// ) {
$count = 1;
}
elsif ( $line =~ /(alarm_if_fail:)/ ) {
$count = 0;
}
elsif ( $count ) {
if ( $line =~ m/insert_job: (\w+).*job_type: CMD/ ) {
$job = $1;
}
elsif ( $line =~ m/command:(.*)/ ) {
$command = $1;
}
elsif ( $line =~ m/machine:(.*)/ ) {
$machine = $1;
print "$job\t $machine\t $command \n ";
}
}
# sleep 1;
}
As I've said in my comment, please format your code sensibly. Without doing so you will get people either ignoring your question or being grumpy about answering, like me.
Let's assume that the unidentified text block is just a sample of your input
Let's also assume that, even though your code works fine with your sample data, there are some data blocks in the real data that don't work
On top of that, I'm assuming that any data field value that contains spaces requires enclosing quotes, which makes your example command: /bin/bash -ls incorrect, and invalid syntax
Please also make sure that you have given a proper example of your problem with runnable code and data. If I execute the code that you show against your sample data then everything works fine, so what problem do you have?
As far as I can tell, you want to display the insert_job, machine, and command fields from every JIL data block whose job_type field is CMD. Is that right?
Here's my best guess: xxfelixxx's comment is correct, and you are simply printing all the fields that you have collected every time you read a line from the data file
My solution is to transform each data block into a hash.
It is dangerous to use comments to delineate the blocks, and you have given no information about the ordering of the fields, so I have to assume that the insert_job field comes first. That makes sense if the file is to be used as a list of imperatives, but the additional job_type field on the same line is weird. Is that a genuine sample of your data, or another problem with your example?
Here's a working solution to my imagination of your problem.
#!/foo/bar/perl5/core/5.10/exec/bin/perl
use strict;
use warnings 'all';
my $data = do {
local $/;
<DATA>;
};
my @data = grep /:/, split /^(?=insert_job)/m, $data;
for ( @data ) {
my %data = /(\w+) \s* : \s* (?| " ( [^"]+ ) " | (\S+) )/gx;
next unless $data{job_type} eq 'CMD';
print "@data{qw/ insert_job machine command /}\n";
}
__DATA__
/* ----------------- COME_AND_PLAY_WITH_US_DANNY ----------------- */
insert_job: COME_AND_PLAY_WITH_US_DANNY job_type: CMD
command: /bin/bash -ls
machine: capser.com
owner: twins
permission: foo,foo
date_conditions: 1
days_of_week: mo,tu,we,th,fr
start_times: "04:00"
description: "Forever, and ever and ever"
std_in_file: "/home/room217"
std_out_file: "${CASPERSYSLOG}/room217.out"
std_err_file: "${CASPERSYSLOG}/room217.err
alarm_if_fail: 1
profile: "/autosys_profile"
timezone: US/Eastern
/* ----------------- COME_AND_PLAY_WITH_US_AGAIN_DANNY ----------------- */
insert_job: COME_AND_PLAY_WITH_US_AGAIN_DANNY job_type: CMD
command: /bin/bash -ls
machine: capser.com
owner: twins
permission: foo,foo
date_conditions: 1
days_of_week: mo,tu,we,th,fr
start_times: "04:00"
description: "Forever, and ever and ever"
std_in_file: "/home/room217"
std_out_file: "${CASPERSYSLOG}/room217.out"
std_err_file: "${CASPERSYSLOG}/room217.err
alarm_if_fail: 1
profile: "/autosys_profile"
timezone: US/Eastern
/* ----------------- NEVER_PLAY_WITH_US_AGAIN_DANNY ----------------- */
insert_job: NEVER_PLAY_WITH_US_AGAIN_DANNY job_type: CMD
command: /bin/bash -rm *
machine: capser.com
owner: twins
permission: foo,foo
date_conditions: 1
days_of_week: mo,tu,we,th,fr
start_times: "04:00"
description: "Forever, and ever and ever"
std_in_file: "/home/room217"
std_out_file: "${CASPERSYSLOG}/room217.out"
std_err_file: "${CASPERSYSLOG}/room217.err
alarm_if_fail: 1
profile: "/autosys_profile"
timezone: US/Eastern
output
COME_AND_PLAY_WITH_US_DANNY capser.com /bin/bash
COME_AND_PLAY_WITH_US_AGAIN_DANNY capser.com /bin/bash
NEVER_PLAY_WITH_US_AGAIN_DANNY capser.com /bin/bash
This is a ksh solution that turns a JIL file into a semicolon-delimited file you can open in Excel.
#!/usr/bin/ksh
# unix script to flatten autorep -q output
resetVar()
{
AIF=""
AD=""
AH=""
BF=""
BN=""
BS=""
BT=""
COM=""
COD=""
DC=""
DOW=""
DES=""
EC=""
IJ=""
JL=""
JT=""
MAC=""
MES=""
MRA=""
NR=""
OWN=""
PER=""
PRI=""
PRO=""
RC=""
RW=""
SM=""
ST=""
SEF=""
SOF=""
TRT=""
WF=""
WFMS=""
WI=""
LSD=""
LST=""
LED=""
LET=""
STA=""
RUN=""
}
writePartToFile()
{
echo "$AIF;$AD;$AH;$BF;$BN;$BS;$BT;$COM;$COD;$DC;$DOW;$DES;$EC;$IJ;$JL;$JT;$MAC;$MES;$MRA;$NR;$OWN;$PER;$PRI;$PRO;$RC;$RW;$SM;$ST;$SEF;$SOF;$TRT;$WF;$WFMS;$WI" >> $TO_TPM
#echo "$AIF;$AD;$AH;$BF;$BN;$BS;$BT;$COM;$COD;$DC;$DOW;$DES;$EC;$IJ;$JL;$JT;$MAC;$MES;$MRA;$NR;$OWN;$PER;$PRI;$PRO;$RC;$RW;$SM;$ST;$SEF;$SOF;$TRT;$WF;$WFMS;$WI"
resetVar
}
JOB_NAME="flatten JIL"
part1=""
part2=""
#---------------------------------
if test "$1." = "."
then
echo "Missing first parameter (jil file to flatten)";
exit 1;
fi
if test "$2." = "."
then
echo "Missing second parameter (resulting flat file)";
exit 1;
fi
TO_FLATTEN=$1
TO_RESULT=$2
CLE_FILE="lesCles"
CLE_TMP="lesClesTmp"
TO_TPM="tempFichier"
TO_STATUS="statusFichier"
rm -f $TO_RESULT
rm -f $CLE_TMP
rm -f $CLE_FILE
rm -f $TO_TPM
rm -f $TO_STATUS
echo 'alarm_if_fail;auto_delete;auto_hold;box_failure;box_name;box_success;box_terminator;command;condition;date_conditions;days_of_week;description;exclude_calendar;insert_job;job_load;job_terminator;machine;max_exit_success;max_run_alarm;n_retrys;owner;permission;priority;profile;run_calendar;run_window;start_mins;start_times;std_err_file;std_out_file;term_run_time;watch_file;watch_file_min_size;watch_interval;last_start_date;last_start_time;last_end_date;last_end_time;status;run' >> $TO_RESULT;
while read line; do
if test "${line#*:}" != "$line"
then
cle="$(echo "$line" | cut -d":" -f 1)"
#echo "cle = $cle"
part2="$(echo "$line" | cut -d":" -f 2)"
#echo "part2 = $part2"
val="$(echo "$part2" | cut -d" " -f 2)"
#echo "val = $val"
fi
if test "$cle" = "insert_job"
then
# we are on the first line: nothing to flush yet
if test "$IJ." = "."
then
:
else
if test "$BN." = "."
then
echo $IJ >> $CLE_TMP
else
echo $BN >> $CLE_TMP
fi
writePartToFile
fi
IJ=$val
JT="$(echo "$line" | cut -d":" -f 3)"
else
# we are not on the first line
val=$part2
case $cle in
alarm_if_fail) AIF=$val;;
auto_delete) AD=$val;;
auto_hold) AH=$val;;
box_failure) BF=$val;;
box_name) BN=$val;;
box_success) BS=$val;;
box_terminator) BT=$val;;
command) COM=$val;;
condition) COD=$val;;
date_conditions) DC=$val;;
days_of_week) DOW=$val;;
description) DES=$val;;
exclude_calendar) EC=$val;;
insert_job) IJ=$val;;
job_load) JL=$val;;
job_terminator) JT=$val;;
machine) MAC=$val;;
max_exit_success) MES=$val;;
max_run_alarm) MRA=$val;;
n_retrys) NR=$val;;
owner) OWN=$val;;
permission) PER=$val;;
priority) PRI=$val;;
profile) PRO=$val;;
run_calendar) RC=$val;;
run_window) RW=$val;;
start_mins) SM=$val;;
start_times) ST=$val;;
std_err_file) SEF=$val;;
std_out_file) SOF=$val;;
term_run_time) TRT=$val;;
watch_file) WF=$val;;
watch_file_min_size) WFMS=$val;;
watch_interval) WI=$val;;
esac
fi
done < $TO_FLATTEN;
# process the last occurrence
if test "$BN." = "."
then
echo $IJ >> $CLE_TMP
else
echo $BN >> $CLE_TMP
fi
writePartToFile
echo "The keys"
cat $CLE_TMP | sort | uniq > $CLE_FILE
cat $CLE_FILE
rm $CLE_TMP
#------------------------------
while read line; do
autorep -J ${line} -w >> $TO_STATUS;
done < $CLE_FILE;
#----------------------------------------
echo " Results"
while read line; do
unJob="$(echo "$line" | cut -d";" -f 14)"
details="$(grep -w "$unJob" "$TO_STATUS" | head -n 1)"
LSD="$(echo "$details" | awk '{print $2}')"
if test "$LSD" = "-----"
then
LST=""
LED="$(echo "$details" | awk '{print $3}')"
if test "$LED" = "-----"
then
LET=""
STA="$(echo "$details" | awk '{print $4}')"
RUN="$(echo "$details" | awk '{print $5}')"
else
LET="$(echo "$details" | awk '{print $4}')"
STA="$(echo "$details" | awk '{print $5}')"
RUN="$(echo "$details" | awk '{print $6}')"
fi
else
LST="$(echo "$details" | awk '{print $3}')"
LED="$(echo "$details" | awk '{print $4}')"
if test "$LED" = "-----"
then
LET=""
STA="$(echo "$details" | awk '{print $5}')"
RUN="$(echo "$details" | awk '{print $6}')"
else
LET="$(echo "$details" | awk '{print $5}')"
STA="$(echo "$details" | awk '{print $6}')"
RUN="$(echo "$details" | awk '{print $7}')"
fi
fi
echo " line= ${line};${LSD};${LST};${LED};${LET};${STA};${RUN}"
echo "${line};${LSD};${LST};${LED};${LET};${STA};${RUN}" >> $TO_RESULT
resetVar
done < $TO_TPM;
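The key/value split inside the read loop can be checked on a single sample line (hypothetical input, mirroring the script's `cut` pipeline):

```shell
# One sample JIL line (hypothetical input)
line='machine: capser.com'
cle="$(echo "$line" | cut -d":" -f 1)"     # key before the colon: "machine"
part2="$(echo "$line" | cut -d":" -f 2)"   # " capser.com" (leading space kept)
val="$(echo "$part2" | cut -d" " -f 2)"    # second space-delimited field
echo "$cle=$val"    # machine=capser.com
```

The leading space in `part2` is why `cut -d" " -f 2` picks the second field: the empty string before the space counts as field one.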

Resolve name by inode in current directory

How can I resolve a name from a given inode in the current directory? The following script prints all filenames of symlinks pointing to a specified file that is passed as an argument; the list should be sorted by ctime.
#!/usr/bin/ksh
IFS="`printf '\n\t'`"
USAGE="usage: symlink.sh <file>"
get_ctime() {
perl -se 'use File::stat; $file=lstat($filename); print $file->ctime' -- -filename="$1"
}
stat_inode() {
perl -se 'use File::stat; $file=stat($filename); if (defined $file) { print $file->ino; }' -- -filename="$1"
}
lstat_inode() {
perl -se 'use File::stat; $file=lstat($filename); if (defined $file) { print $file->ino; }' -- -filename="$1"
}
if [ $# -eq 0 ]; then
echo "$USAGE"
exit 1
fi
FILE_NAME="$1"
FILE_INODE=$(stat_inode "$FILE_NAME")
if [ ! -e "$FILE_NAME" ]; then
echo "no such file \"$FILE_NAME\""
exit 1
fi
for LINK in ./* ./.[!.]* ;do
if [ -L "$LINK" ]; then
TARGET_INODE=$(stat_inode "$LINK")
if [ ! -z "$TARGET_INODE" ]; then
if [ "$FILE_INODE" -eq "$TARGET_INODE" ]; then
echo $(get_ctime "$LINK") $(lstat_inode "$LINK");
fi
fi
fi
done | sort -nk1 | awk '{print $2}'
Basically, I'd like to pipe awk to some kind of lookup function like this: | awk ' ' | lookup
I'd really appreciate if someone suggested a more elegant way to accomplish the task.
OS: SunOS 5.10
Shell: KSH
Something like this?
$ find . -maxdepth 1 -inum 2883399
./.jshintrc
$
or:
$ echo 2883399 | xargs -IX find . -maxdepth 1 -inum X
./.jshintrc
$
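That `find -inum` idea can be wrapped as the `lookup` filter the question asks for, reading inode numbers on stdin (a sketch; it assumes a `find` that supports `-maxdepth`, such as GNU find, which Solaris /usr/bin/find may lack):

```shell
# Read inode numbers on stdin, print matching names in the current directory
lookup() {
    while read -r inum; do
        find . -maxdepth 1 -inum "$inum"
    done
}

# Intended use at the end of the script's pipeline:
#   done | sort -nk1 | awk '{print $2}' | lookup
```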