I have several (15) files named file1.out, file2.out, file3.out, ..., file15.out. I am reading each file and doing some calculations. Here is a sample:
for file in file*.out; do
    echo "$file"
done
But this way the files are read in the order file1.out, file10.out, ..., file15.out, file2.out, ..., file9.out. Is there any way to read these files in ascending numeric order, i.e. file1.out, then file2.out, and so on?
Since you know the number of files you have, you can use an integer for loop:
for i in $(seq 1 15); do
    echo "file$i.out"
done
For full POSIX compliance (seq is not a standard utility), use a while loop and an explicit counter:
i=1
while [ "$i" -le 15 ]; do
    echo "file$i.out"
    i=$((i+1))
done
Rename your files
If you have fewer than 100 files, you can pad the numbers to two digits:
file1.out => file01.out
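A minimal sketch of that rename, assuming every name matches file<N>.out with a plain decimal number:
for f in file*.out; do
    n="${f#file}"                                  # strip the "file" prefix
    n="${n%.out}"                                  # strip the ".out" suffix
    new="$(printf 'file%02d.out' "$((10#$n))")"    # 10# forces base 10, %02d zero-pads
    [ "$f" != "$new" ] && mv "$f" "$new"           # skip names that are already padded
done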
Change your sort algorithm
That is, use ls -v instead of the plain glob file*.out:
for i in $(ls -v file*.out); do
    echo "$i"
done
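If your sort supports version sort (GNU sort -V), a sketch that gets the same ordering without parsing ls output, assuming the file names contain no newlines:
printf '%s\n' file*.out | sort -V | while IFS= read -r f; do
    echo "$f"
done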
As an example, to keep my files private and avoid long text, I will use different inputs of the following form:
INPUT1.cfg:
TC # aa # D317
TC # bb # D314
TC # cc # D315
TC # dd # D316
INPUT2.cfg:
BL;nn;3
LY;ww;3
LO;xx;3
TC;vv;3
TC;dd;3
OD;pp;3
TC;aa;3
What I want to do is iterate over the name (column 2) in the rows of INPUT1 and compare it with the name (column 2) in the rows of INPUT2; if they match, the line of INPUT2 goes to an output file, otherwise the script reports that the table was not found. Here is my attempt:
#!/bin/bash
input1="input1.cfg"
input2="input2.cfg"
cat "$input1" | while read -r line
do
    TableNameIN=`echo $line | cut -d"#" -f2`
    cat "$input2" | while read -r line
    do
        TableNameOUT=`echo $line | cut -d";" -f2`
        if echo "$TableNameOUT" | grep -q $TableNameIN
        then
            echo "$line" >> output.txt
        else
            echo "Table $TableNameIN not found"
        fi
    done
done
This is what I get as a result:
Table bb not found
Table bb not found
Table bb not found
Table cc not found
Table cc not found
Table cc not found
I managed to write out what matches, but the problem with my code is that it outputs "table not found" for every row, whereas I want it written only once, after comparing all the lines.
Here is the output I want to get:
Table bb not found
Table cc not found
Can anyone help me with this? PS: I don't want to use awk, because this is just one part of my code and I already use sh.
Assumptions:
for file input2.cfg, the 2nd column (table name) is unique
input2.cfg is not so large that we risk using up all memory storing it in an associative array (otherwise we could store the table names from input1.cfg, assuming that is the smaller file, in the array and swap the processing order of the two files; a sketch of that variant appears after the output below)
there are no explicit requirements for the data to be sorted (otherwise we may need to add a sort or two)
a bash solution is sufficient (based on the #!/bin/bash shebang in the OP's current code)
There are many ways to slice-n-dice this one (awk being my preference, but the OP doesn't want to use awk). For this particular answer I'll pull the awk steps out into separate bash commands.
NOTE: While we could use a set of nested loops (as in the OP's code), I've opted to use an associative array to store input2.cfg, thus eliminating the need to repeatedly scan input2.cfg.
#!/usr/bin/bash

input1=input1.cfg
input2=input2.cfg

> output.txt                                       # clear out the target file

# load ${input2} into an associative array
unset lines
typeset -A lines                                   # associative array for storing contents of ${input2}

while read -r line
do
    x="${line%;*}"                                 # use parameter expansion
    tabname="${x#*;}"                              # to parse out table name
    lines["${tabname}"]="${line}"                  # add to array
done < "${input2}"

# process ${input1}
while read -r c1 c2 tabname rest_of_line
do
    [[ -v lines["${tabname}"] ]] &&                  # if tabname has an entry in our array
        echo "${lines[${tabname}]}" >> output.txt && # then dump the associated line (from ${input2}) to output.txt
        continue                                     # process next line from ${input1}
    echo "Table ${tabname} not found"                # otherwise print 'not found' message
done < "${input1}"

# display contents of output.txt
echo "++++++++++++++++ output.txt"
cat output.txt
echo "++++++++++++++++"
This generates the following:
Table bb not found
Table cc not found
++++++++++++++++ output.txt
TC;aa;3
TC;dd;3
++++++++++++++++
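For completeness, here is a rough sketch of the swapped variant mentioned in the assumptions: when input2.cfg is the larger file, store input1.cfg's table names in the array instead and stream input2.cfg:
#!/usr/bin/bash

typeset -A wanted                          # table names seen in input1.cfg
while read -r c1 c2 tabname rest_of_line   # input1.cfg lines look like: "TC # aa # D317"
do
    wanted["${tabname}"]=1
done < input1.cfg

> output.txt                               # clear out the target file
while read -r line                         # input2.cfg lines look like: "TC;aa;3"
do
    x="${line%;*}"                         # parse out the table name
    tabname="${x#*;}"
    [[ -v wanted["${tabname}"] ]] && echo "${line}" >> output.txt
done < input2.cfg
Reporting the "not found" tables is more work in this direction: you would unset each matched entry from wanted and print whatever remains after the loop.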
First of all, I'm very new to programming, so I need your help writing a Perl script to do the following on Windows.
I have a big (1 GB) timestamped log file, and it's difficult to read the logs because it takes a long time to open. My requirement is to copy the last hour of logs from the big log file into another file, then copy the next hour of data into a different file (so we will have 24 files for a day). The next day, the data in these files needs to be overwritten, or deleted and recreated.
Sample log:
09092016-00:02:00,..................
09092016-00:02:08,..................
09092016-00:02:15,..................
09092016-00:02:18,..................
Please help me with this, and thanks in advance for your help.
A simpler solution would be to use the split command to break the file into manageable sizes:
split -l 1000 logfile logfile.
This will split your logfile into smaller files of 1000 lines each, named logfile.aa, logfile.ab, and so on.
You can then just use grep to find the files that contain the day you need:
grep -l 09092016 logfile.*
Alternatively, to split the log by hour directly:
logfile="./log"
while read -r d m y h; do
    grep "^$d$m$y-$h" "$logfile" > "partial-${y}${m}${d}-${h}.log"
done < <(sed -n 's/\(..\)\(..\)\(....\)-\(..\)\(.*\)/\1 \2 \3 \4/p' "$logfile" | sort -u)
This is easy, but not efficient: it reads the whole big logfile 25 times for the split (once to gather the existing ddmmyyyy-hh stamps in the log, and once more for every distinct date-hour found).
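A single-pass alternative is to read the log once and append each line to the file for its hour; a sketch, assuming every line starts with a ddmmyyyy-hh:mm:ss stamp as in the sample:
logfile="./log"
while IFS= read -r line; do
    stamp=${line%%,*}                    # e.g. 09092016-00:02:00
    d=${stamp:0:2} m=${stamp:2:2} y=${stamp:4:4} h=${stamp:9:2}
    printf '%s\n' "$line" >> "partial-${y}${m}${d}-${h}.log"
done < "$logfile"
Bear in mind that a plain shell read loop over a 1 GB file is slow in itself; this saves passes over the file, not necessarily wall-clock time.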
I have a very simple MP3 player, and the order in which it plays audio files is based on the file names; the rule is that there must be a 3-digit number at the beginning of each file name, such as:
001file.mp3
002file.mp3
003file.mp3
I want to write a fish shell script sortmp3 to add numbers to the files of a directory. Say directory myfiles contains the files:
aaa.mp3
bbb.mp3
ccc.mp3
When I run sortmp3 myfiles, the file names will be changed to:
001aaa.mp3
002bbb.mp3
003ccc.mp3
But my questions are:
how do I generate sequential numbers?
how do I make sure each number is exactly 3 digits wide?
I would write this, which makes no assumptions about how many files there are in a directory:
function sortmp3
    set -l files *
    set -l i
    for i in (seq (count $files))
        echo mv $files[$i] (printf "%03d%s" $i $files[$i])
    end
end
Remove the "echo" if you like how it works.
You can generate sequential numbers with the seq tool, an external program.
This will only take care of the first part; it won't pad to three characters.
To do that, there's a variety of choices:
printf '%s\n' 00(seq 0 99) | rev | cut -c 1-3 | rev
printf '%s\n' 00(seq 0 99) | sed 's/^.*\(...\)$/\1/'
The 00(seq 0 99) part will generate the numbers from "0" to "99" with two zeroes prepended, i.e. from "000" to "0099". The later parts of the pipeline keep only the last three characters, removing the superfluous leading zeroes.
Or with the next fish version, you can use the new string tool:
string sub -s -3 -- 00(seq 0 99)
Depending on your specific situation, you should use the seq command to generate sequential numbers or the math command to increment a counter. To format the number with a predictable number of leading zeros, use the printf command:
set idx 12
printf '%03d' $idx
I have 2 commands that I need to run back to back 16 times for 2 sets of data. I have labeled the files file#a1_100.gen (set 1) and file#a2_100.gen (set 2). The 100 is then replaced by multiples of 100 up to 1600 (100, 200, ..., 1000, ..., 1600).
Example 1: For the first set
Command 1: perl myprogram1.pl file#a1.pos abc#a1.ref xyz#a1.ref file#a1_100.gen file#a1_100.out
Command 2: perl myprogram2.pl file#a1_100.out file#a1_100.out.long
Example 2: For the first set
Command 1: perl myprogram1.pl file#a1.pos abc#a1.ref xyz#a1.ref file#a1_200.gen file#a1_200.out
Command 2: perl myprogram2.pl file#a1_200.out file#a1_200.out.long
These 2 commands are repeated 16 times for both set 1 and set 2. For set 2, the filenames change to file#a2...
I need a command that will run this on its own by changing the filename for the 2 sets, running it 16 times for each set.
Any help will be greatly appreciated! Thanks!
This is probably most easily done with a shell script. As with Perl, TMTOWTDI — there's more than one way to do it.
for num in $(seq 1 16)
do
    perl myprogram1.pl file#a1.pos abc#a1.ref xyz#a1.ref file#a1_${num}00.gen file#a1_${num}00.out
    perl myprogram2.pl file#a1_${num}00.out file#a1_${num}00.out.long
done
(You could use {1..16} in place of $(seq 1 16) to generate the numbers. You might also note that the # characters in the file names discombobulate the SO Markdown system.)
Or you could use:
for num in $(seq 100 100 1600)
do
    perl myprogram1.pl file#a1.pos abc#a1.ref xyz#a1.ref file#a1_${num}.gen file#a1_${num}.out
    perl myprogram2.pl file#a1_${num}.out file#a1_${num}.out.long
done
(Bash 4 and later do support a step in brace expansion, {100..1600..100}, if your shell is new enough.)
Or, better, you could use variables to hold values to avoid repetition:
POS="file#a1.pos"
ABC="abc#a1.ref"
XYZ="xyz#a1.ref"
for num in $(seq 100 100 1600)
do
    PFX="file#a1_${num}"
    GEN="${PFX}.gen"
    OUT="${PFX}.out"
    LONG="${OUT}.long"
    perl myprogram1.pl "${POS}" "${ABC}" "${XYZ}" "${GEN}" "${OUT}"
    perl myprogram2.pl "${OUT}" "${LONG}"
done
In this code, the braces around the parameter names are all optional; in the first block of code the braces in ${num} were mandatory, but in the second they are optional. Enclosing the names in double quotes is also optional here, but recommended.
Or, if you must do it in Perl, then:
use warnings;
use strict;

my $POS = "file#a1.pos";
my $ABC = "abc#a1.ref";
my $XYZ = "xyz#a1.ref";

for (my $num = 100; $num <= 1600; $num += 100)
{
    my $PFX = "file#a1_${num}";
    my $GEN = "${PFX}.gen";
    my $OUT = "${PFX}.out";
    my $LONG = "${OUT}.long";
    system("perl", "myprogram1.pl", "${POS}", "${ABC}", "${XYZ}", "${GEN}", "${OUT}");
    system("perl", "myprogram2.pl", "${OUT}", "${LONG}");
}
This is all pretty basic coding, and you can guess that it didn't take me long to generate this from the last shell script. Note the use of multiple separate strings instead of one long string in the system calls. That avoids running a shell interpreter — Perl runs perl directly.
You could use $^X instead of "perl" to ensure that you run the same Perl executable as ran the script shown. (If you have /usr/bin/perl on your PATH but you run $HOME/perl/v5.20.1/bin/perl thescript.pl, the difference might matter, but probably wouldn't.)
This question is based on this thread.
The code:
function man()
{
    man "$1" > /tmp/manual; less /tmp/manual
}
Problem: if I use even one option, the command does not know where the wanted manual is.
For instance,
man -k find
gives me an error, since the reference is wrong. The command reads -k as the manual name.
My attempt to solve the problem, in pseudo-code:
if no parameters
    run the initial code
if one parameter
    run: man "$2"
...
In other words, we need to add an option-check to the beginning such that
Pseudo-code
man $optional-option(s) "$n" > /tmp/manual; less /tmp/manual
where $n
n=1 if zero options
n=2 if 1 option
n=3 if 2 options
....
How can you make such an "option check" so that you can alter the value of $n?
Follow-up problem: how to write the two if branches for the cases n=1 and n=2?
How about passing all the arguments:
function man()
{
    command man "$@" > /tmp/manual; less /tmp/manual   # "command" bypasses this function and runs the real man
}
What is the bug in less which you mention in the title?
First, you can pass all of your function's arguments to man by using $* or $@. You can read man sh for the precise details on the difference between the two; the short story is to almost always use "$@", with the double quotes.
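As a quick illustration of the difference (a hypothetical show_args function, not part of the original code):
show_args() {
    printf '<%s> ' "$@"; echo    # "$@": each argument stays a separate word
    printf '<%s> ' "$*"; echo    # "$*": all arguments joined into one word
}
show_args "one two" three
# "$@" prints: <one two> <three>
# "$*" prints: <one two three>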
Second, the temporary file is unnecessary. You could make this a little cleaner by piping the output of man directly to less:
function man() {
    command man "$@" | less   # "command" skips this function and runs the real man
}
By the way, if you're just trying to use a different pager (man uses more and you want the fancier less), there's a commonly recognized PAGER environment variable that you can set to override the default pager. You could add this to your ~/.bashrc, for instance, to tell all programs to use less when displaying multiple screens of output:
export PAGER=less
To answer your precise question, you can check the number of arguments with $#:
if [ $# -eq 0 ]; then
    : # No arguments
elif [ $# -eq 1 ]; then
    : # One argument
# etc.
fi
You might also find the shift command helpful. It renames $2 to $1, $3 to $2, and so on. It is often used in a loop to process command-line arguments one by one:
while [ $# -gt 1 ]; do
    echo "Next argument is: $1"
    shift
done
echo "Last argument is: $1"