I'm writing a script that will fold, sort and count text in a file. I need to design the program so that, if it is given multiple filenames on the command line, it processes each one separately, one after the other. I think I could write a loop, but I don't know much about those yet, so if possible I'd like to try other options. What can I add to this so that more than one filename can be entered on the command line?
if test $# -lt 1
then
    echo "usage: $0 Enter at least one DNA filename"
    exit
fi
if test -r $*
then
    fold -w3 $* | sort | uniq -c | sort -k1,1nr -k2
else
    echo "usage: $* must be readable"
    exit
fi
A for loop will be appropriate here. The following form is used to iterate over the positional arguments:
for f; do
    # do work here using "$f" as the current argument
done
This is equivalent to a more verbose version:
for f in "$@"; do
    # do work here using "$f" as the current argument
done
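Applied to your script, a minimal sketch might look like this (same fold/sort pipeline as yours, with each filename quoted and a non-zero exit status on failure added, which the original omitted):
#!/bin/sh
if test $# -lt 1
then
    echo "usage: $0 Enter at least one DNA filename" >&2
    exit 1
fi
for f
do
    if test -r "$f"
    then
        fold -w3 "$f" | sort | uniq -c | sort -k1,1nr -k2
    else
        echo "usage: $f must be readable" >&2
        exit 1
    fi
done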
You can use a while loop and shift to iterate through the command line arguments one by one as:
if test $# -lt 1 # insufficient arguments.
then
    echo "usage: $0 Enter at least one DNA filename"
    exit 1
fi
# Loop through the arguments one by one,
# until their number ($#) becomes 0.
while test $# -gt 0
do
    if test -r "$1" # $1 is always the current argument.
    then
        fold -w3 "$1" | sort | uniq -c | sort -k1,1nr -k2
    else
        echo "usage: $1 must be readable"
        exit 1
    fi
    # shift so that the 2nd argument now comes into $1.
    shift
done
I have a picture sequence named with zero-padded numbers like so:
/path/to/file_07469.jpx
/path/to/file_07470.jpx
/path/to/file_07471.jpx
/path/to/file_07472.jpx
/path/to/file_07473.jpx
/path/to/file_07474.jpx
/path/to/file_07475.jpx
/path/to/file_07476.jpx
/path/to/file_07477.jpx
/path/to/file_07478.jpx
/path/to/file_07479.jpx
/path/to/file_07480.jpx
/path/to/file_07481.jpx
/path/to/file_07482.jpx
This is just an extract. It is thousands of files. I’d like to rename all files from a certain number on, adding / subtracting X. I’d love to use find with a regex.
#!/bin/zsh
shift=-1000
seqnumstart="$(echo "$1" | grep -Eo "\d+")"
bn="$(basename $1)"
bbn="$(echo "${bn%_*}")"
ext="$(echo "${bn##*.}")"
find "$(dirname $1)" -name "$bbn*$ext" -print0 | while read -d $'\0' file
do
    seqnum="$(echo "$file" | grep -Eo "\d+")"
    seqnum="$(echo "${seqnum#"${seqnum%%[!0]*}"}")"
    if [[ "$seqnum" -ge "$seqnumstart" ]]; then
        seqnumnew=$(($seqnum + $shift))
        seqnumnew=$(printf %05d $seqnumnew)
        filenew="$(echo $file | sed -E 's [0-9]+ '$seqnumnew' g')"
        mv "$file" "$filenew"
    fi
done
How can I improve my code? It is very slow. I'm on a Mac (zsh).
zmv is a utility in zsh that can do a lot of filename manipulation and looping for you. Try this:
zmv -n 'p/file_(<7000-7999>).jpx' 'p/file_$(printf "%05d" $(($1 - 1000))).jpx'
Some of the pieces:
zmv: an autoload function; use autoload -Uz zmv to make it available (this is usually added to .zshrc).
-n: no-op. With this option, zmv will just print what would have happened, giving you an idea if the command is correct. Remove this to actually mv the files.
(...): grouping operator for zmv. This identifies sections in the name that you want to change; this section is referenced in the 'to' argument as $1.
<7000-7999>: glob operator for a range. Note that leading zeroes are not always required.
$(printf "%05d" ...): zero-padding.
$((...)): arithmetic.
$1: reference to the parenthetical value in the 'from' argument. This is where zmv's magic happens - this is substituted for each matching filename.
As you likely know, you'll need to do the renaming in groups or in a specific order to avoid trying to change a name to a name that already exists. zmv will usually halt when it encounters collisions like that.
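For instance, if you were shifting the numbers up rather than down, a minimal sketch (assuming the same file_NNNNN.jpx names in the current directory) could process the names in descending order so each target name is vacated before it is needed; the (On) glob qualifier sorts descending, and mv -n refuses to overwrite as a safety net:
for f in file_<7000-7999>.jpx(On); do
    num=${${f%.jpx}#file_}                              # extract the padded number
    mv -n -- "$f" "file_$(printf '%05d' $((10#$num + 1000))).jpx"
done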
This is much faster:
#!/bin/zsh
shift=1000
seqnumstart="$(echo "$1" | grep -Eo "\d+")"
lastfile="$(find "$(dirname $1)" -name "*.jpx" | sort | tail -1)"
seqnumend="$(echo "$lastfile" | grep -Eo "\d+")"
bn="$(basename $1)"
bbn="$(echo "${bn%_*}")"
# extension
ext="$(echo "${bn##*.}")"
# base path before the padded number
bp="$(echo "${1%_*}")"
function buildpath {
    echo "$bp"_"$1"."$ext"
}
for i in {$seqnumstart..$seqnumend}
do
    unpad="$(echo $i | sed 's/^0*//')"
    seqnumnew="$(($unpad + $shift))"
    seqnumnewpad="$(printf %05d $seqnumnew)"
    op="$(buildpath "$i")"
    np="$(buildpath "$seqnumnewpad")"
    mv "$op" "$np"
done
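Usage would be something like this (hypothetical script name shift_names.sh; the single argument is the first file whose sequence number should be shifted):
chmod +x shift_names.sh
./shift_names.sh /path/to/file_07469.jpx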
I was trying to do some quick filename cleanup at the shell (zsh, if it matters). Renaming files. (I'm using cp instead of mv just to be safe)
foreach f (\#*.ogg)
    cp $f `echo $f | perl -pe 's/\#\d+ (.+)$/"\1"/'`
end
Now, I know there are tools to do stuff like this, but for personal interest I'm wondering how I can do it this way. Right now, I get an error:
cp: target `When.ogg"' is not a directory
Where 'When.ogg' is the last part of the filename. I've tried adding quotes (see above) and escaping the spaces, but nonetheless this is what I get.
Is there a reason I can't use the output of a perl one-liner as the final argument to another command line tool?
It looks like you have a space in the file names being processed, so each of your cp command lines evaluates to something like
cp \#nnnn When.Ogg When.ogg
When the cp command sees more than two arguments, the last one must be a target directory name for all the files to be copied to - hence the error message. Because your source filename ($f) contains a space it is being treated as two arguments - cp sees three args, rather than the two you intend.
If you put double quotes around the first $f that should prevent the two 'halves' of the name from being treated as separate file names:
cp "$f" `echo ...
This is what you need in bash, hope it's good for zsh too.
cp "$f" "`echo $f | perl -pe 's/\#\d+ (.+)$/\1/'`"
If the filename contains spaces, you also have to quote the second argument of cp.
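Putting the two fixes together, the corrected loop would look like this (a sketch based on the answers above; the replacement no longer inserts literal quote characters, since the quoting now happens in the shell):
foreach f (\#*.ogg)
    cp "$f" "`echo $f | perl -pe 's/\#\d+ (.+)$/\1/'`"
end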
I often use
dir /b ... | perl -nle"$o=$_; s/.../.../; $n=$_; rename $o,$n if !-e $n"
The -l chomps the input.
The -e check is to avoid accidentally renaming all the files to one name. I've done that a couple of times.
In zsh (foreach is zsh syntax, not bash), that would be
foreach f (...)
    echo "$f" | perl -nle'$o=$_; s/.../.../; $n=$_; rename $o,$n if !-e $n'
end
or
find . -maxdepth 1 -name '...' \
    | perl -nle'$o=$_; s/.../.../; $n=$_; rename $o,$n if !-e $n'
or
find . -maxdepth 1 -name '...' -exec \
    perl -e'for (@ARGV) {
        $o=$_; s/.../.../; $n=$_;
        rename $o,$n if !-e $n;
    }' {} +
The last supports file names with newlines in them.
I'm trying to use variable substitution with a built data namespace in a (very) small embedded BusyBox shell (no man, 60 commands all in all), but I can't echo the data as soon as more than 2 values are echoed:
this is OK:
a=$(echo -e ${smtp_0} ${smtp_4})
echo $a
# returns: "0 4" as expected; all individually printed values are also echoed as expected
this does not echo the expected values, whatever the data is:
b=$(echo -e ${smtp_0} ${smtp_4} ${smtp_5})
echo $b
# returns: "54", same with double quotes (not OK; it should return "0 4 5")
The data is built like this:
"data file sample"
val0=1
val1=1
...
The shell script that reads the data:
#!/bin/sh
x=0
while read line
do
    # fetch values, removing blank and commented lines, keeping only lines starting with the data namespace
    formatted_line=$(echo $line | sed -e "/^[^a-z].*$/d" | cut -d= -f2)
    # store the file's value into an array-like data variable
    if [ ! -z $formatted_line ]; then
        eval "`echo $x | sed -e 's/.*/smtp_&=$formatted_line/'`"
        x=$(($x+1))
    fi
done < $DATA_FILE
# Then try echoing the data...
# ... see above ...
EDITED:
So it looks like nothing was mistaken in the code itself; it was the data file's end-of-line characters breaking the concatenation of the built data. I'm closing the question, and thanks to Dennis for helping get this headache fixed.
To preserve whitespace, variables should be quoted:
b=$(echo -e "${smtp_0} ${smtp_4} ${smtp_5}")
echo "$b"
However, why are you using echo?
b="${smtp_0} ${smtp_4} ${smtp_5}"
echo "$b"
Also, you should use indenting in your code (or, if you already are, you should retain it when posting questions).
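Given the edit above about the data file's line endings, here is a minimal sketch of a guard, assuming DOS-style CRLF endings were the culprit: strip the carriage return as each line is read.
while read line
do
    # drop a trailing CR so values don't carry an invisible \r
    line=$(echo "$line" | tr -d '\r')
    formatted_line=$(echo $line | sed -e "/^[^a-z].*$/d" | cut -d= -f2)
    # ... rest of the loop as before ...
done < $DATA_FILE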
I've been trying to grep an exact shell 'variable' using word boundaries,
grep "\<$variable\>" file.txt
but haven't managed to get it working; I've tried everything else without success.
Actually I'm invoking grep from a Perl script:
$attrval=`/usr/bin/grep "\<$_[0]\>" $upgradetmpdir/fullConfiguration.txt`
$_[0] and $upgradetmpdir/fullConfiguration.txt contain some matching "text".
But $attrval is empty after the operation.
@OP, you should do that 'grepping' in Perl. Don't call system commands unnecessarily unless there is no choice.
$mysearch="pattern";
while (<>){
    chomp;
    @s = split /\s+/;
    foreach my $line (@s){
        if ($line eq $mysearch){
            print "found: $line\n";
        }
    }
}
I'm not seeing the problem here:
file.txt:
hello
hi
anotherline
Now,
mala@human ~ $ export GREPVAR="hi"
mala@human ~ $ echo $GREPVAR
hi
mala@human ~ $ grep "\<$GREPVAR\>" file.txt
hi
What exactly isn't working for you?
Not every grep supports the ex(1) / vi(1) word boundary syntax.
I think I would just do:
grep -w "$variable" ...
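For example (hypothetical file contents; -w matches only whole words, so substrings are skipped):
variable="hi"
printf 'ship\nhi\n' > words.txt
grep -w "$variable" words.txt   # prints "hi" but not "ship"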
Using single quotes works for me in tcsh:
grep '<$variable>' file.txt
I am assuming your input file contains the literal string: <$variable>
If variable=foo are you trying to grep for "foo"? If so, it works for me. If you're trying to grep for the variable named "$variable", then change the quotes to single quotes.
On a recent Linux it works as expected. You could try egrep instead.
Say you have
$ cat file.txt
This line has $variable
DO NOT PRINT ME! $variableNope
$variable also
Then with the following program
#! /usr/bin/perl -l
use warnings;
use strict;
system("grep", "-P", '\$variable\b', "file.txt") == 0
or warn "$0: grep exited " . ($? >> 8);
you'd get output of
This line has $variable
$variable also
It uses the -P switch to GNU grep that matches Perl regular expressions. The feature is still experimental, so proceed with care.
Also note the use of system LIST that bypasses shell quoting, allowing the program to specify arguments with Perl's quoting rules rather than the shell's.
You could use the -w (or --word-regexp) switch, as in
system("grep", "-w", '\$variable', "file.txt") == 0
or warn "$0: grep exited " . ($? >> 8);
to get the same result.
Using single quotes it won't work. You should go for double quotes.
For example:
this won't work
--------------
for i in 1
do
    grep '$i' file
done
this will work
--------------
for i in 1
do
    grep "$i" file
done
Is there an idiomatic way to simulate Perl's diamond operator in bash? With the diamond operator,
script.sh | ...
reads stdin for its input and
script.sh file1 file2 | ...
reads file1 and file2 for its input.
One other constraint is that I want to use the stdin in script.sh for something else other than input to my own script. The below code does what I want for the file1 file2 ... case above, but not for data provided on stdin.
command - $@ <<EOF
some_code_for_first_argument_of_command_here
EOF
I'd prefer a Bash solution but any Unix shell is OK.
Edit: for clarification, here is the content of script.sh:
#!/bin/bash
command - $@ <<EOF
some_code_for_first_argument_of_command_here
EOF
I want this to work the way the diamond operator would work in Perl, but it only handles filenames-as-arguments right now.
Edit 2: I can't do anything that goes
cat XXX | command
because the stdin for command is not the user's data. The stdin for command is my data in the here-doc. I would like the user data to come in on the stdin of my script, but it can't be the stdin of the call to command inside my script.
Sure, this is totally doable:
#!/bin/bash
cat "$@" | some_command_goes_here
Users can then call your script with no arguments (or '-') to read from stdin, or multiple files, all of which will be read.
If you want to process the contents of those files (say, line-by-line), you could do something like this:
for line in $(cat "$@"); do
    echo "I read: $line"
done
Edit: Changed $* to "$@" to handle spaces in filenames, thanks to a helpful comment.
Kind of cheezy, but how about
cat file1 file2 | script.sh
I am (like everyone else, it seems) a bit confused about exactly what the goal is here, so I'll give three possible answers that may cover what you actually want. First, the relatively simple goal of getting the script to read from either a list of files (supplied on the command line) or from its regular stdin:
if [ $# -gt 0 ]; then
    exec < <(cat "$@")
fi
# From this point on, the script's stdin is redirected from the files
# (if any) supplied on the command line
Note: the double-quoted use of $@ is the best way to avoid problems with funny characters (e.g. spaces) in filenames -- $* and unquoted $@ both mess this up. The <() trick I'm using here is a bash-only feature; it fires off cat in the background to feed data from files supplied on the command line, and then we use exec to replace the script's stdin with the output from cat.
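A minimal self-contained sketch of this first approach (hypothetical script name count.sh):
#!/bin/bash
if [ $# -gt 0 ]; then
    exec < <(cat "$@")
fi
# stdin now covers both cases; count the lines received
wc -l
Called either way:
./count.sh file1 file2        # counts lines in both files
printf 'a\nb\n' | ./count.sh  # counts lines from stdin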
...but that doesn't seem to be what you actually want. What you seem to really want is to pass the supplied filenames or the script's stdin as arguments to a command inside the script. This requires sort of the opposite process: converting the script's stdin into a file (actually a named pipe) whose name can be passed to the command. Like this:
if [[ $# -gt 0 ]]; then
    command "$@" <<EOF
here-doc goes here
EOF
else
    command <(cat) <<EOF
here-doc goes here
EOF
fi
This uses <() to launder the script's stdin through cat to a named pipe, which is then passed to command as an argument. Meanwhile, command's stdin is taken from the here-doc.
Now, I think that's what you want to do, but it's not quite what you've asked for, which is to both redirect the script's stdin from the supplied files and pass stdin to the command inside the script. This can be done by combining the above techniques:
if [ $# -gt 0 ]; then
    exec < <(cat "$@")
fi
command <(cat) <<EOF
here-doc goes here
EOF
...although I can't think why you'd actually want to do this.
The Perl diamond operator essentially loops across all the command line arguments, treating each as a filename. It opens each file and reads them line-by-line. Here's some bash code that will do approximately the same.
for f in "$@"
do
    # Do something with $f, such as...
    cat $f | command1 | command2
    # -or-
    command1 < $f
    # -or-
    # Read $f line-by-line
    cat $f | while read line_from_f
    do
        # Do stuff with $line_from_f
    done
done
You want to take the first argument and do something with it, and then either read from any files specified or stdin if no files?
Personally, I'd suggest using getopt to indicate arguments using the "-a value" syntax to help disambiguate, but that's just me. Here's how I'd do it in bash without getopts:
firstarg=${1?:usage: $0 arg [file1 .. fileN]}
shift
typeset -a files
if [[ ${#@} -gt 0 ]]
then
    files=( "$@" )
else
    files=( "/dev/stdin" )
fi
for file in "${files[@]}"
do
    whatever_you_want < "$file"
done
The ?: operator will die if there are no args specified, since you seem to want at least one arg either way. After grabbing that, shift the args over by one, and then either use the remaining args as your file list, or the bash special filehandle "/dev/stdin" if there were no other args.
I think that the "if no files are specified, use /dev/stdin - otherwise use the files on the command line" piece is probably what you're looking for, but the rest of the code is at least useful for context.
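For example, if the script were saved as process.sh (hypothetical name):
./process.sh myarg file1.txt file2.txt   # the arg first, then the files to read
cat file1.txt | ./process.sh myarg       # no files: falls back to /dev/stdin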
Also a little cheezy, but how about this:
if [[ $# -eq 0 ]]
then
    # read from stdin
else
    # read from $* (args)
fi
If you need to read and process line-by-line (which is likely) and don't want to copy/paste the same code twice (which is likely), define a function in your script and just pass the lines one-by-one to this function, and process them in said function.
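A minimal sketch of that idea (the process_line function is a hypothetical placeholder for your per-line work):
#!/bin/bash
process_line() {
    # replace this with the real per-line processing
    echo "I read: $1"
}
if [ $# -eq 0 ]; then
    set -- /dev/stdin   # no files given: read from stdin instead
fi
cat "$@" | while IFS= read -r line
do
    process_line "$line"
done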
Why not use `cat $*` in the script? For example:
x=`cat $*`
echo $x