Loop for pictures to be uuencoded does not work - sh

My MIPS-based shell script does not uuencode the pictures in /tmp. What is wrong?
for pic in /tmp/*.jpg; do
echo "cat $pic" | uuencode -m "$pic"
done
For e.g. 2 pictures in /tmp, I get only this little bit of output:
begin-base64 644 /tmp/01-20150721100027-01.jpg
Y2F0IC90bXAvMDEtMjAxNTA3MjExMDAwMjctMDEuanBnIA
begin-base64 644 /tmp/01-20150721100027-02.jpg
Y2F0IC90bXAvMDEtMjAxNTA3MjExMDAwMjctMDIuanBnIA

What you're doing with that echo, cat and pipe there is just crazy. You just end up with the string cat /file/name encoded, not the file contents. You simply want this:
for pic in /tmp/*.jpg; do
uuencode -m "$pic" "$pic"
done
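If the goal is one combined stream of all the encoded pictures, a minimal variant is to redirect the whole loop (the output file name /tmp/pictures.uu is just an example):
for pic in /tmp/*.jpg; do
uuencode -m "$pic" "$pic"
done > /tmp/pictures.uu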

Running perl files from a text file

There are multiple Perl scripts that are run from the Cygwin terminal. An example is:
$ perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
The above is an example. Based on the input parameter JSmith, it reads a db file, generates an ID, and outputs that to a text file.
Now these Perl invocations keep growing, and they are added to a text file as shown below:
$ perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
$ perl IdGeneratorTool.pl PTesk -i userInfo.adb -o PTesk.txt
$ perl IdGeneratorTool.pl CMorris -i userInfo.adb -o CMorris.txt
$ perl IdGeneratorTool.pl JLawrence -i userInfo.adb -o JLawrence.txt
$ perl IdGeneratorTool.pl TCruise -i userInfo.adb -o TCruise.txt
...
And the list keeps growing.
I would like to know whether there's a way to execute all these Perl invocations in the text file in one go.
I'm new to Perl and don't have much idea as to what the options are.
An ideal scenario might be a tool where I can open this text file and click an execute button, and it executes all the scripts and outputs multiple *.txt files into the same directory.
Or maybe a simple perl script that can do it.
Put them into a file makeall (or whatever you want to call it).
Put #!/bin/bash as the first line of the file.
In Cygwin enter chmod +x makeall
In Cygwin enter ./makeall
With this you've created a bash script which will make all your calls to the Perl script.
Another option would be to just put all the user information into a csv file and read that one in order to call your script.
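A minimal sketch of that csv idea (the file name users.csv and the user,dbfile column layout are assumptions):
#!/bin/bash
# read a user name and a db file from each comma-separated line
while IFS=, read -r user db; do
perl IdGeneratorTool.pl "$user" -i "$db" -o "$user.txt"
done < users.csv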
WAIT! Even easier!
Put into the makeall script this:
#!/bin/bash
for user in \
JSmith \
PTesk \
CMorris \
JLawrence \
TCruise \
; do
perl IdGeneratorTool.pl "$user" -i userInfo.adb -o "$user".txt
done
Now you just need to add any additional user the same way I did for your examples.
Without seeing the source for IdGeneratorTool.pl it's hard to give any specific advice; but it is generally not hard to turn something like
do_stuff($ARGV[0], $opt_i, $opt_o);
into
while (<>) {
chomp;
my ($user, $adb, $outputfile) = split /\t/;
do_stuff($user, $adb, $outputfile);
}
to read the input from a tab-delimited file instead of from command-line arguments.
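With that change, the tool would read one user per line from a tab-delimited file passed on the command line (the file name users.tsv and the exact column layout are assumptions):
$ cat users.tsv
JSmith	userInfo.adb	JSmith.txt
$ perl IdGeneratorTool.pl users.tsv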
You can create a text file with a list of users (one per line), for example user_list.txt:
JSmith
PTesk
CMorris
JLawrence
TCruise
Then create a bash script process_list.sh with the following content in the same directory:
#!/bin/bash
while read -r user; do
perl IdGeneratorTool.pl "$user" -i userInfo.adb -o "${user}.txt"
done < user_list.txt
Now make the script executable with chmod +x process_list.sh and it is ready for execution.
Once you need to add a new user, edit user_list.txt to add one more line to the file.

grep and/or sed to match a path from a string which has different patterns

I have a big file which is composed of a lot of different lines which have only one common keyword, storaged.
PROC:storage123:0702:2108:0,1,2,3,4,5:storage:vers:storaged:storage123:Storage
123:storage123:-R /etc/orc/storage123 -e emr123#localhost -p Xxx::
PROC:storageabc:0606:2108:0,1,2,3,4,5:storage:vers:storaged:storageabc:Storage
abc:storageabc: -e emabc#localhost -R /etc/orc/storageabc -p 654::
What I need to do is grep out the path that comes after -R on all the storaged lines. But I only want the path, nothing after it. -R appears in different places, so there is no fixed position to match on.
I created an expression which seemed to work, but I think I made it much more complex (and I am not 100% sure it matches) than it has to be.
[root:~/scripts/] <conf.txt grep -o 'R *[^ ]*' | grep -o '[^ ]*$' | sed 's/.*R\///'
/etc/orc/storage123
/etc/orc/storageabc
The expression is also hard to implement in a bash script, so something simpler would be great. I need these paths in the script later on.
Cheers
Your attempt is nice, but you can simplify it by using a look-behind:
$ grep -Po '(?<=-R )[^ ]*' file
/etc/orc/storage123
/etc/orc/storageabc
Basically it looks for the string -R (note the space) and from that, it prints everything up to a space.
Alternatively, the same extraction with plain sed:
$ sed 's/.*-R \([^ ]*\).*/\1/' file
/etc/orc/storage123
/etc/orc/storageabc
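Since you need these paths later in the script, here is a minimal sketch that collects them into a bash array (assuming the input file is conf.txt, as in your prompt, and bash 4 for mapfile):
# read one path per line of grep output into the array "paths"
mapfile -t paths < <(grep -Po '(?<=-R )[^ ]*' conf.txt)
printf '%s\n' "${paths[@]}"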

Change multiple files

The following command is correctly changing the contents of 2 files.
sed -i 's/abc/xyz/g' xaa1 xab1
But what I need to do is change several such files dynamically, and I do not know the file names. I want to write a command that reads all the files in the current directory matching xa* and has sed change their contents.
I'm surprised nobody has mentioned the -exec argument to find, which is intended for this type of use-case, although it will start a process for each matching file name:
find . -type f -name 'xa*' -exec sed -i 's/asd/dsg/g' {} \;
Alternatively, one could use xargs, which will invoke fewer processes:
find . -type f -name 'xa*' | xargs sed -i 's/asd/dsg/g'
Or more simply use the + exec variant instead of ; in find to allow find to provide more than one file per subprocess call:
find . -type f -name 'xa*' -exec sed -i 's/asd/dsg/g' {} +
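If any of the file names could contain spaces or newlines, the null-delimited variant of the xargs form is safer:
find . -type f -name 'xa*' -print0 | xargs -0 sed -i 's/asd/dsg/g'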
Better yet:
for i in xa*; do
sed -i 's/asd/dfg/g' "$i"
done
because nobody knows how many files are there, and it's easy to break command line limits.
Here's what happens when there are too many files:
# grep -c aaa *
-bash: /bin/grep: Argument list too long
# for i in *; do grep -c aaa $i; done
0
... (output skipped)
#
You could use grep and sed together. This allows you to search subdirectories recursively.
Linux: grep -r -l <old> * | xargs sed -i 's/<old>/<new>/g'
OS X: grep -r -l <old> * | xargs sed -i '' 's/<old>/<new>/g'
For grep:
-r recursively searches subdirectories
-l prints file names that contain matches
For sed:
-i extension (Note: An argument needs to be provided on OS X)
Those commands won't work in the default sed that comes with Mac OS X.
From man 1 sed:
-i extension
Edit files in-place, saving backups with the specified
extension. If a zero-length extension is given, no backup
will be saved. It is not recommended to give a zero-length
extension when in-place editing files, as you risk corruption
or partial content in situations where disk space is exhausted, etc.
I tried
sed -i '.bak' 's/old/new/g' logfile*
and
for i in logfile*; do sed -i '.bak' 's/old/new/g' "$i"; done
Both work fine.
@PaulR posted this as a comment, but people should view it as an answer (and this answer works best for my needs):
sed -i 's/abc/xyz/g' xa*
This will work for a moderate number of files, probably on the order of tens, but probably not on the order of millions.
Another more versatile way is to use find:
sed -i 's/asd/dsg/g' $(find . -type f -name 'xa*')
I'm using find for a similar task. It is quite simple: you have to pass it as an argument to sed like this:
sed -i 's/EXPRESSION/REPLACEMENT/g' `find -name "FILE.REGEX"`
This way you don't have to write complex loops, and it is simple to see which files you are going to change; just run find before you run sed.
You can make 'xxxx' the text you search for, and it will be replaced with 'yyyy':
grep -Rn 'xxxx' /path | awk -F: '{print $1}' | xargs sed -i 's/xxxx/yyyy/'
There are some good answers above. I thought I'd throw in one more that is succinct and parallelizable, using GNU parallel, which I often prefer to xargs:
parallel sed -i 's/abc/xyz/g' {} ::: xa*
Combine this with the -j N option to run N jobs in parallel.
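For example, to run four replacement jobs at a time:
parallel -j 4 sed -i 's/abc/xyz/g' {} ::: xa*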
If you are able to run a script, here is what I did for a similar situation:
Using a dictionary/hashMap (associative array) and variables for the sed command, we can loop through the array to replace several strings. Including a wildcard in the name_pattern allows replacing in place in files matching a pattern (this could be something like name_pattern='File*.txt') in a specific directory (source_dir).
All the changes are written to the logfile in the destin_dir.
#!/bin/bash
source_dir=source_path
destin_dir=destin_path
logfile='sedOutput.txt'
name_pattern='File.txt'
echo "--Begin $(date)--" | tee -a $destin_dir/$logfile
echo "Source_DIR=$source_dir destin_DIR=$destin_dir "
declare -A pairs=(
['WHAT1']='FOR1'
['OTHER_string_to replace']='string replaced'
)
for i in "${!pairs[@]}"; do
j=${pairs[$i]}
echo "[$i]=$j"
replace_what=$i
replace_for=$j
echo " "
echo "Replace: $replace_what for: $replace_for"
find "$source_dir" -name "$name_pattern" | xargs sed -i "s/$replace_what/$replace_for/g"
find "$source_dir" -name "$name_pattern" | xargs -I{} grep -n "$replace_for" {} /dev/null | tee -a "$destin_dir/$logfile"
done
echo " "
echo "----End $(date)---" | tee -a $destin_dir/$logfile
First, the pairs array is declared; each pair is a replacement, so WHAT1 will be replaced with FOR1 and OTHER_string_to replace will be replaced with string replaced in the file File.txt. In the loop the array is read: the first member of the pair is retrieved as replace_what=$i and the second as replace_for=$j. The find command searches the directory for the file name (which may contain a wildcard) and the sed -i command replaces what was previously defined in the matching file(s). Finally I added a grep redirected to the logfile to log the changes made in the file(s).
This worked for me with GNU Bash 4.3 and sed 4.2.2, and is based upon VasyaNovikov's answer for Loop over tuples in bash.
The Silver Searcher Solution
I'm adding another option for those people who don't know about the amazing tool called The Silver Searcher (command line tool is ag).
Note: You can use grep and other tools to do the same thing here, but The Silver Searcher is fantastic :)
TLDR
ag -l 'abc' | xargs sed -i 's/abc/xyz/g'
Install The Silver Searcher
sudo apt install silversearcher-ag # Debian / Ubuntu
sudo pacman -S the_silver_searcher # Arch / EndeavourOS
sudo yum install epel-release the_silver_searcher # RHEL / CentOS
Demo Files
Paste the following into your terminal to create some demonstration files:
mkdir /tmp/food
cd /tmp/food
content="Everybody loves to abc this food!"
echo "$content" > ./milk
echo "$content" > ./bread
mkdir ./fastfood
echo "$content" > ./fastfood/pizza
echo "$content" > ./fastfood/burger
mkdir ./fruit
echo "$content" > ./fruit/apple
echo "$content" > ./fruit/apricot
Using 'ag'
The following ag command will recursively find all the files that contain the string 'abc'. It ignores the .git directory, .gitignore files, and other ignore files:
$ ag 'abc'
milk
1:Everybody loves to abc this food!
bread
1:Everybody loves to abc this food!
fastfood/burger
1:Everybody loves to abc this food!
fastfood/pizza
1:Everybody loves to abc this food!
fruit/apple
1:Everybody loves to abc this food!
fruit/apricot
1:Everybody loves to abc this food!
To just list the files that contain the string 'abc', use the -l switch:
$ ag -l 'abc'
bread
fastfood/burger
fastfood/pizza
fruit/apricot
milk
fruit/apple
Changing Multiple Files
Finally, using xargs and sed, we can replace the 'abc' string with another string:
ag -l 'abc' | xargs sed -i 's/abc/eat/g'
In the above command, ag lists all the files that contain the string 'abc'. The xargs command takes those file names and passes them as arguments to the sed command.

How to trim wipe output with sed?

I want to trim the output of the wipe command with sed.
I tried to use this one:
wipe -vx7 /dev/sdb 2>&1 | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
but it doesn't work for some reason.
When I use echo and sed to simulate the output of the wipe command, it works!
echo "/dev/sdb: 10%" | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
What am I doing wrong?
Thanks!
That looks like a progress indicator. Progress indicators are often output directly to the tty instead of to stdout or stderr. You may be able to use the expect script called unbuffer, or some other method, to create a pseudo tty. Be aware that there will probably be more junk such as \r, etc., that you may need to filter out.
Demonstration:
$ cat foo
#!/bin/sh
echo hello > /dev/tty
$ a=$(./foo)
hello
$ echo $a
$ a=$(unbuffer ./foo)
$ echo $a
hello
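Applied to your command, the sketch would be (assuming unbuffer from the expect package is installed; you may still need to filter out \r characters as noted above):
unbuffer wipe -vx7 /dev/sdb 2>&1 | sed -u 's/.*\ \([0-9]\+\).*/\1/g'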

DOS to UNIX path substitution within a file

I have a file that contains this kind of paths:
C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c
And I would like to get the following file:
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
Note that the non matching path should not be modified.
I know how to do this with sed for a fixed number of subdir, but a variable number is giving me trouble. Actually, I would have to use many s/x/y/ expressions (as many as the max depth... not very elegant).
Maybe with awk, but this kind of magic is beyond my skills.
FYI, I need this trick to correct some gcov binary files on a cygwin platform.
I am dealing with binary files; therefore, I might have the following kind of data:
bindata\bindata%bindataC:\good\foo.c
which should be translated as:
bindata\bindata%bindataC:/good/foo.c
The first \ must not be translated, despite being on the same line.
However, I have just checked my .gcno files while editing this text and it looks like all the paths are flanked with zeros, so most of the answers below should fit.
This replaces the backslashes only on lines that start with C:\good:
sed -e '/^C:\\good/ s/\\/\//g' input_file.txt
I would recommend you look into the cygpath utility, which converts path names from one format to another. For instance on my machine:
$ cygpath `pwd`
/home/jericson
$ cygpath -w `pwd`
D:\root\home\jericson
$ cygpath -m `pwd`
D:/root/home/jericson
Here's a Perl implementation of what you asked for:
$ echo 'C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c' | perl -pe 's|\\|/|g if /good/'
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
It works directly with the string, so it will work anywhere. You could combine it with cygpath, but it only works on machines that have that path:
perl -pe '$_ = `cygpath -m $_` if /good/'
(Since I don't have C:\good on my machine, I get output like C:goodfoo.c. If you use a real path on your machine, it ought to work correctly.)
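In the same spirit, a minimal shell sketch that runs cygpath -m only on the matching lines (the file name paths.txt is an assumption, one path per line):
while IFS= read -r p; do
case $p in
[Cc]:\\good\\*) cygpath -m "$p" ;;   # convert only the good paths
*) printf '%s\n' "$p" ;;             # leave everything else untouched
esac
done < paths.txt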
You want to substitute '/' for all '\' but only on the lines that match the good directory path. Both sed and awk will let you do this by having a LHS (matching) expression that only picks the lines with the right path.
A trivial sed script to do this would look like:
/[Cc]:\\good/ s/\\/\//g
For a file:
c:\bad\foo
c:\bad\foo\bar
c:\good\foo
c:\good\foo\bar
You will get the output below:
c:\bad\foo
c:\bad\foo\bar
c:/good/foo
c:/good/foo/bar
Here's how I would do it in awk:
# fixpaths.awk: convert \ to / on the C:\good lines; copy other lines unchanged
/C:\\good/ {
gsub(/\\/,"/");
}
{
print >> "outfile";
}
Then run it using the command:
awk -f fixpaths.awk paths.txt; mv outfile paths.txt
Or with some help from good ol' Bash:
#!/bin/bash
while read -r LINE
do
if <bad_condition>   # i.e. the line should stay untouched
then
echo "$LINE" >> newfile
else
echo "$LINE" | sed -e 's/\\/\//g' >> newfile
fi
done < file
Try this:
sed -re '/\\good\\/ s/\\/\//g' temp.txt
or this:
awk -F'\\' '{if($2=="good"){OFS="/"; $1=$1;} print $0}' temp.txt