Using xargs arguments twice - sed

I need to check whether a local file is the same as a file on a remote host.
The file locations look like this:
File1 at Local machine
./remotehostname/home/a/b/scripts/xyz.cpp
File2 at remote machine
remotehostname:/home/a/b/scripts/xyz.cpp
I intend to compare these two files using a command like
diff ./remotehostname/home/a/b/scripts/xyz.cpp remotehostname:/home/a/b/scripts/xyz.cpp
find . -type f | grep -v .svn | xargs -I % diff %
I need to transform % into the remote path so I can compare the two files.
I'm not sure how to apply sed to %. Or is there a better way to compare such files?
One way would be to save the list of files and then apply sed to that list, but I think there should be a better way. Also, diff doesn't work on remote host paths directly; maybe I need to use the output of a dry-run rsync?

This can be done with xargs, but I prefer to use while read in bash.
xargs method
find . -type f | grep -v .svn | sed 's/.*/& remotehostname:&/' | xargs -n2 diff
The sed command duplicates the input and makes whatever modifications you need. The xargs then passes the inputs to diff two at a time. This will not work if any filename contains spaces.
bash method
find . -type f | grep -v .svn | while read line; do
diff "$line" "remotehostname:$line"
done
The bash read command reads a line from stdin, places it in the named variable, $line, and returns true. You can then put whatever you like inside the loop, so you get total freedom to rewrite the filename however you need. When the input runs out, read returns false, and the loop exits.
Note that piping things into loops has some interesting side effects that are not relevant here, but might bite you one day.
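Since plain diff cannot read a remotehostname:/... path itself, one way to do the actual comparison inside the loop is process substitution over ssh. This is only a sketch, assuming ssh access and that the local ./remotehostname prefix mirrors the remote filesystem root:
# sketch: strip the local ./remotehostname prefix to get the remote path,
# then diff the local file against the remote content fetched over ssh
find . -type f | grep -v .svn | while read -r line; do
    remotepath=${line#./remotehostname}
    diff "$line" <(ssh remotehostname cat "$remotepath")
done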

If you are interested in the actual difference (and not just whether they differ - which rsync is brilliant for telling you) then you can do this using GNU Parallel:
find . -type f | grep -v .svn |
parallel diff {} '<(ssh {= s:./::;s:/.*:: =} cat {= s:([^/]+/){2,2}::;$_=::shell_quote_scalar($_) =})'
s:./::;s:/.*:: = hostname from path
s:([^/]+/){2,2}:: = rest of path
::shell_quote_scalar = backslash-quote special chars as needed by the shell
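Roughly, for the example path from the question, the command that parallel ends up running looks like this (only a sketch; parallel also takes care of quoting the remote path):
# what the replacement expressions evaluate to for one input path
diff ./remotehostname/home/a/b/scripts/xyz.cpp \
    <(ssh remotehostname cat home/a/b/scripts/xyz.cpp)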
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new job whenever one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel

Related

sed, xargs and stdbuf - how to get only first n matches of a pattern from a file

I have a file with patterns (1 line = 1 pattern) that I want to look for in a big text file; only one pattern (or none) will be found in each line of the input file. Once a match is found, I want to retrieve the characters immediately before the match. The first part is to feed the patterns to sed:
cat patterns.txt | xargs -I '{}' sed -n 's/{}.*$//p' bigtext.txt
That works ok - the downside being that potentially I'll have hundreds of thousands of matches. I don't want/need all the matches - a fair representation of 1K hits would be enough. And here is where I struggle: I've read that in order to limit the number of hits of sed, I should use stdbuf (gstdbuf in my case) and pipe the whole thing through head. But I am not sure where to place the stdbuf command:
cat patterns.txt | xargs -I '{}' gstdbuf -oL -eL sed -n 's/{}.*$//p' bigtext.txt | head -n100
When I tried this, the process took as long as running sed on the whole file and then taking the head of that output, whereas I want it to stop searching after 100 or 1000 matches. Any ideas on the best way to accomplish this?
Is the one-liner you have provided really what you want? Especially since you mention wanting a fair sample. As it stands right now, it feeds patterns.txt into xargs, which invokes sed for each pattern individually, one after another, and the whole output of xargs is fed to head, which chops it off after n lines. In other words, your first pattern can already exhaust all the lines you wanted to see, even though the other patterns could have matched any number of times on lines occurring before the matches presented to you. A detailed example follows.
If I have patterns.txt of:
_Pat1
_Pat2
_Pat3
And bigtext.txt with:
1matchx_Pat1x
2matchx_Pat2x
2matchy_Pat2y
2matchz_Pat2z
3matchx_Pat3x
3matchy_Pat3y
3matchz_Pat3z
1matchy_Pat1y
1matchz_Pat1z
And if I run your one-liner limited to five hits, I do not get this result (the first five matches for all three patterns, in file order):
1matchx
2matchx
2matchy
2matchz
3matchx
But this (all 3 matches for _Pat1 plus 2 matches for _Pat2, after which I ran out of output lines):
1matchx
1matchy
1matchz
2matchx
2matchy
Now to your performance problem, which is partially related. I have to admit that I could not reproduce it: I took your example from the comment, blew the "big" file up to 1 GB in size by repeating the pattern, and ran your one-liner:
$ time { cat patterns.txt | xargs -I '{}' stdbuf -oL sed -n 's/{}.*$//p' bigtext.txt | head -5 ; }
1aaa
2aaabbb
3aaaccc
1aaa
2aaabbb
xargs: stdbuf: terminated by signal 13
real 0m0.012s
user 0m0.013s
sys 0m0.008s
Note I've dropped the -eL; stderr is usually unbuffered (which is what you usually want) and doesn't play any role here. Also note I ran stdbuf without the "g" prefix, which tells me you're probably on a system where GNU tools are not the default, and that is probably why you see different behavior. I'll try to explain what is going on, venture a few guesses, and conclude with a suggestion. Also note I did not really need to use stdbuf (manipulate buffering) at all, or rather it had no appreciable impact on the result; but again, this could be specific to the platform and tools (as well as the scenario).
Reading the pipeline from its end: head reads standard input as it is piped in from xargs (and by extension from the sed, or its stdbuf wrapper, which xargs forks; they are both attached to the pipe's writing end) until the limit of lines to print has been reached, and then head terminates. Doing so "breaks" the pipe, and xargs and sed (or the stdbuf it was wrapped in) receive a SIGPIPE signal, upon which they too terminate by default (you can see that in the output of my run: xargs: stdbuf: terminated by signal 13).
As for what stdbuf -oL does and why someone might have suggested it: when a process is not reading from and writing to a console (which is usually line buffered) but to a pipe, its I/O is usually fully buffered instead. stdbuf -oL changes that back to line buffered. Without it, the processes involved communicate in larger chunks, and it could take head longer to realize it is done and needs no further input, while sed keeps running to see whether there are any further matches. As mentioned, on my system (4 KB buffers) and with that (repeating pattern) example, this made no real difference. Also note that while line buffering decreases the risk of not learning promptly that we are done, it does increase the overhead of communication between the processes.
So why would these mechanics not yield the same expected results for you? A couple of options come to mind:
Since you fork and run sed once per pattern, over the whole file each time, you could get a series of runs without any hits. I'd guess this is actually the likely cause.
Since you give sed a file to read from, you may have a different implementation of sed that reads much more input before acting on the file content (mine reads 4 KB at a time). This is not a likely cause, but in theory you could also feed sed line by line to force smaller chunks and trigger that SIGPIPE sooner (see the sketch after this list).
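A rough illustration of that line-by-line feeding (only a sketch; _Pat1 stands in for a single pattern, and this is much slower overall):
# feed the big file to sed one line at a time so output (and the SIGPIPE
# from head) arrives sooner; _Pat1 is a stand-in for one of the patterns
while IFS= read -r bline; do
    printf '%s\n' "$bline"
done < bigtext.txt | sed -n 's/_Pat1.*$//p' | head -n 100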
Now, assuming that sequential pattern-by-pattern matching is actually not what you want, the summary of all of the above would be: process your patterns into a single one first, and then perform a single pass over the "big" file (optionally capping the output, of course). It might be worth switching from shell to something a bit more comfortable to use, or at least not keeping the one-liner format, which is likely to become confusing.
Not staying true to my own recommendation, here is an awk script which, called like this, prints the first 5 hits and quits:
awk -v patts="$(cat patterns.txt)" -v last=5 'BEGIN{patts="(" patts ")"; gsub(/\n/, "|", patts); cnt=1} $0~patts{sub(patts ".*", ""); print; cnt++} cnt>last{exit}' bigtext.txt
You can give grep a file of patterns to match with -f file. You can also specify the number of matches to find before quitting with -m count.
So this command will get you the first 5 lines that match:
grep -f patterns.txt -m 5 bigtext.txt
Now, trimming each matched line from the match to the end of the line is a bit more difficult.
Assuming you use bash, we can build a regex from the file, like this:
while IFS='' read -r line || [[ -n "$line" ]]; do
subRegex="s/$line.*//;"${subRegex}
done < patterns.txt
Then use this in a sed command. The resulting code becomes:
while IFS='' read -r line || [[ -n "$line" ]]; do
subRegex="s/$line.*//;"${subRegex}
done < patterns.txt
grep -f patterns.txt -m 5 bigtext.txt | sed "$subRegex"
The sed command is only running on the lines that have already matched from the grep, so it should be fairly performant.
Now, if you call this a lot, you could put it in a function:
function findMatches() {
    local matchCount=${1:-5}   # default to 5 matches
    local subRegex
    while IFS='' read -r line || [[ -n "$line" ]]; do
        subRegex="s/$line.*//;"${subRegex}
    done < patterns.txt
    grep -f patterns.txt -m ${matchCount} bigtext.txt | sed "${subRegex}"
}
Then you can call it like this
findMatches 5
findMatches 100
Update
Based on the sample files you gave, this solution does produce the expected result: 1aaa, 2aaabbb, 3aaaccc, 4aaa, 5aaa.
However, your comment says each pattern is 120 characters long, each line of the big file is 250 characters, and the file is 10 GB in size.
You didn't mention how many patterns you might have, so I tested, and it seems that the sed command done inline falls apart somewhere before 50 patterns.
(Of course, if your samples are really how the data look, you could base the trimming of each line on non-AGCT characters rather than on the patterns file, which would be much quicker.)
But staying with the original question: you can generate a sed script in a separate file from patterns.txt, like this:
sed -e "s/^/s\//g;s/$/.*\$\/\/g/g;" patterns.txt > temp.sed
Then use this temp file with the sed command:
grep -f patterns.txt -m 5 bigtext.txt | sed -f temp.sed
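For the _Pat1/_Pat2/_Pat3 example patterns shown earlier in this thread, the generated temp.sed would contain:
s/_Pat1.*$//g
s/_Pat2.*$//g
s/_Pat3.*$//g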
The grep stops after finding X matches, and the sed trims those... The new function runs on my machine in a couple seconds.
For testing, I created a 2 GB file of 250-character AGCT combos, and another file with 50+ patterns, 120 characters each, with a few of these patterns taken from random lines of the bigtext file.
function findMatches() {
    sed -e "s/^/s\//g;s/$/.*\$\/\/g/g;" patterns.txt > temp.sed
    grep -f patterns.txt -m ${1:-5} bigtext.txt | sed -f temp.sed
}

finding most recent file version from list of file path names with jumbled file names

I recently lost a bunch of files from Eclipse in an accidental copy/replace dilemma. I was able to recover most of them, but I also found in the Eclipse metadata folder a history of files, some of which are the ones I need. The path for the history is:
($WORKSPACE/.metadata/.plugins/org.eclipse.core.resources/.history).
Inside there are a bunch of folders like 3e, 2f, 1a, ff, etc., each with a couple of files named like "2054f7f9a0d30012175be7013ca49f5b". I was able to do a recursive grep with a keyword I know would be in the file and return a list of file names (grep -R -l 'KEYWORD'), but now I can't figure out how to sort them by most recently modified.
Any help would be great, thanks!
You can try:
find $WORK.../.history -type f -printf '%T@\t%p\n' | sort -nr | cut -f2- | xargs grep 'your_pattern'
Decomposed:
the find finds all plain files and prints their modification time and path
the sort sorts them numerically, in reverse, so the highest number (the latest modified file) comes first
the cut removes the time from each line
the xargs runs its argument for each file it gets on its input;
in this case it will run the grep command, so
the first file in which grep finds the pattern is the latest modified one
The above does not work when the filenames contain spaces, but hopefully this is not your case... The -printf works only with GNU find.
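If the filenames could contain spaces, a null-delimited variant along these lines would be safer (a sketch assuming GNU find, sort, cut and xargs):
# null-delimited variant, safe for filenames with spaces (GNU tools assumed)
find $WORK.../.history -type f -printf '%T@\t%p\0' | sort -z -nr | cut -z -f2- | xargs -0 grep 'your_pattern'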
For repetitive work, you can split the command into two parts:
find $WORK.../.history -type f -printf '%T@\t%p\n' | sort -nr | cut -f2- > /somewhere/FILENAMES_SORTED_BY_MODIF_TIME
In the first step you save somewhere the list of filenames sorted by their modification times, and afterwards you can repeatedly run the grep command on their content with:
< /somewhere/FILENAMES_SORTED_BY_MODIF_TIME xargs grep 'your_pattern'
The above command is usually written as
xargs grep 'your_pattern' < /somewhere/FILENAMES_SORTED_BY_MODIF_TIME
but bash is also fine with the redirection at the start, and in this form it is simpler to change the grep pattern, since it sits at the end of the line...
If you want to check the list of filenames together with their modification times, you can break the above commands up like this:
find $WORK.../.history -type f -printf "%T@\t%Tc\t%p\n" | sort -nr >/somewhere/FILENAMES_WITH_DATE
Check the list (it now contains a human-readable date too) and then use:
< /somewhere/FILENAMES_WITH_DATE cut -f3- | xargs grep 'your_pattern'
Note that you now need to use -f3- and not -f2- as in the first example.

Grep data and output to file

I'm attempting to extract data from log files and organise it systematically. I have about 9 log files which are ~100 MB each in size.
What I'm trying to do is: Extract multiple chunks from each log file, and for each chunk extracted, I would like to create a new file and save this extracted data to it. Each chunk has a clear start and end point.
Basically, I have made some progress and am able to extract the data I need; however, I've hit a wall in trying to figure out how to create a new file for each matched chunk.
I'm unable to use a programming language like Python or Perl, due to the constraints of my environment. So please excuse the messy command.
My command thus far:
find Logs\ 13Sept/Log_00000000*.log -type f -exec \
sed -n '/LRE Starting chunk/,/LRE Ending chunk/p' {} \; | \
grep -v -A1 -B1 "Starting chunk" > Logs\ 13Sept/Chunks/test.txt
The LRE Starting chunk and LRE Ending chunk are my boundaries. Right now my command works, but it saves all matched chunks to one file (whose size is becoming excessive).
How do I go about creating a new file for each match and adding the matched content to it, keeping in mind that each log file could hold multiple chunks and is not limited to one chunk per file?
You probably need something more programmable than sed; I'm assuming awk is available.
awk '
/LRE Ending chunk/   {printing = 0}                   # stop before the end-marker line
printing             {print > ("chunk" n ".txt")}     # body lines go into chunkN.txt
/LRE Starting chunk/ {printing = 1; n++}              # following lines start the next chunk
' *.log
Try something like this:
find Logs\ 13Sept/Log_00000000*.log -type f -print | while read -r file; do \
    sed -n '/LRE Starting chunk/,/LRE Ending chunk/p' "$file" | \
    grep -v -A1 -B1 "Starting chunk" > "Logs 13Sept/Chunks/$(basename "$file").chunk.txt";
done
This loops over the find results, runs the pipeline for each file, and creates one .chunk.txt file per input log file.
Something like this perhaps?
find Logs\ 13Sept/Log_00000000*.log -type f -exec \
sed -n '/LRE Starting chunk/,/LRE Ending chunk/{;/LRE .*ing chunk/d;w\
'"{}.chunk"';}' {} \;
This uses sed's w command to write to a file named (inputfile).chunk. If that is not acceptable, perhaps you can use sh -c '...' to pass in a small shell script to wrap the sed command with. (Or is a shell script also prohibited for some reason?)
Perhaps you could use csplit to do the splitting, then truncate the output files at the chunk end.
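A rough sketch of that csplit idea (assuming GNU csplit and sed; Log_000000001.log is just a placeholder name):
# start a new chunkNN.txt at every "LRE Starting chunk" line; chunk00.txt
# holds whatever precedes the first chunk and can be discarded
csplit -z -f chunk -b '%02d.txt' Log_000000001.log '/LRE Starting chunk/' '{*}'
# then truncate each piece at its "LRE Ending chunk" line (GNU sed -i)
for f in chunk*.txt; do
    sed -i '/LRE Ending chunk/,$d' "$f"
done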

How to use multiple files at once using bash

I have a Perl script which is used to process some data files from a given directory. I have written the bash script below to look for the last updated file in the given directory and process that file.
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} \;
Sometimes a user copies multiple files to the data dir, and then the earlier ones are skipped; the Perl script is executed only for the last updated file. Can you please suggest how to fix this in the bash script?
Try
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} +
Note the termination of -exec with a + vs your \;
From the man page
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end;
Now that you'll have one or more file names passed into your perl script, you can alter your perl script to iterate over each passed in file name.
If I understood the question correctly, you need to process any files that were created or modified in a directory since the last time your script was run.
In my opinion find is not the right tool to determine those files, because it has no notion of which files it has already seen.
Using any of the -atime/-ctime/-mtime options will either produce duplicates if you run your script twice in the specified period, or miss some files if it is not executed at the right time. The timing intricacies of using these options for something like this are not easy to deal with.
I can propose a few alternatives:
a) Use three directories instead of one: incoming/, processing/ and done/. Your users should only be allowed to put files in incoming/. You move any files in there to processing/ with a simple mv incoming/* processing/ before running your perl script, then you move them from processing/ to done/ when it's over.
In my opinion this is the simplest and best solution, and the one used by mail servers etc when dealing with this issue. If I were you and there were not any special circumstances preventing you from doing this, I'd stop reading here.
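A minimal sketch of that flow (the directory names are just examples):
# grab whatever has arrived so far, process it, then archive it
cd "$data_dir"
mv incoming/* processing/ 2>/dev/null
for f in processing/*; do
    [ -f "$f" ] || continue
    ./script.pl "$f" && mv "$f" done/
done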
b) Have your finder script touch a special file (e.g. .timestamp, perhaps in a different directory, so that your users will not tamper with it) when it's done. This will allow your script to remember the last time it was run. Then use
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' ';'
to run your perl script for each file. You should modify your perl script so that it can run repeatedly with a different file name each time. If you can modify it to accept multiple files in one go, you can also run it with
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' +
which will minimise the number of ./script.pl processes. Take care to handle the first run of the find script, when the .timestamp file is missing. A good solution would be to simply ignore it by not using the -*newer options at all in that case. Also keep in mind that there is a race condition where files added after find was started but before touching the timestamp file will not be processed.
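A rough way to handle that first run could look like this (only a sketch; it simply processes everything when .timestamp does not exist yet, then records the time of this run):
# sketch: fall back to processing all files on the very first run
cd "$data_dir"
if [ -e .timestamp ]; then
    find . \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' +
else
    find . -type f -exec ./script.pl '{}' +
fi
touch .timestamp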
c) As a variation of (b), have your script update the timestamp with the time of the processed file that was created/modified most recently. This is tricky, because find cannot order its output on its own. You could use a wrapper around your perl script to handle this:
#!/bin/bash
# update .timestamp to match the newest of the files about to be processed
for i in "$@"; do
    find "$i" \( -cnewer .timestamp -o -newer .timestamp \) -exec touch -r '{}' .timestamp ';'
done
./script.pl "$@"
This will update the timestamp if it is called to process a file with a newer mtime or ctime, minimising (but not eliminating) the race condition. It is, however, somewhat awkward; unavoidably so, since bash's [[ -nt ]] operator seems to check only the mtime. It might be better if your perl script handled that on its own.
d) Have your script store each processed filename and its timestamps somewhere and then skip duplicates. That would allow you to just pass all files in the directory to it and let it sort out the mess. Kinda tricky though...
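A rough sketch of that bookkeeping idea, keeping only the filenames (the timestamp part is left out) and assuming GNU grep and filenames without newlines:
# skip anything already listed in processed.list; record each success
cd "$data_dir"
touch processed.list
find . -type f ! -name processed.list | grep -vxF -f processed.list |
while read -r f; do
    ./script.pl "$f" && printf '%s\n' "$f" >> processed.list
done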
e) Since you are using Linux, you might want to have a look at inotify and the inotify-tools package - specifically the inotifywait tool. With a bit of scripting it would allow you to process files as they are added to the directory:
inotifywait -e MOVED_TO -e CLOSE_WRITE -m -r testd/ | grep --line-buffered -e MOVED_TO -e CLOSE_WRITE | while read d e f; do ./script.pl "$f"; done
This has no race conditions, as long as your users only create/copy/move files, not directories.
The perl script will only execute against the file which find gives it. Perhaps you should remove the -mtime -1 option from the find command so that it picks up all the files in the directory?

run program multiple times using one line shell command

I have the following gifs on my linux system:
$ find . -name *.gif
./gifs/02.gif17.gif
./gifs/fit_logo_en.gif
./gifs/halloween_eyes_63.gif
./gifs/importing-pcs.gif
./gifs/portal.gif
./gifs/Sunflower_as_gif_small.gif
./gifs/weird.gif
./gifs2/00p5dr69.gif
./gifs2/iss013e48788.gif
...and so on
What I have written is a program that converts GIF files to BMP with the following interface:
./gif2bmp -i inputfile -o outputfile
My question is: is it possible to write a one-line command using xargs, awk, find, etc. to run my program once for each of these files, or do I have to write a shell script with a loop?
For that kind of work, it may be worth looking at the find man page, especially the -exec option.
You can write something along the line of:
find . -name '*.gif' -exec gif2bmp -i {} -o {}.bmp \;
You can play with combinations of dirname and basename to obtain better naming for the output file, though in this case I would prefer to use a shell for loop, something like:
for i in `find . -name "*.gif"`; do
    DIR=`dirname "$i"`
    NAME=`basename "$i" .gif`
    gif2bmp -i "$i" -o "${DIR}/${NAME}.bmp"
done
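A whitespace-safe variant of the same idea (only a sketch; gif2bmp is the asker's own converter) lets find hand batches of files to a small shell wrapper:
# convert every .gif, writing file.bmp next to file.gif
find . -name '*.gif' -exec sh -c '
    for f in "$@"; do
        gif2bmp -i "$f" -o "${f%.gif}.bmp"
    done
' sh {} +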
Using GNU Parallel you can do:
parallel ./gif2bmp -i {} -o {.}.bmp ::: *.gif
The added benefit is that it will run one job per CPU core in parallel.
Watch the intro video for a quick introduction: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (http://www.gnu.org/software/parallel/parallel_tutorial.html). Your command line will love you for it.