This is the code:
#!/usr/bin/perl -w
$dir="/vol.nas/rpas_qc/mohima/Test/translations";
$dir1="/vol.nas/rpas_qc/mohima/Test/dest";
`find $dir -type f -exec rsync -a {} $dir1\`;
This line:
find $dir -type f -exec rsync -a {} $dir1\
works fine in Unix but I am getting an error in perl:
Can't find string terminator "`" anywhere before EOF at test1.pl line 4
I am trying to copy all files in $dir to $dir1 without the subdirectories.
Using perl since the script will do a lot of other stuff which is easier in perl.
Any help is appreciated.
\ is the escape character in Perl. The \ at the end of your find command is escaping the `. You need to escape the backslash with another one.
`find $dir -type f -exec rsync -a {} $dir1 \\`;
This will now fail with find: missing argument to -exec. You're also going to need an escaped semicolon on the end of the -exec part.
`find $dir -type f -exec rsync -a {} $dir1 \\;`;
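Putting both fixes together, the script from the question becomes:
#!/usr/bin/perl -w
$dir="/vol.nas/rpas_qc/mohima/Test/translations";
$dir1="/vol.nas/rpas_qc/mohima/Test/dest";
`find $dir -type f -exec rsync -a {} $dir1 \\;`;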
In perl, try changing:
find $dir -type f -exec rsync -a {} $dir1\
To:
find $dir -type f -exec rsync -a {} $dir1\\
You need to escape your backslashes and other special characters: once for the Perl code and again for whatever other language you are handing the string to (in this case, your shell).
`find $dir -type f -exec rsync -a {} $dir1\`;
In the above code, the \ is escaping your last backtick ( ` ) in perl, so your child shell command is never terminated. To fix this, simply add another backslash, which makes it a literal backslash instead of escaping the backtick:
`find $dir -type f -exec rsync -a {} $dir1\\`;
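If you would rather sidestep the shell quoting rules altogether, here is a minimal pure-Perl sketch of the same copy (same paths as the question; like the rsync version, it flattens every file into $dir1 without recreating subdirectories):
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Copy;

my $dir  = "/vol.nas/rpas_qc/mohima/Test/translations";
my $dir1 = "/vol.nas/rpas_qc/mohima/Test/dest";

# Walk $dir recursively and copy every plain file into $dir1.
find(sub {
    return unless -f $_;
    copy($_, $dir1) or warn "Could not copy $File::Find::name: $!";
}, $dir);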
I am now working with "Jenkinsfile".
I need to do a "find" by file extension and then run a "sed -i" on the results, ignoring some hidden directories and other folders.
I don't know the correct syntax.
Example:
def replacePath() {
sh 'sed -i "s/A\\/B/C\\/D\\/E\\/F\\/G\\/A\\/B\\/opt\\/C/g" \$(find . -type f -name "*.json" not path ..... -print0) '
Try using xargs, like so:
find . -type f -name '*.json' ... -print0 | xargs -0 sed -i 's/pattern/replacement/g'
Using xargs has fewer problems than passing arguments on the command line with $(...), particularly when combined with -print0 and xargs -0, since it can then cope with filenames containing whitespace and shell metacharacters.
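For instance, a sketch of what the full pipeline might look like (the excluded .git and node_modules paths are only placeholders for whatever directories you need to skip, and using # as the sed delimiter avoids escaping every slash):
find . -type f -name '*.json' -not -path './.git/*' -not -path './node_modules/*' -print0 |
  xargs -0 sed -i 's#A/B#C/D/E/F/G/A/B/opt/C#g'
Inside the Jenkinsfile, you can wrap the whole pipeline in a triple-quoted sh '''...''' step so you do not also have to escape the quotes for Groovy.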
I'm using this to find files of a particular name in subdirectories, then editing some content:
find prod -type f -name "file.txt" -exec sed -i '' -e "s,^varname.*$, varname = \"$value\"," {} +
How can I get the name of the current directory (not the directory the script is executed in, rather the directory the file is found in) and insert it into the replace text? Something like:
find prod -type f -name "file.txt" -exec sed -i '' -e "s,^ varname.*$, varname = \"$value/$dirname\"," {} +
I'm hoping to keep it as a one-liner. My most recent attempt was this, but the replacement didn't work and I feel there must be a simpler syntax:
find prod -type f -name "file.txt" -exec sh -c '
for file do
dirname=${file%/*}
done' sed -i '' -e "s,^varname.*$, varname = \"$value/$dirname\"," {} +
Example:
value=bar
file.txt input:
varname = "foo"
file.txt output:
varname = "bar/directory_name"
You can do this with GNU awk in the same way. The sed command you are using can be replaced with:
$ awk --inplace -v v="$value" '(FNR==1){d=FILENAME;sub("/[^/]*$","",d)}/^varname/{$0="varname = "v"/"d}1'
So your find would read:
$ find prod -type f -name "file.txt" -exec awk --inplace -v v="$value" '(FNR==1){d=FILENAME;sub("/[^/]*$","",d)}/^varname/{$0="varname = "v"/"d}1' {} \;
This might work for you (GNU sed & parallel):
find prod -type f -name "file.txt" |
parallel -qa- --link sed -i 's#\(varname=\).*#\1"{2}{1//}"#' {1} ::: $value
We supply 2 sources to the parallel command. The first source is the list of files from the find command, read using the parallel option -a -. The second source is the variable $value; being only a single value, it is linked to the first source using the parallel option --link. The sed command is quoted using the parallel option -q, and normal regexp rules apply except that the values {2} and {1//} are first interpreted by parallel to represent the second source and the directory of the first source respectively.
N.B. To check the commands to parallel are as you desire, use the --dryrun option and check the output before running for real.
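For example, just slot --dryrun in after parallel to see the sed commands it would generate without running them:
find prod -type f -name "file.txt" |
parallel --dryrun -qa- --link sed -i 's#\(varname=\).*#\1"{2}{1//}"#' {1} ::: $value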
You need to use -execdir and spawn a shell:
find ... -execdir \
bash -c 'sed -i "" -e "s,^ varname.*$, varname = \"$value/${PWD}\"," "$1"' -- {} +
-execdir runs sed in the parent folder of the file instead of the folder from where you run find. This allows you to use $PWD.
A further note: I am calling bash with two arguments:
-exec bash -c '... code ...' -- {}
                             ^^ ^^
I'm passing the -- as a placeholder. When called with -c, bash starts to index arguments at $0 instead of $1 ($0 would normally contain the script's name). That allows me to use $1 for the filename from {}, which is imo more readable and understandable.
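A quick way to see that argument indexing in action (/some/file.txt simply stands in for whatever {} expands to):
bash -c 'echo "zeroth: $0, first: $1"' -- /some/file.txt
# prints: zeroth: --, first: /some/file.txt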
Kind of new to Perl, still navigating my way through.
Is there another way to write the bash command below in "Perl"?
find $INPUT_DIR -ctime -$DAYS_NUM -type f -exec grep -hs EDI_DC {} \; |
grep -i -v xml >> $OUTPUT_DIR/$OUTPUT_FILENAME
where INPUT_DIR, DAYS_NUM, OUTPUT_DIR and OUTPUT_FILENAME are arguments passed during runtime.
When you need to convert a find command to Perl, consider using the find2perl script.
It generates the Perl code for you.
find2perl 'INPUT_DIR' -ctime -'DAYS_NUM' -type f -exec grep -hs EDI_DC {} \;
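If you would rather skip the shell entirely, here is a rough pure-Perl sketch of the same pipeline (it assumes the four values arrive positionally in @ARGV; adjust to however your script actually receives them):
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my ($input_dir, $days_num, $output_dir, $output_filename) = @ARGV;

open my $out, '>>', "$output_dir/$output_filename"
    or die "Cannot open output file: $!";

find(sub {
    return unless -f $_;              # -type f
    return unless -C $_ < $days_num;  # -ctime -$DAYS_NUM (changed less than N days ago)
    open my $in, '<', $_ or return;   # grep -s: silently skip unreadable files
    while (my $line = <$in>) {
        # grep EDI_DC, then grep -i -v xml
        print $out $line if $line =~ /EDI_DC/ && $line !~ /xml/i;
    }
    close $in;
}, $input_dir);

close $out;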
I have this working fine for me:
find Sources/$1-$2 -name '*' |xargs perl -pi -e "s/domain.com/$2/g"
But when I change it to the following it doesn't:
find Sources/$1-$2 -name '*.php,*.rb' |xargs perl -pi -e "s/domain.com/$2/g"
What's wrong?
Here's some explanation behind the solution that others have provided.
The tests in a find command are combined with Boolean operators:
-a -and
-o -or
! -not
If you don't supply an operator, -and is the default.
find . -type f -name '*.rb' # The command as entered.
find . -type f -a -name '*.rb' # Behind the scenes.
Your search failed because it didn't find any matching files:
# Would find only files with bizarre names like 'foo.php,bar.rb'
find . -name '*.php,*.rb'
You need to supply the file extensions as separate -name tests, combined in an OR fashion.
find . -name '*.php' -o -name '*.rb'
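Note that -and binds more tightly than -or, so as soon as you combine the name tests with anything else (another test such as -type f, or an action), group them with escaped parentheses, for example:
find . -type f \( -name '*.php' -o -name '*.rb' \)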
you have to write it as:
find Sources/$1-$2 -name '*.php' -o -name '*.rb' ....
I'm guessing that you want all files that end in .php or .rb.
Try find Sources/$1-$2 \( -iname "*.php" -o -iname "*.rb" \) -print |xargs perl -pi -e "s/domain.com/$2/g"
It is often better to filter find's results with [ef]grep. Why?
Because you can feed the grep pattern in as an argument, or read it from a config file, and so on. It is much easier to write grep "$PATTERN" than to construct long find arguments with '-o'. (Of course, there are situations where find arguments are better, but not in your case.)
The cost is one more process. So, for your example, it is easy to write a script myscript.sh:
find Sources/$1-$2 -print | egrep -i "$3" | xargs ...
you can call it
./myscript.sh aaa bbb ".(php|rb)$"
and the result will be equivalent to the more complicated
find Sources/$1-$2 \( -iname '*.php' -o -iname '*.rb' \) | xargs ...
But why bother? If you have bash 4+ (and shopt -s globstar in your .bashrc) you can simply write:
perl -pi -e '.....' Sources/aaa-bbb/**/*.{rb,php}
The ** works like a recursive find -name.
By the way, xargs is not needed here.
find Sources/$1-$2 \( -name '*.php' -o -name '*.rb' \) \
-exec perl -i -pe "s/domain\.com/$2/g" {} +
Also notice the "." in /domain.com/ needs to be escaped.
I want to create a tar file with all the output files resulting from executing a find command.
I tried the following command:
find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7 -exec tar cvf test.tar.gz {} \;
But it is including only the last found file in the test.tar file. How to include all files in test.tar file?
Regards
Chaitanya
Use command line substitution:
tar cf test.tar $(find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7)
What this does is run the command in $() and make its output the command-line arguments of the outer command.
This uses the more modern bash notation. If you are not using bash, you can also use backticks which should work with most shells:
tar cf test.tar `find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7`
While backticks are more portable, the $() notation is easier if you need to nest command line substitution.
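For example, nesting reads naturally with $(), whereas with backticks the inner pair has to be escaped:
tar cf test.tar $(find $(pwd) \( -name "*.log" -o -name "*.log.*" \) -mtime +7)
tar cf test.tar `find \`pwd\` \( -name "*.log" -o -name "*.log.*" \) -mtime +7`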
You want to pipe the file names found by find into tar.
find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7 -exec tar cvf test.tar.gz {} \;
But it is including only the last found file in the test.tar file.
That's because, for every file it finds, find runs a new tar command that overwrites the tar file from the previous one.
You can make find batch the files together by changing the \; to a +, but if more files are found than can be listed at once, find will still run multiple commands, each overwriting the tar file from the previous one. You could pipe the output through xargs, but it has the same issue of possibly running the command multiple times. The command-line substitution recommended above is the safest way I know of to do it, ensuring that tar is only called once; although if too many files are found, it may give an error about the command line being too long.
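If the argument-length limit is a real concern and you have GNU tar, you can instead stream the file list from find straight into tar, which avoids both the repeated invocations and the long command line (the --null and -T - options are GNU tar specific):
find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7 -print0 | tar cvf test.tar --null -T -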
This one should equally work:
find . -name "*.log" -o -name "*.log.*" -mtime +7 -exec tar cvf test.tar {} +
Note the "+" at the end vs "\;".
For a reliable way when a very large number of files will match the search:
find . -name "*.log" -o -name "*.log.*" -mtime +7 > /tmp/find.out
tar cvf test.tar -I /tmp/find.out
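Note that the -I spelling for reading file names from a file is not portable; with GNU tar the equivalent option is -T (--files-from), so the same two steps would read:
find . \( -name "*.log" -o -name "*.log.*" \) -mtime +7 > /tmp/find.out
tar cvf test.tar -T /tmp/find.out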