find -ctime bash alternative in Perl

Kind of new to Perl, still navigating my way through.
Is there another way to write the bash command below in Perl?
find $INPUT_DIR -ctime -$DAYS_NUM -type f -exec grep -hs EDI_DC {} \; |
grep -i -v xml >> $OUTPUT_DIR/$OUTPUT_FILENAME
where INPUT_DIR, DAYS_NUM, OUTPUT_DIR and OUTPUT_FILENAME are arguments passed during runtime.

When you want to convert a find command to Perl, consider using the find2perl script.
It generates the Perl code for you.
find2perl 'INPUT_DIR' -ctime -'DAYS_NUM' -type f -exec grep -hs EDI_DC {} \;
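If you'd rather skip find2perl and write it by hand, here is a minimal sketch using the core File::Find module. It assumes the four values arrive as command-line arguments in the order shown, and it uses Perl's -C file test (inode change age in days) to approximate -ctime -N:
#!/usr/bin/perl
# Sketch of: find $INPUT_DIR -ctime -$DAYS_NUM -type f -exec grep -hs EDI_DC {} \; | grep -i -v xml
use strict;
use warnings;
use File::Find;

my ($input_dir, $days_num, $output_dir, $output_filename) = @ARGV;

open my $out, '>>', "$output_dir/$output_filename"
    or die "Cannot append to $output_dir/$output_filename: $!";

find(sub {
    return unless -f $_;              # -type f
    return unless -C $_ < $days_num;  # -ctime -$DAYS_NUM (inode change age in days)
    open my $fh, '<', $_ or return;   # grep -s: quietly skip unreadable files
    while (my $line = <$fh>) {
        # grep -hs EDI_DC ... | grep -i -v xml
        print {$out} $line if $line =~ /EDI_DC/ && $line !~ /xml/i;
    }
    close $fh;
}, $input_dir);

close $out;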

Related

Unix code inside Perl

This is the code:
#!/usr/bin/perl -w
$dir="/vol.nas/rpas_qc/mohima/Test/translations";
$dir1="/vol.nas/rpas_qc/mohima/Test/dest";
`find $dir -type f -exec rsync -a {} $dir1\`;
This line:
find $dir -type f -exec rsync -a {} $dir1\
works fine in Unix, but I am getting an error in Perl:
Can't find string terminator "`" anywhere before EOF at test1.pl line 4
I am trying to copy all files in $dir to $dir1 without the subdirectories.
Using Perl since the script will do a lot of other stuff which is easier in Perl.
Any help is appreciated.
\ is the escape character in Perl. The \ at the end of your find command is escaping the `. You need to escape the backslash with another one.
`find $dir -type f -exec rsync -a {} $dir1 \\`;
This will now fail with find: missing argument to -exec. You're also going to need a semicolon on the end of the -exec part.
`find $dir -type f -exec rsync -a {} $dir1 \\;`;
In Perl, try changing:
find $dir -type f -exec rsync -a {} $dir1\
To:
find $dir -type f -exec rsync -a {} $dir1\\
You need to escape your backslash and special characters: once for the Perl code and again for whatever other language you're calling (in this case, your shell).
`find $dir -type f -exec rsync -a {} $dir1\`;
In the above code, the \ is escaping your last backtick ( ` ) in Perl, so your child shell execution is never terminated. To fix this, simply add another backslash, which stops that character from being interpolated:
`find $dir -type f -exec rsync -a {} $dir1\\`;
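If you would rather not shell out at all, here is a minimal pure-Perl sketch using the core File::Find and File::Copy modules. Two differences worth noting: rsync -a preserves permissions and timestamps, while copy() does not, and same-named files from different subdirectories will overwrite each other in $dir1, just as with the original command:
#!/usr/bin/perl
# Sketch: recursively copy every plain file under $dir into the flat directory $dir1.
use strict;
use warnings;
use File::Find;
use File::Copy;

my $dir  = "/vol.nas/rpas_qc/mohima/Test/translations";
my $dir1 = "/vol.nas/rpas_qc/mohima/Test/dest";

find(sub {
    return unless -f $_;   # files only, like -type f
    copy($_, "$dir1/$_") or warn "copy $File::Find::name: $!";
}, $dir);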

Perl: edit file, not just output to shell

I found a little Perl one-liner that will change the serial in the zone files on my Bind server.
However, it won't change the actual file; it just prints the output directly to the shell.
This is what I run:
find . -type f -print0 | xargs -0 perl -e "while(<>){ s/\d+(\s*;\s*[sS]erial)/2015050466\1/; print; }"
This gives me the correct output on the shell, and if I remove the print; at the end of the Perl line, nothing happens. I want it to actually change the files to match the output I got.
I'm a total noob when it comes to Perl, so this might be a simple fix; any answer would be appreciated.
I am assuming you want to replace the string inside the files found by find.
The example command below will change, in place (-i), any "foo" to "bar" in all *.txt files under the current directory.
find . -type f -name '*.txt' -print0 | xargs -0 perl -p -i -e 's/foo/bar/g;'
And for your question, you should be able to get it with this command:
find . -type f -print0 | xargs -0 perl -p -i -e 's/\d+(\s*;\s*[sS]erial)/2015050466\1/;'
Note: It is a good habit to always use single quotes rather than double quotes around the Perl code. This is because inside double quotes, a \, $, etc. may be processed by the shell before being passed to Perl. See the Bash manual.
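A related safety tip: -i with no suffix edits the files with no backup. If you give -i a suffix, Perl keeps the originals, which is handy when mass-editing zone files (the file name below is just a placeholder):
perl -p -i.bak -e 's/foo/bar/g' zonefile.db
This rewrites zonefile.db in place and leaves the untouched original as zonefile.db.bak.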

Using Sed and Find with Grep Linux

I am writing a script that will search for PHP files that contain a phrase, and I would like that phrase replaced with a new one. Below is my little script, but it is not working: it searches OK but fails at the search-and-replace section.
find . -type f -name "*.php" -exec grep -H "define('DB_HOST', 'localhost');" {} \; | xargs sed -i "define('DB_HOST', 'localhost');/define('DB_HOST', '10.0.0.1');/g"
Can someone explain to me what I am doing wrong?
Many thanks,
Joe
Did you forget the s/ at the beginning of the sed expression? As in:
sed 's/expression1/expression2/g'
You seem to have
sed 'expression1/expression2/g'
Edit
Another thing: you don't need to use xargs here. You can use multiple -exec flags, and find will run each one only if all the previous ones succeeded:
find . -name '*.php' -exec grep 'whatever' {} \; -exec sed -i 's/whatever/you want/g' {} \;
This will work:
find . -type f -name "*.php" -exec grep -l "define('DB_HOST', 'localhost');" {} \; | xargs sed -i "s/define('DB_HOST', 'localhost');/define('DB_HOST', '10.0.0.1');/g"
Corrections:
Missing s/ at the start of the sed search-and-replace command.
Use grep -l instead of grep -H, so that only filenames (not matching lines) are passed on to xargs.
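Since the pattern is full of single quotes, which collide with sed's usual single-quoted script, a Perl one-liner is a reasonable alternative sketch here; note the parentheses must be escaped on the match side because they are regex metacharacters:
find . -type f -name "*.php" -exec perl -pi -e "s/define\('DB_HOST', 'localhost'\);/define('DB_HOST', '10.0.0.1');/g" {} +
The -exec ... {} + form batches many files into each perl invocation, so no grep or xargs stage is needed.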

Linux: Using find and grep to find a keyword in files and count occurrences

I'm executing these bash commands inside a search script I've built with PHP:
find myFolder -type f -exec grep -r KEYWORD {} +
find myFolder -type f -exec grep -r KEYWORD {} + | wc -l
find myFolder -type f | wc -l
The first line gives me back the filenames where KEYWORD was found.
The second line gives me the number of occurrences and the third line the total number of files.
Is there a way to do this more elegantly and faster?
You can get more efficiency if you avoid -exec, which (in its \; form) forks once per matching file. xargs is a better choice here, so I would do something like this:
find myFolder -type f -print0 | xargs -0 grep KEYWORD
find myFolder -type f -print0 | xargs -0 grep KEYWORD | wc -l
The last one should be OK, at least with GNU find.
The -print0 and -0 ensure that filenames with spaces in them are handled correctly.
Note that grep -r implies recursive grepping, but since find is already doing the directory traversal and only hands grep plain files, it is redundant here.
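One subtlety worth noting: piping grep into wc -l counts matching lines, not occurrences, so a line containing KEYWORD twice is counted once. If you want true occurrence counts, have grep print each match on its own line with -o:
find myFolder -type f -print0 | xargs -0 grep -o KEYWORD | wc -l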

How can I traverse a directory tree using a bash or Perl script?

I am interested in getting into bash scripting and would like to know how you can traverse a Unix directory and log the path of the file you are currently looking at if it matches a regex criterion.
It would go like this:
Traverse a large Unix file/folder structure.
If the current file's contents contain a string that matches one or more regexes,
then append the file's full path to a results text file.
Bash or Perl scripts are fine, although I would prefer to see how you would do this as a bash script with grep, awk, etc.
find . -type f -print0 | xargs -0 grep -l -E 'some_regexp' > /tmp/list.of.files
Important parts:
-type f makes the find list only files
-print0 prints the files separated not by \n but by \0 - it is here to make sure it will work in case you have files with spaces in their names
xargs -0 splits its input on \0 and passes each element as an argument to the command you provided (grep in this example)
The cool thing about using xargs is that if your directory contains a really large number of files, you can speed up the process by parallelizing it:
find . -type f -print0 | xargs -0 -P 5 -L 100 grep -l -E 'some_regexp' > /tmp/list.of.files
This will run grep in 5 parallel copies, each scanning a separate batch of up to 100 files.
Use find and grep:
find . -exec grep -l -e 'myregex' {} \; >> outfile.txt
-l on the grep gets just the file name
-e on the grep specifies a regex
{} places each file found by the find command on the end of the grep command
>> outfile.txt appends to the text file
grep -l -R <regex> <location> should do the job.
If you wanted to do this from within Perl, you can take the find commands that people suggested and turn them into a Perl script with find2perl:
If you have:
$ find ...
make that
$ find2perl ...
That outputs a Perl program that does the same thing. From there, if you need to do something that's easy in Perl but hard in shell, you just extend the Perl program.
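Concretely, the workflow is just this (the script name is my own placeholder; note that find2perl was removed from core Perl in 5.22 and now lives on CPAN as App::find2perl):
$ find2perl . -type f > traverse.pl
$ perl traverse.pl
The generated script defines a wanted() callback that runs once per file visited, so the content-matching and logging logic from this question would go inside that sub.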
find /path -type f -name "*.txt" | awk '
{
    while ((getline line < $0) > 0) {
        if (line ~ /pattern/) {
            print $0 ":" line
            # do some other things here
        }
    }
    close($0)  # close each file, or a long run will exhaust file descriptors
}'