Hello all who may assist. I have a problem which I don't seem to understand. I am selecting files and combining them with a "cdo" command; after combining, I want to move the combined file into another directory.
This worked perfectly a month ago; then I had to increase the RAM, which took a month, during which no work was done on the script.
Here is the start of my script
use strict;
use warnings;
use File::Path;
use File::Find;
use File::Copy qw(copy);
use File::Copy qw(move);
use Path::Tiny;
use Tie::File;
use File::Cat;
Before I come to the problem: the following copy command works after selecting a file
print "Copying $file\n" if $debug;
my $cmd01 = "cp $Input_Data_Dirs[$ll]/$file $Output_Base_Dirs[$mm]";
print "Doing system ($cmd01)\n" if $debug;
system ($cmd01);
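Not the bug being asked about, but worth noting: the list form of system bypasses the shell entirely, so file names with spaces or metacharacters can't break the command. A minimal self-contained sketch (the paths here are temporary stand-ins, not the asker's directories):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $src  = tempdir(CLEANUP => 1);   # stands in for $Input_Data_Dirs[$ll]
my $dst  = tempdir(CLEANUP => 1);   # stands in for $Output_Base_Dirs[$mm]
my $file = 'example file.nc';       # a name with a space, to show the point

open my $fh, '>', "$src/$file" or die "open: $!";
close $fh;

# List form: each argument is passed as-is, with no shell word-splitting.
system('cp', "$src/$file", $dst) == 0
    or die "cp failed: $?";

print -e "$dst/$file" ? "copied\n" : "missing\n";
```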
So I am able to move several files with the above construction; then I move into the directory. From there I combine six files into one:
print "doing cat with cdo\n" if $debug;
my $cmd05 = "cdo cat @sixfiles $newfile";
print "Doing system ($cmd05)\n" if $debug;
system ($cmd05);
Here is the part which fails
#-----------------------------------------
#print "Moving combined file\n" if $debug;
#my $cmd21 = "cp $newfile $Output_Base_Dirs[$mm]/$Var_Dirs[$kk]";
#print "Doing system ($cmd21)\n" if $debug;
#system ($cmd21);
#copy $newfile, $Output_Base_Dirs[$mm]/$Var_Dirs[$kk];
move $newfile, $Output_Base_Dirs[$mm]/$Var_Dirs[$kk];
#----------------------------------------------------
The unix commands "cp" and "mv" give the error
cp: missing destination file operand after 'pr_AFR-44_CNRM-CERFACS-CNRM-CM5_historical_r1i1p1_CLMcom-CCLM4-8-17_v1_day_19710101_20001231.nc'
Try 'cp --help' for more information.
sh: 2: /home/suman/CORDEX/DATA/historical/precip: Permission denied
and
mv: missing destination file operand after 'pr_AFR-44_CNRM-CERFACS-CNRM-CM5_historical_r1i1p1_CLMcom-CCLM4-8-17_v1_day_19710101_20001231.nc'
Try 'mv --help' for more information.
sh: 2: /home/suman/CORDEX/DATA/historical/precip: Permission denied
I have made sure there is no permission problem by issuing the command
sudo chmod -Rv ugo +rwx CORDEX
On the other hand, the Perl File::Copy functions "copy" and "move" give the following errors
Argument "precip" isn't numeric in division (/) at merge_files.pl line 247.
Argument "/home/suman/CORDEX/DATA/historical" isn't numeric in division (/) at merge_files.pl line 247.
Illegal division by zero at merge_files.pl line 247.
I am really defeated by these errors and would appreciate any assistance in resolving them.
I have upvoted the solution from Dave Cross for the reason that it eliminates the error of non-numeric/division by zero. Thanks Dave for that.
However, after defining
my $target_dir="$Output_Base_Dirs[$mm]/$Var_Dirs[$kk]";
both the commands:
my $cmd21 = "cp -v $newfile --target-directory=$target_dir";
and
my $cmd21 = "mv -v $newfile --target-directory=$target_dir";
give the same error
cp: missing destination file operand after 'pr_AFR-44_CNRM-CERFACS-CNRM-CM5_historical_r1i1p1_CLMcom-CCLM4-8-17_v1_day_19710101_20001231.nc'
Try 'cp --help' for more information.
sh: 2: --target-directory=/home/suman/CORDEX/DATA/historical/precip: not found
yet the target_dir exists.
The two perl commands
copy $newfile, "$target_dir" or die "copy operation failed: $!";
move $newfile, "$target_dir" or die "move operation failed: $!";
move operation failed: No such file or directory at merge_files.pl line 249.
copy operation failed: No such file or directory at merge_files.pl line 248.
I am really baffled.
move $newfile, $Output_Base_Dirs[$mm]/$Var_Dirs[$kk];
This is wrong. When combining two variables like this, you need to put them in a string.
move $newfile, "$Output_Base_Dirs[$mm]/$Var_Dirs[$kk]";
Without that, Perl thinks you're trying to do a division sum.
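A minimal sketch of the difference (the array names mirror the asker's; the values are taken from the error messages above):

```perl
use strict;
use warnings;
use File::Spec;

my @Output_Base_Dirs = ('/home/suman/CORDEX/DATA/historical');
my @Var_Dirs         = ('precip');
my ($mm, $kk)        = (0, 0);

# Wrong: outside a string, / is numeric division, hence
# "isn't numeric in division (/)" and "Illegal division by zero".
# my $bad = $Output_Base_Dirs[$mm] / $Var_Dirs[$kk];

# Right: interpolate both elements into one string...
my $target_dir = "$Output_Base_Dirs[$mm]/$Var_Dirs[$kk]";

# ...or, more portably, let File::Spec join the pieces.
my $target_dir2 = File::Spec->catdir($Output_Base_Dirs[$mm], $Var_Dirs[$kk]);

print "$target_dir\n";   # /home/suman/CORDEX/DATA/historical/precip
```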
Related
I am trying to run an executable perl file that copies a directory to another location and then removes every file in that new location except for those ending with .faa and .tsv. Here's the code:
#!/usr/bin/perl
use strict;
use warnings;
my $folder = $ARGV[0];
system ("cp -r ~/directoryA/$folder/ ~/directoryB/");
chdir "~/directoryB/$folder";
# Remove everything except for .faa and .tsv files
system ("rm !\(*.faa|*.tsv\)");
Regardless of whether or not I escape the parenthesis, I get the error:
sh: 1: Syntax error: "(" unexpected
and it didn't remove any files. The location of the perl file is ~/bin, and I'd like to avoid changing the #!/usr/bin/perl line since several computers will be using this script.
This is a little beyond my knowledge, as I only know basic scripting, but does anyone know why this is happening?
This entire program is much simpler without the use of shell commands.
I would write this, which copies only the wanted file types in the first place. I assume there are no nested directories to be copied:
use strict;
use warnings;
use File::Copy 'copy';
my ($folder) = @ARGV;
while ( my $file = glob "~/directoryA/$folder/*.{faa,tsv}" ) {
    copy $file, "$ENV{HOME}/directoryB";   # File::Copy does not expand ~ itself
}
It's complaining about this line:
system ("rm !\(*.faa|*.tsv\)");
which, even if you get the quoting of the shell metacharacters right, is pretty obtuse and does not, I believe, erase all files that don't end in .faa or .tsv.
Perl is up to the latter task.
unlink grep { -f $_ && !/[.]faa$/ && !/[.]tsv$/ } glob("*")
is one of several ways.
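A self-contained way to try that line out safely, in a throwaway directory (the file names are invented for the demo):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);
chdir $dir or die "chdir: $!";

# Create some dummy files.
for my $name (qw(keep.faa keep.tsv drop.txt drop.log)) {
    open my $fh, '>', $name or die "open $name: $!";
    close $fh;
}

# Delete every plain file that is not *.faa or *.tsv.
unlink grep { -f $_ && !/[.]faa$/ && !/[.]tsv$/ } glob('*');

my @left = sort glob('*');
print "@left\n";   # keep.faa keep.tsv
```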
I currently have an issue with reading files in one directory.
I need to take all the fastq files in a folder, run the script for each file, and then put the new files in an ‘Edited_sequences’ folder.
The one script I had is
perl -ne '$i++; if($i<80001){print}' BM2003_TCCCAGAACAAC_L001_R1_001.fastq > ./Edited_sequences/BM2003_TCCCAGAACAAC_L001_R1_001.fastq
It takes the first 80000 lines in one fastq file then outputs the result.
Now suppose I have 2000 fastq files; then I would need to copy and paste the command 2000 times.
I know there is a glob command suit for this situation but I just do not know how to deal with that.
Please help me out.
You can use perl to do the copy/paste for you. The first argument, *.fastq, matches all fastq files, and the second, ./Edited_sequences, is the target folder for the new files:
perl -e '$d=pop; `head -80000 "$_" > "$d/$_"` for @ARGV' *.fastq ./Edited_sequences
glob gets you an array of filenames matching a particular expression. It's frequently used with <> brackets, a lot like reading input (you can think of it as reading files from a directory).
This is a simple example that will print the names of every ".fastq" file in the current directory:
print "$_\n" for <*.fastq>;
The important part is <*.fastq>, which gives us an array of filenames matching that expression (in this case, a file extension). If you need to change which directory your Perl script is working in, you can use chdir.
From there, we can process your files as needed:
while (my $filename = <*.fastq>) {
    open(my $in, '<', $filename) or die $!;
    open(my $out, '>', "./Edited_sequences/$filename") or die $!;
    for (1..80000) {
        my $line = <$in>;
        last unless defined $line;   # stop early if the file has fewer lines
        print $out $line;
    }
}
You have two choices:
Use Perl to read in the 2000 files and run it as part of your program
Use the Shell to pass each of those 2000 file to your command line
Here's the bash alternative:
for file in *.fastq
do
perl -ne '$i++; if($i<80001){print}' "$file" > "./Edited_sequences/$file"
done
Your same Perl script, but with the shell finding each file. This should work and not overload the command line. The for loop in bash, if handed a glob, expands it correctly.
However, I always recommend that you don't actually execute the command, but echo the resulting commands into a file:
for file in *.fastq
do
echo "perl -ne '\$i++; if(\$i<80001){print}' \
\"$file\" > \"./Edited_sequences/$file\"" >> myoutput.txt
done
Then, you can look at myoutput.txt to make sure it looks good before you actually do any real harm. Once you've determined that myoutput.txt is a good file, you can execute that as a shell script:
$ bash myoutput.txt
I am trying to chdir in perl but I am just not able to get my head around what's going wrong.
This code works.
chdir('C:\Users\Server\Desktop')
But when trying to get the user's input, it doesn't work. I even tried using chomp to remove any spaces that might come.
print "Please enter the directory\n";
$p=<STDIN>;
chdir ('$p') or die "sorry";
system("dir");
Also could someone please explain how I could use the system command in this same situation and how it differs from chdir.
The final aim is to access two folders, check for files that are named the same (eg: if both the folders have a file named "water") and copy the file that has the same name into a third folder.
chdir('$p') tries to change to a directory literally named $p. Drop the single quotes:
chdir($p)
Also, after reading it in, you probably want to remove the newline (unless the directory name really does end with a newline):
$p = <STDIN>;
chomp($p);
But if you are just chdiring to be able to run dir and get the results in your script, you probably don't want to do that. First of all, system runs a command but doesn't capture its output. Secondly, you can just do:
opendir my $dirhandle, $p or die "unable to open directory $p: $!\n";
my @files = readdir $dirhandle;
closedir $dirhandle;
and avoid the chdir and running a command prompt command altogether.
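One caveat worth adding to that sketch: readdir returns the `.` and `..` entries, and returns bare names without the directory prefix, so a filter is usually wanted. For example:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $p = tempdir(CLEANUP => 1);
open my $fh, '>', "$p/water" or die "open: $!";
close $fh;

opendir my $dirhandle, $p or die "unable to open directory $p: $!\n";
# Keep only plain files; this also skips the "." and ".." entries.
my @files = grep { -f "$p/$_" } readdir $dirhandle;
closedir $dirhandle;

print "@files\n";   # water
```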
I will use it this way.
chdir "C:/Users/Server/Desktop"
The above works for me
Hi, I have written a Perl script which copies an entire directory structure from a source to a destination. From that Perl script I then have to create a restore script, i.e. a (shell) script which uses bash features to undo what the Perl script has done and restore the contents from the destination back to the source. I am struggling to find the correct function or command which can copy recursively (not a hard requirement), but I want exactly the same structure as it was before.
Below is the way I am trying to create a file called restore to do the restoration process.
I am particularly looking for an algorithm.
Also, restore should restore the structure to a directory supplied on the command line if one is given; if not, you can assume the default inputs supplied to the Perl script:
$source
$target
In this case we would want to copy from target back to source.
So we have two different parts in one script:
1. copy from source to destination;
2. create a script file which will undo what part 1 has done.
I hope this makes it very clear.
unless(open FILE, '>'."$source/$file")
{
# Die with error message
# if we can't open it.
die "\nUnable to create $file\n";
}
# Write some text to the file.
print FILE "#!/bin/sh\n";
print FILE "$1=$target;\n";
print FILE "cp -r \n";
# close the file.
close FILE;
# here we change the permissions of the file
chmod 0755, "$source/$file";
The last problem I have is that I couldn't get $1 into my restore file, because Perl treats it as one of its own variables; but I need it in the generated script to pick up the command-line input when I run restore, e.g. ./restore /home/xubuntu/User.
First off, the standard way in Perl for doing this:
unless(open FILE, '>'."$source/$file") {
die "\nUnable to create $file\n";
}
is to use the or statement:
open my $file_fh, ">", "$source/$file"
    or die qq(Unable to create "$file");
It's just easier to understand.
A more modern way would be use autodie; which will handle all IO problems when opening or writing to files.
use strict;
use warnings;
use autodie;
open my $file_fh, '>', "$source/$file";
You should look at the Perl Modules File::Find, File::Basename, and File::Copy for copying files and directories:
use File::Find;
use File::Basename;
my @file_list;
find ( sub {
return unless -f;
push @file_list, $File::Find::name;
},
$directory );
Now, @file_list will contain all the files in $directory.
for my $file ( @file_list ) {
my $directory = dirname $file;
mkdir $directory unless -d $directory;
copy $file, ...;
}
Note that autodie will also terminate your program if the mkdir or copy commands fail.
I didn't fill in the copy command because where you want to copy and how may differ. Also you might prefer use File::Copy qw(cp); and then use cp instead of copy in your program. The copy command will create a file with default permissions while the cp command will copy the permissions.
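Putting those pieces together into a runnable sketch (the source tree and $backup_dir here are temporary stand-ins, and the relative-path handling is one choice among several):

```perl
use strict;
use warnings;
use autodie;
use File::Find;
use File::Basename;
use File::Copy qw(cp);
use File::Path qw(make_path);
use File::Temp qw(tempdir);

# Illustrative source and backup roots; a real script would take these
# from the command line.
my $source     = tempdir(CLEANUP => 1);
my $backup_dir = tempdir(CLEANUP => 1);

mkdir "$source/sub";
open my $out, '>', "$source/sub/data.txt";
close $out;

my @file_list;
find( sub { return unless -f; push @file_list, $File::Find::name }, $source );

for my $file (@file_list) {
    # Re-create the file's relative directory under the backup root.
    ( my $rel = $file ) =~ s/^\Q$source\E\/?//;
    my $dest = "$backup_dir/$rel";
    make_path( dirname($dest) );
    cp $file, $dest or die "cp failed: $!";   # cp keeps permissions, copy does not
}

print -f "$backup_dir/sub/data.txt" ? "ok\n" : "missing\n";
```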
You didn't explain why you wanted a bash shell command. I suspect you wanted to use it for the directory copy, but you can do that in Perl anyway. If you still need to create a shell script, the easiest way is via a here-document:
print {$file_fh} <<END_OF_SHELL_SCRIPT;
Your shell script goes here
and it can contain as many lines as you need.
Since there are no quotes around `END_OF_SHELL_SCRIPT`,
Perl variables will be interpolated
This is the last line. The END_OF_SHELL_SCRIPT marks the end
END_OF_SHELL_SCRIPT
close $file_fh;
See Here-docs in Perldoc.
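For the asker's specific $1 problem, the trick inside an interpolating here-document is to escape the dollar sign so Perl leaves it for the shell. A hedged sketch (the path is the asker's example; the generated file is a temp file here):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my $target = '/home/xubuntu/User';      # example value from the question
my ( $file_fh, $script ) = tempfile();

# \$1 below is escaped, so Perl writes a literal $1 for the shell to use;
# $target is not escaped, so Perl interpolates its value.
print {$file_fh} <<END_OF_SHELL_SCRIPT;
#!/bin/sh
restore_to=\$1
cp -r "$target" "\$restore_to"
END_OF_SHELL_SCRIPT
close $file_fh;

chmod 0755, $script or die "chmod: $!";
```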
First, I see that you want to make a copy-script - because if you only need to copy files, you can use:
system("cp -r /sourcepath /targetpath");
Second, if you need to copy subfolders, you can use the -r switch, can't you?
I have this directory structure:
$ ls -F
analyze/
data.pl
input.pl
logminer.txt
logSearch.pl
readFormat.pl
Version Notes.txt
datadump.pl
format/
logminer.pl
logs/
properties.txt
unzip.exe
I run:
perl -e 'if (!(-d analyze)){ print "poo\n"}'
and it prints poo.
What is missing here? I have done tests like this earlier and it would correctly identify that the directory exists. Why not this directory?
perl -e 'if (!(-d "analyze")){ print "poo\n"}'
^-- ^---
missing quotes?
edit: changed to double quotes - forgot this was for command-line perl
First,
-d analyze
means "check if the file handle anaylyze is a directory handle". You want
-d 'analyze'
Now, you say you still get the problem by doing that, so check what error you're getting.
my $rv = -d 'analyze';
die "stat: $!" if !defined($rv);
die "Not a dir" if !$rv;
-d is just a thin wrapper around stat(2), so it's not Perl that "can't see", it's the system.
The most common errors:
The current working directory isn't what you think it is. (Many people assume it's always the directory in which the script resides.)
The file name has trailing whitespace, especially a newline. (That's not likely to be the case here.)
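The trailing-newline case is easy to reproduce: anything read with <STDIN> still carries its newline, and the stat behind -d sees it as part of the name. A tiny sketch:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $name = "$dir\n";               # simulate a line read from <STDIN>

my $before = -d $name ? 1 : 0;     # 0: the newline is part of the name
chomp $name;                       # strip the trailing newline
my $after  = -d $name ? 1 : 0;     # 1: now the directory is found

print "before=$before after=$after\n";
```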