Merge the files with the same file name in Perl [closed]

I currently have a problem merging files in Perl.
There are two directories/folders that contain files with the same names and extensions, in pairs.
For example, in folder 1 I have the files 1.fastq, 2.fastq, ..., 10.fastq.
In folder 2 I have exactly the same file names 1.fastq, 2.fastq, ..., 10.fastq, but they contain different information.
I want to merge the files that share a name. At first I tried the cat command:
$ cat 1.fastq 1.fastq > 1.fastq
However, if there are many files, for example 1000+, I would need to do this 1000+ times.
How can I do it automatically with Perl?
Thank you in advance.

A Perl-based solution would look like this:
#!/usr/bin/perl
use strict;
use warnings;

my $source_dir = "./source";
my $dest_dir   = "./dest";

opendir my $source, $source_dir or die "Cannot open $source_dir: $!";
my @source_files = readdir $source;
closedir $source;

foreach my $each_file (@source_files) {
    next if $each_file =~ /^\.\.?$/;    # skip the . and .. entries
    open my $in,  '<',  "$source_dir/$each_file" or die "Cannot read $source_dir/$each_file: $!";
    open my $out, '>>', "$dest_dir/$each_file"   or die "Cannot append to $dest_dir/$each_file: $!";
    print {$out} $_ while <$in>;
    close $in;
    close $out;
}
You can also do this with a shell script. A typical one would look like this:
#!/bin/sh
source='./source'
dest='./dest'
for path in "$source"/*
do
    file=$(basename "$path")
    if [ -e "$dest/$file" ]
    then
        cat "$path" "$dest/$file" > "$dest/$file.unique_name"
        mv "$dest/$file.unique_name" "$dest/$file"
    else
        cp "$path" "$dest/$file"
    fi
done
Note that you cannot use an input file as the output file with cat:
$ cat 1.fastq 1.fastq > 1.fastq
This can produce an error saying "input file is output file", and even when no error is reported the redirection truncates 1.fastq before cat reads it, so you lose the data.

Related

Output of System command in perl [closed]

I have the following command
system("ssh $host_name sh /tmp/a.sh $file_name $region $domain < /tmp/info.txt > $resFile ");
However, this command is not working as expected. How can I find out the reason for the failure?
If you want to see the error messages produced by the remote command, I'd recommend redirecting stderr into stdout by placing 2>&1 at the end of the call, so both end up in the same file:
system("ssh $host_name sh /tmp/a.sh $file_name $region $domain < /tmp/info.txt > $resFile 2>&1");
Then anything written to stderr will also be captured in $resFile, and you can inspect the errors to try to discover what went wrong.
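Beyond capturing the output, you can also check why system itself failed: its return value and the $? variable distinguish "could not launch ssh at all" from "the remote command exited non-zero". Here is a minimal sketch; the host name and file paths are placeholders, not taken from the question:
use strict;
use warnings;

# Placeholder values; substitute your own host, script, and files.
my ($host_name, $resFile) = ('somehost', '/tmp/result.txt');

my $rc = system("ssh $host_name sh /tmp/a.sh < /tmp/info.txt > $resFile 2>&1");

if ($rc == -1) {
    die "Could not launch ssh at all: $!\n";            # e.g. ssh not found in PATH
}
elsif ($? & 127) {
    die sprintf "ssh was killed by signal %d\n", $? & 127;
}
elsif ($? >> 8) {
    die sprintf "Remote command exited with status %d; check %s for details\n", $? >> 8, $resFile;
}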

I just destroyed libc.so on my machine. What can I do now? [closed]

I was SSHed into a remote box as root when I ran the following command:
ln -sf /nonexistent /.../libc.so
Immediately my prompt started throwing errors:
basename: could not find shared library
I can't even run anything:
root@toastbox# ls
ls: could not find shared library
How can I fix this? I have two SSH sessions open with Bash, but no other processes accessible. I have a cross-compiler for the target on my local machine, but no way to SCP files to the remote end anymore.
EDIT: There are no other copies of libc on this box; I overwrote the real libc file. Some things still work: I can echo, and I can use tab-completion to emulate ls. But normal programs (mv, rm, etc.) are MIA.
I discovered that I could still write to files by using echo and redirection (thanks Iwillnotexist Idonotexist!). Further, echo -ne lets me write arbitrary bytes to a file. I can therefore truncate a file with echo -ne '' > file, then repeatedly write to it with
echo -ne '\001' >> /file
Using this approach, I can overwrite any executable present on the system (since I'm still root).
I compiled a simple program to rename a file:
#include <unistd.h>
int main(int argc, char **argv) { return rename(argv[1], argv[2]); }
using cross-gcc -static mv.c -o mv (eliminating the libc.so dependency). Then, I wrote a script to encode any binary file as a series of echo commands (limited by the length that readline will allow me to enter):
# Encode a file as a series of echo statements.

# settings
maxlen = 1020
infile = '/tmp/mv'
outfile = '/usr/bin/mv'

print "echo -ne '' > %s" % outfile

template = "echo -ne '%%s' >> %s" % outfile
maxchunk = maxlen - len(template % '')

pos = 0
data = open(infile, 'rb').read()

# Map each byte to a representation that is safe inside echo -ne '...':
# letters pass through unchanged, everything else becomes an octal escape.
transtable = {}
for i in xrange(256):
    c = chr(i)
    if i == 0:
        transtable[c] = r'\0'
    elif c.isalpha():
        transtable[c] = c
    else:
        transtable[c] = r'\0%o' % i

# Emit the file as chunks that fit within the maximum line length.
while pos < len(data):
    chunk = []
    chunklen = 0
    while pos < len(data):
        bit = transtable[data[pos]]
        if chunklen + len(bit) < maxchunk:
            chunk.append(bit)
            chunklen += len(bit)
            pos += 1
        else:
            break
    print template % ''.join(chunk)
I used my echo encoder to generate a series of echo commands which I mass-pasted into the ssh session. These look like
echo -ne '' > /usr/bin/mv
echo -ne '\0177ELF\01\01\01\0\0\0\0\0\0\0\0\0\02\0\050\0\01\0\0\0\0360\0200\0\0\064\0\0\0\030Q\05\0\0\0\0\05\064\0\040\0\05\0\050\0\034\0\033\0\01\0\0\0\0\0\0\0\0\0200\0\0\0\0200\0\0P\03\01\0P\03\01\0\05\0\0\0\0\020\0\0\01\0\0\0\0\017\01\0\0\0237\01\0\0\0237\01\0x\02\0\0X\046\0\0\06\0\0\0\0\020\0\0Q\0345td\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\06\0\0\0\0\0\0\0\01\0\0p\0244\0356\0\0\0244n\01\0\0244n\01\0\0350\010\0\0\0350\010\0\0\04\0\0\0\04\0\0\0R\0345td\0\017\01\0\0\0237\01\0\0\0237\01\0\0\01\0\0\0\01\0\0\06\0\0\0\040\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\020\0265\04\034\0\040\0\0360\053\0371\040\034\016\0360r\0375\0134\0300\0237\0345\0H\055\0351X\060\0237\0345\04\0260\0215\0342\020\0320M\0342\014\0300\0217\0340\03\060\0234\0347\024\060\013\0345D\060\0237\0345\04\0\0213\0342\03\060\0234\0347\020\060\013\0345\070\060\0237\0345\0\020\0240\0343\03\060\0234\0347\014\060\013\0345\054\060\0237\0345\03\060\0234\0347\010\060\013\0345\044\060\0237\0345\03\040\0234\0347\024\060K\0342\0223\072\0\0353\04' >> /usr/bin/mv
echo -ne '\0320K\0342\0\0210\0275\0350\0350\036\01\0\0174\0377\0377\0377\0200\0377\0377\0377\0204\0377\0377\0377\0210\0377\0377\0377\0214\0377\0377\0377\0H\055\0351\04\0260\0215\0342\010\0320M\0342\010\0\013\0345\014\020\013\0345\014\060\033\0345\04\060\0203\0342\0\040\0223\0345\014\060\033\0345\010\060\0203\0342\0\060\0223\0345\02\0\0240\0341\03\020\0240\0341\06\0\0\0353\0\060\0240\0341\03\0\0240\0341\04\0320K\0342\0\0210\0275\0350\0\0\0\0\0\0\0\0\0\0\0\0\0220\0\055\0351\046p\0240\0343\0\0\0\0357\0220\0\0275\0350\0\0\0260\0341\036\0377\057Qr\072\0\0352\0\0\0240\0341\020\0265\04\034\0\0360\014\0370\04\0140\01\040\0100B\020\0275\020\0265\03\034\0377\063\02\0333\0100B\0377\0367\0361\0377\020\0275\020\0265\02K\0230G\010\060\020\0275\0300F\0340\017\0377\0377\0360\0265\031N\0203\0260\034\034\0176D\07\034\01\0222\0\0360\0253\0371\045h\0\0340\0230G\04\065\053h\0\053\0372\0321\0345h\0\0340\0230G\04\065\053h\0\053\0372\0321eh\0\0340\0230G\04\065\053h\0\053\0372\0321\075\034\0200\0315y\034\0210\0' >> /usr/bin/mv
...
I tested the replacement mv a few times to make sure it worked (using Bash tab-completion as a substitute for ls), and then used the echo encoder to write a replacement libc.so to a temporary directory. Finally, I moved the replacement libc.so into the right place using the static mv I pushed.
And success! It might've taken about an hour, but my box is back up and running, with no casualties save for one clobbered /usr/bin/mv :)

Read same extension multiple files in one directory in Perl

I currently have an issue with reading files in one directory.
I need to take all the fastq files in a folder, run the script on each file, and then put the new files in an 'Edited_sequences' folder.
The one script I had is
perl -ne '$i++; if($i<80001){print}' BM2003_TCCCAGAACAAC_L001_R1_001.fastq > ./Edited_sequences/BM2003_TCCCAGAACAAC_L001_R1_001.fastq
It takes the first 80000 lines of one fastq file and outputs the result.
Now if I have, for example, 2000 fastq files, I would need to copy and paste 2000 times.
I know the glob function suits this situation, but I just do not know how to use it.
Please help me out.
You can use perl to do the copying for you. The first arguments, *.fastq, are all the fastq files, and the last, ./Edited_sequences, is the target folder for the new files:
perl -e '$d=pop; `head -80000 "$_" > "$d/$_"` for @ARGV' *.fastq ./Edited_sequences
glob gets you an array of filenames matching a particular expression. It's frequently used with <> brackets, a lot like reading input (you can think of it as reading files from a directory).
This is a simple example that will print the names of every ".fastq" file in the current directory:
print "$_\n" for <*.fastq>;
The important part is <*.fastq>, which gives us an array of filenames matching that expression (in this case, a file extension). If you need to change which directory your Perl script is working in, you can use chdir.
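For instance, here is a minimal sketch combining the two (the ./raw_reads directory name is just a hypothetical example):
use strict;
use warnings;

# Move into the directory that holds the fastq files before globbing
chdir './raw_reads' or die "Cannot chdir to ./raw_reads: $!";

print "$_\n" for <*.fastq>;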
From there, we can process your files as needed:
while (my $filename = <*.fastq>) {
    open(my $in, '<', $filename) or die $!;
    open(my $out, '>', "./Edited_sequences/$filename") or die $!;
    for (1..80000) {
        defined(my $line = <$in>) or last;   # stop early if the file has fewer lines
        print $out $line;
    }
    close $in;
    close $out;
}
You have two choices:
Use Perl to read in the 2000 files and process them as part of your program
Use the shell to pass each of those 2000 files to your command line
Here's the bash alternative:
for file in *.fastq
do
    perl -ne '$i++; if($i<80001){print}' "$file" > "./Edited_sequences/$file"
done
This is your same Perl script, but with the shell finding each file for you. It should work without overloading the command line, because the for loop in bash expands the glob itself and handles the files one at a time.
However, I always recommend that you don't actually execute the command, but echo the resulting commands into a file:
for file in *.fastq
do
echo "perl -ne '\$i++; if(\$i<80001){print}' \
\"$file\" > \"./Edited_sequences/$file\"" >> myoutput.txt
done
Then, you can look at myoutput.txt to make sure it looks good before you actually do any real harm. Once you've determined that myoutput.txt is a good file, you can execute that as a shell script:
$ bash myoutput.txt

Can I write a program to gather parameters in order to generate a script to guide a binary file installation [closed]

OK, just as a concept:
The base platform is SUSE Enterprise Server 11.1.
I have a binary file to install; to install it, I need to input some values such as an IP address, a certificate location, and so on.
What I want to do is write a Perl program that gathers all of the input information first and then generates a script to drive the binary installation unattended.
Can it be done? I'm a fresh Perl learner.
Thanks.
The easiest way to get parameters from the command line is to use the Getopt::Long module.
You can find further information about it here:
http://perldoc.perl.org/Getopt/Long.html
For example:
use Getopt::Long;

my $data   = "bin_file";
my $length = 24;
my $verbose;

GetOptions(
    "length=i" => \$length,   # numeric
    "file=s"   => \$data,     # string
    "verbose"  => \$verbose,  # flag
) or die("Error in command line arguments\n");
Then you can call your script (inside your shell) with:
$ perl script.pl --length 14 --file test.dat --verbose
Getopt::Long parses the command line from @ARGV, recognizing and removing the specified options and their possible values.
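To connect this back to the question, here is a minimal sketch of how the gathered values could be written into a generated shell script that feeds answers to the installer unattended. The option names, the installer path ./installer.bin, and the answer format it reads on stdin are all hypothetical; adapt them to what your binary actually expects:
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

# Hypothetical parameters the installer needs
my ($ip, $cert);
GetOptions(
    "ip=s"   => \$ip,
    "cert=s" => \$cert,
) or die "Error in command line arguments\n";
die "Usage: $0 --ip <address> --cert <path>\n" unless defined $ip && defined $cert;

# Generate a script that pipes the answers into the installer
open my $fh, '>', 'run_install.sh' or die "Cannot write run_install.sh: $!";
print $fh <<"EOF";
#!/bin/sh
./installer.bin <<ANSWERS
$ip
$cert
ANSWERS
EOF
close $fh;
chmod 0755, 'run_install.sh' or die "Cannot chmod run_install.sh: $!";

print "Generated run_install.sh; run it to install unattended.\n";
You would invoke it as, for example, perl gather.pl --ip 10.0.0.5 --cert /etc/ssl/mycert.pem, then run the generated run_install.sh on the target machine.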

What would the best way be to take a string from a text file and search and replace another string with the one from the text file?

What would the best way be to take a string from a text file and search and replace another string with the one from the text file?
E.g. c:\output.txt contains abcd and c:\newfile.txt contains Stack overflow is great.
I would like to replace great with abcd.
What would be the best approach to do this?
You can download sed for Windows and then:
set /p var=<output.txt
sed "s/great/%var%/g" newfile.txt
Since Perl was your first tag, I guess you'd want a Perl version of the solution.
If you have Perl installed on your Windows, the following works (whitespace added for readability):
C:\>perl -e "open(my $rf, '<', 'c:\output.txt')
|| die \"Can not open c:\output.txt: $!\";
my $replace = <$rf>;
chomp $replace;
close $rf;
local $^I='.bak'; # Replace inside the file, make backup
while (<>) {
s/great/$replace/g;
print;
}" c:\newfile.txt
C:\>type C:\newfile.txt
Stack overflow is abcd
To be a bit more Windows-idiomatic, you can replace the start of the Perl code (reading the contents of a file) with cmd's SET /P command (see Ghostdog's answer), for much shorter Perl code:
C:\> set /p r=<c:\output.txt
C:\> perl -pi.bak -e "s/great/%r%/g;" c:\newfile.txt
C:\> type C:\newfile.txt
Stack overflow is abcd