How do I determine or set the working directory of QtSpim? - system-calls

I just want to run any kind of SPIM program that uses a syscall to open, read and/or write a file, but it doesn't work out. I am aware that my program and the file are probably not in the working directory of QtSpim, but I have no idea how to change it or set a new one. So after the first syscall $v0 is -1, which indicates an error. I tried using the whole path name for the file to be read (example below) and tried to write/create a file to see where QtSpim would save it. If I have a fundamental flaw, do not hesitate to let me know. I am using QtSpim under Windows.
.data
filename: .asciiz "C:\Users\...\test.txt" # name of the file to read
buffer: .space 1024
.text
main:
#open the file (to get the file descriptor)
li $v0, 13 # system call for open file
la $a0, filename # address of the file name
li $a1, 0 # Open for reading
li $a2, 0 # mode
syscall # open a file (file descriptor returned in $v0)
move $s1, $v0 # save the file descriptor
#read from file
li $v0, 14 # system call for read from file
move $a0, $s1 # file descriptor
la $a1, buffer # address of buffer to which to read
li $a2, 1024 # hardcoded buffer length
syscall # read from file
# Close the file
li $v0, 16 # system call for close file
move $a0, $s1 # file descriptor to close
syscall # close the file

I have exactly the same problem and my code looks nearly the same. Most of the documentation about the QtSpim syscall functions also tells me that the file descriptor gets returned in $a0, even though we actually get the descriptor in $v0.
Burning for the answer :)

Related

Deleting a line from a huge file in Perl

I have a huge text file and its first five lines read as below:
This is fist line
This is second line
This is third line
This is fourth line
This is fifth line
Now, I want to write something at a random position in the third line of that file, which will replace the characters in that line with the new string I am writing. I am able to achieve that with the code below:
use strict;
use warnings;
my @pos = (0);
open my $fh, "+<", "text.txt";
while (<$fh>) {
    push @pos, tell($fh);
}
seek $fh, $pos[2]+1, 0;
print $fh "HELLO";
close($fh);
However, I am not able to figure out how, with the same kind of approach, I can delete the entire third line from that file so that the text reads as below:
This is fist line
This is second line
This is fourth line
This is fifth line
I do not want to read the entire file into an array, nor do I want to use Tie::File. Is it possible to achieve my requirement using seek and tell? A solution would be very helpful.
A file is a sequence of bytes. We can replace (overwrite) some of them, but how would we remove them? Once a file is written its bytes cannot be 'pulled out' of the sequence or 'blanked' in any way. (The ones at the end of the file can be dismissed, by truncating the file as needed.)
The rest of the content has to move 'up', so that what follows the text to be removed overwrites it. We have to rewrite the rest of the file. In practice it is often far simpler to rewrite the whole file.
As a very basic example
use warnings 'all';
use strict;
use File::Copy qw(move);
my $file_in = '...';
my $file_out = '...'; # best use `File::Temp`
open my $fh_in, '<', $file_in or die "Can't open $file_in: $!";
open my $fh_out, '>', $file_out or die "Can't open $file_out: $!";
# Remove a line with $pattern
my $pattern = qr/this line goes/;
while (<$fh_in>) {
    print $fh_out $_ unless /$pattern/;
}
close $fh_in;
close $fh_out;
# Rename the new file over the original one, thus replacing it
move ($file_out, $file_in) or die "Can't move $file_out to $file_in: $!";
This writes every line of the input file into the output file, unless a line matches the given pattern. Then that file is renamed, replacing the original (which does not involve copying data). See this topic in perlfaq5.
Since we really use a temporary file I'd recommend the core module File::Temp for that.
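For instance, a minimal sketch of the same filter using File::Temp could look like the following; the input file name and pattern are placeholders, and the temporary file is created in the same directory so that the final rename stays on one filesystem:
use strict;
use warnings;
use File::Temp qw(tempfile);
use File::Copy qw(move);
my $file_in = 'text.txt';              # placeholder
my $pattern = qr/this line goes/;      # placeholder
open my $fh_in, '<', $file_in or die "Can't open $file_in: $!";
# Keep the temporary file around (UNLINK => 0) until we rename it over the original
my ($fh_out, $file_out) = tempfile('filterXXXX', DIR => '.', UNLINK => 0);
while (<$fh_in>) {
    print {$fh_out} $_ unless /$pattern/;
}
close $fh_in;
close $fh_out or die "Can't close $file_out: $!";
move($file_out, $file_in) or die "Can't move $file_out to $file_in: $!";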
This may be made more efficient, but far more complicated, by opening in update ('+<') mode so as to overwrite only a portion of the file. You iterate until the line with the pattern, record (via tell) its position and its length, then copy all remaining lines into memory. Then seek back to that position minus the line's length, and dump the copied rest of the file, overwriting the line and all that follows it.
Note that now the data for the rest of the file is copied twice, albeit one copy is in memory. Going to this trouble may make sense if the line to be removed is far down a very large file. If there are more lines to remove this gets messier.
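Here is a rough sketch of that in-place variant, again with a placeholder file name and pattern; it removes the first line that matches:
use strict;
use warnings;
my $file    = 'text.txt';              # placeholder
my $pattern = qr/this line goes/;      # placeholder
open my $fh, '+<', $file or die "Can't open $file: $!";
my $pos;
while (my $line = <$fh>) {
    if ($line =~ $pattern) {
        $pos = tell($fh) - length($line);   # start of the matching line
        last;
    }
}
die "Pattern not found\n" unless defined $pos;
my $rest = do { local $/; <$fh> };          # copy the rest of the file in memory
$rest = '' unless defined $rest;
seek $fh, $pos, 0       or die "Can't seek: $!";
print {$fh} $rest       or die "Can't write: $!";
truncate $fh, tell($fh) or die "Can't truncate: $!";   # drop the now-surplus tail
close $fh               or die "Can't close: $!";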
Writing out a new file and copying it over the original changes the file's inode number. That may be a problem for some tools or procedures, and if it is you can instead update the original by either
Once the new file is written out, open it for reading and open the original for writing. This clobbers the original file. Then read from the new file and write to the original one, thus copying the content back to the same inode. Remove the new file when done. (A sketch of this follows below.)
Open the original file in read-write mode ('+<') to start with. Once the new file is written, seek to the beginning of the original (or to the place from which to overwrite) and write to it the content of the new file. Remember to also set the end-of-file if the new file is shorter,
truncate $fh, tell($fh);
after copying is done. This requires some care and the first way is probably generally safer.
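A bare-bones sketch of the first way, with placeholder file names (the filtered copy is assumed to already exist as text.txt.new):
use strict;
use warnings;
my $file     = 'text.txt';        # original file, whose inode we keep
my $file_new = 'text.txt.new';    # already-written filtered copy
open my $fh_new,  '<', $file_new or die "Can't open $file_new: $!";
open my $fh_orig, '>', $file     or die "Can't open $file: $!";   # clobbers contents, keeps the inode
print {$fh_orig} $_ while <$fh_new>;
close $fh_new;
close $fh_orig or die "Can't close $file: $!";
unlink $file_new or warn "Can't remove $file_new: $!";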
If the file weren't huge the new "file" can be "written" in memory, as an array or a string.
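For a small file that could be as simple as the following sketch (same placeholders as before):
use strict;
use warnings;
my $file    = 'text.txt';              # placeholder
my $pattern = qr/this line goes/;      # placeholder
open my $fh, '<', $file or die "Can't open $file: $!";
my @keep = grep { !/$pattern/ } <$fh>;   # the new "file", held in memory
close $fh;
open $fh, '>', $file or die "Can't open $file for writing: $!";
print {$fh} @keep;
close $fh or die "Can't close $file: $!";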
Use the sed command from the Linux command line in Perl:
my $return = `sed -i '3d' text.txt`;
Where "3d" means delete the 3rd row.
It is useful to look at perlrun and see how perl itself modifies a file 'in-place.'
Given:
$ cat text.txt
This is fist line
This is second line
This is third line
This is fourth line
This is fifth line
You can apparently 'modify in place', sed-like, by using the -i and -p switches to invoke Perl:
$ perl -i -pe 's/This is third line\s*//' text.txt
$ cat text.txt
This is fist line
This is second line
This is fourth line
This is fifth line
But if you consult the Perl Cookbook recipe 7.9 (or look at perlrun) you will see that this:
$ perl -i -pe 's/This is third line\s*//' text.txt
is equivalent to:
while (<>) {
    if ($ARGV ne $oldargv) {          # are we at the next file?
        rename($ARGV, $ARGV . '.bak');
        open(ARGVOUT, ">$ARGV");      # plus error check
        select(ARGVOUT);
        $oldargv = $ARGV;
    }
    s/This is third line\s*//;
}
continue {
    print;
}
select(STDOUT);                       # restore default output

perl multi pipe CLOEXEC

I am trying to set up more than one pipe to the same forked process in Perl. This is a minimal example with just one, but in the end I want to have multiple pipes this way:
#!/usr/bin/perl
use Fcntl;
pipe PIPEREAD, PIPEWRITE;
# is supposed to increase the max file descriptors
$^F = 255; # default is 2
$flags = fcntl(PIPEREAD, F_GETFL, 0);
# doesn't do anything
fcntl(PIPEREAD, F_SETFL, $flags & (~FD_CLOEXEC)) or die "Can't set flags: $!\n";
if (!fork()) {
    exec("cat", "/dev/fd/" . fileno(PIPEREAD));
}
print PIPEWRITE "Test\n";
close PIPEWRITE;
sleep(1);
This fails because all file descriptors above 2 are closed when I call exec. How can I prevent this behaviour?
Fails with
cat: /dev/fd/3: No such file or directory
I have tried to both unset the FD_CLOEXEC flag and increase $^F. Nothing works.
CLOEXEC is set right when the pipe is opened, so you have to set $^F before running pipe. If you switch that order, it works fine for me, even without using fcntl.
Also, if you want to change it using fcntl, you need to use F_SETFD, not F_SETFL.
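For example, a sketch of that fcntl variant applied to the PIPEREAD handle from the question (clearing FD_CLOEXEC after the pipe has been created):
use Fcntl;   # exports F_GETFD, F_SETFD and FD_CLOEXEC
# fcntl returns "0 but true" on success, so the error checks below are safe
my $fdflags = fcntl(PIPEREAD, F_GETFD, 0) or die "Can't get fd flags: $!";
fcntl(PIPEREAD, F_SETFD, $fdflags & ~FD_CLOEXEC) or die "Can't clear FD_CLOEXEC: $!";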
In perlvar(1) it says:
The close-on-exec status of a file descriptor will be decided according to the value of $^F when the corresponding file, pipe, or socket was opened, not the time of the "exec()".
So move your $^F=255 before your pipe and it should work.
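Applied to the example above, a minimal sketch of that reordering (no fcntl needed; the child also closes its copy of the write end so cat sees end-of-file):
#!/usr/bin/perl
$^F = 255;   # must be raised *before* the pipe is created
pipe PIPEREAD, PIPEWRITE or die "pipe failed: $!";
if (!fork()) {
    close PIPEWRITE;   # child keeps only the read end
    exec("cat", "/dev/fd/" . fileno(PIPEREAD));
}
print PIPEWRITE "Test\n";
close PIPEWRITE;
sleep(1);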

Different behaviors of reading from files generated on different machines

I have a folder of several hundred text files. Each file has the same format; for instance, the file named ATextFile1.txt reads
ATextFile1.txt 09 Oct 2013
1
2
3
4
...
I have a simplified Perl script that is supposed to read the file and print it back out in the terminal window:
#!/usr/bin/Perl
use warnings;
use strict;
my $fileName = shift(@ARGV);
open(my $INFILE, "<:encoding(UTF-8)", $fileName) || die("Cannot open $fileName: $!.\n");
foreach (<$INFILE>) {
    print("$_"); # Uses the newline character from the file
}
When I use this script on files generated by the Windows version of the program that generates ATextFile1.txt, my output is exactly as I'd expect (the content of the text file). However, when I run this script on files generated by the Mac version of the file-generating program, the output looks like the following:
2016tFile1.txt 09 Oct 2013
After some testing, it seems that it is only printing the first line of the text where the first 4 characters are overwritten by what can be expressed in RegEx as /[0-9][0-9]16/. If in my Perl script, I replace the output statement with print("\t$_");, I get the following line printed to STDOUT:
2016 ATextFile1.txt 09 Oct 2013
Each of these files can be read normally in any standard text editor, but for some reason my Perl script can't seem to read the file and print it out properly. Any help would be greatly appreciated (I'm hoping it's something obvious that I'm missing). Thanks in advance!
Note that if you are printing UTF-8 characters to STDOUT you will need to use
binmode STDOUT, ':encoding(utf8)';
beforehand.
It looks as if your Mac files have just CR as the line ending. I understood that recent versions of Macintosh systems used LF as the line ending (the same as Linux), but Mac OS 9 uses just CR, while Windows uses the two characters CR LF inside the file, which are converted to just LF by the PerlIO layer when perl is running on a Windows platform.
If there are no linefeeds in the file, then Perl will read the entire file as a single record, and printing it will overlay all lines on top of one another.
As long as the files are relatively small, the easiest way to read either file format with the same Perl code is to read the whole file and split it on either CR or LF. Anything else will need different code according to the source of the input files.
Try this version of your code.
use strict;
use warnings;
my @contents = do {
    open my $fh, '<:encoding(utf8)', $ARGV[0];
    local $/;
    my $contents = <$fh>;
    split /[\r\n]+/, $contents;
};
print "$_\n" for @contents;
Update
One alternative you might try is to use the PerlIO::eol module, which provides a PerlIO layer that translates any line ending to LF when the record is read. I'm not certain that it plays nice with UTF-8, but as long as you add it after the encoding layer it should be fine.
It is not a core module so you will probably need to install it, but after that the program becomes just
use strict;
use warnings;
open my $fh, '<:encoding(UTF-8):eol(LF)', $ARGV[0];
binmode STDOUT, ':encoding(utf8)';
print while <$fh>;
I have created Windows, Linux, and Mac-style text files and this program works fine with all of them, but I have been unable to check whether a UTF-8 character that has 0x0D or 0x0A as part of its encoding is passed through properly, so be careful.
Update 2
After thinking briefly about this, of course there are no UTF-8 encodings that contain CR or LF apart from those characters themselves. All characters outside the ASCII range are encoded using only bytes with the top bit set, so those bytes are 0x80 or above and can never be 0x0D or 0x0A.

Read and write a text file

How do I read a text file and write the modified content back to the same file? I tried, but I got empty files.
My code is :
#!/opt/lampp/bin/perl
print "Content-Type : text/html\n\n";
$ref = '/opt/lampp/htdocs/perl/Filehandling/data_1.txt';
open(REF, "+<", $ref) or die "Uable to open the file $ref: $!";
while (<REF>) {
if($_=~ m/Others/gi) {
$_ =~ s/Others/Other/gi;
print "$_\n";
}
}
$ref = '/opt/lampp/htdocs/perl/Filehandling/data_1.txt';
open(REF, "+>", $ref) or die "Uable to open the file $ref: $!";
print REF "$_\n";
close REF;
close REF;
perl -pi -e 's/Others/Other/gi' /opt/lampp/htdocs/perl/Filehandling/data_1.txt
The -i option edits a file in-place, i.e. overwrites the file with the new contents.
The s/// operator doesn't substitute anything if there is no match, so you don't have to check with m// if you have a match before attempting a substitution.
The flow control in your script is flawed; you open for reading, then print to standard output (not back to the file), then open for writing, but don't write anything useful (or, well, only the last line you read in the input loop).
What's with the Content-Type: header? Do you want to run this as a CGI script? That sounds like a security problem.
Do you only want to print if there is a substitution? If so, try this:
perl -ni -e 's/Others/Other/ig and print' /opt/lampp/htdocs/perl/Filehandling/data_1.txt
Notice the change from -p which adds a read-and-print loop around your script, to -n which only reads, but doesn't print your file.
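If you would rather keep it as a script than a one-liner, a sketch of the read-then-rewrite approach (read everything first, substitute in memory, then write the same file back) could look like this:
#!/opt/lampp/bin/perl
use strict;
use warnings;
my $ref = '/opt/lampp/htdocs/perl/Filehandling/data_1.txt';
open my $fh, '<', $ref or die "Unable to open $ref: $!";
my @lines = <$fh>;              # read the whole file first
close $fh;
s/Others/Other/gi for @lines;   # substitute in memory
open $fh, '>', $ref or die "Unable to open $ref for writing: $!";
print {$fh} @lines;             # write everything back to the same file
close $fh or die "Unable to close $ref: $!";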
It seems that all you want is to s/Others/Other/gi in your data_1.txt.
You can make it a lot easier with one line: perl -pi.bak -e's/Others/Other/gi' data_1.txt.
If you still want to read from and write to the same file, you can check the IO::InSitu module on CPAN.
Fᴍᴛᴇʏᴇᴡᴛᴋ: Learning How to Learn How to Update a Text File in Place
If only you were to Rᴇᴄᴏɴɴᴏɪᴛʀᴇ Tʜᴇ Fᴀʙᴜʟᴏᴜs perlfaq5 Mᴀɴᴘᴀɢᴇ — which is available on every Perl installation worldwide including your own — you would easily learn the answer to your question under that Fᴏʀᴍɪᴅᴀʙʟʏ Aᴘᴏᴛʜᴇᴏsɪᴢᴇᴅ Qᴜᴀɴᴅᴀʀʏ entitled “How do I change, delete, or insert a line in a file, or append to the beginning of a file?”.
Somehow that key first step to problem solving failed to execute.
Learning how to efficiently search the standard Perl documentation set is one of the three key steps in becoming a proficient Perl programmer. (For the record, the other two are to have a rolodex full of working examples and to have a particular itch that you can use Perl to scratch.)
If for whatever reason you either cannot or will not search the standard documentation, and so find yourself coming online begging for help with something that was long ago answered in that documentation set, you will never be self‐enabling. You may also be perceived as something of a pest.
The Recipe Rolodex 📖
Looking beyond the standard documentation included with every Perl installation, whose contents you should have searched before coming here, we find that O’Reilly’s Perl Cookbook chapter 7 on “File Access” and chapter 8 on “File Contents” contain several recipes particularly relevant to your current question, especially but not only those marked with ☞ below. The Perl Cookbook is a compendium of example code meant as the paired companion volume to Programming Perl. Arguably, the Cookbook is even more useful for the beginning Perl programmer than the Camel itself.
However, you can’t grep dead trees. So I will show you what you should look for, and how to grep it no matter whether you happen to possess the ungreppable dead‐tree version of that particular primer or not.
NB: While I am sure that the publisher would be delighted if you were to purchase this essential Perl tome, whether as a dead tree or electronically, I recognize that this is not possible for everyone, so below I will show you how to access the relevant portions completely free of charge. In fact, I include them for your convenience.
First, the table of contents for chapter 7 of the Perl Cookbook:
Recipe 7.0: Introduction to File Access
Recipe 7.1: Opening a File
Recipe 7.2: Opening Files with Unusual Filenames
Recipe 7.3: Expanding Tildes in Filenames
Recipe 7.4: Making Perl Report Filenames in Error Messages
Recipe 7.5: Storing Filehandles into Variables
Recipe 7.6: Writing a Subroutine That Takes Filehandles as Built-ins Do
Recipe 7.7: Caching Open Output Filehandles
Recipe 7.8: Printing to Many Filehandles Simultaneously
Recipe 7.9: Opening and Closing File Descriptors by Number
Recipe 7.10: Copying Filehandles
Recipe 7.11: Creating Temporary Files
Recipe 7.12: Storing a File Inside Your Program Text
Recipe 7.13: Storing Multiple Files in the DATA Area
Recipe 7.14: Writing a Unix-Style Filter Program
☞ Recipe 7.15: Modifying a File in Place with Temporary File
☞ Recipe 7.16: Modifying a File in Place with -i Switch
☞ Recipe 7.17: Modifying a File in Place Without a Temporary File
Recipe 7.18: Locking a File
Recipe 7.19: Flushing Output
Recipe 7.20: Doing Non‐Blocking I/O
Recipe 7.21: Determining the Number of Unread Bytes
Recipe 7.22: Reading from Many Filehandles Without Blocking
Recipe 7.23: Reading an Entire Line Without Blocking
Recipe 7.24: Program: netlock
Recipe 7.25: Program: lockarea
And here is Chapter 8’s ᴛᴏᴄ:
☞ Recipe 8.0: Introduction to File Contents
Recipe 8.1: Reading Lines with Continuation Characters
Recipe 8.2: Counting Lines (or Paragraphs or Records) in a File
Recipe 8.3: Processing Every Word in a File
☞ Recipe 8.4: Reading a File Backwards by Line or Paragraph
Recipe 8.5: Trailing a Growing File
Recipe 8.6: Picking a Random Line from a File
Recipe 8.7: Randomizing All Lines
☞ Recipe 8.8: Reading a Particular Line in a File
Recipe 8.9: Processing Variable‐Length Text Fields
☞ Recipe 8.10: Removing the Last Line of a File
Recipe 8.11: Processing Binary Files
Recipe 8.12: Using Random‐Access I/O
☞ Recipe 8.13: Updating a Random‐Access File
Recipe 8.14: Reading a String from a Binary File
Recipe 8.15: Reading Fixed‐Length Records
Recipe 8.16: Reading Configuration Files
Recipe 8.17: Testing a File for Trustworthiness
Recipe 8.18: Treating a File as an Array
Recipe 8.19: Setting the Default I/O Layers
Recipe 8.20: Reading or Writing Unicode from a Filehandle
Recipe 8.21: Converting Microsoft®‐Proprietary Text Files into Standard Unicode
Recipe 8.22: Comparing the Contents of Two Files
Recipe 8.23: Pretending a String Is a File
Recipe 8.24: Program: tailwtmp
Recipe 8.25: Program: tctee
Recipe 8.26: Program: laston
Recipe 8.27: Program: Flat file indexes
Muddled Mental Models 😞
Chapter 8’s section 8.0 “Introduction to File Contents” is especially poignant, so much so that its copyrighted text I here reproduce by kind permission of its author:
Treating files as unstructured streams necessarily governs what you can do with them. You can read and write sequential, fixed‐size blocks of data at any location in the file, increasing its size if you write past the current end. Perl uses an I/O library that emulates C’s stdio(3) to implement reading and writing of variable‐length records like lines, paragraphs, and words.
What can’t you do to an unstructured file? Because you can’t insert or delete bytes anywhere but at end of file, you can't change easily the length of, insert, or delete records. An exception is the last record, which you can delete by truncating the file to the end of the previous record. For other modifications, you need to use a temporary file or work with a copy of the file in memory. If you need to do this a lot, a database system may be a better solution than a raw file. Standard with Perl as of v5.8 is the Tie::File module, which offers an array interface to files of records.
The last reference may be the most useful, because it offers a model of a text file that may better accord with the non‐programmer’s notion of what a text file is. To the operating system, a file is merely an ordered set of bytes, and the only operations that you can use on a file are read, write, seek, and truncate. There is no insert, there is no go‐to‐line‐number‐N, and there certainly is no search and replace. The operating system model is more that of a paper tape reader than it is a deck of cards. You can no more insert new data in the middle of a file than you can wedge a new sector between existing ones on a hard disk. They don’t call it a hard disk for nothing, you know.
The problem is that the non‐programmer has no experience with the operating system’s model of a fixed file filled with bytes. Her only model for a text file is one whose operations match that of the text editor she is used to, one which in these post–pre‐Internet [sic] days of computing often more resembles a child’s video game than it does a serious tool for getting real work done.
Whether you have a video‐game version or a power‐tool version, all text editors are almost always line based, and allow one to move lines around, search by line, change lines including their lengths, insert into the middle of a file or delete from the middle shortening everything up. Even her notion of a character is very different from the operating system, since a user‐visible grapheme can easily comprise multiple programmer-visible code points, and each of these code points can easily comprise multiple operating‐system–visible code units.
This non‐programmer model does not work with real operating system files. The Tie::File module provides an abstraction layer that can help present a higher level and perhaps more non‐programmer–friendly model of a text file.
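As a concrete illustration of that friendlier model, here is a small sketch using Tie::File on the data_1.txt file from the question; each modified array element is written straight back to the file:
use strict;
use warnings;
use Tie::File;
my $file = '/opt/lampp/htdocs/perl/Filehandling/data_1.txt';
tie my @lines, 'Tie::File', $file or die "Cannot tie $file: $!";
for my $line (@lines) {
    $line =~ s/Others/Other/gi;   # changes are written back to the file
}
untie @lines;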
ᴘʟᴇᴀᴄ: The Programming Language Examples Alike Cookbook
Have you ever wanted a Rosetta Stone for programming languages? If so, then today is your lucky day. 😹
Although the published Perl Cookbook’s complete code is readily available for free and easy download, you might also find interesting the ᴘʟᴇᴀᴄ project, the multilingual Programming Language Examples Alike Cookbook. It includes not just the full Perl Cookbook code, but also translations in varying states of completeness into other popular programming languages like Ruby and Python, old languages like Rexx and TCL, nascent languages like Go and Groovy, and exotic languages like Haskell and OCaml (no relation to 🐪 :).
I quote below from the ᴘʟᴇᴀᴄ project’s source for the three relevant recipes from the Perl Cookbook.
Recipe 7.15: Modifying a File in Place with a Temporary File
# ^^PLEAC^^_7.8
#-----------------------------
open(OLD, "< $old") or die "can't open $old: $!";
open(NEW, "> $new") or die "can't open $new: $!";
while (<OLD>) {
    # change $_, then...
    print NEW $_ or die "can't write $new: $!";
}
close(OLD) or die "can't close $old: $!";
close(NEW) or die "can't close $new: $!";
rename($old, "$old.orig") or die "can't rename $old to $old.orig: $!";
rename($new, $old) or die "can't rename $new to $old: $!";
#-----------------------------
while (<OLD>) {
    if ($. == 20) {
        print NEW "Extra line 1\n";
        print NEW "Extra line 2\n";
    }
    print NEW $_;
}
#-----------------------------
while (<OLD>) {
    next if 20 .. 30;
    print NEW $_;
}
#-----------------------------
Recipe 7.16: Modifying a File in Place with the -i Switch
# ^^PLEAC^^_7.9
#-----------------------------
#% perl -i.orig -p -e 'FILTER COMMAND' file1 file2 file3 ...
#-----------------------------
#!/usr/bin/perl -i.orig -p
# filter commands go here
#-----------------------------
#% perl -pi.orig -e 's/DATE/localtime/e'
#-----------------------------
while (<>) {
    if ($ARGV ne $oldargv) {          # are we at the next file?
        rename($ARGV, $ARGV . '.orig');
        open(ARGVOUT, ">$ARGV");      # plus error check
        select(ARGVOUT);
        $oldargv = $ARGV;
    }
    s/DATE/localtime/e;
}
continue {
    print;
}
select(STDOUT);                       # restore default output
#-----------------------------
#Dear Sir/Madam/Ravenous Beast,
# As of DATE, our records show your account
#is overdue. Please settle by the end of the month.
#Yours in cheerful usury,
# --A. Moneylender
#-----------------------------
#Dear Sir/Madam/Ravenous Beast,
# As of Sat Apr 25 12:28:33 1998, our records show your account
#is overdue. Please settle by the end of the month.
#Yours in cheerful usury,
# --A. Moneylender
#-----------------------------
#% perl -i.old -pe 's{\bhisvar\b}{hervar}g' *.[Cchy]
#-----------------------------
# set up to iterate over the *.c files in the current directory,
# editing in place and saving the old file with a .orig extension
local $^I = '.orig'; # emulate -i.orig
local @ARGV = glob("*.c");   # initialize list of files
while (<>) {
    if ($. == 1) {
        print "This line should appear at the top of each file\n";
    }
    s/\b(p)earl\b/${1}erl/ig;   # Correct typos, preserving case
    print;
} continue {close ARGV if eof}
#-----------------------------
Recipe 7.17: Modifying a File in Place Without a Temporary File
# ^^PLEAC^^_7.10
#-----------------------------
open(FH, "+< FILE") or die "Opening: $!";
@ARRAY = <FH>;
# change ARRAY here
seek(FH,0,0) or die "Seeking: $!";
print FH @ARRAY or die "Printing: $!";
truncate(FH,tell(FH)) or die "Truncating: $!";
close(FH) or die "Closing: $!";
#-----------------------------
open(F, "+< $infile") or die "can't read $infile: $!";
$out = '';
while (<F>) {
    s/DATE/localtime/eg;
    $out .= $_;
}
seek(F, 0, 0) or die "can't seek to start of $infile: $!";
print F $out or die "can't print to $infile: $!";
truncate(F, tell(F)) or die "can't truncate $infile: $!";
close(F) or die "can't close $infile: $!";
#----------------------------
Conclusion
There. If that doesn’t point the way to answering your question, I don’t know what will. But I bet it did. Congrats! 👏

Perl - Import contents of file into another file

The code below is trying to do this:
I'm running a BTEQ script that gets data from a DB and then exports it to a flat file. That flat file is picked up by a Perl script (the code below), and with this post I'm trying to get Perl to import the file it picks up into a fastload file. Does that make more sense?
while (true) {
    # Objective: open dir, get flat-file which was exported from bteq
    opendir (DIR, "C:/q2refresh/") or die "Cannot open /my/dir: $!\n"; # open directory with the flat-file
    my @Dircontent = readdir DIR;
    $filetobecopied = "C:/q2refresh/q2_refresh_prod_export.txt"; # flat-file exported from bteq
    $newfile = "C:/q2refresh/Q2_FastLoadFromFlatFile.txt"; # new file flat-file contents will be copied to as "fastload"
    copy($filetobecopied, $newfile) or die "File cannot be copied.";
    close DIR;
    my $items_in_dir = @Dircontent;
    if ($items_in_dir > 2) { # > 2 because of "." and ".."
        -->>>>>> # take the copied FlatFile above and import into a fastload script located at C:/q2refresh/q2Fastload.txt
    }
    else {sleep 100;}
}
I need help with implementing the section marked with -->>>>>> above. How do I import the contents of C:/q2refresh/Q2_FastLoadFromFlatFile.txt into a fastload script located at C:/q2refresh/q2Fastload.txt?
// I apologize if this is somewhat newbish, but I am new to Perl.
Thanks.
if ($items_in_dir > 2) { # > 2 because of "." and ".."
Well, when including . and .., plus the two copies of q2_refresh_prod_export.txt, you will always have more than 2 files in the directory. If such a case should occur that q2_refresh_prod_export.txt is not copied, the script will die. So the else clause will never be called.
Also, it is pointless to copy the file to a new place if you are simply going to copy it to another place in a second. It's not like "cut & paste"; you actually, physically copy the file to a new file, not to a clipboard.
If by "import into" you mean that you want to append the contents of q2_refresh_prod_export.txt to an existing q2Fastload.txt, there are ways to do that, such as what Troy suggested in another answer, with an open and >> (append to).
You will have to sort out what you mean by this whole $items_in_dir condition. You are keeping files and copying files in that directory, so what is it exactly that you are checking for? Whether the files have all been removed (by some other process)?
I can't tell what you're trying to do. Could it be that you just want to do this?
open SOURCE, $newfile;
open SINK, '>>C:/q2refresh/q2Fastload.txt';
while (<SOURCE>) {
    print SINK $_;
}
close SOURCE;
close SINK;
That will append the contents of $newfile to your fastload file.