Tcl: how to write data at a specific line number in the middle of writing a file

Is there any way or command in Tcl to write into the middle of data.txt at a specific line number?
For example, after writing data to the text file, when I'm writing at line number 1000, is there a way to go back to line number 20 and add data to that line in the output? (Something like lappend and append for list variables, but for the puts command.)

You can use seek to move the current location in a channel opened on a file (it's not meaningful for pipes, sockets and serial lines); the next read (gets, read) or write (puts) will happen at that point, except in append mode, where writes always go to the end.
seek $channel $theNewLocation
However, the location to seek to is in bytes; the only locations that it is trivially easy to go to are the start of the file and the end of the file (the latter using end-based indexing). Thus you need to either remember where “line 20” really is from the first time, or go to the start and read forward a few lines.
seek $channel 0
for {set i 0} {$i < 19} {incr i} {gets $channel}   ;# discard lines 1-19; channel is now at the start of line 20
Also be aware that the data afterwards does not shuffle up or down to accommodate what you've written the second time. If you don't write exactly the same length of data as was already there, you end up with partial lines in the file. Truncating the file with chan truncate might help, but might also be totally the wrong thing to do. If you're going to go back and patch a text file, writing an extra long line where you're going to do the patch (e.g., with lots of spaces on the end) can make it much easier. It's a cheesy hack, but simple to do.
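For instance, here is a minimal sketch of that padding trick; the file name, the 80-character width and the line contents are made up for illustration:
set f [open data.txt r+]
# ... write lines 1 through 19 ...
set patchPos [tell $f]                        ;# remember where line 20 starts
puts $f [format %-80s "placeholder"]          ;# reserve a fixed-width slot
# ... write lines 21 through 1000 ...
seek $f $patchPos                             ;# jump straight back to line 20
puts -nonewline $f [format %-80s "the real line 20 content"]
close $f
Because both writes are exactly 80 characters, the overwrite never bleeds into line 21 and the original newline stays in place.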

Related

Reading a changing txt file in MATLAB

I have a text file (5 columns, "\t"-separated) that is being written to by another program. I need to take the readings from the file and do some calculations. Is there a way to read the new lines added to the file, process them, and then repeat for every new set of lines? I don't mind a bit of delay as long as it does the job.
My idea is to start reading the file line by line until the end of file, then read from where it stopped last until the new end of file, and so on.
Can this be done in MATLAB? Can I specify the starting line for the file reading? Can I also update the end-of-file point?
To prevent the loop from breaking at the EOF point, I think the loop should be controlled by time or something else, while checking for EOF at the end of every iteration.
I've mostly worked with MATLAB, but if there is a better option for this purpose (one that I can reasonably learn), please feel free to guide me.
Edit1: I've tried using dlmread as you suggested. When I read the file outside the loop, it reads correctly even when I change R1 and while the other software is updating the text. However, when I put it in a loop I get this error:
Error using dlmread (line 143)
Empty format string is not supported at the end of a file.
Here is my code to read it multiple times:
clear all
x = 0;
R1 = 0; C1 = 0;
while (x < 10)
    M = dlmread('tst_4.txt', '\t', R1, C1);
    R1 = length(M);
    x = x + 1;
end
Thanks
You can use dlmread(filename,delimiter,R1,C1), where R1 and C1 are the row and column offsets respectively. By setting the row offset to the last row that you have read, you read only the file content beyond what you have already seen.
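As a minimal sketch of that loop (reusing the file name from the question): the offset must grow by the number of rows actually read, size(M,1), rather than length(M), which returns the longest dimension, and an empty read has to be guarded against, since dlmread raises the "Empty format string" error when no new rows have appeared yet:
R1 = 0; C1 = 0;
for x = 1:10
    try
        M = dlmread('tst_4.txt', '\t', R1, C1);  % rows R1 to end of file
        R1 = R1 + size(M, 1);                    % advance by rows actually read
        % ... process the new rows in M here ...
    catch
        % no new rows yet; skip this pass and poll again
    end
    pause(1);  % small delay between polls
end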

How to read a big file (several GB) quickly in Perl

We are currently reading the file line by line, which takes a long time to complete.
We need to read the file quickly and proceed with our commands.
The code I tried using fork and an array only displays the first set of lines and does not proceed with the other sets.
Please help.
Reading a large file takes a fair bit of time - disks are slow, after all. Before you start looking at Perl, first try (assuming you're on a unix-type system):
time cat /path/to/your/large/file >/dev/null
The output will tell you how long it takes to just read that file from disk without doing anything to it. Alternatively, open the file in your favorite text editor and time how long it takes to load. Once you have that time, compare it to how long your Perl program takes to read the file. Unless the Perl program takes significantly longer, you're not likely to be able to do anything about it because the time is being spent on getting the data from disk rather than on processing it.
Of course, that's assuming that you actually do need to read the entire file. If you can get by with only reading specific parts of it, then you could create an index file and use that to jump directly to the part that's of interest, but you haven't provided enough information for us to tell whether that would apply to your case or not.
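To make the index idea concrete, here is a minimal sketch; the file path, the 1000-line granularity and the target line number are all assumptions for illustration:
use strict;
use warnings;

my $file = '/path/to/your/large/file';
open my $fh, '<', $file or die "Cannot open $file: $!";

# One-time pass: record the byte offset where every 1000th line starts.
my @offset = (0);
my $line = 0;
while (<$fh>) {
    push @offset, tell $fh if ++$line % 1000 == 0;
}

# Later: jump near line 123456 (0-based) without rereading everything.
my $want = 123456;
seek $fh, $offset[int($want / 1000)], 0;
<$fh> for 1 .. $want % 1000;    # skip the remainder within the block
my $target = <$fh>;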
If you need more specific help, please provide a better description of what you mean to accomplish and a small, runnable piece of Perl code which shows how you're currently reading and processing the file so that we can see whether you're doing anything particularly inefficient that can be improved on.

A rotating log file in Perl

I have implemented a log file that stores the CPU and memory state of a process every minute. I have limited the maximum size of the file to 3MB (that's enough for my purpose).
The script will be called by a cron job every minute; it will log the details for that minute and rename the file to "Log_<current file-pointer position>.log".
When the size reaches "3MB - 100 bytes", I reset the file pointer to the beginning, overwrite the first entry in the log file, and rename the file to "Log_<0+some offset>.log".
As I am renaming the file every minute to track the file-pointer position, is this a good/efficient way?
I do not want to maintain more than one log file for this purpose.
Another option for me is to maintain the file-pointer position in a file, but... another file! Not interested in maintaining one if this option is good :)
Thanks in advance.
Are you an engineer? This is a nice example of a simple task solved by a perfectly working but overly complex solution.
Unless the content you put in takes exactly as many bytes as the content you take out, writing into the middle of a file means everything after your writing position has to be rewritten to disk. Appending is much cheaper.
Renaming the file to store the pointer works, but it's not very elegant, and it makes things more complex (for one, your process needs write permission on the directory in which the file resides; otherwise write access to the two files themselves would be sufficient).
Unless disk space is an issue (and really, it rarely is), your approach is less efficient than, say, appending everything to a file and rotating the file when it reaches its maximum size. This way you always have the last 3MB of logs available, plus at most 3MB more in your current file. It will make parsing the file a lot easier too, with no pointer-position bookkeeping at all.
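A minimal sketch of that append-and-rotate approach; the log path and the sampled values are assumptions, only the 3MB cap comes from the question:
use strict;
use warnings;

my $log = 'process_stats.log';    # hypothetical path
my $max = 3 * 1024 * 1024;        # 3MB cap from the question

# Rotate first if the current file is full; keep exactly one old file.
if (-e $log && -s $log >= $max) {
    rename $log, "$log.old" or die "rename: $!";
}

my ($cpu, $mem) = (0.0, 0.0);     # placeholders for however you sample the process
open my $fh, '>>', $log or die "open: $!";
printf {$fh} "%s cpu=%s mem=%s\n", scalar localtime, $cpu, $mem;
close $fh;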
Update to answer your comment:
Renaming a file every minute (or even every second) shouldn't slow down your system significantly; don't worry about that.
Our concern is mainly with why you think you need to rename the file. It's not better technically, it's not better from a logical point of view, and it makes a lot of other (future) tasks harder. You could store the file pointer in a separate file, or at the end of your file, and there are better^H^H^H^H^H^H simpler solutions that don't require the file pointer at all.
I'm confused why you would rename your file. What does this accomplish?
Are the log entries fixed size? Or variable size?
If the entries are fixed size, then there is no trouble in re-writing the existing file from the start: you won't ever have incomplete entries in your file, and if you are writing a counter or timestamps to the file, it should be clear where the 'cursor' is located.
If the entries are variable size, then you should probably not begin re-writing the file from the beginning without somehow making it clear where the 'cursor' is located in the file, and write code that is resilient to reading truncated log entries.
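For the fixed-size case, a minimal sketch of overwriting one entry in place; the 64-byte record size, the path and the slot arithmetic are all assumptions:
use strict;
use warnings;

my $RECSIZE = 64;                              # fixed bytes per entry, assumed
open my $fh, '+<', 'cpu_mem.log' or die "open: $!";

my $slot  = 17;                                # which entry to overwrite
my $entry = sprintf "%-*s", $RECSIZE - 1,
                    sprintf("t=%d cpu=%.1f mem=%.1f", time, 12.3, 45.6);

seek $fh, $slot * $RECSIZE, 0 or die "seek: $!";
print {$fh} "$entry\n";                        # exactly $RECSIZE bytes
close $fh;
Because every record is the same width, a seek to $slot * $RECSIZE always lands on a record boundary and the rewrite can never truncate a neighboring entry.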
Can you re-use existing tools such as RRDtool?

ExtAudioFileSeek and ExtAudioFileWrite together on the same file

I have a situation where I can save a post-processing pass through the audio by taking some manipulated buffers from the end of the track and writing them to the beginning of my output file.
I originally thought I could do this by resetting the write pointer using ExtAudioFileSeek, and was about to implement it when I saw this line in the docs:
Ensure that the file you are seeking in is open for reading only. This function’s behavior with files open for writing is undefined.
Now I know I could close the file for writing then reopen it, but the process is a little more complicated than that. Part of the manipulation I am doing is reading from buffers that are in the file I am writing to. The overall process looks like this:
1. Read buffers from the end of the read file
2. Read buffers from the beginning of the write file
3. Process the buffers
4. Write the buffers back to the beginning of the write file, overwriting the buffers I read in step 2
Logically, this can be done in 1 pass no problem. Programmatically, how can I achieve the same thing without corrupting my data, becoming less-efficient (opposite of my goal) or potentially imploding the universe?
Yes, using a single audio file for both reading and writing may, as you put it, implode the universe, or at least lead to other nastiness. I think that the key to solving this problem is in step 4, where you should write the output to a new file instead of trying to "recycle" the initial write file. After your processing is complete, you can simply scrap the intermediate write file.
Or have I misunderstood the problem?
Oh, and also, you should use ExtAudioFileWriteAsync instead of ExtAudioFileWrite for your writes if you are doing this in realtime. Otherwise the I/O load will cause audio dropouts.
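A rough sketch of that intermediate-file approach in C, assuming an LPCM client format with stereo float samples; the function name and buffer setup are illustrative, OSStatus checks are elided for brevity, and every call should be verified against the ExtAudioFile documentation:
#include <AudioToolbox/AudioToolbox.h>

void processToNewFile(CFURLRef srcURL, CFURLRef dstURL,
                      AudioStreamBasicDescription clientFormat) {
    ExtAudioFileRef src = NULL, dst = NULL;
    ExtAudioFileOpenURL(srcURL, &src);
    ExtAudioFileCreateWithURL(dstURL, kAudioFileCAFType, &clientFormat,
                              NULL, kAudioFileFlags_EraseFile, &dst);

    // Read and write through the same client data format on both files.
    ExtAudioFileSetProperty(src, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);
    ExtAudioFileSetProperty(dst, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    enum { kFrames = 4096 };
    float samples[kFrames * 2];                 // assumes 2 channels
    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
    abl.mBuffers[0].mData = samples;

    for (;;) {
        abl.mBuffers[0].mDataByteSize = sizeof(samples);
        UInt32 frames = kFrames;
        if (ExtAudioFileRead(src, &frames, &abl) != noErr || frames == 0)
            break;
        // ... process the samples in place here ...
        ExtAudioFileWrite(dst, frames, &abl);   // offline, so the sync call is fine
    }

    ExtAudioFileDispose(src);
    ExtAudioFileDispose(dst);
}
The source is only ever read and the destination only ever written, so neither file is seeked while open for writing, and the old "write file" can simply be scrapped afterwards.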

How can I print "Done" or "Fail" at the end of line to stdout in Perl?

I have just begun with Perl and I want to write my own script to scan a document and convert the resulting TIFF file to a PDF file. If the conversion succeeds (using tiff2pdf), I want to print "Done" at the end of the line, but I can't seem to find a hint on how to do this on the Web.
My guess is that I have to get the geometry of the terminal and count the letters I have already printed, but that seems too complicated. Do you have any advice?
You're right about having to inspect the size of the terminal you're printing to. There are many ways to do that, but the most portable and reliable way I'm aware of is Term::Size::Any.
With that, you can get the width of the terminal you're running in:
use Term::Size::Any qw(chars);    # chars() must be imported explicitly
my $cols = chars *STDOUT{IO};     # column count of the terminal on STDOUT
With that, you can then print whatever you want, padded with the right amount of whitespace, e.g.:
printf "%${cols}s\n", "Done";
Also be aware that programs don't always output to terminals. Output could, for example, be redirected to a file, so you might want to have an appropriate fallback if determining the terminal size fails.
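For example, a minimal sketch of such a fallback, assuming chars() returns the column count in scalar context and defaulting to 80 columns when STDOUT is not a terminal:
use strict;
use warnings;
use Term::Size::Any qw(chars);

# Only query the terminal when STDOUT really is one (-t); fall back to an
# assumed 80 columns otherwise, e.g. when output is redirected to a file.
my $cols = -t STDOUT ? scalar chars(*STDOUT{IO}) : 80;
printf "%${cols}s\n", "Done";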