Update text in place for Mac OS command line app - Swift

I'm writing a simple Mac OS (10.11) command-line app in Swift to download a file from a valid remote URL, and I'd like to show the percentage of the file downloaded, updating in place on the console.
It would start with:
0% of 10Mb file downloaded
Then when I have some percentage of the file downloaded, the above line would get replaced with:
18% of 10Mb file downloaded
Finally, when it finishes, the string gets replaced in line with:
100% of 10Mb file downloaded
print(...) keeps appending to the existing text - is there another function that will do what I need?

If you leave the cursor at the end of the line (i.e., by not outputting a newline '\n' at the end), you can return to the start of the line by outputting a carriage return ('\r'). Further output will then overwrite what is already on the line.
I'm not super familiar with Swift, but I suspect you may need to explicitly flush output to get a partial line to show up. If you were using standard C I/O, this would be fflush(stdout); I'm not sure what the Swift equivalent is, but I'm sure it exists.
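For example, a minimal Swift sketch of that approach (the progress values are simulated here; fflush and stdout come from C's stdio, which Swift on OS X can call once Foundation is imported):

import Foundation

for percent in stride(from: 0, through: 100, by: 10) {
    // '\r' returns the cursor to the start of the line, and the empty
    // terminator suppresses print's trailing newline
    print("\r\(percent)% of 10Mb file downloaded", terminator: "")
    fflush(stdout)  // force the partial line to appear immediately
    Thread.sleep(forTimeInterval: 0.2)  // stand-in for real download progress
}
print("")  // finish with a newline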

Add
\u{1B}[1A\u{1B}[K
to your string. \u{1B}[1A moves the cursor up one line, and \u{1B}[K (note the capital K) erases from the cursor to the end of the line.
For example:
print("\u{1B}[1A\u{1B}[KDownloading \(myProgress)%")

Running MATLAB system command in background with stdout

I'm using MATLAB and calling an .exe via the system command.
[status,cmdout] = system(command_s);
where command_s is a command string, formatted earlier in my script, that passes all the desired options to the .exe. The .exe would normally write to a .csv file via the > redirection operator in Windows/DOS; instead, this output goes to cmdout, where I use it later in the MATLAB script. It is working correctly and as expected. I'm doing it this way so that the process just uses memory and does not write a very large file to disk, which would then have to be read back from disk and deleted after I'm done with it. In the end, it saves a .mat file that's usually hundreds of KB instead of the tens or hundreds of MB the .csv file would be (some unneeded data is thrown out at the end).
The issue I'm having is that, since I'm dealing with large files, the executable can take a significant amount of time - I typically have to wait about 2 minutes after executing this command. In the meantime, I have no feedback to tell me it is progressing and that my system hasn't frozen. I know I could add the & symbol to the end of my command string, command_s, and run MATLAB code while the executable runs in the background (or asynchronously, as some would say), but that brings up an external window AND makes cmdout empty - so I cannot use the output - forcing me to sit there wondering for 2 minutes every time it executes.
Is there any way to run in the background AND get the stdout from the command?
Maybe you could try system(command_s,'-echo')?

Batch file doesn't read flags correctly

I've got a little batch file and it looks like this:
.\batchisp.exe –device at32uc3b1512 –hardware usb –operation erase f memory flash blankcheck loadbuffer G3Pro_USB.hex program verify start reset 0
The whole line is fine and works correctly if I run it straight in PowerShell. However, if I run the batch file, it runs this:
.\batchisp.exe ΓÇôdevice at32uc3b1512 ΓÇôhardware usb ΓÇôoperation erase f memory flash blankcheck loadbuffer G3Pro_USB.hex program verify start reset 0
Which does not work, because as you can see, the -'s have changed into ΓÇô's... Can anybody tell me why this is and how to fix it?
This is because the – marks are not - (hyphen) characters; they are actually en dashes, usually introduced by Word's automatic en/em-dash substitution.
PowerShell is smart enough to treat the en dashes as plain dashes when parsing arguments, but cmd is not.
To fix this issue, replace – with -. A regex search/replace that catches all the alternative dash types and works in Notepad++ is: [–—‒] to -.
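If you would rather script the cleanup than do it in an editor, the same replacement is easy to do programmatically; here is a short Swift sketch (hypothetical input string) using that character class:

import Foundation

let line = ".\\batchisp.exe –device at32uc3b1512 –hardware usb"
// replace en, em, and figure dashes with a plain ASCII hyphen
let fixed = line.replacingOccurrences(of: "[–—‒]", with: "-",
                                      options: .regularExpression)
print(fixed)  // .\batchisp.exe -device at32uc3b1512 -hardware usb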

Line-by-line file I/O not working as expected in Windows

I'm using Perl 5.16.1 from Strawberry in a Windows environment. I have a Perl script reading very large text files; the smallest is 30 MB. When reading files that do not have a line feed at the end of the very last line, I get very peculiar results. It may not happen all the time, but when it does, it's as though the script is reading cached data from the I/O system for another file that I previously opened with the Perl script. If I manually edit the file and add a line feed, it's fine. I added a line counter and some inline code to display what happens when I'm near the end of the file, to make sure I wasn't going nuts. To try to fix it, I added this to my script:
open (SS_LOG, ">>", $SSFile) or die "Can't open $SSFile\r\n $!\r\n";
print SS_LOG "\r\n";
close SS_LOG;
but it does nothing. The file stays the same size. I'm also storing data in large arrays.
Has anyone else seen anything like this?
Try unbuffering your output:
use IO::Handle;   # implicit on Perl >= 5.14, but it doesn't hurt to be explicit
SS_LOG->autoflush(1);

How to rewrite a file from a shell script without any danger of truncating the file if out of disk space?

This handy Perl one-liner replaces all occurrences of "foo" with "bar" in a file called test.txt:
perl -pi -e 's/foo/bar/g' test.txt
This is very useful, but ...
If the file system where test.txt resides has run out of disk space, test.txt will be truncated to a zero-byte file.
Is there a simple, race-condition-free way to avoid this truncation occurring?
I would like the test.txt file to remain unchanged and the command to return an error if the file system is out of space.
Ideally the solution should be easily used from a shell script without requiring additional software to be installed (beyond "standard" UNIX tools like sed and perl).
Thanks!
In general, this can’t be done. Remember that the out-of-space condition can hit anywhere along the sequence of actions that give the appearance of in-place editing. Once the filesystem is full, perl may not be able to undo previous actions in order to restore the original state.
A safer way to use the -i switch is to use a nonempty backup suffix, e.g.,
perl -pi.bak -e 's/foo/bar/g' test.txt
This way, if something goes wrong along the way, you still have your original data.
If you want to roll your own, be sure to check the value returned from the close system call. As the Linux manual page notes,
Not checking the return value of close() is a common but nevertheless serious programming error. It is quite possible that errors on a previous write(2) operation are first reported at the final close(). Not checking the return value when closing the file may lead to silent loss of data. This can especially be observed with NFS and with disk quota.
As with everything else in life, leave yourself more margin for error. Disk is cheap. Dig out the pocket change from your couch cushions and go buy yourself half a terabyte or so.
From perldoc perlrun:
-i[extension]
specifies that files processed by the "<>" construct are to be edited in-place.
It does this by renaming the input file, opening the output file by the original
name, and selecting that output file as the default for print() statements. The
extension, if supplied, is used to modify the name of the old file to make a
backup copy, following these rules:
If no extension is supplied, no backup is made and the current file is
overwritten.
[…]
Rephrased:
The backup filename is determined from the value of the -i switch, if one is given.
The original file is renamed to the new filename, and opened for the script. Renaming is atomic on most filesystems.
A file with the name of the original file is opened for writing. The file will start with length zero, but is not identical to the original file (which has a different name now).
After the script has finished, and if no explicit backup extension was provided, the backup file is deleted. The original file is then lost.
Should the system run out of drive space, then the new file is endangered, not the original file which was never copied or moved (at least on filesystems with an inode-like concept).
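To illustrate the safe pattern itself - build the new contents, write them to a temporary file, and only then rename over the original - here is a small Swift sketch (it reads the whole file into memory, so it suits modestly sized files; with the .atomic option, Foundation writes to an auxiliary file and renames it into place, so a failed write cannot truncate test.txt):

import Foundation

let url = URL(fileURLWithPath: "test.txt")
let original = try String(contentsOf: url, encoding: .utf8)
let replaced = original.replacingOccurrences(of: "foo", with: "bar")
// if the disk is full, this write fails and test.txt is left untouched
try Data(replaced.utf8).write(to: url, options: .atomic)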

Perl and reading files with different encodings

I am using a Perl script to read in a file, but I'm not sure what encoding the file is in. Basically, my file is a list of book titles, but each book has other info associated with it (author, publication date, etc.), so each book title is within a discrete chunk of data for the book. I iterate through the file line by line until I find the regular expression '/Book Title: (.*)/' and take what's in the parentheses. Then I create a separate .txt file whose name is the book title. However, on my Unix server, when I look at the name of the file, it's actually not, for example, 'LordOfTheFlies.txt' but rather 'LordOfTheFlies^M.txt'.
What is this '^M'? Is it a weird end-of-line encoding I'm not taking into account? I tried chomp, but it doesn't seem to be working. What is the best file encoding for working with Perl?
It's the additional carriage return character that Windows systems insert before line feed characters (M == 13th letter, hence ASCII 13 is visualised as ^M).
It has nothing to do with file encoding; it's just the line-ending policy biting you. Perl is usually good at handling line-ending characters correctly, but if they occur somewhere other than at the end of a line, you have to deal with them yourself. You can use s/\r// instead of chomp() to get them out.
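For comparison, the same cleanup sketched in Swift (hypothetical title string), trimming any trailing CR or LF rather than assuming one particular line ending:

var title = "LordOfTheFlies\r"
while title.hasSuffix("\r") || title.hasSuffix("\n") {
    title.removeLast()  // drop stray line-ending characters one at a time
}
print(title)  // LordOfTheFlies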
Before processing the file, you need to know its encoding, which is determined by whoever produced the file.
That "^M" is Control-M, which is a carriage return and is not needed on Unix file systems. It looks like the file was created on Windows and transferred to Unix. Carriage returns can also be introduced by ftp when text files are transferred as binaries.
Try chop instead of chomp: chomp only removes the newline character, while chop removes the last character of the string whatever it is - here, the stray carriage return. s/\r// is also good.
For your general question, you might want to use an appropriate module for the file type you have, to make your life with Perl easier.