How to get vim to list the PIDs of the vim processes presently editing selected files, avoiding recovery mode, and not list all the other files - perl

The vim manual page documents two similar -r options. I'll give more background below; this question is really about how to invoke the first form of -r, which lists the swap files, while avoiding the second form, which invokes recovery:
-r          List swap files, with information about using them for recovery.
-r {file}   Recovery mode. The swap file is used to recover a crashed editing session. The swap file is a file with the same filename as the text file with ".swp" appended. See ":help recovery".
The -r without a filename (the first -r above) reports on the swap files of other files too, including ones in other directories.
Background:
I'm trying to have vim report the swap files of a specific file (mostly to determine whether vim is still editing the file). If the file is being edited (in another window, on either Linux or Cygwin), I can 'raise' that window to the top with "\e[2t\e[1t", as I have successfully been able to do thanks to Bring Window to Front.
Vim has multiple swap file names, and multiple directories where it could put a swap file, so I want to ask vim: please tell me the names of the swap files currently in use for a given file, and whether there is a current vim process on the file. Unfortunately, vim will sometimes open a file given on the command line in recovery mode in unexpected ways.
I'm invoking vim like this: vim -r -c :q file. Well, actually, I'm invoking it from script, since I want vim to see something more like a terminal, and then I look at the output file; so it's more like script -q -c "vim -r -c :q foo" fooscript, after which I look in the fooscript file for messages matching /Note: process STILL RUNNING: (\d+)/.
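Concretely, the probe looks something like this (foo and fooscript as above; the perl scan is just one way to pick the PID out of the typescript):
script -q -c "vim -r -c :q foo" fooscript
perl -ne 'print "$1\n" if /Note: process STILL RUNNING: (\d+)/' fooscript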
It is beginning to look like I need to use vim -r without a file name, and parse the output of the -r report, and that there isn't a way to get the report pre-filtered to a single file in question.

After switching my focus to just vim -r, and
- knowing that vim will try to put the swap file into the same directory as the file it's editing (thanks to @romainl for the pointer to :help swap-file),
- observing that vim -r reports on the files in the current directory first,
- observing that the file name associated with a swap file is reported before the process ID of the vim process, and
- observing that vim appends (STILL RUNNING) if it finds the active process,
I changed the current directory appropriately and ran this code, after plugging in the name of the file to search for:
perl -lne '
    # stop before the entries for swap files found in other directories
    last if /^\s+In directory/;
    # a line starting with a number begins a new swap-file entry
    undef $f if /^\d+/;
    # remember which file the current swap-file entry belongs to
    $f = $1 if /^\s+file name:\s+(.*?)\s*$/;
    # print the PID when the entry is for our file and vim is still alive
    if ( defined $f && $f =~ m#/file-to-search-for# && /^\s+process ID:\s+(\d+).*?STILL RUNNING/ ) {
        print $1;
        $pid //= $1;
    }
    END { exit !$pid; }'
The PID of the running vim process is printed, and the exit status is zero when the appropriate swap file is found and non-zero if the file was not being edited.
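For completeness, a sketch of how the filter above is wired up (the directory path is a placeholder; vim emits the -r report on stderr, so it has to be redirected into the pipe):
cd /directory/containing/the/file    # vim -r reports on this directory first
vim -r 2>&1 | perl -lne '...the filter above...'
echo "exit status: $?"               # 0 when an active swap file was found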

Related

checking to see if files are executable perl

I have a program that checks to see if the files in my directory are readable, writable, and executable.
I have it set up so it looks like
if (-e $file) {
    print "exists";
}
if (-x $file) {
    print "executable";
}
and so on
but my issue is that when I run it, it shows that the text files are executable too: plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.
It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system) the "executable" setting is just a flag stored in the file's metadata. That flag can be turned on or off for any file.
There are actually three of these permissions, which record whether you can read, write, or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details of what the various ls options mean). There are ways to view these permissions in the Finder too (the "Get Info" panel shows them).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
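For example, from a terminal (file.txt is a hypothetical file):
$ chmod +x file.txt    # turn the execute flag on
$ chmod -x file.txt    # turn it off again
$ ls -l file.txt       # inspect the current permission bits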
For more information about Unix file modes, see this Wikipedia article.
But whether or not a file is executable has nothing at all to do with its contents.
The statement if (-x $file) does not check whether a file is an executable program, but whether your user has execute permission on it.
As for checking whether a file's content is really executable, I'm afraid there isn't a magic method for it. You may try to use:
if (-T $file) to check whether the file has an ASCII or UTF-8 (i.e. text) encoding.
if (-B $file) to check whether the file is binary.
If this is unsuitable for your case, consider the following. Assuming you are in a Linux environment, note that any file can be handed to the shell for execution. The question here is: will executing, e.g., test.txt throw something on standard error (STDERR)?
Most likely, it will.
If the test.txt file contains:
Some text
and you launch it from your Perl script with system("./test.txt");, this will produce a STDERR message like:
./test.txt: line 1: Some: command not found
If for some reason you are looking to run all the files in your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will actually launch all of your files, which you may not be willing to do, especially if the Perl script is in the same directory that you are checking (this will lead to undesirable script behaviour). A safer approach that never executes anything is sketched below.
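A minimal sketch of checking the execute flag without running anything (the directory name is hypothetical):
for f in /some/dir/*; do
    # -x only tests the execute permission bit; the file is never run
    [ -x "$f" ] && printf '%s is executable\n' "$f"
done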
Hope it helps ;)

how to use backup files to create regular files in emacs

I am trying to create a file named caseexp.sml. Emacs created a backup file of this file when I was working on it at some earlier point, and now when I try to open it as caseexp.sml, Emacs opens a #caseexp.sml# file, and every time I try to save it using C-x C-w, Emacs saves it as another backup file with another tilde added to its name. Several attempts later, I have only managed to save it as #caseexp.sml#~~~.
How can I avoid creating these "tilde" backup files and save my file simply as caseexp.sml ?
There are a few unexpected behaviors here, so I can't be sure this is what's going on, but usually files with hashes in their names are left around when Emacs crashed (or was killed) while you had unsaved changes. Normally Emacs should prompt you to run M-x recover-this-file to restore changes from the unsaved-changes file (the filename with the hashes) into the actual file, so it's not clear what's going on there. Try fixing this from the command line.
You probably want to cp all the files to another location first, in order to have a backup (I'm assuming a Unix-like OS):
$ cp *caseexp* /tmp
Then delete the extra files while preserving the one with the most recent changes:
$ cp <most recent file with latest changes> caseexp.sml
$ rm \#caseexp*

CMD: Export all the screen content to a text file

In the command prompt, how do I export all the content of the screen to a text file (basically a copy command, just not by right-clicking and using the clipboard)?
The command below works, but it captures only the commands you executed, not their output:
doskey /HISTORY > history.txt
If you want to append to a file instead of constantly making a new one or deleting the old one's content, use double > marks. A single > mark will overwrite all of the file's content.
Overwrite file
MyCommand.exe>file.txt
^This will open file.txt if it already exists and overwrite the data, or create a new file and fill it with your output
Append file from its end-point
MyCommand.exe>>file.txt
^This will append file.txt from its current end of file if it already exists, or create a new file and fill it with your output.
Update #1 (advanced):
My batch-fu has improved over time, so here are some minor updates.
If you want to differentiate between error output and normal output for a program that correctly uses the standard streams, STDOUT/STDERR, you can do this with minor changes to the syntax. I'll just use > for overwriting in these examples, but they work perfectly fine with >> for appending.
The 1 before > or >> is the flag for STDOUT, and 2 before the redirection symbols is for STDERR. If you actually need to output the number one or two immediately before a redirection symbol, this mechanism can lead to strange, unintuitive errors if you don't know about it; that's especially relevant when writing a single result number into a file.
Now that you know you have more than one stream available, this is a good time to show the benefits of outputting to nul. Outputting to nul works conceptually the same way as outputting to a file, except the content goes into the void instead of into a file or your console.
STDERR to file and suppress STDOUT
MyCommand.exe 1>nul 2>errors.txt
STDERR to file to only log errors. Will keep STDOUT in console
MyCommand.exe 2>errors.txt
STDOUT to file and suppress STDERR
MyCommand.exe 1>file.txt 2>nul
STDOUT only to file. Will keep STDERR in console
MyCommand.exe 1>file.txt
STDOUT to one file and STDERR to another file
MyCommand.exe 1>stdout.txt 2>errors.txt
The only caveat here is that redirection can create a 0-byte file for a stream that never gets used; if no errors occurred, you might end up with a 0-byte errors.txt file.
Update #2
I started noticing weird behavior when writing console apps that wrote directly to STDERR, and realized that if I wanted my error output to go to the same file when using basic piping, I either had to combine streams 1 and 2 or just use STDOUT. The problem was that I didn't know the correct way to combine streams, which is this:
%command% > outputfile 2>&1
Therefore, if you want all STDOUT and STDERR piped into the same stream, make sure to use that like so:
MyCommand.exe > file.txt 2>&1
The redirection operator actually defaults to 1> or 1>> even if you don't explicitly put a 1 in front of it, and 2>&1 merges STDERR into the STDOUT stream.
Update #3 (simple)
Null for Everything
If you want to completely suppress both STDOUT and STDERR, you can do it this way. As a warning, not all programs write through STDOUT and STDERR, but this will work for the vast majority of use cases.
STD* to null
MyCommand.exe>nul 2>&1
Copying a CMD or Powershell session's command output
If all you want is the command output from a CMD or PowerShell session that you just finished (or any other shell, for that matter), you can usually just click into that console, press CTRL + A to select all content, then CTRL + C to copy it. Then you can do whatever you like with the copied content while it's in your clipboard.
Just see this page
in cmd type:
Command | clip
Then open a .txt file and paste. That's it. Done.
If you are looking for each command separately:
To export all the output of the command prompt into a text file, simply follow this syntax:
C:\> [syntax] > file.txt
The above command writes the result of [syntax] into file.txt, where the new file.txt is created in the current folder you are in. For example:
C:\Result> dir > file.txt
To copy the whole session, try this. Copy and paste a command session as follows:
1.) At the end of your session, click the upper left corner to display the menu.
Then select.. Edit -> Select all
2.) Again, click the upper left corner to display the menu.
Then select.. Edit -> Copy
3.) Open your favorite text editor and use Ctrl+V or your normal
Paste operation to paste in the text.
If your batch file is not interactive and you don't need to see it run, then this should work:
@echo off
call file.bat >textfile.txt 2>&1
Otherwise, use a tee filter. There are many, some not NT-compatible. SFK, the Swiss File Knife, has a tee feature and is still being developed. Maybe that will work for you.
How about this:
<command> > <filename.txt> & <filename.txt>
Example:
ipconfig /all > network.txt & network.txt
This will give the results in Notepad instead of the command prompt.
From a command prompt run as Administrator, the example below prints a list of the services running on your PC:
net start > c:\netstart.txt
You should then find the text file you just exported, listing all the running services, at the root of your C:\ drive.
If you want to capture ALL the output, not just stdout but also everything the program writes to stderr (warnings, info messages, and so on), you have to add 2>&1 at the end of the command line.
In your case, the command will be
Program.exe > file.txt 2>&1

Using grep in eshell on NTemacs

I have been trying to run a recursive grep command on files in subfolders using grep in NTemacs and Cygwin. So far the "best" results have been using grep in eshell. When I use this:
grep "t" -r *
I get a list of all file names containing the letter t in all subfolders one layer down, but nothing else. In Cygwin I get nothing. I'm working in a directory that is not under the Cygwin install; I don't know whether that matters or not.
What I want is to match the content of a more complex string in all files (and not just the file names, but the content), and in all subdirectories.
I would like to use eshell from Emacs, but I'm open to suggestions, apart from using Linux. This is a work PC and I don't want to do all the setup of a Linux install.
I just wrote a very similar answer to another question, but I suspect it's the same root problem:
My first thought is that your files have Windows line endings (CRLF) as opposed to Unix/Linux line endings (LF), and that is messing with grep's ability to parse the file. Try running this:
dos2unix filename
on each file you need to search, then try your grep statement again.
If you need to convert many files across several directories, I suggest using dos2unix with the -exec action of find:
find . -exec dos2unix {} \;
(add whatever other options you need to find before running that, of course)
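For instance, a variant that limits the conversion to one file type, after which the recursive content search the question asks for is a plain grep -r (the *.txt pattern and the search string are just placeholders):
find . -name '*.txt' -exec dos2unix {} \;
grep -r "your more complex string" .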

Linux recycle bin script

I'm creating a recycle-bin script in the Linux sh shell, split across three different scripts: delete, trash, and restore.
The first two scripts are working fine: 'delete' moves the selected file to the recycle bin while logging to a text file called 'trashinfo', which records the original path of the file (to be used later by restore), and 'trash' removes everything in the recycle bin.
The 'restore' script should take the path name logged by the delete script and return the file to its original location. I've spent more time than I'd like to remember on this and can't get the restore script to work properly!
Below is the script I've written. As far as I can make out, I'm grepping for the filename variable in the text file that holds the path name (e.g. 'restore testfile'); this is then combined with the basename command, and the testfile is then moved into the location that was grepped.
Anyone have any pointers on where I'm going wrong?
if [ "$*" != -f ]
then
path=grep "$*" /usr/local/bin/trashinfo
pathname=basename "$path"
mv "$path" "$pathname"
path=$(grep "$*" /usr/local/bin/trashinfo)
pathname=$(basename "$path")