WinCvs 3-way diff with the "-3" option?

I'm working on writing a Perl script that provides an interface to WinCvs for all of my Perl scripts to use. While trying to implement the diff command I came across an option that looks incredibly useful (if it does what I think it does) but I can't figure out how to use it.
Here is the "usage:" output for the WinCvs diff command.
My question is: how do you use the -3 option?
The online CVS documentation doesn't even reference this option and I can't figure out a way to make it produce a useful output.
I tried using it like this: cvs diff -3 -r<revision1> -r<revision2> -r<revision3>, but I just get the following error.
Using this command with only 2 revisions seems to give the same output as if I didn't use -3. Am I missing something? Or is this command just inherently broken in WinCvs?

Related

How can we use minimatch in PowerShell?

I would like to be able to get all the files using a pattern like c:\files\**\bla\**\*.ts, for example.
Possible options, sorted in the order of preference, are:
PowerShell supports it already. (How???)
There is a PowerShell module implementing it.
There is a .NET library that does it. glob and minimatch look promising.
There is a windows command line tool that does it.
I am asking this question because I prefer the first 2 options, but so far (10 mins) I have come up empty.

Is there a standard way to declare your preferred diff tool under Unix/Linux?

I want to know if there is a standardized way to declare your preferred diff tool (for file or directory comparison).
For preferred editor there is the EDITOR environment variable, but I wasn't able to find one for diff.
If not, maybe someone has written a script that can reconfigure the most commonly used tools to do this.
EDITOR is usually used by various shells that provide line-editing functionality on certain things like your command history. For example, pressing v in vi-edit mode within bash will bring up the command for editing in your preferred editor.
I'm unaware of any aspects of the shell that do diff processing, so a DIFF variable seems unnecessary. If you want to diff some files, just use diff, or your preferred tool if it isn't diff.
However, nothing is stopping you from writing a program of your own which does use a DIFF variable to perform difference analysis on files.
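If you do go that route, a tiny wrapper script is enough. This is only a sketch; the mydiff name and the DIFF convention are my own invention, not anything standard:

#!/bin/sh
# mydiff: run whatever $DIFF points at, falling back to plain diff.
exec "${DIFF:-diff}" "$@"

Then DIFF=vimdiff mydiff old.txt new.txt honours the variable, and plain mydiff old.txt new.txt behaves like diff.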

Perl: console / command-line tool for interactive code evaluation and testing

Python offers an interactive interpreter allowing the evaluation of little code snippets by submitting a couple of lines of code to the console. I was wondering if a tool with similar functionality (e.g. including a history accessible with the arrow keys) also exists for Perl?
There seem to be all kinds of solutions out there, but I can't seem to find any good recommendations. I.e. lots of tools are mentioned, but I'm interested in which tools people actually use and why. So, do you have any good recommendations, excluding the standard perl debugging (perl -d -e 1)?
Here are some interesting pages I've had a look at:
a question in the official Perl FAQ
another Stack Overflow question, where the answer is mostly the perl debugger and several links are broken
Perl Console
Perl Shell
perl -d -e 1 is perfectly suitable; I've been using it for years and years. But if you just can't, then you can check out Devel::REPL.
If your problem with perl -d -e 1 is that it lacks command line history, then you should install Term::ReadLine::Perl which the debugger will use when installed.
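For example (assuming you have the cpanm client installed; the plain cpan shell works too):

cpanm Term::ReadLine::Perl    # gives the debugger history and arrow keys
cpanm Devel::REPL             # installs the re.pl interactive shell
perl -d -e 1                  # the debugger-as-REPL, now with readline
re.pl                         # or the Devel::REPL prompt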
Even though this question has plenty of answers, I'll add my two cents on the topic. My approach to the problem is easy if you are a Vim user, but I guess it can be done from other editors as well:
Open Vim and type your code. You don't need to save it to any file.
Type :w !perl to evaluate it (:w !COMMAND pipes the buffer to the process obtained by running COMMAND; in this case, the mighty perl interpreter!)
Take a look at the output
This approach is good for any interpreted language, not just for Perl.
In the case of Perl it is extremely convenient when you are writing your own modules, since in my experience the perl interpreter will refuse to reload a module (even when loading was attempted and failed). On the minus side, you will lose all your context every time, so if you are doing some heavy or slow operation, you need to save some intermediate results (whilst the perl console approach preserves the previously computed data).
If you just need the evaluation of an expression - which is the other use case for a perl console program - another good alternative is evaluating it with a perl -e command. It's fast to launch, but you have to deal with escaping (for this, the $'...' syntax of Bash does the job pretty well).
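For instance, something like this works without fighting the shell's quoting (the expression itself is just an arbitrary example):

perl -e $'print "it\'s ", 6 * 7, "\\n";'

Bash resolves the escapes inside $'...' before perl ever sees the code, so the one-liner can contain a single quote without breaking out of the shell quoting.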
To get history and arrow keys, just use:
rlwrap perl -de1

How to discover command line options (if any) for an undocumented executable of unknown origin?

Take an undocumented executable of unknown origin. Trying /?, -h, --help from the command line yields nothing. Is it possible to discover if the executable supports any command line options by looking inside the executable? Possibly reverse engineering? What would be the best way of doing this?
I'm talking about a Windows executable, but would be interested to hear what different approaches would be needed with another OS.
On Linux, step one would be to run strings your_file, which dumps all the strings of printable characters in the file. Any constant strings will thus be shown, including any "usage" instructions.
The next step could be to run ltrace on the file. This shows all the library calls the program makes. If they include getopt (or similar), then it is a sure sign that it is processing input parameters. In fact, you should be able to see exactly what options the program is expecting, since the option string is the third parameter to the getopt function.
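A rough session might look like this (the binary name and the test argument are placeholders):

strings ./mystery.bin | grep -iE 'usage|option|help'    # any embedded usage text
ltrace ./mystery.bin --probe 2>&1 | grep -i getopt      # getopt/getopt_long calls reveal the option string

ltrace writes its trace to stderr, hence the 2>&1.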
For Windows, you can see this question about decompiling Windows executables. It should be relatively easy to at least discover the options (what they actually do is a different story).
If it's a .NET executable, try using Reflector. This will convert the MSIL code into the equivalent C# code, which may make it easier to understand. Unfortunately, private and local variable names will be lost, as these are not stored in the MSIL, but it should still be possible to follow what's going on.

CLI Patterns/Antipatterns for usability

What patterns contribute to or detract from the usability of a CLI?
As an example, consider the CLI for ClearCase. The CLI is very comprehensive (+1), but it has several glaring opportunities for improvement. Recently, I wanted to force file names to lower case while importing into ClearCase using clearfsimport. Unfortunately I wound up on the documentation for its cousin, clearimport. It may seem slight, but it cost me more hours than I care to admit. The variation in the middle got me.
Why provide such nearly identical functionality with such nearly identical names? There are many better options, in my opinion:
clearimport -fs
fsclearimport
clear_fs_import
clearimport_fs
Anything would be better than what they went with. The code I am working on IS a CLI, and this experience made me look at my own choices. I think I have all the basics covered (standard help, long-form vs. short-form options, short meaningful names, providing examples, eliminating ambiguity, accurately handling spaces within quotes, etc.).
There is some literature on this subject.
Perhaps a bad CLI is no different than a bad API; a CLI is a type of API in some sense. The goals are naturally common: flexibility, readability, and completeness. Several factors differentiate a CLI from a typical API. One is that a CLI needs to support scriptability (perhaps participating many times in a series of pipes). Another is that autocompletion and namespaces don't exist in the same way. You don't always have a nice colorful GUI doing stuff for you. CLIs must document themselves directly to the customer. And finally, the audience of a CLI is vastly different from that of a standard API. I appreciate any insight you may have.
I like the subcommand pattern, which I'm most familiar with as it's implemented in the command-line Subversion client.
svn [subcommand] [options] [files]
Without the subcommands, subversion would have waaaaay too many different options for me to remember them effectively, and the help system would be a pain to slog through.
But, if I don't remember how any particular subcommand works, I can just type:
svn help [subcommand]
...and it shows me only the relevant portions of the help documentation.
As noted above, this format:
[master verb] [subverb] [optionally, noun] [options]
is good in terms of remembering what commands are available. cvs, svn, Perforce, and git all adhere to this. It improves discoverability of commands, a major CLI problem. One wrinkle that occurs here is options for the master verb vs. options for the subverb. I.e.,
cvs -d dir command bar
is different than
cvs command -d dir bar
This was a confusing situation in cvs, which svn "fixed" by allowing options specified in any order. Your own solution may vary; if you have a very good reason to pass options to the master verb, okay, just be aware of the overhead.
Looking to API usability is a good idea too, but beware that there is no real typing in CLI commands, and there is a lot of richness in what CLI commands 'return', since you've got both a return code and output to work with. In the unixy/streams world, the output is usually much more important than the return code. Getting the format of your output right is crucial. Also, while tempting, I've found that sending different things to stdout vs. stderr is not always useful; it confuses novice and even intermediate users (because both get dumped to the console in most cases), and is rarely useful to advanced users. So unless there's a real need for it, I avoid it; it's too easy for (e.g.) someone to get very confused about why the output of a command was '' in an error condition just because the programmer nicely dumped the errors to stderr.
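A small illustration of why the return code and stdout matter so much to scripted callers (mytool here is a made-up command, not a real one):

# exit status drives control flow; stdout carries the machine-readable result
if output=$(mytool --list 2>&1); then
    printf '%s\n' "$output" | sort
else
    echo "mytool failed: $output" >&2
    exit 1
fi

If mytool scattered its real output and its errors unpredictably across stdout and stderr, a wrapper like this would get much harder to write.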
Another issue in design is the "what next" problem. In a GUI, the next steps for the user are spelled out by the available buttons, menus, etc. In a CLI, the user can literally type any command next, and pipe any command to any other. (Or try, at least.) I design my commands to give hints (either in the help or the output) as to what potential next steps might be in a typical workflow.
Another good pattern is allowing user customization of the output. While it is possible for users to use cut, sort, etc. to tailor the output, being able to specify a format string magnifies the utility of a command. The example I cite here is top, which lets you tell it which columns you want.
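Standard tools show the same idea. ps, for instance, lets the caller pick the columns, and the same kind of switch (a --format or -o option, whatever fits your tool) pays off quickly in a custom CLI:

ps -o pid,comm,%mem     # only the columns the caller asked for

Anything scripted downstream can then consume exactly the fields it needs instead of parsing a fixed layout.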