What should be the output of cli --dry-run?

Is there any expectation on the output of a command-line application's --dry-run option? Is a free-form human-readable explanation ok, or should it be parseable? Should it be what would be printed during a real execution? What if the real execution is quiet?
Is there any standard for this?

No, this is not codified anywhere. For a program that normally doesn't print anything, perhaps enable debugging output so the user can see what it would be doing.
In this day and age, an option to produce machine-readable output (JSON or whatever) would always be nice. Mankind has spent way too much time reverse-engineering unspecified ad-hoc formats and writing parsers for them.
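To illustrate, here is a minimal sketch of a --dry-run that emits machine-readable JSON, assuming a hypothetical cleanup tool whose planned actions are known up front (the tool name, the actions, and the JSON shape are all invented):

    import argparse
    import json

    parser = argparse.ArgumentParser(prog="cleanup")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()

    # The plan is computed the same way in both modes; only execution differs.
    planned = [
        {"action": "delete", "path": "/tmp/old.log"},
        {"action": "move", "src": "/tmp/a", "dst": "/var/a"},
    ]

    if args.dry_run:
        print(json.dumps(planned, indent=2))  # describe, don't do
    else:
        for step in planned:
            pass  # actually perform each step here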

Related

Alternative for println in Scala

I am required to print huge amounts of data (on the order of a few hundred MB) on the console. Using println for this is failing miserably in IntelliJ.
Is there any alternative like console.log which can handle and display this data without lagging and slowing down?
Thanks in advance!
You can buffer, and perhaps bypass character encoding. It's also worth looking at the IntelliJ settings, particularly if you don't see this problem when running from the command line; perhaps IntelliJ is offering some functionality in its console (e.g. highlighting errors, linking stack traces to line numbers) that involves scanning every line. You might also worry about word wrapping in your console.
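As a rough sketch of the buffering idea (the buffer size and the data loop are placeholders):

    import java.io.{BufferedOutputStream, PrintStream}

    object BigPrint extends App {
      // Wrap stdout in a large buffer and flush once at the end,
      // rather than paying for a flush on every println.
      val out = new PrintStream(new BufferedOutputStream(System.out, 1 << 20), false)
      try {
        var i = 0
        while (i < 1000000) { out.println(i); i += 1 }
      } finally out.flush()
    }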
(That said, I'm not sure how you expect to understand anything from 100 MB of printing. If it's something you need an overview of to "see the pattern", try making code see the pattern the same way you do.)

Prolog read input without full stop

I am currently programming a small text-based adventure game in SWI-Prolog. Hence, the user will have to give commands like "goto(room)" or "goto room".
However, the problem is that you always have to finish the command with a full stop, i.e.
"goto(room)." instead of "goto(room)". This is not very user-friendly.
I have a predicate that reads a command and then executes the input. How can I automatically add the full stop if there is none (if there already is one the input should just be executed)?
Thanks in advance!
Regards,
Volker
Obviously you are using read/1 or some variation; this is meant for reading valid Prolog terms (and that's why you need a full stop).
The solution is to read the input yourself and then convert it to a term (see primitive character I/O, the read utilities, and I/O in general; you will probably only need the read utilities).
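In SWI-Prolog that can be as small as the following sketch (the predicate name is mine; term_string/2 does the conversion):

    :- use_module(library(readutil)).

    % Read one raw line and convert it to a term; a trailing full stop,
    % if the user typed one, is stripped first.
    read_command(Term) :-
        read_line_to_string(user_input, Line0),
        (   string_concat(Line, ".", Line0)
        ->  true
        ;   Line = Line0
        ),
        term_string(Term, Line).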
Additionally, you could define a small command language with DCGs and use that instead; for example, the user could just write goto room instead of goto(room).
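A tiny sketch of that idea (the grammar and predicate names are made up):

    % "goto room" -> goto(room); extend command//1 for more verbs.
    command(goto(Room)) --> [goto, Room].

    parse_command(Line, Cmd) :-
        split_string(Line, " ", "", Parts),
        maplist([S, A]>>atom_string(A, S), Parts, Atoms),
        phrase(command(Cmd), Atoms).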
On the other hand, I personally don't think that skipping the full stop will make things much more user-friendly if users have to type Prolog terms anyway.

Check currently running processes, iPhone

Can we find the list of processes which are currently running on iOS programmatically?
Somewhat similar to what is shown on the process tab here:
http://www.techet.net/sysstat/
Suggestions are always welcome.
Thanks
See this other answer I literally just posted.
Take a look at the modified Darwin C code I posted:
darwin.c
darwin.h
If you look in there, inside OS_get_table(), you'll find a bunch of commented-out printf statements. If you uncomment and adapt those, storing the data in some kind of usable data structure, you can collect all of this information.
Don't just uncomment all the printf statements and expect that code to work, though. iOS limits the rate at which apps can write to stdout, so you'll get throttled if you emit tons of printfs in a short period of time.
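For reference, the usual Darwin way to build such a process table is a sysctl() query, roughly like this C sketch (error handling trimmed; note that recent iOS releases restrict this for sandboxed apps):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void) {
        int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_ALL, 0 };
        size_t len = 0;

        /* First call: ask how much space the process table needs. */
        if (sysctl(mib, 4, NULL, &len, NULL, 0) < 0) return 1;

        struct kinfo_proc *procs = malloc(len);
        if (procs == NULL) return 1;

        /* Second call: fetch the table itself. */
        if (sysctl(mib, 4, procs, &len, NULL, 0) < 0) { free(procs); return 1; }

        size_t count = len / sizeof(struct kinfo_proc);
        for (size_t i = 0; i < count; i++)
            printf("pid %d: %s\n", procs[i].kp_proc.p_pid,
                   procs[i].kp_proc.p_comm);

        free(procs);
        return 0;
    }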

How can I control an interactive Unix application programmatically through Perl?

I have inherited a 20-year-old interactive command-line Unix application that is no longer supported by its vendor. We need to automate some tasks in this application.
The most troublesome of these is creating thousands of new records with slightly different parameters (e.g. different identifiers, different names). The records have to be created in sequence, one at a time, which would take many months (and therefore dollars) to do manually. In most cases, creating a record follows a very predictable pattern: keying in commands, reading responses, keying in further commands, and so on. However, some record-creation operations result in error conditions ('record with this identifier already exists') that require a different set of commands to exit gracefully.
I can see a few different ways to do this:
Named pipes. Write a Perl script that runs the target application with STDIN and STDOUT attached to named pipes, sends the target application the sequence of commands to create a record with the required parameters, and then instructs the target application to exit and shut down. We then run the script as many times as required with different parameters.
Another tool. Find another Unix tool that can be used to script interactive programs. The only ones I have been able to find are expect, which does not seem to be maintained, and chat, which I recall from ages ago and which seems to do more or less what I want, but appears to be only for controlling modems.
One more potential complication: I think the target application was written for a VT100 terminal and it uses some sort of escape sequences to do things like provide highlighting.
My question is: which approach should I take? One of these, or something completely different? I quite like the idea of using named pipes and having a Perl script that opens the FIFOs and reads and writes as required, as it provides a lot of flexibility, but from what I have read there seem to be a lot of potential problems down this path.
Thanks in advance.
I'd definitely stick with Perl for the extra flexibility, as chaos suggested. Are you aware of the Expect Perl module? It's a lot nicer than the named-pipe approach.
Note also that with named pipes, you can't force the output coming back from your legacy application to be unbuffered, which could be annoying. I think Expect.pm uses pseudo-ttys to get around this problem, but I'm not sure. See the discussion in perlipc in the section "Bidirectional Communication with Another Process" for more details.
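To give a flavour of Expect.pm (the program name, prompts, and commands here are all hypothetical):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Expect;

    my $exp = Expect->spawn('legacyapp') or die "Cannot spawn: $!\n";

    # Wait for the application's prompt, then key in a command.
    $exp->expect(10, '-re', 'Command:\s*') or die "No prompt\n";
    $exp->send("create record 12345\n");

    # Branch on the response, including the error case.
    $exp->expect(10,
        [ qr/already exists/ => sub { my $e = shift; $e->send("cancel\n"); } ],
        [ qr/record created/ => sub { my $e = shift; $e->send("quit\n");   } ],
    );
    $exp->soft_close();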
expect is a lot more solid than you're probably giving it credit for, but if I were you I'd still go with the Perl option: you get a full and familiar programming language for managing the process, and confidence that whatever weird issues arise, there will be ways of addressing them.
Expect, either with the Tcl or Perl implementations, would be my first attempt. If you are seeing odd sequences in the output because it's doing odd terminal things, just filter those from the output before you do your matching.
With named pipes, you're going to end up reinventing Expect anyway.

CLI Patterns/Antipatterns for usability

What patterns contribute to or detract from the usability of a CLI?
As an example, consider the CLI for ClearCase. The CLI is very comprehensive (+1), but it has several glaring problems. Recently, I wanted to import files into ClearCase with their names forced to lower case, using clearfsimport. Unfortunately I wound up on the documentation for its cousin, clearimport. It may seem slight, but it cost me more hours than I care to admit. The variation in the middle got me.
Why provide such nearly identical functionality with such nearly identical names? There are many better options, in my opinion:
clearimport -fs
fsclearimport
clear_fs_import
clearimport_fs
Anything would be better than what they went with. The code I am working on IS a CLI, and this experience made me look at my own choices. I think I have all the basics covered (standard help, long form vs. short form, short meaningful names, providing examples, eliminating ambiguity, accurately handling spaces within quotes, etc.).
There is some literature on this subject.
Perhaps a bad CLI is no different from a bad API; a CLI is a type of API in some sense. The goals are naturally common: flexibility, readability, and completeness. Several factors differentiate a CLI from a typical API, though. One is that a CLI needs to support scriptability (it may participate, perhaps many times, in a series of pipes). Another is that autocompletion and namespaces don't exist in the same way; you don't always have a nice colorful GUI doing stuff for you. A CLI must document itself directly to the user. And finally, the audience of a CLI is vastly different from that of a standard API. I appreciate any insight you may have.
I like the subcommand pattern, which I'm most familiar with as it's implemented in the command-line Subversion client.
svn [subcommand] [options] [files]
Without the subcommands, Subversion would have waaaaay too many different options for me to remember effectively, and the help system would be a pain to slog through.
But, if I don't remember how any particular subcommand works, I can just type:
svn help [subcommand]
...and it shows me only the relevant portions of the help documentation.
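The pattern is easy to reproduce in your own tool. Here's a minimal sketch using Python's argparse (the tool and subcommand names are invented):

    import argparse

    # One parser per subcommand keeps each option set small and focused.
    parser = argparse.ArgumentParser(prog="mytool")
    subparsers = parser.add_subparsers(dest="command", required=True)

    checkout = subparsers.add_parser("checkout", help="check out files")
    checkout.add_argument("files", nargs="+")

    commit = subparsers.add_parser("commit", help="commit changes")
    commit.add_argument("-m", "--message", required=True)

    args = parser.parse_args()
    print(args.command)

With this, "mytool -h" lists only the subcommands, and "mytool commit -h" shows only commit's options, which is exactly the show-me-only-the-relevant-portion behaviour described above.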
As noted above, this format:
[master verb] [subverb] [optionally, noun] [options]
is good in terms of remembering what commands are available. cvs, svn, Perforce, and git all adhere to this. It improves the discoverability of commands, a major CLI problem. One wrinkle that occurs here is options for the master verb vs. options for the subverb. I.e.,
cvs -d dir command bar
is different than
cvs command -d dir bar
This was a confusing situation in cvs, which svn "fixed" by allowing options specified in any order. Your own solution may vary; if you have a very good reason to pass options to the master verb, okay, just be aware of the overhead.
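If you do keep both levels, make each level own its options explicitly; a sketch, again with Python's argparse (names invented):

    import argparse

    parser = argparse.ArgumentParser(prog="mytool")
    parser.add_argument("-d", dest="global_dir")   # master-verb option
    sub = parser.add_subparsers(dest="command", required=True)

    update = sub.add_parser("update")
    update.add_argument("-d", dest="update_dir")   # subverb option

    # "-d" means something different on each side of the subcommand:
    args = parser.parse_args(["-d", "/repo", "update", "-d", "branches"])
    print(args)  # Namespace(global_dir='/repo', command='update', update_dir='branches')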
Looking to API usability is a good idea too, but beware that there is no real typing in CLI commands, and there is a lot of richness in what CLI commands 'return', since you've got both a return code and an output to work with. In the unixy/streams world, the output is usually much more important than the return code. Getting the format of your output right is crucial.
Also, while tempting, I've found that sending different things to stdout vs. stderr is not always useful; it confuses novice and even intermediate users (because both get dumped to the console in most cases), and is rarely useful to advanced users. So unless there's a real need for it I avoid it; it's too easy for someone to get very confused about why the output of a command was empty in an error condition, just because the programmer nicely dumped the errors to stderr.
Another issue in design is the "what next" problem. In a GUI, the next steps for the user are spelled out by the available buttons, menus, etc. In a CLI, the user can literally type any command next, and pipe any command to any other. (Or try, at least.) I design my commands to give hints (either in the help or the output) as to what potential next steps might be in a typical workflow.
Another good pattern is allowing user customization of the output. While it is possible for users to use cut, sort, etc. to tailor the output, being able to specify a format string magnifies the utility of a command. The example I cite here is top, which lets you tell it which columns you want.
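A sketch of what that can look like (the field names and sample data are invented):

    import argparse

    parser = argparse.ArgumentParser(prog="mytool")
    parser.add_argument("--format", default="{name}\t{size}",
                        help="output template, e.g. '{name} {size} {mtime}'")
    args = parser.parse_args()

    records = [{"name": "a.txt", "size": 120, "mtime": "2009-06-01"}]
    for rec in records:
        # str.format ignores unused keyword arguments, so users pick
        # just the columns they want in whatever order they want.
        print(args.format.format(**rec))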