Turn off XPCE in SWI-Prolog - emacs

I'd like to get the output of apropos/1 and help/1 etc. inside my Emacs buffer, instead of an XPCE window. I'm using SWI-Prolog under Linux.
What I have tried:
Setting flags in the init file (~/.plrc):
:- set_prolog_flag(gui, false).
:- set_prolog_flag(xpce, false).
Calling swipl with the --nopce flag (a wild guess looking at /usr/lib/swi-prolog/xpce.rc)
ssh localhost, effectively emulating a terminal-only machine, which worked, but there must be a better solution...
And yes, I could just uninstall the swi-prolog-x package, but I may want to write GUI programs in the future. Ideally I would like to turn off the GUI only for the documentation / debugging.
EDIT:
I've found out part of the solution: the goals online_help:give_help/1 and online_help:give_apropos seem to be what I need. I just need to re-hook these onto help and apropos, maybe via prolog:help_hook/1. Any ideas?

unset DISPLAY
swipl --nopce
This is an undocumented flag... Normally it is only used when building the system.

Related

Passing macros with coverity exe files while using them through cmd

I am new to Coverity; I am using it from the command prompt with its .exe files. I want to pass specific macros to Coverity's cov-build.exe so that those macros are defined when cov-emit.exe (when it is called by cov-build.exe) parses the .c files. So far I have tried the configuration below.
cov-build.exe Intermediate_folder --delete-stale-tus --preprocessor-first --return-emit-failure "My_bat_file" -- -D My_macro_name=my_macro_body
Any help would be much appreciated; I am stuck on this.
cov-build wraps your existing build command, monitors it and spawns parallel compiler invocations in order to understand your code. These parallel compiler invocations will see the same command line being passed to your own compiler.
So if you want this define to take effect for your compiler as well as Coverity's, simply add it to your build the way you normally would and Coverity will see it.
If you want to add a define that only Coverity's compiler can see, this is best done within the config for your compiler.
You can either edit the config directly (add
<append_arg>-Dmy_macro_name=my_macro_body</append_arg>
after the <begin_command_line_config> line), or re-configure using --xml-option.
For example, if you're using the shortcut gcc config, it would look like this:
$ cov-configure --gcc --xml-option=append_arg>-Dmy_macro_name=my_macro_body
I noticed you're using --preprocess-first on the cov-build command line. I recommend against this, as it destroys XREFs, making it much more difficult to browse defect information, and it prevents the analysis from finding some defects (e.g. ones that are due to macros). --preprocess-next behaves like --preprocess-first but only fires if the initial compilation attempt fails, so if you're using --preprocess-first to work around compilation issues, I strongly recommend using --preprocess-next instead.
If you do have compilation issues, it's always good to report them (along with a reproducer) to Coverity support so that they can be fixed in future releases.

How to cleanly pass command-line parameters when test-running my Python script?

So I'm writing Python with IDLE and my usual workflow is to have two windows (an editor and a console), and run my script with F5 to quickly test it.
My script has some non-optional command line parameters, and I have two conflicting desires:
If someone launches my script without passing any parameters, I'd like him to get an error telling him to do so (argparse does this well)
When I hit F5 in IDLE, I'd like my script to run with dummy parameters (without needing more keystrokes than F5, and without a piece of code that I have to remember to remove when I'm no longer debugging)
So far my solution has been: if I get no parameters, I look for a params.debug file (which is not under source control) and, if it exists, take its contents as the default parameters. But it's a bit ugly... would there be a cleaner, more "standard" solution to this? Do other IDEs offer easier ways of doing this?
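For illustration, a minimal sketch of that fallback (the argument names are placeholders, and params.debug is assumed to hold one shell-style line of dummy arguments):

import argparse
import os
import shlex
import sys

parser = argparse.ArgumentParser()
parser.add_argument('input_file')               # the non-optional parameter
parser.add_argument('-n', type=int, default=1)

args_list = sys.argv[1:]
if not args_list and os.path.exists('params.debug'):
    # no real arguments: fall back to the debug parameters kept out of source control
    with open('params.debug') as f:
        args_list = shlex.split(f.read())

args = parser.parse_args(args_list)             # still errors out if nothing is available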
Other solutions I can think of: environment variables, or having a separate "launcher" script that takes the "official" parameters.
(I'm likely to try out another IDE anyway)
With some editors you can define the 'execute' command. For example, with Geany, F5 for Python files runs python2.7 %f. That could be modified to something like python2.7 %f dummy parameters. But I use an attached terminal window and its line history more than F5-like commands.
I'm an IPython user, so I don't remember much about the IDLE configuration. In IPython I usually use the %run magic, which is more like invoking the script from a shell than from an IDE. IPython also has a better history of previous lines than the shell.
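For example (the script name and arguments here are just placeholders):

%run my_script.py dummy_input.txt --verbose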
For larger scripts I like to put the guts of the code (classes, functions) in one file, and test code in the if __name__ block. The user interface is in another file that imports this core module.
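A rough sketch of that layout, with made-up module and function names:

# core.py -- the guts: classes and functions, no command-line handling
def process(path, count=1):
    return [path] * count

if __name__ == '__main__':
    # quick test code with dummy values, handy when hitting F5 in an editor
    print(process('dummy.txt', 2))

# cli.py -- the user interface: parses the real command-line arguments
import argparse
import core

parser = argparse.ArgumentParser()
parser.add_argument('path')
parser.add_argument('-c', '--count', type=int, default=1)
args = parser.parse_args()
print(core.process(args.path, args.count))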
Thanks for your question. I also searched for a way to do this. I found that Spyder, the IDE I use, has an option under Run/Configure to enter the command-line parameters for running the program. You can even configure different ones for the different editors you have open.
Python tracker issue 5680 is a proposal to add to IDLE a way to set command-line args for F5 Run Module. You are free to test any of the proposed patches if you are able to apply them.
In the meantime, conditionally extending sys.argv as done below should fulfill your requirements.
import sys

if __name__ == '__main__':
    if 'idlelib.PyShell' in sys.modules:  # true when running under IDLE
        sys.argv.extend(('a', '-2'))  # add your arguments here
    print(sys.argv)  # in use, parse sys.argv after extending it
    # ['C:\\Programs\\python34\\tem.py', 'a', '-2']

Is there a way to add timing by default to ipython?

When using IPython, it's often convenient to see how long your commands take to run by using the %time magic function. When you use this often enough, you start to wish that you could just toggle a setting to get this metadata by default whenever you enter a query. Psql lets you do this with \timing. GHCi lets you do this with :set +s. Does IPython let you do this? And if not, why not?
The "ipythonic" way of timing code is using the %timeit or %%timeit magic function (respectively for online, and multi-line code).
These functions provide quite accurate results by running the code multiple times (the exact number is adaptive if not specified).
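For example, in an IPython session or notebook cell (the reported loop counts and timings will vary):

%timeit sum(range(1000))

%%timeit
total = 0                 # the whole cell body is timed as one unit
for i in range(1000):
    total += i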
The global flag you are asking for does not exist in IPython. Furthermore, just adding %%timeit to all the cells will not work, because global variables are not modified when calling %%timeit. This "feature" is directly inherited from the timeit module.
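A small illustration of that caveat (assuming x is not already defined in the session):

%%timeit
x = sum(range(1000))      # the assignment happens inside timeit's own scope

# in a following cell:
x                         # NameError: name 'x' is not defined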
For more info see this IPython issue.

How to discover command line options (if any) for an undocumented executable of unknown origin?

Take an undocumented executable of unknown origin. Trying /?, -h, --help from the command line yields nothing. Is it possible to discover whether the executable supports any command-line options by looking inside it? Possibly by reverse engineering? What would be the best way of doing this?
I'm talking about a Windows executable, but would be interested to hear what different approaches would be needed with another OS.
On Linux, step one would be to run strings your_file, which dumps all the printable character strings in the file. Any constant strings will thus be shown, including any "usage" instructions.
The next step could be to run ltrace on the file. This shows all the library calls the program makes. If they include getopt (or similar), it is a sure sign that the program processes input parameters. In fact, you should be able to see exactly which options it expects, since the option string is the third parameter to the getopt call.
For Windows, you can see this question about decompiling Windows executables. It should be relatively easy to at least discover the options (what they actually do is a different story).
If it's a .NET executable, try using Reflector. This will convert the MSIL code into the equivalent C# code, which may make it easier to understand. Unfortunately, private and local variable names will be lost, as these are not stored in the MSIL, but it should still be possible to follow what's going on.

What is a good method for inventing a command name?

We're struggling to come up with a command name for our all-purpose "developer helper" tool, which we are using on our project. It's like a wrapper for our existing tools such as cmake and hg. The purpose of the command is really just to make our lives easier by combining multiple commands into one (for example, publishing packages). For instance, we have commands like:
do conf
do build
do install
do publish
We've considered a few ambiguous names like do (as above) and run, but obviously do is a bash keyword and run is pretty ambiguous.
We'd like our command to be just 2 characters long, preferably, but perhaps we're asking the impossible? Is there a practical way to check the availability of command names (other than just typing them into your terminal), or is it just a case of picking one and hoping nobody else uses it? Are we worrying about nothing?
Since it's a "developer helper" tool, why not use hm [run|build|port|deploy|test], as in "Help Me ..."?
Give it a verbose name, then let everyone alias it to whatever they want. Make sure you use the verbose name in other scripts so that it removes ambiguity.
This way, each user gets to use whatever makes sense to him/her, and the scripts are more readable and more easily searchable (for example, grepping for "our_cool_tool" will usually yield better results than grepping for "run").
How many 2-character words are useful in this context? I think you need four characters. With that in mind, here are some suggestions.
omni
torq
fluf
mega
spif
crnk
splt
argh
quat
drul
scud
prun
sqat
zoom
sizl
I have more if you need them.
Pick one: http://en.wikipedia.org/wiki/List_of_all_two-letter_combinations
To check the availability of command names, I suggest looking for all two-letter filenames that are in the directories in your path. You can use a script like this
for item in `echo $PATH | sed 's/:/ /g'` ; do
    ls -1d $item/??
done
It won't show builtins in your shell (like "do" as you mentioned) but it's a good start.
Change ?? to ??? for three-letter files, etc.
I'm going to vote for qp (quick package?) since it's easy to pronounce, easy to type, and easy to remember where the keys are on the keyboard.
I use "asd". it's short and most developers type it without thinking
(oh, and you can always claim later that it stands for some "Advanced Script for Developers" if you need to justify yourself a few years from now)
How about fu? As in Kung Fu. It's a special purpose tool. And it's really easy to type.
I think that run is a good name; at least anybody who downloads your project will know what to do. Calling it without parameters should reveal your options.
Even 'do' will do, I think you can use backquotes to run it from bash scripts.
Also remember that running the tools without parameters will tell you what options you have.
Use makefiles to do everything for you.
How about calling it something descriptive, like 'build_runner', and then just aliasing it to 'br' (or preferred acronym) in your .bashrc?
There is a really crappy tool called cleartool (part of ClearCase), and people will alias it on their machines to "ct". Perhaps you can have a longer command and suggest users alias it.
It would probably be best to do something like ire_and_curses suggested: name it descriptively, then alias it to a 2-letter command. If I were choosing, I would name it dev_help and alias it to dh.
I think you're worrying about nothing. Install the program as 'the-command-to-do-everything-and-if-you-dont-make-your-own-alias-for-it-you-should'. I don't think that will be too long for any modern filesystem, but you might need to shorten it to 'tctdeaiydmyoafiys'. See what common aliases are used, and then change the program's name to that. In other words: don't decide, let natural selection decide for you. If you are working with a team of < 10, this should not even remotely cause any problems.
Call it devtool and alias it to dt.
I like to start custom tools like that with the prefix 'jj-'. I can type (with big index-finger power) 'jj ' and see all my personal commands. Also, they group together in alphabetical lists. 'J' is not a very common character for built-in commands, but you can pick your own.
Since you want two characters, you can use just 'zz', or something starting with 'z'.
Are you sure you want to put all your functionality in one command? That might be simultaneously over-constraining and over-loading the interface a little.