How can I verify that a *.thy file is a valid Isabelle proof from the command line? Doing it in the GUI amounts to seeing that there are no issues/errors/warnings etc, I guess. But is there a way to do it from the command line?
You just have to write a small ROOT file and then invoke isabelle build.
For example, if you want to check that the theories Foo.thy and Bar.thy compile, create a file named ROOT with the following content:
session Test = HOL +
  theories
    Foo
    Bar
Then the compilation can be done via
isabelle build -d . Test
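If you rerun the check repeatedly, two further options may be useful (assuming a reasonably recent Isabelle): -v prints theories as they are processed, and -c cleans the session first, so you can be sure everything was actually rechecked:
isabelle build -d . -v -c Test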
See the Isabelle system manual (Chapter 2) for more details.
If you want to avoid having to create a ROOT file, you might be able to do something like:
isabelle process -T Foo
But isabelle build is certainly the “more official” way.
Something similar (with a hackish dance to set the secure mode for some theories) is what the Praktomat does to submitted Isabelle theories.
I am developing a Qt application in Python. It uses a resource file, which needs to be compiled. I am using autotools to manage compilation and installation of my projects.
Now, in order for the resource file to be usable by the application, it needs to be compiled with a certain version of the compilation program (pyrcc). I can get the version by putting the output of pyrcc -version in a variable in configure.ac. But then, I don't know how to check whether the string pyrcc5 is present in the output. If it is not present, I want to tell the user that their pyrcc program has the wrong version, and abort configure.
Additionally, I would like to avoid the need for an extra variable for the program output, but instead do it like this (pseudocode):
if "pyrcc5" not in output of "pyrcc -version":
say "pyrcc has wrong version"
exit 1
How can I do this?
When writing a configure.ac for Autoconf, always remember that you are basically writing a shell script. Autoconf provides a host of macros that afford you a lot of leverage, but you can usually at least get an idea about basic "How can I do X in Autoconf?" questions by asking instead "How would I do X in a portable shell script?"
In particular, for ...
I would like to avoid the need for an extra variable for the program output, but instead do it like this (pseudocode):
if "pyrcc5" not in output of "pyrcc -version":
    say "pyrcc has wrong version"
    exit 1
... the usual tool for a portable shell script to use for such a task is grep, and, happily, the easiest way to apply it to the task does not require an intermediate variable. For example, this implements exactly your pseudocode (without emitting any extraneous messaging to the console):
if ! pyrcc -version | grep pyrcc5 >/dev/null 2>/dev/null; then
    echo "pyrcc has wrong version"
    exit 1
fi
That pipes the output of pyrcc -version into grep, and relies on the fact that grep exits with a success status if and only if it finds any matches.
You could, in fact, put exactly that in your configure.ac, but it would be more idiomatic to
Use the usual Autoconf mechanisms to locate pyrcc and grep, and to use the versions discovered that way;
Use the Autoconf AS_IF macro to write the if construct, instead of writing it literally;
Use standard Autoconf mechanisms for emitting a "checking..." message and reporting on its result; and
Use the standard Autoconf mechanism for outputting a failure message and terminating.
Of course, all of that makes the above considerably more complex, but also more flexible and portable. It might look like this:
AC_ARG_VAR([PYRCC], [The name or full path of pyrcc. Version 5 is required.])
# ...
AC_PROG_GREP
AC_CHECK_PROGS([PYRCC], [pyrcc5 pyrcc], [])
AS_IF([test "x${PYRCC}" = x],
    [AC_MSG_ERROR([Required program pyrcc was not found])])
# ...
AC_MSG_CHECKING([whether ${PYRCC} has an appropriate version])
AS_IF([! ${PYRCC} -version | ${GREP} pyrcc5 >/dev/null 2>/dev/null], [
    AC_MSG_RESULT([no])
    AC_MSG_ERROR([pyrcc version 5 is required, but ${PYRCC} is a different version])
], [
    AC_MSG_RESULT([yes])
])
In addition to portability and conventional Autoconf progress messaging, that also gets the builder a way to specify a particular pyrcc executable to configure (by setting variable PYRCC in its environment), documents that in configure's help text, and exports PYRCC as a make variable.
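For example, a builder whose correct pyrcc is installed in a non-default location (the path below is purely illustrative) could run:
./configure PYRCC=/opt/qt5/bin/pyrcc5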
Oh, and I snuck in a check for pyrcc under the name pyrcc5, too, though I don't know whether that's useful in practice.
The final result no longer looks much like the shell script fragment I offered first, I grant. But again, the pure shell script fragment could be used as is, and also, the fully Autoconfiscated version is derived directly from the pure script.
I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears, so the ideal scheme would be to add the -d option to the shebang line of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edit of the .t program that:
Uses the module directly (with a plain use statement), not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
Makes a few other edits that cause gluteal pains
The problem with doing that is that any changes I make in the edited test program still need to be transferred back to the true test program used in "make test". Error prone at best.
I am already using "make test TEST_VERBOSE=1" so that my test output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t file.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you run your test directly?
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.
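If you go that route, a minimal sketch might look like this (assuming Enbugger is installed from CPAN; the conjugate test is a hypothetical stand-in for your real test body):
use Test::More;
use Math::Complex;

require Enbugger;   # load the debugger machinery lazily
Enbugger->stop;     # execution halts here, in the interactive perl debugger

my $z = cmplx(1, 2);
is(~$z, cmplx(1, -2), 'conjugate works');
done_testing();
Run it with perl -Mblib t/yourtestfile.t and you get dropped into the debugger at the Enbugger->stop line, with Test::More fully loaded.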
I have a test suite, as usual for Perl projects, containing a lib and a t directory. The tests in t are structured through subdirectories. So I run them using:
prove -Ilib -r t/
So far nothing special, and AFAIK quite a standard way of testing in Perl.
Since the assumption is that this is the standard way of testing, I'd like to make sure that the following holds:
"If you run prove -r on t, you have tested everything that is there to test".
This is very important, since otherwise you can never be sure that you really ran all the tests and everything is fine. Somebody calling the above might then, without knowing it, run only a part of the available tests, leaving some behind. Quite annoying... tests that are not run are of no help. It should be as easy and predictable as possible for developers to run all the tests! It is a bad thing when you have to look up how to run the rest of the test suite. You might not know about it, or might not do it anyway.
So here comes my problem: I have to integrate some tests using pgTAP, which kindly provides the tool pg_prove. Now I need two commands to do the testing. In addition to running prove -Ilib -r, I also have to run something like pg_prove -S schema=customerX -U dbuser -d dbname t/pgTAP/*.sql. The problem is not that big if you call the tests automatically from cron or whatever, but it really decreases the chance that we lazy developers run all the tests during our busy days.
So I wonder what the best approach would be to implement the tests in such a way that prove will also include them. Do I have to create some .t files which wrap the whole thing (and if so, how)? Are there any tricks I can play with the Harness stuff on CPAN? Would a simple test_all.sh in the root dir, containing both commands, do the job best, even if it breaks the assumption I made above?
So my question in short is: Can I run all tests, including pgTAP with prove? If not, is there a best practice for solving my problem?
Thanks a lot.
Yes. In fact, pg_prove just passes everything off to prove. Assuming your pgTAP tests end in .sql, you can run all your tests like this:
prove -lr --ext .sql --ext .t \
    --source pgTAP \
    --pgtap-option dbname=dbname \
    --pgtap-option username=dbuser \
    --pgtap-option suffix=.sql \
    --pgtap-option set=schema=customerX
If you use Module::Build, you can have ./Build test run all the tests as well, as I've done for circle.
See the TAP::Parser::SourceHandler::pgTAP documentation for details.
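For illustration, a Build.PL along those lines might look like this (a sketch only: the module name and connection settings are placeholders, and it assumes a Module::Build recent enough to support use_tap_harness, tap_harness_args, and test_file_exts):
use strict;
use warnings;
use Module::Build;

Module::Build->new(
    module_name      => 'My::App',          # placeholder
    test_file_exts   => [qw(.t .sql)],      # collect both Perl and pgTAP tests
    use_tap_harness  => 1,
    tap_harness_args => {
        sources => {
            Perl  => undef,                 # .t files run as usual
            pgTAP => {
                dbname   => 'dbname',
                username => 'dbuser',
                suffix   => '.sql',
                # the schema 'set' option is supported too; see the
                # source-handler docs for its exact spelling
            },
        },
    },
)->create_build_script;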
I'm using TextMate 1.5.10 (Mac OS X 10.7.2) to write a Perl modulino application. To verify the functionality, I'm using test scripts designed to be run with the prove command-line tool.
An example of the directory structure I'm using looks like this:
text_mate_test/MyModule.pm
text_mate_test/t/001_load_test.t
The 001_load_test.t file looks like this:
#!/usr/bin/perl
use Modern::Perl;
use Test::More;
use MyModule;
my $testObj = new_ok("MyModule", undef, "Initial load test.");
done_testing();
When I run prove or prove -v in the "text_mate_test" directory, everything passes as expected.
I'd like to be able to set up a hotkey in TextMate that allows me to run the test file without having to jump over to the terminal. Currently, if I run "001_load_test.t" directly from inside TextMate with Cmd+R, it chokes, saying "Can't locate MyModule.pm in @INC". That's expected, since the test script isn't designed to be run directly. (I'm still pretty new to writing test files, but I believe that's the proper way to set them up.)
Running on the assumption that I don't want to change the test file itself, is there a way to set up a hotkey so I can run the file correctly from inside TextMate?
I've figured out an even better way to do this.
In the TextMate Bundle Editor (Menubar -> Bundles -> Bundle Editor -> Show Bundle Editor), I've updated the default "Perl -> Run Script" bundle to this:
#!/usr/bin/env ruby
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/executor"
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/save_current_document"
TextMate.save_current_document
TextMate::Executor.make_project_master_current_document
### If it's a ".t" test script in a "t" directory, run prove
if ( ENV["TM_FILEPATH"] =~ /^.*\/(t\/[^\/]+)$/ )
### Grab the relative file path for more legible output
relative_file_path = $1
### Jump up one directory so prove will work
Dir.chdir("../");
### Call prove with args to run only the file you are working on.
TextMate::Executor.run("prove", :script_args => ["-v", relative_file_path]);
### Otherwise, run with perl
else
TextMate::Executor.run(ENV["TM_PERL"] || "perl", "-I#{ENV["TM_BUNDLE_SUPPORT"]}",
"-Mexception_handler", ENV["TM_FILEPATH"],
:version_args => ["-e", 'printf "Perl v%vd", $^V;'])
end
The benefit of this is that you can use the same hot key (Cmd+r by default) to run your normal scripts with perl and your test scripts with prove.
This is what I was looking for.
UPDATED: When I first developed this, I only had one test script in the "t" directory. I didn't notice until I added other test scripts that the code in the original version of this answer would run prove across all the scripts, not just the one being worked on. To get back to the expected behavior, I've updated the bundle code so that prove runs only on the active script.
I've come up with a solution. Create a new Perl bundle called "Run Script with prove" and associate it with Shift-Cmd-R. The code for the bundle is:
#!/usr/bin/env ruby
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/executor"
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/save_current_document"
TextMate.save_current_document
TextMate::Executor.make_project_master_current_document
### If it's a ".t" test script in a "t" directory, run prove
if ( ENV["TM_FILEPATH"] =~ /^.*\/(t\/[^\/]+)$/ )
### Use the relative file path for more legible output
relative_file_path = $1
### Jump up one directory so prove will work
Dir.chdir("../");
### Call prove with args to run only the file you are working on.
TextMate::Executor.run("prove", :script_args => ["-v", relative_file_path]);
else
error_string = "This script's filepath doesn't end with /t/.*\.t\n"
error_string += "That is required for the 'Perl -> Run Script with prove' bundle to work.\n"
TextMate::Executor.run("echo", :script_args => [error_string]);
end
Note: This is the result of a bunch of trial-and-error hacking. I don't know if it's the "right" way to do it, but it works for me. Everything but the last two lines is a copy from the original "Run Script" bundle that comes with TextMate. Based on that, it seems like this should be pretty safe.
UPDATE: When I first built this I only had one test file in the "t" directory. When I added more, I discovered that the original version of the bundle was running all the test files. This code represents an update to the expected behavior of only running the test script you're working on. Because of the way I ended up doing that, it also became necessary to add a fallback: if you try to run a script that doesn't match the standard test file path format, it prints an error message.
It will enable the program to find your module if you add
use lib '..';
to the top of your code (before the use MyModule). This will add the text_mate_test directory to @INC and enable Perl to find the module, though you may come across other problems with running the program directly.
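Note that use lib '..' is resolved relative to the current working directory, not relative to the test file, so it only helps when the script is launched from inside t/. A slightly more robust sketch uses the core FindBin module to anchor on the script's own location:
use FindBin;
use lib "$FindBin::Bin/..";   # adds text_mate_test/ (the parent of t/) to @INC
use MyModule;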
I'm trying to set an environment variable in a build script with phing.
This is normally done on the command line like this:
export MY_VAR=value
In Phing I did the following, but it isn't working:
<exec command="export MY_VAR=value" />
I see that this is quite an old question, but I don't think it has been answered in the best way. If you wish to export a shell variable, say because you are running phpunit from Phing and want to do an export before invoking it, try:
<exec command="export MY_VAR=value ; /path/to/phpunit" />
Simply do the export and invoke your command inside the same exec tag, separating the export statement and the shell executable with a semicolon as shown. Your script will be able to access the value using the standard PHP function:
$myVar = getenv('MY_VAR');
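Inside a buildfile, the whole thing might look like this (the target name and phpunit path are illustrative; checkreturn just makes the build fail if the command does):
<target name="test">
    <!-- export and phpunit run in the same shell, so MY_VAR is visible -->
    <exec command="export MY_VAR=value ; /path/to/phpunit tests/" checkreturn="true" />
</target>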
Bold claim: there is no way to set/export a (Unix) shell variable in PHP so that it is visible in the shell that started the PHP script. A child process can never modify the environment of its parent, and the export inside shell_exec() happens in yet another short-lived subshell. So
php myfile.php   (which calls putenv() or shell_exec('export foo=bar'))
echo $foo
will print nothing. As PHP cannot do it, neither can Phing.
Accessing shell environment variables across multiple script runs (if that's what you want) also seems like a questionable design decision; it is pretty stateful. Apart from that, I'd urge you to stick with Phing and take its lesson to heart: Phing encourages stateless thinking to some degree.
I'd never heard of Phing before, but it looks very promising as a build tool. Thanks for posting! I looked through the docs on phing.info and found the following possibilities:
#0 I would like to clarify one point. Are you saying that
prompt$ export MY_VAR=value
prompt$ phing -f build.xml
doesn't set MY_VAR to value so that it is visible inside the running phing process? I'd be surprised, but I would understand if this is not how you want to run your build script.
#1 I think that, in the context of a build tool, a feature like exec is meant to run a stand-alone program. So while the exec may run and set MY_VAR, this all happens in a subprocess that disappears as soon as the exec finishes, and the build continues with the next task in build.xml.
If you're just trying to ensure that your phing script runs with specific values for env_vars, you could try
Command-line arguments:
....
-D<property>=<value>
    Set the property to the specified value, to be used in the buildfile.
So presumably, you can do
phing -DMY_VAR=value -f build.xml
#2 Did you consider using a properties file?
See http://www.phing.info/docs/guide/stable/chapters/appendixes/AppendixF-FileFormats.html
and scroll down for info on build.properties
#3 Also, among the Phing built-in properties there is:
env.* : Environment variables, extracted from $_SERVER
You would access them with something like
${env.MY_VAR}
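For example, if MY_VAR was exported in the shell before phing started, a task could read it back like this (the message text is arbitrary):
<echo message="MY_VAR is: ${env.MY_VAR}" />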
#4 This looks closer to what you really want:
<replacetokens>
    <token key="BC_PATH" value="${top.builddir}/"/>
    <token key="BC_PATH_USER" value="${top.builddir}/testsite/user/${lang}/"/>
</replacetokens>
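In Phing, replacetokens is a filter, so it belongs inside a filterchain of a task such as copy; a minimal sketch (the directory names are made up) would be:
<copy todir="${top.builddir}/web">
    <fileset dir="templates" />
    <filterchain>
        <replacetokens begintoken="@" endtoken="@">
            <token key="BC_PATH" value="${top.builddir}/" />
        </replacetokens>
    </filterchain>
</copy>
With that in place, every @BC_PATH@ token in the copied files is replaced by the property's value.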
I hope this helps.