Raising the failure level of a Coq tactic

When implementing a complex tactic in Ltac, there are some Ltac commands or tactic invocations that I expect to fail, and where that failure is intended (e.g. to terminate a repeat, or to trigger backtracking). These failures are usually raised at failure level 0.
Failures raised at a higher level “escape” the surrounding try or repeat block, and are useful to signal unexpected failures.
What I am missing is a way to run a tactic tac and re-raise any failure it produces, even one at level 0, at a higher level, while retaining the message of the failure. This would let me make sure that repeat does not terminate due to an Ltac programming error on my side.
Can I implement such a failure-raising-level higher-order tactic in Ltac?

You can write a tactic to achieve that in OCaml. I put that on GitHub here.
For example, the following should raise an error instead of silently succeeding:
repeat (match goal with
        | [ |- _ ] =>
          raise_error_level (assert (3 = 3) by idtac)
        end).

I do not know if it is possible to get exactly what you want, but I sometimes use the following idiom:
tactic_expression_that_may_fail_with_level_0
|| fail 1000 "There was some problem here"
If the first tactic fails with level 0, the || will try to run the second one, which will fail with a very high level and report it to you.
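If you need this in several places, you can package the idiom as a small higher-order tactic. A minimal sketch (the tactic name and the message are mine; note that the original failure message is lost, which is exactly the limitation the question points out):

Ltac unexpected tac :=
  tac || fail 1000 "unexpected level-0 failure".

(* The inner "by idtac" fails at level 0; the wrapper re-raises it at a
   high level, so the surrounding repeat reports an error instead of
   terminating silently. *)
repeat (unexpected ltac:(assert (3 = 3) by idtac)).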
It would help if you could provide a concrete use case to see if some other technique would be better suited.

Related

Exception handling in Apache Spark

I have been researching the proper way of handling exceptions in Apache Spark jobs. I have read through different questions on Stack Overflow but I still haven't reached a conclusion. From my point of view there are three ways of handling exceptions:
1. A try/catch block surrounding the lambda function that is going to perform the computation. This is tricky because the block has to be placed around the code that triggers the lazy computation. If an error happens then I assume there won't be any RDD to work with (taken from this blog entry):
import org.apache.spark.rdd.RDD

val lines: RDD[String] = sc.textFile("large_file.txt") // sc: an existing SparkContext
val tokens =
  lines.flatMap(_ split " ")
       .map(s => s(10)) // throws if a token is shorter than 11 characters
try {
  // This try/catch block catches all the exceptions thrown by the
  // preceding transformations, which only run when the action below does.
  tokens.saveAsTextFile("/some/output/file.txt")
} catch {
  case e: StringIndexOutOfBoundsException =>
    // Do something in response to the exception
}
2. A try/catch block inside the lambda function: this implies deciding on the correct output for a caught exception inside the lambda function.

import scala.util.{Try, Success, Failure}

rdd.map { record =>
  Try(fn(record)) match {
    case Success(result) => Right(result) // the normal output
    case Failure(error)  => Left(error)   // a record carrying its error flag
  }
}.filter(_.isRight)
3. Let the exception propagate. The task will fail and the Spark framework will relaunch it. This works when the error is caused by something outside the code's control, e.g. a memory leak, or a connection to another service that is lost momentarily.
What's the correct way of handling exceptions?

I guess it depends on what you want to achieve with the RDD operation. If an error in one of the RDD records means that the output is not valid, then option 1 is the way to go. If we expect some of the records to fail, we go for option 2. Option 3 does not even require a choice, as it is the normal behaviour of the platform.
In the past we did not bother with the try/catch approach except for input parameter checking.
For the rest we just relied on checking the return code as in:
spark-submit --master yarn ... bla bla
ret_val=$?
...
Why? Because in general you need to correct something and then start over again, and it's hard to correct certain things dynamically. Your scheduling tool (Rundeck, Airflow, et al.) can pick up the return code as well.
More advanced restart options are possible, as you allude to in option 2, but they quickly get convoluted. It could be done; I have just never seen it done.

For this function, what does "return Left" mean in the case of IO?

In Database.MongoDB.Query, there is this function:
access :: MonadIO m => Pipe -> AccessMode -> Database -> Action m a -> m a
The documentation says this about the function:
Run action against database on server at other end of pipe. Use access mode for any reads and writes. Return Left on connection failure or read/write failure.
What does "return Left" mean here? I ask because m can be any monad (with a MonadIO instance). For instance, what does "return Left" mean if m is just the IO monad?
Must m be the Either monad for me to be able to detect connection or read/write failure when using the access method?
Left here refers to the Either type; the "Return Left" wording comes from an older version of the library. Currently, if any error happens, it just throws IO exceptions. We'll need to fix the documentation.
I filed a bug for it. https://github.com/mongodb-haskell/mongodb/issues/67
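In the meantime, if you want failures back as a value rather than as an exception, you can catch them at the call site. A minimal sketch, assuming the current throwing behaviour (the safeAccess name is mine, not part of the library):

import Control.Exception (SomeException, try)
import Database.MongoDB (AccessMode, Action, Database, Pipe, access)

-- Wrap 'access' so that connection or read/write failures come back
-- as Left instead of escaping as IO exceptions.
safeAccess :: Pipe -> AccessMode -> Database -> Action IO a
           -> IO (Either SomeException a)
safeAccess pipe mode db act = try (access pipe mode db act)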

info_auto tactic does not print traces anymore in Coq 8.5?

I used to use info_auto to display the steps actually performed under the hood by an auto tactic. However, this no longer seems to work with Coq 8.5 (beta3).
The following example used to work for Coq 8.4:
Example auto_example_5: 2 = 2.
Proof.
  info_auto.
Qed.
and give me the necessary steps, such as apply eq_refl.
With Coq 8.5, I get a warning:
The "info_auto" tactic does not print traces anymore. Use "Info 1 auto", instead.
(* info auto : *)
Using Info 1 auto. as hinted, I got:
<unknown>
in the message view. In other occasions, I sometimes get things like
<unknown>; refine H
But neither is helpful/informational because I can't apply these to finish the proof manually.
What is the proper way to replicate the old info_auto function in Coq 8.5?
This issue seems to have been fixed in Coq 8.6.
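For reference, the replacement invocation looks like this (a minimal sketch; the exact trace printed depends on the version and the hint databases in use):

Example auto_example_5: 2 = 2.
Proof.
  Info 1 auto. (* in Coq 8.6 this prints the steps auto performed,
                  rather than <unknown> *)
Qed.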

Throwing exceptions in Scala, what is the "official rule"

I'm following the Scala course on Coursera.
I've started to read the Scala book of Odersky as well.
What I often hear is that it's not a good idea to throw exceptions in functional languages, because it breaks the control flow and we usually return an Either with the Failure or Success.
It seems also that Scala 2.10 will provide the Try which goes in that direction.
But in the book and the course, Martin Odersky doesn't seem to say (at least for now) that exceptions are bad, and he uses them a lot.
I also noticed the methods assert / require...
Finally I'm a bit confused because I'd like to follow the best practices but they are not clear and the language seems to go in both directions...
Can someone explain to me what I should use in which case?
The basic guideline is to use exceptions for something really exceptional**. For an "ordinary" failure, it's far better to use Option or Either. If you are interfacing with Java where exceptions are thrown when someone sneezes the wrong way, you can use Try to keep yourself safe.
Let's take some examples.
Suppose you have a method that fetches something from a map. What could go wrong? Well, something dramatic and dangerous like a segfault* (rather, a stack overflow), or something expected like the element not being found. You'd let the stack overflow throw an exception, but if you merely don't find an element, why not return an Option[V] instead of the value (or an exception, or null)?
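For instance (a quick sketch; the map contents are made up):

val capitals = Map("France" -> "Paris", "Japan" -> "Tokyo")

capitals.get("France")   // Some(Paris): the lookup worked
capitals.get("Atlantis") // None: absent, but no exception and no null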
Now suppose you're writing a program where the user is supposed to enter a filename. Now, if you're not just going to instantly bail on the program when something goes wrong, an Either is the way to go:
import java.io.File

def main(args: Array[String]) {
  val f = {
    if (args.length < 1) Left("No filename given")
    else {
      val file = new File(args(0))
      if (!file.exists) Left("File does not exist: " + args(0))
      else Right(file)
    }
  }
  // ...
}
Now suppose you want to parse a string of space-delimited numbers.
import scala.util.{Try, Success}

val numbers = "1 2 3 fish 5 6" // Uh-oh
// numbers.split(" ").map(_.toInt) <- would throw an exception!
val tried = numbers.split(" ").map(s => Try(s.toInt)) // Caught it!
val good = tried.collect{ case Success(n) => n }
So you have three ways (at least) to deal with different types of failure: Option for it worked / didn't, in cases where not working is expected behavior, not a shocking and alarming failure; Either for when things can work or not (or, really, any case where you have two mutually exclusive options) and you want to save some information about what went wrong; and Try when you don't want the whole headache of exception handling yourself, but still need to interface with code that is exception-happy.
Incidentally, exceptions make for good examples, so you'll find them more often in textbooks and learning material than elsewhere, I think: textbook examples are very often incomplete, which means that serious problems that would normally be prevented by careful design ought instead to be flagged by throwing an exception.
*Edit: Segfaults crash the JVM and should never happen regardless of the bytecode; even an exception won't help you then. I meant stack overflow.
**Edit: Exceptions (without a stack trace) are also used for control flow in Scala--they're actually quite an efficient mechanism, and they enable things like library-defined break statements and a return that returns from your method even though the control has actually passed into one or more closures. Mostly, you shouldn't worry about this yourself, except to realize that catching all Throwables is not such a super idea since you might catch one of these control flow exceptions by mistake.
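For example, the library-defined break mentioned above works exactly this way (a minimal sketch):

import scala.util.control.Breaks._

// 'break()' throws a stackless control-flow exception that the
// enclosing 'breakable' block catches.
breakable {
  for (i <- 1 to 10) {
    if (i > 3) break()
    println(i) // prints 1, 2, 3
  }
}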
So this is one of those places where Scala specifically trades off functional purity for ease of transition from, and interoperability with, legacy languages and environments, specifically Java. Functional purity is broken by exceptions, as they break referential transparency and make it impossible to reason equationally. (Of course, non-terminating recursion does the same, but few languages are willing to enforce the restrictions that would make it impossible.)

To keep functional purity, you use Option/Maybe/Either/Try/Validation, all of which encode success or failure as a referentially transparent type, and use the various higher-order functions they provide, or the underlying language's special monad syntax, to make things clearer. Or, in Scala, you can simply decide to ditch functional purity, knowing that it might make things easier in the short term but more difficult in the long run. This is similar to using "null" in Scala, or mutable collections, or local "var"s. Mildly shameful, and don't do too much of it, but everyone's under deadline.

Problem when trying to define an operator in Prolog

I have defined a Prolog file with the following code:
divisible(X, Y) :-
    X mod Y =:= 0.

divisibleBy(X, Y) :-
    divisible(X, Y).

op(35, xfx, divisibleBy).
Prolog is complaining that
'$record_clause'/2: No permission to modify static_procedure `op/3'
What am I doing wrong? I want to define a divisibleBy operator that will allow me to write code like the following:
4 divisibleBy 2
Thanks.
Use
:- op(35,xfx,divisibleBy).
:- tells the Prolog interpreter to evaluate the next term while loading the file, i.e. make a predicate call, instead of treating it as a definition (in this case a redefinition of op/3).
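Putting it all together, the corrected file would look something like this (a sketch; the directive is conventionally placed at the top, so the operator is already known before any code that uses it is read):

:- op(35, xfx, divisibleBy).

divisible(X, Y) :-
    X mod Y =:= 0.

divisibleBy(X, Y) :-
    divisible(X, Y).

After consulting the file, a query such as ?- 4 divisibleBy 2. should succeed.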
The answer given by @larsmans is spot-on regarding your original problem.
However, you should reconsider if you should define a new operator.
In general, I would strongly advise against defining new operators for the following reasons:
The gain in readability is often overrated.
It may easily introduce new problems in places you wouldn't normally expect to be buggy.
It doesn't "scale" well: a small number of operators can make code on presentation slides super-concise, but what if you add more discriminated-union cases over time? More operators?