double permission inhale leads to unexpected verifications - viper-lang

I have trouble understanding some behaviour of Viper, which, imho, seems to be a bug.
The following code fails to verify due to a lack of permissions for bar.val_int on the assignment. This makes total sense.
field val_int: Int
method foo() {
  var bar: Ref
  inhale acc(bar.val_int)
  exhale acc(bar.val_int)
  bar.val_int := 2
}
Yet, the following code verifies successfully:
field val_int: Int
method foo() {
  var bar: Ref
  inhale acc(bar.val_int)
  inhale acc(bar.val_int)
  exhale acc(bar.val_int)
  exhale acc(bar.val_int)
  bar.val_int := 2
}
As a matter of fact, one can add an arbitrary number of exhales: as long as there are two inhales (each for full permission) at the top, the code will always verify. Is this a bug, is it intentional, or am I missing something weird that is happening?
I would have expected the second inhale to have no effect, as full permission is already held for the heap location. However, even if a double write permission were somehow held (which should be illegal?), then at least I would expect the two exhale statements to revoke these permissions, again leaving no permission at the moment of the assignment.

Write/full permissions to field locations are exclusive, i.e. it is an implicit invariant of Viper's logic that, given a single location x.f, the total permission amount to x.f is at most write/1. See also the Viper tutorial, in particular this subsection about exclusive permissions.
Due to this invariant, the sequence of inhales (inhales are basically "just" permission-aware assumes) from your second snippet
inhale acc(bar.val_int)
inhale acc(bar.val_int)
is thus equivalent to
assume false
Once false has been assumed (which, if you think operationally, means "this and subsequent code is not reachable"), it doesn't matter which Viper statements follow afterwards. In particular, no sequence of exhales will un-assume false.
Thinking in Proof Rules
If you're more used to thinking in deductive proof rules, consider the following proof rules related to contradictions in propositional logic:
A ∧ ¬A                A ∧ ¬A
------    or even:    ------
false                    B
which say that false follows from A ∧ ¬A, or rather that any conclusion B follows from such a contradiction.
Basically, Viper's instance of separation logic includes the following related rule:
acc(x.f, p) 🞻 acc(x.f, q)
------------------------------ if 1 < p + q
false
The rule captures the aforementioned invariant of the logic that permissions to a field x.f may not exceed full/write permission, i.e. 1.
In your case, the two inhales establish the necessary premises, and Viper thus concludes false.
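One way to see this directly in Viper (a small sketch, not part of the original snippets) is to assert false right after the two inhales; the assertion verifies precisely because the verifier is now in an inconsistent, unreachable state:
field val_int: Int

method foo() {
  var bar: Ref
  inhale acc(bar.val_int)
  // a second full permission to the same location contradicts the
  // at-most-write invariant, so the verifier assumes false from here on
  inhale acc(bar.val_int)
  assert false      // verifies: the state is unreachable
  bar.val_int := 2  // verifies for the same reason
}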

Related

PureScript - Inferred Type Causes Compiler Warning

Consider the following simple snippet of PureScript code
module Main where

import Prelude
import Effect (Effect)
import Effect.Console (logShow)

a :: Int
a = 5
b :: Int
b = 7
c = a + b
main :: Effect Unit
main = do
  logShow c
The compiler successfully infers the type of c to be Int, and the program outputs the expected result:
12
However, it also produces this warning:
No type declaration was provided for the top-level declaration of c.
It is good practice to provide type declarations as a form of documentation.
The inferred type of c was:
Int
in value declaration c
I find this confusing, since I would expect the Int type for c to be safely inferred. As the docs often put it, "why derive types when the compiler can do it for you?" This seems like a textbook example of the simplest and most basic type inference.
Is this warning expected? Is there a standard configuration that would suppress it?
Does this warning indicate that every variable should in fact be explicitly typed?
In most cases, and certainly in the simplest cases, the types can be inferred unambiguously, and indeed, in those cases type signatures are not necessary at all. This is why simpler languages, such as F#, OCaml, or Elm, do not require type signatures.
But PureScript (and Haskell) has much more complicated cases too. Constrained types are one. Higher-rank types are another. It's a whole mess. Don't get me wrong, I love me some high-power type system, but the sad truth is, type inference works ambiguously with all of that stuff a lot of the time, and sometimes doesn't work at all.
In practice, even when type inference does work, it turns out that its results may be wildly different from what the developer intuitively expects, leading to very hard-to-debug issues. I mean, type errors in PureScript can be super vexing as it is, but imagine that happening across multiple top-level definitions, across multiple modules, even perhaps across multiple libraries. A nightmare!
So over the years a consensus has formed that overall it's better to have all the top-level definitions explicitly typed, even when it's super obvious. It makes the program much more understandable and puts constraints on the typechecker, providing it with "anchor points" of sorts, so it doesn't go wild.
But since it's not a hard requirement (most of the time), it's just a warning, not an error. You can ignore it if you wish, but do that at your own peril.
Now, another part of your question is whether every variable should be explicitly typed, and the answer is "no".
As a rule, every top-level binding should be explicitly typed (and that's where you get a warning), but local bindings (i.e. let and where) don't have to, unless you need to clarify something that the compiler can't infer.
Moreover, in PureScript (and modern Haskell), local bindings are actually "monomorphised": that's a fancy term basically meaning they can't be generic unless explicitly specified. This solves the problem of all the ambiguous type inference, while still working intuitively most of the time.
You can notice the difference with the following example:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
    s x = show x
On the second line, s a <> s b, you get an error saying "Could not match type b with type a".
This happens because the where-bound function s has been monomorphised (meaning it's not generic), and its type has been inferred to be a -> String based on the s a usage. This means that the s b usage is ill-typed.
This can be fixed by giving s an explicit type signature:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
    s :: forall x. Show x => x -> String
    s x = show x
Now it's explicitly specified as generic, so it can be used with both a and b parameters.

Does multiplying a wildcard in Viper mean anything?

A bug in VerCors generated some Silver that looks like:
field f: Int
method test(n: Int, x: Ref)
  requires n == 100
  requires acc(x.f, wildcard * n)
{}
Viper seems to accept this, but I don't understand what it would mean, if anything.
Intuitively, it means that the callee obtains a tiny (but non-zero) permission amount, times n. In effect, this should be no different from just obtaining the tiny amount (i.e. acc(x.f, wildcard)), although one can maybe engineer contrived situations, probably involving perm, in which it makes a difference.
Bottom line: The example is indeed somewhat confusing and the semantics not totally obvious, but everything is well defined.
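As an unverified sketch of a call site, building on the question's snippet: a caller holding full permission should be able to satisfy the precondition just as with a plain wildcard, since a fresh positive amount is chosen small enough.
// hypothetical caller, appended to the question's declarations
method client(x: Ref)
  requires acc(x.f)  // full permission
{
  // exhales wildcard * 100, i.e. some unspecified positive amount times 100;
  // if wildcard * n indeed behaves like a plain wildcard, the call goes through
  // and the caller keeps a positive remainder of the permission
  test(100, x)
}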

In the Verified Software Toolchain, how can I specify some Clight which is not the body of a function, and which I'm generating from within Coq?

I would like to write a Gallina function of a type like gen_code : Foo -> statement and then prove that it generates code that satisfies a particular specification. I do have an informal Hoare triple in mind; however, I'm not terribly familiar yet with VST, and I'm not sure exactly how to turn it into something that the framework can work with. I think I want a theorem of the form forall {Espec : OracleKind} (foo : Foo), semax ?Delta (P foo) (gen_code foo) (normal_ret_assert (Q foo)), where I know what P and Q are, but I'm not sure what to put for ?Delta. It's worth noting that I have some functions defined in C that I've verified already which I want to invoke from the code generated by gen_code, so presumably ?Delta needs to involve the signatures of those somehow. I assume it probably also needs to mention the free variables of gen_code foo, and here I'm similarly lost.

Is it legal to repeat the same value in a MULTIPLECHARVALUE or MULTIPLESTRINGVALUE field?

Let's assume a FIX field is of type MULTIPLECHARVALUE or MULTIPLESTRINGVALUE, and the enumerated values defined for the field are A, B, C and D. I know that "A C D" is a legal value for this field, but is it legal for a value to be repeated in the field? For example, is "A C C D" legal? If so, what are its semantics?
I can think of three possibilities:
"A C C D" is an invalid value because C is repeated.
"A C C D" is valid and semantically the same as "A C D". In other words, set semantics are intended.
"A C C D" is valid and has multiset/bag semantics.
Unfortunately, I cannot find any clear definition of the intended semantics of MULTIPLECHARVALUE and MULTIPLESTRINGVALUE in FIX specification documents.
The FIX50SP2 spec does not answer your question, so I can only conclude that any of the three interpretations could be considered valid.
Like so many questions with FIX, the true answer depends on the counterparty you are communicating with.
So my answer is:
if you are the client app, ask your counterparty what they want (or check their docs).
if you are the server app, you get to decide. Your docs should tell your clients how to act.
If it helps, the QuickFIX/n engine treats MultipleCharValue/MultipleStringValue fields as strings, and leaves it to the application code to parse out the individual values. Thus, it's easy for a developer to support any of the interpretations, or even different interpretations for different fields. (I suspect the other QuickFIX language implementations are the same.)
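To make that concrete, here is a minimal sketch (in Scala, with a made-up example value; not QuickFIX API code) of how application code could support either interpretation once it has the raw field string:
object MultiValueDemo extends App {
  val raw = "A C C D"                               // raw MultipleCharValue string handed over by the engine
  val values = raw.split(' ').toList                // multiset/bag reading: List(A, C, C, D)
  val distinct = values.toSet                       // set reading: Set(A, C, D)
  val hasDuplicates = values.size != distinct.size  // basis for rejecting repeated values, if that's your policy
  println(s"values=$values distinct=$distinct duplicates=$hasDuplicates")
}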
The definition of a MultipleValueString field is: a string field containing one or more space-delimited values. I haven't got the official spec, but there are a few locations where this definition can be found:
https://www.onixs.biz/fix-dictionary/4.2/index.html#MultipleValueString (I know onixs.biz to be very faithful to the standard specification)
String field (see definition of "String" above) containing one or more space delimited values.
https://aj-sometechnicalitiesoflife.blogspot.com/2010/04/fix-protocol-interview-questions.html
12. What is MultipleValueString data type? [...]
String field containing one or more space delimited values.
This leaves it up to each specific field of this type whether repeated values are allowed or not, though I suspect only a few, if any, would need to allow repeats. As far as I can tell, the FIX specification deliberately leaves this open.
E.g. for ExecInst<18> it would be silly to specify the same instruction multiple times. I would also expect implementations to behave differently from one another (for instance, one ignoring duplicates, another balking with an error/rejection).

Does Scala have a value restriction like ML, if not then why?

Here are my thoughts on the question. Can anyone confirm, deny, or elaborate?
I wrote:
Scala doesn’t unify covariant List[A] with a GLB ⊤ assigned to List[Int], bcz afaics in subtyping “biunification” the direction of assignment matters. Thus None must have type Option[⊥] (i.e. Option[Nothing]), ditto Nil type List[Nothing] which can’t accept assignment from an Option[Int] or List[Int] respectively. So the value restriction problem originates from directionless unification and global biunification was thought to be undecidable until the recent research linked above.
You may wish to view the context of the above comment.
ML’s value restriction will disallow parametric polymorphism in (formerly thought to be rare but maybe more prevalent) cases where it would otherwise be sound (i.e. type safe) to do so such as especially for partial application of curried functions (which is important in functional programming), because the alternative typing solutions create a stratification between functional and imperative programming as well as break encapsulation of modular abstract types. Haskell has an analogous dual monomorphisation restriction. OCaml has a relaxation of the restriction in some cases. I elaborated about some of these details.
EDIT: my original intuition as expressed in the above quote (that the value restriction may be obviated by subtyping) is incorrect. The answers IMO elucidate the issue(s) well and I’m unable to decide which in the set containing Alexey’s, Andreas’, or mine, should be the selected best answer. IMO they’re all worthy.
As I explained before, the need for the value restriction -- or something similar -- arises when you combine parametric polymorphism with mutable references (or certain other effects). That is completely independent of whether the language has type inference, and of whether the language also allows subtyping. A canonical counter example like
let r : ∀A.Ref(List(A)) = ref [] in
r := ["boo"];
head(!r) + 1
is not affected by the ability to elide the type annotation nor by the ability to add a bound to the quantified type.
Consequently, when you add references to F<:, you need to impose a value restriction in order not to lose soundness. Similarly, MLsub cannot get rid of the value restriction. Scala enforces a value restriction through its syntax already, since there is no way to even write the definition of a value that would have a polymorphic type.
It's much simpler than that. In Scala values can't have polymorphic types, only methods can. E.g. if you write
val id = x => x
its type isn't [A] A => A.
And if you take a polymorphic method e.g.
def id[A](x: A): A = x
and try to assign it to a value
val id1 = id
again the compiler will try (and in this case fail) to infer a specific A instead of creating a polymorphic value.
So the issue doesn't arise.
EDIT:
If you try to reproduce the http://mlton.org/ValueRestriction#_alternatives_to_the_value_restriction example in Scala, the problem you run into isn't the lack of let: val corresponds to it perfectly well. But you'd need something like
val f[A]: A => A = {
  var r: Option[A] = None
  { x => ... }
}
which is illegal. If you write def f[A]: A => A = ... it's legal but creates a new r on each call. In ML terms it would be like
val f: unit -> ('a -> 'a) =
  fn () =>
    let
      val r: 'a option ref = ref NONE
    in
      fn x =>
        let
          val y = !r
          val () = r := SOME x
        in
          case y of
            NONE => x
          | SOME y => y
        end
    end
val _ = f () 13
val _ = f () "foo"
which is allowed by the value restriction.
That is, Scala's rules are equivalent to only allowing lambdas as polymorphic values in ML instead of everything value restriction allows.
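For completeness, a sketch (not from the original answer) of the legal def variant mentioned above; it type-checks precisely because every call of f allocates its own r, which is why it is harmless:
def f[A]: A => A = {
  var r: Option[A] = None  // a fresh r is created on every call of f
  x => {
    val y = r
    r = Some(x)
    y.getOrElse(x)         // return the previously stored value, or x on first use
  }
}

val s: String = f[String]("foo")  // fine
val i: Int    = f[Int](13)        // also fine: this call has its own r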
EDIT: this answer was incorrect before. I have completely rewritten the explanation below to gather my new understanding from the comments under the answers by Andreas and Alexey.
The edit history and the archived copies of this page at archive.is provide a record of my prior misunderstanding and the discussion. Another reason I chose to edit rather than delete and write a new answer is to retain the comments on this answer. IMO, this answer is still needed because although Alexey answers the thread title correctly and most succinctly (and Andreas' elaboration was the most helpful for me in gaining understanding), I think the layman reader may require a different, more holistic (yet hopefully still generative essence) explanation in order to quickly gain some depth of understanding of the issue. Also I think the other answers obscure how convoluted a holistic explanation is, and I want naive readers to have the option to taste it. The prior elucidations I've found don't state all the details in plain English and instead (as mathematicians tend to do for efficiency) rely on the reader to discern the details from the nuances of the symbolic programming language examples and prerequisite domain knowledge (e.g. background facts about programming language design).
The value restriction arises where we have mutation of referenced1 type parametrised objects2. The type unsafety that would result without the value restriction is demonstrated in the following MLton code example:
val r: 'a option ref = ref NONE
val r1: string option ref = r
val r2: int option ref = r
val () = r1 := SOME "foo"
val v: int = valOf (!r2)
The NONE value (which is akin to null) contained in the object referenced by r can be assigned to a reference with any concrete type for the type parameter 'a, because r has the polymorphic type 'a option ref. That would allow type unsafety because, as shown in the example above, the same object referenced by r, which has been assigned to both string option ref and int option ref, can be written (i.e. mutated) with a string value via the r1 reference and then read as an int value via the r2 reference. The value restriction generates a compiler error for the above example.
A typing complication arises to prevent3 the (re-)quantification (i.e. binding or determination) of the type parameter (aka type variable) of a said reference (and the object it points to) to a type which differs when reusing an instance of said reference that was previously quantified with a different type.
Such (arguably bewildering and convoluted) cases arise for example where successive function applications (aka calls) reuse the same instance of such a reference. IOW, cases where the type parameters (pertaining to the object) for a reference are (re-)quantified each time the function is applied, yet the same instance of the reference (and the object it points to) being reused for each subsequent application (and quantification) of the function.
Tangentially, the occurrence of these is sometimes non-intuitive due to lack of explicit universal quantifier ∀ (since the implicit rank-1 prenex lexical scope quantification can be dislodged from lexical evaluation order by constructions such as let or coroutines) and the arguably greater irregularity (as compared to Scala) of when unsafe cases may arise in ML’s value restriction:
Andreas wrote:
Unfortunately, ML does not usually make the quantifiers explicit in its syntax, only in its typing rules.
Reusing a referenced object is for example desired for let expressions which analogous to math notation, should only create and evaluate the instantiation of the substitutions once even though they may be lexically substituted more than once within the in clause. So for example, if the function application is evaluated as (regardless of whether also lexically or not) within the in clause whilst the type parameters of substitutions are re-quantified for each application (because the instantiation of the substitutions are only lexically within the function application), then type safety can be lost if the applications aren’t all forced to quantify the offending type parameters only once (i.e. disallow the offending type parameter to be polymorphic).
The value restriction is ML’s compromise to prevent all unsafe cases while also preventing some (formerly thought to be rare) safe cases, so as to simplify the type system. The value restriction is considered a better compromise, because the early (antiquated?) experience with more complicated typing approaches that didn’t restrict any or as many safe cases, caused a bifurcation between imperative and pure functional (aka applicative) programming and leaked some of the encapsulation of abstract types in ML functor modules. I cited some sources and elaborated here. Tangentially though, I’m pondering whether the early argument against bifurcation really stands up against the fact that value restriction isn’t required at all for call-by-name (e.g. Haskell-esque lazy evaluation when also memoized by need) because conceptually partial applications don’t form closures on already evaluated state; and call-by-name is required for modular compositional reasoning and when combined with purity then modular (category theory and equational reasoning) control and composition of effects. The monomorphisation restriction argument against call-by-name is really about forcing type annotations, yet being explicit when optimal memoization (aka sharing) is required is arguably less onerous given said annotation is needed for modularity and readability any way. Call-by-value is a fine tooth comb level of control, so where we need that low-level control then perhaps we should accept the value restriction, because the rare cases that more complex typing would allow would be less useful in the imperative versus applicative setting. However, I don’t know if the two can be stratified/segregated in the same programming language in smooth/elegant manner. Algebraic effects can be implemented in a CBV language such as ML and they may obviate the value restriction. IOW, if the value restriction is impinging on your code, possibly it’s because your programming language and libraries lack a suitable metamodel for handling effects.
Scala makes a syntactical restriction against all such references, which is a compromise that restricts even more cases (including ones that would be safe if not restricted) than ML's value restriction does, but is more regular in the sense that we'll not be scratching our heads over an error message pertaining to the value restriction. In Scala, we're never allowed to create such a reference. Thus in Scala, we can only express cases where a new instance of a reference is created when its type parameters are quantified. Note OCaml relaxes the value restriction in some cases.
Note afaik both Scala and ML don’t enable declaring that a reference is immutable1, although the object they point to can be declared immutable with val. Note there’s no need for the restriction for references that can’t be mutated.
The reason that mutability of the reference type1 is required in order to make the complicated typing cases arise, is because if we instantiate the reference (e.g. in for example the substitutions clause of let) with a non-parametrised object (i.e. not None or Nil4 but instead for example a Option[String] or List[Int]), then the reference won’t have a polymorphic type (pertaining to the object it points to) and thus the re-quantification issue never arises. So the problematic cases are due to instantiation with a polymorphic object then subsequently assigning a newly quantified object (i.e. mutating the reference type) in a re-quantified context followed by dereferencing (reading) from the (object pointed to by) reference in a subsequent re-quantified context. As aforementioned, when the re-quantified type parameters conflict, the typing complication arises and unsafe cases must be prevented/restricted.
Phew! If you understood that without reviewing linked examples, I’m impressed.
1 IMO, employing the phrase "mutable references" instead of "mutability of the referenced object" and "mutability of the reference type" would be potentially more confusing, because our intention is to mutate the object's value (and its type) which is referenced by the pointer, not to refer to mutability of the pointer itself, i.e. of what the reference points to. Some programming languages, in the case of primitive types, don't even explicitly distinguish whether they're disallowing mutation of the reference or of the object it points to.
2 Wherein an object may even be a function, in a programming language that allows first-class functions.
3 To prevent a segmentation fault at runtime due to accessing (read or write of) the referenced object with a presumption about its statically (i.e. at compile-time) determined type which is not the type that the object actually has.
4 Which are NONE and [] respectively in ML.