Generating code from locales without interpretation - scala

I would love to generate code from locale definitions directly, without interpretation. Example:
(* A locale, from the code point of view, similar to a class *)
locale MyTest =
fixes L :: "string list"
assumes distinctL: "distinct L"
begin
definition isInL :: "string => bool" where
"isInL s = (s ∈ set L)"
end
The assumptions needed to instantiate MyTest are executable, and I can generate code for them:
definition "can_instance_MyTest L = distinct L"
lemma "can_instance_MyTest L = MyTest L"
by(simp add: MyTest_def can_instance_MyTest_def)
export_code can_instance_MyTest in Scala file -
I can define a function to execute the isInL definition for arbitrary MyTest.
definition code_isInL :: "string list ⇒ string ⇒ bool option" where
"code_isInL L s = (if can_instance_MyTest L then Some (MyTest.isInL L s) else None)"
lemma "code_isInL L s = Some b ⟷ MyTest L ∧ MyTest.isInL L s = b"
by(simp add: code_isInL_def MyTest_def can_instance_MyTest_def)
However, code export fails:
export_code code_isInL in Scala file -
No code equations for MyTest.isInL
Why do I want to do such a thing?
I'm working with a locale in the context of a valid_graph similar to e.g. here but finite. Testing that a graph is valid is easy. Now I want to export the code of my graph algorithms into Scala. Of course, the code should run on arbitrary valid graphs.
I'm thinking of the Scala analogy similar to something like this:
class MyTest(L: List[String]) {
require(L.distinct == L)
def isInL(s: String): Boolean = L contains s
}

One way to solve this is datatype refinement using invariants (see isabelle doc codegen, Section 3.3). This moves the validity assumption (distinct L, in your case) into a new type. Consider the following example:
typedef 'a dlist = "{xs::'a list. distinct xs}"
morphisms undlist dlist
proof
show "[] ∈ ?dlist" by auto
qed
This defines a new type whose elements are all lists with distinct elements. We have to explicitly set up this new type for the code generator.
lemma [code abstype]: "dlist (undlist d) = d"
by (fact undlist_inverse)
Then, in the locale, we get the assumption "for free", since every element of the new type satisfies it. (However, at some point we have to lift a basic set of operations from lists with distinct elements to 'a dlist.)
locale MyTest =
fixes L :: "string dlist"
begin
definition isInL :: "string => bool" where
"isInL s = (s ∈ set (undlist L))"
end
At this point, we are able to give (unconditional) equations to the code generator.
lemma [code]: "MyTest.isInL L s ⟷ s ∈ set (undlist L)"
by (fact MyTest.isInL_def)
export_code MyTest.isInL in Haskell file -
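For intuition in Scala terms, the refinement roughly corresponds to the following hand-written sketch (made-up names, not what export_code actually produces): distinctness is checked once, in a smart constructor, and isInL itself needs no precondition.
// Hypothetical Scala analogue of the dlist refinement (not generated code):
// the invariant is established once by the smart constructor, so membership
// tests need no runtime check.
final class Dlist[A] private (val toList: List[A]) {
  def contains(a: A): Boolean = toList.contains(a)
}
object Dlist {
  // plays the role of the abstraction function dlist: reject non-distinct input
  def fromList[A](xs: List[A]): Option[Dlist[A]] =
    if (xs.distinct == xs) Some(new Dlist(xs)) else None
}
def isInL(l: Dlist[String], s: String): Boolean = l.contains(s)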

I found a method, thanks to chris' tips.
Define a function to test the prerequisites/assumptions to instantiate a MyTest
definition "can_instance_MyTest L = distinct L"
The command term MyTest reveals that MyTest is of type string list => bool. This means that MyTest is a predicate that takes a parameter and tests whether this parameter fulfills MyTest's assumptions.
We introduce a code equation ([code]) that replaces MyTest with the executable instance tester. The code generator can now produce code for occurrences such as MyTest [a,b,c]:
lemma [code]: "MyTest = can_instance_MyTest"
by(simp add:fun_eq_iff MyTest_def can_instance_MyTest_def)
export_code MyTest in Scala file -
This yields (I replaced List[Char] with String for readability):
def can_instance_MyTest[A : HOL.equal](l: List[A]): Boolean =
Lista.distinct[A](l)
def myTest: (List[String]) => Boolean =
(a: List[String]) => can_instance_MyTest[String](a)
More readable pseudo-code:
def myTest(l: List[String]): Boolean = l.isDistinct
Now we need executable code for isInL. We utilize the predefined constant undefined; the generated code throws an exception if L is not distinct.
definition code_isInL :: "string list ⇒ string ⇒ bool" where
"code_isInL L s = (if can_instance_MyTest L then s ∈ set L else undefined)"
export_code code_isInL in Scala file -
This yields:
def code_isInL(l: List[String], s:String): Boolean =
(if (can_instance_MyTest[String](l)) Lista.member[String](l, s)
else sys.error("undefined"))
We just need to show that code_isInL is correct:
lemma "b ≠ undefined ⟹ code_isInL L s = b ⟷ MyTest L ∧ MyTest.isInL L s = b"
by(simp add: code_isInL_def MyTest_def can_instance_MyTest_def MyTest.isInL_def)
(* Unfortunately, the other direction does not hold. The price of undefined. *)
lemma "¬ MyTest L ⟹ code_isInL L s = undefined"
by(simp add: code_isInL_def can_instance_MyTest_def MyTest_def)

Related

How do I code algebraic cpos as an Isabelle locale

I am trying to prove the known fact that there is a standard way to build an algebraic_cpo from a partial_order. My problem is I keep getting the error
Type unification failed: No type arity set :: partial_order
and I cannot understand what this means.
I think I have tracked the problem down to the definition of cpo. The definition works, and I have proven various results with it, but the interpretation that works for partial_order fails for cpo.
theory LocaleProblem imports "HOL-Lattice.Bounds"
begin
definition directed:: "'a::partial_order set ⇒ bool" where
" directed a ≡
¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
class cpo = partial_order +
fixes bot :: "'a::partial_order" ("⊥")
assumes bottom: " ∀ x::'a. ⊥ ⊑ x "
assumes dlub: "directed (d::'a::partial_order set) ⟶ (∃ lub . is_Inf d lub) "
print_locale cpo
interpretation "idl" : partial_order
"(⊆)::( ('b set) ⇒ ('b set) ⇒ bool) "
by (unfold_locales , auto) (* works *)
interpretation "idl" : cpo
"(⊆)::( ('b set) ⇒ ('b set) ⇒ bool) "
"{}" (* gives error
Type unification failed: No type arity set :: partial_order
Failed to meet type constraint:
Term: (⊆) :: 'b set ⇒ 'b set ⇒ bool
Type: ??'a ⇒ ??'a ⇒ bool *)
Any help much appreciated. david
You offered two solutions.
Following the work of Hennessy in "Algebraic Theory of Processes", I am trying to prove (where the I(A) are ideals, which are sets) that "if A is a partial_order then I(A) is an algebraic_cpo". I will then want to apply the result to a number of semantics, given as sets. Does your comment mean that the second solution is not a good idea?
Initially I had a proven lemma that started
lemma directed_ran: "directed (d::('a::partial_order×'b::partial_order) set) ⟹ directed (Range d)"
proof (unfold directed_def)
The first solution started well:
context partial_order begin (* type 'a is now a partial_order *)
definition
is_Sup :: "'a set ⇒ 'a ⇒ bool" where
"is_Sup A sup = ((∀x ∈ A. x ⊑ sup) ∧ (∀z. (∀x ∈ A. x ⊑ z) ⟶ sup ⊑ z))"
definition directed:: "'a set ⇒ bool" where
" directed a ≡
¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
lemma directed_ran: "directed (d::('c::partial_order×'b::partial_order) set) ⟹ directed (Range d)"
proof - assume a:"directed d"
from a local.directed_def[of "d"] (* Fail with message below *)
show "directed (Range d)"
Alas, the previously working lemma now fails. I rewrote
proof (unfold local.directed_def)
so I explored and found that, although the fact local.directed_def exists, it cannot be unified:
Failed to meet type constraint:
Term: d :: ('c × 'b) set
Type: 'a set
I changed the type successfully in the lemma statement, but now I can find no way to unfold the definition in the proof. Is there some way to do this?
The second solution also starts well:
instantiation set:: (type) partial_order
begin
definition setpoDef: "a⊑(b:: 'a set) = subset_eq a b"
instance proof
fix x::"'a set" show " x ⊑ x" by (auto simp: setpoDef)
fix x y z::"'a set" show "x ⊑ y ⟹ y ⊑ z ⟹ x ⊑ z" by(auto simp: setpoDef)
fix x y::"'a set" show "x ⊑ y ⟹ y ⊑ x ⟹ x = y" by (auto simp: setpoDef)
qed
end
but the next step fails:
instance proof
fix d show "directed d ⟶ (∃lub. is_Sup d (lub::'a set))"
proof assume "directed d " with directed_def[of "d"] show "∃lub. is_Sup d lub"
by (metis Sup_least Sup_upper is_SupI setpoDef)
qed
next
from class.cpo_axioms_def[of "bot"] show "∀x . ⊥ ⊑ x " (* Fails *)
qed
end
The first subgoal is proven, but the show "∀x . ⊥ ⊑ x ", although cut and pasted from the subgoal in the output, does not match the subgoal. Normally at this point I need to add type constraints, but I cannot find any that work.
Do you know what is going wrong?
Do you know if I can force the output to reveal the type information in the subgoal?
The interpretation command acts on the locale that a type class definition implicitly declares. In particular, it does not register the type constructor as an instance of the type class. That's what the command instantiation is good for. Therefore, in the second interpretation, Isabelle complains about set not having been registered as an instance of partial_order.
Since directed only needs the ordering for one type instance (namely 'a), I recommend moving the definition of directed into the locale context of the partial_order type class and removing the sort constraint on 'a:
context partial_order begin
definition directed:: "'a set ⇒ bool" where
"directed a ≡ ¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
end
(This only works if is_Sup is also defined in the locale context. If not, I recommend replacing the is_Sup condition with a1 <= ub and a2 <= ub.)
Then, you don't need to constrain 'a in the cpo type class definition, either:
class cpo = partial_order +
fixes bot :: "'a" ("⊥")
assumes bottom: " ∀ x::'a. ⊥ ⊑ x "
assumes dlub: "directed (d::'a set) ⟶ (∃ lub . is_Inf d lub)"
And consequently, your interpretation should not fail due to sort constraints.
Alternatively, you could declare set as an instance of partial_order instead of interpreting the type class. The advantage is that you can then also use constants and theorems that need partial_order as a sort constraint, i.e., that have not been defined or proven inside the locale context of partial_order. The disadvantage is that you'd have to define the type class operation inside the instantiation block. So you can't just say that the subset relation should be used; this has to be a new constant. Depending on what you intend to do with the instantiation, this might not matter or be very annoying.

Questions about a la carte data types

I was reading the original paper about data types a la carte and decided to try to implement the idea in Scala (I know it's already implemented in many functional libraries). Unfortunately, I found the original paper hard to comprehend and got stuck somewhere near the beginning. Then I found another paper that was easier to understand, and I managed to rewrite the Haskell code from that paper into Scala; you can find it here. However, I'm still struggling to understand a few points:
A quote from the second paper
Original Expr data type:
data Expr = Val Int | Add Expr Expr
New type signature:
data Arith e = Val Int | Add e e
For any functor f, its induced recursive datatype, Fix f, is defined as the least fixpoint of f, implemented as follows:
data Fix f = In (f (Fix f))
Now that we have tied the recursive knot of a signature,
Fix Arith is a language equivalent to the original Expr datatype
which allowed integer values and addition.
What exactly does it mean that "we have tied the recursive knot of a signature", and what does it mean that Fix Arith is a language equivalent to the original Expr?
The actual type of In is In :: f (Fix f) -> Fix f
If we try to construct a value using the In constructor and Val 1, we'll get the following result:
> :t In(Val 1)
> In(Val 1) :: Fix Arith
Scala encoding of the same data types:
sealed trait Arith[A]
case class Val[A](x: Int) extends Arith[A]
case class Add[A](a: A, b: A) extends Arith[A]
trait Fix[F[_]]
case class In[F[_]](exp: F[Fix[F]]) extends Fix[F]
fold function
The fold function has the following signature and implementation
Haskell:
fold :: Functor f => (f a -> a) -> Fix f -> a
fold f (In t) = f (fmap (fold f) t)
The Scala variant I came up with:
def fold[F[_] : Functor, A](f: F[A] => A): Fix[F] => A = {
case In(t) =>
val g: F[Fix[F]] => F[A] = implicitly[Functor[F]].lift(fold(f))
f(g(t))
}
What I'm curious about is that in my Scala version, function g has the type F[Fix[F]] => F[A], but the runtime class of the variable t after pattern matching is LaCarte$Add, with value Add(In(Val(1)),In(Val(2))). How can it be valid to apply function g to a LaCarte$Add? Also, I'd very much appreciate any help understanding the fold function.
Quote from the paper:
The first argument of fold is an f-algebra, which provides
the behavior of each constructor associated with a given signature f.
What does it mean exactly “we have tied the ‘recursive knot’ of a signature”?
The original Expr datatype is recursive, referring to itself in its own definition:
data Expr = Val Int | Add Expr Expr
The Arith type “factors out” the recursion by replacing recursive calls with a parameter:
data Arith e = Val Int | Add e e
The original Expr type can have any depth of nesting, which we want to support with Arith as well, but the maximum depth depends on what type we choose for e:
Arith Void can’t be nested: it can only be a literal value (Val n) because we can’t construct an Add, because we can’t obtain a value of type Void (it has no constructors)
Arith (Arith Void) can have one level of nesting: the outer constructor can be an Add, but the inner constructors can only be Val.
Arith (Arith (Arith Void)) can have two levels
And so on
What Fix Arith gives us is a way to talk about the fixed point Arith (Arith (Arith …)) with no limit on the depth.
This is just like how we can replace a recursive function with a non-recursive function and recover the recursion with the fixed-point combinator:
factorial' :: (Integer -> Integer) -> Integer -> Integer
factorial' recur n = if n <= 1 then 1 else n * recur (n - 1)
factorial :: Integer -> Integer
factorial = fix factorial'
factorial 5 == 120
What does it mean Fix Arith is a language equivalent to the original Expr?
The language (grammar) that Fix Arith represents is equivalent to the language that Expr represents; that is, they’re isomorphic: you can write a pair of total functions Fix Arith -> Expr and Expr -> Fix Arith.
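Concretely, in the Scala encoding from the question (reusing its Arith, Fix and In), the two conversions can be sketched as follows; Expr, EVal and EAdd are hypothetical names for the directly recursive datatype:
// Directly recursive datatype, corresponding to Haskell's Expr.
sealed trait Expr
case class EVal(x: Int) extends Expr
case class EAdd(a: Expr, b: Expr) extends Expr
// Total conversions in both directions witness the isomorphism.
def toExpr(fa: Fix[Arith]): Expr = fa match {
  case In(Val(x))    => EVal(x)
  case In(Add(a, b)) => EAdd(toExpr(a), toExpr(b))
}
def fromExpr(e: Expr): Fix[Arith] = e match {
  case EVal(x)    => In[Arith](Val(x))
  case EAdd(a, b) => In[Arith](Add(fromExpr(a), fromExpr(b)))
}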
How does it happen that it's valid to apply function g to LaCarte$Add?
I’m not very familiar with Scala, but it looks like Add is a subtype of Arith, so the parameter of g of type F[Fix[F]] can be filled with a value of type Arith[Fix[Arith]] which you get by matching on the In constructor to “unfold” one level of recursion.
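To make fold concrete, here is a small evaluator sketch, again reusing the question's Arith, Fix and In and assuming a minimal Functor type class (the question's own Functor with lift is not shown, so this one is made up):
// Minimal Functor, assumed for this sketch.
trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }
implicit val arithFunctor: Functor[Arith] = new Functor[Arith] {
  def map[A, B](fa: Arith[A])(f: A => B): Arith[B] = fa match {
    case Val(x)    => Val(x)          // no recursive positions to touch
    case Add(a, b) => Add(f(a), f(b)) // map over both children
  }
}
// fold recurses into the children first, then applies the algebra f once per layer.
def fold[F[_], A](f: F[A] => A)(fix: Fix[F])(implicit F: Functor[F]): A = fix match {
  case In(t) => f(F.map(t)(child => fold(f)(child)))
}
// An Arith-algebra: the meaning of one layer whose children are already evaluated.
val evalAlg: Arith[Int] => Int = {
  case Val(x)    => x
  case Add(a, b) => a + b
}
val expr: Fix[Arith] = In[Arith](Add(In[Arith](Val(1)), In[Arith](Val(2))))
// fold(evalAlg)(expr) == 3
The algebra only says what to do with a single layer; fold supplies all the recursion, which is exactly what fmap (fold f) t does in the Haskell version.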

Similar record types in a list/array in purescript

Is there any way to do something like
first = {x:0}
second = {x:1,y:1}
both = [first, second]
such that both is inferred as {x::Int | r} or something like that?
I've tried a few things:
[{x:3}] :: Array(forall r. {x::Int|r}) -- nope
test = Nil :: List(forall r. {x::Int|r})
{x:1} : test -- nope
type X r = {x::Int | r}
test = Nil :: List(X) -- nope
test = Nil :: List(X())
{x:1} : test
{x:1, y:1} : test -- nope
Everything I can think of seems to tell me that combining records like this into a collection is not supported. Kind of like, a function can be polymorphic but a list cannot. Is that the correct interpretation? It reminds me a bit of the F# "value restriction" problem, though I thought that was just because of CLR restrictions whereas JS should not have that issue. But maybe it's unrelated.
Is there any way to declare the list/array to support this?
What you're looking for is "existential types", and PureScript just doesn't support those at the syntax level the way Haskell does. But you can roll your own :-)
One way to go is "data abstraction" - i.e. encode the data in terms of operations you'll want to perform on it. For example, let's say you'll want to get the value of x out of them at some point. In that case, make an array of these:
type RecordRep = Unit -> Int
toRecordRep :: forall r. { x :: Int | r } -> RecordRep
toRecordRep {x} _ = x
-- Construct the array using `toRecordRep`
test :: Array RecordRep
test = [ toRecordRep {x:1}, toRecordRep {x:1, y:1} ]
-- Later use the operation
allTheXs :: Array Int
allTheXs = test <#> \r -> r unit
If you have multiple such operations, you can always make a record of them:
type RecordRep =
{ getX :: Unit -> Int
, show :: Unit -> String
, toJavaScript :: Unit -> Foreign.Object
}
toRecordRep r =
{ getX: const r.x
, show: const $ show r.x
, toJavaScript: const $ unsafeCoerce r
}
(note the Unit arguments in every function - they're there for the laziness, assuming each operation could be expensive)
But if you really need the type machinery, you can do what I call "poor man's existential type". If you look closely, existential types are nothing more than "deferred" type checks - deferred to the point where you'll need to see the type. And what's a mechanism to defer something in an ML language? That's right - a function! :-)
newtype RecordRep = RecordRep (forall a. (forall r. {x::Int|r} -> a) -> a)
toRecordRep :: forall r. {x::Int|r} -> RecordRep
toRecordRep r = RecordRep \f -> f r
test :: Array RecordRep
test = [toRecordRep {x:1}, toRecordRep {x:1, y:1}]
allTheXs = test <#> \(RecordRep r) -> r _.x
The way this works is that RecordRep wraps a function, which takes another function, which is polymorphic in r - that is, if you're looking at a RecordRep, you must be prepared to give it a function that can work with any r. toRecordRep wraps the record in such a way that its precise type is not visible on the outside, but it will be used to instantiate the generic function, which you will eventually provide. In my example such function is _.x.
Note, however, that herein lies the problem: the row r is literally not known when you get to work with an element of the array, so you can't do anything with it. Like, at all. All you can do is get the x field, because its existence is hardcoded in the signatures, but besides the x - you just don't know. And that's by design: if you want to put anything into the array, you must be prepared to get anything out of it.
Now, if you do want to do something with the values after all, you'll have to explain that by constraining r, for example:
newtype RecordRep = RecordRep (forall a. (forall r. Show {x::Int|r} => {x::Int|r} -> a) -> a)
toRecordRep :: forall r. Show {x::Int|r} => {x::Int|r} -> RecordRep
toRecordRep r = RecordRep \f -> f r
test :: Array RecordRep
test = [toRecordRep {x:1}, toRecordRep {x:1, y:1}]
showAll = test <#> \(RecordRep r) -> r show
Passing the show function like this works, because we have constrained the row r in such a way that Show {x::Int|r} must exist, and therefore, applying show to {x::Int|r} must work. Repeat for your own type classes as needed.
And here's the interesting part: since type classes are implemented as dictionaries of functions, the two options described above are actually equivalent - in both cases you end up passing around a dictionary of functions, only in the first case it's explicit, but in the second case the compiler does it for you.
Incidentally, this is how Haskell language support for this works as well.
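For readers coming from the Scala question above, the same CPS trick can be sketched in Scala 2 (made-up names; Scala has no row types, so a structural type { def x: Int } stands in for { x :: Int | r }, and ordinary subtyping would already let you write List[{ def x: Int }] directly):
import scala.language.reflectiveCalls
// The wrapper hides the concrete record type; a consumer must work for any value
// that has at least an x field.
trait RecordRep {
  def run[A](f: { def x: Int } => A): A
}
def toRecordRep(r: { def x: Int }): RecordRep = new RecordRep {
  def run[A](f: { def x: Int } => A): A = f(r)
}
case class P1(x: Int)
case class P2(x: Int, y: String)
val test: List[RecordRep] = List(toRecordRep(P1(1)), toRecordRep(P2(1, "extra")))
val allTheXs: List[Int] = test.map(rep => rep.run(_.x))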
Following @FyodorSoikin's answer based on "existential types" and what we can find in purescript-exists, we can provide yet another solution.
Finally we will be able to build an Array of records which will be "isomorphic" to:
exists tail. Array { x :: Int | tail }
Let's start with a type constructor which can be used to existentially quantify over a row type (a type of kind #Type). We are not able to use Exists from purescript-exists here because PureScript has no kind polymorphism and the original Exists is parameterized over Type.
newtype Exists f = Exists (forall a. f (a :: #Type))
We can follow and reimplement (<Ctrl-c><Ctrl-v> ;-)) definitions from Data.Exists and build a set of tools to work with such Exists values:
module Main where
import Prelude
import Unsafe.Coerce (unsafeCoerce)
import Data.Newtype (class Newtype, unwrap)
newtype Exists f = Exists (forall a. f (a :: #Type))
mkExists :: forall f a. f a -> Exists f
mkExists r = Exists (unsafeCoerce r :: forall a. f a)
runExists :: forall b f. (forall a. f a -> b) -> Exists f -> b
runExists g (Exists f) = g f
Using them, we get the ability to build an Array of records with "any" tail, but we have to wrap any such record type in a newtype first:
newtype R t = R { x :: Int | t }
derive instance newtypeRec :: Newtype (R t) _
Now we can build an Array using mkExists:
arr :: Array (Exists R)
arr = [ mkExists (R { x: 8, y : "test"}), mkExists (R { x: 9, z: 10}) ]
and process values using runExists:
x :: Array Int
x = map (runExists (unwrap >>> _.x)) arr

Is there a way to derive Num class functions for my own data type in Haskell?

Let's say I have a type declaration:
data MyType = N Double | C Char | Placeholder
I want to be able to treat MyType as a Double whenever it's possible, with all the Num, Real, Fractional functions resulting in N (normal result) for arguments wrapped in the N constructor, and Placeholder for other arguments
> (N 5.0) + (N 6.0)
N 11.0
> (N 5.0) + (C 'a')
Placeholder
Is there a way to do this other than simply defining this type as an instance of those classes, in a manner similar to:
instance Num MyType where
(+) (N d1) (N d2) = N (d1+d2)
(+) _ _ = Placeholder
...
(which seems counter-productive)?
There is no generic deriving available in standard Haskell: currently, deriving is only available as defined by the compiler for specific Prelude typeclasses: Read, Show, Eq, Ord, Enum, and Bounded.
The Glasgow Haskell Compiler (GHC) apparently has extensions that support generic deriving. However, I don't know if it would actually save you any work to try and use them: how many typeclasses do you need to derive a Num instance from? And, are you sure that you can define an automatic scheme for deriving Num that will always do what you want?
As noted in the comments, you need to describe what your Num instance will do in any case. And describing and debugging a general scheme is certain to be more work than describing a particular one.
No, you can't do this automatically, but I think what leftaroundabout could have been getting at is that you can use Applicative operations to help you.
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad (ap)
data MyType n = N n | C Char | Placeholder deriving (Show, Eq, Functor)
instance Applicative MyType where
pure = N
(<*>) = ap
instance Monad MyType where
N n >>= f = f n
C c >>= _ = C c
Placeholder >>= _ = Placeholder
Now you can write
instance Num n => Num (MyType n) where
x + y = (+) <$> x <*> y
abs = fmap abs
...
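For reference, here is a rough Scala sketch of the same lifting idea (my own names; Scala's standard library has no Num type class, so the operations are lifted by hand through a flatMap that mirrors the Monad instance above):
sealed trait MyType[+A]
final case class N[A](value: A) extends MyType[A]
final case class C(c: Char) extends MyType[Nothing]
case object Placeholder extends MyType[Nothing]
object MyType {
  // Mirrors the Haskell Monad instance: N feeds its value on, C and Placeholder short-circuit.
  def flatMap[A, B](x: MyType[A])(f: A => MyType[B]): MyType[B] = x match {
    case N(a)        => f(a)
    case C(c)        => C(c)
    case Placeholder => Placeholder
  }
  // The Applicative-style lift used for binary arithmetic.
  def map2[A, B, Z](x: MyType[A], y: MyType[B])(f: (A, B) => Z): MyType[Z] =
    flatMap(x)(a => flatMap(y)(b => N(f(a, b))))
  def add(x: MyType[Double], y: MyType[Double]): MyType[Double] = map2(x, y)(_ + _)
}
// MyType.add(N(5.0), N(6.0)) == N(11.0)
// MyType.add(N(5.0), C('a')) == C('a')   (like the Haskell code above, a C propagates)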

Isabelle's Code generation: Abstraction lemmas for containers?

I am experimenting with the Code generator. My theory contains a datatype that encodes an invariant:
typedef small = "{x::nat. x < 10}" morphisms to_nat small
by (rule exI[where x = 0], simp)
definition "is_one x ⟷ x = small 1"
Now I want to export code for is_one. It seems that I first have to set up the data type for the code generator as follows:
code_datatype small
instantiation small :: equal
begin
definition "HOL.equal a b ⟷ to_nat a = to_nat b"
instance
apply default
unfolding equal_small_def
apply (rule to_nat_inject)
done
end
lemma [code abstype]: "small (to_nat x) = x" by (rule to_nat_inverse)
And now I am able to define a code equation for is_one that does not violate the abstraction:
definition [code del]:"small_one = small 1"
declare is_one_def[code del]
lemma [code]: "is_one x ⟷ x = small_one" unfolding is_one_def small_one_def..
lemma [code abstract]: "to_nat small_one = 1"
unfolding small_one_def by (rule small_inverse, simp)
export_code is_one in Haskell file -
Question 1: Can I avoid pulling the definition of small_one out of is_one?
Now I have a compound value of small items:
definition foo :: "small set list" where "foo = [ {small (10-i), small i} . i <- [2,3,4] ]"
In that form, I cannot export code using it (“Abstraction violation in code equation foo...”).
Question 2: How do I define an [code abstract] lemma for such a value? It seems that the code generator does not accept lemmas of the form map to_nat foo = ....
For types with invariants declared by code_abstract such as small, no code equation may contain the abstraction function small. Consequently, if you want the equality test in is_one to be expressed on type small, you have to define a constant for small 1 and prove the corresponding code equation with to_nat first. This is what you have done. However, it would be easier to use equality on the representation type nat, i.e., inline the implementation equal_class.equal; then, you do not need to instantiate equal for small either.
lemma is_one_code [code]: "is_one x ⟷ to_nat x = 1"
by(cases x)(simp add: is_one_def small_inject small_inverse)
The lifting package does most of this for you automatically:
setup_lifting type_definition_small
lift_definition is_one :: "small => bool" is "%x. x = 1" ..
export_code is_one in SML file -
Unfortunately, the code generator does not support compound return types that involve abstract types like in small set list. As described in Data Refinement in Isabelle/HOL, sect. 3.2, you have to wrap the type small set list as a type constructor small_set_list of its own and define foo with type small_set_list. The type small_set_list is then represented as nat set list with the invariant list_all (%A. ALL x:A. x < 10).
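Conceptually, the wrapped type corresponds to something like the following Scala sketch (hypothetical, only to illustrate the representation together with its invariant):
// Hypothetical analogue of small_set_list: the representation is a plain
// List[Set[Int]], and the invariant (every element < 10) is checked once at construction.
final class SmallSetList private (val rep: List[Set[Int]])
object SmallSetList {
  def fromList(xs: List[Set[Int]]): Option[SmallSetList] =
    if (xs.forall(_.forall(_ < 10))) Some(new SmallSetList(xs)) else None
}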
Besides Andreas' explanation and comprehensive solution, I found a work-around that avoids introducing a new type. Assume foo is not used in many places, say, only in
definition "in_foo x = (∀s ∈ set foo. x ∈ s)"
Trying to export code for in_foo gives the "Abstraction violation" error. But using program refinement, I can avoid the problematic code:
I use code_thms to find out how foo is used and find that it is used as list_all (op ∈ ?x) foo. Now I rewrite this particular use of foo without mentioning small:
lemma [code_unfold]:
"list_all (op ∈ x) foo ⟷
list_all (op ∈ (to_nat x)) [ {(10-i), i} . i <- [2, 3, 4] ]"
unfolding foo_def by (auto simp add: small_inverse small_inject)
(Had I defined foo using lift_definition, the proof after apply transfer would be even simpler.)
After this transformation, which essentially captures that all members of foo fulfill the invariant of small, the code export works as intended:
export_code in_foo in Haskell file -