I would like to encode the semantics of the abstract keyword as a constraint in Alloy (be patient, I need to do this for a reason! :) ). If I have the following code:
abstract sig A {}
sig a1 extends A{}
sig a2 extends A{}
I think its meaning would be as follows (I hope I am right!):
sig A {}
sig a1 in A{}
sig a2 in A{}
fact {
A=a1+a2 //A is nothing other than a1 and a2
a1 & a2 = none // a1 and a2 are disjoint
}
So the two specifications above would be semantically equivalent.
I am eager to use the abstract keyword that Alloy provides to make life easy, but the problem arises when I make A a subset of sig O and use the abstract keyword:
sig O{}
abstract sig A in O{}
sig a1 extends A{}
sig a2 extends A{}
The above syntax produces an error; Alloy complains: "Subset signature cannot be abstract." So my first question is: why does Alloy not allow this?
Undeterred, I encode the semantics of the abstract keyword myself (as explained above) and arrive at the following code:
sig O{}
sig A in O{}
sig a1 in A{}
sig a2 in A{}
fact {
A=a1+a2 // A cannot be independently instantiated
a1 & a2 = none // a1 and a2 are disjoint
}
And this works, and everything is fine :)
Now if I want to add a3 to my Alloy specification, I need to tweak my specification as the following:
sig O{}
sig A in O{}
sig a1 in A{}
sig a2 in A{}
sig a3 in A{}
fact {
A=a1+a2+a3
a1 & a2 = none
a1 & a3 = none
a2 & a3 = none
}
But as you see by comparing the two specifications above, if I want to continue this and add a4 in a similar way, I have to change the fact part even more, and this continues to be a hassle. The number of ai & aj = none constraints grows quadratically (n(n-1)/2 of them for n subsignatures), i.e., adding a4 forces me to add more than one constraint:
fact {
A=a1+a2+a3+a4
a1 & a2 = none
a1 & a3 = none
a1 & a4 = none
a2 & a3 = none
a2 & a4 = none
a3 & a4 = none
}
So my second question:
Is there any workaround (or probably simpler way) to do this?
Any comment is appreciated.
Thx :)
On Q1 (why does Alloy not allow extension of subset signatures?): I don't know.
On Q2 (is there a workaround): the simplest workaround is to make a1 ... an be subsignatures (extensions) of A, and find another way to establish the relation of A and O. In the simple examples you have given, O has no subtypes so simply changing A in O to A extends O would work.
If O is already partitioned by other signatures you haven't shown us, then that workaround doesn't work; it's impossible to say what would work without more detail. (Ideally, you want a minimum complete working example to illustrate the difficulty: the examples you give are minimal, and illustrate one difficulty, but they don't illustrate why A cannot be an extension of O.)
[Addendum]
In a comment, you say
The reason [that I used A in O instead of A extends O] is that there is another signature C in O that is not shown here. A and C are not necessarily disjoint, so that is the reason I think I have to use in instead of extend in defining them to be subset of O.
The devil is in the details, but the conclusion doesn't follow from the premises stated. If A and C both extend O, they will be disjoint, but if one uses extend and the other uses in they are not automatically disjoint. So if you want to have A and C each be a subset of O, and A to be partitioned by several other signatures, it is possible to do so (unless there are other constraints not yet mentioned).
sig O {}
abstract sig A extends O {}
sig a1, a2 extends A {}
sig a3, a4 extends A {}
sig C in O {}
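If keeping A as a subset signature of O really is a requirement, the pairwise-disjointness boilerplate from the question can at least be collapsed into a single constraint using Alloy's built-in disj predicate. This is a sketch of the questioner's own encoding (assuming an Alloy 4 or later analyzer), not a replacement for the extends-based fix above:

```alloy
sig O {}
sig A in O {}
sig a1, a2, a3, a4 in A {}
fact {
  A = a1 + a2 + a3 + a4  -- A is exactly the union of its "subsignatures"
  disj [a1, a2, a3, a4]  -- pairwise disjointness in one line
}
```

Adding a5 then touches only two places, instead of requiring n new pairwise constraints.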
Suppose we have a type A with an equivalence relation (===) : A -> A -> Prop on it.
On top of that there is a function f : A -> option A.
It so happens that this function f is "almost" Proper with respect to equiv. By "almost" I mean the following:
Lemma almost_proper :
forall a1 a2 b1 b2 : A,
a1 === a2 ->
f a1 = Some b1 -> f a2 = Some b2 ->
b1 === b2.
In other words, if f succeeds on both inputs, the relation is preserved, but f might still fail on one and succeed on the other. I would like to express this concept concisely but came up with a few questions when trying to do so.
I see three solutions to the problem:
Leave everything as is. Do not use typeclasses, prove lemmas like the one above. This doesn't look good, because of the proliferation of preconditions like x = Some y, which create complications when proving lemmas.
It is possible to prove Proper ((===) ==> equiv_if_Some) f when equiv_if_Some is defined as follows:
Inductive equiv_if_Some {A : Type} {EqA : relation A} `{Equivalence A EqA} : relation (option A) :=
| equiv_Some_Some : forall a1 a2, a1 === a2 -> equiv_if_Some (Some a1) (Some a2)
| equiv_Some_None : forall a, equiv_if_Some (Some a) None
| equiv_None_Some : forall a, equiv_if_Some None (Some a)
| equiv_None_None : equiv_if_Some None None.
One problem here is that this is no longer an equivalence relation (it is not transitive).
It might be possible to prove Almost_Proper ((===) ==> (===)) f if some reasonable Almost_Proper class is used. I am not sure how that would work.
What would be the best way to express this concept? I am leaning toward the second one, but perhaps there are more options?
For variants 2 and 3, are there preexisting common names (and therefore possibly premade definitions) for the relations I describe? (equiv_if_Some and Almost_Proper)
Option 2, if it simplifies your proofs.
The fact that it's not an equivalence relation is not a deal breaker, but it does limit the contexts in which it can be used.
equiv_if_Some might be nicer to define as an implication (similar to how it appears in the almost_proper lemma) than as an inductive type.
You may also consider using other relations (as an alternative to, or in combination with, equiv_if_Some):
A partial order that can relate None to Some but not Some to None
A partial equivalence relation that relates only Somes.
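The implication-style definition suggested above might look like this (a sketch; the name and exact form are illustrative, not from any library):

```coq
Require Import Coq.Classes.RelationClasses.

(* Two optional values are related iff, whenever both are Some, their
   contents are related by EqA. Nothing is demanded when either is None,
   so, like the inductive version, this is not transitive in general. *)
Definition equiv_if_Some {A : Type} (EqA : relation A) : relation (option A) :=
  fun oa1 oa2 =>
    forall a1 a2, oa1 = Some a1 -> oa2 = Some a2 -> EqA a1 a2.
```

This form tends to be easier to use in proofs than the four-constructor inductive, because it unfolds directly to the hypotheses appearing in the almost_proper lemma.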
Here, the following is explained:
trait A { def common = "A" }
trait B extends A { override def common = "B" }
trait C extends A { override def common = "C" }
class D1 extends B with C
class D2 extends C with B
In the case of D1, the superclass of C is B.
Following the same reasoning, in the case of D2, the superclass of B is C.
So is it possible to vary hierarchy relationships by varying the linearization order of traits?
Also asked here:
https://users.scala-lang.org/t/type-linearization/2533
This is a bit big for a comment, so:
In both cases (D1 and D2), B extends A and C also extends A. B and C don't become superclasses of each other; it's just a matter of how Scala resolves the diamond problem.
Think of type linearization as: "if you mix in two traits that extend a common parent and thus implement a method with the same signature, the rightmost one wins". This means that in your examples:
class D1 extends B with C
B and C both extend A, but C is the rightmost in D1's definition. This means C.common will be called if you call common on D1. And in the next example:
class D2 extends C with B
the story is just the same, but B is the rightmost in its definition, so its implementation is the one called by D2.common.
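The whole behaviour can be checked with a small self-contained program (the question's snippet with straight quotes, plus a demo object):

```scala
trait A { def common = "A" }
trait B extends A { override def common = "B" }
trait C extends A { override def common = "C" }
class D1 extends B with C
class D2 extends C with B

object LinearizationDemo extends App {
  // D1's linearization is D1 -> C -> B -> A: C is rightmost in the
  // declaration, so its override of common wins.
  println((new D1).common) // prints "C"
  // D2's linearization is D2 -> B -> C -> A: here B's override wins.
  println((new D2).common) // prints "B"
}
```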
I'm reading this article about Category and Functor in scala: https://hseeberger.wordpress.com/2010/11/25/introduction-to-category-theory-in-scala/
In this part:
In order to preserve the category structure this mapping must preserve the identity maps and the compositions. More formally:
F(1_A) = 1_{F(A)} ∀ A ∈ C1
F(g ∘ f) = F(g) ∘ F(f) ∀ f: A → B, g: B → C where A, B, C ∈ C1
I can't quite understand F(1_A) = 1_{F(A)}.
Why is the right-hand side 1_{F(A)} rather than F(A)?
I see in other articles, the identity law for Functor is:
fa.map(a => a) == fa
which doesn't obviously relate to 1_{F(A)}.
I think you are confused about what the notation represents.
1_A represents the identity arrow, which is just an arrow/mapping from the object A in the category to itself, i.e. A -> A.
Similarly, 1_{F(A)} represents the identity arrow on the object F(A), which is just an arrow from F(A) to itself, i.e. F(A) -> F(A).
Therefore, the arrow 1_{F(A)} is not the same thing as the object F(A), and so it would be wrong to say that 1_{F(A)} = F(A) as you suggest.
To answer your question of why F(1_A) = 1_{F(A)}, consider the following in light of the above explanation:
F(1_A) = F(f : A -> A) = F(f) : F(A) -> F(A) = 1_{F(A)}
Also now that we have cleared up the meaning of the notation, the code snippet is consistent with the functor definition:
fa.map(a => a) == fa
fa is a value in a functor, and map applies the identity function to every element of it. This reproduces a value equal to the original fa, so the map can be represented as 1_{F(A)} in the language of category theory.
So you can see there is a concept of an arrow and an object in Category Theory which you must be able to distinguish and understand when looking at the notation. I highly recommend reading the first chapter of Steve Awodey's Category Theory if you are really interested in this subject.
If A is an object in the original category, then F(A) is the corresponding object in the new category. If there is an arrow A -> B in the original category, there must be a corresponding arrow F(A) -> F(B) in the new category. Since 1_A is just an arrow A -> A, this means there must be an arrow F(A) -> F(A) as well, which would be written 1_{F(A)}. More compactly, F(1_A) = 1_{F(A)}.
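Both answers can be tied back to the fa.map(a => a) == fa formulation by picking a concrete functor; with F = Option, mapping the identity arrow of A is exactly the identity arrow of Option[A]:

```scala
object FunctorIdentityDemo extends App {
  val fa: Option[Int] = Some(42)
  val fb: Option[Int] = None

  // F(1_A) = 1_{F(A)}: mapping the identity function A => A over a
  // functor value leaves that value unchanged.
  assert(fa.map(a => a) == fa)
  assert(fb.map(a => a) == fb)
  println("identity law holds for these samples")
}
```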
Consider the following hierarchy:
class C1
class C2 extends C1
class C3 extends C2
class C4 extends C3
I want to write a function that just accepts types C2 and C3. For that I thought of the following:
def f [C >: C3 <: C2](c :C) = 0
I'd expect the following behaviour
f(new C1) //doesn't compile, ok
f(new C2) //compiles, ok
f(new C3) //compiles, ok
f(new C4) // !!! Compiles, and it shouldn't
The problem is when calling it with C4, which I don't want to allow, but the compiler accepts. I understand that C4 <: C2 is correct and that C4 can be seen as a C3. But when specifying the bound [C >: C3 <: C2], I would expect the compiler to find a C that respects both bounds at the same time, not one by one.
The question is: is there any way to achieve what I want, and if not, is the compiler trying to avoid some inconsistency with this?
Edit: from the answers I realized that my presumption is wrong. C4 always fulfills C >: C3, so both bounds are indeed respected. The way to go for my use case is C3 <:< C.
Statically, yes. It's pretty simple to impose this constraint:
def f[C <: C2](c: C)(implicit ev: C3 <:< C) = 0
f(new C4) wouldn't compile now.
The problem is, it's probably not possible to prohibit the following behaviour at compile time:
val c: C3 = new C4
f(c)
Here the variable c has the static type C3, which passes any typechecking by the compiler, but it is actually a C4 at runtime.
At runtime you can of course check the type using reflection or polymorphism and throw errors or return Failure(...) or None.
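Putting the hierarchy and the evidence-based signature together (C1 through C4 as in the question), the static behaviour can be sketched like this; the rejected call is left commented out:

```scala
class C1
class C2 extends C1
class C3 extends C2
class C4 extends C3

object BoundsDemo extends App {
  // Accepts only C between C3 and C2: the implicit C3 <:< C evidence
  // rules out C4, because C3 is not a subtype of C4.
  def f[C <: C2](c: C)(implicit ev: C3 <:< C) = 0

  f(new C2) // compiles: C = C2 and C3 <:< C2 holds
  f(new C3) // compiles: C = C3 and C3 <:< C3 holds
  // f(new C4)      // rejected: C = C4, but there is no C3 <:< C4
  // f(new C4: C3)  // compiles: the static type is C3 (the runtime loophole above)
  println("static checks behave as expected")
}
```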
I found that explanation from another stackoverflow question very helpful:
S >: T simply means that if you pass in a type S that is equal to T or its parent, then S will be used. If you pass a type that is sublevel to T then T will be used.
So in your example all but the first should compile.
Following example illustrates the meaning of that:
Let's redefine f:
def f[U >: C3 <: C2](c: U) = c
and then:
val a2 = f(new C2)
val a3 = f(new C3)
val a4 = f(new C4)
List[C2](a2, a3, a4) //compiles
List[C3](a3, a4) //compiles
List[C4](a4) // does not compile, because a4's static type is C3
Hope that helps.
There are a number of great tutorials and posts out there covering Lens's more straightforward methods, e.g. Cleaner way to update nested structures; can anyone provide example uses for these three other methods? Thanks.
Unfortunately, Scalaz 7 lens examples are a work in progress. You need to ask this question on the Scalaz Google Group. Before you ask, try the examples here and watch Emmett's videos.
Using lenses with Scalaz 7
Emmett's videos on Lenses
Look at the source code again. What can you puzzle out from this?
def xmapbA[X, A >: A2 <: A1](b: Bijection[A, X]): LensFamily[X, X, B1, B2] =
xmapA(b to _)(b from _)
def xmapB[X1, X2](f: B1 => X1)(g: X2 => B2): LensFamily[A1, A2, X1, X2] =
lensFamily(a => run(a).xmap(f)(g))
def xmapbB[X, B >: B1 <: B2](b: Bijection[B, X]): LensFamily[A1, A2, X, X] =
xmapB(b to _)(b from _)
/** Modify the value viewed through the lens, returning a functor `X` full of results. */
def modf[X[+_]](f: B1 => X[B2], a: A1)(implicit XF: Functor[X]): X[A2] = {
val c = run(a)
XF.map(f(c.pos))(c put _)
}
Sorry for the minimal help. I can just point at whom to ask and what you need to know before you ask.
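To get a feel for what modf does, here is a hand-rolled, greatly simplified sketch: a plain get/set lens with the functor X fixed to Option. The names Lens, modfOption, Person are illustrative, not Scalaz's actual API; Scalaz's real modf generalises this to any Functor X and works on lens families.

```scala
// A simplified lens: how to read a B out of an A, and how to put one back.
case class Lens[A, B](get: A => B, set: (A, B) => A) {
  // Analogue of Scalaz's modf with X = Option: update the focus with a
  // function that may fail; on success the whole structure is rebuilt
  // around the new focus.
  def modfOption(f: B => Option[B], a: A): Option[A] =
    f(get(a)).map(b => set(a, b))
}

case class Person(name: String, age: Int)

object LensModfSketch extends App {
  val ageLens = Lens[Person, Int](_.age, (p, n) => p.copy(age = n))

  // An update that can fail: refuse negative ages.
  def checkedAge(n: Int): Option[Int] = if (n >= 0) Some(n) else None

  println(ageLens.modfOption(n => checkedAge(n + 1), Person("Ada", 36)))
  // Some(Person(Ada,37))
  println(ageLens.modfOption(_ => checkedAge(-1), Person("Ada", 36)))
  // None
}
```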