Modeling an algebraic group in Alloy

I am trying to model the structure of an algebraic group with Alloy.
A group is just a set of elements with a binary operation satisfying certain properties, so I thought it would be a good fit for Alloy.
This is what I started with:
sig Number {}
/* I call it Number, but this is really just a name for some objects that are going to be in the group */

sig Group {
    member: set Number,
    product: member -> member -> member
    /* This is the part I'm really not sure about. The group is supposed to have a
       well-defined binary operation, so I thought maybe I could write it like this,
       sort of as a curried function... I think it's actually a ternary relation in
       Alloy terms, since it takes two members and returns a third member. */
} { // I want to write the other group properties as appended facts here.
    some e: member | all g: member | g->e->g in product // identity element
    all g: member | some i: member | g->i->e in product /* inverses exist; I think there's a problem here, because I want the e to be the same as in the previous line */
    all a, b, c: member | if a->b->c and c->d->e and b->c->f then a->f->e // associativity
    all a, b: member | a->b->c in product // product is well defined
}

I've only just learned a bit of Alloy myself, but your "inverses exist" problem looks straightforward from a predicate-logic perspective; replace your first two properties with:
some e: member {
    all g: member | g->e->g in product // identity element
    all g: member | some i: member | g->i->e in product // inverses exist
}
By putting the inverse property inside the scope of the quantifier that binds e, it refers to that same e.
I haven't tested this.
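For completeness, the remaining properties can be written in the same style. A sketch (untested; it uses Alloy's box join product[a][b] for the operation, and replaces the stray d, e and f in the associativity line):

sig Group {
    member: set Number,
    product: member -> member -> member
} { // appended facts: the group axioms
    // product is a well-defined (total) binary operation
    all a, b: member | one product[a][b]
    some e: member {
        all g: member | product[g][e] = g and product[e][g] = g // two-sided identity
        all g: member | some i: member | product[g][i] = e and product[i][g] = e // inverses
    }
    // associativity: (a*b)*c = a*(b*c)
    all a, b, c: member | product[product[a][b]][c] = product[a][product[b][c]]
}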

Here is one way of encoding groups in Alloy:
module group[E]

pred associative[o: E->E->E] { all x, y, z: E | (x.o[y]).o[z] = x.o[y.o[z]] }
pred commutative[o: E->E->E] { all x, y: E | x.o[y] = y.o[x] }
pred is_identity[i: E, o: E->E->E] { all x: E | (i.o[x] = x and x = x.o[i]) }
pred is_inverse[b: E->E, i: E, o: E->E->E] { all x: E | (b[x].o[x] = i and i = x.o[b[x]]) }

sig Group {
    op : E -> E -> one E,
    inv : E one -> one E,
    id : E
} {
    associative[op] and is_identity[id, op] and is_inverse[inv, id, op]
}

sig Abelian extends Group {} { commutative[op] }

unique_identity: check {
    all g: Group, id': E | (is_identity[id', g.op] implies id' = g.id)
} for 13 but exactly 1 Group

unique_inverse: check {
    all g: Group, inv': E->E | (is_inverse[inv', g.id, g.op] implies inv' = g.inv)
} for 13 but exactly 1 Group
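To look at concrete instances as well as check assertions, a run command can be added alongside the checks. A minimal, untested sketch (the command name and scope are my choice):

show_group: run { some Group } for 4 but exactly 1 Group

Since op is total over E, with a scope of 4 for E every instance is a group of order at most 4, e.g. Z4 or the Klein four-group.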


MiniZinc: build a connectivity matrix

In MiniZinc, I have an array of booleans representing oriented connections between the nodes of a graph:
array[Variants,Variants] of bool : VariantIsDirectlyUpwardOf;
VariantIsDirectlyUpwardOf[v1,v2] = true if there is an oriented arc "v1 -> v2".
Now I want to build
array[Variants,Variants] of bool : VariantIsUpwardOf;
where VariantIsUpwardOf[v1,v2] = true if there is an oriented path "v1 -> ... -> v2" where "..." is a sequence of nodes defining an oriented path of any length going from v1 to v2.
My first try was to define a transitivity-style constraint:
array[Variants,Variants] of var bool : VariantIsUpwardOf;
constraint forall (v1 in Variants, v2 in Variants)(VariantIsDirectlyUpwardOf[v1,v2]->VariantIsUpwardOf[v1,v2]);
constraint forall (v1 in Variants, v2 in Variants, v3 in Variants)( VariantIsUpwardOf[v1,v2] /\ VariantIsUpwardOf[v2,v3] -> VariantIsUpwardOf[v1,v3]);
but I think this is incorrect: if all values of VariantIsUpwardOf[v1,v2] were true, my constraints would still be satisfied, yet the result would be wrong. The constraints only bound the relation from below, while the transitive closure is the least relation satisfying them.
Following the comment (thanks Axel), I made a second, unsuccessful test using the dpath predicate; here is my very basic test calling dpath:
include "path.mzn";
enum MyNodes={N1,N2};
array [int] of MyNodes: EdgeFrom=[N1];
array [int] of MyNodes: EdgeTo= [N2];
array [MyNodes] of bool: NodesInSubGraph = [true, true];
array [int] of bool: EdgesInSubGraph = [true];
var bool : MyTest = dpath(EdgeFrom,EdgeTo,N1,N2,NodesInSubGraph,EdgesInSubGraph);
output [show(MyTest)];
It produces the following error:
Running MiniTest.mzn
221msec
fzn_dpath_enum_reif:3.3-52
in call 'abort'
MiniZinc: evaluation error: Abort: Reified dpath constraint is not supported
Process finished with non-zero exit code 1.
Finished in 221msec.
The following MiniZinc model demonstrates the use of the dpath() predicate to find a directed path in a graph.
I took the directed graph from Wikipedia as an example.
The model:
include "globals.mzn";
int: Nodes = 4;
bool: T = true; % abbreviate typing
bool: F = false;
set of int: Variants = 1..Nodes;
% VariantIsDirectlyUpwardOf[v1,v2] = true if there is an oriented arc "v1 -> v2".
% Example from https://en.wikipedia.org/wiki/Directed_graph
array[Variants,Variants] of bool : VariantIsDirectlyUpwardOf =
[| F, T, T, F,
| F, F, F, F,
| F, T, F, T,
| F, F, T, F |];
% count the number of Edges as 2D array sum
int: NoOfEdges = sum(VariantIsDirectlyUpwardOf);
set of int: Edges = 1..NoOfEdges;
% for dpath(), the graph has to be represented as two
% 'from' 'to' arrays of Nodes
% cf. https://www.minizinc.org/doc-2.6.4/en/lib-globals-graph.html
array[Edges] of Variants: fromNodes =
[row | row in Variants, col in Variants
where VariantIsDirectlyUpwardOf[row, col]];
array[Edges] of Variants: toNodes =
[col | row in Variants, col in Variants
where VariantIsDirectlyUpwardOf[row, col]];
% arbitrary choice of Nodes to be connected by a directed path
Variants: sourceNode = 4;
Variants: destNode = 2;
% decision variables as result of the path search
array[Variants] of var bool: nodesInPath;
array[Edges] of var bool: edgesInPath;
constraint dpath(fromNodes, toNodes, sourceNode, destNode, nodesInPath, edgesInPath);
% determine next node after nd in path
function int: successor(int: nd) =
min([s | s in Variants, e in Edges where
fix(nodesInPath[s]) /\ fix(edgesInPath[e]) /\
(fromNodes[e] = nd) /\ (toNodes[e] = s)]);
function string: showPath(int: nd) =
if nd = destNode then "\(nd)" else "\(nd)->" ++ showPath(successor(nd)) endif;
output [showPath(sourceNode)];
Resulting output:
4->3->2
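Back to the original question (the full reachability matrix rather than a single path): because VariantIsDirectlyUpwardOf is fixed (par) data in this model, the transitive closure can be computed directly with a recursive function, in the same spirit as showPath above. A minimal, untested sketch under that assumption (reaches and VariantIsUpwardOf are my names):

% reaches(k, i, j): is there a directed path from i to j using at most k+1 arcs?
function bool: reaches(int: k, int: i, int: j) =
    if k = 0 then VariantIsDirectlyUpwardOf[i, j]
    else reaches(k - 1, i, j) \/
         exists(m in Variants)
               (reaches(k - 1, i, m) /\ VariantIsDirectlyUpwardOf[m, j])
    endif;

% any path in a graph with Nodes nodes needs at most Nodes arcs
array[Variants, Variants] of bool: VariantIsUpwardOf =
    array2d(Variants, Variants,
            [reaches(Nodes - 1, i, j) | i, j in Variants]);

If the adjacency matrix were itself an array of var bool, this par computation would no longer work; the closure would then have to be a var matrix pinned down with if-and-only-if constraints per path length, so that the solver cannot over-approximate as in the first attempt above.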

How to generate arbitrary instances of a language given its concrete syntax in Rascal?

Given the concrete syntax of a language, I would like to define a function "instance" with signature str (type[&T]) that could be called with the reified type of the syntax and return a valid instance of the language.
For example, with this syntax:
lexical IntegerLiteral = [0-9]+;
start syntax Exp
    = IntegerLiteral
    | bracket "(" Exp ")"
    > left Exp "*" Exp
    > left Exp "+" Exp
    ;
A valid return of instance(#Exp) could be "1+(2*3)".
The reified type of a concrete syntax definition does contain information about the productions, but I am not sure whether this approach is better than a dedicated data structure. Any pointers on how I could implement it?
The most natural thing is to use the Tree data-type from the ParseTree module in the standard library. It is the format that the parser produces, but you can also use it yourself. To get a string from the tree, simply print it in a string like so:
str s = "<myTree>";
A relatively complete random tree generator can be found here: https://github.com/cwi-swat/drambiguity/blob/master/src/GenerateTrees.rsc
The core of the implementation is this:
Tree randomChar(range(int min, int max)) = char(arbInt(max + 1 - min) + min);

Tree randomTree(type[Tree] gr)
    = randomTree(gr.symbol, 0,
                 toMap({<s, p> | s <- gr.definitions,
                                 /Production p:prod(_,_,_) <- gr.definitions[s]}));

Tree randomTree(\char-class(list[CharRange] ranges), int rec, map[Symbol, set[Production]] _)
    = randomChar(ranges[arbInt(size(ranges))]);

default Tree randomTree(Symbol sort, int rec, map[Symbol, set[Production]] gr) {
    p = randomAlt(sort, gr[sort], rec);
    return appl(p, [randomTree(delabel(s), rec + 1, gr) | s <- p.symbols]);
}

default Production randomAlt(Symbol sort, set[Production] alts, int rec) {
    int w(Production p) = rec > 100 ? p.weight * p.weight : p.weight;
    int total(set[Production] ps) = (1 | it + w(p) | Production p <- ps);

    r = arbInt(total(alts));
    count = 0;

    for (Production p <- alts) {
        count += w(p);
        if (count >= r) {
            return p;
        }
    }

    throw "could not select a production for <sort> from <alts>";
}
It is a simple recursive function which randomly selects productions from a reified grammar.
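With that in place, the instance function from the question reduces to generating a random tree and printing it into a string, as described above. A sketch (assuming randomTree from the code above is in scope, e.g. by importing the linked GenerateTrees module):

str instance(type[Tree] grammar) = "<randomTree(grammar)>";

// e.g. instance(#Exp) could produce something like "1+(2*3)"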
The trick towards termination lies in the weight of each rule. The weights are computed a priori, such that every rule has its own weight in the random selection. We take care to give the set of rules that lead to termination at least a 50% chance of being selected, as opposed to the recursive rules (code here: https://github.com/cwi-swat/drambiguity/blob/master/src/Termination.rsc):
Grammar terminationWeights(Grammar g) {
    deps = dependencies(g.rules);
    weights = ();
    recProds = {p | /p:prod(s, [*_, t, *_], _) := g, <delabel(t), delabel(s)> in deps};

    for (nt <- g.rules) {
        prods = {p | /p:prod(_,_,_) := g.rules[nt]};
        count = size(prods);
        recCount = size(prods & recProds);
        notRecCount = size(prods - recProds);

        // at least 50% of the weight should go to non-recursive rules, if they exist
        notRecWeight = notRecCount != 0 ? (count * 10) / (2 * notRecCount) : 0;
        recWeight = recCount != 0 ? (count * 10) / (2 * recCount) : 0;

        weights += (p : p in recProds ? recWeight : notRecWeight | p <- prods);
    }

    return visit (g) {
        case p:prod(_, _, _) => p[weight=weights[p]]
    }
}

@memo
rel[Symbol, Symbol] dependencies(map[Symbol, Production] gr)
    = {<delabel(from), delabel(to)> | /prod(Symbol from, [*_, Symbol to, *_], _) := gr}+;
Note that this randomTree algorithm will not terminate on grammars that are not "productive" (i.e., grammars whose only rules are of the form syntax E = E;).
It can also generate trees that are filtered out by disambiguation rules, so you should check each generated string by running the parser on it and looking for parse errors. It can also generate ambiguous strings.
By the way, this code was inspired by the PhD thesis of Naveneetha Vasudevan of King's College, London.

Apple turicreate always returns the same label

I'm test-driving turicreate to resolve a classification issue, in which the data consists of 10-tuples (q,w,e,r,t,y,u,i,o,p,label), where 'q..p' is a sequence of characters (for now of 2 types), + and -, like this:
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
-,+,e,e,e,e,e,e,e,e,type2
'e' is just a padding character, so that the vectors have a fixed length of 10.
Note: the data is significantly tilted toward one label (90% of it), and the dataset is small, < 100 points.
I use Apple's vanilla script to prepare and process the data (derived from here):
import turicreate as tc
# Load the data
data = tc.SFrame('data.csv')
# Note: for the sake of investigating why predictions do not work in Swift, the model is deliberately over-fitted, with split 1.0
train_data, test_data = data.random_split(1.0)
print(train_data)
# Automatically picks the right model based on your data.
model = tc.classifier.create(train_data, target='label', features = ['q','w','e','r','t','y','u','i','o','p'])
# Generate predictions (class/probabilities etc.), contained in an SFrame.
predictions = model.classify(train_data)
# Evaluate the model, with the results stored in a dictionary
results = model.evaluate(train_data)
print("***********")
print(results['accuracy'])
print("***********")
model.export_coreml("MyModel.mlmodel")
Note: the model is deliberately over-fitted on the whole data (for now). Convergence seems OK:
PROGRESS: Model selection based on validation accuracy:
PROGRESS: ---------------------------------------------
PROGRESS: BoostedTreesClassifier : 1.0
PROGRESS: RandomForestClassifier : 0.9032258064516129
PROGRESS: DecisionTreeClassifier : 0.9032258064516129
PROGRESS: SVMClassifier : 1.0
PROGRESS: LogisticClassifier : 1.0
PROGRESS: ---------------------------------------------
PROGRESS: Selecting BoostedTreesClassifier based on validation set performance.
And the classification works as expected (although over-fitted).
However, when I use the mlmodel in my code, it always returns the same label, here 'type2', no matter what. The rule here is: type1 = only "+" and "e"; type2 = all other combinations.
I tried using the text_classifier, but the results are far less accurate...
I have no idea what I'm doing wrong...
Just in case someone wants to check, here's the raw data for the small data set.
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
-,+,e,e,e,e,e,e,e,e,type2
+,+,-,+,e,e,e,e,e,e,type2
-,-,+,-,e,e,e,e,e,e,type2
+,e,e,e,e,e,e,e,e,e,type1
-,-,+,+,e,e,e,e,e,e,type2
+,-,+,-,e,e,e,e,e,e,type2
-,+,-,-,e,e,e,e,e,e,type2
+,-,-,+,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
+,+,-,-,e,e,e,e,e,e,type2
-,+,-,e,e,e,e,e,e,e,type2
-,-,-,-,e,e,e,e,e,e,type2
-,-,e,e,e,e,e,e,e,e,type2
-,-,-,e,e,e,e,e,e,e,type2
+,+,+,+,e,e,e,e,e,e,type1
+,-,+,+,e,e,e,e,e,e,type2
+,+,+,e,e,e,e,e,e,e,type1
+,-,-,-,e,e,e,e,e,e,type2
+,-,-,e,e,e,e,e,e,e,type2
+,+,+,-,e,e,e,e,e,e,type2
+,-,e,e,e,e,e,e,e,e,type2
+,-,+,e,e,e,e,e,e,e,type2
-,-,+,e,e,e,e,e,e,e,type2
+,+,-,e,e,e,e,e,e,e,type2
e,e,e,e,e,e,e,e,e,e,type1
-,+,+,-,e,e,e,e,e,e,type2
-,-,-,+,e,e,e,e,e,e,type2
-,e,e,e,e,e,e,e,e,e,type2
-,+,+,+,e,e,e,e,e,e,type2
-,+,-,+,e,e,e,e,e,e,type2
And the Swift code:
// Helper
extension MyModelInput {
    public convenience init(v: [String]) {
        self.init(q: v[0], w: v[1], e: v[2], r: v[3], t: v[4],
                  y: v[5], u: v[6], i: v[7], o: v[8], p: v[9])
    }
}

let classifier = MyModel()
let data = ["-,+,+,e,e,e,e,e,e,e,e", "-,+,e,e,e,e,e,e,e,e,e", "+,+,-,+,e,e,e,e,e,e,e", "-,-,+,-,e,e,e,e,e,e,e", "+,e,e,e,e,e,e,e,e,e,e"]

data.forEach { (tt) in
    let gg = MyModelInput(v: tt.components(separatedBy: ","))
    if let prediction = try? classifier.prediction(input: gg) {
        print(prediction.labelProbability)
    }
}
The Python code saves a MyModel.mlmodel file, which you can add to any Xcode project and use with the code above.
Note: the Python part works fine; for example:
+---+---+---+---+---+---+---+---+---+---+-------+
| q | w | e | r | t | y | u | i | o | p | label |
+---+---+---+---+---+---+---+---+---+---+-------+
| + | + | + | + | e | e | e | e | e | e | type1 |
+---+---+---+---+---+---+---+---+---+---+-------+
is labelled as expected. But when using the Swift code, the label comes out as type2. This thing is driving me berserk (and yes, I checked that the mlmodel replaces the old one whenever I create a new version, and also in Xcode).

Merge model with extensible record in Elm 0.19

I define an extensible record
type alias Saved a =
    { a
        | x : Int
        , y : String
    }
and a Model based on that:
type alias Model =
    Saved { z : Float }
I then load and decode JSON into a Saved {}:
let
    received =
        Decode.decodeValue savedDecoder json |> Result.toMaybe
in
Maybe.map
    (\r ->
        { model
            | x = r.x
            , y = r.y
        }
    )
    received
    |> Maybe.withDefault model
Is there any way to merge the existing model with the received extensible record that does not involve copying each field individually, similar to the ES6 Object.assign function?
That's the way it's done. Optionally, you can pattern match a parameter:
Maybe.map
    (\{ x, y } ->
        { model
            | x = x
            , y = y
        }
    )
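If the field-by-field copy mainly bothers you because it occurs in several places, you can at least factor it into a helper so the field list lives in one spot. A sketch (updateSaved is my name, not from the answer):

-- copy the Saved fields from a decoded record onto any larger record
updateSaved : Saved {} -> Saved a -> Saved a
updateSaved r model =
    { model | x = r.x, y = r.y }

-- usage:
--     received
--         |> Maybe.map (\r -> updateSaved r model)
--         |> Maybe.withDefault model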

Specman on-the-fly generation for multiple constrained items

I have multiple fields that need to be constrained in this manner:
struct my_struct {
    a : uint;
    b : uint;
    c : uint;
    d : uint;
    keep 3*a + 4*b + 5*c + 6*d == 206 and a + b + c + d == 50;

    my_method() @clk_event is {
        while (TRUE) {
            if (ctr == 0) {
                gen a;
                gen b;
                gen c;
                gen d;
            };
            if (ctr == 50) {
                ctr = 0;
            } else {
                ctr += 1;
            };
            wait cycle;
        };
    };
};
I basically want to generate a new set of values for a, b, c, and d periodically. The above code is not working: their values do not change in my simulation. Any idea how to do it?
When you generate one field, the other fields can't change their values; they act as inputs to the constraints. Given your constraints, there can be only one correct value for a field if the other three can't change.
You probably need to modify the design: put the fields with the constraints into their own struct, and have a field of this struct type. Then, instead of four separate gens, you will have only one, and it will do the job right.
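A sketch of that restructuring (untested; struct and field names are mine):

// all four fields live in one struct, together with their constraints,
// so a single gen re-solves them as one consistent set
struct quad_s {
    a : uint;
    b : uint;
    c : uint;
    d : uint;
    keep 3*a + 4*b + 5*c + 6*d == 206 and a + b + c + d == 50;
};

struct my_struct {
    quad : quad_s;
    !ctr : uint; // do-not-generate counter (ctr was left undeclared in the question)

    my_method() @clk_event is {
        while (TRUE) {
            if (ctr == 0) {
                gen quad; // regenerates a, b, c and d together
            };
            if (ctr == 50) {
                ctr = 0;
            } else {
                ctr += 1;
            };
            wait cycle;
        };
    };
};

The individual values are then read as quad.a, quad.b, and so on.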