Merge model with extensible record in Elm 0.19
I define an extensible record
type alias Saved a =
    { a
        | x : Int
        , y : String
    }
and a Model based on that:
type alias Model =
    Saved { z : Float }
I then load and decode JSON into a Saved {}:
let
    received =
        Decode.decodeValue savedDecoder json |> Result.toMaybe
in
Maybe.map
    (\r ->
        { model
            | x = r.x
            , y = r.y
        }
    )
    received
    |> Maybe.withDefault model
Is there any way to merge the existing model with the received extensible record that does not involve copying each field individually, similar to the ES6 Object.assign function?
That's the way it's done. Optionally, you can pattern match a parameter:
Maybe.map
    (\{ x, y } ->
        { model
            | x = x
            , y = y
        }
    )
Related
MiniZinc: build a connectivity matrix
In MiniZinc, I have an array of booleans representing an oriented connection between nodes of a graph:

array[Variants,Variants] of bool : VariantIsDirectlyUpwardOf;

VariantIsDirectlyUpwardOf[v1,v2] = true if there is an oriented arc "v1 -> v2".

Now I want to build

array[Variants,Variants] of bool : VariantIsUpwardOf;

where VariantIsUpwardOf[v1,v2] = true if there is an oriented path "v1 -> ... -> v2", where "..." is a sequence of nodes defining an oriented path of any length going from v1 to v2.

My first try was to define a transitive kind of constraint:

array[Variants,Variants] of var bool : VariantIsUpwardOf;

constraint forall (v1 in Variants, v2 in Variants)
    (VariantIsDirectlyUpwardOf[v1,v2] -> VariantIsUpwardOf[v1,v2]);

constraint forall (v1 in Variants, v2 in Variants, v3 in Variants)
    (VariantIsUpwardOf[v1,v2] /\ VariantIsUpwardOf[v2,v3] -> VariantIsUpwardOf[v1,v3]);

but I think this is incorrect, because if all values of VariantIsUpwardOf[v1,v2] were true, then my constraints would be satisfied and the result would be incorrect.

Following the comment (thanks Axel), I made a second unsuccessful test using the predicate dpath. Here is my very basic test calling dpath:

include "path.mzn";
enum MyNodes = {N1, N2};
array [int] of MyNodes: EdgeFrom = [N1];
array [int] of MyNodes: EdgeTo = [N2];
array [MyNodes] of bool: NodesInSubGraph = [true, true];
array [int] of bool: EdgesInSubGraph = [true];
var bool : MyTest = dpath(EdgeFrom, EdgeTo, N1, N2, NodesInSubGraph, EdgesInSubGraph);
output [show(MyTest)];

It produces the following error:

Running MiniTest.mzn
fzn_dpath_enum_reif:3.3-52
  in call 'abort'
MiniZinc: evaluation error: Abort: Reified dpath constraint is not supported
Process finished with non-zero exit code 1.
Finished in 221msec.
The following MiniZinc model demonstrates the usage of the dpath() predicate to find a directed path in a graph. I took the directed graph from Wikipedia as an example.

The model:

include "globals.mzn";

int: Nodes = 4;
bool: T = true;   % abbreviate typing
bool: F = false;
set of int: Variants = 1..Nodes;

% VariantIsDirectlyUpwardOf[v1,v2] = true if there is an oriented arc "v1 -> v2".
% Example from https://en.wikipedia.org/wiki/Directed_graph
array[Variants,Variants] of bool : VariantIsDirectlyUpwardOf =
   [| F, T, T, F,
    | F, F, F, F,
    | F, T, F, T,
    | F, F, T, F |];

% count the number of Edges as 2D array sum
int: NoOfEdges = sum(VariantIsDirectlyUpwardOf);
set of int: Edges = 1..NoOfEdges;

% for dpath(), the graph has to be represented as two
% 'from' / 'to' arrays of Nodes
% cf. https://www.minizinc.org/doc-2.6.4/en/lib-globals-graph.html
array[Edges] of Variants: fromNodes =
   [row | row in Variants, col in Variants where VariantIsDirectlyUpwardOf[row, col]];
array[Edges] of Variants: toNodes =
   [col | row in Variants, col in Variants where VariantIsDirectlyUpwardOf[row, col]];

% arbitrary choice of Nodes to be connected by a directed path
Variants: sourceNode = 4;
Variants: destNode = 2;

% decision variables as result of the path search
array[Variants] of var bool: nodesInPath;
array[Edges] of var bool: edgesInPath;

constraint dpath(fromNodes, toNodes, sourceNode, destNode, nodesInPath, edgesInPath);

% determine next node after nd in path
function int: successor(int: nd) =
   min([s | s in Variants, e in Edges where
        fix(nodesInPath[s]) /\ fix(edgesInPath[e]) /\
        (fromNodes[e] = nd) /\ (toNodes[e] = s)]);

function string: showPath(int: nd) =
   if nd = destNode then "\(nd)" else "\(nd)->" ++ showPath(successor(nd)) endif;

output [showPath(sourceNode)];

Resulting output:

4->3->2
How to generate arbitrary instances of a language given its concrete syntax in Rascal?
Given the concrete syntax of a language, I would like to define a function "instance" with signature str (type[&T]) that could be called with the reified type of the syntax and return a valid instance of the language. For example, with this syntax:

lexical IntegerLiteral = [0-9]+;

start syntax Exp
  = IntegerLiteral
  | bracket "(" Exp ")"
  > left Exp "*" Exp
  > left Exp "+" Exp
  ;

A valid return of instance(#Exp) could be "1+(2*3)". The reified type of a concrete syntax definition does contain information about the productions, but I am not sure whether this approach is better than a dedicated data structure. Any pointers on how I could implement it?
The most natural thing is to use the Tree data-type from the ParseTree module in the standard library. It is the format that the parser produces, but you can also use it yourself. To get a string from the tree, simply print it in a string like so:

str s = "<myTree>";

A relatively complete random tree generator can be found here: https://github.com/cwi-swat/drambiguity/blob/master/src/GenerateTrees.rsc

The core of the implementation is this:

Tree randomChar(range(int min, int max)) = char(arbInt(max + 1 - min) + min);

Tree randomTree(type[Tree] gr)
  = randomTree(gr.symbol, 0, toMap({ <s, p> | s <- gr.definitions, /Production p:prod(_,_,_) <- gr.definitions[s]}));

Tree randomTree(\char-class(list[CharRange] ranges), int rec, map[Symbol, set[Production]] _)
  = randomChar(ranges[arbInt(size(ranges))]);

default Tree randomTree(Symbol sort, int rec, map[Symbol, set[Production]] gr) {
  p = randomAlt(sort, gr[sort], rec);
  return appl(p, [randomTree(delabel(s), rec + 1, gr) | s <- p.symbols]);
}

default Production randomAlt(Symbol sort, set[Production] alts, int rec) {
  int w(Production p) = rec > 100 ? p.weight * p.weight : p.weight;
  int total(set[Production] ps) = (1 | it + w(p) | Production p <- ps);

  r = arbInt(total(alts));
  count = 0;
  for (Production p <- alts) {
    count += w(p);
    if (count >= r) {
      return p;
    }
  }

  throw "could not select a production for <sort> from <alts>";
}

It is a simple recursive function which randomly selects productions from a reified grammar.

The trick towards termination lies in the weight of each rule. This is computed a priori, such that every rule has its own weight in the random selection. We take care to give the set of rules that lead to termination at least a 50% chance of being selected, as opposed to the recursive rules (code here: https://github.com/cwi-swat/drambiguity/blob/master/src/Termination.rsc):

Grammar terminationWeights(Grammar g) {
  deps = dependencies(g.rules);
  weights = ();
  recProds = {p | /p:prod(s,[*_,t,*_],_) := g, <delabel(t), delabel(s)> in deps};

  for (nt <- g.rules) {
    prods = {p | /p:prod(_,_,_) := g.rules[nt]};
    count = size(prods);
    recCount = size(prods & recProds);
    notRecCount = size(prods - recProds);

    // at least 50% of the weight should go to non-recursive rules if they exist
    notRecWeight = notRecCount != 0 ? (count * 10) / (2 * notRecCount) : 0;
    recWeight = recCount != 0 ? (count * 10) / (2 * recCount) : 0;

    weights += (p : p in recProds ? recWeight : notRecWeight | p <- prods);
  }

  return visit (g) {
    case p:prod(_, _, _) => p[weight=weights[p]]
  }
}

@memo
rel[Symbol,Symbol] dependencies(map[Symbol, Production] gr)
  = {<delabel(from), delabel(to)> | /prod(Symbol from, [_*, Symbol to, _*], _) := gr}+;

Note that this randomTree algorithm will not terminate on grammars that are not "productive" (i.e. they have only a rule like syntax E = E;). Also it can generate trees that are filtered by the disambiguation rules, so you can check for this by running the parser on a generated string and checking for parse errors. Also it can generate ambiguous strings.

By the way, this code was inspired by the PhD thesis of Naveneetha Vasudevan of King's College, London.
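For readers who don't know Rascal, the weighting idea itself is language-independent. Below is a minimal Python sketch of the same trick (a hypothetical toy grammar, not the drambiguity code): random production choice is biased towards non-recursive alternatives, and past a depth cap the recursive alternatives are excluded entirely.

import random

# toy grammar: nonterminal -> list of (symbols, is_recursive) alternatives
GRAMMAR = {
    "Exp": [
        (["Int"], False),
        (["(", "Exp", ")"], True),
        (["Exp", "*", "Exp"], True),
        (["Exp", "+", "Exp"], True),
    ],
    "Int": [([d], False) for d in "0123456789"],
}

def pick(alts, depth):
    # non-recursive alternatives get most of the weight; beyond a depth cap,
    # recursive alternatives are dropped entirely (the toy grammar is
    # productive, so a non-recursive alternative always exists)
    weights = [0 if rec and depth > 20 else (1 if rec else 3) for _, rec in alts]
    return random.choices(alts, weights=weights)[0]

def generate(symbol, depth=0):
    if symbol not in GRAMMAR:  # terminal
        return symbol
    symbols, _ = pick(GRAMMAR[symbol], depth)
    return "".join(generate(s, depth + 1) for s in symbols)

print(generate("Exp"))

The real implementation computes a weight per production once, up front (terminationWeights above), instead of deciding the bias on every call.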
Apple turicreate always returns the same label
I'm test-driving turicreate to resolve a classification issue, in which the data consists of 10-tuples (q,w,e,r,t,y,u,i,o,p,label), where q..p is a sequence of characters (for now of 2 types), + and -, like this:

q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
-,+,e,e,e,e,e,e,e,e,type2

'e' is just a padding character, so that vectors have a fixed length of 10.

Note: the data is significantly tilted toward one label (90% of it), and the dataset is small, < 100 points.

I use Apple's vanilla script to prepare and process the data (derived from here):

import turicreate as tc

# Load the data
data = tc.SFrame('data.csv')

# Note: for the sake of investigating why predictions do not work in Swift,
# the model is deliberately over-fitted, with split 1.0
train_data, test_data = data.random_split(1.0)
print(train_data)

# Automatically picks the right model based on your data.
model = tc.classifier.create(train_data, target='label',
                             features=['q', 'w', 'e', 'r', 't', 'y', 'u', 'i', 'o', 'p'])

# Generate predictions (class/probabilities etc.), contained in an SFrame.
predictions = model.classify(train_data)

# Evaluate the model, with the results stored in a dictionary
results = model.evaluate(train_data)
print("***********")
print(results['accuracy'])
print("***********")

model.export_coreml("MyModel.mlmodel")

Note: the model is over-fitted on the whole data (for now). Convergence seems OK:

PROGRESS: Model selection based on validation accuracy:
PROGRESS: ---------------------------------------------
PROGRESS: BoostedTreesClassifier    : 1.0
PROGRESS: RandomForestClassifier    : 0.9032258064516129
PROGRESS: DecisionTreeClassifier    : 0.9032258064516129
PROGRESS: SVMClassifier             : 1.0
PROGRESS: LogisticClassifier        : 1.0
PROGRESS: ---------------------------------------------
PROGRESS: Selecting BoostedTreesClassifier based on validation set performance.

And the classification works as expected (although over-fitted).

However, when I use the mlmodel in my code, no matter what, it always returns the same label, here 'type2'. The rule is: type1 = only "+" and "e"; type2 = all other combinations.

I tried using the text_classifier; the results are far less accurate... I have no idea what I'm doing wrong. Just in case someone wants to check, for a small data set, here's the raw data:
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
-,+,e,e,e,e,e,e,e,e,type2
+,+,-,+,e,e,e,e,e,e,type2
-,-,+,-,e,e,e,e,e,e,type2
+,e,e,e,e,e,e,e,e,e,type1
-,-,+,+,e,e,e,e,e,e,type2
+,-,+,-,e,e,e,e,e,e,type2
-,+,-,-,e,e,e,e,e,e,type2
+,-,-,+,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
+,+,-,-,e,e,e,e,e,e,type2
-,+,-,e,e,e,e,e,e,e,type2
-,-,-,-,e,e,e,e,e,e,type2
-,-,e,e,e,e,e,e,e,e,type2
-,-,-,e,e,e,e,e,e,e,type2
+,+,+,+,e,e,e,e,e,e,type1
+,-,+,+,e,e,e,e,e,e,type2
+,+,+,e,e,e,e,e,e,e,type1
+,-,-,-,e,e,e,e,e,e,type2
+,-,-,e,e,e,e,e,e,e,type2
+,+,+,-,e,e,e,e,e,e,type2
+,-,e,e,e,e,e,e,e,e,type2
+,-,+,e,e,e,e,e,e,e,type2
-,-,+,e,e,e,e,e,e,e,type2
+,+,-,e,e,e,e,e,e,e,type2
e,e,e,e,e,e,e,e,e,e,type1
-,+,+,-,e,e,e,e,e,e,type2
-,-,-,+,e,e,e,e,e,e,type2
-,e,e,e,e,e,e,e,e,e,type2
-,+,+,+,e,e,e,e,e,e,type2
-,+,-,+,e,e,e,e,e,e,type2

And the Swift code:

//Helper
extension MyModelInput {
    public convenience init(v: [String]) {
        self.init(q: v[0], w: v[1], e: v[2], r: v[3], t: v[4],
                  y: v[5], u: v[6], i: v[7], o: v[8], p: v[9])
    }
}

let classifier = MyModel()

let data = ["-,+,+,e,e,e,e,e,e,e,e",
            "-,+,e,e,e,e,e,e,e,e,e",
            "+,+,-,+,e,e,e,e,e,e,e",
            "-,-,+,-,e,e,e,e,e,e,e",
            "+,e,e,e,e,e,e,e,e,e,e"]

data.forEach { (tt) in
    let gg = MyModelInput(v: tt.components(separatedBy: ","))
    if let prediction = try? classifier.prediction(input: gg) {
        print(prediction.labelProbability)
    }
}

The Python code saves a MyModel.mlmodel file, which you can add to any Xcode project and use with the code above.

Note: the Python part works fine; for example:

+---+---+---+---+---+---+---+---+---+---+-------+
| q | w | e | r | t | y | u | i | o | p | label |
+---+---+---+---+---+---+---+---+---+---+-------+
| + | + | + | + | e | e | e | e | e | e | type1 |
+---+---+---+---+---+---+---+---+---+---+-------+

is labelled as expected. But when using the Swift code, the label comes out as type2. This thing is driving me berserk (and yes, I checked that the mlmodel replaces the old one whenever I create a new version, and also in Xcode).
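For reference, one way to check whether the exported .mlmodel itself behaves correctly, independently of the Swift input handling, is to query it directly from Python with coremltools (prediction only runs on macOS). This is a minimal sketch, assuming coremltools is installed and reusing the feature names from the CSV header; the exact output dictionary keys may vary with the export:

import coremltools as ct

mlmodel = ct.models.MLModel("MyModel.mlmodel")

# a row that should clearly be type1 according to the stated rule
sample = dict(zip("qwertyuiop", ["+", "+", "+", "+", "e", "e", "e", "e", "e", "e"]))
print(mlmodel.predict(sample))  # print the whole output dict rather than guessing key names

If this already predicts type2 for such a row, the problem lies on the training/export side rather than in the Swift code.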
How to optimize this algorithm that finds all maximal matchings in a graph?
In my app people give grades to each other, out of ten points. Each day, an algorithm computes a match for as many people as possible (it's impossible to compute a match for everyone). It makes a graph where vertices are users and edges are the grades.

I simplify the problem by saying that if 2 people give a grade to each other, there is an edge between them whose weight is the average of their respective grades. But if A gives a grade to B and B doesn't, there is no edge between them and they can never match: this way, the graph is not oriented anymore.

I would like everybody to be happy on average, but at the same time I would like as few people as possible to be left without a match.

Being very deterministic, I made an algorithm that finds ALL maximal matchings in a graph. I did that because I thought I could analyse all these maximal matchings and apply a value function that could look like:

V(Matching) = exp(|M| / max(|M|)) * sum(weight of all edges in M)

That is to say, a matching is highly valued if its cardinality is close to the cardinality of the maximum matching, and if the sum of the grades between people is high. I put an exponential function on the ratio |M| / max|M| because I consider it a big problem if that ratio drops below 0.8 (so the exp will be arranged to sharply decrease V as |M| / max|M| approaches 0.8).

I would then take the matching where V(M) is maximal.

However, the big problem is that my function that computes all maximal matchings takes a lot of time. For only 15 vertices and 20 edges, it takes almost 10 minutes...

Here is the algorithm (in Swift):

import Foundation

struct Edge: CustomStringConvertible {
    var description: String { return "e(\(v1), \(v2))" }
    let v1: Int
    let v2: Int
    let w: Int?

    init(_ arrint: [Int]) {
        v1 = arrint[0]
        v2 = arrint[1]
        w = nil
    }

    init(_ v1: Int, _ v2: Int) {
        self.v1 = v1
        self.v2 = v2
        w = nil
    }

    init(_ v1: Int, _ v2: Int, _ w: Int) {
        self.v1 = v1
        self.v2 = v2
        self.w = w
    }
}

let mygraph: [Edge] = [
    Edge([1, 2]), Edge([1, 5]), Edge([2, 5]), Edge([2, 3]), Edge([3, 4]),
    Edge([3, 6]), Edge([5, 6]), Edge([2, 6]), Edge([4, 1]), Edge([3, 5]),
    Edge([4, 2]), Edge([7, 1]), Edge([7, 2]), Edge([8, 1]), Edge([9, 8]),
    Edge([11, 2]), Edge([11, 8]), Edge([12, 13]), Edge([1, 6]), Edge([4, 7]),
    Edge([5, 7]), Edge([3, 5]), Edge([9, 1]), Edge([10, 11]), Edge([10, 4]),
    Edge([10, 2]), Edge([10, 1]), Edge([10, 12]),
]

// remove all the edges and vertices "touching" the edges and vertices in "edgePath"
func reduce(graph: [Edge], edgePath: [Edge]) -> [Edge] {
    var alreadyUsedV: [Int] = []
    for edge in edgePath {
        alreadyUsedV.append(edge.v1)
        alreadyUsedV.append(edge.v2)
    }
    return graph.filter({ edge in
        return alreadyUsedV.first(where: { edge.v1 == $0 }) == nil
            && alreadyUsedV.first(where: { edge.v2 == $0 }) == nil
    })
}

func findAllMaximalMatching(graph Gi: [Edge]) -> [[Edge]] {
    var matchings: [[Edge]] = []
    var G = Gi          // current graph (reduced at each depth)
    var M: [Edge] = []  // current matching being built
    var Cx: [Int] = []  // current path in the possibilities tree
                        // e.g. Cx[1] = 3: for depth 1, we are at the 3rd edge
    var d: Int = 0      // current depth
    var debug_it = 0

    while(true) {
        if(G.count == 0) { // if there is no available edge in the graph, we have a matching
            if(M.count > 0) { // safety check: if the initial graph is empty we cannot return an empty matching
                matchings.append(M)
            }
            if(d == 0) {
                // depth = 0, we cannot decrement d, we have finished all the tree possibilities
                break
            }
            d = d - 1
            _ = M.popLast()
            G = reduce(graph: Gi, edgePath: M)
        }
        else {
            let indexForThisDepth = Cx.count > d ? Cx[d] + 1 : 0
            if(G.count < indexForThisDepth + 1) {
                // depth ended
                _ = Cx.popLast()
                if(d == 0) {
                    break
                }
                d = d - 1
                _ = M.popLast()
                // reduce from initial graph to the decremented depth
                G = reduce(graph: Gi, edgePath: M)
            }
            else {
                // matching not finished being built
                M.append(G[indexForThisDepth])
                if(indexForThisDepth == 0) {
                    Cx.append(indexForThisDepth)
                }
                else {
                    Cx[d] = indexForThisDepth
                }
                d = d + 1
                G = reduce(graph: G, edgePath: M)
            }
        }
        debug_it += 1
    }

    print("matching counts : \(matchings.count)")
    print("iterations : \(debug_it)")
    return matchings
}

let m = findAllMaximalMatching(graph: mygraph)

// we have computed all the maximal matchings; now we loop through all of them
// to find the one where V(Mi) is maximum
// ....

Finally, my question is: how can I optimize this algorithm that finds all maximal matchings, and compute my value function on them to find the best matching for my app, in polynomial time?
I may be missing something since the question is quite complicated, but why not simply use the maximum flow problem, with every vertex appearing twice and the edge weights being the average grade where one exists? It will return the maximal flow if configured correctly, and it runs in polynomial time.
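An alternative to hand-rolling a flow construction is to compute a maximum-weight matching directly; networkx ships a polynomial-time (blossom-based) implementation. Below is a sketch under assumptions: the edge list and weights are made up for illustration (the question builds them from averaged grades), and networkx is installed.

import networkx as nx

# (u, v, average grade) -- hypothetical weights
edges = [(1, 2, 7.5), (1, 5, 6.0), (2, 5, 8.0), (2, 3, 5.5),
         (3, 4, 9.0), (3, 6, 4.0), (5, 6, 7.0), (2, 6, 6.5)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# maxcardinality=True first maximizes how many people get matched, then the
# total weight among such matchings -- close in spirit to the V(M) function
# from the question, without enumerating every maximal matching.
matching = nx.max_weight_matching(G, maxcardinality=True)

total_weight = sum(G[u][v]["weight"] for u, v in matching)
print(matching, total_weight)

This runs in roughly cubic time in the number of vertices, so it scales far beyond 15 vertices and 20 edges.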
Alloy model an algebraic group
I am trying to model the structure of an algebraic group with Alloy. A group just has a set of elements and a binary relation with certain properties, so I thought it would be a good fit for Alloy. This is what I started with:

sig Number {}
/* I call it Number, but this is really just a name for some objects
   that are going to be in the group */

sig Group {
    member: set Number,
    product: member -> member -> member,
    /* This is the part I'm really not sure about. The group is supposed to have
       a well-defined binary relation, so I thought maybe I could write it like
       this, sort of as a curried function... I think it's actually a ternary
       relation in Alloy language, since it takes two members and returns a
       third member */
}{
    // I want to write the other group properties as appended facts here.
    some e: member | all g: member | g->e->g in product  // identity element
    all g: member | some i: member | g->i->e in product
    /* inverses exist. I think there's a problem here because I want the e
       to be the same as in the previous line */
    all a, b, c: member | if a->b->c and c->d->e and b->c->f then a->f->e  // transitivity
    all a, b: member | a->b->c in product  // product is well defined
}
I've only just learned a bit of Alloy myself, but your "inverses exist" problem looks straightforward from a predicate logic perspective; replace your first two properties with

some e: member {
    all g: member | g->e->g in product                   // identity element
    all g: member | some i: member | g->i->e in product  // inverses exist
}

By putting the inverse property in the scope of the quantifier of e, it is referring to that same e. I haven't tested this.
Here is one way of encoding groups in Alloy:

module group[E]

pred associative[o : E->E->E] {
    all x, y, z : E | (x.o[y]).o[z] = x.o[y.o[z]]
}

pred commutative[o : E->E->E] {
    all x, y : E | x.o[y] = y.o[x]
}

pred is_identity[i : E, o : E->E->E] {
    all x : E | (i.o[x] = x and x = x.o[i])
}

pred is_inverse[b : E->E, i : E, o : E->E->E] {
    all x : E | (b[x].o[x] = i and i = x.o[b[x]])
}

sig Group {
    op  : E -> E -> one E,
    inv : E one->one E,
    id  : E
}{
    associative[op] and is_identity[id, op] and is_inverse[inv, id, op]
}

sig Abelian extends Group {}{
    commutative[op]
}

unique_identity: check {
    all g : Group, id' : E | (is_identity[id', g.op] implies id' = g.id)
} for 13 but exactly 1 Group

unique_inverse: check {
    all g : Group, inv' : E->E | (is_inverse[inv', g.id, g.op] implies inv' = g.inv)
} for 13 but exactly 1 Group