Refresh and ValidityCheck operations in the SEAL code

While reading the Homomorphic Encryption Standard I came across those two operations:
Refresh(Params, flag, EK, C1) → C2.
ValidityCheck(Params, EK, [C], COMP) → flag.
I searched for their implementations in the SEAL code but couldn't find any, although I suspect the first one is implemented only as evaluator.relinearize().

Your observation is correct. SEAL 2.3.1 only implements the flag="Relinearize" variant of the Refresh operation. ValidityCheck is not implemented in SEAL 2.3.1 at all.

Related

Unexpected Drools Ruleflow Behaviour

Drools version: 6.5.0
In a rule flow that takes the route Start -> A -> B -> End, the expectation is that all rules in ruleflow group A will fire before all rules in ruleflow group B. But the output produced by the AgendaEventListener methods (i.e., beforeMatchFired, afterMatchFired) shows the reverse order: rules associated with B fire before rules associated with A.
Any explanation would be very helpful.
Please find the rule flow diagram below.
If it behaves the same as version 7.x, which I am currently using, it is because the ruleflow is a bit more complicated than you think. It is not just a flow (A -> B -> C); it is a stack.
So it is A, then B/A, then C/B/A. When C finishes executing, control returns to B/A and then to A.
If you want to avoid that, you can add a rule in the last group with the lowest priority, eval(true) in the when part, and halt() in the then part, to end the session before it returns to the previous ruleflow group.

Does my imposed condition have a name in category theory/set theory?

There's a requirement I'm placing on the signature of a total and referentially transparent function:
def add[T](a: T)(b: T): T
// requirement: combining values of type T (under e.g. addition) must always
// yield another value of type T, and must not throw runtime arithmetic
// exceptions or the like.
This requirement is easily fulfilled for many types such as Int, String, and Nat (natural numbers); yet it is also easily violated by types such as NonZeroInt, since the sum of two non-zero integers can in fact be zero.
My question: is there a coined term for this condition? Monoid comes to mind, but obviously I'm not imposing all the monoid laws here.
If I understand what you're asking, then the term you are looking for is "closure" for a set given an operation. Refer to the mathematical definition in Wikipedia here. In short:
A set has closure under an operation if performance of that operation
on members of the set always produces a member of the same set
However, "closure" has a different meaning in computer science; see the link here. And my searches for closure in the context of Scala, even when framed in terms of mathematics or set theory, don't lead to any helpful results. Perhaps this is why you've had trouble finding the coined term.
Without any laws required, it is just an operation on a set T, nothing more. If add is associative, you can call it a Semigroup.
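To make the distinction concrete, here is a minimal Scala sketch of the closure property. `NonZeroInt` is a hypothetical wrapper type used only for illustration; nothing here is a standard library class.

```scala
// Hypothetical wrapper type: constructing it with 0 throws.
final case class NonZeroInt(value: Int) {
  require(value != 0, "must be non-zero")
}

object ClosureDemo {
  // Int is closed under +: the result is always another Int.
  def addInt(a: Int, b: Int): Int = a + b

  // NonZeroInt is NOT closed under addition: 1 + (-1) == 0,
  // so this cannot always produce a valid NonZeroInt.
  def addNonZero(a: NonZeroInt, b: NonZeroInt): NonZeroInt =
    NonZeroInt(a.value + b.value) // throws for a = 1, b = -1

  def main(args: Array[String]): Unit = {
    println(addInt(1, -1)) // 0, still an Int: closed
    val violated =
      try { addNonZero(NonZeroInt(1), NonZeroInt(-1)); false }
      catch { case _: IllegalArgumentException => true }
    println(violated) // true: closure is violated
  }
}
```

The same `addInt` is also associative, so Int under + is in fact a semigroup (and, with 0 as identity, a monoid); closure alone is the weaker condition the question is about.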

What is the recommended replacement for LinkedList in Scala 2.11+

A LinkedList is needed for a custom LRU implementation. When looking at the source we see:
@deprecated("Low-level linked lists are deprecated due to idiosyncrasies in interface and incomplete features.", "2.11.0")
class LinkedList[A]() extends AbstractSeq[A]
                         with LinearSeq[A]
                         with GenericTraversableTemplate[A, LinkedList]
                         with LinkedListLike[A, LinkedList[A]]
                         with Serializable {
So what is the recommended alternative (none is mentioned here)? Do we fall back to java.util.LinkedList? I am guessing there is a better option.
Update: The specific characteristic of LinkedList that is needed is the ability to access an individual entry in O(1) in order to insert/remove elements in the list efficiently. This would require that a LinkedListEntry (or Node or similar) reference be exposed and returned upon insertion of a new element into the list. It appears none of the available implementations, including java.util.LinkedList, is suitable.
Probably LinkedHashMap fits better than the others, but:
there is also spray-caching, which is a really good implementation; it uses something called ConcurrentLinkedHashMap, which works well with Scala as it provides high-performance concurrency.
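For the LRU use case specifically (rather than as a general LinkedList replacement), one common approach is to subclass java.util.LinkedHashMap in access-order mode from Scala. This is a sketch, not a drop-in replacement; the capacity and types are illustrative:

```scala
import java.util.{LinkedHashMap => JLinkedHashMap, Map => JMap}

// Minimal LRU cache sketch: in access-order mode, LinkedHashMap moves
// an entry to the back on each get/put, so the eldest entry is always
// the least recently used one and can be evicted automatically.
class LruCache[K, V](capacity: Int)
    extends JLinkedHashMap[K, V](16, 0.75f, /* accessOrder = */ true) {
  override def removeEldestEntry(eldest: JMap.Entry[K, V]): Boolean =
    size() > capacity
}

object LruDemo extends App {
  val cache = new LruCache[String, Int](2)
  cache.put("a", 1)
  cache.put("b", 2)
  cache.get("a")    // touch "a", so "b" becomes least recently used
  cache.put("c", 3) // exceeds capacity: evicts "b"
  println(cache.containsKey("b")) // false
  println(cache.containsKey("a")) // true
}
```

Note this does not expose O(1) node handles as asked for in the update; it only covers the plain LRU-eviction pattern, where the map's internal doubly linked list does the bookkeeping for you.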

Functional implementation of Tarjan's Strongly Connected Components algorithm

I went ahead and implemented the textbook version of Tarjan's SCC algorithm in Scala. However, I dislike the code: it is very imperative/procedural, with lots of mutable state and book-keeping indices. Is there a more "functional" version of the algorithm? I believe imperative versions of algorithms hide the core ideas, unlike functional versions. I found someone else encountering the same problem with this particular algorithm, but I have not been able to translate his Clojure code into idiomatic Scala.
Note: If anyone wants to experiment, I have a good setup that generates random graphs and tests your SCC algorithm vs running Floyd-Warshall
See Lazy Depth-First Search and Linear Graph Algorithms in Haskell by David King and John Launchbury. It describes many graph algorithms in a functional style, including SCC.
The following functional Scala code generates a map that assigns a representative to each node of a graph. Each representative identifies one strongly connected component. The code is based on Tarjan's algorithm for strongly connected components.
In order to understand the algorithm it might suffice to understand the fold and the contract of the dfs function.
def scc[T](graph: Map[T, Set[T]]): Map[T, T] = {
  // `dfs` finds all strongly connected components below `node`
  // `path` holds the depth for all nodes above the current one
  // `sccs` holds the representatives found so far; the accumulator
  def dfs(node: T, path: Map[T, Int], sccs: Map[T, T]): Map[T, T] = {
    // returns the earlier-encountered node of both arguments;
    // if neither is on the path, `old` is returned
    def shallowerNode(old: T, candidate: T): T =
      (path.get(old), path.get(candidate)) match {
        case (_, None)                 => old
        case (None, _)                 => candidate
        case (Some(dOld), Some(dCand)) => if (dCand < dOld) candidate else old
      }
    // handle the child nodes
    val children: Set[T] = graph(node)
    // the initially known shallowest back-link is `node` itself
    val (newState, shallowestBackNode) = children.foldLeft((sccs, node)) {
      case ((foldedSCCs, shallowest), child) =>
        if (path.contains(child))
          (foldedSCCs, shallowerNode(shallowest, child))
        else {
          val sccWithChildData = dfs(child, path + (node -> path.size), foldedSCCs)
          val shallowestForChild = sccWithChildData(child)
          (sccWithChildData, shallowerNode(shallowest, shallowestForChild))
        }
    }
    newState + (node -> shallowestBackNode)
  }
  // run the above function, so every node gets visited
  graph.keys.foldLeft(Map[T, T]()) { case (sccs, nextNode) =>
    if (sccs.contains(nextNode))
      sccs
    else
      dfs(nextNode, Map(), sccs)
  }
}
I've tested the code only on the example graph found on the Wikipedia page.
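As a small, self-contained sanity check on a toy graph chosen for illustration (not the Wikipedia example), the function can be exercised like this; the body of scc is repeated verbatim from above so the snippet compiles on its own:

```scala
object SccDemo extends App {
  def scc[T](graph: Map[T, Set[T]]): Map[T, T] = {
    def dfs(node: T, path: Map[T, Int], sccs: Map[T, T]): Map[T, T] = {
      def shallowerNode(old: T, candidate: T): T =
        (path.get(old), path.get(candidate)) match {
          case (_, None)                 => old
          case (None, _)                 => candidate
          case (Some(dOld), Some(dCand)) => if (dCand < dOld) candidate else old
        }
      val children = graph(node)
      val (newState, shallowestBackNode) = children.foldLeft((sccs, node)) {
        case ((foldedSCCs, shallowest), child) =>
          if (path.contains(child))
            (foldedSCCs, shallowerNode(shallowest, child))
          else {
            val withChild = dfs(child, path + (node -> path.size), foldedSCCs)
            (withChild, shallowerNode(shallowest, withChild(child)))
          }
      }
      newState + (node -> shallowestBackNode)
    }
    graph.keys.foldLeft(Map[T, T]()) { (sccs, n) =>
      if (sccs.contains(n)) sccs else dfs(n, Map(), sccs)
    }
  }

  // 1 and 2 form a cycle (one SCC); 3 has only a self-loop (its own SCC).
  val g = Map(1 -> Set(2), 2 -> Set(1), 3 -> Set(3))
  val reps = scc(g)
  println(reps(1) == reps(2)) // true: same representative, same component
  println(reps(3) == 3)       // true: 3 represents itself
}
```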
Difference to imperative version
In contrast to the original implementation, my version avoids explicitly unwinding the stack and simply uses a proper (non-tail-) recursive function. The stack is instead represented by a persistent map called path. In my first version I used a List as the stack, but that was less efficient because it had to be searched linearly for membership.
Efficiency
The code is rather efficient. For each edge, you have to update and/or access the immutable map path, which costs O(log|N|), for a total of O(|E| log|N|). This is in contrast to O(|E|) achieved by the imperative version.
Linear Time implementation
The paper in Chris Okasaki's answer gives a linear time solution in Haskell for finding strongly connected components. Their implementation is based on Kosaraju's Algorithm for finding SCCs, which basically requires two depth-first traversals. The paper's main contribution appears to be a lazy, linear time DFS implementation in Haskell.
What they require to achieve a linear-time solution is a set with O(1) singleton add and membership test. This is basically the same problem that gives the solution in this answer a higher complexity than the imperative one. They solve it with state threads in Haskell, which can also be done in Scala (see Scalaz). So if one is willing to make the code rather complicated, it is possible to turn Tarjan's SCC algorithm into a functional O(|E|) version.
Have a look at https://github.com/jordanlewis/data.union-find, a Clojure implementation of the algorithm. It's sorta disguised as a data structure, but the algorithm is all there. And it's purely functional, of course.

Operator precedence in Scala

I like Scala's approach to operator precedence, but in some rare cases the fixed rules may be inconvenient, because they restrict how you can name your methods. Is there a way to define different rules for a class/file, etc. in Scala? If not, will this be resolved in the future?
Operator precedence is fixed in the Scala Reference - 6.12.3 Infix Operations by the first character in the operator. Listed in increasing order of precedence:
(all letters)
|
^
&
= !
< >
:
+ -
* / %
(all other special characters)
And it's not very likely that this will change; it would probably create more problems than it fixes. If you're used to the normal operator precedence, having it changed for one class would be quite confusing.
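To illustrate the fixed rules, here is a small sketch (`Vec` is a made-up class, not from any library) showing that user-defined operators pick up the precedence of their first character:

```scala
// Precedence of a Scala operator is determined by its first character,
// so a user-defined `*` binds tighter than a user-defined `+`.
final case class Vec(x: Double, y: Double) {
  def +(o: Vec): Vec    = Vec(x + o.x, y + o.y)
  def *(s: Double): Vec = Vec(x * s, y * s)
}

object PrecedenceDemo extends App {
  val a = Vec(1, 0)
  val b = Vec(0, 1)
  // Parses as a + (b * 3.0), exactly like ordinary arithmetic;
  // there is no way to declare a different precedence for these methods.
  println(a + b * 3.0) // Vec(1.0,3.0)
}
```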
Are there ways to define another rules for a class/file, etc. in Scala? If not, would it be resolved in the future?
There is no such ability, and there is little likelihood of it being added in the foreseeable future.
There was a feature request raised in the typelevel fork of the scala compiler, a version of the compiler which 'previews' experimental features. The developers suggested that if somebody were to write a SIP for this, it may be considered for implementation.
But in its current state, there is no way to override precedence. Its rules are formally defined in the language specification.
unmodified rules may be inconvenient, because you have restrictions in naming your methods
You do not have any restrictions in naming your methods. For example, you can define methods +, -, *, etc. for a class.
We must also follow the de facto "unmodified rules" (enforced by Scala's operator precedence rules) mentioned in the previous answer (https://stackoverflow.com/a/2922456) by Thomas Jung; they are common to many if not all programming languages, and to abstract algebra, so we need not redefine operator precedence for a + b * c.
See Chapter 6 of the book http://www.scala-lang.org/docu/files/ScalaByExample.pdf for "Rational" class example.