I have already asked this question in the Slick Google group; I am posting it here as well in case people who do not check the group can help.
I am trying to implement a filter similar to scala-kendo. We had already developed this filter functionality using plain queries; now I am trying to convert it to Slick expressions, similar to what slick-kendo has done.
I need to implement case-insensitive filtering, but I cannot figure out how to do that. The members of scala.slick.ast.Library only provide case-sensitive search.
EDIT:
Adding the code:
private def predicate(e: E, f: Filter) = {
  val (c, v) = (colNode(e, f.field), LiteralNode(f.value))
  val L = Library
  def \(fs: FunctionSymbol, nodes: Node*) = fs.typed[Boolean](nodes: _*)
  Column.forNode[Boolean](f.operator match {
    case "EqualTo"              => \(L.==, c, v)
    case "NotEqualTo"           => \(L.Not, \(L.==, c, v))
    case "GreaterThen"          => \(L.>, c, v)
    case "GreaterThenOrEqualTo" => \(L.>=, c, v)
    case "LessThen"             => \(L.<, c, v)
    case "LessThenOrEqualTo"    => \(L.<=, c, v)
    case "StartsWith"           => \(L.StartsWith, c, v)
    case "StartsWithIgnore"     => \(L.StartsWith, c, v)
    case "EndsWith"             => \(L.EndsWith, c, v)
    case "Contains"             => \(L.Like, c, LiteralNode(s"%${f.value}%"))
    case "DoesNotContain"       => \(L.Not, \(L.Like, c, LiteralNode(s"%${f.value}%")))
  })
}
As you can see above, Library has methods like StartsWith, EndsWith, etc., but I need something like StartsWithIgnoreCase, EndsWithIgnoreCase, etc.
Can someone suggest how to implement this?
If I understand you correctly, you want something like this: http://slick.typesafe.com/doc/3.0.0-RC3/queries.html#sorting-and-filtering - see the fourth example, "val q4 = coffees.filter ...". This part of the docs is also valid for Slick 2.x.
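As I read that answer, the idea is to lower-case both sides of the comparison in the lifted API. A minimal sketch, assuming a hypothetical table query names with a String column name:
val search = "foo"
val exact    = names.filter(_.name.toLowerCase === search.toLowerCase)           // equality, ignoring case
val contains = names.filter(_.name.toLowerCase like s"%${search.toLowerCase}%")  // LIKE, ignoring case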
Finally, the solution is found. Pasting it here for others' benefit. I went through the Slick codebase and found out how to convert a node to a column and apply toLowerCase to the column.
case "EqualToIgnoreCase" => \(L.==, (Column.forNode[String](c).toLowerCase).toNode, LiteralNode(f.value.toLowerCase()))
I'm coming from a JS background. In JS I could write something like the following:
let x = [[1, 0], [2, 1], [3, 2], [4, 3]]
x.forEach(([n, i]) => console.log(n, i))
So I was trying to convert this to Scala, and I found a bunch of ways to do it. But I don't understand how the match and case statements are disappearing.
val x = Array(1, 2, 3, 4).zipWithIndex
// does what I expect
x.foreach((a) => {
  println(a)
})

// uses match
x.foreach((a) => {
  a match {
    case (n, i) => println(s"$n $i")
  }
})

// gets rid of redundant variable name
x.foreach({
  _ match {
    case (n, i) => println(s"$n $i")
  }
})

// gets rid of unnecessary scope
x.foreach(_ match {
  case (n, i) => println(s"$n $i")
})
Up to here, it makes sense. I found the code below online when looking up how to loop with an index.
// where did `match` go?
x.foreach({
  case (n, i) => println(s"$n $i")
})
// and now `case` is gone too?
x.foreach((n, i) => println(s"$n $i"))
What is going on here? Coming from JS I would call it destructuring, but this seems like a hidden/implicit match/case statement. Are there rules around that? How do I know when there is an implicit match/case statement?
// where did `match` go?
x.foreach({
  case (n, i) => println(s"$n $i")
})
This is a Pattern Matching Anonymous Function, also sometimes called a Partial Function Literal. See Scala Language Specification 8.5 Pattern Matching Anonymous Functions for all the gory details. Simply put, the expression
{
case p1 => e1
case p2 => e2
// …
case pn => en
}
is equivalent to
(x1: S1, x2: S2, /* … */, xn: Sn) => (x1, x2, /* … */, xn) match {
case p1 => e1
case p2 => e2
// …
case pn => en
}
provided that the expected type is SAM-convertible to FunctionN[S1, S2, /* … */, Sn, R] or, as a special case, PartialFunction[S1, R] (which is where the name Partial Function Literal comes from).
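To make the expansion concrete with the x from the question (an Array[(Int, Int)] produced by zipWithIndex), these two lines are equivalent:
x.foreach { case (n, i) => println(s"$n $i") }                // pattern-matching anonymous function
x.foreach(t => t match { case (n, i) => println(s"$n $i") })  // what it expands to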
// and now `case` is gone too?
x.foreach((n, i) => println(s"$n $i"))
This is a new feature of Scala 3. For a very long time, the Scala Designers wanted to unify Tuples and Argument Lists. In other words, they wanted to make it so that methods in Scala only ever take one argument, and that argument is a tuple. Unfortunately, it turned out that a) this massively breaks backwards-compatibility and b) massively breaks platform interoperability.
Now, Scala 3 was an opportunity to ignore problem a), but you cannot ignore problem b), since one of the major design goals of Scala is to have seamless, tight, performant integration with the underlying host platform (e.g. .NET in the case of the now-abandoned Scala.NET, the ECMAScript / HTML5 / DOM / WebAPI platform in the case of Scala.js, the native operating system in the case of Scala Native, or the Java platform (JRE, JDK, JVM, J2SE, J2EE, Java, Kotlin, Clojure, etc.) in the case of Scala-JVM).
However, the Scala designers managed to find a compromise, where arguments and tuples are not the same thing, but parameters can be easily converted to tuples and tuples can be easily converted to arguments.
This is called Parameter Untupling, and it basically means that a function of type FunctionN[S1, S2, /* … */, Sn, R] can be automatically converted to a function of type Function1[(S1, S2, /* … */, Sn), R] which is syntactic sugar for Function1[TupleN[S1, S2, /* … */, Sn], R].
Simply put,
(p1: S1, p2: S2, /* … */, pn: Sn) => e: R
can automatically be converted to
(x: (S1, S2, /* … */, Sn)) => {
val p1: S1 = x._1
val p2: S2 = x._2
// …
val pn: Sn = x._n
e
}
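Concretely, in Scala 3 the two-parameter lambda from the question is accepted because foreach expects a Function1 over the (Int, Int) pairs; it is converted to roughly this:
x.foreach((n, i) => println(s"$n $i"))       // Scala 3: parameters untupled automatically
x.foreach(t => println(s"${t._1} ${t._2}"))  // roughly what it becomes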
Note: unfortunately, there is no comprehensive specification of Scala 3 yet. There is a partial Language Reference, which, however, only describes the differences from Scala 2. So you typically have to bounce back and forth between the SLS and the Scala 3 docs.
About the match keyword:
// where did `match` go?
x.foreach({
  case (n, i) => println(s"$n $i")
})
This is covered by the language specification: you can leave out the match keyword in pattern-matching anonymous functions: https://scala-lang.org/files/archive/spec/2.13/08-pattern-matching.html#pattern-matching-anonymous-functions
Say I have the following:
trait PropType
case class PropTypeA(value: String) extends PropType
case class PropTypeB(value: String) extends PropType

case class Item(
  propTypeA: PropTypeA,
  propTypeB: PropTypeB
)
and that I'm given a List[PropType]. How would I go about combining this into a List[Item]?
That is (assuming, to keep this shorter and hopefully easier to follow, that we only have PropTypeA(value: String) and PropTypeB(value: String)), given this:
List[PropType](
  PropTypeA("item1-propTypeA"),
  PropTypeB("item1-propTypeB"),
  PropTypeA("item2-propTypeA"),
  PropTypeB("item2-propTypeB")
)
I'd like to get the equivalent of:
List[Item](
Item(PropTypeA("item1-propTypeA"), PropTypeB("item1-propTypeB")),
Item(PropTypeA("item2-propTypeA"), PropTypeB("item2-propTypeB"))
)
It's kind of like building a table from rows that have been linearized across columns, if that makes sense.
Note that in general there might be incomplete "rows", e.g. this:
List[PropType](
  PropTypeA("item1-propTypeA"),
  PropTypeB("item1-propTypeB"),
  PropTypeB("itemPartialXXX-propTypeB"),
  PropTypeA("itemPartialYYY-propTypeA"),
  PropTypeA("item2-propTypeA"),
  PropTypeB("item2-propTypeB")
)
should generate the same output as the above, with the logic being that PropTypeA always marks the start of a new row and thus everything "unused" is discarded.
How should I approach this?
Something like this will work with the examples you mentioned:
list.grouped(2).collect { case Seq(a: PropTypeA, b: PropTypeB) => Item(a,b) }.toList
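For example, with the first input from your question this should give the two Items you expect (hedged sketch, reusing the list literal from above):
val input: List[PropType] = List(
  PropTypeA("item1-propTypeA"), PropTypeB("item1-propTypeB"),
  PropTypeA("item2-propTypeA"), PropTypeB("item2-propTypeB")
)
val items = input.grouped(2).collect { case Seq(a: PropTypeA, b: PropTypeB) => Item(a, b) }.toList
// List(Item(PropTypeA("item1-propTypeA"), PropTypeB("item1-propTypeB")),
//      Item(PropTypeA("item2-propTypeA"), PropTypeB("item2-propTypeB")))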
However, it is unclear from your question what other cases you want to handle and how. For example, how exactly do you define a "partial" occurrence? Are there always two elements in reverse order? Can there be just one, or three? Can there be two As in a row? Or three? Or two Bs?
For example, A, A, A, A, B or B, B, A, A, B or just A?
Depending on how you answer those questions, you'll need to somehow "pre-filter" the list beforehand.
Here is an implementation based on the last phrase in your question: "PropTypeA always marks the start of a new row and thus everything "unused" is discarded." It only looks for instances where an A is immediately followed by B and discards everything else:
list.foldLeft(List.empty[PropType]) {
  case ((a: PropTypeA) :: tail, b: PropTypeB) => b :: a :: tail // B completes the dangling A: keep the pair
  case ((b: PropTypeB) :: tail, a: PropTypeA) => a :: b :: tail // A after a completed pair: start a new row
  case (Nil, a: PropTypeA) => a :: Nil                          // first A starts the accumulator
  case (_ :: tail, a: PropTypeA) => a :: tail                   // a second A in a row replaces the dangling A
  case (list, _) => list                                        // a B with no dangling A is discarded
}.reverse.grouped(2).collect {
  case Seq(a: PropTypeA, b: PropTypeB) => Item(a, b)
}.toList
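As a sanity check (hedged, reasoned through rather than run here), on the "partial rows" input from the question the fold drops the stray PropTypeB and the PropTypeA that is not followed by a B, so only the four "clean" elements survive:
val partial: List[PropType] = List(
  PropTypeA("item1-propTypeA"), PropTypeB("item1-propTypeB"),
  PropTypeB("itemPartialXXX-propTypeB"), PropTypeA("itemPartialYYY-propTypeA"),
  PropTypeA("item2-propTypeA"), PropTypeB("item2-propTypeB")
)
// grouping the survivors yields the same two Items as for the clean input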
If you have more than just two types, then there are even more questions: what happens if the stuff after A comes in the wrong order, for example? What do you do with A, B, C, A, C, B?
But basically it would be the same idea as above: if the next element is of the type you expect in the sequence, add it to the result; otherwise discard the sequence and keep going.
We can use a tail-recursive function to generate the list of the new type.
import scala.annotation.tailrec

def transformType(proptypes: List[PropType]): List[Item] = {
  // tail-recursive helper
  @tailrec
  def transform(proptypes: List[PropType], items: List[Item]): List[Item] =
    proptypes match {
      case (first: PropTypeA) :: (second: PropTypeB) :: tail => transform(tail, items :+ Item(first, second))
      case (first: PropTypeA) :: (second: PropTypeA) :: tail => transform(second :: tail, items :+ Item(first, PropTypeB("")))
      case (first: PropTypeB) :: tail => transform(tail, items :+ Item(PropTypeA(""), first))
      case (first: PropTypeA) :: tail => transform(tail, items :+ Item(first, PropTypeB("")))
      case _ => items
    }

  transform(proptypes, List.empty[Item])
}
you can find the working link here
What is a good way (read: better readability) to filter a list of tuples? I'm using:
tupleList.filter(_._2).map(_._1)
But this does not feel readable.
I'm not sure how much better, but you can use collect:
tupleList.collect { case (true, x) => x }
and of course give x some meaningful name. If the first element is not a Boolean, you can even do something like:
tupleList.collect { case (x, y) if cond => y }
and give x and y meaningful names
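For instance, a hedged sketch with made-up names, assuming tuples of (isEnabled: Boolean, feature: String):
val features = List((true, "search"), (false, "beta-ui"), (true, "export"))
val enabled = features.collect { case (isEnabled, feature) if isEnabled => feature }
// List("search", "export")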
Using the equivalent with partial functions can also help:
tupleList.filter { case (_, snd) => snd }
.map { case (fst, _) => fst }
This should improve significantly when Dotty arrives with tuple unpacking.
I want to compute something if exactly one of two options is non-empty. Obviously this could be done by a pattern match, but is there some better way?
(o1, o2) match {
  case (Some(o), None) => Some(compute(o))
  case (None, Some(o)) => Some(compute(o))
  case _ => None
}
You could do something like this:
if (o1.isEmpty ^ o2.isEmpty)
  List(o1, o2).flatMap(_.map(x => Some(compute(x)))).head
else
  None
But pattern matching is probably the better way to go.
Thanks to helpful comments from @Suma, I came up with another solution in addition to the current ones:
Since the inputs are always in the form of Option(x):
Iterator(Seq(o1, o2).filter(_ != None))
  .takeWhile(_.length == 1)
  .map(x => compute(x.head.get))
  .toSeq.headOption
Using an iterator also allows a sequence of values to be passed as input. The final mapping will be done if and only if exactly one value in the sequence is defined.
Inspired by the now-deleted answer from pedrofurla, which attempted to use o1 orElse o2 map { compute }, one possibility is to define xorElse; the rest is easy with it:
implicit class XorElse[T](o1: Option[T]) {
  def xorElse[A >: T](o2: Option[A]): Option[A] = {
    if (o1.isDefined != o2.isDefined) o1 orElse o2
    else None
  }
}
(o1 xorElse o2).map(compute)
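A quick behavior check of xorElse (values chosen just for illustration):
Some(1) xorElse None                // Some(1)
None xorElse Some(2)                // Some(2)
Some(1) xorElse Some(2)             // None
(None: Option[Int]) xorElse None    // None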
Another possibility I have found is to use a pattern match, but with Seq concatenation, so that both cases are handled by the same code. The advantage of this approach is that it can be extended to any number of options; it will evaluate compute only when there is exactly one:
o1.toSeq ++ o2 match {
  case Seq(one) => Some(compute(one))
  case _ => None
}
Just initialize a sequence and then flatten:
Seq(o1, o2).flatten match {
  case Seq(o) => Some(compute(o))
  case _ => None
}
I find myself constantly doing things like the following:
val adjustedActions = actions.scanLeft((1.0, null: CorpAction)) {
  case ((runningSplitAdj, _), action) => action match {
    case Dividend(date, amount) =>
      (runningSplitAdj, Dividend(date, amount * runningSplitAdj))
    case s @ Split(date, sharesForOne) =>
      (runningSplitAdj * sharesForOne, s)
  }
}.drop(1).map(_._2)
Here I need to accumulate runningSplitAdj in order to correct the dividends in the actions list. I use scan to maintain the state I need to correct the actions, but in the end I only need the actions themselves. Hence I have to use null for the initial action in the state, and at the end drop that item and map away all the states.
Is there a more elegant way of structuring these? In the context of RxScala Observables, I actually made a new operator to do this (after some help from the RxJava mailing list):
implicit class ScanMappingObs[X](val obs: Observable[X]) extends AnyVal {
  def scanMap[S, Y](f: (X, S) => (Y, S), s0: S): Observable[Y] = {
    val y0: Y = null.asInstanceOf[Y]
    // drop(1) because scan also emits the initial state
    obs.scan((y0, s0)) { case ((y, s), x) => f(x, s) }.drop(1).map(_._1)
  }
}
However, now I find myself doing it to Lists and Vectors too, so I wonder if there is something more general I can do?
The combinator you're describing (or at least something very similar) is often called mapAccum. Take the following simplified use of scanLeft:
val xs = (1 to 10).toList
val result1 = xs.scanLeft((1, 0.0)) {
  case ((acc, _), i) => (acc + i, i.toDouble / acc)
}.tail.map(_._2)
This is equivalent to the following (which uses Scalaz's implementation of mapAccumLeft):
xs.mapAccumLeft[Double, Int](1, {
  case (acc, i) => (acc + i, i.toDouble / acc)
})._2
mapAccumLeft returns a pair of the final state and a sequence of the results at each step, but it doesn't require you to specify a spurious initial result (that will just be ignored and then dropped), and you don't have to map over the entire collection to get rid of the state—you just take the second member of the pair.
Unfortunately mapAccumLeft isn't available in the standard library, but if you're looking for a name or for ideas about implementation, this is a place to start.
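If you want to stay on the standard library, here is a minimal sketch of a List-only mapAccumLeft in the same spirit (the name and the curried signature are my own choice, not Scalaz's exact API):
def mapAccumLeft[A, S, B](xs: List[A])(s0: S)(f: (S, A) => (S, B)): (S, List[B]) = {
  // thread the state through a foldLeft, collecting results in reverse
  val (finalState, reversed) = xs.foldLeft((s0, List.empty[B])) {
    case ((s, acc), a) =>
      val (s2, b) = f(s, a)
      (s2, b :: acc)
  }
  (finalState, reversed.reverse)
}

// Matches the scanLeft example above:
val result2 = mapAccumLeft((1 to 10).toList)(1) { case (acc, i) => (acc + i, i.toDouble / acc) }._2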