Sorting by DateTime in Slick - scala

I'm currently working through a rough patch in Slick. I'm trying to sort a query on a table by its timestamp column:
TableName.filter(tableAttribute === 1).sortBy(_.tableTimestamp)
The timestamp is of type org.joda.time.DateTime within Slick. When I try to sort, I get the following error:
No implicit view available from dao.Tables.profile.api.Rep[org.joda.time.DateTime] => slick.lifted.Ordered.
I'm assuming that this isn't built into Slick. Is there a quick and clean way to add an implicit view and solve this?
Thanks!

You might be looking for an implicit ordering built with Ordering.fromLessThan, like the one below:
import org.joda.time.DateTime
implicit def datetimeOrdering: Ordering[DateTime] = Ordering.fromLessThan(_ isBefore _)
In case you want to reverse the ordering, simply replace isBefore with isAfter.
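With that ordering in scope, plain Scala collections of DateTime values sort as expected. A quick sanity check (the dates value below is just illustrative):
val dates = Seq(DateTime.parse("2019-01-02T00:00:00Z"), DateTime.parse("2019-01-01T00:00:00Z"))
dates.sorted // earliest first, via the implicit ordering above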

Related

Nested JSON in scala, Circe, Slick

I have nested JSON in my database and have worked out the case classes for it. I am using Circe, Slick, and Akka HTTP in my web API application.
My case classes are:
case class Sen(sentences: Array[File])
case class File(content: String)
I have written GetResult instances for the same nesting. I am having problems with the array in the case class.
implicit lazy val getFile = GetResult(r => Array[File](r.<<))
implicit lazy val SenObj = GetResult(r => Sen(getFile(r)))
Can anyone tell me how to solve this?
The following is the error I get while compiling:
Error: diverging implicit expansion for type slick.jdbc.GetResult[T]
starting with method createGetTuple22 in object GetResult
implicit lazy val getFile = GetResult(r => Array[File](r.<<))
Your definition of getFile is manually constructing an Array, and specifically you're asking for an Array[File]. There's no GetResult[File], meaning that r.<< won't be able to convert a column value into a File.
Can anyone tell me how to solve this?
You'll at least need a GetResult[File] defined.
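For instance, a minimal sketch, assuming the content field maps to a single text column (the instance name here is illustrative):
implicit val getFileResult: GetResult[File] = GetResult(r => File(r.<<))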
However, it's not clear from the question how the JSON part is intended to work:
Perhaps you have a column containing text which your application treats as JSON. If that's the case, I suggest doing JSON array conversion outside of your Slick code layer. That will keep the mapping to and from the database straightforward.
Or perhaps you have a JSON type in your database and you're using database-specific operations. In that case, it'll depend on what control you have there, and it probably does make sense to do JSON-array operations at the Slick layer. (That's the case for Postgres, for example, via the slick-pg library.)
But that's a different question.
As a general note, I suggest always being explicit about the types of the GetResult instances you are defining:
implicit lazy val getFile: GetResult[Array[File]] =
  GetResult(r => Array[File](r.<<))
implicit lazy val SenObj: GetResult[Sen] =
  GetResult(r => Sen(getFile(r)))
...to be clear about what instances you have available. I find that helps in debugging these situations.

Cast N1QLQuery response into custom object in Scala

I have a simple case class:
case class Account(accountId: Long, login: DateTime)
Now I want to retrieve documents from a Couchbase bucket with a simple N1QL query (it should return a simple list of JSON documents containing two fields):
val query = "SELECT u.accountId, u.login FROM `accounts` u WHERE DATE_DIFF_STR(NOW_STR(), u.login, 'day') > 30"
bucket.query(N1qlQuery.simple(query)).map(rows => rows.map(row => row.value().asInstanceOf[Account]).seq)
but I got the following error in Postman:
java.lang.ClassCastException: com.couchbase.client.java.document.json.JsonObject cannot be cast to com.package.account
My question is: how can I cast documents from the database into my custom object? I also tried casting into RawJSONDocument first, but it did not help.
Can someone help me with that?
First, you may be interested to know that we're actively working on a native Couchbase Scala SDK, for release this year, and it will support your use case of converting rows directly into a case class.
But in the here and now: no, you cannot directly cast a JsonObject into a case class. You will need to use toString to pull out the raw JSON string, and then use a Scala JSON library to convert it. You've got several options here:
Jackson
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

val json = row.value().toString()
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)
val account = mapper.readValue(json, classOf[Account])
uPickle
// needs an implicit ReadWriter[Account] in scope, e.g. derived via upickle.default.macroRW
val account = upickle.default.read[Account](json)
Jsoniter
// needs an implicit JsonValueCodec[Account] in scope, e.g. generated via JsonCodecMaker.make
val account = com.github.plokhotnyuk.jsoniter_scala.core.readFromString[Account](json)
Plus there are Circe, Json4s, Play JSON, Jawn, etc.
FWIW, I found Jsoniter to be the fastest in my benchmarking.
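Putting it together with the query from the question, a sketch using the Jackson mapper above (note that the login: DateTime field will also need Joda support, e.g. jackson-datatype-joda's JodaModule registered with the mapper):
bucket.query(N1qlQuery.simple(query))
  .map(rows => rows.map(row => mapper.readValue(row.value().toString, classOf[Account])))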

Sort by dateTime in scala

I have an RDD[org.joda.time.DateTime]. I would like to sort the records by date in Scala.
Input (sample data after applying collect()):
res41: Array[org.joda.time.DateTime] = Array(2016-10-19T05:19:07.572Z, 2016-10-12T00:31:07.572Z, 2016-10-18T19:43:07.572Z)
Expected Output
2016-10-12T00:31:07.572Z
2016-10-18T19:43:07.572Z
2016-10-19T05:19:07.572Z
I have googled and checked the following link but could not understand it:
How to define an Ordering in Scala?
Any help?
If you collect the records of your RDD, then you can apply the following sorting:
array.sortBy(_.getMillis)
If, on the other hand, your RDD is big and you do not want to collect it to the driver, you should consider:
rdd.sortBy(_.getMillis)
You can define an implicit ordering for org.joda.time.DateTime like so:
implicit def ord: Ordering[DateTime] = Ordering.by(_.getMillis)
This looks at the milliseconds of a DateTime and sorts based on that.
You can then either ensure that the implicit is in your scope or just use it more explicitly:
arr.sorted(ord)
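For example, a quick sketch using the sample values from the question:
import org.joda.time.DateTime

implicit val ord: Ordering[DateTime] = Ordering.by(_.getMillis)

val arr = Array(
  DateTime.parse("2016-10-19T05:19:07.572Z"),
  DateTime.parse("2016-10-12T00:31:07.572Z"),
  DateTime.parse("2016-10-18T19:43:07.572Z")
)
arr.sorted.foreach(println) // prints the dates earliest-first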

Spark toDF cannot resolve symbol after importing sqlContext implicits

I'm working on writing some unit tests for my Scala Spark application.
In order to do so, I need to create different DataFrames in my tests, so I wrote a very short DFsBuilder class that allows me to add new rows and eventually build the DataFrame. The code is:
class DFsBuilder[T](private val sqlContext: SQLContext, private val columnNames: Array[String]) {
  var rows = new ListBuffer[T]()

  def add(row: T): DFsBuilder[T] = {
    rows += row
    this
  }

  def build(): DataFrame = {
    import sqlContext.implicits._
    rows.toList.toDF(columnNames: _*) // UPDATE: added :_* because it was accidentally removed in the original question
  }
}
However, the toDF method doesn't compile, with a "cannot resolve symbol toDF" error.
I wrote this builder with generics since I need to create different kinds of DataFrames (different numbers of columns and different column types). The way I would like to use it is to define a case class in the unit test and use it with the builder.
I know this issue somehow relates to the fact that I'm using generics (probably some kind of type erasure issue), but I can't quite put my finger on what the problem is exactly.
And so my questions are:
Can anyone show me where the problem is, and hopefully how to fix it?
If this issue cannot be solved this way, could someone perhaps offer another elegant way to create DataFrames? (I prefer not to pollute my unit tests with the creation code.)
I obviously googled this issue first, but only found examples where people forgot to import sqlContext.implicits, or something about a case class defined out of scope, which is probably not the same issue I'm having.
Thanks in advance
If you look at the signatures of toDF and of SQLImplicits.localSeqToDataFrameHolder (which is the implicit function used), you'll be able to detect two issues:
Type T must be a subtype of Product (the supertype of all case classes and tuples), and you must provide an implicit TypeTag for it. To fix this, change the declaration of your class to:
class DFsBuilder[T <: Product : TypeTag](...) { ... }
The columnNames argument is not of type Array; toDF takes a "repeated parameter" (like Java's varargs; see section 4.6.2 here), so you have to expand the array into individual arguments:
rows.toList.toDF(columnNames: _*)
With these two changes, your code compiles (and works).
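For reference, here is the corrected builder in full, a sketch applying both fixes (imports shown for completeness):
import scala.collection.mutable.ListBuffer
import scala.reflect.runtime.universe.TypeTag
import org.apache.spark.sql.{DataFrame, SQLContext}

class DFsBuilder[T <: Product : TypeTag](private val sqlContext: SQLContext,
                                         private val columnNames: Array[String]) {
  private val rows = new ListBuffer[T]()

  def add(row: T): DFsBuilder[T] = {
    rows += row
    this
  }

  def build(): DataFrame = {
    import sqlContext.implicits._
    rows.toList.toDF(columnNames: _*) // expand the array into repeated arguments
  }
}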

Scala Slick: MappedColumnType cannot find implicit value for BaseColumnType[String]

I'm trying to set up database columns in Slick for non-primitive objects. I've spent the past day researching MappedColumnType for mapping custom objects to columns, and as far as I can tell I'm implementing it as people recommend. Unfortunately, the following code produces an error:
implicit val localDateMapper = MappedColumnType.base[LocalDate, String](
  // map date to String
  d => d.toString,
  // map String to date
  s => LocalDate.parse(s)
)
And here is the error:
could not find implicit value for evidence parameter of type slick.driver.H2Driver.BaseColumnType[String]
I've seen multiple examples where people map custom objects to and from Strings, so I figure there must be something I'm missing.
For reference, I'm using Play Slick 1.1.1 and Scala 2.11.6. The former supports Slick 3.1.
You can import a BaseColumnType[String] with:
import slick.driver.H2Driver.api.stringColumnType
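Putting it together, a sketch assuming java.time.LocalDate and the H2 driver from your error message (the wildcard api import brings stringColumnType, along with everything else you need, into scope):
import java.time.LocalDate
import slick.driver.H2Driver.api._

implicit val localDateMapper = MappedColumnType.base[LocalDate, String](
  d => d.toString,        // LocalDate -> String for storage
  s => LocalDate.parse(s) // String -> LocalDate when reading
)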