Scala: converting a Long to a DateTime/Timestamp

I am using nscala-time (a wrapper for Joda-Time) and Slick for a project. I'm trying to insert a row into the database with this call:
Article.insert(0, "title1", "hellothere", DateTime.now.getMillis.asInstanceOf[Timestamp])
Apparently Slick does not support the DateTime type defined in Joda-Time, and I have to use java.sql.Timestamp instead. So I decided to do a little conversion inside the insert call, using asInstanceOf. Unfortunately, Scala quickly tells me that java.lang.Long cannot be cast to java.sql.Timestamp. Then I used this:
val dateTime = new DateTime();
val timeStamp = new Timestamp(dateTime.getMillis());
Article.insert(0,"title1", "hellothere", timeStamp)
This magically works, and all I'm left with is confusion.
Why does the conversion work one way but not the other? Should I use a different conversion than asInstanceOf?

You misunderstand what asInstanceOf does: asInstanceOf doesn't convert anything. What it does is lie to the compiler, telling it to believe something instead of going with the knowledge it has.
So you had a Long, and you still had a Long, but you pretended it was a Timestamp, which obviously doesn't work.
I have a simple recommendation regarding asInstanceOf: never use it.

There's no magic about it. Your first statement:
DateTime.now.getMillis
is a Long. A Long is not a Timestamp, so it makes sense that you can't convert it to one by using asInstanceOf.
The second statement:
new Timestamp(dateTime.getMillis())
is using the Timestamp constructor to create a new Timestamp instance from the milliseconds returned by dateTime.getMillis. That is a real conversion: a new object is built from the old value.
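A minimal sketch contrasting the two, assuming Joda-Time (via nscala-time) and java.sql.Timestamp on the classpath:
import java.sql.Timestamp
import org.joda.time.DateTime

val millis: Long = DateTime.now.getMillis

// asInstanceOf converts nothing; it only asserts a type, and fails here at runtime:
// millis.asInstanceOf[Timestamp]  // ClassCastException: a Long is not a Timestamp

// The constructor is an actual conversion: it builds a new Timestamp from the millis.
val ts = new Timestamp(millis)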

Related

How can I convert a Long from a String in Scala

I know I can parse a Long from a String like the following:
"60".toLong
or convert a Double to a Long like the following:
60.0.toLong
or convert a String containing a Double to a Long like the following:
"60.0".toDouble.toLong
However, I can't do the following
"60.0".toLong
So my question is whether using .toDouble.toLong is best practice, or should I use something like try ... catch ... instead?
Meanwhile, there is another question: when I try to convert a very large Long to a Double, there may be some precision loss; I want to know how to fix that:
"9223372036854775800.31415926535827932".toDouble.toLong
You should wrap the operation in a Try anyway, in case the string is not valid.
What you do inside the Try depends on whether "60.0" is a valid value in your application.
If it is valid, use the two-step conversion.
Try("60.0".toDouble.toLong) // => Success(60)
If it is not valid, use the one-step version.
Try("60.0".toLong) // => Failure(java.lang.NumberFormatException...)
Answer to updated question:
9223372036854775800.31415926535827932 cannot be represented exactly by a Double (a Double carries only about 15-17 significant decimal digits), so converting via toDouble loses precision; you need BigDecimal for that.
Try(BigDecimal("9223372036854775800.31415926535827932").toLong)
However, you are very close to the maximum value of a Long (9223372036854775807), so if the numbers really are that large I suggest avoiding Long and using BigDecimal and BigInt instead.
Try(BigDecimal("9223372036854775800.31415926535827932").toBigInt)
Note that toLong will not fail if the BigDecimal is too large for a Long; it silently gives a wrong value.
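A quick REPL sketch of the behaviors discussed above:
import scala.util.Try

Try("60".toLong)            // Success(60)
Try("60.0".toLong)          // Failure(java.lang.NumberFormatException)
Try("60.0".toDouble.toLong) // Success(60)

val big = BigDecimal("9223372036854775800.31415926535827932")
big.toLong   // 9223372036854775800 -- still within Long range, so exact
big.toBigInt // 9223372036854775800

// Silent truncation: an out-of-range toLong throws nothing, it just returns a wrong value
BigDecimal("99999999999999999999").toLong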

Replacement for deprecated PostgresDataType.JSON?

I'm using JOOQ with PostgreSQL, and trying to implement a query like this:
INSERT INTO dest_table (id, name, custom_data)
SELECT key AS id,
       nameproperty AS name,
       CONCAT('{"propertyA": "', property_a, '", "propertyB": "', property_b, '", "propertyC": "', property_c, '"}')::json AS custom_data
FROM source_table
The concatenation/JSON bit is what I'm here to ask about. I actually have managed to get it working, but only by using this (Kotlin):
val concatBits = mutableListOf<Field<Any>>()
... build up various bits of the concatenation ...
val concatField = concat(*(concatBits.toTypedArray())).cast(PostgresDataType.JSON)
It concerns me that PostgresDataType is deprecated. The documentation says I should use SQLDataType instead, but it has no JSON value.
What's the recommended way to do this?
EDIT: a bit more information ...
I'm building the query like this:
val innerSelectFields = listOf(
    field("key").`as`(DEST_TABLE.ID),
    field("nameproperty").`as`(DEST_TABLE.NAME),
    concatField.`as`(DEST_TABLE.CUSTOM_DATA)
)
val innerSelect = dslContext
    .select(innerSelectFields)
    .from(table("source_table"))
val insertInto = dslContext
    .insertInto(DEST_TABLE)
    .select(innerSelect)
The initial query I posted is slightly misleading, as the resulting SQL from this code doesn't have the
(id,name,custom_data) part.
Also, in case it matters, "source_table" is a temporary table, created during runtime, so there are no autogenerated classes for it.
jOOQ currently doesn't support the JSON data type out of the box. The main reason is that it is unclear what Java type to bind a JSON data structure to, as the JDK doesn't have a standard type for this, and jOOQ will not prefer one third-party library over another.
The currently recommended approach is to create your own custom data type binding for your preferred third party JSON library:
https://www.jooq.org/doc/latest/manual/code-generation/custom-data-type-bindings
In that case, you will no longer need to explicitly cast your bind variable to some JSON type, because your binding will take care of that transparently.
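As a lighter-weight alternative (not the official recommendation above, just a sketch), the cast can be rendered as an inline SQL template, which sidesteps the deprecated PostgresDataType.JSON constant. This assumes jOOQ's DSL.field(String, Class, QueryPart...) overload; shown here in Scala, but the same call works from Kotlin:
import org.jooq.Field
import org.jooq.impl.DSL

// Render "<expr>::json" as plain SQL around an existing field
def asJson(f: Field[_]): Field[Object] =
  DSL.field("{0}::json", classOf[Object], f)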

A safe way of 'overwriting'/'forcing' a certain implicit resolution

I have a custom formatter (say MyFormat) for org.joda.DateTime which provides a Format[org.joda.DateTime].
Play provides a default formatter for the same class in the play.api.libs.json package.
I would like to use MyFormat across my application, and not the one Play provides. I have done this through explicit imports/mixins, which does the trick; however, when those imports get removed through changes or omissions, serialization silently falls back to the Play formatter and I end up with runtime errors. This seems very error prone.
Ultimately I would like for my code not to compile if there is no MyFormat for org.joda.DateTime in scope whenever one is required.
Is there a nice and safe way of doing this?
You have a couple options, but neither of them will be exactly what you want. The crux of the problem is that Scala will look for implicits within related companion objects. For example, if it needs to find an implicit Format[DateTime], it will look within the companion object of Format and the companion object of DateTime. In this case, there is an implicit def within Format that will produce the desired (or in your case, undesired) Format[DateTime] and there's nothing you can do to get rid of it, short of forking and hacking Play.
Workaround 1: Create an object with your custom formatters, and import it globally:
package my.project

object Formats {
  implicit val dateTimeFormat: Format[DateTime] = ???
}
(in other files)
import my.project.Formats._
The Format[DateTime] imported into a local scope will supersede the one resolved from the Format object. However, you can't prevent your code from compiling without it (unless you have some sort of scalastyle rule or something that requires the import).
Workaround 2: Create a wrapper for DateTime with its own Format, and give it an implicit conversion to DateTime.
case class MyDateTime(dt: DateTime)

object MyDateTime {
  implicit val fmt: Format[MyDateTime] = ???
  // Both conversions live in the companion so they are found via implicit scope;
  // a parameterless implicit def inside the class would not act as a view.
  implicit def toDateTime(mdt: MyDateTime): DateTime = mdt.dt
  implicit def fromDateTime(dt: DateTime): MyDateTime = MyDateTime(dt)
}
This is not a complete example, but an idea as to how to implement this. DateTime can be swapped with MyDateTime in most places; however, it may require other implicits revolving around MyDateTime. For example, when I tried using this with Anorm, I needed an implicit ToStatement[MyDateTime], and probably a few others, so that's the obvious disadvantage. However, this would explicitly avoid using the default Format[DateTime].
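A brief usage sketch, assuming a real Format implementation in place of the ??? above:
import org.joda.time.DateTime
import play.api.libs.json.Json

// Serialization now resolves MyDateTime's own Format, never Play's Format[DateTime]
val json = Json.toJson(MyDateTime(DateTime.now))

// The companion's conversions let MyDateTime and DateTime stand in for each other
val dt: DateTime = MyDateTime(DateTime.now)
val wrapped: MyDateTime = DateTime.now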

Anorm: WHERE condition, conditionally

Consider a repository/DAO method like this, which works great:
def countReports(customerId: Long, createdSince: ZonedDateTime) =
  DB.withConnection { implicit c =>
    SQL"""SELECT COUNT(*)
          FROM report
          WHERE customer_id = $customerId
          AND created >= $createdSince
       """.as(scalar[Int].single)
  }
But what if the method is defined with optional parameters:
def countReports(customerId: Option[Long], createdSince: Option[ZonedDateTime])
Point being, if either optional argument is present, use it in filtering the results (as shown above), and otherwise (in case it is None) simply leave out the corresponding WHERE condition.
What's the simplest way to write this method with optional WHERE conditions? As Anorm newbie I was struggling to find an example of this, but I suppose there must be some sensible way to do it (that is, without duplicating the SQL for each combination of present/missing arguments).
Note that the java.time.ZonedDateTime instance maps perfectly and automatically into a Postgres timestamptz when used inside the Anorm SQL call. (Extracting the WHERE condition as a string outside the SQL call, built with normal string interpolation, did not work: toString produces a representation the database does not understand.)
Play 2.4.4
One approach is to set up filter clauses such as
val customerClause =
  if (customerId.isEmpty) ""
  else " and customer_id={customerId}"
then substitute these into your SQL:
SQL(s"""
select count(*)
from report
where true
$customerClause
$createdClause
""")
.on('customerId -> customerId,
'createdSince -> createdSince)
.as(scalar[Int].singleOpt).getOrElse(0)
Using {variable} as opposed to $variable is, I think, preferable, as it reduces the risk of SQL injection attacks where someone potentially calls your method with a malicious string. Anorm doesn't mind if you bind symbols that aren't referenced in the SQL (i.e. if a clause string is empty). Lastly, depending on the database(?), a count might return no rows, so I use singleOpt rather than single.
I'm curious as to what other answers you receive.
Edit: Anorm interpolation (i.e. SQL"...", an interpolation implementation beyond Scala's s"...", f"..." and raw"...") was introduced to allow using $variable as equivalent to {variable} with .on. From Play 2.4, Scala and Anorm interpolation can be mixed, using $ for Anorm (SQL parameter/variable) and #$ for Scala (plain string). And indeed this works well, as long as the Scala-interpolated string does not contain references to an SQL parameter. The only way I could find, in 2.4.4, to use a variable in a Scala-interpolated string when using Anorm interpolation was:
val limitClause = if (nameFilter == "") "" else s"where name>'$nameFilter'"
SQL"select * from tab #$limitClause order by name"
But this is vulnerable to SQL injection (e.g. a string like it's will cause a runtime syntax exception). So, for variables inside interpolated strings, it seems necessary to use the "traditional" .on approach with only Scala interpolation:
val limitClause = if (nameFilter == "") "" else "where name>{nameFilter}"
SQL(s"select * from tab $limitClause order by name").on('nameFilter -> nameFilter)
Perhaps in the future Anorm interpolation could be extended to parse the interpolated string for variables?
Edit2: I'm finding there are some tables where the number of attributes that might or might not be included in the query changes from time to time. For these cases I'm defining a context class, e.g. CustomerContext. In this case class there are lazy vals for the different clauses that affect the sql. Callers of the sql method must supply a CustomerContext, and the sql will then have inclusions such as ${context.createdClause} and so on. This helps give a consistency, as I end up using the context in other places (such as total record count for paging, etc.).
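For illustration, a minimal sketch of that context idea; CustomerContext and the clause names here are illustrative, not taken from real code:
import java.time.ZonedDateTime

// Hypothetical context bundling the optional filters with their SQL fragments
case class CustomerContext(customerId: Option[Long], createdSince: Option[ZonedDateTime]) {
  lazy val customerClause: String =
    if (customerId.isEmpty) "" else " and customer_id={customerId}"
  lazy val createdClause: String =
    if (createdSince.isEmpty) "" else " and created >= {createdSince}"
}

// Callers then interpolate ${context.customerClause} etc. into the SQL string and
// bind with .on('customerId -> context.customerId, 'createdSince -> context.createdSince)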
Finally got this simpler approach posted by Joel Arnold to work in my example case, also with ZonedDateTime!
def countReports(customerId: Option[Long], createdSince: Option[ZonedDateTime]) =
  DB.withConnection { implicit c =>
    SQL("""
        SELECT count(*) FROM report
        WHERE ({customerId} is null or customer_id = {customerId})
        AND ({created}::timestamptz is null or created >= {created})
        """)
      .on('customerId -> customerId, 'created -> createdSince)
      .as(scalar[Int].singleOpt).getOrElse(0)
  }
The tricky part is having to use {created}::timestamptz in the null check. As Joel commented, this is needed to work around a PostgreSQL driver issue.
Apparently the cast is needed only for timestamp types, and the simpler way ({customerId} is null) works with everything else. Also, comment if you know whether other databases require something like this, or if this is a Postgres-only peculiarity.
(While wwkudu's approach also works fine, this definitely is cleaner, as you can see comparing them side to side in a full example.)

How to format strings in Scala?

I need to print a formatted string containing scala.Long.
java.lang.String.format() is incompatible with scala.Long (a compile-time error) and RichLong (java.util.IllegalFormatConversionException at runtime).
Compiler warns about deprecation of Integer on the following working code:
val number:Long = 3243
String.format("%d", new java.lang.Long(number))
Should I change the formatter, the data type, or something else?
You can try something like:
val number: Long = 3243
"%d".format(number)
The format method in Scala exists directly on instances of String, so you don't need/want the static class method. You also don't need to manually box the long primitive; let the compiler take care of all that for you!
String.format("%d", new java.lang.Integer(number))
is therefore better written as
"%d".format(number)
@Bruno's answer is what you should use in most cases.
If you must use a Java method to do the formatting, use
String.format("%d",number.asInstanceOf[AnyRef])
which will box the Long nicely for Java.