Jerkson started throwing a really strange error that I haven't seen before.
com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class scala.runtime.BoxedUnit and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.SerializationFeature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: scala.collection.MapWrapper["data"])
I'm parsing some basic data from an API. The class I've defined is:
case class Segmentation(
  @(JsonProperty@field)("legend_size")
  val legend_size: Int,
  @(JsonProperty@field)("data")
  val data: Data
)
and Data looks like:
case class Data(
  @(JsonProperty@field)("series")
  val series: List[String],
  @(JsonProperty@field)("values")
  val values: Map[String, Map[String, Any]]
)
Any clue why this would be triggering errors? Seems like a simple class that Jerkson can handle.
Edit: sample data:
{"legend_size": 1, "data": {"series": ["2013-04-06", "2013-04-07", "2013-04-08", "2013-04-09", "2013-04-10", "2013-04-11", "2013-04-12", "2013-04-13", "2013-04-14", "2013-04-15"], "values": {"datapoint": {"2013-04-12": 0, "2013-04-15": 4, "2013-04-14": 0, "2013-04-08":
0, "2013-04-09": 0, "2013-04-11": 0, "2013-04-10": 0, "2013-04-13": 0, "2013-04-06": 0, "2013-04-07": 0}}}}
This isn't the answer to the example above, but I'm going to offer it because it was the answer to my similar "BoxedUnit" scenario:
No serializer found for class scala.runtime.BoxedUnit and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS)
In my case Jackson was complaining about serializing an instance of a scala.runtime.BoxedUnit object.
Q: So what is scala.runtime.BoxedUnit?
A: It's the Java representation of Scala's "Unit". The core part of Jackson (which is Java code) is attempting to serialize this Java representation of the Scala Unit non-entity.
Q: So why was this happening?
A: In my case this was a downstream side effect of a buggy method with no declared return type. The method in question wrapped a match expression that (unintentionally) didn't return a value in every case. Because of that, Scala inferred the type of the var capturing the method's result as Unit. Later in the code, when this var got serialized into JSON, the Jackson error occurred.
So if you are getting an issue like this, my advice would be to examine any implicitly typed vars and any methods without declared return types, and make sure they are returning what you think they are.
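For illustration, here is a minimal sketch of that failure mode (the method and variable names are made up): every case of the match ends in a statement rather than an expression, so the method's inferred return type is Unit, and the value captured downstream is a BoxedUnit as far as Jackson is concerned.

def buildPayload(kind: String) = {
  var payload = Map.empty[String, Any]
  kind match {
    case "simple"  => payload = Map("kind" -> "simple")             // assignment: Unit
    case "complex" => payload = Map("kind" -> "complex", "n" -> 1)  // assignment: Unit
    case other     => println("unknown kind: " + other)             // side effect: Unit
  }
  // the method forgets to end with `payload`, so its last expression is the
  // match itself, whose type is Unit -- and so the method returns Unit
}

val data = buildPayload("simple") // data: Unit = ()
// putting `data` into a structure that later gets serialized to JSON triggers
// "No serializer found for class scala.runtime.BoxedUnit"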
I had the same exception. what caused it in my case is that I defined an apply method in the companion object without '=':
object Somthing {
  def apply(s: SomthingElse) {
    ...
  }
}
instead of
object Somthing {
  def apply(s: SomthingElse) = {
    ...
  }
}
That caused the apply method's return type to be Unit, which caused the exception when I passed the object to Jackson.
Not sure if that is the case in your code or if this question is still relevant but this might help others with this kind of problem.
It's been a while since I first posted this question. The solution, as of writing this answer, appears to be to move on from Jerkson and use jackson-module-scala or Json4s with the Jackson backend. Many Scala types are covered by the default serializers and are handled natively.
In addition, the reason I'm seeing BoxedUnit is that the declared type Jerkson was seeing was Any (as part of Map[String, Map[String, Any]]). Any is a base type and doesn't give Jerkson/Jackson any information about what it's serializing, so it complains about a missing serializer.
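For completeness, here is a minimal sketch of the jackson-module-scala route for the sample data above. The object name is mine, and I'm assuming the JSON field names match the case class fields, so the JsonProperty annotations are no longer needed:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

case class Data(series: List[String], values: Map[String, Map[String, Any]])
case class Segmentation(legend_size: Int, data: Data)

object SegmentationJson {
  // DefaultScalaModule teaches Jackson about case classes, Scala collections, Option, etc.
  private val mapper = new ObjectMapper().registerModule(DefaultScalaModule)

  def parse(json: String): Segmentation =
    mapper.readValue(json, classOf[Segmentation])

  def render(seg: Segmentation): String =
    mapper.writeValueAsString(seg)
}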
Related
Because I am dealing with generic types, I can't use specific case classes, so I created a generic util which serializes and deserializes generic objects.
import org.json4s
import org.json4s.Formats._
import org.json4s.native.JsonMethods._

object JsonHelper {
  def json2Object[O](input: String): O = {
    parse(json4s.string2JsonInput(input)).asInstanceOf[O]
  }

  def object2Json[O](input: O): String = {
    write(input).toString
  }
}
The compiler throws the error:
No JSON serializer found for type O. Try to implement an implicit Writer or JsonFormat for this type.
write(input).toString
I would expect this to be thrown at runtime, so why is it thrown at compile time?
In a comment above, you asked "So how jackson can work with java object? It use reflection right? And why it's different from Scala?", which gets to the heart of this question.
The json4s "native" serializer you have imported uses compile-time reflection to create the Writer.
Jackson uses run-time reflection to do the same.
The compile-time version is more efficient; the run-time version is more flexible.
To use the compile-time version, you need to let the compiler have enough information to choose the correct Writer based on the declared type of the object to be serialized. This will rule out very generic writer methods like the one you propose. See @TimP's answer for how to fix your code for that version.
To use the run-time version, you can use Jackson via the org.json4s.jackson.JsonMethods._ package. See https://github.com/json4s/json4s#jackson
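For example, a rough sketch of the run-time route using json4s's reflection-based Serialization with the Jackson backend (the helper mirrors the one in the question; the DefaultFormats/Manifest plumbing is the usual json4s pattern, and Person is a made-up example type):

import org.json4s.{DefaultFormats, Formats}
import org.json4s.jackson.Serialization

object JsonHelper {
  // one Formats instance and run-time reflection instead of a Writer per type
  implicit val formats: Formats = DefaultFormats

  def json2Object[O: Manifest](input: String): O =
    Serialization.read[O](input)

  def object2Json[O <: AnyRef](input: O): String =
    Serialization.write(input)
}

case class Person(name: String, age: Int)
JsonHelper.object2Json(Person("Ada", 36))                      // {"name":"Ada","age":36}
JsonHelper.json2Object[Person]("""{"name":"Ada","age":36}""")  // Person(Ada,36)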
The compiler error you posted comes from this location in the json4s code. The write function you're calling takes an implicit JSON Writer, which is how the method can take arbitrary types. It's caught at compile time because implicit arguments are compiled the same way explicit ones are -- it's as if you had:
def f(a: Int, b: Int) = a + b
f(5) // passed the wrong number of arguments
I'm having a bit of trouble seeing exactly which write method you're calling here -- the json4s library is pretty big and things are overloaded. Can you paste the declared write method you're using? It almost certainly has a signature like this:
def write[T](value: T)(implicit writer: Writer[T]): JValue
If it looks like the above, try including the implicit writer parameter in your method as so:
object JsonHelper {
  def json2Object[O](input: String)(implicit reader: Reader[O]): O = {
    parse(json4s.string2JsonInput(input)).asInstanceOf[O]
  }

  def object2Json[O](input: O)(implicit writer: Writer[O]): String = {
    write(input).toString
  }
}
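For what it's worth, a sketch of what the call site then looks like with a hand-written implicit in scope (Person and its Writer are made up; Writer and the JValue constructors are json4s's own):

import org.json4s._

case class Person(name: String, age: Int)

// an implicit Writer instance the compiler can now resolve for object2Json
implicit val personWriter: Writer[Person] = new Writer[Person] {
  def write(p: Person): JValue =
    JObject("name" -> JString(p.name), "age" -> JInt(p.age))
}

JsonHelper.object2Json(Person("Ada", 36))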
In this example you are dealing with generic types. Scala, like other JVM languages, erases generic type information at compile time (which is why the compile-time error message says nothing about the concrete type), so try appending this fragment to the signature of both methods:
(implicit tag: ClassTag[O])
It's similar to your example with generics, but using Jackson.
HTH
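As a rough sketch of that idea (names are mine; Jackson plus its Scala module doing the run-time work, with a ClassTag recovering the erased class):

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import scala.reflect.ClassTag

object JacksonHelper {
  private val mapper = new ObjectMapper().registerModule(DefaultScalaModule)

  // the ClassTag keeps the runtime class of O available despite type erasure;
  // plain case classes work, deeply generic types would need a TypeReference instead
  def json2Object[O](input: String)(implicit tag: ClassTag[O]): O =
    mapper.readValue(input, tag.runtimeClass.asInstanceOf[Class[O]])

  def object2Json[O](input: O): String =
    mapper.writeValueAsString(input)
}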
I am using the Scala API of Flink. I have some transformations over reports, a DataStream[Tuple15] (the Tuple15 is a Scala tuple and all the fields are Int). The issue is located here:
reports
  .filter(_._1 == 0) // some filter
  .map( x => (x._3, x._4, x._5, x._7, x._8))
    (TypeInformation.of(classOf[(Int,Int,Int,Int,Int)])) // keep only 5 fields as a Tuple5
  .keyBy(2,3,4) // the error is in apply, but I think related to this somehow
  .timeWindow(Time.minutes(5), Time.minutes(1))
  // the line under is line 107, where the error is
  .apply( (tup, timeWindow, iterable, collector: Collector[(Int, Int, Int, Float)]) => {
    ...
  })
The error states:
InvalidProgramException: Specifying keys via field positions is only valid for
tuple data types. Type: GenericType<scala.Tuple5>
Whole error trace (I marked the line pointing to the error, line 107, corresponding to the apply method on the code above):
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: Specifying keys via field positions is only valid for tuple data types. Type: GenericType<scala.Tuple5>
at org.apache.flink.api.common.operators.Keys$ExpressionKeys.<init>(Keys.java:217)
at org.apache.flink.api.common.operators.Keys$ExpressionKeys.<init>(Keys.java:208)
at org.apache.flink.streaming.api.datastream.DataStream.keyBy(DataStream.java:256)
at org.apache.flink.streaming.api.scala.DataStream.keyBy(DataStream.scala:289)
here -> at du.tu_berlin.dima.bdapro.flink.linearroad.houcros.LinearRoad$.latestAverageVelocity(LinearRoad.scala:107)
at du.tu_berlin.dima.bdapro.flink.linearroad.houcros.LinearRoad$.main(LinearRoad.scala:46)
at du.tu_berlin.dima.bdapro.flink.linearroad.houcros.LinearRoad.main(LinearRoad.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
But this doesn't make sense to me. I am using a tuple type, am I not? Or what is the deal with the GenericType<...>?
And how should I fix the map to make the keyBy work?
The reason is that TypeInformation.of belongs to the Java API and, thus, does not know about Scala tuples. Therefore, it returns a GenericType which cannot be used as the input for a keyBy operation with field positions.
If you want to generate the Scala tuple type information manually, you have to use the createTypeInformation method which is contained in the org.apache.flink.api.scala/org.apache.flink.streaming.api.scala package object.
But if you import the package object, then there is no need to specify the type information manually, since the TypeInformation is a context bound of the map operation and createTypeInformation is an implicit function.
The following code snippet shows the idiomatic way to deal with TypeInformations.
import org.apache.flink.streaming.api.scala._

reports
  .filter(_._1 == 0) // some filter
  .map( x => (x._3, x._4, x._5, x._7, x._8)) // keep only 5 fields as a Tuple5
  .keyBy(2,3,4)
  .timeWindow(Time.minutes(5), Time.minutes(1))
  .apply( (tup, timeWindow, iterable, collector: Collector[(Int, Int, Int, Float)]) => {
    ...
  })
I also encountered the same problem and was able to fix it as follows:
Use the tuple classes from the Flink API, i.e. import org.apache.flink.api.java.tuple.Tuple15 instead of scala.Tuple15.
Please check your import section and correct it.
Here I used the Flink Java API. In the case of Scala, import the org.apache.flink.api.scala._ package.
Well, after much time spent, I actually got it to work simply by removing the TypeInformation. So, changing this:
.map( x => (x._3, x._4, x._5, x._7, x._8))(TypeInformation.of(classOf[(Int,Int,Int,Int,Int)]))
to this:
.map( x => (x._3, x._4, x._5, x._7, x._8))
Nonetheless, I assume this solution is kind of a hack, because I'm still getting warnings (well, INFO logs) from Flink:
00:22:18,662 INFO org.apache.flink.api.java.typeutils.TypeExtractor - class scala.Tuple15 is not a valid POJO type
00:22:19,254 INFO org.apache.flink.api.java.typeutils.TypeExtractor - class scala.Tuple4 is not a valid POJO type
So, if there is any more general answer, I'll be happy to accept it. Until then, this worked for me.
UPDATE
I had tried this before and it didn't work. I just realised that it's now working thanks to the answer from @Till. So, as well as what I stated above, you need to import either org.apache.flink.streaming.api.scala.createTypeInformation or org.apache.flink.api.scala.createTypeInformation (not both!).
AggregateOperator supports only Flink tuples. If you are facing this issue, first check your imports: if you are importing scala.Tuple2, that's wrong; it should be org.apache.flink.api.java.tuple.Tuple2.
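In other words, a tiny sketch of the import that matters (the values are arbitrary):

// Scala's tuple -- the Java API's type extraction treats this as a GenericType
val scalaPair: (String, Int) = ("speed", 42)

// Flink's own tuple type, which the Java API understands for positional keys
import org.apache.flink.api.java.tuple.Tuple2
val flinkPair: Tuple2[String, Integer] = new Tuple2[String, Integer]("speed", 42)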
I am new to Play and Scala as well, and I ran into a problem with i18n while reading the book "Play with Scala". The problem is the Messages object, which has to be obtained in every template for the application to work properly.
What bothers me is that even if I don't use the Messages object in one of my Scala template files, but only inherit/call another template from it, I still have to add (implicit messages: Messages) at the top of the file.
Can somebody explain why that is? Is it necessary to add the Messages object in every template? It's quite problematic, and I am sure it can be solved somehow.
This is not a Play Framework specific problem, it is just how implicit parameters work in Scala (see Understanding implicit in Scala).
Take the following function which "magically" adds a number to a list of numbers.
def addMagic[A](numbers: List[Int])(implicit add: Int) = numbers.map(_ + add)
We can use addMagic as follows:
{
  implicit val magicNumber = 42
  addMagic(List(1, 2, 3))
  // List[Int] = List(43, 44, 45)
}
If we use addMagic in another function without passing an implicit Int:
def printMagicNumbers(numbers: List[Int]) = println(addMagic(numbers))
we get the following error:
error: could not find implicit value for parameter add: Int
So we also need to add an implicit parameter to printMagicNumbers:
def printMagicNumbers(numbers: List[Int])(implicit add: Int) =
  println(addMagic(numbers))
In the same manner your template function needs an implicit Messages object if it calls a template function which needs the Messages object.
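A hedged sketch of how this looks in Twirl (file names and message keys are made up; assuming a Play version where Messages itself is callable, e.g. Play 2.6+):

@* child.scala.html -- actually uses the messages *@
@(name: String)(implicit messages: Messages)
<p>@messages("greeting.hello") @name</p>

@* parent.scala.html -- never touches Messages directly, but must still declare
   the implicit so it can be forwarded to child *@
@(name: String)(implicit messages: Messages)
@child(name)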
I'm struggling to write custom case classes to Cassandra (2.1.6) using Spark (1.4.0). So far, I've tried this using the DataStax spark-cassandra-connector 1.4.0-M1 and the following case classes:
case class Event(event_id: String, event_name: String, event_url: String, time: Option[Long])
[...]
case class RsvpResponse(event: Event, group: Group, guests: Long, member: Member, mtime: Long, response: String, rsvp_id: Long, venue: Option[Venue])
In order to make this work, I've also implemented the following converter:
implicit object EventToUDTValueConverter extends TypeConverter[UDTValue] {
  def targetTypeTag = typeTag[UDTValue]

  def convertPF = {
    case e: Event => UDTValue.fromMap(toMap(e)) // toMap just transforms the case class into a Map[String, Any]
  }
}
TypeConverter.registerConverter(EventToUDTValueConverter)
If I look up the converter manually, I can use it to convert an instance of Event into UDTValue, however, when using sc.saveToCassandra passing it an instance of RsvpResponse with related objects, I get the following error:
15/06/23 23:56:29 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
com.datastax.spark.connector.types.TypeConversionException: Cannot convert object Event(EVENT9136830076436652815,First event,http://www.meetup.com/first-event,Some(1435100185774)) of type class model.Event to com.datastax.spark.connector.UDTValue.
at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
at com.datastax.spark.connector.types.UserDefinedType$$anon$1$$anonfun$convertPF$1.applyOrElse(UserDefinedType.scala:33)
at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
at com.datastax.spark.connector.types.UserDefinedType$$anon$1.convert(UserDefinedType.scala:31)
at com.datastax.spark.connector.writer.DefaultRowWriter$$anonfun$readColumnValues$2.apply(DefaultRowWriter.scala:46)
at com.datastax.spark.connector.writer.DefaultRowWriter$$anonfun$readColumnValues$2.apply(DefaultRowWriter.scala:43)
It seems my converter is never even getting called because of the way the connector library is handling UDTValue internally. However, the solution described above does work for reading data from Cassandra tables (including user defined types). Based on the connector docs, I also replaced my nested case classes with com.datastax.spark.connector.UDTValue types directly, which then fixes the issue described, but breaks reading the data. I can't imagine I'm meant to define 2 separate models for reading and writing data. Or am I missing something obvious here?
Since version 1.3 of the connector, there is no need to use custom type converters to load and save nested UDTs. Just model everything with case classes, stick to the field naming convention, and you should be fine.
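For reference, a minimal sketch of what that looks like (the keyspace, table, and trimmed-down case classes here are hypothetical; the only requirement is that the case class fields line up with the table and UDT fields):

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

case class Event(event_id: String, event_name: String, event_url: String, time: Option[Long])
case class Rsvp(rsvp_id: Long, event: Event, guests: Long) // `event` maps to a UDT column

val conf = new SparkConf()
  .setAppName("rsvp-save")
  .setMaster("local[*]")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext(conf)

val rsvps = sc.parallelize(Seq(
  Rsvp(1L, Event("EVENT1", "First event", "http://www.meetup.com/first-event", Some(1435100185774L)), 2L)
))

// no custom TypeConverter registration needed with connector >= 1.3
rsvps.saveToCassandra("meetup", "rsvps")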
I'm trying to implement a controller in Play2 which exposes a simple REST-style API for my database tables. I'm using Squeryl for database access and spray-json for converting objects to/from JSON.
My idea is to have a single generic controller to do all the work, so I've set up the following routes in conf/routes:
GET /:tableName controllers.Crud.getAll(tableName)
GET /:tableName/:primaryKey controllers.Crud.getSingle(tableName, primaryKey)
.. and the following controller:
object Crud extends Controller {
  def getAll(tableName: String) = Action {..}
  def getSingle(tableName: String, primaryKey: Long) = Action {..}
}
(Yes, missing create/update/delete, but let's get read to work first)
I've mapped tables to case classes by extending Squeryl's Schema:
object MyDB extends Schema {
  val accountsTable = table[Account]("accounts")
  val customersTable = table[Customer]("customers")
}
And I've told spray-json about my case classes so it knows how to convert them.
object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val accountFormat = jsonFormat8(Account)
  implicit val customerFormat = jsonFormat4(Customer)
}
So far so good; it actually works pretty well as long as I'm using the table instances directly. The problem surfaces when I try to generify the code so that I end up with exactly one controller for accessing all tables: I'm stuck with a piece of code that doesn't compile, and I'm not sure what the next step is.
It seems to be a type issue with spray-json which occurs when I'm trying to convert the list of objects to JSON in my getAll function.
Here is my generic attempt:
def getAll(tableName: String) = Action {
  val json = inTransaction {
    // lookup table based on url
    val table = MyDB.tables.find( t => t.name == tableName).get
    // execute select all and convert to json
    from(table)(t =>
      select(t)
    ).toList.toJson // causes compile error
  }
  // convert json to string and set correct content type
  Ok(json.compactPrint).as(JSON)
}
Compile error:
[error] /Users/code/api/app/controllers/Crud.scala:29:
Cannot find JsonWriter or JsonFormat type class for List[_$2]
[error] ).toList.toJson
[error] ^
[error] one error found
I'm guessing the problem could be that the JSON library needs to know at compile time which model type I'm throwing at it, but I'm not sure (notice the List[_$2] in that compile error). I have tried the following changes to the code, both of which compile and return results:
Remove the generic table lookup (MyDB.tables.find(.....).get) and instead use a specific table instance, e.g. MyDB.accountsTable. This proves that the JSON serialization works. However, it is not generic and would require a separate controller and route config per table in the database.
Convert the list of objects from the DB query to a string before calling toJson, i.e. toList.toJson --> toList.toString.toJson. This proves that the generic lookup of tables works, but it's not a proper JSON response since it's a string-serialized list of objects.
Thoughts anyone?
Your guess is correct. MyDB.tables is a Seq[Table[_]]; in other words, it could hold any type of table. There is no way for the compiler to figure out the type of the table you locate using the find method, and that type is needed for the JSON conversion. There are ways to get around that, but you'd need some kind of access to the model class.
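One hedged way around it, reusing the names from the question (and assuming the question's Squeryl and Play imports are in scope): pre-build a map from table name to a closure that runs the query and the JSON conversion. Each closure is compiled against a concrete table type and picks up its JsonFormat statically, while the lookup by name stays dynamic.

import spray.json._
import MyJsonProtocol._

// each closure captures a concrete table, so the right JsonFormat is resolved
// at compile time; only the name-based lookup happens at run time
val tableToJson: Map[String, () => JsValue] = Map(
  "accounts"  -> (() => inTransaction { from(MyDB.accountsTable)(t => select(t)).toList.toJson }),
  "customers" -> (() => inTransaction { from(MyDB.customersTable)(t => select(t)).toList.toJson })
)

def getAll(tableName: String) = Action {
  tableToJson.get(tableName) match {
    case Some(toJson) => Ok(toJson().compactPrint).as(JSON)
    case None         => NotFound("unknown table: " + tableName)
  }
}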