Kotlin - how to get annotation attribute value

Say I have a Kotlin class with annotations:
@Entity @Table(name = "user") data class User(val id: Long, val name: String)
How can I get the value of the name attribute from the @Table annotation?
fun <T : Any> tableName(c: KClass<T>): String {
// I can get the @Table annotation like this:
val t = c.annotations.find { it.annotationClass == Table::class }
// but how can I get the value of the "name" attribute from t?
}

You can simply:
val table = c.annotations.find { it is Table } as? Table
println(table?.name)
Note that I used the is operator since the annotation has RUNTIME retention and is therefore an actual instance of the Table annotation within the collection. But the following works for any annotation:
val table = c.annotations.find { it.annotationClass == Table::class } as? Table

Related

Fields of type Record are inserted as null in BigQuery

I have a Scala job where I need to insert a nested JSON file into BigQuery. The solution for that is to create a BQ table with the field type Record for the nested fields.
I wrote a case class that looks like this:
case class AvailabilityRecord(
nestedField: NestedRecord,
timezone: String,
) {
def toMap(): java.util.Map[String, Any] = {
val map = new java.util.HashMap[String, Any]
map.put("nestedField", nestedField)
map.put("timezone", timezone)
map
}
}
case class NestedRecord(
from: String,
to: String
)
I'm using the Java dependency "com.google.cloud" % "google-cloud-bigquery" % "2.11.0", in my program.
When I try to insert the JSON value that I parsed into the case class into BQ, the value of the timezone field of type String is inserted; however, the nested field of type Record is inserted as null.
For insertion, I'm using the following code:
def insertData(records: Seq[AvailabilityRecord], gcpService: GcpServiceImpl): Task[Unit] = Task.defer {
val recordsToInsert = records.map(record => InsertBigQueryRecord("XY", record.toMap()))
gcpService.insertIntoBq(recordsToInsert, TableId.of("dataset", "table"))
}
override def insertIntoBq(records: Iterable[InsertBigQueryRecord],
tableId: TableId): Task[Unit] = Task {
val builder = InsertAllRequest.newBuilder(tableId)
records.foreach(record => builder.addRow(record.key, record.record))
bqContext.insertAll(builder.build)
}
What might be causing fields of Record type to be inserted as null?
The issue was that I needed to map the nested case class too, because the Java API doesn't know how to serialize a Scala case class object.
This is what solved the issue for me:
case class NestedRecord(
from: String,
to: String
) {
def toMap(): java.util.Map[String, String] = {
val map = new java.util.HashMap[String, String]
map.put("from", from)
map.put("to", to)
map
}
}
And in the parent case class, the edit would take place in the toMap method:
map.put("nestedField", nestedField.toMap)
map.put("timezone", timezone)

Write RDD[entity] in cassandra from Spark

I am trying to write an RDD that contains a plain (non-case) class to Cassandra with Spark:
class Test(private var id: String, private var randomNumber: Integer, private var lastUpdate: Instant) {
def setId(id: String): Unit = { this.id = id }
def getId: String = { this.id }
def setLastUpdate(lastUpdate: Instant): Unit = { this.lastUpdate = lastUpdate }
def getLastUpdate: Instant = { this.lastUpdate }
def setRandomNumber(number: Integer): Unit = { this.randomNumber = number }
def getRandomNumber: Integer = { this.randomNumber }
}
This class has all the setters and getters to maintain encapsulation, and I need it to not be a case class because I have to modify the values during transformations.
The table corresponding to this entity in Cassandra has slightly different names for the fields:
CREATE TABLE IF NOT EXISTS test.test (
id uuid,
random_number int,
last_update timestamp,
PRIMARY KEY (id)
)
I am trying to write this RDD with the method saveToCassandra
implicit val connector = CassandraConnector(sc.getConf)
val rdd: RDD[Test]
rdd.saveToCassandra("test", "test")
but the method throws an exception because the attribute names of the class do not match the column names in the table:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Columns not found in entity.Test: [id, random_number, last_update]
at scala.Predef$.require(Predef.scala:277)
at com.datastax.spark.connector.mapper.DefaultColumnMapper.columnMapForWriting(DefaultColumnMapper.scala:106)
at com.datastax.spark.connector.mapper.MappedToGettableDataConverter$$anon$1.<init>(MappedToGettableDataConverter.scala:35)
at com.datastax.spark.connector.mapper.MappedToGettableDataConverter$.apply(MappedToGettableDataConverter.scala:26)
at com.datastax.spark.connector.writer.DefaultRowWriter.<init>(DefaultRowWriter.scala:16)
at com.datastax.spark.connector.writer.DefaultRowWriter$$anon$1.rowWriter(DefaultRowWriter.scala:30)
at com.datastax.spark.connector.writer.DefaultRowWriter$$anon$1.rowWriter(DefaultRowWriter.scala:28)
at com.datastax.spark.connector.writer.TableWriter$.apply(TableWriter.scala:433)
at com.datastax.spark.connector.writer.TableWriter$.apply(TableWriter.scala:417)
at com.datastax.spark.connector.RDDFunctions.saveToCassandra(RDDFunctions.scala:35)
How can I write the entity to Cassandra without having to name the attributes the same as the table columns, given that the attributes are private in the class?
saveToCassandra allows you to provide an optional ColumnSelector:
def saveToCassandra(
keyspaceName: String,
tableName: String,
columns: ColumnSelector = AllColumns,
writeConf: WriteConf = WriteConf.fromSparkConf(sparkContext.getConf))(...): Unit
In your case you could use the following selector:
def selector = SomeColumns(
ColumnName("id"),
ColumnName("random_number", alias = Some("randomNumber")),
ColumnName("last_update", alias = Some("lastUpdate"))
)
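The selector is then passed as the third argument, so the call from the question becomes:
rdd.saveToCassandra("test", "test", selector)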
By the way, while not the typical (or recommended) use of a case class, you could absolutely define its fields as vars and benefit from using a typed Dataset. That makes it very easy to rename fields before writing to Cassandra, as sketched below.
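A minimal sketch of that alternative, assuming a SparkSession with spark.implicits._ in scope and the connector's DataFrame support (the names and types here are illustrative; java.sql.Timestamp replaces Instant since Instant encoders only exist in newer Spark versions):
case class Test(var id: String, var randomNumber: Integer, var lastUpdate: java.sql.Timestamp)
val ds = rdd.toDS()
ds.withColumnRenamed("randomNumber", "random_number")
.withColumnRenamed("lastUpdate", "last_update")
.write
.format("org.apache.spark.sql.cassandra")
.options(Map("keyspace" -> "test", "table" -> "test"))
.mode("append")
.save()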

SpringDataForCassandra: NullPointerEx in Entity when an Int column is set null in DB

I'm trying to map my table 'User' to a Scala domain object called UserEntity.
An Int column named CreatedBy in the User table has null values; when calling userRepository.findOne() or .findAll(), I'm encountering a NullPointerException.
Updating the values of the Int column to any valid Int resolves the issue. However, in our use case we need to keep nulls in some int columns when no value is available.
Here's my UserRepository trait, the Entity object, and Controller.
UserRepository.scala
import org.springframework.data.repository.CrudRepository
import org.springframework.stereotype.Repository
@Repository
trait UserRepository extends CrudRepository[UserEntity, Int] {
}
UserEntity.scala
import java.util.Date
import org.springframework.data.cassandra.core.cql.PrimaryKeyType
import org.springframework.data.cassandra.core.mapping.{PrimaryKey, PrimaryKeyColumn, Table}
@Table(value = "User")
class UserEntity extends Serializable {
@PrimaryKey
var userID: Int = _
@PrimaryKeyColumn(`type` = PrimaryKeyType.CLUSTERED)
var enterpriseID: Int = _
var createdBy: Int = _
}
CoreController.scala
@RestController
@RequestMapping(path = Array("/hello"))
@ResponseBody
class CoreController {
@Autowired
var userRepository: UserRepository = null
@GetMapping(Array("/"))
def getUser(): ResponseBody = {
var a: UserEntity = userRepository.findById(4000).get() // NPE here
null
}
}
What I've tried:
1. Using the Option data type for the createdBy column.
2. Modifying the setter for CreatedBy to set -1 if the value in the argument is null.
Any help will be greatly appreciated. Thanks.
If you have already tried the Option data type, then I highly recommend using Try to handle the error in a functional way.
You can rewrite your code as follows:
import scala.util.Try
val res: Option[UserEntity] = Try {
userRepository.findById(4000).get()
} match {
case scala.util.Success(value) => Some(value)
case scala.util.Failure(exception) => None
}
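Since Try already provides a toOption helper, the same logic can be written more compactly:
import scala.util.Try
val res: Option[UserEntity] = Try(userRepository.findById(4000).get()).toOption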

Scala serialization exception with Enumeration Value

I'm using the Play 2.1 framework for Scala and the MongoDB Salat plugin.
When I update an Enumeration.Value I get an exception:
java.lang.IllegalArgumentException: can't serialize class scala.Enumeration$Val
at org.bson.BasicBSONEncoder._putObjectField(BasicBSONEncoder.java:270) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder.putIterable(BasicBSONEncoder.java:295) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder._putObjectField(BasicBSONEncoder.java:234) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder.putObject(BasicBSONEncoder.java:174) ~[mongo-java-driver-2.11.1.jar:na]
at org.bson.BasicBSONEncoder.putObject(BasicBSONEncoder.java:120) ~[mongo-java-driver-2.11.1.jar:na]
at com.mongodb.DefaultDBEncoder.writeObject(DefaultDBEncoder.java:27) ~[mongo-java-driver-2.11.1.jar:na]
Inserting the Enumeration.Value works fine. My case class looks like:
case class User(
@Key("_id") id: ObjectId = new ObjectId,
username: String,
email: String,
@EnumAs language: Language.Value = Language.DE,
balance: Double,
added: Date = new Date)
and my update code:
object UserDAO extends ModelCompanion[User, ObjectId] {
val dao = new SalatDAO[User, ObjectId](collection = mongoCollection("users")) {}
def update(): WriteResult = {
UserDAO.dao.update(q = MongoDBObject("_id" -> new ObjectId(id)), o = MongoDBObject("$set" -> MongoDBObject("language" -> Language.EN)))
}
}
Any ideas how to get that working?
EDIT:
Workaround: it works if I convert the Enumeration.Value with toString, but that's not how it should be...
UserDAO.dao.update(q = MongoDBObject("_id" -> new ObjectId(id)), o = MongoDBObject("$set" -> MongoDBObject("language" -> Language.EN.toString)))
It is possible to add a BSON encoding hook for Enumeration, so the conversion is done in a transparent manner.
Here is the code:
import org.bson.{BSON, Transformer}
import com.mongodb.casbah.commons.conversions.scala.RegisterConversionHelpers
RegisterConversionHelpers()
custom()
def custom() {
val transformer = new Transformer {
def transform(o: AnyRef): AnyRef = o match {
case e: Enumeration#Value => e.toString // Enumeration$Val is not reachable as a Scala type; Value covers it
case _ => o
}
}
BSON.addEncodingHook(classOf[Enumeration#Value], transformer)
}
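With the hook registered (call custom() once at application startup), the original update can pass the enum value directly, without the toString workaround:
UserDAO.dao.update(q = MongoDBObject("_id" -> new ObjectId(id)), o = MongoDBObject("$set" -> MongoDBObject("language" -> Language.EN)))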
At the time of writing, MongoDB doesn't play nice with Scala enums; I use a decorator method as a workaround.
Say you have this enum:
object EmployeeType extends Enumeration {
type EmployeeType = Value
val Manager, Worker = Value
}
and this MongoDB record:
import EmployeeType._
case class Employee(
id: ObjectId = new ObjectId
)
In your mongoDB, store the integer index of the enum instead of the enum itself:
case class Employee(
id: ObjectId = new ObjectId,
var employeeTypeIndex: Integer = 0
) {
def employeeType = EmployeeType(employeeTypeIndex) /* getter */
def employeeType_=(v: EmployeeType) = { employeeTypeIndex = v.id } /* setter */
}
The extra methods implement getters and setters for the employee type enum.
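For example:
val employee = Employee()
employee.employeeType = EmployeeType.Manager // the setter stores Manager.id in employeeTypeIndex
println(employee.employeeType) // the getter rebuilds the enum value from the index: prints Manager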
Salat only does its work when you serialize to and from your model object with the grater, not when you do queries with MongoDB objects yourself. The Mongo driver API knows nothing about the @EnumAs annotation. (In addition, even if you could use Salat for that, how would it be able to know that you are referring to User.language in a generic key -> value MongoDBObject?)
So you have to do as you describe in your workaround: provide the value of the enum yourself when you do queries.

ClassCastException when trying to insert with Squeryl

This may be due to my misunderstanding of how Squeryl works. My entity is defined as:
case class Wallet(userid: Int, amount: Long)
extends KeyedEntity[Int] with Optimistic {
def id = userid
}
My table variable is defined as:
val walletTable = table[Wallet]("wallets")
on(walletTable) {
w =>
declare {
w.userid is (primaryKey)
}
}
Then I'm just calling a method to try to add money to the wallet:
val requestedWallet = wallet.copy(amount = wallet.amount + amount)
try {
inTransaction {
walletTable.update(requestedWallet)
}
}
On the line where I call update, an exception is getting thrown:
[ClassCastException: java.lang.Integer cannot be cast to org.squeryl.dsl.CompositeKey]
I'm not using composite keys at all, so this is very confusing. Does it have to do with the fact that my id field is not called "id", but instead "userid"?
I get the same behavior when I try what you are doing. It seems that for some reason id can't be an alias unless it is a composite key (at least in 0.9.5). You can work around that and get the same result with something like this:
case class Wallet(@Column("userid") id: Int, amount: Long)
extends KeyedEntity[Int] with Optimistic {
def userid = id
}
The Column annotation tells Squeryl to look for the userid column in the database, while id is now the constructor val. You can then keep userid as an alias for consistency.
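With that in place, the update flow from the question works unchanged; a hypothetical usage sketch:
val wallet = Wallet(id = 42, amount = 100L)
inTransaction {
walletTable.update(wallet.copy(amount = wallet.amount + 50))
}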