Scala help. DynamoDBMappingException: could not unconvert attribute - scala

I defined the following table mapping in Scala. When I run a test that queries the DynamoDB table, it throws:
com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMappingException: TsuIntersectionRecord[comments]; could not unconvert attribute
The problem is with the comments attribute, which is a string set (SS). What should I do to fix this unconvert problem?
@DynamoDBTable(tableName = "tsu-overrides-table")
case class TsuIntersectionRecord(
  @(DynamoDBHashKey @beanGetter)(attributeName = "overrideId")
  @BeanProperty
  var overrideId: String,
  @(DynamoDBRangeKey @beanGetter)(attributeName = "intersection")
  @BeanProperty
  var intersection: String,
  @(DynamoDBAttribute @beanGetter)(attributeName = "runId")
  @BeanProperty
  var runId: String,
  @(DynamoDBAttribute @beanGetter)(attributeName = "snapshotDate")
  @BeanProperty
  var snapshotDate: String,
  @(DynamoDBAttribute @beanGetter)(attributeName = "tsuValue")
  @BeanProperty
  var tsuValue: Double,
  @(DynamoDBAttribute @beanGetter)(attributeName = "action")
  @BeanProperty
  var action: String,
  @(DynamoDBAttribute @beanGetter)(attributeName = "overrideValue")
  @BeanProperty
  var overrideValue: Option[Double],
  @(DynamoDBAttribute @beanGetter)(attributeName = "comments")
  @BeanProperty
  var comments: List[String],
  @(DynamoDBAttribute @beanGetter)(attributeName = "lastUpdateTime")
  @BeanProperty
  var lastUpdateTime: Int
) {
  def this() = this(null, null, null, null, 0.0, null, None, List(), 0)
}
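One likely cause (my sketch, not a confirmed answer): the v1 DynamoDBMapper introspects Java bean properties and only knows how to marshal Java types, so scala.collection.immutable.List (comments) and Option[Double] (overrideValue) are opaque to it. A minimal variant swapping those two fields for Java equivalents, trimmed to the affected fields:

import java.{util => ju}
import scala.annotation.meta.beanGetter
import scala.beans.BeanProperty
import com.amazonaws.services.dynamodbv2.datamodeling._

@DynamoDBTable(tableName = "tsu-overrides-table")
case class TsuIntersectionRecordFixed( // hypothetical name; remaining fields as in the question
  @(DynamoDBHashKey @beanGetter)(attributeName = "overrideId")
  @BeanProperty var overrideId: String,
  @(DynamoDBRangeKey @beanGetter)(attributeName = "intersection")
  @BeanProperty var intersection: String,
  @(DynamoDBAttribute @beanGetter)(attributeName = "overrideValue")
  @BeanProperty var overrideValue: java.lang.Double, // was Option[Double]; boxed so it can be null
  @(DynamoDBAttribute @beanGetter)(attributeName = "comments")
  @BeanProperty var comments: ju.List[String] // was List[String]; if the attribute really is SS, ju.Set[String] may fit better
) {
  def this() = this(null, null, null, new ju.ArrayList[String]())
}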

Related

class type required but T found in Encoders

I am trying to write generic code that reads from a view using Spark SQL, where T can be any object passed at runtime; I should be able to get a Dataset[T].
Method:
def queryStream[T](sparkSession: SparkSession, query: String, filterParams: Array[String])(implicit tag: ClassTag[T]): Dataset[T] =
  sparkSession.sql(query)
    .as(Encoders.bean[KafkaMessage](classOf[KafkaMessage]))
    .map(f => {
      init()
      getObjectMapper.readValue(f.message, classTag[T].runtimeClass).asInstanceOf[T]
    })(Encoders.bean[T](classOf[T]))
Invocation:
queryStream[Student](sparkSession,
  "select cast(value as string) as message, " +
  "cast(key as string) as messageKey, " +
  "cast(partition as int) partition, " +
  "cast(offset as long) offset from events",
  null)
KafkaMessage:
class KafkaMessage {
  @BeanProperty
  var messageKey: String = _
  @BeanProperty
  var message: String = _
  @BeanProperty
  var partition: Int = _
  @BeanProperty
  var offset: Long = _

  override def toString(): String = s"Message Key: $messageKey, Message: $message, Partition: $partition, Offset: $offset"
}
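The "class type required but T found" error comes from classOf[T]: classOf needs a concrete class, not an abstract type parameter. One possible direction (a sketch, not a confirmed answer) is to build the Class[T] from the ClassTag that is already in scope:

import scala.reflect.ClassTag
import org.apache.spark.sql.{Dataset, Encoders, SparkSession}

// Sketch: derive Class[T] from the implicit ClassTag instead of classOf[T].
// init() and getObjectMapper are assumed to exist as in the question, and T
// must still be a bean-compliant class for Encoders.bean to work.
def queryStream[T](sparkSession: SparkSession, query: String, filterParams: Array[String])(implicit tag: ClassTag[T]): Dataset[T] = {
  val clazz = tag.runtimeClass.asInstanceOf[Class[T]] // concrete class, available at runtime
  sparkSession.sql(query)
    .as(Encoders.bean(classOf[KafkaMessage]))
    .map { f =>
      init()
      getObjectMapper.readValue(f.message, clazz)
    }(Encoders.bean(clazz))
}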

Calculate a derived column in the select output - Scala Slick 3.2.3

I am trying to write a REST API to fetch data using Scala Slick 3.2.3. Is there a way to calculate a derived column and include it in the returned output?
My model:
case class Task(id: Option[TaskId], title: String, dueOn: String, status: String, createdAt: String, updatedAt: String)
Table class:
class TasksTable(tag: Tag) extends Table[Task](tag, _tableName = "TASKS") {
  def id: Rep[TaskId] = column[TaskId]("ID", O.PrimaryKey, O.AutoInc)
  def title: Rep[String] = column[String]("TITLE")
  def dueOn: Rep[String] = column[String]("DUE_ON")
  def status: Rep[String] = column[String]("STATUS")
  def createdAt: Rep[String] = column[String]("CREATED_AT")
  def updatedAt: Rep[String] = column[String]("UPDATED_AT")
  def * = (id.?, title, dueOn, status, createdAt, updatedAt) <> ((Task.apply _).tupled, Task.unapply)
}
DAO:
object TasksDao extends BaseDao {
  def findAll: Future[Seq[Task]] = tasksTable.result
}
I want to add a field in the response JSON called timeline, with values like "overdue", "today", "tomorrow", "upcoming", etc., calculated from the dueOn value.
I tried searching but could not find any help. Any help with an example or any pointers would be highly appreciated. Thanks!
First, I'd start by defining an enum model for the timeline:
object Timelines extends Enumeration {
  type Timeline = Value

  val Overdue: Timeline = Value("overdue")
  val Today: Timeline = Value("today")
  val Tomorrow: Timeline = Value("tomorrow")
  val Upcoming: Timeline = Value("upcoming")
}
Then I'd change the dueOn column type from a plain String to LocalDate - doing this at the DAO level means Slick handles parsing errors for us.
For that, we need to define a custom column type for LocalDate (see http://scala-slick.org/doc/3.0.0/userdefined.html#using-custom-scalar-types-in-queries for details):
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Define a mapping between String and LocalDate
private val defaultDateFormat: DateTimeFormatter = DateTimeFormatter.ISO_DATE // replace with the formatter you use for dates

def stringDateColumnType(format: DateTimeFormatter): BaseColumnType[LocalDate] =
  MappedColumnType.base[LocalDate, String](_.format(format), LocalDate.parse(_, format))

implicit val defaultStringDateColumnType: BaseColumnType[LocalDate] = stringDateColumnType(defaultDateFormat)
// Change `dueOn` from String to LocalDate
case class Task(id: Option[TaskId], title: String, dueOn: LocalDate, status: String, createdAt: String, updatedAt: String)

class TasksTable(tag: Tag) extends Table[Task](tag, _tableName = "TASKS") {
  def id: Rep[TaskId] = column[TaskId]("ID", O.PrimaryKey, O.AutoInc)
  def title: Rep[String] = column[String]("TITLE")
  def dueOn: Rep[LocalDate] = column[LocalDate]("DUE_ON") // column type replaced
  def status: Rep[String] = column[String]("STATUS")
  def createdAt: Rep[String] = column[String]("CREATED_AT")
  def updatedAt: Rep[String] = column[String]("UPDATED_AT")
  def * = (id.?, title, dueOn, status, createdAt, updatedAt) <> ((Task.apply _).tupled, Task.unapply)
}
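As a side note (my addition, not part of the original answer), the mapped column type should also let you compare dates directly in queries:

// Sketch: fetch tasks due today or earlier, comparing on the LocalDate column.
// Assumes the implicit defaultStringDateColumnType above is in scope.
val dueOrOverdue = tasksTable.filter(_.dueOn <= LocalDate.now()).result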
Then define an API-level model TaskResponse with the additional timeline field:
import java.time.temporal.ChronoUnit

case class TaskResponse(id: Option[TaskId], title: String, dueOn: LocalDate, status: String, createdAt: String, updatedAt: String, timeline: Timeline)

object TaskResponse {
  import Timelines._

  def fromTask(task: Task): TaskResponse = {
    val timeline = dueOnToTimeline(task.dueOn)
    TaskResponse(task.id, task.title, task.dueOn, task.status, task.createdAt, task.updatedAt, timeline)
  }

  def dueOnToTimeline(dueOn: LocalDate): Timeline = {
    val today = LocalDate.now()
    // ChronoUnit.DAYS gives the total difference in days; Period.getDays
    // would only return the days component and misbehave beyond one month
    ChronoUnit.DAYS.between(today, dueOn) match {
      case days if days < 0L => Overdue
      case 0L => Today
      case 1L => Tomorrow
      case _ => Upcoming
    }
  }
}
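A quick sanity check of the mapping (illustrative only):

TaskResponse.dueOnToTimeline(LocalDate.now().minusDays(1)) // Overdue
TaskResponse.dueOnToTimeline(LocalDate.now())              // Today
TaskResponse.dueOnToTimeline(LocalDate.now().plusDays(1))  // Tomorrow
TaskResponse.dueOnToTimeline(LocalDate.now().plusDays(30)) // Upcoming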
Then you can create a TasksService responsible for the conversion logic:
class TasksService(dao: TasksDao.type)(implicit ec: ExecutionContext) { // TasksDao is an object, hence the .type
  def findAll: Future[Seq[TaskResponse]] =
    dao.findAll.map(_.map(TaskResponse.fromTask))
}
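Hypothetical wiring, with names assumed from the snippets above:

import scala.concurrent.{ExecutionContext, Future}

implicit val ec: ExecutionContext = ExecutionContext.global
val service = new TasksService(TasksDao)
val responses: Future[Seq[TaskResponse]] = service.findAll // ready to serialize to JSON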
Hope this helps!

List[String] Object in scala case class

I am using DSE 5.1.0 (packaged with Spark 2.0.2.6 and Scala 2.11.8), reading a Cassandra table as below.
val sparkSession = ...
val rdd1 = sparkSession.table("keyspace.table")
This table contains a List[String] column, say list1, which I read into rdd1. But when I try to use an encoder, it throws an error.
val myVoEncoder = Encoders.bean(classOf[MyVo])
val dataSet = rdd1.as(myVoEncoder)
I have tried scala.collection.mutable.List, scala.collection.immutable.List, scala.collection.List, Seq, and WrappedArray. All gave the same error as below.
java.lang.UnsupportedOperationException: Cannot infer type for class scala.collection.immutable.List because it is not bean-compliant
MyVo.scala
case class MyVo(
  @BeanProperty var id: String,
  @BeanProperty var duration: Int,
  @BeanProperty var list1: List[String]
) {
  def this() = this("", 0, null)
}
Any help will be appreciated.
You should use Array[String]:
case class MyVo(
  @BeanProperty var id: String,
  @BeanProperty var duration: Int,
  @BeanProperty var list1: Array[String]
) {
  def this() = this("", 0, null)
}
although it is important to stress that a more idiomatic approach would be:
import sparkSession.implicits._

case class MyVo(
  id: String,
  duration: Int,
  list1: Seq[String]
)

rdd1.as[MyVo]
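For completeness, with Array[String] the bean-encoder calls from the question should now go through (a sketch reusing the question's names):

import org.apache.spark.sql.Encoders

// Bean encoders introspect Java-bean properties; Array[String] maps cleanly to
// Spark's ArrayType, while scala.collection.immutable.List has no bean mapping.
val myVoEncoder = Encoders.bean(classOf[MyVo])
val dataSet = rdd1.as(myVoEncoder)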

Swift: Define relationships between classes

I'm trying to figure out how to set up the relationship between two classes so that I can determine the properties of one by knowing the other.
Let's say I have Owner and Pet, where the owner has a pet.
class Owner {
  let name: String
  var age: Int
  let petName: String

  init(name: String, age: Int, petName: String) {
    self.name = name
    self.age = age
    self.petName = petName
  }
}

class Pet {
  let petName: String
  var age: Int
  let petType: String

  init(petName: String, age: Int, petType: String) {
    self.petName = petName
    self.age = age
    self.petType = petType
  }
}
Assuming each petName is unique, I'd like to be able to do something like owner.petType, returning the petType for the owner's petName. Is this possible? Should I set the classes up differently?
You can set up your classes as such:
class Owner {
  let name: String
  var age: Int
  var pet: Pet? // the owner might not have a pet - thus optional (could also be an array of pets)

  init(name: String, age: Int, pet: Pet? = nil) {
    self.name = name
    self.age = age
    self.pet = pet
  }
}

class Pet {
  let name: String
  var age: Int
  let type: String

  init(name: String, age: Int, type: String) {
    self.name = name
    self.age = age
    self.type = type
  }
}
With this structure, the pet's type is available as owner.pet?.type (optional, since the owner may not have a pet).

Non-value field is accessed in 'hashCode()'

It happens with id (Non-value field is accessed in 'hashCode()'). How can I fix it?
Here is my code:
import java.util.Date
case class Category() {
  var id: Integer = _
  var parentId: Integer = _
  var name: String = _
  var status: Boolean = _
  var sortOrder: Integer = _
  var createTime: Date = _
  var updateTime: Date = _

  def this(id: Integer, parentId: Integer, name: String, status: Boolean, sortOrder: Integer, createTime: Date, updateTime: Date) {
    this()
    this.id = id
    this.parentId = parentId
    this.name = name
    this.status = status
    this.sortOrder = sortOrder
    this.createTime = createTime
    this.updateTime = updateTime
  }

  override def hashCode(): Int = {
    if (id != null) id.hashCode() else 0
  }
}
If I change var id to val id, the warning goes away. But I need a setter, so I can't make it a val.
A non-value field in hashCode() means you're relying on a mutable field to generate the class's hash code, which can be dangerous depending on how you use the class. For example, assume you have a HashMap[Category, String]: you place an entry in the map and then mutate the id field; now the map can no longer find the instance by its hash code.
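To make the danger concrete (a sketch, not part of the original answer, using the mutable Category from the question):

import java.util.Date
import scala.collection.mutable

val cat = new Category(1, 0, "books", true, 1, new Date(), new Date())
val index = mutable.HashMap(cat -> "some value")

cat.id = 2       // mutating the field changes the hash code
index.get(cat)   // most likely None: the entry was filed under the old hash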
You're treating a case class like a regular, mutable Java class. Case classes provide a mechanism for encapsulating immutable data in a syntactically convenient way (where you also get a few nice things for free, such as hashCode, equals, etc.). What you can do is make your case class accept parameters via its primary constructor:
case class Category(id: Int,
                    parentId: Int,
                    name: String,
                    status: Boolean,
                    sortOrder: Int,
                    createTime: Date,
                    updateTime: Date) {
  override def hashCode(): Int = id.hashCode()
}
If you want to update a Category, you can use the copy method defined on every case class:
val cat = Category(id = 1,
                   parentId = 1,
                   name = "Hello",
                   status = true,
                   sortOrder = 1,
                   createTime = new Date(123132123L),
                   updateTime = new Date(52234234L))

val newCat: Category = cat.copy(id = 2)
Another way of approaching this (which I like less, to be honest) would be to make all your fields an Option[T] with a default value of None:
case class Category(id: Option[Int] = None,
                    parentId: Option[Int] = None,
                    name: Option[String] = None,
                    status: Option[Boolean] = None,
                    sortOrder: Option[Int] = None,
                    createTime: Option[Date] = None,
                    updateTime: Option[Date] = None) {
  override def hashCode(): Int = id.map(_.hashCode()).getOrElse(0)
}
And then set them once you can populate the class with data:
val cat = Category()
val newCat = cat.copy(id = Some(1))
After trying many times, I think this should be a good choice. Any ideas?
override def hashCode(): Int = {
  if (getId != null) getId.hashCode() else 0
}