Map nullable to several properties and specify default value for each - MapStruct

I am trying to map an enum (which may be null) to a bean with two properties.
There is a mapping from the (source) enum to each property, and I have a default value for each property in case the (source) enum is null.
However, in the generated mapping code the per-property default values are never used, because the initial null check immediately returns a null (result) bean.
The code example might help to understand the issue: S is the source enum, R is the expected result (bean) with the two properties t1 of type T1 and t2 of type T2. This is my mapper, with S, T1, T2, and R inlined to make it easier to try out:
@Mapper
public interface MyMapper {

    static enum S {A, B, C, D}
    static enum T1 {A, B, C, D}
    static enum T2 {X, Y}

    @Data
    @Builder
    static class R {
        private T1 t1;
        private T2 t2;
    }

    @Mapping(source = "source", target = "t1", defaultValue = "A")
    @Mapping(source = "source", target = "t2", defaultValue = "X")
    R sToR(S source);

    @ValueMapping(source = "A", target = "X")
    @ValueMapping(source = "B", target = "X")
    @ValueMapping(source = "C", target = "Y")
    @ValueMapping(source = "D", target = "Y")
    T2 sToT2(S source);
}
The generated code of sToR looks like this (the sToT1 method is derived automatically by MapStruct, since S and T1 have identical constants):
@Override
public R sToR(S source) {
    if ( source == null ) {
        return null;
    }

    RBuilder r = R.builder();

    if ( source != null ) {
        r.t1( sToT1( source ) );
    }
    else {
        r.t1( T1.A );
    }
    if ( source != null ) {
        r.t2( sToT2( source ) );
    }
    else {
        r.t2( T2.X );
    }

    return r.build();
}
Everything would be as expected if I could just get rid of the initial if (source == null) { return null; }, but so far I have failed.

There is an open issue (mapstruct/mapstruct#1243) for MapStruct to work with Nullable, NotNull, etc.
Until that one is resolved, there isn't much that MapStruct can do at the moment, and you will have to live with the null check.
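In the meantime, a possible workaround is to bypass the generated null check with a hand-written default method on the mapper interface (an untested sketch; the name sToRNullSafe is mine, not from the question):
default R sToRNullSafe(S source) {
    if (source == null) {
        // hand-written fallback mirroring the defaultValue attributes above
        return R.builder().t1(T1.A).t2(T2.X).build();
    }
    return sToR(source);
}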


jdbc reading resultSet by colName issue for aliases

I have a generic repository with a method as:
object Queries {
  def getByFieldId(field: String, id: Int): String = {
    s"""
       |SELECT
       | DF.id AS fileId,
       | DF.name AS fileName,
       | AG.id AS groupId,
       | AG.name AS groupName
       |FROM $tableName DFG
       |INNER JOIN directory_files DF on DF.id = DFG.file_id
       |INNER JOIN ad_groups AG on AG.id = DFG.group_id
       |WHERE DFG.$field = $id
       |""".stripMargin
  }
}
def getByFieldId(field: String, id: Int): Try[List[Object]] = {
  try {
    val sqlQuery = Queries.getByFieldId("ad_group", 1)
    statement = conn.getPreparedStatement(sqlQuery)
    setParameters(statement, params)
    resultSet = statement.executeQuery()

    val metadata = resultSet.getMetaData
    val columnCount = metadata.getColumnCount
    val columns: ListBuffer[String] = ListBuffer.empty
    for (i <- 1 to columnCount) {
      columns += metadata.getColumnName(i)
    }

    var item: List[Object] = List.empty
    while (resultSet.next()) {
      val row = columns.toList.map(x => resultSet.getObject(x))
      item = row
    }
    Success(item)
  } catch {
    case e: Any => Failure(errorHandler(e))
  } finally conn.closeConnection(resultSet, statement)
}
The problem is that my result set ignores the query aliases and returns the columns as (id, name, id, name) instead of (fileId, fileName, groupId, groupName).
One solution I found is to use column indexes instead of column names, but I'm not sure that approach will cover the entire app without breaking some other queries.
Another solution I found is here: if I'm right, I can still use column names, but I need to fetch them together with their column types, and then inside resultSet.next() call the appropriate getter for each, as:
// this part of the code is not tested
// the idea came to me while writing this question
// (assumes columns now holds descriptors with colName and colType,
// e.g. case class Column(colName: String, colType: String))
while (resultSet.next()) {
  val row = columns.toList.map(x => {
    x.colType match {
      case "string"  => resultSet.getString(x.colName)
      case "integer" => resultSet.getInt(x.colName)
      case "decimal" => resultSet.getBigDecimal(x.colName) // JDBC has getBigDecimal, not getDecimal
      case _         => resultSet.getString(x.colName)
    }
  })
  item = row
}
You may try to use getColumnLabel instead of getColumnName, as documented:
Gets the designated column's suggested title for use in printouts and displays. The suggested title is usually specified by the SQL AS clause. If a SQL AS is not specified, the value returned from getColumnLabel will be the same as the value returned by the getColumnName method.
Note that this is highly dependent on the RDBMS used.
For Oracle, both methods return the alias, and there is no way to get the original column name.
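Applied to the loop in the question, the change is a single call (a minimal sketch, reusing the metadata and columnCount variables from above):
val columns: ListBuffer[String] = ListBuffer.empty
for (i <- 1 to columnCount) {
  // getColumnLabel returns the SQL AS alias when one is present,
  // and falls back to the column name otherwise
  columns += metadata.getColumnLabel(i)
}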

Is there a better way to unwrap a record from a sum type?

I have a sum type whose constructors take record parameters; the records share a property of the same name and type (tag :: String), and I need to get its value from a passed value of type T. So I do it with case pattern matching:
data T = T1 { tag :: String, ... } | T2 { tag :: String, ... } | T3 { tag :: String, ... }

fun :: T -> String
fun t = case t of
  T1 { tag } -> tag
  T2 { tag } -> tag
  T3 { tag } -> tag
I wonder if there is a simpler, less verbose way to do this?
If all your cases always have this field, and its semantics are the same in all cases (otherwise why would you have a function that conflates them?), then a cleaner design is to lift it out of the cases:
type T = { tag :: String, theCase :: TCase }
data TCase = T1 { ... } | T2 { ... } | T3 { ... }
fun :: T -> String
fun = _.tag

Genie Vala Generics and Nullable Types

Simple question: In the following generic class, how should the generic type and contained types be defined so that they are nullable? The following will not compile.
class Pair? of G1, G2: Object
    _first:G1?
    _second:G2?

    construct()
        _first = null
        _second = null

    def insert( first:G1, second:G2 )
        _first = first
        _second = second

    def insert_first( value:G1 )
        _first = value

    def insert_second( value:G2 )
        _second = value

    def second():G2
        return _second
Usage:
var pair = new Pair() of string, string
pair = null
Due to the way Vala generics work, generic parameters are always nullable.
As long as you don't switch on --enable-experimental-non-null, class variables are nullable as well, so your code simplifies to:
[indent=4]

class Pair of G1, G2: Object
    _first:G1
    _second:G2

    construct()
        _first = null
        _second = null

    def insert( first:G1, second:G2 )
        _first = first
        _second = second

    def insert_first( value:G1 )
        _first = value

    def insert_second( value:G2 )
        _second = value

    def second():G2
        return _second

init
    var pair = new Pair of string, string
    pair = null
When --enable-experimental-non-null is on, you have to be explicit in the type of the variable. I don't know how to write this in Genie; I tried the following, but the compiler does not like it:
init
    pair: Pair? of string, string = new Pair of string, string
    pair = null
In Vala it's no problem:
class Pair<G1,G2>: Object {
    private G1 first;
    private G2 second;

    public Pair () {
        first = null;
        second = null;
    }

    // ...
}

int main () {
    Pair<string, string>? pair = new Pair<string, string> ();
    pair = null;
    return 0;
}
I can't wrap my head around the concept of a type parameter that has null as the type. I don't think that is a useful concept. So the definition of your class would be:
[indent = 4]

class Pair of G1, G2: Object
    _first:G1?
    _second:G2?

    def insert( first:G1, second:G2 )
        _first = first
        _second = second

    def insert_first( value:G1 )
        _first = value

    def insert_second( value:G2 )
        _second = value

    def second():G2
        return _second
If you must re-assign the variable that holds the object instance to null, then it would be:
[indent = 4]

init
    var pair = new Pair of string,string()
    pair = null
The Vala compiler will, however, unreference pair when it goes out of scope, so I'm not sure why you would need to assign null.
In my view, nulls would ideally only be used when interfacing with a C library. Accessing a null can lead to a crash (segmentation fault) if it is not checked for properly. For example:
init
    a:int? = 1
    a = null
    var b = a + 1
The Vala compiler does have an experimental non-null mode that does some checking for unsafe code. If you compile the following with the Vala switch --enable-experimental-non-null:
[indent = 4]

init
    var pair = new Pair of string,string()
    pair = null
you will get the error:
error: Assignment: Cannot convert from `null' to `Pair<string,string>'
If you understand the consequences, then you can tell the compiler this is OK with:
[indent = 4]

init
    pair:Pair? = new Pair of string,string()
    pair = null

Scala Slick 2 join on multiple fields?

How can I do joins on multiple fields, like in the example beneath?
val ownerId = 1
val contactType = 1
...
val contact = for {
  (t, c) <- ContactTypes leftJoin Contacts on (_.id === _.typeId && _.ownerId === ownerId)
  if t.id === contactType
} yield (c.?, t)
How can I achieve this with Slick 2.0.1? Ideally I need Slick to generate this kind of query:
SELECT
    x2."contact_id",
    x2."type_id",
    x2."owner_id",
    x2."value",
    x2."created_on",
    x2."updated_on",
    x3."id",
    x3."type",
    x3."model"
FROM (
    SELECT
        x4."id" AS "id",
        x4."type" AS "type",
        x4."model" AS "model"
    FROM "contact_types" x4
) x3
LEFT OUTER JOIN (
    SELECT
        x5."created_on" AS "created_on",
        x5."value" AS "value",
        x5."contact_id" AS "contact_id",
        x5."updated_on" AS "updated_on",
        x5."type_id" AS "type_id",
        x5."owner_id" AS "owner_id"
    FROM "contacts" x5
) x2 ON x3."id" = x2."type_id" AND x2.owner_id = 1
WHERE (x3."id" = 3)
Please note the join condition: ON x3."id" = x2."type_id" AND x2.owner_id = 1
OK, so after digging through websites and source code, I think I finally found the solution.
The leftJoin on() method accepts a predicate, pred: (E1, E2) => T, so we can simply do this:
val contacts = for {
  (t, c) <- ContactTypes leftJoin Contacts on ( (ct, contact) => {
    // note: "type" is a reserved word in Scala, so the parameter is named ct here
    ct.id === contact.typeId && contact.ownerId === ownerId
  } )
} yield (c.?, t)
This generates the SQL query as needed.
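For what it's worth, the reason the original shorthand fails is that each underscore in a Scala function literal introduces its own fresh parameter, so the two conditions cannot refer to the same rows:
// (_.id === _.typeId && _.ownerId === ownerId) desugars to roughly:
(x1, x2, x3) => x1.id === x2.typeId && x3.ownerId === ownerId
// a function of three parameters, while on() expects exactly two,
// which is why naming the parameters as above is required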

Play2's anorm can't work on postgresql

I found that the row parsers of Play 2's Anorm depend on the metadata returned by the JDBC driver.
So in the built-in sample "zentasks" provided by Play, I can find code such as:
object Project {
  val simple = {
    get[Pk[Long]]("project.id") ~
    get[String]("project.folder") ~
    get[String]("project.name") map {
      case id ~ folder ~ name => Project(id, folder, name)
    }
  }
}
Please notice that the fields all have a project. prefix.
It works well on the H2 database, but not on PostgreSQL. If I use PostgreSQL, I have to write it as:
object Project {
  val simple = {
    get[Pk[Long]]("id") ~
    get[String]("folder") ~
    get[String]("name") map {
      case id ~ folder ~ name => Project(id, folder, name)
    }
  }
}
I asked this in Play's Google group, and Guillaume Bort said:
Yes if you are using postgres it's probably the cause. The postgresql
jdbc driver is broken and doesn't return table names.
If the PostgreSQL JDBC driver really has this issue, I think there is a problem for Anorm: if two tables have fields with the same name and I query them with a join, Anorm won't get the correct values, since it can't find out which name belongs to which table.
So I wrote a test.
1. Create tables on PostgreSQL:
create table a (
  id text not null primary key,
  name text not null
);

create table b (
  id text not null primary key,
  name text not null,
  a_id text,
  foreign key(a_id) references a(id) on delete cascade
);
2. Create the Anorm models:
case class A(id: Pk[String] = NotAssigned, name: String)
case class B(id: Pk[String] = NotAssigned, name: String, aId: String)

object A {
  val simple = {
    get[Pk[String]]("id") ~
    get[String]("name") map {
      case id ~ name =>
        A(id, name)
    }
  }

  def create(a: A): A = {
    DB.withConnection { implicit connection =>
      val id = newId()
      SQL("""
        insert into a (id, name)
        values (
          {id}, {name}
        )
      """).on('id -> id, 'name -> a.name).executeUpdate()
      a.copy(id = Id(id))
    }
  }

  def findAll(): Seq[(A, B)] = {
    DB.withConnection { implicit conn =>
      SQL("""
        select a.*, b.* from a as a left join b as b on a.id=b.a_id
      """).as(A.simple ~ B.simple map {
        case a ~ b => a -> b
      } *)
    }
  }
}

object B {
  val simple = {
    get[Pk[String]]("id") ~
    get[String]("name") ~
    get[String]("a_id") map {
      case id ~ name ~ aId =>
        B(id, name, aId)
    }
  }

  def create(b: B): B = {
    DB.withConnection { implicit connection =>
      val id = UUID.randomUUID().toString
      SQL("""
        insert into b (id, name, a_id)
        values (
          {id}, {name}, {aId}
        )
      """).on('id -> id, 'name -> b.name, 'aId -> b.aId).executeUpdate()
      b.copy(id = Id(id))
    }
  }
}
3. Test cases with ScalaTest:
class ABTest extends DbSuite {
  "AB" should "get one-to-many" in {
    running(fakeApp) {
      val a = A.create(A(name = "AAA"))
      val b1 = B.create(B(name = "BBB1", aId = a.id.get))
      val b2 = B.create(B(name = "BBB2", aId = a.id.get))

      val ab = A.findAll()
      ab foreach {
        case (a, b) => {
          println("a: " + a)
          println("b: " + b)
        }
      }
    }
  }
}
4. The output:
a: A(dbc52793-0f6f-4910-a954-940e508aab26,BBB1)
b: B(dbc52793-0f6f-4910-a954-940e508aab26,BBB1,4a66ebe7-536e-4bd5-b1bd-08f022650f1f)
a: A(d1bc8520-b4d1-40f1-af92-52b3bfe50e9f,BBB2)
b: B(d1bc8520-b4d1-40f1-af92-52b3bfe50e9f,BBB2,4a66ebe7-536e-4bd5-b1bd-08f022650f1f)
You can see that the "a"s have the names "BBB1"/"BBB2" instead of "AAA".
I tried to redefine the parsers with prefixes as:
val simple = {
  get[Pk[String]]("a.id") ~
  get[String]("a.name") map {
    case id ~ name =>
      A(id, name)
  }
}
But it reports errors saying the specified fields can't be found.
Is this a big issue with Anorm? Or am I missing something?
The latest Play 2 (RC3) has solved this problem by checking the class name of the meta object:
// HACK FOR POSTGRES
if (meta.getClass.getName.startsWith("org.postgresql.")) {
  meta.asInstanceOf[{ def getBaseTableName(i: Int): String }].getBaseTableName(i)
} else {
  meta.getTableName(i)
}
But be careful if you want to use it with p6spy: it doesn't work there, because the class name of the meta object will then be "com.p6spy....", not "org.postgresql....".
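Until you are on a build that includes this fix (or when running behind p6spy), a possible workaround (my own sketch, not from the original thread) is to give every selected column a unique SQL alias and parse by alias, so nothing depends on the driver reporting table names:
// Parsers keyed on unique aliases instead of table-qualified names
val simpleA = {
  get[Pk[String]]("a_id") ~
  get[String]("a_name") map {
    case id ~ name => A(id, name)
  }
}

// The query then aliases every column to a unique name
// (simpleB would be defined analogously on b_id, b_name, b_parent_id):
SQL("""
  select a.id as a_id, a.name as a_name,
         b.id as b_id, b.name as b_name, b.a_id as b_parent_id
  from a left join b on a.id = b.a_id
""").as(simpleA ~ simpleB map { case a ~ b => a -> b } *)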