When should we create a custom TypeHandler in MyBatis?

The line below is from the docs: https://mybatis.org/mybatis-3/configuration.html#:~:text=Whenever%20MyBatis%20sets%20a%20parameter,and%20Time%20API)%20by%20default.
Whenever MyBatis sets a parameter on a PreparedStatement or retrieves a value from a ResultSet, a TypeHandler is used to retrieve the value in a means appropriate to the Java type.
Also, we can override the type handlers or create our own to deal with unsupported or non-standard types.
So, one such scenario I have seen is for a JSON type, where we need a custom TypeHandler since JSON is not supported by MyBatis out of the box. Is my understanding correct?
Also, I would like to know a few other types that are not supported by MyBatis (just curious); I searched the net but could not find anything relevant.
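Yes, that's the idea: out of the box MyBatis has no handler that maps a JSON column to a Java object, so you supply one by extending BaseTypeHandler. A rough sketch, assuming Jackson and a JSON column stored as text (class and column names are illustrative, not from the docs):

import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;

// Maps a JSON column to Jackson's JsonNode by serializing to/from a String.
public class JsonNodeTypeHandler extends BaseTypeHandler<JsonNode> {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  @Override
  public void setNonNullParameter(PreparedStatement ps, int i, JsonNode parameter, JdbcType jdbcType) throws SQLException {
    ps.setString(i, parameter.toString());
  }

  @Override
  public JsonNode getNullableResult(ResultSet rs, String columnName) throws SQLException {
    return parse(rs.getString(columnName));
  }

  @Override
  public JsonNode getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
    return parse(rs.getString(columnIndex));
  }

  @Override
  public JsonNode getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
    return parse(cs.getString(columnIndex));
  }

  private static JsonNode parse(String json) throws SQLException {
    try {
      return json == null ? null : MAPPER.readTree(json);
    } catch (java.io.IOException e) {
      throw new SQLException("Invalid JSON in column", e);
    }
  }
}

Once written, you register the handler in mybatis-config.xml under <typeHandlers>, or point to it with the typeHandler attribute of a <result> element.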

Related

Check if a parameter is passed to MyBatis or not inside MyBatis mapper

I am passing a map to MyBatis with some parameters. Now these parameters are based on the incoming JSON request... so the number of parameters is variable.
So sometimes it can happen that some parameter is missing; in that case MyBatis throws a BindingException in the null check itself... What is the suggested way to handle this kind of scenario?
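For illustration only (a hedged sketch, not an answer from this thread): one common way to avoid dereferencing a missing key is to test for it in dynamic SQL before using it. A hypothetical annotation-based mapper, assuming a single Map parameter:

import java.util.List;
import java.util.Map;
import org.apache.ibatis.annotations.Select;

public interface SearchMapper {
  // With a single Map argument, _parameter is the Map itself, so
  // containsKey() can guard the reference and a missing key is never
  // dereferenced (which is what triggers the BindingException).
  @Select("<script>" +
          "SELECT * FROM users " +
          "<where>" +
          "<if test=\"_parameter.containsKey('name')\"> name = #{name}</if>" +
          "</where>" +
          "</script>")
  List<Map<String, Object>> search(Map<String, Object> params);
}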

How to dynamically create index on JSON Object properties (JSON Object props are also dynamic)

I have a scenario where I want to dynamically create an index on the keys of a JSON object (the JSON object's attributes will vary). I am able to store the JSON object in the index (by implementing a FieldBridge).
eg1: preference:{"sport":"football", "music":"pop"}
eg2: preference:{"sport":"cricket", "music":"jazz", "cuisine":"mexican"}
But I am unable to query the individual fields like:
preference.sport
or preference.cuisine
Is there any way / configuration in Hibernate Search through which we can achieve that?
If your fields are dynamic, there is no pre-defined schema and Hibernate Search is unable to determine how to query these fields. There are significant differences in how a match query should be executed on a text field or a date field, for example.
For that reason, you cannot use the Hibernate Search Query DSL to build your queries.
However, you can use native APIs.
If you're using the Lucene integration, just creating the relevant queries yourself will work fine (as long as you create the right one):
new TermQuery(new Term("sport", "value"))
If you're using the experimental Elasticsearch integration, you can use org.hibernate.search.elasticsearch.ElasticsearchQueries.fromJson( ... ). You will have to write the whole query as JSON, though, and will not be able to take advantage of the Hibernate Search QueryBuilder at all, even for queries on statically defined fields. See https://docs.jboss.org/hibernate/search/5.11/reference/en-US/html_single/#_queries
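A rough sketch of that JSON route, assuming Hibernate Search 5.x with the Elasticsearch integration (the field name mirrors the question; fullTextSession and the Preference entity are assumed from context):

import java.util.List;
import org.hibernate.search.elasticsearch.ElasticsearchQueries;
import org.hibernate.search.query.engine.spi.QueryDescriptor;

// The whole query is written as Elasticsearch JSON and passed through verbatim.
QueryDescriptor query = ElasticsearchQueries.fromJson(
    "{\"query\": {\"match\": {\"preference.sport\": \"football\"}}}");
List<?> results = fullTextSession
    .createFullTextQuery(query, Preference.class)
    .list();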
Better support for native queries, as well as dynamic fields with pre-defined types, which would be targetable in the Query DSL, is planned for Hibernate Search 6, but it's not there yet. See HSEARCH-3273.

Update a String list column in Cassandra DB with Scala

I'm new to Cassandra and Scala. I'm working on a Kafka consumer (written in Scala) that has to update a field of a row in Cassandra with data it receives.
And so far no problem.
In this row one field is a String list, and when I do the update this field must not change, so I have to assign the same String list to itself.
UPDATE keyspaceName.tableName
SET fieldToChange = newValue
WHERE id = idValue
AND fieldA = '${currentRow.getString("fieldA")}'
AND fieldB = ${currentRow.getInt("fieldB")}
...
AND fieldX = ${currentRow.getList("fieldX", classOf[String]).toString}
...
But I receive the following exception:
com.datastax.driver.core.exceptions.SyntaxError: line 19:49 no viable alternative at input ']' (... 482 AND fieldX = [[listStringItem1]]...)
So far I haven't found anything on the web that could help me.
The problem is that Scala's string representation of the list doesn't match Cassandra's representation of the list, so it generates errors.
Instead of constructing the CQL statement directly in your code, it's better to use a PreparedStatement and bind variables to it:
first, it will speed up execution, as Cassandra won't parse every statement separately;
second, it will be easier to bind variables, as you won't need to care about the corresponding string representation.
But be very careful with Scala: the Java driver expects Java lists, sets, maps, and base types such as int. You may look at the java-driver-scala-extras package, but you'll need to compile it yourself, as it's not available on Maven Central.
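A minimal sketch with the 3.x Java driver (table, column, and variable names mirror the question; a Scala collection would first need converting with .asJava from scala.collection.JavaConverters):

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

void updateKeepingList(Session session, Row currentRow, String newValue, Object idValue) {
  // Prepare once in real code (e.g. at consumer startup) and reuse per message.
  PreparedStatement ps = session.prepare(
      "UPDATE keyspaceName.tableName SET fieldToChange = ? " +
      "WHERE id = ? AND fieldA = ? AND fieldB = ? AND fieldX = ?");

  // getList() already returns a java.util.List, so it binds directly;
  // no string representation of the list is ever built.
  BoundStatement bound = ps.bind(
      newValue,
      idValue,
      currentRow.getString("fieldA"),
      currentRow.getInt("fieldB"),                 // autoboxed to Integer
      currentRow.getList("fieldX", String.class));
  session.execute(bound);
}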

Is 'jdbcType' necessary in a MyBatis resultMap?

When we use MyBatis, I know we need to set jdbcType in a <select>...</select> statement because the IN variable may be null. But when reading the MyBatis documentation, I found jdbcType in <result>...</result> under resultMap. The documentation for jdbcType in <result>...</result> was:
... The JDBC type is only required for nullable columns upon insert, update or delete. This is a JDBC requirement, not a MyBatis one. So even if you were coding JDBC directly, you'd need to specify this type – but only for nullable values.
The bold words say it is only required for nullable columns upon insert, update or delete.
BUT the <result> element is used in select, not in insert, update or delete.
So, is it necessary to use jdbcType in <result>...</result>?
Most of the time, no. Why? Read on.
If you want to use a null as a JDBC parameter value you need to specify the jdbcType. That's a restriction of the JDBC specification you can't avoid. Therefore, if there's even a remote possibility a JDBC parameter could have a null value, then yes, specify it.
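For example (a hypothetical annotation-based mapper, not from the original answer; the hint matters only because middleName may be null):

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;

public interface PersonMapper {
  // Without jdbcType=VARCHAR, some JDBC drivers reject a null middleName.
  @Update("UPDATE person SET middle_name = #{middleName, jdbcType=VARCHAR} WHERE id = #{id}")
  int updateMiddleName(@Param("id") long id, @Param("middleName") String middleName);
}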
This does not apply to parameters preprocessed by MyBatis inside MyBatis tags, like the ones you use in the "test" attribute of the <if> tag. Those are not JDBC parameters.
Now, for the columns you read. These are the ones you are interested in. The thing is, most of the time you don't need the JDBC type: MyBatis will pick the right one for you. Well... this has been the case for me 99.999% of the time.
What about the other 0.001%? For some exotic column types -- that you rarely use -- MyBatis may pick the wrong JDBC type for you. The designers of MyBatis thought about this case and give you the chance to override it. I think I remember an XML-type database column that MyBatis was unsuccessfully trying to read as a VARCHAR, but I don't remember which database.
Bottom line, don't use it when reading columns, unless MyBatis reads exotic data type columns (XML, UUID, POINT, etc.) the wrong way.
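If you ever do hit such a column, the override is a single attribute on the result mapping. A hypothetical annotation-style sketch (the jdbcType attribute of an XML <result> element is equivalent; Document is an assumed POJO):

import org.apache.ibatis.annotations.Result;
import org.apache.ibatis.annotations.Results;
import org.apache.ibatis.annotations.Select;
import org.apache.ibatis.type.JdbcType;

public interface DocumentMapper {
  // Force the payload column to be read as SQLXML instead of the guessed type.
  @Results({
    @Result(property = "payload", column = "payload", jdbcType = JdbcType.SQLXML)
  })
  @Select("SELECT id, payload FROM documents WHERE id = #{id}")
  Document selectDocument(long id);
}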

How to get the values in DataFrame with the correct DataType?

When I tried to get some values in a DataFrame, like:
df.select("date").head().get(0) // type: Any
The result type is Any, which is not expected.
Since a DataFrame contains the schema of the data, it should know the DataType of each column, so when I try to get a value using get(0), I expect it to return the value with the correct type. However, it does not.
Instead, I need to specify which DataType I want using getDate(0), which seems weird, inconvenient, and makes me mad.
Since I have already specified the schema with the correct DataTypes for each column when I created the DataFrame, I don't want to use different getXXX() methods for different columns.
Are there some convenient ways that I can get the values with their own correct types? That is to say, how can I get the values with the correct DataType specified in the schema?
Thank you!
Scala is a statically typed language, so the get method defined on Row can only return values of a single type; its declared return type is Any. It cannot return an Int for one call and a String for another.
You should call getInt, getDate and the other get methods provided for each type, or the getAs method, to which you can pass the type as a parameter (for example row.getAs[Int](0)).
As mentioned in the comments, other options are:
use a Dataset instead of a DataFrame;
use Spark SQL.
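A rough sketch of the Dataset option, written in Java to stay consistent with the other sketches in this thread (in Scala you would typically define a case class and call df.as[...]; the Event bean and its date column are illustrative assumptions):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;

// Hypothetical bean matching the selected column; Encoders.bean derives a
// typed encoder from its getters/setters.
public class Event implements java.io.Serializable {
  private java.sql.Date date;
  public java.sql.Date getDate() { return date; }
  public void setDate(java.sql.Date date) { this.date = date; }
}

// df is an existing Dataset<Row>; after as(...), no Row getters are needed.
Dataset<Event> events = df.select("date").as(Encoders.bean(Event.class));
java.sql.Date first = events.head().getDate();  // statically typed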
You can call the generic getAs method as getAs[Int](columnIndex), getAs[String](columnIndex) or use specific methods like getInt(columnIndex), getString(columnIndex).
Link to the Scaladoc for org.apache.spark.sql.Row.