MyBatis custom type handler without annotations - mybatis

I'm new to MyBatis. I am trying to map a JDBC integer to a custom class. All the examples that I have seen use annotations; is it possible to do this without annotations? Any example would be greatly appreciated.
Sreekanth

It is definitely possible and is described in general in the Configuration and Mapper sections of the documentation.
Define the handler first:
import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;
import org.apache.ibatis.type.MappedJdbcTypes;

@MappedJdbcTypes(JdbcType.INTEGER)
public class MyClassHandler extends BaseTypeHandler<MyClass> {

    @Override
    public void setNonNullParameter(PreparedStatement ps, int i,
            MyClass parameter, JdbcType jdbcType) throws SQLException {
        ps.setInt(i, parameter.asInt());
    }

    @Override
    public MyClass getNullableResult(ResultSet rs, String columnName)
            throws SQLException {
        int val = rs.getInt(columnName);
        return rs.wasNull() ? null : MyClass.valueOf(val);
    }

    @Override
    public MyClass getNullableResult(ResultSet rs, int columnIndex)
            throws SQLException {
        int val = rs.getInt(columnIndex);
        return rs.wasNull() ? null : MyClass.valueOf(val);
    }

    @Override
    public MyClass getNullableResult(CallableStatement cs, int columnIndex)
            throws SQLException {
        int val = cs.getInt(columnIndex);
        return cs.wasNull() ? null : MyClass.valueOf(val);
    }
}
Then configure it in mybatis-config.xml:
<typeHandlers>
    <typeHandler handler="my.company.app.MyClassHandler"/>
</typeHandlers>
Now you can use it in XML mappers.
If you have a class:
class SomeTypeEntity {
    private MyClass myClassField;
    // getter and setter
}
To query the field, configure the handler in the resultMap like this:
<resultMap id="someMap" type="SomeTypeEntity">
    <result property="myClassField" column="my_class_column"
            typeHandler="my.company.app.MyClassHandler"/>
</resultMap>
For insert/update, use it like this:
<update id="updateSomeTypeWithMyClassField">
    update some_type
    set my_class_column = #{someTypeEntity.myClassField,
        typeHandler=my.company.app.MyClassHandler}
</update>
And in the mapper method:
void updateSomeTypeWithMyClassField(@Param("someTypeEntity") SomeTypeEntity entity);
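For completeness: the handler above assumes MyClass exposes an asInt() accessor and an int-based valueOf factory. A minimal sketch of such a value class (only those two method names come from the handler; everything else here is an assumption):

```java
// Hypothetical value class matching the asInt()/valueOf(int) calls
// made by MyClassHandler; your real MyClass may differ.
final class MyClass {
    private final int value;

    private MyClass(int value) {
        this.value = value;
    }

    // Factory the handler calls when reading a column value.
    static MyClass valueOf(int value) {
        return new MyClass(value);
    }

    // Accessor the handler calls when binding a parameter.
    int asInt() {
        return value;
    }
}
```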

Related

How to work with PGpoint for Geolocation using PostgreSQL?

I found a lot of answers suggesting Hibernate Spatial for geolocation data, but I want to know whether that is the best approach, because I found that PostgreSQL works with PGpoint for geolocation. I implemented it, but it doesn't work because it doesn't save:
ERROR: column "location" is of type point but expression is of type character varying
Someone asked the same question, but nobody answered it, so let me add another question below in case nobody knows about that one.
As a suggestion, I'd like to know the best way to use geo data in a Spring Boot context.
Thanks! Have a good day.
There is no way to save/update/get a PGpoint object directly, so you have to create your own user type to support PGpoint and convert it before it is saved. UserType is a Hibernate class that lets you create a custom type and convert values before they are written to the database.
Here is the code you need to implement.
First, create a class that implements UserType:
import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import org.apache.commons.lang.ObjectUtils;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.usertype.UserType;
import org.postgresql.geometric.PGpoint;

public class PGPointType implements UserType {

    @Override
    public int[] sqlTypes() {
        return new int[] { Types.VARCHAR };
    }

    @Override
    public Class<PGpoint> returnedClass() {
        return PGpoint.class;
    }

    @Override
    public boolean equals(Object obj, Object obj1) {
        return ObjectUtils.equals(obj, obj1);
    }

    @Override
    public int hashCode(Object obj) {
        return obj.hashCode();
    }

    @Override
    public Object nullSafeGet(ResultSet resultSet, String[] names,
            SharedSessionContractImplementor session, Object owner) throws SQLException {
        if (names.length == 1) {
            // Read the column first; wasNull() is only meaningful after a read.
            Object value = resultSet.getObject(names[0]);
            if (value == null || resultSet.wasNull()) {
                return null;
            }
            return new PGpoint(value.toString());
        }
        return null;
    }

    @Override
    public void nullSafeSet(PreparedStatement statement, Object value, int index,
            SharedSessionContractImplementor session) throws SQLException {
        if (value == null) {
            statement.setNull(index, Types.OTHER);
        } else {
            statement.setObject(index, value, Types.OTHER);
        }
    }

    @Override
    public Object deepCopy(Object obj) {
        return obj;
    }

    @Override
    public boolean isMutable() {
        return false;
    }

    @Override
    public Serializable disassemble(Object obj) {
        return (Serializable) obj;
    }

    @Override
    public Object assemble(Serializable serializable, Object obj) {
        return serializable;
    }

    @Override
    public Object replace(Object obj, Object obj1, Object obj2) {
        return obj;
    }
}
Second, add the @TypeDef annotation on the entity with a name and the PGPointType you created, and on any field of type PGpoint add the @Type annotation with that name:
@TypeDef(name = "type", typeClass = PGPointType.class)
@Entity
public class Entity {

    @Type(type = "type")
    private PGpoint pgPoint;

    // Getters and setters
}
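Note that nullSafeGet above leans on PGpoint's textual form: it calls toString() on the column value and feeds the result back into the PGpoint(String) constructor, which parses PostgreSQL's "(x,y)" point literal. A dependency-free sketch of that round trip, using a hypothetical stand-in class (PGpoint itself ships with the PostgreSQL JDBC driver):

```java
// Hypothetical stand-in for org.postgresql.geometric.PGpoint, illustrating
// the "(x,y)" text round trip that PGPointType.nullSafeGet depends on.
final class PointLiteral {
    final double x, y;

    PointLiteral(double x, double y) {
        this.x = x;
        this.y = y;
    }

    // Parse a PostgreSQL-style point literal such as "(1.5,2.0)".
    PointLiteral(String literal) {
        String body = literal.substring(1, literal.length() - 1);
        String[] parts = body.split(",");
        this.x = Double.parseDouble(parts[0]);
        this.y = Double.parseDouble(parts[1]);
    }

    @Override
    public String toString() {
        return "(" + x + "," + y + ")";
    }
}
```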
Kind regards.

Simplest way to migrate from older JPA to a newer version with Query.getResultList() returning List<Object[]> instead of List<Vector>

We're currently researching the best way to upgrade from TopLink 2.1-60f to EclipseLink 2.6. The project is somewhat large, and most of the manual work would have to be done in parts of the code where we are using NativeQuery. The Query.getResultList() result differs between the two JPA implementations: TopLink returns a List<Vector>, while EclipseLink returns a List<Object[]>. The code is therefore unfortunately littered with List<Vector> references.
Part of the solution would be to convert the result from a list of arrays to a list of vectors. Instead of doing this manually in all the numerous places, I was thinking we could use AspectJ to intercept the getResultList() calls and convert the return values. Is this a viable solution? Has anyone implemented a similar solution? We're using Maven as our build tool.
Thanks in advance!
My suggestion is: Use a good IDE and refactor your code!
But because you asked for an AOP solution, here is a self-contained AspectJ example. As I have never used JPA, I will just recreate your situation as a little abstraction.
Abstract Query base implementation with lots of dummy methods:
package de.scrum_master.persistence;
import java.util.*;
import javax.persistence.*;
public abstract class MyBaseQueryImpl implements Query {
    @Override public int executeUpdate() { return 0; }
    @Override public int getFirstResult() { return 0; }
    @Override public FlushModeType getFlushMode() { return null; }
    @Override public Map<String, Object> getHints() { return null; }
    @Override public LockModeType getLockMode() { return null; }
    @Override public int getMaxResults() { return 0; }
    @Override public Parameter<?> getParameter(String arg0) { return null; }
    @Override public Parameter<?> getParameter(int arg0) { return null; }
    @Override public <T> Parameter<T> getParameter(String arg0, Class<T> arg1) { return null; }
    @Override public <T> Parameter<T> getParameter(int arg0, Class<T> arg1) { return null; }
    @Override public <T> T getParameterValue(Parameter<T> arg0) { return null; }
    @Override public Object getParameterValue(String arg0) { return null; }
    @Override public Object getParameterValue(int arg0) { return null; }
    @Override public Set<Parameter<?>> getParameters() { return null; }
    @Override public Object getSingleResult() { return null; }
    @Override public boolean isBound(Parameter<?> arg0) { return false; }
    @Override public Query setFirstResult(int arg0) { return null; }
    @Override public Query setFlushMode(FlushModeType arg0) { return null; }
    @Override public Query setHint(String arg0, Object arg1) { return null; }
    @Override public Query setLockMode(LockModeType arg0) { return null; }
    @Override public Query setMaxResults(int arg0) { return null; }
    @Override public <T> Query setParameter(Parameter<T> arg0, T arg1) { return null; }
    @Override public Query setParameter(String arg0, Object arg1) { return null; }
    @Override public Query setParameter(int arg0, Object arg1) { return null; }
    @Override public Query setParameter(Parameter<Calendar> arg0, Calendar arg1, TemporalType arg2) { return null; }
    @Override public Query setParameter(Parameter<Date> arg0, Date arg1, TemporalType arg2) { return null; }
    @Override public Query setParameter(String arg0, Calendar arg1, TemporalType arg2) { return null; }
    @Override public Query setParameter(String arg0, Date arg1, TemporalType arg2) { return null; }
    @Override public Query setParameter(int arg0, Calendar arg1, TemporalType arg2) { return null; }
    @Override public Query setParameter(int arg0, Date arg1, TemporalType arg2) { return null; }
    @Override public <T> T unwrap(Class<T> arg0) { return null; }
}
The only method missing is getResultList(), so now let us provide two different implementations for it, extending the abstract base implementation:
Concrete Query implementation returning List<Vector>:
This emulates your TopLink class.
package de.scrum_master.persistence;
import java.util.*;
public class VectorQuery extends MyBaseQueryImpl {
    @Override
    public List getResultList() {
        List<Vector<String>> resultList = new ArrayList<>();
        Vector<String> result = new Vector<>();
        result.add("foo"); result.add("bar");
        resultList.add(result);
        result = new Vector<>();
        result.add("one"); result.add("two");
        resultList.add(result);
        return resultList;
    }
}
Concrete Query implementation returning List<Object[]>:
This emulates your EclipseLink class.
package de.scrum_master.persistence;
import java.util.*;
public class ArrayQuery extends MyBaseQueryImpl {
    @Override
    public List getResultList() {
        List<Object[]> resultList = new ArrayList<>();
        Object[] result = new Object[] { "foo", "bar" };
        resultList.add(result);
        result = new Object[] { "one", "two" };
        resultList.add(result);
        return resultList;
    }
}
Driver application:
The application creates queries of both concrete subtypes, each time assuming that the list elements will be vectors.
package de.scrum_master.app;
import java.util.*;
import de.scrum_master.persistence.*;
public class Application {
    public static void main(String[] args) {
        List<Vector<?>> resultList;
        resultList = new VectorQuery().getResultList();
        for (Vector<?> result : resultList)
            System.out.println(result);
        resultList = new ArrayQuery().getResultList();
        for (Vector<?> result : resultList)
            System.out.println(result);
    }
}
Console log without aspect:
[foo, bar]
[one, two]
Exception in thread "main" java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to java.util.Vector
at de.scrum_master.app.Application.main(Application.java:15)
Uh-oh! This is exactly your problem, right? Now what can we do about it if we absolutely refuse to refactor? We abuse AOP for patching up the legacy code. (Please don't do it, but you can if you absolutely want to.)
AspectJ query result adapter:
Disregarding usage of raw types and other ugly stuff, here is my proof of concept:
package de.scrum_master.aspect;
import java.util.*;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
@Aspect
public class QueryResultAdapter {
    @Around("call(* javax.persistence.Query.getResultList())")
    public List<Vector> transformQueryResult(ProceedingJoinPoint thisJoinPoint) throws Throwable {
        System.out.println(thisJoinPoint);
        List result = (List) thisJoinPoint.proceed();
        if (result != null && result.size() > 0 && result.get(0) instanceof Vector)
            return result;
        System.out.println("Transforming arrays to vectors");
        List<Vector> transformedResult = new ArrayList<Vector>();
        for (Object[] arrayItem : (List<Object[]>) result)
            transformedResult.add(new Vector(Arrays.asList(arrayItem)));
        return transformedResult;
    }
}
Console log with aspect:
call(List de.scrum_master.persistence.VectorQuery.getResultList())
[foo, bar]
[one, two]
call(List de.scrum_master.persistence.ArrayQuery.getResultList())
Transforming arrays to vectors
[foo, bar]
[one, two]
Et voilà: with AOP you can do ugly stuff and other things it was not invented for. ;-)
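If you only need the conversion itself, the transformation at the heart of the aspect can be exercised without any weaving. Here it is as a plain helper method (the class and method names are mine, not part of the aspect):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Vector;

// Plain-Java version of the aspect's result transformation: pass a
// List of Vectors through unchanged, wrap each row of a List<Object[]>.
final class ResultListAdapter {
    @SuppressWarnings({ "rawtypes", "unchecked" })
    static List<Vector> toVectors(List result) {
        if (result == null || result.isEmpty() || result.get(0) instanceof Vector)
            return result;
        List<Vector> transformed = new ArrayList<Vector>();
        for (Object[] row : (List<Object[]>) result)
            transformed.add(new Vector(Arrays.asList(row)));
        return transformed;
    }
}
```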

AEM Query Builder: Implementing PredicateEvaluator for ordering only

I'm trying to implement a PredicateEvaluator for ordering purposes only - no filtering.
So I started with:
public class OrderByTagPredicate implements PredicateEvaluator {

    private static final Logger log = Logger.getLogger(OrderByTagPredicate.class.getSimpleName());

    public OrderByTagPredicate() {
        super();
    }

    @Override
    public Comparator<Row> getOrderByComparator(final Predicate predicate, final EvaluationContext context) {
        return new Comparator<Row>() {
            @Override
            public int compare(Row o1, Row o2) {
                // TODO Auto-generated method stub
                return 0;
            }
        };
    }

    @Override
    public boolean canFilter(Predicate arg0, EvaluationContext arg1) {
        return true;
    }

    @Override
    public boolean canXpath(Predicate arg0, EvaluationContext arg1) {
        return true;
    }

    @Override
    public FacetExtractor getFacetExtractor(Predicate arg0, EvaluationContext arg1) {
        return null;
    }

    @Override
    public String[] getOrderByProperties(Predicate arg0, EvaluationContext arg1) {
        return null;
    }

    @Override
    public String getXPathExpression(Predicate arg0, EvaluationContext arg1) {
        return null;
    }

    @Override
    public boolean includes(Predicate arg0, Row arg1, EvaluationContext arg2) {
        return true;
    }

    @Override
    public boolean isFiltering(Predicate arg0, EvaluationContext arg1) {
        return false;
    }
}
I registered the predicate with:
query.registerPredicateEvaluator("orderbytag", new OrderByTagPredicate());
And added it to the map: map.put("orderbytag", "xxx"); which is then used to create a PredicateGroup.
I've tried to debug by putting breakpoints in all the methods of the OrderByTagPredicate class, but it seems the method getOrderByComparator(...) never gets called.
Any clue?
Resolved by adding the following to the map:
map.put("orderby", "orderbytag");
The orderby clause was missing; adding it makes AEM execute the PredicateEvaluator's ordering method!
When you just want to sort by a particular property, the easiest way to do it is to include an ordering predicate in the PredicateGroup you use to build your query. That way, you don't even need to worry about custom evaluators, since the default one will handle it for you.
Assuming you've already created "predicateGroup", and you want to sort by 'indexName' ascending, you would do:
Predicate orderPredicate = new Predicate(Predicate.ORDER_BY);
orderPredicate.set(Predicate.ORDER_BY, "@" + indexName);
orderPredicate.set(Predicate.PARAM_SORT, Predicate.SORT_ASCENDING);
predicateGroup.add(orderPredicate);

Using type handler in where clause in MyBatis

Can I use a type handler in a where clause when writing a dynamic query in MyBatis?
I have to convert a Boolean value to a char: false will be converted to "N" and true to "Y", as the values stored in the column are either Y or N.
Yes, you can use a MyBatis type handler:
import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;

public class YesNoBooleanTypeHandler extends BaseTypeHandler<Boolean> {

    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, Boolean parameter, JdbcType jdbcType)
            throws SQLException {
        ps.setString(i, convert(parameter));
    }

    @Override
    public Boolean getNullableResult(ResultSet rs, String columnName)
            throws SQLException {
        return convert(rs.getString(columnName));
    }

    @Override
    public Boolean getNullableResult(ResultSet rs, int columnIndex)
            throws SQLException {
        return convert(rs.getString(columnIndex));
    }

    @Override
    public Boolean getNullableResult(CallableStatement cs, int columnIndex)
            throws SQLException {
        return convert(cs.getString(columnIndex));
    }

    private String convert(Boolean b) {
        return b ? "Y" : "N";
    }

    private Boolean convert(String s) {
        // "Y".equals(s) is null-safe: getString returns null for SQL NULL.
        return "Y".equals(s);
    }
}
Mapper.xml where clause:
... WHERE your_bool = #{yourBool,typeHandler=YesNoBooleanTypeHandler} ...
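The mapping rules themselves are easy to check in isolation. This standalone sketch mirrors the two convert methods above (the helper class name is mine); note the "Y".equals(s) form, which stays null-safe if getString ever returns null for a SQL NULL:

```java
// Standalone mirror of YesNoBooleanTypeHandler's conversion logic.
final class YesNoConversion {
    // Boolean -> column value; only ever called with non-null input
    // in the handler (setNonNullParameter guarantees it).
    static String toColumn(boolean b) {
        return b ? "Y" : "N";
    }

    // Column value -> Boolean; null-safe, so SQL NULL maps to false.
    static boolean fromColumn(String s) {
        return "Y".equals(s);
    }
}
```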

JPA Criteria API group_concat usage

I am currently working on a report which needs a group_concat for one of the fields.
CriteriaQuery<GameDetailsDto> criteriaQuery = criteriaBuilder
        .createQuery(GameDetailsDto.class);
Root<BetDetails> betDetails = criteriaQuery.from(BetDetails.class);
Expression<String> betSelection = betDetails.get("winningOutcome");
criteriaQuery.multiselect(
        // other fields to select
        criteriaBuilder.function("group_concat", String.class, betSelection)
        // other fields to select
);
// predicate, where clause and other filters
TypedQuery<GameDetailsDto> typedQuery = entityManager.createQuery(criteriaQuery);
This throws a NullPointerException on the line:
TypedQuery<GameDetailsDto> typedQuery = entityManager.createQuery(criteriaQuery);
Did I use the function method of the CriteriaBuilder incorrectly?
The documentation says:
function(String name, Class<T> type, Expression<?>... args);
I figured out how to do this with Hibernate-JPA-MySQL:
1.) Created a GroupConcatFunction class implementing org.hibernate.dialect.function.SQLFunction (this handles single-column group_concat for now):
public class GroupConcatFunction implements SQLFunction {

    @Override
    public boolean hasArguments() {
        return true;
    }

    @Override
    public boolean hasParenthesesIfNoArguments() {
        return true;
    }

    @Override
    public Type getReturnType(Type firstArgumentType, Mapping mapping)
            throws QueryException {
        return StandardBasicTypes.STRING;
    }

    @Override
    public String render(Type firstArgumentType, List arguments,
            SessionFactoryImplementor factory) throws QueryException {
        if (arguments.size() != 1) {
            throw new QueryException(new IllegalArgumentException(
                    "group_concat should have one arg"));
        }
        return "group_concat(" + arguments.get(0) + ")";
    }
}
2.) Created a CustomMySql5Dialect class extending org.hibernate.dialect.MySQL5Dialect and registered the group_concat function created in step 1.
3.) In the app context, updated the jpaVendorAdapter to use the CustomMySql5Dialect as the databasePlatform.
4.) Finally, to use it:
criteriaBuilder.function("group_concat", String.class,
sampleRoot.get("sampleColumnName"))
Simple solution: instead of creating a whole class, just use SQLFunctionTemplate:
new SQLFunctionTemplate(StandardBasicTypes.STRING, "group_concat(?1)")
and then register this function in your own SQL dialect (e.g. in the constructor):
public class MyOwnSQLDialect extends MySQL5Dialect {

    public MyOwnSQLDialect() {
        super();
        this.registerFunction("group_concat",
                new SQLFunctionTemplate(StandardBasicTypes.STRING, "group_concat(?1)"));
    }
}
Suggested property:
spring.jpa.properties.hibernate.metadata_builder_contributor = com.inn.core.generic.utils.SqlFunctionsMetadataBuilderContributor
and class:
import org.hibernate.boot.MetadataBuilder;
import org.hibernate.boot.spi.MetadataBuilderContributor;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.type.StandardBasicTypes;
import org.springframework.stereotype.Component;
@Component
public class SqlFunctionsMetadataBuilderContributor implements MetadataBuilderContributor {

    @Override
    public void contribute(MetadataBuilder metadataBuilder) {
        metadataBuilder.applySqlFunction("config_json_extract",
                new StandardSQLFunction("json_extract", StandardBasicTypes.STRING));
        metadataBuilder.applySqlFunction("JSON_UNQUOTE",
                new StandardSQLFunction("JSON_UNQUOTE", StandardBasicTypes.STRING));
        metadataBuilder.applySqlFunction("group_concat",
                new StandardSQLFunction("group_concat", StandardBasicTypes.STRING));
    }
}