MyBatis mapper for PROC call with multiple IN and OUT parameters

I am new to MyBatis and stuck on calling a stored procedure. My procedure has four parameters:
IN - searchQuery of type String
OUT - returnCode of type int
OUT - returnMsg of type String
OUT - prodList of type Array of a custom STRUCT
How should I model my parameter and result objects, and what should the mapper look like?
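For illustration, here is a rough, hedged sketch of one common way to model this with an annotation-based mapper in CALLABLE mode. Everything named here (my_search_proc, ProcMapper, Product, com.example.ProductArrayTypeHandler) is a placeholder rather than something from the question, and the jdbcType values assume an Oracle-style procedure:

import java.util.List;

import org.apache.ibatis.annotations.Options;
import org.apache.ibatis.annotations.Update;
import org.apache.ibatis.mapping.StatementType;

public interface ProcMapper {

    // Hypothetical bean matching one element of the STRUCT array.
    class Product {
        // fields corresponding to the STRUCT attributes
    }

    // Parameter object: MyBatis writes the OUT values back into this instance
    // after the call (getters/setters omitted for brevity).
    class ProcParams {
        public String searchQuery;     // IN
        public Integer returnCode;     // OUT
        public String returnMsg;       // OUT
        public List<Product> prodList; // OUT, filled by a custom TypeHandler
    }

    // CALLABLE statement with the IN/OUT modes declared inline. The array OUT
    // parameter needs a hand-written TypeHandler that converts the JDBC ARRAY
    // of STRUCTs into List<Product> (and, for Oracle, usually a jdbcTypeName
    // attribute naming the SQL array type as well).
    @Update("{call my_search_proc("
          + "#{searchQuery, mode=IN, jdbcType=VARCHAR}, "
          + "#{returnCode, mode=OUT, jdbcType=INTEGER}, "
          + "#{returnMsg, mode=OUT, jdbcType=VARCHAR}, "
          + "#{prodList, mode=OUT, jdbcType=ARRAY, typeHandler=com.example.ProductArrayTypeHandler}"
          + ")}")
    @Options(statementType = StatementType.CALLABLE)
    void callSearchProc(ProcParams params);
}

After callSearchProc returns, returnCode, returnMsg and prodList are read from the ProcParams object; only the STRUCT array needs custom conversion code, while the scalar OUT parameters are handled by MyBatis directly.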

Related

C++Builder - cannot cast from 'AnsiString' to 'TObject'

I have a problem with converting a string variable to TObject.
I have a query that returns two columns to me. In the first column I have varchar values that I translate into strings, and in the second column I have int values.
I want to fill a ComboBox in this way with these values:
cbx1->AddItem(DataSet1->DataSet->Fields->Field[0]->AsString, (TObject *)(int)DataSet1->DataSet->Fields->Field[1]);
When I refer to the second value, which is of type int, I get garbage characters in the ComboBox, e.g. xD, etc.
I tried converting this value to a string first, e.g.:
String temp = IntToStr(DataSet1->DataSet->Fields->Field[1]);
cbx1->AddItem(DataSet1->DataSet->Fields->Field[0]->AsString, (TObject *) temp);
I receive an error message:
cannot cast from 'AnsiString' to 'TObject'
I do not know what further I can do to convert this value.
You cannot cast an AnsiString value to a TObject* pointer. You can only cast an integer value, or a pointer value, to a TObject* pointer. AnsiString is neither of those.
You are not retrieving the int value from the 2nd field correctly anyway. Field[1] is a pointer to an actual TField object in the Fields collection. That pointer is what you are trying to store in your ComboBox, NOT the int value that the TField represents.
You need to use Field[1]->AsInteger to get the int value of the 2nd field, similar to how you use Field[0]->AsString to get the string value of the 1st field:
cbx1->AddItem(
    DataSet1->DataSet->Fields->Field[0]->AsString,
    (TObject*) DataSet1->DataSet->Fields->Field[1]->AsInteger
    // in C++, reinterpret_cast is preferred over a C-style cast:
    // reinterpret_cast<TObject*>(DataSet1->DataSet->Fields->Field[1]->AsInteger)
);
This is no different than the code in your previous question:
cbx1->AddItem("one",(TObject*)1);
You are now just replacing the literals "one" and 1 with runtime variables of equivalent types.

Conditionally add query operator on properties defined in non-EDM base type, if inheriting

(C# code at end of question)
I have the following inheritance chain:
PreRecord <- Record <- (multiple entity types)
Record declares a property ID As Integer.
PreRecord and Record are not EDM types, and do not correspond to tables in the database.
I have a method that takes a generic parameter constrained to PreRecord and builds an EF query with the generic parameter as the element type. At runtime, in the event that T inherits not just from PreRecord but from Record, I would like to add an OrderBy operator on ID:
'Sample 1
Function GetQuery(Of T As PreRecord)(row As T) As IQueryable(Of T)
    Dim dcx = New MyDbContext
    Dim qry = dcx.Set(Of T).AsQueryable()
    If TypeOf row Is Record Then
        'modify/rewrite the query here
    End If
    Return qry
End Function
If the generic parameter were constrained to Record, I would have no problem applying query operators that use the ID property. How can I apply a different (narrower) generic constraint mid-method and still return an IQueryable(Of T) / IQueryable<T>, where T is constrained only to PreRecord?
I tried this:
'Sample 2
qry = dcx.Set(Of T).Cast(Of Record).OrderBy(Function(x) x.ID).Cast(Of PreRecord)()
which doesn't work:
LINQ to Entities only supports casting EDM primitive or enumeration types.
C# equivalent:
//Sample 1
public IQueryable<T> GetQuery<T>(T row) where T : PreRecord {
    var dcx = new MyDbContext();
    var qry = dcx.Set<T>().AsQueryable();
    if (row is Record) {
        //modify/rewrite the query here
    }
    return qry;
}
and this doesn't work:
//Sample 2
qry = dcx.Set<T>().Cast<Record>().OrderBy(x => x.ID).Cast<PreRecord>();
The problem here is that the compiler checks the query at compile time, and the PreRecord class does not have an ID property. We cannot simply use Cast, because when it appears in the query definition the parser tries to translate it to SQL, and no such operation exists there: SQL only supports converting one column type to another, so on the .NET side casting is supported only for primitive and enum types. To get around the compile-time check we can use the Expression class to build the query dynamically:
ParameterExpression e = Expression.Parameter(typeof(T), "x");
Expression body = Expression.Property(e, "ID");
Expression<Func<T, int>> orderByExpression = Expression.Lambda<Func<T, int>>(body, e);
And use your expression in the query:
qry = dcx.Set<T>().OrderBy(orderByExpression);
This way your LINQ query is validated at execution time rather than compile time. Here I assumed ID is of type int; if the type is different, change it accordingly.

How to send list of string values as "LIST" object in Dozer (custom-converter-param)?

Currently the list is passed in the custom-converter-param as "aa,bb,cc,dd", but the parameter arrives as a plain String, so we have to split it on commas again and build a List ourselves before processing.
Is there a way to pass the list of string values as a "List" object parameter instead?
Thanks,
Kathir
In every custom converter, the value of the custom-converter-param property is passed to a String field named parameter in the org.dozer.DozerConverter<A, B> class, and its only getter has this contract:
public String getParameter()
So no, it is not possible to retrieve the value as a List. The simplest/best thing to do here is to create a getParameterAsList method in your own converter that splits the String value.
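A minimal sketch of that approach, assuming the parameter stays a comma-separated String. The converter name MyListAwareConverter and the String-to-String mapping are placeholders; only getParameter() comes from the Dozer API:

import java.util.Arrays;
import java.util.List;

import org.dozer.DozerConverter;

public class MyListAwareConverter extends DozerConverter<String, String> {

    public MyListAwareConverter() {
        super(String.class, String.class);
    }

    // Expose the custom-converter-param as a List instead of a raw String.
    protected List<String> getParameterAsList() {
        String raw = getParameter();          // e.g. "aa,bb,cc,dd"
        return Arrays.asList(raw.split(","));
    }

    @Override
    public String convertTo(String source, String destination) {
        List<String> values = getParameterAsList();
        // ... use values in the mapping logic ...
        return source;
    }

    @Override
    public String convertFrom(String source, String destination) {
        return source;
    }
}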

How to pass a byte[] parameter type in a sql query in MyBatis?

I need to match against an encrypted column in the DB, passing the encrypted value as a byte[]. However, the hash code of the byte[] is passed instead of its actual contents, so the value never matches correctly. Below are my query and the method declaration in the Mapper.java.
AccBalNotificationBean selectAccBalNotificationBean(#Param("acctIdByteArray") byte[] acctIdByteArray);
SELECT toa.accounts_id from tbl_transactions_other_accounts toa WHERE other_account_number = #{acctIdByteArray}
Thank you for your help.
I assume the data type of your other_account_number column is a string type (char, varchar, etc.). MyBatis will use the StringTypeHandler by default and call the .toString() method of your byte array. Give MyBatis a hint that you want the content of the array to be used by specifying the typeHandler:
.. WHERE other_account_number = #{acctIdByteArray, typeHandler=org.apache.ibatis.type.ByteArrayTypeHandler}
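For completeness, a minimal sketch of how the same hint could look if the query were declared with an annotation instead of XML. The annotation usage and the interface name AccBalNotificationMapper are assumptions; the method signature, the AccBalNotificationBean result type and the column names come from the question:

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

public interface AccBalNotificationMapper {

    // The typeHandler attribute inside #{} makes MyBatis bind the raw byte
    // array content instead of calling toString() on it.
    @Select("SELECT toa.accounts_id FROM tbl_transactions_other_accounts toa "
          + "WHERE other_account_number = "
          + "#{acctIdByteArray, typeHandler=org.apache.ibatis.type.ByteArrayTypeHandler}")
    AccBalNotificationBean selectAccBalNotificationBean(
            @Param("acctIdByteArray") byte[] acctIdByteArray);
}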

Entity Framework and Stored Procedure Function Import with nullable parameters

I noticed that when Entity Framework generates a method for a stored procedure (function import), it tests to see if the parameter is null, and makes a decision like this:
if (contactID.HasValue)
{
    contactIDParameter = new ObjectParameter("contactID", contactID);
}
else
{
    contactIDParameter = new ObjectParameter("contactID", typeof(global::System.Int32));
}
I don't understand what it's trying to do by passing the Type of the parameter when the parameter is null. Exactly how does the stored procedure/function get executed in this case?
I did a test myself with SQL Profiler and noticed that when I intentionally pass null as a parameter (by calling something like context.MyProcedure(null)), null is simply passed as the parameter to the SQL Server stored procedure.
Some clarifications on this behavior would be appreciated.
I was interested in this question, so I did some investigation.
ObjectParameter has two constructor overloads: one for passing a value and one for passing the type. The second is used when you pass null as the parameter value, because EF needs it internally. The reason is that a function import must be called with ObjectParameter instances, not with the plain parameters you pass to the wrapping method.
Internally EF calls:
private EntityCommand CreateEntityCommandForFunctionImport(string functionName, out EdmFunction functionImport, params ObjectParameter[] parameters)
{
    ...
    for (int i = 0; i < parameters.Length; i++)
    {
        if (parameters[i] == null)
        {
            throw EntityUtil.InvalidOperation(Strings.ObjectContext_ExecuteFunctionCalledWithNullParameter(i));
        }
    }
    ...
    this.PopulateFunctionEntityCommandParameters(parameters, functionImport, command);
    return command;
}
As you can see, even a null value must be represented as an ObjectParameter, because you can't simply pass null - that throws an exception. PopulateFunctionEntityCommandParameters then uses the type information to create the correct DbParameter for calling the stored procedure. The value of that parameter is DBNull.Value.
So you don't have to deal with it. It is just infrastructure.
When you look at the ObjectParameter class constructors
public ObjectParameter (string name, object value)
public ObjectParameter (string name, Type type)
You can see that ObjectParameter has 3 important private fields:
_name (the name of the parameter, not null and immutable)
_type (the CLR type of the parameter, not null and immutable)
_value (the value of the parameter, mutable and nullable)
When the first constructor is used, all of these fields are initialized. With the second constructor, the _value field is left null.
In EF's ExecuteFunction, a private method CreateEntityCommandForFunctionImport is used, which calls another, even deeper private method, PopulateFunctionImportEntityCommandParameters, which attaches the entity parameters.
Inside PopulateFunctionImportEntityCommandParameters, an EntityParameter instance (which represents a parameter of the EntityCommand) is mapped to the name and value properties of the ObjectParameter.
This line explains it all:
entityParameter.Value = objectParameter.Value ?? DBNull.Value;
So DBNull.Value is passed on to the database if no value was specified for the parameter.