I have this scenario:
A SQL Server table MyTable with Field1 and XmlField (of type nvarchar(50) and the SQL Server xml data type, respectively)
LINQ to Entities
Now I'd like to get a query like this:
SELECT Field1, XmlField
FROM MyTable
WHERE CAST(XmlField AS nvarchar(4000)) = '<myXml />'
This is obviously a valid query in SQL Server, but I can't find a way to write it in L2E.
Please note that this code doesn't work:
var query = from row in context.MyTables
where (string)row.XmlField == "<myXml />"
select row;
and neither do other cast approaches.
This is because "ToString" isn't translated correctly in L2E.
Now my idea is this: an extension method:
var query = from row in context.MyTables
select row;
query = query.CompareXml("XmlField", "<myXml />");
and this is the extension method:
public static IQueryable<TSource> CompareXml<TSource>(this IQueryable<TSource> source, string xmlFieldName, string xmlToCompare)
{
ConstantExpression xmlValue = Expression.Constant(xmlToCompare);
ParameterExpression parameter = Expression.Parameter(typeof(TSource), source.ElementType.Name);
PropertyInfo propertyInfo = typeof(TSource).GetProperty(xmlFieldName);
MemberExpression memberAccess = Expression.MakeMemberAccess(parameter, propertyInfo);
var stringMember = Expression.Convert(memberAccess, typeof(string));
BinaryExpression clauseExpression = Expression.Equal(xmlValue, stringMember);
return source.Where(Expression.Lambda<Func<TSource, bool>>(clauseExpression, parameter));
}
and again, this doesn't work either.
Now I'd like to understand how I can force a "Convert" using a cast so I can compare the xml column with an nvarchar value.
Thanks in advance
Massimiliano
Unfortunately EF still doesn't properly support XML columns. I'm afraid pretty much the only choice I know of is to create a view that does the cast and map that to a different entity. This will probably make the code awkward but also offers additional possible scenarios; for example, with a lot of SQL code, you could map elements in the XML columns to actual columns in the view, allowing you to make queries on specific parts of the XML.
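For illustration, here is a rough sketch of that workaround, assuming a view such as CREATE VIEW MyTableXmlView AS SELECT Field1, CAST(XmlField AS nvarchar(4000)) AS XmlText FROM MyTable has been mapped to its own entity (the view name, the MyTableXmlViews entity set and the XmlText property are all hypothetical, not part of the original model):
// Querying the view-mapped entity; XmlText is a plain string, so L2E can translate the comparison.
var query = from row in context.MyTableXmlViews
            where row.XmlText == "<myXml />"
            select row;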
On the bright side, at least inserting values in an XML column works pretty much as expected.
Given a simple table
create table car (
  make varchar,
  model varchar
)
And the following DAO code
NamedParameterJdbcTemplate template;
String SQL = "delete from car where make = :make and model in (:model)";
void batchDelete(final Map<String, Collection<String>> map) {
SqlParameterSource[] params = map.entrySet().stream()
.map(entry -> toParams(entry.getKey(), entry.getValue()))
.toArray(SqlParameterSource[]::new);
template.batchUpdate(SQL, params);
}
void delete(final Map<String, Collection<String>> map) {
map.forEach((make, models) -> {
SqlParameterSource params = toParams(make, models);
template.update(SQL, params);
});
}
SqlParameterSource toParams(final String make, final Collection<String> models) {
return new MapSqlParameterSource("make", make)
.addValue("model", new ArrayList<>(models));
}
The batch delete function fails when the map has 2 keys with a different number of values for the IN clause in a batch. Assume Map.of creates an ordered Map.
// runs fine - 2 values for each key
batchDelete(Map.of("VW", Arrays.asList("Polo", "Golf"), "Toyota", Arrays.asList("Yaris", "Camry")));
// fails - first key has 1 value, second key has 2 values
batchDelete(Map.of("Toyota", Arrays.asList("Yaris"), "VW", Arrays.asList("Polo", "Golf")));
// runs fine - key with bigger list comes first
batchDelete(Map.of("VW", Arrays.asList("Polo", "Golf"), "Toyota", Arrays.asList("Yaris")));
// non batch delete runs fine either way
delete(Map.of("Toyota", Arrays.asList("Yaris"), "VW", Arrays.asList("Polo", "Golf")));
Spring documentation sort of alludes to that
https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#jdbc-in-clause
The SQL standard allows for selecting rows based on an expression that includes a variable list of values. A typical example would be select * from T_ACTOR where id in (1, 2, 3). This variable list is not directly supported for prepared statements by the JDBC standard; you cannot declare a variable number of placeholders. You need a number of variations with the desired number of placeholders prepared, or you need to generate the SQL string dynamically once you know how many placeholders are required. The named parameter support provided in the NamedParameterJdbcTemplate and JdbcTemplate takes the latter approach.
The error message is
The column index is out of range: 3, number of columns: 2.; nested exception is org.postgresql.util.PSQLException: The column index is out of range: 3, number of columns: 2.
What happens is that the following line in NamedParameterJdbcTemplate#batchUpdate:
PreparedStatementCreatorFactory pscf = getPreparedStatementCreatorFactory(parsedSql, batchArgs[0]);
creates the dynamic SQL based on the parameter count of the first batch argument:
delete from car where make = ? and model in (?)
So the second batch item, which has 2 models, fails because only one placeholder was generated.
What would be a workaround (other than grouping map entries by number of values)?
Solution
Went back to plain old PreparedStatement
SQL - use ANY instead of IN, so there is a single array placeholder no matter how many models each batch entry has:
delete from car where make = ? and model = any (?)
DAO
Connection con;
PreparedStatement ps = con.prepareStatement(SQL); // the ANY-based statement above
// plain loop instead of map.forEach so the checked SQLExceptions can propagate
for (Map.Entry<String, Collection<String>> entry : map.entrySet()) {
    int col = 0;
    ps.setString(++col, entry.getKey());
    // createArrayOf expects an Object[], so convert the collection first
    ps.setArray(++col, con.createArrayOf("text", entry.getValue().toArray()));
    ps.addBatch();
}
ps.executeBatch();
I would recommend changing the SQL to look something more like this:
String SQL = "DELETE FROM car WHERE (make, model) IN (:ids)";
If you do it this way then you can use something similar to the answer I gave on this question: NamedJDBCTemplate Parameters is list of lists. Doing it this way means you can use NamedParameterJdbcTemplate.update(String sql, Map<String, ?> paramMap), where the key in your paramMap would be "ids" and the value would be an instance of Collection<Object[]>, with each entry in the collection being an array containing the make/model pair you want to delete:
List<Object[]> params = new ArrayList<>();//you can make this any instance of collection you want
for (Car car : cars) {
params.add(new Object[] { car.getMake(), car.getModel() });
//this is just to provide an example of what I mean, obviously this will probably be different in your app.
}
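With the pairs built like that, the call itself should just be template.update(SQL, Collections.singletonMap("ids", params)), since Spring expands each Object[] entry into a (make, model) row inside the IN clause.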
This is the query I am trying to run in PostgreSQL:
SELECT * FROM message WHERE id IN (
SELECT unnest(message_ids) "mid"
FROM session_messages WHERE session_id = '?' ORDER BY "mid" ASC
);
However, I am not able to do something like this:
create.selectFrom(Tables.MESSAGE).where(Tables.MESSAGE.ID.in(
create.select(DSL.unnest(..))
Because DSL.unnest(...) returns a Table<?>, which makes sense since it takes a List-like object (mostly a literal) and converts it to a table.
I have a feeling that I need to find a way to wrap the function around my field name, but I have no clue as to how to proceed.
NOTE. The field message_ids is of type bigint[].
EDIT
So, this is how I am doing it now, and it works exactly as expected, but I am not sure if this is the best way to do it:
Field<Long> unnestMessageIdField = DSL.field(
"unnest(" + SESSION_MESSAGES.MESSAGE_IDS.getName() + ")",
Long.class)
.as("mid");
Field<Long> messageIdField = DSL.field("mid", Long.class);
MESSAGE.ID.in(
ctx.select(messageIdField).from(
ctx.select(unnestMessageIdField)
.from(Tables.CHAT_SESSION_MESSAGES)
.where(Tables.CHAT_SESSION_MESSAGES.SESSION_ID.eq(sessionId))
)
.where(condition)
)
EDIT2
After going through the code on https://github.com/jOOQ/jOOQ/blob/master/jOOQ/src/main/java/org/jooq/impl/DSL.java I guess the right way to do this would be:
DSL.function("unnest", SQLDataTypes.BIGINT.getArrayType(), SESSION_MESSAGES.MESSAGE_IDS)
EDIT3
Since, as always, Lukas is here for my jOOQ woes, I am going to capitalize on this :)
Trying to generalize this function, with a signature of sorts:
public <T> Field<T> unnest(Field<T[]> arrayField) {
return DSL.function("unnest", <??>, arrayField);
}
I don't know how I can fetch the datatype. There seems to be a way to get DataType<T[]> from DataType<T> using DataType::getArrayDataType(), but the reverse is not possible. There is this class I found ArrayDataType, but it seems to be package-private, so I cannot use it (and even if I could, it does not expose the field elementType).
Old PostgreSQL versions had this funky idea that it is OK to produce a table from within the SELECT clause, and expand it into the "outer" table, as if it were declared in the FROM clause. That is a very obscure PostgreSQL legacy, and this example is a good chance to get rid of it, and use LATERAL instead. Your query is equivalent to this one:
SELECT *
FROM message
WHERE id IN (
SELECT "mid"
FROM session_messages
CROSS JOIN LATERAL unnest(message_ids) AS t("mid")
WHERE session_id = '?'
);
This can be translated to jOOQ much more easily as:
DSL.using(configuration)
.select()
.from(MESSAGE)
.where(MESSAGE.ID.in(
select(field(name("mid"), MESSAGE.ID.getDataType()))
.from(SESSION_MESSAGES)
.crossJoin(lateral(unnest(SESSION_MESSAGES.MESSAGE_IDS)).as("t", "mid"))
.where(SESSION_MESSAGES.SESSION_ID.eq("'?'"))
))
The Edit3 in the question is quite close to a decent solution for this problem.
We can create a custom generic unnest method for jOOQ which accepts a Field and use it in jOOQ queries as usual.
Helper method:
public static <T> Field<T> unnest(Field<T[]> field) {
var type = (Class<T>) field.getType().getComponentType();
return DSL.function("unnest", type, field);
}
Usage:
public void query(SessionId sessionId) {
var field = unnest(SESSION_MESSAGES.MESSAGE_IDS);
dsl.select().from(MESSAGE).where(
MESSAGE.ID.in(
dsl.select(field).from(SESSION_MESSAGES)
.where(SESSION_MESSAGES.SESSION_ID.eq(sessionId.id))
.orderBy(field)
)
);
}
I have an IEnumerable variable that I want to extract a distinct value from. I know all the entries in the rows of the list have the same value; I just need to get that value.
The method returns an IEnumerable.
The row in the IEnumerable is defined as:
QuoteCovId
AdditionalInterestId
AdditionalInterestsAffiliateId
AdditionalInterestsLastName
AdditionalInterestsBusinessAddrLine1
AdditionalInterestsBusinessCity
AdditionalInterestsBusinessState
AdditionalInterestsBusinessZip
Sample code:
IadditionalInterestData = AdditionalInterestData.GetAdditionalInterests(MasterPkgID, Requestor);
// Using LINQ.
var quotes = (from ai in IadditionalInterestData
select ai.QuoteCovId).Distinct();
// Iterate through to get the single value.
foreach (int QuoteCovId in quotes)
{
quoteID = QuoteCovId;
}
var quoteId = AdditionalInterestData.GetAdditionalInterests(MasterPkgID, Requestor)
.FirstOrDefault().Select(f => f.QuoteCovId);
But that method:
AdditionalInterestData.GetAdditionalInterests(MasterPkgID, Requestor);
returns an IEnumerable which I will use further in my application, which is what I need.
So how will your suggestion still give me that IEnumerable and also give me the quote value, which happens to be the same throughout the collection?
var quoteId = AdditionalInterestData.GetAdditionalInterests(MasterPkgID, Requestor).FirstOrDefault().Select(f => f.QuoteCovId);
Also, I just added your line of code as is and I get an error statement.
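For what it's worth, here is a minimal sketch (reusing the GetAdditionalInterests call and the QuoteCovId property from the question) that keeps the full collection for later use and still pulls out the single shared value:
var interests = AdditionalInterestData.GetAdditionalInterests(MasterPkgID, Requestor).ToList(); // materialize so the collection can be reused later
int quoteId = interests.Select(ai => ai.QuoteCovId).Distinct().Single(); // every row carries the same QuoteCovId, so Distinct() yields exactly one element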
I'm trying to convert an entire column's values into an ArrayList using ORMLite on Android. Is this possible with a direct API?
Using raw results I get close, but not quite:
GenericRawResults<String[]> rawResults =
getHelper().getMyProcessDao().queryRaw(
queryBuild.selectColumns("nid").prepareStatementString());
List<String[]> result = rawResults.getResults();
Hrm. I'm not sure this is what you want. However, one way to accomplish what you ask for specifically is by using a RawRowMapper, which can be passed to ORMLite's DAO method dao.queryRaw(String, RawRowMapper, String...).
Something like the following should work:
RawRowMapper<Integer> mapper = new RawRowMapper<Integer>() {
public Integer mapRow(String[] columnNames, String[] resultColumns) {
// maybe you should verify that there _is_ only 1 column here
// maybe you should handle the possibility of a bad number and throw
return Integer.parseInt(resultColumns[0]);
}
};
GenericRawResults<Integer> rawResults =
getHelper().getMyProcessDao().queryRaw(
queryBuild.selectColumns("nid").prepareStatementString(), mapper);
List<Integer> list = rawResults.getResults();
I have a very simple mapping function called "BuildEntity" that does the usual boring "left/right" coding required to dump my reader data into my domain object (shown below). My question is this: if I don't bring back every column in this mapping as-is, I get a System.IndexOutOfRangeException, and I'd like to know whether ADO.NET has anything to handle this so I don't need to bring back every column with each call into SQL ...
What I'm really looking for is something like "IsValidColumn", so I can keep this one mapping function throughout my DataAccess class with all the left/right mappings defined, and have it work even when a sproc doesn't return every column listed ...
Using reader As SqlDataReader = cmd.ExecuteReader()
Dim product As Product
While reader.Read()
product = New Product()
product.ID = Convert.ToInt32(reader("ProductID"))
product.SupplierID = Convert.ToInt32(reader("SupplierID"))
product.CategoryID = Convert.ToInt32(reader("CategoryID"))
product.ProductName = Convert.ToString(reader("ProductName"))
product.QuantityPerUnit = Convert.ToString(reader("QuantityPerUnit"))
product.UnitPrice = Convert.ToDouble(reader("UnitPrice"))
product.UnitsInStock = Convert.ToInt32(reader("UnitsInStock"))
product.UnitsOnOrder = Convert.ToInt32(reader("UnitsOnOrder"))
product.ReorderLevel = Convert.ToInt32(reader("ReorderLevel"))
productList.Add(product)
End While
Also check out this extension method I wrote for use on data commands:
public static void Fill<T>(this IDbCommand cmd,
IList<T> list, Func<IDataReader, T> rowConverter)
{
using (var rdr = cmd.ExecuteReader())
{
while (rdr.Read())
{
list.Add(rowConverter(rdr));
}
}
}
You can use it like this:
cmd.Fill(products, r => r.GetProduct());
Where "products" is the IList<Product> you want to populate, and "GetProduct" contains the logic to create a Product instance from a data reader. It won't help with this specific problem of not having all the fields present, but if you're doing a lot of old-fashioned ADO.NET like this it can be quite handy.
Although connection.GetSchema("Tables") does return metadata about the tables in your database, it won't return everything in your sproc if you define any custom columns.
For example, if you throw in some random ad-hoc column like "SELECT ProductName, 'Testing' AS ProductTestName FROM dbo.Products", you won't see 'ProductTestName' as a column because it's not in the schema of the Products table. To solve this, and ask for every column available in the returned data, leverage the GetSchemaTable() method on the SqlDataReader object.
If I add this to the existing code sample you listed in your original question, you will notice that just after the reader is declared I add a DataTable to capture the metadata from the reader itself. Next, I loop through this metadata and add each column to another table that I use in the left/right code to check whether each column exists.
Updated Source Code
Using reader As SqlDataReader = cmd.ExecuteReader()
Dim table As DataTable = reader.GetSchemaTable()
Dim colNames As New DataTable()
For Each row As DataRow In table.Rows
colNames.Columns.Add(row.ItemArray(0))
Next
Dim product As Product
While reader.Read()
product = New Product()
If Not colNames.Columns("ProductID") Is Nothing Then
product.ID = Convert.ToInt32(reader("ProductID"))
End If
product.SupplierID = Convert.ToInt32(reader("SupplierID"))
product.CategoryID = Convert.ToInt32(reader("CategoryID"))
product.ProductName = Convert.ToString(reader("ProductName"))
product.QuantityPerUnit = Convert.ToString(reader("QuantityPerUnit"))
product.UnitPrice = Convert.ToDouble(reader("UnitPrice"))
product.UnitsInStock = Convert.ToInt32(reader("UnitsInStock"))
product.UnitsOnOrder = Convert.ToInt32(reader("UnitsOnOrder"))
product.ReorderLevel = Convert.ToInt32(reader("ReorderLevel"))
productList.Add(product)
End While
This is a hack to be honest, as you should return every column to hydrate your object correctly. But I thought to include this reader method as it would actually grab all the columns, even if they are not defined in your table schema.
This approach to mapping your relational data into your domain model might cause some issues when you get into a lazy loading scenario.
Why not just have each sproc return the complete column set, using NULL, -1, or other acceptable values where you don't have the data? That avoids having to catch IndexOutOfRangeException or rewrite everything in LINQ to SQL.
Use the GetSchemaTable() method to retrieve the metadata of the DataReader. The DataTable that is returned can be used to check if a specific column is present or not.
Why don't you use LINQ to SQL? Everything you need is done automatically. More generally, you can use any other ORM tool for .NET.
If you don't want to use an ORM you can also use reflection for things like this (though in this case because ProductID is not named the same on both sides, you couldn't do it in the simplistic fashion demonstrated here):
List Provider in C#
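For example, a minimal reflection-based mapper might look something like the sketch below; it is purely illustrative and only copies columns whose names match a writable property, so the ProductID/ID mismatch above would still need special handling:
static T MapByName<T>(IDataRecord record) where T : new()
{
    var item = new T();
    for (int i = 0; i < record.FieldCount; i++)
    {
        // only map columns that correspond to a writable property of T
        var prop = typeof(T).GetProperty(record.GetName(i));
        if (prop != null && prop.CanWrite && !record.IsDBNull(i))
        {
            prop.SetValue(item, Convert.ChangeType(record.GetValue(i), prop.PropertyType), null);
        }
    }
    return item;
}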
I would call reader.GetOrdinal for each field name before starting the while loop. Unfortunately GetOrdinal throws an IndexOutOfRangeException if the field doesn't exist, so it won't be very performant.
You could probably store the results in a Dictionary<string, int> and use its ContainsKey method to determine if the field was supplied.
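A sketch of that idea in C# (this builds the lookup from GetName instead of probing GetOrdinal, which avoids triggering the exception at all; the reader and product variables are the ones from the question's sample):
var columns = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
for (int i = 0; i < reader.FieldCount; i++)
{
    columns[reader.GetName(i)] = i; // column name -> ordinal
}
if (columns.ContainsKey("UnitPrice"))
{
    product.UnitPrice = Convert.ToDouble(reader["UnitPrice"]);
}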
I ended up writing my own, but this mapper is pretty good (and simple): https://code.google.com/p/dapper-dot-net/