I am facing an issue with the getVertices function.
When I call getVertices, I am getting an empty result even though I have data:
Iterable<Vertex> resultIterator = db.getVertices("Myclass", new String[] {"key1", "key2"}, new String[] {"value1", "value2"});
(key1 + key2) is my composite key (non-unique).
If I run the same query from Studio I do get results, for example:
select from Myclass where key1 = 'value1' and key2 = 'value2'
Am I doing something wrong?
Try this:
Iterable<Vertex> resultIterator = g.getVertices("Myclass", new String[] {"key1", "key2"}, new Object[] {"value1", value2});
P.S. value2 must be an Integer, which is why the values array is an Object[] rather than a String[].
Given a simple table
create table car (
    make varchar,
    model varchar
)
And the following DAO code
NamedParameterJdbcTemplate template;
String SQL = "delete from car where make = :make and model in (:model)";
void batchDelete(final Map<String, Collection<String>> map) {
    SqlParameterSource[] params = map.entrySet().stream()
            .map(entry -> toParams(entry.getKey(), entry.getValue()))
            .toArray(SqlParameterSource[]::new);
    template.batchUpdate(SQL, params);
}
void delete(final Map<String, Collection<String>> map) {
    map.forEach((make, models) -> {
        SqlParameterSource params = toParams(make, models);
        template.update(SQL, params);
    });
}
SqlParameterSource toParams(final String make, final Collection<String> models) {
    return new MapSqlParameterSource("make", make)
            .addValue("model", new ArrayList<>(models));
}
The batch delete function fails when the map has 2 keys with a different number of values for the IN clause in a batch. Assume Map.of creates an ordered Map.
// runs fine - 2 values for each key
batchDelete(Map.of("VW", Arrays.asList("Polo", "Golf"), "Toyota", Arrays.asList("Yaris", "Camry")));
// fails - first key has 1 value, second key has 2 values
batchDelete(Map.of("Toyota", Arrays.asList("Yaris"), "VW", Arrays.asList("Polo", "Golf")));
// runs fine - key with bigger list comes first
batchDelete(Map.of("VW", Arrays.asList("Polo", "Golf"), "Toyota", Arrays.asList("Yaris")));
// non batch delete runs fine either way
delete(Map.of("Toyota", Arrays.asList("Yaris"), "VW", Arrays.asList("Polo", "Golf")));
The Spring documentation sort of alludes to this:
https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#jdbc-in-clause
The SQL standard allows for selecting rows based on an expression that includes a variable list of values. A typical example would be select * from T_ACTOR where id in (1, 2, 3). This variable list is not directly supported for prepared statements by the JDBC standard; you cannot declare a variable number of placeholders. You need a number of variations with the desired number of placeholders prepared, or you need to generate the SQL string dynamically once you know how many placeholders are required. The named parameter support provided in the NamedParameterJdbcTemplate and JdbcTemplate takes the latter approach.
The error message is
The column index is out of range: 3, number of columns: 2.; nested exception is org.postgresql.util.PSQLException: The column index is out of range: 3, number of columns: 2.
What happens is that the following line in NamedParameterJdbcTemplate#batchUpdate:
PreparedStatementCreatorFactory pscf = getPreparedStatementCreatorFactory(parsedSql, batchArgs[0]);
creates the dynamic SQL from the length of the first batch arg:
delete from car where make = ? and model in (?)
So the second batch item, which has 2 models, fails because there is only 1 placeholder.
What would be a workaround (other than grouping map entries by the number of values)?
Solution
Went back to plain old PreparedStatement
SQL - use ANY instead of IN
delete from car where make = ? and model = any (?)
DAO
Connection con;   // obtained from your DataSource as usual
PreparedStatement ps = con.prepareStatement(SQL);   // the ANY-based SQL above, not the literal "SQL"
// plain loop instead of map.forEach so the checked SQLExceptions can propagate
for (Map.Entry<String, Collection<String>> entry : map.entrySet()) {
    int col = 0;
    ps.setString(++col, entry.getKey());
    // createArrayOf expects an Object[], so convert the collection first
    ps.setArray(++col, con.createArrayOf("text", entry.getValue().toArray(new String[0])));
    ps.addBatch();
}
ps.executeBatch();
I would recommend changing the SQL to look something more like this:
String SQL = "DELETE FROM car WHERE (make, model) IN (:ids)";
If you do it this way then you can use something similar to the answer I gave on this question: NamedJDBCTemplate Parameters is list of lists. Doing it this way means you can use NamedParameterJdbcTemplate.update(String sql, Map<String, ?> paramMap), where the key in your paramMap would be "ids" and the value would be an instance of Collection<Object[]>, with each entry in the collection being an array containing the value pairs you want to delete:
List<Object[]> params = new ArrayList<>(); // you can make this any instance of Collection you want
for (Car car : cars) {
    params.add(new Object[] { car.getMake(), car.getModel() });
    // this is just to provide an example of what I mean; obviously this will probably be different in your app
}
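For completeness, here is a rough sketch of how the update call itself might look with the SQL above and the params list just built (the template field is assumed from the question's DAO; this is an illustrative sketch, not the only way to build the parameter map):
// Sketch: bind the collection of (make, model) pairs to the :ids named parameter.
// Spring expands each Object[] entry into its own parenthesized group of placeholders.
Map<String, Object> paramMap = Collections.singletonMap("ids", params);
template.update(SQL, paramMap);
Note that multi-column IN lists like (make, model) IN (...) are supported by PostgreSQL and MySQL, but not by every database.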
I have the following Criteria API code, which returns a List<Tuple>.
I would like to convert this to List<myClass>
How can I do this?
CriteriaQuery<Tuple> cq = cb.createTupleQuery();
Root<ProductCatalogue> pc = cq.from(ProductCatalogue.class);
Root<ProductList> al = cq.from(ProductList.class);
.......
.......
.......
Predicate[] predicates = new Predicate[predicateList.size()];
predicateList.toArray(predicates);
cq.where(predicates);
TypedQuery<Tuple> typedQuery = getEntityManager().createQuery(cq);
List<Tuple> tuple = typedQuery.getResultList();
Ideally I would like to do:
List<myClass> list = tuple;
However, the above results in an incompatible types error, and I do not know how to cast this.
If you insist on using a tuple query you'll have to convert the instances manually.
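For example, a manual conversion over the tuple list could look roughly like this (a sketch only: the element indexes and types depend on what you actually select, and the myClass constructor shown here is an assumption):
// Sketch: map each Tuple to a myClass instance by position (adjust indexes/types to your select)
List<myClass> converted = new ArrayList<>();
for (Tuple t : tuple) {
    converted.add(new myClass(t.get(0, Long.class), t.get(1, String.class)));
}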
If myClass is an entity you should use CriteriaQuery<myClass> as perissf suggested; otherwise you may use a "constructor expression" to create instances directly from a select, for example:
select new com...myClass(c.id, c.price) FROM ProductList c WHERE c.....;
See this answer for an example to create such a query using the Criteria API.
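For reference, a rough Criteria API equivalent of that constructor expression might look like this, assuming myClass has a matching (id, price) constructor and that those property names exist on ProductList (both are assumptions, not taken from the question):
// Sketch: constructor expression via CriteriaBuilder.construct, under the assumptions above
CriteriaBuilder cb = getEntityManager().getCriteriaBuilder();
CriteriaQuery<myClass> cq = cb.createQuery(myClass.class);
Root<ProductList> c = cq.from(ProductList.class);
cq.select(cb.construct(myClass.class, c.get("id"), c.get("price")));
// add your where() predicates here as before
List<myClass> result = getEntityManager().createQuery(cq).getResultList();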
I'm trying to convert an entire column of values into an ArrayList using ORMLite on Android. Is this possible with the direct API?
Using raw results I get close, but not quite:
GenericRawResults<String[]> rawResults =
getHelper().getMyProcessDao().queryRaw(
queryBuild.selectColumns("nid").prepareStatementString());
List<String[]> result = rawResults.getResults();
Hrm. I'm not sure this is what you want. However, one way to accomplish what you ask for specifically is by using a RawRowMapper, which can be passed to ORMLite's DAO method dao.queryRaw(String, RawRowMapper, String...).
Something like the following should work:
RawRowMapper<Integer> mapper = new RawRowMapper<Integer>() {
public Integer mapRow(String[] columnNames, String[] resultColumns) {
// maybe you should verify that there _is_ only 1 column here
// maybe you should handle the possibility of a bad number and throw
return Integer.parseInt(resultColumns[0]);
}
};
GenericRawResults<Integer> rawResults =
getHelper().getMyProcessDao().queryRaw(
queryBuild.selectColumns("nid").prepareStatementString(), mapper);
List<Integer> list = rawResults.getResults();
I have written
List<int> Uids = new List<int>();
Uids = (from returnResultSet in ds.ToList()
from portfolioReturn in returnResultSet.Portfolios
from baseRecord in portfolioReturn.ChildData
select new int
{
id = baseRecord.Id
}).ToList<int>();
Getting error: 'int' does not contain a definition for 'id'
What is the problem that I created?
Try this:
List<int> Uids = (from returnResultSet in ds.ToList()
from portfolioReturn in returnResultSet.Portfolios
from baseRecord in portfolioReturn.ChildData
select baseRecord.Id).ToList<int>();
Since you want to get a list of integers you can simply project the Id property from your query and then use the ToList extension method to buffer them into a List<T>. As a side note, are you certain that a List<T> is the right type to use here? You are forgoing the benefit of deferred execution and will not be able to stream these ids if you buffer them into a List<T>.
Your code is trying to instantiate ints, setting an id property that doesn't exist. I think the following is what you need.
Uids = (from returnResultSet in ds.ToList()
from portfolioReturn in returnResultSet.Portfolios
from baseRecord in portfolioReturn.ChildData
select baseRecord.Id).ToList<int>();
The problem is with the select new int {} part.
Try simply doing:
List<int> Uids = new List<int>();
Uids = (from returnResultSet in ds.ToList()
        from portfolioReturn in returnResultSet.Portfolios
        from baseRecord in portfolioReturn.ChildData
        select baseRecord.Id).ToList<int>();
The select new {} syntax is for defining anonymous types (whose use is limited to the same method scope).
I have a very simple mapping function called "BuildEntity" that does the usual boring "left/right" coding required to dump my reader data into my domain object (shown below). My question is this: if I don't bring back every column in this mapping as-is, I get a System.IndexOutOfRangeException, and I wanted to know whether ADO.NET has anything to correct this so I don't need to bring back every column with each call into SQL.
What I'm really looking for is something like "IsValidColumn" so I can keep this one mapping function throughout my DataAccess class, with all the left/right mappings defined, and have it work even when a sproc doesn't return every column listed.
Using reader As SqlDataReader = cmd.ExecuteReader()
Dim product As Product
While reader.Read()
product = New Product()
product.ID = Convert.ToInt32(reader("ProductID"))
product.SupplierID = Convert.ToInt32(reader("SupplierID"))
product.CategoryID = Convert.ToInt32(reader("CategoryID"))
product.ProductName = Convert.ToString(reader("ProductName"))
product.QuantityPerUnit = Convert.ToString(reader("QuantityPerUnit"))
product.UnitPrice = Convert.ToDouble(reader("UnitPrice"))
product.UnitsInStock = Convert.ToInt32(reader("UnitsInStock"))
product.UnitsOnOrder = Convert.ToInt32(reader("UnitsOnOrder"))
product.ReorderLevel = Convert.ToInt32(reader("ReorderLevel"))
productList.Add(product)
End While
Also check out this extension method I wrote for use on data commands:
public static void Fill<T>(this IDbCommand cmd,
IList<T> list, Func<IDataReader, T> rowConverter)
{
using (var rdr = cmd.ExecuteReader())
{
while (rdr.Read())
{
list.Add(rowConverter(rdr));
}
}
}
You can use it like this:
cmd.Fill(products, r => r.GetProduct());
Where "products" is the IList<Product> you want to populate, and "GetProduct" contains the logic to create a Product instance from a data reader. It won't help with this specific problem of not having all the fields present, but if you're doing a lot of old-fashioned ADO.NET like this it can be quite handy.
Although connection.GetSchema("Tables") does return metadata about the tables in your database, it won't return everything in your sproc if you define any custom columns.
For example, if you throw in some random ad-hoc column like SELECT ProductName, 'Testing' AS ProductTestName FROM dbo.Products, you won't see 'ProductTestName' as a column because it's not in the schema of the Products table. To solve this, and ask for every column available in the returned data, use the GetSchemaTable() method on the SqlDataReader object.
If I add this to the existing code sample you listed in your original question, you will notice that just after the reader is declared I add a DataTable to capture the metadata from the reader itself. Next I loop through this metadata and add each column name to another table that I use in the left/right code to check whether each column exists.
Updated Source Code
Using reader As SqlDataReader = cmd.ExecuteReader()
Dim table As DataTable = reader.GetSchemaTable()
Dim colNames As New DataTable()
For Each row As DataRow In table.Rows
colNames.Columns.Add(row.ItemArray(0).ToString())
Next
Dim product As Product
While reader.Read()
product = New Product()
If Not colNames.Columns("ProductID") Is Nothing Then
product.ID = Convert.ToInt32(reader("ProductID"))
End If
product.SupplierID = Convert.ToInt32(reader("SupplierID"))
product.CategoryID = Convert.ToInt32(reader("CategoryID"))
product.ProductName = Convert.ToString(reader("ProductName"))
product.QuantityPerUnit = Convert.ToString(reader("QuantityPerUnit"))
product.UnitPrice = Convert.ToDouble(reader("UnitPrice"))
product.UnitsInStock = Convert.ToInt32(reader("UnitsInStock"))
product.UnitsOnOrder = Convert.ToInt32(reader("UnitsOnOrder"))
product.ReorderLevel = Convert.ToInt32(reader("ReorderLevel"))
productList.Add(product)
End While
This is a hack, to be honest, as you should return every column to hydrate your object correctly. But I thought I'd include this reader method as it will actually grab all the columns, even if they are not defined in your table schema.
This approach to mapping your relational data into your domain model might cause some issues when you get into a lazy loading scenario.
Why not just have each sproc return the complete column set, using null, -1, or other acceptable values where you don't have the data? That avoids having to catch IndexOutOfRangeException or rewrite everything in LINQ to SQL.
Use the GetSchemaTable() method to retrieve the metadata of the DataReader. The DataTable that is returned can be used to check if a specific column is present or not.
Why don't you use LINQ to SQL? Everything you need is done automatically. For the sake of being general, you can use any other ORM tool for .NET.
If you don't want to use an ORM you can also use reflection for things like this (though in this case because ProductID is not named the same on both sides, you couldn't do it in the simplistic fashion demonstrated here):
List Provider in C#
I would call reader.GetOrdinal for each field name before starting the while loop. Unfortunately GetOrdinal throws an IndexOutOfRangeException if the field doesn't exist, so it won't be very performant.
You could probably store the results in a Dictionary<string, int> and use its ContainsKey method to determine if the field was supplied.
I ended up writing my own, but this mapper is pretty good (and simple): https://code.google.com/p/dapper-dot-net/