OData v4 post-processing of results (works with v3, not with v4) - entity-framework

I have a Web API server (with EF 6.x) and I need to do some post-processing of the result set from OData queries in the controller. On the client-side I use a DevEx grid and their ODataInstantFeedbackSource.
With no post-processing, everything works fine, e.g.:
http://somesite.us/odata/Items/$count
[EnableQuery]
public IHttpActionResult GetItems(ODataQueryOptions<Item> queryOptions)
{
    return Ok(Context.Items);
}
It does not work with post-processing (same simple $count query, but without EnableQuery since I am manually applying the query options):
GET http://somesite.us/odata/Items/$count
//[EnableQuery]
public IHttpActionResult GetItems(ODataQueryOptions<Item> queryOptions)
{
    queryOptions.Validate(_validationSettings);
    var query = queryOptions.ApplyTo(Context.Items, new ODataQuerySettings()) as IQueryable<Item>;

    var resultList = new List<Item>();
    foreach (var item in query)
    {
        item.OrdStat = "asf"; // Some post-processing
        resultList.Add(item);
    }
    return Ok(resultList.AsQueryable());
}
This throws an exception:
Microsoft.OData.ODataException
  HResult=0x80131509
  Message=The value of type 'System.Linq.EnumerableQuery`1[[SomeService.Model.Item, SomeService.Model, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]' could not be converted to a raw string.
  Source=Microsoft.OData.Core
  StackTrace:
   at Microsoft.OData.RawValueWriter.WriteRawValue(Object value)
Note: with OData v3, the above works just fine. It is only with v4 that I get an exception when not using [EnableQuery].
If I add back the [EnableQuery] attribute, this simple $count query works with OData v4, but with more complex queries the data returned to the client gets messed up (likely because $skip etc. are applied both by me and by the [EnableQuery] attribute).
For example, this query generated by the DevEx grid when you scroll down:
http://somesite.us/odata/Items?$orderby=ItemNo&$skip=300&$top=201
Results in (client-side): Unexpected number of returned keys: 0. Expected: 201
I assume that I need to remove the EnableQuery attribute since I am manually applying the query options, but why am I getting the "could not be converted to a raw string" exception when I do this?
How can I properly implement post-processing in this scenario?

I opened a support request with Microsoft on this, and they eventually determined that it is a bug in OData v4 and they created this bug report:
https://github.com/OData/WebApi/issues/1586
The workaround is to check whether the query is a count query and, if so, return Ok(query.Count()):
if (queryOptions.Context.Path?.Segments.LastOrDefault() is CountSegment)
    return Ok(query?.Count());
Here is a more complete sample snippet / POC which works fine with OData v4:
private static ODataValidationSettings _validationSettings = new ODataValidationSettings();

[ODataRoute("Customers")]
public IHttpActionResult Get(ODataQueryOptions<CustomerLookup> queryOptions)
{
    queryOptions.Validate(_validationSettings);
    var query = queryOptions.ApplyTo(Context.CustomerLookup) as IQueryable<CustomerLookup>;

    if (queryOptions.Context.Path?.Segments.LastOrDefault() is CountSegment)
        return Ok(query?.Count());

    var resultList = new List<CustomerLookup>();
    foreach (var customer in query)
    {
        customer.Address = "1234_" + customer.Address;
        resultList.Add(customer);
    }
    return Ok(resultList.AsQueryable());
}
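For completeness, here are the usings the sample relies on. This is an assumption on my part; the exact namespaces vary with the Web API OData / ODataLib package versions in use:

```csharp
using System.Linq;
using System.Web.Http;
using Microsoft.AspNet.OData;          // or System.Web.OData in older packages
using Microsoft.AspNet.OData.Query;    // ODataQueryOptions, ODataValidationSettings
using Microsoft.AspNet.OData.Routing;  // [ODataRoute]
using Microsoft.OData.UriParser;       // CountSegment (Microsoft.OData.Core.UriParser.Semantic in ODataLib 6.x)
```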

Related

How to control in-memory generated key values in Entity Framework Core?

Background
I am developing a Web application based on ASP.NET Core 2.1 and EntityFrameworkCore 2.1.1. Besides other types of tests, I am using in-memory Web API testing to get some kind of integration testing of the Web API.
Since the test data is rather convoluted, I have generated a JSON file based on a slice of a migrated database, and the tests import data from this JSON file when the in-memory tests kick in.
The issue
Read-only tests are working just fine. However, when I insert something for an entity that has an identity (ValueGenerated == ValueGenerated.OnAdd), it automatically gets a value of 1 (all PKs are ints). This makes sense, since whatever generator EF is using behind the scenes to generate those values was not instructed to start from a certain value.
However, this clearly does not work for inserts that generate an already existing key value.
Things I have tried
[current working solution] Shifting the key values - upon deserializing all objects, I perform an id shift operation for all involved entities (they get "large" values). This works properly, but it is error-prone (e.g. some entities have static ids, I have to ensure that all foreign keys / navigation properties are properly defined, and it is rather slow right now, since I rely on reflection to properly identify the keys / navigation properties that require shifting)
Configuring value generator to start from a large value:
TestStartUp.cs
services.AddSingleton<IValueGeneratorCache, ValueGeneratorCache>();
services.AddSingleton<ValueGeneratorCacheDependencies, ValueGeneratorCacheDependencies>();
Ids generation functionality
public const int StartValue = 100000000;

class ResettableValueGenerator : ValueGenerator<int>
{
    private int _current;

    public override bool GeneratesTemporaryValues => false;

    public override int Next(EntityEntry entry)
    {
        return Interlocked.Increment(ref _current);
    }

    // Seed the counter so newly generated ids start above the imported ones.
    public void Reset(int startValue) => _current = startValue;
}

public static void ResetValueGenerators(CustomApiContext context, IValueGeneratorCache cache, int startValue = StartValue)
{
    var allKeyProps = context.Model.GetEntityTypes()
        .Select(e => e.FindPrimaryKey().Properties[0])
        .Where(p => p.ClrType == typeof(int));
    var keyProps = allKeyProps.Where(p => p.ValueGenerated == ValueGenerated.OnAdd);

    foreach (var keyProperty in keyProps)
    {
        var generator = (ResettableValueGenerator)cache.GetOrAdd(
            keyProperty,
            keyProperty.DeclaringEntityType,
            (p, e) => new ResettableValueGenerator());
        generator.Reset(startValue);
    }
}
When debugging, I can see that my entities are being iterated, so the reset is applied.
Pushing the data into the in-memory database
private void InitializeSeeding(IServiceScope scope)
{
    using (scope)
    {
        var services = scope.ServiceProvider;
        try
        {
            var context = services.GetRequiredService<CustomApiContext>();
            // this pushes deserialized data + static data into the database
            InitDbContext(context);

            var valueGeneratorService = services.GetRequiredService<IValueGeneratorCache>();
            ResetValueGenerators(context, valueGeneratorService);
        }
        catch (Exception ex)
        {
            var logger = services.GetService<ILogger<Startup>>();
            logger.LogError(ex, "An error occurred while seeding the database.");
        }
    }
}
Actual insert
This is done using a generic repository, but boils down to this:
Context.Set<T>().Add(entity);
Context.SaveChanges();
Question: How to control in-memory generated key values in Entity Framework Core?

Spring Data JPA Specifications and SQL Injection

I am working on getting back up to speed on Spring, Spring Data and JPA. I was interested in implementing the Specifications piece to enable a more dynamic query.
Say I have the following methods:
protected String containsLowerCase(String searchField) {
    return WILDCARD + searchField.toLowerCase() + WILDCARD;
}

protected Specification<T> attributeContains(String attribute, String value) {
    return (root, query, cb) -> {
        return value == null ? null : cb.like(
            cb.lower(root.get(attribute)),
            containsLowerCase(value)
        );
    };
}

@Override
public Specification<Widget> getFilter(WidgetListRequest request) {
    return (root, query, cb) -> Specifications.where(
            Specifications.where(nameContains(request.getSearch()))
                .or(descriptionContains(request.getSearch()))
        )
        .and(testValBetween(request.getMinTestVal(), request.getMaxTestVal()))
        .toPredicate(root, query, cb);
}

private Specification<Widget> nameContains(String name) {
    return attributeContains("name", name);
}
And in my service I call it like:
WidgetListRequest request = new WidgetListRequest();
request.setSearch(search);
request.setMinTestVal(minTestVal.doubleValue());
request.setMaxTestVal(maxTestVal.doubleValue());
return widgetRepo.findAll(widgetSpec.getFilter(request));
If this were a straight repository call, I know it, like save(), is safe against SQL injection, and I know that JPQL is safe when you use parameters. But, as I have tested, the above is NOT safe from SQL injection, since it uses a concatenated string for the wildcards. I've seen some examples using CriteriaBuilder and CriteriaQuery to add parameters (ex: this), but I am not sure how that would work when you are using the Specifications on the service side.
What I would need to do, I think, is somehow return the Specification with ParameterExpressions, but I don't know how to set them once the Specification is returned to the service class, nor how to add the wildcards such that they don't get escaped. How do I escape just the passed-in value of containsLowerCase() above?

EF Core 2.0: How to discover the exact object, in an object graph, causing an error in an insert operation?

I have a complex and big object graph that I want to insert in database by using a DbContext and SaveChanges method.
This object is a result of parsing a text file with 40k lines (around 3MB of data). Some collections inside this object have thousands of items.
I am able to parse the file correctly and add it to the context so that it can start tracking the object. But when I try to SaveChanges, it says:
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> System.Data.SqlClient.SqlException: String or binary data would be truncated.
I would like to know if there is a smart and efficient way of discovering which object is causing the issue. It seems that a varchar field is too small to store the data. But there are a lot of tables and fields to check manually.
I would like to get a more specific error somehow. I already configured an ILoggerProvider and added the EnableSensitiveDataLogging option in my dbContext to be able to see which sql queries are being generated. I even added MiniProfiler to be able to see the parameter values, because they are not present in the log generated by the dbContext.
Reading somewhere on the web, I found out that in EF6 there is some validation that happens before the SQL is passed to the database to be executed. But it seems that this is not available anymore in EF Core. So how can I solve this?
After some research, the only approach I've found to solve this, is implementing some validation by overriding dbContext's SaveChanges method. I've made a merge of these two approaches to build mine:
Implementing Missing Features in Entity Framework Core - Part 3
Validation in EF Core
The result is...
ApplicationDbContext.cs
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    ValidateEntities();
    return base.SaveChanges(acceptAllChangesOnSuccess);
}

public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = new CancellationToken())
{
    ValidateEntities();
    return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}

private void ValidateEntities()
{
    var serviceProvider = this.GetService<IServiceProvider>();
    var items = new Dictionary<object, object>();
    var entities = from entry in ChangeTracker.Entries()
                   where entry.State == EntityState.Added || entry.State == EntityState.Modified
                   select entry.Entity;

    foreach (var entity in entities)
    {
        var context = new ValidationContext(entity, serviceProvider, items);
        var results = new List<ValidationResult>();
        if (Validator.TryValidateObject(entity, context, results, true)) continue;

        foreach (var result in results)
        {
            if (result == ValidationResult.Success) continue;
            var errorMessage = $"{entity.GetType().Name}: {result.ErrorMessage}";
            throw new ValidationException(errorMessage);
        }
    }
}
Note that it's not necessary to override the other SaveChanges overloads, because they call these two.
The error tells you that you're writing more characters to a field than it can hold.
For example, this error would be thrown when you create a field as NVARCHAR(4) or CHAR(4) and write 'hello' to it.
So you could simply check the length of the values you read in to find the one causing your problem. There is at least one value which is too long for its field.
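That length check can be automated against the model metadata. A sketch, assuming EF Core's IProperty.GetMaxLength() reflects the configured column sizes (the helper name is mine, not from the original post):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class TruncationFinder
{
    // Scan tracked Added/Modified entities and report every string value
    // that exceeds the max length configured for its property in the model.
    public static IEnumerable<string> FindTruncationCandidates(DbContext context)
    {
        var entries = context.ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

        foreach (var entry in entries)
        {
            foreach (var property in entry.Metadata.GetProperties()
                         .Where(p => p.ClrType == typeof(string)))
            {
                var maxLength = property.GetMaxLength();
                var value = entry.Property(property.Name).CurrentValue as string;
                if (maxLength.HasValue && value != null && value.Length > maxLength.Value)
                    yield return $"{entry.Metadata.Name}.{property.Name}: length {value.Length} exceeds max {maxLength.Value}";
            }
        }
    }
}
```

Calling this just before SaveChanges (or from a catch block around it) narrows the "String or binary data would be truncated" error down to the exact entity and property, with no manual table-by-table inspection.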

.net WebApi IQueryable EF

I'm using .NET Web API with Entity Framework. It's really nice that you can just do
[EnableQuery]
public IQueryable<Dtos.MyDto> Get()
{
    return dbContext.MyEntity.Select(m => new MyDto
    {
        Name = m.Name
    });
}
And you get OData applied to the IQueryable; note it is also returning a projected DTO.
But that Select is an expression, and so it is being turned into SQL. In the above case that's fine. But what if I need to do some "complex" formatting on the returned DTO? It's going to start having issues, as SQL won't be able to do it.
Is it possible to create an IQueryable wrapper?
QWrapper<TEntity, TDto>(dbContext.MyEntity, Func<TEntity, TDto> dtoCreator)
It implements IQueryable, so we still return it, allowing Web API to apply any OData, but the Func gets called once EF completes, thus allowing 'any' .NET code to be called, as it is not converted to SQL.
I don't want to do dbContext.MyEntity.ToList().Select(...).ToQueryable() or whatever as that will always return the entire table from the db.
Thoughts?
Since your query already returns the data you expected, how about adding .Select(s => new MyEntity() { Name = s.Name }) to return them as the OData response? Like:
return dbContext.MyEntity.Select(m => new MyDto
{
    Name = m.Name
}).Select(s => new MyEntity() { Name = s.Name });
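Another option worth noting, mirroring the manual ApplyTo pattern from the OData question above (a sketch only; Format() is a hypothetical helper, not part of the original code): apply the OData query options to the entity IQueryable yourself, materialize, and run the projection in memory, where any .NET code is allowed:

```csharp
public IHttpActionResult Get(ODataQueryOptions<MyEntity> queryOptions)
{
    // $filter / $orderby / $skip / $top still execute in SQL against the entity set
    var query = queryOptions.ApplyTo(dbContext.MyEntity) as IQueryable<MyEntity>;

    // AsEnumerable() switches to LINQ to Objects, so the projection below is
    // plain .NET code and never needs to translate to SQL
    var dtos = query.AsEnumerable()
        .Select(m => new MyDto { Name = Format(m.Name) }) // Format() is illustrative
        .ToList();

    return Ok(dtos);
}
```

This keeps paging and filtering in the database while only the already-paged rows are shaped in memory, so it avoids pulling the entire table the way ToList().Select(...) on the raw set would.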

Mongo DB C# Driver - Getting Translated BSON from C# code

Say I have the following cursor set up using the C# Driver:
var cursor = _mongoClient.GetServer()
    .GetDatabase("test")
    .GetCollection<BsonDocument>("somecollection")
    .Find(Query.EQ("field", "value"))
    .SetFields(Fields.Exclude())
    .SetLimit(5)
    .SetSortOrder("field");

var results = cursor.ToList();
I want to see the translated BSON command that gets sent to the mongo server (i.e. "db.somecollection.find({...})").
Either way is acceptable:
1. Some sort of function that will print this as a string.
2. Some way to "sniff" the command that gets sent to the server. (The db profiling functionality in mongo.exe only shows the where clause and order by -- I want to see everything: limit, field projections, etc.)
Also, doing this with a MongoQueryable would be great as well.
Something like:
var queryable = (MongoQueryable<BsonDocument>)someCollection;
var debug = queryable.GetMongoQuery().ToJson();
So, it looks like the serialization of MongoCursor is encapsulated within classes internal to the MongoDB.Driver assembly. Therefore, serialized BSON messages that get sent to the server are not visible in client code, at least.
However, I can reasonably trust that the MongoCursor gets translated correctly at that lower level. (10gen is behind this project, after all.)
Of bigger concern is how LINQ expressions get translated. If I can verify that the LINQ IQueryables get translated to a MongoCursor with the correct state, I'm golden.
So, here is an extension method to pull the cursor out of the IQueryable:
public static class MongoExtensions
{
    public static MongoCursor GetCursor<T>(this IQueryable<T> source)
    {
        var queryProvider = source.Provider as MongoQueryProvider;
        if (queryProvider == null)
        {
            throw new NotSupportedException("GetCursor can only be called on a LINQ to Mongo queryable.");
        }

        var selectQuery = (SelectQuery)MongoQueryTranslator.Translate(queryProvider, source.Expression);
        if (selectQuery.Take.HasValue && selectQuery.Take.Value == 0)
        {
            throw new NotSupportedException("A query that has a .Take(0) expression will not be sent to the server, so there is no cursor to return.");
        }

        var projector = selectQuery.Execute();
        var cursorProp = projector.GetType().GetProperties().FirstOrDefault(p => p.Name == "Cursor");
        return cursorProp.GetValue(projector) as MongoCursor<T>;
    }
}
Then I can test the state of the MongoCursor, checking properties like "Query", "Skip", "Limit" and all the items in the "Options" collection.
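A usage sketch, assuming the legacy 1.x driver API used above (the Order type and its members are illustrative, not from the original post):

```csharp
using System;
using System.Linq;
using MongoDB.Bson;
using MongoDB.Driver.Linq;

// Illustrative document type; the LINQ provider translates property accesses.
public class Order
{
    public ObjectId Id { get; set; }
    public string OrderName { get; set; }
}

var queryable = _mongoClient.GetServer()
    .GetDatabase("test")
    .GetCollection<Order>("orders")
    .AsQueryable()
    .Where(o => o.OrderName == "value")
    .OrderBy(o => o.OrderName)
    .Take(5);

var cursor = queryable.GetCursor();

// Inspect the translated state instead of trusting the LINQ pipeline blindly.
Console.WriteLine(cursor.Query.ToJson()); // the translated query document
Console.WriteLine(cursor.Limit);          // the translated .Take() value
```

From here, asserting on Query, Skip, Limit and the Options collection in a unit test gives reasonable confidence that the LINQ expression was translated as intended.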