I have an OData controller GET method like below:
public class ProductController : ApiController
{
[MyEnableQuery(PageSize = 48, AllowedQueryOptions = AllowedQueryOptions.OrderBy | AllowedQueryOptions.Top | AllowedQueryOptions.Skip | AllowedQueryOptions.InlineCount | AllowedQueryOptions.Filter, AllowedFunctions = AllowedFunctions.SubstringOf | AllowedFunctions.ToLower)]
public IQueryable<tbDefine_Products> GetProducts([FromODataUri] int CategoryID)
{
ProductHandler _handler = new ProductHandler();
IQueryable<tbDefine_Products> _list =_handler.GetProductActiveList(CategoryID);
return _list;
}
}
Now I want to modify my query result before sending it to the client... I want to do something like _list.ToList() and then iterate through the result list:
List<tbDefine_Products> _list2 = _list.ToList<tbDefine_Products>();
for (int i = 0; i < _list2.Count; i++)
{
/* some code here to modify the result */
}
I have read a little about the ActionFilterAttribute, ActionFilterAttribute.OnActionExecuted, and HttpActionExecutedContext classes, but I don't know how to implement my idea.
It seems that you already have your own implementation of the EnableQuery attribute (MyEnableQuery), so you should override the method:
public virtual IQueryable ApplyQuery(IQueryable queryable, ODataQueryOptions queryOptions)
Get the query result first and then modify it before returning:
var result = base.ApplyQuery(queryable, queryOptions);
// filter the result.
return result;
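Putting that together, here is a minimal sketch of what the override could look like inside your existing attribute, assuming MyEnableQuery derives from EnableQueryAttribute and that the query elements are tbDefine_Products (adapt the cast to your actual element type):
public class MyEnableQueryAttribute : EnableQueryAttribute
{
    public override IQueryable ApplyQuery(IQueryable queryable, ODataQueryOptions queryOptions)
    {
        // Let OData apply $filter, $orderby, $top, $skip and the page size first.
        var result = base.ApplyQuery(queryable, queryOptions);

        // Materialize the already-paged result and modify each item before it is serialized.
        var products = result.Cast<tbDefine_Products>().ToList();
        foreach (var product in products)
        {
            // modify the product here
        }
        return products.AsQueryable();
    }
}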
I'm trying to get just the IDs of the dependents whenever a principal is queried.
My initial thought was to add it somehow in the OnModelCreating definitions; however, that appears to be limited to filtering down larger sets of data, unless I'm missing something.
Something like this:
builder.Entity<ListingModel>()
.AlsoDoThis(
x => x.MenuIds.AddRange(
Menus.Where(y => y.ListingId == x.Id).Select(y => y.Id).ToList()
)
);
I need to avoid doing this in code at each individual place where I have a Select, since that functionality is normalized in some base classes. The base classes have a <TModel> passed in and don't inherently know which properties need to be handled this way.
I do have a workaround where I grab everything with an AutoInclude(), then filter it out in the model definition with a custom getter/setter that returns a list of IDs. But rather than being more performant (grabbing related FK IDs at the DB level), it transfers all of that data to the server and then programmatically selects a list of IDs, as far as I understand it.
private List<int> _topicsIds = new();
[NotMapped]
public List<int> TopicsIds
{
get { return Topics.Count > 0 ? Topics.Select(x => x.Id).ToList() : _topicsIds; }
set { _topicsIds = value; }
}
public List<TopicModel> Topics { get; set; } = new();
"Extra SQL that gets called with every select in a context" is (to my limited knowledge) almost what HasQueryFilter does, with a just slightly broader operation. I think this is the approach I'm looking for, just selecting more stuff instead of filtering stuff out.
You can project everything via Select
var result = ctx.ListingModels
.Select(lm => new // or to DTO
{
Id = lm.Id,
OtherProperty = lm.OtherProperty,
Ids = lm.Menus.Select(m => m.Id).ToList()
})
.ToList();
To make a more general solution, we can use annotations to define how such entities should be projected.
During model definition:
builder.Entity<TopicModel>()
.WithProjection(
x => x.MenuIds,
x => x.Menus.Where(y => y.ListingId == x.Id).Select(y => y.Id).ToList()
);
Then usage in common code:
public virtual List<TModel> GetList(List<int> ids)
{
var list = _context.Set<TModel>().Where(x => ids.Any(id => id == x.Id))
.ApplyCustomProjection(_context)
.ToList();
return list;
}
ApplyCustomProjection(_context) will find the previously defined annotation and apply the custom projection.
And the extension implementation:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
using Microsoft.EntityFrameworkCore.Query;

public static class ProjectionExtensions
{
public const string CustomProjectionAnnotation = "custom:member_projection";
public class ProjectionInfo
{
public ProjectionInfo(MemberInfo member, LambdaExpression expression)
{
Member = member;
Expression = expression;
}
public MemberInfo Member { get; }
public LambdaExpression Expression { get; }
}
public static bool IsUnderDotnetTool { get; }
= Process.GetCurrentProcess().ProcessName == "dotnet";
public static EntityTypeBuilder<TEntity> WithProjection<TEntity, TValue>(
this EntityTypeBuilder<TEntity> entity,
Expression<Func<TEntity, TValue>> propExpression,
Expression<Func<TEntity, TValue>> assignmentExpression)
where TEntity : class
{
// avoid registering non serializable annotations during migrations update
if (IsUnderDotnetTool)
return entity;
var annotation = entity.Metadata.FindAnnotation(CustomProjectionAnnotation);
var projections = annotation?.Value as List<ProjectionInfo> ?? new List<ProjectionInfo>();
if (propExpression.Body is not MemberExpression memberExpression)
throw new InvalidOperationException($"'{propExpression.Body}' is not member expression");
if (memberExpression.Expression is not ParameterExpression)
throw new InvalidOperationException($"'{memberExpression.Expression}' is not parameter expression. Only single nesting is allowed");
// removing duplicate
projections.RemoveAll(p => p.Member == memberExpression.Member);
projections.Add(new ProjectionInfo(memberExpression.Member, assignmentExpression));
return entity.HasAnnotation(CustomProjectionAnnotation, projections);
}
public static IQueryable<TEntity> ApplyCustomProjection<TEntity>(this IQueryable<TEntity> query, DbContext context)
where TEntity : class
{
var et = context.Model.FindEntityType(typeof(TEntity));
var projections = et?.FindAnnotation(CustomProjectionAnnotation)?.Value as List<ProjectionInfo>;
// nothing to do
if (projections == null || et == null)
return query;
var propertiesForProjection = et.GetProperties().Where(p =>
p.PropertyInfo != null && projections.All(pr => pr.Member != p.PropertyInfo))
.ToList();
var entityParam = Expression.Parameter(typeof(TEntity), "e");
var memberBinding = new MemberBinding[propertiesForProjection.Count + projections.Count];
for (int i = 0; i < propertiesForProjection.Count; i++)
{
var propertyInfo = propertiesForProjection[i].PropertyInfo!;
memberBinding[i] = Expression.Bind(propertyInfo, Expression.MakeMemberAccess(entityParam, propertyInfo));
}
for (int i = 0; i < projections.Count; i++)
{
var projection = projections[i];
var expression = projection.Expression.Body;
var assignExpression = ReplacingExpressionVisitor.Replace(projection.Expression.Parameters[0], entityParam, expression);
memberBinding[propertiesForProjection.Count + i] = Expression.Bind(projection.Member, assignExpression);
}
var memberInit = Expression.MemberInit(Expression.New(typeof(TEntity)), memberBinding);
var selectLambda = Expression.Lambda<Func<TEntity, TEntity>>(memberInit, entityParam);
var newQuery = query.Select(selectLambda);
return newQuery;
}
}
I am one of the many struggling to "upgrade" from ASP.NET to ASP.NET Core.
In the ASP.NET project, I made database calls from my DAL like so:
var result = context.Database.SqlQuery<Object_VM>("EXEC [sp_Object_GetByKey] @Key",
    new SqlParameter("@Key", Key))
.FirstOrDefault();
return result;
My viewmodel has additional fields that my object does not, such as aggregates of related tables. It seems unnecessary and counterintuitive to include such fields in the database / table structure. My stored procedure calculates all those things and returns the fields as they should be displayed, but not stored.
I see that Entity Framework Core has removed this functionality. I am trying to continue to use stored procedures and load view models (and thus not have the entity in the database). I see options like the following, but as a result I get "2", the number of rows being returned (or another mysterious result?):
using(context)
{
string cmd = "EXEC [sp_Object_getAll]";
var result = context.Database.ExecuteSqlCommand(cmd);
}
But that won't work, because context.Database.ExecuteSqlCommand only returns the number of rows affected; it is meant for altering the database, not for "selecting".
I've also seen the following as a solution, but the code will not compile for me, as "Set" is really Set<TEntity>(), and there isn't a database entity for this viewmodel.
var result = context.Set().FromSql("EXEC [sp_Object_getAll]");
Any assistance much appreciated.
Solution:
(per Tseng's advice)
On the GitHub Entity Framework issues page, there is a discussion about this problem. One user recommends creating your own class to handle this sort of request, and another adds an additional method that makes it run more smoothly. I changed the methods slightly to accept slightly different parameters.
Here is my adaptation (very little difference), for others that are also looking for a solution:
Method in DAL
public JsonResult GetObjectByID(int ID)
{
SqlParameter[] parms = new SqlParameter[] { new SqlParameter("@ID", ID) };
var result = RDFacadeExtensions.GetModelFromQuery<Object_List_VM>(context, "EXEC [sp_Object_GetList] @ID", parms);
return new JsonResult(result.ToList(), setting);
}
Additional Class
public static class RDFacadeExtensions
{
public static RelationalDataReader ExecuteSqlQuery(
this DatabaseFacade databaseFacade,
string sql,
SqlParameter[] parameters)
{
var concurrencyDetector = databaseFacade.GetService<IConcurrencyDetector>();
using (concurrencyDetector.EnterCriticalSection())
{
var rawSqlCommand = databaseFacade
.GetService<IRawSqlCommandBuilder>()
.Build(sql, parameters);
return rawSqlCommand
.RelationalCommand
.ExecuteReader(
databaseFacade.GetService<IRelationalConnection>(),
parameterValues: rawSqlCommand.ParameterValues);
}
}
public static IEnumerable<T> GetModelFromQuery<T>(
DbContext context,
string sql,
SqlParameter[] parameters)
where T : new()
{
DatabaseFacade databaseFacade = new DatabaseFacade(context);
using (DbDataReader dr = databaseFacade.ExecuteSqlQuery(sql, parameters).DbDataReader)
{
List<T> lst = new List<T>();
PropertyInfo[] props = typeof(T).GetProperties();
while (dr.Read())
{
T t = new T();
IEnumerable<string> actualNames = dr.GetColumnSchema().Select(o => o.ColumnName);
for (int i = 0; i < props.Length; ++i)
{
PropertyInfo pi = props[i];
if (!pi.CanWrite) continue;
// Respect [Column("...")] mappings when resolving the column name.
var ca = pi.GetCustomAttribute<System.ComponentModel.DataAnnotations.Schema.ColumnAttribute>();
string name = ca?.Name ?? pi.Name;
if (!actualNames.Contains(name)) { continue; }
object value = dr[name];
// Use the property's type (not its declaring type) for the value conversion checks.
Type pt = pi.PropertyType;
bool nullable = pt.GetTypeInfo().IsGenericType && pt.GetGenericTypeDefinition() == typeof(Nullable<>);
if (value == DBNull.Value) { value = null; }
if (value == null && pt.GetTypeInfo().IsValueType && !nullable)
{ value = Activator.CreateInstance(pt); }
pi.SetValue(t, value);
}//for i
lst.Add(t);
}//while
return lst;
}//using dr
    }
}
I've created (instantiated) a hierarchical class structure with Autofac:
order
|
|--------> customerPersonDatails
|              |
|              |-----------------> name
|              |
|              |-----------------> surname
|
|--------> customerBillingDetail
|              |
|              |-----------------> currency
|              |
|              |-----------------> bank
|
|--------> ...
What I want to do is "recursively" create the order object and populate its properties:
var builder = new ContainerBuilder();
//register components
builder.RegisterType<order>().PropertiesAutowired().OnActivated(order_Init); //<-- onactivated method will be used to populate properties
builder.RegisterType<customerPersonDatails>().PropertiesAutowired();
builder.RegisterType<customerBillingDetail>().PropertiesAutowired();
public static Action<IActivatedEventArgs<order>> order_Init = (c) =>
{
c.Instance.customerPersonDatails.name = //<-- how to pass the current value provided from the foreach
c.Instance.customerPersonDatails.surname =
c.Instance.customerBillingDetail.currency =
c.Instance.customerBillingDetail.bank =
};
// iterate through my orders and recursively create the objects
foreach(string currentOrder in Orders)
{
using (var scope = Container.BeginLifetimeScope())
{
//each time "resolve" is called i get a new istance of the order object with all its properties instatiated and the OnActivated method is correctly fired
//how can i pass into that method the currentOrder values in order to complete/populate the order structure with my values (currentOrder.name, currentOrder.surname, ... )
var ord = scope.Resolve<order>();
// here I have to pass currentOrder's values in some way into "order_Init" (how to do it?)
// do other stuff
ord.serialize();
}
}
The question is: how do I pass the current values (currentOrder.name, etc.) into the order_Init function?
I've noticed that the "c" parameter of order_Init has some properties like Parameters/Context/Component... can I use one of them? How?
Here is a full working example, which should help you modify your solution:
public void Test()
{
var builder = new ContainerBuilder();
builder.RegisterType<order>().PropertiesAutowired();
//builder.RegisterType<customerPersonDatails>().PropertiesAutowired();
//builder.RegisterType<customerBillingDetail>().PropertiesAutowired();
var container = builder.Build();
var Orders = new[] { "test" };
foreach (string currentOrder in Orders)
{
using (var scope = container.BeginLifetimeScope())
{
var ordFactory = scope.Resolve<order.Factory>(); //<------changed from "Resolve<order>" to "Resolve<order.Factory>"
var ord = ordFactory.Invoke(currentOrder); //<------ added in order to pass the data
}
}
}
public class order
{
//added delegate
public delegate order Factory(string currentOrder);
//added constructor
public order(string currentOrder)
{
//use the constructor parameter to populate the class property, is it correct?
this.orderCode = currentOrder;
Debug.WriteLine("I am in order constructor with currentOrder = " + currentOrder);
}
public string orderCode { get; set; }
}
And the debug output is as expected:
I am in order constructor with currentOrder = test
To achieve this, you should redesign your solution and introduce factory methods or factory delegates, which are supported by Autofac's Delegate Factories.
I'm using an existing database from our ERP.
In all my database tables, there is a float field called "r_e_c_n_o_", but this field is not auto-incremented by the database and I can't change it.
For all added entities I would like to increment this field "r_e_c_n_o_"; how could I accomplish that in DbContext's SaveChanges() method?
Using ADO.NET I'd do something like this:
public static int GetNext(string tableName, string fieldName)
{
var cmd = _conn.CreateCommand(string.Format("SELECT MAX({0}) + 1 FROM {1}", fieldName, tableName));
var result = (int)cmd.ExecuteScalar();
return result;
}
UPDATE:
Please take a look at the comment below; it's just what I need to solve my problem:
public override int SaveChanges()
{
var entries = this.ChangeTracker.Entries();
Dictionary<string, int> lastRecnos = new Dictionary<string, int>();
foreach (var entry in entries)
{
var typeName = entry.Entity.GetType().Name;
if (lastRecnos.ContainsKey(typeName))
lastRecnos[typeName]++;
else
lastRecnos[typeName] = 0;//How can i get the max here?
int nextRecnoForThisEntity = lastRecnos[typeName];
var entity = entry.Entity as EntityBase;
entity.Recno = nextRecnoForThisEntity;
}
return base.SaveChanges();
}
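A sketch of one way to fill in that gap, assuming EF6-style Database.SqlQuery, that every tracked entity derives from EntityBase, and that the table name matches the entity type name (adjust those assumptions to your mapping):
public override int SaveChanges()
{
    // Only newly added entities need a new r_e_c_n_o_ value.
    var added = ChangeTracker.Entries<EntityBase>()
                             .Where(e => e.State == EntityState.Added)
                             .ToList();

    var lastRecnos = new Dictionary<string, int>();
    foreach (var entry in added)
    {
        var typeName = entry.Entity.GetType().Name;
        if (!lastRecnos.ContainsKey(typeName))
        {
            // Seed the counter with the current maximum from the database.
            // The table name is assumed to equal the entity type name here.
            var sql = string.Format("SELECT ISNULL(MAX(r_e_c_n_o_), 0) FROM [{0}]", typeName);
            lastRecnos[typeName] = (int)Database.SqlQuery<double>(sql).Single();
        }

        lastRecnos[typeName]++;
        entry.Entity.Recno = lastRecnos[typeName];
    }

    return base.SaveChanges();
}
Note that this MAX-based approach is not safe under concurrent inserts; wrapping the whole operation in a transaction (or using a database sequence) would be more robust.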
Thanks,
William
I'm trying to implement a caching scheme for my EF Repository similar to the one blogged here. As the author and commenters have reported the limitation is that the key generation method cannot produce cache keys that vary with a given query's parameters. Here is the cache key generation method:
private static string GetKey<T>(IQueryable<T> query)
{
string key = string.Concat(query.ToString(), "\n\r",
typeof(T).AssemblyQualifiedName);
return key;
}
So the following queries will yield the same cache key:
var isActive = true;
var query = context.Products
.OrderBy(one => one.ProductNumber)
.Where(one => one.IsActive == isActive).AsCacheable();
and
var isActive = false;
var query = context.Products
.OrderBy(one => one.ProductNumber)
.Where(one => one.IsActive == isActive).AsCacheable();
Notice that the only difference is that isActive = true in the first query and isActive = false in the second. Both yield the same key because query.ToString() renders the parameterized SQL text, not the values of captured closure variables such as isActive.
Any suggestions/insight to efficiently generating cache keys which vary by IQueryable parameters would be truly appreciated.
Kudos to Sergey Barskiy for sharing the EF CodeFirst caching scheme.
Update
I took the approach of traversing the IQueryable's expression tree myself with the goal of resolving the values of the parameters used in the query. With maxlego's suggestion, I extended the System.Linq.Expressions.ExpressionVisitor class to visit the expression nodes that we're interested in - in this case, the MemberExpression. The updated GetKey method looks something like this:
public static string GetKey<T>(IQueryable<T> query)
{
var keyBuilder = new StringBuilder(query.ToString());
var queryParamVisitor = new QueryParameterVisitor(keyBuilder);
queryParamVisitor.GetQueryParameters(query.Expression);
keyBuilder.Append("\n\r");
keyBuilder.Append(typeof (T).AssemblyQualifiedName);
return keyBuilder.ToString();
}
And the QueryParameterVisitor class, which was inspired by the answers of Bryan Watts and Marc Gravell to this question, looks like this:
/// <summary>
/// <see cref="ExpressionVisitor"/> subclass which encapsulates logic to
/// traverse an expression tree and resolve all the query parameter values
/// </summary>
internal class QueryParameterVisitor : ExpressionVisitor
{
public QueryParameterVisitor(StringBuilder sb)
{
QueryParamBuilder = sb;
Visited = new Dictionary<int, bool>();
}
protected StringBuilder QueryParamBuilder { get; set; }
protected Dictionary<int, bool> Visited { get; set; }
public StringBuilder GetQueryParameters(Expression expression)
{
Visit(expression);
return QueryParamBuilder;
}
private static object GetMemberValue(MemberExpression memberExpression, Dictionary<int, bool> visited)
{
object value;
if (!TryGetMemberValue(memberExpression, out value, visited))
{
UnaryExpression objectMember = Expression.Convert(memberExpression, typeof (object));
Expression<Func<object>> getterLambda = Expression.Lambda<Func<object>>(objectMember);
Func<object> getter = null;
try
{
getter = getterLambda.Compile();
}
catch (InvalidOperationException)
{
}
if (getter != null) value = getter();
}
return value;
}
private static bool TryGetMemberValue(Expression expression, out object value, Dictionary<int, bool> visited)
{
if (expression == null)
{
// used for static fields, etc
value = null;
return true;
}
// Mark this node as visited (processed)
int expressionHash = expression.GetHashCode();
if (!visited.ContainsKey(expressionHash))
{
visited.Add(expressionHash, true);
}
// Get Member Value, recurse if necessary
switch (expression.NodeType)
{
case ExpressionType.Constant:
value = ((ConstantExpression) expression).Value;
return true;
case ExpressionType.MemberAccess:
var me = (MemberExpression) expression;
object target;
if (TryGetMemberValue(me.Expression, out target, visited))
{
// instance target
switch (me.Member.MemberType)
{
case MemberTypes.Field:
value = ((FieldInfo) me.Member).GetValue(target);
return true;
case MemberTypes.Property:
value = ((PropertyInfo) me.Member).GetValue(target, null);
return true;
}
}
break;
}
// Could not retrieve value
value = null;
return false;
}
protected override Expression VisitMember(MemberExpression node)
{
// Only process nodes that haven't been processed before, this could happen because our traversal
// is depth-first and will "visit" the nodes in the subtree before this method (VisitMember) does
if (!Visited.ContainsKey(node.GetHashCode()))
{
object value = GetMemberValue(node, Visited);
if (value != null)
{
QueryParamBuilder.Append("\n\r");
QueryParamBuilder.Append(value.ToString());
}
}
return base.VisitMember(node);
}
}
I'm still doing some performance profiling on the cache key generation and hoping that it isn't too expensive (I'll update the question with the results once I have them). I'll leave the question open in case anyone has suggestions on how to optimize this process or a recommendation for a more efficient method of generating cache keys that vary with the query parameters. Although this method produces the desired output, it is by no means optimal.
I suggest using ExpressionVisitor:
http://msdn.microsoft.com/en-us/library/bb882521(v=vs.90).aspx
Just for the record, "Caching the results of LINQ queries" works well with EF and handles parameters correctly, so it can be considered a good second-level cache implementation for EF.
While the OP's solution works quite well, I found its performance to be somewhat poor: the key generation took between 300 ms and 1200 ms for my queries.
However, I've found another solution with much better performance (< 10 ms):
public static string ToTraceString<T>(DbQuery<T> query)
{
var internalQueryField = query.GetType().GetFields(BindingFlags.NonPublic | BindingFlags.Instance).Where(f => f.Name.Equals("_internalQuery")).FirstOrDefault();
var internalQuery = internalQueryField.GetValue(query);
var objectQueryField = internalQuery.GetType().GetFields(BindingFlags.NonPublic | BindingFlags.Instance).Where(f => f.Name.Equals("_objectQuery")).FirstOrDefault();
var objectQuery = objectQueryField.GetValue(internalQuery) as ObjectQuery<T>;
return ToTraceStringWithParameters(objectQuery);
}
private static string ToTraceStringWithParameters<T>(ObjectQuery<T> query)
{
string traceString = query.ToTraceString() + "\n";
foreach (var parameter in query.Parameters)
{
traceString += parameter.Name + " [" + parameter.ParameterType.FullName + "] = " + parameter.Value + "\n";
}
return traceString;
}
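A hedged sketch of how this could plug into the original GetKey method, assuming the cached query is an EF6 DbQuery<T> (the fallback to query.ToString() is just illustrative):
public static string GetKey<T>(IQueryable<T> query)
{
    // Prefer the trace string with parameter values when the query is a DbQuery<T>.
    var dbQuery = query as DbQuery<T>;
    string queryText = dbQuery != null ? ToTraceString(dbQuery) : query.ToString();

    return string.Concat(queryText, "\n\r", typeof(T).AssemblyQualifiedName);
}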