Get nextval from Postgres sequence in .net core - postgresql

I am trying to get the next value in a sequence but always get -1. Can anyone tell me what I am doing wrong here?
private long GetNextSequence(string name)
{
    long sequence = 0;
    using (var dbContext = new Models.MyDatabaseContext())
    {
        var result = dbContext.Database.ExecuteSqlRaw(string.Format("select nextval('{0}')", name));
    }
    return sequence;
}
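For what it's worth, ExecuteSqlRaw returns the number of rows affected by the command (-1 when none is reported), not the value selected, and the local sequence variable is never assigned. Below is a minimal sketch of one way to read the scalar instead, assuming the Npgsql provider under EF Core; the ::regclass cast is my own choice so the sequence name can be passed as a parameter rather than concatenated into the SQL:
private long GetNextSequence(string name)
{
    using (var dbContext = new Models.MyDatabaseContext())
    {
        var connection = dbContext.Database.GetDbConnection();
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            // nextval() returns a Postgres bigint, which maps to C# long.
            command.CommandText = "select nextval(@name::regclass)";
            var parameter = command.CreateParameter();
            parameter.ParameterName = "name";
            parameter.Value = name;
            command.Parameters.Add(parameter);
            return (long)command.ExecuteScalar();
        }
    }
}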

Related

processing data before presentation

I have a dataset (from a JSON source) with cumulative values. It looks like this:
Could I extract from this dataset the delta from the last hour or the last day (for example, a count from 0 since last midnight)?
What you are asking about falls squarely in the realm of process data as it usually comes from control systems, a.k.a. process control systems. There may be DCS (Distributed Control Systems) or SCADA systems out in the field that act as a focal point for receiving data. And there may be a process historian or time-series database for accessing that data, if not at an enterprise level then at least within the process controls network.
Much of the engineering associated with process data was established many, many decades ago. For my examples, I did not want to write too many custom classes, so I will use some everyday .NET objects. However, I am adhering to two well-regarded principles about process data:
All times will be in UTC. Usually one does not convert out of UTC until the very last moment, when displaying to a local user.
Process data acknowledges the Quality of a value. While there can be dozens of bad states associated with such Quality, I will use a simple binary approach: good or bad. Since I use double, a value is good as long as it is not double.NaN.
That said, I assume you have a class that looks similar to:
public class JsonDto
{
    public string Id { get; set; }
    public DateTime Time { get; set; }
    public double Value { get; set; }
}
Granted, your class name may be different, but the main thing is that this class holds an individual instance of process data. When you read a JSON file, it will produce a List<JsonDto> instance.
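For instance, reading the file with System.Text.Json might look like the following (an assumption on my part; any JSON library works, and the file name is illustrative). PropertyNameCaseInsensitive lets lower-case JSON keys bind to the C# properties:
using System.IO;
using System.Text.Json;

var json = File.ReadAllText("process-data.json");
var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
List<JsonDto> inputValues = JsonSerializer.Deserialize<List<JsonDto>>(json, options);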
You will need lots of methods to transform the data into something a wee bit more usable in order to get to where the rubber finally meets the road: producing hourly differences. But that first requires producing hourly values, because there is no guarantee that your recorded values occur exactly on each hour.
ProcessData Class - lots of methods
public static class ProcessData
{
    public enum CalculationTimeBasis { Auto = 0, EarliestTime, MostRecentTime, MidpointTime }

    public static Dictionary<string, SortedList<DateTime, double>> GetTagTimedValuesMap(IEnumerable<JsonDto> jsonDto)
    {
        var map = new Dictionary<string, SortedList<DateTime, double>>();
        var tagnames = jsonDto.Select(x => x.Id).Distinct().OrderBy(x => x);
        foreach (var tagname in tagnames)
        {
            map.Add(tagname, new SortedList<DateTime, double>());
        }
        var orderedValues = jsonDto.OrderBy(x => x.Id).ThenBy(x => x.Time.ToUtcTime());
        foreach (var item in orderedValues)
        {
            map[item.Id].Add(item.Time.ToUtcTime(), item.Value);
        }
        return map;
    }

    public static DateTimeKind UnspecifiedDefaultsTo { get; set; } = DateTimeKind.Utc;

    public static DateTime ToUtcTime(this DateTime value)
    {
        // Unlike ToUniversalTime(), this method assumes any Unspecified Kind may be Utc or Local.
        if (value.Kind == DateTimeKind.Unspecified)
        {
            if (UnspecifiedDefaultsTo == DateTimeKind.Utc)
            {
                value = DateTime.SpecifyKind(value, DateTimeKind.Utc);
            }
            else if (UnspecifiedDefaultsTo == DateTimeKind.Local)
            {
                value = DateTime.SpecifyKind(value, DateTimeKind.Local);
            }
        }
        return value.ToUniversalTime();
    }

    private static DateTime TruncateTime(this DateTime value, TimeSpan interval) =>
        new DateTime(TruncateTicks(value.Ticks, interval.Ticks)).ToUtcTime();

    private static long TruncateTicks(long ticks, long interval) =>
        (interval == 0) ? ticks : (ticks / interval) * interval;

    public static SortedList<DateTime, double> GetInterpolatedValues(SortedList<DateTime, double> recordedValues, TimeSpan interval)
    {
        if (interval <= TimeSpan.Zero)
        {
            throw new ArgumentOutOfRangeException($"{nameof(interval)} TimeSpan must be greater than zero");
        }
        var interpolatedValues = new SortedList<DateTime, double>();
        var previous = recordedValues.First();
        var intervalTimestamp = previous.Key.TruncateTime(interval);
        foreach (var current in recordedValues)
        {
            if (current.Key == intervalTimestamp)
            {
                // It's easy when the current recorded value aligns perfectly on the desired interval.
                interpolatedValues.Add(current.Key, current.Value);
                intervalTimestamp += interval;
            }
            else if (current.Key > intervalTimestamp)
            {
                // We do not exactly align at the desired time, so we must interpolate
                // between the "last recorded data" BEFORE the desired time (i.e. previous)
                // and the "first recorded data" AFTER the desired time (i.e. current).
                var interpolatedValue = GetInterpolatedValue(intervalTimestamp, previous, current);
                interpolatedValues.Add(interpolatedValue.Key, interpolatedValue.Value);
                intervalTimestamp += interval;
            }
            previous = current;
        }
        return interpolatedValues;
    }

    private static KeyValuePair<DateTime, double> GetInterpolatedValue(DateTime interpolatedTime, KeyValuePair<DateTime, double> left, KeyValuePair<DateTime, double> right)
    {
        if (!double.IsNaN(left.Value) && !double.IsNaN(right.Value))
        {
            double totalDuration = (right.Key - left.Key).TotalSeconds;
            if (Math.Abs(totalDuration) > double.Epsilon)
            {
                double partialDuration = (interpolatedTime - left.Key).TotalSeconds;
                double factor = partialDuration / totalDuration;
                double calculation = left.Value + ((right.Value - left.Value) * factor);
                return new KeyValuePair<DateTime, double>(interpolatedTime, calculation);
            }
        }
        return new KeyValuePair<DateTime, double>(interpolatedTime, double.NaN);
    }

    public static SortedList<DateTime, double> GetDeltaValues(SortedList<DateTime, double> values, CalculationTimeBasis timeBasis = CalculationTimeBasis.Auto)
    {
        const CalculationTimeBasis autoDefaultsTo = CalculationTimeBasis.MostRecentTime;
        var deltas = new SortedList<DateTime, double>(capacity: values.Count);
        var previous = values.First();
        foreach (var current in values.Skip(1))
        {
            var time = GetTimeForBasis(timeBasis, previous.Key, current.Key, autoDefaultsTo);
            var diff = current.Value - previous.Value;
            deltas.Add(time, diff);
            previous = current;
        }
        return deltas;
    }

    private static DateTime GetTimeForBasis(CalculationTimeBasis timeBasis, DateTime earliestTime, DateTime mostRecentTime, CalculationTimeBasis autoDefaultsTo)
    {
        if (timeBasis == CalculationTimeBasis.Auto)
        {
            // Different (future) methods calling this may require different interpretations of Auto.
            // Thus we leave it to the calling method to declare what Auto means to it.
            timeBasis = autoDefaultsTo;
        }
        switch (timeBasis)
        {
            case CalculationTimeBasis.EarliestTime:
                return earliestTime;
            case CalculationTimeBasis.MidpointTime:
                return new DateTime((earliestTime.Ticks + mostRecentTime.Ticks) / 2L).ToUtcTime();
            case CalculationTimeBasis.MostRecentTime:
                return mostRecentTime;
            case CalculationTimeBasis.Auto:
            default:
                return earliestTime;
        }
    }
}
Usage Example
var inputValues = new List<JsonDto>();
// TODO: Magically populate inputValues
var tagDataMap = ProcessData.GetTagTimedValuesMap(inputValues);
foreach (var item in tagDataMap)
{
    // Following would generate hourly differences for the one Tag Id (item.Key)
    // by first generating hourly data, and then finding the delta of that.
    var hourlyValues = ProcessData.GetInterpolatedValues(item.Value, TimeSpan.FromHours(1));

    // Consider the difference between Hour(1) and Hour(2).
    // That is, 2 input values will create 1 output value.
    // Now you must decide which of the 2 input times you use for the 1 output time.
    // This is what I call the CalculationTimeBasis.
    // The time basis used will be Auto, which defaults to the most recent for this particular method, e.g. Hour(2)
    var deltaValues = ProcessData.GetDeltaValues(hourlyValues);

    // Same as above except we explicitly state we want the most recent time, e.g. also Hour(2)
    var deltaValues2 = ProcessData.GetDeltaValues(hourlyValues, ProcessData.CalculationTimeBasis.MostRecentTime);

    // Here the calculated differences are the same except the
    // timestamp now reflects the earliest time, e.g. Hour(1)
    var deltaValues3 = ProcessData.GetDeltaValues(hourlyValues, ProcessData.CalculationTimeBasis.EarliestTime);
}
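As for the "count from 0 since last midnight" part of the question, one option is to rebase each UTC day against its first value. This is a rough sketch of my own (the sinceMidnight name and the daily-reset interpretation are assumptions), meant to run on the hourlyValues inside the loop above:
// Rebase each UTC day so it starts counting from 0 at its first sample.
var sinceMidnight = new SortedList<DateTime, double>();
DateTime currentDay = DateTime.MinValue;
double baseline = double.NaN;
foreach (var kvp in hourlyValues)
{
    if (kvp.Key.Date != currentDay)
    {
        currentDay = kvp.Key.Date;
        baseline = kvp.Value; // first (interpolated) value of the day
    }
    // A NaN baseline propagates, which honors the bad-quality convention above.
    sinceMidnight.Add(kvp.Key, kvp.Value - baseline);
}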

Use EF Core to get a list of Longs

I have a stored procedure that returns me a list of IDs. (I then use this list of IDs as keys for objects.)
I am migrating this from .NET to .NET Core. In normal .NET I could use an extension library to get the numbers out like this:
var getOrderDetailIdsStoredProc = new GetOrderDetailIdsStoredProc()
{
    NumberOfOrderDetailIdsNeeded = numberOfOrderDetailIdsNeeded
};
var orderDetailIds = contextProvider.Context.Database
    .ExecuteStoredProcedure<long>(getOrderDetailIdsStoredProc);
But that library (EntityFrameworkExtras) does not work with EF Core (I found a version for EF Core, but it doesn't work).
So I have been looking for other solutions:
DbContext.Database.ExecuteSqlCommand: Cannot return records, only output variables
DbSet.FromSql: Can only be run on a DbSet<T> (basically it needs an entity type)
Right now, all I can think of is to make an entity called Number:
public class Number
{
    public long Value;
}

public DbSet<Number> Numbers;
And then do something like this:
Numbers.FromSql("exec GenerateOrderDetailSequencedIds @numberNeeded", numberNeeded)
Aside from the fact that this is very ugly (making an entity out of a native type), I have no table to hook it up to, so I worry it will not work.
Is there any way in EF Core to run a stored procedure and get back a list of numbers?
NOTE: This worked, but was not compatible with BreezeJs (it could not deal with a DbQuery). See my other answer for what I ended up doing.
OrderDetailIdHolder.cs
public class OrderDetailIdHolder
{
    public long NewId { get; set; }
}
MyEntitiesContext
public DbQuery<OrderDetailIdHolder> OrderDetailIdHolders { get; set; }

internal List<long> GetOrderDetailIds(int numberOfIdsNeeded)
{
    var result = OrderDetailIdHolders.FromSql($"exec Sales.GenerateOrderDetailIds {numberOfIdsNeeded}").ToList();
    return result.Select(x => x.NewId).ToList();
}
This is a bit of extra complexity for just a list of longs, but it works.
It is important to note that the property (NewId in this case) must match what is returned from the sproc. Also, the type is not a DbSet; it is a DbQuery.
It is also important to note that this only applies to EF Core 2.2. EF Core 3 has a different way to do this (Keyless Entity Types).
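For reference, a sketch of the EF Core 3+ approach mentioned above, using a keyless entity type (my own adaptation of the names in this answer, untested against Breeze):
// EF Core 3+: register the holder as a keyless entity type instead of a DbQuery.
public DbSet<OrderDetailIdHolder> OrderDetailIdHolders { get; set; }

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<OrderDetailIdHolder>().HasNoKey();
}

internal List<long> GetOrderDetailIds(int numberOfIdsNeeded)
{
    // Stored procedure results cannot be composed over, so materialize first.
    var rows = OrderDetailIdHolders
        .FromSqlInterpolated($"exec Sales.GenerateOrderDetailIds {numberOfIdsNeeded}")
        .ToList();
    return rows.Select(x => x.NewId).ToList();
}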
This is what I ended up using:
public static List<T> SqlQueryList<T>(this DatabaseFacade database, string query, params SqlParameter[] sqlParameters)
{
    // The connection belongs to the DbContext, so we open and close it
    // but do not dispose it; the command and reader are disposed here.
    var conn = database.GetDbConnection();
    conn.Open();
    try
    {
        using (var command = conn.CreateCommand())
        {
            command.CommandText = query;
            command.Parameters.AddRange(sqlParameters);
            using (var reader = command.ExecuteReader())
            {
                var result = new List<T>();
                while (reader.Read())
                {
                    T typedRow;
                    var row = reader.GetValue(0);
                    if (typeof(T).IsValueType)
                    {
                        typedRow = (T)row;
                    }
                    else
                    {
                        typedRow = (T)Convert.ChangeType(row, typeof(T));
                    }
                    result.Add(typedRow);
                }
                return result;
            }
        }
    }
    finally
    {
        conn.Close();
    }
}
Called like this:
var numberOfOrderDetailIdsNeededParam = new SqlParameter
{
    ParameterName = "@numberOfOrderDetailIdsNeeded",
    SqlDbType = SqlDbType.Int,
    Direction = ParameterDirection.Input
};
numberOfOrderDetailIdsNeededParam.Value = numberOfOrderDetailIdsNeeded;

var result = contextProvider.Context.Database.SqlQueryList<long>("exec Sales.GenerateOrderDetailIds @numberOfOrderDetailIdsNeeded", numberOfOrderDetailIdsNeededParam);
I did it this way because it was compatible with BreezeJs for .NET Core. Note that I only really tested this with Value types.

ASP.NET Core Entity Framework SQL Query SELECT

I am one of the many struggling to "upgrade" from ASP.NET to ASP.NET Core.
In the ASP.NET project, I made database calls from my DAL like so:
var result = context.Database.SqlQuery<Object_VM>("EXEC [sp_Object_GetByKey] @Key",
    new SqlParameter("@Key", Key))
    .FirstOrDefault();
return result;
My viewmodel has additional fields that my object does not, such as aggregates of related tables. It seems unnecessary and counter intuitive to include such fields in a database / table structure. My stored procedure calculates all those things and returns the fields as should be displayed, but not stored.
I see that ASP.NET Core has removed this functionality. I am trying to continue to use stored procedures and load view models (and thus not have the entity in the database). I see options like the following, but as a result I get "2", the number of rows being returned (or another mysterious result?).
using (context)
{
    string cmd = "EXEC [sp_Object_getAll]";
    var result = context.Database.ExecuteSqlCommand(cmd);
}
But that won't work, because context.Database.ExecuteSqlCommand is only for altering the database, not "selecting".
I've also seen the following proposed as a solution, but the code will not compile for me, as "Set" is really Set<TEntity>, and there isn't a database entity for this viewmodel.
var result = context.Set().FromSql("EXEC [sp_Object_getAll]");
Any assistance much appreciated.
Solution:
(per Tseng's advice)
On the GitHub Entity Framework issues page, there is a discussion about this problem. One user recommends creating your own class to handle this sort of request, and another adds an extra method that makes it run more smoothly. I changed the methods slightly to accept slightly different params.
Here is my adaptation (very little difference), for others that are also looking for a solution:
Method in DAL
public JsonResult GetObjectByID(int ID)
{
    SqlParameter[] parms = new SqlParameter[] { new SqlParameter("@ID", ID) };
    var result = RDFacadeExtensions.GetModelFromQuery<Object_List_VM>(context, "EXEC [sp_Object_GetList] @ID", parms);
    return new JsonResult(result.ToList(), setting);
}
Additional Class
public static class RDFacadeExtensions
{
    public static RelationalDataReader ExecuteSqlQuery(
        this DatabaseFacade databaseFacade,
        string sql,
        SqlParameter[] parameters)
    {
        var concurrencyDetector = databaseFacade.GetService<IConcurrencyDetector>();
        using (concurrencyDetector.EnterCriticalSection())
        {
            var rawSqlCommand = databaseFacade
                .GetService<IRawSqlCommandBuilder>()
                .Build(sql, parameters);
            return rawSqlCommand
                .RelationalCommand
                .ExecuteReader(
                    databaseFacade.GetService<IRelationalConnection>(),
                    parameterValues: rawSqlCommand.ParameterValues);
        }
    }

    public static IEnumerable<T> GetModelFromQuery<T>(
        DbContext context,
        string sql,
        SqlParameter[] parameters)
        where T : new()
    {
        DatabaseFacade databaseFacade = new DatabaseFacade(context);
        using (DbDataReader dr = databaseFacade.ExecuteSqlQuery(sql, parameters).DbDataReader)
        {
            List<T> lst = new List<T>();
            PropertyInfo[] props = typeof(T).GetProperties();
            while (dr.Read())
            {
                T t = new T();
                IEnumerable<string> actualNames = dr.GetColumnSchema().Select(o => o.ColumnName);
                for (int i = 0; i < props.Length; ++i)
                {
                    PropertyInfo pi = props[i];
                    if (!pi.CanWrite) continue;
                    var ca = pi.GetCustomAttribute(typeof(System.ComponentModel.DataAnnotations.Schema.ColumnAttribute)) as System.ComponentModel.DataAnnotations.Schema.ColumnAttribute;
                    string name = ca?.Name ?? pi.Name;
                    if (!actualNames.Contains(name)) { continue; }
                    object value = dr[name];
                    Type pt = pi.PropertyType;
                    bool nullable = pt.GetTypeInfo().IsGenericType && pt.GetGenericTypeDefinition() == typeof(Nullable<>);
                    if (value == DBNull.Value) { value = null; }
                    if (value == null && pt.GetTypeInfo().IsValueType && !nullable)
                    { value = Activator.CreateInstance(pt); }
                    pi.SetValue(t, value);
                } // for i
                lst.Add(t);
            } // while
            return lst;
        } // using dr
    }
}

EF Code-first: Increment a non-autoincremented field manually

I'm using an existing database from our ERP.
In all my database tables there is a float field called "r_e_c_n_o_", but this field is not auto-incremented by the database and I can't change that.
For all added entities I would like to increment this field "r_e_c_n_o_". How could I accomplish that in DbContext's SaveChanges() method?
Using ADO.NET I'd do something like this:
public static int GetNext(string tableName, string fieldName)
{
    var cmd = _conn.CreateCommand(string.Format("SELECT MAX({0}) + 1 FROM {1}", fieldName, tableName));
    var result = (int)cmd.ExecuteScalar();
    return result;
}
UPDATE:
Please take a look at the comment below; it's just what I need to solve my problem:
public override int SaveChanges()
{
    var entries = this.ChangeTracker.Entries();
    Dictionary<string, int> lastRecnos = new Dictionary<string, int>();
    foreach (var entry in entries)
    {
        var typeName = entry.Entity.GetType().Name;
        if (lastRecnos.ContainsKey(typeName))
            lastRecnos[typeName]++;
        else
            lastRecnos[typeName] = 0; // How can I get the max here?
        int nextRecnoForThisEntity = lastRecnos[typeName];
        var entity = entry.Entity as EntityBase;
        entity.Recno = nextRecnoForThisEntity;
    }
    return base.SaveChanges();
}
Tks,
William
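One way to answer the inline "How can I get the max here?" question is to seed each type's counter from the database the first time that type shows up, reusing the MAX query idea from the ADO.NET snippet above. A rough sketch of my own: it assumes the table name matches the entity type name (replace that with your real mapping), and it is racy if several writers save concurrently:
public override int SaveChanges()
{
    var lastRecnos = new Dictionary<string, int>();
    var addedEntities = this.ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added)
        .Select(e => e.Entity)
        .OfType<EntityBase>();
    foreach (var entity in addedEntities)
    {
        var typeName = entity.GetType().Name;
        if (!lastRecnos.ContainsKey(typeName))
        {
            // GetNext returns SELECT MAX(r_e_c_n_o_) + 1 for the table;
            // assumes table name == entity type name, which may not hold.
            lastRecnos[typeName] = GetNext(typeName, "r_e_c_n_o_");
        }
        else
        {
            lastRecnos[typeName]++;
        }
        entity.Recno = lastRecnos[typeName];
    }
    return base.SaveChanges();
}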

Generating Cache Keys from IQueryable For Caching Results of EF Code First Queries

I'm trying to implement a caching scheme for my EF repository similar to the one blogged here. As the author and commenters have reported, the limitation is that the key generation method cannot produce cache keys that vary with a given query's parameters. Here is the cache key generation method:
private static string GetKey<T>(IQueryable<T> query)
{
    string key = string.Concat(query.ToString(), "\n\r",
        typeof(T).AssemblyQualifiedName);
    return key;
}
So the following queries will yield the same cache key:
var isActive = true;
var query = context.Products
    .OrderBy(one => one.ProductNumber)
    .Where(one => one.IsActive == isActive).AsCacheable();
and
var isActive = false;
var query = context.Products
    .OrderBy(one => one.ProductNumber)
    .Where(one => one.IsActive == isActive).AsCacheable();
Notice that the only difference is that isActive = true in the first query and isActive = false in the second.
Any suggestions or insight into efficiently generating cache keys that vary by IQueryable parameters would be truly appreciated.
Kudos to Sergey Barskiy for sharing the EF CodeFirst caching scheme.
Update
I took the approach of traversing the IQueryable's expression tree myself, with the goal of resolving the values of the parameters used in the query. Following maxlego's suggestion, I extended the System.Linq.Expressions.ExpressionVisitor class to visit the expression nodes that we're interested in (in this case, the MemberExpression). The updated GetKey method looks something like this:
public static string GetKey<T>(IQueryable<T> query)
{
    var keyBuilder = new StringBuilder(query.ToString());
    var queryParamVisitor = new QueryParameterVisitor(keyBuilder);
    queryParamVisitor.GetQueryParameters(query.Expression);
    keyBuilder.Append("\n\r");
    keyBuilder.Append(typeof(T).AssemblyQualifiedName);
    return keyBuilder.ToString();
}
And the QueryParameterVisitor class, which was inspired by the answers of Bryan Watts and Marc Gravell to this question, looks like this:
/// <summary>
/// <see cref="ExpressionVisitor"/> subclass which encapsulates logic to
/// traverse an expression tree and resolve all the query parameter values
/// </summary>
internal class QueryParameterVisitor : ExpressionVisitor
{
    public QueryParameterVisitor(StringBuilder sb)
    {
        QueryParamBuilder = sb;
        Visited = new Dictionary<int, bool>();
    }

    protected StringBuilder QueryParamBuilder { get; set; }
    protected Dictionary<int, bool> Visited { get; set; }

    public StringBuilder GetQueryParameters(Expression expression)
    {
        Visit(expression);
        return QueryParamBuilder;
    }

    private static object GetMemberValue(MemberExpression memberExpression, Dictionary<int, bool> visited)
    {
        object value;
        if (!TryGetMemberValue(memberExpression, out value, visited))
        {
            UnaryExpression objectMember = Expression.Convert(memberExpression, typeof(object));
            Expression<Func<object>> getterLambda = Expression.Lambda<Func<object>>(objectMember);
            Func<object> getter = null;
            try
            {
                getter = getterLambda.Compile();
            }
            catch (InvalidOperationException)
            {
            }
            if (getter != null) value = getter();
        }
        return value;
    }

    private static bool TryGetMemberValue(Expression expression, out object value, Dictionary<int, bool> visited)
    {
        if (expression == null)
        {
            // used for static fields, etc
            value = null;
            return true;
        }
        // Mark this node as visited (processed)
        int expressionHash = expression.GetHashCode();
        if (!visited.ContainsKey(expressionHash))
        {
            visited.Add(expressionHash, true);
        }
        // Get Member Value, recurse if necessary
        switch (expression.NodeType)
        {
            case ExpressionType.Constant:
                value = ((ConstantExpression)expression).Value;
                return true;
            case ExpressionType.MemberAccess:
                var me = (MemberExpression)expression;
                object target;
                if (TryGetMemberValue(me.Expression, out target, visited))
                {
                    // instance target
                    switch (me.Member.MemberType)
                    {
                        case MemberTypes.Field:
                            value = ((FieldInfo)me.Member).GetValue(target);
                            return true;
                        case MemberTypes.Property:
                            value = ((PropertyInfo)me.Member).GetValue(target, null);
                            return true;
                    }
                }
                break;
        }
        // Could not retrieve value
        value = null;
        return false;
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        // Only process nodes that haven't been processed before; this could happen because our traversal
        // is depth-first and will "visit" the nodes in the subtree before this method (VisitMember) does
        if (!Visited.ContainsKey(node.GetHashCode()))
        {
            object value = GetMemberValue(node, Visited);
            if (value != null)
            {
                QueryParamBuilder.Append("\n\r");
                QueryParamBuilder.Append(value.ToString());
            }
        }
        return base.VisitMember(node);
    }
}
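With the visitor in place, the two queries from the beginning of the question should now produce distinct keys, because the resolved value of isActive is appended to the key. A quick illustrative sketch of my own, reusing the Products example:
var isActive = true;
var query = context.Products
    .OrderBy(one => one.ProductNumber)
    .Where(one => one.IsActive == isActive);

// The key is the query text, then "\n\rTrue" appended by the visitor,
// then the assembly-qualified type name; flipping isActive to false
// changes that fragment to "False", yielding a different cache key.
string key = GetKey(query);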
I'm still doing some performance profiling on the cache key generation and hoping that it isn't too expensive (I'll update the question with the results once I have them). I'll leave the question open in case anyone has suggestions on how to optimize this process, or a recommendation for a more efficient method for generating cache keys that vary with the query parameters. Although this method produces the desired output, it is by no means optimal.
I suggest using an ExpressionVisitor:
http://msdn.microsoft.com/en-us/library/bb882521(v=vs.90).aspx
Just for the record, "Caching the results of LINQ queries" works well with EF and is able to handle parameters correctly, so it can be considered a good second-level cache implementation for EF.
While the solution of the OP works quite well, I found that its performance was a little bit poor.
The duration of the key generation varied between 300 ms and 1200 ms for my queries.
However, I've found another solution that has much better performance (<10 ms):
public static string ToTraceString<T>(DbQuery<T> query)
{
    var internalQueryField = query.GetType()
        .GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
        .FirstOrDefault(f => f.Name.Equals("_internalQuery"));
    var internalQuery = internalQueryField.GetValue(query);
    var objectQueryField = internalQuery.GetType()
        .GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
        .FirstOrDefault(f => f.Name.Equals("_objectQuery"));
    var objectQuery = objectQueryField.GetValue(internalQuery) as ObjectQuery<T>;
    return ToTraceStringWithParameters(objectQuery);
}

private static string ToTraceStringWithParameters<T>(ObjectQuery<T> query)
{
    string traceString = query.ToTraceString() + "\n";
    foreach (var parameter in query.Parameters)
    {
        traceString += parameter.Name + " [" + parameter.ParameterType.FullName + "] = " + parameter.Value + "\n";
    }
    return traceString;
}
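For completeness, a hedged usage sketch of my own (this answer targets EF5/EF6-era code, where LINQ queries rooted in a DbSet are DbQuery<T> instances at runtime, which is what makes the cast below work):
var isActive = true;
IQueryable<Product> queryable = context.Products
    .OrderBy(p => p.ProductNumber)
    .Where(p => p.IsActive == isActive);

// The resulting key includes both the trace SQL and the parameter values,
// so queries that differ only in isActive get different cache keys.
string cacheKey = ToTraceString((DbQuery<Product>)queryable);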