I've come across some very strange behaviour in an application. I am using EF 6 and have a model with a decimal property; the backing database column is decimal(18,4). If I change the value of the decimal from 0.6500 to 0.6550 and call SaveChanges on the context, the row is not updated. If I change it from 0.6500 to 0.1350 and save, the row is updated but the value is saved as 0.1300, so it has lost the 0.005. I know the database can hold that precision, as it already does for some manually inserted data, and EF retrieves that without issue.
What on earth do I need to do to get EF to update my row and maintain precision? Help much appreciated, I might go cry.
So I found the answer to my own question after having left the issue for a while. It seems EF's default mapping for decimals is decimal(18, 2), so anything beyond two decimal places is silently lost; I can override this myself and set the scale to 4 as I want. I took the code from this previously answered question:
Set decimal(16, 3) for a column in Code First Approach in EF4.3
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<MyClass>().Property(x => x.SnachCount).HasPrecision(16, 3);
    modelBuilder.Entity<MyClass>().Property(x => x.MinimumStock).HasPrecision(16, 3);
    modelBuilder.Entity<MyClass>().Property(x => x.MaximumStock).HasPrecision(16, 3);
}
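If you do not want to configure every decimal property individually, EF 6 also lets you change the default for all decimals in the model in OnModelCreating; here is a minimal sketch of that approach (the 18, 4 precision is just an assumption to match my column):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Apply decimal(18, 4) to every decimal property in the model
    // instead of calling HasPrecision on each property separately.
    modelBuilder.Properties<decimal>()
        .Configure(c => c.HasPrecision(18, 4));
}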
I want to use the constructor of an AnyLogic TableFunction, but I do not know what int approximationOrder stands for. The constructor without it is labeled as deprecated. Can anyone help?
int approximationOrder: the order (1, 2, 3...) of the approximation polynomial if the INTERPOLATION_APPROXIMATION mode is selected
If you do not set your InterpolationType to INTERPOLATION_APPROXIMATION then it does not matter what you choose here.
For example:
TableFunction x = new TableFunction(
new double[]{1,2,3,4,5},
new double[]{1,2,1,2,2},
TableFunction.InterpolationType.INTERPOLATION_LINEAR,
1, //approximationOrder makes no impact as we will not be using it
TableFunction.OUTOFRANGE_ERROR,
999.0
);
This is the parameter you see when you set the interpolation mode to approximation.
What version of AnyLogic are you using? The latest version has this documented
I have a code-first project where one of the POCOs contains a decimal field which is set up in the database as this:
rate = c.Decimal(nullable: false, precision: 18, scale: 4),
I can see in the SQL database that it reflects the correct precision and scale. I need it to store a scale of up to 4 digits, i.e. values as small as 0.0001. However, when persisting values, everything beyond the first two digits of the scale is persisted as 0: 0.0100 is persisted as 0.0100, but 0.0090, 0.0050 or 0.0010 are all persisted as 0.0000. I am probably overlooking some obvious aspect here, but I have been looking for any documentation reference and don't seem to be finding anything so far. I'd appreciate any clues on this. Thank you!
I have found the answer and it seems to have helped me resolve the issue. Sam describes exactly the same symptoms I was facing, and re-doing the steps Sam recommends resolved it. Here is the link:
Sam's article on the issue: https://storiknow.com/entity-framework-decimal-scale-and-precision-convention/
Having implemented the steps in Sam's article, it caused a side issue with the context creation, and I got the following errors as an exception during add-migration:
DatabaseAccess.databaseContext.ApplicationUserRole: : EntityType 'ApplicationUserRole' has no key defined. Define the key for this EntityType.
DatabaseAccess.databaseContext.ApplicationUserLogin: : EntityType 'ApplicationUserLogin' has no key defined. Define the key for this EntityType.
These, in my case, were solved by adding the following lines to OnModelCreating():
modelBuilder.Entity<ApplicationUserRole>().ToTable("AspNetUserRoles").HasKey(ur => new { ur.RoleId, ur.UserId });
modelBuilder.Entity<ApplicationUserLogin>().ToTable("AspNetUserLogins").HasKey<int>(ul => ul.UserId);
Here is the overall OnModelCreating() definition:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Conventions.Add(new decimalPrecisionAttributeConvention());
    modelBuilder.Entity<ApplicationUserRole>().ToTable("AspNetUserRoles").HasKey(ur => new { ur.RoleId, ur.UserId });
    modelBuilder.Entity<ApplicationUserLogin>().ToTable("AspNetUserLogins").HasKey<int>(ul => ul.UserId);
}
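For reference, the decimalPrecisionAttributeConvention registered above follows the pattern described in Sam's article: a custom attribute carrying precision and scale, plus an EF 6 convention that applies it to decorated decimal properties. Roughly a sketch of that pattern (the exact code in the article may differ slightly; it needs System.Linq and System.Data.Entity.ModelConfiguration.Conventions):

[AttributeUsage(AttributeTargets.Property)]
public class DecimalPrecisionAttribute : Attribute
{
    public DecimalPrecisionAttribute(byte precision, byte scale)
    {
        Precision = precision;
        Scale = scale;
    }

    public byte Precision { get; private set; }
    public byte Scale { get; private set; }
}

public class decimalPrecisionAttributeConvention : Convention
{
    public decimalPrecisionAttributeConvention()
    {
        // For every decimal property decorated with [DecimalPrecision], apply its precision/scale.
        Properties<decimal>()
            .Having(p => p.GetCustomAttributes(false).OfType<DecimalPrecisionAttribute>().FirstOrDefault())
            .Configure((config, attr) => config.HasPrecision(attr.Precision, attr.Scale));
    }
}

A property is then decorated like [DecimalPrecision(18, 4)] public decimal rate { get; set; } to match the precision and scale of the column.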
Just in case someone stumbles upon the same issue.
I am using EF 6.1.3. Using code first limits a byte[] property in an entity to a maximum of 8000 bytes. Any attempt to make it greater, that is MAX, fails.
HasMaxLength(null) (yes, the parameter is int?) still sets it to 8000, and HasMaxLength(int.MaxValue) or any other value greater than 8000 makes EF throw a System.Data.Entity.Core.MetadataException:
Schema specified is not valid. Errors: (0,0) : error 0026: MaxLength
'2147483647' is not valid. Length must be between '1' and '8000' for
'varbinary' type.
SQL Server 13.0.2151 (mssqllocaldb) allows for varbinary(max).
This limit seems too severe to me, and trying to find out why it is imposed has not yielded a good reason either. So, my question is:
How can a byte[] be mapped to varbinary(max) in EF code first?
PS: The property is also 'required', but I am not sure whether an optional property may be set to varbinary(MAX) either. Anyway, I have not tested that case since it does not make much sense to me.
Despite the multiple articles that state the solution is to add the following attribute:
[Column(TypeName="image")]
byte[] Photo { get; set; }
I found the correct approach to be adding this attribute instead:
[MaxLength]
public byte[] Photo { get; set; }
With the Column(TypeName) recommendation I would end up getting the following error with SQL CE:
The field Photo must be a string or array type with a maximum length of '4000'
Well, I found a workaround to this. Specifying HasColumnType("image") solves the problem, but I still think that EF should allow specifying varbinary(max) as well.
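For reference, the fluent equivalent of that workaround would look roughly like this (the entity and property names here are only placeholders):

modelBuilder.Entity<Document>()
    .Property(d => d.Photo)
    .HasColumnType("image");   // maps the byte[] to the SQL Server image type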
Moreover, not all binary files are images. ;)
And still part of the question remains unanswered, so I will put it this way:
Why can't a byte[] property be mapped to varbinary(max) in EF code first?
Any comments (or answers of course) are welcome. Thanks in advance.
EDIT (as per comment by Gert): leaving the property without any specs makes EF generate varbinary(max). Surprisingly simple!
It is possible with the fluent API:
.IsMaxLength()
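A minimal sketch of where that call goes (the entity and property names are placeholders, not taken from the question):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // IsMaxLength() removes the length cap so the byte[] maps to varbinary(max).
    modelBuilder.Entity<Document>()
        .Property(d => d.Photo)
        .IsMaxLength();
}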
Before you update the database, take a look at the file that is generated after you run "add-migration filename".
If you see a "CreateTable" call in which a field that should be a binary type with a length of MAX is generated as c.Binary(maxLength: 8000), remove the maxLength parameter entirely, then run update-database; after that you can check the created table in the SQL Server database.
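To illustrate, a generated migration with the cap might look roughly like this (table and column names are hypothetical); deleting the maxLength argument, so the line reads Photo = c.Binary(), produces varbinary(max) after update-database:

CreateTable(
    "dbo.Documents",
    c => new
        {
            Id = c.Int(nullable: false, identity: true),
            Photo = c.Binary(maxLength: 8000),   // remove maxLength to get varbinary(max)
        })
    .PrimaryKey(t => t.Id);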
I have written my own Odoo module where I added a property weight to the product.template model.
The implementation in the Python code is
weight = fields.Float('Weight', digits=(12,4))
I also changed the view so that I can set this value in the form. I then created a new product in the Odoo GUI and set the weight to 7.85. After saving, 7.8500 is shown, which seems reasonable as the definition declares 4 decimal digits. The value stored in PostgreSQL is a numeric and the value is 7.8500, so this all seems to be correct.
When I now fetch the product with the Odoo API, which is based on XML-RPC, I do not get 7.8500 but 7.8500000000000005:
<member>
<name>weight</name>
<value><double>7.8500000000000005</double></value>
</member>
So my question is: why does this happen, and how can I prevent it?
EDIT:
This behavior occurs whenever I have 2 decimal places. When I use 7.8 instead of 7.85, the return value is 7.8.
I am new to using InterSystems Caché and face an issue where I am querying data stored in Caché, exposed by classes which do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always larger than what is defined in the object code.
As such I get errors like the one below very frequently.
Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.
Does anyone have any experience with implementing some type of quality process to ensure that the object definitions (SQL mappings) are maintained in such a way that they can accommodate the data which is being persisted in the globals?
Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];
In this particular example the system has the column defined with a MAXLEN of 5, but the data stored in the system is 6 characters long.
How can I proactively monitor and repair such situations?
/*
I did not create these object definitions in cache
*/
It's not completely clear what "monitor and repair" would mean for you, but:
How much control do you have over the database side? Caché runs code for a data type when converting from a global to ODBC, using the LogicalToODBC method of the data-type class. If you change the property types from %String to your own class, AppropriatelyNamedString, then you can override that method to automatically truncate, if that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
It is also possible to run code within Cache to find records with properties that are above the (somewhat theoretical) maximum length. This obviously would require full table scans. It is even possible to expose that code as a stored procedure.
Again, I don't know what exactly you are trying to do, but those are some options. They probably do require getting deeper into the Cache side than you would prefer.
As far as preventing the bad data in the first place, there is no general answer. Cache allows programmers to directly write to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
Edit: Here is code that might work for detecting bad data. It might not work if you are doing certain unusual things, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. This is meant to run from a command prompt, so it would probably have to be modified for your purposes.
{
S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
I 'ClassQuery.Execute("%Library.Persistent") b q
While ClassQuery.Next(.sc) {
If $$$ISERR(sc) b Quit
S ClassName=ClassQuery.Data("Name")
I $E(ClassName)="%" continue
S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
I '$IsObject(OneClassQuery) continue //may not exist
try {
I 'OneClassQuery.Execute() D OneClassQuery.Close() continue
}
catch
{
D OneClassQuery.Close()
continue
}
S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
K Properties
s sc=PropertyQuery.Execute(ClassName) I 'sc D PropertyQuery.Close() continue
While PropertyQuery.Next()
{
s PropertyName=$G(PropertyQuery.Data("Name"))
S PropertyDefinition=""
S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
I '$IsObject(PropertyDefinition) continue
I PropertyDefinition.Private continue
I PropertyDefinition.SqlFieldName=""
{
S Properties(PropertyName)=PropertyName
}
else
{
I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
}
}
D PropertyQuery.Close()
I '$D(Properties) continue
While OneClassQuery.Next(.sc2) {
B:'sc2
S ID=OneClassQuery.Data("ID")
Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
S sc=OneRowQuery.Execute(ID) continue:'sc
I 'OneRowQuery.Next() D OneRowQuery.Close() continue
S PropertyName=""
F S PropertyName=$O(Properties(PropertyName)) Q:PropertyName="" d
. S PropertyValue=$G(OneRowQuery.Data(PropertyName))
. I PropertyValue'="" D
.. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
.. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
.. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
D OneRowQuery.Close()
}
D OneClassQuery.Close()
}
D ClassQuery.Close()
}
The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect this to be enforced more strictly. The other option is to write your SQL like SELECT LEFT(columnname, 5)...
The simplest solution, which I use for all Integration Services packages for example, is to create a query that casts all nvarchar or char data to the correct length. That way, my data never fails due to truncation.
Optional:
First run a query like: SELECT MAX(DATALENGTH(mycolumnname)) FROM cachenamespace.tablename
Your new query: SELECT CAST(mycolumnname AS varchar(6)) AS mycolumnname,
convert(varchar(8000), memo_field) AS memo_field
FROM cachenamespace.tablename
Your pain of getting the data will be lessened but not eliminated.
If you use any type of OLE DB provider, or if you use an OPENQUERY in SQL Server,
the casts must occur in the query sent to the InterSystems Caché database, not in the outer query that retrieves data from the inner OPENQUERY.