I'm using Entity Framework and MS SQL Server 2008. It seems that Entity Framework always rounds off decimal entity attributes to the nearest integer, which is really annoying. Have you seen this problem before, and what's your solution? Thanks!
I ran into the same problem. I am using EF with SQL Server 2008 SP1 and use stored procedures to query some data. Another annoying thing: when I create a function import for a stored procedure, EF is supposed to supply a function on the context class to call that SP, but for me it does not. So I created my own function to call the SP. In that function I create EntityParameters, supplying DbType.Decimal, but this results in Precision and Scale being set to 0. When I set them to the correct numbers my DB schema requires (3 and 1, by the way), everything is OK.
zsolt
Without your mapping classes and database schema, I can only assume that one of them is using an int, or something set to zero decimal places of accuracy.
It would also be worth mentioning which version of EF (Entity Framework) you're using.
It shouldn't happen is the short answer.
Similar to what Timothy said, it's likely that one of the following is happening:
A property of one of your generated classes is defined as an int while the column is a decimal - you can fix this by changing the property's type in the entity designer
Somewhere in your code, you're working with ints rather than decimals. In particular, if you're using implicit typing with var, the compiler could be inferring an integral type rather than a decimal type
An example to clarify the second point:
var myNumber = 100; // myNumber will be an int
myNumber = myNumber / 3; // myNumber == 33 (an int)
Can you post some code to give us a better idea?
In C#, int / int = int.
10 / 2 = 5, and 10 / 3 = 3.
The language truncates the answer (it does not round), so if you want an accurate result, you need to cast one of the operands to double or decimal.
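For what it's worth, Java follows the same integer-division rule, so here is a minimal Java sketch (the values are arbitrary) of both the truncation and the cast that fixes it:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class IntegerDivision {
    public static void main(String[] args) {
        int total = 100;

        int truncated = total / 3;             // 33: integer division truncates, it does not round
        double casted = (double) total / 3;    // 33.333...: cast one operand before dividing
        BigDecimal exact = new BigDecimal(total)
                .divide(new BigDecimal(3), 4, RoundingMode.HALF_UP); // 33.3333, rounding made explicit

        System.out.println(truncated);
        System.out.println(casted);
        System.out.println(exact);
    }
}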
I am starting a new project using Postgres and Hibernate (5.5.7) as the ORM; however, I have recently read the following wiki page:
https://wiki.postgresql.org/wiki/Don%27t_Do_This
Based on that I would like to change some of the default column mappings, specifically:
Use timestamptz instead of timestamp
Use varchar instead of varchar(255) when the column length is unspecified.
Increase the scale of numeric types so that the default is numeric(19,5) - The app uses BigDecimals to store currency values.
Reading through the Hibernate code, it appears that the default length, precision, and scale are hardcoded in the class org.hibernate.mapping.Column, specifically:
public static final int DEFAULT_LENGTH = 255;
public static final int DEFAULT_PRECISION = 19;
public static final int DEFAULT_SCALE = 2;
For the 2nd and 3rd cases (varchar and numeric) I don't see any easy way to change the defaults (length, precision, and scale), so the best option I have been able to come up with is to create a new custom "Dialect" extending from PostgreSQL95Dialect whose constructor redefines the mappings as follows:
registerColumnType(Types.TIMESTAMP, "timestamptz");
registerColumnType(Types.VARCHAR, "varchar");
registerColumnType(Types.NUMERIC, "numeric($p, 5)");
Using this overridden dialect I can generate a schema which includes the changes I am trying to achieve.
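Put together, the overridden dialect looks roughly like this (the class name is arbitrary; the three registerColumnType calls are the ones listed above):

import java.sql.Types;
import org.hibernate.dialect.PostgreSQL95Dialect;

public class MyPostgreSQLDialect extends PostgreSQL95Dialect {

    public MyPostgreSQLDialect() {
        super();
        // Remap the JDBC type codes to the PostgreSQL column types I actually want.
        registerColumnType(Types.TIMESTAMP, "timestamptz");  // store timestamps with time zone
        registerColumnType(Types.VARCHAR, "varchar");         // no default length limit
        registerColumnType(Types.NUMERIC, "numeric($p, 5)");  // keep the precision, force scale 5
    }
}

The custom dialect is then selected via the hibernate.dialect configuration property.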
I am happy with the timestamp change since I don't see any use case where I would need to store a timestamp without the time zone (I typically use Instants (UTC) in my model).
I can't foresee a problem with the varchar change since all validation occurs when data is sent into the system (REST service).
However, I have lost the ability to use the @Column scale attribute - I have to use an explicit columnDefinition if I want to specify a different scale.
This still leaves me with the following questions:
Is there a better solution than I have described?
Can you foresee any problems using the custom dialect, that I haven't listed here?
Would you recommend using the custom dialect for schema generation ONLY or should it be used for both schema generation and when the application is running (why)?
Well, if you really must do this, it's fine, but I wouldn't recommend it. The default values usually come from the @Column annotation, so I would recommend you simply set proper values everywhere in your model. IMO the only OK-ish change is the switch to timestamptz, but note that the type does not actually store the time zone; it normalizes the value to UTC instead.
The next problem, IMO, is that you switched to an unbounded varchar. It might be "discouraged" to use a length limit, but believe me, it will save you a lot of trouble in the long run. There was a blog post just recently about switching back to length-limited varchar because users were saving far too much text. So even if you believe you have validated the lengths, you probably won't get this right for all future uses. Increasing a limit is easy and doesn't require any expensive locks, so if you already define limits on your REST API, it would IMO be foolish not to model them in the database as well. Or are you omitting foreign key constraints just because your REST API also validates them? No, you are not, and every DBA will tell you that you should never omit such constraints. These constraints are valuable.
As for numerics, just bite the bullet and apply the values on all @Column annotations. You can use some kind of constants holder class to avoid inconsistencies:
public class MyConstants {
public static final int VARCHAR_SMALL = 1000;
public static final int VARCHAR_MEDIUM = 5000;
public static final int VARCHAR_LARGE = 10000;
public static final int PRICE_PRECISION = 19;
public static final int PRICE_SCALE = 5;
}
and use it like @Column(precision = MyConstants.PRICE_PRECISION, scale = MyConstants.PRICE_SCALE)
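For illustration, a hypothetical entity using those constants might look like this (the entity and field names are made up):

import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Product {

    @Id
    private Long id;

    // Explicit length instead of relying on Hibernate's default of 255
    @Column(length = MyConstants.VARCHAR_MEDIUM)
    private String description;

    // Explicit precision/scale instead of the hardcoded defaults (19, 2)
    @Column(precision = MyConstants.PRICE_PRECISION, scale = MyConstants.PRICE_SCALE)
    private BigDecimal price;
}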
I am working on an application that requires monetary calculations, so we're using BigDecimal to process such numbers.
I currently store the BigDecimals as strings in a PostgreSQL database. That made the most sense to me because I can then be sure the numbers will not lose precision, as opposed to storing them as doubles in the database.
The thing is that I cannot really run many queries against them (e.g., 'smaller than X' doesn't work on a number stored as text).
For the numbers I do have to perform complex queries on, I create an extra column called indexedY (where Y is the name of the original column), e.g. I have amount (string) and indexedAmount (double). I convert amount to indexedAmount by calling toDouble() on the BigDecimal instance.
I then run the query, and when a row is found, I convert the string version of the same number back to a BigDecimal and apply the condition once again (this time on the fetched object), just to make sure I didn't pick up any rounding errors while the double was in transit (from the application to the DB and back to the application).
I was wondering if I can avoid this extra step of creating the indexedY columns.
So my question comes down to this: is it safe to just store the outcome of a BigDecimal as a double in a (PostgreSQL) table without losing precision?
If BigDecimal is required, I would use a NUMERIC type with as much precision and scale as you need, e.g. NUMERIC(20, 5) for up to 20 significant digits with 5 of them after the decimal point.
However, if you only need 15 digits of precision, using a double in the database might be fine, in which case it should be fine in Java too.
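As a rough sketch of what that looks like with plain JDBC (the table, column, and connection details are made up), a NUMERIC column round-trips a BigDecimal without ever going through double, and comparisons work directly in SQL:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NumericRoundTrip {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {

            // Assumes a table like: CREATE TABLE payment (id BIGINT, amount NUMERIC(20, 5))
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO payment (id, amount) VALUES (?, ?)")) {
                insert.setLong(1, 1L);
                insert.setBigDecimal(2, new BigDecimal("12345.67890")); // stored exactly
                insert.executeUpdate();
            }

            // "Smaller than X" works directly on the NUMERIC column, no indexedAmount needed.
            try (PreparedStatement query = conn.prepareStatement(
                    "SELECT amount FROM payment WHERE amount < ?")) {
                query.setBigDecimal(1, new BigDecimal("20000"));
                try (ResultSet rs = query.executeQuery()) {
                    while (rs.next()) {
                        BigDecimal amount = rs.getBigDecimal("amount"); // read back exactly
                        System.out.println(amount);
                    }
                }
            }
        }
    }
}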
I have a funny issue with an Entity Framework function import. I am importing a stored procedure from MS SQL Server 2008 R2, and it returns just one string. However, it is too complex for EF to infer this return type, so when defining the function import I had to manually specify that the function returns a collection of scalars (ObjectResult<global::System.String>, as was generated).
Sometimes the procedure returns a string containing only digits and starting with zero (e.g., 01234). When I access this result in the code, it turns out to be missing the leading zero (1234)!
I know several workarounds, so it is not a question. I want to understand what's going on.
My wild guess is that in my case - when the SP is too complex to "predict" its result - EF first tries to "guess" the type of the returned data from the format of that data. That is, it sees a number (01234) and converts it into an integer type (int, short, whatever). Then it sees that I want a string and converts this number back into a string, but of course the leading zero is lost during these conversions. Is that true, or is there a better explanation?
UPDATE: This is a screenshot for function import:
I have a table with a column of type decimal. There is an ESQL/C structure that represents the table; it has a member of type decimal. I also have an equivalent plain C structure for the same table, in which the corresponding field is a float.
Since we use memcpy to copy data between the ESQL/C structure and the C structure, there is an issue with the decimal-to-float conversion. When I searched the Informix ESQL/C Programmer's Manual, I couldn't find any function that does this. A Google search led me to the deccvflt() function, which converts from a float to a decimal type.
Though I couldn't find this function listed in the manual, I see the declarations in decimal.h. Are these functions still recommended to be used?
Alternatively, I was thinking about using the decimal type in the C structure as well, since decimal is itself a C structure. That way, I can still use memcpy, right?
Please share your thoughts.
IBM Informix Dynamic Server Version 11.50.FC3
Thanks,
prabhu
You could convert to float or decimal directly in your query using a cast:
select name_of_float::decimal(8,2) from table
or
select name_of_decimal::float from table
I need to read floating-point decimals from an SQLite DB.
I created the column type in the database as INTEGER.
I am reading with sqlite3_column_int into NSInteger variables.
Which parts should be changed so I can read floating-point values from the database?
Thanks for the answers; sorry, I'm a newbie on this topic.
There's no such thing as a floating-point integer; the two are mutually exclusive on a fundamental level (well, floating-point numbers can fall on integer boundaries, but you get the idea) :P.
But the way to get floats out is by using the sqlite3_column_double(sqlite3_stmt *, int) function.
NSNumber *f = [NSNumber numberWithFloat:(float)sqlite3_column_double(stmt, col)];
Declare the fields as a floating-point number. From the SQLite datatype docs, the database type is REAL. This is an 8-byte floating-point number, which corresponds to a C/Objective-C double (typically wrapped in an NSNumber).
NSNumber *num;
double temp = sqlite3_column_double(walkstatement, 3); // read the column as a double, not an int
num = [[NSNumber alloc] initWithDouble:temp];
NSLog(@"%@", num);
This code worked for me..
For additional information, sqlite3 uses a dynamic type system, so anything beyond these:
NULL
INTEGER
REAL
TEXT
BLOB
are just hints to sqlite (technically, affinity), and it'll end up deciding on its own what type it is or is not.
http://www.sqlite.org/datatype3.html
The functions that return doubles, ints, etc. are part of the sqlite C API and try their best to give you what you want, so what's stored in a column can potentially be converted by sqlite if the type you're asking for is different from what's in the DB.
http://www.sqlite.org/c3ref/column_blob.html
About midway down, there's a handy table that'll let you know what to expect.