I have a table with a column of type decimal. There is an ESQL/C structure that represents the table, with a member of type decimal, and I also have an equivalent plain C structure for the same table, in which that field is a float.
Since we use memcpy to copy data between the ESQL/C structure and the C structure, there is an issue with the decimal-to-float conversion. When I searched the Informix ESQL/C Programmer's Manual, I couldn't find any function that can do this. A Google search led me to the deccvflt() function, but that converts from a float to a decimal type.
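The underlying problem can be illustrated without Informix at all: memcpy copies raw bytes, so copying between fields of different types preserves the bit pattern but not the value. A small Python sketch using the struct module as a stand-in for the raw byte copy (the types here are a C float and a 4-byte int, not Informix's decimal, but the effect is the same):

```python
import struct

# Pack the C-float representation of 1.5 into 4 raw bytes, then
# reinterpret those same bytes as a 4-byte int, exactly as a
# memcpy between mismatched struct members would.
raw = struct.pack('f', 1.5)
(as_int,) = struct.unpack('i', raw)
print(as_int)   # 1069547520 (0x3FC00000), not 1
```

The bits of 1.5 survive the copy intact, but read back through the wrong type they are meaningless, which is why a proper conversion function is needed rather than memcpy.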
Though I couldn't find this function listed in the manual, I do see the declarations in decimal.h. Are these functions still recommended for use?
Alternatively, I was also thinking of using the decimal type in the plain C structure as well, since decimal is itself a C struct. That way I can still use memcpy, right?
Please share your thoughts.
IBM Informix Dynamic Server Version 11.50.FC3
Thanks,
prabhu
You could convert to float or decimal directly in your query using a cast:
select name_of_float::decimal(8,2) from table
or
select name_of_decimal::float from table
I want to have a SELECT-OPTIONS field in ABAP with the data type FLTP, which is basically a float. But this is not possible using SELECT-OPTIONS.
I tried to use PARAMETERS instead, which solved that issue. But now, of course, I get no results when using this parameter value in the WHERE clause of the SELECT.
So on the one side I can't use data type 'F', and on the other side I get no results. Is there any way out of this dilemma?
Checking floating-point values for exact equality is a bad idea. It works in some edge cases (like 0), but often it does not. The reason is that not every value the user can express in decimal notation can also be expressed as a floating-point value. So the values get rounded internally, and now you get inequality where you would expect equality. See the paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for more information on this phenomenon.
So offering a SELECT-OPTION or a single PARAMETER to SELECT floating point values out of a table might be a bad idea.
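The rounding problem is easy to demonstrate in any language; here is a quick Python sketch (Python only for illustration, the same applies to ABAP type f):

```python
# 0.1 and 0.2 have no exact binary floating-point representation,
# so the sum drifts away from the decimal result you would expect.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004, not 0.3
print(a == 0.3)     # False: exact equality fails

# Comparing against a range (or a tolerance) works reliably,
# which is exactly the from/to approach recommended below.
lo, hi = 0.3 - 1e-9, 0.3 + 1e-9
print(lo <= a <= hi)  # True
```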
What I would recommend instead is have the user state a range between two values with both fields obligatory:
PARAMETERS:
  p_from TYPE f OBLIGATORY,
  p_to   TYPE f OBLIGATORY.

SELECT somdata
  FROM table
  WHERE floatfield >= p_from AND floatfield <= p_to.
But another solution you might want to consider is whether float is really the appropriate data type for your situation. If the table is a Z-table, you might want to consider changing the type of that field to a packed number or one of the decfloat flavors, as those will cause you far fewer surprises.
I'm currently improving a PostgreSQL client library; the library already implements the communication protocol, including DataRow and RowDescription.
The problem I'm facing right now is how to deal with values.
Returning a plain string for an array of integers, for example, is kind of pointless.
From my research I found that other libraries (for Python, for instance) either return the value as an unmodified string or convert primitive types, including arrays.
What I mean by conversion is turning the raw Postgres DataRow data into native values: a Postgres integer is parsed as a Python number, a Postgres boolean as a Python boolean, etc.
Should I make a second query to get the column type information and use its converters, or should I leave the values as plain strings?
You could opt to get the array values in the internal format by setting the corresponding "result-column format code" in the Bind message to 1, but that is typically a bad choice, since the internal format varies from type to type and may even depend on the server's architecture.
So your best option is probably to parse the string representation of the array on the client side, including all the escape characters.
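A sketch of such a client-side parser in Python for one-dimensional arrays (the function name is made up; a production parser would also need to handle nested arrays and a configurable element delimiter):

```python
def parse_pg_array(text):
    """Parse the text form of a one-dimensional Postgres array,
    e.g. '{1,"a b",NULL}' -> ['1', 'a b', None].
    Elements are returned as strings; converting them to ints,
    bools, etc. is a separate per-type step."""
    if not (text.startswith('{') and text.endswith('}')):
        raise ValueError('not an array literal: %r' % text)
    body, items = text[1:-1], []
    if not body:
        return items
    buf, quoted, was_quoted = [], False, False
    i = 0
    while i <= len(body):
        c = body[i] if i < len(body) else ','   # sentinel separator
        if quoted:
            if c == '\\':                       # backslash escape
                buf.append(body[i + 1]); i += 2; continue
            elif c == '"':
                quoted = False                  # closing quote
            else:
                buf.append(c)
        elif c == '"':
            quoted = was_quoted = True
        elif c == ',':                          # element boundary
            s = ''.join(buf)
            # Only an *unquoted* NULL denotes an SQL NULL element.
            items.append(None if s == 'NULL' and not was_quoted else s)
            buf, was_quoted = [], False
        else:
            buf.append(c)
        i += 1
    return items
```

For example, `parse_pg_array('{1,"a b",NULL}')` yields `['1', 'a b', None]`; the caller then applies the element-type converter looked up via pg_type.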
When it comes to finding the base type for an array type, there is no other option than querying pg_type like
SELECT typelem::regtype FROM pg_type WHERE oid = 1007;
 typelem
---------
 integer
(1 row)
You could cache these values on the client side so that you don't have to query more than once per type and database session.
I have been implementing user-defined types in PostgreSQL 9.2 and got confused.
In the PostgreSQL 9.2 documentation, there is a section (35.11) on user-defined types. In the third paragraph of that section, the documentation refers to input and output functions that are used to construct a type. I am confused about the purpose of these functions. Are they concerned with the on-disk representation or only the in-memory representation? In that section, after defining the input and output functions, it states that:
If we want to do anything more with the type than merely store it,
we must provide additional functions to implement whatever operations
we'd like to have for the type.
Do the input and output functions deal with serialization?
As I understand it, the input function is the one used when performing an INSERT INTO, and the output function when performing a SELECT on the type. So basically, if we want to perform an INSERT INTO, do we need a serialization function embedded in or invoked by the input or output function? Can anyone help explain this to me?
Types must have a text representation, so that values of this type can be expressed as literals in a SQL query, and returned as results in output columns.
For example, '2013-01-20' is a text representation of a date. It's possible to write VALUES('2013-01-20'::date) in an SQL statement, because the input function of the date type recognizes this string as a date and transforms it into an internal representation (used both in memory and for storing to disk).
Conversely, when client code issues SELECT date_field FROM table, the values inside date_field are returned in their text representation, which is produced by the type's output function from the internal representation (unless the client requested a binary format for this column).
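The division of labor can be sketched outside of C. Here is a Python analogy for a complex-number type like the one in the manual's example (the names complex_in/complex_out and the literal format are made up; only the parse-text-in / render-text-out roles mirror PostgreSQL's input and output functions):

```python
# Hypothetical "complex" type: the input function parses a text
# literal into the internal representation, and the output function
# renders the internal representation back to text. PostgreSQL calls
# these whenever a value crosses the SQL text boundary.
def complex_in(text):                # e.g. '(1.5,2)'
    x, y = text.strip('()').split(',')
    return (float(x), float(y))      # internal representation

def complex_out(value):
    return '(%g,%g)' % value         # back to the text literal

v = complex_in('(1.5,2)')            # what INSERT/literals go through
print(complex_out(v))                # what SELECT results go through
```

Everything beyond this round trip (comparison, arithmetic, indexing) is what the quoted sentence means by "additional functions".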
I'm using Entity Framework and MS SQL Server 2008. It seems that Entity Framework always rounds off a decimal entity attribute to the nearest integer, which is really annoying. Have you seen this problem before, and what's your solution? Thanks!
I ran into the same problem. I am using EF with SQL Server 2008 SP1 and use stored procedures to query some data. Another annoying thing: if I create a function import for a stored procedure, EF is supposed to supply a function on the context class to call that sp, but for me it does not. So I created my own function to call the sp. In that function I create EntityParameters, supplying DbType.Decimal, but this results in Precision and Scale being set to 0. When I set them to the correct values my DB schema requires (3 and 1, by the way), everything is OK.
zsolt
Without your mapping classes and database schema, I can only assume that in one of them you're using an int, or something that's set to zero decimal places of accuracy.
It would also be worth mentioning which version of EF (Entity Framework) you're using.
The short answer is that it shouldn't happen.
Similar to what Timothy said, it's likely that one of the following is happening:
A property of one of your generated classes is defined as an int while the column is a decimal; you can fix this by changing the actual type in the entity designer
Somewhere in your code, you're working with ints rather than decimals. In particular, if you're using implicit typing with var, the compiler could be inferring an integral type rather than a decimal type
An example to clarify the second point:
var myNumber = 100; // myNumber will be an int
myNumber = myNumber / 3; // myNumber == 33 (an int)
Can you post some code to give us a better idea?
In C#, int / int = int:
10 / 2 = 5, but 10 / 3 = 3.
The language will truncate the answer (not round), so if you want an accurate result, you need to cast the operands to double or decimal.
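The same trap exists in most languages; a quick Python sketch (Python's `//` floors rather than truncating toward zero, but for positive operands it behaves like C#'s integer `/`):

```python
print(10 // 3)   # 3: integer division discards the fractional part
print(10 / 3)    # 3.333...: dividing as floats keeps it
# The C# fix is analogous: cast one operand, e.g. (decimal)10 / 3.
```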
I need to read floating-point decimals from an SQLite db.
I created the column in the database with type INTEGER, and I am reading it with sqlite3_column_int into NSInteger variables.
Which parts should I change so I can read floating-point values from the database?
Thanks for any answers; sorry, I'm a newbie on this topic.
There's no such thing as a floating-point integer; the two are mutually exclusive on a fundamental level (well, floating-point numbers can fall on integer boundaries, but you get the idea) :P.
The way to get floats out is the sqlite3_column_double(sqlite3_stmt *, int) function:
NSNumber *f = [NSNumber numberWithFloat:(float)sqlite3_column_double(stmt, col)];
Declare the fields as floating-point numbers. Per the SQLite datatype documentation, the database type is REAL: an 8-byte floating-point number, which corresponds to a C double in Objective-C (wrap it in an NSNumber if you need an object).
NSNumber *num;
double temp = sqlite3_column_double(walkstatement, 3);
num = [[NSNumber alloc] initWithDouble:temp];
NSLog(@"%@", num);
This code worked for me.
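The same round trip can be sketched with Python's built-in sqlite3 module (Python here only to keep the example self-contained and runnable; the Objective-C calls map one-to-one onto the same C API, with sqlite3_column_double corresponding to fetching the value as a Python float):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE walk (distance REAL)')    # REAL = 8-byte float
con.execute('INSERT INTO walk VALUES (?)', (3.75,))
# Fetching the REAL column yields a float, i.e. the full value,
# where an integer read would have truncated it to 3.
(value,) = con.execute('SELECT distance FROM walk').fetchone()
print(value)   # 3.75
```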
For additional information, SQLite 3 uses a dynamic type system, so anything beyond these storage classes:
NULL
INTEGER
REAL
TEXT
BLOB
is just a hint to SQLite (technically, affinity), and it will end up deciding on its own what type a value is or is not.
http://www.sqlite.org/datatype3.html
The functions that return doubles, ints, etc. are part of the SQLite C API, which tries its best to give you what you ask for... so what's stored in a column can potentially be converted by SQLite if what you're asking for is different from what's in the db.
http://www.sqlite.org/c3ref/column_blob.html
About midway down that page, there's a handy table that'll let you know what to expect.
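Affinity and on-demand conversion are easy to observe with Python's built-in sqlite3 module (the table and column names here are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (v INTEGER)')        # INTEGER affinity
con.execute('INSERT INTO t VALUES (?)', (3.7,))  # insert a float anyway
# SQLite keeps 3.7 as REAL, because it cannot be stored losslessly
# as an integer: the declared column type is only a hint.
row = con.execute('SELECT v, typeof(v) FROM t').fetchone()
print(row)   # (3.7, 'real')
```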