Right way to use data type 'F' in SELECT-OPTIONS?

I want to have a SELECT-OPTIONS field in ABAP with the data type FLTP, which is basically a float. But this is not possible using SELECT-OPTIONS.
I tried to use PARAMETERS instead, which solved that issue. But now, of course, I get no results when I use the parameter value in the WHERE clause of my SELECT.
So on the one hand I can't use data type 'F', and on the other hand I get no results. Is there any way out of this dilemma?

Checking floating point values for exact equality is a bad idea. It works in some edge-cases (like 0), but often it does not work. The reason is that not every value the user can express in decimal notation can also be expressed as a floating point value. So the values get rounded internally and now you get inequality where you would expect equality. Check the website "What Every Programmer Should Know About Floating-Point Arithmetic" for more information on this phenomenon.
So offering a SELECT-OPTION or a single PARAMETER to SELECT floating point values out of a table might be a bad idea.
What I would recommend instead is to have the user state a range between two values, with both fields obligatory:
PARAMETERS:
  p_from TYPE f OBLIGATORY,
  p_to   TYPE f OBLIGATORY.

SELECT somdata
  FROM table                     " placeholder table and field names
  WHERE floatfield >= @p_from AND floatfield <= @p_to
  INTO TABLE @DATA(lt_result).
But another solution you might want to consider is whether float is really the appropriate data type for your situation. If the table is a Z-table, you might want to change the type of that field to a packed number or one of the decfloat flavors, as those will cause you far fewer surprises.

Related

How does Snowflake calculate its HASH() output?

Take a look at this query
select
    hash( col1, col2 ) as a,
    col1 || col2 as b, -- just taking a guess as to how hash can take multiple values
    hash( b ) as c
from table_name;
The results for a and c are different.
So, my question is: how does Snowflake calculate the hash when there are multiple fields, as in a? Is it concatenating the fields first and then hashing that concatenated result?
Thank you
To expand on NickW's point that HASH is proprietary:
HASH is a proprietary function that accepts a variable number of input expressions of arbitrary types and returns a signed value. It is not a cryptographic hash function and should not be used as such.
I assume the core of what you are trying to achieve is to produce a value in another system and then be able to compare the two "safely". Concatenating strings together for that seems very dangerous, because the number of fields and the length of each string are properties of those strings that the concatenation throws away.
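To make that concrete, here is a small sketch (column values are made up; the exact HASH outputs are proprietary, but the comparisons illustrate the point):

select
    'ab' || 'c' as concat_1,                                     -- 'abc'
    'a' || 'bc' as concat_2,                                     -- also 'abc': the field boundaries are lost
    hash('ab' || 'c') = hash('a' || 'bc') as same_concat_hash,   -- true, the inputs are now identical strings
    hash('ab', 'c') = hash('a', 'bc') as same_multi_hash;        -- should be false: HASH sees two different argument lists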
The usage notes section has some good hints:
Any two values of type NUMBER that compare equally will hash to the same hash value, even if the respective types have different precision and/or scale.
This implies that NUMBER values are converted to a common form before hashing. But the notes also caution about conversion:
Note that this guarantee does not apply to other combinations of types, even if implicit conversions exist between the types.
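For example, the NUMBER guarantee quoted above can be checked directly (a sketch of what the note promises):

select hash(10::number(10,0)) = hash(10::number(38,5)) as same_number_hash;  -- true per the usage note
-- but no such guarantee exists across types (e.g. NUMBER vs. VARCHAR), even where implicit casts exist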
What would really help is for you to describe what you actually want to happen; then the question of whether "knowing how HASH works" is the best path to that end (I would suggest it is not) becomes answerable.
In other words, this answer is a long-form comment suggesting that the question needs to be reworked.

Why does PostgreSQL consider NULL boundaries in range types to be distinct from infinite boundaries?

Just to preface, I'm not asking what the difference is between a NULL boundary and an infinite boundary - that's covered in this other question. Rather, I'm asking why PostgreSQL makes a distinction between NULL and infinite boundaries when (as far as I can tell) they function exactly the same.
I started using PostgreSQL's range types recently, and I'm a bit confused by what NULL values in range types are supposed to mean. The documentation says:
The lower bound of a range can be omitted, meaning that all values less than the upper bound are included in the range, e.g., (,3]. Likewise, if the upper bound of the range is omitted, then all values greater than the lower bound are included in the range. If both lower and upper bounds are omitted, all values of the element type are considered to be in the range.
This suggests to me that omitted boundaries in a range (which are the equivalent NULL boundaries specified in a range type's constructor) should be considered infinite. However, PostgreSQL makes a distinction between NULL boundaries and infinite boundaries. The documentation continues:
You can think of these missing values [in a range] as +/-infinity, but they are special range type values and are considered to be beyond any range element type's +/-infinity values.
This is puzzling. "beyond infinity" doesn't make sense, as the entire point of infinite values is that nothing can be greater than +infinity or less than -infinity. That doesn't break "element in range"-type checks, but it does introduce an interesting case for primary keys that I think most people wouldn't expect. Or at least, I didn't expect it.
Suppose we create a basic table whose sole field is a daterange, which is also the PK:
CREATE TABLE public.range_test
(
id daterange NOT NULL,
PRIMARY KEY (id)
);
Then we can populate it with the following data with no problem:
INSERT INTO range_test VALUES (daterange('-infinity','2021-05-21','[]'));
INSERT INTO range_test VALUES (daterange(NULL,'2021-05-21','[]'));
Selecting all the data reveals we have these two tuples:
[-infinity,2021-05-22)
(,2021-05-22)
So the two tuples are distinct, or there would have been a primary key violation. But again, NULL boundaries and infinite boundaries work exactly the same when we're dealing with the actual elements that make up the range. For example, there is no date value X such that X <@ [-infinity,2021-05-22) returns a different result than X <@ (,2021-05-22). This makes sense because NULL values can't have a type of date, so they can't even be compared to the range (and PostgreSQL even converted the inclusive boundary on the lower NULL bound in daterange(NULL,'2021-05-21','[]') to an exclusive boundary, (,2021-05-22), to be doubly sure). But why are two ranges that are identical in every practical way considered distinct?
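For example, checking an arbitrary in-range date against both ranges (any other date behaves the same way):

SELECT '2021-05-20'::date <@ daterange('-infinity', '2021-05-22') AS in_infinity_range,
       '2021-05-20'::date <@ daterange(NULL, '2021-05-22')        AS in_null_range;
-- both columns come back true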
When I was still in school, I remember overhearing some discussion about the difference between "unknown" and "doesn't exist" - two people who were smarter than me were talking about that in the context of why NULL values often cause issues, and that replacing the singular NULL with separate "unknown" and "doesn't exist" values might solve those issues, but the discussion was over my head at the time. Thinking about this weird feature made me think of that discussion. So is the distinction between "unknown" and "doesn't exist" the reason why PostgreSQL treats NULL and +/-infinity as distinct? If so, why are ranges the only types that allow for that distinction in PostgreSQL? And if not, why does PostgreSQL treat functionally-equivalent values as distinct?
Rather, I'm asking why PostgreSQL makes a distinction between NULL and infinite boundaries when (as far as I can tell) they function exactly the same.
But they do not. NULL is a syntax convenience when used as the bound of a range, while -infinity / infinity are actual values in the domain of the range's element type. Abstract values meaning lesser / greater than any other value, but values nonetheless (which can be included or excluded).
Also, NULL works for any range type, while most data types don't have special values like -infinity / infinity. Take integer and int4range for example.
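A quick illustration with int4range (a sketch; the point is just that the element type has no infinity of its own):

SELECT int4range(NULL, 10);   -- fine: (,10), an omitted (unbounded) lower bound works for any range type
SELECT 'infinity'::int4;      -- fails: the integer domain has no infinity value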
For a better understanding, consider the thread in pgsql-general that a_horse provided:
https://www.postgresql.org/message-id/flat/OrigoEmail.bf5.ac6ff6ffeb116aec.13fc29939e0%40prod2#c9fabdc670211364636b733a79a04712
This makes sense because NULL values can't have a type of date, so they can't even be compared to the range
Every data type can be NULL, even domains that are explicitly NOT NULL. See:
Why does PostgreSQL allow NULLs in domains that prohibit NULL?
That includes date, of course (like Adrian commented):
test=> SELECT NULL::date, pg_typeof(NULL::date);
date | pg_typeof
------+-----------
| date
(1 row)
But trying to discuss NULL as a value (when used as the bound of a range) is a misleading approach to begin with. It's not a value.
... (and PostgreSQL even converted the inclusive boundary on the lower NULL bound in daterange(NULL,'2021-05-21','[]') to an exclusive boundary, (,2021-05-22) to be doubly sure).
Again, NULL is not treated as a value in the domain of the range. It just serves as convenient syntax to say "unbounded". No more than that.
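The distinction is also visible directly in the range accessor functions (a small sketch):

SELECT lower(daterange('-infinity', '2021-05-22'))      -- -infinity: an actual date value
     , lower(daterange(NULL, '2021-05-22'))             -- NULL: there simply is no lower bound
     , lower_inf(daterange('-infinity', '2021-05-22'))  -- false: a lower bound exists (it happens to be -infinity)
     , lower_inf(daterange(NULL, '2021-05-22'));        -- true: the range is unbounded below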

Is there a MAX_INT constant in Postgres?

In Java I can say Integer.MAX_VALUE to get the largest number that the int type can hold.
Is there a similar constant/function in Postgres? I'd like to avoid hard-coding the number.
Edit: the reason I am asking is this. There is a legacy table with an ID of type integer, backed by a sequence. There are a lot of rows coming into this table. I want to calculate how much time is left before the integer runs out, so I need to know "how many IDs are left" divided by "how fast we are spending them".
There's no constant for this, but I think it's more reasonable to hard-code the number in Postgres than it is in Java.
In Java, the philosophical goal is for Integer to be an abstract value, so it makes sense that you'd want to behave as if you don't know what the max value is.
In Postgres, you're much closer to the bare metal and the definition of the integer type is that it is a 4-byte signed integer.
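If you do end up hard-coding it, the 4-byte signed range is -2147483648 to 2147483647, which is easy to sanity-check:

select 2147483647::int;   -- the largest value integer can hold
select 2147483648::int;   -- fails with "integer out of range"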
There is a legacy table with an ID of type integer, backed by a sequence.
In that case, you can get the max value of the sequence by:
select seqmax from pg_sequence where seqrelid = 'your_sequence_name'::regclass;
This might be better than using MAX_INT, because the sequence may have been created or altered with a specific max value that is different from MAX_INT.
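Putting that together for the stated goal ("how many IDs are left"), something along these lines should work; 'your_sequence_name' is a placeholder, and the consumption rate still has to be measured separately (e.g. by sampling last_value over time):

select (select seqmax
        from pg_sequence
        where seqrelid = 'your_sequence_name'::regclass)
     - (select last_value from your_sequence_name) as ids_remaining;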

Is there any way for Access 2016 to sort the numbers that are part of a "text" data type formatted field as though they are numeric values?

I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Due to the way the weighing of product takes place at more than one facility, I have no other option but to keep the same base number and append letters to it to denote split portions of each lot of product. The problem is, after I create record number 99, the number 100 suddenly sorts up underneath 10. This makes it difficult to maintain consistency and forces me to replace this alphanumeric lot ID with a strictly numeric value (using "AutoNumber" as the data type) in order to keep it sorted. Either way, I need the alphanumeric lot ID, and having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source, then you may try to sort it by the string converted to a number, something like
SELECT id, field1, field2, ..
FROM YourTable
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng - it should not fail on non-numeric input.
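For example (table and field names are placeholders), sorting by the numeric prefix first and the full text second keeps the lettered split lots grouped under their base number:

SELECT LotID
FROM Lots
ORDER BY Val(LotID), LotID;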
Why not properly format your key before saving? E.g. "0000099". You will avoid a costly conversion later.
Alternatively, you could use 2 fields as the composite PK. One with the Number (as Long) and one with the Location (as String).

How do I determine if a Field in Salesforce.com stores integers?

I'm writing an integration between Salesforce.com and another service, and I've hit a problem with integer fields. In Salesforce.com I've defined a field of type "Number" with "Decimal Places" set to "0". In the other service, it is stored definitively as an integer. These two fields are supposed to store the same integral numeric values.
The problem arises once I store a value in the Salesforce.com variant of this field. Salesforce.com will return that same value from its Query() and QueryAll() operations with an amount of precision incorrectly appended.
As an example, if I insert the value "827" for this field in Salesforce.com, when I extract that number from Salesforce.com later, it will say the value is "827.0".
I don't want to hard-code my integration to remove these decimal values from specific fields. That is not maintainable. I want it to be smart enough to remove the decimal values from all integer fields before the rest of the integration code runs. Using the Salesforce.com SOAP API, how would I accomplish this?
I assume this will have something to do with DescribeSObject()'s "Field" property, where I can scan the metadata, but I don't see a way to extract the number of decimal places from the DescribeSObjectResult.
Ah ha! The number of decimal places is in a property called Scale on the Field object. You know you have an integer field if that is equal to 0.
Technically, sObject fields aren't integers, even if the "Decimal Places" property is set to 0. They are always Decimals with varying scale properties. This is important to remember in Apex because the methods available for Decimals aren't the same as those for Integers, and there are other potential type conversion issues (not always, but in some contexts).