Firebird computed (calculated) field on server side

Newbie in SQL and development in general here. I have a table (COUNTRIES) with the fields (INDEX, NAME, POPULATION, AREA).
Usually I add a client-side (Delphi) calculated field (DENSITY) and compute it in OnCalcFields:
COUNTRIES.DENSITY=COUNTRIES.POPULATION / COUNTRIES.AREA
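Such a handler typically looks roughly like this (a minimal sketch for context; the form name and the dataset wiring are illustrative, not from the original question):

procedure TForm1.COUNTRIESCalcFields(DataSet: TDataSet);
begin
  // Client-side calculation: runs for every record the dataset fetches
  DataSet.FieldByName('DENSITY').AsFloat :=
    DataSet.FieldByName('POPULATION').AsFloat /
    DataSet.FieldByName('AREA').AsFloat;
end;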
Trying to switch to a Firebird computed field so that all the calculation is done on the server side, I created a field named DENSITY and entered this in the IBExpert "Computed Source" column:
ADD DENSITY COMPUTED BY ((((COUNTRIES.POPULATION/COUNTRIES.AREA))))
Everything works fine, but when a record's AREA = 0 I get a "Division by zero" error.
My question is how to avoid this, for example with an IF/THEN condition that skips the calculation when the divisor is 0, or that simply makes the result 0 in that case.
My environment:
Delphi RIO, Firebird 3.0, IBExpert

You can use IIF(). When the first parameter is TRUE, IIF returns the value of the second parameter, otherwise the value of the third.
ADD DENSITY COMPUTED BY (IIF(COUNTRIES.AREA = 0, 0, COUNTRIES.POPULATION / COUNTRIES.AREA))
(note I also removed some extra parentheses)
When handling division by zero, I recommend returning NULL (instead of zero), using NULLIF (a built-in function that returns NULL when both input parameters are equal):
ADD DENSITY COMPUTED BY (COUNTRIES.POPULATION / nullif(COUNTRIES.AREA, 0))
That is: when COUNTRIES.AREA = 0, the whole division results in NULL, too.
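Put together, the complete statement would be something like this (a sketch following the table and column names from the question):

ALTER TABLE COUNTRIES
  ADD DENSITY COMPUTED BY (COUNTRIES.POPULATION / NULLIF(COUNTRIES.AREA, 0));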

Related

Setting Database Type in Anylogic

I have a product in my model that goes through a series of processing steps. I assign the processing times to each product type through an Excel database which is loaded into the model, with each processing time in a separate column (e.g. Product 1: 2.3, 4.8, 9 --> meaning that it takes 2.3 time units for process 1, 4.8 time units for process 2, and so on).
Currently, I store all the processing times in a List(Double) inside my product (e.g. v_ProcessTime = [2.3, 4.8, 9]). However, I get an error when a column contains only integers instead of double values (the column type is then recognised as integer and AnyLogic complains that it can't write an integer to a double list). The only workaround so far is to change the column type of the database to Double manually.
Is it possible to use Java code to change the value type of the database columns, or is there any other way around this issue?
Unfortunately, how the database recognizes the type is out of your control. If you can't change the source itself (so that each column contains at least one value that is not an integer), then your only choice is to change the database column type manually.
Nevertheless, to use the values in a list, you can simply cast an integer to a double like this:
(double)intVariable
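For example, a small standalone sketch of filling such a list (plain Java; the variable names are illustrative, and nothing here is AnyLogic-specific API):

import java.util.ArrayList;
import java.util.List;

public class CastDemo {
    public static void main(String[] args) {
        List<Double> v_ProcessTime = new ArrayList<>();

        int timeFromIntColumn = 9;          // value read from an integer-typed column
        double timeFromDoubleColumn = 2.3;  // value read from a double-typed column

        // The widening cast turns the int into a double, so it fits the List<Double>
        v_ProcessTime.add((double) timeFromIntColumn);
        v_ProcessTime.add(timeFromDoubleColumn);

        System.out.println(v_ProcessTime);  // prints [9.0, 2.3]
    }
}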

Right way to use data type 'F' in SELECT-OPTIONS?

I want to have a SELECT-OPTIONS field in ABAP with the data type FLTP, which is basically a float. But this is not possible using SELECT-OPTIONS.
I tried to use PARAMETERS instead, which solved this issue. But now, of course, I get no results when I use this parameter value in the WHERE clause of my SELECT.
So on the one side I can't use data type 'F', but on the other side I get no results. Is there any way out of this dilemma?
Checking floating point values for exact equality is a bad idea. It works in some edge cases (like 0), but often it does not. The reason is that not every value the user can express in decimal notation can also be represented as a floating point value, so the values get rounded internally and you end up with inequality where you would expect equality. See "What Every Programmer Should Know About Floating-Point Arithmetic" for more information on this phenomenon.
So offering a SELECT-OPTION or a single PARAMETER to SELECT floating point values out of a table might be a bad idea.
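A small illustration of the rounding problem (not from the original answer; an ABAP fragment assuming a release that supports inline DATA(...) declarations):

DATA(lv_sum) = CONV f( '0.1' ) + CONV f( '0.2' ).

IF lv_sum = CONV f( '0.3' ).
  WRITE / 'equal'.
ELSE.
  " This branch is taken: 0.1 and 0.2 have no exact binary representation,
  " so the sum differs from 0.3 by a tiny rounding error.
  WRITE / 'not equal'.
ENDIF.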
What I would recommend instead is have the user state a range between two values with both fields obligatory:
PARAMETERS:
  p_from TYPE f OBLIGATORY,
  p_to   TYPE f OBLIGATORY.

SELECT somdata
  FROM table
  WHERE floatfield >= @p_from AND floatfield <= @p_to
  INTO TABLE @DATA(lt_results).
But another solution you might want to consider is whether float is really the appropriate data type for your situation. If the table is a Z-table, you might want to consider changing the type of that field to a packed number or one of the decfloat flavors, as those will cause you far fewer surprises.
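For illustration, with a packed field a SELECT-OPTIONS works directly (a sketch; the table zmytab, its field zvalue and the local names are hypothetical):

DATA gv_value TYPE p LENGTH 8 DECIMALS 2.

SELECT-OPTIONS s_value FOR gv_value.

SELECT *
  FROM zmytab
  WHERE zvalue IN @s_value
  INTO TABLE @DATA(lt_results).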

Why does PostgreSQL consider NULL boundaries in range types to be distinct from infinite boundaries?

Just to preface, I'm not asking what the difference is between a NULL boundary and an infinite boundary - that's covered in this other question. Rather, I'm asking why PostgreSQL makes a distinction between NULL and infinite boundaries when (as far as I can tell) they function exactly the same.
I started using PostgreSQL's range types recently, and I'm a bit confused by what NULL values in range types are supposed to mean. The documentation says:
The lower bound of a range can be omitted, meaning that all values less than the upper bound are included in the range, e.g., (,3]. Likewise, if the upper bound of the range is omitted, then all values greater than the lower bound are included in the range. If both lower and upper bounds are omitted, all values of the element type are considered to be in the range.
This suggests to me that omitted boundaries in a range (which are the equivalent NULL boundaries specified in a range type's constructor) should be considered infinite. However, PostgreSQL makes a distinction between NULL boundaries and infinite boundaries. The documentation continues:
You can think of these missing values [in a range] as +/-infinity, but they are special range type values and are considered to be beyond any range element type's +/-infinity values.
This is puzzling. "beyond infinity" doesn't make sense, as the entire point of infinite values is that nothing can be greater than +infinity or less than -infinity. That doesn't break "element in range"-type checks, but it does introduce an interesting case for primary keys that I think most people wouldn't expect. Or at least, I didn't expect it.
Suppose we create a basic table whose sole field is a daterange, which is also the PK:
CREATE TABLE public.range_test
(
id daterange NOT NULL,
PRIMARY KEY (id)
);
Then we can populate it with the following data with no problem:
INSERT INTO range_test VALUES (daterange('-infinity','2021-05-21','[]'));
INSERT INTO range_test VALUES (daterange(NULL,'2021-05-21','[]'));
Selecting all the data reveals we have these two tuples:
[-infinity,2021-05-22)
(,2021-05-22)
So the two tuples are distinct, or there would have been a primary key violation. But again, NULL boundaries and infinite boundaries work exactly the same when we're dealing with the actual elements that make up the range. For example, there is no date value X for which X <@ '[-infinity,2021-05-22)'::daterange returns a different result than X <@ '(,2021-05-22)'::daterange. This makes sense because NULL values can't have a type of date, so they can't even be compared to the range (and PostgreSQL even converted the inclusive bracket on the lower NULL bound in daterange(NULL,'2021-05-21','[]') to an exclusive one, giving (,2021-05-22), to be doubly sure). But why are two ranges that are identical in every practical way considered distinct?
When I was still in school, I remember overhearing some discussion about the difference between "unknown" and "doesn't exist" - two people who were smarter than me were talking about that in the context of why NULL values often cause issues, and that replacing the singular NULL with separate "unknown" and "doesn't exist" values might solve those issues, but the discussion was over my head at the time. Thinking about this weird feature made me think of that discussion. So is the distinction between "unknown" and "doesn't exist" the reason why PostgreSQL treats NULL and +-infinity as distinct? If so, why are ranges the only types that allow for that distinction in PostgreSQL? And if not, why does PostgreSQL treat functionally-equivalent values as distinct?
Rather, I'm asking why PostgreSQL makes a distinction between NULL and infinite boundaries when (as far as I can tell) they function exactly the same.
But they do not. NULL is a syntactic convenience when used as the bound of a range, while -infinity / infinity are actual values in the domain of the range: abstract values meaning "less than / greater than any other value", but values nonetheless (which can be included or excluded).
Also, NULL works for any range type, while most data types don't have special values like -infinity / infinity. Take integer and int4range for example.
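For illustration (not from the original answer):

-- An omitted / NULL bound works for any range type ...
SELECT int4range(NULL, 5);        -- (,5)
-- ... but the integer element type has no infinity value:
SELECT 'infinity'::integer;       -- ERROR: invalid input syntax for type integer
-- date, on the other hand, does have one:
SELECT 'infinity'::date;          -- infinity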
For a better understanding, consider the thread in pgsql-general that a_horse provided:
https://www.postgresql.org/message-id/flat/OrigoEmail.bf5.ac6ff6ffeb116aec.13fc29939e0%40prod2#c9fabdc670211364636b733a79a04712
This makes sense because NULL values can't have a type of date, so they can't even be compared to the range
Every data type can be NULL, even domains that are explicitly NOT NULL. See:
Why does PostgreSQL allow NULLs in domains that prohibit NULL?
That includes date, of course (like Adrian commented):
test=> SELECT NULL::date, pg_typeof(NULL::date);
date | pg_typeof
------+-----------
| date
(1 row)
But trying to discuss NULL as a value (when used as the bound of a range) is a misleading approach to begin with. It's not a value.
... (and PostgreSQL even converted the inclusive boundary on the lower NULL bound in daterange(NULL,'2021-05-21','[]') to an exclusive boundary, (,2021-05-22) to be doubly sure).
Again, NULL is not treated as a value in the domain of the range. It just serves as convenient syntax to say "unbounded". No more than that.
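The range accessor functions make the difference visible (an illustration, not from the original answer):

SELECT daterange('-infinity', '2021-05-22') = daterange(NULL, '2021-05-22');  -- false: distinct ranges
SELECT lower(daterange('-infinity', '2021-05-22'));      -- -infinity (an actual date value)
SELECT lower(daterange(NULL, '2021-05-22'));             -- NULL (no lower bound at all)
SELECT lower_inf(daterange('-infinity', '2021-05-22'));  -- false: the range is bounded, by the value -infinity
SELECT lower_inf(daterange(NULL, '2021-05-22'));         -- true: the range is unbounded below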

Creating Calculated Fields in Google Datastudio

I would like to create categories based on a count of a variable.
CASE
WHEN COUNT(variable) = 1 THEN "1"
WHEN COUNT(variable) = 2 THEN "2"
WHEN COUNT(variable) = 3 THEN "3"
WHEN COUNT(variable) = 4 THEN "4"
WHEN COUNT(variable) >= 5 THEN ">5"
END
I get an error that says that my formula is not valid. However, I cannot see where the mistake is and Google does not offer help in this regard.
This takes a little getting used to in Data Studio, but you can't use all functions inside of a CASE statement (as noted in the documentation).
Here's how you can work around this limitation:
Create a new calculated field with the value of COUNT(variable)
Set the new field's aggregation type to Sum in the field list
Then create your CASE statement formula referencing that new field
If you don't want this extra field showing up in reports, you can disable it in the data source (it can still be used by your other formula).
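For example (the field names here are only illustrative): if the field created in step 1 is called variable_count with the formula COUNT(variable) and its aggregation set as described, the CASE formula from the question becomes:

CASE
WHEN variable_count = 1 THEN "1"
WHEN variable_count = 2 THEN "2"
WHEN variable_count = 3 THEN "3"
WHEN variable_count = 4 THEN "4"
WHEN variable_count >= 5 THEN ">5"
END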
Also note that the input of COUNT itself cannot be an aggregate value (e.g. result of SUM or a metric with the aggregation type set).
This is an incredibly frustrating bit of Data Studio, as you end up with a lot of these fields floating around and it adds an extra step. The unhelpful error message definitely doesn't help either.

Functions are not appearing while using TFilterRow in Talend

I am using a tFilterRow to filter out empty rows. While trying to use it, I only get one function value, 'absolute value'.
I want to filter values with a length greater than 0.
Why am I not getting any other functions?
As mentioned in the comments, the length function is only available to schema columns that have the String data type.
To filter out any rows that have a null value in a column, you can use a tFilterRow configured so that the column being checked is not equal to null.
In case you are dealing with the primitive int (rather than the Integer class), the primitive can never be null and instead defaults to 0, so you'll want to set the condition to "not equal to 0" instead.
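If you also need the "length greater than 0" check from the question, one option is to switch tFilterRow to advanced mode and express the condition in Java yourself (a sketch; myColumn stands for a hypothetical String column of the input schema):

input_row.myColumn != null && input_row.myColumn.length() > 0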