PostgreSQL division by zero when ordering

I need to execute this query in Postgres, but I can't get rid of this error:
ERROR: division by zero
SQL state: 22012
Here is the query:
select id, rates_sum, rates_count from tbl_node order by rates_sum/rates_count DESC;
I know I can add a small value to rates_count, but then I get inaccurate values.
Is there a way to make Postgres ignore this error, or to use an if-style check that replaces zeros with some other number? Again, the error comes from the ORDER BY clause.
Thanks

Use NULLIF, which turns a zero divisor into NULL instead of raising an error:
SELECT
    id,
    rates_sum,
    rates_count
FROM
    tbl_node
ORDER BY
    rates_sum / NULLIF(rates_count, 0) DESC NULLS FIRST;
You could also use NULLS LAST if you prefer.
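NULLIF(a, b) returns NULL when a equals b, so a zero rates_count makes the whole expression NULL rather than failing, and those rows sort together. A quick sketch of the behavior:
select 10 / nullif(0, 0);  -- NULL, no "division by zero" error
select 10 / nullif(5, 0);  -- 2, non-zero divisors pass through unchanged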

How about adding WHERE rates_count != 0?
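That works if you are happy to drop the zero-count rows entirely instead of sorting them to one end. A minimal sketch against the same table:
select id, rates_sum, rates_count
from tbl_node
where rates_count <> 0
order by rates_sum / rates_count desc;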

Related

How to set start of sequence using expression [duplicate]

I want to add a sequence to a column that might already have data, so I'm trying to start it beyond whatever's already there. Assuming there already is data, I would like to have done it this way:
CREATE SEQUENCE my_sequence MINVALUE 1000000 START
(SELECT MAX(id_column) FROM my_table) OWNED BY my_table.id_column;
but it keeps dying at the ( with a syntax error. It seems the start value has to be a hard-coded number, nothing symbolic.
Of course, an even better solution would be if the sequence could be intelligent enough to avoid duplicate values, since id_column has a unique constraint on it--that's why I'm doing this. But from what I can tell, that's not possible.
I also tried skipping the START and then doing:
ALTER SEQUENCE my_sequence RESTART WITH (SELECT max(id_column)+1 FROM my_table);
but, again, it doesn't seem to like symbolic start values.
I'm running PostgreSQL 9.4 but some of our customers are using stuff as primitive as 8.3.
You can't specify a dynamic value for the start value.
But you can set the value once the sequence is created:
CREATE SEQUENCE my_sequence MINVALUE 1000000 OWNED BY my_table.id_column;
select setval('my_sequence', (SELECT MAX(id_column) FROM my_table));
You can also reset the sequence on demand:
select setval('my_sequence', (SELECT MAX(id_column) FROM my_table));
This works on Postgres 9.2 as well.
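Note that setval with its default third argument (is_called = true) marks the supplied value as already used, so the next nextval call returns MAX(id_column) + 1, which is what you want before inserting new rows:
select setval('my_sequence', (SELECT MAX(id_column) FROM my_table));
select nextval('my_sequence');  -- returns MAX(id_column) + 1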
I was struggling with a slight variation on the use case in the accepted answer and getting an error telling me that setval did not exist, so I thought I'd share in case others hit the same thing.
I needed to set the value of my sequence to the max value in an id column, but I also wanted to combine that with a default starting value if there were no rows.
To do this I used coalesce combined with max like this:
select setval('sequence', cast((select coalesce(max(id),1) from table) as bigint));
The catch here was using cast with the select; without it, you get an error something like:
ERROR: function setval(unknown, double precision) does not exist
LINE 1: select setval('sequence', (select coalesce(MAX(i...
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
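Any explicit cast to bigint works; Postgres's :: shorthand is an equivalent notation for the same query:
select setval('sequence', (select coalesce(max(id), 1) from table)::bigint);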

PostgreSQL: filter by columns that may not exist

I'm using PostgreSQL with a Go driver. Sometimes I need to query columns that may not exist, just to check whether something is in the DB. Before querying, I can't tell whether a given column exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist. However, the size column might exist, and then I could get some results.
Is it possible to handle such cases and return whatever is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be simple like the previous example and can be much more complex like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, remove any part that references a missing column, and finally generate a new SQL query?
Any easier way?
I would check the information schema first:
select column_name from information_schema.columns where table_name = 'table_name';
and then build the query based on the result.
Why don't you get a list of columns that are in the table first? Like this
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except to construct the SQL string from the list of available columns, which you can get by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation and no short-circuiting, so you get an error as soon as a non-existent column is referenced.
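A minimal sketch of that approach in PL/pgSQL, using the simple size/length example from the question (the table name tbl is a placeholder, and only the length column is treated as optional):
DO $$
DECLARE
    has_length boolean;
    matches    bigint;
BEGIN
    -- check the catalog for the optional column
    SELECT EXISTS (
        SELECT 1
        FROM information_schema.columns
        WHERE table_name = 'tbl' AND column_name = 'length'
    ) INTO has_length;

    -- build and run the query with only the predicates whose columns exist
    IF has_length THEN
        EXECUTE 'SELECT count(*) FROM tbl WHERE size = 10 OR length = 10' INTO matches;
    ELSE
        EXECUTE 'SELECT count(*) FROM tbl WHERE size = 10' INTO matches;
    END IF;

    RAISE NOTICE 'matching rows: %', matches;
END $$;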

Out-of-range value when trying to filter on converted datetime value

I'm trying to pull data for certain dates out of a staging table where the offshore developers imported everything in the file, so I need to filter out the "non-data" rows and convert the remaining strings to datetime.
Which should be simple enough but... I keep getting this error:
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
I've taken the query and pulled it apart, made sure there are no invalid strings left and even tried a few different configurations of the query. Here's what I've got now:
SELECT *
FROM
(
select cdt = CAST(cmplt_date as DateTime), *
from stage_hist
WHERE cmplt_date NOT LIKE '(%'
AND ltrim(rtrim(cmplt_date)) NOT LIKE ''
AND cmplt_date NOT LIKE '--%'
) f
WHERE f.cdt BETWEEN '2017-09-01' AND '2017-10-01'
To make sure the conversion is working at least, I can run the inner query and the cast actually works for all rows. I get a valid data set for the rows and no errors, so the actual cast is working.
The BETWEEN statement must be throwing the error then, right? But I've cast both strings I use there successfully, and I've even taken a value out of the table and run a test query with it, which also works successfully:
select 1
WHERE CAST(' 2017-09-26' as DateTime) BETWEEN '2017-09-01' AND '2017-10-01'
So if all the casts work individually, how come I'm getting an out-of-range error when running the real query?
I am guessing this is because your cmplt_date field contains values that are not valid dates. Yes, I know you are filtering them with a WHERE clause, but the logical processing order of a SELECT statement is not always the actual execution order. That means the SQL engine may start performing your CAST before it has finished the filtering.
You are using SQL Server 2012, so you can just add TRY_CAST:
SELECT *
FROM
(
select cdt = TRY_CAST(cmplt_date as DateTime), *
from stage_hist
WHERE cmplt_date NOT LIKE '(%'
AND ltrim(rtrim(cmplt_date)) NOT LIKE ''
AND cmplt_date NOT LIKE '--%'
) f
WHERE f.cdt BETWEEN '2017-09-01' AND '2017-10-01'
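TRY_CAST returns NULL instead of raising an error when the conversion fails, and NULL never satisfies BETWEEN, so the unparseable rows drop out no matter when the engine evaluates the cast. For example:
select try_cast('garbage'    as datetime);  -- NULL
select try_cast('2017-09-26' as datetime);  -- 2017-09-26 00:00:00.000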

DB2 - If table is empty for date X, insert, else go on

--DB2 version 10 on AIX
I have a stored procedure that I need to update. I want to check whether there is data for a certain date; if data exists, go on, otherwise run the insert first and then go on.
IF (SELECT COUNT(*)
FROM SCHEMA1.TABLE1_STEP1
WHERE XDATE = '9/27/2014' < 1)
THEN (INSERT INTO SCHEMA1.TABLE1_STEP1 (SELECT * FROM SCHEMA2.TABLE2 FETCH FIRST 2 ROWS ONLY))
END IF;
This errors out.
DB2 Database Error: ERROR [42601] [IBM][DB2/AIX64] SQL0104N An unexpected token "(" was found following "/2014') < 1) THEN". Expected tokens may include: "". SQLSTATE=42601
Any thoughts on what's wrong?
I'm guessing you want the less-than sign outside the parentheses, i.e. IF (SELECT COUNT(*) FROM SCHEMA1.TABLE1_STEP1 WHERE XDATE = '9/27/2014') < 1 THEN ...
However, as an aside, you can also do this kind of statement without an IF (I don't have an AIX DB2 available to check for sure, but it worked on DB2 for z/OS and LUW):
INSERT INTO SCHEMA1.TABLE1_STEP1
SELECT *
FROM SCHEMA2.TABLE2
WHERE NOT EXISTS (
SELECT *
FROM SCHEMA1.TABLE1_STEP1
WHERE XDATE = '9/27/2014'
)
FETCH FIRST 2 ROWS ONLY
Also, you're not providing an ORDER BY on the SCHEMA2.TABLE2 select, so the two rows could come back in any order (whatever is "easiest" for the database engine); order is not guaranteed unless you supply one.
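The same statement with an explicit ordering, assuming a hypothetical SOME_COL column to sort by (and that your DB2 version accepts ORDER BY in the insert's fullselect):
INSERT INTO SCHEMA1.TABLE1_STEP1
SELECT *
FROM SCHEMA2.TABLE2
WHERE NOT EXISTS (
    SELECT *
    FROM SCHEMA1.TABLE1_STEP1
    WHERE XDATE = '9/27/2014'
)
ORDER BY SOME_COL
FETCH FIRST 2 ROWS ONLY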

Yesod persistent-postgresql rawSql queries with column-list wildcards producing syntax errors

I'm using the persistent-postgresql library from Yesod, and I'd like to execute the following raw query:
SELECT * FROM utterance WHERE is_target IS NULL ORDER BY RANDOM() LIMIT 1000
to select 1000 random utterances with a NULL is_target. However, persistent generates the following SQL when I run my code through rawSql:
SELECT * FROM utterance WHERE is_target IS NULL ORDER BY RANDOM() LIMIT 1000"utterance"."id", "utterance"."message", "utterance"."is_target"
This generates an error in postgresql of syntax error at or near ""utterance"" at character 77.
What am I doing wrong?
I fixed this by using the following query instead:
SELECT ?? FROM utterance WHERE is_target IS NULL ORDER BY RANDOM() LIMIT 1000
rawSql doesn't work with column wildcards because it enforces strong typing on the returned data, so it tries (and fails) to figure out where to put the column names. You need to explicitly list the column names OR use some number of "??" placeholders in the statement and bind them at runtime, like:
\(Entity myType utterance, ....) -> do ....
If you don't want strong typing, you probably also don't want to use Persistent; that's the entire reason it exists.