I know this has to be an easy one, but I have been searching and cannot figure out why my logic is wrong. I have a select statement like the one below.
SELECT * from MyTable where Column1 between 1 and 5
    or Column1 between 10 and 15
    or Column2 in (1,2,3)
So I need values based on two ranges in Column1 and a list in Column2.
It is returning the correct rows for my ranges, but I am also getting extra rows based on my list alone. I know it has to be my AND/OR logic, but I cannot get it to work.
SELECT * from MyTable where (Column1 between 1 and 5)
    or (Column1 between 10 and 15)
    or Column2 in (1,2,3)
I guess sample data and desired results would have been good information. I ended up finding my own answer through trial and error. Thank you for looking. I needed to AND my Column2 list with each of my ranges.
SELECT * from MyTable
where Column1 between 1 and 5 and Column2 in (1,2,3)
   or Column1 between 10 and 15 and Column2 in (1,2,3)
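For what it's worth, because AND binds more tightly than OR, an equivalent and arguably clearer way to write this (assuming the intent is "either range, and always the Column2 list") is to factor out the shared IN condition:

-- Equivalent form: both ranges share the Column2 filter
SELECT *
FROM MyTable
WHERE (Column1 BETWEEN 1 AND 5 OR Column1 BETWEEN 10 AND 15)
  AND Column2 IN (1, 2, 3);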
I have a table that looks like this:
colA colB
1 <null>
2 <null>
3 <null>
colB is the new, empty column I added to the table. colA is varchar and colB is double precision (float).
I want to update colB with colA multiplied by 2.
New table should look like this:
colA colB
1 2
2 4
3 6
When I go to update colB like so:
update tablename set colB = colA * 2
I get this error:
Invalid operation: Invalid input syntax for type numeric
I've tried to work around this with solutions like this:
update tablename set colB = COALESCE(colA::numeric::text,'') * 2
but get the same error.
In a select statement on the same table, this works on colA which is varchar:
select colA * 2 from tablename
How can I update a column using mathematical operations on a reference column of a different data type? I can't change the data type of colA.
I suppose that Laurenz Albe is correct and there are non-numeric values in col_a, so the UPDATE must be guarded. (Note the difference: the CASE form sets col_b to NULL for non-numeric rows, while the WHERE form leaves them untouched.)
UPDATE T
SET col_b =
    CASE
        WHEN col_a ~ '^([0-9]+\.?[0-9]*|\.[0-9]+)$' THEN col_a::numeric * 2
    END;
-- or this way
UPDATE T
SET col_b = col_a::numeric * 2
WHERE col_a ~ '^([0-9]+\.?[0-9]*|\.[0-9]+)$';
Look at fiddle: https://www.db-fiddle.com/f/4wFynf9WiEuiE499XMcsCT/1
Recipes for an "isnumeric" check can be found here: isnumeric() with PostgreSQL
There is a value in the string column that is not a valid number. You will have to fix the data or exclude certain rows with a WHERE condition.
If you say that running the query from your client works, that leads me to suspect that your client doesn't actually execute the whole query, but slaps a LIMIT on it (some client tools do that).
The following query will have to process all rows and should fail:
SELECT colA * 2 AS double
FROM tablename
ORDER BY double;
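To see which rows are the problem before deciding whether to fix or exclude them, a query along these lines should work (a sketch reusing the numeric-pattern regex from the answer above):

-- List the rows whose colA is not a plain numeric literal
SELECT colA
FROM tablename
WHERE colA !~ '^([0-9]+\.?[0-9]*|\.[0-9]+)$';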
If every value in colA really is numeric, an explicit cast is all that's needed:
update tablename set colB = colA::numeric * 2
PostgreSQL version 9.4
I have a table with an integer column, which has a number of integers with some gaps, like the sample below; I'm trying to get an existing id from the column at random with the following query, but it returns NULL occasionally:
CREATE TABLE IF NOT EXISTS test_tbl (
    id INTEGER
);
INSERT INTO test_tbl
VALUES (10),
(13),
(14),
(16),
(18),
(20);
-------------------------------
SELECT * FROM test_tbl;
-------------------------------
SELECT COALESCE(tmp.id, 20) AS classification_id
FROM (
SELECT tt.id,
row_number() over(
ORDER BY tt.id) AS row_num
FROM test_tbl tt
) tmp
WHERE tmp.row_num = floor(random() * 10);
Please let me know where I'm going wrong.
but it returns NULL occasionally
And I must add that it sometimes returns more than one row, right?
In your sample data there are 6 rows, so the column row_num will have a value from 1 to 6.
This:
floor(random() * 10)
produces a random integer from 0 to 9, because random() returns a value from 0 up to 0.9999..., so most of its results can never match a row_num between 1 and 6.
You should use:
floor(random() * 6 + 1)::int
to get a random integer from 1 to 6.
But even this would not solve the problem: the WHERE clause is evaluated once for each row, with a fresh random() value each time, so the generated number may never match any row_num (returning nothing) or may match more than once (returning several rows).
See the demo.
The proper (although sometimes not the most efficient) way to get a random row is:
SELECT id FROM test_tbl ORDER BY random() LIMIT 1
Also check other links from SO, like:
quick random row selection in Postgres
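If you want a positional lookup that avoids sorting the whole table, one alternative sketched from that linked question is to compute a random offset once (the OFFSET expression is evaluated once per query, not per row):

-- 6 is the row count of the sample data; in general, use a count(*) subquery
SELECT id
FROM test_tbl
OFFSET floor(random() * 6)::int
LIMIT 1;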
You could select one row and order by random(); this way you are sure to hit an existing row:
select id
from test_tbl
order by random()
LIMIT 1;
Suppose I have two tables like this:
Id  date1  date2  status  code
1   ...    ...    1       AB110
2   ...    ...    2       AB001
3   ...    ...    1       AB120
4   ...    ...    1       AB111
And table2:
Code   Name  Display
AB110  Abc   Y
AB001  Xyz   Y
I want something like this:
withdate  type1  type2  code   name
2         1      1      AB110  Abc
1         2      3      AB001  Xyz
3         1      2      AB120  Lol
1         1      5      AB111  Zzz
Select code,
table2.name,
count(case when date1 is not null then id) as withdate,
count(case when status=1 then id) as type1,
count(case when status=2 then id) as type2
from table, table2
where table.code=table2.code
group by code, name
Is it right to write a query like this?
Yes, it is right to write a query like that. The logic is good, but there is a syntax error in the case expressions (the end keyword is missing). I would prefer that you count using the sum aggregate function, as below.
select code,
       table2.name,
       sum((date1 is not null)::int) as withdate,
       sum((status = 1)::int) as type1,
       sum((status = 2)::int) as type2
from table, table2
where table.code = table2.code
group by code, name;
This way, when date1 is not null, 1 is added to the accumulated sum, otherwise 0 is added, which amounts to counting. (Note the ::int casts: unlike MySQL, Postgres' sum() does not accept boolean arguments directly.)
Your query looks correct to me, except for the fact that you forgot to fully qualify your column names and forgot the end keywords in the case expressions. There are, however, some improvements you could make.
First, you shouldn't use implicit joins (having more than one table in the from clause); they have been considered deprecated for quite a few years, and you should use an explicit join clause instead.
Second, you didn't mention what version you were using, but Postgres 9.4 introduced a filter clause that can save you some of the boiler-plate of those case expressions.
SELECT table1.code,
table2.name,
COUNT(*) FILTER (WHERE date1 IS NOT NULL) AS withdate,
COUNT(*) FILTER (WHERE status = 1) AS type1,
COUNT(*) FILTER (WHERE status = 2) AS type2
FROM table1
JOIN table2 ON table1.code = table2.code
GROUP BY table1.code, table2.name
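For completeness, on versions before 9.4 the same counts can be written with the case expressions completed, which is essentially the original query with the missing end keywords and qualifications added:

SELECT table1.code,
       table2.name,
       COUNT(CASE WHEN date1 IS NOT NULL THEN id END) AS withdate,
       COUNT(CASE WHEN status = 1 THEN id END) AS type1,
       COUNT(CASE WHEN status = 2 THEN id END) AS type2
FROM table1
JOIN table2 ON table1.code = table2.code
GROUP BY table1.code, table2.name;

COUNT ignores NULLs, and a CASE with no ELSE branch yields NULL, so only the matching rows are counted.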
I want to compare two columns' values which come from two different queries. Can anyone suggest a query that compares two columns in Postgres?
Well, the easiest to understand (but not necessarily the fastest) is probably something like this. (But you might mean something else by "compare".)
-- Values in column1 that aren't in column2.
SELECT column1 FROM query1
WHERE column1 NOT IN (SELECT column2 FROM query2);
-- Values in column2 that aren't in column1.
SELECT column2 FROM query2
WHERE column2 NOT IN (SELECT column1 FROM query1);
-- Values common to both column1 and column2
SELECT q1.column1 FROM query1 q1
INNER JOIN query2 q2 ON (q1.column1 = q2.column2);
You can also do this in a single statement to give you a visual comparison. A FULL OUTER JOIN returns all the values in both columns, with matching values in the same row, and NULL where one column is missing a value that's in the other column.
SELECT q1.column1, q2.column2 FROM query1 q1
FULL OUTER JOIN query2 q2 ON (q1.column1 = q2.column2);
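A related idiom worth knowing: EXCEPT returns the values from one query that are absent from the other, and, unlike NOT IN, it does not misbehave when the other side contains NULLs (a sketch using the same hypothetical query1/query2):

-- Values in column1 that aren't in column2, NULL-safe
SELECT column1 FROM query1
EXCEPT
SELECT column2 FROM query2;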
In Postgres, why does
select abc from (select 1) as abc
produce:
(1)
and
select * from (select 1) as abc
produces:
1
That's really strange to me. Is that the case with MySQL, Oracle, etc.?
I spent hours figuring out why my conditions were failing...
The rows returned by your queries have different types: the first one is ROW(INT), while the second one is INT.
MySQL and others lack this feature.
In your first query, you are selecting a whole ROW as a single column. This query
SELECT abc FROM (SELECT 1, 2) abc
will produce (1, 2), which is a single column too and has type ROW.
To select the INT value, use:
SELECT abc.col
FROM (
SELECT 1 AS col
) abc
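To see the difference directly, pg_typeof (a built-in Postgres function) can report the type of each expression; a quick sketch:

-- The bare alias is a composite row value; the field inside it is an integer
SELECT pg_typeof(abc) AS row_type,      -- record
       pg_typeof(abc.col) AS field_type -- integer
FROM (SELECT 1 AS col) abc;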