add Exists operator in a union sql query - postgresql

I have two tables and I am using UNION to combine their result sets. My problem is that each SELECT statement within a UNION must have the same number of columns with compatible data types. I don't have the same number of columns, so I am creating placeholder (null) columns:
select
    d.deal_id as order_id,
    EXISTS(select * from table1 c where d.user_id = c.user_id) as IsUser -- this returns a boolean value
from table1 d
union
select
    cast(o.id as varchar) as order_id,
    coalesce('No_user'::text, '0'::text) as IsUser -- I get an error: UNION types boolean and character varying cannot be matched
from table2 o
How can I create a null column in the table2 query that matches the boolean data type of the table1 column?

How can I create a null column in the table2 query that matches the boolean data type of the table1 column?
By putting NULL into the SELECT list of the second query.
Generally (in SQL) the data type for the column is dictated by the first query in the union set, and the other queries must match it. You don't need column aliases for the subsequent queries either, but they may help make a query more readable:
select
    d.deal_id as order_id,
    EXISTS(select * from table1 c where d.user_id = c.user_id) as IsUser -- this returns a boolean value
from table1 d
union
select
    cast(o.id as varchar) as order_id,
    null -- IsUser
from table2 o
If you have a case where the type of the column in the first query is different from what you want in the output, cast the first column:
select 1::boolean as boolcol
UNION
select true
This will ensure that the column is boolean, rather than giving an "integer and boolean cannot be matched" error. Remember also that, should you need to, NULL can be cast to a type, e.g. select null::boolean as boolcol
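Tying that back to the question, a minimal sketch (same tables as above) where the second branch supplies an explicitly typed NULL:

select
    d.deal_id as order_id,
    EXISTS(select 1 from table1 c where d.user_id = c.user_id) as IsUser
from table1 d
union
select
    cast(o.id as varchar) as order_id,
    null::boolean as IsUser -- NULL cast to boolean, so both branches agree on the column type
from table2 o;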

Related

Postgres Crosstab query with CTE (with clause)

Recently started working on Postgres and need to pivot data.
I wrote the following query:
select *
from crosstab (
$$
    with tmp_kv as (
        select distinct
               pat_id,
               col.name as key,
               replace(replace(replace(value, '[', ''), ']', ''), '"', '') as value
        from (
            select p.Id as pat_id,
                   nullif(kv.key, 'undefined')::int as key,
                   trim(kv.value::text, '"') as value
            from pat_table p
            left join e_table e on e.pat_id = p.id and e.id is null,
                 jsonb_each_text(p.data) as kv
        ) t
        left join lateral (
            select name::text as name
            from public.config_fields fld
            where id = t.key
        ) col on true
    )
    select pat_id, key, value
    from tmp_kv
    where nullif(trim(key), '') is not null
    order by pat_id, key
$$, $$
    select distinct key
    from tmp_kv -- (get error: relation "tmp_kv" does not exist)
    where nullif(trim(key), '') is not null
    order by 1
$$
) as (
    pat_id bigint
    ...
    ...
);
The query works if I pull the WITH clause out into a temporary table, but I will be deploying this to production with read replicas, so it needs to work with a CTE. Is there a way?
The two queries passed as strings to the crosstab() function are separate queries.
A CTE can only be attached to a single query.
What you ask for is strictly impossible.
Since you have to spell out the (static) return type for crosstab() anyway, and the result of the query in the 2nd parameter has to match that, it's pointless to use a query with a dynamic result as 2nd parameter to begin with.
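To illustrate the point, a minimal self-contained sketch (hypothetical data and key names; requires the tablefunc extension) where the 2nd parameter is simply a static list that matches the declared output columns:

create extension if not exists tablefunc;

select *
from crosstab(
$$ select pat_id, key, value
   from (values (1, 'height', '180'),
                (1, 'weight', '70'),
                (2, 'weight', '82')) as v(pat_id, key, value)
   order by 1, 2 $$,
$$ values ('height'), ('weight') $$  -- static category list, no CTE needed
) as (
    pat_id int,
    height text,
    weight text
);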

Postgres join involving tables having join condition defined on an text array

I have two tables in postgresql
One table is of the form
Create table table1(
    ID serial PRIMARY KEY,
    Type text[]
);
Create table table2(
    type text,
    sellerID int
);
Now I want to get all the rows from table1 whose type matches a type in table2, but the problem is that in table1 the type is an array.
If the type column has an identifiable delimiter such as ',' or ';', you can rewrite the query using regexp_split_to_table(type, ','), or use the unnest() function. For example:
select *
from (
    select id, regexp_split_to_table(type, ',') as type
    from table1
) t1
inner join table2
    on trim(t1.type) = trim(table2.type);
Another good example can be found here: https://www.dbrnd.com/2017/03/postgresql-regexp_split_to_array-to-split-string-using-different-delimiters/
SELECT
a[1] AS DiskInfo
,a[2] AS DiskNumber
,a[3] AS MessageKeyword
FROM (
SELECT regexp_split_to_array('Postgres Disk information , disk 2 , failed', ',')
) AS dt(a)
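Since unnest() is mentioned above, here is a minimal sketch (reusing the table definitions from the question) that joins on the unnested array elements instead of splitting a string:

select t1.id, t2.type, t2.sellerID
from table1 t1
cross join lateral unnest(t1.type) as u(type) -- one row per array element
join table2 t2 on trim(u.type) = trim(t2.type);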
You can use the ANY operator in the JOIN condition:
select *
from table1 t1
join table2 t2 on t2.type = any (t1.type);
Note that if the types in table1 match multiple rows in table2, you would get duplicates (from table1) because that's how a join works. Maybe you want an EXISTS condition instead:
select *
from table1 t1
where exists (select *
from table2 t2
where t2.type = any(t1.type));

PostgreSQL - Append a table to another and add a field without listing all fields

I have two tables:
table_a with fields item_id, rank, and 50 other fields.
table_b with fields item_id and the same 50 fields as table_a.
I need to write a SELECT query that adds the rows of table_b to table_a but with rank set to a specific value, let's say 4.
Currently I have:
SELECT * FROM table_a
UNION
SELECT item_id, 4 AS rank, field_1, field_2, ... FROM table_b
How can I join the two tables together without writing out all of the fields and without using an INSERT query?
EDIT:
My idea is to join table_b to table_a somehow with the rank field remaining empty, then simply replace the null rank fields. The rank field is never null, but item_id can be duplicated and table_a may have item_id values that are not in table_b, and vice-versa.
I am not sure I understand why you need this, but you can use jsonb functions:
select (jsonb_populate_record(null::table_a, row)).*
from (
    select to_jsonb(a) as row
    from table_a a
    union
    select to_jsonb(b) || '{"rank": 4}'
    from table_b b
) s
order by item_id;
Working example in rextester.
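The trick works because jsonb_populate_record() fills the columns of the given record type by key name and leaves missing keys NULL. A tiny illustration (assuming table_a as described in the question; the values are made up):

select *
from jsonb_populate_record(null::table_a, '{"item_id": 1, "rank": 4}'::jsonb);
-- item_id = 1, rank = 4, all other columns are NULL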
I'm pretty sure I've got it. The predefined rank column can be added to table_b by joining it to a projection of itself that contains only the columns to the left of the position where the new column should go.
WITH
_leftcols AS ( SELECT item_id, 4 AS rank FROM table_b ),
-- _leftcols comes first so that rank lands right after item_id, matching table_a's column order
_combined AS ( SELECT * FROM _leftcols JOIN table_b USING (item_id) )
SELECT * FROM _combined
UNION
SELECT * FROM table_a;

How do I avoid listing all the table columns in a PostgreSQL returns statement?

I have a PostgreSQL function similar to this:
CREATE OR REPLACE FUNCTION dbo.MyTestFunction(
_ID INT
)
RETURNS dbo.MyTable AS
$$
SELECT *,
(SELECT Name FROM dbo.MySecondTable WHERE RecordID = PersonID)
FROM dbo.MyTable
WHERE PersonID = _ID
$$ LANGUAGE SQL STABLE;
I would really like to NOT have to replace the RETURNS dbo.MyTable AS with something like:
RETURNS TABLE(
col1 INT,
col2 TEXT,
col3 BOOLEAN,
col4 TEXT
) AS
and list out all the columns of MyTable and Name of MySecondTable. Is this something that can be done? Thanks.
--EDIT--
To clarify I have to return ALL columns in MyTable and 1 column from MySecondTable. If MyTable has >15 columns, I don't want to have to list out all the columns in a RETURNS TABLE (col1.. coln).
You just list the columns that you want returned in the SELECT portion of your SQL statement:
SELECT t1.column1, t1.column2,
(SELECT Name FROM dbo.MySecondTable WHERE RecordID = PersonID)
FROM dbo.MyTable t1
WHERE PersonID = _ID
Now you'll just get column1, column2, and Name returned.
Furthermore, you'll probably find better performance using a LEFT OUTER JOIN in your FROM portion of the SQL statement as opposed to the correlated subquery you have now:
SELECT t1.column1, t1.column2, t2.Name
FROM dbo.MyTable t1
LEFT OUTER JOIN dbo.MySecondTable t2 ON
    t2.RecordID = t1.PersonID
WHERE t1.PersonID = _ID
Took a bit of a guess on where RecordID and PersonID were coming from, but that's the general idea.

How to match strings in a DB2 (z/OS) query?

This is blowing my mind.
All I want to do is basic string comparison on a long varchar field.
I have a table of approx. 12M records.
If I query for MY_FIELD='a string', I get a count of 25947, which seems about right.
If I query for MY_FIELD!='a string', I get a count of 989.
Shouldn't these 2 counts add up to the full table size of 12M?
And in how many of those rows is MY_FIELD set to NULL?
a. select count(*) from mytable;
b. select count(*) from mytable where my_field is null;
c. select count(*) from mytable where my_field is not null;
d. select count(*) from mytable where my_field = 'some value';
e. select count(*) from mytable where my_field != 'some value';
NULL is not equal or unequal to any value, including NULL, so I would expect d+e to equal c, and b+c to equal a.
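A tiny illustration of why the two counts don't add up (hypothetical table and values; plain SQL, nothing DB2-specific):

CREATE TABLE mytable (my_field VARCHAR(20));
INSERT INTO mytable VALUES ('a string'), ('other'), (NULL);

SELECT COUNT(*) FROM mytable;                              -- 3  (a)
SELECT COUNT(*) FROM mytable WHERE my_field IS NULL;       -- 1  (b)
SELECT COUNT(*) FROM mytable WHERE my_field IS NOT NULL;   -- 2  (c)
SELECT COUNT(*) FROM mytable WHERE my_field = 'a string';  -- 1  (d)
SELECT COUNT(*) FROM mytable WHERE my_field != 'a string'; -- 1  (e)
-- d + e = c, and b + c = a; the NULL row is counted in neither d nor e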