Pass multiple postgres SQL statements in a single PQexec call - postgresql

In T-SQL, it's possible to run multiple select statements without a ;. Example:
select 1 select 2 is valid, and returns two result sets of 1 and 2 respectively.
In Postgres, it is not possible to run multiple select statements that way... you need a ; delimiter, otherwise you get a syntax error.
Referencing the docs: http://www.postgresql.org/docs/current/interactive/libpq-exec.html
Multiple queries sent in a single PQexec call are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the query string to divide it into multiple transactions.
How can I do this?
Let's say I want to run these two queries on the server (select 1 and select 2); should it look like this:
begin
select 1
commit;
begin
select 2
commit
I'm ok with it only returning the last query's result set, but I need to know that the first query was executed on the server, even if its results don't come back.
Why I want to do this: I have a complex SQL script that builds ~6 temp tables that the main query will use. Because the temp-table statements are delimited with the ; syntax, I can't schedule this script in cron. If I can get the temp tables to build and the main query to access them in the same PQexec call, I'd be very very happy.
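Reading the quoted docs closely, this should already work: a single PQexec string may contain several semicolon-separated statements, they all execute (in one transaction by default), and PQexec returns only the result of the last one. So, as a minimal sketch with hypothetical table names, a string like this could be passed in one call:
create temp table t1 as select 1 as col;
create temp table t2 as select 2 as col;
select * from t1 union all select * from t2;
Only the final select's result set comes back, but the temp tables are built on the server first.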

You don't need libpq directly, you can just use the psql front end (in cron, you might need to specify the absolute pathname for the binary):
#!/bin/sh
psql -U my_user mydb <<OMG
begin;
select 1 as tralal;
commit;
begin;
select 2 as domtidom;
commit;
OMG
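(Equivalently, you could keep the statements in a file and point psql at it from cron, e.g. psql -U my_user -d mydb -f /path/to/script.sql; the -f flag runs the whole semicolon-delimited script in one session, so temp tables remain visible to the later statements.)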

I was able to accomplish what I was looking for with CTEs rather than temp tables... one long chain of CTEs (acting as temp tables) waterfalling into the main query.
A simple example:
with first as (
select 1 as col
),
second as (
select 2 as col
)
select * from first union all select * from second
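To make the "waterfall" explicit: a later CTE can reference an earlier one, which is what lets the chain stand in for a sequence of temp tables. A minimal sketch:
with first as (
select 1 as col
),
second as (
select col + 1 as col from first -- references the earlier CTE
)
select * from second;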
A more complex example:
with COGS as (
select 'Product1' Vertical, 3.0 Credit, 1.00 Debit, 2.75 Blend, 4.30 Amex, 0.25 ACH union
select 'Product2', 3.1, 2.2, 2.8, 4.5, 0.25
),
Allocable_Card_Volume as (
select MPR.Date, sum(MPR.Card_Volume_Net_USD) Allocable_Card_Volume
from mpr_base MPR
where MPR.Gateway in ('YapProcessing') and MPR.Vertical not in ('HA-Intl','HA')
group by MPR.Date
),
COGS_Financials_Base as (
select '2013-01-31'::DATE Date , 1000 Total_COGS , 200 Homeaway , (select Allocable_Card_Volume from Allocable_Card_Volume where Date in ('2013-01-31') ) Allocable_Card_Volume
),
Initial_COGS as (
select
MPR.Date,
sum(
case when MPR.PaymentTypeGroup in ('ACH_Scan','AmEx') then (Txn_Count * COGS.ACH) else 0 end +
case when MPR.Vertical not in ('HA') and MPR.PaymentTypeGroup in ('Card','AmEx-Processing') then
coalesce( ((Credit_Card_Net_USD - Amex_Processing_Net_USD) * COGS.Credit * 0.01),0) + coalesce((Debit_Card_Net_USD * COGS.Debit * 0.01),0) + coalesce((Amex_Processing_Net_USD * COGS.Amex * 0.01),0) + coalesce((case when TPV is null and PaymentTypeGroup in ('Card') then TPV_Billing else 0 end * COGS.Blend * 0.01),0)
when MPR.Vertical in ('HA') and MPR.PaymentTypeGroup in ('Card','AmEx-Processing') and FeePaymentType in ('PropertyPaid') then
coalesce(COGS_Financials.Homeaway,0)
else 0 end
) Initial_COGS
from
mpr_base MPR
left join COGS on COGS.Vertical = MPR.Vertical and MPR.Gateway in ('YapProcessing') and MPR.PaymentTypeGroup not in ('Cash')
left join COGS_Financials_Base COGS_Financials on MPR.Date = COGS_Financials.Date and MPR.Gateway in ('YapProcessing') and MPR.PaymentTypeGroup in ('Card')
where MPR.Gateway in ('YapProcessing') and MPR.Vertical not in ('HA-Intl') and MPR.PaymentTypeGroup not in ('Cash')
group by
MPR.Date
),
COGS_Financials as (
select
COGS_Financials_Base.*, (COGS_Financials_Base.Total_COGS - Initial_COGS.Initial_COGS) Allocation
from
COGS_Financials_Base
join Initial_COGS on COGS_Financials_Base.Date = Initial_COGS.Date
),
MPR as (
select
MPR.Date,MPR.Gateway,MPR.Vertical, MPR.ParentAccountId, MPR.ParentName ,
MPR.PaymentTypeGroup ,
sum(TPV_USD) TPV_USD,
sum(TPV_Net_USD) TPV_Net_USD,
sum(Revenue_Net_USD) Revenue_Net_USD ,
sum(coalesce(
case when MPR.PaymentTypeGroup in ('ACH_Scan','AmEx') then (Txn_Count * COGS.ACH) else 0 end +
case when MPR.Vertical not in ('HA') and MPR.PaymentTypeGroup in ('Card','AmEx-Processing') then
coalesce( ((Credit_Card_Net_USD - Amex_Processing_Net_USD) * COGS.Credit * 0.01),0) + coalesce((Debit_Card_Net_USD * COGS.Debit * 0.01),0) + coalesce((Amex_Processing_Net_USD * COGS.Amex * 0.01),0) + coalesce((case when TPV is null and PaymentTypeGroup in ('Card') then TPV_Billing else 0 end * COGS.Blend * 0.01),0)
+(coalesce( ( ( cast(Card_Volume_Net_USD as decimal(18,2) ) / cast(COGS_Financials.Allocable_Card_Volume as decimal(18,2)) ) * COGS_Financials.Allocation ), 0) ) -- Excess
when MPR.Vertical in ('HA') and MPR.PaymentTypeGroup in ('Card','AmEx-Processing') and MPR.FeePaymentType in ('PropertyPaid') then coalesce(COGS_Financials.Homeaway,0)
else 0
end,0)
) COGS_USD,
sum(Txn_Count) Txn_Count
from
mpr_Base MPR
left join COGS on COGS.Vertical = MPR.Vertical and MPR.Gateway in ('YapProcessing') and MPR.PaymentTypeGroup not in ('Cash')
left join COGS_Financials on MPR.Date = COGS_Financials.Date and MPR.Gateway in ('YapProcessing') and MPR.PaymentTypeGroup in ('Card','AmEx-Processing')
where
MPR.Date in ('2016-02-29')
group by
MPR.Date,MPR.Gateway,MPR.Vertical , MPR.ParentAccountId ,MPR.ParentName,
MPR.PaymentTypeGroup
)
select
Vertical,
sum(TPV_USD)::money as TPV_USD,
sum(Revenue_Net_USD)::money as Revenue_Net_USD,
sum(COGS_USD)::money COGS_USD,
round((sum(Revenue_Net_USD)-sum(COGS_USD))/sum(Revenue_Net_USD)*100,2) Accounting_Margin
from
MPR
where Date in ('2016-02-29')
group by
Vertical
union all
select
'Total' ,
sum(TPV_USD)::money as TPV_USD,
sum(Revenue_Net_USD)::money as Revenue_Net_USD,
sum(COGS_USD)::money COGS_USD,
round((sum(Revenue_Net_USD)-sum(COGS_USD))/sum(Revenue_Net_USD)*100,2) Accounting_Margin
from
MPR
where Date in ('2016-02-29')
I said it would be complex :-)

From your answer, you could also do this
SELECT * FROM a
UNION ALL
SELECT * FROM b
UNION ALL
SELECT * FROM c
...

Related

Is there a smarter method to create series with different intervals for counts within a query?

I want to create different intervals:
0 to 10 in steps of 1
10 to 100 in steps of 10
100 to 1,000 in steps of 100
1,000 to 10,000 in steps of 1,000
to query a table for the count of items in each interval.
with "series" as (
(SELECT generate_series(0, 10, 1) AS r_from)
union
(select generate_series(10, 90, 10) as r_from)
union
(select generate_series(100, 900, 100) as r_from)
union
(select generate_series(1000, 9000, 1000) as r_from)
order by r_from
)
, "range" as ( select r_from
, case
when r_from < 10 then r_from + 1
when r_from < 100 then r_from + 10
when r_from < 1000 then r_from + 100
else r_from + 1000
end as r_to
from series)
select r_from, r_to,(SELECT count(*) FROM "my_table" WHERE "my_value" BETWEEN r_from AND r_to) as "Anz."
FROM "range";
I think generate_series is the right way; there is another way, though: we can use simple math to calculate the numbers.
SELECT 0 as r_from,1 as r_to
UNION ALL
SELECT power(10, steps ) * v ,
power(10, steps ) * v + power(10, steps )
FROM generate_series(1, 9, 1) v
CROSS JOIN generate_series(0, 3, 1) steps
so the full query might look as below
with "range" as
(
SELECT 0 as r_from,1 as r_to
UNION ALL
SELECT power(10, steps) * v ,
power(10, steps) * v + power(10, steps)
FROM generate_series(1, 9, 1) v
CROSS JOIN generate_series(0, 3, 1) steps
)
select r_from, r_to,(SELECT count(*) FROM "my_table" WHERE "my_value" BETWEEN r_from AND r_to) as "Anz."
FROM "range";
Rather than generate_series you could create defined integer range types (int4range), then test whether your value is included within the range (see Range/Multirange Functions and Operators in the docs). So:
with ranges (range_set) as
( values ( int4range(0,10,'[)') )
, ( int4range(10,100,'[)') )
, ( int4range(100,1000,'[)') )
, ( int4range(1000,10000,'[)') )
) --select * from ranges;
select lower(range_set) range_start
, upper(range_set) - 1 range_end
, count(my_value) cnt
from ranges r
left join my_table mt
on (mt.my_value <@ r.range_set)
group by r.range_set
order by lower(r.range_set);
Note the 3rd parameter in creating the ranges: '[)' makes the lower bound inclusive and the upper bound exclusive.
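A quick sketch of what that bounds flag means in practice:
select int4range(0, 10, '[)') @> 0 as includes_lower, -- true
int4range(0, 10, '[)') @> 10 as includes_upper; -- false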
Creating a CTE as above is good if your ranges are static; however, if dynamic ranges are required, you can put the ranges into a table. Changing the ranges then becomes a matter of managing the table: not simple, but it does not require code updates. The query then reduces to just the main part of the above:
select lower(range_set) range_start
, upper(range_set) - 1 range_end
, count(my_value) cnt
from range_tab r
left join my_table mt
on (mt.my_value <@ r.range_set)
group by r.range_set
order by lower(r.range_set);
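A minimal sketch of such a table (the name range_tab is hypothetical, matching the query above):
create table range_tab (range_set int4range);
insert into range_tab
values (int4range(0,10,'[)'))
, (int4range(10,100,'[)'))
, (int4range(100,1000,'[)'))
, (int4range(1000,10000,'[)'));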

Postgresql: Select query on view returning no records

I have a view named vw_check_space in my public schema (using PostgreSQL 9.4). When I run
select * from public.vw_check_space;
as the postgres user, I get a list of rows, but when I run the same query as another user 'user1', it returns nothing.
View:
CREATE OR REPLACE VIEW public.vw_check_space AS
WITH constants AS (
SELECT current_setting('block_size'::text)::numeric AS bs,
23 AS hdr,
8 AS ma
), no_stats AS (
SELECT columns.table_schema,
columns.table_name,
psut.n_live_tup::numeric AS est_rows,
pg_table_size(psut.relid::regclass)::numeric AS table_size
FROM columns
JOIN pg_stat_user_tables psut ON columns.table_schema::name = psut.schemaname AND columns.table_name::name = psut.relname
LEFT JOIN pg_stats ON columns.table_schema::name = pg_stats.schemaname AND columns.table_name::name = pg_stats.tablename AND columns.column_name::name = pg_stats.attname
WHERE pg_stats.attname IS NULL AND (columns.table_schema::text <> ALL (ARRAY['pg_catalog'::character varying, 'information_schema'::character varying]::text[]))
GROUP BY columns.table_schema, columns.table_name, psut.relid, psut.n_live_tup
), null_headers AS (
SELECT constants.hdr + 1 + sum(
CASE
WHEN pg_stats.null_frac <> 0::double precision THEN 1
ELSE 0
END) / 8 AS nullhdr,
sum((1::double precision - pg_stats.null_frac) * pg_stats.avg_width::double precision) AS datawidth,
max(pg_stats.null_frac) AS maxfracsum,
pg_stats.schemaname,
pg_stats.tablename,
constants.hdr,
constants.ma,
constants.bs
FROM pg_stats
CROSS JOIN constants
LEFT JOIN no_stats ON pg_stats.schemaname = no_stats.table_schema::name AND pg_stats.tablename = no_stats.table_name::name
WHERE (pg_stats.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND no_stats.table_name IS NULL AND (EXISTS ( SELECT 1
FROM columns
WHERE pg_stats.schemaname = columns.table_schema::name AND pg_stats.tablename = columns.table_name::name))
GROUP BY pg_stats.schemaname, pg_stats.tablename, constants.hdr, constants.ma, constants.bs
), data_headers AS (
SELECT null_headers.ma,
null_headers.bs,
null_headers.hdr,
null_headers.schemaname,
null_headers.tablename,
(null_headers.datawidth + (null_headers.hdr + null_headers.ma -
CASE
WHEN (null_headers.hdr % null_headers.ma) = 0 THEN null_headers.ma
ELSE null_headers.hdr % null_headers.ma
END)::double precision)::numeric AS datahdr,
null_headers.maxfracsum * (null_headers.nullhdr + null_headers.ma -
CASE
WHEN (null_headers.nullhdr % null_headers.ma::bigint) = 0 THEN null_headers.ma::bigint
ELSE null_headers.nullhdr % null_headers.ma::bigint
END)::double precision AS nullhdr2
FROM null_headers
), table_estimates AS (
SELECT data_headers.schemaname,
data_headers.tablename,
data_headers.bs,
pg_class.reltuples::numeric AS est_rows,
pg_class.relpages::numeric * data_headers.bs AS table_bytes,
ceil(pg_class.reltuples * (data_headers.datahdr::double precision + data_headers.nullhdr2 + 4::double precision + data_headers.ma::double precision -
CASE
WHEN (data_headers.datahdr % data_headers.ma::numeric) = 0::numeric THEN data_headers.ma::numeric
ELSE data_headers.datahdr % data_headers.ma::numeric
END::double precision) / (data_headers.bs - 20::numeric)::double precision) * data_headers.bs::double precision AS expected_bytes,
pg_class.reltoastrelid
FROM data_headers
JOIN pg_class ON data_headers.tablename = pg_class.relname
JOIN pg_namespace ON pg_class.relnamespace = pg_namespace.oid AND data_headers.schemaname = pg_namespace.nspname
WHERE pg_class.relkind = 'r'::"char"
), estimates_with_toast AS (
SELECT table_estimates.schemaname,
table_estimates.tablename,
true AS can_estimate,
table_estimates.est_rows,
table_estimates.table_bytes + COALESCE(toast.relpages, 0)::numeric * table_estimates.bs AS table_bytes,
table_estimates.expected_bytes + ceil(COALESCE(toast.reltuples, 0::real) / 4::double precision) * table_estimates.bs::double precision AS expected_bytes
FROM table_estimates
LEFT JOIN pg_class toast ON table_estimates.reltoastrelid = toast.oid AND toast.relkind = 't'::"char"
), table_estimates_plus AS (
SELECT current_database() AS databasename,
estimates_with_toast.schemaname,
estimates_with_toast.tablename,
estimates_with_toast.can_estimate,
estimates_with_toast.est_rows,
CASE
WHEN estimates_with_toast.table_bytes > 0::numeric THEN estimates_with_toast.table_bytes
ELSE NULL::numeric
END AS table_bytes,
CASE
WHEN estimates_with_toast.expected_bytes > 0::double precision THEN estimates_with_toast.expected_bytes::numeric
ELSE NULL::numeric
END AS expected_bytes,
CASE
WHEN estimates_with_toast.expected_bytes > 0::double precision AND estimates_with_toast.table_bytes > 0::numeric AND estimates_with_toast.expected_bytes <= estimates_with_toast.table_bytes::double precision THEN (estimates_with_toast.table_bytes::double precision - estimates_with_toast.expected_bytes)::numeric
ELSE 0::numeric
END AS bloat_bytes
FROM estimates_with_toast
UNION ALL
SELECT current_database() AS databasename,
no_stats.table_schema,
no_stats.table_name,
false AS bool,
no_stats.est_rows,
no_stats.table_size,
NULL::numeric AS "numeric",
NULL::numeric AS "numeric"
FROM no_stats
), bloat_data AS (
SELECT current_database() AS databasename,
table_estimates_plus.schemaname,
table_estimates_plus.tablename,
table_estimates_plus.can_estimate,
table_estimates_plus.table_bytes,
round(table_estimates_plus.table_bytes / (1024::double precision ^ 2::double precision)::numeric, 3) AS table_mb,
table_estimates_plus.expected_bytes,
round(table_estimates_plus.expected_bytes / (1024::double precision ^ 2::double precision)::numeric, 3) AS expected_mb,
round(table_estimates_plus.bloat_bytes * 100::numeric / table_estimates_plus.table_bytes) AS pct_bloat,
round(table_estimates_plus.bloat_bytes / (1024::numeric ^ 2::numeric), 2) AS mb_bloat,
table_estimates_plus.est_rows
FROM table_estimates_plus
)
SELECT bloat_data.databasename,
bloat_data.schemaname,
bloat_data.tablename,
bloat_data.can_estimate,
bloat_data.table_bytes,
bloat_data.table_mb,
bloat_data.expected_bytes,
bloat_data.expected_mb,
bloat_data.pct_bloat,
bloat_data.mb_bloat,
bloat_data.est_rows
FROM bloat_data
ORDER BY bloat_data.pct_bloat DESC;
I have granted the connect privilege on the database, and usage and select privileges, to user1. I am not sure what other privileges I would be missing here. Any help would be appreciated.
PS: I have also granted usage and select on the schema and tables the view uses, right after creating it.
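For reference, the grants described would look something like this (database and schema names are placeholders):
grant connect on database mydb to user1;
grant usage on schema public to user1;
grant select on all tables in schema public to user1;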
https://www.postgresql.org/docs/9.4/static/view-pg-stats.html
The view pg_stats provides access to the information stored in the
pg_statistic catalog. This view allows access only to rows of
pg_statistic that correspond to tables the user has permission to
read, and therefore it is safe to allow public read access to this
view.
https://www.postgresql.org/docs/9.4/static/monitoring-stats.html
pg_stat_user_tables Same as pg_stat_all_tables, except that only user
tables are shown.
So even after you grant read on another owner's tables to the user, you are still joining pg_stat_user_tables, which cuts the list down to only those tables the user owns... either exclude it from the view, or use a left outer join instead of an inner join.
I'm talking about the JOIN against pg_stat_user_tables, but you should check every table you join, and read the docs for every view you include in your query.
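A quick way to see where the rows disappear is to count each source as both users and compare (a diagnostic sketch):
select count(*) from pg_stat_user_tables;
select count(*) from pg_stats
where schemaname not in ('pg_catalog', 'information_schema');
If the pg_stats count is much lower for user1, the permission-filtered pg_stats rows are what's emptying the view.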

DB2 SQL, can't define column as

I have some SQL that is a section of a WITH statement,
and I keep getting an error that "NEWID" is not valid in the context where it is used, SQLSTATE 42703.
Update: the error has been coming from the GROUP BY clause with a HAVING condition that I didn't put in the original code, as I thought it wasn't the issue. So I updated the code to show the full version.
Does anyone know what the problem is with the statement?
HATSTABLE1 (HATId, NewID) as (
select T.HATId as "ID",
round(
cast(
sum(
case when T.ID = 4 or
T.ID < 0
then 1 else 0 end
) AS FLOAT
) / count(*) * 100,
2
) AS NewID
from Hats T
join Heads HD on
T.ID = HD.HatID
group by T.HATId
having NewID > 1
Try it like this. DB2 does not let you reference a column alias such as NewID in the HAVING clause of the same select (hence SQLSTATE 42703); compute the pieces in a CTE first, then filter on the derived value:
with tmp as (
select T.HATId,
sum(case when T.ID = 4 or T.ID < 0 then 1 else 0 end) as sum1,
count(*) as nb
from Hats T
group by T.HATId
)
select HATId, round(cast(sum1 as decimal) / nb * 100, 2) as NewID
from tmp
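To keep the > 1 filter from the original HAVING, a sketch on top of the same tmp CTE, repeating the expression (the alias NewID can't be referenced in WHERE either):
select HATId, round(cast(sum1 as decimal) / nb * 100, 2) as NewID
from tmp
where round(cast(sum1 as decimal) / nb * 100, 2) > 1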

Postgres select 'at least' items

I want to select the comments on a post starting from a particular commentId, BUT I want at least 5 comments in the result either way.
So if fewer than 5 comments match SELECT * FROM comments WHERE id >= :comment_id, I have to run a second query, SELECT * FROM comments LIMIT 5.
Is it possible to get the same logic in one request?
with c as (
select count(*) as c
from comments
where id >= :comment_id
)
select *
from comments
where id >= :comment_id
union all
(
select *
from comments
where id < :comment_id
order by id desc
limit greatest(5 - (select c from c), 0) -- pad with older comments up to 5 rows total
)
;
Try:
WITH x AS (
SELECT * FROM comments WHERE id >= :comment_id
),
y AS (
SELECT * FROM comments
LIMIT 5
)
SELECT * FROM x
WHERE 5 <= ( SELECT count(*) FROM x )
UNION ALL
SELECT * FROM y
WHERE 5 > ( SELECT count(*) FROM x )
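Another way to express the same "top up to 5" logic in a single pass is a window function (a sketch, assuming ids increase with time):
select *
from (
select c.*, row_number() over (order by id desc) as rn
from comments c
) t
where id >= :comment_id or rn <= 5
order by id;
This keeps every comment from :comment_id onward and, when those are fewer than 5, fills in the next newest ones.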

T-SQL percent calculation stuffed with WHERE clauses doesn't work

I have T-SQL as follows:
SELECT (COUNT(Intakes.fk_ClientID) * 100) / (
SELECT count(*)
FROM INTAKES
WHERE Intakes.AdmissionDate >= @StartDate
)
FROM Intakes
WHERE Intakes.fk_ReleasedFromID = '1'
AND Intakes.AdmissionDate >= @StartDate;
I'm trying to get the percentage of clients who have ReleasedFromID = 1 out of the subset of clients who have a certain range of admission dates, but I get rows of 1's and 0's instead. Now, if I take out the WHERE clauses it works and I get the percentage:
SELECT (COUNT(Intakes.fk_ClientID) * 100) / (
SELECT count(*)
FROM INTAKES
)
FROM Intakes
WHERE Intakes.fk_ReleasedFromID = '1';
works fine. It counts ClientIDs where ReleasedFromID = 1, multiplies by 100, and divides by the total number of rows in Intakes. But how do you compute the percentage with the WHERE clauses as above?
After reading the comment from @Anssssss: the counts are integers, so the division was being done in integer arithmetic and truncated; multiplying by 100.0 instead of 100 forces decimal division.
SELECT (COUNT(Intakes.fk_ClientID) * 100.0) / (
SELECT count(*)
FROM INTAKES
) 'percentage'
FROM Intakes
WHERE Intakes.fk_ReleasedFromID = '1';
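The same fix with the date filter from the original query restored in both counts (a sketch, using the @StartDate parameter from the question):
SELECT (COUNT(Intakes.fk_ClientID) * 100.0) / (
SELECT count(*)
FROM Intakes
WHERE Intakes.AdmissionDate >= @StartDate
) 'percentage'
FROM Intakes
WHERE Intakes.fk_ReleasedFromID = '1'
AND Intakes.AdmissionDate >= @StartDate;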