Union different select statements - tsql

I want to union 3 different select statements. The problem is that I am using a loop and an IF in my query:
WHILE some condition
BEGIN
    IF (condition 1)
    BEGIN
        SELECT something
    END
    ELSE IF (condition 2)
    BEGIN
        SELECT something
    END
    ELSE IF (condition 3)
    BEGIN
        SELECT something
    END
END
The query is working fine, but it returns more than 100 separate select results (distinct result sets). How can I union these select results into one table?

If you mean in T-SQL, then you need a temporary table (#foo) or a table variable (@bar). Then at each step:
INSERT {name} ({cols})
SELECT {cols}
{etc}
Then do a final
SELECT {cols}
FROM {name}
The choice between a table variable and a temporary table is subtle. I prefer table variables, but for large data, or if you need an identity/index, a temporary table may be more versatile. Note that if you aren't inside a stored procedure, you must ensure you clean up your temp table.
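For instance, a minimal sketch of that pattern with a table variable; the loop condition, branch tests, and columns below are placeholders, since the original query wasn't shown:
DECLARE @results TABLE (col1 INT, col2 NVARCHAR(50)); -- match the columns of your SELECTs
DECLARE @i INT = 1;
WHILE @i <= 100                -- your real loop condition
BEGIN
    IF (@i % 3 = 0)            -- condition 1
        INSERT @results (col1, col2) SELECT @i, N'branch 1';
    ELSE IF (@i % 3 = 1)       -- condition 2
        INSERT @results (col1, col2) SELECT @i, N'branch 2';
    ELSE                       -- condition 3
        INSERT @results (col1, col2) SELECT @i, N'branch 3';
    SET @i += 1;
END
-- one combined result set instead of 100+ separate ones
SELECT col1, col2 FROM @results;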


How to iterate over all schemas and get the count from the table with the same name in each schema, every 5 mins?

Imagine there are 5 schemas in my database, and in every schema there is a table with a common name (e.g. table1). Every 5 minutes, records get inserted into table1. How can I iterate over all schemas and calculate the count of table1? (I have to automate the process, so I am going to write the code in a function and call that function every 5 minutes using crontab.)
Basically there are 2 options. First: hard-code schema.table and union the results. Something like:
create or replace function count_rows_in_each_table1()
  returns table (schema_name text, number_of_rows bigint)
  language sql
as $$
  select 'schema1', count(*) from schema1.table1 union all
  select 'schema2', count(*) from schema2.table1 union all
  select 'schema3', count(*) from schema3.table1 union all
  ...
  select 'scheman', count(*) from scheman.table1;
$$;
The alternative is to build the query dynamically from information_schema:
create or replace function count_rows_in_each_table1()
  returns table (schema_name text, number_of_rows bigint)
  language plpgsql
as $$
declare
  c_rows_count cursor is
    select table_schema::text
    from information_schema.tables
    where table_name = 'table1';
  l_tbl record;
  l_sql_statement text = '';
  l_connector text = '';
  l_base_select text = 'select ''%s'', count(*) from %I.table1';
begin
  for l_tbl in c_rows_count
  loop
    l_sql_statement = l_sql_statement ||
                      l_connector ||
                      format(l_base_select, l_tbl.table_schema, l_tbl.table_schema);
    l_connector = ' union all ';
  end loop;
  raise notice E'Running Query: \n%', l_sql_statement;
  return query execute l_sql_statement;
end;
$$;
Which is better? With few schemas and infrequent schema add/drop, opt for the first: it is direct and clearly shows what you are doing. If you add/drop schemas often, then opt for the second. If you have many schemas but seldom add/drop them, modify the second to generate the first, save it, and schedule execution of the generated query.
NOTE: Not tested
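To call either function every 5 minutes, as the question intends, one option is a crontab entry that invokes psql (the database name and log path below are assumptions):
*/5 * * * * psql -d mydb -c 'select * from count_rows_in_each_table1();' >> /var/log/table1_counts.log 2>&1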

PL/pgSQL How To Create Test Query To Test Function

I created a function that returns TEXT, and I'm trying to create a simple query to test this function.
The query looks like this:
CREATE TEMP TABLE function_test(actuel TEXT,expected TEXT,t_result TEXT);
/* the t_result should be'passed' if function(actuel) = expected */
INSERT INTO
function_test(actuel ,expected);
VALUES
('a','A'),/*function(`a`) return 'A'*/
('b','B'),/*function(`b`) return 'B'*/
('c','C');/*function(`c`) return 'C'*/
IF function(actuel)=expected THEN
INSERT INTO
function_test(t_result) VALUES 'passed' ;
ELSE
INSERT INTO
function_test(t_result) VALUES 'failed';
SELECT * FROM function_test;
DROP TABLE function_test;
It would be nice if I could do it better than this.
Thanks.
Not entirely sure I understand what you are asking, but could you not just do either
select MY_FUNCTION('a','A') from dual;
or if you need a row on success and no row on failure
select 'PASS' from dual
where MY_FUNCTION('a','A') = 'YES'
DUAL is a one-row table that exists in every Oracle database.
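The question is tagged PL/pgSQL, though, and PostgreSQL has no DUAL; a SELECT without FROM does the same job, so a single check is just select my_function('a') = 'A' as passed;. A sketch that checks every row of the asker's function_test table (my_function stands in for the real function, whose name wasn't given):
SELECT actuel,
       expected,
       CASE WHEN my_function(actuel) = expected THEN 'passed' ELSE 'failed' END AS t_result
FROM function_test;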

Execute select statement conditionally

I'm using PostgreSQL 9.6 and I need to create a query that performs a SELECT depending on the logic of an IF.
Basically I've tried:
DO $$
BEGIN
    IF exists (SELECT 1 FROM TABLE WHERE A = B) THEN
        SELECT *
        FROM A;
    ELSE
        SELECT *
        FROM B;
    END IF;
END $$;
And that returns me an error:
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM
instead.
CONTEXT: PL/pgSQL function inline_code_block line 15 at SQL statement
Then I switched "SELECT" for "PERFORM", but that doesn't actually return the results of the SELECT to me.
I read that I need to call a void function to perform a "dynamic" query, but I couldn't make that work either. I'm new to writing queries on PostgreSQL. Is there any better way of doing that?
DO statements do not take parameters nor return anything. See:
Returning values for Stored Procedures in PostgreSQL
You may want a function instead. Create once:
CREATE FUNCTION foo()
  RETURNS SETOF A  -- or B, all the same
  LANGUAGE plpgsql AS
$func$
BEGIN
    IF EXISTS (SELECT FROM ...) THEN  -- some meaningful test
        RETURN QUERY
        SELECT *
        FROM A;
    ELSE
        RETURN QUERY
        SELECT *
        FROM B;
    END IF;
END
$func$;
Call:
SELECT * FROM foo();
But the function has one declared return type. So both tables A and B must share the same columns (at least columns with compatible data types in the same order; names are no problem).
The same restriction applies to a plain SQL statement. SQL is strictly typed.
Anonymous code blocks just can't return anything - you would need a function instead.
But I think you don't need PL/pgSQL to do what you want. Assuming that a and b have the same number of columns and data types, you can use union all with exists and not exists:
select a.* from a where exists (select 1 from mytable where ...)
union all
select b.* from b where not exists (select 1 from mytable where ...)

insert into select - stored procedure affects 0 rows

I am using SQL Server 2014. I created a stored procedure to update a table, but when I run it, it affects 0 rows. I'm expecting to see 501 rows affected, as the actual INSERT statement returns that many when run alone. The table being updated is pre-populated.
I also tried pre-populating the table with 500 records to see if the last 1 row would be pulled in by the stored procedure, but it still affects 0 rows.
Create PROCEDURE UPDATE_STAGING
(@StatementType NVARCHAR(20) = '')
AS
BEGIN
    IF @StatementType = 'Insertnew'
    BEGIN
        INSERT INTO owner.dbo.MVR_Staging
        (
            policy_number,
            quote_number,
            request_id,
            CreateTs,
            mvr_response_raw_data
        )
        SELECT
            p.pol_num,
            A.pol_number,
            R.Request_ID,
            R.CreateTS,
            R._raw_data
        FROM TABLE1 A WITH (NOLOCK)
        LEFT JOIN TABLE2 R WITH (NOLOCK)
            ON R.Request_id = ISNULL(A.CACHE_REQUEST_ID, A.Request_id)
        INNER JOIN TABLE3 P
            ON p.quote_policy_num = a.policy_number
        WHERE
            A.[SOURCE] = 'MVR'
            AND A.CREATED_ON >= '2020-01-01'
    END
    IF @StatementType = 'Select'
    BEGIN
        SELECT *
        FROM owner.dbo.MVR_Staging
    END
END
To run:
exec UPDATE_STAGING insertnew
GO
Some corrections to your code that are not related to your issue, but are good for best practice and clean code. When declaring a stored procedure parameter, there's no point wrapping it in parentheses: @StatementType NVARCHAR(20) = '' is enough. Also, you should use ELSE IF @StatementType = 'Select'; without the ELSE, the second IF condition is always checked. Execute the procedure as exec UPDATE_STAGING 'insertnew', quoting the argument, as the parameter is NVARCHAR. As for your real issue, you could try commenting out the INSERT part and leaving only the SELECT, to see whether the query returns any rows at all.
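Putting those suggestions together, a sketch of the revised procedure and call; the INSERT's SELECT is unchanged from the question and elided here:
ALTER PROCEDURE UPDATE_STAGING
    @StatementType NVARCHAR(20) = ''
AS
BEGIN
    IF @StatementType = 'Insertnew'
    BEGIN
        INSERT INTO owner.dbo.MVR_Staging
            (policy_number, quote_number, request_id, CreateTs, mvr_response_raw_data)
        SELECT ... -- the same SELECT as in the question
    END
    ELSE IF @StatementType = 'Select'
    BEGIN
        SELECT * FROM owner.dbo.MVR_Staging;
    END
END
GO
EXEC UPDATE_STAGING 'Insertnew';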

PostgreSQL: How to figure out missing numbers in a column using generate_series()?

SELECT commandid
FROM results
WHERE NOT EXISTS (
SELECT *
FROM generate_series(0,119999)
WHERE generate_series = results.commandid
);
I have a column commandid in results of type int, but various tests failed and so were not added to the table. I would like to create a query that returns a list of commandid values that are not found in results. I thought the above query would do what I wanted; however, it does not work even if I use a range that is outside the expected possible range of commandid (like negative numbers).
Given sample data:
create table results ( commandid integer primary key);
insert into results (commandid) select * from generate_series(1,1000);
delete from results where random() < 0.20;
This works:
SELECT s.i AS missing_cmd
FROM generate_series(0,1000) s(i)
WHERE NOT EXISTS (SELECT 1 FROM results WHERE commandid = s.i);
as does this alternative formulation:
SELECT s.i AS missing_cmd
FROM generate_series(0,1000) s(i)
LEFT OUTER JOIN results ON (results.commandid = s.i)
WHERE results.commandid IS NULL;
Both of the above appear to result in identical query plans in my tests, but you should compare with your data on your database using EXPLAIN ANALYZE to see which is best.
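For example, for the first formulation (run the same for the second and compare the plans):
EXPLAIN ANALYZE
SELECT s.i AS missing_cmd
FROM generate_series(0,1000) s(i)
WHERE NOT EXISTS (SELECT 1 FROM results WHERE commandid = s.i);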
Explanation
Note that instead of NOT IN I've used NOT EXISTS with a subquery in one formulation, and an ordinary OUTER JOIN in the other. It's much easier for the DB server to optimise these and it avoids the confusing issues that can arise with NULLs in NOT IN.
I initially favoured the OUTER JOIN formulation, but at least in 9.1 with my test data the NOT EXISTS form optimizes to the same plan.
Both will perform better than the NOT IN formulation below when the series is large, as in your case. NOT IN used to require Pg to do a linear search of the IN list for every tuple being tested, but examination of the query plan suggests Pg may be smart enough to hash it now. The NOT EXISTS (transformed into a JOIN by the query planner) and the JOIN work better.
The NOT IN formulation is both confusing in the presence of NULL commandids and can be inefficient:
SELECT s.i AS missing_cmd
FROM generate_series(0,1000) s(i)
WHERE s.i NOT IN (SELECT commandid FROM results);
so I'd avoid it. With 1,000,000 rows the other two completed in 1.2 seconds and the NOT IN formulation ran CPU-bound until I got bored and cancelled it.
As I mentioned in the comment, you need to do the reverse of the above query.
SELECT generate_series
FROM generate_series(0, 119999)
WHERE generate_series NOT IN (SELECT commandid FROM results);
At that point, you should find values that do not exist within the commandid column within the selected range.
I am not such an experienced SQL guru, but I like finding other ways to solve problems.
Just today I had a similar problem - finding unused numbers in one character column.
I solved my problem by using PL/pgSQL, and was very interested in how fast my procedure would be.
I used @Craig Ringer's way to generate a table with a serial column, added one million records, and then deleted every 99th record. The procedure takes about 3 seconds to search for the missing numbers:
-- creating table
create table results (commandid character(7) primary key);
-- populating table with serial numbers formatted as characters
insert into results (commandid)
select cast(num_id as character(7))
from generate_series(1,1000000) as num_id;
-- delete some records
delete from results where cast(commandid as integer) % 99 = 0;
create or replace function unused_numbers()
  returns setof integer as
$body$
declare
  i integer;
  r record;
begin
  -- looping through the table with a synchronized counter:
  i := 1;
  for r in (select distinct cast(commandid as integer) as num_value
            from results
            order by num_value asc)
  loop
    if not (i = r.num_value) then
      while true loop
        return next i;
        i = i + 1;
        if (i = r.num_value) then
          i = i + 1;
          exit;
        else
          continue;
        end if;
      end loop;
    else
      i := i + 1;
    end if;
  end loop;
  return;
end;
$body$
language plpgsql volatile
cost 100
rows 1000;
select * from unused_numbers();
Maybe it will be usable for someone.
If you're on AWS Redshift, you might need to approach the question differently, since Redshift doesn't support generate_series. You'll end up with something like this, which returns each gap as a (gapstart, resume) pair rather than the individual missing ids:
select
    startpoints.id as gapstart,
    min(endpoints.id) as resume
from (
    select id + 1 as id
    from yourtable outer_series
    where not exists (
        select null
        from yourtable inner_series
        where inner_series.id = outer_series.id + 1
    )
    order by id
) startpoints,
yourtable endpoints
where endpoints.id > startpoints.id
group by startpoints.id;