Select value from stored procedure which outputs a table

I have a procedure which returns a select statement output after a set of calculations. The output of executing the procedure is as below:
exec proc1 empID
Output is as below:
col1  col2  col3  col4  col5
2014  2     33    330   29
2014  3     10    34    12
2015  1     25    60    55
Now I have a main select statement which retrieves many columns by joining different tables.
I need to bring the above columns (the output of stored procedure proc1) into the main select statement, for the empID which is available.
Something like below:
select empID, empName, empSal,
(select col3 from [exec proc1 empID] where col2=1),
empDept
from tblemployee
Is this possible? I am expecting 25 in the 4th column of the above query.

You can use either a user-defined function or a view instead of a stored procedure (see the function sketch below); a stored procedure's result set cannot be used directly inside a SELECT.
Or you can:
Create a table variable to store the result returned from the stored procedure
Declare @TempTable Table (...) -- declare all columns
Insert the output of the stored proc into the table variable
Insert @TempTable Exec storedProcName params
Then join the table variable exactly as per your need, as in the sketch below.
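A minimal sketch of this approach, assuming proc1 returns the five integer columns shown above and takes the employee id as its only parameter (the column types and the parameter usage are assumptions):
-- sample employee id (assumption)
Declare @empID int = 1;
-- table variable matching proc1's assumed output shape
Declare @ProcOutput Table (col1 int, col2 int, col3 int, col4 int, col5 int);
-- capture the procedure's result set
Insert @ProcOutput Exec proc1 @empID;
-- use it as a scalar subquery in the main select
select empID, empName, empSal,
       (select col3 from @ProcOutput where col2 = 1) as col3,
       empDept
from tblemployee
where empID = @empID;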
Note: the above method has a limitation. The problem with INSERT @TempTable ... EXEC is that an INSERT EXEC statement cannot be nested: it will break if your stored procedure already contains an INSERT EXEC in it.
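For the function route, a hypothetical inline table-valued function could replace proc1 and be called per employee row; fnProc1 and tblCalcResults are made-up names, and the procedure's actual calculation would move into the function body:
Create Function dbo.fnProc1 (@empID int)
Returns Table
As Return
(
    -- placeholder body: shown as a plain select so the sketch is complete
    select col1, col2, col3, col4, col5
    from dbo.tblCalcResults   -- hypothetical table standing in for the calculation
    where empID = @empID
);

select e.empID, e.empName, e.empSal,
       (select f.col3 from dbo.fnProc1(e.empID) f where f.col2 = 1) as col3,
       e.empDept
from tblemployee e;
Unlike the single-employee table variable, this works for every row in one pass; CROSS APPLY dbo.fnProc1(e.empID) would also work if you need more than one column back.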


Create many partitions as a result of a select statement in Postgres

Here is a table of phone numbers named phone_number:
phone_number    country_code  owner
07911 123456    44            Ada
08912 654321    44            Thomas
06 12 34 56 78  33            Jonathan
06 87 65 43 21  33            Arthur
Let's say we want to partition this table by country code, creating this table phone_number_bis:
CREATE TABLE phone_number_bis (
phone_number VARCHAR,
country_code INTEGER,
owner VARCHAR NOT NULL,
PRIMARY KEY (phone_number, country_code)
) PARTITION BY LIST (country_code);
Loading the content of phone_number into phone_number_bis will produce the following error:
INSERT INTO phone_number_bis( phone_number, country_code, owner)
SELECT phone_number, country_code, owner
FROM phone_number;
ERROR: no partition of relation "phone_number_bis" found for row
Partition key of the failing row contains (country_code) = (44)
Is there a SQL command that could create all necessary partitions before loading data into phone_number_bis, not knowing the content of the country_code column in advance?
NB: as Franck Heikens pointed out, partitioning the table may not be relevant for storing phone numbers. This is an example intended to make a complex problem more understandable.
If your client is psql, you can use \gexec to run a query and then execute each result row as a new command. You would write one query that outputs a text string containing a suitable CREATE TABLE statement for each distinct country_code. To do it entirely on the server side, you could use PL/pgSQL to do much the same thing, constructing a string and then running it with dynamic SQL via EXECUTE.
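A sketch of the \gexec approach (the partition naming scheme phone_number_bis_<code> is an assumption):
SELECT format(
    'CREATE TABLE phone_number_bis_%s PARTITION OF phone_number_bis FOR VALUES IN (%s)',
    country_code, country_code)
FROM phone_number
GROUP BY country_code
\gexec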
Is there a SQL command that could create all necessary partitions before loading data into phone_number_bis, not knowing the content of the country_code column in advance?
You can use a DEFAULT partition, then split new partitions out of the DEFAULT.
begin;
create table phone_number(phone_number text, country_code integer, owner text);
insert into phone_number
select 'dummy_' || (random()::numeric(10,4)), 120 + i, 'owner' || i
from generate_series(1, 5) g(i);
insert into phone_number
select 'dummy_' || (random()::numeric(10,4)), 220 + i, 'owner' || i
from generate_series(1, 5) g(i);
insert into phone_number
select 'dummy_' || (random()::numeric(10,4)), 1, 'owner' || i
from generate_series(1, 20) g(i);
select string_agg(distinct (country_code::text), ', ' order by (country_code::text))
from phone_number
where country_code > 99 and country_code < 201;
commit;
BEGIN;
CREATE TABLE phone_number_bis (
phone_number text,country_code integer,OWNER text,
PRIMARY KEY (phone_number, country_code)
)
PARTITION BY LIST (country_code);
CREATE TABLE phone_number_bis_01 PARTITION OF phone_number_bis
FOR VALUES IN (1);
CREATE TABLE phone_number_bis_100_200 PARTITION OF phone_number_bis
FOR VALUES IN (121, 122, 123, 124, 125);
CREATE TABLE phone_number_bis_default PARTITION OF phone_number_bis DEFAULT;
INSERT INTO phone_number_bis (phone_number, country_code, OWNER)
SELECT phone_number, country_code,OWNER FROM phone_number;
COMMIT;
Now split off from the DEFAULT partition: detach it, create a new partition for the values in the 200 to 300 range, and move the rows back.
BEGIN;
ALTER TABLE phone_number_bis DETACH PARTITION phone_number_bis_default;
ALTER TABLE phone_number_bis_default RENAME TO phone_number_bis_default_old;
CREATE TABLE phone_number_bis_200_300 PARTITION OF phone_number_bis
FOR VALUES IN (221, 222, 223, 224, 225);
CREATE TABLE phone_number_bis_default PARTITION OF phone_number_bis DEFAULT;
INSERT INTO phone_number_bis (phone_number, country_code, OWNER)
SELECT phone_number, country_code, OWNER FROM phone_number_bis_default_old;
COMMIT;
https://www.postgresql.org/docs/current/ddl-partitioning.html
Quote:
Choosing the target number of partitions that the table should be
divided into is also a critical decision to make. Not having enough
partitions may mean that indexes remain too large and that data
locality remains poor which could result in low cache hit ratios.
However, dividing the table into too many partitions can also cause
issues. Too many partitions can mean longer query planning times and
higher memory consumption during both query planning and execution, as
further described below.
Quote:
Another reason to be concerned about having a large number of
partitions is that the server's memory consumption may grow
significantly over time, especially if many sessions touch large
numbers of partitions. That's because each partition requires its
metadata to be loaded into the local memory of each session that
touches it.
More partitions will consume more memory; there is a case discussed here: https://www.postgresql.org/message-id/flat/PH0PR11MB5191F459DCB44A91682FE8C8D6409%40PH0PR11MB5191.namprd11.prod.outlook.com#86aaad1ddd6350efc062c2dd79a31821
As Laurenz Albe and jjanes said, it cannot be done with a plain SQL command.
A PL/pgSQL block seems to be required here:
DO $$
DECLARE
    partition_number INTEGER;
BEGIN
    FOR partition_number IN SELECT DISTINCT country_code FROM phone_number
    LOOP
        EXECUTE format('CREATE TABLE phone_number_bis_%s PARTITION OF phone_number_bis FOR VALUES IN (%s)',
                       partition_number, partition_number);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
INSERT INTO phone_number_bis( phone_number, country_code, owner)
SELECT phone_number, country_code, owner
FROM phone_number; -- No error as partitions have been created before insertion

DB2 INSERT INTO SELECT statement to copy rows into the same table not allowing multiple rows

I've been looking for an answer to this question for a few days and can't find anything referencing this specific issue.
First of all, should it work if I want to use an INSERT INTO SELECT statement to copy rows of a table back into the same table, but with new ids and one of the columns modified?
Example:
INSERT INTO TABLE_A (column1, column2, column3) SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value';
When I try this on a DB2 database, I'm getting the following error:
INVALID MULTIPLE-ROW INSERT. SQLCODE=-533, SQLSTATE=21501, DRIVER=4.18.60
If I run the same statement but put a specific ID in the select statement, ensuring only one row is returned, then the statement works. But that goes against what I'm trying to do, which is copy multiple rows from the same table into itself while updating a specific column to a new value.
Thanks everyone!
It works fine for me without error on Db2 11.1.4.0
CREATE TABLE TABLE_A( column1 int , column2 varchar(16), column3 int)
INSERT INTO TABLE_A values (1,'original value',3)
INSERT INTO TABLE_A (column1, column2, column3)
SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value'
SELECT * FROM TABLE_A
returns
COLUMN1|COLUMN2        |COLUMN3
-------|---------------|-------
      1|original value |      3
      1|value to change|      3
Maybe there is something you are not telling us...
You don't mention your platform and version, but the docs seem pretty clear.
IBM LUW 11.5:
A multiple-row INSERT into a self-referencing table is invalid.
First Google result:
An INSERT operation with a subselect attempted to insert multiple rows
into a self-referencing table. The subselect of the INSERT operation
should return no more than one row of data. System action: The INSERT
statement cannot be executed. The contents of the object table are
unchanged. Programmer response: Examine the search condition of the
subselect to make sure that no more than one row of data is selected.
EDIT: You've apparently got a self-referencing constraint on the table. Ex: an EMPLOYEES table with a MANAGER column defined as a FK referencing back to the EMPLOYEES table.
Db2 simply doesn't support what you are trying to do.
You need to use a temporary table to hold the modified rows.
Alternatively, assuming that your table has a primary key, try using the MERGE statement instead.
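A minimal sketch of the temporary-table route, using Db2 LUW DGTT syntax and the table and column names from the example above. Note that, per the quoted message, a multi-row INSERT ... SELECT into a self-referencing table is itself restricted, so on platforms enforcing SQLCODE -533 the final step may have to insert one row at a time (e.g. from a cursor):
declare global temporary table tmp_a
as (select * from table_a) with no data
on commit preserve rows
not logged;

-- stage the modified copies outside the self-referencing table
insert into session.tmp_a (column1, column2, column3)
select column1, 'value to change', column3
from table_a
where column2 = 'original value';

-- copy the staged rows back (may need to be row-at-a-time, see above)
insert into table_a (column1, column2, column3)
select column1, column2, column3
from session.tmp_a;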

Does PostgreSQL have an equivalent to SAS' obsnum?

When converting scripts, tables, datasets, etc. from a SAS environment to a PostgreSQL environment, is there an equivalent to referencing SAS' obsnum in PostgreSQL? For example, if a query says:
SELECT * FROM schema.table
WHERE obsnum = 1
Is there a way to track the observation number or similar in PostgreSQL? Or should a different approach be taken?
Thanks.
I should probably specify that I was told obsnum is a built-in SAS value associated with datasets and tables, and that in my SAS scripts there is no declaration for obsnum, only a single reference to it in a SELECT statement.
OBSNUM isn't an automatic variable in SAS, so it's a variable with actual values in your data.
You should be able to use a similar query in Postgres to filter on rows where that variable's value is 1.
To add on: PROC SQL doesn't have an automatic variable for row numbering; it can use monotonic() for row numbers, but that function is unsupported.
(wrong answer):
Try CTID
See your previous question, the first answer and the comments/questions.
What exactly is the/this data statement in SAS doing? PostgreSQL equivalent?
PostgreSQL
http://www.postgresql.org/docs/8.2/static/ddl-system-columns.html
-- make(fake) a dataset
CREATE TABLE dataset
( val double precision NOT NULL
);
-- populate it with random
INSERT INTO dataset(val)
SELECT random()
FROM generate_series(1,100)
;
-- Use row_number() to enumerate the unordered tuples
-- note the subquery. It is needed because otherwise
-- you cannot refer to row_number
SELECT * FROM (
SELECT val
, row_number() OVER() AS obsnum
FROM dataset
) qq -- subquery MUST have an alias
WHERE qq.obsnum = 1
;
-- you can just as well order over ctid (the result is the same)
SELECT * FROM (
SELECT val
, row_number() OVER(ORDER BY ctid) AS obsnum
FROM dataset
) qq -- subquery MUST have an alias
WHERE qq.obsnum = 1
;
-- Instead of a subquery you could use
-- a CTE to wrap the enumeration part
WITH zzz AS (
SELECT val
, row_number() OVER(ORDER BY ctid) AS obsnum
FROM dataset
)
SELECT * FROM zzz
WHERE obsnum = 1
;
-- Or, if you just want one observation: use LIMIT
-- (order of records is not defined,
-- but the order of ctid is not stable either)
SELECT * FROM dataset
LIMIT 1
;

DB2 - REPLACE INTO SELECT from table

Is there a way in DB2 where I can replace the entire table with just selected rows from the same table?
Something like: REPLACE INTO tableName SELECT * FROM tableName WHERE col1='a';
(I could export the selected rows, delete the entire table, and load/import again, but I want to avoid these steps and use a single query.)
Original table (replace all rows with just those where col1 = 'a'):
col1  col2
a     0
a     1
b     2
c     3
Desired resultant table:
col1  col2
a     0
a     1
Any help appreciated!
Thanks.
This is a duplicate of my answer to your duplicate question:
You can't do this in a single step. The locking required to truncate the table precludes querying the table at the same time.
The best option you would have is to declare a global temporary table (DGTT), insert the rows you want to keep into it, truncate the source table, and then insert the rows from the DGTT back into the source table. Something like:
declare global temporary table t1
as (select * from schema.tableName where ...)
with no data
on commit preserve rows
not logged;
insert into session.t1 select * from schema.tableName where ...;
truncate table schema.tableName immediate;
insert into schema.tableName select * from session.t1;
I know of no way to do what you're asking in one step...
You'd have to select out to a temporary table then copy back.
But I don't understand why you'd need to do this in the first place. Let's assume there was a REPLACE TABLE command...
REPLACE TABLE mytbl WITH (
SELECT * FROM mytbl
WHERE col1 = 'a' AND <...>
)
Why not simply delete the inverse set of rows...
DELETE FROM mytbl
WHERE NOT (col1 = 'a' AND <...>)
Note the comparisons done in the WHERE clause are exactly the same; you just wrap them in NOT ( ) to delete the rows you don't want to keep.
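Applied to the example above, where the only condition to keep is col1 = 'a', that is simply:
DELETE FROM mytbl
WHERE NOT (col1 = 'a')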

Export data from db2 with column names

I want to export data from DB2 tables to CSV format. I also need the first row to contain the column names.
I have had little success using the following command:
EXPORT TO "TEST.csv"
OF DEL
MODIFIED BY NOCHARDEL coldel: ,
SELECT col1,'COL1',x'0A',col2,'COL2',x'0A'
FROM TEST_TABLE;
But with this I get data like:
Row1 Value:COL1:
Row1 Value:COL2:
Row2 Value:COL1:
Row2 Value:COL2:
etc.
I also tried the following query
EXPORT TO "TEST.csv"
OF DEL
MODIFIED BY NOCHARDEL
SELECT 'COL1',col1,'COL2',col2
FROM ADMIN_EXPORT;
But this lists the column name with each row of data when opened with Excel.
Is there a way I can get data in the format below
COL1   COL2
value  value
value  value
when opened in Excel?
Thanks
After days of searching I solved this problem this way:
EXPORT TO ...
SELECT 1 as id, 'COL1', 'COL2', 'COL3' FROM sysibm.sysdummy1
UNION ALL
(SELECT 2 as id, COL1, COL2, COL3 FROM myTable)
ORDER BY id
You can't select a constant string in DB2 from nothing, so you have to select from sysibm.sysdummy1.
To have the manually added column names in the first row, you have to add a pseudo-id and sort the UNION result by that id. Otherwise the header row can end up at the bottom of the resulting file.
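For example, combined with the EXPORT syntax from the question, a sketch (the VARCHAR(32) casts are assumptions, needed so the UNION branch types match the header strings; note the pseudo-id column will also appear in the output file):
EXPORT TO "TEST.csv" OF DEL MODIFIED BY NOCHARDEL
SELECT 1 AS id, 'COL1', 'COL2' FROM sysibm.sysdummy1
UNION ALL
SELECT 2 AS id, CAST(col1 AS VARCHAR(32)), CAST(col2 AS VARCHAR(32))
FROM TEST_TABLE
ORDER BY id;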
Quite an old question, but I recently encountered a similar one and realized this can be achieved much more easily in the 11.5 release with the EXTERNAL TABLE feature; see the answer here:
https://stackoverflow.com/a/57584730/11946299
Example:
$ db2 "create external table '/home/db2v115/staff.csv'
using (delimiter ',' includeheader on) as select * from staff"
DB20000I The SQL command completed successfully.
$ head /home/db2v115/staff.csv | column -t -s ','
ID NAME DEPT JOB YEARS SALARY COMM
10 Sanders 20 Mgr 7 98357.50
20 Pernal 20 Sales 8 78171.25 612.45
30 Marenghi 38 Mgr 5 77506.75
40 O'Brien 38 Sales 6 78006.00 846.55
50 Hanes 15 Mgr 10 80659.80
60 Quigley 38 Sales 66808.30 650.25
70 Rothman 15 Sales 7 76502.83 1152.00
80 James 20 Clerk 43504.60 128.20
90 Koonitz 42 Sales 6 38001.75 1386.70
Insert the column names as the first row in your table.
Use order by to make sure that the row with the column names comes out first.