Get the list of primary keys and corresponding table name - db2

In db2 how can I get the list of primary keys and corresponding table name for a particular db schema?
I have found a query to get the primary key columns of a table, like:
SELECT sc.name
FROM SYSIBM.SYSCOLUMNS SC
WHERE SC.TBNAME = 'REGISTRATION'
AND sc.identity ='N'
AND sc.tbcreator='schemaname'
AND sc.keyseq=1
Can I modify the same query to get the complete list of primary keys, with column name and table name, for a whole schema?

SELECT
tabschema, tabname, colname
FROM
syscat.columns
WHERE
keyseq IS NOT NULL AND
keyseq > 0
ORDER BY
tabschema, tabname, keyseq
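On DB2 for LUW, the same information is also exposed through the constraint catalog views. A sketch, assuming your version has SYSCAT.KEYCOLUSE and SYSCAT.TABCONST (replace 'MYSCHEMA' with your schema name):

```sql
SELECT k.tabschema, k.tabname, k.colname, k.colseq
FROM syscat.keycoluse k
JOIN syscat.tabconst c
  ON c.constname = k.constname
 AND c.tabschema = k.tabschema
 AND c.tabname   = k.tabname
WHERE c.type = 'P'              -- 'P' = primary key constraint
  AND k.tabschema = 'MYSCHEMA'
ORDER BY k.tabname, k.colseq;
```

This restricts the result to primary-key constraints only, whereas keyseq in syscat.columns covers any column that participates in the primary key.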

Related

How to specify native table fields and a foreign table field during an insert?

Suppose the following:
create table member (
id serial primary key,
member_name varchar(55),
role_ids bigint[] not null
);
create table role (
id serial primary key,
role_name varchar(55) unique
);
insert into role values (1, 'admin'), (2, 'common');
I can create an admin member like this:
insert into member (role_ids)
select ARRAY[id] as role_id from role where role_name = 'admin';
But how can I specify other fields, like member_name as well?
I've tried this:
insert into member (member_name, role_ids) values('test member', role_ids)
select ARRAY[id::bigint] as role_id from role where role_name = 'admin';
But this throws an error: syntax error at or near "select".
In your case I would probably choose to use a nested SELECT inside the VALUES, to emphasize that this is a lookup that you expect to succeed and return only one value:
insert into member (member_name, role_ids)
values('test member',
(select ARRAY[id] from role where role_name = 'admin'));
This wouldn't work if you were selecting more than one column in your select. Another solution would be to just use SELECT and no VALUES, because nothing stops you from returning literal values in your SELECT. You don't name the columns in the select for your insert, instead you order them to match the order of the columns in the insert:
insert into member (member_name, role_ids)
select 'test member', ARRAY[id] from role where role_name = 'admin';

I'm getting "column "my_column" contains null values" when adding a composite primary key

Is it not supposed to delete null values before altering the table? I'm confused...
My query looks roughly like this:
BEGIN;
DELETE FROM my_table
WHERE my_column IS NULL;
ALTER TABLE my_table DROP CONSTRAINT my_table_pk;
ALTER TABLE my_table ADD PRIMARY KEY (id, my_column);
-- this is to repopulate the data afterwards
INSERT INTO my_table (name, other_table_id, my_column)
SELECT
ya.name,
ot.id,
my_column
FROM other_table ot
LEFT JOIN yet_another ya
ON ya.id = ot."fileId"
WHERE NOT EXISTS (
SELECT
1
FROM my_table mt
WHERE ot.id = mt.other_table_id AND ot.my_column = mt.my_column
) AND my_column IS NOT NULL;
COMMIT;
(Sorry about the naming.)
There are two possible explanations:
1. A concurrent session inserted a new row with a NULL value in my_column between the start of the DELETE and the start of the ALTER TABLE. To avoid that, lock the table in SHARE mode before you DELETE.
2. There is a row where id has a NULL value.
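Applied to the transaction above, the lock would go right after BEGIN. A sketch (SHARE mode blocks concurrent writes for the duration of the transaction but still allows reads):

```sql
BEGIN;
-- Prevent concurrent sessions from inserting new NULL rows
-- between the DELETE and the ALTER TABLE.
LOCK TABLE my_table IN SHARE MODE;
DELETE FROM my_table
WHERE my_column IS NULL;
ALTER TABLE my_table DROP CONSTRAINT my_table_pk;
ALTER TABLE my_table ADD PRIMARY KEY (id, my_column);
COMMIT;
```

Note that a primary key also requires id to be NOT NULL, which is why explanation 2 produces the same error.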

Conditionally insert from one table into another

The same name may appear in multiple rows of table1. I would like to enumerate all names in sequential order 1, 2, ... One way to do so is to:
1. Create a new table with name as primary key and id as a serial column.
2. Select name from table1 and insert it into table2 only when it doesn't already exist.
table1 (name varchar(50), ...)
table2 (name varchar(50) primary key, id serial)
insert into table2(name)
select name
from table1 limit 9
where not exists (select name from table2 where name = table1.name)
This doesn't work. How to fix it?
Just select distinct values:
insert into table2(name)
select distinct name
from table1
order by name;
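If table2 may already contain some of the names (for example on repeated runs), PostgreSQL 9.5+ can skip duplicates at insert time instead of the NOT EXISTS subquery. A sketch:

```sql
insert into table2(name)
select distinct name
from table1
-- Rows whose name already exists in table2 are silently skipped.
on conflict (name) do nothing;
```

The serial id column still assigns numbers only to newly inserted names, so existing enumerations are preserved.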

Updating a VARCHAR column with multiple SELECT statements

I have to update a VARCHAR column of a table by concatenating values from SELECT queries on another table. I have built a query like
UPDATE url SET VALUE = (SELECT id FROM ids WHERE identifier='site')':'(SELECT id FROM cids WHERE identifier='cid')
WHERE name='SToken:CToken'
AND tokenvalue LIKE (SELECT id FROM ids WHERE identifier='site');
Here value is VARCHAR.
How should I do this?
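One way, sketched under the assumption that your database supports the standard || string concatenation operator: wrap each subquery in parentheses and concatenate the scalar results, casting to a character type if id is numeric.

```sql
UPDATE url
SET value = CAST((SELECT id FROM ids  WHERE identifier = 'site') AS VARCHAR(32))
            || ':' ||
            CAST((SELECT id FROM cids WHERE identifier = 'cid')  AS VARCHAR(32))
WHERE name = 'SToken:CToken'
  AND tokenvalue LIKE (SELECT id FROM ids WHERE identifier = 'site');
```

Each scalar subquery must return at most one row, or the UPDATE will fail. Also note that LIKE without wildcard characters behaves like plain equality, so = may express the intent of the last condition more clearly.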

Ambiguous column in PostgreSQL UPSERT (writeable CTE) using one table to update another

I have a table called users_import into which I am parsing and importing a CSV file. Using that table I want to UPDATE my users table if the user already exists, or INSERT if it does not already exist. (This is actually a very simplified example of something much more complicated I'm trying to do.)
I am trying to do something very similar to this:
https://stackoverflow.com/a/8702291/912717
Here are the table definitions and query:
CREATE TABLE users (
id INTEGER NOT NULL UNIQUE PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE users_import (
id INTEGER NOT NULL UNIQUE PRIMARY KEY,
name TEXT NOT NULL
);
WITH upsert AS (
UPDATE users AS u
SET
name = i.name
FROM users_import AS i
WHERE u.id = i.id
RETURNING *
)
INSERT INTO users (name)
SELECT id, name
FROM users_import
WHERE NOT EXISTS (SELECT 1 FROM upsert WHERE upsert.id = users_import.id);
That query gives this error:
psql:test.sql:23: ERROR: column reference "id" is ambiguous
LINE 11: WHERE NOT EXISTS (SELECT 1 FROM upsert WHERE upsert.id = us...
^
Why is id ambiguous and what is causing it?
The RETURNING * in the WITH upsert... clause returns all columns from users plus all columns from the joined table users_import. The result therefore has two columns named id and two columns named name, hence the ambiguity when referring to upsert.id.
To avoid that, use RETURNING u.id if you don't need the rest of the columns.
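Putting that together, a sketch of the corrected statement (this also aligns the INSERT column list with the SELECT, which selects both id and name):

```sql
WITH upsert AS (
    UPDATE users AS u
    SET name = i.name
    FROM users_import AS i
    WHERE u.id = i.id
    -- Qualify the column so the CTE result has exactly one "id".
    RETURNING u.id
)
INSERT INTO users (id, name)
SELECT id, name
FROM users_import
WHERE NOT EXISTS (SELECT 1 FROM upsert WHERE upsert.id = users_import.id);
```

On PostgreSQL 9.5+, INSERT ... ON CONFLICT (id) DO UPDATE is usually a simpler and race-free alternative to this writable-CTE pattern.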