INSERT INTO table if exists or else CREATE TABLE - postgresql

I have a pgscript that runs in a loop.
SET @id = 1;
WHILE @id <= 10
BEGIN
/*
CREATE TABLE tbl_name AS
SELECT * FROM main_tbl
WHERE id = @id
;
INSERT INTO tbl_name
SELECT * FROM main_tbl
WHERE id = @id
*/
SET @id = @id + 1;
END
For the first iteration, id = 1, I want my script to create the table, because it does not exist yet. For the following iterations, id = 2, 3, 4 and so on, I want the script to insert into the table that was just created.
Currently I am creating an empty table before running the pgScript. Is there a way to write the script so that it creates the table on the first iteration and inserts on the subsequent ones?

Try this:
CREATE TABLE IF NOT EXISTS words (
id SERIAL PRIMARY KEY,
word VARCHAR(500) NOT NULL
);
INSERT INTO words
VALUES(DEFAULT, '1234');
SELECT *
FROM words;
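Applied to the loop in the question, a minimal sketch of the loop body (using the main_tbl/tbl_name names from the question; CREATE TABLE IF NOT EXISTS needs PostgreSQL 9.1 or later) could be:
CREATE TABLE IF NOT EXISTS tbl_name (LIKE main_tbl INCLUDING ALL);
INSERT INTO tbl_name
SELECT * FROM main_tbl
WHERE id = @id;
The first iteration creates an empty table with main_tbl's definition; every iteration, including the first, then runs the plain INSERT.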

Randomly loop through existing table column, to create fictitious data for another table

I have a couple of tables with ten rows each, and I want good T-SQL queries to build more rows in these specific tables.
I know the RAND function works for selecting random int values.
CREATE Table Occupation
(
Id int identity primary key,
Designation nvarchar(50),
country nvarchar(50)
)
Declare @Id int
Set @Id = 1
While @Id <= 10000
Begin
Insert Into Occupation values ('Designation - ' + CAST(@Id as nvarchar(10)),
'Country - ' + CAST(@Id as nvarchar(10)) + ' name')
Print @Id
Set @Id = @Id + 1
End
but where Designation is, I need it to randomly choose one of the values below, 10,000 times (over and over until @Id reaches 10,000):
PK Designation
---------------------------------------
1 Aquarium Process Controller
2 Assistant Plant Operator
3 Boilermaker
4 Casual
5 Casual laborer
6 Casual worker
7 Cat operator
8 Cleaner
9 FLOORS CLEANER
10 G foreman
Loopless, 100% set-based solution. Ideas borrowed from Zohar Peled's comment on tally tables:
CREATE Table #Occupation
(
Id int identity primary key,
Designation nvarchar(50),
country nvarchar(50)
);
DECLARE @designation TABLE
(
Designation nvarchar(50)
);
INSERT @designation (Designation) VALUES
('Aquarium Process Controller')
,('Assistant Plant Operator')
,('Boilermaker')
,('Casual')
,('Casual laborer')
,('Casual worker')
,('Cat operator')
,('Cleaner')
,('FLOORS CLEANER')
,('G foreman');
INSERT INTO #Occupation
SELECT tmp.Designation, 'Country' + CAST(NTILE(10) OVER(ORDER BY NEWID()) AS VARCHAR)
FROM
(
SELECT TOP(10000) Designation
FROM @designation
CROSS JOIN [master].sys.all_columns ac1
) AS tmp;
The main trick, commonly used for obtaining 'random' records, is to order by NEWID(). Another trick is to split the records into 10 groups with NTILE(10) to get a number for the country name.
The CROSS JOIN is an idea from this tally-table link. It is simply a way to repeat a sequence of records by cross joining it with a table that is guaranteed to have plenty of rows. TOP stops the cross join from being evaluated in full.
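As a quick sanity check on the #Occupation temp table filled above, you could look at how the 10,000 rows ended up distributed across the designations (this check is mine, not part of the original answer):
SELECT Designation, COUNT(*) AS rows_per_designation
FROM #Occupation
GROUP BY Designation
ORDER BY rows_per_designation DESC;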
Here is what you are looking for. First I create a temp table to store the data for the lookup table, then query that temp table via its PK using a randomly generated number (between 1 and 10):
CREATE TABLE #tmpData (PK INT, Designation VARCHAR(64))
INSERT INTO #tmpData (PK, Designation)
VALUES (1, 'Aquarium Process Controller') -- add the rest of the records below (PK 2 to 10)
CREATE Table Occupation
(
Id int identity primary key,
Designation nvarchar(50),
country nvarchar(50)
)
Declare @Id int
Set @Id = 1
DECLARE @Designation AS VARCHAR(64)
While @Id <= 10000
Begin
SET @Designation =
(SELECT Designation
FROM #tmpData
OUTER APPLY (SELECT CAST(FLOOR(RAND()*10) + 1 AS int)) CxA(RandomNumber) -- creates a random integer between 1 and 10
WHERE PK = CxA.RandomNumber)
Insert Into Occupation values ( @Designation ,
'Country - ' + CAST(@Id as nvarchar(10)) + ' name')
Print @Id
Set @Id = @Id + 1
End

Create trigger that inserts data after update on specific column

So I want to insert data into the history_rent table and delete data from the rent table after the status_peminjaman column on the rent table is updated. I have already created a trigger but it is not being fired.
CREATE OR ALTER TRIGGER AfterUpdateStatus on dbo.peminjaman
FOR UPDATE
AS DECLARE
@nama_peminjam varchar(100),
@tanggal_pinjam datetime,
@tanggal_kemblali datetime,
@nama_guru varchar(100),
@status_peminjaman varchar(50),
@kode_barang varchar(255);
SELECT @nama_peminjam = ins.nama_peminjam FROM INSERTED ins;
SELECT @tanggal_pinjam = ins.tanggal_pinjam FROM INSERTED ins;
SELECT @tanggal_kembali = ins.tanggal_kembali FROM INSERTED ins;
SELECT @nama_guru = ins.nama_guru FROM INSERTED ins;
SELECT @kode_barang = ins.kode_barang FROM INSERTED ins;
SELECT @status_peminjaman = ins.status_peminjaman FROM INSERTED ins;
IF UPDATE(status_peminjaman)
BEGIN
SET @status_peminjaman = 'Selesai'
END
INSERT INTO dbo.history_peminjaman
VALUES(@nama_peminjam,@tanggal_pinjam,@tanggal_kembali,@nama_guru,@kode_barang,@status_peminjaman);
PRINT 'TRIGEREDDDDDDDDD'
GO
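No answer is recorded here, but two things stand out in the trigger above: the declared variable @tanggal_kemblali does not match the @tanggal_kembali used later (that typo alone stops the trigger from being created), and assigning scalar variables from INSERTED only handles a single updated row. A set-based sketch of the described intent (archive to dbo.history_peminjaman, then delete from dbo.peminjaman; the column list and the kode_barang join key are assumptions based on the question) might look like:
CREATE OR ALTER TRIGGER AfterUpdateStatus ON dbo.peminjaman
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF UPDATE(status_peminjaman)
BEGIN
-- archive every updated row, forcing the final status
INSERT INTO dbo.history_peminjaman (nama_peminjam, tanggal_pinjam, tanggal_kembali, nama_guru, kode_barang, status_peminjaman)
SELECT i.nama_peminjam, i.tanggal_pinjam, i.tanggal_kembali, i.nama_guru, i.kode_barang, 'Selesai'
FROM INSERTED i;
-- then remove the archived rows from the live table
DELETE p
FROM dbo.peminjaman p
JOIN INSERTED i ON i.kode_barang = p.kode_barang;
END
END
GO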

Make duplicate row in Postgresql

I am writing a migration script to migrate a database. I have to duplicate a row by incrementing the primary key, keeping in mind that different databases can have any number of different columns in the table, so I can't write out each and every column in the query. If I simply copy the row, I get a duplicate key error.
Query: INSERT INTO table_name SELECT * FROM table_name WHERE id=255;
ERROR: duplicate key value violates unique constraint "table_name_pkey"
DETAIL: Key (id)=(255) already exists.
Here, it is nice that I don't have to mention all the column names and can select all columns with *. But at the same time I get the duplicate key error.
What is the solution to this problem? Any help would be appreciated. Thanks in advance.
If you are willing to type all column names, you may write
INSERT INTO table_name (
pri_key
,col2
,col3
)
SELECT (
SELECT MAX(pri_key) + 1
FROM table_name
)
,col2
,col3
FROM table_name
WHERE id = 255;
Another option (without typing all the columns, as long as you know the primary key column) is to CREATE a temp table, update it, and re-insert within a transaction.
BEGIN;
CREATE TEMP TABLE temp_tab ON COMMIT DROP AS SELECT * FROM table_name WHERE id=255;
UPDATE temp_tab SET pri_key_col = ( select MAX(pri_key_col) + 1 FROM table_name );
INSERT INTO table_name select * FROM temp_tab;
COMMIT;
This is just a DO block but you could create a function that takes things like the table name etc as parameters.
Setup:
CREATE TABLE public.t1 (a TEXT, b TEXT, c TEXT, id SERIAL PRIMARY KEY, e TEXT, f TEXT);
INSERT INTO public.t1 (e) VALUES ('x'), ('y'), ('z');
Code to duplicate values without the primary key column:
DO $$
DECLARE
_table_schema TEXT := 'public';
_table_name TEXT := 't1';
_pk_column_name TEXT := 'id';
_columns TEXT;
BEGIN
SELECT STRING_AGG(column_name, ',')
INTO _columns
FROM information_schema.columns
WHERE table_name = _table_name
AND table_schema = _table_schema
AND column_name <> _pk_column_name;
EXECUTE FORMAT('INSERT INTO %1$s.%2$s (%3$s) SELECT %3$s FROM %1$s.%2$s', _table_schema, _table_name, _columns);
END $$;
The query it creates and runs is: INSERT INTO public.t1 (a,b,c,e,f) SELECT a,b,c,e,f FROM public.t1. It selects all the columns apart from the PK one. You could put this code in a function and use it for any table you wanted, or just use it like this and edit it for whatever table you need.
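Wrapped up as the parameterised function mentioned above, a sketch could look like the following (the name duplicate_row and the assumption that the primary key is a single integer-like column are mine):
CREATE OR REPLACE FUNCTION duplicate_row(_table_schema TEXT, _table_name TEXT, _pk_column_name TEXT, _pk_value BIGINT)
RETURNS void AS $$
DECLARE
_columns TEXT;
BEGIN
-- collect every column except the primary key column
SELECT STRING_AGG(quote_ident(column_name::text), ',')
INTO _columns
FROM information_schema.columns
WHERE table_name = _table_name
AND table_schema = _table_schema
AND column_name <> _pk_column_name;
-- copy the row, letting the PK default (e.g. a sequence) generate the new key
EXECUTE FORMAT('INSERT INTO %I.%I (%s) SELECT %s FROM %I.%I WHERE %I = $1',
_table_schema, _table_name, _columns, _columns,
_table_schema, _table_name, _pk_column_name)
USING _pk_value;
END;
$$ LANGUAGE plpgsql;
For the setup above, SELECT duplicate_row('public', 't1', 'id', 2); would copy the row with id = 2 and let the serial column assign a new id.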

Trigger not saving historicals

Version: SQL Server 2008 R2
This trigger is supposed to check for primary key violations; if there is one, it moves the existing row to the history table, deletes it, and then inserts the row that caused the violation.
But it is not doing its job.
CREATE TRIGGER [dbo].[ONINSERT]
ON [dbo].[TICKETS]
Instead of INSERT
AS
BEGIN
DECLARE @ID VARCHAR(200)
SET NOCOUNT ON;
SET @ID = (SELECT TICKET_ID FROM inserted)
INSERT TICKET_HISTORY
SELECT * FROM TICKETS
WHERE
TICKET_ID = @ID ;
print 'Inserting ' + @id
DELETE FROM TICKETS
WHERE TICKET_ID = @ID;
print 'Deleting' + @id
INSERT TICKETS
SELECT * FROM inserted;
print 'Inserting back' + @id
END
Triggers fire only once even if you insert multiple rows, so in your case I don't see that working... You don't need to delete the row again, since INSTEAD OF INSERT generally means "instead of the insert, do some custom action".
if exists
(
select 1 from inserted i
join
mytable t
on t.id=i.id
)
begin
--insert into tickets history
insert into ticketshistory
select i.* from inserted i
join
mytable t on t.id=i.id
---you can add custom logic to alert or simply ignore
return
end
--this is needed since if there is no violation you have to insert rows
insert into tickets
select * from inserted
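If you do want the replace-and-archive behaviour the question describes (rather than ignoring the colliding insert), a multi-row-safe sketch might be the following; it assumes TICKETS and TICKET_HISTORY have matching column lists, as the question's own INSERT ... SELECT * does:
CREATE TRIGGER [dbo].[ONINSERT]
ON [dbo].[TICKETS]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
-- archive every existing row whose TICKET_ID collides with an incoming one
INSERT INTO TICKET_HISTORY
SELECT t.* FROM TICKETS t
JOIN inserted i ON i.TICKET_ID = t.TICKET_ID;
-- remove the archived rows
DELETE t FROM TICKETS t
JOIN inserted i ON i.TICKET_ID = t.TICKET_ID;
-- finally perform the insert the trigger replaced
INSERT INTO TICKETS
SELECT * FROM inserted;
END
GO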

Update column of an inserted row with its generated id in a single query

Say I have a table, created as follows:
CREATE TABLE test_table (id serial, unique_id varchar(50) primary key, name varchar(50));
test_table
----------
id | unique_id | name
In that table, I would like to update the unique_id field with the newly inserted id concatenated with the inserted name in a single go.
Usually this is accomplished by two queries. (PHP way)
$q = "INSERT INTO table (unique_id,name) values ('uid','abc') returning id||name as unique_id;";
$r = pg_query($dbconn,$q);
$row = pg_fetch_array($r);
$q1 = "UPDATE test_table set unique_id =".$row['unique_id']." where unique_id='uid'";
$r1 = pg_query($dbconn,$q1);
Is there any way to do the above in a single query?
You have several options here. You could create an AFTER trigger which uses the generated ID for a direct update of the same row:
CREATE TRIGGER test_table_insert AFTER INSERT ON test_table FOR EACH ROW EXECUTE PROCEDURE test_table_insert();
And in your function you update the value:
CREATE FUNCTION test_table_insert() RETURNS TRIGGER AS $$
BEGIN
UPDATE test_table SET unique_id = NEW.id::text || NEW.name WHERE id = NEW.id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
You need to create the function before the trigger.
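With the function and trigger in place, a plain insert then fills unique_id by itself; a quick check (using the test_table definition from the question) might be:
INSERT INTO test_table (unique_id, name) VALUES ('uid', 'abc');
SELECT id, unique_id, name FROM test_table;
-- unique_id now holds the generated id concatenated with the name, e.g. '1abc'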
Another option would be to do it directly in the insert:
INSERT INTO test_table (id, unique_id, name) VALUES (nextval('test_table_id_seq'), currval('test_table_id_seq')::text || 'abc', 'abc') RETURNING id;
But as a_horse_with_no_name pointed out, I think you may have a problem in your database design.