Error in Oracle event query - oracle10g

Here I am using a query to insert a value into a table every 1 minute via an event, but it shows "Invalid SQL Statement". I am using Oracle 10g.
CREATE EVENT test_event_02
ON SCHEDULE AT SYSTIMESTAMP + INTERVAL '1' MINUTE
ON COMPLETION PRESERVE
DO
insert into tablename values('shree2',SYSTIMESTAMP);
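Note that CREATE EVENT ... ON SCHEDULE is MySQL syntax, which is why Oracle raises "Invalid SQL Statement". In Oracle 10g the closest equivalent is a DBMS_SCHEDULER job — a sketch, reusing the tablename and values from the question:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'test_event_02',
    job_type        => 'PLSQL_BLOCK',
    -- the PL/SQL block the job runs on every firing
    job_action      => 'BEGIN INSERT INTO tablename VALUES (''shree2'', SYSTIMESTAMP); COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    -- calendar syntax: fire once per minute
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE
  );
END;
/
```

The job can later be stopped with `DBMS_SCHEDULER.DROP_JOB('test_event_02')`.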

Related

Rollback doesn't work with Amazon Redshift

I am practicing with Redshift. I have created a table, inserted values into it from another table, and then deleted the data from the table.
I have tried to roll back both of these steps, but it doesn't work. What is wrong here? I don't understand.
Open two psql terminals connected to the same Redshift instance and database, say terminal-1 and terminal-2.
Execute the following queries on terminal-1.
create table sales(
salesid integer not null Identity,
commission decimal(8,2),
saledate date,
description varchar(255),
created_at timestamp default sysdate,
updated_at timestamp);
begin;
insert into sales(commission,saledate,description,created_at,updated_at) values('3.55','2018-12-10','Test description','2018-05-17 23:54:51','2018-05-17 23:54:51');
insert into sales(commission,saledate,description,created_at,updated_at) values('5.67','2018-11-10','Test description1','2018-05-17 23:54:51','2018-05-17 23:54:51');
Hold on here and go to terminal-2 (don't close terminal-1), and execute the following query:
select * from sales;
You will not see the two records inserted from terminal-1.
Hold on here, go back to terminal-1, and execute the query below.
commit;
Hold on here and go to terminal-2; execute the following query again:
select * from sales;
Now you will see both records.
Point proven.
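Conversely, ROLLBACK does work as long as it is issued before the COMMIT — a minimal sketch in the same session, using the same sales table (the inserted values are illustrative):

```sql
begin;
insert into sales(commission,saledate,description,created_at,updated_at)
values('9.99','2018-12-11','Rolled back row','2018-05-17 23:54:51','2018-05-17 23:54:51');
rollback;
-- the row above is discarded; only previously committed rows remain
select * from sales;
```

If each statement is run with autocommit on (the psql default), there is no open transaction left to roll back, which is the usual reason "rollback doesn't work".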

postgres SKIP LOCKED not working

Below are the steps I followed to test SKIP LOCKED:
Open a SQL console in some Postgres UI client
Connect to the Postgres DB
Execute the queries:
CREATE TABLE t_demo AS
SELECT *
FROM generate_series(1, 4) AS id;
check that rows were created in the table:
TABLE t_demo
select rows using the query below:
SELECT *
FROM t_demo
WHERE id = 2
FOR UPDATE SKIP LOCKED;
it returns the row with id = 2
Now execute the above query again:
SELECT *
FROM t_demo
WHERE id = 2
FOR UPDATE SKIP LOCKED;
I expected this second query to return no rows, but it also returns the row with id = 2
https://www.postgresql.org/docs/current/static/sql-select.html#SQL-FOR-UPDATE-SHARE
To prevent the operation from waiting for other transactions to
commit, use either the NOWAIT or SKIP LOCKED option
(emphasis mine)
If you run both queries in one window, you are probably either running both in one transaction (then your next statement is not "another transaction"), or autocommitting after each statement (the default). In the latter case the first statement's transaction is committed before the second starts, so the lock is released and you observe no effect.
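To see SKIP LOCKED take effect, the two queries must run in two separate sessions with both transactions still open — for example:

```sql
-- session 1: take the row lock and keep the transaction open
BEGIN;
SELECT * FROM t_demo WHERE id = 2 FOR UPDATE SKIP LOCKED;  -- returns the row and locks it

-- session 2 (a second connection), while session 1 is still open
BEGIN;
SELECT * FROM t_demo WHERE id = 2 FOR UPDATE SKIP LOCKED;  -- returns no rows: the locked row is skipped
COMMIT;

-- session 1
COMMIT;  -- releases the lock
```

Without SKIP LOCKED, session 2 would instead block until session 1 commits or rolls back.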

AFTER INSERT trigger causes query execution to hang up

In a ms sql database I have a table named combo where multiple inserts, updates and deletes can happen (as well as single, of course). In another table named migrimi_temp I keep track of these changes in the form of queries (query that would have to be executed in mysql to achieve the same result).
For example, if a delete query is performed for all rows where id > 50, the trigger should activate to store the following query into the log table:
DELETE FROM combo where id > 50;
Therefore this one delete query in the combo table would result in one row in the log table.
But if instead I have an insert query inserting 2 rows, a trigger should activate to store each insert into the log table. So this one insert query in the combo table would result into 2 new rows in the log table.
I intend to handle insert, update and delete actions in separate triggers. I had managed to write triggers for single-row insert/update/delete. Then it occurred to me that multi-row actions might be performed too.
This is my attempt to handle the case of multiple inserts in one single query. I resorted to using cursors after not being able to adapt the initial trigger without a cursor. The trigger is created successfully, but when I perform an insert (single or multiple rows) the execution hangs indefinitely, or at least longer than is reasonable.
USE [migrimi_test]
GO
/****** Object: Trigger [dbo].[c_combo] Script Date: 12/11/2017 5:33:46 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create TRIGGER [dbo].[u_combo]
ON [migrimi_test].[dbo].[combo]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
DECLARE @c_id INT;
DECLARE @c_name nvarchar(100);
DECLARE @c_duration int;
DECLARE @c_isavailable INT;
DECLARE c CURSOR FOR
SELECT id, name, duration, isvisible FROM inserted
OPEN c
FETCH NEXT FROM c INTO @c_id, @c_name, @c_duration, @c_isavailable
WHILE @@FETCH_STATUS = 0
INSERT INTO [migrimi_temp].[dbo].[sql_query] (query)
VALUES ('INSERT INTO combo (id, name, duration, value, isavailable, createdAt, updatedAt) values ('+CAST(@c_id as nvarchar(50))+', '+'"'+@c_name+'"'+',
'+CAST(@c_duration as nvarchar(50))+', 1, '+CAST(@c_isavailable as nvarchar(50))+', Now(), Now());' )
FETCH NEXT FROM c INTO @c_id, @c_name, @c_duration, @c_isavailable
CLOSE c
END
DEALLOCATE c
GO
SQL Server version is 2012. The OS is Windows Server 2008 (though I doubt that is relevant). I based this mainly on these two resources: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/40f5635c-9034-4e9b-8fd5-c02cec44ce86/how-to-let-trigger-act-for-each-row?forum=sqlgetstarted
and How can I get a trigger to fire on each inserted row during an INSERT INTO Table (etc) SELECT * FROM Table2?
This is part of a larger idea I am trying to accomplish, and until 2 days ago I was totally unfamiliar with triggers. I am trying to balance learning with accomplishing things in a reasonable amount of time, but am not doing so great.
Cursors are notoriously slow in SQL Server.
Instead of using a cursor to loop over the inserted table, you can use INSERT...SELECT, which is a set-based approach. It is much faster and is the recommended way to work in SQL:
CREATE TRIGGER [dbo].[u_combo]
ON [migrimi_test].[dbo].[combo]
AFTER INSERT
AS
BEGIN
INSERT INTO [migrimi_temp].[dbo].[sql_query] (query)
SELECT 'INSERT INTO combo (id, name, duration, value, isavailable, createdAt, updatedAt) values ('+CAST(id as nvarchar(50))+', "'+ name +'",
'+ CAST(duration as nvarchar(50)) +', 1, '+ CAST(isvisible as nvarchar(50))+ ', Now(), Now());'
FROM inserted
END
GO

PostgreSQL table locking conflicts

I'm reading this document on Explicit Locks and when they are automatically used by PostgreSQL.
What happens when one lock conflicts with another? Does the second transaction just wait until the first finishes? Does it abort?
So say some transaction opens up an ACCESS SHARE lock on table called apples. Then say another transaction tries to add a column issuing an ALTER TABLE query which is an ACCESS EXCLUSIVE lock. What happens to the second query? Does it hang? Abort?
The second query ALTER TABLE waits for the first transaction to complete. You can see that in pg_locks.
Select * from pg_locks;
To simulate this scenario:
1) Open three separate SQL Editors in pgAdmin
2) SQL Editor 1: Execute the below statements
BEGIN;
Select * FROM table_name;
3) SQL Editor 2: Check the locks using pg_locks. There should be AccessShareLock
4) SQL Editor 3:
BEGIN;
alter table table_name ADD COLUMN new_column varchar(30);
The window should show Query is running.
5) SQL Editor 2: Check the locks using pg_locks. There should be AccessExclusiveLock
6) SQL Editor 1: Execute END; (i.e. Ending the select transaction)
7) SQL Editor 3: Query (i.e. alter table) should be executed successfully
8) SQL Editor 2: Check the locks using pg_locks. There shouldn't be any 'AccessShare' lock
9) SQL Editor 3: Execute END; (i.e. Ending the alter table transaction)
10) SQL Editor 2: Check the locks using pg_locks. There shouldn't be any 'AccessExclusive' lock.
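To narrow the pg_locks output to the one table from the steps above (table_name is the placeholder used there), you can filter by relation and join pg_stat_activity for the query text — a sketch, assuming a reasonably recent PostgreSQL:

```sql
-- show who holds or waits for locks on table_name
SELECT a.pid, a.query, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'table_name'::regclass;
```

A row with granted = false is the blocked ALTER TABLE waiting for the AccessShare holder to finish.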

Delete record 36 hours after insert in postgres using Triggers

I've stored some PDF exports in the repository which are generated by the scheduler. After 36 hours I need to delete those PDF's.
Table1
id(pk of table2), file_type, data
table2
id, name, label, created_date, updated_date
Now, how can I write a trigger which can delete the records from Table1 and Table2 after 36 hours?
I've written this, but it executes only when an insert is done. I wanted it to run even when no event has occurred.
CREATE OR REPLACE FUNCTION ContentResource_Delete() RETURNS trigger AS $ContentResource_Delete$
BEGIN
delete from jicontentresource jicr USING jiresource jir
where jicr.id = jir.id and jicr.file_type='pdf' and trunc(EXTRACT(EPOCH FROM now() - jir.creation_date)/3600) >= 36;
delete from jiresource where name like '%.pdf' and trunc(EXTRACT(EPOCH FROM now() - creation_date)/3600) >= 36;
RETURN NULL;
END;$ContentResource_Delete$ LANGUAGE plpgsql;
CREATE TRIGGER ContentResource_Delete AFTER INSERT ON jiresource FOR EACH ROW EXECUTE PROCEDURE ContentResource_Delete();
Hi all, I got the solution: instead of using triggers I used pgAgent and configured a job. If anyone faces issues while scheduling against a database other than the default (i.e. postgres), then in the job step the ConnectionType should be Remote, the connection string should use the IP address instead of localhost, specify the password, and provide the database name. Refer to the image.
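For reference, a sketch of the SQL such a pgAgent job step could run on a schedule, assuming the 36-hour requirement and the table/column names from the question:

```sql
-- delete PDF content rows older than 36 hours, then their resource rows
DELETE FROM jicontentresource jicr
USING jiresource jir
WHERE jicr.id = jir.id
  AND jicr.file_type = 'pdf'
  AND jir.creation_date < now() - interval '36 hours';

DELETE FROM jiresource
WHERE name LIKE '%.pdf'
  AND creation_date < now() - interval '36 hours';
```

Unlike a trigger, the job runs on its own schedule, so expired rows are removed even when no insert ever fires.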