Here is the test case and results:
drop table if exists test1;
drop table if exists test2;
drop trigger if exists test1_tr on test1;
drop function if exists tf_test1;
create table test1 (name varchar(8) not null);
create table test2 (name varchar(8) not null);
\echo create trigger function tf_test1
CREATE OR REPLACE FUNCTION tf_test1() RETURNS trigger AS $BODY$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO test2(name) VALUES (NEW.name);
    END IF;
    RETURN NEW;
END
$BODY$
LANGUAGE plpgsql;
\echo create trigger test1_tr
CREATE TRIGGER test1_tr
AFTER INSERT OR UPDATE OR DELETE ON test1 FOR EACH ROW
EXECUTE PROCEDURE tf_test1();
\echo Insert
insert into test1 (name) values ('NAME_001');
insert into test1 (name) values ('NAME_002');
insert into test1 (name) values ('NAME_003');
insert into test1 (name) values ('NAME_004');
\echo Select test1
select * from test1;
\echo Select test2
select * from test2;
---------------------------- output -------------------------------
DROP TABLE
DROP TABLE
DROP TABLE
DROP TABLE
DROP TRIGGER
DROP FUNCTION
CREATE TABLE
CREATE TABLE
create trigger function tf_test1
CREATE FUNCTION
create trigger test1_tr
CREATE TRIGGER
Insert
INSERT 0 1
psql:test3.sql:28: ERROR: cache lookup failed for type 113
CONTEXT: SQL statement "INSERT INTO test2(name) VALUES (NEW.name)"
PL/pgSQL function tf_test1() line 4 at SQL statement
INSERT 0 1
INSERT 0 1
Select test1
name
----------
NAME_001
NAME_003
NAME_004
(3 rows)
Select test2
name
----------
NAME_001
NAME_003
NAME_004
(3 rows)
We have several servers running various flavors of RHEL 7.x. All Postgres instances are v11. This is happening on about half of them, and there doesn't seem to be any consistent RHEL version that is the culprit.
I have queried both pg_class and pg_type for the OID referenced as the missing type. In all cases, the result set is empty.
Any help is appreciated.
I would also appreciate any insight into what's happening inside Postgres. I'm a long-time Oracle DBA but fairly new to Postgres. It seems like an internal Postgres error rather than a code problem, but a web search doesn't turn up much.
Follow-up on this to provide some closure. We had increased the buffer and effective cache sizes (shared_buffers and effective_cache_size) in postgresql.conf and had also turned auditing (the pgaudit extension) on full blast. On the machines where the PG memory parameters exceeded the physical memory and auditing was turned on, we would get cache lookup errors. A clue was that the errors would hop around in the job flow, were not consistent from machine to machine, and were effectively unsquashable: dropping the offending trigger would just cause the cache error somewhere else in the job stream.
For now, we have increased the physical memory of the servers and turned auditing off. The cache lookup errors are gone. Further tuning is needed so we can eventually turn auditing back on.
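For reference, the settings involved were of this general shape (values here are illustrative only, not our actual configuration):
# postgresql.conf -- illustrative values, not our real settings
shared_buffers = 16GB
effective_cache_size = 48GB
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'all'          # auditing turned on full blast
The point is that the memory-related parameters, taken together, assumed more RAM than the affected machines actually had.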
Is it possible to get the records which failed during a COPY command in Snowflake, loading from an internal stage into a Snowflake table?
I am trying to load the error records into an error table during COPY command execution. COPY command used:
COPY INTO table (col1, col2, col3, col4) FROM (SELECT $1, $2, $3, 56 FROM @%table) ON_ERROR = CONTINUE
To get all the bad records, you can run the COPY with VALIDATION_MODE = 'RETURN_ERRORS', then use RESULT_SCAN on the validation run in an INSERT statement.
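A minimal sketch (table names are placeholders; note that a validation-mode run only validates and does not actually load rows):
copy into mytable from @%mytable VALIDATION_MODE = 'RETURN_ERRORS';
insert into my_error_table
select * from table(result_scan(last_query_id()));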
If one of your columns is unique (e.g. col1), maybe you can compare the rows in the table with the rows in the stage:
select $1 from @%table
MINUS
select col1 from table;
Please check the SELECT statement below, run after the COPY command:
select rejected_record from table(validate(test_copy, job_id => '_last'));
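For example, to land the rejected rows in an error table (copy_errors is a hypothetical table with a single VARCHAR column):
insert into copy_errors
select rejected_record from table(validate(test_copy, job_id => '_last'));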
I need to delete a table that has a lowercase name, i.e. academy.
However when I do db2 drop table academy or db2 drop table "academy" I get:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0204N "DB2INST1.ACADEMY" is an undefined name. SQLSTATE=42704
The same command worked for upper case table names though.
When I list my tables with db2 LIST TABLES I get:
Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
AA DB2INST1 T 2016-06-07-14.23.08.927146
MYNEWTABLE DB2INST1 T 2016-06-07-14.29.50.859806
academy DB2INST1 T 2016-06-07-17.05.27.510905
In db2 drop table "academy" quotes get swallowed by the shell. You'll need to escape them:
db2 drop table \"academy\"
or quote the entire statement:
db2 'drop table "academy"'
Try doing a select * from "academy" and see if it even finds the table. If it does, you should be able to drop it using the same quoting.
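For example (quoting the whole statement so the shell preserves the double quotes):
db2 'select * from "academy"'
db2 'drop table "academy"'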
I'm not sure if it's standard SQL:
INSERT INTO tblA
(SELECT id, time
FROM tblB
WHERE time > 1000)
What I'm looking for is: what if tblA and tblB are on different DB servers?
Does PostgreSQL provide any utility, or have any functionality, that will help to use an INSERT query with the PGresult struct?
I mean, SELECT id, time FROM tblB ... will return a PGresult* when using PQexec. Is it possible to use this struct in another PQexec to execute an INSERT command?
EDIT:
If that's not possible, then I would go for extracting the values from the PGresult* and building a multi-row INSERT statement like:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
Is it possible to create a prepared statement out of this? :(
As Henrik wrote, you can use dblink to connect to the remote database and fetch the result. For example:
psql dbtest
CREATE TABLE tblB (id serial, time integer);
INSERT INTO tblB (time) VALUES (5000), (2000);
psql postgres
CREATE TABLE tblA (id serial, time integer);
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > 1000;
TABLE tblA;
id | time
----+------
1 | 5000
2 | 2000
(2 rows)
PostgreSQL has a record pseudo-type (only for a function's argument or result type), which allows you to query data from another (unknown) table.
Edit:
You can make it a prepared statement if you want, and it works as well:
PREPARE migrate_data (integer) AS
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > $1;
EXECUTE migrate_data(1000);
-- DEALLOCATE migrate_data;
Edit (yeah, another):
I just saw your revised question (closed as duplicate, or just very similar to this).
If my understanding is correct (postgres has tbla and dbtest has tblb, and you want a remote insert with a local select, not a remote select with a local insert as above):
psql dbtest
SELECT dblink_exec
(
'dbname=postgres',
'INSERT INTO tbla
SELECT id, time
FROM dblink
(
''dbname=dbtest'',
''SELECT id, time FROM tblb''
)
AS t(id integer, time integer)
WHERE time > 1000;'
);
I don't like that nested dblink, but AFAIK I can't reference tblB in the dblink_exec body. Use LIMIT to specify the top 20 rows, but I think you need to sort them using an ORDER BY clause first.
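A sketch of that (pushing both the ORDER BY and the LIMIT into the remote query; sorting by time descending is an assumption):
SELECT id, time
FROM dblink('dbname=dbtest',
            'SELECT id, time FROM tblb ORDER BY time DESC LIMIT 20')
AS t(id integer, time integer);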
If you want to insert into specific columns:
INSERT INTO table (time)
(SELECT time FROM
dblink('dbname=dbtest', 'SELECT time FROM tblB') AS t(time integer)
WHERE time > 1000
);
This notation (first seen here) looks useful too:
insert into postagem (
resumopostagem,
textopostagem,
dtliberacaopostagem,
idmediaimgpostagem,
idcatolico,
idminisermao,
idtipopostagem
) select
resumominisermao,
textominisermao,
diaminisermao,
idmediaimgminisermao,
idcatolico,
idminisermao,
1
from
minisermao
You can use dblink to create a view that is resolved in another database. This database may be on another server.
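A minimal sketch of such a view (the view name, connection string, and column definition list are assumptions):
CREATE VIEW remote_tblb AS
SELECT id, time
FROM dblink('dbname=dbtest host=10.0.0.10',
            'SELECT id, time FROM tblb')
AS t(id integer, time integer);
Note that the remote query runs again every time the view is selected.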
insert into TABLENAMEA (A,B,C,D)
select A::integer,B,C,D from TABLENAMEB
If you are looking for performance, put the WHERE condition inside the dblink query; otherwise it fetches all the data from the foreign table and applies the WHERE condition afterwards.
INSERT INTO tblA (id,time)
SELECT id, time FROM dblink('dbname=dbname port=5432 host=10.10.90.190 user=postgresuser password=pass123',
'select id, time from tblB where time>'''||1000||'''')
AS t1(id integer, time integer)
I am going to SELECT Database_One (10.0.0.10) data from Database_Two (10.0.0.20).
Connect to 10.0.0.20 and create DBLink Extenstion:
CREATE EXTENSION dblink;
Test the connection for Database_One:
SELECT dblink_connect('host=10.0.0.10 user=postgres password=dummy dbname=DB_ONE');
Create foreign data wrapper and server for global authentication:
CREATE FOREIGN DATA WRAPPER postgres VALIDATOR postgresql_fdw_validator;
You can use this server object for cross database queries:
CREATE SERVER dbonepostgres FOREIGN DATA WRAPPER postgres OPTIONS (hostaddr '10.0.0.10', dbname 'DB_ONE');
Mapping of user and server:
CREATE USER MAPPING FOR postgres SERVER dbonepostgres OPTIONS (user 'postgres', password 'dummy');
Test dblink:
SELECT dblink_connect('dbonepostgres');
Import data from 10.0.0.10 into 10.0.0.20
INSERT INTO tableA
SELECT
column1,
column2,
...
FROM dblink('dbonepostgres', 'SELECT column1, column2, ... from public.tableA')
AS data(column1 DATATYPE, column2 DATATYPE, ...)
;
Here's an alternate solution, without using dblink.
Suppose B represents the source database and A represents the target database:
Then,
Copy table from source DB to target DB:
pg_dump -t <source_table> <source_db> | psql <target_db>
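For example, with the database names used earlier in this thread (assuming tblb lives in dbtest and the target database is postgres):
pg_dump -t tblb dbtest | psql postgres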
Open a psql prompt, connect to target_db, and use a simple insert:
psql
# \c <target_db>;
# INSERT INTO <target_table>(id, x, y) SELECT id, x, y FROM <source_table>;
At the end, delete the copy of source_table that you created in target_db:
# DROP TABLE <source_table>;
I am trying to start playing with Postgres and found a very strange thing. I created a table named testtable using pgAdmin III and added a couple of columns, then I wrote the following query in the query editor:
SELECT * from testtable;
It responded that no table with that name was found. Then I tried
select * from "testtable"
With quotes (the latter one) it worked. Then I dropped the table and created it again using the script editor, with the same name, making sure there were no quotes around the name; after that, both queries started working. I can't understand what exactly that means. Even if I write "tablename" in a CREATE TABLE statement, the quotes shouldn't become part of the table name.
Also, how can I make sure, while using the pgAdmin graphical user interface, that all objects get created without quotes (if, of course, the above problem is caused by that)?
Update: Environment Info
OS => Windows Server 2008 x64, Postgres => 9.0.3-2 x64, pgAdmin => Version 1.12.2 (March 22, 2011, rev: REL-1_12_2)
Did you use the new-table dialog the first time? You shouldn't use quotes in the dialog, as pgAdmin will insert all necessary quotes itself.
Edit
I discovered something today that is a little weird and might explain what happened to you.
When you do not quote a table name, the name is converted to lowercase. So if you do
CREATE TABLE TestTable ( ... );
Your table will be called testtable
What happens when you start to query the table is this:
SELECT * FROM TestTable; -- succeeds, looks for testtable
SELECT * FROM testtable; -- succeeds
SELECT * FROM "TestTable"; -- fails because case doesn't match
Now if you had done:
CREATE TABLE "TestTable" ( ... );
Your table would actually be called TestTable, with the case preserved, and the result is:
SELECT * FROM TestTable; -- fails, looks for testtable
SELECT * FROM testtable; -- fails
SELECT * FROM "TestTable"; -- succeeds