I have set up postgres_fdw to access a 'remote' database (in fact, it's on the same server). It works fine, except that one of the columns is the oid of a large object. How can I read that data?
I worked out how to do this. The large object store can also be accessed via the pg_largeobject catalog table, so I did:
create foreign table if not exists global_lo (
loid oid not null,
pageno integer not null,
data bytea
)
server glob_serv options(table_name 'pg_largeobject', schema_name 'pg_catalog');
Now I can read a large object (all of it at once; there is no way to stream it) with:
select data from global_lo where loid = 1234
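Note that pg_largeobject stores each object in pages of 2 kB (with the default block size), so an object larger than one page comes back as several rows. A minimal sketch for reassembling it, assuming the foreign table above and the bytea variant of string_agg() (PostgreSQL 9.1+):
select string_agg(data, ''::bytea order by pageno) as lo_data
from global_lo
where loid = 1234;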
If you have access to the foreign database, you could create a view on it to convert the lobs to either bytea or text so they can be used by the local database.
On the foreign database, you would create the view:
drop view if exists tmp_view_produto_descricao;
create view tmp_view_produto_descricao as
select * from (
select dado.*, lo_get(dado.descricaoExtendida_oid) as descricaoEstendida
from (
select
itm.id as item_id,
case when itm.descricaoExtendida is Null then null else Cast(itm.descricaoExtendida as oid) end descricaoExtendida_oid
from Item itm
where itm.descricaoExtendida is Not Null
and Cast(itm.descricaoExtendida as Text) != ''
) dado
) dado
where Cast(descricaoEstendida as Text) != '';
On the local database, you would declare the foreign view so you could use it:
create foreign table tmp_origem.tmp_view_produto_descricao (
item_id bigint,
descricaoExtendida_oid oid,
descricaoEstendida bytea
) server tmp_origem options (schema_name 'public');
This is slightly messier and wordier, but it will give you better performance than accessing pg_largeobject directly.
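As a usage sketch on the local side (assuming the descriptions are UTF8-encoded text), the bytea column can then be converted back to text with convert_from():
select item_id, convert_from(descricaoEstendida, 'UTF8') as descricao
from tmp_origem.tmp_view_produto_descricao;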
My goal is to load data from a webapp into a database on a linked server. Users will create several thousand rows of data with 8 columns.
I was trying to implement a version of the "Passing a Table-Valued Parameter to a Parameterized SQL Statement" example from Microsoft's Table-Valued Parameters article.
The big issue I've run into is figuring out how to pass a table-valued parameter or temp table into an EXECUTE() that would contain the MERGE.
What's the best way to move data from a web app to a local database and then on to a linked server?
Here's as close as I could get with temp tables:
CREATE TABLE #TempTable
(
car int not null,
mileage int not null,
interface DECIMAL(18,0) not null,
primary key(car, interface)
);
INSERT INTO #TempTable (car, mileage, interface)
SELECT
123 AS car,
321 AS mileage,
444 AS interface;
EXEC('
CREATE TABLE #TempTable
(
car int not null,
mileage int not null,
interface DECIMAL(18,0) not null,
primary key(car, interface)
)
MERGE dbo.total AS target
USING #TempTable AS temp
ON temp.car = target.car AND temp.interface = target.interface AND temp.mileage = target.mileage
WHEN MATCHED THEN
UPDATE SET target.mileage = 1337
WHEN NOT MATCHED
THEN INSERT (car, interface, mileage)
VALUES (temp.car, temp.interface, temp.mileage );
',#TempTable) at LinkedServer;
DROP TABLE #TempTable
However, this query does nothing. Nothing is being passed.
Here's the closest I've gotten with a table-valued parameter:
-- FYI only
CREATE TYPE TempType AS TABLE
(
car int not null,
mileage int not null,
interface DECIMAL(18,0) not null,
primary key(car, interface)
);
DECLARE @TempTable AS TempType;
INSERT INTO @TempTable (car, mileage, interface)
SELECT 1,1000,123;
SELECT('
DECLARE @TotalTemp AS TempType;
MERGE dbo.TargetTable AS targ
USING @TotalTemp AS temp
ON temp.car = targ.car AND temp.interface = targ.interface
WHEN MATCHED THEN
UPDATE SET targ.mileage = 1337
WHEN NOT MATCHED
THEN INSERT (car, interface, mileage)
VALUES (temp.car, temp.interface, temp.mileage);
',@TempTable) at LinkedServer
This query gives a syntax error.
I also realize that accessing a temp table on the local server from the linked server would be another workaround. However, I posted this question for the sole purpose of finding out whether passing a TVP or temp table to an EXECUTE statement is possible.
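For what it's worth, one workaround that avoids both loopback access and passing the temp table is a regular staging table on the linked server (a sketch; all remote object names are hypothetical, and the linked server must allow RPC and remote writes): push the rows across via a four-part name, then run the MERGE remotely against the staging table.
-- copy the local temp table into a hypothetical staging table on the linked server
INSERT INTO LinkedServer.RemoteDb.dbo.TotalStaging (car, mileage, interface)
SELECT car, mileage, interface FROM #TempTable;

-- run the MERGE remotely against the staging table, then clear it
EXEC('
MERGE dbo.total AS target
USING dbo.TotalStaging AS temp
    ON temp.car = target.car AND temp.interface = target.interface
WHEN MATCHED THEN
    UPDATE SET target.mileage = temp.mileage
WHEN NOT MATCHED THEN
    INSERT (car, interface, mileage)
    VALUES (temp.car, temp.interface, temp.mileage);
TRUNCATE TABLE dbo.TotalStaging;
') AT LinkedServer;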
In Amazon Redshift I am trying to do a bulk insert into a table from a temp table.
However, I only want to insert values whose compound primary key does not already exist in the target table, to avoid adding duplicates.
Below is the DDL of the tables:
• clusters_typologies table (the table into which I want to insert data):
create table if not exists clusters.clusters_typologies
(
cluster_id BIGINT,
typology_id BIGINT,
semantic_id BIGINT,
primary key (cluster_id, typology_id, semantic_id)
);
The temp table is created with the query below, and after that all fields are populated correctly.
CREATE TEMPORARY TABLE temporary (
cluster_id bigint,
typology_name varchar(100),
typology_id bigint,
semantic_name varchar(100),
semantic_id bigint
);
Now, when I try to insert with this query:
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id,semantic_id)
(SELECT temp.cluster_id, temp.typology_id, temp.semantic_id
FROM temporary temp
WHERE NOT EXISTS(SELECT 1
FROM clusters_typologies
where cluster_id = temp.cluster_id
and typology_id = temp.typology_id
and semantic_id = temp.semantic_id));
I get this error and I cannot figure out how to make it work:
Invalid operation: This type of correlated subquery pattern is not supported due to internal error;
Does anyone know how to fix this, or what the best way is to insert into a table with a compound key while avoiding duplicates?
Thanks.
To upsert, follow this guide:
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-upsert.html
Also note that certain types of correlated subqueries are not allowed in Redshift; that is the cause of your error. See:
https://docs.aws.amazon.com/redshift/latest/dg/r_correlated_subqueries.html
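For reference, the staged-merge pattern from that guide, applied to the tables in the question, would look roughly like this (a sketch; note that it replaces matching rows rather than skipping them):
BEGIN TRANSACTION;

-- remove rows whose compound key already exists, using the temp table as the source
DELETE FROM clusters.clusters_typologies
USING temporary
WHERE clusters_typologies.cluster_id = temporary.cluster_id
  AND clusters_typologies.typology_id = temporary.typology_id
  AND clusters_typologies.semantic_id = temporary.semantic_id;

-- then insert everything from the temp table
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id, semantic_id)
SELECT cluster_id, typology_id, semantic_id
FROM temporary;

END TRANSACTION;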
After some attempts I figured out how to do an insert from a temp table while checking against a compound primary key to avoid duplicates.
Basically, from the AWS documentation that @Jon Scott sent, I understand that using the outer table in an inner select is not supported by Redshift.
I solved it using a LEFT JOIN and checking whether the join columns are NULL.
Below is the query I use now:
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id, semantic_id)
(SELECT temp.cluster_id, temp.typology_id, temp.semantic_id
FROM temporary temp
LEFT JOIN clusters.clusters_typologies clu_typ ON temp.cluster_id = clu_typ.cluster_id AND
temp.typology_id = clu_typ.typology_id AND
temp.semantic_id = clu_typ.semantic_id
WHERE clu_typ.cluster_id IS NULL
AND clu_typ.typology_id IS NULL
AND clu_typ.semantic_id IS NULL);
Currently I am trying to build a history table based on PostgreSQL jsonb. As an example I have two tables:
CREATE TABLE data (
    id BIGSERIAL PRIMARY KEY,
    price NUMERIC(10,4) NOT NULL,
    article TEXT NOT NULL,
    quantity BIGINT NOT NULL,
    lose BIGINT NOT NULL,
    username TEXT NOT NULL
);
CREATE TABLE data_history (
    id BIGSERIAL PRIMARY KEY,
    data JSONB NOT NULL,
    username TEXT NOT NULL
);
The history table acts as a simple history (the username column there could be avoided).
I populate the history with a trigger:
CREATE OR REPLACE FUNCTION insert_history() RETURNS TRIGGER AS $$
BEGIN
INSERT INTO data_history (data, username) VALUES (row_to_json(NEW.*), NEW.username);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
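The question doesn't show the CREATE TRIGGER statement that attaches this function; presumably it looks something like this (trigger name and events assumed):
-- hypothetical trigger binding for the function above
CREATE TRIGGER data_history_trg
AFTER INSERT OR UPDATE ON data
FOR EACH ROW EXECUTE PROCEDURE insert_history();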
Now I try to populate the history back to the data table:
SELECT jsonb_populate_record(NULL::data, data) FROM data_history;
However, the result is a single column of record values rather than the table's columns:
jsonb_populate_record
-------------------------------------
(1,45.4500,0A45477,100,1,c.schmitt)
(2,5.4500,0A45477,100,1,c.schmitt)
(2 rows)
Is there any way to get the data back in the shape of the data table? I know there is jsonb_populate_recordset(), too, but it doesn't accept a query.
jsonb_populate_record() returns a row type (or record type), so if you use it in the SELECT clause, you'll get a single column, which is a row type.
To avoid this, use it in the FROM clause instead (with an implicit LATERAL JOIN):
SELECT r.*
FROM data_history,
jsonb_populate_record(NULL::data, data) r
Technically, the statement below could work too
-- DO NOT use, just for illustration
SELECT jsonb_populate_record(NULL::data, data).*
FROM data_history
but it will call jsonb_populate_record() for each column in data (as a result of an engine limitation).
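For completeness, the implicit lateral join in the recommended query can also be written out explicitly (PostgreSQL 9.3+ syntax):
SELECT r.*
FROM data_history
CROSS JOIN LATERAL jsonb_populate_record(NULL::data, data) AS r;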
I am trying to create one view (or a view in each schema, usable without modification) on a table which exists in multiple schemas with the same name:
create schema company_1;
create schema company_2;
...
CREATE TABLE company_1.orders
(
id serial NOT NULL,
amount real,
paid real,
CONSTRAINT orders_pkey PRIMARY KEY (id )
)
WITH (
OIDS=FALSE
);
CREATE TABLE company_2.orders
(
id serial NOT NULL,
amount real,
paid real,
CONSTRAINT orders_pkey PRIMARY KEY (id )
)
WITH (
OIDS=FALSE
);
....
What is the correct way to create a view on the orders table without specifying the schema for every view, or by specifying the current schema?
What I need, and have failed to get, is either
CREATE OR REPLACE VIEW
public.full_orders AS
SELECT id, amount FROM orders;
or
CREATE OR REPLACE VIEW
company_1.full_orders AS
-- company_2.full_orders AS
-- company_n.full_orders AS
SELECT id, amount FROM current_schema.orders;
Using postgresql 9.2.2
EDIT: The way I went:
CREATE VIEW company_1.full_orders AS
SELECT id, amount FROM company_1.orders;
For the schema copy discussed here, I brutally do this:
FOR src_table IN
SELECT table_name
FROM information_schema.TABLES
WHERE table_schema = source_schema AND table_type = 'VIEW'
LOOP
SELECT view_definition
FROM information_schema.views
WHERE table_name = src_table AND table_schema = source_schema INTO q;
trg_table := target_schema||'.'||src_table;
EXECUTE 'CREATE VIEW ' || trg_table || ' AS '||replace(q, source_schema, target_schema);
END LOOP;
Still looking for a better solution...
It's not possible to do this with a straightforward view. A view records the underlying table's identity at creation time, so it is not affected by schema settings made later on.
You could do it with a set-returning function using dynamic SQL, and then wrap that into a view. But I don't think that's a good solution.
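For illustration only, such a set-returning function wrapped in a view might look roughly like this (function and view names are made up; it resolves the schema at query time via current_schema(), i.e. the first schema on the search_path):
CREATE OR REPLACE FUNCTION current_schema_orders()
  RETURNS TABLE (id integer, amount real)
  LANGUAGE plpgsql STABLE AS
$$
BEGIN
   -- build the query dynamically against whatever schema is current at call time
   RETURN QUERY EXECUTE format('SELECT id, amount FROM %I.orders', current_schema());
END
$$;

CREATE VIEW public.full_orders AS
SELECT * FROM current_schema_orders();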
I would just create quasi-duplicates for the view, as you have been doing, and enhance my deployment script to keep them all up to date.
I currently have a parent table:
CREATE TABLE members (
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
first_name varchar(20)
last_name varchar(20)
address address (composite type)
contact_numbers varchar(11)[3]
date_joined date
type varchar(5)
);
and two related tables:
CREATE TABLE basic_member (
activities varchar[3])
INHERITS (members)
);
CREATE TABLE full_member (
activities varchar[])
INHERITS (members)
);
If the type is 'full', the details are entered into the full_member table; if the type is 'basic', into the basic_member table. What I want is that if I run an update and change the type to basic or full, the tuple moves into the corresponding table.
I was wondering if I could do this with a rule like:
CREATE RULE tuple_swap_full
AS ON UPDATE TO full_member
WHERE new.type = 'basic'
INSERT INTO basic_member VALUES (old.member_id, old.first_name, old.last_name,
old.address, old.contact_numbers, old.date_joined, new.type, old.activities);
... then delete the record from the full_member
Just wondering if my rule is anywhere near or if there is a better way.
You don't need
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
A PRIMARY KEY implies UNIQUE NOT NULL automatically:
member_id SERIAL PRIMARY KEY
I wouldn't use hard coded max length of varchar(20). Just use text and add a check constraint if you really must enforce a maximum length. Easier to change around.
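For instance, something like this (a minimal sketch, table name made up):
CREATE TABLE member_example (
   first_name text CHECK (char_length(first_name) <= 20)
);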
Syntax for INHERITS is mangled. The key word goes outside the parens around columns.
CREATE TABLE full_member (
activities text[]
) INHERITS (members);
Table names are inconsistent (members <-> member). I use the singular form everywhere in my test case.
Finally, I would not use a RULE for the task. A trigger AFTER UPDATE seems preferable.
Consider the following
Test case:
Tables:
CREATE SCHEMA x; -- I put everything in a test schema named "x".
-- DROP TABLE x.member CASCADE;
CREATE TABLE x.member (
member_id SERIAL PRIMARY KEY
,first_name text
-- more columns ...
,type text);
CREATE TABLE x.basic_member (
activities text[3]
) INHERITS (x.member);
CREATE TABLE x.full_member (
activities text[]
) INHERITS (x.member);
Trigger function:
Data-modifying CTEs (WITH x AS (DELETE ...)) are the best tool for the purpose. This requires PostgreSQL 9.1 or later.
For older versions, first INSERT then DELETE.
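A rough sketch of that pre-9.1 variant for the 'basic' branch, with the statements placed inside the trigger function (columns abbreviated as in the version below):
INSERT INTO x.basic_member (member_id, first_name, type)  -- more columns
SELECT member_id, first_name, type                        -- more columns
FROM   ONLY x.member                 -- ONLY: the updated row lives in the parent table
WHERE  member_id = NEW.member_id;

DELETE FROM ONLY x.member            -- ONLY: don't delete the copy just created in the child table
WHERE  member_id = NEW.member_id;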
CREATE OR REPLACE FUNCTION x.trg_move_member()
RETURNS trigger AS
$BODY$
BEGIN
CASE NEW.type
WHEN 'basic' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
WHEN 'full' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.full_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
END CASE;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Trigger:
Note that it is an AFTER trigger and has a WHEN condition.
The WHEN condition requires PostgreSQL 9.0 or later. For earlier versions, you can just leave it out; the CASE statement in the trigger function takes care of it.
CREATE TRIGGER up_aft
AFTER UPDATE
ON x.member
FOR EACH ROW
WHEN (NEW.type IN ('basic', 'full')) -- OLD.type cannot be IN ('basic', 'full')
EXECUTE PROCEDURE x.trg_move_member();
Test:
INSERT INTO x.member (first_name, type) VALUES ('peter', NULL);
UPDATE x.member SET type = 'full' WHERE first_name = 'peter';
SELECT * FROM ONLY x.member;
SELECT * FROM x.basic_member;
SELECT * FROM x.full_member;