Postgresql: dblink in Stored Functions from local to remote database

I checked the link below, which I used and which runs perfectly. But now I want to do the opposite.
Postgresql: dblink in Stored Functions
My scenario: there are two databases. I want to copy one table's data from the local database to the remote database. I used dblink for this, but I am confused about how to use dblink to store the data.
Local database name: localdatabase
Remote Database name: remotedatabase
Can anyone suggest how I can do this?
Thanks in advance.

Something like the lines below should work:
SELECT dblink_connect('hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd');
-- change the connection string to your taste
SELECT dblink_exec('INSERT INTO test (some_text) VALUES (''Text go here'');');
Where test is a table in the remote database with the following definition:
CREATE TABLE test(
id serial
, some_text text
);
After running dblink_exec(), you can check the results in the remote database (or locally, using dblink(), like in the example below).
SELECT * FROM dblink('SELECT id, some_text FROM test') AS d(id integer, some_text text);
id | some_text
----+--------------
1 | Text go here
(1 row)
You can wrap your dblink_exec call in a function as well:
CREATE OR REPLACE FUNCTION f_dblink_test_update(val text, id integer) RETURNS text AS
$body$
SELECT dblink_exec('UPDATE torles.test SET some_text=' || quote_literal($1) || ' WHERE id = ' || $2);
$body$
LANGUAGE sql;
As you can see, you can even build your query string dynamically. (Not that I advocate this approach, since you have to be careful not to introduce a SQL injection vulnerability into your system this way.)
Since dblink_exec returns with a text message about what it did, you have to define your function as RETURNS text unless there are other value-returning statements after the dblink_exec call.
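To address the original scenario (pushing rows from localdatabase to remotedatabase), the same pieces can be combined in a function on the local side. A minimal sketch, assuming the table is named test on both ends; adjust the connection string as above:
CREATE OR REPLACE FUNCTION copy_test_to_remote() RETURNS void AS
$body$
DECLARE
r record;
BEGIN
-- connect to the remote database (change the connection string to your taste)
PERFORM dblink_connect('hostaddr=127.0.0.1 port=5432 dbname=remotedatabase user=postgres password=mypasswd');
-- push every local row across, one INSERT per row
-- (use quote_nullable instead if some_text can be NULL)
FOR r IN SELECT some_text FROM test LOOP
PERFORM dblink_exec('INSERT INTO test (some_text) VALUES (' || quote_literal(r.some_text) || ')');
END LOOP;
PERFORM dblink_disconnect();
END
$body$ LANGUAGE plpgsql;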

Related

does dblink use a different session when performed on the same database?

I'm looking to get around the limitations described in how to force a postgres function to not be in a transaction.
I'd like to execute two separate transactions within a single function and am wondering if using dblink calls on loopback will solve this.
A dblink query uses its own connection, created explicitly by dblink_connect or implicitly when a connection info string is passed to a dblink function such as dblink_exec. In terms of transactions, you can consider it independent of the transaction it was called from.
Example setup:
create table test(id int, str text);
insert into test values(1, '');
Two transactions inside an outer one:
begin;
select dblink_connect('db1', 'dbname=test');
select dblink_connect('db2', 'dbname=test');
select dblink_exec('db1', 'begin');
select dblink_exec('db1', 'update test set str = ''db1'' where id = 1');
select dblink_exec('db1', 'commit'); -- UPDATE!
select dblink_exec('db2', 'begin');
select dblink_exec('db2', 'update test set str = ''db2'' where id = 1');
select dblink_exec('db2', 'rollback');
select dblink_disconnect('db1');
select dblink_disconnect('db2');
rollback;
Despite the last rollback, the update in transaction db1 was successful:
select *
from test;
id | str
----+-----
1 | db1
(1 row)
Note that while the transactions are essentially independent, the sessions are not. The commands in the two inner sessions are executed one after the other, strictly serially, so no concurrency can be achieved, and a conflict between them would create a deadlock.
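To illustrate that last point, a sketch of such a conflict (same test table as above): db2 tries to update the row db1 has locked, but control never returns to the outer session to commit db1, so the second update blocks indefinitely.
begin;
select dblink_connect('db1', 'dbname=test');
select dblink_connect('db2', 'dbname=test');
select dblink_exec('db1', 'begin');
select dblink_exec('db1', 'update test set str = ''db1'' where id = 1');
select dblink_exec('db2', 'begin');
-- blocks forever: waits for db1's row lock, which can never be released
select dblink_exec('db2', 'update test set str = ''db2'' where id = 1');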

Trigger to insert rows in remote database after deletion

I have created a trigger that works like this:
After deleting a row from the table flux_tresorerie_historique, it inserts that row into the table flux_tresorerie_historique located in another database, archive.
I use dblink to insert the data into the remote database. The problem is that building the query is laborious, especially since the table contains more than 20 columns, and I want to create similar functions for 10 other tables.
Is there a faster way to accomplish this task?
Here is an example that works fine:
CREATE OR REPLACE FUNCTION flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$BODY$
DECLARE
date_rapprochement_flux TEXT;
code_commission TEXT;
reference_flux TEXT;
BEGIN
IF OLD.date_rapprochement_flux is null
THEN
date_rapprochement_flux = 'NULL';
ELSE
date_rapprochement_flux = ''''||to_char(OLD.date_rapprochement_flux, 'YYYY-MM-DD')||'''';
END IF;
IF OLD.code_commission is null
THEN
code_commission = 'NULL';
ELSE
code_commission = ''''||replace(OLD.code_commission,'''','''''')||'''';
END IF;
IF OLD.reference_flux is null
THEN
reference_flux = 'NULL';
ELSE
reference_flux = ''''||replace(OLD.reference_flux,'''','''''')||'''';
END IF;
perform dblink_connect('dbname=gtr_bd_archive user=postgres password=postgres');
perform dblink_exec('insert into flux_tresorerie_historique values('||OLD.id_flux_historique||','''||OLD.date_operation_flux||''','''||OLD.date_valeur_flux||''','||date_rapprochement_flux||','''||replace(OLD.libelle_flux,'''','''''')||''','||OLD.montant_flux||','||OLD.contre_valeur_dzd||','''||replace(OLD.rib_compte_bancaire,'''','''''')||''','||OLD.frais_flux||','''||replace(OLD.sens_flux,'''','''''')||''','''||replace(OLD.statut_flux,'''','''''')||''','''||replace(OLD.code_devise,'''','''''')||''','''||replace(OLD.code_mode_paiement,'''','''''')||''','''||replace(OLD.code_agence,'''','''''')||''','''||replace(OLD.code_compte,'''','''''')||''','''||replace(OLD.code_banque,'''','''''')||''','''||OLD.date_maj_flux||''','''||replace(OLD.statut_frais,'''','''''')||''','||reference_flux||','||code_commission||','||OLD.id_flux||');');
perform dblink_disconnect();
RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;
This is a limited application of replication. Requirements vary a lot, so there are a number of different established solutions, addressing different situations. Consider the overview in the manual.
Your hand-knit, trigger-based solution is one viable option for relatively few deletions, but opening and closing a separate connection for every row incurs quite an overhead. There are various other options.
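As an aside on that per-row overhead: one mitigation is to open a named dblink connection once and reuse it across trigger invocations. A rough sketch (the helper and connection names are made up; 'myserver' is the foreign server defined below):
CREATE OR REPLACE FUNCTION ensure_archive_conn() RETURNS void AS
$func$
BEGIN
-- open the named connection only if it is not already in the list of open ones
IF NOT ('archive_conn' = ANY (coalesce(dblink_get_connections(), '{}'::text[]))) THEN
   PERFORM dblink_connect('archive_conn', 'myserver');
END IF;
END
$func$ LANGUAGE plpgsql;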
While working with dblink I suggest some modifications. Most importantly:
Use format() to escape strings more elegantly.
Pass the whole row instead of passing and escaping every single column.
Don't place the password in every single trigger function.
Use a FOREIGN SERVER plus USER MAPPING. Detailed instructions here:
Persistent inserts in a UDF even if the function aborts
Basically, run once on the source server:
CREATE SERVER myserver FOREIGN DATA WRAPPER dblink_fdw
OPTIONS (hostaddr '127.0.0.1', dbname 'gtr_bd_archive');
CREATE USER MAPPING FOR role_source SERVER myserver
OPTIONS (user 'postgres', password 'secret');
Preferably, don't log in as superuser at the target server. Use a dedicated role with limited privileges to avoid privilege escalation.
And use a password file on the target server to allow password-less access. This way you don't even have to store the password in the USER MAPPING. Instructions in the last chapter of this related answer:
Run batch file with psql command without password
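For reference, a .pgpass entry has the fixed format host:port:database:user:password, one connection per line. For this setup it might look like this (values assumed):
127.0.0.1:5432:gtr_bd_archive:postgres:secret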
Then:
CREATE OR REPLACE FUNCTION pg_temp.flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$func$
BEGIN
PERFORM dblink_connect('myserver'); -- name of foreign server from above
PERFORM dblink_exec( format(
$$
INSERT INTO flux_tresorerie_historique -- provide target column list!
SELECT (r).id_flux_historique
, (r).date_operation_flux
, (r).date_valeur_flux
, (r).date_rapprochement_flux::date -- 'YYYY-MM-DD' is default ISO format anyway
, (r).libelle_flux
, (r).montant_flux
, (r).contre_valeur_dzd
, (r).rib_compte_bancaire
, (r).frais_flux
, (r).sens_flux
, (r).statut_flux
, (r).code_devise
, (r).code_mode_paiement
, (r).code_agence
, (r).code_compte
, (r).code_banque
, (r).date_maj_flux
, (r).statut_frais
, (r).reference_flux
, (r).code_commission
, (r).id_flux
FROM (SELECT %L::flux_tresorerie_historique) t(r)
$$, OLD::text)); -- cast whole row type
PERFORM dblink_disconnect();
RETURN NULL; -- only for AFTER trigger
END
$func$ LANGUAGE plpgsql;
You should spell out the list of columns for the target table if the row types don't match.
If you are serious about this:
insert this row in the table flux_tresorerie_historique
I.e., if you insert the whole row and the target row type is identical (no extracting a date from a timestamp etc.), you can simplify much further by passing the whole row:
CREATE OR REPLACE FUNCTION flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$func$
BEGIN
PERFORM dblink_connect('myserver'); -- name of foreign server
PERFORM dblink_exec( format(
$$
INSERT INTO flux_tresorerie_historique
SELECT (%L::flux_tresorerie_historique).*
$$
, OLD::text));
PERFORM dblink_disconnect();
RETURN NULL; -- only for AFTER trigger
END
$func$ LANGUAGE plpgsql;
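For completeness, the trigger itself would be wired up along these lines (trigger name assumed):
CREATE TRIGGER flux_tresorerie_historique_backup
AFTER DELETE ON flux_tresorerie_historique
FOR EACH ROW EXECUTE PROCEDURE flux_tresorerie_historique_backup_row();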
Related:
How do I do large non-blocking updates in PostgreSQL?
You can use quote_nullable for this! Also, concat_ws comes in very handy:
CREATE OR REPLACE FUNCTION flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$BODY$
BEGIN
perform dblink_connect('dbname=gtr_bd_archive user=postgres password=postgres');
perform dblink_exec('insert into flux_tresorerie_historique values('||
concat_ws(', ', quote_nullable(OLD.id_flux_historique),
quote_nullable(OLD.date_operation_flux),
quote_nullable(OLD.date_valeur_flux),
quote_nullable(to_char(OLD.date_rapprochement_flux, 'YYYY-MM-DD')),
quote_nullable(OLD.libelle_flux),
quote_nullable(OLD.montant_flux),
quote_nullable(OLD.contre_valeur_dzd),
quote_nullable(OLD.rib_compte_bancaire),
quote_nullable(OLD.frais_flux),
quote_nullable(OLD.sens_flux),
quote_nullable(OLD.statut_flux),
quote_nullable(OLD.code_devise),
quote_nullable(OLD.code_mode_paiement),
quote_nullable(OLD.code_agence),
quote_nullable(OLD.code_compte),
quote_nullable(OLD.code_banque),
quote_nullable(OLD.date_maj_flux),
quote_nullable(OLD.statut_frais),
quote_nullable(OLD.reference_flux),
quote_nullable(OLD.code_commission),
quote_nullable(OLD.id_flux)
)||');');
perform dblink_disconnect();
RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;
Note that it is OK to place non-string values between single quotes: a quoted literal is just as good a literal value to PostgreSQL as an unquoted one, so it is convenient to run all of the columns through quote_nullable. Also note that quote_nullable already outputs dates in YYYY-MM-DD format (e.g. select quote_nullable(now()::date) results in '2016-05-04'), so you may want to simplify OLD.date_rapprochement_flux even further by removing the to_char.
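A quick demonstration of quote_nullable's behavior (results shown as comments):
SELECT quote_nullable(42);            -- '42'
SELECT quote_nullable(NULL::int);     -- NULL (the unquoted string)
SELECT quote_nullable('O''Hara');     -- 'O''Hara'
SELECT quote_nullable(now()::date);   -- e.g. '2016-05-04'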

PostgreSQL execute custom queries from string

I am trying to create a set of functions for a PostgreSQL database I have, and I am facing a problem. What I am trying to do is create a function which takes as parameters the name of the column a user wants to change and the new value they wish to insert.
I thought the simplest way would be to build the queries as strings internally and execute them (SQL injection is not a concern right now).
Searching a bit, I tried to use the EXECUTE command, with no success. No arrangement of the queries I tried worked, with or without SET, and even a super simple query produces an error, inside the function code or in the pgAdmin SQL editor:
EXECUTE "SELECT * FROM user_data.users;";
ERROR: prepared statement "SELECT * FROM user_data.users;" does not exist
********** Error **********
ERROR: prepared statement "SELECT * FROM user_data.users;" does not exist
SQL state: 26000
Any suggestions to solve this?
The error comes from running EXECUTE at the SQL level, where it executes a previously prepared statement; the dynamic-SQL form of EXECUTE only exists inside PL/pgSQL function bodies.
To alter a column name you could use a function like the one below. It takes the table name, the old column name, and the new column name.
CREATE OR REPLACE FUNCTION foo(_t regclass, _ocn text, _ncn text)
RETURNS void AS
$func$
BEGIN
EXECUTE 'ALTER TABLE '|| _t ||'
RENAME COLUMN '|| quote_ident(_ocn) ||' TO '|| quote_ident(_ncn);
END
$func$ LANGUAGE plpgsql;
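Usage would look like this (assuming a table foobar with a column name, as created below):
SELECT foo('foobar', 'name', 'firstname');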
Altering the type is a bit more work and may add to server overhead.
Let's use this table as an example:
CREATE TABLE foobar (id serial primary key, name text);
Ref: PostgreSQL Type Conversion:
In many cases a user does not need to understand the details of the type conversion mechanism. However, implicit conversions done by PostgreSQL can affect the results of a query. When necessary, these results can be tailored by using explicit type conversion.
First, we would have to get the column data type(s) from the information schema (ref: The Information Schema, PostgreSQL 9.4):
select data_type from information_schema.columns where table_name = 'foobar' and column_name = 'name';
We could cast types as mentioned in Type Conversion (Table 8-1, Data Types, PostgreSQL 9.4):
ALTER TABLE foobar ALTER COLUMN name TYPE varchar(20) USING name::text;
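Following the same pattern as the rename function above, the type change can be made dynamic too. A sketch (the function name is made up; _type is interpolated as-is, which is only acceptable because SQL injection was declared out of scope):
CREATE OR REPLACE FUNCTION alter_col_type(_t regclass, _col text, _type text)
RETURNS void AS
$func$
BEGIN
EXECUTE format('ALTER TABLE %s ALTER COLUMN %I TYPE %s USING %I::%s',
               _t, _col, _type, _col, _type);
END
$func$ LANGUAGE plpgsql;
-- e.g.: SELECT alter_col_type('foobar', 'name', 'varchar(20)');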
I hope this helps.

Detect endianness of PostgreSQL cluster

Some background.
I am working on a showcase for indexes. In order to provide more insight into how Postgres handles them, I am using the pageinspect module. My setup (on 9.3.5) is the following:
CREATE EXTENSION IF NOT EXISTS pageinspect;
DROP SCHEMA IF EXISTS pin;
CREATE SCHEMA pin;
SET search_path TO pin,public;
CREATE TABLE bat AS
SELECT id,
translate((random()*123456789)::text,'0987654321.','abcdefghijk') str
FROM generate_series(1, 10) id;
ALTER TABLE bat ADD CONSTRAINT p_bat PRIMARY KEY (id);
CREATE INDEX i_bat_str ON bat(str);
VACUUM ANALYZE bat;
And now I can show some details (meta and the only leaf page):
SELECT * FROM bt_metap('i_bat_str');
WITH pages AS (
SELECT relname, relpages, blkno
FROM pg_class ic, generate_series(1,relpages-1) s(blkno)
WHERE oid='pin.i_bat_str'::regclass
)
SELECT blkno,s.*
FROM pages, bt_page_items('pin.i_bat_str',blkno) s
WHERE blkno=1;
The last query produces a series of data values. I understand that the first byte is a single-byte header, and in my case its lowest-order bit is set, because I'm on x86_64.
So my question is: is it possible to check server endianness via a function (or via SQL) without compiling extra code? I know how to do it in C, but I'd like to avoid that.
I'd probably use plperl or plpython.
For example:
CREATE OR REPLACE FUNCTION little_endian() RETURNS boolean LANGUAGE plperlu AS $$
use Config;
return $Config{byteorder} eq '1234' || $Config{byteorder} eq '12345678';
$$;
or:
CREATE OR REPLACE FUNCTION little_endian() RETURNS boolean LANGUAGE plpythonu AS $$
import sys
return sys.byteorder == "little"
$$;
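Either version is then a plain function call:
SELECT little_endian();  -- t on x86_64, f on a big-endian machine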

How to insert a text file into a field in PostgreSQL?

How to insert a text file into a field in PostgreSQL?
I'd like to insert a row with fields from a local or remote text file.
I'd expect a function like gettext() or geturl() in order to do the following:
% INSERT INTO collection(id, path, content) VALUES(1, '/etc/motd', gettext('/etc/motd'));
-S.
The easiest method would be to use one of the embeddable scripting languages. Here's an example using plpythonu:
CREATE FUNCTION gettext(url TEXT) RETURNS TEXT
AS $$
import urllib2
try:
    f = urllib2.urlopen(url)
    return ''.join(f.readlines())
except Exception:
    return ""
$$ LANGUAGE plpythonu;
One drawback to this example function is that its reliance on urllib2 means you have to use "file:///" URLs to access local files, like this:
select gettext('file:///etc/motd');
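If you only need local files, a variant that reads the path directly avoids the URL scheme. A sketch (the function name is made up; note the path is resolved on the database server, by the server's OS user):
CREATE FUNCTION gettext_local(path TEXT) RETURNS TEXT
AS $$
# read a server-side file; return an empty string on any error
try:
    return open(path).read()
except Exception:
    return ""
$$ LANGUAGE plpythonu;
-- e.g.: select gettext_local('/etc/motd');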
Thanks for the tips. I've found another answer with a built-in function.
You need superuser rights in order to execute it!
-- 1. Create a function to load a doc
-- DROP FUNCTION get_text_document(CHARACTER VARYING);
CREATE OR REPLACE FUNCTION get_text_document(p_filename CHARACTER VARYING)
RETURNS TEXT AS $$
-- Set the read length to some big number because we are too lazy to grab the length,
-- and it will cut off at EOF anyway
SELECT CAST(pg_read_file(E'mydocuments/' || $1 ,0, 100000000) AS TEXT);
$$ LANGUAGE sql VOLATILE SECURITY DEFINER;
ALTER FUNCTION get_text_document(CHARACTER VARYING) OWNER TO postgres;
-- 2. Determine the location of your cluster by running as super user:
SELECT name, setting FROM pg_settings WHERE name='data_directory';
-- 3. Copy the files you want to import into <data_directory>/mydocuments/
-- and test it:
SELECT get_text_document('file1.txt');
-- 4. Now do the import (HINT: File must be UTF-8)
INSERT INTO mytable(file, content)
VALUES ('file1.txt', get_text_document('file1.txt'));
Postgres's COPY command is exactly for this.
My advice is to upload it to a temporary table, and then transfer the data across to your main table when you're happy with the formatting. e.g.
CREATE TABLE text_data (text varchar);
COPY text_data FROM 'C:\mytempfolder\textdata.txt';
INSERT INTO main_table (value)
SELECT string_agg(text, chr(10)) FROM text_data;
DROP TABLE text_data;
Also see this question.
You can't. You need to write a program that reads the file's contents (or a URL's) and stores them in the desired field.
Use COPY instead of INSERT
reference: http://www.commandprompt.com/ppbook/x5504#AEN5631