Can anyone explain the dblink query format? - PostgreSQL

I need to retrieve data from tables located in two different databases,
so I need to use dblink, but I am not able to understand its format:
dblink(???) AS (??)

Do you mean you're on one database and you want to query another?
First of all you'll need to make sure that postgresql-contrib is installed. In my case:
dnf install postgresql-contrib-9.4.6-1.fc23.x86_64
Then you need to create the extension in postgresql to use dblink.
create extension dblink;
Here's a simple example:
SELECT * FROM dblink('dbname=Test','SELECT date1, int2 FROM test1') AS test(date1 date, int2 integer);
I've tested this and it works fine.
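In general, the call has three parts: a libpq connection string, the query to run remotely, and an AS alias that declares the names and types of the returned columns. Here is a minimal sketch of that shape; the connection parameters, table and column names are placeholders to adapt:
-- dblink(connection_string, remote_query) returns a generic record set,
-- so the AS clause must spell out the columns the remote query returns.
SELECT t.*
FROM dblink(
       'host=127.0.0.1 dbname=other_db user=myuser password=mypass',  -- connection string (placeholder values)
       'SELECT id, name, created_at FROM some_table'                  -- query executed in the remote database
     ) AS t(id integer, name text, created_at date);                  -- column list: names and types must match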
All the best
Ref: PostgreSQL dblink

Related

Is there a way to access one database from another in Postgresql? [duplicate]

I'm going to guess that the answer is "no" based on the below error message (and this Google result), but is there any way to perform a cross-database query using PostgreSQL?
databaseA=# select * from databaseB.public.someTableName;
ERROR: cross-database references are not implemented:
"databaseB.public.someTableName"
I'm working with some data that is partitioned across two databases, although the data is really shared between the two (userid columns in one database come from the users table in the other database). I have no idea why these are two separate databases instead of schemas, but c'est la vie...
Note: As the original asker implied, if you are setting up two databases on the same machine you probably want to make two schemas instead - in that case you don't need anything special to query across them.
postgres_fdw
Use postgres_fdw (foreign data wrapper) to connect to tables in any Postgres database - local or remote.
Note that there are foreign data wrappers for other popular data sources. At this time, only postgres_fdw and file_fdw are part of the official Postgres distribution.
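As a rough sketch of the postgres_fdw setup (assuming Postgres 9.5+ for IMPORT FOREIGN SCHEMA; the server, schema and credential names below are placeholders, not anything defined elsewhere on this page):
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Describe how to reach the other database.
CREATE SERVER other_db_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', dbname 'other_db', port '5432');

-- Map the local role to credentials on the remote side.
CREATE USER MAPPING FOR CURRENT_USER
  SERVER other_db_srv
  OPTIONS (user 'remote_user', password 'remote_pass');

-- Expose the remote public schema as local foreign tables (9.5+).
CREATE SCHEMA other_db;
IMPORT FOREIGN SCHEMA public FROM SERVER other_db_srv INTO other_db;

-- Foreign tables can now be queried and joined like local ones.
SELECT * FROM other_db.some_table LIMIT 10;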
For Postgres versions before 9.3
Versions this old are no longer supported, but if you need to do this in a pre-2013 Postgres installation, there is a function called dblink.
I've never used it, but it is maintained and distributed with the rest of PostgreSQL. If you're using the version of PostgreSQL that came with your Linux distro, you might need to install a package called postgresql-contrib.
dblink() -- executes a query in a remote database
dblink executes a query (usually a SELECT, but it can be any SQL
statement that returns rows) in a remote database.
When two text arguments are given, the first one is first looked up as
a persistent connection's name; if found, the command is executed on
that connection. If not found, the first argument is treated as a
connection info string as for dblink_connect, and the indicated
connection is made just for the duration of this command.
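To illustrate the named-connection form described in that quote, here is a hedged sketch (the connection name, credentials and column list are placeholders):
-- Open a named, persistent connection once...
SELECT dblink_connect('myconn', 'dbname=db2 host=localhost user=myuser password=mypass');

-- ...then reuse it by name; the first argument is looked up as a connection name.
SELECT *
FROM dblink('myconn', 'SELECT id, code FROM table2') AS t(id int, code text);

-- Close it when done.
SELECT dblink_disconnect('myconn');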
A good example:
SELECT *
FROM table1 tb1
LEFT JOIN (
    SELECT *
    FROM dblink('dbname=db2','SELECT id, code FROM table2')
    AS tb2(id int, code text)
) AS tb2 ON tb2.column = tb1.column;
Note: I am giving this information for future reference. Reference
I have run into this before and came to the same conclusion about cross-database queries as you. What I ended up doing was using schemas to divide the table space; that way I could keep the tables grouped but still query them all.
Just to add a bit more information.
There is no way to query a database other than the current one. Because PostgreSQL loads database-specific system catalogs, it is uncertain how a cross-database query should even behave.
contrib/dblink allows cross-database queries using function calls. Of course, a client can also make simultaneous connections to different databases and merge the results on the client side.
PostgreSQL FAQ
Yes, you can, by using dblink (PostgreSQL only), DBI-Link (allows foreign cross-database queries), and TDS_Link, which allows queries to be run against MS SQL Server.
I have used DB-Link and TDS-Link before with great success.
I have checked and tried to create a foreign key relationship between two tables in two different databases using both dblink and postgres_fdw, but with no result.
Having read other people's feedback on this, for example here and here and in some other sources, it looks like there is no way to do that currently:
dblink and postgres_fdw indeed enable one to connect to and query tables in other databases, which is not possible with standard Postgres, but they do not allow you to establish foreign key relationships between tables in different databases.
If performance is important and most queries are read-only, I would suggest replicating data over to another database. While this seems like unneeded duplication of data, it might help if indexes are required.
This can be done with simple ON INSERT triggers which in turn call dblink to update another copy. There are also full-blown replication options (like Slony) but that's off-topic.
see https://www.cybertec-postgresql.com/en/joining-data-from-multiple-postgres-databases/ [published 2017]
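As a rough illustration of that trigger idea (not production-ready; the table, column and connection details are placeholders and error handling is omitted), a sketch could look like this:
-- Assumes the dblink extension is installed and a copy of my_table exists in db_replica.
CREATE OR REPLACE FUNCTION replicate_my_table_insert()
RETURNS trigger AS
$$
BEGIN
  PERFORM dblink_exec(
    'dbname=db_replica user=myuser password=mypass',
    format('INSERT INTO my_table (id, payload) VALUES (%L, %L)', NEW.id, NEW.payload)
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_replicate
AFTER INSERT ON my_table
FOR EACH ROW EXECUTE PROCEDURE replicate_my_table_insert();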
These days you also have the option to use https://prestodb.io/
You can run SQL on that PrestoDB node and it will distribute the SQL query as required. It can connect to the same node twice for different databases, or it might be connecting to different nodes on different hosts.
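For example, assuming two PostgreSQL catalogs named pgdb1 and pgdb2 have been configured on the Presto node (the catalog, schema and table names here are made up), a cross-database join is just ordinary SQL:
-- Presto/Trino addresses tables as catalog.schema.table,
-- so two Postgres databases appear as two catalogs.
SELECT u.id, u.name, o.total
FROM pgdb1.public.users  AS u
JOIN pgdb2.public.orders AS o
  ON o.user_id = u.id;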
It does not support:
DELETE
ALTER TABLE
CREATE TABLE (CREATE TABLE AS is supported)
GRANT
REVOKE
SHOW GRANTS
SHOW ROLES
SHOW ROLE GRANTS
So you should only use it for SELECT and JOIN needs. Connect directly to each database for the above needs. (It looks like you can also INSERT or UPDATE, which is nice.)
Client applications connect to PrestoDB primarily using JDBC, but other types of connection are possible, including a Tableau-compatible web API.
This is an open source tool governed by the Linux Foundation and the Presto Foundation.
The founding members of the Presto Foundation are: Facebook, Uber,
Twitter, and Alibaba.
The current members are: Facebook, Uber, Twitter, Alibaba, Alluxio,
Ahana, Upsolver, and Intel.
In case someone needs a more involved example on how to do cross-database queries, here's an example that cleans up the databasechangeloglock table on every database that has it:
CREATE EXTENSION IF NOT EXISTS dblink;

DO
$$
DECLARE
    database_name TEXT;
    conn_template TEXT;
    conn_string   TEXT;
    table_exists  boolean;
BEGIN
    conn_template = 'user=myuser password=mypass dbname=';
    FOR database_name IN
        SELECT datname FROM pg_database
        WHERE datistemplate = false
    LOOP
        conn_string = conn_template || database_name;
        table_exists = (SELECT table_exists_ FROM dblink(conn_string, '(SELECT count(*) > 0 FROM information_schema.tables WHERE table_name = ''databasechangeloglock'')') AS (table_exists_ boolean));
        IF table_exists THEN
            PERFORM dblink_exec(conn_string, 'delete from databasechangeloglock');
        END IF;
    END LOOP;
END
$$;


Copy data between two tables in PostgreSQL using dblink.sql

I am using PostgreSQL 9.1. I need to transfer the required columns from a table in one database into a table in another database (not just another schema).
I found that a dblink.sql file has to be there in share/contrib, but my contrib folder is empty. Where can I download the dblink.sql file so I can execute my query?
When I execute the query now it shows an error message:
cross database reference is not possible ...
Can anyone help me how to transfer the data between two databases?
After you have installed the package on your system as detailed in the related question, install the extension dblink into your database (the one you are running this code in; the foreign db does not need it):
CREATE EXTENSION dblink;
You can find code examples in the manual.
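For the actual transfer, a minimal sketch might look like this (the table, column and connection details are placeholders for your own):
-- Pull only the required columns from the other database and insert them locally.
INSERT INTO target_table (col_a, col_b)
SELECT col_a, col_b
FROM dblink(
       'dbname=source_db host=localhost user=myuser password=mypass',
       'SELECT col_a, col_b FROM source_table'
     ) AS src(col_a integer, col_b text);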
Here is a simple version of what I use to copy data between dbs:
First, create a FOREIGN SERVER
CREATE SERVER mydb
FOREIGN DATA WRAPPER postgresql
OPTIONS (hostaddr '111.111.111.111', port '5432', dbname 'mydb');
FOREIGN DATA WRAPPER postgresql was pre-installed in my case.
Then create a function that opens a connection, removes old data (optional), fetches new data, runs ANALYZE and closes the connection:
CREATE OR REPLACE FUNCTION f_tbl_sync()
  RETURNS text AS
$BODY$
SELECT dblink_connect('mydb');  -- USER MAPPING for postgres, PW in .pgpass
TRUNCATE tbl;                   -- optional
INSERT INTO tbl
SELECT *
FROM dblink(
      'SELECT tbl_id, x, y
       FROM tbl
       ORDER BY tbl_id')
   AS b(
      tbl_id int
     ,x int
     ,y int);
ANALYZE tbl;
SELECT dblink_disconnect();
$BODY$
LANGUAGE sql VOLATILE;
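Once created, the sync is run with a plain function call:
SELECT f_tbl_sync();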

Generating a UUID in Postgres for Insert statement?

My question is rather simple. I'm aware of the concept of a UUID and I want to generate one to refer to each 'item' from a 'store' in my DB. Seems reasonable, right?
The problem is the following line returns an error:
honeydb=# insert into items values(
uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);
ERROR: function uuid_generate_v4() does not exist
LINE 2: uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I've read the page at: http://www.postgresql.org/docs/current/static/uuid-ossp.html
I'm running Postgres 8.4 on Ubuntu 10.04 x64.
uuid-ossp is a contrib module, so it isn't loaded into the server by default. You must load it into your database to use it.
For modern PostgreSQL versions (9.1 and newer) that's easy:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
but for 9.0 and below you must instead run the SQL script to load the extension. See the documentation for contrib modules in 8.4.
For Pg 9.1 and newer instead read the current contrib docs and CREATE EXTENSION. These features do not exist in 9.0 or older versions, like your 8.4.
If you're using a packaged version of PostgreSQL you might need to install a separate package containing the contrib modules and extensions. Search your package manager database for 'postgres' and 'contrib'.
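Applied to the INSERT from the question, that would look roughly like this on 9.1 or newer (on 8.4 you would run the contrib SQL script instead of CREATE EXTENSION):
-- Load the extension once per database, then the function is available.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

INSERT INTO items VALUES (
    uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);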
Without extensions (cheat)
If you need a valid v4 UUID:
SELECT uuid_in(overlay(overlay(md5(random()::text || ':' || random()::text) placing '4' from 13) placing to_hex(floor(random()*(11-8+1) + 8)::int)::text from 17)::cstring);
Thanks to @Denis Stafichuk, @Karsten and @autronix.
Or you can simply get UUID-like value by doing this (if you don't care about the validity):
SELECT uuid_in(md5(random()::text || random()::text)::cstring);
output>> c2d29867-3d0b-d497-9191-18a9d8ee7830
(works at least in 8.4)
PostgreSQL 13 natively supports gen_random_uuid():
PostgreSQL includes one function to generate a UUID:
gen_random_uuid () → uuid
This function returns a version 4 (random) UUID. This is the most commonly used type of UUID and is appropriate for most applications.
db<>fiddle demo
The answer by Craig Ringer is correct. Here's a little more info for Postgres 9.1 and later…
Is Extension Available?
You can only install an extension if it has already been built for your Postgres installation (your cluster in Postgres lingo). For example, I found the uuid-ossp extension included as part of the installer for Mac OS X kindly provided by EnterpriseDB.com. Any of a few dozen extensions may be available.
To see if the uuid-ossp extension is available in your Postgres cluster, run this SQL to query the pg_available_extensions system catalog:
SELECT * FROM pg_available_extensions;
Install Extension
To install that UUID-related extension, use the CREATE EXTENSION command as seen in this SQL:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Beware: I found the QUOTATION MARK characters around the extension name to be required, despite documentation to the contrary.
The SQL standards committee or Postgres team chose an odd name for that command. To my mind, they should have chosen something like "INSTALL EXTENSION" or "USE EXTENSION".
Verify Installation
You can verify the extension was successfully installed in the desired database by running this SQL to query the pg_extension system catalog:
SELECT * FROM pg_extension;
UUID as default value
For more info, see the Question: Default value for UUID column in Postgres
The Old Way
The information above uses the new Extensions feature added to Postgres 9.1. In previous versions, we had to find and run a script in a .sql file. The Extensions feature was added to make installation easier, trading a bit more work for the creator of an extension for less work on the part of the user/consumer of the extension. See my blog post for more discussion.
Types of UUIDs
By the way, the code in the Question calls the function uuid_generate_v4(). This generates a type known as Version 4, where nearly all of the 128 bits are randomly generated. While this is fine for limited use on a smaller set of rows, if you want to virtually eliminate any possibility of collision, use another "version" of UUID.
For example, the original Version 1 combines the MAC address of the host computer with the current date-time and an arbitrary number, so the chance of collisions is practically nil.
For more discussion, see my Answer on related Question.
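Assuming the uuid-ossp extension is installed, the two variants can be compared directly:
SELECT uuid_generate_v1() AS uuid_v1,  -- MAC address + timestamp based
       uuid_generate_v4() AS uuid_v4;  -- fully random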
pgcrypto Extension
As of Postgres 9.4, the pgcrypto module includes the gen_random_uuid() function. This function generates one of the random-number based Version 4 type of UUID.
Get contrib modules, if not already available.
sudo apt-get install postgresql-contrib-9.4
Use pgcrypto module.
CREATE EXTENSION "pgcrypto";
The gen_random_uuid() function should now be available.
Example usage.
INSERT INTO items VALUES( gen_random_uuid(), 54.321, 31, 'desc 1', 31.94 ) ;
Quote from Postgres doc on uuid-ossp module.
Note: If you only need randomly-generated (version 4) UUIDs, consider using the gen_random_uuid() function from the pgcrypto module instead.
Update from 2021:
There is no need for a fancy trick to auto-generate a UUID in the insert statement.
Just do one thing:
Set DEFAULT gen_random_uuid() on your uuid column.
That is all.
Say, you have a table like this:
CREATE TABLE table_name (
unique_id UUID DEFAULT gen_random_uuid (),
first_name VARCHAR NOT NULL,
last_name VARCHAR NOT NULL,
email VARCHAR NOT NULL,
phone VARCHAR,
PRIMARY KEY (unique_id)
);
Now you do not need to do anything to auto-insert UUID values into the unique_id column, because you already defined a default value for it. You can simply focus on inserting the other columns, and PostgreSQL takes care of unique_id. Here is a sample insert statement:
INSERT INTO table_name (first_name, last_name, email, phone)
VALUES (
'Beki',
'Otaev',
'beki@bekhruz.com',
'123-456-123'
);
Notice there is no inserting into unique_id, as it is already taken care of.
As for other extensions like uuid-ossp, you can bring them in if you are not satisfied with Postgres's standard gen_random_uuid() function. Most of the time, you should be fine without them.
ALTER TABLE table_name ALTER COLUMN id SET DEFAULT uuid_in((md5((random())::text))::cstring);
After reading @ZuzEL's answer, I used the above code as the default value of the column id and it's working fine.
The uuid-ossp module provides functions to generate universally unique identifiers (UUIDs).
uuid_generate_v1() generates a version 1 UUID.
Add Extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Verify Extension
SELECT * FROM pg_extension;
Run Query
INSERT INTO table_name(id, column1, column2 , column3, ...) VALUES
(uuid_generate_v1(), value1, value2, value3...);
Verify the table data with a SELECT on the table. A name-based (version 5) UUID can be generated like this:
SELECT uuid_generate_v5(uuid_ns_url(), 'test');

Joining Results from Two Separate Databases

Is it possible to JOIN rows from two separate postgres databases?
I am working with a system with a couple of databases on one server, and sometimes I really need such a feature.
According to http://wiki.postgresql.org/wiki/FAQ
There is no way to query a database other than the current one.
Because PostgreSQL loads database-specific system catalogs, it is
uncertain how a cross-database query should even behave.
contrib/dblink allows cross-database queries using function calls. Of
course, a client can also make simultaneous connections to different
databases and merge the results on the client side.
EDIT: 3 years later (March 2014), this FAQ entry has been revised and is more helpful:
How do I perform queries using multiple databases?
There is no way to directly query a database other than the current
one. Because PostgreSQL loads database-specific system catalogs, it is
uncertain how a cross-database query should even behave.
The SQL/MED support in PostgreSQL allows a "foreign data wrapper" to
be created, linking tables in a remote database to the local database.
The remote database might be another database on the same PostgreSQL
instance, or a database half way around the world, it doesn't matter.
postgres_fdw is built-in to PostgreSQL 9.3 and includes read/write
support; a read-only version for 9.2 can be compiled and installed as
a contrib module.
contrib/dblink allows cross-database queries using function calls and
is available for much older PostgreSQL versions. Unlike postgres_fdw
it can't "push down" conditions to the remote server, so it'll often
land up fetching a lot more data than you need.
Of course, a client can also make simultaneous connections to
different databases and merge the results on the client side.
Forget about dblink!
Say hello to Postgres_FDW:
To prepare for remote access using postgres_fdw:
1. Install the postgres_fdw extension using CREATE EXTENSION.
2. Create a foreign server object, using CREATE SERVER, to represent each remote database you want to connect to. Specify connection information, except user and password, as options of the server object.
3. Create a user mapping, using CREATE USER MAPPING, for each database user you want to allow to access each foreign server. Specify the remote user name and password to use as user and password options of the user mapping.
4. Create a foreign table, using CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA, for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's, if you specify the correct remote names as options of the foreign table object.
Now you need only SELECT from a foreign table to access the data
stored in its underlying remote table.
It's really useful even on large data.
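Putting those four steps together, a hedged sketch (the server name, credentials and table definitions below are placeholders; the foreign table's columns must match the remote table):
-- 1. Install the extension in the local database.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- 2. Describe the remote database.
CREATE SERVER remote_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'remote.example.com', dbname 'db2', port '5432');

-- 3. Map the local user to remote credentials.
CREATE USER MAPPING FOR CURRENT_USER
  SERVER remote_srv
  OPTIONS (user 'remote_user', password 'remote_pass');

-- 4. Declare the remote table locally; columns must match the remote definition.
CREATE FOREIGN TABLE remote_table2 (
  id   integer,
  code text
) SERVER remote_srv OPTIONS (schema_name 'public', table_name 'table2');

-- Now it joins like any local table.
SELECT *
FROM table1 tb1
LEFT JOIN remote_table2 tb2 ON tb2.id = tb1.id;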
Yes, it is possible to do this using dblink albeit with significant performance considerations.
The following example will require the current SQL user to have permissions on both databases. If db2 is not located on the same cluster, then you will need to replace dbname=db2 with the full connection string defined in the dblink documentation.
SELECT *
FROM table1 tb1
LEFT JOIN (
    SELECT *
    FROM dblink('dbname=db2','SELECT id, code FROM table2')
    AS tb2(id int, code text)
) AS tb2 ON tb2.column = tb1.column;
If table2 is very large, you could have performance issues because the sub-query loads up the entire table2 before performing the join.
No, you can't. You could use dblink to connect from one database to another database, but that won't help if you're looking for JOINs.
Can't you use different schemas within a single database to store all your data?
Just a few steps and you can reach the goal:
Follow this reference step by step.
We have been connected to db2, which has table tbl2 with column col2; there is also db1 with table tbl1 and column col1.
Connecting to the second db, i.e. db2:
Now just copy and paste steps 1-7 (make sure you use the correct username, password and, of course, db name):
1. CREATE EXTENSION dblink;
2. SELECT pg_namespace.nspname, pg_proc.proname
   FROM pg_proc, pg_namespace
   WHERE pg_proc.pronamespace=pg_namespace.oid
   AND pg_proc.proname LIKE '%dblink%';
3. SELECT dblink_connect('host=localhost user=postgres password=postgres dbname=db1');
4. CREATE FOREIGN DATA WRAPPER postgres VALIDATOR postgresql_fdw_validator;
5. CREATE SERVER postgres2 FOREIGN DATA WRAPPER postgres OPTIONS (hostaddr '127.0.0.1', dbname 'db1');
6. CREATE USER MAPPING FOR postgres SERVER postgres2 OPTIONS (user 'postgres', password 'postgres');
7. SELECT dblink_connect('postgres2');
Now you can SELECT the data of database one from database two and even join results from both databases:
SELECT * FROM public.dblink('postgres2','SELECT col1, um_name FROM public.tbl1')
AS data(col1 integer, um_name text), tbl2
WHERE data.col1 = tbl2.col2;
You can also check this: How to join two tables of different databases together in PostgreSQL (working fine in version 9.4).
You need to use dblink...as araqnid mentioned above, something like this works fine:
select ST.Table_Name, ST.Column_Name, DV.Table_Name, DV.Column_Name, *
from information_schema.Columns ST
full outer join dblink('dbname=otherdatabase','select Table_Name,
Column_Name from information_schema.Columns') DV(Table_Name text,
Column_Name text)
on ST.Table_Name = DV.Table_name
and ST.Column_Name = DV.Column_Name
where ST.Column_Name is null or DV.Column_Name is NULL
You have to use the dblink extension of PostgreSQL.
Reference taken from this article:
The dblink extension of PostgreSQL is used to connect one database to another database.
Install the dblink extension:
CREATE EXTENSION dblink;
Verify DbLink:
SELECT pg_namespace.nspname, pg_proc.proname
FROM pg_proc, pg_namespace
WHERE pg_proc.pronamespace=pg_namespace.oid
AND pg_proc.proname LIKE '%dblink%';
I have already prepared a full demonstration of this. Please visit my post to learn step by step how to execute a cross-database query in PostgreSQL.
Cannot be done? Of course it can, without special extensions. In our case, we had to compare two tables on different database servers, e.g. ACC and PROD, hence an even harder case than in most answers. Especially because ACC and PROD are deliberately on different servers to create a barrier, you will not easily gain enough rights to perform a GRANT USAGE ON FOREIGN SERVER.
The obvious solution is to export both tables and import both into the same database, e.g. DEV, or your own local db, under appropriate names, e.g. table1_acc and table1_prod, or under schemas like acc and prod. Then you may JOIN those with no special problems.
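For instance, assuming the two exports were restored into local schemas named acc and prod (arbitrary names) and the table has a shared primary key id, the comparison becomes a plain join:
-- Rows present on only one side, assuming a shared primary key "id".
SELECT a.id AS acc_id, p.id AS prod_id
FROM acc.table1 a
FULL OUTER JOIN prod.table1 p ON p.id = a.id
WHERE a.id IS NULL OR p.id IS NULL;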