How can I UNION two tables in different PostgreSQL databases? - postgresql

I'm running PostgreSQL 11.8 in a Docker container. I have two databases: website_db and testdb.
website_db has a products table with id, product_name, colour, product_size columns
testdb has a table called users with id, username, password
I'm using website_db and I want to UNION columns from the users table in the testdb database. I can get this to work in MySQL but am struggling with Postgres. Here's my attempt:
SELECT * FROM products WHERE product_name = 'doesntexist' OR 1=1 UNION SELECT null,username,password,null FROM testdb.users;
I get this error back:
ERROR: relation "testdb.users" does not exist
LINE 1: ...1=1 UNION SELECT null,username,password,null FROM testdb.use...
Does anyone know what I have to do to fix my query?

PostgreSQL does not support cross-database references within a single query, but you can do it using dblink:
create database first;
create database second;

\c first
create table products
(
    id serial not null
        constraint products_pk
            primary key,
    product_name varchar(50) not null
);
INSERT INTO public.products (id, product_name) VALUES (1, 'first_db');

\c second
create table products
(
    id serial not null
        constraint products_pk
            primary key,
    product_name varchar(50) not null
);
INSERT INTO public.products (id, product_name) VALUES (1, 'sec_db');

-- dblink -- executes a query in a remote database
create extension dblink;

-- change queries and creds
SELECT id, product_name FROM products
UNION ALL
SELECT * FROM dblink('dbname=first user=root password=root',
                     'SELECT id, product_name FROM products')
         AS tb2(id int, product_name text);
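Applied to the tables in the question (run from website_db; the connection parameters are placeholders you would change to match your setup), the UNION would look something like:

```sql
-- users has 3 columns and products has 4, so pad with null to align the column lists
SELECT id, product_name, colour, product_size FROM products
UNION ALL
SELECT id, username, password, null
FROM dblink('dbname=testdb user=postgres password=postgres',
            'SELECT id, username, password FROM users')
       AS u(id int, username text, password text);
```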


How to specify native table fields and a foreign table field during an insert?

Suppose the following:
create table member (
    id serial primary key,
    member_name varchar(55),
    role_ids bigint[] not null
);
create table role (
    id serial primary key,
    role_name varchar(55) unique
);
insert into role values (1, 'admin'), (2, 'common');
I can create an admin member like this:
insert into member (role_ids)
select ARRAY[id] as role_id from role where role_name = 'admin';
But how can I specify other fields, like member_name as well?
I've tried this:
insert into member (member_name, role_ids) values('test member', role_ids)
select ARRAY[id::bigint] as role_id from role where role_name = 'admin';
But this throws an error: syntax error at or near "select".
In your case I would probably choose to use a nested SELECT inside the VALUES, to emphasize that this is a lookup that you expect to succeed and return only one value:
insert into member (member_name, role_ids)
values('test member',
(select ARRAY[id] from role where role_name = 'admin'));
This wouldn't work if your subquery selected more than one column. Another solution is to use a plain SELECT with no VALUES, since nothing stops you from returning literal values in a SELECT. You don't name the columns in the SELECT for your INSERT; instead, you order them to match the order of the columns in the INSERT:
insert into member (member_name, role_ids)
select 'test member', ARRAY[id] from role where role_name = 'admin';
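If a member should get more than one role, the SELECT form extends naturally with array_agg; a sketch, assuming you want every matching role id collected into the one array:

```sql
-- cast to bigint to match the bigint[] column, as in the question's own attempt
insert into member (member_name, role_ids)
select 'test member', array_agg(id::bigint)
from role
where role_name in ('admin', 'common');
```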

Postgres - Oracle data type conversion

We have a foreign table that is connecting to Oracle. In Oracle, the columns are:
ticker: VARCHAR2(5)
article_id: NUMBER
In Postgres, we have tried to create the article_id as INTEGER and NUMERIC, but every time we try and query we get this error:
column "article_id" of foreign table "latest_article_id" cannot be converted to or from Oracle data type
How can we create this foreign table so we can query it? The article_id is a number, so are there additional commands we must use?
We are on Postgres 10.10.
CREATE FOREIGN TABLE latest_article_id
(ticker VARCHAR,
article_id NUMERIC)
SERVER usercomm
OPTIONS ( table '(SELECT article_id, ticker
FROM (SELECT a.article_id, t.ticker,
ROW_NUMBER() OVER (PARTITION BY t.ticker
ORDER BY a.publish_date DESC NULLS LAST) AS rnum
FROM tickers t, article_tickers at, articles a
WHERE t.ticker_id = at.ticker_id
AND at.article_id = a.article_id
AND a.status_id = 6
AND a.pull_flag = ''Y'')
WHERE rnum = 1)');

How to get multiple table data in one query in PostgreSQL JSONB data type

How can I fetch table data in one query? I have the tables below:
Table Name: calorieTracker
CREATE TABLE "calorieTracker"(c_id serial NOT NULL PRIMARY KEY, "caloriesConsumption" jsonb);
INSERT INTO public."calorieTracker" ("caloriesConsumption")
VALUES ('[{"C_id":"1","calorie":88,"date":"19/08/2020"},{"C_id":2,"date":"19/08/2020","calorie":87}]');
Table Name: watertracker
create table watertracker(wt_id serial not null primary key, wt_date varchar, wt_goal float,wt_cid int);
INSERT INTO public.watertracker (wt_id,wt_date,wt_goal,wt_cid)
VALUES (2,'2020-08-19',5.5,2);
What I want is a query that returns data where the date is 19/08/2020 (in both the calorieTracker and watertracker tables), wt_cid is 2 (watertracker), and C_id is 2 (calorieTracker).
As you have not mentioned what output you want, I am assuming you want the JSON objects from caloriesConsumption that match the conditions mentioned in the question.
Based on that assumption, try this query:
with cte as (
    select
        c_id,
        jsonb_array_elements("caloriesConsumption") "data"
    from "calorieTracker"
)
select
    t1.*
from cte t1
inner join watertracker t2
    -- the JSON key is "C_id" (capital C), and the two tables store dates in
    -- different formats, so normalize both sides before comparing
    on t2.wt_cid = cast(t1.data ->> 'C_id' as int)
    and to_date(t2.wt_date, 'YYYY-MM-DD') = to_date(t1.data ->> 'date', 'DD/MM/YYYY')
If you want the result from watertracker instead, just replace t1.* with t2.*.
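For reference, jsonb_array_elements expands a JSONB array into one row per element, which is what lets the CTE join each consumption entry individually:

```sql
select jsonb_array_elements('[{"a":1},{"a":2}]'::jsonb);
-- two rows: {"a": 1} and {"a": 2}
```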

Create new entries for a particular account_id in the same table using postgres

I have a table called account(id, account_id, name, status), which already contains data for these columns, say:
Table account:
I first have to query the account_id for the row whose name is xyz, then create new entries for that account_id with names kjf and lmn and status fail.
After the insert, the new table will look like this:
Can someone help me write a query for this? I tried:
INSERT INTO account (id, account_id, name, status)
SELECT uuid_generate_v4(), account_id, 'kjh', 'fail' FROM account;
This throws an error because account_id is unique.
With SQL, you can try this:
with
v1 as (select max(id)+1 as maxid from account),
v2 as (select account_id as newid from account where name='xyz')
insert into account
select (select maxid from v1), (select newid from v2), 'kjh', 'fail';
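The question asks for two new rows (kjf and lmn), so here is a sketch extending the same idea with a VALUES list; it assumes, as the answer above does, that id is a plain integer column:

```sql
with
  m as (select max(id) as max_id from account),
  a as (select account_id from account where name = 'xyz')
insert into account (id, account_id, name, status)
-- the row number n offsets max(id) so the two new rows get distinct ids
select m.max_id + v.n, a.account_id, v.new_name, 'fail'
from m, a, (values (1, 'kjf'), (2, 'lmn')) as v(n, new_name);
```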

postgres inner JOIN query out of memory

I am trying to query a database using pgAdmin3, and I need to join two tables. I am using the following code:
SELECT table1.species, table1.trait, table1.value, table1.units, table2.id, table2.family, table2.latitude, table2.longitude, table2.species as speciescheck
FROM table1 INNER JOIN table2
ON table1.species = table2.species
But I keep running into this error:
an out of memory error
So I've tried to insert my result into a new table, as follows:
CREATE TABLE new_table AS
SELECT table1.species, table1.trait, table1.value, table1.units, table2.id, table2.family, table2.latitude, table2.longitude, table2.species as speciescheck
FROM table1 INNER JOIN table2
ON table1.species = table2.species
And still got an error:
ERROR: could not extend file "base/17675/43101.15": No space left on device
SQL state: 53100
Hint: Check free disk space.
I am very new at this (it's the first time I've had to deal with PostgreSQL) and I guess I can do something to optimize this query and avoid this type of error. I have no privileges in the database. Can anyone help?
Thanks in advance!
Updated:
Table 1 description
-- Table: table1
-- DROP TABLE table1;
CREATE TABLE table1
(
    species character varying(100),
    trait character varying(50),
    value double precision,
    units character varying(50)
)
WITH (
    OIDS=FALSE
);
ALTER TABLE table1
OWNER TO postgres;
GRANT ALL ON TABLE table1 TO postgres;
GRANT SELECT ON TABLE table1 TO banco;
-- Index: speciestable1_idx
-- DROP INDEX speciestable1_idx;
CREATE INDEX speciestable1_idx
ON table1
USING btree
(species COLLATE pg_catalog."default");
-- Index: traittype_idx
-- DROP INDEX traittype_idx;
CREATE INDEX traittype_idx
ON table1
USING btree
(trait COLLATE pg_catalog."default");
and table2 as:
-- Table: table2
-- DROP TABLE table2;
CREATE TABLE table2
(
    id integer NOT NULL,
    family character varying(40),
    species character varying(100),
    plotarea real,
    latitude double precision,
    longitude double precision,
    source integer,
    latlon geometry,
    CONSTRAINT table2_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);
ALTER TABLE table2
OWNER TO postgres;
GRANT ALL ON TABLE table2 TO postgres;
GRANT SELECT ON TABLE table2 TO banco;
-- Index: latlon_gist
-- DROP INDEX latlon_gist;
CREATE INDEX latlon_gist
ON table2
USING gist
(latlon);
-- Index: species_idx
-- DROP INDEX species_idx;
CREATE INDEX species_idx
ON table2
USING btree
(species COLLATE pg_catalog."default");
You're performing a join between two tables on the column species.
Not sure what's in your data, but if species has far fewer distinct values than the table has rows (e.g. values like "elephant" and "giraffe" while you're analyzing all animals in Africa), this join will match every elephant with every elephant.
When joining two tables, you usually want to join on a unique or nearly unique attribute, like id (not sure what id means in your case, but it might be the right choice).
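To see whether that's what is happening, a quick diagnostic (table and column names taken from the question) is to count how many rows share each species value in each table; the join produces the product of those two counts for every species:

```sql
-- the top species here drive the size of the join result
SELECT species, count(*) FROM table1 GROUP BY species ORDER BY count(*) DESC LIMIT 10;
SELECT species, count(*) FROM table2 GROUP BY species ORDER BY count(*) DESC LIMIT 10;
```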