Use column values to build up query - postgresql

I have a table log containing the columns schema_name, table_name, object_id and data, and the table can contain records with different table_names and schema_names:
| schema_name | table_name | object_id | data |
| ------------- |-------------|-------------|-------------|
| bio | sample |5 |jsonb |
| bio | location |8 |jsonb |
| ... | ... |... |jsonb |
I want to execute a query as follows:
select schema_name,
       table_name,
       object_id,
       (select some_column from schema_name.table_name where id = object_id)
from log
PS: id is a column that exists in every table (sample, location, ...)
Is there a way in PostgreSQL to use the values in columns to build up a query (so that schema_name and table_name are filled in based on the values of the columns)?
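Plain SQL cannot take identifiers from column values, so this needs dynamic SQL, e.g. a PL/pgSQL function that builds the statement with format() and runs it with EXECUTE. A minimal sketch, assuming every target table has an id column and that some_column is text (the function name lookup_some_column is made up for the example; adjust the return type to the real column type):

create or replace function lookup_some_column(p_schema text, p_table text, p_id bigint)
returns text
language plpgsql
as $$
declare
    result text;
begin
    -- %I safely quotes the identifiers taken from the log row;
    -- the id value is passed as a query parameter via USING.
    execute format('select some_column from %I.%I where id = $1', p_schema, p_table)
        into result
        using p_id;
    return result;
end;
$$;

select schema_name,
       table_name,
       object_id,
       lookup_some_column(schema_name, table_name, object_id)
from log;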

psycopg copy_expert doesn't work if id is not in csv

I want to insert data into a Postgres table. My table's columns:
id, col_1, col_2, col_3, col_4, col_5, col_6, col_7
The id column is auto-incremented.
My Python insert code:
copy_table_query = "COPY my_table (col_1, col_2, col_3, col_4, col_5, col_6, col_7) FROM STDIN WITH (DELIMITER '\t');"
curs.copy_expert(copy_table_query, data)
But it tries to insert col_1 into id, and of course it fails with psycopg2.errors.InvalidTextRepresentation: invalid input syntax for type bigint, because col_1 is a string.
How can I let Postgres generate ids while I just insert data from CSV?
There is really no need to use copy_expert; you can use [copy_from](https://www.psycopg.org/docs/cursor.html#cursor.copy_from). By default the separator is tab, and you specify the columns with the columns parameter. An example that shows it works:
cat csv_test.csv
test f
test2 t
test3 t
\d csv_test
Table "public.csv_test"
 Column |       Type        | Collation | Nullable |               Default
--------+-------------------+-----------+----------+--------------------------------------
 id     | integer           |           | not null | nextval('csv_test_id_seq'::regclass)
 col1   | character varying |           |          |
 col2   | boolean           |           |          |
with open('csv_test.csv') as csv_file:
    cur.copy_from(csv_file, 'csv_test', columns=['col1', 'col2'])
con.commit()
select * from csv_test ;
 id | col1  | col2
----+-------+------
  1 | test  | f
  2 | test2 | t
  3 | test3 | t

Search all date fields in a PostgreSQL DB

Is it possible to search all fields in all tables whose type is datetime and show their contents?
For example:
select [from all tables] fields where field_type='datetime'
Expected behavior:
+---------------+--------------+--------------------------+----------+
| field_name | type_field | data | table |
+---------------+--------------+--------------------------+----------+
| date_invoice | date_time | 2022-01-02 18:45:09.234 | invoices |
| date_invoice | date_time | 2022-01-12 18:45:09.234 | invoices |
+---------------+--------------+--------------------------+----------+
Divide the task in two. First, get all table names:
SELECT table_name FROM information_schema.tables
where table_type='BASE TABLE'
Then, do a loop (changing table_name below) in any programming language and query:
SELECT *
FROM information_schema.columns
where table_name = 'workers'
and data_type='timestamp without time zone'
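The loop can also stay inside the database with PL/pgSQL and dynamic SQL. A minimal sketch that just prints every timestamp value with RAISE NOTICE, skipping the system schemas and covering both timestamp variants:

do $$
declare
    rec record;
    val text;
begin
    for rec in
        select table_schema, table_name, column_name
        from information_schema.columns
        where data_type in ('timestamp without time zone', 'timestamp with time zone')
          and table_schema not in ('pg_catalog', 'information_schema')
    loop
        -- Build and run "select <column> from <schema>.<table>" for each hit.
        for val in execute format('select %I::text from %I.%I',
                                  rec.column_name, rec.table_schema, rec.table_name)
        loop
            raise notice '% | % | %', rec.table_name, rec.column_name, val;
        end loop;
    end loop;
end $$;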

Query Clarification on multiple table insert

I have a table populated by CSV raw data
| NNAME   | DateDriven | username |
|---------|------------|----------|
| Thunder | 1-1-1999   | mickey   |
And an existing MSSQL database
> Tables
Drivers
| ID | username |
|----|----------|
| 1  | mickey   |
| 2  | jonny    |
| 3  | ryan     |
Cars
| ID | NNAME | DateDriven |
|----|-------|------------|
|    |       |            |
Car_Drivers Table
| Cars_ID | Driver_ID |
|---------|-----------|
|         |           |
How can I take the CSV table data and insert it into the above? I am very lost!
Cars IDs are IDENTITY(1,1). Table Car_Drivers has a composite primary key of two foreign keys.
What I think I need to do is create a join to convert username to ID, but I am getting lost completing the insert query.
Desired outcome
Cars Table
| ID | NNAME   | DateDriven |
|----|---------|------------|
| 1  | Thunder | 1-1-1999   |
Car_Drivers Table
| Cars_ID | Driver_ID |
|---------|-----------|
| 1       | 1         |
The following ought to do what you need. The problem is that you need to keep some temporary data around as rows are inserted into Cars, but some of that data comes from a different table. MERGE provides the answer:
-- Create the test data.
declare @CSVData as Table ( NName NVarChar(16), DateDriven Char(8), Username NVarChar(16) );
insert into @CSVData ( NName, DateDriven, Username ) values
    ( N'Thunder', '1-1-1999', N'mickey' );
select * from @CSVData;

declare @Drivers as Table ( Id SmallInt Identity, Username NVarChar(16) );
insert into @Drivers ( Username ) values
    ( N'mickey' ), ( N'jonny' ), ( N'ryan' );
select * from @Drivers;

declare @Cars as Table ( Id SmallInt Identity, NName NVarChar(16), DateDriven Char(8) );
declare @CarDrivers as Table ( Cars_Id SmallInt, Driver_Id SmallInt );

-- Temporary data needed for the @CarDrivers table.
declare @NewCars as Table ( Username NVarChar(16), Cars_Id SmallInt );

-- Merge the new data into @Cars.
-- MERGE allows the use of OUTPUT with references to columns not inserted,
-- e.g. Username.
merge into @Cars
    using ( select NName, DateDriven, Username from @CSVData ) as CSVData
    on 1 = 0
    when not matched by target then
        insert ( NName, DateDriven ) values ( CSVData.NName, CSVData.DateDriven )
    output CSVData.Username, Inserted.Id into @NewCars;

-- Display the results.
select * from @Cars;
-- Display the temporary data.
select * from @NewCars;

-- Add the connections.
insert into @CarDrivers ( Cars_Id, Driver_Id )
    select NewCars.Cars_Id, Drivers.Id
    from @NewCars as NewCars inner join
        @Drivers as Drivers on Drivers.Username = NewCars.Username;

-- Display the results.
select * from @CarDrivers;
DBFiddle.

Join and combine tables to get common rows in a specific column together in Postgres

I have a couple of tables in a Postgres database. I have joined and merged the tables. However, I would like common values in a specific column to appear together in the final table (in the end, I would like to perform a group-by and a maximum-value calculation on the table).
The schema of the test tables looks like this:
Schema (PostgreSQL v11)
CREATE TABLE table1 (
    id CHARACTER VARYING NOT NULL,
    seq CHARACTER VARYING NOT NULL
);
INSERT INTO table1 (id, seq) VALUES
    ('UA502', 'abcdef'), ('UA503', 'ghijk'), ('UA504', 'lmnop');
CREATE TABLE table2 (
    id CHARACTER VARYING NOT NULL,
    score FLOAT
);
INSERT INTO table2 (id, score) VALUES
    ('UA502', 2.2), ('UA503', 2.6), ('UA504', 2.8);
CREATE TABLE table3 (
    id CHARACTER VARYING NOT NULL,
    seq CHARACTER VARYING NOT NULL
);
INSERT INTO table3 (id, seq) VALUES
    ('UA502', 'qrst'), ('UA503', 'uvwx'), ('UA504', 'yzab');
CREATE TABLE table4 (
    id CHARACTER VARYING NOT NULL,
    score FLOAT
);
INSERT INTO table4 (id, score) VALUES
    ('UA502', 8.2), ('UA503', 8.6), ('UA504', 8.8);
I performed a join and union operation on the tables to get the desired columns.
Query #1
SELECT table1.id, table1.seq, table2.score
FROM table1 INNER JOIN table2 ON table1.id = table2.id
UNION
SELECT table3.id, table3.seq, table4.score
FROM table3 INNER JOIN table4 ON table3.id = table4.id
;
The output looks like this:
| id | seq | score |
| ----- | ------ | ----- |
| UA502 | qrst | 8.2 |
| UA502 | abcdef | 2.2 |
| UA504 | yzab | 8.8 |
| UA503 | uvwx | 8.6 |
| UA504 | lmnop | 2.8 |
| UA503 | ghijk | 2.6 |
However, the desired output should be:
| id | seq | score |
| ----- | ------ | ----- |
| UA502 | qrst | 8.2 |
| UA502 | abcdef | 2.2 |
| UA504 | yzab | 8.8 |
| UA504 | lmnop | 2.8 |
| UA503 | uvwx | 8.6 |
| UA503 | ghijk | 2.6 |
View on DB Fiddle
How should I modify my query to get the desired output?
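UNION gives no ordering guarantee, so rows with the same id only happen to land where they do. Adding an ORDER BY over the whole union keeps each id's rows together; a minimal sketch, sorting by id and then score descending (the sample output lists the ids in a different order, but for the planned group-by and maximum calculation only the grouping matters):

SELECT table1.id, table1.seq, table2.score
FROM table1 INNER JOIN table2 ON table1.id = table2.id
UNION
SELECT table3.id, table3.seq, table4.score
FROM table3 INNER JOIN table4 ON table3.id = table4.id
ORDER BY id, score DESC;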

Postgresql use more than one row as expression in sub query

As the title says, I need to create a query where I SELECT all items from one table and use those items as expressions in another query. Suppose I have the main table that looks like this:
main_table
-------------------------------------
id | name | location | //more columns
---|------|----------|---------------
1 | me | pluto | //
2 | them | mercury | //
3 | we | jupiter | //
And the sub query table looks like this:
some_table
---------------
id | item
---|-----------
1 | sub-col-1
2 | sub-col-2
3 | sub-col-3
where each item in some_table has a price which is in an amount_table like so:
amount_table
--------------
1 | 1000
2 | 2000
3 | 3000
So that the query returns results like this:
name | location | sub-col-1 | sub-col-2 | sub-col-3 |
----------------------------------------------------|
me | pluto | 1000 | | |
them | mercury | | 2000 | |
we | jupiter | | | 3000 |
My query currently looks like this:
SELECT name, location, (SELECT item FROM some_table)
FROM main_table
INNER JOIN amount_table WHERE //match the id's
But I'm running into the error: more than one row returned by a subquery used as an expression.
How can I formulate this query to return the desired results?
You should decide on the expected result.
To get a one-to-many relation:
SELECT name, location, some_table.item
FROM main_table
JOIN some_table on true -- or id if they match
INNER JOIN amount_table --WHERE match the id's
To get one-to-one with all rows:
SELECT name, location, (SELECT array_agg(item) FROM some_table)
FROM main_table
INNER JOIN amount_table --WHERE //match the id's
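Note that the desired output is really a pivot: one column per item, with the price in the matching row. A minimal sketch using conditional aggregation, assuming amount_table's columns are named (id, amount) and that the ids line up across the three tables (neither is stated in the question):

SELECT m.name,
       m.location,
       MAX(a.amount) FILTER (WHERE s.item = 'sub-col-1') AS "sub-col-1",
       MAX(a.amount) FILTER (WHERE s.item = 'sub-col-2') AS "sub-col-2",
       MAX(a.amount) FILTER (WHERE s.item = 'sub-col-3') AS "sub-col-3"
FROM main_table m
JOIN some_table s ON s.id = m.id
JOIN amount_table a ON a.id = s.id
GROUP BY m.name, m.location;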