Is it possible to search all fields across all tables whose type is datetime and show their contents?
For example:
select [from all tables] fields where field_type='datetime'
Expected behavior:
+--------------+------------+-------------------------+----------+
| field_name   | type_field | data                    | table    |
+--------------+------------+-------------------------+----------+
| date_invoice | date_time  | 2022-01-02 18:45:09.234 | invoices |
| date_invoice | date_time  | 2022-01-12 18:45:09.234 | invoices |
+--------------+------------+-------------------------+----------+
You can divide the task: first, get all table names:
SELECT table_name FROM information_schema.tables
where table_type='BASE TABLE'
Then loop over those names (substituting table_name below) in any programming language and query:
SELECT *
FROM information_schema.columns
where table_name = 'workers'
and data_type='timestamp without time zone'
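The loop the answer describes can be sketched in Python. This only builds the SQL text for each (table, column) pair returned by the information_schema query; the actual database connection (e.g. via psycopg2) is omitted, the `type_field` value is hard-coded for the demo, and the output column layout follows the expected-behavior table above.

```python
# Sketch of the "loop in any programming language" step: given the
# (table_name, column_name) pairs fetched from information_schema.columns,
# build one SELECT per datetime column. Executing them is left out;
# feed in the pairs however you fetch them.

def build_datetime_queries(columns):
    """columns: iterable of (table_name, column_name) pairs."""
    queries = []
    for table, column in columns:
        queries.append(
            f"SELECT '{column}' AS field_name, "
            f"'timestamp' AS type_field, "
            f"{column} AS data, "
            f"'{table}' AS \"table\" "
            f"FROM {table}"
        )
    return queries

qs = build_datetime_queries([("invoices", "date_invoice")])
for q in qs:
    print(q)
```

Running the generated statements and concatenating their result sets gives the combined listing the question asks for.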
Related
I am trying to create a view in Redshift that shows the latest data in each table.
We have datasets that update on various schedules, and every table has a column "updated" containing a timestamp of the row's last update.
What I want to achieve is the view at the bottom (built from these two example tables):
other.bigtable
+-----+--------+------------------+
| id | stat | updated |
+-----+--------+------------------+
| A2 | rgerhg | 03/05/2020 05:00 |
| F5 | bdfb | 03/05/2020 05:00 |
| GF5 | bb | 03/05/2020 05:00 |
+-----+--------+------------------+
default.test
+----+------+------------------+
| id | name | updated |
+----+------+------------------+
| 1 | A | 02/02/2008 00:00 |
| 2 | B | 02/02/2008 00:00 |
| 3 | C | 02/02/2008 00:00 |
| 4 | F | 02/02/2008 00:00 |
| 5 | T | 02/02/2010 00:00 |
+----+------+------------------+
default.view_updates
+---------+------------+------------------+
| schema | table_name | max_update |
+---------+------------+------------------+
| default | test | 02/02/2010 00:00 |
| other | big_table | 03/05/2020 05:00 |
+---------+------------+------------------+
So far I have managed to get the tables and schemas, but I have no idea where to start on the dates. Redshift seems a bit more limited than stock Postgres.
EDIT:
Utilising some code stolen from the web, I was hoping to use this to find the tables that have the "updated" column:
select t.table_schema,
t.table_name
from information_schema.tables t
inner join information_schema.columns c
on c.table_name = t.table_name
and c.table_schema = t.table_schema
where c.column_name = 'updated'
and t.table_schema not in ('information_schema', 'pg_catalog')
and t.table_type = 'BASE TABLE'
order by t.table_schema;
[Source: https://dataedo.com/kb/query/amazon-redshift/find-tables-with-specific-column-name]
You can select the most recent date from each table and union the results together (and put them in a view if you like):
SELECT * FROM (SELECT TOP 1 'test', updated FROM test ORDER BY updated DESC)
UNION ALL
SELECT * FROM (SELECT TOP 1 'big_table', updated FROM big_table ORDER BY updated DESC);
You can chain a long list of "UNION ALL"s, up to some limit. This hard-codes the tables into the view; I assume that is what you are looking for.
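If the table list from the information_schema query gets long, the view body can be generated instead of hand-written. A minimal Python sketch, assuming the view name default.view_updates from the question and the per-table TOP 1 pattern above; it only builds the DDL text, it does not execute it.

```python
def build_latest_updates_view(tables):
    """tables: iterable of (schema, table) pairs that have an 'updated' column."""
    selects = [
        f"SELECT * FROM (SELECT TOP 1 '{schema}' AS schema_name, "
        f"'{table}' AS table_name, updated AS max_update "
        f"FROM {schema}.{table} ORDER BY updated DESC)"
        for schema, table in tables
    ]
    # One branch per table, chained with UNION ALL.
    return ("CREATE VIEW default.view_updates AS\n"
            + "\nUNION ALL\n".join(selects) + ";")

ddl = build_latest_updates_view([("default", "test"), ("other", "big_table")])
print(ddl)
```

Feeding it the (schema, table) rows from the information_schema query keeps the view in sync with whatever tables carry the "updated" column.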
I have a table log with columns schema_name, table_name, object_id, and data; it can contain records with different table_names and schema_names:
| schema_name | table_name | object_id | data  |
|-------------|------------|-----------|-------|
| bio         | sample     | 5         | jsonb |
| bio         | location   | 8         | jsonb |
| ...         | ...        | ...       | jsonb |
I want to execute a query as follows:
select schema_name,
table_name,
object_id,
(select some_column from schema_name.table_name where id = object_id)
from log
PS: id is a column that exists in every table (sample, location, ...)
Is there a way in PostgreSQL to use the values in the columns to build up a query (so that schema_name and table_name are filled in from the column values)?
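Plain SQL cannot treat a column value as a table name, so this needs dynamic SQL: either EXECUTE format(...) inside a PL/pgSQL function, or a loop on the application side. A sketch of the application-side variant using Python's sqlite3 module (sqlite3 has no schemas, so schema_name is only carried along; the table contents and the some_column name are made up for the demo):

```python
import sqlite3

# Application-side analogue of the dynamic lookup: read table_name and
# object_id from log, then run a per-row query built from those values.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE log (schema_name TEXT, table_name TEXT, object_id INT);
    CREATE TABLE sample (id INT, some_column TEXT);
    CREATE TABLE location (id INT, some_column TEXT);
    INSERT INTO log VALUES ('bio', 'sample', 5), ('bio', 'location', 8);
    INSERT INTO sample VALUES (5, 'blood');
    INSERT INTO location VALUES (8, 'lab-2');
""")

rows = []
for schema, table, obj_id in conn.execute("SELECT * FROM log"):
    # The table name comes from a column value, so it must be spliced
    # into the SQL text -- validate it against a whitelist in real code!
    cur = conn.execute(
        f"SELECT some_column FROM {table} WHERE id = ?", (obj_id,)
    )
    rows.append((schema, table, obj_id, cur.fetchone()[0]))

print(rows)
```

In Postgres itself the equivalent is a set-returning PL/pgSQL function that does EXECUTE format('SELECT some_column FROM %I.%I WHERE id = $1', schema_name, table_name) for each log row.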
I would like to select a number of tables, select the geometry (geom) and Name columns in each, and append them below each other. I have gotten as far as selecting the tables and their columns, as shown below:
SELECT TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.columns
WHERE (TABLE_NAME LIKE '%HESA' OR
       TABLE_NAME LIKE '%HEWH') AND
      (COLUMN_NAME = 'geom' OR
       COLUMN_NAME = 'Name');
How do you then take the tables:
id | geom | Name | id | geom | Name |
____________________ ____________________
1 | geom1 | Name1 | 1 | geom4 | Name4 |
2 | geom2 | Name2 | 2 | geom5 | Name5 |
3 | geom3 | Name3 | 3 | geom6 | Name6 |
And append the second table below the first, like this:
id | geom | Name |
____________________
1 | geom1 | Name1 |
2 | geom2 | Name2 |
3 | geom3 | Name3 |
1 | geom4 | Name4 |
2 | geom5 | Name5 |
3 | geom6 | Name6 |
Do I use UNION ALL or something else?
https://www.db-fiddle.com/f/75fgQMEWf9LvPj4xYMGWvA/0
Based on your sample data:
do $$
declare
    r record;
begin
    for r in (
        SELECT a.table_name
        FROM information_schema.columns a
        JOIN information_schema.columns b
          ON a.table_name = b.table_name
         AND a.column_name = 'geom'
         AND b.column_name = 'name'
        WHERE a.table_name LIKE 'oranges%' OR a.table_name LIKE '%_db'
    ) loop
        execute format('insert into rslt select geom, name from %I', r.table_name);
    end loop;
end;
$$;
Union All will do the job just fine:
SELECT
*
FROM (
(SELECT * FROM table_one)
UNION ALL
(SELECT * FROM table_two)
) AS tmp
ORDER BY name ASC;
I have added the outer SELECT to show how you can order the whole result.
DB Fiddle can be found here
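The same append works in any SQL engine, as long as both SELECTs yield the same number of compatible columns. A quick Python/sqlite3 sketch with the made-up rows from the question (SQLite does not allow parentheses around the individual SELECTs in a compound query, so they are dropped here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_one (id INT, geom TEXT, Name TEXT);
    CREATE TABLE table_two (id INT, geom TEXT, Name TEXT);
    INSERT INTO table_one VALUES (1,'geom1','Name1'),(2,'geom2','Name2'),(3,'geom3','Name3');
    INSERT INTO table_two VALUES (1,'geom4','Name4'),(2,'geom5','Name5'),(3,'geom6','Name6');
""")

# Append table_two below table_one, then order the combined result.
rows = list(conn.execute("""
    SELECT * FROM (
        SELECT * FROM table_one
        UNION ALL
        SELECT * FROM table_two
    ) AS tmp
    ORDER BY Name ASC
"""))
print(rows)
```

The six rows come back as one stacked result set, matching the appended table in the question.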
As the title says, I need to create a query that SELECTs all items from one table and uses those items as expressions in another query. Suppose I have a main table that looks like this:
main_table
-------------------------------------
id | name | location | //more columns
---|------|----------|---------------
1 | me | pluto | //
2 | them | mercury | //
3 | we | jupiter | //
And the sub query table looks like this:
some_table
---------------
id | item
---|-----------
1 | sub-col-1
2 | sub-col-2
3 | sub-col-3
where each item in some_table has a price, stored in amount_table like so:
amount_table
--------------
1 | 1000
2 | 2000
3 | 3000
So that the query returns results like this:
name | location | sub-col-1 | sub-col-2 | sub-col-3 |
----------------------------------------------------|
me | pluto | 1000 | | |
them | mercury | | 2000 | |
we | jupiter | | | 3000 |
My query currently looks like this
SELECT name, location, (SELECT item FROM some_table)
FROM main_table
INNER JOIN amount_table WHERE //match the id's
But I'm running into the error more than one row returned by a subquery used as an expression
How can I formulate this query to return the desired results?
You should decide on the expected result.
To get a one-to-many relation:
SELECT name, location, some_table.item
FROM main_table
JOIN some_table on true -- or id if they match
INNER JOIN amount_table --WHERE match the id's
To get one-to-one, with all of some_table's items aggregated into one array per row:
SELECT name, location, (SELECT array_agg(item) FROM some_table)
FROM main_table
INNER JOIN amount_table --WHERE //match the id's
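The layout shown in the question (one column per item) is a pivot, which neither variant above produces directly. A conditional-aggregation sketch in Python/sqlite3, assuming main_table.id lines up with some_table.id and amount_table.id as in the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_table (id INT, name TEXT, location TEXT);
    CREATE TABLE some_table (id INT, item TEXT);
    CREATE TABLE amount_table (id INT, price INT);
    INSERT INTO main_table VALUES (1,'me','pluto'),(2,'them','mercury'),(3,'we','jupiter');
    INSERT INTO some_table VALUES (1,'sub-col-1'),(2,'sub-col-2'),(3,'sub-col-3');
    INSERT INTO amount_table VALUES (1,1000),(2,2000),(3,3000);
""")

# One CASE per item name turns rows into columns; MAX picks the single
# matching price per group, leaving NULL in the other columns.
rows = list(conn.execute("""
    SELECT m.name, m.location,
           MAX(CASE WHEN s.item = 'sub-col-1' THEN a.price END) AS "sub-col-1",
           MAX(CASE WHEN s.item = 'sub-col-2' THEN a.price END) AS "sub-col-2",
           MAX(CASE WHEN s.item = 'sub-col-3' THEN a.price END) AS "sub-col-3"
    FROM main_table m
    JOIN some_table s  ON s.id = m.id
    JOIN amount_table a ON a.id = s.id
    GROUP BY m.id, m.name, m.location
    ORDER BY m.id
"""))
print(rows)
```

The downside is that the item names must be known when the query is written; a dynamic column set needs crosstab (tablefunc) in Postgres or generated SQL.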
So, I have the following table:
time | name | ID |
12:00:00| access | 1 |
12:05:00| select | null |
12:10:00| update | null |
12:15:00| insert | null |
12:20:00| out | null |
12:30:00| access | 2 |
12:35:00| select | null |
The table is bigger (approx. 1 to 1.5 million rows), and there will be IDs equal to 2, 3, 4, etc., with more rows in between.
The following should be the result:
time | name | ID |
12:00:00| access | 1 |
12:05:00| select | 1 |
12:10:00| update | 1 |
12:15:00| insert | 1 |
12:20:00| out | 1 |
12:30:00| access | 2 |
12:35:00| select | 2 |
What is the simplest method to update the rows without filling up the log? For example, one ID at a time.
You can do it with a subquery (T-SQL requires the alias in the FROM clause):
UPDATE t
SET t.ID = (SELECT TOP 1 s.ID
            FROM YourTable s
            WHERE s.time < t.time AND s.name = 'access'
            ORDER BY s.time DESC)
FROM YourTable t
WHERE t.name <> 'access'
An index on (name, time) that includes ID will help the inner lookup.
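The correlated-subquery fill can be checked quickly in SQLite syntax (LIMIT 1 instead of TOP 1) via Python, with the table and rows taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE YourTable (time TEXT, name TEXT, ID INT);
    INSERT INTO YourTable VALUES
      ('12:00:00','access',1), ('12:05:00','select',NULL),
      ('12:10:00','update',NULL), ('12:15:00','insert',NULL),
      ('12:20:00','out',NULL), ('12:30:00','access',2),
      ('12:35:00','select',NULL);
    -- Each non-'access' row gets the ID of the latest 'access' row
    -- before it.
    UPDATE YourTable
    SET ID = (SELECT s.ID FROM YourTable s
              WHERE s.time < YourTable.time AND s.name = 'access'
              ORDER BY s.time DESC LIMIT 1)
    WHERE name <> 'access';
""")

rows = list(conn.execute("SELECT time, name, ID FROM YourTable ORDER BY time"))
print(rows)
```

Every gap row picks up the ID of the preceding 'access' row, matching the expected result table in the question.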
You can do it using a CTE with a running count of the 'access' rows (each 'access' row starts a new ID group; a plain ROW_NUMBER per name would not give the right IDs):
;WITH myCTE
AS ( SELECT time
        , name
        , ID
        , SUM(CASE WHEN name = 'access' THEN 1 ELSE 0 END)
              OVER ( ORDER BY time ROWS UNBOUNDED PRECEDING ) AS grp
     FROM YourTable
   )
UPDATE myCTE
SET ID = grp

SELECT *
FROM YourTable ORDER BY time
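The group IDs can also be derived without any per-row subquery, as a running count of the 'access' rows over the time order. A SQLite check via Python with the sample rows from the question (SQLite cannot UPDATE through a CTE, so this shows only the SELECT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE YourTable (time TEXT, name TEXT, ID INT);
    INSERT INTO YourTable VALUES
      ('12:00:00','access',1), ('12:05:00','select',NULL),
      ('12:10:00','update',NULL), ('12:15:00','insert',NULL),
      ('12:20:00','out',NULL), ('12:30:00','access',2),
      ('12:35:00','select',NULL);
""")

# Running count of 'access' rows seen so far = the ID of the current group.
rows = list(conn.execute("""
    SELECT time, name,
           SUM(CASE WHEN name = 'access' THEN 1 ELSE 0 END)
               OVER (ORDER BY time ROWS UNBOUNDED PRECEDING) AS ID
    FROM YourTable
    ORDER BY time
"""))
print(rows)
```

Because the window scans the table once, this tends to scale better on a 1–1.5 million row table than a correlated subquery per row.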