I'm trying to run the same query over multiple tables in my Postgres database that all have the same schema.
This question, Select from multiple tables without a join?,
shows that this is possible; however, it hard-codes the set of tables.
I have another query that returns the five specific tables I would like my main query to run on. How can I go about using the result of this with the UNION approach?
In short, I want my main query to treat the five specific tables (determined by the outcome of another query) as one large table when it runs.
I understand that in many scenarios similar to mine you would simply merge the tables. I cannot do this.
One way of doing this that may satisfy your constraints is table inheritance. In short, you create a parent table with the same schema, and for each child table you want to query you run ALTER TABLE that_table INHERIT parent_table. Any query against the parent table will then also query all of the child tables. If you need to query different sets of tables in different circumstances, I think the best way would be to add a column named type or some such, and query only certain values of that column.
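A minimal sketch of that setup, assuming hypothetical table names (measurements as the parent, measurements_a and measurements_b standing in for the identically-structured tables):
-- parent table with the shared schema (hypothetical columns)
CREATE TABLE measurements (
    id          bigint,
    recorded_at timestamptz,
    value       numeric
);
-- attach the existing, identically-structured tables as children
ALTER TABLE measurements_a INHERIT measurements;
ALTER TABLE measurements_b INHERIT measurements;
-- this now scans the parent plus every attached child table
SELECT recorded_at, value
FROM measurements
WHERE recorded_at >= now() - interval '1 day';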
Related
I have a parent table (parent_table) and a few child tables that inherit from it (child_one_table and child_two_table).
I want to query data using columns that belong to the parent table only (the data itself is inserted into the child tables). When I run EXPLAIN on my query, I see sequential scans on all of the tables (parent_table, child_one_table and child_two_table).
Is there a more efficient way to do this? When I try to query parent_table using ONLY, I get back 0 results, since the data was inserted into the child tables.
Thanks in advance!
Thanks for clarifying your question!
When I query the data with SELECT * FROM parent_table WHERE name = 'joe'; I see that it actually goes over all 3 tables.
This seems to be standard behaviour in PostgreSQL. According to the 5.10.1 Caveats section of https://www.postgresql.org/docs/current/ddl-inherit.html:
Note that not all SQL commands are able to work on inheritance hierarchies. Commands that are used for data querying, data modification, or schema modification (e.g., SELECT, UPDATE, DELETE, most variants of ALTER TABLE, but not INSERT or ALTER TABLE ... RENAME) typically default to including child tables and support the ONLY notation to exclude them.
So really what you want to do is explore the usage of ONLY to specify the scope of your query.
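For example (a sketch reusing the table names from the question), ONLY restricts the statement to the named table, while the default includes every child:
-- default behaviour: scans parent_table and all of its children
EXPLAIN SELECT * FROM parent_table WHERE name = 'joe';
-- ONLY restricts the scan to parent_table itself; it returns nothing here,
-- because the rows physically live in the child tables
SELECT * FROM ONLY parent_table WHERE name = 'joe';
-- or query a single child directly when you know where the data lives
SELECT * FROM child_one_table WHERE name = 'joe';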
I hope this can help!
I want to select all columns except two from a table in a q/kdb+ historical database.
I tried running the query below, but it does not work on the HDB:
delete colid,coltime from table where date=.z.d-1
It fails with the following error:
ERROR: 'par
(trying to update a physically partitioned table)
I referred to https://code.kx.com/wiki/Cookbook/ProgrammingIdioms#How_do_I_select_all_the_columns_of_a_table_except_one.3F but it did not help.
How can I display all columns except two in a kdb historical database?
The reason you are getting the 'par error is that the table is partitioned.
The error is documented here
trying to update a partitioned table
You cannot directly update or delete anything on a partitioned table (there is a separate DB maintenance script for that).
The query you have used as a fix works because it first selects the data into memory (temporarily) and then deletes the columns:
delete colid,coltime from select from table where date=.z.d-1
You can also try the following functional form (shown here excluding a single column `p from a table t):
c:cols[t] except `p
?[t;enlist(=;`date;2015.01.01) ;0b;c!c]
You could try a functional select:
?[table;enlist(=;`date;.z.d);0b;{x!x}cols[table]except`colid`coltime]
Here the last argument is a dictionary mapping output column names to the expressions that produce them, which tells the query what to extract. Instead of deleting the columns you specified, this selects all but those two, which is more or less the same query.
To see what the functional form of a query is you can run something like:
parse"select colid,coltime from table where date=.z.d"
And it will output the arguments to the functional select.
You can read more on functional selects at code.kx.com.
Only select queries work on partitioned tables; you resolved this by first selecting the table into memory and then deleting the columns you did not want.
If you have a large number of columns and don't want to write out a bulky select query, you could use a functional select:
?[table;();0b;{x!x}((cols table) except `colid`coltime)]
This shows all columns except the specified subset. The columns clause expects a dictionary, hence the function {x!x} to convert the list of column names into a dictionary. See more information here:
https://code.kx.com/q/ref/funsql/
As nyi mentioned, if you want to permanently delete columns from a historical database you can use the deletecol function in the dbmaint tools: https://github.com/KxSystems/kdb/blob/master/utils/dbmaint.md
Is it viable to use many tables in a one-to-one relationship instead of one big table?
The main goal is to be able to easily change the schema when needed: create or drop a table when needed, instead of adding or changing a column.
How scalable would this be with PostgreSQL?
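For reference, a sketch of the pattern being asked about (hypothetical names): each satellite table shares the primary key of the core table, and the one-big-table shape is recovered with joins:
-- core table plus one-to-one satellite tables (hypothetical schema)
CREATE TABLE product (
    product_id bigint PRIMARY KEY,
    name       text NOT NULL
);
CREATE TABLE product_pricing (
    product_id bigint PRIMARY KEY REFERENCES product,
    price      numeric NOT NULL
);
CREATE TABLE product_shipping (
    product_id bigint PRIMARY KEY REFERENCES product,
    weight_kg  numeric
);
-- reassembling the wide row costs one join per satellite table
SELECT p.name, pr.price, s.weight_kg
FROM product p
LEFT JOIN product_pricing  pr USING (product_id)
LEFT JOIN product_shipping s  USING (product_id);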
I have two databases that contain exactly the same tables and are on the same server. I want to create a report that will allow me to "merge" these databases so that when a user queries, they query BOTH databases at the same time. Is that even possible?
The simplest way to achieve this would be to create database views that UNION ALL the values from the same tables in both databases - something like:
CREATE VIEW CombinedSalesTable AS
SELECT * FROM database1.SalesTable
UNION ALL
SELECT * FROM database2.SalesTable
- and design the reports to query the views.
You may want to add an additional column to the views to show which database each record comes from, as a key value that is unique in one table may have a "duplicate" in the equivalent table in the other database.
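For instance, reusing the names from the view above, the extra source column could look like this:
-- tag each row with its source database so duplicated keys stay distinguishable
CREATE VIEW CombinedSalesTable AS
SELECT 'database1' AS source_db, s.* FROM database1.SalesTable AS s
UNION ALL
SELECT 'database2' AS source_db, s.* FROM database2.SalesTable AS s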
My current query looks something like this:
SELECT SUBSTR(name,1,1), COUNT(*) FROM files GROUP BY SUBSTR(name,1,1)
But it's taking a pretty long time just to do counts on a table that's already indexed by the name column. I saw from this question that some engines might not use indexes correctly for the SUBSTR function, and in fact SQLite will not use an index for SUBSTR(string,1,1).
Is there any other approach that would utilize the index and net me some faster queries?
One strategy that is consistent with your access pattern is to add a new indexed column first_letter to your table. Use triggers to set its value on INSERT and UPDATE. Then your query is a simple GROUP BY on first_letter.
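A rough sketch of that in SQLite, assuming the files table from the question and hypothetical index/trigger names:
-- new indexed column holding the first letter of name
ALTER TABLE files ADD COLUMN first_letter TEXT;
UPDATE files SET first_letter = SUBSTR(name, 1, 1);  -- backfill existing rows
CREATE INDEX idx_files_first_letter ON files(first_letter);
-- keep the column in sync on inserts and on changes to name
CREATE TRIGGER files_first_letter_ins AFTER INSERT ON files
BEGIN
    UPDATE files SET first_letter = SUBSTR(NEW.name, 1, 1) WHERE rowid = NEW.rowid;
END;
CREATE TRIGGER files_first_letter_upd AFTER UPDATE OF name ON files
BEGIN
    UPDATE files SET first_letter = SUBSTR(NEW.name, 1, 1) WHERE rowid = NEW.rowid;
END;
-- the original count becomes a simple indexed group-by
SELECT first_letter, COUNT(*) FROM files GROUP BY first_letter;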
Another strategy is to create a shadow table which contains an aggregation of the main table. This isn't easy, because it is your job as the developer to keep the shadow table consistent with the main table: every DELETE, UPDATE or INSERT on the files table needs to be accompanied by a corresponding change in the shadow table.
Databases like Oracle support materialized views to achieve this automatically, but SQLite doesn't.
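A sketch of such a shadow table in SQLite, with hypothetical names, kept in sync by triggers on files (an UPDATE-of-name trigger would also be needed if names can change):
-- aggregated shadow table: one row per first letter
CREATE TABLE files_letter_counts (
    first_letter TEXT PRIMARY KEY,
    cnt          INTEGER NOT NULL
);
-- seed it once from the existing data
INSERT INTO files_letter_counts
SELECT SUBSTR(name, 1, 1), COUNT(*) FROM files GROUP BY SUBSTR(name, 1, 1);
CREATE TRIGGER files_cnt_ins AFTER INSERT ON files
BEGIN
    INSERT OR IGNORE INTO files_letter_counts VALUES (SUBSTR(NEW.name, 1, 1), 0);
    UPDATE files_letter_counts SET cnt = cnt + 1
    WHERE first_letter = SUBSTR(NEW.name, 1, 1);
END;
CREATE TRIGGER files_cnt_del AFTER DELETE ON files
BEGIN
    UPDATE files_letter_counts SET cnt = cnt - 1
    WHERE first_letter = SUBSTR(OLD.name, 1, 1);
END;
-- the count query then reads the shadow table directly
SELECT first_letter, cnt FROM files_letter_counts;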