I was wondering whether it is possible to query tables by specifying their object_id instead of table names in SELECT statements.
The reason for this is that some tables are created dynamically, and their structure (and names) are not known beforehand, and yet I would like to be able to write sprocs that are capable of querying these tables and working on their content.
I know I can create dynamic statements and execute them, but maybe there are better ways, and I would be grateful if someone could share how to approach this.
Thanks.
You have to query sys.columns and build a dynamic query based on that.
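As a rough sketch of the idea (this assumes SQL Server 2017+ for STRING_AGG; the @object_id value is only illustrative):

    DECLARE @object_id int = 123456789;  -- id of the dynamically created table (illustrative)
    DECLARE @sql nvarchar(max);

    -- Build a quoted column list for that object from sys.columns
    SELECT @sql = STRING_AGG(CAST(QUOTENAME(name) AS nvarchar(max)), ', ')
    FROM sys.columns
    WHERE object_id = @object_id;

    -- Resolve the table name back from the id and assemble the query
    SET @sql = N'SELECT ' + @sql + N' FROM '
             + QUOTENAME(OBJECT_SCHEMA_NAME(@object_id)) + N'.'
             + QUOTENAME(OBJECT_NAME(@object_id)) + N';';

    EXEC sp_executesql @sql;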
There are no better ways: SQL isn't designed for ad hoc or unknown structures.
I've never worked on an application in 20 years where I didn't know what my data looks like. Either your data is persisted with a known structure, or, if it's transient, it should be in XML or JSON or the like.
I have a PostgreSQL query constructed by a templating mechanism. What I want to do is determine the relations actually hit by the query when it is run and record them in a relation, so this is a very rudimentary lineage problem. Simply looking at the relation names appearing in the query (or parsing the query) would not easily solve the problem, as the queries are somewhat complex and the templating mechanism inserts expressions like WHERE FALSE.
I can of course do it by using EXPLAIN on the query and inserting the relation names I find manually. However, this has two drawbacks:
EXPLAIN actually runs the query. Unfortunately running the query takes a lot of time so it is not ideal to run the query twice, once for the result and once for the EXPLAIN.
It is manual.
After reading a few documents I found out that one can log the result of an EXPLAIN automatically to a CSV file and read it back into a relation. But, as far as I understand, this means logging everything to the CSV, which is not an option for me. Also, automatic logging seems to be triggered only when the execution takes longer than a predetermined threshold, and I want to do this for a few specific queries, not for all time-consuming ones.
PS: This does not need to be implemented fully at database layer. For instance, once I have the result of EXPLAIN in a relation, I can parse it and extract the relations it hits at the application layer.
EXPLAIN does not execute the query (EXPLAIN ANALYZE does).
You can run EXPLAIN (FORMAT JSON) SELECT ..., which will return the execution plan as JSON. Simply extract all "Relation Name" attributes, and you have a list of the tables scanned.
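If you want to do the extraction inside the database rather than at the application layer, a minimal sketch could look like this (assumes PostgreSQL 12+ for jsonb_path_query; relations_hit is a hypothetical helper name):

    CREATE OR REPLACE FUNCTION relations_hit(p_query text)
    RETURNS SETOF text
    LANGUAGE plpgsql AS $$
    DECLARE
        plan_text text;
    BEGIN
        -- EXPLAIN (without ANALYZE) only plans the query; it does not run it
        EXECUTE 'EXPLAIN (FORMAT JSON) ' || p_query INTO plan_text;
        -- Walk the whole plan tree and collect every "Relation Name" attribute
        RETURN QUERY
            SELECT DISTINCT t.r #>> '{}'
            FROM jsonb_path_query(plan_text::jsonb, '$.**."Relation Name"') AS t(r);
    END;
    $$;

    -- Usage: pass the templated query as text
    SELECT * FROM relations_hit(
        'SELECT * FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace');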
I am not able to load multiple tables; I get this error:
Exception in component tMysqlInput_1 (MYSQL_DynamicLoading)
java.sql.SQLException: Bad format for Timestamp 'GUINESS' in column 3
One table works fine. Basically, after the first iteration the second table tries to use the schema of the first table. Please help: how do I edit the component to make this work correctly? I am trying to load the actor and country tables from the sakila MySQL database into another database on the same server. The image above shows the successful dynamic load of one table.
You should not use tMysqlInput if the output schemas differ; for this case there is no way around tJavaRow and custom code. However, I cannot guess what happens in tMap, so you should provide more details about what you want to achieve.
If all you need is to load data from one table to another without any transformations, you can do one of the following:
If your tables reside in 2 different databases on the same server, you can use a tMysqlRow component and execute a query like "INSERT INTO catalog.table SELECT * FROM catalog2.table2 ..." (see the sketch after this list). You can do some simple transformations in SQL if needed.
If your tables live on different servers, check the generic solution I suggested for a similar question here. It may need some tweaking depending on your use case, but the general idea is to replicate the functionality of INSERT INTO ... SELECT when the tables are not on the same server.
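For the first option, the query run inside tMysqlRow would be along these lines (database names are illustrative, based on the sakila example above, and the target table is assumed to already exist with the same structure):

    -- Same MySQL server, two databases: no Talend schema needed at all
    INSERT INTO target_db.actor
    SELECT * FROM sakila.actor;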
Is there a way of getting the metadata of a query?
I can use DESCRIBE, but this only applies to tables. I don't really want to have to create a table from the query and get the metadata of that table, as that would be unnecessarily expensive even if I limited the result rows.
I'm using the Impala shell to output queries to delimited files (usually only a couple of hundred rows), which sometimes need to be imported into an Access database.
I'd like to know the data types, as then I can make Access use the correct data types rather than defaulting to string.
The answer, thanks to @SamsonScharfrichter, is:
CREATE VIEW xxxx AS ..., then DESCRIBE xxxx, then DROP VIEW xxxx.
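For concreteness, a sketch of the trick (the view name, table, and query are illustrative):

    CREATE VIEW tmp_meta AS
    SELECT customer_id, SUM(amount) AS total   -- the query you want metadata for
    FROM sales
    GROUP BY customer_id;

    DESCRIBE tmp_meta;   -- returns one row per column: name, type, comment

    DROP VIEW tmp_meta;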
I am currently creating a form in Access; however, when I go into Form View I am unable to update any of the current records or add new ones. Would anyone have any idea why I can't edit or update my records, or how I can go about fixing this?
My form is currently linked to a query, and in case it matters, I have also included a search function in my form.
Any help/advice would be greatly appreciated.
Thanks
Paula
It means that the query you use is not updateable. There are a lot of limitations on updateable query design: for instance, you cannot use aggregation in the query, joins should have unique keys, etc. Try to redesign your query. As a workaround, you can copy the data from your query into a temporary table, edit the data in that table, and then copy the data back to the main table(s), as sketched below.
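A rough sketch of that workaround (all names here are illustrative; Access SQL has no comment syntax, so these are separate queries). First, copy the query output into an editable temporary table:

    SELECT * INTO TempEdit FROM qryMyQuery;

Edit the records in TempEdit (e.g. via the form), then push the edits back to the main table:

    UPDATE MainTable INNER JOIN TempEdit ON MainTable.ID = TempEdit.ID
    SET MainTable.SomeField = TempEdit.SomeField;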
Your query is probably not updateable. You can check this by simply opening the query directly, and trying to edit/add data.
The most common reason is a JOIN on non-indexed columns.
For more reasons see: Dealing with Non-Updateable Microsoft Access Queries
or Allen Browne: Why is my query read-only?
I'm a newbie to PostgreSQL. I have a few questions about it:
1) I know it is possible to access columns as <schema>.<table_name>, but when I try to access columns as <db_name>.<schema>.<table_name> it throws an error like
Cross-database references are not implemented
How do I implement it?
2) We have 10+ tables, and 6 of them have 2000+ rows. Is it fine to maintain all of them in one database, or should I create separate databases for them?
3) Regarding the tables above with 2000+ rows: for a particular process I need only a few rows of data, and I have created views to get those rows.
For example: a table contains details of employees, who divide into 3 types: manager, architect, and engineer. Obviously not every process reads the whole table; each process only reads the rows it needs.
I think there are two ways to get the data: SELECT * FROM emp WHERE type='manager', or I can create views for manager, architect, and engineer and get the data with SELECT * FROM view_manager (see the sketch below).
Can you suggest any better way to do this?
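For reference, the view in the second approach would be something like this (emp and view_manager come from the example above):

    CREATE VIEW view_manager AS
    SELECT * FROM emp WHERE type = 'manager';

    SELECT * FROM view_manager;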
4) Do views also require storage space, like tables do?
Thanks in advance.
Cross-database queries have existed in PostgreSQL for years now. You must prefix the table name with the database name (and, of course, have the right to query it). You'll end up with something like this:
SELECT alias_1.col1, alias_2.col3 FROM table_1 AS alias_1, database_b.table_2 AS alias_2 WHERE ...
If your database is on another instance, then you'll need to use the dblink contrib module.
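A minimal dblink sketch (assumes the extension is installed via CREATE EXTENSION dblink; the connection string and column list are illustrative, with table_2 taken from the example above):

    SELECT t.id, t.name
    FROM dblink('dbname=database_b host=localhost',
                'SELECT id, name FROM table_2')
         AS t(id integer, name text);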
This question does not make sense. Please refine it.
Generally, views are used to simplify the writing of other queries that reuse them. In your case, as you describe it, a stored procedure might fit your needs better.
No, except for the view definition.
1: A workaround is to open a connection to the other database and (if using psql(1)) set that as your current connection. However, this will only work if you don't try to join tables across the two databases.
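For example, in psql (other_database is illustrative):

    \c other_database
    SELECT * FROM some_table;   -- now runs against other_database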
1) That means it's not a feature Postgres supports. I do not know any way to create a query that runs on more than one database.
2) That's fine for one database. Single databases can contain billions of rows.
3) Don't bother creating views, the queries are simple enough anyway.
4) Views don't require space in the database except their query definition.