I'm looking for an easy way to create UPDATE queries based on the results of certain SELECT queries. The purpose of this is to create a private configuration file that I'm planning to run after I revert my database from a "public" backup.
For example, assuming that I have a table named setting with the following table structure:
| id_setting | name | value | module |
and a query such as:
select * from setting where module = 'voip'
Based on the results of these queries, I would like to generate INSERT/UPDATE statements that are ultimately stored into my configuration script.
Any idea how to achieve this in a generic way?
PS. I know I can concatenate parts of SQL together, but I feel that this approach is too time consuming.
The closest thing in pgAdmin is the query tool (see http://www.pgadmin.org/docs/1.16/query.html). It would not take your SELECT statements and turn them into INSERT/UPDATE statements, but you can build queries graphically if you don't want to parse and concatenate.
If this is going to be a big, repetitive task, I would look at writing a Perl script to parse a query and rewrite it as needed. This would require some inside knowledge. It isn't clear what you want to do regarding updating the values, so you'd have to design your solution around that. More likely you would want to write a functional API (a UDF) to do what you want, and then write calls to that, probably not in a config file directly (since it is not clear you can trust that) but through an interface.
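That said, here is a minimal sketch of the query-generation idea done entirely in SQL. It assumes PostgreSQL 9.1+ (for format() with %L) and uses the setting table from the question; adjust the key columns in the WHERE clause to whatever uniquely identifies a row for you:
SELECT format(
    'UPDATE setting SET value = %L WHERE module = %L AND name = %L;',
    value, module, name) AS update_stmt
FROM setting
WHERE module = 'voip';
Each output row is a ready-to-run UPDATE statement, so the result set can be written straight into the configuration script.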
I want to be able to use variables in Redshift that refer to my DB objects (like schema and table names). Something like...
SET my_schema="schema";
SET my_table="table";
SELECT * from #my_schema.#my_table;
But it looks like Redshift doesn't have such a feature. Is there any workaround to achieve this?
There are a few ways you can try to attack this. But first: trying to use a database engine for functions beyond querying the database is a waste of horsepower and the road to DB lock-in. So I'm going to focus on ways to do this before the database.
The most complete way is to use a front-end system that clients connect to, and which in turn connects to the db. The one I've used in the past is pgbouncer-rr, which pools connections to the db but also allows the SQL to be modified before it is sent on. This will do what you want, but you will need a computer to perform this work.
If you use the Redshift Data API, you could put a Lambda function in series which performs the SQL modifications you desire (but make sure you get your API permissions right). However, I expect it is unlikely that you are looking to move to an API access model.
Many benches support variable substitution, and simple replacements in the SQL can be done by the bench itself. However, this is very dependent on which bench you use and on having all users' benches configured correctly.
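For example, if psql happens to be an acceptable client (it can connect to Redshift over the PostgreSQL protocol), the substitution is done client-side with \set variables; the schema and table names below are placeholders:
\set my_schema 'myschema'
\set my_table 'mytable'
SELECT * FROM :my_schema.:my_table;
Redshift only ever sees the already-substituted text.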
Bottom line - if you want something to modify your SQL, do it before it goes to Redshift.
I have a PostgreSQL query constructed by a templating mechanism. What I want to do is determine the relations actually hit by the query when it is run and record them in a relation. So this is a very rudimentary lineage problem. Simply looking at the relation names appearing in the query (or parsing the query) would not easily solve the problem, as the queries are somewhat complex and the templating mechanism inserts expressions like WHERE FALSE.
I can of course do this by using EXPLAIN on the query and inserting the relation names I find manually. However, this has two drawbacks:
EXPLAIN actually runs the query. Unfortunately running the query takes a lot of time so it is not ideal to run the query twice, once for the result and once for the EXPLAIN.
It is manual.
After reading a few documents I found out that one can log the result of an EXPLAIN automatically to a CSV file and read it back into a relation. But, as far as I understand, this means logging everything to the CSV, which is not an option for me. Also, automatic logging seems to be triggered only when the execution takes longer than a predetermined threshold, and I want to do this for a few specific queries, not for all time-consuming ones.
PS: This does not need to be implemented fully at the database layer. For instance, once I have the result of EXPLAIN in a relation, I can parse it and extract the relations it hits at the application layer.
EXPLAIN does not execute the query.
You can run EXPLAIN (FORMAT JSON) SELECT ..., which will return the execution plan as a JSON. Simply extract all Relation Name attributes, and you have a list of the tables scanned.
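A rough sketch of how that extraction could be automated at the database layer (assumes PostgreSQL 12+ for jsonb_path_query; some_table and the query_lineage target table are placeholders):
DO $do$
DECLARE
    plan text;
BEGIN
    -- Capture the plan; the query is planned but not executed
    EXECUTE 'EXPLAIN (FORMAT JSON) SELECT * FROM some_table WHERE false'
        INTO plan;
    -- Pull every "Relation Name" attribute out of the nested plan tree
    INSERT INTO query_lineage (relname)
    SELECT DISTINCT r #>> '{}'
    FROM jsonb_path_query(plan::jsonb, '$.**."Relation Name"') AS r;
END
$do$;
On older versions, or at the application layer, the same JSON can be walked with any JSON library instead.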
I use CTEs (common table expressions) in SQL Developer to make my queries more structured, and also with the intent to create "bricks" that I can reuse across queries.
For the second purpose it would be good to keep those CTEs in a separate file, so I don't need to browse for the latest version.
Is it possible to refer to a CTE in another file in Oracle's SQL Developer?
I know I could create queries / views in the database and use them, but unfortunately I don't have access to that.
One way to go would be code templates in SQL Developer itself. You could code up your most frequently used CTEs and invoke them from the keyboard.
I talk about those here
But basically you code them up in the preferences, and give them a name.
Then type the name, and hit ctrl+space to invoke the template.
You can also set these up as Auto-Replace.
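For example, a template named active_customers (all names here are made up for illustration) could hold a reusable CTE "brick" like:
WITH active_customers AS (
    SELECT customer_id, customer_name
    FROM customers
    WHERE status = 'ACTIVE'
)
Typing the template name and hitting Ctrl+Space drops that text into the worksheet, and you write the rest of the query after it.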
For what it's worth - you CAN reference code from other files using the @ and @@ commands. However, that takes the contents of the file and executes it as a complete, standalone SQL statement or series of statements, so I don't think you can use it to achieve your goal.
As part of a requirement, I need to migrate a schema from an existing database to a new schema in a different database. Part of it is already done, and now I need to compare the two schemas and make changes to the new schema based on the gaps found.
I am not using a tool and was trying to work out the details using the SYSCAT catalog views, but without much success.
Any pointers on the best way to solve this?
Regards,
Ramakant
A tool really is the best way to solve this – IBM Data Studio is free and can compare schemas between databases.
Assuming you are using DB2 for Linux/UNIX/Windows, you can do a rudimentary compare by looking at selected columns in SYSCAT.TABLES and SYSCAT.COLUMNS (for table definitions), and SYSCAT.INDEXES (for indexes). Exporting this data to files and using diff may be the easiest method. However, doing this for more complex structures (tables with range or database partitioning, foreign keys, etc) will become very complex very quickly as this information is spread across a lot of different system catalog tables.
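As a rough sketch of that approach (the schema name is a placeholder), the column-level portion could be dumped from each database and the two output files diffed:
SELECT c.tabname, c.colname, c.typename, c.length, c.scale, c.nulls
FROM syscat.columns c
JOIN syscat.tables t
  ON t.tabschema = c.tabschema AND t.tabname = c.tabname
WHERE c.tabschema = 'MYSCHEMA'
  AND t.type = 'T'
ORDER BY c.tabname, c.colno;
Run the same query on both databases and diff the exported result files; indexes would need a similar query against SYSCAT.INDEXES.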
An alternative method would be to extract DDL using the db2look utility. However, you can't specify the order that db2look outputs objects (db2look extracts DDL based on the objects' CREATE_TIME), so you can't extract DDL for an entire schema into a file and expect to use diff to compare. You would need to extract DDL into a separate file for each table.
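A hedged example of the per-table invocation (database, schema, and table names are placeholders):
db2look -d MYDB -z MYSCHEMA -t SETTING -e -o setting.ddl
Here -e extracts the DDL statements, -z restricts the output to one schema, -t names the table, and -o writes the output file, so one such call per table gives you files that can be diffed pairwise.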
Use SchemaCrawler for IBM DB2, a free open-source tool designed to produce text output that is meant to be diffed. You can get very detailed information about your schema, including view and stored procedure definitions. All of the information that you need is output to a single file and can be compared very easily using a standard diff tool.
Sualeh Fatehi, SchemaCrawler
Unfortunately, as per company policy, I cannot use these tools at this point in time. So I am writing a program using JDBC to get the details and do the comparison.
I need to write an update script that will check to see if certain tables, indexes, etc. exist in the database, and if not, create them. I've been unable to figure out how to do these checks, as I keep getting Syntax Error at IF messages when I type them into a query window in PgAdmin.
Do I have to do something like write a stored procedure in the public schema that does these updates using Pl/pgSQL and execute it to make the updates? Hopefully, I can just write a script that I can run without creating extra database objects to get the job done.
If you are on PostgreSQL 9.1, you can use CREATE TABLE IF NOT EXISTS ...
On 9.0 you can wrap your IF condition code into a DO block: http://www.postgresql.org/docs/current/static/sql-do.html
For anything before that, you will have to write a function to achieve what you want.
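A rough sketch of both approaches (table, column, and index names are placeholders):
-- 9.1+: create only if missing
CREATE TABLE IF NOT EXISTS my_table (id integer PRIMARY KEY, value text);
-- 9.0: wrap the existence check in a DO block
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_indexes
                   WHERE schemaname = 'public'
                     AND indexname = 'my_table_value_idx') THEN
        CREATE INDEX my_table_value_idx ON my_table (value);
    END IF;
END
$$;
The DO block variant also avoids the Syntax Error at IF message, because plain SQL has no IF; it only exists inside PL/pgSQL.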
Have you looked into pg_tables?
select * from pg_tables;
This will return (among other things) the schemas and tables that exist in the database. Without knowing more of what you're looking for, this seems like a reasonable place to start.