I use CTEs (common table expressions) in Oracle SQL Developer to give my queries more structure, and also to create "bricks" that I can reuse across queries.
For the second purpose it would be good to keep those CTEs in a separate file, so I don't have to hunt around for the latest version.
Is it possible to refer to a CTE in another file in Oracle SQL Developer?
I know I could create queries / views in the database and use them, but unfortunately I don't have access to that.
One way to go would be code templates in SQL Developer itself: you could code up your most frequent CTEs and invoke them from the keyboard.
I talk about those here
But basically, you code them up in the preferences and give them a name. Then type the name and hit Ctrl+Space to invoke the template.
You can also set these up as Auto-Replace.
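For example, a template might hold a frequently used CTE brick; the template name and table here are invented:

WITH recent_orders AS (
    -- last 30 days of orders; adjust the window to taste
    SELECT order_id, customer_id, order_date
    FROM   orders
    WHERE  order_date > SYSDATE - 30
)
SELECT *
FROM   recent_orders;

Typing the template's name (say, cte_recent_orders) and hitting Ctrl+Space drops the whole block into the worksheet.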
For what it's worth, you CAN reference code in other files using the @ and @@ commands. However, they take the contents of that file and execute it as a complete, standalone SQL statement or series of statements, so I don't think you can use this to achieve your goal.
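For illustration (file names invented), @ runs a script by path and @@ runs one relative to the directory of the calling script:

@c:\queries\my_ctes.sql
@@my_ctes.sql

Either way the file's contents execute as their own statements; they are not spliced into the query you are currently editing.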
Is it possible to get the table structure, like db2look does, from SQL?
Or is the command line the only way? I could wrap db2look in an external stored procedure written in C and call it that way, but that is not what I am looking for.
Clarification added later:
I want to know, from SQL, which tables have the "not logged" option.
It is possible to create the table structure from regular SQL and the public DB2 catalog; however, it is complex and requires some deeper knowledge of the catalog.
The metadata is available in the DB2 catalog views in the SYSCAT schema. For a regular table you would start by looking at SYSCAT.TABLES and SYSCAT.COLUMNS. From there you would branch off to other views, depending on which table and column options you are after: time-travel tables, special partitioning rules, and many other features each involve additional views.
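As a minimal sketch (schema and table names are placeholders), the column list of one table can be pulled from SYSCAT.COLUMNS as a first step toward rebuilding the DDL:

-- list the columns of one table, in definition order
SELECT colname,
       typename,
       length,
       scale,
       CASE nulls WHEN 'N' THEN 'NOT NULL' ELSE '' END AS nullability
FROM   syscat.columns
WHERE  tabschema = 'MYSCHEMA'
  AND  tabname   = 'MYTABLE'
ORDER BY colno;

Defaults, identity columns, tablespaces and the like would each require joining further SYSCAT views.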
Serge Rielau published an article on developerWorks called "Backup and restore SQL schemas for DB2 Universal Database" that provides a set of stored procedures doing exactly what you're looking for.
The article is quite old (2006), so you may need to put in some time updating the procedures to handle features added to DB2 since publication, but they may work for you as-is and are a nice jumping-off point.
I'm looking for an easy way to create UPDATE queries based on the results of certain SELECT queries. The purpose of this is to create a private configuration file that I'm planning to run after I revert my database from a "public" backup.
For example, assuming that I have a table named setting with the following table structure:
| id_setting | name | value | module |
and a query such as:
select * from setting where module = 'voip'
Based on the results of these queries, I would like to generate INSERT/UPDATE statements that are ultimately stored into my configuration script.
Any idea how to achieve this in a generic way?
PS. I know I can concatenate parts of SQL together, but I feel that this approach is too time-consuming.
The closest thing in pgAdmin is the query tool (see http://www.pgadmin.org/docs/1.16/query.html). It will not take your SELECT statements and turn them into UPDATE statements, but you can build queries graphically if you don't want to parse and concatenate.
If this is going to be a big, repetitive task, I would look at writing a Perl script to parse a query and rewrite it as needed; that would require some inside knowledge. It isn't clear what you want to do about updating the values, so you'd have to design your solution around that. More likely you would want to write a functional API (a UDF) to do what you want, and then write calls to it, probably not in a config file directly (since it is not clear you can trust that) but through an interface.
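Incidentally, the concatenation the asker wants to avoid is fairly painless in PostgreSQL 9.1 and later thanks to format(); a minimal sketch against the setting table from the question:

-- %L quotes the value as a literal, %s substitutes as-is;
-- run this SELECT and save its output as the configuration script
SELECT format('UPDATE setting SET value = %L WHERE id_setting = %s;',
              value, id_setting)
FROM   setting
WHERE  module = 'voip';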
I'm writing a PL/1 subroutine that reads data from DB2. Depending on the input, it uses one of three cursors. These have to be opened, fetched, closed, etc., and for every one of these cursor-specific operations I have to specify the cursor's name. This leads to very redundant code, because the remaining operations are exactly the same in every case, as the sketch below shows.
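To illustrate (cursor, table and host-variable names are made up):

EXEC SQL DECLARE C1 CURSOR FOR SELECT COL1 FROM TAB1;
EXEC SQL DECLARE C2 CURSOR FOR SELECT COL1 FROM TAB2;

/* the same three steps, repeated verbatim for each cursor name */
EXEC SQL OPEN C1;
EXEC SQL FETCH C1 INTO :HOSTVAR;
EXEC SQL CLOSE C1;

EXEC SQL OPEN C2;
EXEC SQL FETCH C2 INTO :HOSTVAR;
EXEC SQL CLOSE C2;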
Is it possible to create a reference, to which I would assign the appropriate cursor? Then I could use this to perform the necessary tasks only once.
Because of safety-related restrictions, I'm not allowed to use dynamic (prepared) SQL.
And is there a reference containing all commands I can use in my EXEC SQL statements?
Thanks in advance
David
"And is there a reference containing all commands I can use in my EXEC SQL statements?"
IBM has documentation for DB2, which contains an SQL reference for the product.
I have a huge SQL script which I need to analyse. It would be really helpful if I could find a way to generate a call tree, i.e., to see which procedures are called from a particular procedure. A Perl-based example is here: http://sqlblog.com/blogs/linchi_shea/archive/2009/10/23/find-the-complete-call-tree-for-a-stored-procedure.aspx
But I need a tool that analyses the text file (the .sql file), not procedures stored in the database. For various reasons I will not be able to create the whole set of procedures in the database and use the tool mentioned above.
Please respond if you have come across any IDE/tool with this feature.
Probably not very helpful, as it violates your request for an "offline", text-based tool that parses the SQL file, but I wanted to throw out a Redgate tool I have used with great success in the past: Redgate SQL Dependency Tracker. It works very well and does a good job of mapping out your objects and all their dependencies (you can define what gets mapped). But it does require a database with all of the existing objects in place to work properly. :(
If you can't find one out there, I guess you could do some script/macro text parsing, provided all the procedure calls are easily identified and predictable in the file. AutoHotkey is a great general-purpose scripting tool/framework, and there are a few SQL-related scripts out there... just not one exactly like you are looking for, as far as I have seen.
We're considering using SSIS to maintain a PostgreSQL data warehouse. I've used it before between SQL Servers with no problems, but I am having a lot of difficulty getting it to play nicely with Postgres. I'm using the evaluation version of the OLEDB PGNP data provider (http://www.postgresql.org/about/news.1004).
I wanted to start with something simple like UPSERT on the fact table (10k-15k rows are updated/inserted daily), but this is proving very difficult (not to mention I’ll want to use surrogate keys in the future).
I've attempted (Link) and (http://consultingblogs.emc.com/jamiethomson/archive/2006/09/12/SSIS_3A00_-Checking-if-a-row-exists-and-if-it-does_2C00_-has-it-changed.aspx), which are effectively the same (except I don't really understand the UNION ALL at the end when I'm trying to upsert). But I run into the same problem with parameters when doing the update through an OLE DB Command, which I tried to overcome using (http://technet.microsoft.com/en-us/library/ms141773.aspx). That just doesn't seem to work; I get a validation error:
The external columns for complent.... are out of sync with the datasource columns... external column “Param_2” needs to be removed from the external columns.
(This error is repeated for the first two parameters as well. I never came across it using the SQL Server connection, as that supports named parameters.)
Has anyone come across this?
AND:
The fact that this simple task is apparently so difficult to do in SSIS suggests I'm using the wrong tool for the job. Is there a better (and still flexible) way of doing this? Or would another ETL package be better for use between two Postgres databases? Other options include any listed on (http://en.wikipedia.org/wiki/Extract,_transform,_load#Open-source_ETL_frameworks). I could just go and write a load of SQL to do this for me, but I wanted a neat and easily maintainable solution.
I have used the Slowly Changing Dimension wizard for this with good success; it may give you what you are looking for:
http://msdn.microsoft.com/en-us/library/ms141715.aspx
On the "external columns out of sync" error: SSIS is case-sensitive. I have encountered this issue multiple times, and it makes me want to pull my hair out.
This simple task is going to take some work either way. SSIS is by no means an enterprise-class ETL product yet, but it does give you some quick and easy functionality and is sufficient for most ETL work. I guess it also comes down to your level of comfort with the tool.
SCD is way too slow for what I want; I need to use set-based SQL.
It turned out that a lot of my problems were with bugs in the provider.
I opened a forum topic (http://www.pgoledb.com/forum/viewtopic.php?f=4&t=49) and had a useful discussion with the moderator/support/developer person.
Also, Postgres doesn't let you do cross-database queries, so I solved the problem this way:
1. Data flow from the production DB into a temp table in the archive DB
2. Run the set-based query between the temp table and the archive table
3. Truncate the temp table
Note that the temp table is not actually a temporary table, but a copy of the archive table's schema used to stage data temporarily. A sketch of step 2 follows.
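A minimal sketch of the set-based step, assuming a staging table stage_fact loaded by the data flow and an archive table fact keyed on fact_id (all names invented; this version of Postgres predates ON CONFLICT, hence two statements):

-- update rows that already exist in the archive
UPDATE fact f
SET    value = s.value
FROM   stage_fact s
WHERE  f.fact_id = s.fact_id;

-- insert rows that are new
INSERT INTO fact (fact_id, value)
SELECT s.fact_id, s.value
FROM   stage_fact s
WHERE  NOT EXISTS (SELECT 1 FROM fact f WHERE f.fact_id = s.fact_id);

-- step 3: clear the staging table for the next run
TRUNCATE stage_fact;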
Took a while, but I got there in the end.
Regarding "SSIS is by no means an enterprise-class ETL product yet": what enterprise ETL solution would you suggest?