Oracle Forms Utility - Oracle 10g

Is there any utility through which we can determine the list of database objects an Oracle Form or Report uses, or the list of Forms and Reports that use a specific database object?
This is for Oracle Forms/Reports 10g.

There used to be a tool from Quest Software called "SQL Impact", but it is no longer supported.
The easiest way to do this kind of analysis is to convert all your .fmb/.rdf files to XML and then write a script (for example in Ruby) that searches for database object names.
Alternatively, you can use JDAPI to iterate over Forms objects, but there is no JDAPI equivalent for Reports. JDAPI is much easier to use from JRuby; you can find sample scripts here:
https://github.com/tomi44g/OracleFormsJruby
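For example, something along these lines (sketched in Python rather than Ruby; the converted_xml/ directory, the object_names.txt input, and the conversion via the Forms2XML utility and rwconverter are placeholders):

    # Minimal sketch: assumes the .fmb/.rdf files have already been converted
    # to XML (e.g. with Forms2XML / rwconverter) into converted_xml/, and that
    # object_names.txt lists one database object name per line
    # (e.g. exported from USER_OBJECTS).
    import glob
    import re

    with open("object_names.txt") as f:
        object_names = [line.strip() for line in f if line.strip()]

    usage = {}  # object name -> set of module files that mention it
    for xml_file in glob.glob("converted_xml/**/*.xml", recursive=True):
        with open(xml_file, encoding="utf-8", errors="ignore") as f:
            content = f.read().upper()
        for name in object_names:
            # Whole-word match so that e.g. EMP does not also match EMPLOYEES.
            if re.search(r"\b" + re.escape(name.upper()) + r"\b", content):
                usage.setdefault(name, set()).add(xml_file)

    for name, files in sorted(usage.items()):
        print(f"{name}: {', '.join(sorted(files))}")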

Is there a way to get to the stored queries for reports and graphs in Tableau?

We're using Tableau 10.5.6. I used a reporting tool years ago called Oracle Sales Analyzer. In that tool you could get to the queries generated by the reports and graphs you created through back-end catalogs using their command line.
There you could rewrite the query to be more efficient by fine-tuning the code if you needed. It was a very cool feature of that reporting tool for geeks like me who like to dive into the back end of the product and tune it at a very low level.
My question is, does Tableau have any facility of this type? Is there a way to get to the queries that get stored once you create a report or a graph? Also, is there a command line where you can access these catalogs, if they exist? Or are these queries just stored in ASCII flat files that can be accessed by a user?
Thanks!
There are two ways that Tableau will query a database.
Option 1: Custom SQL
In your data source, you paste in the SQL you have written and Tableau passes that query through to the database. This gives you complete control over the SQL, including adding any indexing hints you may want. See https://onlinehelp.tableau.com/current/pro/desktop/en-us/customsql.html
Option 2: Use the Tableau data source designer
This is what many people do. Here, you visually design your data source with the joins. Tableau translates that design into what the Hyper engine considers to be the most effective way to run the query. Sometimes Hyper translates that into a regular SQL statement; sometimes it does additional things to boost performance, like breaking the work up into separate queries. A lot depends on the database engine you are connecting to. There is no "SQL" stored in a flat file for this; Tableau just translates your design at run-time. The Hyper engine does a good job of fine-tuning, assuming you have an efficient database design with proper indexing and current table statistics.
There is a way to see the SQL from option 2 at run-time using Performance Recording. Performance Recording keeps track of each step of the visualization process and will spit out the SQL statement(s) that Tableau ran to generate your dataset. The SQL is not stored in the .twb file, though; it comes from a run-time analysis.

Data virtualization with SQL Server DB using MarkLogic

I would like to use data from a SQL Server database in MarkLogic without moving it physically. I have read about data virtualization in MarkLogic but cannot find any example or documentation explaining how to go about it. Please point me to any reference that may help.
I have already tried reading data using MLSAM. Is this the only way, and does it count as virtualization?
MarkLogic introduced the concept of Views to allow data visualization tools to connect to MarkLogic through ODBC, executing SQL against MarkLogic. These views are fed from XML content within MarkLogic through range indexes. So I think that is the opposite of what you are looking for. In general, MarkLogic needs the data inside its own databases so that it can index it.
MLSAM can be a way to pull such data in, executing SQL statements from within XQuery against external sources (in contrast to xdmp:sql, which runs against the Views inside MarkLogic). Tools like RecordLoader, XQsync, and XMLSh might be worth looking at as well. See
http://developer.marklogic.com/code
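If you do decide to pull the SQL Server data into MarkLogic rather than virtualize it, another option besides MLSAM is to read the rows with an ODBC driver and load them through the MarkLogic REST API. A rough sketch of that idea follows; the connection strings, the Customers table, and the document URI scheme are placeholders:

    # Sketch: copy rows from SQL Server into MarkLogic as small XML documents.
    # Assumes pyodbc with a reachable SQL Server, and a MarkLogic REST
    # instance on port 8000 using digest authentication.
    import pyodbc
    import requests
    from requests.auth import HTTPDigestAuth
    from xml.sax.saxutils import escape

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=Sales;UID=sa;PWD=secret"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT CustomerID, Name, City FROM dbo.Customers")

    ml_auth = HTTPDigestAuth("admin", "admin")
    for customer_id, name, city in cursor.fetchall():
        doc = (
            "<customer>"
            f"<id>{customer_id}</id>"
            f"<name>{escape(name or '')}</name>"
            f"<city>{escape(city or '')}</city>"
            "</customer>"
        )
        # PUT /v1/documents stores the document at the given URI.
        requests.put(
            "http://localhost:8000/v1/documents",
            params={"uri": f"/customers/{customer_id}.xml"},
            data=doc.encode("utf-8"),
            headers={"Content-Type": "application/xml"},
            auth=ml_auth,
        ).raise_for_status()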
HTH!

Export data from PostgreSQL in SAS XPORT (xpt) format

Is there a convenient, open-source method to generate a SAS XPORT Transport Format (xpt) file from a PostgreSQL database for FDA submission?
I have checked the FDA specifications, available at http://www.fda.gov/downloads/ForIndustry/DataStandards/StudyDataStandards/UCM312964.pdf
These state that 'SAS XPORT transport files can be converted to various other formats using commercially available off the shelf software', but no software packages other than SAS are suggested.
The specifications for a SAS XPORT file are available at http://support.sas.com/techsup/technote/ts140.html
I have checked OpenClinica (the EDC software we are using), pgAdmin3, and AM (which can import .xpt files, but I didn't find an export method).
Easy way? Not that I know of. I think one way or another it will take some development work.
My recommendation is to do it as follows:
1. Write a user-defined function/stored procedure for pulling the data you need for each section.
2. Write a user-defined function to pull this data from each section and arrange it into an XML file. The XML functions are likely to come in handy for this.
Of course, you could also put the XML conversion in an arbitrary front-end. However, in general, you will find that the design above forces you to push everything into set-based logic, which is likely to be more powerful in your case.
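As a small illustration of the XML step, PostgreSQL's built-in query_to_xml() can serialize the result of a section query into a single XML document. A sketch using psycopg2 (the database name, credentials, and the lab_results table are placeholders):

    # Minimal sketch: serialize one dataset section to XML with PostgreSQL's
    # query_to_xml(), called here through psycopg2.
    import psycopg2

    conn = psycopg2.connect("dbname=clinical user=postgres")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT query_to_xml(
                'SELECT subject_id, visit_date, result FROM lab_results',
                true,    -- represent NULLs explicitly (xsi:nil)
                false,   -- wrap rows in a single document (tableforest = false)
                ''       -- no target namespace
            )
            """
        )
        xml_doc = cur.fetchone()[0]

    with open("lab_results.xml", "w", encoding="utf-8") as f:
        f.write(xml_doc)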
If you don't mind using Python, my XPORT module can write xpt files. https://github.com/selik/xport
If you have trouble using it, write me a note and I'll try to help. https://github.com/selik/xport/issues
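A rough usage sketch follows; the pandas/psycopg2 part just pulls the data out of PostgreSQL, while the final write call is an assumption about the xport package's API, which has changed between releases, so check the project README for the current form:

    # Rough sketch only: pull a table from PostgreSQL into a DataFrame and
    # hand it to the xport package. The xport.from_dataframe() call is an
    # ASSUMPTION about the writer API; consult https://github.com/selik/xport
    # for the exact call in the version you install.
    import pandas as pd
    import psycopg2
    import xport

    conn = psycopg2.connect("dbname=clinical user=postgres")  # placeholder DSN
    df = pd.read_sql("SELECT subject_id, visit_date, result FROM lab_results", conn)

    with open("lab_results.xpt", "wb") as f:
        xport.from_dataframe(df, f)  # assumed API; see the xport README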

Database portable Jasper reports

I use iReport-designed jrxml files for Jasper reports.
I have used database-specific functions and DML in my queries, such as date formatting, string concatenation, the concatenation operator (||), etc.
My question is, "Is there any way or plug-in to make the jrxml files database portable?"
Thanks in advance,
Kalaiselvan.
You are using JDBC, so your reports are already kind of portable unless you use some vendor-specific SQL functions or features.
You could write your own data source in JasperReports (by implementing the JRDataSource interface) and provide your own layer of database independence. It shouldn't be that hard.
Each report is filled from a data source such as a database, but you knew that. Since the report is filled by fetching data from a specific database with queries against specific rows, if you want to make your .jrxml files database portable (or your .jasper files, for that matter) you will need to make your data source and SQL queries parameters that are fed into your report file from your program. It is pretty straightforward to make the data source and SQL query a parameter using iReport.
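For example, a jrxml fragment along these lines keeps the variable part of the query in a parameter supplied by the calling program (the parameter name, default value, and query are made up):

    <!-- Hypothetical fragment: the variable part of the query lives in a
         parameter that the calling program supplies at fill time. -->
    <parameter name="TABLE_NAME" class="java.lang.String">
        <defaultValueExpression><![CDATA["customers"]]></defaultValueExpression>
    </parameter>
    <queryString>
        <![CDATA[SELECT id, name FROM $P!{TABLE_NAME}]]>
    </queryString>

$P!{...} substitutes the parameter text directly into the query before it is sent to the JDBC driver, while $P{...} binds it as a normal prepared-statement parameter; the JDBC connection itself is passed in separately when the report is filled.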

Regarding Excel object

I am using ADO.NET OleDb for inserting and fetching data from an Excel workbook. I want to make the first column in the Excel sheet bold, and I want to add comments. I am currently achieving this through the Interop.Excel Application class.
I don't want to use Interop. Is there any way to achieve this through an ADO.NET query itself, or some other way? My application is a C# Windows application.
There is no way through ADO.NET, any more than there is a way to make a SQL Server column bold. ADO.NET treats Excel as a data source - formatting is something quite different and requires knowledge of the Excel spreadsheet format, such as you'd get via Interop. There are probably other libraries you can use if you search...