We have a stored procedure that takes in a set of parameters (from dimension tables) and then outputs a set of rows from which a report has to be created.
Until now this has been done in a .NET app using an ORM, but is it possible to integrate it with MicroStrategy? How do I pass the selected parameters (from a report prompt) to a stored procedure on the database and then map the results back to the report?
In the past I did a Free Form SQL (FFSQL) report in MicroStrategy on top of a PL/SQL function which returned a 'table' (Of course that was an Oracle database, more info about returning a table with a function here).
So what you are looking for is doable.
You can use the prompts in the FFSQL report as parameters for your function.
Then you have to map the columns of the returned table to the attributes and metrics returned by the FFSQL report.
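For illustration, here is a minimal sketch of what the FFSQL statement might look like over an Oracle table function; the package, function, and column names are hypothetical, and the bind-style placeholders stand where the prompts would actually be inserted through the Freeform SQL editor:

    -- Hypothetical PL/SQL table function returning the report rows.
    -- :region_prompt and :quarter_prompt mark where FFSQL prompts go.
    SELECT t.region_id,
           t.sales_amount
    FROM   TABLE(report_pkg.get_report_rows(:region_prompt, :quarter_prompt)) t

In the FFSQL mapping step you would then map region_id to an attribute and sales_amount to a metric.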
This is something I did in a very old MicroStrategy implementation to provide writeback functionality: the users were allowed to update some values in some tables and the function returned a Success or Error message.
As you can imagine this was not a standard Data Warehouse solution; it was more an ad hoc solution for an operational database.
My suggestion is to avoid solutions like this unless necessary and to prepare the tables/rows that you need in advance. Of course, in your case you already have the procedure ready, so you just have to figure out how to combine it with MicroStrategy.
Some useful readings:
TN37783: Instructions to use stored procedures within Freeform SQL reports in MicroStrategy 9.x against different databases
Using Prompts in Freeform SQL Reports
My SSIS package has an OLE DB source which joins Oracle and SQL Server to get source data and loads it into a SQL Server OLE DB destination. Earlier we were using a linked server for this purpose, but we cannot use a linked server anymore.
So I am taking the data from SQL Server and want to feed it into the IN clause of the Oracle query, which I am keeping as a SQL command in the OLE DB source.
I tried parsing an Object-type variable from SQL Server and putting it into the IN clause of the Oracle query in the OLE DB source, but I get an error that Oracle cannot have more than 1,000 literals in an IN list. So basically I think I have to do something like this:
select * from oracle.db where id in (select id from sqlserver.db).
Since I cannot use a linked server, I was wondering whether I could have a temp table which can be used throughout the package.
I tried another approach using a Merge Join in SSIS, but my source data set is really large and the merge join is returning fewer rows than expected. I am badly stuck at this point. I have tried a number of things and nothing seems to be working.
Can someone please help? Any help will be greatly appreciated.
A couple of options to try.
Lookup:
My first instinct was a Lookup Task, but that might not be a great solution depending on the size of your data sets, since all of the records from both tables have to be pulled over the wire and stored in memory on the SSIS server. But if you were able to pull off a Merge Join, then a Lookup should also work, though it might be slow.
Set an OLE DB Source to pull the Oracle data, without the WHERE clause.
Set a Lookup to pull the id column from your SQL Server table.
On the General tab of the Lookup, under Specify how to handle rows with no matching entries, select Redirect rows to no-match output.
The output of the Lookup will just be the Oracle rows that found a matching row in your SQL Server query.
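As a rough sketch (table and column names are made up), the two queries could look like this:

    -- OLE DB Source (Oracle connection): pull the candidate rows, no IN filter
    SELECT id, col1, col2
    FROM   ora_schema.ora_table

    -- Lookup (SQL Server connection): just the join key
    SELECT DISTINCT id
    FROM   dbo.sql_table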
Working Table on the Oracle server:
If you have the option of creating a table in the Oracle database, you could create a Data Flow Task to pipe the results of your SQL Server query into a working table on the Oracle box. Then, in a subsequent Data Flow, just construct your Oracle query to use that working table as a filter.
Probably follow that up with an Execute SQL Task to truncate that working table.
Although this requires write access to Oracle, it has the advantage of off-loading the heavy lifting of the query to the database machine, and only pulling the rows you care about over the wire.
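A minimal sketch of the second Data Flow's source query and the cleanup step, assuming a hypothetical working table named ssis_id_work:

    -- Oracle source query in the second Data Flow, filtering on the
    -- ids the first Data Flow loaded into the working table:
    SELECT o.*
    FROM   ora_schema.ora_table o
    WHERE  o.id IN (SELECT w.id FROM ora_schema.ssis_id_work w);

    -- Follow-up Execute SQL Task:
    TRUNCATE TABLE ora_schema.ssis_id_work;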
We are building a dashboard with many reports. The relationships between tables are defined in MicroStrategy. We found that MicroStrategy is not using different SQL for different reports. It is pulling all the data from the database (46 million rows) and then applying post-processing on that data to generate the individual reports.
This takes a lot of time and does not use the query engine of the database.
How can we configure MicroStrategy so that it generates a different query for each report and collects only the data required for that particular report, NOT all the data?
One way to do that is to use Freeform SQL, but we want to keep the drag-and-drop kind of reports.
How can we achieve this?
We are using MicroStrategy 10.1.
From your description it sounds like MicroStrategy is first pulling all data (46 million records) from the DB using its SQL Engine and then applying filtering after this.
If your reports have been created in MicroStrategy Developer (or Web) using attribute filters, then each report should correctly execute SQL with explicit WHERE conditions that translate to those attribute filters. E.g. if you have a report with an attribute titled 'Fruit' and you want to display only apples, then you would have an attribute filter on that report that only displays results where 'Fruit' = 'Apple'. This translates to a WHERE condition when the SQL Engine executes the report, roughly as sketched below. However, if you apply a view filter to the report, then the SQL Engine will first fetch everything and then filter the entire dataset in the Analytical Engine, which is slow, especially if there are multiple reports running on the dashboard.
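For instance, with an attribute filter the generated SQL should look roughly like this (table and column names are illustrative, not from your project):

    SELECT   a.fruit_desc,
             SUM(f.sales_amt) AS sales
    FROM     fact_sales f
    JOIN     lu_fruit  a ON a.fruit_id = f.fruit_id
    WHERE    a.fruit_desc = 'Apple'
    GROUP BY a.fruit_desc

With a view filter, by contrast, the same query would run without the WHERE clause and the 'Apple' filtering would happen afterwards in the Analytical Engine.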
It's important to know how you are bringing the dataset into the dashboard: is it using a cube as a dataset, or a report, or something else? There are a few ways of achieving the performance you are looking for; here are a couple:
Option 1: Develop each report in MicroStrategy Developer using attribute filters as desired. This requires that you have all your attribute relationships defined correctly.
Option 2: Have all your 46 million records pulled into a cube. Use the cube as the dataset for the dashboard and then use view filters however you want on the various reports you place on the dashboard.
Option 1 + 2: You can combine both of the above options if you wish. Store the entire dataset in a cube, define several reports (normal reports, not cube reports) that can dynamically source from the cube, using filters as required, and then add these reports to your dashboard.
These are the things I would do as first steps:
Check that your attributes and attribute relationships are defined and working
Create a test report and try to filter based on one of these attributes
Try to create a few reports, each with different filter conditions based on one of the attributes
Put these reports into the dashboard and see whether each one generates different SQL statements.
This sounds like you have either:
built the reports using view filters (which apply filtering after query execution) rather than applying the filter in the generated SQL, or
you don't have attribute relationships defined, such that the system doesn't think the filters you've defined are relevant to the fact tables containing the data.
Are you using cubes? I am assuming that is what you mean by executing the query once.
You need to replace the individual reports with new reports (regular reports, not the ones made from cubes). That's the only way.
I am trying to run RAWSQL_REAL("select sum(amount_us) from gbsa_dpo_itg.Fact_tblHistoryData_new where qtr_data='Q42014'") in a calculated field and I am getting the error message ERROR 2133: Aggregate function calls cannot contain subqueries.
I am using Tableau 8.3.3 with a live connection to an HP Vertica database.
When I run the same query in custom SQL it works fine.
Please help with this.
Thanks in advance.
Read the manual about these functions; look under Reference > Functions.
You don't generally pass an entire SQL string to be executed in isolation. Instead, these functions are useful for writing expressions or calling non-standard functions that your server may provide, which are embedded within the SQL that Tableau generates. So first learn to use Tableau to get the effect you want, and then resort to Raw SQL functions in the rare case where you need to access some database-server-specific feature.
There is no reason that you would need Raw SQL to get the information above using Tableau. You could put amount_us on the row shelf and qtr_data on the filter shelf, and Tableau would generate a similar query.
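If you really do need Raw SQL here, the aggregate variant RAWSQLAGG_REAL lets you embed just the aggregate expression rather than a whole SELECT statement; a sketch (combine it with qtr_data on the filter shelf):

    // Calculated field: embed only the aggregate expression;
    // %1 is substituted with the referenced field.
    RAWSQLAGG_REAL("SUM(%1)", [amount_us])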
If you are doing this to combine data from multiple queries, first learn about calculated fields and data blending.
I have four parameters on my report. Three of them are required for the underlying stored procedure data source, but the fourth parameter is just used to show/hide items on the report.
If the user changes the value for that fourth parameter, is there a way to refresh the report using the existing data without running the stored procedure again? The result set won't change, only the rows that are to be displayed.
Reporting Services 2008 seems to treat each combination of report parameters as a unique set, even if some of them are internal to the report only, and not related to the stored procedure. Therefore, aside from using report caching, there is no way to prevent report server from making a round trip to the database, even if only the internal parameter changes. You basically have two options:
Turn on report caching in report server and run all combinations of the four parameters, so that the user will be accessing report server's cache when she runs any report. This avoids making a round trip to the database, but only for the parameter values you've already tried.
Write your underlying stored procedure with caching behavior so that it writes its results to a database table. Whenever the stored procedure is run, have it first check the table to see if the results for the current set of parameter values are already stored in the cache table, and if so, return those rows to report server. This still requires a round trip, but it is faster than running the procedure again.
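A minimal T-SQL sketch of that caching pattern, with hypothetical table, column, and parameter names:

    -- Sketch only: dbo.ReportCache and dbo.SourceTable are assumed names.
    CREATE PROCEDURE dbo.GetReportData
        @Param1 INT, @Param2 INT, @Param3 INT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Populate the cache only if this parameter combination is new.
        IF NOT EXISTS (SELECT 1 FROM dbo.ReportCache
                       WHERE Param1 = @Param1 AND Param2 = @Param2 AND Param3 = @Param3)
        BEGIN
            INSERT INTO dbo.ReportCache (Param1, Param2, Param3, Col1, Col2)
            SELECT @Param1, @Param2, @Param3, t.Col1, t.Col2
            FROM   dbo.SourceTable t
            WHERE  t.Key1 = @Param1 AND t.Key2 = @Param2 AND t.Key3 = @Param3;
        END

        -- Always return the cached rows for this parameter set.
        SELECT Col1, Col2
        FROM   dbo.ReportCache
        WHERE  Param1 = @Param1 AND Param2 = @Param2 AND Param3 = @Param3;
    END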
I'm really at a loss as to how to proceed.
I have a very large database, and the table I'm accessing has approx. 600,000 records. This database is accessed using an accounting application, which provides the report with the SQL query by which this report accesses the database.
My report has a linked subreport which has restrictions that are placed in the report header. When this report is run with a very basic query, the average time to refresh is 36 minutes. With two more items added to the query, the report takes 2.5 hours.
Here is what I've tried:
cleaned up the report, leaving only the items absolutely necessary - no difference
removed most formulas (removing the remaining formulas makes no time difference)
tried editing the SQL query - wasn't allowed because of the accounting application
tried flipping subreport and main report - didn't work
added other groupings - no difference
removed groupings - no difference
checked all the servers for lack of temp disc space - no issue
tried "on demand" subreport - no change
checked Parameters (discrete vs. range) and it is as it should be
tried bursting indexes, grouping on server, etc. - no difference
the report requires 2 passes. I've tried getting it down to one pass unsuccessfully.
There must be something I'm missing.
There do not appear to be any other modifications I can make to the report using regular Crystal functions. Is there any way to speed up the data access without having to go through all 600,000 records? The SQL query that accesses this data is long and makes many requests. It is not something I can change.
Can I add something (formula?) that nullifies these requests? I'm reaching now...
A couple of things we have had success with: adding indexes to the databases, and, instead of importing tables into the report, writing a stored procedure to retrieve the desired results.
If indexes and stored procedures don't get you where you need to be, you have reached the "denormalise until it works" part of life with a database. You might want to look at creating an MI database with tables optimized for your reporting needs, plus some data transformation scripts that extract the data from production into your MI database. Depending on your platform, Oracle / MS have tools to help you do this.
We use Crystal Reports with a billing system, and we had queries in the database that took over 1.5 hours to complete. That doesn't even take into account the rendering/formatting of the reports.
We created Materialized Views and force the client to refresh them daily. A materialized view is basically a database view that holds the returned dataset. The dataset is not refreshed unless you explicitly tell it to refresh.
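For example, in Oracle (assuming that's your platform; the syntax differs elsewhere) the view and its refresh might look like this, with a made-up name and query:

    CREATE MATERIALIZED VIEW mv_billing_report
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT customer_id, SUM(amount) AS total_amount
    FROM   billing_detail
    GROUP BY customer_id;

    -- Refreshed explicitly, e.g. from a nightly job:
    BEGIN
        DBMS_MVIEW.REFRESH('MV_BILLING_REPORT');
    END;
    /

The report then selects from the materialized view instead of the base tables.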
Do you know what the SQL query is? If so, you can move the report outside the accounting application and paste the query directly into the Command in the Database Expert. I've had to do this in a couple of cases with another application I work with.