Sometimes, when I am creating reports, MicroStrategy takes the wrong table for a join (when the report uses fields that appear in more than one table). For example, if I have fact_table and fact_table_month, and they have the same fields, MicroStrategy may take fact_table_month for the join when I need it to take fact_table.
I know about the possibility of creating dummy metrics and using them in the report, and I know about manually changing the logical size of tables, but I am looking for an official, proper way to solve this problem in MSTR.
How can I force MicroStrategy to use the table I want in a join? How can I tell MSTR: OK, for this report use one table, and for that report use another?
The MicroStrategy SQL Engine is dimensionally aware of the structure of your hierarchies, so if you have defined your attribute relationships, MicroStrategy should select the right fact table.
If your fact_table_month and fact_table have the same attributes and metrics, then the two tables have the same grain, so they are the same to MicroStrategy. If metric A in fact_table_month is not a monthly aggregation of metric A in fact_table, then either the name of fact_table_month is wrong or you should model them as two different facts and metrics.
In the past, when I had a similar problem (the daily table was populated from one system and the monthly table from another), I solved it using different metrics; unfortunately, this didn't allow me to drill down easily.
Among the "tricks" to force the SQL Engine to use a specific table (beside logical size and a specific table), you can also add a specific attribute to report objects: an attribute present only in the lowest level of aggregation it's enough to hit the right table without additional dummy objects.
The best way always depends on your project and reporting requirements.
If you want precise control over your report, you can go for a "Freeform SQL" report, which requires you to write the SQL manually and lets you use whatever joins you want.
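A minimal sketch of what such a Freeform SQL statement could look like, with hypothetical names; since you write the SQL yourself, you decide exactly which fact table is joined:

    -- Monthly revenue forced to come from the daily fact table:
    SELECT m.month_id, SUM(f.amount) AS revenue
    FROM fact_table f
    JOIN lu_day d ON f.day_id = d.day_id
    JOIN lu_month m ON d.month_id = m.month_id
    GROUP BY m.month_id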
Another way is to use the lowest-level attribute in the report objects pane. Make sure that this attribute is stored at the specific level in the fact table you want to join.
You can also take advantage of metric dimensionality (i.e., Filtering = None and Grouping = None) whenever you need to join a specific fact table.
When you finish a Freeform SQL query in MicroStrategy, the next step is to map the columns.
Is there any way to do this automatically, or at least generate the list of columns with their names?
Thanks!
Sadly, this isn't possible. You will have to map all columns manually.
While this isn't possible with Freeform SQL reporting specifically, MicroStrategy Data Import gives you the ability to create Data Import cubes. These cubes can be configured as live connections, meaning they execute against the selected data source every time they are used, rather than being the typical snapshot cube. A Data Import from a database can be sourced from a database query. This effectively allows you to write your own SQL, with the end result being a report whose columns you did not have to map manually.
I want to periodically export data from db2 and load it in another database for analysis.
In order to do this, I would need to know which rows have been inserted/updated since the last time I've exported things from a given table.
A simple solution would probably be to add a timestamp to every table and use that as a reference, but I don't have such a timestamp at the moment, and I would like to avoid adding one if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called
ROW CHANGE TIMESTAMP
This is managed by Db2 and can be defined as HIDDEN, so existing SELECT * queries will not retrieve the new column (which would otherwise cause extra costs).
Check out the Db2 CREATE TABLE documentation
This functionality was originally added for optimistic locking but can be used for such situations as well.
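A minimal sketch, assuming a hypothetical table named SALES on Db2 for LUW:

    -- Add a Db2-maintained, hidden row change timestamp column:
    ALTER TABLE sales
      ADD COLUMN rct TIMESTAMP NOT NULL
      GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
      IMPLICITLY HIDDEN;

    -- Export only rows inserted or updated since the last extract:
    SELECT *
    FROM sales
    WHERE ROW CHANGE TIMESTAMP FOR sales > '2024-01-01-00.00.00';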
There is a similar concept for Db2 z/OS; you will have to check that out yourself, as I have not tried it.
Of course, there are other ways to solve this, such as replication.
That is not possible if you do not have a timestamp column. With a timestamp, you can know which rows are new or modified.
You can also use the Time Travel feature in order to get the new values, but that also implies a timestamp column.
Another option is to put the tables in append mode and then fetch the rows after a given one. However, this approach is not reliable after a REORG, and it affects performance and space utilisation.
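For reference, a sketch of the append-mode option, assuming a hypothetical table named SALES:

    -- Put the table in append mode so new rows are always added at the end:
    ALTER TABLE sales APPEND ON;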
One possible option is to use SQL replication, but that needs extra tables for staging.
Finally, another option is to read the logs with the db2ReadLog API, but that implies custom development. Just applying the archived logs to the new database is also possible; however, the database will remain in roll-forward pending state.
We are building a dashboard with many reports. The relationships between tables are defined in MicroStrategy. We found that MicroStrategy is not using different SQL for different reports: it pulls all the data from the database (46 million rows) and then applies post-processing on that data to generate the individual reports.
This takes a lot of time, and it does not use the database's query engine.
How can we configure MicroStrategy so that it generates a different query for each report and collects only the data required for that particular report, NOT all the data?
One way to do this is to use Freeform SQL, but we want to keep drag-and-drop style reports.
How can we achieve this?
We are using MicroStrategy 10.1.
From your description, it sounds like MicroStrategy is first pulling all the data (46 million records) from the DB using its SQL Engine and then applying filtering afterwards.
If your reports have been created in MicroStrategy Developer (or Web) using attribute filters, then each report should execute SQL with explicit WHERE conditions that translate to those attribute filters. For example, if you have a report with an attribute called 'Fruit' and you want to display only apples, you would put an attribute filter on that report that only displays results where 'Fruit' = 'Apple'; this becomes a WHERE condition in the generated SQL when the report is executed. However, if you are applying a view filter to the report, the SQL Engine first obtains everything and then filters the entire dataset in the Analytical Engine, which is slow, especially when multiple reports run on the dashboard.
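To make the difference concrete, here is a sketch of the SQL each approach amounts to; all table and column names are hypothetical:

    -- Attribute filter: the condition is pushed into the generated SQL,
    -- so the database returns only the matching rows:
    SELECT a.fruit_name, SUM(f.sales_amt)
    FROM fact_sales f
    JOIN lu_fruit a ON f.fruit_id = a.fruit_id
    WHERE a.fruit_name = 'Apple'
    GROUP BY a.fruit_name

    -- View filter: no WHERE condition is generated; every row comes back
    -- and 'Apple' is filtered afterwards in the Analytical Engine:
    SELECT a.fruit_name, SUM(f.sales_amt)
    FROM fact_sales f
    JOIN lu_fruit a ON f.fruit_id = a.fruit_id
    GROUP BY a.fruit_name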
It's important to know how you are bringing the dataset into the dashboard: is it using a cube as a dataset, or a report, or something else? There are a few ways of achieving the performance you are looking for; here are a couple:
Option 1: Develop each report in MicroStrategy Developer using attribute filters as desired. This would require that you have all your attribute relationships defined correctly.
Option 2: Have all your 46 million records pulled into a cube. Use the cube as the dataset for the dashboard, and then use view filters however you want on the various reports you place on the dashboard.
Option 1 + 2: You can combine both of the above. Store the entire dataset in a cube, define several reports (normal reports, not cube reports) that dynamically source from the cube, using filters as required, and then add these reports to your dashboard.
These are the things I would do as first steps:
Check that your attributes and attribute relationships are defined and working
Create a test report and try to filter based on one of these attributes
Try to create a few reports, each with different filter conditions based on one of the attributes
Put these reports into the dashboard and see whether each one generates different SQL statements.
This sounds like you have either:
built the reports using view filters (which apply filtering after query execution) rather than applying the filter in the generated SQL, or
not defined attribute relationships, so that the system doesn't think the filters you've defined are relevant to the fact tables containing the data.
Are you using cubes? I assume that is what you mean by executing the query once.
You need to replace the individual reports with new regular reports, not ones made out of cubes. That's the only way.
In my current environment I have a HUGE list of tables to scroll down through and finding that specific table I need to double-click is tedious (almost like trying to find a needle in a haystack).
Is there a way to open a specific table upon connecting to a database?
Alternatively, is there a way to create "shortcuts" (something like "favorites") to certain tables, so that they are easily accessible/findable upon SQL Developer startup?
I don't believe it is possible to set up a set of "favorite" tables. However, if you right-click on Tables in your connection, there is an Apply Filter option. That lets you specify criteria to filter the set of tables that are displayed, based on the table name (for example, a pattern like INV% to show only tables whose names start with INV) or on other attributes like the last DDL time. That's generally the easiest way to reduce the list to a reasonable number of tables.
We create several Crystal Reports based on SQL Server, usually 2005 or 2008. Broadly, there are two kinds of reports:
a) tabular reports, which show some data in a table (for example, an invoice list)
b) document layouts, which show data in a specific format, usually from one or two main tables and several secondary tables (for example, an invoice)
We sometimes use tables directly in Crystal, or we create a procedure in SQL Server and then use that procedure. One invoice usually refers to around 10-12 tables, most of them linked to the primary invoice table using left outer joins.
Which option is better: using tables in Crystal (and letting Crystal create and run the SQL query), or creating a query and then using that query in Crystal? Which one will give better performance?
There will be no difference in performance between a query generated by the 'Database Expert' and the same SQL added to a Command. One caveat: ensure that the record-selection formula can be parsed and sent to the database (a filter applied WhileReadingRecords will definitely be less efficient than a pure-SQL one).
Reasons to prefer the 'Database Expert':
prior to v2008, Command objects didn't support multivalued parameters
easier to manage (somewhat subjective)
Reasons to prefer a Command:
you can add hints
you have more finely-grained control over the SQL (e.g., in-line views, CTEs, more-complex JOINs, subselects); see the sketch below
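For example, a sketch of a Command using a CTE (all names hypothetical), which the Database Expert cannot generate on its own:

    -- Latest invoice per customer, written directly as Command SQL:
    WITH latest_inv AS (
        SELECT customer_id, MAX(invoice_date) AS last_dt
        FROM invoice
        GROUP BY customer_id
    )
    SELECT i.*
    FROM invoice i
    JOIN latest_inv l
      ON i.customer_id = l.customer_id
     AND i.invoice_date = l.last_dt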
Personally, I try to avoid stored procedures, as they offer minimal performance benefits but require a more significant investment in development and maintenance.
In the end, there is no substitute for testing: try your query both ways and measure the results.
Coding it yourself will almost invariably run faster; after all, you know what your data looks like, and Crystal doesn't. Also, there are things you can do in manual queries (windowing functions, for example) that Crystal can't.
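For instance, a sketch (hypothetical names) using a windowing function, supported by SQL Server 2005+ but not something Crystal will generate for you:

    -- Number each customer's invoices, newest first, inside the query:
    SELECT invoice_id, customer_id, amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY invoice_date DESC) AS rn
    FROM invoice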
Crystal has a tendency to do some crazy stuff behind the scenes. You can use "Show SQL Query" under the Database menu to see what it creates. I find it easier to write the query in SQL, as I can optimize it myself much more easily. I also prefer to do any calculated/formula fields in SQL and just use Crystal as a display interface. If you do put logic in Crystal, remember that it runs that logic for every record returned; so if there are conditions that exclude a record from a formula, put those first to limit the time spent in the calculation.