I have a report in Power BI that has many pages/tabs, each of which also displays a lot of data. As I didn't design this, I'm going through the report to eliminate as much as I can, and possibly splitting the report, as a lot of the data only requires refreshing once a week.
This is where my query comes in: I have information on one page that requires a refresh every two hours over a 12-hour period, one field of data that requires a daily refresh, and two more fields that only require refreshing on demand.
Is it possible to segment scheduled refreshes across a single report, or does scheduled refresh only allow the entire report to be refreshed? (e.g. sales status is hourly, outbound status is daily, and sales summary is weekly)
I'd rather avoid having to split the report, as it is very handy to have everything in one place rather than having to open two reports and view them on multiple monitors.
I'm just starting out with Power BI reports, having been shown enough to get what I need done, but I plan to delve further in; this is my first port of call to find out whether it's possible.
Thanks in advance for any responses.
Brian.
No, it's not possible.
Internally, Power BI works like a Tabular Model.
In Import mode we cannot do an incremental refresh either.
So the other option is to create a reporting layer and define denormalized reporting tables with calculated columns (e.g. Sales, Summary),
then use DirectQuery or scheduled refresh and run an ETL process for those tables.
That way you can schedule the ETL for specific tables, i.e. Sales or Summary, each on its own cadence.
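Since Power BI itself can't schedule per-table refreshes in Import mode, the scheduling lives in the ETL layer. As a hedged sketch only (plain Python with the third-party "schedule" library, not Power BI functionality; the table names and the load_table() body are placeholders for your own warehouse objects), the cadences from the question could be driven like this:

import time
import schedule  # third-party: pip install schedule

def load_table(name):
    # Placeholder for the real ETL step: truncate-and-load or merge the
    # denormalized reporting table from the source system.
    print(f"refreshing reporting table: {name}")

schedule.every(2).hours.do(load_table, "SalesStatus")             # intraday feed
schedule.every().day.at("06:00").do(load_table, "OutboundStatus")  # daily
schedule.every().monday.at("06:00").do(load_table, "SalesSummary")  # weekly

while True:
    schedule.run_pending()
    time.sleep(60)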
I was wondering if anyone had a method of running a report to see what a user is doing in Epicor, or what they are printing. We are having users report that in the middle of the night, when no one is here at the plant, 500-page reports are being printed. We are able to see in the print queue who printed what, but the report name doesn't match up with anything in our system. We would have, for example, a report called DailySales.rpt, but in the printer queue it would be something like hb986a87dthr.rpt. Just wondering if anyone else has seen this, or has a solution that would let me see what a user is printing.
It is not possible to link the print job directly to the SysTask record, because neither the print job number, the temp file name, nor the MAC address is saved in Epicor for cross-referencing. It can be approximated by comparing run times against the SysTask records.
You can create a BAQ and a BAQ report to display active and recently completed SysTask information by user. This will give you the report run, the start/end times, the user, and the current status. If you need more detailed information, such as the criteria used in the report, you can also join to the SysTaskParam table. Keep in mind that SysTaskParam is fully normalized by field name, so you may want to join multiple copies of the table, each with specific criteria, if you need a lot of information (see the sketch below). Unfortunately, for "print all pages" jobs, Epicor doesn't know how many pages the report will be until the data is instantiated and rendered in the reporting software, so you won't be able to get any estimate of the number of pages or the size.
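For illustration, here is a hedged Python/pandas sketch of that "multiple copies" pattern, pivoting the normalized name/value rows into one column per parameter (toy data; the column names SysTaskNum, ParamName, and ParamCharacter mirror the Epicor tables but should be verified against your own schema):

import pandas as pd

# Toy data standing in for the Epicor tables.
sys_task = pd.DataFrame({
    "SysTaskNum": [1, 2],
    "UserID": ["jdoe", "asmith"],
    "TaskDescription": ["DailySales", "DailySales"],
})
sys_task_param = pd.DataFrame({
    "SysTaskNum": [1, 1, 2, 2],
    "ParamName": ["StartDate", "EndDate", "StartDate", "EndDate"],
    "ParamCharacter": ["2024-01-01", "2024-01-31", "2024-02-01", "2024-02-28"],
})

# Pivot the name/value pairs into one column per parameter -- the same
# effect as joining one copy of SysTaskParam per parameter in the BAQ.
params_wide = sys_task_param.pivot(index="SysTaskNum", columns="ParamName",
                                   values="ParamCharacter").reset_index()
report_runs = sys_task.merge(params_wide, on="SysTaskNum", how="left")
print(report_runs)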
There are many strategies for mitigating the issue you described. Here are a couple:
You can use criteria within the BAQ to limit the number of records returned for a specific query
You can create a subquery criteria from BAQ parameters to return no data when abnormally open parameters are used for the report (e.g. > 30 days range). You could also use this method with time gates based on the current system time.
Retrain users
I have a JMeter project with multiple GET and POST requests and assertions on them. I use the Aggregate Results and View Results Tree listeners, but neither of these can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data rather than hourly data. I would like to get the number of successful and failed requests/assertions per hour; a bar chart would probably be the most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or the date and hour), so that each hour the name changes and the samplers appear in a different category, i.e.:
The code for such a name is:
${__time(dd:hh,)} the rest of sampler name
Such a sampler will appear in the following way in the Aggregate Report (here I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale):
Pros and cons of this approach:
Simple: you can aggregate anything by hour, minute, or any other time slice while the test is running, rather than by analysis after execution.
Not listener-dependent; it can be used with pretty much any listener or visualizer.
If you also want overall stats, you will have to sum up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler will not go completely unnoticed from a performance perspective, but I don't think it will add visible overhead to the script.
You could get the same data by properly aggregating the JTL or CSV (whichever you use) after execution, so it doesn't give you anything that isn't achievable using standard methods (see the sketch after this list).
The script needs altering to make this happen. If you have 100s of samplers, it's going to take a while. And if you want to change back...
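For comparison, the post-execution aggregation mentioned in the second-to-last point could look roughly like this (a minimal Python/pandas sketch, assuming the default CSV-format results file with a timeStamp column in epoch milliseconds and a success column; the file name is a placeholder):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.jtl")  # CSV-format JMeter results file
# Bucket each sample into the hour it ran in.
df["hour"] = pd.to_datetime(df["timeStamp"], unit="ms").dt.floor("h")
# Normalize the success flag ("true"/"false") into a boolean.
df["passed"] = df["success"].astype(str).str.lower() == "true"

per_hour = df.groupby("hour")["passed"].agg(
    passed="sum",
    failed=lambda s: int((~s).sum()))
print(per_hour)

per_hour.plot(kind="bar")  # the bar chart the question asks for
plt.show()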
You might want to use the Filter Results Tool, which has --start-offset and --end-offset parameters; with it you can "cut" your results file into the "interesting" pieces and plot them according to your requirements.
You can install the Filter Results Tool using the JMeter Plugins Manager.
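When run via the plugins' command-line runner, the invocation looks roughly like this (the jar version, file names, and offsets in seconds are placeholders, so check them against your installation):

java -jar cmdrunner-2.2.jar --tool FilterResults --input-file results.jtl --output-file first-hour.jtl --start-offset 0 --end-offset 3600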
Also be aware that, according to JMeter Best Practices, you should:
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file; you can specify the test results location via the -l command-line argument.
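For example, a typical non-GUI run that writes such a results file would be (the file names are placeholders):

jmeter -n -t plan.jmx -l results.jtl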
To get summarized results per hour, add a Generate Summary Results listener to your test plan:
Generates a summary of the test run so far to the log file and/or standard output
Set the update interval in jmeter.properties to your needs, e.g. 1 hour = 3600 seconds:
summariser.interval=3600
You will then get a per-hour summary of your requests.
You can try the JMeter Backend Listener. It has integrations with Graphite and InfluxDB. After storing the results in one of these time-series databases, you can display them in a Grafana dashboard. Grafana has its own filtering for showing results on an hourly, daily, or monthly basis, and so on.
I have a problem with a dashboard in Tableau. The dashboard contains many worksheets, and all the columns in the report are calculated. The problem is that the dashboard takes a very long time to render: the report contains approximately 2 million rows and takes about 5 minutes to generate.
Tell me, what are the solutions in this case?
Maybe I can somehow adjust the page display so that not all the records are shown at once?
To reduce the calculation time, try to exclude data you don't need with a data source filter in Tableau. You can also hide or delete unused calculated fields. Another thing you can do is remove sheets that are not used.
Here's a link: https://www.tableau.com/about/blog/2016/1/5-tips-make-your-dashboards-more-performant-48574
Steps to follow to reduce calculation time:
Extract the data, keep the connection option as Extract instead of Live, and replace the data source with the extract.
Use a user filter so that Tableau displays only that particular user's data.
I hope this helps to solve your problem.
I have one more idea for resolving this issue:
1) When the dashboard loads for the first time, use a dashboard action filter to exclude the data from your sheets on the initial load (Dashboard menu -> Actions -> Add Action -> select the sheet and the Exclude option).
2) Switch the data source from Live to Extract (select the Extract radio button).
3) Use a user filter.
I am following the other answers (use an extract, dashboard action filters...) and I want to add one point:
Drag every field used by any worksheet on the dashboard onto the Detail shelf of every worksheet on the dashboard. Tableau then loads all the needed data while loading the first worksheet and can reuse that data for the other sheets.
i.e. if a dashboard contains three worksheets (A, B, C), you drag every field used by A onto Detail of B and C, every field used by B onto Detail of A and C, and every field used by C onto Detail of A and B.
We are also having a similar issue, with 150 million rows, but let me check whether you are following these steps. They may help you; this goes back to the fundamentals of Tableau reporting.
1/ Try to make sure your data set is in star-schema format. This helps the report a lot.
2/ Try to have the tables and views in the DB expose only the columns used in Tableau. Any extra columns in the tables add to the performance problem.
3/ Make sure indexing is done properly on all the fields that are joined.
4/ In my experience the dashboard adds extra performance lag, so get as much performance tuning done on the individual sheets as possible before even going to the dashboard.
5/ If required, try to use materialized views (see the sketch after this list).
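For points 3 and 5, here is a minimal sketch of the DB-side work (assuming a PostgreSQL backend and the psycopg2 driver; the table, column, and view names are placeholders for your own schema):

import psycopg2

conn = psycopg2.connect("dbname=reporting")
with conn, conn.cursor() as cur:
    # Point 3: index the columns used in joins and filters.
    cur.execute("CREATE INDEX IF NOT EXISTS ix_sales_product "
                "ON sales (product_id)")
    # Point 5: precompute the heavy aggregation once, so Tableau reads a
    # small ready-made table instead of scanning all the rows.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS sales_by_month AS
        SELECT product_id,
               date_trunc('month', sold_at) AS month,
               sum(amount) AS total
        FROM sales
        GROUP BY product_id, date_trunc('month', sold_at)
    """)
conn.close()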
Hope this helps.
Try to capture performance metrics using the performance recorder option in Tableau.
Check the underlying DB tables and the joins present in the data source layer.
Try using optimized sets and parameters as required, and get rid of less relevant filters.
Try using data extracts with a scheduled refresh, plus a data source filter that limits the data to the business years you need.
In Crystal Reports, is there a way to get both full-set and subset charting in the report headers?
I'm working on a report from an erstwhile co-worker and I'm still trying to make things "better".
While I haven't found the solution to accruing time (see Accruing over time (non-overlapping) - technique?), I'll press on with how to use the resulting data once I retrieve it.
The report is a Global Availability report for network technologies, and part of the report is graphical:
Chart availability for different network types for the last "n" months' time.
Chart availability for each region (for each network type for "n" months' time).
She (my co-worker) had a global chart, but for each region she created a separate sub-report containing just the chart for that region. The query isn't optimal, and with the sub-reports the query is repeated each time.
Is there a way to use a single data set in one report for all five charts, forcing the four regional charts to display only their own region's data?
Additional info:
The charts are all bar charts; the design is:
y-axis: calculated availability
x-axis: grouped by network type (Switches, Trunks, "Network"), sub-grouped by month
Bad Example: (image showing the regional charts, United States & Europe, laid out side by side)
Let me see if I understand this: in your Report Header you have 5 subreports, for the 4 regional graphs and the global graph, and you want to collapse this all into 1 subreport if possible?
Yes, but you can't do it like in your image, where United States & Europe are side by side; they would have to be one per row. Also, the data source has to be formatted correctly. To do this:
Make a new subreport. Group it by the Region.
In this subreport, make your regional graph in the Group Header section.
In this subreport, also make your global graph in the Report Header section.
Insert this subreport into your main report and you should be done.
Sometimes, the only way out of the fire is through it.
After lots of unsatisfactory refactoring, I spoke with the original requestor (from years ago) and got some good information. I have yet to speak to the most recent requestor again (who didn't have any knowledge of the technical requirements the last several times).
Spoke w/ the guy who is tending a related db, and I got permission to add some functions, views, stored procedures, etc. to THAT db... within reason and after a code/perf review -- something that isn't normally conducted, so I welcome it. I WILL have the ability to do the procedural stuff through... a procedure. Written as a stand-alone, I should be able to re-use it for any of the queries against future needs.
And... Yes, I am pretty much going to have to (read "get to") re-design, and hopefully get rid of most of the sub-reports. Yeay, me.
Thanks for coming along for the ride.
I'm trying to understand CQRS to see if it can help in a reporting environment.
Problem: a CQRS-designed system is already in production, happily generating commands and events and updating the necessary query views. A new report is required. This report takes a number of parameters: start date, end date, product type, and product category.
How do I generate the aggregate views for:
A query store that will initially be empty
A report whose parameters can take very different values
Do I try and solve this using a CQRS approach, or is there a better alternative?
Thanks
If it is not reasonable to precompute all your report data into a flat view, then just don't do that; you may want to join a bunch of tables for your report instead. It's your decision what can be precomputed and what is not worth it (CPU and storage considerations).
In your particular case (StartDate, EndDate, ...), I can't see what the problem is with generating a single ViewModel table and just querying it directly with the parameters.
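For instance, a minimal sketch of that direct parameterized query (assuming a relational read store; the report_view table and its columns are hypothetical):

import sqlite3

conn = sqlite3.connect("readstore.db")
rows = conn.execute(
    """
    SELECT product_type, product_category, SUM(amount) AS total
    FROM report_view
    WHERE order_date BETWEEN ? AND ?
      AND product_type = ?
      AND product_category = ?
    GROUP BY product_type, product_category
    """,
    ("2024-01-01", "2024-03-31", "Widget", "Hardware"),
).fetchall()
print(rows)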
Figure out which events are required to gather all report data.
Query all those events, republish them to the endpoint that handles updating the new report table(s).
Wait until all events have been processed.
Put some indexes on the columns that will function as report query criteria.
Done!
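A minimal Python sketch of steps 1-3 (the event type and the projection are stand-ins for your own event store and handler infrastructure):

from dataclasses import dataclass, field

@dataclass
class OrderPlaced:  # hypothetical event type required by the report
    product_type: str
    product_category: str
    amount: float

@dataclass
class ReportProjection:
    # Denormalized totals keyed by (type, category); a real projection
    # would write to the new report table(s) instead.
    totals: dict = field(default_factory=dict)

    def handle(self, event):
        key = (event.product_type, event.product_category)
        self.totals[key] = self.totals.get(key, 0.0) + event.amount

def rebuild(projection, historical_events):
    # Steps 1-3 above: replay every relevant past event through the new
    # handler; once it has caught up, live events keep it current.
    for event in historical_events:
        projection.handle(event)

events = [OrderPlaced("Widget", "Hardware", 10.0),
          OrderPlaced("Widget", "Hardware", 5.0)]
proj = ReportProjection()
rebuild(proj, events)
print(proj.totals)  # {('Widget', 'Hardware'): 15.0}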