Why is Crystal Reports Query so slow?

I have many Crystal Reports against the same database. Some execute quickly given the same date parameters; many of the fields are the same, as are the tables they access. One of my reports that used to run quickly is now running very slowly, and I can see it looking through all the records - shown at the bottom as "0 of 100000" until it finds records. I have no idea what I may have changed to make it do this. Some reports still run fast and some do not, and these findings are consistent for the reports in question. Does anyone know what setting might be causing this?
I have tried looking for any subtle differences between them and I cannot see anything. Many of them were cloned from the original (which still runs fast).
In my CR book, the performance section states that if the WHERE clause cannot be translated it will be ignored and all records will be processed instead - which is what this looks like - though I see a valid WHERE clause when I check it in the report.
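To illustrate what the book describes (table and field names here are made up), compare the SQL Crystal generates (Database > Show SQL Query). A selection formula it can translate is pushed to the server:

SELECT "Orders"."OrderID", "Orders"."OrderDate"
FROM "Orders"
WHERE "Orders"."OrderDate" >= {ts '2019-01-01 00:00:00'}

One it cannot translate loses the WHERE clause entirely, so every row is fetched and filtered on the client - exactly the "0 of 100000" crawl described above:

SELECT "Orders"."OrderID", "Orders"."OrderDate"
FROM "Orders"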
Use Indexes Or Server For Speed is checked. All other settings in Report Options are identical.
Thanks

You can do some troubleshooting:
Try running your query directly on the database and see how long it takes.
Is there any business logic added in your report?
Also try putting the same query in a fresh report and see if it takes a similar time.
Also try debugging your application to see if some part of your code is making the report slow.
Are you running it against a local database or on a server?
Also, if you can share your query, I can take a look.
Let me know if you need more help.

Related

Unit tests producing different results when using PostgreSQL

I've been working on a module that works pretty well with MySQL, but when I run the unit tests under PostgreSQL (using Travis) I get an error.
The module itself is here: https://github.com/silvercommerce/taxable-currency
An example failed build is here: https://travis-ci.org/silvercommerce/taxable-currency/jobs/546838724
I don't have a huge amount of experience with PostgreSQL, and I am not really sure why this might be happening. The only thing I could think of that might cause this is that I am manually setting the IDs in my fixtures file, and maybe PostgreSQL does not support this?
If this is not the case, does anyone have an idea what might be causing this issue?
Edit: I have looked into this again and the errors appear to be caused by this assertion, which should be finding the Tax Rate "vat" but instead finds the Tax Rate "reduced".
I am guessing there is an issue in my logic that is causing the incorrect rate to be returned, though I am unsure why...
In the end it appears that Postgres has different default sorting to MySQL (https://www.postgresql.org/docs/9.1/queries-order.html). The line of interest is:
The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on
In the end I didn't really need to test a list with multiple items, so instead I just removed the additional items.
If you are working on something that needs to support MySQL and Postgres though, you might need to consider defining a consistent sort order as part of your query.
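As a sketch (table and column names are illustrative), instead of relying on whatever order the engine happens to return:

SELECT ID, Title, Rate FROM TaxRate;

make the order explicit so MySQL and PostgreSQL agree:

SELECT ID, Title, Rate FROM TaxRate ORDER BY ID;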

Query log equivalent for Progress/OpenEdge

Short story: a report running against a Progress database (OpenEdge Release 10.1C03) takes hours to complete. I suspect that it does not take advantage of the existing indexes. I would like to understand how it scans the data so that I can try to add an index that will make it run faster.
Source code of the report is not available. The code is native Progress 4GL, not SQL.
If it were an SQL database I would dump the SQL queries and work from that. With 4GL I did not find any such functionality. Is it possible to somehow peek at what gets executed at the low level?
What else can be done if there is no source code?
Thanks!
There are several things you can do:
If I recall correctly 10.1C should have the _usertablestat and _userindexstat virtual system tables available. These allow you to observe, at runtime, what tables and indexes are being accessed by a particular session. You can either write your own 4GL program to query them or you can use the screens in PROMON, R&D, 3 "Other Displays", 5 "I/O Operations by User by Table" and 6 "I/O Operations by User by Index". That will show you what tables and indexes are actually in use and how much use they are getting. If the observed data seems wrong it will probably give you a clue. (If the VSTs are missing it might be because the db was upgraded from an older version -- add them with proutil dbname -C updatevsts.)
You could also use the session startup parameters -clientlog "filename" and -logentrytypes QryInfo to obtain more detailed information about the queries being executed.
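As a sketch, those parameters go on the client session's startup command line; the executable name, database name, and paths here are placeholders:

prowin32 -db mydb -p report.p -clientlog c:\temp\report.log -logentrytypes QryInfo

The resulting log should then show how the session's queries are being resolved, including index selection.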
Keep in mind that Progress is not SQL. Unlike most SQL databases, the 4GL uses a static, compile-time optimizer. Index selection happens when the code is compiled, so unless you can recompile (and since you have no source, that seems unlikely) you won't be able to improve things by adding missing indexes. You might, however, at least be able to show the person who does have the source where the problem is.
Another tool that can help is the profiler. This will identify where in the code the time is being spent. That can also be good information to provide to the original vendor if they need help finding the problem. For more information on the profiler: http://dbappraise.com/ppt/profiler.pptx

Out of memory exception for straightforward report

I'm trying to run an SSRS report. It's a straightforward report, just to render data from a table which has around 80K records.
No aggregation or data processing is done in the report. There are around 50 columns along with 19 report parameters; I just have to display those 50 columns in the report (no pivot).
Usually it takes around 5 minutes to render this report on our development server (off-peak hours). The same is true of our production server, but there users get "Out of memory" exceptions a lot, and report parameter criteria are not applied (those are the complaints I get from users).
I'm able to filter the criteria locally without any problem, although it takes a long time to render.
Why does it take such a long time to render the report, even though the report is straightforward?
The report runs fine when I hit F5 on VS 2008 but from time to time I get out of memory exceptions when I hit the "Preview" tab.
Some of the column names have a "#" character. If I include such columns in the report, an "out of memory" exception is thrown (especially in Preview mode). Is there truth to this - does SSRS not like column names with "#"? E.g. my column name was "KLN#".
I have created a nonclustered index on the table but that didn't help me much.
What's the difference between running the report in Preview mode vs hitting F5 in VS 2008? It's fine when I hit F5, even though it takes 5 minutes, but Preview mode has the problem.
There isn't much room for redesign (since it's a straightforward report); at most I could remove some of the report parameters.
Any suggestion would be appreciated.
In addition to the already posted answers and regarding the problems with the preview in the Report Designer or Report Manager there is another possible solution: avoid too much data on the first report page!
This can be done by paginating into small record sets, e.g. by custom groups with page breaks, sometimes automatically (see the answer of done_merson), or by adding a simple cover page.
These solutions are especially helpful in the development phase and if you plan to render the report results to Excel or PDF anyway.
I had a similar case of out-of-memory exceptions and never-returning reports with a simple report whose dataset contained about 70k records.
The query was executed in about 1-2 minutes, but neither the Report Designer nor our development SSRS 2008R2 server (Report Manager) could show the resulting report preview. Finally I suspected that the HTML preview was the bottleneck and avoided it by adding a cover page with a simple textbox. The next report execution took about 2 minutes and successfully showed the HTML preview with the cover page. Rendering the complete result to Excel only took another 30 seconds.
Hopefully this will help others, since this page is still one of the top posts if you search for SSRS out of memory exceptions.
Why does it take such a long time to render...?
I have created a Nonclustered index on the table but that didn't help me much.
Because (AFAIK) SSRS constructs an in-memory model of the report before rendering. Note that SSRS takes three steps to create a report:
Retrieve the data.
Create an internal model by combining the report and the data.
Render the report to the appropriate format (preview, html, xls, etc)
You can check the ExecutionLog2 view to see how much time each step takes. Step 1 is probably already reasonably fast (seconds), so the added index is not tackling the bottleneck. Steps 2 and 3 are probably taking a lot of time and requiring a lot of RAM.
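A minimal sketch against the ReportServer catalog database (standard view and columns in SSRS 2008; the path filter is a placeholder):

SELECT ReportPath,
       TimeDataRetrieval, -- step 1, in milliseconds
       TimeProcessing,    -- step 2
       TimeRendering,     -- step 3
       [RowCount]
FROM ReportServer.dbo.ExecutionLog2
WHERE ReportPath LIKE '%YourReport%'
ORDER BY TimeStart DESC;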
SSRS doesn't like column names with "#"? My column name was KLN#.
As far as I know this shouldn't be a problem. Removing that column was more likely just enough to make the report runnable again.
There isn't much room for redesign (since it's a straightforward report); at most I could remove some of the report parameters.
SSRS is just not the right tool for this. As such, there is no real "solution" for your problem, only alternatives and workarounds.
Workarounds:
As @glh mentioned in his answer, making more RAM available for SSRS may "help".
Requiring the user to filter the data with a parameter (i.e. don't allow the user to select all those rows, only the ones they need; see the sketch after this list).
Schedule the report at a quiet moment (when there's enough RAM available) and cache the report.
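For the parameter workaround, the idea is that the dataset query filters server-side so SSRS never has to model the full table; a minimal sketch with hypothetical names:

SELECT Col1, Col2, Col3 -- ...and the rest of the 50 columns
FROM dbo.BigTable
WHERE Region = @Region          -- @Region mapped to a report parameter
  AND CreatedDate >= @StartDate -- only the needed rows are retrieved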
Alternatives:
Create a small custom app that reads from the database and outputs an Excel.
Use SSIS, which (I thought) is better suited for this kind of task (data transformation and migration).
Rethink your setup. You haven't mentioned the context of your report, but perhaps you have an XY Problem. Perhaps your users want the entire report but only need a few key rows, or perhaps they only use it as a backup mechanism (for which there's better alternatives), or...
Try increasing your RAM; see this post for a similar error:
Need SSRS matrix to show more than 400k records
We just had a similar situation and set the "Keep together on one page if possible" option in Tablix Properties / General / Page break options to off and it worked fine.

SSRS report VERY SLOW in prod but SQL query runs FAST

I've spent hours troubleshooting this and I need some fresh perspective . . .
We have a relatively simple report set up in SSRS: a simple matrix with columns across the top and data points going down. The SQL query behind the report is of "medium" complexity - it has some subqueries and several joins, but nothing really crazy.
The report has worked fine for months and recently became REALLY slow - like 15-20 minutes to generate the report. I can copy-and-paste the SQL query from the Report Designer into SQL Mgmt Studio, replace the necessary variables, and it returns results in less than 2 seconds. I even went so far as to use SQL Profiler to get the exact query that SSRS is executing and pasted that into Mgmt Studio; still the same thing, sub-second results. The parameters and date ranges specified don't make any difference: I can set parameters to return a small dataset (< 100 rows) or a humongous one (> 10,000 rows) and get the same results - super-fast in Mgmt Studio but 20 minutes to generate the SSRS report.
Troubleshooting I've attempted so far:
Deleted and re-deployed the report in SSRS.
Tested in the Visual Studio IDE on multiple machines and on the SSRS server; same speed (~20 minutes) in both places.
Used SQL Profiler to monitor the SPID executing the report, captured all SQL statements being executed, and tried them individually (and together) in Mgmt Studio - they run fast there (< 2 seconds).
Monitored server performance during report execution. The processor is pretty hammered during the 20-minute report generation; disk I/O is slightly above baseline.
Check the execution plans for both to ensure that a combination of parameter sniffing and/or differences in set_options haven't generated two separate execution plans.
This is a scenario I've come across when executing a query from ADO.Net and from SSMS. The problem occurred when the use of different options created different execution plans. SQL Server makes use of the parameter value passed in to attempt to further optimise the execution plan generated. I found that different parameter values were used for each of the generated execution plans, resulting in both an optimal and sub-optimal plan. I can't find my original queries for checking this at the moment but a quick search reveals this article relating to the same issue.
http://www.sqlservercentral.com/blogs/sqlservernotesfromthefield/2011/10/25/multiple-query-plans-for-the-same-query_3F00_/
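To see whether the cache really holds several plans for the same statement, you can compare the set_options attribute of the cached plans; a sketch using the standard DMVs (the LIKE filter is a placeholder):

SELECT st.text, pa.value AS set_options, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%YourQueryText%';

Two rows with the same text but different set_options values confirm that SSRS and SSMS are compiling separate plans.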
If you're using SQL Server 2008 there's also an alternative provided via query hint called "OPTIMIZE FOR UNKNOWN" which essentially disables parameter sniffing. Below is a link to an article that assisted my original research into this feature.
http://blogs.msdn.com/b/sqlprogrammability/archive/2008/11/26/optimize-for-unknown-a-little-known-sql-server-2008-feature.aspx
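Applied to the report's query it looks like this (the query body is illustrative; only the OPTION clause matters):

SELECT o.OrderID, o.Amount
FROM dbo.Orders AS o
WHERE o.OrderDate BETWEEN @StartDate AND @EndDate
OPTION (OPTIMIZE FOR UNKNOWN); -- plan built from average density, not sniffed values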
An alternative to the above for versions earlier than 2008 would be to store the parameter value in a local variable within the procedure. This would behave in the same way as the query hint above. This tip comes from the article below (in the edit).
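Sketched with hypothetical names, the local-variable version of such a procedure would be:

CREATE PROCEDURE dbo.GetReportData
    @StartDate datetime,
    @EndDate datetime
AS
BEGIN
    -- Copy the parameters into locals; the optimizer cannot sniff local
    -- variables, so it builds a generic plan, much like OPTIMIZE FOR UNKNOWN.
    DECLARE @LocalStart datetime, @LocalEnd datetime;
    SET @LocalStart = @StartDate;
    SET @LocalEnd = @EndDate;

    SELECT o.OrderID, o.Amount
    FROM dbo.Orders AS o
    WHERE o.OrderDate BETWEEN @LocalStart AND @LocalEnd;
END;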
Edit
A little more searching has unearthed an article with a very in-depth analysis of the subject in case it's of any use, link below.
http://www.sommarskog.se/query-plan-mysteries.html
This issue has been a problem for us as well. We are running SSRS reports from CRM 2011. I have tried a number of the solutions suggested (mapping input parameters to local variables, adding WITH RECOMPILE to the stored procedure) without any luck.
This article on report server application memory configuration (http://technet.microsoft.com/en-us/library/ms159206.aspx) solved the problem - more specifically, adding the 4000000 value to our RSReportServer.config file.
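If I recall the article correctly, the element in question is WorkingSetMaximum, whose value is given in kilobytes (so 4000000 is roughly 4 GB); in RSReportServer.config it looks like:

<WorkingSetMaximum>4000000</WorkingSetMaximum>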
Reports which would take 30-60 seconds to render now complete in less than 5 seconds, which is about the same time the underlying stored procedure takes to execute in SSMS.

FREETEXTTABLE always has a rank of 0

I'm using SQL Server 2008, and if I perform the following query:
SELECT *
FROM FREETEXTTABLE(SomeTable, Name, 'a name that I know exists')
I get the rows back that I would expect, but the rank is always 0.
Searching for a solution to this problem, I found this question on the Microsoft ASP.NET forum, and sure enough if I add:
ALTER FULLTEXT CATALOG MyCatalog REBUILD
I start to get a rank - but only temporarily.
I don't want to have to rebuild my catalog every time I do a search, especially when I have lots of data in my database; and if I add the rebuild to my sproc directly before the query, my query returns no results anyway, presumably because the catalog hasn't finished being rebuilt. There seem to be other people having this and similar problems, but I have been unable to find a solution. Any ideas?
I am running into the same issue, and the currently accepted answer is not a solution for me.
Yes, the ranking is computed as that answer says, but that is in no way a reason for inconsistent results when some time has passed since the last catalog rebuild. Ranking should not change dramatically upon rebuild, and even less so some minutes after a rebuild...
For me, this is a bug in FREETEXTTABLE ranking. (A bug which does not affect CONTAINSTABLE ranking: I have checked this myself with my own buggy catalog, and it is also reported in this Microsoft forum post.)
From this other Microsoft forum post it seems this bug occurs only in catalogs with very few rows indexed. Adding data to the catalog makes the bug disappear.
So here is my answer, taken from Pavel Valenta on yet another Microsoft forum post:
If your real catalog is not going to have more than a few hundred rows indexed, add a dummy table to your catalog in order to have more rows indexed.
This will not pollute your results, due to the way the queries are built. Yes, this seems quite a strange fix, but it is the only one that solved the trouble for me.
One final note: I had this problem with SQL Server 2005 SP4; I have not tested with 2008. (The question is about 2008.)
The rank is relative to the other results returned in the query and is therefore only useful for sorting on relevance from the returned values. There is detailed information on the ranking method.
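In practice that means joining the results back on KEY and sorting by RANK; for the table above (the key column name is assumed):

SELECT st.*, ft.[RANK]
FROM FREETEXTTABLE(SomeTable, Name, 'a name that I know exists') AS ft
INNER JOIN SomeTable AS st ON st.Id = ft.[KEY] -- [KEY] holds SomeTable's full-text key
ORDER BY ft.[RANK] DESC; -- highest relevance first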