I have a GitHub repository containing only SQL DDL and DML (INSERT) files.
This has been synced with SonarQube. However, we do not have any coverage report for it.
I was checking the link below:
https://www.postgresql.org/docs/current/regress-coverage.html
But I am not able to figure out how it works. Moreover, it produces the report in HTML format. Is there a way to run this entire procedure and generate the report in XML format instead?
I am stepping into a new reporting environment and I don't have a lot of background info yet, but my company uses a series of Crystal Reports.
I want to compare two reports that are identical except that they connect to different data sources. If I open each report in the Crystal Reports viewer and go to Database > Set Database Location, I can see its data source, and the two reports do indeed connect to different data sources, as expected.
However, when I export the two Crystal Reports as text files and compare them using Notepad++, I don't see the data source / connection string anywhere in the report files, so the two files come out exactly the same.
If the exported text files are exactly the same, how does the Crystal Reports viewer know to point one report at the prod data source and the other at the dev data source? The connection does not appear to be embedded in the exported metadata / report definition file.
Thank you!!
The connection info is simply not part of the exported report definition text.
But, obviously, it is part of the report definition.
If you need to export more detailed report definition information, including connection properties, consider getting a documentation utility. Ken Hamady maintains a list of those here.
I'm trying to work out how I can export individual Cognos Reports via the command line, for the purposes of source versioning in Git at a report-by-report level. I presume XML would be the output format.
I read that the Cognos SDK can help, but you would need to build your own solution. That may be possible, but this use case feels like something many others would already want, so I'd expect tooling to exist already.
Of course, importing the individual report would also be needed.
Can anyone help here please?
Thanks.
If your end game is version control (Who changed what, when?), you should look into MotioCI. Last time I looked, there was no free version of MotioCI.
You can use tools like the ones provided by companies like http://www.motio.com. With the free version you can export the XML of the reports, but only one at a time.
You can also use a Cognos deployment of the reports, which generates a zip file with the XML of all the reports, but they all end up in the same file and you will have to extract the XML of the individual reports by hand.
I found the SDK to be cumbersome and, when I got it working, slow.
Yes, report specs are XML.
I have created a process that produces output like what you are asking for. Here's what it involves:
A recursive common table expression (CTE) query to get the report specs along with the folder structure as seen in Cognos (a sketch of this and the export/commit steps follows this list).
A PowerShell script to run the query and write the results to the file system.
Another PowerShell script to pull the current content from the remote git repo, run the first PowerShell script, then add, commit, and push the results up to the remote git repo.
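To give a rough idea of the shape of this (it is not my actual code, and the Content Store table and column names in the query are placeholders, not the real schema; you would need to map them to your own Content Store version):

# Minimal sketch: dump report specs (with their Cognos folder paths) to disk.
# Table/column names (Objects, Specs, CMID, PCMID, NAME, SPEC) are placeholders.
$sql = @"
WITH folders AS (
    SELECT CMID, PCMID, CAST(NAME AS varchar(max)) AS Path
    FROM   dbo.Objects
    WHERE  PCMID IS NULL                  -- start at the root
    UNION ALL
    SELECT o.CMID, o.PCMID,
           CAST(f.Path + '/' + o.NAME AS varchar(max))
    FROM   dbo.Objects o
    JOIN   folders f ON o.PCMID = f.CMID  -- walk down the folder tree
)
SELECT f.Path, s.SPEC                     -- SPEC holds the report XML
FROM   folders f
JOIN   dbo.Specs s ON s.CMID = f.CMID
"@

$rows = Invoke-Sqlcmd -ServerInstance 'SQLHOST' -Database 'ContentStore' -Query $sql

foreach ($row in $rows) {
    $file = Join-Path 'C:\cognos-export' ($row.Path + '.xml')
    New-Item -ItemType Directory -Path (Split-Path $file) -Force | Out-Null
    Set-Content -Path $file -Value $row.SPEC -Encoding UTF8
}

# The third script then syncs the results with the remote repo:
git -C 'C:\cognos-export' pull
git -C 'C:\cognos-export' add -A
git -C 'C:\cognos-export' commit -m "Cognos spec sync $(Get-Date -Format s)"
git -C 'C:\cognos-export' push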
I also wrote a PowerShell script to perform the operations associated with git push. This involves using a program I found called HTML Tidy (http://tidy.sourceforge.net/) to make the XML human-readable, which helps with diffs in git. I use TFS, so I get a nice, side-by-side diff if I have tidied the XML. (Otherwise, it tells me the only line of XML has changed.)
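The tidy step is a single external call per file; roughly (flags per the HTML Tidy documentation, export folder is illustrative):

# Re-indent every exported spec in place so git diffs line up
Get-ChildItem 'C:\cognos-export' -Filter *.xml -Recurse | ForEach-Object {
    # -xml: input is XML, -i: indent, -m: modify in place, -q: quiet
    & tidy.exe -xml -i -m -q -wrap 0 $_.FullName
}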
I recently added output for dashboards (exploration) and data sets (dataSet2). Dashboards are stored as JSON, so my routine had to tidy that (simple in PowerShell).
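The JSON tidying really is simple: round-tripping a file through the JSON cmdlets re-indents it (the path here is illustrative):

# Pretty-print a dashboard spec; -Depth keeps nested objects from truncating
$path = 'C:\cognos-export\MyDashboard.json'
(Get-Content $path -Raw | ConvertFrom-Json | ConvertTo-Json -Depth 100) |
    Set-Content $path -Encoding UTF8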
I run my routine daily, getting new and modified content from the last 3 days (just in case), and weekly to do an entire dump (to capture the deletes). The weekly process takes about six minutes. The daily process is negligible.
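One way to schedule runs like these on Windows is the ScheduledTasks module; a sketch of registering the daily run (the script name and its -DaysBack parameter are hypothetical):

# Register the daily sync; the weekly full dump would use -Weekly instead
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
             -Argument '-File C:\scripts\Sync-CognosSpecs.ps1 -DaysBack 3'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Cognos spec sync (daily)' -Action $action -Trigger $trigger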
Before you ask: I hesitate to provide my actual code because I can't take any responsibility for your system.
Updates:
Hacking away at the Content Store database is not recommended and it is not supported by IBM.
For reference/comparison: I'm running IBM Cognos 11.0.7 on IIS on Windows 2012 R2 with the Content Store database on MS SQL Server 2016. Your system may be different.
Additional Resources
https://www.cognoise.com/index.php/topic,28289.msg113869.html#msg113869
https://www.cognoise.com/index.php/topic,17411.msg50409.html#msg50409
https://learn.microsoft.com/en-us/powershell/scripting/overview?view=powershell-6
https://learn.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-2017
https://git-scm.com/docs
http://tidy.sourceforge.net/
I am using NAnt to generate a report of test results using nunit2report.
I have several projects which I test individually, dropping the resulting XML into a common folder.
I then use NAnt to generate a report from all the XML files in this folder.
This works fine at first glance: all the tests seem to be merged into a single HTML output. However, for each file the entire test list is being repeated.
Each summary corresponds to a single file, but the same list of test names is repeated over and over again.
What I would like, ideally, is a report where the summaries are merged and only a single list of all test names is displayed.
Is this possible? If not how can I fix the issue of test names being repeated?
Can you restructure your process to run NUnit once across all projects (either by specifying multiple assemblies or by creating an NUnit project file)? That removes the need to merge the XML files at all.
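For example, with the NUnit 2.x console runner (the assembly names are illustrative; check the exact option syntax against your NUnit version):

# One NUnit run across all test assemblies -> a single XML result file
& nunit-console.exe ProjectA.Tests.dll ProjectB.Tests.dll ProjectC.Tests.dll /xml:TestResult.xml
# Alternatively, list the assemblies in a .nunit project file and pass that instead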
I would like to ask if it is possible to create a Crystal Report using a DataSet without using or creating an XSD file. There's one report here that does exactly that: it is connected to a DataSet, but no XSD file is used.
Of course you can; just use the report document's SetDataSource method:
YourReportDocument.SetDataSource(YourDS);
As long as the table and column names in the DataSet match the fields the report was designed against, no XSD file is needed at runtime.
I have a report that will be viewed from the SSRS Report Manager and also scheduled to send out a flat file. The problem is that the rich display, summation rows, and some other elements that are preferred when viewing the report online or as a PDF are not wanted when the report is viewed in Excel or exported to CSV. The solution I proposed was to simply have two reports: one nicely formatted, and the other more of a raw data feed. But they want only one report, meaning I need a way to show one thing when the report is viewed online or saved as a PDF, and something different when it is saved as CSV or XLS. Is this possible, and if so, how?
When exporting to .csv format, many fields are stripped. Have you looked at what the existing functionality does to your report?
If that's not adequate, you can use the Globals!RenderFormat variable, new in SSRS 2008 R2, to change item visibility. For example, set an item's Hidden expression to:
=(Globals!RenderFormat.Name = "EXCEL")
This hides the item when the report is exported to Excel format; the CSV renderer can be matched the same way using the name "CSV". (Globals!RenderFormat is only available since SSRS 2008 R2.)
More info on this at:
http://blogs.msdn.com/b/robertbruckner/archive/2010/05/02/globals-renderformat-aka-renderer-dependent-report-layout.aspx