How would one find baselines associated with a CR in Telelogic Synergy using the CLI? I have tried ccm query "cvtype='baseline' and cr('xxx')", but this doesn't produce any results.
From the GUI you can look at the properties of a baseline and see which CRs are associated with the baseline, but I can't seem to find the proper CLI magic to allow me to write a script to take a CR and list the baselines.
I think associations between a baseline and a CR are handled with relationships (ccm relate).
Search for "Predefined relationships" in the Synergy manual for a list of the existing relationship. When you know the name of the relationship you should then be able to use a query with the function has_relationship_name().
A change request is associated with a RELEASE rather than with a BASELINE, so the following query will help you get the RELEASE; you can then run another query to retrieve the baselines.
To retrieve the release for the CR:
ccm.exe query -f "%release %modify_time %create_time" "cr('xxxxx')"
Once you retrieve the RELEASE and MODIFY_TIME, run a new query to get the BASELINES:
ccm.exe query -f "%objectname %modify_time %create_time" "(cvtype='project') and (release='pppp/qqqq') and (modify_time>=time('1/30/13'))" -s integrate
This way you will get a narrower list of BASELINES to work with. I know this may not be exactly the answer you are looking for, but it might help.
I am creating a data dictionary and I am supposed to track the location of any used field in a workbook. For example (superstore sample data), I need to specify which sheets/dashboards have the [sub-category] field.
My dataset has hundreds of measures/dimensions/calculated fields, so it's incredibly time-consuming to click into every single sheet/dashboard just to see whether a field is used there. Is there a quicker way to do this?
One robust, but not free, approach is to use Tableau's Data Catalog, which is part of the Tableau Server Data Management Add-On.
Another option is to build your own cross reference. You could start with Chris Gerrard's Ruby libraries, described in this article: http://tableaufriction.blogspot.com/2018/09/documenting-dashboards-and-their.html
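If you go the build-your-own route, note that a .twb workbook file is just XML, so even a small script can produce a rough field-to-sheet cross reference. Below is a minimal PowerShell sketch; the path and field name are placeholders, it assumes an unpackaged .twb (unzip a .twbx first), and you should verify against your own workbook XML that field references really appear as the bracketed name inside each worksheet element:

$workbookPath = 'C:\data\Superstore.twb'   # placeholder path
$fieldName    = '[Sub-Category]'           # field you want to track

[xml]$twb = Get-Content -Raw -Path $workbookPath

# List the worksheets whose XML mentions the field anywhere (columns, filters, calculations).
$twb.workbook.worksheets.worksheet |
    Where-Object { $_.OuterXml.Contains($fieldName) } |
    ForEach-Object { $_.GetAttribute('name') }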
I'm trying to work out how I can export individual Cognos Reports via the command line, for the purposes of source versioning in Git at a report-by-report level. I presume XML would be the output format.
I read that the Cognos SDK can help, but you need to build your own solution. That may be possible, but this use case feels like something many others would already want, so I'd expect tooling to exist already.
Of course, importing the individual report would also be needed.
Can anyone help here please?
Thanks.
If your end game is version control (Who changed what, when?), you should look into MotioCI. Last time I looked, there was no free version of MotioCI.
You can use tools like the ones provided by companies such as http://www.motio.com. With the free version you can export the XML of the reports, but only one at a time.
You can also use a Cognos deployment of the reports, which generates a zip file with the XML of the reports, but all the reports end up in the same file and you will have to extract the XML of the individual reports by hand.
I found the SDK to be cumbersome and, when I got it working, slow.
Yes, report specs are XML.
I have created a process that produces output like what you are asking for. Here's what it involves:
A recursive common table expression (CTE) query to get the report specs along with the folder structure as seen in Cognos.
A PowerShell script to run the query and write the results to the file system.
Another PowerShell script to pull the current content from the remote git repo, run the first PowerShell script, then add, commit, and push the results up to the remote git repo.
I also wrote a PowerShell script to perform the operations associated with git push. This involves using a program I found called HTML Tidy (http://tidy.sourceforge.net/) that can be used to make the XML human-readable. This helps with diffs in git. I use TFS, so I get a nice, side-by-side diff if I have tidied the XML. (Otherwise, it tells me the only line of XML has changed.)
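To give a rough idea of what that step looks like, here is a sketch only, not my actual script; it assumes tidy.exe is on your PATH and that the specs have already been written to an export folder that is under git control:

$exportDir = 'C:\CognosExport'   # wherever the extraction script writes the specs

# Pretty-print each report spec so git sees line-level changes rather than one giant line.
Get-ChildItem -Path $exportDir -Recurse -Filter *.xml | ForEach-Object {
    & tidy -xml -indent -quiet -modify $_.FullName
}

# Commit and push whatever changed.
Set-Location $exportDir
git pull
git add --all
git commit -m "Cognos report specs $(Get-Date -Format 'yyyy-MM-dd')"
git push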
I recently added output for dashboards (exploration) and data sets (dataSet2). Dashboards are stored as JSON, so my routine had to tidy that (simple in PowerShell).
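For the JSON, the tidy step really is a one-liner; the depth value here is an arbitrary guess, so raise it if your dashboards nest deeper:

(Get-Content dashboard.json -Raw | ConvertFrom-Json) | ConvertTo-Json -Depth 50 | Set-Content dashboard.json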
I run my routine daily, getting new and modified content from the last 3 days (just in case), and weekly to do an entire dump (to capture the deletes). The weekly process takes about six minutes. The daily process is negligible.
Before you ask: I hesitate to provide actual code because I can't take any responsibility for your system.
Updates:
Hacking away at the Content Store database is not recommended and it is not supported by IBM.
For reference/comparison: I'm running IBM Cognos 11.0.7 on IIS on Windows 2012 R2 with the Content Store database on MS SQL Server 2016. Your system may be different.
Additional Resources
https://www.cognoise.com/index.php/topic,28289.msg113869.html#msg113869
https://www.cognoise.com/index.php/topic,17411.msg50409.html#msg50409
https://learn.microsoft.com/en-us/powershell/scripting/overview?view=powershell-6
https://learn.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-2017
https://git-scm.com/docs
http://tidy.sourceforge.net/
I would like to change the properties of multiple diagrams together rather than clicking on them one by one. Does anyone know how this can be achieved?
You can use the scripting facility of Enterprise Architect to loop over the diagrams you would like to change and update them.
See this section of the manual to get help.
There are a bunch of example scripts included with EA, either in the local scripts or in the EAScriptLib MDG.
Another source of examples is my Github repository: https://github.com/GeertBellekens/Enterprise-Architect-VBScript-Library
You could write SQL to manipulate your database directly. t_diagram.PDATA holds a long cryptic string, one part of which is ScalePI=0; (the default: no scaling). You can alter that to ScalePI=1; (meaning scale to one page).
String manipulation functions vary from database to database, so you need to write your own statement, which you can execute in a script using
Repository.Execute("UPDATE t_diagram ...")
Note that you should test this in a sandbox first, since invalid SQL can easily corrupt your whole repository.
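Purely as an illustration, for a SQL Server based repository (other back ends have their own string functions, and you should work on a backup), the call could look something like:
Repository.Execute("UPDATE t_diagram SET PDATA = REPLACE(PDATA, 'ScalePI=0;', 'ScalePI=1;')")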
I managed to download a planet file from OSM, converted it to o5m format with osmconvert, and additionally deleted all of the author information from it to keep the file size smaller. I am trying to fetch every POI in this data for the whole world, so I am not interested in cities, towns, highways, ways, etc., only amenities.
First I tried to achieve this with osmosis, which appears to do what I want, except that it always runs out of memory because the file is too huge to process. (I could split the file into smaller ones, but I would like to avoid that if possible.)
Then I experimented with osmfilter; there I managed to filter for every node that has an amenity tag, but I have several problems which I can't solve:
a. if I use the following command:
osmfilter planet.o5m -v --keep-tags="amenity=" -o=amenities.osm
It keeps all nodes, and strips every tag which doesn't have amenity in its name.
b. if I use this command:
osmfilter planet.o5m -v --keep-tags="all amenity=" -o=amenities.osm
It now drops all nodes which don't have the amenity tag, but it also strips all additional tags from the matching nodes, and those contain information that I need (for example, the name of the POI, or a description).
c. if I use this command:
osmfilter planet.o5m -v --keep-tags="all amenity= all name=" -o=amenities.osm
It keeps every node which has EITHER a name or an amenity tag, which leaves me with lots of named cities and highways (data that I don't need).
I also tried combining these with an AND operator, but it says that I can't use the AND operator when filtering tags. Any idea how I could achieve the desired result?
End note: I am running a Windows 7 system, so Linux-only programs won't help me :|
Please try the --keep= option instead of the --keep-tags option. The latter has no influence on which objects will be kept in the file.
For example:
osmfilter planet.o5m --keep="amenity= and name=" -o=amenities.osm
will keep just those objects which have an amenity tag AND a name tag.
Be aware that all dependent objects will also be in the output file. For example, if there is a way object with the requested tags, each node of this way will be in the output file as well. The same is valid for relations and their dependent ways and nodes.
If you do not want this kind of behaviour, please add --ignore-dependencies.
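So a complete call could look like this (untested here; adjust the file names to yours):
osmfilter planet.o5m --keep="amenity= and name=" --ignore-dependencies -o=amenities.osm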
Some more information can be found here: https://wiki.openstreetmap.org/wiki/Osmfilter
osmfilter inputfile.osm --keep-nodes="amenity=" --keep-tags="all amenity= name=" --ignore-dependencies -o=outputfile.osm
This is exactly what you are looking for:
keep nodes with the tag amenity (--keep-nodes)
keep only info about amenities and names (--keep-tags)
I want to find the files under a given directory which have been updated the most. Is there any command which can display this info? Or is there any way to get the maximum version count for a given file, so I can write a script to get this info for all files and then sort descending?
Do you mean changed the greatest number of times, or undergone the most code churn?
Either way, looking at the report data might be the easiest option for you. Take a look at the following blog post I wrote explaining how to use Excel to look at TFS data; it uses churn as an example, letting you drill down into folders and files, but you should be able to get the data that you are looking for.
Getting Started with the TFS Data Warehouse
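If you would rather script it directly against version control instead of the warehouse, something along these lines could work. This is a rough PowerShell sketch only; it assumes tf.exe from Team Explorer is on your PATH and that you run it inside a mapped workspace folder:

# Count the changesets that touch each file under the current folder, most-changed first.
Get-ChildItem -Recurse -File | ForEach-Object {
    $history = tf history $_.FullName /noprompt /format:brief
    [pscustomobject]@{
        File    = $_.FullName
        Changes = ($history | Select-Object -Skip 2).Count   # skip the two header lines
    }
} | Sort-Object Changes -Descending | Select-Object -First 20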