Serena Dimensions Merge

In Serena Dimensions, there are two parallel streams (Stream A and Stream B) created from the same ancestor. I need to merge a few modified files from Stream A to Stream B. Can someone guide me on how this can be done?

No tool provides "automatic merging".
Suppose you have Stream A and Stream B, with the same item in different revisions: X 1.0 - production, X 1.1 - Stream A, X 1.2 - Stream B.
First you must choose which stream will get the "final merged" revision. You can also use a third stream dedicated to all merges, regression tests and actioning to production.
Usually you create a "third" revision (e.g. X 1.3 if you base the merge on production), or use an existing stream revision (e.g. X 1.1.1 if you base the merge on the existing revision 1.1 on Stream A).
Use the "compare" tool to check the differences and proceed with the merge.

It can also be done locally, by having both streams updated, each in its own folder (I assume you already have them both set up this way). Locally merge your changes from the Origin stream to the Destination stream (all files or just the selected ones). Make sure that nothing was broken on the Destination stream and then deliver or check in the Destination stream.
A good practice is to have all items actioned as written, completed or baselined in your Origin stream, whichever makes more sense in your project, prior to merging the streams. Then, as a double check, you can Get your Destination stream from the server into a different path and verify that all the changes are merged and nothing was broken. Also action the items as written, completed or baselined in the Destination stream once this final check is successful.
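If you do that final double check by hand it is easy to miss a file, so here is a minimal sketch of automating it, assuming the locally merged folder and the freshly fetched Destination work area sit in two local directories (both paths below are placeholders; note that dircmp uses a shallow, stat-based comparison by default).

# sketch: compare the locally merged folder with the Destination work area
# fetched from the server into a different path (placeholder paths)
import filecmp

def report(cmp, prefix=""):
    for name in cmp.diff_files:
        print("differs:", prefix + name)
    for name in cmp.left_only + cmp.right_only:
        print("only on one side:", prefix + name)
    for sub, subcmp in cmp.subdirs.items():
        report(subcmp, prefix + sub + "/")

report(filecmp.dircmp("merged_destination_local", "destination_fetched_from_server"))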

Related

Is it possible to merge Azure Data Factory data flows

I have two separate Data flows in Azure Data Factory, and I want to combine them into a single Data flow.
There is a technique for copying elements from one Data flow to another, as described in this video: https://www.youtube.com/watch?v=3_1I4XdoBKQ
This does not work for Source or Sink stages, though. The Script elements do not contain the Dataset that the Source or Sink is connected to, and if you try to copy them, the designer window closes and the Data flow is corrupted. The details are in the JSON, but I have tried copying and pasting into the JSON and that doesn't work either - the source appears on the canvas, but is not usable.
Does anyone know if there is a technique for doing this, other than just manually recreating the objects on the canvas?
Thanks Leon for confirming that this isn't supported; here is my workaround process.
1) Open the Data Flow that will receive the merged code.
2) Open the Data Flow that contains the code to merge in.
3) Go through the to-be-merged flow and change the names of any transformations that clash with the names of transformations in the target flow.
4) Manually create, in the target flow, any Sources that did not already exist.
5) Copy the entire script out of the to-be-merged flow into a text editor.
6) Remove the Sources and Sinks (see the sketch at the end of this answer).
7) Copy the remaining transformations to the clipboard and paste them into the target flow's script editor.
8) Manually create the Sinks, remembering to set all properties such as "Allow Update".
Be aware that if you make a mistake and paste in something that is not correct, the flow editor window will close and the flow will be unusable. The only way to recover it is to refresh and discard all changes since you last published, so don't do this if you have other unpublished changes that you don't want to lose!
I have already established a practice in our team that no mappings are done in Sinks. All mappings are done in Derived Column transformations, and any column name ambiguity is resolved in a Select transformation, so the Sink is always just auto-map. That makes operations like this simpler.
It should be possible to keep the Source definitions in Step 6, remove the Source elements from the target script, and paste the new Sources in to replace them, but that's a little more complex and error-prone.
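For step 6, if the exported script follows the usual pattern where every transformation definition ends with a line containing "~> Name", a small throwaway script can strip the Sources and Sinks for you. This is only a sketch under that assumption (the input file name is a placeholder), and the result should still be reviewed before pasting it into the target flow.

# rough sketch: drop source(...) and sink(...) statements from an exported
# data flow script, keeping only the intermediate transformations
import re

def split_statements(text):
    statements, current = [], []
    for line in text.splitlines():
        current.append(line)
        if "~>" in line:                          # last line of a transformation definition
            statements.append("\n".join(current))
            current = []
    return statements

with open("to_be_merged_flow_script.txt") as f:   # placeholder: the script copied out in step 5
    statements = split_statements(f.read())

kept = [s for s in statements
        if not s.lstrip().startswith("source(")   # drop Sources
        and not re.search(r"\bsink\(", s)]        # drop Sinks

print("\n".join(kept))   # paste the output into the target flow's script editor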

UCM ClearCase: How do I force an activity merge?

Often when I'm delivering activities for a build, I get an issue where one or two activities have dependencies on other activities that are not yet ready to be deployed.
What I want to do in most of these situations is force a merge between the two changes and deploy the stream, so that any changes in development that are lost during merge can be recovered.
What happens instead is that ClearCase forces me to move these changes to a new activity and include the activity if I want to make the delivery at all.
I have heard that I can render a branch obsolete - which would be satisfactory in some cases, but occasionally there are changes I might want to include in the deployment - is there any way for me to force a merge between two changes before making a deployment?
Sometimes UCM doesn't let you make a deliver because of "linked" activities; that is because a previous deliver has created a timeline which made those activities linked (meaning you can no longer deliver one without the other).
In those instances, you can still merge any activity you want in a non-UCM fashion with cleartool findmerge: see "How to merge changes from a specific UCM activity from one ClearCase stream to another" for a complete example.
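For reference, the findmerge invocation for a single activity typically looks something like the line below; the activity selector and log file name are placeholders, and you should check the findmerge reference page for the exact options supported by your ClearCase version.
cleartool findmerge . -fcsets activity:my_activity@/vobs/my_pvob -merge -log findmerge.log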
Then later, you will make your deliver (with all activities from the source stream).
Adding on to VonC's answer...
There are a couple of ways that you can wind up with activities linked together:
Version dependency: Activity A has versions 1, 2, and 3 of foo.c; Activity B has version 4 of foo.c. You can sometimes also have "1 & 3" and "2 & 4".
Baseline Dependency: Activities A and B were delivered in the same deliver operation from Stream X to sibling stream Y. From that time forward, A&B have to be delivered together since they are in the same "deliverbl" baseline.
Number 1 can be changed by rearranging the changesets using
cleartool chactivity -fcsets {activity X} -tcsets {activity Z} foo.c
Number 2 is pretty much set in stone...

Storing code metrics

I'd like to write a pre-commit hook that tells you if you've improved/worsened some code metric of a project (e.g. average function length). The hook would have to know what the previous average function length was, and I don't know where to store that information. One option would be to store an additional .metrics file in the repo, but that sounds clunky. Another option would be to git stash, compute the metrics, git stash pop, compute the metrics again and print the delta. I'm inclined to go with the latter. Are there any other solutions?
Disclaimer: I am the author of the Metrix++ tool, which I use in the workflow described below. I guess the same workflow can be executed with other tools capable of comparing the results.
One of the ideas you suggested works perfectly, if you add a couple of CI checks (see the steps below). I find it solid. Not sure why you are considering it clunky.
I have a file with metrics results which is updated before each commit and stored in the VCS. Let's call this file metrics.db, and consider automating the following workflow on build/test of a project:
1) If metrics.db has not been changed since the last checkout (i.e. it is the original data for the previous/base revision), copy it to metrics-prev.db.
2) Collect metrics for the current code, which produces the metrics.db file again. Note: it is very helpful when a metrics tool can do iterative scans for the best performance (i.e. calculate metrics only for updated functions/classes), as it gives you the opportunity to run the metrics tool on every build, including incremental ones.
3) Compare metrics-prev.db with metrics.db. If the metrics identify regressions, fail the build and [optionally] do not allow the commit - a team rule. If the metrics are good, the build is successful, and the commit may happen. (A minimal pre-commit sketch of steps 1-3 follows this list.)
4) [optionally] You may run Continuous Integration (CI) which validates that the committed metrics.db file corresponds to the committed code for the same revision (i.e. do the same steps 1-3 and make sure that the diff is zero at step 3). If the diff is not zero, it means somebody forgot to update the metrics.db file and presumably did not execute the pre-commit check, so revert the change.
5) [optionally] CI may do steps 1-3 if you fetch metrics.db from the previous revision as metrics-prev.db. In this case, CI may also check that the collected metrics.db is the same as the committed one (an alternative or addition to step 4).
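As a rough illustration only, steps 1-3 could look like this as a Git pre-commit hook. The collect-metrics and compare-metrics commands are placeholders for whatever your metrics tool actually provides (Metrix++ or anything else able to compare two results files), not real command names.

#!/usr/bin/env python3
# minimal sketch of steps 1-3 as .git/hooks/pre-commit (placeholder tool commands)
import shutil, subprocess, sys

# step 1: if metrics.db is unchanged since the last commit, keep it as the baseline
unchanged = subprocess.run(
    ["git", "diff", "--quiet", "HEAD", "--", "metrics.db"]).returncode == 0
if unchanged:
    shutil.copyfile("metrics.db", "metrics-prev.db")

# step 2: collect metrics for the current code, regenerating metrics.db
subprocess.run(["collect-metrics", "--output", "metrics.db"], check=True)
subprocess.run(["git", "add", "metrics.db"], check=True)   # commit the refreshed results

# step 3: compare against the baseline; fail the commit on a regression
if subprocess.run(["compare-metrics", "metrics-prev.db", "metrics.db"]).returncode != 0:
    print("metrics regression detected - commit rejected")
    sys.exit(1)   # a non-zero exit from a pre-commit hook aborts the commit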
Another implementation I have seen: metrics.db files are stored on a separate drive, outside the VCS, and a custom script locates the corresponding metrics.db for a revision. I find this solution unreliable as the drive can disappear, files can be moved or renamed, and so on. So placing the file in the VCS is the better solution, but either will work.
I have attempted the alternative you suggested: switch to the previous revision and run the metrics tool twice. I abandoned this approach for several reasons: the metrics check script alters your source files (so it is impossible to include it in an incremental rebuild and continue to work smoothly with your IDE, as it will complain about changed files), and secondly it performs very poorly (compared with iterative re-scans, it is extremely slow).
Hope it helps.

How to merge change sets delivered for a work item into one in RTC

I have already delivered 3 change sets for a given work item. However, I believe that it should be one change set only.
How can I merge these change sets into one change set?
Re: "I believe that it should be one changeset only". It isn't necessary, but I understand the desire to have one change set encapsulating all the work. It makes it easier to deliver from stream to stream, resolve conflicts and avoid gaps.
As VonC said, you can't technically merge the change sets; however, you can create a new change set with equivalent changes. It's a bit of work, and they'd all have to be in the same component (change sets cannot span components). Also, if there are gaps between the change sets, you'll have some merging to do. Here are the general steps.
1) Sync up with your repository workspace's flow target by accepting all incoming changes.
2) Discard the three completed change sets from your repository workspace. The next step is the trick.
3) In the Pending Changes view, select the component and choose the context menu action "Replace in <flow target>", where <flow target> is your flow target stream. That will take the configuration of the component in your repository workspace, only missing the three change sets, and replace it in the flow target stream. Now you've "undelivered" the three change sets.
4) Accept the three change sets back into your repository workspace. They should now be the only outgoing change sets in the component, just as if you hadn't delivered.
5) Select the three change sets you want to merge and create a patch from them.
6) Remove the change sets from the work item.
7) Discard the change sets.
8) Apply the patch back to your sandbox.
9) Check the changes into one new change set.
10) Associate the new change set with the work item.
11) Deliver the new change set. Done.
As you can see, it's not a trivial task, and so you likely won't do it that often, unless somehow your team process requires you to do so.
If those change sets are already delivered to the stream, you can't.
If by "deliver", you mean "change set associated to a work item, but not yet delivered to the stream, then you can move files and directories from two of those change sets to the third, and then "discard" those two (now empty) change sets.
That means those change sets aren't "completed" (they don't yet have a green tick as an overlay on their triangle-shaped icon).
A change set gets completed when it is part of a baseline, or delivered to a stream.

Change name of component

To create a new dev stream from an existing stream, I first created a snapshot of the existing stream and from this snapshot I created a new prod stream.
(similar to a ClearCase UCM rebase of a baseline from a parent Stream to a child Stream)
All of the new stream's components are the same as before. So 'dev-stream' and 'prod-stream' have the same components (the components have the same name and point to the same baseline).
Should a copy of the components not have been created instead, with the new baseline?
Here is a screenshot of how my component appears in RTC for both 'dev-stream' and 'prod-stream':
The baseline should not contain the word "prod" as this is a dev stream.
The problem is circled in red in the screenshot: how or why has the word 'prod' appeared in the component name? Can 'prod' be removed from the name?
The component must be the same when you add a snapshot to a new stream: same name and same baseline name (very similar to a ClearCase UCM rebase, where you would find the same baseline name used as a foundation baseline for the sub-stream).
The idea behind a stream is to list what you need in order to work: this is called a configuration, as in "scm" (which can stand for "source configuration management", not just "source code management").
The fact that your new stream starts working with a baseline named with "prod" in it has no bearing on the kind of development you are about to do in said new stream.
It is just a "starting point" (like "foundation baselines" in ClearCase are). Again, no copy or rename involved here.
In your previous question, you mentioned having the current stream as 'dev-stream', but that has no influence on the name of the baselines already delivered in that first Stream (whatever its name is). Those baselines keep their name, and if you snapshot them and reuse that snapshot in a new stream, you will get the exact same baseline name.
But the name of the first baseline you are using as a starting point doesn't matter, as long as its content allows you to start a separate development effort, isolated in its own stream.
Any baseline you create and deliver on said new stream will be displayed on it, and you won't see that first baseline name anymore.