ClearCase: Email Notification on Deliver

At my new company, the CMS is ClearCase. I've worked with Perforce before, and it had a nice built-in notification mechanism for the team to keep up to date with files that changed in the project. I'm trying to set up something equivalent in ClearCase, and I would like to know if anyone has achieved this before.
Basically, there are three requirements:
Have a way to subscribe to a project. A user receives notifications only for the projects they have subscribed to.
When someone delivers an activity, all the subscribers of the impacted project receive an email notification about that activity.
The email contains the list of files affected by the activity. Each modified file has a link that performs a diff showing what the activity changed in that file.
So, is anyone aware of a module/extension or any other existing way to put this in place, or do I have to do it all manually with triggers and Perl scripts?
Thanks,
Martin

We wanted the same thing here, so we are using a trigger called ucm_complete_delivery.pl that can be found on CM Crossroads.
You need to apply this trigger to your PVOB (as it's a UCM trigger).
Once you have applied it, you need to define the following Custom Attributes on your UCM component(s):
auto_baseline_email user_1@mydomain.com,user_2@mydomain.com,etc...
It's a bit painful, as the mailing list has to be maintained by hand (or you need to use a group mail address), but it's better than nothing. :)
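If it helps, the setup goes roughly along these lines; the trigger type, script path, and attribute names below are only illustrative, so check the script's own documentation for the exact names and quoting it expects:

cleartool mktrtype -ucmobject -all -postop deliver_complete -execwin "ccperl \\server\triggers\ucm_complete_delivery.pl" -c "Mail change set on deliver" ucm_complete_delivery@\my_pvob
cleartool mkattype -nc -vtype string auto_baseline_email@\my_pvob
cleartool mkattr auto_baseline_email '"user_1@mydomain.com,user_2@mydomain.com"' component:my_component@\my_pvob

Note that string attribute values have to carry embedded quotes, and the exact escaping depends on your shell.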
Cheers,
Thomas

I am not sure if that already exists; I am sure it is not provided natively with the UCM product.
Maybe a more specialized forum like CM Crossroads has more information, but you have already put a question there ;)
Anyhow, the simplest way to implement such a notification would be to have a process that follows new baselines made on a stream.
Each baseline being composed of activities, it would be simple to list those.
Each baseline being easily compared with its previous baseline, it would be simple to list the file versions, and build the appropriate diff.
As for the users following a project, I would suggest as a "subscription mechanism" the list of views of one of the streams of a project: any user having a view on (one of the streams of) that project is potentially interested.
The general implementation principle would be through post-operation triggers, as described in the "Ten best triggers" article.
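The two cleartool commands that do most of the work for such a process would be along these lines (the baseline names and PVOB tag are placeholders):

cleartool diffbl -activities baseline:BL_NEW@\my_pvob baseline:BL_PREV@\my_pvob
cleartool diffbl -versions baseline:BL_NEW@\my_pvob baseline:BL_PREV@\my_pvob

The first lists the activities between two baselines, the second the file versions, which is what the diff links in the notification email would be built from.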

AFAIK, almost all ClearCase operations can have triggers (written in Perl, IIRC).
You need to add an email trigger to the deliver operation. A long, long time ago I saw a simple example, but you have to take care of maintaining the subscription list yourself and emailing the appropriate people.
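To give an idea of the shape of such a trigger, here is a minimal Perl sketch of a post-deliver notification (it is not the real ucm_complete_delivery.pl). It assumes a deliver_complete post-op trigger, a subscriber list kept in an auto_baseline_email string attribute, and a reachable SMTP relay; the environment variable names, format strings, and baseline selectors are assumptions to be checked against the ClearCase trigger and fmt_ccase documentation.

#!/usr/bin/perl
# Minimal sketch of a post-deliver notification trigger (not the real
# ucm_complete_delivery.pl). Everything marked "assumption" must be verified.
use strict;
use warnings;
use Net::SMTP;

# Assumption: these trigger environment variables are set for deliver_complete.
my $stream = $ENV{CLEARCASE_STREAM} || 'unknown-stream';
my $pvob   = $ENV{CLEARCASE_PVOB}   || '\\my_pvob';

# Subscribers kept as a string attribute; assumption: the %[...]a format string.
my $attr = `cleartool describe -fmt "%[auto_baseline_email]a" stream:$stream`;
my ($subscribers) = $attr =~ /"([^"]+)"/;
exit 0 unless $subscribers;    # nobody subscribed, nothing to do

# Change set: diff the new integration baseline against the previous one.
# BL_NEW/BL_PREV are placeholders; a real trigger would look them up.
my @versions = `cleartool diffbl -versions baseline:BL_NEW\@$pvob baseline:BL_PREV\@$pvob`;

my $smtp = Net::SMTP->new('smtp.mydomain.com') or exit 0;
$smtp->mail('clearcase@mydomain.com');
$smtp->to(split /\s*,\s*/, $subscribers);
$smtp->data();
$smtp->datasend("Subject: Deliver completed on $stream\n\n");
$smtp->datasend("The following versions were delivered:\n");
$smtp->datasend(join('', @versions));
$smtp->dataend();
$smtp->quit;

Hook it up with cleartool mktrtype as a post-op trigger on deliver_complete in the PVOB, as in the first answer.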

Related

How to define Columns and Status in JIRA to make sense for all issue types (Tasks, Epics, etc.)

Background:
JIRA offers a single set of statuses for all types of issues in a project.
Problem:
The problem is that the status set for a Task is To Do, In Progress, and Done, while for a User Story in the same project it might be Designing, Developing, Testing, Releasing, and Done. It can even be different for a Bug or an Epic.
Question:
How do you keep track of the workflow of your product and at the same time manage the status of your tasks using JIRA's single set of statuses?
PS: I know they can be customized for each project, but it doesn't help because you can't customize them for each issue type separately.
I think one of the reasons that JIRA offers the To Do, In Progress, and Done is that these can apply to anything. You either haven't done it, you're doing something, or you finished. That set can apply to any type of item.
That being said, I feel your pain in wanting to have a better view into the true state of an issue. What we have found works for our OnDemand agile boards is to set up something like the following:
To Do
In Progress
Ready for Review
In Review
Done
For most types of issues, this can work. It adds that bit of extra layer to be able to identify what is ready for testing.
One of the things that is tricky is dependent tasks. For example, I noticed you mentioned "Designing" as a stage, and I'm not sure this makes sense in an agile context. If the design is emerging from the development, it may be better to allow the design/development to flow within the development team. However, we all know that sometimes you need to get some details ironed out before you can proceed, or there may be some people who need to become involved before a dev can proceed. We made the mistake of trying to turn this into a stage, but what we found was that this was really either a sub-task for part of the team, or an impediment (blocker). By flagging stories, you can identify that a story requires something to be done before the development team can proceed.
If you are using Kanban, and not a Scrum board, the sub-task approach will not be for you. In those cases, you'll just need to make sure you have stages that make sense for all the issues you create. Stages will have to be fairly 'generic'. This sounds bad.
But it is not!
I believe teams generally use the stages for a few reasons:
Checking on status of an iteration
Inform other team members that they can pick up an item
Try to get a visual estimate on how close to Done an issue is.
More stages don't necessarily give you a better picture of an iteration's status, as you really just need to see how many points you've closed and how many are in progress. So, at least for that goal, a more generic set of stages should work.
As for informing team members, too often I've seen teams retreat to the digital board to replace communication with each other. The fewer stages you have, the more you can force your team to talk to each other and work together to get a story to done. Things will work better this way, I guarantee it! Having a bit of a break-down helps, especially if you are working on a lot of items at once or have distributed teams working in different time zones, but keeping it simple is usually better.
Tracking the "how close to Done" is the hardest to do with generic stages. However, the multiple stages can be misleading. An item that is almost all the way across might have a severe bug in it that hasn't been found yet, so no matter how many stages you have your view on this item isn't any more accurate than a single "In Progress" stage. It isn't Done until it's Done :)
This was a long way for me to recommend keeping your workflow simple and letting your team use communication to keep on top of things. Maybe I should have just started with that!
The statuses that are available to each project are determined by the workflow to which it is assigned. Not only does a workflow define the statuses, it also defines which statuses you can progress to from a particular status. You can either create your own workflows or download predefined workflows that suit your needs.
In order to have separate workflows for different issue types, we need to define a Workflow Scheme:
1- Go to Jira Administration -> Workflow Schemes
2- Edit the Workflow Scheme that is assigned to your project
3- Click "Add Workflow" to add a new workflow for the issue types that need a different workflow, and assign those issue types to it.

Managing volatile changes with TFS

I work in a shop where we maintain numerous .Net projects that require many small changes. We typically get a Service Request from our customer asking for a new feature. We need to ensure that the work we do is checked into TFS and can be related back to the SR in our help desk database, and that the changes to our code can be reviewed in isolation.
There have been a few strategies that we have discussed, but I hope this question isn't considered subjective as I feel there must be a single practice here that we should be employing. TFS has been used primarily as a source control repo for us, but we are looking to leverage more of it.
1) Currently, a developer creates a Task in TFS and gives it the name of the SR work number. Then, all changes to the codebase are checked in against that task. I personally am hesitant about this approach, as we are co-opting the Task artifact to be used in a way it wasn't intended for.
2) There has been discussion about branching for each new feature request we receive, and tag the branch with the SR work number. Should we be concerned about the overhead here? My understanding is that branching and merging can lead to complexity.
3) Simply add a comment to the changeset that is prefixed with the SR work item number. This is a simple approach, but when I View History, there doesn't seem to be an easy way to search through the changeset comments for the SR work number.
4) We're not terribly familiar with labelling, but would it be an option? It sounds like we could tag our Team Project with our SR work number once the work has been completed, and that would provide us with the snapshot we would need if we ever needed to refer back to the changes made.
Obviously, if I've missed the boat entirely, I'd be grateful for guidance.
You may not be aware that you can customize TFS work items. You can create a Service Request work item type and make it a kind of Requirement. Make the tasks needed to create the new feature children of the Service Request work item.
You can then use Branches, but only as a method for isolating the work of one feature request from another. As you check in work to the branch, be certain to associate each check-in with a task. You will be able to track the tasks across changesets and across branches.
As you perform builds, they will be associated to the changesets, and therefore, to the service requests. In the same way, test cases, bugs, and the tasks needed to remediate the bugs will also be associated to the service request. You will be able to track everything that happened with respect to that service request.
I assume you have a separate system for entering Service Requests and you want to continue using that. I'm also assuming that you are using the Agile process template in TFS (http://msdn.microsoft.com/en-us/library/dd997897.aspx), but this should also work if you are using the Scrum process template.
I would not suggest creating a custom work item for the Service Request, but rather adding a new field to your user story/bug and naming the field "SR work number". Creating custom work items and even adding new fields (adding a new field is less painful) is not recommended unless you really need it, as it becomes painful when you want to upgrade/migrate your project. You can find out how to customize work items at the link below:
http://tedgustaf.com/blog/2011/1/how-to-customize-tfs-2010-work-items-and-workflows/
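As a rough sketch of that customization (the collection URL, project name, and field names here are made up), you export the work item type definition, add the field, and import it back with witadmin:

witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:"User Story" /f:UserStory.xml

Then, inside the FIELDS element of UserStory.xml, add something like:

<FIELD name="SR Work Number" refname="MyCompany.SRWorkNumber" type="String" reportable="dimension" />

and add a matching Control entry to the FORM section so the field shows up on the form, before importing it back:

witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:UserStory.xml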
Based on the info you provided, I can suggest the following workflow. This might be too much for your needs, and if that's the case you can skip creating the user story and bug and directly create tasks.
Workflow:
1) Your helpdesk team creates a Service Request (in a different system) which generates a Service Request number.
2) The Helpdesk/Product/Dev team decides whether it's a new feature or a bug in existing code. Based on that, they create a User Story (for a new feature) or a Bug work item in TFS.
3) Tasks are child elements of User story so if you want to break down your user story (feature) into multiple tasks then you can create tasks as child elements to the user story.
4) You enter the Service Request number in the new field you created for it. You can also later use the field for reporting purposes.
5) When developers check-in the code they link it to the appropriate user story/bug/task.
I wouldn't suggest #2, #3 and #4 for the same reasons you mentioned.

Sitecore workflow with multiple publishing targets

I want to implement something simple like /System/Workflows/Sample Workflow, with the small addition of having multiple publishing targets (staging web and production web environments). So instead of the Approved state with the final checkbox set, I want to modify it to two states:
Approved for Staging
Approved for Delivery
Only Approved for Delivery should be final. I want to set a PublishAction for each of them, but I don't know how to set the publishing target.
This is a very common issue that ultimately ties to how Sitecore works. Your question seems to indicate that you understand that only one state in the workflow should be final -- it's great that you see that. There are ways to do this, but I would say some of them are not best practice. Also, as divamatrix mentioned, there are other custom approaches.
Deviate from best practice and mark both Approved for Staging and Approved for Delivery as final. I do not recommend this; I'm mentioning it as a solution so you can see the full circle of what you can do. The issue with this is that if you log in as an admin, you can potentially publish to any target, among other things. Generally, this is just not a good idea.
As divamatrix mentioned, there's a custom publishing provider by Alex Shyba on the topic. The article linked is the older approach. There's actually an update to that solution which seems to be the next best thing. That solution includes a custom workflow provider and some updates to the targets in Sitecore.
Another option is to de-couple workflow from publishing, which might sound drastic, but in theory makes sense. Basically, force content to go through all of the workflow, then have a publish-only role that is the only one that can publish the content. From there, they can publish to the staging site and get stakeholder approval before publishing live.
UPDATE: As of Sitecore 7.2, there is a built-in mechanism to publish to a pre-production target.
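The idea with that mechanism is that each state gets its own auto-publish action whose Parameters field names the target, along the lines of deep=1&targets=staging on Approved for Staging and deep=1&targets=web on Approved for Delivery. Treat that purely as a sketch: whether your version's publish action honours a targets parameter, and whether it expects the publishing target item name or the target database name, needs verifying against your Sitecore version.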
Here's a link to everything you need to know: Alex Shyba's blog entry on custom publishing targets. I can verify that it all works because I've currently got a site in production that uses exactly what Alex outlines. Let me know if you have questions.
UPDATE: As Mark points out, this link is indeed an older solution. It will work, but Alex's part 2 link as posted by Mark is a better solution.

How to set permissions to promote a file in Accurev

All,
We have had problems with engineers promoting files without the code being thoroughly tested and reviewed. They eventually ended up breaking the baseline. Instead of assuming the engineers will only promote their code after it has been reviewed and tested, I want to restrict their ability to promote until they are given permission to do so. For instance, after a code review, I would like to select the user/users and the file/files which they are allowed to promote. How can I automate this process?
How do the rest of you handle this "problem" of engineers deliberately or accidentally promoting files which end up breaking the baseline?
Thanks for your help.
There are several ways to address this. The easiest one is to put a Lock on the destination stream that essentially says "Only a specific user or a specific group can promote to this stream". This is done via point-and-click on a stream in the stream-browser. So now you end up with a barrier to entry to that stream which is something you can control. You can add additional layers of streams to supplement this approach as well. For example, if you currently have:
Prod_Stream -- Build_Stream -- Workspaces
... you could now make it:
Prod_Stream -- Build_Stream -- Review_Stream -- Workspaces
Put the promote lock on Build_Stream so that they can break Review_Stream all they want but you keep a more pristine environment in Build_Stream.
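The same lock can also be scripted from the command line; as a sketch (check accurev help lock for the exact flags and for how to except a specific user or group):

accurev lock -kt Build_Stream

which, if I remember the flag correctly, locks promotes to Build_Stream while leaving promotes into Review_Stream open to everyone.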
It sounds like you are not using AccuRev Change Packages, the ability to link source files to issue records. Those also become a powerful mechanism of control, where you can put constraints around promotion of those Change Packages, for example not allowing a Review to Build promotion unless the value of an issue field called "Status" has been toggled to "Passed Review". Those then become programmatic controls, as opposed to manually implemented ones.
There are plenty of ways to skin the proverbial cat in AccuRev. If you want more information, you could contact AccuRev Support or your specific account team to discuss alternatives.
Regards,
~James

Is there any form of Version Control for LSL?

Is there any form of version control for Linden Scripting Language?
I can't see it being worth putting all the effort into programming something in Second Life if when a database goes down over there I lose all of my hard work.
Unfortunately there is no source control in-world. I would agree with giggy. I am currently moving my projects over to a Subversion (SVN) system to get them under control. Really should have done this a while ago.
There are many free & paid SVN services available on the net.
Just two free examples:
http://www.sourceforge.net
http://code.google.com
You also have the option to set one up locally so you have more control over it.
Do a search on here for 'subversion' or 'svn' to learn more about how to set one up.
[edit 5/18/09]
You added in a comment you want to backup entire objects. There are various programs to do that. One I came across in a quick Google search was: Second Inventory
I cannot recommend this or any other program as I have not used them. But that should give you a start.
[/edit]
-cb
You can use the Meerkat viewer to back up complete objects, or use some of the test programs of libopenmetaverse to back up in a text environment. I think you can back up scripts from the inventory with them.
Jon Brouchoud, an architect working in SL, developed an in-world collaborative versioning system called Wikitree. It's a visual SVN without the delta-differencing that occurs in typical source code control systems. He announced that it was being open sourced in http://archvirtual.com/2009/10/28/wiki-tree-goes-open-source/#.VQRqDeEyhzM
Check out the video in the blog post to see how it's used.
Can you save it to a file? If so then you can use just about anything, SVN, Git, VSS...
There is no good source control in game. I keep meticulous version information on the names of my scripts and I have a pile of old versions of things in folders.
I keep my source out of game for the most part and use SVN. LSLEditor is a decent app for working with the scripts, and if you create a solution with objects, it can emulate a lot of the in-game environment (giving objects, reading notecards, etc.).
I personally keep any code snippets that I feel are worth keeping around on github.com (http://github.com/cylence/slscripts).
Git is a very good source code manager for LSL, and its cheap branching plus line-by-line diffs and merges suit the language well. Most Second Life scripts live in ONE FILE (since they can't call each other... grrr), so a good line-by-line comparison of that one file matters far more than anything done at the whole-file level; that is perfect for LSL. With that said, github.com (like SourceForge and Google Code) also allows you to make your code publicly viewable (if you so choose) and available for download in a compressed file for easier distribution.
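If you go that route, the setup is the usual Git routine; a minimal example (the folder, file name, and remote URL are placeholders):

git init lsl-scripts
cd lsl-scripts
# save each in-world script out to a .lsl text file in this folder
git add my_door_script.lsl
git commit -m "Initial version of door script"
git remote add origin https://github.com/youruser/lsl-scripts.git
git push -u origin master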
Late reply, I know, but some things have changed in SecondLife, and some things, well, have not. Since the Third Party Viewer policy still keeps a hard wall up against saving and loading objects between viewer and system, I was thinking about another possibility so far completely overlooked: Bots!
Scripted agents, AKA Bots, have all usual avatar actions available to them. Although I have never seen one used as an object repository, there is no reason you couldn't create one. Logged in as a separate account the agent can be wherever you want automatically or by command, then collect any or all objects you are working on at set intervals or by command, and anything they have collected may be given to you or collaborators.
I won't say it's easy to script an agent, and couldn't even speak for making an extension to a scripted agent myself, but if you don't want to start from scratch there is an extensive open source framework to build on, Corrade. Other bot services don't seem to list 'object repository' among their abilities either but any that support CasperVend must already provide the ability to receive items on request.
Of course the lo-fi route, just regularly taking a copy and sending the objects to a backup avatar, may still be a simple backup solution for one user, although that does necessitate logging in as the other account either in parallel or once every 20 or so items to be sure they are being received and not capped by the server. This process cannot rename the items or sort them automatically like a bot can. Identically named items are listed in inventory with the most recent at the top, but this is a mess when working with multiples of various items.
Finally, there is a Coalesce feature for managing several items as one in inventory. This is currently not supported for sending or receiving objects, but in the absence of a bot it can make it easier to keep track of projects you don't wish to actually link as one item. (Caveat: don't rez 'no-copy' coalesced items near 'no-build' land parcels; any that cannot be rezzed are completely lost.)