Does ClearCase have a trigger for snapshot views?

It seems like the Trigger extensibility feature in ClearCase has to be attached to a VOB by the owner. I would like something similar that I can administer myself for my local snapshot views. Does such a feature exist?

There is no "local trigger" per view in ClearCase.
When you create a trigger (with mktrtype), you can:
attach it to a VOB
check if you are in a snapshot view by reading the environment variable CLEARCASE_SNAPSHOT_PN
(All operations executed in a snapshot view) The path to the root of the snapshot view directory in which the operation that caused the trigger to fire took place.
check if you are in a snapshot view by reading the environment variable CLEARCASE_VIEW_KIND
(All operations) The kind of view in which the operation that caused the trigger to fire took place; the value may be dynamic, snapshot, or snapshot web.
check if you are in the right view by reading the view tag CLEARCASE_VIEW_TAG
(All non-UCM operations; for UCM, all deliver and rebase operations and setactivity) View tag of the view in which the operation that caused the trigger to fire took place.
check if the trigger should execute itself for the right user: CLEARCASE_USER
(All) The user who issued the command that caused the trigger to fire; derived from the UNIX or Linux real user ID or the Windows user ID.
With all those elements, you could write a trigger script that calls a custom script versioned in the snapshot view: by convention (at a path agreed on in advance), the user could maintain their own script for a given snapshot view, as sketched below.
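For illustration, here is a minimal Python sketch of such a VOB-attached trigger script. The per-view hook location (.view_hooks/post_op.py) and the opt-in environment variable MY_HOOK_VIEW_TAG are conventions invented for this example, not anything ClearCase defines; only the CLEARCASE_* variables above are real.

    # Sketch of a VOB trigger script that delegates to a per-view hook script.
    # The hook path (.view_hooks/post_op.py) and MY_HOOK_VIEW_TAG are assumptions
    # made up for this example.
    import os
    import subprocess
    import sys

    def main():
        # Only act in snapshot views.
        if os.environ.get("CLEARCASE_VIEW_KIND", "") not in ("snapshot", "snapshot web"):
            return 0

        # Optionally restrict the hook to a single view tag (hypothetical opt-in EV).
        allowed_view = os.environ.get("MY_HOOK_VIEW_TAG")
        if allowed_view and os.environ.get("CLEARCASE_VIEW_TAG") != allowed_view:
            return 0

        snapshot_root = os.environ.get("CLEARCASE_SNAPSHOT_PN")
        if not snapshot_root:
            return 0

        # Conventional location of the user-managed script inside the snapshot view.
        hook = os.path.join(snapshot_root, ".view_hooks", "post_op.py")
        if os.path.isfile(hook):
            # Run the user's script; pass the user name along for its own checks.
            return subprocess.call([sys.executable, hook,
                                    os.environ.get("CLEARCASE_USER", "")])
        return 0

    if __name__ == "__main__":
        sys.exit(main())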
But unless you resort to that kind of indirection, ClearCase does not offer a per-view, user-administered trigger.

Complicated job aggregate

I have a very complicated job process and it's not 100% clear to me where to handle what.
I don't want code; it's just a question of who is responsible for what.
Given is the following:
There is a root directory "C:\server"
Inside are two directories "ftp" and "backup"
Imagine the following process:
An external customer sends a file into the ftp directory.
An importer application gets the file, and now the fun starts.
A job aggregate has to be created for this file.
The command "CreateJob(string file)" is fired.
The file has to be moved from ftp to backup. Should that happen inside the CommandHandler, inside the Aggregate, or on the JobCreated event?
StartJob(Guid jobId) gets called. A third folder, "in-progress", has to be created, and the file has to be copied from backup to in-progress. Who does it?
So it's unclear to me where file-system concerns have to be handled, given that the Aggregate cannot work correctly without the correct file-system state.
My first approach was to do that inside an infrastructure layer/library that listens to the events from the job layer, but that does not seem 100% correct.
And on top of this, what about replaying?
You can't replay things/files that were moved; you would have to somehow simulate the customer sending the file to the ftp folder...
Thankful for any answers.
The file has to be moved from ftp to backup. Inside the CommandHandler, inside the Aggregate, or on the JobCreated event?
In situations like this, I move the file to the destination folder in the Application service that sends the command to the Aggregate (or that calls a command-like method on the Aggregate, which is the same thing), before the command is sent to the Aggregate. In this way, if there are problems with the file system (not enough permissions, no space available, etc.), the command is not sent. These kinds of problems should not reach our Aggregate; we must protect it from the infrastructure. In fact, we should keep the Aggregate isolated from everything else: it must contain only pure business logic that is used to decide what events get generated.
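To make that ordering concrete, here is a rough Python sketch under those assumptions. The names (Job, JobApplicationService, create_job) and the repository with a save(job) method are invented for illustration; they are not taken from any particular framework.

    import shutil
    import uuid
    from pathlib import Path

    class Job:
        """Pure Aggregate sketch: it decides and records events, never touches the file system."""

        def __init__(self, job_id, file_name):
            self.job_id = job_id
            self.file_name = file_name
            self.uncommitted_events = []

        @classmethod
        def create(cls, job_id, file_name):
            job = cls(job_id, file_name)
            # Pure business decision: record the fact and let others react to it.
            job.uncommitted_events.append(
                {"type": "JobCreated", "job_id": str(job_id), "file": file_name})
            return job

    class JobApplicationService:
        """Application service: does the infrastructure work, then talks to the Aggregate."""

        def __init__(self, repository, ftp_dir: Path, backup_dir: Path):
            self.repository = repository      # anything with save(job) / load(job_id)
            self.ftp_dir = ftp_dir
            self.backup_dir = backup_dir

        def create_job(self, file_name: str) -> uuid.UUID:
            # File-system work first: if this fails (permissions, disk full, ...),
            # no command ever reaches the Aggregate.
            shutil.move(str(self.ftp_dir / file_name), str(self.backup_dir / file_name))

            # Only now is the (pure) Aggregate asked to make its decision.
            job_id = uuid.uuid4()
            job = Job.create(job_id, file_name)
            self.repository.save(job)         # persists the JobCreated event
            return job_id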
My first approach was to do that inside an infrastructure layer/library that listens to the events from the job layer, but that does not seem 100% correct.
Indeed, this seems like over-engineering to me. You must KISS.
StartJob(Guid jobId) gets called. A third folder, "in-progress", has to be created, and the file has to be copied from backup to in-progress. Who does it?
Whoever calls StartJob could do the copying before StartJob is called. Again, keep the Aggregate pure. In this case it depends on your framework/domain details.
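Sticking with the same illustrative names as the sketch above (a repository with load/save, a Job aggregate with a start() method and a file_name attribute, all assumptions), the caller of StartJob might look like this:

    import shutil
    from pathlib import Path

    def start_job(repository, job_id, backup_dir: Path, in_progress_dir: Path):
        # File-system work first, outside the Aggregate.
        job = repository.load(job_id)          # rebuilds the Aggregate from its events
        in_progress_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(str(backup_dir / job.file_name),
                     str(in_progress_dir / job.file_name))
        # Then the pure decision: the Aggregate records a JobStarted event.
        job.start()
        repository.save(job)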
And on top of this, what about replaying? You can't replay things/files that were moved; you have to somehow simulate that a customer sends the file to the ftp folder...
The events are loaded from the event store and replayed in two situations:
Before every command gets sent to the Aggregate, the Aggregate Repository loads all the events from the event store and applies every one of them to the Aggregate, probably by calling some applyThisEvent(TheEvent) method on the Aggregate. So these methods should have no side effects (they should be pure); otherwise you would change the outside world again and again at every command execution, and you don't want that.
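A tiny Python sketch of what "pure" means here; the event shapes and the event_store.events_for() call are placeholders, not a real API:

    class Job:
        """Aggregate sketch whose event application is pure (no I/O, no file moves)."""

        def __init__(self):
            self.state = "new"
            self.file_name = None

        def apply_event(self, event: dict):
            # Pure in-memory state transition, so replaying the full history is harmless.
            if event["type"] == "JobCreated":
                self.file_name = event["file"]
                self.state = "created"
            elif event["type"] == "JobStarted":
                self.state = "in-progress"

    def load_job(event_store, job_id):
        # Repository side: replay every stored event before handling the next command.
        job = Job()
        for event in event_store.events_for(job_id):   # event_store API is assumed
            job.apply_event(event)
        return job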
The read-models (the projections, the query-models) that present data to the user listen to those events and update the database tables that hold the data the users see. The events are sent to those read-models after they are generated and every time the read-models are recreated. When you introduce a new read-model, you must feed it all the events that were previously generated by the aggregates in order to build the correct/complete state. If your read-model's event listeners had side effects, what do you think would happen when you replay those long-past events? The outside world would be modified again and again, and you don't want that! The read-models only interpret the events; they don't generate other events and they don't change the outside world.
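For example, a read-model that only writes to its own query table can be rebuilt at any time by replaying all past events; this SQLite-based sketch is purely illustrative:

    import sqlite3

    def project_job_events(events, db_path="jobs_read_model.db"):
        # The projection touches nothing but its own table: no file moves, no commands.
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS jobs "
                     "(job_id TEXT PRIMARY KEY, file TEXT, status TEXT)")
        for event in events:
            if event["type"] == "JobCreated":
                conn.execute("INSERT OR REPLACE INTO jobs VALUES (?, ?, ?)",
                             (event["job_id"], event["file"], "created"))
            elif event["type"] == "JobStarted":
                conn.execute("UPDATE jobs SET status = ? WHERE job_id = ?",
                             ("in-progress", event["job_id"]))
        conn.commit()
        conn.close()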
There is a special third case, when events reach another type of model, a Saga. A Saga must receive an event only once! This is the case you were thinking of when you wrote "my first approach was to do that inside an infrastructure layer/library that listens to the events from the job layer". You could do this in your case, but it is not KISS.
I have a very complicated job process and it's not 100% clear to me where to handle what. I don't want code; it's just a question of who is responsible for what.
The usual answer is that the domain model (aka the "aggregate") makes decisions and saves them. Observing those decisions, some event handler induces the side effects.
And on top of this, what about replaying? You can't replay things/files that were moved; you have to somehow simulate that a customer sends the file to the ftp folder...
You replay the events to the aggregate, so that it is restored to the state where it made the last decision. That's a separate concern from replaying the side effects -- which is part of the motivation for handling the side effects elsewhere.
Where possible, of course, you prefer to have the side effects be idempotent, so that a duplicated message doesn't create a problem. But notice that from the point of view of the model, it doesn't actually matter whether the side effect succeeds or not.
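As a sketch of what an idempotent side effect could look like for this job process (the paths and the event shape are assumptions): the handler checks whether the work was already done before doing it, so a duplicated or redelivered event is harmless.

    import shutil
    from pathlib import Path

    def on_job_created(event: dict, ftp_dir: Path, backup_dir: Path):
        source = ftp_dir / event["file"]
        target = backup_dir / event["file"]
        if target.exists():
            return                      # already moved: duplicated delivery is a no-op
        if source.exists():
            shutil.move(str(source), str(target))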

Can a ClearCase trigger be suppressed by another ClearCase trigger?

I have a ClearCase trigger that runs a script after the checkin operation has been performed.
It works when a user checks in a new element version or adds a new element to source control.
When a file is deleted, however, I do not want the trigger to fire (or at least I don't want the script associated with it to run), but I know it will, because after an element is removed, the folder is inevitably checked in.
Is there a way for an rmelem operation trigger to somehow suppress the checkin operation trigger?
You might do that by:
defining a preop trigger on rmelem which sets a flag (like a file written somewhere accessible by any client)
modifying your postop trigger on checkin so that, if that file exists, it deletes it and does not execute the rest of the trigger.
But my point is: as far as I know, those triggers are independent of one another, so you need to come up with an external coordination mechanism in order for one trigger to influence another (see the sketch below).
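A minimal Python sketch of that flag-file coordination, assuming both triggers are attached (with mktrtype) so that they run this same script, and that a shared temp directory is acceptable. Keying the flag on CLEARCASE_PPID is an assumption that the rmelem and the directory checkin it causes fire from the same parent process; verify that in your environment.

    import os
    import sys

    # Flag keyed on CLEARCASE_PPID so flags from unrelated commands don't collide;
    # the temp-directory location is an arbitrary choice.
    flag = os.path.join(os.environ.get("TEMP", "/tmp"),
                        "suppress_checkin." + os.environ.get("CLEARCASE_PPID", "0"))

    op = os.environ.get("CLEARCASE_OP_KIND", "")

    if op == "rmelem":
        # Preop trigger on rmelem: drop the flag so the directory checkin that
        # follows knows it was caused by an element removal.
        open(flag, "w").close()
    elif op == "checkin":
        # Postop trigger on checkin: if the flag is present, consume it and stop here.
        if os.path.exists(flag):
            os.remove(flag)
            sys.exit(0)
        # ... otherwise run the real post-checkin work here ...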
You could also play with environment variables (if a certain EV is set, the postop trigger unsets it and does not execute itself), but I am not sure you can set and persist an EV across the executions of different triggers.
I am not sure if the trigger has to be run for all element types.
You can distinguish in your script whether the element is a directory or a file element using the env var CLEARCASE_ELTYPE. Maybe that helps?
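For instance, if the post-checkin work only matters for file elements, a short guard at the top of the trigger script could simply skip directory versions (the checkin that follows an rmelem is on a directory). This is only a sketch:

    import os
    import sys

    eltype = os.environ.get("CLEARCASE_ELTYPE", "")
    if eltype.startswith("directory"):
        sys.exit(0)     # directory checkin (e.g. after rmelem): do nothing
    # ... real post-checkin work for file elements goes here ...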
Another point is the env var CLEARCASE_PPID; the fine manual says:
You can use the CLEARCASE_PPID environment variable to help synchronize multiple firings...

Sitecore Cleanup Agent and Database Cleanup

In the Sitecore Control Panel there is a command to perform Database Cleanup. Does this clean up the History, PublishQueue and EventQueue tables in both the master and web databases?
There are also cleanup tasks in the web.config for the above tables. If they are only enabled on the CMS server, do they perform cleanup in both master and web databases?
Thanks
I assume you are referring to the "Clean Databases" option on the Control Panel/Databases screen.
The command prompts you to select which database (web, master, core) you wish to clean up.
I looked at the implementation of the CleanupDatabase method in the class Sitecore.Data.DataProviders.Sql.SqlDataProvider using DotPeek, and it does the following tasks:
1. Removes items that have parents, but the parents are not in the item tree.
2. Removes invalid language data.
3. Removes fields for non-existing items.
4. Removes orphaned items.
5. Removes unused blob records.
6. Removes fields from the orphaned items removed in step 4.
7. Rebuilds the Descendants table (which stores parent/child relationships).
8. Clears all caches.
I have confirmed that this task does not clear the History or PublishQueue tables. My EventQueue table was empty, so I could not test that one.
The web.config clean-up tasks iterate through all the <databases> nodes, so they should act on the web database even from the CMS environment. This can quickly be proven by examining one of these tables before and after the job has run.
Note: This analysis is based on reflecting Sitecore 6.6.0, version 121203.

eclipse CVS usage: clean timestamps

During synchronisation with the CVS server, Eclipse compares the content of the files (internally it uses CVS commands, of course). But files without any content change are also shown as different if they have a different timestamp, because they have been "touched". You then always have to check manually, via the file comparison dialog, whether there was really a change or not.
Due to auto-generation I have some files that always get new timestamps, and therefore I always have to check manually whether they really contain any change.
In the Eclipse documentation I read:
Update and Commit Operations
There are several flavours of update and commit operations available
in the Synchronize view. You can perform the standard update and
commit operation on all visible applicable changes or a selected
subset. You can also choose to override and update, thus ignoring any
local changes, or override and commit, thus making the remote resource
match the contents of the local resource. You can also choose to clean
the timestamps for files that have been modified locally (perhaps by
an external build tool) but whose contents match that of the server.
That's exactly what I want to do. But I don't know how!? There is no further description/manual ...
Did anybody use this functionality and can help me (maybe even post a screenshot)?
Thanks in advance,
Mayoares
When you perform a CVS Update on a project (using context menu Team->Update), Eclipse implicitly updates the timestamp of local files whose contents match that of the server.

When was a clearcase snapshot view last updated?

I want to find the timestamp when a clearcase snapshot view was last updated. By this, I mean the time when the last "cleartool update" was started.
Or, said another way, if I was going to make a dynamic view with a timestamp, what timestamp should I use to make it exactly equivalent to a given snapshot view?
The only way I can come up with is to look for the log file called update.[timestamp].updt that is written to the root of the snapshot view directory on every view update. But in some cases, I don't have access to this file. Is there another way?
The following command looks like it comes close, but I'm not sure if it's what I want -
ewat> cleartool lsview -prop -full ewatkins_11122_s_ewatkin4
ewatkins_11122_s_ewatkin4 /scfs3/vws_u/ewatkins/ewatkins_11122_s_ewatkin4.vws
Created 19-Apr-11.23:42:13 by ewatkins.cdev#dscddy02
Last modified 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Last accessed 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Last read of private data 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Last config spec update 25-Apr-11.15:50:13 by ewatkins.cdev#ewatkin4.us.oracle.com
Last view private object update 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Text mode: unix
Properties: snapshot readwrite
Owner: arbor.hyperion.com/ewatkins : rwx (all)
Group: arbor.hyperion.com/cdev : r-x (read)
Other: : r-x (read)
Additional groups: arbor.hyperion.com/essbase_prerel
The above output was taken right after I did a snapshot update. You can see that last modified, last accessed, last read, and last update have all been set to the time when the snapshot update finished -- 02-Jun-11.16:28:45.
This time is not quite what I want. Assume the snapshot update takes several minutes. If I make a dynamic view with this timestamp, the dynamic view will have any new files that were checked in during the several minutes while the update was running.
Am I out of luck unless I have the update.[timestamp].updt file?
I think I answered my own question --
The timestamp in the update.<timestamp>.updt is the moment that the "cleartool update" was started, but it's the time on the local machine - which may be different from the time on the clearcase server machine.
For instance, the time on my two machines are different by about 3 minutes. So this timestamp is not what I'm looking for. Of course, I could just use NTP to synchronize the times, but I found something else interesting...
"cleartool update" does not update the "Last config spec update" time.
"cleartool setcs -current" DOES update the "Last config spec update" time to the moment the update started.
And since we're using Hudson to manage our snapshot views, and the Hudson clearcase plugin does setcs -current, then we are in luck!
The "Last config spec update" time is exactly what I want, and it's even the time on the clearcase server, not the local host!
From what I tested, I don't think that you would end up with any version newer than the "Last config spec update" date (while the last modified date is entirely managed by the OS).
So you should be ok doing a dynamic view with the "Last config spec update" time-based rule.
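As a rough sketch (assuming cleartool is on the PATH and using the view tag from the question), you could scrape the "Last config spec update" line from cleartool lsview and emit a time-pinned config spec; the element rule printed here is only a placeholder for your real selection rules.

    import re
    import subprocess

    def last_config_spec_update(view_tag: str) -> str:
        # Parse "Last config spec update <date-time> by ..." from lsview output.
        out = subprocess.run(["cleartool", "lsview", "-prop", "-full", view_tag],
                             capture_output=True, text=True, check=True).stdout
        match = re.search(r"Last config spec update\s+(\S+)", out)
        if not match:
            raise RuntimeError("no 'Last config spec update' line found")
        return match.group(1)                     # e.g. 25-Apr-11.15:50:13

    if __name__ == "__main__":
        stamp = last_config_spec_update("ewatkins_11122_s_ewatkin4")
        print("time " + stamp)                    # config spec time rule
        print("element * /main/LATEST")           # placeholder: use your real rules
        print("end time")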
See the IBM man page "How snapshot views are updated"
The update operation accounts for the fact that updates are not instantaneous. As your view is updated, other developers may check in new versions of elements that the load rules for your view select. To avoid loading an inconsistent set of versions, the update operation ignores versions in the VOB that meet both of the following conditions:
The version was checked in after the moment the update began.
The version is now selected by a config spec rule that involves the LATEST version label.
The update adjusts for the possibility that the system clocks on different hosts in a network may not be synchronized (that is, clocks are skewed).