I want to find the timestamp when a ClearCase snapshot view was last updated. By this, I mean the time when the last "cleartool update" was started.
Or, said another way, if I was going to make a dynamic view with a timestamp, what timestamp should I use to make it exactly equivalent to a given snapshot view?
The only way I can come up with is to look for the log file called update.[timestamp].updt that is written to the root of the snapshot view directory on every view update. But in some cases, I don't have access to this file. Is there another way?
The following command looks like it comes close, but I'm not sure if it's what I want:
ewat> cleartool lsview -prop -full ewatkins_11122_s_ewatkin4
ewatkins_11122_s_ewatkin4 /scfs3/vws_u/ewatkins/ewatkins_11122_s_ewatkin4.vws
Created 19-Apr-11.23:42:13 by ewatkins.cdev#dscddy02
Last modified 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Last accessed 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Last read of private data 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Last config spec update 25-Apr-11.15:50:13 by ewatkins.cdev#ewatkin4.us.oracle.com
Last view private object update 02-Jun-11.16:28:45 by ewatkins.cdev#ewatkin4.us.oracle.com
Text mode: unix
Properties: snapshot readwrite
Owner: arbor.hyperion.com/ewatkins : rwx (all)
Group: arbor.hyperion.com/cdev : r-x (read)
Other: : r-x (read)
Additional groups: arbor.hyperion.com/essbase_prerel
The above output was taken right after I did a snapshot update. You can see that last modified, last accessed, last read, and last update have all been set to the time when the snapshot update finished -- 02-Jun-11.16:28:45.
This time is not quite what I want. Assume the snapshot update takes several minutes. If I make a dynamic view with this timestamp, the dynamic view will include any new versions that were checked in during the several minutes while the update was running.
Am I out of luck unless I have the update.[timestamp].updt file?
I think I answered my own question --
The timestamp in the update.<timestamp>.updt file is the moment the "cleartool update" was started, but it's the time on the local machine, which may be different from the time on the ClearCase server machine.
For instance, the times on my two machines differ by about 3 minutes. So this timestamp is not what I'm looking for. Of course, I could just use NTP to synchronize the times, but I found something else interesting...
"cleartool update" does not update the "Last config spec update" time.
"cleartool setcs -current" DOES update the "Last config spec update" time to the moment the update started.
And since we're using Hudson to manage our snapshot views, and the Hudson ClearCase plugin does setcs -current, we are in luck!
The "Last config spec update" time is exactly what I want, and it's even the time on the ClearCase server, not the local host!
From what I tested, I don't think you would end up with any version newer than the "Last config spec update" date (while the last modified date is entirely managed by the OS).
So you should be OK creating a dynamic view with a time-based config spec rule set to the "Last config spec update" time.
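For reference, here is a minimal sketch of what such a time-based config spec could look like, using the "Last config spec update" value from the lsview output above (the element rules are placeholders for whatever your view should select):

# pin everything below to the snapshot's last setcs time
time 25-Apr-11.15:50:13
element * CHECKEDOUT
element * /main/LATEST
end time

The time rule applies to the element rules between it and the end time line, so LATEST is evaluated as of the snapshot's last setcs.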
See the IBM man page "How snapshot views are updated"
The update operation accounts for the fact that updates are not instantaneous. As your view is updated, other developers may check in new versions of elements that the load rules for your view select. To avoid loading an inconsistent set of versions, the update operation ignores versions in the VOB that meet both of the following conditions:
The version was checked in after the moment the update began.
The version is now selected by a config spec rule that involves the LATEST version label.
The update adjusts for the possibility that the system clocks on different hosts in a network may not be synchronized (that is, clocks are skewed).
I'm developing a system with database version control in LiquiBase. The system is still in pre-alpha development and there are a lot of changes that were reverted or supplemented by other changes (tables removed, columns added and removed).
The current changelog reflects the whole development history, with many failed experiments, and all of it is rolled out when initializing the database.
Because there is NO released version, I can start from scratch and capture the actual DB state in a single XML changeset.
Is there a way to tell LiquiBase to merge all change sets into one file, or is the only way to do that by hand?
Just use your existing database to generate the changelog that will be used from now on. For this you can use the generateChangeLog command from the command line; it will generate a changelog file with changeSets that represent the current state of the database. You can use this file in your project as the initial DB creation file, to be run against an empty database. Here's a link to the docs.
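For example, a minimal sketch of that command from the plain CLI (driver, connection details, and output file name are placeholders for your setup):

liquibase --driver=org.postgresql.Driver \
  --url=jdbc:postgresql://localhost:5432/mydb \
  --username=myuser \
  --password=secret \
  --changeLogFile=db.changelog-initial.xml \
  generateChangeLog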
There is a page in the Liquibase docs which discusses this scenario in detail:
http://www.liquibase.org/documentation/trimming_changelogs.html
To summarise, they recommend that you don't bother since consolidating your changelogs is both risky and low-reward.
If you do want to push ahead with this, then restarting the changelog using generateChangeLog, as suggested by @veljkost, is probably the easiest way. This is documented at http://www.liquibase.org/documentation/existing_project.html
Since I didn't find an automatic solution for this problem when the changelog is already deployed on several databases in different states, I will describe my solution here:
1. Generate a changelog of the current development state of your database using Liquibase's generateChangeLog, like:
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=current_state.yml
2. Audit the generated changelog and check whether it looks good (Liquibase is not perfect; it often generates clumsy statements). Also, if your schema contains some static data, like dictionaries, that was previously populated using Liquibase, you have to add it to the generated changelog as well; you can export data from your database using the generateChangeLog command mentioned above with the -Dliquibase.diffTypes=data property.
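For that data export, a sketch of the command (the output file name is a placeholder) could be:

mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=reference_data.yml -Dliquibase.diffTypes=data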
3. Now you need to prevent the generated changelog from executing on existing databases (it would obviously fail on prod, test, and other developers' local environments). You can do this using, for example, liquibase changelogSync (see the sketch just below) or Liquibase contexts, but all these options require some manual work on every database. You can achieve an automatic result by adding preConditions statements to your changeSets.
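If you did go the manual route, the changelogSync call would look roughly like this (a sketch assuming the Maven plugin and the file generated in step 1; it has to be run once against each existing database):

mvn liquibase:changelogSync -Dliquibase.changeLogFile=current_state.yml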
For changeSets intended to run on empty databases (the changelog you generated in step 1 above) you can add something like this:
preConditions:
  - onFail: MARK_RAN
  - not:
      - tableExists:
          tableName: t_project
Here t_project is the name of a table that existed before (most likely this should be a table added in the first changeSet, so every database which has run at least one changeSet will have it). This will mark the generated changelog as run on environments with an existing schema, and will run the generated changelog on every new database you want to migrate.
Unfortunately you have to adjust all legacy changeSets as well (I haven't found a better solution yet; I made this change using regex and sed). You have to add something like this:
preConditions:
  - onFail: MARK_RAN
  - tableExists:
      tableName: t_project
This is the opposite of the condition above. With this, all databases which have run at least one changeSet in the past will continue to migrate (EXECUTED status of changeSets) up to the changelog generated in step 1 above, and will mark the generated changeSets as MARK_RAN. For new databases, all previous changeSets will be skipped, and the first ones executed will be those generated in step 1 above.
With this solution you can push your merged changelog at any time, and no environment or developer will have any problem with manual syncing.
In ClearCase I can search for changed files with the command
cleartool find . -version "created_since(DATE)" -print
However, I am not sure if this "created_since" looks for the check-in date or the creation date of the file. Imagine I have created a file on Monday and didn't add it to the source control until Friday. Now I use said command to find all files "created_since" Thursday. Will it find my file?
It will find the file based on the check-in date, that is, the date the version was created.
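For example (the date, file name, and version here are arbitrary), the first command finds all versions checked in since 2 June 2011 at 16:00, and the second shows the creation date of one specific version:

cleartool find . -version "created_since(02-Jun-2011.16:00)" -print
cleartool describe -fmt "%n created %d\n" foo.c@@/main/3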
It depends on the ClearCase Explorer option "Preserve file modification time" (also valid for a ClearTeam Explorer 8.x):
By default, the last modified time of a ClearCase element is the time it was last checked in.
To preserve the last modified time during a checkin operation or when adding resources to source control, click the Preserve modification time when checking in files and adding new files to source control preference.
If that option is selected, then the version date (which is what the cleartool find created_since query uses) would be the file's last modification date.
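The command-line equivalent of that preference is the -ptime flag, for example (file names and comments here are placeholders):

cleartool mkelem -ptime -c "add file, preserving its mtime" bar.c
cleartool checkin -ptime -c "check in, preserving the mtime" foo.c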
I'm having some issues with the installation of Rational Team Concert on my server.
The thing is that when I upload any kind of change to the server, it changes the last modified attribute of the file, but it shouldn't.
Is there a way to avoid this behavior?
Thank you in advance!
This is something that we have tried to add to RTC SCM (and we still plan to). However, we found that it needs to be an option on load/update.
There are numerous details and discussions available at this work item on jazz.net.
Regarding the timestamp, setting aside the fact that relying on it in a version control tool isn't always considered a best practice (see "What's the equivalent of use-commit-times for git?"), it is actually a complex issue:
an SCM loader wouldn't use just the timestamp to determine which file has changed (Task 179263)
you can have various requirements for that timestamp (like in Defect 159043, where the file timestamp of the modified file on disk is that of when it was delivered, not when it was accepted). The variable JAZZ_CCM_SKIP_MOD_TIME=true is mentioned there, so check whether it could improve your specific case (see the sketch after this list).
it is all based on the assumption that the timestamp is correctly set by the local workstation, which isn't always true, as illustrated in Task 77201
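If you want to try the variable mentioned in the second point, it is just an environment variable that has to be set before the client or build engine loads the workspace; where exactly to set it depends on your setup, but on a Unix-like build machine it would be something like:

export JAZZ_CCM_SKIP_MOD_TIME=true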
During synchronization with the CVS server, Eclipse compares the contents of the files (internally it uses CVS commands, of course). But files without any content change are also shown as different if they have a different timestamp, because they have been "touched". You always have to check manually, via the file comparison dialog, whether there was really a change or not.
Due to auto-generation, I have some files that always get new timestamps, so I always have to check manually whether they really contain any change.
In the Eclipse documentation I read:
Update and Commit Operations
There are several flavours of update and commit operations available in the Synchronize view. You can perform the standard update and commit operation on all visible applicable changes or a selected subset. You can also choose to override and update, thus ignoring any local changes, or override and commit, thus making the remote resource match the contents of the local resource. You can also choose to clean the timestamps for files that have been modified locally (perhaps by an external build tool) but whose contents match that of the server.
That's exactly what I want to do, but I don't know how! There is no further description/manual ...
Has anybody used this functionality and can help me (maybe even post a screenshot)?
Thanks in advance,
Mayoares
When you perform a CVS Update on a project (using context menu Team->Update), Eclipse implicitly updates the timestamp of local files whose contents match that of the server.
Contrived example:
{
productName: 'Lost Series 67 DVD',
availableFrom: '19/May/2011',
availableTo: '19/Sep/2011'
}
The view storeFront/currentlyAvailableProducts basically checks whether the current datetime is within availableFrom - availableTo and emits the doc.
I would like to force a view to regenerate at 1am every night, i.e. process/map all docs.
At first I had a simple Python script, scheduled via crontab, that touched each document, causing a new revision and hence a view update. However, since CouchDB is append-only, this wasn't very efficient: loads of unnecessary I/O and disk space usage followed by compaction, very wasteful on all fronts.
My second solution was to push the view definition again via couchapp push; however, this meant the view was unavailable (or partially unavailable) for several minutes, which was also unacceptable.
Are there any other solutions?
Will's answer is great; but just to get the consensus viewpoint represented here:
Keep one view, and query it differently every day
Determine your time-slice size, for example one day.
Next, for each document, you emit once for every time slice (day) that it is available. So if a document is available from 19 May to 21 May (inclusive), your emit keys would be:
"2011-05-19"
"2011-05-20"
"2011-05-21"
Once that is computed for every document, to find docs available on a certain day (e.g. today), just query the view with ?key="2011-05-18".
You never have to update or re-run your views.
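A minimal sketch of such a map function, assuming the field names from the contrived example above and that availableFrom/availableTo can be parsed by Date() (the DD/Mon/YYYY strings shown may need explicit parsing in your environment):

function (doc) {
  if (!doc.availableFrom || !doc.availableTo) return;
  var DAY = 24 * 60 * 60 * 1000;
  var from = new Date(doc.availableFrom).getTime();
  var to = new Date(doc.availableTo).getTime();
  for (var t = from; t <= to; t += DAY) {
    // build a "YYYY-MM-DD" key, one emit per day the product is available
    var d = new Date(t);
    var key = d.getUTCFullYear() + '-' +
              ('0' + (d.getUTCMonth() + 1)).slice(-2) + '-' +
              ('0' + d.getUTCDate()).slice(-2);
    emit(key, null);
  }
}

Querying with ?key="2011-05-20" (plus include_docs=true if you want the documents themselves) then returns everything available on that day.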
If you must never change your query URL for some reason, you might be able to use a _show function to 302 (temporary) redirect to today's correct query.
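A rough sketch of such a _show function (the view name is a placeholder, and the relative Location assumes the show function and the per-day view live in the same design document):

function (doc, req) {
  // compute today's "YYYY-MM-DD" key and redirect to the per-day view
  var d = new Date();
  var today = d.getUTCFullYear() + '-' +
              ('0' + (d.getUTCMonth() + 1)).slice(-2) + '-' +
              ('0' + d.getUTCDate()).slice(-2);
  return {
    code: 302,
    headers: { "Location": "../_view/available_on?key=%22" + today + "%22" },
    body: "Redirecting to today's products"
  };
}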
So your view is not being updated automatically I take it?
New and changed documents are not being added on the fly?
Oh I see, you're cheating. You're using "out of document" information (i.e. the current date) during view creation.
There's no view renaming, but if you were desperate you could use URL rewriting.
Simply create a design document "each day": /db/_design/today05172011
Then use some URL rewriting to change: GET /db/_design/today/_view/yourview
to: GET /db/_design/today05172011/_view/yourview
Create the view at 11pm server time (tweak it so that "now" is "tomorrow", or whatever).
Then add some cleanup code to later delete the older views.
This way your view builds each night as you like.
Obviously you'll need to front Couch with some other web server/proxy to pull this off.
It's elegant, and inelegant, at the same time.