I have a snapshot view of a project with tens of thousands of files. I work remotely, so a live (dynamic) view wouldn't be practical. I am only testing with these files, so I never have to check anything back in, but I do want to be able to get any files that have changed.
The way it has been explained to me, there is no mechanism in ClearCase to identify my out-of-date files or to update just those files on request.
The only option I have is to replace the entire snapshot, which could mean waiting a very long time for the download (even on the local network, let alone remotely). Even then, I wouldn't know which files had been updated since my existing snapshot was made.
I'm new to ClearCase, but I have used SVN. SVN can show which files are out of date and update just those files.
Is there a way, with ClearCase, to get what I want? I feel (or want to think) that I may be misinformed about how it works.
The cleartool update command has a -print option:
-print Produces a preview of the update operation: instead of copying or removing files, update prints to standard output a report of the actions it would take for each specified element.
That should suffice to tell you what has changed and whether you need to update.
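For example, run from the root of the snapshot view (the path below is illustrative):

    cd /views/my_snapshot_view    # hypothetical view root
    cleartool update -print .

Nothing is copied or removed; you just get a report of what an update would do.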
btw: the update may analyze the entire view, but it only actually downloads the files that have changed.
update
Updates elements in a snapshot view
[...]
Updating Loaded Elements
For one or more loaded elements, the update command does the following:
* Reevaluates the config spec to select versions of loaded elements in
the VOB and loads them if they differ from the currently loaded
versions
You could also work more effectively by using labels or baselines. If you only update after a particular baseline, you could run cleartool diffbl to find the differences between your current baseline and the latest one. You could then just monitor for a new baseline. Or you can use cleartool lsact -l to examine the element versions recorded on a new activity.
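For instance (the baseline and activity selectors below are placeholders):

    cleartool diffbl -activities baseline:REL1.0@/vobs/my_pvob baseline:REL1.1@/vobs/my_pvob
    cleartool lsact -l my_activity@/vobs/my_pvob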
Do you have the option of using the ClearCase Remote Client (CCRC)? It is designed to efficiently support high-latency (i.e. WAN) connections to the ClearCase servers. See the ClearCase Knowledge Center:
Developing software with Rational ClearTeam Explorer
CCRC supports both Web views (similar to snapshot views) and automatic views (similar to dynamic views) and provides much better performance than CCLC (the "ClearCase Local Client" that supports snapshot and dynamic views) over a high-latency network.
The command line interface for CCRC (rcleartool) supports the 'update' operation, as does the ClearTeam Explorer GUI. The update operation evaluates which versioned files have changed and only updates that subset.
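Assuming a Web view has already been created and loaded, the equivalent of the snapshot-view update above would be something like:

    rcleartool update .    # run from the root of the Web view

(Exact options vary by release; check the help for your version.)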
Is there a way, with TortoiseCVS, to see what has changed in the repository since the last CVS update?
I am used to Eclipse's synchronize function, but now I want to view differences in a directory that isn't an Eclipse project.
I could check out the project somewhere else and use any diff tool. But that's ugly.
The command line version of cvs provides the '-n' option for this purpose. From the cvs manual:
Do not change any files. Attempt to execute the `cvs_command', but only to issue reports; do not remove, update, or merge any existing files, or create any new files.

Note that CVS will not necessarily produce exactly the same output as without `-n'. In some cases the output will be the same, but in other cases CVS will skip some of the processing that would have been required to produce the exact same output.
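For example, from the root of the working copy:

    cvs -nq update
    # U file.c  - a newer revision exists in the repository
    # M file.c  - the file is modified locally

The -q flag just reduces the noise; -n is what makes it a dry run.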
The option is also available in TortoiseCVS: choose "CVS Update Special" from the context menu and, in the dialog, check the box "Simulate Update" (it's on a separate tab in newer versions of TortoiseCVS).
However, I find the feature to be of limited usefulness, due to its cryptic output and low level of integration (e.g. it's not possible to click on a file and actually view the diffs).
Is there any way using Subclipse or Subversive to apply some kind of filter on Package Explorer that will hide all files that weren't modified locally?
It would sometimes be very useful when I just want to focus on my local changes (for example, to review them). I know that locally modified files are marked in the Package Explorer (in Subclipse by a "star" symbol), but in big projects with hundreds of files that doesn't help much (it would be much easier and clearer if only the modified files were visible).
Of course packages containing modified files should be visible as well.
Have you tried the Synchronize view? This shows all your changes in one view, which makes it easy to work with the items. You can also create changesets and group items by changeset when using this view.
Using Subclipse, I set the Synchronize view so that all SVN projects in my workspace are synchronized. I then pin it and set a schedule to refresh every hour. Local changes refresh immediately, the hourly schedule is for how often to check the repository for incoming changes.
You can put the view in Outgoing mode if you only want to look at your local changes.
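If you ever need the same information outside Eclipse, the Subversion command line offers a rough equivalent (run inside the working copy):

    svn status       # lists only locally modified/added/deleted items
    svn status -u    # also flags items that are out of date with '*'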
We have a legal requirement to ensure the latest version of documents (mainly Word and Excel) are readily accessible. We currently implement document control by manually updating the page footer with a new version number but want a better system.
I've played around with TortoiseSVN and the functionality is good, but the problem is that, unless I've missed a configuration option somewhere, Subversion applies revision numbers to the whole repository (i.e. every file in it), not to individual files. What I want is to be able to create a folder in the repository where all our documents go, with each file's version number changing only when that particular document changes. Currently, if we had 30 files and each was printed and put on display with its version number, anyone going back to the repository would almost certainly find the version number had changed even though the document contents were identical. Not ideal.
The alternative would be to create a new repository for each and every document, but the administrative overhead of that would be prohibitive. I'm essentially looking for something that does much of what TortoiseSVN does, but treats each file as an individual project with its own independent version number.
Whatever solution we come up with, we would want the version of the document to be automatically shown in its page footer. TortoiseSVN can do this with a macro: http://insights.oetiker.ch/windows/SvnProperties4MSOffice/.
Appreciate any help, thanks.
Greg
I think what you're actually looking for is a document management system.
Version Control Systems (VCS), such as Subversion (the underlying technology behind TortoiseSVN), are fundamentally unsuited for your task due to their focus on tracking changesets among all files within a given project (i.e. one changeset/version can involve changes to many files within the project).
Another advantage of using a document management system is that they typically allow you to attach extensible metadata attributes to your documents in a much richer way than version control systems, as well as providing search capabilities.
Our product is game-like, and is very rich (~40 MB-100 MB) in binary supporting files: textures, meshes, movies, etc. Like kai1968, I'd like to be able to sync in these assets, and not just code, with a single click.
Strictly speaking, however, that is different from version control: I have no desire to burden our TFS with irrelevant history for these files. Can I somehow upload files to TFS without keeping history? It would be even better if I could opt to keep history at specific points (say, label points), rather than at every check-in.
More generally, how do you manage sync of binary assets?
(I'm aware of other tools, perhaps better suited for such tasks, but diverging - or altogether migrating - from TFS is not an option right now.)
We've always kept binary assets in TFS when we needed to, and just dealt with the side effects of that choice (extra storage, longer check-ins because binaries can't be diffed, etc.). I don't believe there's a way to selectively destroy the history of certain files, except manually. If you want to do this periodically, by hand, you could do the following (sketched with tf.exe after the list):
Get a current copy of the binary files
Destroy (delete, history and all) the binary copies in TFS
Manually add the files back to TFS
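A hedged sketch of those steps with tf.exe (the server paths, file name, and workspace mapping are illustrative; destroy requires administrative permission):

    tf get $/MyProject/Assets /recursive          # step 1: get current copies
    tf destroy $/MyProject/Assets/big.dat         # step 2: remove the item and all its history
    tf add big.dat                                # step 3: re-add from the local copy
    tf checkin /recursive /comment:"Re-add current binaries"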
You'd have only the most recent copy, but this has a side effect: you'd break any previous builds, since an attempt to retrieve historical source wouldn't return these new copies of the files. TFS checks for a copy that matches the checkout you're attempting and, finding none, won't retrieve those files. You'd need to update your build scripts to pull the most recent binaries alongside the historical code if you wanted to build an old version, and even then it won't be a true history.
The second option is to check them in only periodically, not with every single minor change. For example, keep these files somewhere safe (a file share with daily backups) and only check in the changed binaries every week or so, or before every label, or whatever. This way you don't have incremental history, but you still have your label history. You might even consider writing an automated routine to apply labels, which would check in any changes in that folder first and then apply the label.
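That routine could be as simple as the following, scheduled weekly (the paths and label name are hypothetical):

    cd C:\work\MyProject\Assets
    tf checkin /recursive /comment:"Weekly asset drop" /noprompt
    tf label WeeklyAssets_42 $/MyProject/Assets /recursive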
Please post back what you end up doing - I'm curious to know!
Here are a few thoughts:
Consider using a separate VSTS project so you don't mix the binaries and code in the same project. This makes things a bit easier to manage (e.g. you can keep the assets separate, and any work items relating to them are more easily queried by filtering on the project). On the downside, it would mean two clicks to get latest.
Why don't you want to keep history? The point of source control is keeping history so you can go back to a particular build from a particular day. Otherwise you might as well just use a backup program on a network drive (and you really don't want to do that!)
If you're only worried about disk space usage, then don't be. 100 MB is tiny, and hard drives are cheap. My last game project had hundreds of gigabytes of assets, and we kept the history of every change for over three years.
The assets won't slow anything down. They only take time to process if you check them in or Get them, which are both activities you will need to do even if you don't use source control. Indeed, source control makes things faster because you have a "one click does it all" solution.
The many other benefits of source control are really useful on assets, and vastly outweigh the negatives.
I need to keep some large files (several gigabytes) under version control.
I don't need, and can't afford, to keep every version of these files.
I want to be able to remove old versions of the large files from my VCS at some point.
The files that I want to keep under version control are big .zip files or ISO images.
These files may contain executable software or data (seismic data, SAR images, GNSS data), and they are provided by my company's software supplier.
What version control system could I use?
In CVS you can do that by removing the files from the repository. Subversion allows it by dumping the repository contents and filtering the dump to remove the files (which is a bit cumbersome). Perforce has an obliterate command for this. Many of the newer distributed VCSs make it rather difficult through their use of hashes all over the place, and the fact that your repository may have been replicated elsewhere complicates things further. Mercurial has a strip command (part of the mq extension); Git can also do this, I think.
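A few concrete forms, with placeholder paths and revision numbers:

    p4 obliterate //depot/big/file.iso#3,5        # Perforce: report what would be removed
    p4 obliterate -y //depot/big/file.iso#3,5     # actually remove those revisions
    hg strip 42                                   # Mercurial: remove revision 42 and its descendants
    svnadmin dump /repo | svndumpfilter exclude big-files | svnadmin load /newrepo

(For Subversion, /newrepo must be a freshly created, empty repository.)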
I don’t think there’s any version control system that allows you to do that regularly, because it goes against everything version control systems stand for.
Perforce generally allows files to be stored in one of two ways: head revision only (so you'd only ever have one copy) or all revisions. Perforce also has the admin-level obliterate command, which can be used to delete revisions. It's up to you to query for a list of files, possibly by date or number of revisions, and to pass those revisions to the obliterate command. As the name suggests, obliterate deletes the revisions permanently from the database, so I always generate scripts to do this and review them before running them. If the obliterate command is run without the -y flag, it only generates a list of what would be obliterated, which is also very useful.
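For illustration, the "head revision only" behavior comes from the +S filetype modifier, and obliterate takes ordinary revision ranges (the file name here is made up):

    p4 add -t binary+S big.iso             # keep only the head revision from now on
    p4 obliterate //depot/big.iso#1,3      # report what would be obliterated
    p4 obliterate -y //depot/big.iso#1,3   # permanently remove revisions 1 through 3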
Somehow I get the impression that you should not use a version control system at all. As said before, what you're trying to do goes against everything you would need a version control system for in the first place.
I suggest you create a file-system directory structure that makes sense for what you're trying to accomplish, so that you can organize your data, and just make backups of those files.
TFS has a destroy command that you can use to permanently delete files or revisions as you see fit.
There is more information at this MSDN article.
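For example, a hedged sketch (the path and changeset number are hypothetical) that removes old file contents while keeping the change history records:

    tf destroy $/MyProject/big.iso /keephistory /stopat:C1234 /preview
    tf destroy $/MyProject/big.iso /keephistory /stopat:C1234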
Many version control systems can be configured to store only the differences between versions of a file, saving space that way.
For example, if you commit a 1 GB file, change a part of it, and commit it again, only the changed part will be stored in the version control system.
There won't be 2 GB used (initial plus new file) but only 1 GB + the size of the changes.
There's just one downside: if you're storing files whose whole content changes from revision to revision, this can be counter-productive, since the changes take almost as much space as the original version. Archive files are an example of this: a small change in the (real) content can lead to a completely changed content of the archive file.
I'd suggest testing several version control systems yourself, against your specific needs and environment, and monitoring on the server side how each system's storage requirements change.
Some distributed version control systems let you create "checkpoints": a version you can use as a kind of base revision, which saves you from pulling all the history before the checkpoint on every checkout. So you can remove the big files, create a checkpoint, and check out/clone the repository from that checkpoint into a new directory. You then have a new, small repository without the history from before the checkpoint. If you don't need that history, you can burn the old repository to CD and use the new, partial one from now on.
I've only tested it in darcs, and there it works, but YMMV depending on version control system and use cases.
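If I remember the darcs 1.x commands correctly (this is from memory and may differ in later versions):

    darcs tag --checkpoint -m "after removing big files"   # inside the old repository
    darcs get --partial old_repo new_repo                  # clone only from the last checkpoint on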
It sounds to me like you need an intelligent backup system, rather than version control.
I use SyncBackSE; it allows you to keep a number of previous versions, and can also do things like "ignore all files changed more than 30 days ago".
It's one of the few bits of paid-for software I use. I think it's worth checking out.
I think you're talking about something like AlienBrain's "bucket" system, aren't you? That is, the ability to remove some revisions from version control.
If you want to destroy an item, it's normally called "obliterate" and it's supported by a number of systems out there.
Buckets, AFAIK are supported by:
AlienBrain
Accurev
PlasticSCM
I would save such files under a unique name (datestamped, perhaps), and perhaps additionally make a textual reference to the external file in the version control system.
Fossil allows you to do this via the "shun" mechanism. Fossil being a distributed SCM, however, means that this does not affect all repositories (for obvious reasons).
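Roughly, and assuming you know the artifact hashes of the unwanted versions: shunning is typically done through the repository's /shun web page, and a rebuild then purges the shunned artifacts:

    fossil rebuild my-project.fossil    # removes shunned artifacts from the repository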