My client is migrating from SourceSafe to ClearCase. They need to list all the link files in the SourceSafe database so the links can be carried over to ClearCase, as apparently all the source must be checked into ClearCase on day 1, losing any existing links.
Are there any tools for creating this report, or perhaps even doing the full import into ClearCase?
My plan is to write a PowerShell script that recurses the SourceSafe folders, finding links using COM.
Thanks.
As I have mentioned in this question, clearexport_ssafe should be used for imports from SourceSafe to ClearCase.
However, the documentation for that tool explicitly mentions:
Shares. There is no feature in Rational ClearCase equivalent to a Visual SourceSafe share. clearexport_ssafe does not preserve shares as hard links during conversion. Instead, shares become separate elements
So your script would need to list all links, and create soft links between their initial directory and the newly created separate element.
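As a starting point for the listing part, here is a minimal PowerShell sketch using the VSS COM automation interface (the SourceSafe type library must be registered on the machine; the srcsafe.ini path, credentials, and output format are placeholders):

$vss = New-Object -ComObject SourceSafe              # VSS automation object
$vss.Open('\\server\vss\srcsafe.ini', 'admin', '')   # database, user, password

function Find-SharedFiles($item) {
    if ($item.Type -eq 0) {                          # 0 = project (folder): recurse
        foreach ($child in $item.Items($false)) { Find-SharedFiles $child }
    }
    elseif ($item.Links.Count -gt 1) {               # a file with more than one link is shared
        $targets = ($item.Links | ForEach-Object { $_.Spec }) -join '; '
        "$($item.Spec) -> $targets"                  # emit one report line per shared file
    }
}

Find-SharedFiles $vss.VSSItem('$/', $false)          # walk from the root project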
But I believe you may want to consider another organization for the target ClearCase repository, one in which shared files are no longer used directly, as illustrated by this answer (for an SVN repository in this instance):
We have eliminated all of our linked files. All class files that were previously linked have been placed into class libraries which are shared to our other projects as shared project references in the solution. So in essence you share libraries, not class files.
There was a bit of an adjustment process getting used to this, but I haven't missed links since then. It really does promote a better design practice by having your code setup like this.
I work mainly with UCM, and all those "shares" are natural candidates for UCM components, with UCM baselines to refer to their different versions; you can then make your own "configuration" (list of labels) in order to select the different components you need, making them easily reusable across projects.
As VonC mentioned, the import from VSS to ClearCase is truly atrocious as:
The export/import takes forever to complete, so much so that we opened a PMR against IBM for it (that didn't help, btw)
The SourceSafe shares are transformed into files, which creates duplicates all over the place (the horror!).
I work on ClearCase UCM myself, and we took the same decision as you (which, in my 10 years of experience in CM, is ALWAYS the best decision): leave the history behind for reference and import at most a couple of versions, one on top of the other, by hand (like current in development; current in test; current in live).
The way we solved the shares problem is as follows:
The "shares" where isolated from the source-tree, to be imported independantly from the other sources
The other sources where imported (without the history and without the shares) from scratch. Let say in a component called MAIN_SRC
The shares where imported (without the history) from scratch. Let say in a component called SHARE_SRC
A project was created containing both components: MAIN___SRC, and SHARE_SRC.
Now, the problem is not solved yet, because your shares live apart from your main source code, while your IDE (e.g. Visual Studio) fully expects them to be in the same folders they were in before (i.e. in Visual Studio all your projects become broken if you don't solve this issue, and all the files would need to be relinked from within Visual Studio itself, etc. A lot of work).
This is resolved by using ClearCase VOB symbolic links:
Let's say that in MAIN_SRC you need to use a file called mySharedFile that lives in SHARE_SRC.
From within the folder needing to use mySharedFile, use the command line interface and run:
cleartool ln -s ..\..\SHARE_SRC\(myPath)\mySharedFile .
You need as many ..\.. as necessary to go up to the component folder level in ClearCase, and then down following your path (myPath) in the SHARE_SRC component folder.
Remember the ClearCase path is composed of:
M:\View_name\VOB_name\Component_name\Your first level of files and folders
(VOB_name\Component_name is the "root" of the component, except if you have a single-component VOB, in which case VOB_name\Component_name becomes just VOB_name)
The easiest way is to have a mapping of all the VOB symbolic links that need to be created, and put all necessary "cleartool ln -s" command lines in a script to run once.
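For illustration, a minimal PowerShell sketch of such a script (the links.csv mapping file and its Folder/Target columns are assumptions; adapt them to however you record the mapping):

Import-Csv 'links.csv' | ForEach-Object {
    Push-Location $_.Folder         # the folder that needs the link
    cleartool co -nc .              # the parent directory must be checked out first
    cleartool ln -s $_.Target .     # e.g. ..\..\SHARE_SRC\myPath\mySharedFile
    cleartool ci -nc .              # check the directory back in
    Pop-Location
}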
After that, you should be fine, and your IDE will think the sources are where they used to be.
Cheers,
Thomas
Related
Trying to determine if there is any behavior analogous to SVN:Externals in IBM Rational Synergy CM.
The ultimate objective is to find a more integrated way for Synergy projects existing in one database to contain files originating from another database, without manually copying them to the second database and manually propagating changes. SVN of course supports this through the concept of externals, which allows you to point to directories or files, even in a different repository.
Search has revealed nothing so far, and unfortunately use of Synergy is mandated in this situation so switching revision control tools is not an option. If manual copying is the answer, so be it, just wanted to confirm.
The only way to share projects and files between databases is the DCM concept (see the manual). However, it's quite complicated to use and requires some scripting in order to function properly. If you have simple needs, you can always do it "outside" Synergy with a simple file/directory repository.
Situation:
I need several swf/exe output files compiled in FlashDevelop from several projects. More than 60% of the ActionScript 3.0 source is common to all projects; the rest is project-specific. How can I organize that in FlashDevelop? I want to have a "one-click-to-build-all" setting without duplicating the common codebase (so when I need to fix something, I don't need to copy-paste the solution into several files).
All sources are under development and will change very often.
A straightforward solution is to make an external classpath, for instance:
c:\dev\shared_src\
c:\dev\project1\
c:\dev\project2\
Then configure each project:
Project Properties > Classpath
Add Classpath > select '../shared_src'
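The resulting entry in the .as3proj project file then looks roughly like this (path illustrative):

<classpaths>
  <class path="..\shared_src" />
</classpaths>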
PS: of course you should keep everything under source control.
Using svn:externals you could structure your repository in such a way that the common parts are stored just once in the source control system, so changes made there can be synchronised with just a single commit and update cycle.
For example, imagine that you have ^/ProjectA and ^/ProjectB, each of which requires ^/Common as a subdirectory.
Using svn:externals, pull ^/Common into both projects.
The exact nature of doing this will depend on the version of svn you use, and any client you use (such as TortoiseSvn). Refer to the relevant edition of the svn book for specifics.
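For example, with svn 1.5 or later (where ^/ repository-relative URLs are supported; older clients use a different property format):

svn propset svn:externals "^/Common Common" ProjectA
svn propset svn:externals "^/Common Common" ProjectB
svn commit -m "Pull ^/Common into both projects as an external"
svn update

After the update, each working copy contains a Common subdirectory backed by ^/Common, so a single commit there is picked up by both projects on their next update.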
The ease of implementing this will depend quite a lot on how separate the common code currently is in your application. Pulling in directories as directories is much more practical than trying to pull files into an existing directory, and unfortunately wildcards for file paths are not supported.
However, based on your description of your aim, this is the most straightforward solution I can imagine.
Hope this helps.
I've been asked to put every single file in my project under source control, including the database file (not the schema, the complete file).
This seems wrong to me, but I can't explain it. Every resource I find about source control tells me not to put generated output files into a source control system. And I understand why: they're not "source" files.
However, I've been presented with the following reasoning:
Who cares? We have plenty of bandwidth.
I don't mind having to resolve a conflict each time I get the latest revision, it's just one click
It's so much more convenient than having to think about good ignore files
Also, if I have to add an external DLL to the bin folder now, I can't forget to put it in source control, as the bin folder is no longer being ignored.
The simple solution for the last bullet-point is to add the file in a libraries folder and reference it from the project.
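For a Visual Studio project, such a reference then lives in the project file rather than in the bin folder (the library name and path here are illustrative):

<ItemGroup>
  <Reference Include="ThirdParty.Foo">
    <HintPath>..\libs\ThirdParty.Foo.dll</HintPath>
  </Reference>
</ItemGroup>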
Please explain if and why putting generated output files under source control is wrong.
You haven't explained what "the database file" is.
I would certainly include 3rd party libraries in source control, as they're necessary for the build and it's good to have a way of reproducing a build at a later time with the library versions you used at that particular moment. But yes, those libraries should be included from a "libraries" folder rather than the output directory.
I wouldn't generally include my own libraries built from the sources elsewhere in the same repository - although I have been in situations where that's been worth doing, where some projects didn't use the "latest and greatest" version of a common library, but only updated it occasionally.
The most important practical argument I'd give against including everything, in a world where disk, processor and network are considered free and instantaneous, is that it makes it harder to tell what really changed for any given commit. It's easier to look down a list of 3 source files than 3 source files and 150 binaries from the obj/bin directories.
Generated output files (in general) are "dangerous" in a VCS because:
what you need to version is how to regenerate them: the day you actually need to update them, chances are you won't remember how to do it
they can contain some private generated file which makes them work on the committer's desktop, but not on a client's ("works on my machine" TM syndrome)
some generated files are not easily stored as deltas (binaries especially), making them consume lots of space (and the topic of cleaning up that space will come up someday...)
External libraries are not generated directly by your project, and can be put in a VCS, although external repositories like a public Maven repo are better at this kind of management.
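For instance, with Maven you declare the library instead of committing it (coordinates illustrative):

<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.12.0</version>
</dependency>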
Do we also put in compiled object files such as class files, executables, and DLLs built from our source? What about when we're doing serious volume testing and that database becomes many gigabytes or terabytes in size?
The clue is in the name: it's Source Code Management System.
I can understand the simplicity of putting everything in; it makes it less likely that a developer forgets some important file. But if you're doing regular automated builds, then surely that gets picked up anyway?
I think the key phrase is here:
It's so much more convenient than having to think about good ignore files
Are you explicitly forbidden from having good ignore files? My guess is that you are already excluding .exe and .class (or whatever) files. Suppose you did take the trouble to exclude your database: would that be a problem? Why? It's a conscious action that you are choosing to take for the common good. In Eclipse it's a couple of seconds' work to add a new file type to the workspace's CVS ignore rules for all projects.
A rule of "No Ignore Files" is almost self-evidently absurd. Once you have the freedom to have some ignore files, why not just use them intelligently to exclude the DB? Who is inconvenienced? Only yourself, if anyone, and you're prepared to do the extra work.
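For example, a workspace-wide ignore list only needs a handful of patterns, one per line (the *.mdf pattern for the database file is an assumption about its type):

*.class
*.exe
*.dll
bin
obj
*.mdf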
For my current project, every time I set up a new workspace, I need to import hundreds of existing projects scattered in 20+ different directories. Is there a way to automate this step in Eclipse?
These projects are all checked into ClearCase.
This answer shows how to import an arbitrary set of projects into Eclipse using a custom plugin.
If I understand your question correctly, you would simply need to specify the paths of all the projects to import in the newprojects.txt file in the workspace root. You may want to remove the part that deletes existing projects though.
Could you import them all into an SCCS and then check them out all at once? You might try this as an experiment using CVS, not because you want to start using CVS in 2009, but because it has the best Eclipse support. If CVS can't do it, the others probably can't either.
For snapshot views, we have a "template" workspace which references the .project and .classpath files in a "standard" way:
c:\ccviews\projectA\vob1\path\...
c:\ccviews\projectB\vob1\path\...
c:\ccviews\projectC\vob2\path\...
So by copying that workspace, we are able to quickly set up the projects for a new member of the team.
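For instance, a .classpath entry can safely hard-code the standard view root, because every colleague maps a given project to the same path (entry illustrative):

<classpathentry kind="lib" path="C:/ccviews/projectA/vob1/path/libs/util.jar"/>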
Each colleague will define their own snapshot views with:
a unique name (colleague1_projectA_snap, colleague1_projectB_snap, ...)
the same root directory for each view referring to a given project (c:\ccviews\projectA for: colleague1_projectA_snap, colleague2_projectA_snap, colleague3_projectA_snap, ...)
Since a snapshot view can be located anywhere you like on your disk, you can:
define a standard path
scale that to a large number of snapshot views.
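For example, each colleague would create their view with something like this (adding whatever view-storage options your site requires):

cleartool mkview -snapshot -tag colleague1_projectA_snap c:\ccviews\projectA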
Of course, that would not be possible with dynamic views, since their paths would be:
m:\aUniqueName\vob1\path
You could ask each user to associate a view with a drive letter, but that does not scale to a large number of views.
Anyway, dynamic views are great for accessing and consulting data, not for compilation (the time needed to access any large jar or dll through the network is just too great).
Eclipse has the concept of project sets, but I'm pretty sure that's tied to using CVS. My team used this feature and it's how we shared the set of projects between us.
Two other alternatives I know of:
Buckminster
It's an Eclipse project which does component assembly, and one part of that is projects. Documentation was a bit crappy last time I played with it, but it does work. No idea if they have support for ClearCase, though it is extensible.
Jazz
Costs money and is also built on Eclipse. Covers similar ground to Buckminster but goes a whole lot further in team-orientated stuff.
I have created some scripts to do this for SVN. Currently, the scripts are run from Vagrant, but you could run them standalone. The process for ClearCase should be similar.
See the answer here, which provides links to the source code: https://stackoverflow.com/a/21229397/1033422
Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it really does make more sense to version only the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason is that after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus avoiding the need to version these files.
One scenario I can imagine where versioning generated files could be useful is when "tagging" specific releases of a product for use in a production environment (or any non-development environment) where the tools required for generating the output might not be available.
We also use targets in our build scripts that create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for download by users of your products.
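As an illustration, a minimal PowerShell packaging step might look like this (paths, the product name, and the upload destination are all assumptions):

# Build a release archive and push it to a download server
Compress-Archive -Path .\dist\* -DestinationPath .\release\myproduct-1.0.zip -Force
scp .\release\myproduct-1.0.zip user@downloads.example.com:/var/www/releases/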
I am using TortoiseSVN for small-system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst in theory it doesn't make a lot of sense to have these under source control, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also - in case of disaster - the rollback to the previous step is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that created a tarball that includes the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more a linear audit trail than branching-and-tagging source control.