Synergy CM equivalent of SVN Externals?

I'm trying to determine if there is any behavior analogous to svn:externals in IBM Rational Synergy CM.
The ultimate objective is to find a more integrated way for Synergy projects in one database to contain files originating from another database, without manually copying them to the second database and manually propagating changes. SVN, of course, supports this through the concept of externals, which let you point to directories or files even in a different repository.
Searching has revealed nothing so far, and unfortunately the use of Synergy is mandated in this situation, so switching revision control tools is not an option. If manual copying is the answer, so be it; I just wanted to confirm.

The only way to share projects and files between databases is the DCM concept (see the manual). However, it's quite complicated to use and requires some scripting in order to function properly. If you have simple needs, you can always do it "outside" Synergy with a simple file/directory repository.

Related

Is there anyone out there using ClearCase with Sybase PowerBuilder?

Word has come down from on high to standardize our SCM system. And upon the clay tablets was written ClearCase.
I am reaching out to anyone who is actually using this configuration - to get best practices, hints and tips, war stories, anything...
The Sybase Source Control newsgroup only gives back the sound of crickets.
We currently have a boatload of actively maintained PowerBuilder 11.5 and EAServer 5.5 systems - so versioning at the PBL library file level is NOT an option.
And it will be a long, long time before we go to the newest version 12 - which removes the PBL file, uses text files, and works as a Visual Studio plug-in.
I've always used the following pattern:
_work.pbl
_last_minute_changes.pbl
1.pbl
2.pbl
3.pbl
...
I export the objects from 1, 2, 3... and check them into ClearCase. I set up a nightly build using PowerGen to do a bootstrap import to a network share. I use a script to pull those PBLs down into my view. I check an object out of ClearCase and import it into my _work.pbl, make my changes, export it, and check it back into ClearCase. A trigger then fires a CI build that imports the object into the _last_minute_changes.pbl, regenerates it against the previous night's PBLs, and then archives it to a network share.
I then refresh my view from the share using the script and delete the object from my _work.pbl. When it comes time to deploy, we run a script that takes the synced PBLs and turns them into PBDs.
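That refresh script can be tiny; here is a hedged PowerShell sketch, where the share path, view path, and file pattern are all placeholders:
# Copy the nightly-built PBLs from the network share into the local view.
Copy-Item -Path \\buildserver\nightly\*.pbl -Destination C:\views\myview\pbls -Force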
I used this process for a team of 100+ PowerBuilder developers in 4 states and it worked really well for us. Our application had over 12,000 objects and we never had any problems.
I do use ClearCase, but not directly with PowerBuilder projects.
The ClearCase manual has:
an extensive section on PowerBuilder integration,
and a couple of technotes, including a "Getting started with PowerBuilder and ClearCase Integration" document.
The Sybase infocenter (11.5) mentions settings affecting source control.
PowerBuilder projects or not, I recommend:
snapshot views for all development activities
dynamic views for consultation purposes (you can very well have both: one dynamic view to test your config spec, and one snapshot view to reuse the same tested config spec and actually copy the files locally; see the sketch after this list)
CC VOB servers (for hosting the repositories) should be on a LAN. If they are on a WAN, use CCRC (an RCP client communicating over the web with a ClearCase Web server, which in turn communicates with the VOB servers on its LAN)
CC view servers on a LAN (each client should manage its own view server)
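For example, a tested config spec can be carried from a dynamic view to a snapshot view with standard cleartool commands (the view tags and paths below are placeholders):
# Capture the config spec proven in the dynamic view...
cleartool catcs -tag my_dynamic_view > cspec.txt
# ...then create a snapshot view and apply the same tested config spec to it.
cleartool mkview -snapshot -tag my_snapshot_view C:\views\my_snapshot_view
cleartool setcs -tag my_snapshot_view cspec.txt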
I used ClearCase and PowerBuilder at a previous job.
We were using the IDE-integrated source control, and had it set up so that the individual objects were saved in ClearCase as raw text objects (.sro, .srw, etc.). I was not the one who exported the objects, so unfortunately I can't give details, but I think PB can do at least some of that for you. Anyway, with this configuration, when we checked in a file from PB, the IDE would automatically check the .srX file into ClearCase. This is the configuration you need, so that you can view the history of your changes using the ClearCase tools.
We also used PowerGen to automatically create PBLs using the source files in ClearCase. This is also a process you want to set up. Prior to this process we had to manually check the PBLs into source control (!!). I strongly advise against doing this - otherwise you cannot truly guarantee that the .srX files and the PBLs are in sync.
Anyway, that's a brief summary. Let me know if there's anything you would like me to clarify, and I'll do my best. Good luck!
I am the Source Code Control Administrator and I have been using ClearCase and PowerBuilder together (using the IDE integration) for about 7 years. We have the PBL objects (.srw, .sru, etc.) exported and in ClearCase. The PBLs are not in ClearCase. We also use PowerGen for regeneration instead of GLV because of GLV's issues with more complex systems.
ClearCase integrates beautifully with PowerBuilder (we are using 9 and we are doing an ROI on the upgrade to 12).
Search IBM's website for "Getting started with PowerBuilder and ClearCase.pdf". That contains some very good information.

PowerBuilder 11.5 & Version Control

What is the best version control system to implement with PowerBuilder 11.5?
If you have examples of how you have done branching/trunk/tags, that would be awesome. We have tried to wrap our heads around it a few times, and always run into problems because we use shared libraries such as PFC/PFE in multiple applications.
Right now we are only using PBNative, and it sucks.
Agent SVN is an MS-SCCI Subversion plug-in that works with PowerBuilder.
Here is a link that describes how to set up Agent SVN to work with PowerBuilder and Subversion.
We currently use Perforce and its P4SCC plugin, which works very well. In fact, I'm sure I read somewhere that the guys at Sybase who write PowerBuilder actually use Perforce themselves.
So, to be fair, let's start out by saying that while you're asking about version control, PBNative is source control. If you compare something that is intended to have more features than just keeping two developers from editing the same piece of source, then yes, PBNative will suck. The Madone SL may be an incredible bicycle, but if you're trying to take a couple of laps around an Indy track, it will suck.
"Best" is a pretty subjective word. There are lots of features available in version control and configuration management tools. You can get tons of features, but you'll pay through the nose. StarTeam has some nice features like being able to trace a client change request or bug report all the way through to the changed code, and being able to link in a customized diff tool (which is particularly useful in PB). Then again, if cost is your key criteria rather than features, there are lots of free options that will get the job done. As long as the tool supports the Microsoft SCC interface, you should be OK.
There is a relatively active NNTP newsgroup that focuses on source control with PowerBuilder, which you can also access via the web. You can probably find some already-posted opinions there.
Many years ago I used StarTeam to control PB applications. PowerBuilder, needless to say, is an outdated bear, and it has to export each and every object from its "libraries" into source control.
Currently our legacy PB apps have their libraries saved whole into Subversion, without any support for diffs, etc.
We use Visual SourceSafe. We don't use PFC, but we do have libraries that are shared among several projects. Until now, each project was developed separately from the others, and so the shared libraries were duplicated. To keep them synchronized, they were all shared at the VSS level. Lately we've reorganized our sources so all projects are near each other, and there's only one instance of the shared libraries.
VSS is definitely not the best source control system, to say the least, but it integrates into PB without the need for any bridges. PB has an inherent problem working with source control, so it probably won't make much of a difference working with one instead of the other (at least from the PB point of view).
Now, on a personal note, I'd like to say PB 11.5 is a piece of sh*t. It constantly crashes, is full of unbelievable UI nuisances, and just brings productivity to its knees. It's probably the worst IDE ever created. Stay away if possible.
FYI: the new PB12 (PB.NET) will integrate with SCC systems, so you can easily choose which source control system you want to use. Since we have basically dropped PBLs (they are now directories), files can be checked in/out individually - even with a plain vanilla editor, since files are now normal (Unicode) text files.
StarTeam integrates so beautifully with the PB IDE. I used that combination at my previous company (PB9 and ST5.x) for several years. You should be managing your code at the object level - don't log the entire PBL into ST...
If you're having problems with that setup, hit me up offline. phoran at sybase dot com.
We use Merant Version Manager for older projects and TFS for newer work. The only issues we have are that TFS does not support keyword expansion, and changing the 'read the flowerbox comments' attitude people have. Some folks are nervous about losing the inline versioning history.
We use StarTeam and have been very pleased with it. It combines bug tracking with version control. Unfortunately, though, we don't store our files at the object level. We just store the PBL files directly in source control. Anything that supports the SCC interface should theoretically work correctly with PowerBuilder.
PB9: we used PVCS, but had stability problems with PBL corruption and also problems coexisting with later versions of Crystal Reports (a DLL conflict), so now we use PB9 with Dynamsoft's SourceAnywhere Standalone. This system is more primitive; it is missing the more advanced features for promotion levels and for pulling out an older milestone version of all objects to make a patch build.
What we are looking for now is something that will allow more advanced "change management", to support promotion at the change level (rather than at the object level). Would it be better to use Perforce, StarTeam, (Harvest Change Manager + HarPB), or something else? Any advice on these combinations would be greatly appreciated.
You can always use Plastic SCM with PowerBuilder through SCC. Plastic is pretty advanced in terms of graphics, tools, replication and so on, so it's always a good choice to keep in mind.

Tool to list all source safe link files

My client is migrating from SourceSafe to ClearCase. They need to list all the link files in the SourceSafe database so the links can be carried over to ClearCase, as apparently all the source must be checked into ClearCase on day 1, losing any existing links.
Are there any tools for creating this report, or perhaps even doing the full import into ClearCase?
My plan is to write a PowerShell script to recurse the SourceSafe folders, finding links using COM.
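A rough sketch of what that script could look like, assuming the VSS COM automation interface is available (ProgID "SourceSafe"); the database path and credentials are placeholders:
# Open the VSS database via COM and walk the project tree, printing the
# specs of every file that is shared (linked) into more than one project.
$vss = New-Object -ComObject SourceSafe
$vss.Open('\\server\vss\srcsafe.ini', 'builduser', '')
function Find-Links($project) {
    foreach ($item in $project.Items($false)) {
        if ($item.Type -eq 0) {              # 0 = project (folder): recurse into it
            Find-Links $item
        } elseif ($item.Links.Count -gt 1) { # a file living in >1 project is a share
            $item.Links | ForEach-Object { $_.Spec }
            ''                               # blank line between groups
        }
    }
}
Find-Links $vss.VSSItem('$/')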
Thanks.
As I have mentioned in this question, clearexport_ssafe should be used for the import from SourceSafe to ClearCase.
However, the documentation for that tool explicitly mentions:
Shares. There is no feature in Rational ClearCase equivalent to a Visual SourceSafe share. clearexport_ssafe does not preserve shares as hard links during conversion. Instead, shares become separate elements.
So your script would need to list all links, and create soft links between their initial directory and the newly created separate element.
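For instance, a single such soft link could be recreated like this (the path is purely hypothetical):
# From the directory that used to contain the share, point a VOB symbolic
# link at the separate element created by the import.
cleartool ln -s ..\shared\common.h .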
But I believe you may want to consider another organization for the target ClearCase repository, one in which all shared files are no longer directly used, as illustrated by this answer (for an SVN repository in this instance):
We have eliminated all of our linked files. All class files that were previously linked have been placed into class libraries which are shared to our other projects as shared project references in the solution. So in essence you share libraries, not class files.
There was a bit of an adjustment process getting used to this, but I haven't missed links since then. It really does promote a better design practice, having your code set up like this.
I work mainly with UCM, and all those "shares" are natural candidates for UCM components, with UCM baselines to refer to their different versions; you can then make your own "configuration" (list of labels) in order to select the different components you need, making them easily reusable across projects.
As VonC mentioned, the import from VSS to ClearCase is truly atrocious, as:
The export/import takes forever to complete - so much so that we opened a PMR against IBM for it (that didn't help, btw).
The SourceSafe shares are transformed into files, which creates duplicates all over the place (the horror!).
I work on ClearCase UCM myself, and we took the same decision as you (which, in my 10 years of experience in CM, is ALWAYS the best decision): leave the history behind for reference and import at most a couple of versions, one on top of the other, by hand (e.g. current in development; current in test; current in live).
The way we solved the shares' problem is as follow:
The "shares" where isolated from the source-tree, to be imported independantly from the other sources
The other sources where imported (without the history and without the shares) from scratch. Let say in a component called MAIN_SRC
The shares where imported (without the history) from scratch. Let say in a component called SHARE_SRC
A project was created containing both components: MAIN___SRC, and SHARE_SRC.
Now, the problem is not solved because your shares are living aside your main source code, when your IDE (e.g. Visual Studio) fully expects them to be in the same folders they were before (i.e. in Visual all your projects become wrong if you don't solve this issue, and all the files would need to be relinked from within Visual itself, etc... A lot of work).
This is resolved by using ClearCase VOB symbolic links:
Let's say that in MAIN_SRC you need to use a file called mySharedFile that lives in SHARE_SRC.
From within the folder that needs to use mySharedFile, use the command-line interface and run:
cleartool ln -s ..\..\SHARE_SRC\(myPath)\mySharedFile .
You need as many ..\.. as necessary to go up to the component folder level in ClearCase, and then down following your path (myPath) in the SHARE_SRC component folder.
Remember the ClearCase path is composed of:
M:\View_name\VOB_name\Component_name\Your first level of files and folders
(VOB_name\Component_name is the "root" of the component, except if you have a single-component VOB, in which case VOB_name\Component_name becomes just VOB_name)
The easiest way is to have a mapping of all the VOB symbolic links that need to be created, and put all necessary "cleartool ln -s" command lines in a script to run once.
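A minimal sketch of such a one-shot script in PowerShell, assuming a hypothetical links.csv mapping file with Folder and Target columns:
# links.csv (assumed format): Folder,Target
#   e.g. M:\myview\myVOB\MAIN_SRC\app,..\..\SHARE_SRC\lib\mySharedFile
Import-Csv links.csv | ForEach-Object {
    Push-Location $_.Folder         # go to the folder that needs the link
    cleartool ln -s $_.Target .     # create the VOB symbolic link there
    Pop-Location
}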
After that, you should be fine, and your IDE will think the sources are where they used to be.
Cheers,
Thomas

Version control of deliverables

We need to regularly synchronize many dozens of binary files (project executables and DLLs) between many developers at several different locations, so that every developer has an up-to-date environment to build and test in. Due to the nature of the project, updates must be done often and on demand (overnight updates are not sufficient). This is not pretty, but we are stuck with it for a time.
We settled on using a regular version (source) control system: put everything into it as binary files, get-latest before testing, and check in the updated DLLs after testing.
It works fine, but a version control client has a lot of features which don't make sense for us, and people occasionally get confused.
Are there any tools better suited for the task? Or maybe a completely different approach?
Update:
I need to clarify that it's not a tightly integrated project - it's more like an extensible system with a heap of "plugins", including third-party ones. We need to make sure those plugin modules work nicely with recent versions of each other and of the core. A centralised build, as was suggested, was considered initially, but it's not an option.
I'd probably take a look at rsync.
Just create a .CMD file that contains the call to rsync with all the correct parameters and let people call that. rsync is very smart in deciding which parts of files need to be transferred, so it'll be very fast even when large files are involved.
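For illustration, the wrapper can be as small as this (server and paths are hypothetical; -r recurses, -t preserves timestamps, -z compresses, --delete removes local files gone from the source):
# Pull the latest deliverables from the central module down to the local tree.
rsync -rtz --delete rsync://buildserver/deliverables/ /cygdrive/c/dev/bin/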
What rsync doesn't do though is conflict resolution (or even detection), but in the scenario you described it's more like reading from a central place which is what rsync is designed to handle.
Another option is Unison.
You should look into continuous integration and having some kind of centralised build process. I can only imagine the kind of hell you're going through with your current approach.
Obviously that doesn't help with the keeping your local files in sync, but I think you have bigger problems with your process.
Building the project should be a centralized process in order to allow for better control; otherwise your solution will be chaos in the long run. Anyway, here is what I'd do:
Create the usual repositories for source files, resources, documentation, etc. for each project.
Create a repository for resources. There will be the latest binary versions for each project as well as any required resources, files, etc. Keep a good folder structure for each project so developers can "reference" the files directly.
Create a repository for final builds which will hold the actual stable release. This will get the stable files, generated in an automatic way (if possible) from the checked-in sources. This will hold the real product, the real version for integration testing and so on.
While far from being perfect, you'll be able to define well-established protocols: check in your latest DLL here, generate the "real" version from the latest source there.
What about embedding a 'what' string in the executables and libraries? Then you can synchronise the desired list of versions with a manifest.
We tend to use CVS id strings as part of the what string.
const char cvsid[] = "@(#)INETOPS_filter_ip_$Revision: 1.9 $";
Entering the command
what filter_ip | grep INETOPS
returns
INETOPS_filter_ip_$Revision: 1.9 $
We do this for all deliverables so we can see whether the versions in a bundle of libraries and executables match the list in an associated manifest.
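A sketch of that check, assuming the SCCS-style what tool plus grep and diff are available, and a manifest.txt listing the expected strings (both file names are hypothetical):
# Extract the embedded version strings from the bundle and compare them
# with the manifest; empty diff output means the versions match.
what *.exe *.dll | grep INETOPS > current.txt
diff manifest.txt current.txt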
HTH.
cheers,
Rob
Subversion handles binary files really well, is pretty fast, and is scriptable. VisualSVN and TortoiseSVN make dealing with Subversion very easy too.
You could set up a folder that's checked out from Subversion with all your binary files (that all developers can commit to and update from), then just type "svn update" at the command line, or use TortoiseSVN: right-click on the folder, click "SVN Update", and it'll update all the files and tell you what's changed.
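A minimal sketch of that setup, with a placeholder repository URL and local path:
# One-time checkout of the shared binaries folder...
svn checkout http://svnserver/repos/deliverables C:\dev\bin
# ...then pull the latest before testing, and commit an updated DLL after.
svn update C:\dev\bin
svn commit -m "updated plugin DLL after testing" C:\dev\bin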

Do you version "derived" files?

Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it does really make more sense to only version the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason is that, after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus avoiding the need to version these files.
One scenario I can imagine where this could be useful is 'tagging' specific releases of a product, for use in a production environment (or any non-development environment) where the tools required for generating the output might not be available.
We also use targets in our build scripts that can create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for downloading by the users of your products.
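As a rough sketch, such a target might boil down to something like this in PowerShell (the build script, version number, and upload destination are all hypothetical):
# Build a release, archive the output, and publish the archive.
.\build.ps1 -Configuration Release
Compress-Archive -Path .\dist\* -DestinationPath .\product-1.2.3.zip -Force
scp .\product-1.2.3.zip user@downloads.example.com:/var/www/releases/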
I am using TortoiseSVN for small-system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. While it doesn't make a lot of sense in theory to have these under source control, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also, in case of disaster, the rollback to the previous step is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that creates a tarball including the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more a linear audit trail than branching-and-tagging source control.