XPages: set up SourceTree for two databases as branches

Just starting (again) with SourceTree on a fairly big XPages application.
We have two databases we use for development: one is the gold version database, the other is the development database. When we have to fix something, it often has to be done in both databases.
So I would like to use SourceTree locally for both databases, in such a way that both databases can co-exist, as branches, within the same repository, and changes to the main code are carried over automatically to the other database.
Is that doable? If so, how?
Thanks!

I would recommend that you only use source control (and SourceTree) with the development database and that you then update the gold version database using the development database as template.

If you want to treat this like 2 different branches, you can in fact have 2 different branches checked out at the same time; you just need to set up 2 different projects of the same repository in SourceTree (in different folder locations).
So to be clear this is the same repository but cloned in different locations.
Then these projects should have 'remotes' set up to one another. You can then push/pull from one remote to another. (Or if this is too complicated you can just push up to origin and then pull down from origin into the other project)
This allows you to keep these 2 projects checked out with different branches.
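Roughly, the command-line equivalent of that setup looks like the sketch below (the repository URL and folder names are just placeholders; SourceTree's clone and "Add Remote" dialogs do the same thing):

    # Clone the same repository into two separate working folders
    git clone https://example.com/xpages-app.git gold-work
    git clone https://example.com/xpages-app.git dev-work

    # Check a different branch out in each clone
    cd gold-work && git checkout -b gold && cd ..
    cd dev-work  && git checkout -b dev  && cd ..

    # Point the two clones at each other as remotes
    cd gold-work && git remote add dev ../dev-work   && cd ..
    cd dev-work  && git remote add gold ../gold-work && cd ..

    # Now you can fetch directly from one another...
    cd dev-work && git fetch gold
    # ...or simply push to origin and pull from origin in the other clone
    git push origin dev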
I have done this myself because I need to develop the same project in 2 different environments at the same time.
You then just need to figure out a good strategy for merging changes from one branch to another. One good strategy might be to keep your commits very small and 'atomic'; you can then use cherry-pick to choose which commits to apply to the other branch.
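For example, a small atomic fix made on the dev branch can be carried over to the gold branch like this (the commit hash is a placeholder, and the remote names follow the earlier sketch):

    # In the dev clone: commit the fix as one small, self-contained change
    git checkout dev
    git commit -am "Fix validation on the customer form"

    # In the gold clone: fetch and apply just that one commit
    git fetch dev
    git checkout gold
    git cherry-pick <hash-of-the-fix>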
Let me know if you have any more questions
Just as a side note: there should be no problem having 2 databases in one repository if you go down that other route. We have about 15-20 templates in our one big repository.
Just put the On Disk Projects in different subfolders, e.g.
.git/
goldodp/
devodp/

Related

Is it possible to re-integrate a sub-project and re-connect Mercurial history?

A while back, we pulled a number of more stable packages out of our main application into separate mercurial repositories. We share them in a limited way with some other clients, who access them via artifactory, although these external clients don't generally bother or have a need to stay up to date with our changes. (They are many months behind and doing fine because it's only a few interfaces that are crossing over.)
It's arguable that splitting into separate repositories has made things less efficient for us in that (a) it's more heavy-weight to make changes to the other jars and we sometimes don't bother and (b) it's harder to peruse the changeset history of a feature that involves changes in two or more repos.
We're considering bringing these back into the main repository, but I'm wondering: is there any way to re-connect the histories when doing so? Ideally I'd like to be able to trace the history of a given code file, progressing through recent changes, changes during the separation phase, and hopefully changes from before we split them apart. Is that possible?
I guess you can either pull all the separate histories into one repository (hg will complain they're unrelated, but will let you proceed anyway if you insist) and then merge them normally (move each into its own directory on its branch, then merge in a way that the files from all branches end up in a single tree), or you can filter the history with convert, export/import and maybe mq, but that will be quite difficult to implement.
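A rough sketch of the first option, assuming the split-off package lives in ../split-package-repo and should end up under packages/ (all names and revisions are placeholders):

    cd main-repo

    # --force lets you pull a repository with an unrelated history
    hg pull -f ../split-package-repo
    hg heads                          # you now have two unrelated heads

    # Update to the pulled head and move its files into the target directory
    hg update <head-of-split-package>
    mkdir -p packages/split-package
    hg rename <its top-level files> packages/split-package/
    hg commit -m "Move split package under packages/"

    # Merge the two heads so the histories are joined from here on
    hg update default
    hg merge
    hg commit -m "Re-integrate split package, keeping its history"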

GitHub Multiple Repositories vs. Branching for multiple environments

This might be a very beginner question, but I'm working on a large production website in a startup environment. I just recently started using Heroku, GitHub, and Ruby on Rails because I'm looking for much more flexibility and version control as compared to just locally making changes and uploading to a server.
My question, which might be very obvious, is if I should use a different repository for each environment (development, testing, staging, production, etc.) or just a main repository and branches to handle new features.
My initial thought is to create multiple repositories. For instance, if I add a new feature, like an image uploader, I would use the code from the development repository. Make the changes, and upload it with commits along the way to keep track of the small changes. Once I had tested it locally I would want to upload it to the test repository with a single commit that lists the feature added (e.g. "Added Image Uploader to account page").
My thought is this would allow micro-managing of commits within the development environment, while the testing environment commits would be more focused on bug fixes, etc.
This makes sense in my mind because as you move up in environments you remove the extraneous commits and focus on what is needed for each environment. I could also see how this could be achieved with branches though, so I was looking for some advice on how this is handled. Pros and cons, personal examples, etc.
I have seen a couple other related questions, but none of them seemed to touch on the same concerns I had.
Thanks in advance!
-Matt
Using different repos makes sense with a Distributed VCS, and I mention that publication aspect (push/pull) in:
"How do you keep changes separate and isolated across multiple deployment environments in git?"
"Reasons for not working on the master branch in Git"
The one difficult aspect of managing different environments is the configuration files, which can contain different values per environment.
For that, I recommend a content filter driver:
That helps generate the actual config files with the current values in them, depending on the current deployment environment.
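A minimal sketch of such a filter driver (the script names and the config file pattern are made up for illustration; the smudge script would inject this environment's values on checkout, and the clean script would restore the generic template on commit):

    # .gitattributes (versioned): route config templates through the filter
    echo 'config/*.yml filter=envconfig' >> .gitattributes

    # In each clone, say what the filter does for this environment
    git config filter.envconfig.smudge './scripts/expand-config.sh'
    git config filter.envconfig.clean  './scripts/restore-template.sh'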

Distributed Version Control. - Git & Mercurial... multiple sites

I'm looking for a best-practice scenario for managing multiple "sites" in Mercurial, since I'm likely to have multiple sites in a web root, all of which are different but somewhat similar (as they are 'customizations' of a root app).
Should I
A) make a single repository of the wwwroot folder (catching all changes across all sites)
B) make EACH site's folder a different repository
The issue is that each site needs a distinct physical directory, due to vhost pointing for development, and a current need to have "some" physical file differences across sites.
What's the best practice here? I'm leaning towards a separate repository for each directory, which will make tracking any branching and merging for that ONE site cleaner.
It depends on how your software is structured and how independent the different sites are. The best situation is when you can use your core code like a library that lives in its own directory, and there is no need for the different sites to change even a single file of the core. Then you have a free choice of whether to develop the core along with the different sites in a single repo, or to separate the core from the sites. When the core and the different sites depend on each other, you very probably have to deal with all of them in a single repo.
Since, in my experience, development works better when the different parts are independent of each other, I strongly recommend bringing the core stuff into something that can be included in the sites via a directory inclusion.
The next point is how the different sites are developed. If they share lots of code, they can be developed as different branches. But there are two disadvantages to this scheme:
the different sites are normally not visible to the developer, since there is typically only one checked out
The developer has to take great care where to make changes, so that only the wanted changes go into the other branches, not something that is special to a single branch only
You might consider moving common parts of the different sites into the core if they share lots of common code.
Another situation is when the sites have nothing in common; then things are much better. Then you need to decide whether you want them to reside in different repos, or as different directories in a single repo. When these different sites are somehow related to each other (say they all belong to the same company), it might be better to put them into a common repo, as different subdirectories. When they are unrelated to each other (every site belongs to a different customer, and changes on these sites are not made in sync with each other), then one repo per site is better.
When you take the one-repo-per-site approach, it might also be good to first create a template site, which includes the core component and basic configuration, and derive your site repos as clones from this template. Then, when you change something in the core that also affects the sites, you make these changes in the template and merge them afterwards into the site repos (just take care NOT to make such a change in one of the site repos, since when you merge from a site to the template you might get site-specific stuff into the template, which you don't want there).
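A hedged sketch of that template-and-clones workflow (the repository names are invented):

    # Each site starts life as a clone of the template repository
    hg clone template-site site-acme
    hg clone template-site site-bravo

    # A core change that affects all sites is committed in the template only...
    cd template-site
    hg commit -m "Core: update shared login handling"
    cd ..

    # ...and then merged into every site repo
    cd site-acme
    hg pull ../template-site
    hg merge
    hg commit -m "Merge template changes into site-acme"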
So I suggest
develop core as a single independent product
choose the correct development model for your sites
all in one repo, with branches, when there is much code exchange going on between different sites
but better refactor the sites to not share code, since the branches approach has drawbacks
all in one repo, no branches but different folders, if there is no code exchange between different sites
one repo for each site if they are completely independent.
I think you have to try Mercurial Queues with one repo, i.e.
you store "base" site in repository
all site-specific changes separated into the set of MQ-patches (one? patch per site)
you can create "push-only" repos in sites, add they to [paths] section of "working" repo and push changes or use export-copy technique
and after applying the site-patch to codebase you'll get ready to use code for each and every site
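Roughly, with the mq extension enabled, that workflow could look like this (the patch and repo names are placeholders):

    # Enable MQ in ~/.hgrc or the repo's .hg/hgrc:
    #   [extensions]
    #   mq =

    cd base-site-repo
    hg qnew site-a.patch      # start an empty patch for one site's customisations
    # ...edit the site-specific files...
    hg qrefresh               # record those edits into the patch
    hg qpop                   # unapply it to get back to the plain base code
    hg qpush                  # reapply it when you need that site's deployable tree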

Migrating from CVS to distributed version control (Mercurial)

Some background: We're working on projects that span 2 different countries, and we've been using CVS. Developers in the country not hosting the CVS server take forever to connect to the remote server, so we've set up a system with 2 separate CVS servers, one in each country, and a sync job that keeps them in sync every hour or so.
Given this, we're looking at migrating to a distributed version control system, mostly because we've been having problems with the sync job failing and the limitation that for a given set of files only one side can have the writelock for it at a time.
We're currently looking at Mercurial for this purpose, so can anyone help tell us if:
a. Will Mercurial be a good fit for our use case above? How easy will it be for devs to make the transition, i.e. will they still be able to work the same way? etc
b. Can Mercurial support branching a specific folder only?
c. We also hold a lot of binary docs in version control, will they be suitable for Mercurial?
d. Is there support for getting the "writelock" of particular files? i.e. I want no other people to update these particular files while I'm working on them
Thanks!
a/ and d/: yes and no. Yes, a DVCS like Mercurial is a good fit for distributed development, but by nature, there is no more "writelock", since there is no one "central server" which would be notified each time you want to modify anything.
You will pull (or check incomings) regularly from the remote repo.
b/ no, this isn't how a DVCS works, since a branch is no longer a copy of a directory.
c/ binaries are best kept outside a DVCS (since the repository will be cloned around, and binaries would make its size grow too fast).
See "How is Mercurial/Git worse than Subversion with binary files?"

Good GitHub structure when dealing with many small projects that have a common code base?

I'm working for a web development company and we're thinking about using GitHub for version control. We work with several different .NET-based CMS-platforms and of course with a lot of different customers.
We have a standard code base for each CMS which we start from when building a new site. We of course would like to maintain that and make it possible to merge some changes from a developed site (when the standard code base has been improved in some way).
We often need to make small changes to a published site at a later date and would like to be able to do this with minimal effort (i.e. the customer gladly pays for us to fix his problem in 2 hours, but doesn't want to pay for a 2 hour set up first).
How should we set this up to be able to work in an efficient fashion? I'm not very used to distributed version control (I've worked with CVS, Subversion and Clear Case before), but from my imagination we could:
Set up one repository for each customer, start with a copy of the standard code base and go from there. Lots of repositories of course.
Set up one repository for each CMS and then branch off one branch for each customer. This is probably (?) the best way, but what happens when we have 100 customers (=branches) in the same repository? It also doesn't feel entirely nice that we create a lot of branches that we don't really have any intention of ever merging back to the main branch.
I don't know, perhaps lots of branches is only a problem in my imagination, or perhaps there are better ways to do this that I haven't thought about. I would be interested in any experience with similar problems.
Thank you for your time and help.
With Git, several repos make sense for submodule purposes (sharing a common component; see the nature of Git submodules, in the third part of the answer).
But in your case, one repo with a branch per customer can work, provided you are using the branches to:
isolate some client-specific changes (long-lived branches with no merging back to master are OK),
while rebasing those same branches on top of master (which contains the common code, with common evolutions needed by all the branches).
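For instance, keeping a long-lived customer branch current could look like this (branch names are illustrative):

    # Common improvements land on master
    git checkout master
    git commit -am "Common: improve image handling in the base CMS"

    # Each customer branch is periodically replayed on top of the new master
    git checkout customer-acme
    git rebase master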