So far I haven't been able to find a clear answer, though it's possible that the answer is "change your workflow".
I've just started playing around with Mercurial's patch queue and I can see some serious power in it. It seems pretty awesome. In my tests, I've discovered that if you have a patch queue in repo1, and you pull from repo2, you can do some bad things. For example:
Create repo1, and clone it (call the clone repo2).
Enable the queue on repo1
Make some commits and some patches on repo1
Pull changes to repo2
On repo1, un-apply (pop) all your patches
Pull changes to repo2
Now you'll see two different branches, which makes sense from a certain viewpoint. However, since my patches aren't a part of repo1's history (at least until they're applied), it seems like there should be a way to tell Mercurial that my patches are off-limits and to only provide what's in the "official" history.
Is there a way to do this?
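For concreteness, the steps above look roughly like this (repo, patch, and file names are placeholders, and the mq extension is assumed to be enabled in .hgrc):
$ hg init repo1
$ hg clone repo1 repo2
$ cd repo1
$ hg qnew my-patch            # while applied, the patch is an ordinary changeset
$ echo work > file.txt
$ hg add file.txt
$ hg qrefresh                 # fold the change into the patch
$ cd ../repo2
$ hg pull -u ../repo1         # the applied patch gets pulled along with everything else
$ cd ../repo1
$ hg qpop -a                  # un-apply the patch; work committed in repo1 after this diverges from it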
Mercurial phases may be the answer to this.
Starting with Mercurial v2.1, you can configure mq changesets to be marked secret automatically. Secret changesets are ignored by the incoming/pull and outgoing/push commands.
To enable this behavior, you need to add the following to your config:
[mq]
secret = True
Once enabled, it behaves as follows:
$ hg qpush --all
applying my-patch
now at: my-patch
$ hg phase -r .
16873: secret
$ hg outgoing
comparing with https://www.mercurial-scm.org/repo/hg
searching for changes
no changes found (ignored 1 secret changesets)
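If you already have mq changesets sitting in the draft phase (for example, created before the option was enabled), you can demote them by hand; a sketch, with the revision as a placeholder:
$ hg phase --force --secret -r <rev>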
Related
We have a git-svn repository.
I have no trouble updating the trunk with my local changes (commit + push or commit + merge). However, sometimes a colleague makes some fixes and updates the git-svn repository, and I want to take their changes without committing mine, since my code is not yet ready for commit (even locally; I might want to revert some of it). I can't find a way to get their code and merge conflicting changes (I work with the Tower Git client on Mac OS X).
Is there a way to do the same thing an svn update command would do?
Thanks.
You could stash your current work in order to get a pristine working tree, allowing you to do your git-svn update.
You can see a similar technique in "How to handle IDE project files with git-svn".
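A minimal sketch of that stash round trip, assuming a standard git-svn checkout:
$ git stash                   # set uncommitted work aside
$ git svn rebase              # the closest equivalent of 'svn update'
$ git stash pop               # bring the work back, resolving conflicts if any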
Another alternative is Stacked Git (StGit).
See "How do I track local-only changes/change sets with git-svn?".
StGit is a Python application providing similar functionality to Quilt (i.e. pushing/popping patches to/from a stack) on top of Git.
These operations are performed using Git commands and the patches are stored as Git commit objects, allowing easy merging of the StGit patches into other repositories using standard Git functionality.
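A rough StGit sketch of the same idea (command names as in StGit 1.x; exact invocations may differ between versions):
$ stg init                          # older StGit versions need this once per branch
$ stg new local-fix -m "local fix"  # start a patch for your local-only change
$ stg refresh                       # fold working-tree changes into the patch
$ stg pop -a                        # set all patches aside before syncing
$ git svn rebase
$ stg push -a                       # re-apply them on top of the new upstream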
So I've made the switch from CVS to Mercurial for my website.
The biggest issue I'm having is that if I'm working on some files that I don't want to commit yet, I just save them. I then have other files I want to push to the server; however, if someone else has made changes to the repository and I pull them down, it asks me to merge or rebase, and either of these options will cause me to lose the local changes that I have not committed.
I've read that I should clone the repository for each project on my local host and merge it into the live repository when it's ready. This not only seems tedious, but it also takes a long time, as it's a large repository.
Are there better solutions to this?
I would have hoped that Mercurial would see that I haven't committed my changes (even though I have changed the file from what's on the server) and would just overlook the file.
Any input on this would be greatly appreciated. Thank you!
Also, I'm using the hg Eclipse plugin to work on my files and push/pull from the server.
hg shelve is your friend here, I think.
It comes from the shelve extension (maybe; see below). From the overview:
The shelve extension provides the shelve command, which lets you choose which parts of the changes in a working directory you'd like to set aside temporarily, at the granularity of patch hunks. You can later restore the shelved patch hunks using the unshelve command.
The shelve extension has been adapted from Mercurial's RecordExtension.
Or maybe it's the attic extension:
This module deals with a set of patches in the folder .hg/attic. At any time you can shelve your current working copy changes there or unshelve a patch from the folder.
It seems to have the same syntax as the shelve extension, so I'm not certain which one I've used.
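For reference, a minimal shelve round trip (assuming the shelve extension is enabled in your .hgrc; attic's commands look much the same):
$ hg shelve                   # set aside the uncommitted hunks you select
$ hg pull -u                  # sync with a clean working directory
$ hg unshelve                 # restore the shelved changes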
I second #Sam's answer. However, if you prefer to use standard Mercurial, a simple workflow is to
save your working dir changes in a temporary file,
sync your working dir with a specific revision, then
push, pull, merge ... whatever you want to do that requires a clean working copy, and
get back your changes from the temporary file into the working dir.
For instance:
$ hg diff > snapshot.patch               # save your uncommitted changes
$ hg up -C                               # get a clean working copy
$ hg pull                                # do the things ...
$ hg merge                               # ... that need a clean ...
$ hg commit -m "merge"                   # ... working copy, then
$ hg import --no-commit snapshot.patch   # get back your uncommitted work
First, are you working from the commandline, or using something like Tortoise?
If you're working from the command line and you've done a pull, Mercurial will not ask you to do anything, as it merely updates your local repository.
If you then do an hg update and have local changes, it should do what you're used to from CVS. It will update to the tip of the current branch, and attempt to merge your outstanding changes in. There are some caveats to that, so refer to the official docs at http://www.selenic.com/mercurial/hg.1.html#update.
Also, for temporarily storing changes, I would recommend MQ over shelve. Shelve only provides one storage area, whereas MQ provides as many as you need. MQ takes some getting used to, but it's worth the investment.
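A minimal MQ sketch of that set-aside/sync/restore cycle (the patch name is a placeholder):
$ hg qnew wip.patch           # capture outstanding changes as a patch (very old Mercurial needs -f for this)
$ hg qpop -a                  # un-apply it, leaving a clean working copy
$ hg pull -u                  # sync as usual
$ hg qpush -a                 # re-apply the patch on top of the new tip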
I exported a bunch of changesets from my local working repo after doing a pull from the server repository. To make sure the patches work, I cloned a fresh repo from the server and I tried to apply the changeset. Unfortunately, the import fails with this:
applying G:\OSS\premake-dev\premake-dev_rev493.patch
unable to find 'src/host/scripts.c' for patching
3 out of 3 hunks FAILED -- saving rejects to file src/host/scripts.c.rej
patching file src/base/api.lua
patching file src/host/scripts.c
patching file src/tools/bcc.lua
file tests/test_bcc.lua already exists
1 out of 1 hunks FAILED -- saving rejects to file tests/test_bcc.lua.rej
patching file tests/premake4.lua
patching file tests/test_bcc.lua
abort: patch failed to apply
[command interrupted]
I know the reason for the failure: it's due to a removed source file that no longer exists in the latest changeset. But I'm not sure how to fix up my patch so that it will apply cleanly against the current server repository.
I'm fairly new to Mercurial, so I may not be familiar with some of the terms used. Also note that I don't have write access to the Hg server repository, so in order to get my changesets in, I have to export them as a patch and submit that to the maintainers.
I must confess to not having used patches a lot, so this might not be the right workflow, but what I would try, if I were in that situation, would be to apply the patch to the changeset it was originally based on.
In other words, it sounds like you have the following case:
                    +-- you're here
                    |
                    v
1---2---3---4---5---6
     \
      \
       X   <-- patch was built to change 2 to X
What I would do, provided I knew which changeset the patch was originally based on, would be to update back to that changeset and apply the patch, which would add another head, and then merge this head into the tip of your repository.
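In commands, that would look something like this (the revision numbers follow the diagrams, and the patch file name is a placeholder):
$ hg update -r 2              # go back to the changeset the patch was built against
$ hg import my-change.patch   # apply and commit it there; this creates a new head
$ hg update -r 6              # back to the head you want to merge into
$ hg merge                    # merge the imported head in
$ hg commit -m "Merge imported patch"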
After those actions, the repository should look like this:
                        +-- you're here
                        |
                        v
1---2---3---4---5---6---8
     \                 /
      \               /
       7-------------/
       ^
       |
       +-- this is the changeset you committed after applying the patch
Now, for a different way.
Is using a patch the only way? A common approach with Mercurial is to set up your own repository, a fork, which starts out as a full clone of the central repository but to which you have commit access.
Thus you can commit your new changesets into your own clone.
If the central repository has new changesets added after you cloned, at some point you pull from it into your clone and merge.
Then when you're satisfied, you issue a pull request to the maintainers of the central repository, telling them that "Hey, I have some changes for you, you can pull them from my clone here: http://...".
This way, it is really easy for those maintainers to get everything in, since you've done all the hard work for them.
This would mean that you have the two repositories like this:
central: 1--2
clone:   1--2
You add your work:
central: 1--2
clone:   1--2--3
They add some changesets:
central: 1--2--3--4--5--6
clone:   1--2--3
Then you pull:
central: 1--2--3--4--5--6
clone:   1--2--4--5--6--7
             \
              \
               3   <-- this is your changeset
Then you merge:
central: 1--2--3--4--5--6
clone:   1--2--4--5--6--7--8
             \            /
              \          /
               3--------/
If the maintainers now pull from you, they get the exact same history into their repository.
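In commands, that fork workflow is roughly the following (all URLs are placeholders for wherever the central repository and your published clone live):
$ hg clone https://central.example.org/repo my-fork
$ cd my-fork
# hack on your feature, then:
$ hg commit -m "my new feature"              # your changeset (3 above)
# later, bring in what the maintainers added (4 through 7 in your clone) and merge:
$ hg pull https://central.example.org/repo
$ hg merge
$ hg commit -m "merge upstream"              # the merge changeset (8 above)
# publish your clone somewhere readable and send the pull request:
$ hg push https://bitbucket.org/you/my-fork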
There's also some support for rebasing, which would mean that you don't have to pull and merge; instead, the maintainers would issue a "pull with rebase" command, in effect relocating your changeset from its current position to a new position in their repository. That would look like this:
central: 1--2--3--4--5--6---7
                            ^
clone:   1--2--3            | relocated here
               |            |
               +------------+
This would only work if there are no merge conflicts. The method where you pull and merge is the best for them, since you do all the hard work; they only have to verify that the code is what they want.
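For completeness, one way the maintainers could relocate your changeset after pulling it (this needs the rebase extension, and the URL and revisions are placeholders):
$ hg pull https://bitbucket.org/you/my-fork       # your changeset arrives as a new head
$ hg rebase -s <your-changeset> -d <their-tip>    # relocate it onto the tip of their history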
For more information on forking, check out Tekpub's video on CodePlex and Mercurial, here: Tekpub: 7 - Mercurial With CodePlex, seek to around 21:15 for the forking part to start.
Note that a "fork" is basically just a clone. The way forking works on CodePlex is that it automates setting up a clone under your own account and sending the original maintainers the pull request; but if you create your own account on Bitbucket or CodePlex or wherever, publish your clone there, and just send the maintainers an email with your repository URL, that's all there is to it.
In my case, my failing hg import was related to line endings. The solution was putting this in my ~/.hgrc file:
[patch]
eol = auto
The question
An open source program uses CVS for version control. I would like to make a number of bug-fixes and submit patch bombs to the developers with commit access. I would also like to maintain my own semi-private fork that mainly tracks the main code-base but that includes my own features (these features, right now, should not be incorporated into the main code-base).
I prefer to use Mercurial for my own version control needs, but I am open to other version control systems if necessary.
I'd like to:
Be able to easily create patch-bombs against the current CVS source with my own bug-fixes
Keep track of history on my own features
Have fixes and improvements from the main tree easily incorporated in my new-feature fork
Easily apply my own bug-fixes to my new-feature fork
Be able to work and track change history without an Internet connection.
What suggestions do you have for doing this?
My current idea
My own best guess is below, to give you a better idea of what I am thinking about.
I will have 3 mercurial repositories.
The first two repos are managed as specified at (https://wiki.mozilla.org/Using_Mercurial_locally_with_CVS). One just mirrors the latest changes from the CVS upstream. I do "cvs update" then "hg commit" in this repo. The second repo holds my bug-fixes as patches using the mq extension; I pull from the first repo and rebase my patches every so often. When my patches are incorporated into the main tree, I remove them from the patch queue/make them permanent commits.
The third repo is my local fork. It will start out as a clone of the first repo. Then each time I do an update of the first repo, I'll pull from it into repo 3. My own features will be present directly as commits in this repo. When I fix a bug, I'll export a patch from repo 2 and apply it on top of the appropriate changesets pulled from repo 1.
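Roughly, a single round of updates across the three repos might look like this (paths are placeholders):
# repo1: the CVS mirror
$ cd repo1 && cvs update -dP && hg commit -A -m "CVS update"
# repo2: bug-fix patches kept as an mq queue on top of the mirror
$ cd ../repo2
$ hg qpop -a                  # set the patches aside
$ hg pull -u ../repo1         # bring in the new upstream state
$ hg qpush -a                 # re-apply (rebase) the patches on top
# repo3: the semi-private fork
$ cd ../repo3 && hg pull ../repo1 && hg merge && hg commit -m "merge upstream"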
I have used Git to manage changes on top of a CVS repository in a similar way. My solution in Git uses local branches instead of multiple repositories, but it sounds essentially similar to your proposed idea.
I found that this arrangement works best if you commit all the CVS metadata (in the CVS/ subdirectories) to your mirrored repository. This means that the CVS metadata gets replicated in the other repositories, but it doesn't cause any harm (and lets you run commands like cvs diff if you need to).
This actually stems from my earlier question, where one of the answers made me wonder how people are using the SCM/repository in different ways for development.
Pre-tested commits
Before (TeamCity, build manager):
The concept is simple: the build system stands as a roadblock between your commit and trunk, and only after the build system determines that your commit doesn't break things does it allow the commit to be introduced into version control, where other developers will sync and integrate that change into their local working copies.
After (using a DVCS like Git as the source repository):
My workflow with Hudson for pre-tested commits involves three separate Git repositories:
my local repo (local),
the canonical/central repo (origin)
and my "world-readable" (inside the firewall) repo (public).
For pre-tested commits, I utilize a constantly changing branch called "pu" (potential updates) on the world-readable repo.
Inside of Hudson I created a job that polls the world-readable repo (public) for changes in the "pu" branch and will kick off builds when updates are pushed.
My workflow for taking a change from inception to origin is:
* hack, hack, hack
* commit to local/topic
* git pup public
* Hudson polls public/pu
* Hudson runs potential-updates job
* Tests fail?
o Yes: Rework commit, try again
o No: Continue
* Rebase onto local/master
* Push to origin/master
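For reference, publishing the topic branch to the watched branch in step 3 boils down to a plain push like the following ('public' and 'pu' follow the naming above; the topic branch name is a placeholder, and 'git pup' is presumably a local alias wrapping something similar):
$ git push public my-topic:pu     # Hudson polls public/pu and builds whatever lands there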
Using this pre-tested commit workflow, I can offload the majority of my testing requirements to the build system's cluster of machines instead of running tests locally, which means I can spend the majority of my time writing code instead of waiting for tests to complete on my own machine between coding iterations.
(Variation) Private Build (David Gageot, Algodeal)
Same principle as above, but the build is done on the same workstation as the one used for development, only in a cloned repo:
How do we avoid using a CI server in the long term without suffering an increasing amount of time lost staring at builds running locally?
With git, it’s a piece of cake.
First, we ‘git clone’ the working directory to another folder. Git does the copy very quickly.
The next time, we don't need to clone; we just tell Git to fetch the deltas. Net result: instant cloning. Impressive.
What about the consistency?
Doing a simple 'git pull' from the working directory will realize, using the deltas' digests, that the changes were already pushed to the shared repository.
Nothing to do. Impressive again.
Of course, while the build is running in the second directory, we can keep on working on the code. No need to wait.
We now have a private build with no maintenance, no additional installation, not dependent on the IDE, and run from a single command line. No more broken builds in the shared repository. We can recycle our CI server.
Yes, you heard right: we've just built a serverless CI. Every additional feature of a real CI server is noise to me.
#!/bin/bash
# Work out the push URL of the remote (strip the "origin" name and the "(push)" label).
if [ 0 -eq `git remote -v | grep -c push` ]; then
    REMOTE_REPO=`git remote -v | sed 's/origin//'`
else
    REMOTE_REPO=`git remote -v | grep "(push)" | sed 's/origin//' | sed 's/(push)//'`
fi

# If a commit message was passed in, commit the current work first.
if [ ! -z "$1" ]; then
    git add .
    git commit -a -m "$1"
fi
git pull

# Clone the working directory into a private build area (first run only).
if [ ! -d ".privatebuild" ]; then
    git clone . .privatebuild
fi
cd .privatebuild
git clean -df
git pull

# Build, and push to the remote only if the build succeeds.
if [ -e "pom.xml" ]; then
    mvn clean install
    BUILD_STATUS=$?
    if [ $BUILD_STATUS -eq 0 ]; then
        echo "Publishing to: $REMOTE_REPO"
        git push $REMOTE_REPO master
    else
        echo "Unable to build"
        exit $BUILD_STATUS
    fi
fi
Dmitry Tashkinov, who has an interesting question on DVCS and CI, asks:
I don't understand how "We've just built a serverless CI" coheres with Martin Fowler's statement:
"Once I have made my own build of a properly synchronized working copy I can then finally commit my changes into the mainline, which then updates the repository. However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done. There is always a chance that I missed something on my machine and the repository wasn't properly updated."
Do you ignore or bend it?
#Dmitry: I neither ignore nor bend the process described by Martin Fowler in his ContinuousIntegration entry.
But you have to realize that DVCS adds publication as an orthogonal dimension to branching.
The serverless CI described by David is just an implementation of the general CI process detailed by Martin: instead of having a CI server, you push to a local copy where a local CI runs, then you push "valid" code to a central repo.
#VonC, but the idea was to run CI NOT locally, precisely so as not to miss something in transition between machines.
When you use the so-called local CI, it may pass all the tests just because it is local, but then break down later on another machine.
So is it integration? I'm not criticizing here at all; the question is difficult for me and I'm trying to understand.
#Dmitry: "So is it integration?"
It is one level of integration, which can help get rid of all the basic checks (like formatting issues, code style, basic static analysis, ...).
Since you have that publication mechanism, you can chain that kind of CI to another CI server if you want. That server, in turn, can automatically push (if it is still a fast-forward) to the "central" repo.
David Gageot didn't need that extra level, being already at the target in terms of deployment architecture (PC->PC), and needed only that basic level of CI.
That doesn't prevent him from setting up a more complete system integration server for more complete testing.
My favorite? An unreleased tool which used Bazaar (a DSCM with very well-thought-out explicit rename handling) to track tree-structured data by representing the datastore as a directory structure.
This allowed an XML document to be branched and merged, with all the goodness (conflict detection and resolution, review workflow, and of course change logging and the like) made easy by modern distributed source control. Splitting components of the document and its metadata into their own files kept proximity from creating false conflicts, and otherwise allowed all the work the Bazaar team put into versioning filesystem trees to be applied to tree-structured data of other kinds.
Definitely Polarion Track & Wiki...
The entire bug tracking and wiki database is stored in Subversion so that a complete revision history can be kept.
http://www.polarion.com/products/trackwiki/features.php