Is there a way to have local-only changesets, i.e. changesets that should never be sent to the remote repository? My scenario in more detail:
I want to make certain configuration changes that only make sense in my local environment, not in the remote repository. I want to keep them in a local changeset that never gets pushed to the remote repository. Right now I always have to either shelve/unshelve or redo these changes.
You can put your changeset in the secret phase; Mercurial won't bother you with it, and it will not get pushed. You may need a dedicated branch for this too, as all descendants of a secret changeset will be secret as well.
However, what you are really looking for is Mercurial Queues (the mq extension), which lets you apply and unapply patches on top of any changeset.
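A minimal sketch of the phase approach (requires Mercurial 2.1 or later; the revision arguments are illustrative):

```shell
# Move the current changeset (and any descendants) into the secret phase;
# secret changesets are skipped by push and outgoing.
hg phase --force --secret -r .

# Confirm nothing local-only would be sent:
hg outgoing

# If you later decide to share a changeset after all, demote it to draft:
hg phase --draft -r tip
```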
Our team is looking to migrate our Perforce server over to Git. Is there a way to sync check-ins from a branch on our GitHub server back to Perforce, to keep them in sync? I have been looking at git-p4, and there seems to be lots of documentation on how to sync Perforce -> Git, but not the other way around. Ideally I would like it syncing both ways, Perforce <-> Git; is that possible with git-p4?
Perforce Git Fusion can do this, but developers would have to push changes to the Git Fusion server instead of the GitHub server.
Most of the time, merge conflicts were sorted out automatically. Perforce was the "master": people submitting from Perforce always had their change go straight through, while people submitting from git had their change submitted via this process:
1. lock the master git repo
2. fetch upstream p4 changes
3. rebase against upstream p4
4. submit to p4 if that all went OK
That still left a very small window between (3) and (4) where someone in Perforce land could submit a conflicting change to the same files. But in practice that only happened a couple of times over the course of several years, at probably hundreds of commits per week.
In those two cases I went in and manually patched things up, which was a bit clunky. I think you could just automatically discard the problematic change quite easily, but in the end, switching entirely to git made this problem go away.
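The four steps can be sketched as a small gate script. This is not the author's actual script: the lock path is a placeholder, and git p4 rebase covers steps 2 and 3 (it fetches from Perforce and rebases on top):

```shell
#!/bin/sh
# Sketch of the submit gate, assuming a git-p4 clone of the depot.
set -e
(
  flock -x 9        # 1. lock the master git repo
  git p4 rebase     # 2+3. fetch upstream p4 changes and rebase onto them
  git p4 submit     # 4. submit to p4 only if the rebase succeeded
) 9> /var/lock/git-p4-gate.lock
```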
Git-p4 is designed such that the git repository is initialized with data imported from Perforce. After this initial import, bidirectional communication between the git and Perforce repositories is fully supported, with the exception of branches/merges, which have limited support.
To import updates from Perforce to git:
git p4 sync
To submit changes from git to Perforce:
git p4 submit
For more details regarding git-p4 configuration please consult its documentation.
Update: I would always advise testing any flows in temporary repositories before deploying.
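Putting the pieces together, a typical git-p4 round trip looks like this (the depot path is a placeholder for your own Perforce layout):

```shell
# One-time import of the full Perforce history into a new git repo
git p4 clone //depot/project@all project
cd project

git p4 sync     # fetch new Perforce changes into refs/remotes/p4/master
git p4 rebase   # sync, then rebase your local work on top
git p4 submit   # send your git commits back to Perforce
```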
In the past I set up something like this. I had a gitolite repo and a p4 server. Changes pushed to the gitolite repo would be picked up by a cron job and turned into P4 commits (via git p4 submit). Similarly, changes submitted to p4 would be picked up by the same cron job and synced back to git (via git p4 rebase).
There were two git repos involved: the gitolite repo that normal git developers committed to, and a separate git-p4 based repo where the git-p4 operations took place.
I had a fairly small shell script to co-ordinate everything. The main tricky areas were:
Locking: you have to forcibly rebase the gitolite repo, so you need some way to ensure developers don't lose changes when this happens (I used a lock file).
Merge conflicts: very occasionally two people would edit the same section of a file in P4 and git at the same time.
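The lock-file part of this can be sketched generically with flock(1). The lock path is a placeholder, and a real job would do the git-p4 rebase/submit work while holding the lock:

```shell
#!/bin/sh
# Both the cron sync job and the gitolite update hook take this exclusive
# lock, so developers cannot lose pushes while the repo is being rebased.
LOCK=/tmp/git-p4-sync.lock
(
  flock -xn 9 || { echo "sync already running, try again later"; exit 1; }
  echo "lock held: safe to rebase the gitolite repo and run git p4 submit"
) 9> "$LOCK"
```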
I have faced this problem myself and as far as I know there's nothing of the kind.
In my case I have to basically:
Create a patch for the changes in Git: git diff sha1..sha2 > mypatch.diff
Make a list of the affected files: git diff --name-only sha1..sha2 > files.list
Apply the patch in the P4 repo: git apply mypatch.diff
Reconcile changes in P4: for file in $(cat files.list); do p4 reconcile "$file"; done
Edit your P4 change list and then submit it (I'm sure this can be automated too, but I haven't had the need for it).
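The steps above can be strung together in one small script. This is a sketch, not the author's helper scripts: P4_WORKSPACE is a hypothetical variable standing in for wherever your Perforce client is mapped.

```shell
#!/bin/sh
# Usage: ./share-to-p4.sh sha1..sha2
set -e
range="$1"
git diff "$range" > mypatch.diff                # 1. patch for the changes
git diff --name-only "$range" > files.list      # 2. list of affected files
cp mypatch.diff files.list "$P4_WORKSPACE"
cd "$P4_WORKSPACE"
git apply mypatch.diff                          # 3. apply patch in the P4 tree
while read -r f; do
  p4 reconcile "$f"                             # 4. open each file for add/edit/delete
done < files.list
p4 change                                       # 5. edit and submit the changelist
```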
I have a couple of scripts that help me with this, you can find them here: https://github.com/pgpbpadilla/git-p4-helpers#sharing-changes-git-p4
I have been using this workflow for about six months and it works for most cases.
Caveats
This creates a single change list for all the commits in the sha range (a..b).
In my company some users are pushing stuff into the repo without specifying the branch, thus causing a few issues. Is there a way to force the -b option on mercurial so the user always needs to specify a branch when pushing?
I mean, we could force a specific comment in the commit message. Can we do the same with the -b option?
If pushing all pushable commits is "causing a few issues", that is a problem with your workflow and supporting tools: pushing all branches is the default style for Mercurial, and you should design your processes with this in mind rather than emulating Git-style branch-by-branch pushing.
Yes, a way to force -b on push exists (redefine push in [alias]), but it's the wrong way to go.
You could have them use a new enough version of Mercurial and make the default phase of their commits 'secret' via
[phases]
new-commit = secret
in their .hgrc. As only changesets in the draft and public phases are shared, they would need to change the phase of the commits they plan to share prior to pushing. Accidental pushes of changesets added just for personal testing on other branches are thus easily avoided.
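With that setting, sharing becomes an explicit step; a short example (the revision number is illustrative):

```shell
# All new commits start secret, so a plain "hg push" sends nothing new.
# Promote only what you intend to publish, then push:
hg phase --draft -r 42
hg push
```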
When a deployment is in production, it can sometimes be difficult to justify pulling, just so you can merge and push changes back up (we have a fast forward only policy on our central repository).
However, I do want those changes to be merged up sooner rather than later, so that new deployments can benefit from the fixes. As such, I pull changes from the production deployment into a non-production deployment, and do the pull-merge-push from there.
I have set up a series of remotes, so that I can easily pull in changes from lots of remotes in one go, but that only helps if the working directories are clean.
Since we have dozens of repositories per deployment, dozens of deployments, and minor fixes may have been applied in any of them, I was wondering if there is an easy way to tell whether a remote has a clean working directory when pulling from it, and to flag up when a remote is dirty.
Definitions:
Deployment: A series of git repositories deployed using buckminster, cloned from our central (bare) repositories.
Production: A deployment which is running live. It is expected that production deployments will not be changed any more than is absolutely necessary, i.e. essential bug fixes only.
Non-production: A deployment which is not running live, where we are free to pull, merge, push and change branches without the risk of EGit messing up permission bits and causing a live system to stop working.
There is no hook on a remote repo side for git pull, so that means you need to:
have a job monitoring the state of the working directory (see "Checking for a dirty index or untracked files with Git" for the script executed by this job)
disable clone/pull/fetch when the working tree is dirty (one way, for instance, is illustrated in "Can I “disable” git-clone over http?")
This isn't a foolproof mechanism, though, and a more robust way would be:
For each non-production repo, have a bare non-production repo in addition to the non-bare one
a hook in the non-bare repo to sync the bare repo (propagate any new commit made in the non-bare repo to the bare one)
pull only from the bare "non-production" repos (a bare repo has no working directory, so no issue with a "dirty" state).
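A runnable sketch of the bare-twin idea in throwaway directories; the paths, hook, and branch name are assumptions, not part of the original setup:

```shell
#!/bin/sh
set -e
tmp=/tmp/bare-twin-demo; rm -rf "$tmp"; mkdir -p "$tmp"
git init -q "$tmp/deploy"               # non-bare deployment repo
git init -q --bare "$tmp/deploy-bare"   # bare twin that others pull from

cd "$tmp/deploy"
git remote add mirror "$tmp/deploy-bare"

# post-commit hook: forward every new commit to the bare twin
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
git push -q mirror HEAD:refs/heads/master
EOF
chmod +x .git/hooks/post-commit

git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "hotfix on deployment"

# the bare twin now has the commit, and has no working tree to be dirty
git -C "$tmp/deploy-bare" log --oneline master
```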
I often see the error below when doing git pull, even though I have NOT touched the files involved and am not working on them.
Pull is not possible because you have unmerged files
It is clear that the conflicted files have changed in the remote repo. But I don't understand why git pull cannot simply overwrite these files, which I haven't touched.
What can I or my team do to avoid getting these errors?
Edit:
Along with the solution, I want to understand why these errors occur.
Just to be clear, the other team members are working on other files (say xyz), and I am working on a separate set of files with no dependency on the xyz files. So if I pull the repo changes after a long time, with no changes on my side to xyz, why are there conflicts in those files?
Use git diff to see the problem files.
Look at these git cheat sheets for useful commands:
http://www.cheat-sheets.org/saved-copy/git-cheat-sheet.pdf
http://jan-krueger.net/wordpress/wp-content/uploads/2007/09/git-cheat-sheet-v2-back.svg
Here are some tips from my own experience (I'm not sure whether they're 100% correct):
Split the work with your team into parallel threads. Try not to work in the same files.
Try to avoid situations where two or more people are adding new files simultaneously. When one person adds new files, the others should pull as soon as possible.
Last but not least: run git push very often. This will keep your project's git up to date.
Since git pull does not pull individual files, its git merge phase will stop for manual correction of any merge conflicts. git merge and/or git pull (git pull being essentially git fetch followed by git merge) is an all-or-nothing operation: you either successfully merge the changes introduced on both (or all) of the branches you are merging together, or you don't merge any of them. The catch is when conflicts arise that must be manually resolved: at that point you are in an in-between state, having neither completed and committed the merge nor rolled it back to your previous state.
The message you are getting implies that you have previously done a git pull or a git merge, which stopped in the middle, requesting that you manually resolve some conflicts, which you have not yet done, but have rather continued on doing other stuff, and are now trying to do another git pull / git merge without ever having completed the first one.
Take a look at git status, and follow the suggested directions for either resolving your in-progress merge issues or resetting back to a not-in-the-middle-of-a-merge state.
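A throwaway demo of the state described above: a merge stops on a conflict, pull would refuse to proceed until you resolve or abort. The repo, branch, and file names here are all invented for the demo:

```shell
#!/bin/sh
# Reproduce the "unmerged files" state in a scratch repo, then abort it.
set -e
rm -rf /tmp/merge-demo; mkdir -p /tmp/merge-demo; cd /tmp/merge-demo
git init -q
git config user.name demo
git config user.email demo@example.com

echo one > f;   git add f;  git commit -qm base
git checkout -qb topic
echo two > f;   git commit -qam "topic side"
git checkout -q -                    # back to the original branch
echo three > f; git commit -qam "main side"

git merge topic || true   # stops: CONFLICT (content) in f
git status --short        # "UU f" = unmerged; pull would refuse here

git merge --abort         # discard the half-done merge
git status --short        # clean working tree again
```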
I have an iOS app that I need to deploy to several clients. Each client has a few small changes in the app (images, provisioning files, app name, etc.). 95% of the app is the same for all clients.
I don't want to maintain several git repositories (one per client). I would rather have one, with a branch for each customer.
I'm new to this branching thing and need to know if this can be achieved.
I plan to create the master branch with generic images/configs/etc.
Create a branch for each client
Update each branch with the customers images/configs/etc
Then when I make a change I will make it on master, and pull the changes from master into each branch. How can I stop the images, configs, etc. from being overwritten when I pull from master? Can I define certain files to be ignored when I do this for each branch?
Is there a better way of managing what I need to do?
You should pull master and then rebase your branches on top of it.
See "Git: How to rebase many branches (with the same base commit) at once?" for a concrete example.
For extra security, you can add a merge driver "keepMine" associated for your files, and declare that merge driver in a .gitattributes file present in each of your branches.
See "How do I tell git to always select my local version for conflicted merges on a specific file?".
That will change the SHA1 of your client branches, and you will have to force push them to the client repo, but since said clients aren't actively modifying files on their side, it isn't a problem.
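The merge-driver part looks like this. The name keepMine follows the answer above; using true as the driver command is the usual trick to keep the current branch's version of the file, and the file patterns are illustrative, not from the question:

```shell
# Register the driver in each clone (or once in ~/.gitconfig):
git config merge.keepMine.name "always keep the local version"
git config merge.keepMine.driver true

# In .gitattributes on each client branch, route the per-client files to it:
#   Images/**          merge=keepMine
#   Info.plist         merge=keepMine
#   *.mobileprovision  merge=keepMine
```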