At my new workplace, the main project was split into two branches some time ago, because different customers began to have very different requirements. A pretty common scenario, I guess.
Now, I'm not a developer but a sysadmin, and not a Git expert, but I was wondering whether branches are really the right approach in these cases, because in my understanding a fork would be a better fit.
What the CTO is asking me to do is migrate this branch into a new Git repository. But he also says that he still wants to be able to compare commits, that is (in Eclipse + EGit) right-click on the workspace > Team > Show in History > select the commits he wants to compare > Compare with Each Other. I believe these requirements conflict with each other, so my main question is: is it possible to compare commits of different Git repositories?
My second question is: if a project with the same core starts to require different features, should it be branched, forked, or moved to a new repository?
I hope my question is not too broad.
There is no concept called a fork in Git. Git hosting services, such as GitHub or GitLab, provide such a feature. As far as Git is concerned, a fork is essentially just a branch. Moreover, every clone of a repository, even a local one, is essentially a fork.
To split your repository into two repositories that have a fork relationship, first create a clone of the repository, and then delete in each repository the branches that now belong to the other one.
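A minimal sketch of that split, assuming the two customer lines live on branches named customer-a and customer-b (the paths and branch names are placeholders for your own):

# create the second repository as a clone of the first
git clone /path/to/project /path/to/project-customer-b

# in the new clone, keep only the customer-b line
cd /path/to/project-customer-b
git checkout customer-b      # creates a local branch from origin/customer-b
git branch -D master         # drop local branches that stay with the original
git remote remove origin     # optional: cut the link back to the original repo

# in the original repository, delete the branch that moved out
cd /path/to/project
git branch -D customer-b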
The usual approach to comparing forks is to add the other repository as a remote. This is possible in your case too, since both repositories share the commits from before the forking point. More on remotes here: What is "git remote add ..." and "git push origin master"?
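So, to answer the main question: yes, commits from different repositories can be compared, as long as one repository has fetched the other's objects. A rough command-line sketch (the remote name and path here are placeholders):

# inside the original repository, register the new repository as a remote
git remote add customer-b-repo /path/to/project-customer-b

# download its commits; the shared pre-fork history is already present
git fetch customer-b-repo

# any two commits can now be compared, regardless of where they were made
git diff <commit-in-this-repo> <commit-from-customer-b-repo>

In EGit the equivalent is to add the remote in the Git Repositories view and fetch from it; once the commits are local, Compare with Each Other works just as it did before the split.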
Summary:
We used to be on SVN, working with an offshore team building our automation.
We merged periodically to keep the local and offshore teams in sync, roughly every other day.
The local team switched to Git while the offshore team continued their work on SVN (security policies, still fleshing out new Git processes, and so on).
We now have two different projects that are out of sync.
Question:
Now that we want them to move to Git so that we can sync up, I can think of a couple of options once their code is in Git:
Manual merge (it's been a month, but I think I can do it): manually copy files over, fix the build, etc.
Have the offshore team rename their files slightly so that Git treats them as new files, and then manually resolve any duplicates.
There shouldn't be many.
Any advice would be much appreciated.
The manual merge should be a straightforward solution:
Compare and merge two different folders, one with your Git repo, one with the latest SVN HEAD revision.
Once the merge is done, you can send them back a Git bundle (a single file), which they can clone from and start working in their own cloned Git repo.
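A sketch of that bundle round-trip, with placeholder file and branch names:

# on your side: pack the branch into a single file they can carry over
git bundle create project.bundle master     # or --all to include every branch

# on their side: clone directly from the bundle file
git clone -b master project.bundle project

# later rounds can be exchanged the same way, or via a shared remote once one exists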
I have a repo on GitHub here.
I have pushed to this repo from two different machines, so now one machine is current and the other has outdated code. Right now, I am on the machine with the outdated code, and I want to pull in the master/HEAD/whatever from GitHub.
And then I get to stare at this:
I do not want to do something stupid like delete the project from Eclipse and then pull in all the code from GitHub.
Can someone please help me merge/synchronize the projects? This is as simple as it sounds.
Unfortunately, this is what happens when I click "Pull" on the above menu:
Would someone also explain what the difference is between Pull, Merge, Fetch and Synchronize?
EGit doesn't know which remote branch you want to pull from.
If you create your local branch based on a remote-tracking branch, these keys are generated automatically. Otherwise you have to create them yourself:
branch.master.merge=refs/heads/master
branch.master.remote=origin
where master stands for the branch name: in the key it is your local branch, and in the value it is the branch in the remote repository. Place this in the repository-specific configuration file %repositorypath%\.git\config
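If you would rather not edit the file by hand, the same two keys can be set from the command line with git config, run inside the repository (master is again just the example branch name):

git config branch.master.remote origin
git config branch.master.merge refs/heads/master

# verify what ended up in .git/config
git config --get-regexp "^branch\.master\."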
As for the terms:
merge: join two or more development histories together
fetch: download objects and refs from another repository
pull: fetch from and merge with another repository or local branch
sync: allows you to compare 2 branches
In general, I urge you to read the EGit user guide, which will give you an even better understanding of Git and EGit. It can be found at http://wiki.eclipse.org/EGit/User_Guide
I'm using Mercurial with TortoiseHg on a Windows host.
We have a central repository for the team and it must always be in a stable state.
Now I'm working on a feature with a colleague and we want to merge our work, without going via the central repository because our work isn't stable yet.
So we have a common ancestor, then we have individual commits to our local repos and we need to merge this work and test it, before pushing it to the central repo.
How do we do that?
As an additional difficulty, I'm working on Windows with TortoiseHg, while my colleague is on a Linux box. We're both only basic users of Hg, so apologies if this question has an obvious solution. For me it isn't obvious.
You can use named branches and create a special named branch (pushed to the central repo) for your work in progress.
Or you can use Mercurial in a true DVCS way:
Start the embedded web server on both sides with hg serve in the working directory
Get the URL of the repo
Pull from the remote side with hg pull URL-OF-REMOTE-REPO
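A minimal sketch of that exchange, assuming your colleague's machine is reachable as colleague-host and hg serve uses its default port 8000 (the commands are the same on Windows and Linux):

# on your colleague's machine, inside the working directory
hg serve                                 # serves the repo at http://colleague-host:8000/

# on your machine
hg pull http://colleague-host:8000/      # fetch your colleague's changesets
hg merge                                 # merge the two heads descending from the common ancestor
hg commit -m "merge feature work"        # record the merge locally; push to central only once it is stable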
I am trying to find out the best way to set up EGit repos for multiple developers.
I found some arguments for setting up independent repos for each developer, along with the recommendation to merge by pointing developer A's external upstream repo at developer B's in Eclipse, so that A can pull from and merge with B. However, A then needs to switch the repo back to his own every time, and switching upstream repos in the settings is quite cumbersome.
Alternatively, all developers could work off the same repo in different branches; merging would then be easier, since no one has to go into the settings and change the upstream repo. On the other hand, this also seems somewhat "dangerous", since every developer is working on the same repo without restrictions (so I've heard).
Which way is better in the long run?
In the long run, having one upstream repository is easier to manage.
Each developer can make their own branches locally.
They should agree on a common branch to push to, though. It can be master, or a feature branch (if a few of them are collaborating on a specific feature).
The idea is, before each push, to pull --rebase that branch from the upstream repo in order to replay your local work (the commits you haven't pushed yet) on top of upstream/branch (git pull --rebase will fetch and then rebase your local work on top of what has just been fetched).
That way, a developer only pushes commits that can be merged upstream as a fast-forward merge.
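A command-line sketch of that cycle, assuming the shared branch is master on the origin remote:

# replay your unpushed local commits on top of the freshly fetched upstream branch
git pull --rebase origin master

# resolve any conflicts the rebase reports, then publish
git push origin master    # upstream only ever sees fast-forward updates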
In EGit terms, that pull --rebase is configured when you create a tracking branch.
Rebase: When pulling, new changes will be fetched from upstream and the remote tracking branch will be updated. Then the current local branch will be rebased onto the updated remote tracking branch
A repo has various pull requests, plus other fixes in some of its forks (with no pull request) that are noted in comments in the 'Issues' section. How do I go about gathering all the scattered fixes and committing them?
This is the main repo I refer to.
Some of the commits I would like to add are in this fork and this fork.
Plus miscellaneous ones mentioned in this pull-request (which has not been pulled, even though requested ages ago), namely the pbaker ones.
What is the best way to go about creating a new repo/fork that combines them all?
You can:
add those forks as remotes (git remote add)
fetch them: git remote update (see "pull/push from multiple remote locations")
merge or cherry-pick the commits you want.
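Put together, that could look like the following (the remote names and URLs are placeholders for the actual forks):

# register each fork that holds fixes you want
git remote add fork1 https://github.com/userA/project.git
git remote add fork2 https://github.com/userB/project.git

# fetch from every configured remote in one go
git remote update

# bring over a whole branch...
git merge fork1/master

# ...or pick individual commits by hash
git cherry-pick <commit-from-fork2>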
Now, this isn't the ideal workflow: you should pull from only one fork (and only if you can apply its fixes in a fast-forward manner), then ask the other forks to rebase their history on top of your own updated branch.
And then repeat the process for another fork.
However, in the case of truly distributed development, this "ideal" scenario doesn't scale well.
Hence the idea of merging/cherry-picking everything you need, even if that means the forks have to reset their own master branches.