I am planning to add my iOS project to GitHub, and I am new to Git branching and tagging.
Please suggest a simple, solid branching structure for development and production.
If I create one branch for development and one for production (master), is it possible to create sub-branches under development?
Help is highly appreciated.
You can branch from any commit in Git; all branches are equal. Why don't you try it out?
SCM organisation tends to be project-specific, so whatever works for you is good. Your clients should never see your entire repository anyway.
First of all, branching in Git is unlike branching in Subversion or any other centralized VCS.
You probably need one main branch, and from there you can make all the branches you need. Just remember to merge whatever you want to keep.
For example, you branch for production (perhaps one branch per problem you are fixing) and you branch for development (sub-branching again if needed). Later on you merge the changes from the production branches back into master, along with whatever you want to keep from your development branches.
But there is no best way - that depends on your use cases.
I usually branch per problem (production / issues) and merge it back once solved.
The development branch(es) I only add when needed (e.g. before acceptance testing).
YMMV.
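A minimal sketch of that per-problem approach (the branch and issue names are only examples):
git checkout master          # start from the stable branch
git checkout -b issue-42     # one branch per problem
# ...fix the problem and commit...
git checkout master
git merge issue-42           # keep the fix by merging it back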
Assume you have a tarball named ios_project.tar.gz with your initial work. You can place it under Git revision control as follows:
$ tar xzf ios_project.tar.gz
$ cd ios_project
$ git init
Git will reply
Initialized empty Git repository in .git/
You've now initialized the working directory--you may notice a new directory created, named ".git".
Leave master as it is. I would recommend creating two branches: one for development and another for production. Do all the code modifications you are unsure of in your development branch, and when you are confident about your work, push it to the production branch and apply a tag so that it is easy to track the production line at a later stage.
In addition, if you would like to share the repository with others, host it as a shared directory. Then clone it on any other development machine, create a local branch tracking the development branch, make changes, and push to origin.
The following set of commands is useful at a very basic level:
git clone git@repository_path_on_network/folder_name
git branch usrname_activity_dev --track origin/branch_in_repository
git checkout usrname_activity_dev
git add filenames
git commit -m "comments"
git push origin usrname_activity_dev:branch_in_repository
git checkout local_production_branch
git rebase usrname_activity_dev
git push origin local_production_branch:production_branch_on_repo
git tag tag_name
git push origin local_production_branch:production_branch_on_repo --tags
These are very basic commands. I would suggest reading further online; you will find plenty of commands for every situation you run into.
Take a look at Git flow. Python scripts to help apply this are available here.
Related
I have created a Template Repository in GitHub and then created some repositories based on the template. Since they were created, there have been updates to the template that I want to pull into those repositories.
Is this possible?
On the other repositories you have to add this template repository as a remote.
git remote add template [URL of the template repo]
Then run git fetch to update the changes
git fetch --all
Then it is possible to merge another branch from the new remote into your current one.
git merge template/[branch to merge] --allow-unrelated-histories
https://help.github.com/en/articles/adding-a-remote
I will link to the same location as HRK44 but my answer is very different.
https://help.github.com/en/articles/creating-a-repository-from-a-template
Although forks and templates are mentioned in the same section, they are very different.
One of the differences mentioned in the link is:
A new fork includes the entire commit history of the parent repository, while a repository created from a template starts with a single commit.
This basically means that you will not be able to pull new changes from the template, as your Git histories are very different and are not based on the same thing.
If you do use the method mentioned in the accepted answer, you will have very hard manual merges that result in changes to all of the files received from the template, even if they weren't changed since you first created the repo from that template.
In short, creating a repo from a template (using only master branch) is the same process as:
git clone template
cd folder
rm -rf .git
git init
git remote add origin <new repo url>
git add .
git commit -m "Initial Commit"
git push -u origin master
A few other things that (surprisingly) are not copied when creating a repo from a template (unless GitHub fixes this at a later point):
Repo configurations (allowed merge types, permissions etc)
branch rules
So when using this at your organization, make sure to set all repo configurations on the newly created repo.
If you want to merge changes from a template into your project, you're going to need to fetch all of the missing commits from the template, and apply them to your own repo.
To do this, you're going to need to know the exact commit ID that you templated from, and you're going to need to know the commit ID of your first commit.
ORIGINAL_COMMIT_ID=<commit id from original repo you templated from>
YOUR_FIRST_COMMIT=<first commit id in your repo>
YOUR_BRANCH=master
Next you're going to need to add the template as a remote, and fetch it.
git remote add upstream git@github.com:whatever/foo.git
git fetch upstream
And finally, you need to rebase all of the commits you're missing onto your branch
git rebase --onto $ORIGINAL_COMMIT_ID $YOUR_FIRST_COMMIT $YOUR_BRANCH
What this does is basically create a branch off of ORIGINAL_COMMIT_ID, then manually apply all of the commits from your original branch onto this new branch.
This leaves you with what you would have had, if you had forked.
From here, you can git merge upstream/master just as if you had forked.
Once you've completed your merge, you'll need to use git push --force to push all of the changes up to the remote. If you're working with a team, you'll need to coordinate with everyone when doing this, as you're changing the history of the repo.
Note: It's important to note that this is only going to apply to one branch. If you have multiple feature branches, you'll need to perform the same steps to each one.
@daniel's answer also did not work for me because of the unrelated-histories problem mentioned in @dima's answer. I achieved the desired functionality by doing the following:
Copy the URL for the template repository you wish to use to create a new repository. (ex: https://github.com/<username>/my-template.git)
Use GitHub Importer to make a new repository based on the template repository.
This solves the unrelated histories problem because it preserves the entire commit history of the template repository.
You need to use the Importer because you cannot fork your own repository. If you want to use someone else's template repository, you can just fork theirs.
Then, add the template repository as a remote.
git remote add template https://github.com/<username>/my-template.git
After you make new commits to the template repository, you can fetch those changes.
git fetch template
Then, merge or rebase. I recommend merging on public repos and rebasing on private repos.
To merge
git checkout <branch-to-merge-to>
git merge template/<branch-to-merge>
To rebase
git checkout <branch-to-merge-to>
git rebase template/<branch-to-merge>
NOTE: When rebasing, you must
git push origin <branch-name> --force
in order to overwrite your old commits on your remote branch. This is why I recommend rebasing only on private repos.
I approached this differently, as fetch & merge was not ideal because a lot of files diverge between the template and downstream projects. I only needed the common elements to sync.
Let's say we have the folder structure below locally:
repos
├── template_repo
└── downstream_repo
1. Now create a git patch from the parent folder (repos):
git diff --no-index --diff-filter=d --output=upstream_changes.patch -- downstream_repo/some_common_path template_repo/some_common_path
NOTE: the order of the paths matters! downstream_repo comes first! (Interpret this as "what changes do we need to make to downstream_repo to make it the same as template_repo"; look at the --name-status output, it will hopefully make sense.)
--no-index option generates git diff based on filesystem paths. Path can be a single file or a folder.
--diff-filter=d will ignore any files that are in the downstream_repo but not in the template_repo. This only applies when diffing folder paths.
You can use --stat, --name-status to see what the patch will contain.
Review the generated patch.
2. Change to downstream_repo folder and apply the patch
git apply -p2 ../upstream_changes.patch
Explanation of the -p<n> option from the official docs:
Remove leading path components (separated by slashes) from
traditional diff paths. E.g., with -p2, a patch against a/dir/file
will be applied directly to file. The default is 1.
Using -p2 will drop a/downstream_repo and b/template_repo from the diff paths, allowing the patch to apply.
This is the reason for starting with the folder structure illustrated above.
Once the patch is applied, the rest of the process should be familiar.
All of these options are clearly explained in git diff and git apply official docs.
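For what it's worth, that "rest of the process" is just the usual staging and committing inside downstream_repo; a rough sketch (the commit message is only an example):
git status                                            # review what the patch changed
git add -A
git commit -m "Sync common elements from template_repo"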
Another option is to create a patch from the necessary commits and move the patch to a new project
git format-patch -1 HEAD
Apply the patch
git am < file.patch
details are here
I ran into this same issue. I have 10+ projects all created from the same template project (react-kindling) and using git alone wasn't sufficient, as it would pull in changes to the template that I didn't want in my child projects.
I ended up creating an npm utility for updating child projects from template starter projects. You can check it out here:
LockBlocks
It's been a real life saver. Pulling changes from the template is a heck of a lot easier now.
This works too:
git remote add template git@github.com:org/template-repo.git
git fetch --all
git merge template/main --allow-unrelated-histories
This is the first time I am using GitHub, so please bear with me.
I am working on an iOS project with another developer. Since we are working on two different functionalities, I thought making a separate branch for each developer would be a good approach. So my plan is to follow the steps below:
Create a local branch named functionality1 from the current one using
git checkout -b functionality1
Commit my code in functionality1 branch
Push that branch to the remote using
git push origin functionality1
This will add my branch to the remote server. I need branches on the remote because I can work from anywhere.
I will merge it in Master branch using
git checkout master
git merge functionality1
Now functionality1 is merged into the master branch (provided no conflicts occurred).
The other developer will follow the same steps.
We don't want to delete the branches yet.
Now once both branches are merged into master, how can each developer get the merged code from the master branch into their respective branches (functionality1 & functionality2) and then continue working on the same branch (functionality1 & functionality2)?
IMHO you shouldn't, unless you really need the new functionality, because by merging e.g. master back into functionality1 you make it dependent upon the other feature branch. A good read is the gitworkflows(7) man page.
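If you do decide a feature branch really needs the merged code, a minimal sketch would be (using the branch name from the question):
git checkout functionality1
git merge master      # brings the work merged into master back into functionality1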
We've been using Mercurial for a couple of months now and it's improved our deployment process A LOT already. This is the good part.
The system we have in place is working, but it's still error-prone if you're not careful or are rushing. This leaves me wondering if there are ways we could improve it or... maybe we're completely off track :)
Our environment consist of:
Local development workstation (each developer)
Development server (hosting the DB & the central repository)
An acceptance server (Where QA is done)
A staging server (Where we stage the release branch, we then robocopy to the live systems)
A little background on the reason why we switched:
We are in a work environment that often has us switching from task to task, leaving many pending tasks. Many would become stale and clutter up the main branch back when we were on CVS. Deployment was a nightmare, as you had to work around lines that needed to go live and others that didn't, using Beyond Compare.
Mercurial with named branches and easy merging solves this for us. So not knowing what to expect we set it up.
First we cleaned up our production source, pruning dead files, etc.
We FTP'd that to staging and made it our new repository's "default"; this was to be our stable branch, ready to be deployed at all times.
Afterward, we did an hg clone to the development server and had each developer hg clone from the development default branch.
On acceptance where we do QA we did an hg clone of the development server's default branch.
At this point we have a stable copy of the code everywhere, everyone is eager to start!
Local machines push to dev, acceptance pulls from dev, and staging is completely isolated and can pull from wherever, if the path is provided.
Now the idea behind this was that the default branch on our system would always be a copy of the code on the live server, provided that we remembered to pull before starting a new branch. When starting a new feature we would:
hg pull -b default #synch up
hg update default
hg branch {newFeature} #newFeature completely isolated from other changes.
*work on {newFeature}
Oh no! A bug! It is unrelated to what I am currently working on; let's call it {bugFix111}. This calls for a new branch independent of my feature, so I go back to an updated default. This isolates newFeature and bugFix111 from each other, and each can go live independently since both are based on default.
hg update default
hg pull -u
hg branch {bugFix111}
Once work is completed on, say, {bugFix111}:
hg push -b {bugFix111} --new-branch #send our fix to the main central repo on dev.
Go to acceptance:
hg pull -b {bugFix111} #pull the fix from the central repo (dev).
hg merge {bugFix111} #merge the code in the default QA branch.
hg commit -m "Merging {bugFix111} into default"
*QA signs off on the fix
We have two branches on acceptance: default, where QA takes place, and release, where we merge the work as it gets signed off.
hg update release
hg merge {bugFix111} #fix is now ready to go live
hg commit -m "Merging {bugFix111} into release"
On staging:
hg pull -b release {PATH TO ACCEPTANCE REPO}
hg merge release
hg commit -m "Merging {bugFix111} into staging default"
hg tag release{date}
*robocopy to live
*run a task that pulls from staging to dev so that they sync up again.
This has been working for us and saves some deployment time, as it's a breeze to just robocopy the stable release branch.
Issues
What we have noticed is:
It's easy to goof up a merge when merging the second time on release; this seems to go against the flow, and we can break things after the QA sign-off.
We could get QA to test our release branch as well, but that seems like duplicating resources; the goal is just to keep features isolated and be able to send them out one at a time.
We can completely blow things up by merging release over the wrong target, e.g. hg merge release while on default on acceptance completely overwrites it.
If we forget to pull before starting a new branch, we are working off the wrong base.
There are a few other oddities, but those are the biggest hurdles.
I realize this is a long post, but hopefully the answers will help other Mercurial newbies like me trying to set up a decent workflow at their company.
Why not pull on staging from the QA branch? Then the merge job has already been done and validated, i.e. if the changeset includes a manual merge you will import it on staging as well. Otherwise you have to replicate the merge on staging, as you are doing now.
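A rough sketch of that alternative on staging, assuming the release merge was already committed and signed off on acceptance:
hg pull -b release {PATH TO ACCEPTANCE REPO}   #brings over the already-validated merge changesets
hg update release
hg tag release{date}
*robocopy to live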
Where I work, we work (mostly) in pairs. We have seen the need for version control, and we will be using bazaar as our distributed version control system, due to its apparent flexibility.
After some experimentation, we have agreed that in order to set up a project, we should use the following steps:
On Server
bzr init (initializes the project)
bzr add (tells bzr to track all files in current directory, so please make sure you do not have unnecessary files in your project skeleton before you run this command)
bzr commit -m "initial commit" (commits the added files to bzr for version control)
On Development Machine
On your local machine, do a bzr branch project_dir
Daily routine
We are currently trying to establish a workflow that will work for us. This is what we have agreed to do daily (see the sketch after this list):
Pull down latest changes from pull_path
Code and commit. NB. Your commits will be saved on your local machine.
See step 1.
Push your changes to push_path (NB. push_path = pull_path)
If there is any conflict:
Try bzr resolve first.
If that fails, get your partner and do a manual resolve (open file.OTHER, file.BASE and file.THIS and make relevant changes).
Commit your changes (bzr commit)
Push again (bzr push)
Repeat the above sub-points (#5) until all conflicts are resolved.
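In command form, the routine above might look roughly like this (paths are placeholders; if a pull or push complains that the branches have diverged, bzr merge is the usual way to reconcile them before pushing again):
bzr pull pull_path                      # 1. get the latest changes
# ...code...
bzr commit -m "describe your change"    # 2. commit locally
bzr pull pull_path                      # 3. sync again
bzr push push_path                      # 4. publish (push_path = pull_path)
# 5. if the push is rejected or there are conflicts:
bzr merge pull_path
# fix the conflicted files (file.THIS / file.BASE / file.OTHER), then:
bzr resolve --all
bzr commit -m "merge"
bzr push push_path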
In terms of the workflow, is this the right way to do version control with Bazaar? We have encountered problems where our commit comments 'change ownership' every time the other team member pushes changes to the server. I'm pretty sure this is not how it's supposed to work, but it may also be due to certain options selected during the project setup phase.
As the VCS evangelist here, I am working on a guide to be used by the team, and particularly by new people as the team grows, and it would be great to have a 'proper' set of steps to follow in getting work done. Your contributions in establishing a nice and simple step-by-step flow to get the best out of bzr would be greatly appreciated. Please add your contributions here.
Thank you all in advance :)
What operating system(s) do you run on the server and development machines? And which file systems? Windows file-system permissions, and sometimes the owner/group, can differ from the same files on a Unix file system. That might be the first stumbling block.
Bazaar workflow:
Run a main tree on the repo server, and do a checkout locally:
bzr checkout sftp://path/to/repo/project /var/source/project
Branch the checkout locally / to your dev environment:
bzr branch sftp://path/to/repo/project /var/www/project
Don't work on the checkout, only work on the dev branch. Work and commit there, using the various bzr commands.
Once a work module / bug fix / task is finished, merge (not push) into the main repo:
#In /var/source/project
bzr merge /var/www/project
#Resolve any conflicts
bzr resolve
#Commit the merge
bzr commit -m "Work module | task | bug fixed"
Because /var/source/project is a checkout, the repo on the repo server will be updated automatically. This enables two or more developers to work on the same project concurrently, without needing to push and pull the whole time.
I'm not sure how your commit message changes ownership. If you do a merge and commit, the new commit is under the name of the person who did the merge, but the original commits are still tracked; see bzr log -n0.
I have read :
"Best practices for using git with CVS"
"How to export revision history from mercurial or git to cvs?"
, and neither suits my needs.
At work we use a remote CVS repo. Access to this repo is handled via the Eclipse CVS tools and in-house Eclipse plugins built on top of the Eclipse team tools. This means we can't move to a better VCS.
However I would like to use Git on my local machine (to enable personal branching) such that I can accomplish the following:
Create branches in Git and then, once finished and merged back into my local trunk, commit back to the CVS repo using the Eclipse team tools, etc.
My plan is something along the following lines (roughly sketched in commands after the list):
Copy the checked out files to another folder [gitRepo].
Create a master git repo in gitRepo
Branch in gitRepo and make changes.
Commit to gitRepo
Copy gitRepo back to checked out files
Sync with remote cvs.
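Roughly, in commands (the paths and branch names here are only placeholders for illustration):
# 1-2: copy the CVS checkout elsewhere and put it under Git
cp -r ~/workspace/checked_out_project ~/gitRepo
cd ~/gitRepo
git init
git add .
git commit -m "Import from CVS checkout"
# 3-4: branch, work, commit
git checkout -b my-feature
# ...edit and commit...
git checkout master
git merge my-feature
# 5: copy the results back over the CVS checkout (leaving .git behind)
rsync -a --exclude .git ~/gitRepo/ ~/workspace/checked_out_project/
# 6: then sync with the remote CVS from Eclipse as usual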
I was planning on using EGit for Eclipse; however, I believe the CVS and .git files will compete for ownership of the versioning.
Are there any tools or suggested workflows to help me manage this? Also, how well does Git play with CVS files, and vice versa? I don't want them to infect each other.
The reason the former links are of no use is that they commit straight to the CVS repo from the Git repo, and this worries me, as I do not wish to infect the CVS repo by accident.
It should also be said that changes in the GitRepo do not need to persist into the CVS repo, for example I don’t need to see every push to the git repo reflected in the remote CVS.
~Thanks for reading.
You perfectly well can create a Git repo directly within a CVS workspace (much like within any other VCS tool's workspace).
Make sure Git ignores the CVS metadata, and make sure CVS ignores the .git directory.
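A minimal sketch of that mutual-ignore setup, assuming per-directory .cvsignore files and CVS's usual CVS/ metadata directories:
echo "CVS/" >> .gitignore     # Git ignores the CVS bookkeeping directories
echo ".git" >> .cvsignore     # CVS ignores the Git metadata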
Any Git commit won't be directly reflected in CVS.
The only trick is getting Eclipse to display only Git or only CVS information and label decorations.
For that I would configure two different Eclipse perspectives and deactivate one or the other VCS tool in each.
I have done exactly this at work and I found the following practices helpful:
Keep one branch (master in my case) always in sync with CVS. Do not use this branch for your development. Periodically update this branch to get the changes made by the rest of the team. If these changes are relevant to your current work, merge master into your dev (or any other appropriate) branch.
When you are ready to check in to CVS, switch to the master branch and merge the changes from the appropriate branch (dev, feature, etc.). Run your tests! (See the sketch at the end of this answer.)
Your employer most likely keeps backups of the CVS repos. You will have to find a way to keep your Git repo backed up. One way is to add a mirror repository in a Dropbox folder and use a post-commit hook to update it after each commit.
Before you leave work, switch to the master branch. I once made the mistake of running cvs up -d on a dev branch in the morning and ended up quite confused. Adding a script that automatically switches to master before updating helps.
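A rough sketch of that routine (the dev branch name is only an example, and the CVS commands assume you run them in the same working tree):
# keep master in sync with CVS
git checkout master
cvs update -d                        # bring in the team's changes
git add -A && git commit -m "Sync from CVS"
# fold those changes into your dev branch when relevant
git checkout dev
git merge master
# when ready to check in to CVS
git checkout master
git merge dev                        # run your tests here
# then commit the modified files to CVS from this working tree (e.g. via the Eclipse team tools)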