Buildbot 0.8.6
Periodically, the buildbot fails to update a particular repository. It runs this command:
argv: ['/usr/bin/git', 'reset', '--hard',
'26d7a2f5af4b777b074ac47a7218fe775d039845']
and then complains:
fatal: Could not parse object
'26d7a2f5af4b777b074ac47a7218fe775d039845'.
However, the correct command is actually:
argv: ['/usr/bin/git', 'reset', '--hard', 'FETCH_HEAD']
Not only that: the SHA used in the failed command is from a different repository.
Does anyone know how to fix this?
Thanks so much.
Update:
We have two repositories. We have a GitPoller watching one of the repositories. I would like to have a build run if the watched repository had a push. However, both repositories are needed for the build. The error specified above occurs on the second, non-watched repository. The SHA number in the error is from the watched repository.
Ok, first, let's make sure we have the right understanding:
You're having a problem with one builder, which builds two repositories
Each build has two git steps which clone two different repositories
You're polling one of these repositories to trigger builds
There is no other scheduler that is triggering builds (or at least not those that fail that way)
What happens when you're polling a repository to trigger builds is that each new build carries with it the changes that triggered it. The git steps refer to these changes to checkout the correct version. You probably need to use codebases to help the two steps distinguish between changes. Unfortunately, codebases were introduced in 0.8.7, so you should consider upgrading. 0.8.6 is ancient.
If upgrading is not an option, pass alwaysUseLatest=True to the Git() step of the repository that you are not polling. That will force it to always use FETCH_HEAD. Here's my shot at that setup:
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git  # old-style source step, as used in 0.8.x

f = BuildFactory()
# The polled repository checks out the revision from the triggering change
f.addStep(Git(repourl='git://github.com/you/polled_repo.git', mode='copy'))
# The second repository ignores the change's SHA and always uses FETCH_HEAD
f.addStep(Git(repourl='git://github.com/you/other_repo.git', alwaysUseLatest=True))
Related
I am performing salesforce deployments. The current setup is:
Dev org is pushed from develop1 branch.
UAT org is pushed from UAT1 branch.
Every two weeks we do a mergeback of UAT1 into develop1, which is later deployed onto the dev org.
This had been working for me until now. Lately I keep seeing the error below while merging in Eclipse:
**"Multiple common ancestors were found and merging them resulted in a
conflict"**
I tried using Eclipse Neon and Mars with EGit 4.x. I am unable to carry out the merge and resolve the conflicts.
Based on the answers in the below question:
How to work around "multiple merge bases" error in EGit Eclipse plugin?
I do not want to cherry-pick my way through the merge, since that would be a very cumbersome task.
Is there another tool which can handle this? I have installed SourceTree, but I am not sure whether it would help.
I was able to get this working by doing the following:
Used SourceTree to merge the local copies of the remote branches develop1 and UAT1. SourceTree was able to handle the multiple-ancestor problem and gave me a list of conflicts.
Since I was not very comfortable using SourceTree for conflict resolution, I used Eclipse to open the code and resolve the conflicts in the Git Staging window. (I'd like to know whether there is a similar external editor that lets you accept and reject changes the way Eclipse does.)
Staged and committed the files using Eclipse.
This is a long workaround, and I wonder whether I will see the multiple-ancestor issue again when I perform the mergeback in two weeks.
TortoiseGit on Windows was also successful in merging such a situation.
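For reference, the same mergeback can also be attempted with command-line Git, whose default merge strategy builds a virtual ancestor out of multiple merge bases instead of giving up. The branch names below are taken from the question; this is a sketch, not a guarantee that no content conflicts remain:

```shell
# Merge UAT1 into develop1. Git's default "recursive" strategy merges
# multiple common ancestors into a virtual one automatically, so it
# usually copes where EGit reports "Multiple common ancestors were found".
git checkout develop1
git merge UAT1
# If genuine content conflicts remain, resolve them in any editor, then:
#   git add <resolved files>
#   git commit
```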
Every time I try to use GitHub I get tangled in a series of errors that seem to have no solution, and I give up. This time I thought I'd try to get help.
I have a local repository created and managed with Xcode. All the local Git functions in Xcode work with no problem. Now I want to put this project on GitHub so others can see it. I logged into GitHub and created a repository. It's this one:
lummis/CS193P-2015-Assignment-5
I added a .gitignore file but then deleted it because I thought it was causing an error. I tried adding a README file but wasn't able to; I got an error that didn't make sense to me, so I gave up on that. So at this point the GitHub repository is empty, as far as I can tell.
My local repository has many commits and is currently up to date; in other words, there is nothing to commit. But when I do "Source Control / Push" I get the following error:
Working copy out of date. Try pulling from the remote to get the
latest changes then push again.
So I try to do that in Xcode by doing "Source Control / Pull". But then I get this error:
"github/master" is not a valid remote branch to pull from. Please
choose a different remote branch.
But there is only one branch; there is no other branch (local or remote) to choose. So I'm stuck in an Xcode/GitHub error loop again. I searched for information about this but didn't find anything relevant. I have the Pro Git book and read and understood it at least through chapter 2, but that doesn't cover interacting with GitHub from Xcode.
Can anybody say what I need to do? I thought of deleting the remote repository and starting over but apparently there's no way to do that either!
I know lots of people use GitHub, so it must work once you know how to use it, but it's a big source of frustration for me.
You have a local repository with "many commits". Let's imagine that we have three:
A---B---C
        ^
        master
Your remote repository on GitHub also contains commits, but they are different ones from what you have locally, e.g.
Y---Z
    ^
    master
At least one of these remote commits was created through the GitHub web interface, which creates a new commit each time you use it.
Because the two repositories contain no common history, Git can't figure out how to handle a push from your local repository to the remote one. It will refuse to accept such a push rather than making any remote commits inaccessible, which is what you usually want.
In this case, commits Y and Z in the remote repository can be discarded. They simply add and then remove a .gitignore file, and you want the remote to reflect what you have locally. The solution is to force push.
Force pushing should generally be avoided, since it can cause commits to be discarded (like Y and Z will be in this case) or to have their hashes changed, which causes major problems with shared repositories. In this instance I don't see any danger in force pushing, which can be accomplished with the -f or --force argument to git push.
(There's nothing fundamentally wrong with force pushing, and in some situations it makes perfect sense, but it should be done with care for the reasons listed above.)
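Assuming the remote is named origin and the branch is master (typical for a repository Xcode pushes to GitHub), the force push is a single command:

```shell
# Overwrite the remote branch with the local history.
# WARNING: this discards the remote-only commits (Y and Z in the diagram).
git push --force origin master
```

After this, local and remote master point at the same commit, and subsequent pushes from Xcode should work normally.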
I've recently been looking at using Git to eventually replace the CVS repository we have at work. However, after watching Linus Torvalds' video on YouTube about Git, it seems that every tutorial I find suggests using Git in the same way CVS is used, except that you have a local repository, which I agree is very useful for speed and distribution.
However the tutorials suggest that what you do is each clone the repository you want to develop on from a remote location and that when changes are made you commit locally building up a history to help with merge control. When you are ready to commit your changes you then push them to the remote location, but first you fetch changes to check for merge conflicts (just like CVS).
However, in Linus' video he describes the functionality of Git as a group of developers working on some code, pushing and fetching from each other as needed rather than using a central location. He also describes people pushing their changes out to verifiers, who fetch and push code as well. So you can see it's possible to create a scalable structure within a company too.
My question is: can anybody point me toward some tutorials that explain how to do this distributed development of code using Git, so that developers push and fetch code from each other without committing to a central remote repository? If possible, Eclipse-based tutorials would be very welcome.
Thanks in advance,
Alexei Blue.
I don't know any specific tutorial about this. In general, to connect to a repository you have to be running a Git server that listens for (and authenticates) Git requests.
Having a specific repository for each developer is possible, but each repository needs that server component. Multiple repositories can be stored on the same computer, which reduces the number of servers required.
However, for other reasons it is beneficial to have some kind of central structure (e.g. a repository for stuff to be released, or a repository for stuff not yet verified). This structure need not be a single central repository; it can be multiple repositories with well-defined workflows governing how data moves between them (e.g. once code from the verification repository is validated, it should be pushed to the release repository).
In other words, you should be ready to create Git servers (e.g. see http://tumblr.intranation.com/post/766290565/how-set-up-your-own-private-git-server-linux for details; but there are other tutorials for this as well), and define workflows for your own company to use it.
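As a minimal sketch of such a server, a bare repository on any shared machine is enough; the paths and host names below are hypothetical:

```shell
# "Server" side: a bare repository holds history but no working tree.
# Over SSH this would be: ssh server 'git init --bare /srv/git/project.git'
git init --bare /srv/git/project.git

# Developer side: clone it and push work back.
git clone /srv/git/project.git project   # or ssh://server/srv/git/project.git
cd project
# ...edit files, git add, git commit...
git push origin master
```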
Additionally, I recommend looking at AlBlue's blog series called Git Tip of the Week.
Finally, to ease the introduction, I suggest first introducing Git as a direct replacement for CVS, and then presenting the other changes one by one.
Take a look at AlBlue's blog entry on Gerrit.
This shows a workflow different from the classic centralized server such as CVS or SVN. Superficially it looks similar, as you pull the source from a central Git server, but you push your commits to the Gerrit server, which can compile and test the code to make sure it works before eventually pushing the changes to the central Git server.
Now instead of pushing the changes to Gerrit, you could have pushed the code to your pair programming buddy and he could have manually reviewed and tested the code.
Or maybe you're going on holiday and a colleague will finish the task you've started. No problem, just push your changes to their Git repo.
Git doesn't treat any of these other Git instances differently from one another. From Git's perspective, none of them is special.
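For example, pulling directly from a colleague's repository just means adding it as another remote; the host, path, and branch names below are made up for illustration:

```shell
# Treat a colleague's repository as just another remote
git remote add alice ssh://alices-machine/home/alice/project.git
git fetch alice                     # get her branches without touching yours
git merge alice/feature-x           # integrate her work into your branch
git push alice master:review/mine   # or hand her a branch of yours to review
```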
In my iPhone app, I am using the in-built Git repository of Xcode 4 so that all the team members can work on the same project.
Now the problem is that even after I commit my changes to the repository, it still shows the modified (M) symbol in front of the committed file.
What could be wrong?
I want to ensure that once I commit the changes it should not show "M" for that file.
Is there any setting which I have to do to make it work fine?
What can be done?
The built-in Git repository is a local repository only. How do you share it with your team? If you hooked that repository up to GitHub, for example, you may experience problems, as the implementation is not 100% reliable. I would use the command line in this case and git add/commit/push the changes. There are discussions and tutorials on the GitHub Blog.
Without knowing what you're doing in Xcode, or how you have set up your repository all I can say is that you should check the status of your repository in the command line. Maybe your commit fails for some reason and you're not seeing the message in Xcode.
Try git status to see what state your repository is in.
Try git add <your files> and then git commit to see if you can actually commit your changes.
Did you stage your files before committing (git add)? Otherwise the commit will do nothing.
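To rule Xcode out, the full round trip on the command line looks like this (the file name is only an example):

```shell
git status                       # what is staged, modified, or untracked?
git add ViewController.swift     # stage the modified file
git commit -m "Describe the change"
git status                       # the file should no longer appear as modified
```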
You would need to push the changes in Xcode 4 to remove the "M" (modified) status. If you don't have the command-line mojo like most people, you can just use the built-in functions as they were intended.
It won't solve your issue of sharing, as it is only a local repo. I'm finding that even using Xcode 4 with an outside repo, you need to learn some command-line stuff or it just isn't going to work, or at the very least it will kick your butt enough to make you consider giving up.
*edit
Just to make it clear, the process for Xcode 4 is as follows:
File-->Source Control-->Commit
File-->Source Control-->Push
This actually stems from my earlier question, where one of the answers made me wonder how people are using the SCM/repository in different ways for development.
Pre-tested commits
Before (TeamCity, build manager):
The concept is simple: the build system stands as a roadblock between your commit and trunk, and only after the build system determines that your commit doesn't break anything does it allow the commit into version control, where other developers will sync and integrate that change into their local working copies.
After (using a DVCS like Git as the source repository):
My workflow with Hudson for pre-tested commits involves three separate Git repositories:
my local repo (local),
the canonical/central repo (origin)
and my "world-readable" (inside the firewall) repo (public).
For pre-tested commits, I utilize a constantly changing branch called "pu" (potential updates) on the world-readable repo.
Inside of Hudson I created a job that polls the world-readable repo (public) for changes in the "pu" branch and will kick off builds when updates are pushed.
My workflow for taking a change from inception to origin is:
* hack, hack, hack
* commit to local/topic
* git pup public
* Hudson polls public/pu
* Hudson runs potential-updates job
* Tests fail?
o Yes: Rework commit, try again
o No: Continue
* Rebase onto local/master
* Push to origin/master
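"git pup" in the list above is the author's alias; expanded into plain commands (with the remote names described earlier), the loop looks roughly like this:

```shell
# hack, hack, hack on a topic branch
git checkout -b topic
git commit -a -m "Potential update"

# publish the candidate where Hudson can see it
git push public topic:pu       # Hudson polls public/pu and runs the job

# once the potential-updates build is green:
git rebase master topic        # replay the change onto local/master
git checkout master
git merge --ff-only topic
git push origin master
```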
Using this pre-tested commit workflow I can offload most of my testing requirements to the build system's cluster of machines instead of running them locally, meaning I can spend the majority of my time writing code instead of waiting for tests to complete on my own machine between coding iterations.
(Variation) Private Build (David Gageot, Algodeal)
Same principle as above, but the build is done on the same workstation as the one used to develop, on a cloned repo:
How do we avoid using a CI server in the long term, without suffering the increasing time lost staring at builds locally?
With git, it’s a piece of cake.
First, we ‘git clone’ the working directory to another folder. Git does the copy very quickly.
The next time, we don’t need to clone; we just tell Git to fetch the deltas. Net result: instant cloning. Impressive.
What about the consistency?
Doing a simple ‘git pull’ from the working directory will realize, using the deltas’ digests, that the changes were already pushed to the shared repository.
Nothing to do. Impressive again.
Of course, while the build is running in the second directory, we can keep on working on the code. No need to wait.
We now have a private build with no maintenance, no additional installation, not dependent on the IDE, run with a single command line. No more broken builds in the shared repository. We can recycle our CI server.
Yes. You’ve heard well. We’ve just built a serverless CI. Every additional feature of a real CI server is noise to me.
#!/bin/bash
# Figure out the push URL of the remote (strip the "origin" name)
if [ 0 -eq `git remote -v | grep -c push` ]; then
    REMOTE_REPO=`git remote -v | sed 's/origin//'`
else
    REMOTE_REPO=`git remote -v | grep "(push)" | sed 's/origin//' | sed 's/(push)//'`
fi

# If a commit message was passed as $1, commit all pending changes first
if [ ! -z "$1" ]; then
    git add .
    git commit -a -m "$1"
fi

git pull

# Clone the working directory into a private build area (first run only)
if [ ! -d ".privatebuild" ]; then
    git clone . .privatebuild
fi

cd .privatebuild
git clean -df
git pull

# Build; publish to the remote only if the build succeeds
if [ -e "pom.xml" ]; then
    mvn clean install
    if [ $? -eq 0 ]; then
        echo "Publishing to: $REMOTE_REPO"
        git push $REMOTE_REPO master
    else
        echo "Unable to build"
        exit 1
    fi
fi
Dmitry Tashkinov, who has an interesting question on DVCS and CI, asks:
I don't understand how "We’ve just built a serverless CI" coheres with Martin Fowler's statement:
"Once I have made my own build of a properly synchronized working copy I can then finally commit my changes into the mainline, which then updates the repository. However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done. There is always a chance that I missed something on my machine and the repository wasn't properly updated."
Do you ignore or bend it?
#Dmitry: I neither ignore nor bend the process described by Martin Fowler in his ContinuousIntegration entry.
But you have to realize that DVCS adds publication as an orthogonal dimension to branching.
The serverless CI described by David is just an implementation of the general CI process detailed by Martin: instead of having a CI server, you push to a local copy where a local CI runs, then you push "valid" code to a central repo.
#VonC: but the idea was to run CI NOT locally, precisely so as not to miss something in the transition between machines.
When you use so-called local CI, it may pass all the tests just because it is local, but break down later on another machine.
So is it integration? I'm not criticizing here at all; the question is difficult for me and I'm trying to understand.
#Dmitry: "So is it integeration"?
It is one level of integration, which can help get rid of all the basic checks (like format issue, code style, basic static analysis detection, ...)
Since you have that publication mechanism, you can chain that kind of CI to another CI server if you want. That server, in turn, can automatically push (if this is still fast-forward) to the "central" repo.
David Gageot didn't need that extra level, being already at target in terms of deployment architecture (PC->PC), and needed only that basic kind of CI level.
That doesn't prevent him from setting up a more complete system integration server for more complete testing.
My favorite? An unreleased tool which used Bazaar (a DSCM with very well-thought-out explicit rename handling) to track tree-structured data by representing the datastore as a directory structure.
This allowed an XML document to be branched and merged, with all the goodness (conflict detection and resolution, review workflow, and of course change logging and the like) made easy by modern distributed source control. Splitting components of the document and its metadata into their own files prevented the issues of allowing proximity to create false conflicts, and otherwise allowed all the work that the Bazaar team put into versioning filesystem trees to work with tree-structured data of other kinds.
Definitely Polarion Track & Wiki...
The entire bug tracking and wiki database is stored in subversion to be able to keep a complete revision history.
http://www.polarion.com/products/trackwiki/features.php