Handling CodeShip failures on references which no longer exist - version-control

Sometimes I push code to a feature branch, and because of rebasing and force-pushing the reference no longer exists. However, CodeShip still tries to download the reference and run its CI on it.
Is there any way (such as a special exit code) to tell CodeShip to neither pass nor fail a build where the reference no longer exists, or to delete it from the build history?

That's not possible right now. In theory you could force a build to succeed even if tests fail by making sure that those commands return an exit code of zero.
But the git clone is a step that's run by Codeship itself and that you can't modify. Because of this, and because the exit code of git clone is not zero, the step and the build will fail.
I'll bring this up with the team, but I'm not sure if we're going to change the behavior.
Disclaimer: I'm working for Codeship.
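For illustration, here is a minimal sketch of the workaround mentioned above for the commands you do control (run_tests.sh is a placeholder for your own test command, not a CodeShip convention):

# Force a zero exit code so this test step can never fail the build.
# Note: this cannot help with the clone step, which Codeship runs itself.
./run_tests.sh || true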

Related

How to execute a client-side Git hook?

I am having a problem implementing a pre-push hook. The developers need to run a static code analyzer before they push code to the git repo, but usually they don't, and hence they break the build.
So I have written a pre-push hook: a shell script, copied into the .git/hooks directory, that executes the static code analysis. But it is not working as expected. This has to be a client-side hook, and I don't want to implement this functionality in a pre-commit or post-commit hook, since I want the static code analysis to be done on the developer's machine before he/she pushes the code (and not when he/she commits it).
So please provide your insight as to how I can execute a task (static code analysis) on the client machine before the git push command.
As #sestus said, hooks need to be set up client-side; they are not part of the Git repository. That makes sense if you consider that Git is a distributed system and hooks can execute arbitrary code.
What you can do is check the script into the repository (e.g. at $REPO_ROOT/git-hooks/pre-push) and use the build toolchain of your project to set up a symlink (ln -s ../../git-hooks/pre-push .git/hooks).
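As a sketch, the checked-in hook could look like this; run_static_analysis.sh is a placeholder for your analyzer invocation, and a non-zero exit from a pre-push hook aborts the push:

#!/bin/sh
# $REPO_ROOT/git-hooks/pre-push -- runs on the client before 'git push'
./run_static_analysis.sh || {
    echo "Static code analysis failed; push aborted." >&2
    exit 1
}

The one-time client-side setup (done by the build toolchain) would then be:

chmod +x git-hooks/pre-push
ln -s ../../git-hooks/pre-push .git/hooks/pre-push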

How can I copy a git repository in Xcode to GitHub?

Every time I try to use GitHub I get tangled in a series of errors that seem to have no solution, and I give up. This time I thought I'd try to get help.
I have a local repository created and managed with Xcode. All the local git functions in Xcode work with no problem. Now I want to put this project on GitHub so others can see it. I logged into GitHub and created a repository. It's this one:
lummis/CS193P-2015-Assignment-5
I added a .gitignore file but then deleted it again because I thought it was causing an error. I tried adding a readme file but wasn't able to; I got some error that didn't make sense to me, so I gave up on that. So at this point the GitHub repository is empty, so far as I can tell.
My local repository has many commits and is currently up-to-date; IOW there is nothing to commit. But when I do "Source Control / Push" I get the following error:
Working copy out of date. Try pulling from the remote to get the
latest changes then push again.
So I try to do that in Xcode by doing "Source Control / Pull". But then I get this error:
"github/master" is not a valid remote branch to pull from. Please
choose a different remote branch.
But there is only one branch. There is no other branch (local or remote) to choose. So I'm stuck in an Xcode-GitHub error loop again. I searched for information about this but didn't find anything relevant. I have the Pro Git book and read and understood it at least through chapter 2. But it doesn't cover interacting with Xcode.
Can anybody say what I need to do? I thought of deleting the remote repository and starting over but apparently there's no way to do that either!
I know lots of people use GitHub, so it must work once you know how to use it, but it's a big source of frustration for me.
You have a local repository with "many commits". Let's imagine that we have three:
A---B---C
        ^
        master
Your remote repository on GitHub also contains commits, but they are different ones from what you have locally, e.g.
Y---Z
    ^
    master
At least one of these remote commits was created through the GitHub web interface, which creates a new commit each time you use it.
Because the two repositories contain no common history, Git can't figure out how to handle a push from your local repository to the remote one. Rather than make any remote commits inaccessible, it will refuse to accept such a push, which is usually what you want.
In this case, commits Y and Z in the remote repository can be discarded. They simply add and then remove a .gitignore file, and you want the remote to reflect what you have locally. The solution is to force push.
Force pushing should generally be avoided, since it can cause commits to be discarded (like Y and Z will be in this case) or to have their hashes changed, which causes major problems with shared repositories. In this instance I don't see any danger in force pushing, which can be accomplished with the -f or --force argument to git push.
(There's nothing fundamentally wrong with force pushing, and in some situations it makes perfect sense, but it should be done with care for the reasons listed above.)
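As a concrete sketch, assuming Xcode named the remote "github" (as the error message above suggests; on the command line it is often "origin"):

git push --force github master

If your Git version supports it, --force-with-lease is a safer variant that refuses to overwrite remote commits you have not fetched.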

Buildbot fails on reset --hard

Buildbot 0.8.6
Periodically the buildbot fails getting a particular repository. It runs this command:
argv: ['/usr/bin/git', 'reset', '--hard',
'26d7a2f5af4b777b074ac47a7218fe775d039845']
and then complains:
fatal: Could not parse object
'26d7a2f5af4b777b074ac47a7218fe775d039845'.
However, the correct command is actually:
argv: ['/usr/bin/git', 'reset', '--hard', 'FETCH_HEAD']
Not only that. The SHA number used in the failed command is from a different repository.
Anyone knows how to fix this?
Thanks so much.
Update:
We have two repositories. We have a GitPoller watching one of the repositories. I would like to have a build run if the watched repository had a push. However, both repositories are needed for the build. The error specified above occurs on the second, non-watched repository. The SHA number in the error is from the watched repository.
OK, first, let's make sure we have the right understanding:
* You're having a problem with one builder, which builds two repositories
* Each build has two git steps, which clone two different repositories
* You're polling one of these repositories to trigger builds
* There is no other scheduler that is triggering builds (or at least not the builds that fail this way)
What happens when you're polling a repository to trigger builds is that each new build carries with it the changes that triggered it. The git steps refer to these changes to checkout the correct version. You probably need to use codebases to help the two steps distinguish between changes. Unfortunately, codebases were introduced in 0.8.7, so you should consider upgrading. 0.8.6 is ancient.
If upgrading is not an option, pass alwaysUseLatest=True to the Git() step of the repository that you are not polling. That will force it to always use FETCH_HEAD. Here's my shot at that setup:
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git  # old-style source step used in 0.8.x

f = BuildFactory()
f.addStep(Git(repourl='git://github.com/you/polled_repo.git', mode='copy'))
f.addStep(Git(repourl='git://github.com/you/other_repo.git', alwaysUseLatest=True))

Mercurial post-build commit in MPLAB (Eclipse)

I have a question about using Mercurial with MPLAB (which is basically just a wrapper around Eclipse).
I am wondering if it is possible to add a post-build step to commit a project to the repo.
Right now, we're just doing it the brute-force way; we've taken the "commit often" part to the extreme. My co-worker has set up a Windows event that executes every 15 minutes and runs a script he wrote to commit everything in our working directory to the repo. This is great for making sure you don't miss anything (when his computer is on), but has the downside of committing broken code a lot of the time.
I can't help but think that there has to be a more streamlined way to handle our commits. I've read multiple tutorials/wikis about Hg, but nothing gets this specific; everything stays at the "general overview" level.
If you are building via make, just add your commit script as the last stage (after ELF generation) in your makefile. For managed builds see here, assuming that feature is available in your version, and again run your existing script. Either will result in a commit on every successful build.
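A minimal sketch of the same idea as a wrapper script rather than a makefile edit (the make target and commit message are assumptions, adapt to your project):

#!/bin/sh
# Build first; the commit runs only if the build succeeds.
make all && hg commit -m "Post-build commit: build succeeded"

Because the commands are chained with &&, broken code never reaches the commit, which avoids the main drawback of the timer-based approach.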

What is the cleverest use of source repository that you have ever seen?

This actually stems from an earlier question of mine, where one of the answers made me wonder how people are using the SCM/repository in different ways for development.
Pre-tested commits
Before (TeamCity, build manager):
The concept is simple: the build system stands as a roadblock between your commit and trunk. Only after the build system determines that your commit doesn't break things does it allow the commit to be introduced into version control, where other developers will sync and integrate that change into their local working copies.
After (using a DVCS like Git as the source repository):
My workflow with Hudson for pre-tested commits involves three separate Git repositories:
my local repo (local),
the canonical/central repo (origin)
and my "world-readable" (inside the firewall) repo (public).
For pre-tested commits, I utilize a constantly changing branch called "pu" (potential updates) on the world-readable repo.
Inside of Hudson I created a job that polls the world-readable repo (public) for changes in the "pu" branch and will kick off builds when updates are pushed.
My workflow for taking a change from inception to origin is:
* hack, hack, hack
* commit to local/topic
* git pup public
* Hudson polls public/pu
* Hudson runs potential-updates job
* Tests fail?
  o Yes: Rework commit, try again
  o No: Continue
* Rebase onto local/master
* Push to origin/master
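The "pup" alias is not defined in the post; a hypothetical definition, assuming it simply pushes the current topic branch to the pu branch of the named remote, could be:

# Purely illustrative -- the original author's alias may differ.
git config alias.pup '!f() { git push "${1:-public}" HEAD:refs/heads/pu; }; f'

With that in place, "git pup public" publishes the current HEAD to public/pu, the branch Hudson polls.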
Using this pre-tested commit workflow I can offload the majority of my testing requirements to the build system's cluster of machines instead of running them locally, meaning I can spend the majority of my time writing code instead of waiting for tests to complete on my own machine in between coding iterations.
(Variation) Private Build (David Gageot, Algodeal)
Same principle as above, but the build is done on the same workstation as the one used to develop, only in a cloned repo:
How can you do without a CI server in the long term, and yet not suffer the increasing time lost staring at builds locally?
With git, it’s a piece of cake.
First, we ‘git clone’ the working directory to another folder. Git does the copy very quickly.
The next times, we don't need to clone. We just tell git to get the deltas. Net result: instant cloning. Impressive.
What about the consistency?
Doing a simple 'git pull' from the working directory will realize, using the deltas' digests, that the changes were already pushed to the shared repository.
Nothing to do. Impressive again.
Of course, while the build is running in the second directory, we can keep on working on the code. No need to wait.
We now have a private build with no maintenance, no additional installation, not dependent on the IDE, run with a single command line. No more broken builds in the shared repository. We can recycle our CI server.
Yes, you heard right. We've just built a serverless CI. Every additional feature of a real CI server is noise to me.
#!/bin/bash
# Figure out the remote repository URL to publish to.
if [ 0 -eq `git remote -v | grep -c push` ]; then
    REMOTE_REPO=`git remote -v | sed 's/origin//'`
else
    REMOTE_REPO=`git remote -v | grep "(push)" | sed 's/origin//' | sed 's/(push)//'`
fi

# If a commit message was given, commit all pending changes first.
if [ ! -z "$1" ]; then
    git add .
    git commit -a -m "$1"
fi

git pull

# Clone the working directory into a private build area (first run only).
if [ ! -d ".privatebuild" ]; then
    git clone . .privatebuild
fi

cd .privatebuild
git clean -df
git pull

# Build, and publish to the remote only if the build succeeds.
if [ -e "pom.xml" ]; then
    if mvn clean install; then
        echo "Publishing to: $REMOTE_REPO"
        git push $REMOTE_REPO master
    else
        echo "Unable to build"
        exit 1  # the original "exit $?" returned echo's status, i.e. always 0
    fi
fi
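Hypothetical usage, assuming the script above is saved as privatebuild.sh at the root of the working copy:

./privatebuild.sh "Implement feature X"

The argument is optional: with it, pending changes are committed first; without it, the script just rebuilds and publishes whatever is already committed.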
Dmitry Tashkinov, who has an interesting question on DVCS and CI, asks:
I don't understand how "We've just built a serverless CI" coheres with Martin Fowler's statement:
"Once I have made my own build of a properly synchronized working copy I can then finally commit my changes into the mainline, which then updates the repository. However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done. There is always a chance that I missed something on my machine and the repository wasn't properly updated."
Do you ignore or bend it?
#Dmitry: I neither ignore nor bend the process described by Martin Fowler in his ContinuousIntegration entry.
But you have to realize that DVCS adds publication as an orthogonal dimension to branching.
The serverless CI described by David is just an implementation of the general CI process detailed by Martin: instead of having a CI server, you push to a local copy where a local CI runs, then you push "valid" code to a central repo.
#VonC, but the idea was to run CI NOT locally, precisely so as not to miss something in the transition between machines.
When you use the so-called local CI, it may pass all the tests just because it is local, but break down later on another machine.
So is it integration? I'm not criticizing here at all; the question is difficult for me and I'm trying to understand.
#Dmitry: "So is it integration?"
It is one level of integration, which can help get rid of all the basic checks (like formatting issues, code style, basic static analysis detection, ...).
Since you have that publication mechanism, you can chain that kind of CI to another CI server if you want. That server, in turn, can automatically push (if this is still fast-forward) to the "central" repo.
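As a sketch of that chaining idea (the remote name "central" and the check script are assumptions):

# Run the basic local checks; on success, push to the central repo.
# Without --force, the push is rejected unless it is a fast-forward.
./run_basic_checks.sh && git push central master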
David Gageot didn't need that extra level, being already at target in terms of deployment architecture (PC->PC), and needed only that basic kind of CI level.
That doesn't prevent him from setting up a more complete system integration server for more complete testing.
My favorite? An unreleased tool which used Bazaar (a DSCM with very well-thought-out explicit rename handling) to track tree-structured data by representing the datastore as a directory structure.
This allowed an XML document to be branched and merged, with all the goodness (conflict detection and resolution, review workflow, and of course change logging and the like) made easy by modern distributed source control. Splitting components of the document and its metadata into their own files prevented the issues of allowing proximity to create false conflicts, and otherwise allowed all the work that the Bazaar team put into versioning filesystem trees to work with tree-structured data of other kinds.
Definitely Polarion Track & Wiki...
The entire bug tracking and wiki database is stored in subversion to be able to keep a complete revision history.
http://www.polarion.com/products/trackwiki/features.php