Travis CI build status failed. What can I do?

I am new to continuous deployment. All I did was add my project to Travis CI and put the status badge in my GitHub repo's README.md. My project is built with PHP, HTML, JavaScript and Bootstrap, and it is also connected to a MySQL database.
Now, what do I need to do to get a successful build status?
Edit 1
Since I'm new, I don't know where Travis CI stores the error message, but I can see a job log.

The job log is where you find out what went wrong. Simply read the log generated by Travis, e.g. here: https://travis-ci.org/al2helal/HandicraftStore/builds/373415975
The error message you're getting is:
The command "phpunit" exited with 2.
Since you haven't defined what you want Travis to do in your .travis.yml and have only configured the language (php), Travis runs the default set of tests for that language. In your case it runs phpunit.
However, you haven't written any tests or configured phpunit, so it only displays a help message and exits with an error code.
Before you deploy your application anywhere automatically, you should run tests that confirm it is working as expected. Start by writing some.
Next, you'll need to configure Travis to run those tests and configure deployment if all tests pass.
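As a rough sketch, a minimal .travis.yml for a PHP project with a MySQL database might look like the following (the PHP version, the MySQL service and the phpunit invocation are assumptions; adjust them to your project):

```yaml
language: php
php:
  - '7.2'
services:
  - mysql              # makes a MySQL server available to the build
install:
  - composer install
script:
  - vendor/bin/phpunit # runs your own test suite instead of the bare default
```

Once a phpunit.xml and at least one passing test exist in the repo, a configuration like this should turn the badge green.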
Since you seem to be new to Travis CI, the best place to start is to go through their docs: https://docs.travis-ci.com/

Related

Is there a way to define external checks on a GitHub PR?

When I open a PR on GitHub, several builds are triggered on our external build server, but because of build queuing, they can run at different times.
In the interim, however, I can merge my PR even after only one of the builds has run successfully.
I understand that the build server is probably using the Checks API, and that GitHub doesn't know about the check until the build server tells it that the build has started. I think this is the source of the problem because GitHub is just saying, "All the checks I know about have passed."
Is there a way to configure GitHub to expect all of my builds before the build server starts them?
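One way this is often handled (an assumption about your setup, since the build server isn't specified) is GitHub's branch protection with required status checks: you name every check context up front, and the merge button stays blocked until all of them have reported success, whether or not they have started yet. Sketched against the REST API, with placeholder check names:

```
PUT /repos/OWNER/REPO/branches/master/protection
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["ci/build-linux", "ci/build-windows"]
  },
  "enforce_admins": null,
  "required_pull_request_reviews": null,
  "restrictions": null
}
```

The same list can also be edited in the repository's Settings → Branches UI without touching the API.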

Automated tests, code coverage, static analysis and code review

I used to be a developer long ago, but for the last 10 years I have been working in system ops. I am planning to move into DevOps and am trying to sharpen my saw. However, when it comes to Jenkins, and especially static code analysis, code coverage, automated tests and code review, I get very confused.
Let's start with automated tests (for simplicity, take unit tests). I understand that we write a separate class file for the unit tests, but how are those tests carried out? Will Jenkins create a JVM where the newly built artifact is deployed and the tests are run against it? Or will the tests be run against the code (I don't think so, but I still want to clarify)?
I downloaded an example application with Maven and Cobertura from GitHub and built the project. When the build completed, it published a code coverage report.
I have not set up any post-build step to deploy the artifact, so I am not sure how it works: what did it do, and how?
Thanks
J
Here is a common flow that you can follow to achieve your requirement:
Work with code --> push to Gerrit for review --> the Jenkins Gerrit Trigger plugin fires --> the corresponding job checks out the code you committed and compiles, packages, runs the unit tests and deploys to Artifactory --> a Sonar build runs to analyse code quality: static analysis, code coverage...
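The flow above can be sketched as a declarative Jenkins pipeline (the stage names and Maven goals are assumptions, not taken from your project). Note that the unit tests do not need a deployed artifact: Maven's test runner loads the freshly compiled classes into a JVM on the build machine and runs the test classes against them in-process.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }   // code fetched via the Gerrit trigger
        }
        stage('Compile, package and unit test') {
            // Surefire forks a JVM, loads the compiled classes and runs the
            // test classes against them directly -- nothing is deployed yet.
            steps { sh 'mvn clean verify' }
        }
        stage('Deploy to Artifactory') {
            steps { sh 'mvn deploy' }
        }
        stage('Sonar analysis') {
            // static analysis, code quality and coverage reporting
            steps { sh 'mvn sonar:sonar' }
        }
    }
}
```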
Br,
Tim

Can Jenkins build code to remote servers when triggered by GitHub webhook?

I have a 'master' server (a Docker container, actually) where I want to install Jenkins and link it (with a webhook) to a GitHub repo, so that every time a developer pushes code, Jenkins will automatically pull and build it.
The thing is that there is an arbitrary number of extra 'slave' servers that need to have exactly the same code as the master.
I am thinking of writing an Ansible playbook, executed by Jenkins every time the webhook fires, to send the code to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup to all the slaves with Jenkins and webhooks?
EDIT:
I want to run a Locust master server on the server that is going to host Jenkins. My load tests will be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step to the project that receives the webhook and have it run the playbook.
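As a sketch of that build step, assuming an inventory group named `slaves` and a placeholder destination path, a playbook could rsync the workspace Jenkins just pulled out to every slave:

```yaml
# deploy-code.yml -- hypothetical playbook; the group name and paths
# are placeholders, adjust them to your inventory and layout
- hosts: slaves
  tasks:
    - name: Sync the checked-out code from the Jenkins workspace to each slave
      synchronize:
        src: "{{ lookup('env', 'WORKSPACE') }}/"
        dest: /opt/locust/tests/
```

The Jenkins build step itself would then be a shell step along the lines of `ansible-playbook -i inventory deploy-code.yml`.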
Jenkins can trigger other jobs, even on slaves. If I understand your issue correctly, you just need something like the Parameterized Remote Trigger Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
With it you can trigger a job by name. There is also another useful plugin called Artifactory, which manages and serves your packages. This means you can build your code once, share it with the slaves, and the slaves can access your build and run their jobs.

Teamcity constantly rebuilding a pull request even though the last build failed and there have been no further commits

I have TeamCity set up to build Github pull requests as per these instructions: http://blog.jetbrains.com/teamcity/2013/02/automatically-building-pull-requests-from-github-with-teamcity/
I have added a VCS build trigger so that TeamCity polls Github looking for changes. This has no special settings enabled.
My build involves a shell script to set up dependencies and an Ant script to run PHPUnit. Right now, the Ant script fails (tests don't pass) with exit code 1. The build shows as a fail, and that should be that. However, every time the VCS build trigger looks for changes, it seems to find some, even though there have been no more commits. It then runs yet another build of the same merge commit and keeps repeating the build endlessly.
Why is it constantly thinking that there are changes when there are not?

How to get Jenkins to report after the build script fails?

I use rake to build my project and one of the steps is running the unit, integration and fitnesse tests. If too many of these fail, I fail the rake script.
That part is working fine.
Unfortunately, after the build fails, Jenkins doesn't publish the HTML reports I generated from the unit, integration and FitNesse tests, making it a tad difficult to track down the failure reason.
Am I missing a configuration step to get the reports published?
Is Jenkins supposed to skip the post-build steps when the build fails?
It seems like it does for most of the plugins I am using.
You have to tell Jenkins which artifacts to archive in a post-build step (there is a check box under general 'Post-build actions' heading which is called 'Archive the Artifacts'). Important: the artifact path is determined relative to the workspace directory. Make sure that the option Discard all but the last successful/stable artifact to save disk space is not checked.
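If you later move to a Pipeline job, the same idea is expressed with a post { always { ... } } section, which runs its publishing steps regardless of whether the build failed (the report paths below are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'rake' }   // may fail when too many tests fail
        }
    }
    post {
        always {
            // Runs even after a failed stage, so any reports generated
            // before the failure still get archived and published.
            archiveArtifacts artifacts: 'reports/**', allowEmptyArchive: true
            junit allowEmptyResults: true, testResults: 'reports/junit/*.xml'
        }
    }
}
```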
Finally figured it out; one of those 'I could have had a V8' moments...
I'm using a rake file to build, and one of its tasks fails just before some reporting tasks that need to run in order to have the HTML pushed into the correct area to be published.