Code from GitHub does not go into Jenkins - AWS CodePipeline integration with Jenkins and GitHub

I integrated my GitHub repository with AWS CodePipeline, and that with Jenkins through the AWS CodePipeline plugin in Jenkins. Jenkins is installed on an EC2 instance, and I created an IAM role for the instance running Jenkins. I also set up AWS CodePipeline Publisher as the post-build action.
However, while my code from GitHub is picked up by AWS CodePipeline successfully (the Source stage succeeds), the Build stage fails with a timeout error after one hour.
When I checked the Jenkins workspace on the EC2 instance, the workspace for the project was empty.
That is, the code pulled from GitHub is not being placed into the Jenkins workspace by AWS CodePipeline.
Is this a problem with Jenkins security being enabled? I actually tried disabling security as well, but got the same error.
Your help is really appreciated.

In the Build Triggers section, did you choose Poll SCM?
This is where you configure how often Jenkins should poll AWS CodePipeline for new tasks, for example H/5 * * * * (every five minutes).
Something else that comes to mind is an issue with the credentials. If you open your Jenkins project, there should be an AWS CodePipeline Polling Log link on the left, below "Configure", and you should see an error there if the plugin is unable to poll.

First - Make sure the EC2 instance running Jenkins has an IAM role with the permissions required to work with AWS CodePipeline.
Second - Under the Build Triggers section, select Poll SCM and type five asterisks separated by spaces in Schedule.
Follow this link for more details:
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html#getting-started-4-get-instance
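For the IAM side, a minimal sketch of an inline policy is shown below; the job-worker actions are the ones the CodePipeline plugin polls with, while the role name (JenkinsRole) and artifact bucket (my-pipeline-artifacts) are placeholders you would replace with your own values:

    # Attach a minimal CodePipeline job-worker policy to the instance role.
    # "JenkinsRole" and "my-pipeline-artifacts" are placeholders.
    aws iam put-role-policy \
      --role-name JenkinsRole \
      --policy-name JenkinsCodePipelineAccess \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "codepipeline:PollForJobs",
              "codepipeline:AcknowledgeJob",
              "codepipeline:GetJobDetails",
              "codepipeline:PutJobSuccessResult",
              "codepipeline:PutJobFailureResult"
            ],
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-pipeline-artifacts/*"
          }
        ]
      }'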

This is an old question, but I had the same problem. After quite a bit of research, I figured out that in my setup, the input and output artifact names were missing.
Steps to check / fix the issue
You will need the AWS CLI installed.
Use: aws codepipeline get-pipeline --name [pipeline name] > pipeline.json
Open the pipeline JSON and confirm that (a quick jq check is sketched after this list):
1. the output artifact in the Source stage is the same as the input artifact in the Build stage;
2. the output artifact in the Build stage is the same as the input artifact in the Beta stage (or whatever your deploy stage is).
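As a rough shortcut, assuming you have jq installed, the following prints each stage's input and output artifact names from the exported file so you can compare them by eye:

    # Print each stage's input and output artifact names for comparison.
    # Works on the pipeline.json exported above; adjust if your layout differs.
    jq '.pipeline.stages[] | {stage: .name,
        inputs:  [.actions[].inputArtifacts[]?.name],
        outputs: [.actions[].outputArtifacts[]?.name]}' pipeline.json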
You can also check whether things are working by looking in S3. In the bucket for your pipeline, you should see a folder with the same name as the output artifact of your Source stage. Inside it there will be various zip files; download one and unzip it to verify that the upload from GitHub was correct.
I am guessing the issue happened for me because I began with a two-stage pipeline and added the build stage afterwards. The same may happen to you if the Jenkins server is not ready before you create the pipeline, so you add that stage later.

GitHub webhook is unable to trigger Jenkins pipeline

Our company used to self-host GitLab for source code management, with webhooks configured on GitLab to trigger all of the project pipelines on Jenkins. Initially the GitLab URL was 'https://git.fulcrumdigital.com'; later, after an upgrade, it was changed to 'https://autobuild.fulcrumdigital.com'.
Recently we migrated to 'github.com' and created a private organization that holds the source code for the various projects. When I configure webhooks for these projects, the deliveries reach Jenkins as intended, but Jenkins doesn't trigger the respective project's build. Instead, it gives the message shown below.
(screenshot: jenkins-github webhook error)
I don't find any info regarding this webhook on the global configuration page.
Here is a snapshot of the Jenkins logs:
(screenshot: jenkins logs)
I don't face this webhook issue for newly created pipeline projects on Jenkins, only for older pipeline projects whose webhooks were originally configured for GitLab.
How can I make Jenkins trigger builds from GitHub webhooks for these older pipeline projects?
Did you try force regenerating the webhooks?
Go to Manage Jenkins > Configure System > GitHub plugin > Advanced > Re-register hooks for all jobs.
I had this problem myself. The first thing you want to do is go to Manage Jenkins -> Configure System, scroll down to the GitHub section, and click on "Advanced".
It's important to have access to your Jenkins log (I'm running Jenkins with Docker). When I clicked on Re-register hooks for all jobs, I got an error in the log.
In my case, the error mentioned my access token. I checked my GitHub personal access token, and it turned out I needed to enable read and write access for webhooks.
Go back to Jenkins, click on Re-register hooks for all jobs again, and on the next push the build is triggered automatically.
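If you want to confirm the endpoint itself is reachable, independent of GitHub's delivery log, a rough check is to post a ping-style payload by hand. This assumes the GitHub plugin's default /github-webhook/ endpoint, and https://jenkins.example.com is a placeholder for your Jenkins URL:

    # Send a minimal ping to the GitHub plugin's webhook endpoint.
    # https://jenkins.example.com is a placeholder; expect an HTTP 200 back
    # if the endpoint is reachable and the plugin is listening.
    curl -i -X POST https://jenkins.example.com/github-webhook/ \
      -H "Content-Type: application/json" \
      -H "X-GitHub-Event: ping" \
      -d '{"zen": "test", "hook_id": 1}'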

Need to integrate SonarQube into our Google Cloud CI/CD pipeline with a GitHub repository

I need to integrate SonarQube, which resides on Google Cloud, so that as soon as we trigger a CI/CD pipeline, SonarQube analyzes our code and generates a report.
Steps:
We push or make a PR on our GitHub repo
The GCP CI/CD pipeline gets triggered
SonarQube analyzes our repo and generates reports
I have tried several solutions for this, but I am not able to make it work: either SonarQube doesn't initialize, or if it does, I can't make it work with our GitHub repo.
I tried the Bitnami SonarQube image but got stuck after that, i.e. how would I now trigger the CI/CD pipeline?
I also came across the SonarQube Helm chart, but that doesn't persist; it gave me a temporary solution where, if I closed my console, the SonarQube server stopped too.
I am totally new to Google Cloud and SonarQube, so I may need a step-by-step guide.
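For what it's worth, the analysis step being described usually boils down to running the scanner against the checked-out repo from inside the pipeline. A minimal sketch using the official sonarsource/sonar-scanner-cli container follows; the host URL, token, and project key are placeholders for your own values:

    # Run the official sonar-scanner container against the current checkout.
    # SONAR_HOST_URL, SONAR_TOKEN, and my-project are placeholders.
    docker run --rm \
      -e SONAR_HOST_URL="http://your-sonarqube-host:9000" \
      -e SONAR_TOKEN="your-token" \
      -v "$(pwd):/usr/src" \
      sonarsource/sonar-scanner-cli \
      -Dsonar.projectKey=my-project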

Provisioning GitHub-backed CodePipeline using CloudFormation

I am trying to create my CodePipeline using CloudFormation. The problem I'm having is that once it's created and tries to run, it immediately gives me the error:
Invalid action configuration The GitHub repository "MyOrg/MyRepo" or
branch "MyBranch" does not exist.
In fact, they both exist. I can click Edit, select my source provider, Connect to GitHub, then select that exact repository and branch, and it works fine. But when the pipeline runs directly after the CloudFormation provisioning, it always gives me this error.
I exported the pipeline configuration JSON using aws codepipeline get-pipeline for a freshly provisioned pipeline, then did so again immediately after updating the configuration in the console via "Connect to GitHub", and the two are identical.
Make sure that the Configuration property for your CodePipeline's GitHub Source Action contains all four required properties as listed in the documentation:
Owner
Repo
Branch
OAuthToken
Double-check that your provided values are correct, particularly OAuthToken, which corresponds to the "Connect to GitHub" step in the AWS Console-based CodePipeline setup.
To get a valid OAuthToken from GitHub to enter here, you need to create a new personal access token with the repo and admin:repo_hook scopes enabled, as described in the documentation's troubleshooting page.
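As a quick way to inspect what CloudFormation actually provisioned, you can pull just the source action's configuration out of the pipeline definition. Here my-pipeline is a placeholder, jq is assumed to be installed, and the source action is assumed to be the first action of the first stage:

    # Show only the GitHub source action's configuration.
    # "my-pipeline" is a placeholder; note that CodePipeline masks the
    # OAuthToken value in this output, so compare Owner/Repo/Branch here.
    aws codepipeline get-pipeline --name my-pipeline \
      | jq '.pipeline.stages[0].actions[0].configuration'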

Can Jenkins build code to remote servers when triggered by GitHub webhook?

I have a 'master' server (a Docker container, actually) where I want to install Jenkins and link it (via a webhook) to a GitHub repo, so that every time a developer pushes code, Jenkins auto-pulls and builds it.
The thing is that there is an arbitrary number of extra 'slave' servers that need to have exactly the same code as the master.
I am thinking of writing an Ansible playbook, executed by Jenkins every time the webhook fires, to push the code out to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup on all the slaves, with Jenkins and webhooks?
EDIT:
I want to run a Locust (locust.io) master server on the same machine as Jenkins. My load tests will be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step to the project that receives the webhook, which runs the playbook.
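For illustration, such a build step might be a simple shell invocation after the checkout; the inventory file (hosts) and playbook (deploy.yml) are placeholder names, and the playbook itself would synchronize the Jenkins workspace out to the slaves:

    # Run from a Jenkins "Execute shell" build step after the Git checkout.
    # "hosts" and "deploy.yml" are placeholders; $WORKSPACE is set by Jenkins.
    ansible-playbook -i hosts deploy.yml --extra-vars "src_dir=$WORKSPACE"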
Jenkins can trigger another job, even on slaves. If I understand your issue correctly, you just need something like this: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You could trigger your job by name. There is also another useful plugin called Artifactory, which manages and serves your packages. This means you can build your code once and share it, so the slaves can access the build and run their jobs.

How to deploy TeamCity artifacts to an Amazon EC2 server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, and, if the build is successful, run the unit tests; after all tests pass, the artifacts of the successful build should be deployed to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts, but I haven't worked out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for your help.
P.S.: I have read this question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifact publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1 that publishes your files via FTP.
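For that extra build step, a plain command-line upload is usually enough. A minimal sketch using curl, where the FTP host, credentials, and artifact name are all placeholders:

    # Upload a built artifact via FTP from a TeamCity command-line build step.
    # ftp.example.com, user:password, and artifact.zip are placeholders.
    curl -T artifact.zip ftp://ftp.example.com/upload/ --user user:password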
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless, they deploy when they choose. As for how to construct such a script, the TeamCity part comes down to retrieving the artifact. That is why my answer references getting the artifacts by URL - that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
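For illustration, TeamCity serves the latest successful build's artifacts at a predictable URL, so the pull can be a one-liner. The server address, build configuration ID (MyProject_Build), and artifact name below are placeholders, and guestAuth assumes the guest account is enabled (otherwise use httpAuth with credentials):

    # Pull the latest successful build's artifact from TeamCity by URL.
    # teamcity.example.com, MyProject_Build, and artifact.zip are placeholders.
    wget http://teamcity.example.com/guestAuth/repository/download/MyProject_Build/.lastSuccessful/artifact.zip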
If you want an automated deployment, you can schedule a cron job (or use the Windows scheduler) to run the script at regular intervals. If nothing has changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
Having TeamCity push the changes as they happen is not something TeamCity does out of the box (as far as I know), but you could roll your own, for example by triggering on one of TeamCity's notification methods, such as e-mail. I question the utility of that, too: do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.