Send gcloud build result to Pub/Sub

I want to send the Cloud Build result to a Pub/Sub topic after the build. This is required because I want to add a build-success condition to all my PRs inside Bitbucket.
Somehow I either can't find proper documentation about how to add this Pub/Sub step to the cloudbuild.yaml, or there is not enough information out there.
Can anyone help here?
I have looked at post-build steps and waitFor steps; however, Cloud Build does not seem to support what I need.
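For what it's worth, Cloud Build automatically publishes build status messages to a Pub/Sub topic named cloud-builds, which may already be enough for a PR status integration. Alternatively, here is a minimal sketch of publishing from within the build itself; the topic name build-results and the docker step are placeholders, and the topic is assumed to exist already. Because Cloud Build stops at the first failing step, the publish step below only runs if everything before it succeeded:

steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/my-image", "."]   # placeholder build step
- name: "gcr.io/cloud-builders/gcloud"
  # Only reached when all previous steps succeed, so a message here implies a
  # successful build so far. $BUILD_ID and $PROJECT_ID are built-in substitutions.
  args: ["pubsub", "topics", "publish", "build-results", "--message", "Build $BUILD_ID succeeded"]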

Related

Is there a way to cherry-pick Cloud Build runs with GitHub comments?

I have set up my cloud build instances to only trigger when a comment of /gcbrun is made to the pull request.
I would like to be able to trigger specific builds, or all of them.
Currently using /gcbrun is triggering all of my builds when I would like to test out only one of them.

Invalid Argument Error in Google Cloud Build/GitHub

I have been trying to integrate Google Cloud Build with my GitHub account. I have set up working build triggers in the past for other projects on GCP - but with this one, I just can't get it to work reliably. Here is what I did:
Installed the Google Cloud Build app on GitHub and linked it to my Google Cloud account.
Connected to my GitHub repository in Google Cloud Build. As source, I selected "GitHub (Cloud Build GitHub App)".
Let Cloud Build create its default trigger for me - just to make sure that the settings are correct.
Now, when manually running the default trigger, I always receive the following error message after selecting my branch: "Failed to trigger build: Request contains an invalid argument."
The trigger also does not work when invoked through a new commit in the GitHub repository. There are two different errors I have spotted through the GitHub UI:
The GitHub Cloud Build Action essentially reports the same error as Cloud Build itself when manually invoking the build, and immediately fails.
The GitHub Cloud Build Action is queued/started, but never actually does anything. In this case, Cloud Build does not even seem to know about the build that was triggered by GitHub. The action will remain in this state for hours, even though Cloud Build should usually cancel builds after 10 minutes by default.
Here are some things that I've tried so far to mitigate the issue:
Create all sorts of different trigger variations - none of them seems to work. The error is always the same.
Uninstall the Cloud Build App on GitHub, unlink my Google Cloud account, and go through the entire setup process again.
When connecting the repository in Cloud Build, instead of selecting the GitHub App as a source, select "GitHub (mirrored)".
At this point, I seem to be stuck and I would be super grateful for any advice/tip that could somehow push me in the right direction.
One more thing that I should note: I have had the triggers working for a while in this project. They stopped working some time after I renamed my master branch on GitHub to "production". I don't know if that has anything to do with my triggers failing though.
I found that this can be caused when you have an "invalid" CloudBuild config file (e.g. cloudbuild.yaml).
This threw me off, because it doesn't necessarily mean it is invalid YAML or JSON, just that it is not what CloudBuild expects.
In my case, I defined a secretEnv value, but had removed the step that utilized it. Apparently, CloudBuild does not allow secretEnv values to go unused, which resulted in the cryptic error message:
Failed to trigger build: Request contains an invalid argument.
In case that isn't clear, here is an example of a config file that will fail:
steps:
- name: "gcr.io/cloud-builders/docker"
  entrypoint: "bash"
  args: ["-c", "docker login --username=user-name --password=$$PASSWORD"]
  secretEnv: ["PASSWORD"]
secrets:
- kmsKeyName: projects/project-id/locations/global/keyRings/keyring-name/cryptoKeys/key-name
  secretEnv:
    PASSWORD: "encrypted-password"
    UNUSED_PASSWORD: "another-encrypted-password"
UNUSED_PASSWORD is never actually used anywhere, so this will fail.
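For comparison, a version with the unused entry removed (same placeholder names as above) should pass this particular validation check:

steps:
- name: "gcr.io/cloud-builders/docker"
  entrypoint: "bash"
  args: ["-c", "docker login --username=user-name --password=$$PASSWORD"]
  secretEnv: ["PASSWORD"]
secrets:
- kmsKeyName: projects/project-id/locations/global/keyRings/keyring-name/cryptoKeys/key-name
  secretEnv:
    PASSWORD: "encrypted-password"   # referenced by the step above, so this is allowed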
Since this error message is so vague, I assume there are other scenarios that could cause this same problem, so take this as just an example of the type of mistakes to look for.

How to configure Travis job's before_deploy/after_deploy steps to run only for one of the deploy providers?

I would like to define a before_deploy and an after_deploy step in my Travis build that run only for one of the two providers used in my deploy step. The before/after steps currently run once for each of the providers, but the actions apply only to one of them.
If there's no way to configure the .travis.yml file to do this explicitly, is there some way I could pass information along from my deploy step to the after_deploy step, so that I could check which provider it is being run for?
Note that the two deploy providers that I'm using are bintray and releases, so there seems to be very little flexibility in what I can do as part of the actual deploy step (i.e. I'm not deploying via a script which would give me more freedom to do extra stuff).
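One possible workaround, not an official Travis feature: since the hooks run once per provider, a sentinel file can tell the invocations apart. A sketch, assuming the hooks fire in the order the providers are listed under deploy, and with bintray_post.sh as a placeholder for the provider-specific action:

deploy:
- provider: bintray
  file: descriptor.json            # placeholder
  user: my-user                    # placeholder
  key: $BINTRAY_API_KEY
- provider: releases
  api_key: $GITHUB_TOKEN
  file: build/my-artifact.zip      # placeholder
after_deploy:
# Runs once per provider; the sentinel file marks that the first
# (bintray) invocation has already happened.
- |
  if [ ! -f .after_deploy_ran ]; then
    touch .after_deploy_ran
    ./bintray_post.sh
  fi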

Code from GitHub does not go into Jenkins - AWS CodePipeline integration with Jenkins and GitHub

I integrated my GitHub repository with AWS CodePipeline, and that with Jenkins through the AWS CodePipeline plugin in Jenkins. Jenkins is installed on an EC2 server. I created an IAM role for the EC2 instance hosting my Jenkins. I also set up AWS CodePipeline Publisher as the post-build action.
However, while my code from GitHub is taken in by AWS CodePipeline successfully (the Source stage is successful), the Build stage fails with a timeout error after 1 hour.
When I checked with the Jenkins workspace in the EC2 instance, the workspace for the project is empty.
That is, the code taken in from GitHub is not put into the Jenkins workspace by AWS CodePipeline.
Is this a problem with enabling security for Jenkins? I actually tried disabling security as well, but I got the same error.
Your help is really appreciated.
In the Build Triggers section, did you choose Poll SCM?
This is where you configure how often Jenkins should poll AWS CodePipeline for new tasks. For example: H/5 * * * * (every 5 minutes).
Something else that comes to mind is an issue with the credentials. If you open your Jenkins project, there should be an AWS CodePipeline Polling Log link on the left, below "Configure", and you should see an error there if the plugin is unable to poll.
First, make sure the Jenkins instance running on EC2 has an IAM role and the related permissions to perform actions with AWS CodePipeline.
Second, under the Build Triggers section, select Poll SCM and type five asterisks separated by spaces in the Schedule field.
Kindly follow the link for more details
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html#getting-started-4-get-instance
This is an old question, but I had the same problem. After quite a bit of research, I figured out that in my setup, the input and output artifact names were missing.
Steps to check / fix the issue
You will need the aws cli installed.
Use: aws codepipeline get-pipeline --name [pipeline name] > pipeline.json
Open pipeline.json and confirm that:
1. The output artifact in the Source stage is the same as the input artifact in the Build stage.
2. The output artifact in the Build stage is the same as the input artifact in the Beta (or whatever your deploy stage is) stage.
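For illustration, the structure to check inside pipeline.json looks roughly like this (rendered as YAML for brevity; MyApp and MyAppBuild are placeholder artifact names):

pipeline:
  stages:
  - name: Source
    actions:
    - name: Source
      outputArtifacts:
      - name: MyApp            # must match the Build stage's inputArtifacts
  - name: Build
    actions:
    - name: Jenkins
      inputArtifacts:
      - name: MyApp
      outputArtifacts:
      - name: MyAppBuild       # must match the deploy stage's inputArtifacts
  - name: Beta
    actions:
    - name: Deploy
      inputArtifacts:
      - name: MyAppBuild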
You can check whether things are working fine by going to your S3. In the bucket for your code pipeline, you should see a folder with the same name as the output artifact in your source stage. Inside this, there will be various zip files. Download one and unzip to check that the upload from GitHub was proper.
I am guessing that the issue happened for me because I began with a two-stage pipeline and added the build stage afterwards. This may happen to you too if you do not have the Jenkins server ready before creating the pipeline, and therefore add that stage later.

Can Jenkins build code to remote servers when triggered by GitHub webhook?

I have a 'master' server (a Docker container, actually) where I want to install Jenkins in order to link it (with a webhook) with a GitHub repo, so every time a developer pushes code, Jenkins will auto-pull and build the code.
The thing is that there are an arbitrary number of extra 'slave' servers that need to have the exact same code as the master.
I am thinking of writing an Ansible playbook to be executed by Jenkins every time the webhook fires, to send the code to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup to all the slaves with Jenkins and webhooks?
EDIT:
I want to run a locustio master server on the server that is going to have Jenkins. My load tests are going to be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step to the project that is receiving the webhook, and that step will run the playbook.
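As a hedged sketch of such a build step: a small Ansible playbook (the file name deploy.yml, the locust_slaves inventory group, and the destination path are all hypothetical) that mirrors the freshly pulled Jenkins workspace onto every slave. It could be invoked from an "Execute shell" build step with something like ansible-playbook -i inventory deploy.yml:

# deploy.yml - push the current Jenkins workspace to all slave servers
- hosts: locust_slaves                         # assumed inventory group for the slaves
  tasks:
  - name: Mirror the Jenkins workspace onto each slave
    synchronize:                               # rsync-based Ansible module
      src: "{{ lookup('env', 'WORKSPACE') }}/" # Jenkins sets WORKSPACE for the build
      dest: /opt/loadtests/                    # placeholder target directory
      delete: yes                              # keep slaves identical to the master copy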
Jenkins can also trigger another job, even on slaves. If I understand your issue correctly, you just need something like this: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You could trigger your job by name. There is also another useful plugin called Artifactory, which manages your packages and serves them. This means you can build your code once and share it with the slaves, and the slaves can access your build and run the job.