Is there a way to cherry-pick Cloud Build runs with GitHub comments?

I have set up my Cloud Build triggers to run only when a comment of /gcbrun is made on the pull request.
I would like to be able to trigger specific builds, or all of them.
Currently, /gcbrun triggers all of my builds when I would like to test out only one of them.
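For context, a comment-controlled pull request trigger can be expressed as an importable config (something like gcloud builds triggers import --source=trigger.yaml). The /gcbrun phrase itself does not appear to be customizable per trigger, but scoping each trigger with includedFiles is one way to keep a single /gcbrun comment from running builds for unrelated parts of the repo. A minimal sketch, where the owner, repo name, and paths are made-up examples:

# trigger.yaml - one trigger per service; owner/repo/paths are examples
name: service-a-pr
github:
  owner: my-org
  name: my-repo
  pullRequest:
    branch: ^master$
    commentControl: COMMENTS_ENABLED   # build runs only after a /gcbrun comment
filename: service-a/cloudbuild.yaml    # build config for this one service
includedFiles:
- service-a/**                         # only fire when these files changed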

Related

Azure release pipeline for static web app

I have a React app and I've set up a build pipeline that publishes the build directory as artifacts.
I was anticipating setting up a release pipeline to deploy it like I would with Azure Functions or an App Service.
But apparently not: when I created the static web app, it created a new build pipeline which also deploys. Why would you want every build to deploy? This is nonsense.
Also, the branch name is hard-coded somehow and can't be changed. Obviously I'll want to change that to master after I've got it working.
Furthermore, when trying to create a release pipeline there is no task for Azure static website.
What is going on?
Can I have a normal build and release like everything else?
Why does this have to be different -- the inconsistency is confusing and infuriating.
But apparently not: when I created the static web app, it created a new build pipeline which also deploys. Why would you want every build to deploy? This is nonsense.
You can change the pipeline's trigger so that deployment happens only when you want.
Check these official documents:
CI Trigger for DevOps
PR Trigger for DevOps
If you don't want the pipeline to trigger automatically, you can replace the trigger part of the pipeline with this:
trigger: none
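Or, if you want CI builds only for pushes to a specific branch, something like this (the branch name is just an example):

# azure-pipelines.yml: build on pushes to master only, no automatic PR builds
trigger:
  branches:
    include:
    - master
pr: none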
Also, the branch name is hard-coded somehow and can't be changed. Obviously I'll want to change that to master after I've got it working.
Just like the above, the design of Azure SWA in this regard is a bit counter-intuitive: these settings are not managed on the Azure Portal side but on the DevOps side.
You need to follow a couple of steps on the DevOps side to change the source. After those steps, the source of the Azure SWA will be changed successfully, but the UI on the Azure Portal side will not change immediately; a successful deployment will update it.
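As a rough sketch of what the DevOps-side pipeline might look like after the change (AzureStaticWebApp@0 is the standard SWA deployment task; the branch, paths, and the deployment_token variable are examples, not taken from the original answer):

# azure-pipelines.yml for the Static Web App
trigger:
  branches:
    include:
    - master                           # changed from the auto-generated branch

pool:
  vmImage: ubuntu-latest

steps:
- task: AzureStaticWebApp@0
  inputs:
    app_location: '/'                  # root of the React app
    output_location: 'build'           # create-react-app build output
    azure_static_web_apps_api_token: $(deployment_token)   # example variable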

Why choose GitHub Actions when we can just run bash scripts in a GitHub workflow?

I just completed a GitHub workflow that mostly uses actions, but also one bash script.
When writing the workflow, it seemed much quicker to use a bash script than actions (since some actions just do one thing). What are the reasons to use GitHub Actions rather than triggering bash or Python scripts?
Or are we supposed to use scripting languages for most of the workflow, and GitHub Actions for a small portion of it?
Interesting, but not easy to answer without more information about what your goal is. The right answer might depend on your use case.
I have not used GitHub Actions myself yet. Let me try to explain it anyway, starting pretty high level. Unfortunately, there's no option to add a table of contents ;) Please let me know if this helps.
1. What are GitHub Actions for?
From this "What is GitHub Actions? Benefits and examples" PDF file
GitHub Actions is a CI/CD tool for the GitHub flow. You can use it to integrate and deploy code changes to a third-party cloud application platform as well as test, track, and manage code changes. GitHub Actions also supports third-party CI/CD tools, the container platform Docker, and other automation platforms.
From docs.github.com
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository or deploy merged pull requests to production. [...]
GitHub Actions goes beyond just DevOps and lets you run workflows when other events happen in your repository.
2. Continuous Integration/Continuous Deployment (CI/CD)
Usually, people run CI/CD tools to build, deploy, test, and handle other tasks along the way. We use another third-party CI/CD pipeline based on Rake to build, test, and check links. Our pipeline invokes the kind of small scripts you mention.
3. GitHub Actions and scripts
From Essential features of GitHub Actions
If your job generates files that you want to share with another job in the same workflow, or if you want to save the files for later reference, you can store them in GitHub as artifacts. Artifacts are the files created when you build and test your code. For example, artifacts might include binary or package files, test results, screenshots, or log files. Artifacts are associated with the workflow run where they were created and can be used by another job. All actions and workflows called within a run have write access to that run's artifacts.
Here's the key point, I guess. You can really do a lot of crazy stuff within a workflow. All is related/specific to GitHub. Workflows are event-driven, meaning that you can run a series of commands after a specified event has occurred. For example, every time someone creates a pull request, you can automatically run a command that executes a test or other script.
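For example, a minimal workflow along these lines shows both styles side by side (the script path is a made-up example):

# .github/workflows/pr-test.yml
# Runs on every pull request: one step reuses a published action,
# the next runs a plain bash script from the repo.
name: PR tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4            # reusable action
    - run: bash ./scripts/run-tests.sh     # plain bash script (hypothetical path)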
4. GitHub Actions workflows and scripts
You can include different scripts in your workflow (see the sketch after this list), e.g. using
JavaScript: https://github.com/actions/github-script
Python: https://github.com/marketplace/actions/run-python-script
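For instance, actions/github-script lets you run inline JavaScript against the GitHub API. A small sketch, assuming the workflow was triggered by an issue or pull request event (the comment text is just an example):

# Step inside a workflow job: comment on the PR with inline JavaScript
- uses: actions/github-script@v7
  with:
    script: |
      // `github` (an Octokit client) and `context` are provided by the action
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: 'Thanks for the pull request!'   // example text
      });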
5. (Complex) Examples
You can check out the repository for docs.github.com for some more complex examples; see the action-scripts and workflow folders. GitHub itself seems to use it pretty heavily.
6. Advantages/disadvantages of GitHub Actions
Or: differences from other CI tools
It took some time to find something that isn't marketing-ish. Key points are:
beginner-friendly using YAML config files
no need to set up your own CI pipeline
You can check out this SO post from 2019 for a list of what's good and bad about GitHub actions.
In short - for readability and the DRY ("Don't repeat yourself") principle.
It's more or less the same as using functions in programming.
I can agree that some trivial actions are useless.
But "actions/checkout" for example is priceless!

Azure Data Factory deployment automation from multiple branches

I want to create an automated deployment pipeline for Azure Data Factory.
For one stream of development we can configure it using this doc:
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
But when it comes to deploying to two different test data factories for parallel feature development (in two different branches), it does not work, because the adf_publish branch that gets generated is specific to one data factory.
Currently we are doing deployment using PowerShell scripts and passing the list of objects that need to be deployed.
Our repo is in Azure DevOps.
I tried
linking the repo to multiple data factories, but that causes issues, perhaps when finding the deltas to publish;
creating forks of the repo instead of branches so that adf_publish can be separate for every data factory, but this approach will not work when there is a conflict that needs a manual merge, since the testing would have to be repeated instead of moving to prod.
adf_publish gets generated whenever you publish. Publishing takes whatever you have in your repo and updates the data factory with it.
To develop multiple features in parallel, you just need to use "Save". Save commits your changes to the branch you are actually working on; other branches do the same. Whenever you want to publish, you first need to make a pull request from your branch to master, then publish. Any merge conflict should be solved when merging everything into the master branch. Then just publish, and there shouldn't be any conflicts; adf_publish will get generated after that.
Hope this helps!
A GitHub repository can be associated with only one data factory, and you are only allowed to publish to the Data Factory service from your collaboration branch. Check this.
It seems there is no direct and easy way to accomplish this. If you fork the repo as a workaround, you may have to solve the conflicts before merging, as @Martin suggested.
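For reference, the Microsoft doc above boils down to deploying the ARM template that publishing generates on adf_publish. A rough sketch of deploying that template to one test factory with overridden parameters; the service connection, subscription variable, resource group, factory names, and paths are all made-up examples, and a second test factory would repeat the task with different values:

# Sketch: deploy the ARM template generated on adf_publish
trigger:
  branches:
    include:
    - adf_publish

pool:
  vmImage: ubuntu-latest

steps:
- task: AzureResourceManagerTemplateDeployment@3
  displayName: Deploy to test factory A
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: my-service-connection    # example
    subscriptionId: $(test_subscription_id)                  # example variable
    resourceGroupName: rg-adf-test-a                         # example
    location: westeurope
    csmFile: my-factory/ARMTemplateForFactory.json           # folder is named after the factory
    csmParametersFile: my-factory/ARMTemplateParametersForFactory.json
    overrideParameters: -factoryName adf-test-a              # point the template at the test factory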

Team Services - Create issue in GitHub on failed build

I am using GitHub as my repo as well as my Kanban/Scrum board. We use Visual Studio Team Services for our automated builds. We really like the way VSTS works, and it works well with GitHub as the repo.
However, I want to be able to create a new GitHub issue/bug if and when our continuous integration build fails. I know you can create a VSTS Work Item, but I would rather keep all issues centralized.
Is there any way to hook up VSTS to create a GitHub issue whenever a build fails? Or perhaps create a GitHub issue whenever a new VSTS Work Item is created?
We are running our own build server, so possibly something can be done on that end?
Yes, you can create a GitHub issue when a VSTS build fails, in two ways.
Option 1:
In the VSTS build definition, add a PowerShell task at the end of the build process. The PowerShell script should:
Detect the results of the build tasks above it. Use the timeline REST API to get the build details; you can find each task's result in the result field.
Decide whether to create a GitHub issue. If all of the build tasks passed, don't create an issue; otherwise, create one through the GitHub API. A sketch of such a script is below.
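Expressed here as a YAML step for readability (in the classic editor it would be the same script inside a PowerShell task). The org/repo in the GitHub URL and the GITHUB_PAT secret variable are made-up examples, and the step needs the pipeline's OAuth token ($(System.AccessToken)) to be available to scripts:

# Runs last and always, even when earlier tasks failed
- powershell: |
    # Get the results of the tasks in this build via the timeline REST API
    $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)/timeline?api-version=5.0"
    $timeline = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
    $failed = $timeline.records | Where-Object { $_.result -eq 'failed' }
    if ($failed) {
      # Create a GitHub issue listing the failed tasks
      $issue = @{
        title = "Build $(Build.BuildNumber) failed"
        body  = "Failed tasks: $($failed.name -join ', ')"
      } | ConvertTo-Json
      Invoke-RestMethod -Method Post `
        -Uri "https://api.github.com/repos/my-org/my-repo/issues" `
        -Headers @{ Authorization = "token $(GITHUB_PAT)" } `
        -ContentType 'application/json' -Body $issue
    }
  condition: always()
  displayName: Create GitHub issue on failure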
Option 2:
Create your own website, and in VSTS use web hooks to send build-failure information to it. After your website receives the build information, it can create a GitHub issue.

How to set up a GitHub pull request build in a Jenkinsfile?

So, I've been using Jenkins for quite a while. I have set up numerous projects with the GitHub Pull Request Builder plugin to run tests whenever someone opens a pull request, and then trigger some other job (build, push, deploy, etc.) whenever the pull request actually gets merged to master.
So, is there any way to set this up with a Jenkinsfile, or the organization folders, or the multibranch build deal?
The github-organization-folder plugin in combination with the multibranch pipeline plugin offers exactly this awesome feature: it scans a whole organization (optionally restricted to certain patterns in repo/branch names) for Jenkinsfiles and automatically adds jobs. This also happens for pull requests.
Once the PR is closed, it automatically removes the job.
To avoid arbitrary code execution, an organization member has to trigger building the job (same as for the GHPRB plugin). The trigger phrase can be configured in the Jenkins system settings.
EDIT: Under the Advanced section in Jenkins, you'll find options for which types of PRs you want to build. If you build fork PRs, then there's AFAIK no way to prevent running code without inspecting it first.