How do I automatically apply updates to Tekton resources stored in a git repo?

As background, I am in the process of upgrading a few projects from Jenkins- and GitLab-CI-based CI to Tekton. In those projects, it is common to see a Jenkinsfile or .gitlab-ci.yml defining the pipeline to run for the project. Those files are then used by the corresponding tool at build time whenever a triggering event occurs (such as a merge or commit). They change over time to accommodate whatever the repository needs to perform its build, and they are committed to the repo like any other work. This has the desirable property that you know exactly what the build pipeline looked like at any point in the commit history, which aids build reproducibility when handled carefully and correctly.
The corresponding approach with Tekton appears to be storing the resource YAML files under a /tekton folder. However, most of the documentation and examples I've seen for Tekton describe what appears to be a manual process of pushing your resources out with kubectl. Once the resources have been installed, the EventListener can use them whenever necessary, but what happens when I commit an update to pipeline.yaml? Is the expectation that a developer manually pushes the updated resources with kubectl, or is there a way for the trigger to automatically use the ./tekton/pipeline.yaml stored in the git repo that sourced the event?
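For concreteness, the manual step I mean is essentially this, assuming the resource YAMLs live under ./tekton/ in the repo:

kubectl apply -f ./tekton/

What I am hoping for is a way to have the equivalent of that command run automatically against the revision that sourced the event.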

Related

Azure DevOps: Why does a new pipeline commit the YAML file to the default branch?

I created a new pipeline in Azure DevOps, and created a new branch for it.
As a result, DevOps automatically committed the YAML file for the new pipeline to my 'development' branch.
None of the other pipelines I've created have YAML files committed into the repo...
Why does it do this?
Do we have to keep the YAML file there?
It has nothing to do with the source code of the application, so it doesn't seem to make sense why it's stored there.
YAML is code for how your application is built and deployed, and thus it is part of the source code. Putting it under source control lets you track version changes, along with any parameters or variables that are determined or inserted during the build process.
This is opposed to the older way of doing things, where pipelines were updated via the UI rather than source control and did not benefit from peer review, branching and merging, or the additional policies that can be applied.
On top of that, YAML Pipelines for Releases went GA the other week, which makes YAML in the repo even more powerful: the YAML will not only build but also release your code.
In Azure DevOps we define pipelines either using YAML syntax or through the user interface (Classic). So there are two kinds of pipelines: YAML pipelines and Classic UI (Classic build and release) pipelines.
None of the other pipelines I've created have YAML files committed into the repo...
Why does it do this?
It's expected behavior when defining pipelines using YAML syntax: the pipeline is versioned with your code and follows the same branching structure.
One advantage of this is that when a change to the build process causes a break or an unexpected outcome, the change is in version control with the rest of your codebase, so you can more easily identify the issue.
To sum up, the YAML pipeline being added to version control is by-design behavior. If you don't want this, feel free to use Classic Build and Classic Release pipelines instead; they are also a good choice. For the differences between the formats, check Feature availability. Hope it helps :)
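For reference, a minimal azure-pipelines.yml of the kind DevOps commits to your branch might look like this (the branch name and build step are illustrative):

trigger:
  branches:
    include:
      - development

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo "Building $(Build.SourceBranchName)"
    displayName: Build

Because the file lives on the branch, checking out an old commit also gives you the exact pipeline definition that built it.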

How to maintain a hundred different Terraform configs?

We created a Terraform template which we will deploy many dozen times in the future, in different workspaces. Each workspace will have a separate configuration file.
Now we would like to automate this procedure and are thinking about keeping the configuration files in a git repo.
Does anyone have a best practice for storing the configuration files in a git repo and triggering a CI/CD workflow (Azure DevOps)?
In general, we would only like to apply changes for workspaces whose configuration has changed.
The terraform plan and apply commands have an option for passing in the tfvars file you want to use, so something like this:
terraform apply --var-file=workspace.tfvars
So in the pipeline you would grab your Terraform template artifacts and your config files. I would then set a TF_WORKSPACE variable to force your workspace, and I would also make your tfvars files match the workspace name so you can reuse the variable in your apply command. This forces your workspace and configuration file to match.
To run this only when those files have changed, you would need a path trigger that fires on changes to them, as in the sketch below.
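A minimal sketch of such a pipeline (Azure DevOps YAML; the configs/ path, workspace name, and tfvars layout are assumptions):

# azure-pipelines.yml
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - configs/*        # only run when a workspace config changes

variables:
  TF_WORKSPACE: dev      # forces the Terraform workspace (it must already exist)

steps:
  - script: |
      terraform init
      terraform apply --var-file="configs/$(TF_WORKSPACE).tfvars" -auto-approve
    displayName: Apply the changed workspace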
That said, I don't see any harm in running Terraform every time, regardless of whether changes occurred. The worst possible outcome would be that someone made a change outside of Terraform and it gets undone.

Calling a Jenkinsfile from a remote repo into a build pipeline

I would like to pull a source-controlled version of a declarative Jenkinsfile into a multibranch Jenkins job.
For example, I have 20 multibranch build jobs, each building and deploying an application; each build job would have a static Jenkinsfile that points to, pulls, and uses a version-controlled Jenkinsfile.
This would reduce the need to make changes across all repositories when the pipeline changes.
(We do use shared libraries where relevant.)
Thanks in advance
You can define the whole pipeline as a global variable within a Shared Library, then you can use a single step in your Jenkinsfile, as explained here.
That way you are able to update the content of the pipeline without updating each Jenkinsfile across all your repositories.
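A minimal sketch of that pattern (library and step names are made up): the Shared Library exposes the whole declarative pipeline as a global variable, e.g. vars/standardPipeline.groovy:

// vars/standardPipeline.groovy in the shared library repo
def call() {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'   // placeholder for the real build steps
                }
            }
        }
    }
}

Each application repo's Jenkinsfile then shrinks to a single step:

// Jenkinsfile in each of the 20 application repos
@Library('my-shared-library') _
standardPipeline()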

How to set up a github pull request build in a Jenkinsfile?

So, I've been using Jenkins for quite a while. I have set up numerous projects with the Github Pull Request Builder plugin to run tests whenever someone opens a pull request, and then trigger some other job (build, push, deploy, etc) whenever the pull request actually gets merged to master.
So, is there any way to set this up with a Jenkinsfile, or the organization folders, or the multibranch build deal?
The github-organization-folder plugin in combination with the multibranch plugin offers exactly this awesome feature: it scans a whole organization (optionally restricted to certain patterns in repo/branch names) for Jenkinsfiles and automatically adds jobs. This also happens for pull requests.
Once the PR is closed, it automatically removes the job.
To avoid arbitrary code execution, an organization member has to trigger the build of the job (same as for the GPRB plugin). The trigger phrase can be configured in the Jenkins system settings.
EDIT: Under the Advanced section in Jenkins, you will find options for which types of PR you want to build. If you build fork PRs, then as far as I know there is no way to prevent running code without inspecting it first.
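For what it's worth, all the scan needs in order to add a job is a Jenkinsfile at the root of each branch or PR, e.g. a minimal declarative one (the test command is a placeholder):

// Jenkinsfile picked up by the organization scan
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'make test'   // runs on every branch and every PR
            }
        }
    }
}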

Best CD strategy for Kubernetes Deployments

Our current CI deployment phase works like this:
Build the containers.
Tag the images as "latest" and <commit hash>.
Push images to repository.
Invoke rolling update on appropriate RC(s).
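Step 4 is essentially the (since-deprecated) rolling-update command against the replication controller, something like this (names illustrative):

kubectl rolling-update app-rc --image=registry.example.com/app:$COMMIT_HASH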
This has been working great for RC based deployments, but now that the Deployment object is becoming more stable and an underlying feature, we want to take advantage of this abstraction over our current deployment schemes and development phases.
What I'm having trouble with is finding a sane way to automate the update of a Deployment with the CI workflow. What I've been experimenting with is splitting up the git repos and doing something like:
[App Build] Build the containers.
[App Build] Tag the images as "latest" and <commit hash>.
[App Build] Push images to repository.
[App Build] Invoke build of the app's Deployment repo, passing through the current commit hash.
[Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash e.g. image: app-%%COMMIT_HASH%%)
[Deployment Build] Apply the updated manifest to the appropriate Deployment resource(s).
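Concretely, the two deployment-build steps are nothing fancier than this (file names hypothetical):

sed -e "s/%%COMMIT_HASH%%/${COMMIT_HASH}/g" deployment.tmpl.yaml > deployment.yaml
kubectl apply -f deployment.yaml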
Surely though there's a better way to handle this. It would be great if the Deployment monitored for hash changes of the image's "latest" tag... maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle the deployment of Deployments would be appreciated :)
The Deployment only monitors for pod template (.spec.template) changes. If the image name doesn't change, the Deployment won't perform an update. You can trigger a rolling update by changing the pod template, for example by labeling it with the commit hash. You'll also need to set .spec.template.spec.containers[].imagePullPolicy to Always (it defaults to Always when the :latest tag is specified), otherwise the previously pulled image will be reused.
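A minimal sketch of the pod template change that forces a rollout (names and registry are illustrative; the commit label would be stamped in by CI):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
        commit: "3f5c2ab"         # changing this value changes .spec.template and triggers a rolling update
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest
          imagePullPolicy: Always # re-pull even though the :latest tag itself did not change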
We've been practising what we call GitOps for a while now.
What we have is a reconciliation operator, which connects a cluster to a configuration repository and makes sure that whatever Kubernetes resources (including CRDs) it finds in that repository are applied to the cluster. It allows for ad-hoc deployments, but any ad-hoc change to something that is defined in git will get undone in the next reconciliation cycle.
The operator is also able to watch any image registry for new tags and update the image attributes of Deployment, DaemonSet, and StatefulSet objects. It makes the change in git first, then applies it to the cluster.
So what you need to do in CI is this:
Build the containers.
Tag the images as <commit_hash>.
Push images to repository.
The agent will take care of the rest for you, as long as you've connected it to the right config repo where the app's Deployment object can be found.
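Illustratively, the config repo then just holds the app's manifests, and the operator rewrites the image tag in git before applying it (registry and tag are made up); an excerpt:

# deploy/app.yaml in the config repo
spec:
  template:
    spec:
      containers:
        - name: app
          image: registry.example.com/app:3f5c2ab   # the operator bumps this tag when a new image appears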
For a high-level overview, see:
Google Cloud Platform and Kubernetes
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.