Helm deployment of multiple containers to a Kubernetes cluster (different environments)

I have the following setup:
multiple apps
multiple environments per app (dev, qa, staging, prod)
two or more sub-instances per app (e.g. de and en)
We develop in branches and can currently build a container from a branch (Jenkins) and deploy it individually with a Helm chart (dev and qa).
When everything has been tested successfully, we merge the changes to master and build it. This creates a new container with the app and all changes and deploys it to dev and qa. At the end of a sprint, the last master build is "released" for staging in the job, which writes the container tag into the Helm chart values for the staging deployment.
Our Helm chart repo therefore consists of one Helm chart per app (so each app is also deployable individually).
The container to be deployed is identified by a tag that is written to a separate values YAML (e.g. value-qa-de.yaml).
We would now like to deploy all involved apps (where they have changes) together for staging and, finally, prod. The idea is to have a Jenkins job iterate over the underlying app Helm charts and run install/upgrade for each one.
The question is, for one, what happens if one app's deployment fails?
Also, is there a better or more elegant way to model this process?
Thanks for any input.
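One arguably more elegant alternative to looping install/upgrade per chart is a so-called umbrella chart that declares every app chart as a dependency, so a single helm upgrade --install covers all apps; run with --atomic, Helm rolls the whole release back if any part fails, which addresses the failure question at the cost of losing independent releases. A minimal sketch, assuming Helm 3 and hypothetical chart names and repository URL:

```yaml
# Chart.yaml of a hypothetical umbrella chart that pulls in every app chart.
# Chart names, versions, and the repository URL are placeholders.
apiVersion: v2
name: all-apps
version: 1.0.0
dependencies:
  - name: app-one
    version: 2.3.1
    repository: https://charts.example.com
    condition: app-one.enabled   # lets you skip unchanged apps per environment
  - name: app-two
    version: 1.8.0
    repository: https://charts.example.com
    condition: app-two.enabled
```

A per-environment values file (e.g. a hypothetical values-staging-de.yaml setting the enabled flags and image tags) would then select what gets deployed for each environment and sub-instance.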

How to start/trigger a job when a new version of deployment is released (image updated) on Kubernetes?

I have two environments (clusters): Production and Staging with two independent databases. They are both deployed on Kubernetes and production doesn't have a fixed schedule for new deployments but it happens on a weekly basis (roughly).
I would like to sync the production database with the staging database every time that a new release is deployed to production (kubernetes deployment is updated with new image).
Is there a way I can set up a job/cronjob to be triggered every time this event happens?
The deployments are done using ArgoCD to pull the changes in the deployment manifest from a GitHub repository.
I don't think this functionality is inherent to Kubernetes; you are asking about something custom that can be implemented in a variety of ways (depending on your tool stack).
E.g.:
If you are using Helm to install to Production, you can use a post-install hook that triggers a Job that does what you want.
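A minimal sketch of such a hook template, assuming a placeholder image and command; the helm.sh/hook annotations are Helm's documented hook mechanism, and post-upgrade is included so the Job also fires when an existing install is updated:

```yaml
# templates/db-sync-job.yaml inside the production chart (names hypothetical)
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-sync"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-sync
          image: registry.example.com/db-sync:latest  # placeholder image
          command: ["/bin/sh", "-c", "echo sync prod DB to staging here"]
```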
Perhaps ArgoCD has some post-install functionality that can also create a Job resource doing what you want.
I think you can also use a tool like Kyverno and write a policy that generates a K8s Job whenever a matching resource is created in K8s.
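A rough sketch of such a policy, assuming Kyverno's generate-rule schema; the policy name, namespace, and sync image are hypothetical. Note that generate rules fire when a matching resource is created; reacting specifically to image updates would need additional preconditions on the admission request.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: db-sync-on-deploy          # hypothetical name
spec:
  rules:
    - name: generate-db-sync-job
      match:
        any:
          - resources:
              kinds:
                - Deployment
              namespaces:
                - production       # hypothetical namespace
      generate:
        apiVersion: batch/v1
        kind: Job
        name: "db-sync-{{request.object.metadata.name}}"
        namespace: production
        synchronize: false
        data:
          spec:
            template:
              spec:
                restartPolicy: Never
                containers:
                  - name: db-sync
                    image: registry.example.com/db-sync:latest  # placeholder
```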
This is exactly the kind of case Argo Events is for.
https://argoproj.github.io/argo-events/
There are many ways to implement this; which one is best depends on your exact situation.
E.g. if you can use a Git tag event's webhook, you could go with an HTTP trigger that initiates a Job or Argo Workflow.
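A rough sketch of that webhook route (the resource names, port, and Job payload are hypothetical; the EventSource/Sensor pair is the standard Argo Events pattern):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: release-webhook
spec:
  webhook:
    release:                  # event name, referenced by the Sensor below
      port: "12000"
      endpoint: /release
      method: POST
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: db-sync-on-release
spec:
  dependencies:
    - name: release-dep
      eventSourceName: release-webhook
      eventName: release
  triggers:
    - template:
        name: db-sync-job
        k8s:
          operation: create
          source:
            resource:
              apiVersion: batch/v1
              kind: Job
              metadata:
                generateName: db-sync-   # a fresh Job per event
              spec:
                template:
                  spec:
                    restartPolicy: Never
                    containers:
                      - name: db-sync
                        image: registry.example.com/db-sync:latest  # placeholder
```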

How to handle multiple environments with Google Cloud Build and Kubernetes

I successfully set up a CI/CD pipeline following this tutorial.
It shows clearly how to make Google Cloud Build and Kubernetes work with one environment: production.
For simplicity, this tutorial uses a single environment —production—
in the env repository, but you can extend it to deploy to multiple
environments if needed.
Right, but some details are missing: is there one kubernetes.yaml file per environment? What about Kubernetes namespaces?
More precisely, what would be the way to handle multiple environments (staging, ...)?
There could be a bazillion ways of doing environments, but what I understand from this line:
env repository: contains the manifests for the Kubernetes Deployment
is that the default master/production branch maps to the production environment; you can then create, for example, testing and staging branches, where you test and stage your changes, and later port them to the master branch.
In fact, if you keep reading that document, it tells you:
The env repository can have several branches that each map to a
specific environment (you only use production in this tutorial) and
reference a specific container image, whereas the app repository does
not.
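To make the missing details concrete, one common interpretation (an assumption, not something the tutorial states) is one kubernetes.yaml per env-repo branch, identical except for the namespace and the image tag:

```yaml
# kubernetes.yaml on the staging branch of the env repository
# (namespace, app name, and image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: staging          # the production branch would say: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/my-project/myapp:abc1234   # commit-specific tag
```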
One more thing: if you have access to GitLab and Kubernetes, you can implement this without Google GKE and Cloud Build.

Unreleasing (undeploying) applications in VSTS?

I have a project with N git repos, each representing a static website (N varies). For every git repo there exists a build definition that creates an nginx Docker image on Azure Container Registry. These N build definitions are linked to N release definitions that deploy each image to k8s (also on Azure). Overall, CI/CD works fine, and after the releases have succeeded for the first time, I see a list of environments, each representing a website that is now online.
What I cannot do with VSTS CI/CD, though, is declare how these environments are torn down. In GitLab CI (which I used before), there exists a concept of stopping an environment, and although this is just a stage in .gitlab-ci.yml, running it literally removes an environment from the list of the deployed ones.
Stopping an environment can be useful when deleting auto-deployable feature branches (aka Review Apps). In my case, I'd like to do this when an already shared static website needs to be removed.
VSTS does not seem to have a concept of unreleasing something that has already been released, and I'm wondering what the best workaround could be. I tried these two options so far:
Create N new release definition pipelines, which call kubectl delete ... for the corresponding static websites. That does not make things clear at all, because an environment called k8s prod (website-42) in one pipeline is not the same one as in another (otherwise, I could see whether web → cloud or web × cloud was called last):
Define a new environment called production (delete) in the same release definition and trigger it manually.
In this case 'deploy' sits a bit closer to 'undeploy', but it's hard to figure out what ran last (in the example above, you can kind of guess that re-releasing my k8s resources happened after I deleted them; you need to look at the time on the cards, which is a pretty poor indication).
VSTS does not have a "stop environment" feature (auto-deleting what was deployed to the environment) in release management, but you can achieve the same thing with a VSTS YAML build.
So besides the two workarounds you shared, you can also stop the environment via a VSTS YAML build (similar to the mechanism in GitLab).
For a YAML CI build, you just need to commit a file ending with .vsts-ci.yml, and in that file you can specify the tasks that delete the deployed app.
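A minimal sketch of what such a file could look like; the queue name, namespace, and resource names are all placeholders, and the YAML schema shown is the early VSTS one (it later evolved into Azure Pipelines YAML):

```yaml
# .vsts-ci.yml: a teardown build that undeploys one static website
queue: Hosted Linux Preview
steps:
  - script: |
      kubectl delete deployment website-42 --namespace=websites
      kubectl delete service website-42 --namespace=websites
    displayName: Undeploy website-42
```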

Can I deploy multiple build versions using the same bamboo deployment plan?

I would like to create a single bamboo deployment plan to deploy multiple versions of an artifact.
Each supported version has a maintenance branch in git.
Bamboo supports the creation of a single build plan that can be applied to many branches.
Can this be transferred to the deployment project, where the deployment would be identical, parameterized only by the version?
Deployment plans are used for deploying artifacts, which are created by existing Bamboo build jobs. So if you have Bamboo build jobs which create the different versions of your artifact, you can easily reuse your existing deployment plan.
I am assuming a Bamboo build plan (with multiple builds, either due to updates or due to different branches, thus containing different versions of your software) and a working deployment plan.
You can then use your deployment plan and start a deployment by clicking on the cloud icon, select "Create new release from build result", choose the branch and/or correct build number of the build, give the release a meaningful name, and deploy the newly created release of your software. However, this means that at any given time you'll have only a single version deployed.
If you want to deploy multiple versions simultaneously, you should clone the existing environment of the deployment plan; otherwise you wouldn't be able to track which version (release) of your software is deployed.

Best CD strategy for Kubernetes Deployments

Our current CI deployment phase works like this:
Build the containers.
Tag the images as "latest" and <commit hash>.
Push images to repository.
Invoke rolling update on appropriate RC(s).
This has been working great for RC based deployments, but now that the Deployment object is becoming more stable and an underlying feature, we want to take advantage of this abstraction over our current deployment schemes and development phases.
What I'm having trouble with is finding a sane way to automate the update of a Deployment with the CI workflow. What I've been experimenting with is splitting up the git repos and doing something like:
[App Build] Build the containers.
[App Build] Tag the images as "latest" and <commit hash>.
[App Build] Push images to repository.
[App Build] Invoke build of the app's Deployment repo, passing through the current commit hash.
[Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash, e.g. image: app-%%COMMIT_HASH%%; see the sketch after this list).
[Deployment Build] Apply the updated manifest to the appropriate Deployment resource(s).
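For reference, the interpolation step amounts to keeping a tokenized manifest fragment like the sketch below in the Deployment repo (the token syntax is taken from the list above; the surrounding fields are hypothetical) and substituting the hash before running kubectl apply:

```yaml
# deployment.yaml template before interpolation; the CI job replaces
# %%COMMIT_HASH%% with the commit hash passed from the app build.
spec:
  template:
    spec:
      containers:
        - name: app
          image: app-%%COMMIT_HASH%%   # as written in the question
```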
Surely, though, there's a better way to handle this. It would be great if the Deployment monitored for hash changes of the image's "latest" tag... maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle deploying Deployments would be appreciated :)
The Deployment only monitors for pod template (.spec.template) changes. If the image name didn't change, the Deployment won't do the update. You can trigger the rolling update (with Deployments) by changing the pod template, for example by labeling it with the commit hash. Also, you'll need to set .spec.template.spec.containers.imagePullPolicy to Always (it defaults to Always if the :latest tag is specified and cannot be updated), otherwise the image will be reused.
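Concretely, that suggestion looks like the sketch below (a minimal example using the current apps/v1 API; the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
        commit: abc1234            # changing this label changes .spec.template
                                   # and therefore triggers a rolling update
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest
          imagePullPolicy: Always  # re-pull :latest for each new pod
```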
We've been practising what we call GitOps for a while now.
What we have is a reconciliation operator, which connects a cluster to a configuration repository and makes sure that whatever Kubernetes resources (including CRDs) it finds in that repository are applied to the cluster. It allows for ad-hoc deployment, but any ad-hoc changes to something that is defined in git will get undone in the next reconciliation cycle.
The operator is also able to watch any image registry for new tags and update the image attributes of Deployment, DaemonSet, and StatefulSet objects. It makes the change in git first, then applies it to the cluster.
So what you need to do in CI is this:
Build the containers.
Tag the images as <commit_hash>.
Push images to repository.
The agent will take care of the rest for you, as long as you've connected it to the right config repo where the app's Deployment object can be found.
For a high-level overview, see:
Google Cloud Platform and Kubernetes
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.