I have installed Azure Workload Identity, e.g. like this:
az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
This installed a mutating webhook (version 0.15.0) in kube-system. When new versions come out, how do I keep it updated?
Does this happen automatically, or do I need to uninstall and reinstall it, or do something like that?
Yes, add-ons are maintained by Microsoft. Any updates/upgrades will be rolled out automatically.
As mentioned here:
Add-ons are a fully supported way to provide extra capabilities for
your AKS cluster. Add-ons' installation, configuration, and lifecycle
is managed by AKS
Workload Identity is not even listed as an add-on, but the same thing applies, since it's a managed component of the cluster and Microsoft is responsible for its lifecycle.
Generally, any out-of-the-box resource in the kube-system namespace is managed by Microsoft and will receive updates automatically.
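If you want to confirm which version the managed webhook is currently running, you can inspect its image tag. A minimal check, assuming the deployment keeps the name used by the azure-workload-identity chart (adjust if it differs in your cluster):
kubectl get deployment azure-wi-webhook-controller-manager -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'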
The page on server-side apply in the Kubernetes docs suggests that it can be enabled or disabled (e.g., the docs say, "If you have Server Side Apply enabled ...").
I have a GKE cluster and I would like to check if server-side apply is enabled. How can I do this?
You can create any object (a namespace, for example) and check its YAML output with the commands below; that will tell you whether SSA is enabled.
Command:
kubectl create ns test-ssa
Get the created namespace
kubectl get ns test-ssa -o yaml
If managedFields appears in the output, SSA is working.
Server-side apply was introduced, I believe, around Kubernetes 1.14 (as alpha), and it reached GA with Kubernetes 1.22. With GKE I have noticed it has already been available as alpha or beta.
If you are using Helm on your GKE cluster, you might have already noticed server-side apply in action.
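If you want to exercise SSA explicitly rather than relying on the default client-side apply, you can pass --server-side to kubectl apply. Note that newer kubectl versions hide managedFields from get -o yaml by default, so you may need --show-managed-fields to see them (the manifest file name below is just a placeholder):
kubectl apply --server-side -f deployment.yaml
kubectl get ns test-ssa -o yaml --show-managed-fields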
I am studying CI/CD on AWS (CodePipeline/CodeBuild/CodeDeploy) and have found it to be a very good toolset for managing a pipeline in the cloud, with everything managed (you don't even need to install Jenkins on EC2).
I am now reading about container building and deployment. For the build phase, CodeBuild supports building container images. For the deploy phase, while I could find a CodeDeploy solution for ECS clusters, it seems there is no direct CodeDeploy solution for EKS (kindly correct me if I am wrong).
May I know if there is a solution to integrate with an EKS cluster (i.e. the deploy phase fetches the Docker image from ECR or Docker Hub and deploys it to EKS)? I have come across some ideas using Lambda functions to trigger the cluster to perform a rolling update of the container image, but I could not find a step-by-step guide on this.
=========================
(Update 17 Sep 2020)
I somehow managed to create a Lambda function that triggers EKS to perform a rolling update of the k8s deployment. Thanks Prashanna for the source base.
I just want to share the key setup steps in the process.
(1) Update the Lambda execution role to include permission to describe EKS clusters.
Create a policy with describe-EKS-cluster access and attach it to the role:
Policy snippet:
{
  ...
  "Action": "eks:Describe*",
  ...
}
Or you can create an "EKSFullAccess" policy and attach it to the Lambda execution role.
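For reference, a minimal policy document along those lines could look like the following sketch; the wildcard Resource is only illustrative, and in practice you should scope it to your cluster's ARN:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:Describe*",
      "Resource": "*"
    }
  ]
}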
(2) Update the aws-auth ConfigMap, adding the Lambda execution role ARN to the mapRoles section. The corresponding Kubernetes role/group should be one that has permission to update container images (say system:masters) used by the k8s deployment.
You can edit the map with a command like the one below:
kubectl edit -n kube-system configmap/aws-auth
You don't have to add/update another ConfigMap even if your deployment is in another namespace; the mapping still takes effect.
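A minimal sketch of that mapRoles entry (the account ID, role name, and username are placeholders; the group could also be a custom group bound to a narrower RBAC role):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-lambda-execution-role
      username: lambda-deployer
      groups:
        - system:masters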
Sample lambda function call request and response:
GitLab provides built-in integration with EKS and deploys with the help of Helm charts. If you plan to use other tools, using an AWS Lambda to update the image is the best bet!
I've added my GitHub project.
Set up a Lambda with the code below and give this Lambda RBAC access in your EKS cluster. Try invoking the Lambda by passing the required information like namespace, deployment, image, etc.
Lambda for Kubernetes image update
The Lambda requires the eks:DescribeCluster permission.
The Lambda's role must also be granted at least an RBAC role in the EKS cluster that allows updating the deployment image; see the sketch below.
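As a least-privilege alternative to system:masters, a sketch of an RBAC Role and RoleBinding granting just the image-update permission might look like this (names are placeholders; the subject must match the username mapped in aws-auth):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lambda-image-updater
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lambda-image-updater
  namespace: default
subjects:
  - kind: User
    name: lambda-deployer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: lambda-image-updater
  apiGroup: rbac.authorization.k8s.io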
Since there's no built-in CI/CD for EKS at the moment, this is going to be a showcase of success/failure stories of third-party CI/CD tools with EKS :) My take: https://github.com/fluxcd/flux
Pros:
Quick to set up initially (until you get into multiple teams/environments)
Tracks and deploys image releases out of the box
Possibility to split what to auto-deploy in dev/prod using regex. E.g. all versions to dev, only minor to prod. Or separate tag prefixes for dev/prod.
All state is in git - a good practice to start with
Cons:
Gets complex as the pipeline expands further, e.g. blue-green, canary, auto-rollbacks, etc.
The dashboard is proprietary (a Weaveworks product)
Not for on-demand parametrized job runs like traditional CIs.
Setup:
Set up an automated image build (it looks like you've already figured that out)
Install Flux and helm-operator into the cluster and point them at your "gitops repo"
For each app, create a HelmRelease object that describes which image tags to track, e.g. by regex (see the sketch after this list)
Done. A newly published image tag that matches the filter is auto-deployed to the cluster, and the new version is committed back to the gitops repo.
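As a rough sketch, such a HelmRelease could look like the following; the annotation keys and chart layout are taken from the Flux v1 / helm-operator docs as I remember them, so verify them against the current documentation, and the names, registry, and filter are placeholders:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: dev
  annotations:
    fluxcd.io/automated: "true"
    # only auto-deploy image tags matching this filter (glob: and regex: filters also exist)
    filter.fluxcd.io/chart-image: semver:~1
spec:
  releaseName: my-app
  chart:
    git: git@github.com:example-org/gitops-repo
    ref: master
    path: charts/my-app
  values:
    image:
      repository: registry.example.com/my-app
      tag: 1.0.0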
I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy -- kubectl will NOT delete it.
Is there any way for kubectl to recognize that the service YAML is missing and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need a CI/CD tool for Kubernetes to achieve this. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications,
both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones)
describing pods, replica sets, services, RBAC settings, etc. With
helm, there is a structure and a convention for a software package
that defines a layer of YAML templates and another layer that
changes the templates called values. Values are injected into
templates, thus allowing a separation of configuration, and defines
where changes are allowed. This whole package is called a Helm
Chart.
Essentially you create structured application packages that contain
everything they need to run on a Kubernetes cluster; including
dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
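A minimal sketch of how Helm addresses the "know what's missing" part (release and chart names are placeholders): Helm tracks every resource that belongs to a release, so when a template is removed from the chart and the release is upgraded, the corresponding object is deleted from the cluster.
helm create my-app                        # scaffold a chart with templated manifests
helm upgrade --install my-app ./my-app    # install the release (or upgrade it from CI)
# remove templates/service.yaml from the chart, then:
helm upgrade --install my-app ./my-app    # Helm deletes the now-missing Service from the cluster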
There's no such built-in way. You can apply resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig, so Kubernetes does not know how to respond to a file deletion. If you still want this, you can write a program (e.g. in Go) that watches the files in one place and deletes the corresponding resource whenever a file is removed.
One Kubernetes-native way is to use an operator: whenever your files change, you update the CRD the operator uses to deploy the resources.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by this file will be deleted.
However, what you are looking for is achieving a desired state with k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify all the releases you want in one file, and it achieves that desired state every time you run helmfile apply.
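A minimal helmfile.yaml along those lines might look like this (chart path, names, and values file are placeholders):
releases:
  - name: my-app
    namespace: default
    chart: ./charts/my-app
    values:
      - ./my-app-values.yaml
Running helmfile apply then diffs the declared releases against the cluster and converges it to that state.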
I know that using Terraform to deploy your infra and Kubernetes cluster is the way to go. However, does it make any sense to also use Terraform to deploy applications on the Kubernetes cluster? Is this also the way to go?
Thank you
Though it's not devoid of its complexities, a better pipeline is the Jenkins + Helm + Spinnaker combo.
Jenkins - CI
Helm - templating and chart build
Spinnaker - deploy
Pros:
Spinnaker is an excellent tool for deployment to Kubernetes.
It can be made aware of multiple environments, so cloud pipelines are easier to build.
It natively integrates with most of the cloud providers, like AWS, Azure, PCF, etc.
Cons:
On the flip side, it's a somewhat heavy tool, as it is composed of a bunch of microservices, and the configuration can get under your skin.
As David Maze mentioned, you can combine Terraform with Helm.
You can find more information about the Terraform Helm provider here
and here
As per the Terraform documentation:
"install_tiller" - (Optional) Install Tiller if it is not already installed. Defaults to true.
You can also use Ansible with the Helm package manager; see here:
Please also take a look at the other automated tools described briefly here and here, like Jenkins, mentioned by Shirine.
There are different solutions. Depending on your needs, you should weigh factors like paid vs. free, individual developers vs. teams, preferred platform, and other concerns like security, transparency, collaboration, and availability.
Hope this helps.
I maintain the Kustomization provider as an alternative integration of Kubernetes manifests into Terraform.
It has three main advantages over alternative options:
Every K8s resource is tracked individually in the Terraform state. This gives you a preview of changes in the plan phase, and also enables destroy-and-recreate plans in case of changes to immutable fields.
The provider allows you to use native Kubernetes YAML unchanged. No need to translate everything into HCL like with the Kubernetes provider.
Being based on Kustomize, it allows you to use Kustomize's overlay approach. But by defining the overlay in Terraform, you can use Terraform variables, module outputs and so on, to patch the Kubernetes resources.
You can of course use the provider's data sources and resources directly, but the most convenient way is probably via this module:
module "example_manifests" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
# list of paths to K8s YAML files
"${path.root}/path/to/a/kubernetes/resource.yaml"
]
}
}
}
As the team gets more comfortable with Google Cloud Platform and Kubernetes, the ability to track what changes are being applied to the environment becomes more important. We're using kubectl apply with YAML files (mostly Deployments, Services, and ConfigMaps). Is there a way to see what changes are being applied via kubectl?
You can use Kubernetes audit logs to do what you need.
If you're using GKE with a cluster version > 1.8.3, audit logging is available by default in Stackdriver Logging.
https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging
If you're not using GKE, you could also read these logs with fluentd by specifying the audit log directory in the fluentd config.
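On GKE you can query those audit entries from Cloud Logging, for example with gcloud; the filter below follows the audit-logging documentation linked above, but treat the exact field values as something to verify for your project:
gcloud logging read \
  'resource.type="k8s_cluster" AND logName:"cloudaudit.googleapis.com%2Factivity"' \
  --limit 10 --format json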