I have configured an Azure Kubernetes Service (AKS) cluster.
I have completed a couple of deployments successfully using the Kubectl task in Azure DevOps. The task command is "kubectl apply -f deployment.yaml".
In the deployment.yaml I have some items which I would like to configure as variables, for example the image as below:
containers:
- name: xxxxx
  image: containerregistry.azurecr.io/xxxxx:5517
  ports:
  - containerPort: 80
Now I am publishing the Docker image with the build number being 5517, 5518 and so on. So how can I change the image tag on the fly when "kubectl apply -f deployment.yaml" is executed? The deployment.yaml is checked into my Azure DevOps repo.
So you have 2 options:
preprocess the file and replace tokens (there is a task for that; see the sketch below)
use Helm
You obviously have other options like using Pulumi/Terraform/Flux/etc., but these are the most straightforward ones to use from your starting point.
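For the token-replacement option, a minimal sketch, assuming the marketplace "Replace Tokens" (replacetokens) task is installed and the image is tagged with the build ID; the token pattern and service connection name below are placeholders. In deployment.yaml:

containers:
- name: xxxxx
  image: containerregistry.azurecr.io/xxxxx:#{Build.BuildId}#

And in the pipeline, before the apply step:

steps:
- task: replacetokens@3
  inputs:
    targetFiles: 'deployment.yaml'
    tokenPrefix: '#{'
    tokenSuffix: '}#'
- task: Kubernetes@1
  inputs:
    kubernetesServiceEndpoint: 'my-aks'   # placeholder service connection name
    command: 'apply'
    arguments: '-f deployment.yaml'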
Related
I am very new to creating a CD pipeline to grab an image from Azure Container Registry (ACR) and push it into Azure Kubernetes Service (AKS).
In the first part, i.e. the CI pipeline, I am able to push my .NET Core API image into ACR. Now my aim is to
Create a CD pipeline to grab that image and deploy it to Kubernetes.
I have created a Kubernetes cluster in Azure with 3 agents running. I want to make it very simple without involving any deployment.yaml file etc.
Can anyone help me out with how I can achieve this goal, and
What are the exact tasks in my CD pipeline?
Thanks for the help in advance.
Creating the YAML file is critical for being able to redeploy and track what is happening. If you don't want to create YAML then you have limited options. You could execute the imperative command from Azure DevOps by using a kubectl task.
kubectl create deployment <name> --image=<registry>.azurecr.io/<image>:<tag>
Or you can use the Kubernetes provider for Terraform to avoid creating YAML directly.
Follow up:
So if you are familiar with the Kubernetes imperative commands, you can use them to generate your YAML with the --dry-run and --output options, like so:
kubectl create deployment <name> --image=<registry>.azurecr.io/<image>:<tag> --dry-run --output yaml > example.yaml
That would produce something that looks like this, which you can use to bootstrap your manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: example
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Now you can pull that repo or an artifact that contains that manifest into your Azure DevOps Release Pipeline and add the "Deploy to Kubernetes Cluster" task.
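In YAML pipeline syntax that task could look roughly like this (a sketch; the service connection name, namespace and artifact path are placeholders for your own values):

steps:
- task: KubernetesManifest@0
  inputs:
    action: 'deploy'
    kubernetesServiceConnection: 'my-aks'
    namespace: 'default'
    manifests: '$(Pipeline.Workspace)/drop/example.yaml'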
This should get you pretty close to completing a pipeline.
This is impossible; it doesn't really make sense without a deployment.yaml file or something similar. You can use:
kubectl create deployment %name% --image=your_image.azurecr.io
but this is not really flexible and won't get you anywhere. If you want to use Kubernetes you have to understand deployments/pods/services/etc. There is no way of getting around that.
When using the "kubeconfig" option, I get the following error when I click on "Verify connection":
Error: TFS.WebApi.Exception: No user credentials found for cluster in KubeConfig content. Make sure that the credentials exist and try again.
The kubeconfig I pasted in, and selected the correct context from, is a direct copy-paste of what is in my ~/.kube/config file, and this works fine with kubectl:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxx
    server: https://aks-my-stage-cluster-xxxxx.hcp.eastus.azmk8s.io:443
  name: aks-my-stage-cluster-xxxxx
contexts:
- context:
    cluster: aks-my-stage-cluster-xxxxx
    user: clusterUser_aks-my-stage-cluster-xxxxx_aks-my-stage-cluster-xxxxx
  name: aks-my-stage-cluster-xxxxx
current-context: aks-my-stage-cluster-xxxxx
kind: Config
preferences: {}
users:
- name: clusterUser_aks-my-stage-cluster-xxxxx_aks-my-stage-cluster-xxxxx
  user:
    auth-provider:
      config:
        access-token: xxxxx.xxx.xx-xx-xx-xx-xx
        apiserver-id: xxxx
        client-id: xxxxx
        environment: AzurePublicCloud
        expires-in: "3599"
        expires-on: "1572377338"
        refresh-token: xxxx
        tenant-id: xxxxx
      name: azure
Azure DevOps has an option to save the service connection without verification:
Even though the verification fails when editing the service connection, pipelines that use the service connection do work in my case.
Depending on the pasted KubeConfig you might encounter a second problem where the Azure DevOps GUI for the service connection doesn't save or close, but also doesn't give you any error message. By inspecting the network traffic in e.g. Firefox's developer tools, I found out that the problem was the KubeConfig value being too long: only ~20,000 characters are allowed. After removing irrelevant entries from the config, it worked.
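One way to trim the config down to just the context you actually need (a sketch; switch to the right context first):

kubectl config use-context aks-my-stage-cluster-xxxxx
kubectl config view --minify --flatten > trimmed-kubeconfig.yaml

--minify keeps only the current context and --flatten inlines the certificate data, so the result can be pasted into the service connection.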
PS: Another workaround is to run kubelogin in a script step in your pipeline.
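For example, a script step along these lines (a rough sketch; it assumes the job is already signed in with the Azure CLI, and the resource group and cluster names are placeholders):

- script: |
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
    kubelogin convert-kubeconfig -l azurecli
    kubectl apply -f deployment.yaml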
It seems like it's not enough just to use a kubeconfig converted with kubelogin. The plugin is required for kubectl to make a test connection, and it's probably not used by the Azure DevOps service connection configuration.
As a workaround that can work for a self-hosted build agent, you can install kubectl, kubelogin and whatever other software you need to work with your AKS cluster, and use shell scripts like:
export KUBECONFIG=~/.kube/config
kubectl apply -f deployment.yaml
You can try running the command below to get the kubeconfig, and then copy the content of the ~/.kube/config file into the service connection and try again.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
After running the above command and copying the config from ~/.kube/config on my local machine, I successfully added my Kubernetes connection using the kubeconfig option.
You can also refer to the steps here.
I've integrated GitLab with my Digital Ocean Kubernetes cluster. I am trying to set up a simple manual build that will deploy to my Kubernetes cluster.
My .gitlab-ci.yml file details are below:
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl version
    - kubectl apply -f web.yaml
I am not sure why this is not working. Currently getting the following error:
Error from server (Forbidden): error when retrieving current
configuration ... from server for: "web.yaml": ingresses.extensions "hmweb-ingress" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "ingresses" in API group "extensions" in the namespace "hm-ns01"
As far as I can understand, it cannot execute the kubectl apply command.
Am I doing something wrong?
I think you are missing the environment in your deploy job.
Modify your job definition to look something like this:
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  environment:
    name: production
  script:
    - kubectl version
    - kubectl apply -f web.yaml
Where "production" is interchangable with any environment name.
At least that fixed the issue for me.
In this stackoverflow question: kubernetes Deployment. how to change container environment variables for rolling updates?
The asker mentions he edited the deployment to change the version to v2. What's the workflow for automated deployments of a new version, assuming the container v2 already exists? How do you then deploy it without manually editing the deployment config or checking in a new version of the YAML?
If you change the underlying container (like v1 -> another version also named v1) will Kubernetes deploy the new or the old?
If you don't want to:
Check in the new YAML version
Manually update the config
You can update the deployment either through:
A REST call to the deployment in question, patching/putting your new image as a resource modification, i.e. PUT /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name} -d {... deployment with v2 ...}
Set the image: kubectl set image deployment/<DEPLOYMENT_NAME> <CONTAINER_NAME>=<IMAGE_NAME>:v2 (see the example below)
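From a CI job the second option could look roughly like this (a sketch, assuming the deployment and the container are both named my-app and the build number is available as $BUILD_NUMBER; adjust the names and registry to your setup):

kubectl set image deployment/my-app my-app=myregistry.example.com/my-app:$BUILD_NUMBER
kubectl rollout status deployment/my-app   # wait until the rollout has finished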
Assuming v1 is already running and you try to deploy v1 again with the same environment variable values etc., then k8s will not see any difference between your current and updated deployment resource.
Without a diff, the k8s scheduler assumes that the desired state is already reached and won't schedule any new pods, even when imagePullPolicy: Always is set. The reason is that imagePullPolicy only has an effect on newly created pods. So if a new pod is being scheduled, then k8s will always pull the image again. Still, without any diff in your deployment, no new pod will be scheduled in the first place.
For my deployments I always set a dummy environment variable, like a deploy timestamp DEPLOY_TS, e.g.:
containers:
- name: my-app
  image: my-app:{{ .Values.app.version }} ## value dynamically set by my deployment pipeline
  env:
  - name: DEPLOY_TS
    value: "{{ .Values.deploy_ts }}" ## value dynamically set by my deployment pipeline
The value of DEPLOY_TS is always set to the current timestamp - so it is always a different value. That way k8s will see a diff on every deploy and schedule a new pod - even if the same version is being re-deployed.
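Since both values are Helm template references, the deployment pipeline can pass them in on every deploy, for example (a sketch; the release and chart names are placeholders):

helm upgrade my-app ./my-chart --set app.version=5518 --set deploy_ts=$(date +%s)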
(I am currently running k8s 1.7)
We now have scheduled jobs in Kubernetes 1.4 - is it possible to do a rolling container update (new image) against the cluster using this? The basic idea is I want a simple way to automatically roll out updates at a set interval.
The 'traditional' way to do updates is for the CI to hit a webhook on the Kube master, but I want to avoid exposing services to the public and would rather just check for updates periodically.
I think it's generally safe to expose your master server and send updates to it from your CI system, but you could definitely set up a scheduled job to update a Deployment to its latest version. Kubernetes has a concept called Service Accounts for authentication with the API from within the cluster, and they are integrated well with kubectl (i.e. it will use the service account info automatically to auth). The cluster also provides a kubernetes service for the master API. So you can deploy a container with kubectl and a script and use it to update the Deployment periodically.
You will need a mechanism to figure out what the latest version is. Maybe you could store the latest version info in a text file or something written to GCS or S3 and pull that file to get the latest version.
Say you have a deploy.yaml like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myapp:<latest-ver>
And then you can generate and update the Deployment in a script like so:
#!/bin/sh
# Fetch the latest version number, substitute it into the manifest and apply it
wget -O VERSION http://url/to/VERSION
sed "s/<latest-ver>/$(cat VERSION)/" deploy.yaml | kubectl apply -f -
And build that into an image and run it as your scheduled job.
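As a rough sketch of the scheduling part, assuming the script above is baked into an image called deploy-check (the image name, schedule and service account are placeholders; on 1.4 the resource was still called ScheduledJob under batch/v2alpha1, on current clusters it is a CronJob under batch/v1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: deploy-check
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployer   # placeholder; needs permission to update Deployments
          containers:
          - name: deploy-check
            image: deploy-check:latest
          restartPolicy: OnFailure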