Azure CD Pipeline to push image into the AKS (Kubernetes pipeline) - kubernetes

I am very new to creating a CD pipeline that grabs an image from Azure Container Registry (ACR) and pushes it into Azure Kubernetes Service (AKS).
In the first part (the CI pipeline) I am able to push my .NET Core API image into ACR. Now my aim is to
create a CD pipeline that grabs that image and deploys it to Kubernetes.
I have already created a Kubernetes cluster in Azure with 3 agents running. I want to keep it very simple, without involving any deployment.yaml file etc.
Can anyone help me out with how I can achieve this goal, and
what are the exact tasks in my CD pipeline?
Thanks for the help in advance

Creating the YAML file is critical for being able to redeploy and track what is happening. If you don't want to create YAML then you have limited options. You could execute the imperative command from Azure DevOps by using a kubectl task.
kubectl create deployment <name> --image=<image>.azurecr.io
Or you can use the Kubernetes provider for Terraform to avoid creating YAML directly.
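As a sketch, that imperative command could be run from an Azure DevOps YAML pipeline via the built-in Kubernetes task; the service connection and image names below are assumptions for illustration:

```yaml
- task: Kubernetes@1
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: my-aks-connection   # assumed service connection name
    command: create
    arguments: deployment myapp --image=myregistry.azurecr.io/myapp:latest
```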
Follow up:
So if you are familiar with the Kubernetes imperative commands, you can use them to generate your YAML with the --dry-run and --output options (on newer kubectl versions the flag is --dry-run=client). Like so:
kubectl create deployment <name> --image=<image>.azurecr.io --dry-run=client --output yaml > example.yaml
That would produce something that looks like this, which you can use to bootstrap your manifest file.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: example
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Now you can pull that repo or an artifact that contains that manifest into your Azure DevOps Release Pipeline and add the "Deploy to Kubernetes Cluster" task.
This should get you pretty close to completing a pipeline.
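For reference, a release stage defined in YAML could use the manifest deployment task roughly like this; the service connection name, manifest path, and image name are assumptions for illustration:

```yaml
steps:
- task: KubernetesManifest@0
  displayName: Deploy to AKS
  inputs:
    action: deploy
    kubernetesServiceConnection: my-aks-connection   # assumed service connection name
    manifests: $(Pipeline.Workspace)/manifests/example.yaml
    containers: myregistry.azurecr.io/myapp:$(Build.BuildId)
```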

This is impossible; it doesn't really make sense without a deployment.yaml file or something similar. You can use:
kubectl create deployment %name% --image=your_image.azurecr.io
but this is not really flexible and won't get you anywhere. If you want to use Kubernetes you have to understand deployments/pods/services/etc. There is no way of getting around that.

Related

Jenkins deployment with Kustomize - how to add JENKINS_OPTS

I feel like this should be an already asked question, but I'm having difficulties finding a concrete answer. I'm deploying Jenkins through ArgoCD by defining the deployment via kustomize (kubernetes yaml). I want to inject a prefix to have Jenkins start on /jenkins, but I don't see a way to add it. I saw online that I can have a env tag, but no full example of this was available. Where would I inject a prefix value if using kubernetes yaml for a Jenkins deployment?
So, I solved this issue myself, and I'd like to post the answer as this is the top searched question when searching "Kustomize Jenkins_opts".
In your project, assuming you are using Kustomize to deploy Jenkins (This will work with any app deployment where you want to inject values when deploying), you should have a project structure similar to this:
ProjectA
|
|---> app.yaml //contains the yaml definitions for your deployment
|---> kustomization.yaml // entry file Kustomize reads to deploy your app (Kustomize expects this exact file name)
Add a new file to your project structure. Name it whatever you want; I named mine something like app-env.yaml. It will look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  template:
    spec:
      containers:
      - name: jenkins
        env:
        - name: JENKINS_OPTS
          value: --prefix=/jenkins
This will specifically inject the --prefix flag to assign the prefix value for the URL to Jenkins on deployment to the Jenkins container. You can add multiple env variables. You can inject any value you want. My example is using Jenkins specific flags as this question centered around Jenkins, but it works for any app. Add this file to your Kustomize file from earlier:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: kustomize-
resources:
- app.yaml
patchesStrategicMerge:
- app-env.yaml
When your app is deployed via K8s, it will run the startup process for your app, while passing the values defined in your env file. Hope this helps anyone else.

GKE automating deploy of multiple deployments/services with different images

I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as a push to a particular repo) and it will run the build. The sample YAML they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes-resource-file
  - --image=gcr.io/project-id/image:tag
  - --location=${_CLOUDSDK_COMPUTE_ZONE}
  - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how that relates to the image property of the Cloud Build trigger YAML. I also don't know how you could have multiple image properties in the trigger YAML, where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate deploying multiple deployments/services in this fashion when they're all bundled into one repo? Or has Google just oversimplified things for this tutorial, the reality being that most services would live in their own repo so they can be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment yaml? e.g:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
      - name: photoalbum-app
        image: gcr.io/[PROJECT_ID]/photoalbum-app@[DIGEST]
        tty: true
        ports:
        - containerPort: 8080
        env:
        - name: PROJECT_ID
          value: "[PROJECT_ID]"
The command that you use is fine for testing the deployment of one image. But when you work with Kubernetes (K8s), and the managed version on GCP (GKE), you usually never do this.
You use YAML files to describe your deployments, services and all the other K8s objects that you want. When you deploy, you can run something like this:
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want:
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  ...

How to automatically restart pods when a new image ready

I'm using K8s on GCP.
Here is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleapp-direct
  labels:
    app: simpleapp-direct
    role: backend
    stage: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simpleapp-direct
      version: v0.0.1
  template:
    metadata:
      labels:
        app: simpleapp-direct
        version: v0.0.1
    spec:
      containers:
      - name: simpleapp-direct
        image: gcr.io/applive/simpleapp-direct:latest
        imagePullPolicy: Always
I first apply the deployment file with kubectl apply command
kubectl apply -f deployment.yaml
The pods were created properly.
I was expecting that every time I would push a new image with the tag latest, the pods would be automatically killed and restart using the new images.
I tried the rollout command
kubectl rollout restart deploy simpleapp-direct
The pods restart as I wanted.
However, I don't want to run this command every time there is a new latest build.
How can I handle this situation?
Thanks a lot
Try to use the image hash (digest) instead of a tag in your Pod definition.
Generally, there is no way to automatically restart pods when a new image is ready. It is advisable not to use image:latest (or just the bare image name) in Kubernetes, as it can cause difficulties with rolling back your deployment. You also need to make sure the imagePullPolicy flag is set to Always. Normally, when you use CI/CD or GitOps, your deployment is updated automatically by those tools once the new image is ready and has passed the tests.
When your Docker image is updated, you need to set up a trigger on this update within your CI/CD pipeline to re-run the deployment. I'm not sure about the base system/image where you build your Docker image, but you can add the Kubernetes certificates there and run the above commands like you do on your local computer.
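Building on the digest suggestion above, here is a sketch of pinning the container to a specific build instead of the floating latest tag (the digest value is a placeholder, not a real one):

```yaml
spec:
  template:
    spec:
      containers:
      - name: simpleapp-direct
        # a digest identifies one exact image build, so every rollout is reproducible
        image: gcr.io/applive/simpleapp-direct@sha256:<digest-of-the-build>
```

Your CI/CD tool would then substitute the new digest into the manifest and run kubectl apply, which triggers a normal rolling update.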

How to deploy a bunch of yaml files?

I would like to deploy a bunch of YAML files (https://github.com/quay/quay/tree/master/deploy/k8s) on my Kubernetes cluster and would like to know what the best approach is to deploy them all at once.
You can directly apply a folder:
kubectl create -f ./<foldername>
kubectl apply -f ./<foldername>
You can also pass multiple files in one command:
kubectl apply -f test.yaml,test-1.yaml
You can also merge all YAML files into a single file and manage it further.
Merge the YAML files using ---
For example :
apiVersion: v1
kind: Service
metadata:
  name: test-data
  labels:
    app: test-data
spec:
  ports:
  - name: http
    port: 80
    targetPort: 9595
  - name: https
    port: 9595
    targetPort: 9595
  selector:
    app: test-data
    tier: frontend
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  ports:
  - name: http
    port: 80
    targetPort: 9595
  - name: https
    port: 9595
    targetPort: 9595
  selector:
    app: test-app
    tier: frontend
kubectl apply -f <folder-name>
A simple way to deploy all files in a given folder.
You may consider using Helm (the package manager for Kubernetes). Just like we use yum or apt-get on Linux, we use Helm for K8s.
Using Helm, you can deploy multiple resources (bunch of YAMLs) in one go. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Also, you don't need to combine all your YAMLs; they can remain separate as part of a given chart. Besides, if one chart depends on another, you can use the helm dependency feature.
The reason why I use Helm is that whenever I deploy a chart, Helm tracks it as a release. Any change to a chart gets a new release version. This way, upgrading (or rolling back) becomes very easy, and you can confidently say what went out as part of a given release.
Also, if you have different microservices that have stuff in common, Helm provides a feature called library charts, with which you can create definitions that can be re-used across charts, thus keeping your charts DRY.
Have a look at this introductory video: https://www.youtube.com/watch?v=Zzwq9FmZdsU&t=2s
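To get a feel for the layout, a chart is just a directory; your existing YAMLs can move into templates/ largely unchanged. A sketch (the names are illustrative, not taken from the repo above):

```
quay-chart/
  Chart.yaml        # chart metadata: apiVersion: v2, name, version
  values.yaml       # default values you can override per release
  templates/        # your existing manifests go here
    deployment.yaml
    service.yaml
```

You would then install everything in one go with helm install my-quay ./quay-chart.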
I would advise combining the YAMLs into one. The purpose of a deployment and service YAML is to deploy your application onto the cluster in one fell swoop. You can define many deployments and services within one file. In your case, a tool such as Kustomize will help you combine them. Kustomize comes preinstalled with kubectl.
You can combine your YAMLs into what is called a multi-resource YAML using the --- separator, i.e.:
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  ...
---
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  ...
Then make a kustomization.yaml which combines all your multi-resource yamls. There is a good guide on this here: https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a
The documentation from k8 is here: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
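As a minimal sketch, the kustomization.yaml tying the multi-resource files together could look like this (the file names are assumptions for illustration):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- services.yaml
- deployments.yaml
```

With that in place, kubectl apply -k . deploys everything listed under resources in one command.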

How can I edit a Deployment without modifying the file manually?

I have defined a Deployment for my app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:2.0
        ports:
        - containerPort: 8080
Now, if I want to update my app's image from 2.0 to 3.0, I do this:
$ kubectl edit deployment/myapp-deployment
vim opens. I change the image version from 2.0 to 3.0 and save.
How can this be automated? Is there a way to do it by just running a command? Something like:
$ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp_img:3.0
I thought about using the Kubernetes REST API, but I don't understand the documentation.
You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag:
kubectl patch deployment myapp-deployment -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp_img:3.0"}]}}}}'
According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
There is a set image command which may be useful in simple cases
Update existing container image(s) of resources.
Possible resources include (case insensitive):
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
http://kubernetes.io/docs/user-guide/kubectl/kubectl_set_image/
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
http://kubernetes.io/docs/user-guide/deployments/
(I would have posted this as a comment if I had enough reputation)
Yes, as per http://kubernetes.io/docs/user-guide/kubectl/kubectl_patch/ both JSON and YAML formats are accepted.
But I see that all the examples there are using JSON format.
Filed https://github.com/kubernetes/kubernetes.github.io/issues/458 to add a YAML format example.
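As a sketch of the YAML form, the same patch as the JSON one above could live in a file (the file name is an assumption):

```yaml
# patch.yaml
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:3.0
```

On recent kubectl versions you can apply it with kubectl patch deployment myapp-deployment --patch-file patch.yaml; older versions can pass the contents inline with -p "$(cat patch.yaml)".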
I have recently built a tool to automate deployment updates when new images are available, it works with Kubernetes and Helm:
https://github.com/rusenask/keel
You only have to label your deployments with a Keel policy like keel.sh/policy=major to enable major version updates; more info is in the readme. It works similarly with Helm; no additional CLI/UI is required.