Kubernetes rolling-update scheduled job

We now have scheduled jobs in Kubernetes 1.4. Is it possible to do a rolling container update (new image) against the cluster using them? The basic idea is that I want a simple way to automatically roll out updates at a set interval.
The 'traditional' way to do updates is for the CI system to hit a webhook on the Kube master, but I want to avoid exposing services to the public and would rather just check for updates periodically.

I think it's generally safe to expose your master server and send updates to it from your CI system, but you could definitely set up a scheduled job to update a Deployment to its latest version. Kubernetes has a concept called Service Accounts for authenticating with the API from within the cluster, and they are well integrated with kubectl (i.e. it will use the service account info automatically to authenticate). The cluster also provides a kubernetes service for the master API. So you can deploy a container with kubectl and a script and use it to update the Deployment periodically.
You will need a mechanism to figure out what the latest version is. For example, you could write the latest version info to a text file in GCS or S3 and pull that file to get the latest version.
Say you have a deploy.yaml like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myapp:<latest-ver>
And then you can generate and update the Deployment in a script like so:
#!/bin/sh
wget -O VERSION http://url/to/VERSION
sed "s/<latest-ver>/$(cat VERSION)/" deploy.yaml | kubectl apply -f -
And build that into an image and run it as your scheduled job.
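For reference, here is a minimal sketch of what that scheduled job could look like, assuming the script and deploy.yaml above are baked into a hypothetical updater image called myapp-updater, and that the service account it runs under is allowed to update Deployments (on RBAC-enabled clusters this needs a Role/RoleBinding). On Kubernetes 1.4 the resource was called ScheduledJob under batch/v2alpha1; it was renamed CronJob in later releases:

apiVersion: batch/v2alpha1        # batch/v1beta1 or batch/v1 on newer clusters
kind: ScheduledJob                # renamed to CronJob from Kubernetes 1.5 onwards
metadata:
  name: myapp-updater
spec:
  schedule: "*/15 * * * *"        # check for a new version every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: updater
            image: myapp-updater:latest   # hypothetical image containing kubectl, deploy.yaml and the script above
          restartPolicy: OnFailure

kubectl inside the container picks up the mounted service account token automatically and talks to the kubernetes service, so no master credentials need to be exposed outside the cluster.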

Related

Horizontal Pod Autoscaling (HPA) with an initContainer that requires a Job

I have a specific scenario where I'd like to have a deployment controlled by horizontal pod autoscaling. To handle database migrations in pods when pushing a new deployment, I followed this excellent tutorial by Andrew Lock here.
In short, you must define an initContainer that waits for a Kubernetes Job to complete a process (like running db migrations) before the new pods can run.
This works well; however, I'm not sure how to handle HPA after the initial deployment. If the system detects the need to add another Pod on my node, the initContainer defined in my deployment requires a Job to be deployed and run, but since Jobs are one-off processes, the pod cannot initialize and run properly (the ttlSecondsAfterFinished attribute removes the Job anyway).
How can I define an initContainer to run when I deploy my app so I can push my database migrations in a Job, but also allow HPA to control dynamically adding a Pod without needing an initContainer?
Here's what my deployment looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graphql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: graphql-pod
  template:
    metadata:
      labels:
        app: graphql-pod
    spec:
      initContainers:
      - name: wait-for-graphql-migration-job
        image: groundnuty/k8s-wait-for:v1.4 # This is an image that waits for a process to complete
        args:
        - job
        - graphql-migration-job # this job is defined next
      containers:
      - name: graphql-container
        image: image(graphql):tag(graphql)
The following Job is also deployed
apiVersion: batch/v1
kind: Job
metadata:
  name: graphql-migration-job
spec:
  ttlSecondsAfterFinished: 30
  template:
    spec:
      containers:
      - name: graphql-migration-container
        image: image(graphql):tag(graphql)
        command: ["npm", "run", "migrate:reset"]
      restartPolicy: Never
So basically what happens is:
I deploy these two resources to my node
Job is initialized
initContainer on Pod waits for Job to complete using an image called groundnuty/k8s-wait-for:v1.4
Job completes
initContainer completes
Pod initializes
(after 30 TTL seconds) Job is removed from node
(LOTS OF TRAFFIC)
HPA realizes a need for another pod
initContainer for the NEW pod is started, but can't run because the Job no longer exists
...crashLoopBackOff
Would love any insight on the proper way to handle this scenario!
There is, unfortunately, no simple Kubernetes feature to resolve your issue.
I recommend extending your deployment tooling/scripts to separate the migration job and your deployment. During the deploy process, you first execute the migration job and then deploy your deployment. Without the job attached, the HPA can nicely scale your pods.
There are multiple ways to achieve this:
Have a bash (or similar) script that first executes the job, waits for it to complete, and then updates your deployment (see the sketch below)
Leverage more complex deployment tooling like Helm, which lets you add a 'pre-install hook' to your Job so it is executed when you deploy your application
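As a rough illustration of the first option, a deploy script could apply the Job, block until it completes, and only then apply the Deployment. This is a sketch: the resource names are taken from the question, the file names are placeholders, and kubectl wait requires a reasonably recent kubectl:

#!/bin/sh
set -e
kubectl apply -f graphql-migration-job.yaml      # hypothetical file containing the Job from the question
kubectl wait --for=condition=complete job/graphql-migration-job --timeout=300s
kubectl apply -f graphql-deployment.yaml         # the Deployment from the question, with the initContainer removed

For the Helm option, the equivalent idea is to annotate the migration Job as a pre-install/pre-upgrade hook so Helm runs it to completion before touching the Deployment, roughly like this:

metadata:
  name: graphql-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded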

Changing image of kubernetes job

I'm working on the manifest of a kubernetes job.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: hello-image:latest
I then apply the manifest using kubectl apply -f <deployment.yaml> and the job runs without any issue.
The problem comes when I change the image of the running container from latest to something else.
At that point I get a 'field is immutable' error when applying the manifest.
I get the same error whether the job is running or completed. The only workaround I have found so far is to manually delete the job before applying the new manifest.
How can I update the current job without having to manually delete it first?
I guess you are probably using the wrong Kubernetes resource. A Job is a run-to-completion workload with an immutable pod template, so you cannot update it. As the Kubernetes documentation puts it:
Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=false.
If you intend to update an image, you should use a Deployment (or a ReplicationController) instead, since those support updates.
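Since the Job's pod template cannot be changed in place, the practical workaround (short of switching to a Deployment) is to delete and recreate the Job whenever the image changes. A hedged sketch, reusing the hello-job manifest above (the file name is a placeholder):

kubectl delete job hello-job --ignore-not-found     # drop the old Job, whether completed or still running
kubectl apply -f hello-job.yaml                     # manifest now points at the new image tag

# or, equivalently, delete and recreate in one step from the updated manifest:
kubectl replace --force -f hello-job.yaml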

How to deploy new app versions in kubernetes

In this stackoverflow question: kubernetes Deployment. how to change container environment variables for rolling updates?
The asker mentions that he edited the deployment to change the version to v2. What's the workflow for automated deployments of a new version, assuming the container v2 already exists? How do you then deploy it without manually editing the deployment config or checking in a new version of the yaml?
If you change the underlying container image (say, push a new build that is also tagged v1), will Kubernetes deploy the new one or the old one?
If you don't want to:
check in a new YAML version, or
manually update the config,
You can update the deployment either through:
A REST call to the deployment in question, patching or putting your new image as a resource modification, e.g. PUT /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name} -d {... deployment with v2 ...} (see the sketch after this list)
Set the image: kubectl set image deployment/<DEPLOYMENT_NAME> <CONTAINER_NAME>=<IMAGE_NAME>:v2
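For the REST route, here is a hedged sketch of such a call from inside the cluster, using a strategic merge patch instead of a full PUT. The deployment and container names (my-app) and the default namespace are placeholders, and on current clusters the Deployment API lives under apps/v1 rather than extensions/v1beta1:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS -X PATCH \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/my-app \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","image":"my-app:v2"}]}}}}'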
Assuming v1 is already running and you try to deploy v1 again with the same environment variable values etc., then k8s will not see any difference between your current and updated deployment resource.
Without a diff, the k8s scheduler assumes that the desired state is already reached and won't schedule any new pods, even when imagePullPolicy: Always is set. The reason is that imagePullPolicy only has an effect on newly created pods: if a new pod is being scheduled, k8s will always pull the image again. Still, without any diff in your deployment, no new pod will be scheduled in the first place.
For my deployments I always set a dummy environment variable, like a deploy timestamp DEPLOY_TS, e.g.:
containers:
- name: my-app
  image: my-app:{{ .Values.app.version }} ## value dynamically set by my deployment pipeline
  env:
  - name: DEPLOY_TS
    value: "{{ .Values.deploy_ts }}" ## value dynamically set by my deployment pipeline
The value of DEPLOY_TS is always set to the current timestamp - so it is always a different value. That way k8s will see a diff on every deploy and schedule a new pod - even if the same version is being re-deployed.
(I am currently running k8s 1.7)
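The same trick also works without Helm: patch the dummy variable with the current timestamp so the pod template always differs and a new rollout is scheduled. A sketch, assuming the Deployment is also named my-app (the name is taken from the snippet above and is hypothetical):

kubectl patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"my-app\",\"env\":[{\"name\":\"DEPLOY_TS\",\"value\":\"$(date +%s)\"}]}]}}}}"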

How can I edit a Deployment without modifying the file manually?

I have defined a Deployment for my app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:2.0
        ports:
        - containerPort: 8080
Now, if I want to update my app's image from 2.0 to 3.0, I do this:
$ kubectl edit deployment/myapp-deployment
vim opens, I change the image version from 2.0 to 3.0, and save.
How can it be automated? Is there a way to do it just running a command? Something like:
$ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp_img:3.0
I thought about using the Kubernetes REST API, but I don't understand the documentation.
You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag:
kubectl patch deployment myapp-deployment -p \
'{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp_img:3.0"}]}}}}'
According to the documentation, the YAML format should be accepted as well. See the Kubernetes docs issue #458, though (and in particular this comment), which may hint at a problem.
There is a set image command which may be useful in simple cases:
Update existing container image(s) of resources.
Possible resources include (case insensitive):
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
http://kubernetes.io/docs/user-guide/kubectl/kubectl_set_image/
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
http://kubernetes.io/docs/user-guide/deployments/
(I would have posted this as a comment if I had enough reputation)
Yes, as per http://kubernetes.io/docs/user-guide/kubectl/kubectl_patch/ both JSON and YAML formats are accepted.
But I see that all the examples there are using JSON format.
Filed https://github.com/kubernetes/kubernetes.github.io/issues/458 to add a YAML format example.
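For completeness, the YAML form described in the docs would look roughly like this (a sketch of the same image patch as above, assuming kubectl patch accepts a YAML string via -p as documented):

kubectl patch deployment myapp-deployment -p '
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:3.0
'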
I have recently built a tool to automate deployment updates when new images are available; it works with Kubernetes and Helm:
https://github.com/rusenask/keel
You only have to label your deployments with a Keel policy such as keel.sh/policy=major to enable major version updates; more info is in the readme. It works similarly with Helm, with no additional CLI/UI required.
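For example, with Keel the opt-in is just a label on the Deployment (label key and value as described above; see the Keel readme for other policies):

metadata:
  name: myapp-deployment
  labels:
    keel.sh/policy: major    # let Keel roll out new major version updates automatically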
