How to do a gray release (gated launch) on Kubernetes? - kubernetes

Does Kubernetes support a gated launch, also called a gray release? For example, I deploy an nginx service in Kubernetes at version 1.10.2 with replicas = 10, then I want to upgrade the service to 1.11.5. I modify the deployment, run kubectl rollout status deployment nginx, and find that all 10 pods have been set to 1.11.5. How can I reach the state where 2 pods run the new version 1.11.5 and the other 8 pods remain on the old 1.10.2?

This pattern is referred to as a canary deployment in the documentation (see the linked page).
In short (a sketch of the resulting canary copy follows these steps):
add a differentiating label, say track: stable, to your pods in the deployment (do this once and roll it out)
make a copy of the deployment file and name it foo-canary (make sure you also change the name inside the file)
change that label to track: canary
change to replicas: 2
change the image or whatever else you need to and deploy it
when satisfied with the result change the original deployment and roll it out
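As a rough sketch of what the canary copy could look like, using the versions from the question (the nginx-canary name and label values are assumptions, not something Kubernetes mandates):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary          # the renamed copy of the original deployment
spec:
  replicas: 2                 # 2 canary pods (scale the stable deployment to 8 to keep 10 total)
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary         # differentiating label, changed from track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.11.5   # the new version under test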

Related

Replace the image on one pod manually, while the other pods use the main image

Let's say I have 10 pods running a stable version, and I wish to replace the image of one of them to run a newer version before a full rollout.
Is there a way to do that?
Not as such: every pod managed by a Deployment is expected to be identical, including running the same image. You can't change a pod's image once it's been created, and if you change the Deployment's image, it will try to recreate all of its managed pods.
If the only thing you're worried about is the pod starting up, the default behavior of a deployment is to start 25% of its specified replicas with the new image. The old pods will continue running uninterrupted until the new replicas successfully start and pass their readiness checks. If the new pods immediately go into CrashLoopBackOff state, the old pods will still be running.
If you want to start a pod specifically as a canary deployment, you can create a second Deployment to handle that. You'll need to include some label on the pods (for instance, canary: 'true') where you can distinguish the canary from main pods. This would be present in the pod spec, and in the deployment selector, but it would not be present in the corresponding Service selector: the Service matches both canary and non-canary pods. If this runs successfully then you can remove the canary Deployment and update the image on the main Deployment.
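For example, a hedged sketch of the Service side of this (the myapp names and ports are placeholders): the selector only carries the shared label, so it matches both the main and the canary pods.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp        # deliberately no canary label, so canary and non-canary pods both receive traffic
  ports:
  - port: 80
    targetPort: 8080  # placeholder container port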
Like the other answer mentioned, it sounds like you are talking about a canary deployment. You can do this with Kubernetes alone and also with Istio. I prefer Istio, as it gives you fine-grained control over traffic weighting, e.g. you could send 1% of traffic to the canary and 99% to the control. Great for testing in production. It also lets you route based on HTTP headers.
https://istio.io/latest/blog/2017/0.1-canary/
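As a rough sketch of that Istio traffic split (assuming a DestinationRule already defines v1 and v2 subsets for the myapp service; all names here are placeholders):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1    # control
      weight: 99
    - destination:
        host: myapp
        subset: v2    # canary
      weight: 1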
If you want to do it with plain k8s, just create two deployments with unique names (myappv1 and myappv2, for example) that share the same app= label. Then create a Service whose selector is that app label. The Service will balance traffic across the pods of both the v1 and v2 deployments.

Kubernetes | What's the difference between rollout undo vs deploy to an older version?

I see there are two ways to move back to the older deployment version. One is using rollout undo command and another option is to deploy again to the older version. Is there any difference between the two or they both are interchangeable?
As I understand it, you're asking about the difference between doing an undo and manually changing the pod definitions back to the exact previous state. If that's the case, read below.
When you do a new deployment, and in that deployment the pod template's hash has changed, the Deployment controller will create a new ReplicaSet (let's call it A) in order to roll out the new version, and at the same time it will decrease the replica count of the existing ReplicaSet (let's call it B), so you have 2 ReplicaSets (A, B). How it does this depends on the rollout strategy you choose (for example rolling update, blue-green deployment and so on).
When you do kubectl rollout undo deploy <your deployment> - the Deployment Controller will basically decrease the number of replicas in your newly created ReplicaSet (A) and increase the number of replicas in the old ReplicaSet (B).
But when you, as you said, deploy again to the older version, you basically do a new deployment, so a new ReplicaSet (C) will be created in order to roll out your "new" version (even if it's not actually new), and your existing ReplicaSet's (A) replica count will be decreased.
So basically the difference is in which ReplicaSets get created.
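You can observe this yourself with something like the following (myapp and the app label are placeholder names):
kubectl get rs -l app=myapp            # note which ReplicaSet is currently scaled up
kubectl rollout undo deployment myapp
kubectl get rs -l app=myapp            # the old ReplicaSet is scaled back up; no new ReplicaSet appears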
Read: Deployments for more info
The whole flow is as follows:
Deployment Controller manages ReplicaSets
ReplicaSet changes desired pod count in etcd
Scheduler schedules the pod
Kubelet creates/terminates actual pods
And all of them talk to the API Server and watch for changes in resource definitions via the watch mechanism, again through the API Server.
When you undo the rollout, you are updating the deployment in a way that is not reflected in source control. The preferred way is to revert your YAML and apply the previous version; then your revisions match the tracked configuration.
kubectl rollout history deployment xyz
Note that the REVISION numbers here do not map back cleanly: an undo produces a new revision number rather than restoring the old one.
Well, first of all, there is no direct "deploy to an older version" option in Kubernetes. The undo way is how you go back to the previous version of your deployment, and the command to go back is
kubectl rollout undo deployment ABC
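If you need a specific older revision rather than just the previous one, rollout undo also accepts a revision number (ABC is a placeholder deployment name):
kubectl rollout history deployment ABC
kubectl rollout undo deployment ABC --to-revision=2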

kubernetes creates more pods than scale amount

I have encountered a strange situation in one of our clusters, where all of a sudden a number of new pods have been created so that we end up with a greater number of running pods than the scale amount.
So in the dashboard it will show
serviceX pods: 8/2
and then 8 running instances of that service
Questions
How can this possibly happen?
Is there an easy way to get rid of the extra pods (which all seem to be running)?
I have tried changing the scale amount in the dashboard and the extra pods do not disappear.
Both Pod and Deployment are full-fledged objects in the Kubernetes API. A Deployment manages creating Pods by means of ReplicaSets. What it boils down to is that the Deployment will create Pods with a spec taken from its template.
In your case the deployment named edgeservicepublic-svc is set to have 13 replicas. A Deployment is a kind of controller in Kubernetes, so it is natural for this controller to continuously check that 13 pods exist. When a deployment is added to the cluster, it will automatically spin up the requested number of pods and then monitor them. If a pod dies, the deployment will automatically re-create it. Probably at first not enough pods were created, so the controller keeps pursuing the desired number of them.
To make sure your deployment works properly you can delete the deployment and verify that its pods are gone. Also make sure that you haven't set up an autoscaler ($ kubectl get hpa); if so, delete it. Then, if you want to change the deployment specification, edit the deployment configuration file and apply the changes ($ kubectl apply -f deployment_configuration_file.yaml).
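A sketch of that check-and-reapply sequence (the file name is the placeholder used above):
kubectl get hpa                                    # check for an autoscaler; delete it if present
kubectl get rs -o wide                             # see which ReplicaSets own the extra pods
kubectl delete deployment edgeservicepublic-svc    # only if you want to start from a clean state
kubectl apply -f deployment_configuration_file.yaml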
Useful documentation: deployments and autoscaling in the context of GKE.
EDIT:
Basically, first check for an autoscaler and delete it if it exists. I suggested deleting the deployment because you said you tried to change the scale amount / number of replicas. The way to be 100% sure that changes are applied is to delete the whole deployment and then recreate it with the desired number of replicas. Of course you can just apply changes in the deployment configuration file ($ kubectl edit ... or $ kubectl apply -f ...), but sometimes existing pods are not deleted, so deleting the deployment is safer. You could also create a new deployment with the same parameters but a different name.

Kubernetes deployment strategy to wait for all replicas

I have one service with autoscaling, which means it can have 2 or 4 pods running. My issue is that this service is a reactjs application with a service worker.
With my current deployment strategy it will create a new pod and kill an old one, one at a time, which causes issues when clients get alerted that there is a new update, try to fetch new assets from the server, and the load balancer forwards them to the old pods.
So basically I am wondering: is it possible to change to a strategy that creates x pods and replaces them all at the same time?
Use the Recreate deployment strategy to first kill all old pods and then create new ones.
Alternatively, if you're looking to first create a parallel set of new pods, reroute traffic to these new pods and then kill the old pods (i.e., a blue/green deployment), check this guide.
Add spec.strategy.type to your deployment.yaml manifest and set it to "Recreate"; this will kill all the existing pods before new ones are created.
spec:
  strategy:
    type: Recreate
The strategy you are currently using is RollingUpdate, which is the default if you don't specify any.
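For comparison, a sketch of that default RollingUpdate strategy with its tunable settings (the values shown are just the Kubernetes defaults):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # how many extra pods may be created above the desired count
      maxUnavailable: 25%   # how many old pods may be taken down at a time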
The following approach is manual, but it meets your requirement (a sketch of the selector switch follows the steps below).
Say you are running version 1.0 (with label version:1.0) in the cluster and you want to upgrade to version 2.0:
Deploy version 2.0 with label version:2.0
Verify that the new pods are running and your version 2.0 app works fine.
Edit the version 1.0 Service's selector to use label version:2.0
Delete the version 1.0 deployment
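As an illustration of the selector switch in step 3, a minimal sketch (the Service name, port, and app label are assumptions, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: "2.0"   # was "1.0"; changing this moves all traffic to the version 2.0 pods at once
  ports:
  - port: 80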

Redeploying a Google Container Controller when the repository Image Changes

Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy handed, but works fine. I'm really missing the autoredeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
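For example, something along these lines (the deployment, container, and image names are placeholders):
kubectl set image deployment/ticketing ticketing=myrepo/ticketing:v2
kubectl rollout status deployment/ticketing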
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1, with the kubectl commands.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.
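You can then watch that progressive update with, for example:
$ kubectl rollout status deployment/my-nginx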