I want Kubernetes to automatically pull the new image every time I create a new image with the tag latest. I added imagePullPolicy: Always to the pod spec, but it doesn't replace the old image with the new one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
  namespace: dev
  labels:
    app: my-node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      hostNetwork: true
      securityContext:
        fsGroup: 1000
      containers:
        - name: node
          imagePullPolicy: Always
          image: gcr.io/my-repo/my-node-app:latest
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: my-configmap
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 2
              memory: 8Gi
      restartPolicy: Always
imagePullPolicy is only taken into account by Kubernetes when a Pod is created or restarted. It is NOT evaluated while a Pod is running, which means Kubernetes does NOT check for image updates at any time while a Pod is running.
Even if another Pod with the same image is scheduled onto the same Kubernetes node, the already running Pod is not affected, even though Kubernetes pulls the image again and uses the new image for the new Pod.
If you want the desired functionality, you will have to implement your own solution. You could do this with a sidecar that regularly checks the Docker repository for changes to the given tag. When it detects such a change, it can trigger a restart of the Pod, which then forces the image to be re-pulled.
A restart of the Pod can be triggered either by simply exiting the sidecar or by calling the Kubernetes API from inside the sidecar. The latter solution is more complicated, as you will also need a service account and RBAC rules to grant the sidecar the proper permissions. It also has security implications: you would have to give the whole Pod escalated permissions.
Setting imagePullPolicy: Always does not mean an image will be pulled automatically without any trigger.
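For the sidecar approach described above, here is a minimal sketch, assuming the crane CLI (from go-containerregistry) is available in the sidecar image; the image reference and poll interval are illustrative only:
#!/bin/sh
# Watch the digest behind the :latest tag and stop when it changes; per the
# answer above, the exit (or a Kubernetes API call) is what triggers the
# restart and therefore the re-pull.
IMAGE="gcr.io/my-repo/my-node-app:latest"   # image to watch (from the Deployment above)
CURRENT=$(crane digest "$IMAGE")            # digest the Pod started with
while true; do
  sleep 60
  LATEST=$(crane digest "$IMAGE") || continue   # tolerate transient registry errors
  if [ "$LATEST" != "$CURRENT" ]; then
    echo "New digest $LATEST detected, exiting to trigger a restart"
    exit 1
  fi
done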
I would recommend using images tagged with semver. Since you are using a Deployment, you can perform a rolling update, which pulls the new image and rolls out the change across all replica Pods one by one, gracefully and without downtime.
Let's say the image is initially gcr.io/my-repo/my-node-app:v1 and you want to update it to v2:
kubectl set image deployment/node node=gcr.io/my-repo/my-node-app:v2 --record
Check the rollout history
kubectl rollout history deployment.v1.apps/node
In case of any issue, roll back to the previous version
kubectl rollout undo deployment.v1.apps/node
Also, if you want to be more advanced, you could do GitOps with FluxCD, which can trigger a rollout automatically whenever a new version of an image is pushed to a container registry.
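As a rough illustration of what Flux's image automation objects look like (the names, namespace, and v1beta2 API version are assumptions, and a complete setup also needs an ImageUpdateAutomation object with write access to your Git repository):
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-node-app            # assumed name
  namespace: flux-system
spec:
  image: gcr.io/my-repo/my-node-app
  interval: 1m                 # how often to scan the registry
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-node-app            # assumed name
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-node-app
  policy:
    semver:
      range: ">=1.0.0"         # pick the newest semver tag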
Kubernetes pulls the image only upon Pod creation, which means it does not check for image updates while a Pod is in the running state.
I would recommend using Semantic Versioning for the image tag and a CI/CD pipeline that builds, tags, and pushes to your registry. Then use a CD tool such as Keel to re-create your pods in the last step of the pipeline.
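For example, with Keel installed in the cluster, a minimal sketch is to annotate the Deployment so Keel polls the registry and forces an update (the poll schedule is an assumption):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
  namespace: dev
  annotations:
    keel.sh/policy: force              # update even when the tag (e.g. latest) stays the same
    keel.sh/trigger: poll              # poll the registry instead of waiting for webhooks
    keel.sh/pollSchedule: "@every 5m"
spec:
  ...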
I tried KEDA with AKS and I really appreciate that pods are automatically instantiated based on the Azure DevOps job queue for builds and releases.
However, I noticed something strange: AKS/KEDA often removes a pod while it is still processing, which makes the workflow fail.
The message reads:
We stopped hearing from agent aks-linux-768d6647cc-ntmh4. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error. For more information, see: https://go.microsoft.com/fwlink/?linkid=846610
Expected behavior: a pod must complete its tasks before KEDA/AKS removes it.
Here is my KEDA YAML file:
# deployment.yaml
apiVersion: apps/v1 # The API resource where this workload resides
kind: Deployment # The kind of workload we're creating
metadata:
  name: aks-linux # This will be the name of the deployment
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: aks-linux # Labels follow the `name: value` template
  replicas: 1
  template: # This is the template of the pod inside the deployment
    metadata: # Metadata for the pod
      labels:
        app: aks-linux
    spec:
      nodeSelector:
        agentpool: linux
      containers: # Here we define all containers
        - image: <My image here>
          name: aks-linux
          env:
            - name: "AZP_URL"
              value: "<myURL>"
            - name: "AZP_TOKEN"
              value: "<MyToken>"
            - name: "AZP_POOL"
              value: "<MyPool>"
          resources:
            requests: # Minimum amount of resources requested
              cpu: 2
              memory: 4096Mi
            limits: # Maximum amount of resources requested
              cpu: 4
              memory: 8192Mi
I'm using the latest versions of AKS and KEDA. Any idea?
Check the official Keda docs:
When running your agents as a deployment you have no control on which pod gets killed when scaling down.
So, to solve this, you need to use a ScaledJob:
If you run your agents as a Job, KEDA will start a Kubernetes job for each job that is in the agent pool queue. The agents will accept one job when they are started and terminate afterwards. Since an agent is always created for every pipeline job, you can achieve fully isolated build environments by using Kubernetes jobs.
See the documentation there for how to implement it.
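A rough sketch of a ScaledJob for Azure Pipelines agents, adapted from the Deployment above (the polling interval, replica limit, and exact trigger metadata keys are assumptions; check the KEDA azure-pipelines scaler docs for the authoritative fields):
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: aks-linux-agent
spec:
  pollingInterval: 30              # seconds between queue checks (assumed)
  maxReplicaCount: 5               # upper bound on parallel agents (assumed)
  jobTargetRef:
    template:
      spec:
        nodeSelector:
          agentpool: linux
        containers:
          - name: aks-linux
            image: <My image here>
            env:
              - name: "AZP_URL"
                value: "<myURL>"
              - name: "AZP_TOKEN"
                value: "<MyToken>"
              - name: "AZP_POOL"
                value: "<MyPool>"
        restartPolicy: Never       # each agent takes one job, then terminates
  triggers:
    - type: azure-pipelines
      metadata:
        poolName: "<MyPool>"
        organizationURLFromEnv: "AZP_URL"
        personalAccessTokenFromEnv: "AZP_TOKEN"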
I'm using K8s on GCP.
Here is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleapp-direct
  labels:
    app: simpleapp-direct
    role: backend
    stage: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simpleapp-direct
      version: v0.0.1
  template:
    metadata:
      labels:
        app: simpleapp-direct
        version: v0.0.1
    spec:
      containers:
        - name: simpleapp-direct
          image: gcr.io/applive/simpleapp-direct:latest
          imagePullPolicy: Always
I first apply the deployment file with the kubectl apply command:
kubectl apply -f deployment.yaml
The pods were created properly.
I was expecting that every time I push a new image with the tag latest, the pods would be automatically killed and restarted using the new image.
I tried the rollout command
kubectl rollout restart deploy simpleapp-direct
The pods restart as I wanted.
However, I don't want to run this command every time there is a new latest build.
How can I handle this situation?
Thanks a lot
Try to use the image digest (hash) instead of a tag in your Pod definition.
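For example, instead of the moving latest tag you would pin the exact digest of the build you want to run (the digest value here is only a placeholder):
containers:
  - name: simpleapp-direct
    image: gcr.io/applive/simpleapp-direct@sha256:<digest-of-the-build-you-want>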
Generally, there is no built-in way to automatically restart pods when a new image is ready. It is also advisable not to use image:latest (or just image_name) in Kubernetes, as it makes rolling back your deployment difficult. You also need to make sure that imagePullPolicy is set to Always. Normally, when you use CI/CD or GitOps, your deployment is updated automatically by those tools once the new image is built and has passed the tests.
When your Docker image is updated, you need to set up a trigger on this update within your CI/CD pipeline to re-run the deployment. I'm not sure which base system/image you use to build your Docker image, but you can add the Kubernetes credentials there and run the same commands you would run on your local computer.
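Since this is GCP, one hedged example is a final Cloud Build step that points the Deployment at the freshly built image (the cluster name, zone, and the use of $SHORT_SHA as the tag are assumptions about your pipeline):
# last step of cloudbuild.yaml
steps:
  # ... earlier steps build and push gcr.io/applive/simpleapp-direct:$SHORT_SHA ...
  - name: gcr.io/cloud-builders/kubectl
    args:
      - set
      - image
      - deployment/simpleapp-direct
      - simpleapp-direct=gcr.io/applive/simpleapp-direct:$SHORT_SHA
    env:
      - CLOUDSDK_COMPUTE_ZONE=<your-cluster-zone>
      - CLOUDSDK_CONTAINER_CLUSTER=<your-cluster-name>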
How can I have Kubernetes automatically restart a container which purposefully exits in order to get new data from environment variables?
I have a container running on a Kubernetes cluster which operates in the following fashion:
Container starts, polls for work
If it receives a task, it does some work
It polls for work again, until the container has been running for over a certain period of time, at which point it exits instead of polling for more work.
It needs to be continually restarted, as it uses environment variables that are populated from Kubernetes Secrets, which are periodically refreshed by another process.
I've tried a Deployment, but it doesn't seem like the right fit as I get CrashLoopBackOff status, which means the worker is scheduled less and less often.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-fonky-worker
  labels:
    app: my-fonky-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-fonky-worker
  template:
    metadata:
      labels:
        app: my-fonky-worker
    spec:
      containers:
        - name: my-fonky-worker-container
          image: my-fonky-worker:latest
          env:
            - name: NOTSOSECRETSTUFF
              value: cats_are_great
            - name: SECRETSTUFF
              valueFrom:
                secretKeyRef:
                  name: secret-name
                  key: secret-key
I've also tried a CronJob, but that seems a bit hacky as it could mean that the container is left in the stopped state for several seconds.
As @Josh said, you need to exit with exit 0, otherwise it will be treated as a failed container. Here is the reference. According to the first example there, "Pod is running and has one Container. Container exits with success." If your restartPolicy is set to Always (which is the default, by the way), the container is restarted: the Pod status still shows Running, but if you inspect the Pod you can see the container's restart count increase.
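A minimal sketch of that behaviour for the worker's entrypoint (the time budget and the polling command are placeholders), so the container always ends with a success exit code and comes back up with fresh environment variables:
#!/bin/sh
# Poll for work until the time budget is spent, then exit 0 so the
# restart under restartPolicy: Always is treated as a clean exit.
START=$(date +%s)
MAX_RUNTIME=3600   # seconds; placeholder value
while [ $(( $(date +%s) - START )) -lt "$MAX_RUNTIME" ]; do
  poll-and-do-work   # placeholder for your actual polling/worker command
  sleep 5
done
exit 0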
It needs to be continually restarted, as it uses environment variables that are populated from Kubernetes Secrets, which are periodically refreshed by another process.
I would take a different approach to this. I would mount the ConfigMap as explained here; this will automatically refresh the mounted ConfigMap data (Ref). Note: please take the "kubelet sync period (1 minute by default) + TTL of the ConfigMaps cache (1 minute by default) in kubelet" into account when managing the refresh rate of ConfigMap data in the Pod.
What I see as a solution for this would be to run your container as a CronJob, but don't use startingDeadlineSeconds as your container killer.
It runs on its schedule.
In your container you can have it poll for work N times.
After N times it exits 0.
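A minimal sketch of that CronJob (the schedule and names are illustrative; the image and Secret come from the Deployment in the question):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-fonky-worker
spec:
  schedule: "*/15 * * * *"        # start a fresh worker every 15 minutes (example)
  concurrencyPolicy: Forbid       # don't start a new run while one is still working
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-fonky-worker-container
              image: my-fonky-worker:latest
              env:
                - name: SECRETSTUFF
                  valueFrom:
                    secretKeyRef:
                      name: secret-name   # re-read from the Secret at every run
                      key: secret-key
          restartPolicy: Never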
If I understood correctly, in your example there are two problems:
Restarting the container
Updating the secret values
In order to keep your secrets up to date, you should consider using Secrets as described in Amit Kumar Gupta's comment and mounting them as a volume instead of exposing them as environment variables; here is an example, and a short sketch follows.
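A minimal sketch of mounting the same Secret as a volume instead of an environment variable (the mount path is an assumption; the application reads the value from the file, which the kubelet refreshes in place):
spec:
  containers:
    - name: my-fonky-worker-container
      image: my-fonky-worker:latest
      volumeMounts:
        - name: secret-stuff
          mountPath: /etc/secrets      # read /etc/secrets/secret-key at runtime
          readOnly: true
  volumes:
    - name: secret-stuff
      secret:
        secretName: secret-name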
As for the second problem, restarting the container, it depends on the exit code, as described by garlicFrancium.
From another point of view, you can use an init container that waits for new tasks and a main container that processes those tasks according to your requirements, or create a job scheduler:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: complete
  name: complete
spec:
  replicas: 1
  selector:
    matchLabels:
      app: complete
  template:
    metadata:
      labels:
        app: complete
    spec:
      hostname: c1
      containers:
        - name: complete
          command:
            - "bash"
          args:
            - "-c"
            - "wa=$(shuf -i 15-30 -n 1) && echo $wa && sleep $wa"
          image: ubuntu
          imagePullPolicy: IfNotPresent
          resources: {}
      initContainers:
        - name: wait-for
          image: ubuntu
          command: ['bash', '-c', 'sleep 30']
      restartPolicy: Always
Please note:
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted secret is fresh on every periodic sync. However, it is using its local cache for getting the current value of the Secret.
The type of the cache is configurable using the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct. It can be either propagated via watch (default), ttl-based, or simply redirecting all requests directly to the kube-apiserver. As a result, the total delay from the moment when the Secret is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals the watch propagation delay, the TTL of the cache, or zero, correspondingly).
A container using a Secret as a subPath volume mount will not receive Secret updates.
Please refer also to:
Fine Parallel Processing Using a Work Queue
I'm hosting an application on the Google Cloud Platform via Kubernetes, and I've managed to set up this continuous deployment pipeline:
Application code is updated
New Docker image is automatically generated
K8s Deployment is automatically updated to use the new image
This works great, except for one issue - the deployment always seems to have only one pod. Because of this, when the next update cycle comes around, the entire application goes down, which is unacceptable.
I've tried modifying the YAML of the deployment to increase the number of replicas, and it works... until the next image update, where it gets reset back to one pod again.
This is the command I use to update the image deployment:
set image deployment foo-server gcp-cd-foo-server-sha256=gcr.io/project-name/gcp-cd-foo-server:$REVISION_ID
You can use this command if you don't want to edit the deployment YAML file:
kubectl scale deployment foo-server --replicas=2
Also, look at the update strategy with the maxUnavailable and maxSurge properties.
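For reference, a minimal sketch of those strategy fields on a Deployment (the values shown are the Kubernetes defaults):
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most this many pods may be down during an update
      maxSurge: 25%         # at most this many extra pods may be created during an update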
In your original deployment.yml file, keep replicas at 2 or more; otherwise you can't avoid downtime when only one pod is running and you re-deploy/upgrade.
Deployment with 3 replicas (example):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Deployment can ensure that only a certain number of Pods may be down while they are being updated. By default, it ensures that at least 25% less than the desired number of Pods are up (25% max unavailable). Deployment can also ensure that only a certain number of Pods may be created above the desired number of Pods. By default, it ensures that at most 25% more than the desired number of Pods are up (25% max surge).
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Never mind, I had just set up my deployments wrong; it had something to do with using the GCP user interface to create the deployments rather than console commands. I created the deployments with kubectl run app --image ... instead and it works now.
Running Kubernetes v1.2.2 on CoreOS on VMware:
I have a pod with the restart policy set to Never. Is it possible to manually start the same pod back up?
In my use case we will have a Postgres instance in this pod. If it were to crash, I would like to leave the pod in a failed state until we can look at it more closely to see why it failed, and then start it manually, rather than have it restart automatically with a restartPolicy of Always.
Looking through kubectl, it doesn't seem like there is a manual start option. I could delete and recreate the pod, but I think that would remove the data from my container. Maybe I should be mounting a local volume on my host so that I don't need to worry about losing data?
This is my sample pod YAML. I don't seem to be able to restart the 'health' pod.
apiVersion: v1
kind: Pod
metadata:
  name: health
  labels:
    environment: dev
    app: health
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
  restartPolicy: Never
One simple method that might address your needs is to add a unique instance label (and a matching unique name), maybe using a simple counter. If each pod is named and labelled differently, you can start as many as you like and keep around as many failed instances as you like.
e.g. first pod
apiVersion: v1
kind: Pod
metadata:
  name: health-0
  labels:
    environment: dev
    app: health
    instance: "0"
spec:
  containers: ...
second pod
apiVersion: v1
kind: Pod
metadata:
  name: health-1
  labels:
    environment: dev
    app: health
    instance: "1"
spec:
  containers: ...
Based on your question and comments, it sounds like you want to restart a failed container while retaining its state and data. In fact, application containers and pods are considered relatively ephemeral (rather than durable) entities. When a container crashes, its files are lost and the kubelet will restart it with a clean state.
To retain your data and logs, use persistent volume types in your deployment. This will let you preserve data across container restarts.
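A minimal sketch of that approach (the claim name, storage size, image tag, password handling, and mount path are assumptions; the path shown is the default Postgres data directory), with a PersistentVolumeClaim mounted into the pod so the database files survive restarts:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: health-data            # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: health
spec:
  containers:
    - name: postgres
      image: postgres:13       # assumed image/tag
      env:
        - name: POSTGRES_PASSWORD
          value: example       # placeholder; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: health-data
  restartPolicy: Never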