kubernetes keeps on pulling old image, even with imagePullPolicy set to Always - kubernetes

I have an image on Docker Hub that I'm using in a Kubernetes deployment. I'm trying to debug the application, but whenever I make a change to the image, even with a change in the tag, the deployment still uses the old image when I deploy the app. It does update occasionally, but without any rhyme or reason. This is with imagePullPolicy set to Always.
Here is the deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: myid/myapp:0.2.5
        imagePullPolicy: Always
        resources:
          requests:
            memory: "2Gi"
            cpu: 1
            ephemeral-storage: "2Gi"
          limits:
            memory: "2Gi"
            cpu: 1
            ephemeral-storage: "2Gi"
        ports:
        - containerPort: 3838
          name: shiny-port
        - containerPort: 8080
          name: web-port
      imagePullSecrets:
      - name: myregistrykey
I deploy it using the command
kubectl create -f deployment.yaml
Thanks

This is a community wiki answer posted for better visibility. Feel free to expand it.
As mentioned in the comments:
check which tag is used - the same or a different one than in the previous version of the image;
update your Deployment with the actual image;
check whether your image is available on Docker Hub; otherwise you will get an ErrImagePull status when checking the pod.
In case you just want to update an image, it is better to use kubectl apply -f deployment.yaml instead of kubectl create -f deployment.yaml.
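For example, two common ways to roll out a new build (a sketch based on the Deployment above; the 0.2.6 tag is only a hypothetical next version):
# Option 1: bump the tag in deployment.yaml, then apply declaratively
kubectl apply -f deployment.yaml
# Option 2: change the image imperatively and watch the rollout
kubectl set image deployment/deployment frontend=myid/myapp:0.2.6
kubectl rollout status deployment/deployment
Using a new, unique tag for every build avoids relying on imagePullPolicy: Always and node-level image caching altogether.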

Related

kubernetes k8 unable to pull latest image

Hi, I am working with Kubernetes. Below is my k8s deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: webapp
spec: # Dictionary
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # maxUnavailable defines how many pods can be unavailable during the rolling update
      maxUnavailable: 50%
      # maxSurge defines how many pods we can add at a time
      maxSurge: 1
  selector:
    matchLabels:
      app: webapp
      instance: app
  template:
    metadata: # Dictionary
      name: webapplication
      labels: # Dictionary
        app: webapp # Key-value pairs
        instance: app
      annotations:
        vault.security.banzaicloud.io/vault-role: al-dev
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers: # List
      - name: al-webapp-container
        image: ghcr.io/my-org/al.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "1Gi"
            cpu: "900m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      imagePullSecrets:
      - name: githubpackagesecret
Whenever I deploy this into Kubernetes, it's not picking up the latest image from GitHub Packages. What should I do to pull the latest image and update the currently running pod with it? Can someone help me fix this issue? Any help would be appreciated. Thank you.
There is a chance that, if you are deploying with the same latest tag, the deployment is not being updated because the image tag has not changed.
A pod restart is required so that the new image is downloaded each time; if the image is still the same after that, there is an issue with the image build being cached.
What you can do for now is try
kubectl rollout restart deploy <deployment-name> -n <namespace>
This will restart the pods, fetch the new image for all of them, and you can then check whether the latest code is running.
Since you have imagePullPolicy: Always set, it should always pull the image. Can you run kubectl describe on the pod while it's starting so we can see the events?
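If you want to confirm which image the pods actually ended up running (for example after a rollout restart), something like the following should print the resolved image digest for each pod; this is only a sketch that assumes the app: webapp label from the manifest above:
kubectl get pods -l app=webapp \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].imageID}{"\n"}{end}'
kubectl describe pod <pod-name>
If the imageID digest does not change between deployments even though a new image was pushed, the problem is on the build/push side rather than in the cluster.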

Kubernetes reattach to same persistent volume after delete

I have an app where two pods need access to the same volume. I want to be able to delete the cluster and then, after applying again, access the data that is on the volume.
So for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  #storageClassName: retaining
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-php
  labels:
    app: myapp-php
    k8s-app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-php
  template:
    metadata:
      labels:
        app: myapp-php
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-php
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 750m
            memory: 3Gi
          requests:
            cpu: 750m
            memory: 3Gi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
  labels:
    app: myapp-web
    k8s-app: myapp
spec:
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-web
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
If I do this:
k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply -f pv-issue.yaml
I want to reconnect to the same volume.
What I have tried:
If I keep the file as is, the volume will be deleted, so the data will be lost.
I can remove the PVC declaration from the file. Then it works. My issue is that on the real app I am using kustomize, and I don't see a way to exclude resources when doing kustomize build app | kubectl delete -f -
Tried using Retain in the PVC. It retains the volume on delete, but on apply a new volume is created.
StatefulSet; however, I don't see a way that two different StatefulSets can share the same volume.
Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Is there a way to achieve this? Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Cluster deletion will cause all your local volumes to be deleted. You can avoid this by storing the data outside the cluster. Kubernetes supports a wide variety of storage providers to help you keep data on different storage types.
You might also consider keeping the data locally on nodes with hostPath, but that is not a good solution either, since it requires pinning the pod to a specific node to avoid data loss. And if you delete your cluster in a way that removes all of your VMs, that data will be gone as well.
Having some network-attached storage is the right way to go here. A very good example is persistent disks: durable network storage devices that your instances can access. They are located independently from your virtual machines and are not deleted when you delete the cluster.
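If you do want to reuse a retained volume instead of restoring from backup, the usual pattern is to statically provision a PersistentVolume that points at the surviving disk and bind the PVC to it by name. A rough sketch, assuming the Hetzner CSI driver from the StorageClass above; the volumeHandle is only a placeholder for the ID of the existing volume (you can read it from the old PV object or the Hetzner console):
# Pre-provisioned PV pointing at the retained disk.
# volumeHandle is a placeholder - copy the value from the old PV
# (kubectl get pv <old-pv-name> -o yaml) before the cluster is deleted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: retaining
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "<existing-volume-id>"
---
# PVC bound explicitly to that PV, so no new volume is provisioned on apply
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  storageClassName: retaining
  volumeName: media-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Because the PVC sets volumeName, the next apply binds straight to the pre-existing disk rather than dynamically provisioning a new one.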

assign memory resources to a running pod?

I would like to know how I can assign memory resources to a running pod.
I tried kubectl get po foo-7d7dbb4fcd-82xfr -o yaml > pod.yaml
but when I run the command kubectl apply -f pod.yaml I get:
The Pod "foo-7d7dbb4fcd-82xfr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
Thanks in advance for your help.
A Pod is the minimal Kubernetes resource, and it does not support the kind of edit you are trying to make.
I suggest you use a Deployment to run your pod, since it is a "pod manager" that gives you a lot of additional features, like pod self-healing, pod liveness/readiness probes, etc.
You can define the resources in your deployment file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        resources:
          limits:
            cpu: 15m
            memory: 100Mi
          requests:
            cpu: 15m
            memory: 100Mi
        ports:
        - name: http
          containerPort: 80
As #KoopaKiller mentioned, you can't update the spec.containers.resources field; this is mentioned in the Container object spec:
Compute Resources required by this container. Cannot be updated.
Instead you can deploy your Pods using a Deployment object. In that case, if you change the resources config for your Pods, the deployment controller will roll out updated versions of them.
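If the Pod is already managed by a Deployment, the resources can also be changed imperatively; a sketch using the echo Deployment above (the new values are just placeholders):
kubectl set resources deployment echo --requests=cpu=15m,memory=150Mi --limits=cpu=30m,memory=200Mi
kubectl rollout status deployment echo
This edits the Deployment's pod template, so the controller rolls out new Pods with the updated resources instead of trying to mutate the running Pod in place.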

Is it possible to run gcsfuse without privileged mode inside GCP kubernetes?

Following this guide, I am trying to run gcsfuse inside a pod in GKE. Below is the deployment manifest that I am using:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gcsfuse-test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gcsfuse-test
    spec:
      containers:
      - name: gcsfuse-test
        image: gcr.io/project123/gcs-test-fuse:latest
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["mkdir", "/mnt"]
              command: ["gcsfuse", "-o", "nonempty", "cloudsql-p8p", "/mnt"]
          preStop:
            exec:
              command: ["fusermount", "-u", "/mnt"]
However, I would like to run gcsfuse without the privileged mode inside my GKE Cluster.
I think (because of questions like these on SO) it is possible to run the docker image with certain flags and there will be no need to run it in privileged mode.
Is there any way in GKE to run gcsfuse without running the container in privileged mode?
Edit Apr 26, 2022: for a further developed repo derived from this answer, see https://github.com/samos123/gke-gcs-fuse-unprivileged
Now it finally is possible to mount devices without privileged: true or CAP_SYS_ADMIN!
What you need is:
A Kubelet device manager which allows containers to have direct access to host devices in a secure way. The device manager makes explicitly listed devices available via the Kubelet Device API. I used this hidden gem: https://gitlab.com/arm-research/smarter/smarter-device-manager.
Define the list of devices provided by the Device Manager - add /dev/YOUR_DEVICE_NAME to this list, see the example below.
Request the device via the Device Manager in the pod spec: resources.requests.smarter-devices/YOUR_DEVICE_NAME: 1
I spent quite some time figuring this out so I hope sharing the information here will help someone else from the exploration.
I wrote up my detailed findings in the Kubernetes GitHub issue about /dev/fuse. See an example setup in this comment and more technical details above that one.
Examples from the comment linked above:
Allow FUSE devices via Device Manager:
apiVersion: v1
kind: ConfigMap
metadata:
  name: smarter-device-manager
  namespace: device-manager
data:
  conf.yaml: |
    - devicematch: ^fuse$
      nummaxdevices: 20
Request /dev/fuse via Device Manager:
# Pod spec:
resources:
  limits:
    smarter-devices/fuse: 1
    memory: 512Mi
  requests:
    smarter-devices/fuse: 1
    cpu: 10m
    memory: 50Mi
Device Manager as a DaemonSet:
# https://gitlab.com/arm-research/smarter/smarter-device-manager/-/blob/master/smarter-device-manager-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: smarter-device-manager
  namespace: device-manager
  labels:
    name: smarter-device-manager
    role: agent
spec:
  selector:
    matchLabels:
      name: smarter-device-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: smarter-device-manager
      annotations:
        node.kubernetes.io/bootstrap-checkpoint: "true"
    spec:
      ## kubectl label node pike5 smarter-device-manager=enabled
      # nodeSelector:
      #   smarter-device-manager: enabled
      priorityClassName: "system-node-critical"
      hostname: smarter-device-management
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: smarter-device-manager
        image: registry.gitlab.com/arm-research/smarter/smarter-device-manager:v1.1.2
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        resources:
          limits:
            cpu: 100m
            memory: 15Mi
          requests:
            cpu: 10m
            memory: 15Mi
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
        - name: dev-dir
          mountPath: /dev
        - name: sys-dir
          mountPath: /sys
        - name: config
          mountPath: /root/config
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
      - name: dev-dir
        hostPath:
          path: /dev
      - name: sys-dir
        hostPath:
          path: /sys
      - name: config
        configMap:
          name: smarter-device-manager
Privileged mode means you have all the capabilities enabled, see https://stackoverflow.com/a/36441605. So adding CAP_SYS_ADMIN looks redundant here in your example.
You can either give all the privileges or do something more fine-grained by mounting /dev/fuse and giving only SYS_ADMIN capability (which remains an important permission).
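For the fine-grained variant, a container spec along these lines is the usual sketch: it drops privileged: true, keeps only SYS_ADMIN, and exposes /dev/fuse via a hostPath volume. Whether this is sufficient depends on the cluster's policies and runtime, so treat it as a starting point rather than a guaranteed recipe:
# Sketch only: SYS_ADMIN instead of full privileged mode,
# with /dev/fuse made visible through a hostPath volume.
containers:
- name: gcsfuse-test
  image: gcr.io/project123/gcs-test-fuse:latest
  securityContext:
    capabilities:
      add: ["SYS_ADMIN"]
  volumeMounts:
  - name: fuse-device
    mountPath: /dev/fuse
volumes:
- name: fuse-device
  hostPath:
    path: /dev/fuse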
I think we can rephrase the question as: can we run gcsfuse without the SYS_ADMIN capability?
Actually it does not look feasible; you can find the related Docker issue here: https://github.com/docker/for-linux/issues/321.
For most projects this won't be a hard no-go. You may want to weigh it against your threat model and decide whether or not it is an acceptable security risk for your production environment.

Kubernetes | Rolling Update on Replica Set

I'm trying to perform a rolling update of the container image that my Federated Replica Set is using but I'm getting the following error:
When I run: kubectl rolling-update mywebapp -f mywebapp-v2.yaml
I get the error message: the server could not find the requested resource;
This is a brand new and clean install on Google Container Engine (GKE) so besides creating the Federated Cluster and deploying my first service nothing else has been done. I'm following the instructions from the Kubernetes Docs but no luck.
I've checked to make sure that I'm in the correct context and I've also created a new YAML file pointing to the new image and updated the metadata name. Am I missing something? The easy way for me to do this is to delete the replica set and then redeploy but then I'm cheating myself :). Any pointers would be appreciated
mywebappv2.yaml - new yaml file for rolling update
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp-v2
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
My original mywebapp.yaml file:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
Try kind: Deployment.
Most kubectl commands that support Replication Controllers also support ReplicaSets. One exception is the rolling-update command. If you want the rolling update functionality please consider using Deployments instead.
Also, the rolling-update command is imperative whereas Deployments are declarative, so we recommend using Deployments through the rollout command.
-- Replica Sets | Kubernetes
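As a rough sketch of what that migration could look like, here is the ReplicaSet above translated into a Deployment (apps/v1 requires an explicit selector, which is added here to match the template labels):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
A rolling update then becomes (the filename and the v3 tag are only placeholders for your manifest and the next image version):
kubectl apply -f mywebapp-deployment.yaml
kubectl set image deployment/mywebapp mywebapp=gcr.io/xxxxxx/static-js:v3
kubectl rollout status deployment/mywebapp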