I would like to know how I can assign memory resources to a running Pod.
I tried kubectl get po foo-7d7dbb4fcd-82xfr -o yaml > pod.yaml,
but when I run kubectl apply -f pod.yaml I get:
The Pod "foo-7d7dbb4fcd-82xfr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
Thanks in advance for your help.
A Pod is the minimal Kubernetes resource, and it does not support the kind of edit you are trying to make.
I suggest you run your Pod through a Deployment, since it is a "Pod manager" that gives you a lot of additional features, like Pod self-healing, liveness/readiness probes, etc.
You can define the resources in your Deployment file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        resources:
          limits:
            cpu: 15m
            memory: 100Mi
          requests:
            cpu: 15m
            memory: 100Mi
        ports:
        - name: http
          containerPort: 80
As @KoopaKiller mentioned, you can't update the spec.containers[*].resources field; this is mentioned in the Container object spec:
Compute Resources required by this container. Cannot be updated.
Instead, you can deploy your Pods using a Deployment object. In that case, if you change the resources config for your Pods, the Deployment controller will roll out updated versions of them.
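If you prefer to make the change imperatively rather than editing the manifest, a minimal sketch (assuming the Deployment and container are both named echo, as in the example above; the 200Mi value is just an example of a new memory setting) could look like this:
# Patch the resources in the Deployment's Pod template; the Deployment
# controller then rolls out new Pods with the updated values.
kubectl set resources deployment echo -c=echo --requests=cpu=15m,memory=200Mi --limits=cpu=15m,memory=200Mi
# Watch the old Pods being replaced.
kubectl rollout status deployment/echo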
Hi, I am working in Kubernetes. Below is my k8s deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: webapp
spec: # Dictionary
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # maxUnavailable defines how many pods can be unavailable during the rolling update
      maxUnavailable: 50%
      # maxSurge defines how many pods can be created above the desired replica count
      maxSurge: 1
  selector:
    matchLabels:
      app: webapp
      instance: app
  template:
    metadata: # Dictionary
      name: webapplication
      labels: # Dictionary
        app: webapp # Key-value pairs
        instance: app
      annotations:
        vault.security.banzaicloud.io/vault-role: al-dev
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers: # List
      - name: al-webapp-container
        image: ghcr.io/my-org/al.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "1Gi"
            cpu: "900m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      imagePullSecrets:
      - name: githubpackagesecret
Whenever I deploy this into Kubernetes, it's not picking up the latest image from GitHub Packages. What should I do in order to pull the latest image and update the currently running Pod with the latest image? Can someone help me fix this issue? Any help would be appreciated. Thank you.
There is a good chance that, if you are deploying with the same latest tag, the Deployment spec does not change, so Kubernetes does not see anything to update even though a new image exists behind that tag.
A Pod restart is required so that the node pulls the image again; if you still get the old version after that, the problem is with the image build or a cached image.
What you can do for now is try:
kubectl rollout restart deploy <deployment-name> -n <namespace>
This will restart the Pods and fetch the image again for all of them; you can then check whether the latest code is running.
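A more robust approach than reusing the latest tag is to push every build with a unique tag and point the Deployment at it, so the spec actually changes; a sketch, where <build-tag> is a placeholder for your new tag:
# Changing the image field triggers a normal rolling update.
kubectl set image deployment/webapp al-webapp-container=ghcr.io/my-org/al.web:<build-tag>
# Follow the rollout until the new Pods are ready.
kubectl rollout status deployment/webapp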
Since you have imagePullPolicy: Always set, it should always pull the image. Can you run kubectl describe on the Pod while it's starting so we can see the logs?
I have two different microservices deployed as a backend in minikube; call them deployment A and deployment B. Each of these deployments has a different number of Pod replicas running.
Deployment B is exposed as service B. Pods of deployment A call deployment B's Pods via service B, which is of ClusterIP type.
The Pods of deployment B have a scrapyd application running inside them, with scraping spiders deployed in it. Each celery worker (a Pod of deployment A) takes a task from a Redis queue and calls the scrapyd service to schedule spiders on it.
Everything works fine, but after I scale the application (deployments A and B separately), I observe using kubectl top pods that the resource consumption is not uniform.
Some of the Pods of deployment B are not used at all. The pattern I observe is that the Pods of deployment B that come up only after all the Pods of deployment A are already running are never utilized.
Is this normal behavior? I suspect the connections between the Pods of deployments A and B are persistent, and I am confused as to why request handling by the Pods of deployment B is not uniformly distributed. Sorry for the naive question; I am new to this field.
The manifest for deployment A is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
  labels:
    deployment: celery-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: celery-worker
  template:
    metadata:
      labels:
        pod: celery-worker
    spec:
      containers:
      - name: celery-worker
        image: celery:latest
        imagePullPolicy: Never
        command: ['celery', '-A', 'mysite', 'worker', '-E', '-l', 'info']
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
      terminationGracePeriodSeconds: 200
and that of deployment B is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scrapyd
  labels:
    app: scrapyd
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: scrapyd
  template:
    metadata:
      labels:
        pod: scrapyd
    spec:
      containers:
      - name: scrapyd
        image: scrapyd:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 6800
        resources:
          limits:
            cpu: 800m
          requests:
            cpu: 800m
      terminationGracePeriodSeconds: 100
---
kind: Service
apiVersion: v1
metadata:
  name: scrapyd
spec:
  selector:
    pod: scrapyd
  ports:
  - protocol: TCP
    port: 6800
    targetPort: 6800
Output of kubectl top pods:
So the solution to the above problem that I figured out is as follows:
In the current setup, install Linkerd using this link: Linkerd Installation in Kubernetes.
After that, inject the Linkerd proxy into the celery deployment as follows:
cat celery/deployment.yaml | linkerd inject - | kubectl apply -f -
This ensures that requests from celery are first passed to this proxy and then load balanced directly to scrapyd at the L7 layer. In this case kube-proxy is bypassed, and the default load balancing at L4 no longer applies.
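One quick way to confirm that the proxy was actually injected (using the pod: celery-worker label from the manifest above) is to check that each celery Pod now reports an extra linkerd-proxy container:
# Each injected Pod should show 2/2 in the READY column
# (the application container plus the linkerd-proxy sidecar).
kubectl get pods -l pod=celery-worker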
I want to share files between two containers in Kubernetes. Therefore I created a PersistentVolumeClaim, which I plan to mount in both containers. But for a start, I tried to mount it in one container only. Whenever I deploy this, two Pods get created.
$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf 0/1 ContainerCreating 0 18h
dev-frontend-b4959fb97-f9mgw 1/1 Running 0 18h
The first Pod is stuck in ContainerCreating because it can not access the shared volume (Volume is already attached by pod fresh-namespace/dev-frontend-b4959fb97-f9mgw).
But when I remove the code that mounts the volume into the container and deploy again, only one Pod is created. This also happens if I start with a completely new namespace.
$ kubectl get pods | grep frontend
dev-frontend-587bc7f359-7ozsx 1/1 Running 0 5m
Why does the mount spawn a second Pod when there should only be one?
Here are the relevant parts of the deployment files:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-file-pvc
  labels:
    app: frontend
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-frontend
  labels:
    app: frontend
    environment: dev
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: frontend
      environment: dev
  template:
    metadata:
      labels:
        app: frontend
        environment: dev
    spec:
      volumes:
      - name: shared-files
        persistentVolumeClaim:
          claimName: shared-file-pvc
      containers:
      - name: frontend
        image: some/registry/frontend:version
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: 100Mi
            cpu: 250m
          limits:
            memory: 300Mi
            cpu: 750m
        volumeMounts:
        - name: shared-files              # works if I remove
          mountPath: /data/shared-files   # these two lines
---
Can anybody tell me what I am missing here?
Thanks in advance!
If you check the exact names of your deployed Pods, you can see that these Pods are managed by two different ReplicaSets (dev-frontend-84ca5d6dd6 and dev-frontend-b4959fb97):
$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf 0/1 ContainerCreating 0 18h
dev-frontend-b4959fb97-f9mgw 1/1 Running 0 18h
Your Deployment is using RollingUpdate as the default deployment strategy. A rolling update waits for new Pods to become Ready BEFORE scaling down the previous ones.
I assume that first you deployed dev-frontend-b4959fb97-f9mgw (the old version of the app) and then you deployed dev-frontend-84ca5d6dd6-8n8lf (the new version of the same app).
The newly created Pod (dev-frontend-84ca5d6dd6-8n8lf) is not able to attach the volume (the old Pod still uses it), so it will never reach the Ready state.
Check how many dev-frontend-* ReplicaSets you have in this particular namespace and which one is currently managed by your Deployment:
$ kubectl get rs | grep "dev-frontend"
$ kubectl describe deployment dev-frontend | grep NewReplicaSet
These two commands should give you an idea of which one is new and which one is old.
I am not sure which StorageClass you are using, but you can check it using:
$ kubectl describe pvc shared-file-pvc | grep "StorageClass"
Additionally, check whether the created PV really supports the RWX access mode:
$ kubectl describe pv <PV_name> | grep "Access Modes"
The main concept is that a PersistentVolume and a PersistentVolumeClaim are mapped one-to-one. This is described in the Persistent Volumes - Binding documentation.
However, if your StorageClass is configured correctly, it is possible to have two or more containers using the same PersistentVolumeClaim that is attached to the same mount on the back end.
The most common examples are NFS and CephFS.
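For completeness, if the goal is only to share files between two containers, both containers can mount the same volume when they run in the same Pod, regardless of the access mode. A minimal sketch of the Pod template spec based on the Deployment above, where the second container's name and image are hypothetical:
    spec:
      volumes:
      - name: shared-files
        persistentVolumeClaim:
          claimName: shared-file-pvc
      containers:
      - name: frontend
        image: some/registry/frontend:version
        volumeMounts:
        - name: shared-files
          mountPath: /data/shared-files
      - name: sidecar                         # hypothetical second container
        image: some/registry/sidecar:version  # hypothetical image
        volumeMounts:
        - name: shared-files
          mountPath: /data/shared-files       # both containers see the same files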
As the GCE disk does not support ReadWriteMany, I have no way to apply a change to the Deployment without getting stuck at ContainerCreating with FailedAttachVolume.
So here's my setup:
1. PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
2. Service
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  selector:
    app: mysql
3. Deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql/mysql-server
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /mysql-data
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
These are all fine for creating the PVC, Service and Deployment. The Pod and container started successfully and worked as expected.
However, when I tried to apply a change with:
kubectl apply -f mysql_deployment.yaml
Firstly, the Pods got stuck: the existing one did not terminate and the new one stayed in ContainerCreating forever.
NAME            READY   STATUS              RESTARTS   AGE
mysql-nowhash   1/1     Running             0          2d
mysql-newhash   0/2     ContainerCreating   0          15m
Secondly, from the GCloud console, inside the Pod that was trying to start, I got two crucial error events:
1 of 2 FailedAttachVolume
Multi-Attach error for volume "pvc-<hash>" Volume is already exclusively attached to one node and can't be attached to another FailedAttachVolume
2 of 2 FailedMount
Unable to mount volumes for pod "<pod name and hash>": timeout expired waiting for volumes to attach/mount for pod "default"/"<pod name and hash>". list of unattached/unmounted volumes=[mysql-persistent-storage]
What I could immediately think of is the ReadWriteOnce access mode of the GCE PV, because Kubernetes Engine creates the new Pod before terminating the existing one. So under ReadWriteOnce it can never bring up the new Pod and attach the existing PVC...
Any idea or should I use some other way to perform deployment updates?
I appreciate any contribution and suggestion =)
Remark: my current workaround is to create an interim NFS Pod to make it behave like a ReadWriteMany PVC. This worked, but it sounds silly... requiring additional storage I/O overhead just to facilitate a deployment update?.. =P
The reason is that if you are using the RollingUpdate strategy (the default), Kubernetes waits for the new container to become ready before shutting down the old one. You can change this behaviour by switching the Deployment strategy to Recreate.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
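A minimal sketch of that change applied to the mysql Deployment above; only the strategy field is new, the rest stays as posted:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    type: Recreate   # terminate the old Pod first, so the RWO disk is released
  selector:
    matchLabels:
      app: mysql
  template:
    # ... unchanged Pod template from the Deployment above ...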
I'm trying to perform a rolling update of the container image that my Federated Replica Set is using but I'm getting the following error:
When I run: kubectl rolling-update mywebapp -f mywebapp-v2.yaml
I get the error message: the server could not find the requested resource;
This is a brand new and clean install on Google Container Engine (GKE), so besides creating the Federated Cluster and deploying my first service, nothing else has been done. I'm following the instructions from the Kubernetes docs, but no luck.
I've checked to make sure that I'm in the correct context, and I've also created a new YAML file pointing to the new image and updated the metadata name. Am I missing something? The easy way for me to do this is to delete the ReplicaSet and then redeploy, but then I'm cheating myself :). Any pointers would be appreciated.
mywebapp-v2.yaml - the new YAML file for the rolling update:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp-v2
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
My original mywebapp.yaml file:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
Try kind: Deployment.
Most kubectl commands that support Replication Controllers also support ReplicaSets. One exception is the rolling-update command. If you want the rolling update functionality please consider using Deployments instead.
Also, the rolling-update command is imperative whereas Deployments are declarative, so we recommend using Deployments through the rollout command.
-- Replica Sets | Kubernetes
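A sketch of the same workload written as a Deployment instead of a bare ReplicaSet (apps/v1 is used here, which also requires an explicit selector; <new-tag> is a placeholder for the image tag you want to roll out), after which the image can be updated with kubectl set image and kubectl rollout instead of rolling-update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
# Roll out a new image and watch its progress:
kubectl set image deployment/mywebapp mywebapp=gcr.io/xxxxxx/static-js:<new-tag>
kubectl rollout status deployment/mywebapp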