IBM-MQ Helm chart on Kubernetes can't see resource limits

I'm trying to deploy IBM-MQ to Kubernetes (Rancher) using helmfile. I was getting the same error as here.
It wasn't working with the storage class "longhorn", but it was working with the storage class "local-path". I added security: initVolumeAsRoot: true to my helmfile, which now looks like this:
....
- name: ibm-mq
  ...
  chart: ibm-stable-charts/ibm-mqadvanced-server-dev
  values:
    - license: accept
      security:
        initVolumeAsRoot: true
      resources:
        limits:
          cpu: 800m
          memory: 800Mi
        requests:
          cpu: 500m
          memory: 512Mi
      image:
        tag: latest
      dataPVC:
        storageClassName: "longhorn"
        size: 500Mi
...
But in Lens, under Stateful Sets, I can see that it can't create a pod because of the error: create Pod ibm-mq-0 in StatefulSet ibm-mq failed error: pods "ibm-mq-0" is forbidden: failed quota: default: must specify limits.cpu,limits.memory.
The same helmfile without the security section didn't get an error about limits, but it did get an error because of problems with longhorn (as in the question I linked). If I change the storage class to local-path (without security) it works fine, but with local-path I need to delete the volume manually after every restart, which is not what I want (for example, a database works on longhorn without deleting the volume after every restart). What might be the problem here? I'm running it with helmfile -n namespace apply.
UPD: I tried to place all the values in the order they're presented here, but it didn't work. I'm using Helm 3, helmfile v0.141.0, kubectl 1.22.2.
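The forbidden message comes from a ResourceQuota (named default) in the target namespace, so it is worth checking both what that quota requires and whether the chart actually renders your limits into the StatefulSet's pod spec. A hedged diagnostic sketch (the namespace name is a placeholder and the grep is only a rough filter):

kubectl describe resourcequota -n namespace                    # shows which limits/requests the quota enforces
helmfile -n namespace template | grep -B2 -A8 "resources:"     # check the rendered StatefulSet for your limits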

Related

MongoDB throws authentication error after upgrading chart version with HELM in Kubernetes

I tried to upgrade the MongoDB chart version from 10.1.0 to 11.2.0 since the previous one is outdated. Authentication was not enabled in the previous version, but I set the root user password with the upgrade. However, the arbiter keeps throwing authentication errors and the mongo pods are crash looping.
From my research this is because of the PVC (persistence: true); when I set it to false and deleted the Helm releases, the installation was successful and the pods were running.
But then when using helm3 upgrade the following error occurred:
Error: UPGRADE FAILED: cannot patch with kind StatefulSet: StatefulSet.apps is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
I am trying to figure out how to keep persistence: true and set up authentication for MongoDB.
values.yaml
mongodb:
  architecture: replicaset
  replicaCount: 2
  podAntiAffinityPreset: hard
  auth:
    enabled: false
  useStatefulSet: true
  persistence:
    enabled: true
    size: 1Gi
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 100m
      memory: 1Gi
  metrics:
    enabled: true
    livenessProbe:
      enabled: true
    readinessProbe:
      enabled: true
    resources:
      limits:
        cpu: 200m
        memory: 256Mi
      requests:
        cpu: 100m
        memory: 128Mi
GitLab CI
script:
  - helm3 install
    --namespace="$NAMESPACE"
    --wait
    --timeout $HELM_TIMEOUT
    ..some other stuff..
    --set mongodb.auth.enabled="true"
    --set mongodb.auth.rootPassword="$MONGO_ROOT_PWD_STAGE"
    --set mongodb.auth.replicaSetKey="$MONGO_REPLICA_SET_KEY_STAGE"
    --values ${!HELM_VALUES}
    --kube-context stage
    "$RELEASE_NAME" chart/
Thanks for any help
In a Kubernetes StatefulSet, unlike a Deployment, there are fields that you cannot change once it is created; you can only change the number of replicas, the pod template, and the updateStrategy.
So you will get this error if you are changing something in
persistence:
  enabled: true
  size: 1Gi
because that changes the PVC configuration.
To work around the problem you can either:
1. create a new StatefulSet and delete the old one (you can copy the data before deleting it); one common variant of this is sketched below
2. patch the PVC (or whatever other resource you are trying to change) manually, then run the helm upgrade command
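A sketch of the first option, under the assumption that the chart names its StatefulSet <release-name>-mongodb: delete only the StatefulSet object while leaving its pods and PVCs in place, then re-run the install/upgrade step so the release should recreate it with the new spec (Helm 3 recreates resources that were deleted out of band).

# orphan the pods and PVCs instead of deleting them (on kubectl < 1.20 use --cascade=false)
kubectl delete statefulset <release-name>-mongodb -n "$NAMESPACE" --cascade=orphan
# then re-run the helm3 install/upgrade step from the CI job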
You can also use the helm-diff plugin to compare the current revision with the new revision you're trying to create, to understand what's going on.
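For reference, a minimal sketch of installing and running the plugin (release name, chart path, namespace and values file are taken from the question's CI snippet and values.yaml, and may differ in your setup):

helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade "$RELEASE_NAME" chart/ --namespace "$NAMESPACE" --values values.yaml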

GKE Autopilot has scaled up my container resources contrary to resource requests

I have a container running in a GKE autopilot K8s cluster. I have the following in my deployment manifest (only relevant parts included):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              memory: "250Mi"
              cpu: "512m"
So I've requested the minimum resources that GKE Autopilot allows for normal pods. Note that I have not specified any limits.
However, having applied the manifest and looked at the resulting YAML, I see that it does not match the manifest I applied:
resources:
  limits:
    cpu: 750m
    ephemeral-storage: 1Gi
    memory: 768Mi
  requests:
    cpu: 750m
    ephemeral-storage: 1Gi
    memory: 768Mi
Any idea what's going on here? Why has GKE scaled up the resources? This is costing me more money unnecessarily.
Interestingly, it was working as intended until recently; this behaviour only seemed to start in the past few days.
If the resources that you've requested are the following:
memory: "250Mi"
cpu: "512m"
then they are not compliant with the minimum amount of resources that GKE Autopilot will assign. Please take a look at the documentation:
NAME                 Normal Pods
CPU                  250 mCPU
Memory               512 MiB
Ephemeral storage    10 MiB (per container)
-- Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: Allowable resource ranges
As you can see, the amount of memory you've requested was too small, which is why you saw the following message (and the manifest was modified to increase the requests/limits):
Warning: Autopilot increased resource requests for Deployment default/XYZ to meet requirements. See http://g.co/gke/autopilot-resources.
To fix that you will need to assign resources that are within the ranges in the documentation linked above.
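For illustration, a hedged sketch of a request block that stays at the documented minimums (values taken from the table above; a real workload may of course need more):

resources:
  requests:
    cpu: "250m"
    memory: "512Mi"
    ephemeral-storage: "10Mi"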

Increase memory limit of a running pod

I have a pod running in OpenShift 3.11, and I wish to increase the pod memory limit from 2GB to 4GB. How can I do this via the OpenShift Web Console or via the oc command line?
When I try to edit the YAML file in the OpenShift Web Console I get the following exception:
Reason: Pod "hjaha" is invalid: spec: Forbidden: pod updates may not
change fields other than spec.containers[*].image,
spec.initContainers[*].image, spec.activeDeadlineSeconds or
spec.tolerations (only additions to existing tolerations)...
Basically, pods are deployed through the container template of their controller, such as a DeploymentConfig, Deployment, DaemonSet, StatefulSet and so on. So first verify which controller is used for your pod, and then modify the resources section in the controller's YAML, not in the running pod's YAML. Look at the following example: if you modify the memory limit in the controller's YAML using the oc CLI or the web console, it will deploy a new pod with the new configuration.
# List deployment controller resources as follows;
# you should then see one whose name is similar to the running pod's name.
$ oc get deploymentconfig,deployment,statefulset,daemonset -n <your project name>
$ oc edit <deployment controller type> <the resource name>
  :
kind: <deployment controller type>
metadata:
  name: <the resource name>
spec:
  :
  template:
    :
    spec:
      containers:
      - name: <the container name>
        resources:
          limits:
            # modify the memory size from 2Gi to 4Gi
            memory: 4Gi
You have to edit the YAML file and add this resources section under your containers part:
containers:
- image: nginx
  imagePullPolicy: Always
  name: default-mem-demo-ctr
  resources:
    limits:
      memory: 4Gi   # <-------------- this is the limit
    requests:
      memory: 2Gi   # <-------------- your application will use between 2Gi and up to 4Gi of memory
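As a hedged alternative to editing the YAML by hand, the controller's resources can also be patched from the CLI with oc set resources (the controller type and resource name below are placeholders):

$ oc set resources deploymentconfig/<the resource name> --limits=memory=4Gi --requests=memory=2Gi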

Google Kubernetes Engine (GKE) CPU/pod

On GKE I have created a cluster with 1 node of the n1-standard-1 instance type (vCPU: 1, RAM: 3.75 GB). The main purpose of the cluster is to host an application that has 3 pods (mysql, backend and frontend) in the default namespace. I can deploy mysql with no problem. After that, when I try to deploy the backend, it just remains in the "Pending" state, saying that not enough CPU is available. The message is very verbose.
So my question is: is it not possible to have 3 pods running using 1 CPU unit? What I want is to reduce cost and let those pods share the same CPU. Is it possible to achieve that? If yes, then how?
The "Pending" status by itself is not that informative. Could you please run
kubectl get pods
to get your pod name, and then run
kubectl describe pod {podname}
to get an idea of the actual error message.
By the way, you can run 3 pods on a single CPU.
Yes, it is possible to have multiple pods, or 3 in your case, on a single CPU unit.
If you want to manage your memory resources, consider putting constraints in place such as those described in the official docs. Below is an example.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
One would need more information regarding your deployment to answer your queries in more detail. Please consider providing it.
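To see how much of the single vCPU is actually left for your own pods (system pods on the node also reserve some CPU), you can check the node's allocatable and allocated resources; a short sketch with a placeholder node name:

kubectl get nodes
kubectl describe node <node-name>   # look at the "Allocatable" section and the "Allocated resources" summary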

Kubernetes: How to apply a Horizontal Pod Autoscaler (HPA) to an RC which contains multiple containers?

I have tried using an HPA for an RC that contains only one container and it works perfectly fine. But when I have an RC with multiple containers (i.e., a pod containing multiple containers), the HPA is unable to scrape the CPU utilization and shows the status as "unknown", as shown below. How can I successfully implement an HPA for an RC with multiple containers? The Kubernetes docs have no information regarding this, and I also didn't find any mention of it not being possible. Can anyone please share their experience or point of view with regard to this issue? Thanks a lot.
prometheus-watch-ssltargets-hpa ReplicationController/prometheus <unknown> / 70% 1 10 0 4s
Also for your reference, below is my HPA yaml file.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: prometheus-watch-ssltargets-hpa
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: prometheus
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
By all means it is possible to set an HPA for an RC/Deployment/ReplicaSet with multiple containers. In my case the problem was the format of the resource limits/requests. I figured out from this link that if any of the pod's containers does not have the relevant resource request set, the CPU utilization for the pod will not be defined and the HPA will not take any action for that metric. In my case I was using the resource specification below, which caused the error. (Note that this format works absolutely fine when used with deployments, replication controllers, etc.; it was only when I additionally wanted to implement an HPA that it caused the problem mentioned in the question.)
resources:
  limits:
    cpu: 2
    memory: 200M
  requests:
    cpu: 1
    memory: 100Mi
But after changing it as below (i.e., with a relevant resource request set that the HPA can understand), it works fine.
resources:
  limits:
    cpu: 2
    memory: 200Mi
  requests:
    cpu: 1
    memory: 100Mi
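To confirm the HPA can now read the metric, a quick check (names taken from the HPA manifest above):

kubectl get hpa -n monitoring                                        # TARGETS should show a percentage instead of <unknown>
kubectl describe hpa prometheus-watch-ssltargets-hpa -n monitoring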