I am installing the nginx ingress controller through its Helm chart and the pods are not coming up. There seems to be a permission issue.
Chart link - https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx
I am using the latest version, 4.2.1.
I did the debugging steps described here: https://github.com/kubernetes/ingress-nginx/issues/4061
I also tried running as the root user with runAsUser: 0.
I think I got this issue after a cluster upgrade from 1.19 to 1.22. Previously it was working fine.
Any suggestion on what I need to do to fix this?
unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied
You clearly have a permission problem. Looking at the chart you specified, there are multiple runAsUser values for the different components:
controller.image.runAsUser: 101
controller.admissionWebhooks.patch.runAsUser: 2000
defaultBackend.image.runAsUser: 65534
I'm not sure why these are different, but if possible:
Try deleting your existing release and doing a fresh install.
If the issue still persists, check the deployment / pod events and see if the cluster alerts you about something.
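For example, a clean reinstall could look roughly like this (release name and namespace are assumptions, adjust them to your setup):
# remove the existing release (assumed release name and namespace)
helm uninstall ingress-nginx -n ingress-nginx
# re-add the repo and reinstall the chart at the version from the question
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --version 4.2.1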
Also worth noting, there were breaking changes to the Ingress resource in 1.22.
Check these links from the official release notes.
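In particular, starting with 1.22 the Ingress API is served only as networking.k8s.io/v1, so manifests written against the removed beta APIs need fields such as pathType and (usually) ingressClassName. A minimal sketch of the v1 form, with placeholder names:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # placeholder name
spec:
  ingressClassName: nginx
  rules:
    - host: example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix     # required in v1
            backend:
              service:
                name: example-service   # placeholder backend service
                port:
                  number: 80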
That issue occurred because not all of the worker nodes had been properly upgraded, so the ingress controller couldn't set itself up. I installed it on a particular node whose version matched the cluster's, and then it worked properly.
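If you need the same workaround, one way to pin the controller to a specific node is the chart's controller.nodeSelector value; a rough sketch (the node name is a placeholder, use a label that actually exists on your upgraded node):
# values.yaml snippet (node name is a placeholder)
controller:
  nodeSelector:
    kubernetes.io/hostname: upgraded-node-1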
The operator is https://operatorhub.io/operator/keycloak-operator version 11.0.0.
The cluster is Kubernetes version 1.18.12.
I was able to follow the steps from OperatorHub.io to install the Operator Lifecycle Manager and the Keycloak "OperatorGroup" and "Subscription".
It took much longer than I was expecting (maybe 20 minutes?), but eventually the corresponding "ClusterServiceVersion" was created.
However, now when I try to use it by creating the following resource, it doesn't seem to be doing anything at all:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  namespace: keycloak
  labels:
    app: sso
spec:
  instances: 1
  externalAccess:
    enabled: true
  extensions:
    - https://github.com/aerogear/keycloak-metrics-spi/releases/download/1.0.4/keycloak-metrics-spi-1.0.4.jar
It accepts the new resource, so I know the CRD is in place. The documentation states that it should create a stateful set, an ingress, and more, but it just doesn't seem to create anything.
I checked the cluster logs and this is the error that is jumping out to me:
olm-operator ERROR controllers.operator Could not update Operator status {"request": "/keycloak-operator.my-keycloak-operator", "error": "Operation cannot be fulfilled on operators.operators.coreos.com \"keycloak-operator.my-keycloak-operator\": the object has been modified; please apply your changes to the latest version and try again"}
I have quite a bit of experience with plain Kubernetes, but I'm brand new to operators, so I'm really not sure where to look next with regard to what might be going wrong.
Any hints/suggestions/explanations?
UPDATE: I was creating the keycloak resource in a namespace OTHER than the one I installed the operator into. Since it allowed me to create the custom resource (Kind: Keycloak) in that namespace, I thought this was supported. However, when I created the keycloak resource in the same namespace where the operator was installed (my-keycloak-operator), it actually tried to do something. It's still failing to bring up the pod, mind you, but at least it's trying to do something.
Will leave this question open for a bit to see if the "Could not update Operator status" is something I should be concerned about or not...
It looks like the operator and/or the components it wants to bring up cannot do a write (POST/PUT) to the kube-apiserver.
From what you describe, it appears that the first time, when you installed the operator in a different namespace, it just didn't have permissions to bring up anything at all. The second time, when you installed it in the right namespace, it looks like the operator was able to talk to the kube-apiserver, but the components it brings up (Keycloak, etc.) are not able to.
I would check the logs on the kube-apiserver (control plane) to see if you have any unauthorized requests, and also check the log files of the components (pods, deployments, etc.) that the operator is trying to bring up.
If you see unauthorized requests you may have to manually update the RBAC rules. Finally, I would check with IBM Cloud to see what specific permissions its Kubernetes control plane has that could be preventing applications from talking to it (the kube-apiserver).
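A few commands that may help with that digging (namespace, deployment, and service account names are assumptions based on the question; adjust as needed):
# events and operator logs in the namespace where the operator was installed
kubectl get events -n my-keycloak-operator --sort-by=.lastTimestamp
kubectl logs -n my-keycloak-operator deployment/keycloak-operator
# check whether the operator's service account is allowed to create what it needs
kubectl auth can-i create statefulsets -n my-keycloak-operator --as=system:serviceaccount:my-keycloak-operator:keycloak-operator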
✌️
Helm was working properly and I had been using it this morning for a couple of hours. Then it suddenly stopped working, and the only error I get is Error: create: failed to create: the server responded with the status code 413 but did not return more information.
Any ideas?
OK, I found it. Without paying much attention to where I put the file, I had been saving some log data from a couple of Kubernetes pods in the same directory as my template files. Once I deleted that log file, which was apparently quite large, the error went away and my helm install command worked again. It seems Helm cares about the size of every file in the chart directory, even ones that have nothing to do with your installation. That was my case; yours could be different, but I hope this post helps. It was a weird one.
I guess the same thing could happen if your charts grew too large.
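If you can't move such files out of the chart directory, a .helmignore file in the chart root should keep them out of the package; a minimal sketch (the patterns are just what would have matched my stray files):
# .helmignore — exclude bulky files that aren't part of the chart
*.log
logs/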
Cheers.
I had this error message when trying to install kube-prometheus-stack.
I have an nginx load balancer in front of kube-apiserver for high availability.
Note: don't confuse this with the nginx ingress controller.
By default nginx has client_max_body_size 1m.
To solve the problem I had to increase this setting by editing /etc/nginx/nginx.conf on my Ubuntu VM:
http {
    client_max_body_size 10m;
}
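After changing it, nginx has to pick up the new config; on a typical systemd-managed Ubuntu install that would be something like (assuming nginx runs under systemd):
# validate the config, then reload nginx
sudo nginx -t
sudo systemctl reload nginx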
And if you're using Rancher you need to edit the nginx-ingress-controller's configMap from the local cluster where Rancher is installed and add:
data:
  proxy-body-size: 10m
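On an RKE/Rancher-provisioned local cluster that ConfigMap is usually called nginx-configuration in the ingress-nginx namespace (name and namespace are assumptions; verify in your cluster):
# edit the ingress controller's ConfigMap and add the proxy-body-size key shown above
kubectl -n ingress-nginx edit configmap nginx-configuration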
I am trying to install Greenplum on GKE using the directions here.
I make it to step 12, but my operator pod is failing because it cannot access the secret:
kubectl logs -l app=greenplum-operator -n greenplum
{"level":"INFO","ts":"2020-03-10T18:20:50.803Z","logger":"operator-setup","msg":"Go Info","Version":"go1.13.7","GOOS":"linux","GOARCH":"amd64"}
{"level":"INFO","ts":"2020-03-10T18:20:50.803Z","logger":"operator-setup","msg":"creating operator"}
W0310 18:20:50.803978 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0310 18:20:50.804036 1 client_config.go:546] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
It looks like a permissions issue pulling the image, but the image pull test earlier in the instructions succeeded:
job.batch/greenplum-operator-fetch-test created
GREENPLUM-OPERATOR TEST OK
job.batch "greenplum-operator-fetch-test" deleted
Has anyone else run into this issue?
There's a bug in the current documentation. You most likely did everything right. However, creating a GKE cluster with "Enable Kubernetes alpha features in this cluster", as listed on the prerequisites page (https://greenplum-kubernetes.docs.pivotal.io/1-12/prepare-gke.html), is no longer necessary. In fact, it's currently causing the exact issue you seem to be having. Try creating a GKE cluster following all of the documentation, except make sure NOT to enable GKE "alpha features".
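For example, a regular (non-alpha) cluster could be created roughly like this (cluster name, zone, and sizing are placeholders):
# create a standard GKE cluster; do NOT pass --enable-kubernetes-alpha
gcloud container clusters create greenplum-cluster --zone us-central1-a --machine-type n1-standard-4 --num-nodes 3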
I'm running Traefik on a Kubernetes cluster to manage Ingress, which has been running ok for a long time.
I recently implemented Cluster Autoscaling, which works fine except that on one Node (newly created by the Autoscaler) Traefik won't start. It sits in CrashLoopBackOff, and when I check the Pod's logs I get: [date] [time] command traefik error: field not found, node: redirect.
Google found no relevant results, and the error itself is not very descriptive, so I'm not sure where to look.
My best guess is that it has something to do with the RedirectRegex Middleware configured in Traefik's config file:
[entryPoints.http.redirect]
regex = "^http://(.+)(:80)?/(.*)"
replacement = "https://$1/$3"
Traefik actually still works - I can still access all of my apps from their URLs in my browser, even those on the Node with the dead Traefik Pod.
The other Traefik Pods on other Nodes still run happily, and the Nodes are (at least in theory) identical.
After further googling, I found this on Reddit. It turns out Traefik released v2.0 a few days ago, and it is not backwards compatible.
Only this Pod had the issue because it was the only one for which a new (v2.0) image had been pulled (it being on the only recently created Node).
I reverted to v1.7 until I have time to fix it properly. I had to update the DaemonSet to use v1.7, then kill the Pod so it would be recreated from the old image.
The devs have a Migration Guide that looks like it may help.
"redirect" is gone but now there is "RedirectScheme" and "RedirectRegex" as a new concept of "Middlewares".
It looks like they are moving to a pipeline approach, so you can define a chain of "middlewares" to apply to an "entrypoint" to decide how to direct it and what to add/remove/modify on packets in that chain. "backends" are now "providers", and they have a clearer, modular concept of configuration. It looks like it will offer better organization than earlier versions.
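For reference, the v2 counterpart of the redirect above would presumably be a RedirectRegex middleware; a rough sketch in the v2 file-provider syntax (the middleware name is a placeholder):
# Traefik v2 dynamic configuration sketch
[http.middlewares.https-redirect.redirectRegex]
  regex = "^http://(.+)(:80)?/(.*)"
  replacement = "https://${1}/${3}"
  permanent = true
The middleware then has to be referenced from a router's middlewares list, which is the part the migration guide walks through.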
I tried to use the instructions from this link https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md but I was not able to install it. Specifically, I don't know what this instruction means: "Ensure that kubecfg.sh is exported." I don't even know where to find it; I ran sudo find / -name "kubecfg.sh" and got no results.
Moving on to the next step, "kubectl create -f deploy/kube-config/influxdb/": when I ran this it said kube-system not found. I am using the latest version of Kubernetes, version 1.0.1.
These instructions seem broken. Can anyone provide working instructions on how to install this? I have a Kubernetes cluster up and running, I am able to create and delete pods and so on, and default is the only namespace I have when I do kubectl get pods,svc,rc --all-namespaces.
Changing kube-system to default in the yaml files gets me one step further, but then I am unable to access the UI and so on, so installing into kube-system makes more sense. However, I don't know how to do that, and any instructions on installing influxdb and grafana and getting them up and running would be very helpful.
I am using latest version of kubernetes version 1.0.1
FYI, the latest version is v1.2.3.
... it says kube-system not found
You can create the kube-system namespace by running
kubectl create namespace kube-system.
Hopefully once you've created the kube-system namespace the rest of the instructions will work.
We had the same issue deploying grafana/influxdb, so we dug into it:
Per https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md, since we don't have an external load balancer, we changed the service type on the grafana service to NodePort, which made it accessible at port 30397.
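If it helps, that service type change can be done with a one-line patch (the service and namespace names are what the heapster manifests use as far as I recall; verify in your cluster):
# switch the Grafana service to NodePort
kubectl patch service monitoring-grafana -n kube-system -p '{"spec": {"type": "NodePort"}}'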
Then we looked at the controller configuration here: https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/influxdb-grafana-controller.yaml and noticed the comment about using the api-server proxy, which we wouldn't be doing by exposing the NodePort, so we deleted the GF_SERVER_ROOT_URL environment variable from the config. At that point Grafana at least seemed to be running, but it looked like it was having trouble reaching influxdb.
We then changed the datasource to use localhost instead of monitoring-influxdb and were able to connect. We're getting data on cluster usage now, though individual pod data doesn't seem to be working.