We are deploying version 7.5 of Grafana via the loki-stack Helm chart on an AKS cluster.
The problem we are facing is the following: when we install the Helm chart, this is the error message we get on the ReplicaSet:
Warning FailedCreate 12s (x13 over 33s) replicaset-controller Error creating: admission webhook "validation.gatekeeper.sh" denied the request: [azurepolicy-k8sazurecontainernoprivilegees-d30fc4a5d3050e7c7bd6] Privilege escalation container is not allowed: grafana-sc-dashboard
[azurepolicy-k8sazurecontainernoprivilegees-d30fc4a5d3050e7c7bd6] Privilege escalation container is not allowed: grafana
[azurepolicy-k8sazurecontainernoprivilegees-d30fc4a5d3050e7c7bd6] Privilege escalation container is not allowed: grafana-sc-datasources
helm upgrade --install --namespace=mon-eval loki . --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=false
Any suggestions on how we could fix this problem?
The cluster admin activated the Azure Policy add-on (Gatekeeper) on this cluster. The policy k8sazurecontainernoprivilegees is blocking the containers grafana, grafana-sc-dashboard and grafana-sc-datasources.
You have some options now:
Ask the Admin to exclude the namespace from the policies (not recommended)
Set allowPrivilegeEscalation: false inside the values.yaml in the securityContext sections of the affected containers (see the sketch below).
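For example, assuming the loki-stack chart exposes the bundled Grafana chart's values under a grafana key (the exact key paths vary between chart versions, so treat this as a sketch and check the chart's values.yaml), the override could look roughly like:
grafana:
  securityContext:
    allowPrivilegeEscalation: false      # main grafana container
  sidecar:
    securityContext:
      allowPrivilegeEscalation: false    # grafana-sc-dashboard / grafana-sc-datasources sidecars
The same values can also be passed with --set on the helm upgrade --install command line instead of editing values.yaml.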
Related
I have an EKS cluster for my university project and I want to set up Prometheus on the cluster. To do this I am using Helm with the following commands (see this tutorial: https://archive.eksworkshop.com/intermediate/240_monitoring/deploy-prometheus/):
kubectl create namespace prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2"
When I check the status of the prometheus pods, the alertmanager and server seem to be stuck in a Pending state:
When I describe the prometheus-alertmanager-0 pod I see the following VolumeBinding error:
When I describe the prometheus-server-5d858bd4bd-6xmws pod I see the following VolumeBinding error:
I can also see there are 2 pvcs in Pending state:
When I describe the prometheus-server pvc, I can see its waiting for a volume to be created:
I'm familiar with Kubernetes basics, but PVCs are not something I have used before. Is the solution here to create a "volume", and if so, how do I do that? Would that solve the issue, or am I way off the mark?
Should I try to install Prometheus in a different way?
Any help on this is greatly appreciated.
Note: Although similar, this is not a duplicate of Prometheus server in pending state after installation using Helm. For one, the errors highlighted there are different; also, other manual steps such as creating volumes were performed there (which I have not done). Finally, I am following the specific tutorial referenced, and I am also asking whether I should set up Prometheus a different way if there is a simpler one.
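A quick way to confirm whether the cluster actually has a gp2 StorageClass, and to see why the claims are stuck, is for example:
kubectl get storageclass
kubectl get pvc -n prometheus
kubectl describe pvc prometheus-server -n prometheus   # the Events section shows the provisioning error
If gp2 is missing, or the EBS provisioner behind it is not working, the claims will stay Pending regardless of how the chart is installed.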
I'm trying to install telepresence into an EKS cluster that has PodSecurityPolicies. I've gotten the traffic manager installed by running helm on the traffic manager chart:
helm install traffic-manager -n ambassador datawire/telepresence --create-namespace
After that I modify the traffic-manager-ambassador ClusterRole to use one of the cluster's PodSecurityPolicies. Installation of the traffic manager eventually succeeds after I do this. However, the installation of the uninstall-agent job fails:
Error creating: pods "uninstall-agents-" is forbidden: PodSecurityPolicy: unable to admit pod: []
My question is: what Role or ClusterRole do I have to modify to allow Helm to uninstall telepresence? Or how do I figure out which service account is being used to try to install the pod, so I can give it access to a PodSecurityPolicy?
I made some fixes at https://github.com/ddl-pjohnson/telepresence/pull/1/files to make it easier to add additional rules and to run the helm hook as the correct user.
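If you need to do this by hand, one way to find out which service account the hook job runs as (the job name here is taken from the error message above) is:
kubectl -n ambassador get job uninstall-agents -o jsonpath='{.spec.template.spec.serviceAccountName}'
Then a Role/RoleBinding along these lines grants that account use of an existing PSP (the names below are placeholders, not taken from the chart):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: telepresence-hook-psp
  namespace: ambassador
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["eks.privileged"]   # placeholder: use one of your cluster's PSPs
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: telepresence-hook-psp
  namespace: ambassador
subjects:
- kind: ServiceAccount
  name: <service-account-from-the-command-above>   # placeholder
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: telepresence-hook-psp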
helm install --name my-rabbitserver stable/rabbitmq --namespace rabbit
Error: release my-rabbitserver failed: namespaces "rabbit" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "rabbit"
I have tried with (and without) a rabbit namespace created before the install attempt.
I am using Helm 2.16.9, so I need to qualify the name of my installation with --name.
I am using this against a Google Cloud Kubernetes cluster.
It looks as though the Helm Tiller pod did not have sufficient privileges.
I found this similar issue:
https://support.sumologic.com/hc/en-us/articles/360037704393-Kubernetes-Helm-install-fails-with-Error-namespaces-sumologic-is-forbidden-User-system-serviceaccount-kube-system-default-cannot-get-resource-namespaces-in-API-group-in-the-namespace-sumologic-
Basically I had to stop the Tiller deployment, set up a Tiller ServiceAccount YAML and apply it to give Tiller access to kube-system (roughly as sketched below), and then run helm init again with the new service account.
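For reference, the Tiller service-account setup described in the Helm 2 RBAC docs is roughly the following (cluster-admin is broad; a narrower role can be substituted):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Apply it and re-initialize:
kubectl apply -f tiller-rbac.yaml   # file name is arbitrary
helm init --service-account tiller --upgrade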
The Helm rabbitmq install then appears to work as advertised.
I thought Helm was supposed to make life easier, but it still has its own limitations and needs additional YAML files to get it to work as advertised.
I have installed Istio in my AKS cluster and enabled it for a namespace called database, as below.
kubectl label namespace database istio-injection=enabled
I'm going to install the postgres database into the database namespace using Helm 3.
helm install pg-db bitnami/postgresql-ha --version=2.0.1 -n database
A few seconds in, the database starts to fail because the database pod is not considered healthy.
When I disable injecting the sidecar into the database as below, it doesn't restart. How can I run this Helm chart without disabling the sidecar?
podAnnotations:
  sidecar.istio.io/inject: "false"
listing pods
pg-db-postgresql-ha-postgresql-1 logs
pg-db-postgresql-ha-pgpool-5475f499b8-7z4ph logs
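One thing that is often suggested in this situation (an assumption here, not verified against this chart) is to keep injection enabled but tell Istio to hold application startup until the sidecar proxy is ready, so the database does not fail its first connections or probes before the proxy is up:
podAnnotations:
  proxy.istio.io/config: |
    holdApplicationUntilProxyStarts: true
Whether that is enough depends on why the pod is being marked unhealthy, which the pod logs above should show.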
I'm working on Kubernetes. I tried DigitalOcean's Kubernetes, which is very easy to install and access, but how can I install metrics-server in it? How can I autoscale in Kubernetes on DO?
Please reply as soon as possible.
The Metrics Server can be installed to your cluster with Helm:
https://github.com/helm/charts/tree/master/stable/metrics-server
helm init
helm upgrade --install metrics-server --namespace=kube-system stable/metrics-server
If your cluster has RBAC enabled, see the more comprehensive instructions for installing Helm into your cluster:
https://github.com/helm/helm/blob/master/docs/rbac.md
If you wish to deploy without Helm, the manifests are available from the GitHub repository:
https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B
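Once metrics-server is running and reporting resource usage, autoscaling itself is done with a HorizontalPodAutoscaler; for example (my-app is a placeholder deployment name):
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=5   # scale between 1 and 5 replicas, targeting 50% CPU
kubectl get hpa   # check that the HPA can read metrics from metrics-server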