Hi there, I was reviewing GKE Autopilot mode and noticed that in the cluster configuration Istio is disabled and I'm not able to change it. Installation via istioctl install also fails with the following error:
error installer failed to update resource with server-side apply for obj MutatingWebhookConfiguration//istio-sidecar-injector: mutatingwebhookconfigurations.admissionregistration.k8s.io "istio-sidecar-injector" is forbidden: User "something#example" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied
Am I correct that it's not possible to run Istio in GKE Autopilot mode?
TL;DR
It is not currently possible to run Istio in GKE Autopilot mode.
Conclusion
If you are using Autopilot, you don't need to manage your nodes: you don't have to worry about operations such as updating, scaling, or changing the operating system. However, Autopilot has a number of limitations.
Even if you try to install Istio with the istioctl install command, Istio will not be installed. You will see the following output:
This will install the Istio profile into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway
Pruning removed resources 2021-05-07T08:24:40.974253Z warn installer retrieving resources to prune type admissionregistration.k8s.io/v1beta1, Kind=MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "something#example" cannot list resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied not found
Error: failed to install manifests: errors occurred during operation
This command failed because, for sidecar injection, the installer tries to create a MutatingWebhookConfiguration called istio-sidecar-injector. This limitation is described in the Autopilot documentation (see the webhooks limitations link below).
According to the documentation, it is not possible to create mutating admission webhooks:
You cannot create custom mutating admission webhooks for Autopilot clusters
Since Istio uses mutating webhooks to inject its sidecars, it will probably not work, which is also consistent with the error you get.
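As a quick sanity check, you can see how injection is normally wired up and whether the webhook objects are even visible in your cluster; the namespace below is just an example:

# Sidecar injection is normally enabled per namespace via a label
kubectl label namespace default istio-injection=enabled

# List the mutating webhooks a standard Istio install would create
kubectl get mutatingwebhookconfigurations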
According to the documentation, this should be possible with GKE 1.21:
In GKE version 1.21.3-gke.900 and later, you can create validating and
mutating dynamic admission webhooks. However, Autopilot modifies the
admission webhooks objects to add a namespace selector which excludes the
resources in managed namespaces (currently, kube-system) from being
intercepted. Additionally, webhooks which specify one or more of following
resources (and any of their sub-resources) in the rules, will be rejected:
group: ""
resource: nodes
group: certificates.k8s.io
resource: certificatesigningrequests
group: authentication.k8s.io
resource: tokenreviews
https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#webhooks_limitations
Related
We want to install multiple Istio control planes on the same Kubernetes cluster.
We installed Istio like this:
istioctl install -f istioOperator.yaml
istioOperator.yaml is based on the output of
istioctl profile dump minimal
It is further modified by changing istioNamespace and metadata/namespace, and by restricting the namespaces in the mesh with a discoverySelector.
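The overlay ends up looking roughly like this (the revision name and mesh label here are illustrative placeholders, not our exact values):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system-la        # the second control plane's namespace
spec:
  profile: minimal
  revision: la                      # illustrative revision name
  meshConfig:
    discoverySelectors:             # restrict the mesh to matching namespaces
      - matchLabels:
          mesh: la                  # illustrative label
  values:
    global:
      istioNamespace: istio-system-la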
When installing the second Istio in the same way, an error like the one below occurred (istio-system-la is the second Istio's namespace).
✔ Istio core installed
- Processing resources for Istiod.
2022-07-13T05:32:17.577423Z error installer failed to update resource with server-side apply for obj EnvoyFilter/istio-system-la/stats-filter-1.11: Internal error occurred: failed calling webhook "rev.validation.istio.io": failed to call webhook: Post "https://istiod.istio-system-la.svc:443/validate?timeout=10s": service "istiod" not found
...
How can we avoid this error so that the Istio installations can coexist?
I'm running a GKE Autopilot cluster on 1.21.5-gke.1302 and I'm migrating some services from an older account to this cluster. One of my helm charts deploys a CronJob with the cluster-autoscaler.kubernetes.io/safe-to-evict=false annotation set on the Job spec. The Job will not run and gets the error Error creating: admission webhook "policycontrollerv2.common-webhooks.networking.gke.io" denied the request: GKE Policy Controller rejected the request because it violates one or more policies: {"[denied by autogke-node-affinity-selector-limitation]":["Auto GKE disallows use of cluster-autoscaler.kubernetes.io/safe-to-evict=false annotation on workloads"]}
This exact same CronJob and Job spec are running in our older cluster which is still on 1.20.10-gke.1600. Is this something that changed recently in GKE Autopilot? Is there a solution other than removing the annotation from the Job? The annotation is hard-coded in the helm template, and it does seem like something we'd want to have set on the Job.
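For context, the annotation sits on the Job's pod template, roughly like this (name, schedule, and image are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob                # placeholder name
spec:
  schedule: "0 * * * *"                # placeholder schedule
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            # the annotation that Autopilot 1.21 rejects
            cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: busybox           # placeholder image
              command: ["sh", "-c", "echo done"]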
I'm trying to install a Kubernetes operator into an OpenShift cluster using OLM 0.12.0. I ran oc create -f my-csv.yaml to install it. The CSV is created successfully, but nothing else happens.
In the olm operator logs I find this message:
level=info msg="couldn't ensure RBAC in target namespaces" csv=my-operator.v0.0.5 error="no owned roles found" id=d1h5n namespace=playground phase=Pending
I also notice that no InstallPlan is created to set up the accounts I thought it would be making.
What's wrong?
This message probably means that the RBAC assigned to your service account does not match the requirements specified by the CSV (ClusterServiceVersion).
In other words, when creating an operator you define a CSV, which specifies the requirements for creating your custom resource. Then, when the operator creates the resource, it checks whether the service account used fulfills these requirements.
You can check the Hazelcast Operator we created. It has some RBAC requirements, so before installing it you need to apply its RBAC file.
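As an illustration only (the actual rules come from the operator's own RBAC file, not from here), such a file typically pairs a ServiceAccount with a Role and RoleBinding granting the verbs the CSV declares:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: hazelcast-operator                    # illustrative name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hazelcast-operator
rules:
  - apiGroups: [""]                           # illustrative rules
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hazelcast-operator
subjects:
  - kind: ServiceAccount
    name: hazelcast-operator
roleRef:
  kind: Role
  name: hazelcast-operator
  apiGroup: rbac.authorization.k8s.io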
I'm trying to use Pulumi to create a Deployment with a linked Service in a Kubesail cluster. The Deployment is created fine but when Pulumi tries to create the Service an error is returned:
kubernetes:core:Service (service):
error: Plan apply failed: resource service was not successfully created by the Kubernetes API server : Could not create watcher for Endpoint objects associated with Service "service": unknown
The Service is correctly created in Kubesail, and the error makes it glaringly obvious that Pulumi's neat monitoring is being blocked, but the "unknown" part isn't so neat!
What might be denied on the Kubernetes cluster, such that Pulumi can't do its monitoring, that would differ between a Deployment and a Service? Is there a way to skip the watching that I missed in the docs to get me past this?
I dug a little into the Pulumi source code, found the resource kinds it uses for tracking, and ran kubectl auth can-i; lo and behold, watching an Endpoints object is currently denied, but watching ReplicaSets and the Services themselves is not.
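For anyone reproducing the check, it looks something like this (the namespace is a placeholder):

kubectl auth can-i watch endpoints --namespace default     # no  (denied here)
kubectl auth can-i watch replicasets --namespace default   # yes
kubectl auth can-i watch services --namespace default      # yes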
I've followed the instructions to create an EKS cluster in AWS using Terraform.
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html
I've also copied the output for connecting to the cluster to ~/.kube/config-eks. I've verified that this works, as I've been able to connect to the cluster and manually deploy containers. However, now I'm trying to use the Terraform Kubernetes provider to connect to the cluster but cannot seem to configure the provider properly.
I've configured the provider to use my kubectl configuration, but when attempting to push a simple ConfigMap, I get the following error:
configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"
I know that the provider is picking up part of the configuration, but I cannot seem to get it to authenticate. I suspect this is because EKS uses the Heptio authenticator, and I'm not sure if the Kubernetes Go client used by Terraform supports it. However, given that Terraform released their AWS EKS support when EKS went GA, I doubt they wouldn't also have updated their Kubernetes provider to work with it.
Is it possible to even do this now? Are there alternatives?
Exec auth was added here: https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04
This is what custom authentication plugins use, and it was published on Feb 7th.
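In kubeconfig terms, that exec-based auth looks roughly like the snippet below (the cluster name and user entry are placeholders; at the time, EKS shipped the authenticator as heptio-authenticator-aws, later renamed aws-iam-authenticator):

users:
  - name: eks-user                   # placeholder entry name
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        command: heptio-authenticator-aws
        args:
          - token
          - -i
          - my-eks-cluster           # placeholder cluster name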
Right now, Terraform doesn't support the new exec-based authentication provider, but there is an issue open with a workaround: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
That said, if I get some free time I will work on a PR.