We want to install multiple Istio control planes on the same Kubernetes cluster.
We installed Istio with:
istioctl install -f istioOperator.yaml
istioOperator.yaml is based on the output of:
istioctl profile dump minimal
It is further modified by changing istioNamespace and metadata/namespace, and by restricting the namespaces in the mesh with discoverySelectors.
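For illustration, the modified operator file might look roughly like this. This is only a sketch: the profile and the three changes are taken from the description above, while the selector label (mesh: la) is a made-up example.

```yaml
# Sketch of the second control plane's IstioOperator file, based on
# "istioctl profile dump minimal". The discoverySelectors label
# (mesh: la) is a hypothetical example.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system-la            # metadata/namespace changed
spec:
  profile: minimal
  values:
    global:
      istioNamespace: istio-system-la   # istioNamespace changed
  meshConfig:
    discoverySelectors:                 # restrict which namespaces join the mesh
    - matchLabels:
        mesh: la
```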
When installing the second Istio in the same way, the following error occurred (istio-system-la is the second Istio's namespace):
✔ Istio core installed
- Processing resources for Istiod.
2022-07-13T05:32:17.577423Z error installer failed to update resource with server-side apply for obj EnvoyFilter/istio-system-la/stats-filter-1.11: Internal error occurred: failed calling webhook "rev.validation.istio.io": failed to call webhook: Post "https://istiod.istio-system-la.svc:443/validate?timeout=10s": service "istiod" not found
...
How can we avoid this error so that the two Istios can coexist?
Related
Hi there, I was reviewing GKE Autopilot mode and noticed that in the cluster configuration Istio is disabled and I'm not able to change it. Also, installation via istioctl install fails with the following error:
error installer failed to update resource with server-side apply for obj MutatingWebhookConfiguration//istio-sidecar-injector: mutatingwebhookconfigurations.admissionregistration.k8s.io "istio-sidecar-injector" is forbidden: User "something#example" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied
Am I correct, or is it in fact possible to run Istio in GKE Autopilot mode?
TL;DR
It is not possible at this moment to run Istio in GKE Autopilot mode.
Conclusion
If you are using Autopilot, you don't need to manage your nodes. You don't have to worry about operations such as updating, scaling, or changing the operating system. However, Autopilot has a number of limitations.
Even if you try to install Istio with the command istioctl install, it will not be installed. You will see the following message:
This will install the Istio profile into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway
Pruning removed resources 2021-05-07T08:24:40.974253Z warn installer retrieving resources to prune type admissionregistration.k8s.io/v1beta1, Kind=MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "something#example" cannot list resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied not found
Error: failed to install manifests: errors occurred during operation
This command failed because, for sidecar injection, the installer tries to create a MutatingWebhookConfiguration called istio-sidecar-injector. This limitation is mentioned here.
For more information you can also read this page.
According to the documentation, it is not possible to create mutating admission webhooks:
You cannot create custom mutating admission webhooks for Autopilot clusters
Since Istio uses mutating webhooks to inject its sidecars, it will probably not work; this is also consistent with the error you got.
According to the documentation this should be possible with GKE 1.21:
In GKE version 1.21.3-gke.900 and later, you can create validating and
mutating dynamic admission webhooks. However, Autopilot modifies the
admission webhooks objects to add a namespace selector which excludes the
resources in managed namespaces (currently, kube-system) from being
intercepted. Additionally, webhooks which specify one or more of following
resources (and any of their sub-resources) in the rules, will be rejected:
group: ""
resource: nodes
group: certificates.k8s.io
resource: certificatesigningrequests
group: authentication.k8s.io
resource: tokenreviews
https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#webhooks_limitations
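As an illustration of the quoted limitation, a webhook whose rules target one of the restricted resources would be rejected on Autopilot even on 1.21.3-gke.900 and later. Every name in this sketch (webhook, service, path) is hypothetical:

```yaml
# Hypothetical webhook that Autopilot would reject, because its rules
# target "nodes" in the core ("") API group, one of the restricted
# resources listed above.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-node-webhook          # made-up name
webhooks:
- name: nodes.example.local           # made-up name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: example-svc               # made-up service
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]                   # core group, from the list above
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["nodes"]              # restricted resource
```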
I am attempting to install Maistra on top of an Origin (OKD) v3.11 cluster, following this guide:
https://medium.com/#jakub.jozwicki/ocp-part-3-installing-istio-1d9f37665d3b
I have attempted installs via v0.7 and v0.12 of the Maistra origin-ansible branch and continue to get an error on my istio-operator pod: "Failed create pod sandbox.." / "NetworkPlugin cni failed to setup pod, network: OpenShift SDN network process is not (yet?) available". Any ideas how to resolve this error? I have done extensive searching, but things seem to be either not applicable or outdated.
I'm trying to install Ceph using Helm on Kubernetes, following this tutorial:
install ceph
The problem is probably that I installed the Trow registry earlier, because as soon as I run the Helm step
helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml
I get this error in ceph namespace
Error creating: Internal error occurred: failed calling webhook "validator.trow.io": Post https://trow.kube-public.svc:443/validate-image?timeout=30s: dial tcp 10.102.137.73:443: connect: connection refused
How can I solve this?
Apparently you are right in your presumption; I have a few concerns about this issue.
The Trow registry manager controls the images that run in the cluster by implementing admission webhooks that validate every request before an image is pulled, and as far as I can see Docker Hub images are not accepted by default.
The default policy will allow all images local to the Trow registry to
be used, plus Kubernetes system images and the Trow images themselves.
All other images are denied by default, including Docker Hub images.
Since the Trow installation procedure may require you to distribute and approve a certificate in order to establish a secure HTTPS connection from the target node to the Trow server, I would suggest checking that the certificate is present on the node where you run the ceph-helm chart, as described in the Trow documentation.
The other option is to run the Trow registry manager with TLS disabled, over plain HTTP, as described in the installation instructions.
This command should help get it cleaned up:
kubectl delete ValidatingWebhookConfiguration -n rook-ceph rook-ceph-webhook
I get the following error when installing Istio on GKE:
kubernetes ver = 1.11.2-gke.18
Istio ver = 1.0.4
Kubectl = latest from repo google
Error from server (NotFound): error when creating
"`install/kubernetes/istio-demo-auth.yaml`":
the server could not find the requested resource
(post `gatewaies.networking.istio.io`)
I have tried to follow the tutorial on GCP:
https://cloud.google.com/kubernetes-engine/docs/tutorials/installing-istio
You are missing the CustomResourceDefinitions required by Istio, hence this error. You need to run the following command from the Istio folder:
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
This will create all the CRDs, such as virtualservices, destinationrules, etc.
Try following the official Istio documentation to install it on GKE:
https://istio.io/docs/setup/kubernetes/quick-start-gke-dm/
I am also getting this issue when installing a custom Istio helm chart:
[tiller] 2019/11/15 21:50:52 failed install perform step: release test failed: the server could not find the requested resource (post gatewaies.networking.istio.io)
I've confirmed the Istio CRDs are installed properly. Note how the installed Gateway CRD explicitly notes the accepted plural name:
status:
  acceptedNames:
    categories:
    - istio-io
    - networking-istio-io
    kind: Gateway
    listKind: GatewayList
    plural: gateways
    shortNames:
    - gw
    singular: gateway
I created an issue on Helm to see if that is the culprit; otherwise, I can open an issue on Istio. I'm very confused about where this issue could be coming from.
**Note:** The type of the Gateway resource is correct:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
Istio works by defining a series of CRDs (Custom Resource Definitions). For Istio to work, you first need to run a command like this:
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
For my version (Istio v1.2.0), the command is:
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
But as I follow the instructions from the documentation, I still get the annoying message:
Error from server (NotFound): error when creating "samples/bookinfo/networking/bookinfo-gateway.yaml": the server could not find the requested resource (post gatewaies.networking.istio.io)
As the hint implies, the requested resource "gatewaies.networking.istio.io" cannot be found, so I list the CRDs:
kubectl get crd
and I get a list like this (screenshot omitted).
Inspecting it, I find something wrong.
The message issued by kubectl is (post gatewaies.networking.istio.io), but the CRD listed is gateways.networking.istio.io. Then everything is clear: the kubectl CLI issued the wrong plural for the word "gateway". The correct form is gateways, not gatewaies, so to satisfy the form the command requests, the CRD must be changed.
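The wrong plural looks like the result of a naive English pluralization rule (any word ending in "y" becomes "ies"), applied even though "gateway" ends in a vowel plus "y". A small sketch of that assumed rule:

```shell
# Naive pluralization of the kind the client seems to apply: any
# trailing "y" becomes "ies", even after a vowel.
naive_plural() {
  case "$1" in
    *y) printf '%sies\n' "${1%y}" ;;
    *)  printf '%ss\n' "$1" ;;
  esac
}

naive_plural gateway   # prints "gatewaies"; the correct plural is "gateways"
naive_plural pod       # prints "pods"
```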
So I edit this file:
vim install/kubernetes/helm/istio-init/files/crd-10.yaml
changing the name from "gateways.networking.istio.io" to "gatewaies.networking.istio.io", and everything is OK now.
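For reference, the relevant part of crd-10.yaml after the edit would look roughly like this. This is a sketch rather than the full file, and note it is a workaround for the client's wrong plural, not a proper fix:

```yaml
# Sketch of the Gateway CRD in crd-10.yaml after the rename. The
# metadata name must be <plural>.<group>, so the plural under
# spec.names changes along with it.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: gatewaies.networking.istio.io   # was gateways.networking.istio.io
spec:
  group: networking.istio.io
  names:
    kind: Gateway
    listKind: GatewayList
    plural: gatewaies                   # kept consistent with the new name
    singular: gateway
```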
I'm using Kubernetes 1.11 on Digital Ocean, and when I try to use kubectl top node I get this error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
But as stated in the docs, Heapster is deprecated and no longer required as of Kubernetes 1.10.
If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.
Please note that to install the metrics server on Kubernetes, you should first clone it by typing:
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
Then you should install it, without going into the created folder and without naming any specific YAML file, only via:
kubectl create -f kubernetes-metrics-server/
This way, all services and components are installed correctly and you can run:
kubectl top nodes
or
kubectl top pods
and get the correct result.
For kubectl top node/pod to work, you need either Heapster or the metrics server installed on your cluster.
As the warning says, Heapster is being deprecated, so the recommended choice now is the metrics server.
So follow the directions here to install the metrics server.