unable to create Istio bookinfo-gateway.yaml Gateway - kubernetes

While running this command:
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
I am getting this error
Error from server (NotFound): error when deleting
"samples/bookinfo/networking/bookinfo-gateway.yaml": the server could
not find the requested resource (delete gatewaies.networking.istio.io
bookinfo-gateway)
Can someone please tell me how I can make the server accept the gatewaies plural, or how to fix this error?

Upgrading to the latest kubectl solved the issue.
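For anyone hitting the same thing: the bogus "gatewaies" plural appears to come from an old kubectl guessing the plural name instead of reading it from server discovery. A quick way to confirm the version skew and see the plural the server actually serves (assuming the Istio CRDs are installed):
# Compare client and server versions
kubectl version --short
# Ask the server which plural it registers for Istio's networking resources
kubectl api-resources --api-group=networking.istio.io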

Related

Why does kubectl give an "unable to list cronjobs" error while trying to list pods?

I'm having a weird issue at times. When I try to list pods in a Kubernetes cluster, it gives me the same exact error, which has nothing to do with cronjobs. The issue gets fixed if I restart the terminal (sometimes I have to restart the computer). When I'm having this issue, other computers I checked don't have any problems, so I believe something is wrong on my end. Does anyone have any idea why I keep running into this?
➜ ~ kubectl get pods
Error from server (NotFound): Unable to list "tap.linkerd.io/v1alpha1, Resource=cronjobs": the server could not find the requested resource (get cronjobs.tap.linkerd.io)
Edit:
I can list deployments and cronjobs without any issues; this happens only when I run get pods. It also fixes itself if I wait a while.
This may be an indication of a version mismatch between your kubectl client and your server, not anything specific to Linkerd. You can confirm with kubectl version --short whether or not that is the case.
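For reference, a minimal way to run the check suggested above, plus a look at whether the server really serves the resource named in the error (the grep pattern is just the API group from the error message):
# Compare client and server versions
kubectl version --short
# See whether the server actually registers anything under tap.linkerd.io
kubectl api-resources | grep tap.linkerd.io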

kubectl get --raw /metrics returns Error from server (NotFound)

I am trying to use Prometheus to monitor my EKS Fargate cluster (k8s ver.: 1.23). So far I have followed the procedures from https://aws.amazon.com/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/ and https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/. Right now I only receive the metrics from kube-state-metrics, but fail to get the resource usage and performance data.
Besides, I get "Error from server (NotFound): the server could not find the requested resource" when I run kubectl get --raw /metrics and kubectl get --raw /api/v1/nodes/${NODE_NAME}/proxy/metrics/cadvisor.
Does anyone know why I get Error from server (NotFound)? How can I fix it?
Thank you!
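Not an answer, but to reproduce the second command: one way to populate ${NODE_NAME} before the raw call (the jsonpath and the choice of the first node are assumptions, not from the question):
# Grab the first node name the API server reports (assumption: any node will do)
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/metrics/cadvisor"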

Kubeflow / Istio - problem with admission-webhook-bootstrap and istio-sidecar-injector

I have an installation of KF 1.0.2 on GCP that used to work fine. Recently, two pods went into a CrashLoopBackOff state:
admission-webhook-bootstrap-stateful-set, with the error message:
Error from server (NotFound):
mutatingwebhookconfigurations.admissionregistration.k8s.io
"admission-webhook-mutating-webhook-configuration" not found
istio-sidecar-injector, with the error message:
failed to start patch cert loop
mutatingwebhookconfigurations.admissionregistration.k8s.io
"istio-sidecar-injector" not found
I deleted the webhook configurations as shown below, but without success:
kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --all
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io --all
Any ideas on how to fix it?
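One way to see what is actually registered after those deletes; the two names to check come straight from the error messages above:
# List all remaining webhook configurations
kubectl get mutatingwebhookconfigurations
# Check for the two entries the crashing pods expect
kubectl get mutatingwebhookconfiguration admission-webhook-mutating-webhook-configuration istio-sidecar-injector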

Why would kubectl logs return Authorization error?

I am trying to get logs from a running pod using kubectl logs grafana-6bfd846fbd-nbv8r,
and I am getting the following output:
Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy)
I tried to figure out why I would not have this specific authorization even though I can manage everything with this user, but I have no clue. The weirdest part is that when I run kubectl auth can-i get pod/logs I get:
yes
After a few hours of going through ClusterRoles and ClusterRoleBindings, I am stuck and do not know what to do to get authorized. Thanks for your help!
The failure is kube-apiserver trying to access the kubelet; it is not related to your user. This indicates your core system RBAC rules might be corrupted. Check whether your installer or K8s distro has a way to validate or repair them (most don't), or create a new cluster and compare its rules against yours.
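As a sketch of what repairing can look like: the API server typically reaches the kubelet through an RBAC grant roughly like the one below. This is modeled on the kubernetes-the-hard-way manifests, not taken from the asker's cluster; the role name and rules are assumptions, and the user name comes from the error message above.
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups: [""]
  resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver   # from the error: user=kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
EOF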

Kubernetes cluster is suddenly down

Yesterday, my Kubernetes cluster suddenly went down.
I tried to investigate as follows, but I am not sure what the reason was:
Unable to access the Kube Dashboard; it returns HTTP ERROR 502
Unable to access deployed apps on the cluster; they also return a 502 error
Cannot use the kubectl command; it shows the message: "Unable to connect
to the server: x509: certificate has expired or is not yet valid"
With this error, I googled and found an article, but I'm not sure whether it is correct.
Can you please advise?
Thank you so much.
Environment:
Kubernetes 1.5
Kube-aws
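To confirm which certificate actually expired, here is a hedged check to run from a controller node; the certificate path is an assumption (kube-aws-era clusters typically keep certs under /etc/kubernetes/ssl/):
# Print the expiry date of the API server's serving certificate
openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -enddate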