Istio deployed but doesn't show in the GKE UI - kubernetes

I have added Istio to an existing GKE cluster. This cluster was initially deployed from the GKE UI with Istio "disabled".
I deployed Istio from the CLI using kubectl, and everything works fine (Istio namespace, pods, services, etc.). I was later able to deploy an app with Istio sidecars injected into its pods. However, the GKE UI still reports that Istio is disabled on this cluster. This is confusing: Istio is in fact deployed in the cluster, but the UI reports the opposite.
Is this a GKE bug?
Deployed Istio using:
kubectl apply -f install/kubernetes/istio-auth.yaml
Deployment code can be seen here:
https://github.com/hassanhamade/istio/blob/master/deploy
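For reference, a quick sanity check that the manually installed control plane is running (assuming the default istio-system namespace used by the manifest):
# Both should list the Istio control-plane components deployed by the manifest
kubectl get pods -n istio-system
kubectl get svc -n istio-system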

From my point of view this doesn't look like a bug. I assume the status shows as disabled because you have deployed a custom version of Istio on your cluster; this flag indicates the status of the GKE-managed version only.
If you want to update your cluster to use the GKE-managed version, you can do it as follows:
With mTLS enforced (strict)
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_STRICT
or
With mTLS in permissive mode
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_PERMISSIVE
Check this for more details.
Be careful: since you have already deployed Istio, enabling the GKE-managed one may cause issues.
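To see what the console is actually reflecting, you can also read the add-on state from the CLI; a sketch, assuming the console mirrors the cluster's addonsConfig (CLUSTER_NAME and ZONE are placeholders):
# The manual kubectl install does not change this; only the managed add-on does
gcloud beta container clusters describe CLUSTER_NAME --zone ZONE \
--format='value(addonsConfig)'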

Istio will only show as enabled in the GKE cluster UI when using the Istio on GKE addon. If you manually install Istio OSS, the cluster UI will show "disabled".

Related

Enable unsafe sysctls on a cluster managed by Amazon EKS

I'm attempting to follow instructions for resolving a data congestion issue by enabling two unsafe sysctls for certain pods running in a Kubernetes cluster managed by EKS. To do this, I must enable those parameters on the nodes running those pods. The following command enables them on a per-node basis:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen,net.core.somaxconn'
However, the nodes in the cluster I am working with are deployed by EKS. The EKS cluster was created from the AWS console (not a YAML config file, Terraform, etc.). I am not sure how to translate the above step so that all nodes in my cluster have those sysctls enabled.
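For what it's worth, a sketch of both halves, assuming the nodes run the standard EKS-optimized AMI and its bootstrap.sh (for console-created node groups the kubelet flag would go into a launch template's user data; all names and values below are illustrative):
# Node side: pass the allowlist to the kubelet via the node bootstrap script
/etc/eks/bootstrap.sh my-cluster \
--kubelet-extra-args '--allowed-unsafe-sysctls=net.unix.max_dgram_qlen,net.core.somaxconn'
# Pod side: request the sysctls through the pod securityContext
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn
      value: "1024"
    - name: net.unix.max_dgram_qlen
      value: "512"
  containers:
  - name: app
    image: nginx
EOF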

Can't find cluster-local-gateway in istio-system namespace

I'm using Knative to deploy serverless applications in Kubernetes. I recently noticed that the cluster-local-gateway is missing from the istio-system namespace. Is there any reason for this? I'm wondering whether Istio removed it in the latest versions, or whether it has to be installed on the Knative side.
FYI: I do my deployments in GCP with Istio enabled; I did not install Istio manually.
The cluster-local-gateway needs to be installed as part of the Knative installation. Since you are using GKE to install Istio instead of Helm, you need to install it manually:
kubectl apply -f https://raw.githubusercontent.com/knative/serving/master/third_party/${VERSION}/istio-knative-extras.yaml
Here VERSION is the Istio version, e.g. istio-1.5.0:
https://github.com/knative/serving/blob/master/third_party/istio-1.5.0/istio-knative-extras.yaml
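For example, the full sequence would look like this (the version value is illustrative and should match the Istio version running on your cluster):
VERSION=istio-1.5.0
kubectl apply -f https://raw.githubusercontent.com/knative/serving/master/third_party/${VERSION}/istio-knative-extras.yaml
# Confirm the gateway service now exists
kubectl get svc cluster-local-gateway -n istio-system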

Istio 1.0 does not inject envoy proxy to pods on Kubernetes 1.9.3

I have a Kubernetes 1.9.3 cluster and deployed Istio 1.0.12 on it. I created a namespace with the istio-injection=enabled label and created a deployment in that namespace, but I don't see the Envoy proxy being automatically injected into the pods created by the deployment.
Istio injects the Envoy proxy into pods through a mutating admission webhook called by kube-apiserver. Two admission plugins need to be enabled in kube-apiserver for proxy injection to work.
kube-apiserver runs as a static pod and its manifest is available at /etc/kubernetes/manifests/kube-apiserver.yaml. Update the line as shown below to include the MutatingAdmissionWebhook and ValidatingAdmissionWebhook plugins (available since Kubernetes 1.9):
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
The kubelet will detect the changes and re-create kube-apiserver pod automatically.
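After the API server comes back, a quick way to confirm injection is working (the namespace name is illustrative):
# The sidecar injector's webhook should now be registered
kubectl get mutatingwebhookconfigurations
# Pods created in a labeled namespace should show an extra istio-proxy container (2/2 READY)
kubectl label namespace demo istio-injection=enabled --overwrite
kubectl -n demo get pods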

Access a non-Istio resource

My current version of istio is 0.2.12.
I have a deployment that was deployed with istioctl kube-inject and tries to connect to a service/deployment inside the Kubernetes cluster that does not use Istio. How can I allow access from the Istio-injected deployment to the non-Istio deployment?
In this case the Istio-injected deployment is a Spring Boot application and the other is an ephemeral MySQL server.
Any ideas?
You should be able to access all Kubernetes services (both Istio-injected and regular ones) from Istio-injected pods.
This is now possible; please see the
"Can I enable Istio Auth with some services while disable others in the same cluster?"
question in the security section of the faq: https://istio.io/help/faq.html
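In later Istio releases (this resource did not exist in 0.2.x), the per-service exclusion described in that FAQ entry is typically expressed with a DestinationRule that disables mTLS toward the plain service; a sketch with illustrative names:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mysql-no-mtls
spec:
  host: mysql.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
EOF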

What is the glue between k8s Ingress and Google load balancers

I am using Kubernetes on Google Container Engine, and I still don't understand how the load balancers are "magically" configured when I create or update any of my Ingresses.
My understanding was that I needed to deploy a glbc / GCE L7 controller container, and that container would watch the Ingresses and do the job. I've never deployed such a container. So maybe it is part of the glbc cluster add-on, and it works even before I do anything?
Yet, on my cluster, I can see an "l7-default-backend-v1.0" ReplicationController in kube-system, with its pod and NodePort service, and it corresponds to what I see in the LB configs/routes. But I can't find anything like an "l7-lb-controller" that would do the provisioning; no such container exists on the cluster.
So where is the magic? What is the glue between the Ingresses and the LB provisioning?
Google Container Engine runs the glbc "glue" on your behalf unless you explicitly request it to be disabled as a cluster add-on (see https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing).
Just like you don't see a pod in the system namespace for the scheduler or controller manager (like you do if you deploy Kubernetes yourself), you don't see the glbc controller pod either.
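One way to see (or switch off) that hidden glue is through the cluster's add-on configuration, for example:
# Check whether the HTTP load balancing add-on is enabled (CLUSTER_NAME is a placeholder)
gcloud container clusters describe CLUSTER_NAME \
--format='value(addonsConfig.httpLoadBalancing)'
# Disabling it stops Ingresses from provisioning Google load balancers
gcloud container clusters update CLUSTER_NAME --update-addons=HttpLoadBalancing=DISABLED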