Access non-Istio resources - Kubernetes

My current version of Istio is 0.2.12.
I have a deployment that was deployed with istio kube-inject and tries to connect to a service/deployment inside the Kubernetes cluster that does not use Istio. How can I allow access from the Istio-injected deployment to the non-Istio deployment?
In this case, the Istio-injected deployment is a Spring Boot application and the other is an ephemeral MySQL server.
Any ideas?

You should be able to access all Kubernetes services (both Istio-injected and regular ones) from Istio-injected pods.

This is now possible; please see the
"Can I enable Istio Auth with some services while disable others in the same cluster?"
question in the security section of the FAQ: https://istio.io/help/faq.html
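For reference, in current Istio releases the per-service opt-out described in that FAQ entry would typically be a DestinationRule that disables mTLS towards the plain service; the mechanism in 0.2.x differed, and the hostname below is hypothetical:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mysql-plaintext
spec:
  host: mysql.default.svc.cluster.local   # hypothetical service hostname
  trafficPolicy:
    tls:
      mode: DISABLE   # do not originate mTLS towards the non-Istio MySQL service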

Related

Kubernetes - How to create a service that points to a LAN application

My infrastructure is based on Kubernetes (k3s, with the Istio ingress). I would like to use Istio to expose an application that is not in my cluster.
outside (internet) --https--> my router --> [cluster] istio --> [not cluster] application (192.168.1.29:8123)
I tried creating an HAProxy container, but it didn't work...
Any ideas?
If you insist on piping your traffic to the non-cluster application through the Kubernetes cluster, there are a couple of ways to handle it. You could use a Kubernetes-native ExternalName Service.
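A minimal sketch of the ExternalName approach, assuming your application is reachable under a DNS name (ExternalName creates a CNAME, so it needs a hostname rather than the raw IP; home-app and app.lan.example.com are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: home-app                        # hypothetical name
spec:
  type: ExternalName
  externalName: app.lan.example.com     # must be a DNS name, not 192.168.1.29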
The Istio way, though, would be to create a ServiceEntry and then use a VirtualService combined with a Gateway to direct traffic to your application outside of the cluster.
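A sketch of that setup, using the address from the question; the hostnames, resource names, and port names are assumptions:

# Make the LAN application known to the mesh
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: lan-app
spec:
  hosts:
  - app.internal                # hypothetical internal hostname
  ports:
  - number: 8123
    name: http
    protocol: HTTP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 192.168.1.29
---
# Accept outside traffic on the Istio ingress gateway
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: lan-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"         # hypothetical public hostname
---
# Route incoming requests to the external endpoint
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: lan-app
spec:
  hosts:
  - "app.example.com"
  gateways:
  - lan-app-gateway
  http:
  - route:
    - destination:
        host: app.internal
        port:
          number: 8123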

kubernetes vs openshift (routes and services)

I'm new to Kubernetes and OpenShift (I came from the Docker Swarm world), and I'm having trouble with some of the Kubernetes and OpenShift documentation, especially where it relates to routes and services. I was looking for how to expose a replica set of containers externally, and I found that the Kubernetes documentation uses a Service to expose the Pods while OpenShift uses Routes. Can anyone explain the differences to me?
There are only minor differences in the tools being used. OpenShift is a Kubernetes distribution, meaning it is a collection of opinionated, pre-selected components. For Ingress, OpenShift uses HAProxy to get (HTTP) traffic into the cluster; other Kubernetes distributions may use the NGINX Ingress Controller or something similar.
Services are used to load-balance traffic inside the cluster. When you create a ReplicaSet, you'll have multiple Pods running. To "talk" to these Pods, you typically create a Service, which distributes the traffic evenly between them.
To get HTTP(S) traffic from the outside to your Service, OpenShift uses Routes (Ingress in other Kubernetes distributions):
                                            +-----+
                                        +-->+ Pod |
           +-------+       +---------+  |   +-----+
Traffic--->+ Route +------>+ Service +--+-->+ Pod |
           +-------+       +---------+  |   +-----+
                                        +-->+ Pod |
                                            +-----+
To expose your application to the outside world, you typically create an internal Service using oc create service and then a Route using oc expose:
# Create a new ClusterIP service named myservice
oc create service clusterip myservice --tcp=8080:8080
oc expose service myservice
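For reference, oc expose generates a Route object roughly like this sketch (values follow the example above; the exact output may differ between OpenShift versions):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myservice
spec:
  to:
    kind: Service
    name: myservice
  port:
    targetPort: 8080   # the service port created above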
"Routes"in OCP do not compare with K8S"Services"but with K8S"Ingress"
Comparison betweenRoutesandIngressis here
Doc on how to expose "Services" in OCP outside the cluster is here
Red Hat needed an automated reverse-proxy solution for containers running on OpenShift long before Kubernetes came up with Ingress. So in OpenShift we now have Route objects, which do almost the same job as Ingress in Kubernetes. The main difference is that Routes are implemented by the good old HAProxy, which can be replaced by a commercial solution based on F5 BIG-IP. On Kubernetes, however, you have much more choice, as Ingress is an interface implemented by multiple servers, starting with the most popular, NGINX, and including Traefik, AWS ELB/ALB, GCE, Kong, and others, HAProxy among them.
So which one is better, you may ask? Personally, I think HAProxy in OpenShift is much more mature, although it doesn't have as many features as some Ingress implementations. On Kubernetes, however, you can use different enhancements. My favorite is the integration with cert-manager, which lets you automate the management of SSL certificates. No more manual actions for issuing and renewing certificates, and thanks to the Let's Encrypt integration you can use a trusted CA for free!
As an interesting fact, starting from OpenShift 3.10, Kubernetes Ingress objects are recognized by OpenShift and are translated/implemented by... a Router. It's a big step towards compatibility: configuration prepared for Kubernetes can now be launched on OpenShift without any modifications.
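For illustration, a minimal Kubernetes Ingress of the kind OpenShift can translate into a Route. It is shown here with the current networking.k8s.io/v1 API (OpenShift 3.10 itself predates this API version), and the host and service names are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice
spec:
  rules:
  - host: myservice.example.com   # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myservice       # routes to the Service from the earlier example
            port:
              number: 8080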

Does Istio on AWS require an AWS ALB?

I will install Istio as a service mesh on AWS EKS. I know that Istio provides its own Ingress Gateway. What I am confused about is: Do we still need to use AWS ALB or ELB in front of Istio Ingress Gateway?
Given that Istio creates a Service of type LoadBalancer for its ingress gateway Deployment, Kubernetes will take care of provisioning the ELB for you. There is no need to create it yourself, although you could also configure the Service to point to an existing ELB.
The linked Service is outdated and for ease of reference only. The latest Istio chart is actually here. You should be able to download it and confirm the Service configuration.
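For orientation, the piece of the chart that matters here is a Service shaped roughly like the sketch below; port numbers and labels vary between Istio versions, so treat them as assumptions:

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer        # this is what makes Kubernetes provision the ELB on EKS
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443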

Istio deployed but doesn't show in the GKE UI

I have added Istio to an existing GKE cluster. This cluster was initially deployed from the GKE UI with Istio "disabled".
I have deployed Istio from the CLI using kubectl, and while everything works fine (istio namespace, pods, services, etc.) and I was later able to deploy an app with Istio sidecar pods, I wonder why the GKE UI still reports that Istio is disabled on this cluster. This is confusing: in effect, Istio is deployed in the cluster but the UI reports the opposite.
Is that a GKE bug?
Deployed Istio using:
kubectl apply -f install/kubernetes/istio-auth.yaml
Deployment code can be seen here:
https://github.com/hassanhamade/istio/blob/master/deploy
From my point of view this doesn't look like a bug. I assume that the status is disabled because you have deployed a custom version of Istio on your cluster; this flag should indicate the status of the GKE-managed version.
If you want to update your cluster to use the GKE-managed version, you can do it as follows:
With mTLS enforced:
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_STRICT
or with mTLS in permissive mode:
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_PERMISSIVE
Check this for more details.
Be careful: since you have already deployed Istio, enabling the GKE-managed one may cause issues.
Istio will only show as enabled in the GKE cluster UI when you use the Istio on GKE add-on. If you manually install Istio OSS, the cluster UI will show "disabled".

Can istio use existing services?

I already have some services in my k8s cluster and want to maintain them separately. Examples:
grafana with custom dashboards and custom dockerfile
prometheus-operator instead of basic prometheus
jaeger pointing to elasticsearch as internal storage
certmanager in my own namespace (I also use it for nginx-ingress legacy routing)
Is it possible to use the existing instances instead of creating Istio-specific ones? Can Istio communicate with them, or is that hardcoded?
Yes - it is possible to use external services with Istio. You can disable Grafana and Prometheus just by setting the proper flags in the values.yaml of the Istio Helm chart (grafana.enabled=false, etc.).
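As a sketch, the relevant values.yaml overrides could look like the following; the flag names vary between chart versions, so treat them as assumptions:

# Disable the addons bundled with the Istio Helm chart
grafana:
  enabled: false
prometheus:
  enabled: false
tracing:
  enabled: false   # the chart's bundled Jaeger deployment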
You can check the kyma-project to see how Istio is integrated with prometheus-operator, a Grafana deployment with custom dashboards, and a custom Jaeger deployment. From your list, only certmanager is missing.
Kubernetes provides quite a big variety of networking and load-balancing features out of the box. Still, Istio sidecars are a good way to simplify and extend that functionality: they are injected into Pods in order to proxy the traffic between internal Kubernetes services.
You can inject sidecars manually or automatically. If you choose the manual way, make sure to add the appropriate parameter under the Pod's annotations field:
annotations:
  sidecar.istio.io/inject: "true"
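For context, a sketch of where that annotation sits in a Deployment's Pod template; the names and image are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.istio.io/inject: "true"   # per-Pod injection switch
    spec:
      containers:
      - name: myapp
        image: myapp:latest               # hypothetical image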
Automatic sidecar injection requires the MutatingAdmissionWebhook admission controller, available since Kubernetes 1.9, which lets sidecars be injected during Pod creation as well.
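As a sketch, automatic injection is then typically switched on per namespace with a label (the namespace name here is hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    istio-injection: enabled   # the injection webhook watches for this label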
Get familiar with this article to shed light on using the different monitoring and traffic-management tools in Istio.