istio: route traffic between sidecar-enabled pods and non-sidecar-enabled pods - kubernetes

If I enable Istio on some of my apps (but not all of them) using manual sidecar injection, can I route traffic between non-Istio apps and Istio-based apps? If yes, is it still true if I enable Citadel? I'm asking because I'd like to slowly enable sidecar injection on my apps and migrate over. Do Istio-based apps and non-Istio apps still talk to each other (within the cluster) via the normal Kubernetes Service objects? Is there anything else I need to do to allow Istio and regular services to talk to each other?
I'm new to Istio, so any context is helpful.

To highlight the proper solution to achieve your goal, as @Vadim Eisenberg mentioned:
You should set PERMISSIVE policy and set a destination rule for each
non-istio service with tls mode "NONE".
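In practice that translates into two resources. Below is a minimal sketch, assuming a recent Istio release; the namespace, host, and resource names are placeholders, and current Istio spells the plaintext DestinationRule TLS mode DISABLE rather than "NONE":

    # Mesh-wide PeerAuthentication: sidecar-enabled workloads accept both
    # plaintext and mTLS traffic while the migration is in progress.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system
    spec:
      mtls:
        mode: PERMISSIVE
    ---
    # DestinationRule for a service that has no sidecar yet, so callers
    # inside the mesh send plaintext instead of attempting mTLS.
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: legacy-app-plaintext
      namespace: default
    spec:
      host: legacy-app.default.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE

With PERMISSIVE mTLS, injected and non-injected pods keep talking to each other through their normal Kubernetes Service objects while you migrate; once every workload has a sidecar, you can switch the policy to STRICT.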

Related

How to access monitoring services (prometheus, kibana etc.) deployed in Kubernetes production

I have web services running in GKE (Google Kubernetes Engine). I also have monitoring services running in the cloud that are monitoring these services. Everything is working fine... except that I don't know how to access the Prometheus and Kibana dashboards. I know I can use port-forward to temporarily forward a local port and access them that way, but that cannot scale as more and more engineers use the system. I was thinking of a way to provide engineers access to these dashboards, but I'm not sure what the best way would be.
Should I create a load balancer for each of these?
What about security? I only want a few engineers to have access to these systems.
There are other considerations as well; I would love to get your thoughts.
Should I create a load balancer for each of these?
No. You can, but it is not a good idea.
What about security? I only want a few engineers to have access to
these systems.
You can create accounts in Kibana and manage access there, or you can use IAP (Identity-Aware Proxy) to restrict access. Ref doc
You have multiple options. You could keep using a LoadBalancer per service, but that is not a good idea.
A better way to expose different applications is an Ingress. Say you are running Prometheus, Jaeger, and Kibana in your GKE cluster.
You can create different hosts such as prom.example.com, tracing.example.com, and kibana.example.com, so there is a single ingress controller Service of type LoadBalancer, and you map its IP to those DNS names.
Ref doc
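For example, a single Ingress fanning out by host could look roughly like this; the hosts, namespace, service names, and ports below are hypothetical placeholders for whatever your Prometheus, Jaeger, and Kibana Services actually expose:

    # One Ingress, one load balancer, three host names.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: monitoring
      namespace: monitoring
    spec:
      rules:
      - host: prom.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus
                port:
                  number: 9090
      - host: tracing.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jaeger-query
                port:
                  number: 16686
      - host: kibana.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana
                port:
                  number: 5601

You would still layer authentication on top (IAP or whatever your ingress controller supports), since none of these dashboards should be reachable by everyone who can resolve the DNS names.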

Istio service mesh in on premise kubernetes cluster in data center

Currently, we have a k8s cluster in our data center for compliance reasons. We are running Traefik as the ingress controller. Now we want to add a service mesh to it for monitoring service-to-service communication. Can you suggest how I can do that? Do I replace the Traefik ingress controller and run the Istio ingress on the host network, or is there a better option that keeps Traefik and adds Istio alongside it?
If you are going to install Istio to get "free" observability features, you need to keep in mind that in some scenarios it simply doesn't fit; e.g., if you want the latency within a single service, that is not possible with Istio, whose sidecar proxies only see traffic between services.
I would recommend Istio if you need a service mesh and/or routing in addition to observability, but don't install it just for observability. There are other tools out there specific to that.
That is without counting the fact that you would be spending cluster resources on an extra container for every service just for monitoring. Not a good approach, in my opinion.

istio allowing connection to HTTPS url without any service entry

I am using Istio v1.0.6 and Kubernetes 1.11. I was able to successfully implement the ingress feature of Istio. However, I am seeing that by default Istio blocks TCP connections from the mesh to applications outside the cluster, yet it allows HTTPS connections to applications that are not even registered in the mesh.
Is there any default egress rules that I am missing ?
Up until version 1.0, Istio’s default behavior was to block access to external endpoints. This created connectivity issues, and applications were breaking until the user discovered all the endpoints and configured them manually.
Istio 1.1 changed the default to allow access to all external endpoints.
See this for additional details and an automated way to generate ServiceEntries:
https://medium.com/@tufin/locking-down-istio-egress-with-automatic-traffic-discovery-51f0d49879a3
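If you want the pre-1.1 locked-down behavior back, you can set the mesh's outboundTrafficPolicy mode to REGISTRY_ONLY and then whitelist each external destination with a ServiceEntry. A rough sketch of such a ServiceEntry (the host is a placeholder):

    # Hypothetical ServiceEntry permitting HTTPS egress to one external host
    # when the mesh runs with outboundTrafficPolicy mode REGISTRY_ONLY.
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: external-api
    spec:
      hosts:
      - api.example.com
      location: MESH_EXTERNAL
      ports:
      - number: 443
        name: https
        protocol: HTTPS
      resolution: DNS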

Is it possible to use Istio without kubernetes or docker?

I have 4 microservices running on my laptop listening on various ports. Can I use Istio to create a service mesh on my laptop so the services can communicate with each other through Istio? All the links I find on Google about Istio involve Kubernetes, but I want to run Istio without Kubernetes. Thanks for reading.
In practice, not really as of this writing, since pretty much all the Istio runbooks and guides are available for Kubernetes.
In theory, yes. Istio components are designed to be 'platform independent'. Quote from the docs:
While Istio is platform independent, using it with Kubernetes (or infrastructure) network policies, the benefits are even greater, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers.
But unless you know the details of each of the components (Envoy, Mixer, Pilot, Citadel, and Galley) really well and are willing to spend a lot of time, it is not practically feasible to get it running outside of Kubernetes.
If you want to use something less tied to Kubernetes, you can take a look at Consul; although it doesn't have all the functionality Istio has, it overlaps with some of its features.
I did some googling and found that Istio claims to support apps running outside Kubernetes, e.g. on VMs, but I have never tried it.
https://istio.io/latest/news/releases/0.x/announcing-0.2/#cross-environment-support
https://jimmysong.io/blog/istio-vm-odysssey/

Both public and intranet services on the same OpenShift cluster

In my company we have a few public websites and many internal webapps. Currently they are running in different AWS security groups.
Is it possible to run both kind of services on the same OpenShift cluster and make sure internal services are not accessible from the Internet?
Thanks!
The traditional(?) way this is solved is through Internet-facing ELBs/ALBs pointed at NodePorts on the cluster. I personally haven't tried a Service of kind: LoadBalancer since Kubernetes 1.2, so I can't speak to its current functionality, but I do know Kubernetes has a lot of users on AWS, so it's plausible it works fine by now.
You can also run your own Ingress Controller, several of which have support for ip white/black listing, authentication, SSL/TLS, all the fancy toys, if you'd prefer not to deal with the ELB headache.
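For instance, with the community NGINX ingress controller you can keep an internal app reachable only from your corporate CIDR ranges via an annotation; something along these lines, where the hostname, CIDRs, and service details are made up:

    # Hypothetical Ingress for an internal app, restricted by source IP
    # through the ingress-nginx whitelist-source-range annotation.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-webapp
      namespace: intranet
      annotations:
        nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
    spec:
      rules:
      - host: internal.corp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-webapp
                port:
                  number: 80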
If you're not already considering it, Calico SDN has support for in-cluster network policies, so you could also apply an extra level of lockdown to ensure no Internet-facing app breaks out of its allowed network path; in effect, the security-group model moves down into the cluster.
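A minimal NetworkPolicy sketch of that idea (namespace and labels are hypothetical; Calico enforces the standard Kubernetes NetworkPolicy API, so this works with it or any other policy-capable CNI):

    # Internal web apps accept ingress only from pods in the same
    # namespace labelled role=intranet-client, so the Internet-facing
    # ingress controller cannot reach them.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: internal-only
      namespace: intranet
    spec:
      podSelector:
        matchLabels:
          tier: internal-webapp
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: intranet-client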