Istio VirtualService Networking outside of cluster - kubernetes

I have a monolithic application that is being broken down into domains that are microservices. The microservices live inside a Kubernetes cluster using the Istio service mesh. I'd like to start replacing the service components of the monolith little by little. Given that the UI code and the microservices run inside the cluster but the older web API is still outside it, is it possible to use a VirtualService to route the paths I specify to a service within the cluster, but forward or proxy the rest of the calls outside the cluster?

You will have to define a ServiceEntry so that Istio is aware of your external service. That ServiceEntry can then be used as a destination in a VirtualService. https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination
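As a minimal sketch of that setup (the hostnames, gateway name, service name, and path prefix below are all placeholders you'd adapt): migrated paths are matched first, and everything else falls through to the external legacy API.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: legacy-api
spec:
  hosts:
  - legacy-api.example.com    # placeholder: the monolith's external hostname
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: monolith-migration
spec:
  hosts:
  - app.example.com           # placeholder: the host your UI calls
  gateways:
  - my-gateway                # placeholder: your existing ingress Gateway
  http:
  - match:
    - uri:
        prefix: /api/orders   # a path already migrated to a microservice
    route:
    - destination:
        host: orders.default.svc.cluster.local
        port:
          number: 8080
  - route:                    # everything else is proxied to the legacy API
    - destination:
        host: legacy-api.example.com
        port:
          number: 80
```

As more paths are migrated, you add match blocks for them; the catch-all route at the end shrinks in scope until the monolith can be retired.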

Related

Exposing Service from a BareMetal (Kubeadm) Kubernetes Cluster to the outside world

Exposing a service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world. I am trying to access my NGINX service from outside the cluster and see the NGINX output in a web browser.
For that, I have created a Deployment and a Service for NGINX.
From my searching, I found the following options to expose a service to the outside world:
MetalLB
Ingress NGINX
Some Helm resources
I would like to understand all three of these approaches (and any others), as it would help me learn new things.
GOAL
Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world.
How can I give my service its own public IP so that it is accessible from outside the cluster?
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will assign a local-network IP address to the service.
Then you can configure port mapping on your router to forward incoming traffic on ports 80 and 443 to that external service IP address.
I have done a similar setup you can check it here in detail:
https://developerdiary.me/lets-build-low-budget-aws-at-home/
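As a rough sketch (assuming MetalLB 0.13+ with its CRD-based configuration; the pool name and address range are placeholders you'd adapt to your LAN), the address pool and L2 advertisement look like this:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool                   # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250     # placeholder: a free range on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - home-pool
```

Any Service of type LoadBalancer then gets an address from that range, which is the address you port-forward to on the router.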
You need to deploy an ingress controller in your cluster so that it gives you an entrypoint where your applications can be accessed. Traditionally, in a cloud native environment it would automatically provision a LoadBalancer for you that will read the rules you define inside your Ingress object and route your request to the appropriate service.
One of the most commonly used ingress controllers is the NGINX Ingress Controller. There are multiple ways to deploy it (manifests, Helm, operators). In the case of bare-metal clusters, there are multiple considerations which you can read here.
MetalLB is still in beta, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service, which will be accessible on all the nodes in your cluster, as sketched below. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.
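A minimal sketch of exposing the controller as a NodePort Service (the namespace, selector labels, and nodePort values are assumptions; match them to your actual controller deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:                         # must match your controller pods' labels
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080                 # placeholder port in the NodePort range
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```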

Kubernetes - How to create a service that points to a LAN application

My infrastructure is based on Kubernetes (k3s, with an Istio ingress). I would like to use Istio to expose an application that is not in my cluster.
outside (internet) --https--> my router --> [cluster] istio --> [not cluster] application (192.168.1.29:8123)
I tried creating a HAProxy container, but it didn't work...
Any ideas?
If you insist on piping your traffic to the non-cluster application through the Kubernetes cluster, there are a couple of ways to handle this. You could use a Kubernetes-native ExternalName Service.
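A minimal sketch of the ExternalName approach (the service name and hostname are placeholders). Note that ExternalName maps to a DNS name, so a bare IP like 192.168.1.29 would need a resolvable hostname; otherwise, a selector-less Service with a manually managed Endpoints object is the closer fit.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lan-app
spec:
  type: ExternalName
  externalName: lan-app.home.example.com   # placeholder: DNS name of the LAN host
```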
The Istio way, though, would be to create a ServiceEntry and then use a VirtualService combined with a Gateway to direct traffic to your application outside of the cluster.
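Sketched out for the setup in the question (the hostnames and resource names are placeholders; resolution: STATIC lets the ServiceEntry point at the bare LAN IP):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: lan-app
spec:
  hosts:
  - lan-app.internal            # placeholder name used only for mesh routing
  location: MESH_EXTERNAL
  resolution: STATIC
  ports:
  - number: 8123
    name: http
    protocol: HTTP
  endpoints:
  - address: 192.168.1.29       # the LAN application from the question
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: lan-app-gateway
spec:
  selector:
    istio: ingressgateway       # default Istio ingress gateway label
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - lan-app.example.com       # placeholder public hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: lan-app
spec:
  hosts:
  - lan-app.example.com
  gateways:
  - lan-app-gateway
  http:
  - route:
    - destination:
        host: lan-app.internal
        port:
          number: 8123
```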

How to expose app deployed on Rancher k3s to the Internet

I have different deployments across different namespaces, and I would like to expose some of them to the Internet, even though I don't have a static public IP available.
The different services are deployed on Rancher k3s and every service which should be publicly accessible has an Ingress defined in the same namespace.
I was trying to follow Rancher - How to expose my services publicly?, but I didn't really get what I have to do. Moreover:
Why do we need to define a LoadBalancer? It seems to me that the ingress controller used by k3s (Traefik?) already creates one. If this is a must (or a good way to go), how exactly should the service be defined?
I don't have any Rancher UI in my environment. Therefore, is there a way to achieve what is described in that link in a declarative way?
Is there a way to use services like No-IP or FreeDNS for the final hostname?
If I get it right, you deployed Kubernetes manually on bare-metal/VM nodes and now you want to reach the deployments running inside that cluster.
There are two levels of load balancing in this setup. The first is managed by your ingress controller (it sounds like Traefik in your case). The second, recommended, level is an L4 load balancer in front of your workers to reach the ingress pods, which are usually deployed on multiple (or all) nodes. Traefik, or any other ingress controller, will load-balance traffic inside the k8s cluster without issue even if you don't have an L4 load balancer, but that is not recommended: if you lose that one node, no traffic can reach the Kubernetes cluster anymore. You "just" need to have your DNS resolution pointing at your public IP and routed to one of your workers, or to the LB in front of them. However, if you don't have an L4 LB, you'll need to have your ingress pods listening on ports 80 and/or 443.
Most things that you do in the Rancher UI are just an easier way to see your k8s objects; all ingress configuration can be achieved via kubectl, k9s (strongly recommend that one!), Lens, or other tools. K8s objects are still k8s objects. In this case, you need your services exposed as ClusterIP so that they are reachable by the ingress pods, as sketched below.
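A minimal sketch of that pairing (all names, the host, and the ports are placeholders; ingressClassName: traefik assumes k3s's bundled Traefik):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
spec:
  type: ClusterIP                # reachable by the ingress pods, not exposed directly
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
spec:
  ingressClassName: traefik
  rules:
  - host: my-app.example.com     # placeholder: a No-IP/FreeDNS hostname works here too
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```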
I've never used such a solution natively from k8s, but when I had to, the internet router was able to handle that part; once the traffic is inside your network, it's internal routing.
I hope this helps. Ingress can definitely be a tough one to grasp!

Ingress resource deployment

What is the best approach to creating an Ingress resource that interacts with an ELB in a target deployment environment running on Kubernetes?
As we all know, there are different cloud providers and many types of settings related to the deployment of your Ingress resource, depending on your target environment: AWS, OpenShift, plain-vanilla K8s, Google Cloud, Azure.
On cloud deployments like Amazon, Google, etc., Ingresses also need special annotations, most of which are common to all microservices in need of an Ingress.
If we also deploy a mesh like Istio on top of k8s, then we need to use an Istio Gateway with the ingress. If we use OCP, it has a special kind called "Routes".
I'm looking for the best solution that uses the most standard options, decreasing the differences between platforms when deploying an Ingress resource.
So maybe, because of the many different setups here, the best approach is to create an operator to deploy the Ingress resource?
Is it important to create some generic component to deploy the Ingress while staying cloud-agnostic?
How do other companies deploy their ingress resources to the k8s cluster?
What is the best approach to creating an Ingress resource that interacts with an ELB in a target deployment environment running on Kubernetes?
On AWS the common approach is to use an ALB and the AWS ALB Ingress Controller, but it has its own drawbacks in that it creates one ALB per Ingress resource.
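For illustration, such an Ingress is driven by annotations like these (a sketch; the annotation keys are the commonly used ones for the ALB Ingress Controller, but check your controller version's docs, and the backend service name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb                 # hand this Ingress to the ALB controller
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip        # route directly to pod IPs
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app      # placeholder backend service
            port:
              number: 80
```

Newer versions of the controller (renamed the AWS Load Balancer Controller) can mitigate the one-ALB-per-Ingress drawback by grouping multiple Ingresses onto a shared ALB via the alb.ingress.kubernetes.io/group.name annotation.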
If we also deploy a mesh like Istio, then we need to use an Istio Gateway with the ingress.
Yes, then the situation is different, since you will use a VirtualService from Istio or use AWS App Mesh. That approach looks better, and you will not have an Ingress resource for your apps.
I'm looking for the best solution that uses the most standard options, decreasing the differences between platforms when deploying an Ingress resource.
Yes, this is at the intersection between the cloud provider's infrastructure and your cluster, so there are unfortunately many different setups here. It also depends on whether your ingress gateway is within the cluster or outside of it.
In addition, the Ingress resource just became GA (stable) in the most recent Kubernetes release, 1.19.

Why can envoy sidecar control my traffic?

I run Istio on Kubernetes. I want to know how the Envoy sidecar works. For example, after the sidecar is injected into the pod, the original container cannot access the external network without an EgressRule. How does this work?
All the traffic inside the pod is captured by iptables rules and directed to the sidecar proxy. The sidecar proxy then performs routing according to the routing tables it receives from Istio Pilot (a part of the Istio control plane). The routing tables are based on the Kubernetes Services and on the Istio RouteRules. Since Istio cannot know anything about external services, it cannot route traffic to an external service without an EgressRule defined. EgressRules define the routing tables for the external services.
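(In current Istio versions, EgressRule has been replaced by ServiceEntry.) A minimal sketch of allowing the sidecar to route to one external host (the hostname is a placeholder):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.external.example.com   # placeholder: the external service's hostname
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS              # TLS passthrough for HTTPS traffic
```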