I'm new to the k8s world and am using OpenShift 4.2.18. I want to deploy microservices on it. What I need is one common IP and the ability to access each microservice via a virtual path.
Like this,
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
The Service and Deployment are OK. However, I'm confused by the other terms. Should I use a Route or an Ingress? Should I use a VirtualService as in this link? I've also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate any information about these terms.
Thanks in advance, Best Regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes, so, using the OpenShift Route as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so the way you specify them looks different, but the intent is to be able to do effectively the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, then Ingress might be a good option.
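For the virtual-path layout you describe, here is a minimal sketch of both forms (the hostname, Service names, and port are placeholders; note that very old clusters such as OpenShift 4.2 still serve Ingress from the v1beta1 API):

```yaml
# OpenShift Route: one Route per virtual path, all sharing the same host.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: microservice1
spec:
  host: my-common-host.example.com   # placeholder shared hostname
  path: /microservice1               # the virtual path
  to:
    kind: Service
    name: microservice1              # your existing Service
  tls:
    termination: edge                # HTTPS terminated at the router
---
# Kubernetes Ingress: the same routing expressed as rules.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - host: my-common-host.example.com
    http:
      paths:
      - path: /microservice1
        pathType: Prefix
        backend:
          service:
            name: microservice1
            port:
              number: 8080           # placeholder Service port
```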
VirtualService and Istio belong to a service mesh, which is not necessary for external access to an app. A service mesh brings complexity; unless the capabilities it offers are really needed for your use case, there is no reason to use one.
Related
I have web services running in GKE (Google Kubernetes Engine). I also have monitoring services running in the cloud that monitor these services. Everything is working fine... except that I don't know how to access the Prometheus and Kibana dashboards. I know I can use port-forward to temporarily forward a local port and access them that way, but that cannot scale as more and more engineers use the system. I was thinking of a way to provide engineers with access to these dashboards, but I am not sure what the best way would be.
Should I create a load balancer for each of these?
What about security? I only want a few engineers to have access to these systems.
There are other considerations as well, would love to get your thoughts.
Should I create a load balancer for each of these?
No. You can, but it is not a good idea.
What about security? I only want a few engineers to have access to these systems.
You can create accounts in Kibana and manage access there, or you can use IAP (Identity-Aware Proxy) to restrict access. Ref doc
You have multiple options. You can use a LoadBalancer per service as you did, though that is not a good idea.
A good way to expose different applications is using an Ingress. So if you are running Prometheus, Jaeger, and Kibana in your GKE cluster:
You can create different hosts such as prom.example.com, tracing.example.com, and kibana.example.com, so there will be a single ingress controller Service of type LoadBalancer, and you can map its IP to DNS.
Ref doc
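A minimal sketch of that host-based Ingress (hostnames and Service names are placeholders; the ingress controller itself is assumed to already be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboards
spec:
  rules:
  - host: prom.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus        # placeholder Service name
            port:
              number: 9090
  - host: kibana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana            # placeholder Service name
            port:
              number: 5601
```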
I'm new to Kubernetes and am trying to point all requests for a domain to another local service.
Both applications are running in the same cluster under a different namespace
Example domains
a.domain.com hosting first app
b.domain.com hosting the second app
When I do a curl request from the first app to the second app (b.domain.com), it travels over the internet to the second app.
Usually what I could do is in /etc/hosts point b.domain.com to localhost.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure that is the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app, but I would like to keep the full URL.
Let me know if you need more details to help me solve this.
The way to do it is by using the Kubernetes Gateway API. It is true that you can deploy your own implementation, since this is an open-source project, but there are already a lot of solutions implementing it, and it would be much easier to learn how to use one of those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a cloud environment, you can take a look at Anthos Service Mesh, which is the managed version of Istio.
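For example, with Istio in place, a ServiceEntry plus a VirtualService can keep the full URL while routing the traffic inside the cluster (a sketch; the Service name, namespace, and port are placeholders):

```yaml
# Make the external hostname known to the mesh...
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: b-domain
spec:
  hosts:
  - b.domain.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
---
# ...then redirect in-mesh traffic for that host to the local Service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: b-domain-internal
spec:
  hosts:
  - b.domain.com
  http:
  - route:
    - destination:
        host: app-b.app-b-ns.svc.cluster.local   # placeholder Service
        port:
          number: 80
```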
Finally, take a look at the blog Welcome to the service mesh era, since the traffic management between services is one of the elements of the service mesh paradigm, among others like monitoring, logging, etc.
I am looking for a straightforward way to set up GKE for a new project so that I can use less demanding nodes in a Google-managed environment.
I have set up GKE in a previous example with Istio and its ingress gateway. This works perfectly fine, although I feel that I am overcomplicating things, and I need to use N1 machines, which are quite costly to start the project with. See the documentation below.
https://istio.io/latest/docs/setup/platform-setup/gke/
Now I was looking into the GKE Gateway controller. The thought process on how to use it seemed quite nice, although I am missing a bunch of stuff.
Now my question is: imagine that I want to use the GKE Gateway controller instead of an Istio ingress gateway. Can I manage a centralized auth flow in the GKE Gateway controller, like Istio offers with its RequestAuthentication and AuthorizationPolicy kinds (https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/)?
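For reference, the Istio flow referred to above looks roughly like this when applied at the ingress gateway (a sketch based on the linked doc; the issuer and JWKS URI are placeholders):

```yaml
# Validate JWTs on requests entering through the ingress gateway.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://issuer.example.com"                         # placeholder
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"  # placeholder
---
# Reject requests that do not carry a valid token from that issuer.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["https://issuer.example.com/*"]
```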
Currently, we have a k8s cluster in our data center for compliance reasons. We are running Traefik as the ingress controller. Now we want to add a service mesh to it for monitoring service-to-service communication. Can you suggest how I can do this? Should I replace the Traefik ingress controller and run the Istio ingress gateway in a host-network setup, or is there a better option that keeps Traefik and adds Istio alongside it?
If you are going to install Istio to get "free" observability features, you need to keep in mind that in some scenarios it simply doesn't fit, e.g. if you want to get the latency within a service, that is not possible with Istio.
I would recommend Istio if you need a service mesh and/or routing besides observability, but don't install it just for observability; there are other tools out there specifically for that.
That is without counting the fact that you will spend cluster resources on an extra sidecar container for every service, just for monitoring. Not a good approach, in my opinion.
I have a Kubernetes cluster on a private cloud based on OpenStack. My service needs to be exposed on a specific port. I am able to do this using NodePort. However, if I try to create another service similar to the first one, I am not able to expose it, since I have to use the same port and it is already occupied by the first one.
I've noticed that I can use LoadBalancer in public clouds for this, but I assume this is not possible in OpenStack?
I also tried to use a Kubernetes Ingress controller, but it did not work. However, I am not sure whether I went about it the correct way.
Is there any way other than a LoadBalancer or an Ingress to do this? (My first assumption was that if I dedicated my pods to specific nodes, then I should be able to expose each of the services on the same port on different nodes, but this approach did not work either.)
Please let me know if you have any thoughts on this.
You have to set up the OpenStack Cloud Provider: basically, this deployment will watch for LoadBalancer Services and will provision an {internal, external} IP address you can use to interact with your application, even at L4 and not only L7 like many Ingress controller resources.
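Once the cloud provider is in place, each service can request its own port through an ordinary LoadBalancer Service (a sketch; names, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder name
spec:
  type: LoadBalancer        # fulfilled by the OpenStack cloud provider (e.g. Octavia)
  selector:
    app: my-app             # placeholder pod label
  ports:
  - port: 8443              # the specific port you need to expose
    targetPort: 8443
```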
If you want to expose only one port, then the only answer, to the best of my knowledge, is an ingress controller. The two most famous ones are NGINX and Traefik. I agree that setting up an ingress controller can be difficult, and I have had problems with them before, but you have to solve those problems one by one.
Another thing you can do is build your own "ingress controller". What I mean is to use a reverse proxy such as NGINX, configure it to reroute traffic based on your topology, and then expose only this reverse proxy so that all traffic goes through it. But this should be done only if you need something very customized.
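A minimal sketch of that custom reverse-proxy idea, assuming two backend Services in the default namespace (all names, paths, ports, and the image tag are placeholders):

```yaml
# NGINX config that fans requests out to the backend Services by path.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location /app1/ {
        proxy_pass http://app1-svc.default.svc.cluster.local:8080/;
      }
      location /app2/ {
        proxy_pass http://app2-svc.default.svc.cluster.local:8080/;
      }
    }
---
# The proxy itself, with the config mounted over the default vhost.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-proxy
  template:
    metadata:
      labels:
        app: custom-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: conf
        configMap:
          name: custom-proxy-conf
---
# Expose only the proxy, on the single port you control.
apiVersion: v1
kind: Service
metadata:
  name: custom-proxy
spec:
  type: NodePort
  selector:
    app: custom-proxy
  ports:
  - port: 80
    nodePort: 30080       # the one exposed port; all traffic enters here
```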