Use case for a service discovery tool when the infrastructure is built with Kubernetes

I am learning Kubernetes and have developed a good working knowledge of it. However, I am not able to understand why, and in which cases, one would use a service discovery tool when the infrastructure is on Kubernetes.
I was asked this during an interview: which service discovery software would you use for your microservices? I am not sure why one would need service discovery when Kubernetes has Service objects that can be referenced by name.
Has anyone come across a case where, while developing microservices on Kubernetes, they needed a separate service discovery tool such as etcd?

Yes, there are cases where you would set up your own service discovery. One in particular is a multi-cluster setup with k8s. You can look at how Submariner (a tool for connecting several k8s clusters with an L3/L4 tunnel) uses CoreDNS to add cross-cluster DNS service discovery.
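As a hedged sketch of what that looks like in practice (the names here are placeholders, and the API group is the multi-cluster services API that Submariner's Lighthouse component implements, as far as I know): you export an existing Service, and it becomes resolvable from the other connected clusters.
```yaml
# Export the existing Service "backend" in namespace "demo" to the cluster set.
# Submariner's Lighthouse watches ServiceExport objects and publishes the
# service under the clusterset.local DNS domain in the connected clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: backend     # must match the name of the Service being exported
  namespace: demo
```
Pods in the other clusters can then reach it via a name such as backend.demo.svc.clusterset.local, if my reading of the Submariner docs is correct.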

Related

Use an existing microservice architecture with Kubernetes

I have an existing microservice architecture that uses Netflix Eureka and Zuul services.
I've deployed a pod that successfully registers with the discovery server, but when I hit the API it times out. My guess is that the container IP is what gets registered with the discovery server, which is why the service is not reachable.
Is there a way to either map the correct address or redirect the call to the proper URL? I'm looking for an easy way, as this needs to be done for multiple services.
I think you should rethink your design the Kubernetes way! Your Eureka (service discovery) and Zuul (API gateway/load balancer) servers are extra services that you don't really need on the Kubernetes platform.
For service discovery and load balancing, you can use Services in Kubernetes (a minimal example follows the quote below).
From Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a
network service. With Kubernetes, you don't need to modify your
application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them.
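As a minimal sketch (the service name, label and port are placeholders), the Eureka registration is replaced by a plain Service definition:
```yaml
# A ClusterIP Service that load-balances across all pods labelled app=orders.
# Other pods in the same namespace can reach it simply as http://orders:8080.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders         # matches the labels on the backing pods
  ports:
    - port: 8080        # port the Service listens on
      targetPort: 8080  # container port on the pods
```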
And for API gateway, you can think about Ingress in Kubernetes.
There are different Ingress Controller implementations for Kubernetes. I'm using the Ambassador API gateway implementation.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
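And a minimal Ingress sketch for the gateway part (the hostname and backend service are assumptions; controller-specific annotations depend on which implementation you pick):
```yaml
# Routes external HTTP traffic for api.example.com/orders to the "orders" Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 8080
```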

Multiple environments for websites in Kubernetes

I am a newbie in Kubernetes.
I have 19 LANs with 190 machines in total.
Each of the 19 LANs has 10 machines and one exposed IP.
I have different websites/apps and their environments, which are assigned across the LANs.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) while keeping them isolated.
Is that possible?
I only have a vague idea of what you want to achieve, so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik. It has great support for HTTP and Kubernetes.
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and headless Service for stateful workloads that need peer discovery. This is obviously over-generalized and has many exceptions.
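For the stateless case, a rough sketch (image, names and ports are placeholders) of the Deployment-plus-Service pair:
```yaml
# Stateless workload: three identical replicas behind one Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25     # placeholder image
          ports:
            - containerPort: 80
---
# The Service gives the replicas one stable name and load-balances across them.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
```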
For managing different environments, you could split your resources into different namespaces.
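For example (environment names assumed), one namespace per environment keeps the resources isolated:
```yaml
# One namespace per environment (dev, QA, prod).
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```
The same manifests can then be deployed per environment with, e.g., kubectl apply -n dev -f app.yaml.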
As for a single portal to manage it all, I don't think anything like that exists, but I might be wrong. That said, depending on your workflow, you could build your own portal using the Kubernetes API, though it requires a good understanding of Kubernetes itself.

Disadvantages of using Eureka for service discovery with Kubernetes

Context
I am deploying a set of services that are containerised using Docker into AWS. No matter which deployment solution is chosen (e.g. raw EC2, ECS, Elastic Beanstalk, Fargate), we will face the issue of "service discovery".
To name just a few of the options for service discovery that I've considered:
AWS Route 53 Service Registry
Kubernetes
Hashicorp Consul
Spring Cloud Netflix Eureka
Specifics Of My Stack
I am developing Java Spring Boot applications using Spring Cloud with the target deployment environment being AWS.
Given that my stack is Spring based, Spring Cloud Eureka made sense to me while developing locally. It was easy to set up as a single node, it integrates well with the stack and ecosystem of choice, and it required very little setup.
Locally, we are using Docker Compose (not Swarm) to deploy services; one of the containers deployed is a single-node Eureka service discovery server.
However, when we progress outside of local development and into a staging or production environment, we are considering options like Kubernetes.
My Own Assessment Of Pros/Cons
AWS Route 53 Service Registry
Requires us to couple code specifically to AWS services. Not a problem per se; we are quite tied in anyway on other parts of the stack (SNS/SQS).
Makes running the stack locally slightly more difficult, as it relies on Route 53; I suppose we could open up a certain hosted zone for local development.
AWS native, no managing service registries or extra "moving parts".
Spring Cloud Eureka
The downside is that this requires us to deploy and manage a highly available service registry cluster, which needs more resources. Another "moving part" to manage.
The advantages are that it fits into our stack well (the Spring ecosystem, Spring Boot, Spring Cloud, Feign and Zuul all work well with it). It can also be run locally trivially.
I presume we need to configure the networking and registry zone to ensure that clients publish their host address rather than the Docker container's internal IP address. E.g. if service A is on host A and wants to talk to service B on host B, service B needs to advertise its EC2 address rather than some internal Docker IP.
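If I understand it correctly, that would amount to something like the following in each service's application.yml (the HOST_ADDRESS and HOST_PORT environment variables are assumptions, injected by whatever launches the container):
```yaml
# Make the Eureka client advertise the host's address/port instead of the
# container-internal IP (Spring Cloud Netflix Eureka instance properties).
eureka:
  instance:
    prefer-ip-address: false
    hostname: ${HOST_ADDRESS}       # e.g. the EC2 instance's address (assumed env var)
    non-secure-port: ${HOST_PORT}   # the port published on the host (assumed env var)
```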
Questions
If we use Kubernetes for orchestration, are there any disadvantages to using something like Spring Cloud Eureka over the built-in service discovery options described here: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services
Given that Kube provides this, it seems suboptimal to then use Eureka, deployed via Kube, to perform discovery. I presume Kube can make some optimisations that affect availability and stability that might not be possible with Eureka. For example, Kube knows when it is deploying a new service, whereas Eureka has to rely on heartbeats/health checks, and depending on how those are configured (e.g. their frequency) this could result in stale records; I presume Kube might not suffer from this for planned service shutdowns/restarts. I guess it still does for unplanned failures such as a host failure or network partition.
Does anyone have any advice on this? Do people use platforms like Kubernetes but rely on other mechanisms for service discovery rather than those provided by Kube? Is there a good reason to do one or the other?
Possible Challenges I Anticipate
We could replace Eureka, but relying on Kube to perform discovery will mean that we need to run Kube locally to deploy, whereas currently we have a simple, tiny docker-compose file. Also, I'll have to look at how easy it will be to ensure that Ribbon, Zuul and Feign play nicely with this.
Currently we have Ribbon configured with a Eureka client so that service A can refer to service B simply as "service-b", for example, and have Ribbon resolve a healthy host via the Eureka client. I guess we can configure Ribbon not to use Eureka and instead use an external Kube Service name, which will be resolved by Kube DNS at runtime...
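If that works the way I think it does, the Ribbon side would look roughly like this in application.yml (a sketch; the namespace and port are assumptions):
```yaml
# Disable Eureka-backed server lists for Ribbon...
ribbon:
  eureka:
    enabled: false

# ...and point the "service-b" client at the Kubernetes Service DNS name instead.
service-b:
  ribbon:
    listOfServers: service-b.default.svc.cluster.local:8080
```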
Final Note
Thanks in advance for any contribution or advice. I know this might elicit a primarily opinion-based response, but I am hoping someone can provide objective guidance on when one solution might be preferable to another.
Service discovery is something you get out of the box with Kubernetes, so an additional external registry in your platform would be another application to maintain and deploy, and another potential point of failure. So I would stick with the service discovery provided by Kubernetes.

Kubernetes VIP using Istio

I am new to Kubernetes and trying to move from VM-based services to Kubernetes.
Current approach:
We have multiple VMs and run services on each VM. The services run on multiple VMs and have a VIP in front of them. Clients access the VIP, which round-robins across the available service instances.
I have read about Istio and Ingress and hope the same thing can be done with Istio. I have set up a local Minikube cluster and am exploring all the use cases. I was able to deploy my service with a scaling factor of 2. Now I would like to access my service using a VIP. I am not sure how to create a VIP and expose it to other services inside the Kubernetes cluster as well as to services running outside the cluster. Can I use the same existing VIP? Or do I need any extra setup to create a VIP in Kubernetes with a particular service name?
Thanks
Please note that Istio is an additional layer on top of other frameworks, including Kubernetes. In your case you should port your application to Kubernetes first, and then add Istio if needed.
Porting to Kubernetes:
Instead of a VIP, you define a Kubernetes service. You change the code or configure your microservices to use the defined Kubernetes services instead of the VIPs.
To access your services from the outside, you define a Kubernetes Ingress.
This probably should be enough to make your application run on Kubernetes.
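As a minimal sketch (the service name and ports are assumed), the Service that takes over the VIP's role looks like this:
```yaml
# The Service's stable cluster IP / DNS name plays the role of the old VIP and
# round-robins across the replicas of your deployment.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service    # matches the labels on your pods
  ports:
    - port: 80         # port clients inside the cluster use
      targetPort: 8080 # container port of your service
```
Other workloads in the cluster then call it as http://my-service; clients outside the cluster go through the Ingress (or a Service of type LoadBalancer/NodePort) instead.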
Once you have ported your application to Kubernetes, you can add Istio; see the Istio Quick Start Guide. Istio can provide you with advanced routing, logging and monitoring, policy enforcement, traffic encryption between services, and support for various microservices patterns. See more at istio.io.

Should we run a Consul container in every Pod?

We run our stack on the Google Cloud Platform (hosted Kubernetes, GKE) and have a Consul cluster running outside of K8s (regular GCE instances).
Several services running in K8s use Consul, mostly for its CP K/V store and advanced locking, not so much for service discovery so far.
We recently ran into some issues with using the Consul service discovery from within K8s. Right now our apps talk directly to the Consul Servers to register and unregister services they provide.
This is not the recommended best practice; usually Consul clients (i.e. apps using Consul) should talk to a local Consul agent. In our setup there are no local Consul agents.
My question: should we run local Consul agents as sidecar containers in each pod?
IMHO this would be a huge waste of resources, but it would match the Consul best practices better.
I tried searching on Google, but all posts about Consul and Kubernetes talk about running Consul in K8s, which is not what I want to do.
As the official Consul Helm chart and the documentation suggest, the standard approach is to run a DaemonSet of Consul clients and then use the Connect sidecar injector to inject sidecars into your pods simply by adding an annotation to the pod spec. This should handle all of the boilerplate and will be in line with best practices.
Consul: Connect Sidecar; https://www.consul.io/docs/platform/k8s/connect.html
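A hedged sketch of what that looks like (the Helm values keys and the annotation are as I remember them from the Consul Helm chart docs; please verify against the current chart):
```yaml
# Helm values: run Consul clients as a DaemonSet on every node and enable the
# Connect sidecar injector.
client:
  enabled: true
connectInject:
  enabled: true
```
Workloads then opt in to sidecar injection via an annotation on their pod template:
```yaml
metadata:
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
```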