Consul and HAProxy for Service Discovery - Which should I use?

I know that Consul is a tool for service discovery in the era of microservices. But HAProxy was invented before Consul, so why do we need Consul for service discovery? Or is Consul more powerful than HAProxy? Are there any comparisons between Consul and HAProxy? Please give me some advice: which should I use, or can I use both of them?

HAProxy is basically a high-performance TCP/HTTP load balancer, while Consul provides both a DNS and an HTTP interface for doing service discovery (Consul also provides other features, such as a key/value store).
Here is an article on how you can use HAProxy and Consul together.
Another article here refers to dynamic load balancing using both of these tools.
If you are using Docker, you might want to look at this basic setup article.
PS: I haven't tried using Consul together with HAProxy, though I am working on two separate POCs that use them in separate situations. I hope these articles will help you.

You can use Consul to find out which IPs and ports your service instances are running on, and then generate the HAProxy config based on that info. Client communication with the backends goes via the HAProxy load balancer, so clients don't need to know about internal IPs, ports, or even Consul. See also this related question regarding service discovery.
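One common way to wire the two together is consul-template, which watches Consul and re-renders the HAProxy config whenever service instances come and go. A minimal sketch, assuming a service registered in Consul under the hypothetical name "web":

```
# haproxy.cfg.ctmpl -- rendered by consul-template; the "web"
# service name is hypothetical, substitute your own.
frontend http-in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin{{ range service "web" }}
    server {{ .Node }} {{ .Address }}:{{ .Port }} check{{ end }}
```

You would then run something like consul-template -template "haproxy.cfg.ctmpl:/etc/haproxy/haproxy.cfg:systemctl reload haproxy" so that HAProxy is reloaded each time the rendered config changes.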

Related

Use case for service discovery tools when the infrastructure is built with Kubernetes

I am learning Kubernetes and have developed a good knowledge of it. However, I am not able to understand why, and in which cases, one would use service discovery tools when the infrastructure is on Kubernetes.
I was asked during an interview which service discovery software I would use for microservices. I am not sure why one would need a separate service discovery tool when Kubernetes has Service objects that can be referenced by name.
Has anyone come across a case where they were developing microservices on Kubernetes and needed a separate service discovery tool, say etcd?
Yes, there could be many more cases for setting up your own service discovery. One in particular is a multi-cluster setup with k8s. You can look at how Submariner (a tool for connecting several k8s clusters with an L3/L4 tunnel) utilizes CoreDNS to add cross-cluster DNS service discovery.
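For a sense of what that looks like: Submariner's Lighthouse component implements the Kubernetes Multi-Cluster Services (MCS) API, where you export a Service from one cluster and the other connected clusters can resolve it over DNS. A rough sketch (service and namespace names are hypothetical):

```yaml
# Exporting an existing Service for cross-cluster discovery; once
# applied, other clusters in the set can resolve
# nginx.demo.svc.clusterset.local via Lighthouse's DNS.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx       # must match the name of an existing Service
  namespace: demo
```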

Using an existing microservice architecture with Kubernetes

I have an existing microservice architecture that uses Netflix Eureka and Zuul services.
I've deployed a pod that successfully registers on the discovery server, but when I hit the API it times out. My guess is that the container IP is handed over when the pod registers on the discovery server, which is why it is not accessible.
Is there a way to either map the correct address or redirect the call to the proper URL? I'm looking for an easy way, as this needs to be done for multiple services.
I think you should rethink your design the Kubernetes way! Eureka (service discovery) and the Zuul server (API gateway / load balancer) are extra services that you really don't need on the Kubernetes platform.
For service discovery and load balancing, you can use Services in Kubernetes.
From Kubernetes documentation:
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes, you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them.
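For illustration, a minimal Service manifest (app name and ports are hypothetical):

```yaml
# Gives the pods labeled app=orders a stable cluster IP and DNS name
# (orders.default.svc.cluster.local) and load-balances across them.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the containers listen on
```

In-cluster clients then simply call http://orders instead of tracking instance IPs through Eureka.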
And for the API gateway, you can think about Ingress in Kubernetes.
There are different Ingress controller implementations for Kubernetes. I'm using the Ambassador API gateway implementation.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
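A minimal Ingress sketch playing the role Zuul did, routing external traffic to the Service above (the hostname and path are hypothetical, and an Ingress controller must be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: orders   # the Service defined above
                port:
                  number: 80
```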

How to expose ports 80 and 443 to the Internet with Kubernetes like I did with docker-compose?

From what I understand, exposing pods or deployments can be achieved with a NodePort, a ClusterIP, or a LoadBalancer service.
Coming from the docker-compose world, my stack was quite simple: I have all my applications running on a private Docker network, plus a reverse proxy (Caddy or NGINX) holding the only port mappings allowed in my stack, :80 and :443 (and it can of course reach the private network).
So basically: Internet ----> Caddy ----> private applications in a docker-compose stack.
Q1: How can I do such things with Kubernetes in a "bare-metal" context? I mean, if I do not want to use a cloud load balancer provider?
Q2: Is it that Kubernetes was never built to expose applications like this? Does it automatically depend on a cloud provider?
Q1: You might want to look into https://metallb.universe.tf/. It's a load balancer that works just like the common cloud load balancers, but on your local cluster. It's fairly easy to set up and works great with any reverse proxy.
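As a rough sketch of the setup, MetalLB in layer-2 mode just needs a pool of addresses to hand out to LoadBalancer Services (the address range here is hypothetical; use addresses you control on your local network):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # hypothetical local range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: local-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - local-pool
```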
Q2: Kubernetes is primarily developed for cloud environments and is definitely easier to run in that context. Running it locally often requires additional tools to replicate the functionality of its cloud-service counterparts.
This depends on how your networking is set up. Kubernetes typically runs in its own network, and now you want traffic from outside the cluster to access applications within the cluster - typically through a "gateway" / "proxy".
For HTTP and HTTPS traffic, it is common for this "gateway" / "proxy" to be a reverse proxy that an Ingress controller configures according to the Ingress resources in the cluster. You need to use an Ingress controller that supports your network setup.
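On bare metal, that ingress controller is itself typically exposed on ports 80 and 443 through a LoadBalancer Service, which MetalLB (mentioned in the other answer) can back. A hedged sketch, where the namespace and labels are hypothetical and must match your controller's pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer    # MetalLB assigns an IP from its pool
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

The flow then mirrors the docker-compose one: Internet ----> LoadBalancer IP ----> ingress controller ----> private Services.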

Kubernetes VIP using Istio

I am new to Kubernetes and trying to move from VM-based services to Kubernetes.
Current approach:
I have multiple VMs and run services on each of them. Services run on multiple VMs and have a VIP in front of them. Clients access the VIP, which round-robins across the available services.
I read about Istio and Ingress and hope the same thing can be done using Istio. I have set up a local minikube cluster and am exploring all the use cases. I was able to deploy my service with a scaling factor of 2. Now I would like to access my service using a VIP. I am not sure how to create a VIP and expose it to other services in the Kubernetes cluster and to services running outside the cluster. Can I use the same existing VIP? Or do I need any extra setup to create a VIP in Kubernetes with a service name?
Thanks
Please note that Istio is an additional layer on top of other frameworks, including Kubernetes. In your case you should port your application to Kubernetes first, and then add Istio if needed.
Porting to Kubernetes:
Instead of a VIP, you define a Kubernetes Service. You change the code or configuration of your microservices to use the defined Kubernetes Services instead of the VIPs.
To access your services from the outside, you define a Kubernetes Ingress.
This should probably be enough to make your application run on Kubernetes.
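A ClusterIP Service is in effect the in-cluster replacement for the VIP: Kubernetes assigns it a stable virtual IP and DNS name and load-balances across the matching pods. A minimal sketch (names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-app
spec:
  type: ClusterIP       # stable virtual IP inside the cluster
  selector:
    app: legacy-app     # matches both replicas of the deployment
  ports:
    - port: 8080
      targetPort: 8080
```

In-cluster clients reach it at legacy-app.<namespace>.svc.cluster.local; external clients come in through the Ingress instead.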
Once you have ported your application to Kubernetes, you can add Istio; see the Istio Quick Start Guide. Istio can provide advanced routing, logging and monitoring, policy enforcement, traffic encryption between services, and also support for various microservices patterns. See more at istio.io.

How to access the Kubernetes service load balancer with Juju

Kubernetes creates a load balancer for each service automatically on GCE. How can I manage something similar on AWS with Juju?
A Kubernetes Service basically uses kube-proxy to handle the internal traffic, but that kube-proxy IP does not have access to the external network.
Is there a way to accomplish this when deploying a Kubernetes cluster with Juju?
I can't speak to Juju specifically, but Kubernetes supports Amazon ELB, so turning up a load balancer should work.
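On a cluster whose AWS cloud-provider integration is configured, a Service of type LoadBalancer provisions the ELB automatically; a minimal sketch (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # asks the cloud provider for an ELB
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

kubectl get service web then shows the external hostname of the ELB once it has been created.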
There is a way to accomplish this, but it depends on additional work landing in the Kubernetes charms from the ~Kubernetes charmer team.
While you can reasonably hook it into an AWS ELB, Juju charms strive to be as data-center agnostic as possible so they are easily portable between data centers and clouds. A 'one size fits most', if you will.
What I see being required is attaching the kube-proxy service to a load balancer service (such as NGINX) and using a template generator service like confd or consul-template to register/render the reverse proxy / load balancer configs for the services.
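For NGINX, that would look roughly like the consul-template approach sketched in the first answer, just rendering an upstream block instead (the "app" service name is hypothetical):

```
# nginx.conf.ctmpl -- consul-template renders the upstream block
# from the instances Consul knows about.
upstream app {
  {{ range service "app" }}server {{ .Address }}:{{ .Port }};
  {{ end }}
}

server {
  listen 80;
  location / {
    proxy_pass http://app;
  }
}
```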
At present the Kubernetes bundle only has an internally functioning network, and the networking model is undergoing some permutations. If you'd like to participate in this planning and dev cycle, the recommended place to participate is the Juju mailing list: juju@lists.ubuntu.com