Service discovery use case - apache-zookeeper

I am new to service discovery platforms like ZooKeeper and etcd. I have the following use case and want to know if either ZooKeeper or etcd can be used to solve it.
The use case is this:
A pub/sub model with sticky servers - I have 10 independent server instances (take Redis), instance1, instance2, ..., instance10, and when a client connects, it asks a service discovery service which Redis instance it should connect to.
This service has some logic, say hash(userid) % 10, and it returns the IP address of that Redis server.
How can I put up such a service? Can this be achieved with a service discovery platform like ZooKeeper or etcd?
Any help is appreciated.
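For illustration, here is a minimal sketch of what such a lookup could look like on top of etcd, assuming the python-etcd3 client and that each Redis instance registers its address under an agreed key prefix (the prefix, key names and addresses here are all hypothetical):

```python
import hashlib

import etcd3  # assumption: the python-etcd3 client library

etcd = etcd3.client(host="127.0.0.1", port=2379)  # assumption: local etcd

def pick_redis_instance(user_id: str) -> str:
    """Return the address of the Redis instance this user should use."""
    # Assumption: each Redis instance registered itself under
    # /services/redis/<instance-name> with "ip:port" as the value,
    # ideally attached to a lease so dead instances expire automatically.
    instances = sorted(
        value.decode() for value, _meta in etcd.get_prefix("/services/redis/")
    )
    if not instances:
        raise RuntimeError("no Redis instances registered")
    # Use a stable hash (Python's built-in hash() is salted per process),
    # so the same user always maps to the same instance while the
    # instance list is unchanged - the hash(userid) % N from the question.
    index = int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % len(instances)
    return instances[index]

print(pick_redis_instance("user-42"))  # e.g. "10.0.0.3:6379"
```

ZooKeeper would work the same way conceptually: instances register ephemeral znodes under a parent path, and the lookup service lists the children and hashes the user id onto them.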

Related

Artemis k8s cluster client connection handling

I am using the Artemis Cloud operator for deploying ActiveMQ Artemis in a k8s cluster. The number of replicas configured is 3. Queues are expected to be created by client applications. The operator creates a headless service, plus a service for each pod in the cluster setup.
Whenever a client connects to a pod, it creates a queue in that broker pod. So if a client connects to the 3 brokers at random times, three queues get created, one in each pod. When a producer then sends a message to the queue, it is sent only to the connected pod; it won't be present in the other broker pods (no replication of messages).
My question is: what service name should client applications use in order to connect to the Artemis pods and also maintain session affinity? In other words, what should be done to make a client connect to the same broker on every connection attempt (and avoid duplicate queue creation)?
What I currently use is a Kubernetes ClusterIP service I created that splits traffic across the pods, and queues are created via a STOMP producer.
I'm not sure which Helm chart it uses in the background, but inside the Kubernetes app you will connect using the service name instead of the ClusterIP.
The service name may start with the Helm chart name. I was checking and referring to this chart for reference: https://github.com/deviceinsight/activemq-artemis-helm (yours could be different).
It looks like it creates the service with clusterIP: None (a headless service), so connecting to the existing service is not working; I suspect it currently returns all the pods' IPs.
Option 2, if the above does not work, give this a try:
You can create a new service of type ClusterIP with a different name; everything else, like the ports, stays the same.
In the service you will notice the AMQP, MQTT and other ports, so your app will connect to the new service with the port configured as required, for example active-mq-service:61613.
If everything is working fine for you and you are just looking for session affinity, you can add this config to your service and it will start managing session affinity (see the manifest sketch below):
sessionAffinity: ClientIP
If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately (the default value is 10800, which works out to be 3 hours).
Ref doc: https://kubernetes.io/docs/concepts/services-networking/service/
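Putting the two suggestions together, here is a minimal sketch of such a new ClusterIP service with session affinity enabled (the service name, labels and ports are assumptions; match them to what the operator actually created):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: active-mq-service        # the hypothetical name used in the example above
spec:
  type: ClusterIP
  selector:
    app: artemis-broker          # assumption: must match your broker pods' labels
  ports:
    - name: stomp
      port: 61613
      targetPort: 61613
  sessionAffinity: ClientIP      # route a given client IP to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # default: 3 hours
```

Note that ClientIP affinity keys on the client's source IP, so multiple clients behind the same NAT will all land on the same broker.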

Register a service with multiple instances in Consul

I have a couple of microservices that I want to register in Consul, so that they can find each other and communicate.
Everything runs on Docker Compose.
I am wondering how that would work if one of the two services has multiple replicas. How does Consul (or Docker Compose) deal with that? Is there some sort of internal load balancing, or what?
Consul supports registering multiple instances/replicas of a service. When a consumer queries the Consul catalog, Consul will return information for each of the registered service instances.
If the consumer/client is querying Consul via DNS, the client's DNS resolver will ultimately be responsible for choosing the endpoint to connect to from the list of IPs in the DNS response.
If the client is querying Consul via the HTTP API (e.g., /v1/agent/health/service/:service), the client must implement its own logic to select an upstream instance from the list of instances returned in the API response.
See the query services section of the Register a Service with Consul Service Discovery tutorial for more info.
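As an illustration of that client-side selection, here is a minimal sketch that queries the health endpoint over HTTP and picks an instance at random (the Consul address and service name are assumptions):

```python
import random

import requests  # assumption: the requests library is available

CONSUL_ADDR = "http://localhost:8500"  # assumption: local Consul agent

def pick_instance(service_name: str) -> str:
    """Ask Consul for healthy instances and pick one client-side."""
    resp = requests.get(
        f"{CONSUL_ADDR}/v1/health/service/{service_name}",
        params={"passing": "true"},  # only instances passing health checks
    )
    resp.raise_for_status()
    entries = resp.json()
    if not entries:
        raise RuntimeError(f"no healthy instances of {service_name}")
    # Consul returns *all* registered instances; there is no server-side
    # load balancing, so the consumer selects one itself (random here).
    entry = random.choice(entries)
    service = entry["Service"]
    address = service["Address"] or entry["Node"]["Address"]
    return f"{address}:{service['Port']}"

print(pick_instance("my-service"))  # hypothetical service name
```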

Within a Kubernetes cluster catch outgoing requests from a Pod and redirect to a different target

I have a cluster with 3 nodes. On each node I have a frontend application running in a Pod and a backend application running in a separate Pod.
I send data from the frontend application to the backend application; to do this I utilise the ClusterIP Service and the k8s DNS resource.
I also have a function in my frontend where I send data to a separate service unrelated to my k8s cluster. I send this data using a standard AJAX request to a URL with a payload, i.e. http://my-seperate-service-unrelated-tok8.com.
All of this works correctly and the cluster operates as I want. I have this cluster deployed to GKE.

I now want to run this cluster locally using minikube, which I have been able to do. However, when I am running locally I do not want to send data to my external service; instead I want to forward it to either a new Pod I will create, or just not send it at all.


The problem here is that I need a proxy to intercept outgoing network traffic, check whether the outgoing request is the one I am looking for, and if it is, redirect it.
I understand each node running in a cluster has a kube-proxy service running within the node, which is used to forward traffic to the relevant services in the cluster.

I would like to either extend this service, or create a new proxy service where I can listen for outgoing traffic to a specific URL and redirect it.

Is this possible to do in a k8s cluster? I assume there is a Service I can create to listen for all outgoing requests and redirect specific requests based on rules I set.

I wasn't sure if k8s clusters have a Service already configured that I can simply add to - that's why I thought of the kube-proxy. Would anyone be able to advise on this?

I wanted to add this proxy so I don't have to change my code when it's run locally in minikube or deployed to GKE.


Any help is greatly appreciated. Thanks!
I made a tool that helps you forward a service to another service, a local port, a service from another cluster, etc.
This way you can keep exactly the same URLs, ports and code, but the underlying service gets "replaced"; if I understand correctly, this is what you are looking for.
Here is a quick example of a staging service being replaced with my local port 3000.
This is the repository with more info and examples: linker-tool
If you are interested, let me know if you need help or have any questions.

How to discover services deployed on kubernetes from the outside?

The User Microservice is deployed on Kubernetes.
The Order Microservice is not deployed on Kubernetes, but is registered with Eureka.
My question:
How can the Order Microservice discover and access the User Microservice through Eureka?
First, let's take a look at the problem itself:
If you use an overlay network as the Kubernetes CNI, the problem is that it creates an isolated network that's not reachable from the outside (e.g. Flannel). If you have a network like that, one solution would be to move the Eureka server into Kubernetes, so Eureka can reach both the service inside Kubernetes and the service outside of Kubernetes.
Another solution would be to tell Eureka where it can find the service instead of relying on auto-discovery, but for that you also need to make the service externally available with a Service of type NodePort, HostPort or LoadBalancer, or with an Ingress (a NodePort sketch is shown below). I'm not sure it's possible, but section 11.2 in the following doc could be worth a look: Eureka Client Discovery.
The third solution would be to use a CNI that doesn't use an overlay network, like Romana, which will make the service externally routable by default.
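For the second solution, here is a minimal sketch of exposing the User Microservice with a NodePort Service so an externally running Eureka/Order service can reach it (names, ports and labels are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service-external    # hypothetical name
spec:
  type: NodePort
  selector:
    app: user-service            # assumption: matches the User pods' labels
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080            # reachable from outside as <node-ip>:30080
```

The Order Microservice (or a static Eureka registration) would then point at <any-node-ip>:30080.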

How to connect different deployments in Kubernetes?

I have two back-end deployments, a REST server and a database server, each running on specific ports. The REST server internally calls the database server.
Now how do I refer to my database server deployment from my REST server deployment so that they can communicate with each other?
First, define a Service for your DB server; that will create a sort of load balancer (an internal kube integration, based on iptables in most cases). With that, you will be able to refer to it by service name, or by an FQDN like mydbsvc.namespace.svc.cluster.local, which will resolve to the "Cluster IP" of that load balancer.
Then it's just a matter of regular app config to point it at your DB on mydbsvc, preferably by means of an env variable, say DB_HOST=mydbsvc, set in your REST API deployment manifest (pod template envs).
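Here is a minimal sketch of that setup; the DB port, image and labels are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mydbsvc
spec:
  selector:
    app: mydb                    # assumption: labels on the DB server pods
  ports:
    - port: 5432                 # assumption: a PostgreSQL-style DB port
      targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  template:
    metadata:
      labels:
        app: rest-api
    spec:
      containers:
        - name: rest-api
          image: example/rest-api:latest   # hypothetical image
          env:
            - name: DB_HOST
              value: mydbsvc               # resolves via cluster DNS
```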
Expose your deployments as services, for example with kubectl expose ...
Connect/allow these to communicate by creating network policies (see the sketch below).
The Service object (of the database) will give you a virtual (stable) IP. Depending on the type of service, your REST code can call the DB via ClusterIP/ExternalName/ExternalIP/DNS.
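For the network-policy step, a minimal sketch that allows only the REST pods to reach the DB pods (labels and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rest-to-db
spec:
  podSelector:
    matchLabels:
      app: mydb                  # assumption: the DB pods' labels
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rest-api      # assumption: the REST pods' labels
      ports:
        - protocol: TCP
          port: 5432
```

Note that NetworkPolicy objects only take effect if the cluster's CNI plugin enforces them.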