I have a scenario where one of our services exposes WCF hosts that receive callbacks from an external service.
These hosts are dynamically created and there may be hundreds of them. I need to ensure that they are all up and running on the node before the node starts receiving requests, so that they don't receive failures; this is critical.
Is there a way to ensure that the service doesn't receive requests until I say it's ready? In Cloud Services I could do this by putting all of this code inside the OnStart method.
My initial thought is that I might be able to bootstrap this before I open the communication listener - in the hope that the fabric manager only sends requests once this has been done, but I can't find any information on how this lifetime is handled.
There's no "fabric manager" that controls network traffic between your services within the cluster. If your service is up, clients or other services inside the cluster can choose to try to connect to it if they know its address. With that in mind, there are two things you have control over here:
The first is whether or not your service's endpoint is discoverable by other services or clients. This is the point at which your service endpoint is registered with Service Fabric's Naming Service, which occurs when your ICommunicationListener.OpenAsync method returns. At that point, the service endpoint is registered and others can discover it and attempt to connect to it. Of course you don't have to use the Naming Service or the ICommunicationListener pattern if you don't want to; your service can open up an endpoint whenever it feels like it, but if you don't register it with the Naming Service, you'll have to come up with your own service discovery mechanism.
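For example, here is a rough sketch of an ICommunicationListener that does not return from OpenAsync until every WCF host is open, so nothing gets registered (and therefore nothing is discoverable) before the node is actually ready. The CreateCallbackHosts helper and the listening address below are hypothetical placeholders:

```csharp
using System.Collections.Generic;
using System.Fabric;
using System.ServiceModel;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

// Sketch only: open every dynamically created WCF host before returning,
// so the endpoint isn't registered with the Naming Service until all hosts are up.
class WcfCallbackListener : ICommunicationListener
{
    private readonly List<ServiceHost> hosts = new List<ServiceHost>();

    public async Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // CreateCallbackHosts() is a hypothetical helper that builds the
        // (possibly hundreds of) ServiceHost instances for this node.
        foreach (ServiceHost host in CreateCallbackHosts())
        {
            await Task.Run(() => host.Open(), cancellationToken);
            hosts.Add(host);
        }

        // Only when this returns does Service Fabric register the address with
        // the Naming Service, making the service discoverable to callers.
        string node = FabricRuntime.GetNodeContext().IPAddressOrFQDN;
        return $"net.tcp://{node}:8080/callbacks"; // illustrative address
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        foreach (ServiceHost host in hosts) host.Close();
        return Task.CompletedTask;
    }

    public void Abort()
    {
        foreach (ServiceHost host in hosts) host.Abort();
    }

    private IEnumerable<ServiceHost> CreateCallbackHosts()
    {
        // Hypothetical: build one ServiceHost per dynamically created callback endpoint.
        yield break;
    }
}
```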
The second is whether or not the node on which your service is running is receiving traffic from the Azure Load Balancer (or any load balancer if you're not hosting in Azure). This has less to do with Service Fabric and more to do with the load balancer itself. In Azure, you can use a load balancer probe to determine whether or not traffic should be sent to nodes.
EDIT:
I added some info about the Azure Load Balancer to our documentation, hope this helps: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
If I have an ECS service, which can scale out, or always runs more than one task, is load balancing handled by the built-in service discovery?
I mean, if my service A is running 3 tasks, and another service B is making requests using the domain name, generated for A by service discovery, how will it decide where a request goes?
From this Medium article, it appears that Route 53 uses its own catalog of healthy instances to decide which task a request is routed to. So yes, load balancing will occur in that way.
I've followed the steps from Microsoft to create a Multi-Node On-Premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When I have been connecting to the cluster I have used the IP Address of one of the nodes. Doing that, I can connect via Powershell using Connect-ServiceFabricCluster nodename:19000 and I can connect to the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if I hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is for my local cluster. I tried connecting to my sample cluster: Connect-ServiceFabricCluster sampleCluster.domain.local:19000 but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported On-Premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help; I'll paste the relevant contents here in case it becomes unavailable.
Reverse Proxy — When you provision a Service Fabric cluster, you have the option of installing the Reverse Proxy on each of the nodes in the cluster. It performs the service resolution on the client's behalf and forwards the request to the correct node that contains the application. In the majority of cases, services running on Service Fabric run on only a subset of the nodes. Since the load balancer will not know which nodes contain the requested service, the client libraries have to wrap the requests in a retry loop to resolve service endpoints. Using the Reverse Proxy addresses this issue, since it runs on each node and knows exactly which nodes the service is running on. Clients outside the cluster can reach the services running inside the cluster via the Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
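To make the article's "without any additional configuration" point concrete, here is a minimal sketch of a client calling a service through the reverse proxy. The application and service names (MyApp/MyApi), the path, and the default reverse proxy port 19081 are assumptions for illustration:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: call a Service Fabric service through the reverse proxy.
// The reverse proxy resolves which node currently hosts the service and
// forwards the request, so no resolve-and-retry loop is needed in the client.
class ReverseProxyClientSample
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Format: http://<host>:19081/<ApplicationName>/<ServiceName>/<path>
            string url = "http://localhost:19081/MyApp/MyApi/api/values";
            string body = await http.GetStringAsync(url);
            Console.WriteLine(body);
        }
    }
}
```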
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between the nodes running it. So health probes are necessary too, so that the load balancer knows which nodes are viable targets for traffic.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080 with a TCP probe on port 19080 every 5 seconds with a 2 count error threshold.
2. LBRule on TCP/19000 with a TCP probe on port 19000 every 5 seconds with a 2 count error threshold.
What you need to add to make this forward-facing is a rule that forwards port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path on the service and expects a 200 response.
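For the probe target, any endpoint that returns 200 when the node is ready will do. Here is a bare-bones sketch; the port and the /health/ path are placeholders:

```csharp
using System.Net;
using System.Threading.Tasks;

// Sketch of a trivial health endpoint for the load balancer's HTTP probe to hit.
// Return 200 only when this node is genuinely ready to take traffic.
class HealthProbeEndpoint
{
    static async Task Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/health/"); // placeholder port/path
        listener.Start();

        while (true)
        {
            HttpListenerContext context = await listener.GetContextAsync();
            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}
```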
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again to using something like API Management to further reverse proxy it to SSL. What a mess but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.
I am trying to build a service that needs to be connected to a socket over the internet without downtime. The service will be reading and publishing info to a message queue, messages should be published only once and in the order received.
For this reason I thought of deploying it to Kubernetes, where I can automatically have multiple replicas in case one process fails; i.e. just one process (pod) should be running at all times, not multiple pods publishing the same messages to the queue.
These requests need to be routed through a proxy with a static IP, otherwise I cannot connect to the socket. I understand this may not be a standard use case, since a proxy is normally used as a reverse proxy or load balancer such as Nginx.
How is it possible to build this kind of forward proxy in Kubernetes?
I will be deploying this on Google Container Engine.
Assuming you're happy to use Terraform, you can use this:
https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway
However, there's one caveat, and that is that it may affect inbound traffic to other clusters in that same region/zone.
Is a LoadBalancer what you need?
To create an external load balancer in Kubernetes, you can see this doc.
Recently I started working with microservices. I wrote a library for service discovery, using Redis to store every service's URL and port number along with a TTL value for the entry. It turned out to be an expensive approach, since every cross-service call to another service first required a call to Redis. Caching didn't seem like a good idea, since the services won't be up all the time and there can be downtime as well.
So I wanted to write a separate microservice that could take care of the orchestration part. For this I need to figure out a really low-level network protocol to take care of the exchange of heartbeats (which would help me figure out if any service instance becomes unavailable). How do applications like zookeeperClient and redisClient take care of heartbeats?
Moreover, what is the industry's preferred protocol for cross-service calls?
I have been calling REST APIs over HTTP and have eliminated every possibility of joins across different collections.
Is there a better way to do this?
Thanks.
I think the term "Orchestration" is not a good fit for what you are asking. From what I've encountered so far in the microservices world, the term "Orchestration" is used when a complex business process is involved, not for service discovery. What you need is a service registry combined with a load balancer. You can find all the information you need here. Here are some relevant extracts from that great article:
There are two main service discovery patterns: client‑side discovery and server‑side discovery. Let’s first look at client‑side discovery.
The Client‑Side Discovery Pattern
When using client‑side discovery, the client is responsible for determining the network locations of available service instances and load balancing requests across them. The client queries a service registry, which is a database of available service instances. The client then uses a load‑balancing algorithm to select one of the available service instances and makes a request.
The network location of a service instance is registered with the service registry when it starts up. It is removed from the service registry when the instance terminates. The service instance’s registration is typically refreshed periodically using a heartbeat mechanism.
Netflix OSS provides a great example of the client‑side discovery pattern. Netflix Eureka is a service registry. It provides a REST API for managing service‑instance registration and for querying available instances. Netflix Ribbon is an IPC client that works with Eureka to load balance requests across the available service instances. We will discuss Eureka in more depth later in this article.
The client-side discovery pattern has a variety of benefits and drawbacks. This pattern is relatively straightforward and, except for the service registry, there are no other moving parts. Also, since the client knows about the available service instances, it can make intelligent, application-specific load-balancing decisions, such as using consistent hashing. One significant drawback of this pattern is that it couples the client with the service registry. You must implement client-side service discovery logic for each programming language and framework used by your service clients.
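Tying this back to the Redis registry you already have, a client-side discovery round trip could look roughly like this. The key layout, TTL, and the random instance pick are illustrative assumptions, not a recommended production design:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

// Sketch of client-side discovery over a Redis registry.
// Instances heartbeat their own key with a TTL; clients list the live keys
// and pick one instance to call.
class RedisDiscovery
{
    private static readonly Random Rng = new Random();

    // Registration side: re-announce periodically so the key expires on its own
    // if the instance dies (the heartbeat mechanism described above).
    public static async Task HeartbeatAsync(IDatabase db, string service, string address,
                                            CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            await db.StringSetAsync($"services:{service}:{address}", address,
                                    expiry: TimeSpan.FromSeconds(15));
            await Task.Delay(TimeSpan.FromSeconds(5), token);
        }
    }

    // Client side: query the registry and apply a load-balancing choice
    // (random here; round robin or consistent hashing would also work).
    public static string Resolve(IServer server, IDatabase db, string service)
    {
        string[] instances = server.Keys(pattern: $"services:{service}:*")
                                   .Select(key => (string)db.StringGet(key))
                                   .Where(address => address != null)
                                   .ToArray();
        return instances.Length == 0 ? null : instances[Rng.Next(instances.Length)];
    }
}
```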
The Server‑Side Discovery Pattern
The client makes a request to a service via a load balancer. The load balancer queries the service registry and routes each request to an available service instance. As with client‑side discovery, service instances are registered and deregistered with the service registry.
The AWS Elastic Load Balancer (ELB) is an example of a server-side discovery router. An ELB is commonly used to load balance external traffic from the Internet. However, you can also use an ELB to load balance traffic that is internal to a virtual private cloud (VPC). A client makes requests (HTTP or TCP) via the ELB using its DNS name. The ELB load balances the traffic among a set of registered Elastic Compute Cloud (EC2) instances or EC2 Container Service (ECS) containers. There isn’t a separate service registry. Instead, EC2 instances and ECS containers are registered with the ELB itself.
HTTP servers and load balancers such as NGINX Plus and NGINX can also be used as a server-side discovery load balancer. For example, this blog post describes using Consul Template to dynamically reconfigure NGINX reverse proxying. Consul Template is a tool that periodically regenerates arbitrary configuration files from configuration data stored in the Consul service registry. It runs an arbitrary shell command whenever the files change. In the example described by the blog post, Consul Template generates an nginx.conf file, which configures the reverse proxying, and then runs a command that tells NGINX to reload the configuration. A more sophisticated implementation could dynamically reconfigure NGINX Plus using either its HTTP API or DNS.
Some deployment environments such as Kubernetes and Marathon run a proxy on each host in the cluster. The proxy plays the role of a server‑side discovery load balancer. In order to make a request to a service, a client routes the request via the proxy using the host’s IP address and the service’s assigned port. The proxy then transparently forwards the request to an available service instance running somewhere in the cluster.
The server‑side discovery pattern has several benefits and drawbacks. One great benefit of this pattern is that details of discovery are abstracted away from the client. Clients simply make requests to the load balancer. This eliminates the need to implement discovery logic for each programming language and framework used by your service clients. Also, as mentioned above, some deployment environments provide this functionality for free. This pattern also has some drawbacks, however. Unless the load balancer is provided by the deployment environment, it is yet another highly available system component that you need to set up and manage.
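For completeness, the server-side pattern is essentially a router that does the registry lookup on the client's behalf. A stripped-down sketch follows (GET-only, no error handling, and the ResolveInstance call is a placeholder standing in for whatever registry you use, such as Redis, Eureka, or Consul):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of server-side discovery: a tiny HTTP router that asks a registry
// for a live instance and forwards the request to it.
class DiscoveryRouter
{
    static readonly HttpClient http = new HttpClient();

    static async Task Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8000/"); // placeholder front-door port
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = await listener.GetContextAsync();

            // Placeholder registry lookup; returns the address of one live instance.
            string backend = ResolveInstance("orders-service");

            HttpResponseMessage response =
                await http.GetAsync(backend + ctx.Request.Url.PathAndQuery);
            byte[] body = await response.Content.ReadAsByteArrayAsync();

            ctx.Response.StatusCode = (int)response.StatusCode;
            await ctx.Response.OutputStream.WriteAsync(body, 0, body.Length);
            ctx.Response.Close();
        }
    }

    static string ResolveInstance(string service) => "http://10.0.0.12:8080"; // placeholder
}
```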
I'm attempting to transition existing applications to Kubernetes that work as follows:
1. An outside service calls our application through a load balancer with a new session.
2. Our application returns the IP of the server that processed the request.
3. All subsequent calls from the outside service for that session are made directly to the same server (bypassing the load balancer).
Is there any way to do this in Kubernetes? I understand that pod IPs are not exposed externally; is there some way to expose them directly?
Also, I don't think I can use sessionAffinity="ClientIP" because the requests will all come in from the same place. Is there a way to write custom sessionAffinity type?
It kind of depends on how your network is set up and what you mean by an "outside service", but the answer is most likely "no".
If you're running using one of the default cluster creation scripts in a cloud environment, pod IP addresses are not routable from the Internet, so any service not in the same private network as your cluster won't be able to talk directly to pods.
However, depending on what cloud provider you're on, you'll likely get the behavior that you want anyways by just continuing to make all calls through to the external IP of a service of type LoadBalancer. For instance, on the Google Cloud Platform, the cloud load balancer that gets created for such services by default maintains connection affinity by 5-tuple (src ip and port, dst ip and port, L4 protocol), which sounds like it's what you want, since you want balancing per session rather than per IP.
As for creating a new sessionAffinity type, that's not an easy thing to extend, since it requires changing Kubernetes source code. If that's really a path you want to take, it's likely that you'd want to run your own load balancer within your cluster rather than relying on the built-in load balancing.
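If you do go down the road of running your own load balancer, session affinity can be as simple as hashing the session identifier onto the backend list. A sketch follows; how you extract the session id from a request is up to your protocol, and the FNV-style hash is just one stable choice:

```csharp
using System.Collections.Generic;

// Sketch: pick a backend per session by hashing the session id, so every
// request carrying the same session id lands on the same pod/server.
// The backend list and session id extraction are placeholders.
class SessionAffinityPicker
{
    private readonly List<string> backends;

    public SessionAffinityPicker(List<string> backends)
    {
        this.backends = backends;
    }

    public string PickBackend(string sessionId)
    {
        // FNV-1a hash of the session id, reduced modulo the backend count.
        // Note: this remaps sessions when the backend list changes; a
        // consistent-hash ring would reduce that churn.
        uint hash = 2166136261;
        foreach (char c in sessionId)
        {
            hash ^= c;
            hash *= 16777619;
        }
        return backends[(int)(hash % (uint)backends.Count)];
    }
}
```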