Not able to call a web service hosted in Service Fabric - azure-service-fabric

I've published an OWIN-hosted web service to my remote cluster. I'm using a custom port 4444 created during cluster creation, and I see the AppPort rule for 4444. I'm also able to remote into one of the VMs and invoke the service locally. However, I'm still not able to call it remotely; it hangs for a while and doesn't return anything.

Start with this guide and make sure you have the Azure Load Balancer configured properly: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/#service-fabric-in-azure
The trick is to make sure that when the load balancer sends traffic on a particular port to a node in the cluster, there is a service instance on that node listening on that port. By default, the load balancer simply sends traffic to all nodes, so you have to make sure either that you have a service instance listening on each node, or that a load balancer probe is actively checking which nodes do have a service instance listening on that port.
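For reference, a minimal sketch of such a probe/rule pair using the Az PowerShell module (the load balancer and resource group names are placeholders for your own cluster's resources):

$lb = Get-AzLoadBalancer -Name "LB-mycluster" -ResourceGroupName "myResourceGroup"
# Probe: only nodes actually listening on 4444 will receive traffic
Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "AppProbe4444" -Protocol Tcp -Port 4444 -IntervalInSeconds 5 -ProbeCount 2
# Rule: forward front-end port 4444 to back-end port 4444, gated by the probe
Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "AppRule4444" -Protocol Tcp -FrontendPort 4444 -BackendPort 4444 -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] -BackendAddressPool $lb.BackendAddressPools[0] -Probe (Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "AppProbe4444")
Set-AzLoadBalancer -LoadBalancer $lb   # persist the probe and rule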

Related

Cannot connect to kafka connect cluster running on AWS from outside EC2

I have an ECS cluster with 3 EC2 instances all sitting in private subnets. I created a task definition to run the kafka-connect image provided by Confluent with the following environment variables:
CONNECT_CONFIG_STORAGE_TOPIC=quickstart-config
CONNECT_GROUP_ID=quickstart
CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_OFFSET_STORAGE_TOPIC=quickstart-offsets
CONNECT_PLUGIN_PATH=/usr/share/java
CONNECT_REST_ADVERTISED_HOST_NAME=localhost
CONNECT_REST_ADVERTISED_PORT=8083
CONNECT_SECURITY_PROTOCOL=SSL
CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
CONNECT_STATUS_STORAGE_TOPIC=quickstart-status
CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
I have an application load balancer in front of this cluster with a listener on port 8083. I have correctly set up the target group to include the EC2 instances running kafka-connect, so the load balancer should forward requests to the cluster. And it does, but I always get back a 502 Bad Gateway response. I can ssh into the EC2 instances and curl localhost:8083 and get a response back from kafka-connect, but from outside the EC2 instances I don't get a response.
To rule out networking issues between the load balancer and the cluster, I created a separate task definition running Nginx on port 80, and I'm able to successfully hit it from outside the EC2 instances through the load balancer.
I have a feeling that I have not set CONNECT_REST_ADVERTISED_HOST_NAME to the correct value. It's my understanding that this is the host clients should connect to. However, because my EC2 instances are in a private subnet, I have no idea what to set this to, which is why I've set it to localhost. I tried setting it to the load balancer's DNS name, but that doesn't work.
You need to set CONNECT_REST_ADVERTISED_HOST_NAME to the host or IP that the other Kafka Connect workers can resolve and connect to.
It's used for internal communication between workers. If your REST request (via your load balancer) hits a worker that is not the current leader of the cluster, that worker will forward the request to the leader using CONNECT_REST_ADVERTISED_HOST_NAME. But if CONNECT_REST_ADVERTISED_HOST_NAME is localhost, the worker will simply be forwarding the request to itself, and hence things won't work.
For more details see https://rmoff.net/2019/11/22/common-mistakes-made-when-configuring-multiple-kafka-connect-workers/
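As a hedged illustration of the fix (the entrypoint path and the use of hostname -i are assumptions about the Confluent image, not details from the question), each worker can advertise its own container IP at startup instead of a hard-coded value:

# Hypothetical command override in the ECS task definition:
# advertise this container's routable IP, then start Connect as usual
export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -i)
exec /etc/confluent/docker/run

That way each worker advertises an address the other workers can actually reach, rather than localhost.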

Service Fabric Load Balancer not forwarding traffic correctly

I have an ASP.NET website that connects to a set of WCF services in a Service Fabric cluster behind an internal load balancer. The service connection strings in the website point to the address of the internal load balancer. There are three nodes in the cluster and three copies of the backend services.
When I manually restart one of the nodes, I find that the website fails to load correctly because the load balancer seems to still be forwarding requests to the service on the restarting node. Shouldn't the load balancer forward requests to the two other available services? Does anyone know what's going on here?

Connect to On Premises Service Fabric Cluster

I've followed the steps from Microsoft to create a Multi-Node On-Premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When connecting to the cluster I have used the IP address of one of the nodes. Doing that, I can connect via PowerShell using Connect-ServiceFabricCluster nodename:19000, and I can reach the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if I hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is on-premises. I tried connecting to my sample cluster with Connect-ServiceFabricCluster sampleCluster.domain.local:19000, but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported On-Premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help, I'll paste relevant contents in the event of it becoming unavailable.
Reverse Proxy — When you provision a Service Fabric cluster, you have the option of installing the Reverse Proxy on each of the nodes in the cluster. It performs the service resolution on the client's behalf and forwards the request to the correct node that contains the application. In the majority of cases, services running on Service Fabric run on only a subset of the nodes. Since the load balancer does not know which nodes contain the requested service, the client libraries would have to wrap requests in a retry loop to resolve service endpoints. Using the Reverse Proxy addresses this issue, since it runs on each node and knows exactly which nodes the service is running on. Clients outside the cluster can reach services running inside the cluster via the Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between the nodes running that API. So health probes are necessary too, so that the load balancer knows which nodes are viable targets for traffic.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080 with a TCP probe on port 19080 every 5 seconds with a 2 count error threshold.
2. LBRule on TCP/19000 with a TCP probe on port 19000 every 5 seconds with a 2 count error threshold.
What you need to add to make this externally accessible is a rule that forwards port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path on the service and checks for a 200 response.
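A sketch of that rule in the same Az PowerShell style as above, if your balancer happens to be an Azure one (port 8080 and the /health path are placeholders for your service's actual HTTP port and probe path); the same probe/rule concepts apply to whatever load balancer you use on-premises:

$lb = Get-AzLoadBalancer -Name "LB-mycluster" -ResourceGroupName "myResourceGroup"
# HTTP probe that expects a 200 from the service's health path
Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "WebProbe" -Protocol Http -Port 8080 -RequestPath "/health" -IntervalInSeconds 5 -ProbeCount 2
# Forward public port 80 to the service's HTTP port on each healthy node
Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "WebRule" -Protocol Tcp -FrontendPort 80 -BackendPort 8080 -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] -BackendAddressPool $lb.BackendAddressPools[0] -Probe (Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "WebProbe")
Set-AzLoadBalancer -LoadBalancer $lb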
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again to using something like API Management to further reverse proxy it to SSL. What a mess but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.

Can reverse proxy in Service Fabric be used with multiple windows containers?

I'm evaluating SF versus Docker Swarm for container orchestration, and I can see Service Fabric has an edge in its reverse proxy implementation, which runs on all nodes in the cluster. The problem is that, based on the cluster manifest, only one port can be used as the reverse proxy port, so I don't fully understand how this can be utilized when multiple Windows containers are each running on their own port. I need to use port-to-port mapping only (with no HTTP rewrite), so ultimately I want a one-to-one reverse port mapping to each individual Windows container.
Is it possible to accomplish this using Service Fabric?
To be clear, I have www.app1.com and www.app2.com hosted in two different containers; they don't need to talk to each other. If I deploy those to Service Fabric, how do I use the reverse proxy with a single published external port to reach those containers externally?
At this point in time (version 5.6 of Service Fabric), the Reverse Proxy will do the service resolution using the Service Fabric Naming Service and provide the URI to get to your service. The URL the reverse proxy exposes your service on is specific to Service Fabric, e.g. http://clusterFQDN:port/appName/serviceName.
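For example, calling a service through the reverse proxy from outside the cluster (hypothetical names, and assuming the reverse proxy is enabled on its default port 19081):

Invoke-WebRequest -Uri "http://mycluster.westus.cloudapp.azure.com:19081/MyApp/MyWebService/api/values"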
Alternatively, you can use the DNS Service to get a container IP (the IP of a host node in the cluster that is running your container). However, you can only find the port by doing a DNS SRV record lookup.
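As a sketch of that lookup (the service DNS name is hypothetical, and this assumes the cluster's DNS Service is answering SRV queries):

# SRV query returns the host and the dynamically assigned port
Resolve-DnsName -Name "mywebservice.myapp" -Type SRV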
Current best options for exposing containers in a Service Fabric cluster are:
If you have a fixed host port for your container, the Azure Load Balancer will be able to monitor where the container lives and forward requests to only those nodes. You can add additional public IPs to your load balancer and use one per container. This cannot be used with dynamic host ports in the cluster.
Azure API Management can resolve Service Fabric services by integrating with the Service Fabric Naming Service.
Create your own HTTP Gateway as a Reliable Service: https://github.com/weidazhao/Hosting or https://github.com/c3-ls/ServiceFabric-Http
Running Nginx as a service in the cluster: Based on this prototype you can run and configure Nginx in Service Fabric: https://github.com/knom/ServiceFabric-Nginx
Yes, you can use the reverse proxy with multiple containers. The idea is simple:
1. Configure the port-to-host mapping so your host knows which port your application is listening on.
2. Configure the container's endpoint so your container registers an endpoint with Service Fabric. You can choose the port for this endpoint; it will be registered with the Naming Service and become available to the reverse proxy (see the manifest sketch below).
Communication between containers can then go through the reverse proxy using the service name and the port you specified. If you didn't specify a port, Service Fabric will assign one for you, and you can get it from an environment variable.
The Service Fabric team has excellent documentation about this here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-container-linux
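A minimal sketch of the two pieces from the list above (names and ports are placeholders): the endpoint the container registers is declared in ServiceManifest.xml, and the port-to-host binding is declared in ApplicationManifest.xml:

<!-- ServiceManifest.xml: endpoint registered with the Naming Service; Port is the host port -->
<Resources>
  <Endpoints>
    <Endpoint Name="App1Endpoint" Protocol="http" UriScheme="http" Port="8080" />
  </Endpoints>
</Resources>

<!-- ApplicationManifest.xml: bind the endpoint's host port to the port the app listens on inside the container -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="App1Pkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <PortBinding ContainerPort="80" EndpointRef="App1Endpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>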

Hitting an endpoint of HeadlessService - Kubernetes

We wanted pod names to be resolved to IPs to configure the seed nodes in an Akka cluster. We achieved this by using a headless service and stateful sets in Kubernetes. But how do I expose a headless service externally, so I can hit an endpoint from outside the cluster?
It is hard to expose a headless Kubernetes service to the outside, since this would require some complex TCP proxies. The reason is that a headless service is only a DNS record with an IP for each pod, and these IPs are only reachable from within the cluster.
One solution is to expose the pods via node ports, which means the ports are opened on the hosts themselves. Unfortunately this makes service discovery harder, because you don't know which host has a pod scheduled on it.
You can set up node ports via:
the Service spec: https://kubernetes.io/docs/user-guide/services/#type-nodeport (see the sketch below)
or directly in the Pod by defining spec.containers[].ports[].hostPort
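A minimal NodePort sketch of the Service route (names are hypothetical; 2551 is just the conventional Akka remoting port):

apiVersion: v1
kind: Service
metadata:
  name: akka-external
spec:
  type: NodePort
  selector:
    app: akka-cluster        # must match the StatefulSet's pod labels
  ports:
  - port: 2551               # service port inside the cluster
    targetPort: 2551         # container port on the pods
    nodePort: 30551          # opened on every node's host IP (30000-32767 range)

Every node then accepts traffic on 30551 and forwards it to one of the pods.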
Another alternative is to use a LoadBalancer, if your cloud provider supports that. Unfortunately you cannot address each instance itself, since they share the same IP. This might not be suitable for your application.