What outgoing TCP ports are needed to fully manage a Service Fabric cluster in Azure? I was aware of 19080 being needed to access the Service Fabric Explorer but then today I discovered that 19000 is needed to publish to a cluster. This makes me wonder if there are other ports.
I need to make an official request to my IT department to open up outgoing TCP Ports and I want to be sure I cover everything in one request. Are there other ports I should be aware of?
The default port for connecting to a cluster from Visual Studio or PowerShell is 19000, but it can be changed when you create the cluster, using either the Azure portal or an ARM template deployment.
Port 19080 is used by the Service Fabric Explorer.
There are no other ports used by Service Fabric itself. Your applications can be configured to use additional ports, but that is something you control.
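For example, assuming the default ports and a placeholder cluster address, connecting from PowerShell looks like this:

# Client connection endpoint (default 19000); the cluster address is a
# placeholder - substitute your own.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.eastus.cloudapp.azure.com:19000"

# Service Fabric Explorer is served over the HTTP gateway (default 19080):
# http://mycluster.eastus.cloudapp.azure.com:19080/Explorer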
Related
I have a kubernetes cluster with several nodes, and it is connecting to a SQL server outside of the cluster. How can I whitelist these (potentially changing) nodes on the SQL server firewall, without having to whitelist each Node's external IP independently?
Is there a clean solution for this? Perhaps some intra-cluster tooling to route all requests through a single node?
You would have to use a NAT. It is possible, but fiddly (we do this weekly in order to connect to a hosted service to make backups, since the hosted service only whitelists a specific IP).
We used Terraform to spin up a cluster, then deployed our backup job to it so it could connect to the hosted service; since the traffic went via the NAT IP, the remote host would allow the connection.
We used Cloud NAT via Terraform (as we were on GKE): https://registry.terraform.io/modules/terraform-google-modules/cloud-nat/google/latest
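A minimal sketch of that module's usage, assuming placeholder project, region, and network variables (the router name and pinned module version are illustrative too):

# Cloud NAT needs a Cloud Router to attach to.
resource "google_compute_router" "nat_router" {
  name    = "nat-router"        # placeholder name
  project = var.project_id
  region  = var.region
  network = var.network
}

module "cloud_nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 5.0"         # pin to whatever version you validate
  project_id = var.project_id
  region     = var.region
  router     = google_compute_router.nat_router.name
}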
There are surely similar options for whichever Kubernetes provider you are using. If you are running on bare metal, you'll need to do the routing yourself.
I would like to build a web application that processes video from users' webcams. It looks like WebRTC is ideal for this project. But, I'm having a hard time creating a peer connection between the user's machine and a pod in my Kubernetes cluster. How would you connect these two peers?
This question on Server Fault discusses the issue I'm running into: WEBRTC MCU/SFU inside kubernetes - Port Ranges. WebRTC wants a bunch of ports open so users can create peer connections with the server, but Kubernetes has ports closed by default. Here's a rephrasing of my question: how do I create RTCPeerConnections connecting multiple users to an application hosted in a Kubernetes cluster? How should the network ports be set up?
The closest I've come to finding a solution is Orchestrating GPU-accelerated streaming apps using WebRTC; their code is available on GitHub. I don't fully understand their approach, but I believe it depends on Istio.
The document you link to, Orchestrating GPU-accelerated streaming apps using WebRTC, is helpful.
What they do to allow for RTCPeerConnection is:
Use two separate Node pools (groups of Nodes):
- Default Node pool - for most components, using Ingress and a load balancer
- TURN Node pool - for the STUN/TURN service
STUN/TURN service
The STUN/TURN service is network bound and deployed to dedicated nodes, with one instance on each node in the node pool. This can be done on Kubernetes using a DaemonSet. In addition, this service should use host networking, i.e. every node has its ports accessible from the Internet. Activate host networking for the PodTemplate in your DaemonSet:
hostNetwork: true
They use coturn as the STUN/TURN server.
The STUN/TURN service is run as a DaemonSet on each node of the TURN node pool. The coturn process needs to allocate a fixed block of ports bound to the host IP address in order to properly serve relay traffic. A single coturn instance can serve thousands of concurrent STUN and TURN requests, depending on the machine configuration.
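Putting those pieces together, a minimal DaemonSet sketch might look like the following; the names, node selector label, and image tag are assumptions, not taken from their repository:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: turn-server
spec:
  selector:
    matchLabels:
      app: turn-server
  template:
    metadata:
      labels:
        app: turn-server
    spec:
      nodeSelector:
        pool: turn           # schedule only onto the TURN node pool (assumed label)
      hostNetwork: true      # bind ports directly on the node's interfaces
      containers:
      - name: coturn
        image: coturn/coturn:4.6   # assumed image tag
        args: ["-n", "--log-file=stdout"]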
Network
This part of their network diagram shows that some services are served over HTTPS through an ingress gateway, whereas the STUN/TURN service uses a different connection over DTLS/RTP to the nodes exposed via the host network.
I was using NodePort to host a webapp on Google Container Engine (GKE). It allows you to point your domains directly at the node IP address, instead of at an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked down manually changing the nodes, as they are now created using an Instance Group and an immutable Instance Template.
I need to open port 443 on my nodes. How do I do that with Kubernetes or GCE, preferably in an update-resistant way?
Related GitHub question: https://github.com/nginxinc/kubernetes-ingress/issues/502
Using port 443 on your Kubernetes nodes is not standard practice. If you look at the docs you can see the kube-apiserver option --service-node-port-range, which defaults to 30000-32767. You could change it to 443-32767 or similar. Note that every port under 1024 is restricted to root.
In summary, it's not a good idea/practice to run your Kubernetes services on port 443. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is a cloud load balancer, but you'd like to avoid that due to cost.
Update: A DaemonSet with a NodePort can handle the port opening for you. nginx/k8s-ingress has a NodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show "Allow HTTPS traffic" as checked, because it's not using the default rule.
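For reference, such a custom rule can be created from the CLI; the rule name below is a placeholder, and the target tag assumes your nodes carry https-server:

# Allow TCP 443 to any instance tagged https-server.
gcloud compute firewall-rules create allow-https-nodeport \
    --allow tcp:443 \
    --target-tags https-server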
You can do everything you do in the Google Cloud Console GUI using the Cloud SDK, most easily through the Google Cloud Shell. Here is the command for adding a network tag to a running instance. This works even though the GUI disabled the ability to do so:
gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server
This also works on the beta track, meaning it should continue to work for a while.
See https://cloud.google.com/sdk/docs/scripting-gcloud for examples on how to automate this. Perhaps consider running on a webhook when downtime is detected. Obviously none of this is ideal.
Alternatively, you can change the templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for round-robin, low-downtime dynamic DNS.
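A sketch of that approach, assuming a placeholder template name and a local notify-dns.sh startup script (you would then point the instance group at the new template):

# Create a template with the firewall tags and a startup script baked in.
gcloud compute instance-templates create node-template-https \
    --tags http-server,https-server \
    --metadata-from-file startup-script=notify-dns.sh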
Source (he had the opposite problem, his problem is our solution): https://stackoverflow.com/a/51866195/370238
If I understand correctly: if nodes can be destroyed and recreated at any time, how can you rest assured that a service behind a given port is reliably available in production without some sort of load balancer diverting traffic to the new node(s)?
I've followed the steps from Microsoft to create a Multi-Node On-Premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When connecting to the cluster I have used the IP address of one of the nodes. Doing that, I can connect via PowerShell using Connect-ServiceFabricCluster nodename:19000 and I can reach the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if I hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is for my local cluster. I tried connecting to my sample cluster with Connect-ServiceFabricCluster sampleCluster.domain.local:19000, but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported On-Premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help; I'll paste the relevant contents here in case it becomes unavailable.
Reverse Proxy: When you provision a Service Fabric cluster, you have the option of installing Reverse Proxy on each of the nodes in the cluster. It performs service resolution on the client's behalf and forwards the request to the correct node which contains the application. In the majority of cases, services running on Service Fabric run on only a subset of the nodes. Since the load balancer will not know which nodes contain the requested service, the client libraries would have to wrap requests in a retry loop to resolve service endpoints. Using Reverse Proxy addresses the issue, since it runs on each node and knows exactly which nodes the service is running on. Clients outside the cluster can reach the services running inside the cluster via Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between the nodes running that API. Health probes are necessary too, so that the load balancer knows which nodes are viable options for receiving traffic.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080 with a TCP probe on port 19080 every 5 seconds with a 2 count error threshold.
2. LBRule on TCP/19000 with a TCP probe on port 19000 every 5 seconds with a 2 count error threshold.
What you need to add to make this forward-facing is a rule forwarding port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path and checks for a 200 response.
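As a sketch using the Azure CLI (the resource group, load balancer name, backend port, and probe path below are all assumptions):

# HTTP probe against an assumed /health path on the service's port.
az network lb probe create \
    --resource-group my-rg --lb-name my-lb \
    --name AppHttpProbe --protocol http --port 8080 --path /health

# Forward public port 80 to the service's port, gated by the probe.
az network lb rule create \
    --resource-group my-rg --lb-name my-lb \
    --name AppHttpRule --protocol tcp \
    --frontend-port 80 --backend-port 8080 \
    --probe-name AppHttpProbe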
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again by using something like API Management to further reverse proxy it to SSL. What a mess, but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.
I'm evaluating SF and Docker Swarm for container orchestration, and I can see Service Fabric has an edge in its reverse proxy implementation, which runs on all nodes in the cluster. The problem is that, based on the cluster manifest, only one port can be used as the reverse proxy port, so I don't fully understand how this can be utilized when you have multiple Windows containers each running on their own port. I need to use port:port mapping only (with no HTTP rewrite), so ultimately I want one-to-one reverse port mapping to each individual Windows container.
Is it possible to accomplish by using service fabric?
To be clear, I have www.app1.com and www.app2.com hosted in 2 different containers; they don't need to talk to each other. If I deploy those to Service Fabric, how do I use the reverse proxy with a single published external port to reach those containers externally?
At this point in time (version 5.6 of Service Fabric), the Reverse Proxy will do the service resolution using the Service Fabric Naming Service and provide the URI to get to your service. The URL the reverse proxy exposes your service on is specific to Service Fabric, e.g. http://clusterFQDN:reverseProxyPort/appName/serviceName (the reverse proxy port defaults to 19081).
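For example, assuming the default reverse proxy port and placeholder application/service names:

# Call a service via the reverse proxy; names and route are illustrative.
curl http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService/api/values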
You can also use the DNS Service to get a container IP (the IP of a host node in the cluster running your container). However, you can then only find the port by doing a DNS SRV record lookup.
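A sketch of such a lookup from inside the cluster, with a placeholder DNS name for the service:

# Query the SRV record to discover the dynamically assigned port.
nslookup -type=SRV MyService.MyApp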
The current best options for exposing containers in a Service Fabric cluster are:
- If you have a fixed host port for your container, the Azure Load Balancer will be able to monitor where the container lives and forward requests to only those nodes. You can add additional public IPs to your load balancer and use one per container. This cannot be used with dynamic host ports in the cluster.
- Azure API Management can resolve Service Fabric services by integrating with the Service Fabric Naming Service.
- Create your own HTTP gateway as a Reliable Service: https://github.com/weidazhao/Hosting or https://github.com/c3-ls/ServiceFabric-Http
- Run Nginx as a service in the cluster. Based on this prototype you can run and configure Nginx in Service Fabric: https://github.com/knom/ServiceFabric-Nginx
Yes, you can use the reverse proxy with multiple containers. The idea is simple:
- Configure port-to-host mapping so your host knows which port your application is listening on.
- Configure container-to-container communication so your container registers an endpoint with Service Fabric. You can choose the port for this endpoint; it will be registered with the Naming Service and be available to the reverse proxy.
Communication between containers can then be done through the reverse proxy using the service name and the port you specified. If you didn't specify a port number, Service Fabric will assign one for you, and you can get it from an environment variable.
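For illustration, such an endpoint is declared in the service manifest; the endpoint name and port here are assumptions:

<!-- ServiceManifest.xml: a fixed endpoint the Naming Service will register -->
<Resources>
  <Endpoints>
    <Endpoint Name="App1Endpoint" Protocol="http" UriScheme="http" Port="8081" />
  </Endpoints>
</Resources>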
The Service Fabric team has excellent documentation about this here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-container-linux