How does the Image Store Service choose its ports? Right now, it appears to choose different ports every time Service Fabric starts, sometimes clashing with our application ports.
The ephemeral port range is set, but the Image Store Service is still using ports inside the application port range. Our services use fixed ports.
I have not found much documentation on this topic.
Service Fabric system services, the Image Store Service among them, rely on the ephemeral port range to communicate. Make sure there's no overlap between the application port range and the ephemeral port range.
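For illustration, in a standalone cluster's ClusterConfig.json both ranges are declared per node type and must not overlap (the values below are examples, not a recommendation; Azure clusters expose the same applicationPorts/ephemeralPorts settings on the node type):

    "nodeTypes": [
      {
        "name": "NodeType0",
        "clientConnectionEndpointPort": "19000",
        "httpGatewayEndpointPort": "19080",
        "applicationPorts": { "startPort": "20000", "endPort": "30000" },
        "ephemeralPorts": { "startPort": "49152", "endPort": "65534" },
        "isPrimary": true
      }
    ]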
When Kubernetes attempts to allocate a random port in the NodePort range (30000-32767) and that port happens to be in use by another process running on the host, will Kubernetes simply use another random port in the NodePort range, or will it throw an error?
I know that host processes should not be configured to use ports in the NodePort range, but this question is not about best practices. I just want to know how Kubernetes responds in such a situation.
This is the main reason why Kubernetes chose 30000-32767 as its default NodePort range: it avoids conflicts with processes that run on well-known ports on the host machine (for example, 22, 80, 443, etc.).
Please follow this link. All of this is explained very well by one of the official Kubernetes members.
And the answer to your question:
Kubernetes just takes a random port; if that one conflicts and it finds it isn't free, it takes the next one.
You can refer to the Kubernetes Go code (it is actually a set of test cases for the node port allocation operation; you can understand the behaviour from there).
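As a rough sketch of the behaviour that answer describes (illustrative only, not the actual Kubernetes allocator): pick a random offset into the range and, if that port is taken, scan forward until a free one is found.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // allocate picks a random offset into the port range and, if that
    // port is already taken, scans forward (wrapping around) until a
    // free port is found. Returns an error if the range is exhausted.
    func allocate(used map[int]bool, base, size int) (int, error) {
        offset := rand.Intn(size)
        for i := 0; i < size; i++ {
            port := base + (offset+i)%size
            if !used[port] {
                used[port] = true
                return port, nil
            }
        }
        return 0, fmt.Errorf("port range %d-%d is exhausted", base, base+size-1)
    }

    func main() {
        used := map[int]bool{30000: true, 30001: true} // pretend these are taken
        port, err := allocate(used, 30000, 2768)       // NodePort range 30000-32767
        fmt.Println(port, err)
    }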
As a piece of additional information, it's possible to modify the default node port range by passing the --service-node-port-range argument to the kube-apiserver.
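On a kubeadm-managed control plane, for example, the flag can be added to the API server's static pod manifest; this is only a fragment, and the path and range shown are illustrative:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
    spec:
      containers:
      - command:
        - kube-apiserver
        - --service-node-port-range=20000-22767   # custom NodePort range
        # ...keep the rest of the existing flags unchanged...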
Today I started to learn about Kubernetes because I have to use it in a project. When I came to the Service object, I started to learn the difference between all the types of ports that can be specified, and I think I now understand it.
Specifically, port (spec.ports.port) is the port on which the service can be reached inside the cluster, and targetPort (spec.ports.targetPort) is the port on which the application in the container is listening.
So, if the service always redirects the traffic to the targetPort, why is it allowed to specify them separately? In which situations would that be necessary?
The biggest use is with LoadBalancer services, where you want to expose something on (usually) 80 or 443 but don't want the process to run as root, so it listens internally on 8080 or the like. Separate ports let you map the two smoothly.
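A minimal sketch of that mapping (all names here are illustrative): clients hit the load balancer on 80, and the Service forwards to the container's 8080.

    apiVersion: v1
    kind: Service
    metadata:
      name: web                # hypothetical service name
    spec:
      type: LoadBalancer
      selector:
        app: web               # matches the pods running the app
      ports:
      - port: 80               # port clients (and the LB) connect to
        targetPort: 8080       # port the non-root container process listens on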
If I use NodePort in my YAML file, it gives me a port above 30000.
But my users don't want to remember that port when they use the service; they want to use 80. My Kubernetes cluster is on bare metal.
How can I solve that?
Kubernetes doesn't allow you to expose low ports via the NodePort service type by design. The idea is that there is a significant chance of a port conflict if users are allowed to set low port numbers for their NodePort services.
If you really want to use port 80, you're going to have to either use a LoadBalancer service type or route your traffic through an Ingress. If you were on a cloud service, either option would be fairly straightforward. However, since you're on bare metal, both options are going to be very involved: you'll have to configure the load balancer or ingress functionality yourself, and it's going to be rough, sorry.
If you want to go forward with this, you'll have to read through a bunch of documentation to figure out what you want to implement and how to implement it.
https://www.weave.works/blog/kubernetes-faq-how-can-i-route-traffic-for-kubernetes-on-bare-metal
According to the API server docs, you can use the --service-node-port-range parameter for the kube-apiserver, or specify it in the kubeadm configuration when bootstrapping your cluster; see this GitHub issue.
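A minimal sketch of the kubeadm route, assuming you want NodePort 80 to be allowed (the range value is an example; widening it to low ports reintroduces the conflict risk discussed above):

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        service-node-port-range: "80-32767"

You would pass this file to kubeadm init --config when bootstrapping the cluster.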
I was using NodePort to host a webapp on Google Container Engine (GKE). It allows you to point your domains directly at the node IP address instead of an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked down manually changing the nodes, as they are now created using an instance group and an immutable instance template.
I need to open port 443 on my nodes. How do I do that with Kubernetes or GCE, preferably in an update-resistant way?
Related github question: https://github.com/nginxinc/kubernetes-ingress/issues/502
Using port 443 on your Kubernetes nodes is not a standard practice. If you look at the docs you'll see the kube-apiserver option --service-node-port-range, which defaults to 30000-32767. You could change it to 443-32767 or something like that. Note that every port under 1024 requires root privileges to bind.
In summary, it's not a good idea/practice to run your Kubernetes services on port 443. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is a cloud load balancer, but you'd like to avoid that due to cost.
Update: A DaemonSet with a NodePort can handle the port opening for you. nginx/k8s-ingress has a NodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show "Allow HTTPS traffic" as checked, because it's not using the default rule.
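For reference, such a custom rule can be created with a single gcloud command; the rule name and target tag below are illustrative, so substitute your own node pool's tag:

    gcloud compute firewall-rules create allow-node-https --allow tcp:443 --target-tags gke-clusty-pool-0-node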
You can do everything you do in the GUI Google Cloud Console using the Cloud SDK, most easily through the Google Cloud Shell. Here is the command for adding a network tag to a running instance. This works even though the GUI disabled the ability to do so:
gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server
This also works on the beta, meaning it should continue to work for a while.
See https://cloud.google.com/sdk/docs/scripting-gcloud for examples of how to automate this. Perhaps consider running it from a webhook when downtime is detected. Obviously none of this is ideal.
Alternatively, you can change the templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for round-robin, low-downtime dynamic DNS.
Source (he had the opposite problem; his problem is our solution): https://stackoverflow.com/a/51866195/370238
If I understand correctly, if nodes can be destroyed and recreated at any time, how are you going to rest assured that a given service behind a port is reliably available in production without some sort of load balancer that takes care of diverting port traffic to the new node(s)?
By default, when you create a Service Fabric cluster manually using the Azure portal, you have to pick a node type name, which is tied to a VM size, etc. However, what is not shown in the GUI is the application port range associated with this node type. The default application port range appears to be 20000 to 30000.
When you create a Service Fabric application using Visual Studio, the default port numbers are always less than 20000 (more like 8868 or something like that).
When you deploy this service to the above cluster, everything works as expected. Let's ignore the LB port mapping for this discussion.
This raises the following questions:
1. Are we supposed to adjust the port numbers in our Visual Studio projects to something greater than 20000 (but less than 30000) so that they fall within the node type's application port range?
2. Obviously the service works without step (1). But are there any caveats to doing it the default way (i.e. without any port number changes)?
3. If service port numbers do not have to be in the range defined by the node type, then what is the purpose of the application port range in the node type?
The application port range is used when you let Service Fabric handle service discovery and resolution. If you don't specify endpoint ports, Service Fabric automatically assigns endpoints from the application port range you provided when creating the cluster. Each service in a Service Fabric cluster works based on an endpoint. Say you have multiple microservices but only need a few of them exposed with an http(s) endpoint; you can let Service Fabric decide the ports for the services you don't want to expose that way. This port range also comes in handy when you want to configure firewall or NSG rules to open up traffic.
More details can be found here - https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-connect-and-communicate-with-services/
Service Fabric provides a discovery and resolution service called the Naming Service, a registrar that maintains a table mapping named service instances to the endpoint addresses they listen on.
When an endpoint resource is defined in the service manifest, Service Fabric assigns ports from the reserved application port range when a port isn't specified explicitly.
https://learn.microsoft.com/en-gb/azure/service-fabric/service-fabric-service-manifest-resources
So the range seems to be used only if you don't explicitly specify a port for an endpoint in the manifest.
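That matches the manifest behaviour. In a ServiceManifest.xml fragment like the following (endpoint names are made up; 8868 echoes the default from the question above), the first endpoint keeps its explicit port while the second gets one assigned from the application port range:

    <Resources>
      <Endpoints>
        <!-- Explicit port: Service Fabric uses 8868 as given, even though
             it falls outside the node type's 20000-30000 range -->
        <Endpoint Name="WebEndpoint" Protocol="http" Port="8868" />
        <!-- No Port attribute: Service Fabric assigns a port from the
             application port range (20000-30000 by default) -->
        <Endpoint Name="InternalEndpoint" />
      </Endpoints>
    </Resources>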