My application runs in a pod container in a Kubernetes cluster. Every time it starts in the container it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a service (NodePort or LoadBalancer) that maps the application port to a specific port in order to access it.
What are the options to handle this case in a kubernetes cluster?
Not supported, check out the issue here. Even with Docker, if your port range is overly broad, you can hit issues as well.
One option to solve this would be to use a dynamically configured sidecar proxy that takes care of routing the pod's traffic from a fixed port to the dynamic port of the application.
This has the upside that the service can be scaled even if all instances have different ports.
But this also has the potentially big downside that every request has added overhead due to the extra network hop, and even if it's just a few milliseconds this can have quite an impact depending on how the application works.
Regarding the dynamic configuration of the proxy you could e.g. use shared pod volumes.
Edit:
Containers within a pod are a little bit special. They are able to share predefined volumes and their network namespace.
For shared volumes this means you can define a volume, e.g. of type emptyDir, and mount it in both containers. The result is that you can share information between both containers by writing it into that volume in the first container and reading it in the second.
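A minimal sketch of such a pod, with illustrative container names and a placeholder application image, could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-config        # emptyDir volume shared by both containers
    emptyDir: {}
  containers:
  - name: app                  # your application; writes its dynamic port/config here
    image: example/app:latest  # placeholder image
    volumeMounts:
    - name: shared-config
      mountPath: /shared
  - name: proxy                # sidecar proxy; reads the configuration from the same path
    image: envoyproxy/envoy:v1.27.0
    volumeMounts:
    - name: shared-config
      mountPath: /shared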
For networking this makes communication between containers of one pod easier because you can use the loopback interface to communicate between your containers. In your case this means the sidecar proxy container can reach your service by calling localhost:<dynamic-port>.
For further information take a look at the official docs.
So how is this going to help you?
You can use a proxy like Envoy and configure it to watch for dynamic configuration changes. The source for the dynamic configuration should be a volume shared between your application container and the sidecar proxy container.
After your application has started and allocated the dynamic port, you can automatically generate the configuration for the Envoy proxy and save it in the shared volume, which is the same source Envoy is watching thanks to the aforementioned configuration.
Envoy itself acts as a reverse proxy: it can listen statically on port 8080 (or any other port you like) and route the incoming network traffic to your application's dynamic port by calling localhost:<dynamic-port>.
Unfortunately I don't have any predefined configuration for this use case, but if you need some inspiration you can take a look at Istio - they use something quite similar.
As gohm'c mentioned, there is no built-in way to do this.
As a workaround you can run a script to adjust the service:
Start the application and retrieve the chosen random port from it
Modify the Kubernetes service, load balancer, etc. with the new port
The downside of this is that whenever the port changes there will be a short delay until the service is updated.
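A rough sketch of such a script, assuming the Service is named myapp, the pod carries the label app=myapp, and the Service port is 8080 (all of these are assumptions; adjust to your setup):
# Find the pod (assumes label app=myapp)
POD=$(kubectl get pod -l app=myapp -o jsonpath='{.items[0].metadata.name}')
# Read the port the application bound inside the container (adjust the grep to your process name)
PORT=$(kubectl exec "$POD" -- sh -c "netstat -tlpn | grep myapp | awk '{print \$4}' | cut -d: -f2")
# Point the Service's targetPort at the newly chosen port
kubectl patch service myapp -p "{\"spec\":{\"ports\":[{\"port\":8080,\"targetPort\":$PORT}]}}"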
How about this:
Deploy and start your application in a Kubernetes cluster.
Run kubectl exec <pod name here> -- netstat -tulpn | grep "application name" to identify the port number associated with your application.
Run kubectl port-forward <pod name here> <port you like>:<random application port> to access the pod of your application.
Would this work for you?
Edit: This is very similar to Lukas Eichler's response.
Related
I want to build a testing environment using Kubernetes on localhost (it can be Docker Desktop, minikube, ...). I want to connect my client to 3 instances of Cassandra inside the localhost K8s cluster. Cassandra is just an example; it could be the same with etcd, redis, ... or any StatefulSet.
I created a StatefulSet with 3 replicas on the same port on localhost Kubernetes.
I created Services to expose each pod.
What should I do next to route traffic using three different names, cassandra-0, cassandra-1, cassandra-2, and the same port? This is required by the driver; I cannot forward individual ports since the driver requires all instances to run on the same port.
So it should be cassandra-0:9042, cassandra-1:9042, cassandra-2:9042.
To show this I created a drawing to explain it graphically.
I want to achieve the red-line connection using something in K8s, maybe Services, but I do not know what to use.
I would say you should define a NodePort and send your requests to localhost:<NodePort>.
ports:
- protocol: TCP
port: 8081
targetPort: 8080
nodePort: 32000
Just change your ports so they fit your needs.
If you have already created a service with ports exposed, get all endpoints and try turning traffic towards them.
kubectl get endpoints -A
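For the cassandra example from the question, a complete per-pod Service along those lines might look roughly like this (the Service name is illustrative; statefulset.kubernetes.io/pod-name is the label the StatefulSet controller puts on each pod):
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0  # selects exactly one pod of the StatefulSet
  ports:
  - protocol: TCP
    port: 9042        # Cassandra's native port inside the cluster
    targetPort: 9042
    nodePort: 32000   # fixed port on every node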
I was using NodePort to host a webapp on Google Container Engine (GKE). It allows you to point your domains directly at the node IP address instead of an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked down manually changing the nodes, as they are now created using an Instance Group and an immutable Instance Template.
I need to open port 443 on my nodes, how do I do that with Kubernetes or GCE? Preferably in an update resistant way.
Related github question: https://github.com/nginxinc/kubernetes-ingress/issues/502
Using port 443 on your Kubernetes nodes is not standard practice. If you look at the docs you can see the kube-apiserver option --service-node-port-range, which defaults to 30000-32767. You could change it to 443-32767 or something. Note that every port under 1024 is restricted to root.
In summary, it's not a good idea/practice to run your Kubernetes services on port 443. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is using a cloud load balancer but you'd like to avoid that due to costs.
Update: A DaemonSet with a nodePort can handle the port opening for you. nginx/k8s-ingress has a nodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show "Allow HTTPS traffic" as checked, because it's not using the default rule.
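For reference, such a custom firewall rule can be created with gcloud roughly like this (the rule name and the target tag are placeholders for your setup):
gcloud compute firewall-rules create allow-443-nodeport \
  --allow tcp:443 \
  --target-tags <your-node-tag>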
You can do everything you do in the GUI Google Cloud Console using the Cloud SDK, most easily through the Google Cloud Shell. Here is the command for adding a network tag to a running instance. This works even though the GUI has disabled the ability to do so:
gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server
This also works on the beta, meaning it should continue to work for a bit.
See https://cloud.google.com/sdk/docs/scripting-gcloud for examples on how to automate this. Perhaps consider running on a webhook when downtime is detected. Obviously none of this is ideal.
Alternatively, you can change the templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for round-robin, low-downtime dynamic DNS.
Source (he had the opposite problem, his problem is our solution): https://stackoverflow.com/a/51866195/370238
If I understand correctly: if nodes can be destroyed and recreated, how are you going to rest assured that a certain service behind a port is reliably available in production without any sort of load balancer that takes care of diverting port traffic to the new node(s)?
I want to access k8s API resources. My cluster is a 1-node cluster. The kube-apiserver is listening on ports 8080 and 6443. curl localhost:8080/api/v1 inside the node is working. If I hit :8080 it is not working, because some other service (eureka) is running on this port. This leaves me the option of accessing :6443. In order to make the API accessible, there are 2 ways:
1- Create a service for the kube-apiserver with some specific port which will target 6443. For that, ca.crt, key, token etc. are required. How do I create and configure such things so that I will be able to access the API?
2- Make a change in weave (weave is available as a service in the k8s setup) so that my server can access the k8s APIs.
Either option is fine with me. Any help will be appreciated.
my cluster is a 1-node cluster
One of those words does not mean what you think it does. If you haven't already encountered it, you will eventually discover that the memory and CPU pressure of attempting to run all the components of a kubernetes cluster on a single Node will cause memory exhaustion, and then lots of things won't work right with some pretty horrible error messages.
I can deeply appreciate wanting to start simple, but you will be much happier with a 3 machine cluster than trying to squeeze everything into a single machine. Not to mention the fact that only having a single machine won't surface any networking misconfigurations, which can be a separate frustration when you think everything is working correctly and only then go to scale your cluster up to more Nodes.
some other service (eureka) is running on this port.
Well, at the very real risk of stating the obvious: why not move one of those two services to listen on a separate port from one another? Many cluster provisioning tools (I love kubespray) have a configuration option that allows one to very easily adjust the insecure port used by the apiserver to be a port of your choosing. It can even be a privileged port (that is: less than 1024) because docker runs as root and thus can --publish a port using any number it likes.
If having the :8080 is so important to both pieces of software that it would be prohibitively costly to relocate the port, then consider binding the "eureka" software to the machine's IP and bind the kubernetes apiserver's insecure port to 127.0.0.1 (which is certainly the intent, anyway). If "eureka" is also running in docker, you can change its --publish to include an IP address on the "left hand side" to very cheaply do what I said: --publish ${the_ip}:8080:8080 (or whatever). If it is not using docker, there is still a pretty good chance that the software will accept a "bind address" or "bind host" through which you can enter the ip address, versus "0.0.0.0".
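For example, if "eureka" runs in docker, restricting its published port to the machine's address rather than all interfaces could look like this (the image name and the address are placeholders):
docker run -d --publish 10.0.0.5:8080:8080 eureka-image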
1- Create a service for the kube-apiserver with some specific port which will target 6443. For that, ca.crt, key, token etc. are required. How do I create and configure such things so that I will be able to access the API?
Every Pod running in your cluster has the option of declaring a serviceAccountName, which by default is default, and the effect of having a serviceAccountName is that every container in the Pod has access to those components you mentioned: the CA certificate and a JWT credential that enable the Pod to invoke the kubernetes API (which from within the cluster one can always reach via the kubernetes Service IP, the environment variable $KUBERNETES_SERVICE_HOST, or the hostname https://kubernetes -- assuming you are using kube-dns). Those serviceAccount credentials are automatically projected into the container at /var/run/secrets/kubernetes.io/serviceaccount without requiring that your Pod declare those volumeMounts explicitly.
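For illustration, from inside any container that has such a service account projected, a call to the API could look roughly like this (what the call is actually allowed to do depends on the RBAC rules bound to that service account):
# Read the projected service account credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Call the apiserver through the in-cluster "kubernetes" Service
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1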
So, if your concern is that one must have credentials from within the cluster, that concern can go away pretty quickly. If your concern is access from outside the cluster, there are a lot of ways to address that concern which don't directly involve creating all 3 parts of that equation.
I have set up a multi-container pod consisting of multiple interrelated microservices. With docker-compose, if I wanted to access another container in the compose I would just use the name of the service.
I am trying to do the same thing with Kube, without having to create a pod per microservice.
I tried the name of the container and also suffixing it with .local; neither worked and I got an UnknownHostException.
My preference is also to have all the microservices running on port 80, but in case that does not work within a single pod, I also tried having each microservice run on its own port and using localhost, but that didn't work either; it simply said connection refused (as opposed to Unknown Host).
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication
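A minimal sketch of what that coordination looks like, with illustrative names and ports: each microservice container listens on its own distinct port, and the other containers in the pod reach it via localhost:<that port>.
apiVersion: v1
kind: Pod
metadata:
  name: microservices
spec:
  containers:
  - name: users
    image: example/users:latest   # placeholder image
    ports:
    - containerPort: 8080         # the other containers call localhost:8080
  - name: orders
    image: example/orders:latest  # placeholder image
    ports:
    - containerPort: 8081         # the other containers call localhost:8081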
I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an exemplary endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kube proxy is the way to go. However, when I run kube proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that lacks a clear path, though.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
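As a rough sketch, an Ingress for the /hello endpoint from the question could look like this (the hostname and the Service name hello-service are assumptions; the Service would front the deployment on port 10001):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - host: hello.example.com       # external DNS entry pointing at your node IPs
    http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-service   # assumed Service in front of the deployment
            port:
              number: 10001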
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).