Closing a port in a running Kubernetes service

I'm trying to create a LoadBalancer Service in Kubernetes that will have two ports: 80 and 8080. For port 80, I want the port open only when I'm actually using it. Is it possible to do this while the service is running?
I'm planning to use 8080 for serving outside requests and 80 for debugging purposes.

I would suggest not exposing the debugging port through the Service at all (especially since you don't really know whether it would hit the same backing Pod as the port carrying real traffic). Instead, it might be good enough for you to use kubectl port-forward to access the debug port when you need it.
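A minimal sketch of that workflow (the Deployment name my-app is an assumption for illustration):

# forward local port 8080 to port 80 of one pod behind the Deployment
kubectl port-forward deploy/my-app 8080:80
# in another terminal, hit the debug endpoint locally
curl http://localhost:8080/

The debug traffic is tunneled over your API-server connection, so nothing extra is exposed on the load balancer.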

Related

How does Kubernetes map ports of multi-container Pods?

I'm trying to learn Kubernetes. One thing I don't understand is the following scenario:
Given I have a pod with 2 containers. One container runs an app listening on port 80, the other container is a sidecar which does some polling from a web resource but doesn't listen on any port.
Now when I start a Service with targetPort: 80, how does Kubernetes know which container within the Pod exposes this port? Does it inspect all containers to check for exposed ports? Or does it just map port 80 on all containers within the Pod?
Additionally, is it possible to change the container's exposed port in Kubernetes, so that the port the container exposes (containerPort) maps to a different port within the container?
I mean something similar to the -p argument in Docker.
The Kubernetes overview documentation of Pods notes:
Every container in a Pod shares the network namespace.... Within a Pod, containers share an IP address and port space....
So if you have multiple containers in a Pod, from outside that Pod, they all look "the same", in the same way that you could have multiple server processes running on a single physical machine with a single IP address. You can't run two containers that listen on the same port in the same Pod. The inbound request will reach whichever of the containers happens to be listening on that port (if any).
Is it possible to change the container's exposed port in Kubernetes, so the port the container exposes (containerPort) maps to a different port within the container?
You can do this with your Service. Remember that you don't generally connect directly to a Pod; instead, you connect to a Service and that forwards the request on to one of the matching Pods. So if your Pod spec says
containers:
- name: sidecar
  # ...
- name: server
  ports:
  - name: http
    containerPort: 8080
then the corresponding Service can say
ports:
- port: 80
  targetPort: http
and you'll connect to http://service-name.namespace.svc.cluster.local using the default HTTP port 80, even if the container process is actually listening on port 8080 or 3000 or whatever else.
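Put together, a complete Service for the fragment above might look roughly like this; the metadata and the app: server selector are assumptions for illustration and must match your Pod's labels:

apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: namespace
spec:
  selector:
    app: server          # assumption: must match the Pod's labels
  ports:
  - port: 80             # the port clients connect to on the Service
    targetPort: http     # resolved by name to containerPort 8080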
If I understand your question correctly, the explanation you're looking for is in this article:
Containers are often intended to solve a single, narrowly defined problem, such as a microservice, but in the real world, problems require multiple containers for a complete solution. In this article, we’re going to talk about combining multiple containers into a single Kubernetes Pod, and what it means for inter-container communication.
There are several types of communication between containers in a single Pod, and they are described in the article.
The most important part for you should be Inter-container network communication.
Look also at this guide about Multi-Container Pods in Kubernetes.
You can also find the tutorial with examples - Extending applications on Kubernetes with multi-container pods.
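As a concrete illustration of inter-container network communication, here is a hedged Pod sketch; the image names and the polling loop are assumptions, not taken from the articles above:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: server
    image: nginx            # listens on port 80 inside the Pod
  - name: poller
    image: curlimages/curl  # shares the Pod's network namespace
    # reaches the sibling container via localhost, no Service needed
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/ > /dev/null; sleep 5; done"]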

kubectl port-forward is not working/too slow

I am setting up Traefik in my CentOS VM. I tried to port-forward as specified here:
https://github.com/jakubhajek/traefik-workshop/tree/3cbbb3b8d3dbafcb2a56f3bb715fee41ba8ffe8b/exercise-2
It displays the following, hangs for hours, and does nothing:
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000
Please advise what I should do to make kubectl port-forward work.
kubectl port-forward makes a specific Kubernetes API request. That means the system running it needs access to the API server, and any traffic will get tunneled over a single HTTP connection.
If it is saying 404 page not found, the port-forward itself is working: port 9000 is listening and connections are being opened, so there is probably something wrong with the deployment (application) itself. Check whether you have port-forwarded to the right pod.
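A few hedged checks, assuming the Traefik release from the linked workshop (adjust the namespace and pod name to your setup):

# find the actual pod name and confirm it is Running
kubectl get pods -n traefik -o wide
# confirm 9000 is actually declared as a containerPort
kubectl get pod POD_NAME -n traefik -o jsonpath='{.spec.containers[*].ports}'
# forward straight to the pod and test locally
kubectl port-forward -n traefik pod/POD_NAME 9000:9000
curl -v http://127.0.0.1:9000/dashboard/   # the usual Traefik v2 dashboard path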

Port forwarding in Kubernetes

I know that in Kubernetes we can't use a Service NodePort below 30000, because these ports are used by Kubernetes. Can I use kubectl port-forward svc/someservice 80:80, for instance, without causing a conflict with the Kubernetes ports below 30000?
In short - yes, you can.
In your question, though, it's clear that you're missing an understanding of the NodePort type of Service and of what kubectl port-forward essentially does.
kubectl port-forward doesn't send traffic through the node port defined in a Service with .spec.type: NodePort. In fact, with kubectl port-forward you can target a ClusterIP type of Service (which, by definition, doesn't have a node port at all).
Could you please describe the reason for such a setup?
kubectl port-forward svc/someservice 80:80 merely forwards your local_machine:80 to port 80 of the endpoints for someservice.
In other words, connections made to local port 80 are forwarded to port 80 of the pod that is running your app. With this connection in place you can use your local workstation to debug the app that is running in the pod.
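Note that binding local port 80 typically requires root privileges, so in practice you might forward a higher local port instead, for example:

kubectl port-forward svc/someservice 8080:80
curl http://localhost:8080/   # reaches port 80 of a backing pod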
Due to known limitations, port forwarding today works only for the TCP protocol. Support for the UDP protocol is being tracked in issue 47862.
As of now (Feb 2020) the issue is still open.
NodePort is used for something entirely different: it is for cases where you want to reach pods by sending traffic to a particular port on any node in your cluster.
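For contrast, a NodePort Service sketch (the names and port numbers here are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: someservice
spec:
  type: NodePort
  selector:
    app: someapp         # assumption: matches the backing pods
  ports:
  - port: 80             # the ClusterIP port inside the cluster
    targetPort: 8080     # the containerPort on the pods
    nodePort: 30080      # must fall in the default 30000-32767 range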
That is why the answer to your question is "definitely, you can do that"; however, as I said before, it is not clear why you would want to. Without that info it is hard to give guidance on the best way to achieve the required functionality.
Hope that helps.

How to access from outside a RabbitMQ Kubernetes cluster using rabbitmqctl?

I have a RabbitMQ cluster running in a Kubernetes environment. I don't have access to the containers' shell, so I'm trying to run rabbitmqctl from a local container (using the same image).
These ports are exposed:
- 15672 (exposed as 32672)
- 5671 (exposed as 32671)
- 4369 (exposed as 32369)
- 25672 (exposed as 32256)
The correct cookie is in $HOME/.erlang.cookie in my local container.
How do I specify the cluster URL and port to rabbitmqctl so that I can access the RabbitMQ cluster from outside?
Is it necessary to expose other ports?
Is it even possible to do this? I can't find any reference to it in the documentation.
You will want to expose ports 4369 and 25672 using the same port numbers externally, as I can't think of a way to tell the Erlang VM running rabbitmqctl to use a different port for EPMD lookups. You should also expose 35672-35682 using the same port range externally.
Since you're using Kubernetes, I'll assume that you are using long names. Assuming that, within your container, your node name is rabbit@container1.my.org, use this command to access it externally:
rabbitmqctl -l -n rabbit@container1.my.org
Please note that container1.my.org must resolve via DNS to the correct IP address to connect to that container.
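On Kubernetes, the port exposure could look roughly like the following NodePort Service sketch. Note two assumptions: the app: rabbitmq selector is illustrative, and node ports below 30000 are only allowed if the API server's --service-node-port-range has been extended to include them (35672-35682 would need similar entries):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
spec:
  type: NodePort
  selector:
    app: rabbitmq        # assumption: pods are labeled app=rabbitmq
  ports:
  - name: epmd
    port: 4369
    nodePort: 4369       # same number externally, as suggested above
  - name: clustering
    port: 25672
    nodePort: 25672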
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.

Why are pods not getting incoming connections

I am taking my first steps with Kubernetes and I have some difficulties.
I have a pod with its Service defined as a NodePort on port 30010.
I have a load balancer configured in front of this Kubernetes cluster, where port 8443 directs traffic to port 30010.
When I try to access this pod from outside the cluster on port 8443, the pod does not get any connections, but I can see the incoming connections via tcptrack on the host on port 30010, which means the load balancer is doing its job.
When I run curl -k https://127.0.0.1:30010 on the host, I get a response from the pods.
What am I missing?
How can I debug it?
Thanks