On a Linux host, a serial port is mapped into a Docker container with the --device flag;
e.g. docker run -dit --device=/dev/ttyUSB0 --name SerialTest <docker image>
We would like to know how serial ports can be mapped into Pods in Kubernetes. The figure below shows the Pod configuration for the application to be deployed in Rancher 2.x.
(https://i.imgur.com/RHhlD4S.png)
Under node scheduling we have configured the Pods to be placed on the specific nodes that have serial ports. Mapping the serial port with a volume mount is, of course, not possible either. So I am raising this question because I could not find anything corresponding to Docker's --device flag in my Rancher 2.x configuration.
(https://imgur.com/wRe7Eds.png) "Application configuration in Rancher 2.x"
(https://imgur.com/Lwil7cz.png) "Serial port device connected to the HOST PC"
(https://imgur.com/oWeW0LZ.png) "Volume Mount Status of Containers in Deployed Pods"
(https://imgur.com/GKahqY0.png) "Error log when running a .NET application that uses a serial port"
Regarding the goal shown in the first diagram: the Kubernetes abstractions that cover communication between a Pod and the outside world (in this case, outside the node) are meant to handle at least layer 2 communication (veth, as in inter-node/pod communication), so a serial device on the host is not something those abstractions cover.
You don't detail why it is not possible to map the device as a volume in the Pod, so I'm wondering whether you have tried using a privileged container, as in this reference:
containers:
- name: acm
  securityContext:
    privileged: true
  volumeMounts:
  - mountPath: /dev/ttyACM0
    name: ttyacm
volumes:
- name: ttyacm
  hostPath:
    path: /dev/ttyACM0
It is possible for Rancher to start containers in privileged mode.
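For completeness, here is a minimal sketch of a full Pod manifest that combines the node scheduling, the privileged security context, and the hostPath device mount. The Pod name, image, node label, and device path are assumptions and would need to match your environment:

apiVersion: v1
kind: Pod
metadata:
  name: serialtest                          # hypothetical name
spec:
  nodeSelector:
    serial-port: "true"                     # assumed label on the nodes that have the serial device
  containers:
  - name: serialtest
    image: myprivaterepo/serialtest:latest  # placeholder image
    securityContext:
      privileged: true                      # lets the container access the host character device
    volumeMounts:
    - mountPath: /dev/ttyUSB0
      name: ttyusb
  volumes:
  - name: ttyusb
    hostPath:
      path: /dev/ttyUSB0
      type: CharDevice                      # the Pod will not start if the device is missing on the node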
Related
I have a pod named test-pod in a Kubernetes cluster on GCP. This pod has a container also named test-pod (same name as the pod). I want to attach an ephemeral container to it and run a few commands, e.g. ip route add, to add some routes to the test-pod container from the ephemeral container.
I have created the test-pod pod/container with the following securityContext.
The spec section of the Pod's YAML file:
spec:
  shareProcessNamespace: true
  containers:
  - name: test-pod
    image: xxx:1.0
    securityContext:
      privileged: true
      capabilities:
        add: ["SYS_ADMIN", "NET_ADMIN", "SYS_PTRACE"]
This pod is up and running. Now I am trying to attach a debug container as follows:
kubectl debug -it test-pod --image=yyy:1.0 -n test
In the debugger container I am giving the following command:
ip route add 10.10.10.0/24 dev eth2
It gives me the following error:
RTNETLINK answers: Operation not permitted
whereas the same ip route add command works fine in the test-pod container.
The "ip route show" command works fine from the debugger container.
Is it possible to run this command from the debugger container? If yes, what am I missing? Please let me know.
The error RTNETLINK answers: Operation not permitted appears because of restrictions on what ephemeral containers may do. Ephemeral containers share the same container spec as regular containers, but some fields are disabled and some behaviors are changed. They are a special type of container that runs temporarily in an existing Pod to accomplish user-initiated actions such as troubleshooting.
You use ephemeral containers to inspect services rather than to build applications.
Ephemeral containers may not have ports, so fields such as ports, livenessProbe, readinessProbe are disallowed.
Pod resource allocations are immutable, so setting resources is disallowed.
This is why you cannot add routes to the regular container through ephemeral containers: the ephemeral container started by kubectl debug runs without the extra capabilities (such as NET_ADMIN) that your regular container was granted.
Refer to the ephemeral container spec for a complete list of fields.
I have found a way to do it. kubectl does not support it directly, but it can be done using the Kubernetes API.
Please check the following link:
https://betterprogramming.pub/debugging-kubernetes-pods-deep-dive-d6b2814cd8ce
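For reference, a rough sketch of that workflow, assuming Kubernetes 1.23+ (the container name "debugger" is a placeholder, and the exact object shape of the subresource differs between versions):

# kubectl debug does not let you set a securityContext, but the Pod's
# "ephemeralcontainers" subresource can be edited through the raw API.

# 1. Dump the current subresource:
kubectl get --raw /api/v1/namespaces/test/pods/test-pod/ephemeralcontainers > eph.json

# 2. Edit eph.json and append an entry to spec.ephemeralContainers, e.g.:
#    { "name": "debugger", "image": "yyy:1.0", "stdin": true, "tty": true,
#      "targetContainerName": "test-pod",
#      "securityContext": { "capabilities": { "add": ["NET_ADMIN"] } } }

# 3. Write it back:
kubectl replace --raw /api/v1/namespaces/test/pods/test-pod/ephemeralcontainers -f eph.json

# 4. Attach to the new container and retry the route command:
kubectl attach -it test-pod -n test -c debugger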
I'm running flink run-application targeting Kubernetes, using these options:
-Dmetrics.reporter.prom.class=org.apache.flink.metrics.prometheus.PrometheusReporter
-Dmetrics.reporter.prom.port=9249
I specify a container image which has the Prometheus plugin copied into /opt/flink/plugins. From within the job manager container I can download Prometheus metrics on port 9249. However, kubectl describe on the flink pod does not show that the Prometheus port is exposed. The ports line in the kubectl output is:
Ports: 8081/TCP, 6123/TCP, 6124/TCP
Therefore, I expect that nothing outside the container will be able to read the Prometheus metrics.
You are misunderstanding the concept of exposed ports.
When you expose a port in Kubernetes with the ports option (the same applies to Docker and its EXPOSE instruction), nothing is opened on that port to the outside world.
It's basically just a hint for users of the image: "Hey, you want to use this image? OK, you may want to have a look at this port on this container."
So if your port does not appear in kubectl describe, it does not mean that you can't reach it. You can still map it with a Service targeting that port (a sketch follows the snippet below).
Furthermore, if you really want it to appear in kubectl describe, you just have to add it to your Kubernetes descriptor file:
...
containers:
- ports:
  - name: prom-http
    containerPort: 9249
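For example, a minimal Service that targets the metrics port might look like the sketch below; the Service name and the app: flink selector are assumptions and must match the labels on your JobManager pod:

apiVersion: v1
kind: Service
metadata:
  name: flink-metrics    # hypothetical name
spec:
  selector:
    app: flink           # assumed label on the JobManager pod
  ports:
  - name: prom-http
    port: 9249           # port exposed by the Service
    targetPort: 9249     # port the Prometheus reporter listens on in the container

Prometheus can then scrape that Service (or the pods directly, depending on how your scrape configuration is set up).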
I am trying to access a web API deployed in my local Kubernetes cluster running on my laptop (Docker -> Settings -> Enable Kubernetes). Below is my Pod spec YAML.
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
kubectl get pods shows the test-api pod running. However, when I try to connect to it using http://localhost:55555/testapi/index from my laptop, I get no response. But I can access the application from a container in a different pod within the cluster (via kubectl exec -it into that container) using the URL http://<test-api pod cluster IP>/testapi/index. Why can't I access the application using the localhost:hostPort URL?
I'd say that this is strongly not recommended.
According to k8s docs: https://kubernetes.io/docs/concepts/configuration/overview/#services
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
So... is the hostPort really necessary in your case, or would a NodePort Service solve it?
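If a NodePort Service would work for you, a minimal sketch could look like this (the Service name and the nodePort value are assumptions; the selector matches the app: test-api label from your Pod):

apiVersion: v1
kind: Service
metadata:
  name: test-api
spec:
  type: NodePort
  selector:
    app: test-api
  ports:
  - name: web
    port: 80           # Service port inside the cluster
    targetPort: 80     # containerPort of the Pod
    nodePort: 30555    # must fall in the cluster's NodePort range (default 30000-32767)

With Docker Desktop's single-node Kubernetes, that usually means the app is reachable at http://localhost:30555/testapi/index.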
If it is really necessary, then you could try using the IP returned by this command:
kubectl get nodes -o wide
http://ip-from-the-command:55555/testapi/index
Also, another test that may help with troubleshooting is checking whether your app is accessible on the Pod IP.
UPDATE
I've done some tests locally and now understand better what the documentation is trying to explain. Let me go through my test:
First I created a Pod with hostPort: 55555, using a simple nginx.
Then I listed my Pods and saw that this one was running on one specific node.
Afterwards I tried to access the Pod on port 55555 through my master node's IP and another node's IP, without success, but when I tried through the IP of the node where the Pod was actually running, it worked.
So the "issue" (and actually the reason this approach is not recommended) is that the Pod is reachable only through that specific node's IP. If it restarts and gets scheduled on a different node, that IP will change as well.
I have a RabbitMQ cluster running in a Kubernetes environment. I don't have access to the containers' shell, so I'm trying to run rabbitmqctl from a local container (using the same image).
These ports are exposed:
- 15672 (exposed as 32672)
- 5671 (exposed as 32671)
- 4369 (exposed as 32369)
- 25672 (exposed as 32256)
The correct cookie is in $HOME/.erlang.cookie in my local container.
How do I specify the cluster URL and port to rabbitmqctl so that I can access the RabbitMQ cluster from outside?
Is it necessary to expose other ports?
Is this even possible? I can't find any reference to it in the documentation.
You will want to expose ports 4369 and 25672 using the same port numbers externally, as I can't think of a way to tell the Erlang VM running rabbitmqctl to use a different port for EPMD lookups. You should also expose 35672-35682, again using the same port range externally.
Since you're using Kubernetes, I'll assume you are using long node names. Assuming that, within your container, your node name is rabbit@container1.my.org, use this command to access it externally:
rabbitmqctl -l -n rabbit@container1.my.org
Please note that container1.my.org must resolve via DNS to the correct IP address to connect to that container.
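Putting that together, the invocation from your local container would look something like this (the node name and hostname are just the example above and must be adjusted to your deployment):

# The cookie must match the cluster's cookie and be readable only by its owner
chmod 600 $HOME/.erlang.cookie
# -l (--longnames) tells the Erlang VM to use fully qualified node names
rabbitmqctl -l -n rabbit@container1.my.org status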
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.
I'm trying to create 3 instances of Kafka and deploy them to a local Kubernetes setup. Because each instance needs some specific configuration, I'm creating one RC and one Service for each - eagerly waiting for #18016 ;)
However, I'm having problems because Kafka can't establish a network connection to itself when it uses the service IP (a Kafka broker tries to do this when it is exchanging replication messages with other brokers). For example, let's say I have two worker hosts (172.17.8.201 and 172.17.8.202) and my pods are scheduled like this:
Host 1 (172.17.8.201)
kafka1 pod (10.2.16.1)
Host 2 (172.17.8.202)
kafka2 pod (10.2.68.1)
kafka3 pod (10.2.68.2)
In addition, let's say I have the following service IPs:
kafka1 cluster IP: 11.1.2.96
kafka2 cluster IP: 11.1.2.120
kafka3 cluster IP: 11.1.2.123
The problem happens when the kafka1 pod (container) tries to send a message to itself using the kafka1 cluster IP (11.1.2.96). For some reason, the connection cannot be established and the message is not sent.
Some more information: if I manually connect to the kafka1 pod, I can correctly telnet to the kafka2 and kafka3 pods using their respective cluster IPs (11.1.2.120 / 11.1.2.123). Also, from the kafka2 pod, I can connect to both the kafka1 and kafka3 pods using 11.1.2.96 and 11.1.2.123. Finally, I can connect to all pods (from all pods) if I use the pod IPs.
It is important to emphasize that I shouldn't simply tell the Kafka brokers to use the pod IPs instead of the cluster IPs for replication: as it is right now, Kafka uses for replication whatever IP you configure to be "advertised", which is the same IP your clients use to connect to the brokers. Even if I could, I believe this problem may appear with other software as well.
This problem seems to happen only with the combination I am using, because the exact same files work correctly in GCE. Right now, I'm running:
Kubernetes 1.1.2
CoreOS 928.0.0
network setup with flannel
everything on Vagrant + VirtualBox
After some debugging, I'm not sure whether the problem is in the workers' iptables rules, in kube-proxy, or in flannel.
PS: I originally posted this question as an issue on their GitHub, but I was redirected here by the Kubernetes team. I reworded the text a bit because it sounded like a "support request", but I actually believe it is some sort of bug. Anyway, sorry about that, Kubernetes team!
Edit: This problem has been confirmed as a bug https://github.com/kubernetes/kubernetes/issues/20391
For what you want to do you should be using a Headless Service:
http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services
This means setting
clusterIP: None
in your Service. There will be no virtual IP associated with the Service; instead, DNS returns the IPs of all the Pods selected by the selector.
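A minimal headless Service for one of the brokers might look like the sketch below; the name, the selector label, and the port are assumptions that have to match your kafka1 RC/pod:

apiVersion: v1
kind: Service
metadata:
  name: kafka1
spec:
  clusterIP: None      # headless: DNS returns the Pod IPs directly instead of a virtual IP
  selector:
    app: kafka1        # assumed label on the kafka1 pod
  ports:
  - name: broker
    port: 9092         # Kafka's default broker port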
Update: The bug is fixed in v1.2.4
You can try container lifecycle hooks.
containers:
- name: kafka
  image: Kafka
  lifecycle:
    postStart:
      exec:
        command:
        - "some.sh" # a shell script that gets this pod's IP and notifies the other Kafka members to "add me to your cluster"
    preStop:
      exec:
        command:
        - "some.sh" # a shell script that gets the other Kafka pods' IPs and notifies the other Kafka members to "delete me from your cluster"
I have a similar problem running 3 MongoDB pods as a cluster: the pods cannot reach themselves through their services' IPs.
In addition, has the bug been fixed?