Pod in Kubernetes cannot establish an RTSP session, UDP port unreachable

I am trying to establish a connection between my pod and a camera over RTSP (using ffmpeg).
My container can establish the connection both when running locally and on the server.
In Kubernetes, however, it appears to reach and identify the camera server, but it cannot initialize a stream. I ran tcpdump in my container while trying to connect and captured the following:
10:55:37.065954 IP **CAMERA_SERVER_IP** > **POD_NAME**: ICMP **CAMERA_SERVER_IP** udp port 36337 unreachable, length 44
10:55:37.066003 IP **CAMERA_SERVER_IP** > **POD_NAME**: ICMP **CAMERA_SERVER_IP** udp port 36336 unreachable, length 48
**CAMERA_SERVER_IP** -> the camera server's IP address
**POD_NAME** -> my pod's name in Kubernetes
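For reference, a capture like the one above can be produced inside the container with something along these lines (the interface name eth0 is an assumption and may differ in your pod):
tcpdump -ni eth0 'udp or icmp'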
When I try locally, the first UDP port fails too, but the second establishes a connection and streams.
I think this has something to do with port communication, but I am a bit lost as to what I should try or test next.
Thanks!
UPDATE:
Actually, I found something strange.
I tried starting the connection again and analyzed the output of tcpdump and netstat -tulpn.
When connecting locally, netstat showed two UDP sockets being created, and tcpdump showed the camera server reaching ffmpeg on those same UDP ports.
In a pod in Kubernetes, however, the ports open according to netstat were different from the ports the camera server tried to reach (verified using tcpdump), presumably because the pod network rewrites the client ports that ffmpeg advertises in the RTSP SETUP request.
I think this is the error, as ffmpeg fails immediately when the stream arrives on a port that is not open.
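A note for anyone with the same symptom: if the camera supports it, forcing RTSP to interleave the media over the existing TCP control connection sidesteps the UDP port negotiation entirely. A minimal sketch (the stream URL and output file are placeholders):
ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_SERVER_IP/stream -c copy out.mp4
With -rtsp_transport tcp the RTP packets travel inside the RTSP TCP session, so no extra UDP ports have to survive the pod network's address translation.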

I actually made a workaround using another ffmpeg server wrapper, as I explained here: https://github.com/kubernetes/kubernetes/issues/94561
If someone has a similar problem, especially with Intelbras DVRs or those that use the DAHUA API, this might be interesting to read.

Related

How to set up a client/server connection using port forwarding

I created a multi-threaded client/server application that can send messages back and forth in real time. Everything works perfectly, but I want to be able to send messages over the Internet. From what I understand, I need to set up port forwarding to make my server reachable to the clients. I set up my port forwarding options by providing a port (9991) and my MacBook Air's IP address (192.168.0.1).
I then tried to connect to my server using my public IP (let's say 197.132.20.222) and it didn't work. I checked whether the port forwarding worked using this website: https://www.yougetsignal.com/tools/open-ports/ and it reported the connection as closed. I also tried the command nc -vz 197.132.20.222 9991 while running my application, and the connection is refused.
I'm using a JavaFX application; on the server side I use a ServerSocket on port 9991. On the client side, I use a Socket with the IP address set to my public router IP, and I tried to connect from another PC using mobile data, so as to be on a different network.
My firewall is turned off, so I really don't know what is blocking my application from connecting to that port. Could my ISP be blocking connections? I just don't understand why my ports are blocked even with no firewall enabled.
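One way to take the Java application out of the equation is to test the forwarding with nc alone; a minimal sketch, reusing the addresses above:
nc -l 9991                      # on the MacBook: listen on the forwarded port
nc -vz 197.132.20.222 9991      # from another network: probe the public IP
If the second command succeeds, the forwarding itself works and the problem is in the application; if it is refused, the router or ISP is the place to look.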

Localhost server in loopback does not answer incoming SYN

I have a TCP server running on localhost (127.0.0.1). I am trying to connect to the server by injecting SYN packets into the loopback interface, but the server doesn't answer them. These packets have the source IP of my Ethernet adapter (not the localhost IP).
I can watch the SYN packet go to my loopback server in Wireshark, but the server does not answer it with a SYN/ACK. I think it is because the source IP is not 127.0.0.1 but, for example, 192.168.1.24.
If I connect to my localhost server from the browser, it works fine, but then the source IP is 127.0.0.1 and the destination IP is 127.0.0.1 too; the only difference between the packets is the source IP.
I want to establish a TCP connection with my loopback server (localhost) using source IP addresses other than 127.0.0.1. Is that possible?
For example, should a loopback TCP SYN packet coming from 192.168.1.24 to 127.0.0.1 be answered by the loopback server?
Thanks and regards!
You can send packets to localhost via the Npcap Loopback Adapter and get a response from the counterpart (e.g. a process on the same machine). An example is Nmap, which uses the Npcap Loopback Adapter to scan the ports of localhost; the command is nmap -v -A 127.0.0.1. Nmap is open source, so you can look at its code for the implementation. If you think Nmap is too complicated, look at the source code of Nping, a ping tool shipped with Nmap; Nping also uses the Npcap Loopback Adapter when pinging localhost, which works differently from the original ping shipped with Windows.
Using the IP of one of your local adapters or using 127.0.0.1 should behave the same; you can run Nmap to test it. In any case, using 127.0.0.1 is best and is what Npcap recommends when talking to localhost.
So I think the issue still lies in your own implementation.
Does the server bind() using INADDR_LOOPBACK? If so, you could try changing it to INADDR_ANY to see if that helps. See also man 7 ip.
(These links are obviously Linux-specific; if your platform is something else, then refer to the documentation applicable to your system. For example, if you're on Windows, then maybe refer to https://msdn.microsoft.com/en-us/library/windows/desktop/ms737550(v=vs.85).aspx.)
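A quick way to tell which case you are in is the local address column of netstat; a sketch with a hypothetical server on port 8080:
netstat -tln | grep 8080
A listener bound to INADDR_LOOPBACK shows up as 127.0.0.1:8080 and never sees packets addressed elsewhere, while one bound to INADDR_ANY shows up as 0.0.0.0:8080 and accepts connections on any local address.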
I solved the problem, thank you very much for your answers.
The problem was a bit stupid: I was trying to establish a TCP connection with the loopback server (localhost) using source IP addresses that were not in the loopback range (loopback gateway 127.0.0.1, loopback netmask 255.255.0.0). It can't accept packets from source IP addresses outside the 127.0.x.x range. If I apply NAT and translate the packet from, for example, 192.168.1.154 to 127.0.1.154, the packet is received by the server and I can establish the connection. I don't know how I didn't realize it before.
Thank you for your time, regards!
I also think it may be better to bind the server to another virtual network adapter rather than to the loopback; I am studying this: https://github.com/Microsoft/Windows-driver-samples/tree/master/network/ndis/netvmini/6x
It would be nice to create a miniport driver and bind the server there: we would have the advantage of our own gateway and netmask, and the layer would be Ethernet rather than BSD loopback. Your opinions would be interesting to me.

DNS does not work over TCP from pod

I have an OpenShift Origin installation (v1.2.1, but I also reproduced this issue on 1.3.0), and I'm trying to get pods' IPs from DNS by service name. Assume my node has IP 192.168.58.6, and I look up the pods of the headless service 'hz' in the project 'hz-test'. When I send a DNS request to dnsmasq (which is installed on the nodes and forwards requests to Kubernetes' SkyDNS) over UDP, everything goes well:
# dig +notcp +noall +answer hz.hz-test.svc.cluster.local @192.168.58.6
hz.hz-test.svc.cluster.local. 14 IN A 10.1.2.5
<and so on...>
However, when I switch transport protocol to TCP, I receive the following error:
# dig +tcp +noall +answer hz.hz-test.svc.cluster.local @192.168.58.6
;; communications error to 192.168.58.6#53: end of file
Looking at the tcpdump output, I discovered that after the TCP connection is established (SYN - SYN/ACK - ACK), dnsmasq immediately sends back a FIN/ACK, and when the DNS client tries to send its request over this connection, dnsmasq answers with an RST packet instead of a DNS response. I tried to perform the same DNS query over TCP from the node itself, and dnsmasq gave me the usual response, i.e. it worked normally over TCP; the problem arose only when the request came from a pod. I also tried sending the same query over TCP directly from the pod to Kubernetes' DNS (bypassing dnsmasq), and that query was OK too.
So why does dnsmasq on the nodes ignore TCP requests from pods while all other communications are fine? Is this the intended behavior?
Any help and ideas are appreciated.
Finally, the reason was that dnsmasq was configured to listen on the node's IP (listen-address=192.168.58.6). With that configuration dnsmasq binds to all of the node's network interfaces but tries to reject "wrong" connections in userspace (i.e. on its own).
I don't really understand why dnsmasq decided that TCP requests from pods to 192.168.58.6 were forbidden under that configuration, but I got it working by changing "listen-address" to
interface=eth0
bind-interfaces
which forces dnsmasq to actually bind only to the NIC with IP 192.168.58.6. After that, dnsmasq started to accept all TCP requests.
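After a change like this it is worth confirming what dnsmasq is actually bound to; a sketch:
netstat -tulpn | grep ':53'
Port 53 should now be listed against 192.168.58.6 for both UDP and TCP, rather than against 0.0.0.0.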

Explain SSH tunneling process and limitations (for a remote Xdebug session)

The Preamble
I start up my local SSH terminal at work behind a firewall, and connect to a remote server all the time without any problem.
The way Xdebug works, correct me if I'm wrong, is that it sends an "unsolicited" request to port 9000 on my network. I actually initiated that action by sending the remote server an HTTP request from my browser with a POST/GET/COOKIE variable instructing Xdebug to start up, but my network doesn't know that. All it knows is that it is getting a request on port 9000 from the Internet. It doesn't know which computer in its private network to forward the request to (without port forwarding being set up on the router), and can only ignore it.
So if you can't do port forwarding, another option (and a much better one, from what I can tell) is SSH tunneling. My computer sends the SSH request and the server responds; my router knows which computer in its network to route those responses to. Piggybacking on that SSH connection allows the "unsolicited" port 9000 requests from the remote server to reach me.
I think I understand that much.
I finally got tunneling to work, thanks to stackoverflow, but how it works is still fuzzy to me.
On the remote server, I tell Xdebug to connect to localhost (not to my IP via xdebug.remote_host=173.123.45.56, and not via xdebug.remote_connect_back=1, which would also end up at my IP) on port 9000. Connecting to localhost seems a bit weird, since I picture that as the server sending messages to its own IP address, as if it were sending messages into itself (but I think connecting to localhost is probably fundamentally different from connecting to any other IP... I don't think the message gets routed out and back in).
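Concretely, the server-side setup being described would look something like this in php.ini (Xdebug 2 option names; remote_enable and remote_port are the standard companions of the remote_host setting discussed above):
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9000
Here "localhost" means the remote server itself, which is exactly where the SSH tunnel endpoint will be listening.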
On my computer at work, I open an SSH connection on port 22, specifying a tunnel to/on port 9000 and remote port 9000. I've seen some explanations of the various settings here but still don't understand them; some even seem to involve three machines. What seems to be happening, though, is that I'm connected as usual via port 22, but I've told the remote machine that I want to receive its port 9000 communications. I specified "localhost" in my tunnel, and I suppose that might need to match the localhost in my xdebug.remote_host value. I wonder: if I specified my IP address in both places (i.e. xdebug.remote_host=173.123.45.56 on the remote server, and the same IP in my SSH terminal), would that work too?
So Xdebug on the remote server sends me a request to initiate a debug session. It comes through my port 22, but my SSH tunnel somehow makes it look as if it is coming in on port 9000. My IDE, which is listening on port 9000, receives the request and sends a response (also on 9000), which my SSH tunnel intercepts and sends back to the remote server on port 22, where it is similarly made to look like port 9000 traffic to Xdebug.
The Crux
So what I'm really not clear on is, what exactly is the localhost in my SSH tunnel configuration referring to? Does it relate directly to the xdebug.remote_host=localhost value? Can I change them both to my IP address?
Are all of the remote server's outgoing communications on port 9000 being forwarded to me, or just some of them? E.g., if someone in Chattanooga initiates a debug session in their browser, will I receive Xdebug's response?
Are all of my outgoing communications on port 9000 being forwarded to that server? I.e. can I debug two applications on two different servers at the same time, with some of my port 9000 communications going one way and some the other, or would I need one port per local application? (I can use Google Chrome and Firefox browsers at the same time, both on port 80, for example.)
With a remote (-R) tunnel there is an SSH client at your end and an sshd at the other end, talking over port 22. The remote sshd listens on the remote machine's port 9000; when Xdebug connects to it, the data is carried through the SSH connection, and your local SSH client connects to port 9000 on your machine, where your IDE is listening. Thereafter the remote port 9000 behaves identically to your local port 9000: all data written to either end appears at the other end.
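In command form, the tunnel described above is a remote forward (user@remote-server is a placeholder for the machine running Xdebug):
ssh -R 9000:localhost:9000 user@remote-server
This asks the remote sshd to listen on its port 9000 and relay every connection made to it back through the SSH session to localhost:9000 on your machine, where the IDE is listening.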

Connectivity issues with SSL Socket Server

My socket server with SSLStream sometimes refuses new connections from clients.
I used telnet hostname port, and it says Connecting To host...
Could not open connection to the host, on port 6002: Connect failed
I used netstat -a, and I see the TCP status as
TCP 0.0.0.0:6002 host:0 LISTENING
I also see the service listening in TCPView.
The error I see on the client side is connection refused, with error code 10061.
The same socket server was accepting new connections and running fine without any issues, but after some time the above issue happens; it's random.
When I restart the socket server, it works fine and accepts connections again, but I don't want to do that frequently, because it disconnects clients who are already connected.
Could somebody help me troubleshoot this?
Thanks.
Where are you running netstat? On the server?
Try connecting to the socket from localhost (from the server itself) using the destination IP address 127.0.0.1
Do the same test with the network IP of the server.
My guess is that the firewall is preventing external access or a router in between is preventing the connection.
It works for a while and then stops. A few options I can think of:
Some firewall on the way does some kind of throttling
You open and close too many connections too quickly. In this case you exhaust the ephemeral ports on the client (usually) and/or on the server. If you run netstat -a you will see a lot of sockets in the TIME_WAIT state; try this on both the client and the server (see the sketch after this list). The solution here is to reuse connections (best), or to increase the number of ephemeral ports (a registry setting), but that will only take you so far.
You have a bug in your server and it stops accepting new connections after a while.
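For the second option, a quick way to gauge ephemeral port exhaustion on Windows (which the netstat output above suggests you are on) is to count the TIME_WAIT sockets while the refusals are happening:
netstat -an | find /c "TIME_WAIT"
Run it on both the client and the server; a count in the thousands on either side points at exhaustion.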