Monitoring UDP data in Wireshark shows an ARP packet - sockets

I am trying to send a UDP packet to my server 10.20.1.2 on port 20000. I have implemented a UDP client on my PC, and when I send data using the sendto API while monitoring the traffic in Wireshark, Wireshark shows it as an ARP packet:
18967 5440.858646 PcsCompu_ef:b4:89 Broadcast ARP 42 Who has 10.20.1.2? Tell 192.168.1.70
192.168.1.70 is the IP of my machine, where the UDP client is running.
I am not sure how the UDP packet is getting converted into an ARP packet.
I understand that ARP is for finding the MAC address of the target node, but here I already know the MAC address of the target device. How can I add it in my UDP client so that it directly starts UDP communication? My target device is an embedded camera; I am not expecting it to reply to an ARP request, so I want to prevent sending the ARP request.
Below is my UDP client code. Any inputs are highly appreciated. Thanks in advance.
/*
    Simple UDP client
*/
#include <stdio.h>  // printf
#include <string.h> // memset
#include <stdlib.h> // exit(0);
#include <arpa/inet.h>
#include <sys/socket.h>

#define SERVER "10.20.1.2"
#define PORT 20000 // the port on which to send data

char message[3] = {0x00, 0x00, 0x24};

int main(void)
{
    struct sockaddr_in si_other;
    int s, i, slen = sizeof(si_other);
    int ret;

    if ((s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == -1)
    {
        printf("socket failed");
    }

    memset((char *) &si_other, 0, sizeof(si_other));
    si_other.sin_family = AF_INET;
    si_other.sin_port = htons(PORT);

    if (inet_aton(SERVER, &si_other.sin_addr) == 0)
    {
        fprintf(stderr, "inet_aton() failed\n");
        exit(1);
    }

    ret = sendto(s, message, sizeof(message), 0, (struct sockaddr *) &si_other, slen);
    close(s);
    return 0;
}

Some clarifications regarding networking.
1. ARP must be sent and answered
Your camera has an IP interface, which means it must handle ARP requests; there is no doubt about that. ARP is an essential part of communicating over IP, and a camera without ARP support makes no sense. Also, the ARP packet is not your UDP datagram "converted"; it is a preliminary step before sending the actual UDP datagram. Once an ARP reply arrives carrying the destination MAC address, the UDP packet is sent to that destination. So the issue you see is not one that hardcoding a MAC address to avoid ARP would fix.
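That said, since you asked how to supply the MAC yourself: a static neighbor entry is configured at the OS level, not inside the UDP client. Below is a minimal Linux sketch using the SIOCSARP ioctl; the interface name "eth0" and the MAC bytes are placeholders, it requires root, and it only helps if the route points at the camera directly rather than at a gateway.
/* Sketch: install a permanent ARP entry on Linux via the SIOCSARP ioctl.
   "eth0" and the MAC bytes below are placeholders; requires root. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <net/if_arp.h>

int main(void)
{
    struct arpreq req;
    struct sockaddr_in *sin;
    unsigned char mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55}; /* placeholder */
    int s = socket(AF_INET, SOCK_DGRAM, 0); /* any socket will do for the ioctl */

    if (s == -1) { perror("socket"); return 1; }

    memset(&req, 0, sizeof(req));
    sin = (struct sockaddr_in *) &req.arp_pa;    /* protocol (IP) address */
    sin->sin_family = AF_INET;
    inet_aton("10.20.1.2", &sin->sin_addr);

    req.arp_ha.sa_family = ARPHRD_ETHER;         /* hardware (MAC) address */
    memcpy(req.arp_ha.sa_data, mac, 6);
    req.arp_flags = ATF_PERM | ATF_COM;          /* permanent, complete entry */
    strncpy(req.arp_dev, "eth0", sizeof(req.arp_dev) - 1);

    if (ioctl(s, SIOCSARP, &req) == -1) { perror("SIOCSARP"); return 1; }
    close(s);
    return 0;
}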
2. Your code looks fine
I compiled it locally with minor corrections (the #include <unistd.h> header, which declares close(), is missing), tested it against several targets, and the client works as expected.
3. Something is wrong with your network topology
You are sending the message from 192.168.1.70 to 10.20.1.2, which is odd. 192.168.0.0/16 and 10.0.0.0/8 are private address blocks from different ranges, so hosts in them normally can't reach each other without black magic (like NAT traversal). And, much stranger, during your attempt the ARP request is sent for an unexpected address. Let me illustrate the different cases:
- If both devices are in the same subnet (e.g. 192.168.1.70 sends to 192.168.1.71), the message is sent directly, so the client asks "who has 192.168.1.71" in the ARP request.
- If the devices are in different subnets (e.g. 192.168.1.70 sends to 8.8.8.8), the message is sent through the gateway, so the ARP request reads "who has 192.168.1.1" or whatever your gateway address is. The gateway MAC may already be in the ARP cache, in which case no ARP request is sent at all.
- In your case the subnets are obviously different, yet the ARP request asks for the final destination address rather than the gateway's MAC address.
It's a shot in the dark, but you probably have two network interfaces on your PC, one connected to the 192.168.0.0 subnet and the other to 10.0.0.0, and the ARP request goes out on both. If you sniff the wrong interface, you see the odd ARP request but not the UDP datagram, which is actually sent after it. By the way, seeing a single ARP request is also confusing, because it should be repeated several times if no one answers.
In any case, you need to check your network topology and/or simplify it. Remove unnecessary network interfaces, configure the PC and the camera to be on the same subnet connected to the same switch/router, and investigate further.
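One quick way to check which interface the kernel would actually use: connect() a UDP socket to the camera's address (connect() on a datagram socket sends nothing; it only selects a route) and read the chosen source address back with getsockname(). A minimal sketch:
/* Sketch: discover which local address the kernel routes toward 10.20.1.2.
   connect() on a UDP socket sends no packets; it only selects a route. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in dst, local;
    socklen_t len = sizeof(local);
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s == -1) { perror("socket"); return 1; }

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(20000);
    inet_aton("10.20.1.2", &dst.sin_addr);

    if (connect(s, (struct sockaddr *) &dst, sizeof(dst)) == -1) { perror("connect"); return 1; }
    if (getsockname(s, (struct sockaddr *) &local, &len) == -1) { perror("getsockname"); return 1; }

    printf("the kernel would send from %s\n", inet_ntoa(local.sin_addr));
    close(s);
    return 0;
}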

Related

Can I detect different clients from same IP in TcpConnections?

Can we detect different clients (devices) coming from the same IP on a TCP connection?
Example:
I have a TCP server called s1 and two PCs called p1 and p2, and both PCs have the same public IP (e.g. 1.2.3.4).
When I connect to s1 (my TCP server) with p1 and p2, can s1 detect that these clients with the same IP are not the same device?
From my understanding you are basically asking how to detect different devices behind a NAT, i.e. devices sharing the same external IP address. There is no fully reliable way to do this, but one can employ heuristics. Typically these are based on the ID field in the IP header and/or the TCP timestamp option; see for example A Technique for Counting NATted Hosts or Time Has Something to Tell Us About Network Address Translation. One might also try passive OS fingerprinting to detect whether different OSes are in use (and thus different real or virtual devices) - see Passive Fingerprinting.
None of these heuristics are fully reliable, though, and they also will not work if the devices are behind a proxy, since in that case the TCP/IP connections visible to the server originate from a single device - the proxy.
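To give a feel for the IP-ID heuristic, here is a rough sketch of the data collection side: log the IP header's ID field per source address and look for several independently incrementing sequences. This assumes Linux (which passes copies of incoming TCP packets to matching raw sockets) and root privileges; the counting logic itself is omitted.
/* Sketch: log the IP ID of incoming TCP packets per source address.
   Several independent ID sequences from one source IP hint at multiple
   hosts behind a NAT. Linux only; requires root. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <arpa/inet.h>

int main(void)
{
    static char buf[65536];
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

    if (s == -1) { perror("socket"); return 1; }

    for (;;) {
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        if (n < (ssize_t) sizeof(struct iphdr))
            continue;
        struct iphdr *ip = (struct iphdr *) buf;
        printf("src=%s id=%u\n",
               inet_ntoa(*(struct in_addr *) &ip->saddr),
               ntohs(ip->id));
    }
}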
Yes you can. The server can ask the operating system for the connection information of the client associated with the socket. In C this would look like:
// Accept an incoming connection
puts("Waiting for incoming connections...");
c = sizeof(struct sockaddr_in);
new_socket = accept(socket_desc, (struct sockaddr *) &client, (socklen_t *) &c);
if (new_socket < 0)
{
    perror("accept failed");
    return 1;
}
The client sockaddr structure will be filled with information about the connecting client. The server can look into it to extract the IP address as a string, doing something like:
char *ip = inet_ntoa(client.sin_addr);
You can now check whether ip matches p1 or p2.
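One caveat, tying this back to the NAT discussion above: two devices behind the same NAT will present the same IP but different source ports, so the (IP, port) pair is the better discriminator. A small helper sketch (the print_peer function is hypothetical; client is the struct sockaddr_in filled in by accept() above):
/* Sketch: identify a client by its (IP, source port) pair, since hosts
   behind the same NAT share one external IP address. */
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>

void print_peer(const struct sockaddr_in *client)
{
    printf("peer %s:%u\n",
           inet_ntoa(client->sin_addr),          /* external (possibly NAT) IP */
           (unsigned) ntohs(client->sin_port));  /* NAT-assigned source port */
}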

recvfrom icmp packet without ip header

I am trying to get an ICMP packet using the recvfrom() function. The function receives a packet including the IP header, but I only need the ICMP portion.
Is it possible to configure the socket somehow so that recvfrom() receives only the ICMP packet through it, without the IP header?
I create socket like this:
int sock_fd;
if ((sock_fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP)) == -1) {
    perror("socket");
    return ERROR;
}
UPDATE: I got an answer that this is impossible. So, if I make the buffer large (since the length of the IP header can vary), will recvfrom() write only one packet to the buffer, or could the beginning of the next packet be written to the end of the buffer?
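(For what it's worth: on a raw or datagram socket, each recvfrom() call returns exactly one packet; a following packet is never appended to the same buffer, and an oversized buffer simply comes back partially filled. A minimal Linux sketch that skips the variable-length IP header by reading its ihl field:)
/* Sketch: receive one ICMP packet and skip the variable-length IP header.
   Each recvfrom() call returns exactly one datagram. Linux; requires root. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>

int main(void)
{
    static char buf[65536]; /* large enough for any IP datagram */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);

    if (s == -1) { perror("socket"); return 1; }

    ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
    if (n < (ssize_t) sizeof(struct iphdr)) { perror("recvfrom"); return 1; }

    struct iphdr *ip = (struct iphdr *) buf;
    size_t hlen = ip->ihl * 4; /* IP header length in bytes, can vary */
    struct icmphdr *icmp = (struct icmphdr *) (buf + hlen);

    printf("ICMP type=%d code=%d (%zd ICMP bytes)\n",
           icmp->type, icmp->code, n - (ssize_t) hlen);
    return 0;
}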

Can another client app close TCP connection which a crashed client app opened with a server?

Consider the following sequence:
Client app (web browser) opens several TCP connections to different web servers;
Ethernet cable then becomes disconnected;
Client app is then closed;
Ethernet cable remains disconnected for a few hours;
Ethernet cable is reconnected;
I see "TCP keep-alive" packets (every 60 seconds, for hours) from a few of the servers to which the long-closed client app had connected!
Normally, when an app is closing, the app initiates the closure of each open socket, and the TCP layer then attempts to send a FIN packet to each remote endpoint. If it is physically possible to send the FIN packet, and the send actually happens, the local endpoint goes from the ESTABLISHED state to the FIN_WAIT_1 state (and awaits an ACK from the remote endpoint, etc.). But if the physical link is broken, the local TCP endpoint can't send that FIN, and the server still assumes the TCP connection exists (and the client-side call to the "close" function would block indefinitely until the physical link was reestablished, assuming the socket were in blocking mode, right?).
In any case, upon reconnecting the Ethernet cable after some time with all conventional networked apps (e.g., web browsers) long closed, I am receiving "TCP Keep-Alive" packets from three separate web servers at precisely 60-second intervals for HOURS!
Wireshark shows the local port numbers to which these TCP Keep-Alive packets are being sent, but neither TCPView nor netstat -abno show those local port numbers being used by any application. Looking at the "TCP/IP" property of every single running process using Process Explorer also does not show any matching port numbers. I don't think the ports are being held because of a zombie "process record" (of, say, the web browser process) due to any ongoing child process (e.g., plugin app), but I'm not sure if my observations with TCPView/netstat/Process Explorer were sufficient to rule out this possibility.
Given the identities of the remote web servers (e.g., Akamai servers), I believe the connections were established by "recent" use of a web browser. But, these keep-alives keep coming from those three web servers, even though the browser had been closed, and the physical link had been broken for hours.
If the connections appeared in TCPView, I could simply select them and manually close them. However, the client-side TCP endpoints seem long gone.
Meanwhile, I am baffled why the servers are retrying so many times to get a reply to their keep-alive packets.
TCP keep-alive behavior is typically controlled by three parameters:
(1) Time to wait until the next "burst" or "probe" attempts;
(2) Time interval between sending each keep-alive packet during a single "probe" attempt;
(3) The maximum number of "probe" attempts before the "burst" is considered a failure (and the TCP connection is consequently considered permanently broken).
For the TCP keep-alive packets I am seeing from the three different servers, the time interval between "probe" retries is exactly 60 seconds. But, it seems like the maximum number of "probe" retries is infinite, which seems like a really bad choice for any server!
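For comparison, a Windows client can set the first two parameters per socket with the SIO_KEEPALIVE_VALS ioctl; the probe count is fixed by the OS and not settable through this ioctl. A minimal Winsock sketch, assuming s is an already-created SOCKET:
/* Sketch: enable per-socket TCP keep-alive on Windows with a 60 s idle
   time and 60 s between probes; the probe count is fixed by the OS. */
#include <winsock2.h>
#include <mstcpip.h> // struct tcp_keepalive, SIO_KEEPALIVE_VALS
#pragma comment(lib, "ws2_32.lib")

int enable_keepalive(SOCKET s)
{
    struct tcp_keepalive ka;
    DWORD bytes_returned = 0;

    ka.onoff = 1;                 // turn keep-alive on
    ka.keepalivetime = 60000;     // idle time before first probe, ms
    ka.keepaliveinterval = 60000; // interval between probes, ms

    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    NULL, 0, &bytes_returned, NULL, NULL);
}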
Although I am curious about how this relentless stream of keep-alives was created and sustained, I am more immediately interested in how I might use a client-side application to force the server-side endpoints to close, given that there aren't existing local TCP endpoints receiving those keep-alive packets.
My rough idea is to create an app which creates a TCP-mode socket, binds (with port-number reuse allowed) to the port number to which the incoming keep-alives are directed, and then calls "connect" followed by "close", hoping that the server endpoint will make the TCP state transitions to reach the closed state one way or another. Another way might be to create a raw-mode socket, receive the TCP keep-alive packet (which is just an ACK), and then form and send an appropriate FIN packet (with the proper sequence number, etc., to pick up where the long-terminated client app evidently left off), and then receive an ACK and FIN before sending the final ACK.
One final note -- and I know there will be eye-rolling and ridicule: the working environment here is Windows XP SP3 running in VirtualBox on Windows 7! So, I'd prefer code or an open-source app which could achieve the goal (closing half-open TCP connection) within Windows XP SP3. Sure, I could restart the snapshot, which might close the connections -- but I am more interested in learning how to get more information about the state of network connections, and what I can do to handle this kind of TCP state problem.
I succeeded in provoking the closing of each apparent half-open TCP connection by writing a simple program (full code appears below) which binds a local socket to the port to which the server believes it is already connected, attempts to establish a new connection, and then closes the connection.
(Note: If the connection succeeds, I make an HTTP GET request, just because the phantom TCP keep-alives in my case are apparently originating from plain HTTP servers, and I was wondering what response I might get back. I think the "send" and "recv" calls could be removed without affecting the ability of the code to achieve the desired result.)
In the following code, the src_port_num variable represents the client-side port number (currently unused) to which the server is sending "TCP keep-alive" packets, and dst_ip_cstr is the IP address of the server (e.g., an Akamai web server), and dst_port_num is the port number (which, in my situation, happens to be a plain HTTP server at port 80).
CAUTION! By sharing this code I do not mean to imply that its theory of operation can be rigorously explained by an understanding of the TCP protocol specification. I just guessed that claiming an abandoned local port to which a remote endpoint is sending TCP keep-alive packets, and attempting to establish a new connection to that very same remote endpoint, would, one way or another, prod the remote endpoint to close the stale half-open connection -- and it happened to work for me.
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

void main()
{
    // Local IP and port number
    char * src_ip_cstr = "10.0.2.15";
    int src_port_num = 4805;

    // Remote IP and port number
    char * dst_ip_cstr = "23.215.100.98";
    int dst_port_num = 80;

    int res = 0;
    WSADATA wsadata;
    res = WSAStartup( MAKEWORD(2,2), (&(wsadata)) );
    if (0 != res) { printf("WSAStartup() FAIL\n"); return; }

    printf( "\nSRC IP:%-16s Port:%d\nDST IP:%-16s Port:%d\n\n",
            src_ip_cstr, src_port_num, dst_ip_cstr, dst_port_num );

    sockaddr_in src;
    memset( (void*)&src, 0, sizeof(src) );
    src.sin_family = AF_INET;
    src.sin_addr.S_un.S_addr = inet_addr( src_ip_cstr );
    src.sin_port = htons( src_port_num );

    sockaddr_in dst;
    memset( (void*)&dst, 0, sizeof(dst) );
    dst.sin_family = AF_INET;
    dst.sin_addr.S_un.S_addr = inet_addr( dst_ip_cstr );
    dst.sin_port = htons( dst_port_num );

    int s = socket( PF_INET, SOCK_STREAM, IPPROTO_TCP );
    if ((-1) == s) { printf("socket() FAIL\n"); return; }

    int val = 1;
    res = setsockopt( s, SOL_SOCKET, SO_REUSEADDR,
                      (const char*)&val, sizeof(val) );
    if (0 != res) { printf("setsockopt() FAIL\n"); return; }

    res = bind( s, (sockaddr*)&src, sizeof(src) );
    if ((-1) == res) { printf("bind() FAIL\n"); return; }

    res = connect( s, (sockaddr*)&dst, sizeof(dst) );
    if ((-1) == res) { printf("connect() FAIL\n"); return; }

    char req[1024];
    sprintf( req, "GET / HTTP/1.1\r\nHost: %s\r\nAccept: text/html\r\n"
             "Accept-Language: en-us,en\r\nAccept-Charset: US-ASCII\r\n\r\n",
             dst_ip_cstr );
    printf("REQUEST:\n================\n%s\n================\n\n", req );

    res = send( s, (char*)&req, strlen(req), 0 );
    if ((-1) == res) { printf("send() FAIL\n"); return; }

    const int REPLY_SIZE = 4096;
    char reply[REPLY_SIZE];
    memset( (void*)&reply, 0, REPLY_SIZE );

    res = recv( s, (char*)&reply, REPLY_SIZE, 0 );
    if ((-1) == res) { printf("recv() FAIL\n"); return; }
    printf("REPLY:\n================\n%s\n================\n\n", reply );

    res = shutdown( s, SD_BOTH );
    res = closesocket( s );
    res = WSACleanup();
}
HILARIOUS / SHAMEFUL / FASCINATING DISCLOSURES
As I mentioned in my original question, I observed these "TCP keep-alive" packets with Wireshark within VirtualBox running Windows XP SP3, where the host OS was Windows 7.
When I woke up this morning and looked at the phenomenon again with a cup of coffee and fresh eyes, with the "TCP keep-alive" packets still appearing every 60 seconds even after 24 hours, I made a hilarious discovery: These packets continued to arrive from the three different IP addresses, precisely at 60-second intervals (but staggered for the three IPs), even when I disconnected the Ethernet cable from the Internet! My mind was blown!
So, although the three IP addresses did correspond to real-world web servers to which my web browser connected long ago, the TCP keep-alive packets were clearly originating from some local software component.
This revelation, as shocking as it was, did not change my thinking about the situation: from my client-side software perspective, there were "server-side" half-open TCP connections that I wanted to provoke to closing.
Within VirtualBox, choosing "Devices" -> "Network" -> "Connect Network Adapter" toggles the virtual network adapter on or off, as if a virtual Ethernet cable were connected or disconnected. Toggling to a disconnected state caused the phantom TCP keep-alive packets to stop arriving to Wireshark. Subsequently toggling to a connected state caused the TCP keep-alive packets to resume arriving in Wireshark.
Anyway, I sometimes needed to run the code above TWICE to succeed in closing the half-open connection. When running the code a first time, Wireshark would show a packet with an annotation "[TCP ACKed unseen segment]", which is just the kind of TCP gas-lighting confusion I hoped to create, haha! Because the new client endpoint is unexpected by the remote endpoint, the call to "connect" hangs for maybe 30 seconds before failing. For a couple of the zombie/phantom half-open connections, running the program just once was enough to also cause an RST packet.
I needed to modify the program repeatedly to change the combination of local port number, remote IP, and remote port number, to match each phantom TCP keep-alive packet I observed in Wireshark. (I leave implementing user-friendly command-line parameters to the dear reader (that's you!).) After a few rounds of modifying and running the program, all zombie keep-alive packets were stopped. "Silence of the Packets", one might say.
EPILOGUE
[In tuxedo, martini glass in hand, gazing wistfully at the ocean from the deck of a yacht, in the company of fellow hackers] "I never did figure out where those zombie packets came from... Was it the 'VirtualBox Host-Only Network' virtual Ethernet adapter? Only the Oracle knows!"
There is nothing you need to do to close the remote socket; it is already built into the TCP protocol. If the system receives TCP packets which don't create a new connection (i.e. don't have SYN set) and don't belong to any established connection, it will reply with an RST packet. This way the peer will know that the endpoint is no longer there and will abandon the connection.

Erlang: receive multiple multicast streams on the same port

I have a multicast-based IPTV in my network.
All channels have multicast addresses like 239.0.1.*.
Streamer device sends UDP data to target port 1234.
So to receive a TV stream I do the usual stuff like:
{ok, S} = gen_udp:open(1234, ....
inet:setopts(S, [{add_membership, {{239,0,1,2}, {0,0,0,0}}}]),
It works.
Now I want to subscribe to multiple channels to receive several streams simultaneously.
So I do another call:
inet:setopts(S, [{add_membership, {{239,0,1,3}, {0,0,0,0}}}]),
It works too. I see both streams in Wireshark. I can distinguish them by destination IP addresses - 239.0.1.2 and 239.0.1.3.
BUT.
In Erlang I can't figure out which channel an incoming packet belongs to, because the UDP data arrives as messages:
{udp, Socket, IP, PortNo, Packet},
where IP and PortNo are the source address (10.33.33.32 in my case) and source port (49152).
So the question is: how do I determine the destination IP address of an incoming multicast UDP packet?
Windows 7, Erlang 5.9/OTP R15B.
Thanks!
This should retrieve the destination IP from the received UDP data:
{udp, Socket, IP, PortNo, Packet},
{ok, {Address, Port}} = inet:sockname(Socket),
Address will contain a tuple like {239,0,1,3}.

IP address in TCP sockets

I have a root node (server) connected to many other nodes (clients) through TCP sockets. I want to send some data from the server to the clients, but the data is different for each node and depends on the IP address of that node.
Thus I need the IP address of each node connected to the server. How can I get that information?
When you call accept(2) you can choose to retrieve the address of the client.
int accept(int socket, struct sockaddr *restrict address,
           socklen_t *restrict address_len);
You need to store those addresses and then send(2) to each what you need to send.
So the workflow should be something like this:
Keep a list of connected clients. Initially the list is empty, of course
When you accept a connection, push its details into that list (the address and the socket returned by accept(2)).
When you need to send something to every client, simply walk the list and send it (using the stored socket)
The one tricky part is that socklen_t *restrict address_len is a value-result argument, so you need to be careful with that.
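To make the value-result point concrete: address_len must be re-initialized to the buffer size before every accept() call, because the kernel overwrites it with the actual address length. A minimal sketch of such an accept loop (the print statement is just for illustration):
/* Sketch: an accept loop that resets the value-result length argument
   before each call, then records the client's address. */
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

void accept_loop(int listen_fd)
{
    for (;;) {
        struct sockaddr_in client;
        socklen_t len = sizeof(client); /* reset on EVERY iteration */
        int fd = accept(listen_fd, (struct sockaddr *) &client, &len);

        if (fd < 0) { perror("accept"); continue; }
        printf("client %s:%u connected\n",
               inet_ntoa(client.sin_addr), (unsigned) ntohs(client.sin_port));
        /* push fd and client into the list of connected clients here */
    }
}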
This is a more nuanced question than it first appears.
If the clients are sitting behind a NAT, you may get the same IP from more than one client. This is perfectly natural and expected behavior. If you need to distinguish between multiple clients behind the same NAT, you'll need some other form of unique client id (say, IP address and port).
As long as you have access to the list of file descriptors for the connected TCP sockets, it is easy to retrieve the addresses of the remote hosts. The key is the getpeername() system call, which allows you to find out the address of the remote end of a socket. Sample C code:
// This is ugly, but simpler than the alternative
union {
    struct sockaddr sa;
    struct sockaddr_in sa4;
    struct sockaddr_storage sas;
} address;
socklen_t size = sizeof(address);

// Assume the file descriptor is in the var 'fd':
if (getpeername(fd, &address.sa, &size) < 0) {
    // Deal with error here...
}
if (address.sa.sa_family == AF_INET) {
    // IP address now in address.sa4.sin_addr, port in address.sa4.sin_port
} else {
    // Some other kind of socket...
}
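As a short usage note continuing the snippet above: inet_ntop() converts the retrieved address to a string (unlike inet_ntoa(), it is reentrant and also handles AF_INET6):
// Continues from the 'address' union above; needs <arpa/inet.h>.
char ip[INET_ADDRSTRLEN];
if (address.sa.sa_family == AF_INET) {
    inet_ntop(AF_INET, &address.sa4.sin_addr, ip, sizeof(ip));
    printf("peer %s:%u\n", ip, (unsigned) ntohs(address.sa4.sin_port));
}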