I have been trying to receive and process packets from a tunnel device. There are separate blocks for processing IPv4 and IPv6 packets; if a packet falls under neither category, it is dropped. During execution, every packet is being dropped. When I captured the packets from the tunnel with Wireshark, I noticed a difference in packet size: for example, a packet that Wireshark shows with a length of 60 is reported by my program as 64 bytes long. The difference is 4 bytes for every packet. I cannot figure out what I am doing wrong here. Would anyone help me? I have also attached screenshots of Wireshark and the program output for perusal.
Image: packets captured from the tunnel, shown in Wireshark and in the program output
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>
#define MTU 1600
void processPacket(const uint8_t *packet, const size_t len) {
//1st octet identifies the IP version
uint8_t version = (*packet) >> 4;
//...
printf("IP version - %d\n", version);
if (version == 4 ) {
//ipv4 packet process ...
} else if (version == 6) {
//ipv6 packet process ...
} else {
//drop packet
printf("Unknown IP version, drop packet\n");
}
}
int main() {
struct ifreq ifr;
int fd;
uint8_t *buffer = (uint8_t *)(malloc(MTU));
ssize_t len;
if ( (fd = open("/dev/net/tun", O_RDWR)) == -1 ) {
perror("Unable to open /dev/net/tun");
exit(EXIT_FAILURE);
}
memset(&ifr, 0, sizeof(ifr));
ifr.ifr_flags = IFF_TUN;
strncpy(ifr.ifr_name, "tun0", IFNAMSIZ);
if ( ioctl(fd, TUNSETIFF, (void *) &ifr) == -1 ) {
perror("Error encountered during ioctl TUNSETIFF");
close(fd);
exit(EXIT_FAILURE);
}
printf("Device tun0 opened\n");
while(1) {
len = read(fd, buffer, MTU);
if (len < 0) {
perror("Error reading from tun0");
break;
}
printf("Read %zd bytes from tun0\n", len);
processPacket(buffer, len);
}
printf("\nPress any key to exit...");
getchar();
close(fd);
}
The tunnel device prepends additional information (a 4-byte struct tun_pi header) to the IP packet, so the first byte is not the IP version. If you don't need it, you can add IFF_NO_PI to ifr_flags. See the kernel documentation.
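For illustration, here is a minimal sketch of both approaches, adapted from the question's code (struct tun_pi comes from <linux/if_tun.h> and is 4 bytes, which matches the size difference you observed):

/* Option 1: ask the kernel not to prepend the packet-info header at all */
ifr.ifr_flags = IFF_TUN | IFF_NO_PI;

/* Option 2: keep the default and skip the 4-byte struct tun_pi prefix yourself */
#include <linux/if_tun.h>   /* struct tun_pi { __u16 flags; __be16 proto; }; */

len = read(fd, buffer, MTU);
if (len > (ssize_t) sizeof(struct tun_pi))
    processPacket(buffer + sizeof(struct tun_pi), len - sizeof(struct tun_pi));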
I am using an Intel i210-T1 network interface card.
I am running the Avnu gptp client (https://github.com/Avnu/gptp) with:
sudo ./daemon_cl -S -V
The other side is a gPTP Master.
I want to live-capture incoming UDP packets on a network interface, with hardware timestamps.
I can see the UDP packets in Wireshark, so the packets are actually on the wire.
My problem is that pcap doesn't return any packets other than PTP (ethertype 0x88f7) at all.
Is this a bug, or am I using pcap the wrong way?
I wrote a minimal example to show my problem.
The code prints:
enp1s0
returnvalue pcap_set_tstamp_type: 0
returnvalue pcap_set_tstamp_precision: 0
returnvalue pcap_activate: 0
and afterwards only:
packet received with ethertype:88f7
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <netinet/in.h>
#include <netinet/if_ether.h>
#include <pcap/pcap.h>
int main(int argc, char **argv)
{
char errbuf[PCAP_ERRBUF_SIZE];
std::string dev = "enp1s0";
pcap_t* pcap_dev;
int i = 0;
printf("%s\n", dev.c_str());
pcap_dev = pcap_create(dev.c_str(), errbuf);
if(pcap_dev == NULL)
{
printf("pcap_create(): %s\n", errbuf);
exit(1);
}
i = pcap_set_tstamp_type(pcap_dev, PCAP_TSTAMP_ADAPTER_UNSYNCED);
printf("returnvalue pcap_set_tstamp_type: %i\n", i);
i = pcap_set_tstamp_precision(pcap_dev, PCAP_TSTAMP_PRECISION_NANO);
printf("returnvalue pcap_set_tstamp_precision: %i\n", i);
i = pcap_activate(pcap_dev);
printf("returnvalue pcap_activate: %i\n", i);
struct pcap_pkthdr* pkthdr;
const u_char* bytes;
while (pcap_next_ex(pcap_dev, &pkthdr, &bytes))
{
struct ether_header* ethhdr = (struct ether_header*) bytes;
std::cout << "packet received with ethertype:" << std::hex << ntohs(ethhdr->ether_type) << std::endl;
}
}
The solution is to enable promiscuous mode using pcap_set_promisc():
https://linux.die.net/man/3/pcap_set_promisc
Promiscuous mode disables filtering by the lower layers, so you get every frame arriving on the interface.
int pcap_set_promisc(pcap_t *p, int promisc);
pcap_set_promisc() sets whether promiscuous mode should be set on a capture handle when the handle is activated. If promisc is non-zero, promiscuous mode will be set, otherwise it will not be set.
Return Value
pcap_set_promisc() returns 0 on success or PCAP_ERROR_ACTIVATED if called on a capture handle that has been activated.
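In the minimal example above, the call goes between pcap_create() and pcap_activate(), since the option only takes effect when the handle is activated. A sketch against the same pcap_dev handle from the question:

pcap_dev = pcap_create(dev.c_str(), errbuf);
if (pcap_dev == NULL) { printf("pcap_create(): %s\n", errbuf); exit(1); }

i = pcap_set_promisc(pcap_dev, 1);   // non-zero: request promiscuous mode
printf("returnvalue pcap_set_promisc: %i\n", i);

i = pcap_set_tstamp_type(pcap_dev, PCAP_TSTAMP_ADAPTER_UNSYNCED);
i = pcap_set_tstamp_precision(pcap_dev, PCAP_TSTAMP_PRECISION_NANO);
i = pcap_activate(pcap_dev);         // promiscuous mode is applied here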
I'm trying to connect to a remote peer (to which I have no direct access other than connecting to it via socket and ping) via SCTP. Assuming that I have connected successfully, what should the value of sctp_status.sstat_state be if I call getsockopt()? Mine is SCTP_COOKIE_ECHOED (3) according to sctp.h. Is that correct? Shouldn't it be SCTP_ESTABLISHED?
Because I tried sending a message to the remote peer with this code:
ret = sctp_sendmsg (connSock, (void *) data, (size_t) strlen (data), (struct sockaddr *) &servaddr, sizeof (servaddr), 46, 0, 0, 0, 0);
It returned the number of bytes I tried to send. Then, when I tried to catch any response:
ret = sctp_recvmsg (connSock, (void *) reply, sizeof (reply), NULL,
NULL, NULL, &flags);
It returns -1 with errno set to ECONNRESET (104). What are the possible mistakes in my code, or maybe in my flow? Did I miss something?
Thanks in advance for answering. Will gladly appreciate that. :)
Update: Below is my client code for connecting to the remote peer. It's actually a Node.js addon (since SCTP is not fully supported in Node), and I'm using the lksctp-tools package for the headers.
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <arpa/inet.h>
#include <signal.h>
#define MAX_BUFFER 1024
int connSock = 0;
int connect(char host[], int port, char remote_host[], int remote_port, int timeout) {
int ret, flags;
fd_set rset, wset;
struct sockaddr_in servaddr;
struct sockaddr_in locaddr;
struct sctp_initmsg initmsg;
struct timeval tval;
struct sctp_status status;
socklen_t opt_len;
errno = 0;
connSock = socket (AF_INET, SOCK_STREAM, IPPROTO_SCTP);
flags = fcntl(connSock, F_GETFL, 0);
fcntl(connSock, F_SETFL, flags | O_NONBLOCK);
if (connSock == -1)
{
return (-1);
}
memset(&locaddr, 0, sizeof(locaddr));
locaddr.sin_family = AF_INET;
locaddr.sin_port = htons(port);
locaddr.sin_addr.s_addr = inet_addr(host);
ret = bind(connSock, (struct sockaddr *)&locaddr, sizeof(locaddr));
if (ret == -1)
{
return (-1);
}
memset (&initmsg, 0, sizeof (initmsg));
initmsg.sinit_num_ostreams = 5;
initmsg.sinit_max_instreams = 5;
initmsg.sinit_max_attempts = 10;
ret = setsockopt(connSock, IPPROTO_SCTP, SCTP_INITMSG, &initmsg, sizeof(initmsg));
if (ret == -1)
{
return (-1);
}
memset (&servaddr, 0, sizeof (servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_port = htons (remote_port);
servaddr.sin_addr.s_addr = inet_addr (remote_host);
if((ret = connect (connSock, (struct sockaddr *) &servaddr, sizeof (servaddr))) < 0)
if (errno != EINPROGRESS)
return (-1);
if (ret == 0) {
fcntl(connSock, F_SETFL, flags);
return 0;
}
FD_ZERO(&rset);
FD_SET(connSock, &rset);
wset = rset;
tval.tv_sec = timeout;
tval.tv_usec = 0;
ret = select(connSock+1, &rset, &wset, NULL, timeout ? &tval : NULL);
if (ret == 0) {
close(connSock);
errno = ETIMEDOUT;
return(-1);
}
else if (ret < 0) {
return(-1);
}
fcntl(connSock, F_SETFL, flags);
opt_len = (socklen_t) sizeof(struct sctp_status);
getsockopt(connSock, IPPROTO_SCTP, SCTP_STATUS, &status, &opt_len);
printf ("assoc id = %d\n", status.sstat_assoc_id);
printf ("state = %d\n", status.sstat_state);
printf ("instrms = %d\n", status.sstat_instrms);
printf ("outstrms = %d\n", status.sstat_outstrms);
return 0;
}
int sendMessage(char remote_host[], int remote_port, char data[]) {
int ret, flags;
struct sockaddr_in servaddr;
char reply[1024];
errno = 0;
memset (&servaddr, 0, sizeof (servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_port = htons (remote_port);
servaddr.sin_addr.s_addr = inet_addr (remote_host);
printf("\nSending %s (%li bytes)", data, strlen(data));
ret = sctp_sendmsg (connSock, (void *) data, (size_t) strlen (data),
(struct sockaddr *) &servaddr, sizeof (servaddr), 46, 0, 0, 0, 0);
if (ret == -1)
{
printf("\nError sending errno(%d)", errno);
return -1;
}
else {
ret = sctp_recvmsg (connSock, (void *) reply, sizeof (reply), NULL,
NULL, NULL, &flags);
if (ret == -1)
{
printf("\nError receiving errno(%d)", errno);
return -1;
}
else {
printf("\nServer replied with %s", reply);
return 0;
}
}
}
int getSocket() {
return connSock;
}
I don't know if there's anything significant I need to set before connecting that I missed. I got the snippets from different sources, so it's quite messy.
Another update, here's the tshark log of that code when executed:
3336.919408 local -> remote SCTP 82 INIT
3337.006690 remote -> local SCTP 810 INIT_ACK
3337.006727 local -> remote SCTP 774 COOKIE_ECHO
3337.085390 remote -> local SCTP 50 COOKIE_ACK
3337.086650 local -> remote SCTP 94 DATA
3337.087277 remote -> local SCTP 58 ABORT
3337.165266 remote -> local SCTP 50 ABORT
Detailed tshark log of this here.
Looks like the remote sent its COOKIE_ACK chunk, but my client failed to set its state to ESTABLISHED (I double-checked the sstat_state value of 3 here).
If the association setup process completed, the state should be SCTP_ESTABLISHED. SCTP_COOKIE_ECHOED indicates that the association has not been completely established: the originating side (your localhost in this case) has sent (once or several times) a COOKIE_ECHO chunk which has not been acknowledged by a COOKIE_ACK from the remote end.
You can send messages in this state (SCTP will simply buffer them until it gets the COOKIE_ACK and send them later on).
It is hard to say what went wrong based on the information you provided. At this stage it is probably worth diving into the Wireshark trace to see what the remote side replies to your COOKIE_ECHO.
Also, if you can share your client/server side code, that might help to identify the root cause.
UPDATE #1:
It should also be noted that the application itself can abort the association (e.g. if this association is not configured on that server). If you are trying to connect to an arbitrary server (rather than your specific one), that is quite possible and actually makes sense in your case. In that case the state of the association on your side is COOKIE_ECHOED because the COOKIE_ACK has not arrived yet (just a race condition). As I said previously, SCTP happily accepts your data in this state and just buffers it until it receives the COOKIE_ACK. SCTP on the remote side sends the COOKIE_ACK straight away, even before the application receives execution control in accept(). If the application then decides to terminate the association in an ungraceful way, it sends an ABORT (that is the first ABORT in your Wireshark trace). Your side has not received this ABORT yet and sends a DATA chunk. Since the remote side considers this association already terminated, it cannot process the DATA chunk, so it treats it as "out of the blue" (see RFC 4960, section 8.4) and sends another ABORT with the T bit set to 1.
I guess this is what happened in your case. You can confirm it easily just by looking into the Wireshark trace.
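If you want the client to detect this reliably (rather than polling SCTP_STATUS), you can subscribe to association-change notifications. Here is a rough sketch using the lksctp API (my own illustration, not code from the question; error handling trimmed):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

/* Returns 0 once the association reaches ESTABLISHED (SCTP_COMM_UP),
 * -1 if the peer aborts or the setup fails. */
static int wait_for_comm_up(int sock)
{
    struct sctp_event_subscribe events;
    struct sctp_sndrcvinfo sinfo;
    char buf[1024];
    int flags;

    memset(&events, 0, sizeof(events));
    events.sctp_association_event = 1;   /* deliver SCTP_ASSOC_CHANGE notifications */
    if (setsockopt(sock, IPPROTO_SCTP, SCTP_EVENTS, &events, sizeof(events)) == -1)
        return -1;

    for (;;) {
        flags = 0;
        ssize_t n = sctp_recvmsg(sock, buf, sizeof(buf), NULL, NULL, &sinfo, &flags);
        if (n <= 0)
            return -1;
        if (flags & MSG_NOTIFICATION) {
            union sctp_notification *sn = (union sctp_notification *) buf;
            if (sn->sn_header.sn_type == SCTP_ASSOC_CHANGE) {
                if (sn->sn_assoc_change.sac_state == SCTP_COMM_UP)
                    return 0;    /* COOKIE_ACK processed, association established */
                if (sn->sn_assoc_change.sac_state == SCTP_COMM_LOST ||
                    sn->sn_assoc_change.sac_state == SCTP_CANT_STR_ASSOC)
                    return -1;   /* remote aborted or setup failed */
            }
        }
        /* ordinary data arrives here with MSG_NOTIFICATION clear; ignored for brevity */
    }
}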
I'm trying to write a transparent proxy that translates arbitrary UDP packets to a custom protocol and back again. I'm trying to use transparent proxying to read the incoming UDP packets that need translation, and to write the outgoing UDP packets that have just been reverse-translated.
My setup for the socket I use for both flavors of UDP sockets is as follows:
static int
setup_clear_sock(uint16_t proxy_port)
{
struct sockaddr_in saddr;
int sock;
int val = 1;
socklen_t ttllen = sizeof(std_ttl);
sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (sock < 0)
{
perror("Failed to create clear proxy socket");
return -1;
}
if (getsockopt(sock, IPPROTO_IP, IP_TTL, &std_ttl, &ttllen) < 0)
{
perror("Failed to read IP TTL option on clear proxy socket");
return -1;
}
if (setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &val, sizeof(val)) < 0)
{
perror("Failed to set reuse address option on clear socket");
return -1;
}
if (setsockopt(sock, IPPROTO_IP, IP_TRANSPARENT, &val, sizeof(val)) < 0)
{
perror("Failed to set transparent proxy option on clear socket");
return -1;
}
saddr.sin_family = AF_INET;
saddr.sin_port = htons(proxy_port);
saddr.sin_addr.s_addr = INADDR_ANY;
if (bind(sock, (struct sockaddr *) &saddr, sizeof(saddr)) < 0)
{
perror("Failed to bind local address to clear proxy socket");
return -1;
}
return sock;
}
I have two distinct, but possibly related problems. First, when I read an incoming UDP packet from this socket, using this code:
struct sock_double_addr_in
{
__SOCKADDR_COMMON (sin_);
in_port_t sin_port_a;
struct in_addr sin_addr_a;
sa_family_t sin_family_b;
in_port_t sin_port_b;
struct in_addr sin_addr_b;
unsigned char sin_zero[sizeof(struct sockaddr) - __SOCKADDR_COMMON_SIZE - 8
- sizeof(struct in_addr) - sizeof(in_port_t)];
};
void
handle_clear_sock(void)
{
ssize_t rcvlen;
uint16_t nbo_udp_len, coded_len;
struct sockaddr_in saddr;
struct sock_double_addr_in sdaddr;
bch_coding_context_t ctx;
socklen_t addrlen = sizeof(sdaddr);
rcvlen = recvfrom(sock_clear, &clear_buf, sizeof(clear_buf),
MSG_DONTWAIT | MSG_PROXY,
(struct sockaddr *) &sdaddr, &addrlen);
if (rcvlen < 0)
{
perror("Failed to receive a packet from clear socket");
return;
}
....
I don't see a destination address come back in sdaddr. The sin_family_b, sin_addr_b, and sin_port_b fields are all zero. I've done a block memory dump of the structure in gdb, and indeed the bytes are coming back zero from the kernel (it's not a bad placement of the field in my structure definition).
Temporarily working around this by hard-coding a fixed IP address and port for testing purposes, I can debug the rest of my proxy application until I get to the point of sending an outgoing UDP packet that has just been reverse-translated. That happens with this code:
....
udp_len = ntohs(clear_buf.u16[2]);
if (udp_len + 6 > decoded_len)
fprintf(stderr, "Decoded fewer bytes (%u) than outputting in clear "
"(6 + %u)!\n", decoded_len, udp_len);
sdaddr.sin_family = AF_INET;
sdaddr.sin_port_a = clear_buf.u16[0];
sdaddr.sin_addr_a.s_addr = coded_buf.u32[4];
sdaddr.sin_family_b = AF_INET;
sdaddr.sin_port_b = clear_buf.u16[1];
sdaddr.sin_addr_b.s_addr = coded_buf.u32[3];
if (sendto(sock_clear, &(clear_buf.u16[3]), udp_len, MSG_PROXY,
(struct sockaddr *) &sdaddr, sizeof(sdaddr)) < 0)
perror("Failed to send a packet on clear socket");
}
and the packet never shows up. I've checked the entire contents of the sdaddr structure I've built, and all fields look good. The UDP payload data looks good. There's no error coming back from the sendto() syscall -- indeed, it returns zero. And the packet never shows up in Wireshark.
So what's going on with my transparent proxying? How do I get this to work? (FWIW: the development host is a generic x86_64 Ubuntu 14.04 LTS box.) Thanks!
Alright, so I've got half an answer.
It turns out that if I just use a raw IP socket with the IP_HDRINCL option turned on, and build the outgoing UDP packet in userspace with a full IP header, the kernel will honor the non-local source address and send the packet that way.
I'm now using a third socket, sock_output, for that purpose, and decoded UDP packets are coming out correctly. (Interesting side note: the UDP checksum field must either be zero, or the correct checksum value. Anything else causes the kernel to silently drop the packet, and you'll never see it go out. The kernel won't fill in the proper checksum for you if you zero it out, but if you specify it, it will verify that it's correct. No sending UDP with intentionally bad checksums this way.)
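For completeness, here is a rough sketch of that raw-socket path (my own reconstruction, not the proxy's actual code; names like raw_sock and payload are placeholders). With IP_HDRINCL the kernel fills in the IP checksum and ID, and leaving the UDP checksum at zero means "no checksum", as noted above:

#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <arpa/inet.h>

static ssize_t send_udp_spoofed(int raw_sock,        /* e.g. socket(AF_INET, SOCK_RAW, IPPROTO_RAW) */
                                struct in_addr src, uint16_t sport,
                                struct in_addr dst, uint16_t dport,
                                const void *payload, size_t payload_len)
{
    unsigned char pkt[1600];
    struct iphdr  *ip  = (struct iphdr *) pkt;
    struct udphdr *udp = (struct udphdr *) (pkt + sizeof(*ip));
    struct sockaddr_in to;
    size_t total = sizeof(*ip) + sizeof(*udp) + payload_len;

    if (total > sizeof(pkt))
        return -1;

    memset(pkt, 0, sizeof(*ip) + sizeof(*udp));
    ip->version  = 4;
    ip->ihl      = 5;
    ip->ttl      = 64;
    ip->protocol = IPPROTO_UDP;
    ip->saddr    = src.s_addr;                        /* non-local source address is honored here */
    ip->daddr    = dst.s_addr;
    ip->tot_len  = htons((uint16_t) total);           /* kernel fills in ip->check and ip->id */

    udp->source = htons(sport);
    udp->dest   = htons(dport);
    udp->len    = htons((uint16_t)(sizeof(*udp) + payload_len));
    udp->check  = 0;                                  /* zero = no UDP checksum */

    memcpy(pkt + sizeof(*ip) + sizeof(*udp), payload, payload_len);

    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_addr   = dst;
    return sendto(raw_sock, pkt, total, 0, (struct sockaddr *) &to, sizeof(to));
}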
So the first half of the question remains: when I read a packet from sock_clear with the MSG_PROXY flag to recvfrom(), why do I not get the actual destination address in the second half of sdaddr?
I have a simple TCP client/server program that works well with strings and character data. I want to take each frame (from a webcam) and send it to the server. Here is the part of the client program where the error happens:
line:66 if(send(sock, frame, sizeof(frame), 0)< 0)
error:
client.cpp:66:39: error: cannot convert ‘cv::Mat’ to ‘const void*’ for argument ‘2’ to ‘ssize_t send(int, const void*, size_t, int)
I can't make sense of this error. Kindly help. The complete client program follows:
#include<stdio.h>
#include<sys/types.h>
#include<sys/socket.h>
#include<netinet/in.h>
#include<string.h>
#include<stdlib.h>
#include<netdb.h>
#include<unistd.h>
#include "opencv2/objdetect.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc,char *argv[])
{
int sock;
struct sockaddr_in server;
struct hostent *hp;
char buff[1024];
VideoCapture capture;
Mat frame;
capture.open( 1 );
if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
begin:
capture.read(frame);
if( frame.empty() )
{
printf(" --(!) No captured frame -- Break!");
goto end;
}
sock=socket(AF_INET,SOCK_STREAM,0);
if(sock<0)
{
perror("socket failed");
exit(1);
}
server.sin_family =AF_INET;
hp= gethostbyname(argv[1]);
if(hp == 0)
{
perror("get hostname failed");
close(sock);
exit(1);
}
memcpy(&server.sin_addr,hp->h_addr,hp->h_length);
server.sin_port = htons(5000);
if(connect(sock,(struct sockaddr *) &server, sizeof(server))<0)
{
perror("connect failed");
close(sock);
exit(1);
}
int c = waitKey(30);
if( (char)c == 27 ) { goto end; }
if(send(sock, frame, sizeof(frame), 0)< 0)
{
perror("send failed");
close(sock);
exit(1);
}
goto begin;
end:
printf("sent\n",);
close(sock);
return 0;
}
Because TCP provides a stream of bytes, before you can send something over a TCP socket you must compose the exact bytes you want to send. Your use of sizeof is incorrect: the sizeof operator tells you how many bytes are needed on your system to store the particular type. This bears no relationship to the number of bytes the data will require over the TCP connection; that depends on the protocol you are layering on top of TCP, which must specify how the data is to be sent at the byte level.
Like David already said, you got the length wrong. sizeof() won't help; what you want is probably
frame.total() * frame.channels()
You can't send a Mat object, but you can send the pixels (the data pointer), so this would be:
send(sock, frame.data,frame.total() * frame.channels(), 0)
But it's still a bad idea. Sending uncompressed pixels over the network? Bahh.
Please look at imencode/imdecode.
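A rough sketch of what that could look like (my own example, not part of the answer; the JPEG format and the 4-byte length prefix are arbitrary choices, and imencode lives in imgcodecs in OpenCV 3+):

#include <vector>
#include <cstdint>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include "opencv2/imgcodecs.hpp"

// Compress one frame and send it as: 4-byte length (network byte order) + JPEG bytes.
bool sendFrame(int sock, const cv::Mat &frame)
{
    std::vector<uchar> jpeg;
    if (!cv::imencode(".jpg", frame, jpeg))
        return false;

    uint32_t len = htonl(static_cast<uint32_t>(jpeg.size()));
    if (send(sock, &len, sizeof(len), 0) != (ssize_t) sizeof(len))
        return false;

    size_t sent = 0;
    while (sent < jpeg.size()) {                 // send() may write only part of the buffer
        ssize_t n = send(sock, jpeg.data() + sent, jpeg.size() - sent, 0);
        if (n <= 0)
            return false;
        sent += static_cast<size_t>(n);
    }
    return true;   // receiver: read 4 bytes, ntohl(), read that many bytes, then cv::imdecode()
}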
I'm pretty sure you've got the client/server roles in reverse here.
Usually the server holds the information to retrieve (the webcam), and the client connects to it
and requests an image.
I'm writing a game that is played over a LAN using sockets. I use a 4-byte length prefix to know how much data follows, like this:
void trust_recv(int sock, int length, char *buffer)
{
int recved = 0;
int justRecv;
while(recved < length) {
justRecv = recv(sock, buffer + recved, length - recved, 0);
if (justRecv < 0) return;
recved += justRecv;
}
}
void onDataArrival(int sock)
{
int length;
char *data;
trust_recv(sock, 4, (char *) &length);
data = new char[length];
trust_recv(sock, length, data);
do_somethings_with_data(data);
}
The problem is that if someone (an intruder or hacker, for example) sends data in a different format (maybe only 2 bytes, or fewer bytes than the 4-byte prefix announces), or there is a network problem, my application goes into a "not responding" state and has to be closed (because I use a blocking socket). How can I make my socket application more robust against this without switching the socket to non-blocking mode? (Or any ideas for organizing the data or algorithms are welcome as well.)
You can set a receive timeout, during the socket setup phase, with a setsockopt() call and the SO_RCVTIMEO option:
struct timeval tv;
tv.tv_sec = 8;
tv.tv_usec = 0;
if (setsockopt(your_sock_fd, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof tv) < 0)
perror("setsockopt error");
Then test the return value of recv() and its errno:
if (justRecv < 0)
{
if (errno == EAGAIN || errno == EWOULDBLOCK)
perror("TIMEOUT!");
return;
}
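With the timeout in place, it also helps if trust_recv() reports what actually happened, so the caller can drop the connection instead of hanging. A small sketch building on the code from the question (my own variant, not part of the answer above):

#include <errno.h>
#include <sys/socket.h>

int trust_recv(int sock, int length, char *buffer)
{
    int recved = 0;
    while (recved < length) {
        int justRecv = recv(sock, buffer + recved, length - recved, 0);
        if (justRecv == 0)
            return recved;                    /* peer closed the connection */
        if (justRecv < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return -2;                    /* SO_RCVTIMEO expired */
            return -1;                        /* other socket error */
        }
        recved += justRecv;
    }
    return recved;
}

/* caller: anything short of the full length means a broken or misbehaving peer */
if (trust_recv(sock, 4, (char *) &length) != 4) {
    /* close the socket and clean up instead of blocking forever */
}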