MSG_PROXY not working to provide/specify alternate addresses for transparent proxying

I'm writing a transparent proxy that translates arbitrary UDP packets to a custom protocol and back again. I'm using transparent proxying both to read the incoming UDP packets that need translation and to write the outgoing UDP packets that have just been reverse-translated.
The setup for the socket I use for both flavors of UDP traffic is as follows:
static int
setup_clear_sock(uint16_t proxy_port)
{
    struct sockaddr_in saddr;
    int sock;
    int val = 1;
    socklen_t ttllen = sizeof(std_ttl);

    sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (sock < 0)
    {
        perror("Failed to create clear proxy socket");
        return -1;
    }
    if (getsockopt(sock, IPPROTO_IP, IP_TTL, &std_ttl, &ttllen) < 0)
    {
        perror("Failed to read IP TTL option on clear proxy socket");
        return -1;
    }
    if (setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &val, sizeof(val)) < 0)
    {
        perror("Failed to set reuse address option on clear socket");
        return -1;
    }
    if (setsockopt(sock, IPPROTO_IP, IP_TRANSPARENT, &val, sizeof(val)) < 0)
    {
        perror("Failed to set transparent proxy option on clear socket");
        return -1;
    }
    saddr.sin_family = AF_INET;
    saddr.sin_port = htons(proxy_port);
    saddr.sin_addr.s_addr = INADDR_ANY;
    if (bind(sock, (struct sockaddr *) &saddr, sizeof(saddr)) < 0)
    {
        perror("Failed to bind local address to clear proxy socket");
        return -1;
    }
    return sock;
}
I have two distinct, but possibly related problems. First, when I read an incoming UDP packet from this socket, using this code:
struct sock_double_addr_in
{
    __SOCKADDR_COMMON (sin_);
    in_port_t sin_port_a;
    struct in_addr sin_addr_a;
    sa_family_t sin_family_b;
    in_port_t sin_port_b;
    struct in_addr sin_addr_b;
    unsigned char sin_zero[sizeof(struct sockaddr) - __SOCKADDR_COMMON_SIZE - 8
                           - sizeof(struct in_addr) - sizeof(in_port_t)];
};
void
handle_clear_sock(void)
{
    ssize_t rcvlen;
    uint16_t nbo_udp_len, coded_len;
    struct sockaddr_in saddr;
    struct sock_double_addr_in sdaddr;
    bch_coding_context_t ctx;
    socklen_t addrlen = sizeof(sdaddr);

    rcvlen = recvfrom(sock_clear, &clear_buf, sizeof(clear_buf),
                      MSG_DONTWAIT | MSG_PROXY,
                      (struct sockaddr *) &sdaddr, &addrlen);
    if (rcvlen < 0)
    {
        perror("Failed to receive a packet from clear socket");
        return;
    }
    ....
I don't see a destination address come back in sdaddr. The sin_family_b, sin_addr_b, and sin_port_b fields are all zero. I've done a block memory dump of the structure in gdb, and indeed the bytes are coming back zero from the kernel (it's not a bad placement of the field in my structure definition).
Temporarily working around this by hard-coding a fixed IP address and port for testing purposes, I can debug the rest of my proxy application until I get to the point of sending an outgoing UDP packet that has just been reverse-translated. That happens with this code:
    ....
    udp_len = ntohs(clear_buf.u16[2]);
    if (udp_len + 6 > decoded_len)
        fprintf(stderr, "Decoded fewer bytes (%u) than outputting in clear "
                "(6 + %u)!\n", decoded_len, udp_len);

    sdaddr.sin_family = AF_INET;
    sdaddr.sin_port_a = clear_buf.u16[0];
    sdaddr.sin_addr_a.s_addr = coded_buf.u32[4];
    sdaddr.sin_family_b = AF_INET;
    sdaddr.sin_port_b = clear_buf.u16[1];
    sdaddr.sin_addr_b.s_addr = coded_buf.u32[3];

    if (sendto(sock_clear, &(clear_buf.u16[3]), udp_len, MSG_PROXY,
               (struct sockaddr *) &sdaddr, sizeof(sdaddr)) < 0)
        perror("Failed to send a packet on clear socket");
}
and the packet never shows up. I've checked the entire contents of the sdaddr structure I've built, and all the fields look good. The UDP payload data looks good. There's no error coming back from the sendto() syscall -- indeed, it returns zero. And the packet never shows up in Wireshark.
So what's going on with my transparent proxying? How do I get this to work? (FWIW: the development host is a generic x86_64 Ubuntu 14.04 LTS box.) Thanks!

Alright, so I've got half an answer.
It turns out if I just use a RAW IP socket, with the IP_HDRINCL option turned on, and build the outgoing UDP packet in userspace with a full IP header, the kernel will honor the non-local source address and send the packet that way.
I'm now using a third socket, sock_output, for that purpose, and decoded UDP packets are coming out correctly. (Interesting side note: the UDP checksum field must either be zero, or the correct checksum value. Anything else causes the kernel to silently drop the packet, and you'll never see it go out. The kernel won't fill in the proper checksum for you if you zero it out, but if you specify it, it will verify that it's correct. No sending UDP with intentionally bad checksums this way.)
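For reference, here is a stripped-down sketch of that raw-socket approach (my reconstruction for illustration, not the exact proxy code; send_spoofed_udp and its parameters are made-up names):
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/udp.h>

/* Sketch: send one UDP datagram with an arbitrary (possibly non-local)
 * source address by building the IPv4 header ourselves.  raw_sock is
 * assumed to have been created with:
 *     raw_sock = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
 *     int on = 1;
 *     setsockopt(raw_sock, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on));
 */
static ssize_t
send_spoofed_udp(int raw_sock, struct in_addr src, uint16_t sport,
                 struct in_addr dst, uint16_t dport,
                 const void *payload, uint16_t paylen)
{
    unsigned char pkt[1500];
    struct ip *iph = (struct ip *) pkt;
    struct udphdr *udph = (struct udphdr *) (pkt + sizeof(struct ip));
    struct sockaddr_in to;
    size_t total = sizeof(struct ip) + sizeof(struct udphdr) + paylen;

    if (total > sizeof(pkt))
        return -1;

    memset(iph, 0, sizeof(*iph));
    iph->ip_v = 4;
    iph->ip_hl = 5;
    iph->ip_ttl = 64;
    iph->ip_p = IPPROTO_UDP;
    iph->ip_len = htons(total);   /* per raw(7), Linux fills in the real length */
    iph->ip_src = src;            /* the non-local source address we need */
    iph->ip_dst = dst;
    /* ip_sum left zero: with IP_HDRINCL the kernel fills in the IP checksum */

    udph->uh_sport = htons(sport);
    udph->uh_dport = htons(dport);
    udph->uh_ulen = htons(sizeof(struct udphdr) + paylen);
    udph->uh_sum = 0;             /* must be zero or correct, nothing else */
    memcpy(udph + 1, payload, paylen);

    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_addr = dst;
    return sendto(raw_sock, pkt, total, 0,
                  (struct sockaddr *) &to, sizeof(to));
}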
So the first half of the question remains: when I read a packet from sock_clear with the MSG_PROXY flag to recvfrom(), why do I not get the actual destination address in the second half of sdaddr?
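One possible alternative for anyone hitting the same wall (a different mechanism than MSG_PROXY, and assuming a TPROXY-capable kernel with the matching iptables mangle/TPROXY setup): enable IP_RECVORIGDSTADDR on the socket and read the original destination from recvmsg() ancillary data. A minimal sketch:
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>     /* IP_RECVORIGDSTADDR / IP_ORIGDSTADDR */

static int enable_origdst(int sock)
{
    int on = 1;
    return setsockopt(sock, IPPROTO_IP, IP_RECVORIGDSTADDR, &on, sizeof(on));
}

/* Receive one datagram; *src gets the sender, *dst the original destination. */
static ssize_t
recv_with_dst(int sock, void *buf, size_t len,
              struct sockaddr_in *src, struct sockaddr_in *dst)
{
    char cbuf[CMSG_SPACE(sizeof(struct sockaddr_in))];
    struct iovec iov = { buf, len };
    struct msghdr msg;
    struct cmsghdr *cmsg;
    ssize_t n;

    memset(&msg, 0, sizeof(msg));
    msg.msg_name = src;
    msg.msg_namelen = sizeof(*src);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    n = recvmsg(sock, &msg, MSG_DONTWAIT);
    if (n < 0)
        return n;
    for (cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL; cmsg = CMSG_NXTHDR(&msg, cmsg))
        if (cmsg->cmsg_level == IPPROTO_IP && cmsg->cmsg_type == IP_ORIGDSTADDR)
            memcpy(dst, CMSG_DATA(cmsg), sizeof(*dst));
    return n;
}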

Related

Problem in reading packets from tunnel using read()

I have been trying to receive and process packets from a tunnel. There are separate blocks for processing v4 and v6 packets; if a packet does not fall under either category, it is dropped. In my case, every packet is being dropped during execution. When I used Wireshark to capture the packets from the tunnel, I noticed a difference in packet size, i.e., the length of the packet: for example, the length of a received packet in Wireshark is 60, whereas the program prints 64 as the length. I see this 4-byte difference in all packets. I am unable to find out what I am doing wrong here. Would anyone help me? I have also attached a screenshot of Wireshark and the program execution for perusal.
Image: captured packets from the tunnel, in Wireshark and in the program
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

#define MTU 1600

void processPacket(const uint8_t *packet, const size_t len) {
    // 1st octet identifies the IP version
    uint8_t version = (*packet) >> 4;
    //...
    printf("IP version - %d\n", version);
    if (version == 4) {
        // IPv4 packet processing ...
    } else if (version == 6) {
        // IPv6 packet processing ...
    } else {
        // drop packet
        printf("Unknown IP version, drop packet\n");
    }
}

int main() {
    struct ifreq ifr;
    int fd;
    uint8_t *buffer = malloc(MTU);
    ssize_t len;

    if ((fd = open("/dev/net/tun", O_RDWR)) == -1) {
        perror("Unable to open /dev/net/tun");
        exit(EXIT_FAILURE);
    }
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN;
    strncpy(ifr.ifr_name, "tun0", IFNAMSIZ);
    // the original tested an undeclared variable `err`; check ioctl() directly
    if (ioctl(fd, TUNSETIFF, (void *) &ifr) == -1) {
        perror("Error encountered during ioctl TUNSETIFF");
        close(fd);
        exit(EXIT_FAILURE);
    }
    printf("Device tun0 opened\n");
    while (1) {
        len = read(fd, buffer, MTU);
        if (len < 0) {
            perror("read");
            break;
        }
        printf("Read %zd bytes from tun0\n", len);   // %zd for ssize_t
        processPacket(buffer, (size_t) len);
    }
    printf("\nPress any key to exit...");
    getchar();
    close(fd);
}
The tunnel device prepends additional information to the IP packet, so the first byte is not the IP version. If you don't need it, you can add IFF_NO_PI to ifr_flags; see the kernel documentation.
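For illustration, a small sketch of both options (a fragment meant to drop into the question's main(), reusing its ifr, fd, buffer, MTU, and processPacket; assuming Linux's <linux/if_tun.h>):
#include <linux/if_tun.h>    /* struct tun_pi, IFF_TUN, IFF_NO_PI */

/* Option 1: ask the kernel not to prepend packet information at all */
ifr.ifr_flags = IFF_TUN | IFF_NO_PI;

/* Option 2: keep the 4-byte prefix (struct tun_pi: 2 bytes of flags plus
 * 2 bytes of EtherType) and step over it before parsing */
len = read(fd, buffer, MTU);
if (len >= (ssize_t) sizeof(struct tun_pi)) {
    processPacket(buffer + sizeof(struct tun_pi),   /* IP version byte is here */
                  len - sizeof(struct tun_pi));
}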

Sending Large files using TCP

My client program needs to send a large file to the server program. After the client connects to the server and the server accepts it, the client specifies the name of the file it will be sending. Now the client needs to send the file using TCP.
I know that if the size of the file is small (less than 1k bytes?), I can send it directly using a single call to the send() function. However, does the same work if my file is large, say about 100 MB? Does send() by itself handle the task of breaking the large data into packets and sending them reliably, or should I be the one handling this?
Thanks.
I am trying something similar, and my client code looks like this:
static void send_file(char *ipAddress, char *filename)
{
    struct sockaddr_in serverAddr;
    int skt;
    FILE *fp;
    char chunk[4096];
    size_t nread;
    size_t total = 0;

    /* NOTE: struct sockaddr_in is too small to hold an IPv6 address; the
       original v4/v6 branch wrote an IPv6 address into a v4 structure.
       Real dual-stack support needs struct sockaddr_in6 (or getaddrinfo()),
       so this version handles IPv4 only. */
    memset(&serverAddr, 0, sizeof(serverAddr));
    if (inet_pton(AF_INET, ipAddress, &serverAddr.sin_addr) != 1) {
        fprintf(stderr, "not a valid IPv4 address: %s\n", ipAddress);
        return;
    }
    skt = socket(PF_INET, SOCK_STREAM, 0);
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_port = htons(7891);
    if (connect(skt, (struct sockaddr *) &serverAddr, sizeof(serverAddr)) < 0) {
        perror("connect");
        return;
    }
    /* the original did send(skt, fp, sz, 0), which sends the FILE structure
       itself, not the file contents -- read and send the file in chunks */
    fp = fopen(filename, "rb");
    if (fp == NULL) {
        perror("fopen");
        return;
    }
    while ((nread = fread(chunk, 1, sizeof(chunk), fp)) > 0) {
        size_t off = 0;
        while (off < nread) {       /* send() may accept fewer bytes than asked */
            ssize_t n = send(skt, chunk + off, nread - off, 0);
            if (n < 0) {
                perror("send");
                fclose(fp);
                return;
            }
            off += (size_t) n;
        }
        total += nread;
    }
    printf("sent %zu bytes\n", total);
    fclose(fp);
}
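As an aside (not from the original post): on Linux, you can skip the userspace read/send loop entirely with sendfile(2), which copies file data to the socket inside the kernel. A minimal sketch, assuming a connected blocking TCP socket; send_whole_file is a made-up helper name:
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

static int send_whole_file(int skt, const char *filename)
{
    struct stat st;
    off_t off = 0;
    int fd = open(filename, O_RDONLY);

    if (fd < 0)
        return -1;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }
    /* sendfile() may also stop short, so loop until the whole file is out;
     * it advances `off` by the number of bytes actually sent */
    while (off < st.st_size) {
        ssize_t n = sendfile(skt, fd, &off, st.st_size - off);
        if (n <= 0) {
            close(fd);
            return -1;
        }
    }
    close(fd);
    return 0;
}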

Raw socket multicasting

I have a raw socket bound to eth2.
#define DEVICE_NAME "eth2"

// open a socket
int Socket = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
if (Socket < 0)
{
    perror("socket() error");
    return -1;
}

// create an interface request structure
struct ifreq ifr;
memset(&ifr, 0, sizeof(ifr));

// set the interface name
strncpy(ifr.ifr_name, DEVICE_NAME, IFNAMSIZ);

// get interface index
ioctl(Socket, SIOCGIFINDEX, &ifr);
int Socket_Index = ifr.ifr_ifindex;

// bind the socket to the interface (zero the address first, so no stray
// stack garbage ends up in the unused sockaddr_ll fields)
struct sockaddr_ll Socket_Addr;
memset(&Socket_Addr, 0, sizeof(Socket_Addr));
Socket_Addr.sll_family = AF_PACKET;
Socket_Addr.sll_protocol = htons(ETH_P_ALL);
Socket_Addr.sll_ifindex = Socket_Index;
bind(Socket, (struct sockaddr *)&Socket_Addr, sizeof(Socket_Addr));

// add multicast addresses to the socket, based on Unit Number
struct packet_mreq mreq;
memset(&mreq, 0, sizeof(mreq));
mreq.mr_ifindex = Socket_Index;
mreq.mr_type = PACKET_MR_MULTICAST;
mreq.mr_alen = ETH_ALEN;
memcpy(mreq.mr_address, Addresses[UNITS_1_2], ETH_ALEN);
setsockopt(Socket, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
memcpy(mreq.mr_address, Addresses[UNIT_3], ETH_ALEN);
setsockopt(Socket, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
Where Addresses[UNITS_1_2] resolves to 03:00:00:01:04:00 and Addresses[UNIT_3] resolves to 02:00:00:01:04:01.
The socket is only receiving the multicast packets, not the unicast ones. While debugging I started tcpdump and, lo and behold, going into promiscuous mode did the trick.
My question is: can I receive both multicast and unicast packets on the same raw socket without promiscuous mode? I have tried adding 02:00:00:01:04:01 to eth0's MAC addresses using maddr, with no luck.
Sneaking from gabhijit: Try adding
Socket_Addr.sll_pkttype = PACKET_HOST | PACKET_MULTICAST;
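If that alone doesn't help, another thing worth trying (my addition, not from the original answer, and assuming a kernel recent enough to support it): register the unicast MAC through the same packet-socket membership API used above, with PACKET_MR_UNICAST instead of PACKET_MR_MULTICAST:
struct packet_mreq ureq;
memset(&ureq, 0, sizeof(ureq));
ureq.mr_ifindex = Socket_Index;
ureq.mr_type = PACKET_MR_UNICAST;    /* secondary unicast address */
ureq.mr_alen = ETH_ALEN;
memcpy(ureq.mr_address, Addresses[UNIT_3], ETH_ALEN);
if (setsockopt(Socket, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &ureq, sizeof(ureq)) < 0)
    perror("PACKET_ADD_MEMBERSHIP (PACKET_MR_UNICAST)");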

Flexible socket application

I'm writing a game which is played over the LAN using sockets. I use a 4-byte length prefix to indicate how much data follows, like this:
void trust_recv(int sock, int length, char *buffer)
{
    int recved = 0;
    int justRecv;
    while (recved < length) {
        justRecv = recv(sock, buffer + recved, length - recved, 0);
        if (justRecv <= 0) return;   /* <= 0: also bail out when the peer closes */
        recved += justRecv;
    }
}

void onDataArrival(int sock)
{
    int length;
    char *data;
    trust_recv(sock, 4, (char *) &length);
    data = new char[length];
    trust_recv(sock, length, data);
    do_somethings_with_data(data);
}
The problem is that if someone (an intruder or hacker, for example) sends data in a different format (maybe only 2 bytes, or fewer bytes than the 4-byte prefix promises), or if there is a network problem, my application goes into a "not responding" state and has to be closed (because I use blocking sockets). How can I make my socket application more robust against this without switching the socket to non-blocking mode? (Or any ideas for organizing the data or the algorithms as well.)
You can set a receive timeout during the socket setup phase, with a setsockopt() call and the SO_RCVTIMEO option:
struct timeval tv;
tv.tv_sec = 8;
tv.tv_usec = 0;
if (setsockopt(your_sock_fd, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof tv) < 0)
    perror("setsockopt error");
then test the return value of recv() and its errno:
if (justRecv < 0)
{
    if (errno == EAGAIN)
        perror("TIMEOUT!");
    return;
}
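Beyond the timeout, it also helps to distrust the length prefix itself before allocating and blocking on it. A minimal sketch (assumptions: the sender transmits the prefix in network byte order, do_somethings_with_data() is the routine from the question, and MAX_MESSAGE is a made-up bound):
#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>

#define MAX_MESSAGE (64 * 1024)   /* made-up upper bound for one message */

/* like trust_recv(), but reports failure instead of silently returning */
static int recv_exact(int sock, char *buffer, int length)
{
    int recved = 0;
    while (recved < length) {
        int justRecv = recv(sock, buffer + recved, length - recved, 0);
        if (justRecv <= 0)        /* error, timeout (EAGAIN), or peer closed */
            return -1;
        recved += justRecv;
    }
    return 0;
}

void onDataArrival(int sock)
{
    uint32_t length;
    char *data;

    if (recv_exact(sock, (char *) &length, 4) < 0)
        return;                   /* give up instead of hanging */
    length = ntohl(length);       /* both sides must agree on byte order */
    if (length == 0 || length > MAX_MESSAGE)
        return;                   /* refuse absurd lengths from intruders */
    data = malloc(length);
    if (data == NULL)
        return;
    if (recv_exact(sock, data, (int) length) == 0)
        do_somethings_with_data(data);
    free(data);
}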

iOS timeout of ping and ttl

I want to implement ping with a timeout and TTL. I'm using Apple's "Simple Ping" sample code, and I have read "iOS ping with timeout". I changed the code:
CFSocketNativeHandle sock = CFSocketGetNative(self->_socket);
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 100000; // 0.1 sec
setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, (void *)&tv, sizeof(tv));
bytesSent = sendto(
    sock,
    [packet bytes],
    [packet length],
    0,
    (struct sockaddr *) [self.hostAddress bytes],
    (socklen_t) [self.hostAddress length]
);
But I don't understand where I should put the code that detects a timeout when receiving packets. I also need to make the ping report TTL (time-to-live) information; I want output based on this pattern: icmp_seq=count from=ip_address ttl=value_of_ttl time=value_of_replytime_ms
To modify the default TTL in the IP header, call setsockopt() with IP_TTL (tested with IPv4):
- (BOOL)setTTL:(int)ttl {
    CFSocketNativeHandle sock = CFSocketGetNative(self->_socket);
    int status = setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
    if (status < 0)
    {
        return NO;
    }
    return YES;
}
The "iOS ping with timeout" example add a output timeout to the socket. From what I understand, it will timeout if the packet is not been sent by the socket within this period. I could be wrong, but I cannot find this "timeout" value from the ICMP header and IPv4 header (ICMP Packet format).
Here are the console log and request, response packet captured using apple's simple ping:
If you only want to know the response time of the ping, I guess you can track it yourself in the delegate methods: take timestamps when the "didSendPacket" and "didReceivePingResponsePacket" functions are called, and then compare the difference.
You can always put a time limit on the recvfrom():
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 10000;
setsockopt(recv_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
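Putting the pieces together, a rough C sketch of the receive side (assumptions: recv_sock is a raw ICMP socket created with socket(AF_INET, SOCK_RAW, IPPROTO_ICMP), so each datagram starts with the reply's IPv4 header, whose TTL field is what the ttl= column reports; report_reply is a made-up helper):
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <arpa/inet.h>

/* Wait up to 0.1 s for an ICMP reply, then print the ping-style line.
 * sent_at is the timestamp taken just before the matching sendto(). */
static void report_reply(int recv_sock, int seq, struct timeval *sent_at)
{
    unsigned char buf[1500];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    struct timeval tv = { 0, 100000 }, now;

    setsockopt(recv_sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    ssize_t n = recvfrom(recv_sock, buf, sizeof(buf), 0,
                         (struct sockaddr *) &from, &fromlen);
    if (n < (ssize_t) sizeof(struct ip)) {      /* timeout or truncated */
        printf("icmp_seq=%d timeout\n", seq);
        return;
    }
    gettimeofday(&now, NULL);
    struct ip *iph = (struct ip *) buf;         /* the reply's IPv4 header */
    double ms = (now.tv_sec - sent_at->tv_sec) * 1000.0
              + (now.tv_usec - sent_at->tv_usec) / 1000.0;
    printf("icmp_seq=%d from=%s ttl=%d time=%.1f ms\n",
           seq, inet_ntoa(from.sin_addr), iph->ip_ttl, ms);
}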