Java Netty Set source IP in sent UDP packet - sockets

I have captured SNMP traps/informs from my network (a mix of V1 and V2c) and I wish to write a Camel pipeline to replay them in order to test my trap processing engine. To do this, I must resend the traps with the source IP of the original sender, since that is part of the criteria for identifying and correctly processing a trap.
I thought I would send the resulting UDP datagrams using Netty (although I'm open to writing a stand-alone component, or to using MINA or any other approach; the Camel SNMP component does not immediately seem appropriate). I have implemented similar functionality in Python, where I needed to write to a raw socket. I have looked through the Netty component source in Camel and could not see how I might use it unmodified with a raw socket.
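For reference, the raw-socket approach I used in Python looked roughly like the following; a minimal sketch, assuming Linux and root privileges (on Linux the kernel fills in the IPv4 checksum and id fields when they are left at zero):

    import socket
    import struct

    def send_udp_with_source(src_ip, src_port, dst_ip, dst_port, payload):
        # Raw IP socket: we supply the IP header ourselves (IP_HDRINCL),
        # so we can put any source address in it. Needs root/CAP_NET_RAW.
        sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

        udp_len = 8 + len(payload)
        # UDP header: src port, dst port, length, checksum (0 = none, legal for IPv4)
        udp_header = struct.pack('!HHHH', src_port, dst_port, udp_len, 0)

        # IPv4 header; checksum and id left at zero for the kernel to fill in.
        ip_header = struct.pack('!BBHHHBBH4s4s',
                                0x45, 0,             # version/IHL, TOS
                                20 + udp_len, 0, 0,  # total length, id, flags/frag
                                64, socket.IPPROTO_UDP, 0,  # TTL, protocol, checksum
                                socket.inet_aton(src_ip),
                                socket.inet_aton(dst_ip))

        sock.sendto(ip_header + udp_header + payload, (dst_ip, 0))
        sock.close()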
Does anyone out there have an example of using Netty with raw sockets that they could share? Bonus points if it includes some reference to Camel, or better still if it shows a way to do this with something more high-level than a raw socket (i.e. a regular UDP datagram with some kind of modifier to set the source IP).
Many thanks

Related

C# BeginSend/BeginReceive sometimes send or receive data attached [duplicate]

I have two apps sending TCP packets, both written in Python 2. When the client sends TCP packets to the server too fast, the packets get concatenated. Is there a way to make Python read only the last sent packet from the socket? I will be sending files with it, so I cannot just use some character as a packet terminator, because I don't know the content of the file.
TCP uses packets for transmission, but this is not exposed to the application. Instead, the TCP layer decides how to break the data into packets, even fragments, and how to deliver them; often this depends on the underlying network topology.
From an application point of view, you should consider a TCP connection as a stream of octets, i.e. your data unit is the byte, not a packet.
If you want to transmit "packets", use a datagram-oriented protocol such as UDP (but beware: there are size limits for such packets, and with UDP you need to take care of retransmission yourself), or frame them manually. For example, you could always send the packet length first, then the payload, over TCP. On the other side, read the size first; then you know how many bytes must follow (beware: you may need to read more than once to get everything, because of fragmentation). Here, TCP takes care of in-order delivery and retransmission, so this is easier.
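A minimal sketch of that framing scheme in Python 3 (the 4-byte big-endian length and the helper names are just illustrative):

    import struct

    def send_packet(sock, payload):
        # One "packet" = 4-byte big-endian length, then the payload itself.
        sock.sendall(struct.pack('!I', len(payload)) + payload)

    def recv_exact(sock, n):
        # recv() may return fewer bytes than requested, so loop.
        buf = b''
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError('socket closed mid-message')
            buf += chunk
        return buf

    def recv_packet(sock):
        (length,) = struct.unpack('!I', recv_exact(sock, 4))
        return recv_exact(sock, length)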
TCP is a streaming protocol that doesn't expose individual packets. While reading from the stream and getting whole packets might work in some configurations, it will break with even minor changes to the operating system or networking hardware involved.
To resolve the issue, use a higher-level protocol to mark file boundaries. For example, you can prefix the file with its length in octets (bytes), or you can switch to a protocol that already handles this kind of thing, like HTTP.
First you need to know whether the packets are combined before they are sent or after. Use Wireshark to check whether the sender is sending one packet or two. If it is sending one, then your fix is to call flush() after each write. I do not know the answer if the receiver is combining packets after receiving them.
You could also change what you are sending: send the number of bytes first, followed by the bytes themselves. Then the other side would know how many bytes to read.
Normally, TCP_NODELAY prevents that kind of coalescing on the sender. But there are very few situations where you need to switch it on; one of the few valid ones is telnet-style applications.
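For reference, switching it on in Python looks like this; note that TCP_NODELAY only stops the sender coalescing small writes (Nagle's algorithm), it does not turn TCP into a message-oriented protocol:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm: small writes go out immediately instead of
    # being buffered while waiting for outstanding ACKs.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)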
What you need is a protocol on top of the TCP connection. Think of the TCP connection as a pipe: you put things in one end of the pipe and get them out of the other. You cannot just send a file through it without both ends being coordinated. You have recognised that you don't know how big the file is or where it ends; that is your problem, and it is exactly what protocols take care of. You don't have a protocol, so what you're writing is never going to be robust.
You say you don't know the length. Get the length of the file and transmit that in a header, followed by that many bytes.
For example, if the header is a 64-bit integer holding the length, then at the server end you read that 64-bit number first and keep reading until you have received that many bytes, which is the end of the file.
Of course, this is extremely simplistic, but that's the basics of it.
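As an illustration only, the receiving end of that 64-bit-length scheme might look like this in Python (illustrative names; note the loops, since a single recv() can return fewer bytes than requested):

    import struct

    def recv_file(sock, out_path):
        # Header: 8-byte big-endian integer announcing the file size.
        header = b''
        while len(header) < 8:
            chunk = sock.recv(8 - len(header))
            if not chunk:
                raise ConnectionError('socket closed before header was complete')
            header += chunk
        (remaining,) = struct.unpack('!Q', header)

        # Body: keep reading until the announced number of bytes has arrived.
        with open(out_path, 'wb') as f:
            while remaining:
                chunk = sock.recv(min(65536, remaining))
                if not chunk:
                    raise ConnectionError('socket closed mid-file')
                f.write(chunk)
                remaining -= len(chunk)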
In fact, you don't have to design your own protocol; you could use an existing one, such as HTTP.

Do raw sockets reassemble the packet?

I am implementing simple tunneling and encryption of outgoing IP packets, i.e. each packet, IP header included, is encrypted and prepended with a new IP header.
For this purpose I use raw sockets in the sender and the receiver.
I am trying to figure out whether fragmentation of the outgoing packets can break the ability to decrypt them again.
Do raw sockets provide the assembled packet, or do I need to implement de-fragmentation myself?
Assuming that you are referring to raw sockets of the Berkeley sockets API (a.k.a. BSD sockets), the answer is: no, they do not combine fragments of fragmented IP packets. You will receive the IP packets, including the IP header, just as they arrived at your network interface.
Please note that there are various implementations of BSD sockets in different operating systems, and you didn't say which system(s) you are developing this code for. Although the POSIX standard based its network API on BSD sockets, POSIX doesn't specify raw sockets at all, so a POSIX-conforming operating system doesn't even have to support raw sockets.
And although many systems have adopted the BSD API, among them Linux/Android, FreeBSD, macOS/iOS, and even Windows, there are some important differences in their implementations: they support different socket options, their socket options behave in different ways, or they support different extensions. (As an example of differences in socket options, see here.) So your system may in theory have an option you can set to get reassembled packets. That would not be portable, but raw sockets themselves are of limited portability to begin with.
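Whatever your platform delivers, you can at least detect fragments by inspecting the IPv4 flags/fragment-offset field. A sketch in Python, assuming a Linux-style AF_INET raw socket and root privileges:

    import socket
    import struct

    # Receives whole IP datagrams (header included) for one protocol.
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)

    while True:
        packet, addr = sock.recvfrom(65535)
        version_ihl, _tos, total_len, ident, flags_frag = struct.unpack('!BBHHH', packet[:8])
        header_len = (version_ihl & 0x0F) * 4
        more_fragments = bool(flags_frag & 0x2000)   # MF bit
        fragment_offset = (flags_frag & 0x1FFF) * 8  # stored in 8-byte units
        if more_fragments or fragment_offset:
            print('fragment: id=%d offset=%d MF=%s' % (ident, fragment_offset, more_fragments))
        else:
            print('complete datagram from %s, %d payload bytes'
                  % (addr[0], total_len - header_len))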
This is OS-specific, but generally it depends on how you read them. Take a look at a couple of Linux docs on POSIX sockets:
packet
socket
recvfrom
In particular, if you use SOCK_RAW, then recvfrom() will not always return full packets. See the following quotes:
"If a message is too long to fit in the supplied buffer, excess bytes may be discarded depending on the type of socket the message is received from."
"If len is too small to fit an entire packet, the excess bytes will be returned from the next read."
"The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested."
To your question:
Do raw sockets provide the assembled packet, or do I need to implement de-fragmentation myself?
They don't; you need to de-fragment yourself. If the socket isn't flushed or fragmentation occurs, the call will return whatever data is available, possibly only partial packets; the expectation is that you reassemble them.
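As an illustration, a deliberately simplistic de-fragmentation sketch in Python (no timeouts and no overlap handling; fragments are keyed the way IPv4 reassembly keys them, by source, destination, id and protocol):

    import struct

    fragments = {}   # (src, dst, ident, proto) -> {offset: payload}
    final_size = {}  # same key -> total payload size, known once MF=0 arrives

    def feed(packet):
        # Feed one raw IPv4 buffer; returns a reassembled payload or None.
        version_ihl, _tos, total_len, ident, flags_frag, _ttl, proto = \
            struct.unpack('!BBHHHBB', packet[:10])
        header_len = (version_ihl & 0x0F) * 4
        offset = (flags_frag & 0x1FFF) * 8
        more = flags_frag & 0x2000
        payload = packet[header_len:total_len]

        if not more and offset == 0:
            return payload  # not fragmented at all

        key = (packet[12:16], packet[16:20], ident, proto)
        fragments.setdefault(key, {})[offset] = payload
        if not more:  # the last fragment tells us the total size
            final_size[key] = offset + len(payload)

        if key in final_size:
            parts = fragments[key]
            if sum(len(p) for p in parts.values()) == final_size[key]:
                data = b''.join(p for _, p in sorted(parts.items()))
                del fragments[key], final_size[key]
                return data
        return None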

How to Enable Timestamp Option in IP Header

I am designing an application-layer protocol on top of UDP. One of the requirements is that the receiving side should keep only the most up-to-date datagram.
Therefore, if datagram A was sent and then datagram B was sent, but datagram B was received first, datagram A should be discarded by the application when received.
One way to implement this is a counter stored in the data part of the UDP packet, incremented each time a datagram is sent.
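That counter is only a few lines in Python; a minimal sketch using a 32-bit sequence number (wraparound deliberately ignored):

    import struct

    SEQ = struct.Struct('!I')  # 32-bit big-endian sequence number prefix

    def send_datagram(sock, addr, seq, payload):
        sock.sendto(SEQ.pack(seq) + payload, addr)

    def recv_latest(sock, last_seq):
        # Returns (payload, seq), or (None, last_seq) if the datagram is stale.
        data, _ = sock.recvfrom(65535)
        (seq,) = SEQ.unpack_from(data)
        if last_seq is not None and seq <= last_seq:
            return None, last_seq  # older than what we already have: discard
        return data[SEQ.size:], seq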
I also noticed that IP options contain a timestamp option which looks suitable for this task.
My questions are (in the context of BSD-like sockets):
How do I enable this option on the sending side?
How do I read this field on the receiving side?
You can set IP options with setsockopt(), using option level IPPROTO_IP and the name of the option; see the Unix/Linux IP documentation, for example here. Reading IP header options generally requires a raw socket, which in turn usually requires root permissions. It's not advisable to rely on IP options, because they are very rarely used and so may not be supported everywhere, either on the originating system or on the systems a packet passes through.
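For what it's worth, setting the timestamp option (option type 68) on the sending side might look like this in Python, assuming Linux, where Python exposes socket.IP_OPTIONS (the option bytes are: type, length, pointer, overflow/flags, then room for the timestamps, padded so the option list stays 4-byte aligned):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Timestamp option: type 68, length 12, pointer 5 (first free slot),
    # overflow/flags 0 (timestamps only), then space for two 4-byte stamps.
    ts_option = bytes([68, 12, 5, 0]) + bytes(8)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS, ts_option)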

Determining Packets Received with Winsock2

Is there a way to determine how many packets were received while using recv() with Winsock? I am looking for a solution to implement at the client, without special requirements on the server side (which I have no control over).
You'd need to packet-sniff using something like WinPcap, then correlate the captured packets with the socket used.
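As a sketch, here is the idea using Scapy, which drives WinPcap/Npcap on Windows (run with administrator rights; the host and port in the filter are placeholders for your connection's endpoints):

    from scapy.all import sniff

    count = 0

    def on_packet(pkt):
        global count
        count += 1

    # Count packets matching the connection for ten seconds.
    sniff(filter='tcp and host 192.0.2.10 and port 5000',
          prn=on_packet, timeout=10)
    print('packets seen:', count)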

Sockets Async Connection

I am new to asynchronous socket connections. Can you please explain how this technology works?
There's an existing application (server) which requires socket connections to transmit data back and forth. I have already created my application (.NET), but the server application doesn't seem to understand the XML data that I am sending. My documentation gives me two ports: one to send and another one to receive.
I need to be sure that I understand how this works.
I have the IP addresses and also the two ports to be used.
A socket is the most "raw" way to send byte-level TCP and UDP traffic across a network.
For example, your browser uses a TCP socket connection to talk to the StackOverflow web server on port 80. Your browser and the server exchange commands and data according to an agreed-upon structure/protocol (in this case, HTTP). An asynchronous socket is no different from a synchronous socket, except that it does not block the thread that's using it.
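Your environment is .NET, but the concept is the same everywhere. As a sketch, here is an asynchronous socket conversation in Python: the awaits hand control back to the event loop instead of blocking the thread.

    import asyncio

    async def fetch(host, port, request):
        # Each await yields to the event loop while the I/O is pending.
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(request)
        await writer.drain()
        response = await reader.read(65536)
        writer.close()
        await writer.wait_closed()
        return response

    # Mirrors the browser analogy above: a minimal HTTP/1.0 request on port 80.
    resp = asyncio.run(fetch('stackoverflow.com', 80,
                             b'GET / HTTP/1.0\r\nHost: stackoverflow.com\r\n\r\n'))
    print(resp.split(b'\r\n', 1)[0])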
This is really not the most ideal way to work (check whether your server/vendor application supports SOAP/web services, etc.), but if it is really the only way, there could be a number of reasons why it's failing. To name a few...
Not actually getting connected or sending data. Run a test using WinsockTool (http://www.isatools.org/tools/winsocktool.msi) and simulate your client first to make sure the server is working as expected.
Encoding incorrect - You're sending raw bytes across the network, so make sure you're using the correct encoding to convert your XML into bytes (ASCII, UTF8, etc).
Buffer Length - Your send buffer (the amount of data you can transmit in one shot) may be too small, or the server may expect content of a certain length, so your XML could be getting truncated.
Let's break a misconception first: sockets are FULL-DUPLEX. You connect to a server using one port, and then you can send AND receive data through the same socket; there is no need for two port numbers. (Technically there is a local port assigned for receiving data, but it is assigned automatically when the socket is created, unless you say otherwise, and it never appears in the calls you make to receive data.)
So your documentation gives you two port numbers. I assume that the "server" is an already existing in-house application you are trying to talk to. If the docs list two ports, then you will need two sockets: one for sending and another one for receiving. I would also suggest you first use a synchronous socket before trying the async way: a synchronous socket is less error-prone for a first test.
(By the way, let's break another misconception: if well coded, once a server listens on a port, it can accept any number of connections through that same port number; there is no need to open two listening ports to accept two connections. Sorry for the digression, but I've seen these two errors committed often enough that it gives me an urge to kill.)
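To make the full-duplex point concrete, here is a Python sketch (the server address is a placeholder): one socket, one server port, and both directions flow through it.

    import socket

    sock = socket.create_connection(('192.0.2.10', 5000))  # the server's single port
    sock.sendall(b'<request>hello</request>')  # send...
    reply = sock.recv(65536)                   # ...and receive on the same socket
    print('local port the OS picked for us:', sock.getsockname()[1])
    sock.close()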