Send frame via raw socket every 5 milliseconds - sockets

Raw sockets can be used to construct a packet manually inside an application. With normal sockets, when any data is sent over the network, the kernel of the operating system adds some headers. How can I use raw sockets to send a frame every 5 milliseconds?
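One way to do this (a minimal sketch, assuming a Linux AF_PACKET raw socket, a hand-built Ethernet frame, and the interface name "eth0", all of which are placeholders) is to pace a sendto() loop with a 5 ms sleep:

/* Minimal sketch: send a hand-built Ethernet frame every 5 ms over a Linux
 * AF_PACKET raw socket. The interface name ("eth0"), the frame contents and
 * the frame length are placeholders; running this needs CAP_NET_RAW/root. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>

int main(void)
{
    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock < 0) { perror("socket"); return 1; }

    /* Direct the outgoing frames to a specific interface. */
    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(ETH_P_ALL);
    addr.sll_ifindex  = if_nametoindex("eth0");

    /* The frame must already contain the Ethernet header you crafted. */
    unsigned char frame[64] = { 0 };

    struct timespec period = { 0, 5 * 1000 * 1000 };  /* 5 ms */
    for (;;) {
        if (sendto(sock, frame, sizeof(frame), 0,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("sendto");
        nanosleep(&period, NULL);  /* simple pacing; a timerfd drifts less */
    }
}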

Related

Different socket buffer size for client and server

I have a question:
Can we have different socket buffer sizes for client and server? For example,
will setting the send and receive buffers to 2048 at the server and to 13312 at the client cause any problem (the buffers at the server are smaller than at the client)? If yes, what will the issues be?
I think you are asking about buffers in your application; the buffers used by the operating system are a different story.
It is legal to use buffers of different lengths at the client and the server. In fact it must be legal: a web browser, for example, has no information about the buffer size in the web server, and the web server has no idea about the client's buffer.
But you have to keep in mind that TCP is a stream-oriented protocol and it does not preserve message boundaries.
For example, suppose the client has a buffer of 10 bytes and sends 3 pieces of data:
send(sock1, "0123456789", 10, 0);
send(sock1, "ABCDEFGHIJ", 10, 0);
send(sock1, "abcdefghij", 10, 0);
The data is transferred as a stream, and it is up to the underlying TCP stack whether it is transmitted in 3 IP packets:
0123456789 ABCDEFGHIJ abcdefghij
or one big packet:
0123456789ABCDEFGHIJabcdefghij
or even something more weird:
0123456789A BCDEFGHIJab cdefghij
The OS at the receiver side stores all received data in its internal buffer as it arrives, and copies it to the application buffer when the application calls receive. If the receiving application's buffer is bigger than the amount of data already received, all of it is copied to the application buffer. If the application buffer is smaller, the OS copies only as much data as fits, and the remaining data is returned by the next receive call.
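For illustration, a minimal receive-side sketch (the socket is assumed to be a connected TCP socket) that keeps calling recv() until an expected byte count has arrived:

#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly "expected" bytes from a connected TCP socket. recv() may
 * return any chunk of the pending stream, so loop until the full amount
 * has been collected (or the connection fails/closes). */
ssize_t recv_all(int sock, char *buf, size_t expected)
{
    size_t got = 0;
    while (got < expected) {
        ssize_t n = recv(sock, buf + got, expected - got, 0);
        if (n <= 0)
            return n;      /* 0 = peer closed the connection, -1 = error */
        got += n;          /* the data may arrive in 1, 2 or more chunks */
    }
    return (ssize_t)got;
}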

Socket data read wait time

I have an application where I am listening on multiple sockets using select. If I start processing a request that came in from socket A, and in the meanwhile another request arrives on socket B, I want to know how long the socket B request had to wait before I could get to it. Since this is a single-threaded application, I cannot spawn a new thread, go back to select to monitor again, and instantly start processing the request from socket B.
Is there a C API available to get me this metric, or is this just not possible to get?
There is no straightforward way to measure the interval between the 'data ready' time and the 'data read' time, because no timestamp is recorded together with the data. Moreover, the situation is even more complex because a stream-oriented socket may receive several data segments before select returns, and then it is not clear which interval should be measured.
If the application's data processing takes longer than the packet processing in the kernel, then you can do a reasonable measurement in the following way:
Print the current time and some unique data id based on the application protocol when select wakes up due to data availability on socket B (see the sketch below).
Log every packet received on socket B. You can use a network traffic capture tool like Wireshark or tcpdump, or you can configure an iptables firewall rule (if it is running on Linux) with the target -j LOG.
Write a simple script/program that correlates the captured packets with the application log and subtracts the receive time from the start-of-processing time.
Of course the idea above ignores the kernel processing time. If you really need an exact time, you have to introduce a new thread into your application.
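As an illustration of the first step, a minimal sketch (sock_b is a placeholder for the socket B descriptor; the unique request id would come from your application protocol) that logs a wall-clock timestamp when select() reports socket B readable:

#include <stdio.h>
#include <time.h>
#include <sys/select.h>

void wait_and_log(int sock_b)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock_b, &rfds);

    if (select(sock_b + 1, &rfds, NULL, NULL, NULL) > 0 &&
        FD_ISSET(sock_b, &rfds)) {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);  /* same clock as the capture tool */
        /* A unique request id from the application protocol would go here. */
        printf("socket B readable at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    }
}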

TCP connection for real time

I want to use a real-time TCP connection. I have a stream of data from a server and I receive it with a client, but the client is too slow to receive as fast as the sender sends, so the server buffers the data until it reaches the destination. For example, if I "produce" data at time t, and suppose the client is 10 times slower, then the data produced at time t will arrive at time 10t.
I want to make the server "drop" the data that can't reach the client in time, and instead send the new data, which is expected to arrive on time. Is that possible?
P.S.: I know that the UDP protocol does this, but I want to do it with TCP.
I've done this sort of thing in the past, and got reasonably good results. Here's how I did it:
1) On the sending side, use setsockopt(SOL_SOCKET, SO_SNDBUF) to make the server's TCP socket's send buffer as small as you can get away with (since you can't drop data once it's already in the socket's send buffer, you want to keep as little data there as possible)
2) On the sending side, never proactively send() any outgoing data into the socket at all. Instead, write a function (we'll call it DumpCurrentStateToBuffer()) that writes the "current state" bytes (that you want to send to the client) into an in-memory buffer.
3) When the client's socket select()'s (or poll()'s, or whatever mechanism you use) as ready-for-write, call DumpCurrentStateToBuffer() to create a memory-buffer of bytes that are to be sent to the client. Now send that data to the client (if you're using blocking I/O you can do it synchronously, at the cost of potentially stalling your server until the data can be sent; OTOH if you're using non-blocking I/O, you may need to keep the memory-buffer and your current sent-bytes index into the buffer around as state variables, so you can keep sending more sub-chunks of the memory buffer over time, whenever the socket indicates that it can receive more bytes)
4) Once the memory-buffer's contents have been fully sent, you can free the memory buffer, and then wait for the socket to select as ready-for-write again; when it does, goto (3).
This technique doesn't solve all of TCP's non-real-time issues; for example, a dropped TCP packet will still have to be resent to the client. What it does do is guarantee that the server-to-client data backlog will never be more than one or two "states" long, because you never generate any new data unless/until there is at least some room in the socket's output buffer.
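A rough sketch of steps 1 to 3, under the assumptions that sock is a connected non-blocking TCP socket and that DumpCurrentStateToBuffer() is the application-supplied function described above:

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Hypothetical application-supplied function from the description above:
 * writes the current state into buf and returns the number of bytes. */
extern size_t DumpCurrentStateToBuffer(char *buf, size_t cap);

void serve_latest_state(int sock)
{
    int small = 4096;  /* keep as little queued data in the kernel as possible */
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &small, sizeof(small));

    char buf[64 * 1024];
    size_t len = 0, sent = 0;

    for (;;) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        if (select(sock + 1, NULL, &wfds, NULL, NULL) <= 0)
            break;

        if (sent == len) {                 /* previous snapshot fully sent, */
            len = DumpCurrentStateToBuffer(buf, sizeof(buf));  /* take a new one */
            sent = 0;
        }
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n < 0)
            break;                         /* real code would check for EWOULDBLOCK */
        sent += n;
    }
}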

How to send audio data manually using UDP sockets

I am working on a video chat application using UDP sockets.
I am able to capture raw audio data, which is huge in size. As it is a chat application, I should be able to transfer this audio data continuously.
The problem is that this audio data is huge, so the socket MTU is not allowing me to transfer it.
I am trying to find a way to split up this data, send it through sockets, capture it at the other end and combine it to reproduce the voice data.
Please guide me on how to do this using UDP sockets.
With UDP you have to take care of transmission order yourself (UDP datagram number 1 could be received AFTER UDP datagram number 2) and of lost packets (UDP doesn't guarantee delivery of the datagram).
You should use TCP for large transfers where the order of the packets matters.
About the MTU, you don't have to care whether it is smaller than the size of the data you're going to send: the OS will fragment and reassemble it for you.
Just split up the data into blocks of at most 64 KB (the maximum size allowed for a single send() call) and loop until your data is totally transmitted.
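As a sketch of one possible way to do the splitting (the chunk size, socket and destination address are assumptions; the 4-byte sequence header is just one way to let the receiver detect loss and reordering):

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define CHUNK_SIZE 1400   /* kept under a typical 1500-byte MTU */

void send_audio(int sock, const struct sockaddr_in *dst,
                const char *audio, size_t len)
{
    char packet[4 + CHUNK_SIZE];
    uint32_t seq = 0;

    for (size_t off = 0; off < len; off += CHUNK_SIZE, seq++) {
        size_t chunk = len - off < CHUNK_SIZE ? len - off : CHUNK_SIZE;
        uint32_t seq_n = htonl(seq);
        memcpy(packet, &seq_n, 4);              /* 4-byte sequence number header */
        memcpy(packet + 4, audio + off, chunk);
        sendto(sock, packet, 4 + chunk, 0,
               (const struct sockaddr *)dst, sizeof(*dst));
    }
}

Keeping the chunks under the MTU here avoids IP fragmentation; as noted above, the OS would also fragment and reassemble larger datagrams itself.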

Sending And Receiving Sockets (TCP/IP)

I know that it is possible that multiple packets will be stacked in the buffer waiting to be read, and that a long packet might require a loop of multiple send attempts to be fully sent. But I have a question about packaging in these cases:
If I call recv (or any alternative (low-level) function) when there are multiple packets waiting to be read, will it return them all stacked into my buffer, or only one of them (or part of the first one if my buffer is insufficient)?
If I send a long packet which requires multiple iterations to be sent fully, does it count as a single packet or as multiple packets? Basically, is there anything that marks that the packet sent is not yet complete?
These questions came to my mind when I thought about WebSocket packaging. Special characters are used to mark the beginning and end of a packet, which sort of leads to the conclusion that otherwise it's not possible to separate multiple packets.
P.S. All the questions are about TCP/IP but you are welcomed to share information (answers) about UDP as well.
TCP sockets are stream based. The order is guaranteed but the number of bytes you receive with each recv/read could be any chunk of the pending bytes from the sender. You can layer a message based transport on top of TCP by adding framing information to indicate the way that the payload should be chunked into messages. This is what WebSockets does. Each WebSocket message/frame starts with at least 2 bytes of header information which contains the length of the payload to follow. This allows the receiver to wait for and re-assemble complete messages.
For example, in libraries/interfaces that implement the standard WebSocket API or a similar API (such as a browser), the onmessage event will fire once for each message received, and the data attribute of the event will contain the entire message.
Note that in the older Hixie version of WebSockets, each frame started with '\x00' and was terminated with '\xff'. The current standardized IETF RFC 6455 (HyBi) version of the protocol uses header information that contains the length, which allows much easier processing of the frames (but note that both the old and new versions are still message based and have basically the same API).
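As a sketch of the simplest possible framing layer on top of TCP (a hypothetical 4-byte big-endian length prefix rather than the real WebSocket header; recv_all() is the "read exactly N bytes" loop sketched earlier):

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <arpa/inet.h>

ssize_t recv_all(int sock, char *buf, size_t expected);  /* the loop shown earlier */

/* Receive one length-prefixed message; the caller frees the returned buffer. */
char *recv_message(int sock, uint32_t *out_len)
{
    uint32_t len_n;
    if (recv_all(sock, (char *)&len_n, sizeof(len_n)) <= 0)
        return NULL;                  /* connection closed or error */
    uint32_t len = ntohl(len_n);

    char *msg = malloc(len ? len : 1);
    if (!msg || (len > 0 && recv_all(sock, msg, len) <= 0)) {
        free(msg);
        return NULL;
    }
    *out_len = len;
    return msg;
}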
A TCP connection provides a stream of bytes, so treat it as such. No application message boundaries are preserved: one send can correspond to multiple receives and the other way around. You need loops on both sides.
UDP, on the other hand, is datagram (i.e. message) based. Here one read will always dequeue a single datagram (unless you mess with low-level flags on the socket). Even if your application buffer is smaller than the pending datagram and you read only a part of it, the rest of it is lost. The way around this is to limit the size of the datagrams you send to something below the normal MTU of 1500 bytes (less the IP and UDP headers, so actually 1472).
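For illustration, a minimal UDP receive sketch: each recvfrom() dequeues exactly one datagram, and a 1472-byte buffer is enough as long as the sender keeps its datagrams at or below that size:

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MAX_PAYLOAD 1472   /* 1500-byte MTU minus IP (20) and UDP (8) headers */

void read_one_datagram(int sock)
{
    char buf[MAX_PAYLOAD];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
    if (n >= 0)
        printf("got one datagram of %zd bytes\n", n);
    /* Any bytes of that datagram beyond sizeof(buf) would have been discarded. */
}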