I have been having a heck of a time getting UDP sockets working correctly on Windows Phone 7 (Mango). First I had this problem: udp async receive. Now that I have figured that out, I am seeing a weird behavior where the end of the data I send over the socket is all zeros.
At first, I thought there was a weird size cap. All my packets were under 1380 bytes. I was seeing that, for some reason, everything after roughly byte 1220 was all zeros, yet according to the socket I was still receiving all ~1380 bytes. I matched up the sizes with my server application, and I was receiving the correct number of bytes. So I printed the bytes out on both sides of the connection and confirmed that much of the last 200 bytes or so was zero.
So I reduced the size of my packet data to ~1200 bytes, and I was still seeing the issue. I even reduced it to 1000 bytes and still!
Any ideas?
Update - I have done some testing, and it seems that the last 144 bytes are FUBAR. Sometimes they are zero, sometimes they are garbage. Think this is a bug?
You need to check how many bytes were transferred in the async operation. Check SocketAsyncEventArgs.BytesTransferred to see how many bytes in the buffer are actually valid.
Sorry, I had a bug in my code where I was reusing an array and overwriting my own data.
Suppose two processes communicate via sockets and Process A sends Process B 100 bytes.
Process B tries to read 150 bytes. Later, Process A sends 50 bytes.
What is the result of Process B's read?
Will Process B's read wait until it receives 150 bytes?
That is dependent on many factors, especially related to the type of socket, but also to the timing.
Generally, however, the receive buffer size is considered a maximum. So, if a process executes a recv with a buffer size of 150, but the operating system has only received 100 bytes so far from the peer socket, usually the available 100 are delivered to the receiving process (and the return value of the system call will reflect that). It is the responsibility of the receiving application to go back and execute recv again if it is expecting more data.
Another related factor (which will not generally be the case with a short transfer like 150 bytes but definitely will if you're sending a megabyte, say) is that the sender's apparently "atomic" send of 1000000 bytes will not all be delivered in one packet to the receiving peer, so if the receiver has a corresponding recv with a 1000000 byte buffer, it's very unlikely that all the data will be received in one call. Again, it's the receiver's responsibility to continue calling recv until it has received all the data sent.
And it's generally the responsibility of the sender and receiver to somehow coordinate what the expected size is. One common way to do so is by including a fixed-length header at the beginning of each logical transmission telling the receiver how many bytes are to be expected.
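A hedged sketch of that pattern in C++ (plain POSIX sockets; the 4-byte length prefix and the helper names are my own illustration, not anything your code must use):

#include <arpa/inet.h>   // ntohl
#include <sys/socket.h>  // recv
#include <cstdint>
#include <vector>

// Keep calling recv() until exactly 'len' bytes have arrived, or fail.
static bool recv_all(int fd, void* buf, std::size_t len) {
    char* p = static_cast<char*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;          // error or peer closed the connection
        p += n;
        len -= static_cast<std::size_t>(n);
    }
    return true;
}

// Read one length-prefixed message: a 4-byte network-order length, then the payload.
static bool recv_message(int fd, std::vector<char>& payload) {
    std::uint32_t netlen = 0;
    if (!recv_all(fd, &netlen, sizeof(netlen))) return false;
    payload.resize(ntohl(netlen));
    return recv_all(fd, payload.data(), payload.size());
}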
Depends on what kind of socket it is. For a STREAM socket, the read will return either the amount of data currently available or the amount requested (whichever is less) and will only ever block (wait) if there is no data available.
So in this example, assuming the 100 bytes have (all) been transmitted and received into the receive buffer when B reads from the socket and the additional 50 bytes have not yet been transmitted, the read will return those 100 bytes and will not wait.
Note also, the dependency of all the data being transmitted and received -- when process A writes data to a socket it will not necessarily be sent immediately or all at once. Depending on the underlying transport, there's an MTU size and any write larger than that will be broken up. Smaller writes may also be delayed and combined with later writes to make up the MTU. So in your case the send of 100 bytes might be too large (and broken up), or might be too small and not be transmitted immediately.
I am using this Python WebSocket client on a Raspberry Pi to send some frames to my Tomcat server over WebSockets.
More specifically, I am splitting a big file (200 MB) into many byte-array chunks (of some fixed size) and, via a for loop, sending them to my server. Something like this:
for chunk in chunks:
    ws.send(chunk, ABNF.OPCODE_BINARY)
The problem is that at random points the connection closes (I assume from the client side) and from then on I only get "BrokenPipeError: [Errno 32] Broken pipe". Also, the bigger the chunk size and the noisier the Internet connection, the more likely this is to happen. For example, I never had this problem when using chunks of 512 bytes and a good Internet connection, but if the chunks are 16384 bytes and I am using mobile Internet, I get it within the first few chunks. On Windows, the same code works perfectly. Lastly, depending on which chunk size I use in Python, I set the same buffer size on the server.
What could the problem be here and how could I address it?
I have written a single server-client program and I want to ask: is there any difference in the behavior of the recv() function between 32-bit and 64-bit operating systems?
I am asking this because I am running both the server and the client on a 64-bit laptop and everything is working fine. I call recv() this way: while((tmp = recv(client_sock, rec_msg, 256, 0)) > 0) and, as expected, if for example I have 3 strings to send from the client, the server enters the while loop 3 times and prints the right result.
When I run exactly the same programs on a 32-bit Debian machine, it seems that for some unknown reason, if I send 3 strings from the client, the server enters the while loop only once and receives the 3 strings as one.
I have used print statements and found out that the server enters the while loop once and receives the whole buffer, although on the client side the while loop is entered 3 times as expected and the 3 strings are sent at 3 different times. I can't find a logical reason why it works fine on 64-bit and not on 32-bit, and that's why I am asking this question.
Thanks in advance for your time and your help.
If this is a stream socket, then there are no inherent message boundaries, and there's no correlation between the messages sent and received. recv() may return part of a message, the whole message, or multiple messages; all that's guaranteed is that the bytes are received in the same order that they were sent.
The difference you're seeing is probably just due to speed differences between the two machines. The 32-bit machine is slower, so in the time it takes to check for data being available on the network all 3 packets have arrived. But the faster 64-bit machine processes the received data from the first packet before the second one arrives.
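To make that concrete, here is a hedged sketch (not your code; it assumes the client ends each string with a newline, or any delimiter you agree on) of a server-side read that recovers the individual strings whether they arrive in one recv() call or three:

#include <sys/socket.h>
#include <string>
#include <vector>

// Append whatever recv() returns to 'pending', then split out every complete,
// '\n'-terminated message. Any partial message stays in 'pending' for next time.
std::vector<std::string> read_messages(int client_sock, std::string& pending) {
    std::vector<std::string> messages;
    char buf[256];
    ssize_t n = recv(client_sock, buf, sizeof(buf), 0);
    if (n <= 0) return messages;                      // error or connection closed
    pending.append(buf, static_cast<std::size_t>(n));
    std::string::size_type pos;
    while ((pos = pending.find('\n')) != std::string::npos) {
        messages.push_back(pending.substr(0, pos));
        pending.erase(0, pos + 1);
    }
    return messages;
}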
During RAW-socket-based packet send testing, I found some very irritating symptoms.
With the default RAW socket settings (especially the SO_SNDBUF size), the raw socket sent 100,000 packets without a problem, but it took about 8 seconds to send all the packets, and the packets were correctly received by the receiver process. That means about 10,000 pps (packets per second) was achieved by the default settings. (I think that is too small a figure, contrary to my expectation.)
Anyway, to increase the pps value, I increased the packet send buffer size by adjusting /proc/sys/net/core/{wmem_max, wmem_default}. After increasing the two system parameters, I observed the irritating symptom: the 100,000 packets are sent promptly, but only 3,000 packets are received by the receiver process (located at a remote node).
At the sending Linux box (CentOS 5.2), I ran netstat -a -s and ifconfig. netstat showed that 100,000 requests were sent out, but ifconfig showed that only 3,000 packets were TXed.
I want to know why this happens, and I also want to know how I can solve this problem (of course, I don't know whether it is really a problem). Could anybody give me some advice, examples, or references related to this problem?
Best regards,
bjlee
You didn't say what size your packets were or any characteristics of your network, NIC, hardware, or anything about the remote machine receiving the data.
I suspect that instead of playing with /proc/sys stuff, you should be using ethtool to adjust the number of ring buffers, but not necessarily the size of those buffers.
Also, this page is a good resource.
I have just been working with essentially the same problem. I accidentally stumbled across an entirely counter-intuitive answer that still doesn't make sense to me, but it seems to work.
I was trying larger and larger SO_SNDBUF buffer sizes and losing packets like mad. By accidentally exceeding my system-defined maximum, I caused the SO_SNDBUF size to be set to a very small number instead, but oddly enough, I no longer had the packet-loss issue. So I intentionally set SO_SNDBUF to 1, which again resulted in a very small number (not sure, but I think it actually set it to something like 1k), and amazingly enough, still no packet loss.
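For reference, this is roughly how I set and then inspected the buffer (a sketch; note that on Linux the kernel doubles the value you request and clamps it to a built-in minimum, which may be why asking for 1 still left me with something on the order of a kilobyte or two):

#include <sys/socket.h>
#include <cstdio>

// Request a send-buffer size, then read back the value the kernel actually applied.
void tune_sndbuf(int sock, int requested_bytes) {
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &requested_bytes, sizeof(requested_bytes));

    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &actual, &len);
    std::printf("SO_SNDBUF: requested %d, kernel applied %d\n", requested_bytes, actual);
}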
If anyone can explain this, I would be most interested in hearing it. In case it matters, my version of Linux is RHEL 5.11 (yes, I know, I'm a bit behind the times).
I have been messing around with Boost Asio for some days now, but I got stuck on this weird behavior. Please let me explain.
Computer A is sending continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own rate, but it only wants A's last packet, obviously the most up-to-date one.
It has come to my attention that when I do a:
mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
I can get OLD packets that were not processed (almost every time).
Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender this could happen?!
So, the first question is how is it possible to receive the last packet and discard the ones I missed?
Later I tried using the async example of the Boost documentation but found it did not do what I wanted.
http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html
From what I could tell the async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service was "run".
If I want to keep listening on the port, I should call async_receive_from again in the handler code, right?
BUT what I found is that I end up in an infinite loop: it doesn't wait for the next packet, it just enters "handle_receive" again and again.
I'm not writing a server application; a lot of other things are going on (it's a game). So my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive?
One option is to take advantage of the fact that when the local receive buffer for your UDP socket fills up, newly received packets will push older ones out of the buffer. You can set the local receive buffer size to be large enough for one packet, but not two. This way, the newest packet to arrive will always cause the previous one to be discarded. When you then ask for a packet using receive_from, you'll get the latest (and only) one.
Here are the API docs for changing the receive buffer size with Boost:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/reference/basic_datagram_socket/receive_buffer_size.html
The example appears to be wrong in that it shows a TCP socket rather than a UDP socket, but changing that back to UDP should be easy (the trivially obvious change should be the right one).
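For a UDP socket the change could look roughly like this (a sketch only; the port number and PACKET_SIZE are placeholders for your own values):

#include <boost/asio.hpp>
#include <iostream>

int main() {
    // PACKET_SIZE is an assumption: the size of the datagrams your sender uses.
    const int PACKET_SIZE = 1400;

    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(io_service,
        boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 12345));

    // Shrink the receive buffer so it has room for (roughly) one datagram.
    socket.set_option(boost::asio::socket_base::receive_buffer_size(PACKET_SIZE));

    // Read back what the OS actually granted; it may round the value up.
    boost::asio::socket_base::receive_buffer_size actual;
    socket.get_option(actual);
    std::cout << "receive buffer is now " << actual.value() << " bytes\n";
}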
With Windows (certainly XP, Vista, and 7), if you set your recv buffer size to zero you'll only receive datagrams if you have a recv pending when the datagram arrives. This MAY do what you want, but you'll have to sit and wait for the next one if you post your recv just after the last datagram arrives...
Since you're doing a game, it would be far better, IMHO, to use something built on UDP rather than UDP itself. Take a look at ENet, which supports reliable data over UDP and also unreliable 'sequenced' data over UDP. With unreliable sequenced data you only ever get the 'latest' data. Something like RakNet might also be useful, as it does a lot of game-related stuff and includes features similar to ENet's sequenced data.
Something else you should bear in mind is that with raw UDP you may get datagrams out of order and you may get them more than once. So you're likely going to need your own sequence number in there anyway if you don't use something that sequences the data for you.
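If you do roll your own, the receiving side's check can be as small as this sketch (it assumes the sender prepends a 32-bit, host-order sequence number starting at 1 to every datagram; the helper name is made up):

#include <cstdint>
#include <cstring>

// Return true only for datagrams newer than anything seen so far; duplicates
// and out-of-order (stale) datagrams are rejected. Wrap-around is ignored here.
bool is_fresh(const char* datagram, std::size_t len, std::uint32_t& last_seq) {
    if (len < sizeof(std::uint32_t)) return false;    // malformed datagram
    std::uint32_t seq;
    std::memcpy(&seq, datagram, sizeof(seq));
    if (seq <= last_seq) return false;                 // old or duplicated, discard
    last_seq = seq;
    return true;
}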
P2engine is a flexible and efficient platform for making p2p system development easier: Reliable UDP, Message Transport, Message Dispatcher, Fast and Safe Signal/Slot...
You're going about it the wrong way. The receiving end has a FIFO queue. Once the queue fills up, newly arriving packets are discarded.
So what you need to do on the receiver is just to keep reading the packets as fast as possible and process them as they arrive.
Your receiving end should easily be able to handle receiving a packet every 500 ms. I'd say you've got a bug in your code, and from what you describe, yes, you do.
It could be this: make sure in handle_receive that you only call async_receive_from if there is no error.
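In other words, re-arm the read only after a clean completion. A minimal sketch (class and member names are made up for illustration, they are not Boost API):

#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <boost/bind.hpp>

// Sketch of a receiver that re-arms async_receive_from only on success.
class Receiver {
public:
    Receiver(boost::asio::io_service& io, unsigned short port)
        : mSocket(io, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port))
    {
        start_receive();
    }

private:
    void start_receive()
    {
        mSocket.async_receive_from(
            boost::asio::buffer(mBuffer), mEndPoint,
            boost::bind(&Receiver::handle_receive, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void handle_receive(const boost::system::error_code& error, std::size_t bytes_received)
    {
        if (!error)
        {
            // ... hand mBuffer[0 .. bytes_received) to the game logic here ...
            start_receive();   // re-arm only after a clean completion
        }
        // On error (e.g. operation_aborted), do NOT re-arm: re-arming on error is
        // what produces the tight loop of handle_receive calls described above.
    }

    boost::asio::ip::udp::socket mSocket;
    boost::asio::ip::udp::endpoint mEndPoint;
    boost::array<char, 1500> mBuffer;
};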
I think I had the same problem as you. To solve it, I read the number of bytes available and compare it with the packet size until I have received the last packet:
// Ask the socket how many bytes are currently queued for reading.
boost::asio::socket_base::bytes_readable command(true);
socket_server.io_control(command);
std::size_t bytes_readable = command.get();
Here is the documentation.
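Put together, the drain loop looks roughly like this (a sketch; packet_size is assumed to be the fixed size of the sender's datagrams):

#include <boost/asio.hpp>

// Read datagrams until less than one full packet remains queued, so that the
// last receive_from() call returns the most recent datagram.
std::size_t read_latest(boost::asio::ip::udp::socket& socket_server,
                        char* buffer, std::size_t packet_size,
                        boost::asio::ip::udp::endpoint& sender)
{
    std::size_t received = 0;
    for (;;)
    {
        // Blocks until at least one datagram is available, then reads it.
        received = socket_server.receive_from(
            boost::asio::buffer(buffer, packet_size), sender);

        // How many bytes are still queued? Less than a packet means we are done.
        boost::asio::socket_base::bytes_readable command(true);
        socket_server.io_control(command);
        if (command.get() < packet_size)
            break;
    }
    return received;
}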