I'm using Veins 4.4 and I need to store some results on an external server, so I would like to open a UDP connection to it.
I've read several posts about using a TCP connection for mobility in Veins, and I understood that I should resort to the INET module to open a connection. However, I don't need it for mobility, but to send data to an external server.
Does anyone have a suggestion?
I was trying to use the processCommandFromApp method from INET's UDP class (inet/src/transport/udp/UDP.cc):
void UDP::processCommandFromApp(cMessage *msg)
{
    switch (msg->getKind())
    {
        case UDP_C_BIND: {
            UDPBindCommand *ctrl = check_and_cast<UDPBindCommand*>(msg->getControlInfo());
            bind(ctrl->getSockId(), msg->getArrivalGate()->getIndex(), ctrl->getLocalAddr(), ctrl->getLocalPort());
            break;
        }
        case UDP_C_CONNECT: {
            UDPConnectCommand *ctrl = check_and_cast<UDPConnectCommand*>(msg->getControlInfo());
            connect(ctrl->getSockId(), msg->getArrivalGate()->getIndex(), ctrl->getRemoteAddr(), ctrl->getRemotePort());
            break;
        }
        case UDP_C_CLOSE: {
            UDPCloseCommand *ctrl = check_and_cast<UDPCloseCommand*>(msg->getControlInfo());
            close(ctrl->getSockId());
            break;
        }
        case UDP_C_SETOPTION: {
            UDPSetOptionCommand *ctrl = check_and_cast<UDPSetOptionCommand *>(msg->getControlInfo());
            SockDesc *sd = getOrCreateSocket(ctrl->getSockId(), msg->getArrivalGate()->getIndex());

            if (dynamic_cast<UDPSetTimeToLiveCommand*>(ctrl))
                setTimeToLive(sd, ((UDPSetTimeToLiveCommand*)ctrl)->getTtl());
            else if (dynamic_cast<UDPSetTypeOfServiceCommand*>(ctrl))
                setTypeOfService(sd, ((UDPSetTypeOfServiceCommand*)ctrl)->getTos());
            else if (dynamic_cast<UDPSetBroadcastCommand*>(ctrl))
                setBroadcast(sd, ((UDPSetBroadcastCommand*)ctrl)->getBroadcast());
            else if (dynamic_cast<UDPSetMulticastInterfaceCommand*>(ctrl))
                setMulticastOutputInterface(sd, ((UDPSetMulticastInterfaceCommand*)ctrl)->getInterfaceId());
            else if (dynamic_cast<UDPSetMulticastLoopCommand*>(ctrl))
                setMulticastLoop(sd, ((UDPSetMulticastLoopCommand*)ctrl)->getLoop());
            else if (dynamic_cast<UDPSetReuseAddressCommand*>(ctrl))
                setReuseAddress(sd, ((UDPSetReuseAddressCommand*)ctrl)->getReuseAddress());
            else if (dynamic_cast<UDPJoinMulticastGroupsCommand*>(ctrl))
            {
                UDPJoinMulticastGroupsCommand *cmd = (UDPJoinMulticastGroupsCommand*)ctrl;
                std::vector<IPvXAddress> addresses;
                std::vector<int> interfaceIds;
                for (int i = 0; i < (int)cmd->getMulticastAddrArraySize(); i++)
                    addresses.push_back(cmd->getMulticastAddr(i));
                for (int i = 0; i < (int)cmd->getInterfaceIdArraySize(); i++)
                    interfaceIds.push_back(cmd->getInterfaceId(i));
                joinMulticastGroups(sd, addresses, interfaceIds);
            }
            else if (dynamic_cast<UDPLeaveMulticastGroupsCommand*>(ctrl))
            {
                UDPLeaveMulticastGroupsCommand *cmd = (UDPLeaveMulticastGroupsCommand*)ctrl;
                std::vector<IPvXAddress> addresses;
                for (int i = 0; i < (int)cmd->getMulticastAddrArraySize(); i++)
                    addresses.push_back(cmd->getMulticastAddr(i));
                leaveMulticastGroups(sd, addresses);
            }
            else
                throw cRuntimeError("Unknown subclass of UDPSetOptionCommand received from app: %s", ctrl->getClassName());
            break;
        }
        default: {
            throw cRuntimeError("Unknown command code (message kind) %d received from app", msg->getKind());
        }
    }

    delete msg; // also deletes control info in it
}
I included the INET path as follows:
#include <inet/src/transport/udp/UDP.h>
and I call the method as follows, passing a UDP_C_CONNECT message as input:
cMessage *UDP_C_CONNECT;
void Inet::UDP::processCommandFromApp(UDP_C_CONNECT);
But when I run the simulation, it crashes, returning this error:
Errors occurred during the build.
Errors running builder 'OMNeT++ Makefile Builder' on project 'veins'.
java.lang.NullPointerException
1) What is the correct way to set up the required connection?
2) Why am I getting this error as soon as I include the INET path?
UPDATE
I also tried another way to establish the connection:
std::string host;
host = "16777343";
int port = 5144;
Veins::TraCIConnection* connection;
connection = TraCIConnection::connect(host.c_str(), port);
but as soon as it loads the plugin, it seems to wait for something at time 0.0 without starting to generate the nodes.
Thanks for helping.
Simulations using OMNeT++ are C++ programs, so you can use the full range of libraries and system calls available to any C++ program. If you want to open a UDP connection to some other computer on your network, just create a UDP socket as you would in any C++ program, then send the data whenever needed.
Maybe the easiest way to go about writing this is to
start with a plain C++ program that has nothing to do with OMNeT++, then
move the part of the program that has to run before everything else into the initialize method of a module in your simulation, and the rest into its handleMessage method.
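To illustrate, here is a minimal sketch (untested; the ResultsUplink module name, server address 192.168.1.10, and port 5144 are placeholders of mine, and it assumes a POSIX system; on Windows you would use Winsock and call WSAStartup first):
#include <omnetpp.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>
#include <string>

// Hypothetical module that pushes result records to an external server.
class ResultsUplink : public cSimpleModule
{
  private:
    int fd;                 // plain OS-level UDP socket, independent of INET
    sockaddr_in server;

  protected:
    virtual void initialize()
    {
        fd = socket(AF_INET, SOCK_DGRAM, 0);   // create the socket once
        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(5144);                          // placeholder port
        inet_pton(AF_INET, "192.168.1.10", &server.sin_addr);   // placeholder address
    }

    virtual void handleMessage(cMessage *msg)
    {
        // send whatever record you want to store externally
        std::string line = "result record";
        sendto(fd, line.c_str(), line.size(), 0,
               (sockaddr *)&server, sizeof(server));
        delete msg;
    }

    virtual void finish()
    {
        if (fd >= 0)
            close(fd);
    }
};

Define_Module(ResultsUplink);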
I'm working on code to make two Arduinos communicate, one with an Ethernet shield and the other with an ENC28J60 Ethernet module. I'm not a newbie with Arduino, though not a wise/expert user yet, but I'm a complete newbie (and less than that) in UDP communication.
Here is the question: my code works fine; it sends and receives UDP packets from one board to the other and vice versa. But after every packet is sent, the "Udp.remotePort" value is incremented by one (as seen from the "udp-reader" side). It starts from 1024 and goes up to ~32000 (and starts over after reaching the highest value). I have researched UDP and I understand that ports 0-1023 are reserved for specific services, e.g. 80 for HTTP and 21 for FTP. But I think the port should not be incremented after every send. Or should it?
I don't paste the code because, as I said, it works OK. I just would like to know what could be wrong, from your experience.
The statement I'm using to write the packets is:
udp.beginPacket(IPAddress([ip address]), [port no]);
The libraries I'm using:
UIPEthernet.h https://github.com/UIPEthernet/UIPEthernet for ENC28J60
Ethernet.h for ethernet shield
EDIT: This is the code of the UDP sender (ENC28J60). It is basically the example code of the library and, as I said, it works correctly in terms of communication. I only changed the IPs: 192.168.1.50, which is the UDP sender, and 192.168.1.51, which is the UDP destination.
#include <UIPEthernet.h>

EthernetUDP udp;
unsigned long next;

void setup() {
  Serial.begin(115200);
  uint8_t mac[6] = {0x00, 0x01, 0x02, 0x03, 0x04, 0x05};
  Ethernet.begin(mac, IPAddress(192,168,1,51));
  // Also I used: Ethernet.begin(mac, IPAddress(192,168,1,51), 5000);
  // with the same result
  next = millis() + 2000;
}
void loop() {
  int success;
  int len = 0;

  if (((signed long)(millis() - next)) > 0)
  {
    do
    {
      success = udp.beginPacket(IPAddress(192,168,1,50), 5000);
      Serial.print("beginPacket: ");
      Serial.println(success ? "success" : "failed");
      // beginPacket fails if the remote ethaddr is unknown. In this case an
      // arp-request is sent out first and beginPacket succeeds as soon as
      // the arp-response is received.
    }
    while (!success && ((signed long)(millis() - next)) < 0);

    if (!success)
      goto stop;

    success = udp.write("hello world&from&arduino");
    Serial.print("bytes written: ");
    Serial.println(success);

    success = udp.endPacket();
    Serial.print("endPacket: ");
    Serial.println(success ? "success" : "failed");

    do
    {
      // check for a new udp-packet:
      success = udp.parsePacket();
    }
    while (!success && ((signed long)(millis() - next)) < 0);

    if (!success)
      goto stop;

    Serial.print("received: '");
    do
    {
      int c = udp.read();
      Serial.write(c);
      len++;
    }
    while ((success = udp.available()) > 0);
    Serial.print("', ");
    Serial.print(len);
    Serial.println(" bytes");

    // finish reading this packet:
    udp.flush();

  stop:
    udp.stop();
    next = millis() + 2000;
  }
}
EDIT 2: This is a capture of testing with SocketTest listening on port 5000; after a packet is received, the next one arrives with the remote port incremented by 1 each time.
You must be creating a new UDP socket per sent datagram. Don't do that. Use the same one for the life of the application.
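In sketch form (my rearrangement of the example above, untested), that means binding the local socket once in setup() and dropping the udp.stop() call from the loop, so every datagram leaves from the same local port:
#include <UIPEthernet.h>

EthernetUDP udp;

void setup() {
  uint8_t mac[6] = {0x00, 0x01, 0x02, 0x03, 0x04, 0x05};
  Ethernet.begin(mac, IPAddress(192,168,1,51));
  udp.begin(5000);  // bind the local port once and keep this socket forever
}

void loop() {
  // one datagram per pass, always through the same socket
  if (udp.beginPacket(IPAddress(192,168,1,50), 5000)) {
    udp.write("hello world&from&arduino");
    udp.endPacket();
    // note: no udp.stop() here; stopping discards the socket, and the next
    // beginPacket() would grab a fresh ephemeral port (1024, 1025, ...)
  }
  delay(2000);
}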
I use sockets in non-blocking mode, and sometimes the WSAConnect function returns the WSAEINVAL error.
I investigated the problem and found that it occurs if there is no pause (or only a very small one) between WSAConnect calls.
Does anyone know how to avoid this situation?
Below you can find source code that reproduces the problem. If I increase the value of the parameter to the Sleep function to 50 or greater, the problem disappears.
P.S. This problem reproduces only on Windows XP; on Win7 it works well.
#undef UNICODE

#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <iostream>
#include <windows.h>

#pragma comment(lib, "Ws2_32.lib")

static int getError(SOCKET sock)
{
    DWORD error = WSAGetLastError();
    return error;
}

void main()
{
    SOCKET sock;
    WSADATA wsaData;

    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) {
        fprintf(stderr, "Socket Initialization Error. Program aborted\n");
        return;
    }

    for (int i = 0; i < 1000; ++i) {
        struct addrinfo hints;
        struct addrinfo *res = NULL;
        memset(&hints, 0, sizeof(hints));
        hints.ai_flags = AI_PASSIVE;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_family = AF_INET;
        hints.ai_protocol = IPPROTO_TCP;

        if (0 != getaddrinfo("172.20.1.59", "8091", &hints, &res)) {
            fprintf(stderr, "GetAddrInfo Error. Program aborted\n");
            closesocket(sock);
            WSACleanup();
            return;
        }

        struct addrinfo *ptr = 0;
        for (ptr = res; ptr != NULL; ptr = ptr->ai_next) {
            sock = WSASocket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol, NULL, 0, NULL);
            if (sock == INVALID_SOCKET)
                int err = getError(sock);
            else {
                u_long noblock = 1;
                if (ioctlsocket(sock, FIONBIO, &noblock) == SOCKET_ERROR) {
                    int err = getError(sock);
                    closesocket(sock);
                    sock = INVALID_SOCKET;
                }
                break;
            }
        }

        int ret;
        do {
            ret = WSAConnect(sock, ptr->ai_addr, (int)ptr->ai_addrlen, NULL, NULL, NULL, NULL);
            if (ret == SOCKET_ERROR) {
                int error = getError(sock);
                if (error == WSAEWOULDBLOCK) {
                    Sleep(5);
                    continue;
                }
                else if (error == WSAEISCONN) {
                    fprintf(stderr, "+");
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
                else if (error == 10037) {
                    fprintf(stderr, "-");
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
                else {
                    fprintf(stderr, "Connect Error. [%d]\n", error);
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
            }
            else {
                int one = 1;
                setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&one, sizeof(one));
                fprintf(stderr, "OK\n");
                break;
            }
        }
        while (1);
    }

    std::cout << "end";
    char ch;
    std::cin >> ch;
}
You've got a whole basketful of errors and questionable design and coding decisions here. I'm going to have to break them up into two groups:
Outright Errors
I expect if you fix all of the items in this section, your symptom will disappear, but I wouldn't want to speculate about which one is the critical fix:
Calling connect() in a loop on a single socket is simply wrong.
If you mean to establish a connection, drop it, and reestablish it 1000 times, you need to call closesocket() at the end of each loop, then call socket() again to get a fresh socket (see the sketch after this list of cases). You can't keep re-connecting the same socket. Think of it like a power plug: if you want to plug it in twice, you have to unplug (closesocket()) between times.
If instead you mean to establish 1000 simultaneous connections, you need to allocate a new socket with socket() on each iteration, connect() it, then go back around again to get another socket. It's basically the same loop as for the previous case, except without the closesocket() call.
Beware that since XP is a client version of Windows, it's not optimized for handling thousands of simultaneous sockets.
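In sketch form (my arrangement, using a blocking connect for brevity; addr stands for an already-filled address structure), the first case looks like:
// Sketch: one fresh socket per connection attempt, closed before the next
for (int i = 0; i < 1000; ++i) {
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
        break;
    if (connect(s, (sockaddr *)&addr, sizeof(addr)) == 0) {
        // ... use the connection ...
    }
    closesocket(s);  // "unplug" before plugging in again
}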
Calling connect() again is not the correct response to WSAEWOULDBLOCK:
if (error == WSAEWOULDBLOCK) {
    Sleep(5);
    continue; /// WRONG!
}
That continue code effectively commits the same error as above, but worse: if you fix only the previous error and leave this one, this usage will make your code start leaking sockets.
WSAEWOULDBLOCK is not an error. All it means after a connect() on a nonblocking socket is that the connection didn't get established immediately. The stack will notify your program when it does.
You get that notification by calling one of select(), WSAEventSelect(), or WSAAsyncSelect(). If you use select(), the socket will be marked writable when the connection gets established. With the other two, you will get an FD_CONNECT event when the connection gets established.
Which of these three APIs to call depends on why you want nonblocking sockets in the first place, and what the rest of the program will look like. What I see so far doesn't need nonblocking sockets at all, but I suppose you have some future plan that will inform your decision. I've written an article, Which I/O Strategy Should I Use (part of the Winsock Programmers' FAQ) which will help you decide which of these options to use; it may instead guide you to another option entirely.
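With select(), for instance, the wait might look like this (a sketch assuming connect() has just returned WSAEWOULDBLOCK on the nonblocking socket sock):
// Sketch: wait for a nonblocking connect() to complete on Windows
fd_set writefds, exceptfds;
FD_ZERO(&writefds);  FD_SET(sock, &writefds);
FD_ZERO(&exceptfds); FD_SET(sock, &exceptfds);

timeval timeout = { 5, 0 };   // give up after 5 seconds
int rc = select(0, NULL, &writefds, &exceptfds, &timeout);
if (rc > 0 && FD_ISSET(sock, &writefds)) {
    // connection established
}
else if (rc > 0 && FD_ISSET(sock, &exceptfds)) {
    // connection attempt failed; ask the stack why
    int err = 0, len = sizeof(err);
    getsockopt(sock, SOL_SOCKET, SO_ERROR, (char *)&err, &len);
}
else {
    // rc == 0: timed out; rc < 0: select() itself failed
}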
You shouldn't use AI_PASSIVE and connect() on the same socket. Your use of AI_PASSIVE with getaddrinfo() tells the stack you intend to use this socket to accept incoming connections. Then you go and use that socket to make an outgoing connection.
You've basically lied to the stack here. Computers find ways to get revenge when you lie to them.
Sleep() is never the right way to fix problems with Winsock. There are built-in delays within the stack that your program can see, such as TIME_WAIT and the Nagle algorithm, but Sleep() is not the right way to cope with these, either.
Questionable Coding/Design Decisions
This section is for things I don't expect to make your symptom go away, but you should consider fixing them anyway:
The main reason to use getaddrinfo() — as opposed to older, simpler functions like inet_addr() — is if you have to support IPv6. That kind of conflicts with your wish to support XP, since XP's IPv6 stack wasn't nearly as heavily tested during the time XP was the current version of Windows as its IPv4 stack. I would expect XP's IPv6 stack to still have bugs as a result, even if you've got all the patches installed.
If you don't really need IPv6 support, doing it the old way might make your symptoms disappear. You might end up needing an IPv4-only build for XP.
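The old-style IPv4-only setup is only a few lines (a sketch, reusing the address and port from your code):
// Sketch: IPv4-only address setup, no getaddrinfo() needed
sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(8091);
addr.sin_addr.s_addr = inet_addr("172.20.1.59");
// then: connect(sock, (sockaddr *)&addr, sizeof(addr));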
This code:
for (int i = 0; i < 1000; ++i) {
    // ...
    if (0 != getaddrinfo("172.20.1.59", "8091", &hints, &res)) {
...is inefficient. There is no reason you need to keep reinitializing res on each loop.
Even if there is some reason I'm not seeing, you're leaking memory by not calling freeaddrinfo() on res.
You should initialize this data structure once before you enter the loop, then reuse it on each iteration.
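In other words, something like this (a sketch):
// Sketch: resolve the address once, reuse it, and free it exactly once
struct addrinfo hints;
struct addrinfo *res = NULL;
memset(&hints, 0, sizeof(hints));
hints.ai_socktype = SOCK_STREAM;
hints.ai_family = AF_INET;
hints.ai_protocol = IPPROTO_TCP;

if (getaddrinfo("172.20.1.59", "8091", &hints, &res) != 0)
    return;                        // handle the failure properly in real code

for (int i = 0; i < 1000; ++i) {
    // ... create a socket and connect it to res->ai_addr ...
}

freeaddrinfo(res);                 // release the list when done with it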
else if (error == 10037) {
Why aren't you using WSAEALREADY here?
You don't need to use WSAConnect() here. You're using the 3-argument subset that Winsock shares with BSD sockets. You might as well use connect() here instead.
There's no sense making your code any more complex than it has to be.
Why aren't you using a switch statement for this?
if (error == WSAEWOULDBLOCK) {
    // ...
}
else if (error == WSAEISCONN) {
    // ...
}
// etc.
You shouldn't disable the Nagle algorithm:
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, ...);
I've run into a problem with a simple TCP client implemented using select.
The problem shows up at the second printf: nothing is displayed until the program gets through the connect() call inside it, and then it waits for user input. Does connect block the rest of the program until I send something? (The TCP server is also implemented using select, but I didn't find anything wrong with it.)
I've searched the web and couldn't find a cause; maybe I didn't search for the right thing.
#include <includes.h>

int main()
{
    int sfd;
    fd_set rset;
    char buff[1024] = " ";
    char playerName[20] = "";
    int nameSet = 0;
    struct sockaddr_in server;

    sfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sfd < 0)
    {
        printf("socket not created\n");
        return 0;
    }

    bzero(&server, sizeof(struct sockaddr_in));
    server.sin_family = AF_INET;
    server.sin_port = htons(2020);
    inet_aton("127.0.0.1", &server.sin_addr);

    // here is the problem: the %d below calls the connect() function
    printf("Conexion returned:%d \n Name:", connect(sfd, (struct sockaddr *)&server, sizeof(server)));

    for (;;)
    {
        bzero(buff, 1024);
        FD_ZERO(&rset);
        FD_SET(0, &rset);
        FD_SET(sfd, &rset);

        if (select(sfd + 1, &rset, NULL, NULL, NULL) < 0)
        {
            printf("con-lost!\n");
            break;
        }

        if (FD_ISSET(0, &rset))
        {
            printf("Talk: \n");
            scanf("%s", buff);
            if (nameSet == 0)
            {
                strcpy(playerName, buff);
                nameSet = 1;
                printf("Hi:%s\n", playerName);
            }
            if (write(sfd, buff, strlen(buff) + 10) < 0)
            {
                break;
            }
        }

        if (FD_ISSET(sfd, &rset) > 0)
        {
            if (read(sfd, buff, 1024) <= 0)
            {
                printf("con is off!\n");
                break;
            }
            printf("msg rcd %s\n", buff);
        }
    } // endfor

    close(sfd);
    return 0;
} // endmain
The connect function, on a blocking socket, blocks until the connect operation succeeds or fails.
You should be warned that using select with a blocking socket, which is what your program does, does not ensure that your program will not block. When you get a select hit, that does not guarantee that a future operation will not block.
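If you want the connect itself not to block, one option (a sketch, assuming POSIX and reusing the sfd and server variables from your code) is to put the socket into non-blocking mode first and then wait for writability:
/* Sketch: non-blocking connect, then wait for the result with select() */
#include <fcntl.h>
#include <errno.h>
#include <sys/select.h>

int flags = fcntl(sfd, F_GETFL, 0);
fcntl(sfd, F_SETFL, flags | O_NONBLOCK);

if (connect(sfd, (struct sockaddr *)&server, sizeof(server)) < 0
        && errno == EINPROGRESS)
{
    fd_set wset;
    FD_ZERO(&wset);
    FD_SET(sfd, &wset);
    struct timeval tv = {5, 0};                /* wait at most 5 seconds */
    if (select(sfd + 1, NULL, &wset, NULL, &tv) > 0)
    {
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(sfd, SOL_SOCKET, SO_ERROR, &err, &len);
        /* err == 0 means the connection is established */
    }
}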
strlen(buff)+10
What's the reasoning behind the +10?
I need to set up an Ethernet (web) server that has to be turned on and off depending on some conditions, on the Arduino UNO.
I read the docs of the Server class in the Ethernet library, and it seems there is no way to stop the server once you have started it, i.e. there is no counterpart to EthernetServer.begin().
I thought then to set up the server in the setup section and serve incoming connections depending on the given condition:
EthernetServer server = EthernetServer(80);

void setup() {
  Ethernet.begin(mac, ip);
  server.begin();
}

void loop() {
  if (condition) {
    EthernetClient client = server.available();
    if (client == true) {
      // serve the client...
    }
  } else {
    // do something else
  }
}
This indeed works, but the client is not properly rejected: it is just left pending. In the browser one can see the web page loading indefinitely, and if the condition turns to true the client will eventually be served the request it issued while the condition was false.
I see no method for rejecting the request (there is no counterpart to EthernetServer.available()). The only thing that comes to my mind is to perform a
server.available().stop();
at the beginning of the else block. This prevents serving requests issued while the condition was false, but doesn't prevent the connection between the client and the server from taking place (it's like opening a connection and shutting it down immediately).
How could I avoid establishing connections at all while the condition is false?
I'm guessing here since I don't have my Arduino collection handy, but from memory and from reading the reference you could try something like:
void loop()
{
  EthernetClient client = server.available();
  if (!condition)
  {
    client.stop(); // break the connection and do something else
  }
  else
  {
    // serve the client...
  }
}
Hope that may at least help you in the right direction.
Cheers,
Could you just return a 404 header when you want the server disabled?
if (!condition)
{
  client.println("HTTP/1.1 404 Not Found");
  client.println("Content-Type: text/html");
  client.println("Connection: close");
  client.println();
  client.println("<!DOCTYPE HTML>");
  client.println("<html><body>404</body></html>");
}
else
{
  // serve the client
}
I am writing this answer here as this is the only post still active, or that hasn't been closed, regarding this topic. Despite extensive research on switching the EthernetServer on and off at will, I found this is not possible. The only thing you can do is use some of the functions declared public in the classes of the Ethernet/W5100/W5200/W5500 libraries.
The features I've noticed that actually impact the reliability of the network card are:
#include <Ethernet.h>
#include <utility/w5100.h>

W5100.setRetransmissionTime(milliseconds);
W5100.setRetransmissionCount(number);
(these help to shorten waiting times in case of a Wiznet W5100/W5200/W5500 network card timeout)

EthernetClient::setConnectionTimeout(CONNECTION_TIMEOUT);
EthernetClient::setTimeout(CONNECTION_INPUT_STREAMING_TIMEOUT);
(these help to shorten waiting times in case of a timeout of the client connected to the EthernetServer)
More tips:
when EthernetServer::available() returns false, consider using EthernetServer::flush() to flush the server buffers;
when using EthernetClient::write(), also use EthernetClient::flush() to ensure that all data has been sent;
use EthernetClient::close() on dead/useless clients to free sockets easily.
Consider implementing a function to force-close network sockets, using the following code:
#include <SPI.h>
#include <utility/w5100.h>

void closeAllSockets()
{
  for (int i = 0; i < MAX_SOCK_NUM; i++)
  {
    SPI.beginTransaction(SPI_ETHERNET_SETTINGS);
    W5100.execCmdSn(i, Sock_CLOSE);
    SPI.endTransaction();
  }
}

void printAllSockets()
{
  for (int i = 0; i < MAX_SOCK_NUM; i++)
  {
    Serial.print(F("Socket #"));
    Serial.print(i);
    uint8_t s = W5100.readSnSR(i);   // socket status register
    Serial.print(F(": 0x"));
    Serial.print(s, 16);
    Serial.print(F(" "));
    Serial.print(W5100.readSnPORT(i));
    Serial.print(F(" D:"));
    uint8_t dip[4];
    W5100.readSnDIPR(i, dip);        // destination IP of the socket
    for (int j = 0; j < 4; j++)
    {
      Serial.print(dip[j], 10);
      if (j < 3)
        Serial.print(".");
    }
    Serial.print(F("("));
    Serial.print(W5100.readSnDPORT(i));
    Serial.println(F(")"));
  }
}
MAX_SOCK_NUM changes according to the network card: the Wiznet W5100 has a maximum of 4 sockets, while the W5200 and W5500 have a maximum of 8 sockets.
Hope this helps someone.
A friend of mine came to me with a problem: when using the NetworkStream class on the server end of the connection, if the client disconnects, NetworkStream fails to detect it.
Stripped down, his C# code looked like this:
List<TcpClient> connections = new List<TcpClient>();
TcpListener listener = new TcpListener(7777);
listener.Start();
while (true)
{
    if (listener.Pending())
    {
        connections.Add(listener.AcceptTcpClient());
    }

    TcpClient deadClient = null;
    foreach (TcpClient client in connections)
    {
        if (!client.Connected)
        {
            deadClient = client;
            break;
        }

        NetworkStream ns = client.GetStream();
        if (ns.DataAvailable)
        {
            BinaryFormatter bf = new BinaryFormatter();
            object o = bf.Deserialize(ns);
            ReceiveMyObject(o);
        }
    }

    if (deadClient != null)
    {
        deadClient.Close();
        connections.Remove(deadClient);
    }

    Thread.Sleep(0);
}
The code works, in that clients can successfully connect and the server can read data sent to it. However, if the remote client calls tcpClient.Close(), the server does not detect the disconnection - client.Connected remains true, and ns.DataAvailable is false.
A search of Stack Overflow provided an answer - since Socket.Receive is not being called, the socket is not detecting the disconnection. Fair enough. We can work around that:
foreach (TcpClient client in connections)
{
    client.ReceiveTimeout = 0;
    if (client.Client.Poll(0, SelectMode.SelectRead))
    {
        int bytesPeeked = 0;
        byte[] buffer = new byte[1];
        bytesPeeked = client.Client.Receive(buffer, SocketFlags.Peek);
        if (bytesPeeked == 0)
        {
            deadClient = client;
            break;
        }
        else
        {
            NetworkStream ns = client.GetStream();
            if (ns.DataAvailable)
            {
                BinaryFormatter bf = new BinaryFormatter();
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
    }
}
(I have left out exception handling code for brevity.)
This code works; however, I would not call this solution "elegant". The other elegant solution to the problem I am aware of is to spawn a thread per TcpClient, and allow the BinaryFormatter.Deserialize (née NetworkStream.Read) call to block, which would detect the disconnection correctly. That, though, has the overhead of creating and maintaining a thread per client.
I get the feeling that I'm missing some secret, awesome answer that would retain the clarity of the original code, but avoid the use of additional threads to perform asynchronous reads. Though, perhaps, the NetworkStream class was never designed for this sort of usage. Can anyone shed some light?
Update: Just want to clarify that I'm interested to see if the .NET framework has a solution that covers this use of NetworkStream (i.e. polling and avoiding blocking) - obviously it can be done; the NetworkStream could easily be wrapped in a supporting class that provides the functionality. It just seemed strange that the framework essentially requires you to use threads to avoid blocking on NetworkStream.Read, or, to peek on the socket itself to check for disconnections - almost like it's a bug. Or a potential lack of a feature. ;)
Is the server expecting to be sent multiple objects over the same connection? If so, I don't see how this code will work, as there is no delimiter being sent that signifies where the first object ends and the next object begins.
If only one object is being sent and the connection closed afterwards, then the original code would work.
There has to be a network operation initiated in order to find out if the connection is still active or not. What I would do, is that instead of deserializing directly from the network stream, I would instead buffer into a MemoryStream. That would allow me to detect when the connection was lost. I would also use message framing to delimit multiple responses on the stream.
NetworkStream ns = client.GetStream();
BinaryReader br = new BinaryReader(ns);

// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
if (objectSize == 0)
    break; // client disconnected

byte[] buffer = new byte[objectSize];
int index = 0;
int read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
while (read > 0)
{
    objectSize -= read;
    index += read;
    read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
}

if (objectSize > 0)
{
    // client aborted the connection in the middle of the stream
    break;
}
else
{
    BinaryFormatter bf = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream(buffer))
    {
        object o = bf.Deserialize(ms);
        ReceiveMyObject(o);
    }
}
Yeah, but what if you lose the connection before getting the size? That is, right before the following line:
// message framing. First, read the #bytes to expect.
int objectSize = br.ReadInt32();
ReadInt32() will block the thread indefinitely.