I am downloading a file over FTP. To check how my code handles errors, I am simulating a few network failures. The code that handles the network input stream is below:
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
    // An NSStream delegate callback that's called when events happen on our
    // network stream.
{
    #pragma unused(aStream)
    assert(aStream == self.networkStream);
    switch (eventCode) {
        case NSStreamEventOpenCompleted: {
            self.connected = YES;
        } break;
        case NSStreamEventHasBytesAvailable: {
            NSInteger bytesRead;
            uint8_t buffer[32768];

            // Pull some data off the network.
            bytesRead = [self.networkStream read:buffer maxLength:sizeof(buffer)];
            DLog(@"%@, bytesRead:%d", self.urlInput, bytesRead);
            if (bytesRead == -1) {
                [self _stopReceiveWithStatus:@"Network read error"];
            } else if (bytesRead == 0) {
                [self _stopReceiveWithStatus:@"success"];
            } else {
                NSInteger bytesWritten;
                NSInteger bytesWrittenSoFar;

                bytesWrittenSoFar = 0;
                do {
                    bytesWritten = [self.fileStream write:&buffer[bytesWrittenSoFar] maxLength:bytesRead - bytesWrittenSoFar];
                    DLog(@"%@, bytesWritten:%d", self.urlInput, bytesWritten);
                    assert(bytesWritten != 0);
                    if (bytesWritten == -1) {
                        [self _stopReceiveWithStatus:@"File write error"];
                        break;
                    } else {
                        bytesWrittenSoFar += bytesWritten;
                    }
                } while (bytesWrittenSoFar != bytesRead);
            }
        } break;
        case NSStreamEventHasSpaceAvailable: {
            assert(NO);     // should never happen for the output stream
        } break;
        case NSStreamEventErrorOccurred: {
            [self _stopReceiveWithStatus:@"Stream open error"];
        } break;
        case NSStreamEventEndEncountered: {
            assert(NO);
        } break;
        default: {
            assert(NO);
        } break;
    }
}
If I turn off Wi-Fi manually or turn off my wireless router (the network connection flag is off), an NSStreamEventErrorOccurred event is delivered and the download terminates correctly. However, if I turn off the modem while keeping the wireless router on (the network connection flag is on), the download gets stuck waiting in the NSStreamEventHasBytesAvailable case. Even after I restore the internet connection, it remains stuck.
I want to know why it gets stuck, how I can detect this kind of error, and how I should deal with this situation.
First, kudos for considering this and running tests. Many developers assume that "the network connection will always work."
Second, it seems a little odd that you are using NSStream for FTP downloads; you do know that NSURLConnection supports FTP, right? Unless you are doing something really strange you should probably use the built-in URL loading facilities.
In any case, the issue here is that there is in principle no way for an application (or computer) to determine whether there has been a pause in a connection because the connection failed (and should be restarted or canceled) or because it simply is running slowly (in which case patience is required).
I'm a little surprised that an active TCP session is not resumed when your modem is reconnected; that suggests that maybe your ISP's router is dropping the connection when the modem link goes down or that your IP is changing on reconnection, but this isn't important for your question anyway.
TCP sessions will ordinarily eventually get timed out by the OS (or an upstream router) after a period of inactivity, but this might be an unacceptably long time. So the thing you probably need to do is implement a timeout.
What I'd probably do (assuming you stay the course with NSStream as discussed above) is have an NSTimer firing off periodically - maybe every 15 seconds - and have the callback compare the current time to a timestamp that you set on each NSStreamEventHasBytesAvailable event. If the timestamp is too old (say, more than 15 seconds), cancel or restart the download as desired and notify the user.
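Roughly, such a watchdog could look like the sketch below; the property and method names (lastReadDate, stallTimer, _startStallTimer, _checkForStall:) are placeholders I made up, and only _stopReceiveWithStatus: comes from your code:
// Sketch only: "lastReadDate" and "stallTimer" are hypothetical properties.
// In the NSStreamEventHasBytesAvailable case, record the time of each read:
//     self.lastReadDate = [NSDate date];

- (void)_startStallTimer
{
    // Check every 15 seconds; invalidate the timer when the download ends.
    self.stallTimer = [NSTimer scheduledTimerWithTimeInterval:15.0
                                                       target:self
                                                     selector:@selector(_checkForStall:)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)_checkForStall:(NSTimer *)timer
{
    // No data for more than 15 seconds: treat the connection as dead.
    if ([[NSDate date] timeIntervalSinceDate:self.lastReadDate] > 15.0) {
        [self _stopReceiveWithStatus:@"Network timeout"];
    }
}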
But again, take a look at just using NSURLConnection.
I'm using Veins 4.4 and I need to store some results on an external server, so I would like to open a UDP connection to it.
I've read several posts about using a TCP connection for the mobility in Veins, and I understood that I should resort to the INET module to open a connection. However, I don't need it for mobility, but rather to send data to an external server.
Does anyone have any suggestions?
I was trying to use the processCommandFromApp method from the inet/src/transport/UDP.cc class:
void UDP::processCommandFromApp(cMessage *msg)
{
switch (msg->getKind())
{
case UDP_C_BIND: {
UDPBindCommand *ctrl = check_and_cast<UDPBindCommand*>(msg->getControlInfo());
bind(ctrl->getSockId(), msg->getArrivalGate()->getIndex(), ctrl->getLocalAddr(), ctrl->getLocalPort());
break;
}
case UDP_C_CONNECT: {
UDPConnectCommand *ctrl = check_and_cast<UDPConnectCommand*>(msg->getControlInfo());
connect(ctrl->getSockId(), msg->getArrivalGate()->getIndex(), ctrl->getRemoteAddr(), ctrl->getRemotePort());
break;
}
case UDP_C_CLOSE: {
UDPCloseCommand *ctrl = check_and_cast<UDPCloseCommand*>(msg->getControlInfo());
close(ctrl->getSockId());
break;
}
case UDP_C_SETOPTION: {
UDPSetOptionCommand *ctrl = check_and_cast<UDPSetOptionCommand *>(msg->getControlInfo());
SockDesc *sd = getOrCreateSocket(ctrl->getSockId(), msg->getArrivalGate()->getIndex());
if (dynamic_cast<UDPSetTimeToLiveCommand*>(ctrl))
setTimeToLive(sd, ((UDPSetTimeToLiveCommand*)ctrl)->getTtl());
else if (dynamic_cast<UDPSetTypeOfServiceCommand*>(ctrl))
setTypeOfService(sd, ((UDPSetTypeOfServiceCommand*)ctrl)->getTos());
else if (dynamic_cast<UDPSetBroadcastCommand*>(ctrl))
setBroadcast(sd, ((UDPSetBroadcastCommand*)ctrl)->getBroadcast());
else if (dynamic_cast<UDPSetMulticastInterfaceCommand*>(ctrl))
setMulticastOutputInterface(sd, ((UDPSetMulticastInterfaceCommand*)ctrl)->getInterfaceId());
else if (dynamic_cast<UDPSetMulticastLoopCommand*>(ctrl))
setMulticastLoop(sd, ((UDPSetMulticastLoopCommand*)ctrl)->getLoop());
else if (dynamic_cast<UDPSetReuseAddressCommand*>(ctrl))
setReuseAddress(sd, ((UDPSetReuseAddressCommand*)ctrl)->getReuseAddress());
else if (dynamic_cast<UDPJoinMulticastGroupsCommand*>(ctrl))
{
UDPJoinMulticastGroupsCommand *cmd = (UDPJoinMulticastGroupsCommand*)ctrl;
std::vector<IPvXAddress> addresses;
std::vector<int> interfaceIds;
for (int i = 0; i < (int)cmd->getMulticastAddrArraySize(); i++)
addresses.push_back(cmd->getMulticastAddr(i));
for (int i = 0; i < (int)cmd->getInterfaceIdArraySize(); i++)
interfaceIds.push_back(cmd->getInterfaceId(i));
joinMulticastGroups(sd, addresses, interfaceIds);
}
else if (dynamic_cast<UDPLeaveMulticastGroupsCommand*>(ctrl))
{
UDPLeaveMulticastGroupsCommand *cmd = (UDPLeaveMulticastGroupsCommand*)ctrl;
std::vector<IPvXAddress> addresses;
for (int i = 0; i < (int)cmd->getMulticastAddrArraySize(); i++)
addresses.push_back(cmd->getMulticastAddr(i));
leaveMulticastGroups(sd, addresses);
}
else
throw cRuntimeError("Unknown subclass of UDPSetOptionCommand received from app: %s", ctrl->getClassName());
break;
}
default: {
throw cRuntimeError("Unknown command code (message kind) %d received from app", msg->getKind());
}
}
delete msg; // also deletes control info in it
}
I included the inet path as follows:
#include <inet/src/transport/udp/UDP.h>
and I call it as follows, passing a UDP_C_CONNECT message as input:
cMessage *UDP_C_CONNECT;
void Inet::UDP::processCommandFromApp(UDP_C_CONNECT);
But when I run the simulation, it crashes, returning this error:
Errors occurred during the build.
Errors running builder 'OMNeT++ Makefile Builder' on project 'veins'.
java.lang.NullPointerException
1) What is the correct way to set up the required connection?
2) Why am I getting this error as soon as I include the inet path?
UPDATE
I also tried another way to establish the connection:
std::string host;
host = "16777343";
int port = 5144;
Veins::TraCIConnection* connection;
connection = TraCIConnection::connect(host.c_str(), port);
but as soon as it loads the plugin, it seems to wait for something at time 0.0 without starting the generation of the nodes.
Thanks for helping
Simulations using OMNeT++ are C++ programs, so you can use the full range of libraries and system calls available to any C++ program. If you want to open a UDP connection to some other computer on your network, just create a UDP socket as you would in any C++ program, then send the data whenever needed.
Maybe the easiest way to go about writing this is to:
start with a plain C++ program that has nothing to do with OMNeT++, then
move the part of the program that has to run before everything else into the initialize method of a module in your simulation, and the rest into a handleMessage method.
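For illustration, a minimal sketch of "create a UDP socket as you would in any C++ program" using plain POSIX sockets might look like this (the address, port, and function name are placeholders, not part of Veins or INET):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Send one datagram with the given payload to an external server.
void sendResultToServer(const std::string &payload)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return;  // handle the error as appropriate

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5144);                       // placeholder port
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);   // placeholder address

    sendto(sock, payload.data(), payload.size(), 0,
           reinterpret_cast<const sockaddr *>(&dest), sizeof(dest));
    close(sock);
}
A helper like this can then be called from a module's initialize or handleMessage method without involving INET at all.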
Okay, this is my first question here on Stack Overflow, so bear with me if I'm not asking properly.
Basically I'm trying to code some asynchronous sockets using std.socket, but I'm not sure whether I've understood the concept correctly. I've only ever worked with asynchronous sockets in C#, and in D they seem to sit at a much lower level. I've researched a lot and looked at plenty of code and documentation, both for D and for C/C++, to get an understanding, but I'm still not sure I grasp the concept, and I'd appreciate any examples you may have. I tried looking at splat, but it's very outdated, and vibe seems too complex for a simple asynchronous socket wrapper.
If I understood correctly, there is no poll() function in std.socket, so you'd have to use a SocketSet with a single socket on select() to poll the status of the socket, right?
So basically the way I'd go about handling the sockets is to poll to get the read status of the socket; if that succeeds (value > 0), I can call receive(), which returns 0 on disconnection and otherwise the number of bytes received, and I'd have to keep doing this until the expected number of bytes has been received.
Of course the socket is set to non-blocking!
Is that correct?
Here is the code I've made up so far.
void HANDLE_READ()
{
while (true)
{
synchronized
{
auto events = cast(AsyncObject[int])ASYNC_EVENTS_READ;
foreach (asyncObject; events)
{
int poll = pollRecv(asyncObject.socket.m_socket);
switch (poll)
{
case 0:
{
throw new SocketException("The socket had a time out!");
continue;
}
default:
{
if (poll <= -1)
{
throw new SocketException("The socket was interrupted!");
continue;
}
int recvGetSize = (asyncObject.socket.m_readBuffer.length - asyncObject.socket.readSize);
ubyte[] recvBuffer = new ubyte[recvGetSize];
int recv = asyncObject.socket.m_socket.receive(recvBuffer);
if (recv == 0)
{
removeAsyncObject(asyncObject.event_id, true);
asyncObject.socket.disconnect();
continue;
}
asyncObject.socket.m_readBuffer ~= recvBuffer;
asyncObject.socket.readSize += recv;
if (asyncObject.socket.readSize == asyncObject.socket.expectedReadSize)
{
removeAsyncObject(asyncObject.event_id, true);
asyncObject.event(asyncObject.socket);
}
break;
}
}
}
}
}
}
So basically the way I'd go about handling the sockets is to poll to get the read status of the socket
Not quite right. Usually, the idea is to build an event loop around select, so that your application is idle as long as there are no network or timer events that need to be handled. With polling, you'd have to check for new events continuously or on a timer, which leads to wasted CPU cycles, and events getting handled a bit later than they occur.
In the event loop, you populate the SocketSets with sockets whose events you are interested in. If you want to be notified of new received data on a socket, it goes to the "readable" set. If you have data to send, the socket should be in the "writable" set. And all sockets should be on the "error" set.
select will then block (sleep) until an event comes in, and fill the SocketSets with the sockets which have actionable events. Your application can then respond to them appropriately: receive data for readable sockets, send queued data for writable sockets, and perform cleanup for errored sockets.
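As a rough sketch (not drawn from any particular library; the function name, connection list, and buffer handling are illustrative only), a select-based loop with std.socket might look like this:
import std.socket;

// A bare-bones select() loop: blocks until a socket is readable or errored,
// then dispatches. Buffering and cleanup of closed sockets are left out.
void eventLoop(Socket listener, Socket[] clients)
{
    auto readSet = new SocketSet();
    auto errSet  = new SocketSet();

    while (true)
    {
        readSet.reset();
        errSet.reset();
        readSet.add(listener);
        errSet.add(listener);
        foreach (c; clients)
        {
            readSet.add(c);
            errSet.add(c);
        }

        // Sleeps until at least one socket has an actionable event.
        Socket.select(readSet, null, errSet);

        if (readSet.isSet(listener))
            clients ~= listener.accept();   // new incoming connection

        foreach (c; clients)
        {
            if (readSet.isSet(c))
            {
                ubyte[4096] buf;
                auto got = c.receive(buf[]);
                if (got <= 0)               // 0 = peer closed, negative = error
                {
                    c.close();              // also remove it from the list
                    continue;
                }
                // append buf[0 .. got] to this connection's read buffer here
            }
        }
    }
}
The key point is that Socket.select blocks until something actually happens, so the thread is neither spinning nor sleeping on an arbitrary interval.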
Here's my D implementation of non-fiber event-based networking: ae.net.asockets.
Is it OK to invoke WSAAsyncSelect in the WM_CREATE handler of a window procedure (WndProc), and then perform all recv actions inside the same WndProc (e.g. to recv and populate a control with the received byte data) under WM_SOCKET?
For example, I know that performing long tasks inside the WndProc can make the window unresponsive (since it cannot handle other messages until the current one is completed), but I've seen no examples that handle this recv I/O with a thread or event object. Is that completely unnecessary?
Here's the example case in the WndProc that I've seen on the net; in Petzold, too, the recv is handled in a similar fashion:
case WM_SOCKET:
{
if(WSAGETSELECTERROR(lParam))
{
MessageBox(hWnd,
"Connection to server failed",
"Error",
MB_OK|MB_ICONERROR);
SendMessage(hWnd,WM_DESTROY,NULL,NULL);
break;
}
switch(WSAGETSELECTEVENT(lParam))
{
case FD_READ:
{
char szIncoming[1024];
ZeroMemory(szIncoming,sizeof(szIncoming));
int inDataLength=recv(Socket,
(char*)szIncoming,
sizeof(szIncoming)/sizeof(szIncoming[0]),
0);
strncat(szHistory,szIncoming,inDataLength);
strcat(szHistory,"\r\n");
SendMessage(hEditIn,
WM_SETTEXT,
sizeof(szIncoming)-1,
reinterpret_cast<LPARAM>(&szHistory));
}
break;
case FD_CLOSE:
{
MessageBox(hWnd,
"Server closed connection",
"Connection closed!",
MB_ICONINFORMATION|MB_OK);
closesocket(Socket);
SendMessage(hWnd,WM_DESTROY,NULL,NULL);
}
break;
}
}
Yes, this is perfectly acceptable. Typically, though, you would wait until CreateWindow/Ex() returns before calling WSAAsyncSelect(). Either way works fine. Just be sure to handle the case where recv() fails, or returns fewer bytes than you asked for.
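For instance, a more defensive version of the FD_READ case might look roughly like the sketch below (the variable names match the snippet above; treat it as an outline rather than a drop-in implementation):
case FD_READ:
{
    char szIncoming[1024];
    int inDataLength = recv(Socket, szIncoming, sizeof(szIncoming) - 1, 0);

    if (inDataLength == SOCKET_ERROR)
    {
        // WSAEWOULDBLOCK just means "nothing to read right now";
        // anything else is a real failure.
        if (WSAGetLastError() != WSAEWOULDBLOCK)
        {
            closesocket(Socket);
            // report the error and tear down the window as appropriate
        }
        break;
    }

    if (inDataLength == 0)
    {
        // The peer closed the connection gracefully.
        closesocket(Socket);
        break;
    }

    szIncoming[inDataLength] = '\0';            // recv() does not NUL-terminate
    strncat(szHistory, szIncoming, inDataLength); // append only what arrived
}
break;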
I need to set up an Ethernet (web) server that has to be turned on and off depending on some conditions on the Arduino UNO.
I read the docs of the Server class in the Ethernet library, and it seems there is no way to stop the server once you have started it, i.e. there is no counterpart to EthernetServer.begin().
I then thought of setting up the server in the setup section and serving incoming connections only when the given condition holds:
EthernetServer server = EthernetServer(80);

void setup() {
    Ethernet.begin(mac, ip);
    server.begin();
}

void loop() {
    if (condition) {
        EthernetClient client = server.available();
        if (client == true) {
            // serve the client...
        }
    } else {
        // do something else
    }
}
This indeed works, but the client is not properly rejected: it is just left pending. In the browser one can see the web page loading indefinitely, and if the condition turns true, the client will eventually be served the request issued while the condition was false.
I see no methods for rejecting the request (there is no counterpart of EthernetServer.available()). The only thing that comes to my mind is to perform a
server.available().stop();
at the beginning of the else block. This prevents serving requests issued while the condition was false, but doesn't prevent the connection between the client and the server from taking place (it's like opening a connection and shutting it down immediately).
How can I avoid establishing connections at all while the condition is false?
I'm guessing here since I don't have my Arduino collection handy, but from memory and reading the reference you could try something like
void loop()
{
    EthernetClient client = server.available();
    if ( !condition )
    {
        client.stop(); // break connection and do something else
    }
    else
    {
        // serve the client...
    }
}
Hope that may at least help you in the right direction.
Cheers,
Could you just return a 404 header when you want the server disabled?
if (!condition)
{
    client.println("HTTP/1.1 404 Not Found");
    client.println("Content-Type: text/html");
    client.println("Connection: close");
    client.println();
    client.println("<!DOCTYPE HTML>");
    client.println("<html><body>404</body></html>");
}
else
{
    // serve client
}
I am writing this answer here as it is the only post regarding this topic that is still active or hasn't been closed. Despite extensive research into switching the EthernetServer on or off at will, this is not possible. The only thing you can do is use some of the functions declared public in the classes of the Ethernet/W5100/W5200/W5500 libraries.
The features I've noticed that actually impact the reliability of the network card are:
#include <Ethernet.h>
#include <utility/w5100.h>
W5100.setRetransmissionTime(milliseconds);
W5100.setRetransmissionCount(number);
(helps to shorten waiting times in case of Wiznet W5100/W5200/W5500 network card timeout)
EthernetClient::setConnectionTimeout(CONNECTION_TIMEOUT);
EthernetClient::setTimeout(CONNECTION_INPUT_STREAMING_TIMEOUT);
(they help to shorten waiting times in case of timeout of the client connected to the EthernetServer)
More tips:
when EthernetServer::available() returns false consider using EthernetServer::flush() to flush server buffers;
when using EthernetClient::write() also use EthernetClient::flush() to ensure that all data has been sent;
use EthernetClient::close() on dead/useless clients to free sockets easily.
Consider implementing a function to force-close network sockets, using the following code:
#include <SPI.h>
#include <utility/w5100.h>
void closeAllSockets()
{
for (int i = 0; i < MAX_SOCK_NUM; i++)
{
SPI.beginTransaction(SPI_ETHERNET_SETTINGS);
W5100.execCmdSn(i, Sock_CLOSE);
SPI.endTransaction();
}
}
void printAllSockets()
{
for (int i = 0; i < MAX_SOCK_NUM; i++)
{
Serial.print(F("Socket #"));
Serial.print(i);
uint8_t s = W5100.readSnSR(i);
Serial.print(F(": 0x"));
Serial.print(s, 16);
Serial.print(F(" "));
Serial.print(W5100.readSnPORT(i));
Serial.print(F(" D:"));
uint8_t dip[4];
W5100.readSnDIPR(i, dip);
for (int j = 0; j < 4; j++)
{
Serial.print(dip[j], 10);
if (j < 3)
Serial.print(".");
}
Serial.print(F("("));
Serial.print(W5100.readSnDPORT(i));
Serial.println(F(")"));
}
}
MAX_SOCK_NUM changes according to the network card: the WIZnet W5100 has a maximum of 4 sockets, while the W5200 and W5500 have a maximum of 8 sockets.
Hope this helps someone.
I'm currently polling my CFReadStream for new data with CFReadStreamHasBytesAvailable.
(First, some background: I'm doing my own threading and I don't want/need to mess with runloop stuff, so the client callback stuff doesn't really apply here).
My question is: what are accepted practices for polling?
Apple's documentation on the subject doesn't seem too helpful.
They recommend that you "do something else while you wait". I'm currently just doing something along the lines of:
while (!done)
{
    if (CFReadStreamHasBytesAvailable(readStream))
    {
        CFReadStreamRead(...) ... bla bla bla
    } else {
        usleep(3600);     // I made this up
        sched_yield();    // also made this up
        continue;
    }
}
Are the usleep and the sched_yield "good enough"? Is there a "good" number to sleep for in usleep?
(Also: yes, because this is running in my own thread, I could just block on CFReadStreamRead - which would be great but I'm also trying to snag upload progress as well as download progress, so blocking there wouldn't help...).
Any insight would be much appreciated - thanks!
I think this question is a bit of a paradox because you're asking what the best practices are for doing something that's intrinsically not a best practice ;)
When there's a perfectly good method for blocking on network I/O, any compromise that causes you to poll instead is by definition not the best practice.
That said, if you do poll, I think it might be more appropriate to "run the runloop until date" on your thread, instead of using whatever POSIX sleep or yield method you're imagining. Remember that each thread gets its own runloop, so essentially by running the runloop you're allowing Apple to employ its concept of best practices for blocking until a future date.
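For illustration, replacing the sleep calls with a short run-loop spin might look roughly like this (the 0.25-second interval is arbitrary, and note the caveat in the comment):
while (!done) {
    if (CFReadStreamHasBytesAvailable(readStream)) {
        // ... CFReadStreamRead() and handle the bytes ...
    } else {
        // Let this thread's run loop idle briefly instead of usleep().
        // Caveat: if nothing at all (no source or timer) is scheduled on
        // this run loop, runUntilDate: returns immediately, so keep the
        // stream or a repeating timer scheduled on it.
        [[NSRunLoop currentRunLoop]
            runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.25]];
    }
}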
As for the time delay, I don't know if you'll get a definitive answer for what a good time is. It's a tradeoff between peppering the CPU with polling cycles vs. being stuck in the runloop for a little while when I/O is ready to be read from the network.
Ideally I think I would refocus your efforts on making this work using I/O blocking calls, but if you stick with the poll & idle technique, don't fret too much about the specific delay time. Just pick something that works and doesn't seem to impact performance negatively in either direction.
(Also, I'd like to clarify that I'm not too religious about the polling vs. blocking thing, I'm only stressing its value because you're obviously in search of an elevated solution).
When doing manual CFStream based connections on a separate thread (for custom things like bandwidth monitoring and throttling), I use a combination of CFReadStreamScheduleWithRunLoop, CFRunLoopRunInMode and CFReadStreamSetClient. Basically I run for 0.25 seconds and then check stream status. The client callback also gets notified on its own as well. This allows me to periodically check read status and do some custom behavior but rely mostly on (stream) events.
static const CFOptionFlags kMyNetworkEvents =
kCFStreamEventOpenCompleted
| kCFStreamEventHasBytesAvailable
| kCFStreamEventEndEncountered
| kCFStreamEventErrorOccurred;
static void MyStreamCallBack(CFReadStreamRef readStream, CFStreamEventType type, void *clientCallBackInfo) {
[(id)clientCallBackInfo _handleNetworkEvent:type];
}
- (void)connect {
...
CFStreamClientContext streamContext = {0, self, NULL, NULL, NULL};
BOOL success = CFReadStreamSetClient(readStream_, kMyNetworkEvents, MyStreamCallBack, &streamContext);
CFReadStreamScheduleWithRunLoop(readStream_, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
if (!CFReadStreamOpen(readStream_)) {
// Notify error
}
while(!cancelled_ && !finished_) {
SInt32 result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, NO);
if (result == kCFRunLoopRunStopped || result == kCFRunLoopRunFinished) {
break;
}
if (([NSDate timeIntervalSinceReferenceDate] - lastRead_) > MyConnectionTimeout) {
// Call timed out
break;
}
// Also handle stream status
CFStreamStatus status = CFReadStreamGetStatus(readStream_);
if (![self _handleStreamStatus:status]) break;
}
CFRunLoopStop(CFRunLoopGetCurrent());
CFReadStreamSetClient(readStream_, 0, NULL, NULL);
CFReadStreamUnscheduleFromRunLoop(readStream_, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
CFReadStreamClose(readStream_);
}
- (void)_handleNetworkEvent:(CFStreamEventType)type {
switch(type) {
case kCFStreamEventOpenCompleted:
// Notify connected
break;
case kCFStreamEventHasBytesAvailable:
[self _handleBytes];
break;
case kCFStreamEventErrorOccurred:
[self _handleError];
break;
case kCFStreamEventEndEncountered:
[self _handleBytes];
[self _handleEnd];
break;
default:
Debug(#"Received unexpected CFStream event (%d)", type);
break;
}
}