How to stop the FTP connection if the server is not available - iPhone

I create an input stream for an FTP request as below:
ftpStream = CFReadStreamCreateWithFTPURL(NULL, (CFURLRef) url);
networkStream = (NSInputStream *) ftpStream;
uint8_t buffer[1024];
NSInteger bytesRead = [self.networkStream read:buffer maxLength:sizeof(buffer)];
When I read data, if the server cannot be reached, the program gets stuck at the read call above. Is there a way to stop the connection after some number of seconds that I can define, or some other way to deal with this?

Don't do a synchronous read. Instead, set a delegate on the stream object, schedule it on the run loop, and call -open. If you're on the main thread of an app, just return control back to the framework at this point and it will run the run loop for you. If you're on a background thread or writing a command-line tool, run the thread's run loop yourself. Your delegate will be called when there's data to be read.
To establish a timeout for the connection, you can schedule a timer on the run loop. Alternatively, if you're running the run loop yourself, you can just put a limit on how long you'll run it. If the time expires before the connection has been made, just close the stream.
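Here is a minimal Objective-C sketch of that approach, assuming the FTP stream from the question and non-ARC code to match its cast; the class name FTPReader, the 10-second timeout and the 1 KB buffer are illustrative choices, not requirements.
// Sketch only (non-ARC, matching the cast in the question); names are illustrative.
@interface FTPReader : NSObject <NSStreamDelegate>
@property (nonatomic, retain) NSInputStream *networkStream;
@property (nonatomic, retain) NSTimer *timeoutTimer;
@end

@implementation FTPReader

- (void)startWithURL:(NSURL *)url {
    CFReadStreamRef ftpStream = CFReadStreamCreateWithFTPURL(NULL, (CFURLRef)url);
    self.networkStream = (NSInputStream *)ftpStream;
    CFRelease(ftpStream);

    self.networkStream.delegate = self;
    [self.networkStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [self.networkStream open];

    // Give up if nothing has happened after 10 seconds.
    self.timeoutTimer = [NSTimer scheduledTimerWithTimeInterval:10.0
                                                         target:self
                                                       selector:@selector(timeoutFired:)
                                                       userInfo:nil
                                                        repeats:NO];
}

- (void)timeoutFired:(NSTimer *)timer {
    [self closeStream];
    // report the timeout to whoever started the transfer
}

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    switch (eventCode) {
        case NSStreamEventHasBytesAvailable: {
            [self.timeoutTimer invalidate];       // data arrived, cancel the timeout
            uint8_t buffer[1024];
            NSInteger bytesRead = [self.networkStream read:buffer maxLength:sizeof(buffer)];
            // consume bytesRead bytes from buffer here
            break;
        }
        case NSStreamEventErrorOccurred:
        case NSStreamEventEndEncountered:
            [self closeStream];
            break;
        default:
            break;
    }
}

- (void)closeStream {
    [self.timeoutTimer invalidate];
    self.timeoutTimer = nil;
    [self.networkStream close];
    [self.networkStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    self.networkStream.delegate = nil;
    self.networkStream = nil;
}

@end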

Related

Epoll events for connecting sockets

I create an epoll instance and register some non-blocking sockets that try to connect to closed ports on localhost. Why does epoll tell me that I can write to these sockets (it reports an event with an event mask containing EPOLLOUT for one of the created sockets)? The socket never opens, and if I try to send something to it I get a Connection refused error.
Another question: what does the EPOLLHUP event mean? I thought it was the event for a refused connection. But how, in that case, can an event carry both EPOLLHUP and EPOLLOUT at the same time?
Sample code in Python:
import socket
import select

poll = select.epoll()
fd_to_sock = {}
for i in range(1, 3):
    s = socket.socket()
    s.setblocking(0)
    s.connect_ex(('localhost', i))
    poll.register(s, select.EPOLLOUT)
    fd_to_sock[s.fileno()] = s

print(poll.poll(0.1))
# prints '[(4, 28), (5, 28)]'
All that poll guarantees is that your application won't block after calling the corresponding function. So you are getting what you paid for: you can rest assured that writing to this socket won't block, and it didn't block, did it?
Poll never guarantees that the corresponding operation will succeed.
poll/select/epoll return when the file descriptor is "ready" but that just means that the operation will not block (not that you will necessarily be able to write to it successfully).
Likewise for EPOLLIN: for example, it will return ready when a socket is closed; in that case, you won't actually be able to read data from it.
EPOLLHUP means that there was a "hang up" on the connection. That would really only occur once you actually had a connection. Also, the documentation (http://linux.die.net/man/2/epoll_ctl) says that you don't need to include it anyway:
EPOLLHUP
Hang up happened on the associated file descriptor. epoll_wait(2) will always wait for this event; it is not necessary to set it in events.
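As an illustration of checking the actual outcome rather than relying on readiness alone, here is a minimal C sketch (not from the question; the function name handle_writable is made up) that asks the kernel via getsockopt(SO_ERROR) whether a non-blocking connect actually succeeded once EPOLLOUT is reported.
/* Sketch: call this when epoll reports EPOLLOUT on a socket that was
 * connecting non-blockingly. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

static void handle_writable(int fd)
{
    int err = 0;
    socklen_t len = sizeof(err);

    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0) {
        perror("getsockopt");
        return;
    }
    if (err != 0) {
        /* e.g. ECONNREFUSED for a closed port on localhost */
        fprintf(stderr, "connect on fd %d failed: %s\n", fd, strerror(err));
        return;
    }
    /* err == 0: the connection is established and writing won't fail
     * with "Connection refused". */
    printf("fd %d connected\n", fd);
}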

OpenSSL Nonblocking Socket Accept And Connect Failed

Here is my question:
Is it bad to set the socket to non-blocking before I call accept or connect? Or should I use blocking accept and connect, and then change the socket to non-blocking afterwards?
I'm new to OpenSSL and not very experienced with network programming. My problem is that I'm trying to use OpenSSL on top of a non-blocking socket setup to add security. When I call SSL_accept on the server side and SSL_connect on the client side, and check the returned error code using
SSL_get_error(m_ssl, n);
char error[65535];
ERR_error_string_n(ERR_get_error(), error, 65535);
the return code from SSL_get_error indicates SSL_ERROR_WANT_READ, while ERR_error_string_n prints out "error:00000000:lib(0):func(0):reason(0)", which I think means no error. SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Then I use a loop to retry those functions, but this just leads to an infinite loop :(
I believe I have initialized SSL properly; here is the code:
//CRYPTO_malloc_init();
SSL_library_init();
const SSL_METHOD *method;
// load & register all cryptos, etc.
OpenSSL_add_all_algorithms();
// load all error messages
SSL_load_error_strings();
if (server) {
    // create new server-method instance
    method = SSLv23_server_method();
}
else {
    // create new client-method instance
    method = SSLv23_client_method();
}
// create new context from method
m_ctx = SSL_CTX_new(method);
if (m_ctx == NULL) {
    throwError(-1);
}
If there is any part I haven't mentioned but you think it could be the problem, please let me know.
SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Yes, but this is not the full story.
You should retry the call only after the socket becomes readable, i.e. you need to use select, poll, or a similar function to wait until the socket is readable. The same applies to SSL_ERROR_WANT_WRITE, except that there you have to wait for the socket to become writable.
If you just retry without waiting, it will probably succeed eventually, but only after hundreds of failed calls. Waiting with select does not guarantee that the very next call will succeed, but it will usually take only a few calls of SSL_connect/SSL_accept, and you will not busy-loop and burn CPU in the meantime.
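As a rough illustration of that pattern, here is a minimal C sketch (not the poster's code; the function name connect_ssl_nonblocking is made up) that retries SSL_connect on a non-blocking socket, using select() to wait until the socket is readable or writable as OpenSSL requests. The same structure works for SSL_accept on the server side.
#include <sys/select.h>
#include <openssl/ssl.h>

static int connect_ssl_nonblocking(SSL *ssl, int fd)
{
    for (;;) {
        int n = SSL_connect(ssl);
        if (n == 1)
            return 0;                       /* handshake finished */

        int err = SSL_get_error(ssl, n);
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);

        if (err == SSL_ERROR_WANT_READ)
            FD_SET(fd, &readfds);           /* wait until readable, then retry */
        else if (err == SSL_ERROR_WANT_WRITE)
            FD_SET(fd, &writefds);          /* wait until writable, then retry */
        else
            return -1;                      /* a real error: give up */

        if (select(fd + 1, &readfds, &writefds, NULL, NULL) < 0)
            return -1;
    }
}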

GCDAsyncSocket write timeout does not work

I am trying to set a timeout on write operations when using GCDAsyncSocket. The code is pretty simple and is the following.
[iAsyncSocket writeData:bytesToSend withTimeout:3.0 tag:0];
Then I disable the Internet connection on my Mac and wait for write timeout to occur, but nothing happens. I don't get a disconnection with a GCDAsyncSocketWriteTimeoutError error as I should.
I have also validated that my server stops, as expected, receiving the messages after I turn off the Internet connection.
I have looked inside the source code and I have found out that the writeTimer, that is responsible for firing a write timeout event, is always cancelled (function endCurrentWrite is called). Tracing back to where the timer is cancelled, I ended up at the following line of code.
ssize_t result = write(socketFD, buffer, (size_t)bytesToWrite);
The write system call always returns the total number of bytes that I am sending, as if the socket had managed to send the data even though there is no Internet connection. Is this expected?
Has anyone run into the same problem or seen similar behaviour? Or has anyone managed to set a write timeout for a GCDAsyncSocket?
Thanks a lot.

Multiple NSURLConnection calls one after another?

OK, I have tried a lot of different methods for this. I want to call a JSON API, get its response, save the key values, and then call it again with different parameters. Most recently, I tried calling the URL method in a for loop, posting an NSNotification in connectionDidFinishLoading:, and NSLogging the values in the observer when the notification gets posted. But it only logs the values of the final call, multiple times.
I use this for the connection initialisation...
eventConnection=[[NSURLConnection alloc]initWithRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:urlString]] delegate:self startImmediately:NO];
[eventConnection scheduleInRunLoop: [NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[eventConnection start];
This is the delegate method...
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    NSNotification *eventNotification = [NSNotification notificationWithName:@"eventsFound" object:nil userInfo:responseDict];
    [[NSNotificationQueue defaultQueue] enqueueNotification:eventNotification postingStyle:NSPostNow coalesceMask:NSNotificationCoalescingOnName forModes:nil];
}
Any suggestions as to how I can achieve this?
To effectively chain multiple NSURLConnection requests, you should not start them all at once (e.g., like you say "in a for loop").
Instead, start the first request; then, from the connectionDidFinishLoading start the second; then again, when connectionDidFinishLoading is executed again, start the third and so on.
This is just a sketch of how you could do it to help you find a good design for your app; one possibility is:
define a state machine that is responsible for firing the requests depending on the current state (the state could be a simple integer representing the sequential step you have reached);
start the state machine: it will fire the first request;
when the request is done, connectionDidFinishLoading will signal the state machine, so that it moves to the next step;
at each step, the state machine fires the request corresponding to that step.
There are other design options possible, of course; which is the best will depend on your overall requirements and the kind of call flow you need.
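Here is a minimal sketch of that sequential chaining (the names parameterSets, currentStep, eventConnection and urlStringForParameters: are illustrative, not from the question); each connection is started only when the previous one has finished:
// Sketch only; call [self startNextRequest] once to kick off the chain.
- (void)startNextRequest {
    if (self.currentStep >= self.parameterSets.count) {
        return; // all requests have been sent
    }
    NSString *urlString = [self urlStringForParameters:self.parameterSets[self.currentStep]];
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
    self.eventConnection = [[NSURLConnection alloc] initWithRequest:request
                                                           delegate:self
                                                   startImmediately:NO];
    [self.eventConnection scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [self.eventConnection start];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    // save the key/values parsed from this response, then move to the next step
    self.currentStep++;
    [self startNextRequest];
}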
Set a counter and the total number of requests to send. In connectionDidFinishLoading:, check whether the counter has reached the total number of requests; if not, call the method in which you were sending the data again and increment the number of sent requests. If you take the data from a file, i.e. first write the data to a file with a number and then read it back when you want to send a request, that would work well.
Hmm, did you try using a library like AFNetworking? You can create asynchronous requests and use blocks to handle the responses.
You can use NSOperationQueue, blocks, or a thread for this: call the next function after the previous one finishes.
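For the NSOperationQueue route, a minimal sketch might look like the following (the urlStrings array is illustrative); a serial queue runs one synchronous request after another, off the main thread:
// Sketch only; urlStrings is an illustrative array of the URLs to call in order.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;   // serial queue: one request at a time, in order

for (NSString *urlString in urlStrings) {
    [queue addOperationWithBlock:^{
        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
        // parse the JSON in data and store the key/values here
    }];
}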

Why does the browser hang when I register a cleanup handler in mod_perl?

I'm using $r->pool->cleanup_register(\&cleanup); to run a subroutine after a page has been processed and printed to the client. My hope was that the client would see the complete page, and Apache could continue doing some processing in the background that takes a few seconds.
But the client browser hangs until the cleanup sub has returned. Is there a way to get Apache to finalise the connection with the client before all my code has returned?
I'm convinced I've done this before, but I can't find it again.
Use a job queue system and do the long operation completely asynchronously -- just schedule the operation during the web request. A job queue also handles peak load situations better than doing something expensive within the web server processes themselves.
You want to flush the buffer. It doesn't finalize the connection, but your client will see the output before the task completes.
sub handler {
    my $r = shift;
    $r->content_type('text/html');
    $r->rflush;                     # send the headers out
    $r->print(long_operation());
    return Apache2::Const::OK;
}