NSStream Response Time - iPhone

My current requirement is to send some command to a set of IP addresses on some particular port and, based on the response, detect devices (say, for example, detecting a Wi-Fi printer on the network by pinging it on a particular port with a status command).
For this I am creating NSStreams, and everything is working peacefully with reading and writing data via NSInputStream/NSOutputStream.
The only problem is that it's taking too long for the response to come back when there's an error and it doesn't find the 'intended' device.
I am assuming the input stream must be waiting for the response, and times out after a certain interval if it doesn't get anything. So is there any way to control that 'time out' interval, so that this scanning process can be done in a few minutes rather than an hour?
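(For illustration only: the behaviour being asked about - an explicit, short deadline on each probe so a silent or absent host fails fast - sketched in Go rather than NSStream. The address, status command and timeout below are made-up placeholders.)
package main

import (
    "fmt"
    "net"
    "time"
)

// probe connects to one address, sends a command and waits for a reply,
// but never waits longer than the given timeout.
func probe(addr string, cmd []byte, timeout time.Duration) ([]byte, error) {
    conn, err := net.DialTimeout("tcp", addr, timeout) // bound the connect
    if err != nil {
        return nil, err
    }
    defer conn.Close()

    conn.SetDeadline(time.Now().Add(timeout)) // bound the write and the read
    if _, err := conn.Write(cmd); err != nil {
        return nil, err
    }
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        return nil, err
    }
    return buf[:n], nil
}

func main() {
    // Hypothetical status command and port; a real scan would loop over a range of addresses.
    resp, err := probe("192.168.1.50:9100", []byte("STATUS\r\n"), 2*time.Second)
    if err != nil {
        fmt.Println("no device:", err)
        return
    }
    fmt.Printf("got %q\n", resp)
}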

Related

Monitoring memcached flush with delay

In order not to overload our database server, we are trying to flush each server with a 60-second delay between them. I'm having a bit of an issue determining when a server was actually flushed when a delay is given.
I'm using BeITMemcached and calling the FlushAll with a 60 second delay and staggered set to true.
I've tried using the command line - telnet host port followed by stats - to see if the flush delay is working; however, when I look at cmd_flush, the value goes up instantly on all of the host/port combinations being flushed, without a delay. I've tried stats items and stats slabs but can't find information on what all the values represent and whether there is anything that shows that the data has been invalidated.
Is there another place I can look to determine when the server was actually flushed? Or does that value going up instantly mean that the delay isn't working as expected?
I found a roundabout way of testing this. Even though cmd_flush gets updated right away, the actual keys aren't invalidated until after the delay.
So I connected with telnet to the server/port I wanted to monitor, then used gets key to find a key with a value set. Once found, I ran the FlushAll with a delay between the first servers and this one and continued to monitor that key's value. After the delay was up, the key started to return no value.
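For anyone who wants to script that check instead of watching it by hand in telnet, here is a rough sketch of the same monitoring loop in Go, speaking memcached's text protocol directly; the host, port and key name are placeholders.
package main

import (
    "bufio"
    "fmt"
    "net"
    "strings"
    "time"
)

// keyPresent issues "get <key>" and reports whether a VALUE line came back.
// This is a naive line-based parse, good enough for simple string values.
func keyPresent(rw *bufio.ReadWriter, key string) (bool, error) {
    fmt.Fprintf(rw, "get %s\r\n", key)
    rw.Flush()
    present := false
    for {
        line, err := rw.ReadString('\n')
        if err != nil {
            return false, err
        }
        line = strings.TrimRight(line, "\r\n")
        if strings.HasPrefix(line, "VALUE ") {
            present = true
        }
        if line == "END" {
            return present, nil
        }
    }
}

func main() {
    conn, err := net.Dial("tcp", "10.0.0.5:11211") // the memcached node being watched
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    rw := bufio.NewReadWriter(bufio.NewReader(conn), bufio.NewWriter(conn))

    for {
        ok, err := keyPresent(rw, "somekey")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s key present: %v\n", time.Now().Format(time.RFC3339), ok)
        if !ok {
            return // the delayed flush has taken effect on this node
        }
        time.Sleep(1 * time.Second)
    }
}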

Socket data read wait time

I have an application where I am listening on multiple sockets using select. If I start processing a request that came in from socket A, and in the meantime another request arrives on socket B, I want to know how long the socket B request had to wait before I could get to it. Since this is a single-threaded application, I cannot spawn a new thread and go back to select to keep monitoring and instantly start processing the request from socket B.
Is there a C API available to get this metric, or is this just not possible?
There is no straightforward way to measure the interval between the 'data ready' time and the 'data read' time, because no timestamp is stored together with the data. Moreover, the situation is even more complex because a stream-oriented socket may receive several data segments before the data is finally read, so it is not even clear which interval should be measured.
If the application's data processing takes longer than packet processing in the kernel, you can get a reasonable measurement in the following way:
Print the current time and some unique data id (based on the application protocol) when select wakes up due to data availability on socket B.
Log every packet received for socket B. You can use a network traffic capture tool like Wireshark or tcpdump, or you can configure an iptables firewall rule (if it is running on Linux) with the target -j LOG.
Write a simple script/program that correlates the captured packets with the application log and subtracts the received time from the start-of-processing time (a rough sketch of such a script follows below).
Of course, the idea above ignores kernel processing time. If you really need exact timing, you will have to introduce a new thread into your application.
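A minimal sketch of such a correlation script, written in Go and assuming (purely for illustration) that both logs contain lines of the form "<data-id> <unix-timestamp>"; the file names and log format are made up.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

// load reads "<id> <timestamp>" lines into a map, skipping anything malformed.
func load(path string) (map[string]float64, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()
    m := make(map[string]float64)
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) < 2 {
            continue
        }
        t, err := strconv.ParseFloat(fields[1], 64)
        if err != nil {
            continue
        }
        m[fields[0]] = t
    }
    return m, sc.Err()
}

func main() {
    captured, _ := load("capture.log") // when the packet arrived on the wire
    processed, _ := load("app.log")    // when select woke up / processing started
    for id, recv := range captured {
        if start, ok := processed[id]; ok {
            fmt.Printf("%s waited %.6f s\n", id, start-recv)
        }
    }
}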

SSE Server Sent Events - Client keep sending requests (like polling)

How come every site explains that in SSE a single connection stays open between client and server: "With SSE, a client sends a standard HTTP request asking for an event stream, and the server responds initially with a standard HTTP response and holds the connection open",
and then, whenever the server decides, it can send data to the client - yet when I try to implement SSE, I see in Fiddler requests being sent every couple of seconds?
To me it feels like long polling and not a single connection kept open.
Moreover, it is not that the server decides to send data to the client and sends it; it sends data only when the client sends the next request.
If I respond with "retry: 10000" then, even though something has happened that the server wants to report right now, it will get to the client only on the next request (10 seconds from now), which to me does not really look like a connection that is kept open with the server sending data as soon as it wants to.
Your server is closing the connection immediately. SSE has a built-in retry function for when the connection is lost, so what you are seeing is:
Client connects to server
Server mysteriously dies
Client waits two seconds then auto-reconnects
Server mysteriously dies
Client waits two seconds then auto-reconnects
...
To fix the server-side script, you want to go against everything your parents taught you about right and wrong, and deliberately create an infinite loop. So, it will end up looking something like this:
validate user, set up database connection, etc.
while(true){
    get next bit of data
    send it to client
    flush
    sleep 2 seconds
}
Where "get next bit of data" might be polling a DB table for new records since the last poll, or scanning a file system directory for new files, etc.
Alternatively, if the server-side process is a long-running data analysis, your script might instead look like this:
validate user, set-up, etc.
while(true){
    calculate next 1000 digits of pi
    send them to client
    flush
}
This assumes that the calculate line takes at least half a second to run; any more frequently and you will start to clog up the socket with lots of small packets of data for no benefit (the user won't notice that they are getting 10 updates/second instead of 2 updates/second).
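For a concrete version of the loop above, here is a minimal sketch of such an endpoint using Go's net/http; the path, payload and 2-second interval are arbitrary choices, not requirements.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func events(w http.ResponseWriter, r *http.Request) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }
    // Standard SSE headers: one response that is held open, never buffered away.
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    w.Header().Set("Connection", "keep-alive")

    for i := 0; ; i++ { // the deliberate infinite loop from the answer above
        // "get next bit of data" - here just a counter as a stand-in.
        fmt.Fprintf(w, "data: tick %d\n\n", i)
        flusher.Flush() // push it to the client immediately
        select {
        case <-r.Context().Done(): // client went away; stop the loop
            return
        case <-time.After(2 * time.Second):
        }
    }
}

func main() {
    http.HandleFunc("/events", events)
    http.ListenAndServe(":8080", nil)
}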

Golang tcp socket read gives EOF eventually

I have a problem reading from a socket. There is an Asterisk instance running with plenty of calls (10-60 a minute), and I'm trying to read and process CDR events related to those calls (connected to AMI).
Here is the library I'm using (not mine, but I was pushed to fork it because of bugs): https://github.com/warik/gami
It's pretty straightforward; the main action happens in gami.go - readDispatcher.
buf := make([]byte, _READ_BUF) // read buffer
for {
    rc, err := (*a.conn).Read(buf)
So there is a TCPConn (a.conn) and a buffer of size 1024 into which I'm reading messages from the socket. So far so good, but eventually, from time to time (anywhere from 10 minutes to 5 hours, independent of the amount of data coming through the socket), the Read operation fails with an io.EOF error. I tried to reconnect and re-login immediately, but that's also impossible - the connection times out, so I was forced to wait about 40-60 seconds, and that time is very costly to me; I'm losing a lot of data because of the delay. I've been googling, reading sources and trying a lot of stuff - nothing. The strangest thing is that a simple socket opened in Python or PHP does not fail.
Is it possible the problem is a lack of file descriptors to represent the socket on my machine or on the Asterisk server?
Is it possible the problem is in the Asterisk configuration (I have another Asterisk on which this problem doesn't reproduce, but that one also handles far fewer calls)?
Is it possible the problem is in the way I deal with the socket connection, or with Go in general?
go version go1.2.1 linux/amd64
asterisk 1.8
Update to the latest Asterisk. There was a bug like that when AMI sends a lot of data.
To check for the issue, send a command with long output via AMI, such as "COMMAND sip show peers", and look at the result.
OK, the problem was an OS socket buffer overflow. As it turned out, there was too much data to handle.
So, there are three possible ways to fix this:
increase the socket buffer size
somehow increase the speed of the process that reads data from the socket
lower the data volume or frequency
The thing is that gami by default reads all data from Asterisk, and I was reading everything and filtering it only after the actual read operation. Since the AMI-listening application was running on a pretty weak PC, it simply could not read all the data before the buffer capacity was exceeded. But it is possible to receive only particular events, by sending the "Events" action to AMI and specifying the desired "EventMask".
So my decision was to do that, and to create different connections for different event types.
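As a rough sketch of that last step - without guessing at gami's own API - this is what narrowing the event mask looks like at the raw AMI protocol level in Go. The host, credentials and the EventMask value are placeholders, and the exact class names accepted depend on the Asterisk version.
package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "pbx.example.com:5038") // AMI port
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Log in, then ask Asterisk to send only CDR-class events instead of everything.
    fmt.Fprint(conn, "Action: Login\r\nUsername: monitor\r\nSecret: secret\r\n\r\n")
    fmt.Fprint(conn, "Action: Events\r\nEventMask: cdr\r\n\r\n")

    sc := bufio.NewScanner(conn)
    for sc.Scan() {
        fmt.Println(sc.Text()) // each AMI message is a block of "Key: Value" lines ending in a blank line
    }
}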

QuickFix Sequence Reset not working

I am working on QuickFIX/J (FIX 4.2) to submit orders to an acceptor FIX engine. Basically I need help on two counts:
When I first try to establish a connection with the acceptor, the acceptor rejects the initial Logon requests saying "Msg Seq No too Low". After this my initiator keeps incrementing the outgoing sequence number by one, and when this seq no. matches the no. expected by the acceptor engine, I get a stable connection. To speed up this process, I began to extract the expected seq no. from the reject message sent by the acceptor engine and changed the outgoing sequence no. for my engine using
session.setNextTargetMsgSeqNum(expectedSeqNo).
However, later on, if my engine finds the incoming sequence no. higher than expected, it sends a Resend Request. In response, the other party sends back a Sequence Reset msg (35=4, 123=Y). After receiving this msg, the incoming seq no. for my engine should automatically be set to the one received in the Sequence Reset msg. But this does not happen, and my engine keeps sending resend requests with no change in the incoming seq no.
The interesting thing is, I found this to work when I don't explicitly change the outgoing seq no. in the first place (using setNextTargetMsgSeqNum).
Why is my engine not showing the expected behavior when it gets a Sequence Reset msg?
I have talked to the other party and they won't have ResetOnLogon=Y in their configuration. So every time my engine comes up, it often sends a Logon request with a seq no. lower than expected (starting from 1). Is there a better way to get the connection set up quickly? For example, can I somehow make my engine resume the sequence no. from the point just before it went down? What should be the ideal approach?
So I am now persisting the messages in a file, which is taking care of sequence numbers. However, what is troubling again is that my QuickFIX/J initiator engine is not responding to Sequence Reset messages. There are no admin callbacks at all now.
I notice that the lack of response to the Sequence Reset message almost always happens when I connect to the acceptor from one server, then close that session and connect to the acceptor from a different server using the same session id. Once the logon is accepted, I expect things to work fine. However, while the other engine sends a Sequence Reset to a particular number (a gap fill, basically), my FIX engine does not respond to it; it does not reset its expected sequence number and keeps sending resend requests to the acceptor. Any help will be greatly appreciated!
For normal FIX session usage, you configure the session start and end times and let the engine manage the sequence numbers. For example, if your session is active from 8:00 AM to 4:30 PM then QuickFIX/J will automatically reset the outgoing and incoming sequence number to 1 the first time the engine is started after 8:00 AM (or at 8:00 AM if the engine is already started at that time).
(Question #1). You are correct that your engine should use the new incoming sequence number after the Sequence Reset. Given that this works properly for thousands of QuickFIX/J users, think about what you might be doing that would change that behavior. For example, do you have an admin message callback, and might it be throwing exceptions? Have you looked at your log files to see if there are any hints there?
(Question #2). If you are using a persistent MessageStore (FileStore, JdbcStore, etc.) then your outgoing sequence number will be available when you restart.
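As a sketch of what that can look like, here is a minimal QuickFIX/J initiator settings file with a file-backed MessageStore (so sequence numbers survive a restart) and explicit session times (so the engine resets them once per day by itself); every value below is a placeholder to adapt to your counterparty.
# Sketch only - hosts, CompIDs, times and paths are placeholders.
[default]
ConnectionType=initiator
BeginString=FIX.4.2
HeartBtInt=30
StartTime=08:00:00
EndTime=16:30:00
FileStorePath=data/fixstore
FileLogPath=log/fix
UseDataDictionary=Y
DataDictionary=FIX42.xml

[session]
SenderCompID=MYFIRM
TargetCompID=BROKER
SocketConnectHost=fix.example.com
SocketConnectPort=9876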