I would like to know how to manage read and write buffers in a many clients / one server architecture.
My server is asynchronous.
My exact questions are:
Do I need one read buffer per connected client, or one global buffer for the whole server? Why? (For example, to avoid clients' data clobbering each other if multiple clients write to the server simultaneously.)
Same question for the write buffer.
If my server wasn't asynchronous, would it change any of the answers?
Thank you in advance for your responses,
Flo
Do I need one read buffer per connected client, or one global buffer for the whole server?
One per client, because you will get partial reads, and you need to keep reading to assemble entire messages: you want all that data in one place, adjacent.
Same question for the write buffer.
One per client, because you can get short writes.
If my server wasn't asynchronous, would it change any of the answers?
No.
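To make "one per client" concrete, here is a minimal sketch in Go (chosen just to keep the example short), assuming newline-delimited messages; the port and message format are illustrative, not from the question:

package main

import (
    "bufio"
    "log"
    "net"
)

// handle gives each client its own buffered reader, so partial reads from
// one connection never mix with another client's bytes.
func handle(conn net.Conn) {
    defer conn.Close()
    r := bufio.NewReader(conn) // the per-client read buffer
    for {
        msg, err := r.ReadString('\n') // keep reading until a whole message arrives
        if err != nil {
            return // connection closed or failed
        }
        log.Printf("%s sent: %q", conn.RemoteAddr(), msg)
    }
}

func main() {
    ln, err := net.Listen("tcp", ":9000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go handle(conn) // one reader (and buffer) per client
    }
}

The same reasoning applies whether the server is asynchronous or threaded: the buffer belongs to the connection, not to the server.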
I didn't get whether the max transactions setting refers to the client side or the server side of CoAP. For instance, if COAP_MAX_OPEN_TRANSACTIONS is 4, does that mean a CoAP client can send 4 parallel requests to different servers, or that a CoAP server can process at most 4 requests in parallel?
From the code, I see that the client side issues a blocking request, which does not allow looping to start another transaction.
So I need clarification here. If multiple CoAP transactions are possible from the client side, please explain how. Thank you.
According to the paper dunkels.com/adam/kovatsch11low-power.pdf,
Section III-F: CoAP clients provide a blocking function call, implemented with protothreads, to issue a request. This linear programming model can also hide blockwise transfers, since it only continues once all data have been received. Based on this, I am guessing the client can generate one transaction at a time and blocks waiting for the ACK (or a timeout).
Here is a code reference: https://github.com/contiki-os/contiki/blob/master/apps/er-coap/er-coap-engine.c#L370.
In contrast, the server can respond to multiple transactions simultaneously, because some transactions wait for a response (from, say, sensors) and need to save state. This is my understanding of the question posted; if I am wrong, please correct me.
According to these links:
https://github.com/contiki-os/contiki/blob/bc2e445817aa546c0bb93a9900093ec276005e2a/apps/er-coap/er-coap-conf.h#L51
https://github.com/contiki-ng/contiki-ng/wiki/Documentation:-CoAP#configuration
I guess it's just the maximum number of confirmable requests (ones that have not yet received an ACK) that can be stored simultaneously for retransmission.
And it is used to reserve memory for that maximum number of requests:
https://github.com/contiki-os/contiki/blob/3f4436bac9a9f6da0df188372d4374693eab8a52/apps/er-coap/er-coap-transactions.c#L57
MEMB(transactions_memb, coap_transaction_t, COAP_MAX_OPEN_TRANSACTIONS);
I have a problem reading from a socket. There is an Asterisk instance running with plenty of calls (10-60 per minute), and I'm trying to read and process CDR events related to those calls (connected via AMI).
Here is the library I'm using (not mine, but I was pushed to fork it because of bugs): https://github.com/warik/gami
It's pretty straightforward; the main action happens in gami.go, in readDispatcher.
buf := make([]byte, _READ_BUF) // fixed-size read buffer (_READ_BUF is 1024)
for {
    // rc is the number of bytes actually read; err is io.EOF once the
    // server closes the connection
    rc, err := (*a.conn).Read(buf)
So, there is a TCPConn (a.conn) and a buffer of size 1024 into which I'm reading messages from the socket. So far so good, but from time to time (anywhere from 10 minutes to 5 hours, independently of the amount of data coming through the socket) the Read operation fails with an io.EOF error. I tried to reconnect and re-login immediately, but that is also impossible: the connection times out, so I was forced to wait about 40-60 seconds, and this delay is crucial to me; I'm losing a lot of data because of it. I've been googling, reading sources, and trying a lot of things, with no luck. The strangest thing is that a simple socket opened in Python or PHP does not fail.
Is it possible that the problem is a lack of file descriptors to represent the socket, on my machine or on the Asterisk server?
Is it possible that the problem is in the Asterisk configuration? (I have another Asterisk instance on which this problem doesn't reproduce, but it also handles far fewer calls.)
Is it possible that the problem is in the way I deal with the socket connection, or with Go in general?
go version go1.2.1 linux/amd64
asterisk 1.8
Update to the latest Asterisk. There was a bug like that when AMI sends a lot of data.
To check for the issue, send a command with long output via AMI, such as "COMMAND sip show peers" (or any other long-output command), and look at the result.
OK, the problem was an OS socket buffer overflow; as it turned out, there was too much data to handle.
So, there are three possible ways to fix this:
increase the socket buffer size
somehow increase the speed of the process that reads data from the socket
lower the data volume or frequency
The thing is that gami by default reads all data from Asterisk, and I was reading all of it and filtering it only after the actual read operation. Since the AMI listening application was running on a pretty weak PC, it simply could not read all the data before the buffer capacity was exhausted. But it is possible to receive only particular events, by sending the "Events" action to AMI and specifying the desired "EventMask".
So my decision was to do exactly that, and to create separate connections for different event types.
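For illustration, here is a minimal sketch of restricting a connection to particular events, writing raw AMI actions instead of going through gami; the host, credentials, and the "cdr" event class are assumptions to check against your Asterisk version:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Hypothetical AMI endpoint and credentials.
    conn, err := net.Dial("tcp", "asterisk.example.com:5038")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    // Log in, then ask AMI to deliver only the event classes we care about.
    fmt.Fprint(conn, "Action: Login\r\nUsername: admin\r\nSecret: secret\r\n\r\n")
    fmt.Fprint(conn, "Action: Events\r\nEventMask: cdr\r\n\r\n")
    // ... read responses and CDR events from conn as before ...
}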
I have a setup with two Erlang nodes on the same physical machine, and I want to be able to send large files between the nodes.
From the symptoms I see, it looks like there is only one TCP connection between the nodes, and sending a large binary across it stops all other traffic. Is this the case?
And, even more interesting: is there a way of making the VM use several connections between the nodes?
Yeah, you only get 1 connection, according to the manual
The handshake will continue, but A is informed that B has another
ongoing connection attempt that will be shut down (simultaneous
connect where A's name is greater than B's name, compared literally).
Not sure what "big" means in the question, but generally speaking (and IMHO), it might be good to set up a separate TCP port to handle the payloads, and use the standard Erlang messages only as a signaling method (to negotiate ports, set up a listener, etc.), e.g. advising that there's a new incoming payload and negotiating anything needed; see the sketch below.
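To make the idea concrete, here is a rough sketch of the pattern in Go (only because this thread has no Erlang code to extend); the addresses and file name are made up. The payload gets its own TCP connection, and only the small "which port" message would travel over the signaling channel:

package main

import (
    "fmt"
    "io"
    "net"
    "os"
)

func main() {
    // Receiver side: listen on an ephemeral port reserved for bulk payloads.
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        panic(err)
    }
    port := ln.Addr().(*net.TCPAddr).Port
    // In the Erlang setup, this port number is the small signaling message
    // you would send between nodes; the payload itself bypasses distribution.
    fmt.Println("payload port:", port)

    done := make(chan struct{})
    go func() {
        defer close(done)
        conn, err := ln.Accept()
        if err != nil {
            return
        }
        defer conn.Close()
        io.Copy(io.Discard, conn) // consume the payload
    }()

    // Sender side: stream the large file over the dedicated connection,
    // leaving the signaling channel free for other traffic.
    out, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", port))
    if err != nil {
        panic(err)
    }
    f, err := os.Open("big_file.bin") // hypothetical large payload
    if err != nil {
        panic(err)
    }
    io.Copy(out, f)
    f.Close()
    out.Close()
    <-done // wait until the receiver has drained everything
}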
BTW, there's an interesting thread on the same subject, and you might try tuning the net_* variables to see if they help with the issues.
hope it helps!
It is not recommended to send large messages between Erlang nodes:
http://learnyousomeerlang.com/distribunomicon
Refer to the "bandwidth is infinite" section. I would recommend using something else, like GFS, so that you don't lose the distribution feature of Erlang.
In order not to flood the remote endpoint, my server app will have to implement a "to-send" queue of packets I wish to send.
I use Windows Winsock, I/O Completion Ports.
So, I know that when my code calls "socket->send(.....)", my custom "send()" function will check whether data is already "on the wire" (towards that socket).
If data is indeed on the wire, it will simply queue the new data to be sent later.
If no data is on the wire, it will call WSASend() to actually send the data.
So far everything is nice.
Now, the size of the data I'm going to send is unpredictable, so I break it into smaller chunks (say 64 bytes) in order not to waste memory for small packets, and queue/send these small chunks.
When a "write-done" completion status is given by IOCP regarding the packet I've sent, I send the next packet in the queue.
That's the problem: the speed is awfully low. I'm actually getting speeds like 200 KB/s, and that's on a local connection (127.0.0.1).
So, I know I'll have to call WSASend() with several chunks at once (an array of WSABUF objects), and that will give much better performance, but how much should I send at once?
Is there a recommended size of bytes? I'm sure the answer is specific to my needs, yet I'm also sure there is some "general" point to start with.
Is there any other, better, way to do this?
Of course you only need to resort to providing your own queue if you are trying to send data faster than the peer can process it (either due to link speed or the speed that the peer can read and process the data). Even then, you only need to resort to your own data queue if you want to control the amount of system resources being used. If you only have a few connections then it is likely that this is all unnecessary; if you have 1000s then it's something that you need to be concerned about. The main thing to realise here is that if you use ANY of the asynchronous network send APIs on Windows, managed or unmanaged, then you are handing control over the lifetime of your send buffers to the receiving application and the network. See here for more details.
And once you have decided that you DO need to bother with this, you then don't always need to bother: if the peer can process the data faster than you can produce it then there's no need to slow things down by queuing on the sender. You'll see that you need to queue data because your write completions will begin to take longer, as the overlapped writes that you issue cannot complete while the TCP stack is unable to send any more data due to flow control issues (see http://www.tcpipguide.com/free/t_TCPWindowSizeAdjustmentandFlowControl.htm). At this point you are potentially using an unconstrained amount of limited system resources; both non-paged pool memory and the number of memory pages that can be locked are limited and (as far as I know) both are used by pending socket writes...
Anyway, enough of that... I assume you already have achieved good throughput before you added your send queue? To achieve maximum performance you probably need to set the TCP window size to something larger than the default (see http://msdn.microsoft.com/en-us/library/ms819736.aspx) and post multiple overlapped writes on the connection.
Assuming you already HAVE good throughput, you need to allow a number of pending overlapped writes before you start queuing; this maximises the amount of data that is ready to be sent. Once you have your magic number of pending writes outstanding you can start to queue the data and then send it based on subsequent completions. Of course, as soon as you have ANY data queued, all further data must be queued. Make the number configurable and profile to see what works best as a trade-off between speed and resources used (i.e. the number of concurrent connections that you can maintain).
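The shape of that queuing, sketched in Go rather than Winsock (a channel stands in for the completion-driven queue, and its capacity is the configurable "magic number" above; names are illustrative):

package sendq

import (
    "log"
    "net"
)

// NewSendQueue drains queued buffers, in order, on a single goroutine.
// The channel capacity plays the role of the number of pending writes:
// once it is full, producers block, which is the back-pressure signal
// that the peer (or the stack) cannot keep up.
func NewSendQueue(conn net.Conn, capacity int) chan<- []byte {
    q := make(chan []byte, capacity)
    go func() {
        for buf := range q {
            // Each queued entry is a whole message buffer, written in one
            // call; conn.Write does not return until all bytes are accepted.
            if _, err := conn.Write(buf); err != nil {
                log.Println("send failed:", err)
                return
            }
        }
    }()
    return q
}

Producers then just do q <- data; as suggested above, make the capacity configurable and profile.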
I tend to queue the whole data buffer that is due to be sent as a single entry in a queue of data buffers. Since you're using IOCP, it's likely that these data buffers are already reference counted, to make it easy to release them when the completions occur and not before, so the queuing process is simple: you hold a reference to the send buffer whilst the data is in the queue and release it once you've issued the send.
Personally I wouldn't optimise by using scatter/gather writes with multiple WSABUFs until you have the base working and you know that doing so actually improves performance, I doubt that it will if you have enough data already pending; but as always, measure and you will know.
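For reference, the scatter/gather idea looks like this in Go, where net.Buffers is the writev analogue of an array of WSABUFs; the endpoint and framing here are invented:

package main

import (
    "log"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:9000") // illustrative endpoint
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    header := []byte{0x00, 0x04} // e.g. a 2-byte length prefix
    payload := []byte("ping")    // the message body
    // net.Buffers performs a vectored write where the platform supports it,
    // sending several buffers in one call, like WSASend with multiple WSABUFs.
    bufs := net.Buffers{header, payload}
    if _, err := bufs.WriteTo(conn); err != nil {
        log.Fatal(err)
    }
}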
64 bytes is too small.
You may have already seen this but I wrote about the subject here: http://www.lenholgate.com/blog/2008/03/bug-in-timer-queue-code.html though it's possibly too vague for you.
I'm using Perl sockets on AIX 5.3, Perl version 5.8.2.
I have a server written with Perl sockets. There is an option called "Blocking", which can be set to 0 or 1. When I use Blocking => 0, run the server, and have the client send data (5000 bytes), I receive only 2902 bytes in one call. When I use Blocking => 1, I receive all the bytes in one call.
Is this how sockets work or is it a bug?
This is a fundamental part of sockets - or rather, TCP, which is stream-oriented. (UDP is packet-oriented.)
You should never assume that you'll get back as much data as you ask for, nor that there isn't more data available. Basically, more data can come at any time while the connection is open. (The read/recv/whatever call will probably return a specific value to mean "the other end closed the connection".)
This means you have to design your protocol to handle this - if you're effectively trying to pass discrete messages from A to B, two common ways of doing this are:
Prefix each message with a length. The reader first reads the length, then keeps reading the data until it has read as much as it needs (see the sketch after this list).
Have some sort of message terminator/delimiter. This is trickier, as depending on what you're doing you may need to be aware of the possibility of reading the start of the next message while you're reading the first one. It also means "understanding" the data itself in the "reading" code, rather than just reading bytes arbitrarily. However, it does mean that the sender doesn't need to know how long the message is before starting to send.
(The other alternative is to have just one message for the whole connection - i.e. you read until the connection is closed.)
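A minimal sketch of the length-prefix approach in Go; the 4-byte big-endian header is an assumption, so use whatever your protocol actually defines:

package framing

import (
    "encoding/binary"
    "io"
)

// ReadMessage keeps reading until one whole message has arrived, no matter
// how the bytes were split across TCP segments.
func ReadMessage(r io.Reader) ([]byte, error) {
    var header [4]byte
    if _, err := io.ReadFull(r, header[:]); err != nil {
        return nil, err // io.EOF here means the peer closed the connection
    }
    size := binary.BigEndian.Uint32(header[:])
    msg := make([]byte, size)
    if _, err := io.ReadFull(r, msg); err != nil {
        return nil, err
    }
    return msg, nil
}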
Blocking means that the socket waits until there is data available before returning from a receive function. It's entirely possible there's a tiny wait at the end as well, to try to fill the buffer before returning, or it could just be a timing issue. It's also entirely possible that the non-blocking implementation returns one packet at a time, whether or not more are available. In short, no, it's not a bug, but the specific "why" of it is the old cop-out: "it's implementation specific".