lexicographicMode in asyncio bulkCmd - pysnmp

While using the asynchronous pysnmp bulkCmd with asyncio, if the requested OID has many values (like 1.3.6.1.2.1.17.4.3.1.2, which shows the MAC addresses learned by a Cisco switch) or if several OIDs are used in one request, I have the problem that the total number of OIDs in the response is limited by the MTU/MSS of the network, which means that not all OIDs are received.
This problem can be controlled by using lexicographicMode with the synchronous bulkCmd, but the asynchronous bulkCmd generator doesn't have that option.
It is possible to use getNext instead, but that significantly reduces performance because it increases the total number of packets (one request/response per OID).
Is there a way to make sure that all "sub OIDs" are received in the response when using the asynchronous bulkCmd?

Could you use the maxRepetitions parameter to limit the number of response OIDs returned for each requested OID? It's 50 in this example.
I believe the lexicographicMode option is designed to stop walking the MIB once the initial prefix goes out of scope. So it has only an indirect influence over message size, which makes it unreliable for this purpose.
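The original example isn't reproduced above, but a minimal sketch of the same idea might look like the following, assuming the classic pysnmp hlapi.asyncio API (the host, community string, and Cisco OID are placeholders). Each awaited bulkCmd returns at most maxRepetitions var-binds (50 here), and the loop re-issues the request from the last OID received until the replies leave the requested prefix, which is the walk that lexicographicMode=False performs for you in the synchronous generator:

    # Rough sketch only: walk one subtree with repeated GETBULK requests
    # of at most 50 var-binds each (classic pysnmp hlapi.asyncio assumed;
    # host, community and OID are placeholders).
    import asyncio
    from pysnmp.hlapi.asyncio import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, bulkCmd,
    )

    async def bulk_walk(host, community, base_oid, max_repetitions=50):
        engine = SnmpEngine()
        results = []
        current = ObjectType(ObjectIdentity(base_oid))
        while True:
            (error_indication, error_status,
             error_index, var_binds) = await bulkCmd(
                engine,
                CommunityData(community),
                UdpTransportTarget((host, 161)),
                ContextData(),
                0, max_repetitions,      # nonRepeaters, maxRepetitions
                current,
                lookupMib=False)         # keep raw dotted OIDs for the prefix check
            if error_indication or error_status:
                break
            # Depending on the pysnmp version this is either a flat list of
            # var-binds or a table of rows; normalise to a flat list.
            if var_binds and isinstance(var_binds[0], (list, tuple)):
                var_binds = [vb for row in var_binds for vb in row]
            done = not var_binds
            last_oid = None
            for name, value in var_binds:
                if not str(name).startswith(base_oid + '.'):
                    done = True          # walked past the requested subtree
                    break
                results.append((name, value))
                last_oid = name
            if done:
                break
            # Re-issue the request starting after the last OID received.
            current = ObjectType(ObjectIdentity(str(last_oid)))
        return results

    # e.g. MAC addresses learned by the switch (placeholder target/community):
    # asyncio.run(bulk_walk('192.0.2.1', 'public', '1.3.6.1.2.1.17.4.3.1.2'))

This keeps the number of var-binds per response bounded while still covering the whole subtree, at the cost of one round trip per maxRepetitions rows.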

Do memory instructions pass through the load-store queue and issue queue in the microarchitecture

What is the difference between the issue queue and the LSQ (load-store queue) for memory instructions? Do memory instructions pass through both queues, or only through the LSQ?
If they pass through both queues, what is their order?
I'm assuming you're using ARM-like nomenclature here, so the issue queue is what Intel calls the RS (reservation station), and by "issue" you mean sending a uop that is ready for execution.
The answer is that memory instructions need to pass through both. All instructions need to be issued (except the ones that can be eliminated without execution, for example register moves, zero idioms, nops, etc.). Let's rephrase: all instructions that need to go through an ALU need to go through the issue process first. Memory instructions will simply use that step to calculate their addresses.
This is true for loads; for stores there is usually an internal split into store-address and store-data uops, so the store-address uop will behave like a load in that sense and calculate its address during that step.
There is usually a dedicated execution port and dedicated execution units for that, because the address calculation usually follows one of a few specific addressing modes (each architecture has a different set of these). Aside from that, the execution needs to follow the same rules as any other operation in the CPU: it needs to have its sources ready and read from the register file or bypassed from an in-flight operation, and it needs to be arbitrated when the execution port is free and prioritized by the same aging rules, so it makes sense that it uses the common path.
Once the memory operation has finished execution, it will be sent to the LSU (load-store unit, or the DCU, data-cache unit on Intel) and perform the actual memory access using the generated address. The LSU pipe will take care of the address translation, TLB lookups, the page walk if needed (though this is sometimes done in a dedicated unit), the address range and property checks, the cache lookup (if cacheable) and sending a miss to the next cache level or memory if needed. It may also trigger prefetches as part of the process.
For a load, when the LSU pipe has completed (which may require multiple passes and wakeups if the data was not available in the L1), the LSU will signal the issue queue again in order to wake up anything that depends on the result.
For a store, the store-address uop may fetch the line into the cache in advance as an optimization, but the actual next step usually happens only after retirement (since stores may not be dispatched to memory while speculative, unless you have some tricks to handle that).
It's also worth mentioning that some CPUs try to optimize loads that can get their data directly from prior stores instead of fetching it from the cache/memory. This can be done via store-to-load forwarding (very common) or memory renaming (less common). The former is usually handled by the LSU internally, but the latter can be done much earlier and without the LSU (though the LSU pipe is usually still activated to validate the result).

Cannot use sc_http_req_rate when the counter isn’t currently tracked

As per the documentation, sc_http_req_rate will return the rate of HTTP requests from the stick table, but only for the currently tracked counters.
From testing, this means that if you are incrementing your counter using http-response rather than http-request, the field is unavailable. It cannot be used for rate limiting, or sent on to the backends.
A down-to-earth consequence of this is that I cannot limit the number of bots generating too many 404 requests.
How can I load and use the stick table http rate during the request if it’s only tracked in the response?
Thank you :)

What does CoAP max transactions in Contiki mean?

I don't get whether max transactions refers to the client side or the server side of CoAP. For instance, if COAP_MAX_OPEN_TRANSACTIONS is 4, does it mean that a CoAP client can send 4 parallel requests to different servers, or that a CoAP server can process at most 4 requests in parallel?
From the code, I see that the client side initiates a blocking request, which does not allow looping to start another transaction.
So I need clarification here: if multiple CoAP transactions are possible from the client side, please mention how. Thank you.
According to the paper dunkels.com/adam/kovatsch11low-power.pdf, Section III-F, CoAP clients provide a blocking function call implemented with protothreads to issue a request. This linear programming model can also hide blockwise transfers, as it only continues once all data has been received. Based on this, I am guessing the client can generate one transaction at a time and blocks waiting for the ACK (or a timeout).
Here is the code reference: https://github.com/contiki-os/contiki/blob/master/apps/er-coap/er-coap-engine.c#L370.
In contrast, the server can respond to multiple transactions simultaneously, because there are transactions that wait for a response (from, say, sensors) and need to save state. This is my understanding of the question posted; if I am wrong, please correct me.
According to these links:
https://github.com/contiki-os/contiki/blob/bc2e445817aa546c0bb93a9900093ec276005e2a/apps/er-coap/er-coap-conf.h#L51
https://github.com/contiki-ng/contiki-ng/wiki/Documentation:-CoAP#configuration
I guess it's just the maximum number of confirmable requests (which have not yet received an ACK) that can be stored simultaneously for retransmission.
And it is used to reserve memory for that maximum number of requests:
https://github.com/contiki-os/contiki/blob/3f4436bac9a9f6da0df188372d4374693eab8a52/apps/er-coap/er-coap-transactions.c#L57
MEMB(transactions_memb, coap_transaction_t, COAP_MAX_OPEN_TRANSACTIONS);

Push data to client, how to handle slow clients?

In a push model, where server pushes data to clients, how does one handle clients with low or variable bandwidth?
For example, I receive data from a producer and send the data to my clients (push). What if one of my clients decides to download a Linux ISO? The available bandwidth to this client becomes too little to download my data.
Now when my producer produces data and the server pushes it to the clients, all clients have to wait until every client has downloaded the data. This is a problem when there are one or more slow clients with little bandwidth.
I can cache the data to be sent for every client, but because the data size is big this isn't really an option (lots of clients * data size = huge memory requirements).
How is this generally solved? No need for code, just a few thoughts/ideas are already more than welcome.
Now when my producer produces data and the server pushes it to the clients, all clients have to wait until every client has downloaded the data.
The above shouldn't be the case -- your clients should be able to download asynchronously from each other, with each client maintaining its own independent download state. That is, client A should never have to wait for client B to finish, and vice versa.
I can cache the data to be sent for every client, but because the data size is big this isn't really an option (lots of clients * data size = huge memory requirements).
As Warren said in his answer, this problem can be reduced by keeping only one copy of the data rather than one copy per client. Reference counting (e.g. via shared_ptr, if you are using C++, or something equivalent in another language) is an easy way to make sure that the shared data is deleted only when all clients are done downloading it. You can make the sharing more fine-grained, if necessary, by breaking up the data into chunks (e.g. instead of all clients holding a reference to a single 800MB Linux ISO, you could break it up into 800 1MB chunks, so that you can start removing the earlier chunks from memory as soon as all clients have downloaded them, instead of having to hold the entire 800MB of data in memory until every client has downloaded the entire thing).
Of course, that sort of optimization only gets you so far -- e.g. if two clients each request a different 800MB file, then you're liable to end up with 1.6GB of RAM usage for caching, unless you come up with a more clever solution.
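To make the chunking idea concrete, here is a rough sketch (the 1 MB chunk size and the bookkeeping names are invented for the example): the server keeps a single copy of each chunk together with the set of clients that still need it, and frees the chunk as soon as that set becomes empty.

    # Sketch of a shared, chunked cache: one copy of each chunk is kept,
    # plus the set of clients that still have to receive it. The chunk
    # size and the client-id type are arbitrary choices for the example.
    CHUNK_SIZE = 1024 * 1024  # 1 MB

    def split_into_chunks(data, chunk_size=CHUNK_SIZE):
        # Break a large payload (e.g. an 800 MB image) into 1 MB pieces.
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    class ChunkCache:
        def __init__(self):
            self._chunks = {}    # chunk_id -> bytes
            self._pending = {}   # chunk_id -> set of client ids still needing it

        def publish(self, chunk_id, data, client_ids):
            # Called once per chunk, when the producer hands us the data.
            self._chunks[chunk_id] = data
            self._pending[chunk_id] = set(client_ids)

        def get(self, chunk_id):
            return self._chunks[chunk_id]

        def mark_sent(self, chunk_id, client_id):
            # Called when a client has fully received this chunk.
            waiting = self._pending.get(chunk_id)
            if waiting is None:
                return
            waiting.discard(client_id)
            if not waiting:              # last client got it: free the memory
                del self._pending[chunk_id]
                del self._chunks[chunk_id]

In C++ the same behaviour falls out of handing each client handler a shared_ptr to the chunk; the explicit pending-set above just makes the "free it once everyone has it" rule visible.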
Here are some possible approaches you could try (from less complex to more complex). You could try any of these either separately or in combination:
Monitor how much each client's "backlog" is -- that is, keep a count of the amount of data you have cached waiting to send to that client. Also keep track of the number of bytes of cached data your server is currently holding; if that number gets too high, force-disconnect the client with the largest backlog, in order to free up memory. (this doesn't result in a good user experience for the client, of course; but if the client has a buggy or slow connection he was unlikely to have a good user experience anyway. It does keep your server from crashing or swapping itself to death because a single client has a bad connection)
Keep track of how much data your server has cached and waiting to send out. If the amount of data you have cached is too large (for some appropriate value of "too large"), temporarily stop reading from the socket(s) that are pushing the data out to you (or if you are generating your data internally, temporarily stop generating it). Once the amount of cached data gets down to an acceptable level again, you can resume receiving (or generating) more data to push.
(this may or may not be applicable to your use-case) Revise your data model so that instead of being communications-oriented, it becomes state-oriented. For example, if your goal is to update the clients' state to match the state of the data-source, and you can organize the data-source's state into a set of key/value pairs, then you can require that the data-source include a key with each piece of data it sends. Whenever a key/value pair is received from the data-source, simply place that key/value pair into a map (or hash table or some other key/value oriented data structure) for each client (again, use shared_ptrs or similar here to keep memory usage reasonable). Whenever a given client has drained its queue of outgoing TCP data, remove the oldest item from that client's key/value map, convert it into TCP bytes to send, and add them to the outgoing-TCP-data queue. Repeat as necessary. The advantage of this is that "obsolete" values for a given key are automatically dropped inside the server and therefore never need to be sent to the slow clients; rather, the slow clients will only ever get the "latest" value for that given key. The beneficial consequence is that a given client's maximum "backlog" is limited by the number of keys in the state model, regardless of how slow or intermittent the client's bandwidth is. Thus a slow client might see fewer updates (per second/minute/hour), but the updates it does see will still be as recent as possible given its bandwidth.
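A compact sketch of that state-oriented approach (the class and method names are invented for illustration): each client keeps only the newest unsent value per key, so its backlog can never grow beyond the number of distinct keys.

    # Sketch of the state-oriented model: per client, only the latest
    # value of each key is kept until it has been sent, so a slow client
    # is bounded by the number of keys, not by the update rate.
    from collections import OrderedDict

    class ClientState:
        def __init__(self):
            self._dirty = OrderedDict()   # key -> latest value not yet sent

        def on_update(self, key, value):
            # A newer value silently replaces an older, unsent one.
            self._dirty.pop(key, None)
            self._dirty[key] = value      # (re)insert at the back

        def next_to_send(self):
            # Called whenever this client's outgoing TCP queue has drained.
            if not self._dirty:
                return None
            return self._dirty.popitem(last=False)   # oldest dirty key first

Each update from the data-source is fanned out by calling on_update() on every client's state; the obsolete intermediate values are dropped right there and never consume bandwidth or queue space for the slow clients.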
Cache the data once only, and have each client handler keep track of where it is in the download, all using the same cache. Once all clients have all the data, the cached data can be deleted.

Implement a good performing "to-send" queue with TCP

In order not to flood the remote endpoint, my server app will have to implement a "to-send" queue of packets I wish to send.
I use Windows Winsock and I/O completion ports.
So, I know that when my code calls "socket->send(.....)", my custom "send()" function will check to see if data is already "on the wire" (towards that socket).
If data is indeed on the wire, it will simply queue the new data to be sent later.
If no data is on the wire, it will call WSASend() to really send the data.
So far everything is nice.
Now, the size of the data I'm going to send is unpredictable, so I break it into smaller chunks (say 64 bytes) in order not to waste memory for small packets, and queue/send these small chunks.
When a "write-done" completion status is given by IOCP regarding the packet I've sent, I send the next packet in the queue.
That's the problem: the speed is awfully low.
I'm actually getting speeds like 200 kb/s, and that's on a local connection (127.0.0.1).
So, I know I'll have to call WSASend() with several chunks (an array of WSABUF objects), and that will give much better performance, but how much should I send at once?
Is there a recommended size of bytes? I'm sure the answer is specific to my needs, yet I'm also sure there is some "general" point to start with.
Is there any other, better, way to do this?
Of course you only need to resort to providing your own queue if you are trying to send data faster than the peer can process it (either due to link speed or the speed at which the peer can read and process the data). Even then, you only need to resort to your own data queue if you want to control the amount of system resources being used. If you only have a few connections then it is likely that this is all unnecessary; if you have 1000s then it's something that you need to be concerned about. The main thing to realise here is that if you use ANY of the asynchronous network send APIs on Windows, managed or unmanaged, then you are handing control over the lifetime of your send buffers to the receiving application and the network. See here for more details.
And once you have decided that you DO need to bother with this, you still don't always need to bother: if the peer can process the data faster than you can produce it, then there's no need to slow things down by queuing on the sender. You'll see that you need to queue data when your write completions begin to take longer, as the overlapped writes that you issue cannot complete because the TCP stack is unable to send any more data due to flow control issues (see http://www.tcpipguide.com/free/t_TCPWindowSizeAdjustmentandFlowControl.htm). At this point you are potentially using an unconstrained amount of limited system resources (both non-paged pool memory and the number of memory pages that can be locked are limited, and (as far as I know) both are used by pending socket writes)...
Anyway, enough of that... I assume you already have achieved good throughput before you added your send queue? To achieve maximum performance you probably need to set the TCP window size to something larger than the default (see http://msdn.microsoft.com/en-us/library/ms819736.aspx) and post multiple overlapped writes on the connection.
Assuming you already HAVE good throughput, you need to allow a number of pending overlapped writes before you start queuing; this maximises the amount of data that is ready to be sent. Once you have your magic number of pending writes outstanding you can start to queue the data and then send it based on subsequent completions. Of course, as soon as you have ANY data queued, all further data must be queued. Make the number configurable and profile to see what works best as a trade-off between speed and resources used (i.e. the number of concurrent connections that you can maintain).
I tend to queue the whole data buffer that is due to be sent as a single entry in a queue of data buffers. Since you're using IOCP, it's likely that these data buffers are already reference counted to make it easy to release them when the completions occur and not before, so the queuing process is made simpler: you simply hold a reference to the send buffer whilst the data is in the queue and release it once you've issued the send.
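The shape of that per-connection logic, sketched here with the Winsock specifics abstracted away (issue_overlapped_write() below stands in for the WSASend() call, and the limit of 4 pending writes is just an example of the configurable "magic number"):

    # Per-connection sketch of "allow N overlapped writes, then queue".
    # issue_overlapped_write() stands in for WSASend(); 4 is an arbitrary,
    # configurable example value for the number of writes kept in flight.
    from collections import deque

    MAX_PENDING_WRITES = 4

    class Connection:
        def __init__(self, issue_overlapped_write):
            self._issue = issue_overlapped_write  # hands a buffer to the OS
            self._pending = 0                     # overlapped writes in flight
            self._queue = deque()                 # whole buffers awaiting a slot

        def send(self, buffer):
            # As soon as anything is queued, everything must be queued,
            # otherwise the data would go out of order.
            if self._queue or self._pending >= MAX_PENDING_WRITES:
                self._queue.append(buffer)
            else:
                self._pending += 1
                self._issue(buffer)

        def on_write_completed(self):
            # Called from the completion handler for this connection.
            self._pending -= 1
            while self._queue and self._pending < MAX_PENDING_WRITES:
                self._pending += 1
                self._issue(self._queue.popleft())

Each queue entry is a whole send buffer, matching the "queue the whole data buffer as a single entry" approach described above.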
Personally I wouldn't optimise by using scatter/gather writes with multiple WSABUFs until you have the base working and you know that doing so actually improves performance. I doubt that it will if you have enough data already pending; but as always, measure and you will know.
64 bytes is too small.
You may have already seen this but I wrote about the subject here: http://www.lenholgate.com/blog/2008/03/bug-in-timer-queue-code.html though it's possibly too vague for you.