Is there any limit on the maximum number of items in an ejabberd roster list? If so, where can I configure it?
By default, there is no such limit in the ejabberd Community edition.
Consider a brokerage platform where customers can see all the securities they hold. There is an API that supports the brokerage frontend by providing all the securities data for a given customer ID.
This API is certified for a throughput of 6,000 TPS.
The SLA for the API is 500 ms, and its 99th-percentile response time is 200 ms.
The API is hosted on AWS, where the servers are autoscaled.
I have a few questions:
At peak traffic, the number of users on the frontend is in the hundreds of thousands, if not a million. How does the TPS relate to the number of users?
My understanding is that TPS is the number of transactions the API can process in a second. So is the 6,000 figure for one server, or for one thread in a server? And if we add 5 servers, does the cumulative TPS become 30,000?
How do we decide the configuration (cores, RAM) of each server if we know the TPS?
And finally, is the number of servers basically [total number of users / TPS]?
I tried reading about this in a couple of books and a few blogs, but I wasn't able to clear up my confusion entirely.
Kindly correct me if I am asking the wrong questions or if the questions don't make sense.
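One way to untangle these questions is to work through the arithmetic. Users relate to TPS through their individual request rate (think time), and the number of servers comes from dividing peak TPS by per-server TPS, not users by TPS. A minimal sketch, where every number (the 30-second refresh interval, the 600,000 concurrent users, the 25% headroom, and treating 6,000 TPS as a per-server figure) is an assumption for illustration, not a fact from the question:

```python
import math

# Each frontend user does not issue a request every second; if a user
# refreshes their holdings once every 30 s on average, their request rate is:
requests_per_user_per_sec = 1 / 30

# 600,000 concurrent users would then generate this peak demand:
peak_users = 600_000
peak_tps = peak_users * requests_per_user_per_sec  # 20,000 TPS

# If a single server is load-tested at 6,000 TPS, the fleet size is peak
# demand divided by per-server throughput, rounded up, plus some headroom:
per_server_tps = 6_000
headroom = 1.25  # 25% buffer for spikes and instance failures
servers_needed = math.ceil(peak_tps * headroom / per_server_tps)

print(peak_tps)        # 20000.0
print(servers_needed)  # 5
```

Note that whether a certified TPS figure applies to one server or to the whole deployment depends entirely on how the load test was run; TPS does scale roughly linearly with identical servers only while the downstream dependencies (database, network) are not the bottleneck.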
MongoDB Atlas has a rate limit of 100 requests per minute per project. If I host MongoDB on my own server, will it have a rate limit, or can I make unlimited read/write calls per minute to the database (relative, obviously, to the server's specifications)? I am using Node.js and making calls with Mongoose.
No, when you install MongoDB on your own premises, there is no such rate limit. Of course, the system has some MongoDB Limits and Thresholds, but I don't think you will hit any of them.
I would like to list all of my subnodes, say: ls /mynode
Unfortunately, the above command doesn't work when we have a massive number of subnodes. The reason is the response buffer limit: even if we raise it via jute.maxbuffer, we can hit the new limit as well.
So what can we do if we want to list all of our nodes?
1. Does ZooKeeper support paging? No.
2. Does ZooKeeper support wildcards? No.
3. Does ZooKeeper support filtering? No.
What is the solution?
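One common workaround (an assumption on my part, not a ZooKeeper feature) is to shard children into a fixed number of bucket znodes, so that no single getChildren() response can outgrow jute.maxbuffer: readers then list one bucket at a time. A minimal sketch of the path-mapping logic, with purely illustrative paths:

```python
import hashlib

NUM_BUCKETS = 256  # fixed fan-out; each bucket holds ~1/256 of the children

def bucket_path(root: str, child: str) -> str:
    """Map a child name to a deterministic bucket znode under the root."""
    digest = hashlib.sha1(child.encode()).hexdigest()
    bucket = int(digest, 16) % NUM_BUCKETS
    return f"{root}/{bucket:02x}/{child}"

# Writers create nodes at the bucketed path instead of directly under the
# root; readers iterate buckets 00..ff and list each one separately.
print(bucket_path("/mynode", "session-12345"))
```

The trade-off is that listing everything now takes NUM_BUCKETS round trips, and existing flat data would have to be migrated into the bucket layout.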
I am trying to read records from the source based on a total maximum number of records to process, which should be given by the user.
E.g.: total records in the source table: 1 million.
Total max records to process: 100K.
I need to process only those 100K records from the source.
I have gone through the JDBC IO library classes to check whether there is any option to implement this. There is an option to set the batch size, but I have found none for capping the total.
PS: I want to implement this at the IO level, not by adding a limit to the query.
I was able to do it using setMaxRows, by turning off auto-commit for the JDBC IO.
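The idea behind setMaxRows is that the cap lives in the client/IO layer while the SQL stays untouched. A conceptual analogue in Python with sqlite3 (sqlite3 has no setMaxRows, so the sketch caps rows by simply stopping the read early; table and column names are made up):

```python
import sqlite3

def read_capped(conn, query, max_rows):
    """Run the query unchanged and stop reading after max_rows rows,
    mirroring the spirit of JDBC's Statement.setMaxRows()."""
    cur = conn.execute(query)       # no LIMIT clause in the query itself
    return cur.fetchmany(max_rows)  # the reader, not the SQL, enforces the cap

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(i, f"name-{i}") for i in range(1_000)])

rows = read_capped(conn, "SELECT id, name FROM person ORDER BY id", 100)
print(len(rows))  # 100
```

With JDBC specifically, disabling auto-commit matters because some drivers only stream results (rather than buffering the full result set) inside a transaction with a cursor-based fetch.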
You can use withQuery to specify a query with the number of records to read, e.g. .withQuery("select id,name from Person limit 1000"). You can also parameterize the number of records using JdbcIO.StatementPreparator. The example in the docs may help.
EDIT
Another option is to use withFetchSize.
I use Memcached to store content lists under many different key combinations. When a user edits the content, I must refresh the cache, but it is hard to tell which particular lists to refresh, and flushing the entire Memcached server is not a good idea either. So my question is: can I group Memcached keys so that I can flush a group rather than the whole cache?
Memcached does not natively support flushing a group of keys. What you could try is grouping your Memcached keys into namespaces; check the Memcached wiki for more information.
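The namespace trick from the Memcached wiki works by embedding a per-namespace version number in every key: "flushing" a group just bumps the version, so old keys become unreachable and eventually expire. A minimal sketch, with a plain dict standing in for the cache (a real client, e.g. pymemcache's get/set/incr, follows the same pattern; names here are illustrative):

```python
cache = {}  # stand-in for the Memcached server

def ns_key(namespace: str, key: str) -> str:
    """Prefix a key with the namespace's current version number."""
    version = cache.setdefault(f"ns:{namespace}", 1)
    return f"{namespace}:{version}:{key}"

def flush_namespace(namespace: str) -> None:
    """Bump the version; all old keys in the group become unreachable."""
    cache[f"ns:{namespace}"] = cache.get(f"ns:{namespace}", 1) + 1

cache[ns_key("lists", "page1")] = ["a", "b"]
print(ns_key("lists", "page1"))   # lists:1:page1

flush_namespace("lists")
print(ns_key("lists", "page1"))   # lists:2:page1 -> a fresh, empty slot
```

The stale entries are never deleted explicitly; they age out via normal TTL/LRU eviction, which is why this pattern suits Memcached, where there is no way to enumerate keys.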
If by any chance you're using Spring Boot, you could try the auto-configuration library for the Memcached cache; it supports clearing a cache group.
Memcached has no support for range queries, so unfortunately you cannot flush a subset of keys.