Is there a way to programmatically get the NOTIFY payload size limit? - postgresql

I'm trying to figure out the size limit of the NOTIFY payload in PostgreSQL, in order to split my messages into smaller packages.
This has to be done programmatically, as my script is intended to run on other people's servers.
Is there a way to achieve this?
I looked around in the documentation and on the web and didn't find anything.
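For what it's worth, the limit does not appear to be exposed at runtime: the PostgreSQL documentation states that in the default configuration a payload must be shorter than 8000 bytes, and that limit is a compile-time constant rather than a setting you can read with SHOW. A practical workaround is to assume the documented default (or stay comfortably below it) and split large messages yourself. Below is a minimal sketch in Python with psycopg2; the channel handling, JSON/base64 framing, and chunk size are illustrative assumptions, not anything prescribed by PostgreSQL:

```python
import base64
import json

import psycopg2  # assumed driver; any PostgreSQL client works the same way

PAYLOAD_LIMIT = 8000  # documented default; compile-time, so not queryable
CHUNK_BYTES = 5700    # ~7600 bytes after base64, leaving room for framing

def send_in_chunks(conn, channel, message):
    """Split a large message into NOTIFY-sized frames a listener can reassemble."""
    data = message.encode("utf-8")
    chunks = [data[i:i + CHUNK_BYTES] for i in range(0, len(data), CHUNK_BYTES)]
    with conn.cursor() as cur:
        for seq, chunk in enumerate(chunks):
            frame = json.dumps({
                "seq": seq,
                "total": len(chunks),
                "data": base64.b64encode(chunk).decode("ascii"),
            })
            cur.execute("SELECT pg_notify(%s, %s)", (channel, frame))
    conn.commit()

# conn = psycopg2.connect("dbname=app")  # connection details are assumptions
# send_in_chunks(conn, "my_channel", very_long_message)
```

The listener collects frames on the channel until it has seen `total` of them, then concatenates the decoded chunks in `seq` order.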

Related

Pagination and listing in APIs

I want to ask about lists and pagination in APIs.
I want to build a long list on the home screen. Since it's the main screen, this request will get a lot of traffic, and I want to implement it in a way that handles that traffic well.
After searching for how to implement it, I have a few questions:
Can I rely on PostgreSQL for pagination, or do I need a search engine like Solr?
If I rely on the database and users start visiting the app, this request will submit a lot of queries against the database. Is that going to kill the database?
I'm also using Redis to cache some data, which will absorb some of the traffic, but the problem with the home screen is that the response is too large to cache under a single key in Redis.
Can anyone explain the best way to implement pagination for this request? Pagination is the only thing I need; I'm not looking to implement full-text search. I just read that a search engine would handle the traffic so it doesn't affect or kill the database.
Thanks a lot :D
You can do this seamlessly with the pagination features built into PostgreSQL, which has all the functions and capabilities you need (LIMIT, OFFSET, FETCH).
But let me give you a recommendation.
There are several types of pagination.
The first type requires the page count to be known in advance. This approach is outdated and not recommended, because it requires knowing the number of records in the table, and counting records is a very slow operation, especially on large tables.
The second type does not require the number of pages to be known in advance: data from the next page is fetched in parts, only when it is needed. This is what Google, LinkedIn, and other big companies use. In this case, there is no need to count the rows of any table.
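Here is a minimal sketch of the second type (keyset, or "seek", pagination) in Python with psycopg2; the items table, its columns, and the page size are illustrative assumptions:

```python
import psycopg2

PAGE_SIZE = 50

def fetch_page(conn, after_id=0):
    """Fetch the next page by seeking past the last id already seen.

    No COUNT(*) and no OFFSET: with an index on id, every page is equally
    cheap, no matter how deep into the table the client has scrolled.
    """
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, title FROM items WHERE id > %s ORDER BY id LIMIT %s",
            (after_id, PAGE_SIZE),
        )
        return cur.fetchall()

# conn = psycopg2.connect("dbname=app")  # connection details are assumptions
# First page: fetch_page(conn)
# Next page:  fetch_page(conn, after_id=<last id of the previous page>)
# An empty result means there are no more pages.
```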

How to know the number of requests

I'm using React and Firebase, and when I check the usage in Firestore, I see a lot of requests being made. The problem is that I'm not the only one using it, so I don't know whether most of them are mine or not. Is there any way (using the console, maybe?) to know how many requests I'm making?
There is currently no way to track the source of reads and writes happening in Firestore. You can only see the total volume of those requests in the console.

Is there a way for Apama to read files line by line?

I'm new to Apama. I see that a com.apama.file lib exists, but I am unsure how to actually use it to read a file. I want to send each line as an event to be parsed and then, depending on its contents, sent on as a different event from there. Googling suggests that I'd need a transport (not sure what that is either) to do so, but my project lead is under the impression that this can all be done using Apama EPL. How true is this, and if it has some validity, how can I go about achieving it?
Yes, this is certainly possible. To help you do it, though, please can you provide a little more information about your setup? For example, what is the file type and is the file local to where the correlator will be running? Will there only be one file to process at a time? How large is the file, and are there any specific performance requirements?
You may find this helpful:
https://github.com/SoftwareAG/apama-streaming-analytics-connectivity-FileTransport
You don't say quite what you are trying to achieve, but if you are new to Apama, I will say that this is not something that is done frequently, especially in simpler solutions when you are just starting out.
Depending on what you are trying to achieve, are you aware of the "engine_send" tool and the ability to use it to send in a text file of Apama events (normally a .evt file), with batch tags if you want to spread them over time?
http://www.apamacommunity.com/documents/10.5.3.0/apama_10.5.3.0_webhelp/apama-webhelp/apama-webhelp/re-DepAndManApaApp_sending_events_to_correlators.html
http://www.apamacommunity.com/documents/10.5.3.0/apama_10.5.3.0_webhelp/apama-webhelp/apama-webhelp/co-DepAndManApaApp_event_file_format.html

Paginated REST API: How to select the amount of data returned?

I know that in most of today's REST APIs, responses to web calls have to be paginated.
But I don't see any insight on the web about how to select the ideal size of a batch returned by an API call: should it be 10, 100, 1000? In short: what factors should drive the choice of the size of an API response?
Some people state that it should be based on the number of elements displayed by the UI. I don't agree with this, as not all APIs are directly linked to a UI, and in any case, modern REST APIs let the caller choose the number of items in the output batch with a configurable parameter, up to a certain maximum.
So, how could we define the value for this "maximum number of elements returned by an HTTP request"?
Should it be based on the payload size? Internal architecture of the API? Should it be based on performance measurement?
Any insight on this? I'm not really looking for an explicit figure, but rather for techniques that could help find the answer. Ideally, I'd like to see the process followed by some successful APIs, but I cannot find any.
My general approach to this is to:
Try to avoid paging
Make REST clients resilient to paging changes by using next and previous links instead of a ?page= attribute. A REST client knows another page is available strictly by the presence of the link.
I don't use hard rules or measurements to figure out when paging is needed. Paging is generally painful, so my first approach would be to figure out which requirement drives the need for paging, and whether that requirement can be removed.
Once I've determined it's not possible to remove the requirement another way, I would set the page cut-off as large as is reasonable, to reduce the likelihood that clients need to make additional requests.
If it's a closed API (used only by clients you control), pick whatever the UI wants; it's trivial to change. If clients can choose from several options, you can include a pageSize parameter. Or, better:
If it's an open API (used by clients you don't control), then let clients control what size paging they want. Rather than support a pageNumber parameter, support offset and limit. Offset is how many records to skip before starting to return records, and limit is the maximum number of records to return. If the client is not happy with how their request performs, they can adjust the parameters to suit their needs. Any upper limit your API has should be driven by what your service can handle. It's neither possible nor desirable for you to try to figure out the Magic Maximum Page Size that makes all clients happy and no clients sad.
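To make that concrete, here is a hedged sketch in Python that clamps client-supplied values to a server-side maximum and advertises a next link; Flask, the /items endpoint, and the 500-row cap are illustrative assumptions, not anything the answers above prescribe:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

MAX_LIMIT = 500  # driven by what the service can handle, not by any client's UI
ITEMS = [{"id": i} for i in range(10_000)]  # stand-in data set

@app.get("/items")
def list_items():
    # Clamp whatever the client asked for to sane server-side bounds.
    offset = max(request.args.get("offset", default=0, type=int), 0)
    limit = min(max(request.args.get("limit", default=100, type=int), 1), MAX_LIMIT)
    page = ITEMS[offset:offset + limit]
    body = {"items": page}
    if offset + limit < len(ITEMS):
        # A "next" link keeps clients resilient to paging changes.
        body["next"] = f"/items?offset={offset + limit}&limit={limit}"
    return jsonify(body)
```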
Also, please note that none of this has anything to do with REST, which is silent when it comes to paging.
I usually make a rough performance measurement by hand. I want as few requests as possible, but I do not want to risk timeouts.

Recommended way to stream data from one process to another in macOS via Swift

I have a helper application which generates data that I need in my main application.
I'm searching for a way to push this data to the main app.
One way would be DistributedNotificationCenter.
The documentation says that notifications will be dropped if there are too many, but I cannot find a recommendation on what the maximum suggested number of notifications is.
Currently I need to send an array of about 100 entries, 5-10 times a second.
If this is not a recommended approach, what would be a better one?
Thanks!