Azure Function creating too many connections to PostgreSQL - postgresql

I have an Azure Durable Function that interacts with a PostgreSQL database, also hosted in Azure.
The PostgreSQL database has a connection limit of 50, and furthermore, my connection string limits the connection pool size to 40, leaving space for super user / admin connections.
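For reference, the connection string looks roughly like this (server, database, and credentials here are placeholders, not my real values):

Host=my-server.postgres.database.azure.com;Database=my-db;Username=my-user;Password=<secret>;Maximum Pool Size=40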
Nonetheless, under some loads I get the error
53300: remaining connection slots are reserved for non-replication superuser connections
This documentation from Microsoft seemed relevant, but it doesn't seem like I can make a static client, and, as it mentions,
because you can still run out of connections, you should optimize connections to the database.
I have this method
private IDbConnection GetConnection()
{
return new NpgsqlConnection(Environment.GetEnvironmentVariable("PostgresConnectionString"));
}
and when I want to interact with PostgreSQL I do something like this
using (var connection = GetConnection())
{
connection.Open();
return await connection.QuerySingleAsync<int>(settings.Query().Insert, settings);
}
So I am creating (and disposing) lots of NpgsqlConnection objects, but according to this, that should be fine because connection pooling is handled behind the scenes. But there may be something about Azure Functions that invalidates this thinking.
I have noticed (in pgAdmin) that I end up with a lot of idle connections.
Based on that I've tried fiddling with Npgsql connection parameters like Connection Idle Lifetime, Timeout, and Pooling, but the problem of too many connections seems to persist to one degree or another. Additionally, I've tried limiting the number of concurrent orchestrator and activity functions (see this doc), but that seems to partially defeat the purpose of Azure Functions being scalable. It does help - I get fewer of the "too many connections" errors - and presumably if I keep testing with lower numbers I might even eliminate them, but again, that seems to defeat the point, and there may be another solution.
How can I use PostgreSQL with Azure Functions without maxing out connections?

I don't have a good solution, but I think I have the explanation for why this happens.
Why is Azure Function App maxing out connections?
Even though you specify a limit of 40 for the pool size, it is only honored within one instance of the function app. Note that a function app can scale out based on load: it can process several requests concurrently in the same instance, and it can also create new instances of the app. Concurrent requests within the same instance honor the pool size setting, but each instance ends up with its own pool of up to 40 connections, so just two instances can already exceed the 50-connection limit.
Even the concurrency throttles in durable functions don't solve this issue, because they only throttle within a single instance, not across instances.
How can I use PostgreSQL with Azure Functions without maxing out connections?
Unfortunately, Azure Functions doesn't provide a native way to do this. Note that the connection pool size is not managed by the Functions runtime, but by Npgsql's library code, and the copies of that library running on different instances can't coordinate with each other.
This is the classic problem of sharing a limited resource; you have 50 of them (connection slots) in this case. The most effective way to support more consumers is to reduce the time each consumer holds the resource. Reducing Connection Idle Lifetime substantially is probably the most effective lever. Increasing Timeout does help reduce errors (and is a good choice), but it doesn't increase throughput; it just smooths out the load. Reducing Maximum Pool Size is also good.
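For example, tightening those settings in the Npgsql connection string might look like this (values are illustrative only, not a prescription for your workload; Connection Idle Lifetime and Timeout are in seconds):

var connectionString =
    "Host=<server>;Database=<db>;Username=<user>;Password=<password>;" +
    "Maximum Pool Size=10;Connection Idle Lifetime=15;Timeout=30";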
Think of it in terms of locks on a shared resource: you want to hold the lock for the minimum amount of time. An open connection is a lock on one of the 50 total slots. In general, SQL client libraries pool connections and keep them open to save the setup cost involved in each new connection. However, if that is what's limiting concurrency, it's best to close idle connections as soon as possible. Within a single instance of an app, the library does this automatically when the max pool size is reached, but it cannot close another instance's connections.
One thing to note: reducing Maximum Pool Size doesn't necessarily limit the concurrency of your app. In most cases it just decreases the number of idle connections, at the cost of paying the setup time again when a new connection has to be established later.
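If you want to watch the effect of these settings while load testing, you can count the server-side connections by state with a standard PostgreSQL query against pg_stat_activity. A minimal sketch (it reuses the connection-string setting name from the question; note the check itself briefly uses one pooled connection):

using System;
using System.Threading.Tasks;
using Dapper;
using Npgsql;

public static class ConnectionMonitor
{
    // Counts connections the server currently sees as idle; run it periodically during a load test.
    public static async Task<long> CountIdleConnectionsAsync()
    {
        using (var connection = new NpgsqlConnection(
            Environment.GetEnvironmentVariable("PostgresConnectionString")))
        {
            await connection.OpenAsync();
            return await connection.QuerySingleAsync<long>(
                "SELECT count(*) FROM pg_stat_activity WHERE state = 'idle'");
        }
    }
}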
Update
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT might be useful. You could set it to 5 and the pool size to 8 (or similar), giving a worst case of 5 × 8 = 40 connections, which stays under the 50 limit. I would go this way if reducing Maximum Pool Size and Connection Idle Lifetime is not helping.

This is where dependency injection can be really helpful. You can register a singleton client and it will do the job perfectly. If you want to know more about service lifetimes, you can read about them in the docs.
First, add the NuGet package Microsoft.Azure.Functions.Extensions.DependencyInjection.
Now add a Startup class like the one below and register your client.
using System;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Npgsql;

[assembly: FunctionsStartup(typeof(MyFunction.Startup))]

namespace MyFunction
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            ResolveDependencies(builder);
        }

        private void ResolveDependencies(IFunctionsHostBuilder builder)
        {
            var conStr = Environment.GetEnvironmentVariable("PostgresConnectionString");

            // Register one shared NpgsqlConnection for this function app instance
            builder.Services.AddSingleton(s => new NpgsqlConnection(conStr));
        }
    }
}
Now you can easily consume it from any of your functions:
public class FunctionA
{
    private readonly NpgsqlConnection _connection;

    public FunctionA(NpgsqlConnection conn)
    {
        _connection = conn;
    }

    [FunctionName("FunctionA")]
    public async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        // do something with your _connection here
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
}

Here's an example of using a static HttpClient, something you should consider so that you don't need to manage connections explicitly; instead, you let the client do it for you:
public static class PeriodicHealthCheckFunction
{
private static HttpClient _httpClient = new HttpClient();
[FunctionName("PeriodicHealthCheckFunction")]
public static async Task Run(
[TimerTrigger("0 */5 * * * *")]TimerInfo healthCheckTimer,
ILogger log)
{
string status = await _httpClient.GetStringAsync("https://localhost:5001/healthcheck");
log.LogInformation($"Health check performed at: {DateTime.UtcNow} | Status: {status}");
}
}

Related

MongoDB CPU usage gets 100%

We have around 100 IoT devices that connect to the cloud and send data every 10 seconds.
We tested on 2 vcore/4 GB RAM and 8 vcore/16 GB RAM instances. The CPU usage rises to 200% and 800% respectively in a short time. The number of established TCP connections is around 106.
Is it because we create too many MongoDB connections, or because we write to MongoDB too frequently?
I think an object in Scala is a singleton, so it should only create one DBHelper instance. But does the code in DBHelper create a separate datastore for each TCP connection?
1. DBHelper.scala:
object DBHelper {
  var datastore = morphia.createDataStore(…)
}
2. MqttClient.java:
mqttPushClient.setCallback(pushCallBack);
3. PushCallback.java:
public class PushCallback implements MqttCallback {
public void messageArrived(String topic, MqttMessage mqttMessage) throws Exception {
//calls DBHelper and save message to mongoDB
}
}
You should create only one MongoClient and reuse it as much as you can. MongoClient has its own internal connection pool and can manage all of that for you. Each time you create a new client, it has to re-establish a connection and load up certain bits of metadata. So create the one client and the one Datastore for that connection, and just reuse them.
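For illustration only, here is a minimal sketch of that single-client pattern in C# with the official MongoDB .NET driver (your code is Scala/Morphia, but the idea is identical: one client, one database/datastore handle, reused by every callback; the connection string and database name below are placeholders):

using MongoDB.Driver;

public static class MongoHolder
{
    // One MongoClient per process; it maintains its own internal connection pool.
    private static readonly MongoClient Client =
        new MongoClient("mongodb://localhost:27017");

    // Reuse the same database handle instead of creating a new client per MQTT message.
    public static IMongoDatabase Database { get; } = Client.GetDatabase("iot");
}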

Mongo DB reading 4 million documents

We have scheduled jobs that run daily. Each job looks for the documents matching that day, applies a minimal transform, and sends them to a queue for downstream processing. Typically we have 4 million documents to process per day, and our aim is to complete the processing within one hour. I am looking for suggestions on best practices for reading 4 million documents from MongoDB quickly.
The MongoDB Async driver is the first stop for low overhead querying. There's a good example of using the SingleResultCallback on that page:
Block<Document> printDocumentBlock = new Block<Document>() {
@Override
public void apply(final Document document) {
System.out.println(document.toJson());
}
};
SingleResultCallback<Void> callbackWhenFinished = new SingleResultCallback<Void>() {
@Override
public void onResult(final Void result, final Throwable t) {
System.out.println("Operation Finished!");
}
};
collection.find().forEach(printDocumentBlock, callbackWhenFinished);
It is a common pattern in asynchronous database drivers to allow results to be passed on for processing as soon as they are available, and the use of OS-level async I/O helps keep CPU overhead low. Which brings up the next problem: how to get the data out.
Without seeing the specifics of your work, you probably want to place the results into an in-memory queue to be picked up by another thread at this point, so the reader thread can keep reading results. An ArrayBlockingQueue is probably appropriate; put is more appropriate than add because it will block the reader thread if the workers aren't able to keep up (keeping things balanced). Ideally you don't want the queue to back up, which is where multiple worker threads become necessary. If the order of the results is important, use a single worker thread; otherwise use a ThreadPoolExecutor with the queue passed into its constructor. Using the in-memory queue does open up the possibility of data loss if the results are somehow being discarded as they are read (e.g. if you were immediately sending off another query to delete them) and the reader process crashed.
At this point, either do the 'minimal transforms' on the worker thread(s), or serialize the documents in the workers and put them on a real queue (e.g. RabbitMQ, ZeroMQ). Putting them onto a real queue allows the work to be divided up amongst multiple machines trivially, and provides optional persistence allowing recovery of work; those queues also have great clustering options for scalability. Those machines can then put the results into the queue you mentioned in the question (assuming it has the capacity).
The bottleneck in a system like that is how quickly one machine can get through a single MongoDB query, and how many results the final queue can handle. All the other parts (MongoDB, queues, number of worker machines) are individually scalable. By doing as little work as possible on the querying machine and pushing that work onto other machines, that impact can be greatly reduced. It sounds like your destination queue is out of your control.
When trying to work out where bottlenecks are, measurements are critical. Adding metrics to your application up front will let you know which areas need improvement when things aren't going well.
That set-up can build a pretty scalable system. I've built many similar systems before. Beyond that, you'll want to investigate getting your data into something like Apache Storm.

How to free Redis Scala client allocated by RedisClientPool?

I am using debasishg/scala-redis as my Redis Client.
I want it to support multi threaded executions. Following their documentation: https://github.com/debasishg/scala-redis I defined
val clients = new RedisClientPool("localhost", 6379)
and then use it on each access to Redis:
clients.withClient {
client => {
...
}
}
My question is, do I need to free each allocated client? And if so, what is a correct way to do it?
If you look at the constructor for RedisClientPool, there is a default value for maxIdle ("the maximum number of objects that can sit idle in the pool", as per this), and a default value for poolWaitTimeout. You can change those values, but basically, if you wait poolWaitTimeout you are guaranteed to have your resources cleaned up, except for the maxIdle clients kept on standby.
Also, if you can't stand the idea of idle clients, you can shut down the whole pool with mypool.close, and create it again when needed, but depending on your use case that might defeat the purpose of using a pool (if it's a cron job I guess that's fine).

Service Fabric reliable queue long operation

I'm trying to understand some best practices for Service Fabric.
If I have a queue that is added to by a web service or some other mechanism, and a back-end task that processes that queue, what is the best approach to handling long-running operations in the background?
1. Use TryPeekAsync in one transaction, process the item, and then if successful use TryDequeueAsync to finally dequeue it.
2. Use TryDequeueAsync to remove an item, put it into a dictionary, and then remove it from the dictionary when complete. On startup of the service, check the dictionary for anything pending before the queue.
Both ways feel slightly wrong, but I can't work out if there is a better way.
One option is to process the queue in RunAsync, something along the lines of this:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
var store = await StateManager.GetOrAddAsync<IReliableQueue<T>>("MyStore").ConfigureAwait(false);
while (!cancellationToken.IsCancellationRequested)
{
using (var tx = StateManager.CreateTransaction())
{
var itemFromQueue = await store.TryDequeueAsync(tx).ConfigureAwait(false);
if (!itemFromQueue.HasValue)
{
await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken).ConfigureAwait(false);
continue;
}
// Process item here
// Remember to clone the dequeued item if it is a custom type and you are going to mutate it.
// If success, await tx.CommitAsync();
// If failure to process, either let it run out of the Using transaction scope, or call tx.Abort();
}
}
}
Regarding the comment about cloning the dequeued item if you are to mutate it, look under the "Recommendations" part here:
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-reliable-collections/
One limitation with Reliable Collections (both Queue and Dictionary) is that you only have a parallelism of 1 per partition. So for high-activity queues it might not be the best solution. This might be the issue you're running into.
What we've been doing is to use Reliable Queues for situations where the write volume is very low. For higher-throughput queues, where we need durability and scale, we're using Service Bus Topics. That also gives us the advantage that if a service was stateful only due to having the ReliableQueue, it can now be made stateless. Though this adds a dependency on a 3rd-party service (in this case Service Bus), and that might not be an option for you.
Another option would be to create a durable pub/sub implementation to act as the queue. I've done tests before using actors for this, and it seemed to be a viable option without spending too much time on it, since we didn't have any issues depending on Service Bus. Here is another SO question about that: Pub/sub pattern in Azure Service Fabric
If processing is very slow, use two queues: a fast one where you store the work without interruption, and a slow one where it gets processed. RunAsync is used to move messages from the fast queue to the slow one.

Apache HttpClient PoolingHttpClientConnectionManager leaking connections?

I am using the Apache Http Client in a Scala application.
The application is fairly high throughput with high parallelism.
I am not sure, but I think I may be leaking connections. It seems that whenever the section of code that uses the client gets busy, the application becomes unresponsive. My suspicion is that I am leaking sockets or something, which then causes other aspects of the application to stop working. It may also not be leaking connections so much as not closing them fast enough.
For more context, occasionally certain actions lead to this code being executed hundreds of times a minute in parallel. When this happens, the REST API (Spray) of the application becomes unresponsive. There are other areas of the application that operate with high parallelism as well, and those never cause a problem with the application's responsiveness.
Cutting back on the parallelism of this section of code does seem to alleviate the problem but isn't a viable long term solution.
Am I forgetting to configure something, or configuring something incorrectly?
The code I am using is something like this:
class SomeClass {
val connectionManager = new PoolingHttpClientConnectionManager()
connectionManager.setDefaultMaxPerRoute(50)
connectionManager.setMaxTotal(500)
val httpClient = HttpClients.custom().setConnectionManager(connectionManager).build()
def postData() {
val post = new HttpPost("http://SomeUrl") // Typically this URL is fixed. It doesn't vary much if at all.
post.setEntity(new StringEntity("Some Data"))
try {
val response = httpClient.execute(post)
try {
// Check the response
} finally {
response.close()
}
} finally {
post.releaseConnection()
}
}
}
EDIT
I can see that I am building up a lot of connections in the TIME_WAIT state. I have tried adjusting the DefaultMaxPerRoute and the MaxTotal to a variety of values with no noticeable effect. It seems like I am missing something and as a result the connections are not being re-used, but I can't find any documentation that suggests what I am missing. It is critical that these connections get re-used.
EDIT 2
With further investigation, using lsof -p, I can see that if I set the MaxPerRoute to 10, there are in fact 10 connections being listed as "ESTABLISHED". I can see that the port numbers do not change. This seems to imply to me that in fact it is re-using the connections.
What that doesn't explain is why I am still leaking connections in this code. The reused connections and the leaked connections (found with netstat -a) showing up in TIME_WAIT status share the same base URL, so they are definitely related. Is it possible that I am re-using the connections but then somehow not properly closing the response?
EDIT 3
Located the source of the TIME_WAIT "leak". It was in an unrelated section of code. So it wasn't anything to do with the HttpClient. However after fixing up that code, all the TIME_WAITs went away, but the application is still becoming unresponsive when hitting the HttpClient code many times. Still investigating that portion.
You should really consider re-using the HttpClient instance, or at least the connection pool that underpins it, instead of creating them for each new request execution. If you wish to continue doing the latter, you should also close the client or shut down the connection pool before they go out of scope.
As far as the leak is concerned, it should be relatively easy to track down by running your application with context logging for connection management turned on, as described here.
IMO you can use a much lower number of max connections per domain (like 5 instead of 50) and still completely saturate your network bandwidth, if you use HTTP efficiently.
I'm not a Scala person (Android, Java), but I have done lots and lots of optimization of HTTP client-side thread pools. IMO, blindly increasing connections per domain to 50 is masking some other serious issue with throughput.
2 points:
1. If you are using a shared PoolingHttpClientConnectionManager, correctly keeping a small pool per domain, and you conform to the recommended way of releasing your connection back to the pool (you should be able to debug all this by watching a running metric of connection state per pool instance), then you should be good.
2. Whatever the parallelism feature of Scala, you should understand something about how the 5 threads from the pool for a domain share the sockets. IMO, from Android/Java experience, even though each thread is supposedly doing blocking I/O to the server in the scope of that httpClient.execute call, the actual channel management involved allows very high throughput without resorting to async HTTP client libraries.
The Android experience may not be relevant because that client has only 4 threads. Having said that, even if you have 64 or more threads available, I just don't understand needing more than 10 connections per domain in order to keep your underlying HTTP sockets very, very busy with throughput.