Azure SQL Server (elastic pool) slow to connect - entity-framework

We are running a web application in Azure Web Apps using a database per customer (multiple accounts per customer). When a user logs in, we connect them to the correct customer database. This database is also hosted in Azure (in an elastic pool), in the same region (West Europe) as the Web App.
Once the connection is pooled, request times are fast, but the first time a user logs in the connection still needs to be created, and this takes quite a long time.
The connection string is built using a SqlConnectionStringBuilder.
var csb = new System.Data.SqlClient.SqlConnectionStringBuilder();
csb.DataSource = "tcp:******.database.windows.net,1433";
csb.InitialCatalog = "***-***-***";
csb.UserID = "**-**";
csb.Password = "**********";
csb.MultipleActiveResultSets = true;
csb.Encrypt = true;
csb.PersistSecurityInfo = false;
csb.TrustServerCertificate = false;
csb.ConnectTimeout = 30;
_connectionString = csb.ConnectionString;
// Data Source=tcp:******.database.windows.net,1433;Initial Catalog=***-***-***;Persist Security Info=False;User ID=**-***;Password=******;MultipleActiveResultSets=True;Connect Timeout=30;Encrypt=True;TrustServerCertificate=False
Am I doing anything wrong? Or are there settings in Azure to speed up the connection process?
The request shown above is the first request to a customer's application. It therefore includes the EF Migration Seed, so the first two queries do not actually go to the database itself, while quite a lot of queries (not all shown here) do.

Well, I eventually solved my problem. It seems I was matching the wrong queries within Application Insights. I installed Stackify, and this gave me just the little bit of extra information I needed.
It seems Entity Framework does some things with the 'master' database. As the user in the connection string did not have access to the 'master' database, an error is thrown. Handling that error takes up quite some time on the App Service, which is why the request returns slowly; it just doesn't fail.
What EF tries to do is determine whether the database exists by querying the master database, which is faster than connecting to a non-existing database. If that fails because it cannot connect to the master database, EF simply tries to connect to the database itself. If the connection to the database works, it continues normal execution, such as the Seed method.
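If the customer databases always exist and are migrated elsewhere, one way to avoid the master probe and the Migrations/Seed work on the first connection is to disable the database initializer for the context. This is not from the original post, just a minimal sketch assuming EF6 code-first and a hypothetical CustomerContext class:
// Sketch: turn off EF's database initializer so it never queries 'master'
// for database existence or runs Migrations/Seed on the first connection.
// Assumes EF6 and that the customer databases are created/migrated elsewhere.
using System.Data.Entity;

public class CustomerContext : DbContext
{
    static CustomerContext()
    {
        Database.SetInitializer<CustomerContext>(null);
    }

    public CustomerContext(string connectionString) : base(connectionString) { }
}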

Related

Entity Framework/dotnet5.0 creates multiple connections

I have a very straightforward dotnet core application that creates a DB context:
services.AddDbContext<MyContext>(options =>
{
    var connectionString = settings.ConnectionString;
    options.UseMySql(
        connectionString, ServerVersion.AutoDetect(connectionString),
        x => x.MigrationsAssembly("My.Migrations")
    );
}, ServiceLifetime.Scoped);
All requests are asynchronous. The MySQL connector is from Pomelo.
It serves a separate HTTP client (Angular). The client sometimes sends more than 20 requests simultaneously; no other clients are connected to the API. It works smoothly, but from time to time I see an error that the DB user created too many connections (over 10):
"MySqlConnector.MySqlException (0x80004005):
User 'user-name' has exceeded the 'max_user_connections'
resource (current value: 20)\r\n at
MySqlConnector.Core.ServerSession.ConnectAsync(ConnectionSettings cs
I managed to persuade the DB admin to increase the limit to 20 (the default for MySQL is 10), but it did not solve the issue completely, and in fact it is not a good way to solve it.
Is there a way to limit the number of connections the context creates?
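One option worth trying (not from the original post) is to cap the client-side connection pool through MySqlConnector's connection string options, so the pool never opens more sessions than the server's max_user_connections allows. A sketch, assuming the connection string is under your control and that a limit of 10 is acceptable:
// Sketch: cap the client-side pool below the server's max_user_connections.
// MaximumPoolSize is a MySqlConnector connection string option; the value 10
// is an example, not taken from the original post.
var builder = new MySqlConnector.MySqlConnectionStringBuilder(settings.ConnectionString)
{
    MaximumPoolSize = 10
};

services.AddDbContext<MyContext>(options =>
    options.UseMySql(
        builder.ConnectionString,
        ServerVersion.AutoDetect(builder.ConnectionString),
        x => x.MigrationsAssembly("My.Migrations")),
    ServiceLifetime.Scoped);
Note that if several application instances share the same DB user, each instance's pool counts toward the same server-side limit.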

@hapi/lab: how to prevent creating multiple DB connections when testing the server?

I am developing an API server with Hapi and using @hapi/lab for testing. I have different test files for different API modules/routes.
Each test file initializes the server. A DB connection is created when the server is initialized; consequently, multiple DB connections are created at the same time when I test the server, and I get a warning like this:
WARNING: Creating a duplicate database object for the same connection.
    at Object.register (/home/.../node_modules/hapi-pg-promise/index.js:19:20)
    at internals.Server.register (/home/.../node_modules/@hapi/hapi/lib/server.js:453:35)
    at async Object.exports.compose (/home/.../node_modules/@hapi/glue/lib/index.js:46:9)
    at async createServer (/home/.../server.js:10:115)
    at async Immediate.<anonymous> (/home/.../node_modules/@hapi/lab/lib/runner.js:662:17)
So, is there an approach to testing a Hapi server in multiple files without creating multiple DB connections?
You will have to lazy-load the server. Let's assume you have a function serverFactory which returns the server. You could do this:
let serverFactoryObj;

function getServerFactory() {
    if (!serverFactoryObj) {
        serverFactoryObj = serverFactory();
    }
    return serverFactoryObj;
}
This way, no matter how many times you test the endpoints, you will always have a single instance of the server. Ideally you should always test with a single instance rather than creating/stopping a server for each test.

Queuing multi-tenant application in Laravel

I have a multi-tenant application built in Laravel 5.1. It uses one main database connection for storing users, roles, and permissions, as well as jobs and failed_jobs. Additionally, every user has their own database.
I use a Job class for sending mail, and when it is executed, the following exception occurs:
[PDOException] SQLSTATE[3D000]: Invalid catalog name: 1046 No database selected
The class uses tables from two db connections (the main connection, and the one associated with the current user).
Thanks in advance, any help is appreciated.
OK, this was easy. For anyone who is interested: I had simply forgotten to set the database on the second connection.
In fact, the database field of the second DB connection is filled dynamically, depending on the authenticated user. So, each time the job class is executed, the database field has to be initialized:
Config::set('database.connections.second_connection.database', 'user_' . $user_id);
// $user_id is in fact auth()->user()->id, passed as parameter
That solves the problem.

How to perform an "I can reach my database" healthcheck?

I have a classic spray + slick HTTP server that serves as my database access layer, and I'd like to have a healthcheck route to ensure my server is still able to reach my DB.
I could do it by issuing a generic SQL query, but I was wondering whether there is a better way to check that the connection is alive and usable without adding load on the database (or at least with the minimum possible load).
So, pretty much:
val db = Database.forConfig("app.mydb")
[...]
db.???? // Do the check here
Why do you want to avoid executing a query against the database?
I think the best health check is to use the database as your application would (actually connecting and running a query). With that in mind, you can perform a SELECT 1 against your DB and verify that it responds accordingly.

What things should I consider when using System.Transactions in my EF project?

Background
I have both an MVC app and a Windows service that access the same data access library, which uses Entity Framework. The Windows service monitors certain activity on several tables and performs some calculations.
We are using the DAL project against several hundred databases, generating the connection string for the context at runtime.
We have a number of functions (both stored procedures and .NET methods that operate on EF entities) which, because of the scope of the data we are using, are very DB-intensive and have the potential to block one another.
The problem
The Windows service is not so important that it can't wait; if something must be blocked, it can be the Windows service. Earlier I found a number of SO questions stating that System.Transactions is the way to go when setting your transaction isolation level to READ UNCOMMITTED to minimize locks.
I tried this, and I may be misunderstanding what is going on, so I need some clarification.
The method in the Windows service is structured like so:
private bool _stopMe = false;

public void Update()
{
    EntContext context = new EntContext();
    do
    {
        var transactionOptions = new System.Transactions.TransactionOptions();
        transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
        using (var transactionScope = new System.Transactions.TransactionScope(
            System.Transactions.TransactionScopeOption.Required, transactionOptions))
        {
            List<Ent1> myEnts = (from e ... complicated query here).ToList();
            SomeCalculations(myEnts);
            List<Ent2> myOtherEnts = (from e ... complicated query using entities from the previous query).ToList();
            MakeSomeChanges(myOtherEnts);
            context.SaveChanges();
        }
        Thread.Sleep(5000); // wait 5 seconds before allowing the do block to continue
    } while (!_stopMe);
}
When I execute my second query, an exception gets thrown:
The underlying provider failed on Open. Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
The transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D024)
Can I assume that I should not be calling more than one query in that using block? The first query returned just fine. This is being performed on one database at a time (other instances are run in different threads, and nothing from this thread touches the others).
My question is: is this how it should be used, or is there more to it that I should know?
Of Note: This is a monitoring function, so it must be run repeatedly.
In your code you are using a TransactionScope. It looks like the first query uses a lightweight DB transaction. When the second query comes, the transaction scope escalates the transaction to a distributed transaction.
The distributed transaction uses MSDTC.
This is where the error comes from: by default, MSDTC is not enabled. Even if it is enabled and started, it needs to be configured to allow a remote client to create a distributed transaction.
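One way to keep everything on a single lightweight local transaction, instead of the TransactionScope, is to let EF manage the transaction itself. This is a sketch rather than the answerer's solution; it assumes EF6 (where Database.BeginTransaction accepts an IsolationLevel) and uses hypothetical DbSet names:
// Sketch: avoid MSDTC escalation by running both queries on one local
// EF transaction with READ UNCOMMITTED. Assumes EF6; Ent1s/Ent2s are
// hypothetical DbSet names standing in for the real entity sets.
using (var context = new EntContext())
using (var tx = context.Database.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted))
{
    List<Ent1> myEnts = (from e in context.Ent1s /* complicated query here */ select e).ToList();
    SomeCalculations(myEnts);

    List<Ent2> myOtherEnts = (from e in context.Ent2s /* uses results of the previous query */ select e).ToList();
    MakeSomeChanges(myOtherEnts);

    context.SaveChanges();
    tx.Commit(); // both queries used the same connection, so nothing escalates to MSDTC
}
Alternatively, opening the context's connection explicitly before creating the TransactionScope and keeping it open for both queries also avoids the escalation, since escalation is triggered when a second connection enlists in the same scope.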