How to create a database in an Azure elastic pool with Entity Framework Core?

I would like to create a database with Entity Framework Core that is automatically added to my Azure elastic pool.
I do that with a DatabaseFacade extension method that executes a SQL command after the database creation, as suggested here:
Azure SQL Server Elastic Pool - automatically add database to pool
public static async Task<bool> EnsureCreatedAsync(this DatabaseFacade databaseFacade, string elasticPoolName, CancellationToken cancellationToken = default)
{
    if (!await databaseFacade.EnsureCreatedAsync(cancellationToken)) return false;

    // The database has been created.
    var dbName = databaseFacade.GetDbConnection().Database;
    try
    {
        cancellationToken.ThrowIfCancellationRequested();
        if (!string.IsNullOrEmpty(elasticPoolName))
        {
            await databaseFacade.ExecuteSqlCommandAsync(new RawSqlString(
                $"ALTER DATABASE {dbName} MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL (name = [{elasticPoolName}] ));"),
                cancellationToken);
        }
        return true;
    }
    catch
    {
        await databaseFacade.EnsureDeletedAsync(cancellationToken);
        throw;
    }
}
It works, but I would prefer an atomic operation where the database is created directly in the Azure elastic pool.

I had a very similar issue. Fortunately, I took cues from the previous answer and improvised on it to arrive at a solution.
I had a common database to manage the application, and whenever a new client onboards I need to create a new database. So I had to maintain multiple database contexts in my .NET Core application. I also had migrations for the clientContext ready in my codebase, which just needed
client_db.Database.MigrateAsync();
to create the database. But I couldn't create it directly inside the elastic pool, because Azure has no default setting that supports that, so MigrateAsync always created the database outside the pool.
So I created the database within the pool using a T-SQL command from my common database context, followed by MigrateAsync() to migrate all the required schema.
var commandText = "CREATE DATABASE client1 ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = demoPool ) );";
db.Database.ExecuteSqlCommand(commandText);

clientContext client_db = new clientContext(approved_corporate.Id, _configuration);
// Await the migration so any failure surfaces here.
await client_db.Database.MigrateAsync();
I also had a custom constructor in my clientContext to support this:
public clientContext(int client_id, IConfiguration configuration = null)
{
    // Replace the "client_code" placeholder in the configured connection string
    // with the per-client database name, e.g. "client42".
    string client_code = "client" + client_id.ToString();
    connection_string = configuration["ConnectionStrings:Client"].Replace("client_code", client_code);
}
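For completeness, a minimal sketch (an assumption, not shown above) of how the context might consume that field; the OnConfiguring override and the placement of the connection_string field are guesses:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    // Assumed wiring: connection_string is the private field set by the constructor above.
    optionsBuilder.UseSqlServer(connection_string);
}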

Azure SQL Database lets you create a new database inside an existing elastic pool or as a single database. You must be connected to the master database to create a new database.
For more details, please see: Transact-SQL: Manage pooled databases.
Example T-SQL Code:
Creating a Database in an Elastic Pool:
CREATE DATABASE db1 ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = S3M100 ) ) ;
Please see: Azure SQL Database single database/elastic pool
You can substitute your own database and pool names in the T-SQL statement and try again.
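If you want to issue that statement from application code rather than from SSMS, a minimal sketch with plain ADO.NET might look like this (the server name and credentials are placeholders; note that you must connect to the master database):

using System.Data.SqlClient; // or Microsoft.Data.SqlClient

var masterConnectionString =
    "Server=tcp:myserver.database.windows.net,1433;Initial Catalog=master;User ID=myadmin;Password=...;";
using (var connection = new SqlConnection(masterConnectionString))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        // Database and pool names cannot be parameterized in DDL,
        // so only build them from trusted input.
        command.CommandText = "CREATE DATABASE db1 ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = S3M100 ) );";
        command.ExecuteNonQuery();
    }
}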
Hope this helps

Related

How to connect to an Azure Cosmos DB for MongoDB from an Azure Function

I'm starting out with Azure Functions & Cosmos DB.
I created a function app in the Azure Portal, then I followed the guide to get started in VS Code:
npm install -g azure-functions-core-tools@4 --unsafe-perm true
Then New Project
Then New Function, selecting the HTTP trigger template
When running F5 and deploying, it worked.
Then I created, in the portal, an "Azure Cosmos DB API for MongoDB" database. I followed this to publish a document when my method is called:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-add-output-binding-cosmos-db-vs-code?tabs=in-process&pivots=programming-language-csharp
So my current result is:
a function:
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace TakeANumber
{
    public static class TestFunc
    {
        [FunctionName("TestFunc")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            [CosmosDB(databaseName: "cosmodb-take-a-number", collectionName: "take-a-number", ConnectionStringSetting = "cosmoDbConnectionString")] IAsyncCollector<dynamic> documentsOut,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string name = req.Query["name"];
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            string responseMessage = string.IsNullOrEmpty(name)
                ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
                : $"Hello, {name}. This HTTP triggered function executed successfully.";

            if (!string.IsNullOrEmpty(name))
            {
                // Add a JSON document to the output container.
                await documentsOut.AddAsync(new
                {
                    // create a random ID
                    id = System.Guid.NewGuid().ToString(),
                    name = name
                });
            }

            return new OkObjectResult(responseMessage);
        }
    }
}
A local.settings.json file with a cosmoDbConnectionString setting that contains a MongoDB connection string.
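For reference, a minimal sketch of what that local.settings.json might contain (the storage and runtime entries are the usual defaults; the connection string value is elided):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "cosmoDbConnectionString": "mongodb://..."
  }
}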
When I run the function, I get this:
[2022-04-21T17:40:34.078Z] Executed 'TestFunc' (Failed, Id=b69a625c-9055-48bd-a5fb-d3c3b3a6fb9b, Duration=4ms)
[2022-04-21T17:40:34.079Z] System.Private.CoreLib: Exception while executing function: TestFunc. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'documentsOut'. Microsoft.Azure.DocumentDB.Core: Value cannot be null. (Parameter 'authKeyOrResourceToken | secureAuthKey').
My guess is that it's expecting a Core (SQL) API database, with another kind of access token.
My question:
Is it possible to connect to an Azure Cosmos DB for MongoDB from an Azure Function?
If you're using the out-of-the-box bindings, you can only use Cosmos DB's SQL API.
You can totally use the MongoDB API, but you'd have to install a MongoDB client SDK and work with your data programmatically (just like you'd do with any other code-oriented approach).
Since your sample code takes data in and writes it out to Cosmos DB, you'd do your writes via MongoDB's Node/C#/Python/etc. driver (I believe they still call them drivers), which effectively gives you a db.collection.insert( {} ) or something more complex.
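For example, a minimal sketch of that write with the MongoDB C# driver (the MongoDB.Driver NuGet package); the database and collection names are taken from the question, everything else here is an assumption:

using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public static class MongoWriter
{
    // Reuse a single client across function invocations; MongoClient is thread-safe.
    private static readonly MongoClient Client =
        new MongoClient(Environment.GetEnvironmentVariable("cosmoDbConnectionString"));

    // Call this from the function body instead of documentsOut.AddAsync(...).
    public static async Task InsertNameAsync(string name)
    {
        var collection = Client
            .GetDatabase("cosmodb-take-a-number")
            .GetCollection<BsonDocument>("take-a-number");

        await collection.InsertOneAsync(new BsonDocument
        {
            { "id", Guid.NewGuid().ToString() },
            { "name", name }
        });
    }
}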
More info about Cosmos DB bindings here.

Is it possible to write raw SQL (PostgreSQL) to RDS using an AWS Glue/Spark shell?

I have a Glue/Connection for an RDS/PostgreSQL DB, pre-built via CloudFormation, which works fine in a Glue/Scala Spark shell via the getJDBCSink API to write a DataFrame to that DB.
But I also need to write plain SQL to the same DB, like create index ... or create table ... etc.
How can I send that sort of statement from the same Glue/Spark shell?
In Python, you can provide the pg8000 dependency to the Glue Spark jobs and then run the SQL commands by establishing a connection to the RDS instance using pg8000.
In Scala you can directly establish a JDBC connection without the need for any external library as far as the driver is concerned; the PostgreSQL driver is available in AWS Glue.
You can create the connection like this:
import java.sql.{Connection, DriverManager, ResultSet}

object pgconn extends App {
  println("Postgres connector")

  // Load the PostgreSQL JDBC driver (bundled with AWS Glue).
  classOf[org.postgresql.Driver]

  val con_str = "jdbc:postgresql://localhost:5432/DB_NAME?user=DB_USER"
  val conn = DriverManager.getConnection(con_str)
  try {
    val stm = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
    val rs = stm.executeQuery("SELECT * from Users")
    while (rs.next) {
      println(rs.getString("quote"))
    }
  } finally {
    conn.close()
  }
}
or follow this blog

MongoDB: A timeout occured after 30000ms selecting a server using CompositeServerSelector

I'm completely stumped. I am using the latest C# driver (2.3.0.157) and the latest MongoDB (3.2). The DB is running as a standalone setup with no replication or sharding. I've tried running locally on Windows as well as remotely on Amazon Linux.
I keep getting a timeout error, but mysteriously it sometimes just works (maybe once every 20-30 attempts).
I am creating the connection as such:
private static readonly string ConnectionString = ConfigurationManager.ConnectionStrings["MongoDB"].ToString();
private static readonly string DataBase = ConfigurationManager.ConnectionStrings["MongoDBDatabase"].ToString();
private static IMongoDatabase _database;

public static IMongoDatabase GetDatabase(string database)
{
    if (_database == null)
    {
        var client = new MongoClient(ConnectionString);
        _database = client.GetDatabase(database);
    }
    return _database;
}
And calling it like this:
public static List<Earnings> GetEarnings()
{
    var db = GetDatabase(DataBase);
    if (db == null) return new List<Earnings>();
    var logs = db.GetCollection<Earnings>("EarningsData");
    var all = logs.Find(new BsonDocument()).ToEnumerable().OrderBy(x => x.Symbol).ToList();
    return all;
}
And it times out on the logs.Find part of the method. Here's the full message:
Additional information:
A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode = Primary, TagSets = [] } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "Direct", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "XX.XX.XX.XX:27017" }", EndPoint: "XX.XX.XX.XX:27017", State: "Disconnected", Type: "Unknown" }] }.
I've tried using the fully qualified host name, adding connect=direct and connect=replicaSet to the connection string, using MongoClientSettings instead of the connection string, and everything else I could possibly find on forums and Stack Overflow. I'm at a loss and not even sure where to look next. Any advice?
I should also add that I can connect fine from the command line and Robomongo...
We finally figured out how to work around this issue, but I still don't understand what's happening. In our case, we have a server that spawns ~10 SignalR hubs that get their data from MongoDB. It seems that when the app was starting up it made several rapid calls to MongoDB to get the initial set of data, and while that occasionally worked, most times it didn't. We ended up solving this by adding a one-second delay between loading each SignalR hub, so the initial queries were staggered a bit and we didn't have contention.
The weird thing is that none of these collections have a large amount of data; the initial load is usually < 100 documents per collection (max). Once things are initialized, it doesn't seem to matter how often we hit MongoDB. It only seems to affect the initial load.
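In code, the workaround was nothing fancier than staggering the hub initialization, along these lines (a sketch; hubs and LoadHubAsync are hypothetical stand-ins for the app's own start-up code):

foreach (var hub in hubs)
{
    await LoadHubAsync(hub);   // runs the hub's initial MongoDB queries
    await Task.Delay(1000);    // one-second gap so the first loads don't contend
}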
An old topic, but I was getting a similar error (2.11.0-beta2, netcoreapp3.1), and then I realised DocumentDB is restricted to connectivity within the same VPC. It's mentioned here:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
Amazon DocumentDB (with MongoDB compatibility) clusters are deployed within an Amazon Virtual Private Cloud (Amazon VPC). They can be accessed directly by Amazon EC2 instances or other AWS services that are deployed in the same Amazon VPC. Additionally, Amazon DocumentDB can be accessed by EC2 instances or other AWS services in different VPCs in the same AWS Region or other Regions via VPC peering.
Check you're in the same VPC. If not, good luck.

Seed Database When Deploying to Azure Website

I am trying to seed the Azure SQL database associated with an Azure Website, using an external resource (a CSV file).
I am able to seed the database in the development environment using the EF migrations Seed method and a CSV file, as defined below in the Migration.cs file. Note: the CSV file's Build Action is set to Embedded Resource in the Visual Studio project.
public void SeedData(WebApp.Models.ApplicationDbContext Context)
{
    Assembly assembly = Assembly.GetExecutingAssembly();
    string resourceName = "WebApp.SeedData.Name.csv";

    using (Stream stream = assembly.GetManifestResourceStream(resourceName))
    {
        using (StreamReader reader = new StreamReader(stream, Encoding.UTF8))
        {
            CsvReader csvReader = new CsvReader(reader);
            csvReader.Configuration.WillThrowOnMissingField = false;
            var records = csvReader.GetRecords<Product>().ToArray();
            foreach (Product record in records)
            {
                Context.Products.AddOrUpdate(record);
            }
        }
    }
    Context.SaveChanges();
}
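For the Seed method to run, SeedData has to be invoked from the migrations Configuration class; a minimal sketch of the usual EF6 wiring (the class and namespace names are assumptions):

internal sealed class Configuration : DbMigrationsConfiguration<WebApp.Models.ApplicationDbContext>
{
    protected override void Seed(WebApp.Models.ApplicationDbContext context)
    {
        // Called after migrations are applied (Update-Database locally,
        // or "Execute Code First Migrations" on deploy).
        SeedData(context);
    }
}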
When I deploy to Azure from VS2013 and select Execute Code First Migrations, the database does not get seeded.
UPDATE
This is now working after I did a clean build, then built the project, and then deployed the web site with Execute Code First Migrations selected.

How to detach a LocalDB (SQL Server Express) file in code

When using LocalDB .mdf files in deployment, you will often want to move, delete, or back up the database file.
It is paramount to detach this file first, as simply deleting it will cause errors because LocalDB still keeps a registration of it.
So how is a LocalDB .mdf file detached in code?
I had to string together the answer from several places, so I will post it here:
Note: manually detaching the .mdf file from Visual Studio, through the SQL Server Object Explorer, is still possible even after the file has been deleted without detaching it first.
''' <summary>
''' Detach a database from LocalDB. This MUST be done prior to deleting it. It must also be done after an inadvertent (or ill-advised) manual delete.
''' </summary>
''' <param name="dbName">The NAME of the database, not its filename.</param>
''' <remarks></remarks>
Private Sub DetachDatabase(dbName As String)
    Try
        'Close the connection to the database.
        myViewModel.CloseDatabase()

        'Connect to the MASTER database in order to execute the detach command on it.
        Dim connectionString = String.Format("Data Source=(LocalDB)\v11.0;Initial Catalog=master;Integrated Security=True")
        Using connection As New SqlConnection(connectionString)
            connection.Open()
            Dim cmd = connection.CreateCommand

            '--Before the database file can be detached from code, the workaround below has to be applied.
            'http://web.archive.org/web/20130429051616/http://gunnalag.wordpress.com/2012/02/27/fix-cannot-detach-the-database-dbname-because-it-is-currently-in-use-microsoft-sql-server-error-3703
            cmd.CommandText = String.Format("ALTER DATABASE [{0}] SET OFFLINE WITH ROLLBACK IMMEDIATE", dbName)
            cmd.ExecuteNonQuery()
            '--

            '--Now detach
            cmd.CommandText = String.Format("exec sp_detach_db '{0}'", dbName)
            cmd.ExecuteNonQuery()
            '--
        End Using
    Catch ex As Exception
        'Do something meaningful here.
    End Try
End Sub
I had the same issue and was thinking about how to deal with it.
There are 3 approaches.
Detach at the end of (or during) working with the database
I didn't find a way to close the connection in LINQ to SQL, but actually it is not needed. Simply execute the following code:
var db = @"c:\blablabla\database1.mdf";
using (var master = new DataContext(@"Data Source=(LocalDB)\v11.0;Initial Catalog=master;Integrated Security=True"))
{
    master.ExecuteCommand(@"ALTER DATABASE [{0}] SET OFFLINE WITH ROLLBACK IMMEDIATE", db);
    master.ExecuteCommand(@"exec sp_detach_db '{0}'", db);
}
and make sure nothing tries to query the db afterwards (or you will get it attached again).
Detach on start
Before you have made any connection to the db, detaching is as simple as:
var db = @"c:\blablabla\database1.mdf";
using (var master = new DataContext(@"Data Source=(LocalDB)\v11.0;Initial Catalog=master;Integrated Security=True"))
    master.ExecuteCommand(@"exec sp_detach_db '{0}'", db);
This suits my needs very well, because I do not care about a delay at application start (in this case I always have to attach the db), but it fixes any kind of
System.Data.SqlClient.SqlException (0x80131904): Database 'c:\blablabla\database1.mdf' already exists. Choose a different database name.
which occurs if the database file has been deleted and you try to create it programmatically:
// DataContext
if (!DatabaseExists())
    CreateDatabase();
Another way
You can also run the command-line tool sqllocaldb like this:
var start = new ProcessStartInfo("sqllocaldb", "stop v11.0");
start.WindowStyle = ProcessWindowStyle.Hidden;
using (var stop = Process.Start(start))
    stop.WaitForExit();

start.Arguments = "delete v11.0";
using (var delete = Process.Start(start))
    delete.WaitForExit();
It will stop the server, detaching all databases. If other applications are using LocalDB, they will experience an attaching delay the next time they run a query.
Here is my solution for Entity Framework Core 1.0.
As you can see, the database name can be used with its full file path.
var dbf = fileDlg.FileName;
var options = new DbContextOptionsBuilder();
options.UseSqlServer($@"Server=(localdb)\mssqllocaldb;Initial Catalog=master;MultipleActiveResultSets=False;Integrated Security=True");

using (var master = new DbContext(options.Options))
{
    master.Database.ExecuteSqlCommand($"ALTER DATABASE [{dbf}] SET OFFLINE WITH ROLLBACK IMMEDIATE");
    master.Database.ExecuteSqlCommand($"exec sp_detach_db '{dbf}'");
}