MongoDB C# Driver database.GetCollection and magic strings

Just getting into the NoSQL stuff, so forgive me if this is a simple question. I am trying to implement a repository-type pattern, using a generic repository for the more common operations.
One thing I have run into that is killing this idea is that, in order to get the collection you plan to work with, you have to pass a string value for the name of the collection.
var collection = database.GetCollection<Entity>("entities");
This means that I either have to hard-code my collection names or code up a dictionary somewhere to act as a lookup, so that I can map the object type to a collection name.
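For example, the sort of lookup I have in mind would be something like this (just a sketch; the map class and its contents are hypothetical):
public static class CollectionNameMap
{
    // Hypothetical lookup from entity type to collection name.
    private static readonly Dictionary<Type, string> Names = new Dictionary<Type, string>
    {
        { typeof(Entity), "entities" }
    };

    public static string For<T>()
    {
        return Names[typeof(T)];
    }
}
// usage:
var collection = database.GetCollection<Entity>(CollectionNameMap.For<Entity>());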
How is everyone else handling this?

What you can do is "semi-hardcode": put the name of the collection in a static field on the class and refer to it:
public class Entity {
    public static readonly string Name = "entities";
}
var collection = database.GetCollection<Entity>(Entity.Name);

I wrote a class to manage DB access.
First you need a base class for all entities:
public abstract class Entity
{
    public ObjectId Id { set; get; }
}
then a static class to manage it all:
public static class MongoDB
{
    private static string connectionString = "mongodb://localhost";
    public static string DatabaseName { get { return "test"; } }

    private static MongoServer _server;
    private static MongoDatabase _database;

    public static MongoServer Server
    {
        get
        {
            if (_server == null)
            {
                var client = new MongoClient(connectionString);
                _server = client.GetServer();
            }
            return _server;
        }
    }

    public static MongoDatabase DB
    {
        get
        {
            if (_database == null)
                _database = Server.GetDatabase(MongoDB.DatabaseName);
            return _database;
        }
    }

    public static MongoCollection<T> GetCollection<T>() where T : Entity
    {
        return DB.GetCollection<T>(typeof(T).FullName);
    }

    public static List<T> GetEntityList<T>() where T : Entity
    {
        var collection = MongoDB.DB.GetCollection<T>(typeof(T).FullName);
        return collection.FindAll().ToList<T>();
    }

    public static void InsertEntity<T>(T entity) where T : Entity
    {
        GetCollection<T>().Save(entity);
    }
}
then use it like this:
public class SomeEntity : Entity { public string Name {set;get;} }
MongoDB.InsertEntity<SomeEntity>(new SomeEntity(){ Name = "ashkan" });
List<SomeEntity> someEntities = MongoDB.GetEntityList<SomeEntity>();

I finally found an approach that is very useful for me: all my Mongo collections follow an underscore naming convention, so I made a simple string extension to translate the POCO (camel case) naming convention to my Mongo convention.
private readonly IMongoDatabase _db;

public IMongoCollection<TCollection> GetCollection<TCollection>() =>
    _db.GetCollection<TCollection>(typeof(TCollection).Name.MongifyToCollection());
This method lives in a class made for handling Mongo via dependency injection; it also wraps the default GetCollection to make it a bit more OO:
public class MongoContext : IMongoContext
{
    private readonly IMongoDatabase _db;

    public MongoContext()
    {
        var connectionString = MongoUrl.Create(ConfigurationManager.ConnectionStrings["mongo"].ConnectionString);
        var client = new MongoClient(connectionString);
        _db = client.GetDatabase(connectionString.DatabaseName);
        RegisterConventions();
    }

    public IMongoCollection<TCollection> GetCollection<TCollection>() =>
        _db.GetCollection<TCollection>(typeof(TCollection).Name.MongifyToCollection());

    ...
And the extension:
// It may require some improvements, but it is simple enough for my needs at the moment
public static string MongifyToCollection(this string source)
{
    var result = source.Mongify().Pluralize(); // simple extensions to lowercase/underscore the name and pluralize it
    return result;
}
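The Mongify and Pluralize helpers are not shown in the answer; a rough sketch of what they could look like, assuming a snake_case convention (the pluralization here is deliberately naive and purely illustrative):
using System.Text.RegularExpressions;

public static class MongoNamingExtensions
{
    // Illustrative only: "OrderLineItem" -> "order_line_item".
    public static string Mongify(this string source)
    {
        return Regex.Replace(source, "(?<=[a-z0-9])([A-Z])", "_$1").ToLowerInvariant();
    }

    // Deliberately naive pluralization: "order_line_item" -> "order_line_items".
    public static string Pluralize(this string source)
    {
        return source.EndsWith("s") ? source : source + "s";
    }
}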

Related

How to use dynamic connection string in mongodbcontext in C#

I would like to connect to the database specified in the connection string. I have multiple connection strings for different clients.
public class MongoContext : IMongoContext
{
    // Members used by the methods below
    private readonly IConfiguration _configuration;
    private readonly List<Func<Task>> _commands;
    public MongoClient MongoClient { get; set; }
    public IMongoDatabase Database { get; set; }

    public MongoContext(IConfiguration configuration)
    {
        _configuration = configuration;
        // Every command will be stored and it'll be processed at SaveChanges
        _commands = new List<Func<Task>>();
    }

    private void ConfigureMongo()
    {
        if (MongoClient != null)
        {
            return;
        }
        // Configure mongo (You can inject the config, just to simplify)
        MongoClient = new MongoClient(_configuration["MongoSettings:Connection"]);
        Database = MongoClient.GetDatabase(_configuration["MongoSettings:DatabaseName"]);
    }

    public IMongoCollection<T> GetCollection<T>(string name)
    {
        ConfigureMongo();
        return Database.GetCollection<T>(name);
    }
}
My repository class is:
public abstract class BaseRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    protected readonly IMongoContext Context;
    protected IMongoCollection<TEntity> DbSet;

    protected BaseRepository(IMongoContext context, int parentID)
    {
        Context = context;
        DbSet = Context.GetCollection<TEntity>(typeof(TEntity).Name);
    }
}
For each request, I want to set the connection string dynamically based on the UserID.
Not sure I've fully understood what you're asking, but if you need to connect to more than one cluster, you should create a separate MongoClient for each cluster.
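If the goal is one connection string per user/tenant, a minimal sketch of that idea is to cache one MongoClient per connection string and pick the right one per request (how a UserID maps to a connection string is left out here; the cache class below is hypothetical):
using System.Collections.Concurrent;
using MongoDB.Driver;

public static class MongoClientCache
{
    // MongoClient is thread-safe and meant to be reused, so keep one per connection string.
    private static readonly ConcurrentDictionary<string, MongoClient> Clients =
        new ConcurrentDictionary<string, MongoClient>();

    public static IMongoDatabase GetDatabase(string connectionString)
    {
        var url = MongoUrl.Create(connectionString);
        var client = Clients.GetOrAdd(url.ToString(), _ => new MongoClient(url));
        return client.GetDatabase(url.DatabaseName);
    }
}
The repository would then resolve the current user's connection string (from configuration, a tenant store, etc.) and ask this cache for the matching database, instead of holding a single IMongoContext.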

Custom DynamoDb TableNameResolver not being called when using CrudRepository

I am testing DynamoDB tables and want to set up different table names for the prod and dev environments, using the keyword "dev" for development and "prod" for production.
I have a POJO
@DynamoDBTable(tableName = "abc_xy_dev_MyProjectName_Employee")
public class Employee implements Cloneable {
}
On Prod I want its name to be abc_xy_prod_MyProjectName_Employee.
So, I wrote a TableNameResolver
public static class MyTableNameResolver implements TableNameResolver {

    public static final MyTableNameResolver INSTANCE = new MyTableNameResolver();

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        final TableNameOverride override = config.getTableNameOverride();
        String tableNameToReturn = null;
        if (override != null) {
            final String tableName = override.getTableName();
            if (tableName != null) {
                System.out.println("MyTableNameResolver ==================================");
                return tableName;
            }
        }
        String env = System.getenv("DEPLOYMENT_ENV");
        for (Annotation annotation : clazz.getAnnotations()) {
            if (annotation instanceof DynamoDBTable) {
                DynamoDBTable myAnnotation = (DynamoDBTable) annotation;
                if ("production".equals(env)) {
                    tableNameToReturn = myAnnotation.tableName().replace("_dev_", "_prod_");
                } else {
                    tableNameToReturn = myAnnotation.tableName();
                }
            }
        }
        return tableNameToReturn;
    }
}
This works by creating a table with the name abc_xy_prod_MyProjectName_Employee in production.
However, I have a repository with the following code
@EnableScan
public interface EmployeeRepository extends CrudRepository<Employee, String>
{
    @Override
    <S extends Employee> S save(S employee);

    Optional<Employee> findById(String id);

    @Override
    List<Employee> findAll();

    Optional<Employee> findByEmployeeNumber(String EmployeeNumber);
}
Thus, when I try to call the method findAll via a /test endpoint, I get the exception:
There was an unexpected error (type=Internal Server Error, status=500). User: arn:aws:iam::87668976786:user/svc_nac_ps_MyProjectName_prod is not authorized to perform: dynamodb:Scan on resource: :table/abc_xy_dev_MyProjectName_Employee (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: aksdnhLDFL)
i.e., MyTableNameResolver doesn't get called internally when the repository methods are executed. It still uses the table name abc_xy_dev_MyProjectName_Employee given in the annotation @DynamoDBTable(tableName = "abc_xy_dev_MyProjectName_Employee").
You are using Spring Data (JPA-style repositories) as the DynamoDB persistence integration.
The configuration below can be used to set a table name override as part of the Spring Boot configuration.
A sample example can be found at https://github.com/ganesara/SpringExamples/tree/master/spring-dynamo
Map the DynamoDB repository to a user-defined mapper config reference:
@EnableDynamoDBRepositories(basePackages = "home.poc.spring", dynamoDBMapperConfigRef = "dynamoDBMapperConfig")
The mapper config for the table override is as below:
@Bean
public DynamoDBMapperConfig dynamoDBMapperConfig() {
    DynamoDBMapperConfig mapperConfig = new DynamoDBMapperConfig.Builder()
            .withTableNameOverride(DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix("PROD_"))
            .build();
    return mapperConfig;
}
Full configuration for reference
@Configuration
@EnableDynamoDBRepositories(basePackages = "home.poc.spring", dynamoDBMapperConfigRef = "dynamoDBMapperConfig")
public class DynamoDBConfig {

    @Value("${amazon.dynamodb.endpoint}")
    private String amazonDynamoDBEndpoint;

    @Value("${amazon.aws.accesskey}")
    private String amazonAWSAccessKey;

    @Value("${amazon.aws.secretkey}")
    private String amazonAWSSecretKey;

    @Bean
    public AmazonDynamoDB amazonDynamoDB() {
        AmazonDynamoDB amazonDynamoDB = new AmazonDynamoDBClient(amazonAWSCredentials());
        if (!StringUtils.isEmpty(amazonDynamoDBEndpoint)) {
            amazonDynamoDB.setEndpoint(amazonDynamoDBEndpoint);
        }
        return amazonDynamoDB;
    }

    @Bean
    public AWSCredentials amazonAWSCredentials() {
        return new BasicAWSCredentials(amazonAWSAccessKey, amazonAWSSecretKey);
    }

    @Bean
    public DynamoDBMapperConfig dynamoDBMapperConfig() {
        DynamoDBMapperConfig mapperConfig = new DynamoDBMapperConfig.Builder()
                .withTableNameOverride(DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix("PROD_"))
                .build();
        return mapperConfig;
    }

    @Bean
    public DynamoDBMapper dynamoDBMapper() {
        return new DynamoDBMapper(amazonDynamoDB(), dynamoDBMapperConfig());
    }
}
You are using DynamoDBMapper (the Java SDK). Here is how I use it. Let's say I have a table called Users, with an associated User POJO. In DynamoDB I have DEV_Users and LIVE_Users.
I have an environment variable 'ApplicationEnvironmentName' which is either DEV or LIVE.
I create a custom DynamoDBMapper like this:
public class ApplicationDynamoMapper {

    private static Map<String, DynamoDBMapper> mappers = new HashMap<>();

    private static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withRegion(System.getProperty("DynamoDbRegion")).build();

    protected ApplicationDynamoMapper() {
        // Exists only to defeat instantiation.
    }

    public static DynamoDBMapper getInstance(final String tableName) {
        final ApplicationLogContext LOG = new ApplicationLogContext();
        DynamoDBMapper mapper = mappers.get(tableName);
        if (mapper == null) {
            final String tableNameOverride = System.getProperty("ApplicationEnvironmentName") + "_" + tableName;
            LOG.debug("Creating DynamoDBMapper with overridden tablename {}.", tableNameOverride);
            final DynamoDBMapperConfig mapperConfig = new DynamoDBMapperConfig.Builder()
                    .withTableNameOverride(TableNameOverride.withTableNameReplacement(tableNameOverride)).build();
            mapper = new DynamoDBMapper(client, mapperConfig);
            mappers.put(tableName, mapper);
        }
        return mapper;
    }
}
My Users POJO looks like this:
@DynamoDBTable(tableName = "Users")
public class User {
    ...
}
When I want to use the database I create an application mapper like this:
DynamoDBMapper userMapper = ApplicationDynamoMapper.getInstance(User.DB_TABLE_NAME);
If I wanted to load a User, I would do it like this:
User user = userMapper.load(User.class, userId);
Hope that helps.

Entity Framework Core 1.1 In Memory Database fails adding new entities

I am using the following code in a unit test for the test setup:
var simpleEntity = new SimpleEntity();
var complexEntity = new ComplexEntity
{
    JoinEntity1List = new List<JoinEntity1>
    {
        new JoinEntity1
        {
            JoinEntity2List = new List<JoinEntity2>
            {
                new JoinEntity2
                {
                    SimpleEntity = simpleEntity
                }
            }
        }
    }
};
var anotherEntity = new AnotherEntity
{
    ComplexEntity = complexEntity
};

using (var context = databaseFixture.GetContext())
{
    context.Add(anotherEntity);
    await context.SaveChangesAsync();
}
When SaveChangesAsync is reached EF throws an ArgumentException with the following message:
An item with the same key has already been added. Key: 1
I'm using a fixture as well for the unit test class, which populates the database with objects of the same types, though for this test I want this particular setup, so I want to add these new entities to the in-memory database. I've tried adding the entities on the DbSet (not the DbContext) and adding all three entities separately, to no avail. I can, however, add "simpleEntity" separately (because it is not added in the fixture), but EF complains as soon as I try to add "complexEntity" or "anotherEntity".
It seems like the EF in-memory database cannot handle several Adds over different instances of the context. Is there any workaround for this, or am I doing something wrong in my setup?
The databaseFixture in this case is an instance of this class:
namespace Test.Shared.Fixture
{
    using Data.Access;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.DependencyInjection;

    public class InMemoryDatabaseFixture : IDatabaseFixture
    {
        private readonly DbContextOptions<MyContext> contextOptions;

        public InMemoryDatabaseFixture()
        {
            var serviceProvider = new ServiceCollection()
                .AddEntityFrameworkInMemoryDatabase()
                .BuildServiceProvider();

            var builder = new DbContextOptionsBuilder<MyContext>();
            builder.UseInMemoryDatabase()
                .UseInternalServiceProvider(serviceProvider);
            contextOptions = builder.Options;
        }

        public MyContext GetContext()
        {
            return new MyContext(contextOptions);
        }
    }
}
You can solve this problem by using collection fixtures, so you can share the fixture across several test classes. This way you don't build your context several times, and thus you won't get this exception.
Some information about collection fixtures:
My own example:
[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
{ }

[Collection("Database collection")]
public class GetCitiesCmdHandlerTests : IClassFixture<MapperFixture>
{
    private readonly TecCoreDbContext _context;
    private readonly IMapper _mapper;

    public GetCitiesCmdHandlerTests(DatabaseFixture dbFixture, MapperFixture mapFixture)
    {
        _context = dbFixture.Context;
        _mapper = mapFixture.Mapper;
    }

    [Theory]
    [MemberData(nameof(HandleTestData))]
    public async void Handle_ShouldReturnCountries_AccordingToRequest(
        GetCitiesCommand command,
        int expectedCount)
    {
        (...)
    }

    public static readonly IEnumerable<object[]> HandleTestData
        = new List<object[]>
        {
            (...)
        };
}
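The DatabaseFixture used above isn't shown; a minimal sketch, assuming it exposes a shared TecCoreDbContext built much like the asker's InMemoryDatabaseFixture (the TecCoreDbContext constructor signature is an assumption, and the parameterless UseInMemoryDatabase overload matches the EF Core 1.1 usage in the question; newer versions require a database name):
public class DatabaseFixture : IDisposable
{
    public TecCoreDbContext Context { get; }

    public DatabaseFixture()
    {
        var options = new DbContextOptionsBuilder<TecCoreDbContext>()
            .UseInMemoryDatabase()
            .Options;
        Context = new TecCoreDbContext(options);
        // Seed the shared test data here once; every test class in the
        // "Database collection" then reuses this same context instance.
    }

    public void Dispose()
    {
        Context.Dispose();
    }
}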
Good luck,
Seb

How to change database schema on runtime in EF7 or EF core

My database has different schemas depending on user selections at runtime.
My code is below:
public partial class FashionContext : DbContext
{
    private string _schema;

    public FashionContext(string schema) : base()
    {
        _schema = schema;
    }

    public virtual DbSet<Style> Styles { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options.UseSqlServer(@"Server=.\sqlexpress;Database=inforfashionplm;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Style>()
            .ToTable("Style", schema: _schema);
    }
}
Upon testing, I created a context instance with 'schema1'.
So far so good.
But when I create another context instance with a different schema, 'schema2', the resulting data still comes from 'schema1'.
Here is the implementation:
using (var db = new FashionContext("schema1"))
{
    foreach (var style in db.Styles)
    {
        Console.WriteLine(style.Name);
    }
}
Console.ReadLine();
Console.Clear();

using (var db = new FashionContext("schema2"))
{
    foreach (var style in db.Styles)
    {
        Console.WriteLine(style.Name);
    }
}
Console.ReadLine();
Later I noticed that OnModelCreating is called only once, so it is never called again when you create a new context instance with the same connection string.
Is it possible to have a dynamic schema at runtime? Note: this was possible in EF6.
One possible way was mentioned above, but only briefly, so I will try to explain it with examples.
You need to override the default ModelCacheKeyFactory and ModelCacheKey.
CustomModelCacheKeyFactory.cs
internal sealed class CustomModelCacheKeyFactory<TContext> : ModelCacheKeyFactory
    where TContext : TenantDbContext<TContext>
{
    public override object Create(DbContext context)
    {
        return new CustomModelCacheKey<TContext>(context);
    }

    public CustomModelCacheKeyFactory([NotNull] ModelCacheKeyFactoryDependencies dependencies) : base(dependencies)
    {
    }
}
CustomModelCacheKey.cs; please review the overridden Equals and GetHashCode methods, they are not the best and should be improved.
internal sealed class CustomModelCacheKey<TContext> : ModelCacheKey where TContext : TenantDbContext<TContext>
{
    private readonly string _schema;

    public CustomModelCacheKey(DbContext context) : base(context)
    {
        _schema = (context as TContext)?.Schema;
    }

    protected override bool Equals(ModelCacheKey other)
    {
        return base.Equals(other) && (other as CustomModelCacheKey<TContext>)?._schema == _schema;
    }

    public override int GetHashCode()
    {
        var hashCode = base.GetHashCode();
        if (_schema != null)
        {
            hashCode ^= _schema.GetHashCode();
        }
        return hashCode;
    }
}
Register in DI.
builder.UseSqlServer(dbConfiguration.Connection)
    .ReplaceService<IModelCacheKeyFactory, CustomModelCacheKeyFactory<CustomContext>>();
Context sample.
public sealed class CustomContext : TenantDbContext<CustomContext>
{
    public CustomContext(DbContextOptions<CustomContext> options, string schema) : base(options, schema)
    {
    }
}
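A usage sketch (the connection string and queries are placeholders, and it is assumed the TenantDbContext base class, not shown here, applies its Schema in OnModelCreating); because the schema is part of the cache key, each schema now gets its own cached model instead of reusing the first one:
var options = new DbContextOptionsBuilder<CustomContext>()
    .UseSqlServer(connectionString)
    .ReplaceService<IModelCacheKeyFactory, CustomModelCacheKeyFactory<CustomContext>>()
    .Options;

// Each schema gets its own model cache entry, so both contexts map correctly.
using (var db = new CustomContext(options, "schema1")) { /* query schema1 tables */ }
using (var db = new CustomContext(options, "schema2")) { /* query schema2 tables */ }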
You can build the model externally and pass it into the DbContext using DbContextOptionsBuilder.UseModel().
Another (more advanced) alternative is to replace the IModelCacheKeyFactory to take the schema into account.
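A minimal sketch of the UseModel approach, assuming EF Core 3.x-style APIs (SqlServerConventionSetBuilder.Build and FinalizeModel; exact APIs vary by version) and a FashionContext constructor that accepts DbContextOptions:
// Build the model for the desired schema outside of the context...
var modelBuilder = new ModelBuilder(SqlServerConventionSetBuilder.Build());
modelBuilder.Entity<Style>().ToTable("Style", schema: "schema2");

// ...and hand the finished model to the options, bypassing the model cache.
var options = new DbContextOptionsBuilder<FashionContext>()
    .UseSqlServer(connectionString)
    .UseModel(modelBuilder.FinalizeModel())
    .Options;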
I found a way to recreate the compiled model on each context creation.
public partial class MyModel : DbContext
{
    private static DbConnection _connection
    {
        get
        {
            //return a new db connection
        }
    }

    private static DbCompiledModel _model
    {
        get
        {
            return CreateModel("schema name");
        }
    }

    public MyModel()
        : base(_connection, _model, false)
    {
    }

    private static DbCompiledModel CreateModel(string schema)
    {
        var modelBuilder = new DbModelBuilder();
        modelBuilder.HasDefaultSchema(schema);
        modelBuilder.Entity<entity1>().ToTable(schema + ".entity1");

        var builtModel = modelBuilder.Build(_connection);
        return builtModel.Compile();
    }
}

MongoDB IRepository db Connections

This is what I have so far with regard to my IRepository for MongoDB, and I was wondering whether or not I'm on the right lines.
public abstract class Repository<TEntity> : IRepository<TEntity> {

    private const string _connection = "mongodb://localhost:27017/?safe=true";
    private MongoDatabase _db;

    protected abstract string _collection { get; }

    public Repository() {
        this._db = MongoServer.Create(_connection).GetDatabase("Photos");
    }

    public IQueryable<TEntity> FindAll() {
        return this._db.GetCollection<TEntity>(_collection).FindAll().AsQueryable();
    }
}
This way I can create my PhotoRepository class that inherits from here and supplies the required _collection name.
I just want to make sure that I'm opening the connection to the db in the correct place and in the correct way.
Yes, this is fine. MongoServer.Create will return the same instance of MongoServer when passed the same connection string, so it is safe to call MongoServer.Create as many times as you want.
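For completeness, a concrete repository in this design might look something like the sketch below (the Photo entity and collection name are illustrative):
public class Photo
{
    public ObjectId Id { get; set; }
    public string Title { get; set; }
}

public class PhotoRepository : Repository<Photo>
{
    // Supplies the collection name required by the abstract base class.
    protected override string _collection
    {
        get { return "photos"; }
    }
}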