Persisting dynamic Groovy properties with GORM MongoDB

I am currently trying to persist the following class with the GORM MongoDB plugin for grails:
class Result {
    String url

    Result() {
    }

    static constraints = {
    }

    static mapWith = "mongo"

    static mapping = {
        collection "results"
        database "crawl"
    }
}
The code I'm running to persist this class is the following:
class ResultIntegrationTests {

    @Before
    void setUp() {
    }

    @After
    void tearDown() {
    }

    @Test
    void testSomething() {
        Result r = new Result();
        r.setUrl("http://heise.de")
        r.getMetaClass().setProperty("title", "This is how it ends!")
        println(r.getTitle())
        r.save(flush: true)
    }
}
This is the result in MongoDB:
{ "_id" : NumberLong(1), "url" : "http://heise.de", "version" : 0 }#
The url is properly persisted in MongoDB, but the dynamic property is somehow not seen by the mapper, although println(r.getTitle()) works perfectly fine.
I am new to Groovy, so I thought someone with a little more experience could help me out. Is there a way to make this dynamically added property visible to the mapping facility? If yes, how can I do that?
Thanks a lot for any advice...

Rather than adding random properties to the metaClass and hoping that Grails will both scan the metaClass looking for your random properties and then persist them, why not just add a Map to your domain class (or a new key/value domain class which Result can hasMany), so you can add arbitrary extra properties to it as you want? A sketch of the Map approach is shown below.
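For illustration, a minimal sketch of that idea, relying on GORM MongoDB embedding a Map of simple values in the document; the extras property name is made up for the example:

class Result {
    String url

    // Hypothetical catch-all map for ad-hoc key/value data;
    // simple values in a Map are embedded in the Mongo document.
    Map extras = [:]

    static mapWith = "mongo"

    static mapping = {
        collection "results"
        database "crawl"
    }
}

// Usage:
def r = new Result(url: "http://heise.de")
r.extras.title = "This is how it ends!"
r.save(flush: true)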

try this doc
@Test
void testSomething() {
    Result r = new Result();
    r.url = "http://heise.de"
    r['title'] = "This is how it ends!" // edit: forgot the subscript
    println r['title']
    r.save(flush: true)
}
BTW, instead of using GORM or Hibernate you can always use the Java driver API or GMongo directly, for example:
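A quick sketch of the GMongo route, assuming the GMongo dependency is on the classpath; the database and collection names are just examples:

import com.gmongo.GMongo

// GMongo wraps the MongoDB Java driver with Groovy-friendly syntax;
// dynamic keys simply become fields of the inserted document.
def mongo = new GMongo()           // defaults to localhost:27017
def db = mongo.getDB("crawl")
db.results.insert([url: "http://heise.de", title: "This is how it ends!"])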

Related

Specify connection string for a query with DbContextScope project

I am currently using Mehdi El Gueddari's DbContextScope project, I think by the book, and it's awesome. But today I came across a problem I'm unsure how to solve. I have a query that I need to execute using a different database login/user because it requires additional permissions. I can create another connection string in my web.config, but I'm not sure how to specify that this particular query should use the new connection string. Here is my usage:
In my logic layer:
private static IDbContextScopeFactory _dbContextFactory = new DbContextScopeFactory();

public static Guid GetFacilityID(string altID)
{
    ...
    using (_dbContextFactory.CreateReadOnly())
    {
        entity = entities.GetFacilityID(altID);
    }
}
That calls into my data layer which would look something like this:
private AmbientDbContextLocator _dbcLocator = new AmbientDbContextLocator();

protected CRMEntities DBContext
{
    get
    {
        var dbContext = _dbcLocator.Get<CRMEntities>();
        if (dbContext == null)
            throw new InvalidOperationException("No ambient DbContext....");
        return dbContext;
    }
}

public virtual Guid GetFacilityID(string altID)
{
    return DBContext.Set<Facility>().Where(f => f.altID == altID).Select(f => f.ID).FirstOrDefault();
}
Currently my connection string is set in the default way:
public partial class CRMEntities : DbContext
{
    public CRMEntities()
        : base("name=CRMEntities")
    { }
}
Is it possible for this specific query to use a different connection string and how?
I ended up modifying the source code in a way that feels slightly hacky, but it gets the job done for now. I created a new IAmbientDbContextLocator with a Get<TDbContext> overload that accepts a connection string:
public TDbContext Get<TDbContext>(string nameOrConnectionString) where TDbContext : DbContext
{
    var ambientDbContextScope = DbContextScope.GetAmbientScope();
    return ambientDbContextScope == null ? null : ambientDbContextScope.DbContexts.Get<TDbContext>(nameOrConnectionString);
}
Then I updated the DbContextCollection to pass this parameter to the DbContext's existing constructor overload. Last, I updated the DbContextCollection to maintain a Dictionary<KeyValuePair<Type, string>, DbContext> instead of a Dictionary<Type, DbContext> as its cached _initializedDbContexts, where the added string is the nameOrConnectionString param. In other words, I updated it to cache unique DbContext type/connection string pairs. A rough sketch of that idea follows.
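A minimal sketch of the caching change; names are approximate and this is not the actual DbContextScope source (it assumes the DbContext type exposes a constructor overload taking a nameOrConnectionString, as described above):

// Sketch only: cache one DbContext per (type, connection string) pair.
// Requires System, System.Collections.Generic and System.Data.Entity.
private readonly Dictionary<KeyValuePair<Type, string>, DbContext> _initializedDbContexts
    = new Dictionary<KeyValuePair<Type, string>, DbContext>();

public TDbContext Get<TDbContext>(string nameOrConnectionString) where TDbContext : DbContext
{
    var key = new KeyValuePair<Type, string>(typeof(TDbContext), nameOrConnectionString);

    DbContext dbContext;
    if (!_initializedDbContexts.TryGetValue(key, out dbContext))
    {
        // Assumes TDbContext has a (string nameOrConnectionString) constructor overload.
        dbContext = (DbContext)Activator.CreateInstance(typeof(TDbContext), nameOrConnectionString);
        _initializedDbContexts.Add(key, dbContext);
    }
    return (TDbContext)dbContext;
}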
Then I can get at the DbContext with the connection I need like this:
var dbContext = new CustomAmbientDbContextLocator().Get<CRMEntities>("name=CRMEntitiesAdmin");
Of course you'd have to be careful your code doesn't end up going through two different contexts/connection strings when it should be going through the same one. In my case I have them separated into two different data access class implementations.

MongoDB and Large Datasets when using a Repository pattern

Okay, so at work we are developing a system using MVC C# & MongoDB. When first developing, we decided it would probably be a good idea to follow the Repository pattern (what a pain in the ass!). Here is the code, to give an idea of what is currently implemented.
The MongoRepository class:
public class MongoRepository { }

public class MongoRepository<T> : MongoRepository, IRepository<T>
    where T : IEntity
{
    private MongoClient _client;
    private IMongoDatabase _database;
    private IMongoCollection<T> _collection;

    public string StoreName {
        get {
            return typeof(T).Name;
        }
    }

    public MongoRepository() {
        _client = new MongoClient(ConfigurationManager.AppSettings["MongoDatabaseURL"]);
        _database = _client.GetDatabase(ConfigurationManager.AppSettings["MongoDatabaseName"]);
        /* misc code here */
        Init();
    }

    public void Init() {
        _collection = _database.GetCollection<T>(StoreName);
    }

    public IQueryable<T> SearchFor() {
        return _collection.AsQueryable<T>();
    }
}
The IRepository interface class:
public interface IRepository { }

public interface IRepository<T> : IRepository
    where T : IEntity
{
    string StoreNamePrepend { get; set; }
    string StoreNameAppend { get; set; }
    IQueryable<T> SearchFor();
    /* misc code */
}
The repository is then instantiated using Ninject but without that it would look something like this (just to make this a simpler example):
MongoRepository<Client> clientCol = new MongoRepository<Client>();
Here is the code used for the search pages, which feeds into a controller action that outputs JSON for a table read by DataTables. Please note that the following uses Dynamic LINQ so that the query can be built from string input:
tmpFinalList = clientCol
    .SearchFor()
    .OrderBy(tmpOrder)  // tmpOrder = "ClientDescription DESC"
    .Skip(Start)        // Start = 99900
    .Take(PageLength)   // PageLength = 10
    .ToList();
Now the problem is that if the collection has a lot of records (99,905 to be exact), everything works fine as long as the data in a field isn't very large. For example, our Key field is a 5-character fixed-length string, and I can Skip and Take fine using this query. However, with a field like ClientDescription, which can be much longer, I can 'Sort' fine and 'Take' fine from the front of the query (i.e. Page 1), but when I page to the end with Skip = 99900 & Take = 10 it gives the following memory error:
An exception of type 'MongoDB.Driver.MongoCommandException' occurred in MongoDB.Driver.dll but was not handled in user code

Additional information: Command aggregate failed: exception: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in..
Okay, so that is easy enough to understand, I guess. I have had a look online, and almost everything suggested is to use aggregation with "allowDiskUse: true"; however, since I use IQueryable in IRepository, I cannot start using IAggregateFluent<> because that would mean exposing MongoDB-related classes to IRepository, which would go against IoC principles.
Is there any way to force IQueryable to use this, or does anyone know of a way for me to access IAggregateFluent without going against IoC principles?
One thing of interest to me is why the sort works for page 1 (Start = 0, Take = 10) but then fails when I page to the end... surely everything must be sorted for me to get the items in order for Page 1, so shouldn't (Start = 99900, Take = 10) need just the same amount of 'sorting', with MongoDB simply sending me the last few records? Why doesn't this error happen when both sorts are done?
ANSWER
Okay, so with the help of @craig-wilson, upgrading to the newest version of the MongoDB C# drivers and changing the following in MongoRepository fixes the problem:
public IQueryable<T> SearchFor() {
    return _collection.AsQueryable<T>(new AggregateOptions { AllowDiskUse = true });
}
I was getting a System.MissingMethodException, but this was caused by other copies of the MongoDB drivers that also needed updating.
When creating the IQueryable from an IMongoCollection, you can pass in the AggregateOptions which allow you to set AllowDiskUse.
https://github.com/mongodb/mongo-csharp-driver/blob/master/src/MongoDB.Driver/IMongoCollectionExtensions.cs#L53

How to use BeanWrapperFieldSetMapper to map a subset of fields?

I have a Spring Batch application where BeanWrapperFieldSetMapper is used to map fields using a prototype object. However, the CSV file that is being read (via a FlatFileItemReader) contains one (indicator) field that determines the mapping of another field. If the indicator field has a value of Y, then the value of the other field should be mapped to property foo; otherwise it should be mapped to property bar.
I know that I can use a custom FieldSetMapper to do this, but then I have to code the mapping of all the other fields (of which there are quite a few). Alternatively, I could do this after reading, via an ItemProcessor, but then my domain (prototype) object must have a property representing the indicator field (which I prefer not to do, since it is not really part of the business domain).
Is it possible to use a custom FieldSetMapper to map only these custom fields and delegate the other mappings to BeanWrapperFieldSetMapper? Or is there some other, better way to solve this?
Here is my current attempt to use a custom FieldSetMapper and delegate to BeanWrapperFieldSetMapper:
public class DelegatedFieldSetMapper extends BeanWrapperFieldSetMapper<MyProtoClass> {

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        String indicator = fieldSet.readString("indicator");
        Properties fieldProperties = fieldSet.getProperties();
        if (indicator.equalsIgnoreCase("y")) {
            fieldProperties.put("test.foo", fieldSet.readString("value"));
        } else {
            fieldProperties.put("test.bar", fieldSet.readString("value"));
        }
        fieldProperties.remove("indicator");
        Set<Object> keys = fieldProperties.keySet();
        List<String> names = new ArrayList<String>();
        List<String> values = new ArrayList<String>();
        for (Object key : keys) {
            names.add((String) key);
            values.add(fieldProperties.getProperty((String) key));
        }
        DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
        return super.mapFieldSet(domainObjectFieldSet);
    }
}
However, a FlatFileParseException is thrown. The relevant parts of the batch config class are as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Value("${file}")
    private File file;

    @Bean
    @Scope("prototype")
    public MyProtoClass myProtoClass() {
        return new MyProtoClass();
    }

    @Bean
    public ItemReader<MyProtoClass> reader(LineMapper<MyProtoClass> lineMapper) {
        FlatFileItemReader<MyProtoClass> flatFileItemReader = new FlatFileItemReader<MyProtoClass>();
        flatFileItemReader.setResource(new FileSystemResource(file));
        final int NUMBER_OF_HEADER_LINES = 1;
        flatFileItemReader.setLinesToSkip(NUMBER_OF_HEADER_LINES);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    @Bean
    public LineMapper<MyProtoClass> lineMapper(LineTokenizer lineTokenizer, FieldSetMapper<MyProtoClass> fieldSetMapper) {
        DefaultLineMapper<MyProtoClass> lineMapper = new DefaultLineMapper<MyProtoClass>();
        lineMapper.setLineTokenizer(lineTokenizer);
        lineMapper.setFieldSetMapper(fieldSetMapper);
        return lineMapper;
    }

    @Bean
    public LineTokenizer lineTokenizer() {
        DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
        lineTokenizer.setNames(new String[] {"value", "test.bar", "test.foo", "indicator"});
        return lineTokenizer;
    }

    @Bean
    public FieldSetMapper<MyProtoClass> fieldSetMapper(PropertyEditor emptyStringToNullPropertyEditor) {
        BeanWrapperFieldSetMapper<MyProtoClass> fieldSetMapper = new DelegatedFieldSetMapper();
        fieldSetMapper.setPrototypeBeanName("myProtoClass");
        Map<Class<String>, PropertyEditor> customEditors = new HashMap<Class<String>, PropertyEditor>();
        customEditors.put(String.class, emptyStringToNullPropertyEditor);
        fieldSetMapper.setCustomEditors(customEditors);
        return fieldSetMapper;
    }
Finally, the CSV flat file looks like this:
value,bar,foo,indicator
abc,,,y
xyz,,,n
Let's say that BatchWorkObject is the class to be mapped.
Here's sample code in Spring Boot style; only your custom logic needs to be added.
new BeanWrapperFieldSetMapper<BatchWorkObject>() {
    {
        this.setTargetType(BatchWorkObject.class);
    }

    @Override
    public BatchWorkObject mapFieldSet(FieldSet fs)
            throws BindException {
        BatchWorkObject tmp = super.mapFieldSet(fs);
        // your custom code here
        return tmp;
    }
});
The code actually accomplishes what is desired, except for one issue that results in the FlatFileParseException. The issue is in the DelegatedFieldSetMapper, here:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
To resolve, change to:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(values.toArray(new String[values.size()]), names.toArray(new String[names.size()]));
Write your own FieldSetMapper with a set of prepared delegates inside. Those delegates are pre-built for every different kind of field mapping.
In your mapper, route to the correct delegate based on the indicator field (with a Classifier, for example). I can't see any other way, but this solution is quite easy and straightforward to maintain; a rough sketch of the routing idea follows.
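A minimal sketch of that routing idea, reusing the field and class names from the question; RoutingFieldSetMapper is a made-up name, and a plain Map stands in for a full Classifier:

import java.util.Map;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

// Routes each FieldSet to a pre-built delegate keyed by the indicator value.
public class RoutingFieldSetMapper implements FieldSetMapper<MyProtoClass> {

    private final Map<String, FieldSetMapper<MyProtoClass>> delegates;

    public RoutingFieldSetMapper(Map<String, FieldSetMapper<MyProtoClass>> delegates) {
        this.delegates = delegates; // e.g. "y" -> mapper targeting foo, "n" -> mapper targeting bar
    }

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        String indicator = fieldSet.readString("indicator").toLowerCase();
        FieldSetMapper<MyProtoClass> delegate = delegates.get(indicator);
        if (delegate == null) {
            throw new IllegalStateException("No delegate configured for indicator: " + indicator);
        }
        return delegate.mapFieldSet(fieldSet);
    }
}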
Processing based on the input format/data can also be done with a custom implementation of ItemProcessor, which either changes values in the same entity (the one populated by the ItemReader) or creates a new output entity. A sketch of that approach is shown below.
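For illustration, a sketch of the ItemProcessor route; RawRecord is a hypothetical intermediate class carrying the raw value and indicator fields, and the nested test.foo/test.bar accessors on MyProtoClass are assumed:

import org.springframework.batch.item.ItemProcessor;

// Moves the raw "value" into foo or bar depending on the indicator flag.
public class IndicatorRoutingProcessor implements ItemProcessor<RawRecord, MyProtoClass> {

    @Override
    public MyProtoClass process(RawRecord item) {
        MyProtoClass out = new MyProtoClass();
        if ("y".equalsIgnoreCase(item.getIndicator())) {
            out.getTest().setFoo(item.getValue());
        } else {
            out.getTest().setBar(item.getValue());
        }
        return out;
    }
}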

Using Entity Framework 6, Custom Code-First Migrations & CSharpMigrationCodeGenerator

I recently started using Entity Framework 6's code-first custom migrations. It's working well, but one thing I'd like to do is generate a pair of CreateIndex() and DropIndex() statements when renaming an index, instead of the RenameIndex() call that the default CSharpMigrationCodeGenerator wants to use.
For example, I currently use a column annotation in a fluent mapping to rename an index like this:
Property(x => x.TeacherId).HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute("IX_Students_TeacherId")));
The problem here is that by default EF6 wants to generate the following code when I add a new migration to capture this change to the model:
using System;
using System.Data.Entity.Migrations;

namespace MyApp.Migrations
{
    public partial class RenameIndexTest : DbMigration
    {
        public override void Up()
        {
            // BAD: [RenameIndex] will generate a "EXEC sp_rename" statement.
            RenameIndex(table: "dbo.Students", name: "IX_TeacherId", newName: "IX_Students_TeacherId");
        }

        public override void Down()
        {
            RenameIndex(table: "dbo.Students", name: "IX_Students_TeacherId", newName: "IX_TeacherId");
        }
    }
}
But what I really need EF6 to generate is this:
using System;
using System.Data.Entity.Migrations;

namespace MyApp.Migrations
{
    public partial class RenameIndexTest : DbMigration
    {
        public override void Up()
        {
            // GOOD: We generate separate SQL statements to drop & add the index.
            DropIndex(table: "dbo.Students", name: "IX_TeacherId");
            CreateIndex(table: "dbo.Students", name: "IX_Students_TeacherId", column: "TeacherId");
        }

        public override void Down()
        {
            DropIndex(table: "dbo.Students", name: "IX_Students_TeacherId");
            CreateIndex(table: "dbo.Students", name: "IX_TeacherId", column: "TeacherId");
        }
    }
}
Our data team has a hard requirement that developers use T-SQL DROP/CREATE statements when renaming indexes. Thus far, I haven't been able to find a way to override the behavior of the RenameIndex() statement using a custom class that derives from CSharpMigrationCodeGenerator, because the RenameIndexOperation class doesn't carry any information about the column(s) the index was created on.
This is as far as I've been able to get on my own:
namespace MyApp.Migrations
{
    internal class CustomCSharpMigrationCodeGenerator : CSharpMigrationCodeGenerator
    {
        protected override string Generate(IEnumerable<MigrationOperation> operations, string @namespace, string className)
        {
            var customizedOperations = new List<MigrationOperation>();

            foreach (var operation in operations)
            {
                if (operation is RenameIndexOperation)
                {
                    var renameIndexOperation = operation as RenameIndexOperation;

                    var dropIndexOperation = new DropIndexOperation(operation.AnonymousArguments)
                    {
                        Table = renameIndexOperation.Table,
                        Name = renameIndexOperation.Name
                    };

                    var createIndexOperation = new CreateIndexOperation(operation.AnonymousArguments)
                    {
                        Table = renameIndexOperation.Table,
                        Name = renameIndexOperation.NewName,
                        // HELP: How do I get this information about the existing index?
                        // HELP: How do I specify what columns the index should be created on?
                        IsUnique = false,
                        IsClustered = false
                    };

                    // Do not generate a RenameIndex() statement; instead, generate a pair of DropIndex() and CreateIndex() statements.
                    customizedOperations.Add(dropIndexOperation);
                    customizedOperations.Add(createIndexOperation);
                }
                else
                {
                    customizedOperations.Add(operation);
                }
            }

            return base.Generate(customizedOperations, @namespace, className);
        }
    }
}
Does this make sense? And more importantly, does anyone have any suggestions or ideas on how to proceed? Either way, thanks in advance!
I'm closing this question out. I was never able to do exactly what I set out to do... it wasn't a dealbreaker; I was simply hoping EF6 had some way of (easily) exerting control over the names of the indexes being created.
IIRC, I did what @steve-greene suggested and manually specified the index names using the Sql() method, as sketched below.
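For reference, a minimal sketch of that approach, hand-editing the generated migration to issue DROP/CREATE through Sql(); the table, index, and column names follow the earlier example:

using System.Data.Entity.Migrations;

namespace MyApp.Migrations
{
    public partial class RenameIndexTest : DbMigration
    {
        public override void Up()
        {
            // Drop and recreate instead of letting EF emit EXEC sp_rename.
            Sql("DROP INDEX [IX_TeacherId] ON [dbo].[Students]");
            Sql("CREATE INDEX [IX_Students_TeacherId] ON [dbo].[Students] ([TeacherId])");
        }

        public override void Down()
        {
            Sql("DROP INDEX [IX_Students_TeacherId] ON [dbo].[Students]");
            Sql("CREATE INDEX [IX_TeacherId] ON [dbo].[Students] ([TeacherId])");
        }
    }
}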

How to support embedded maps (with custom value types) in MongoDB GORM?

I would like to have an embedded document referred to by a map (as in 'class A' below). The environment is Grails + GORM + MongoDB.
Is that possible, and if yes, how?
class A { // fails with IllegalArgumentException occurred when processing request: can't serialize class X in line 234 of org.bson.BasicBSONEncoder
    static mapWith = "mongo"
    Map<String, X> map = new HashMap<String, X>()
}

class B { // works
    static mapWith = "mongo"
    List<X> list = new ArrayList<X>()
}

class C { // works with primitive type values
    static mapWith = "mongo"
    Map<String, String> map = new HashMap<String, String>()
}

class X {
    String data

    public X(String data) {
        this.data = data
    }
}
The embedding works perfectly, as Art Hanzel advised.
However, your problem comes from the fact that you try to use Map genericity as a sort of constraint:
Map<String, X>
The problem is that Grails doesn't cope well with this syntax, first because Groovy doesn't enforce generics at runtime.
However, the MongoDB plugin offers a very powerful feature that lets you define custom types as domain class properties: see here.
In your case you could have
class A {
    static mapWith = "mongo"
    MyClass map = new MyClass()
}
Then, in your src/java folder for example, you could implement a
class MyClass extends HashMap<String, X> { }
Then, of course, you have to define a special AbstractMappingAwareCustomTypeMarshaller to specify how to read and write the property in the DB.
An additional step could also be to add a custom validator to class A to check the validity of data...
The MongoDB Grails plugin documentation describes how to make embedded documents:
class Foo {
    Address address
    List otherAddresses
    static embedded = ['address', 'otherAddresses']
}
Off the top of my head, you should be able to access these via the object graph. I don't see any reason why you shouldn't.
myFoo.address.myAddressProperty...