Once many teams work with the same MongoDB database, there needs to be some way to express what each document may contain. Otherwise a document will end up having "email", "mail", and "email_addr" fields added by different teams. What's the best way to represent this for the purpose of communication across teams?
Obviously, the best way is what the team is most comfortable with. It can be UML, whiteboard drawings, XML mappings, model code files, maybe even haiku poems :)
I personally prefer using an ODM (Mongoid). It encourages you to specify all fields in the model class, so one glance at the model is enough to understand the schema.
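Mongoid is Ruby, but the same "declare every field in the model class" idea works with the official MongoDB C# driver too; a minimal, hypothetical sketch (class and field names are made up):

    using MongoDB.Bson;
    using MongoDB.Bson.Serialization.Attributes;

    // Every field the documents may contain is declared here, so one glance
    // at the class shows the agreed-upon schema.
    public class User
    {
        [BsonId]
        public ObjectId Id { get; set; }

        // The single agreed-upon name; no "mail" or "email_addr" variants.
        [BsonElement("email")]
        public string Email { get; set; }

        [BsonElement("name")]
        public string Name { get; set; }
    }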
What you can do is create your objects first in a Commons project that all team members import into their own projects. If you change the schema design, you update the Commons project and all team members import the latest version.
It's more about process and project management than technology, given Mongo's schema-less design. One thing we find helpful is to design your tests first; lately, SoapUI and LoadUI have been excellent tools for this. Once you define your tests, SoapUI can stub the returns for you and produce HTML documentation you can distribute to the team.
Check out: http://www.soapui.org/REST-Testing/working-with-rest-services.html
When you create a collection, just add a first "reference" object to it that has all the fields/sub-objects that an object in this collection can possibly have, and use it as the "schema". You can even write a validator that checks that new objects conform to this reference object.
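A minimal sketch of such a validator with the C# driver (the reference document's _id value is an assumption): it loads the reference document and checks that a candidate document introduces no top-level field names the reference doesn't already have.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public static class ReferenceSchemaValidator
    {
        // Compares a candidate document's top-level field names against the
        // "reference" document stored in the same collection under a known _id.
        public static bool ConformsToReference(IMongoCollection<BsonDocument> collection,
                                               BsonDocument candidate)
        {
            var filter = Builders<BsonDocument>.Filter.Eq("_id", "schema_reference"); // assumed _id
            var reference = collection.Find(filter).FirstOrDefault();
            if (reference == null)
                throw new InvalidOperationException("Reference document not found.");

            var allowed = new HashSet<string>(reference.Names);
            return candidate.Names.All(name => allowed.Contains(name));
        }
    }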
I am working on developing a set of assemblies that encapsulate parts of our domain that will be shared by many applications. Using the example of an order management system, one such assembly will contain all of the core operations an application can perform to/with an order. We are applying a simple version of CQS/CQRS, so that all operations that change the state of the "system" are represented as public commands, such as CancelOrderCommand, ShipOrderCommand and CreateOrderCommand. The command handlers are internal to the assembly.
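For context, the command side looks roughly like this (a simplified sketch, not the actual code):

    using System;

    // Public command: the only way consuming code can change the state of an order.
    public class CancelOrderCommand
    {
        public Guid OrderId { get; set; }
        public string Reason { get; set; }
    }

    // The handler stays internal to the assembly; consumers never see it.
    internal class CancelOrderCommandHandler
    {
        public void Handle(CancelOrderCommand command)
        {
            // load the Order aggregate, apply business rules, persist the change...
        }
    }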
The question I am struggling to answer is how to best expose the read model to consuming code?
The read model will be used by consuming code to perform queries. I don't know all of the ways the read model will be used, so the interface needs to be flexible enough to allow any query.
What complicates this for me is that I not only need to expose my aggregate root, but there are also several "lookup" lists of related data that client applications may use. For example, each order has an associated OrderType, which is data-driven (i.e., not an enum) and contains several properties that drive some of our business rules controlling which operations can and cannot be performed. It is easy to manage this relationship inside my module; however, a client application that allows order creation will most likely need to display the list of possible OrderTypes to the user. As a result, I need to expose not only the list of Order aggregates but also the supporting list of OrderTypes (and other lookup lists) from my read model.
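In other words, I am trying to decide whether the read-model surface should look something like this (purely illustrative shapes, not what I have today):

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative read-model surface: the Order aggregates plus the
    // supporting lookup lists a client UI needs (OrderTypes, etc.).
    public interface IOrderReadModel
    {
        IQueryable<OrderDto> Orders { get; }
        IEnumerable<OrderTypeDto> OrderTypes { get; }
    }

    // Flattened read-side shapes, not the domain aggregates themselves.
    public class OrderDto
    {
        public System.Guid Id { get; set; }
        public string OrderTypeCode { get; set; }
        public string Status { get; set; }
    }

    public class OrderTypeDto
    {
        public string Code { get; set; }
        public string Description { get; set; }
        public bool AllowsCancellation { get; set; } // example of a rule-driving property
    }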
How is this typically done?
I'm not sure what else to explain that will help trigger a solution, so please ask away...
I have never seen a CQRS-based implementation expose a full dataset for ad-hoc querying, so this is an interesting situation! In a typical CQRS scenario you would expose very specific queries, because you may want to raise events when they are called (for caching, for example - see this post for more details on that).
However, since this is your design, let's not worry about "typical" or "correct" CQRS; you just need a solution! One of the best new mechanisms I have seen for exposing data for flexible querying is the Open Data Protocol (OData). It lets consumers implement their own filtering, sorting and paging over a data source you expose.
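For instance, with ASP.NET Web API's OData support you can expose an IQueryable and let the framework translate $filter, $orderby, $top and friends into the underlying query. A rough sketch, reusing the illustrative IOrderReadModel/OrderDto shapes from your question (exact namespaces and attribute names vary with the Web API/OData version):

    using System.Linq;
    using Microsoft.AspNet.OData;   // package/namespace differs across OData versions

    public class OrdersController : ODataController
    {
        private readonly IOrderReadModel _readModel; // hypothetical read-model dependency

        public OrdersController(IOrderReadModel readModel)
        {
            _readModel = readModel;
        }

        // [EnableQuery] lets OData apply $filter/$orderby/$top to the IQueryable.
        [EnableQuery]
        public IQueryable<OrderDto> Get()
        {
            return _readModel.Orders;
        }
    }

How much of the OData query actually gets pushed down to the store depends on the IQueryable provider you sit this on top of.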
Most implementations of this seem to deal with relational data. If you are dealing with a relational data source, then OData might be a nice way to go. I suspect from your comment about exposing your aggregate root that you might be using a document database? If so, there is one example I have seen of OData services on top of MongoDB: http://bloggingabout.net/blogs/vagif/archive/2012/10/11/mongodb-odata-provider-now-supports-arrays-and-nested-collections.aspx.
I hope that helps; OData is definitely worth looking into. It seems to be growing really quickly and is getting good support on both server and client technology platforms.
I have written a wrapper around ADO.NET's DbProviderFactory that I use extensively throughout my applications. I have also written a lot of code that maps IDataReader rows to POCOs. However, as I have tons of classes, the whole thing is getting to be a pain in the ass to maintain.
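To give an idea, each mapper is some hand-rolled variation on this (simplified): match column names to property names via reflection and copy the values across.

    using System;
    using System.Data;
    using System.Reflection;

    public static class ReaderMapper
    {
        // Maps the current row of an IDataReader onto a new instance of T
        // by matching column names to property names (case-insensitive).
        public static T Map<T>(IDataReader reader) where T : new()
        {
            var item = new T();
            for (int i = 0; i < reader.FieldCount; i++)
            {
                if (reader.IsDBNull(i))
                    continue;

                PropertyInfo property = typeof(T).GetProperty(
                    reader.GetName(i),
                    BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);

                if (property != null && property.CanWrite)
                    property.SetValue(item, reader.GetValue(i), null);
            }
            return item;
        }
    }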
I have been looking at replacing the whole shebang with a micro-ORM like PetaPoco. I have a few questions though:
I have lots of POCOs that contain other POCOs as properties (see the sketch after this list). How well does PetaPoco support this?
Should I use an ORM like Massive or Simple.Data that returns dynamic objects, and map those to POCOs?
Are there any other approaches I can take to mapping rows to POCOs? I can't really use convention-based tools, as my database isn't particularly consistent in how it is designed.
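To clarify the first point, this is the kind of nesting I mean (illustrative names only):

    using System.Collections.Generic;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public List<Order> Orders { get; set; }   // nested POCOs populated from a join
    }

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }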
How about using a text templating/code generator to build out a lightweight persistence layer? I have a battle-hardened open source project called TextMetal that generates the necessary persistence layer based on tried and true architectural decisions. The only thing lacking is object-to-object relations, but it does support query expressions and works well with poorly designed data schemas.
You can see a real-world project that uses the above tool, called Can Do It For.
Feel free to ask me about any design decisions once you take a look-see.
Simple.Data automagically casts its dynamic type to static types. It will map nested properties as long as they have been eager-loaded using the .With method. So for example
Customer customer = db.Customer.WithOrders().Get(42);
would populate the Orders property of the customer object.
Could you use QueryFirst, or modify it? It takes your SQL and wraps it in vanilla ADO.NET code, generated at design time. You get fresh POCOs from your result schema every time you save your file. Additionally, you can choose to test all queries and regenerate all wrappers via the option in the Tools menu. It depends on SQL Server and SqlClient, so unless you do some modification, you'll lose DbProviderFactory.
I have some documents and an ontology for some concepts. Are there any frameworks that automatically extract those concepts from the given documents and create triples? Does the ontology need to contain special properties?
I found UIMA, but as far as I understand, with UIMA I can only do something like this:
create some dictionaries that keep the associations with the ontology
use these dictionaries with ConceptMapper
write a CAS consumer that creates the triples and persists them
I don't like this approach because I have to keep the concepts in the ontology and the dictionary in sync.
Can UIMA be used differently, or are there any more advanced frameworks that can directly use my ontology (with, let's say, some custom properties) as input and annotate the documents based on it?
I want to use ontologies as the domain model because I want to build a knowledge base later, and ontologies seem more flexible than, for example, a relational model.
Thanks.
After spending more time searching on Google, I found GATE and, more specifically, the OntoRoot Gazetteer and the Large KB Gazetteer.
OntoRoot Gazetteer is a type of dynamically created gazetteer that, in combination with a few other generic GATE resources, is capable of producing ontology-based annotations over the given content with regard to the given ontology. This gazetteer is part of the 'Gazetteer_Ontology_Based' plugin that was developed as part of the TAO project.
I haven't tested them yet, but they seem like good candidates for solving my problem.
I have the following collections:
Client contains Product, which contains Project, which contains Task; and
Company contains Subsidiary, which contains Department, which contains Users (and each User has custom properties).
What is the best practice? How can I use MongoDB more efficiently?
In my view, Users and Projects will change more often and should be defined as separate collections.
What is your advice?
There is no single answer to your question, as MongoDB allows both embedding and referencing.
But here are some tips: don't bloat your documents, because querying elements nested deep inside can be hard, especially if you're going to use lists.
If you are able to prototype, try embedding first, and then write a unit test that adds/removes/queries projects and users. If that works, then there's no need to reference them.
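For example (using your Company/Department/User names purely as a sketch), embedding versus referencing in C# classes might look like this:

    using System.Collections.Generic;
    using MongoDB.Bson;

    // Option 1 - embedding: subsidiaries and departments live inside the Company document.
    public class Company
    {
        public ObjectId Id { get; set; }
        public string Name { get; set; }
        public List<Subsidiary> Subsidiaries { get; set; }
    }

    public class Subsidiary
    {
        public string Name { get; set; }
        public List<string> DepartmentNames { get; set; }
    }

    // Option 2 - referencing: Users change often, so they get their own collection
    // and point back to the company/department instead of being embedded.
    public class User
    {
        public ObjectId Id { get; set; }
        public ObjectId CompanyId { get; set; }            // reference, not embedding
        public string Department { get; set; }
        public BsonDocument CustomProperties { get; set; } // free-form custom properties
    }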
We are using MongoDB with C#. We are trying to figure out a way to keep our collections consistent seamlessly. Right now, if a developer makes any change to the class structure (adds a field, changes a data type, or changes a property within a nested class), he/she has to change the Mongo collection manually.
It's a pain, as our project is growing and the number of developers working on it keeps increasing. I was wondering whether someone has already figured out a way to manage this issue.
Research
I found a similar question; however, I couldn't find a solution there.
I found a way to enumerate all the properties (Finding the properties); however, data types and nested documents become an issue.
If you want to migrate gradually as records are accessed, you need to follow a few simple rules:
1) If you add a field, it had better be nullable or have a default value specified (see the sketch below).
2) Never rename fields and never change field types.
- Instead, always add new fields, add migration code, and remove the old fields only when all documents have been migrated over.
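A sketch of rule 1 with the C# driver's serialization attributes (the Customer class and its fields are just examples; verify the attribute behaviour against your driver version):

    using MongoDB.Bson;
    using MongoDB.Bson.Serialization.Attributes;

    // Old documents won't have the new fields, so deserialization must tolerate that.
    [BsonIgnoreExtraElements]   // don't fail on elements the class no longer declares
    public class Customer
    {
        [BsonId]
        public ObjectId Id { get; set; }

        public string Name { get; set; }

        // Rule 1: newly added fields are nullable or have a default,
        // so documents written before the change still deserialize.
        [BsonIgnoreIfNull]
        public string PhoneNumber { get; set; }

        [BsonDefaultValue("unknown")]
        public string Country { get; set; }
    }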
For prototyping with MongoDB and C#, I built a dynamic wrapper ... that lets you specify your objects using only interfaces (no classes needed) and dynamically add new interfaces to an existing object. It's not ready for production use, but for prototyping it saves a lot of effort and makes migration really easy.