Where does Search fit into the MVC software pattern?

I am implementing a database search algorithm which searches over many collections in a MongoDB database and returns optimized results based on the state of the entire database. I have no problems with the implementation, but the nomenclature, and how I should structure the files, are bugging me. Where in the model-view-controller pattern should I place read-only operations? Is it a service? It has a controller, but I hardly think it satisfies the criteria for a model.

This question depends heavily on the language you are using and the features that exist within it. I will speak from a PHP point of view.
Search functions should go into the model: in the MVC pattern the model doubles as a data provider, a single central point from which to dish out instances of itself.
Some MVC frameworks implement what are known as factory classes. They are specifically designed to sit outside of MVC's normal pattern so they can provide data: http://en.wikipedia.org/wiki/Factory_method_pattern . As someone who has used this pattern, I can say it gets complicated and unmanageable very quickly. That is why I prefer to use the model itself as the data provider; it merely requires some class organisation.
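To make this concrete, here is a minimal sketch (in Python with pymongo rather than PHP, and with purely illustrative names like SearchModel) of a model that doubles as a data provider, dishing out a single shared instance of itself:

    # A minimal sketch, assuming Python/pymongo instead of PHP; the class
    # and method names (SearchModel, search) are illustrative only.
    from pymongo import MongoClient

    class SearchModel:
        _instance = None

        @classmethod
        def instance(cls):
            # The single central point from which to dish out instances of itself.
            if cls._instance is None:
                cls._instance = cls(MongoClient()["mydb"])
            return cls._instance

        def __init__(self, db):
            self._db = db

        def search(self, term):
            # Read-only operation: query several collections and merge the
            # results. Controllers call this method and never see the queries.
            # (Assumes each collection has a text index for $text to work.)
            results = []
            for name in ("articles", "products"):
                results.extend(self._db[name].find({"$text": {"$search": term}}))
            return results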

Model View Controller architecture is pretty much the equivalent of a three- or four-tier solution in a client-server setup, and the same rules apply.
Complex and intensive database functionality lives with the tool that is best suited to the task and is most reusable. In this case I would argue that, in the vast majority of cases, the RDBMS is the best option, as it is the RDBMS that best knows how to manipulate its own data, work out query plans, etc.
It could also be argued that the model layer would be the most natural place from a purist coding point of view, since you would then have all your data access in one layer.
It is highly unlikely that it would ever be advantageous to place this sort of functionality in the least reusable layers, i.e. the controller or view.
This is of course only my opinion, and I suspect you will get many alternative opinions, but I cannot for the life of me think that, from a performance point of view, your logic belongs anywhere other than at the database level.
UPDATE
A model is the guardian of all data. If a view or controller wants data, it asks the model for that data. The view or controller shouldn't care about how the data is obtained or where it comes from; it's about separation of concerns. So that leaves the question: do I place the code to query the database in the model or in the RDBMS?
Of course you have to have a method in the model for the view or controller to call in the first place, so you do need a model; but what goes inside that method, and where the actual query SQL lives, is up to the designer. The point is that, as long as the query lives at the model or database level, you are hiding the implementation from the view or controller and are free to change it whenever you wish, without having to worry about the potentially many places it is called from.
So model or RDBMS is the answer. The solution chosen depends on the MVC tools and the RDBMS you are using. Also remember that a model does not have to consist of a single method, which is what your comment implies you may be thinking.
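As a rough illustration of that hiding (Python with the standard sqlite3 module; ProductModel and its table are hypothetical): the view or controller only ever calls search(), so the body of that method can later be swapped for a stored-procedure call at the RDBMS without touching any caller.

    # A sketch of that hiding, using Python's standard sqlite3 module;
    # ProductModel and the products table are hypothetical.
    import sqlite3

    class ProductModel:
        def __init__(self, conn):
            self._conn = conn

        def search(self, term):
            # Implementation detail, free to change. In an RDBMS with stored
            # procedures this body could become a single procedure call
            # without any view or controller noticing.
            cur = self._conn.execute(
                "SELECT id, name FROM products WHERE name LIKE ?",
                (f"%{term}%",),
            )
            return cur.fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO products (name) VALUES ('blue widget')")
    print(ProductModel(conn).search("widget"))   # [(1, 'blue widget')]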

Related

Using ViewModels instead of DTOs as the result of a CQRS query

Reading a SO question, I realized that my Read Services could provide smarter objects like ViewModels instead of plain DTOs. This makes me reconsider what information should be provided by the objects returned by the Read Services.
Before, using just DTOs, my Read Service simply made a flat mapping of a database query into a hash-like structure, with minimal normalization and no behavior.
However, I tend to think of a ViewModel as something "smarter" that can carry generated information not provided by the database, like status icons, calculated values, reformatted values, default values, etc.
I am starting to see that the construction of some ViewModel objects might get more complicated, and that there are potential downsides if I make my generic ReadServiceInterface return ViewModels only:
(1) Should I plan some design restriction for the ViewModels returned by my CQRS? Like making sure that their construction is almost as fast as a plain DTO?
(2) DTOs by nature are easily serialized and ready to be sent to an external system in a SOA architecture or embedded into a message. Does this mean that using ViewModels will have a negative impact on my architecture?
(3) Which type of ViewModels should I keep outside my Read Services?
(4) Should I expect all ViewModels to be retrieved from Read Services?
In the past I implemented some ViewModels that needed more than one query. In CQRS, I suppose, that is a design smell, since everything they provide should come from a single query.
I am starting a new project where I thought that any query would return either aggregate objects or DTOs. But now ViewModels come into play, and I am wondering:
(5) Should I plan that queries within my architecture will yield two type of objects (ViewModels+Aggregates) or three (+DTO)?
View Models (VMs) serve a single master: the View. We usually consider the VM a pretty dumb object, so in this regard there's no technical difference between a VM and a DTO; only their purpose and semantics are different.
How you build a VM is an implementation detail. Some VMs are pre-generated and stored in a VM repository. Others are built in real time by a service (or a query handler), either by querying the db directly or by querying other repos/services and then assembling the results. There's no right or wrong and no rules about how to do it. It comes down to preference.
In CQRS the important part is the separation of commands from queries, i.e. having more than one model. There's no rule about how many queries you should do or whether you should return a view model or a DTO. As long as you have at least one read model dedicated to queries, it's CQRS.
Don't let technicalities complicate your design. Proper design is more about high level structure and not low level implementation. Use CQRS because having a read model simplifies your app, not for other reasons. Aim for simplification and clean code, not for rigid rules that dictate a 'how to' recipe.
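A minimal sketch of that separation, with entirely hypothetical names, might look like this: a read store dedicated to queries, a query handler, and a VM that is just a dumb object assembled for one view.

    # A minimal sketch with hypothetical names: a read store dedicated to
    # queries, a query handler, and a dumb VM assembled for the view.
    from dataclasses import dataclass

    @dataclass
    class OrderSummaryVM:
        # Technically as dumb as a DTO; only its purpose (serving one view)
        # and the reformatted total_display field make it a "view model".
        order_id: int
        total_display: str

    class GetOrderSummaryHandler:
        def __init__(self, read_store):
            self._read_store = read_store   # separate from the write model

        def handle(self, order_id):
            row = self._read_store[order_id]   # one query against the read model
            return OrderSummaryVM(order_id, f"${row['total']:.2f}")

    read_store = {42: {"total": 19.5}}   # stand-in for a denormalized store
    print(GetOrderSummaryHandler(read_store).handle(42))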

Best Data Structure for an Entity-Component-System Framework

I've been reading a lot about entity systems and now I want to implement one in my game. An Entity-Component-System framework makes the game entities simple containers of Components, where a Component contains a certain characteristic of an Entity (and all the variables/accessors which describe this characteristic).
The game logic is then modularized by creating Systems. Each System implements and runs a certain aspect of the game logic (e.g. Collisions, Rendering, Animation). Each System has to be able to access every Entity which has a certain combination of Components (e.g. the RenderSystem has to get only Entities which have a PositionComponent and an AnimationComponent).
My question regards the best data structure for achieving such functionality.
My current idea is to create a Vector (with N cells, where N is the number of possible Components) of Lists of Entities. Whenever I create (instantiate and add certain Components to) an Entity, I would also reference this Entity from the List for each Component it contains. "Killing" an Entity would require removing each reference from each List. The problem would be querying which Entities have to be processed by a certain System, because the search key would be a combination of Components, not a single Component, adding overhead to the operation (many searches and comparisons would have to be done).
Is my idea good? Is there any better data structure I can use? Note that everything in the game is supposed to be an Entity, summing up to thousands of Entities in a single Level (I could possibly use some space partitioning).
There are two ways of doing it.
The purely data-oriented approach would lead you not to have an Entity class, but just components sharing an ID. In this case a vector or a hashmap for every system wouldn't be a problem, as searching these data structures is fast. If you want several components per system per entity, you can aggregate your components into one data structure adapted for each system. A sketch of this approach follows below.
The problem is that a purely data-oriented system can be less usable than a more pragmatic approach, where you keep all the features of the system just described but also keep an entity class that holds references to its components (or aggregated component structures) for every system. Processing an entity (deleting or inspecting it) becomes much easier, because you still have one place where all the information about what the entity is (i.e. what it is made of, not what state it is in) can be found, instead of having to query every system.
In your case, the best thing is to try. It's quite easy and fast to implement a rough engine both ways, and once you've played with the two you'll be able to decide which one suits you better.
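To make the first, purely data-oriented approach concrete, here is a rough Python sketch (all names are illustrative): components are per-type hashmaps sharing integer entity IDs, and a system's query is just a key-set intersection.

    # A rough sketch of the purely data-oriented approach: no Entity class,
    # just per-component hashmaps sharing integer IDs.
    positions = {}   # entity_id -> (x, y)
    animations = {}  # entity_id -> current frame

    def create_entity(eid, pos=None, anim=None):
        if pos is not None:
            positions[eid] = pos
        if anim is not None:
            animations[eid] = anim

    def kill_entity(eid):
        # Killing an entity means removing its ID from every component store.
        positions.pop(eid, None)
        animations.pop(eid, None)

    def render_system():
        # Only entities with BOTH components, found by key-set intersection.
        for eid in positions.keys() & animations.keys():
            print(f"render entity {eid} at {positions[eid]}, frame {animations[eid]}")

    create_entity(1, pos=(0, 0), anim=3)
    create_entity(2, pos=(5, 5))   # no animation component -> skipped below
    render_system()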
This article is valuable in that it suggests four iterations for the data structure, but none of them is a good solution in my opinion. I still recommend reading it, because it contains a detailed analysis of the problem, nice estimations in terms of memory, and other good material.

CQRS read model side - normalized tables

I have been reading about Command Query Responsibility Segregation (CQRS) and how this pattern would suit our current applications.
When it comes to the read model I am well aware of the concepts:
"separating read and write data model", "flat denormalized data returned by the thin read layer". In most cases we are stuck with the same database(the same read/write data model), running on SQL Server with normalized tables, with common layered application on top of it.
So, is it any value of applying CQRS on this kind of scenario?
If so, what would it be when it comes to the read model side?
Another question that hits my mind is MVC application requesting information from my thin read layer that expose flattened out views. Data exposed still need to be structured(aggragated) before presented to the user, or am I wrong?
CQRS doesn't need to have a flattened read model; that is a benefit that CQRS can allow you to provide, but it is neither required nor a key part of the approach.
CQRS is about separation (or segregation, if you follow the name). It is the Command Query Separation principle on steroids (in my opinion). The benefits that it provides (off the top of my head) are:
separation of your read operations from your write operations;
communication between layers via messaging (e.g. commands, events), so that your layers are clean;
separation within your layers, applying the Single Responsibility Principle (e.g. your domain applies business logic, your command handlers route commands, your denormalizers or event handlers (or whatever you call them) persist information to your read store, etc.);
allows you to have team members work on different parts of your application without hard dependencies between them;
etc.
So if those things above are important to you or something you want to strive for (and your application's design supports implementing CQRS), then CQRS provides benefit and value to you.
There are many benefits to CQRS. It's not the right solution for every problem, but when the stars align, it's a nice approach to your problem (even if you don't have a denormalized read store, or an event store, or an async model, etc.).
I hope this helps!
I've fought with multiple joins so many times in my career that when a structure like CQRS and ES comes along and offers a clean way to simplify the read side, I jumped at it. The nice thing is that you can get many of the benefits without necessarily implementing all the elements often associated with CQRS and ES. Just separating commands from queries has the benefit of simplifying your code. However, when you start using a de-normaliser to build out read models for your application, you suddenly realise how simple, clean and performant your app can be.
If it helps to see 'how' this de-normalisation works, take a look at this post (it comes with a code sample to take a gander at): How to build a master details view with CQRS and ES. I hope you find this helpful.
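As a rough sketch of what such a de-normaliser looks like (hypothetical event and table names, sqlite3 standing in for the read store): an event handler copies whatever the view needs into one flat row, so the thin read layer becomes a single-table SELECT with no joins.

    # A small sketch of the idea (hypothetical event and table names,
    # sqlite3 standing in for the read store).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE order_summary ("
        "order_id INTEGER PRIMARY KEY, customer_name TEXT, total REAL)"
    )

    def on_order_placed(event):
        # The de-normaliser: copy whatever the view needs into one flat row.
        conn.execute(
            "INSERT INTO order_summary VALUES (?, ?, ?)",
            (event["order_id"], event["customer_name"], event["total"]),
        )

    on_order_placed({"order_id": 1, "customer_name": "Ada", "total": 99.0})
    # The thin read layer is now a single-table SELECT with no joins:
    print(conn.execute("SELECT * FROM order_summary").fetchall())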
Applying CQRS over the same (say) third normal form database can still give you value on the read side if it allows you to stop projecting read models from domain objects.
This also allows you to better specialise your domain to (I assume) transaction processing, meaning many relationships may not be necessary.

Moving logic from Template Toolkit to Catalyst

I think I am using too many conditionals and calculations in the TT templates.
I am displaying a result set of items from DBIC. For each item I need to calculate things using the retrieved values, and the template doesn't seem to be the right place for that.
But in Catalyst what I get is a thick object that comes from DBIC.
So how can I move the logic to the model? Must I run a whole loop over all items and change the object somehow?
First, you're on the right track by wanting to properly separate concerns. You'll thank yourself if you're the maintainer 6-12 months down the road.
IMHO, your Catalyst controllers should be as thin as possible with the business logic in the various models. This makes it easier to test because you don't have the overhead of Catalyst to deal with. I've been thinking about model separation quite a bit myself. There are two schools of thought I've come across:
1) Make your DBIx::Class Result classes have the business logic. This approach is convenient and simple.
2) Make a standalone Model which is instantiated by the Controller, and which has a DBIx::Class schema object. The model would use the DBIC schema to query the database, and then use the resulting data in its own business logic methods. This approach might be better if you have a lot of business logic since you separate DB access from business logic.
Personally, I've historically used approach #1 but I'm leaning towards #2 for larger apps.
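The answer concerns Perl/Catalyst, but the shape of approach #2 is easy to sketch in Python (every name here is hypothetical): a standalone model owns the schema handle and the business logic, so the controller and template stay thin and the model is testable with a fake schema.

    # The answer is about Perl/Catalyst, but the shape of approach #2 is easy
    # to sketch in Python; every name here is hypothetical.
    class ItemModel:
        def __init__(self, schema):
            self._schema = schema   # stands in for a DBIx::Class schema object

        def items_with_totals(self):
            # Business logic lives here, not in the controller or template:
            # fetch the rows, then derive the values the template used to compute.
            for row in self._schema.fetch_items():
                row["total"] = row["price"] * row["quantity"]
                yield row

    class FakeSchema:
        # Trivially replaceable in a unit test -- no Catalyst overhead needed.
        def fetch_items(self):
            return [{"price": 2.5, "quantity": 4}]

    for item in ItemModel(FakeSchema()).items_with_totals():
        print(item)   # the template now only displays item["total"]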
Two possibilities:
1) Create a method in the corresponding schema class.
2) (If 1 is not possible) Pass a callback to the template that takes this object as an argument.
You could:
1) create a resultset that retrieves the data from the database and then calculates the needed values;
2) if possible, calculate the needed values within the database and then retrieve only the data needed for output.
I personally would prefer the second option.
I hope that helps.

Is Core Data implementing the Data Mapper pattern?

I know that Core Data should not be considered an ORM, but it still offers functionality similar to one. Just curious: is it implementing the Data Mapper pattern? I know that "The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other" (Martin Fowler). IMHO the managed object context handles all the SQL work in a single transaction, so it is a very performance-conscious design, and IMHO Core Data might be considered to implement the Data Mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a Data Mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is a clean separation of domain objects from all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database, even if it is just some in-memory store; I must load one nonetheless. Also, there are no mappers for each class; I have no control over how each relation is stored.
Core Data loads lots of meta-information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will have lots of restrictions about how to do it, with a clear "relational" feeling to it.
The idea is similar; we might say Core Data is some variation of it. Something that I do love is that the save operation is done by the context, not by the object itself, so there is some kind of separation.
However, look at functions like "awakeFromFetch" or "didSave": both operations are tied to the data store, not to a plain domain object. A proper Data Mapper pattern would allow you to define those operations for each persistent store, not unified in a single object.
UPDATE:
Funnily enough, one day after writing my answer I had to deal with an old Core Data based project, and I had to come back and improve this answer. To make things clear, I do consider that "it seems like the pattern" is not enough. For example, the implementations of the facade and adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing data mapper?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how auto-migrations work with Core Data, I forgot how annoying they are.
How many times do you need some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With Data Mappers this never happens, because domain objects are perfectly decoupled. You only touch the database to catch up with the domain objects after you finish some new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was while I had forgotten that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with Data Mappers.
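For contrast, here is a minimal Data Mapper sketch in plain Python (names hypothetical, sqlite3 as the store): the domain object knows nothing about persistence, and the mapper is a separate, swappable layer.

    # A minimal Data Mapper sketch in plain Python (names hypothetical,
    # sqlite3 as the store): the domain object knows nothing about persistence.
    import sqlite3

    class Person:
        # Pure domain object: no save hooks, no awakeFromFetch, no store.
        def __init__(self, name):
            self.name = name

    class SqlitePersonMapper:
        # One mapper per class and per persistent store; swap it freely.
        def __init__(self, conn):
            self._conn = conn
            conn.execute("CREATE TABLE IF NOT EXISTS people (name TEXT)")

        def insert(self, person):
            self._conn.execute("INSERT INTO people VALUES (?)", (person.name,))

        def find_all(self):
            rows = self._conn.execute("SELECT name FROM people")
            return [Person(name) for (name,) in rows]

    # Renaming a field on Person touches only the mapper, and unit tests can
    # exercise Person without loading any database at all.
    mapper = SqlitePersonMapper(sqlite3.connect(":memory:"))
    mapper.insert(Person("Ada"))
    print([p.name for p in mapper.find_all()])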