I think I am using too many conditionals and calculations in my TT templates.
I am displaying a result set of items from DBIx::Class. For each item I need to calculate things from the retrieved values, and the template doesn't seem to be the right place for that.
But in Catalyst, what the template receives is a thick object that comes from DBIx::Class.
So how can I move the logic into the model? Must I loop over all the items and change each object somehow?
Regards,
Migue
First, you're on the right track by wanting to properly separate concerns. You'll thank yourself if you're the maintainer 6-12 months down the road.
IMHO, your Catalyst controllers should be as thin as possible with the business logic in the various models. This makes it easier to test because you don't have the overhead of Catalyst to deal with. I've been thinking about model separation quite a bit myself. There are two schools of thought I've come across:
1) Make your DBIx::Class Result classes have the business logic. This approach is convenient and simple.
2) Make a standalone Model which is instantiated by the Controller, and which has a DBIx::Class schema object. The model would use the DBIC schema to query the database, and then use the resulting data in its own business logic methods. This approach might be better if you have a lot of business logic since you separate DB access from business logic.
Personally, I've historically used approach #1, but I'm leaning towards #2 for larger apps.
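For illustration, here is a minimal sketch of the two approaches, written in Python rather than Perl for brevity (all class and method names here are hypothetical; in Catalyst terms, read the first class as a DBIx::Class Result class with a method on it, and the second as a standalone model that holds a schema object):

```python
# Approach 1: business logic lives on the record class itself
# (analogous to putting methods on a DBIx::Class Result class).
class UserRecord:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    def total_cost(self):
        # Hypothetical business rule, computed per record.
        return self.price * self.quantity


# Approach 2: a standalone model that owns the data-access layer
# and keeps business logic out of the record objects entirely.
class BillingModel:
    def __init__(self, storage):
        self.storage = storage  # e.g. a schema/connection object

    def totals_for_all_users(self):
        # DB access and business logic live together here,
        # away from both the controller and the template.
        return [r.price * r.quantity for r in self.storage.all_users()]
```

With #1 the template just calls `item.total_cost`; with #2 the controller asks the model for fully computed data and hands it to the template.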
Two possibilities:
1) Create a method in the corresponding schema class.
2) If 1) is not possible, pass a callback to the template that takes the object as an argument.
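As a sketch of option 2), most template engines let you pass a plain function into the render context; here is the shape of it using Python's Jinja2 (the `total` helper and the item fields are invented for the example; the same idea should work in TT by passing a code reference into the stash):

```python
from jinja2 import Template

items = [{"price": 3, "qty": 2}, {"price": 5, "qty": 1}]

# The calculation lives in normal code; the template only calls it.
def total(item):
    return item["price"] * item["qty"]

tmpl = Template("{% for i in items %}{{ total(i) }} {% endfor %}")
print(tmpl.render(items=items, total=total))  # -> "6 5 "
```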
You could either:
1) create a resultset that retrieves the data from the database and then calculates the needed values, or
2) if possible, calculate the needed values within the database and then retrieve only the data needed for output.
I personally would prefer the second one.
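As a sketch of that second option, the calculation can be pushed into the SELECT itself, so the template only formats values. A minimal example using Python's built-in sqlite3 (the table and columns are made up; with DBIx::Class the rough equivalent is a computed column via the `+select`/`+as` resultset attributes):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (price REAL, qty INTEGER)")
db.executemany("INSERT INTO items VALUES (?, ?)", [(3.0, 2), (5.0, 1)])

# The derived value is computed by the database, not in the template.
for price, qty, total in db.execute(
        "SELECT price, qty, price * qty AS total FROM items"):
    print(price, qty, total)
```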
I hope that helps.
I am implementing a database search algorithm which searches over many collections in MongoDB and returns optimized results based on the state of the entire database. I have no problems with the implementation, but the nomenclature and how I should structure the file system are bugging me. Where in the model-view-controller pattern should I place read-only operations? Is it a service? It has a controller, but I hardly think it satisfies the criteria to be a model.
This question is extremely language-dependent, and depends on the features that exist within that language. I will speak from a PHP point of view.
Search functions should go into the model: in the MVC pattern the model doubles as a data provider, a single central point from which to dish out instances of itself.
Some MVCs implement what are known as factory classes. They are specifically designed to sit outside of the MVC's normal pattern in order to provide data: http://en.wikipedia.org/wiki/Factory_method_pattern. As someone who has used this pattern, I can say it gets complicated and unmanageable very quickly. That is why I prefer to back the model up as a data provider itself; it merely requires class organisation.
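To make "the model as a data provider" concrete, here is a rough sketch in Python (the class, table, and method names are all invented): the controller asks the model for results and never sees the query details.

```python
class ProductModel:
    """Single point of access for product data: the 'data provider'."""

    def __init__(self, db):
        self.db = db

    def search(self, term):
        # All query details are hidden inside the model; controllers
        # only ever call search() and receive ready-made results.
        sql = "SELECT id, name FROM products WHERE name LIKE ?"
        return [{"id": r[0], "name": r[1]}
                for r in self.db.execute(sql, (f"%{term}%",))]
```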
Model-View-Controller architecture is pretty much the equivalent of a three- or four-tier solution in a client-server setup, and the same rules apply.
Complex and intensive database functionality lives with the tool that is best suited to the task and is most re-usable, and in this case I would argue that the RDBMS is the best option in the vast majority of cases, as it is the RDBMS that best knows how to manipulate its own data, work out query plans, etc.
It could also be argued that the model layer would be the most natural place from a purist coding point of view, where you have all your data access in one layer.
It is highly unlikely that it would ever be advantageous to place this sort of functionality in the least re-usable layer, i.e. the controller/view.
This is of course only my opinion, and I suspect you will get many alternative opinions, but I cannot for the life of me think, from a performance point of view, that your logic belongs anywhere other than at the database level.
UPDATE
A model is the guardian of all data. If a view or controller wants data, it asks the model for that data; the view or controller shouldn't care how the data is obtained or where it comes from. It's about separation of concerns. So that leaves the question: do I place the code that queries the database in the model or in the RDBMS?
Well, you obviously need a method in a model for the view or controller to call in the first place, so of course you need a model; but what goes inside that method, and where the actual query SQL lives, is up to the designer. The point is that, so long as the query lives at the model or database level, you are hiding the implementation from the view or controller, and you are free to change the implementation whenever you wish without having to worry about the potentially many places it is called from.
So model or RDBMS is the answer. The solution chosen depends on the MVC tools you are using and the RDBMS you are using. Also remember that a model does not have to consist of a single method, which is what your comment implies you may be thinking.
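A small Python sketch of that hiding (all names hypothetical, and the stored-procedure syntax is illustrative only): the caller only ever sees `find_matches()`, so whether the body runs inline SQL or delegates to the RDBMS is a private implementation detail that can change at any time.

```python
class SearchModel:
    def __init__(self, db, use_stored_proc=False):
        self.db = db
        self.use_stored_proc = use_stored_proc

    def find_matches(self, term):
        # Views/controllers call this; they never see what is below.
        if self.use_stored_proc:
            # Push the heavy lifting into the RDBMS
            # (hypothetical stored-procedure call).
            return self.db.execute("CALL search_everything(?)", (term,))
        # ...or keep the SQL at the model level instead.
        return self.db.execute(
            "SELECT * FROM documents WHERE body LIKE ?", (f"%{term}%",))
```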
I've been reading a lot about entity frameworks and now I want to implement one in my game. An entity framework is based on making the game entities simple containers of Components, where a Component contains a certain characteristic of an Entity (and all the variables/accessors which describe this characteristic).
The game logic is then modularized by creating Systems. Each System implements and runs a certain aspect of the game logic (e.g. Collisions, Rendering, Animation). Each System has to be able to access every Entity which has a certain combination of Components (e.g. the RenderSystem only has to get Entities which have a PositionComponent and an AnimationComponent).
My question regards the best data structure for achieving such functionality.
My current idea is to create a Vector (with N cells, where N is the number of possible Components) of Lists of Entities. Whenever I create an Entity (instantiate it and add certain Components), I would also reference that Entity from the List for each Component it contains. "Killing" an Entity would require removing each reference from each List. The problem would be querying which Entities have to be processed by a certain System, because the search key would be a combination of Components, not a single Component, adding overhead to the operation (many searches and comparisons would have to be done).
Is my idea good? Is there any better data structure I can use? Note that everything in the game is supposed to be an Entity, summing up to thousands of Entities in a single Level (I could possibly use some space partitioning).
There are two ways of doing it.
A purely data-oriented system would lead you not to have an Entity class at all, but just components sharing an ID. In this case a vector or a hashmap per system wouldn't be a problem, as lookups in these data structures are fast. If you want several components per system per entity, you can aggregate your components in one data structure adapted to each system.
The problem is that a purely data-oriented system can be less usable than a more pragmatic approach, where you keep all the features of the system described above but also keep an Entity class that holds references to its components (or aggregated component structures) in every system. Processing an entity (deleting or inspecting it) becomes much easier, because you still have one place holding all the information about what the entity is, i.e. what it is made of (not what state it is in), instead of having to query every system.
In your case, the best thing is to try: it's quite easy and fast to implement a rough engine both ways, and once you've played with the two you'll be able to decide which one suits you better.
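Here is a rough sketch of the data-oriented variant in Python (component names and values are made up): each component type is a hashmap keyed by entity ID, and a system selects the entities whose IDs appear in every component store it needs.

```python
# One store per component type, keyed by entity ID.
positions = {1: (0.0, 0.0), 2: (5.0, 1.0)}
velocities = {1: (1.0, 0.0)}            # entity 2 has no velocity

def movement_system(dt):
    # Entities to process = intersection of the required component keys.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        vx, vy = velocities[eid]
        positions[eid] = (x + vx * dt, y + vy * dt)

movement_system(0.1)
print(positions)  # entity 1 moved, entity 2 untouched
```

Deleting an entity means removing its ID from each store; the set intersection replaces the "many searches and comparisons" worry, since it costs roughly the size of the smaller store.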
This article is valuable in that it walks through four iterations of the data structure, though in my opinion none of them is a good solution. I still recommend reading it, because it contains a detailed analysis of the problem, nice estimates in terms of memory, and other good material.
So, I'm working on new software, but I have no choice but to brownfield the database. I would like to use Entity Framework where it makes sense.
Here's my dilemma:
Since the tables are very wide, and I can't change this, I will probably make heavy use of projection to limit the width of the datasets that I query.
I do want to make use of navigation properties where it makes sense
From what I've seen, a lot of people use a model where there is a single DbContext class for the whole project.
So, I'm weighing these pros and cons, and I'm wondering what the established best practices might be:
1) Use a single DbContext.
There could be A LOT of "pollution" here, with bunches of projections of the data inside the one context class. This sounds like it could become a maintenance nightmare.
2) Don't make my projections DbSets at all; just make them plain old objects and select new MyProject {..} into them.
This offers the benefit of keeping my projections in module-specific assemblies and namespaces, but now I get NO navigation/lazy loading/etc.
3) Be evil?? and use multiple DbContexts?
I'm not really sure what the maintenance story looks like here, but I'm kind of starting to lean in this direction. My biggest problem with it is that it feels like I'm swimming against the current: not many people seem to do this, but for a large system it seems like it could be the best option.
Thoughts?
I think you should use POCOs or DTOs for data transfer between the different layers of the application, and use ViewModels to send data to the Views.
Consider using the Repository pattern and a Unit of Work (UoW) to get a better, more efficient architecture in this scenario. Limit the use of navigation properties to the repository layer; otherwise they make the entities heavy while transferring them across layers (use POCOs or DTOs instead).
If you are doing the above, then I do not think using multiple DbContexts would give you any benefit. Thanks.
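A minimal sketch of that layering, in Python rather than C# (all names here are invented): the repository queries only the columns it needs and hands back plain DTOs, so nothing heavy crosses the layer boundary.

```python
from dataclasses import dataclass

@dataclass
class UserSummaryDto:
    """Plain object: no lazy loading, no navigation properties."""
    id: int
    name: str

class UserRepository:
    def __init__(self, db):
        self.db = db

    def get_summaries(self):
        # Project only the narrow slice the caller needs
        # (the moral equivalent of `select new MyProject {..}`).
        rows = self.db.execute("SELECT id, name FROM users")
        return [UserSummaryDto(id=r[0], name=r[1]) for r in rows]
```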
I know that Core Data should not be considered an ORM, but it still offers functionality similar to an ORM. Just curious: is it implementing the Data Mapper pattern? I know that "The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other." (Martin Fowler). IMHO the managed object context handles all the SQL in one transaction, so it's a very performance-wise design, and IMHO Core Data might be considered to implement the Data Mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a Data Mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is a clean cut between a domain object and all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database; even if it is just some in-memory store, I do have to load one. Also, there are no mappers for each class, and I have no control over how each relation is stored.
Core Data loads lots of meta-information about your object graph and forces some structure onto your objects. Although you can change the persistent store and bake something of your own, you will have lots of restrictions on how to do it, with a clear "relational" feel to it.
The idea is good; we might say Core Data is some variation of it. Something that I do love is that the save operation is done by the context, not by the object itself, so there is some kind of separation.
However, look at functions like awakeFromFetch or didSave: both operations are tied to the data store, not to a plain domain object. A proper Data Mapper would allow you to define those operations for each persistent store, rather than unified in a single object.
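For contrast, a bare-bones Data Mapper sketch in Python (the names are invented): the domain object knows nothing about storage, and each persistent store gets its own mapper, which is exactly the separation Core Data does not give you.

```python
from dataclasses import dataclass

@dataclass
class User:
    """Plain domain object: no save(), no store lifecycle callbacks."""
    name: str

class SqlUserMapper:
    """One mapper per persistent store; an XML or file-backed mapper
    could be swapped in without touching User at all."""

    def __init__(self, db):
        self.db = db

    def insert(self, user):
        self.db.execute("INSERT INTO users (name) VALUES (?)", (user.name,))

    def find_by_name(self, name):
        row = self.db.execute(
            "SELECT name FROM users WHERE name = ?", (name,)).fetchone()
        return User(name=row[0]) if row else None
```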
UPDATE:
Funnily enough, one day after writing my answer I had to deal with an old Core Data based project, and I must come back to improve this answer. To make things clear: I consider that "seems like the pattern" is not enough. For example, the implementations of the Facade and Adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing Data Mapper?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how automatic migrations work with Core Data, I had forgotten how annoying they are.
How many times do you need some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With Data Mappers this never happens, because the domain objects are perfectly decoupled: you only touch the database to catch up with the domain objects after you finish some new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was until I forgot that "tiny" annoyance: Core Data is the exact opposite of what you can achieve with Data Mappers.
I am struggling to understand the correct usage of models. Currently I inherit from Db_Table directly and declare all the business logic there. I know this is not the correct way to do it.
One solution would be to use the Doctrine ORM, but that has a learning curve, and all the components I currently use (paginator and auth) would need to be rewritten. Doctrine 1 also adds another dozen classes which need to be loaded.
So the cleanest implementation I have seen is to use Data Mapper classes between the so-called model and the Db_Table. I haven't implemented this yet, as it seems to head towards writing another ORM. For an SQL table User, the example could be something like this:
a class with setters, getters, and business logic: /model/User.php
a data mapper: /model/mapper/UserMapper.php; the functionality is basically writing all the update and save actions here
the data source: /model/DbTable/User.php, which extends Db_Table_Abstract
The problems come with the relationships between models.
I have found it beneficial to not have my models extend Db_Table, but to use composition instead. That means my model 'has a' Db_Table rather than 'is a' Db_Table.
That way I find it much easier to reference multiple tables in the same model, which is a common requirement. This is enough for a simple project. For a more complex application I am currently developing, I have used the Data Mapper pattern, and it has simplified my code more than I would have believed.
Specifically, I have created a class which provides all access to the database and exposes methods such as getUser(). That way, if the DB changes, or my client wants something daft like storing records in XML, or we split the servers, I only have to rewrite one class.
Again, my models do not extend this class, but have an instance of it assigned as a property during construction.
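In Python pseudocode (the class and method names are mine, mirroring the description above), the composition looks like this: the model owns a gateway object instead of inheriting from it, so swapping the storage backend touches only one class.

```python
class UserGateway:
    """Wraps all DB access: the one class to rewrite if storage changes."""

    def __init__(self, db):
        self.db = db

    def get_user(self, user_id):
        return self.db.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()

class UserModel:
    def __init__(self, gateway):
        self.gateway = gateway   # 'has a' gateway, not 'is a' table

    def display_name(self, user_id):
        # Business logic lives here, on top of the injected gateway.
        row = self.gateway.get_user(user_id)
        return row[1].title() if row else "unknown"
```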
I would say the 'correct' way depends on the situation. Following the YAGNI and KISS principles, it is not good to over-complicate your model setup unless you really believe it will benefit you in the long run.
What is the application you are developing? How is your current setup of extending Db_Table holding you back?