I've been reading a lot about entity frameworks (entity/component systems) and now I want to implement one in my game. An entity framework is based on making the game entities simple containers of Components, where a Component contains a certain characteristic of an Entity (and all the variables/accessors which describe this characteristic).
The game logic is then modularized by creating Systems. Each System implements and runs a certain aspect of the game logic (e.g. collisions, rendering, animation). Each System has to be able to access every Entity which has a certain combination of Components (e.g. the RenderSystem has to get only Entities which have both a PositionComponent and an AnimationComponent).
My question regards the best data structure for achieving such functionality.
My current idea is to create a Vector (with N cells, where N is the number of possible Components) of Lists of Entities. So whenever I create an Entity (instantiate it and add certain Components), I would also reference this Entity from the List of each Component it contains. "Killing" an Entity would then require removing each of these references. The problem is querying which Entities have to be processed by a certain System, because the search key would be a combination of Components rather than a single Component, adding overhead to the operation (many searches and comparisons would have to be done).
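To make this concrete, here is a rough sketch of the structure I have in mind (all names are hypothetical and not tied to any engine):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Entity
    {
        public Guid Id { get; } = Guid.NewGuid();
        public HashSet<Type> ComponentTypes { get; } = new HashSet<Type>();
    }

    class ComponentIndex
    {
        // One bucket (list of entities) per component type.
        private readonly Dictionary<Type, List<Entity>> buckets =
            new Dictionary<Type, List<Entity>>();

        public void Register(Entity entity, Type componentType)
        {
            entity.ComponentTypes.Add(componentType);
            if (!buckets.TryGetValue(componentType, out var list))
                buckets[componentType] = list = new List<Entity>();
            list.Add(entity);
        }

        // "Killing" an entity means removing it from every bucket it appears in.
        public void Kill(Entity entity)
        {
            foreach (var type in entity.ComponentTypes)
                buckets[type].Remove(entity);
            entity.ComponentTypes.Clear();
        }

        // The costly part: querying for a combination of components.
        public IEnumerable<Entity> With(params Type[] required)
        {
            var smallest = required
                .Select(t => buckets.TryGetValue(t, out var l) ? l : new List<Entity>())
                .OrderBy(l => l.Count)
                .First();
            return smallest.Where(e => required.All(e.ComponentTypes.Contains));
        }
    }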
Is my idea good? Is there any better data structure I can use? Note that everything in the game is supposed to be an Entity, summing up to thousands of Entities in a single level (I could possibly use some space partitioning).
There are two ways of doing it:
The purely data-oriented approach would lead you to not have an Entity class at all, but just components sharing an ID. In that case, a vector or a hashmap for every system wouldn't be a problem, as lookups in these data structures are fast. If you want several components per system per entity, you can aggregate your components into one data structure adapted to each system.
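A minimal sketch of that pure data-oriented layout, with invented component and system names, might look like this:

    using System.Collections.Generic;

    struct Position { public float X, Y; }
    struct Velocity { public float Dx, Dy; }

    // No Entity class: just per-type component stores keyed by an entity id.
    class World
    {
        public readonly Dictionary<int, Position> Positions = new Dictionary<int, Position>();
        public readonly Dictionary<int, Velocity> Velocities = new Dictionary<int, Velocity>();
    }

    class MovementSystem
    {
        // The system only visits ids that have both components it cares about.
        public void Update(World world, float dt)
        {
            foreach (var id in new List<int>(world.Positions.Keys))
            {
                if (!world.Velocities.TryGetValue(id, out var velocity)) continue;
                var position = world.Positions[id];
                position.X += velocity.Dx * dt;
                position.Y += velocity.Dy * dt;
                world.Positions[id] = position;
            }
        }
    }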
The problem is that a purely data-oriented system can be less usable than a more pragmatic approach where you keep all the features of the previously described system, but also keep an Entity class that holds references to its components (or aggregated component structures) for every system. Processing an entity (deleting or inspecting it) becomes much easier, because all the information about what the entity is, i.e. what it is made of rather than what state it is in, can be found in one place instead of having to query every system.
In your case, the best thing is to try. It's quite easy and fast to implement a rough engine both ways, and once you've played with the two you'll be able to decide which one suits you better.
This article is valuable in that it suggests four iterations of the data structure, though none of them is a good solution in my opinion. I still recommend reading it, because it contains a detailed analysis of the problem, useful memory estimates and other good material.
Related
I'm building a calendar/entry/statistics application using quite complex models with a large number of relationships between the models.
In general I'm concerned about performance, and I'm considering different strategies and looking for input before implementing the application.
I'm completely new to DbContext pooling, so please excuse possible stupid questions. But does DbContext pooling have anything to do with using multiple DbContext classes, or is it about improved performance regardless of whether there is a single DbContext or several?
I might end up implementing a large number of DbSets, or should I avoid that? I'm considering creating multiple DbContext classes for simplicity, but will this reduce memory use and improve performance? Would it be better/smarter to split the application into smaller projects?
Is there any performance difference in using IEnumerable vs ICollection? I’m avoiding the use of lists as much as possible. Or would it be even better to use IAsyncEnumerable?
Most of your performance pain points will come from a complex architecture that you have not simplified. Managing a monolithic application results in lots of unnecessary 'compensating' logic when one use case is treading on the toes of another and your relationships are so intertwined.
Optimisations such as whether to use Context Pooling or IEnumerable vs ICollection can come later. They should not affect the architecture of your solution.
If your project is as complex as you suggest, then I'd recommend you read up on Domain Driven Design and Microservices and break your application up into several projects (or groups of projects).
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/
Each project (or group of projects) will have its own DbContext to administer the entities within that project.
Further, each DbContext should start off by only exposing Aggregate Roots through its DbSets. This can mean more database activity than is strictly necessary for a particular use case, but it is best to start with a clean architecture and squeeze out that last ounce of performance (sometimes at the cost of architectural clarity) if and when needed.
For example, if you want to add an attendee to an appointment, it can be appealing to attack the Attendee table directly. But to keep things clean, and considering that an attendee cannot exist without an appointment, you should make Appointment the aggregate root and only expose Appointment as the entry point for the outside world to attack. The appointment can be retrieved from the database with its attendees. Then ask the appointment to add the attendees. Then save the appointment graph by calling SaveChanges on the DbContext.
In summary, your Appointment is responsible for the functionality within its graph. You should ask the Appointment to add an Attendee to its list instead of adding an Attendee to the Appointment's list of attendees yourself. A subtle shift in thinking that can reduce the complexity of your solution an awful lot.
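A sketch of that shape, with invented type names and EF Core used purely for illustration (this is not your actual model), might be:

    using System;
    using System.Collections.Generic;
    using Microsoft.EntityFrameworkCore;

    public class Attendee
    {
        public Guid Id { get; private set; } = Guid.NewGuid();
        public string Name { get; private set; }

        public Attendee(string name) => Name = name;
        private Attendee() { } // for EF materialization
    }

    public class Appointment
    {
        private readonly List<Attendee> _attendees = new List<Attendee>();

        public Guid Id { get; private set; } = Guid.NewGuid();
        public string Title { get; private set; } = "";
        public IReadOnlyCollection<Attendee> Attendees => _attendees;

        // The aggregate root owns the rules for changing anything inside its graph.
        public void AddAttendee(string name)
        {
            if (_attendees.Count >= 100)
                throw new InvalidOperationException("Too many attendees.");
            _attendees.Add(new Attendee(name));
        }
    }

    public class SchedulingContext : DbContext
    {
        public SchedulingContext(DbContextOptions<SchedulingContext> options) : base(options) { }

        // Only the aggregate root is exposed; attendees are reached via Appointment.
        public DbSet<Appointment> Appointments => Set<Appointment>();
    }

Loading the appointment with its attendees, asking it to add one, and then calling SaveChanges keeps every rule inside the root.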
The art is deciding where those boundaries between microservices/contexts should lie. There can be pros and cons to two different architectures, with no clear winner.
To your other questions:
DbContext Pooling is about maintaining a pool of ready-to-go instantiated DbContexts. It saves the overhead of repeated DbContext instantiation. Probably not worth it, unless you have an awful lot of separate requests coming in and your profiling shows that this is a pain point.
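For completeness, the registration is a one-liner in ASP.NET Core with EF Core and the SQL Server provider; the context name and connection string below are placeholders (the SchedulingContext from the sketch above):

    using Microsoft.EntityFrameworkCore;

    var builder = WebApplication.CreateBuilder(args);

    // Keeps a pool of ready-made DbContext instances instead of creating a new
    // one per request; "SchedulingContext" and "Default" are placeholders.
    builder.Services.AddDbContextPool<SchedulingContext>(options =>
        options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

    var app = builder.Build();
    app.Run();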
The number of DbSets required is alluded to above.
As for IEnumerable or ICollection or IList, it depends on what functionality you require. Here's a nice simple summary ... https://medium.com/developers-arena/ienumerable-vs-icollection-vs-ilist-vs-iqueryable-in-c-2101351453db
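On the IAsyncEnumerable point specifically: with EF Core you can stream results instead of materializing a list, along the lines of this sketch (reusing the placeholder SchedulingContext from above):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.EntityFrameworkCore;

    public static class AppointmentQueries
    {
        // Streams titles row by row instead of buffering the whole result set.
        public static async IAsyncEnumerable<string> TitlesAsync(SchedulingContext db)
        {
            await foreach (var title in db.Appointments
                                          .Select(a => a.Title)
                                          .AsAsyncEnumerable())
            {
                yield return title;
            }
        }
    }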
"Would it be better/smarter to split the application into smaller projects?"
Yes, absolutely! Start with architectural clarity and then tweak for performance benefits where and when required. Don't start with performance as the goal (unless you're building a millisecond sensitive solution).
I've been doing DDD for a couple of years now and it's still challenging when it comes to designing Aggregates. That's the fun part of DDD, and it makes your head spin. I'm asking this question since I'm the architect on a project and we're in the middle of designing the model. It's an iterative process where the model evolves in parallel with the GUI and requirement gathering together with the customer.
Now to the problem. Our scenario is that we are facing some Aggregates that are growing into very large AR's. I think I'm good at finding Value objects and avoiding the anemic domain model trap. But I've never been in this situation.
One example is that our system should represent a mobile telecom antenna. The antenna is located on a green field, but it can have a shelter with equipment. The antenna can have microwave links, fiber lines in the ground, radio elements, and a power supply. Face it: if the antenna is terminated, all these dependencies are removed as well, since they are part of the installation (except for the green field :))
But you get the picture. The antenna model is complex, and large ARs are inflexible with regard to concurrency locks, performance and memory consumption.
After reading Vaughn Vernon's very good paper on effective AR design, http://dddcommunity.org/library/vernon_2011/, I realize that we need to start chopping our big ARs up into pieces.
My idea is to do as Vernon suggests and move, for example, MicrowaveLinks out to a separate AR (even if it isn't one in reality).
The MicrowaveLink entity, now an AR, references the Antenna by Id. In the MicrowaveLink entity class we have a value object property, AntennaId.
Our use cases support this scenario. We rarely list antennas and links together, so loading MicrowaveLinks is possible through MicrowaveLinkRepository.ListByAntenna(Guid antennaId).
1) Have you done this AR split before and how did you do it?
2) Did you manage to keep this AR --> AR relationship intact through both domain constraints and the DB (we use EF 5 as ORM)?
My ultimate goal is to be able to skip an Antenna.Microwaves collection on Antenna, so Antenna is not aware of Links. The Links are aware of which Antenna they are mounted on.
And on the MicrowaveLink entity I only want an AntennaId property, with, hopefully, a DB constraint that makes sure the Antenna exists.
I'm aware that I can manually add FK constraints in the Seed method in EF, or directly in the DB through T-SQL scripting. But can this relationship be supported in some way by EF5 Code First fluent mapping?
By the sounds of it you have an Installation AR. When one AR needs to reference another, you should model the contained AR as only its ID in the container, or as a VO if required.
You need to have hard edges around your ARs.
Back to the Order / OrderLine example :)
An OrderLine seems to 'require' a Product, but you shouldn't ever give a Product instance to the OrderLine. Instead only model, say, the ProductName and ProductId as a VO in the OrderLine. Now you have a distinct edge to your Order AR.
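In code, that hard edge might look roughly like this (names are illustrative, no particular framework assumed):

    using System;
    using System.Collections.Generic;

    // Value object copied into the line: identifier plus the data the order needs.
    public sealed class ProductSnapshot
    {
        public Guid ProductId { get; }
        public string ProductName { get; }

        public ProductSnapshot(Guid productId, string productName)
        {
            ProductId = productId;
            ProductName = productName;
        }
    }

    public class OrderLine
    {
        public ProductSnapshot Product { get; }
        public int Quantity { get; }

        public OrderLine(ProductSnapshot product, int quantity)
        {
            Product = product;
            Quantity = quantity;
        }
    }

    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();
        public IReadOnlyList<OrderLine> Lines => _lines;

        // The order never receives a Product aggregate, only the value object.
        public void AddLine(ProductSnapshot product, int quantity)
            => _lines.Add(new OrderLine(product, quantity));
    }

Your MicrowaveLink/AntennaId case has the same shape: the link holds only the AntennaId value object, never an Antenna instance.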
Hope that helps somewhat.
I have been reading about Command Query Responsibility Segregation (CQRS) and how this pattern would suit our current applications.
When it comes to the read model I am well aware of the concepts:
"separating read and write data model", "flat denormalized data returned by the thin read layer". In most cases we are stuck with the same database(the same read/write data model), running on SQL Server with normalized tables, with common layered application on top of it.
So, is there any value in applying CQRS to this kind of scenario?
If so, what would it be when it comes to the read model side?
Another question that comes to mind is about an MVC application requesting information from my thin read layer that exposes flattened-out views. The exposed data still needs to be structured (aggregated) before it is presented to the user, or am I wrong?
Best regards
CQRS doesn't need to have a flattened read model; that is a benefit that CQRS can allow you to provide, but it is neither required nor a key part of the approach.
CQRS is about separation (or segregation if you follow the name). It is the Command Query Separation principle on steroids (in my opinion). The benefits that it provides you (off the top of my head) are:
separation of your read operations from your write operations;
communication between layers via messaging (e.g. commands, events), so that your layers are clean;
separation within your layers, applying the Single Responsibility Principle (e.g. your domain applies business logic, your command handlers route commands, your denormalizers or event handlers (or whatever you call them) persist information to your read store, etc.);
allows you to have team members work on different parts of your application without hard dependencies between them;
etc.
So if those things above are important to you or something you want to strive for (and your application's design supports implementing CQRS), then CQRS provides benefit and value to you.
There are many benefits to CQRS. It's not the right solution for every problem, but when the stars align, it's a nice approach to your problem (even if you don't have a denormalized read store, or an event store, or an async model, etc.).
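To make the separation concrete, here is a bare-bones sketch of commands on one side and queries on the other; every name is invented and no framework is implied:

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Write side: commands mutate state and return nothing useful to the caller.
    public record PlaceOrder(Guid OrderId, Guid CustomerId);

    public interface ICommandHandler<TCommand>
    {
        Task Handle(TCommand command);
    }

    public class PlaceOrderHandler : ICommandHandler<PlaceOrder>
    {
        public Task Handle(PlaceOrder command)
        {
            // Would load the aggregate, apply business rules and persist; omitted here.
            return Task.CompletedTask;
        }
    }

    // Read side: queries return flat DTOs, never domain objects.
    public record OrderSummary(Guid Id, string CustomerName, decimal Total);

    public record ListOrdersForCustomer(Guid CustomerId);

    public interface IQueryHandler<TQuery, TResult>
    {
        Task<TResult> Handle(TQuery query);
    }

    public class ListOrdersForCustomerHandler
        : IQueryHandler<ListOrdersForCustomer, IReadOnlyList<OrderSummary>>
    {
        public Task<IReadOnlyList<OrderSummary>> Handle(ListOrdersForCustomer query)
        {
            // In a real application this would be a thin projection against the
            // read store; an empty list stands in for that here.
            return Task.FromResult<IReadOnlyList<OrderSummary>>(
                Array.Empty<OrderSummary>());
        }
    }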
I hope this helps!
I've fought with multiple joins so many times in my career that when a structure like CQRS and ES came along and offered a clean way to simplify the read side, I jumped at it. The nice thing is that you can get many of the benefits without necessarily implementing all the elements often associated with CQRS and ES. Just separating commands from queries has the benefit of simplifying your code. However, when you do start using a de-normaliser to build out read models for your application, you suddenly realise how simple, clean and performant your app can be.
If it helps to see 'how' this de-normalisation works, take a look at this post (it comes with a code sample to take a gander at): How to build a master details view with CQRS and ES. I hope you find this helpful.
Applying CQRS over the same (say) third normal form database can still give you value on the read side if it allows you to stop projecting read models from domain objects.
This also allows you to better specialise your domain to (I assume) transaction processing, meaning many relationships may not be necessary.
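As a sketch of what that read side can look like over the same normalized tables (entity, context and DTO names are assumptions; EF Core is used purely for illustration):

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    public class Customer { public int Id { get; set; } public string Name { get; set; } = ""; }
    public class OrderLine { public int Id { get; set; } public decimal Price { get; set; } public int Quantity { get; set; } }

    public class Order
    {
        public int Id { get; set; }
        public Customer Customer { get; set; } = null!;
        public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
    }

    public class ReadContext : DbContext
    {
        public DbSet<Order> Orders => Set<Order>();
    }

    public class OrderListItem
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; } = "";
        public decimal Total { get; set; }
    }

    public static class OrderReadQueries
    {
        // Projects straight from the normalized tables into a flat DTO,
        // bypassing the domain objects entirely.
        public static Task<List<OrderListItem>> List(ReadContext db) =>
            db.Orders
              .Select(o => new OrderListItem
              {
                  OrderId = o.Id,
                  CustomerName = o.Customer.Name,
                  Total = o.Lines.Sum(l => l.Price * l.Quantity)
              })
              .ToListAsync();
    }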
We are developing a back-office application with a quite large DB.
It's not reasonable to load everything from the DB into memory, so when a model's properties are requested we read them from the DB (via EF).
But many of our UIs are just simple lists of entities with some (!) properties presented to the user.
For example, we just want to show Id, Title and Name.
And later, when the user selects an item and wants to perform some actions, the whole object is needed. Right now we have the list of items stored in memory.
Some properties contain large texts, images or other data.
EF works with entities and reading a bunch of large objects degrades performance notably.
As far as I understand, the problem can be solved by creating lightweight entities and using them in appropriate context.
First, I'm afraid that each view will make us create a new lightweight entity, and we will eventually end up with a bloated object context.
Second, as the Model wraps EF, we need to provide methods for the various entities.
Third, ViewModels communicate and pass entities to each other.
So I'm stuck with all these considerations and need good architectural design advice.
Any ideas?
For images and large texts you may consider table splitting, which is commonly used to split a table into a lightweight entity and a "heavy" entity.
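A rough sketch of table splitting with classic EF6-style Code First fluent mapping, using made-up entity names:

    using System.Data.Entity;

    public class Document
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public DocumentContent Content { get; set; }   // the "heavy" half
    }

    public class DocumentContent
    {
        public int Id { get; set; }
        public byte[] Image { get; set; }
        public string LargeText { get; set; }
        public Document Document { get; set; }
    }

    public class DocumentsContext : DbContext
    {
        public DbSet<Document> Documents { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Required 1:1 relationship sharing the primary key...
            modelBuilder.Entity<Document>()
                .HasRequired(d => d.Content)
                .WithRequiredPrincipal(c => c.Document);

            // ...and both entities mapped to the same table: table splitting.
            modelBuilder.Entity<Document>().ToTable("Documents");
            modelBuilder.Entity<DocumentContent>().ToTable("Documents");
        }
    }

Queries against Document then stay lightweight; the heavy columns are only loaded when you Include or explicitly load Content.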
But I think what you call lightweight "entities" are data transfer objects (DTO's). These are not supplied by the context (so it won't get bloated) but by projection from entities, which is done in a repository or service.
For projection you can use AutoMapper, especially its newer feature that I describe here. This allows you to reduce the number of methods you need to provide "for various entities" (DTO's), because the type to project to can be given in a generic type parameter.
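As a sketch of that projection with AutoMapper's queryable extensions (entity, DTO and service names are made up, and a CreateMap for each entity/DTO pair is assumed to be registered in the configuration):

    using System.Data.Entity;
    using System.Linq;
    using AutoMapper;
    using AutoMapper.QueryableExtensions;

    public class ItemListDto
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Name { get; set; }
    }

    public class ReadService
    {
        private readonly DbContext _db;
        private readonly IConfigurationProvider _mapperConfig; // holds the CreateMap<...> definitions

        public ReadService(DbContext db, IConfigurationProvider mapperConfig)
        {
            _db = db;
            _mapperConfig = mapperConfig;
        }

        // One generic method serves many list views instead of one method per DTO.
        public IQueryable<TDto> List<TEntity, TDto>() where TEntity : class =>
            _db.Set<TEntity>().ProjectTo<TDto>(_mapperConfig);
    }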
I know that Core Data should not be considered an ORM, but it still offers functionality similar to an ORM. Just curious: does it implement the Data Mapper pattern? I know that "The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other." (Martin Fowler). IMHO the managed object context handles all the SQL stuff in one transaction, so it's a very performance-wise design, and IMHO Core Data might be considered to implement the Data Mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a Data Mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is a clear cut between a domain object and all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database, even if it is just some in-memory store; I do have to load one. Also, there are no mappers for each class, and I have no control over how each relation is stored.
Core Data loads lots of meta-information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will have lots of restrictions on how to do it, with a clear "relational" feeling to it.
The idea is good; we might say Core Data is some variation of it. Something that I do love is that the save operation is done by the context, not by the object itself, so there is some kind of separation.
However, look at functions like "awakeFromFetch" or "didSave": both operations are related to the data store, not to a plain domain object. A proper Data Mapper would let you define those operations per persistent store, not unify them in a single object.
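For contrast, here is a toy Data Mapper. The pattern is language-agnostic, so this is just a C# sketch with made-up names, not Core Data code:

    using System;
    using System.Data;

    // Plain domain object: no base class, no persistence hooks, no store awareness.
    public class Person
    {
        public Guid Id { get; set; }
        public string Name { get; set; } = "";
    }

    // The mapper owns all knowledge about how Person is stored.
    public class PersonMapper
    {
        private readonly IDbConnection _connection;

        public PersonMapper(IDbConnection connection) => _connection = connection;

        public void Insert(Person person)
        {
            using (var command = _connection.CreateCommand())
            {
                command.CommandText = "INSERT INTO people (id, name) VALUES (@id, @name)";
                AddParameter(command, "@id", person.Id);
                AddParameter(command, "@name", person.Name);
                command.ExecuteNonQuery();
            }
        }

        private static void AddParameter(IDbCommand command, string name, object value)
        {
            var parameter = command.CreateParameter();
            parameter.ParameterName = name;
            parameter.Value = value;
            command.Parameters.Add(parameter);
        }
    }

Swapping the store means writing another mapper; Person itself never changes, which is exactly the decoupling I miss in Core Data.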
UPDATE:
Funnily enough, one day after writing my answer I had to deal with an old Core Data based project and had to come back to improve this answer. To make things clear, I do consider that "seems like a pattern" is not enough. For example, the implementations of the Facade and Adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing data mapper?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how automatic migrations work with Core Data, I had forgotten how annoying they are.
How many times do you need some new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With Data Mappers this never happens, because the domain objects are perfectly decoupled. You only touch the database to catch it up with the domain objects after you finish some new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was while I had forgotten that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with Data Mappers.