Class Engine has a "start(c:Component)" method. So do we need to draw an association between the Engine and Component classes if there is no "new Component()" inside the Engine class?
No, in general you do not need an association to a type just because that type appears as a parameter. It depends entirely on whether the state of an Engine maintains a relationship with one or more Components.
If the Component you pass in is only used locally in the start method, then there is no real association that persists from one state (one method call) to the next.
This is not an association; it's a dependency relationship between the two. A dependency means that if the dependee (the Component in your case) changes, the depender (the Engine) may be affected (perhaps Engine::start was using a Component method that is no longer available or that has changed its parameters).
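To make the distinction concrete, here is a minimal C# sketch; the Initialize method and the field-based variant are invented purely for illustration:

```csharp
// Dependency only: Engine uses the Component inside Start(), but keeps no
// reference to it afterwards. In UML this is the dashed dependency arrow.
public class Component
{
    public void Initialize() { /* hypothetical operation */ }
}

public class Engine
{
    public void Start(Component c)
    {
        c.Initialize();   // used locally; nothing survives the call
    }
}

// Association: Engine holds a Component as part of its state, so the
// relationship persists from one method call to the next. In UML this is
// the solid association line.
public class EngineWithAssociation
{
    private readonly Component _component;

    public EngineWithAssociation(Component component) => _component = component;

    public void Start() => _component.Initialize();
}
```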
I need to add a global filter to a repository entity, i.e. it has to be applied everywhere this entity is accessed in the application service layer. My filter contains two conditions. While adding the first condition, which depends on a constant, is easy and is applied in OnModelCreating using HasQueryFilter, I have no idea how to automatically apply the second one, which depends on the currently selected (or default) UI language.
Use constructor injection in your DbContext class: define an interface that exposes the currently selected UI language, set the language inside the class implementing that interface, and inject that implementation into the context. Then use it in the OnModelCreating method to apply the filter globally with .HasQueryFilter(), like you normally would.
If you're using something like a .NET Core API, you could build a middleware that determines the language of the current incoming request. I guess the same will work for MVC too.
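As a rough sketch of the constructor-injection approach, assuming EF Core; the ICurrentLanguageProvider interface and the Article entity are invented names for illustration:

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical abstraction over "the currently selected UI language";
// a middleware or the UI layer would set it per request/session.
public interface ICurrentLanguageProvider
{
    string CurrentLanguage { get; }
}

// Illustrative entity; only Language and IsPublished matter for the filter.
public class Article
{
    public int Id { get; set; }
    public string Language { get; set; } = "";
    public bool IsPublished { get; set; }
}

public class AppDbContext : DbContext
{
    private readonly string _currentLanguage;

    public AppDbContext(DbContextOptions<AppDbContext> options,
                        ICurrentLanguageProvider languageProvider)
        : base(options)
    {
        // The context is normally scoped (one per request/operation), so
        // capturing the language once per context instance is enough.
        _currentLanguage = languageProvider.CurrentLanguage;
    }

    public DbSet<Article> Articles => Set<Article>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Both conditions in one global filter: the constant one and the
        // language-dependent one; applied everywhere Article is queried.
        modelBuilder.Entity<Article>()
            .HasQueryFilter(a => a.IsPublished && a.Language == _currentLanguage);
    }
}
```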
My application is broken down into several assemblies.
The MyProject.Infrastructure assembly contains all of the Domain objects such as Person and Sale, as well as repository interfaces such as IPersonRepository and ISaleRepository.
The MyProject.Data assembly contains concrete implementations of these repositories.
The repositories pull data from a database and instantiate new domain classes. For example, IPersonRepository.GetPersonByNumber(customerNumber) will read a customer from the data source, create a new Person instance, populate it, and return it to the caller.
I'm now starting to see cases where adding some methods to my Domain classes might make sense, such as Person.UpdateAddress(address).
Is it ok to put this method on my Person class as a virtual method, and then create derived classes inside my Data layer which override those methods to provide the desired functionality?
I want to make sure I'm not breaking any DDD conventions.
I know I also have the option of putting these methods on the repository - e.g. IPersonRepository.UpdatePersonAddress(person, address).
Person.UpdateAddress should definitely be in your domain, not in your repository. UpdateAddress is logic, and you should try to avoid logic inside your repository. If you're working with Entity Framework, there is no need for 'data classes'.
Most ORMs have change trackers which will persist related entities automatically when you persist the main one (provided you declared the right relations in the mapping configuration), so you don't need UpdatePersonAddress() on your Repository. Just do whatever you want to do at the object level in Person.UpdateAddress(address) without thinking about persistence, this is not the place for that.
What you need though is an object that will be called in execution context-aware code to flush changes to the persistent store when you think it's time to save these changes. It might be a Unit of Work that contains the Entity Framework DbContext, for instance.
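A minimal sketch of that separation, assuming EF-style change tracking; the Address type, the handler, and the IUnitOfWork interface are illustrative names, not prescribed by DDD:

```csharp
using System;

// Domain layer: no persistence concerns here.
public class Person
{
    public string Number { get; private set; } = "";
    public Address Address { get; private set; } = Address.Empty;

    public void UpdateAddress(Address newAddress)
    {
        if (newAddress is null) throw new ArgumentNullException(nameof(newAddress));
        // Any business rules about address changes live here.
        Address = newAddress;
    }
}

public record Address(string Street, string City, string PostalCode)
{
    public static readonly Address Empty = new("", "", "");
}

public interface IPersonRepository
{
    Person GetPersonByNumber(string customerNumber);
}

// Illustrative unit-of-work abstraction (it could simply wrap the EF DbContext).
public interface IUnitOfWork
{
    void SaveChanges();
}

// Application layer: load the aggregate, call domain logic, then flush.
public class ChangePersonAddressHandler
{
    private readonly IPersonRepository _people;
    private readonly IUnitOfWork _unitOfWork;

    public ChangePersonAddressHandler(IPersonRepository people, IUnitOfWork unitOfWork)
    {
        _people = people;
        _unitOfWork = unitOfWork;
    }

    public void Handle(string customerNumber, Address newAddress)
    {
        var person = _people.GetPersonByNumber(customerNumber);
        person.UpdateAddress(newAddress);   // pure domain logic, no persistence
        _unitOfWork.SaveChanges();          // change tracker persists the update
    }
}
```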
I'm using Prism framework with EF in WPF application.
ViewModel:
keeps service references (passed in by the Unity container).
Services:
provide "high level" operations on data
keep a reference to a Repository, which provides basic CRUD operations against the database (a single table per repository).
Repository:
every method in the repository uses the "using" pattern, where I work with a short-lived object context.
This is where I got stuck: after the object context is disposed, I can no longer work with the mapped properties. My database model is complex (many related tables), and the many .Include() calls needed when retrieving data keep the code dirty.
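A compressed sketch of that pattern (assuming EF Core here; the entity names and connection setup are placeholders):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class Order
{
    public int Id { get; set; }
    public Customer? Customer { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("...connection string...");   // placeholder
}

public class OrderRepository
{
    // Every method opens and disposes its own short-lived context.
    public Order GetById(int id)
    {
        using var context = new ShopContext();
        // Each related table needs its own .Include(); navigation properties
        // that were not included cannot be loaded once the context is disposed.
        return context.Orders
                      .Include(o => o.Customer)
                      .Single(o => o.Id == id);
    }
}
```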
After reading several threads, I discovered that "unit of work" pattern is probably what I need.
Here comes my question:
Who keeps the reference to the unit of work (and therefore to the context)?
If I choose the context-per-view approach, the ViewModel should hold the context reference.
How can I inject the unit of work into my services then? Or should I create a new instance of the Service in the ViewModel and pass the context as a constructor parameter?
We're using a similar architecture in a project:
Each ViewModel gets its own Service object, which is injected in the constructor (at least the top-level ones that directly correspond to a View; some hierarchical ViewModels may reuse their parent's Service, but let's keep it simple here).
By default, each Service operation creates a new context, but...
Services have BeginContext and EndContext methods which can be called by the ViewModels to keep the context open over multiple operations.
This worked pretty well for us. Most of the time we call BeginContext when a View is opened and EndContext when it is closed.
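A stripped-down sketch of that idea, reusing the hypothetical ShopContext and Customer types from the sketch above; the details are illustrative, not the exact code from the project:

```csharp
public class CustomerService
{
    private ShopContext? _sharedContext;   // non-null while a View keeps it open

    // Called by the ViewModel when its View is opened.
    public void BeginContext() => _sharedContext ??= new ShopContext();

    // Called by the ViewModel when its View is closed.
    public void EndContext()
    {
        _sharedContext?.Dispose();
        _sharedContext = null;
    }

    public Customer GetCustomer(int id)
    {
        if (_sharedContext != null)
            return _sharedContext.Customers.Find(id)!;   // shared, long-lived context

        // Default behaviour: a fresh context per operation.
        using var context = new ShopContext();
        return context.Customers.Find(id)!;
    }
}
```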
We are developing a CMS based on JCR/Sling/JSP/Felix/etc.
What I found so far is that using Nodes is very straightforward and flexible. But my concern is that over time it could become too hard to maintain and manage.
So, is it wise to invest in using an OCM? Would it just be an extra layer of complexity? What's the real benefit of OCM, if there is any? Or is it better for us to stick to Nodes instead?
And lastly, is Jackrabbit OCM the best option for us if we are to go down that path?
Thank you.
In my personal experience, whether OCM is a useful tool for your project or not depends heavily on your situation.
The real problem with OCM (in my personal experience) arises when the definition of a class used for existing persisted data (objects) in the repository has changed. For example, you found it necessary to change some members and methods of a class to match functionality changes, so the class definition behind the persisted data objects in the repository no longer matches the definition of the actual class. When a persisted object is saved to the JCR repository, it is usually saved in a format that Java understands in terms of serialization. This means that when something changes in the definition of the class, the saved data in the repository can no longer be correctly interpreted by Java. This issue tends to lead to complex deployments, where you need to convert old persisted data objects to the new definition and save them again in the repository to make sure the "old" but still required persisted data remains usable.
What does work (in my opinion) is using a framework that lets you map nodes and node properties to Java objects directly (for example by using annotations), and the other way around (persist a Java object to the repository as a JCR node where the Java member fields become actual node properties). This way you stick to the data representation of JCR (nodes with properties) and can still map them to the members of a Java class.
I've used a framework like this before in a CMS called AEM (from Adobe), although I must mention this was in an OSGi context (but the principle still stands). The framework basically allowed maximum flexibility and persisted the Java object as a JCR node and the other way around. Because it mapped directly to the JCR definition, code changes in the class and its members meant just changing annotations, and old persisted data was still usable without much effort.
I would like to experiment with an aspect of encapsulation that I read about once, where an entity object includes the domains of its attributes, e.g. for its CostCentre property it contains the list of valid cost centres. This way, when I open an edit form for an Extension, I only need to pass the form one Extension object, whereas I normally also have to access a CostCentre object when initialising the form.
This also applies where I have a list of Extensions bound to a grid (Telerik RadGrid) and I handle an edit command on the grid. I want to create an edit form and pass it an Extension object, whereas currently I pass the edit form an ExtensionID and create the object inside the form.
What I'm actually asking for here are pointers to guidance on doing it this way, or on the 'proper' way of achieving something similar to what I have described.
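One possible reading of "the entity includes domains for its attributes" in code; Extension and CostCentre come from the question, everything else is invented for the sketch:

```csharp
using System.Collections.Generic;

public class CostCentre
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class Extension
{
    public int ExtensionID { get; set; }
    public CostCentre? CostCentre { get; set; }

    // The "domain" of the CostCentre attribute: the list of valid values,
    // carried by the entity itself so an edit form only needs the Extension.
    public IReadOnlyList<CostCentre> ValidCostCentres { get; }

    public Extension(IReadOnlyList<CostCentre> validCostCentres)
    {
        ValidCostCentres = validCostCentres;
    }
}

// The edit form (or the grid's edit command handler) can then be handed a
// single object and bind its drop-down straight to extension.ValidCostCentres.
```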
It would depend on your data source. If you are retrieving the list of Cost Centers from a database, that would be one approach. If it's a short list of predetermined values (like Yes/No/Maybe So) then property attributes might do the trick. If it needs to be more configurable per-environment, then IoC or the Provider pattern would be the best choice.
I think your problem is similar to a custom ad-hoc search page we did on a previous project. We decorated our entity classes and properties with attributes that contained some predetermined 'pointers' to the lookup value methods, and their relationships. Then we created a single custom UI control (like your edit page described in your post) which used these attributes to generate the drop down and auto-completion text box lists by dynamically generating a LINQ expression, then executing it at run-time based on whatever the user was doing.
This was accomplished with basically three moving parts: A) the attributes on the data access objects, B) the 'attribute facade' methods at the middle tier compiling and generating dynamic LINQ expressions, and C) the custom UI control that called our middle-tier service methods.
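A rough sketch of parts A and B, with invented attribute, entity, and method names, and plain reflection standing in for the dynamically generated LINQ expressions described above:

```csharp
using System;
using System.Reflection;

// A) Attribute on the data access object pointing at its lookup source.
[AttributeUsage(AttributeTargets.Property)]
public class LookupAttribute : Attribute
{
    public string LookupMethod { get; }
    public LookupAttribute(string lookupMethod) => LookupMethod = lookupMethod;
}

public class Asset
{
    [Lookup(nameof(LookupService.GetCostCentres))]
    public string CostCentre { get; set; } = "";
}

public class LookupService
{
    public string[] GetCostCentres() => new[] { "Sales", "Support", "R&D" };
}

// B) "Attribute facade" at the middle tier: reads the attribute and invokes
// the referenced lookup method to feed a drop-down or auto-completion box.
public static class AttributeFacade
{
    public static string[] GetLookupValues<TEntity>(string propertyName, LookupService lookups)
    {
        var property = typeof(TEntity).GetProperty(propertyName)
                       ?? throw new ArgumentException($"Unknown property {propertyName}");
        var attribute = property.GetCustomAttribute<LookupAttribute>()
                        ?? throw new InvalidOperationException($"No lookup on {propertyName}");

        var method = typeof(LookupService).GetMethod(attribute.LookupMethod)!;
        return (string[])method.Invoke(lookups, null)!;
    }
}

// C) The UI control would call something like:
//   var values = AttributeFacade.GetLookupValues<Asset>("CostCentre", new LookupService());
```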
Sometimes plans like these backfire, but in our case it worked great. Decorating our objects with attributes and then creating a single path of logic gave us just enough power to do what we needed while minimizing the amount of code required, and it completely eliminated any boilerplate. However, this approach was not very configurable: by compiling these attributes into the code, we tightly coupled our application to the data source. On this particular project that wasn't a big deal, because it was a client's internal system and it fit the project timeline. On a "real product", though, implementing the logic with the Provider pattern or using something like the Castle Project's IoC container would have given us the same power with a great deal more configurability. The downside is that there is more to manage, and more that can go wrong with deployments, etc.