I have an application that is used at 2 different sites. Each site has its own database.
There are 2 DbContexts, one for each site. When the user opens my application there is a splash page where they choose their site. After choosing the site, the kernel is rebound to use the DbContext for the selected site.
private void RebindDbContext(string site)
{
switch (site)
{
case "Site1":
_kernel.Rebind<DbContext>().To<DbContext1>().InRequestScope();
break;
case "Site2":
_kernel.Rebind<DbContext>().To<DbContext2>().InRequestScope();
break;
}
}
Now for the Bob & Mary explanation:
This works fine when Bob selects site 1, but when Mary then selects site 2, the DbContext for Bob is re-bound to site 2 as well. What I want is for Bob and Mary to be able to use the application at the same time without affecting each other.
I have tried using TransientScope, ThreadScope and InRequestScope, but none of these have worked.
The application is running on an IIS server.
Thanks for any help.
Bindings are intended to be done once per application, not dependent upon runtime state. In this instance, you have a couple of options:
1) A Ninject.Activation.IProvider
public class DbContextProvider : Ninject.Activation.IProvider
{
public Type Type
{
get { return typeof(DbContext); }
}
public object Create(IContext context)
{
var siteProvider = context.Kernel.Get<ISiteProvider>(); // use a provider to find which site is being used
switch (siteProvider.Current)
{
case "Site1":
return new DbContext1(); // or use a factory to create
case "Site2":
            return new DbContext2();
        default:
            // ensure every code path returns a value
            throw new InvalidOperationException("Unknown site: " + siteProvider.Current);
    }
}
}
then:
Bind<DbContext>().ToProvider<DbContextProvider>().InRequestScope();
2) Conditional Binding
The When() modifier has a number of overloads for different conditions, or you could create an extension method if there is one check you use a lot.
Bind<DbContext>().To<DbContext1>()
.When(request => request.ParentContext.Kernel.Get<ISiteProvider>().Current == "Site1")
.InRequestScope();
Bind<DbContext>().To<DbContext2>()
.When(request => request.ParentContext.Kernel.Get<ISiteProvider>().Current == "Site2")
.InRequestScope();
This is a good option if there are only a few conditions under which this binding may be applied. If your logic gets ANY more complex than this, go for the provider. Also note that conditional bindings incur a performance penalty.
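As an aside, the extension-method idea mentioned above could look roughly like this. Treat it as a sketch: the Ninject 3.x syntax interface names and the ISiteProvider are assumptions, not taken from your code.

// requires: using Ninject; using Ninject.Syntax;
public static class SiteBindingExtensions
{
    // Wraps the repeated When() site check so each binding reads as one call.
    public static IBindingInNamedWithOrOnSyntax<T> WhenSiteIs<T>(
        this IBindingWhenSyntax<T> binding, string site)
    {
        return binding.When(request =>
            request.ParentContext.Kernel.Get<ISiteProvider>().Current == site);
    }
}

// usage:
// Bind<DbContext>().To<DbContext1>().WhenSiteIs("Site1").InRequestScope();
// Bind<DbContext>().To<DbContext2>().WhenSiteIs("Site2").InRequestScope();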
3) A ToMethod() Binding
For the simplest binding logic, you can have Ninject run some code each time the binding is resolved:
Bind<DbContext>().ToMethod(context =>
context.Kernel.Get<ISiteProvider>().GetSite());
Basically, which option you select depends on how much logic is involved in deciding which instance to activate. In each case, you can either new() up an instance, or use the IKernel you have access to in order to resolve one:
context.Kernel.Get<DbContext2>();
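Note that ISiteProvider in the snippets above is not a Ninject type; it's an interface you would define yourself. A minimal sketch of the Current property used in options 1 and 2, assuming the splash page stores the chosen site in ASP.NET session state:

public interface ISiteProvider
{
    string Current { get; }
}

public class SessionSiteProvider : ISiteProvider
{
    // Reads the site the current user picked on the splash page, so Bob and
    // Mary each get their own value instead of sharing a kernel-wide binding.
    public string Current
    {
        get { return (string)HttpContext.Current.Session["Site"]; }
    }
}

// bound once at startup:
// Bind<ISiteProvider>().To<SessionSiteProvider>().InRequestScope();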
Here's some official documentation of the activation process:
https://github.com/ninject/Ninject/wiki/Providers%2C-Factory-Methods-and-the-Activation-Context
I am developing an activity module (mod) plugin for Moodle and want to support automatic activity completion with a custom rule. I followed the official documentation and implemented all the necessary functions. In the lib.php [pluginname]_supports method I have registered FEATURE_GRADE_HAS_GRADE, FEATURE_COMPLETION_TRACKS_VIEWS and FEATURE_COMPLETION_HAS_RULES.
The \mod_[pluginname]\completion\custom_completion class defines a custom rule named "completiontest" in get_defined_custom_rules(). During my tests I found that the methods get_state(), get_sort_order() and get_custom_rule_descriptions() are never executed. Also, I don't see any output from activity_information().
I have cleared all caches and created new instances of my activity module, all with no result. My development environment uses Moodle 3.11.7 (Build: 20220509).
My custom_completion.php script:
<?php
declare(strict_types=1);
namespace mod_cwr\completion;
use core_completion\activity_custom_completion;
class custom_completion extends activity_custom_completion {
public function get_state(string $rule): int {
return COMPLETION_INCOMPLETE;
}
public static function get_defined_custom_rules(): array {
return [
'completiontest'
];
}
public function get_custom_rule_descriptions(): array {
return [
'completiontest' => 'testout'
];
}
public function get_sort_order(): array {
return [
'completionview',
'completiontest',
'completionusegrade'
];
}
}
Test code in view.php:
$completion = new completion_info($course);
$completion->set_module_viewed($coursemodule);
if($completion->is_enabled($coursemodule) == COMPLETION_TRACKING_AUTOMATIC){
$completion->update_state($coursemodule, COMPLETION_INCOMPLETE, $USER->id);
}
$completiondetails = \core_completion\cm_completion_details::get_instance($coursemodule, $USER->id);
$activitydates = \core\activity_dates::get_dates_for_module($coursemodule, $USER->id);
echo $OUTPUT->activity_information($coursemodule, $completiondetails, $activitydates);
In mod_form.php I check with completion_rule_enabled() whether a custom rule is activated in the settings.
Does anyone have any idea what the problem could be?
Looking at the mod_forum plugin code showed me that the get_state($rule) method does not observe all custom rules, only those selected in the settings. How do I tell Moodle to use a specific custom rule?
You appear to be calling update_state() and passing in the possible state change as COMPLETION_INCOMPLETE.
This is a way of telling Moodle "if the state is already incomplete, don't bother doing any expensive completion calculations to check if it should change state".
If you want Moodle to check and then (potentially) change the state to "complete", then pass COMPLETION_COMPLETE. If you really don't know which way it could be switching, leave the param at the default COMPLETION_UNKNOWN. Forum completion is a good example: if you have just created a forum post, you might cause the forum to be marked as "complete", but you certainly can't cause it to be marked as "incomplete", so you can pass COMPLETION_COMPLETE and Moodle knows it only needs to check for changes if the forum is not already "complete".
Also, don't bother passing $USER->id as the third parameter - that's the default which is used if you don't pass anything.
As for telling Moodle which rules to use - it is up to you, when your function is called, to check your plugin's settings to determine which rules are in use (and any other relevant configuration - e.g. with mod_forum, it needs to check how many posts are required for completion).
Thank you for the support. Got it running.
I now use $completion->update_state($coursemodule, COMPLETION_COMPLETE); and also had to fix [pluginname]_get_coursemodule_info() by adding $result->customdata['customcompletionrules']['completiontest'] = $cwr->completiontest; (I had totally forgotten the return $result;).
Please forgive my non-native English:
In short: what is the best way for a tenant to override a default IEnumerable<T> registration?
TL;DR: I have a service, ServiceToBeResolved(IEnumerable<IShitty> svcs), that needs an IEnumerable<IShitty> dependency. We found that not all of our tenants register an implementation of IShitty, so in our application container we created a do-nothing NoImplementShitty and registered it as IShitty to serve as a default and keep the resolve process happy. With that in place we get the tenant-specific implementation when a tenant has a registration, and the default when a tenant forgot to register one. But we soon found that ServiceToBeResolved receives both the tenant's registered IShitty implementations and the default NoImplementShitty in its IEnumerable dependency. What I really want for the IEnumerable<IShitty> dependency is: use only what the tenant registered (one or more), and fall back to the default NoImplementShitty only if the tenant registered nothing. I have played with .OnlyIf(), OnlyIfRegistered() and .PreventDefault() on the app container, and it really doesn't help, since Autofac builds the defaults first and then the tenant registrations. I could certainly register NoImplementShitty for every tenant that is missing an IShitty registration, but that doesn't seem to take advantage of the multi-tenant override-default feature.
To be more specific, in our base AgreementModule we have:
builder.RegisterType<NoOpAgreementHandler>() // NoOpAgreementHandler is the default IShitty implementation
.As<IAgreementHandler>()
.InstancePerLifetimeScope();
In our tenantA, we have
public class TenantAContainerBuilder : ITenantContainerBuilder
{
public virtual object TenantId => "1";
public virtual void Build(ContainerBuilder builder)
{
builder.RegisterType<TenantAAgreementHandler>()
.As<IAgreementHandler>()
.InstancePerLifetimeScope();
}
}
We build the container as below:
var appContainer = builder.Build();
var tenantIdentifier = new ManualTenantIdentificationStrategy(); // We have our own strategy; I just use ManualTenantIdentificationStrategy here as an example
var multiTenantContainer = new MultitenantContainer(tenantIdentifier, appContainer);
// GetTenantContainerBuilders basically returns all the tenant builders, like TenantAContainerBuilder above
foreach (IGrouping<object, ITenantContainerBuilder> source in GetTenantContainerBuilders().GroupBy(x => x.TenantId))
{
var configurationActionBuilder = new ConfigurationActionBuilder();
configurationActionBuilder.AddRange(source.Select(x => new Action<ContainerBuilder>(x.Build)));
multiTenantContainer.ConfigureTenant(source.Key, configurationActionBuilder.Build());
}
When trying to resolve the service, if we do:
public DisbursementAgreementManager(IEnumerable<IAgreementHandler> agreementHandlers)
{
_agreementHandlers = agreementHandlers;
}
The agreementHandlers will be an IEnumerable containing both NoOpAgreementHandler and TenantAAgreementHandler. It seems weird to get NoOpAgreementHandler; I thought we would only get TenantAAgreementHandler. But if we change DisbursementAgreementManager to
public DisbursementAgreementManager(IAgreementHandler agreementHandler)
{
_agreementHandler = agreementHandler;
}
We will get only the TenantAAgreementHandler, which is what we expect.
The default behavior of Autofac is there for a reason. Asking it to behave differently would mean adding application logic at the dependency-injection level, which violates the separation of concerns (DI should only inject dependencies), leads directly to surprising behavior ("Why did DI not inject every available component?"), and undercuts the maintainability of the system.
This may be a non-issue.
The logic is self-contained inside each IAgreementHandler.
If so, at the point where they are invoked by DisbursementAgreementManager, they are all called and each performs its own logic (which may include a decision about whether to do anything at all). E.g.:
foreach (var ah in _agreementHandlers) ah.Agree(disbursementInfo);
or maybe something like
foreach (var ah in _agreementHandlers.Where(a => a.ShouldRun(data) || overridingCondition))
{
var agreement = ah.Agree(info);
this.Process(agreement);
}
or whatever. The point is that if NoOpAgreementHandler is doing what it is supposed to (that is, nothing) then it should have no effect when it is called. No problem.
If the situation is other than described, then NoOpAgreementHandler and possibly IAgreementHandler need to be refactored.
There is another point of concern:
The reason we add the no-op is that we have unit tests for registration/resolve in order to make sure all registrations are properly configured.
Your testing requirements are bleeding into your primary logic. These DI configuration tests should be independent of the production DI configuration. NoOpAgreementHandler shouldn't even be in your primary project, just a member of the unit test project.
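A rough sketch of what such a registration test could look like once the no-op lives only in the test assembly. The NUnit-style attributes and the RegisterApplicationModules helper are assumptions, not your actual code:

[Test]
public void Container_resolves_DisbursementAgreementManager()
{
    var builder = new ContainerBuilder();
    RegisterApplicationModules(builder);            // hypothetical: your production modules, minus the no-op
    builder.RegisterType<NoOpAgreementHandler>()    // test-only fallback, defined in the test project
           .As<IAgreementHandler>()
           .InstancePerLifetimeScope();
    builder.RegisterType<DisbursementAgreementManager>();

    using (var container = builder.Build())
    {
        // the resolve succeeds even for a "tenant" that registers no IAgreementHandler of its own
        Assert.NotNull(container.Resolve<DisbursementAgreementManager>());
    }
}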
Just a question to poll how you guys would tackle this in Laravel:
I have a user preferences page defined in UserController.php which creates the view at user/preferences.blade.php.
Administrators should of course be able to edit the user's preferences and see some extra administrative fields to change. Furthermore, I'd like to collect all admin functionality concerning users in a separate controller called AdminUserController.php.
I'm thinking of some possibilities to achieve this functionality:
Create an additional view (e.g. admin/user/preferences.blade.php) and almost replicate the GET and POST methods of UserController.php to accommodate the extra fields. However, this seems to me like a lot of redundant code...
Convert the GET and POST methods of UserController.php to something like this:
public function postPreferences($user = NULL, $admin = FALSE) {
if (!isset($user)) $user = Auth::user();
// Process regular fields.
if ($admin) {
// Process admin fields.
}
}
Add the admin fields to user/preferences.blade.php and conditionally show them if $admin is TRUE, and then call the UserController's methods from within AdminUserController, e.g.:
public function postPreferences($user) {
return (new UserController)->postPreferences($user, TRUE);
}
However, there are some drawbacks. First: controllers shouldn't call each other... Second: this only works for the POST method. Upon requesting the GET method from UserController, an exception is thrown...
I'm curious about how you would tackle this!
Thanks.
This is mostly a question of preference, but I really suggest you completely separate everything you can here. Administration is a very sensitive process, and it should not in any way be possible for a normal user to see it under any circumstance.
We are all human and make mistakes more or less often; that's why we need to make sure that a mistake such as assigning the wrong value to a variable, or typing = instead of ==, doesn't ruin the business logic.
I think you should make a separate view and a separate controller for user self-management and for administration, and never tie them together. If you want to keep your code as DRY as possible, you may extend your admin controller and model from your user controller and model (as in the sketch below), but not the other way around. That's just my 2 cents; it all depends on what type of application you are working on and what the stakes are.
<?php
class AdminController extends UserController
{
public function __construct(AdminModel $model)
{
// Use dependency injection
$this->model = $model;
}
// Inherited from the original UserController class (shown here for reference):
public function postPreferences($user) {
    $this->model->edit($user, Input::all());
    // You may do it this way so your user model only saves user data and
    // your admin model saves all the data, including the administrative fields.
}
// ...
}
I have read so much (dozens of posts) about one thing:
How to unit test business logic code that has Entity Framework code in it.
I have a WCF service with 3 layers:
Service Layer
Business Logic Layer
Data Access Layer
My business logic uses the DbContext for all the database operations.
All my entities are now POCOs (used to be ObjectContext, but I changed that).
I have read Ladislav Mrnka's answers here and here on the reasons why we should not mock/fake the DbContext.
He said:
"That is the reason why I believe that code dealing with context / Linq-to-entities should be covered with integration tests and work against the real database."
and:
"Sure, your approach works in some cases but unit testing strategy must work in all cases - to make it work you must move EF and IQueryable completely from your tested method."
My question is: how do you achieve this?
public class TaskManager
{
public void UpdateTaskStatus(
Guid loggedInUserId,
Guid clientId,
Guid taskId,
Guid chosenOptionId,
Boolean isTaskCompleted,
String notes,
Byte[] rowVersion
)
{
using (TransactionScope ts = new TransactionScope())
{
using (CloseDBEntities entities = new CloseDBEntities())
{
User currentUser = entities.Users.SingleOrDefault(us => us.Id == loggedInUserId);
if (currentUser == null)
throw new Exception("Logged user does not exist in the system.");
// Locate the task that is attached to this client
ClientTaskStatus taskStatus = entities.ClientTaskStatuses.SingleOrDefault(p => p.TaskId == taskId && p.Visit.ClientId == clientId);
if (taskStatus == null)
throw new Exception("Could not find this task for the client in the database.");
if (taskStatus.Visit.CustomerRepId.HasValue == false)
throw new Exception("No customer rep is assigned to the client yet.");
TaskOption option = entities.TaskOptions.SingleOrDefault(op => op.Id == chosenOptionId);
if (option == null)
throw new Exception("The chosen option was not found in the database.");
if (!taskStatus.RowVersion.SequenceEqual(rowVersion)) // byte arrays must be compared by content, not by reference
throw new Exception("The task was updated by someone else. Please refresh the information and try again.");
taskStatus.ChosenOptionId = chosenOptionId;
taskStatus.IsCompleted = isTaskCompleted;
taskStatus.Notes = notes;
// Save changes to database
entities.SaveChanges();
}
// Complete the transaction scope
ts.Complete();
}
}
}
In the code attached there is a demonstration of a function from my business logic.
The function has several 'trips' to the database.
I don't understand how exactly I can strip the EF code out of this function into a separate assembly, so that I am able to unit test this function (by injecting some fake data instead of the EF data) and integration-test the assembly that contains the 'EF functions'.
Can Ladislav or anyone else help out?
[Edit]
Here is another example of code from my business logic. I don't understand how I can 'move the EF and IQueryable code' out of my tested method:
public List<UserDto> GetUsersByFilters(
String ssn,
List<Guid> orderIds,
List<MaritalStatusEnum> maritalStatuses,
String name,
int age
)
{
using (MyProjEntities entities = new MyProjEntities())
{
IQueryable<User> users = entities.Users;
// Filter By SSN (check if the user's ssn matches)
if (String.IsNullOrEmpty(ssn) == false)
users = users.Where(us => us.SSN == ssn);
// Filter By Orders (check if the user has all the orders in the list)
if (orderIds != null)
users = users.Where(us => UserContainsAllOrders(us, orderIds));
// Filter By Marital Status (check if the user has a marital status that is in the filter list)
if (maritalStatuses != null)
users = users.Where(us => maritalStatuses.Contains((MaritalStatusEnum)us.MaritalStatus));
// Filter By Name (check if the user's name matches)
if (String.IsNullOrEmpty(name) == false)
users = users.Where(us => us.name == name);
// Filter By Age (check if the user's age matches)
if (age > 0)
users = users.Where(us => us.Age == age);
return users.ToList();
}
}
private Boolean UserContainsAllOrders(User user, List<Guid> orderIds)
{
return orderIds.All(orderId => user.Orders.Any(order => order.Id == orderId));
}
If you want to unit test your TaskManager class, you should employ the Repository design pattern and inject repositories such as UserRepository or ClientTaskStatusRepository into this class. Then, instead of constructing a CloseDBEntities object, you will use these repositories and call their methods, for example:
User currentUser = userRepository.GetUser(loggedInUserId);
ClientTaskStatus taskStatus =
clientTaskStatusRepository.GetTaskStatus(taskId, clientId);
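A minimal sketch of the shape this could take (the interface and method names are assumptions, not an existing API):

public interface IUserRepository
{
    User GetUser(Guid userId);
}

public interface IClientTaskStatusRepository
{
    ClientTaskStatus GetTaskStatus(Guid taskId, Guid clientId);
}

public class TaskManager
{
    private readonly IUserRepository _userRepository;
    private readonly IClientTaskStatusRepository _clientTaskStatusRepository;

    // The repositories are injected, so unit tests can pass in fakes,
    // while the production wiring passes EF-backed implementations.
    public TaskManager(IUserRepository userRepository,
                       IClientTaskStatusRepository clientTaskStatusRepository)
    {
        _userRepository = userRepository;
        _clientTaskStatusRepository = clientTaskStatusRepository;
    }
}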
If you want to integration-test your TaskManager class, the solution is much simpler. You just need to initialize the CloseDBEntities object with a connection string pointing to the test database, and that's it. One way to achieve this is to inject the CloseDBEntities object into the TaskManager class.
You will also need to re-create the test database before each integration test run and populate it with some test data. This can be achieved using a database initializer.
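For example, with EF's built-in initializers it might look roughly like this (assuming CloseDBEntities is a Code First-style DbContext; the seed data is just a placeholder):

// requires: using System.Data.Entity;
public class TestDatabaseInitializer : DropCreateDatabaseAlways<CloseDBEntities>
{
    protected override void Seed(CloseDBEntities context)
    {
        // insert whatever rows the integration tests expect to find
        context.Users.Add(new User { Id = Guid.NewGuid() });
        context.SaveChanges();
    }
}

// run once before the integration tests:
// Database.SetInitializer(new TestDatabaseInitializer());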
There are several misunderstandings here.
First: The Repository Pattern. It's not just a facade over DbSet for unit testing! The repository is a pattern strongly related to the Aggregate and Aggregate Root concepts of Domain-Driven Design. An aggregate is a set of related entities that should stay consistent with each other. I mean business consistency, not just foreign-key validity. For example: a customer who has made 2 orders should get a 5% discount. So we should somehow manage the consistency between the number of order entities related to a customer entity and the discount property of the customer entity. The node responsible for this is the aggregate root. It is also the only node that should be accessible directly from outside of the aggregate. And the repository is a utility for obtaining an aggregate root from some (maybe persistent) storage.
A typical use case is to create a UoW/Transaction/DbContext/WhateverYouNameIt, obtain one aggregate root entity from the repository, call some methods on it or access some other entities by traversing from the root, and then Commit/SaveChanges/whatever. Note how far this differs from your samples.
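Very roughly, that flow looks something like this (all the names here are invented for illustration, not your model):

using (var unitOfWork = unitOfWorkFactory.Create())       // UoW/DbContext/transaction
{
    var customer = customerRepository.Get(customerId);     // one aggregate root
    customer.PlaceOrder(newOrder);                         // business method; keeps the 5% discount rule consistent
    unitOfWork.Commit();                                   // SaveChanges/Commit/whatever
}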
Second: The Business Logic. I've already shown you one example: a customer who has made 2 orders should get a 5% discount. By contrast, your second code sample is not business logic. It's just a query. The responsibility of this code is to obtain some data from the storage. In such a case, the storage technology behind it does matter, so I would recommend integration tests here rather than pretending the storage doesn't matter when interacting with the storage is the sole purpose of this function.
I would also encapsulate that in a Query Object, as was already suggested. Then such a query object can be mocked as a whole, not just the DbContext behind it.
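For your GetUsersByFilters example, the query object could be shaped roughly like this (the interface name is made up, and the mapping to UserDto is elided):

public interface IUsersByFiltersQuery
{
    List<UserDto> Execute(String ssn, List<Guid> orderIds,
                          List<MaritalStatusEnum> maritalStatuses,
                          String name, int age);
}

// The EF-backed implementation keeps all the IQueryable filtering from your
// sample and is covered by integration tests against a real database.
// Callers depend only on the interface, so unit tests can mock the whole
// query object instead of the DbContext.
public class UsersByFiltersQuery : IUsersByFiltersQuery
{
    public List<UserDto> Execute(String ssn, List<Guid> orderIds,
                                 List<MaritalStatusEnum> maritalStatuses,
                                 String name, int age)
    {
        using (MyProjEntities entities = new MyProjEntities())
        {
            IQueryable<User> users = entities.Users;

            // ... apply the same filters as in your GetUsersByFilters method ...

            // placeholder mapping; project to UserDto however you do today
            return users.Select(us => new UserDto()).ToList();
        }
    }
}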
The first code sample is a bit better because it probably involves some business logic, but that logic is difficult to identify. Which leads us to the third problem.
Third: Anemic Domain Model. Your domain doesn't look very object-oriented. You have some dumb entities and transaction scripts over them. With 7 parameters! That's pure procedural programming.
Moreover, in your UpdateTaskStatus use case: what is the aggregate root? Before you answer that, the most important question first: what exactly do you want to do? Is it... hmm... marking the current task of a user as done when he was visited? Then maybe there should be a Visit() method inside a Customer entity, and this method should do something like this.CurrentTaskStatus.IsCompleted = true?
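A sketch of that guess in code (again, just an illustration of the shape, not your actual model):

public class Customer
{
    public virtual ClientTaskStatus CurrentTaskStatus { get; protected set; }

    // A business operation expressed in domain language. No DbContext here;
    // the entity stays persistence-ignorant, so this method is trivially unit-testable.
    public void Visit()
    {
        if (CurrentTaskStatus == null)
            throw new InvalidOperationException("The customer has no current task.");

        CurrentTaskStatus.IsCompleted = true;
    }
}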
That was just a random guess. If I guessed wrong, that clearly shows another issue: the domain model should use the ubiquitous language, something common to the programmer and the business. Your code doesn't have the expressive power that a common language gives. I just don't know what is going on in UpdateTaskStatus with its 7 parameters.
If you place proper, expressive methods for performing business operations in your entities, that will also force you not to use the DbContext there at all, as you need your entities to stay persistence-ignorant. Then the problem with mocking disappears: you can test the pure business logic without persistence concerns.
So the final word: Reconsider your model first. Make your API expressive by using ubiquitous language first.
PS: Please don't treat me as an authority. I may be completely wrong as I'm just starting to learn DDD.
I've got two entities with a 1-to-N relation between them. Let's say Books and Pages.
Book has a navigation property named Pages. Book has BookId as an identifier, and Page has an auto-generated id and a scalar property named PageNo. LazyLoading is set to true.
I've generated this using VS2010 & .net 4.0 and created a database from that.
In the partial class of Book, I need a GetPage function like the one below:
public Page GetPage(int PageNumber)
{
// checks for existence etc. are not included for simplicity
return Pages.Where(p=>p.PageNo==PageNumber).First();
}
This works. However, since the Pages property of Book is an EntityCollection, it has to load all pages of a book into memory in order to get the one page (this slows down the app when this function is hit for the first time for a given book). That is, the framework does not merge the queries and run them as one; it loads the Pages into memory and then uses LINQ to Objects to do the second part.
To overcome this I've changed the code as follows:
public Page GetPage(int PageNumber)
{
MyContainer container = new MyContainer();
return container.Pages.Where(p=>p.PageNo==PageNumber && p.Book.BookId==BookId).First();
}
This works considerably faster; however, it doesn't take into account the pages that have not yet been persisted to the db.
So, both options have their cons. Is there any trick in the framework to overcome this situation? This must be a common scenario: you don't want all of the objects of a navigation property loaded into memory when you don't need them.
Trick? Besides "Try both?"
public Page GetPage(int pageNumber)
{
// check local values, possibly not persisted yet.
// works fine if nothing loaded.
var result = Pages.Where(p => p.PageNo == pageNumber).FirstOrDefault();
if (result != null)
{
return result;
}
// check DB if nothing found
MyContainer container = new MyContainer();
return container.Pages.Where(p => p.PageNo == pageNumber && p.Book.BookId==BookId).First();
}
There's nothing to do this automatically except for the specific case of loading by the PK value, for which you can use ObjectContext.[Try]GetObjectByKey.
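For completeness, that looks roughly like this (assuming the entity container is named MyContainer, the entity set is Pages and the key property is Id; it only helps when you really are loading by the primary key, not by PageNo):

var key = new EntityKey("MyContainer.Pages", "Id", pageId);
object value;
if (container.TryGetObjectByKey(key, out value))
{
    // checks entities already tracked by the context first, then falls back to the database
    return (Page)value;
}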