Renewing instances in Autofac - entity-framework

I know that the entire context of this issue is a bit specific, but I'll try to do my best explaining it. I'm performing quite a big import from another ecommerce platform into nopCommerce.
nopCommerce uses Autofac as its dependency injection container. Importing one product into nopCommerce involves some queries over nopCommerce tables and finally an insertion into the products table. These steps are repeated many times, and the Entity Framework context keeps growing, as it has to track more and more entities, detect their changes and figure out which objects it has to persist.
What I want to do is renew the context in every iteration of the loop, so it only tracks the entities associated with the current iteration. Obviously I want to achieve this while modifying the nopCommerce core as little as possible. In the container configuration it is explicitly set that EF context instances are given per HTTP request (something I want to avoid, as I need a new instance per iteration).
An easy way to do it would be:
foreach job in jobs
    Eject all instances in container
    service1 = Container.RequestInstance<SomeServiceINeed>
    service2 = Container.RequestInstance<SomeServiceINeed2>
    DoTheJob
The thing is, I don't know how to accomplish this with Autofac. I have been trying to create a new ContainerBuilder and update the existing container, but _context.GetHashCode() always returns the same value, so I'm still getting the same context instance.
Any idea about the best way to do it?
EDIT:
As it was suggested in the comments, I've tried to get the instances inside a lifetime scope. Basically:
using (var lifeTime = EngineContext.Current.ContainerManager.Container.BeginLifetimeScope())
{
    service1 = lifeTime.Resolve<SomeServiceINeed>();
    service2 = lifeTime.Resolve<SomeServiceINeed2>();
    // ...
}
But I get this exception:
No scope with a Tag matching 'AutofacWebRequest' is visible from the scope in
which the instance was requested. This generally indicates that a component
registered as per-HTTP request is being requested by a SingleInstance() component
(or a similar scenario.) Under the web integration always request dependencies from
the DependencyResolver.Current or ILifetimeScopeProvider.RequestLifetime,
never from the container itself.
The services I'm trying to resolve obviously also depend on a lot of different repositories and other services that are already defined in the container wiring (at app start). Some of them are registered as 'PerHttpRequest'.
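For illustration, a common workaround (just a sketch, not something taken from nopCommerce itself) is to tag the nested scope with the tag that the per-request registrations expect, so that 'PerHttpRequest' components can be resolved from it; in Autofac's MVC integration that tag is "AutofacWebRequest":
// "AutofacWebRequest" matches MatchingScopeLifetimeTags.RequestLifetimeScopeTag,
// the tag Autofac's MVC integration uses for per-request registrations.
using (var lifeTime = EngineContext.Current.ContainerManager.Container
    .BeginLifetimeScope("AutofacWebRequest"))
{
    var service1 = lifeTime.Resolve<SomeServiceINeed>();
    var service2 = lifeTime.Resolve<SomeServiceINeed2>();

    // Do the work for this iteration. Per-request components, including
    // the EF context, are created fresh inside this scope and disposed
    // when the scope ends.
}
Each iteration of the loop could open and dispose one such scope, which would give a fresh context per iteration.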
Thanks a lot!

Related

REST new ID with DDD Aggregate

This question seemed foolish to me at first sight, but then I realized that I don't have a proper answer for it yet, and interestingly I also didn't find a good explanation about it in my searches.
I'm new to Domain Driven Design concepts, so, even if the question is basic, feel free to add any considerations to it.
I'm designing a REST API to configure server instances, and I came up with an Aggregate called Instance that contains a List of Configurations; only one specific Configuration will be active at a given time.
To add a Configuration, one would call an endpoint POST /instances/{id}/configurations with the desired configuration in the body. In response, if all is okay, it would receive an HTTP 204 with a Location header containing the new Configuration ID.
I'm planning to have only one Controller, InstanceController, that would call InstanceService that would manipulate the Instance Aggregate and then store to the Repo.
Since the IDs are generated by the repository, if I call Instance.addConfiguration and then InstanceRepository.store, how would I get the ID of the newly created configuration? I mean, it's a List, so it's not as trivial as calling Instance.configuration.identity.
One option would be to implement a method in Instance like getLastAddedConfiguration, but this seems really brittle.
What is the general approach in this situation?
the ID's are generated by the repository
You could remove this extra complexity. Since Configuration is an entity of the Instance aggregate, its Id only needs to be unique inside the aggregate, not across the whole application. Therefore, the easiest approach is for the Aggregate to assign the ConfigurationId in the Instance.addConfiguration method (as the aggregate can easily ensure the uniqueness of the new Id). This method can return the new ConfigurationId (or the whole object with the Id, if necessary).
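A minimal sketch of that idea (the class and member shapes here are illustrative, not taken from the question; a GUID is used for the id, but a per-aggregate counter would work just as well):
using System;
using System.Collections.Generic;

// The aggregate assigns configuration ids itself, so the caller learns
// the new id immediately, without a round trip to the repository.
public class Instance
{
    private readonly List<Configuration> _configurations = new List<Configuration>();

    public Guid AddConfiguration(string settings)
    {
        // The id only has to be unique within this aggregate,
        // so the aggregate is free to generate it.
        var configuration = new Configuration(Guid.NewGuid(), settings);
        _configurations.Add(configuration);
        return configuration.Id;
    }
}

public class Configuration
{
    public Guid Id { get; }
    public string Settings { get; }

    public Configuration(Guid id, string settings)
    {
        Id = id;
        Settings = settings;
    }
}
Because the id is known as soon as AddConfiguration returns, it can also be used right away, for example in the Location header of the POST response or in a published event, before the aggregate is ever stored.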
What is the general approach in this situation?
I'm not sure about the general approach, but in my opinion, the sooner you create the Ids the better. For Aggregates you'd create the Id before storing them (maybe a GUID); for entities, the Aggregate can create it at the moment of creating/adding the entity. This allows you to perform other actions (e.g. publishing an event) using these Ids without having to store and retrieve them from the DB first, which would otherwise have an impact on how you implement and use your repositories, and that is not ideal.

How to resolve: The transaction operation cannot be performed Exception

I am getting this error, and I have not been able to resolve:
System.Data.SqlClient.SqlException: 'The transaction operation cannot be performed because there are pending requests working on this transaction.'
What is going on is that a usual data operation is taking place as part of a Controller Action.
At the same time, there is a Filter that is running that logs the action to a database.
this._orderEntryContext.ServerLog.Add(serverLog);
return this._orderEntryContext.SaveChanges() > 0;
This is where the error occurs.
So it seems to me that there are two SaveChanges calls going on at the same time, and so the transaction gets fouled up.
Not sure how to resolve this. Both are using the same context, obtained through DI. A workaround was to create a second context manually, but I would rather stick to the DI pattern. But I don't know how to create a second DbContext through DI, or even whether that is a good idea.
Perhaps I should be using SaveChangesAsync() on both calls to ensure that they do not step on each other?
Turns out the answer to this was to make the Context a transient service:
services.AddDbContext<OrderEntryContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")),
    ServiceLifetime.Transient);
Then, I changed all repositories to also be transient:
services.AddTransient<AssociateRepository, AssociateRepository>();
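For context, here is roughly why the transient lifetime helps (a sketch; the filter and controller shapes below are assumed, not taken from the question): with ServiceLifetime.Transient each injection point receives its own OrderEntryContext, so the SaveChanges in the logging filter and the SaveChanges in the controller action no longer share a connection or a pending transaction.
// Sketch: with a transient OrderEntryContext these two classes receive
// two different context instances, so their SaveChanges calls do not
// interfere with each other.
public class ActionLogFilter : IActionFilter
{
    private readonly OrderEntryContext _orderEntryContext;

    public ActionLogFilter(OrderEntryContext orderEntryContext)
    {
        _orderEntryContext = orderEntryContext; // instance A
    }

    public void OnActionExecuting(ActionExecutingContext context)
    {
        _orderEntryContext.ServerLog.Add(new ServerLog { /* ... */ });
        _orderEntryContext.SaveChanges();
    }

    public void OnActionExecuted(ActionExecutedContext context) { }
}

public class OrdersController : Controller
{
    private readonly OrderEntryContext _orderEntryContext;

    public OrdersController(OrderEntryContext orderEntryContext)
    {
        _orderEntryContext = orderEntryContext; // instance B, not A
    }
}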

JCR API or Apache Sling

I read a lot of articles like JCR vs Apache Sling and I'm confused about what to use. Some authors advise using the JCR API as the more performance-optimized option, and the rest are on the side of Apache Sling because it's faster to write and far more readable and maintainable in the long run. I have some questions:
What practice is better from your point of view?
What is more often used in production projects?
I think Maciej Matuszewski covered this subject thoroughly in his presentation JCR, Sling or AEM? Which API should I use and when?
In most cases it is recommended to use Apache Sling as the higher-level API, whereas JCR is required when performance needs to be taken into account. It is important, however, to know the border between these two scenarios.
Maciej notes that the difference is around 1 ms for opening a regular AEM page, without taking any caching into account. Worrying about performance is totally unnecessary in that case. We should instead focus on writing code that is readable, understandable and reduced to the minimum, and on reusing existing APIs, frameworks and util classes that are already covered by proper unit tests and peer reviewed, rather than reinventing the wheel. Based on that, we should also prefer the AEM layer over the Sling layer.
From my experience, I would say that JCR should be used in only a few scenarios, mainly when traversing a large amount of data in the CRX repository and this cannot be achieved by any search API.
The difference is like that between using C# or C++ as a programming language for computer game development: in some cases it is enough to stay with the higher-level API for development convenience, while in other cases it is necessary to go lower and start using C++ pointers.
However, the most important thing is not to mix both abstraction layers in your implementation.
To start with a very typical answer, 'IT DEPENDS'.
Consider the following scenarios for your understanding:
Scenario 1: Read the title of the page that contains the current resource.
Approach 1: Leverage the Sling APIs to work with all the available context objects like currentPage, resource, pageManager, wcmmode and many more in your Java controller (Sling Model / WCMUse class).
// get the page that contains this resource.
// If the resource is a page the resource is returned. Otherwise it
// walks up the parent resources until a page is found.
Page page = pageManager.getContainingPage(resource);
// Check if the returned page object isn't null
if (page != null) {
    return page.getTitle();
}
Approach 2: Use the JCR APIs:
// Start from the current resource node and walk up the parents
// until we reach a node whose primary type is cq:Page
Node parentNode = currentNode;
while (!"cq:Page".equals(parentNode.getProperty("jcr:primaryType").getString())) {
    parentNode = parentNode.getParent();
}
// The page title
String pageTitle = null;
// Find the jcr:content node of the page and return the
// jcr:title property of that node
if (parentNode.hasNode("jcr:content")) {
    Node jcrContentNode = parentNode.getNode("jcr:content");
    pageTitle = jcrContentNode.getProperty("jcr:title").getString();
}
return pageTitle;
In this scenario, the Sling APIs obviously win by a huge margin on ease of access and usability. I have never experienced any performance issue with the Sling APIs in comparison to the JCR APIs.
Scenario 2: Change the titles of the first-level page nodes of your website (considering /content/mywebsite/en to be level ZERO) to upper-case letters.
Approach: For a requirement like this, where you need to make certain one-time changes to your JCR repository, you should use the JCR APIs from a standalone Java application, instead of creating an unnecessary component, its controller and an unnecessary page to put this component on, and then using the Sling APIs in that controller to perform these tasks.
// Create a connection to the CQ repository running on localhost
Repository repository = JcrUtils.getRepository("http://localhost:4502/crx/server");
// Create a session
Session session = repository.login(new SimpleCredentials("username", "password".toCharArray()), "crx.default");
// Get a node that represents the root node
Node root = session.getRootNode();
// Get the level ZERO node (Node.getNode() takes a relative path)
Node homepageNode = root.getNode("content/mywebsite/en");
NodeIterator iter = homepageNode.getNodes();
while (iter.hasNext()) {
    Node child = iter.nextNode();
    // If the child node is of primary type cq:Page,
    // get its jcr:content node and
    // set its jcr:title property to upper-case letters.
    if (child.isNodeType("cq:Page") && child.hasNode("jcr:content")) {
        Node contentNode = child.getNode("jcr:content");
        String title = contentNode.getProperty("jcr:title").getString();
        contentNode.setProperty("jcr:title", title.toUpperCase());
    }
}
session.save();
Rule of thumb:
If you want to access your AEM repository from within the AEM application, use the Sling APIs over the JCR APIs. They:
are higher-level APIs than JCR (they have a lot of predefined methods that do a lot of the work for you)
provide access to all the global context objects inside the controller
are very easy to use
But if you need to access your repository for large-scale operations (generally one-time changes), choose to work with a standalone Java application using the JCR APIs.

Querying a list of Actors in Azure Service Fabric

I currently have a ReliableActor for every user in the system. This actor is appropriately named User, and for the sake of this question has a Location property. What would be the recommended approach for querying Users by Location?
My current thought is to create a ReliableService that contains a ReliableDictionary. The data in the dictionary would be a projection of the User data. If I did that, then I would need to:
Query the dictionary. After GA, this seems like the recommended approach.
Keep the dictionary in sync. Perhaps through Pub/Sub or IActorEvents.
Another alternative would be to have a persistent store outside Service Fabric, such as a database. This feels wrong, as it goes against some of the ideals of using Service Fabric. If I did that, I assume I would do something similar to the above, but with a stateless service?
Thank you very much.
I'm personally exploring the use of Actors as the main datastore (i.e. the source of truth) for my entities. As Actors are added, updated or deleted, I use MassTransit to publish events. I then have reliable stateful services subscribed to these events. The services receive the events and update their internal IReliableDictionary instances. The services can then be queried to find the entities required by the client. Each service only keeps the entity data that it requires to perform its queries.
I'm also exploring the use of EventStore to publish the events. That way, if in the future I decide I need to query the entities in a new way, I can create a new service and replay all the events to it.
These Pub/Sub methods do mean the query services are only eventually consistent, but in a distributed system, this seems to be the norm.
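A rough sketch of that wiring (all type names here are hypothetical, and the hosting of the MassTransit consumer inside the stateful service is left out): the actor publishes an event whenever its state changes, and a consumer in the query service upserts the corresponding entry in its reliable dictionary.
using System.Threading.Tasks;
using MassTransit;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;

// Hypothetical event published by the User actor when its location changes.
public class UserLocationChanged
{
    public string UserId { get; set; }
    public string Location { get; set; }
}

// Consumer hosted inside the stateful query service; it keeps the
// reliable-dictionary projection in sync with the actors.
public class UserLocationChangedConsumer : IConsumer<UserLocationChanged>
{
    private readonly IReliableStateManager _stateManager;

    public UserLocationChangedConsumer(IReliableStateManager stateManager)
    {
        _stateManager = stateManager;
    }

    public async Task Consume(ConsumeContext<UserLocationChanged> context)
    {
        var locations = await _stateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("userLocationsByUserId");

        using (var tx = _stateManager.CreateTransaction())
        {
            // Upsert this user's location in the projection.
            await locations.AddOrUpdateAsync(
                tx,
                context.Message.UserId,
                context.Message.Location,
                (key, existing) => context.Message.Location);

            await tx.CommitAsync();
        }
    }
}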
While the standard recommendation is definitely the one in Vaclav's response, if querying is the exception then Actors could still be appropriate. For me, whether they're suitable or not is determined by the normal way of accessing them; if it's by key (and presumably for a user record it would be), then Actors work well.
It is possible to iterate over Actors, but it's quite a heavy task, so as I say it is only appropriate in the exceptional case. The following code builds up a set of actor references; you then iterate over this set to fetch the actors and can use LINQ or similar on the collection you've built up.
// partitionKey and cancellationToken are assumed to be defined elsewhere.
ContinuationToken continuationToken = null;
var actorServiceProxy = ActorServiceProxy.Create(new Uri("fabric:/MyActorApp/MyActorService"), partitionKey);
var actors = new List<ActorInformation>();
do
{
    // Page through all actors registered with this partition.
    var queryResult = actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken).GetAwaiter().GetResult();
    actors.AddRange(queryResult.Items);
    continuationToken = queryResult.ContinuationToken;
} while (continuationToken != null);
TL;DR: It's not always advisable to query over actors, but it can be done if required. The code above will get you started.
If you find yourself needing to query across a data set by some data property, like User.Location, then Reliable Collections are the right answer. Reliable Actors are not meant to be queried over this way.
In your case, a user could simply be a row in a Reliable Dictionary.
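For illustration, here is a sketch of what the lookup could look like inside that stateful service (the names and the projection type are assumptions, not taken from the answer; the method is meant to live in a StatefulService subclass where this.StateManager is available):
// Hypothetical projection row kept in the reliable dictionary.
public class UserProjection
{
    public string UserId { get; set; }
    public string Location { get; set; }
}

// Returns all users in the requested location.
public async Task<List<UserProjection>> GetUsersByLocationAsync(
    string location, CancellationToken cancellationToken)
{
    var users = await this.StateManager
        .GetOrAddAsync<IReliableDictionary<string, UserProjection>>("users");

    var matches = new List<UserProjection>();
    using (var tx = this.StateManager.CreateTransaction())
    {
        // Reliable collections are enumerated asynchronously inside a transaction.
        var enumerable = await users.CreateEnumerableAsync(tx);
        var enumerator = enumerable.GetAsyncEnumerator();
        while (await enumerator.MoveNextAsync(cancellationToken))
        {
            if (enumerator.Current.Value.Location == location)
            {
                matches.Add(enumerator.Current.Value);
            }
        }
    }
    return matches;
}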

Entity framework multiple contexts for logging

I've seen a fair few articles/posts that recommend not having more than one context per request when using EF.
Is it valid to have a second context for logging purposes, such as 'user x did y', 'failed login from z', etc.?
The rationale behind this is that I'd like these entries to be logged even if there is an error while using the "main" context (e.g. foreign key issues).
Is there another way to do this, or if I head down this road, are there any things to try to avoid?
You can always have more context instances if your application logic really needs them, and the ability to persist a log to the database even when the main context contains invalid data can be considered such a situation. You just need to ensure that your updates do not run in the same transaction (they must use different DB connections as well); that should be the default behavior unless you use TransactionScope.
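As a rough sketch of that setup (the class and property names are made up for illustration), the logging context is simply a second DbContext with its own connection, so its SaveChanges succeeds or fails independently of whatever happened in the main context:
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// Hypothetical second context used only for logging; it opens its own
// database connection, so its SaveChanges is independent of the main context.
public class LoggingContext : DbContext
{
    public LoggingContext() : base("name=LoggingConnection") { }

    public DbSet<AuditEntry> AuditEntries { get; set; }
}

public class AuditEntry
{
    public int Id { get; set; }
    public string Message { get; set; }
    public DateTime CreatedUtc { get; set; }
}

// Usage: the log row is saved even if the main context fails to persist.
try
{
    mainContext.SaveChanges();
}
catch (DbUpdateException)
{
    // The main context's changes were not persisted; handle or rethrow.
}

using (var log = new LoggingContext())
{
    log.AuditEntries.Add(new AuditEntry
    {
        Message = "user x did y",
        CreatedUtc = DateTime.UtcNow
    });
    log.SaveChanges();
}
As noted above, this only holds as long as the two SaveChanges calls are not enlisted in the same TransactionScope.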