Automatically Select Per-Thread or Per-Request Lifetime Scopes - autofac

I am developing a library that is distributed internally at my company and consumed by a variety of applications. This library must be platform agnostic, in that it may be deployed in a web context or even within a console app. I would like to register objects as per-HTTP-request or per-thread, depending on the context of the application consuming this framework. In StructureMap, I can do this using the Hybrid lifetime: if an HttpContext exists, the object is scoped to it; otherwise thread-local storage is used, on a per-thread basis. No additional configuration is required for the distributed library or the consuming application. Is this possible using Autofac? Given the wide variability in our developers' skill levels, our goal is to minimize or eliminate any specialized configuration for consumers.
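For reference, the StructureMap registration I mean looks roughly like this (StructureMap 2.x, from memory, so treat the method name as approximate; IMyService/MyService are placeholder types):

```csharp
// StructureMap's hybrid lifecycle: scoped to the HttpContext when one
// exists, otherwise to thread-local storage; no consumer configuration.
ObjectFactory.Initialize(x =>
{
    x.For<IMyService>()
     .HybridHttpOrThreadLocalScoped()
     .Use<MyService>();
});
```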
I understand that registrations can be context agnostic using the InstancePerLifetimeScope lifetime, but then consuming applications are required to reference the ASP.NET/WCF/MVC integration binaries in order to bind InstancePerLifetimeScope registrations to an HTTP request. And for per-thread scopes, the consuming code has to take on the responsibility of creating a lifetime scope per thread.
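For context, a context-agnostic registration looks like the sketch below. The commented-out line is the extra web-only wiring (from Autofac's MVC integration package, assuming I recall the types correctly) that every consumer would otherwise have to reference themselves:

```csharp
using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<MyService>()
       .As<IMyService>()
       .InstancePerLifetimeScope(); // shared within whatever scope resolves it
var container = builder.Build();

// Web apps additionally need the integration package so that "lifetime
// scope" means "HTTP request", e.g. with the MVC integration:
// DependencyResolver.SetResolver(new AutofacDependencyResolver(container));

public interface IMyService { }
public class MyService : IMyService { }
```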
Any suggestions?

It's easy to implement your own lifetime manager that checks whether 'HttpContext.Current != null' and then delegates to one of the existing managers.
I would, however, suggest that each application wire up the appropriate manager itself. An example would be a unit-test scenario, where 'HttpContext' might not exist (or might be mocked) and you might want to control the lifetime manually for test-specific purposes.
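A minimal sketch of that idea, using a plain factory registration instead of a full custom Autofac lifetime (all type names are invented, and note one honest caveat: nothing here disposes a per-thread instance when its thread dies):

```csharp
using System;
using System.Threading;
using System.Web;
using Autofac;

public interface IMyService { }
public class MyService : IMyService { }

// Caches one instance per HTTP request when an HttpContext exists,
// otherwise one instance per thread.
public static class HybridScope
{
    // One storage slot per service type so different services don't collide.
    private static class Slot<T> where T : class
    {
        public static readonly ThreadLocal<T> PerThread = new ThreadLocal<T>();
        public static readonly string Key = "HybridScope:" + typeof(T).FullName;
    }

    public static T GetOrCreate<T>(Func<T> factory) where T : class
    {
        var ctx = HttpContext.Current;
        if (ctx == null)
        {
            // No web context: fall back to thread-local storage.
            if (Slot<T>.PerThread.Value == null)
                Slot<T>.PerThread.Value = factory();
            return Slot<T>.PerThread.Value;
        }

        // Web context: scope the instance to the current request.
        var existing = ctx.Items[Slot<T>.Key] as T;
        if (existing == null)
        {
            existing = factory();
            ctx.Items[Slot<T>.Key] = existing;
        }
        return existing;
    }
}

public static class ContainerSetup
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();
        builder.Register(c => HybridScope.GetOrCreate(() => new MyService()))
               .As<IMyService>()
               .InstancePerDependency(); // the factory lambda decides sharing
        return builder.Build();
    }
}
```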

Related

Fetching potentially needed data from repository - DDD

We have (roughly) the following architecture:
1. An application service does the infrastructure job: it fetches data from repositories, which are hidden behind interfaces.
2. An object graph is created and passed to the appropriate domain service.
3. The domain service does its thing and raises the appropriate events.
4. The events are handled in different application services, which perform persistence operations (altering repositories, sending e-mails, etc.).
However, the domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example, if Product X is of type Car, we need to know the price of that car model from some external CatalogService (an invented example) hidden behind ICatalogService. This operation is a potentially expensive one (a REST call).
How do we go about this?
A. Do we pre-fetch all the data in the application service listed as (1), even though we might not need it? B. Or do we inject ICatalogService into the domain service and fetch the data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside it.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
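A minimal sketch of the first pattern, with invented names matching the question's example (PricingService, Product, etc. are hypothetical):

```csharp
using System;

public enum ProductType { Car, Other }

public class Product
{
    public ProductType Type { get; set; }
    public string Model { get; set; }
    public decimal BasePrice { get; set; }
}

// Defined in the domain layer: only the capability the model needs.
public interface ICatalogService
{
    decimal GetPriceFor(string carModel);
}

// Domain service: makes the expensive call only when the condition holds.
public class PricingService
{
    private readonly ICatalogService catalog;

    public PricingService(ICatalogService catalog)
    {
        this.catalog = catalog;
    }

    public decimal QuoteFor(Product product)
    {
        if (product.Type == ProductType.Car)
            return catalog.GetPriceFor(product.Model); // REST call happens here
        return product.BasePrice;
    }
}

// Implemented in the infrastructure layer (HTTP client, caching, retries...).
public class RestCatalogService : ICatalogService
{
    public decimal GetPriceFor(string carModel)
    {
        throw new NotImplementedException("call the external catalog API here");
    }
}
```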
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the process, with the application code coordinating the exchange of information between the external API and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, since your application code is already coordinating the movement of inputs to the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
How exactly would you signal the application layer?
Simple queries, which is to say: the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes the result back into the domain model.
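As a rough sketch of that pull/compute/push loop (invented names again, reusing the ICatalogService shape from the earlier sketch):

```csharp
using System;

// Domain model: exposes what it still needs and accepts the answer;
// it never sees the external API directly.
public class Order
{
    public bool IsCar { get; private set; }
    public string CarModel { get; private set; }
    public decimal? CatalogPrice { get; private set; }

    public bool NeedsCatalogPrice() => IsCar && CatalogPrice == null;
    public void ApplyCatalogPrice(decimal price) => CatalogPrice = price;
}

public interface IOrderRepository
{
    Order Get(Guid id);
    void Save(Order order);
}

// Application layer coordinating the exchange between the domain model
// and the external API.
public class QuoteHandler
{
    private readonly IOrderRepository repository;
    private readonly ICatalogService catalog;

    public QuoteHandler(IOrderRepository repository, ICatalogService catalog)
    {
        this.repository = repository;
        this.catalog = catalog;
    }

    public void Handle(Guid orderId)
    {
        var order = repository.Get(orderId);

        // 1. Pull: ask the domain whether external data is still missing.
        if (order.NeedsCatalogPrice())
        {
            // 2. Compute: the application decides how to satisfy the need.
            var price = catalog.GetPriceFor(order.CarModel);

            // 3. Push: feed the result back into the domain model.
            order.ApplyCatalogPrice(price);
        }

        repository.Save(order);
    }
}
```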
There isn't enough information to give you targeted advice, but I suspect you need to refactor your domain into further subdomains. It sounds like your domain service has far more than one responsibility; keep the service simple.
In addition, if you have a long-running operation such as a slow service call, you need to architect it away. The most supple design will not keep the consumer waiting: it will return immediately with some sort of result, even if that's simply a periodic status update.
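A rough sketch of what "return immediately" can look like, with invented names: accept the request, run the slow call in the background, and give the caller a ticket to poll:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Returns immediately with a ticket; the slow call runs in the background.
public class QuoteRequests
{
    private readonly ConcurrentDictionary<Guid, string> statuses =
        new ConcurrentDictionary<Guid, string>();

    public Guid Submit(Func<decimal> slowCall)
    {
        var ticket = Guid.NewGuid();
        statuses[ticket] = "pending";
        Task.Run(() =>
        {
            var price = slowCall();
            statuses[ticket] = "done: " + price;
        });
        return ticket; // the caller polls GetStatus(ticket) instead of waiting
    }

    public string GetStatus(Guid ticket) =>
        statuses.TryGetValue(ticket, out var status) ? status : "unknown";
}
```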

Testing service session management via REST

I need to write a test for a JAX-RS web service that asserts that a certain value is read from disk and cached in the session on the first request of that session.
The testing process does not have access to the tested process. The use case involves using the REST API to invoke the services.
I can think of several options to proceed with:
Create a REST endpoint just for testing, and query the needed session value there.
Write and then read a log message.
I am aware that I am trying to test an implementation detail via an external API which does not provide contract for this detail, but currently I'm a bit constrained about which processes may be run by the testing infrastructure.
Are there any additional seams to exploit for testing, and what general good practice exists for this scenario?
I just came up with the idea of changing the cached resource on disk and using the resulting change (or absence of change) in behavior to verify the caching.

Deploying IdentityServer3 on Load Balancer

We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the rest, but I am concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple IdentityServer installations serving the same requests (e.g. load balanced), you won't be able to use the various in-memory token stores: authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the others, nor will user consent be persisted. If you are using IIS, machine key synchronization is also necessary to have tokens work across all instances.
There's an Entity Framework package available for the token stores; you'll need the operational data services.
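If I remember the package correctly (IdentityServer3.EntityFramework), wiring up the operational stores looks something like the following; the connection string name is a placeholder:

```csharp
using IdentityServer3.Core.Configuration;
using IdentityServer3.EntityFramework;

var efOptions = new EntityFrameworkServiceOptions
{
    ConnectionString = "IdSvr3Operational" // your connection string name
};

var factory = new IdentityServerServiceFactory();

// Persists authorization codes, refresh tokens, reference tokens and
// consent to the database, so every node behind the load balancer sees
// the same operational data.
factory.RegisterOperationalServices(efOptions);
```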
There's also a very useful guide to going live here.

How to develop a SaaS application with limited resources per tenant

I'd like to develop a bunch of SaaS applications in Java and I'm not sure what the best way to go is.
Each application will have a WAR containing the web service, plus at least one worker WAR, which is a thread that waits for new tasks to show up in the DB and then works them off. The worker contains the intelligence of the application and uses a lot of CPU. The web service gives users the ability to add new tasks, among other things.
Resource Limitations
The infrastructure must ensure the following:
The web service must always get a certain amount of CPU time so it can respond to users; the hungry worker must not grab all of the CPU time for its work.
Each tenant has its own worker, and workers must not interfere with one another: it must not be possible to block the whole system (and all tenants) with a single task.
Resource Sharing
It would be nice to share resources, while still ensuring that in extreme situations every worker and web service gets its required minimum.
Versioning
As new versions of an application are released, each tenant must be able to initiate an update on its own, once it has adapted to the API changes. Furthermore, a tenant must be able to keep more than one application endpoint (let's call them channels), so as to have a production channel and a beta channel. In the beta channel the tenant can test against new versions, and once comfortable with a new version, it can update its production channel.
User-Management
All applications of a tenant must share a user database and use the same authentication mechanism.
Environment
I want to use Java EE 7. I would enjoy using Wildfly.
Question
What is the best infrastructure to approach these aims? I want to host this on my own servers.
What I already found
I understand that you cannot limit CPU usage inside a JVM, so the workers must have their own JVMs.
I looked at PaaS providers like OpenShift Origin, but they seem to encourage running an application server per tenant, per application, which sounds like a resource-eater to me.
Is there no way to have one Wildfly running and limit the amount of CPU usage per tenant and app?

How to make sessions persistent in Scalatra?

I have a webapp using the Scala-based Scalatra web framework. The problem is that any time the application is redeployed, or the app server is rebooted, all session data is lost. This means (to name one downside) that users must log in again every time we update the site.
Some research reveals there are, apparently, "container-specific" ways to make sessions persist across app and server reboots (e.g., in the case of Tomcat), but this has two shortcomings:
If the app is not always deployed in the same container (and in the case of Scalatra, an embedded Jetty is used for dev purposes) then I'll need separate configuration for each container.
Using a server-local configuration file is much more fickle -- it's likely to get lost in server migrations, and it won't be automatically available to each instance (e.g., to each developer) of the app, whereas something stored with the core application code is much easier to test, retain, and generally keep track of.
So, to sum up...
Is there a generic, container-neutral way to make sessions persistent? Even if only by overriding appropriate methods in the Java/Servlet stack and storing the session data manually?
Barring that, is there a way to store relevant configuration for multiple containers (e.g., for both Jetty and Tomcat) in my application code (web.xml or similar)?
Thanks -- any insights appreciated!