How to use Unity's InjectionFactory for child containers in Ninject? - inversion-of-control

I am stuck finding Ninject's equivalent to this Unity registration:

var container = new UnityContainer();
container.RegisterType<MyFactory>(new ContainerControlledLifetimeManager());
// resolve instances once per child container via MyFactory
container.RegisterType<IMyInstance>(new HierarchicalLifetimeManager(),
    new InjectionFactory(c => c.Resolve<MyFactory>().GetMyInstance()));

I already tried this, without any effect:
var kernel = new StandardKernel();
kernel.Bind<MyFactory>().ToSelf().InSingletonScope();
// a context-preserving get should do the job for child containers, right?
// I thought InCallScope is the equivalent of Unity's hierarchical lifetime manager?
kernel.Bind<IMyInstance>().ToMethod(ctx => ctx.ContextPreservingGet<MyFactory>().GetMyInstance())
    .InCallScope();

Related

How to Set View Model Class Programmatically when Creating Components

I am experimenting with the ZK framework and am looking for ways to load a zul-defined view programmatically via a utility class that uses Executions to load the page definition.
However, I cannot find a way to also set the view model class programmatically, which would be very convenient and would allow a very simple way of reusing a view with different view model classes.
I.e. the code to load the view looks like this:
public static Component loadComponent(Class<?> modelClz, String zulFile, Component parent, Map<String,Object> params) {
    Execution exec = Executions.getCurrent();
    PageDefinition page = exec.getPageDefinitionDirectly(
            new InputStreamReader(modelClz.getResourceAsStream(zulFile)),
            null
    );
    return exec.createComponents(
            page,
            // no previous parent
            parent,
            params
    );
}
I thought about "forcing" the view model by setting annotations programmatically on the top component info from the page definition, like so:
public static Component loadComponent(Class<?> modelClz, String zulFile, Component parent, Map<String,Object> params) {
    Execution exec = Executions.getCurrent();
    PageDefinition page = exec.getPageDefinitionDirectly(
            new InputStreamReader(modelClz.getResourceAsStream(zulFile)),
            null
    );
    if (!page.getChildren().isEmpty()) {
        ComponentInfo top = (ComponentInfo) page.getChildren().get(0);
        AnnotationMap annotationMap = top.getAnnotationMap();
        String viewModel = "viewModel";
        if (annotationMap == null || !annotationMap.getAnnotatedProperties().contains(viewModel)) {
            // no view model set on the top declaration, so force ours
            Map<String,String[]> id = new HashMap<>();
            id.put(null, new String[]{"vm"});
            top.addAnnotation("viewModel", "id", id, null);
            Map<String,String[]> init = new HashMap<>();
            init.put(null, new String[]{modelClz.getName()});
            top.addAnnotation("viewModel", "init", init, null);
            top.enableBindingAnnotation();
        }
    }
    return exec.createComponents(
            page,
            // no previous parent
            parent,
            params
    );
}
This did not work, however. Maybe it was too late in the process, or there is some really simple way of doing this that I missed. Or maybe I should "apply" some BindComposer, but I am not sure how to do that.
Any helpful idea would be great!
Just to make sure I've understood:
You have some zul fragment (such as a reusable UI structure)
This fragment uses the MVVM pattern, but you want to choose a view model object when instantiating that fragment, instead of declaring a class name in the zul viewModel="@id(...) @init(...)" attribute
You want to initialize that fragment from a java class that has access to the UI (using Execution#createComponents to load the fragment and attach them to the page)
Does that sound correct?
If that's the case, you can make this much simpler.
The attribute can be written as viewModel="@id('yourVmId') @init(aReferenceToAnAlreadyInstantiatedObject)".
Important note here: notice that I have NOT put quotes around the object in the @init declaration. I'm passing an actual object, not a string containing a reference to a class to be instantiated.
When you invoke Executions.createComponents() you may pass a Map<String, Object> of arguments to the created page. You can then use the name of the relevant passed object when you bind the VM.
Have a look at this fiddle (a bit rough, but it should make sense): https://zkfiddle.org/sample/2jij246/4-Passing-an-object-through-createComponents-as-VM#source-2
HashMap<String, Object> args = new HashMap<String, Object>();
args.put("passedViewModel", new GenericVmClass("some value in the passed VM here"));
Executions.createComponents("./fragment.zul", e.getTarget().getPage(), null, args);
FYI, if you are using ZK shadow elements, you can also pass that object to the fragment from an <apply> with a source, in pure MVVM style.
The <apply> shadow element, for example, can pass objects to the created content under a variable name, and you can use that variable name when initializing the VM.
Regarding BindComposer:
Up to ZK 7.x, you need to instantiate BindComposer yourself.
In ZK 8.x and above, BindComposer is instantiated automatically when you use the viewModel="..." attribute on a ZK component.

How to retain an object instance across unnamed/differently named scopes?

As shown in the runnable code example below, I want to create a named scope in which a certain instance of an object is resolved, regardless of other unnamed scopes that are created after the object is created.
With regard to the documentation found here:
// You can't resolve a per-matching-lifetime-scope component
// if there's no matching scope.
using (var noTagScope = container.BeginLifetimeScope())
{
    // This throws an exception because this scope doesn't
    // have the expected tag and neither does any parent scope!
    var fail = noTagScope.Resolve<Worker>();
}
In my example below, a parent scope DOES have a matching tag, but it still does not work. Should it?
In the following example the scopes are tidy and the parent scopes are known. In my application, only the root container object is accessible, so when a scope is created it is always created from the container, not from a parent scope.
public class User
{
    public string Name { get; set; }
}

public class SomeService
{
    public SomeService(User user)
    {
        Console.WriteLine($"Injected user is named {user.Name}");
    }
}

class Program
{
    private static IContainer container;
    private const string USER_IDENTITY_SCOPE = "SOME_NAME";

    static void Main(string[] args)
    {
        BuildContainer();
        Run();
        Console.ReadKey();
    }

    private static void BuildContainer()
    {
        ContainerBuilder builder = new ContainerBuilder();
        builder.RegisterType<SomeService>();
        builder.RegisterType<User>().InstancePerMatchingLifetimeScope(USER_IDENTITY_SCOPE);
        container = builder.Build();
    }

    private static void Run()
    {
        using (var outerScope = container.BeginLifetimeScope(USER_IDENTITY_SCOPE))
        {
            User outerUser = outerScope.Resolve<User>();
            outerUser.Name = "Alice"; // User Alice lives in this USER_IDENTITY_SCOPE
            SomeService someService = outerScope.Resolve<SomeService>(); // Alice

            // Now we want to run a "process" under the identity of a different user.
            // Inside of the following using block, we want all services that
            // receive a User object to receive Bob:
            using (var innerScope = container.BeginLifetimeScope(USER_IDENTITY_SCOPE))
            {
                User innerUser = innerScope.Resolve<User>();
                innerUser.Name = "Bob"; // We get a new instance of User as expected. User Bob lives in this USER_IDENTITY_SCOPE

                // Scopes happen in my app that are unrelated to user identity - how do I retain the User object despite this?
                // The following is not a USER_IDENTITY_SCOPE -- we still want Bob to be the User object that is resolved:
                using (var unnamedScope = container.BeginLifetimeScope())
                {
                    // Crashes. Desired result: User Bob is injected
                    SomeService anotherSomeService = unnamedScope.Resolve<SomeService>();
                }
            }
        }
    }
}
Using Autofac 4.9.2 on .NET Core 2.2.
In your example, you're launching the unnamed scope from the container, not from a parent with a name:
using (var unnamedScope = container.BeginLifetimeScope())
Switch that to be a child of a scope with a name and it'll work.
using (var unnamedScope = innerScope.BeginLifetimeScope())
I'd also note that you've named these outerScope and innerScope, but innerScope is not actually a child of outerScope, so the names are misleading. Technically, the two scopes are peers.
container
    innerScope (named)
    outerScope (named)
    unnamedScope
All three are direct children of the container. If you think about sharing the user in terms of scope hierarchy, you'd need to begin the child scopes from the parent scopes rather than from the container:
container
    innerScope (named)
        unnamedScope
    outerScope (named)
        unnamedScope
You'll notice inner and outer are still peers: you can't have a parent and a child with the same name, so given that inner and outer are both named, they'll never share any hierarchy except the container.
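Applied to the registrations from the question, the fix looks like this (a minimal sketch: the unnamed scope is begun from the named scope instead of from the container):

```csharp
using (var named = container.BeginLifetimeScope(USER_IDENTITY_SCOPE))
{
    var user = named.Resolve<User>();
    user.Name = "Bob";

    // The unnamed scope is a child of the named scope, so
    // InstancePerMatchingLifetimeScope walks up the ancestry,
    // finds the USER_IDENTITY_SCOPE tag, and shares Bob.
    using (var unnamed = named.BeginLifetimeScope())
    {
        var service = unnamed.Resolve<SomeService>();
        // writes: Injected user is named Bob
    }
}
```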
I would strongly recommend not trying to bypass a hierarchical model here. For example, say you really are trying to do this:
container
    outerScope (named)
    unnamedScope
Which might look like this:
using (var outerScope = container.BeginLifetimeScope(USER_IDENTITY_SCOPE))
using (var unnamedScope = container.BeginLifetimeScope())
{
    // ...
}
This is pretty much what you have in the snippet above. The only sharing these scopes have in common is at the container level. If you tried to resolve something from the named scope and pass it to a peer scope, you'd run the risk of things being disposed out from under you, or other weird, hard-to-troubleshoot problems. For example, if outerScope gets disposed but unnamedScope lives on, you can get into trouble.
// PLEASE DO NOT DO THIS. YOU WILL RUN INTO TROUBLE.
using (var outerScope = container.BeginLifetimeScope(USER_IDENTITY_SCOPE))
{
    var user = outerScope.Resolve<User>();
    using (var unnamedScope = container.BeginLifetimeScope(b => b.RegisterInstance(user)))
    {
        // ...
    }
}
That's bad news waiting to happen, from odd disposal problems to things not sharing the same set of dependencies when you think they should. But, you know, we can give you the gun, it's up to you to not shoot yourself in the foot with it.

What is the proper way to resolve a reference to a service fabric stateless service?

I have been developing a system which heavily uses Service Fabric stateless services. I thought I had a good idea of how things worked; however, I am doing something slightly different now, and it is obvious my understanding is limited.
I have a cluster of 5 nodes, and all my services currently have an instance count of -1 for simplicity. With everything on every node, I can watch a single node for basic behavioral correctness.
I just added a new service which needs an instance count of 1. However, it seems I am unable to resolve this service correctly. Instead, SF tries to resolve the service on each machine, which fails on all machines except the one where the single instance exists.
It was my assumption that SF would automagically resolve a reference to a service anywhere in the cluster, and that if that reference failed, it would automagically resolve a new reference, and so on. It appears this is not correct, at least for the way I am currently doing things.
I can find an instance using code similar to this, but what happens if that instance fails? How do I get another reference?
I could resolve on every call, but that seems like a terrible idea when I really only want to resolve an IXyzService and pass it along.
This is how I am resolving services since I am using the V2 custom serialization.
var _proxyFactory = new ServiceProxyFactory(c =>
{
    return new FabricTransportServiceRemotingClientFactory(
        serializationProvider: new CustomRemotingSerializationProvider(Logger)
    );
});

var location = new Uri("fabric:/xyz/abcService");
var proxy = _proxyFactory.CreateServiceProxy<TService>(location);
This does actually work; however, it appears to only resolve a service on the same machine. So ServiceA resolves a reference to ServiceB on the same machine. But if ServiceB doesn't exist on that machine for a valid reason, the resolution fails.
Summary:
What is the correct way for ServiceA to use the V2 custom serialization ServiceProxyFactory to resolve an interface reference to ServiceB wherever ServiceA and ServiceB are in the cluster?
Update:
The evidence that it doesn't work is that the call to resolve hangs forever. According to this link, that is correct behavior, because the service will eventually come up. However, only one node ever resolves it correctly, and that is the node where the single instance is active. I have tried several things, even waiting 30 seconds, just to make sure it wasn't an initialization issue.
var proxy = _proxyFactory.CreateServiceProxy<TService>(location);
// Never gets here except on the one node.
SomethingElse(proxy);
Listener code
This essentially follows the V2 custom serialization tutorial almost exactly.
var listeners = new[]
{
    new ServiceInstanceListener((c) =>
    {
        return new FabricTransportServiceRemotingListener(c, this, null,
            new CustomRemotingSerializationProvider(Logger));
    })
};
public class CustomRemotingSerializationProvider : IServiceRemotingMessageSerializationProvider
{
    #region Private Variables

    private readonly ILogger _logger;
    private readonly Action<RequestInfo> _requestAction;
    private readonly Action<RequestInfo> _responseAction;

    #endregion Private Variables

    public CustomRemotingSerializationProvider(ILogger logger, Action<RequestInfo> requestAction = null, Action<RequestInfo> responseAction = null)
    {
        _logger = logger;
        _requestAction = requestAction;
        _responseAction = responseAction;
    }

    public IServiceRemotingRequestMessageBodySerializer CreateRequestMessageSerializer(Type serviceInterfaceType,
        IEnumerable<Type> requestWrappedTypes, IEnumerable<Type> requestBodyTypes = null)
    {
        return new RequestMessageBodySerializer(_requestAction);
    }

    public IServiceRemotingResponseMessageBodySerializer CreateResponseMessageSerializer(Type serviceInterfaceType,
        IEnumerable<Type> responseWrappedTypes, IEnumerable<Type> responseBodyTypes = null)
    {
        return new ResponseMessageBodySerializer(_responseAction);
    }

    public IServiceRemotingMessageBodyFactory CreateMessageBodyFactory()
    {
        return new MessageBodyFactory();
    }
}
Connection code
_proxyFactory = new ServiceProxyFactory(c =>
{
    return new FabricTransportServiceRemotingClientFactory(
        serializationProvider: new CustomRemotingSerializationProvider(Logger)
    );
});

// Hangs here - tried different partition keys and not specifying one.
var proxy = _proxyFactory.CreateServiceProxy<TService>(location, ServicePartitionKey.Singleton);

CheckboxTreeViewer on checked Children node get Parent node value

I am using the JFace CheckboxTreeViewer and adding an ICheckStateListener to get the checked elements.
My CheckboxTreeViewer structure is as follows:
P1
----Child1
----Child2
----Child3
----Child4
P2
----Child6
----Child7
----Child8
----Child9
My requirement is that when I check a child node, I get its related parent node.
For example:
when I check Child8, I get parent node P2
when I check Child2, I get parent node P1
How can I achieve this?
You get the element that has changed from the CheckStateChangedEvent passed to the listener by calling the getElement method:
public void checkStateChanged(CheckStateChangedEvent event) {
    Object changed = event.getElement();
This is the object that your tree content provider provided. So you can get its parent by asking the content provider:
ITreeContentProvider provider = (ITreeContentProvider)viewer.getContentProvider();
Object parent = provider.getParent(changed);
where viewer is the CheckboxTreeViewer.

Service Fabric DI implementation from Unity to Autofac and Child Scopes

I'm currently porting a Service Fabric DI implementation from Unity to Autofac and I need some advice.
Original Implementation:
https://github.com/s-innovations/S-Innovations.ServiceFabric.DependencyInjection
The Unity implementation late-registers two dependencies inside the actor factory and then uses the DI container inside the scope of the actor.
I'm considering the best way to do this using Autofac.
I think I need to create a new ContainerBuilder and register the two actor-scoped dependencies using a named scope (maybe using the ActorId as the scope id?) so that the instances are scoped to this particular actor instance.
This is what I have so far:
public static ILifetimeScope WithActor<TActor>(this Container container,
    ActorServiceSettings settings = null) where TActor : ActorBase
{
    ContainerBuilder builder = new ContainerBuilder();
    if (!container.IsRegistered<IActorDeactivationInterception>())
    {
        builder.RegisterType<OnActorDeactivateInterceptor>()
            .As<IActorDeactivationInterception>()
            .EnableInterfaceInterceptors();
        builder.Update(container);
    }

    builder.Register(c => ActorProxyTypeFactory.CreateType<TActor>())
        .As(typeof(TActor))
        .InstancePerLifetimeScope();
    builder.Update(container);

    ActorRuntime.RegisterActorAsync<TActor>((context, actorType) =>
    {
        try
        {
            return new ActorService(context,
                actorTypeInfo: actorType,
                actorFactory: (service, id) =>
                // container.BeginLifetimeScope()
                //     .RegisterInstance(service, new ContainerControlledLifetimeManager())
                //     .RegisterInstance(id, new ContainerControlledLifetimeManager()).Resolve<TActor>(),
                {
                    var actorBuilder = new ContainerBuilder();
                    actorBuilder
                        .RegisterInstance(service)
                        .InstancePerDependency();
                    actorBuilder
                        .RegisterInstance(id)
                        .InstancePerDependency();
                    actorBuilder.Update(container);
                    return container.Resolve<TActor>();
                }, settings: settings);
        }
        catch (Exception ex)
        {
            throw;
        }
    }).GetAwaiter().GetResult();

    return container;
}
Any ideas and comments are most welcome! Does anyone have a view on the performance impact of resolving the actors and their dependencies through DI with this approach?
The complete code is here:
https://github.com/Applicita/S-Innovations.ServiceFabric.DependencyInjection/tree/Autofac/src/Applicita.ServiceFabric.Autofac
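For the per-actor scoping idea, one hedged alternative to mutating the shared root container with builder.Update is to begin a child lifetime scope per actor and register the runtime values only there, so they never leak into the root. This is a sketch using only Autofac, with placeholder types (FakeService, FakeId) standing in for the Service Fabric ActorService/ActorId values, which are assumptions here:

```csharp
using System;
using Autofac;

// Placeholder types standing in for ActorService / ActorId (assumptions).
class FakeService { }
class FakeId { public override string ToString() => "actor-42"; }

class MyActor
{
    public MyActor(FakeService service, FakeId id)
        => Console.WriteLine($"Actor built for {id}");
}

class Demo
{
    static void Main()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<MyActor>();
        var container = builder.Build();

        var service = new FakeService();
        var id = new FakeId();

        // Per-actor scope: the runtime instances are registered into this
        // scope only, never into the shared root container (no builder.Update).
        using (var actorScope = container.BeginLifetimeScope(b =>
        {
            b.RegisterInstance(service);
            b.RegisterInstance(id);
        }))
        {
            var actor = actorScope.Resolve<MyActor>();
        }
        // Disposing the scope disposes the actor's scoped dependencies,
        // which could be wired to the actor-deactivation hook.
    }
}
```

Disposing one scope per actor also avoids the root container accumulating registrations for every actor ever created.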