Is it possible to implement a module that is not a WPF module (a standard class library, no screens)? - mvvm

I am developing a modular WPF application with Prism on .NET 5 (using MVVM and DryIoc), and I would like to have a module that is not a WPF module, i.e., a module with functionality that can be used by any other module. I don't want any project references, because I want to keep the modules loosely coupled.
My first question is: is it conceptually correct? Or is it mandatory that a module has a screen? I guess it should be ok.
The second, and for me more important, question is: what would be the best way to create the instance?
This is the project (I know I should review the names in this project):
HotfixSearcher is the main class, the one I need to get instantiated. In this class, for example, I subscribe to some events.
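For illustration, the searcher looks roughly like this (simplified; SomethingRequestedEvent is just a placeholder name, not my real event):
public class SomethingRequestedEvent : PubSubEvent { }

public class HotfixSearcher : IHotfixSearcher
{
    public HotfixSearcher(IEventAggregator eventAggregator)
    {
        // Subscribing in the constructor means nothing happens until
        // an instance is actually created somewhere.
        eventAggregator.GetEvent<SomethingRequestedEvent>().Subscribe(OnSomethingRequested);
    }

    private void OnSomethingRequested()
    {
        // search for hotfixes...
    }
}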
And this is the class that implements the IModule interface (the module class):
using Prism.Ioc;
using Prism.Modularity;

namespace SearchHotfix.Library
{
    public class HotfixSearcherModule : IModule
    {
        public HotfixSearcherModule()
        {
        }

        public void OnInitialized(IContainerProvider containerProvider)
        {
            // Create Searcher instance
            var searcher = containerProvider.Resolve<IHotfixSearcher>();
        }

        public void RegisterTypes(IContainerRegistry containerRegistry)
        {
            containerRegistry.RegisterSingleton<IHotfixSearcher, HotfixSearcher>();
        }
    }
}
That is the only way I found to get the class instantiated, but I am not a hundred per cent comfortable with creating an instance that is never used; I think it does not make much sense.
For modules that have screens, the instances get created when navigating to them using the RequestNavigate method:
_regionManager.RequestNavigate(RegionNames.ContentRegion, "ContentView");
But since this is only a library with no screens, I can't find any other way to get this instantiated.
According to the Prism documentation, subscribing to an event should be enough, but I tried doing that from within my main class HotfixSearcher and it does not work (breakpoints in the constructor or in the handler for the event I subscribe to are never hit).
When I do it this way instead, the instance is created, I hit the constructor breakpoint, and the instance is obviously subscribed to the event since that is done in the constructor.
To sum up, is there a way to get rid of that var searcher = containerProvider.Resolve<IHotfixSearcher>(); and a better way to achieve this?
Thanks in advance!

Or is it mandatory that a module has a screen?
No, of course not, modules have nothing to do with views or view models. They are just a set of registrations with the container.
what would be the best way to create the instance?
Let the container do the work. Normally, you have (at least) one assembly that only contains public interfaces (and the associated enums), but no modules. You reference that from the module and register the module's implementations of the relevant interfaces within the module's RegisterTypes method. Some other module (or the main app) can then have classes that take the interfaces as constructor parameters, and the container will resolve (i.e. create) the concrete types registered in the module, even though they are internal or even private and completely unknown outside the module.
This is as loose a coupling as it gets if you don't want to sacrifice strong typing.
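A minimal sketch of that layout, reusing the names from the question (SomeViewModel is just a placeholder consumer):
// Shared contracts assembly - referenced by every module, contains no implementations:
public interface IHotfixSearcher { /* ... */ }

// Library module - registers its internal implementation against the contract:
internal class HotfixSearcher : IHotfixSearcher { /* ... */ }

public class HotfixSearcherModule : IModule
{
    public void RegisterTypes(IContainerRegistry containerRegistry) =>
        containerRegistry.RegisterSingleton<IHotfixSearcher, HotfixSearcher>();

    public void OnInitialized(IContainerProvider containerProvider) { }
}

// Consuming module or shell - the container injects the implementation
// without any project reference to the library module:
public class SomeViewModel
{
    public SomeViewModel(IHotfixSearcher searcher) { /* use the searcher */ }
}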
is there a way to get rid of that var searcher = containerProvider.Resolve<IHotfixSearcher>(); and a better way to achieve this?
You can skip the var searcher = part :-) But if the HotfixSearcher is never injected anywhere, it won't be created unless you do it yourself. OnInitialized is the perfect spot for this, because it runs after all modules have had their chance to RegisterTypes, so all dependencies should be registered.
If HotfixSearcher is not meant to be injected, you can also drop IHotfixSearcher and resolve HotfixSearcher directly:
public void OnInitialized(IContainerProvider containerProvider)
{
    containerProvider.Resolve<HotfixSearcher>();
}
I am not a hundred per cent comfortable with creating an instance that is not used, I think it does not make much sense.
It is used, I suppose, although not through calling one of its methods. It's used by sending it an event. That's just fine. Think of it like Task.Run - it's fine for the task to exist in seeming isolation, too.


Any way to trigger creation of a list of all classes in a hierarchy in Swift 4?

Edit: So far it looks like the answer to my question is, "You can't do that in Swift." I currently have a solution whereby the subclass names are listed in an array and I loop around and instantiate them to trigger the process I'm describing below. If this is the best that can be done, I'll switch it to a plist so that at least it's externally defined. Another option would be to scan a directory and load all files found; then I would just need to make sure the compiler output for certain classes is put into that directory...
I'm looking for a way to do something that I've done in C++ a few times. Essentially, I want to build a series of concrete classes that implement a particular protocol, and I want those classes to automatically register themselves so that I can obtain a list of all such classes. It's a classic Prototype pattern (see the GoF book) with a twist.
Here's my approach in C++; perhaps you can give me some ideas for how to do this in Swift 4? (This code is grossly simplified, but it should demonstrate the technique.)
#include <set>

using namespace std;

class Base {
private:
    static set<Base*> allClasses;
    Base(Base &); // never defined
protected:
    Base() {
        allClasses.insert(this);
    }
public:
    static set<Base*> getAllClasses();
    virtual Base* clone() = 0;
};
As you can see, every time a subclass is instantiated, a pointer to the object will be added to the static Base::allClasses by the base class constructor.
This means every class inherited from Base can follow a simple pattern and it will be registered in Base::allClasses. My application can then retrieve the list of registered objects and manipulate them as required (clone new ones, call getter/setter methods, etc).
class Derived: public Base {
private:
    static Derived global; // force default constructor call
    Derived() {
        // initialize the properties...
    }
    Derived(Derived &d) {
        // whatever is needed for cloning...
    }
public:
    virtual Derived* clone() {
        return new Derived(*this);
    }
};
My main application can retrieve the list of objects and use it to create new objects of classes that it knows nothing about. The base class could have a getName() method that the application uses to populate a menu; now the menu automatically updates when new subclasses are created with no code changes anywhere else in the application. This is a very powerful pattern in terms of producing extensible, loosely coupled code...
I want to do something similar in Swift. However, it looks like Swift is similar to Java in that it has some kind of runtime loader, and the subclasses in this scheme (such as Derived) are not loaded because they're never referenced. And if they're not loaded, then the global variable never triggers the constructor call and the object isn't registered with the base class. Breakpoints in the subclass constructor show that it's not being invoked.
Is there a way to do the above? My goal is to be able to add a new subclass and have the application automatically pick up the fact that the class exists without me having to edit a plist file or doing anything other than writing the code and building the app.
Thanks for reading this far — I'm sure this is a bit of a tricky question to comprehend (I've had difficulty in the past explaining it!).
I'm answering my own question; maybe it'll help someone else.
My goal is to auto initialize subclasses such that they can register with a central authority and allow the application to retrieve a list of all such classes. As I put in my edited question, above, there doesn't appear to be a way to do this in Swift. I have confirmed this now.
I've tried a bunch of different techniques and nothing seems to work. My goal was to be able to add a .swift file with a class in it and rebuild, and have everything automagically know about the new class. I will be doing this a little differently, though.
I now plan to put all subclasses that need to be initialized this way into a particular directory in my application bundle, then my AppDelegate (or similar class) will be responsible for invoking a method that scans the directory using the filenames as the class names, and instantiating each one, thus building the list of "registered" subclasses.
When I have this working, I'll come back and post the code here (or in a GitHub project and link to it).
Same boat. So far the solution I've found is to list classes manually, but not as an array of strings (which is error-prone). An array of classes such as this does the job:
class AClass {
    class var subclasses: [AClass.Type] {
        return [BClass.self, CClass.self, DClass.self]
    }
}
As a bonus, this approach allows me to handle trees of classes, simply by overriding subclasses in each subclass.

Autofac and Quartz.Net Integration

Does anyone have any experience integrating autofac and Quartz.Net? If so, where is it best to control lifetime management -- the IJobFactory, within the Execute of the IJob, or through event listeners?
Right now, I'm using a custom autofac IJobFactory to create the IJob instances, but I don't have an easy way to plug in to a ILifetimeScope in the IJobFactory to ensure any expensive resources that are injected in the IJob are cleaned up. The job factory just creates an instance of a job and returns it. Here are my current ideas (hopefully there are better ones...)
It looks like most Autofac integrations somehow wrap an ILifetimeScope around the unit of work they create. The obvious brute-force way seems to be to pass an ILifetimeScope into the IJob and have the Execute method create a child ILifetimeScope and instantiate any dependencies there (a rough sketch of this option follows after the list of ideas). This seems a little too close to a service locator pattern, which in turn seems to go against the spirit of Autofac, but it might be the most obvious way to ensure proper handling of a scope.
I could plug into some of the Quartz events to handle the different phases of the Job execution stack, and handle lifetime management there. That would probably be a lot more work, but possibly worth it if it gets cleaner separation of concerns.
Ensure that an IJob is a simple wrapper around an IServiceComponent type, which would do all the work, and request it as Owned<T> or Func<Owned<T>>. I like how this seems to vibe more with Autofac, but I don't like that it's not strictly enforceable for all implementors of IJob.
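For reference, the brute-force first option would look roughly like this (a sketch only; IExpensiveResource is a made-up placeholder and the Execute signature is simplified):
public class SomeJob : IJob
{
    private readonly ILifetimeScope _scope;

    public SomeJob(ILifetimeScope scope)
    {
        _scope = scope;
    }

    public void Execute()
    {
        // Create a child scope per execution and resolve inside it, so anything
        // InstancePerLifetimeScope or InstancePerDependency is disposed with the scope.
        using (var childScope = _scope.BeginLifetimeScope())
        {
            var resource = childScope.Resolve<IExpensiveResource>();
            // ... do the actual work with the resource ...
        }
    }
}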
Without knowing too much about Quartz.Net and IJobs, I'll venture a suggestion still.
Consider the following Job wrapper:
public class JobWrapper<T> : IJob where T : IJob
{
    private Func<Owned<T>> _jobFactory;

    public JobWrapper(Func<Owned<T>> jobFactory)
    {
        _jobFactory = jobFactory;
    }

    void IJob.Execute()
    {
        using (var ownedJob = _jobFactory())
        {
            var theJob = ownedJob.Value;
            theJob.Execute();
        }
    }
}
Given the following registrations:
builder.RegisterGeneric(typeof(JobWrapper<>));
builder.RegisterType<SomeJob>();
A job factory could now resolve this wrapper:
var job = _container.Resolve<JobWrapper<SomeJob>>();
Note: a lifetime scope will be created as part of the ownedJob instance, which in this case is of type Owned<SomeJob>. Any dependencies required by SomeJob that are InstancePerLifetimeScope or InstancePerDependency will be created and destroyed along with the Owned instance.
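For completeness, a custom job factory could close the wrapper over the job type roughly like this (a sketch only; the single-argument NewJob matches older Quartz.Net versions, newer ones also take an IScheduler and require ReturnJob):
public class AutofacJobFactory : IJobFactory
{
    private readonly IContainer _container;

    public AutofacJobFactory(IContainer container)
    {
        _container = container;
    }

    public IJob NewJob(TriggerFiredBundle bundle)
    {
        // Close JobWrapper<> over the concrete job type, e.g. JobWrapper<SomeJob>,
        // and let Autofac build it together with its Func<Owned<T>> dependency.
        var wrapperType = typeof(JobWrapper<>).MakeGenericType(bundle.JobDetail.JobType);
        return (IJob)_container.Resolve(wrapperType);
    }
}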
Take a look at https://github.com/alphacloud/Autofac.Extras.Quartz. It is also available as a NuGet package: https://www.nuget.org/packages/Autofac.Extras.Quartz/
I know it's a bit late :)

How to use OSGi getServiceReference() right

I am new to OSGi and came across several examples about OSGi services.
For example:
import org.osgi.framework.*;
import org.osgi.service.log.*;

public class MyActivator implements BundleActivator {
    public void start(BundleContext context) throws Exception {
        ServiceReference logRef =
            context.getServiceReference(LogService.class.getName());
    }
}
My question is, why do you use
getServiceReference(LogService.class.getName())
instead of
getServiceReference("LogService")
If you use LogService.class.getName() you have to import the interface. This also means that you have to import the package org.osgi.service.log in your MANIFEST.MF.
Isn't that completely counterproductive if you want to reduce dependencies to promote loose coupling? As far as I know, one advantage of services is that the service consumer doesn't have to know the service publisher. But if you have to import one specific interface, you clearly have to know who's providing it. By only using a string like "LogService" you would not have to know that the interface is provided by org.osgi.service.log.LogService.
What am I missing here?
Looks like you've confused implementation and interface
Using the actual interface for the name (and importing the interface, which you'll end up doing anyway) reinforces the interface contract that services are designed around. You don't care about the implementation of a LogService, but you do care about the interface. Every LogService will need to implement the same interface, hence your use of the interface to get the service. For all you know the LogService is really a wrapper around SLF4J provided by some other bundle. All you see is the interface. That's the loose coupling you're looking for. You don't have to ship the interface with every implementation. Leave the interface in its own bundle and have multiple implementations of that interface.
Side note: ServiceTracker is usually easier to use, give it a try!
Added benefits: using the interface to get the class name avoids spelling mistakes and excessive string literals, and makes refactoring much easier.
After you've gotten the ServiceReference, your next couple lines will likely involve this:
Object logSvc = context.getService(logRef);
// What can you do with logSvc now?!? It's an object, mostly useless
// Cast to the interface ... YES! Now you need to import it!
LogService logger = (LogService) logSvc;
logger.log(LogService.LOG_INFO, "Interfaces are a contract between implementation and consumer/user");
If you use the LogService, you're coupled to it anyway. If you write middleware, you likely get the name parameterized through some XML file or via an API. And yes, "LogService" will fail terribly; you need to use the fully qualified name: "org.osgi.service.log.LogService". The main reason to use the LogService.class.getName() pattern is to get correct renaming when you refactor your code and to minimize spelling errors. The next OSGi API will very likely have:
ServiceReference<S> getServiceReference(Class<S> type)
calls to increase type safety.
Anyway, I would never use these low-level APIs unless you develop middleware. If you actually depend on a concrete class, DS is infinitely simpler, and even more so when you use it with the bnd annotations (http://enroute.osgi.org/doc/217-ds.html).
@Component
class Xyz implements SomeService {
    LogService log;

    @Reference
    void setLog(LogService log) { this.log = log; }

    public void foo() { ... someservice ... }
}
If you develop middleware you get the service classes usually without knowing the actual class, via a string or class object. The OSGi API based on strings is used in those cases because it allows us to be more lazy by not creating a class loader until the last moment in time. I think the biggest mistake we made in OSGi 12 years ago is not to include the DS concepts in the core ... :-(
You cannot use the value "LogService" as a class name to get a ServiceReference, because you have to use the fully qualified class name "org.osgi.service.log.LogService".
If you import package this way:
org.osgi.service.log;resolution:=optional
and you use a ServiceTracker to track services in the BundleActivator.start() method, I suggest using "org.osgi.service.log.LogService" instead of LogService.class.getName() for ServiceTracker initialization. In this case you won't get a NoClassDefFoundError/ClassNotFoundException on bundle start.
As basszero mentioned you should consider to use ServiceTracker. It is fairly easy to use and also supports a much better programming pattern. You must never assume that a ServiceReference you got sometime in the past is still valid. The service the ServiceReference points to might have gone away. The ServiceTracker will automatically notify you when a service is registered or unregistered.

ServiceContainer, IoC, and disposable objects

I have a question, and I'm going to tag this subjective since that's what I think it evolves into, more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question but you need to know the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable requirements on interfaces and concrete types to be future-proof, whose responsibility are objects injected as part of constructor calls?
Edit: I accepted the answer by @Chris Ballard since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
    where T : class
{
    T Instance { get; }
    Boolean Success { get; }
    String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T> is IDisposable, and should always be disposed of. It's not up to the programmer that resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part, whether the service instance should be disposed or not, that knowledge is baked into the object implementing IService<T>, so if it's a factory-scoped service (ie. each call to Resolve ends up with a new service instance), then the service instance will be disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want minimum 2 service instances, maximum 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects, or construct a new one. And then, when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
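To illustrate, the factory-scoped wrapper boils down to something like this (a simplified sketch, not our exact implementation):
internal sealed class TransientService<T> : IService<T>
    where T : class
{
    public TransientService(T instance)
    {
        Instance = instance;
        Success = instance != null;
    }

    public T Instance { get; private set; }
    public Boolean Success { get; private set; }
    public String FailureMessage { get; private set; }

    public void Dispose()
    {
        // Only the wrapper knows whether the underlying service instance should be
        // disposed; a pooled variant would return it to its pool here instead.
        var disposable = Instance as IDisposable;
        if (disposable != null)
            disposable.Dispose();
    }
}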
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
    service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
    ...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
    ...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
But let's assume that we have classes that provide data and that, right now, don't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement runtime checking such that if a concrete type being registered implements IDisposable, the interface it is exposed through must also implement IDisposable. This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete types can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check if the object implements IDisposable and "do the right thing"?
Are there others?
Also, what about injecting objects into constructors? Our container, and some of the other containers we've looked into, is capable of injecting a fresh object into a parameter of a constructor of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and if we enforce IDispose-"ability" on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.
One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, e.g.:
IBinaryDataProviderFactory factory =
    ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();
using (IBinaryDataProvider provider = factory.CreateProvider())
{
    ...
}
Downside is added complexity, but it does mean that the container never creates anything which the developer is supposed to dispose of - it is always explicit code which does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
(Disclaimer: I'm answering this based on java stuff. Although I program C# I haven't proxied anything in C# but I know it's possible. Sorry about the java terminology)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple?
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how this'd be done in a nice manner.
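In C#, that idea could look roughly like the following, using Castle DynamicProxy as one possible proxy library (a sketch only, not a drop-in implementation):
public static object WrapAsDisposable(Type serviceInterface, object target)
{
    var generator = new ProxyGenerator();

    // Expose IDisposable in addition to the service interface, so the caller
    // can always put the resolved object in a using block.
    return generator.CreateInterfaceProxyWithTarget(
        serviceInterface,
        new[] { typeof(IDisposable) },
        target,
        new DisposeInterceptor(target));
}

internal sealed class DisposeInterceptor : IInterceptor
{
    private readonly object _target;

    public DisposeInterceptor(object target)
    {
        _target = target;
    }

    public void Intercept(IInvocation invocation)
    {
        if (invocation.Method.DeclaringType == typeof(IDisposable))
        {
            // Dispose the real object only if it actually supports it.
            var disposable = _target as IDisposable;
            if (disposable != null)
                disposable.Dispose();
            return;
        }

        invocation.Proceed(); // forward everything else to the real object
    }
}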
You actually came up with a very dirty solution: your IService contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. Lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is totally the responsibility of the calling code.

Library assembly IoC setup

I am working on a project that has two main parts: a class library assembly and the main application. Both are using Castle Windsor for IoC and both manually set up their list of components in code (to aid refactoring and prevent the need for a config file). Currently the main application has code like this:
public static void Main()
{
    // Perform library IoC setup
    LibraryComponent.Init();

    // Perform application IoC setup
    IoC.Register<IXyz, Abc>("abc");
    // etc, etc, ...

    // Start the application code ...
}
However the call to initialise the library doesn't seem like a good solution. What is the best way to setup a class library that uses an IoC container to decouple its internal components?
Edit:
Lusid proposed using a static method on each public component in the library that would in turn make the call to initialise. One possible way to make this a bit nicer would be to use something like PostSharp to do this in an aspect-oriented way. However I was hoping for something a bit more elegant ;-)
Lusid also proposed using the AppDomain.AssemblyLoad event to perform custom steps at load time; however, I am really after a way to avoid the client assembly requiring any setup code.
Thanks!
I'm not sure if I'm understanding exactly the problem you are trying to solve, but my first guess is that you are looking for a way to decouple the need to call the Init method from your main application.
One method I've used in the past is a static constructor on a static class in the class library:
static public class LibraryComponent {
    static LibraryComponent() {
        Init();
    }
}
If you have multiple class libraries, and would like a quick and dirty way of evaluating all of them as they are loaded, here's a (kinda hairy) way:
[STAThread]
static void Main()
{
    AppDomain.CurrentDomain.AssemblyLoad += new AssemblyLoadEventHandler(CurrentDomain_AssemblyLoad);
}

static void CurrentDomain_AssemblyLoad(object sender, AssemblyLoadEventArgs args)
{
    IEnumerable<Type> types = args.LoadedAssembly.GetTypes()
        .Where(t => typeof(IMyModuleInterface).IsAssignableFrom(t));

    foreach (Type t in types)
    {
        doSomethingWithType(t);
    }
}
The Where clause could be anything you want, of course. The code above would find any class deriving from IMyModuleInterface in each assembly that gets loaded into the current AppDomain, and then I can do something with it, whether it be registering dependencies, maintaining an internal list, whatever.
Might not be exactly what you are looking for, but hopefully it helps in some way.
You could have a registration module. Basically, the LibraryComponent.Init() function takes an IRegistrar to wire everything up.
The IRegistrar could basically have a function Register(Type interface, Type implementation). The implementor would map that function back to their IoC container.
The downside is that you can't rely on anything specific to the container you're using.
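A rough sketch of that idea (names are illustrative; the Windsor adapter is just one possible host implementation):
public interface IRegistrar
{
    void Register(Type service, Type implementation);
}

// Example adapter for a Castle Windsor host.
public class WindsorRegistrar : IRegistrar
{
    private readonly IWindsorContainer _container;

    public WindsorRegistrar(IWindsorContainer container)
    {
        _container = container;
    }

    public void Register(Type service, Type implementation)
    {
        _container.Register(Component.For(service).ImplementedBy(implementation));
    }
}

// The library then only depends on the abstraction:
// LibraryComponent.Init(new WindsorRegistrar(container));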
Castle Windsor actually has a concept called facilities that are basically just ways of wrapping standardised pieces of configuration. In this model, you would simply add the two facilities to the container and they would do the work.
Of course, this wouldn't really be better than calling a library routine to do the work unless you configured the facilities in a configuration file (consider binsor). If you are really allergic to configuration files, your current solution is probably the best.