Rust: Enforcing lifetimes within (and outside of) dynamically loaded libraries - plugins

I'm exploring dynamically loading libraries with Rust, and would like to get lifetimes right.
I'm basically following https://michael-f-bryan.github.io/rust-ffi-guide/dynamic_loading.html for the general setup. However, the Plugin trait on that page appears to create references with a 'static lifetime from within a dynamically loaded plugin, and I'm a bit puzzled how that can be correct, given that the plugin is loaded and unloaded at runtime.
Example copied (and shortened) from the linked page:
pub trait Plugin: Any + Send + Sync {
    fn name(&self) -> &'static str;
    ...
}
With the library getting unloaded at runtime, both the lifetime of the return value of fn name(&self) and the Any supertrait sound like a lie to me, at least if I understand correctly that unloading the library removes all of its "static" symbols from memory, thereby making the string, the type id, etc. point to invalid memory...
The linked site even explicitly mentions that the loaded libraries need to stay in memory longer than the Plugin trait objects created from them, but the code does not seem to enforce this in any way.
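To make the hazard concrete, here is a minimal sketch of the host side of the guide's setup (assuming libloading and the guide's _plugin_create entry point; the path, names, and unwraps are illustrative only):

use libloading::{Library, Symbol};

fn main() {
    // hypothetical plugin path; Library::new is unsafe in recent libloading versions
    let lib = unsafe { Library::new("./plugin.so") }.unwrap();
    let create: Symbol<unsafe extern "C" fn() -> *mut dyn Plugin> =
        unsafe { lib.get(b"_plugin_create") }.unwrap();
    let plugin: Box<dyn Plugin> = unsafe { Box::from_raw(create()) };

    // name() promises 'static, so the compiler lets the reference
    // outlive both the plugin and the library...
    let name: &'static str = plugin.name();

    drop(plugin);
    drop(lib); // unmaps the library, including the bytes `name` points into

    println!("{}", name); // dangling reference: undefined behavior
}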
My naive idea to fix this was to just remove the Any trait (I don't need it), to tie all output references to the lifetime of self, and to also annotate the lifetime of the trait object in the return type of the function that creates the trait object from the library.
pub trait Plugin: Send {
    fn name<'p>(&'p self) -> &'p str;
    ...
}

pub fn load_plugin_from_library<'p>(library: &'p Library) -> Box<dyn Plugin + 'p> {
    ...
}
I think this will make sure that, on the calling side, I cannot accidentally call any plugin methods after the library goes out of scope.
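For illustration, a sketch of what the borrowed signature buys on the calling side (assuming the load_plugin_from_library above; the error message is approximate):

let library = unsafe { Library::new("./plugin.so") }.unwrap();
let plugin = load_plugin_from_library(&library);
println!("{}", plugin.name()); // fine: `library` is still alive

drop(library); // error[E0505]: cannot move out of `library` because it is borrowed
println!("{}", plugin.name()); // this later use is what keeps the borrow alive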
However, I just realized that there is nothing stopping the implementer of the plugin from accidentally referencing the library even after the Plugin trait object has been dropped. For instance, fn name(&self) could internally spawn a new thread and detach it. That thread could still rely on the library's symbols, and if it is still running when I unload the library, everything will crash.
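A sketch of an implementation that the lifetimes above cannot rule out (MyPlugin is hypothetical); the borrow of self ends when name returns, but the detached thread keeps executing code that lives inside the plugin library:

impl Plugin for MyPlugin {
    fn name<'p>(&'p self) -> &'p str {
        std::thread::spawn(|| {
            // this closure is compiled into the plugin library itself; if the
            // host unloads the library while the thread is still running, the
            // thread's code and data are unmapped out from under it
            loop { /* do background work */ }
        }); // the JoinHandle is dropped, detaching the thread
        "my-plugin"
    }
}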
Now my actual question: can I somehow prevent this situation, other than marking the trait as unsafe to make sure implementers actually read the documentation and join all threads they spawn before returning from the Plugin's methods?

Related

Calling function in Swift application from dependent module

I have a Swift application that uses a module, and I need to call a global function that lives in the application from within the module. Is this possible?
To perhaps explain a little better, this is a test app structure:
CallbackTestApp contains a function foo(); I would like to call it from Module1 or File. Will Swift allow this?
edit #1
More details have been requested on what is the background of my issue, hopefully, this will not turn out to be an XY situation.
There's a tool developed by my company that processes the application's source* code and in some places adds function calls (ignore the why, etc.; I have to be generic here). Those function calls are exactly to foo(), which then does some magic (btw, no return value and no arguments are allowed). If the application does not use modules, or if modules are excluded from the processing, then all is fine (the linker does not complain that the function is undefined); if there are modules, then nothing works, since I have not found a way to inject foo() (yet).
*Not exactly the source code; actually the bitcode is processed. The tool gets the source, uses the llvm toolchain to generate bitcode, does some more magic, and then adds the call to foo() by generating its mangled name and adding a swiftcall.
Not actually sure those additional details will help.

Using a global object or parameters to pass config data, which one is better in Scala?

I'm a newbie to Scala, and I have years of experience programming in Java.
Usually there are two patterns for passing config around:
Using a global object, something like a "ConfigManager", and every time I need a config value I get it directly from there.
Passing the config through parameters. The config parameter may exist in many layers of the program.
When I'm writing Java, I choose one of the patterns depending on how the config will be used.
But in Scala, many people talk about eliminating side effects. This makes me wonder if I should use the second pattern at all costs.
Which pattern is better in Scala?
Global objects are bad: https://softwareengineering.stackexchange.com/questions/148108/why-is-global-state-so-evil
Make each component take its configuration (the individual pieces) as constructor parameters (possibly with some defaults). That prevents the creation of invalid components or of components that have not been configured.
You can collect the initial processing of configuration values in a single class to centralize configuration code and to fail fast when things are missing. But don't make your components (the classes needing the configuration) depend on a global object or take in an entire configuration as a parameter; give them just what they need as constructor params.
Example:
import com.typesafe.config.{Config, ConfigFactory}

// centralize the parsing of configuration
case class AppConfig(config: Config) {
  val timeInterval = config.getInt("time_interval")
  val someOtherSetting = config.getString("some_other_setting")
}
...
// don't depend on global objects
class SomeComponent(timeInterval: Int) {
  ...
}
object SomeApplication extends App {
  val config = AppConfig(ConfigFactory.load())
  val component = new SomeComponent(config.timeInterval)
}
Use a global object (this object stores only read-only immutable data, so there are no issues) which loads the configuration object and config variables all at once. This has many benefits over loading the configuration deep inside the code.
object ConfigParams {
  val config = ConfigFactory.load()
  val timeInterval = config.getInt("time_interval")
  ....
}
Benefits:
Prevents runtime errors (fail-fast approach).
If you have misspelled any property name, your app fails during startup, because you fetch the data eagerly. If this were buried deep inside the codebase, it would be hard to notice, failing only when control reaches that line, so it cannot easily be detected without rigorous testing.
Central place for all configuration logic and configuration transformations, if any.
This serves as a central place for all config logic; it is easy to change and maintain, and transformations can be done without refactoring the code.
Maintainable and readable.
Easy refactoring.
Functional programming point of view
Yes, loading the config file eagerly is a great idea from a fail-fast point of view, but it's not good functional programming practice.
The important thing, however, is that you are not mixing the side effect with any other logic; you keep it separate, during the loading of the app. Since you isolate the side effect and perform it at the start of your program, this is not a problem.
Once the side effect is done and the app has started, your pure codebase is not affected by it and remains pure and clean. So, although it is a side effect, it is isolated and does not affect your codebase. The benefits are worth it, so go ahead.

How to Install a module that needs an instance per something else that is registered in Castle Windsor

I am trying to get the hang of IoC and DI and am using Castle Windsor. I have created a type that can be instantiated multiple times over different generic types. For example:
MyType<Generic, Generic2>
On installation of MyType's assembly:
container.Register(Component.For(typeof(IMyType<,>)).ImplementedBy(typeof(MyType<,>)));
Then, in my main module's initialization, I install MyType's module with MyTypeInstaller, which is an IWindsorInstaller.
Then I am manually resolving the various types of MyType that I want (this will actually be spread around different installers), with something like:
container.Resolve<IMyType<type1, type2>>();
That creates an actual instance of MyType registered for the generic types passed in. This works fine; I get the instances of MyType<,> I need.
Now, finally, I have another module that I will install last. I want to say
container.ResolveAll<IMyType<,>>()
and then create an instance of this new object for each one that exists.
However, I can't seem to resolve all of the IMyType<,> instances without knowing the concrete types each one was instantiated with.
At any rate, it is possible I am just doing this wrong, and I'd welcome feedback in general as well.
First, if MyType<T1, T2> can only be instantiated once for each combination of T1 and T2, then you should be registering it as a singleton.
Second, you cannot call strongly typed container methods (like ResolveAll<T>) with an open generic; the type argument MUST be a closed type.
Third, MyType is an open generic type, and the number of possible closed generic classes is infinite (generic type constraints are not considered by the container). So, as far as the container is concerned, you can call Resolve<IMyType<Anything, AnythingElse>>() and it will attempt to provide a MyType<Anything, AnythingElse> for you. If Anything and AnythingElse don't satisfy the type constraints, you will simply get a runtime error.
Even if you could call ResolveAll<IMyType<,>>(), what would you expect it to return, given that you have registered an open generic implementation?

How to create a Scala class based on user input?

I have a use case where I need to create a class based on user input.
For example, the user input could be : "(Int,fieldname1) : (String,fieldname2) : .. etc"
Then a class has to be created at runtime, as follows:
class Some {
  var fieldname1: Int = _
  var fieldname2: String = _
  // ..so..on..
}
Is this something that Scala supports? Any help is really appreciated.
Your scenario doesn't seem to make sense. It's not so much an issue of runtime instantiation (the JVM can certainly do this with reflection). Really, what you're asking is to dynamically generate a class, which is only useful if your code makes use of it later on. But how can your code make use of it later on if you don't know what it looks like? For example, how would your later code know which fields it could reference?
No, not really.
The idea of a class is to define a type that can be checked at compile time. You see, creating it at runtime would somewhat contradict that.
You might want to store the user input in a different way, e.g. a map.
What are you trying to achieve by creating a class at runtime?
I think this makes sense, as long as you are using your "data model" in a generic manner.
Will this approach work here? Depends.
If your data comes from a file that is read at runtime but available at compile time, then you're in luck and type-safety will be maintained. In fact, you will have two options.
Split your project into two:
In the first run, read the file and write the new source programmatically (as Strings, or better, with Treehugger).
In the second run, compile your generated class with the rest of your project and use it normally.
If #1 is too "manual", then use Macro Annotations. The idea here is that the main sub-project's compile time follows the macro sub-project's runtime. Therefore, if we provide the main sub-project with an "empty" class, members can be added to it dynamically at compile time, using data that the macro sees at runtime. To get started, modify the macro to read from a file, as in this example.
Else, if your data is truly only knowable at runtime, then @Rob Starling's suggestion may work for you as it did for me. I'll share my attempt if you want to be a guinea pig. For debugging, I've got an App.scala in there that shows how to pass strings to a runtime class generator, access the result at runtime with Java reflection, and even define a Scala type alias with it. So the question is: will your new dynamic class serve as a type parameter in Slick, or fail to, as it sometimes does with other libraries?

How does Import and Export work at Runtime in MEF?

I am starting to learn MEF, and one important thing in it is that I can mark some item (class, property, method) with the Export attribute so that whoever wants to use it creates an Import attribute on an instance variable and uses it. How does this mapping happen, and when does it happen? Does the import happen lazily on demand, or does all the composition happen at startup? Sorry for the ignorant question; I am trying to understand the flow.
It happens in a phase called "Composition". First you create a container and load all your possible sources of parts into it, and then you Compose it. When you do the composition, it resolves all the dependencies and throws an exception if it can't resolve them all properly.
In general, your parts get instantiated during composition (and if you set a break point in the constructor of your part classes, you will see the break point hit during your call to Compose()). However, you can override this in a straightforward way if you use Lazy<T> as the type of your import (assuming you exported your part as type T).
To see how the composition works, take a look at the Compose() method here.