What I'm trying to do in Aurelia is something like what Prism does for WPF: composite applications.
So let's say I have a "shell" application that defines the main application layout, and then I have modules that I can plug in at run-time. Those modules could be Aurelia applications in their own right or Aurelia plugins (I don't know which to use; I need a recommendation).
When loaded, a module needs to add its menu items to the main application menu to expose its features.
This is a mockup of the application:
Each module can have multiple menu items and can be pretty complex.
I'm using the latest TypeScript, the Aurelia CLI to create the application, and Aurelia's new built-in bundler.
So what I don't know is:
What should those modules/features be? (Aurelia plugins, or separate Aurelia applications?)
How do I load those modules/features at run-time? (e.g. deploy them to some plugins folder and tell the main shell application to load them)
How do I modify the main menu and add new menu items from a loaded module?
Please help
Aurelia supports ultra-dynamic applications. Other community members have had similar requirements and were able to resolve them, so I think this scenario is possible.
It seems each sub-application can just be a route. How/where to load the route should be determined based on the application URL.
Those modules don't need to do anything specific; they can just be normal, plain JS/TS classes with lifecycle methods to handle activation/deactivation. I guess the main shell and all sub-applications need to share a common URL, since you cannot have more than one router.
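As a rough illustration only (the "Reports" feature, file names, and route are made up, and a matching reports/index.html view is assumed), a feature module can be an ordinary class that uses aurelia-router's activate/deactivate hooks, and the shell maps a route to it:

// reports/index.ts -- hypothetical feature module exposed as a route
export class Reports {
  activate(params: any): void {
    // set up the feature when its route is entered
  }

  deactivate(): void {
    // tear down the feature when its route is left
  }
}

// app.ts -- the shell maps a route to the module
import { Router, RouterConfiguration } from 'aurelia-router';

export class App {
  router: Router;

  configureRouter(config: RouterConfiguration, router: Router): void {
    config.map([
      { route: 'reports', name: 'reports', moduleId: 'reports/index', nav: true, title: 'Reports' }
    ]);
    this.router = router;
  }
}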
There could be a singleton/central store for new routes to register information about loaded features, or that information could be loaded up front from a configuration/metadata file or a database fetch.
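And a minimal sketch of that singleton/central-store idea, assuming a hand-rolled registry rather than any built-in Aurelia API (FeatureRegistry, MenuItem and registerMenuItems are hypothetical names):

// feature-registry.ts -- hypothetical singleton, registered once in the DI container
export interface MenuItem {
  title: string;
  route: string;
}

export class FeatureRegistry {
  readonly menuItems: MenuItem[] = [];

  // a loaded module calls this to expose its features in the shell's menu
  registerMenuItems(items: MenuItem[]): void {
    this.menuItems.push(...items);
  }
}

// inside a module's entry point (hypothetical):
//   registry.registerMenuItems([{ title: 'Reports', route: 'reports' }]);
// The shell's nav view then binds to registry.menuItems to render the menu,
// so any module loaded at run-time can contribute items the same way.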
Here is a similar question from another community member that I think can help you see how to glue things together: https://discourse.aurelia.io/t/dynamicaly-load-routes/1906
I am trying to obtain information about the system configuration in an RCP 4 application. I found a link, RCP3 System Configuration, which shows how getting the system configuration is implemented in RCP3. In an RCP3 application, WorkbenchMessages properties and WorkbenchPlugin are used to get the system configuration, but in RCP4 those are not available. How can I get the system configuration?
The ConfigurationInfo class you reference is using the org.eclipse.ui.systemSummarySection extension point and calling the ISystemSummarySection interfaces it defines to get the system summary.
This extension point and the ISystemSummarySection interface do not exist in a plain e4 app so this information is not available using this code.
You may be able to get some of the information by looking at the individual classes that implement ISystemSummarySection and copying the code for those parts that don't use 3.x compatibility mode classes.
For example, the ConfigurationLogDefaultSection class just uses System.getProperties() to generate the system properties section.
I'm interested in using Hot Module Replacement with a newly created React app.
Facebook Incubator's create-react-app uses Webpack 2, which can be configured to support HMR; however, in order to do so, one needs to "eject" the create-react-app project.
As the documentation points out, this is a "one way" operation and cannot be reversed.
If I'm to do this, I want to know what I might be giving up. I've been unable to locate any documentation that explains the potential drawbacks of ejecting.
The current configuration allows your project to get updates from the create-react-app core team; once you eject, you no longer get those updates.
It's kind of like pulling in Bootstrap CSS via a CDN as opposed to downloading the source code and injecting it directly into your project.
If you want more control over your webpack configuration, there are ways to configure/customize it without ejecting:
https://www.npmjs.com/package/custom-react-scripts
I'm wondering what the convention is when creating a component inside an addon project. If I generate a component in my addon project using ember-cli#0.2.0, the blueprint will create a js file in addon/components, a template in addon/templates/components, and a js file in app/components.

The part I'm not really clear about is where templates should live for these components. If my component template requires a partial, I need to put the partial template in the app/templates directory; if it lives in the addon/templates directory, it can't be resolved.

So the question is this: is it best to put all the templates (the component template and the partials) in the app/templates directory, or to leave the component template in the addon/templates/components directory and the partial in the app/templates directory? The latter feels slightly disorganized and the former seems more correct only because of the behavior of the blueprint. Anyone have any insight?
Thanks in advance.
Ember-cli is under heavy development, so a lot of the file structure is likely to change soon, but here are some insights on the current state of the folder structure and why it is arranged the way it is:
The app/ folder is what gets directly imported into your application. Helpers are pulled from here, which is why you have to have a file for each of your components in this folder. Additionally, templates here will get pulled into the main application, and as such they will be accessible in the ways that templates are normally accessible in an Ember app (render, partial, and standard resolution).
Some people choose to place all of their component code in app/, but this is a bad idea because the addon/ folder exists not only as a separation of your addon's code, but as a way for it to be imported using ES6 imports. So, while you can't directly access the components under addon/components/, you can import them into your application like so:
import SomeComponent from 'some-addon/components/some-component'
This is very useful for addon consumers if they want to extend an addon to add some functionality.
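For instance, a consuming app could do something like this (the file path and the override below are hypothetical, just to show the import path in use):

// app/components/fancy-button.js -- hypothetical consumer-side component
import SomeComponent from 'some-addon/components/some-component';

export default SomeComponent.extend({
  // the consuming app layers its own tweaks on top of the addon's component
  classNames: ['fancy-button']
});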
Templates in addon get precompiled in the build tree, which makes addons a bit more robust (for instance if they are using a different version of htmlbars they will still be compatible with the base app). However, they are not accessible via the resolver, so you have to import them manually in your addon's components, which is why the blueprint for addon components looks like the following:
import Ember from 'ember';
import layout from '../templates/components/some-component';

export default Ember.Component.extend({
  layout: layout
});
Styles for addons can either be placed in addon/styles/ or app/styles/. In addon/styles/ they are also precompiled and included in the application by default. I highly recommend including styles in app/styles because this makes them accessible using preprocessor imports in the base application:
@import 'some-addon/main.css';
This makes them completely optional for users of the addon, without resorting to app.import and config trickery (which is good, because nested addons _do not_ support app.import. Don't use it.)
NOTE: They are not automatically namespaced, so you should put your styles in a folder to make sure they don't conflict with other addons' styles.
In summary:
Any addon code that does not need to be directly accessible by the base app via helpers, initializers, etc. should live in addon/
Anything you want to be accessible by ES6 imports should live in addon/
Templates should live in addon/templates/ and be imported manually
Component stubs, initializers, and other files that should be included in the standard Ember build chain should live in app/
Styles should live in app/styles/ and should be namespaced in a folder (e.g. app/styles/some-addon/)
Don't use app.import.
Adapting the MVVMCross framework for Xamarin cross-platform application development, we have a PCL (containing the Model and ViewModel) and a View project for each platform, as in here.
a) Where does Xamarin.Mobile (for gaining access to a single set of APIs) reside? I think inside the PCL. But I see different binaries for Xamarin.Mobile (e.g. Android and iOS); do we put all the Xamarin.Mobile libraries inside the PCL? They all have the same name, so won't there be a conflict?
b) Where do we keep code like accessing Bluetooth (not available in Xamarin.Mobile)? Using MVVMCross decouples the view and business logic, so does all the code for creating view items after an event has occurred (e.g. a button click) reside in the view?
c) Where can we use conditional compilation when adapting MVVMCross? I guess in the Model, but is it only used for file access, or can it also be used to show view items (e.g. a toast message on Android) according to the target platform, by placing it in the PCL?
(Apologies if this is inappropriate; I just gathered some information on MVVMCross and Xamarin.Mobile and had some reasoning/confusion in mind.)
Thank You!
Regards,
Saurav
a) Where does Xamarin.Mobile (for gaining access to a single set of APIs) reside? I think inside the PCL. But I see different binaries for Xamarin.Mobile (e.g. Android and iOS); do we put all the Xamarin.Mobile libraries inside the PCL? They all have the same name, so won't there be a conflict?
Xamarin.Mobile is not portable code - it can't be called directly from PCLs.
For many Xamarin.Mobile functions (and many, many functions which Xamarin.Mobile does not cover), MvvmCross provides plugins - you can see some of them in https://www.nuget.org/packages?q=mvvmcross
For the remaining few methods that Xamarin.Mobile has but we haven't already included - e.g. contacts lookup - you can either:
access the Xamarin.Mobile functions by writing a portable interface (a facade) through which to access them
write a new plugin to implement them
For more on plugins:
see N=8 - adding the location plugin - from the N+1 videos in http://mvvmcross.wordpress.com/
for writing a plugin see https://speakerdeck.com/cirrious/plugins-in-mvvmcross
b) Where do we keep code like accessing Bluetooth (not available in Xamarin.Mobile)?
Generally this is done the same way as above. For example, for Bluetooth take a look at the Sphero example:
http://blog.xamarin.com/xamarin-developer-showdown-winning-entries-showcase-xamarin-mobile/
https://github.com/slodge/BallControl/tree/master/Cirrious.Sphero.WorkBench/Plugins/Sphero
Using MVVMCross decouples the view and business logic, so does all the code for creating view items after an event has occurred (e.g. a button click) reside in the view?
Yes - if it's a 'view concern', then it belongs in the view (this is the same as in any MVVM code)
c) Where can we use conditional compilation when adapting MVVMCross?
I try not to use conditional compilation, including #if and partial classes. Sometimes I'll use it in plugin platform-specific modules, but generally I try to use inheritance or abstraction instead - the reason for this is that I use tools like refactoring and unit tests a lot, and conditional compilation simply does not work with these.
For more on the benefits (and disadvantages) of using PCLs rather than file-linking and other project-based techniques, see What is the advantage of using portable class libraries instead of using "Add as Link"?
I have a Qt application containing a Webkit module and using Dart (compiled to JS). It's like a bare-bones browser written in Qt. The application basically replaces certain text on the webpage with different text. I want users to be able to make their own Dart files to replace their own text with their own different text.
Any recommendations for approaches to creating a plugin system?
I think that this question needs a little clarification: are you asking about using Dart for scripting Qt applications (where Dart plays the role of a scripting language), or are you asking about a plugin system for a Dart application that is compiled to JS and used in a Qt application, probably via QtScript (in which case the role of a scripting language is played by JavaScript)?
I presume that it is the latter variant (and I don't know enough about Qt to be able to answer about the former variant anyway).
Let's assume that all plugins for the Dart application are available at build time of the Qt application, so that you don't need to compile Dart to JS dynamically. Then, if you compile a Dart script, the resulting JS will include all necessary code from its #imports. All you need is to create a proper script that imports all plugins and calls them (importing isn't enough, as dead code will be eliminated).
Maybe an example will be more instructive. Let's say that you want to allow plugins to do some work on a web page. One way you might structure it is that every plugin is a separate #library with a top-level function of a well-known name (say doWork). Example of a plugin:
// my_awesome_plugin.dart
#library('My Awesome Plugin');

doWork(page) {
  page.replaceAll('JavaScript is great', 'Dart is great');
}
You can have as many plugins of this nature as you wish. Then, you would (at build time) generate the following simple main script in Dart:
// main.dart

// these lines are automatically generated -- for each plugin file,
// one #import with unique prefix
#import('my_awesome_plugin.dart', prefix: 'plugin1');
#import('another_plugin.dart', prefix: 'plugin2');

main() {
  var page = ...; // provided externally, from your Qt app

  // and these lines are automatically generated too -- for each plugin,
  // call the doWork function (via the prefix)
  plugin1.doWork(page);
  plugin2.doWork(page);
}
Then, if you compile main.dart to JavaScript, it will include all those plugins.
There are other possibilities for structuring the plugin system: each plugin could be a class implementing a specific interface (or inheriting from a specific base class), but the general approach would be the same -- at least, the approach that I would recommend: making each plugin a separate library.
You probably don't like the step with generating the main script automatically, and I don't like it either. But currently, Dart only allows one way to dynamically load new code: spawning new isolates. I'm not sure how (or even if) that would work in QtScript, because isolates are implemented as web workers when compiled to JavaScript, so I won't discuss this here.
Things will get more complicated if you want to support compiling Dart scripts at the runtime of your Qt application, but I think that I'm already guessing too much about your project and I might be writing about something you don't really need. So I'll finish it like this for now.