What is meant by ExportsChanging and ExportsChanged in the Managed Extensibility Framework (MEF)?

I'm new to MEF. In the Managed Extensibility Framework, what is meant by the events ExportsChanging and ExportsChanged? How can one visualize them?

In the Managed Extensibility Framework, objects are wired together by matching imports with exports. I assume you already know about that. (If not, you should read through the MEF programming guide first and play with MEF a bit.)
In a typical scenario, exports are provided by a catalog of types. Some of these catalogs can be changed while the application is running, at which point the application might be recomposed.
Here are two examples of modifying a catalog:
DirectoryCatalog.Refresh() (this will rescan the directory and pick up new assemblies)
AggregateCatalog.Catalogs.Add
When this happens, the CatalogExportProvider based on the catalog will trigger the ExportsChanging event right before handling the change, and ExportsChanged right after.
Not all export providers have to be based on catalogs, but I hope you get the idea.
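To make this concrete, here's a minimal C# sketch of subscribing to both events on a CompositionContainer built over a DirectoryCatalog. The .\Plugins folder is a placeholder; everything else uses the standard System.ComponentModel.Composition.Hosting types.

    using System;
    using System.ComponentModel.Composition.Hosting;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            // Placeholder plugin folder; point this at your own directory.
            var directoryCatalog = new DirectoryCatalog(@".\Plugins");
            var catalog = new AggregateCatalog(directoryCatalog);
            var container = new CompositionContainer(catalog);

            // Raised just before the container processes the catalog change.
            container.ExportsChanging += (sender, e) =>
                Console.WriteLine("Recomposing: {0} export(s) added, {1} removed",
                    e.AddedExports.Count(), e.RemovedExports.Count());

            // Raised right after recomposition; imports have been rewired by now.
            container.ExportsChanged += (sender, e) =>
            {
                foreach (var export in e.AddedExports)
                    Console.WriteLine("Added export: " + export.ContractName);
            };

            // Drop a new assembly into the folder while running, then rescan:
            directoryCatalog.Refresh(); // fires ExportsChanging, then ExportsChanged
        }
    }

Note that existing parts only observe such changes if their imports allow recomposition, e.g. [Import(AllowRecomposition = true)] or an [ImportMany] collection.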

Related

Entity Framework 6 Code First - Multiple Model/Configurations

After many fruitless searches I have decided to put my head above the parapet. I am building an EF6 code first application for an embedded device.
I have successfully followed many references to enable me to create SQL CE files in the folder locations I need.
However, I need to have two separate models running at the same time
1 - Admin model deals with common things like user logins and generic details.
2 - Project specific model where the real work gets done.
The concept is that the Admin database file resides and remains on the device, but the project database file can be passed from one device to another.
Is it possible to have two different model databases open at the same time in EF6?
If so, is there a good reference site that my search terms have not yet found that will offer advice appropriate to my use case?
Thanks folks.
I have now been able to solve my own question, having found a new article to refer to: Multiple Model EF 6 Data points
Read the article for how to solve my question and other variants of similar problems. The key to solving this problem was to specify a separate folder for each context's migrations when running the commands in the NuGet Package Manager Console, and then, in the initialiser for each context, to refer to the configuration in the appropriate migrations folder. Rather than repeating it all here, please see the article on MSDN I linked to; all the information I needed was in there, plus lots more.
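For illustration, here is a minimal sketch of that setup. All class, entity, and folder names are placeholders, and the two Configuration classes are the ones that Enable-Migrations generates into the folders you specify:

    using System.Data.Entity;

    // Illustrative entities; substitute your real model classes.
    public class UserLogin { public int Id { get; set; } }
    public class WorkItem  { public int Id { get; set; } }

    // Admin data stays on the device, so a fixed connection string name is fine.
    public class AdminContext : DbContext
    {
        public AdminContext() : base("name=AdminDb") { }
        public DbSet<UserLogin> UserLogins { get; set; }
    }

    // The project SQL CE file travels between devices, so its path is passed in.
    public class ProjectContext : DbContext
    {
        public ProjectContext(string nameOrConnectionString)
            : base(nameOrConnectionString) { }
        public DbSet<WorkItem> WorkItems { get; set; }
    }

    public static class ContextSetup
    {
        // One-time setup per context in the Package Manager Console:
        //   Enable-Migrations -ContextTypeName AdminContext   -MigrationsDirectory Migrations\Admin
        //   Enable-Migrations -ContextTypeName ProjectContext -MigrationsDirectory Migrations\Project
        // Each command generates a Configuration class in its own folder/namespace.
        public static void Initialize()
        {
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<AdminContext, Migrations.Admin.Configuration>());
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<ProjectContext, Migrations.Project.Configuration>());
        }
    }

With that in place, both contexts can be open at the same time, and each one migrates only its own database.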

Keeping track of changed properties in JPA

Currently, I'm working on a Java EE project with some non-trivial requirements regarding persistence management. Changes to entities by users first need to be applied to some working copy before being validated, after which they are applied to the "live data". Any changes on that live data also need to have some record of them, to allow auditing.
The entities are managed via JPA, and Hibernate will be used as provider. That is a given, so we don't shy away from Hibernate-specific stuff. For the first requirement, two persistence units are used. One maps the entities to the "live data" tables, the other to the "working copy" tables. For the second requirement, we're going to use Hibernate Envers, a good fit for our use-case.
So far so good. Now, when users view the data on the (web-based) front-end, it would be very useful to be able to indicate which fields were changed in the working copy compared to the live data. A different colour would suffice. For this, we need some way of knowing which properties were altered. My question is, what would be a good way to go about this?
Using the JavaBeans API, a PropertyChangeListener could be used to get notified of any changes to an entity in the working copy and keep a set of them. But that set would also need to be persisted, since the application could be restarted and changes can be long-lived before they're validated and applied to the live data. And applying the changes to the live data to obtain the working copy every time it is needed isn't feasible (hence the two persistence units).
We could also compare the working copy to the live data and find fields that are different. Some introspection and reflection code would suffice, but again that seems rather processing-intensive, not to mention the live data would need to be fetched.
Maybe I'm missing something simple, or someone knows of a wonderful JPA/Hibernate feature I can use. Even if I can't avoid creating one or more separate database tables for storing such information until it is applied to the live data, some best practices or real-life experience with this scenario would be very useful.
I realize it's a semi-open question but surely other people must have encountered a requirement like this. Any good suggestion is appreciated, and any pointer to a ready-made solution would be a good candidate as accepted answer.
Maybe you can use the Hibernate flush entity event listener. The dirty properties are calculated before the flush. You can store them somewhere in your database.
Below is some sample code using the dirty-properties feature of Hibernate, which may give you an idea.
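A minimal sketch of that idea, assuming the Hibernate 4/5 event SPI (package names differ in 3.x); DirtyPropertiesListener and its registration helper are illustrative names:

    import org.hibernate.SessionFactory;
    import org.hibernate.engine.spi.SessionFactoryImplementor;
    import org.hibernate.event.service.spi.EventListenerRegistry;
    import org.hibernate.event.spi.EventType;
    import org.hibernate.event.spi.FlushEntityEvent;
    import org.hibernate.event.spi.FlushEntityEventListener;

    public class DirtyPropertiesListener implements FlushEntityEventListener {

        @Override
        public void onFlushEntity(FlushEntityEvent event) {
            int[] dirty = event.getDirtyProperties();
            if (dirty == null) {
                return; // entity unchanged, or dirty checking was skipped
            }
            String[] names = event.getEntityEntry().getPersister().getPropertyNames();
            for (int index : dirty) {
                // Instead of printing, persist this to your change-tracking table.
                System.out.println(event.getEntity().getClass().getSimpleName()
                        + "." + names[index] + " was modified");
            }
        }

        // Append after Hibernate's default flush-entity listener, so the dirty
        // properties have already been computed when this listener runs.
        public static void register(SessionFactory sessionFactory) {
            EventListenerRegistry registry =
                    ((SessionFactoryImplementor) sessionFactory)
                            .getServiceRegistry()
                            .getService(EventListenerRegistry.class);
            registry.appendListeners(EventType.FLUSH_ENTITY, new DirtyPropertiesListener());
        }
    }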

How to create/insert a product programmatically in WebSphere Commerce 7 (WCS7)

I'm developing an e-commerce site based on WebSphere Commerce 7 (WCS7). I need to import products from an external supplier, which exposes a web service. I've already implemented a controller command performing all the operations needed to extract the products from the remote service, and I have them available as custom Java classes.
I'm a little confused about the approach I should follow in this case. I've defined the attributes needed in my scenario and used the data load utility to import them into the DB. What should I do next? I expect to be able to create WCS products programmatically from my controller command, but I don't know how to use the attributes I've defined in a programmatic insert.
Can someone point me in the right direction on how to perform this kind of operation? I went through the documentation but, given that I'm quite new to the WCS environment, I don't know how to proceed according to current best practices.
It is possible to create a new catalog entry programmatically if you copy what is being done in the LOBTools, though I have not done this myself. I have always added new products via the data loads, and when we did need to add products from an external service, I just output the information to a file and loaded it along with our other products. The reason was to keep the catalog in sync with the product management system.
Have a look at the various DataBean classes in WCS, such as CatalogEntryDataBean.
See here for WCS data beans: Link
And here for "activation" of a DataBean: Link

How to make an external requirement internal

I'm using version 8.0.858 of Enterprise Architect and I am hoping someone knows how to make an external requirement internal again.
I have searched through the EA user guide; it tells me how to make an internal requirement external, but is silent on how to reverse the process.
I have hundreds of requirements linked to Use Cases where the requirement is marked as external, but they shouldn't be as they each only relate to one Use Case.
Here's an example of what I'm talking about. This makes it difficult to get an overview of what the Use Case requires, because when you click on an external requirement, the description does not show up in the textbox; you have to double-click it to open it in a separate window.
My only thought is to hack the database in Access, but I'd rather not if there is any UI functionality for this. That said, if you know how to edit the database directly to achieve my goal, that would be a valid solution too.
To my knowledge this isn't possible, for the reason @observer notes: external requirements are model elements in their own right, and thus have far more information associated with them than internal requirements do.
External requirements (and other model elements) are stored in the t_object table, while internal requirements are in t_objectrequires. Connectors are in t_connector.
I'd advise against trying to hack the database directly. Use the automation interface instead (it can be accessed from an in-EA script); look at the Element and ElementRequirement classes.
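If you do go the automation route, a rough C# sketch against the EA interop assembly (Interop.EA.dll) might look like the following. The "Functional" requirement type is illustrative, and you would still need to clean up the connector to the old element; try it on a copy of your model first:

    using EA;

    class RequirementInternalizer
    {
        // Copies an external Requirement element into the internal requirements
        // of the given Use Case, then deletes the external element.
        static void Internalize(Repository repo, Element useCase, Element extReq)
        {
            var internalReq = (ElementRequirement)useCase.Requirements
                .AddNew(extReq.Name, "Functional");
            internalReq.Notes = extReq.Notes;
            internalReq.Status = extReq.Status;
            internalReq.Update();
            useCase.Requirements.Refresh();

            // Remove the now-redundant external element from its package.
            Package pkg = repo.GetPackageByID(extReq.PackageID);
            for (short i = 0; i < pkg.Elements.Count; i++)
            {
                if (((Element)pkg.Elements.GetAt(i)).ElementID == extReq.ElementID)
                {
                    pkg.Elements.Delete(i);
                    break;
                }
            }
            pkg.Elements.Refresh();
        }
    }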

Is there any reason why someone would want to create a Core Data model programmatically?

I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all this stuff.
I'm that kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode Data Modeler tool.
And since data models are stuck to a given state (except when you want to do some ugly migration operations, where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time.
Did I miss something?
I don't think you missed anything. The only reason I can see to create your model programmatically would be if the objects you are modeling are themselves dynamic: you could, for instance, build a Core Data entity (or graph of entities) in response to a web service that changed over time, or was selected by the user. However, I think if you had that or a similar use case, you wouldn't need to ask this question (and you'd probably solve it a different way anyway).
So, if your application is dealing with resources that are dynamic, as @Andiih mentioned, then building the model programmatically is the only way to do it. I don't know what my Core Data entities are until runtime, I don't know what the attributes are, or what the data looks like. So I ask the server to give me the kinds of resources I should support and what their attributes look like. I build the model, the entities, the properties, the relationships, all at runtime. I still want to use Core Data because I'm dealing with a lot of data and I need the benefit of efficient memory management with NSFetchedResultsController, etc. I can only do this programmatically.
The trouble is how to handle migration to try and preserve as much of the persistent store as possible, to reduce the size of the networked data payload after the model changes. Right now I blast the whole model and the persistent store if there's a conflict. I haven't yet figured out a way to create an .xcdatamodel from a programmatically generated model, thus I can't yet create a version mapping to do the migration.
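For anyone wondering what a runtime-built model looks like, here is a minimal Swift sketch. The entity and attributes are hard-coded placeholders; in the scenario above they would be driven by the server's resource metadata instead:

    import CoreData

    // Build a model in code instead of loading an .xcdatamodeld from the bundle.
    func makeModel() -> NSManagedObjectModel {
        let idAttr = NSAttributeDescription()
        idAttr.name = "identifier"
        idAttr.attributeType = .integer64AttributeType
        idAttr.isOptional = false

        let titleAttr = NSAttributeDescription()
        titleAttr.name = "title"
        titleAttr.attributeType = .stringAttributeType

        let entity = NSEntityDescription()
        entity.name = "Resource" // placeholder entity name
        entity.managedObjectClassName = "NSManagedObject" // no custom subclass
        entity.properties = [idAttr, titleAttr]

        let model = NSManagedObjectModel()
        model.entities = [entity]
        return model
    }

    // The model is then used exactly like one loaded from a bundle.
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: makeModel())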
Everything is a trade off. Basically, I think IB and the visual Core Data modeler are the right tool if you're building a simple application. You'll need to make the determination when your application becomes large/complex enough that you prefer to have direct control over all aspects of the code.
Regarding Interface Builder, if you have an application with a variety of complex interactions between view controllers, and multiple custom controls, I find code more appropriate.
For Core Data, the question is pretty much the same. Does your project have a defined scope? Can you foresee everything in that scope being done within the visual modeler? If so, it's probably fine. For other projects, where you may be asked to add features on an ongoing basis, perhaps it's better to spend a little more time writing it out so you have more flexibility later.
One other thing to consider, that doesn't get mentioned much, is that it's MUCH harder to ask for help with IB or any hybrid visual design/code system. When something does go wrong, or you need help, it's way easier to post your code than to try to explain what's going on in a visual modeler.
In general, there's no reason to build the managed object model in code. There's nothing you can do in code that can't be done in the model editor. There are some fancy tricks you can do in code, however, to work with multiple models. For example, you can merge two models, establishing cross-model relationships between entities in those models at load time (see Cross-model relationships in NSManagedObjectModel from merged models?).
Regarding whether it's a good idea to code or use the graphical editor, I think the balance tips heavily towards the graphical editor in this case. Being able to verify the model by visual inspection instead of (rather convoluted) code is a win. The model can still be verified by unit test, if you desire.
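As an illustration of the merging trick mentioned above, a short Swift sketch (the model names are placeholders):

    import CoreData

    // Load two compiled models from the bundle and merge them at load time.
    func mergedModel(in bundle: Bundle = .main) -> NSManagedObjectModel? {
        let models = ["ModelA", "ModelB"].compactMap { name -> NSManagedObjectModel? in
            guard let url = bundle.url(forResource: name, withExtension: "momd") else {
                return nil
            }
            return NSManagedObjectModel(contentsOf: url)
        }
        return NSManagedObjectModel(byMerging: models)
    }

Cross-model relationships can then be established by looking up entities in the merged model and setting the destination entities on the relevant relationship descriptions before the model is put to use.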
I have one use case that might be valid: what if you load some data from the internet, whether it is XML from an RSS feed or a WSDL response, then flatten those responses into tabular form, generating an in-memory data table, and finally mash it all up into a single coherent data model? Then you can create entities for those in-memory data tables and create master/detail relationships. That's one case where I think a Core Data model generated programmatically could be handy and a powerful feature.
I've changed models programmatically in unit tests. For example, I wrote a class that is designed to work with Core Data models that have a particular protocol attached. Instead of testing against a random implementation, I mutated the default model by adding one programmatically, just in the unit tests, and tested against that test-only model.