Structure Plug-ins and Dependent Libraries while implementing P2 - eclipse-rcp

I am developing an RCP application and I am also implementing p2 updates for it.
I am using this link as a guide to implement p2 updates.
For example, I have 3 plug-ins A, B and C in my application. Plug-in A represents the core functionality of my application, plug-in B is another mandatory plug-in, and plug-in C is optional.
I have created 3 feature projects: FeatureA contains plug-in A and its dependent libraries, FeatureB contains plug-in B and its dependent libraries, and FeatureC contains plug-in C and its dependent libraries.
There are certain libraries which are common across these 3 plug-ins, e.g. BIRT and NatTable. How should I structure them? Currently I am adding them to each feature project independently. What is a better way to structure the feature projects? Kindly guide me.

When you have a common subset of plugins that your features require, you could make another "Requirements" feature which includes the required plugins and then require that feature in your existing features. This makes it easier for you to change your set of required plugins over time.
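As a rough sketch of that layout (all feature and plug-in ids below are invented, and the exact BIRT/NatTable bundle ids depend on your target platform), the shared feature lists the common bundles:

<feature id="com.example.requirements.feature" label="Common Requirements" version="1.0.0">
   <plugin id="org.eclipse.birt.core" version="0.0.0" unpack="false"/>
   <plugin id="org.eclipse.nebula.widgets.nattable.core" version="0.0.0" unpack="false"/>
</feature>

and each of your existing features then requires it instead of listing those bundles itself, e.g. in FeatureA's feature.xml:

<feature id="com.example.featureA" version="1.0.0">
   <requires>
      <import feature="com.example.requirements.feature" version="1.0.0" match="greaterOrEqual"/>
   </requires>
   <plugin id="com.example.pluginA" version="0.0.0" unpack="false"/>
</feature>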
One downside of this approach is that Feature B may not need all of the plugins from the common feature, which means that if you ship Feature B without Feature A it could ship more plugins than you intend.
Another thing to consider is whether you will be updating the features independently of each other. If you require all versions of your features to be the same, then having the new Requirements feature makes sense. But if Feature A can upgrade to version 2.0 while Feature B remains at 1.0, you will encounter provisioning conflicts if you have singleton plugins.
One more thought is that since you're creating an RCP, you may just want to run the p2 publisher over your product file to produce a lineup IU. This produces much more deterministic provisioning of your RCP application. If you're creating a simple RCP which doesn't have the standard Eclipse About dialog, then you wouldn't even need to expose features.
As a final note, you could just buy update functionality. Having written this technology for the past 5 years I can tell you there are many pitfalls. My company's product, Secure Delivery Center, makes it easy to ship your software and includes simple update support.

Related

Setting start levels for dynamic Eclipse features: alternatives to p2.inf?

I have an Eclipse-based OSGi application consisting of bundles organised in features. I use a product definition to launch the application. In this definition, I can also set start-levels for my bundles.
Now imagine I want to add a feature to the running application. Is a p2.inf file the only way to specify start levels for the bundles in this feature? Re-defining and re-starting the product does not sound like an optimal solution as it's not really dynamic.
I am not aware of any real tooling support for setting start levels for bundles on the feature level. The only option you have is manual hacking with a p2.inf.
I think the reason that setting start levels is only really supported for products is that p2 can't handle the case where start levels are specified multiple times. This could easily happen if setting start levels on feature level was encouraged.
So, you can make this work at the feature level, but only if you know what you are doing.
I have the same problem I believe: I have a feature that is both part of a packaged product and present on an update site to be installed into an Eclipse IDE. And I also want to set the start-level for some of the plug-ins to ensure a very early start-up.
I have overcome this with a p2.inf file with the following content:
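# Configure phase: set the bundle's start level to 1 and mark it as started;
# unconfigure phase: reset the start level and started flag when the IU is uninstalled.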
instructions.configure=setStartLevel(startLevel:1);markStarted(started: true);
instructions.unconfigure=setStartLevel(startLevel:-1);markStarted(started: false);
(I don't think I need to specify the start-level in the product definition anymore though I have not tried to remove this yet.)
I originally used start-level 2, but at least for Juno packages, there are some plug-ins that are started at level 1 so I now use level 1 as well.

How to set up for multi-product Eclipse plug-in development?

I have a set of plug-ins which need to support different Eclipse products. There is a core plug-in, which is product-independent, and an adaptation plug-in each for Product X, Product Y, etc.
Deployment-wise, I'm thinking one feature for the core plug-in and one for each product, containing the adaptation plug-in and having a dependency on the core feature, so the core plug-in gets installed without the user having to select it.
1) Is there a better way of structuring the features?
On the development side, I would like to be able to work with both the core and adaptation plug-ins within the same workspace, which as I understand it gives me two main options: a) working within each product using their respective installations as target platforms, or b) working in raw Eclipse with an explicitly defined target platform for each product.
2) What would be the best way to set up the development environment?
If option a), can I use the same workspace for different products or would I need to set up separate workspaces? In other words, are different Eclipse products able to share a workspace as long as they're all based on the same (say) major version, e.g. 3.x?
If option b), can Eclipse manage multiple simultaneous target platforms? In other words, can different plug-in projects within the same workspace be compiled against different target platforms during the same build? And if not, how could I automate switching between them so I wouldn't have to do that manually during a workspace build?
Or indeed, am I missing something fundamental and is there a much better way of doing all this?
The short answer is you can do it either way.
You can have 1 workspace per product, and each workspace has the target platform of that product. At the moment, Eclipse supports one active target platform per workspace, not per project.
Or you can have Eclipse and the 3rd-party plugins you need as your target platform, and simply work on all 3 products and the common plugin in one workspace. If you have fewer than about 20 source plugins in total, this would probably be fine. For more than that, Eclipse supports Working Sets, which let you hide the plugins you are not working on at the moment.
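If you go the raw-Eclipse route, one way to keep each product's setup reproducible is a target definition (.target) file per product, switching the active one in the workspace as needed (only one target platform can be active at a time). A minimal sketch, with the repository URL and unit id as placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<?pde version="3.8"?>
<target name="Product X Platform">
   <locations>
      <location includeAllPlatforms="false" includeMode="planner" type="InstallableUnit">
         <repository location="http://example.com/productX/repository"/>
         <unit id="com.example.productx.sdk.feature.group" version="0.0.0"/>
      </location>
   </locations>
</target>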

Eclipse PDE - Plug-in, Feature, and Product Versioning

I am quite confused about the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace.
I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the Major part of the version identifier. This plug-in serves as a dependency of a given feature, where the feature is later included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself.
However, how would this Major version number change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)?
For example, assume we have the typical "Hello World" setup as follows:
Plug-in: com.example.helloworld, version 1.0.0
Feature: com.example.helloworld.feature, version 1.0.0
Product: com.example.helloworld.product, version 1.0.0
If I were to make an API change in the plug-in, this would require its version to be updated to 2.0.0. What would then be the version of the feature, 1.1.0? The same question applies at the product level as well (e.g., if the feature is 1.1.0 or 2.0.0, what is the product version number)?
I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches and has not touched on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and / or best practices for making such versioning updates and how to best reflect them throughout other dependent projects in the workspace.
If you would like to apply the same versioning scheme to the feature and product, then you would set the feature and product to 2.0.0 when one of the plugins goes to 2.0.0. That would communicate to whoever is consuming your feature or product that there is a breaking API change inside it somewhere.
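Under that convention, the question's example would simply move in lock-step (ids and versions below just mirror the example above):

<feature id="com.example.helloworld.feature" version="2.0.0">
   <plugin id="com.example.helloworld" version="2.0.0" unpack="false"/>
</feature>

with the version in the .product file raised to 2.0.0 in the same way.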
On the other hand, there is no requirement to apply the same versioning convention. You can version your bundles following that convention to properly communicate your API changes and then turn around and use more marketing-sensible versions for the product/feature. Keep in mind that users will see the product/feature version more than they will see individual bundle versions.
I've seen it done both ways effectively. There isn't really a right or wrong way on this.

Eclipse UI Plugins

For a custom Eclipse product, we are asking ourselves:
Should we create one single UI plugin for all the user interface matters, or should we break these matters up into several plugins (for example, ui.views, ui.preferences, ui.properties, etc.)?
It seems Eclipse's "official" products such as CDT, JDT, etc. only have one UI plugin, while some third-party plugins I am using have several UI plugins (Papyrus, for example).
I know this is rather a subjective question but I would be interested to learn about the way you manage your UI stuff.
Manu
I'd create separate bundles (or plugins) for each independently usable component. So if I have e.g. a view that can be used without some other things, I'd put it in a bundle of its own. I find that this makes it easier to configure the feature, replace certain parts, provide custom combinations of components, handle dependencies, and such.
If your plugin does one thing (e.g. adding a menu item to order pizza) it makes little sense to split it up; you would just be introducing complexity. The modularity of your product is the key factor in deciding how to split the functions into plugins. Consider the functionality you're trying to deliver and whether there are any optional components or pieces that may be useful in isolation.
Take m2eclipse as an example, it has multiple UI plugins, but that is because they are functionally separate. The XML editor is certainly a useful UI addition, but users of the core function (dependency management) don't necessarily need it so it makes sense to bundle it separately and make it optional.
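If you do split a UI piece out like that, the containing feature can pull it in as an optional include, so installations without it still resolve. A small sketch (feature ids invented for illustration):

<feature id="com.example.product.feature" version="1.0.0">
   <includes id="com.example.ui.core.feature" version="0.0.0"/>
   <includes id="com.example.ui.xmleditor.feature" version="0.0.0" optional="true"/>
</feature>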
Ignoring anything specific to Eclipse, I would say from a product support perspective it makes much more sense to have a single plug-in. This has the following benefits:
Every customer has the same environment, so if someone contacts you with a problem you know what they have.
You have to test a single configuration. If you split your code into 3 plug-ins, that's 7 different configurations you have to test (every non-empty combination of the 3, i.e. 2^3 - 1 = 7).
In future you won't have to worry about which plug-in new functionality should be added to.

Salesforce - How to Deploy between Environments (Sandboxes, Live etc)

We're looking into setting up a proper deployment process.
From what I've read, there seem to be 4 methods of doing this.
Copy & Paste -- We don't want to do this
Using the "Package" mechanism built into the Salesforce Web Interface
Eclipse Force IDE "Deploy to Server" option
Ant Script (haven't tried this one yet)
Does anyone have advice on the limitations of the various methods?
Can you include everything in a Web Interface package?
We're looking to deploy the following items:
Apex Classes
Apex Triggers
WorkFlows
Email Templates
MailMerge Templates -- Can't seem to find these in Eclipse
Custom Fields
Page Layout
RecordTypes (can't seem to find these in Website or Eclipse)
PickList items?
SControls
I recommend the Force.com Migration Tool.
For reference:
Force.com Migration Tool Documentation
Migration Tool Guide
The Migration Tool allows you to use Ant targets to move your metadata between salesforce.com organizations.
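For illustration, a trimmed build.xml in the style of the sample that ships with the Migration Tool (credentials and paths are placeholders, the sf tasks come from ant-salesforce.jar, and attribute details can vary between tool versions):

<project name="sf-deploy" default="deployToProduction" xmlns:sf="antlib:com.salesforce">

   <!-- Pull the metadata listed in package.xml out of the source org (e.g. a sandbox) -->
   <target name="retrieveFromSandbox">
      <sf:retrieve username="${sf.username}" password="${sf.password}"
                   serverurl="https://test.salesforce.com"
                   retrieveTarget="src" unpackaged="src/package.xml"/>
   </target>

   <!-- Push the retrieved metadata into the target org -->
   <target name="deployToProduction">
      <sf:deploy username="${sf.username}" password="${sf.password}"
                 serverurl="https://login.salesforce.com"
                 deployRoot="src"/>
   </target>
</project>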
I can speak to this from recent painful experience.
Packaging: this is a very old method that predates the metadata API on which both Ant and Eclipse rely. In our experience, packaging's only benefit is in defining your project. If you're using Eclipse (which we do, and I recommend), you can define your project as being based on a particular package. As long as you remember to add new components to your package, your project hangs together.
One thing that baffled us for a while, by the way, is the many uses of the word "package". We've noted the following:
Installed packages: these come in managed and unmanaged flavors and are really, in the words of a recent post on the SFDC boards, for ISVs to deploy their stuff into various unknown orgs "out there". Both managed and unmanaged packages have limitations that make them unsuitable and unneeded for deployment from development to production within an org, or in any case where you're doing custom development and don't intend to distribute code to a large anonymous base.
Non-installed packages: this is what you see when you click "Packages" in the web UI. These, that we sometimes call "development packages", seem to be just a convenient way to keep a project definition together.
Anyway, the conclusion I'm coming toward is that our team (custom development, not an ISV) does not need packages in any form.
The other forms of deployment, both Eclipse and Ant, rely on the Metadata API. In theory they are capable of exactly the same things. In reality they appear to be complementary. The Force.com migration tool, built into the Force.com IDE for Eclipse, makes deployment as easy as it can be (which is not very) and gives you a nice look at what it intends to deploy. On the other hand, we've seen Ant do some things the IDE could not. So it's probably worthwhile to learn both.
The process we're leaning toward is to keep all our projects in SVN, and use the SVN structure as the project definition (Eclipse will work with this and respect it). And we use Eclipse and sometimes Ant for migration. No apparent need for packages anywhere.
By the way, one more thing to be aware of -- not all components are migratable. Some things must be reconfigured by hand in the target environment. One example would be time-based workflows. Queues and Groups also need to be hand-created, I think. Likewise, the metadata API can't directly process field deletions, so if you deleted a field in your source, you need to delete it by hand in the target. There are other cases as well.
Hope that's useful --
-- Steve Lane
As of Spring '09, mail merge templates are not supported in metadata but record types are. You will find record types as an XML element in the file for the object they belong to. Everything else on your list is supported with a small exception. Picklist values for standard fields cannot be edited in Spring '09. Stay tuned for news on Summer '09 feature announcements.
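To make that concrete, a package.xml roughly covering the original list might look like this (the email template folder/name is a placeholder, and record types and custom fields travel inside the corresponding CustomObject files, as noted above):

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>*</members>
        <name>ApexTrigger</name>
    </types>
    <types>
        <members>*</members>
        <name>Workflow</name>
    </types>
    <types>
        <members>*</members>
        <name>Layout</name>
    </types>
    <types>
        <members>*</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>MyFolder/MyTemplate</members>
        <name>EmailTemplate</name>
    </types>
    <version>16.0</version>
</Package>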
Update: Standard picklists on standard objects are now metadata exposed (as of API v16):
http://www.salesforce.com/us/developer/docs/api_meta/Content/meta_picklist.htm
Otherwise, Steve Lane's response is pretty accurate. The advantage of using unmanaged packages (what Steve calls non-installed packages) is that when you add metadata to a package, the metadata it depends on will automatically be added. So it's easier to grab a full set of metadata containing all its dependencies. If you are repeatedly moving metadata from one org (sandbox) to another (production), Steve's approach is probably the best way to go and certainly the most common today. I frequently use unmanaged "developer" packages to move something I've developed in one org to another unrelated org. For my purpose, I like to have the package defined in the org as opposed to an Eclipse project / SVN. But that probably doesn't make sense if you are doing team development across many dev/sandbox orgs and are using SVN already.
Jesper
Another option is to use Change Sets if you want to move metadata from a sandbox to production.
There are currently some limitations on how change sets can be used:
Sending a change set between two organizations requires a deployment connection. Currently, change sets can only be sent between organizations that are affiliated with a production organization, for example, a production organization and a sandbox, or two sandboxes created from the same organization.
From the docs:
A package must be managed for it to be published publicly on AppExchange, and for it to support upgrades. An organization can create a single managed package that can be downloaded and installed by many different organizations. They differ from unmanaged packages in that some components are locked, allowing the managed package to be upgraded later. Unmanaged packages do not include locked components and cannot be upgraded. In addition, managed packages obfuscate certain components (like Apex) on subscribing organizations, so as to protect the intellectual property of the developer.
The advantage of a managed package would be that it allows you to easily version and distribute things across multiple SFDC organizations.