Patch a plugin with a single class? - eclipse

This is my situation: We have a third-party feature in our Eclipse environment. The feature contains several plugins; one of those plugins contains a class with a bug.
We have been able to find a solution to the bug, so we have a fixed, working version of the buggy class.
Unfortunately this plugin is covered by a 55-page EULA (by IBM), so I think it's pretty safe to assume that decompiling the jar, exchanging class files, recompiling and distributing it is legally out of the question. I'm no legal expert, but I'd guess we cannot tamper with the jar files in any way.
So this means I have a single class file that I want to be loaded instead of a class in a plugin, is this at all possible?
This page suggests using fragments, but this requires modifying the manifest in the plugin.
This question describes the same problem, but in that case the asker has access to the source code and is able to build a plugin.
This blog post shows how to use feature patches, but they require that I'm able to build my own plugin, which I can't, since I have just the one class file.

I would not try using a fragment. In my experience, the cleanest thing to do is to use a feature patch. I have successfully used feature patches to do exactly what you are looking to do. It's not simple, but it is robust. You need to do the following:
Create a plugin that encapsulates your single class file.
Create a feature patch that includes your new plugin and that patches the feature you are targeting (a feature.xml sketch follows this list).
Export your feature patch and create the p2 metadata (to create an update site).
Install it into your Eclipse using the install manager.
Rejoice!
(optional) Feature patches by default only target a single version of the target feature. So, if the target feature bumps up its version number, the feature patch will silently no longer be applied. However, it is possible to relax the version constraints on the feature patch. This process is described in detail here: http://aniefer.blogspot.com/2009/06/patching-features-part-2.html
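As a rough sketch of the second step, the patch feature's feature.xml would look something like the following (all ids and version numbers here are hypothetical; the patch="true" attribute on the import is what marks the feature as a patch):

  <?xml version="1.0" encoding="UTF-8"?>
  <feature id="com.example.mypatch" label="Fix for buggy class" version="1.0.0">
     <requires>
        <!-- names the exact feature (and, by default, version) being patched -->
        <import feature="com.ibm.thirdparty.feature" version="1.2.3" patch="true"/>
     </requires>
     <!-- the plugin from the first step, containing the fixed class -->
     <plugin id="com.example.mypatch.plugin" version="1.0.0" unpack="false"/>
  </feature>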
More information:
http://aniefer.blogspot.com/2009/06/patching-features-with-p2.html
http://aniefer.blogspot.com/2009/06/patching-features-part-2.html
The benefit of using a feature patch over a fragment is that anyone can install the patch and have it work, whereas with a fragment end users must muck with manifests themselves.

So this means I have a single class file that I want to be loaded instead of a class in a plugin, is this at all possible?
Your first sentence is the answer. You can use a fragment, but that requires modifying the manifest in the plugin. Otherwise, Eclipse would have no idea which class to load.
My suggestion is that you write to IBM with all of this information, including the patch. IBM should be able to release a maintenance fix which would solve your problem.
In the meantime, you could pursue the fragment option, which would require you to unpack the jar, add your fragment, modify the manifest, and repackage the jar. Whether or not this is legal is beyond my ability to determine.
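For what it's worth, here is a sketch of what the fragment route involves (all bundle ids and versions are made up). The fragment's own MANIFEST.MF declares its host:

  Manifest-Version: 1.0
  Bundle-ManifestVersion: 2
  Bundle-SymbolicName: com.example.classfix
  Bundle-Version: 1.0.0
  Fragment-Host: com.ibm.buggy.plugin;bundle-version="1.2.3"

Because a fragment's classpath is searched after the host's, the host's manifest must also be edited so that the patched class wins, e.g. by listing a jar shipped in the fragment ahead of the host's own classes (this is the manifest modification referred to above):

  Bundle-ClassPath: patch.jar, .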

Related

How to find a class in a list of Nuget packages

My team is using more and more NuGet packages as a way to break the system into smaller pieces and share things between parts. We have adopted a sort of SRP principle for packaging, creating small and hopefully cohesive packages that do just one thing (logging, auditing, security stuff, etc).
Ideally they should be so cohesive and self-contained that it would be straightforward to know which package contains what you need. However, we are not there yet, and sometimes it is difficult to know which package you should add to access some functionality.
My question is: is there any way to publish and navigate package content information? Like, for instance, in MSDN you can see what assembly contains a class. Would it be possible to know something like that, at the package level?
Thanks.
It's a very localised version, but there is a package searcher for the ASP.NET 5 packages hosted on NuGet. It might be possible to host a version that looks at a wider scope at some point.
https://packagesearch.azurewebsites.net/
The closest functionality I can think of is implemented in ReSharper. However, it can only search the packages on nuget.org (closed issue on GitHub). Since packages don't expose type info, JetBrains built a custom index, and that's the only data source it can query.

Architecture for plugins to be loaded in runtime

Considering that I am developing an end-user software program (as an uberjar) I am wondering what my options are to make it possible for the user to download a plugin and load that during runtime.
The plugin(s) should come compiled and without source code, so something like load is not an option.
What existing libraries (or mechanisms of Java itself?) exist to build this on?
EDIT: If you are not sure about runtime loading, I would also be satisfied with a way that costs a restart of the main program. However, what is important is that the source code won't be included in any JAR file (neither the main application nor the plugin jars; see :omit-source in the Leiningen documentation).
To add a jar during runtime, use pomegranate, which lets you add a .jar file to the classpath (a sketch follows this list). Plugins of your software should be regular Clojure libs that follow certain conventions that you need to establish:
Make them provide (e.g. in an edn file) a symbol naming an object that implements a constructor/destructor mechanism, such as the Lifecycle protocol in Stuart Sierra's component library. At runtime, require and resolve that symbol, start the resulting object and hand it over to the rest of your program's plugin coordination facilities.
Provide a public API in your program that allows the plugins to interact with it in ways that you coordinate asynchronously, e.g. with clojure.core.async (don't let one plugin block the entire program).
Make sure that the plugins have a coordinated way to expose their functionality to each other, but only if they choose to, to enable a high degree of modularity among your plugins. Make sure that your plugin loader is capable of detecting dependencies among plugins and of loading and unloading them in the right order.
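A minimal sketch of the loading side, under the conventions above (the jar path, the plugin.edn descriptor and the my.plugin.core namespace are all made up; note that pomegranate needs a modifiable classloader in the hierarchy):

  (ns myapp.plugins
    (:require [cemerick.pomegranate :refer [add-classpath]]
              [clojure.edn :as edn]
              [clojure.java.io :as io]))

  ;; Put the downloaded plugin jar on the classpath at runtime.
  (add-classpath "plugins/my-plugin-1.0.0.jar")

  ;; Read the descriptor the plugin ships on its classpath,
  ;; assumed to look like {:entry my.plugin.core/lifecycle}.
  (def descriptor
    (edn/read-string (slurp (io/resource "plugin.edn"))))

  ;; Require the entry namespace, resolve the declared symbol, and hand
  ;; the resulting object over to your plugin coordination facilities.
  (let [entry (:entry descriptor)]
    (require (symbol (namespace entry)))
    @(resolve entry))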
I've not tried it myself, but you should in theory be able to get OSGi to work with Clojure.
There's a Clojure / OSGi integration library here:
https://github.com/aav/clojure.osgi
If I were to attempt to roll my own solution, I would try using tools.namespace to load and unload plugins. I'm not entirely sure it will work, but it's certainly in the right direction. I think one key piece is that the plugin jars will have to be "installed" in a location that's already on the classpath.
Again, this is only the start of one possible solution. I haven't tried doing this.

Arguments for and against including 3rd-party libraries in version control? [closed]

I've met quite a few people lately who say that 3rd-party libraries don't belong in version control. These people haven't been able to explain why to me, so I hoped you guys could come to my rescue :)
Personally, I think that when I check out the trunk of a project, it should just work - no need to go to other sites to find libraries. More often than not, different developers would otherwise end up with different versions of the same 3rd-party lib, and sometimes with incompatibility problems.
Is it so bad to have a libs folder up there, with "guaranteed-to-work" libraries you could reference?
In SVN, there is a pattern used to store third-party libraries called vendor branches. This same idea would work for any other SVN-like version control system. The basic idea is that you include the third-party source in its own branch and then copy that branch into your main tree so that you can easily apply new versions over your local customizations. It also cleanly keeps things separate. IMHO, it's wrong to directly include the third-party stuff in your tree, but a vendor branch strikes a nice balance.
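A sketch of that workflow with SVN commands (the repository URL and library name are hypothetical):

  # import the vendor drop and tag the version
  svn import libfoo-1.0/ http://svn.example.com/repo/vendor/libfoo/current -m "libfoo 1.0 vendor drop"
  svn copy http://svn.example.com/repo/vendor/libfoo/current http://svn.example.com/repo/vendor/libfoo/1.0 -m "tag libfoo 1.0"
  # copy the tagged drop into the main tree, where local customizations live
  svn copy http://svn.example.com/repo/vendor/libfoo/1.0 http://svn.example.com/repo/trunk/libs/libfoo -m "bring libfoo 1.0 into trunk"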
Another reason to check in libraries to your source control which I haven't seen mentioned here is that it gives you the ability to rebuild your application from a specific snapshot or version. This allows you to recreate the exact version that someone may report a bug on. If you can't rebuild the exact version you risk not being able to reproduce/debug problems.
Yes you should (when feasible).
You should be able to take a fresh machine and build your project with as few steps as possible. For me, it's:
Install IDE (e.g. Visual Studio)
Install VCS (e.g. SVN)
Checkout
Build
Anything more has to have very good justification.
Here's an example: I have a project that uses Yahoo's YUI Compressor to minify JS and CSS. The YUI .jar files go into source control, in a tools directory alongside the project. The Java runtime, however, does not; that has become a prerequisite for the project, much like the IDE. Considering how popular the JRE is, that seems like a reasonable requirement.
No - I don't think you should put third-party libraries into source control. The clue is in the name 'source control'.
Although source control can be used for distribution and deployment, that is not its prime function. And the argument that you should just be able to check out your project and have it work is not realistic. There are always dependencies. In a web project, they might be Apache, MySQL, or the programming runtime itself, say Python 2.6. You wouldn't pile all of those into your code repository.
Extra code libraries are just the same. Rather than including them in source control for ease of deployment, create a deployment/distribution mechanism that allows all dependencies to be easily obtained and installed. This makes the steps for checking out and running your software something like:
Install VCS
Sync code
Run setup script (which downloads and installs the correct version of all dependencies)
To give a specific example (and I realise this is quite web centric), a Python web application might contain a requirements.txt file which reads:
simplejson==1.2
django==1.0
otherlibrary==0.9
Run that through pip and the job is done. Then when you want to upgrade to use Django 1.1 you simply change the version number in your requirements file and re-run the setup.
The source of 3rd-party software doesn't belong in source control (except maybe as a static reference), but the compiled binaries do.
If your build process will compile an assembly/dll/jar/module, then only keep the 3rd party source code in source control.
If you won't compile it, then put the binary assembly/dll/jar/module into source control.
This could depend on the language and/or environment you have, but for projects I work on I place no libraries (jar files) in source control. It helps to be using a tool such as Maven which fetches the necessary libraries for you. (Each project maintains a list of required jars, Maven automatically fetches them from a common repository - http://repo1.maven.org/maven2/)
That being said, if you're not using Maven or some other means of managing and automatically fetching the necessary libraries, by all means check them into your version control system. When in doubt, be practical about it.
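For illustration, with Maven a required library is just a declaration in the project's pom.xml (the coordinates here are hypothetical), and the jar itself never enters version control:

  <dependency>
    <groupId>com.example</groupId>
    <artifactId>somelib</artifactId>
    <version>1.0</version>
  </dependency>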
The way I've tended to handle this in the past is to take pre-compiled versions of 3rd-party libraries and check those into version control, along with the header files. Instead of checking the source code itself into version control, we archive it off to a defined location (a server hard drive).
This gives you the best of both worlds: a one-step fetch process that gets everything you need, without bogging down your version control system with a bunch of unnecessary files. Also, by fetching pre-compiled binaries, you can skip that phase of compilation, which makes your builds faster.
You should definitely put 3rd-party libraries under source control. Also, you should try to avoid relying on stuff installed on an individual developer's machine. Here's why:
All developers will then share the same version of the component. This is very important.
Your build environment will become much more portable. Just install source control client on a fresh machine, download your repository, build and that's it (in theory, at least :) ).
Sometimes it is difficult to obtain an old version of some library. Keeping them under your source control makes sure you won't have such problems.
However, you don't need to add 3rd party source code in your repository if you don't plan to change the code. I tend just to add binaries, but I make sure only these libraries are referenced in our code (and not the ones from Windows GAC, for example).
We do, because we want to test an updated version of the vendor branch before we integrate it with our code. We commit changes to this branch when testing new versions. We have the philosophy that everything you need to run the application should be in SVN, so that:
You can get new developers up and running
Everyone uses the same versions of various libraries
We can know exactly what code was current at a given point in time, including third party libraries.
No, it isn't a war crime to have third-party code in your repository, but I find that it upsets my sense of aesthetics. Many people here seem to be of the opinion that it's good to have your whole development team on the same version of these dependencies; I say it is a liability. You end up dependent on a specific version of that dependency, and it becomes a lot harder to use a different version later. I prefer a heterogeneous development environment - it forces you to decouple your code from specific versions of dependencies.
IMHO the right place to keep the dependencies is on your tape backups, and in your escrow deposit, if you have one. If your specific project requires it (and projects are not all the same in this respect), then also keep a document under your version control system that links to these specific versions.
I like to check 3rd-party binaries into a "lib" directory that contains any external dependencies. After all, you want to keep track of the specific versions of those libraries, right?
When I compile the binaries myself, I often check in a zipped-up copy of the source code alongside the binaries. That makes it clear that the code is not there for compiling, manipulating, etc. I almost never need to go back and reference the zipped code, but a couple of times it has been helpful.
If I can get away with it, I keep them out of my version control and out of my file system. The best case of this is jQuery, where I'll use Google's AJAX Libraries API and load it from there:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js" type="text/javascript"></script>
My next choice would be to use something like Git submodules. And if neither of those suffices, they'll end up in version control, but at that point, it's only as up-to-date as you are...
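A sketch of the submodule route (the repository URL and path are examples):

  # track the library as a submodule pinned to a specific commit
  git submodule add https://github.com/example/somelib.git lib/somelib
  # after a fresh clone, fetch the pinned versions
  git submodule update --init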

Salesforce - How to Deploy between Environments (Sandboxes, Live etc)

We're looking into setting up a proper deployment process.
From what I've read, there seem to be four methods of doing this:
Copy & Paste -- We don't want to do this
Using the "Package" mechanism built into the Salesforce Web Interface
Eclipse Force IDE "Deploy to Server" option
Ant Script (haven't tried this one yet)
Does anyone have advice on the limitations of the various methods?
Can you include everything in a Web Interface package?
We're looking to deploy the following items:
Apex Classes
Apex Triggers
WorkFlows
Email Templates
MailMerge Templates -- Can't seem to find these in Eclipse
Custom Fields
Page Layout
RecordTypes (can't seem to find these in Website or Eclipse)
PickList items?
SControls
I recommend the Force.com Migration Tool.
For reference:
Force.com Migration Tool Documentation
Migration Tool Guide
The Migration Tool allows you to use Ant targets to move your metadata between salesforce.com organizations.
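A minimal build.xml sketch for the Migration Tool (the property names follow the tool's sample files; ant-salesforce.jar must be on Ant's classpath):

  <project name="sf-deploy" default="deployCode" xmlns:sf="antlib:com.salesforce">
    <property file="build.properties"/>
    <target name="deployCode">
      <!-- deployRoot is a directory containing package.xml plus the metadata to push -->
      <sf:deploy username="${sf.username}" password="${sf.password}"
                 serverurl="${sf.serverurl}" deployRoot="src"/>
    </target>
  </project>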
I can speak to this from recent painful experience.
Packaging: this is a very old method that predates the metadata API, on which both Ant and Eclipse rely. In our experience, packaging's only benefit is in defining your project. If you're using Eclipse (which we do, and I recommend), you can define your project as being based on a particular package. As long as you remember to add new components to your package, your project hangs together.
One thing that baffled us for a while, by the way, is the many uses of "package". We've noted the following:
Installed packages: these come in managed and unmanaged flavors and are really, in the words of a recent post on the SFDC boards, for ISVs to deploy their stuff into various unknown orgs "out there". Both managed and unmanaged packages have limitations that make them unsuitable and unneeded for deployment from development to production within an org, or in any case where you're doing custom development and don't intend to distribute code to a large anonymous base.
Non-installed packages: this is what you see when you click "Packages" in the web UI. These, that we sometimes call "development packages", seem to be just a convenient way to keep a project definition together.
Anyway, the conclusion I'm coming to is that our team (custom development, not an ISV) does not need packages in any form.
The other forms of deployment, both Eclipse and Ant, rely on the Metadata API. In theory they are capable of exactly the same things. In reality they appear to be complementary. The Force.com migration tool, built into the Force.com IDE for Eclipse, makes deployment as easy as it can be (which is not very) and gives you a nice look at what it intends to deploy. On the other hand, we've seen Ant do some things the IDE could not. So it's probably worthwhile to learn both.
The process we're leaning toward is to keep all our projects in SVN, and use the SVN structure as the project definition (Eclipse will work with this and respect it). And we use Eclipse and sometimes Ant for migration. No apparent need for packages anywhere.
By the way, one more thing to be aware of: not all components are migratable. Some things must be reconfigured by hand in the target environment. One example would be time-based workflows. Queues and Groups also need to be hand-created, I think. Likewise, the metadata API can't directly process field deletions, so if you deleted a field in your source, you need to delete it by hand in the target. There are other cases as well.
Hope that's useful --
-- Steve Lane
As of Spring '09, mail merge templates are not supported in the metadata API, but record types are. You will find record types as an XML element in the file for the object they belong to. Everything else on your list is supported, with a small exception: picklist values for standard fields cannot be edited in Spring '09. Stay tuned for news in the Summer '09 feature announcements.
Update: Standard picklists on standard objects are now metadata exposed (as of API v16):
http://www.salesforce.com/us/developer/docs/api_meta/Content/meta_picklist.htm
Otherwise, Steve Lane's response is pretty accurate. The advantage of using unmanaged packages (what Steve calls non-installed packages) is that when you add metadata to a package, the metadata it depends on will automatically be added. So it's easier to grab a full set of metadata containing all its dependencies. If you are repeatedly moving metadata from one org (sandbox) to another (production), Steve's approach is probably the best way to go and certainly the most common today. I frequently use unmanaged "developer" packages to move something I've developed in one org to another unrelated org. For my purpose, I like to have the package defined in the org as opposed to an Eclipse project / SVN. But that probably doesn't make sense if you are doing team development across many dev/sandbox orgs and are using SVN already.
Jesper
Another option is to use Change Sets if you want to move metadata from a sandbox to production.
There are currently some limitations on how change sets can be used:
Sending a change set between two organizations requires a deployment connection. Currently, change sets can only be sent between organizations that are affiliated with a production organization, for example, a production organization and a sandbox, or two sandboxes created from the same organization.
From the docs:
A package must be managed for it to be published publicly on AppExchange, and for it to support upgrades. An organization can create a single managed package that can be downloaded and installed by many different organizations. They differ from unmanaged packages in that some components are locked, allowing the managed package to be upgraded later. Unmanaged packages do not include locked components and cannot be upgraded. In addition, managed packages obfuscate certain components (like Apex) on subscribing organizations, so as to protect the intellectual property of the developer.
The advantage of a managed package is that it allows you to easily version and distribute things across multiple SFDC organizations.

Can Eclipse 3.5 discover all bundles in the plugins dir?

Simple use case: assemble an Eclipse product using simple scripts, just dumping bundles into the plugins dir.
This used to work with 3.3; with 3.5 it's broken: my application doesn't start, as the app plugin is not found.
Question: what's the easiest way to fix that? This seems to be the only pain in the whole upgrade process for me.
Attempts:
I guess this is a no-no for P2: it maintains the bundles.info file instead, which is probably very smart... a bit too smart for me.
Some ideas I had:
can I just skip P2 altogether and get back to the plain old, simple (dirty) discovery mechanism?
can I set up the plugins dir as a 'watched directory'?
looks like I need to use the p2.reconciler for that... oh wait, it's already deprecated :-( (bug 251561; thanks VonC for the pointer)
can this old setting in the config.ini still work? (it is now replaced by the 'simpleconfigurator') osgi.bundles=org.eclipse.equinox.common@2:start, org.eclipse.update.configurator@3:start, org.eclipse.core.runtime@start
should I call the (p2) director?
"please pick my plugins up" :)
I'd avoid the dropins folder for this - that's more for the end users.
I'd avoid messing with the bundles.info if possible.
I don't care about all those smart features in my product yet - actually, the users don't use the built-in update mechanism at all.
So I'd like to KISS (i.e. just get it to start up), and add more advanced support when needed.
I've asked this on Eclipse forums, but no answer yet, so would really be grateful for some enlightenment.
Also, feel free to correct me on the assumptions - I've just read the P2 docs, which seem confusing at times.
Thanks!
Answer: actually, option 3 above seems to work after all - thanks Francis for confirming this! (It didn't work originally, but that was probably caused by some missing dependencies.)
My only issue with that now is that some Eclipse bundles actually require simpleconfigurator, so I wonder if swapping it out will cause problems down the line.
You can alter your configuration/config.ini file to not use the org.eclipse.equinox.simpleconfigurator (which does the p2-based configuration) and instead use the org.eclipse.update.configurator which is the old-school way of just configuring whatever is in the plugins directory. This should give you what you want.
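Concretely, that means replacing the simpleconfigurator entry in configuration/config.ini with the old-style bundle list; the before/after looks roughly like this (the exact simpleconfigurator jar version is only an example):

  # before: p2-based configuration
  osgi.bundles=reference\:file\:org.eclipse.equinox.simpleconfigurator_1.0.101.jar@1:start
  # after: old-school discovery of whatever is in the plugins dir
  osgi.bundles=org.eclipse.equinox.common@2:start, org.eclipse.update.configurator@3:start, org.eclipse.core.runtime@start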
Even if it does not fully answer what you are after, you can specify in an eclipse.ini (like the one I describe here):
-Dorg.eclipse.equinox.p2.reconciler.dropins.directory=C:/jv/eclipse/mydropins
That tells p2 to monitor a directory of your choosing and detect plugins in it.
Another source of idea could be this article: Composing and updating custom Eclipse distros
It's not hard to create a feature-based product that includes these things, and to do a product build that produces a complete, self-contained installation.
Note: the concept of reconciliation is detailed in the eclipse Wiki.
For certain installations of Eclipse, there will exist the notion of a shared installation -- this may be in the case of a Linux system where a base set of software is installed via packages (perhaps RPMs), or may be in a Maya deployment where shared profiles are defined in a central server.
In both cases, it is necessary to perform reconciliation between the shared profile and the user's current instantiation of the profile including any modifications they may have made.
Part of this mechanism is the Dropins Reconciler setting. Although, as bug 251561 illustrates, it is not advised to put too many plugins in there.
Maybe this will help you (shot in the dark)? I found this when upgrading my Eclipse installation to Galileo and trying to keep my Flex Plugin install.