Restricting Target Platform's API usage when developing Eclipse plugins

I'm developing an Eclipse plugin and I've run into this problem several times already.
I always keep my Target Platform updated to the latest (stable) Eclipse release so that I test my code against all the recent updates, fixes, etc.
However, this may (and already has) result in accidentally breaking my plugin's backward compatibility, e.g. when I unwittingly use new API that did not exist in the Eclipse version I aim to support.
Or, a sneakier example: in 4.6 Eclipse moved to Java 8, and some interface methods got default implementations. Now when I implement these interfaces, my IDE doesn't automatically generate empty implementations for those methods and no error is reported. If I install and run this code against an earlier Eclipse version, those methods will throw AbstractMethodError, since no implementation has been provided.
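A minimal sketch of that second pitfall, with a made-up Listener interface standing in for the Eclipse API:

    // The "new" version of the interface I compile against:
    interface Listener {
        void started();
        default void stopped() { }  // default added in the newer release
    }

    // My plugin class compiles cleanly; no override of stopped() is required.
    class MyListener implements Listener {
        @Override
        public void started() { }
    }

    // Against an OLD version of the interface, where stopped() is abstract,
    // any call to it fails at runtime:
    //     new MyListener().stopped();  // -> AbstractMethodError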
So my question is: is there a tool to further restrict the API my Target Platform provides to some earlier Eclipse API version?
Is API Baseline an appropriate tool for this? I couldn't get it to work that way. (It allowed even non-baseline method calls, not to mention the more complex default-methods example.)

You can use multiple target platforms, switching between them doesn't take long. For testing Stack Overflow questions I have one Eclipse install with 10 target platforms.
So have a target platform for the oldest release you want to support, as well as your current release target platform, and check that the code runs against it.
It is particularly important to test with the actual Target Platform if you want to support Eclipse 3 releases, as there were large changes going from Eclipse 3 to 4.

Related

Using ANT to source control code in the cloud (NetSuite)

For our ERP platform (NetSuite), customization code rests in the cloud. We (different entities) can make changes to it directly, but there is no source control available to us in the cloud.
It is possible to fetch the code files through a SOAP API.
I was wondering if it is possible to get the files through the API using Apache ANT and push them into TFS/SVN?
I am not familiar with Apache ANT, so I do not know whether it is capable of fetching any info through an API.
(you can also suggest any better approach to source control the code in cloud)
Ant has several third-party task plugins for version control tasks. Plus, you can always use the <exec/> task to build an equivalent command-line checkout. However, I do not recommend using Ant to fetch versions from your version control system. This ends up being a chicken-and-egg issue.
Your build script is in version control. You need to fetch it in order to run Ant against it. If you're fetching your build script, why not the rest of your project?
Once you have checked out your project into a working directory and want to do an update, why not let Ant do the update? Because your build script is also version controlled. Doing an update and a build at the same time could have you running the wrong version of your build script against your build.
Maybe you're going to check in the files that were modified by the build system. Not a good idea. You should rarely, if ever, check in files you built. If you need them, rebuild them. Built files are usually binary in nature, and can vary greatly from one version to another. In most cases, your version control system will be checking in completely new versions of the built object instead of using diff format. That takes up a lot of room in your version control system.
Even worse, you can't diff the built objects, so you can't really verify their content or trace their history. And built objects tend to age very quickly. Something built last month is already obsolete. Within a year, the vast majority of the information in your version control system will be nothing but obsolete binaries, and very little of what is stored will be useful code.
Besides, your version control system has nothing to do with building your files. Imagine between Release 2.1 and Release 2.2, you change version control systems from Subversion to Git. Now, a bug in Release 2.1 needs to be fixed, and you need to create Release 2.1.1. Your checkout code in your build scripts will no longer work.
If you're using the NetSuite IDE, you're using Eclipse, and Eclipse is great at handling version control. Eclipse can handle both SVN and TFS (although I don't know why anyone would use TFS). Eclipse tracks file changes quite nicely. In fact, Eclipse gets confused when you change files behind its back (like when you do an update outside of Eclipse).
Let Eclipse handle your version control issues. It presents a common interface to almost all version control systems. This way, your build system can handle the builds.
I'm not sure what other requirements you might have, but if you use the NetSuite IDE (Eclipse + bundled plugin), you can use it to pull and push files to NetSuite. And then you can use any source control system you like (we use SVN, for instance).

NetBeans: "Package as" uses IDE default platform instead of custom platform

So I've been messing around with the packaging feature that NetBeans offers, following this tutorial: http://platform.netbeans.org/tutorials/nbm-nbi.html. I didn't like how I had to modify the platform that my IDE was running on in order to customize the installer itself, so I decided to create a copy and just change the platform the application suite was using (Properties->Libraries).
This seemed to work fine, and even packaged that platform as part of the installer. However, when doing the packaging itself, I noticed that it was calling the IDE's platform build script to create the installer rather than the one I had customized. This defeats the purpose, at least in my case, of having the separate platform.
Within the platform manager, under the harness tab, I made sure that the platform's harness was being used rather than the IDE's, but it didn't seem to make a difference.
I verified the behaviour by throwing an echo into both the default IDE platform and my customized platform to see which was being called. I also noticed that the Ant call that gets made at the start of packaging makes an explicit reference to the IDE platform, as well.
I've tried this under 7.2 (currently using 7.3), as 7.3 has had some fairly nasty bugs and I thought perhaps the problem was just recently introduced.
At this point I'm thinking it's a bug, but I was hoping that perhaps someone else had come across this and found some sort of solution or could shed some light on why it's doing what it's doing.
Thanks!
This is slated to be fixed for 7.4, in case anyone comes across it in the meantime.
Here's a link to the bug ticket: https://netbeans.org/bugzilla/show_bug.cgi?id=229478

Oracle or 3rd party service for determining 'latest Java version'

Is there a service available that responds with the latest version of Java that's available?
I'm writing system check for an application that uses applets. As part of the check I'd like to inform users if a new version of Java is available for download. Is there any online service that simply responds with the version number for the latest Java version?
How about a different strategy: 'leave it to the manufacturer'?
The JRE is configured by default to auto-update to the latest version Oracle considers stable enough for general use. Best to leave it to the auto-update feature.
Run-time testing
Of course, there is always How do I test whether Java is working on my computer?
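If your system check runs inside your own code, the installed version can at least be read locally. A minimal sketch (note it only tells you what is installed, not whether a newer release is available for download):

    public class JavaVersionCheck {
        public static void main(String[] args) {
            // The version of the JRE actually running this code, e.g. "1.7.0_45".
            // This says nothing about what the latest downloadable release is.
            String version = System.getProperty("java.version");
            System.out.println("Installed Java version: " + version);
        }
    }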
Firefox shows:
"An old version of Java has been detected on your system. Update Java by clicking the button below."
A polite way of saying 'no plug-in found for FF'.
Chrome's version of the prompt is more user friendly.
But ultimately, either way, leave it to the end user's discretion as to what version to use and whether to update.

How to get API Tooling to work in Eclipse

I have been having a real hard time getting API Tooling to work in Eclipse 3.4.2. It keeps telling me:
The minor version should be incremented in version 3.4.0.qualifier, since new APIs have been added since version 3.4.0.40001
That being said, I generated the plugins that are used for the baseline from the exact same code that is being analyzed. The API Tools docs say that it compares the current code against the baseline to see if there are any differences. I can't see how there could be differences if the built version is built from the current code.
The way that I tested it:
Create a new eclipse workspace
Create a new Plug-in Project with API Analysis turned on
Add a simple class to that plugin and export the package with that class in it
Build/Export that plugin to some location on your hard drive
Set the workspace baseline to that location and do a full build
You get an error for the project in your Problems view.
Thanks,
-One very perplexed user
Looks like this is something that got fixed in 3.5. Too bad my company doesn't want us using 3.5 in case there are any incompatibility issues. (There were going from 3.3 to 3.4.)
My recommendation to anyone who wants to do Eclipse API Analysis is to use 3.5.
First off, I apologize for jumping on a thread long after its "active time", but I am currently running into this exact situation with Eclipse Helios 3.6.
In your answer, you noted that something was fixed in 3.5. Are you aware of what this exact fix was, and have you been able to verify that it is working under Eclipse Helios 3.6?
I would really like to have PDE API Tooling working, but I'm nearing my allotted time on this effort and need to move forward onto some pending tasks.
Thanks!
EDIT: I would have posted this as a follow-up comment but did not see any such links available.

How to version control the build tools and libraries?

What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield, and other tools (including the correct patch versions) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control and then building with them. This would also simplify setting up a new build machine: 1) install our source control tool, 2) point at the right branch, and 3) build. That's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything involving shared COM components. If you don't have any shared-component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control, stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that lists the ISO files needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that set will evolve with each new version of your project. UCM and its 'composite baseline' might offer the beginning of an answer to that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries through the network (like through a dynamic view), because compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries onto your disk, meaning a snapshot view, meaning downloading those files... and this is where you may appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Release links (essentially symlinks) point to the version being used by your project. When a new version comes along, it is just added to the filesystem and you can swing your symlink to it. There is a full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a continuous integration (CI) tool like NAnt to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
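As a rough Java analog (a sketch only; the jar path and source file below are made up), you can point the compiler's classpath at library jars checked into the repository instead of whatever is installed globally. For full repeatability you would likewise launch a javac binary checked into the repo rather than the system one:

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;

    public class PinnedBuild {
        public static void main(String[] args) {
            // Compiler of the running JDK; a fully pinned build would invoke
            // a javac checked into source control instead.
            JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
            int result = javac.run(null, null, null,
                    "-classpath", "tools/libs/commons-io-2.4.jar", // jar under version control (made-up path)
                    "-d", "build/classes",
                    "src/com/example/Main.java");
            System.out.println(result == 0 ? "compiled" : "failed");
        }
    }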
Following up on my own question, I came across this posting referenced in the answer to another question. Although it is more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
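A minimal sketch of what such a slave's polling loop might look like (the taskmaster URL, query parameters, and response format are all made up; the real system described here was custom-built):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class BuildSlave {
        public static void main(String[] args) throws Exception {
            // Advertise this slave's toolchain; the taskmaster matches it
            // against queued requests. URL and parameters are hypothetical.
            URL poll = new URL("http://taskmaster.example.com/next?os=linux&toolchain=gcc-4.8");
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) poll.openConnection();
                if (conn.getResponseCode() == 200) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()));
                    // The response names the SVN tags to check out and the
                    // script (from one of those tags) to run.
                    runBuild(in.readLine());
                    in.close();
                }
                conn.disconnect();
                Thread.sleep(30 * 1000); // poll every 30 seconds
            }
        }

        static void runBuild(String buildRequest) {
            // svn checkout the named tags, then hand off to the checked-out
            // build script, which drives the rest of the build.
        }
    }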
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system at any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.