Managing team bundles - Eclipse

We are moving our application to the OSGi platform (all developers are using Eclipse) and are trying to figure out the best team environment for developing our bundles.
We have bundles from multiple sources:
1. Common bundles from projects such as Orbit or Apache that are managed by outside agencies.
2. Bundles that wrap domain-specific jar files. We manage these bundles internally.
3. Bundles provided by other teams in the company that are effectively read-only for us.
4. Bundles provided by our team that contain actively developed source code.
In cases 1-3 we would like to install the bundles in our local Eclipse IDE and provide a target platform. It seems to me we would just create a p2 repository that provides all of the bundles in cases 1-3 and reference it from a target definition. Feel free to point out a better solution if there is one.
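Concretely, the kind of target definition I have in mind is plain XML, roughly like this (the repository URL and unit here are made up; version 0.0.0 means "any version"):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<target name="Team Target Platform">
  <locations>
    <!-- hypothetical internal p2 repository aggregating the case 1-3 bundles -->
    <location includeAllPlatforms="false" includeMode="planner" type="InstallableUnit">
      <repository location="http://build.example.com/p2/team-platform"/>
      <unit id="org.apache.commons.logging" version="0.0.0"/>
    </location>
  </locations>
</target>
```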
The bundles contained in case 4 are stored in a Mercurial repository. Although the target definition looks like it can grab bundles from several sources, it does not address how to include bundles from a (d)VCS.
What is the best practice? Do we put our (d)VCS bundle information in the target platform and just make developers download the correct bundles manually? Also, how do we manage changes to the target definition? Do we have to email everyone when it changes, or is there a more elegant solution?
Thanks for your help.

We currently use a shared SVN space to distribute the target to the developers. The disadvantage is that we end up with binary artifacts in our SVN!
But the p2 repository sounds much better: when every developer activates auto-update, they will be informed when updates are available.
I think we will try it at my company in the future.
Our actively developed source code we share via Team Project Sets (*.psf). This is a single text file which contains all the repository information of the exported Eclipse projects. Try it in your Eclipse IDE with File -> Export -> Team -> Team Project Set. Whenever there are changes to the Project Set, we currently send an email to our developers. A more elegant way, I think, would be to share it over the p2 repository.
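For readers who have not seen one, a PSF is a small XML file; a Git-based set might look roughly like this (the provider id is EGit's, the project reference is made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<psf version="2.0">
  <provider id="org.eclipse.egit.core.GitProvider">
    <!-- reference format is provider-specific: version,repo URL,branch,project path -->
    <project reference="1.0,https://git.example.com/products.git,master,bundles/com.example.core"/>
  </provider>
</psf>
```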
I hope that helps, and sorry for my bad English!

I am using eclipse, m2eclipse, maven-bundle-plugin, subversion, nexus and hudson, and it works like a charm, especially in a team environment.
Automating the MANIFEST.MF generation is critical in OSGi, because doing this by hand is very error-prone. Use bnd for this (automated by Bndtools or maven-bundle-plugin).
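As a sketch, wiring bnd into a Maven build via maven-bundle-plugin looks like this in the pom of a module with <packaging>bundle</packaging> (the package names are examples):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- bnd instructions: export only the API, keep internals private -->
      <Export-Package>com.example.api</Export-Package>
      <Private-Package>com.example.internal</Private-Package>
      <Import-Package>*</Import-Package>
    </instructions>
  </configuration>
</plugin>
```

bnd then computes Import-Package from the bytecode, which is exactly the error-prone part you do not want to maintain by hand.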
Pax Construct can help in building a complete OSGi runtime environment.

It's better to use Apache Maven [1] if you want an Eclipse-independent environment.
Pros:
all your artifacts will be stored in one Maven repo. You can use tools such as Artifactory [2] to create and share a Maven repo for the whole team (to avoid any problems with 3rd-party artifacts; see the settings.xml sketch after these lists)
there are a lot of OSGi Maven tutorials available that will help you find answers to almost all of your questions
Eclipse supports Maven very well with the m2eclipse [3] plugin
the IDE is not so important in this case; your team members can select any (even vi or emacs)
Cons:
you have to find Maven repos for all your artifacts. It's not so easy for Eclipse artifacts, but you can try to find them here: [4]
you have to change your project structure to meet Maven's requirements
you have to spend some time understanding and applying Maven patterns (for OSGi)
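To illustrate the shared-repo point from the pros list: each developer's settings.xml can route everything through the team repository manager (the URL is hypothetical):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>team-repo</id>
      <!-- route all artifact requests through the shared Artifactory instance -->
      <mirrorOf>*</mirrorOf>
      <url>http://repo.example.com/artifactory/repo</url>
    </mirror>
  </mirrors>
</settings>
```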
[1] - http://maven.apache.org/
[2] - http://www.jfrog.org/products.php
[3] - http://m2eclipse.sonatype.org/
[4] - http://build.eclipse.org/helios/hybrid/final/
Regards,
Dmytro

Thanks to everyone who answered for the insight into how others are solving this problem.
We ended up going with Buckminster. It allows us to quickly describe where all our bundles are (cases 1-3 from p2 repositories, case 4 from Mercurial) and provides one-click setup of empty workspaces through the CQUERY. It also integrates well with Hudson and simplifies CI setup compared to the PDE builds I have used on other projects.
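To give a flavor, the one-click setup is driven by a CQUERY that points at an RMAP; from memory, it looks roughly like this (all names and URLs are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cq:componentQuery xmlns:cq="http://www.eclipse.org/buckminster/CQuery-1.0"
    resourceMap="http://build.example.com/buckminster/project.rmap">
  <cq:rootRequest name="com.example.product.feature" componentType="eclipse.feature"/>
</cq:componentQuery>
```

The RMAP is where each component name pattern is mapped to its source: the p2 repositories for cases 1-3 and the Mercurial repository for case 4.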

Related

Filling the gap between PDE and Tycho

For years, I packaged my various Eclipse RCP products with PDE.
With my latest upgrade attempt, to Eclipse Oxygen, I got some strange new resolution errors which I could not solve, and I decided it really was time to give Tycho a try. I followed the excellent article about Tycho by Lars Vogel, and after a bit of tweaking it worked well (and I was not stuck on the same resolution errors as in PDE! Yay!).
But indeed it was a simple test: I created a folder for my bundles, another for my features, created my poms, and so on. Now I look at the degree of automation in my PDE setup and find quite a huge gap.
In PDE, there is a build.properties where you give your master feature file and a map file, and the process will, seemingly:
parse the master feature
parse the features in it (recursively)
parse the plugins in them
find in the map file the plugins to be packaged (the other dependencies are supposed to be in the target platform)
download the relevant git repos
move the relevant plugins/features to the working directories
launch compile, p2 and so on
(note: the git part requires that you provide the eGit fetch factory; a sample map entry is sketched below)
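For anyone who has not used map files, an entry with the eGit fetch factory looks roughly like this (all names and URLs are made up):

```
plugin@com.example.ui=GIT,tag=master,repo=git://git.example.com/products.git,path=plugins/com.example.ui
feature@com.example.feature=GIT,tag=master,repo=git://git.example.com/products.git,path=features/com.example.feature
```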
Now in Tycho, I have to create poms, but that is not the problem. I have to create some master poms, and for the individual plugin poms I have either the pomless option or the pom generator. The pom generator also seems to have the advantage of creating the parent pom which contains all the plugins as modules. So far so good.
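For reference, the kind of master pom I mean is short; this is a minimal sketch (module paths and the Tycho version are just examples):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>com.example.parent</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <modules>
    <module>bundles/com.example.core</module>
    <module>features/com.example.feature</module>
  </modules>

  <build>
    <plugins>
      <plugin>
        <!-- enables the eclipse-plugin / eclipse-feature packaging types -->
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-maven-plugin</artifactId>
        <version>1.7.0</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>
```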
But I have to fill the features and plugins folders, and I'm stuck here.
I do not have PSFs for my products, because I never needed them: in PDE, map + product definition does the trick.
Does this mean I will have to maintain PSFs from now on, or is there another Tycho solution I did not find? (Tycho documentation is quite scarce, in my opinion.) Maintaining PSFs seems redundant to me because I have the product and the map, and also because I have many products and many plugins, many of which are common to several products.
(Indeed, a basic solution would be to take the git repositories mentioned in the map file, dump them all and launch Tycho. Tycho would compile all the plugins, and then the p2 part would package only the product-relevant ones. The problem is that I have plenty of different products that rely on plenty of different repositories. And even within a given git repo, I have plugins that may or may not be relevant for a given product. Thus, I would compile hundreds of useless plugins in the process.)
My need is to copy into the Tycho folders only the plugins and features which are referenced in my product and which are not already in my target platform. Generating a PSF from my product and my map would just be shifting the problem.
Indeed, I can code this, and I will if needed.
But given that all of this is already automated in PDE, are there at least some parts of the process that could be automated with some Tycho plugins I did not uncover?
After some time of digging, here is the solution I finally chose.
In order to fetch the relevant features and plugins, I used... PDE! I dug into PDE and found the various steps in its process. The first one is the fetch (it is an Ant task named eclipse.fetch). I externalized this part, and my script launches it, then generates the master poms by scanning the fetched feature and plugin names, then adds the other Tycho configurations, and then launches Tycho.
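The externalized step is essentially an Ant script around the eclipse.fetch task; it boils down to something like this (attribute names recalled from the PDE Build documentation, element and paths made up, so treat this as a sketch to verify):

```xml
<!-- generates fetch scripts from the map files, then runs them -->
<eclipse.fetch
    elements="feature@com.example.master.feature"
    buildDirectory="${buildDirectory}"
    directory="${builder}/maps"
    fetchTag="master"/>
<ant antfile="${buildDirectory}/fetch_com.example.master.feature.xml"/>
```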
In the end, granted, it is not a full Tycho solution; it is a hybrid PDE + Tycho one. But it works like a charm, and the build/package process is Tycho; only the initial fetch is delegated to PDE. (Anyway, the PDE build/package process does not work in my case, as stated initially.)

Best way to share a Target Platform for Eclipse RCP project

What is the best way to share a Target Platform?
E.g. together with the source code of some RCP project.
I can define a .target file, fill it with remote p2 sites, and share this file. The first problem here is that those sites tend to be very slow and unreliable; from experience, resolving such a Target Platform fails from time to time.
A more reliable and faster (in terms of loading this Target Platform) approach is to define a local directory, that contains all plugins and features.
This directory can either be part of the source repository itself or I can provide a (fast and reliable) remote site from which I can download this target platform anytime.
The difficulty here is: how can I translate a list of p2 sites into a directory that contains the plugins and features provided by those update sites?
When I set a specific target platform from within Eclipse, where are those artifacts actually saved? I assume I could just copy this directory.
Despite the sometimes slow Eclipse p2 repositories and despite the crappy target platform editor, I still recommend using .target files. They are easy to share because they can be stored in the source code repository.
While resolving a target platform, PDE caches bundles in the .metadata/.plugins/org.eclipse.pde.core/.bundle_pool directory of the workspace.
Using .target files also allows you to use Tycho as a build tool. Note, however, that Tycho cannot read from local (i.e. file://) repositories.
It is the most common and most accepted way of providing dependencies for RCP/plug-in development.
To mitigate the unreliable performance of Eclipse p2 repositories you may want to mirror these and specify the mirrored sites in your target platform definition.
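p2 itself ships mirroring applications for this; for example (the Eclipse repository URL is real, the destination path is just an example):

```sh
eclipse -nosplash \
  -application org.eclipse.equinox.p2.metadata.repository.mirrorApplication \
  -source http://download.eclipse.org/releases/oxygen \
  -destination file:/srv/mirrors/oxygen

eclipse -nosplash \
  -application org.eclipse.equinox.p2.artifact.repository.mirrorApplication \
  -source http://download.eclipse.org/releases/oxygen \
  -destination file:/srv/mirrors/oxygen
```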
There is also the Target Platform Definition DSL and Generator that you might try as a drop-in replacement if you are dissatisfied with the reliability of the PDE target platform editor.

What's the point of downloading the source jars in a Grails project?

I've noticed that in Eclipse, if you right-click on a project -> Grails Tools -> you have the option to 'Download Source Jars'.
What is the point of this and what are some common reasons as to why you would want to do this?
Grails 2.2.3
Edit:
I'm not even sure what Grails does instead of that.
Many (most) libraries (JARs, "artifacts" in Maven terminology) publish a sources archive alongside their binary artifacts in the repositories. This can be useful for Eclipse to show you the Javadoc and source code when you're using the library in your projects. As @JonSkeet commented above, it's very useful to have source code available directly in the IDE when using a library.
By default, Grails does not download the sources for artifacts; this option triggers it to do so and attaches the sources to the binary JARs.
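In Maven coordinates, the sources archive is simply the same artifact with a "sources" classifier; for example (version chosen arbitrarily):

```xml
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>4.2.0.Final</version>
  <!-- the companion artifact containing the .java files -->
  <classifier>sources</classifier>
</dependency>
```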
Agreed with E-Riz.
Here are the reasons I use the sources:
I want to have a deeper understanding of how the library works when debugging my own code that depends on it.
I want to find a possible bug in the library, so I can fork it and apply my own patch. I will possibly share this with the maintainers as a pull request if I'm willing to spend that much time on it.
I want to find out what logging systems it uses that might be poorly documented, so I can better see what its code is doing at runtime, to troubleshoot complicated problems.

How to create a Spring+PrimeFaces+Hibernate (no Maven) project in Eclipse?

I am new to J2EE. I would like to create a Spring+Primefaces+Hibernate project.
I googled for it, but all the example projects I found on the internet use Maven. My questions are:
1. Is it possible to create a Spring+PrimeFaces+Hibernate project in Eclipse without Maven? If not, what is the need for Maven?
2. How do I add the jar files for PrimeFaces, Spring and Hibernate in Eclipse?
3. Will the Spring controller XML file (Spring context or dispatcher servlet) be created automatically or manually? I mean Spring MVC.
4. Will the Hibernate file (mapping file) also be created automatically or manually?
5. If possible, can anyone point me to a tutorial (preferably video) that implements the same?
I am using Tomcat 7 and Eclipse Kepler.
Any help is appreciated.
If this is downvoted, do specify the reason as well.
Although it's not a must to use Maven or any other build tool, you should strongly consider using one. Eclipse Kepler has Maven support by default, but feel free to use another build tool (Gradle, Ant) or none (see 2.). Maven and the other build tools remove the headache of scaffolding, searching for dependencies (external jars like spring-mvc, hibernate, some db drivers), and even deploying applications to a server.
If you choose not to use a build tool, you have to manually get your project dependencies and add them to your project's build path (Right Click -> Build Path, then enter their location). As you have noticed, this step can be really, really time-consuming...
No, you have to manually create the configuration, unless you use another project that already has what you need; again, this might get easier with a build tool (Maven archetypes, for example). A minimal dispatcher setup is sketched after this list of answers.
The same as 3.
You won't have a hard time finding resources about these technologies; they are used practically everywhere, and I think the Spring team has some videos on their YouTube channel.
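To make point 3 concrete, the hand-written Spring MVC bootstrap in web.xml typically looks like this (the servlet name is your choice):

```xml
<!-- registers Spring MVC's front controller by hand -->
<servlet>
  <servlet-name>dispatcher</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>dispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>
```

By convention, Spring then looks for /WEB-INF/dispatcher-servlet.xml, which you also write yourself.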
Hope that helps a little!
1. The fundamental difference between Maven and Ant is that Maven's design regards all projects as having a certain structure and a set of supported task work-flows (e.g., getting resources from source control, compiling the project, unit testing, etc.). While most software projects in effect support these operations and actually do have a well-defined structure, Maven requires that this structure and the operation implementation details be defined in the POM file. Thus, Maven relies on a convention for how to define projects and on the list of work-flows that are generally supported in all projects.
This design constraint resembles the way that an IDE handles a project, and it provides many benefits, such as a succinct project definition, and the possibility of automatic integration of a Maven project with other development tools such as IDEs, build servers, etc.
But one drawback to this approach is that Maven requires a user to first understand what a project is from the Maven point of view, and how Maven works with projects, because what happens when one executes a phase in Maven is not immediately obvious just from examining the Maven project file. In many cases, this required structure is also a significant hurdle in migrating a mature project to Maven, because it is usually hard to adapt from other approaches.
In Ant, projects do not really exist from the tool's technical perspective. Ant works with XML build scripts defined in one or more files. It processes targets from these files and each target executes tasks. Each task performs a technical operation such as running a compiler or copying files around. Targets are executed primarily in the order given by their defined dependency on other targets. Thus, Ant is a tool that chains together targets and executes them based on inter-dependencies and other Boolean conditions.
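A trivial build script shows this model: targets chained by a depends attribute, each running concrete tasks:

```xml
<project name="example" default="jar">
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>
  <!-- "jar" runs only after "compile" has succeeded -->
  <target name="jar" depends="compile">
    <jar destfile="build/example.jar" basedir="build/classes"/>
  </target>
</project>
```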
The benefits provided by Ant are also numerous. It has an XML language optimized for clearer definition of what each task does and on what it depends. Also, all the information about what will be executed by an Ant target can be found in the Ant script.
A developer not familiar with Ant would normally be able to determine what a simple Ant script does just by examining the script. This is not usually true for Maven.
However, even an experienced developer who is new to a project using Ant cannot infer what the higher level structure of an Ant script is and what it does without examining the script in detail. Depending on the script's complexity, this can quickly become a daunting challenge. With Maven, a developer who previously worked with other Maven projects can quickly examine the structure of a never-before-seen Maven project and execute the standard Maven work-flows against it while already knowing what to expect as an outcome.
It is possible to use Ant scripts that are defined and behave in a uniform manner for all projects in a working group or an organization. However, when the number and complexity of projects rises, it is also very easy to stray from the initially desired uniformity. With Maven, this is less of a problem because the tool always imposes a certain way of doing things.
2. You have to download all the required jar files for Hibernate/Spring/PrimeFaces from the internet and place them in your project build path or in the lib folder.
3. Spring configuration files need to be created by you, so that you learn the concepts.
4. Hibernate mapping files can be created using Hibernate reverse-engineering techniques, from which you can generate hbm files, or you can use annotations if you don't want XML. A minimal mapping file is sketched below.
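For reference, a minimal hand-written mapping file looks like this (the class and table names are made up):

```xml
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
  <!-- maps the hypothetical com.example.User class to the USERS table -->
  <class name="com.example.User" table="USERS">
    <id name="id" type="long">
      <generator class="native"/>
    </id>
    <property name="name" type="string"/>
  </class>
</hibernate-mapping>
```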
I suggest you first create a sample Java project in Eclipse, then download all the required jars and place them in the lib folder. Then configure Hibernate in the project and the Spring integration.

Install/update/remove bundles programmatically

I'm new to OSGi and wonder if it is possible to have a centralized mechanism to update, install or remove bundles.
Yes. You can do this programmatically, and there are also a large number of bundles that provide out-of-the-box solutions. It is so easy (and so much fun) that for many people one of their first bundles is a little "management agent" (as the OSGi specification calls this part).
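A minimal sketch of such a management agent, using only the core framework API (the class name and location URL are made up):

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class ManagementAgent {

    private final BundleContext context; // obtained in your BundleActivator.start()

    public ManagementAgent(BundleContext context) {
        this.context = context;
    }

    /** Installs and starts a bundle from a location URL, e.g. "file:/deploy/com.example.foo.jar". */
    public Bundle install(String location) throws BundleException {
        Bundle bundle = context.installBundle(location);
        bundle.start();
        return bundle;
    }

    /** Re-reads the bundle's content from its original location. */
    public void update(long bundleId) throws BundleException {
        Bundle bundle = context.getBundle(bundleId);
        if (bundle != null) {
            bundle.update();
        }
    }

    /** Removes the bundle from the framework. */
    public void remove(long bundleId) throws BundleException {
        Bundle bundle = context.getBundle(bundleId);
        if (bundle != null) {
            bundle.uninstall();
        }
    }
}
```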
The absolute simplest solution is Apache Felix File Install. It tracks a directory and installs/uninstalls bundles from there. Couple this with Google Drive or Dropbox and you have a large-scale, fully automated deployment model (it also handles configuration, which is quite important).
The OSGi specification now has an OSGi Bundle Repository (OBR) specification. This is a very powerful model for describing dependencies (not just bundles) that allows management agents to calculate and leverage dependencies. It is supported out of the box on Felix.
There are a myriad of solutions that manage OSGi frameworks. There is commercial support from Paremus, IBM Tivoli, ProSyst and many others, and open source with Apache ACE and fusebundles.
There are two general ways to do that: have your application 'pull' bundles from a repository hosting bundles and update itself, or have an external provisioning application 'push' bundles to your application.
For pull solutions I'd say there is:
Eclipse p2: used by the update manager of Eclipse. Mature and stable, but it can be a bit tricky to get into; also, I'm not sure whether p2 works with OSGi runtimes other than Eclipse Equinox.
OSGi Bundle Repository (OBR): a bit easier, and it's in the OSGi spec.
For push solutions, I'd have a look at Apache ACE; from your question, I think that is closest to what you want to do.