Cross-platform build matrix with SBT for a Scala + native project

I'm seeking suggestions for publishing and releasing a Scala/Java project I'm currently working on that uses SBT as the primary build tool. The project structure looks like this:
|- parent
| |- native
| |- core
Here,
The parent project is the project root; it contains all the plugins and build.sbt, and just acts as a wrapper for the two main sub-modules, native and core.
The native project contains no Scala/Java code, only Rust code that is accessed via JNI from the core module. This project has its own internal build system, Cargo, which builds native as a system-specific library (.so on Linux, .dylib on macOS and .dll on Windows).
core is the main module that will be consumed by the end users and provides a nice interface that depends on the native code.
Everything is working fine in development as I'm relying on the sbt-jni plugin, but the final automated release process is what I'm confused about.
What I want to do is the following,
Publish the project as a platform-independent jar to a Maven repository.
This jar should preferably be built from the core sub-module and contain the compiled core code together with all the platform libs from native. This way the jar can work across a matrix of:
Java versions - 8, 11, 17
Scala versions - 2.12, 2.13, 3.x
OS and platforms - macOS (Apple Silicon/Intel), Windows and Linux
Preferably not publish the native sub-module at all, because it doesn't need to be exposed to end users and cannot be used directly without core.
Preferably not rely on Maven dependency classifiers for the platform-specific libs that get built from the native module (.so, .dll and .dylib).
Automate all of this on GitHub Actions via a release workflow.
I know that for different Scala versions there will be different final jars with suffixes like _2.12, and this is fine. Java versions can be handled with the -target or --release javac options.
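For reference, a rough sketch of the cross-build settings I have in mind (module names, versions and the resource path are illustrative, and copying the Cargo output into resources is just one possible approach):

// build.sbt (sketch)
ThisBuild / crossScalaVersions := Seq("2.12.18", "2.13.12", "3.3.1")
ThisBuild / javacOptions ++= Seq("--release", "8")   // target Java 8 bytecode from a newer JDK

lazy val core = (project in file("core"))
  .settings(
    name := "myproject-core",
    // bundle the pre-built native libraries as plain resources inside the jar
    Compile / unmanagedResourceDirectories += baseDirectory.value / ".." / "native" / "target" / "release"
  )

lazy val native = (project in file("native"))
  .settings(
    publish / skip := true   // never published on its own
  )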
Looking for any guidance and suggestions regarding this, as I do not want to end up following some anti-pattern. This is also the first Scala + Rust project that I am trying out.
Thanks in advance.

Related

IntelliJ download the libraries once and use in multiple projects

I have IntelliJ with the Scala plugin installed on a server. The server is disconnected from the global internet and updates can only be done occasionally.
I would like to download some libraries once (e.g. Spark libraries, some Java libraries) and use them in IntelliJ across multiple projects without needing to download them again, loading them from local directories instead. It would also be great to have a 'full' bundle of libraries (e.g. all Spark libraries) and be able to use only particular classes when necessary (e.g. SparkContext only).
TK
P.S. The question is somewhat related to: Use Scala on computer without internet connection
As @MarioGalic suggested, the clue was to move the required libraries to the ~/.ivy2 directory.
Sometimes the answer is simply to add the libraries manually in the IntelliJ project setup, instead of using SBT or Maven to manage the dependencies.
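If the projects do use sbt under the hood, a small related trick (assuming the required artifacts already sit in ~/.ivy2) is to tell sbt not to touch the network at all, via the standard offline setting in build.sbt:

offline := true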

What is the right way to create JUnit tests for Eclipse fragments?

One of the most common uses of Eclipse fragments is as a container for JUnit test classes. But how do you write JUnit tests for an Eclipse fragment when it plays another, more important role? For example, when it contains platform-specific code.
The problem is that it is impossible to create a fragment for a fragment. And you can't write tests in the host plug-in to test the fragment, because they wouldn't even compile: a fragment is "merged" into its host only at runtime.
I don't know of a satisfactory solution, however, you may want to consider these workarounds.
Eclipse-ExtensibleAPI
You can use the Eclipse-ExtensibleAPI manifest header like this
Eclipse-ExtensibleAPI: true
It causes the packages exported by the fragment to be re-exported by the host bundle. Now you can create a test bundle that imports the desired packages and therefore has access to the public types in the fragment.
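For example, a test bundle could then declare something like the following in its MANIFEST.MF (the package name here is purely illustrative):

Import-Package: com.example.myfragment.internal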
This isn't as convenient as test fragments, where tests and production code use the same class loader, which gives access to package-private types and methods. But you can at least test through publicly accessible means.
Note, however, that this header is specific to Eclipse PDE and not part of the OSGi specification. Hence you are tied to this development environment. Furthermore, the packages of the fragment will be exported through its host bundle and will be visible not only to the test bundle but to all bundles.
Java Library
If your fragment has few dependencies and doesn't require the OSGi/Eclipse runtime, you could consider treating it as a plain Java library w.r.t. tests. Another sibling Java project could contain the tests and have a project dependency (Properties > Java Build Path > Projects) on the fragment project. Again, access to package-private members would not work.
And if you use a build tool like Maven/Tycho, some extra work would be required to declare dependencies and execute these tests during the build.
Bndtools
You could also look into Bndtools to see if this development tool fits your needs better than the Eclipse Plug-in Development Environment (PDE).
Plain JUnit tests are held in a separate source folder in the same project as the production code. This would give your test code access to the production code in the same way as if test-fragments were used.
Bndtools also supports executing integration tests, though I doubt that you would have access to the fragment code other than through services or other API provided by the fragment.
For CI builds, Bndtools projects usually use Maven or Gradle with the help of the respective bnd (http://bnd.bndtools.org/) plug-in, just as Maven/Tycho is used to build and package PDE projects.
Since Bndtools is an IDE extension to develop OSGi bundles, it doesn't know about Eclipse plug-in specificities such as extensions declared in the plugin.xml. Hence there is no builder and editor for these artifacts. But if you are lucky, you may even be able to use the PDE builder to show error markers for invalid extensions and extension points.
Another downside that comes with having production- and test-code in the same project, is that pure test dependencies like JUnit, mock libraries, etc. are also visible for the production code at development time.
Of course, the produced (fragment) bundles contain neither test code nor test dependencies.
However, Bndtools itself is developed with Bndtools. So there is proof that Bndtools can be used to write Eclipse plug-ins.

How to define an OSGi/Eclipse plugin with binary components for multiple platforms

I created an Eclipse plugin and there is a native binary needed to support its functionality. I have the native code ready for Win and Mac. The invocation of the native code is different for each platform, so there is also some plugin code related to the native code. (In fact the native code is JNA code, so very different indeed.) Currently I have an extension point and each native support plugin contributes there. So, as soon as a native support is here, the main plugin works. Also I have a test fragment for each of the native support plugins to unit test functionality.
How should I set the plugin(s) up so that everybody gets the right plugin when downloading from the update site or p2 repo? (I noticed that, for example, SWT uses fragments for the native code, so is this the way to go?)
Edit: After converting the plugins to fragments as indicated by the answer, what should I do with the unit test fragments of these plugins? Fragments of fragments are not possible.
How can I set this up in Tycho, so that the Tycho build runs the tests suitable for the current platform and ignores the other platforms?
Edit: I have Mac and Windows native code, two fragments and therefore two environments in the pom. But then Tycho complains "plugin x cannot be installed in this environment because its filter is not applicable". Of course it can't; only one of Win/Mac can be active at any given time. Can Tycho figure this out itself or do I need OS-dependent Maven profiles?
Yes, you will need to package the native bundles into plug-in fragments. Each fragment should specify a platform filter to ensure only one fragment is valid per platform. For example, on 64-bit Windows you need to specify os=win32, ws=win32, arch=x86_64.
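In the fragment's MANIFEST.MF, such a filter is expressed with the Eclipse-PlatformFilter header, roughly like this (the values depend on the target platform):

Eclipse-PlatformFilter: (& (osgi.os=win32) (osgi.ws=win32) (osgi.arch=x86_64))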
If your fragments are part of a feature, you should also specify the platform filter in the feature definition.
Under Tycho, you need to specify all your supported platform filter combinations under the environments section of the target-platform-configuration in your pom file.
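As a sketch, that section of the (parent) pom could look roughly like this; the version element is omitted and the environment values depend on the platforms you actually support:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>target-platform-configuration</artifactId>
  <configuration>
    <environments>
      <environment>
        <os>win32</os>
        <ws>win32</ws>
        <arch>x86_64</arch>
      </environment>
      <environment>
        <os>macosx</os>
        <ws>cocoa</ws>
        <arch>x86_64</arch>
      </environment>
    </environments>
  </configuration>
</plugin>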
Tycho always runs tests under the current platform. Add your fragments to your test runtime - see here on adding dependencies to the tycho test runtime. Tycho often needs help in identifying fragments to add to the test runtime.

Automating build tasks using eclipse / maven m2e

I am about to use maven to automate my builds. Unfortunately, I am not able to get all the features I want, even after reading several tutorials :(
I would be glad if somebody could explain a way I can achieve all my goals!
I want to automate 3 specific build tasks with several actions for a project from within eclipse, using m2e:
Build snapshot
compile
define current project version + date as version
build jar file
copy jar file into the local repository in the project path itself (${project}/builds/)
Debug snapshot
build snapshot as mentioned above
copy jar file to plugins folder of a local test server
build another project the current project depends on, and copy its jar file to the plugins folder as well
launch server / connect to eclipse debugger (I know how to do that, the previous steps are the important ones)
Create release
compile
define current project version as version
build jar file
copy jar file into the local repository in the project path itself
create javadoc
copy source files and javadoc to an archive folder
increase the project version (for example v6 -> v7)
As mentioned I don't need a perfect solution, just a way to realize this ;)
(Annotation: Chaining multiple launch configurations is not a problem.)
Edit:
Which sections of the pom.xml do I have to modify to realize these steps and how can I invoke them using an eclipse launch configuration?
Hi, based on your requirements I can say the following:
Build Snapshots
Building a SNAPSHOT is usually the convention during development cycle.
1.1 Just use the conventions.
1.2 Date as version
This is a bad idea, because Maven has conventions for what a version looks like (1.0-SNAPSHOT, 1.2.3-SNAPSHOT, etc.).
1.3 Build jar file
Usually done by the jar life cycle (mvn package)
1.4 The local repository is on your home drive in ${HOME}/.m2/repository for all your projects. Technically you can do what you like, but it's against the Maven conventions. The question is: why do you need such a thing?
2.1 Usual procedure
2.2 Usually deployment is not a job for Maven, but you can do such things by using the cargo-maven-plugin (integration testing).
2.3 If you have dependencies between projects you need a CI solution like Jenkins to do such things; otherwise you need to do this manually. But that is different from a multi-module build.
2.4 Integration testing is a different story. It depends on what exactly you would like to do.
3.1-3.7
The maven-release-plugin will handle such things, except copying into the project path itself, which is against the conventions. For such purposes you need a repository manager.
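In its basic form the release is then driven from the command line, and the plugin takes care of version handling, tagging and deployment:

mvn release:prepare
mvn release:perform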
I can recommend reading these books: http://www.sonatype.com/Support/Books

Varying dependencies depending on target project type

I have a package, Ninject.Extensions.Wcf, which should be installed differently depending on the type of project it is installed into. In the case where WCF is hosted in IIS (any project containing a global.asax), a second package, Ninject.Web.Common, needs to be installed together with it. For all other project types, such as libraries, console, WinForms or WPF applications, this second package should not be installed.
Is it somehow possible to achieve this, e.g. using a PowerShell script? Or do I have to deploy two different packages in this case?
Unfortunately the current nuspec file does not provide for managing dependencies based on project type. We currently support targeting different framework versions, but that doesn't apply in your situation.
It is recommended that all dependencies are handled using package references. Although it would be technically possible to download and install a package using a PowerShell script, this is not supported and will most likely break in future versions.
First determine whether it would actually be a problem to reference a web package in a non-web project. Even if assemblies are referenced, as long as they are not used, it should not have an impact.
If it turns out that having the web dependency causes undesirable side-effects, then you'll need to create separate packages.
I would split up your package into logical pieces. As you state, you have a package that is used by non-web projects. Web projects require a dependency on a different package.
So now you have 2 logical packages:
MyProject
MyProject.Web
  MyProject (dependency)
  SomeOtherPackage
So a user would Install-Package MyProject for non-web projects, and Install-Package MyProject.Web for web projects.
At this point you would be done and everything would be fine. But I think you should consider another step. One of the issues I see with these split packages is that I have to figure out which particular package I need to install. I have to know that I need the "Web" version.
At this point, determine the typical use case for your package. If 90% of your users will be installing the Web version, then I would make a "meta" package that simply has dependencies for your common packages.
In your case I would make 3 packages:
MyProject (meta package)
  MyProject.Web
MyProject.Web
  MyProject.Core
  SomeOtherPackage
MyProject.Core (common non-web package)
By creating the "meta" package, you can reserve the "short" package name for the most common case. This meta package only has dependencies to other packages.
A good example of this is the SignalR package.
Hope this was helpful.