shapeless port to scala-js: create artifact with few external dependencies - scala

There is a port of the shapeless library to Scala.js (https://github.com/alexander-myltsev/shapeless). I need to publish the artifact properly, with as few dependencies on the original shapeless as possible.
For now I have forked Miles Sabin's repo and added the changes required to build a Scala.js library: add scalajs-sbt-plugin, tune Build.scala, add bintray-sbt-plugin.
It would be wrong to ask the shapeless maintainers to merge my branch, because Scala.js could break their build.
On the other hand, I'd like to have minimal dependencies on the original repo as well. Theoretically and ideally, I'd create, say, a shapeless-scalajs sbt project from scratch, somehow reference the original shapeless library, and then derive from the shapeless Build.scala with the overrides required to build it against Scala.js and publish it to my Bintray.
I believe in almighty sbt :) What are my options to solve this task?
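For reference, the kind of source dependency I have in mind would look roughly like this in the new project's build (just a sketch; the subproject id "core" is my guess at the upstream build's layout):

lazy val shapelessCore = ProjectRef(uri("https://github.com/milessabin/shapeless.git"), "core")

lazy val shapelessScalajs = project.in(file("."))
  .dependsOn(shapelessCore)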

I think the easiest way is (no sbt hackery involved):
Fork shapeless
Create Scala.js branch
Change the build files as you need. That is, modify the shapelessCore project directly as in your PR (add scalaJSSettings and your repo coordinates; see the sketch below).
Commit
Publish shapeless to your own Maven repository (e.g. your Bintray)
When a new version of shapeless comes out, just merge shapeless/master into your Scala.js branch. If no changes happened to the build file, this will merge just fine.
Re-publish
This will be way easier than an sbt project that depends on an external project (which is doable, but doesn't directly allow you to reuse settings etc.)
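For illustration, the build change mentioned above (scalaJSSettings plus your own publish coordinates) would look roughly like this in the fork's project/Build.scala. This is a sketch only: coreSettings stands in for whatever settings the shapeless build already defines, and the exact imports depend on the plugin versions you use.

import scala.scalajs.sbtplugin.ScalaJSPlugin._  // pre-0.6 scalajs-sbt-plugin
import bintray.Plugin._                         // bintray-sbt

lazy val shapelessCore = project.in(file("core"))
  .settings(coreSettings: _*)             // the existing shapeless settings
  .settings(scalaJSSettings: _*)          // compile the project with Scala.js
  .settings(bintrayPublishSettings: _*)   // publish to your own Bintray repository
  .settings(organization := "your.organization")  // avoid clashing with the upstream artifact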

Related

Is it possible to publish the build project itself of a project using sbt?

I have a downstream project which would like to reference values defined inside the build files of an upstream project. If there were an easy way to publish a jar containing the source files of the build project itself, I could publish the build project of the upstream project, and the build project of the downstream repo could then depend on it. Is it possible to do this? Is it reasonable? Are there other, potentially better solutions?
To be clear, because I can see how the above might be confusing (and pardon me if I am using incorrect terminology): I am referring to the recursive nature of sbt builds and the fact that the build definition for a project is itself a project; that is what I would like to publish, not the source files of the project itself.
I'm familiar with writing plugins in sbt and with the sbt buildinfo plugin. I'm hoping there's another way.
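For context, the build definition under project/ is itself an ordinary sbt project, so in principle it can be given publish coordinates like any other project; a sketch (all names illustrative):

// project/build.sbt -- these settings apply to the build project itself
name := "upstream-build"
organization := "com.example"
version := "0.1.0-SNAPSHOT"

As far as I can tell, switching into the meta-build with reload plugins and running publishLocal would then publish the compiled build sources; whether the downstream build's own project/ can depend on that artifact cleanly is exactly what I'm unsure about.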

Metacello dependency on a package from github project

How can one depend directly on a package from a GitHub repo? (Assuming the project has no baseline, or there is a reason not to use the baseline.)
I've tried the following spec
spec
    package: 'Magritte-XMLBinding'
    with: [ spec
        repository: 'github://magritte-metamodel/XML-Bindings:master/repository' ].
However it failed (Could not resolve: Magritte-XMLBinding [Magritte-XMLBinding.package]), and in the Monticello browser under the repo I see only Magritte-Tests-XMLBinding.
What is more, when I look at the unpacked repo (in github-cache/), only Magritte-Tests-XMLBinding has been unzipped.
The unpacked Tests package is the first in alphabetical order, which makes me suspect the Metacello spec just grabs the first package without thinking.
When using git (and GitHub), you cannot depend on individual packages, only on complete projects. You can, however, depend on a project and load just one package of it.
Normally, this definition should work:
spec
    baseline: 'XMLBindings'
    with: [
        spec
            repository: 'github://magritte-metamodel/XML-Bindings:master/repository';
            loads: #('Magritte-XMLBinding') ].
However, while this answer is correct in general, in this case it will not work because the author of the project didn't include any baseline definition that would allow this kind of dependency to work, which suggests to me that he just uses that project as a mirror of the real one... so you have three possible solutions:
send a pull request to author with a baseline
contact the project author and ask to add a baseline
use the original source instead of the GitHub mirror

Simulink Project dependency management and dependency resolution

What is the best practice for managing dependencies within a Simulink Project when the project is worked on across a team and the project has dependencies on different models and libraries?
A parallel example would be building an application using Gradle and declaring the dependencies of a project, including the required version numbers. Gradle will resolve and download the versions that are required to build the project.
e.g. the following declares a dependency on version 2.1 of library and version 1.0 upwards of some-library, so that the latest version 1.x (1.0, 1.1, 1.2...) that is available will be downloaded and used.
dependencies {
    compile("com.example:library:2.1")
    compile("com.example:some-library:1.+")
}
The documentation for Simulink (and also the part covering manifests) seems to talk about models within a project having version numbers; it doesn't seem to mention libraries that are imported into the project. Models that are only used within a single project could all be contained in the overall project, but what happens if there are (for example) generic S-Functions defined within a separate project or library (or a library defined within a project) that are applicable across multiple projects? This requirement is all aimed at supporting an automatic build process triggered by a Continuous Integration server, such as Jenkins.
I'm interested in a workflow that will easily support dependency management and automatic dependency resolution with a Github Flow git branching policy.
I've spent a lot of time on this problem. In the end I didn't find an appropriate solution online, but I'd like to share the workflow we are using now, which fulfills our needs.
In short: We created our own dependency management by using git submodules.
Assumption: In fact, this is more a version management of persistent dependencies than a way to dynamically add new or remove old packages or libraries. That also works, but requires the git submodules to be added to or removed from the main git repository.
Objectives:
Consistent setup for everyone who works on the project.
Traceability of dependencies.
Continuous Integration with little effort.
How we do it (Example):
We have Project A and Project B which shall be used in Project C.
All three projects are under git version control and still under development.
We have set up additional release repositories for Project A and Project B, e.g. located on a network drive.
In Project C we add the release repositories of Project A and Project B as git submodules.
We have set up some kind of auto-deployment to push only relevant files into these release repositories. For example, if we want to make changes to Project B accessible to Project C, we only create a version tag in Project B's repository and it gets pushed to its release repository.
In Project C we update our git submodules and can checkout a new submodule version (if needed).
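Concretely this is just standard git submodule handling (the path and URL here are placeholders): git submodule add <release-repo-url> deps/project-a registers the dependency in Project C, and after a new release, git submodule update --init --recursive plus checking out the desired tag inside deps/project-a and committing in Project C pins the new version.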
Advantages:
Since git stores the checked out version (commit) of git submodules in the main project, we can ensure that everyone works with the same files.
Changing the commit of a submodule is traceable in the main project.
The relation between the main project and the dependencies is always consistent.
Continuous Integration should work "out of the box". We are using GitLab and GitLab Runner and only had to set up our runner to fetch submodules recursively (in case of nested submodules).
I think this approach works as long as the repositories don't get too big, since you fetch not only the version you need but also the whole version history.

Can sbt be used to access a non-Scala GitHub repo to read into a Scala project?

I'm dealing with two repos:
- A github repo that contains a bunch of text files.
- A scala project that would like to read those text files.
I would like to use SBT to download the contents of the github repo as a build dependency.
I wouldn't mind if SBT supplied either a path (into the ivy repo?) for the project to use, or built them into the project's available resources - or any other way that will just work. I'm aiming for something automatic; clearly there are ways I could do this manually.
If you are talking about a bunch of text files, like *.properties files for example, that are used as a dependency of your project (do you really want to download them every time?), you can use sbt.IO.download(url: URL, to: File). Just create a task and add it to the project definition with compile <<= (compile in Compile) dependsOn myDownloadTask. After that you can process them as regular local files ;-).
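A minimal sketch of such a task in build.sbt (the repository URL and file name are purely illustrative):

val myDownloadTask = taskKey[Unit]("Downloads the text files the project depends on")

myDownloadTask := {
  val out = (resourceManaged in Compile).value / "notes.txt"
  IO.createDirectory(out.getParentFile)
  // GitHub serves plain file contents from raw.githubusercontent.com
  IO.download(url("https://raw.githubusercontent.com/some-user/some-repo/master/notes.txt"), out)
}

// run the download before compilation (the <<= form above, in newer sbt syntax)
compile in Compile := ((compile in Compile) dependsOn myDownloadTask).value

If the files should also end up on the runtime classpath, registering the task under resourceGenerators in Compile is the more idiomatic route.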
You can of course add custom logic like caching, page parsing, or REST requests to GitHub to your project definition. Finally, you could create your own SBT plugin - there are a few video tutorials along the lines of "How to create an SBT plugin in 5 minutes" on YouTube.

Maven best practices for versioning different branches [development, qa / pre-release]

I have a couple of projects which are developed and released on different branches, namely development and release. The process works pretty well but unfortunately it has some drawbacks and I have been wondering if there is a better versioning scheme to apply in my situation.
The main development happens on a development branch (i.e. the Subversion trunk, but it doesn't matter much) where the team of developers commit their changes. After building and packaging the artifacts, Jenkins deploys them to the Maven repository and to the development integration application server. This is a DEVELOPMENT-SNAPSHOT; it is basically one common branch containing all the features under development:
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>D.16-SNAPSHOT</version>
When one particular business change is done and requested by the QA team, this single change is then merged to the release branch (branches/release). Jenkins deploys the resulting artifact to the QA application server:
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>R.16-SNAPSHOT</version>
Then there is a release, which happens via the maven-release-plugin on the release-branch version of the software (and which creates a maintenance tag/branch for quick bug fixing): R.16-SNAPSHOT => R.16.
Development and release branches are currently versioned as D.16-SNAPSHOT and R.16-SNAPSHOT respectively. This allows the artifacts to be separated in the Maven repository, but creates a problem with various Maven mechanisms that rely on the standard Maven versioning style. And it breaks OSGi versioning as well.
Now, how would you name and version maven artifacts in such a scheme? Is there a better way? Maybe I could make some changes to maven structures other than simply changing the versioning and naming schemes? But I need to keep development and QA (release) SCM branches separate.
Would a maven classifier of 'development'/'production' be a reasonable alternative?
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>16-SNAPSHOT</version>
<classifier>D</classifier>
As far as I know, the common naming convention for a release artifact is just the name of the artifact with only the version appended, nothing else. A development branch would have the same artifact name, but with a SNAPSHOT suffix.
For example, take twitter4j. The artifact name of the release version is
twitter4j-2.5.5
A snapshot of their development version is
twitter4j-2.6.5-SNAPSHOT
That is the naming convention almost everybody uses and is recognized by most tools. For example, my Nexus repository can specify a policy to ignore development releases which basically means it ignores the artifacts containing -SNAPSHOT in their name.
EDIT:
To your followup question:
Well, depending on your build tool, you can create your snapshots with a timestamp or some other unique identifier. However, I have never heard of branching logic being embedded in the artifact's name just so the continuous integration server can distinguish it. From the artifact's perspective, it is either a release or a SNAPSHOT; I don't see the benefit of embedding more logic into the artifact's name just because your Hudson allows you to do so. To be honest, your release cycle seems OK to me, but it would require some fine tweaking of your Maven tools. If you can't live with that, I would suggest using a classifier instead of relying on the name, as it is always easier to tweak the integration server than the many plugins that rely on the standard naming convention. In conclusion, I believe you are on the right track.
I think you could simplify the process by having only two types as far as Maven is concerned:
Snapshot (in perpetual development)
Releasable (with a version number; can be deployed to the Maven repository or released to production)
I would handle your branching a little differently. If you look at the iterative/Scrum development model, your code should be releasable/shippable at the end of an iteration/sprint:
The main Subversion trunk is where developers commit their code
At the end of the sprint/iteration, branch the main trunk and call it the release branch (there should not be a QA branch; any code that is to be released is tested for quality)
Bug fixes should happen on the release branch and be periodically merged back to the main trunk
This way you keep creating branches per release, and any bug fixes are committed to that branch
Before creating a new branch from the main trunk, always make sure it has all the merges from previous branches
The release plugin from Maven supports branching. It appears to work by assuming that the branch is created to support the next version of your code.
Personally, I'm more inclined to use the versions plug-in, and explicitly set my Maven project's version numbers.
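For example, the release plugin's release:branch goal creates the branch and updates the POM versions for you, while with the versions plugin a plain mvn versions:set -DnewVersion=1.2.0-SNAPSHOT (the version string here is just an example) sets the new version explicitly across the modules.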