Is it possible to use the Typesafe Activator launcher to achieve the same thing as the zip release? - scala

As a scaffolding tool, the official release weighs in at 238 MB, which is too big, and I already have a repository on my machine. Why does Activator ship another repository and keep downloading dependencies I already have into it?
The launcher itself is really small (< 50 KB). Is it possible to use it like sbt, i.e. use just the Activator launcher to get the same functionality as the full release?

Yes. The repository is only there for convenience. The actual size of the Activator "bootloader" is something like 2 MB. If you delete the "repository" directory from the Activator zip, everything will still work, but dependencies will be downloaded on demand.
Another hidden feature: if you download a particular template from the website (go to http://typesafe.com/activator/template/activator-akka-spray and click the download link), you'll get a standalone launcher for Activator.
The activator.bat/activator/activator-launcher-<version>.jar files are the only portion of the distribution you really need to run.
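For example, assuming a launcher jar from a 1.3.x download (the exact file name below is illustrative and varies by version), you can drive everything from the launcher jar directly, and dependencies are fetched on demand:
java -jar activator-launcher-1.3.12.jar ui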

Related

How does sbt integrate with IntelliJ?

Is there a definitive doc somewhere that explains all the magic happening behind Typesafe Activator's generation of an "IntelliJ supported" project?
The sbt build files look absolutely monstrous, and I have no idea what and where IntelliJ looks for.
This is frustrating because, when working from two different PCs, the Scala seed project refers to different hard-coded paths.
Is there a good place to start?
Last time I checked, Typesafe Activator was using sbt as the underlying build tool. When creating an IntelliJ project it would thus use the sbt-idea plugin.
I guess a possible place to start would be that plugin's documentation.
However, I think there is something else going on here. I think you have Activator installed on two different PCs and are trying to share the project between both, whether through version control or by copying the folders.
The sbt-idea plugin will indeed write some absolute paths into IDEA's project files (most likely the absolute paths to the sbt-managed libraries in the Ivy cache of your home folder), since these are required for the IntelliJ project to work.
There should be no reason to "share" the IDEA project files; they should be considered machine-specific and should not be checked into source control, nor expected to work when copied from one computer to another. You are expected to regenerate them for each computer the project is worked on.
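A minimal sketch of that per-machine regeneration with sbt-idea (the plugin version below is illustrative):
// ~/.sbt/0.13/plugins/plugins.sbt -- registers sbt-idea globally
addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.6.0")
Then, from the project directory on each machine:
sbt gen-idea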
If that sounds like a burden, you may want to install the IntelliJ Scala plugin instead. Once it is installed, its sbt integration will let you import any sbt project even if you haven't generated IntelliJ support in Activator. Have a look at the features page; there is a video showing how to use the plugin.

Tell SBT to not use staging area

I want to be able to compile my project once and pass it through multiple build steps on a CI server. But SBT puts files in a staging area like the one below.
/home/vagrant/.sbt/0.13/staging/
This means the project is not stand-alone, and every CI step is going to compile it again.
How can I tell SBT to keep things simple and stand-alone and to make sure everything it needs is inside the project directory?
FYI, the staging area is used for the target files when the source folder is not writable. Making the source folder writable should fix this.
If you pass -Dsbt.global.staging=./.staging to sbt when starting it up, the staging directory will be .staging in the project's directory.
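For example (compile stands in for whatever task your CI step runs):
sbt -Dsbt.global.staging=./.staging compile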
I figured that out by looking at the sbt source and patching that together with how Paul P's sbt runner passes the value for the sbt boot path.
If that doesn't accomplish what you want, you might be able to get there with a custom resolver. The sbt Build Loaders page talks about creating a custom resolver that lets you specify in more detail where dependencies are written; you'd probably need to do something like that.

Why should the Gradle Wrapper be committed to VCS?

From Gradle's documentation:
The scripts generated by this task are intended to be committed to your version control system. This task also generates a small gradle-wrapper.jar bootstrap JAR file and properties file which should also be committed to your VCS. The scripts delegate to this JAR.
From: What should NOT be under source control?
I think generated files should not be in the VCS.
When are gradlew and gradle/gradle-wrapper.jar needed?
Why not store a gradle version in the build.gradle file?
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle, and without even knowing how it works, where to download it from, which version, to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
If all you had was a gradle version number in a build.gradle file, you would need a README explaining to everyone that Gradle version X must be downloaded from URL Y and installed, and you would have to do that every time the version is incremented.
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle
The same argument goes for the JDK; do you want to commit that too? Do you also commit all your dependency libraries?
Dependencies should be upgraded continuously as new versions are released, to get security and other bug fixes, and because if you get too far behind, catching up again can be a very time-consuming task.
If the Gradle wrapper is incremented for every new release, and it is committed, the repo will grow very large. The problem is obvious when working with a distributed VCS, where a clone downloads every version of everything.
and without even knowing how it works
Create a build script that downloads the wrapper and uses it to build. No one needs to know how the script works; they only need to agree that the project is built by executing it.
where to download it from, which version
task wrapper(type: Wrapper) {
    gradleVersion = 'X.X'
}
For Gradle version >= 5:
wrapper {
    gradleVersion = 'X.X'
}
and then
gradle wrapper
to download the correct version.
to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
Solved by the steps above. Downloading the Gradle wrapper is not different from downloading any other dependency. The script could be smart enough to check for any current Gradle wrapper and only download it if there is a new version.
If the developer has never used Gradle before, and maybe doesn't know the project is built with Gradle, then it is more obvious to run a build.sh than to run gradlew build.
If all you had was a gradle version number in a build.gradle file, you would need a README explaining to everyone that Gradle version X must be downloaded from URL Y and installed,
No, you would not need a README. You could have one, but we are developers and we should automate as much as possible. Creating a script is better.
and you would have to do it every time the version is incremented.
If the developers agree that the correct process is to:
Clone repo
Run build script
Then upgrading to the latest Gradle wrapper is no problem. If the version has been incremented since the last run, the script can download the new version, as sketched below.
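A minimal sketch of such a script (the version, install directory, and file names are assumptions for illustration; the download URL pattern matches the official Gradle distributions):
#!/bin/sh
# build.sh -- fetch the pinned Gradle version once if missing, then build
GRADLE_VERSION=4.10
DIST_DIR="$HOME/.gradle-dists"
GRADLE_HOME="$DIST_DIR/gradle-$GRADLE_VERSION"
if [ ! -x "$GRADLE_HOME/bin/gradle" ]; then
    mkdir -p "$DIST_DIR"
    curl -sLo "$DIST_DIR/gradle.zip" "https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip"
    unzip -q "$DIST_DIR/gradle.zip" -d "$DIST_DIR"   # unpacks to gradle-$GRADLE_VERSION/
fi
exec "$GRADLE_HOME/bin/gradle" build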
I would like to recommend a simple approach.
In your project's README, document that an installation step is required, namely:
gradle wrapper --gradle-version 3.3
This works with Gradle 2.4 or higher, and creates a wrapper without requiring a dedicated task to be added to build.gradle.
With this option, ignore (do not check in) these files/folders for version control:
./gradle
gradlew
gradlew.bat
The key benefit is that you don't have to check a downloaded file into source control. It costs one extra step on installation. I think it is worth it.
According to the Gradle docs, adding gradle-wrapper.jar to VCS is expected, as making the Gradle Wrapper available to developers is part of the Gradle approach:
To make the Wrapper files available to other developers and execution environments you’ll need to check them into version control. All Wrapper files including the JAR file are very small in size. Adding the JAR file to version control is expected. Some organizations do not allow projects to submit binary files to version control. At the moment there are no alternative options to the approach.
What is the "project"?
Maybe there is a technical definition of this idiom that excludes build scripts. But if we accept that definition, then we must say your "project" is not all of the things that you need to version!
But if we say "your project" is everything you have done, then we can say you must include it, and only it, in the VCS.
This is very theoretical, and maybe not practical for our development work. So we change it to: "your project is every file (or folder) you need to edit directly".
"Directly" means "not indirectly", and "indirectly" means by editing another file whose effect is then reflected in this file.
So we reach the same thing the OP said (and that is said here):
I think generated files should not be in the VCS.
Yes. Because you haven't created them. So they are not part of "your project" according to the second definition.
What is the result about these files:
build.gradle: Yes. We need to edit it. Our works should be versioned.
Note: There is no difference where you edit it, whether in your text editor or in the Project Structure GUI. Either way, you are doing it directly!
gradle-wrapper.properties: Yes. We need to at least determine Gradle version in this file.
gradle-wrapper.jar and gradlew[.bat]: I have never created or edited them in any of my development work, up to this moment! So the answer is "No". If you have done so, the answer is "Yes" for you on that work (and for the very file you edited).
The important note about the latter case is that a user who clones your repo needs to execute this command in the repo's <root-directory> to auto-generate the wrapper files:
> gradle wrapper --gradle-version=$v --distribution-type=$distType
$v and $distType are determined from gradle-wrapper.properties:
distributionUrl=https\://services.gradle.org/distributions/gradle-{$v}-{$distType}.zip
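For example, with
distributionUrl=https\://services.gradle.org/distributions/gradle-4.10-bin.zip
the command above would be run with $v = 4.10 and $distType = bin.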
See https://gradle.org/install/ for more information.
The gradle executable is bin/gradle[.bat] in a local distribution. The local distribution doesn't have to be the same as the one specified in the repo. Once the wrapper files are created, gradlew[.bat] can download the specified Gradle distribution automatically (if it doesn't exist locally). The user should then probably regenerate the wrapper files with the new gradle executable (from the downloaded distribution), following the instructions above.
Note: The instructions above assume the user already has at least one Gradle distribution locally (e.g. ~/.gradle/wrapper/dists/gradle-4.10-bin/bg6py687nqv2mbe6e1hdtk57h/gradle-4.10). That covers almost all real cases. But what happens if the user doesn't have any distribution yet?
They can download it manually using the URL in the .properties file. But if they don't place it in the path the wrapper expects, the wrapper will download it again! The expected path is completely predictable but out of scope here (see here for the most complex part).
There are also some easier (but dirty) ways. For example, they can copy the wrapper files (except the .properties file) from any other local/remote repository into their own repository and then run gradlew there. It will automatically download the suitable distribution.
Old question, fresh answer: if you don't upgrade Gradle often (most of us don't), it's better to commit the wrapper to VCS. The main reason, for me, is to speed up builds on the CI server. Nowadays most projects are built and installed by CI servers, on a different server instance every time.
If you don't commit it, the CI server will download the jar for every build, which significantly increases build time. There are other ways to handle this problem, but I find this one the easiest to maintain.

Automating build tasks using eclipse / maven m2e

I am about to use maven to automate my builds. Unfortunately, I am not able to get all the features I want, even after reading several tutorials :(
I would be glad if somebody could explain a way I can achieve all my goals!
I want to automate 3 specific build tasks with several actions for a project from within eclipse, using m2e:
Build snapshot
compile
define current project version + date as version
build jar file
copy jar file into the local repository in the project path itself (${project}/builds/)
Debug snapshot
build snapshot as mentioned above
copy jar file to plugins folder of a local test server
build another project the current project depends on, and copy its jar file to the plugins folder as well
launch server / connect to eclipse debugger (I know how to do that, the previous steps are the important ones)
Create release
compile
define current project version as version
build jar file
copy jar file into the local repository in the project path itself
create javadoc
copy source files and javadoc to an archive folder
increase the project version (for example v6 -> v7)
As mentioned I don't need a perfect solution, just a way to realize this ;)
(Note: chaining multiple launch configurations is not a problem.)
Edit:
Which sections of the pom.xml do I have to modify to realize these steps and how can I invoke them using an eclipse launch configuration?
Hi, based on your requirements I can say the following:
Build Snapshots
Building a SNAPSHOT is usually the convention during development cycle.
1.1 Just use the conventions.
1.2 Date as version
This is a bad idea, because Maven has conventions for what a version looks like (1.0-SNAPSHOT, 1.2.3-SNAPSHOT, etc.).
1.3 Build jar file
Usually done by the jar lifecycle (mvn package).
1.4 The local repository is on your home drive in ${HOME}/.m2/repository for all your projects. Technically you can do what you like, but it's against Maven conventions. The question is: why do you need such a thing?
2.1 Usual procedure
2.2 Usually deployment is not a job for Maven, but you can do such things using the cargo-maven-plugin (integration testing).
2.3 If you have dependencies between projects, you need a CI solution like Jenkins to do such things; otherwise you have to do this manually. But that is different from a multi-module build.
2.4 Integration testing is a different story. It depends on what exactly you'd like to do.
3.1-3.7
The maven-release-plugin will handle such things, except copying to the project path itself, which is against the conventions. For such purposes you need a repository manager.
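As a sketch, the usual two-step flow with the maven-release-plugin, run from the project root:
mvn release:prepare   # sets the release version, tags, and bumps to the next SNAPSHOT
mvn release:perform   # checks out the tag, builds, and deploys the artifacts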
I can recommend reading these books: http://www.sonatype.com/Support/Books

Maven: Prevent upload of default-jar - only upload jar-with-dependencies

I'm evaluating Maven 3 at work. I have to deploy several example projects to a server (no repository), but that's not the problem.
In my current example-project I'm trying to upload only the "jar-with-dependencies".
and that's exactly my problem.
It all works fine, except that the main artifact AND the jar-with-dependencies (created by the assembly plugin) are both uploaded.
How do I prevent Maven, or rather the deploy phase, from uploading the main JAR, and have it upload only a given or specified file (in this case, the assembly "jar-with-dependencies")?
Referring to the question Only create executable jar-with-dependencies in Maven: I can't just change the packaging setting to pom, because that also prevents the assembly plugin from adding my classes to the JAR file; it then creates a JAR containing only the files of the dependencies.
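(For reference, a jar-with-dependencies like this is typically produced by an assembly-plugin configuration along these lines; the version and phase binding below are illustrative:)
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <descriptorRefs>
      <!-- built-in descriptor that bundles classes plus all dependencies -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>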
I hope I'm clear about my problem, and you can help me ;)
If you are just looking for how to attach an additional file to be deployed, you can take a look here:
http://mojo.codehaus.org/build-helper-maven-plugin/attach-artifact-mojo.html
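A minimal sketch of that goal's usage (the file path and classifier below are placeholders):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>attach-artifacts</id>
      <phase>package</phase>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <!-- each listed file is attached and deployed alongside the project -->
          <artifact>
            <file>${project.build.directory}/extra-file.jar</file>
            <type>jar</type>
            <classifier>extra</classifier>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>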
Maybe this helps. If not, describe your needs in more detail.
There seems to be no way to configure the deploy plugin to filter out some of a project's artifacts and selectively deploy the others. Faced with a similar problem, we solved it with the ease-maven-plugin. It fit well into our release process but might not be the right choice for everyone, as it mandates a two-step approach: during the build you make a list of all artifacts and filter out those that you want deployed; in a second step, you run mvn deploy on a separate project (or separate profile) in which the list of artifacts is attached to the project as the only artifacts, which then get deployed. See the examples in the source code of the ease-maven-plugin to better understand how it works.
The original version is not able to filter out specific artifacts of a project. I have forked the project and added patches that add this.