Is there a way to build Scala 2.10.3 from source code?

I know it's a very old thing now, but I need it.
I downloaded the scala-sources archive from the link and unpacked it on Ubuntu 20.04. I also downloaded and installed Ant 1.7.0, and Apache Maven 2.2.1 (I also tried Maven 3.5.0). I am using jdk1.8.0.
I tried running "ant" or "ant build" from the scala directory; the build starts, but fails for the following reasons:
failed to create task or type scalacfork
Build failed
The following messages also appear at the top:
[taskdef] java.util.zip.ZipException: zip file is empty
(several times)
...
[WARNING] POM for 'biz.aQute:bndlib:pom:1.43.0:compile' is invalid.
...
[taskdef] Could not load definitions from resource scala/tools/ant/sabbus/antlib.xml. It could not be found.
I understand that many of these artifacts are no longer available or supported, but maybe someone can suggest a way to get a result.
Maybe I can download the necessary data and build Scala manually? I'm new to this and don't quite understand what I can change and where. And I can't use a newer version of Scala because of work.
I hope for any help. Thank you in advance.

Yes, it is possible. Although it requires some patches in the build process and a few workarounds.
Read the README.rst; it contains some information on how the build is structured.
Run git init, git add ., git commit, as the build uses some git info (commit hash, etc.) during the build. It may not be needed for a release build, but I have not tried.
You need either JDK 1.6 or JDK 1.7. It can't be built on JDK 1.8 (compilation fails even if you remove the version checks from build.xml). And on 1.7 it says it can't build the swing library, so you won't be able to build the distribution.
Fix tools/binary-repo-lib.sh and change the URLs to https, otherwise fetching artifacts will fail. You may also have to remove some of the curl arguments to see what is happening, but that might not be required.
Run pull-binary-libs.sh
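Putting the git and artifact-fetching steps together, the preparation might look roughly like this (a sketch; it assumes pull-binary-libs.sh sits at the source root, as it does in the 2.10.x tree, and that the script's URLs are plain http):

git init
git add .
git commit -m "import scala-2.10.3 sources"              # any message; the build only needs a commit
sed -i 's|http://|https://|g' tools/binary-repo-lib.sh   # switch the artifact URLs to https
./pull-binary-libs.sh                                    # fetch the binary dependencies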
Fix build.xml to use the https protocol. This is based on the official usage documentation. You have to define the repository before the first artifact:dependencies task in the file (I did that in the same block):
<artifact:remoteRepository id="central" url="https://repo1.maven.org/maven2" />
Then you have to tell every artifact:dependencies task to use that repository by adding a remoteRepository element; the tasks should look like:
<artifact:dependencies pathId="extra.tasks.classpath" filesetId="extra.tasks.fileset">
  <dependency groupId="biz.aQute" artifactId="bnd" version="1.50.0"/>
  <remoteRepository refid="central"/>
</artifact:dependencies>
I think this was enough for me to build it using ant build on JDK 1.7 (apparently, swing libs were not built).
But probably the easier way would be to just download a prebuilt Scala version, or tell your build tool to use the Scala version you want.
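For example, if your project uses sbt (an assumption; any build tool with dependency resolution works similarly), pinning the prebuilt compiler is a single line in build.sbt:

// build.sbt -- fetch the prebuilt 2.10.3 compiler and standard library
scalaVersion := "2.10.3"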

Related

Build vs Deploy vs Publish (Eclipse IDE)

I'm a newbie to J2EE, though not a complete newbie. I'm unable to find a good resource (book or video) that could help me understand what exactly happens when we build, deploy, and publish. I have a fair idea, though. So my questions are:
Is there a good resource out there that can help me understand these concepts? I've read some books on Struts and servlets/JSP, but they don't delve into Eclipse and how/what it does. The Eclipse documentation has been helpful, but only slightly.
When we build an application, the Java files are compiled into class files and stored in the Java build path. What else happens during a build? Many people use the term 'library dependencies'; what does this mean? Also, when people refer to dependencies, do they mean files like XML and TLD files?
At what stage (build, or run on server) does the container check whether the dependencies are all right? Say, for instance, the servlet class/name in the web.xml file.
Is it appropriate to say that build is basically compilation, while deploying the project and running it is the same as executing it?
Familiarity with the servlet specification would help you (perhaps some older version would be quicker to read, like 2.4), but a general understanding of what you build and how you do it in Eclipse is what you are after.
The way I see it, during the build Eclipse creates an almost complete version of the WAR (or some other archive, if you use EJBs for instance), and by publishing you deploy it to some server. These are practically the same thing, although Eclipse might just configure the server to use the exploded WAR it has prepared, instead of copying it to some "deploy" directory as you would do when working without an IDE.
If you configure your project well, the build can mean compilation only, but if you have more ceremony in it, then some source generation and moving files around might happen too.
To address your second question, library dependencies can be files that reside in WEB-INF/lib, for instance. Read the spec to know what should be there and what should not. Eclipse tries to copy all of your project's defined dependencies there.
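For illustration, a simplified WAR layout might look like this (the names are placeholders):

mywebapp.war
  index.jsp
  WEB-INF/
    web.xml       deployment descriptor: servlet names/classes, mappings, etc.
    classes/      your compiled .class files
    lib/          library dependencies, e.g. mylib.jar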

Why should the Gradle Wrapper be committed to VCS?

From Gradle's documentation:
The scripts generated by this task are intended to be committed to
your version control system. This task also generates a small
gradle-wrapper.jar bootstrap JAR file and properties file which should
also be committed to your VCS. The scripts delegates to this JAR.
From: What should NOT be under source control?
I think generated files should not be in the VCS.
When are gradlew and gradle/gradle-wrapper.jar needed?
Why not store a gradle version in the build.gradle file?
Because the whole point of the gradle wrapper is to be able, without having ever installed gradle, and without even knowing how it works, where to download it from, which version, to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
If all you had was a gradle version number in a build.gradle file, you would need a README explaining to everyone that gradle version X must be downloaded from URL Y and installed, and you would have to do it every time the version is incremented.
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle
The same argument goes for the JDK; do you want to commit that as well? Do you also commit all your dependency libraries?
Dependencies should be upgraded continuously as new versions are released, to get security and other bug fixes, and because if you get too far behind, catching up again can be a very time-consuming task.
If the Gradle wrapper is incremented for every new release, and it is committed, the repo will grow very large. The problem is obvious when working with distributed VCS where a clone will download all versions of everything.
and without even knowing how it works
Create a build script that downloads the wrapper and uses it to build. Not everyone needs to know how the script works; they only need to agree that the project is built by executing it.
where to download it from, which version
task wrapper(type: Wrapper) {
    gradleVersion = 'X.X'
}
for Gradle version >= 5:
wrapper {
    gradleVersion = 'X.X'
}
and then
gradle wrapper
to download the correct version.
to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
Solved by the steps above. Downloading the Gradle wrapper is no different from downloading any other dependency. The script could be smart enough to check for any current Gradle wrapper and only download it if there is a new version.
If the developer has never used Gradle before and maybe doesn't know the project is built with Gradle, then it is more obvious to run a build.sh than to run gradlew build.
If all you had was a gradle version number in a build.gradle file, you would need a README explaining to everyone that gradle version X must be downloaded from URL Y and installed,
No, you would not need a README. You could have one, but we are developers and we should automate as much as possible. Creating a script is better.
and you would have to do it every time the version is incremented.
If the developers agree that the correct process is to:
Clone repo
Run build script
Then upgrading to the latest Gradle wrapper is no problem. If the version has been incremented since the last run, the script can download the new version.
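A minimal sketch of such a build.sh (hypothetical; the pinned version is only an example, and checksum verification is omitted):

#!/bin/sh
# Hypothetical bootstrap: install the pinned Gradle if missing, then build.
set -e
GRADLE_VERSION="4.10"   # the agreed version, kept in this script

if ! command -v gradle >/dev/null 2>&1; then
    curl -fsSL -o /tmp/gradle.zip "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip"
    unzip -q /tmp/gradle.zip -d "$HOME/opt"
    PATH="$HOME/opt/gradle-${GRADLE_VERSION}/bin:$PATH"
fi

gradle wrapper --gradle-version "$GRADLE_VERSION"   # regenerate the wrapper files
./gradlew build                                     # build with the wrapper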
I would like to recommend a simple approach.
In your project's README, document that an installation step is required, namely:
gradle wrapper --gradle-version 3.3
This works with Gradle 2.4 or higher. This creates a wrapper without requiring a dedicated task to be added to "build.gradle".
With this option, ignore (do not check in) these files/folders for version control:
./gradle
gradlew
gradlew.bat
The key benefit is that you don't have to check a downloaded file into source control. It costs one extra step during installation. I think it is worth it.
According to Gradle docs, adding gradle-wrapper.jar to VCS is expected as making Gradle Wrapper available to developers is part of the Gradle approach:
To make the Wrapper files available to other developers and execution environments you’ll need to check them into version control. All Wrapper files including the JAR file are very small in size. Adding the JAR file to version control is expected. Some organizations do not allow projects to submit binary files to version control. At the moment there are no alternative options to the approach.
What is the "project"?
Maybe there is a technical definition of this idiom that excludes build scripts. But if we accept that definition, then we must say your "project" is not all the things that you need to version!
But if we say "your project" is everything you have done, then we can say you must include it, and only it, in the VCS.
This is very theoretical and maybe not practical for our day-to-day development work. So let's change it to: "your project is every file (or folder) you need to edit directly".
"Directly" means "not indirectly", and "indirectly" means editing another file whose effect is then reflected in this file.
So we arrive at the same point the OP made (and which is said here):
I think generated files should not be in the VCS.
Yes. Because you haven't created them. So they are not part of "your project" according to the second definition.
So, what is the verdict for these files?
build.gradle: Yes. We need to edit it. Our works should be versioned.
Note: it makes no difference where you edit it, whether in your text editor or in the Project Structure GUI environment; either way, you are doing it directly!
gradle-wrapper.properties: Yes. We need to at least determine the Gradle version in this file.
gradle-wrapper.jar and gradlew[.bat]: I have not created or edited them in any of my development work, up to this moment! So the answer is "No". If you have done so, the answer is "Yes" for you on that project (and for the file you edited).
The important note about the last case is that the user who clones your repo needs to execute this command in the repo's <root-directory> to auto-generate the wrapper files:
> gradle wrapper --gradle-version=$v --distribution-type=$distType
$v and $distType are determined from gradle-wrapper.properties:
distributionUrl=https\://services.gradle.org/distributions/gradle-${v}-${distType}.zip
See https://gradle.org/install/ for more information.
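A sketch of extracting those two values and regenerating the wrapper (assuming a distributionUrl of the usual form shown above):

# Pull $v and $distType out of gradle-wrapper.properties, then regenerate.
url=$(grep '^distributionUrl=' gradle/wrapper/gradle-wrapper.properties | cut -d= -f2-)
base=${url##*/gradle-}                            # e.g. "4.10-bin.zip"
v=${base%-*}                                      # e.g. "4.10"
distType=${base##*-}; distType=${distType%.zip}   # e.g. "bin"
gradle wrapper --gradle-version="$v" --distribution-type="$distType"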
The gradle executable is bin/gradle[.bat] in a local distribution. The local distribution is not required to be the same as the one determined in the repo. Once the wrapper files are created, gradlew[.bat] can download the determined Gradle distribution automatically (if it does not exist locally). The user should then probably regenerate the wrapper files using the new gradle executable (from the downloaded distribution), following the instructions above.
Note: the above instructions assume the user already has at least one Gradle distribution locally (e.g. ~/.gradle/wrapper/dists/gradle-4.10-bin/bg6py687nqv2mbe6e1hdtk57h/gradle-4.10). That covers almost all real cases. But what happens if the user has no distribution at all?
They can download it manually using the URL in the .properties file. But if they don't place it in the path the wrapper expects, the wrapper will download it again! The expected path is completely predictable, but out of scope here (see here for the most complex part).
There are also some easier (but dirty) ways. For example, you can copy the wrapper files (except the .properties file) from any other local/remote repository into your repository and then run gradlew there. It will automatically download the suitable distribution.
Old question, fresh answer: if you don't upgrade Gradle often (most of us don't), it's better to commit it to VCS. The main reason for me is to increase the build speed on the CI server. Nowadays, most projects are built and installed by CI servers, with a different server instance every time.
If you don't commit it, the CI server will download the jar for every build, which significantly increases build time. There are other ways to handle this problem, but I find this one the easiest to maintain.

Build failed when trying to compile JVisualVM

I've been following the instructions shown on http://visualvm.java.net/build/build.html when attempting to build JVisualVM.
I checked out the trunk to my hard-drive, I've downloaded http://java.net/projects/visualvm/downloads/download/dev/nb73_visualvm_14012013.zip and extracted its contents to the visualvm/ directory, as asked:
To build the visualvm core tool you need the NetBeans 7.3 platform and profiler binaries available here. These binaries must be extracted into the trunk/visualvm directory. You can use ant run or ant build-zip to build or run VisualVM.
When executing ant run I got a:
compile:
[mkdir] Created dir: C:\Users\user\Desktop\jvisualvm\visualvm\applicationviews\build\classes
[nb-javac] Compiling 19 source files to C:\Users\user\Desktop\jvisualvm\visualvm\applicationviews\build\classes
[nb-javac] warning: [options] bootstrap class path not set in conjunction with -source 1.5
[nb-javac] C:\Users\user\Desktop\jvisualvm\visualvm\applicationviews\src\com\sun\tools\visualvm\application\views\threads\ThreadMXBeanDataManager.java:117: error: cannot find symbol
[nb-javac] super(dummyLong, CommonConstants.SERVER_RUNNING, CommonConstants.SERVER_PROGRESS_INDETERMINATE);
[nb-javac] ^
[nb-javac] symbol: variable SERVER_RUNNING
[nb-javac] location: interface CommonConstants
[nb-javac] C:\Users\user\Desktop\jvisualvm\visualvm\applicationviews\src\com\sun\tools\visualvm\application\views\threads\ThreadMXBeanDataManager.java:117: error: cannot find symbol
[nb-javac] super(dummyLong, CommonConstants.SERVER_RUNNING, CommonConstants.SERVER_PROGRESS_INDETERMINATE);
[nb-javac] ^
[nb-javac] symbol: variable SERVER_PROGRESS_INDETERMINATE
[nb-javac] location: interface CommonConstants
[nb-javac] Note: Some input files use unchecked or unsafe operations.
[nb-javac] Note: Recompile with -Xlint:unchecked for details.
[nb-javac] 2 errors
[nb-javac] 1 warning
I've even installed NetBeans 7.3, but that didn't seem to help a bit!
I'm quite new to these matters; am I missing something?
You mentioned that you are "quite new to these matters", so may I ask if there is a reason you want to build the trunk, specifically? Most of the time, the workflow when using SVN for source control includes tagging releases under /tags. The tagged releases have generally been tested and met a minimum testing criteria to be considered suitable for release, so you will probably have an easier time building one of the tags.
There is nothing wrong with building /trunk for yourself, but it should be considered an "unstable/work in progress" build, so you should expect to encounter problems.
As you may already know, there are pre-compiled binaries available for download on the VisualVM site. If there is no binary for your operating system listed, you can probably find it with your package manager (e.g. sudo apt-get install visualvm on Ubuntu).
That said, don't let me discourage you from trying, if you want to:
In this case, the compiler is telling you that it cannot find the symbols SERVER_RUNNING and SERVER_PROGRESS_INDETERMINATE in the CommonConstants class. These are referenced on line 117 of the ThreadMXBeanDataManager class. If you take a look at that class, you will see the import org.netbeans.lib.profiler.global.CommonConstants statement, which tells us that CommonConstants comes from NetBeans. If we examine the SVN commit history for the ThreadMXBeanDataManager class, we can see that the developer made the changes intending to make VisualVM compatible with NetBeans 7.3. So, there are a few possibilities:
The developer was wrong, and was actually compiling with some other version of NetBeans (possibly a pre-release, etc). If you find that this is the case, you should file a bug report (and a patch, if possible).
You are trying to compile against the wrong version of NetBeans.
Something is wrong with the classpath/build script.
Let's examine #2 and #3. We can take the binary you linked to and find out which jar(s) the CommonConstants class lives in by using JFind or a similar utility (or by Googling, etc.):
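As a command-line alternative to JFind, something like this sketch works too (run from the directory containing the extracted jars):

# Print every jar whose entry list mentions CommonConstants.
find . -name '*.jar' | while read -r jar; do
    unzip -l "$jar" | grep -q 'CommonConstants' && echo "$jar"
done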
We see that the class lives in two places, so the next thing to do is check both of the class files and make sure that they have the constants. Use an unzip utility (e.g. 7-zip) to expand the jar files and use a decompiler (I like JD-GUI) to verify that the class has the constants:
The version you linked to seems OK on my machine, so unfortunately now you have to investigate further. Are either or both of these jars on the classpath in the Ant script that does the compilation? Do you have a different version of the jar on your classpath via an environment variable? There are a lot of possibilities here, so you will have to do some sleuthing. If all of this seems like a lot of work, then I suggest you go with one of the pre-compiled binaries or switch to a tag build.
It looks like you are compiling it against NetBeans 7.2. Did you open the VisualVM project in NetBeans before you tried to compile it? If so, please check which NetBeans Platform is set for the top-level VisualVM project in NetBeans. You should compile VisualVM against the NetBeans platform from nb73_visualvm_14012013.zip. To check that your installation is correct, try the following:
If you have VisualVM open in NetBeans, close NetBeans
find all trunk/visualvm/*/nbproject/private directories in the VisualVM source tree and delete each private directory (a command sketch follows below).
use ant run from the command line
If that works, open NetBeans, register trunk/visualvm/netbeans as a NetBeans Platform, and set it as the platform for the VisualVM top-level project.
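For step 2, the deletion could be done like this on a Unix-like shell (a sketch; on Windows, remove the folders manually or with an equivalent command):

# Remove every per-user NetBeans metadata folder from the source tree.
find trunk/visualvm -type d -path '*/nbproject/private' -prune -exec rm -rf {} +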

Automating build tasks using eclipse / maven m2e

I am about to use Maven to automate my builds. Unfortunately, I am not able to get all the features I want, even after reading several tutorials :(
I would be glad if somebody could explain a way I can achieve all my goals!
I want to automate 3 specific build tasks, each with several actions, for a project from within Eclipse, using m2e:
Build snapshot
compile
define current project version + date as version
build jar file
copy jar file into the local repository in the project path itself (§{project}/builds/)
Debug snapshot
build snapshot as mentioned above
copy jar file to plugins folder of a local test server
build another project the current project depends on, and copy its jar file to the plugins folder as well
launch server / connect to eclipse debugger (I know how to do that, the previous steps are the important ones)
Create release
compile
define current project version as version
build jar file
copy jar file into the local repository in the project path itself
create javadoc
copy source files and javadoc to an archive folder
increase the project version (for example v6 -> v7)
As mentioned I don't need a perfect solution, just a way to realize this ;)
(Annotation: Chaining multiple launch configurations is not a problem.)
Edit:
Which sections of the pom.xml do I have to modify to realize these steps and how can I invoke them using an eclipse launch configuration?
Hi, based on your requirements I can say the following:
Build Snapshots
Building a SNAPSHOT is usually the convention during the development cycle.
1.1 Just use the conventions.
1.2 Date as version
This is a bad idea, because Maven has conventions for what a version looks like (1.0-SNAPSHOT or 1.2.3-SNAPSHOT, etc.).
1.3 Build jar file
Usually done by the jar lifecycle (mvn package).
1.4 The local repository is on your home drive in ${HOME}/.m2/repository for all your projects. Technically you can do what you like, but it's against the Maven conventions. The question is: why do you need such a thing?
2.1 Usual procedure
2.2 Usually deployment is not a job for Maven, but you can do such things by using the cargo-maven-plugin (integration testing); see the sketch after this list.
2.3 If you have dependencies between projects, you need a CI solution like Jenkins to do such things; otherwise you have to do this manually. But that is different from a multi-module build.
2.4 Integration testing is a different story. It depends on what exactly you'd like to do.
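A minimal sketch of such a Cargo configuration in the pom.xml (container id, plugin version, and paths are illustrative assumptions, not taken from your project):

<!-- Hypothetical: deploy the built artifact to a local test container. -->
<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <version>1.4.9</version>
  <configuration>
    <container>
      <containerId>tomcat7x</containerId>
      <home>${env.CATALINA_HOME}</home>
    </container>
  </configuration>
</plugin>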
3.1-3.7
The maven-release-plugin will handle such things, except copying into the project path itself, which is against the conventions. For such purposes you need a repository manager.
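A pom.xml sketch of wiring in the release plugin (the version is an example; the plugin also needs an <scm> section pointing at your VCS, omitted here):

<!-- Hypothetical: tags the release, bumps the version, builds from the tag. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <version>2.5.3</version>
    </plugin>
  </plugins>
</build>

It is then invoked with mvn release:prepare release:perform.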
I can recommend reading these books: http://www.sonatype.com/Support/Books

Target Platform for PDE Headless build does not work

I am currently trying to get my headless pde-build working but I am stuck on a point where I do not know how to continue.
The problem is how to define the related target platform to compile the plugins against.
I have a build.bat with the following call (all in one line!):
java -jar D:\target\eclipse\plugins\org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar
-application org.eclipse.ant.core.antRunner
-f D:\target\eclipse\plugins\org.eclipse.pde.build_3.5.2.R35x_20100114\scripts\productBuild\productBuild.xml
-Dbuilder=c:\pde-build\scripts %*
I tried to create the target eclipse platform from different parts: The eclipse SDK, RCP SDK, Delta Pack, PDE-SDK in all combinations but none of them worked well.
I got the following error:
BUILD FAILED
D:\target\eclipse\plugins\org.eclipse.pde.build_3.5.2.R35x_20100114\scripts\productBuild\productBuild.xml:18: Cannot find ${eclipse.pdebuild.scripts}/build.xml imported from D:\target\eclipse\plugins\org.eclipse.pde.build_3.5.2.R35x_20100114\scripts\productBuild\productBuild.xml
where the variable ${eclipse.pdebuild.scripts} does not get resolved. I also tried to pass this parameter via the command line, but then I got another error regarding a missing svn task, which is absolutely confusing, as this works when my local Eclipse installation is referenced.
When I replace the path from d:/target/eclipse to my local eclipse installation the pde build works as expected!
This leads me to the conclusion that the configuration of the target Eclipse is not correct, but at the moment I have no idea how to configure it!
My goal is to automate the PDE build, first on my local machine without referencing my local Eclipse, and later to integrate this build process into our running CruiseControl instance.
As I have already seen another question about defining the target Eclipse, I would be happy if anyone could contribute hints or facts regarding the problem.
Regards,
Andreas
When performing a headless build, the target can be separate from the eclipse that is actually running the build itself. The problem you had here is that the eclipse that you were using to run the build did not have PDE/Build properly installed.
This is why the ${eclipse.pdebuild.scripts} was not set, because PDE/Build was not installed into that eclipse instance, the org.eclipse.pde.build bundle was not resolved and the code that sets this property never got called. Similarly, the necessary ant classpath entries for PDE/Build tasks would not have been set up properly either.
You need the Eclipse with PDE installed inside to run the build, but the target for the build can be separate from this.
In the build.properties file found under -Dbuilder=c:\pde-build\scripts you can set several properties:
baseLocation This is a path to an eclipse that is your target.
buildDirectory This is where the build will actually take place, source is fetched to plugins/ and features/ subfolders, but if there are already binary plugins located here then those become part of the target as well.
pluginPath This is a list of paths (separated with ';' on Windows or ':' on Linux) containing other locations that should be considered as part of your target. These locations can be several things:
The root of an eclipse-like install with plugins/ and features/ subfolders. This is a good way to provide the delta-pack instead of just unzipping it on top of an eclipse install.
The root of a workspace-like folder, where all subfolders are treated as plugins or features depending on the presence of a manifest or feature.xml.
The root of a bundle or feature, or the jar for a bundle.
If you are doing a p2 build (p2.gathering = true) you can also provide p2 repositories under a ${repoBaseLocation} which will be transformed and placed under ${transformedRepoLocation} and will become part of your target, and the p2 metadata there will get reused during the build.
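Tying these together, a build.properties excerpt might look like this (all paths are illustrative assumptions):

# c:/pde-build/scripts/build.properties (excerpt)
baseLocation=D:/target/eclipse
buildDirectory=D:/build
# ';' separates pluginPath entries on Windows, ':' on Linux
pluginPath=D:/deltapack/eclipse;D:/extra-bundles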
After some more time investigating, I found out what I did wrong so far. As mentioned above, defining the target platform is not as easy as copying the SDK and plugins into one location (as it was in the early days of Eclipse development).
The working solution for now is the following: copy the Eclipse SDK into the target location and run this version. Install into it the necessary PDE tools to enable plugin development. After that, close the IDE and copy the delta pack plus the respective SVN plugin (I used org.eclipse.pde.build.svn-1.0.1RC2 from SourceForge) into the target platform, and you're done.
Now my automated PDE build is running as expected.
The only minor issue now is the following: the resulting product contains Eclipse-specific menu entries which are not there when I run it from inside my dev Eclipse.
Any hints on that?
I just posted an answer to my question on this kind of topic; maybe it can help you:
Plugin product VS Feature product