I'm working on a project in a very security-conscious place with no proxy access to the online repositories SBT usually requires. We'd like to fetch the dependencies and transitive dependencies we need once.
How can sbt be forced to fetch all the dependencies a project needs once and, from there on, work only offline? I tried doing exactly that from home. I then copied over everything under:
~/.ivy2/cache
~/.ivy2/local
$ACTIVATOR_HOME/repository
but SBT, even when executed with sbt "set offline := true" run, still goes and tries to fetch everything online, which is a pain. It then finally breaks and complains that it can't find some dependency.
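(For reference, the kind of sbt configuration I am after would be something like this rough build.sbt sketch -- the file resolvers are only an illustration of pointing sbt at the copied caches, and the activator path is the Ubuntu one that shows up in the property files below:)
offline := true
// Illustrative: make the copied local repositories the only places sbt looks.
// externalResolvers (rather than resolvers) replaces the default Maven Central entry;
// ~/.ivy2/cache is picked up automatically as Ivy's resolution cache.
externalResolvers := Seq(
  Resolver.file("activator-local", file("/opt/dev/activator/1.3.12/repository"))(Resolver.ivyStylePatterns),
  Resolver.file("ivy-local", file(sys.props("user.home") + "/.ivy2/local"))(Resolver.ivyStylePatterns)
)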
UPDATE: I noticed another source of trouble but can't yet conclude it is the culprit of the broken build described above. I build and fetch the dependencies for the project on a Linux (Ubuntu) box and then copy all the files over to the corporate Windows 7 Pro environment. I found that many property files under ~/.ivy2/cache refer to the absolute path of the activator repository directory on Ubuntu, which is of course incorrect in the Windows environment, e.g.
#ivy cached data file for ch.qos.logback#logback-classic;1.1.3
#Fri Mar 10 08:39:37 CET 2017
artifact\:ivy\#ivy.original\#xml\#-1844423371.location=/opt/dev/activator/1.3.12/repository/ch.qos.logback/logback-classic/1.1.3/ivys/ivy.xml
artifact\:ivy\#ivy\#xml\#1016118566.is-local=true
artifact\:ivy\#ivy\#xml\#1016118566.location=/opt/dev/activator/1.3.12/repository/ch.qos.logback/logback-classic/1.1.3/ivys/ivy.xml
artifact\:ivy\#ivy.original\#xml\#-1844423371.is-local=true
artifact\:ivy\#ivy\#xml\#1016118566.exists=true
artifact\:logback-classic\#jar\#jar\#804750561.is-local=true
artifact\:logback-classic\#jar\#jar\#804750561.location=/opt/dev/activator/1.3.12/repository/ch.qos.logback/logback-classic/1.1.3/jars/logback-classic.jar
artifact\:ivy\#ivy.original\#xml\#-1844423371.exists=true
artifact\:logback-classic\#jar\#jar\#804750561.exists=true
So I went and did a find-and-replace, but the build still doesn't work. It doesn't look like a brilliant idea to have thousands of property files hardcoding an absolute path to the activator location. I would prefer they used an environment variable for that.
Maybe you could try coursier?
Not only does it offer
better offline mode - one can safely work with snapshot dependencies if these are in cache (SBT tends to try and fail if it cannot check for updates)
but it is also much faster than Ivy thanks to parallel artifact downloading. The project is young but promising.
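Trying it out is a one-line addition to project/plugins.sbt (the version below is just an example; check the coursier project for the current release):
addSbtPlugin("io.get-coursier" % "sbt-coursier" % "1.0.3")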
Related
To speed up our development workflow we split the tests and run each part on multiple agents in parallel. However, compiling the test sources seems to take up most of the time in the testing steps.
To avoid this, we pre-compile the tests using sbt test:compile and build a docker image with compiled targets.
Later, this image is used on each agent to run the tests. However, it seems to recompile the tests and application sources even though the compiled classes exist.
Is there a way to make sbt use existing compiled targets?
Update: To give more context
The question strictly relates to scala and sbt (hence the sbt tag).
Our CI process is broken down into multiple stages. It's roughly something like this.
Stage 1: We use SBT to compile the Scala project into Java bytecode using sbt compile, and compile the test sources in the same stage using sbt test:compile. The targets are bundled into a docker image and pushed to the remote repository.
Stage 2: We use multiple agents to split and run the tests in parallel.
The tests run from the built docker image, so the environment is the same. However, running sbt test causes the project to recompile even though the compiled bytecode exists.
To make this clear: I basically want to compile on one machine and run the compiled test sources on another without recompiling.
Update
I don't think https://stackoverflow.com/a/37440714/8261 is the same problem because, unlike there, I don't mount volumes or build on the host machine. Everything is compiled and run within docker, just in two build stages. Because of this, the file modification times and paths are kept the same.
The debug output has something like this
Initial source changes:
removed:Set()
added: Set()
modified: Set()
Invalidated products: Set(/app/target/scala-2.12/classes/Class1.class, /app/target/scala-2.12/classes/graph/Class2.class, ...)
External API changes: API Changes: Set()
Modified binary dependencies: Set()
Initial directly invalidated classes: Set()
Sources indirectly invalidated by:
product: Set(/app/Class4.scala, /app/Class5.scala, ...)
binary dep: Set()
external source: Set()
All initially invalidated classes: Set()
All initially invalidated sources:Set(/app/Class4.scala, /app/Class5.scala, ...)
Recompiling all 304 sources: invalidated sources (266) exceeded 50.0% of all sources
Compiling 302 Scala sources and 2 Java sources to /app/target/scala-2.12/classes ...
It shows no initial source changes, but the products are invalidated.
Update: Minimal project to reproduce
I created a minimal sbt project to reproduce the issue.
https://github.com/pulasthibandara/sbt-docker-recomplile
As you can see, nothing changes between the build stages, other than the second stage running in a new step (a new container).
While https://stackoverflow.com/a/37440714/8261 pointed in the right direction, the underlying issue and the solution were different.
Issue
SBT seems to recompile everything when it is run in different stages of a docker build. This is because docker compresses the images created in each stage, which strips the millisecond portion out of the lastModifiedDate of the sources.
SBT relies on lastModifiedDate to determine whether sources have changed, and since it differs (in the milliseconds part) the build triggers a full recompilation.
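A quick way to confirm this (a sketch; /app/Class4.scala is just one of the source files from the log above) is to check whether the millisecond part of the timestamps survives the stage boundary, e.g. from a Scala console inside the container:
new java.io.File("/app/Class4.scala").lastModified % 1000
// 0 for every source file after the stage boundary suggests the milliseconds were stripped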
Solution
Java 8:
Set -Dsbt.io.jdktimestamps=true when running SBT, as recommended in https://github.com/sbt/sbt/issues/4168#issuecomment-417655678, to work around this issue.
Newer:
Follow the recommendation in https://github.com/sbt/sbt/issues/4168#issuecomment-417658294
I solved the issue by setting the SBT_OPTS environment variable in the Dockerfile like this:
ENV SBT_OPTS="${SBT_OPTS} -Dsbt.io.jdktimestamps=true"
The test project has been updated with this workaround.
Using SBT:
I think there is already an answer to this here: https://stackoverflow.com/a/37440714/8261
It looks tricky to get exactly right. Good luck!
Avoiding SBT:
If the above approach is too difficult (i.e. getting sbt test to accept that your test classes do not need recompiling), you could avoid sbt altogether and run your test suite using java directly.
If you can get sbt to log the java command that it is using to run your test suite (e.g. using debug logging), then you could run that command on your test runner agents directly, which would completely preclude sbt re-compiling things.
(You might need to write the java command into a script file, if the classpath is too long to pass as a command-line argument in your shell. I have previously had to do that for a large project.)
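One hypothetical way to capture that information is a small custom task, run once in the compile stage, that writes the resolved test classpath to a file so an agent-side script can hand it to java together with your test framework's runner (the task and file names below are made up, not sbt built-ins):
lazy val writeTestClasspath = taskKey[File]("Write the resolved test classpath to a file")
writeTestClasspath := {
  // Join the entries with the platform separator so agents can read the file
  // and pass its contents straight to `java -cp`.
  val entries = (fullClasspath in Test).value.map(_.data.getAbsolutePath)
  val out = target.value / "test-classpath.txt"
  IO.write(out, entries.mkString(java.io.File.pathSeparator))
  out
}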
This would be a much hackier approach than the one above, but might be quicker to get working.
A possible solution might be to define your own sbt task without dependencies, or to try to change the test task. For example, you could create a task that runs a JUnit runner, if that is your testing framework. To define a task, see the documentation on Implementing Tasks.
You could even go as far as compiling, sending the code over and running it on the remote agents from the same task, since it can be any Scala code you want. From the sbt reference manual:
You could be defining your own task, or you could be planning to redefine an existing task. Either way looks the same; use := to associate some code with the task key
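As a rough sketch of such a task (assuming JUnit is among your test dependencies; the suite name is made up, and fullClasspath in Test is deliberately avoided because it would pull compile back in):
lazy val runPrecompiledTests = taskKey[Unit]("Run tests from the existing class files without recompiling")
runPrecompiledTests := {
  // Assemble the classpath from the already-produced class directories plus the
  // resolved external dependencies; none of these keys depend on the compile task.
  val cp = Seq((classDirectory in Compile).value, (classDirectory in Test).value) ++
    (externalDependencyClasspath in Test).value.map(_.data)
  val exitCode = Fork.java(
    ForkOptions(),
    Seq("-cp", cp.map(_.getAbsolutePath).mkString(java.io.File.pathSeparator),
      "org.junit.runner.JUnitCore", "com.example.MyTestSuite") // illustrative suite name
  )
  if (exitCode != 0) sys.error(s"Tests failed with exit code $exitCode")
}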
I'm using the latest Jenkins (v 1.590), LOL, but the official Jenkins site says 1.588. I'm 200% sure that I saw 1.589 and 1.590 a few days back on the official Jenkins download site (when I wanted to upgrade Jenkins to a newer version).
This is what I see at the bottom of my Jenkins instance page.
Page generated: Nov 19, 2014 12:07:51 PM | REST API | Jenkins ver. 1.590
Now, the issue I'm facing: since I recently upgraded a few of the plugins and Jenkins itself, some of the jobs are missing (I see this can happen during upgrades, but upgrading to the latest Jenkins should fix it, and I'm two steps ahead of what Jenkins has on their official site, right?):
I go to Manage Jenkins > Manage Plugins, go to the Available tab, check a bunch of plugins to install (Artifactory, Maven Project Plugin, etc.) and restart Jenkins via the GUI (which happens automatically once the plugins are downloaded/installed). After the restart, I check whether the plugins now show up under the "Installed" tab, but to my luck they are still listed under the "Available" tab and NOT under "Installed". If I open an existing job's configuration or create a new job, the features the installed plugins should provide are NOT visible, i.e. having installed the Maven Project Plugin, I don't see an option to create a Maven-style (2/3) project when creating a new job.
I see valid .jpi files for the respective plugins in the plugins folder under JENKINS_HOME, and there are some .pinned files as well. I have tried this a couple of times, but the plugins are not visible once installed. The installation doesn't give any error during the whole operation.
The Jenkins system log file (from the Jenkins restart) is attached (NOTE: use the slow download button to see/download this log file).
Download at SpeedyShare
or
http://speedy.sh/x6vd8/Jenkins.System.Log
The issue was with the plugin permissions and the expanded folders.
If you look under the plugins folder, you'll see .jpi or .hpi files (Jenkins .jpi and Hudson .hpi).
If I have awesomeplugin.jpi, then there will be a folder called awesomeplugin.
Using Slav's hint, I ran a bunch of checks and found that, out of the 70+ plugins I had installed, a few had somehow ended up with "root" as both owner and group on their .jpi files and corresponding folders.
Now, the best solution one can try (the safest approach) is to run chown -R yourvalidjenkinsuser:yourvalidgroup * and chmod -R 755 * as root. Before doing this, stop/shut down Jenkins.
I went even a step further: I first took a backup of the config files / the whole JENKINS_HOME folder. Then I went to the plugins folder and removed all the folders corresponding to the .jpi files, using the root account or as the owner of those folders (NOTE: I didn't delete the .jpi files themselves). Then I ran the above two commands (chown/chmod) and started Jenkins.
OUTCOME:
When I go to Jenkins > New Item (to create a new job), Shenzi, all the different job type options are showing up again. That includes the Maven 2/3 project type I had found missing, and a few others like "Multi-configuration project" and the Multijob Project type; all of these were missing and now they are showing up.
OK, I also checked one of the old jobs, went to its configuration and, Shenzi!!, I now see all the features there, e.g. the Promoted Job plugin's "Promote builds when..." checkbox. This feature, which I had configured some time back, had gone missing, but now it's showing up again.
A few of the Maven jobs that I created in the past for Maven Release Plugin and Release Plugin POC work had a bunch of build steps in them. After this whole mess I found there was nothing left in the Build section, but after the above solution everything is back. I can see the configurations and build steps populated as they were set.
I hope this can help someone facing a similar issue.
Still, I don't know why my Jenkins version is 1.590 (which Jenkins updated to recently in automatic fashion) while the Jenkins site today says their latest artifact is version 1.588 (seems like a mystery).
When you say "valid .hpi files", did you actually test that they are valid? You should be able to rename them to .zip and extract as a valid archive. An issue I face a lot is the network layer filtering system that we have in the office. It intercepts Jenkins's calls sometimes with the filtering system's login page, instead of whatever internet resource was being loaded.
If your .hpi files are not valid zip archives, open them in a text editor, and see if they are in the form of an html page/response of some kind.
"An easy way to get rid of *everything* generated by SBT?" asks for an easy way to clean up all the files generated by sbt, and didn't find one. I'll ask for a hard one: how do I get the equivalent of make cleanall with sbt?
UPDATE: For a definition of "everything", you have to know why you would ever need to clean everything. There are two common reasons:
To distribute something: You need to confirm that someone can take a fresh machine, download your code, and build it without problem. (Or, someone can't, and you want to debug).
When something has gone horribly wrong. Some funny version of some jar somewhere is breaking something somehow. Sometimes it's easiest to just start fresh, rather than try to debug... (See Jim Gray's Why Computers Stop)
Here are the usual suspects of things you need to clean:
~/.sbt/boot - Safe cache of sbt itself, to avoid disappearing-JAR oddities
~/.sbt/**/target - Compiled global plugin/build definitions.
~/.ivy2/cache - Ivy cache of resolved dependencies
~/.ivy2/local - publishLocal files
<cwd>/project/**/target - Compiled build/plugin definitions
<cwd>/**/target - Artifacts from building your project, including compiler caches, classfiles, etc.
sbt is unable to provide a "clean everything" task directly, because deleting the JARs/classes sbt is actively using to run your build leads to super odd and evil behavior. But you could write a simple BASH script which can accomplish this if desired.
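As a sketch of that idea, in plain Scala rather than bash (run it outside of sbt so nothing it deletes is in use, e.g. with scala cleanall.scala; the paths are the usual suspects above and may need adjusting):
// cleanall.scala
import java.io.File
val home = new File(sys.props("user.home"))
val cwd  = new File(".").getCanonicalFile
// Recursively delete a file or directory.
def delete(f: File): Unit = {
  if (f.isDirectory) Option(f.listFiles).getOrElse(Array.empty[File]).foreach(delete)
  f.delete()
}
// Find every directory literally named "target" under a root
// (covers ~/.sbt/**/target, <cwd>/project/**/target and <cwd>/**/target).
def targetDirs(root: File): Seq[File] =
  if (!root.isDirectory) Seq.empty
  else {
    val dirs = Option(root.listFiles).getOrElse(Array.empty[File]).filter(_.isDirectory)
    dirs.filter(_.getName == "target") ++ dirs.filterNot(_.getName == "target").flatMap(targetDirs)
  }
val toDelete =
  Seq(new File(home, ".sbt/boot"), new File(home, ".ivy2/cache"), new File(home, ".ivy2/local")) ++
    targetDirs(new File(home, ".sbt")) ++
    targetDirs(cwd)
toDelete.filter(_.exists).foreach { d => println(s"deleting $d"); delete(d) }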
Is it possible to compile my Play! framework application only server-side?
Since I mount a Samba share on my client from the server hosting Play!, the paths differ between client and server (modules, play, libs). So eclipsify gives me the server paths on my client instead of the client paths. Because of this, the client gives me a build error.
Possible solutions would be:
Change the eclipsify paths per client configuration.
Only compile my app on the server (preferred since there'll be no differences in env settings).
Can anyone tell me how one of these options would be possible?
Take a look at the play-maven plugin. Using Maven for dependency management means all developers will have the same pom/config file; when running a Maven build, the jars/libs are downloaded from the repository server (you can use your own repo server too).
Why don't you install the Play framework on the client? The framework is meant for development tasks, so you should install it on your development machine (the client, I presume). The Play framework is freely downloadable and easy to install on your client.
I've found a temporary "solution" to let each client define its own path (it will probably be overwritten by play eclipsify? Can I change this?).
In Eclipse I've added a variable called PLAY_HOME under Window > Preferences > Java > Build path > Classpath Variables pointing to "D:\play-1.2.2" in this case.
In the .classpath I've replaced all absolute paths:
<classpathentry kind="lib" path="/usr/local/bin/play-1.2.2/framework/lib/...jar" />
to:
<classpathentry kind="var" path="PLAY_HOME/framework/lib/...jar"/>
Still no compilation on the server / continuous integration etc., but it's a working solution for now, though it could be improved (the client-server dependency differences still exist).
Would be nice to check if the version of play matches
Would be nice to make the PLAY_HOME variable optional by defaulting it to '..' (parent dir)
Perhaps an Ant script is what you need?
If I understand your question correctly, you want multiple developers to work on a single instance of an application hosted on some server?
It's maybe not the answer you're looking for, but my advice: don't do it this way.
Developing directly on a server, especially with multiple developers, is one of the great anti-patterns in development. Typically, only beginners and rather non-professional developers (no offense meant) do their development this way.
Restarting the server, debugging code, working in the same files... it only ends in tears when doing this 'shared' development.
Make sure you can run the application completely isolated on each workstation. Use version control to check in changes. If two developers have been working on the same code, you at least have a chance to rectify the situation (and a rather good chance if you use e.g. Mercurial or Git). If you still want a global server, e.g. to demo changes to non-developers, just periodically check out a snapshot from version control and deploy that to this server.
I have taken the org.eclipse.equinox.p2.examples.rcp.prestartupdate project and adapted it for use in my RCP application. I then setup an update repository that gets updated as part of my nightly build.
When I open my application it goes through the motions like it is updating - it finds the update site, generates an uninstall and install operand for each bundle correctly and says that it finished with no errors. The problem is that the plugins never actually get installed in the plugins folder even though the profile gets updated (a subsequent run states there are no updates). Next time my build runs it correctly identifies there are updates, but the same thing happens again.
I have spent days debugging and the only thing that looks out of the ordinary (not that I fully understand what is going on) is that during the final configure phase none of the TouchpointData objects have any instructions so it doesn't look like configure is doing what it should.
I really have no clue where to look next and would like to see if anyone else has any ideas.
Update:
I finally figured out what was going on.
The problem started when I built my product without generating the metadata repository. When building through Eclipse I didn't check "Generate metadata repository" in the Export Product wizard because I didn't need a p2 repository, just the product. The problem is that without checking that option the product does not install as p2-enabled, causing side effects such as not generating a profile, among other things.
I tried to compensate for this by manually creating a profile in code, which I have since found out is a really bad idea. My original problems were caused by my profile not being set up correctly.
Once I started exporting the product with "Generate metadata repository" checked the update started correctly installing the new plugins.
The problem I have now is that although the plugins are being installed correctly, the executable is getting trashed and I cannot launch my application any more. I am building my update site through Hudson and the binary folder which is present when I use the Eclipse Export Product wizard is missing. I am assuming that is what is going wrong now.
Any ideas why the binaries would not be building in my headless PDE build?
Figured this out also. I had assumed that all I needed was the individual launcher plugins for the platforms I wanted to build for. Since I was trying to understand the process, I was copying over plugins one by one to the build server. It turns out that to include the platform-specific binaries in the build you need the org.eclipse.equinox.executable feature from the delta pack. Once I added that to the build, the binaries started showing up in the output. With the binaries, the update mechanism works exactly as intended.