I've been creating some projects in Scala, and there are already several components that I constantly use, reuse, or implement in different ways. I want to start putting all that stuff into some sort of library, but the problem is that I really want to be able to check its implementation while working, like the nice "hot reloading" that the Revolver plugin brings whenever we need to see the changes to our code in the console.
For now it's clear that whenever I want to publish something, I set up my local build.sbt file and publish it with:
sbt publishLocal
And then bring it in as:
"eu.myproject" %% "my-lib" % "1.0.0"
But I would really appreciate a way to work with these libraries with some real-time sync, in order to see the changes without having to publish them after each change.
UPDATE
So thanks to Matthias Berndt I managed to update a project, with nice hot reloading still via Revolver, by configuring the sbt file as:
lazy val root = Project ....
  .dependsOn(
    ProjectRef(file("/HOME/my-lib"), "my-lib"))
I will still research a nice pattern for bringing in a mix of local and published libraries, in order to have them in both dev and prod.
You can use a ProjectRef to add the library as a subproject to the build system of the program that uses the library.
Check out this question: How do you use `ProjectRef` to reference a local project in sbt 1.x?
This blog post should also be helpful:
https://eed3si9n.com/hot-source-dependencies-using-sbt-sriracha
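For illustration, here is a minimal sketch of that idea in plain sbt (not sbt-sriracha's actual API): switch between the local source checkout and the published artifact with a system property. The property name, path, and coordinates are assumptions.

// build.sbt -- a sketch; start sbt with -DuseLocalLib=true to compile
// against the local sources, so Revolver's ~reStart picks up changes
// in my-lib immediately
val useLocalLib = sys.props.get("useLocalLib").contains("true")

lazy val myLibRef = ProjectRef(file("/HOME/my-lib"), "my-lib")

lazy val root = {
  val base = Project("root", file("."))
    .settings(
      // fall back to the published artifact when not using the local checkout
      libraryDependencies ++=
        (if (useLocalLib) Nil else Seq("eu.myproject" %% "my-lib" % "1.0.0"))
    )
  if (useLocalLib) base.dependsOn(myLibRef) else base
}

Because the wiring happens when the build is loaded, sbt needs a reload when the property is toggled.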
When attempting to execute a query via Slick (v3.0.3), I am receiving a com.typesafe.config.ConfigException$Missing exception (wrapped in an ExceptionInInitializerError), exclaiming:
No configuration setting found for key 'slick'
Apparently Slick requires a config value for slick.dumpPaths to be present when debug logging is enabled. Ordinarily, a default value would be provided by the reference.conf file that comes stock in Slick's jar file, but for some reason that file (or that particular key) is not getting picked up in this case.
In addition, adding an application.conf (which includes the requested config value, slick.dumpPaths) to my application's resource directory (src/main/resources/, by default) and/or to the test resource directory does not help -- the exception still occurs.
It turns out this was (apparently) happening because I was attempting to run the Slick query via SBT's Tests.Setup hook. My hook, in build.sbt, looks something like this:
testOptions in Test += Tests.Setup(loader =>
  loader.loadClass("TestSetup").newInstance)
My guess is that SBT has not properly set up the classpath at the time this TestSetup class gets instantiated (and my Slick query tries to execute). Perhaps someone who knows more about SBT's internals can edit this answer to provide more insight, though.
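One possible workaround (an assumption on my part, not something the original answer confirms) is to run the setup inside the test framework itself, where the full test classpath is guaranteed to be available, e.g. with ScalaTest:

import org.scalatest.{BeforeAndAfterAll, Suite}

// mix this into the suites that need the database prepared;
// beforeAll runs on the fully constructed test classpath
trait SlickSetup extends BeforeAndAfterAll { this: Suite =>
  override def beforeAll(): Unit = {
    super.beforeAll()
    // run the Slick setup query here instead of in Tests.Setup
  }
}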
Using Scala Play Framework 2.5,
I build the app into a jar using the sbt plugin PlayScala,
and then build and push a Docker image out of it using the sbt plugin DockerPlugin.
In the source code repository resides conf/development.conf (in the same place as application.conf).
The last line in application.conf says include development, which means that if development.conf exists, its entries will override some of the entries in application.conf, in such a way that all the default values necessary to make the application runnable locally are provided right out of the box, after the source is cloned from source control, with zero extra configuration. This technique lets every new developer slip right into a working application without wasting time on configuration.
The only missing piece that would make this architectural design complete is a way to exclude development.conf from the final runtime of the app -- otherwise these overrides leak into the production runtime and, obviously, the application fails to run.
That could be achieved in various ways.
One way could be to somehow inject logic into the build task (provided as part of the sbt plugin PlayScala, I assume) to exclude the file from the jar artifact.
Another way could be to inject logic into the Docker image creation process; this logic could manually delete development.conf from the existing jar prior to executing it (assuming that's possible).
If you have ever implemented one of the ideas offered, or maybe some different architectural approach that gives the same "works out of the box" feature, please be kind enough to share :)
I usually have the inverse logic:
I use the application.conf file (which Play uses by default) with all the things needed to run locally. I then have a production.conf file that starts by including application.conf, and then overrides the necessary stuff.
For deploying to production (or staging), I specify the production.conf (or staging.conf) file to be used.
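For illustration, a minimal sketch of that layout (the keys and hostnames are assumptions):

# conf/production.conf -- include the defaults, override what differs
include "application.conf"

db.default.url = "jdbc:postgresql://prod-db:5432/postgres"

The file is then selected when starting the packaged app, using the standard config.resource mechanism (myapp is a placeholder for the packaged script name):

./bin/myapp -Dconfig.resource=production.conf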
This is how I solved it eventually.
conf/application.conf is the production-ready configuration; it contains placeholders for environment variables whose values will be injected at runtime by k8s, given the service's deployment.yaml file.
Right next to it sits conf/development.conf - its first line is include application.conf, and the rest of it is overrides which make the application run out of the box, right after git clone, with a simple sbt run.
What makes the above work is the addition of the following to build.sbt:
PlayKeys.devSettings := Seq(
  "config.resource" -> "development.conf"
)
Works like a charm :)
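For illustration, the environment-variable placeholders described above would look something like this in HOCON (the key and variable names are assumptions):

# conf/application.conf -- values injected at runtime by k8s
db.default.url = ${?DATABASE_URL}
db.default.password = ${?DATABASE_PASSWORD}

# conf/development.conf -- local out-of-the-box overrides
include "application.conf"
db.default.url = "jdbc:postgresql://localhost:5432/postgres"
db.default.password = "dev"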
This can be done via the mappings config key of sbt-native-packager:
mappings in Universal ~= (_.filterNot(_._1.name == "development.conf"))
See here.
I have a PostgreSQL (v9.6) instance running on my machine. The database is called 'postgres'. I've managed to open the database in pgAdmin 3 using localhost:5432. In my Play! application (v2.6.2) I have added the driver and the URL to the application.conf file, following the tutorial here, and I have added the javaJdbc dependency to my build.sbt file. So I have the following:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost:5432/postgres"
db.default.username = "user"
db.default.password = "pass"
When I run the application though I get this error in the console:
Cannot connect to database [default]
Could somebody explain to me why this is the case? I can provide more information if needed.
So after quite a bit of digging, it turns out the problem was that I was missing the PostgreSQL dependency. If anyone encounters this problem, I would suggest checking your build.sbt file and adding the org.postgresql dependency if it is missing.
This documentation on the Play website doesn't explain this step.
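For reference, the dependency line would look something like this (the version shown is an assumption; use a current one):

// build.sbt -- the PostgreSQL JDBC driver
libraryDependencies += "org.postgresql" % "postgresql" % "42.2.5"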
I have an issue when trying to import in Scala. The object Database exists under com.me.project.database, but when I try to import it:
import com.me.project.database.Database
I get the error:
object Database is not a member of package com.me.project.controllers.com.me.project.database
Any ideas what the problem is?
Edit:
It is worth mentioning that the import is in the file Application.scala, under the package com.me.project.controllers. I can't figure out why it would append the import to the current package though, weird...
Edit 2:
So using:
import _root_.com.me.project.database.Database
does work, as mentioned below. But should it work without the _root_? The comments so far seem to indicate that it should.
Answer:
So it turns out that I just needed to clean the project for the import to work properly. Both:
import _root_.com.me.project.database.Database
import com.me.project.database.Database
are valid solutions. Eclipse had just gotten confused.
Imports can be relative. Is that the only import you have? Be careful with other imports like
import com.me
Ultimately, this should fix it; then you can try to find out more about it:
import _root_.com.me.project.database.Database
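To illustrate how relative resolution can bite (the nested package here is hypothetical): if a package com.me.project.controllers.com exists, the bare name com inside the controllers package resolves to that nested package first:

package com.me.project.controllers

// read as com.me.project.controllers.com.me.project.database.Database -- fails
import com.me.project.database.Database

// _root_ forces resolution from the top-level package -- works
import _root_.com.me.project.database.Database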
In my case I also needed to check that the object which was not found as a member of the package had compiled successfully.
I realize this question already has an accepted answer, but since I experienced the same problem but with a different cause I figured I'd add an answer.
I had a bunch of interdependent projects which suddenly needed a root import in order to compile. It turned out that I had duplicated the package declaration in a single file. This caused some kind of chain reaction and made it very hard to find the source of the problem.
In summary I had
package foo.bar
package foo.bar
at the top of the file instead of just
package foo.bar
Hope this saves someone some really tedious error hunting.
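For what it's worth, the mechanism behind this (my explanation, not part of the original answer): in Scala, stacked package clauses nest, so the duplicated header places the file's contents in a nested package:

package foo.bar
package foo.bar

// classes declared here live in foo.bar.foo.bar, not foo.bar,
// which is why sibling imports suddenly need _root_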
In my case I had to run sbt clean.
I faced a similar issue where IntelliJ showed an error on importing one file from the same project.
What did not resolve the issue in my case:
adding _root_ to the import statement
sbt clean
restarting the machine
What actually resolved the issue:
Main menu => File => Invalidate Caches / Restart => in the pop-up dialog, click Invalidate and Restart.
I was using IDEA (2019.2.2 Ultimate Edition) on macOS Mojave 10.14.6.
Java -> Scala conversion without cleaning
Don't forget to clean if you convert a file in a project from Java to Scala. I had a continuous integration build where I couldn't get things to work, even though the build was working locally, after I had converted a Java class into a Scala object. Solution: add 'clean' to the build procedure on the CI server. The name of the generated .class file for a Scala object is slightly different than for a Java class, I believe, so this is very likely what was causing the issue.
If you are using Gradle as your build tool, ensure that the jar task is not disabled.
I had multiple modules in my project, where one module depended on a few other modules. However, I had disabled the jar task in build.gradle:
jar {
    enabled = false
}
That caused the modules depending on it to fail to resolve its classes, failing with the above error.
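A minimal sketch of the fix: either remove that block from the library module's build.gradle, or re-enable the task explicitly:

jar {
    enabled = true
}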
I will share my story, just in case it may help someone.
Scenario: the IntelliJ compilation succeeds, but the Gradle build fails on import com.foo.Bar, where Bar is a Scala class.
TLDR reason: Bar was located under src/main/java/... as opposed to src/main/scala/...
Actual reason: Bar was not being compiled by the compileScala Gradle task (from the Gradle Scala plugin), because that task looks for Scala sources only under src/<sourceSet>/scala.
From docs.gradle.org:
All the Scala source directories can contain Scala and Java code. The Java source directories may only contain Java source code.
Hope this helps
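Besides moving the file into src/main/scala, another option (a sketch based on the quoted rule, not part of the original answer) is to point the Scala plugin at the Java directory as well, since Scala source directories may contain both languages:

// build.gradle -- let scalac jointly compile both directories
sourceSets {
    main {
        scala {
            srcDirs = ['src/main/scala', 'src/main/java']
        }
        java {
            srcDirs = [] // prevent javac from compiling the same files twice
        }
    }
}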
I had a similar problem, but none of the solutions here worked for me. What did work, however, was a simple restart of my machine.
Perhaps it was something with my IntelliJ, but after a quick restart everything seems to be working fine.
I had a similar situation, which was failing in both IntelliJ and Maven on the command line. I went to apply the suggested temp fix (adding _root_), but IntelliJ was glitching so badly that this wasn't even possible.
Eventually I noticed that I had mis-created a package so that it repeated the whole path of the package. That meant that the directory my class was in had a subfolder called "com", and the start of my file looked like:
package com.mycompany.mydept.myproject.myfunctionality.sub1
import com.holdenkarau.spark.testing.DataFrameSuiteBase
where I had another package called
com.mycompany.mydept.myproject.myfunctionality.sub1.com.mycompany.mydept.myproject.myfunctionality.sub2
And the compiler was looking for "holdenkarau" under com.mycompany.mydept.myproject.myfunctionality.com and failing.
I had this issue while using IntelliJ and the built-in sbt shell (precisely, I was trying to run the console command, which invokes a compiler check of the code).
In my case, after trying the other suggested solutions in this thread, I found that I could restart the sbt shell and the error would go away. There's a button on the left-hand side, showing a looped green arrow and a small grey square, which does this in one click (obviously, this is subject to JetBrains not changing the design of the IDE!!!).
I hope this helps some people get past this issue quickly.
In my case (in IntelliJ), just renaming the package file to something else >> seeing if it updates the import statements >> running the code >> then renaming it back to the original name worked.