Object not found when it is in src_managed folder - scala

This is (I think) a different question from Type not found: type .. when type is in src_managed folder.
I am building with sbt 1.1.1. I have set up a code generation task in sbt that is running as expected and creating a number of files with the same structure:
package com.a3.traffic
package object Vendor
And they are imported in other files as:
import com.a3.traffic.Vendor._
The files are generated under src_managed. I have tried two different setups:
src_managed / main / Vendor
src_managed / main / scala / com / a3 / traffic / Vendor
In both cases I get the following error:
[error] /Users/luis/IdeaProjects/SparkTrafficAllocation/core/src/main/scala/com/a3/traffic/Params.scala:5:28: object Vendor is not a member of package com.a3.traffic
[error] import com.a3.traffic.Vendor._
I can fix that by moving the generated code to src / main / scala / com / a3 / traffic / Vendor (that is, with the rest of my code), but then I get this:
[error] /Users/luis/IdeaProjects/SparkTrafficAllocation/core/target/scala-2.11/src_managed/main/scala/com/a3/traffic/Vendor/Vendor.scala:3:16: package is already defined as package object Vendor
[error] package object Vendor {
I find this quite puzzling. The objects defined in src_managed cannot be seen from my code, but it can see what is in the package. How can I make the objects in src_managed available to the rest of the package?
EDIT
I created a minimal project to show this https://github.com/sisamon/MinimalApp
EDIT 2
I am using a name / package.scala layout (a package object name), as the original name.scala with a case class / case object was not working.

The problem is here:
def generator(x: Country) = {
  generateADT("Vendor", x.vendor)
  generateADT("InstallationType", x.installationType)
}
Remember that your task MUST return a Seq with ALL the files that were generated!
Each generateADT call returns a Seq of one File, and since a block returns its last expression, you are returning only the Seq of the last call (in this case InstallationType); that is why your Vendor is not found!
You can check that by commenting out the second line, which makes the first call the return value; in that case Vendor will be found!
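For context, such a generator is typically wired into the build roughly like this (a sketch; `country` is a stand-in for however the Country value is obtained in the real build):
sourceGenerators in Compile += Def.task {
  generator(country) // whatever this expression evaluates to is the Seq of files the compiler will see
}.taskValue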
There are a couple of ways to fix this; the simplest and most elegant (IMHO) would be this:
def generator(x: Country): List[File] =
  List(
    ("Vendor", x.vendor),
    ("InstallationType", x.installationType)
  ).map((generateADT _).tupled)
def generateADT(base: String, d: Descriptor): File = {
  // ...
  // The path really does not matter, as long as it is inside the src_managed folder.
  // (Note: `.value` is expanded by a macro and only works inside a task or setting definition.)
  val adtFile = (sourceManaged in Compile).value / s"${base}.scala"
  IO.writeLines(adtFile, code)
  adtFile
}
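One caveat: since .value can only be expanded inside a task or setting macro, in practice you may prefer to resolve the directory once in the task and pass it down. A hypothetical variant (my sketch, not from the original answer):
// Hypothetical variant: the task resolves sourceManaged once and hands the directory down.
def generateADT(outDir: File)(base: String, d: Descriptor): File = {
  val code: List[String] = ??? // the generated source lines, elided as in the original
  val adtFile = outDir / s"${base}.scala"
  IO.writeLines(adtFile, code)
  adtFile
}
// Usage inside the task body: list.map((generateADT(outDir) _).tupled)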
PS: As a piece of advice, you should explicitly write the return type of all functions/methods. Not only will it help the type inference of other things, it will also avoid a couple of compile errors and increase the readability of your code.

Related

Sealed trait and dynamic case objects

I have a few enumerations implemented as sealed traits and case objects. I prefer the ADT approach because of the non-exhaustive-match warnings, and mostly because we want to avoid type erasure. Something like this:
sealed abstract class Maker(val value: String) extends Product with Serializable {
  override def toString = value
}
object Maker {
  case object ChryslerMaker extends Maker("Chrysler")
  case object ToyotaMaker extends Maker("Toyota")
  case object NissanMaker extends Maker("Nissan")
  case object GMMaker extends Maker("General Motors")
  case object UnknownMaker extends Maker("")
  val tipos = List(ChryslerMaker, ToyotaMaker, NissanMaker, GMMaker, UnknownMaker)
  private val fromStringMap: Map[String, Maker] = tipos.map(s => s.toString -> s).toMap
  def apply(key: String): Option[Maker] = fromStringMap.get(key)
}
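For illustration, the lookup then behaves like this (a quick sketch):
Maker("Toyota") // Some(ToyotaMaker)
Maker("")       // Some(UnknownMaker)
Maker("BMW")    // None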
This is working well so far. Now we are considering giving other programmers access to our code to allow them to configure it on site. I see two potential problems:
1) People messing up and writing things like:
case object ChryslerMaker extends Vendor("Nissan")
and people forgetting to update the tipos list.
I have been looking into using a configuration file (JSON or CSV) to provide these values and read them as we do with plenty of other elements, but all the answers I have found rely on macros and seem to be extremely dependent on the Scala version used (2.12 for us).
What I would like to find is:
1a) (Preferred) A way to dynamically create the case objects from a list of strings, making sure the objects are named consistently with the value they hold.
1b) (Acceptable) If this proves too hard, a way to obtain the objects and the values during the test phase.
2) Check that the number of elements in the list matches the number of case objects created.
I forgot to mention: I have looked briefly at enumeratum, but I would prefer not to include additional libraries unless I really understand the pros and cons (and right now I am not sure how enumeratum compares with the ADT approach; if you think this is the best way and can point me to such a discussion, that would work great).
Thanks!
One idea that comes to my mind is to create an SBT SourceGenerator task.
It will read an input JSON, CSV, XML, or whatever file that is part of your project, and will create a Scala file.
// ----- File: project/VendorsGenerator.scala -----
import sbt.Keys._
import sbt._

/**
 * An SBT task that generates a managed source file with all vendors.
 */
object VendorsGenerator {
  // For demonstration, I will use this plain List[String] to generate the code,
  // you may change the code to read a file instead.
  // Or maybe this will be good enough.
  final val vendors: List[String] =
    List(
      "Chrysler",
      "Toyota",
      ...
      "Unknown"
    )
  val generatorTask = Def.task {
    // Make the 'tipos' List, which contains all vendors.
    val tipos =
      vendors
        .map(vendorName => s"${vendorName}Vendor")
        .mkString("val tipos: List[Vendor] = List(", ",", ")")

    // Make a case object for each vendor.
    val vendorObjects = vendors.map { vendorName =>
      s"""case object ${vendorName}Vendor extends Vendor { override final val value: String = "${vendorName}" }"""
    }

    // Fill the code template.
    val code =
      List(
        List(
          "package vendors",
          "sealed trait Vendor extends Product with Serializable {",
          "def value: String",
          "override final def toString: String = value",
          "}",
          "object Vendors extends (String => Option[Vendor]) {"
        ),
        vendorObjects,
        List(
          tipos,
          // Key the map by the lowercased value, so the case-insensitive lookup below actually matches.
          "private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap",
          "override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)",
          "}"
        )
      ).flatten

    // Save the new file to the managed sources dir.
    val vendorsFile = (sourceManaged in Compile).value / "vendors.scala"
    IO.writeLines(vendorsFile, code)
    Seq(vendorsFile)
  }
}
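For reference, the generated file would look roughly like this (my sketch, abbreviated to two vendors):
// ----- Generated: <target>/src_managed/main/vendors.scala (sketch) -----
package vendors
sealed trait Vendor extends Product with Serializable {
  def value: String
  override final def toString: String = value
}
object Vendors extends (String => Option[Vendor]) {
  case object ChryslerVendor extends Vendor { override final val value: String = "Chrysler" }
  case object ToyotaVendor extends Vendor { override final val value: String = "Toyota" }
  val tipos: List[Vendor] = List(ChryslerVendor, ToyotaVendor)
  private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap
  override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)
}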
Now, you can activate your source generator.
This task will run every time, right before the compile step.
// ----- File: build.sbt -----
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue
Please note that I suggest this because I have done it before, and because I don't have any macro or metaprogramming experience.
Also, note that this example relies a lot on Strings, which makes the code a little hard to understand and maintain.
BTW, I haven't used enumeratum, but from a quick look it seems like the best solution to this problem.
Edit
I have my code ready to read a HOCON file and generate the matching code. My question now is where to place the Scala file in the project directory, and where the files will be generated. I am a little bit confused because there seem to be multiple steps: 1) compile my Scala generator, 2) run the generator, and 3) compile and build the project. Is this right?
Your generator is not part of your project code, but rather of your meta-project (I know that sounds confusing; you may read this to understand it) - as such, you place the generator inside the project folder at the root level (the same folder that holds the build.properties file specifying the sbt version).
If your generator needs some dependencies (I'm sure it does, for reading the HOCON), you place them in a build.sbt file inside that project folder.
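For example, a minimal sketch, assuming Typesafe Config is used for the HOCON parsing (adjust the artifact and version to whatever you actually use):
// ----- File: project/build.sbt (hypothetical) -----
libraryDependencies += "com.typesafe" % "config" % "1.3.2"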
If you plan to add unit tests for the generator, you may create an entire Scala project inside the meta-project (for reference, you may take a look at the project folder of an open source project I work on - yes, yes, I know, confusing again). My personal suggestion is that, more than testing the generator itself, you should test the generated file instead - or, better, both.
The generated file will be automatically placed in the src_managed folder (which lives inside target and is thus ignored by your source code version control).
The path inside it is just for organization, as everything inside the src_managed folder is included by default when compiling.
val vendorsFile = (sourceManaged in Compile).value / "vendors.scala" // Path to the file to write.
In order to access the values defined in the generated file on your source code, you only need to add a package to the generated file and import the values from that package in your code (as with any normal file).
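For example, with the generated file sketched above:
import vendors._
val maybeToyota: Option[Vendor] = Vendors("Toyota") // Some(ToyotaVendor)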
You don't need to worry about anything related with compilation order, if you include your source generator in your build.sbt file, SBT will take care of everything automatically.
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue // Activate the source generator.
SBT will re-run your generator every time it needs to compile.
"BTW I get "not found: object sbt" on the imports".
If the project is inside the meta-project space, it will find the sbt package by default, don't worry about it.

How do I change a project's id in SBT 1.0?

I have a bunch of SBT 0.13 project definitions that look like this:
lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core"))
  .settings(...)
  .jvmConfigure(_.copy(id = "core"))
  .jsConfigure(_.copy(id = "coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
(Mostly because I'm resentful about having to maintain Scala.js builds and don't want to have to type the JVM suffix every time I'm changing projects, etc.)
This doesn't compile in SBT 1.0 because Project doesn't have a copy method now.
Okay, let's check the migration guide.
Many of the case classes are replaced with pseudo case classes generated using Contraband. Migrate .copy(foo = xxx) to withFoo(xxx).
Cool, let's try it.
build.sbt:100: error: value withId is not a member of sbt.Project
.jvmConfigure(_.withId("core"))
^
So I asked on Gitter and got crickets.
The links for the 1.0 API docs actually point to something now, which is nice, but they're not very helpful in this case, and trying to read the SBT source gives me a headache. I'm not in a rush to update to 1.0, but I'm going to have to at some point, I guess, and maybe some helpful person will have answered this by then.
(This answer has been edited with information about sbt 1.1.0+ and sbt-crossproject 0.3.1+, which significantly simplify the whole thing.)
With sbt 1.1.0 and later, you can use .withId("core"). But there's better with sbt-crossproject 0.3.1+, see below.
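Applied to the definitions from the question, that becomes simply (a sketch):
lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core"))
  .settings(...)
  .jvmConfigure(_.withId("core"))
  .jsConfigure(_.withId("coreJS"))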
I don't know about changing the ID of a Project, but here is also a completely different way to solve your original issue, i.e., have core/coreJS instead of coreJVM/coreJS. The idea is to customize crossProject to use the IDs you want to begin with.
First, you'll need to use sbt-crossproject. It is the new "standard" for compilation across several platforms, co-designed by @densh from Scala Native and myself (from Scala.js). Scala.js 1.x will always use sbt-crossproject, but it is also possible to use sbt-crossproject with Scala.js 0.6.x. For that, follow the instructions in the readme. In particular, don't forget the "shadowing" part:
// Shadow sbt-scalajs' crossProject and CrossType from Scala.js 0.6.x
import sbtcrossproject.{crossProject, CrossType}
sbt-crossproject is more flexible than Scala.js' hard-coded crossProject. This means you can customize it more easily. In particular, it has a generic notion of Platform, defining how any given platform behaves.
For a cross JVM/JS project, the new-style crossProject invocation would be
lazy val coreBase = crossProject(JVMPlatform, JSPlatform)
  .crossType(CrossType.Pure)
  .in(file("core"))
  .settings(...)
  .jvmConfigure(_.copy(id = "core"))
  .jsConfigure(_.copy(id = "coreJS"))

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
Starting with sbt-crossproject 0.3.1, you can simply tell it not to add the platform suffix for one of your platforms. In your case, you want to avoid the suffix for the JVM platform, so you would write:
lazy val coreBase = crossProject(JVMPlatform, JSPlatform)
  .withoutSuffixFor(JVMPlatform)
  .crossType(CrossType.Pure)
  ...

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
and that's all you need to do!
Old answer, applicable to sbt-crossproject 0.3.0 and before
JVMPlatform and JSPlatform are not an ADT; they are designed in an OO style. This means you can create your own. In particular, you can create your own JVMPlatformNoSuffix that would do the same as JVMPlatform but without adding a suffix to the project ID:
import sbt._
import sbtcrossproject._
case object JVMPlatformNoSuffix extends Platform {
  def identifier: String = "jvm"
  def sbtSuffix: String = "" // <-- here is the magical empty string
  def enable(project: Project): Project = project
  val crossBinary: CrossVersion = CrossVersion.binary
  val crossFull: CrossVersion = CrossVersion.full
}
Now that's not quite enough yet, because .jvmSettings(...) and friends are defined to act on a JVMPlatform, not on any other Platform such as JVMPlatformNoSuffix. You'll therefore have to redefine that as well:
implicit def JVMNoSuffixCrossProjectBuilderOps(
    builder: CrossProject.Builder): JVMNoSuffixCrossProjectOps =
  new JVMNoSuffixCrossProjectOps(builder)

implicit class JVMNoSuffixCrossProjectOps(project: CrossProject) {
  def jvm: Project = project.projects(JVMPlatformNoSuffix)

  def jvmSettings(ss: Def.SettingsDefinition*): CrossProject =
    jvmConfigure(_.settings(ss: _*))

  def jvmConfigure(transformer: Project => Project): CrossProject =
    project.configurePlatform(JVMPlatformNoSuffix)(transformer)
}
Once you have all of that in your build (hidden away in a project/JVMPlatformNoSuffix.scala in order not to pollute the .sbt file), you can define the above cross-project as:
lazy val coreBase = crossProject(JVMPlatformNoSuffix, JSPlatform)
  .crossType(CrossType.Pure)
  .in(file("core"))
  .settings(...)

lazy val core = coreBase.jvm
lazy val coreJS = coreBase.js
without any need to explicitly patch the project IDs.

sbt illegal dynamic reference in runMain

I'm trying to run a code generator, passing it the filename to write the output to:
resourceGenerators in (proj, Compile) += Def.task {
  val file = (resourceManaged in (proj, Compile)).value / "swagger.yaml"
  (runMain in (proj, Compile)).toTask(s"api.swagger.SwaggerDump $file").value
  Seq(file)
}.value
However, this gives me:
build.sbt:172: error: Illegal dynamic reference: file
(runMain in (proj, Compile)).toTask(s"api.swagger.SwaggerDump $file").value
Your code snippet has two problems:
You use { ... }.value instead of { ... }.taskValue. The type of resource generators is Seq[Task[Seq[File]]], and when you do value you get Seq[File], not Task[Seq[File]]. That causes a legitimate compile error.
The dynamic variable file is used as the argument of toTask, which the current macro implementation prohibits.
Why static?
Sbt forces task implementations to have static dependencies on other tasks. Otherwise, sbt cannot perform task deduplication and cannot provide correct information in the inspect commands. That means that whichever task evaluation you perform inside a task cannot depend on a variable (a value known only at runtime), as your file in toTask does.
To overcome this limitation, there exist dynamic tasks, whose bodies allow you to return a task. Every "dynamic dependency" has to be defined inside a dynamic task, and then you can depend on the hoisted-up dynamic values in the task that you return.
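In its generic shape (my sketch, with hypothetical tasks a and b):
val a = taskKey[Int]("some task")    // hypothetical
val b = taskKey[Int]("another task") // hypothetical

val myDynamicTask = Def.taskDyn {
  val x = a.value            // computed first; this is the "hoisted up" dynamic value
  Def.task { b.value + x }   // the returned task can safely close over x
}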
Dynamic solution
The following Scastie snippet is a correct implementation of your task. I copy-paste the code here so that folks can have a quick look, but go to that Scastie to check that it successfully compiles and runs.
resourceGenerators in (proj, Compile) += Def.taskDyn {
  val file = (resourceManaged in (proj, Compile)).value / "swagger.yaml"
  Def.task {
    (runMain in (proj, Compile))
      .toTask(s"api.swagger.SwaggerDump $file")
      .value
    Seq(file)
  }
}.taskValue
Discussion
If you had fixed the taskValue error, should your task implementation correctly compile?
In my opinion, yes, but I haven't looked at the internal implementation closely enough to assert that your task implementation does not hinder task deduplication and dependency extraction. If it does not, the illegal reference check should disappear.
This is a current limitation of sbt that I would like to get rid of, either by improving the whole macro implementation (hoisting up values and making sure that dependency analysis covers more cases) or by just improving the "illegal references" checks to not be over-pessimistic. However, this is a hard problem; it takes time, and it's not likely to happen in the short term.
If this is an issue for you, please file a ticket in sbt/sbt. This is the only way to know the urgency of fixing this issue, if any. For now, the best we can do is to document it.

Scala Macro - error referencing class symbol "not found: value <class>"

I'm trying to create a Scala macro that generates code like:
val x = new com.foo.MyClass()
where com.foo.MyClass is definitely on the classpath at compile time and run time in the project using the macro.
I'm using the following c.Tree to generate the code:
Apply(Select(New(Ident(TermName("com.foo.MyClass"))), termNames.CONSTRUCTOR), List())
Printing the output of the show and showRaw commands indicates that the correct code is generated; however, it seems that com.foo.MyClass either isn't on the classpath during macro expansion or during compilation immediately after.
I'm seeing the following error generated at the usage point of the macro (the macro impl itself is defined in a separate project):
[ERROR] /src/main/java/foo/MyWhatever.scala:10: not found: value com.foo.MyClass
[ERROR] MyMacros.someMacro(someInput)
[ERROR]
Why is it failing to find this class on the classpath even though it's a Java file in the same project? I tried -Ymacro-debug-verbose and com.foo.MyClass isn't in the output, but a bunch of other Java & Scala classes are. I can't find a pattern to which classes are on the classpath for the macro expansion.
Thanks for any help!
Okay! I managed to answer my own question. It turns out it works to use c.mirror.staticClass("com.foo.MyClass") to get a class Symbol via compile-time reflection, and then use quasiquotes.
My solution:
val classSymbol = c.mirror.staticClass("com.foo.MyClass")
val newClassTree = q"new ${classSymbol.toType}()"
c.Expr { newClassTree } // Success! This compiles and runs
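For what it's worth (my addition): the original tree fails because Ident(TermName("com.foo.MyClass")) is a single identifier whose name literally contains dots, not a qualified path. A hand-built equivalent would need nested Selects, roughly like this untested sketch:
// Sketch of a hand-rolled tree for `new com.foo.MyClass()`:
Apply(
  Select(
    New(Select(Select(Ident(TermName("com")), TermName("foo")), TypeName("MyClass"))),
    termNames.CONSTRUCTOR),
  List())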

Is it possible to ignore JVM-only properties and safely export to JavaScript?

I have a basic project setup following this Play with ScalaJS example. Other examples I have found using this same pattern would separate the case classes (models) from what would traditionally be their companion objects. That is, the case class would live in the "shared" sub-project, and the "companion object" (really just some object) would live in the "server" sub-project.
It would be highly preferable to keep these two within the same file (i.e. put important stuff in the real companion object), as it is very convenient to place type-class instances there and have them resolve properly. For example:
case class User(id: Int, name: String)

object User {
  val default = User(1, "Guest")

  // I need this for the back-end, but don't need to export to JS
  implicit val reads: Reads[User] = ...
}
Unfortunately, this leads to a linking error, as the Reads type exists solely on the JVM (just one of many). But, if I were to move val reads into a different file, the implicit resolution of Reads[User] would break throughout the "server" sub-project, without adding explicit imports (which would be annoying).
Is it possible to explicitly ignore certain properties in the ScalaJS export, while still allowing them to compile for the JVM? I'd like the User case class to export, and possibly even other properties of its companion object, but others that exist on the JVM only could be ignored without disrupting the front-end.
The way I have worked around this in the past (in the Scala.js codebase itself) is with a PlattformExtensions trait that is mixed into the cross-compiled object but differs between JVM and JS:
object User extends UserPlattformExtensions {
  val default = User(1, "Guest")
}
In your JVM project:
trait UserPlattformExtensions {
  implicit val reads: Reads[User] = ???
}
In your JS project:
trait UserPlattformExtensions
In your file organization (with a standard cross project), this would look like the following:
project/
  shared/
    src/main/
      User.scala
  jvm/
    src/main/
      UserPlattformExtensions.scala
  js/
    src/main/
      UserPlattformExtensions.scala
There are no dependency issues, since to the compiler the source files are assembled as follows:
sources in projectJVM:
  shared/src/main/User.scala
  jvm/src/main/UserPlattformExtensions.scala
sources in projectJS:
  shared/src/main/User.scala
  js/src/main/UserPlattformExtensions.scala
So to each individual compilation run, this whole construct is simply an object that inherits from a trait. Which source directories the sources come from does not matter to the compilation.
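With this in place, implicit resolution on the JVM side works through the companion as before; for example (my sketch, assuming Play JSON's Reads):
import play.api.libs.json._
// The implicit Reads[User] is found via the companion object, no extra import needed.
val user: User = Json.parse("""{"id":1,"name":"Guest"}""").as[User]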