Scala Postfix operator warning contradicts Scaladoc - scala

My code:
val exitStatus = url #> outfile !
From the scala.sys.process documentation:
new URL("http://www.scala-lang.org/") #> new File("scala-lang.html") !
Warning:
postfix operator ! should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn] val exitStatus = url #> outfile !
[warn] ^
[warn] one warning found
WTF???

Keep calm and follow the warning.
import scala.language.postfixOps
...
val exitStatus = url #> outfile !
...
And... no warning! :)
The reason for these warnings is so that people new to Scala don't use these features by accident and end up even more confused about the syntax. I'm not sure I agree with this rationale, but it seems to work with my coworkers/friends, so there's definitely something to it.
As an aside, here is the Scaladoc page that details all of these. You can also enable them as compiler flags or through build.sbt.
-language:dynamics             # Allow direct or indirect subclasses of scala.Dynamic
-language:existentials         # Existential types (besides wildcard types) can be written and inferred
-language:experimental.macros  # Allow macro definition (besides implementation and application)
-language:higherKinds          # Allow higher-kinded types
-language:implicitConversions  # Allow definition of implicit functions called views
-language:postfixOps           # Allow postfix operator notation, such as `1 to 10 toList'
-language:reflectiveCalls      # Allow reflective access to members of structural types
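As a small illustration (a sketch, assuming Scala 2 with the import or flag enabled), postfix notation is just an ordinary nullary method call written without a dot:

```scala
import scala.language.postfixOps

// Postfix notation: the method follows its receiver with no dot.
val xs = 1 to 10 toList

// The equivalent dotted call never needs the feature flag:
val ys = (1 to 10).toList
```

Both produce the same list; the import only silences the feature warning for the first form.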

Related

Typesafe Config SBT Multi Module Error When Resolving Placeholder

I have an SBT multi-module project where some of the sub-modules use a placeholder to resolve certain configurations. The project structure looks like this:
core
  src
    main
      resources
        application.conf
mod1
  src
    main
      resources
        application.conf
mod2
  src
    main
      resources
        application.conf
In mod2's application.conf, I have the following:
# The environment representation of the configurations
# ~~~~~
app {
  environment = "test"
  name = ${NAME}-split # TODO: Get the default name from application.conf which should also be located here
}
As can be seen, NAME is a placeholder that I would like to resolve either from the default application.conf that I include or by passing it in via a command-line argument. When I run it, I get an error like this:
[error] at java.base/java.lang.Thread.run(Thread.java:829)
[error] Caused by: com.typesafe.config.ConfigException$UnresolvedSubstitution: application.test.conf # file:/home/runner/work/housing-price-prediction-data-preparation/housing-price-prediction-data-preparation/split/target/scala-2.12/test-classes/application.test.conf: 7: Could not resolve substitution to a value: ${NAME}
[error] at com.typesafe.config.impl.ConfigReference.resolveSubstitutions(ConfigReference.java:108)
Normally you would put a reference.conf in each module and an application.conf at the top application level.
Also note that module-level configs are resolved first (in order: the lower modules before the ones that depend on them), then application.conf, and finally overrides from the command line.
The order is important: you can't depend on values that haven't been set yet. If a value is set to an interim value and then overridden, any dependent config will be based on whichever value is effective when it is resolved.
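As a sketch of that convention (the default value and the override shown here are illustrative, not taken from the question), the module ships a default in reference.conf and the application or command line overrides it:

```hocon
# mod2/src/main/resources/reference.conf (module-level default)
NAME = "default-name"

# src/main/resources/application.conf (application-level override)
NAME = "housing"

app {
  environment = "test"
  name = ${NAME}-split   # resolves against whatever NAME is effective at resolution time
}
```

A command-line override such as -DNAME=other still wins over both files, because Typesafe Config applies system properties last.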

How to differentiate between a script and normal class files in Scala?

In the book, Programming in Scala 5th Edition, the author says the following for two classes:
Neither ChecksumAccumulator.scala nor Summer.scala are scripts, because they end in a definition. A script, by contrast, must end in a result expression.
The ChecksumAccumulator.scala is as follows:
import scala.collection.mutable

class CheckSumAccumulator:
  private var sum = 0
  def add(b: Byte): Unit = sum += b
  def checksum(): Int = ~(sum & 0xFF) + 1

object CheckSumAccumulator:
  private val cache = mutable.Map.empty[String, Int]
  def calculate(s: String): Int =
    if cache.contains(s) then
      cache(s)
    else
      val acc = new CheckSumAccumulator
      for c <- s do
        acc.add((c >> 8).toByte)
        acc.add(c.toByte)
      val cs = acc.checksum()
      cache += (s -> cs)
      cs
whereas the Summer.scala is as follows:
import CheckSumAccumulator.calculate

object Summer:
  def main(args: Array[String]): Unit =
    for arg <- args do
      println(arg + ": " + calculate(arg))
But when I run the Summer.scala file, I get a different error than the one mentioned by the author:
➜ learning-scala git:(main) ./scala3-3.0.0-RC3/bin/scala Summer.scala
-- [E006] Not Found Error: /Users/avirals/dev/learning-scala/Summer.scala:1:7 --
1 |import CheckSumAccumulator.calculate
| ^^^^^^^^^^^^^^^^^^^
| Not found: CheckSumAccumulator
longer explanation available when compiling with `-explain`
1 error found
Error: Errors encountered during compilation
➜ learning-scala git:(main)
The author mentioned that the error would be around not having a result expression.
I also tried to compile CheckSumAccumulator only and then run Summer.scala as a script without compiling it:
➜ learning-scala git:(main) ./scala3-3.0.0-RC3/bin/scalac CheckSumAccumulator.scala
➜ learning-scala git:(main) ✗ ./scala3-3.0.0-RC3/bin/scala Summer.scala
<No output, given no input>
➜ learning-scala git:(main) ✗ ./scala3-3.0.0-RC3/bin/scala Summer.scala Summer of love
Summer: -121
of: -213
love: -182
It works.
Obviously, when I compile both, and then run Summer.scala, it works as expected. However, the differentiation of Summer.scala as a script vs normal file is unclear to me.
Let's start top-down...
The most common way to compile Scala is to use a build tool like SBT/Maven/Mill/Gradle/etc. The build tool helps with a few things: downloading dependencies/libraries, downloading the Scala compiler (optional), setting up the classpath, and, most importantly, running the scalac compiler with all flags passed to it. Additionally, it can package compiled class files into JARs and other formats, and do much more. The most relevant parts here are the classpath and the compilation flags.
If you strip away the build tool, you can compile your project by manually invoking scalac with all required arguments and making sure your working directory matches the package structure, i.e. that you are in the right directory. This can be tedious because you need to download all libraries manually and make sure they are on the classpath.
So far build tool and manual compiler invocation are very similar to what you can also do in Java.
If you want an ad-hoc way of running some Scala code, there are two options: scala lets you run scripts or a REPL by simply compiling your uncompiled code before executing it.
However, there are some caveats. Essentially the REPL and scripts are the same: Scala wraps your code in an anonymous object and then runs it. This way you can write any expression without having to follow the convention of providing a main function or extending the App trait (which provides main). The runner will compile the script you are trying to run, but it has no idea about imported classes. You can either compile them beforehand or make one large script that contains all the code. Of course, if it starts getting too large, it's time to make a proper project.
So in a sense there is no such thing as a script vs a normal file, because both contain Scala code. The file you are running with scala is a script if it is uncompiled code (XXX.scala), and a "normal" compiled class (XXX.class) otherwise. If you ignore the object wrapping mentioned above, the rest is the same; there are just different steps to compile and run them.
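To make the distinction concrete, here is a minimal sketch of a file that counts as a script (the file name and function are illustrative): it ends in a result expression rather than a definition, so the Scala 2 runner can wrap and execute it directly with `scala greet.scala`:

```scala
// greet.scala
def greet(name: String): String = s"Hello, $name"

// This final result expression is what makes the file a script;
// a file ending in the `def` above would not qualify.
println(greet("Scala"))
```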
Here is the traditional Scala 2.x runner code snippet with all possible options:
def runTarget(): Option[Throwable] = howToRun match {
  case AsObject =>
    ObjectRunner.runAndCatch(settings.classpathURLs, thingToRun, command.arguments)
  case AsScript if isE =>
    ScriptRunner(settings).runScriptText(combinedCode, thingToRun +: command.arguments)
  case AsScript =>
    ScriptRunner(settings).runScript(thingToRun, command.arguments)
  case AsJar =>
    JarRunner.runJar(settings, thingToRun, command.arguments)
  case Error =>
    None
  case _ =>
    // We start the repl when no arguments are given.
    if (settings.Wconf.isDefault && settings.lint.isDefault) {
      // If user is agnostic about -Wconf and -Xlint, enable -deprecation and -feature
      settings.deprecation.value = true
      settings.feature.value = true
    }
    val config = ShellConfig(settings)
    new ILoop(config).run(settings)
    None
}
This is what's getting invoked when you run scala.
In Dotty/Scala 3 the idea is similar, but it is split into multiple classes and the classpath logic might be different: there is a REPL and a script runner, and the script runner invokes the REPL.

value YpartialUnification is not a member of scala.tools.nsc.Settings

I'm trying to run Scala Cats in a REPL. Following Cats' instructions, I installed the Ammonite REPL and put the following in predef.sc:
interp.configureCompiler(_.settings.YpartialUnification.value = true)
import $ivy.`org.typelevel::cats-core:2.2.0-M1`, cats.implicits._
I got this error when running amm:
predef.sc:1: value YpartialUnification is not a member of scala.tools.nsc.Settings
val res_0 = interp.configureCompiler(_.settings.YpartialUnification.value = true)
^
Compilation Failed
In Scala 2.13, partial unification is enabled by default and the -Ypartial-unification flag has been removed by "Partial unification unconditional; deprecate -Xexperimental" #6309:
Partial unification is now enabled unless -Xsource:2.12 is specified.
The -Ypartial-unification flag has been removed and the -Xexperimental
option, which is now redundant, has been deprecated.
Thus the compiler no longer accepts -Ypartial-unification.
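Assuming Ammonite running on Scala 2.13, the fix is simply to drop the compiler-flag line from predef.sc, since partial unification is always on:

```scala
// predef.sc for Ammonite on Scala 2.13: no compiler flag needed
import $ivy.`org.typelevel::cats-core:2.2.0-M1`, cats.implicits._
```

On Scala 2.12, keep the interp.configureCompiler line instead; the flag still exists there.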

Avoiding an error when accessing a class whose type information is missing from the classpath

Some classes in a library have an import that cannot be resolved. For example, org.scalatest.tools.Framework in ScalaTest: if I add scalatest as a dependency, it is added to the test classpath, but the import below cannot be resolved there, because no sbt module is on the test classpath.
import sbt.testing.{Event => SbtEvent, Framework => SbtFramework, Runner => SbtRunner, Status => SbtStatus, _}
I need to scan the classes under a specific package in a macro and search for classes with specific features.
def collectXxx(targets: List[c.Symbol]): List[c.Symbol] =
  targets.filter { x =>
    (
      x.isModule || (
        x.isClass &&
          !x.isAbstract &&
          x.asClass.primaryConstructor.isMethod
      )
    ) && x.typeSignature.baseClasses.contains(XxxTag.typeSymbol)
  }
This will filter to symbols that are object / class and inherit from Xxx.
This works in most cases, but if targets contains a class that cannot be compiled as-is, such as one from scalatest, the compiler error cannot be avoided.
The moment baseClasses is accessed, macro expansion is flagged as a global error.
[error] <macro>:1:26: Symbol 'type sbt.testing.Framework' is missing from the classpath.
[error] This symbol is required by 'class org.scalatest.tools.Framework'.
[error] Make sure that type Framework is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
[error] A full rebuild may help if 'Framework.class' was compiled against an incompatible version of sbt.testing.
[error] type `fresh$macro$612` = org.scalatest.tools.Framework
[error] ^
Looking at the stack trace in debug mode, a global error was raised as soon as any property of the StubClassSymbol was accessed:
java.lang.Throwable
at scala.reflect.internal.Symbols$StubSymbol.fail(Symbols.scala:3552)
at scala.reflect.internal.Symbols$StubSymbol.info(Symbols.scala:3563)
at scala.reflect.internal.Symbols$StubSymbol.info$(Symbols.scala:3563)
at scala.reflect.internal.Symbols$StubClassSymbol.info(Symbols.scala:3567)
at scala.reflect.internal.Types$TypeRef.baseClasses(Types.scala:2593)
at scala.reflect.internal.Types.computeBaseClasses(Types.scala:1703)
at scala.reflect.internal.Types.computeBaseClasses$(Types.scala:1680)
at scala.reflect.internal.SymbolTable.computeBaseClasses(SymbolTable.scala:28)
at scala.reflect.internal.Types.$anonfun$defineBaseClassesOfCompoundType$2(Types.scala:1781)
at scala.reflect.internal.Types$CompoundType.memo(Types.scala:1651)
at scala.reflect.internal.Types.defineBaseClassesOfCompoundType(Types.scala:1781)
at scala.reflect.internal.Types.defineBaseClassesOfCompoundType$(Types.scala:1773)
at scala.reflect.internal.SymbolTable.defineBaseClassesOfCompoundType(SymbolTable.scala:28)
at scala.reflect.internal.Types$CompoundType.baseClasses(Types.scala:1634)
at refuel.internal.AutoDIExtractor.$anonfun$recursivePackageExplore$3(AutoDIExtractor.scala:119)
I thought of a way to get around this.
Perhaps when an import fails to resolve, that TypeSymbol becomes a StubClassSymbol.
So I inspected the structure of the Symbol that caused the error and added a condition to filter it out when a StubClassSymbol is found. This has worked.
!x.typeSignature.asInstanceOf[ClassInfoTypeApi].parents.exists { pr =>
  pr.typeSymbol.isClass &&
    pr.typeSymbol.asClass.isInstanceOf[scala.reflect.internal.Symbols#StubClassSymbol]
}
But this feels really hacky. Is there another way around it? And I wonder whether this really covers all cases.

Scaladoc could not find any member to link for external Java library

In the documentation of my Scala project, I want to link to a member from an external Java library.
/**
  * Checks whether log entries at [[org.tinylog.Level.TRACE]] will be output.
  *
  * @return `true` if [[org.tinylog.Level.TRACE]] level is enabled, `false` if disabled
  */
def isTraceEnabled(): Boolean = macro TaggedLoggerMacro.isTraceEnabled
org.tinylog.Level.TRACE is an enum value from a Java project. IntelliJ can resolve all these links well, but scaladoc fails unfortunately.
My command:
C:\bin\java\jdk-9\bin\java -Xbootclasspath/a:C:\Users\martin\.m2\repository\org\scala-lang\scala-library\2.12.9\scala-library-2.12.9.jar;C:\Users\martin\.m2\repository\org\scala-lang\scala-compiler\2.12.9\scala-compiler-2.12.9.jar;C:\Users\martin\.m2\repository\org\scala-lang\scala-reflect\2.12.9\scala-reflect-2.12.9.jar;C:\Users\martin\.m2\repository\org\scala-lang\modules\scala-xml_2.12\1.0.6\scala-xml_2.12-1.0.6.jar;C:\Users\martin\.m2\repository\org\scala-lang\scala-library\2.12.0\scala-library-2.12.0.jar -classpath C:\Users\martin\.m2\repository\net\alchim31\maven\scala-maven-plugin\4.1.1\scala-maven-plugin-4.1.1.jar scala_maven_executions.MainWithArgsInFile scala.tools.nsc.ScalaDoc C:\data\martin\TEMP\scala-maven-14328178310010851042.args
My arguments:
-doc-external-doc
"C:\tinylog 2.0\tinylog-api\target\tinylog-api-2.1-SNAPSHOT.jar#https://tinylog.org/v2/javadoc/"
-classpath
C:\Users\martin\.m2\repository\org\scala-lang\scala-library\2.12.9\scala-library-2.12.9.jar;C:\Users\martin\.m2\repository\org\scala-lang\scala-reflect\2.12.9\scala-reflect-2.12.9.jar;C:\Users\martin\.m2\repository\org\tinylog\tinylog-api\2.1-SNAPSHOT\tinylog-api-2.1-SNAPSHOT.jar
-doc-format:html
-doc-title
"tinylog Scala API 2.1-SNAPSHOT API"
-d
"C:\tinylog 2.0\tinylog-api-scala\target\site\scaladocs"
"C:\tinylog 2.0\tinylog-api-scala\src\main\scala\org\tinylog\scala\Logger.scala"
"C:\tinylog 2.0\tinylog-api-scala\src\main\scala\org\tinylog\scala\LoggerMacro.scala"
"C:\tinylog 2.0\tinylog-api-scala\src\main\scala\org\tinylog\scala\TaggedLogger.scala"
"C:\tinylog 2.0\tinylog-api-scala\src\main\scala\org\tinylog\scala\TaggedLoggerMacro.scala"
Output (115 warnings in total):
C:\PROGS\dev\var\private\tinylog 2.0\tinylog-api-scala\src\main\scala\org\tinylog\scala\TaggedLogger.scala:242: warning: Could not find any member to link for "org.tinylog.Level.TRACE".
/**
^
C:\PROGS\dev\var\private\tinylog 2.0\tinylog-api-scala\src\main\scala\org\tinylog\scala\TaggedLogger.scala:229: warning: Could not find any member to link for "org.tinylog.Level.TRACE".
/**
^
How can I properly link to a member from an external Java library?
As pme has mentioned in the comments, scaladoc does not support linking to Javadoc:
results from scaladoc not processing Javadoc comments in Java sources nor linking to Javadoc
(https://www.scala-sbt.org/1.x/docs/Howto-Scaladoc.html)
Therefore, I have decided to use the full URLs for linking the enum values from my external Java library. This is not really convenient, but it works.
/**
  * Checks whether log entries at [[https://tinylog.org/v2/javadoc/org/tinylog/Level.html#TRACE TRACE]] will be output.
  *
  * @return `true` if [[https://tinylog.org/v2/javadoc/org/tinylog/Level.html#TRACE TRACE]] level is enabled, `false` if disabled
  */
def isTraceEnabled(): Boolean = macro LoggerMacro.isTraceEnabled
I see this as a workaround and not as an elegant solution. Any better solutions are welcome :)