I want to run an Ammonite Scala script from an application and get the result:
object TestMain extends App {
  val path = os.Path("scripts/ammtest/Test.sc", base = os.pwd)
  val res = ammonite.Main().runScript(path, Nil)
  println(s"res = $res")
}
However, this only works if the script is wrapped in an @main annotation:
Test.sc:
@main def main(): Int = {
  42
}
Is there any way to avoid having to include the @main def main() part?
I would like the syntax to use a DSL that returns an instance of a class.
The returned class instance is needed by the application that is starting the script.
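For context, this is roughly how I hope to consume the result. Note this is only a sketch: it assumes Ammonite's ammonite.util.Res type, and the exact return shape of runScript has changed between Ammonite versions.
import ammonite.util.Res

object TestMain extends App {
  val path = os.Path("scripts/ammtest/Test.sc", base = os.pwd)
  ammonite.Main().runScript(path, Nil) match {
    case (Res.Success(value), _) => println(s"script returned: $value") // the 42, or a class instance
    case (other, _)              => println(s"script failed: $other")
  }
}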
I need to mock (with ScalaMock 4.4.0 / ScalaTest 3.2.9) the behaviour of the Hadoop file system. I found out that I can't mock it directly, so I'm trying to work around this problem.
On the first try, I created a wrapper (in the scope of the main logic):
class HadoopFsWrapper(path: Path)(implicit fs: org.apache.hadoop.fs.FileSystem) {
  def listStatus(path: Path) = fs.listStatus(path)
}

class HadoopService(path: Path)(implicit val fs: HadoopFsWrapper) {
  def someLogic() = fs.listStatus(path)
}
I declare this class in the main code, and in the test code I define an implicit HadoopFsWrapper mock:
implicit val fs: HadoopFsWrapper = mock[HadoopFsWrapper]
val hs = new HadoopService(path)
// other test logic
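Fleshed out, the test looks roughly like this (a sketch assuming ScalaTest's AnyFlatSpec together with ScalaMock's MockFactory; the spec and expectation are illustrative):
import org.apache.hadoop.fs.{FileStatus, Path}
import org.scalamock.scalatest.MockFactory
import org.scalatest.flatspec.AnyFlatSpec

class HadoopServiceSpec extends AnyFlatSpec with MockFactory {
  "someLogic" should "delegate to the wrapped file system" in {
    val path = new Path("/some/dir")
    implicit val fs: HadoopFsWrapper = mock[HadoopFsWrapper]
    // expect one listStatus call and return an empty result
    (fs.listStatus _).expects(path).returning(Array.empty[FileStatus])

    new HadoopService(path).someLogic()
  }
}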
And this works well.
But I don't think it's a good idea to modify the main code for test purposes.
Another way I considered is to create the wrapper only in the test scope and apply some kind of "decorator" to deceive HadoopService, which expects a FileSystem to be defined in the main scope:
// only in test scope
class HadoopFsWrapper(path: Path) extends org.apache.hadoop.fs.FileSystem {
  def listStatus(path: Path): Array[FileStatus] = ??? // stub, meant to be mocked
  // dummies for the other abstract FileSystem methods
}

object HadoopFsWrapper {
  def apply(path: Path): org.apache.hadoop.fs.FileSystem = new HadoopFsWrapper(path)
}

// main logic
class HadoopService(path: Path)(implicit val fs: org.apache.hadoop.fs.FileSystem) {
  def someLogic() = fs.listStatus(path)
}
But when I try to mock this HadoopFsWrapper, the mock turns out to be null.
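One variant I'm considering is to make the wrapper a small trait in the main scope, so the test can mock the trait instead of a concrete FileSystem subclass. A sketch, with illustrative names:
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

// main scope: a minimal interface plus its Hadoop-backed implementation
trait FsOps {
  def listStatus(path: Path): Array[FileStatus]
}

class HadoopFsOps(fs: FileSystem) extends FsOps {
  def listStatus(path: Path): Array[FileStatus] = fs.listStatus(path)
}

class HadoopService(path: Path)(implicit fs: FsOps) {
  def someLogic() = fs.listStatus(path)
}

// in test scope: implicit val fs: FsOps = mock[FsOps]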
Suppose I want to use macros in Scala 3 to count the number of places a certain method doSomething() was used in the code:
// Macros.scala
import scala.quoted.{Expr, Quotes}

object Macros {
  private var count: Int = 0

  inline def doSomething(): String = ${ increment() }

  private def increment()(using Quotes): Expr[String] =
    count = count + 1
    Expr("some result")

  inline def callCount(): Int = ${ getCount() }

  private def getCount()(using Quotes): Expr[Int] =
    Expr(count)
}
And I have an object that uses doSomething() a few times:
// Runner.scala
object Runner {
  def run() =
    Macros.doSomething()
    Macros.doSomething()
    Macros.doSomething()
}
And I want to show, at runtime, the call count:
// Main.scala
object Main {
  def main(args: Array[String]): Unit =
    println(Macros.callCount())
}
Depending on the order in which these macros were compiled, the main function will print either 0 or 3. If I could control this order, I would instruct the compiler to compile Main.scala last, so that I get the expected value 3. Is that possible?
No, there's no way to control it, and you mustn't write that kind of code. Macros run inside the compiler, so count is a mutable variable in the compiler's own JVM: its value depends on which call sites a particular compiler run happens to expand, and incremental or parallel compilation will change the result.
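If the goal is just to see 3 at runtime, a plain runtime counter gets there without depending on compilation order. Note this is a different technique with different semantics: it counts actual invocations at runtime, not call sites in the source. A minimal sketch, no macros involved:
import java.util.concurrent.atomic.AtomicInteger

object Counted {
  private val count = new AtomicInteger(0)

  // increments on every call at runtime
  def doSomething(): String = {
    count.incrementAndGet()
    "some result"
  }

  def callCount(): Int = count.get()
}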
There is an Apache Spark Scala project (runnerProject) that uses another one in the same package (sourceProject). The aim of the source project is to get the name and version of the Databricks job that is running.
The problem with the following method is that when it is called from the runnerProject, it returns the sourceProject's details, not the runnerProject's name and version.
sourceProject's method:
class EnvironmentInfo(appName: String) {
  def getJobDetails(): (String, String) = {
    val classPackage = getClass.getPackage
    val jobName = classPackage.getImplementationTitle
    val jobVersion = classPackage.getImplementationVersion
    (jobName, jobVersion)
  }
}
runnerProject uses sourceProject as a package:
import com.sourceProject.environment.EnvironmentInfo

class runnerProject {
  def start(environment: EnvironmentInfo): Unit = {
    // using the returned parameters of the environment
  }
}
How can this issue be worked around so that getJobDetails() lives in the sourceProject but can be called from other projects as well, not just the runnerProject, and still returns the details of the "caller" job?
Thank you in advance! :)
Try the following: it gets the calling class name from the stack trace and uses that to look up the actual class and its package.
class EnvironmentInfo(appName: String) {
  def getJobDetails(): (String, String) = {
    // index 2 on the stack: 0 = getStackTrace, 1 = getJobDetails, 2 = the caller
    val callingClassName = Thread.currentThread.getStackTrace()(2).getClassName
    val classPackage = Class.forName(callingClassName).getPackage
    val jobName = classPackage.getImplementationTitle
    val jobVersion = classPackage.getImplementationVersion
    (jobName, jobVersion)
  }
}
It will work if you call it directly, but it might give you the wrong package if you call it from within a lambda function.
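If the stack-trace indexing feels fragile (and inside lambdas it is), a variant is to have the caller pass its own class explicitly; a sketch of that alternative:
class EnvironmentInfo(appName: String) {
  // the caller hands over its own class, so no stack inspection is needed
  def getJobDetails(caller: Class[_]): (String, String) = {
    val classPackage = caller.getPackage
    (classPackage.getImplementationTitle, classPackage.getImplementationVersion)
  }
}

// in runnerProject: environment.getJobDetails(getClass)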
Let's say I have the following structure:
src
- main
  - scala
    - model
      - One.scala
      - Two.scala
    - main
      - Test.scala
where Test.scala extends App and takes in a parameter:
object Test extends App {
  val param: String = args.head
  // Based on param, I want to use either One or Two
}

// sbt "run Two"
How do I use definitions from either One.scala or Two.scala depending on the runtime value of param?
I'd appreciate any and all input.
Make sure that One and Two share some common interface, choose the instance of this interface at runtime, then import the members of the instance:
trait CommonInterface {
  def foo(): Unit
}

object One extends CommonInterface { def foo() = println("1") }
object Two extends CommonInterface { def foo() = println("2") }

object Main extends App {
  // check args etc...
  val ci = if (args(0) == "one") One else Two
  import ci._

  // do something with `foo` here
  foo()
}
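With that in place, sbt "run one" prints 1 and sbt "run two" prints 2. Note the quotes, so that sbt treats run and its argument as a single command, and that the check above expects lowercase names.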
I have a Jenkinsfile, and I'm trying to instantiate a Groovy class from my shared library. I get "unable to resolve class Test".
I have a src/com/org/foo.groovy file in a shared library:
package com.org

class Test implements Serializable {
    String val
    Test(val) {
        this.val = val
    }
}
and I'm trying to instantiate it in my Jenkinsfile:
@Library('Shared-Library@master')
import com.org // also tried with .foo, with no success

def t = new Test("a")             // doesn't work
def t = new foo.Test("a")         // doesn't work
def t = new com.org.foo.Test("a") // doesn't work
What does work is referring to the file itself as a class (whose constructor I can't access), that is:
@Library('Shared-Library@master')
def t = new foo.com.org.foo()
This is nice and lets me use foo's functions. However, I lose the ability to give the class constants and construct it with parameters.
Any idea how I can define and use a class from a shared library?
Thanks
Your class has default (package-private) scope; you can change its scope to public.
It's throwing an error because you created the object of the class outside a script block. Try the code below; it should work:
@Library('Shared-Library@master')
import com.org.*

pipeline {
    agent any
    stages {
        stage('Demo') {
            steps {
                script {
                    def t = new Test("a") // this should work
                }
            }
        }
    }
}