Can pureconfig use camel case config - scala

I'm using the pureconfig lib with the pureconfig-yaml module. Everything works like a charm; my only problem is that I have to convert the property names from camel case to kebab case.
Painful examples from the real world:
case class Config(log4JPath: String, registryURL: String, HOUR_FORMAT: String)
Yaml:
log-4-j-path: /conf/log4j.properties
registry-url: http://foo.com
hour-_-format: dd-mm-yy
I don't want to maintain 2 different case types and think about how to convert from one to the other; I would love to have a pure copy&paste Scala class -> yaml config solution. Is there a chance I could achieve camel case on both sides?
Edit:
I've created a wrapper around the pureconfig lib, which does some config overriding via environment variables. The client should use the wrapper in the following manner:
val conf: Config = ConfigLoader(file).load[Config]
However, this is not sufficient; the client also needs to provide 2 imports:
// to find implicit reader
import pureconfig.generic.auto._
// to use camel case, as suggested in the answer
import ConfigLoader.productHint
It would be great if the wrapper (ConfigLoader) could deal with the imports so that they are not left as the client's responsibility.
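In other words, I would like the implicits bundled on the wrapper's companion, roughly like this (a sketch; the blanket generic hint mirrors what pureconfig's docs suggest, and the body of ConfigLoader is elided):

import pureconfig._
import pureconfig.generic.ProductHint

object ConfigLoader {
  // Blanket CamelCase hint for every case class; with this on the wrapper,
  // clients write `import ConfigLoader._` once instead of importing the hint separately.
  // (The pureconfig.generic.auto._ import is still needed for derivation.)
  implicit def productHint[A]: ProductHint[A] =
    ProductHint[A](ConfigFieldMapping(CamelCase, CamelCase))

  // ... the existing apply(file)/load[Config] machinery ...
}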
Moreover, the imports are identified as "Unused" by IntelliJ IDEA, and when "Optimize imports" is triggered (or "Optimize imports on the fly" is enabled) the imports are auto-erased. According to this thread (the accepted answer is not working for me), we can solve this with "Mark import as always used...", but this is not an ideal solution: other team members would have to do the same for every project, or we would have to commit .idea to VCS (which I'd like to avoid).
I'm attaching a screenshot of my test (the pureconfig.generic.auto._ import has already been marked as always used):

Yes, you can. Take a look at the documentation on field mappings.
import pureconfig._
import pureconfig.generic.auto._
import pureconfig.generic.ProductHint
import pureconfig.module.yaml.loadYaml

// Case classes should be final ;)
final case class Config(log4JPath: String, registryURL: String, HOUR_FORMAT: String)

val yaml =
  """log4JPath: /conf/log4j.properties
    |registryURL: http://foo.com
    |HOUR_FORMAT: dd-mm-yy""".stripMargin

// A hint whose field mapping is the identity, so config keys match field names exactly.
implicit val identityHint: ProductHint[Config] =
  ProductHint[Config](new ConfigFieldMapping {
    def apply(fieldName: String) = fieldName // Basically the identity.
  })

loadYaml[Config](yaml)
// res: ConfigReader.Result[Config] = Right(Config("/conf/log4j.properties", "http://foo.com", "dd-mm-yy"))
(Note: this was tested in Ammonite, using pureconfig 0.11.0.)
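Instead of hand-rolling the identity mapping, pureconfig also ships naming conventions you can pair with ConfigFieldMapping; for fields that are pure camelCase this is equivalent:

implicit def camelCaseHint[A]: ProductHint[A] =
  ProductHint[A](ConfigFieldMapping(CamelCase, CamelCase))

For oddly named fields like HOUR_FORMAT the round-trip through the naming convention may not be the identity, so the explicit identity mapping above is the safer choice when field names mix styles.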

Related

Sealed trait and dynamic case objects

I have a few enumerations implemented as sealed traits and case objects. I prefer the ADT approach because of the exhaustiveness warnings, and mostly because we want to avoid type erasure issues. Something like this:
sealed abstract class Maker(val value: String) extends Product with Serializable {
  override def toString = value
}

object Maker {
  case object ChryslerMaker extends Maker("Chrysler")
  case object ToyotaMaker extends Maker("Toyota")
  case object NissanMaker extends Maker("Nissan")
  case object GMMaker extends Maker("General Motors")
  case object UnknownMaker extends Maker("")

  val tipos = List(ChryslerMaker, ToyotaMaker, NissanMaker, GMMaker, UnknownMaker)

  private val fromStringMap: Map[String, Maker] = tipos.map(s => s.toString -> s).toMap

  def apply(key: String): Option[Maker] = fromStringMap.get(key)
}
This is working well so far. Now we are considering giving other programmers access to our code so they can configure it on site. I see two potential problems:
1) People messing up and writing things like:
case object ChryslerMaker extends Maker("Nissan")
and people forgetting to update tipos.
I have been looking into using a configuration file (JSON or CSV) to provide these values and reading them as we do with plenty of other elements, but all the answers I have found rely on macros and seem to be extremely dependent on the Scala version used (2.12 for us).
What I would like to find is:
1a) (Preferred) a way to dynamically create the case objects from a list of strings, making sure the objects are named consistently with the value they hold
1b) (Acceptable) if this proves too hard, a way to obtain the objects and the values during the test phase
2) Check that the number of elements in the list matches the number of case objects created.
I forgot to mention: I have looked briefly at enumeratum, but I would prefer not to include additional libraries unless I really understand the pros and cons (and right now I am not sure how enumeratum compares with the ADT approach; if you think this is the best way and can point me to such a discussion, that would work great).
Thanks !
One idea that comes to mind is to create an SBT source generator task.
It will read an input JSON, CSV, XML or whatever file that is part of your project, and generate a Scala file.
// ----- File: project/VendorsGenerator.scala -----
import sbt.Keys._
import sbt._

/**
 * An SBT task that generates a managed source file with all Vendor case objects.
 */
object VendorsGenerator {
  // For demonstration, I will use this plain List[String] to generate the code,
  // you may change the code to read a file instead.
  // Or maybe this will be good enough.
  final val vendors: List[String] =
    List(
      "Chrysler",
      "Toyota",
      ...
      "Unknown"
    )

  val generatorTask = Def.task {
    // Make the 'tipos' List, which contains all vendors.
    val tipos =
      vendors
        .map(vendorName => s"${vendorName}Vendor")
        .mkString("val tipos: List[Vendor] = List(", ", ", ")")

    // Make a case object for each vendor.
    val vendorObjects = vendors.map { vendorName =>
      s"""case object ${vendorName}Vendor extends Vendor { override final val value: String = "${vendorName}" }"""
    }

    // Fill the code template.
    val code =
      List(
        List(
          "package vendors",
          "sealed trait Vendor extends Product with Serializable {",
          "def value: String",
          "override final def toString: String = value",
          "}",
          "object Vendors extends (String => Option[Vendor]) {"
        ),
        vendorObjects,
        List(
          tipos,
          // Lowercase the map keys so the case-insensitive lookup below actually matches.
          "private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap",
          "override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)",
          "}"
        )
      ).flatten

    // Save the new file to the managed sources dir.
    val vendorsFile = (sourceManaged in Compile).value / "vendors.scala"
    IO.writeLines(vendorsFile, code)
    Seq(vendorsFile)
  }
}
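For reference, with the List[String] shown above (middle entries elided), the file this task writes would look roughly like this (the generator emits unindented lines):

package vendors
sealed trait Vendor extends Product with Serializable {
def value: String
override final def toString: String = value
}
object Vendors extends (String => Option[Vendor]) {
case object ChryslerVendor extends Vendor { override final val value: String = "Chrysler" }
case object ToyotaVendor extends Vendor { override final val value: String = "Toyota" }
val tipos: List[Vendor] = List(ChryslerVendor, ToyotaVendor)
private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap
override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)
}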
Now, you can activate your source generator.
This task will be run each time, before the compile step.
// ----- File: build.sbt -----
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue
Please note that I suggest this because I have done it before, and because I don't have any macro or metaprogramming experience.
Also, note that this example relies a lot on Strings, which makes the code a little hard to understand and maintain.
BTW, I haven't used enumeratum, but from a quick look it seems like the best solution to this problem; a rough sketch follows.
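For comparison, here is a rough enumeratum version of the Maker ADT from the question (a sketch based on enumeratum's Enum/EnumEntry API; untested):

import enumeratum._

sealed abstract class Maker(override val entryName: String) extends EnumEntry

object Maker extends Enum[Maker] {
  // findValues is a macro that collects every case object below,
  // so there is no 'tipos' list to forget to update.
  val values = findValues

  case object ChryslerMaker extends Maker("Chrysler")
  case object ToyotaMaker extends Maker("Toyota")
  case object NissanMaker extends Maker("Nissan")
  case object GMMaker extends Maker("General Motors")
  case object UnknownMaker extends Maker("")
}

Maker.withNameOption("Toyota") // Some(ToyotaMaker)

This removes the "forgot to update tipos" failure mode, though it does not stop anyone from writing ChryslerMaker with the entry name "Nissan".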
Edit
I have my code ready to read a HOCON file and generate the matching code. My question now is where to place the Scala file in the project directory, and where the files will be generated. I am a little bit confused, because there seem to be multiple steps: 1) compile my Scala generator, 2) run the generator, and 3) compile and build the project. Is this right?
Your generator is not part of your project code, but of your meta-project (I know that sounds confusing; you may read this to understand it). As such, you place the generator inside the project folder at the root level (the same folder that holds the build.properties file specifying the sbt version).
If your generator needs some dependencies (I'm sure it does, for reading the HOCON), you place them in a build.sbt file inside that project folder.
If you plan to add unit tests to the generator, you may create an entire Scala project inside the meta-project (for reference, you may take a look at the project folder of an open-source project I work on; yes, I know, confusing again). My personal suggestion is that, more than testing the generator itself, you should test the generated file, or better, both.
The generated file will be automatically placed in the src_managed folder (which lives inside target and is thus ignored by your source code version control).
The exact path inside it is just for order, as everything inside the src_managed folder is included by default when compiling.
val vendorsFile = (sourceManaged in Compile).value / "vendors.scala" // Path to the file to write.
To access the values defined in the generated file from your source code, you only need to add a package to the generated file and import the values from that package in your code (as with any normal file).
You don't need to worry about anything related to compilation order; if you include your source generator in your build.sbt file, SBT will take care of everything automatically.
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue // Activate the source generator.
SBT will re-run your generator every time it needs to compile.
"BTW I get "not found: object sbt" on the imports".
If the generator is inside the meta-project space, it will find the sbt package by default; don't worry about it.

How to get the full class name of a dynamically created class in Scala

I have a situation where I have to get the fully qualified name of a class I generate dynamically in Scala. Here's what I have so far.
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox
val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
val generatedClass = "class Foo { def addOne(i: Int) = i + 1 }"
tb.compile(tb.parse(generatedClass))
val fooClass: String = ???
Clearly this is just a toy example, but I just don't know how to get the fully qualified name of Foo. I tried sticking a package declaration into the code, but that threw an error when calling tb.compile.
Does anyone know how to get the fully qualified class name or (even better) to specify the package that Foo gets compiled under?
Thanks
EDIT
After using the proposed solution I was able to get the class name. However, the next step is to register this class so I can take some actions later. Specifically, I'm trying to make use of UDTRegistration within Apache Spark to handle my own custom UserDefinedTypes. This strategy works fine when I manually create all the types; however, I want to use it to extend other types I may not know about.
After reading this it seems like what I'm trying to do might not be possible using code compiled at runtime using reflection. Maybe a better solution is to use Scala macros, but I'm very new to that area.
You may use define instead of compile to generate the new class and get its package:
val cls = tb.define(tb.parse(generatedClass).asInstanceOf[universe.ImplDef])
println(cls.fullName) // __wrapper$1$d1de39015284494799acd2875643f78e.Foo
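As a follow-up (a sketch; I have not run this exact snippet): the symbol returned by define can be used in trees passed to the same toolbox afterwards, so you can instantiate the generated class without spelling out the synthetic package name yourself:

import universe._
val result = tb.eval(q"new $cls().addOne(41)") // result: Any = 42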

Scala JSR 223 importing types/classes

The following example fails because the definition for Stuff can't be found:
package com.example

import javax.script.ScriptEngineManager

object Driver5 extends App {
  case class Stuff(s: String, d: Double)

  val e = new ScriptEngineManager().getEngineByName("scala")
  println(e.eval("""import Driver5.Stuff; Stuff("Hello", 3.14)"""))
}
I'm unable to find any import statement that allows me to use my own classes inside of the eval statement. Am I doing something wrong? How does one import classes to be used during eval?
EDIT: Clarified example code to elicit more direct answers.
The scripting engine does not know the context. It surely can't access all the local variables and imports of the calling code, since they are not available in the classfiles. (Well, variable names may optionally be available as debug information, but it is virtually impossible to use them for this purpose.)
I am not sure if there is a special API for that. Imports are different across various languages, so bringing an API that should fit them all can be difficult.
You should be able to add the imports to the eval-ed String. I am not sure if there is a better way to do this.
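For the example in the question, that would mean something like this (an untested sketch; it assumes the compiled com.example classes are on the engine's classpath, and uses the fully qualified name since the script is not compiled inside the com.example package):

println(e.eval("""import com.example.Driver5.Stuff; Stuff("Hello", 3.14)"""))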

Configure play-slick and samples

I'm currently trying to use Play! Framework 2.2 and play-slick (master branch).
In the play-slick code I would like to override the driver definition in order to add the Oracle driver (I'm using slick-extensions). In Config.scala of play-slick I just saw /** Extend this to add driver or change driver mapping */ ...
I'm coming from far, far away (currently reading Programming in Scala), so there's a lot to learn. So my questions are:
Can someone explain to me how to extend this Config object? This object is used in other classes... Is the cake pattern useful here?
Talking about the cake pattern: I read the computer-database example provided by play-slick. This sample uses the cake pattern and imports play.api.db.slick.Config.driver.simple._. If I'm using the Oracle driver I cannot use this import, am I wrong? How can I use the cake pattern to define an implicit session?
Thanks a lot.
Waiting for your advice, and I'm still studying the play-slick code at home :)
To extend the Config trait, I do not think the cake pattern is required. You should be able to create your Config object like this:
import scala.slick.driver.ExtendedDriver

object MyExtendedConfig extends play.api.db.slick.Config {
  override def driverByName: String => Option[ExtendedDriver] = { name: String =>
    super.driverByName(name) orElse Map("oracledriverstring" -> OracleDriver).get(name)
  }

  lazy val app = play.api.Play.current
  lazy val driver: ExtendedDriver = driver()(app)
}
To be able to use it, you only need to do import MyExtendedConfig.driver._ instead of import play.api.db.slick.Config.driver._. BTW, I see that the type of driverByName could have been a Map instead of a Function, which would make it easier to extend; that wouldn't break anything, it would just be more convenient.
I think Jonas Bonér's old blog is a great place to read about what the cake pattern is (http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di/). My naive understanding of it is that you have a cake pattern when you have layers that use self-types:
trait FooComponent { driver: ExtendedDriver =>
  import driver.simple._

  class Foo extends Table[Int]("") {
    //...
  }
}
There are 2 use cases for the cake pattern in slick/play-slick: 1) if you have tables that reference other tables (as in the computer-database sample); 2) to have control over exactly which database is used at which time, or if you use many different database types. By using the Config you do not really need the cake pattern as long as you only have 2 different DBs (one for prod and one for test), which is the point of the Config. A generic sketch of the layering follows.
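To make the layering concrete, here is a minimal, self-contained sketch of two components composed via self-types (plain Scala, no slick-specific types; all names are made up for illustration):

trait DriverComponent { def driverName: String }

trait FooComponent { this: DriverComponent =>
  def foo: String = s"Foo via $driverName"
}

trait BarComponent { this: DriverComponent with FooComponent =>
  def bar: String = s"Bar reusing $foo"
}

// The concrete 'cake': mix in every layer and satisfy the self-types once.
object Application extends DriverComponent with FooComponent with BarComponent {
  val driverName = "OracleDriver"
}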
Hope this answers your questions, and good luck reading Programming in Scala (loved that book :)

How to import several implicit at once?

I have several implicit contexts for my application, like:
import scala.collection.JavaConversions._
import HadoopConversion._
etc
Right now I have to copy-paste all those imports into each file. Is it possible to combine them in one file and make only one import?
A good technique that some libraries provide by default is bundling implicits into a trait. This way you can compose sets of implicits by defining a trait that extends other implicit-bundling traits. You can then use it at the top of your Scala file with the following import.
import MyBundleOfImplicits._
Or be more selective by mixing it in only where you need it.
object Main extends App with MyBundleOfImplicits {
  // ...
}
Unfortunately with something like JavaConversions, to use this method you will need to redefine all the implicits you want to use inside a trait.
trait JavaConversionsImplicits {
  import java.{lang => jl}
  import java.{util => ju}
  import scala.collection.JavaConversions

  implicit def asJavaIterable[A](i: Iterable[A]): jl.Iterable[A] = JavaConversions.asJavaIterable(i)
  implicit def asJavaIterator[A](i: Iterator[A]): ju.Iterator[A] = JavaConversions.asJavaIterator(i)
}

trait MyBundleOfImplicits extends JavaConversionsImplicits with OtherImplicits
Scala does not have first-class imports, so the answer to your question is no. But there is an exception for the Scala REPL: you can put all your imports in a file and then just tell the REPL where it is located. See this question.
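For example (a sketch): put the imports in a file called imports.scala and start the REPL with scala -i imports.scala; the -i flag preloads the file before the prompt appears.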
The other answers/comments are already comprehensive. But if you just want to reduce copy/pasting, all mainstream IDEs and text editors support text templating ('live templates' in IntelliJ IDEA, 'templates' in Eclipse, 'snippets' in TextMate...), which will definitely make your life easier.