Getting field name as string - scala

I have a trait that overrides toString to print the values of all fields:
/**
* Interface for classes that provide application configuration.
*/
trait Configuration {
/** Abstract fields are defined here, e.g.: */
def dbUrl: String
/**
* Returns a list of fields to be excluded by [[toString]]
*/
protected def toStringExclude: Seq[String]
/**
* Returns a String representation of this object that prints the values for all configuration fields.
*/
override def toString: String = {
val builder = new StringBuilder
val fields = this.getClass.getDeclaredFields
for (f <- fields) {
if (!toStringExclude.contains(f.getName)) {
f.setAccessible(true)
builder.append(s"${f.getName}: ${f.get(this)}\n")
}
}
builder.toString.stripSuffix("\n")
}
}
A concrete class currently looks like this:
class BasicConfiguration extends Configuration {
private val config = ConfigFactory.load
override val dbUrl: String = config.getString("app.dbUrl")
/**
* @inheritdoc
*/
override protected def toStringExclude: Seq[String] = Seq("config")
}
The problem is, if config were renamed at some point, the IDE would miss "config" in toStringExclude as it's just a string. So I'm trying to find a way to get the name of a field as a string, like getFieldName(config).

Using https://github.com/dwickern/scala-nameof,
import com.github.dwickern.macros.NameOf._
class BasicConfiguration extends Configuration {
private val config = ConfigFactory.load
override val dbUrl: String = config.getString("app.dbUrl")
/**
* @inheritdoc
*/
override protected def toStringExclude: Seq[String] = Seq(nameOf(config))
}

I don't like this approach and wouldn't recommend it, but for completeness, here it is:
import scala.util.Try

class BasicConfiguration extends Configuration {
private val config = ConfigFactory.load
override val dbUrl: String = config.getString("app.dbUrl")
private val excludeFields: Set[Any] = Set(config)
override protected val toStringExclude: Seq[String] = {
this.getClass
.getDeclaredFields
.filter(field => Try(field.get(this)).fold(_ => false, a => excludeFields.contains(a)))
.map(_.getName)
.toList
}
}
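For reference, here is a self-contained sketch of the reflection-based exclusion idea that compiles without any external libraries; `DemoConfiguration` and its values are hypothetical stand-ins for `BasicConfiguration`:

```scala
// Minimal, dependency-free sketch: toString reflects over declared fields
// and skips any field whose name appears in toStringExclude.
trait Configuration {
  protected def toStringExclude: Seq[String]

  override def toString: String = {
    val builder = new StringBuilder
    for (f <- this.getClass.getDeclaredFields) {
      if (!toStringExclude.contains(f.getName)) {
        f.setAccessible(true)
        builder.append(s"${f.getName}: ${f.get(this)}\n")
      }
    }
    builder.toString.stripSuffix("\n")
  }
}

// Hypothetical concrete class standing in for BasicConfiguration.
class DemoConfiguration extends Configuration {
  private val secret = "do-not-print"    // excluded from toString below
  val dbUrl: String = "jdbc:h2:mem:demo" // made-up value
  override protected def toStringExclude: Seq[String] = Seq("secret")
}
```

Calling `toString` on a `DemoConfiguration` prints the `dbUrl` line but omits `secret`; note that `getDeclaredFields` makes no ordering guarantee, so with several fields the output order may vary.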

Related

Scala Factory Design Pattern,what is "loose" coupling?

I am studying about the Factory Design Pattern in Scala.One of the benefits of this method is Loose Coupling between Object Creation logic and Client (from http://www.journaldev.com/10350/factory-design-pattern-in-scala). How can we achieve this and why we need loose coupling at all? Can someone give some examples and explain this?
As I understand it, you are looking for an example of the Factory Pattern in Scala; you can find the use case with some simple examples here.
The factory pattern is most useful when creating API library classes (classes which can be used by other classes). Hopefully the example below explains it clearly:
trait ContentAPI {
  def getContentById(id: Int): String
  def getContentByTitle(title: String): String
  def getAllContent(title: String): String
}
object ContentAPI {
  protected class ContentAPIImpl(config: APIConfig) extends ContentAPI {
    /**
     * get all content of a page parsing the page title
     *
     * @param query title parsed
     * @return all content as a String
     */
    override def getAllContent(query: String): String = {
      val pageContent: String = " Some text"
      pageContent
    }
    /**
     * get only the content body text parsing the page title
     *
     * @param query title parsed
     * @return content as a String
     */
    override def getContentByTitle(query: String): String = {
      val pageContent: String = " Some String "
      pageContent
    }
    /**
     * get only the content body text parsing the page id
     *
     * @param query id parsed
     * @return content as a string
     */
    override def getContentById(query: Int): String = {
      val pageContent: String = "some string"
      pageContent
    }
  }
  def apply(config: APIConfig): ContentAPI = {
    new ContentAPIImpl(config)
  }
}
class APIConfig(path: String) {
  val inputPath = path
}
/** Configuration object for the Content API */
object APIConfig {
  protected class APIConfigBuilder(dataPath: String) {
    def setDataPath(path: String): APIConfigBuilder = {
      new APIConfigBuilder(path)
    }
    def getOrCreate(): APIConfig = {
      new APIConfig(dataPath)
    }
  }
  def withBuilder(): APIConfigBuilder = {
    new APIConfigBuilder("")
  }
}
/** Client Object */
object APIClient {
  def main(args: Array[String]): Unit = {
    val dataPath = "path"
    val contentAPI = ContentAPI(APIConfig.withBuilder().setDataPath(dataPath).getOrCreate())
    println(contentAPI.getAllContent("Anarchism"))
    println(contentAPI.getContentByTitle("Anarchism"))
    println(contentAPI.getContentById(12))
  }
}
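To make the "loose coupling" point concrete, here is a tiny, self-contained sketch (all names invented for illustration): the client codes against a trait, and only the factory knows which concrete class gets constructed, so swapping implementations never touches client code.

```scala
// The client depends only on this trait, never on a concrete class.
trait Greeter {
  def greet(name: String): String
}

class PlainGreeter extends Greeter {
  def greet(name: String): String = s"Hello, $name"
}

class LoudGreeter extends Greeter {
  def greet(name: String): String = s"HELLO, ${name.toUpperCase}!"
}

object Greeter {
  // Creation logic lives here; changing which class is returned
  // requires no changes in any client.
  def apply(loud: Boolean = false): Greeter =
    if (loud) new LoudGreeter else new PlainGreeter
}
```

A client only ever writes `Greeter().greet("world")`; it compiles and runs unchanged no matter which implementation the factory picks. That independence from the object-creation logic is the loose coupling the article refers to.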

too many arguments for method apply: (car: play.api.data.Form[models.CarroFormData])

I have the error below and can't find the solution or how to debug what's being passed to apply.
Can anyone help?
too many arguments for method apply: (car: play.api.data.Form[models.CarroFormData])(implicit messages: play.api.i18n.Messages)play.twirl.api.HtmlFormat.Appendable in class index
Controller Form
def add = Action { implicit request =>
CarroForm.form.bindFromRequest.fold(
// if any error in submitted data
errorForm => Ok(views.html.admin.index(errorForm, Seq.empty[Carro])),
data => repo.create(carro.name, carro.description, carro.img, carro.keywords).map { _ =>
// If successful, we simply redirect to the index page.
Redirect(routes.application.index)
})
}
This is the Model
package dal
import javax.inject.{ Inject, Singleton }
import play.api.db.slick.DatabaseConfigProvider
import slick.driver.JdbcProfile
import models._
import scala.concurrent.{ Future, ExecutionContext }
/**
* A repository for people.
*
* @param dbConfigProvider The Play db config provider. Play will inject this for you.
*/
@Singleton
class CarroRepository @Inject() (dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext) {
// We want the JdbcProfile for this provider
private val dbConfig = dbConfigProvider.get[JdbcProfile]
// These imports are important, the first one brings db into scope, which will let you do the actual db operations.
// The second one brings the Slick DSL into scope, which lets you define the table and other queries.
import dbConfig._
import driver.api._
/**
* Here we define the table. It will have a name of people
*/
private class CarroTable(tag: Tag) extends Table[Carro](tag, "carro") {
/** The ID column, which is the primary key, and auto incremented */
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
/** The name column */
def name = column[String]("name")
/** The description column */
def description = column[String]("description")
/** The img column */
def img = column[String]("img")
/** The keywords column */
def keywords = column[String]("keywords")
/**
* This is the tables default "projection".
*
* It defines how the columns are converted to and from the Person object.
*
* In this case, we are simply passing the id, name and page parameters to the Person case classes
* apply and unapply methods.
*/
def * = (id, name, description, img, keywords) <> ((Carro.apply _).tupled, Carro.unapply)
}
/**
* The starting point for all queries on the people table.
*/
private val carro = TableQuery[CarroTable]
/**
* Create a person with the given name and age.
*
* This is an asynchronous operation, it will return a future of the created person, which can be used to obtain the
* id for that person.
*/
def create(name: String, description: String, img:String, keywords: String): Future[Carro] = db.run {
// We create a projection of just the name and age columns, since we're not inserting a value for the id column
(carro.map(p => (p.name, p.description, p.img, p.keywords))
// Now define it to return the id, because we want to know what id was generated for the person
returning carro.map(_.id)
// And we define a transformation for the returned value, which combines our original parameters with the
// returned id
into ((nameAge, id) => Carro(id, nameAge._1, nameAge._2, nameAge._3, nameAge._4))
// And finally, insert the person into the database
) += (name, description, img, keywords)
}
/**
* List all the people in the database.
*/
def list(): Future[Seq[Carro]] = db.run {
carro.result
}
}
package models
import play.api.data.Form
import play.api.data.Forms._
import play.api.libs.json._
case class Carro(id: Long, name:String, description:String, img:String, keywords:String)
case class CarroFormData(name: String, description: String, img: String, keywords: String)
object CarroForm {
val form = Form(
mapping(
"name" -> nonEmptyText,
"description" -> nonEmptyText,
"img" -> nonEmptyText,
"keywords" -> nonEmptyText
)(CarroFormData.apply)(CarroFormData.unapply)
)
}
object Carros {
  var carros: Seq[Carro] = Seq()
  def add(carro: Carro): String = {
    carros = carros :+ carro.copy(id = carros.length) // manual id increment
    "User successfully added"
  }
  def delete(id: Long): Option[Int] = {
    val originalSize = carros.length
    carros = carros.filterNot(_.id == id)
    Some(originalSize - carros.length) // returning the number of deleted users
  }
  def get(id: Long): Option[Carro] = carros.find(_.id == id)
  def listAll: Seq[Carro] = carros
  implicit val carroFormat = Json.format[Carro]
}
View Code
@(car: Form[CarroFormData])(implicit messages: Messages)
@import helper._
@main(new Main("Car Dealers", "Compra e venda de carros", "logo.png", "carro, compra, venda")) {
  <div class="container">
    <h1>Hello</h1>
    @form(routes.AdminCarro.add()) {
      @inputText(car("name"))
      @inputText(car("description"))
      @inputText(car("img"))
      @inputText(car("keywords"))
      <div class="buttons">
        <input type="submit" value="Add Car"/>
      </div>
    }
  </div>
}
At your controller code:
def add = Action { implicit request =>
CarroForm.form.bindFromRequest.fold(
// if any error in submitted data
errorForm => Ok(views.html.admin.index(errorForm, Seq.empty[Carro])),
data => repo.create(carro.name, carro.description, carro.img, carro.keywords).map { _ =>
// If successful, we simply redirect to the index page.
Redirect(routes.application.index)
}
)
}
At the errorForm, you are calling the index view with two arguments:
Ok(views.html.admin.index(errorForm, Seq.empty[Carro]))
But your view declares just one argument:
@(car: Form[CarroFormData])(implicit messages: Messages)
Just remove the Seq.empty[Carro] from the call in your controller and everything should work as expected. If you are still getting the same error, check whether this view is called the same wrong way (with two arguments) anywhere else, or try to sbt clean your project before sbt run.
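Alternatively, if the index page genuinely needs the car list as well, keep the two-argument call and widen the view's signature instead (the parameter name `carros` here is illustrative):

```
@(car: Form[CarroFormData], carros: Seq[Carro])(implicit messages: Messages)
```

With that signature, `views.html.admin.index(errorForm, Seq.empty[Carro])` compiles as-is.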

scala: how to use a class like a variable

Is it possible to refer to different classes on each pass of an iteration?
I have a substantial number of Hadoop Hive tables, and will be processing them with Spark. Each of the tables has an auto-generated class, and I would like to loop through the tables, instead of the tedious, non-code reuse copy/paste/handCodeIndividualTableClassNames technique resorted to first.
import myJavaProject.myTable0Class
import myJavaProject.myTable1Class
object rawMaxValueSniffer extends Logging {
/* tedious sequential: it works, and sometimes a programmer's gotta do... */
def tedious(args: Array[String]): Unit = {
val tablePaths = List("path0_string_here","path1_string")
var maxIds = ArrayBuffer[Long]()
FileInputFormat.setInputPaths(conf, tablePaths(0))
AvroReadSupport.setAvroReadSchema(conf.getConfiguration, myTable0Class.getClassSchema)
ParquetInputFormat.setReadSupportClass(conf, classOf[AvroReadSupport[myTable0Class]])
val records0 = sc.newAPIHadoopRDD(conf.getConfiguration,
classOf[ParquetInputFormat[myTable0Class]],
classOf[Void],
classOf[myTable0Class]).map(x => x._2)
maxIds += records0.map(_.getId).collect().max
FileInputFormat.setInputPaths(conf, tablePaths(1))
AvroReadSupport.setAvroReadSchema(conf.getConfiguration, myTable1Class.getClassSchema)
ParquetInputFormat.setReadSupportClass(conf, classOf[AvroReadSupport[myTable1Class]])
val records1 = sc.newAPIHadoopRDD(conf.getConfiguration,
classOf[ParquetInputFormat[myTable1Class]],
classOf[Void],
classOf[myTable1Class]).map(x => x._2)
maxIds += records1.map(_.getId).collect().max
}
/* class as variable, used in a loop. I have seen the mountain... */
def hopedFor(args: Array[String]): Unit = {
val tablePaths = List("path0_string_here","path1_string")
var maxIds = ArrayBuffer[Long]()
val tableClasses = List(classOf[myTable0Class],classOf[myTable1Class]) /* error free, but does not get me where I'm trying to go */
var counter=0
tableClasses.foreach { tc =>
FileInputFormat.setInputPaths(conf, tablePaths(counter))
AvroReadSupport.setAvroReadSchema(conf.getConfiguration, tc.getClassSchema)
ParquetInputFormat.setReadSupportClass(conf, classOf[AvroReadSupport[tc]])
val records = sc.newAPIHadoopRDD(conf.getConfiguration,
classOf[ParquetInputFormat[tc]],
classOf[Void],
classOf[tc]).map(x => x._2)
maxIds += records.map(_.getId).collect().max /* all the myTableXXX classes have getId() */
counter += 1
}
}
}
/* the classes being referenced... */
@org.apache.avro.specific.AvroGenerated
public class myTable0Class extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
public static final org.apache.avro.Schema SCHEMA$ = new org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":\"rsivr_surveyquestiontypes\",\"namespace\":\"myJavaProject\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"description\",\"type\":\"string\"},{\"name\":\"scale_range\",\"type\":\"int\"}]}");
public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }
@Deprecated public int id;
yada.yada.yada0
}
@org.apache.avro.specific.AvroGenerated
public class myTable1Class extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
public static final org.apache.avro.Schema SCHEMA$ = new org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":\"rsivr_surveyresultdetails\",\"namespace\":\"myJavaProject\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"survey_dts\",\"type\":\"string\"},{\"name\":\"survey_id\",\"type\":\"int\"},{\"name\":\"question\",\"type\":\"int\"},{\"name\":\"caller_id\",\"type\":\"string\"},{\"name\":\"rec_msg\",\"type\":\"string\"},{\"name\":\"note\",\"type\":\"string\"},{\"name\":\"lang\",\"type\":\"string\"},{\"name\":\"result\",\"type\":\"string\"}]}");
public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }
@Deprecated public int id;
yada.yada.yada1
}
Something like this, perhaps:
def doStuff[T <: SpecificRecordBase : ClassTag](index: Int, schema: => Schema, clazz: Class[T]) = {
FileInputFormat.setInputPaths(conf, tablePaths(index))
AvroReadSupport.setAvroReadSchema(conf.getConfiguration, schema)
ParquetInputFormat.setReadSupportClass(conf, classOf[AvroReadSupport[T]])
val records = sc.newAPIHadoopRDD(conf.getConfiguration,
classOf[ParquetInputFormat[T]],
classOf[Void],
clazz).map(x => x._2)
maxIds += records.map(_.getId).collect().max
}
Seq(
(classOf[myTable0Class], myTable0Class.getClassSchema _),
(classOf[myTable1Class], myTable1Class.getClassSchema _)
).zipWithIndex
.foreach { case ((clazz, schema), index) => doStuff(index, schema(), clazz) }
You could use reflection to invoke getClassSchema instead (clazz.getMethod("getClassSchema").invoke(null).asInstanceOf[Schema]); then you would not need to pass it in as a parameter, just clazz would be enough, but that's kinda cheating ... I like this approach a bit better.
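A minimal sketch of that reflective variant, using java.lang.Integer.parseInt as a stand-in static method since the Avro-generated classes are not available here:

```scala
// Invoke a static method reflectively: a null receiver means "static".
// For an Avro-generated class this would be getMethod("getClassSchema")
// with no arguments, cast to Schema.
def invokeStatic[T](clazz: Class[_], name: String, args: AnyRef*): T = {
  val method = clazz.getMethod(name, args.map(_.getClass): _*)
  method.invoke(null, args: _*).asInstanceOf[T]
}
```

One caveat: getMethod does an exact lookup on the parameter types, so this simple `args.map(_.getClass)` trick only finds overloads whose declared parameter types match the arguments' runtime classes (it works for `parseInt(String)`, but would miss primitive-typed parameters).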

Scala: class type required but T found

I've found similar issues of this particular problem, however the problem was due to someone trying to instantiate T directly. Here I'm trying to create a trait that is a general interface to extend classes and store them automatically in a database such as Riak using classOf[T]. Using Scala 2.10.
Here's my code:
trait RiakWriteable[T] {
/**
* bucket name of data in Riak holding class data
*/
def bucketName: String
/**
* determine whether secondary indices will be added
*/
def enable2i: Boolean
/**
* the actual bucket
*/
val bucket: Bucket = enable2i match {
case true => DB.client.createBucket(bucketName).enableForSearch().execute()
case false => DB.client.createBucket(bucketName).disableSearch().execute()
}
/**
* register the scala module for Jackson
*/
val converter = {
val c = new JSONConverter[T](classOf[T], bucketName)
JSONConverter.registerJacksonModule(DefaultScalaModule)
c
}
/**
* store operation
*/
def store = bucket.store(this).withConverter(converter).withRetrier(DB.retrier).execute()
/**
* fetch operation
*/
def fetch(id: String): Option[T] = {
val u = bucket.fetch(id, classOf[T]).withConverter(converter).withRetrier(DB.retrier).r(DB.N_READ).execute()
u match {
case null => None
case _ => Some(u)
}
}
}
Compiler error is class type required but T found.
Example usage (pseudo-code):
class Foo
object Foo extends RiakWriteable[Foo]
Foo.store(object)
So I'm guessing that a manifest of T is not being properly defined. Do I need to implicitly define this somewhere?
Thanks!
Here's an interim solution, though it leaves out the converter registration (which I may leave out permanently for this use case, not sure yet).
import scala.reflect.{ClassTag, classTag}

/**
* trait for adding write methods to classes
*/
trait RiakWriteable[T] {
/**
* bucket name of data in Riak holding class data
*/
def bucketName: String
/**
* determine whether secondary indices will be added
*/
def enable2i: Boolean
/**
* the actual bucket
*/
val bucket: Bucket = enable2i match {
case true => DB.client.createBucket(bucketName).enableForSearch().execute()
case false => DB.client.createBucket(bucketName).disableSearch().execute()
}
/**
* store operation
*/
def store(o: T) = bucket.store(o).withRetrier(DB.retrier).execute()
/**
* fetch operation
*/
def fetch(id: String)(implicit m: ClassTag[T]) = {
val u = bucket.fetch(id, classTag[T].runtimeClass).withRetrier(DB.retrier).r(DB.N_READ).execute()
u match {
case null => None
case _ => Some(u)
}
}
}
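The key move above is the ClassTag: classOf[T] demands a class literal, while an implicit ClassTag[T] carries the erased runtime class of T to the call site. A minimal sketch of the mechanism, independent of Riak:

```scala
import scala.reflect.{ClassTag, classTag}

// Recover the erased runtime class of T via an implicit ClassTag.
def runtimeClassOf[T](implicit tag: ClassTag[T]): Class[_] = tag.runtimeClass

// The context-bound spelling [T: ClassTag] is equivalent to the implicit
// parameter above; here it drives a reflective no-arg instantiation.
def defaultInstance[T: ClassTag]: T =
  classTag[T].runtimeClass.getDeclaredConstructor().newInstance().asInstanceOf[T]
```

So `bucket.fetch(id, classTag[T].runtimeClass)` works where `bucket.fetch(id, classOf[T])` did not, because the runtime class is supplied implicitly by each concrete caller (e.g. `object Foo extends RiakWriteable[Foo]`).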

How to create an instance of a model with the ebean framework and scala in Play 2.2

I would like to instantiate a model object of the Ebean project with Scala and the Play 2.2 framework. I am facing an issue with the auto-generated ID and the class parameters / abstraction:
@Entity
class Task(@Required val label:String) extends Model{
@Id
val id: Long
}
object Task {
var find: Model.Finder[Long, Task] = new Model.Finder[Long, Task](classOf[Long], classOf[Task])
def all(): List[Task] = find.all.asScala.toList
def create(label: String) {
val task = new Task(label)
task.save
}
def delete(id: Long) {
find.ref(id).delete
}
}
The error : "class Task needs to be abstract, since value id is not defined". Any idea to avoid this problem?
I found the solution thanks to this link: http://www.avaje.org/topic-137.html
import javax.persistence._
import play.db.ebean._
import play.data.validation.Constraints._
import scala.collection.JavaConverters._
@Entity
@Table( name="Task" )
class Task{
@Id
var id:Int = 0
@Column(name="title")
var label:String = null
}
/**
* Task Data Access Object.
*/
object Task extends Dao(classOf[Task]){
def all(): List[Task] = Task.find.findList().asScala.toList
def create(label: String) {
var task = new Task
task.label = label
Task.save(task)
}
def delete(id: Long) {
Task.delete(Task.ref(id))
}
}
And the DAO :
/**
* Dao for a given Entity bean type.
*/
abstract class Dao[T](cls:Class[T]) {
/**
* Find by Id.
*/
def find(id:Any):T = {
return Ebean.find(cls, id)
}
/**
* Find with expressions and joins etc.
*/
def find():com.avaje.ebean.Query[T] = {
return Ebean.find(cls)
}
/**
* Return a reference.
*/
def ref(id:Any):T = {
return Ebean.getReference(cls, id)
}
/**
* Save (insert or update).
*/
def save(o:Any):Unit = {
Ebean.save(o);
}
/**
* Delete.
*/
def delete(o:Any):Unit = {
Ebean.delete(o);
}
}