How to instantiate lexical.Scanner in a JavaTokenParsers class? - scala

I am writing a parser which inherits from JavaTokenParsers, and in it I have a function as follows:
import scala.util.parsing.combinator.lexical._
import scala.util.parsing._
import scala.util.parsing.combinator.RegexParsers;
import scala.util.parsing.combinator.syntactical.StdTokenParsers
import scala.util.parsing.combinator.token.StdTokens
import scala.util.parsing.combinator.lexical.StdLexical
import scala.util.parsing.combinator.lexical.Scanners
import scala.util.parsing.combinator.lexical.Lexical
import scala.util.parsing.input._
import scala.util.parsing.combinator.syntactical._
import scala.util.parsing.combinator.token
import scala.util.parsing.combinator._
class ParseExp extends JavaTokenParsers {
  // some code for parsing

  def parse(s: String) = {
    val tokens = new lexical.Scanner(s)
    phrase(expr)(tokens)
  }
}
I am getting the following error :
type Scanner is not a member of package scala.util.parsing.combinator.lexical
[error] val tokens = new lexical.Scanner(s)
[error] ^
Why do I get this error even though I have imported all these packages?

JavaTokenParsers does not implement the Scanners trait, so you would also need to extend that trait (or a trait that extends it) in order to have access to this class.
Unless your expr parser takes the Reader as a parameter (rather than receiving it through its apply method), you would also need, if I'm not mistaken, to override the element type and the input type to make this work.
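For instance, if you do want token-based scanning, one route (a sketch, not the only way; ParseExpTokens and its expr are made-up placeholders) is StandardTokenParsers, whose lexical member is a StdLexical and therefore does provide a Scanner:
import scala.util.parsing.combinator.syntactical.StandardTokenParsers

class ParseExpTokens extends StandardTokenParsers {
  // Register the delimiters the lexer should recognise.
  lexical.delimiters ++= List("+", "-", "*", "/", "(", ")")

  // Hypothetical token-level expression parser; replace with your own grammar.
  def expr: Parser[Any] = numericLit ~ rep(("+" | "-") ~ numericLit)

  def parse(s: String): ParseResult[Any] =
    phrase(expr)(new lexical.Scanner(s))
}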
Also, is there any reason you need a Reader[Token]?
If you don't need a Reader[Token], then since you give your input as a plain string,
phrase(expr)(new CharSequenceReader(s))
should work.
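For example, a minimal sketch of that approach, keeping JavaTokenParsers (the expr here is only a placeholder for whatever grammar you have):
import scala.util.parsing.combinator.JavaTokenParsers
import scala.util.parsing.input.CharSequenceReader

class ParseExp extends JavaTokenParsers {
  // Placeholder expression parser; substitute your real `expr`.
  def expr: Parser[Any] = floatingPointNumber

  def parse(s: String): ParseResult[Any] =
    phrase(expr)(new CharSequenceReader(s))
}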

Providing implicit evidence for context bounds on Object

I'm trying to write some abstractions in some Spark Scala code, but I'm running into issues when using objects. I'm using Spark's Encoder, which converts case classes to database schemas, as an example here, but I think this question applies to any context bound.
Here is a minimal code example of what I'm trying to do:
package com.sample.myexample
import org.apache.spark.sql.Encoder
import scala.reflect.runtime.universe.TypeTag
case class MySparkSchema(id: String, value: Double)
abstract class MyTrait[T: TypeTag: Encoder]
object MyObject extends MyTrait[MySparkSchema]
Which fails with the following compilation error:
Unable to find encoder for type com.sample.myexample.MySparkSchema. An implicit Encoder[com.sample.myexample.MySparkSchema] is needed to store com.sample.myexample.MySparkSchema instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
I tried defining the implicit evidence in the object like this (the import statement was suggested by IntelliJ, but it looks a bit weird):
import com.sample.myexample.MyObject.encoder
object MyObject extends MyTrait[MySparkSchema] {
  implicit val encoder: Encoder[MySparkSchema] = Encoders.product[MySparkSchema]
}
Which fails with the error message
MyTrait.scala:13:25: super constructor cannot be passed a self reference unless parameter is declared by-name
One other thing I tried is to convert the object to a class and provide implicit evidence to the constructor:
class MyObject(implicit evidence: Encoder[MySparkSchema]) extends MyTrait[MySparkSchema]
This compiles and works fine, but at the expense of MyObject now being a class instead.
Question: Is it possible to provide implicit evidence for the context bounds when extending a trait? Or does the implicit evidence force me to make a constructor and use class instead?
Your first error almost gives you the solution: you have to import spark.implicits._ for Product types.
You could do this:
val spark: SparkSession = SparkSession.builder().getOrCreate()
import spark.implicits._
Full Example
package com.sample.myexample

import org.apache.spark.sql.{Encoder, SparkSession}
import scala.reflect.runtime.universe.TypeTag

case class MySparkSchema(id: String, value: Double)

abstract class MyTrait[T: TypeTag: Encoder]

// A val cannot sit at the top level of a file, so wrap the session in an object.
object Spark { val spark: SparkSession = SparkSession.builder().getOrCreate() }
import Spark.spark.implicits._

object MyObject extends MyTrait[MySparkSchema]
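If you would rather not create a SparkSession at definition time, another option (a sketch; SchemaImplicits is a made-up name) is to define the encoder with Encoders.product in a separate object and import it. Because the implicit is then not a member of MyObject itself, it also avoids the "self reference" error from the question:
import org.apache.spark.sql.{Encoder, Encoders}

// Hypothetical holder object for the implicit evidence.
object SchemaImplicits {
  implicit val mySparkSchemaEncoder: Encoder[MySparkSchema] = Encoders.product[MySparkSchema]
}
import SchemaImplicits._

object MyObject extends MyTrait[MySparkSchema]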

Initialising variables in a mock class scala

I am writing unit tests for an Akka actor model implementation. The system contains classes and traits that need to be initialised. My issue lies with testing the methods. When I mock the required parameters for a class, the IntelliJ compiler error goes away; however, all of the variables are set to null.
I have attempted to use
when(mock.answer).thenReturn(42)
and directly assigning the variables
val mock.answer = 42
The above two throw compilation errors: "when" is not recognised, and directly assigning values causes a runtime error.
Any insight would be much appreciated.
I am not sure if I understood your issue correctly, but try the self-contained code snippet below and let me know if it is not clear enough:
import org.junit.runner.RunWith
import org.scalatest.junit.JUnitRunner
import org.scalatest.mockito.MockitoSugar
import org.scalatest.{FunSuite, Matchers}
import org.mockito.Mockito.when
@RunWith(classOf[JUnitRunner])
class MyTest extends FunSuite with Matchers with MockitoSugar {

  trait MyMock {
    def answer: Int
  }

  test("my mock") {
    val myMock = mock[MyMock]
    when(myMock.answer).thenReturn(42)
    myMock.answer should be(42)
  }
}
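For this to compile you need ScalaTest, Mockito, and JUnit on the test classpath. Something along these lines in build.sbt should do; the versions are only illustrative, and this assumes ScalaTest 3.0.x, where org.scalatest.mockito.MockitoSugar still ships with scalatest itself:
libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest"    % "3.0.5"  % Test,  // FunSuite, Matchers, MockitoSugar, JUnitRunner
  "org.mockito"   %  "mockito-core" % "2.23.4" % Test,  // the underlying mocking engine
  "junit"         %  "junit"        % "4.12"   % Test   // needed by @RunWith / JUnitRunner
)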

`exception during macro expansion: [error] scala.reflect.macros.TypecheckException` when using quill

I'm pretty new to Scala, Play, and Quill and I'm not sure what I'm doing wrong. I have my project split up into models, repositories, and services (and controllers, but that is not relevant for this question). Right now, I'm getting this error for the lines in my services that are making changes to the database:
exception during macro expansion: scala.reflect.macros.TypecheckException: Can't find implicit `Decoder[models.AgentId]`. Please, do one of the following things:
1. ensure that implicit `Decoder[models.AgentId]` is provided and there are no other conflicting implicits;
2. make `models.AgentId` `Embedded` case class or `AnyVal`.
And I'm getting this error for all the other lines in my services:
exception during macro expansion: [error] scala.reflect.macros.TypecheckException: not found: value quote
I found a similar ticket, but the same fix does not work for me (I am already requiring ctx as an implicit variable, so I can't import it as well). I'm totally at a loss, and if anyone has any suggestions, I would be happy to try anything. I'm using the following versions:
Scala 2.12.4
Quill 2.3.2
Play 2.6.6
The code:
db/package.scala
package db

import io.getquill.{PostgresJdbcContext, SnakeCase}

package object db {
  class DBContext(config: String) extends PostgresJdbcContext(SnakeCase, config)

  trait Repository {
    val ctx: DBContext
  }
}
repositories/AgentsRepository.scala
package repositories

import db.db.Repository
import models.{Agent, AgentId}

trait AgentsRepository extends Repository {
  import ctx._

  val agents = quote {
    query[Agent]
  }

  def agentById(id: AgentId) = quote {
    agents.filter(_.id == lift(id))
  }

  def insertAgent(agent: Agent) = quote {
    query[Agent].insert(
      _.identifier -> lift(agent.identifier)
    ).returning(_.id)
  }
}
services/AgentsService.scala
package services

import db.db.DBContext
import models.{Agent, AgentId}
import repositories.AgentsRepository

import scala.concurrent.ExecutionContext

class AgentService(implicit val ex: ExecutionContext, val ctx: DBContext)
  extends AgentsRepository {

  def list: List[Agent] =
    ctx.run(agents)

  def find(id: AgentId): List[Agent] =
    ctx.run(agentById(id))

  def create(agent: Agent): AgentId = {
    ctx.run(insertAgent(agent))
  }
}
models/Agent.scala
package models

import java.time.LocalDateTime

case class AgentId(value: Long) extends AnyVal

case class Agent(
  id: AgentId,
  identifier: String
)
I am already requiring ctx as an implicit variable, so I can't import it as well
You don't need to import the context itself, but you do need to import everything inside it in order to make it work:
import ctx._
Make sure to place it before ctx.run is called, as in https://github.com/getquill/quill/issues/998#issuecomment-352189214
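Applied to the code in the question, the service would then look roughly like this (a sketch; everything else stays as above):
package services

import db.db.DBContext
import models.{Agent, AgentId}
import repositories.AgentsRepository

import scala.concurrent.ExecutionContext

class AgentService(implicit val ex: ExecutionContext, val ctx: DBContext)
  extends AgentsRepository {

  // Brings the context's quote/lift machinery and encoders into scope
  // before any ctx.run call in this class.
  import ctx._

  def list: List[Agent] = ctx.run(agents)

  def find(id: AgentId): List[Agent] = ctx.run(agentById(id))

  def create(agent: Agent): AgentId = ctx.run(insertAgent(agent))
}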

Bundle imports in Scala

In my Scala project, almost all my files have these imports:
import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.auto._
import eu.timepit.refined.numeric._
import spire.math._
import spire.implicits._
import com.wix.accord._
import com.wix.accord.dsl._
import codes.reactive.scalatime._
import better.files._
import java.time._
import scala.collection.mutable
...
...
What is the best way to DRY this in Scala? Can I specify all of them for my project (using some kind of sbt plugin?) or at the package level?
I've seen a few approaches that kinda solve what you're looking for. Check out
Imports defined
https://github.com/mongodb/casbah/blob/master/casbah-core/src/main/scala/Implicits.scala
Small example of this approach:
object Imports extends Imports with commons.Imports with query.Imports with query.dsl.FluidQueryBarewordOps
object BaseImports extends BaseImports with commons.BaseImports with query.BaseImports
object TypeImports extends TypeImports with commons.TypeImports with query.TypeImports

trait Imports extends BaseImports with TypeImports with Implicits

@SuppressWarnings(Array("deprecation"))
trait BaseImports {
  // ...
  val WriteConcern = com.mongodb.casbah.WriteConcern
  // More here ...
}

trait TypeImports {
  // ...
  type WriteConcern = com.mongodb.WriteConcern
  // ...
}
Imports used
https://github.com/mongodb/casbah/blob/master/casbah-core/src/main/scala/MongoClient.scala
When they use this import object, it unlocks all the type aliases for you. For example, WriteConcern:
import com.mongodb.casbah.Imports._
// ...
def setWriteConcern(concern: WriteConcern): Unit = underlying.setWriteConcern(concern)
Essentially they wrap up all the imports into a common Imports object, and then just use import com.mycompany.Imports._
Doobie does something similar where most of the end-users just import doobie.imports._
https://github.com/tpolecat/doobie/blob/series/0.3.x/yax/core/src/main/scala/doobie/imports.scala
Again, a sample from this pattern:
object imports extends ToDoobieCatchSqlOps with ToDoobieCatchableOps {

  /**
   * Alias for `doobie.free.connection`.
   * @group Free Module Aliases
   */
  val FC = doobie.free.connection

  /**
   * Alias for `doobie.free.statement`.
   * @group Free Module Aliases
   */
  val FS = doobie.free.statement

  // More here ...
}
The main difference between this approach and the package object style is that you get more control over what to import and when. I've used both patterns: usually a package object for common utility methods I'll need across an internal package, and, for libraries, specifically for the users of my code, an import object to which I can attach certain implicit definitions, like in doobie above, so that a single import unlocks a DSL syntax for the user.
I would probably go with the scala.Predef approach: basically, alias the types and expose the objects I want to make available. So e.g.
package com.my

package object project {
  type LocalDate = java.time.LocalDate
  type LocalDateTime = java.time.LocalDateTime
  type LocalTime = java.time.LocalTime

  import scala.collection.mutable
  type MutMap[A, B] = mutable.Map[A, B]
  val MutMap = mutable.Map

  // And so on....
}
Now, wherever you start a file with package com.my.project, all of the above will be automatically available. Btw, kudos also to @som-snytt for pointing this out.
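For illustration, a file in that package can then use the aliases without any extra imports (Example is just a made-up name):
package com.my.project

object Example {
  // MutMap (both the type alias and the value alias) comes from the package object.
  val counts: MutMap[String, Int] = MutMap("a" -> 1, "b" -> 2)

  // The LocalDate/LocalDateTime type aliases are in scope too; to call companion
  // methods like LocalDate.now() you would also add `val LocalDate = java.time.LocalDate`
  // to the package object.
  def startOfDay(d: LocalDate): LocalDateTime = d.atStartOfDay()
}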

Scala MapReduce : [error] method reduce overrides nothing

I got stuck on this error. I wrote my TableReducer code like this:
class treducer extends TableReducer[Text, IntWritable, ImmutableBytesWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable], context: Reducer[Text, IntWritable, ImmutableBytesWritable, Mutation]#Context) {
    var i = 0
    for (v <- values) {
      i += v.get()
    }
    val put = new Put(Bytes.toBytes(key.toString())) // be sure to comment on toString.getBytes
    put.add(Families.cf.bytes, Qualifiers.count.bytes, Bytes.toBytes(i))
    context.write(null, put)
  }
}
With these imports:
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.client.Get
import java.io.IOException
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase._
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.io._
import org.apache.hadoop.hbase.mapreduce._
import org.apache.hadoop.io._
import scala.collection.JavaConversions._
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.Mapper
import org.apache.hadoop.mapreduce.ReduceContext
import org.apache.hadoop.mapreduce.Reducer
But I got this error:
[error] /home/ans4175/activator/scala-hbase/src/main/scala/com/example/Hello.scala:85: method reduce overrides nothing.
[error] Note: the super classes of class treducer contain the following, non final members named reduce:
[error] protected[package mapreduce] def reduce(x$1: org.apache.hadoop.io.Text,x$2: Iterable[org.apache.hadoop.io.IntWritable],x$3: org.apache.hadoop.mapreduce.Reducer[org.apache.hadoop.io.Text,org.apache.hadoop.io.IntWritable,org.apache.hadoop.hbase.io.ImmutableBytesWritable,org.apache.hadoop.io.Writable]#Context): Unit
[error] override def reduce(key: Text, values: java.lang.Iterable[IntWritable], context:Reducer[Text, IntWritable, ImmutableBytesWritable, Mutation]#Context){
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
I don't know what the problem is. I have followed these posts:
https://github.com/rawg/scala-hbase-wordcount/blob/master/src/main/scala/WordCountReducer.scala,
https://github.com/vadimbobrov/calc/blob/master/src/main/scala/com/os/job/InterpolatorReducer.scala
Thank you in advance
The error provides your answer. You've incorrectly declared one of the arguments.
The compiler has indicated that the third argument's type is:
Reducer[Text,IntWritable,ImmutableBytesWritable,Writable]#Context
Your override declares a method with a third argument of this type:
Reducer[Text, IntWritable, ImmutableBytesWritable, Mutation]#Context
Changing Mutation to Writable will allow your method to override the correct one.
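A sketch of the corrected signature, keeping the body from the question (Families and Qualifiers are the same helpers used there):
class treducer extends TableReducer[Text, IntWritable, ImmutableBytesWritable] {
  // The fourth type parameter of Reducer[...]#Context is Writable, matching the superclass method.
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, ImmutableBytesWritable, Writable]#Context): Unit = {
    var i = 0
    for (v <- values) {
      i += v.get()
    }
    val put = new Put(Bytes.toBytes(key.toString))
    put.add(Families.cf.bytes, Qualifiers.count.bytes, Bytes.toBytes(i))
    context.write(null, put)
  }
}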
When you override a method in a child class, a method with the same signature must exist in the parent class, and that is exactly where your program is failing.
This kind of error typically indicates that the signature of the method in the child class does not match the one in the parent class; technically speaking, you are not overriding anything, and hence Scala is telling you:
[error] /home/ans4175/activator/scala-hbase/src/main/scala/com/example/Hello.scala:85: method reduce overrides nothing.
Note: the super classes of class treducer contain the following, non final members named reduce: