How to describe Option[FiniteDuration] in proto with scalapb - scala

I am trying to use proto3 with ScalaPB, but I am unable to map FiniteDuration, or to use it inside an Option. Can anyone advise on how to model the following?
case class Message(id:Int , interval: Option[FiniteDuration])

Given that you have configured ScalaPB in sbt as described in the installation guide, you need to define a custom type on the field so that the int64 (mapped to Long by default) is converted to FiniteDuration.
For example:
syntax="proto3";
import "scalapb/scalapb.proto";
option java_package = "my.app.proto";
message Message {
int32 id = 1;
optional int64 interval = 2 [(scalapb.field).type = "scala.concurrent.duration.FiniteDuration"];
}
This will generate a case class that looks like what you need. ScalaPB relies on implicit resolution to compile this and to apply the mapping from Long to FiniteDuration. For that you need to define a scalapb.TypeMapper[Long, FiniteDuration] in the package object of the package where the case class is generated, i.e. my.app.proto.message:
package my.app.proto

import scalapb.TypeMapper
import java.util.concurrent.TimeUnit
import scala.concurrent.duration.FiniteDuration

package object message {
  // Interpret the wire-level Long as milliseconds in both directions.
  implicit val finiteDuration: TypeMapper[Long, FiniteDuration] =
    TypeMapper[Long, FiniteDuration](millis => FiniteDuration(millis, TimeUnit.MILLISECONDS))(_.toMillis)
}
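
As a quick usage sketch (assuming the generated class lives in my.app.proto.message and the field names match the .proto above), the mapped field can then be used as an ordinary Option[FiniteDuration]:
import scala.concurrent.duration._
import my.app.proto.message.Message

val m     = Message(id = 1, interval = Some(5.seconds))
val bytes = m.toByteArray            // interval is written as an int64 (milliseconds)
val back  = Message.parseFrom(bytes) // back.interval == Some(5.seconds)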

Related

Scala "constructor DCAwareRoundRobinPolicy in class DCAwareRoundRobinPolicy cannot be accessed in object CassandraConnector"

I am a Scala newbie and really need help with an issue I've been experiencing. I have set up the following code to create Cassandra connections:
package some.package
import java.net.InetAddress
import java.util
import com.datastax.driver.core._
import com.datastax.driver.core.policies.{DCAwareRoundRobinPolicy, TokenAwarePolicy}
import scala.collection.JavaConversions._
object CassandraConnector {
  def getCluster(contactPointIpString: String, preferred_dc: String): Cluster = {
    val contactPointIpStrings = contactPointIpString.split(",").toList
    val contactPointIpList = contactPointIpStrings.flatMap { ipAddress: String => InetAddress.getAllByName(ipAddress) }
    println(s"Building Cluster w/ Contact Points: $contactPointIpList")
    Cluster.builder()
      .addContactPoints(contactPointIpList)
      .withClusterName("multi_dc_user_data")
      .withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy(preferred_dc)))
      .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_ONE))
      .build()
  }
}
While building the project in IntelliJ, the following error occurs:
constructor DCAwareRoundRobinPolicy in class DCAwareRoundRobinPolicy cannot be accessed in object CassandraConnector
.withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy(preferred_dc)))
I found something similar to my problem here, and tried to change my code to the following:
val dcAwareRoundRobinPolicyBuilder = new DCAwareRoundRobinPolicy.Builder
val dcAwareRoundRobinPolicy = dcAwareRoundRobinPolicyBuilder.withLocalDc(preferred_dc)
Cluster.builder()
  .addContactPoints(contactPointIpList)
  .withClusterName("multi_dc_user_data")
  .withLoadBalancingPolicy(new TokenAwarePolicy(dcAwareRoundRobinPolicy.build()))
  .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_ONE))
  .build()
This resolved the issue and the build now finishes successfully, but I'm not sure whether what I've done is actually correct, so I'd really appreciate any help in this regard.
DCAwareRoundRobinPolicy uses the Builder Pattern to decouple the public API for instance creation from the particular implementation of the constructors for the instances. This is accomplished by not exposing a public constructor and instead only allowing DCAwareRoundRobinPolicy.Builder to construct instances of DCAwareRoundRobinPolicy.
You're using it as intended (assuming the options you pass match what you expect).
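For reference, a minimal sketch of the builder style (assuming the 3.x Java driver, where DCAwareRoundRobinPolicy exposes a static builder()):
import com.datastax.driver.core.policies.{DCAwareRoundRobinPolicy, TokenAwarePolicy}

val loadBalancingPolicy =
  new TokenAwarePolicy(
    DCAwareRoundRobinPolicy.builder()
      .withLocalDc(preferred_dc) // preferred_dc as in the question's method parameter
      .build()
  )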

Is there a way to globally set my json decoding to handle upper and lower case, camel case from snake_case?

My JSON looks like:
created_at
While my case classes have properties like
createdAt
Is there a way to globally set this during my encoding/decoding? I don't want to have to customize my decoder/encoder for each property.
I tried adding circe-generic-extras and did this:
implicit val customConfig: Configuration = Configuration.default.withSnakeCaseMemberNames
But when I run my unit tests, I still get the error:
Actual: Left(DecodingFailure(Attempt to decode value on failed cursor, List(DownField(createdAt))))
I have so many properties in my case classes that I don't want to write this out manually, so I'm looking for an auto/lazy way :)
import io.circe._
import io.circe.generic.semiauto._
import io.circe.generic.extras.Configuration

case class MyComponent(
  createdAt: String
)

object MyComponent {
  implicit val customConfig: Configuration = Configuration.default.withSnakeCaseMemberNames
  implicit val componentDecoder: Decoder[MyComponent] = deriveDecoder[MyComponent]
  implicit val componentEncoder: Encoder[MyComponent] = deriveEncoder[MyComponent]
}
And my specs2 test:
val decodedComponent = parser.decode[MyComponent](jsonString)
decodedComponent must_=== Right(expected)
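
A hedged sketch of one likely fix: the plain io.circe.generic.semiauto derivation does not consult the implicit Configuration; circe-generic-extras ships configuration-aware derivation (named deriveConfiguredDecoder/deriveConfiguredEncoder in recent versions, and plain deriveDecoder/deriveEncoder under io.circe.generic.extras.semiauto in older ones):
import io.circe._
import io.circe.generic.extras.Configuration
import io.circe.generic.extras.semiauto.{deriveConfiguredDecoder, deriveConfiguredEncoder}

case class MyComponent(
  createdAt: String
)

object MyComponent {
  implicit val customConfig: Configuration = Configuration.default.withSnakeCaseMemberNames
  // These derivations pick up customConfig, so created_at in JSON maps to createdAt.
  implicit val componentDecoder: Decoder[MyComponent] = deriveConfiguredDecoder[MyComponent]
  implicit val componentEncoder: Encoder[MyComponent] = deriveConfiguredEncoder[MyComponent]
}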

Can you import user-defined scala classes into proto?

I have been tasked with creating a general case class that would cover the different kinds of config that already exist in the system.
Here's a sample of the configs:
package ngage.sdk.configs
...
object Config {
  case class TokenConfig(expiry: FiniteDuration)
  case class SagaConfig(timeOut: FiniteDuration, watcherTimeout: FiniteDuration)
  case class ElasticConfig(connectionString: String, deletePerPage: Int)
  case class S3Config(bucket: String)
  case class RedisConfig(host: String, port: Int)
  ...
I looked to protobuf for a solution, but I don't know how to import the case classes mentioned above. Here's how I started:
syntax = "proto3";
package ngage.sdk.distributedmemory.config;
import "scalapb/scalapb.proto";
import "pboptions.proto";
import "google/protobuf/duration.proto";
import "ngage.sdk.configs.Config"; //this became red
option (scalapb.options) = {
flat_package: true
single_file: true
};
message ShardMemoryConfig {
option (ngage.type_id) = 791;
int32 size = 1;
oneof config { //everything inside oneof is red
ngage.sdk.configs.RedisConfig redis = 100002;
ngage.sdk.configs.ElasticConfig elastic = 100003;
ngage.sdk.configs.S3Config s3 = 100004;
}
}
Is it even possible to import user-defined Scala classes into protobuf?
You'd need a library which generates .proto files from case class definitions; I don't know whether one exists (but I would expect not). PBDirect lets you write/read case classes directly, but then your ShardMemoryConfig should also be a case class using RedisConfig etc.
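For illustration only, a rough sketch of that PBDirect-style alternative, where the wrapper is itself a case class so no .proto import of Scala classes is needed; the toPB/pbTo method names are my recollection of PBDirect's API, so treat them as an assumption and check the library's documentation:
import pbdirect._

case class RedisConfig(host: String, port: Int)
case class ShardMemoryConfig(size: Int, redis: Option[RedisConfig])

val config  = ShardMemoryConfig(size = 128, redis = Some(RedisConfig("localhost", 6379)))
val bytes   = config.toPB                    // serialize straight from the case class
val decoded = bytes.pbTo[ShardMemoryConfig]  // ...and back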

How to create an Encoder for Scala collection (to implement custom Aggregator)?

Spark 2.3.0 with Scala 2.11. I'm implementing a custom Aggregator according to the docs here. The aggregator requires 3 types for input, buffer, and output.
My aggregator has to act upon all previous rows in the window, so I declared it like this:
case class Foo(...)
object MyAggregator extends Aggregator[Foo, ListBuffer[Foo], Boolean] {
  // other override methods
  override def bufferEncoder: Encoder[ListBuffer[Foo]] = ???
}
One of the override methods is supposed to return the encoder for the buffer type, which in this case is a ListBuffer. I can't find any suitable encoder in org.apache.spark.sql.Encoders nor any other way to encode this, so I don't know what to return here.
I thought of creating a new case class which has a single property of type ListBuffer[Foo] and using that as my buffer class, and then using Encoders.product on that, but I am not sure if that is necessary or if there is something else I am missing. Thanks for any tips.
You should just let Spark SQL do its work and find the proper encoder using ExpressionEncoder as follows:
// spark-shell, Spark 2.3.0 (spark.version returns "2.3.0")
case class Mod(id: Long)

import org.apache.spark.sql.Encoder
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

val enc: Encoder[ListBuffer[Mod]] = ExpressionEncoder()
// enc: org.apache.spark.sql.Encoder[scala.collection.mutable.ListBuffer[Mod]] = class[value[0]: array<struct<id:bigint>>]
I cannot see anything in org.apache.spark.sql.Encoders that could be used to directly encode a ListBuffer, or for that matter even a List.
One option seems to be putting it in a case class, as you suggested:
import org.apache.spark.sql.Encoders
case class Foo(field: String)
case class Wrapper(lb: scala.collection.mutable.ListBuffer[Foo])
Encoders.product[Wrapper]
Another option could be to use Kryo:
Encoders.kryo[scala.collection.mutable.ListBuffer[Foo]]
Or finally you could look at ExpressionEncoder, which extends Encoder:
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
ExpressionEncoder[scala.collection.mutable.ListBuffer[Foo]]
This is the best solution, as it keeps everything transparent to Catalyst and therefore allows it to do all of its wonderful optimisations.
One thing I noticed whilst having a play:
ExpressionEncoder[scala.collection.mutable.ListBuffer[Foo]].schema == ExpressionEncoder[List[Foo]].schema
I haven't tested any of the above whilst executing aggregations, so there may be runtime issues. Hope this is helpful.
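
To tie it back to the question, here is a minimal sketch of wiring the ExpressionEncoder into the aggregator; the non-encoder overrides are only illustrative placeholders:
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.expressions.Aggregator
import scala.collection.mutable.ListBuffer

case class Foo(id: Long)

object MyAggregator extends Aggregator[Foo, ListBuffer[Foo], Boolean] {
  def zero: ListBuffer[Foo] = ListBuffer.empty[Foo]
  def reduce(buffer: ListBuffer[Foo], row: Foo): ListBuffer[Foo] = buffer += row
  def merge(b1: ListBuffer[Foo], b2: ListBuffer[Foo]): ListBuffer[Foo] = b1 ++= b2
  def finish(reduction: ListBuffer[Foo]): Boolean = reduction.nonEmpty // placeholder logic
  def bufferEncoder: Encoder[ListBuffer[Foo]] = ExpressionEncoder()
  def outputEncoder: Encoder[Boolean] = Encoders.scalaBoolean
}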

Bundle imports in Scala

In my Scala project, almost all my files have these imports:
import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.auto._
import eu.timepit.refined.numeric._
import spire.math._
import spire.implicits._
import com.wix.accord._
import com.wix.accord.dsl._
import codes.reactive.scalatime._
import better.files._
import java.time._
import scala.collection.mutable
...
...
What is the best way to DRY this in Scala? Can I specify all of them for my project (using some kind of sbt plugin?) or at the package level?
I've seen a few approaches that roughly solve what you're looking for. Check out:
Imports defined
https://github.com/mongodb/casbah/blob/master/casbah-core/src/main/scala/Implicits.scala
Small example of this approach:
object Imports extends Imports with commons.Imports with query.Imports with query.dsl.FluidQueryBarewordOps
object BaseImports extends BaseImports with commons.BaseImports with query.BaseImports
object TypeImports extends TypeImports with commons.TypeImports with query.TypeImports

trait Imports extends BaseImports with TypeImports with Implicits

@SuppressWarnings(Array("deprecation"))
trait BaseImports {
  // ...
  val WriteConcern = com.mongodb.casbah.WriteConcern
  // More here ...
}

trait TypeImports {
  // ...
  type WriteConcern = com.mongodb.WriteConcern
  // ...
}
Imports used
https://github.com/mongodb/casbah/blob/master/casbah-core/src/main/scala/MongoClient.scala
When you use this import object, it unlocks all the type aliases for you. For example, WriteConcern:
import com.mongodb.casbah.Imports._
// ...
def setWriteConcern(concern: WriteConcern): Unit = underlying.setWriteConcern(concern)
Essentially they wrap up all the imports into a common Imports object, and callers just use import com.mongodb.casbah.Imports._
Doobie does something similar where most of the end-users just import doobie.imports._
https://github.com/tpolecat/doobie/blob/series/0.3.x/yax/core/src/main/scala/doobie/imports.scala
Again, a sample from this pattern:
object imports extends ToDoobieCatchSqlOps with ToDoobieCatchableOps {

  /**
   * Alias for `doobie.free.connection`.
   * @group Free Module Aliases
   */
  val FC = doobie.free.connection

  /**
   * Alias for `doobie.free.statement`.
   * @group Free Module Aliases
   */
  val FS = doobie.free.statement

  // More here ...
}
The main difference between this approach and the package-object style is that you get more control over what gets imported and when. I've used both patterns: usually a package object for common utility methods I need across an internal package, and an import object for libraries, i.e. for the users of my code. I can attach certain implicit definitions to that import object, like doobie does above, so a single import unlocks a DSL syntax for the user.
I would probably go with the scala.Predef approach: basically, alias the types and expose the objects I want to make available. So e.g.
package com.my

package object project {
  type LocalDate     = java.time.LocalDate
  type LocalDateTime = java.time.LocalDateTime
  type LocalTime     = java.time.LocalTime

  import scala.collection.mutable
  type MutMap[A, B] = mutable.Map[A, B]
  val MutMap = mutable.Map

  // And so on....
}
Now, wherever you start a file with package com.my.project, all of the above will be automatically available. Btw, kudos also to @som-snytt for pointing this out.
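A quick usage sketch of that (the file and object names are illustrative):
package com.my.project

// No extra imports needed: LocalDate and MutMap come from the package object.
object Example {
  val visitsPerDay: MutMap[LocalDate, Int] = MutMap.empty
  def record(day: LocalDate): Unit = visitsPerDay(day) = visitsPerDay.getOrElse(day, 0) + 1
}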