Scala 3: Using experimental types

The following Scala 3 code, written following the documentation, works fine:
import scala.compiletime.ops.string.*

@main def refinedTypeInAction: Unit =
  val hello: "hello " + "world" = "hello world"
  println(hello)
However, the following code
import scala.annotation.experimental

@experimental
object UseExperimental:
  def x: scala.compiletime.ops.string.Substring["hamburger", 4, 8] = "urge"
throws a compile-time error, as shown below:
[error] -- [E008] Not Found Error: /Users/viswanath/projects/myproject/src/main/scala/RefinedType.scala:16:38
[error] 16 | def x: scala.compiletime.ops.string.Substring["hamburger", 4, 8] = "urge"
[error] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[error] | type Substring is not a member of object scala.compiletime.ops.string
[error] one error found
I did refer to https://docs.scala-lang.org/scala3/reference/other-new-features/experimental-defs.html but I'm struggling to get past the compilation error. Please help!
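One likely cause, offered as an assumption since the post does not state the compiler version: scala.compiletime.ops.string.Substring was only added to the standard library in later Scala 3 releases, so on an older compiler the member genuinely does not exist, and the @experimental annotation cannot help. On a recent enough compiler, the documented usage would be a sketch like:

import scala.annotation.experimental
import scala.compiletime.ops.string.Substring

@experimental
object UseExperimental:
  // "hamburger".substring(4, 8) == "urge", checked at the type level
  def x: Substring["hamburger", 4, 8] = "urge"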

Related

Scala Circe cannot decode model with list as member

I have a model, containing a list as a member variable, that I am trying to serialize using Circe in Scala.
The model in question -
case class Order(id: Long, tableId: Long, items: List[Item])

object Order {
  implicit val encoder: Encoder[Order] = deriveEncoder[Order]
  implicit val decoder: Decoder[Order] = deriveDecoder[Order]
}
Also, the Item class -
case class Item(id: Long, name: String, serving: String)

object Item {
  implicit val encoder: Encoder[Item] = deriveEncoder[Item]
  implicit val decoder: Decoder[Item] = deriveDecoder[Item]
}
I am using Circe's semi-auto derivation feature. However, when trying to read data from the database using Quill, I am encountering this exception:
[error] /Users/in-rmoitra/Projects/PetProjects/Restrofit-Backend/src/main/scala/models/repository/OrderRepository.scala:17:69: exception during macro expansion:
[error] scala.reflect.macros.TypecheckException: Can't find implicit `Decoder[List[models.Item]]`. Please, do one of the following things:
[error] 1. ensure that implicit `Decoder[List[models.Item]]` is provided and there are no other conflicting implicits;
[error] 2. make `List[models.Item]` `Embedded` case class or `AnyVal`.
[error]
[error] at scala.reflect.macros.contexts.Typers.$anonfun$typecheck$3(Typers.scala:32)
[error] at scala.reflect.macros.contexts.Typers.$anonfun$typecheck$2(Typers.scala:26)
[error] at scala.reflect.macros.contexts.Typers.doTypecheck$1(Typers.scala:25)
[error] at scala.reflect.macros.contexts.Typers.$anonfun$typecheck$7(Typers.scala:38)
[error] at scala.reflect.internal.Trees.wrappingIntoTerm(Trees.scala:1731)
[error] at scala.reflect.internal.Trees.wrappingIntoTerm$(Trees.scala:1728)
[error] at scala.reflect.internal.SymbolTable.wrappingIntoTerm(SymbolTable.scala:18)
[error] at scala.reflect.macros.contexts.Typers.typecheck(Typers.scala:38)
[error] at scala.reflect.macros.contexts.Typers.typecheck$(Typers.scala:20)
[error] at scala.reflect.macros.contexts.Context.typecheck(Context.scala:6)
[error] at scala.reflect.macros.contexts.Context.typecheck(Context.scala:6)
[error] at io.getquill.context.QueryMacro.expandQueryWithMeta(QueryMacro.scala:41)
[error] at io.getquill.context.QueryMacro.expandQuery(QueryMacro.scala:20)
[error] at io.getquill.context.QueryMacro.runQuery(QueryMacro.scala:12)
[error] val ordersFuture: Future[List[(Order, (OrderItem, Item))]] = run(query)
From my limited knowledge of Circe and what I have already looked up, the docs say that you do not need to create a decoder for List[A] if you already have a decoder for A.
It would be great if someone could throw light on what seems to be happening here.
Your Circe code is fine. If you execute

import io.circe.parser.parse

println(
  parse("""
    |{ "id" : 1,
    |  "tableId" : 2,
    |  "items" : [
    |    { "id": 3,
    |      "name" : "a",
    |      "serving" : "b"
    |    },
    |    { "id": 4,
    |      "name" : "c",
    |      "serving" : "d"
    |    }
    |  ]
    |}
  """.stripMargin)
    .flatMap(json => json.as[Order])
)
you'll get
Right(Order(1,2,List(Item(3,a,b), Item(4,c,d))))
So the trouble is in your Quill code.
And don't confuse io.circe.Decoder and io.getquill.context.jdbc.Decoders#Decoder.
https://getquill.io/#extending-quill-custom-encoding
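The missing piece on the Quill side is a Quill-level decoder for List[models.Item]. A minimal sketch of one way to bridge the two libraries, assuming the items are stored in a single text column as JSON (that column layout is an assumption, not from the original post):

import io.getquill.MappedEncoding
import io.circe.parser.decode
import io.circe.syntax._

// Tell Quill how to map a JSON text column to/from List[Item],
// reusing the circe codecs already defined on Item.
implicit val itemsEncoding: MappedEncoding[List[Item], String] =
  MappedEncoding(items => items.asJson.noSpaces)
implicit val itemsDecoding: MappedEncoding[String, List[Item]] =
  MappedEncoding(s => decode[List[Item]](s).getOrElse(Nil))

Note that since the query in the error returns (Order, (OrderItem, Item)) tuples, the alternative is simply to keep List[Item] out of the table-mapped model and assemble the list after the join.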

Error while using "newAPIHadoopFile" API

I am writing the following code to load a file into Spark using the newAPIHadoopFile API:
val lines = sc.newAPIHadoopFile("new_actress.list",classOf[TextInputFormat],classOf[Text],classOf[Text])
But I am getting the following error:
scala> val lines = sc.newAPIHadoopFile("new_actress.list",classOf[TextInputFormat],classOf[Text],classOf[Text])
<console>:34: error: inferred type arguments [org.apache.hadoop.io.Text,org.apache.hadoop.io.Text,org.apache.hadoop.mapred.TextInputFormat] do not conform to method newAPIHadoopFile's type parameter bounds [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]
val lines = sc.newAPIHadoopFile("new_actress.list",classOf[TextInputFormat],classOf[Text],classOf[Text])
^
<console>:34: error: type mismatch;
found : Class[org.apache.hadoop.mapred.TextInputFormat](classOf[org.apache.hadoop.mapred.TextInputFormat])
required: Class[F]
val lines = sc.newAPIHadoopFile("new_actress.list",classOf[TextInputFormat],classOf[Text],classOf[Text])
^
<console>:34: error: type mismatch;
found : Class[org.apache.hadoop.io.Text](classOf[org.apache.hadoop.io.Text])
required: Class[K]
val lines = sc.newAPIHadoopFile("new_actress.list",classOf[TextInputFormat],classOf[Text],classOf[Text])
^
<console>:34: error: type mismatch;
found : Class[org.apache.hadoop.io.Text](classOf[org.apache.hadoop.io.Text])
required: Class[V]
val lines = sc.newAPIHadoopFile("new_actress.list",classOf[TextInputFormat],classOf[Text],classOf[Text])
^
What am I doing wrong in the code?
TextInputFormat takes <LongWritable, Text>. Note: focus on the extends part of both *InputFormat classes:
@InterfaceAudience.Public
@InterfaceStability.Stable
public class TextInputFormat
    extends FileInputFormat<LongWritable, Text>
That means you cannot set both type parameters to Text for this input format. If you still want to use TextInputFormat, you can try:
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.LongWritable
val lines = sc.newAPIHadoopFile("test.csv", classOf[TextInputFormat], classOf[LongWritable], classOf[Text])
But in case you still want both types to be Text, you can use KeyValueTextInputFormat, which is defined as:
@InterfaceAudience.Public
@InterfaceStability.Stable
public class KeyValueTextInputFormat
    extends FileInputFormat<Text, Text>
You can try:
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
import org.apache.hadoop.io.Text
val lines = sc.newAPIHadoopFile("test.csv", classOf[KeyValueTextInputFormat], classOf[Text], classOf[Text])

Combine Two Slick Futures and then execute them together

I have written this code and I am trying to combine two futures obtained from separate SQL operations.
package com.example

import tables._
import scala.concurrent.{Future, Await}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import slick.backend.DatabasePublisher
import slick.driver.H2Driver.api._

object Hello {
  def main(args: Array[String]): Unit = {
    val db = Database.forConfig("h2mem1")
    try {
      val people = TableQuery[Persons]
      val setupAction: DBIO[Unit] = DBIO.seq(
        people.schema.create
      )
      val setupFuture: Future[Unit] = db.run(setupAction)
      val populateAction: DBIO[Option[Int]] = people ++= Seq(
        (1, "test1", "user1"),
        (2, "test2", "user2"),
        (3, "test3", "user3"),
        (4, "test4", "user4")
      )
      val populateFuture: Future[Option[Int]] = db.run(populateAction)
      val combinedFuture: Future[Option[Int]] = setupFuture >> populateFuture
      val r = combinedFuture.flatMap { results =>
        results.foreach(x => println(s"Number of rows inserted $x"))
      }
      Await.result(r, Duration.Inf)
    }
    finally db.close
  }
}
But I get an error when I try to compile this code:
[error] /Users/abhi/ScalaProjects/SlickTest2/src/main/scala/Hello.scala:29: value >> is not a member of scala.concurrent.Future[Unit]
[error] val combinedFuture : Future[Option[Int]] = setupFuture >> populateFuture
[error]                                                         ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
The same code works if I nest populateFuture inside the map function of setupFuture, but I don't want to write nested code because it will become very messy once there are more steps. So I need a way to combine all the futures into a single future and then execute it.
Edit: I also tried combining the two actions:
val combinedAction = setupAction.andThen(populateAction)
val fut1 = combinedAction.map { result =>
  result.foreach(x => println(s"number of rows inserted $x"))
}
Await.result(fut1, Duration.Inf)
but got this error:
/Users/abhi/ScalaProjects/SlickTest/src/main/scala/com/example/Hello.scala:31: type mismatch;
[error] found : scala.concurrent.Future[Option[Int]]
[error] required: PartialFunction[scala.util.Try[Unit],?]
[error] val combinedAction = setupAction.andThen(populateAction)
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 3 s, completed Jun 26, 2015 3:50:51 PM
According to http://slick.typesafe.com/doc/3.0.0/api/index.html#slick.dbio.DBIOAction, andThen() is what you are looking for:
val combinedAction = setupAction.andThen(populateAction)
val results = db.run(combinedAction)
populateAction will only run after setupAction has completed successfully. This is crucial in your case, since Slick is fully non-blocking: with the code you have now, both actions run asynchronously at the same time, and there is no way to determine which executes first. Because populateAction depends on setupAction, you must ensure setupAction runs first; therefore, use andThen.
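A minimal sketch of the corrected flow, assuming the same setupAction and populateAction as above (in Slick 3, >> is also available on DBIOAction as an alias for andThen, so the combination has to happen at the action level, before db.run, rather than on the Futures):

// Combine at the DBIO level, then run the combined action once.
val combinedAction: DBIO[Option[Int]] = setupAction >> populateAction
val r = db.run(combinedAction).map { result =>
  result.foreach(x => println(s"Number of rows inserted $x"))
}
Await.result(r, Duration.Inf)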

Get request with Rapture Http

I'm building an API with Rapture in Scala and having trouble resolving an issue with an implicit not being in scope. Here is the output from the error that I'm receiving.
[error] /Users/Petesta/Documents/scala-project/src/main/scala/scala-project/main.scala:35: an implicit TimeSystem is required; please import timeSystems.numeric or timeSystems.javaUtil
[error] Error occurred in an application involving default arguments.
[error] val response = h.get()
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 5 s, completed Oct 16, 2014 3:36:10 PM
Here is the code that it is failing on:
def getUser(userName: String) = {
  val h = Http / "some_url" / "user" / userName /? Map('key -> "value")
  val response = h.get()
}
I'm not sure what to do because I've tried importing both libraries separately and the error is still the same.
I've also added the -Xlog-implicits flag to see if something else is causing the error but no additional information is outputted.
Is there a good resource anywhere on using the rapture-net library for HTTP requests? I couldn't find one except for Jon Pretty's slides from Scala By The Bay. I also couldn't figure out a way to pass a URL with query strings into rapture-uri, since it expects the invocation to look like uri"url_dot_domain_with_query_strings".slurp[Char].
Any ideas?
The compilation error is not entirely correct in this case. You need one of the two imports AND you need to specify a timeout value:
def getUser(userName: String) = {
  import timeSystems.numeric
  val h = Http / "some_url" / "user" / userName /? Map('key -> "value")
  val response = h.get(timeout = 5000L)
}
I don't really know of a good resource on it, but your basic single line of code is correct. The biggest problem with the library is really the documentation of the required imports. This is what I found works for me:
def getGoogle() = {
  import rapture.codec._
  import rapture.io._
  import rapture.uri._
  import rapture.net._
  import encodings.`UTF-8`
  uri"http://google.com".slurp[Char]
}

Akka -- type mismatch; [error] found : Unit [error] required: scala.sys.process.ProcessLogger

I'm trying to write example code that combines Akka actors with scala.sys.process, but I got an error message when compiling. The code, shown below, is really simple. What have I got wrong?
[error] /home/qos/workspaces/actors/actors.scala:20: type mismatch;
[error] found : Unit
[error] required: scala.sys.process.ProcessLogger
[error] execute(cmd)
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
The code is:
import scala.sys.process._
import akka.actor._

object TryActor {
  def main(args: Array[String]) {
    val akkaSystem = ActorSystem("akkaSystem")
    val worker = akkaSystem.actorOf(Props[Worker], name = "work0")
    worker ! Command("ls")
  }

  case class Command(cmd: String)

  class Worker extends Actor {
    def receive = {
      case Command(cmd) => {
        println(cmd)
        "echo recieve message from someone" !
        execute(cmd.toString)
      }
    }

    def execute(cmd: String) {
      val process = Process(cmd.toString)
      process ! ProcessLogger(_ => {})
    }
  }
}
It's interpreting execute(cmd.toString) as the argument to !, because newlines don't necessarily end statements. To fix this, don't use postfix syntax, which is deprecated for a reason:
def receive = {
  case Command(cmd) => {
    println(cmd)
    "echo recieve message from someone".!
    execute(cmd.toString)
  }
}
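For the same reason, it can be worth using explicit method-call syntax in execute as well. A minimal sketch (the exit-code println is illustrative, not from the original; ProcessLogger(_ => ()) discards the process output just like the original code):

def execute(cmd: String): Unit = {
  val process = Process(cmd)
  // .!(logger) runs the process, sends its output to the logger,
  // and returns the exit code.
  val exitCode = process.!(ProcessLogger(_ => ()))
  println(s"'$cmd' exited with code $exitCode")
}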