Summoning Encoder[OneAnd[NonEmptyList, Int]]?

Given:
@ import $ivy.`io.circe::circe-core:0.9.3`
import $ivy.$
@ import $ivy.`io.circe::circe-generic:0.9.3`
import $ivy.$
@ import cats._, cats.data._, io.circe._, io.circe.Encoder._, io.circe.Decoder._
import cats._, cats.data._, io.circe._, io.circe.Encoder._, io.circe.Decoder._
@ val x: OneAnd[NonEmptyList, Int] = OneAnd(1, NonEmptyList(2, Nil))
x: OneAnd[NonEmptyList, Int] = OneAnd(1, NonEmptyList(2, List()))
@ import io.circe.syntax._
import io.circe.syntax._
@ x.asJson
cmd5.sc:1: could not find implicit value for parameter encoder: io.circe.Encoder[cats.data.OneAnd[cats.data.NonEmptyList,Int]]
val res5 = x.asJson
^
Compilation Failed
Perhaps I'm missing an import needed to use Encoder#encodeOneAnd (https://github.com/circe/circe/blob/58107ee7c82769f56e5cd932c21493dfe239b6d6/modules/core/shared/src/main/scala/io/circe/Encoder.scala#L343-L350)?
Please let me know how to resolve it.
Thanks

Adding the import
import io.circe.generic.auto._
solved it for me. Hope this helps.
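Per the answer, a minimal Ammonite session along these lines should then compile (same library versions as in the question):

import $ivy.`io.circe::circe-core:0.9.3`
import $ivy.`io.circe::circe-generic:0.9.3`

import cats.data.{NonEmptyList, OneAnd}
import io.circe.syntax._
import io.circe.generic.auto._ // derives the previously missing Encoder

val x: OneAnd[NonEmptyList, Int] = OneAnd(1, NonEmptyList(2, Nil))
x.asJson // now finds an Encoder[OneAnd[NonEmptyList, Int]]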

Related

value na is not a member of?

Hello, I just started to learn Scala and am following a tutorial on Udemy.
I followed the same code as in the tutorial, but it gives me an error, and I have no idea what the error means.
This is my code:
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.sql.SparkSession
import org.apache.log4j._
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row
Logger.getLogger("org").setLevel(Level.ERROR)
val spark = SparkSession.builder().getOrCreate()
val data = spark.read.option("header","true").
option("inferSchema","true").
option("delimiter","\t").
format("csv").
load("dataset.tsv").
withColumn("subject", split($"subject", " "))
val logRegDataAll = (data.select(data("label")).as("label"),$"subject")
val logRegData = logRegDataAll.na.drop()
and this is the error it gives:
scala> :load LogisticRegression.scala
Loading LogisticRegression.scala...
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.sql.SparkSession
import org.apache.log4j._
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row
spark: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@1efcba00
data: org.apache.spark.sql.DataFrame = [label: string, subject: array<string>]
logRegDataAll: (org.apache.spark.sql.Dataset[org.apache.spark.sql.Row], org.apache.spark.sql.ColumnName) = ([label: string],subject)
<console>:43: error: value na is not a member of (org.apache.spark.sql.Dataset[org.apache.spark.sql.Row], org.apache.spark.sql.ColumnName)
val logRegData = logRegDataAll.na.drop()
^
Thanks for helping.
You can see clearly that
val logRegDataAll = (data.select(data("label")).as("label"),$"subject")
returns
(org.apache.spark.sql.Dataset[org.apache.spark.sql.Row], org.apache.spark.sql.ColumnName)
That is a tuple, not a DataFrame, which is why na is not a member of it. The problem is a misplaced closing parenthesis in data.select(data("label")).as("label"); the line should actually be data.select(data("label").as("label"), $"subject").
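A sketch of the corrected lines, reusing data from the question (spark.implicits._ is assumed to be in scope for the $"..." syntax, as it is in the spark-shell):

// One select call over both columns yields a DataFrame, which does have na:
val logRegDataAll = data.select(data("label").as("label"), $"subject")
val logRegData = logRegDataAll.na.drop()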

Unit test case to mock PostgreSQL Connection and Statements in Scala

I am very new to Scala and need to write a test case that mocks the PostgreSQL connections and statements. However, I'm unable to do so and am getting an error. Can anyone help me? Below is the code that I've written.
Thanks in advance!
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.Column
import org.slf4j.LoggerFactory
import java.nio.file.Paths
import java.sql.ResultSet
import java.io.InputStream
import java.io.Reader
import java.util
import java.io.File
import java.util.UUID
import java.nio.file.attribute.PosixFilePermission
import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.scalatest.{Matchers, WordSpecLike, BeforeAndAfter}
import org.scalactic.{Good, Bad, Many, One}
import scala.collection.JavaConverters._
import spark.jobserver.{SparkJobValid, SparkJobInvalid}
import spark.jobserver.api.{JobEnvironment, SingleProblem}
import org.apache.spark.sql.{Column, Row, DataFrame}
import java.sql.Connection
import java.sql.DriverManager
import java.sql.ResultSet
import org.junit.Assert
import org.junit.Before
import org.junit.Test
import org.junit.runner.RunWith
import org.easymock.EasyMock.expect
import org.powermock.api._
import org.powermock.core.classloader.annotations.PrepareForTest
import java.io.FileReader
import org.scalamock.scalatest.MockFactory
import org.powermock.core.classloader.annotations.PrepareForTest
import org.powermock.api.mockito.PowerMockito
import org.powermock.api.mockito.PowerMockito._
import org.postgresql.copy.CopyManager
import scala.collection.JavaConversions._
import org.mockito.Matchers.any
import java.sql.Statement
class mockCopyManager(){
def copyIn(command : String , fR:java.io.FileReader) :Unit ={
println("Run Command {}".format(command))
}
}
class AdvisoretlSpec extends WordSpecLike with Matchers with
MockFactory {
val sc = SparkUnitTestContext.hiveContext
import SparkUnitTestContext.defaultSizeInBytes
"Class Advisoretl job" should {
"load data in "{
val csvMap : Map[String,String] = Map("t1"->"t1.csv","t2"->"t2.csv")
val testObj = new Advisoretl()
val mockStatement = mock[Statement]
val mockConnection=mock[Connection]
val a:String = "TRUNCATE TABLE t1"
val b:String = "TRUNCATE TABLE t2"
PowerMockito.mockStatic(classOf[DriverManager])
val mockCopyManager=mock[CopyManager]
PowerMockito.when(DriverManager.getConnection(any[String]), Nil: _*).thenReturn(mockConnection)
(mockConnection.createStatement _).when().returns(mockStatement)
(mockStatement.executeUpdate _).when(a).returns(1)
(mockStatement.executeUpdate _).when("TRUNCATE TABLE t2").returns(1)
(mockCopyManager.copyIn _).when(*).returns(1)
val fnResult = testObj.connectionWithPostgres("a", "b", "c", "target/testdata", csvMap)
fnResult should be ("OK")
}
}
}
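For reference, a minimal sketch of stubbing the JDBC pieces with ScalaMock alone, without PowerMock; the spec name below is made up, and the suggestion to pass the Connection into the code under test is an assumption about how Advisoretl could be restructured:

import java.sql.Statement
import org.scalamock.scalatest.MockFactory
import org.scalatest.{Matchers, WordSpecLike}

// Hypothetical spec; it only sketches the ScalaMock side of the problem.
class TruncateSketchSpec extends WordSpecLike with Matchers with MockFactory {

  "the truncate step" should {
    "issue one TRUNCATE per table" in {
      // java.sql.Statement.executeUpdate is overloaded, so the parameter type
      // must be written out for ScalaMock to pick the right overload.
      val stubStatement = stub[Statement]
      (stubStatement.executeUpdate(_: String)).when("TRUNCATE TABLE t1").returns(1)
      (stubStatement.executeUpdate(_: String)).when("TRUNCATE TABLE t2").returns(1)

      stubStatement.executeUpdate("TRUNCATE TABLE t1") shouldBe 1
      stubStatement.executeUpdate("TRUNCATE TABLE t2") shouldBe 1

      // java.sql.Connection can be stubbed the same way, but static calls such
      // as DriverManager.getConnection cannot be intercepted by ScalaMock:
      // passing the Connection into the class under test (instead of creating
      // it inside connectionWithPostgres) avoids needing PowerMock at all.
    }
  }
}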

How to do grouped full imports in Scala?

So these are all valid ways to import in Scala.
scala> import scala.util.matching.Regex
import scala.util.matching.Regex
scala> import scala.util.matching._
import scala.util.matching._
scala> import scala.util.matching.{Regex, UnanchoredRegex}
import scala.util.matching.{Regex, UnanchoredRegex}
But how do you do a valid grouped full import?
scala> import scala.util.{control._, matching._}
<console>:1: error: '}' expected but '.' found.
import scala.util.{control._, matching._}
^
You can't use an import sub-expression as an import selector. According to the specification on Import Clauses:
The most general form of an import expression is a list of import selectors
{ x1 => y1,…,xn => yn, _ }
Regarding your question, the closest one-liner is:
scala> import scala.util._, control._, matching._
import scala.util._
import control._
import matching._
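As a quick check that both scopes really are imported, a small sketch using only the standard library:

import scala.util._, control._, matching._

// Regex comes from scala.util.matching
val digits: Regex = "\\d+".r

// Breaks comes from scala.util.control
val loop = new Breaks

This works because the first clause, scala.util._, brings the package names control and matching into scope, so the later clauses can refer to them directly.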

value lookup is not a member of org.apache.spark.rdd.RDD[(String, String)]

I have a problem when I try to compile my Scala program with SBT.
I have imported the classes I need. Here is part of my code:
import java.io.File
import java.io.FileWriter
import java.io.PrintWriter
import java.io.IOException
import org.apache.spark.{SparkConf,SparkContext}
import org.apache.spark.rdd.PairRDDFunctions
import scala.util.Random
......
val data = sc.textFile(path)
val kv = data.map { s =>
  val a = s.split(",")
  (a(0), a(1))
}.cache()
kv.first()
val start = System.currentTimeMillis()
for (tg <- target) {
  kv.lookup(tg.toString)
}
The error detail is:
value lookup is not a member of org.apache.spark.rdd.RDD[(String, String)]
[error] kv.lookup(tg.toString)
What confuses me is that I have imported org.apache.spark.rdd.PairRDDFunctions,
but it doesn't work. Yet when I run this in the Spark shell, it runs fine.
Add
import org.apache.spark.SparkContext._
to get access to the implicits that let you use PairRDDFunctions on an RDD of pairs (K, V). There's no need to import PairRDDFunctions directly.
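A sketch of how the fixed version looks, reusing kv and target from the question's code:

import org.apache.spark.SparkContext._ // brings rddToPairRDDFunctions into scope

for (tg <- target) {
  kv.lookup(tg.toString) // now resolves via the implicit PairRDDFunctions wrapper
}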

Compiled Queries in Slick

I need to compile a query in Slick with Play and PostgreSQL
val bioMaterialTypes: TableQuery[Tables.BioMaterialType] = Tables.BioMaterialType
def getAllBmts() = for{ bmt <- bioMaterialTypes } yield bmt
val queryCompiled = Compiled(getAllBmts _)
but in Scala IDE I get this error at the apply of Compiled:
Multiple markers at this line
- Computation of type () => scala.slick.lifted.Query[models.Tables.BioMaterialType,models.Tables.BioMaterialTypeRow,Seq]
cannot be compiled (as type C)
- not enough arguments for method apply: (implicit compilable: scala.slick.lifted.Compilable[() =>
scala.slick.lifted.Query[models.Tables.BioMaterialType,models.Tables.BioMaterialTypeRow,Seq],C], implicit driver:
scala.slick.profile.BasicProfile)C in object Compiled. Unspecified value parameters compilable, driver.
These are my imports:
import scala.concurrent.Future
import scala.slick.jdbc.StaticQuery.staticQueryToInvoker
import scala.slick.lifted.Compiled
import scala.slick.driver.PostgresDriver
import javax.inject.Inject
import javax.inject.Singleton
import models.BioMaterialType
import models.Tables
import play.api.Application
import play.api.db.slick.Config.driver.simple.TableQuery
import play.api.db.slick.Config.driver.simple.columnExtensionMethods
import play.api.db.slick.Config.driver.simple.longColumnType
import play.api.db.slick.Config.driver.simple.queryToAppliedQueryInvoker
import play.api.db.slick.Config.driver.simple.queryToInsertInvoker
import play.api.db.slick.Config.driver.simple.stringColumnExtensionMethods
import play.api.db.slick.Config.driver.simple.stringColumnType
import play.api.db.slick.Config.driver.simple.valueToConstColumn
import play.api.db.slick.DB
import play.api.db.slick.DBAction
You can simply do
val queryCompiled = Compiled(bioMaterialTypes)
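This works because Compiled accepts a Query directly (or a function from lifted column parameters to a Query), whereas getAllBmts _ is a plain Scala () => Query, for which there is no Compilable instance; that is what the "cannot be compiled (as type C)" message points at. If a parameterized query is needed later, a sketch along these lines should work with Slick 2.x (it assumes a wholesale import of play.api.db.slick.Config.driver.simple._ and that BioMaterialType has an id column of type Column[Long], which may not match the actual generated Tables):

// Hypothetical parameterized compiled query.
val bmtById = Compiled { (id: Column[Long]) =>
  bioMaterialTypes.filter(_.id === id)
}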