Why is `import Scalaz._` shadowing `import scalaz.std.scalaFuture._`?

Why is import Scalaz._ shadowing import scalaz.std.scalaFuture._?
When I remove import Scalaz._ it works, but I would like to understand what happens.
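For illustration, here is a minimal sketch of the general scoping rule involved (independent of Scalaz itself): a wildcard import in an inner scope shadows a binding of the same name brought in by a wildcard import in an enclosing scope. The objects A, B and Demo are made up for the example:
object A { val x = "from A" }
object B { val x = "from B" }

object Demo {
  import A._          // makes x visible in the outer scope
  def demo: String = {
    import B._        // the inner wildcard import shadows the outer one
    x                 // resolves to B.x
  }
}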

Related

Cannot resolve import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat;

Cannot resolve:
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat;
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat;
You need to replace
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat;
import android.support.v4.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat;
with
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat;
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat.AccessibilityActionCompat;

Unit test case to mock postgresql Connection and statements in SCALA

I am very new to Scala and need to write a test case that mocks the PostgreSQL connection and statements. However, I am unable to do so and am getting an error. Can anyone help me? Below is the code I've written.
Thanks in advance!
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.Column
import org.slf4j.LoggerFactory
import java.nio.file.Paths
import java.sql.ResultSet
import java.io.InputStream
import java.io.Reader
import java.util
import java.io.File
import java.util.UUID
import java.nio.file.attribute.PosixFilePermission
import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.scalatest.{Matchers, WordSpecLike, BeforeAndAfter}
import org.scalactic.{Good, Bad, Many, One}
import scala.collection.JavaConverters._
import spark.jobserver.{SparkJobValid, SparkJobInvalid}
import spark.jobserver.api.{JobEnvironment, SingleProblem}
import org.apache.spark.sql.{Column, Row, DataFrame}
import java.sql.Connection
import java.sql.DriverManager
import org.junit.Assert
import org.junit.Before
import org.junit.Test
import org.junit.runner.RunWith
import org.easymock.EasyMock.expect
import org.powermock.api._
import org.powermock.core.classloader.annotations.PrepareForTest
import java.io.FileReader
import org.scalamock.scalatest.MockFactory
import org.powermock.api.mockito.PowerMockito
import org.powermock.api.mockito.PowerMockito._
import org.postgresql.copy.CopyManager
import scala.collection.JavaConversions._
import org.mockito.Matchers.any
import java.sql.Statement
class mockCopyManager() {
  def copyIn(command: String, fR: java.io.FileReader): Unit = {
    println("Run Command {}".format(command))
  }
}

class AdvisoretlSpec extends WordSpecLike with Matchers with MockFactory {
  val sc = SparkUnitTestContext.hiveContext
  import SparkUnitTestContext.defaultSizeInBytes

  "Class Advisoretl job" should {
    "load data in" in {
      val csvMap: Map[String, String] = Map("t1" -> "t1.csv", "t2" -> "t2.csv")
      val testObj = new Advisoretl()
      val mockStatement = mock[Statement]
      val mockConnection = mock[Connection]
      val a: String = "TRUNCATE TABLE t1"
      val b: String = "TRUNCATE TABLE t2"
      PowerMockito.mockStatic(classOf[DriverManager])
      val mockCopyManager = mock[CopyManager]
      PowerMockito.when(DriverManager.getConnection(any[String]), Nil: _*).thenReturn(mockConnection)
      (mockConnection.createStatement _).when().returns(mockStatement)
      (mockStatement.executeUpdate _).when(a).returns(1)
      (mockStatement.executeUpdate _).when("TRUNCATE TABLE t2").returns(1)
      (mockCopyManager.copyIn _).when(*).returns(1)
      val fnResult = testObj.connectionWithPostgres("a", "b", "c", "target/testdata", csvMap)
      fnResult should be ("OK")
    }
  }
}
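For comparison, here is a minimal sketch that mocks java.sql.Connection and Statement with ScalaMock alone, without PowerMock. It assumes the code under test accepts a Connection as a parameter instead of calling DriverManager.getConnection itself; the truncateTables helper below is hypothetical and stands in for the real method:
import java.sql.{Connection, Statement}
import org.scalamock.scalatest.MockFactory
import org.scalatest.{Matchers, WordSpecLike}

class TruncateSpec extends WordSpecLike with Matchers with MockFactory {

  // Hypothetical code under test: it takes the Connection as a parameter,
  // so no static DriverManager call needs to be mocked.
  def truncateTables(conn: Connection, tables: Seq[String]): Int = {
    val stmt = conn.createStatement()
    tables.map(t => stmt.executeUpdate(s"TRUNCATE TABLE $t")).sum
  }

  "truncateTables" should {
    "issue one TRUNCATE per table" in {
      val mockConnection = mock[Connection]
      val mockStatement  = mock[Statement]

      // createStatement and executeUpdate are overloaded, so a type
      // ascription / explicit parameter type selects the intended overload.
      (mockConnection.createStatement _: () => Statement).expects().returning(mockStatement)
      (mockStatement.executeUpdate(_: String)).expects("TRUNCATE TABLE t1").returning(1)
      (mockStatement.executeUpdate(_: String)).expects("TRUNCATE TABLE t2").returning(1)

      truncateTables(mockConnection, Seq("t1", "t2")) shouldBe 2
    }
  }
}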

How to do grouped full imports in Scala?

So these are all valid ways to import in Scala.
scala> import scala.util.matching.Regex
import scala.util.matching.Regex
scala> import scala.util.matching._
import scala.util.matching._
scala> import scala.util.matching.{Regex, UnanchoredRegex}
import scala.util.matching.{Regex, UnanchoredRegex}
But how do you do a valid grouped full import?
scala> import scala.util.{control._, matching._}
<console>:1: error: '}' expected but '.' found.
import scala.util.{control._, matching._}
^
You can't use an import sub-expression as an import selector. According to the specification on Import Clauses, the most general form of an import expression is a list of import selectors:
{ x1 => y1, …, xn => yn, _ }
Regarding your question, the closest one-liner is:
scala> import scala.util._, control._, matching._
import scala.util._
import control._
import matching._
This works because each clause in a multi-clause import statement can refer to names made visible by the clauses before it, so control._ and matching._ resolve relative to the just-imported scala.util.

Why is recommendProductsForUsers not a member of org.apache.spark.mllib.recommendation.MatrixFactorizationModel?

I have built a recommendation system using Spark MLlib's ALS collaborative filtering.
My code snippet:
bestModel.get
  .predict(toBePredictedBroadcasted.value)
Everything is OK, but I need to change the code to fulfil a requirement. I read in the Scala doc here that
I need to use def recommendProducts,
but when I tried it in my code:
bestModel.get.recommendProductsForUsers(100)
it fails to compile with:
value recommendProductsForUsers is not a member of org.apache.spark.mllib.recommendation.MatrixFactorizationModel
[error] bestModel.get.recommendProductsForUsers(100)
Can anyone help me?
Thanks.
NB: I use Spark 1.5.0.
My imports:
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import java.io.File
import scala.io.Source
import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.rdd._
import org.apache.spark.mllib.recommendation.{ALS, Rating, MatrixFactorizationModel}
import org.apache.spark.sql.SQLContext
import org.apache.spark.broadcast.Broadcast
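Not a fix by itself, but a quick sanity check worth running: confirm which Spark version is actually on the classpath, since a mismatch between the version pulled in by the build and the version of the docs being read is a common cause of "value ... is not a member" errors. A short sketch, assuming sc is the existing SparkContext:
// Both report the Spark version actually in use:
println(org.apache.spark.SPARK_VERSION) // constant from the spark package object
println(sc.version)                     // version reported by the running SparkContext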

Compiled Queries in Slick

I need to compile a query in Slick with Play and PostgreSQL
val bioMaterialTypes: TableQuery[Tables.BioMaterialType] = Tables.BioMaterialType
def getAllBmts() = for{ bmt <- bioMaterialTypes } yield bmt
val queryCompiled = Compiled(getAllBmts _)
but in Scala IDE I get this error at the apply of Compiled:
Multiple markers at this line
- Computation of type () => scala.slick.lifted.Query[models.Tables.BioMaterialType,models.Tables.BioMaterialTypeRow,Seq]
cannot be compiled (as type C)
- not enough arguments for method apply: (implicit compilable: scala.slick.lifted.Compilable[() =>
scala.slick.lifted.Query[models.Tables.BioMaterialType,models.Tables.BioMaterialTypeRow,Seq],C], implicit driver:
scala.slick.profile.BasicProfile)C in object Compiled. Unspecified value parameters compilable, driver.
These are my imports:
import scala.concurrent.Future
import scala.slick.jdbc.StaticQuery.staticQueryToInvoker
import scala.slick.lifted.Compiled
import scala.slick.driver.PostgresDriver
import javax.inject.Inject
import javax.inject.Singleton
import models.BioMaterialType
import models.Tables
import play.api.Application
import play.api.db.slick.Config.driver.simple.TableQuery
import play.api.db.slick.Config.driver.simple.columnExtensionMethods
import play.api.db.slick.Config.driver.simple.longColumnType
import play.api.db.slick.Config.driver.simple.queryToAppliedQueryInvoker
import play.api.db.slick.Config.driver.simple.queryToInsertInvoker
import play.api.db.slick.Config.driver.simple.stringColumnExtensionMethods
import play.api.db.slick.Config.driver.simple.stringColumnType
import play.api.db.slick.Config.driver.simple.valueToConstColumn
import play.api.db.slick.DB
import play.api.db.slick.DBAction
You can simply do
val queryCompiled = Compiled(bioMaterialTypes)
Compiled expects either a query value or a function whose parameters are lifted column types; the eta-expanded getAllBmts _ is a plain () => Query, for which no implicit Compilable instance exists, whereas the TableQuery itself can be compiled directly.
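If a parameterized compiled query is what you eventually need, the function passed to Compiled has to take lifted column types as parameters (Column[...] in Slick 2.x). A minimal sketch, assuming a hypothetical id column of type Long on BioMaterialType (the columnExtensionMethods / longColumnType imports already in scope provide ===):
import scala.slick.lifted.Column

def bmtById(id: Column[Long]) =
  bioMaterialTypes.filter(_.id === id)

// Compiling a function over lifted parameters works, unlike () => Query
val bmtByIdCompiled = Compiled(bmtById _)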