Compiled Queries in Slick - PostgreSQL

I need to compile a query in Slick with Play and PostgreSQL
val bioMaterialTypes: TableQuery[Tables.BioMaterialType] = Tables.BioMaterialType
def getAllBmts() = for{ bmt <- bioMaterialTypes } yield bmt
val queryCompiled = Compiled(getAllBmts _)
but in Scala IDE I get this error at the apply of Compiled:
Multiple markers at this line
- Computation of type () => scala.slick.lifted.Query[models.Tables.BioMaterialType,models.Tables.BioMaterialTypeRow,Seq]
cannot be compiled (as type C)
- not enough arguments for method apply: (implicit compilable: scala.slick.lifted.Compilable[() =>
scala.slick.lifted.Query[models.Tables.BioMaterialType,models.Tables.BioMaterialTypeRow,Seq],C], implicit driver:
scala.slick.profile.BasicProfile)C in object Compiled. Unspecified value parameters compilable, driver.
These are my imports:
import scala.concurrent.Future
import scala.slick.jdbc.StaticQuery.staticQueryToInvoker
import scala.slick.lifted.Compiled
import scala.slick.driver.PostgresDriver
import javax.inject.Inject
import javax.inject.Singleton
import models.BioMaterialType
import models.Tables
import play.api.Application
import play.api.db.slick.Config.driver.simple.TableQuery
import play.api.db.slick.Config.driver.simple.columnExtensionMethods
import play.api.db.slick.Config.driver.simple.longColumnType
import play.api.db.slick.Config.driver.simple.queryToAppliedQueryInvoker
import play.api.db.slick.Config.driver.simple.queryToInsertInvoker
import play.api.db.slick.Config.driver.simple.stringColumnExtensionMethods
import play.api.db.slick.Config.driver.simple.stringColumnType
import play.api.db.slick.Config.driver.simple.valueToConstColumn
import play.api.db.slick.DB
import play.api.db.slick.DBAction

You can simply do
val queryCompiled = Compiled(bioMaterialTypes)
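Compiled accepts either a lifted query value or a function whose parameters are lifted column types, so a plain Scala function () => Query (which is what getAllBmts _ produces) has no Compilable instance. If you do want a parameterized compiled query, here is a minimal sketch (the id column is hypothetical, and it assumes import play.api.db.slick.Config.driver.simple._ plus an implicit session when running it):

// Hypothetical parameterized variant; `id` is an assumed Long column on the table.
val bmtById = Compiled { (id: Column[Long]) =>
  bioMaterialTypes.filter(_.id === id)
}

// Running it inside a session (Slick 2.x invoker API):
// val row = bmtById(42L).firstOption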

Related

Packaging scala class on databricks (error: not found: value dbutils)

Trying to make a package with a class
package x.y.Log
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.{DataFrame}
import org.apache.spark.sql.functions.{lit, explode, collect_list, struct}
import org.apache.spark.sql.types.{StructField, StructType}
import java.util.Calendar
import java.text.SimpleDateFormat
import org.apache.spark.sql.functions._
import spark.implicits._
class Log{
...
}
Everything runs fine in the same notebook, but once I try to create a package that I could use in other notebooks, I get errors:
<notebook>:11: error: not found: object spark
import spark.implicits._
^
<notebook>:21: error: not found: value dbutils
val notebookPath = dbutils.notebook.getContext().notebookPath.get
^
<notebook>:22: error: not found: value dbutils
val userName = dbutils.notebook.getContext.tags("user")
^
<notebook>:23: error: not found: value dbutils
val userId = dbutils.notebook.getContext.tags("userId")
^
<notebook>:41: error: not found: value spark
var rawMeta = spark.read.format("json").option("multiLine", true).load("/FileStore/tables/xxx.json")
^
<notebook>:42: error: value $ is not a member of StringContext
.filter($"Name".isin(readSources))
Does anyone know how to package this class with these libraries?
Assuming you are running Spark 2.x, the statement import spark.implicits._ only works when you have a SparkSession object in scope. The Implicits object is defined inside the SparkSession class and extends SQLImplicits from previous versions of Spark (see the SparkSession source on GitHub to verify). Create the SparkSession inside your class and import its implicits from there:
package x.y.Log
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{lit, explode, collect_list, struct}
import org.apache.spark.sql.types.{StructField, StructType}
import java.util.Calendar
import java.text.SimpleDateFormat
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession
class Log{
val spark: SparkSession = SparkSession.builder.enableHiveSupport().getOrCreate()
import spark.implicits._
...[rest of the code below]
}
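As a quick sanity check, here is a minimal self-contained sketch (class, column, and data values are made up, not taken from the question) showing that once spark.implicits._ comes from a SparkSession that is in scope, the $-interpolator from the original error also resolves:

package x.y.Log

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

// Illustrative only: demonstrates that $"..." and .toDF compile once the
// implicits are imported from an in-scope SparkSession.
class LogExample {
  val spark: SparkSession = SparkSession.builder.enableHiveSupport().getOrCreate()
  import spark.implicits._

  def demo(): Unit = {
    val df = Seq(("alpha", 1), ("beta", 2)).toDF("Name", "Value")
    df.filter($"Name".isin("alpha", "beta"))   // resolves via spark.implicits._
      .withColumn("source", lit("example"))
      .show()
  }
}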

how to store SELECT statement data in a var in Scala Play 2.6 Slick

I have written one SQL SELECT query and I want to store the result returned from this query in some variable. How do I do that?
val count=(sql"""SELECT count(User_ID) from user_details_table where email=$email or Mobile_no=$Mobile_no""".as[(String)] )
val a1=Await.result(dbConfig.run(count), 1000 seconds)
Ok(Json.toJson(a1.toString()))
Here I am not able to find out the id that is returned from this query. This is my complete code, showing what I am trying to do:
import javax.inject.Inject
import play.api.mvc._
import play.api.mvc.{AbstractController, ControllerComponents}
import play.api.libs.json.{JsPath, Json, Writes}
import com.google.gson.{FieldNamingPolicy, Gson, GsonBuilder}
import org.joda.time.format.DateTimeFormat
import org.joda.time.{DateTime, Period}
import slick.jdbc.GetResult
import slick.jdbc.MySQLProfile.api._   // assumed: needed for Database.forURL and the sql"..." interpolator
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
class adduserrs @Inject()(cc: ControllerComponents) extends AbstractController(cc)
{
def adduser(Name:String,Mobile_no:String,email:String,userpassword:String,usertype:String) = Action
{
import play.api.libs.json.{JsPath, JsValue, Json, Writes}
val gson: Gson = new GsonBuilder().setFieldNamingPolicy(FieldNamingPolicy.UPPER_CAMEL_CASE).create
val dbConfig = Database.forURL("jdbc:mysql://localhost:3306/equineapp?user=root&password=123456", driver = "com.mysql.jdbc.Driver")
var usertypeid=0;
if(usertype=="Owner")
{
usertypeid=1;
}
else if(usertype=="Practitioner")
{
usertypeid=2;
}
val count=(sql"""SELECT count(User_ID) from user_details_table where email=$email or Mobile_no=$Mobile_no""".as[(String)] )
val a1=Await.result(dbConfig.run(count), 1000 seconds)
Ok(Json.toJson(a1.toString()))
if (count==0) {
val setup1 = sql"call addusersreg ($Name,$Mobile_no,$email,$userpassword,$usertypeid);".as[(String, String, String, String, Int)]
val res = Await.result(dbConfig.run(setup1), 1000 seconds)
Ok(Json.toJson(1))
}
else {
Ok(Json.toJson(0))
}
}
}
From the above code I am just trying to insert user details into the database.
If the user already exists in the DB then it will return the response 0, or else it will return the response 1.
Ok, so here you are only counting, so perhaps you just need a variable of type Long:
SQL("select count(*) from User where tel = {telephoneNumber}")
.on('telephoneNumber -> numberThatYouPassedToTheMethod).executeQuery()
.as(SqlParser.scalar[Long].single)
You just totally changed the question. Anyway, for the error you mentioned in the comment: the reason is that you have no connection, and you did not define which database you want to use (the default or another one). All database calls must live within the following block:
db.withConnection{
implicit connection =>
//SQL queries live here.
}
Moreover, db needs to be injected if it is not the default database:
class myTestModel @Inject()(@NamedDatabase("nonDefaultDB") db: Database) { ??? }
Follow MVC architecture: for consistency with the model-view-controller architecture, all your database calls should live in model classes, and the controller method should call the model method for the result.
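Putting these pieces together, a hedged sketch of such a model class (names are illustrative; it assumes Anorm, a configured database named "nonDefaultDB", and the @NamedDatabase annotation from play.db):

import javax.inject.Inject
import play.api.db.Database
import play.db.NamedDatabase
import anorm._

// Illustrative model class; table and column names are assumptions.
class UserModel @Inject()(@NamedDatabase("nonDefaultDB") db: Database) {

  // Returns how many users exist with the given telephone number.
  def countByTelephone(telephoneNumber: String): Long =
    db.withConnection { implicit connection =>
      SQL("select count(*) from User where tel = {telephoneNumber}")
        .on("telephoneNumber" -> telephoneNumber)
        .as(SqlParser.scalar[Long].single)
    }
}

The controller then simply calls userModel.countByTelephone(...) and decides which JSON response to return.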

scala import library wildcard

I am new to Scala. Please be gentle.
The import below imports everything (every class, trait and object) under ml.
import org.apache.spark.ml._
but NOT ParamMap, which is under
import org.apache.spark.ml.param._
In other words, for the code below, if I do:
import org.apache.spark.ml.param._
import org.apache.spark.ml._
class Kmeans extends Transformer {
def copy(extra: ParamMap): Unit = {
defaultCopy(extra)
}}
Then I have no import errors, but if I comment out import org.apache.spark.ml.param._:
//import org.apache.spark.ml.param._
import org.apache.spark.ml._
class Kmeans extends Transformer {
def copy(extra: ParamMap): Unit = {
defaultCopy(extra)
}}
It gives an import error on ParamMap.
Question
Why isn't org.apache.spark.ml.param.ParamMap included in import org.apache.spark.ml._?
Scala imports are not recursive: import org.apache.spark.ml._ imports all classes and fields directly under the ml package, but not the ones under its sub-packages.
Since ParamMap is under one of ml's sub-packages (ml.param), you'll have to import that package, or the ParamMap class, directly.
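The same rule can be seen with plain JDK packages, independent of Spark (a small self-contained sketch):

// A wildcard import is not recursive: java.util._ does not pull in java.util.concurrent.
import java.util._
import java.util.concurrent.ConcurrentHashMap   // sub-package member needs its own import

object WildcardDemo extends App {
  val now = new Date()                           // Date comes from the java.util._ wildcard
  val counts = new ConcurrentHashMap[String, Int]()
  counts.put("answers", 1)
  println(s"$now -> ${counts.get("answers")}")
}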

value async is not a member of object play.api.mvc.Action

The following is my method definition:
import play.api.mvc.{Action, Controller}
import java.io.{ByteArrayInputStream, FileInputStream, IOException, File}
import play.api.Logger._
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future
import play.api.libs.iteratee.Enumerator
import play.api.mvc.ResponseHeader
import play.api.mvc.SimpleResult
import org.apache.commons.io.IOUtils
import java.nio.ByteBuffer
def do_something(name: String, address: String) = Action.async(parse.multipartFormData) {
/* Some code */
}
I am getting the following compilation error:
value async is not a member of object play.api.mvc.Action
Action.async was first introduced in Play 2.2. Both 2.1 and 2.2 are no longer supported, so you should consider upgrading (the current version as of this posting is 2.5.2).
See the API docs for:
Play 2.1 - Action
Play 2.2 - Action
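For comparison, a hedged sketch of the two styles (it assumes a controller that extends Controller, with the imports shown in the question):

// Play 2.2+: Action.async exists and takes the body as a Future.
def do_something(name: String, address: String) = Action.async(parse.multipartFormData) { request =>
  Future.successful(Ok("done"))
}

// Play 2.1: there is no Action.async; the equivalent was the Async block inside a plain Action.
// def do_something(name: String, address: String) = Action(parse.multipartFormData) { request =>
//   Async { Future.successful(Ok("done")) }
// }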

value lookup is not a member of org.apache.spark.rdd.RDD[(String, String)]

I have got a problem when I tried to compile my Scala program with SBT.
I have imported the classes I need. Here is part of my code:
import java.io.File
import java.io.FileWriter
import java.io.PrintWriter
import java.io.IOException
import org.apache.spark.{SparkConf,SparkContext}
import org.apache.spark.rdd.PairRDDFunctions
import scala.util.Random
......
val data=sc.textFile(path)
val kv=data.map{s=>
val a=s.split(",")
(a(0),a(1))
}.cache()
kv.first()
val start=System.currentTimeMillis()
for(tg<-target){
kv.lookup(tg.toString)
}
The error detail is:
value lookup is not a member of org.apache.spark.rdd.RDD[(String, String)]
[error] kv.lookup(tg.toString)
What confuses me is that I have imported org.apache.spark.rdd.PairRDDFunctions,
but it doesn't work. And when I run this in the Spark shell, it runs well.
Import org.apache.spark.SparkContext._
to have access to the implicits that let you use PairRDDFunctions on an RDD of type (K, V).
There's no need to import PairRDDFunctions directly.
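Applied to the snippet above, a minimal sketch (path is assumed to be defined elsewhere, as in the question):

import org.apache.spark.SparkContext._            // brings the (K, V) RDD implicits into scope
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("lookup-example"))
val kv = sc.textFile(path).map { s =>
  val a = s.split(",")
  (a(0), a(1))
}.cache()

kv.lookup("someKey")   // now resolves through PairRDDFunctions

(On Spark 1.3 and later these implicits are provided by the RDD companion object, so the explicit import is mainly needed on older versions.)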