JSON response in Scala Slick

I am executing a stored procedure in a Scala controller class, but I am not able to get the result in proper JSON format:
val dbConfig = Database.forURL("jdbc:mysql://localhost:3306/equineapp?user=root&password=123456", driver = "com.mysql.jdbc.Driver")
val setup1 = sql"call HorsePrfile ($HorseId);".as[(Int,String)]
val res = Await.result(dbConfig.run(setup1), 1000 seconds)
// val json = Json.toJson(res)
Ok(Json.toJson(res.toList))
Can anyone tell me how to return a JSON response, with a header, from the above code?
This is my model class:
case class HorseProfile(HorseID: Int, HorsePicUrl: String)
case class HorseProfileData(HorseID: Int, HorsePicUrl: String)

object HorseForm1 {
  val form = Form(
    mapping(
      "HorseID" -> number,
      "HorsePicUrl" -> nonEmptyText
    )(HorseProfileData.apply)(HorseProfileData.unapply)
  )

  implicit val fooWrites: Writes[HorseProfile] = (
    (__ \ "HorseID").write[Int] and (__ \ "HorsePicUrl").write[String]
  )(foo => (foo.HorseID, foo.HorsePicUrl))
}
class HorseProfilePicDef(tag: Tag) extends Table[HorseProfile](tag, "horse_profile_image") {
  def HorseID1 = column[Int]("horseid")
  def HorsePicUrl = column[String]("horseimage")

  override def * =
    (HorseID1, HorsePicUrl) <> (HorseProfile.tupled, HorseProfile.unapply)
}
object HorseProfilePics {
  val dbConfig = Database.forURL("jdbc:mysql://localhost:3306/equineapp?user=root&password=123456", driver = "com.mysql.jdbc.Driver")

  val horsepics = TableQuery[HorseProfilePicDef]

  // A single implicit Format is enough; several competing implicit writers for the
  // same type in scope lead to ambiguous-implicit errors.
  implicit val horseProfileFormat: OFormat[HorseProfile] = Json.format[HorseProfile]

  implicit def seqWrites[T](implicit fmt: Writes[T]): Writes[Seq[T]] = new Writes[Seq[T]] {
    def writes(ts: Seq[T]) = JsArray(ts.toList.map(t => Json.toJson(t)(fmt)))
  }

  def gethorseimage(HorseId: Int): Future[Seq[HorseProfile]] = {
    val set = horsepics.filter(_.HorseID1 === HorseId)
    dbConfig.run(set.result)
  }
}
I have declared the column here as a String, but in my database this column is declared as a BLOB, and I am trying to insert data and retrieve data in JSON format.
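For reference, a minimal sketch (a hypothetical alternative mapping, not the code above) of how a BLOB column can be mapped as Array[Byte] in Slick and Base64-encoded so the bytes survive JSON serialization:

import java.util.Base64
import play.api.libs.json._

// Hypothetical table mapping: store the image bytes rather than a String.
class HorseProfileBlobDef(tag: Tag) extends Table[(Int, Array[Byte])](tag, "horse_profile_image") {
  def horseId = column[Int]("horseid")
  def horseImage = column[Array[Byte]]("horseimage") // BLOB column mapped to Array[Byte]
  def * = (horseId, horseImage)
}

// Encode the bytes as Base64 so they can travel inside a JSON string value.
def imageToJson(id: Int, image: Array[Byte]): JsValue =
  Json.obj("HorseID" -> id, "HorseImage" -> Base64.getEncoder.encodeToString(image))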

First of all, never ever use Await.result in your actual implementation. What is the point of using Slick if you don't want to use its concurrent capabilities?
Second, you need to map the result, something like:
dbCall.map { returnedData =>
  ??? // Steps three and four go here
}.recover {
  // In case something went wrong with your DB call.
  case e => InternalServerError("Db failure")
}
Third, you need to represent your returnedData as a case class.
Fourth, you use the JSON writer to turn a case class instance into its JSON representation:
implicit val someWrites: OWrites[SomeCaseClass] = Json.writes[SomeCaseClass]
Json.toJson(someInstance) // someInstance: SomeCaseClass
Update
So based on your comment, I implemented it so that it uses your return type and turns it into JSON. Here it is:
import play.api.libs.json.{Json, OWrites}
case class HorseProfile(i: Int, value: String)
val dbResult: Option[HorseProfile] = Some(HorseProfile(77,"iVBORw0KGgoAAAANSUhE"))
implicit val horseProfileWrites: OWrites[HorseProfile] = Json.writes[HorseProfile]
Json.toJson(dbResult)
And the result I'm getting is this:
res0: play.api.libs.json.JsValue = {"i":77,"value":"iVBORw0KGgoAAAANSUhE"}
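Putting the pieces together in the controller, a minimal sketch (assuming a Play Action, an implicit ExecutionContext, and the implicit Writes[HorseProfile] shown above) that returns the JSON asynchronously and adds a header instead of blocking with Await.result:

def horseProfile(horseId: Int) = Action.async {
  val query = sql"call HorsePrfile ($horseId);".as[(Int, String)]
  dbConfig.run(query)
    .map { rows =>
      val profiles = rows.map { case (id, url) => HorseProfile(id, url) }
      // Json.toJson picks up the implicit Writes[HorseProfile]; withHeaders adds a custom header.
      Ok(Json.toJson(profiles)).withHeaders("X-Total-Count" -> profiles.size.toString)
    }
    .recover { case _: Exception => InternalServerError("Db failure") }
}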

Related

Use a functional construct to create a List of classes from a REST response

I wrote the below code to:
Query a REST endpoint.
Create an instance of the case class Pair from the name and value fields in the response.
Add the Pair instance to the list lb.
This code is very imperative:
case class Pair(name: String, value: Double, update: String)

var lb = new ListBuffer[Pair]()
for (elem <- Data.updates) {
  try {
    val request = "http://...../values/" + elem
    val response = scala.io.Source.fromURL(request).mkString
    val jsonObject = Json.parse(response)
    val value = (jsonObject \ "value").get.toString().toDouble
    val update = (jsonObject \ "update").get.toString()
    lb += Pair(elem, value, update)
  } catch {
    case e: IOException => println("Exception occurred accessing data for " + elem)
  }
}
The above code provides the context of the problem. I've refactored it into a block that runs on Scastie with some changes, but it hopefully conveys the same problem:
import play.api.libs.json.{Json, __}

val updates = List("test1", "test2")

case class Pair(name: String, value: Double, update: String)

var lb = new scala.collection.mutable.ListBuffer[Pair]()
for (elem <- updates) {
  val request = "http://www.google.com" + elem
  val response: String = "{\"value\" : 1 , \"update\" : \"test\"}"
  val jsonObject = Json.parse(response)
  val value = (jsonObject \ "value").get.toString().toDouble
  val update = (jsonObject \ "update").get.toString()
  lb += Pair(elem, value, update)
}
println(lb.toList)
Scastie: https://scastie.scala-lang.org/RmBUUem5SxGT2dYLIqLryQ
How can I replace the ListBuffer with a List and populate it using a functional construct instead of a loop?
Just use .map:
val lb = updates.map { elem =>
  ...
  Pair(elem, value, update)
}
I'd first define a method:
def getPairFromElem(elem: String): Pair = {
  val request = "http://www.google.com" + elem
  val response: String = "{\"value\" : 1 , \"update\" : \"test\"}"
  val jsonObject = Json.parse(response)
  val value = (jsonObject \ "value").get.toString().toDouble
  val update = (jsonObject \ "update").get.toString()
  Pair(elem, value, update)
}
Then use it:
val updates = List("test1" , "test2")
val lb = updates.map(getPairFromElem)
Code run at Scastie.
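Not part of the answers above, but if the IOException handling from the original loop still matters once it becomes a .map, one option (a sketch reusing the fetch/parse logic from the first snippet) is to wrap each element in Try and keep only the successes:

import scala.util.Try

def fetchPair(elem: String): Try[Pair] = Try {
  val response = scala.io.Source.fromURL("http://...../values/" + elem).mkString
  val jsonObject = Json.parse(response)
  Pair(elem,
    (jsonObject \ "value").get.toString().toDouble,
    (jsonObject \ "update").get.toString())
}

// Failed elements are simply dropped, mirroring the original catch-and-continue behaviour.
val pairs: List[Pair] = updates.flatMap(elem => fetchPair(elem).toOption)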

Unable to Analyse data

val patterns = ctx.getBroadcastState(patternStateDescriptor)
The imports I made:
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.api.common.state.{MapStateDescriptor, ValueState, ValueStateDescriptor}
import org.apache.flink.api.scala.typeutils.Types
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.datastream.BroadcastStream
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector
Here's the code:
val env = StreamExecutionEnvironment.getExecutionEnvironment

val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")

val patternStream = new FlinkKafkaConsumer010("patterns", new SimpleStringSchema, properties)
val patterns = env.addSource(patternStream)
var patternData = patterns.map { str =>
  val splitted_str = str.split(",")
  PatternStream(splitted_str(0).trim, splitted_str(1).trim, splitted_str(2).trim)
}

val logsStream = new FlinkKafkaConsumer010("logs", new SimpleStringSchema, properties)
// logsStream.setStartFromEarliest()
val logs = env.addSource(logsStream)
var data = logs.map { str =>
  val splitted_str = str.split(",")
  LogsTest(splitted_str.head.trim, splitted_str(1).trim, splitted_str(2).trim)
}

val keyedData: KeyedStream[LogsTest, String] = data.keyBy(_.metric)

// The first type parameter is for the key and the second is for the value.
val bcStateDescriptor = new MapStateDescriptor[Unit, PatternStream]("patterns", Types.UNIT, Types.of[PatternStream])
val broadcastPatterns: BroadcastStream[PatternStream] = patternData.broadcast(bcStateDescriptor)

val alerts = keyedData
  .connect(broadcastPatterns)
  .process(new PatternEvaluator())

alerts.print()
// println(alerts.getClass)
// val sinkProducer = new FlinkKafkaProducer010("output", new SimpleStringSchema(), properties)

env.execute("Flink Broadcast State Job")
}
class PatternEvaluator()
  extends KeyedBroadcastProcessFunction[String, LogsTest, PatternStream, (String, String, String)] {

  private lazy val patternStateDescriptor = new MapStateDescriptor("patterns", classOf[String], classOf[String])
  private var lastMetricState: ValueState[String] = _

  override def open(parameters: Configuration): Unit = {
    val lastMetricDescriptor = new ValueStateDescriptor("last-metric", classOf[String])
    lastMetricState = getRuntimeContext.getState(lastMetricDescriptor)
  }

  override def processElement(reading: LogsTest,
                              readOnlyCtx: KeyedBroadcastProcessFunction[String, LogsTest, PatternStream, (String, String, String)]#ReadOnlyContext,
                              out: Collector[(String, String, String)]): Unit = {
    val metrics = readOnlyCtx.getBroadcastState(patternStateDescriptor)
    if (metrics.contains(reading.metric)) {
      val metricPattern: String = metrics.get(reading.metric)
      val metricPatternValue: String = metrics.get(reading.value)
      val lastMetric = lastMetricState.value()
      val logsMetric = reading.metric
      val logsValue = reading.value
      if (logsMetric == metricPattern) {
        if (metricPatternValue == logsValue) {
          out.collect((reading.timestamp, reading.value, reading.metric))
        }
      }
    }
  }

  override def processBroadcastElement(
      update: PatternStream,
      ctx: KeyedBroadcastProcessFunction[String, LogsTest, PatternStream, (String, String, String)]#Context,
      out: Collector[(String, String, String)]
  ): Unit = {
    val patterns = ctx.getBroadcastState(patternStateDescriptor)
    if (update.metric == "IP") {
      patterns.put(update.metric /*, update.operator*/, update.value)
    }
    // else if (update.metric == "username") {
    //   patterns.put(update.metric, update.value)
    // }
    // else {
    //   println("No required data found")
    // }
  }
}
Sample data - logs stream:
"21/09/98","IP", "5.5.5.5"
Pattern stream:
"IP","==","5.5.5.5"
I'm unable to analyse the data and get the desired result, i.e. 21/09/98,IP,5.5.5.5. There's no error as of now; it's just not analysing the data. The code is reading the streams (checked).
One common source of trouble in cases like this is that the API offers no control over the order in which the patterns and the data are ingested. It could be that processElement is being called before processBroadcastElement.
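A possible mitigation, shown only as a sketch (it assumes the LogsTest type and the patternStateDescriptor from the question, and is not a complete fix), is to buffer readings that arrive before their pattern in keyed state instead of dropping them:

import org.apache.flink.api.common.state.ListStateDescriptor

// Inside PatternEvaluator: park readings whose pattern has not been broadcast yet.
private lazy val pendingReadings =
  new ListStateDescriptor[LogsTest]("pending-readings", classOf[LogsTest])

override def processElement(reading: LogsTest,
                            readOnlyCtx: KeyedBroadcastProcessFunction[String, LogsTest, PatternStream, (String, String, String)]#ReadOnlyContext,
                            out: Collector[(String, String, String)]): Unit = {
  val metrics = readOnlyCtx.getBroadcastState(patternStateDescriptor)
  if (metrics.contains(reading.metric)) {
    if (metrics.get(reading.metric) == reading.value)
      out.collect((reading.timestamp, reading.value, reading.metric))
  } else {
    // Pattern not seen yet: buffer the reading in keyed state; it can be replayed later,
    // e.g. from processBroadcastElement via ctx.applyToKeyedState.
    getRuntimeContext.getListState(pendingReadings).add(reading)
  }
}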

How to use Akka Streams with Akka HTTP to stream the response

I'm new to Akka Streams. I used the following code for CSV parsing:
class CsvParser(config: Config)(implicit system: ActorSystem) extends LazyLogging with NumberValidation {

  import system.dispatcher

  private val importDirectory = Paths.get(config.getString("importer.import-directory")).toFile
  private val linesToSkip = config.getInt("importer.lines-to-skip")
  private val concurrentFiles = config.getInt("importer.concurrent-files")
  private val concurrentWrites = config.getInt("importer.concurrent-writes")
  private val nonIOParallelism = config.getInt("importer.non-io-parallelism")

  def save(r: ValidReading): Future[Unit] = {
    Future()
  }

  def parseLine(filePath: String)(line: String): Future[Reading] = Future {
    val fields = line.split(";")
    val id = fields(0).toInt
    try {
      val value = fields(1).toDouble
      ValidReading(id, value)
    } catch {
      case t: Throwable =>
        logger.error(s"Unable to parse line in $filePath:\n$line: ${t.getMessage}")
        InvalidReading(id)
    }
  }

  val lineDelimiter: Flow[ByteString, ByteString, NotUsed] =
    Framing.delimiter(ByteString("\n"), 128, allowTruncation = true)

  val parseFile: Flow[File, Reading, NotUsed] =
    Flow[File].flatMapConcat { file =>
      val src = FileSource.fromFile(file).getLines()
      val source: Source[String, NotUsed] = Source.fromIterator(() => src)
      // val gzipInputStream = new GZIPInputStream(new FileInputStream(file))
      source
        .mapAsync(parallelism = nonIOParallelism)(parseLine(file.getPath))
    }

  val computeAverage: Flow[Reading, ValidReading, NotUsed] =
    Flow[Reading].grouped(2).mapAsyncUnordered(parallelism = nonIOParallelism) { readings =>
      Future {
        val validReadings = readings.collect { case r: ValidReading => r }
        val average = if (validReadings.nonEmpty) validReadings.map(_.value).sum / validReadings.size else -1
        ValidReading(readings.head.id, average)
      }
    }

  val storeReadings: Sink[ValidReading, Future[Done]] =
    Flow[ValidReading]
      .mapAsyncUnordered(concurrentWrites)(save)
      .toMat(Sink.ignore)(Keep.right)

  val processSingleFile: Flow[File, ValidReading, NotUsed] =
    Flow[File]
      .via(parseFile)
      .via(computeAverage)

  def importFromFiles = {
    implicit val materializer = ActorMaterializer()

    val files = importDirectory.listFiles.toList
    logger.info(s"Starting import of ${files.size} files from ${importDirectory.getPath}")

    val startTime = System.currentTimeMillis()

    val balancer = GraphDSL.create() { implicit builder =>
      import GraphDSL.Implicits._

      val balance = builder.add(Balance[File](concurrentFiles))
      val merge = builder.add(Merge[ValidReading](concurrentFiles))

      (1 to concurrentFiles).foreach { _ =>
        balance ~> processSingleFile ~> merge
      }

      FlowShape(balance.in, merge.out)
    }

    Source(files)
      .via(balancer)
      .withAttributes(ActorAttributes.supervisionStrategy { e =>
        logger.error("Exception thrown during stream processing", e)
        Supervision.Resume
      })
      .runWith(storeReadings)
      .andThen {
        case Success(_) =>
          val elapsedTime = (System.currentTimeMillis() - startTime) / 1000.0
          logger.info(s"Import finished in ${elapsedTime}s")
        case Failure(e) => logger.error("Import failed", e)
      }
  }
}
I wanted to use Akka HTTP to get back all the ValidReading entities parsed from the CSV, but I couldn't understand how to do that.
The above code fetches a file from the server and parses each line to generate a ValidReading.
How can I pass/upload a CSV via akka-http, parse the file, and stream the resulting response back to the endpoint?
The "essence" of the solution is something like this:
import akka.http.scaladsl.server.Directives._

val route = fileUpload("csv") {
  case (metadata, byteSource) =>
    val source = byteSource.map(x => x)
    complete(HttpResponse(entity = HttpEntity(ContentTypes.`text/csv(UTF-8)`, source)))
}
You detect that the uploaded thing is multipart-form-data with a chunk named "csv". You get the byteSource from that. Do the calculation (insert your logic into the .map(x => x) part). Convert your data back to ByteString. Complete the request with the new source. This will make your endpoint act like a proxy.
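As a rough sketch of that idea (assuming the parseLine and ValidReading(id, value) definitions from the question are in scope; this is not a tested endpoint), the .map(x => x) placeholder could become: frame the bytes into lines, parse them, and render each result back to ByteString:

import akka.http.scaladsl.model.{ContentTypes, HttpEntity, HttpResponse}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Framing
import akka.util.ByteString

val route = fileUpload("csv") {
  case (metadata, byteSource) =>
    val parsed = byteSource
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
      .map(_.utf8String)
      .mapAsync(parallelism = 4)(parseLine(metadata.fileName)) // parseLine from the question
      .collect { case r: ValidReading => r }                   // keep only the valid readings
      .map(r => ByteString(s"${r.id};${r.value}\n"))           // render each reading as a CSV line
    complete(HttpResponse(entity = HttpEntity(ContentTypes.`text/csv(UTF-8)`, parsed)))
}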

Empty Iterator: Asynchronous Cassandra write

I am trying to implement asynchronous Cassandra writes on objects (not RDDs) using TableWriter. Code snippet below:
class CassandraOperations[T] extends Serializable with Logging {

  /**
   * Saves the data from an object or an Iterator of objects to a Cassandra table asynchronously,
   * using the specified column names. You can check whether this action has completed
   * via a callback on the Future.
   */
  def saveToCassandraAsync(
      cc: CassandraConnector,
      keyspaceName: String,
      tableName: String,
      columns: ColumnSelector = AllColumns,
      data: Iterator[T],
      writeConf: WriteConf = WriteConf(ttl = TTLOption.constant(80000)))(implicit rwf: RowWriterFactory[T]): Future[Unit] = {

    implicit val ec = ExecutionContext.global
    val writer = TableWriter(cc, keyspaceName, tableName, columns, writeConf)
    val futureAction = Future(writer.write(TaskContext.get(), data: Iterator[T]))
    futureAction
  }
}
And then wait using:
Await.result(resultFuture, TIMEOUT seconds)
The data is available when execution reaches the write call on the line:
val futureAction = Future(writer.write(TaskContext.get(), data: Iterator[T]))
But data is empty by the time execution reaches the definition def write(taskContext: TaskContext, data: Iterator[T]) of the function:
def write(taskContext: TaskContext, data: Iterator[T]) {
  val updater = OutputMetricsUpdater(taskContext, writeConf)
  connector.withSessionDo { session =>
    val protocolVersion = session.getCluster.getConfiguration.getProtocolOptions.getProtocolVersion
    val rowIterator = new CountingIterator(data)
    val stmt = prepareStatement(session).setConsistencyLevel(writeConf.consistencyLevel)
    val queryExecutor = new QueryExecutor(
      session,
      writeConf.parallelismLevel,
      Some(updater.batchFinished(success = true, _, _, _)),
      Some(updater.batchFinished(success = false, _, _, _)))
    val routingKeyGenerator = new RoutingKeyGenerator(tableDef, columnNames)
    val batchType = if (isCounterUpdate) Type.COUNTER else Type.UNLOGGED
    val boundStmtBuilder = new BoundStatementBuilder(
      rowWriter,
      stmt,
      protocolVersion = protocolVersion,
      ignoreNulls = writeConf.ignoreNulls)
    val batchStmtBuilder = new BatchStatementBuilder(
      batchType,
      routingKeyGenerator,
      writeConf.consistencyLevel)
    val batchKeyGenerator = batchRoutingKey(session, routingKeyGenerator) _
    val batchBuilder = new GroupingBatchBuilder(
      boundStmtBuilder,
      batchStmtBuilder,
      batchKeyGenerator,
      writeConf.batchSize,
      writeConf.batchGroupingBufferSize,
      rowIterator)
    val rateLimiter = new RateLimiter((writeConf.throughputMiBPS * 1024 * 1024).toLong, 1024 * 1024)

    logDebug(s"Writing data partition to $keyspaceName.$tableName in batches of ${writeConf.batchSize}.")

    for (stmtToWrite <- batchBuilder) {
      queryExecutor.executeAsync(stmtToWrite)
      assert(stmtToWrite.bytesCount > 0)
      rateLimiter.maybeSleep(stmtToWrite.bytesCount)
    }

    queryExecutor.waitForCurrentlyExecutingTasks()

    if (!queryExecutor.successful)
      throw new IOException(s"Failed to write statements to $keyspaceName.$tableName.")

    val duration = updater.finish() / 1000000000d
    logInfo(f"Wrote ${rowIterator.count} rows to $keyspaceName.$tableName in $duration%.3f s.")
    if (boundStmtBuilder.logUnsetToNullWarning) {
      logWarning(boundStmtBuilder.UnsetToNullWarning)
    }
  }
}
So I see an empty iterator. Please guide me on what the issue could be.
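For what it's worth, here is a small self-contained sketch of one property that can produce exactly this symptom (a hypothesis about this code, not a confirmed diagnosis): a Scala Iterator is single-pass, so any traversal that happens before writer.write runs, such as logging its size, leaves it empty for the next consumer.

val data: Iterator[Int] = Iterator(1, 2, 3)
println(data.size)    // 3 -- but computing the size consumes the iterator
println(data.hasNext) // false: a later consumer (such as writer.write) now sees it empty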

Exception in thread "main" java.lang.NumberFormatException

I am new to Scala. When I try to run the example program PageRank, it shows the following error:
Exception in thread "main" java.lang.NumberFormatException: For input string: "5"
    at scala.collection.immutable.StringLike$class.parseBoolean(StringLike.scala:240)
    at scala.collection.immutable.StringLike$class.toBoolean(StringLike.scala:228)
    at scala.collection.immutable.StringOps.toBoolean(StringOps.scala:31)
    at spark.bagel.examples.WikipediaPageRank$.main(WikipediaPageRank.scala:30)
    at spark.bagel.examples.WikipediaPageRank.main(WikipediaPageRank.scala)
import spark._
import spark.SparkContext._
import spark.bagel._
import spark.bagel.Bagel._
import scala.xml.{XML, NodeSeq}

object WikipediaPageRank {
  def main(args: Array[String]) {
    if (args.length < 5) {
      System.err.println("Usage: WikipediaPageRank <inputFile> <threshold> <numPartitions> <host> <usePartitioner>")
      System.exit(-1)
    }

    System.setProperty("spark.serializer", "spark.KryoSerializer")
    System.setProperty("spark.kryo.registrator", classOf[PRKryoRegistrator].getName)

    val inputFile = args(0)
    val threshold = args(1).toDouble
    val numPartitions = args(2).toInt
    val host = args(3)
    val usePartitioner = args(4).toBoolean
    val sc = new SparkContext(host, "WikipediaPageRank")

    // Parse the Wikipedia page data into a graph
    val input = sc.textFile(inputFile)

    println("Counting vertices...")
    val numVertices = input.count()
    println("Done counting vertices.")

    println("Parsing input file...")
    var vertices = input.map(line => {
      val fields = line.split("\t")
      val (title, body) = (fields(1), fields(3).replace("\\n", "\n"))
      val links =
        if (body == "\\N")
          NodeSeq.Empty
        else
          try {
            XML.loadString(body) \\ "link" \ "target"
          } catch {
            case e: org.xml.sax.SAXParseException =>
              System.err.println("Article \"" + title + "\" has malformed XML in body:\n" + body)
              NodeSeq.Empty
          }
      val outEdges = links.map(link => new String(link.text)).toArray
      val id = new String(title)
      (id, new PRVertex(1.0 / numVertices, outEdges))
    })
    if (usePartitioner)
      vertices = vertices.partitionBy(new HashPartitioner(sc.defaultParallelism)).cache
    else
      vertices = vertices.cache
    println("Done parsing input file.")

    // Do the computation
    val epsilon = 0.01 / numVertices
    val messages = sc.parallelize(Array[(String, PRMessage)]())
    val utils = new PageRankUtils
    val result =
      Bagel.run(
        sc, vertices, messages, combiner = new PRCombiner(),
        numPartitions = numPartitions)(
        utils.computeWithCombiner(numVertices, epsilon))

    // Print the result
    System.err.println("Articles with PageRank >= " + threshold + ":")
    val top =
      (result
        .filter { case (id, vertex) => vertex.value >= threshold }
        .map { case (id, vertex) => "%s\t%s\n".format(id, vertex.value) }
        .collect.mkString)
    println(top)
  }
}
Please help me solve this error.
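For reference, a small sketch of the behaviour behind the stack trace (a general property of StringOps.toBoolean, not anything Spark-specific): toBoolean only accepts "true" or "false" (ignoring case), so whatever is reaching args(4), here the string "5", makes it throw.

val ok = "true".toBoolean // true
// "5".toBoolean          // throws the exception shown above ("For input string: \"5\"");
//                        // worth checking the order and count of the command-line arguments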