I am playing with the Akka HTTP client side. I have created a simple request, but how can I unmarshal the response? On the server side it is easy to use circe to marshal the response, but I am having difficulties on the client side.
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.Uri.Query
import akka.http.scaladsl.model._
import akka.http.scaladsl.unmarshalling.Unmarshal
import akka.stream.Materializer

import scala.concurrent.ExecutionContext

class QuestionsFetcher {

  import de.heikoseeberger.akkahttpcirce.CirceSupport._
  import io.circe.generic.auto._

  val baseUrl = "https://somewhere.com"

  def fetch(tag: String)(implicit ac: ActorSystem, materializer: Materializer) = {
    implicit val ec: ExecutionContext = ac.dispatcher
    val fromDate = DateTime.now.minus(1000 * 60 * 60 * 24)
    val uri = Uri(baseUrl).withQuery(Query("order" -> "desc"))
    val request = HttpRequest(HttpMethods.GET, uri)
    Http().singleRequest(request)
      // flatMap (not map) so this returns Future[Items] rather than a nested future;
      // Items is the case class the JSON decodes into (definition not shown)
      .flatMap(r => Unmarshal(r.entity.withContentType(ContentTypes.`application/json`)).to[Items])
  }
}
When running the code I get:
ErrorFuture(io.circe.ParsingFailure: expected json value got (line 1, column 1))
It turned out the content was gzipped, so after running it through the decoder flow it was OK.
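For reference, a minimal sketch of what that decoding step can look like, using Akka HTTP's built-in coders (this is the helper pattern from the Akka HTTP client docs; it assumes the server may also answer uncompressed, and Items is the question's response model):

import akka.http.scaladsl.coding.{Deflate, Gzip, NoCoding}
import akka.http.scaladsl.model.HttpResponse
import akka.http.scaladsl.model.headers.HttpEncodings

// Choose a decoder based on the Content-Encoding header of the response.
def decodeResponse(response: HttpResponse): HttpResponse = {
  val decoder = response.encoding match {
    case HttpEncodings.gzip    => Gzip
    case HttpEncodings.deflate => Deflate
    case _                     => NoCoding
  }
  decoder.decodeMessage(response)
}

Http().singleRequest(request)
  .map(decodeResponse)
  .flatMap(r => Unmarshal(r.entity).to[Items])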
With the code below I read and print the contents of a file using Akka Streams:
package playground

import java.nio.file.Paths

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{FileIO, Framing, Sink, Source}
import akka.util.ByteString

object Greeter extends App {
  implicit val system = ActorSystem("map-management-service")
  implicit val materializer = ActorMaterializer()

  FileIO.fromPath(Paths.get("a.csv"))
    .via(Framing.delimiter(ByteString("\n"), 256, true).map(_.utf8String))
    .runForeach(println)
}
My understanding of Akka Streams was that if the file changes or updates, the processing code (in this case println) is fired, so each time the file is updated the entire file is re-read. But this is not occurring: the file is read once.
How should this be modified so that each time a.csv is updated, the file is re-read and the println code is re-executed?
Alpakka's DirectoryChangesSource could fit your use case. For example:
import akka.stream.alpakka.file.DirectoryChange
import akka.stream.alpakka.file.scaladsl.DirectoryChangesSource

implicit val system = ActorSystem("map-management-service")
implicit val materializer = ActorMaterializer()

val myFile = Paths.get("a.csv")
val changes = DirectoryChangesSource(Paths.get("."), pollInterval = 3.seconds, maxBufferSize = 1000)

changes
  .filter {
    case (path, dirChange) =>
      path.endsWith(myFile) && (dirChange == DirectoryChange.Creation || dirChange == DirectoryChange.Modification)
  }
  .flatMapConcat(_ => FileIO.fromPath(myFile).via(Framing.delimiter(ByteString("\n"), 256, true)))
  .map(_.utf8String)
  .runForeach(println)
The above snippet prints the file's contents when the file is created and whenever it is modified, polling at three-second intervals.
I'd like to expand on Jeffrey's answer with a fully runnable Ammonite script:
import $ivy.`com.lightbend.akka::akka-stream-alpakka-file:1.1.1`

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ FileIO, Framing }
import akka.stream.alpakka.file.DirectoryChange
import akka.stream.alpakka.file.scaladsl.DirectoryChangesSource
import akka.util.ByteString
import java.nio.file.Paths
import scala.concurrent.duration._

implicit val system = ActorSystem("map-management-service")
implicit val materializer = ActorMaterializer()

val myFile = Paths.get("a.csv")
val changes = DirectoryChangesSource(Paths.get("."), pollInterval = 3.seconds, maxBufferSize = 1000)

changes
  .filter {
    case (path, dirChange) =>
      path.endsWith(myFile) && (dirChange == DirectoryChange.Creation || dirChange == DirectoryChange.Modification)
  }
  .flatMapConcat {
    case (path, _) => FileIO.fromPath(path).via(Framing.delimiter(ByteString("\n"), 256, true))
  }
  .map(_.utf8String)
  .runForeach(println)
Please direct upvotes to his answer for the original idea.
I have a Scala project that uses Akka. I want the execution context to be available throughout the project, so I've created a package object like this:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

package object connector {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  implicit val executionContext = executionContext

  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}
In the same package I have this file:
import akka.stream.alpakka.cassandra.scaladsl.CassandraSource
import akka.stream.scaladsl.Sink
import com.datastax.driver.core.{Row, Session, SimpleStatement}

import scala.collection.immutable
import scala.concurrent.Future

object CassandraService {

  def selectFromCassandra()() = {
    val statement = new SimpleStatement(s"SELECT * FROM animals.alpakka").setFetchSize(20)
    val rows: Future[immutable.Seq[Row]] = CassandraSource(statement).runWith(Sink.seq)

    rows.map { item =>
      print(item)
    }
  }
}
However, I am getting the compiler error that no execution context or session can be found. My understanding of the package keyword was that everything in that object would be available throughout the package, but that does not seem to work. I'd be grateful if this could be explained to me!
Your implementation should be something like this; I hope it helps.
package.scala

package com.app.akka

package object connector {
  // Some code here..
}

CassandraService.scala

package com.app.akka

import com.app.akka.connector._

object CassandraService {
  def selectFromCassandra() = {
    // Some code here..
  }
}
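To spell out why the import is needed: a package object's members are automatically in scope only for code that lives inside that same package; code anywhere else has to import them explicitly, as above. An alternative sketch (my addition, not part of the original answer) is to move the file into the connector package itself:

// Declaring CassandraService inside the connector package means the
// package object's members are in scope without any import.
package com.app.akka.connector

object CassandraService {
  // system, mat, executionContext and session from the package object are visible here
}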
You have two issues with your current code.
When you compile your package object connector, it throws the error below:
Error:(14, 35) recursive value executionContext needs type
implicit val executionContext = executionContext
The issue is with the implicit val executionContext = executionContext line, which defines the value in terms of itself. The solution for this issue would be as below:
implicit val executionContext = ExecutionContext
When we compile CassandraService, it throws the error below:
Error:(17, 13) Cannot find an implicit ExecutionContext. You might pass
an (implicit ec: ExecutionContext) parameter to your method
or import scala.concurrent.ExecutionContext.Implicits.global.
rows.map{item =>
The error clearly says that we either need to pass an ExecutionContext as an implicit parameter or import scala.concurrent.ExecutionContext.Implicits.global. On my system both issues are resolved and the code compiles successfully. I have attached the code for your reference.
package com.apache.scala

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

import scala.concurrent.ExecutionContext

package object connector {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  implicit val executionContext = ExecutionContext

  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}

package com.apache.scala.connector

import akka.stream.alpakka.cassandra.scaladsl.CassandraSource
import akka.stream.scaladsl.Sink
import com.datastax.driver.core.{Row, SimpleStatement}

import scala.collection.immutable
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object CassandraService {

  def selectFromCassandra() = {
    val statement = new SimpleStatement(s"SELECT * FROM animals.alpakka").setFetchSize(20)
    val rows: Future[immutable.Seq[Row]] = CassandraSource(statement).runWith(Sink.seq)

    rows.map { item =>
      print(item)
    }
  }
}
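One caveat worth adding (my note, not part of the original answer): implicit val executionContext = ExecutionContext assigns the ExecutionContext companion object, whose type is ExecutionContext.type rather than an ExecutionContext instance, so it never actually satisfies an implicit parameter; it is the Implicits.global import that makes the code compile. A more conventional sketch is to expose the actor system's dispatcher from the package object:

import akka.actor.ActorSystem
import scala.concurrent.ExecutionContext

package object connector {
  implicit val system: ActorSystem = ActorSystem()
  // The system dispatcher is a real ExecutionContext, usable implicitly downstream.
  implicit val executionContext: ExecutionContext = system.dispatcher
}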
I have written one SQL select query, and I want to store the result it returns in a variable. How can I do that?
val count = sql"""SELECT count(User_ID) from user_details_table where email=$email or Mobile_no=$Mobile_no""".as[String]
val a1 = Await.result(dbConfig.run(count), 1000 seconds)
Ok(Json.toJson(a1.toString()))
Here I am not able to get at the value that is returned by this query.
This is my complete code showing what I am trying to do:
import javax.inject.Inject

import com.google.gson.{FieldNamingPolicy, Gson, GsonBuilder}
import play.api.libs.json.Json
import play.api.mvc._
// Added import (assumption: Slick 3 with MySQL): provides Database.forURL and the sql interpolator
import slick.jdbc.MySQLProfile.api._

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

class adduserrs @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  def adduser(Name: String, Mobile_no: String, email: String, userpassword: String, usertype: String) = Action {
    val gson: Gson = new GsonBuilder().setFieldNamingPolicy(FieldNamingPolicy.UPPER_CAMEL_CASE).create
    val dbConfig = Database.forURL("jdbc:mysql://localhost:3306/equineapp?user=root&password=123456", driver = "com.mysql.jdbc.Driver")

    var usertypeid = 0
    if (usertype == "Owner") {
      usertypeid = 1
    } else if (usertype == "Practitioner") {
      usertypeid = 2
    }

    val count = sql"""SELECT count(User_ID) from user_details_table where email=$email or Mobile_no=$Mobile_no""".as[String]
    val a1 = Await.result(dbConfig.run(count), 1000 seconds)
    Ok(Json.toJson(a1.toString()))

    if (count == 0) {
      val setup1 = sql"call addusersreg ($Name,$Mobile_no,$email,$userpassword,$usertypeid);".as[(String, String, String, String, Int)]
      val res = Await.result(dbConfig.run(setup1), 1000 seconds)
      Ok(Json.toJson(1))
    } else {
      Ok(Json.toJson(0))
    }
  }
}
In the above code I am just trying to insert user details into the database.
If the user exists in the DB then it returns a response of 0, otherwise it returns a response of 1.
Ok, so here you are only counting, so perhaps you just need a variable of type Long:
SQL("select count(*) from User where tel = {telephoneNumber}")
.on('telephobeNumber -> numberThatYouPassedToTheMethod).executeQuery()
.as(SqlParser.scalar[Long].single)
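Note that the snippet above is written for Anorm, while the question's code uses Slick. A rough Slick equivalent (a sketch, assuming the question's table, dbConfig, and duration import) would be:

// Parse the single count column as Int; .head takes the one returned row.
val countAction = sql"""SELECT count(User_ID) from user_details_table
                        where email = $email or Mobile_no = $Mobile_no""".as[Int].head
val count: Int = Await.result(dbConfig.run(countAction), 10.seconds)

if (count == 0) {
  // no matching user: safe to insert
}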
You just totally changed the question. Anyway, for the error you mentioned in the comment, the reason is that you have no connection, and you did not define the database you want to use (default or otherwise). All the database calls must be within the following block:
db.withConnection { implicit connection =>
  // SQL queries live here.
}
Moreover, you need db to be injected if it is not the default database:
class myTestModel @Inject()(@NamedDatabase("nonDefaultDB") db: Database) { ??? }
Follow MVC architecture: for consistency with the model-view-controller architecture, all your database calls should be within model classes. The controller method then calls the model's method for the result.
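Putting those pieces together, a minimal model-class sketch (the class name is hypothetical, reusing the counting query from above):

import javax.inject.Inject
import play.api.db.Database
import anorm._

class UserModel @Inject()(db: Database) {

  // Counts users with the given telephone number inside a managed connection.
  def countByTel(telephoneNumber: String): Long = db.withConnection { implicit connection =>
    SQL("select count(*) from User where tel = {telephoneNumber}")
      .on("telephoneNumber" -> telephoneNumber)
      .as(SqlParser.scalar[Long].single)
  }
}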
I'm using Alpakka's Udp.bindFlow to forward incoming UDP datagrams to a Kafka broker. The legacy application that sends these datagrams requires a UDP response from the same port the message was sent to. I am struggling to model this behaviour, as it requires me to connect the output of the flow to its input.
I tried this solution, but it does not work because the response datagram is sent from a different source port:
import java.net.InetSocketAddress

import akka.actor.ActorSystem
import akka.kafka.ProducerSettings
import akka.kafka.scaladsl.Producer
import akka.stream.ActorMaterializer
import akka.stream.alpakka.udp.Datagram
import akka.stream.alpakka.udp.scaladsl.Udp
import akka.stream.scaladsl.{Flow, Source}
import akka.util.ByteString
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

object UdpInput extends App {
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val socket = new InetSocketAddress("0.0.0.0", 40000)
  val udpBindFlow = Udp.bindFlow(socket)
  val producerSettings = ProducerSettings(system, new StringSerializer, new StringSerializer)
  val kafkaSink = Flow[Datagram].map(toProducerRecord).to(Producer.plainSink(producerSettings))

  def toProducerRecord(datagram: Datagram) = new ProducerRecord[String, String]("udp", datagram.data.utf8String)
  def toResponseDatagram(datagram: Datagram) = Datagram(ByteString("OK"), datagram.remote)

  // Does not model the behaviour I'm looking for because
  // the response datagram is sent from a different source port
  Source.asSubscriber
    .via(udpBindFlow)
    .alsoTo(kafkaSink)
    .map(toResponseDatagram)
    .to(Udp.sendSink)
    .run
}
I ended up using GraphDSL to implement a cyclic graph. Thanks to dvim for pointing me in the right direction!
import java.net.InetSocketAddress

import akka.actor.ActorSystem
import akka.kafka.ProducerSettings
import akka.kafka.scaladsl.Producer
import akka.stream.alpakka.udp.Datagram
import akka.stream.alpakka.udp.scaladsl.Udp
import akka.stream.scaladsl.GraphDSL.Implicits._
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, MergePreferred, RunnableGraph, Source}
import akka.stream.{ActorMaterializer, ClosedShape}
import akka.util.ByteString
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

object UdpInput extends App {
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val producerSettings = ProducerSettings(system, new StringSerializer, new StringSerializer)
  val socket = new InetSocketAddress("0.0.0.0", 40000)
  val udpBindFlow = Udp.bindFlow(socket)
  val udpResponseFlow = Flow[Datagram].map(toResponseDatagram)
  val kafkaSink = Flow[Datagram].map(toProducerRecord).to(Producer.plainSink(producerSettings))

  def toProducerRecord(datagram: Datagram) = new ProducerRecord[String, String]("udp", datagram.data.utf8String)
  def toResponseDatagram(datagram: Datagram) = Datagram(ByteString("OK"), datagram.remote)

  RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
    val merge = b.add(MergePreferred[Datagram](1))
    val bcast = b.add(Broadcast[Datagram](2))

    Source.asSubscriber ~> merge ~> udpBindFlow ~> bcast ~> kafkaSink
    merge.preferred <~ udpResponseFlow <~ bcast

    ClosedShape
  }).run
}
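For anyone reading the graph: Broadcast sends each inbound datagram both to the Kafka sink and to the response flow, and the response re-enters the UDP flow through the merge, so replies leave through the same bound socket and hence the same source port. Giving the feedback edge the preferred port of MergePreferred is, as I understand it, what keeps the cycle live against new upstream input.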
I wanted to use Alpakka to handle S3 upload and download with Akka Streams. However, I got stuck using the Source produced by the S3 client within Akka HTTP routes. The error message I get is:
[error] found : akka.stream.scaladsl.Source[akka.util.ByteString,_$1] where type _$1
[error] required: akka.http.scaladsl.marshalling.ToResponseMarshallable
[error] complete(source)
I assume that it is some annoyingly trivial thing, like a missing implicit import, but I have not been able to pinpoint what I am missing.
I've created a minimal example to illustrate the issue:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import akka.util.ByteString

import scala.concurrent.ExecutionContext

class Test {
  implicit val actorSystem: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  implicit val executionContext: ExecutionContext = actorSystem.dispatcher

  val route = (path("test") & get) {
    def source: Source[ByteString, _] = ??? // just assume that I am able to get that value
    complete(source) // here error happens
  }

  Http().bindAndHandle(route, "localhost", 8000)
}
Do you have any suggestions on what I can try? I am using:
libraryDependencies += "com.typesafe.akka" %% "akka-http" % "10.0.5"
You need to create an HttpEntity from the source, and give it a content-type.
complete(HttpEntity(ContentTypes.`application/json`, source))
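For completeness, a sketch of the question's route with the fix applied and the import it needs (this assumes the S3 object really is JSON; for arbitrary binary data ContentTypes.`application/octet-stream` would be the safer choice):

import akka.http.scaladsl.model.{ContentTypes, HttpEntity}

val route = (path("test") & get) {
  def source: Source[ByteString, _] = ??? // as in the question
  // Wrapping the source in a (chunked) HttpEntity gives it a ToResponseMarshaller.
  complete(HttpEntity(ContentTypes.`application/json`, source))
}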