I'm developing a client-server application using Akka HTTP and Akka Streams.
The main idea is that the server must feed the HTTP response from an Akka Streams Source.
The problem is that the server accumulates some elements before sending the first message to the client, but I need the server to send each element as soon as it is produced by the source.
Code example:
case class Test(id: Long, txt: String, number: Double)
object MyJsonProtocol extends SprayJsonSupport with DefaultJsonProtocol {
implicit val exampleFormat = jsonFormat3(Test)
}
class BatchIterator(batchSize: Int, numberOfBatches: Int, pause: FiniteDuration) extends Iterator[Array[Test]]{
val range = Range(0, batchSize*numberOfBatches).toIterator
val numberOfBatchesIter = Range(0, numberOfBatches).toIterator
override def hasNext: Boolean = range.hasNext
override def next(): Array[Test] = {
println(s"Sleeping for ${pause.toMillis} ms")
Thread.sleep(pause.toMillis)
println(s"Taking $batchSize elements")
Range(0, batchSize).map{ _ =>
val count = range.next()
Test(count, s"Text$count", count*0.5)
}.toArray
}
}
object Server extends App {
import MyJsonProtocol._
implicit val jsonStreamingSupport: JsonEntityStreamingSupport = EntityStreamingSupport.json()
.withFramingRenderer(
Flow[ByteString].intersperse(ByteString(System.lineSeparator))
)
implicit val system = ActorSystem("api")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher
def fetchExamples(): Source[Array[Test], NotUsed] = Source.fromIterator(() => new BatchIterator(5, 5, 2 seconds))
val route =
path("example") {
complete(fetchExamples)
}
val bindingFuture = Http().bindAndHandle(route, "localhost", 9090)
println("Server started at localhost:9090")
StdIn.readLine()
bindingFuture.flatMap(_.unbind()).onComplete(_ ⇒ system.terminate())
}
Then, if I execute:
curl --no-buffer localhost:9090/example
I get all the elements at the same time instead of receiving an element every 2 seconds.
Any idea about how I can "force" the server to send every element as it comes out from the source?
Finally, I've found the solution. The problem was that the source runs synchronously, so the fix is simply to add an asynchronous boundary to it with .async:
complete(fetchExamples.async)
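For reference, a minimal sketch of where the boundary goes in the route above; .async detaches the source from the downstream marshalling stage, so each batch is pushed to the client as soon as the iterator produces it instead of being fused into one synchronous pipeline:

val route =
  path("example") {
    // the async boundary keeps the iterator's Thread.sleep from blocking
    // the stage that renders and flushes the chunked response
    complete(fetchExamples.async)
  }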
I'm trying to develop a gRPC server with Akka-gRPC and Slick. I also use Airframe for DI.
Source code is here
The issue is that the server fails whenever it receives a request while running as a gRPC server.
If it doesn't start as a gRPC server, but just reads resources from the database, the process succeeds.
What is the difference?
The following code reads objects from the database with Slick.
The ...Component objects are Airframe designs; they are used by the main module.
trait UserRepository {
def getUser: Future[Seq[Tables.UsersRow]]
}
class UserRepositoryImpl(val profile: JdbcProfile, val db: JdbcProfile#Backend#Database) extends UserRepository {
import profile.api._
def getUser: Future[Seq[Tables.UsersRow]] = db.run(Tables.Users.result)
}
trait UserResolveService {
private val repository = bind[UserRepository]
def getAll: Future[Seq[Tables.UsersRow]] =
repository.getUser
}
object userServiceComponent {
val design = newDesign
.bind[UserResolveService]
.toSingleton
}
The following is the gRPC server source code.
trait UserServiceImpl extends UserService {
private val userResolveService = bind[UserResolveService]
private val system: ActorSystem = bind[ActorSystem]
implicit val ec: ExecutionContextExecutor = system.dispatcher
override def getAll(in: GetUserListRequest): Future[GetUserListResponse] = {
userResolveService.getAll.map(us =>
GetUserListResponse(
us.map(u =>
myapp.proto.user.User(
1,
"t_horikoshi#example.com",
"t_horikoshi",
myapp.proto.user.User.UserRole.Admin
)
)
)
)
}
}
trait GRPCServer {
private val userServiceImpl = bind[UserServiceImpl]
implicit val system: ActorSystem = bind[ActorSystem]
def run(): Future[Http.ServerBinding] = {
implicit def ec: ExecutionContext = system.dispatcher
val service: PartialFunction[HttpRequest, Future[HttpResponse]] =
UserServiceHandler.partial(userServiceImpl)
val reflection: PartialFunction[HttpRequest, Future[HttpResponse]] =
ServerReflection.partial(List(UserService))
// Akka HTTP 10.1 requires adapters to accept the new actors APIs
val bound = Http().bindAndHandleAsync(
ServiceHandler.concatOrNotFound(service, reflection),
interface = "127.0.0.1",
port = 8080,
settings = ServerSettings(system)
)
bound.onComplete {
case Success(binding) =>
system.log.info(
s"gRPC Server online at http://${binding.localAddress.getHostName}:${binding.localAddress.getPort}/"
)
case Failure(ex) =>
system.log.error(ex, "occurred error")
}
bound
}
}
object grpcComponent {
val design = newDesign
.bind[UserServiceImpl]
.toSingleton
.bind[GRPCServer]
.toSingleton
}
The following is the main module.
object Main extends App {
val conf = ConfigFactory
.parseString("akka.http.server.preview.enable-http2 = on")
.withFallback(ConfigFactory.defaultApplication())
val system = ActorSystem("GRPCServer", conf)
val dbConfig: DatabaseConfig[JdbcProfile] =
DatabaseConfig.forConfig[JdbcProfile](path = "mydb")
val design = newDesign
.bind[JdbcProfile]
.toInstance(dbConfig.profile)
.bind[JdbcProfile#Backend#Database]
.toInstance(dbConfig.db)
.bind[UserRepository]
.to[UserRepositoryImpl]
.bind[ActorSystem]
.toInstance(system)
.add(userServiceComponent.design)
.add(grpcComponent.design)
design.withSession(s =>
// Await.result(s.build[UserResolveService].getUser, Duration.Inf)) // success
// Await.result(s.build[UserServiceImpl].getAll(GetUserListRequest()), Duration.Inf)) // success
s.build[GRPCServer].run() // causes IllegalStateException when a request is received
)
}
When UserResolveService and UserServiceImpl are called directly, loading the objects from the database succeeds.
However, when running the application as a gRPC server, an error occurs when a request is received.
I've been thinking about this all day but couldn't resolve it.
Could you please help me resolve it?
I resolved it. Because run() returns a Future immediately while the server keeps serving requests asynchronously, design.withSession closes the Airframe session before any request arrives; the gRPC server has to be started from a session created with newSession so the session stays open.
I fixed it like this:
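A rough sketch of the change (the exact session handling here is my assumption, not the poster's code):

// keep a long-lived Airframe session instead of design.withSession,
// which shuts the session down as soon as the block returns
val session = design.newSession
val binding = session.build[GRPCServer].run()
// shut the session down only when the server itself terminates, e.g.
// binding.flatMap(_.unbind()).onComplete(_ => session.shutdown)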
I'm trying to wrap my head around Akka Streams and the way to handle WebSockets, but some things aren't quite clear to me.
For starters, I'm trying to accomplish one-way communication from some client to the server and communication between the same server and some other client.
client1 -----> Server <------> client2
I was looking at the example provided here.
The resulting code looks something like this:
1) starting with the controller
class Test @Inject()(@Named("connManager") myConnectionsManager: ActorRef, cc: ControllerComponents)
(implicit val actorSystem: ActorSystem,
val mat: Materializer,
implicit val executionContext: ExecutionContext)
extends AbstractController(cc) {
private def wsFutureFlow(id: String): Future[Flow[String, String, NotUsed]] = {
implicit val timeout: Timeout = Timeout(5.seconds)
val future = myConnectionsManager ? CreateRemote(id)
val futureFlow = future.mapTo[Flow[String, String, NotUsed]]
futureFlow
}
private def wsFutureLocalFlow: Future[Flow[String, String, NotUsed]] = {
implicit val timeout: Timeout = Timeout(5.seconds)
val future = myConnectionsManager ? CreateLocal
val futureFlow = future.mapTo[Flow[String, String, NotUsed]]
futureFlow
}
def ws: WebSocket = WebSocket.acceptOrResult[String, String] {
rh =>
wsFutureFlow(rh.id.toString).map { flow =>
Right(flow)
}
}
def wsLocal: WebSocket = WebSocket.acceptOrResult[String, String] {
_ =>
wsFutureLocalFlow.map { flow =>
Right(flow)
}
}
}
As for the connection manager actor, that would be the equivalent of the UserParentActor from the example.
class MyConnectionsManager @Inject()(childFactory: MyTestActor.Factory)
(implicit ec: ExecutionContext, mat: Materializer) extends Actor with InjectedActorSupport {
import akka.pattern.{ask, pipe}
implicit val timeout: Timeout = Timeout(2.seconds)
override def receive: Receive = {
case CreateRemote(x) =>
val child = injectedChild(childFactory(), s"remote-$x")
context.watch(child)
privatePipe(child)
case CreateLocal =>
val child = injectedChild(childFactory(), "localConnection")
context.become(onLocalConnected(child))
privatePipe(child)
case Terminated(child) =>
println(s"${child.path.name} terminated...")
}
def onLocalConnected(local: ActorRef): Receive = {
case CreateRemote(x) =>
val child = injectedChild(childFactory(), s"remote-$x")
context.watch(child)
privatePipe(child)
case x: SendToLocal => local ! x
}
private def privatePipe(child: ActorRef) = {
val future = (child ? Init).mapTo[Flow[String, String, _]]
pipe(future) to sender()
() // compiler throws exception without this: non-unit value discarded
}
}
And the MyTestActor looks like this:
class MyTestActor @Inject()(implicit mat: Materializer, ec: ExecutionContext) extends Actor {
val source: Source[String, Sink[String, NotUsed]] = MergeHub.source[String]
.recoverWithRetries(-1, { case _: Exception => Source.empty })
private val jsonSink: Sink[String, Future[Done]] = Sink.foreach { json =>
println(s"${self.path.name} got message: $json")
context.parent ! SendToLocal(json)
}
private lazy val websocketFlow: Flow[String, String, NotUsed] = {
Flow.fromSinkAndSourceCoupled(jsonSink, source).watchTermination() { (_, termination) =>
val name = self.path.name
termination.foreach(_ => context.stop(self))
NotUsed
}
}
def receive: Receive = {
case Init =>
println(s"${self.path.name}: INIT")
sender ! websocketFlow
case SendToLocal(x) =>
println(s"Local got from remote: $x")
case msg: String => sender ! s"Actor got message: $msg"
}
}
What I don't understand, apart from how sinks and sources actually connect to the actors, is the following. When I start up my system, I send a few messages to the actor. However, after I close the connection to an actor named remote, and continue sending messages to the one called "localConnection", the messages get sent to DeadLetters:
[info] Done compiling.
[info] 15:49:20.606 - play.api.Play - Application started (Dev)
localConnection: INIT
localConnection got message: test data
Local got from remote: test data
localConnection got message: hello world
Local got from remote: hello world
remote-133: INIT
remote-133 got message: hello world
Local got from remote: hello world
remote-133 got message: hello from remote
Local got from remote: hello from remote
[error] 15:50:24.449 - a.a.OneForOneStrategy - Monitored actor [Actor[akka://application/user/connManager/remote-133#-998945083]] terminated
akka.actor.DeathPactException: Monitored actor [Actor[akka://application/user/connManager/remote-133#-998945083]] terminated
deadLetters got message: hello local
I assume this is because of the exception thrown... Can anyone explain to me why the message gets sent to DeadLetters?
Apart from that, I would like to know why I keep getting a compiler error without the "()" returned at the end of privatePipe.
Also, should I be doing anything differently?
I realised that the exception was being thrown because I forgot to handle the Terminated message in the new behaviour of the MyConnectionsManager actor.
def onLocalConnected(local: ActorRef): Receive = {
case CreateRemote(x) =>
val child = injectedChild(childFactory(), s"remote-$x")
context.watch(child)
privatePipe(child)
case Terminated(child) => println(s"${child.path.name} terminated...")
case x: SendToLocal => local ! x
}
It seems to be working now.
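As a small follow-up suggestion (not part of the original fix): the duplicated CreateRemote/Terminated handling in the two behaviours can be factored into a shared Receive and composed with orElse, for example:

private val common: Receive = {
  case CreateRemote(x) =>
    val child = injectedChild(childFactory(), s"remote-$x")
    context.watch(child)
    privatePipe(child)
  case Terminated(child) =>
    println(s"${child.path.name} terminated...")
}

override def receive: Receive = common orElse {
  case CreateLocal =>
    val child = injectedChild(childFactory(), "localConnection")
    context.become(onLocalConnected(child))
    privatePipe(child)
}

def onLocalConnected(local: ActorRef): Receive = common orElse {
  case x: SendToLocal => local ! x
}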
I am able to make a Play app join an existing Akka cluster, make ask calls to an actor running on another ActorSystem, and get results back. But I am having trouble with a couple of things.
I see the message below in the logs when Play tries to join the cluster. I suspect that Play is starting its own Akka cluster? I am really not sure what it means.
Could not register Cluster JMX MBean with name=akka:type=Cluster as it is already registered. If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
Right now I am re-initializing the ActorSystem every time a request comes to the controller, which I know is not the right way to do it. I am new to Scala, Akka, and Play, and I'm having difficulty figuring out how to make it a singleton service and inject it into my controller.
So far I have got this:
class DataRouter @Inject()(controller: DataController) extends SimpleRouter {
val prefix = "/v1/data"
override def routes: Routes = {
case GET(p"/ip/$datatype") =>
controller.get(datatype)
case POST(p"/ip/$datatype") =>
controller.process
}
}
case class RangeInput(start: String, end: String)
object RangeInput {
implicit val implicitWrites = new Writes[RangeInput] {
def writes(range: RangeInput): JsValue = {
Json.obj(
"start" -> range.start,
"end" -> range.end
)
}
}
}
@Singleton
class DataController @Inject()(cc: ControllerComponents)(implicit exec: ExecutionContext) extends AbstractController(cc) {
private val logger = Logger("play")
implicit val timeout: Timeout = 115.seconds
private val form: Form[RangeInput] = {
import play.api.data.Forms._
Form(
mapping(
"start" -> nonEmptyText,
"end" -> text
)(RangeInput.apply)(RangeInput.unapply)
)
}
def get(datatype: String): Action[AnyContent] = Action.async { implicit request =>
logger.info(s"show: datatype = $datatype")
logger.trace(s"show: datatype = $datatype")
//val r: Future[Result] = Future.successful(Ok("hello " + datatype ))
val config = ConfigFactory.parseString("akka.cluster.roles = [gateway]").
withFallback(ConfigFactory.load())
implicit val system: ActorSystem = ActorSystem(SharedConstants.Actor_System_Name, config)
implicit val materializer: ActorMaterializer = ActorMaterializer()
implicit val executionContext = system.dispatcher
val ipData = system.actorOf(
ClusterRouterGroup(RandomGroup(Nil), ClusterRouterGroupSettings(
totalInstances = 100, routeesPaths = List("/user/getipdata"),
allowLocalRoutees = false, useRoles = Set("static"))).props())
val res: Future[String] = (ipData ? datatype).mapTo[String]
//val res: Future[List[Map[String, String]]] = (ipData ? datatype).mapTo[List[Map[String,String]]]
val futureResult: Future[Result] = res.map { list =>
Ok(Json.toJson(list))
}
futureResult
}
def process: Action[AnyContent] = Action.async { implicit request =>
logger.trace("process: ")
processJsonPost()
}
private def processJsonPost[A]()(implicit request: Request[A]): Future[Result] = {
logger.debug(request.toString())
def failure(badForm: Form[RangeInput]) = {
Future.successful(BadRequest("Test"))
}
def success(input: RangeInput) = {
val r: Future[Result] = Future.successful(Ok("hello " + Json.toJson(input)))
r
}
form.bindFromRequest().fold(failure, success)
}
}
akka {
log-dead-letters = off
log-dead-letters-during-shutdown = off
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
enabled-transports = ["akka.remote.netty.tcp"]
netty.tcp {
hostname = ${myhost}
port = 0
}
}
cluster {
seed-nodes = [
"akka.tcp://MyCluster@localhost:2541"
]
seed-nodes = ${?SEEDNODE}
}
}
Answers
Refer to this URL: https://www.playframework.com/documentation/2.6.x/ScalaAkka#Built-in-actor-system-name. It has details about configuring the actor system name.
You should not initialize an actor system on every request; use the Play-injected actor system in the Application class. If you wish to customize the actor system, you should do it by modifying the Akka configuration. For that,
you should create your own ApplicationLoader extending GuiceApplicationLoader and override the builder method to provide your own Akka configuration. The rest is taken care of by Play, such as injecting this actor system into the Application for you.
Refer to the URL below:
https://www.playframework.com/documentation/2.6.x/ScalaDependencyInjection#Advanced:-Extending-the-GuiceApplicationLoader
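As an illustrative sketch (the names and wiring are my assumptions, not part of the original answer), the controller can take Play's injected ActorSystem and create the cluster router group once, instead of building an ActorSystem per request:

@Singleton
class DataController @Inject()(system: ActorSystem, cc: ControllerComponents)
                              (implicit exec: ExecutionContext) extends AbstractController(cc) {
  import akka.pattern.ask
  implicit val timeout: Timeout = 115.seconds

  // created once when the controller singleton is instantiated, not per request
  private val ipData = system.actorOf(
    ClusterRouterGroup(RandomGroup(Nil), ClusterRouterGroupSettings(
      totalInstances = 100, routeesPaths = List("/user/getipdata"),
      allowLocalRoutees = false, useRoles = Set("static"))).props())

  def get(datatype: String): Action[AnyContent] = Action.async { implicit request =>
    (ipData ? datatype).mapTo[String].map(result => Ok(Json.toJson(result)))
  }
}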
I'm using some sample Scala code to make a server that receives a file over WebSocket, stores the file temporarily, runs a bash script on it, and then returns stdout as a TextMessage.
Sample code was taken from this github project.
I edited the code slightly within echoService so that it runs another function that processes the temporary file.
object WebServer {
def main(args: Array[String]) {
implicit val actorSystem = ActorSystem("akka-system")
implicit val flowMaterializer = ActorMaterializer()
val interface = "localhost"
val port = 3000
import Directives._
val route = get {
pathEndOrSingleSlash {
complete("Welcome to websocket server")
}
} ~
path("upload") {
handleWebSocketMessages(echoService)
}
val binding = Http().bindAndHandle(route, interface, port)
println(s"Server is now online at http://$interface:$port\nPress RETURN to stop...")
StdIn.readLine()
binding.flatMap(_.unbind()).onComplete(_ => actorSystem.shutdown())
println("Server is down...")
}
implicit val actorSystem = ActorSystem("akka-system")
implicit val flowMaterializer = ActorMaterializer()
val echoService: Flow[Message, Message, _] = Flow[Message].mapConcat {
case BinaryMessage.Strict(msg) => {
val decoded: Array[Byte] = msg.toArray
val imgOutFile = new File("/tmp/" + "filename")
val fileOuputStream = new FileOutputStream(imgOutFile)
fileOuputStream.write(decoded)
fileOuputStream.close()
TextMessage(analyze(imgOutFile))
}
case BinaryMessage.Streamed(stream) => {
stream
.limit(Int.MaxValue) // Max frames we are willing to wait for
.completionTimeout(50 seconds) // Max time until last frame
.runFold(ByteString(""))(_ ++ _) // Merges the frames
.flatMap { (msg: ByteString) =>
val decoded: Array[Byte] = msg.toArray
val imgOutFile = new File("/tmp/" + "filename")
val fileOuputStream = new FileOutputStream(imgOutFile)
fileOuputStream.write(decoded)
fileOuputStream.close()
Future(Source.single(""))
}
TextMessage(analyze(imgOutFile))
}
private def analyze(imgfile: File): String = {
val p = Runtime.getRuntime.exec(Array("./run-vision.sh", imgfile.toString))
val br = new BufferedReader(new InputStreamReader(p.getInputStream, StandardCharsets.UTF_8))
try {
val result = Stream
.continually(br.readLine())
.takeWhile(_ ne null)
.mkString
result
} finally {
br.close()
}
}
}
}
During testing using Dark WebSocket Terminal, the BinaryMessage.Strict case works fine.
Problem: However, the BinaryMessage.Streamed case doesn't finish writing the file before running the analyze function, resulting in a blank response from the server.
I'm trying to wrap my head around how Futures are being used here with the Flows in Akka HTTP, but I'm not having much luck beyond trying to get through all the official documentation.
Currently, .mapAsync seems promising, or basically finding a way to chain futures.
I'd really appreciate some insight.
Yes, mapAsync will help you here. It is a combinator that executes Futures (potentially in parallel) in your stream and presents their results on the output side.
In your case, to make things homogeneous and keep the type checker happy, you'll need to wrap the result of the Strict case in a Future.successful.
A quick fix for your code could be:
val echoService: Flow[Message, Message, _] = Flow[Message].mapAsync(parallelism = 5) {
case BinaryMessage.Strict(msg) => {
val decoded: Array[Byte] = msg.toArray
val imgOutFile = new File("/tmp/" + "filename")
val fileOuputStream = new FileOutputStream(imgOutFile)
fileOuputStream.write(decoded)
fileOuputStream.close()
Future.successful(TextMessage(analyze(imgOutFile)))
}
case BinaryMessage.Streamed(stream) =>
stream
.limit(Int.MaxValue) // Max frames we are willing to wait for
.completionTimeout(50 seconds) // Max time until last frame
.runFold(ByteString(""))(_ ++ _) // Merges the frames
.flatMap { (msg: ByteString) =>
val decoded: Array[Byte] = msg.toArray
val imgOutFile = new File("/tmp/" + "filename")
val fileOuputStream = new FileOutputStream(imgOutFile)
fileOuputStream.write(decoded)
fileOuputStream.close()
Future.successful(TextMessage(analyze(imgOutFile)))
}
}
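As a further optional refinement (not part of the answer above): analyze still runs synchronously on the stream's thread in both branches. Wrapping the blocking work in Future { ... } (ideally on a dedicated blocking dispatcher) lets mapAsync actually overlap requests; a sketch for the Strict branch, assuming an implicit ExecutionContext is in scope:

case BinaryMessage.Strict(msg) =>
  Future {
    // runs the file write and the external process off the stream's thread
    val imgOutFile = new File("/tmp/" + "filename")
    val fileOuputStream = new FileOutputStream(imgOutFile)
    try fileOuputStream.write(msg.toArray) finally fileOuputStream.close()
    TextMessage(analyze(imgOutFile))
  }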
This is my code for the server, written using the Akka framework:
case class Sentence(data: String)
case class RawTriples(triples: List[String])
trait Protocols extends DefaultJsonProtocol {
implicit val sentenceRequestFormat = jsonFormat1(Sentence)
implicit val rawTriplesFormat = jsonFormat1(RawTriples)
}
trait Service extends Protocols {
implicit val system: ActorSystem
implicit def executor: ExecutionContextExecutor
implicit val materializer: Materializer
val openie = new OpenIE
def config: Config
val logger: LoggingAdapter
lazy val ipApiConnectionFlow: Flow[HttpRequest, HttpResponse, Any] =
Http().outgoingConnection(config.getString("services.ip-api.host"), config.getInt("services.ip-api.port"))
def ipApiRequest(request: HttpRequest): Future[HttpResponse] = Source.single(request).via(ipApiConnectionFlow).runWith(Sink.head)
val routes = {
logRequestResult("akka-http-microservice") {
pathPrefix("openie") {
post {
decodeRequest{
entity(as[Sentence]){ sentence =>
complete {
var rawTriples = openie.extract(sentence.data)
val resp: MutableList[String] = MutableList()
for(rtrip <- rawTriples){
resp += (rtrip.toString())
}
val response: List[String] = resp.toList
println(response)
response
}
}
}
}
}
}
}
}
object AkkaHttpMicroservice extends App with Service {
override implicit val system = ActorSystem()
override implicit val executor = system.dispatcher
override implicit val materializer = ActorMaterializer()
override val config = ConfigFactory.load()
override val logger = Logging(system, getClass)
Http().bindAndHandle(routes, config.getString("http.interface"), config.getInt("http.port"))
}
The server accepts a POST request containing a sentence and returns a JSON array in return. It works fine, but if I make requests to it too frequently from parallelized code, it returns 500 Internal Server Error. I want to know whether there is any parameter I can set on the server to avoid that (the number of threads ready to accept requests, etc.).
In the log files, the error is logged as:
[ERROR] [05/31/2017 11:48:38.110] [default-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl(default)]
Error during processing of request: 'null'. Completing with 500 Internal Server Error response.
The doc on the bindAndHandle method shows what you want:
/**
* Convenience method which starts a new HTTP server at the given endpoint and uses the given `handler`
* [[akka.stream.scaladsl.Flow]] for processing all incoming connections.
*
* The number of concurrently accepted connections can be configured by overriding
* the `akka.http.server.max-connections` setting. Please see the documentation in the reference.conf for more
* information about what kind of guarantees to expect.
*
* To configure additional settings for a server started using this method,
* use the `akka.http.server` config section or pass in a [[akka.http.scaladsl.settings.ServerSettings]] explicitly.
*/
akka.http.server.max-connections is probably what you want. As the doc suggests, you can also dig deeper into the akka.http.server config section.
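For instance, a minimal sketch (the value 2048 is illustrative) of raising that limit by overriding the config the ActorSystem is created with, mirroring the ConfigFactory pattern already used elsewhere in this post:

// in AkkaHttpMicroservice, build the system from a config that overrides the limit
val customConf = ConfigFactory
  .parseString("akka.http.server.max-connections = 2048")
  .withFallback(ConfigFactory.load())
override implicit val system = ActorSystem("default", customConf)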
Add the following to the application.conf file:
akka.http {
server {
server-header = akka-http/${akka.http.version}
idle-timeout = infinite
request-timeout = infinite
}
}