How to publish a basic message to a RabbitMQ exchange using op-rabbit - Scala

I've been trying to get a very simple app working to publish messages to a RabbitMQ exchange using the Scala op-rabbit library to no avail.
I'm clearly doing something wrong, but the docs are very limited regarding message publishing.
I can get the actor to connect to RabbitMQ. However, when I publish a message, it never appears in RabbitMQ.
Here is the code I'm using to publish the message:
import akka.actor.{ActorRef, ActorSystem, Props}
import com.spingo.op_rabbit.{ConnectionParams, Message, RabbitControl}
import com.typesafe.config.{Config, ConfigFactory}

object RmqPublisher extends App {
  val actorSystem = ActorSystem("my-actor")
  private lazy val config: Config = ConfigFactory.load()

  val rabbitControl: ActorRef =
    actorSystem.actorOf(Props {
      new RabbitControl(
        ConnectionParams.fromConfig(config.getConfig("op-rabbit.rabbit"))
      )
    })

  rabbitControl ! Message.exchange("Test", "amq.direct", "my_routing_key")
}
Here is my config:
op-rabbit {
  topic-exchange-name = amq.direct
  channel-dispatcher = "op-rabbit.default-channel-dispatcher"
  default-channel-dispatcher {
    # Dispatcher is the name of the event-based dispatcher
    type = Dispatcher
    # What kind of ExecutionService to use
    executor = "fork-join-executor"
    # Configuration for the fork join pool
    fork-join-executor {
      # Min number of threads to cap factor-based parallelism number to
      parallelism-min = 2
      # Parallelism (threads) ... ceil(available processors * factor)
      parallelism-factor = 2.0
      # Max number of threads to cap factor-based parallelism number to
      parallelism-max = 4
    }
    # Throughput defines the maximum number of messages to be
    # processed per actor before the thread jumps to the next actor.
    # Set to 1 for as fair as possible.
    throughput = 100
  }
  rabbit {
    exchange-name = "amq.direct"
    routing-keys = "my_routing_key"
    virtual-host = "/"
    hosts = ["localhost"]
    username = "guest"
    password = "guest"
    port = 5672
    ssl = false
    connection-timeout = "5s"
    max-tps = 1000
  }
}
The logs suggest it is connected successfully as can be seen below:
[INFO] [08/04/2020 21:49:39.219] [my-actor-op-rabbit.default-channel-dispatcher-5] [akka://my-actor/user/$a/connection/$a] akka://my-actor/user/$a/connection/$a connected
[INFO] [08/04/2020 21:49:39.223] [my-actor-akka.actor.default-dispatcher-4] [akka://my-actor/user/$a/connection] akka://my-actor/user/$a/connection connected to amqp://guest@{localhost:5672}:5672//
[INFO] [08/04/2020 21:49:39.230] [my-actor-akka.actor.default-dispatcher-2] [akka://my-actor/user/$a/connection/confirmed-publisher-channel] akka://my-actor/user/$a/connection/confirmed-publisher-channel connected
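As a sanity check, here is a minimal sketch that publishes the same message with the plain RabbitMQ Java client, bypassing op-rabbit entirely (the connection settings mirror the op-rabbit.rabbit config above). If this message shows up, the broker, exchange and binding are fine and the problem is on the op-rabbit side:

import com.rabbitmq.client.ConnectionFactory

// Minimal direct publish for comparison; settings mirror op-rabbit.rabbit above.
val factory = new ConnectionFactory()
factory.setHost("localhost")
factory.setPort(5672)
factory.setUsername("guest")
factory.setPassword("guest")
factory.setVirtualHost("/")
val connection = factory.newConnection()
val channel = connection.createChannel()
channel.basicPublish("amq.direct", "my_routing_key", null, "Test".getBytes("UTF-8"))
channel.close()
connection.close()

Note that a message published to amq.direct is silently dropped unless a queue is bound to that exchange with the matching routing key, so the binding is worth checking in the management UI as well.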
Any ideas what I'm doing wrong?

Related

Grpc parallel Stream communication leads to error: AkkaNettyGrpcClientGraphStage

I have two services: one sends stream data and the second receives it, using akka-grpc for communication. When source data is provided, service one is called to process it and send it to service two via the gRPC client. Multiple instances of service one may run at the same time when multiple batches of source data arrive at once. In a long-running test of my application, I see the following error in service one:
ERROR i.a.g.application.actors.DbActor - GraphStage [akka.grpc.internal.AkkaNettyGrpcClientGraphStage$$anon$1@59d40805] terminated abruptly, caused by for example materializer or act
akka.stream.AbruptStageTerminationException: GraphStage [akka.grpc.internal.AkkaNettyGrpcClientGraphStage$$anon$1@59d40805] terminated abruptly, caused by for example materializer or actor system termination.
I never shut down the actor systems; I only stop actors after they finish their work. I also use proto3 and HTTP/2 for the request binding. Here is a piece of my code from service one:
//////////////////// server http binding /////////
val service: HttpRequest => Future[HttpResponse] =
  ServiceOneServiceHandler(new ServiceOneServiceImpl(system))

val bound = Http().bindAndHandleAsync(
  service,
  interface = config.getString("akka.grpc.server.interface"),
  port = config.getString("akka.grpc.server.default-http-port").toInt,
  connectionContext = HttpConnectionContext(http2 = Always))

bound.foreach { binding =>
  logger.info(s"gRPC server bound to: ${binding.localAddress}")
}
//////////////////// client /////////
def send2Server[A](data: ListBuffer[A]): Future[ResponseDTO] = {
  val reply = {
    val thisClient = interface.initialize()
    interface.call(client = thisClient, req = data.asInstanceOf[ListBuffer[StoreRequest]].toList)
  }
  reply
}
///////////////// grpc communication //////////
def send2GrpcServer[A](data: ListBuffer[A]): Unit = {
  val reply = send2Server(data)
  Await.ready(reply, Duration.Inf) onComplete {
    case util.Success(response: ResponseDTO) =>
      logger.info(s"got reply message: ${response.description}")
      // check response content and stop the application if the desired result is not found in the response
    case util.Failure(exp) =>
      // stop the application
      throw exp.getCause
  }
}
The error occurs exactly after waiting for the service two response:
Await.ready(reply, Duration.Inf)
I can't find the cause of the error.
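One way to surface the failure is to replace the unbounded Await with an explicit timeout, so a lost stream fails fast instead of blocking forever. A minimal sketch, where withTimeout is a hypothetical helper (it assumes the service's ActorSystem is in scope):

import java.util.concurrent.TimeoutException
import akka.actor.ActorSystem
import akka.pattern.after
import scala.concurrent.Future
import scala.concurrent.duration._

// Hypothetical helper: completes with a TimeoutException if the reply
// does not arrive within the given duration, instead of waiting forever.
def withTimeout[T](f: Future[T], d: FiniteDuration)(implicit system: ActorSystem): Future[T] = {
  import system.dispatcher
  Future.firstCompletedOf(Seq(
    f,
    after(d, system.scheduler)(Future.failed(new TimeoutException(s"no reply within $d")))
  ))
}

// Usage: withTimeout(send2Server(data), 30.seconds).onComplete { ... }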
UPDATE
I found that a stream sometimes goes missing: service one sends a stream and waits indefinitely for the response, while service two never receives anything to reply to, so service one keeps waiting. I still don't know why the stream is lost.
I also updated the akka-grpc plugin, but it made no difference:
addSbtPlugin("com.lightbend.akka.grpc" % "sbt-akka-grpc" % "0.6.1")
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.4")

Suppressing logs in Scala?

I have a Scala project that uses akka-http. It is connected to a Couchbase database, and I want to suppress some of the logs showing up in the terminal, such as:
16:52:00.970 [cb-computations-1] DEBUG com.couchbase.client.core.node.Node - [127.0.0.1/localhost]: Adding Service VIEW
My application.conf looks like this:
akka {
  log-config-on-start = off
  stdout-loglevel = INFO
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  default-dispatcher {
    fork-join-executor {
      parallelism-min = 8
    }
  }
  test {
    timefactor = 1
  }
}

http {
  host = "0.0.0.0"
  host = ${?HOST}
  port = 4000
  port = ${?PORT}
}
I've tried changing it but nothing works!
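The DEBUG line comes from the Couchbase client's own SLF4J logger, not from akka, so the akka.* settings in application.conf cannot silence it; the level has to be raised on the logging backend instead. A minimal sketch, assuming Logback is the SLF4J backend (the equivalent logback.xml entry would be <logger name="com.couchbase.client" level="INFO"/>):

import ch.qos.logback.classic.{Level, Logger}
import org.slf4j.LoggerFactory

// Raise the Couchbase client's logger from DEBUG to INFO once at startup.
LoggerFactory.getLogger("com.couchbase.client")
  .asInstanceOf[Logger]
  .setLevel(Level.INFO)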

Akka: Simple Akka Cluster not working.

We are creating a simple Akka cluster sample, following the Akka in Action book. We create 3 seed nodes, as shown in the following configuration:
akka {
  loglevel = INFO
  stdout-loglevel = INFO
  event-handlers = ["akka.event.Logging$DefaultLogger"]
  log-dead-letters = 0
  log-dead-letters-during-shutdown = off
  actor {
    provider = cluster
  }
  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      hostname = ${?HOST}
      port = ${?PORT}
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://words@127.0.0.1:2551",
      "akka.tcp://words@127.0.0.1:2552",
      "akka.tcp://words@127.0.0.1:2553"
    ]
    roles = ["seed"]
    role {
      seed.min-nr-of-members = 1
    }
  }
}
Actor system code:
object Launcher extends App {
  val seedConfig = ConfigFactory.load("seed")
  val seedSystem = ActorSystem("words", seedConfig)
}
When I start the actor system from one terminal, the seed node loads, but according to the logs port 2552 is used, while I expected 2551. After opening a second terminal and trying to run the actor system again, I am facing the following exception:
[ERROR] [03/10/2017 15:51:45.245] [words-akka.remote.default-remote-dispatcher-13] [NettyTransport(akka://words)] failed to bind to /127.0.0.1:2552, shutting down Netty transport
[error] (run-main-0) org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport.$anonfun$listen$1(NettyTransport.scala:417)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
This error is generated because port 2552 is already in use by the first terminal, but we are not sure why the same port is used the second time. Our assumption is that the configuration is not loaded. So, how can we fix this?
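Note that port = ${?PORT} only takes effect when PORT is set in the environment; otherwise the port stays at the classic remoting default of 2552, which would explain why both terminals try to bind 2552. A minimal sketch (hypothetical variant of the Launcher above) that takes the port from a system property, so each terminal can run e.g. sbt -DPORT=2551 run:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object Launcher extends App {
  // Each seed node gets its own port: -DPORT=2551, -DPORT=2552, -DPORT=2553
  val port = sys.props.getOrElse("PORT", "2551")
  val seedConfig = ConfigFactory
    .parseString(s"akka.remote.netty.tcp.port = $port")
    .withFallback(ConfigFactory.load("seed"))
  val seedSystem = ActorSystem("words", seedConfig)
}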

Akka Persistence and MongoDB: Persistence failure when replaying events for persistenceId

I am using akka-persistence with MongoDB, using the https://github.com/ironfish/akka-persistence-mongo/ plugin. When I run my code, I get the following error:
[ERROR] [11/19/2016 16:47:29.355] [transaction-system-akka.actor.default-dispatcher-5] [akka://transaction-system/user/$a] Persistence failure when replaying events for persistenceId [balanceTransactions]. Last known sequence number [0] (akka.persistence.RecoveryTimedOut)
I do not understand what this error means or how I can resolve it. The following is my reference.conf file:
akka {
  persistence {
    journal {
      plugin = "casbah-snapshot"
    }
    snapshot-store {
      plugin = "casbah-snapshot"
    }
  }
}

casbah-snapshot {
  mongo-url = "mongodb://localhost:27017/user.events"
  woption = 1
  wtimeout = 10000
  load-attempts = 5
}
After changing my reference.conf file, my example works successfully (the journal plugin was pointing at the snapshot plugin "casbah-snapshot" instead of a journal plugin). Below is the valid reference.conf file.
akka {
  stdout-loglevel = off // defaults to WARNING; can be disabled with off. The stdout-loglevel is only in effect during system startup and shutdown
  log-dead-letters-during-shutdown = off
  loglevel = info
  log-dead-letters = off
  log-config-on-start = off // Log the complete configuration at INFO level when the actor system is started
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  persistence {
    journal {
      plugin = "casbah-journal"
    }
  }
}

casbah-journal {
  mongo-url = "mongodb://localhost:27017/transaction.events"
  woption = 1
  wtimeout = 10000
  load-attempts = 5
}

How to add more than one additional conf file in application.conf in Scala

Hi, I am trying to create additional conf files and include them in application.conf, but when I try to fetch the values from the second conf file I get this error:
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'directUserReadMongoActor-dispatcher'
Here is my application.conf file:
include "DirectUserWriteMongoActor"
include "DirectUserReadMongoActor"
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "DEBUG"
}
Here is my first conf file, named "DirectUserWriteMongoActor":
akka {
  actor {
    ############################### Setting for a Router #####################################
    directUserWritwMongoActor-dispatcher {
      # Dispatcher is the name of the event-based dispatcher
      type = Dispatcher
      # What kind of ExecutionService to use
      executor = "fork-join-executor"
      # Configuration for the fork join pool
      fork-join-executor {
        # Min number of threads to cap factor-based parallelism number to
        parallelism-min = 2
        # Parallelism (threads) ... ceil(available processors * factor)
        parallelism-factor = 2.0
        # Max number of threads to cap factor-based parallelism number to
        parallelism-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 10
    } #end default-dispatcher
    ############################### Setting for a Router #####################################
    deployment {
      /directUserWritwMongoActorRouter {
        router = round-robin
        nr-of-instances = 5
      }
    } #end deployment
  } #end Actor
} #end Akka
And here is my second conf file, "DirectUserReadMongoActor":
akka {
  actor {
    ############################### Setting for a Router #####################################
    directUserReadMongoActor-dispatcher {
      # Dispatcher is the name of the event-based dispatcher
      type = Dispatcher
      # What kind of ExecutionService to use
      executor = "fork-join-executor"
      # Configuration for the fork join pool
      fork-join-executor {
        # Min number of threads to cap factor-based parallelism number to
        parallelism-min = 2
        # Parallelism (threads) ... ceil(available processors * factor)
        parallelism-factor = 2.0
        # Max number of threads to cap factor-based parallelism number to
        parallelism-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 10
    } #end default-dispatcher
    ############################### Setting for a Router #####################################
    deployment {
      /directUserReadMongoActorRouter {
        router = round-robin
        nr-of-instances = 5
      }
    } #end deployment
  } #end Actor
} #end Akka
And here is my code in a Scala object:
val config = ConfigFactory.load().getConfig("akka.actor")
println("throughput is "+config.getString("directUserWritwMongoActor-dispatcher.throughput"))
println("throughput is of read "+config.getString("directUserReadMongoActor-dispatcher.throughput"))
The problem is in the second println line: when I comment out the second println, the value of directUserWritwMongoActor-dispatcher.throughput is displayed successfully, but directUserReadMongoActor-dispatcher.throughput is not found. I want to print both values; please help me find where I am going wrong.
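A quick way to see whether the second include was picked up at all is to render the resolved config and guard the lookup. A minimal diagnostic sketch (the key names are the ones used above):

import com.typesafe.config.{ConfigFactory, ConfigRenderOptions}

// Render the fully resolved akka.actor subtree to see which dispatchers
// actually made it into the merged configuration.
val root = ConfigFactory.load()
val actorCfg = root.getConfig("akka.actor")
println(actorCfg.root().render(ConfigRenderOptions.concise()))

// Guard the lookup so a missing key is reported instead of throwing.
if (actorCfg.hasPath("directUserReadMongoActor-dispatcher.throughput"))
  println("throughput is of read " + actorCfg.getString("directUserReadMongoActor-dispatcher.throughput"))
else
  println("directUserReadMongoActor-dispatcher is missing - is the DirectUserReadMongoActor conf file on the classpath next to application.conf?")

If the key is absent from the rendered output, the include itself failed (for example, the file is not on the classpath or the akka.actor blocks did not merge), rather than the getString call being wrong.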