Akka Persistence and MongoDB: Persistence failure when replaying events for persistenceId

I am using akka-persistence with MongoDB via the https://github.com/ironfish/akka-persistence-mongo/ plugin. When I run my code, I get the following error:
[ERROR] [11/19/2016 16:47:29.355] [transaction-system-akka.actor.default-dispatcher-5] [akka://transaction-system/user/$a] Persistence failure when replaying events for persistenceId [balanceTransactions]. Last known sequence number [0] (akka.persistence.RecoveryTimedOut)
I do not understand what this error means or how I can resolve it. Here is my reference.conf file:
akka {
  persistence {
    journal {
      plugin = "casbah-snapshot"
    }
    snapshot-store {
      plugin = "casbah-snapshot"
    }
  }
}

casbah-snapshot {
  mongo-url = "mongodb://localhost:27017/user.events"
  woption = 1
  wtimeout = 10000
  load-attempts = 5
}

After changing my reference.conf file, my example works successfully. The original config pointed the journal plugin at "casbah-snapshot" (a snapshot store), so the journal never completed replay and recovery timed out. Below is the valid reference.conf file, which configures "casbah-journal" as the journal plugin.
akka {
  stdout-loglevel = off // only in effect during system startup and shutdown; defaults to WARNING, can be disabled with off
  log-dead-letters-during-shutdown = off
  loglevel = info
  log-dead-letters = off
  log-config-on-start = off // log the complete configuration at INFO level when the actor system is started
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  persistence {
    journal {
      plugin = "casbah-journal"
    }
  }
}

casbah-journal {
  mongo-url = "mongodb://localhost:27017/transaction.events"
  woption = 1
  wtimeout = 10000
  load-attempts = 5
}

Related

How to publish a basic message to RabbitMQ exchange using op-rabbit

I've been trying to get a very simple app working to publish messages to a RabbitMQ exchange using the Scala op-rabbit library to no avail.
I'm clearly doing something wrong, but the docs are very limited regarding message publishing.
I can get the actor to connect to RabbitMQ. However, upon publishing a message, it doesn't appear in Rabbit.
Here is the code I'm using to publish the message:
import akka.actor.{ActorRef, ActorSystem, Props}
import com.spingo.op_rabbit.{ConnectionParams, Message, RabbitControl}
import com.typesafe.config.{Config, ConfigFactory}

object RmqPublisher extends App {
  val actorSystem = ActorSystem("my-actor")
  private lazy val config: Config = ConfigFactory.load()

  val rabbitControl: ActorRef =
    actorSystem.actorOf(Props {
      new RabbitControl(
        ConnectionParams.fromConfig(config.getConfig("op-rabbit.rabbit"))
      )
    })

  rabbitControl ! Message.exchange("Test", "amq.direct", "my_routing_key")
}
Here is my config:
op-rabbit {
  topic-exchange-name = amq.direct
  channel-dispatcher = "op-rabbit.default-channel-dispatcher"
  default-channel-dispatcher {
    # Dispatcher is the name of the event-based dispatcher
    type = Dispatcher
    # What kind of ExecutionService to use
    executor = "fork-join-executor"
    # Configuration for the fork join pool
    fork-join-executor {
      # Min number of threads to cap factor-based parallelism number to
      parallelism-min = 2
      # Parallelism (threads) ... ceil(available processors * factor)
      parallelism-factor = 2.0
      # Max number of threads to cap factor-based parallelism number to
      parallelism-max = 4
    }
    # Throughput defines the maximum number of messages to be
    # processed per actor before the thread jumps to the next actor.
    # Set to 1 for as fair as possible.
    throughput = 100
  }
  rabbit {
    exchange-name = "amq.direct"
    routing-keys = "my_routing_key"
    virtual-host = "/"
    hosts = ["localhost"]
    username = "guest"
    password = "guest"
    port = 5672
    ssl = false
    connection-timeout = "5s"
    max-tps = 1000
  }
}
The logs suggest it is connected successfully as can be seen below:
[INFO] [08/04/2020 21:49:39.219] [my-actor-op-rabbit.default-channel-dispatcher-5] [akka://my-actor/user/$a/connection/$a] akka://my-actor/user/$a/connection/$a connected
[INFO] [08/04/2020 21:49:39.223] [my-actor-akka.actor.default-dispatcher-4] [akka://my-actor/user/$a/connection] akka://my-actor/user/$a/connection connected to amqp://guest#{localhost:5672}:5672//
[INFO] [08/04/2020 21:49:39.230] [my-actor-akka.actor.default-dispatcher-2] [akka://my-actor/user/$a/connection/confirmed-publisher-channel] akka://my-actor/user/$a/connection/confirmed-publisher-channel connected
Any ideas what I'm doing wrong?
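
Nothing in the snippet checks whether RabbitMQ actually accepted the message. One way to make a silent publish visible is op-rabbit's publisher-confirms support, where RabbitControl is asked for a confirmation rather than told fire-and-forget. A hedged sketch building on the code above, assuming the ask-based ConfirmResponse API described in the op-rabbit README:

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

implicit val timeout: Timeout = Timeout(5.seconds)
import actorSystem.dispatcher

// The future completes with an Ack, Nack, or Fail, so an unroutable
// or rejected publish no longer disappears silently.
val confirmed =
  (rabbitControl ? Message.exchange("Test", "amq.direct", "my_routing_key"))
    .mapTo[Message.ConfirmResponse]

confirmed.foreach(response => println(s"publish result: $response"))

Note also that a direct exchange such as amq.direct drops messages when no queue is bound with a matching routing key, which is worth ruling out before debugging the client side.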

Bridge startup error connecting to Zookeeper

I am getting the following error trying to set up the Bridge component using Zookeeper, following the steps described in https://docs.corda.r3.com/website/releases/3.1/bridge-configuration-file.html?highlight=zookeeper.
> java -jar corda-bridgeserver-3.1.jar
BridgeSupervisorService: active = false
[ERROR] 20:59:31-0300 [main-EventThread] imps.EnsembleTracker.processConfigData - Invalid config event received: {server.1=10.102.32.104:2888:3888:participant, version=100000000, server.3=10.102.32.108:2888:3888:participant, server.2=10.102.32.107:2888:3888:participant}
[ERROR] 20:59:32-0300 [main-EventThread] imps.EnsembleTracker.processConfigData - Invalid config event received: {server.1=10.102.32.104:2888:3888:participant, version=100000000, server.3=10.102.32.108:2888:3888:participant, server.2=10.102.32.107:2888:3888:participant}
My bridge.conf:
bridgeMode = BridgeInner

outboundConfig {
  artemisBrokerAddress = "10.102.32.97:10010"
  alternateArtemisBrokerAddresses = [ "10.102.32.98:10010" ]
}

bridgeInnerConfig {
  floatAddresses = ["10.102.32.103:12005", "10.102.32.105:12005"]
  expectedCertificateSubject = "CN=Float Local,O=Local Only,L=London,C=GB"
  customSSLConfiguration {
    keyStorePassword = "bridgepass"
    trustStorePassword = "trustpass"
    sslKeystore = "./bridgecerts/bridge.jks"
    trustStoreFile = "./bridgecerts/trust.jks"
    crlCheckSoftFail = true
  }
}

haConfig {
  haConnectionString = "zk://10.102.32.104:2181,zk://10.102.32.107:2181,zk://10.102.32.108:2181"
}

networkParametersPath = ./network-parameters
Any thoughts?
This error is harmless. It indicates that the Dockerised Zookeeper has bad IP addresses, so some of Apache Curator's checks fail when it is sent the dynamic topology. It does not invalidate the static configuration, and everything should work fine.
Note that as of Corda Enterprise 3.2, you must use the Zookeeper version compatible with the Apache Curator library, which is 3.5.3-beta, NOT the latest version.

Suppressing logs in Scala?

I have a Scala project that uses akka-http. It is connected to a Couchbase database, and I want to suppress some of the logs showing up in the terminal, such as:
16:52:00.970 [cb-computations-1] DEBUG com.couchbase.client.core.node.Node - [127.0.0.1/localhost]: Adding Service VIEW
My application.conf looks like this:
akka {
  log-config-on-start = off
  stdout-loglevel = INFO
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  default-dispatcher {
    fork-join-executor {
      parallelism-min = 8
    }
  }
  test {
    timefactor = 1
  }
}

http {
  host = "0.0.0.0"
  host = ${?HOST}
  port = 4000
  port = ${?PORT}
}
I've tried changing it but nothing works!
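
The Couchbase client logs through SLF4J rather than through the akka config block, so application.conf cannot silence it. A hedged sketch of a programmatic fix, assuming Logback is the SLF4J backend on the classpath (the logger name comes from the log line above):

import ch.qos.logback.classic.{Level, Logger}
import org.slf4j.LoggerFactory

// Raise the Couchbase client's log level from DEBUG to INFO so lines
// like "Adding Service VIEW" are suppressed.
LoggerFactory
  .getLogger("com.couchbase.client")
  .asInstanceOf[Logger]
  .setLevel(Level.INFO)

The equivalent declarative fix is a <logger name="com.couchbase.client" level="INFO"/> entry in logback.xml.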

Akka: Simple Akka Cluster not working

We are creating a simple Akka cluster sample, following the Akka in Action book. We create 3 seed nodes as shown in the following configuration:
akka {
  loglevel = INFO
  stdout-loglevel = INFO
  event-handlers = ["akka.event.Logging$DefaultLogger"]
  log-dead-letters = 0
  log-dead-letters-during-shutdown = off

  actor {
    provider = cluster
  }

  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      hostname = ${?HOST}
      port = ${?PORT}
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://words@127.0.0.1:2551",
      "akka.tcp://words@127.0.0.1:2552",
      "akka.tcp://words@127.0.0.1:2553"
    ]
    roles = ["seed"]
    role {
      seed.min-nr-of-members = 1
    }
  }
}
Actor system code:
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object Launcher extends App {
  val seedConfig = ConfigFactory.load("seed")
  val seedSystem = ActorSystem("words", seedConfig)
}
When I start the actor system from one terminal, the seed node loads, but according to the logs it binds port 2552, while I expected 2551. When I open a second terminal and run the actor system again, I get the following exception:
[ERROR] [03/10/2017 15:51:45.245] [words-akka.remote.default-remote-dispatcher-13] [NettyTransport(akka://words)] failed to bind to /127.0.0.1:2552, shutting down Netty transport
[error] (run-main-0) org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport.$anonfun$listen$1(NettyTransport.scala:417)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
This error occurs because the first terminal is already using port 2552, but I am not sure why the second run tries the same port. Our assumption is that the configuration is not being loaded. How can we fix this?
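
Since port = ${?PORT} only overrides the port when the PORT environment variable is set, every run falls back to Akka's default remote port, 2552, which is why both terminals try to bind it. A hedged sketch of one common workaround, layering a per-node port override on top of the loaded config (the command-line argument handling is illustrative):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object Launcher extends App {
  // Pass a different port per terminal, e.g. 2551, 2552, 2553.
  val port = args.headOption.getOrElse("2551")

  // parseString takes precedence; load("seed") supplies everything else.
  val seedConfig = ConfigFactory
    .parseString(s"akka.remote.netty.tcp.port = $port")
    .withFallback(ConfigFactory.load("seed"))

  val seedSystem = ActorSystem("words", seedConfig)
}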

Scala Akka router configuration not found in configuration file when it's really there

I would be grateful for some help with this Akka configuration question. This is probably something obvious, but I've been googling and looking at it all morning and have not yet come up with an answer.
My ActorSystem is created thus:
val config = ConfigFactory.load("akka.conf")
implicit val system = ActorSystem("fubar", config)
The classloader sees my configuration file since changes to logging levels take effect.
After creating the ActorSystem, my code does the following:
val mpr = system.actorOf(Props[ProcessingFlow])
In the ProcessingFlow the following:
val actorSink: Sink[PackedRecord, ActorRef] = Sink.actorSubscriber(Props[RCStreamReceiver])
In RCStreamReceiver the following:
val logWorkers = context.actorOf(FromConfig.props(Props[LogWorker]), "logWorker")
Yet, I get the following:
Configuration missing for router [akka://fubar/user/$a/StreamSupervisor-0/flow-3-2-actorSubscriberSink/logWorker] in 'akka.actor.deployment' section.
This is in spite of my config file looking like this:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "WARNING"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  logger-startup-timeout = 15s

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"

    debug {
      log-config-on-start = off
      autoreceive = off
      lifecycle = off
      unhandled = on
      event-stream = off
    }

    deployment {
      default-dispatcher {
        fork-join-executor {
          parallelism-min = 8
        }
      }
      /"*"/logWorker = {
        router = balancing-pool
        nr-of-instances = 5
        executor = "thread-pool-executor"
        thread-pool-executor {
          core-pool-size-min = 5
          core-pool-size-max = 5
        }
      }
      /"*"/editWorker = {
        router = balancing-pool
        nr-of-instances = 5
        routees.paths = ["/foo/editWorker"]
      }
      /"*"/unknownWorker = {
        router = balancing-pool
        nr-of-instances = 5
        routees.paths = ["/foo/unknownWorker"]
      }
    }
  }
}
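
For comparison, here is a minimal, self-contained sketch of config-driven router deployment that does resolve, using a hypothetical top-level worker. It isolates the FromConfig-plus-deployment mechanism from the question's stream-created hierarchy, where the runtime path (/$a/StreamSupervisor-0/flow-3-2-actorSubscriberSink/logWorker) has several intermediate segments that a single /"*"/ wildcard presumably fails to match:

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.FromConfig
import com.typesafe.config.ConfigFactory

// Hypothetical worker standing in for LogWorker.
class Worker extends Actor {
  def receive = { case msg => println(s"${self.path.name} got: $msg") }
}

object RouterConfigDemo extends App {
  // Deployment entry keyed by the actor's full path from the guardian.
  val config = ConfigFactory.parseString(
    """
    akka.actor.deployment {
      /logWorker {
        router = balancing-pool
        nr-of-instances = 5
      }
    }
    """)

  val system = ActorSystem("fubar", config)

  // FromConfig tells actorOf to look the router up in akka.actor.deployment;
  // a missing entry produces the same "Configuration missing for router" error.
  val router = system.actorOf(FromConfig.props(Props[Worker]), "logWorker")

  (1 to 10).foreach(router ! _)
}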