Lagom's embedded Kafka fails to start after killing Lagom process once - scala

I've been playing around with the lagom-scala-word-count Activator template and was forced to kill the application process. Since then the embedded Kafka doesn't work - this project and every new one I create is unusable. I've tried:
running sbt clean to delete the embedded Kafka data
creating a brand new project (from other Activator templates)
restarting my machine.
Despite this, I can't get Lagom to work. During the first launch I get the following lines in the log:
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 1 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 2 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 4 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
Subsequent launches result in:
[info] Starting Kafka
[info] Starting Cassandra
....Kafka Server closed unexpectedly.
....
[info] Cassandra server running at 127.0.0.1:4000
I've posted full server.log from lagom-internal-meta-project-kafka at https://gist.github.com/szymonbaranczyk/a93273537b42aafa45bf67446dd41adb.
Is it possible that some corrupted embedded Kafka data is stored globally on my PC and is causing this?

For future reference, as James mentioned in the comments, you have to delete the folder lagom-internal-meta-project-kafka in target/lagom-dynamic-projects. I don't know why it doesn't get deleted automatically.
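For anyone hitting the same problem, the cleanup can be done from the project root with a single command (a minimal sketch, assuming the default target layout mentioned above):
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka
On the next launch Lagom recreates the folder and starts a fresh embedded Kafka.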

Related

Kogito Kafka messages - Message Trigger information is not complete TriggerMetaData

Has anyone ever encountered this error when using Kogito (with Quarkus) to send Kafka messages from a flow (bpmn2 file) by throwing an "Intermediate Message" from the intermediate events?
[ERROR] com.package.xxxxxx.whenRunTestSupermarketSuccess Time elapsed: 0.006 s
<<< ERROR!
java.lang.RuntimeException:
java.lang.RuntimeException: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
[error]: Build step org.kie.kogito.quarkus.common.deployment.KogitoAssetsProcessor#generateSources
threw an exception: org.kie.kogito.codegen.process.ProcessCodegenException: Error while elaborating
process id = "supermarketFlow", packageName = "com.package.xxxxxx": Message Trigger information is
not complete TriggerMetaData [name=cartCreated, type=ProduceMessage, dataType=String, modelRef=null, ownerId=1]
at org.kie.kogito.codegen.process.ProcessCodegen.internalGenerate(ProcessCodegen.java:325)
at org.kie.kogito.codegen.core.AbstractGenerator.generate(AbstractGenerator.java:69)
I can sense that modelRef=null could be the issue, but as far as I've seen it is not mentioned in the official documentation. I suppose I should give it a value in the bpmn file, but I don't know where.
I'm trying to send Kafka messages after running some logic; for the moment nothing is consuming the messages.
Maybe the issue is related to this thread:
BPMN 2.0, kogito, triggering signal or message from another process

Unable to start Tomcat 9 with Flowable war - PUBLIC.ACT_DE_DATABASECHANGELOGLOCK error

I have downloaded Flowable from flowable.com/open-source and placed flowable-ui.war and flowable-rest.war in the Tomcat 9.0.52 webapps folder.
When I start the server, after some time I can see the line below repeating in the console, and then the server stops.
SELECT LOCKED FROM PUBLIC.ACT_DE_DATABASECHANGELOGLOCK WHERE ID=1
2021-08-13 20:45:05.818 INFO 8316 --- [ main] l.lockservice.StandardLockService : Waiting for changelog lock.
Why is this issue occurring? I have not made any changes.
The message
l.lockservice.StandardLockService : Waiting for changelog lock.
occurs when Flowable waits for the lock for the DB changes to be released.
If that doesn't happen, it means that some other node has acquired the lock and not released it properly. I would suggest manually deleting the lock from that table (ACT_DE_DATABASECHANGELOGLOCK).
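For example, a statement along these lines (table and column names taken from the query in the question; note or back up the row first if you're unsure) should release the stuck lock:
DELETE FROM PUBLIC.ACT_DE_DATABASECHANGELOGLOCK WHERE ID=1;
Then restart Tomcat so Flowable can acquire the lock cleanly.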
In addition to that, there is no need to run both flowable-ui.war and flowable-rest.war. flowable-rest.war is a subset of flowable-ui.war.

com.esotericsoftware.kryo.KryoException: java.lang.NullPointerException

I have a deserialization problem with a complex case class (see log).
PS: despite the warning, I haven't noticed any actual malfunction.
In my project I use:
akka 2.6.11 (akka cluster, akka streams, akka pubsub)
scala 2.12,
play 2.6
server log:
[warn] 2020-11-30 19:09:31,504 - akka.remote.artery.Deserializer - Failed to deserialize message from [akka://application#127.0.0.1:2551] with serializer id [123454323] and manifest []. com.esotericsoftware.kryo.KryoException: java.lang.NullPointerException
I also use Kamon in my project and I think this is the cause of the exception.
The project is on GitHub: https://github.com/ykhilaji/play-jobs-example/
The documentation is not up to date; to launch two instances, you must run:
node 1: ./conf/script/runNode1.sh
node 2: ./conf/script/runNode1.sh
to simulate a stress test : ab -m POST -k -c 250 -n 2000 http://localhost/yk/jobs/10000

MongooseIM 3.6.0 Postgres connection issue

Hi, I am new to MongooseIM. I am planning to set up MongooseIM locally with a Postgres database connection. I have installed MongooseIM 3.6.0 on an Ubuntu 14.04 machine, created a database in Postgres, and added the schema for it. Then I made the following changes in the mongooseim.cfg file.
{outgoing_pools, [
{rdbms, global, default, [{workers, 1}], [{server, {psql, "localhost", 5432, "mongooseim", "postgres", "password"}}]}
]}.
And this one:
{rdbms_server_type, pgsql}.
These are the changes I made to the default config file. When I restart the server it gives this error. The Postgres server is running and the user credentials are working.
2020-06-05 18:17:46.815 [info] <0.249.0> msg: "Starting reporters with []\n", options: []
2020-06-05 18:17:47.034 [notice] <0.130.0>#lager_file_backend:143 Changed loglevel of /var/log/mongooseim/ejabberd.log to info
2020-06-05 18:17:47.136 [info] <0.43.0> Application mnesia exited with reason: stopped
2020-06-05 18:17:47.621 [error] <0.593.0>#mongoose_rdbms_psql:connect CRASH REPORT Process <0.593.0> with 0 neighbours crashed with reason: call to undefined function mongoose_rdbms_psql:connect({psql,"server",5432,"mongooseim","postgres","password"}, 5000)
2020-06-05 18:17:47.621 [error] <0.592.0>#mongoose_rdbms_psql:connect Supervisor 'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup' had child 'wpool_pool-mongoose_wpool$rdbms$global$default-1' started with wpool_process:start_link('wpool_pool-mongoose_wpool$rdbms$global$default-1', mongoose_rdbms, [{server,{psql,"server",5432,"mongooseim","postgres","password"}}], [{queue_manager,'wpool_pool-mongoose_wpool$rdbms$global$default-queue-manager'},{time_checker,'wpool_pool-mongoose_wpool$rdbms$global$default-time-checker'},...]) at undefined exit with reason call to undefined function mongoose_rdbms_psql:connect({psql,"server",5432,"mongooseim","postgres","password"}, 5000) in context start_error
2020-06-05 18:17:47.622 [error] <0.589.0> Supervisor 'mongoose_wpool$rdbms$global$default' had child 'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup' started with wpool_process_sup:start_link('mongoose_wpool$rdbms$global$default', 'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup', [{queue_manager,'wpool_pool-mongoose_wpool$rdbms$global$default-queue-manager'},{time_checker,'wpool_pool-mongoose_wpool$rdbms$global$default-time-checker'},...]) at undefined exit with reason {shutdown,{failed_to_start_child,'wpool_pool-mongoose_wpool$rdbms$global$default-1',{undef,[{mongoose_rdbms_psql,connect,[{psql,"server",5432,"mongooseim","postgres","password"},5000],[]},{mongoose_rdbms,connect,4,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,668}]},{mongoose_rdbms,init,1,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,431}]},{wpool_process,init,1,[{file,"/root/deb/mongooseim/_build/..."},...]},...]}}} in context start_error
2020-06-05 18:17:47.622 [error] <0.583.0>#mongoose_wpool_mgr:handle_call:105 Pool not started: {error,{{shutdown,{failed_to_start_child,'wpool_pool-mongoose_wpool$rdbms$global$default-process-sup',{shutdown,{failed_to_start_child,'wpool_pool-mongoose_wpool$rdbms$global$default-1',{undef,[{mongoose_rdbms_psql,connect,[{psql,"server",5432,"mongooseim","postgres","password"},5000],[]},{mongoose_rdbms,connect,4,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,668}]},{mongoose_rdbms,init,1,[{file,"/root/deb/mongooseim/_build/prod/lib/mongooseim/src/rdbms/mongoose_rdbms.erl"},{line,431}]},{wpool_process,init,1,[{file,"/root/deb/mongooseim/_build/default/lib/worker_pool/src/wpool_process.erl"},{line,85}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}}}}},{child,undefined,'mongoose_wpool$rdbms$global$default',{wpool,start_pool,['mongoose_wpool$rdbms$global$default',[{worker,{mongoose_rdbms,[{server,{psql,"server",5432,"mongooseim","postgres","password"}}]}},{pool_sup_shutdown,infinity},{workers,1}]]},temporary,infinity,supervisor,[wpool]}}}
2020-06-05 18:17:47.678 [warning] <0.615.0>#service_mongoose_system_metrics:report_transparency:129 We are gathering the MongooseIM system's metrics to analyse the trends and needs of our users, improve MongooseIM, and know where to focus our efforts. For more info on how to customise, read, enable, and disable these metrics visit:
- MongooseIM docs -
https://mongooseim.readthedocs.io/en/latest/operation-and-maintenance/System-Metrics-Privacy-Policy/
- MongooseIM GitHub page - https://github.com/esl/MongooseIM
The last sent report is also written to a file /var/log/mongooseim/system_metrics_report.json
2020-06-05 18:17:48.404 [warning] <0.1289.0>#nkpacket_stun:get_stun_servers:231 Current NAT is changing ports!
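One thing worth checking, judging purely from the undef error in the log (mongoose_rdbms_psql:connect is reported as an undefined function): the driver atom inside the server tuple may need to be pgsql, matching the rdbms_server_type setting, rather than psql. A minimal sketch of that change, keeping the rest of the pool definition from the question:
{outgoing_pools, [
{rdbms, global, default, [{workers, 1}], [{server, {pgsql, "localhost", 5432, "mongooseim", "postgres", "password"}}]}
]}.
With pgsql, MongooseIM should resolve the mongoose_rdbms_pgsql backend module instead of the non-existent mongoose_rdbms_psql.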

How to fix ActiveMQ WildFly warning - Page file 000002707.page had incomplete records at position 373,401 at record number 9?

How to fix the following WildFly warnings:
2019-10-09 15:15:04,179 WARN [org.apache.activemq.artemis.core.server] (Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2#216f6b3e-1182524264)) AMQ222033: Page file 000002707.page had incomplete records at position 373,401 at record number 9
2019-10-09 15:15:05,182 WARN [org.apache.activemq.artemis.core.server] (Thread-1 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=3bea749a-88f7-11e7-b497-27b2839ef45c-1594512315-717665458)) AMQ222033: Page file 000002707.page had incomplete records at position 373,401 at record number 9
2019-10-09 15:15:05,185 WARN [org.apache.activemq.artemis.core.server] (Thread-11 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2#216f6b3e-1182524264)) AMQ222033: Page file 000002707.page had incomplete records at position 373,401 at record number 9
I only found one link on Google, which suggests a server crash, but I am not sure how to stop this: https://developer.jboss.org/thread/154232
The application contains an Apache Camel project which picks up 20,000 messages put on the queue; many of them are discarded and others are processed. I'm not sure if that is related.
The linked forum post does a fine job of explaining the likely reason why the page file was corrupted. To "fix" the problem I recommend you consume all the messages you can from the affected queue, stop the broker, and remove the corrupted page file.
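As a rough sketch of that last step (the path is an assumption; the paging directory depends on the broker configuration, and for a default WildFly setup it usually lives under the server data directory):
# stop WildFly / the embedded broker first
# back up the paging directory, then remove only the corrupted page file for the affected address
rm standalone/data/activemq/paging/<address-folder>/000002707.page
Keep in mind that removing a page file discards whatever messages it still contains, which is why draining the queue first is recommended.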