We have two servers that cluster just fine when we do not deploy any EAR files.
Server 1:
2015-03-26 08:23:00,339 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) org.jboss.messaging.core.impl.postoffice.GroupMember$ControlMembershipListener#1f15227c got new view [10.200.51.14:62610|1] [10.200.51.14:62610, 10.200.51.16:58992], old view is [10.200.51.14:62610|0] [10.200.51.14:62610]
2015-03-26 08:23:00,339 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) I am (10.200.51.14:62610)
2015-03-26 08:23:00,339 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) New Members : 1 ([10.200.51.16:58992])
2015-03-26 08:23:00,355 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) All Members : 2 ([10.200.51.14:62610, 10.200.51.16:58992])
2015-03-26 08:24:32,140 INFO [org.jboss.cache.RPCManagerImpl] (Incoming-16,10.200.51.14:62610) Received new cluster view: [10.200.51.14:62610|2] [10.200.51.14:62610]
Server 2:
2015-03-26 08:23:00,011 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (main) All Members : 2 ([10.200.51.14:62610, 10.200.51.16:58992])
Multicast is successfully configured and working.
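(For reference, this is roughly how we sanity-checked multicast between the two hosts. It is only a sketch: the group address and port are placeholders, not our actual JGroups settings, and JGroups itself ships test programs such as org.jgroups.tests.McastSenderTest / McastReceiverTest that serve the same purpose.)

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal multicast sanity check. Run without arguments on one host to listen,
// then run with the argument "send" on the other host.
public class McastCheck {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // placeholder group address
        int port = 45566;                                       // placeholder port
        if (args.length > 0 && "send".equals(args[0])) {
            DatagramSocket sender = new DatagramSocket();
            byte[] data = "hello".getBytes("UTF-8");
            sender.send(new DatagramPacket(data, data.length, group, port));
            sender.close();
        } else {
            MulticastSocket receiver = new MulticastSocket(port);
            receiver.joinGroup(group);
            DatagramPacket packet = new DatagramPacket(new byte[256], 256);
            receiver.receive(packet); // blocks until a datagram arrives
            System.out.println("received: " + new String(packet.getData(), 0, packet.getLength(), "UTF-8"));
            receiver.close();
        }
    }
}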
Clustering does NOT occur when the EAR files are included at JBoss startup.
We see NAKACK messages on Server 1 when Server 2 starts, but clustering does not complete.
2015-03-26 14:28:41,105 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.14:7900) sender 10.200.51.16:7900 not found in xmit_table
2015-03-26 14:28:41,105 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.14:7900) range is null
We see multiple NAKACK messages on Server 2 when it starts:
2015-03-26 14:27:47,488 WARN [org.jgroups.protocols.pbcast.NAKACK] (OOB-4,10.200.51.16:50648) 10.200.51.16:50648] discarded message from non-member 10.200.51.14:59139, my view is [10.200.51.16:50648|0] [10.200.51.16:50648]
2015-03-26 14:27:47,675 WARN [org.jgroups.protocols.pbcast.NAKACK] (OOB-4,10.200.51.16:50648) 10.200.51.16:50648] discarded message from non-member 10.200.51.14:59139, my view is [10.200.51.16:50648|0] [10.200.51.16:50648]
and
2015-03-26 14:28:34,038 WARN [org.jgroups.protocols.pbcast.NAKACK] (Incoming-4,10.200.51.16:50648) 10.200.51.16:50648] discarded message from non-member 10.200.51.14:59139, my view is [10.200.51.16:50648|0] [10.200.51.16:50648]
2015-03-26 14:28:34,038 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-9,10.200.51.16:50648) sender 10.200.51.14:59139 not found in xmit_table
2015-03-26 14:28:34,038 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-9,10.200.51.16:50648) range is null
2015-03-26 14:28:40,356 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.16:7900) sender 10.200.51.14:7900 not found in xmit_table
2015-03-26 14:28:40,356 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.16:7900) range is null
We also see a JBoss Messaging error on Server 2 after the EAR file completes its deployment:
2015-03-26 14:32:38,557 ERROR [org.jboss.messaging.util.ExceptionUtil] (WorkManager(2)-7) SessionEndpoint[ej-53z3kq7i-1-gg3wjq7i-1d1bnk-gf1k5a] createConsumerDelegate [4k-dcm4kq7i-1-gg3wjq7i-1d1bnk-gf1k5a]
java.lang.IllegalStateException: org.jboss.messaging.core.impl.postoffice.GroupMember#692dba20 response not received from 10.200.51.14:59139 - there may be others
at org.jboss.messaging.core.impl.postoffice.GroupMember.multicastControl(GroupMember.java:262)
The application is CA Identity Manager R12.6 SP4, with two EAR files involved.
We have discussed this clustering issue with CA, and they indicate something is misconfigured within JBoss AS.
Does anyone have any idea how we might troubleshoot and resolve this problem?
Related
I have followed this demo:
https://github.com/confluentinc/cp-demo
https://github.com/confluentinc/cp-demo/blob/7.0.1-post/docker-compose.yml
I replaced KSQL_BOOTSTRAP_SERVERS with my own Kafka server, roughly as in the snippet below, and get the error shown after it. What could be the cause of this issue?
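The only change from the stock docker-compose.yml is the bootstrap servers value (the service name is from memory and the broker address is a placeholder for my own cluster):

  ksqldb-server:
    environment:
      KSQL_BOOTSTRAP_SERVERS: "my-kafka-host:9092"   # my broker instead of the demo's internal kafka1/kafka2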
[2022-02-08 11:03:09,095] INFO Logging initialized #838ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2022-02-08 11:03:09,130] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2022-02-08 11:03:09,204] INFO Adding listener with HTTP/2: https://0.0.0.0:8085 (io.confluent.rest.ApplicationServer)
[2022-02-08 11:03:09,491] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: No listener configured with requested scheme http
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.getSchemeAndPortForIdentity(KafkaSchemaRegistry.java:303)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:148)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
How do I fix the following WildFly warnings:
2019-10-09 15:15:04,179 WARN [org.apache.activemq.artemis.core.server] (Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2#216f6b3e-1182524264)) AMQ222033: Page file 000002707.page had incomplete records at position 373,401 at record number 9
2019-10-09 15:15:05,182 WARN [org.apache.activemq.artemis.core.server] (Thread-1 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=3bea749a-88f7-11e7-b497-27b2839ef45c-1594512315-717665458)) AMQ222033: Page file 000002707.page had incomplete records at position 373,401 at record number 9
2019-10-09 15:15:05,185 WARN [org.apache.activemq.artemis.core.server] (Thread-11 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2#216f6b3e-1182524264)) AMQ222033: Page file 000002707.page had incomplete records at position 373,401 at record number 9
I found only one relevant link on Google, which suggests a server crash, but I am not sure how to stop these warnings: https://developer.jboss.org/thread/154232
The server runs an Apache Camel project that picks up 20,000 messages put on the queue; many of them are discarded and others are processed. I am not sure whether that is related.
The linked forum post does a fine job of explaining the likely reason why the page file was corrupted. To "fix" the problem, I recommend that you consume all the messages you can from the affected queue, stop the broker, and remove the corrupted page file.
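If it helps, here is a minimal sketch of such a drain client using plain JMS (the broker URL and queue name are placeholders; adjust them, or look the ConnectionFactory up via JNDI if the broker is embedded in WildFly):

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Drain the affected queue before stopping the broker and removing the corrupted page file.
public class DrainQueue {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at your broker's acceptor, or obtain the
        // ConnectionFactory via JNDI for an embedded WildFly/Artemis instance.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("affectedQueue")); // placeholder name
            int count = 0;
            Message message;
            // Keep receiving until the queue stops returning messages for a few seconds.
            while ((message = consumer.receive(5000)) != null) {
                count++; // process or persist the message here as needed
            }
            System.out.println("Drained " + count + " messages");
        }
    }
}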
I've played around with the lagom-scala-word-count Activator template and was forced to kill the application process. Since then, embedded Kafka doesn't work: this project and every new one I create are unusable. I've tried:
running sbt clean, to delete the embedded Kafka data
creating a brand new project (from other Activator templates)
restarting my machine.
Despite this, I can't get Lagom to work. During the first launch I get the following lines in the log:
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 1 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 2 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 4 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
Subsequent launches result in:
[info] Starting Kafka
[info] Starting Cassandra
....Kafka Server closed unexpectedly.
....
[info] Cassandra server running at 127.0.0.1:4000
I've posted the full server.log from lagom-internal-meta-project-kafka at https://gist.github.com/szymonbaranczyk/a93273537b42aafa45bf67446dd41adb.
Is it possible that some corrupted embedded Kafka data is stored globally on my PC and causes this?
For future reference, as James mentioned in the comments, you have to delete the folder lagom-internal-meta-project-kafka in target/lagom-dynamic-projects. I don't know why it doesn't get deleted automatically.
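In my case that came down to running the following from the project root and then starting the services again:
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka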
I have a requirement to send an email after registration of a new user in my ATG application.
I have created a template JSP and created a scenario in ATG for that.
I also configured config/atg/scenario/IndividualEmilSender.properties with the key-values below:
contextPathPrefix=/teststore
siteHttpServerName=localhost
siteHttpServerPort=8080
And /config/atg/userprofiling/email/TemplateEmailSender.properties as:
$class=atg.userprofiling.email.TemplateEmailInfoImpl
mailingName=Your Mailing
contextPathPrefix=/teststore
messageSubject^=/atg/dynamo/service/SMTPEmail.defaultSubject
messageFrom^=/atg/dynamo/service/SMTPEmail.defaultFrom
contentProcessor=/atg/userprofiling/email/HtmlContentProcessor
fillFromTemplate=true
templateURL=/NewUserRegistered.jsp
loggingDebug=true
But I am getting the following exception:
ERROR [ScenarioManager] Error while processing individual timer message InstanceTimerMessage[17000001,/TestStore/RegistrationScenario.sdl,NewMembers,3,in 1 mins]; rolling back the transaction java.lang.NullPointerException
at atg.scenario.action.SendEmail.createTemplateEmailInfo(SendEmail.java:193)
at atg.scenario.action.SendEmail.execute(SendEmail.java:526)
at atg.process.ProcessManagerService.executeAction(ProcessManagerService.java:14001)
at atg.process.ProcessManagerService.takeIndividualTransition(ProcessManagerService.java:13408)
at atg.process.ProcessManagerService.receiveIndividualTimerMessage(ProcessManagerService.java:12732)
at atg.process.ProcessManagerService.receiveMessage(ProcessManagerService.java:11416)
at atg.process.ProcessManagerService.receiveMessage(ProcessManagerService.java:11341)
at atg.dms.patchbay.ElementManager.deliverMessage(ElementManager.java:316)
at atg.dms.patchbay.InputPort.onMessage(InputPort.java:190)
at atg.dms.patchbay.InputDestination.onMessage(InputDestination.java:397)
at atg.dms.patchbay.InputDestinationConsumer.processMessageDelivery(InputDestinationConsumer.java:501)
at atg.dms.patchbay.InputDestinationConsumer.runXATransactions(InputDestinationConsumer.java:371)
at atg.dms.patchbay.InputDestinationConsumer.run(InputDestinationConsumer.java:245)
at java.lang.Thread.run(Thread.java:662)
10:34:32,527 INFO [ScenarioManager] DEBUG [message]: message ID:170000 failed a total of 1 times so far
10:34:32,543 ERROR [MessagingManager] An error occurred while MessageSink with nucleus name "/atg/scenario/ScenarioManager" was receiving a Message from input port "IndividualTimers": javax.jms.JMSException: CONTAINER:atg.process.ProcessException; SOURCE:java.lang.NullPointerException
10:34:32,558 INFO [ScenarioManager] DEBUG received message on port IndividualTimers message: jms-msg:ID:170000
10:34:32,558 INFO [ScenarioManager] DEBUG [message]: not processing message ID:170000 after 1 failed delivery attempts
Please help me resolve this!
Thanks!
You are getting a NullPointerException because your DefaultEmailInfo is not configured correctly.
Have a look at the chapter on Sending Targeted E-mail, as well as the SendEmail action, in the ATG documentation.
It is likely that you are missing one or more of the required configuration changes.
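As a rough illustration only (the component path below is hypothetical, and the property values are simply copied from your own snippet rather than from the docs), the template settings normally live in their own TemplateEmailInfoImpl component, which the email sender / scenario configuration then points at:

# /config/atg/userprofiling/email/NewUserRegisteredEmailInfo.properties  (hypothetical component)
$class=atg.userprofiling.email.TemplateEmailInfoImpl
templateURL=/NewUserRegistered.jsp
mailingName=Your Mailing
contextPathPrefix=/teststore
fillFromTemplate=true
contentProcessor=/atg/userprofiling/email/HtmlContentProcessor
messageSubject^=/atg/dynamo/service/SMTPEmail.defaultSubject
messageFrom^=/atg/dynamo/service/SMTPEmail.defaultFrom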
I have added a "Signal Intermediate Event" to a human task as a boundary event, as described in my previous question.
Sometimes the signal is processed successfully and sometimes it is NOT.
The jBPM runtime just updates the ProcessInstanceInfo and doesn't process the signal.
I am using StatefulKnowledgeSession.signalEvent(); it just updates the ProcessInstanceInfo at the backend, and the event doesn't cancel the current activity in progress.
What could be the problem? Is there any bug related to this 'Signal Intermediate Event'?
LOG:
08:34:38,955 INFO [stdout] (http--0.0.0.0-8280-20) 2013-03-13 08:34:38,954 [http--0.0.0.0-8280-20] DEBUG web.mvc.controller.SignalController - A new PROCESS signal recieved ..putProcessOnHOLD
08:34:38,966 INFO [stdout] (http--0.0.0.0-8280-20) 2013-03-13 08:34:38,966 [http--0.0.0.0-8280-20] DEBUG org.drools.container.spring.beans.persistence.DroolsSpringTransactionManager - Current TX name (According to TransactionSynchronizationManager) : core.service.impl.event.ExternalEventManagerImpl.dispatchSignal
08:34:38,978 INFO [stdout] (http--0.0.0.0-8280-20) 2013-03-13 08:34:38,978 [http--0.0.0.0-8280-20] DEBUG org.drools.container.spring.beans.persistence.DroolsSpringTransactionManager - Current TX: org.springframework.transaction.support.DefaultTransactionStatus#3dda5edd
08:34:38,987 INFO [stdout] (http--0.0.0.0-8280-20) Hibernate: select processins0_.InstanceId as InstanceId1_0_, processins0_.id as id1_0_, processins0_.lastModificationDate as lastModi3_1_0_, processins0_.lastReadDate as lastRead4_1_0_, processins0_.processId as processId1_0_, processins0_.processInstanceByteArray as processI6_1_0_, processins0_.startDate as startDate1_0_, processins0_.state as state1_0_, processins0_.OPTLOCK as OPTLOCK1_0_ from ProcessInstanceInfo processins0_ where processins0_.InstanceId=?
08:34:39,014 INFO [stdout] (http--0.0.0.0-8280-20) Hibernate: update ProcessInstanceInfo set id=?, lastModificationDate=?, lastReadDate=?, processId=?, processInstanceByteArray=?, startDate=?, state=?, OPTLOCK=? where InstanceId=? and OPTLOCK=?
Environment: jBPM 5.4.0.Final, JBoss 7.1.0.Final
When you say that the engine updates the ProcessInstanceInfo, I assume you are referring to the last read date only (in case the process instance is not moving forward as expected)? Or other fields as well?
The engine should process each request the same way. So I assume that the process instance itself might not always be in the same state? If the signal for example arrives before or after the human task is active, it will not cause any changes to the process instance itself.
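For example, a quick way to rule out the timing aspect is to check the instance state right before signalling. This is only a sketch against the jBPM 5.x public API; the signal name is taken from your log, and how you obtain the session and the instance id is up to your application:

import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.process.ProcessInstance;

// Send the boundary-event signal only while the target instance is active,
// and scope it to that instance instead of broadcasting it session-wide.
public class SignalHelper {
    public static void signalIfActive(StatefulKnowledgeSession ksession, long processInstanceId) {
        ProcessInstance pi = ksession.getProcessInstance(processInstanceId);
        if (pi != null && pi.getState() == ProcessInstance.STATE_ACTIVE) {
            ksession.signalEvent("putProcessOnHOLD", null, processInstanceId); // signal name from your log
        } else {
            // If the human task is not active yet (or has already completed), the signal
            // will not change the process instance, which matches the behaviour described above.
            System.out.println("Instance " + processInstanceId + " not active; signal skipped");
        }
    }
}

Note that the event type string passed to signalEvent() has to match the signal reference defined in the BPMN for the boundary event.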