Kogito Kafka messages - Message Trigger information is not complete TriggerMetaData

Has anyone ever encountered this error when using Kogito (with Quarkus) to send Kafka messages from a flow (BPMN2 file) by throwing an "Intermediate Message" from the intermediate events?
[ERROR] com.package.xxxxxx.whenRunTestSupermarketSuccess Time elapsed: 0.006 s
<<< ERROR!
java.lang.RuntimeException:
java.lang.RuntimeException: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
[error]: Build step org.kie.kogito.quarkus.common.deployment.KogitoAssetsProcessor#generateSources
threw an exception: org.kie.kogito.codegen.process.ProcessCodegenException: Error while elaborating
process id = "supermarketFlow", packageName = "com.package.xxxxxx": Message Trigger information is
not complete TriggerMetaData [name=cartCreated, type=ProduceMessage, dataType=String, modelRef=null, ownerId=1]
at org.kie.kogito.codegen.process.ProcessCodegen.internalGenerate(ProcessCodegen.java:325)
at org.kie.kogito.codegen.core.AbstractGenerator.generate(AbstractGenerator.java:69)
I suspect modelRef=null could be the issue, but as far as I've seen it is not mentioned in the official documentation. I suppose I should give it a value in the BPMN file, but I don't know where.
I'm trying to send Kafka messages after running some logic; for the moment nothing is consuming the messages.
Maybe the issue is related to this thread?
BPMN 2.0, kogito, triggering signal or message from another process
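From the codegen message it looks like modelRef is filled from the process variable that is mapped into the throwing event, so the event probably needs a data input association. The following BPMN2 XML is only an illustrative sketch of that mapping (the IDs and the cartPayload variable are made-up names, an assumption rather than a verified fix):

<!-- Sketch: a throwing message intermediate event whose data input is mapped
     from a process variable; that mapping is what should give TriggerMetaData
     a non-null modelRef. Names are placeholders. -->
<bpmn2:itemDefinition id="_cartItem" structureRef="String"/>
<bpmn2:message id="_cartCreatedMsg" itemRef="_cartItem" name="cartCreated"/>

<bpmn2:process id="supermarketFlow">
  <bpmn2:property id="cartPayload" itemSubjectRef="_cartItem"/>

  <bpmn2:intermediateThrowEvent id="_throwCartCreated" name="cartCreated">
    <bpmn2:dataInput id="_throwCartCreated_input" name="Message" itemSubjectRef="_cartItem"/>
    <bpmn2:dataInputAssociation>
      <bpmn2:sourceRef>cartPayload</bpmn2:sourceRef>
      <bpmn2:targetRef>_throwCartCreated_input</bpmn2:targetRef>
    </bpmn2:dataInputAssociation>
    <bpmn2:inputSet>
      <bpmn2:dataInputRefs>_throwCartCreated_input</bpmn2:dataInputRefs>
    </bpmn2:inputSet>
    <bpmn2:messageEventDefinition messageRef="_cartCreatedMsg"/>
  </bpmn2:intermediateThrowEvent>
</bpmn2:process>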

Related

AWS DMS task failing after some time in CDC mode

I'm having trouble setting up a task that migrates the data from an RDS database (PostgreSQL, engine 10.15) into an S3 bucket in the initial migration + CDC mode.
Both endpoints are configured and tested successfully.
I have created the task twice; both times it ran for a couple of hours at most. The first time, the initial dump went fine and some of the incremental dumps took place as well; the second time, only the initial dump finished and no incremental dump was performed before the task failed.
The error message is now:
Last Error Task 'data-migration-bp-dev' was suspended after 9 successive recovery failures Stop Reason FATAL_ERROR Error Level FATAL_
but just after it failed for the first time it was:
Last Error An internal WAL conversational protocol error has occurred. Task error notification received from subtask 0, thread 0 reptask/replicationtask.c:2859 1020452 Error executing source loop; Stream component failed at subtask 0, component st_0_data-migration-rds-bp-dev; Stream component 'st_0_data-migration-rds-bp-dev' terminated reptask/replicationtask.c:2866 1020452 Stop Reason RECOVERABLE_ERROR Error Level RECOVERABLE
In the CloudWatch logs I see the following error messages:
SOURCE_CAPTURE I: Streaming initiated successfully (postgres_pglogical.c:274)
SOURCE_CAPTURE I: #1 : Non-monotonic LSN sequence: Current LSN '00000000/00000000' < Previous LSN '000001E3/94016430'. Event is ignored. (postgres_endpoint_wal_engine.c:710)
SOURCE_CAPTURE I: Unable to resolve attributes for relation id '28804'. Aborting action. (postgres_pglogical.c:1643)
SOURCE_CAPTURE I: End of CDC / CAPTURE events for POSTGRES endpoint. (postgres_endpoint_capture.c:520)
SOURCE_CAPTURE I: CAPTURE ended with exceptions. (postgres_endpoint_capture.c:527)
SOURCE_CAPTURE E: Could not find relation id '28804' in hash. 1020483 (postgres_pglogical.c:1470)
SOURCE_CAPTURE E: Failed to parse relation from dml command 1020483 (postgres_pglogical.c:2515)
SOURCE_CAPTURE E: Failed to find relation id on target while processing message from source 1020452 (postgres_endpoint_wal_engine.c:805)
SOURCE_CAPTURE E: WAL stream loop ended abnormally. (STATUS_PROTOCOL_ERROR) 1020452 (postgres_endpoint_wal_engine.c:992)
SOURCE_CAPTURE E: WAL reader terminated with irrecoverable error. 1020452 (postgres_endpoint_capture.c:496)
TASK_MANAGER I: Task - data-migration-bp-dev is in ERROR state, updating starting status to AR_NOT_APPLICABLE (repository.c:5102)
SOURCE_CAPTURE E: Error executing source loop 1020452 (streamcomponent.c:1870)
TASK_MANAGER E: Stream component failed at subtask 0, component st_0_data-migration-rds-bp-dev 1020452 (subtask.c:1409)
SOURCE_CAPTURE E: Stream component 'st_0_data-migration-rds-bp-dev' terminated 1020452 (subtask.c:1578)
TASK_MANAGER E: Task error notification received from subtask 0, thread 0 1020452 (replicationtask.c:2859)
TASK_MANAGER E: Error executing source loop; Stream component failed at subtask 0, component st_0_data-migration-rds-bp-dev; Stream component 'st_0_data-migration-rds-bp-dev' terminated 1020452 (replicationtask.c:2866)
TASK_MANAGER E: Task 'data-migration-bp-dev' encountered a recoverable error, retry attempt # 0 (repository.c:5184)
At this point I should mention that we had to configure the pglogical plugin and restart the database, but we got an error in the end, which we ignored since the DMS task started after that operation.
ERROR: current database is not configured as pglogical node
HINT: create pglogical node first
Is the failure of our DMS task related to the pglogical plugin configuration? If so, how can we configure it so that it works (our DB engine should be compatible with it, shouldn't it)? And if not, how can we fix it?
Thank you in advance!
Should anyone get the same error in the future, here is what we were told by the AWS tech specialist:
There is a known (to AWS) issue with the pglogical plugin. The solution requires using the test_decoding plugin instead.
Enforce using the test_decoding plugin on the DMS endpoint by specifying pluginName=test_decoding in the Extra Connection Attributes (see the CLI sketch below).
Create a new DMS task using this endpoint (reusing the old task may cause it to fail due to desynchronization between the task and the logs).
It did resolve the issue, but we still don't know what the actual problem was with the plugin that is strongly suggested everywhere in the DMS documentation (at the moment).
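For anyone applying the same fix, setting the attribute through the AWS CLI looks roughly like this (the endpoint ARN is a placeholder; the console's Extra Connection Attributes field takes the same pluginName=test_decoding string):

# Sketch: force DMS to use test_decoding instead of pglogical on the source endpoint
aws dms modify-endpoint \
    --endpoint-arn arn:aws:dms:eu-west-1:123456789012:endpoint:EXAMPLEENDPOINT \
    --extra-connection-attributes "pluginName=test_decoding"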

Akka connection actor has terminated

I'm working on a REST API that uses Akka. We inherited it from a previous team, and none of us had experience with Akka before this.
Akka is being used to process the data the API returns and to act as the HTTP server.
Recently, when the API was under load, we started getting failures like this:
),HttpProtocol(HTTP/1.1)), Response: HttpResponse(500 Internal Server Error,List(),HttpEntity.Strict(text/plain; charset=UTF-8,
Error Code: 500
Type: Internal Server Error
Stack Trace:
akka.stream.StreamTcpException: The connection actor has terminated. Stopping now.
),HttpProtocol(HTTP/1.1)), Time: 6430 ms
I have no idea where the above error is happening in the code, or how to appropriately handle this error when it happens.
Can anyone give suggestions on how to trace this down further, or suggestions on how to handle and recover from these types of issues?
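One way to at least surface where this happens is to wrap the routes in an ExceptionHandler so a StreamTcpException is logged together with the request URI instead of bubbling up as a bare 500. This is only a sketch against the Akka HTTP Java DSL; the guarded() method and innerRoute are placeholder names, not from the actual code base:

import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.ExceptionHandler;
import akka.http.javadsl.server.Route;
import akka.stream.StreamTcpException;

// Sketch: wrap existing routes so a StreamTcpException is logged with the URI
// being served and answered with a 503 instead of an opaque 500.
public class TcpFailureHandling extends AllDirectives {
    public Route guarded(Route innerRoute) {
        ExceptionHandler handler = ExceptionHandler.newBuilder()
            .match(StreamTcpException.class, ex ->
                extractUri(uri -> {
                    // A real logger would go here; System.err keeps the sketch self-contained.
                    System.err.println("Connection actor terminated while serving " + uri + ": " + ex.getMessage());
                    return complete(StatusCodes.SERVICE_UNAVAILABLE);
                }))
            .build();
        return handleExceptions(handler, () -> innerRoute);
    }
}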

Error received while deploying code to Pixhawk4 from Simulink

I am trying to run an example Simulink model provided by MathWorks, but while deploying the model to the Pixhawk flight controller, I receive the following error:
Error(s) encountered while building "px4demo_uORBReadWrite":
### Failed to generate all binary outputs.
Caused by:
Validation error(s):
### Validating other build tools ...
Unable to locate build tool "GNU PX4 Archiver": echo
I have followed all the steps to configure the hardware properly, except the last part, where I was also not able to receive the accelerometer data. The error received while testing the connection at that time was:
Error reading data from the serial port. Operation timed out before requested data was received.
Can someone explain why I am having this issue?
Thanks

GCS Connector Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found

We are trying to run Hive queries on HDP 2.1 using the GCS connector. It was working fine until yesterday, but since this morning our jobs have randomly started failing. When we restart them manually, they work fine. I suspect it has something to do with the number of parallel Hive jobs running at a given point in time.
Below is the error message:
vertexId=vertex_1407434664593_37527_2_00, diagnostics=[Vertex Input: audience_history initializer failed., java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found]
DAG failed due to vertex failure. failedVertices:1 killedVertices:0
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask
Any help will be highly appreciated.
Thanks!
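For reference, the connector normally has to be registered in core-site.xml and its jar has to be on the classpath of every node that runs tasks; the snippet below shows the standard GCS connector registration properties, not a confirmed fix for the intermittent failures described here:

<!-- core-site.xml: standard GCS connector registration (sketch). The connector jar
     must also be on HADOOP_CLASSPATH / hive.aux.jars.path on every node; an
     inconsistent classpath is our assumption about the random failures. -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
</property>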

ATG- Issue in sending email through atg scenario

I have a requirement to send an email after registration of a new user in my ATG application.
I have created a template JSP and created a scenario in ATG for that.
I also configured config/atg/scenario/IndividualEmailSender.properties with the key-values below:
contextPathPrefix=/teststore
siteHttpServerName=localhost
siteHttpServerPort=8080
And /config/atg/userprofiling/email/TemplateEmailSender.properties as:
$class=atg.userprofiling.email.TemplateEmailInfoImpl
mailingName=Your Mailing
contextPathPrefix=/teststore
messageSubject^=/atg/dynamo/service/SMTPEmail.defaultSubject
messageFrom^=/atg/dynamo/service/SMTPEmail.defaultFrom
contentProcessor=/atg/userprofiling/email/HtmlContentProcessor
fillFromTemplate=true
templateURL=/NewUserRegistered.jsp
loggingDebug=true
But I am getting the following exception:
ERROR [ScenarioManager] Error while processing individual timer message InstanceTimerMessage[17000001,/TestStore/RegistrationScenario.sdl,NewMembers,3,in 1 mins]; rolling back the transaction java.lang.NullPointerException
at atg.scenario.action.SendEmail.createTemplateEmailInfo(SendEmail.java:193)
at atg.scenario.action.SendEmail.execute(SendEmail.java:526)
at atg.process.ProcessManagerService.executeAction(ProcessManagerService.java:14001)
at atg.process.ProcessManagerService.takeIndividualTransition(ProcessManagerService.java:13408)
at atg.process.ProcessManagerService.receiveIndividualTimerMessage(ProcessManagerService.java:12732)
at atg.process.ProcessManagerService.receiveMessage(ProcessManagerService.java:11416)
at atg.process.ProcessManagerService.receiveMessage(ProcessManagerService.java:11341)
at atg.dms.patchbay.ElementManager.deliverMessage(ElementManager.java:316)
at atg.dms.patchbay.InputPort.onMessage(InputPort.java:190)
at atg.dms.patchbay.InputDestination.onMessage(InputDestination.java:397)
at atg.dms.patchbay.InputDestinationConsumer.processMessageDelivery(InputDestinationConsumer.java:501)
at atg.dms.patchbay.InputDestinationConsumer.runXATransactions(InputDestinationConsumer.java:371)
at atg.dms.patchbay.InputDestinationConsumer.run(InputDestinationConsumer.java:245)
at java.lang.Thread.run(Thread.java:662)
10:34:32,527 INFO [ScenarioManager] DEBUG [message]: message ID:170000 failed a total of 1 times so far
10:34:32,543 ERROR [MessagingManager] An error occurred while MessageSink with nucleus name "/atg/scenario/ScenarioManager" was receiving a Message from input port "IndividualTimers": javax.jms.JMSException: CONTAINER:atg.process.ProcessException; SOURCE:java.lang.NullPointerException
10:34:32,558 INFO [ScenarioManager] DEBUG received message on port IndividualTimers message: jms-msg:ID:170000
10:34:32,558 INFO [ScenarioManager] DEBUG [message]: not processing message ID:170000 after 1 failed delivery attempts
Please help to resolve this!
Thanks!
You are getting a NullPointerException because your DefaultEmailInfo is not configured correctly.
Have a look at the chapter on Sending Targeted E-mail as well as the SendEmail action in the documentation.
It is likely that you are missing one or more of the required configuration changes.
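As a minimal sketch of the kind of configuration the answer refers to, assuming the component involved is the standard /atg/scenario/DefaultEmailInfo and mirroring the properties already shown above (the path and values are illustrative, not a confirmed fix):

# /config/atg/scenario/DefaultEmailInfo.properties (illustrative sketch, values assumed)
$class=atg.userprofiling.email.TemplateEmailInfoImpl
templateURL=/NewUserRegistered.jsp
messageSubject^=/atg/dynamo/service/SMTPEmail.defaultSubject
messageFrom^=/atg/dynamo/service/SMTPEmail.defaultFrom
contentProcessor=/atg/userprofiling/email/HtmlContentProcessor
fillFromTemplate=true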