pact:verify not picking up Kafka config file; same file gets picked up fine during consumer test run - apache-kafka

We have implemented contract testing using message Pact, directly accessing Kafka topics to retrieve the messages from the queues. The Kafka topics are accessed using PLAINTEXT authentication, so we have a separate LoginModule defined in a config file with a username and password. When I run the test from the consumer end, it picks up the correct config file and the scripts run. But when I run pact:verify using the same settings in the script, the LoginModule is not recognized and I get an error "unable to find LoginModule class". From the Pact side I get the error "Failed to invoke provider method". Has anyone faced such issues using Pact with Kafka before?
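For reference, here is a minimal sketch of supplying the same LoginModule entry inline through the consumer properties instead of an external config file, assuming a SASL/PLAIN setup and a Kafka client that supports the sasl.jaas.config property (0.10.2+); the broker address, group id, and credentials are placeholders. This removes the dependency on the verifier JVM finding the external file:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SecuredConsumerFactory {

    public static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "pact-verify");             // placeholder group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // Inline equivalent of the external JAAS file's LoginModule entry;
        // avoids relying on -Djava.security.auth.login.config, which a
        // separately spawned pact:verify JVM may not inherit.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"user\" password=\"password\";");
        return new KafkaConsumer<>(props);
    }
}
```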

Are you talking about this one: github.com/reevoo/pact-messages? If so, we are not currently supporting pact-messages, as we have yet to finalize the base-level tech with HTTP/JSON.
This has been brought up in the past and is known within the Foundation, but we'd rather lock down the core technology before trying to tackle other message protocols/formats.

Related

Kafka Connect: Error detection when worker fails

I'm submitting a connector to Kafka. The connector being created is an SFTP connector. When the password is wrong, the create call still sends back a success response even though the connector fails; the wrong-password error is not reported at that time. This is one scenario, and there could be multiple scenarios like this. When I then use the <host>/connectors/<connector-name>/status endpoint, I get an error saying it failed to establish a connection. But this endpoint has a little delay: if I query it immediately after creating the connector, I may not get any response (404).
What is the proper way of handling this using the status API call? Is there some delay that needs to be applied before firing this API, or can it be handled while submitting the connector to the API?
When you create the connector, it naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is responsible for connecting to the SFTP server with the connection details).
Therefore, the delay is natural, and there's no way for Connect to know your connection details are incorrect until the connector actually tries to use them (unless you validate them yourself before submitting the connector).
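Given that delay, a practical approach is to poll the status endpoint with a short backoff until it answers. Below is a minimal sketch of such a poller, assuming a Connect worker at localhost:8083 and a connector named my-sftp-connector (both placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorStatusPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder host and connector name: adjust for your cluster.
        String statusUrl = "http://localhost:8083/connectors/my-sftp-connector/status";
        HttpRequest request = HttpRequest.newBuilder(URI.create(statusUrl)).GET().build();

        // Poll with a short backoff: right after creation the endpoint
        // may return 404 until the tasks have been assigned and started.
        for (int attempt = 1; attempt <= 10; attempt++) {
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                // Inspect the "state" fields in the JSON body:
                // RUNNING vs FAILED (e.g. a bad password).
                System.out.println(response.body());
                return;
            }
            Thread.sleep(2000); // wait before retrying the status call
        }
        System.err.println("Connector status not available after retries");
    }
}
```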

Mirth Connect sends old messages when changing servers

I have a Mirth application installed on an Ubuntu server. I tried to move the application from one server to another server (a DRC server). When I moved the application, Mirth somehow kept sending old messages to the channel.
The source of the sending channel uses a Database Reader, and the connector type for the destinations is TCP Sender. I'm using Mirth Connect version 3.5.2.
Does anyone know why this is happening? Are there any log files that I need to clear when moving the application from one server to another?
This can happen for several reasons: application logic, queued messages. My guess is that you moved the appdata directory along with the installation; if so, you must be seeing stats similar to those on the server you moved from.
Mirth stores all channel information, transactions, etc. by default under the appdata folder. If you are using the default settings, it'll use a Derby DB. You can connect to that DB with any DB client that supports JDBC, e.g. SQuirreL or DbVisualizer, and that can give you an idea of what's happening.
I recommend you do a clean setup. Then you can export/import your channels into your new environment. You can also consider using another DB (Oracle/SQL Server/MySQL, etc.) for Mirth. The current version is 3.9.10, and it has better support for DBs other than Derby.
As mentioned in the comments, your application logic also matters.
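For illustration, here is a minimal sketch of inspecting that embedded Derby DB over JDBC. The path is an assumption based on the default appdata layout, derby.jar must be on the classpath, and Mirth should be stopped first (embedded Derby allows only one process at a time):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MirthDerbyInspector {
    public static void main(String[] args) throws Exception {
        // Assumed path: Mirth's embedded Derby DB usually lives under
        // <mirth-home>/appdata/mirthdb; adjust to your installation.
        String url = "jdbc:derby:/opt/mirthconnect/appdata/mirthdb";
        try (Connection conn = DriverManager.getConnection(url)) {
            // List the tables to see where channel/message state is kept,
            // then browse them with your JDBC client of choice.
            ResultSet tables = conn.getMetaData()
                .getTables(null, null, "%", new String[] {"TABLE"});
            while (tables.next()) {
                System.out.println(tables.getString("TABLE_NAME"));
            }
        }
    }
}
```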

Looking for a current example of an MDB consuming messages from a remote queue in Wildfly 10

I have a Wildfly 10 instance which defines a queue, publishes to that queue as well as receives from that queue via an MDB.
That has been accomplished.
Now I want to add a second Wildfly 10 instance, running on another machine, which will also receive messages from that same (remote) queue defined in the first instance.
I've spent 2 days looking for a current example of how to do this.
There are tons of questions, and some outdated answers.
It seems like one of the most trivial things to expect from a queue implementation, yet I cannot find any example.
Would someone please refer me to a good and current example (Wildfly 10) of what needs to be done as far as annotation of the MDB, configuration of standalone-full.xml, and security requirements?
I looked into a similar scenario and also had trouble finding good documentation.
There are several ways to connect JMS-Queues together:
JMS core bridges
JMS bridges
Connections to a remote server (using a remote connector, or properties directly in your MDB; see the sketch after this answer).
JMS-Clustering
… ?
I created a demo project on GitHub which uses JMS bridges to forward messages to another server. The project also uses remote connections to listen to messages of a remote server. The readme explains step by step how I would configure Wildfly 10 servers so that they use the same destination for JMS messages.
The best source of information on this topic seems to be the messaging documentation of the Red Hat JBoss Enterprise Application Platform 7.0, which is also valid for Wildfly 10.
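To illustrate the third option from the list above, here is a minimal sketch of an MDB that connects straight to a remote Artemis/Wildfly instance via activation config properties. Host, port, queue name, and credentials are placeholders, and the exact property set should be checked against your resource adapter's documentation:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch: an MDB on the second instance consuming from a queue
// defined on the first instance, using remote connector properties.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms.queue.MyQueue"), // placeholder queue
    @ActivationConfigProperty(propertyName = "connectorClassName",
                              propertyValue = "org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory"),
    @ActivationConfigProperty(propertyName = "connectionParameters",
                              propertyValue = "host=first-instance-host;port=8080"), // placeholder host/port
    @ActivationConfigProperty(propertyName = "user",
                              propertyValue = "jmsuser"),      // placeholder credentials
    @ActivationConfigProperty(propertyName = "password",
                              propertyValue = "jmspassword")
})
public class RemoteQueueMDB implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // Handle the message received from the remote queue.
        System.out.println("Received: " + message);
    }
}
```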

Verify HornetQ user name & password using JBoss CLI

I have added a HornetQ user using the add-user.sh script.
I want to write a script to verify whether the HornetQ username and password entered by someone are correct (before configuring any other components like JMS queues, etc.).
Does the JBoss CLI provide a way to check the validity of authentication details?
Thank you!
You will not be able to use the JBoss CLI.
I suggest using a HornetQ JMS client: just extend an example program and pass in the username and password via command-line parameters or property files. You could also check that the correct queues and topics are available to the given user. You can also use the JBoss CLI from Java to check that a message is delivered to a queue or topic, etc. Make sure your program uses the correct exit code for success or failure.
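As a starting point, here is a minimal sketch of such a credential check, assuming a remote JNDI lookup of jms/RemoteConnectionFactory over remote://localhost:4447 (host, port, and JNDI name are placeholders; adapt them to your server's configuration):

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class CredentialCheck {
    public static void main(String[] args) {
        // Credentials passed in as command-line parameters.
        String user = args[0];
        String password = args[1];
        try {
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.jboss.naming.remote.client.InitialContextFactory");
            env.put(Context.PROVIDER_URL, "remote://localhost:4447"); // placeholder host/port
            env.put(Context.SECURITY_PRINCIPAL, user);
            env.put(Context.SECURITY_CREDENTIALS, password);
            InitialContext ctx = new InitialContext(env);
            ConnectionFactory cf =
                (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
            // createConnection authenticates against the HornetQ security
            // settings; bad credentials throw a JMSSecurityException.
            Connection connection = cf.createConnection(user, password);
            connection.close();
            System.exit(0); // credentials are valid
        } catch (Exception e) {
            System.err.println("Authentication failed: " + e.getMessage());
            System.exit(1); // credentials are invalid (or the broker is unreachable)
        }
    }
}
```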

Use an instance of Orion Context Broker (FIWARE)

It is my first time with FIWARE technologies, and I want to test an instance of the FI-PPP Testbed for Orion Context Broker. I have the service endpoint (http://catalogue.fi-ware.org/enablers/configuration-manager-orion-context-broker/instances) but I don't know how to use this information. I'm calling the service through the REST Console Chrome extension and I don't get any useful response.
What are the steps to test Orion Context Broker through the instance from http://catalogue.fi-ware.org/enablers?
UPDATE:
I'm reading https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_Quick_Start_for_Programmers and it's not clear to me whether I need to install a Linux machine or use a Virtual Machine from FI-LAB.
Could anybody help me?
Thanks in advance.
I don't recommend using the Configuration Manager catalogue entry unless you have a powerful reason to do so. Use the Publish/Subscribe Broker entry instead (see this post about the differences between Configuration Manager and Publish/Subscribe Broker).
Taking that into account, the Orion Context Broker instance that you should use is the one at orion.lab.fi-ware.org:1026. You need an authentication token to use it; a simple way of getting that token is described in the Orion Quick Start Guide.
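Once you have a token, a quick smoke test is to call the instance's /version endpoint with the token in the X-Auth-Token header. A minimal sketch follows; the token value is a placeholder, obtained beforehand as described in the Quick Start Guide:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrionSmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Placeholder: token previously obtained as per the Quick Start Guide.
        String token = "YOUR_AUTH_TOKEN";

        // Hit the /version endpoint of the shared instance with the token
        // in the X-Auth-Token header.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://orion.lab.fi-ware.org:1026/version"))
            .header("X-Auth-Token", token)
            .GET()
            .build();

        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // Orion version info if the token is valid
    }
}
```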