I have this in my code:
consumer = session.createConsumer(session.createQueue("myQueue"));
It throws the following exception:
javax.jms.InvalidDestinationException: AMQ119019: Queue already exists test_simple_transaction_receiver
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:406)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:304)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.createQueue(ActiveMQSessionContext.java:546)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.internalCreateQueue(ClientSessionImpl.java:1622)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.createQueue(ClientSessionImpl.java:249)
at org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:628)
at org.apache.activemq.artemis.jms.client.ActiveMQSession.createConsumer(ActiveMQSession.java:331)
at consumeMessage(ReceiverClient.java:143)
Artemis 2.0.0 has a new addressing model. If you need backward compatibility, you have to configure the acceptors so that older clients can still connect.
So, I would recommend just updating your client.
I used the wrong version of artemis-jms-client: my broker is Artemis 2.0.0, but I was using artemis-jms-client 1.5.3.
With a matching client library version, the receiving works.
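For reference, a minimal receive sketch with a matching 2.0.0 artemis-jms-client looks roughly like this (the broker URL, queue name, and timeout are placeholders):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ReceiverSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // With matching client and broker versions this no longer fails with AMQ119019
            MessageConsumer consumer = session.createConsumer(session.createQueue("myQueue"));
            Message message = consumer.receive(5000);
            System.out.println("Received: " + message);
        }
    }
}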
I made this mistake once already, so I am posting about it here in case it helps somebody else, or me a third time.
We use Kafka, Kafka Connect, and Schema Registry in our stack. The version is 2.8.1 (Confluent 6.2.1).
We use Kafka Connect's converter configs (key.converter and value.converter) set to io.confluent.connect.avro.AvroConverter.
It registers a new schema for topics automatically. But there's an issue: AvroConverter doesn't specify subject-level compatibility for a new schema,
and an error appears when we try to get the config for the schema via the REST API /config: Subject 'schema-value' does not have subject-level compatibility configured
If we specify the request parameter defaultToGlobal, then the global compatibility is returned. But that doesn't work for us because we cannot add it to the request: we are using a 3rd-party UI, AKHQ.
How can I specify subject-level compatibility when registering a new schema via AvroConverter?
Last I checked, the only properties that can be provided to any of the Avro serializer configs that affect the Registry HTTP client are the URL, whether to auto-register, and whether to use the latest schema version.
There's no property (or even method call) that sets either the subject-level or the global config during schema registration.
You're welcome to check the source code to verify this.
But it doesn't work for us because we cannot specify it in the request. We are using 3rd party UI: AKHQ
That doesn't sound like a Connect problem. Create a PR for the AKHQ project to fix the request.
As of 2021-10-26, I am using the AKHQ 0.18.0 jar with confluent-6.2.0, and the Schema Registry integration in AKHQ works fine.
Note: I also tried confluent-6.2.1 and saw exactly the same error, so you may want to switch back to 6.2.0 and give it a try.
P.S.: I am using all of this only for my local dev environment (VirtualBox, Ubuntu).
@OneCricketeer is correct.
Unfortunately, there is no way to specify subject-level compatibility in AvroConverter.
I see only two solutions:
Override AvroConverter to add a property and the functionality to send an additional request to the /config/{subject} API after registering the schema (a rough sketch of this approach follows below).
Contribute to AKHQ to support the defaultToGlobal parameter. But in this case, we also need to backport the schema-registry RestClient (see the related Github issue).
The second solution is preferable, at least until the compatibility level can be specified in the converter's settings. Without such a setting in the native AvroConverter, we have to use a custom converter for every client that writes a schema, which takes a lot of effort.
To me it seems strange that the client cannot set the compatibility at the moment of registering the schema and has to use a separate request for it.
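Here is a rough sketch of the first option: a converter that calls PUT /config/{subject} (via SchemaRegistryClient.updateCompatibility) right after the stock AvroConverter has registered the schema. The extra property name subject.compatibility.level and the assumption that subjects follow the default <topic>-key / <topic>-value naming are mine, not part of the stock converter:

import java.io.IOException;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

import io.confluent.connect.avro.AvroConverter;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException;

public class CompatibilityAwareAvroConverter extends AvroConverter {

    private CachedSchemaRegistryClient registryClient;
    private String compatibilityLevel;
    private boolean isKey;
    private final Set<String> configuredSubjects = new HashSet<>();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        super.configure(configs, isKey);
        this.isKey = isKey;
        String url = String.valueOf(configs.get("schema.registry.url"));
        this.registryClient = new CachedSchemaRegistryClient(url, 100);
        // hypothetical extra property, e.g. BACKWARD, FORWARD, FULL, NONE
        Object level = configs.get("subject.compatibility.level");
        this.compatibilityLevel = level != null ? level.toString() : "BACKWARD";
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        // the stock converter registers the schema (if auto-registration is enabled)
        byte[] serialized = super.fromConnectData(topic, schema, value);
        String subject = topic + (isKey ? "-key" : "-value"); // assumes TopicNameStrategy
        if (configuredSubjects.add(subject)) {
            try {
                // equivalent of PUT /config/{subject}
                registryClient.updateCompatibility(subject, compatibilityLevel);
            } catch (IOException | RestClientException e) {
                throw new DataException("Could not set subject-level compatibility for " + subject, e);
            }
        }
        return serialized;
    }
}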
We are using kafka-connect-mqsource, which is a connector for copying data from IBM MQ into Apache Kafka.
We are currently on MQ version 7.5.0.8, but I can see from the README that the connector only supports IBM MQ v8.0 or later.
If we use this code base, we face the exception given below:
{"name":"kafka-mq-connector", "connector":{"state":"RUNNING", "worker_id":"192.888.002.05:7070"}, "tasks":[{"state":"FAILED","trace":"org.apache.kafka.connect.errors.ConnectException:com.ibm.msg.client.jms.DetailedJMSException: JMSCC0091: The provider factory for connection type 'com.ibm.msg.client.wmq' could not be loaded.\n\t
at com.mqkafka.kafkaconnector.MQReader.configure(MQReader.java:122).\n\t
at com.mqkafka.kafkaconnector.MQSourceTask.start(MQSourceTask.java:36).\n\t
at org.apache.kafka.connect.runtime.WorkerSourceTask.execut(WorkerSourceTask.java:157).\n\t
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170).\n\t
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214).\n\t
at
I am not sure whether it is a compatibility issue or something else. Can you please help with the fixes required to make this connector work? Thanks in advance.
I use ReactiveCouchbase (a Scala port of the Java SDK - https://github.com/ReactiveCouchbase/ReactiveCouchbase-core).
For queries it uses an HTTP endpoint (http://mycouchbaseadress:8093/query?q=N1QL command), but the server's response is "Unrecognized parameter in request: q".
I found on Stack Overflow that you should start cbq-engine, so I tried to launch 'cbq-engine -couchbase http://mycouchbaseadress:8093/' but got the error "flag provided but not defined: -couchbase".
My Couchbase version is 4.1 Community.
Do you know how I can send my N1QL query to the server through this endpoint?
It seems like there is a bug in ReactiveCouchbase, or at least its N1QL support was developed against an outdated beta version of the feature.
With Couchbase Server 4.0 GA and above, you don't need to run cbq-engine (this was the process used during N1QL's beta).
The problem is that in the code, the q= parameter is used where it should now be statement= (or a JSON body).
There is an open pull request that happens to fix that issue, among other things, but it has been open for a long time.
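For illustration, a request against the query service with the GA parameter name looks roughly like this (the host, bucket, and the lack of authentication are placeholders/assumptions for a local setup):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class N1qlQuerySketch {
    public static void main(String[] args) throws Exception {
        String n1ql = "SELECT * FROM `default` LIMIT 1";
        // the GA parameter is "statement", not "q"
        String body = "statement=" + URLEncoder.encode(n1ql, "UTF-8");

        URL url = new URL("http://mycouchbaseadress:8093/query/service");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON response containing a "results" array
            }
        }
    }
}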
I'm trying to dynamically create and provide security metadata for Artemis MQ topics (as opposed to defining it statically in broker.xml).
For that purpose I've implemented the SecuritySettingPlugin interface (as described here).
Now, the issue is that getSecurityRoles/populateSecurityRoles of the implementation are called only at server startup.
So, at some point in time after the MQ server has been started, a topic will be created:
org.apache.activemq.artemis.api.jms.management.JMSServerControl.createTopic("newTopic")
Now I would like Artemis to call my SecuritySettingPlugin implementation again to get the updated security roles (which would include the configuration for the newly created newTopic).
Is that possible?
P.S. security-invalidation-interval does not invalidate the roles configuration cache.
It seems there is a way to customize an address's security settings via the management API:
ActiveMQServerControl.addSecuritySettings()
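A minimal sketch of calling it over JMX right after the topic is created; the JMX service URL, address match, and role names are placeholders, a remote JMX connector must be enabled on the broker, and the exact addSecuritySettings overload varies between versions (newer ones take additional role arguments):

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class AddSecuritySettingsSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName serverName = ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName();
            ActiveMQServerControl control = MBeanServerInvocationHandler.newProxyInstance(
                    connection, serverName, ActiveMQServerControl.class, false);

            // Grant roles on the newly created topic's address
            control.addSecuritySettings(
                    "newTopic",    // addressMatch
                    "producers",   // send
                    "consumers",   // consume
                    "admins",      // createDurableQueue
                    "admins",      // deleteDurableQueue
                    "admins",      // createNonDurableQueue
                    "admins",      // deleteNonDurableQueue
                    "admins");     // manage
        }
    }
}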
Logs:
OUT 08:52:27.158 [reactivemongo-akka.actor.default-dispatcher-4] ERROR reactivemongo.core.actors.MongoDBSystem - The primary is unavailable, is there a network problem?
ERR reactivemongo.core.errors.GenericDriverException: MongoError['socket disconnected']
ERR at reactivemongo.core.actors.MongoDBSystem$$anonfun$4$$anonfun$applyOrElse$30.apply(actors.scala:390) ~[org.reactivemongo.reactivemongo_2.11-0.11.6.jar:0.11.6]
Our REST API, written in Scala (using the Spray and Akka frameworks), is deployed in the cloud.
We've tried setting the KeepAlive flag in ReactiveMongoOptions and have also implemented a Jenkins job that periodically hits the database to keep the connection alive. Since adding these, we've not seen the issue reoccur.
Rather than assume this has fixed it, we are trying to reproduce the issue before pushing to production. Any ideas on what may be the cause or how we can reproduce it?