Subject does not have subject-level compatibility configured - apache-kafka

We use Kafka, Kafka Connect and Schema Registry in our stack. The version is 2.8.1 (Confluent 6.2.1).
We set Kafka Connect's key.converter and value.converter configs to io.confluent.connect.avro.AvroConverter.
This registers a new schema for topics automatically. But there's an issue: AvroConverter doesn't set subject-level compatibility for a new schema,
so this error appears when we try to get the config for the subject via the REST API /config: Subject 'schema-value' does not have subject-level compatibility configured
If we pass the request parameter defaultToGlobal, the global compatibility is returned instead. But that doesn't work for us, because we cannot add the parameter to the request: we are using a third-party UI, AKHQ.
How can I specify subject-level compatibility when registering a new schema via AvroConverter?

Last I checked, the only properties that can be provided to any of the Avro serializer configs that affect the Registry HTTP client are the url, whether to auto-register, and whether to use the latest schema version.
There's no property (or even method call) that sets either the subject-level or global config during schema registration.
You're welcome to check out the source code to verify this.
But it doesn't work for us because we cannot specify it in the request. We are using 3rd party UI: AKHQ
Doesn't sound like a Connect problem. Create a PR for the AKHQ project to fix the request.
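For reference, the registry-related settings a connector (or worker) config can pass through the converter prefix look roughly like this; the URL and values here are placeholders, not from the question:
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081
value.converter.auto.register.schemas=true
value.converter.use.latest.version=false
Note that there is nothing in this list for a compatibility level.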

As of 2021-10-26, I am using the AKHQ 0.18.0 jar with Confluent 6.2.0, and the schema registry view in AKHQ works fine.
Note: I also tried Confluent 6.2.1 and saw exactly the same error, so you may want to switch back to 6.2.0 and give it a try.
P.S. I'm using all of this only for my local dev environment (VirtualBox, Ubuntu).

@OneCricketeer is correct.
There is no possibility to specify subject-level compatibility in AvroConverter unfortunately.
I see only two solutions:
Override AvroConverter to add a property and the functionality to send an additional request to the API /config/{subject} after registering the schema (a rough sketch is included below).
Contribute to AKHQ to support the defaultToGlobal parameter. But in this case, we also need to backport the schema-registry RestClient. GitHub issue
The second solution is preferable until the converter itself lets the user specify the compatibility level in its settings. Without such a setting in the native AvroConverter, we have to use a custom converter for every client that writes a schema, which takes a lot of effort.
To me it seems strange that the client cannot set the compatibility at the moment of registering the schema and has to issue a separate request for it.
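Here is a rough, unverified sketch of the first option: a custom converter that extends AvroConverter and issues the extra compatibility call itself. The class name, the compatibility.level property and the identity-map size are made up for illustration; only updateCompatibility(subject, level) (the client call behind PUT /config/{subject}) is part of the real Schema Registry client API.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import io.confluent.connect.avro.AvroConverter;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import org.apache.kafka.connect.data.Schema;

// Hypothetical converter: use it as value.converter and pass
// value.converter.compatibility.level=BACKWARD (or FORWARD, FULL, ...).
public class CompatibilityAvroConverter extends AvroConverter {

    private SchemaRegistryClient client;
    private String compatibility;
    private boolean isKey;
    // Remember which subjects were already updated so /config/{subject} is called only once.
    private final Set<String> updatedSubjects = ConcurrentHashMap.newKeySet();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        super.configure(configs, isKey);
        this.isKey = isKey;
        this.compatibility = (String) configs.get("compatibility.level"); // hypothetical property
        this.client = new CachedSchemaRegistryClient((String) configs.get("schema.registry.url"), 100);
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        byte[] result = super.fromConnectData(topic, schema, value); // registers the schema if needed
        String subject = topic + (isKey ? "-key" : "-value");        // assumes TopicNameStrategy
        if (compatibility != null && updatedSubjects.add(subject)) {
            try {
                client.updateCompatibility(subject, compatibility);  // PUT /config/{subject}
            } catch (Exception e) {
                updatedSubjects.remove(subject);
                throw new RuntimeException("Could not set compatibility for " + subject, e);
            }
        }
        return result;
    }
}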

Related

Vertx 4.2.1 Redis ConfigReader issue

Running into some strange issues while using Redis 6.2.6 as the config server. The config is stored using HSET commands:
HSET appt-src-svc-local vertx '{"listen.port": 8080}'
HSET appt-src-svc-local mongo '{"host":"127.0.0.1", "port":27017}'
...
When using Redis 4, the config can be retrieved correctly. If we switch to v6.2.6, the RedisConfigStore is unable to parse the response correctly.
Any help will be much appreciated.
TIA
I believe vertx-config hasn't been verified to work correctly with the new protocol negotiation feature that supports both old and new servers. For now, you could force the protocol to fall back to RESP2 (the old one) and open an issue on GitHub so the new protocol can be tested and supported.
To disable the protocol negotiation, you need to configure redis client with:
protocolNegotiation: false
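A minimal sketch of what that could look like, assuming the redis config store hands its JSON config to the Redis client's RedisOptions; the connectionString and key entries are assumptions to adapt to your setup:

import io.vertx.config.ConfigRetriever;
import io.vertx.config.ConfigRetrieverOptions;
import io.vertx.config.ConfigStoreOptions;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class RedisConfigExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        ConfigStoreOptions redisStore = new ConfigStoreOptions()
            .setType("redis")
            .setConfig(new JsonObject()
                .put("connectionString", "redis://127.0.0.1:6379") // adjust to your server
                .put("key", "appt-src-svc-local")                  // hash that holds the config
                .put("protocolNegotiation", false));               // force RESP2 instead of RESP3

        ConfigRetriever retriever = ConfigRetriever.create(vertx,
            new ConfigRetrieverOptions().addStore(redisStore));

        retriever.getConfig()
            .onSuccess(json -> System.out.println(json.encodePrettily()))
            .onFailure(Throwable::printStackTrace);
    }
}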

Debezium Server and using variables in the application.properties file

I'm trying to get Debezium Server running so that I can use GCP (Google) PubSub and not have to use Kafka and the Kafka connectors. I have it mostly running; however, I'm having trouble using variables in the transforms section to define a topic name.
According to the documentation, when using the Outbox transformation, I can choose the Topic name by using the variable ${routedByValue} for the setting route.topic.replacement and this will use the value that is determined by the setting route.by.field. If the replacement setting is omitted, it will use a default topic name of outbox.event.<route.by.field value>.
When I try to use this variable in the 'application.properties' file ...
debezium.transforms.outbox.route.by.field=aggregate_type
debezium.transforms.outbox.route.topic.replace=${routedByValue}
... the Debezium Server stops with a NoSuchElementException, saying it cannot expand routedByValue. If I omit that setting, it works fine and defines the topic name as outbox.event.<route.by.field value>.
How can I use this variable correctly in the 'application.properties' file so I can customise the topic name (e.g. route.topic.replace=myservice.${routedByValue})?
The way I got this to work was to do the following ...
debezium.transforms.outbox.route.by.field=aggregate_type
debezium.transforms.outbox.route.topic.replacement=$1
I believe this works because another setting, debezium.transforms.outbox.route.topic.regex, is omitted from the config, and it has a default value of (?<routedByValue>.*).
If I understand the documentation correctly, the $1 refers to the first group in the regex expression. In my case, this will return whatever the value of aggregate_type evaluates to.
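So, to get the prefixed topic name asked about in the question, it should be enough to state the default regex explicitly and put the prefix in the replacement; a sketch I haven't verified myself:
debezium.transforms.outbox.route.topic.regex=(?<routedByValue>.*)
debezium.transforms.outbox.route.topic.replacement=myservice.$1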
I'm using Debezium Server 2.1 with Pulsar as the sink type, and @Dazfl's answer solved my issue!
debezium.transforms=outbox
debezium.transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
debezium.transforms.outbox.route.topic.replacement=outbox.event.transactions.$1
Although the Debezium Server docs say to use ${routedByValue}, this does not work as expected...

UI console to browse topics on Message Hub

I have a Message Hub instance on Bluemix, and am able to produce / consume messages off it. I was looking for a quick, reasonable way to browse topics / messages to see what's going on. Something along the lines of kafka-topics-ui.
I installed kafka-topics-ui locally, but could not get it to connect to Message Hub. I used the kafka-rest-url value from the MessageHub credentials in the kafka-topics-ui configuration file (env.js), but could not figure out where to provide the API key.
Alternatively, in the Bluemix UI, under Kibana, I can see log entries for creating the topic. Unfortunately, I could not see log entries for messages in the topic (perhaps I'm looking in the wrong place or have the wrong filters?).
My guess is I'm missing something basic. Is there a way to either:
configure a tool such as kafka-topics-ui to connect to MessageHub,
or,
browse topic messages easily?
Cheers.
According to Using the Kafka REST API on Bluemix, you need an additional header in all API requests:
-H "X-Auth-Token: APIKEY"
A quick solution is to edit the topic-ui code and include your token in every request. Another solution would be to use a Chrome plugin that can inject the above header. For a more formal solution, I have opened a ticket on GitHub.
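To check the credentials quickly, you can also call the REST API directly with that header; something like the following, where the host placeholder comes from the kafka_rest_url value in your service credentials and /topics is the standard Confluent REST Proxy topic-listing path that kafka-topics-ui itself uses:
curl -H "X-Auth-Token: APIKEY" https://<kafka_rest_url>/topics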

How to dynamically configure security for Artemis MQ addresses

Trying to dynamically create and provide security metadata for Artemis MQ topics (as opposed to defining them statically in broker.xml).
For that purpose I've implemented (as described here) the SecuritySettingPlugin interface.
Now, the issue is that the getSecurityRoles/populateSecurityRoles methods of the implementation are called only at server startup.
So, at some point in time after the MQ server has been started, a topic will be created:
org.apache.activemq.artemis.api.jms.management.JMSServerControl.createTopic("newTopic")
Now I would like Artemis to call my SecuritySettingPlugin implementation again to get the updated security roles (which will include configuration for the newly created newTopic).
Is that possible?
P.S. security-invalidation-interval does not invalidate the roles configuration cache.
It seems there is a way to customize address security via the API:
ActiveMQServerControl.addSecuritySettings()
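A minimal sketch of calling it over JMX, assuming the broker exposes a remote JMX connector; the JMX URL and the role names used here are assumptions:

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class AddSecuritySettingsExample {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName serverName = ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName();
            ActiveMQServerControl control = MBeanServerInvocationHandler
                .newProxyInstance(connection, serverName, ActiveMQServerControl.class, false);

            // Grant the "clients" role send/consume/queue permissions on the new topic's
            // address, and the "admins" role the manage permission.
            control.addSecuritySettings(
                "newTopic",  // addressMatch
                "clients",   // send
                "clients",   // consume
                "clients",   // createDurableQueue
                "clients",   // deleteDurableQueue
                "clients",   // createNonDurableQueue
                "clients",   // deleteNonDurableQueue
                "admins");   // manage
        }
    }
}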

spring cloud auto refresh config server property

I have configured Spring Cloud Config, which picks up properties from GitHub. If I POST to /refresh, I am also able to get the updated value in my application.
Now I want the properties to be updated automatically, meaning I don't want to hit the refresh API to get changes from the GitHub property file reflected in my application.
Do I need to set up RabbitMQ and Spring Cloud Bus for this, or is there another, simpler way to do it?
Also, the documentation says that we need to add a dependency on the spring-cloud-config-monitor library for push notifications.
http://projects.spring.io/spring-cloud/spring-cloud.html#_push_notifications_and_spring_cloud_bus
But I did not find any such dependency in Maven to add. Not sure if my understanding is wrong. Please help.
You would need a Config server with Spring Cloud Bus and RabbitMQ (or Kafka or Redis) support.
RabbitMQ with the following exchange:
name: springCloudBus
type: topic
durable: true
autoDelete: false
internal: false
The config server would send data to the topic once it receives push events from the Git host (GitHub, Bitbucket, GitLab) via a webhook to http://<config-server>/monitor
And a client application with the Config and RabbitMQ libraries, subscribed to the previous exchange, receives messages about the properties that need to be refreshed.
More could be found in my blog at: http://tech.asimio.net/2017/02/02/Refreshable-Configuration-using-Spring-Cloud-Config-Server-Spring-Cloud-Bus-RabbitMQ-and-Git.html with a brief explanation of the configuration, logs and full source code for the Config server and client app.
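On the client side, a minimal sketch of the broker-related settings (assuming the spring-cloud-starter-bus-amqp dependency is on the classpath; property names may differ between Spring Cloud versions, and the values below are just local defaults):
spring.cloud.config.uri=http://localhost:8888
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
Beans annotated with @RefreshScope are then rebound whenever a refresh event for the application arrives over the bus.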
They are not generally available yet. You need to add http://repo.spring.io/milestone/ as a Maven repository and use a milestone release.