Create subjects automatically from Schema Registry - Subject not found - apache-kafka

I have a Schema Registry container at http://registry-server:8081.
ProducerConfig:
bootstrap.servers : [PLAINTEXT://kafka-server:9092]
value.serializer : class org.apache.kafka.common.serialization.ByteArraySerializer
and a standalone service that acts as a producer, with its properties set as below:
Producer
"schema.registry.url", "http://registry-server:8081"
"bootstrap.servers", "http://kafka-server:9092
"value.converter.value.subject.name.strategy", true
"auto.register.schemas", false
"value.serializer", DelegatingSerialzer.class
KafkaAvroSerializerConfig
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
But when the standalone service attempts to send a request to the Schema Registry, something like the below,
http://registry-server:8081/subjects/topicName-value?deleted=false
I constantly receive a Subject Not Found error.
Is it because auto.register.schemas is set to false in the producer of the standalone service, and that's why it is failing to create subjects?
How can I auto-register schemas and auto-create subjects in the Schema Registry service?
By the way, kafka, schema-registry and standalone-app all run as containers.

Is it because auto.register.schemas is set to false in the producer of the standalone service, and that's why it is failing to create subjects?
Probably.
How can I auto-register schemas and auto-create subjects in the Schema Registry service?
You can't. The registry needs external POST HTTP requests sent to it to create schemas for subjects; you cannot create a subject without a schema.
The registry itself doesn't start with any schemas (usually only your client applications have them), so there is nothing it can auto-register by itself.
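Registration therefore has to come from a producing client or an out-of-band request. A minimal sketch, assuming the Confluent KafkaAvroSerializer is on the classpath and that turning auto.register.schemas back on is acceptable in your environment; the host names are the ones from the question:

import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class AutoRegisterExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-server:9092");
        props.put("schema.registry.url", "http://registry-server:8081");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // With auto.register.schemas=true the serializer POSTs the schema to
        // /subjects/topicName-value/versions on first use (TopicNameStrategy),
        // which creates the subject as a side effect.
        props.put("auto.register.schemas", true);
        try (Producer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // producer.send(new ProducerRecord<>("topicName", someGenericRecord));
        }
    }
}

Alternatively, keep auto.register.schemas=false in the producer and register the schema once yourself with a POST to /subjects/topicName-value/versions on the registry; either way, the subject only comes into existence when some client registers a schema under it.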

Related

Kafka connect S3 source failing with read-only registry

I am trying to read Avro records stored in S3 in order to put them back into a Kafka topic, using the S3 source connector provided by Confluent.
I already have the topics and the registry set up with the right schemas, but when the Connect S3 source tries to serialize my records to the topics I get this error:
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: ...
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:121)
    at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:143)
    at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:84)
    ... 15 more
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Subject com-row-count-value is in read-only mode; error code: 42205
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:495)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:486)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:459)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:214)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:276)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:252)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:75)
It seems that the Connect producer does not try to fetch the schema ID if the schema already exists, but instead tries to register it, and my registry is read-only.
Does anyone know if this is an issue, or is there some configuration I am missing?
If you're sure the correct schema for that subject is already registered by some other means, you can try setting auto.register.schemas to false in the serializer configuration.
See here for more details: https://docs.confluent.io/platform/current/schema-registry/serdes-develop/index.html#handling-differences-between-preregistered-and-client-derived-schemas
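Since the serializer here runs inside Kafka Connect, the flag is passed through the converter configuration. A sketch, assuming the standard Confluent AvroConverter and a placeholder registry URL:

value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://registry-server:8081
# Look up the ID of the pre-registered schema instead of trying to register it
value.converter.auto.register.schemas=false
value.converter.use.latest.version=true

use.latest.version=true makes the serializer resolve the latest registered version for the subject rather than deriving a schema from the record; see the linked page for the caveats around schema compatibility when using it.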

Messages are not getting consumed

I have added the below configuration in the application.properties file of a Spring Boot application with a Camel implementation, but the messages are not getting consumed. Am I missing any configuration? Any pointers on implementing a consumer for Azure Event Hubs using the Kafka protocol and Camel?
bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
The route looks like this:
from("kafka:{{topicName}}?brokers=NAMESPACENAME.servicebus.windows.net:9093" )
.log("Message received from Kafka : ${body}");
I found the solution for this issue. Since I was using Spring Boot auto-configuration (camel-kafka-starter), the entries in the application.properties file had to be modified as given below:
camel.component.kafka.brokers=NAMESPACENAME.servicebus.windows.net:9093
camel.component.kafka.security-protocol=SASL_SSL
camel.component.kafka.sasl-mechanism=PLAIN
camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
The consumer route for the Azure event hub with Kafka protocol will look like this:
from("kafka:{{topicName}}")
.log("Message received from Kafka : ${body}");
Hope this solution helps others consume events from Azure Event Hubs in Camel using the Kafka protocol.

Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set from Kafka rest proxy

I am trying to use the Kafka REST proxy with an AWS MSK cluster.
MSK Encryption details:
Within the cluster
TLS encryption: Enabled
Between clients and brokers
TLS encryption: Enabled
Plaintext: Not enabled
I have created the topic "TestTopic" on MSK, and I have created another EC2 instance in the same VPC as the MSK cluster to work as the REST proxy. Here are the details from kafka-rest.properties:
zookeeper.connect=z-3.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181,z-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181
bootstrap.servers=b-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096,b-2.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=/tmp/kafka.client.truststore.jks
I have also created a rest-jaas.properties file with the below content:
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="username"
password="password";
};
and then set the java.security.auth.login.config using:
export KAFKA_OPTS=-Djava.security.auth.login.config=/home/ec2-user/confluent-6.1.1/rest-jaas.properties
After this, I started the Kafka REST proxy using:
./kafka-rest-start /home/ec2-user/confluent-6.1.1/etc/kafka-rest/kafka-rest.properties
But when I tried to put an event on TestTopic by calling the service from Postman:
POST: http://IP_of_ec2instance:8082/topics/TestTopic
I got a 500 error, and on the EC2 instance I can see this error:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:291)
at io.confluent.kafkarest.ProducerPool.buildNoSchemaProducer(ProducerPool.java:120)
at io.confluent.kafkarest.ProducerPool.buildBinaryProducer(ProducerPool.java:106)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:71)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:60)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:53)
at io.confluent.kafkarest.DefaultKafkaRestContext.getProducerPool(DefaultKafkaRestContext.java:54)
... 64 more
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:141)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:106)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:92)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:139)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:74)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:120)
at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:449)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:430)
... 71 more
I can also see that the value of sasl.jaas.config is null in the ProducerConfig values.
Could someone please help me with this? Thanks in advance!
Finally, the issue was fixed. I am posting the fix here so that it can benefit someone else:
The kafka-rest.properties file should contain the following:
zookeeper.connect=z-3.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181,z-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181
bootstrap.servers=b-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096,b-2.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096
client.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
client.security.protocol=SASL_SSL
client.sasl.mechanism=SCRAM-SHA-512
There was no need to create the rest-jaas.properties file, nor to export KAFKA_OPTS.
After these changes, I was able to put messages on the Kafka topic using SCRAM authentication.
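The client. prefix is what tells the REST proxy to pass those settings through to the Kafka clients it creates internally. Once it is up, a produce call against the v2 API looks roughly like this (a sketch; the JSON embedded format and the payload are illustrative):

POST http://IP_of_ec2instance:8082/topics/TestTopic
Content-Type: application/vnd.kafka.json.v2+json

{"records": [{"value": {"message": "hello"}}]}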

Provide Kafka Client property at runtime to use kafka auth

In the micronaut-kafka documentation there is some info on how to set custom properties, either via the application.yml file or directly in the annotation:
@KafkaClient(
id="product-client",
acks = KafkaClient.Acknowledge.ALL,
properties = @Property(name = ProducerConfig.RETRIES_CONFIG, value = "5")
)
public interface ProductClient {
...
}
I have to provide the sasl.jaas.config property at runtime, because the clients use authentication and the secrets are resolved on startup. Only after the secrets are resolved should the Kafka consumer/producer be initialised.
What is the best way to achieve this?
Thanks!
I don't think Micronaut has this setting in the current version.
But you can just add this row under your producer config in application.yml:
sasl.jaas.config: "com.sun.security.auth.module.Krb5LoginModule required blablabla;"
and read this via placeholder like:
@Property(name = "sasl.jaas.config", value = "${kafka.producers.your_producer_id.sasl.jaas.config}")
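Putting the two pieces together, a sketch (the product-client producer id is carried over from the question; the topic name and YAML path are assumptions):

// application.yml (sketch):
// kafka:
//   producers:
//     product-client:
//       sasl.jaas.config: "com.sun.security.auth.module.Krb5LoginModule required ...;"

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.Topic;
import io.micronaut.context.annotation.Property;

@KafkaClient(
    id = "product-client",
    properties = @Property(
        name = "sasl.jaas.config",
        // Resolved from configuration at startup, once the secret is available
        value = "${kafka.producers.product-client.sasl.jaas.config}"
    )
)
public interface ProductClient {
    @Topic("product")  // hypothetical topic name
    void send(@KafkaKey String id, String product);
}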

Kafka - Error while fetching metadata with correlation id - LEADER_NOT_AVAILABLE

I have set up a Kafka cluster locally: three brokers with the following properties:
broker.id=0
listeners=PLAINTEXT://:9092
broker.id=1
listeners=PLAINTEXT://:9091
broker.id=2
listeners=PLAINTEXT://:9090
Things were working fine, but I am now getting this error:
WARN Error while fetching metadata with correlation id 1 : {TRAIL_TOPIC=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
I am also trying to write messages via a Java-based client, and I am getting the error: unable to fetch metadata in 6000ms.
I faced the same problem: the topic did not exist, and the broker configuration auto.create.topics.enable was set to false in my setup. I was using bin/connect-standalone, so I hadn't specified the topics I would use.
I changed this config to true and it solved my problem.
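Concretely, that means either enabling auto-creation in each broker's server.properties, or creating the topic up front; a sketch, where the partition and replication counts are assumptions for a three-broker cluster:

# server.properties, on each broker
auto.create.topics.enable=true

# or create the topic explicitly before producing
# (older brokers use --zookeeper localhost:2181 instead of --bootstrap-server)
bin/kafka-topics.sh --create --topic TRAIL_TOPIC \
  --partitions 3 --replication-factor 3 \
  --bootstrap-server localhost:9092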