I am using Helm to deploy Kafka using cp-helm-charts.
I have enabled the ZooKeeper, Kafka, Schema Registry, and Control Center components. In the Control Center UI I am able to create a topic and set a schema for it. However, schema validation is not enabled, and it is still possible to write arbitrary text to the topic.
I am trying to enable schema validation as described here, by adding these options to my Helm values:
cp-control-center:
  configurationOverrides:
    "confluent.schema.registry.url": http://data-cp-schema-registry:8081
    "confluent.value.schema.validation": true
But it has no effect.
QUESTION:
How do I enable schema validation for cp-helm-charts Kafka?
The idea is to reject any content that does not match the specified schema.
Schema validation applies to Confluent Server (the broker) and its topics, not to the Control Center container, so you'll need to move that override to the Kafka configuration instead (and verify it's using the cp-server image).
It's worth mentioning that this is a paid feature of Confluent Enterprise.
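For illustration, here is a minimal sketch of what that could look like in the Helm values, assuming your version of the cp-kafka subchart exposes the image and configurationOverrides keys and reusing the schema registry service name from above:

cp-kafka:
  image: confluentinc/cp-server          # schema validation is a Confluent Server feature
  configurationOverrides:
    "confluent.schema.registry.url": "http://data-cp-schema-registry:8081"

The validation flag itself (confluent.value.schema.validation=true) is a topic-level setting, applied when the topic is created or altered, rather than a Control Center override.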
Related
We are using Kafka topics for our microservices to communicate with each other. We are also using schemas to define the content of our topics.
In our various stages we explicitly deploy the schemas for each topic as part of the deployment process. However, on our local developer laptops (where we have a Docker container running local Kafka and Schema Registry instances) we do not want to do this.
We are using Spring-Boot and spring-kafka.
Accordingly, we have the following two config files:
application.yml
spring.kafka.producer.properties.auto.register.schemas=false
application-local.yml
spring.kafka.producer.properties.auto.register.schemas=true
This works well, our schemas are automatically registered with the local schema-registry when we write to a kafka-topic for the first time.
However, after we've made some schema changes, our publish now fails, telling us that the new schema is not compatible with the previously installed schema. Checking the local schema registry, we see that the auto-registered schema was registered with compatibility=BACKWARD, whereas on our staged registries we work with compatibility=NONE (we're well aware of the issues this may bring with regard to breaking changes; this is handled in the way we work with our data).
Is there any way to make the auto-registration use NONE instead of BACKWARD?
Any new subject inherits the Registry's global compatibility level; you cannot set it while registering a schema without making a separate, out-of-band HTTP request to the Registry's config endpoint (in other words, before actually producing any data, which may register the schema on its own).
During local development, I would suggest deleting the schema from your registry until you are content with the schema rather than worrying about local compatibility changes.
You could also set the default compatibility level of the local Schema Registry container to NONE.
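For a local setup based on the standard cp-schema-registry Docker image, a sketch of doing that through the container's environment (the service name is an assumption about your local Compose file; the underlying property is schema.compatibility.level, or avro.compatibility.level on older releases):

schema-registry:
  image: confluentinc/cp-schema-registry
  environment:
    # registry-wide default; newly auto-registered subjects inherit it
    SCHEMA_REGISTRY_SCHEMA_COMPATIBILITY_LEVEL: "NONE"

Alternatively, the same default can be changed on a running registry with a PUT /config request whose body is {"compatibility": "NONE"}.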
In Confluent Cloud, we don't want anyone to be able to create a topic without any kind of control; we would like to automate this somehow.
However, I don't see anywhere that the creation of topics can be controlled in any way in Confluent Cloud.
Is it possible to do this?
auto.create.topics.enable is disabled by default on Confluent Cloud, and cannot be enabled on Basic or Standard clusters, only Dedicated.
I have a Kafka cluster running with ZooKeeper, Confluent Schema Registry, and Kafka Security Manager (KSM). KSM, https://github.com/conduktor/kafka-security-manager, is software that makes it easy to manage Kafka ACLs with a CSV file instead of using the command-line tool.
The Confluent Schema Registry lets us store Avro schemas for Kafka. It is currently open and I need to secure it. I want to give every user READ or GET permission only. I am currently using Kubernetes to deploy all the tools.
How can I do that with KSM? Where can I find examples?
Thank you
Kafka ACLs don't apply to the Schema Registry itself; they would apply to the underlying _schemas topic, which you'd set up in the Registry's configuration.
The API itself can be secured using TLS and HTTP Authentication
https://docs.confluent.io/platform/current/schema-registry/security/index.html
give every user the READ or GET permission only.
I don't think you can lock down HTTP-method-level access for specific users; you'll likely need a proxy for this. But also, without POST there would be no way to register schemas for new topics...
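As a rough sketch of what the linked page describes, TLS plus HTTP Basic authentication on the Registry's REST API is configured in schema-registry.properties along these lines (the paths, realm, and role names below are placeholders):

# schema-registry.properties (sketch)
listeners=https://0.0.0.0:8081
ssl.keystore.location=/etc/schema-registry/secrets/keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit

# HTTP Basic auth backed by a JAAS login module
authentication.method=BASIC
authentication.realm=SchemaRegistry-Props
authentication.roles=admin,developer

The matching JAAS file with the user entries is passed to the Registry's JVM; this authenticates callers, but as said above it does not restrict which HTTP methods they may use.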
I came across the following article on how to use the schema registry available in Confluent Platform.
https://docs.confluent.io/platform/current/schema-registry/schema-validation.html
According to that article, we can specify confluent.schema.registry.url in server.properties to point Kafka to the schema registry.
My question is: is it possible to point a Kafka cluster that is not part of a Confluent Platform deployment to a schema registry using confluent.schema.registry.url?
Server-side schema validation is part of Confluent Server, not Apache Kafka.
I will make sure that that docs page gets updated to be more clear - thanks for raising it.
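For completeness, on brokers that do run the confluentinc/cp-server image, the relevant settings from that page look roughly like this (the hostname is a placeholder), and the validation switch itself is a per-topic configuration:

# server.properties on each Confluent Server broker (sketch)
confluent.schema.registry.url=http://schema-registry:8081

# then, per topic (set when creating or altering the topic):
#   confluent.value.schema.validation=true
#   confluent.key.schema.validation=true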
I'm using librdkafka to develop a Kafka message producer in C++.
Is there a way to create a topic with a custom replication factor, different from the default one?
CONFIGURATION.md does not explicitly mention any such parameter, but the Kafka tools allow for this.
While auto topic creation is currently supported by librdkafka, it merely uses the broker's topic default configuration.
What you need is manual topic creation from the client. Broker support for this was added in KIP-4, and it is exposed through librdkafka's Admin API.
See the rd_kafka_CreateTopics() API.
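For reference, a minimal sketch using the C Admin API (callable from C++ as well); rk is assumed to be your existing producer handle, the topic name and counts are placeholders, and error handling is abbreviated:

#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Create "my-topic" with 3 partitions and a replication factor of 2,
 * then wait for the broker's result on a temporary queue. */
static void create_topic(rd_kafka_t *rk) {
    char errstr[512];
    rd_kafka_NewTopic_t *newt = rd_kafka_NewTopic_new(
            "my-topic", 3 /* partitions */, 2 /* replication factor */,
            errstr, sizeof(errstr));
    if (!newt) {
        fprintf(stderr, "NewTopic failed: %s\n", errstr);
        return;
    }

    rd_kafka_queue_t *q = rd_kafka_queue_new(rk);
    rd_kafka_CreateTopics(rk, &newt, 1, NULL /* default options */, q);

    /* Block up to 10 s for the CreateTopics result event. */
    rd_kafka_event_t *ev = rd_kafka_queue_poll(q, 10 * 1000);
    if (ev && rd_kafka_event_type(ev) == RD_KAFKA_EVENT_CREATETOPICS_RESULT) {
        const rd_kafka_CreateTopics_result_t *res =
                rd_kafka_event_CreateTopics_result(ev);
        size_t cnt;
        const rd_kafka_topic_result_t **topics =
                rd_kafka_CreateTopics_result_topics(res, &cnt);
        for (size_t i = 0; i < cnt; i++)
            fprintf(stderr, "%s: %s\n",
                    rd_kafka_topic_result_name(topics[i]),
                    rd_kafka_err2str(rd_kafka_topic_result_error(topics[i])));
    }

    if (ev)
        rd_kafka_event_destroy(ev);
    rd_kafka_queue_destroy(q);
    rd_kafka_NewTopic_destroy(newt);
}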