How to replicate schema with Kafka mirror maker? - apache-kafka

We are using MirrorMaker to sync on-premise and AWS Kafka topics. How can a topic, whose schema is registered on-premise, be replicated exactly the same in other clusters (AWS in this case)?
How is the Avro schema replicated when using MirrorMaker?

MirrorMaker only copies byte arrays, not schemas, and doesn't care about the format of the data.
As of Confluent 4.x or later, Schema Registry added the endpoint GET /schemas/ids/{id}. So, if your consumers are configured to use the original registry, this shouldn't matter, since your destination consumers can look up any schema by the ID embedded in the message.
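For example, here is a minimal sketch of a destination-side consumer that keeps its deserializer pointed at the original on-premise registry; the hostnames, group ID, and topic name are placeholders, not anything from the question:

```java
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class MirroredTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Brokers of the destination (AWS) cluster that MirrorMaker writes to
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "aws-broker:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mirrored-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        // Point at the ORIGINAL (on-premise) registry so the schema IDs embedded
        // in the mirrored records still resolve
        props.put("schema.registry.url", "http://onprem-registry:8081");

        try (KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("mirrored-topic"));
            consumer.poll(Duration.ofSeconds(5))
                    .forEach(record -> System.out.println(record.value()));
        }
    }
}
```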
Otherwise, you can mirror the _schemas topic as well, as recommended by Confluent when using Confluent Replicator.
If you absolutely need one-to-one schema copying, you would need to implement a MessageHandler interface and pass it to the MirrorMaker command to GET and POST the schema, similar to the internal logic I added to this Kafka Connect plugin (meaning you could use Connect instead of MirrorMaker): https://github.com/OneCricketeer/schema-registry-transfer-smt
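For reference, the essential GET-and-POST step such a handler performs looks roughly like this; the registry hostnames, subject name, and hard-coded schema ID are placeholder assumptions, and error handling is omitted:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchemaCopy {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        int schemaId = 1; // ID extracted from a consumed record

        // Fetch the schema from the source (on-premise) registry by ID
        HttpResponse<String> source = http.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("http://onprem-registry:8081/schemas/ids/" + schemaId))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // For Avro, the GET response body is {"schema": "..."}, which matches the
        // payload format used to register a schema under a subject
        HttpResponse<String> dest = http.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("http://aws-registry:8081/subjects/my-topic-value/versions"))
                        .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                        .POST(HttpRequest.BodyPublishers.ofString(source.body()))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        System.out.println("Registered at destination: " + dest.body());
    }
}
```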

Related

MongoDB Atlas Source Connector Single Topic

I am using the Confluent MongoDB Atlas Source Connector to pull data from a MongoDB collection into Kafka. I have noticed that the connector is creating multiple topics in the Kafka cluster. I need the data to be available on one topic so that the consumer application can consume it from that topic. How can I do this?
Besides, why is the Kafka connector creating so many topics? Isn't it difficult for consumer applications to retrieve the data with that approach?
Kafka Connect creates three internal topics for the whole cluster to manage its own workload (configs, offsets, and statuses). You should never need/want external consumers to read these.
In addition to that, connectors can create their own topics. Debezium, for example, creates a "database history topic", and again, this shouldn't be read outside of the Connect framework.
Most connectors only need to create one topic for the source to pull data into, which is the one consumers actually should care about.
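For context, those three internal topics are the ones named in the distributed worker's own config, along the lines of this sketch (the topic names shown are just common conventions, not anything from your setup):

```properties
# connect-distributed.properties (excerpt) - internal bookkeeping topics,
# managed by the Connect cluster itself; don't point your consumers at them
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
```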

What is the use of Confluent Schema Registry if Kafka can use Avro without it?

Is the difference between vanilla Apache Avro and Avro with Confluent Schema Registry that, when using Apache Avro, we send schema+message to the Kafka topic, whereas with Confluent Schema Registry we send schemaID+message to the Kafka topic? So here, the Schema Registry improves performance via schema lookup in the registry. Is there any other benefit of using Confluent Schema Registry? Also, does Apache Avro support compatibility rules for schema evolution like the Schema Registry does?
Note: There are other implementations of a "Schema Registry" that can be used with Kafka.
Here is a list of reasons:
Clients can discover schemas without interacting with Kafka. For example, Apache Hive / Presto / Spark can download schemas from the Registry to perform analytics.
The registry is centrally responsible for compatibility checks, rather than pushing each client to perform them on its own (to answer your second question).
The same applies to any serialization format, not only Avro.
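To illustrate the centralized compatibility check, the registry exposes a REST endpoint for testing a candidate schema against the latest registered version; a minimal sketch, where the registry URL and subject name are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CompatibilityCheck {
    public static void main(String[] args) throws Exception {
        // Candidate schema, JSON-escaped inside the request payload
        String payload = "{\"schema\": \"{\\\"type\\\": \\\"string\\\"}\"}";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create("http://registry:8081/compatibility/subjects/my-topic-value/versions/latest"))
                        .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                        .POST(HttpRequest.BodyPublishers.ofString(payload))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Expected response shape: {"is_compatible": true|false}
        System.out.println(response.body());
    }
}
```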

Using confluent cp-schema-registry, does it have to talk to the same Kafka you are using for producers/consumers?

We already have Kafka running in production, and unfortunately it's an older version, 0.10.2. I want to start using cp-schema-registry, from the community edition of Confluent Platform. That would mean installing the older 3.2.2 image of Schema Registry for compatibility with our old Kafka.
From what I've read in the documentation, it seems that Confluent Schema Registry uses Kafka as its backend for storing its state. But the clients that are producing to/reading from Kafka topics talk to Schema Registry independently of Kafka.
So I am wondering if it would be easier to manage in production by running Schema Registry/Kafka/ZooKeeper all together in one container, independent of our main Kafka cluster. Then I could use the latest version of everything. The other benefit is that standing up this new service component could not cause any unexpected negative consequences for the existing Kafka cluster.
I find the documentation doesn't really explain the pros/cons of each deployment strategy well. Can someone offer guidance on how they have deployed Schema Registry in an environment with an existing Kafka? What is the main advantage of connecting Schema Registry to your main Kafka cluster?
Newer Kafka clients are backwards compatible with Kafka 0.10, so there's no reason you couldn't use a newer Schema Registry than 3.2.
From the docs:
Schema Registry that is included in Confluent Platform 3.2 and later is compatible with any Kafka broker that is included in Confluent Platform 3.0 and later
I would certainly avoid putting everything in one container... That's not how they're meant to be used, and there's no reason you would need another ZooKeeper server.
Having a secondary Kafka cluster only to hold one topic of schemas seems unnecessary when you could store the same information on your existing cluster.
the clients that are producing to/reading from Kafka topics talk to Schema Registry independently of Kafka
Clients talk to both. Only Avro schemas are sent over HTTP before your regular client code reaches the topic. And no, schemas and client data do not have to be part of the same Kafka cluster.
Anytime anyone deploys Schema Registry, it's being added to "an existing Kafka"; the only difference is that yours might have more data in it.
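For what it's worth, pointing a Schema Registry at your existing production cluster is only a few lines of its config; a minimal sketch, with placeholder hostnames:

```properties
# schema-registry.properties (excerpt)
listeners=http://0.0.0.0:8081
# Use the existing production Kafka cluster as the storage backend
kafkastore.bootstrap.servers=PLAINTEXT://prod-broker-1:9092,PLAINTEXT://prod-broker-2:9092
# Single compacted topic where all schemas are kept
kafkastore.topic=_schemas
```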

How is the schema from Schema Registry propagated by Replicator?

How do schemas from Confluent Schema Registry get propagated by Confluent Replicator to the destination Kafka cluster and Schema Registry?
Does each replicated message contain its schema, or are schemas replicated separately through a separate topic?
I didn't see any configuration options in Confluent Replicator regarding this.
It sounds like you are asking how the Schema Registry can be used in a multi-datacenter environment. There's a pretty good doc on this: https://docs.confluent.io/current/schema-registry/docs/multidc.html
Replicator can be used to keep the schema registry data in sync on the backend as shown in the doc.
Schemas are not stored with the topic, only their IDs. And the _schemas topic itself is not replicated; only the IDs embedded in the replicated topics are.
At a high level, if you use the AvroConverter with Replicator, it will deserialize the message from the source cluster, optionally rename the topic as per the Replicator configuration, then re-serialize the message and register the new subject name with the destination cluster + registry.
Otherwise, if you use the ByteArrayConverter, it will not inspect the message; it just copies it along to the destination cluster with no schema registration.
A small optimization on the Avro approach would be to only check the first 5 bytes, as per the Schema Registry wire format, to confirm the message is Avro-encoded, then perform an HTTP lookup against the source registry using the Schema Registry REST API GET /schemas/ids/:id, again rename the topic/subject for the destination if needed, and POST the schema there. A similar approach can work in any Consumer/Producer pair, such as a MirrorMaker MessageHandler implementation.
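That 5-byte check is simple to do by hand; a minimal sketch of extracting the schema ID from a record value in the Confluent wire format (the ID could then feed a GET/POST copy like the sketch in the first answer above):

```java
import java.nio.ByteBuffer;

public class WireFormat {
    // Confluent wire format: 1 magic byte (0x0) + 4-byte big-endian schema ID + Avro payload
    static int extractSchemaId(byte[] value) {
        if (value == null || value.length < 5) {
            throw new IllegalArgumentException("Too short to be Confluent-Avro encoded");
        }
        ByteBuffer buf = ByteBuffer.wrap(value);
        if (buf.get() != 0x0) {
            throw new IllegalArgumentException("Missing magic byte; not Confluent-Avro encoded");
        }
        return buf.getInt();
    }

    public static void main(String[] args) {
        byte[] example = {0x0, 0x0, 0x0, 0x0, 0x2A, /* Avro payload... */ 0x6};
        System.out.println("Schema ID: " + extractSchemaId(example)); // prints 42
    }
}
```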

Schema Registry: share partially / authorization system

We need to share part of our Schema Registry with another company and don't want them to see all of our schemas. They also need to do the same for theirs.
Is there any way that each of us can share only part of our Schema Registry?
Out of the box, no.
Assuming each Schema Registry is hooked up to a separate Kafka cluster (call them yours and theirs), what you could do is:
Write a Kafka Streams application to filter() the messages you want them to see into a _schemas_theirs topic (see the sketch after this list).
Use MirrorMaker, or Confluent Replicator, to copy your local _schemas_theirs topic to the theirs cluster's _schemas topic that is being read by the other registry.
Have them do the same thing, copying their filtered data into your Kafka cluster's _schemas topic.
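A minimal sketch of step 1, assuming the subjects you are willing to share can be recognized from the JSON keys in _schemas; the "shared." subject prefix, broker address, and String serdes are placeholder assumptions:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class SchemaFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "schemas-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "your-broker:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // Keys in _schemas are JSON containing the subject name; only forward
        // records whose subject matches the agreed "shared." naming convention
        builder.stream("_schemas", Consumed.with(Serdes.String(), Serdes.String()))
               .filter((key, value) -> key != null && key.contains("\"subject\":\"shared."))
               .to("_schemas_theirs", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```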