Using confluent cp-schema-registry, does it have to talk to the same Kafka you are using for producers/consumers? - apache-kafka

We already have Kafka running in production, and unfortunately it's an older version, 0.10.2. I want to start using cp-schema-registry, from the community edition of Confluent Platform. That would mean installing the older 3.2.2 image of Schema Registry for compatibility with our old Kafka.
From what I've read in the documentation, it seems that Confluent Schema Registry uses Kafka as its backend for storing its state. But the clients that are producing to/reading from Kafka topics talk to Schema Registry independently of Kafka.
So I am wondering if it would be easier to manage in production, running Schema Registry/Kafka/ZooKeeper all together in one container, independent of our main Kafka cluster. Then I could use the latest version of everything. The other benefit is that standing up this new service component couldn't cause any unexpected negative consequences for the existing Kafka cluster.
I find the documentation doesn't really explain the pros and cons of each deployment strategy well. Can someone offer guidance on how they have deployed Schema Registry in an environment with an existing Kafka cluster? What is the main advantage of connecting Schema Registry to your main Kafka cluster?

Newer Kafka clients are backwards compatible with Kafka 0.10, so there's no reason you couldn't use a newer Schema Registry than 3.2.
In the docs:
Schema Registry that is included in Confluent Platform 3.2 and later is compatible with any Kafka broker that is included in Confluent Platform 3.0 and later
I would certainly avoid putting everything in one container... That's not how they're meant to be used, and there's no reason you would need another ZooKeeper server.
Having a secondary Kafka cluster only to hold one topic of schemas seems unnecessary when you could store the same information on your existing cluster.
the clients that are producing to/reading from Kafka topics talk to Schema Registry independently of Kafka
Clients talk to both. Only the Avro schemas are sent over HTTP, before your regular client code ever reaches the topic. And no, the schemas and your client data do not have to be part of the same Kafka cluster.
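To make that concrete, here is a minimal sketch of a producer configured to talk to both, assuming Avro and the Confluent serializer; the hostnames, topic name, and schema are placeholders, not anything from your environment.

// Minimal sketch of a producer that talks to both systems (hypothetical
// hostnames and topic). Requires kafka-clients and kafka-avro-serializer
// on the classpath.
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker connection: where the actual message bytes go.
        props.put("bootstrap.servers", "kafka-broker:9092");
        // Schema Registry connection: only the schema travels over HTTP.
        props.put("schema.registry.url", "http://schema-registry:8081");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");

        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("name", "alice");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // The serializer registers/looks up the schema over HTTP, then the
            // record itself goes to the Kafka brokers as usual.
            producer.send(new ProducerRecord<>("users", "alice", record));
        }
    }
}

The same pair of settings appears on the consumer side with the Avro deserializer: bootstrap.servers points at whichever cluster holds your data, schema.registry.url points at the registry, and the two do not have to be backed by the same Kafka.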
Anytime anyone deploys Schema Registry, it's being added to "an existing Kafka"; the only difference is that yours might have more data in it.

Related

Does Kafka support schema registries out of the box, or is it a Confluent Platform feature?

I came across the following article on how to use the schema registry available in the Confluent Platform.
https://docs.confluent.io/platform/current/schema-registry/schema-validation.html
According to that article, we can specify confluent.schema.registry.url in server.properties to point Kafka to the schema registry.
My question is: is it possible to point a Kafka cluster that is not part of a Confluent Platform deployment to a schema registry using confluent.schema.registry.url?
Server-side schema validation is part of Confluent Server, not Apache Kafka.
I will make sure the docs page gets updated to be clearer - thanks for raising it.
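For context, here is a hedged sketch of what enabling that validation looks like when the brokers are Confluent Server: the broker itself has confluent.schema.registry.url set in server.properties, and validation is then switched on per topic. The topic name and host below are placeholders, and plain Apache Kafka brokers will simply ignore these settings.

// Sketch: creating a topic with broker-side schema validation enabled.
// Only Confluent Server honors this config; Apache Kafka brokers ignore it.
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class SchemaValidationTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "confluent-server:9092"); // placeholder host

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("orders", 3, (short) 3)
                // Topic-level switch from the schema-validation docs linked above.
                .configs(Map.of("confluent.value.schema.validation", "true"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}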

How are AWS MSK, Confluent Schema Registry, and Confluent Kafka Connect recommended to be used together?

We are planning to use the AWS MSK service for managed Kafka, together with the Schema Registry and Kafka Connect services from Confluent, to run our connectors (Elasticsearch Sink Connector). We plan to run Schema Registry and the connectors on EC2.
As per the Confluent team, they cannot officially support Confluent Schema Registry and Kafka Connect if we use MSK for Kafka.
So, can anyone share their experience? For example:
Has anybody used a combination of MSK and Confluent services together in a production environment?
Is there any risk in using this kind of combination?
Is it recommended or not to use this combination?
How is Confluent community support if we face any issue with the connectors?
Any other suggestions, comments, or alternatives?
We already have a corporate Confluent Platform license, but we want a managed Kafka service; that's why we have chosen AWS MSK, as it's more cost-effective than Confluent Cloud per our analysis.
Kindly share your thoughts. Thanks in advance.
Objectively answering your question: this is doable, but it depends on where your major pain is.
From the licensing perspective, there is nothing that forces you to have a Confluent subscription just to use Kafka Connect or Schema Registry, as they are covered by the Apache License 2.0 and the Confluent Community License, respectively.
From the technical perspective, you can run both Kafka Connect and Schema Registry on EC2, and as long as they are running in the same VPC as the MSK cluster, they will work flawlessly.
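As a rough illustration of that setup: once a Connect worker on EC2 has its bootstrap.servers pointed at the MSK brokers, connectors are created through the worker's REST API. In the sketch below the hostnames, topic, and connector settings are placeholders, and the worker is assumed to have the Confluent Elasticsearch sink plugin installed.

// Sketch: registering an Elasticsearch sink connector via the Connect REST API.
// Hostnames, topic name, and settings are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateEsSinkSketch {
    public static void main(String[] args) throws Exception {
        String connectorConfig = """
            {
              "name": "es-sink",
              "config": {
                "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
                "topics": "orders",
                "connection.url": "http://elasticsearch:9200",
                "key.ignore": "true",
                "value.converter": "io.confluent.connect.avro.AvroConverter",
                "value.converter.schema.registry.url": "http://schema-registry:8081"
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://connect-worker:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(connectorConfig))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}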
From the cost perspective, you will have to evaluate how much it costs to have Kafka Connect and Schema Registry managed by you and/or your team. Think not only about the install and setup phase but about the manage and evolve phase as well. The software might not have any cost, but the effort to operate these components can be translated into cost.
How is Confluent community support if we face any issue with the connectors?
The Kafka community is usually very helpful, whether you ask for help in the Apache Kafka users group or in the Slack community that Confluent runs. Of course, it is all best effort, and you can't rely on it for guaranteed support. It may take several days until some good Samaritan decides to help you. Which also translates into cost: how much does being down and/or waiting for a resolution cost you?
I am no longer a Confluent employee, and therefore I won't even try to convince you to buy from them. But you should evaluate this component of cost and check whether using Confluent Cloud wouldn't provide you a more cost-effective solution, since it includes a managed version of Kafka, Kafka Connect, and Schema Registry. In my experience, the managed Kafka on Confluent Cloud is not that costly and the managed Schema Registry is "free", but using a managed connector can be very costly, and it can get worse depending on the number of tasks that you configure in the managed connector. This is the only gotcha that you ought to watch out for.
AWS also offers the Glue Schema Registry, a fully managed, free schema registry service that integrates easily with MSK/Kafka and other AWS services such as Kinesis. It's much easier to get started with.

Kafka design questions - Kafka Connect vs. own consumer/producer

I need to understand when to use Kafka Connect vs. our own consumer/producer code written by a developer. We are getting Confluent Platform. Also, to achieve a fault-tolerant design, do we have to run the consumer/producer code (jar file) on all the brokers?
Kafka Connect is typically used to connect external systems to Kafka, i.e. to move data from external sources into Kafka and from Kafka out to external sinks.
Anything that you can do with a connector can also be done with a plain Producer + Consumer.
Readily available connectors simply ease connecting external systems to Kafka without requiring the developer to write that low-level code.
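To give a sense of what that low-level code looks like, here is a minimal sketch of the hand-rolled equivalent of a sink connector: a plain consumer loop that reads a topic and writes each record to some external system. The broker address, topic, and the external write itself are placeholders.

// Sketch of the hand-rolled equivalent of a sink connector. Offsets are
// committed only after the external writes succeed, the kind of bookkeeping
// Connect otherwise does for you.
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HandRolledSinkSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder
        props.put("group.id", "hand-rolled-sink");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    writeToExternalSystem(record.key(), record.value()); // your sink logic
                }
                consumer.commitSync(); // commit only after the batch is written
            }
        }
    }

    private static void writeToExternalSystem(String key, String value) {
        // Placeholder for a database insert, HTTP call, etc.
        System.out.printf("wrote %s=%s%n", key, value);
    }
}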
Some points to remember:
If the source and sink are both the same Kafka cluster, a connector doesn't make sense.
If you are doing change data capture (CDC) from a database and pushing the changes to Kafka, you can use a database source connector.
Resource constraints: Kafka Connect is a separate process, so double-check what you can trade off between resources and ease of development.
Writing your own connector is fine, unless someone has already written one. If you are using third-party connectors, you need to check how well they are maintained and/or whether support is available.
do we have to run the consumer/producer code (jar file) on all the brokers?
Don't run client code on the brokers. Let all memory and disk access be reserved for the broker process.
when to use Kafka Connect vs. own consumer/producer
In my experience, these factors should be taken into consideration; Connect makes more sense when:
You're planning on deploying and monitoring Kafka Connect anyway, and have the available resources to do so. Again, these don't run on the broker machines
You don't plan on changing the connector code very often, because you must restart the whole Connect JVM, which could be running other connectors that don't need to be restarted
You aren't able to integrate your own producer/consumer code into your existing applications or simply would rather have a simpler produce/consume loop
Having structured data not tied to a particular binary format is preferred
The connector you write yourself or adopt from the community is well tested and configurable for your use cases
Connect has more limited fault-tolerance options than the raw producer/consumer APIs, which in turn come with the drawbacks of more code to write and more libraries to depend on.
Note: Confluent Platform is still the same Apache Kafka.
Kafka Connect:
Kafka Connect is an open-source framework that basically provides two types of connectors: sources and sinks. Kafka Connect is used to pull data from external systems such as databases into Kafka and to push data from Kafka out to those systems. It makes it easier to use various other systems with Kafka, and it also helps in tracking changes from databases into Kafka (Change Data Capture, CDC, as mentioned in another answer). Connect maintains offsets so it can resume reading or writing data from a particular position in Kafka or the other system.
For more details, you can refer to https://docs.confluent.io/current/connect/index.html
The Producer/Consumer:
Producers and consumers are just the end systems that use the Kafka client APIs to write topics to and read topics from Kafka. They are used where you want to distribute data to various consumers in consumer groups. Kafka also maintains the lag and offsets for each consumer group.
No, you don't need to run any producer/consumer while running Kafka Connect. If you want to check that there is no data loss, you can run a consumer while running source connectors. For sink connectors, the data that has already been produced can be verified in your database by running the appropriate SELECT queries.

Confluent Schema Registry as a standalone service

Can Confluent Schema Registry be used by applications outside of Kafka Streams? I am specifically interested in using this component with message queues other than Apache Kafka, such as Cloud Pub/Sub. Based on my investigation, the component seems tightly coupled to applications using Confluent Platform.
Well, the Confluent Schema Registry does depend on Kafka (it's where the schemas are actually stored). You don't need the rest of Confluent Platform.
While there is a Storage interface that could, in theory, be re-written against an external system, I am not aware of a way to change out the default implementation.
Once you have Kafka (and therefore ZooKeeper), the REST API itself can be wrapped by any external serialization library. Flink, NiFi, and StreamSets, for example, have taken this approach for Avro schema management.
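As a rough sketch of that approach, the two REST calls below register a schema under a subject and fetch it back, with no Kafka client involved; the registry URL and subject name are placeholders. A Pub/Sub (or any other) serializer could use the returned schema to encode and decode messages.

// Sketch: using Schema Registry purely over its REST API, independent of any
// Kafka producer/consumer. URL and subject name are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchemaRegistryRestSketch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String registry = "http://schema-registry:8081";

        // Register an Avro schema under a subject (the schema is an escaped JSON string).
        String body = """
            {"schema": "{\\"type\\":\\"record\\",\\"name\\":\\"User\\",\\"fields\\":[{\\"name\\":\\"name\\",\\"type\\":\\"string\\"}]}"}
            """;
        HttpRequest register = HttpRequest.newBuilder()
            .uri(URI.create(registry + "/subjects/pubsub-users-value/versions"))
            .header("Content-Type", "application/vnd.schemaregistry.v1+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        System.out.println(http.send(register, HttpResponse.BodyHandlers.ofString()).body());

        // Fetch the latest version of the schema back for use by any serializer.
        HttpRequest fetch = HttpRequest.newBuilder()
            .uri(URI.create(registry + "/subjects/pubsub-users-value/versions/latest"))
            .GET()
            .build();
        System.out.println(http.send(fetch, HttpResponse.BodyHandlers.ofString()).body());
    }
}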

Migrating topics, ACLs, and messages from Apache Kafka to Confluent Platform

We are migrating our application from Apache Kafka to Confluent Platform.
Apache Kafka version: 1.1.0
Confluent Platform version: 4.1.0
Tried these options:
Manually copying the ZooKeeper logs and Kafka logs - not an optimal way because of the data volume and data-correctness concerns.
MirrorMaker - this will replicate newly created topics and ACLs, but it will not migrate the existing data in Apache Kafka.
Please suggest better approaches on this.
You can keep your existing Kafka and ZooKeeper installation.
Confluent does not change the way these run or manage data.
You can configure the REST Proxy, Schema Registry, Control Center, KSQL, etc. to use your existing bootstrap servers or ZooKeeper connection; nothing should need to be migrated. You're only adding extra consumer/producer services which just happen to be provided by Confluent.
If you later plan on upgrading your brokers, you can start up new ones from the Confluent package, migrate the partitions, then shut down the old ones. Similarly for ZooKeeper, but make sure that you have at least two up during this process, and always have an odd number of them available after your transition.