Hey, I would like to use the Confluent Schema Registry with the Avro serializers. The documentation now basically says: do not use the same schema for multiple different topics.
Can anyone explain to me why?
I researched the source code, and it basically stores the schema in a Kafka topic as follows: (topicname, magicbytes, version -> key), (schema -> value).
Therefore I don't see the problem of using the same schema multiple times, except for redundancy?
I think you are referring to this comment in the documentation:
We recommend users use the new producer in org.apache.kafka.clients.producer.KafkaProducer. If you are using a version of Kafka older than 0.8.2.0, you can plug KafkaAvroEncoder into the old producer in kafka.javaapi.producer. However, there will be some limitations. You can only use KafkaAvroEncoder for serializing the value of the message and only send value of type Avro record. The Avro schema for the value will be registered under the subject recordName-value, where recordName is the name of the Avro record. Because of this, the same Avro record type shouldn’t be used in more than one topic.
First, the commenter above is correct -- this only refers to the old producer API pre-0.8.2. It's highly recommended that you use the new producer anyway as it is a much better implementation, doesn't depend on the whole core jar, and is the client which will be maintained going forward (there isn't a specific timeline yet, but the old producer will eventually be deprecated and then removed).
However, if you are using the old producer, this restriction is only required if the schema for the two subjects might evolve separately. Suppose you wrote two applications that write to different topics but use the same Avro record type, let's call it record. Now both applications will register it / look it up under the subject record-value and get assigned version=1. This is all fine as long as the schema doesn't change. But let's say application A now needs to add a field. When it does so, the schema will be registered under subject record-value and get assigned version=2. This is fine for application A, but application B has either not been upgraded to handle this schema, or worse, the schema isn't even valid for application B. You also lose the protection the schema registry normally gives you: some other application could now publish data of that format into the topic used by application B (it looks ok because record-value has that schema registered), and application B could see data it doesn't know how to handle since it's not a schema it supports.
So the short version is that, because with the old producer the subject has to be shared if you use the same schema, you end up coupling the two applications and the schemas they must support. You can use the same schema across topics, but we suggest not doing so since it couples your applications (and their development, the teams developing them, etc.).
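For reference, here is a minimal sketch of the new producer with the Confluent Avro serializer (broker address, registry URL, and topic are placeholders). With this setup the value schema is registered under the subject <topic>-value, so each topic gets its own subject even if the same record type is reused:

import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    // Sends an already-built Avro record; the serializer registers/looks up
    // the value schema under the subject "<topic>-value".
    public static void send(String topic, String key, GenericRecord record) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder registry

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(topic, key, record));
        }
    }
}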
Related
In all the documentation it is clearly described how to handle compatible changes with Schema Registry using compatibility types.
But how do you introduce incompatible changes without directly disturbing the downstream consumers, so that they can migrate at their own pace?
We have the following situation (see image) where the producer is producing the same message in both schema versions:
[Image]
The problem is how to migrate the apps and the sink connector in a controlled way, where business continuity is important and the consumers are not allowed to process the same message (in the new format).
consumers are not allowed to process the same message (in the new format).
Your consumers need to be aware of the old format while consuming the new one; they need to understand what it means to consume the "same message". That's up to you to code, not something Connect or other consumers can automatically determine, with or without a Registry.
In my experience, the best approach to prevent duplicate record processing across various topics is to persist unique ids (UUIDs) as part of each record, across all schema versions, and then query some source of truth for what has been processed already, or not. When not yet processed, insert these ids into that system after the records have been handled.
This may require placing a stream processing application in front of the sink connector that filters already-processed records out of the topic before the connector consumes them.
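A rough sketch of such a filter with Kafka Streams, assuming the record key carries the UUID; ProcessedIdStore is a hypothetical client for your source of truth (e.g. a database table of already-processed ids), and the topic names are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class DedupFilterSketch {

    // Hypothetical lookup into the "source of truth" of already-processed UUIDs;
    // the implementation (database, cache, ...) is up to you.
    interface ProcessedIdStore {
        boolean contains(String uuid);
    }

    public static KafkaStreams build(ProcessedIdStore processed) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dedup-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> newFormat = builder.stream("orders-v2");

        // Drop records whose UUID has already been processed via the old topic,
        // and let the sink connector consume the filtered topic instead.
        newFormat
                .filterNot((uuid, value) -> processed.contains(uuid))
                .to("orders-v2-deduplicated");

        return new KafkaStreams(builder.build(), props);
    }
}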
I figure what you are looking for is kind of an equivalent to a topic offset, but spanning multiple topics. Technically this is not provided by Kafka, and with good reason, I'd like to add. The solution would be very specific to each use case, but I figure it all boils down to introducing your own functional offset attribute in both streams.
Consumers will have to maintain state about which messages have been processed, so that when switching to another topic they can filter out messages that were already processed from the other topic. You could use your own sequence numbering or timestamps to keep track of progress across topics. Using a sequence makes it easier to keep track of the progress, as only one value needs to be stored on the consumer end. Using UUIDs or other non-sequence ids will potentially require a more complex state-keeping mechanism.
Keep in mind that switching to a new topic will probably mean that lots of messages have to be skipped, and depending on the amount this might cause a delay that you need to be willing to accept.
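To make the "functional offset" idea concrete, here is a minimal sketch of sequence-based tracking on the consumer side (the class and method names are illustrative, not from any library; persisting the value between restarts is up to you):

import java.util.concurrent.atomic.AtomicLong;

public class FunctionalOffsetTracker {

    // Highest sequence number processed so far, across both topics.
    private final AtomicLong highestProcessed = new AtomicLong(-1L);

    // True if the record still needs processing; false if it was already
    // handled via the other topic and should be skipped.
    public boolean shouldProcess(long sequenceNumber) {
        return sequenceNumber > highestProcessed.get();
    }

    public void markProcessed(long sequenceNumber) {
        highestProcessed.accumulateAndGet(sequenceNumber, Math::max);
    }
}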
Is there a way to configure Confluent Schema Registry and/or Kafka Streams to prevent schema evolution?
Motivation
We have multiple Kafka Streams jobs producing messages for the same topic. All of the jobs should send messages with the same schema, but due to misconfiguration of the jobs, it has happened that some of them send messages with fields missing. This has caused issues downstream and is something we want to prevent.
When this happens, we can see a schema evolution in the schema registry as expected.
Solution
We checked the documentation for Confluent Schema Registry and/or Kafka Streams, but couldn't find a way to prevent the schema evolution.
Hence, we are considering modifying the Kafka Streams jobs to read the schema from Confluent Schema Registry before sending. Only if the retrieved schema matches the local schema of the messages do we send them.
Is this the right way to go or did we miss a better option?
Update: we found an article on Medium about validating the schema against the schema registry before sending.
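For concreteness, a rough sketch of that check with the schema registry client (subject name and registry URL are placeholders; the subject is typically "<topic>-value"):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import org.apache.avro.Schema;

public class SchemaGuard {

    // Returns true only if the latest schema registered for the subject equals
    // the local schema of the messages we are about to send.
    public static boolean matchesRegisteredSchema(Schema localSchema, String subject) throws Exception {
        SchemaRegistryClient client =
                new CachedSchemaRegistryClient("http://localhost:8081", 100);
        SchemaMetadata latest = client.getLatestSchemaMetadata(subject);
        Schema registered = new Schema.Parser().parse(latest.getSchema());
        return registered.equals(localSchema);
    }
}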
It depends on which language and library you use and what kind of APIs they provide. If you are publishing generic records, you can read and parse an .avdl or .avsc file into the record type and build your event. This means that if the event you are trying to build isn't compatible with the current schema, you won't even be able to build that event, and hence won't be able to modify the existing schema. So in this case, simply store a static schema with your source code. With specific records it is more or less the same: you can generate your Java/C# or other language classes based on the schema, then simply new them up and publish. Does it make sense?
PS: I worked with the C# libs for Kafka; some other languages may not have that support, or may have other, better options.
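A rough Java sketch of the generic-record approach described above (the file path and field names are made up; building the record fails at runtime if a field doesn't exist in the stored schema):

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class GenericEventBuilder {

    // Parse the static schema shipped with the source code and build an event against it.
    public static GenericRecord buildUser(String name, int age) throws Exception {
        Schema schema = new Schema.Parser().parse(new File("src/main/avro/user.avsc"));
        GenericRecord user = new GenericData.Record(schema);
        user.put("name", name);   // throws if "name" is not a field of the schema
        user.put("age", age);
        return user;
    }
}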
I have many microservices reading/writing Avro messages in Kafka.
Schemas are great. Avro is great. But is a schema registry really needed? It helps centralize Schemas, yes, but do the microservices really need to query the registry? I don't think so.
Each microservice has a copy of the schema, user.avsc, and an Avro-generated POJO: User extends SpecificRecord. I want a POJO of each Schema for easy manipulation in the code.
Write to Kafka:
byte[] value = user.toByteBuffer().array();
producer.send(new ProducerRecord<>(TOPIC, key, value));
Read from Kafka:
User user = User.fromByteBuffer(ByteBuffer.wrap(record.value()));
Schema Registry gives a broader set of applications and services a way to use the data, not just your Java-based microservices.
For example, your microservice streams data to a topic, and you want to send that data to Elasticsearch, or a database. If you've got the Schema Registry you literally hook up Kafka Connect to the topic and it now has the schema and can create the target mapping or table. Without a Schema Registry each consumer of the data has to find out some other way what the schema of the data is.
Taken the other way around too - your microservice wants to access data that's written into a Kafka topic from elsewhere (e.g. with Kafka Connect, or any other producer) - with the Schema Registry you can simply retrieve the schema. Without it you start coupling your microservice development to having to know about where the source data is being produced and its schema.
There's a good talk about this subject here: https://qconnewyork.com/system/files/presentation-slides/qcon_17_-_schemas_and_apis.pdf
Do they need to? No, not really.
Should you save yourself some space on your topic by not sending the schema as part of every message, and not require the consumers to already have the schema just to read anything? Yes, and that is what the AvroSerializer is doing for you - externalizing that data elsewhere, consumable via a simple REST API.
The deserializer then must know how that schema is retrieved, and you can configure it with the specific.avro.reader=true property rather than manually invoking fromByteBuffer yourself, letting the AvroDeserializer handle it.
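For contrast with the snippets in the question, here is a rough consumer sketch using the registry-aware deserializer (broker, registry URL, group id, and topic are placeholders; User is the Avro-generated class from the question):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AvroConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker
        props.put("group.id", "user-consumer");                      // placeholder group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081");   // placeholder registry
        props.put("specific.avro.reader", "true");                   // deserialize straight into User

        try (KafkaConsumer<String, User> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("users"));  // placeholder topic
            ConsumerRecords<String, User> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, User> record : records) {
                User user = record.value();                          // no fromByteBuffer needed
            }
        }
    }
}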
Also, in larger orgs, shuffling around a single user.avsc file (even if version controlled) doesn't prevent that copy from becoming stale over time or handle evolution in a clean way.
One of the most important features of the Schema Registry is to manage the evolution of schemas. It provides the layer of compatibility checking. By setting an appropriate Compatibility Type you determine the allowed schema changes.
You can find all the available Compatibility Types here.
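As a small example, the compatibility type can also be set per subject through the registry client (subject name and registry URL are placeholders):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class CompatibilitySketch {
    public static void main(String[] args) throws Exception {
        SchemaRegistryClient client =
                new CachedSchemaRegistryClient("http://localhost:8081", 100);
        // From now on, only backward-compatible changes are accepted for this subject.
        client.updateCompatibility("users-value", "BACKWARD");
    }
}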
There is something I'm trying to understand about how Avro-serialized messages are treated by Kafka and Schema Registry. From this post I've understood that the schema ID is stored in a predictable place in each message, so it seems that we can have messages of various schemas in the same topic and still find the right schema and deserialize them successfully based on just that. On the other hand, many people seem to use the expression "a schema attached to a topic", which implies one schema per topic.
So which is right? Can I take advantage of the Schema Registry (e.g. with KSQL) and have messages of various types (or schemas) in the same topic?
Typically you have a 1:1 topic/schema relationship, but it is possible (and valid) to have multiple schemas per topic in some situations. For more information, see https://www.confluent.io/blog/put-several-event-types-kafka-topic/
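With the Confluent serializers, one way to get multiple schemas per topic is to change the subject naming strategy so schemas are registered per record type rather than per topic; a sketch of the relevant producer config (broker and registry URLs are placeholders):

import java.util.Properties;

public class MultiEventTopicConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");   // placeholder registry
        // Register value schemas under "<topic>-<record name>" instead of "<topic>-value",
        // so several record types can live in the same topic.
        props.put("value.subject.name.strategy",
                "io.confluent.kafka.serializers.subject.TopicRecordNameStrategy");
        return props;
    }
}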
I am working on Kafka and as a beginner the following question popped out of my mind.
Every time we design the schema for Avro, we create the Java object out of it through its jars.
Now we use that object to populate data and send it from Producer.
For consuming the message, we generate the object again in the Consumer. Now the objects generated in both places, Producer and Consumer, contain a field
"public static final org.apache.avro.Schema SCHEMA$" which actually stores the schema as a String.
If that is the case, then why should Kafka use a schema registry at all? The schema is already available as part of the Avro objects.
Hope my question is clear. If someone can answer me, it would be of great help.
Schema Registry is the repository which stores the schemas of all the records sent to Kafka. When a Kafka producer sends records using KafkaAvroSerializer, the schema of the record is extracted and stored in the schema registry, and the actual record in Kafka only contains the schema id.
When deserializing the record, the consumer reads the schema id and uses it to fetch the actual schema from the schema registry. The record is then deserialized using the fetched schema.
So in short, Kafka does not keep a copy of the schema in every record; instead, the schema is stored in the schema registry and referenced via its schema id.
This helps save space when storing records and also helps detect schema compatibility issues between various clients.
https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html
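For illustration, a small sketch of how the schema id can be read back out of a serialized value, following the wire format described in the linked page (magic byte, then a 4-byte schema id, then the Avro payload):

import java.nio.ByteBuffer;

public class WireFormatSketch {

    public static int schemaIdOf(byte[] recordValue) {
        ByteBuffer buffer = ByteBuffer.wrap(recordValue);
        if (buffer.get() != 0) {            // byte 0 is the magic byte (0)
            throw new IllegalArgumentException("Unknown magic byte");
        }
        return buffer.getInt();             // bytes 1-4 are the schema id
    }
}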
Schema Registry is a central repo for all the schemas and helps enforce schema compatibility rules when registering new schemas, without which schema evolution would be difficult.
Based on the configured compatibility (backward, forward, full, etc.), the schema registry will reject a new schema that doesn't conform to the configured compatibility.