Why does the Confluent Schema Registry set the default compatibility type to BACKWARD? - confluent-platform

BACKWARD compatibility requires updating consumers first; otherwise an old consumer might have trouble processing data written with the new schema.
Most Kafka use cases I have seen so far have more consumers than producers, so updating all consumers seems a much harder task than updating all producers (if there is more than one).
On the other hand, with Confluent's serializers it is the producer that registers the new schema with the Schema Registry. Even with this compatibility check in place, nothing stops a producer from registering a new schema that passes the BACKWARD check yet still breaks existing old consumers.
So my question is: why does the Confluent Schema Registry set the default compatibility type to BACKWARD?
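For reference, it is indeed the serializer on the producer side that registers schemas by default. A minimal, hedged sketch of the relevant producer configuration (broker and registry addresses are assumptions); turning auto.register.schemas off is one way to keep producers from registering anything that wasn't vetted beforehand:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

// Producer-side serializer configuration. With auto.register.schemas=true (the
// default), the serializer registers any new schema it sees, subject only to the
// subject's compatibility setting. Setting it to false means the producer can
// only use schemas that were registered ahead of time (e.g. from CI).
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker address
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8081");              // assumed registry address
props.put("auto.register.schemas", "false");                            // opt out of producer-side registration
KafkaProducer<String, Object> producer = new KafkaProducer<>(props);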

Related

Schema registry incompatible changes

All the documentation clearly describes how to handle compatible changes with the Schema Registry and its compatibility types.
But how do you introduce incompatible changes without directly disturbing the downstream consumers, so that they can migrate at their own pace?
We have the following situation (see image) where the producer is producing the same message in both schema versions:
[Image: the producer writes each message in both the old and the new schema version]
The problem is how to migrate the apps and the sink connector in a controlled way, where business continuity is important and the consumers are not allowed to process the same message again (in the new format).
consumers are not allowed to process the same message again (in the new format)
Your consumers need to be aware of the old format while consuming the new one; they need to understand what it means to consume the "same message". That's up to you to code, not something Connect or other consumers can automatically determine, with or without a Registry.
In my experience, the best approach to prevent duplicate record processing across various topics is to persist a unique id (UUID) as part of each record, across all schema versions, and then query some source of truth for whether it has already been processed or not. When a record has not been processed yet, insert its id into that system once the record has been handled.
This may require placing a stream processing application in front of the sink connector that filters already processed records out of the topic before the connector consumes it.
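A minimal sketch of such a filtering step, assuming a hypothetical orders-v2 topic, a uuid field in each record, and an isAlreadyProcessed() lookup against your own source of truth (serde configuration omitted for brevity):

import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

// Drop records whose UUID is already marked as processed, and let the sink
// connector read the filtered topic instead of the original one.
StreamsBuilder builder = new StreamsBuilder();
builder.<String, GenericRecord>stream("orders-v2")                                  // hypothetical topic
       .filterNot((key, value) -> isAlreadyProcessed(value.get("uuid").toString())) // hypothetical lookup
       .to("orders-v2-deduplicated");                                               // sink connector reads this topic

Properties config = new Properties();
config.put(StreamsConfig.APPLICATION_ID_CONFIG, "dedup-filter");
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
new KafkaStreams(builder.build(), config).start();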
I figure what you are looking for is kind of an equivalent to a topic offset, but one spanning multiple topics. Technically this is not provided by Kafka, and with good reason, I'd like to add. The solution will be very specific to each use case, but I figure it all boils down to introducing your own functional offset attribute in both streams.
Consumers will have to maintain state about which messages have been processed, and when switching to the other topic they have to filter out the messages that were already processed from the first one. You could use your own sequence numbering or timestamps to keep track of progress across topics. Using a sequence makes tracking progress easier, since only one value needs to be stored on the consumer end; UUIDs or other non-sequential ids will potentially require a more complex state-keeping mechanism.
Keep in mind that switching to a new topic will probably mean that lots of messages have to be skipped, and depending on the amount this might cause a delay that you need to be willing to accept.
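A rough sketch of that functional-offset idea, assuming every record carries a monotonically increasing seq field and that you persist the last processed value yourself; the helper names are hypothetical:

import java.time.Duration;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Skip anything the consumer already handled from the old topic by comparing a
// per-record sequence number against the highest value processed so far.
void consumeNewTopic(Consumer<String, GenericRecord> consumer, long lastProcessedSeq) {
    while (true) {
        for (ConsumerRecord<String, GenericRecord> record : consumer.poll(Duration.ofSeconds(1))) {
            long seq = (Long) record.value().get("seq");   // "seq" is an assumed field name
            if (seq <= lastProcessedSeq) {
                continue;                                  // already processed via the old topic
            }
            process(record);                               // hypothetical business logic
            lastProcessedSeq = seq;
            saveProgress(lastProcessedSeq);                // hypothetical: persist to your own store
        }
    }
}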

Confluent Schema Registry/Kafka Streams: prevent schema evolution

Is there a way to configure Confluent Schema Registry and/or Kafka Streams to prevent schema evolution?
Motivation
We have multiple Kafka Streams jobs producing messages for the same topic. All of the jobs should send messages with the same schema, but due to misconfiguration of the jobs, it has happened that some of them send messages with fields missing. This has caused issues downstream and is something we want to prevent.
When this happens, we can see a schema evolution in the schema registry as expected.
Solution
We checked the documentation for Confluent Schema Registry and/or Kafka Streams, but couldn't find a way to prevent the schema evolution.
Hence, we are considering modifying the Kafka Streams jobs to read the schema from the Confluent Schema Registry before sending. Only if the retrieved schema matches the local schema of the messages do we send them.
Is this the right way to go or did we miss a better option?
Update: we found an article on Medium about validating the schema against the Schema Registry before sending.
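A minimal sketch of that pre-send check, assuming a subject named my-topic-value, a generated SpecificRecord class MyRecord, and a local registry; exception handling is omitted and exact client method names can differ between Confluent versions:

import org.apache.avro.Schema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;

// Fetch the latest registered schema for the subject and refuse to produce if
// it differs from the schema compiled into the Streams job.
CachedSchemaRegistryClient client =
        new CachedSchemaRegistryClient("http://localhost:8081", 100);
SchemaMetadata latest = client.getLatestSchemaMetadata("my-topic-value");   // assumed subject name
Schema registered = new Schema.Parser().parse(latest.getSchema());
Schema local = MyRecord.getClassSchema();                                   // assumed generated class

if (!registered.equals(local)) {
    throw new IllegalStateException("Local schema differs from the registered one; refusing to produce");
}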
It depends on which language and library you use and what kind of APIs they provide. If you are publishing generic records, you can read and parse an .avdl or .avsc file into the record type and build your event from it. That means that if the event you are trying to build isn't compatible with the current schema, you won't even be able to build that event, and hence won't be able to modify the existing schema. So in this case, simply store a static schema alongside your source code. With specific records it is more or less the same: you generate your Java/C# (or other language) classes from the schema, then simply new them up and publish. Does that make sense?
PS: I worked with the C# libs for Kafka; maybe some other languages do not have that support, or have other, better options.
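To illustrate the point about a static schema, here is a hedged sketch using Avro's GenericRecordBuilder: with a user.avsc shipped alongside the source code, an event that is missing a required field (one without a default) cannot even be built, so it never reaches the producer. The file path and field names are assumptions:

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;

// Parse the static schema stored with the source code (exception handling omitted).
Schema schema = new Schema.Parser().parse(new File("src/main/avro/user.avsc"));

GenericRecord user = new GenericRecordBuilder(schema)
        .set("id", "42")
        // .set("email", "someone@example.com")  // forgetting a required field here...
        .build();                                // ...makes build() throw an AvroRuntimeException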

Kafka topic - schema registry compatibility

I have a topic v001 and a schema subject v001-value.
I made a breaking change to the schema: some optional fields were made mandatory.
However, the messages already in Kafka from before this change have all of these fields. Do I have to create a new topic for this change?
The Schema Registry will tell you if you have an incompatible schema, depending on how you have it configured.
https://docs.confluent.io/current/schema-registry/develop/api.html#post--compatibility-subjects-(string-%20subject)-versions-(versionId-%20version)
have to create new topic for this change
Not necessarily; you could delete the schema in the registry, then push a new one.
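For completeness, the same compatibility check the REST endpoint above exposes can be done from the Java client. A hedged sketch, assuming a local registry and the v001-value subject (the exact testCompatibility signature varies across client versions: older ones take an Avro Schema, newer ones a ParsedSchema; exception handling omitted):

import java.io.File;
import org.apache.avro.Schema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

// Ask the registry whether the candidate schema is compatible with the latest
// version registered under the subject, without actually registering it.
CachedSchemaRegistryClient client =
        new CachedSchemaRegistryClient("http://localhost:8081", 100);
Schema candidate = new Schema.Parser().parse(new File("v001.avsc"));   // assumed local schema file

boolean compatible = client.testCompatibility("v001-value", candidate);
System.out.println("Compatible with latest registered version: " + compatible);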

What is the value of an Avro Schema Registry?

I have many microservices reading/writing Avro messages in Kafka.
Schemas are great. Avro is great. But is a schema registry really needed? It helps centralize Schemas, yes, but do the microservices really need to query the registry? I don't think so.
Each microservice has a copy of the schema, user.avsc, and an Avro-generated POJO: User extends SpecificRecord. I want a POJO of each Schema for easy manipulation in the code.
Write to Kafka:
byte[] value = user.toByteBuffer().array();
producer.send(new ProducerRecord<>(TOPIC, key, value));
Read from Kafka:
User user = User.fromByteBuffer(ByteBuffer.wrap(record.value()));
Schema Registry gives you a way for a broader set of applications and services to use the data, not just your Java-based microservices.
For example, your microservice streams data to a topic, and you want to send that data to Elasticsearch, or a database. If you've got the Schema Registry you literally hook up Kafka Connect to the topic and it now has the schema and can create the target mapping or table. Without a Schema Registry each consumer of the data has to find out some other way what the schema of the data is.
Taken the other way around too - your microservice wants to access data that's written into a Kafka topic from elsewhere (e.g. with Kafka Connect, or any other producer) - with the Schema Registry you can simply retrieve the schema. Without it you start coupling your microservice development to having to know about where the source data is being produced and its schema.
There's a good talk about this subject here: https://qconnewyork.com/system/files/presentation-slides/qcon_17_-_schemas_and_apis.pdf
Do they need to? No, not really.
Should you save yourself some space on your topic by not sending the schema as part of every message, and avoid requiring every consumer to already have the schema just to read anything? Yes, and that is what the AvroSerializer does for you: it externalizes the schema somewhere it can be consumed as a simple REST API.
The deserializer then needs to know how that schema is obtained, and you can configure it with the specific.avro.reader=true property rather than invoking fromByteBuffer manually yourself, letting the AvroDeserializer handle it.
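A hedged sketch of such a consumer configuration, assuming the User class from the question and local broker/registry addresses:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// The KafkaAvroDeserializer fetches the writer's schema from the registry and,
// with specific.avro.reader=true, hands back generated SpecificRecord objects
// (User here) instead of GenericRecords.
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumed broker address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "user-service");                // assumed group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put("schema.registry.url", "http://localhost:8081");                // assumed registry address
props.put("specific.avro.reader", "true");                                // no manual fromByteBuffer needed
KafkaConsumer<String, User> consumer = new KafkaConsumer<>(props);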
Also, in larger orgs, shuffling around a single user.avsc file (even if version-controlled) doesn't prevent that copy from becoming stale over time, nor does it handle evolution in a clean way.
One of the most important features of the Schema Registry is managing the evolution of schemas: it provides a layer of compatibility checking. By setting an appropriate Compatibility Type you determine which schema changes are allowed.
You can find all the available Compatibility Types here.
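As a hedged sketch of what setting a Compatibility Type can look like per subject (subject name, registry URL, and the FULL level are only examples; the same thing can be done with a PUT /config/<subject> REST call):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

// Override the compatibility level for a single subject (exception handling omitted).
CachedSchemaRegistryClient client =
        new CachedSchemaRegistryClient("http://localhost:8081", 100);
client.updateCompatibility("users-value", "FULL");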

Confluent Schema Registry Avro Schema

Hey, I would like to use the Confluent Schema Registry with the Avro serializers. The documentation now basically says: do not use the same schema for multiple different topics.
Can anyone explain to me why?
I researched the source code, and it basically stores the schema in a Kafka topic, where the key is (topicname, magicbytes, version) and the value is the schema itself.
Therefore I don't see the problem with using the schema multiple times, except for redundancy?
I think you are referring to this comment in the documentation:
We recommend users use the new producer in org.apache.kafka.clients.producer.KafkaProducer. If you are using a version of Kafka older than 0.8.2.0, you can plug KafkaAvroEncoder into the old producer in kafka.javaapi.producer. However, there will be some limitations. You can only use KafkaAvroEncoder for serializing the value of the message and only send value of type Avro record. The Avro schema for the value will be registered under the subject recordName-value, where recordName is the name of the Avro record. Because of this, the same Avro record type shouldn’t be used in more than one topic.
First, the commenter above is correct -- this only refers to the old producer API pre-0.8.2. It's highly recommended that you use the new producer anyway as it is a much better implementation, doesn't depend on the whole core jar, and is the client which will be maintained going forward (there isn't a specific timeline yet, but the old producer will eventually be deprecated and then removed).
However, if you are using the old producer, this restriction is only required if the schema for the two subjects might evolve separately. Suppose that you did write two applications that wrote to different topics, but use the same Avro record type, let's call it record. Now both applications will register it/look it up under the subject record-value and get assigned version=1. This is all fine as long as the schema doesn't change. But let's say application A now needs to add a field. When it does so, the schema will be registered under subject record-value and get assigned version=2. This is fine for application A, but application B has either not been upgraded to handle this schema, or worse, the schema isn't even valid for application B. However, you lose the protection the schema registry normally gives you -- now some other application could publish data of that format into the topic used by application B (it looks ok because record-value has that schema registered). Now application B could see data which it doesn't know how to handle since it's not a schema it supports.
So the short version is that because with the old producer the subject has to be shared if you also use the same schema, you end up coupling the two applications and the schemas they must support. You can use the same schema across topics, but we suggest not doing so since it couples your applications (and their development, the teams developing them, etc).
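As a small illustration of the difference (topic names are made up): with the current serializers the subject defaults to <topic>-value (the topic name strategy), so the same record type written to two topics ends up under two independent subjects and the two applications can evolve their schemas separately:

// Subjects are derived from the topic, not from the record name, so these two
// sends register/look up the schema under two independent subjects.
producer.send(new ProducerRecord<>("app-a-events", key, record));   // subject "app-a-events-value"
producer.send(new ProducerRecord<>("app-b-events", key, record));   // subject "app-b-events-value"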