Do I really need avro4s when using kafka schema registry? - scala

I noticed confluent has a kafka serializer that will let me serialize and de-serialize my case classes from my kafka topic, and it will pull the schema from the registry.
If this is the case, what benefit would I get by using avro4s?

You have no obligation to use avro4s. In fact, you do not have to use Avro at all: Kafka does not care which format you use for serialization. That said, Avro is the de facto (de)serialization format for Kafka, and the serializer you noticed in the Confluent suite (the one backed by the Schema Registry) is also Avro-based. The only thing you need is a dependency on Avro: https://mvnrepository.com/artifact/org.apache.avro/avro/1.10.1
Also, use the sbt-avro plugin. It is not strictly necessary, but your life will be much harder without it: https://github.com/sbt/sbt-avro
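A minimal sbt wiring might look like this (version numbers and plugin coordinates are illustrative; check the links above for the current ones):

```scala
// project/plugins.sbt: sbt-avro generates Java classes from .avsc/.avdl files at compile time
addSbtPlugin("com.github.sbt" % "sbt-avro" % "3.4.0")
```

```scala
// build.sbt: the Avro runtime dependency from the link above
libraryDependencies += "org.apache.avro" % "avro" % "1.10.1"
```

With that in place, schemas dropped under `src/main/avro` get compiled into record classes you can hand to the Confluent serializer.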

Related

How to deserialize avro message using mirrormaker?

I want to replicate a kafka topic to an azure event hub.
The messages are in Avro format and use a schema that is behind a schema registry with USER_INFO authentication.
Using a java client to connect to kafka, I can use a KafkaAvroDeserializer to deserialize the message correctly.
But this configuration doesn't seem to work with MirrorMaker.
Is it possible to deserialize the Avro messages using MirrorMaker before sending them?
Cheers
For MirrorMaker 1, the consumer deserializer properties are hard-coded.
Unless you plan on re-serializing the data into a different format when the producer sends data to EventHub, you should stick to using the default ByteArrayDeserializer.
If you did want to manipulate the messages in any way, that would need to be done with a MirrorMakerMessageHandler subclass.
For MirrorMaker 2, you can use AvroConverter followed by some transforms properties, but ByteArrayConverter would still be preferred for a one-to-one byte copy.
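For example, a MirrorMaker 2 properties sketch keeping the one-to-one byte copy could look roughly like this (cluster aliases, bootstrap servers, and topic names are placeholders):

```properties
# mm2.properties (sketch)
clusters = source, target
source.bootstrap.servers = source-kafka:9092
target.bootstrap.servers = target-eventhub-namespace:9093

source->target.enabled = true
source->target.topics = my-topic

# One-to-one byte copy: leave the Avro payload opaque end to end
key.converter = org.apache.kafka.connect.converters.ByteArrayConverter
value.converter = org.apache.kafka.connect.converters.ByteArrayConverter
```

Swapping in AvroConverter here (plus the registry URL and auth settings) is what would force deserialization, but for pure replication the byte copy avoids needing registry credentials at all.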

Confluent Schema Registry/Kafka Streams: prevent schema evolution

Is there a way to configure Confluent Schema Registry and/or Kafka Streams to prevent schema evolution?
Motivation
We have multiple Kafka Streams jobs producing messages for the same topic. All of the jobs should send messages with the same schema, but due to misconfiguration of the jobs, it has happened that some of them send messages with fields missing. This has caused issues downstream and is something we want to prevent.
When this happens, we can see a schema evolution in the schema registry as expected.
Solution
We checked the documentation for Confluent Schema Registry and/or Kafka Streams, but couldn't find a way to prevent the schema evolution.
Hence, we are considering modifying the Kafka Streams jobs to read the schema from the Confluent Schema Registry before sending. Only if the retrieved schema matches the local schema of the messages do we send them.
Is this the right way to go or did we miss a better option?
Update: we found an article on medium for validating the schema against the schema registry before sending.
It depends on which language and library you use and what kind of APIs they provide. If you are publishing generic records, you can read and parse an .avdl or .avsc file into the record type and build your event from it. That means if the event you are trying to build is not compatible with the current schema, you will not even be able to build the event, and hence will not be able to modify the existing schema. So in this case, simply store a static schema alongside your source code. With specific records it is more or less the same: you generate your Java/C#/other-language classes from the schema, then simply new them up and publish. Does that make sense?
P.S. I have worked with the C# libraries for Kafka; other languages may not have that support, or may have better options.
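As a rough illustration of the "check before send" idea from the question: a real implementation would fetch the latest subject version via a schema registry client and compare parsed `org.apache.avro.Schema` objects, but the guard itself can be sketched with a plain string comparison. The class and method names here are hypothetical, and stripping whitespace is a crude stand-in for proper schema equality (it ignores JSON key ordering and would mangle doc strings containing spaces).

```java
// Sketch of guarding a send on schema agreement with the registry.
// schemasMatch is a deliberately naive comparison; real code should parse
// both sides with org.apache.avro.Schema.Parser and compare Schema objects.
public class SchemaGuard {

    /** Crude structural comparison: strip all whitespace and compare. */
    public static boolean schemasMatch(String localSchema, String registrySchema) {
        if (localSchema == null || registrySchema == null) {
            return false;
        }
        return localSchema.replaceAll("\\s+", "")
                .equals(registrySchema.replaceAll("\\s+", ""));
    }

    public static void main(String[] args) {
        // localSchema: what this job was compiled against.
        // registrySchema: what a registry lookup (e.g. GET /subjects/.../versions/latest) returned.
        String localSchema    = "{\"type\": \"record\", \"name\": \"User\", \"fields\": []}";
        String registrySchema = "{\"type\":\"record\",\"name\":\"User\",\"fields\":[]}";
        if (schemasMatch(localSchema, registrySchema)) {
            System.out.println("schemas match -> safe to send");
        } else {
            System.out.println("schema drift -> refuse to send");
        }
    }
}
```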

how can I pass KafkaAvroSerializer into a Kafka ProducerRecord?

I have messages being streamed to Kafka. I would like to convert the messages into Avro binary format (i.e., encode them).
I'm using the confluent platform. I have a Kafka ProducerRecord[String,String] which sends the messages to the Kafka topic.
Can someone provide with a (short) example? Or recommend a website with examples?
Does anyone know how I can pass an instance of a KafkaAvroSerializer into the KafkaProducer?
Can I use an Avro GenericRecord instance inside the ProducerRecord?
Kind regards
Nika
You need to use the KafkaAvroSerializer in your producer config for either serializer setting (key.serializer and/or value.serializer), and also set the schema registry URL in the producer config (AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG).
That serializer will Avro-encode primitives and strings, but if you need complex objects, you could try adding Avro4s, for example. Otherwise, GenericRecord will work as well.
Java example is here - https://docs.confluent.io/current/schema-registry/serializer-formatter.html
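A configuration sketch of the above (class names are given as plain strings so the snippet compiles without the Kafka/Confluent jars; broker and registry addresses are placeholders):

```java
import java.util.Properties;

// Producer configuration for Avro values. In a real project you would use the
// ProducerConfig / AbstractKafkaAvroSerDeConfig constants instead of raw keys.
public class AvroProducerConfig {

    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG resolves to this key:
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder registry
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConfig();
        // With these props you would then create (requires the Kafka/Confluent jars):
        //   KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props);
        //   producer.send(new ProducerRecord<>("my-topic", "some-key", someGenericRecord));
        System.out.println(props.getProperty("value.serializer"));
    }
}
```

Note the producer is typed as `<String, GenericRecord>` rather than `<String, String>`, so the ProducerRecord value can be a GenericRecord directly.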

Why do we need to specify Serializer in Apache Kafka?

I got this doubt from this question.
When I am not using Kafka Streams, why do I need to use a Serializer while creating a ZkClient?
Kafka heavily uses ZooKeeper for storing metadata (topics, etc.). For that, the com.101tec::zkClient library is used. According to the ZkClient source code, it requires a ZkSerializer for serializing/deserializing the data sent to and retrieved from ZooKeeper. Kafka ships its own implementation of ZkSerializer: ZKStringSerializer (defined in ZkUtils).
However, for normal interaction with Kafka (producing/consuming) you do not need to create a ZkClient; it is required only for 'administrative' work.
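To make the contract concrete, ZKStringSerializer does little more than a UTF-8 round trip. The interface below is a local stand-in mirroring the zkClient library's ZkSerializer contract (serialize/deserialize on byte arrays), declared here only so the sketch compiles on its own; the real interface lives in the com.101tec library.

```java
import java.nio.charset.StandardCharsets;

// Illustration of what Kafka's ZKStringSerializer does with ZooKeeper data.
public class ZkStringSerializerSketch {

    // Local stand-in for the zkClient library's ZkSerializer interface.
    interface ZkSerializer {
        byte[] serialize(Object data);
        Object deserialize(byte[] bytes);
    }

    // Strings go to/from UTF-8 bytes; null is passed through unchanged.
    static final ZkSerializer STRING_SERIALIZER = new ZkSerializer() {
        @Override
        public byte[] serialize(Object data) {
            return data == null ? null : data.toString().getBytes(StandardCharsets.UTF_8);
        }

        @Override
        public Object deserialize(byte[] bytes) {
            return bytes == null ? null : new String(bytes, StandardCharsets.UTF_8);
        }
    };

    public static void main(String[] args) {
        byte[] raw = STRING_SERIALIZER.serialize("/brokers/topics/my-topic");
        System.out.println(STRING_SERIALIZER.deserialize(raw));
    }
}
```

So the serializer is not about your message payloads at all; it only governs how metadata strings are written to and read from ZooKeeper znodes.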

Kafka Connect: How can I send protobuf data from Kafka topics to HDFS using hdfs sink connector?

I have a producer that's producing protobuf messages to a topic. I have a consumer application which deserializes the protobuf messages. But hdfs sink connector picks up messages from the Kafka topics directly. What would the key and value converter in etc/schema-registry/connect-avro-standalone.properties be set to? What's the best way to do this? Thanks in advance!
Kafka Connect is designed to separate the concern of serialization format in Kafka from individual connectors with the concept of converters. As you seem to have found, you'll need to adjust the key.converter and value.converter classes to implementations that support protobufs. These classes are commonly implemented as a normal Kafka Deserializer followed by a step which performs a conversion from serialization-specific runtime formats (e.g. Message in protobufs) to Kafka Connect's runtime API (which doesn't have any associated serialization format -- it's just a set of Java types and a class to define Schemas).
I'm not aware of an existing implementation. The main challenge in implementing this is that protobufs is self-describing (i.e. you can deserialize it without access to the original schema), but since its fields are simply integer IDs, you probably wouldn't get useful schema information without either a) requiring that the specific schema is available to the converter, e.g. via config (which makes migrating schemas more complicated) or b) a schema registry service + wrapper format for your data that allows you to look up the schema dynamically.
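In terms of where such a converter would plug in, the relevant lines in etc/schema-registry/connect-avro-standalone.properties would look roughly like this (the protobuf converter class name is hypothetical, standing in for whatever implementation you write or find):

```properties
# Replace the default AvroConverter with a protobuf-capable one (hypothetical class):
key.converter = com.example.connect.ProtobufConverter
value.converter = com.example.connect.ProtobufConverter

# Fallback: treat records as opaque bytes and write them to HDFS unconverted
# key.converter = org.apache.kafka.connect.converters.ByteArrayConverter
# value.converter = org.apache.kafka.connect.converters.ByteArrayConverter
```

The ByteArrayConverter fallback sidesteps the schema problem entirely, at the cost of landing raw protobuf bytes in HDFS rather than a structured format.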