I wanted to know if there is a better way to solve the problem we are having. Here is the flow:
Our client code understands only protocol buffers (protobuf). On the server side, our gateway gets the protobuf and puts it onto Kafka.
Now Avro is the recommended encoding scheme, so we put the specific protobuf inside an Avro record (as a byte array) and put that onto the message bus. The reason we do this is to avoid a full protobuf->Avro conversion.
On the consumer side, we read the Avro message, extract the protobuf from it, and work on that.
How reliable is protobuf with Kafka? Are there a lot of people using it? What exactly are the advantages/disadvantages of using Kafka with protobuf?
Is there a better way to handle our use case/scenario?
thanks
Kafka doesn't differentiate between encoding schemes, since in the end every message flows in and out of Kafka as binary.
Both Protobuf and Avro are binary encoding schemes, so why would you want to wrap a Protobuf message inside an Avro schema when you can put the Protobuf message directly into Kafka?
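For example, a minimal Java sketch of a producer that writes the protobuf bytes straight to a topic (MyEvent, the topic name, and the broker address are placeholders for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProtobufProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            // MyEvent is a hypothetical generated protobuf class; toByteArray()
            // yields the protobuf wire-format bytes, which Kafka stores as-is.
            byte[] protoBytes = MyEvent.newBuilder().setId("42").build().toByteArray();
            producer.send(new ProducerRecord<>("events", "42", protoBytes));
        }
    }
}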
Related
I want to replicate a Kafka topic to an Azure Event Hub.
The messages are in Avro format and use a schema that is behind a schema registry with USER_INFO authentication.
Using a Java client to connect to Kafka, I can use a KafkaAvroDeserializer to deserialize the message correctly.
But this configuration doesn't seem to work with MirrorMaker.
Is it possible to deserialize the Avro message using MirrorMaker before sending it?
Cheers
For MirrorMaker 1, the consumer deserializer properties are hard-coded.
Unless you plan on re-serializing the data into a different format when the producer sends data to EventHub, you should stick to using the default ByteArrayDeserializer.
If you did want to manipulate the messages in any way, that would need to be done with a MirrorMakerMessageHandler subclass.
For MirrorMaker 2, you can use AvroConverter followed by some transforms properties, but ByteArrayConverter would still be preferred for a one-to-one byte copy.
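As a rough sketch of a MirrorMaker 2 properties file doing a plain byte-for-byte copy (cluster aliases, topic, and the EventHub endpoint are placeholders, and the SASL settings EventHub requires are left out):

# connect-mirror-maker.properties (sketch)
clusters = source, eventhub
source.bootstrap.servers = source-broker:9092
eventhub.bootstrap.servers = my-namespace.servicebus.windows.net:9093

source->eventhub.enabled = true
source->eventhub.topics = my-avro-topic

# Copy raw bytes one-to-one; no Avro deserialization along the way
key.converter = org.apache.kafka.connect.converters.ByteArrayConverter
value.converter = org.apache.kafka.connect.converters.ByteArrayConverter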
I only found references to MirrorMaker v2.
Can I reuse org.apache.kafka.connect.mirror.MirrorSourceConnector as if it were a "plain" Connector with Kafka as a source, or is there something else, hopefully simpler, available?
I'm trying to use Kafka Connect and (a combination of) its SMTs to simulate message routing behaviour found in other message brokers.
For example, I would like to consume from a topic, extract values from the message (either headers or payload), and route the message to another topic within the same cluster depending on the data found in the message.
Thank you
within the same cluster
Then that's what Kafka Streams or ksqlDB are for. You can import and use SMT methods directly via code, although you also need to use the Connect Converter classes to get the Schema/Struct types that most of the SMTs require.
While you could use MirrorMaker, it is meant for replication between clusters, not for routing within one.
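For example, a minimal Kafka Streams sketch that routes each record to a topic taken from a message header (the header name, topic names, and application id are made up for illustration):

import java.util.Properties;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class HeaderRouterSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "header-router");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("incoming-topic");

        // Route each record to a topic chosen from the record itself;
        // here the target topic name is read from a "route" header.
        input.to((key, value, recordContext) -> {
            Header route = recordContext.headers().lastHeader("route");
            return route != null ? new String(route.value()) : "unrouted-topic";
        });

        new KafkaStreams(builder.build(), props).start();
    }
}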
I have a Kafka program that sends a single message from the producer, and it is consumed by the consumer successfully. But I have a question: is there any way to send a file instead of a single text message? If yes, how can it be done?
You can send whatever you want in Kafka. Kafka won't infer anything from your data; it will handle it as a simple array of bytes. However, be careful about the size of your Kafka record: some cluster parameters like message.max.bytes (and many others) have to be updated accordingly.
So to answer your question, just read your file using any kind of IO reader (depending on your programming language) and send it using a bytes or String serializer.
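For example, in Java it could look roughly like this (the file path, topic name, and broker address are placeholders):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class FileProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        // Read the whole file into memory; keep it below message.max.bytes.
        byte[] fileBytes = Files.readAllBytes(Paths.get("/tmp/example.txt"));

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("files-topic", "example.txt", fileBytes));
        }
    }
}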
Yannick
I have messages which are being streamed to Kafka. I would like to convert the messages to Avro binary format (that is, encode them).
I'm using the Confluent Platform. I have a Kafka ProducerRecord[String,String] which sends the messages to the Kafka topic.
Can someone provide with a (short) example? Or recommend a website with examples?
Does anyone know how I can pass an instance of a KafkaAvroSerializer into the KafkaProducer?
Can I use an Avro GenericRecord instance inside the ProducerRecord?
Kind regards
Nika
You need to use the KafkaAvroSerializer in your producer config for either serializer config (key and/or value), and also set the schema registry URL in the producer config (AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG).
That serializer will Avro-encode primitives and strings, but if you need complex objects, you could try adding Avro4s, for example. Otherwise, GenericRecord will work as well.
Java example is here - https://docs.confluent.io/current/schema-registry/serializer-formatter.html
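As a rough sketch (not taken from the linked docs), a producer configured with the KafkaAvroSerializer and a GenericRecord value could look like this; the schema, topic name, and registry URL are placeholders:

import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");

        // A simple schema defined inline; normally this would live in an .avsc file.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Message\",\"fields\":[{\"name\":\"text\",\"type\":\"string\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("text", "hello avro");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("avro-topic", "key1", record));
        }
    }
}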
I was wondering about what types of data we could have in Kafka topics.
As far as I know, at the application level messages are key-value pairs, and these could be of any data type supported by the language.
For example, when we send messages to a topic, could they be JSON, Parquet files, or serialized data, or do we operate on messages only as plain text?
Thanks for your help.
There are various message formats depending on if you are talking about the APIs, the wire protocol, or the on disk storage.
Some of these Kafka Message formats are described in the docs here
https://kafka.apache.org/documentation/#messageformat
Kafka has the concept of a Serializer/Deserializer or SerDes (pronounced Sir-Deez).
https://en.m.wikipedia.org/wiki/SerDes
A Serializer is a function that takes any message and converts it into the byte array that is actually sent on the wire using the Kafka protocol.
A Deserializer does the opposite, it reads the raw message bytes portion of the Kafka wire protocol and re-creates a message as you want the receiving application to see it.
There are built-in SerDes libraries for Strings, Longs, ByteArrays, and ByteBuffers, and a wealth of community SerDes libraries for JSON, Protobuf, and Avro, as well as application-specific message formats.
You can build your own SerDes libraries as well; see the following:
How to create Custom serializer in kafka?
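As a rough sketch of such a custom serializer (Person is a hypothetical class with a getName() method, serialized here as UTF-8 text):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serializer;

// Person is a hypothetical application class; this serializer turns it into bytes.
public class PersonSerializer implements Serializer<Person> {
    @Override
    public byte[] serialize(String topic, Person person) {
        if (person == null) {
            return null;
        }
        // Encode only the name as UTF-8 text for this sketch.
        return person.getName().getBytes(StandardCharsets.UTF_8);
    }
}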
On the topic it's always just serialised data. Serialisation happens in the producer before sending, and deserialisation in the consumer after fetching. Serializers and deserializers are pluggable, so, as you said, at the application level it's key-value pairs of any data type you want.
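For example, a consumer that plugs in the StringDeserializer for both key and value might look roughly like this (broker address, group id, and topic name are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StringConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        // The deserializers are pluggable: swap these classes to change how the
        // raw bytes fetched from the topic are turned back into objects.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}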