I have Filebeat which outputs to a Kafka topic, and I would like to make sure that the messages are in the correct format by validating them against an Avro schema.
The Filebeat documentation mentions two possible output codecs, json and format.
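For reference, the two codecs are configured in filebeat.yml roughly like this (a sketch; the broker address, topic name, and format string are placeholders):

output.kafka:
  hosts: ["localhost:9092"]
  topic: "filebeat-logs"
  # json codec: each event is sent as a JSON document
  codec.json:
    pretty: false
  # ...or alternatively the format codec, which renders each event with a format string:
  # codec.format:
  #   string: '%{[message]}'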
In Kafka, the Schema Registry can be used to store Avro schemas.
My questions:
Is it possible for Filebeat to send Avro messages (validated against an Avro schema); and if so, can the schema be referenced from the Schema Registry so it doesn't have to be copied over physically on each new schema version?
Can Filebeat's json or format output be treated as Avro by Kafka?
If Filebeat can't produce validated Avro messages, can the JSON it sends be validated on Kafka's side when it is written to the topic? If so, can invalid messages be dropped or logged somewhere?
Related
I have an app that periodically produces an array of messages in raw JSON. I was able to convert that to Avro using avro-tools. I did that because I needed the messages to include the schema, due to the limitations of the Kafka Connect JDBC sink. I can open this file in Notepad++ and see that it includes the schema and a few lines of data.
Now I would like to send this to my central Kafka broker and then use the Kafka Connect JDBC sink to put the data in a database. I am having a hard time understanding how I should send these Avro files to my Kafka broker. Do I need a Schema Registry for my purposes? I believe kafkacat does not support Avro, so I suppose I will have to stick with the kafka-console-producer.sh that comes with the Kafka installation (please correct me if I am wrong).
Question is: can someone please share the steps to produce my Avro file to a Kafka broker without getting Confluent involved?
Thanks,
To use the Kafka Connect JDBC Sink, your data needs an explicit schema. The converter that you specify in your connector configuration determines where the schema is held. It can either be embedded within the JSON message (org.apache.kafka.connect.json.JsonConverter with schemas.enable=true) or held in the Schema Registry (one of io.confluent.connect.avro.AvroConverter, io.confluent.connect.protobuf.ProtobufConverter, or io.confluent.connect.json.JsonSchemaConverter).
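As a rough sketch (the registry URL below is a placeholder), the two approaches look like this in a connector configuration:

value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true

# or, with the schema held in the Schema Registry:
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081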
To learn more about this see https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained
To write an Avro message to Kafka you should serialise it as Avro and store the schema in the Schema Registry. There is a Go client library you can use, with examples.
"without getting Confluent involved"
It's not entirely clear what you mean by this. The Kafka Connect JDBC Sink is written by Confluent. The best way to manage schemas is with the Schema Registry. If you don't want to use the Schema Registry then you can embed the schema in your JSON message but it's a suboptimal way of doing things.
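If you do go the embedded-schema route, each JSON message then has to carry a schema/payload envelope, roughly like this (the field names here are invented for illustration):

{
  "schema": {
    "type": "struct",
    "name": "example.Record",
    "optional": false,
    "fields": [
      {"field": "id", "type": "int32", "optional": false},
      {"field": "name", "type": "string", "optional": true}
    ]
  },
  "payload": {"id": 1, "name": "foo"}
}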
I have a Kafka topic where the values are MessagePack-encoded.
Is there any way to sink the records from this topic into MongoDB using the MongoDB Kafka connector, or must the record values simply be stored as JSON?
You will need to find or create your own Kafka Connect converter for MessagePack, add that package to each Connect worker's classpath, and then set it as your key/value converter. From there, the existing MongoDB sink connector can deserialize the messages into Connect's Struct-and-Schema form and handle them correctly.
JSON was never a requirement; Avro and Protobuf should work as well.
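As a sketch, assuming a hypothetical converter class com.example.MessagePackConverter is available on each worker's classpath, the connector (or worker) config would look something like:

# hypothetical class implementing org.apache.kafka.connect.storage.Converter
value.converter=com.example.MessagePackConverter
key.converter=org.apache.kafka.connect.storage.StringConverter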
I am new to both NiFi and Avro. According to my understanding, if we use a schema registry the schema won't be added to the Avro content that is published to Kafka; only the schema ID will be sent. Is that correct?
How can I publish and consume through Kafka using the Hortonworks Schema Registry, with Avro serialization and deserialization?
In NiFi's ConvertJSONToAvro the schema is embedded while sending. So, is there any other processor that uses the schema registry and doesn't send the schema while publishing?
On the publishing side you would use PublishKafkaRecord (with the version corresponding to your Kafka broker) and configure it with a JsonTreeReader and an AvroRecordSetWriter. In the record writer you would set the Schema Write Strategy to Hortonworks Content Encoded.
On the consuming side you would use ConsumeKafkaRecord (same version as on the publish side) and configure it with an AvroRecordReader and a JsonRecordWriter. In the reader you would set the Schema Access Strategy to Hortonworks Content Encoded.
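In other words, the setup is roughly this (processor versions, topic, and broker are whatever matches your environment):

PublishKafkaRecord  -> Record Reader: JsonTreeReader
                       Record Writer: AvroRecordSetWriter (Schema Write Strategy: Hortonworks Content Encoded)
ConsumeKafkaRecord  -> Record Reader: AvroRecordReader (Schema Access Strategy: Hortonworks Content Encoded)
                       Record Writer: JsonRecordWriter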
I'm trying to read data from DB2 using Kafka and then write it to HDFS. I use the distributed Confluent platform with the standard JDBC and HDFS connectors.
As the HDFS connector needs to know the schema, it requires Avro data as input. Thus, I have to specify the following Avro converters for the data fed into Kafka (in etc/kafka/connect-distributed.properties):
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
I then run my JDBC connector and check with the console-avro-consumer that I can successfully read the data fetched from DB2.
However, when I launch the HDFS Connector, it does not work anymore. Instead, it outputs SerializationException:
Error deserializing Avro message for id -1
... Unknown magic byte!
To check if this is a problem with the HDFS connector, I tried to use a simple FileSink connector instead. However, I saw exactly the same exception when using the FileSink (and the file itself was created but stayed empty).
I then carried out the following experiment: instead of using the Avro converter for the key and value, I used JSON converters:
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schema.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schema.enable=false
This fixed the problem with the FileSink connector, i.e., the whole pipeline from DB2 to the file worked fine. However, for the HDFS connector this solution is infeasible, as the connector needs the schema and consequently the Avro format as input.
It feels to me that the deserialization of the Avro format in the sink connectors is not implemented properly, since the console-avro-consumer can still successfully read the data.
Does anyone have any idea what could be the reason for this behavior? I'd also appreciate an idea of a simple fix for this!
"check with the console-avro-consumer that I can successfully read the data fetched"
I'm guessing you didn't add --property print.key=true --from-beginning when you did that.
It's possible that the latest values are Avro, but Connect is clearly failing somewhere on the topic, so you need to scan it to find out where that happens.
If using JsonConverter works, and the data is actually readable JSON on disk, then it sounds like the JDBC connector actually wrote JSON, not Avro.
If you are able to pinpoint the offset of the bad message, you can use the regular console consumer with the connector's group id set, then add --max-messages along with a specified partition and offset to skip those events.
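For example, the scan and the pinpointing could look something like this (topic name, partition, offset, and registry URL are placeholders):

# scan the whole topic, printing keys, with the Avro console consumer
kafka-avro-console-consumer --bootstrap-server localhost:9092 \
  --topic db2-table --from-beginning \
  --property print.key=true \
  --property schema.registry.url=http://localhost:8081

# inspect one suspect record with the plain console consumer
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic db2-table --partition 0 --offset 1234 --max-messages 1 \
  --property print.key=true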
I'm trying to use the Kafka Connect HDFS sink to write files from Kafka to HDFS.
My properties look like this:
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
flush.size=3
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
schema.compatability=BACKWARD
key.converter.schemas.enabled=false
value.converter.schemas.enabled=false
schemas.enable=false
And when I try to run the connector, I get the following exception:
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
I'm using Confluent version 4.0.0.
Any suggestions please?
My understanding of this issue is that if you set schemas.enable=true, you tell Kafka Connect that you want the schema included in the messages it transfers. In that case a message is not plain JSON: it first describes the schema and then attaches the payload (i.e., the actual data) that corresponds to that schema, similar to how an Avro file embeds its schema. This leads to the conflict: on the one hand you've specified JsonConverter for your data, and on the other hand you ask for the schema to be included in messages that don't carry one. To fix this, either use the AvroConverter (with the schema held in the Schema Registry) or the JsonConverter with schemas.enable=false. Note too that the property name is schemas.enable and it must be prefixed per converter (key.converter.schemas.enable / value.converter.schemas.enable), so a setting spelled schemas.enabled is silently ignored.
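Spelled out, the two configurations described above look roughly like this (the registry URL is a placeholder; note the per-converter prefix on schemas.enable):

# plain JSON without an embedded schema
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

# or Avro, with the schema kept in the Schema Registry
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081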