Kafka Stream Ksql Json - apache-kafka

Does Kafka Streams / KSQL actually support JSON natively somehow? What other formats are supported? I have seen that it is possible to have flat JSON interpreted as a table. I want to understand that part a bit better: what other formats can Kafka Streams, via KSQL, expose for querying with SQL? How is that possible or supported? What is the native support?

KSQL
For value formats, KSQL supports AVRO, JSON, and DELIMITED (e.g. CSV).
You can find the documentation here:
https://docs.confluent.io/current/ksql/docs/developer-guide/syntax-reference.html
Kafka Streams
Kafka Streams comes with some primitive/basic SerDes (Serializers / Deserializers) under the org.apache.kafka.common.serialization package.
You can find the documentation here:
https://kafka.apache.org/22/documentation/streams/developer-guide/datatypes.html
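For example, here is a minimal sketch of wiring those built-in serdes into a topology; the topic names and broker address are placeholders:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class BuiltInSerdesExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "builtin-serdes-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Default serdes for keys and values (can be overridden per operation).
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Override the defaults explicitly for this source topic.
        KStream<String, Long> stream = builder.stream(
                "input-topic", Consumed.with(Serdes.String(), Serdes.Long()));
        stream.to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}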
Confluent also provides schema-registry compatible Avro SerDes for data in generic Avro and in specific Avro format. You can find the documentation here:
https://docs.confluent.io/current/streams/developer-guide/datatypes.html#avro
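A minimal sketch of configuring the schema-registry aware GenericAvroSerde, assuming the Confluent kafka-streams-avro-serde artifact is on the classpath; the registry URL and topic name are placeholders:

import java.util.Collections;
import java.util.Map;

import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

public class GenericAvroSerdeExample {
    public static void main(String[] args) {
        // The serde needs to know where Schema Registry lives; the URL is a placeholder.
        Map<String, String> serdeConfig =
                Collections.singletonMap("schema.registry.url", "http://localhost:8081");

        GenericAvroSerde valueSerde = new GenericAvroSerde();
        valueSerde.configure(serdeConfig, false); // false = this serde is used for record values

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, GenericRecord> stream =
                builder.stream("avro-topic", Consumed.with(Serdes.String(), valueSerde));
        // ... build the rest of the topology as usual
    }
}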
You can also use the basic JSON SerDe implementation that ships with the Kafka Streams examples:
https://github.com/apache/kafka/blob/2.2/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview/PageViewTypedDemo.java#L83
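The linked example wires Jackson into plain Serializer/Deserializer implementations. Here is a minimal sketch of the same idea, assuming a Kafka client of version 2.1 or later (where Serializer and Deserializer are single-abstract-method interfaces, so lambdas work); PageView is a hypothetical POJO, not a class taken from the example:

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

public class JsonSerdeSketch {

    // Hypothetical POJO; Jackson needs a no-arg constructor and accessible fields.
    public static class PageView {
        public String user;
        public String page;
    }

    public static Serde<PageView> pageViewSerde() {
        ObjectMapper mapper = new ObjectMapper();

        Serializer<PageView> serializer = (topic, pageView) -> {
            try {
                return pageView == null ? null : mapper.writeValueAsBytes(pageView);
            } catch (Exception e) {
                throw new RuntimeException("JSON serialization failed", e);
            }
        };

        Deserializer<PageView> deserializer = (topic, bytes) -> {
            try {
                return bytes == null ? null : mapper.readValue(bytes, PageView.class);
            } catch (Exception e) {
                throw new RuntimeException("JSON deserialization failed", e);
            }
        };

        return Serdes.serdeFrom(serializer, deserializer);
    }
}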
As a last resort, you can always create your own custom SerDes. For that, you must:
1. Write a serializer for your data type T by implementing org.apache.kafka.common.serialization.Serializer.
2. Write a deserializer for T by implementing org.apache.kafka.common.serialization.Deserializer.
3. Write a serde for T by implementing org.apache.kafka.common.serialization.Serde, which you either do manually (see the existing SerDes in the previous section) or by leveraging helper functions in Serdes such as Serdes.serdeFrom(Serializer<T>, Deserializer<T>). Note that you will need to implement your own class (one with no generic types) if you want to use your custom serde in the configuration provided to KafkaStreams. If your serde class has generic types, or you use Serdes.serdeFrom(Serializer<T>, Deserializer<T>), you can pass your serde only via method calls (for example builder.stream("topicName", Consumed.with(...))).

Related

Working with Protobuf Kafka messages from Hive

Regular (JSON) Kafka topics can be easily connected to Hive as external tables, like this:
CREATE EXTERNAL TABLE dummy_table (
  `field1` BIGINT,
  `field2` STRING,
  `field3` STRING
)
STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
TBLPROPERTIES (
  "kafka.topic" = "dummy_topic",
  "kafka.bootstrap.servers" = "dummybroker:9092"
);
But what about Protobuf-encoded topics? Can they be connected, too? I wasn't able to find any examples of this on the net.
If yes, how (and where) in the code should the .proto file be specified?
You'd have to add kafka.serde.class to the properties.
Assuming you're using the Confluent Schema Registry with your Protobuf messages: only Avro is supported for that integration.
Otherwise, there was an old project called Elephant-Bird for adding Protobuf support to Hive. I'm not sure if it still works, or whether it can be used for the Kafka SerDe config. But assuming it can, your .proto file would need to be placed in HDFS, for example, and gathered by each Hive map task.

Glue avro schema registry with flink and kafka for any object

I am trying to register and serialize an object with Flink, Kafka, Glue and Avro. I've seen this method, which I'm trying:
Schema schema = parser.parse(new File("path/to/avro/file"));
GlueSchemaRegistryAvroSerializationSchema<GenericRecord> test =
        GlueSchemaRegistryAvroSerializationSchema.forGeneric(schema, topic, configs);
FlinkKafkaProducer<GenericRecord> producer = new FlinkKafkaProducer<GenericRecord>(
        kafkaTopic,
        test,
        properties);
My problem is that this approach only lets me send a GenericRecord. The object I actually want to send is a different class, and it is very big, so big that transforming it into a GenericRecord is too complex.
I can't find much documentation on this. How can I send an object other than GenericRecord, or is there any way to wrap my object inside a GenericRecord?
I'm not sure if I understand correctly, but basically GlueSchemaRegistryAvroSerializationSchema has another method called forSpecific that accepts a SpecificRecord. So you can use an Avro generation plugin for your build tool, depending on what you use (e.g. for sbt here), which will generate classes from your Avro schema that can then be passed to the forSpecific method.
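A minimal sketch of what that could look like, assuming MyGeneratedRecord is the SpecificRecord class generated from your Avro schema and that forSpecific takes the record class, the transport name and the Glue configs in your library version (check the exact signature in the AWS Glue Schema Registry Flink integration you use):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import com.amazonaws.services.schemaregistry.flink.avro.GlueSchemaRegistryAvroSerializationSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class GlueSpecificExample {
    public static void main(String[] args) {
        String kafkaTopic = "my-topic";                                      // placeholder
        Properties kafkaProperties = new Properties();
        kafkaProperties.setProperty("bootstrap.servers", "localhost:9092");  // placeholder

        // Same Glue Schema Registry configs as in the question (region, registry name, ...).
        Map<String, Object> configs = new HashMap<>();

        // MyGeneratedRecord stands for the class generated from the Avro schema
        // (e.g. by avrohugger or the Avro Maven/Gradle plugin) that implements SpecificRecord.
        GlueSchemaRegistryAvroSerializationSchema<MyGeneratedRecord> serializationSchema =
                GlueSchemaRegistryAvroSerializationSchema.forSpecific(
                        MyGeneratedRecord.class, kafkaTopic, configs);

        FlinkKafkaProducer<MyGeneratedRecord> producer = new FlinkKafkaProducer<>(
                kafkaTopic, serializationSchema, kafkaProperties);
    }
}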

using kafka with schema-registry, in app with multiple topics and SubjectNameStrategy

To begin with, I have found a way to do this, more or less, but it produces really bad code. So I'm looking for suggestions on how to solve this better, if a better approach exists.
To lay out something to work with: assume you have an app which sends Avro to N topics and uses Schema Registry. Assume (at first) that you don't want to use Avro unions, since they bring some issues along. N-1 topics are easy: one schema per topic. But then you have data that you need to send in order, which means one topic and a specified grouping key, yet these records don't all share the same schema. To do that, you need to register multiple schemas for that one topic in Schema Registry, which implies the use of key.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy or similar. And here it becomes ugly.
But that setting is per schema registry instance, so you have to declare two (or more) schema registry instances, one for each SubjectNameStrategy key/value combination. This will work.
But then, according to the documentation, RecordNameStrategy is Java-platform only (!), so if you would like to create a service which is not language specific (which you would most probably like to do in 2021...), you cannot use RecordNameStrategy.
So if you cannot use RecordNameStrategy, and for some reason you still want to use Avro and Schema Registry, IIUC you have no other choice than to use an Avro union at the top level and the default TopicNameStrategy, which is fine now, since you have a single unioned schema. But top-level unions haven't been kind to me in the past, since the deserializer doesn't naturally know which type from the union you would like to deserialize the data as. So theoretically a way out of this could be to use, say, the CloudEvents standard (or something similar), set the CloudEvents type attribute according to which type from the union was used to serialize the data, and then keep a type->deserializer map so you can pick the correct deserializer for the Avro-encoded data in the received CloudEvents message. This will work, and not only for Java.
So to wrap up, these are two roughly described solutions to a very simple problem. But to be honest, they seem extremely complicated for such a widely adopted stack (Avro/Schema Registry). I'd like to know whether there is an easier way through this.
This is a common theme, particularly in CQRS-like systems in which commands may be ordered (e.g. create before update or delete). In these cases, using Kafka, it's often not desirable to publish the messages over multiple topics. You are correct that there are two solutions for sending messages with multiple schemas on the same topic: either a top-level union in the Avro schema, or multiple schemas per topic.
You say you don't want to use top-level unions in the schema, so I'll address the case of multiple schemas per topic. You are correct that this excludes the use of any subject naming strategy that includes only the topic name to define the subject. So TopicNameStrategy is out.
But then, according to the documentation, RecordNameStrategy is Java-platform only (!), so if you would like to create a service which is not language specific (which you would most probably like to do in 2021...), you cannot use RecordNameStrategy.
This is worthy of some clarification. In the Confluent way of things, the 'schema-registry aware Avro serializers' first register your writer schema in the registry against a subject name to obtain a schema id. They then prefix your Avro bytes with that schema id before publishing to Kafka. See the 'Confluent Wire Format' at https://docs.confluent.io/platform/current/schema-registry/serdes-develop/index.html#wire-format.
So the subject naming is a choice in the serializer library; the deserializer just resolves a schema by the id prefixing the kafka message. The confluent Java serializers make this subject naming configurable and define strategies TopicNameStrategy, RecordNameStrategy and TopicRecordNameStrategy. See https://docs.confluent.io/platform/current/schema-registry/serdes-develop/index.html#subject-name-strategy. The three strategies are conventions for defining 'scopes' over which schemas will be tested for compatibility in the registry (per-topic, per-record, or a combination). You've identified RecordNameStrategy fits your use case for having multiple avro schemas per topic.
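As a concrete illustration of that wire format, here is a small sketch that extracts the schema id from a Confluent-framed Kafka value (magic byte 0, then a 4-byte big-endian schema id, then the Avro payload):

import java.nio.ByteBuffer;

public class ConfluentWireFormat {

    /**
     * Extracts the Schema Registry schema id from a Confluent-framed Kafka value:
     * byte 0 is the magic byte (0), bytes 1-4 are the schema id as a big-endian int,
     * and the Avro-encoded payload follows.
     */
    public static int schemaId(byte[] kafkaValue) {
        ByteBuffer buffer = ByteBuffer.wrap(kafkaValue);
        byte magicByte = buffer.get();
        if (magicByte != 0) {
            throw new IllegalArgumentException("Not in Confluent wire format");
        }
        return buffer.getInt();
    }
}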
However, I think your concern about non-Java support for RecordNameStrategy can be set aside. In the serializer, the subject naming is free to be implemented however the serializer developer chooses. Having worked on this stuff in Java, Python, Go and NodeJS, I've experienced some variety in how third-party serializers work in this regard. Nevertheless, working non-Java libs do exist.
If all else fails, you can write your own 'schema-registry aware serializer' that registers the schema with your chosen subject name prior to encoding the confluent wire-format for Kafka. I've had happy outcomes from other tooling by keeping to one of the well-known confluent strategies, so I can recommend mimicking them.
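For reference, with the Confluent Java serializer the subject naming strategy is just producer configuration; here is a minimal sketch, with the broker and registry URLs as placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class RecordNameStrategyProducerConfig {
    public static KafkaProducer<String, Object> buildProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");                 // placeholder
        // Register subjects per record name, so multiple schemas can share one topic.
        props.put("value.subject.name.strategy",
                "io.confluent.kafka.serializers.subject.RecordNameStrategy");
        return new KafkaProducer<>(props);
    }
}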

Reading runtime values using AvroIO class

I need to read an Avro file in Apache Beam using AvroIO by passing the schema and file path dynamically. Is there any way we can pass a ValueProvider, a side input, or anything else to AvroIO.read?
Below is the code that I'm using:
PCollection<GenericRecord> records = p.apply(
    AvroIO.readGenericRecords(dynamicallyProvidedSchema)
        .from(dynamicallyProvidedFilePath));
AvroIO.read().from() can take a ValueProvider. For a dynamically provided schema, Beam 2.2 (whose release is currently in progress) includes AvroIO.parseGenericRecords(), which lets you avoid specifying a schema altogether; you just have to specify a function from GenericRecord to your custom type.
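A minimal sketch of the parseGenericRecords approach against a recent Beam SDK (the exact builder methods may vary slightly by version), assuming a hypothetical options interface that exposes the file pattern as a ValueProvider and a hypothetical "name" field in the records; the same kind of .from() overload accepts a ValueProvider for readGenericRecords as well:

import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.PCollection;

public class ParseGenericRecordsExample {

    // Hypothetical options interface that exposes the file pattern as a ValueProvider,
    // so it can be supplied when the (templated) pipeline actually runs.
    public interface Options extends PipelineOptions {
        ValueProvider<String> getInputFilePattern();
        void setInputFilePattern(ValueProvider<String> value);
    }

    public static void main(String[] args) {
        Options options = PipelineOptionsFactory.fromArgs(args).as(Options.class);
        Pipeline p = Pipeline.create(options);

        // parseGenericRecords avoids fixing a schema up front: Beam reads the writer
        // schema from each file and hands the function a GenericRecord to convert.
        PCollection<String> names = p.apply(
                AvroIO.parseGenericRecords(
                        (SerializableFunction<GenericRecord, String>)
                                record -> String.valueOf(record.get("name"))) // "name" is a hypothetical field
                        .withCoder(StringUtf8Coder.of())
                        .from(options.getInputFilePattern()));

        p.run();
    }
}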

How to find out Avro schema from binary data that comes in via Spark Streaming?

I set up a Spark Streaming pipeline that receives measurement data via Kafka. This data was serialized using Avro. The data can be of two types: EquidistantData and DiscreteData. I created these using an avdl file and the sbt-avrohugger plugin. I use the variant that generates Scala case classes that inherit from SpecificRecord.
In my receiving application, I can get the two schemas by querying EquidistantData.SCHEMA$ and DiscreteData.SCHEMA$.
Now, my Kafka stream gives me RDDs whose value class is Array[Byte]. So far so good.
How can I find out from the byte array which schema was used when serializing it, i.e., whether to use EquidistantData.SCHEMA$ or DiscreteData.SCHEMA$?
I thought of including appropriate info in the message key. Currently, I don't use the message key. Would this be a feasible approach, or can I somehow determine the schema from the serialized byte array I received?
Followup:
Another possibility would be to use separate topics for discrete and equidistant data. Would this be feasible?