PySpark stream from Kafka topic with Avro format returns null DataFrame

I have a Kafka topic with Avro-formatted values and I want to read it as a stream in PySpark, but the output is null. My data looks like this:
{
"ID": 559,
"DueDate": 1676362642000,
"Number": 1,
"__deleted": "false"
}
and the schema in the schema registry is:
{
"type": "record",
"name": "Value",
"namespace": "test",
"fields": [
{
"name": "ID",
"type": "long"
},
{
"name": "DueDate",
"type": {
"type": "long",
"connect.version": 1,
"connect.name": "io.debezium.time.Timestamp"
}
},
{
"name": "Number",
"type": "long"
},
{
"name": "StartDate",
"type": [
"null",
{
"type": "long",
"connect.version": 1,
"connect.name": "io.debezium.time.Timestamp"
}
],
"default": null
},
{
"name": "__deleted",
"type": [
"null",
"string"
],
"default": null
}
],
"connect.name": "test.Value"
}
and the schema I defined in PySpark is:
from pyspark.sql.types import StructType, StructField, LongType, StringType

schema = StructType([
    StructField("ID", LongType(), False),
    StructField("DueDate", LongType(), False),
    StructField("Number", LongType(), False),
    StructField("StartDate", LongType(), True),
    StructField("__deleted", StringType(), True)
])
In the result, I see a null DataFrame.
I expect the values in the DataFrame to match the records in the Kafka topic, but all columns are null.
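For reference, here is how such a stream is typically decoded when the topic is produced with the Confluent Avro serializer (the usual setup with Debezium and the Schema Registry, though that is an assumption about this environment): the Kafka value carries a 5-byte header (magic byte plus schema id) that has to be stripped before decoding, and from_avro expects the Avro schema as a JSON string rather than a Spark StructType. A minimal sketch with placeholder broker and topic names:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, expr
from pyspark.sql.avro.functions import from_avro   # needs the org.apache.spark:spark-avro package

spark = SparkSession.builder.appName("avro-stream").getOrCreate()

# from_avro takes the Avro schema as a JSON string, not a StructType;
# the connect.* attributes can be dropped without affecting decoding.
avro_schema = """
{
  "type": "record", "name": "Value", "namespace": "test",
  "fields": [
    {"name": "ID", "type": "long"},
    {"name": "DueDate", "type": "long"},
    {"name": "Number", "type": "long"},
    {"name": "StartDate", "type": ["null", "long"], "default": null},
    {"name": "__deleted", "type": ["null", "string"], "default": null}
  ]
}
"""

df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
      .option("subscribe", "my-topic")                     # placeholder
      .option("startingOffsets", "earliest")
      .load())

# Skip the 5-byte Confluent wire-format header (magic byte + schema id);
# feeding it to from_avro is a common cause of all-null or failed decoding.
decoded = (df
    .withColumn("payload", expr("substring(value, 6, length(value) - 5)"))
    .select(from_avro(col("payload"), avro_schema).alias("data"))
    .select("data.*"))

query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()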

Related

Add MySQL column comments as metadata to the Avro schema through the Debezium connector

Kafka Connect is used through the Confluent Platform, and io.debezium.connector.mysql.MySqlConnector is used as the Debezium connector.
In my case, the MySQL table includes columns with sensitive data, and these columns must be tagged as sensitive for further use.
SHOW FULL COLUMNS FROM astronauts;
+---------+--------------+--------------------+------+-----+---------+-------+---------------------------------+------------------+
| Field | Type | Collation | Null | Key | Default | Extra | Privileges | Comment |
+---------+--------------+--------------------+------+-----+---------+-------+---------------------------------+------------------+
| orderid | int | NULL | YES | | NULL | | select,insert,update,references | |
| name | varchar(100) | utf8mb4_0900_ai_ci | NO | | NULL | | select,insert,update,references | sensitive column |
+---------+--------------+--------------------+------+-----+---------+-------+---------------------------------+------------------+
Notice the MySQL comment on the name column.
Based on this table, I would like to have this Avro schema in the Schema registry:
{
"connect.name": "dbserver1.inventory.astronauts.Envelope",
"connect.version": 1,
"fields": [
{
"default": null,
"name": "before",
"type": [
"null",
{
"connect.name": "dbserver1.inventory.astronauts.Value",
"fields": [
{
"default": null,
"name": "orderid",
"type": [
"null",
"int"
]
},
{
"name": "name",
"type": {
"MY_CUSTOM_ATTRIBUTE": "sensitive column",
"type": "string"
}
}
],
"name": "Value",
"type": "record"
}
]
},
{
"default": null,
"name": "after",
"type": [
"null",
"Value"
]
},
{
"name": "source",
"type": {
"connect.name": "io.debezium.connector.mysql.Source",
"fields": [
{
"name": "version",
"type": "string"
},
{
"name": "connector",
"type": "string"
},
{
"name": "name",
"type": "string"
},
{
"name": "ts_ms",
"type": "long"
},
{
"default": "false",
"name": "snapshot",
"type": [
{
"connect.default": "false",
"connect.name": "io.debezium.data.Enum",
"connect.parameters": {
"allowed": "true,last,false,incremental"
},
"connect.version": 1,
"type": "string"
},
"null"
]
},
{
"name": "db",
"type": "string"
},
{
"default": null,
"name": "sequence",
"type": [
"null",
"string"
]
},
{
"default": null,
"name": "table",
"type": [
"null",
"string"
]
},
{
"name": "server_id",
"type": "long"
},
{
"default": null,
"name": "gtid",
"type": [
"null",
"string"
]
},
{
"name": "file",
"type": "string"
},
{
"name": "pos",
"type": "long"
},
{
"name": "row",
"type": "int"
},
{
"default": null,
"name": "thread",
"type": [
"null",
"long"
]
},
{
"default": null,
"name": "query",
"type": [
"null",
"string"
]
}
],
"name": "Source",
"namespace": "io.debezium.connector.mysql",
"type": "record"
}
},
{
"name": "op",
"type": "string"
},
{
"default": null,
"name": "ts_ms",
"type": [
"null",
"long"
]
},
{
"default": null,
"name": "transaction",
"type": [
"null",
{
"connect.name": "event.block",
"connect.version": 1,
"fields": [
{
"name": "id",
"type": "string"
},
{
"name": "total_order",
"type": "long"
},
{
"name": "data_collection_order",
"type": "long"
}
],
"name": "block",
"namespace": "event",
"type": "record"
}
]
}
],
"name": "Envelope",
"namespace": "dbserver1.inventory.astronauts",
"type": "record"
}
Notice the custom schema field named MY_CUSTOM_ATTRIBUTE.
Debezium 2.0 supports populating the schema doc field from column comments [DBZ-5489]; however, I personally think the doc attribute is not appropriate, since:
any implementation of a schema registry or system that processes the schemas is free to drop those fields when encoding/decoding and it's fully spec compliant
Additionally, the doc field is solely intended to provide information to a user of the schema and is not intended as a form of metadata that downstream programs can rely on
source: https://avro.apache.org/docs/1.10.2/spec.html#Schema+Resolution
Based on the Avro schema docs, custom attributes for Avro schemas are allowed and these attributes are known as metadata:
A JSON object, of the form:
{"type": "typeName" ...attributes...}
where typeName is either a primitive or derived type name, as defined below. Attributes not defined in this document are permitted as metadata, but must not affect the format of serialized data.
source: https://avro.apache.org/docs/1.10.2/spec.html#schemas
I think Debezium transformations (SMTs) might be a solution; however, I have the following problems:
I have no idea how to get the MySQL column comments inside my custom transformation (see the sketch below this list)
org.apache.kafka.connect.data.SchemaBuilder does not allow adding custom attributes; as far as I know, only doc and the predefined fields are supported
Here are several native transformations for reference: https://github.com/apache/kafka/tree/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/
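The second problem is the harder one; for the first, the comments themselves are at least easy to retrieve from information_schema outside the Connect runtime. A minimal sketch using mysql-connector-python (an assumption; any MySQL client works) with placeholder connection parameters:

# Sketch: list column comments for a table from information_schema.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="debezium",
                               password="secret", database="inventory")
cur = conn.cursor()
cur.execute("""
    SELECT COLUMN_NAME, COLUMN_COMMENT
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s AND COLUMN_COMMENT <> ''
""", ("inventory", "astronauts"))
for column_name, comment in cur.fetchall():
    print(column_name, "->", comment)   # e.g. name -> sensitive column
cur.close()
conn.close()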

Can I use a single s3-sink connector to point to the same timestamp field name when different topics use different Avro schemas?

schema for topic t1
{
"type": "record",
"name": "Envelope",
"namespace": "t1",
"fields": [
{
"name": "before",
"type": [
"null",
{
"type": "record",
"name": "Value",
"fields": [
{
"name": "id",
"type": {
"type": "long",
"connect.default": 0
},
"default": 0
},
{
"name": "createdAt",
"type": [
"null",
{
"type": "string",
"connect.version": 1,
"connect.name": "io.debezium.time.ZonedTimestamp"
}
],
"default": null
}
],
"connect.name": "t1.Value"
}
],
"default": null
},
{
"name": "after",
"type": [
"null",
"Value"
],
"default": null
}
],
"connect.name": "t1.Envelope"
}
schema for topic t2
{
"type": "record",
"name": "Value",
"namespace": "t2",
"fields": [
{
"name": "id",
"type": {
"type": "long",
"connect.default": 0
},
"default": 0
},
{
"name": "createdAt",
"type": [
"null",
{
"type": "string",
"connect.version": 1,
"connect.name": "io.debezium.time.ZonedTimestamp"
}
],
"default": null
}
],
"connect.name": "t2.Value"
}
s3-sink Connector configuration
connector.class=io.confluent.connect.s3.S3SinkConnector
behavior.on.null.values=ignore
s3.region=us-west-2
partition.duration.ms=1000
flush.size=1
tasks.max=3
timezone=UTC
topics.regex=t1,t2
aws.secret.access.key=******
locale=US
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
value.converter.schemas.enable=false
name=s3-sink-connector
aws.access.key.id=******
errors.tolerance=all
value.converter=org.apache.kafka.connect.json.JsonConverter
storage.class=io.confluent.connect.s3.storage.S3Storage
key.converter=org.apache.kafka.connect.storage.StringConverter
s3.bucket.name=s3-sink-connector-bucket
path.format=YYYY/MM/dd
timestamp.extractor=RecordField
timestamp.field=after.createdAt
With this connector configuration I get the error "createdAt field does not exist" for topic t2.
If I set timestamp.field=createdAt, the error is thrown for topic t1 instead.
How can I point to the "createdAt" field in both schemas at the same time using the same connector?
Is it possible to achieve this with a single s3-sink connector configuration? If so, which properties do I have to use? If there is another way to do this, please suggest that as well.
All topics will need the same timestamp field; there's no way to configure topic-to-field mappings.
Your t2 schema doesn't have an after field, so you need to run two separate connectors.
The field is also required to be present in all records, otherwise the partitioner won't work.
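Concretely (connector names below are placeholders), that means two configurations that differ only in the topics and the timestamp field, with everything else kept as in the configuration above:

# connector 1: Debezium envelope schema (t1)
name=s3-sink-t1
topics=t1
timestamp.extractor=RecordField
timestamp.field=after.createdAt

# connector 2: flat schema (t2)
name=s3-sink-t2
topics=t2
timestamp.extractor=RecordField
timestamp.field=createdAt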

org.apache.avro.AvroTypeException: Unknown union branch EventId

I am trying to convert JSON to Avro using kafka-avro-console-producer and publish it to a Kafka topic.
I am able to do that for flat JSON/schemas, but for the schema and JSON given below I get
"org.apache.avro.AvroTypeException: Unknown union branch EventId".
Any help would be appreciated.
Schema:
{
"type": "record",
"name": "Envelope",
"namespace": "CoreOLTPEvents.dbo.Event",
"fields": [{
"name": "before",
"type": ["null", {
"type": "record",
"name": "Value",
"fields": [{
"name": "EventId",
"type": "long"
}, {
"name": "CameraId",
"type": ["null", "long"],
"default": null
}, {
"name": "SiteId",
"type": ["null", "long"],
"default": null
}],
"connect.name": "CoreOLTPEvents.dbo.Event.Value"
}],
"default": null
}, {
"name": "after",
"type": ["null", "Value"],
"default": null
}, {
"name": "op",
"type": "string"
}, {
"name": "ts_ms",
"type": ["null", "long"],
"default": null
}],
"connect.name": "CoreOLTPEvents.dbo.Event.Envelope"
}
And the JSON input is like below:
{
"before": null,
"after": {
"EventId": 12,
"CameraId": 10,
"SiteId": 11974
},
"op": "C",
"ts_ms": null
}
In my case I can't alter the schema; I can only alter the JSON in such a way that it works.
If you are using the Avro JSON format, the input you have is slightly off. For unions, non-null values need to be specified such that the type information is listed: https://avro.apache.org/docs/current/spec.html#json_encoding
See below for an example which I think should work.
{
"before": null,
"after": {
"CoreOLTPEvents.dbo.Event.Value": {
"EventId": 12,
"CameraId": {
"long": 10
},
"SiteId": {
"long": 11974
}
}
},
"op": "C",
"ts_ms": null
}
Removing "connect.name": "CoreOLTPEvents.dbo.Event.Value" and "connect.name": "CoreOLTPEvents.dbo.Event.Envelope" as The RecordType can only contains {'namespace', 'aliases', 'fields', 'name', 'type', 'doc'} keys.
Could you try with the schema below and see if you are able to produce the message?
{
"type": "record",
"name": "Envelope",
"namespace": "CoreOLTPEvents.dbo.Event",
"fields": [
{
"name": "before",
"type": [
"null",
{
"type": "record",
"name": "Value",
"fields": [
{
"name": "EventId",
"type": "long"
},
{
"name": "CameraId",
"type": [
"null",
"long"
],
"default": "null"
},
{
"name": "SiteId",
"type": [
"null",
"long"
],
"default": "null"
}
]
}
],
"default": null
},
{
"name": "after",
"type": [
"null",
"Value"
],
"default": null
},
{
"name": "op",
"type": "string"
},
{
"name": "ts_ms",
"type": [
"null",
"long"
],
"default": null
}
]
}

Confluent Kafka producer message format for nested records

I have an Avro schema registered for a Kafka topic and am trying to send data to it. The schema has nested records, and I'm not sure how to correctly send data to it using confluent_kafka in Python.
Example schema (ignore any typos in the schema; the real one is very large, this is just an example):
{
"namespace": "company__name",
"name": "our_data",
"type": "record",
"fields": [
{
"name": "datatype1",
"type": ["null", {
"type": "record",
"name": "datatype1_1",
"fields": [
{"name": "site", "type": "string"},
{"name": "units", "type": "string"}
]
}],
"default": null
},
{
"name": "datatype2",
"type": ["null", {
"type": "record",
"name": "datatype2_1",
"fields": [
{"name": "site", "type": "string"},
{"name": "units", "type": "string"}
]
}],
"default": null
}
]
}
I am trying to send data to this schema using the confluent_kafka Python client. When I did this before, the records were not nested and I would use a typical dictionary of key: value pairs and serialize it. How can I send nested data that works with this schema?
What I tried so far...
message = {'datatype1':
{'site': 'sitename',
'units': 'm'
}
}
This version does not cause any Kafka errors, but all of the columns show up as null.
and...
message = {'datatype1':
{'datatype1_1':
{'site': 'sitename',
'units': 'm'
}
}
}
This version produced a Kafka error related to the schema.
If you use namespaces, you don't have to worry about naming collisions and you can properly structure your optional records:
for example, both
{
"meta": {
"instanceID": "something"
}
}
And
{}
are valid instances of:
{
"doc": "Survey",
"name": "Survey",
"type": "record",
"fields": [
{
"name": "meta",
"type": [
"null",
{
"name": "meta",
"type": "record",
"fields": [
{
"name": "instanceID",
"type": [
"null",
"string"
],
"namespace": "Survey.meta"
}
],
"namespace": "Survey"
}
],
"namespace": "Survey"
}
]
}
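For completeness, a minimal sketch of the producing side with confluent_kafka's AvroSerializer (broker address, Schema Registry URL, topic and schema file names are placeholders). The serializer uses fastavro underneath, so a nullable record field is passed as a plain nested dict and the matching union branch should be resolved automatically:

from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

schema_str = open("our_data.avsc").read()   # the "our_data" schema shown above

sr_client = SchemaRegistryClient({"url": "http://localhost:8081"})
avro_serializer = AvroSerializer(sr_client, schema_str)

producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",
    "value.serializer": avro_serializer,
})

# Optional nested records are plain nested dicts; fields set to None take
# the null branch of the union.
message = {
    "datatype1": {"site": "sitename", "units": "m"},
    "datatype2": None,
}

producer.produce(topic="our-data-topic", value=message)
producer.flush()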

How to create a stream in KSQL from a topic with a decimal-type column

I want to create a stream from a Kafka topic that monitors a MySQL table. The MySQL table has columns of type decimal(16,4), and when I create a stream with this command:
create stream test with (KAFKA_TOPIC='dbServer.Kafka.DailyUdr',VALUE_FORMAT='AVRO');
the stream is created and runs, but the columns with the decimal(16,4) type don't appear in the resulting stream.
source topic value schema:
{
"type": "record",
"name": "Envelope",
"namespace": "dbServer.Kafka.DailyUdr",
"fields": [
{
"name": "before",
"type": [
"null",
{
"type": "record",
"name": "Value",
"fields": [
{
"name": "UserId",
"type": "int"
},
{
"name": "NationalCode",
"type": "string"
},
{
"name": "TotalInputOcted",
"type": "int"
},
{
"name": "TotalOutputOcted",
"type": "int"
},
{
"name": "Date",
"type": "string"
},
{
"name": "Service",
"type": "string"
},
{
"name": "decimalCol",
"type": [
"null",
{
"type": "bytes",
"scale": 4,
"precision": 16,
"connect.version": 1,
"connect.parameters": {
"scale": "4",
"connect.decimal.precision": "16"
},
"connect.name": "org.apache.kafka.connect.data.Decimal",
"logicalType": "decimal"
}
],
"default": null
}
],
"connect.name": "dbServer.Kafka.DailyUdr.Value"
}
],
"default": null
},
{
"name": "after",
"type": [
"null",
"Value"
],
"default": null
},
{
"name": "source",
"type": {
"type": "record",
"name": "Source",
"namespace": "io.debezium.connector.mysql",
"fields": [
{
"name": "version",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "connector",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "name",
"type": "string"
},
{
"name": "server_id",
"type": "long"
},
{
"name": "ts_sec",
"type": "long"
},
{
"name": "gtid",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "file",
"type": "string"
},
{
"name": "pos",
"type": "long"
},
{
"name": "row",
"type": "int"
},
{
"name": "snapshot",
"type": [
{
"type": "boolean",
"connect.default": false
},
"null"
],
"default": false
},
{
"name": "thread",
"type": [
"null",
"long"
],
"default": null
},
{
"name": "db",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "table",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "query",
"type": [
"null",
"string"
],
"default": null
}
],
"connect.name": "io.debezium.connector.mysql.Source"
}
},
{
"name": "op",
"type": "string"
},
{
"name": "ts_ms",
"type": [
"null",
"long"
],
"default": null
}
],
"connect.name": "dbServer.Kafka.DailyUdr.Envelope"
}
My problem is with the decimalCol column.
KSQL does not yet support the DECIMAL data type.
There is an open issue for DECIMAL support that you can track and upvote if you think it would be useful.