Can I update Apache Atlas metadata by adding a message directly into the Kafka topic? - apache-kafka

I am trying to add a message to Entities_Topic to update kafka_topic type metadata in Apache Atlas. I wrote the data according to the JSON format of the notification message, but it didn't work.
application.log shows the following:
graph rollback due to exception AtlasBaseException:Instance kafka_topic with unique attribute {qualifiedName=atlas_test00#primary # clusterName to use in qualified name of entities. Default: primary} does not exist (GraphTransactionInterceptor:202)
graph rollback due to exception AtlasBaseException:Instance __AtlasUserProfile with unique attribute {name=admin} does not exist (GraphTransactionInterceptor:202)
And here is the message I passed into the Kafka topic earlier:
{"version":{"version":"1.0.0","versionParts":[1]},"msgCompressionKind":"NONE","msgSplitIdx":1,"msgSplitCount":1,"msgSourceIP":"192.168.1.110","msgCreatedBy":"","msgCreationTime":1664440029000,"spooled":false,"message":{"type":"ENTITY_NOTIFICATION_V2","entity":{"typeName":"kafka_topic","attributes":{"qualifiedName":"atlas_test_k1#primary # clusterName to use in qualifiedName of entities. Default: primary","name":"atlas_test01","description":"atlas_test_k1"},"displayText":"atlas_test_k1","isIncomplete":false},"operationType":"ENTITY_CREATE","eventTime":1664440028000}}
It is worth noting that there is no GUID in the message, and I do not know how to create one manually. Also, I set the timestamps to the current time. The JSON was sent through the Kafka tool Offset Explorer.
My team leader wants to update the metadata by sending messages directly into Kafka, and I'm just trying to find out whether that's possible.
How can I implement this idea, or what am I doing wrong?
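For what it's worth, the same message can be produced from code instead of Offset Explorer. Below is a minimal, hedged sketch using a plain Java Kafka producer; the broker address and topic name are assumptions, and whether Atlas accepts a hand-crafted message on this topic is exactly what this question is asking.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AtlasNotificationProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Topic name is an assumption; the question refers to it as Entities_Topic
        String topic = "ATLAS_ENTITIES";
        // Placeholder for the JSON notification shown above, as a single-line string
        String json = "{\"version\":{\"version\":\"1.0.0\", ... }}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(topic, json));
            producer.flush();
        }
    }
}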

Related

How to rename the id header of a debezium mongodb connector outbox message

I am trying to use the outbox event router of Debezium for MongoDB. The consumer is a Spring Cloud Stream application. I cannot deserialize the message because Spring Cloud expects the message id header to be a UUID, but it receives byte[]. I have tried different deserializers to no avail. I am thinking of renaming the id header in order to skip this Spring Cloud check, or removing it altogether. I have tried the ReplaceField SMT, but it does not seem to modify the header fields.
Also, is there a way to overcome this in Spring?
The solution to the initial question is to use the DropHeaders SMT (https://docs.confluent.io/platform/current/connect/transforms/dropheaders.html).
This will remove the id header that is populated by Debezium.
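For reference, a hedged sketch of how that SMT is typically wired into the connector configuration; the dropIdHeader alias is made up, only the id header name comes from this thread, and these lines would sit alongside the connector's existing transforms:

transforms=dropIdHeader
transforms.dropIdHeader.type=org.apache.kafka.connect.transforms.DropHeaders
transforms.dropIdHeader.headers=id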
But as Oleg Zhurakousky mentioned, moving to a newer version of spring-cloud-stream without @StreamListener solves the underlying problem.
Apparently @StreamListener checks whether a message has an id header and demands that it be of type UUID. With the new functional style of working with spring-cloud-stream, the id header is actually overwritten with a newly generated value, which means that the value populated by Debezium (the id column from the outbox table) is ignored. I guess that if you need to check for duplicate delivery, it may be better to create your own header instead of using id; I do not know whether spring-cloud-stream generates the same id for the same message if it is redelivered.
Also keep in mind that even in the newer versions of spring-cloud-stream, if you use the deprecated @StreamListener, you will have the same problem.

Customize Debezium pubsub message

I am trying to use Debezium Server to stream "some" changes in a PostgreSQL table. Namely, the table being tracked has a json-type column named "payload", and I would like the message streamed to Pub/Sub by Debezium to contain only the contents of that payload column. Is that possible?
I've explored the custom transformations provided by Debezium, but from what I could tell they only allow me to enrich the published message with extra fields, not to publish only certain fields, which is what I want to do.
Edit:
The closest I got to what I wanted was to use the outbox transform but that published the following message:
{
  "schema": {
    ...
  },
  "payload": {
    "key": "value"
  }
}
Whereas what I would like the message to be is:
{"key":"value"}
I've tried adding an ExtractNewRecordState transform, but still got the same results. My application.properties file looks like this:
debezium.transforms=outbox,unwrap
debezium.transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
debezium.transforms.outbox.table.field.event.key=grouping_key
debezium.transforms.outbox.table.field.event.payload.id=id
debezium.transforms.outbox.route.by.field=target
debezium.transforms.outbox.table.expand.json.payload=true
debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
debezium.transforms.unwrap.add.fields=payload
Many thanks,
Daniel

Apache NiFi: Is there a way to publish messages to Kafka with a message key that is a combination of multiple attributes?

I have a requirement where I need to read a CSV file and publish it to a Kafka topic in Avro format. During the publish, I need to set the message key to the combination of two attributes. Let's say I have an attribute called id and an attribute called group; I need my message key to be id+"-"+group. Is there a way I can achieve this in an Apache NiFi flow? Setting the message key to a single attribute works fine for me.
Yes, in the PublishKafka_2_0 (or whatever version you're using), set the Kafka Key property to construct your message key using NiFi Expression Language. For your example, the expression ${id}-${group} will form it (e.g. id=myId & group=MyGroup -> myId-myGroup).
If you don't populate this property explicitly, the processor looks for the attribute kafka.key, so if you had previously set that value, it would be passed through.
Additional information after comment 2020-06-15 16:49
Ah, so the PublishKafkaRecord will publish multiple messages to Kafka, each correlating with a record in the single NiFi flowfile. In this case, the property is asking for a field (a record term meaning some element of the record schema) to use to populate that message key. I would suggest using UpdateRecord before this processor to add a field called messageKey (or whatever you like) to each record using Expression Language, then reference this field in the publishing processor property.
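For illustration, a hedged sketch of that UpdateRecord step; the field names id and group come from the question, while the messageKey field name follows the suggestion above and the RecordPath concat function is an assumption based on the NiFi RecordPath guide rather than something stated in this thread:

UpdateRecord
  Replacement Value Strategy : Record Path Value
  /messageKey                : concat(/id, '-', /group)

PublishKafkaRecord_2_0
  Message Key Field : messageKey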
Notice the (?) icons on each property, which indicate what is or isn't allowed:
When a field doesn't accept Expression Language, use an UpdateAttribute processor to set the combined value you need, then use the combined value downstream.
Thank you for your inputs. I had to change my initial design of producing with a key combination and instead partition the file based on a specific field using the PartitionRecord processor. I have a date field in my CSV file, and there can be multiple records per date. I partition based on this date field and produce to the Kafka topics using the id field as the key per partition. The Kafka topic name is dynamic and is suffixed with the date value. Since I plan to use Kafka Streams to read data from these topics, this is a much better design than the initial one.

Schema Registry - Confluent AvroSerializer/AvroDeserializer

Based on my understanding:
Producer: On the first call, the schema registry client's local cache is empty, so the schema corresponding to the object being serialized is loaded. The producer then looks in the local cache to check whether that schema already exists there and, if not, requests it from the schema registry.
Consumer: The schema registry is called every time a schema ID is not already in the AvroDeserializer's local cache.
Two questions:
If the schema is not captured in the local cache, how many times will the schema registry be called to store it locally during serialization on the producer side?
On the consumer side, will the schema registry be called for every record whose schema ID is not already in the AvroDeserializer's local cache?
If you somehow never get a cache hit, I believe the HTTP call to find/send the ID will keep being made again and again, yes. However, the chance that the schema is not cached between the first request/response and the (de)serialization is slim, because those steps happen very close together in the code. (Note: it's open source, so you can verify this yourself.)
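To make the caching concrete, here is a minimal producer-side sketch; the broker and registry addresses and the topic name are assumptions, while the serializer class names are the standard Confluent ones. The point is that the schema-ID cache lives inside each serializer/deserializer instance, so the registry is only contacted on a cache miss:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroCachingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumed
        props.put("schema.registry.url", "http://localhost:8081"); // assumed
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            // First send of a record with a given schema: the serializer looks up or
            // registers the schema over HTTP and caches the returned ID; later sends
            // with the same schema reuse the cached ID and make no registry call.
            producer.send(new ProducerRecord<>("some-topic", "key", avroRecord()));
        }
    }

    private static Object avroRecord() {
        return null; // placeholder for a GenericRecord/SpecificRecord instance
    }
}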

Why in the API rework was StoreName not specified in the table method of Kafka StreamsBuilder?

In the Kafka StreamsBuilder the signature for table is only:
table(java.lang.String topic)
https://kafka.apache.org/10/javadoc/org/apache/kafka/streams/StreamsBuilder.html
Whereas before you were able to provide a store name:
table(java.lang.String topic, java.lang.String queryableStoreName)
https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/kstream/KStreamBuilder.html
Why was this removed?
It was not removed, but the API was reworked. Please read the upgrade notes for API changes: https://kafka.apache.org/11/documentation/streams/upgrade-guide
For this change in particular, the full details are documented via KIP-182: https://cwiki.apache.org/confluence/display/KAFKA/KIP-182%3A+Reduce+Streams+DSL+overloads+and+allow+easier+use+of+custom+storage+engines
You can specify the store name via the Materialized parameter now:
table(String topic, Materialized materialized);
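For illustration, a short sketch of how the store name is supplied through Materialized; the topic and store names are placeholders:

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();
// "input-topic" and "my-store" are placeholder names; the resulting state store
// is queryable under the name "my-store".
KTable<String, String> table = builder.table(
        "input-topic",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("my-store"));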