Pass headers in Debezium HTTP sink - debezium

I am using the HTTP sink as described in the documentation. I need to pass headers to the HTTP sink URL; is there a way to do that? TIA

Related

Validating AVRO schema on Confluent server

If we enable the property confluent.value.schema.validation on a Confluent server, how is the actual validation performed? Does the broker deserialize the message and check its format, or does it only validate that the message carries the correct schema ID?
It would need to deserialize the data, at least partially, to actually get the ID, so yes, it does both.
Try testing this by forging an Avro Kafka record with an existing ID but a payload that is invalid for that ID's schema.
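A minimal sketch of such a forged record (Java; this assumes the standard Confluent wire format of a 0x00 magic byte, a 4-byte big-endian schema ID, then the Avro payload; the topic name and the schema ID 42 are placeholders):

import java.nio.ByteBuffer;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ForgedAvroRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        // Wire format: magic byte + existing schema ID + bytes that are NOT valid Avro for that schema
        byte[] garbage = "not-avro".getBytes();
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + garbage.length);
        buf.put((byte) 0);   // magic byte
        buf.putInt(42);      // an existing schema ID (placeholder)
        buf.put(garbage);    // invalid payload for that schema

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // If the broker really decodes the payload (and not just the ID),
            // this send should be rejected by broker-side validation
            producer.send(new ProducerRecord<>("validated-topic", buf.array()));
            producer.flush();
        }
    }
}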

MassTransit Kafka Rider get raw message

I need to get the raw message that was sent to Kafka, for logging purposes.
For example, when validation of context.Message has failed.
I tried the answer from Is there a way to get raw message from MassTransit?, but it doesn't work: context.TryGetMessage<JToken>() returns null all the time.
The Confluent.Kafka client does not expose the raw message data, only the deserialized message type. Therefore, MassTransit does not have a message body accessible.

ProcessorContext#headers() is empty

We have a Kafka Streams application. The producer adds a header to each Kafka message before sending it to the Kafka Streams application.
In the streaming app we use an AbstractProcessor and context.forward(null, Optional.of(event)); to forward messages to another topic.
But the header is getting lost. I want the header to be carried over unchanged from the input message to the output topic.
The ProcessorContext interface's headers() method says it returns the headers of the current input record, but it is empty in my case even though I am sending the message with a header. From the Javadoc:
/**
 * Returns the headers of the current input record; could be null if it is not available
 * @return the headers
 */
Headers headers();
Kafka Streams API version: 2.3.1
context.headers() should be called within process() if you are using a Processor, or within transform() if you are using a Transformer.
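For illustration, a minimal sketch (Kafka Streams 2.x Processor API; the class name is made up) that reads the headers at the right time, i.e. inside process():

import org.apache.kafka.common.header.Headers;
import org.apache.kafka.streams.processor.AbstractProcessor;

public class HeaderAwareProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(String key, String value) {
        // Valid here: these are the headers of the record currently being processed
        Headers headers = context().headers();
        headers.forEach(h -> System.out.println(h.key()));

        // forward() keeps the current record context, so the headers travel
        // along with the forwarded record to the downstream topic
        context().forward(key, value);
    }
}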

Why base64 encode/decode in Kafka REST Proxy?

The producer serializes messages and sends them to the broker as byte arrays, and consumers deserialize those byte arrays. The broker always stores and passes along byte arrays. That is how I understand it.
But when you use the REST Proxy with Kafka, the producer encodes the message with base64, and the consumer decodes those base64 messages.
A Python example of a producer and a consumer (Python 3; note that base64.b64encode takes and returns bytes, so the results are decoded to str before being embedded in JSON):

# Producer using the REST Proxy
import base64

payload = {"records":
    [{
        "key": base64.b64encode(b"firstkey").decode("utf-8"),
        "value": base64.b64encode(b"firstvalue").decode("utf-8")
    }]}

# Consumer using the REST Proxy
# ('message' is one record taken from the consumer's JSON response)
print("Message Key: " + base64.b64decode(message["key"]).decode("utf-8"))
Why do you send messages in base64 to the broker instead of byte arrays?
When using the REST Proxy, does the broker store messages in base64 format?
When a producer wants to send the message 'Man', it serializes it into bytes (bits). The broker will store it as 010011010110000101101110. When a consumer gets this message, it will deserialize it back to 'Man'.
However, according to the Confluent documentation:
Data formats - The REST Proxy can read and write data using JSON, raw bytes encoded with base64 or using JSON-encoded Avro.
Therefore, a producer using the REST Proxy will change the message 'Man' into 'TWFu' (base64 encoded) and send that to the broker, and a consumer using the REST Proxy will base64-decode it back to 'Man'.
As you already answered, the broker always stores the data in a binary format.
As for why base64 is needed, I found this in the Confluent documentation (https://www.confluent.io/blog/a-comprehensive-rest-proxy-for-kafka/):
The necessity of base64 encoding becomes clearer when you have to send raw binary data through the REST Proxy:
If you opt to use raw binary data, it cannot be embedded directly in JSON, so the API uses a string containing the base64 encoded data.
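To see the 'Man'/'TWFu' round trip from the example above for yourself, a quick check (plain Java, no Kafka involved):

import java.util.Base64;

public class Base64Check {
    public static void main(String[] args) {
        // 'Man' (bytes 01001101 01100001 01101110) encodes to 'TWFu'
        String encoded = Base64.getEncoder().encodeToString("Man".getBytes());
        System.out.println(encoded);               // TWFu
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded));   // Man
    }
}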

Kafka DSL stream swallows custom headers

Is it possible to forward incoming messages with custom headers from topic A to topic B in a DSL stream processor?
I notice that all of my incoming messages in topic A contain custom headers, but when I write them to topic B, all headers are swallowed by the stream processor.
I use the stream.to(outputTopic); method to write the messages out.
I have found this issue, which is still OPEN:
https://issues.apache.org/jira/browse/KAFKA-5632?src=confmacro
Your observation is correct. Up to Kafka 1.1, Kafka Streams drops record headers.
Record header support was added in the (upcoming) Kafka 2.0, allowing headers to be read and modified via the Processor API (cf. https://issues.apache.org/jira/browse/KAFKA-6850). With KAFKA-6850, record headers are also preserved (i.e., auto-forwarded) when the DSL is used.
The mentioned issue KAFKA-5632 is about header manipulation at the DSL level, which is still not supported in Kafka 2.0.
To manipulate headers using the DSL in Kafka 2.0, you can mix the Processor API into the DSL by using KStream#transformValues(), #transform(), or #process(), as sketched below.
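A minimal sketch of that mix-and-match approach (Kafka 2.0+; the topic names and the header key are placeholders):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class HeaderManipulationExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("topic-A");

        // transform() exposes the ProcessorContext, and with it the record headers;
        // note that in 2.0+ a plain stream.to() already auto-forwards headers unchanged
        stream.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
            private ProcessorContext context;

            @Override
            public void init(ProcessorContext context) {
                this.context = context;
            }

            @Override
            public KeyValue<String, String> transform(String key, String value) {
                // add a custom header to the current record before it is forwarded
                context.headers().add("processed-by", "my-app".getBytes());
                return KeyValue.pair(key, value);
            }

            @Override
            public void close() { }
        }).to("topic-B");
    }
}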