Akka Flow: how to apply a filter on JSON data coming from a Kafka topic - scala

The JSON data coming from the Kafka topic looks like this:
{
  "Action": "die",
  "Actor": {
    "Attributes": {
      "exit": "0",
      "node.name": "node-1",
      "name": "6a5426de4d6e"
    },
    "ID": "09a2576ec87e416aaa943f566e54a375d9c325885038195125c4674b104276b6"
  }
}
My Akka Flow code snippet looks like this:
def getEvents: Flow[KafkaMessage, ConsumerMessage.CommittableOffset, NotUsed] =
  Flow[KafkaMessage]
    .via(getfromSource) // Akka source
    .wireTap(_.value.map(pprint.log(_))) // I am able to log all events from the Kafka topic to the console.
I want to apply a filter condition (keep all events with exit = 0) on the events coming from the topic using .filter, but I am unable to make it work. Any guidance or reference would be helpful.

From your comments, getfromSource returns Either[Throwable, TopicEvent], so you have to do something like this:
Flow[KafkaMessage]
  .via(getfromSource)
  .filter {
    case Right(event) => event.Actor.Attributes.exitCode.toInt == 0 // keep only exit = 0
    case Left(_) => false // or true? Up to you..
  }
Note that the pattern matching can be simplified if you want to.
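One way to simplify it is collect, which unwraps the Right values and drops the failures in one step. Below is a minimal sketch of the whole flow under that approach; the case classes are hypothetical stand-ins mirroring the JSON payload above:

import akka.NotUsed
import akka.stream.scaladsl.Flow

// Hypothetical case classes mirroring the JSON payload above
case class Attributes(exitCode: String, nodeName: String, name: String)
case class Actor(Attributes: Attributes, ID: String)
case class TopicEvent(Action: String, Actor: Actor)

def getExitZeroEvents: Flow[KafkaMessage, TopicEvent, NotUsed] =
  Flow[KafkaMessage]
    .via(getfromSource)                             // emits Either[Throwable, TopicEvent]
    .collect { case Right(event) => event }         // unwrap Right, drop parse failures
    .filter(_.Actor.Attributes.exitCode.toInt == 0) // keep only exit = 0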

Related

POST request to JIRA REST API to create issue of type Minutes

my $create_issue_json = '{"fields": { "project": { "key": "ABC" }, "summary": "summary for version 1", "description": "Creating an issue via REST API", "issuetype": { "name": "Minutes" }}}';
$tx1 = $jira_ua->post($url2 => json => decode_json($create_issue_json));
my $res1 = $tx1->res->body;
I am trying to create a Jira issue of type Minutes, but the POST expects some fields that are not available on the Minutes issue type. Below is the response:
{"errorMessages":["Brands: Brands is required.","Detection: Detection is required."],"errors":{"versions":"Affects Version/s is required.","components":"Component/s is required."}}
I also tried to fetch the schema using the createmeta API but did not find any useful info. Below is the response from createmeta:
{"maxResults":50,"startAt":0,"total":3,"isLast":true,"values":[
{
"self":"https://some_url.com/rest/api/2/issuetype/1",
"id":"1",
"description":"A problem which impairs or prevents the functions of the product.",
"iconUrl":"https://some_url.com:8443/secure/viewavatar?size=xsmall&avatarId=25683&avatarType=issuetype",
"name":"Bug",
"subtask":false},
{
"self":"https://some_url.com:8443/rest/api/2/issuetype/12",
"id":"12",
"description":"An issue type to document minutes of meetings, telecons and the like",
"iconUrl":"https://some_url.com:8443/secure/viewavatar?size=xsmall&avatarId=28180&avatarType=issuetype",
"name":"Minutes",
"subtask":false
},
{
"self":"https://some_url.com:8443/rest/api/2/issuetype/23",
"id":"23",
"description":"Used to split an existing issue of type \"Bug\"",
"iconUrl":"https://some_url.com:8443/images/icons/cmts_SubBug.gif",
"name":"Sub Bug",
"subtask":true
}
]
}
It turns out the Jira admin had added these as mandatory fields for all issue types, which I learned after speaking with him. He has since set up individual configurations for the different issue types, and I am now able to create Minutes issues.
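As a side note, the createmeta endpoint can also report which fields are required per issue type if you ask it to expand the field info, which would have surfaced the mandatory fields here (the project key and issue type name below are only examples):
GET /rest/api/2/issue/createmeta?projectKeys=ABC&issuetypeNames=Minutes&expand=projects.issuetypes.fields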

Wiremock Request Templating for JSON Payload with multiple allowed keys, but same response

Trying to mock an API endpoint that allows a request with 2 possible payloads, but the same response:
Request Option 1
{
  "key1": "value1"
}
Request Option 2
{
  "key2": "value2"
}
Based on the Request Templating documentation, I see that there's an option to define some regex for matchesJsonPath.
However, I'm unable to figure out how to provide a configuration that will allow key1 or key2.
This is what I'd tried, but it doesn't seem to work:
{
  // ... other configs
  "request": {
    "bodyPatterns": [
      {
        "matchesJsonPath": "$.(key1|key2)"
      }
    ]
  }
}
Is it possible to provide 1 definition that supports both payloads, or do I have to create 2 stubs?
Note: I am using a standalone Wiremock Docker image, so options for more complex handling using Java are limited.
Your JsonPath matcher is formatted incorrectly. You need to apply a filter/script (denoted by ?()). More information about how JsonPath matchers work can be found in the JsonPath documentation.
Here is what the properly formatted JsonPath matcher could look like:
{
  "matchesJsonPath": "$[?(@.key1 || @.key2)]"
}
If you need key1 and key2 to have specific values, that would look like this:
{
  "matchesJsonPath": "$[?(@.key1 == 'value1' || @.key2 == 'value2')]"
}
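Put together, a complete stub mapping could look like the sketch below; the method, URL, and response body are placeholders:
{
  "request": {
    "method": "POST",
    "url": "/some-endpoint",
    "bodyPatterns": [
      { "matchesJsonPath": "$[?(@.key1 || @.key2)]" }
    ]
  },
  "response": {
    "status": 200,
    "jsonBody": { "result": "ok" }
  }
}
This single definition matches both payloads, so a second stub is not needed.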

How to set a document id in kafka elasticsearch sink connector as a combination of two fields?

I have a JSON document for which I need to set the document ID as a combination of two fields.
{
  "Event_start_time": "2021-05-16T08:27:21.164Z",
  "allbeat": {
    "heartbeat": {
      "pkt_loss_pct": 0,
      "type": "ping",
      "bu_id": 1,
      "minimum_rtt": 32.248,
      "jitter": 0.09999999999999788,
      "target_state": "Up",
      "average_rtt": 32.35,
      "maximum_rtt": 32.436,
      "tenant_id": 1,
      "target": "google.com",
      "port": 0
    }
  }
}
From the above document, can we set a key that combines Event_start_time and allbeat.heartbeat.target using the available SMTs?
There is no available Single Message Transform that I'm aware of that will do this. You could write your own, or you could use stream processing (e.g. Kafka Streams, ksqlDB) to do it.
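For illustration, here is a minimal Kafka Streams sketch of the stream-processing route; the topic names and the extract* helpers are assumptions standing in for your JSON parsing. It re-keys each record on the two fields and writes to a new topic, which the sink connector then consumes:

import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

// placeholders: pull the two fields out of the JSON value with your parser of choice
def extractStartTime(json: String): String = ???
def extractTarget(json: String): String = ???

val builder = new StreamsBuilder()

builder
  .stream[String, String]("heartbeat-events") // assumed input topic
  .selectKey((_, value) => s"${extractStartTime(value)}_${extractTarget(value)}")
  .to("heartbeat-events-keyed")               // point the connector at this topic instead

With key.ignore=false, the Elasticsearch sink connector then uses the Kafka record key as the document ID.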

Apache Kafka & JSON Schema

I am starting to get into Apache Kafka (Confluent) and have some questions regarding the use of schemas.
First, is my general understanding correct that a schema is used for validating the data? My understanding is that when data is produced, the keys and values are each checked against the predefined schema.
My current technical setup is as follows:
Python:
from confluent_kafka import Producer
from config import conf
import json
# create producer
producer = Producer(conf)
producer.produce("datagen-topic", json.dumps({"product":"table","brand":"abc"}))
producer.flush()
In Confluent, I set up a JSON key schema for my topic:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "properties": {
    "brand": {
      "type": "string"
    },
    "product": {
      "type": "string"
    }
  },
  "required": [
    "product",
    "brand"
  ],
  "type": "object"
}
Now, when I produce the data, the message in Confluent contains only content in "Value". Key and Header are null:
{
  "product": "table",
  "brand": "abc"
}
Basically, it makes no difference whether I have this schema set up or not, so I guess it's just not working the way I set it up. Can you help me see where my thinking is wrong or what my code is missing?
The Confluent Python library Producer class doesn't interact with the Registry in any way, so your message wouldn't be validated.
You'll want to use SerializingProducer like in the example - https://github.com/confluentinc/confluent-kafka-python/blob/master/examples/json_producer.py
If you want non-null keys and headers, you'll need to pass those to the produce method.
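For reference, a minimal sketch along the lines of that example; the broker and Schema Registry URLs and the key are placeholders:

from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.json_schema import JSONSerializer
from confluent_kafka.serialization import StringSerializer

schema_str = """
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "brand": {"type": "string"},
    "product": {"type": "string"}
  },
  "required": ["product", "brand"]
}
"""

# each value is validated against the schema before it is sent
schema_registry = SchemaRegistryClient({"url": "http://localhost:8081"})
producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",
    "key.serializer": StringSerializer("utf_8"),
    "value.serializer": JSONSerializer(schema_str, schema_registry),
})

producer.produce("datagen-topic", key="table-1", value={"product": "table", "brand": "abc"})
producer.flush()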

How to iterate over json response array

I have a question about Gatling.
I need to get the following response:
[
  {
    "id": 1,
    "name": "Jack"
  },
  {
    "id": 2,
    "name": "John"
  }
]
then grab those ids, iterate over them, and make a new request for each of them.
So far I have this:
.exec(
  http("Image list")
    .get("/api/img")
    .headers(headers_0)
    .check(
      jsonPath("$..id").findAll.saveAs("imgs")
    )
)
It successfully saves the ids to "imgs", which is a session variable, but I am not able to iterate over them or process them at all. How can I process them? I am new to Gatling and Scala, so I have no idea how to approach this. Please help.
You can treat the imgs session variable as a Scala List:
val ids = session("imgs").as[List[Int]]
ids.foreach(id => ...)
An update to reflect the fact that the internal implementation is now a Vector, as OP has discovered:
val ids = session("imgs").as[Seq[Int]]
I found a solution.
The only possible format is Seq. In my case this solves the problem:
val imageIds = session("imgs").as[Seq[String]]
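To then fire one request per id, the usual idiom is Gatling's foreach DSL, which walks the saved Seq inside the scenario; the /api/img/${img} detail endpoint below is an assumption:

.exec(
  http("Image list")
    .get("/api/img")
    .headers(headers_0)
    .check(jsonPath("$..id").findAll.saveAs("imgs"))
)
// iterate over the saved ids; each element is exposed as the "img" session attribute
.foreach("${imgs}", "img") {
  exec(
    http("Image detail")
      .get("/api/img/${img}") // assumed per-image endpoint
  )
}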