Empty data while reading data from kafka using Trident Topology - apache-kafka

I am new to Trident. I am writing a Trident topology which reads data from Kafka; the topic name is 'test'. I have a local Kafka setup: I started ZooKeeper and Kafka locally, created the topic 'test', opened a console producer, and typed the message 'Hello Kafka!'.
I want to read the message 'Hello Kafka!' from the 'test' topic using Trident.
Below is my code; I am getting an empty tuple.
TridentTopology topology = new TridentTopology();
BrokerHosts brokerHosts = new ZkHosts("localhost:2181");
TridentKafkaConfig kafkaConfig = new TridentKafkaConfig(brokerHosts, "test");
kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
kafkaConfig.bufferSizeBytes = 1024 * 1024 * 4;
kafkaConfig.fetchSizeBytes = 1024 * 1024 * 4;
kafkaConfig.forceFromStart = false;
OpaqueTridentKafkaSpout opaqueTridentKafkaSpout = new OpaqueTridentKafkaSpout(kafkaConfig);
topology.newStream("TestSpout", opaqueTridentKafkaSpout).parallelismHint(1)
.each(new Fields(), new TestFilter()).parallelismHint(1)
.each(new Fields(), new Utils.PrintFilter());
And this is my TestFilter class code:
public TestFilter()
{
    //
}

@Override
public boolean isKeep(TridentTuple tuple) {
    boolean isKeep = true;
    System.out.println("TestFilter is called...");
    if (tuple != null && tuple.getValues().size() > 0) {
        System.out.println("data from kafka ::: " + tuple.getValues());
    }
    return isKeep;
}
Whenever I type a message in the Kafka producer for the 'test' topic, the first sysout gets printed, but it never enters the if block. I simply get the message 'TestFilter is called...' and nothing more.
I want to get the actual data I produced to the 'test' topic. How?

The problem lies in the parameters to Stream.each. The relevant portion of the javadoc for the method is:
each(Fields inputFields, Filter filter)
The documentation isn't too clear about it, but the semantics are that you should specify all the fields used by your filter via the inputFields parameter.
Storm will then apply a projection on the input tuple and forward it to the filter.
Given that you didn't specify any input fields, the projection results in an empty tuple, hence the failure of the tuple.getValues().size()>0 condition inside the filter.
It's also worth mentioning the other variants of each:
each(Fields inputFields, Function function, Fields functionFields)
each(Function function, Fields functionFields)
These apply the provided function to the projection of the input tuple, appending the resulting values to the original input tuple and naming the new fields functionFields (i.e. the projection is only used for applying the function).
In particular, the second version is equivalent to invoking each with inputFields set to null (or new Fields()) and results in an empty tuple being passed to the function.
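Applied to the topology in the question, a minimal sketch of the fix is to declare the spout's output field in each call to each (assuming the default StringScheme, whose single output field is, as far as I recall, named "str", i.e. StringScheme.STRING_SCHEME_KEY):

topology.newStream("TestSpout", opaqueTridentKafkaSpout).parallelismHint(1)
        .each(new Fields("str"), new TestFilter()).parallelismHint(1)
        .each(new Fields("str"), new Utils.PrintFilter());

With the input field declared, the projection handed to TestFilter contains the Kafka payload and the second sysout prints it.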

Related

NiFi Avro Kafka message nano-timestamp (19 digits) cast to timestamp with milliseconds

I'm now facing an issue converting Kafka message records of type long holding nanoseconds (19 digits) to a string timestamp with milliseconds. The messages come in Avro format and contain different schemas (so we can't statically define one schema) stored in Confluent Schema Registry. The current process is:
1) ConsumeKafkaRecord_2_0, which reads the message and stores the Avro schema coming from Confluent Schema Registry into the avro.schema attribute
2) UpdateAttribute, which looks for a pattern of a timestamp record in avro.schema and adds "logicalType":"timestamp-micros" (because I can't find a timestamp-nanos type in the Avro specification)
3) ConvertRecord, which converts the Avro flowfile into JSON using avro.schema. It uses the logicalType assigned in the previous step and converts the 19-digit long into yyyy-MM-dd HH:mm:ss.SSSSSS. The issue here is that 19 digits is a nano-timestamp, a type which is missing in the Avro specification, so we can only use the timestamp-micros type and receive year 51000+ values.
4) ReplaceText - this processor gives us a workaround for the issue described above: we replace the values matching the 5-digit-year pattern with a "correct" datetime (with milliseconds, because Java somehow can't work with microseconds) using an expression: ${'$1':toDate('yyyyy-MM-dd HH:mm:ss.SSSSSS'):toNumber():toString():substring(0, 13):toNumber():toDate():format('yyyy-MM-dd HH:mm:ss.SSS')}
After that we go on with other processors. The workaround works, but with a strange issue - our resulting timestamps differ by a few milliseconds from what we receive in Kafka. I can only guess this is the result of the transformations described above. That's why my question is: is there a better way to handle 19-digit values coming in the Avro messages (the schemas are in Confluent Schema Registry, and the pattern for timestamp fields in the schema is known) so that they are cast into correct millisecond timestamps? Maybe some kind of field value replacement (a substring of 13 digits from the 19-digit value) in the Avro flowfile content based on its schema, which is embedded/stored in the avro.schema attribute?
Please let me know if something is unclear and if some additional details are needed. Thanks a lot in advance!
The following solution worked for our case, a Groovy script which converts one avro file into another (both schema and content):
@Grab('org.apache.avro:avro:1.8.2')
import org.apache.avro.*
import org.apache.avro.file.*
import org.apache.avro.generic.*

// function which traverses all records (including nested ones)
def convertAvroNanosecToMillisec(record) {
    record.getSchema().getFields().forEach { Schema.Field field ->
        if (record.get(field.name()) instanceof org.apache.avro.generic.GenericData.Record) {
            convertAvroNanosecToMillisec(record.get(field.name()))
        }
        if (field.schema().getType().getName() == "union") {
            field.schema().getTypes().forEach { Schema unionTypeSchema ->
                if (unionTypeSchema.getProp("connect.name") == "io.debezium.time.NanoTimestamp") {
                    record.put(field.name(), Long.valueOf(record.get(field.name()).toString().substring(0, 13)))
                    unionTypeSchema.addProp("logicalType", "timestamp-millis")
                }
            }
        } else {
            if (field.schema().getProp("connect.name") == "io.debezium.time.NanoTimestamp") {
                record.put(field.name(), Long.valueOf(record.get(field.name()).toString().substring(0, 13)))
                field.schema().addProp("logicalType", "timestamp-millis")
            }
        }
    }
    return record
}

// start flowfile processing
def flowFile = session.get()
if (!flowFile) return
try {
    flowFile = session.write(flowFile, { inStream, outStream ->
        // define the Avro reader and writer
        DataFileStream<GenericRecord> reader = new DataFileStream<>(inStream, new GenericDatumReader<GenericRecord>())
        DataFileWriter<GenericRecord> writer = new DataFileWriter<>(new GenericDatumWriter<GenericRecord>())
        def contentSchema = reader.schema // source Avro schema
        def records = [] // list used to temporarily store the processed records
        // read all records from the incoming file and add them to the temporary list
        reader.forEach { GenericRecord contentRecord ->
            records.add(convertAvroNanosecToMillisec(contentRecord))
        }
        // create a file writer object with the adjusted schema
        writer.create(contentSchema, outStream)
        // add records to the output file from the temporary list and close the writer
        records.forEach { GenericRecord contentRecord ->
            writer.append(contentRecord)
        }
        writer.close()
    } as StreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error('Error appending new record to avro file', e)
    flowFile = session.penalize(flowFile)
    session.transfer(flowFile, REL_FAILURE)
}
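As a side note on the conversion itself: for a 19-digit epoch-nanosecond value, the substring(0, 13) trick used above is equivalent to integer division by 1,000,000, i.e. truncating to epoch milliseconds (sub-millisecond precision is simply dropped). A tiny self-contained Java illustration, with a made-up value:

public class NanoToMilliDemo {
    public static void main(String[] args) {
        long epochNanos = 1_498_118_549_550_123_456L;    // 19 digits, made-up value
        long viaDivision = epochNanos / 1_000_000L;      // 1498118549550 (epoch millis)
        long viaSubstring = Long.parseLong(Long.toString(epochNanos).substring(0, 13));
        System.out.println(viaDivision == viaSubstring); // true
    }
}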

Apache Beam Windowing on a signal's phase

Updated: Is it possible to window a data stream on a signal's phase?
For example, there is a stream of timestamp, key, value:
[<t0, k1, 0>, <t1, k1, 98>, <t2, k1, 145>, <t4, k1, 0>, <t3, k1, 350>, <t5, k1, 40>, <t6, k1, 65>, <t7, k1, 120>, <t8, k1, 240>, <t9, k1, 352>].
The output would be two windows for key k1:
t0 - t3: [0, 98, 145, 350]
t4 - t9: [0, 40, 65, 120, 240, 352]
E.g. every time the value hits 0, start a new window for the group.
After your question edit and use case clarification I would recommend looking into custom windowing to extend the standard sessions. As a starting point I built the following example (it can be improved upon).
Through WindowFn.AssignContext we can access the element() that is being windowed into a proto-session. If it's equal to a given stopValue, the window length is confined to the minimum instead of using gapDuration:
@Override
public Collection<IntervalWindow> assignWindows(AssignContext c) {
    Duration newGap = c.element().getValue().equals(this.stopValue) ? new Duration(1) : gapDuration;
    return Arrays.asList(new IntervalWindow(c.timestamp(), newGap));
}
Then, when merging the sorted windows, we check not only that they overlap but also that the window duration is not equal to 1 ms.
Collections.sort(sortedWindows);
List<MergeCandidate> merges = new ArrayList<>();
MergeCandidate current = new MergeCandidate();
for (IntervalWindow window : sortedWindows) {
    // get window duration and check if it's a stop session request
    Long windowDuration = new Duration(window.start(), window.end()).getMillis();
    if (current.intersects(window) && !windowDuration.equals(1L)) {
        current.add(window);
    } else {
        merges.add(current);
        current = new MergeCandidate(window);
    }
}
merges.add(current);
for (MergeCandidate merge : merges) {
    merge.apply(c);
}
Of course, we can also add some code so that we can provide different stop values: a stopValue field, a withStopValue method, constructors, display data if using the Dataflow Runner, etc.
/** Value that closes the session. */
private final Integer stopValue;

/** Creates a {@code StopSessions} {@link WindowFn} with the specified gap duration. */
public static StopSessions withGapDuration(Duration gapDuration) {
    return new StopSessions(gapDuration, 0);
}

/** Creates a {@code StopSessions} {@link WindowFn} with the specified stop value. */
public StopSessions withStopValue(Integer stopValue) {
    return new StopSessions(gapDuration, stopValue);
}

/** Creates a {@code StopSessions} {@link WindowFn} with the specified gap duration and stop value. */
private StopSessions(Duration gapDuration, Integer stopValue) {
    this.gapDuration = gapDuration;
    this.stopValue = stopValue;
}
Now in our pipeline we can import and use the new StopSessions class with:
import org.apache.beam.sdk.transforms.windowing.StopSessions; // custom one
...
.apply("Window into StopSessions", Window.<KV<String, Integer>>into(StopSessions
.withGapDuration(Duration.standardSeconds(10))
.withStopValue(0)))
To mimic your example we create some data with:
.apply("Create data", Create.timestamped(
TimestampedValue.of(KV.of("k1", 0), new Instant()), // <t0, k1, 0>
TimestampedValue.of(KV.of("k1",98), new Instant().plus(1000)), // <t1, k1, 98>
TimestampedValue.of(KV.of("k1",145), new Instant().plus(2000)), // <t2, k1, 145>
TimestampedValue.of(KV.of("k1",0), new Instant().plus(4000)), // <t4, k1, 0>
...
With standard sessions the output would be:
user=k1, scores=[0,145,350,120,0,40,65,98,240,352], window=[2019-06-08T19:13:46.785Z..2019-06-08T19:14:05.797Z)
And with the custom one I get the following:
user=k1, scores=[350,145,98], window=[2019-06-08T21:18:51.395Z..2019-06-08T21:19:03.407Z)
user=k1, scores=[0], window=[2019-06-08T21:18:54.407Z..2019-06-08T21:18:54.408Z)
user=k1, scores=[65,240,352,120,40], window=[2019-06-08T21:18:55.407Z..2019-06-08T21:19:09.407Z)
user=k1, scores=[0], window=[2019-06-08T21:18:50.395Z..2019-06-08T21:18:50.396Z)
Changing the stopValue with .withStopValue(<int>) works as expected. The 98, 145 and 350 events are in a different session than the rest. Please note that this is not exactly like in the description, as the stopValue gets assigned to a separate window instead of the new one, but it can be filtered out downstream and it gives you an idea of how to proceed. I would like to revisit this and also look into a Python implementation.
All files here.
Likely not, from your description. There are at least two problems:
PCollections in Beam are unordered and distributed:
there are no guarantees in the model that events from one group will arrive in that order;
data-driven triggers are not supported (probably for similar reasons):
https://beam.apache.org/documentation/programming-guide/#data-driven-triggers
However you can look into stateful processing and see if you can handle this manually. E.g. you accumulate all the incoming events in the state and then from time to time analyze the accumulated events and emit the results.
Or if you can extract/assign a common key in your business logic, then you might want to check if GroupByKey+ParDo or Combine would be helpful.
See:
https://beam.apache.org/blog/2017/02/13/stateful-processing.html
https://docs.google.com/document/d/1zf9TxIOsZf_fz86TGaiAQqdNI5OO7Sc6qFsxZlBAMiA/edit
https://beam.apache.org/documentation/programming-guide/#combine
https://beam.apache.org/documentation/programming-guide/#groupbykey
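As a rough sketch of that stateful idea (class and state names are illustrative, and it deliberately ignores the ordering caveat above, so treat it as a starting point rather than a working solution): a stateful DoFn can buffer values per key in a BagState and emit the buffered phase whenever a 0 arrives.

import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.state.BagState;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Buffers values per key and emits the accumulated "phase" when a 0 arrives.
// Only behaves as intended if elements are processed in event order, which
// the model does not guarantee.
class PhaseBufferFn extends DoFn<KV<String, Integer>, KV<String, List<Integer>>> {

    @StateId("buffer")
    private final StateSpec<BagState<Integer>> bufferSpec = StateSpecs.bag();

    @ProcessElement
    public void process(ProcessContext c, @StateId("buffer") BagState<Integer> buffer) {
        if (c.element().getValue() == 0) {
            List<Integer> phase = new ArrayList<>();
            buffer.read().forEach(phase::add);
            if (!phase.isEmpty()) {
                c.output(KV.of(c.element().getKey(), phase)); // close the previous phase
            }
            buffer.clear();
        }
        buffer.add(c.element().getValue());
    }
}

Note that the last phase is never flushed by this sketch; an event-time or processing-time timer would be needed for that.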

SparkSQL performance issue with collect method

We are currently facing a performance issue with Spark SQL written in Scala. The application flow is mentioned below.
The Spark application reads a text file from an input HDFS directory
Creates a DataFrame on top of the file by programmatically specifying the schema. This DataFrame will be an exact replica of the input file kept in memory and will have around 18 columns
var eqpDF = sqlContext.createDataFrame(eqpRowRdd, eqpSchema)
Creates a filtered DataFrame from the first DataFrame constructed in step 2. This DataFrame will contain only unique account numbers, obtained with the distinct keyword.
var distAccNrsDF = eqpDF.select("accountnumber").distinct().collect()
Using the two DataFrames constructed in steps 2 & 3, we get all the records which belong to one account number and run some JSON parsing logic on top of the filtered data.
var filtrEqpDF =
eqpDF.where("accountnumber='" + data.getString(0) + "'").collect()
Finally, the JSON-parsed data is put into an HBase table
Here we are facing performance issues while calling the collect method on top of the DataFrames: collect fetches all the data onto a single node and then does the processing, thus losing the benefit of parallel processing.
Also, in the real scenario we can expect around 10 billion records of data, so collecting all those records onto the driver node might crash the program itself due to memory or disk space limitations.
I don't think the take method can be used in our case, since it fetches only a limited number of records at a time. We have to get all the unique account numbers from the whole data, so I am not sure whether take, which returns a limited number of records at a time, will suit our requirements.
Any help to avoid calling collect and any other best practices to follow would be appreciated. Code snippets/suggestions/git links will be very helpful if anyone has faced similar issues.
Code snippet
val eqpSchemaString = "acoountnumber ....."
val eqpSchema = StructType(eqpSchemaString.split(" ").map(fieldName =>
StructField(fieldName, StringType, true)));
val eqpRdd = sc.textFile(inputPath)
val eqpRowRdd = eqpRdd.map(_.split(",")).map(eqpRow => Row(eqpRow(0).trim, eqpRow(1).trim, ....)
var eqpDF = sqlContext.createDataFrame(eqpRowRdd, eqpSchema);
var distAccNrsDF = eqpDF.select("accountnumber").distinct().collect()
distAccNrsDF.foreach { data =>
  var filtrEqpDF = eqpDF.where("accountnumber='" + data.getString(0) + "'").collect()
  var result = new JSONObject()
  result.put("jsonSchemaVersion", "1.0")
  val firstRowAcc = filtrEqpDF(0)
  //Json parsing logic
  {
    .....
    .....
  }
}
The approach usually taken in this kind of situation is:
Instead of collect, invoke foreachPartition: foreachPartition applies a function to each partition (represented by an Iterator[Row]) of the underlying DataFrame separately (the partition being the atomic unit of parallelism of Spark)
the function will open a connection to HBase (thus making it one per partition) and send all the contained values through this connection
This means that every executor opens a connection (which is not serializable, but lives within the boundaries of the function and thus never needs to be sent across the network) and independently sends its contents to HBase, without any need to collect all the data on the driver (or on any one node, for that matter).
It looks like you are reading a CSV file, so probably something like the following will do the trick:
spark.read.csv(inputPath). // Using DataFrameReader but your way works too
foreachPartition { rows =>
val conn = ??? // Create HBase connection
for (row <- rows) { // Loop over the iterator
val data = parseJson(row) // Your parsing logic
??? // Use 'conn' to save 'data'
}
}
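For reference, here is a rough Java rendering of the same per-partition pattern; the table name, column family and row/column mapping are made up, and the actual JSON parsing logic is left out:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.function.ForeachPartitionFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class EqpToHBase {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("eqp-to-hbase").getOrCreate();
        Dataset<Row> eqp = spark.read().csv(args[0]); // same CSV input as in the question

        eqp.foreachPartition((ForeachPartitionFunction<Row>) rows -> {
            // One connection per partition, created on the executor, never serialized.
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("eqp_table"))) { // hypothetical table
                while (rows.hasNext()) {
                    Row row = rows.next();
                    String accountNumber = row.getString(0); // assuming accountnumber is the first column
                    Put put = new Put(Bytes.toBytes(accountNumber));
                    // JSON parsing / column mapping logic would go here.
                    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("raw"), Bytes.toBytes(row.mkString(",")));
                    table.put(put);
                }
            }
        });
    }
}

This keeps the work distributed: each executor writes its own partitions to HBase and nothing is pulled back to the driver.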
You can avoid collect in your code if you have a large data set.
collect returns all the elements of the dataset as an array to the driver program. It is usually useful after a filter or another operation that returns a sufficiently small subset of the data.
It can also cause the driver to run out of memory, because collect() fetches the entire RDD/DataFrame onto a single machine.
I have just edited your code, which should work for you.
var distAccNrsDF = eqpDF.select("accountnumber").distinct()
distAccNrsDF.foreach { data =>
  var filtrEqpDF = eqpDF.where("accountnumber='" + data.getString(0) + "'")
  var result = new JSONObject()
  result.put("jsonSchemaVersion", "1.0")
  val firstRowAcc = filtrEqpDF(0)
  //Json parsing logic
  {
    .....
    .....
  }
}

Flink: join file with kafka stream

I have a problem that I can't really figure out.
So I have a kafka stream that contains some data like this:
{"adId":"9001", "eventAction":"start", "eventType":"track", "eventValue":"", "timestamp":"1498118549550"}
And I want to replace 'adId' with another value 'bookingId'.
This value is located in a csv file, but I can't really figure out how to get it working.
Here is my mapping csv file:
9001;8
9002;10
So my output would ideally be something like
{"bookingId":"8", "eventAction":"start", "eventType":"track", "eventValue":"", "timestamp":"1498118549550"}
This file can get refreshed at least once every hour, so the job should pick up changes to it.
I currently have this code which doesn't work for me:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(30000); // create a checkpoint every 30 seconds
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
DataStream<String> adToBookingMapping = env.readTextFile(parameters.get("adToBookingMapping"));
DataStream<Tuple2<Integer,Integer>> input = adToBookingMapping.flatMap(new Tokenizer());
//Kafka Consumer
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", parameters.get("bootstrap.servers"));
properties.setProperty("group.id", parameters.get("group.id"));
FlinkKafkaConsumer010<ObjectNode> consumer = new FlinkKafkaConsumer010<>(parameters.get("inbound_topic"), new JSONDeserializationSchema(), properties);
consumer.setStartFromGroupOffsets();
consumer.setCommitOffsetsOnCheckpoints(true);
DataStream<ObjectNode> logs = env.addSource(consumer);
DataStream<Tuple4<Integer,String,Integer,Float>> parsed = logs.flatMap(new Parser());
// output -> bookingId, action, impressions, sum
DataStream<Tuple4<Integer, String,Integer,Float>> joined = runWindowJoin(parsed, input, 3);
public static DataStream<Tuple4<Integer, String, Integer, Float>> runWindowJoin(
        DataStream<Tuple4<Integer, String, Integer, Float>> parsed,
        DataStream<Tuple2<Integer, Integer>> input, long windowSize) {
    return parsed.join(input)
        .where(new ParsedKey())
        .equalTo(new InputKey())
        .window(TumblingProcessingTimeWindows.of(Time.of(windowSize, TimeUnit.SECONDS)))
        //.window(TumblingEventTimeWindows.of(Time.milliseconds(30000)))
        .apply(new JoinFunction<Tuple4<Integer, String, Integer, Float>, Tuple2<Integer, Integer>, Tuple4<Integer, String, Integer, Float>>() {
            private static final long serialVersionUID = 4874139139788915879L;

            @Override
            public Tuple4<Integer, String, Integer, Float> join(
                    Tuple4<Integer, String, Integer, Float> first,
                    Tuple2<Integer, Integer> second) {
                return new Tuple4<Integer, String, Integer, Float>(second.f1, first.f1, first.f2, first.f3);
            }
        });
}
The code only runs once and then stops, so it doesn't convert new entries in Kafka using the csv file. Any ideas on how I could process the stream from Kafka with the latest values from my csv file?
Kind regards,
darkownage
Your goal appears to be to join streaming data with a slow-changing catalog (i.e. a side input). I don't think the join operation is useful here because it doesn't store the catalog entries across windows. Also, the text file is a bounded input whose lines are read once.
Consider using connect to create a connected stream, and store the catalog data as managed state to perform lookups into. The operator's parallelism would need to be 1.
You may find a better solution by researching 'side inputs', looking at the solutions that people use today. See FLIP-17 and Dean Wampler's talk at Flink Forward.
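A minimal sketch of the connect-plus-state idea, reusing the streams from the question (input of type Tuple2<Integer, Integer> for the csv mapping, logs of type ObjectNode for the Kafka events); the class name is illustrative and, with parallelism 1, a plain in-memory map stands in for proper managed state:

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Keeps an adId -> bookingId catalog from the first stream and uses it to
// rewrite events arriving on the second stream.
public class AdToBookingEnricher
        extends RichCoFlatMapFunction<Tuple2<Integer, Integer>, ObjectNode, ObjectNode> {

    private final Map<Integer, Integer> catalog = new HashMap<>();

    @Override
    public void flatMap1(Tuple2<Integer, Integer> mapping, Collector<ObjectNode> out) {
        catalog.put(mapping.f0, mapping.f1); // update/refresh the mapping
    }

    @Override
    public void flatMap2(ObjectNode event, Collector<ObjectNode> out) {
        Integer bookingId = catalog.get(event.get("adId").asInt());
        if (bookingId != null) {
            event.remove("adId");
            event.put("bookingId", bookingId.toString());
            out.collect(event);
        }
        // else: drop the event, or buffer it until the mapping arrives
    }
}

Wired into the pipeline it would look roughly like input.connect(logs).flatMap(new AdToBookingEnricher()).setParallelism(1). Note that env.readTextFile reads the file once and finishes, so picking up the hourly refresh still requires a source that re-reads the file, or one of the side-input approaches mentioned above.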

Mahout clustered points

When I run k-means clustering in Mahout I get two folders, clusters-x and clusteredPoints.
I have read the cluster centers using the cluster dumper, but I somehow can't get to clusteredPoints. Concretely, I need to do it from code.
The strange thing is that the file size in clusteredPoints is always 128 bytes, and when I try to loop through the results using the code below, it just falls out of the loop as if there were no results. But I do get the cluster centers, which leads to the assumption that the points were clustered.
IntWritable key = new IntWritable();
WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
while (reader.next(key, value)) {
System.out.println(
value.toString() + " belongs to cluster " + key.toString());
}
It just goes out of the loop?
It is really strange, any help would be great, thanks.
You need to open up your final cluster file ('clusteredPoints/part-m-0') with:
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path("output/clusteredPoints/part-m-0"), conf);
Then, assuming your keys are ints, iterate through it (as you already did) with:
IntWritable key = new IntWritable();
WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
while (reader.next(key, value)) {
LOG.info("{} belongs to cluster {}", value.toString(), key.toString());
}
reader.close();
I can post a fully working example if you still have trouble doing this.