I have a Dataflow Pipeline with streaming data, and I am using an Apache Beam Side Input of a bounded data source, which may have updates. How do I trigger a periodic update of this side input? E.g. The side input should be refreshed once every 12 hours.
With reference to https://beam.apache.org/documentation/patterns/side-inputs/, this is how I implemented the pipeline with side input:
PCollectionView<Map<Integer, Map<String, Double>>> sideInput = pipeline
// We can think of it as generating "fake" events every 5 minutes
.apply("Use GenerateSequence source transform to periodically emit a value",
GenerateSequence.from(0).withRate(1, Duration.standardMinutes(WINDOW_SIZE)))
.apply(Window.into(FixedWindows.of(Duration.standardMinutes(WINDOW_SIZE))))
.apply(Sum.longsGlobally().withoutDefaults()) // what does this do?
.apply("DoFn periodically pulls data from a bounded source", ParDo.of(new FetchData()))
.apply("Build new Window whenever side input is called",
Window.<Map<Integer, Map<String, Double>>>into(new GlobalWindows())
.triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
.discardingFiredPanes())
.apply(View.asSingleton());
pipeline
.apply(...)
.apply("Add location to Event",
ParDo.of(new DoFn<>).withSideInputs(sideInput))
.apply(...)
Is this the correct way of implementation?
You can follow the "Slowly updating side input using windowing" part of the link you mentioned. It suggests PeriodicImpulse, a transform that produces a sequence of elements at fixed runtime intervals, which you can use in place of the GenerateSequence trick and set to a 12-hour interval. (As for the Sum.longsGlobally().withoutDefaults() line you asked about: it combines all the ticks emitted in a window into a single element, and withoutDefaults() means nothing is emitted for empty windows, so FetchData runs exactly once per window.)
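For reference, a minimal sketch of what that looks like for your side input, assuming a 12-hour refresh and that your FetchData DoFn is adjusted to take the Instant emitted by PeriodicImpulse as its input (everything else is taken from your snippet):

PCollectionView<Map<Integer, Map<String, Double>>> sideInput = pipeline
    // Emit one element every 12 hours; applyWindowing() puts each tick into
    // its own fixed window, so the singleton view is refreshed per tick.
    .apply("Emit a tick every 12 hours",
        PeriodicImpulse.create()
            .withInterval(Duration.standardHours(12))
            .applyWindowing())
    // One pull from the bounded source per tick (FetchData now takes an Instant as input).
    .apply("DoFn periodically pulls data from a bounded source", ParDo.of(new FetchData()))
    .apply(View.asSingleton());

The main pipeline stays exactly as you have it, consuming the view via .withSideInputs(sideInput).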
I have a materialized view that is updated from many streams. Each stream enriches it partially, order doesn't matter, and updates arrive at unspecified times. Is the following algorithm a good approach?
An update comes in; I check what is stored in the materialized view via get(), see that this is the initial one, so I enrich and save.
A second update comes in and get() shows that a partial record already exists, so I add the next piece of information.
... and I continue in the same style.
If there is a query/join, the stored object has a method, isValid(), that indicates whether the update is still incomplete and that could be used in KafkaStreams#filter().
Is this a good plan? Is there any pattern in the Kafka Streams world that handles this case?
Please advise.
Your plan looks good, you have the general idea, but you'll have to use the lower-level Kafka Streams API: the Processor API.
There is a .transform() operator that allows you to access a KeyValueStore; inside its implementation you are free to decide whether your current aggregated value is valid or not, and therefore to send it downstream or return null while waiting for more information.
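A minimal sketch of that idea, assuming a StreamsBuilder builder, a KStream<String, UpdateEvent> updates, and hypothetical UpdateEvent/UserView types with from()/merge()/isValid() methods (none of these names come from your post):

// Register a state store that holds the partially built view per key.
StoreBuilder<KeyValueStore<String, UserView>> storeBuilder =
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("user-view-store"),
        Serdes.String(),
        userViewSerde); // your own Serde for the view type
builder.addStateStore(storeBuilder);

KStream<String, UserView> complete = updates.transform(
    () -> new Transformer<String, UpdateEvent, KeyValue<String, UserView>>() {
      private KeyValueStore<String, UserView> store;

      @Override
      @SuppressWarnings("unchecked")
      public void init(ProcessorContext context) {
        store = (KeyValueStore<String, UserView>) context.getStateStore("user-view-store");
      }

      @Override
      public KeyValue<String, UserView> transform(String key, UpdateEvent update) {
        UserView current = store.get(key); // null on the very first update for this key
        UserView merged = current == null ? UserView.from(update) : current.merge(update);
        store.put(key, merged);
        // Only forward downstream once the aggregate is complete;
        // returning null emits nothing and we keep waiting for more updates.
        return merged.isValid() ? KeyValue.pair(key, merged) : null;
      }

      @Override
      public void close() {}
    },
    "user-view-store");

On newer Kafka Streams versions the same thing can be done with process() and the Processor API types, but the shape is identical: keep the partial aggregate in the store and only forward it once it is valid.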
Context: I have a pipeline that listens to Pub/Sub; the messages are published by an object change notification from Google Cloud Storage. The pipeline processes the file using XmlIO, splitting it, so far so good.
The problem is: the Pub/Sub message (and the object stored in Google Cloud Storage) carries some metadata that I would like to merge with the data from XmlIO to compose the elements that the pipeline will process. How can I achieve this?
You can create a custom window and WindowFn that store the metadata from the Pub/Sub message that you want to use later to enrich the individual records.
Your pipeline will look as follows:
ReadFromPubsub -> Window.into(CopyMetadataToCustomWindowFn) -> ParDo(ExtractFilenameFromPubsubMessage) -> XmlIO -> ParDo(EnrichRecordsWithWindowMetadata) -> Window.into(FixedWindows.of(...))
To start, you'll want to create a subclass of IntervalWindow that stores the metadata you need. After that, create a subclass of WindowFn whose #assignWindows(...) copies the metadata from the Pub/Sub message into the IntervalWindow subclass you created; there is a sketch of both classes below. Apply your new WindowFn using the Window.into(...) transform. Now each of the records that flow through the XmlIO transform will be inside your custom window that carries the metadata.
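For illustration, a rough sketch of the two classes, assuming the metadata you care about is the object name and generation pulled from the PubsubMessage attributes and using an arbitrary one-hour window length; the field names, window length and coder are all placeholders you would adapt:

// Window that carries the Pub/Sub metadata alongside the usual interval bounds.
class MyCustomMetadataWindow extends IntervalWindow {
  private final String objectName;
  private final String objectGeneration;

  MyCustomMetadataWindow(Instant start, Instant end, String objectName, String objectGeneration) {
    super(start, end);
    this.objectName = objectName;
    this.objectGeneration = objectGeneration;
  }

  String getObjectName() { return objectName; }
  String getObjectGeneration() { return objectGeneration; }
}

// Coder for the custom window: interval bounds plus the two metadata strings.
class MyCustomMetadataWindowCoder extends CustomCoder<MyCustomMetadataWindow> {
  @Override
  public void encode(MyCustomMetadataWindow w, OutputStream out) throws IOException {
    InstantCoder.of().encode(w.start(), out);
    InstantCoder.of().encode(w.end(), out);
    StringUtf8Coder.of().encode(w.getObjectName(), out);
    StringUtf8Coder.of().encode(w.getObjectGeneration(), out);
  }

  @Override
  public MyCustomMetadataWindow decode(InputStream in) throws IOException {
    return new MyCustomMetadataWindow(
        InstantCoder.of().decode(in), InstantCoder.of().decode(in),
        StringUtf8Coder.of().decode(in), StringUtf8Coder.of().decode(in));
  }
}

// Assigns every PubsubMessage to a window that copies the message's attributes.
class CopyMetadataToCustomWindowFn extends NonMergingWindowFn<PubsubMessage, MyCustomMetadataWindow> {
  @Override
  public Collection<MyCustomMetadataWindow> assignWindows(AssignContext c) {
    PubsubMessage msg = c.element();
    return Collections.singletonList(
        new MyCustomMetadataWindow(
            c.timestamp(),
            c.timestamp().plus(Duration.standardHours(1)), // window length is arbitrary here
            msg.getAttribute("objectId"),
            msg.getAttribute("objectGeneration")));
  }

  @Override
  public boolean isCompatible(WindowFn<?, ?> other) {
    return other instanceof CopyMetadataToCustomWindowFn;
  }

  @Override
  public Coder<MyCustomMetadataWindow> windowCoder() {
    return new MyCustomMetadataWindowCoder();
  }

  @Override
  public WindowMappingFn<MyCustomMetadataWindow> getDefaultWindowMappingFn() {
    throw new UnsupportedOperationException("This window is not meant to be used as a side-input window");
  }
}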
For the second step, you'll need to extract the relevant filename from the Pub/Sub message to pass to the XmlIO transform as input.
For the third step, you want to extract the custom metadata from the window in a ParDo/DoFn that comes after XmlIO. The records output by XmlIO preserve the windowing information that was passed through it (note that not all transforms do this, but almost all do). You can state that your DoFn needs the window to be passed to your @ProcessElement method, for example:
class EnrichRecordsWithWindowMetadata extends DoFn<...> {
  @ProcessElement
  public void processElement(@Element XmlRecord xmlRecord, MyCustomMetadataWindow metadataWindow) {
    ... enrich record with metadata on window ...
  }
}
Finally, it is a good idea to revert to one of the standard WindowFns, such as FixedWindows, since the metadata on the window is no longer relevant.
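For example, right after the enrichment DoFn (the window size is arbitrary and enrichedRecords is just a placeholder name for that PCollection):

enrichedRecords
    .apply("Back to standard windowing",
        Window.into(FixedWindows.of(Duration.standardMinutes(5))));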
You can use Pub/Sub notifications for Google Cloud Storage directly instead of introducing OCN in the middle; Google also suggests using Pub/Sub. When you receive the Pub/Sub notification, you can read the message attributes from it:
# 'request' is the incoming push-subscription HTTP request (e.g. in a Flask / Cloud Functions handler)
data = request.get_json()
object_id = data['message']['attributes']['objectGeneration']
bucket_name = data['message']['attributes']['bucketId']
object_name = data['message']['attributes']['objectId']
I'm trying to figure out how I can raise a notification when a new value is inserted into my InfluxDB and send that notification to an HTTP endpoint with the data of the newly inserted measurement sample. I'm not sure whether this is what Kapacitor is for (I'm new to the TICK stack) or whether it's better to use another tool (any suggestion will be welcome).
Thanks in advance.
Best regards,
Albert.
In Kapacitor there are two types of task, namely batch and stream. The former is meant for processing historical data, while stream is for real-time purposes.
Looking at your requirement, stream is the way to go, as it will enable you to watch data from an InfluxDB measurement in real time. For invoking an endpoint from a TICK script you can use the HttpPostNode node.
Example (Pseudo code ONLY):
var data = stream
|from()
.database('myInfluxDB')
.retentionPolicy('autogen')
.measurement('measurement_ABCD')
|window()
.period(10s)
.every(10s)
data
|httpPost('http://your.service.url/api/endpoint_xyz')
In this instance the TICK script will watch for newly inserted data on the measurement measurement_ABCD over a window period of 10 seconds before doing an HTTP POST to the defined endpoint, and this entire process will repeat every 10 seconds.
That is, you have a tumbling window of 10 seconds.
Reference:
https://docs.influxdata.com/kapacitor/v1.3/nodes/http_post_node/
I'm getting into the Scala / Apache Spark world and I'm trying to understand how to create a "pipeline" that will generate a DataFrame based on the events I receive.
For instance, the idea is that when I receive a specific log/event I have to insert/update a row in the DF.
Let's make a real example.
I would like to create a DataFrame that represents the state of the users present in my database (Postgres, Mongo, whatever).
When I say state, I mean the current state of the user (ACTIVE, INCOMPLETE, BLOCKED, etc.). These states change based on the user's activity, so I will receive logs (JSON) with the key "status": "ACTIVE" and so on.
So, for example, I'm receiving logs from a Kafka topic. At some point I receive a log I'm interested in, because it contains useful information about the user (the status, etc.).
I take this log, and I create a DF with this log in it.
Then I receive a 2nd log, but this one was produced by the same user, so the row needs to be updated (if the status changed, of course!), so no new row, just an update of the existing one. Third log, new user, new information, so it's stored as a new row in the existing DF... and so on.
At the end of this process/pipeline, I should have a DF with the information of all the users present in my db and their "status" so then I can say "oh look at that, there are 43 users that are blocked and 13 that are active! Amazing!"
This is the idea.. the process must be in real time.
So far I've tried this using files, without connecting to a Kafka topic.
For instance, I've read files as follows:
val DF = mysession.read.json("/FileStore/tables/bm2ube021498209258980/exampleLog_dp_api-fac53.json","/FileStore/tables/zed9y2s11498229410434/exampleLog_dp_api-fac53.json")
which generates a DF with 2 rows with everything inside.
+--------------------+-----------------+------+--------------------+-----+
| _id| _index|_score| _source|_type|
+--------------------+-----------------+------+--------------------+-----+
|AVzO9dqvoaL5S78GvkQU|dp_api-2017.06.22| 1|[2017-06-22T08:40...|DPAPI|
| AVzO9dq5S78GvkQU|dp_api-2017.06.22| 1|[null,null,[Wrapp...|DPAPI|
+--------------------+-----------------+------+--------------------+-----+
_source contains all the nested fields (the status I mentioned is in here!).
Then I selected some useful information, like:
DF.select("_id", "_source.request.user_ip","_source.request.aw", "_type").show(false)
+--------------------+------------+------------------------------------+-----+
|_id |user_ip |aw |_type|
+--------------------+------------+------------------------------------+-----+
|AVzO9dqvoaL5S78GvkQU|111.11.11.12|285d5034-dfd6-44ad-9fb7-ba06a516cdbf|DPAPI|
|AVzO9dq5S78GvkQU |111.11.11.82|null |DPAPI|
+--------------------+------------+------------------------------------+-----+
Again, the idea is to build this DF from the logs arriving on a Kafka topic and upsert each log into it.
I hope I explained it well. I don't want a "code" solution, I'd prefer hints or examples on how to achieve this result.
Thank you.
As you are looking for resources I would suggest the following:
Have a look at the Spark Streaming Programming Guide (https://spark.apache.org/docs/latest/streaming-programming-guide.html) and the Spark Streaming + Kafka Integration Guide (https://spark.apache.org/docs/2.1.0/streaming-kafka-0-10-integration.html).
Use the Spark Streaming + Kafka Integration Guide for information on how to open a Stream with your Kafka content.
Then have a look at the possible transformations you can perform with Spark Streaming in the chapter "Transformations on DStreams" of the Spark Streaming Programming Guide.
Once you have transformed the stream in a way that lets you perform a final operation on it, have a look at "Output Operations on DStreams" in the Spark Streaming Programming Guide. I think foreachRDD in particular could be what you are looking for, as it lets you run arbitrary logic (like checking whether a certain keyword is in your string and, based on that, doing a database call) on each micro-batch of the stream.
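Since you asked for hints rather than code, here is only a bare-bones sketch of what the Kafka wiring and foreachRDD look like (Java shown here; the Scala API is analogous). The broker address, topic name and group id are made up for illustration:

// sparkConf: your existing SparkConf
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "localhost:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "user-status-pipeline");

JavaInputDStream<ConsumerRecord<String, String>> stream =
    KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(
            Collections.singletonList("user-logs"), kafkaParams));

stream
    .map(ConsumerRecord::value)   // the raw JSON log, same shape as your file-based example
    .foreachRDD(rdd -> {
      if (!rdd.isEmpty()) {
        SparkSession spark = SparkSession.builder().config(rdd.context().getConf()).getOrCreate();
        Dataset<Row> batch = spark.read().json(spark.createDataset(rdd.rdd(), Encoders.STRING()));
        // From here you can select _id, _source.request.user_ip, the status, etc.,
        // and upsert the rows into wherever you keep the current per-user state
        // (a table, a key-value store, or a view you rebuild for each micro-batch).
      }
    });

jssc.start();
jssc.awaitTermination();

Note that a DataFrame itself is immutable, so the "upsert" really means maintaining the latest state per user somewhere (an external store, or keyed state via something like mapWithState) and deriving the DataFrame from that.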
I receive an error from time to time due to data in my source data set being incompatible with my target data set. I would like to control the action the pipeline takes based on the error type, for example outputting or dropping those particular rows while completing everything else. Is that possible? Furthermore, is there a simple way to get hold of the actual failing line(s) from Data Factory without accessing and searching the actual source data set?
Copy activity encountered a user error at Sink side: ErrorCode=UserErrorInvalidDataValue,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column 'Timestamp' contains an invalid value '11667'. Cannot convert '11667' to type 'DateTimeOffset'.,Source=Microsoft.DataTransfer.Common,''Type=System.FormatException,Message=String was not recognized as a valid DateTime.,Source=mscorlib,'.
Thanks
I think you've hit a fairly common problem and limitation within ADF. Although the datasets you define with your JSON allow ADF to understand the structure of the data, that is all it gets: just the structure. The orchestration tool can't do anything to transform or manipulate the data as part of the activity processing.
To answer your question directly: it's certainly possible. But you need to break out the C# and use ADF's extensibility functionality to deal with your bad rows before passing the data on to its final destination.
I suggest you expand your data factory to include a custom activity where you can build some lower-level cleaning processes to divert the bad rows as described.
This is an approach we often take, as not all data is perfect (I wish) and plain ETL or ELT doesn't cut it. I prefer the acronym ECLT, where the 'C' stands for clean (or cleanse, prepare, etc.). This certainly applies to ADF, because the service doesn't have its own compute or SSIS-style data flow engine.
So...
In terms of how to do this. First I recommend you check out this blog post on creating ADF custom activities. Link:
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
Then within your C# class inherited from IDotNetActivity do something like the below.
public IDictionary<string, string> Execute(
    IEnumerable<LinkedService> linkedServices,
    IEnumerable<Dataset> datasets,
    Activity activity,
    IActivityLogger logger)
{
    //etc

    // YourSource / YourDestination: streams over your input and output blobs/files
    using (StreamReader vReader = new StreamReader(YourSource))
    {
        using (StreamWriter vWriter = new StreamWriter(YourDestination))
        {
            while (!vReader.EndOfStream)
            {
                //data transform logic, if bad row etc
            }
        }
    }

    // The interface expects a property bag back; empty is fine if there is nothing to pass on.
    return new Dictionary<string, string>();
}
You get the idea. Build your own SSIS data flow!
Then write out your clean row as an output dataset, which can be the input for your next ADF activity. Either with multiple pipelines, or as chained activities within a single pipeline.
This is the only way you will get ADF to deal with your bad data in the current service offerings.
Hope this helps