Features of GStreamer - streaming

Does GStreamer have the following functionalities/features, or is it possible to implement them on top of GStreamer:
Time windows: set up the graph such that a sink pad of one element does not just receive the current frame, but also the n previous frames and m future frames. This should also work when seeking to a new position.
No data copies when passing data between elements, instead reusing the same buffer.
Having shared data between multiple elements on different branches, data that changes with time but is buffered in such a way that all elements get the same value for it at the same frame index.

Q1) Time windows
You need to write your plugin using GstAdapter.
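Purely for illustration (in Scala rather than C, and not using any GStreamer API), here is a minimal sketch of the kind of n-previous / m-future frame window such a plugin would maintain internally; in a real plugin, GstAdapter would play the role of this buffer, and the flush would happen on a seek/flush event:

import scala.collection.immutable.Queue

// Hypothetical sketch, not GStreamer API: buffer incoming frames and emit a
// window of (n previous, current, m future) frames once enough lookahead exists.
final class FrameWindow[A](nPrev: Int, mFuture: Int) {
  private var buffer = Queue.empty[A]

  // Push a frame; returns a window once nPrev + 1 + mFuture frames are buffered.
  def push(frame: A): Option[(Seq[A], A, Seq[A])] = {
    buffer = buffer.enqueue(frame)
    if (buffer.size < nPrev + 1 + mFuture) None
    else {
      val frames = buffer.toVector
      val window = (frames.take(nPrev), frames(nPrev), frames.drop(nPrev + 1))
      buffer = buffer.dequeue._2 // slide forward by one frame for the next push
      Some(window)
    }
  }

  // On a seek (flush), drop everything and start accumulating again.
  def flush(): Unit = buffer = Queue.empty
}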
Q2) No data copies when passing data between elements
It's done by default. No data is copied from element to element unless required; only a pointer to a GstBuffer instance is passed along. If an element, such as an encoder or a filter, needs to work on a buffer to produce new data, a new GstBuffer instance is created, with the newly generated data held in GstMemory.
Q3) Having shared data between multiple elements
I'm not sure exactly what you mean, but it may be possible to achieve what you want by sharing GstMemory. Take a look at gst_memory_share(), gst_buffer_copy_region(), or gst_adapter_get_buffer().

Related

How to "join" a frequently updating stream with an irregularly updating stream in Apache Beam?

I have a stream of measurements keyed by an ID, PCollection<KV<ID,Measurement>>, and something like a changelog stream of additional information for that ID, PCollection<KV<ID,SomeIDInfo>>. New data is added to the measurement stream quite regularly, say once per second for every ID. The stream with additional information, on the other hand, is only updated when a user performs a manual re-configuration. We can't tell how often this happens and, in particular, the update frequency may vary among IDs.
My goal is now to enrich each entry in the measurements stream by the additional information for its ID. That is, the output should be something like PCollection<KV<ID,Pair<Measurement,SomeIDInfo>>>. Or, in other words, I would like to do a left join of the measurements stream with the additional information stream.
I would expect this to be a quite common use case. Coming from Kafka Streams, I know this can be implemented quite easily with a KStream-KTable join. With Beam, however, none of my approaches so far seem to work. I have already thought about the following ideas.
Idea 1: CoGroupByKey with fixed time windows
Applying a window to the measurements stream would not be an issue. However, as the additional information stream is updating irregularly and also significantly less frequently than the measurements stream, there is no reasonable common window size such that there is at least one updated information for each ID.
Idea 2: CoGroupByKey with a global window and a non-default trigger
Refining the previous idea, I thought about using a processing-time trigger, which fires e.g. every 5 seconds. The issue with this idea is that I need to use accumulatingFiredPanes() for the additional information, as there might be no new data for a key between two firings, but I have to use discardingFiredPanes() for the measurements stream, as otherwise my panes would quickly become too large. This simply does not work: when I configure my pipeline that way, the additional information stream also discards changes. Setting both triggers to accumulating works, but, as I said, this is not scalable.
Idea 3: Side inputs
Another idea would be to use side inputs, but this solution is not really scalable either, at least unless I'm missing something. With side inputs, I would create a PCollectionView from the additional information stream, which is a map of IDs to the (latest) additional information. The "join" can then be done in a DoFn with a side input of that view. However, the view seems to be shared by all instances that consume the side input. (It's a bit hard to find any information regarding this.) We would like not to make any assumptions regarding the number of IDs and the size of the additional info. Thus, using a side input also does not seem to work here.
The side input option you discuss is currently the best option, although you are correct about the scalability concern due to the side input being broadcast to all workers.
Alternatively, you can store the infrequently-updated side in an external key-value store and just do lookups from a DoFn. If you go this route, it's generally useful to do a GroupByKey first on the main input with ID as a key, which lets you cache the lookups with a good cache-hit ratio.
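To make the second suggestion a bit more concrete, here is a rough sketch in plain Scala (not Beam API; lookupInfo stands in for a call to your external key-value store). After grouping by ID, a single lookup serves every measurement in that group, which is what gives the good cache-hit ratio:

// Sketch only: `grouped` corresponds to the output of a GroupByKey on ID.
def enrich[ID, M, Info](
    grouped: Iterable[(ID, Iterable[M])],
    lookupInfo: ID => Option[Info]          // hypothetical external key-value store lookup
): Iterable[(ID, (M, Option[Info]))] =
  grouped.flatMap { case (id, measurements) =>
    val info = lookupInfo(id)               // one external call per key, not per measurement
    measurements.map(m => (id, (m, info)))  // left join: info may be absent for unknown IDs
  }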

KubeFlow, handling large dynamic arrays and ParallelFor with current size limitations

I've been struggling to find a good solution for this for the past day and would like to hear your thoughts.
I have a pipeline which receives a large & dynamic JSON array (containing only stringified objects),
I need to be able to create a ContainerOp for each entry in that array (using dsl.ParallelFor).
This works fine for small inputs.
Right now the array comes in as a file HTTP URL due to the pipeline input argument size limitations of Argo and Kubernetes (or that is what I understood from the current open issues), but when I try to read the file in one Op to use it as input for the ParallelFor, I run into the output size limitation.
What would be a good & reusable solution for such a scenario?
Thanks!
the array comes in as a file HTTP URL due to the pipeline input argument size limitations of Argo and Kubernetes
Usually the external data is first imported into the pipeline (downloaded and output). Then the components use inputPath and outputPath to pass big data pieces as files.
The size limitation only applies to data that you consume as a value (using inputValue) instead of as a file (using inputPath).
The loops consume the data by value, so the size limit applies to them.
What you can do is make this data smaller. For example, if your data is a JSON list of big objects [{obj1}, {obj2}, ... , {objN}], you can transform it into a list of indexes [1, 2, ... , N], pass that list to the loop, and then inside the loop have a component that uses the index and the data file to select a single piece to work on (N -> {objN}).
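A rough sketch of that index trick (plain Scala, not the KFP DSL; in the real pipeline the loop body would be a component that receives the data file plus one index):

// The big array stays in a file; only the small list of indexes is passed by value.
val bigObjects: Vector[String] = Vector("{obj1}", "{obj2}", "{obj3}") // stands in for the JSON file contents
val indexes: Vector[Int]       = bigObjects.indices.toVector          // tiny payload for the ParallelFor

indexes.foreach { i =>
  val obj = bigObjects(i) // inside the loop: re-read the file and pick element i
  println(s"would process item $i: $obj")
}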

How to model very large work queues in Akka?

I am writing a scala script to download all items from the hacker news API. There are ~12M items, each being a JSON of ~200 bytes.
I identified the following issues:
Storing the data: I tried to save each item as a single JSON file, but it became very hard even to list them (using Linux, ext4 file system). So I changed it to just append JSON items to multiple (100) files (by taking the item's id modulo 100).
Keeping track of what has been downloaded, because I want to be able to stop/continue the application. First I tried writing the downloaded ids to a text file, but it turned out to be a bit buggy. So now I just read all the items and collect the ids. (It works.)
All this is done with 1 Master actor and an arbitrary number of Worker actors (tens). The Master has a Queue[Int] and pops it when Workers ask for work.
The problem I am having is fairly simple but I haven't been able to solve it in a nice way.
I can collect the ids of items already downloaded into a list. But what I really need is the complement of that set: all the item ids I have not downloaded, up to the highest item id.
I tried using a range (1 to maxItemId) and subtracting the set of done jobs, but it is really, really slow.
Now I am using a Stream, and when a worker asks for a job, I check whether the stream's head (the next job) has already been done. If not, I give it to the Worker; otherwise I check the next one.
The problem with this approach is that I cannot put jobs back into the stream if they fail. That would be easy with the Queue; but then again I am having trouble just setting up the queue with millions of items.
What could be a better approach to this? I don't think the issues here are trivial; this is a very large number of tasks to perform and keep track of, but it shouldn't be this hard either.
Thanks!
As far as I understood your question, I think you don't need a very complicated data structure here.
Assuming your ids are sequential from 1 to maxItemId, you can use a Boolean array of size maxItemId to keep track of processed items. You initialize this array by reading the processed ids, and you find the next job by searching for the next false entry.
Assuming that your maxItemId is around 12M, iterating over all items is pretty much instantaneous.
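A minimal sketch of that bookkeeping (maxItemId and the list of already-downloaded ids are placeholders for values you already have):

val maxItemId = 12000000                        // placeholder: the highest item id you know about
val alreadyDownloadedIds: Seq[Int] = Seq.empty  // placeholder: ids collected from your output files

// One flag per id; index 0 is unused if ids start at 1.
val done = new Array[Boolean](maxItemId + 1)
alreadyDownloadedIds.foreach(id => done(id) = true)

// Hand out the next pending job; a linear scan over ~12M Booleans is effectively instantaneous.
def nextJob(from: Int): Option[Int] = (from to maxItemId).find(id => !done(id))

// A failed job is "re-queued" simply by leaving (or resetting) its flag.
def markFailed(id: Int): Unit = done(id) = false
def markDone(id: Int): Unit   = done(id) = true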

How should I store my large MATLAB data files during analysis?

I am having issues with 'data overload' while processing point cloud data in MATLAB. This is what I am currently doing:
I begin with my raw data files, each on the order of ~30 MB.
I then do initial processing on them to extract n individual objects and remove outlying points, which are all combined into a 1 x n structure, testset, saved into testset.mat (~100 MB).
So far so good. Now things become complicated:
For each point in each object in testset, I will compute one of a number of features, which ends up being a matrix of some size (for each point). The size of the matrix, and some other properties of the computation, are parameters of the calculations. I save these computed features in a 1 x n cell array, each cell of which contains an array of the matrices for each point.
I then save this cell array in a .mat file, whose name specifies the parameters, the name of the test data used, and the types of features extracted. For example:
testset_feature_type_A_5x5_0.2x0.2_alpha_3_beta_4.mat
Now for each of these files, I then do some further processing (using a classification algorithm). Again there are more parameters to set.
So now I am in a tricky situation, where each final piece of the initial data has come through some path, but the path taken (and the parameters set along that path) is not intrinsically held with the data itself.
So my question is:
Is there a better way to do this? Can anyone who has experience in working with large datasets in MATLAB suggest a way to store the data and the parameter settings more efficiently, and more integrally?
Ideally, I would be able to look up a certain piece of data without having to use regex on the file strings—but there is also an incentive to keep individually processed files separate to save system memory when loading them in (and to help prevent corruption).
The time taken for each calculation (some ~2 hours) prohibits computing data 'on the fly'.
For a similar problem, I have created a class structure that does the following:
Each object is linked to a raw data file
For each processing step, there is a property
The set method of each property saves the data to a file (in a directory with the same name as the raw data file), stores the file name, and updates a "status" property to indicate that this step is done.
The get method of the properties loads the data if the file name has been stored and the status indicates "done".
Finally, the objects can be saved/loaded, so that I can do some processing now, save the object, and later load it and immediately know how far along the particular data set is in the processing pipeline.
Thus, the only data in memory is the data that is currently being worked on, and you can easily know which data set is at which processing stage. Furthermore, if you set up your methods to accept arrays of objects, you can do very convenient batch processing.
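Sketched in Scala rather than MATLAB for brevity (the file naming, the computeFeatures step, and the .features extension are just placeholders), one such lazily cached processing step might look like this:

import java.nio.file.{Files, Path}

// Sketch of one processing step; a real class would have one such property per step.
final class DataSet(rawFile: Path, computeFeatures: Path => Array[Byte]) {
  private val featureFile = rawFile.resolveSibling(rawFile.getFileName.toString + ".features")

  def featuresDone: Boolean = Files.exists(featureFile)   // the "status" property

  def features: Array[Byte] =
    if (featuresDone) Files.readAllBytes(featureFile)     // get: load the cached result
    else {
      val result = computeFeatures(rawFile)               // the expensive (~2 h) computation
      Files.write(featureFile, result)                    // set: persist and thereby mark the step done
      result
    }
}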
I'm not completely sure if this is what you need, but the save command allows you to store multiple variables inside a single .mat file. If your parameter settings are, for example, stored in an array, then you can save this together with the data set in a single .mat file. Upon loading the file, both the dataset and the array with parameters are restored.
Or do you want to be able to load the parameters without loading the file? Then I would personally opt for the cheap solution of having a second set of files with just the parameters (but similar filenames).

What is the difference between a view and a stream?

In the Scala 2.8 collections framework, what is the difference between view and toStream?
In a view elements are recomputed each time they are accessed. In a stream elements are retained as they are evaluated.
For example:
val doubled = List(1,2,3,4,5,6,7,8,9,10).view.map(_*2)
println(doubled.mkString(" "))
println(doubled.mkString(" "))
will re-evaluate the map for each element twice: once for the first println, and again for the second. In contrast,
val doubled = List(1,2,3,4,5,6,7,8,9,10).toStream.map(_*2)
println(doubled.mkString(" "))
println(doubled.mkString(" "))
will only double the elements once.
A view is like a recipe to create a collection. When you ask for elements of a view it carries out the recipe each time.
A stream is like a guy with a bunch of dry-erase cards. The guy knows how to compute subsequent elements of the collection. You can ask him for the next element of the collection, and he gives you a card with the element written on it and a string tied from the card to his finger (to help him remember). Also, before he gives you a card, he unties the first string from his finger and ties it to the new card.
If you hold onto the first card (i.e. keep a reference to the head of the stream) you might eventually run out of cards (i.e. memory) when you ask for the next element, but if you don't need to go back to the first elements you can cut the string and hand the unneeded cards back to the guy and he can re-use them (they're dry-erase after all). This is how a stream can represent an infinite sequence without running out of memory.
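For example, a Stream can describe an infinite sequence, and only the prefix you actually force is ever computed (and then memoized):

val naturals = Stream.from(1)          // conceptually infinite; elements are produced on demand
val doubled  = naturals.map(_ * 2)     // doubling is applied as elements are traversed, then memoized
println(doubled.take(5).toList)        // List(2, 4, 6, 8, 10): only a five-element prefix is forced
println(doubled.take(5).toList)        // the already-evaluated prefix is reused, not recomputed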
Geoff's answer covers almost everything, but I want to add that a Stream is a List-like sequence, while every kind of collection (maps, sets, indexed seqs) has a view.
Another way to explain this, if you know Apache Spark, is that using a stream is like caching a Spark dataset, whereas using a view is like using an uncached dataset: every time you call an action on it, everything in the DAG is re-evaluated.