Migrating lowpass filter from scriptProcessor (onaudioprocess) to AudioWorkletProcessor (process) - web-audio-api

I'm facing an issue while migrating my library from the deprecated scriptProcessor to AudioWorklet.
Current implementation with ScriptProcessor
It currently uses the AudioProcessingEvent's inputBuffer property, which is an AudioBuffer.
I apply a lowpass filter to this inputBuffer using an OfflineAudioContext, then analyze the peaks of the bass frequencies to count and compute BPM candidates.
The issue is that the lowpass filtering can't be done within the AudioWorkletProcessor (OfflineAudioContext is not defined in the worklet scope).
How can I apply a lowpass filter to the samples provided by the process method of an AudioWorkletProcessor, the same way it's doable with the onaudioprocess event data? Thanks
[SOLUTION] AudioWorklet implementation
For the end user it will look like this:
import { createRealTimeBpmProcessor } from 'realtime-bpm-analyzer';
const realtimeAnalyzerNode = await createRealTimeBpmProcessor(audioContext);
// Set the source with the HTML Audio Node
const track = document.getElementById('track');
const source = audioContext.createMediaElementSource(track);
// Lowpass filter
const filter = audioContext.createBiquadFilter();
filter.type = 'lowpass';
// Connect stuff together
source.connect(filter).connect(realtimeAnalyzerNode);
source.connect(audioContext.destination);
realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM') {
    console.log('BPM', event);
  }
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event);
  }
};
You can find the full code under version 3 (pre-released right now).

You could make sure to apply the lowpass filter to the signal before it reaches the AudioWorkletNode. Something like this should work.
const biquadFilterNode = new BiquadFilterNode(audioContext);
const audioWorkletNode = new AudioWorkletNode(
  audioContext,
  'the-name-of-your-processor'
);
yourInput
  .connect(biquadFilterNode)
  .connect(audioWorkletNode);
As a result your process() function inside the AudioWorkletProcessor gets called with the filtered signal.
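For illustration, a minimal sketch of the receiving processor could look like this (the class name and the analysis placeholder are assumptions, not part of the original answer):
// Runs in the AudioWorkletGlobalScope; the input is already filtered
// by the BiquadFilterNode connected upstream.
class MyProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const input = inputs[0]; // first input, one Float32Array per channel
    if (input.length > 0) {
      const samples = input[0]; // channel 0, typically 128 samples
      // ...analyze the filtered samples here...
    }
    return true; // keep the processor alive
  }
}
registerProcessor('the-name-of-your-processor', MyProcessor);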
However, I think your current implementation doesn't really use a lowpass filter. I might be wrong, but it looks like startRendering() is never called, which means the OfflineAudioContext is not processing any data.
If that is true you may not need the lowpass filter at all for your algorithm to work.

Related

How to combine the elements of an arbitrary number of dependent Fluxes?

In the non-reactive world the following code snippet is nothing special:
interface Enhancer {
  Result enhance(Result result);
}
Result result = Result.empty();
result = fooEnhancer.enhance(result);
result = barEnhancer.enhance(result);
result = bazEnhancer.enhance(result);
There are three different Enhancer implementations taking a Result instance, enhancing it and returning the enhanced result. Let's assume the order of the enhancer calls matters.
Now what if these methods are replaced by reactive variants returning a Flux<Result>? Because the methods depend on the result(s) of the preceding method, we cannot use combineLatest here.
A possible solution could be:
Flux.just(Result.empty())
    .switchMap(result1 -> first(result1)
        .switchMap(result2 -> second(result2)
            .switchMap(result3 -> third(result3))))
    .subscribe(result -> doSomethingWith(result));
Note that the switchMap calls are nested. As we are only interested in the final result, we let switchMap switch to the next flux as soon as new events are emitted in preceding fluxes.
Now let's try to do it with a dynamic number of fluxes. Non-reactive (without fluxes), this would again be nothing special:
List<Enhancer> enhancers = <ordered list of different Enhancer impls>;
Result result = Result.empty();
for (Enhancer enhancer : enhancers) {
  result = enhancer.enhance(result);
}
But how can I generalize the above reactive example with three fluxes to deal with an arbitrary number of fluxes?
I found a solution using recursion:
@FunctionalInterface
interface FluxProvider {
  Flux<Result> get(Result result);
}
// recursive method creating the final Flux
private Flux<Result> cascadingSwitchMap(Result input, List<FluxProvider> fluxProviders, int idx) {
  if (idx < fluxProviders.size()) {
    return fluxProviders.get(idx).get(input).switchMap(result -> cascadingSwitchMap(result, fluxProviders, idx + 1));
  }
  return Flux.just(input);
}
// code using the recursive method
List<FluxProvider> fluxProviders = new ArrayList<>();
fluxProviders.add(fooEnhancer::enhance);
fluxProviders.add(barEnhancer::enhance);
fluxProviders.add(bazEnhancer::enhance);
cascadingSwitchMap(Result.empty(), fluxProviders, 0)
    .subscribe(result -> doSomethingWith(result));
But maybe there is a more elegant solution using an operator/feature of Project Reactor. Does anybody know of such a feature? In fact, the requirement doesn't seem to be such an unusual one, does it?
switchMap feels inappropriate here. If you have a List<Enhancer> by the time the Flux pipeline is declared, why not apply logic close to what you had in the imperative style:
List<Enhancer> enhancers = <ordered list of different Enhancer impls>;
Mono<Result> resultMono = Mono.just(Result.empty());
for (Enhancer enhancer : enhancers) {
  resultMono = resultMono.map(enhancer::enhance); // previousValue -> enhancer.enhance(previousValue)
}
return resultMono;
That can even be performed later at subscription time for even more dynamic resolution of the enhancers by wrapping the whole code above in a Mono.defer(() -> {...}) block.
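A minimal sketch of that deferred variant (assuming the enhancers list is in scope, as above):
Mono<Result> deferred = Mono.defer(() -> {
  // The supplier runs per subscriber, so the enhancer list is
  // resolved at subscription time rather than at assembly time.
  Mono<Result> resultMono = Mono.just(Result.empty());
  for (Enhancer enhancer : enhancers) {
    resultMono = resultMono.map(enhancer::enhance);
  }
  return resultMono;
});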

ag-Grid set filter and sort model without triggering event

I am updating the sort & filter models via the API:
this.gridApi.setFilterModel(filterModels);
this.gridApi.setSortModel(sortModels);
The problem with this is that I have a server request bound to the change event of both sort & filter, so when the user changes them the data is updated. This means that when I change a model in code, like restoring a state or resetting the filters, it causes multiple requests.
Is there a way to update the filter/sort model without triggering the event?
I see there is a ColumnEventType parameter but couldn't see how it works. Can I specify some variable that I can look for inside my event handlers to get them to ignore calls that are not generated by the user?
I am trying to manage URL state, so when URL query params change my code sets the models in the grids, but this ends up causing the page to reload multiple times, because the onFilter and onSort events get called when the model is set and I can't find a way to prevent this.
For the time being, you are going to have to manage this yourself, i.e., just before you call setModel, somehow flag this in a shared part of your app (maybe a global variable).
Then when you react to these events, check the state of this flag to work out where the change came from.
Note that at the moment we have added source to the column events, but not yet to the model events; we are planning to add it, but we have no ETA.
Hope this helps
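A minimal sketch of that flag approach (the variable and function names here are made up for illustration):
// Shared flag, could live anywhere app-wide
let isRestoringState = false;

function restoreGridState(filterModels, sortModels) {
  isRestoringState = true;
  gridApi.setFilterModel(filterModels);
  gridApi.setSortModel(sortModels);
  // Assumes the change events fire synchronously during the calls above
  isRestoringState = false;
}

function onFilterChanged(event) {
  if (isRestoringState) return; // programmatic change, skip the request
  requestDataFromServer(); // hypothetical server call
}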
I had to solve a similar issue. I found a solution which works for my kind of situation. Maybe this helps someone.
for (let j = 0; j < orders.length; j++) {
  const sortModelEntry = orders[j];
  if (typeof sortModelEntry.property === 'string') {
    const column: Column = this.gridColumnApi.getColumn(sortModelEntry.property);
    if (column && !column.getColDef().suppressSorting) {
      column.setSort(sortModelEntry.direction.toLowerCase());
      column.setSortedAt(j);
    }
  }
}
this.gridApi.refreshHeader();
Where orders is an array of key-value objects where the key is the name of a column and the value is the sort direction (asc/desc).
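For illustration, orders might look like this (hypothetical values):
const orders = [
  { property: 'price', direction: 'ASC' },
  { property: 'name', direction: 'DESC' }
];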
Setting the filter without a refresh was more complicated:
for (let j = 0; j < filters.length; j++) {
  const filterModelEntry = filters[j];
  if (typeof filterModelEntry.property === 'string') {
    const column: Column = this.gridColumnApi.getColumn(filterModelEntry.property);
    if (column && !column.getColDef().suppressFilter) {
      const filter: any = this.gridApi.getFilterApi(filterModelEntry.property);
      filter['filter'] = filterModelEntry.command;
      filter['defaultFilter'] = filterModelEntry.command;
      filter['eTypeSelector'].value = filterModelEntry.command;
      filter['filterValue'] = filterModelEntry.value;
      filter['filterText'] = filterModelEntry.value;
      filter['eFilterTextField'].value = filterModelEntry.value;
      column.setFilterActive(true);
    }
  }
}
Attributes in filter:
property - name of column
command - filter action (contains, equals, ...)
value - value used in filter
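For illustration, filters might look like this (hypothetical values):
const filters = [
  { property: 'status', command: 'contains', value: 'active' }
];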
For anyone else looking for a solution to this issue in Nov 2020, tapping into onFilterModified() might help. This gets called before onFilterChanged() so setting a value here (eg. hasUserManuallyChangedTheFilters = false, etc.) and checking the same in the filter changed event is a possible workaround. Although, I haven't found anything similar for onSortChanged() event, one that gets called before the sorting is applied to the grid.
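A minimal sketch of that idea (the wiring is made up for illustration and relies on the claim above that onFilterModified() fires before onFilterChanged()):
let hasUserManuallyChangedTheFilters = false;

function onFilterModified(event) {
  // Fired when the filter UI is edited, before onFilterChanged
  hasUserManuallyChangedTheFilters = true;
}

function onFilterChanged(event) {
  if (!hasUserManuallyChangedTheFilters) return; // programmatic change
  hasUserManuallyChangedTheFilters = false;
  saveFilterModel(); // hypothetical reaction to a user-driven change
}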
I am not sure of any clean way to achieve this, but I noticed that FilterChangedEvent has "afterFloatingFilter = false" only if the filter model was updated from the UI.
My workaround is below:
onFilterChanged = (event: FilterChangedEvent) => {
  if (event.afterFloatingFilter === undefined) return;
  console.log("SaveFilterModel");
}

Spark - how to handle with lazy evaluation in case of iterative (or recursive) function calls

I have a recursive function that needs to compare the results of the current call to the previous call to figure out whether it has reached convergence. My function does not contain any action; it only contains map, flatMap, and reduceByKey. Since Spark does not evaluate transformations (until an action is called), my next iteration does not get the proper values to compare for convergence.
Here is a skeleton of the function -
def func1(sc: SparkContext, nodes: RDD[List[Long]], didConverge: Boolean, changeCount: Int): RDD[List[Long]] = {
  if (didConverge)
    nodes
  else {
    val currChangeCount = sc.accumulator(0, "xyz")
    val newNodes = performSomeOps(nodes, currChangeCount) // does a few map/flatMap/reduceByKey operations
    if (currChangeCount.value == changeCount) {
      func1(sc, newNodes, true, currChangeCount.value)
    } else {
      func1(sc, newNodes, false, currChangeCount.value)
    }
  }
}
performSomeOps only contains map, flatMap, and reduceByKey transformations. Since it does not have any action, the code in performSomeOps does not execute, so my currChangeCount does not get the actual count. That implies the condition that checks for convergence (currChangeCount.value == changeCount) is going to be invalid. One way to overcome this is to force an action within each iteration by calling count, but that is unnecessary overhead.
I am wondering what I can do to force an action without much overhead, or whether there is another way to address this problem.
I believe there is a very important thing you're missing here:
For accumulator updates performed inside actions only, Spark guarantees that each task’s update to the accumulator will only be applied once, i.e. restarted tasks will not update the value. In transformations, users should be aware of that each task’s update may be applied more than once if tasks or job stages are re-executed.
Because of that, accumulators cannot be reliably used for managing control flow and are better suited for job monitoring.
Moreover, executing an action is not unnecessary overhead. If you want to know the result of a computation, you have to perform it, unless of course the result is trivial. The cheapest action possible is:
rdd.foreach { case _ => }
but it won't address the problem you have here.
In general iterative computations in Spark can be structured as follows:
def func1(checkpointInterval: Int)(sc: SparkContext, nodes: RDD[List[Long]],
    didConverge: Boolean, changeCount: Int, iteration: Int): RDD[List[Long]] = {
  if (didConverge) nodes
  else {
    // Compute and cache new nodes
    val newNodes = performSomeOps(nodes).cache
    // Periodically checkpoint to avoid stack overflow
    if (iteration % checkpointInterval == 0) newNodes.checkpoint
    /* Call a function which computes values
       that determine control flow. This executes an action on newNodes. */
    val newChangeCount = computeChangeCount(newNodes)
    // Unpersist old nodes
    nodes.unpersist
    func1(checkpointInterval)(
      sc, newNodes, newChangeCount == changeCount,
      newChangeCount, iteration + 1
    )
  }
}
I see that these map/flatMap/reduceByKey transformations are updating an accumulator. Therefore the only way to perform all updates is to execute all these transformations, and count is the easiest way to achieve that, with the lowest overhead compared to the other options (cache + count, first or collect).
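A minimal sketch of that idea, reusing the names from the question's skeleton:
// Force the map/flatMap/reduceByKey stages with count so the
// accumulator actually receives its updates before we read it.
val currChangeCount = sc.accumulator(0, "xyz")
val newNodes = performSomeOps(nodes, currChangeCount)
newNodes.count() // executes the whole lineage
val observedChangeCount = currChangeCount.value // now populated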
Previous answers put me on the right track to solve a similar convergence detection problem.
foreach is presented in the docs as:
foreach(func) : Run a function func on each element of the dataset. This is usually done for side effects such as updating an Accumulator or interacting with external storage systems.
It seems like instead of using rdd.foreach() as a cheap action to trigger accumulator increments placed in various transformations, it should be used to do the incrementing itself.
I'm unable to produce a Scala example, but here's a basic Java version, if it can still help:
// Convergence is reached when two iterations
// return the same number of results
long previousCount = -1;
long currentCount = 0;
while (previousCount != currentCount) {
  rdd = doSomethingThatUpdatesRdd(rdd);
  // Count entries in new rdd with foreach + accumulator
  rdd.foreach(tuple -> accumulator.add(1));
  // Update helper values
  previousCount = currentCount;
  currentCount = accumulator.sum();
  accumulator.reset();
}
// Convergence is reached

jayData complex filter evaluation

I am new to JayData and am trying to filter on an entity set. The filter needs to perform a complex evaluation beyond what I saw in the samples.
Here is a working sample of what I am trying to accomplish (the listView line isn't essential; it's just there to show what I plan to do with the data):
function () {
  var weekday = moment().isoWeekday() - 1;
  console.log(weekday);
  var de = leagueDB.DailyEvents.toArray(function (events) {
    console.log(events);
    var filtered = [];
    for (var e = 0; e < events.length; e++) {
      console.log(events[e]);
      console.log(events[e].RecurrenceRule);
      var rule = RRule.fromString(events[e].RecurrenceRule);
      var ruleOptions = rule.options.byweekday;
      var isDay = ruleOptions.indexOf(weekday);
      console.log(ruleOptions, isDay);
      if (isDay !== -1) {
        filtered.push(events[e]);
      }
    }
    $("#listView").kendoListView({ dataSource: filtered });
  });
}
Basically it is just evaluating a recurrence rule string to see if the current day meets the criteria; if so, it adds that event to the list for viewing.
But it blows up when I try to do this:
eventListLocal: leagueDB.DailyEvents.filter(function (e) {
  console.log("The Weekday is:" + viewModel.weekday);
  console.log(e);
  console.log("The recurrence rule is:" + e.RecurrenceRule);
  var rruleOptions = viewModel.rruleOptions(e.RecurrenceRule);
  if (rruleOptions !== -1) {
    return true;
  }
}).asKendoDataSource()
The error it generates is:
Exception: Unable to resolve type:undefined
The thing is, it seems to be occurring on "e", and the console logs as if the event is not being passed in. However, I am not seeing a list either. In short, I am really confused as to what is going on.
Any help would be appreciated.
Thanks,
You can't write filter expressions like this.
When you write .filter(...), JayData will parse your expression and generate a filter for the underlying provider, for example a WHERE clause for WebSQL or a $filter expression for the OData provider.
Both the JayData expression parser and the data provider itself must understand your filter.
Your filter is not suitable for this approach, because most of your code is not known to the JayData expression parser or the underlying data provider, for example your console.log calls.
You can simplify your filter, or you can load all your data into an array and then use the array's own filter method; there you can write any filter you like and it will work. Of course this has performance issues in some scenarios where your data set is large.
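A minimal sketch of that load-then-filter approach, reusing names from the question:
leagueDB.DailyEvents.toArray(function (events) {
  var weekday = moment().isoWeekday() - 1;
  // Plain JavaScript filtering: anything goes here, including RRule parsing
  var filtered = events.filter(function (event) {
    var rule = RRule.fromString(event.RecurrenceRule);
    return rule.options.byweekday.indexOf(weekday) !== -1;
  });
  $("#listView").kendoListView({ dataSource: filtered });
});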
Read more on http://jaydata.org/tutorials/entityexpressions-the-heart-of-jaydata

Combining parts of Stream

I've got an observable watching a log that is continuously being written to. Each line is a new onNext call. Sometimes the log outputs a single log item over multiple lines. Detecting this is easy; I just can't find the right Rx call.
I'd like to find a way to collect the single log items into a List of lines, and onNext the list when the single log item is complete.
Buffer doesn't seem right, as this isn't time-based, it's algorithm-based.
GroupBy might be what I want, but its documentation is confusing. It also seems that the observables it creates probably won't have onComplete called until the completion of the source observable.
This solution can't delay the log much (preferably not at all). I need to be reading the log as close to real time as possible, and order matters.
Any push in the right direction would be great.
This is a typical reactive parsing problem. You could use Rxx Parsers, or for a native solution you can build your own state machine with either Scan or by defining an async iterator. Scan is preferable for simple parsers and often uses a Scan-Where-Select pattern.
Async iterator state machine example: Turnstile
Scan parser example (untested):
IObservable<string> lines = ReadLines();
IObservable<IReadOnlyList<string>> parsed = lines.Scan(
    new
    {
        ParsingItem = (IEnumerable<string>)null,
        Item = (IEnumerable<string>)null
    },
    (state, line) =>
        // I'm assuming here that items never span lines partially.
        IsItem(line)
            ? IsItemLastLine(line)
                ? new
                {
                    ParsingItem = (IEnumerable<string>)null,
                    Item = (state.ParsingItem ?? Enumerable.Empty<string>()).Concat(new[] { line })
                }
                : new
                {
                    ParsingItem = (state.ParsingItem ?? Enumerable.Empty<string>()).Concat(new[] { line }),
                    Item = (IEnumerable<string>)null
                }
            : new
            {
                ParsingItem = (IEnumerable<string>)null,
                Item = (IEnumerable<string>)new[] { line }
            })
    .Where(result => result.Item != null)
    .Select(result => result.Item.ToList().AsReadOnly());