I want to create a button that lets the user tap on it to set a tempo in beats per minute. I will also use touches moved up and down on it to adjust the tempo faster and slower (I have already worked out this bit).
What are some appropriate ways to record the times at which the user has tapped the button, so that I can compute the average time between presses and thereby work out a tempo?
Overall
You're best off using a plain C timing call instead of creating an NSDate for every tap. At the rate of beats, the overhead of allocating an NSDate object could result in a meaningful loss of precision.
One caveat: time_t is not guaranteed to be a floating-point type, and time() from time.h typically has only one-second resolution, which is far too coarse for tap tempo. A sub-second clock such as CFAbsoluteTimeGetCurrent(), which returns seconds as a double, is a safer choice.
Use the whole screen for this; don't just give the user one small button.
Two ideas
Post-process
Store all tap times in an array.
Trim the result: remove elements from the start and end that are more than a threshold away from the average.
Get the average of the remaining intervals. That's your speed.
If it's close to a common tempo (e.g. 120 BPM), snap to that. (See the sketch below.)
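A minimal sketch of this post-processing in JavaScript (the function name and the 25% threshold are placeholders; it filters any interval that is too far from the mean, rather than only trimming the ends, which is the same idea):
// tapTimes: array of tap timestamps in seconds
function tempoFromTaps(tapTimes) {
  const intervals = [];
  for (let i = 1; i < tapTimes.length; i++) {
    intervals.push(tapTimes[i] - tapTimes[i - 1]);
  }
  const mean = intervals.reduce((sum, v) => sum + v, 0) / intervals.length;
  // Trim: keep only intervals within a threshold of the average.
  const threshold = 0.25 * mean; // arbitrary example value
  const trimmed = intervals.filter((v) => Math.abs(v - mean) <= threshold);
  const avg = trimmed.reduce((sum, v) => sum + v, 0) / trimmed.length;
  return 60 / avg; // seconds per beat -> beats per minute
}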
Adaptive
Use two variables: one called speed, the other error.
After the first two beats, calculate the estimated speed and set error to speed.
Then proceed as follows; the queue is created once, and the rest runs after each beat:
queue = Fifo(5)   # first-in, first-out queue; create this once, before the
                  # first beat. Try out different values for the length.

currentBeat = now - timeOfLastBeat
currentError = |speed - currentBeat|
# adapt
error = (error + currentError) / 2   # you have to experiment with how much
                                     # weight currentError should have
queue.push(currentBeat)   # push the newest interval onto the queue;
                          # the oldest is removed automatically
speed = average(queue)
As soon as error gets smaller than a certain threshold, you can stop and tell the user you've determined the speed.
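Here is the same adaptive scheme sketched in JavaScript (the class name, the window length of 5, and the equal weighting of currentError are all illustrative choices):
// Feed tap timestamps (in seconds) into tap(); read the estimate from bpm().
class AdaptiveTempo {
  constructor(windowLength = 5) {
    this.queue = [];          // FIFO of recent inter-tap intervals
    this.windowLength = windowLength;
    this.speed = null;        // current estimate, in seconds per beat
    this.error = null;
    this.lastTapTime = null;
  }
  tap(now) {
    if (this.lastTapTime !== null) {
      const currentBeat = now - this.lastTapTime;
      if (this.speed === null) {
        this.speed = currentBeat; // first interval seeds the estimate
        this.error = currentBeat;
      } else {
        const currentError = Math.abs(this.speed - currentBeat);
        this.error = (this.error + currentError) / 2; // weighting is tunable
        this.queue.push(currentBeat);
        if (this.queue.length > this.windowLength) this.queue.shift(); // drop oldest
        this.speed = this.queue.reduce((sum, v) => sum + v, 0) / this.queue.length;
      }
    }
    this.lastTapTime = now;
  }
  bpm() { return this.speed === null ? null : 60 / this.speed; }
}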
Go crazy with the interface. Make the screen flash whenever the user taps. Extra sparks for a tap that is nearly identical to the expected time.
Make the background color correspond to the error. Make it brighter the smaller the error gets.
Each time the button is pressed, store the current date/time (with [NSDate date]). Then, the next time it's pressed, you can calculate the difference with -[previousDate timeIntervalSinceNow] (negative, because it subtracts the current date from the previous one), which will give you the number of seconds between presses.
TL;DR:
Can I get Grafana to show me the previous data point when the currently selected time period does not contain a data point? I have an example which sounds ridiculous, but at least it's simple to understand: I send data every 1 minute, and I wish to zoom into the last 30 seconds and still see data. You may ask "why not just zoom out to 2 minutes?", but the reason is that other data on the same graph updates more often, and I wish to compare with it. There are also the lengthier reasons below.
If not, how can I achieve what I want to achieve, see below?
Context
For a few years, I have been monitoring the water level in three of our basement sumps (which have pumps installed) by sending this data from Node-RED to InfluxDB, then visualising the sump levels in Grafana. I have set up three waterproof ultrasonic distance sensors, each pointed down a pipe that is inserted vertically into each sump. The water fills the pipe and the distance sensor, connected to an Arduino, sends me the reading. The Arduino also has other sensors connected (temp / humidity) and deals with distance calibrations to calculate the percent full of each sump. All this data is sent to Node-RED. In total, I am sending 4 values per sump: distance measurement in mm, percent full, temp, humidity. So that's 12 fields. Data is sent every 2 seconds, because I wished to have a reasonably high resolution to see nice curves in graphs.
I also decided to store all this data so that I could later troubleshoot issues (we have had sewage floods resulting in water not being able to be pumped away, etc.) and design some warning systems for these issues based on the data.
Storing 12 values for every 2 seconds, over the course of a number of years, takes up a lot of space (8GB).
Nature of the data
Storing this resolution of data has also helped me be able to describe the nature of the data. I will do so here.
(1) Non-meaningful NOISE (see below) - the percent-full reading goes up and down by 1 or 2 percent every couple of seconds:
(2) Meaningful DRIFT (see below) - I don't mean sensor drift, I am referring to actual water levels changing slowly over time, e.g. over 1 day or 1 week. Perhaps condensation on the walls drips down into the sump, or water evaporates from the sump, and the value can waver by a few percent over the course of a day. Each sump has slightly different characteristics.
(3) Meaningful MONITORING DATA - during wet weather, depending on rainfall amount, the sumps fill up over the course of say 30 mins to 3 hours. Then the pumps run and the water level drops again, wavers a bit, then the sumps continue to fill up. If the rain stopped, you can see a lovely curve as the water fills in progressively more slowly (see the green line below):
Solution to downsample
I know Influx has its own downsampling possibilities; however, because of the nature of the data (which can barely change for 2 months, but when it does, I really need to capture it in detail), I don't think lowering the sample rate is a great idea.
I have some understanding of digital filters (e.g. low-pass etc.) but have never programmed one myself. So I have written a basic filter in JavaScript (a Node-RED function) to filter the data in real time, as follows: only send each reading when it has changed from the previous one by x amount (and update the previous one when that occurs).
This has already vastly reduced the amount of data being stored, and I can vary x to filter out noise shown in my first graph above, at the expense of resolution when the pumps run. Even if I set the x value to 2, it still vastly reduces data over long periods of dry weather.
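For reference, here is a minimal sketch of such a change-based filter as a Node-RED function node (the payload shape, the context key, and the threshold value are assumptions):
// Pass a reading through only when it has changed by at least X
// from the last value that was actually sent.
const X = 2; // threshold; tune per signal
const previous = context.get('previous');
const current = msg.payload; // assumed to be a single numeric reading
if (previous === undefined || Math.abs(current - previous) >= X) {
    context.set('previous', current);
    return msg;  // forward the changed reading towards InfluxDB
}
return null;     // drop unchanged readings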
So - onto my problem! Now data is not being logged to InfluxDB unless there is some meaningful change. Which means that when I zoom in to e.g. 15 minute timeframe of data, there is nothing to see.
Grafana does have the option of "fill (previous)", but this draws a line between points on the existing graph rather than showing the previous data as if it hasn't changed since that point. Now my Grafana dashboard looks a bit sad :(
One proposed solution is, in addition to sending the "delta" data, to send "summary" data, that is, a full suite of data every 1 minute regardless of whether it changed or not. But then we get the noise back again, plus pointless storage.
Any other ideas?
I am having problems with the 'windowing', as it's called, of pitch-shifted grains of sounds using the Web Audio API. When I play the sound, I successfully set the initial gain to 0, using setValueAtTime, and successfully fade the sound in using linearRampToValueAtTime. However, I am not successful in scheduling a fade-out to occur slightly before the sound ends. It may be because the sound is pitch-shifted, although in the code below, I believe I have set the parameters correctly. Though maybe the final parameter in setTargetAtTime is not suitable because I don't fully understand that parameter, despite having read Chris's remarks on the matter. How does one successfully schedule a fade-out when the sound is pitch-shifted, using setTargetAtTime? Below is my code. You can see the piece itself at https://vispo.com/animisms/enigman2/2022
source.buffer = audioBuffer;
source.connect(this.GrainObjectGain);
this.GrainObjectGain.connect(app.mainGain);
source.addEventListener('ended', this.soundEnded);
//------------------------------------------------------------------------
// Now we do the 'windowing', ie, when we play it, we fade it in,
// and we set it to fade out at the end of play. The fade duration
// in seconds is at most 10 milliseconds, and no more than 10% of
// the duration of the sound, whichever is smaller. This helps
// eliminate pops.
source.playbackRate.value = this.playbackRate;
var fadeDurationInSeconds = Math.min(0.01,0.1*duration*this.playbackRate);
this.GrainObjectGain.gain.setValueAtTime(0, app.audioContext.currentTime);
this.GrainObjectGain.gain.linearRampToValueAtTime(app.constantGrainGain, app.audioContext.currentTime+fadeDurationInSeconds);
this.GrainObjectGain.gain.setTargetAtTime(0, app.audioContext.currentTime+duration*this.playbackRate-fadeDurationInSeconds, fadeDurationInSeconds);
source.start(when, offset, duration);
Given your comment below I guess you want to schedule the fade out fadeDurationInSeconds before the sound ends.
Since you change playbackRate you need to divide the original duration by that playbackRate to get the actual duration.
Changing your setTargetAtTime() call as follows should schedule the fade out at the desired point in time.
this.GrainObjectGain.gain.setTargetAtTime(
0,
app.audioContext.currentTime + duration / this.playbackRate - fadeDurationInSeconds,
fadeDurationInSeconds
);
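By the same logic (this is my extrapolation from the fix above, not something you asked about), the fade duration itself should probably also be based on the actual playback duration:
var fadeDurationInSeconds = Math.min(0.01, 0.1 * duration / this.playbackRate);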
Please note that setTargetAtTime() never actually reaches the target, at least in theory. It does come reasonably close after some time, but that time is longer than the timeConstant.
The relevant text from the spec can be found here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime
Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses, this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value does not immediately change to the target value at the given time, but instead gradually changes to the target value.
The timeConstant parameter roughly defines the time it takes to reach 2/3 of the desired signal attenuation.
It's mentioned here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime-timeconstant
More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1−1/𝑒 (around 63.2%) given a step input response (transition from 0 to 1 value).
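In other words, the parameter follows the standard first-order exponential (the formula is the one from the spec; the percentages are just worked examples):
v(t) = target + (vStart - target) * e^(-(t - tStart) / timeConstant)
After one time constant, about 63.2% of the distance to the target is covered; after three, about 95%; after five, about 99.3%. So if the fade needs to be effectively complete by the end of the grain, schedule setTargetAtTime() a few time constants before the sound ends.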
Given a sequence of numbers that trend over time, I would like to use Reactive Extensions to raise an alert when there is a sudden absolute change (a spike or a drop), e.g. 101.2, 102.4, 101.4, 100.9, 95, 93, 85... and then increasing slowly back to 100.
The alert would be triggered on the drop from 100.9 to 95. Each value would have a timestamp, and I am looking for an alert of the form:
LargeChange
TimeStamp
Distance
Percentage
I believe I need to start with Buffer(60, 1) for a 60-sample moving average (with a minute between samples).
Whilst that would give the average value, I can't assign an arbitrary % change to trigger the alert, since this could vary from signal to signal: one may have more volatility than the other.
To get volatility I would then take a longer historical time frame Buffer(14, 1) (these would be 14 days of daily averages of the same signal).
I would then calculate the difference between each value in the buffer and the 14-day average, square and add all these deviations, and divide by the number of samples (i.e. the variance; its square root gives the volatility as a standard deviation).
My questions are:
How would I perform the above volatility calculation? Or is it better to just do this outside of Rx and update the new volatility value once daily, external to the observable stream calculation (this may make more sense, to avoid having to run 14 days' worth of 1-minute samples through it)?
How would we combine the fast moving average and the volatility level (updated once per day) to give alerts? I have seen Scan and DistinctUntilChanged in posts on SO, but can't work out how to put them together.
I would start by breaking this down into steps. (For simplicity I'll assume the original data source is an observable called values.)
Convert values into a moving averages observable (we'll call this averages here).
Combine values and averages into an observable that can watch for "extremes".
For step 1, you may be able to use the built-in Window method that Slugart mentioned in a comment or the similar Buffer method. A Select call after the Window or Buffer can be used to process the array into a single average value object. Something like:
averages = values.Buffer(60, 1)
    .Select((buffer) => { /* do average and std dev calculation here */ });
If you need sliding windows, you may have to implement your own operator, but I could easily be unaware of one that does exist. Scan along with a queue seem like a good basis for such an operator if you need to write it.
For step 2, you will probably want to start with CombineLatest followed by a Where clause. Something like:
extremes = values.CombineLatest(averages, (v, a) => new { Current = v, Average = a })
    .Where((value) => { /* check if value.Current is out of deviation from value.Average */ });
The nice part of this approach is that you can choose between having averages be computed directly from values in line like we did here or be some other source of volatility information with minimal effect on the rest of the code.
Note that the CombineLatest call may cause two subscriptions to values, one directly and one indirectly via a subscription to averages. If the underlying implementation of values makes this undesirable, use Publish and RefCount to get around this.
Also note that CombineLatest will output a value each time either values or averages outputs a value. This means that you will get two events every time averages updates, one for the values update and one for the averages update triggered by the value.
If you are using sliding windows, that would mean a double update on every value, and it would probably be better to simply include the current value on the Scan output and skip the CombineLatest altogether. You would have something like this instead:
averages = values.Scan(emptyWindow, // emptyWindow is a placeholder seed
    (window, v) => { /* build sliding window and attach current value */ });
extremes = averages.Where((a) => { /* check if current value is out of deviation for the window */ });
Once you have extremes, you can subscribe to it and trigger your alerts.
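To make the shape concrete, here is the CombineLatest variant sketched in JavaScript with RxJS, whose operators map almost one-to-one onto the .NET ones above (values$, the window size, and the 3-sigma threshold are all assumptions):
const { bufferCount, map, withLatestFrom, filter } = require('rxjs');

const averages$ = values$.pipe(
  bufferCount(60, 1), // sliding buffer: 60 samples, advancing one at a time
  map((buffer) => {
    const mean = buffer.reduce((s, v) => s + v, 0) / buffer.length;
    const variance = buffer.reduce((s, v) => s + (v - mean) ** 2, 0) / buffer.length;
    return { mean, stdDev: Math.sqrt(variance) };
  })
);

const extremes$ = values$.pipe(
  withLatestFrom(averages$), // avoids the double-update issue noted above
  filter(([value, stats]) => Math.abs(value - stats.mean) > 3 * stats.stdDev)
);

extremes$.subscribe(([value, stats]) => { /* raise the alert here */ });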
I have a periodic backend process and I would like to visualize the history of the length of cycles on my dashboard. Is it possible?
I have full control over the data/metrics I generate, so I could perhaps increment a counter every time a cycle completes (a cycle takes about 3 days), so I would get counter updates every 3 days or so. Then how could I get Grafana to report the length of each cycle? (for instance: 72h; 69h; 74h; etc.) The actual widget doesn't matter, but I need something visual to tell me at once if cycles are getting faster or slower.
Any pointers or ideas are welcome.
It looks like a standard time series: X-axis - time, Y-axis - duration [s]:
Then you may add:
trend line
aggregations (min/max/avg/derivative/diff/...)
moving average
other math functions that are available in the datasource used
In Prometheus, I have a monotonically increasing counter (ifHCInOctets from IF-MIB, in this case).
In Grafana, I can create a graph using the simple query ifHCInOctets{job='snmp',instance='$Device',ifDescr=~'eth0'} and see the counter graphed over different time ranges by selecting the desired range in the upper-right.
This is almost exactly what I want. However, I would like the graph to always start at zero and increase from there. The use-case is that I want to visualize my data usage over the course of a month to see how quickly I am approaching my data cap. (I already create a gauge object using increase(ifHCInOctets{...}[$__range]) function which shows me how much I have used in total over the given time range, but I'd like to be able to visualize that usage over time.)
Basically, I want ifHCInOctets{...} - X where X is the value of ifHCInOctets at the start of the range. My first thought was:
ifHCInOctets{...} - ifHCInOctets{...} offset $__range
But that seems to show me each data point minus the data point $__range time prior to it (rather than just subtracting the starting value from all points).
I then tried creating a query variable with the query query_result(ifHCInOctets{...} offset $__range) and setting it to update on time range change. This almost seemed to work, but the resulting graph always seemed to start slightly negative, depending on the time range selected, which made me think it wasn't doing what I thought it was.
I have also tried various forms of sum, sum_over_time, and increase, all to no avail.
You're probably looking for something like this:
ifHCInOctets
-
min_over_time(
  (
    ifHCInOctets
    and
    (month(timestamp(ifHCInOctets)) == scalar(month(vector($__to / 1000))))
  )[31d:]
)
But it doesn't take into account counter resets. And is ugly and inefficient as hell. It's basically the current value minus the min_over_time calculated over samples in the previous 31 days that fell into the same month as Grafana's $__to timestamp.
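As an aside (an assumption on my part: this needs a Prometheus recent enough to support the @ modifier, v2.25+), the "current value minus the value at the start of the range" formulation can be expressed directly:
ifHCInOctets - ifHCInOctets @ start()
This still ignores counter resets, but it avoids the subquery.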
You probably want to set up a recording rule based on this expression (that adds year, month and day labels to a metric) and then calculate the increase() over any given month (including the current month). That takes into account both counter resets and counters that did not exist at the beginning of the month.