libpd crashes with vline~ in (libpd) clock_unset - puredata

I have an iOS project developed with Xcode that uses libpd to load a Pure Data patch. My project uses a mix of [osc~] and [phasor~] objects with modulated parameters (pitch, volume, etc.). My app is 64-bit, as now required. I am using the latest versions of Pure Data and libpd.
It crashes in one place. I have an [osc~] whose pitch is modulated by an envelope. When I change the length of the envelope (i.e., the modulation rate) on a device, it randomly crashes during testing, but always on the same line of libpd code. I thought it had to do with how fast the parameter was changed, but no: it also happens when the parameter is changed slowly.
Below is a (reduced) patch where the problem occurs. I have only recently caught up with Pure Data, so any suggestions or corrections are welcome.
modulatedOscillator.pd
Here is a screenshot of the crash in Xcode with the code sequence and the line of clock_unset that crashes.
I have done some printing and it crashes in this function:
void clock_unset(t_clock *x)
{
    if (x->c_settime >= 0)
    {
        if (x == clock_setlist) clock_setlist = x->c_next;
        else
        {
            t_clock *x2 = clock_setlist;
            while (x2->c_next != x) x2 = x2->c_next;
            x2->c_next = x->c_next;
        }
        x->c_settime = -1;
    }
}
On this line:
while (x2->c_next != x) x2 = x2->c_next;
with a printed value of x2->c_next == NULL.
Has anyone experienced something similar?
Thanks.

Related

How to throw an error during a parameter check without if-statements in Google-Earth-Engine?

I am working on a new version of the bfast monitor algorithm in Google Earth Engine. See the code of the original algorithm on GitHub.
The function bfastMonitor() takes user-defined parameters and applies some parameter checks before starting actual calculations. When the user-defined parameter settings are incompatible with the algorithm, an error should be raised.
During the parameter check, two types of if statements are used: ones that only check the parameter boundaries and raise an error at incompatible values, and ones that check and rewrite the contents of a parameter and raise an error at incompatible values. For the sake of focus, I will consider only the latter.
Obviously, in a conventional coding paradigm, if-statements can be used to do this parameter check; however, using if-statements goes against the client-server model of GEE.
Consider the period parameter, which can only be 2, 4, 6, 8, or 10. This parameter is used to index a list later in the code (line 459 on GitHub), where a period value of 4, for instance, means that the list should be indexed at position 1.
Currently the implementation looks like this, using if-statements:
period = period || 10;
if (period == 2) {
  period = 0;
} else if (period == 4) {
  period = 1;
} else if (period == 6) {
  period = 2;
} else if (period == 8) {
  period = 3;
} else if (period == 10) {
  period = 4;
} else {
  alert("Error: for period parameter, we only have 2, 4, 6, 8, 10. Choose one of these values");
}
Later on, the period parameter is used in a form like this (from GitHub; note that an ee.List is indexed with .get()):
var exampleList = ee.List([0.001, 0.002, 0.003, 0.004, 0.005]);
var exampleValue = exampleList.get(period);
The code could be rewritten easily to get rid of the if-statements, like this for instance:
var period = ee.Number(6);
var periodDict = ee.Dictionary({
  '2': 0,
  '4': 1,
  '6': 2,
  '8': 3,
  '10': 4
});
var exampleList = ee.List([0.001, 0.002, 0.003, 0.004, 0.005]);
var exampleValue = exampleList.get(periodDict.get(period.format()));
But then I don't know how to retain the opportunity to throw an error when the value for period is out of bounds.
How can I check the parameters of a function in Google Earth Engine and throw errors while avoiding if-statements?
There is nothing at all wrong with using a JavaScript if statement when it works. The advice you linked is about ee.Algorithms.If, which is unfortunately often inefficient; that is completely unrelated. The usual problem with a JavaScript if arises when you try to use it on a server-side value that hasn't been computed yet.
But in your case, it looks like you want to validate a user-provided parameter. if is a perfectly fine way to do this.
I'll suggest one improvement: instead of using alert("error message");, use throw new Error:
throw new Error("For period parameter, we only have 2, 4, 6, 8, 10. Choose one of these values");
This has two advantages:
It doesn't pop up a dialog that the user must dismiss before fixing the problem; it just produces an error message in the usual place, the Code Editor's Console.
It will stop the rest of the code from executing, which alert() doesn't.
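Putting that together with the dictionary rewrite from the question, here is a minimal sketch (the allowed array and the checkPeriod helper are names I'm making up for illustration, not part of the original code). Since period is a plain client-side value, an ordinary JavaScript if and throw work fine:

var allowed = [2, 4, 6, 8, 10];

function checkPeriod(period) {
  period = period || 10;
  var index = allowed.indexOf(period); // client-side check on a plain number
  if (index === -1) {
    throw new Error("For period parameter, we only have 2, 4, 6, 8, 10. Choose one of these values");
  }
  return index; // 2 -> 0, 4 -> 1, ..., 10 -> 4
}

var exampleList = ee.List([0.001, 0.002, 0.003, 0.004, 0.005]);
var exampleValue = exampleList.get(checkPeriod(6)); // 0.003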

Problem normalizing log_e "Pitch" voiceAnalytics Swift iOS 13

I am retrieving values from SFVoiceAnalytics "pitch". My goal is to transform the data back to the raw fundamental frequency. According to the documentation, the values are returned as log_e.
When I apply exp() to the returned values, I get the following ranges:
Male voice: [0.25, 1.85], expected: [85, 180]
Female voice: [0.2, 1.6], expected: [165, 255]
For the sake of simplicity, I am using Apple's sample code "Recognizing Speech in Live Audio."
Thanks for the help!!
Documentation: https://developer.apple.com/documentation/speech/sfvoiceanalytics/3229976-pitch
if let result = result {
    // returned pitch values
    for segment in result.bestTranscription.segments {
        if let pitchSegment = segment.voiceAnalytics?.pitch.acousticFeatureValuePerFrame {
            for p in pitchSegment {
                let pitch = exp(p)
                print(pitch)
            }
        }
    }
    // Update the text view with the results.
    self.textView.text = result.bestTranscription.formattedString
    isFinal = result.isFinal
}
I ran into a similar problem lately and ultimately used another solution to retrieve pitch data.
I went with a pitch detection library for Swift called Beethoven. It detects pitches in real-time, whereas the voice analytics of SFSpeechRecognizer only returns them once the transcription is complete.
Beethoven hasn't been updated to work with Swift 5, but I didn't find it too difficult to get it to work.
Also, while digging into why the values in voiceAnalytics were what they were, I found in the documentation that the pitch is a normalized pitch estimate:
The value is a logarithm (base e) of the normalized pitch estimate for each frame.
My interpretation is that the values were likely normalized by (i.e., divided by) some reference frequency, so I'm not sure it's possible to use this data to recover absolute frequencies. It seems best used to convey interval changes from pitch to pitch, as sketched below.
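If that interpretation is correct, the unknown reference cancels out when you take differences between frames, so relative intervals are still recoverable. A minimal sketch, assuming acousticFeatureValuePerFrame yields [Double] (the helper name is mine):

import Foundation

// Converts consecutive log_e pitch values into frame-to-frame intervals
// in semitones. Each value is ln(f / f_ref) for an unknown reference
// f_ref, which cancels in the difference:
// 12 * log2(f1 / f0) == 12 * (ln(f1) - ln(f0)) / ln(2)
func semitoneIntervals(_ logPitch: [Double]) -> [Double] {
    guard logPitch.count > 1 else { return [] }
    var intervals: [Double] = []
    for i in 1..<logPitch.count {
        intervals.append(12 * (logPitch[i] - logPitch[i - 1]) / log(2.0))
    }
    return intervals
}

// Usage with the pitchSegment array from the snippet above:
// let intervals = semitoneIntervals(pitchSegment)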

AVAudioPCMBuffer built programmatically, not playing back in stereo

I'm trying to fill an AVAudioPCMBuffer programmatically in Swift to build a metronome. This is the first real app I'm trying to build, so it's also my first audio app. Right now I'm experimenting with different frameworks and methods of getting the metronome looping accurately.
I'm trying to build an AVAudioPCMBuffer the length of a measure/bar so that I can use the .Loops option of AVAudioPlayerNode's scheduleBuffer method. I start by loading my file (2 ch, 44100 Hz, Float32, non-interleaved; *.wav and *.m4a both have the same issue) into a buffer, then copying that buffer frame by frame, separated by empty frames, into the barBuffer. The loop below is how I'm accomplishing this.
If I schedule the original buffer to play, it plays back in stereo, but when I schedule the barBuffer, I only get the left channel. As I said, I'm a beginner at programming and have no experience with audio programming, so this might be my lack of knowledge of 32-bit float channels or of the data type UnsafePointer<UnsafeMutablePointer<Float>>. When I look at the floatChannelData property in Swift, the description makes it sound like it should be copying two channels.
var j = 0
for i in 0..<Int(capacity) {
    barBuffer.floatChannelData.memory[j] = buffer.floatChannelData.memory[i]
    j += 1
}
j += Int(silenceLengthInSamples)
// loop runs 4 times for 4 beats per bar.
Edit: I removed the glaring mistake i += 1, thanks to hotpaw2. The right channel is still missing when barBuffer is played back, though.
Unsafe pointers in Swift are pretty weird to get used to.
floatChannelData.memory[j] only accesses the first channel of data. To access the other channel(s), you have a couple of choices:
Using advancedBy:
// Where the current channel is at 0.
// Get a channel pointer, aka UnsafePointer<UnsafeMutablePointer<Float>>.
let channelN = floatChannelData.advancedBy(channelNumber)
// Get the channel data, aka UnsafeMutablePointer<Float>.
let channelNData = channelN.memory
// Get the first two floats of channel channelNumber.
let floatOne = channelNData.memory
let floatTwo = channelNData.advancedBy(1).memory
Using subscript:
// Get the channel data, aka UnsafeMutablePointer<Float>.
let channelNData = floatChannelData[channelNumber]
// Get the first two floats of channel channelNumber.
let floatOne = channelNData[0]
let floatTwo = channelNData[1]
Using subscript is much clearer, and the step of advancing and then manually accessing memory becomes implicit.
For your loop, try accessing all channels of the buffer, and don't forget to advance the destination index j, by doing something like this:
for i in 0..<Int(capacity) {
    for n in 0..<Int(buffer.format.channelCount) {
        barBuffer.floatChannelData[n][j] = buffer.floatChannelData[n][i]
    }
    j += 1
}
Hope this helps!
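As a rough overall sketch (untested; capacity, silenceLengthInSamples, and the 4-beat bar are the question's names, and this keeps the Swift 2-era pointer API used above), the whole copy could look like:

let channelCount = Int(buffer.format.channelCount)
var j = 0
for _ in 0..<4 { // 4 beats per bar
    for i in 0..<Int(buffer.frameLength) {
        for n in 0..<channelCount {
            // Copy every channel, not just the first one.
            barBuffer.floatChannelData[n][j] = buffer.floatChannelData[n][i]
        }
        j += 1
    }
    j += Int(silenceLengthInSamples) // leave the gap between beats silent
}
// Tell the buffer how many valid frames it now contains; a freshly
// created buffer has a capacity but reports zero valid frames.
barBuffer.frameLength = AVAudioFrameCount(j)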
This looks like a misunderstanding of Swift "for" loops. The Swift "for" loop automatically increments the "i" array index. But you are incrementing it again in the loop body, which means that you end up skipping every other sample (the Right channel) in your initial buffer.

tracking position/location in test keyboard & mouse mode (not ppt) with new vizconnect

I can track location fine pre-vizconnect using code like this:
vrpn7 = viz.add('vrpn7.dle')
posTracker = vrpn7.addTracker('PPT0@WorldViz-PC', 0)
and then:
x, y, z = posTracker.getPosition()
but now I use the new vizconnect, e.g.:
vizconnect.go('vizconnect_hmd_ppt.py')
I'm wondering what the recommended way is to access the trackers from my main project .py file, particularly when I'm using a keyboard/mouse scenario to simulate movement during program development.
Any advice would be most welcome.
Thanks
Actually it was pretty straightforward.
First, check the names of the trackers using:
print(vizconnect.getTrackerDict())
It may return something like this:
'mouse_and_keyboard_walking'
along with some others (e.g., an InertiaCube). Then do:
gTracker = vizconnect.getTracker('mouse_and_keyboard_walking')
or:
gTracker = vizconnect.getTracker('PPT0@WorldViz-PC')
Then periodically call (probably from a timer or callback; see the sketch below):
x, y, z = gTracker.getPosition()
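For example, a minimal polling sketch, assuming the tracker name printed above (updateTracker is just an illustrative name):

import viz
import vizact
import vizconnect

vizconnect.go('vizconnect_hmd_ppt.py')
gTracker = vizconnect.getTracker('mouse_and_keyboard_walking')

def updateTracker():
    x, y, z = gTracker.getPosition()
    print('%.2f %.2f %.2f' % (x, y, z))

# Call every frame; pass a positive interval instead of 0 to poll less often.
vizact.ontimer(0, updateTracker)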

iPhone audio analysis

I'm looking into developing an iPhone app that will potentially involve a "simple" analysis of the audio it receives from the standard phone mic. Specifically, I am interested in the highs and lows the mic picks up; everything in between is irrelevant to me. Is there an app that does this already (just so I can see what it's capable of)? And where should I look to get started on such code? Thanks for your help.
Look in the Audio Queue framework. This is what I use to get a high water mark:
AudioQueueRef audioQueue; // Imagine this is correctly set up
UInt32 dataSize = sizeof(AudioQueueLevelMeterState) * recordFormat.mChannelsPerFrame;
AudioQueueLevelMeterState *levels = (AudioQueueLevelMeterState *)malloc(dataSize);
float channelAvg = 0;
OSStatus rc = AudioQueueGetProperty(audioQueue, kAudioQueueProperty_CurrentLevelMeter, levels, &dataSize);
if (rc) {
    NSLog(@"AudioQueueGetProperty(CurrentLevelMeter) returned %d", (int)rc);
} else {
    for (int i = 0; i < recordFormat.mChannelsPerFrame; i++) {
        channelAvg += levels[i].mPeakPower;
    }
}
free(levels);
// This works because an unused channel always reports a power of 0.
return channelAvg;
You can get peak power either in dB full scale (with kAudioQueueProperty_CurrentLevelMeterDB) or simply as a float in the interval [0.0, 1.0] (with kAudioQueueProperty_CurrentLevelMeter).
Don't forget to activate level metering for the audio queue first:
UInt32 d = 1;
OSStatus status = AudioQueueSetProperty(mQueue, kAudioQueueProperty_EnableLevelMetering, &d, sizeof(UInt32));
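For reference, here is a rough Swift equivalent of the snippet above; audioQueue and channelCount are assumed to be set up elsewhere, and the function name is illustrative:

import AudioToolbox

func currentPeakLevel(_ audioQueue: AudioQueueRef, channelCount: Int) -> Float {
    // Enable metering (doing this once at queue setup is enough).
    var enable: UInt32 = 1
    AudioQueueSetProperty(audioQueue, kAudioQueueProperty_EnableLevelMetering,
                          &enable, UInt32(MemoryLayout<UInt32>.size))

    var levels = [AudioQueueLevelMeterState](repeating: AudioQueueLevelMeterState(),
                                             count: channelCount)
    var dataSize = UInt32(MemoryLayout<AudioQueueLevelMeterState>.stride * channelCount)
    let rc = AudioQueueGetProperty(audioQueue, kAudioQueueProperty_CurrentLevelMeter,
                                   &levels, &dataSize)
    guard rc == noErr else { return 0 }
    // Sum peak power across channels, mirroring the Objective-C snippet.
    return levels.reduce(0) { $0 + $1.mPeakPower }
}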
Check the 'SpeakHere' sample code. It will show you how to record audio using the Audio Queue API, and it also contains some code that analyzes the audio in real time to drive a level meter.
You might actually be able to reuse most of that level-meter code to respond to 'highs' and 'lows'.
The aurioTouch example code performs Fourier analysis on the mic input and could be a good starting point, though it's probably overkill for your application:
https://developer.apple.com/iPhone/library/samplecode/aurioTouch/index.html