AudioKit parameters range - swift

I am new to AudioKit, and newish to Swift to be honest.
I can't find any reference in the documentation to the valid ranges of many (or any) of the parameters on the synths.
So, for instance, frequencyCutOff is obviously in Hz, so the range is 0-30k or whatever. I'm assuming most parameters have a 0-1 range as standard, but there are many others that I would have thought have more clearly defined min and max values.
Envelopes, for instance: ADSR. Are they supposed to be in seconds? If so, can I have a release time of 1000 s?! The docs are a little vague to me, or am I missing something? Thanks

I'm not on the AudioKit team, but since I use AudioKit daily, I can answer you.
Yes, you're right: the ranges of many parameters are not clearly specified in the docs.
But you can find them by searching the AudioKit Playgrounds examples.
Most of the AudioKit objects are covered there, and you will find plenty of information about parameter ranges.
Alternatively, you can also have a look at the examples in the AudioKit Xcode project.
About ADSR: yes, the values for Attack, Decay, and Release are in seconds (Sustain is a level from 0 to 1, not a time).
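For example, here is a minimal sketch assuming AudioKit 4.x's AKAmplitudeEnvelope (substitute whichever envelope node you're actually using); the time-based stages take seconds and Sustain takes a 0-1 level:

import AudioKit

let oscillator = AKOscillator()
let envelope = AKAmplitudeEnvelope(oscillator)
envelope.attackDuration = 0.1    // seconds
envelope.decayDuration = 0.2     // seconds
envelope.sustainLevel = 0.5      // a level (0...1), not a time
envelope.releaseDuration = 0.3   // seconds; a huge value just means a very long fade-out

AudioKit.output = envelope
try AudioKit.start()

oscillator.start()
envelope.start()   // runs attack -> decay -> sustain
// ...later:
envelope.stop()    // runs release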
Hope it helps!

Related

Variable Length MIDI Duration Algorithm

I'm trying to write out MIDI files, and I've run into an issue with the duration (delta-time) values for track events. I know these values (according to http://www.ccarh.org/courses/253/handout/vlv/) are variable-length quantities, where each byte is made up of a continuation bit (0 if no duration byte follows, 1 if one does) and seven bits of the value.
For example, 128 would be represented as such:
1_0000001 0_0000000
The problem is that I'm having trouble wrapping my head around this concept, and am struggling to come up with an algorithm that can convert a decimal number to this format. I would appreciate it if someone could help me with this. Thanks in advance.
There is no need to reinvent the wheel: the official MIDI specification has example code for dealing with variable-length values. You can download the spec for free from the official MIDI.org website.
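That said, if you'd rather roll your own, here's a minimal sketch in Swift (the function name is mine, not from the spec): emit the value seven bits at a time, most significant group first, with the high bit set on every byte except the last.

import Foundation

func encodeVLQ(_ value: UInt32) -> [UInt8] {
    var v = value
    // The last byte carries the low 7 bits with the continuation bit clear.
    var bytes: [UInt8] = [UInt8(v & 0x7F)]
    v >>= 7
    // Prepend higher 7-bit groups with the continuation bit (0x80) set.
    while v > 0 {
        bytes.insert(UInt8(v & 0x7F) | 0x80, at: 0)
        v >>= 7
    }
    return bytes
}

encodeVLQ(128)  // [0x81, 0x00], i.e. 1_0000001 0_0000000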

AudioKit AKMetronome callback timing seems imprecise or quantized

I'm new to AudioKit and digital audio in general, so I'm sure there must be something I'm missing.
I'm trying to get precise timing from AKMetronome by getting the timestamp of each callback. The timing seems to be quantized in some way though, and I don't know what it is.
Example: if my metronome is set to 120 BPM, each callback should be exactly 0.5 seconds apart. But if I calculate the difference from one tick to the next, I get this:
0.49145491666786256
0.49166241666534916
0.5104563333334227
0.4917322500004957
0.5104953749978449
0.49178879166720435
0.5103940000008151
0.4916401666669117
It's always one of 2 values, within a very small margin of error. I want to be able to calculate when the next tick is coming so I can trigger animation a few frames ahead, but this makes it difficult. Am I overlooking something?
edit: I came up with a solution since I originally posted this question, but I'm not sure if it's the only or best solution.
I set the buffer to the smallest size with AKSettings.bufferLength = .veryShort.
With the smallest buffer, the timestamps are always on time to within a millisecond or two. I'm still not sure, though, whether I'm doing this right or whether this is the intended behavior of the AKCallback. It seems like the callback should be on time even with a longer buffer.
Are you using Timer to calculate the time difference? Based on my findings, the issue is that Timer is not meant to be precise on iOS; see this thread: Accuracy of NSTimer.
Alternatively, you can look into AVAudioTime (https://audiokit.io/docs/Extensions/AVAudioTime.html), which represents time on the audio timeline rather than relying on Timer firing.
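As a sketch of both ideas together (assuming AudioKit 4.x's AKMetronome and its callback API), shrink the buffer and timestamp each callback with the host clock instead of Date()/Timer:

import AudioKit
import AVFoundation

AKSettings.bufferLength = .veryShort  // smaller buffers -> less callback jitter

let metronome = AKMetronome()
metronome.tempo = 120

var lastHostTime: UInt64 = 0
metronome.callback = {
    let now = mach_absolute_time()
    if lastHostTime != 0 {
        // Convert host-clock ticks to seconds.
        let interval = AVAudioTime.seconds(forHostTime: now - lastHostTime)
        print("tick interval: \(interval) s")  // should hover near 0.5 at 120 BPM
    }
    lastHostTime = now
}

AudioKit.output = metronome
try AudioKit.start()
metronome.start()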

Tempo and time signatures from MIDI

I'm currently building software to display music notation from a MIDI file. I can get the letter names of the notes from the NoteOn and NoteOff events, but I don't know how to get or calculate the note types (whole, half, eighth...) or the time signature.
How can I get this? I looked for examples, but without success.
MIDI doesn't represent note durations as absolute quantities the way classical notation does. Instead, a note lasts until the corresponding note-off event is parsed (it's also quite common for MIDI files to use a note-on event with velocity 0 as a note-off; keep this in mind). So basically, you need to translate the time in ticks between the two events into musical time to know whether to use a whole note, half note, quarter note, etc.
This translation depends on knowing the tempo and time signature, which come from MIDI meta events. More information about parsing those can be found here:
http://www.sonicspot.com/guide/midifiles.html
Basically, you take the PPQ (ticks per quarter note) to find the number of milliseconds per tick, then use the time signature and tempo to find the length of a quarter note in milliseconds. There are some answers on Stack Overflow with this conversion, but I'm writing this post on my phone and can't be bothered to look them up right now. :-)
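To make that concrete, here's a rough sketch (the numbers are illustrative; PPQ comes from the file header and the tempo from the Set Tempo meta event):

let ppq = 480.0                          // ticks per quarter note, from the header
let microsecondsPerQuarter = 500_000.0   // tempo meta event; 500,000 us = 120 BPM

let msPerTick = microsecondsPerQuarter / ppq / 1000.0  // ~1.04 ms per tick here

// Duration between a note-on and its matching note-off, in ticks:
let durationTicks = 960.0
let quarterNotes = durationTicks / ppq   // 2.0 in this example

switch quarterNotes {
case 4:   print("whole note")
case 2:   print("half note")
case 1:   print("quarter note")
case 0.5: print("eighth note")
default:  print("\(quarterNotes) quarter notes - a dotted or tied value")
}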
Hope this points you in the right direction!

How can I Compare 2 Audio Files Programmatically?

I want to compare 2 audio files programmatically.
For example: I have a sound file in my iPhone app, and then I record another one. I want to check whether the existing sound matches the recorded sound (similar to voice recognition).
How can I accomplish this?
Do the audio fingerprinting computation on a server (it isn't really suitable for a mobile device anyway), and have your mobile app upload the files to the server and fetch the analysis result for display. Given that, the language you implement it in doesn't matter much. Here are a few audio fingerprinting implementations:
Java: http://www.redcode.nl/blog/2010/06/creating-shazam-in-java/
VC++: http://code.google.com/p/musicip-libofa/
C#: https://web.archive.org/web/20190128062416/https://www.codeproject.com/Articles/206507/Duplicates-detector-via-audio-fingerprinting
I know the question was asked a long time ago, but a clear answer could help someone else.
The libraries from Echoprint (website: echoprint.me/start) will help you solve the following problems:
De-duplicate a big collection
Identify (Track, Artist ...) a song on a hard drive or on a server
Run an Echoprint server with your data
Identify a song on an iOS device
PS: For more music-oriented features, you can check the list of APIs here.
If you want to implement fingerprinting yourself, you should read the docs listed as references there, and probably have a look at musicip-libofa on Google Code.
Hope this will help ;)
Apply a bandpass filter to reduce noise
Normalize for amplitude
Calculate the cross-correlation
This can be fairly CPU-intensive.
The DSP details are in the well-known text Digital Signal Processing by Alan V. Oppenheim and Ronald W. Schafer.
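A minimal sketch of those steps in Swift using Accelerate's vDSP (the function names are mine, and the bandpass filtering step is omitted for brevity):

import Accelerate

// Zero-mean, peak-normalize a buffer of samples.
func normalized(_ x: [Float]) -> [Float] {
    let mean = x.reduce(0, +) / Float(x.count)
    let centered = x.map { $0 - mean }
    let peak = centered.map(abs).max() ?? 0
    return peak > 0 ? centered.map { $0 / peak } : centered
}

// Slide `probe` across `signal` and return the peak cross-correlation.
// Precondition: signal.count >= probe.count.
func peakCorrelation(signal: [Float], probe: [Float]) -> Float {
    let x = normalized(signal)
    let y = normalized(probe)
    let lags = x.count - y.count + 1
    var corr = [Float](repeating: 0, count: lags)
    // vDSP_conv with a positive filter stride computes correlation.
    vDSP_conv(x, 1, y, 1, &corr, 1, vDSP_Length(lags), vDSP_Length(y.count))
    var best: Float = 0
    vDSP_maxv(corr, 1, &best, vDSP_Length(lags))
    return best
}

// Compare the returned peak against an experimentally chosen threshold.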
I also think you could take a few-second sample from both audio tracks, normalize them in amplitude, reduce noise with a bandpass filter, and then run them through a correlator.
For instance, you could take a 5-second sample of one of the two and slide it over the second one, computing a cross-correlation for each time shift. (Be careful: if you take too small a sample, you may get high correlation where you don't expect it, and you will suffer side effects from cropping the signal before cross-correlating.)
Then you can collect all the cross-correlation results in an array and find the index of the maximum.
You should then experimentally set a threshold to decide when to assume the two samples are the same. This will depend on the quality of the audio tracks you are comparing.
I implemented a correlator to receive and distinguish preambles in wireless communication. My script is actually done in MATLAB. If you are interested, I can try to extract the common part and send it to you.
The code is too long to paste here in the forum. If you want it, just let me know and I will send it to you ASAP.
Cheers

iPhone Temperature Sensor

My question is very similar to this one: iPhone Proximity Sensor. There's clearly some manner of thermometer within the iPhone that's readable by the OS. Has anyone uncovered the super-secret undocumented APIs to read this sensor?
I doubt this sensor is for ambient temperature; rather, I suspect it is for detecting overheating of the circuits. If that is all you want, great, but I think it would be useless for measuring ambient temperature.
Just my opinion.
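For what it's worth, these days there's a public (non-secret) API for exactly that circuit-overheating case: ProcessInfo's thermal state (iOS 11+). It won't give you ambient temperature either, just a coarse device thermal level:

import Foundation

switch ProcessInfo.processInfo.thermalState {
case .nominal:  print("normal")
case .fair:     print("slightly elevated")
case .serious:  print("high - consider reducing work")
case .critical: print("critical - throttle heavily")
@unknown default: print("unknown thermal state")
}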
All I could find was CTGetTemperature, in CoreTelephony of all places.
I don't know about previous models, but my iPhone 4 goes from cool-ish to very warm in a matter of minutes depending on radio usage. So unless "good enough" = "within 20 degrees F or so", it's probably not good for ambient measurement.
Unless (maybe you meant this) you also tracked radio usage and subtracted a temperature offset depending on it. Phew, complicated. Easier to just query the NWS.
A command to get all the super-secret names related to temperature in the CoreTelephony framework:
nm "/Applications/Xcode463.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/6.1 (10B141)/Symbols/System/Library/Frameworks/CoreTelephony.framework/CoreTelephony" | grep empera