I'm developing an app that listens through the microphone, waiting for a certain frequency to be detected. As far as I've researched, the best way to do this is to perform a Fast Fourier Transform on the real-time audio signal (if anyone thinks there's a better way to do it, I would love to hear it). It works fine on iOS, but I can't find a way to make it work on watchOS.
I implemented the well-known tempi-fft with no problems on iOS with Swift 5. The problem comes from libraries that are missing or limited in the watchOS SDK: I can't use AudioUnit or an AURenderCallback to detect new data in the buffer, I can't adjust the preferred buffer duration and sample rate on the audio session on watchOS, and so on. I'm not an expert at all on the audio-engineering side, so I don't know how to approach this. I researched a lot but didn't find any watchOS implementations of this.
I only found the AccelerateWatch project, but it was uploaded years ago, when the Accelerate framework was not yet available on watchOS.
Any help on this would be greatly appreciated.
I have a chromatic tuner app for the Apple Watch in the iOS App Store that does exactly this, running completely inside the Apple Watch component.
Both the iOS app and the watchOS app component use AVAudioInputNode's installTap(onBus:) to acquire AVAudioPCMBuffer microphone sample buffers in near real time. The apps then feed blocks of the audio sample data to either Goertzel routines or the Accelerate framework's vDSP FFT functions for further frequency analysis. The app then does further processing on the FFT frequency data to detect and estimate audio pitch (which is often very different from the FFT frequency-magnitude peak). Finally, the watchOS display is animated at frame rate using SpriteKit to show the results.
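Since the original question is about listening for one known frequency, the Goertzel routine mentioned above is worth sketching: it measures energy at a single frequency without a full FFT, and it runs in plain Swift, so it has no watchOS framework dependencies. This is only an illustrative sketch under my own names and parameters, not code from any app or framework:

```swift
import Foundation

// Goertzel algorithm: measures the energy at one target frequency in a
// block of samples. Cheaper than a full FFT when you only care about a
// single tone, which suits a "listen for one frequency" watch app.
func goertzelMagnitude(samples: [Double], targetHz: Double, sampleRate: Double) -> Double {
    let k = 2.0 * cos(2.0 * Double.pi * targetHz / sampleRate)
    var s0 = 0.0, s1 = 0.0, s2 = 0.0
    for x in samples {
        s0 = x + k * s1 - s2
        s2 = s1
        s1 = s0
    }
    // Squared magnitude of the DFT bin nearest targetHz.
    return s1 * s1 + s2 * s2 - k * s1 * s2
}

// Example: a 440 Hz sine sampled at 8 kHz should score far higher at
// 440 Hz than at an unrelated frequency.
let rate = 8000.0
let tone = (0..<800).map { sin(2.0 * .pi * 440.0 * Double($0) / rate) }
let atTone = goertzelMagnitude(samples: tone, targetHz: 440.0, sampleRate: rate)
let offTone = goertzelMagnitude(samples: tone, targetHz: 1200.0, sampleRate: rate)
```

In a real app you would feed this the float samples copied out of each AVAudioPCMBuffer delivered by the input tap, and compare the magnitude against a threshold.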
I am seeking a way to find the frequency of the mic input.
I can record my voice or a sine tone to a temporary file.
For the recording I am using the AVFoundation framework.
I used this framework to find the peak of the signal.
What can I do to find the frequency of the signal?
There is a quite good example project from Apple; here is the link to the aurioTouch2 sample app:
https://developer.apple.com/library/ios/samplecode/aurioTouch2/Introduction/Intro.html
I guess you need to start using the Core Audio library instead of AVFoundation. To get the frequency you probably need to use an FFT (Fast Fourier Transform). This is done in the example above; it even visualizes a spectrogram (frequency over time).
Or maybe this Stack Overflow post could help you:
Get Frequency for Audio Input on iPhone
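To make concrete what the FFT in aurioTouch2 computes, here is a deliberately naive DFT peak finder in plain Swift. It is only illustrative (the names are mine); in a real iOS app you would use the Accelerate framework's vDSP FFT, which computes the same spectrum orders of magnitude faster:

```swift
import Foundation

// Naive DFT magnitude spectrum: for each bin, correlate the signal with
// a complex exponential, then pick the bin with the most power. This is
// O(n^2) and only meant to show the idea behind the FFT-based approach.
func peakFrequency(samples: [Double], sampleRate: Double) -> Double {
    let n = samples.count
    var bestBin = 0
    var bestPower = -1.0
    for bin in 1..<(n / 2) {                 // skip DC, stop below Nyquist
        var re = 0.0, im = 0.0
        for (i, x) in samples.enumerated() {
            let phase = -2.0 * Double.pi * Double(bin) * Double(i) / Double(n)
            re += x * cos(phase)
            im += x * sin(phase)
        }
        let power = re * re + im * im
        if power > bestPower { bestPower = power; bestBin = bin }
    }
    return Double(bestBin) * sampleRate / Double(n)  // bin index -> Hz
}

// A 330 Hz sine sampled at 4 kHz; the peak should land on 330 Hz.
let sampleRate = 4000.0
let signal = (0..<400).map { sin(2.0 * .pi * 330.0 * Double($0) / sampleRate) }
let detected = peakFrequency(samples: signal, sampleRate: sampleRate)
```

Note that for voice, the strongest FFT bin is not always the perceived pitch (harmonics can dominate), which is why real pitch detectors do further processing on the spectrum.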
I have an MP3 that I want to play slowly without altering the pitch. Is there a way to do this? I can convert it into other formats if that helps. This is for an iPhone app that I am developing.
Thanks
A time-pitch modification/stretching algorithm or library could be used for this. You could develop your own DSP analysis/resynthesis code for a phase vocoder or a PSOLA pitch corrector, or use a commercial library such as Dirac.
See the Stack Overflow question iPhone voice changer for some other answers to a similar question.
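As a rough illustration of the idea (not of Dirac's or any real phase vocoder's internals), here is a simplistic overlap-add time stretch in Swift: windowed grains are read from the input at one hop size and written at another, changing duration without resampling, so pitch is roughly preserved. All names are my own, and a production algorithm would also align phase between grains:

```swift
import Foundation

// Very rough overlap-add (OLA) time stretch. factor > 1 makes the audio
// longer (slower) without resampling it. A real phase vocoder or PSOLA
// implementation additionally aligns phase/pitch marks across grains to
// avoid artifacts; this is only a sketch of the core idea.
func timeStretch(_ input: [Double], factor: Double, grain: Int = 512) -> [Double] {
    let synthHop = grain / 2
    let analysisHop = Int(Double(synthHop) / factor)
    // Hann window so overlapping grains cross-fade smoothly.
    let window = (0..<grain).map { 0.5 - 0.5 * cos(2.0 * Double.pi * Double($0) / Double(grain)) }
    var output = [Double]()
    var readPos = 0
    var writePos = 0
    while readPos + grain <= input.count {
        if output.count < writePos + grain {
            output += Array(repeating: 0.0, count: writePos + grain - output.count)
        }
        for i in 0..<grain {
            output[writePos + i] += input[readPos + i] * window[i]
        }
        readPos += analysisHop     // advance slowly through the input...
        writePos += synthHop       // ...while writing at the normal hop
    }
    return output
}

// Stretching 8000 samples by 2x should yield roughly twice as many.
let stretched = timeStretch(Array(repeating: 0.5, count: 8000), factor: 2.0)
```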
I want to make an application similar to Talking Tom Cat, but I am not able to identify the approach for the sound.
How can I convert my voice into a cat sound?
Look up independent time-pitch stretching of audio. It's a digital signal processing technique. One method that can be used is the phase-vocoder technique of sound analysis/resynthesis in conjunction with resampling. There seem to be a couple of companies selling libraries that do suitable time-pitch modification.
The technique for converting a normal voice into a squeaky voice is called time-scale modification of speech. One way is to take the speech and shift the pitch up by a certain amount. Another approach is to stretch/compress the speech so that the frequencies in the voice get scaled by an appropriate amount. These are techniques in digital signal processing.
This is a great link to download sample code that provides all you need; also refer here to gain more knowledge about your question.
In the init method of HelloWorldLayer.mm in the sample code, you can see three float values:
time = 0.7;
pitch = 0.8;
formant = pow(2., 0./12.);
Just adjust the pitch value to 1.9 and it will give a really nice cat sound!
I'm making an iPhone game, currently using OpenAL for SFX; we want to keep the game under 10 MB.
The iPhone (through OpenAL, at least) only natively plays uncompressed PCM.
What would be the most straightforward way of getting music from some good compressed format (MP3, AAC, Ogg, etc.) into my game?
Is there some sort of decoder API? Should I be using OpenAL?
EDIT:
OK, we've done some calculations, and we should be able to fit everything in nicely with a simple 64 kb/s compression scheme, so I'm looking for the easiest way to decode a compressed file (preferably from memory) to raw PCM in memory for use with OpenAL. We will also need a streaming decoder; it is not necessary for it to decode the stream from memory, but that would be nice. We want the track to loop, so it would be ideal if the decoder had "random access" so you could move around the track easily.
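For reference, the arithmetic behind that 64 kb/s budget works out roughly as follows; this is plain arithmetic, not a measurement of any particular codec:

```swift
// Compressed size per minute at 64 kb/s versus raw 44.1 kHz, 16-bit,
// stereo PCM. Shows why a ~22x reduction makes a 10 MB budget workable.
let bitrateKbps = 64.0
let compressedBytesPerMinute = bitrateKbps * 1000.0 / 8.0 * 60.0   // bits/s -> bytes/min
let pcmBytesPerMinute = 44_100.0 * 2.0 * 2.0 * 60.0                // samples * bytes * channels * seconds
let ratio = pcmBytesPerMinute / compressedBytesPerMinute
```

So each minute of music costs about 0.46 MB compressed instead of roughly 10 MB as raw PCM, about a 22x saving.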
The most compressed way would be a tracker or MIDI. That lets you store only the score for the music, not sound samples.
Maybe this is what you're looking for.
Another option would be to compile an open-source synth/sampler into your game and drive it from a MIDI player; that could give you really good music. (I am a software engineering student and almost studied music, with 15 years of piano/synths.) It is a bit more complex, but it can give you awesome sounds played by the synth, with lightweight MIDI tracks triggering the synth sounds.
Good luck; I think this is actually a really nice (if a bit more complex) way to go!
OK, I've just got it all set up.
I used Audio Queues for the music stream, simply because it is the ONE part of the iPhone SDK that is well documented:
http://developer.apple.com/iphone/library/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AboutAudioQueues/AboutAudioQueues.html#//apple_ref/doc/uid/TP40005343-CH5-SW1
for the general explanation and
http://developer.apple.com/iphone/library/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQPlayback/PlayingAudio.html#//apple_ref/doc/uid/TP40005343-CH3-SW1
for a great example (which is a bit buggy).
This seems to work fine alongside OpenAL, and it does seem that it will allow us to jump randomly around the stream with sample-level accuracy.
The only remaining thing to do is to load from memory; this is done by replacing the AudioFileOpenURL call with a call to AudioFileOpenWithCallbacks.
Unfortunately, the documentation for Audio File Services is poor, to say the least.
Read it at your own risk, and then, after it inexplicably doesn't work, read:
http://developer.apple.com/iphone/library/qa/qa2009/qa1676.html
where it tells you that the callbacks are actually optional, and that by supplying a write callback you are telling it to open the file for writing, which it can't do for MP4s; so just pass NULL for the write and setSize callbacks.
I would also like to say that the two answers here recommending MIDI are almost certainly the right way to go if you really want the smallest size.
Unfortunately, we are lazy and just wanted something in quickly, and it also came to light that we could fit in moderately compressed audio. I was also slightly worried about the performance implications (I don't know what they would be for MIDI), but apparently this method for compressed audio uses hardware decoding.
If you want to go the traditional compressed-audio route, hopefully this helps. You could also try Audio Converter Services if you want to use the data with OpenAL; unfortunately that API also falls into the embarrassingly poorly documented reference manuals. I don't even know if it works, and if it does, I'm not sure whether it uses the hardware decoders.
What is the best way to accomplish real-time pitch shifting on the iPhone?
From what I have read so far, it seems you have to set up a RemoteIO audio unit (which is a pain in itself). Do you need to do an FFT? Are there any examples out there?
Can I just speed up / slow down playback?
Thanks
OpenAL lets you pitch-shift with the AL_PITCH source property. Maybe you could run your audio through OpenAL and use that.
I've never developed for an iPhone, but if you have sufficient control over the buffer sent to the audio device, then you could do the following:
Say you have a buffer read from an audio file. If you send only every other sample to the audio device (probably by putting them in another buffer that is passed to some function), it will double the frequency and halve the time to play the file.
If you want something in between, you need to compute in-between samples, i.e. resample the audio file by interpolating values between successive samples.
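The "compute in-between samples" step this answer describes is linear interpolation. A minimal sketch in Swift (the function name and parameters are my own):

```swift
import Foundation

// Linear-interpolation resampler. rate > 1 plays faster (and higher
// pitched), rate < 1 slower and lower; rate == 2 is exactly the
// "every other sample" case described above.
func resample(_ input: [Double], rate: Double) -> [Double] {
    guard input.count > 1, rate > 0 else { return input }
    var output = [Double]()
    var pos = 0.0
    while pos < Double(input.count - 1) {
        let i = Int(pos)
        let frac = pos - Double(i)
        // Blend the two surrounding samples by the fractional position.
        output.append(input[i] * (1.0 - frac) + input[i + 1] * frac)
        pos += rate
    }
    return output
}

let ramp = (0..<10).map(Double.init)       // 0, 1, ..., 9
let doubled = resample(ramp, rate: 2.0)    // every other sample
let halved = resample(ramp, rate: 0.5)     // in-between samples added
```

Note that resampling alone changes pitch and speed together; to change one without the other you would combine it with a time-stretching technique like those discussed in the earlier answers.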