Does changing the playback rate of a source node alter the pitch? - web-audio-api

According to Mozilla Developer Network, the browser applies pitch correction to audio after the playbackRate property of a source node is changed.
However, according to Chrome Developer docs, changing the playbackRate of a source node is a means of varying pitch.
Do the docs contradict? A quick experiment shows that neither Chrome nor Firefox preserves pitch when the playback rate is changed: a higher playback rate yields a higher pitch.

Yes, the docs do contradict. As the Chrome FAQ states, there's currently no native pitch correction in the Web Audio API.
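Since there's no pitch correction, playbackRate acts like a tape-speed control: the pitch moves by 12·log2(rate) semitones. A minimal sketch of that relationship (plain math, no browser required):

```javascript
// Without pitch correction, playbackRate scales frequency directly,
// so the transposition in semitones is 12 * log2(rate).
function rateToSemitones(rate) {
  return 12 * Math.log2(rate);
}

// Inverse: the playbackRate needed for a desired semitone shift.
function semitonesToRate(semitones) {
  return Math.pow(2, semitones / 12);
}

console.log(rateToSemitones(2));   // 12 (one octave up)
console.log(semitonesToRate(-12)); // 0.5 (one octave down)
```

In a browser you would apply this via `source.playbackRate.value`; newer revisions of the spec also expose a `detune` AudioParam (in cents) on AudioBufferSourceNode, but like playbackRate it resamples rather than time-stretches.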

The docs on MDN were wrong; I just fixed them. Thanks for telling us!

Related

MIDI and Android: can you achieve individual "note bending" only by assigning a separate channel to every note being played?

I'm in the process of devising my idea for some music-related Android app.
It will probably feature playback using the internal MIDI sound bank (I considered using free SoundFonts, but I'm not sure how easy it is to achieve any pitch shifting at all, since they are sample-based rather than synthesized). The issue is that I want to make sure it plays correctly when one note slides to another without the rest shifting as well.
To my understanding, the MIDI messages controlling pitch shifting are the portamento CCs. Now, MIDI supports up to 16 channels, usually one channel per instrument. Could a possible solution during multitrack playback be to have every instrument within a piece play in a separate MIDI player instance, and then map, at any moment, every note generated within that instrument to a different channel of the available 16?
Thank you.
In MIDI 1.0, the only way to get per-note pitch bend is to put the note whose pitch you want to bend on its own channel. This one-note-per-channel convention was later standardized as MPE (MIDI Polyphonic Expression); it is not part of the original MIDI 1.0 spec, but it is built entirely out of MIDI 1.0 messages.
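A minimal sketch of the one-note-per-channel idea in raw MIDI bytes (channel and note numbers here are illustrative):

```javascript
// Each active note gets its own channel, so a pitch bend message
// affects only that note.

// Note On: status 0x90 | channel, then note number and velocity.
function noteOn(channel, note, velocity) {
  return [0x90 | channel, note, velocity];
}

// Pitch Bend: status 0xE0 | channel, then a 14-bit value split into
// 7-bit LSB and MSB. 8192 (0x2000) is the center, i.e. no bend.
function pitchBend(channel, value14bit) {
  return [0xE0 | channel, value14bit & 0x7f, (value14bit >> 7) & 0x7f];
}

// Two simultaneous notes on separate channels; bending channel 1
// leaves the note on channel 2 untouched.
const messages = [
  noteOn(1, 60, 100),          // middle C on channel 1
  noteOn(2, 64, 100),          // E on channel 2
  pitchBend(1, 0x2000 + 1024), // bend only the C upward
];
```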

Is there a way to select the bit rate while using AVPlayer for HTTP live audio streaming?

I'm using AVPlayer to stream audio content delivered in two quality formats.
The problem is that when passing from the lower format to the higher one (done automatically by the framework when Wi-Fi is available) there is a delay during playback.
Is there a way to manually select a desired quality in order to prevent that delay?
It's possible now in iOS8.
Check out preferredPeakBitRate on AVPlayerItem.
Following copied from Apple's documentation:
The desired limit, in bits per second, of network bandwidth consumption for this item.
SWIFT: var preferredPeakBitRate: Double
OBJECTIVE-C: @property(nonatomic) double preferredPeakBitRate
Set preferredPeakBitRate to non-zero to indicate that the player should attempt to limit item playback to that bit rate, expressed in bits per second.
If network bandwidth consumption cannot be lowered to meet the preferredPeakBitRate, it will be reduced as much as possible while continuing to play the item.
Update: This was accurate at the time for iOS 4. For an updated iOS 8 answer, see here.
I've researched this very question myself and have not found an answer, which makes me pretty positive there is no way to do this. The Apple docs don't always give all the details of what you can do with things, but if you look at all the available properties, methods, etc., you will find there is nothing that lets you tweak the stream.
I think this is the whole point of HLS. Apple wants iPhone users to have the best streaming experience possible. If they gave the developer the controls to tweak which stream is being used then that defeats the purpose. The system knows best when it comes to switching streams. If the phone cannot handle the additional bandwidth then it won't (or shouldn't) switch to the higher stream. Some things that I have found that you may want to look at...
Are your files chunked into 10 second increments? If it's more than that you might want to shorten them.
Some file conversion programs don't get the bit rates exactly right and if that is the case your phone may think it has the bandwidth for, say, a 96 kbps feed but in reality your feed is 115 kbps. Take a look at the accepted answer in this post: iPhone - App Rejected again, HTTP Live Streaming 64kbps baseline feed
Use Pantomime, a lightweight framework for iOS, OS X and tvOS that can read and parse HTTP Live Streaming manifests.

DIRAC2 for real time pitch shifting and autotune?

Has anyone implemented the DIRAC2 library from http://www.dspdimension.com/technology-licensing/dirac2-iphone/ for real time pitch correction on the iPhone? The library doesn't appear to support real time processing but perhaps someone has done it?
Thx
I integrated DIRAC2 into an iPhone app so I could modify the playback speed, and it does indeed work in real-time on an iPhone 4. I had to use the lowest possible settings to keep the CPU usage down, but it does play without any skips and I am able to change the playback speed seamlessly.
Running the same project on a 3GS device yielded worse results: the audio skipped enough that it wasn't really usable. One caveat, though, is that I was running my test on the free version of DIRAC2, which only supports a 44100 Hz sample rate, much higher than I need. If you use the pro version and slash the sample rate down to 22050 Hz or lower, it might work on a 3GS, but don't quote me on that.
Anything older than a 3GS has absolutely no chance of real-time playback.
Hope this helps.
Confirmed from DSP Dimensions, that the current DIRAC2 library will not work in real time.

iPhone: CPU power to do DSP/Fourier transform/frequency domain?

I want to analyze microphone audio on an ongoing basis (not just a snippet or prerecorded sample), display a frequency graph, and filter out certain aspects of the audio. Is the iPhone powerful enough for that? I suspect the answer is yes, given the Google and iPhone voice recognition, Shazam and other music recognition apps, and guitar tuner apps out there. However, I don't know what limitations I'll have to deal with.
Anyone play around with this area?
Apple's sample code aurioTouch has a FFT implementation.
The apps that I've seen do some sort of music/voice recognition need an internet connection, so it's highly likely that they just do some sort of feature calculation on the audio and send those features via HTTP to do the recognition on the server.
In any case, frequency graphs and filtering have been done before on lesser CPUs a dozen years ago. The iPhone should be no problem.
"Fast enough" may be a function of your (or your customer's) expectations on how much frequency resolution you are looking for and your base sample rate.
An N-point FFT is on the order of N*log2(N) computations, so if you don't have enough MIPS, reducing N is a potential area of concession for you.
In many applications, sample rate is a non-negotiable, but if it was, this would be another possibility.
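To put numbers on that tradeoff, here is a small sketch: an N-point FFT has frequency bins of width sampleRate/N, at a cost on the order of N·log2(N) operations per transform.

```javascript
// Frequency resolution (bin width in Hz) of an N-point FFT.
function binWidthHz(sampleRate, n) {
  return sampleRate / n;
}

// Rough operation count for one N-point FFT (order N * log2(N)).
function fftOps(n) {
  return n * Math.log2(n);
}

// Halving N (or the sample rate) is the knob to turn when MIPS run out,
// at the cost of coarser frequency resolution:
console.log(binWidthHz(44100, 1024)); // ~43 Hz per bin
console.log(fftOps(1024));            // 10240
console.log(fftOps(512));             // 4608 -- less than half the work
```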
I made an app that calculates the FFT live
http://www.itunes.com/apps/oscope
You can find my code for the FFT on GitHub (although it's a little rough)
http://github.com/alexbw/iPhoneFFT
Apple's new iPhone OS 4.0 SDK allows for built-in computation of the FFT with the "Accelerate" library, so I'd definitely start working with the new OS if it's a central part of your app's functionality.
You can't just port FFT code written in C into your app as-is: the Thumb compiler option complicates floating-point arithmetic. You need to compile that code in ARM mode.

Real-time Pitch Shifting on the iPhone

I have a children's iPhone application that I am writing and I need to be able to shift the pitch of a sound sample using Core Audio. Does anyone have any example code I could look at where this is done. There are many music and game apps in the app store that do this so I know I am not the first one. However, I cannot find any examples of it being done.
You can use DIRAC2 from DSP Dimension for pitch shifting on the iPhone. Quote:
"DIRAC2 is available as both a commercial object library offering unlimited sample rates and phase locked multichannel support and as a free single channel, 44.1/48kHz LE version."
use the soundtouch open source project to change pitch
Here is the link : http://www.surina.net/soundtouch/
Once you add SoundTouch to your project, you give it the input sound file path, the output sound file path, and the pitch change as input.
Since processing a whole file takes extra time, it's better to modify SoundTouch so that while you record the voice you feed the data directly in for processing. It will make your application more responsive.
I know it's too late for the person who asked, but this is a really valuable link (as I found) for anyone else looking for a solution to the same problem.
So here we have the latest DIRAC3 with its own audio player classes, which take care of run-time pitch and speed shifting (explore it for god knows what more). Run the sample and have a huge round of applause for that.
Try Dirac - it's the best technology out there and it's available on Win, Linux, MacOS X and iOS. We're using it in all our products (and a couple of others do as well, search for "Capo" on the App Store). They're at version 3 now which has seen a huge increase in performance since previous versions. Hope this helps.
See: Related question
How much control over pitch do you need... could you precalculate all the different sounds?
If the answer is yes, then you can just pick the right sounds and play them.
You could also use Audio Converter Services in conjunction with AVAudioPlayer, which will allow you to resample the audio (which will effectively repitch them, though they'll change duration).
Alternatively, as the related question points out, you could use OpenAL and AL_PITCH
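As a rough sketch of what plain resampling (or AL_PITCH) does, with illustrative numbers: playing at rate r shifts the pitch by 12·log2(r) semitones and divides the duration by r, which is why the two can't be changed independently without a time-stretching library like DIRAC or SoundTouch.

```javascript
// Playing a sample at `rate` times normal speed changes pitch and
// duration together; you cannot move one without the other.
function resampleEffect(rate, durationSec) {
  return {
    semitoneShift: 12 * Math.log2(rate), // pitch moves by 12*log2(rate)
    newDuration: durationSec / rate,     // duration shrinks as rate grows
  };
}

const up = resampleEffect(2, 3.0); // a 3-second sample at double speed
console.log(up.semitoneShift);     // 12 (an octave up)
console.log(up.newDuration);       // 1.5 (half as long)
```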