Storing processed audio data from a custom worklet to turn into a WAV file - web-audio-api

I was wondering if there is a way to store the data from a custom audio worklet for further processing on the client side, i.e. turning it into a WAV file. I've seen that it's possible to output an audio stream to a MediaRecorder, but that produces lossy audio (typically Opus in a WebM or Ogg container). If possible, I would like access to the raw PCM data from the worklet processor so I can encode it as WAV or another lossless format.
My hunch is that this can be accomplished by attaching something to the AudioWorkletGlobalScope and retrieving it from the audio context, but I'm not sure. Help would be appreciated!

Answering my own question: I see that it is now possible to use PCM as a recording codec, e.g. via https://github.com/muaz-khan/RecordRTC/. This is unfortunately not covered in most major web audio documentation, but since it works in several modern browsers, it is good enough for my needs!
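For anyone who would rather skip the library: once you have the raw PCM samples out of the processor, a WAV file is just a 44-byte RIFF header in front of the data. A minimal sketch (written in C here, though the byte layout is identical in any language), assuming 16-bit little-endian interleaved PCM on a little-endian host:

    #include <stdint.h>
    #include <stdio.h>

    /* Write a minimal 44-byte RIFF/WAVE header followed by the raw samples.
       Assumes 16-bit little-endian integer PCM and a little-endian host,
       so multi-byte header fields can be written directly. */
    static void write_wav(FILE *f, const int16_t *pcm, uint32_t frames,
                          uint16_t channels, uint32_t sampleRate)
    {
        uint32_t dataBytes  = frames * channels * (uint32_t)sizeof(int16_t);
        uint32_t byteRate   = sampleRate * channels * (uint32_t)sizeof(int16_t);
        uint16_t blockAlign = channels * (uint16_t)sizeof(int16_t);
        uint32_t riffSize   = 36 + dataBytes; /* file size minus 8-byte preamble */
        uint32_t fmtSize    = 16;             /* size of the fmt chunk body */
        uint16_t fmtPCM     = 1;              /* format tag 1 = linear PCM */
        uint16_t bits       = 16;

        fwrite("RIFF", 1, 4, f); fwrite(&riffSize, 4, 1, f);
        fwrite("WAVE", 1, 4, f);
        fwrite("fmt ", 1, 4, f); fwrite(&fmtSize, 4, 1, f);
        fwrite(&fmtPCM,     2, 1, f); fwrite(&channels, 2, 1, f);
        fwrite(&sampleRate, 4, 1, f); fwrite(&byteRate, 4, 1, f);
        fwrite(&blockAlign, 2, 1, f); fwrite(&bits,     2, 1, f);
        fwrite("data", 1, 4, f); fwrite(&dataBytes, 4, 1, f);
        fwrite(pcm, 1, dataBytes, f); /* the samples themselves */
    }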

Related

how to insert and overwrite audio file in iOS

I am developing an application which has an audio recorder. The user should be able to play the audio file, insert a recording into it, cut unwanted audio, and overwrite parts of the audio file.
I have seen "how to Insert, overwrite audio files - Audio Editing iPhone?", but no one answered it...
At least suggest a way to implement this...
Thanks in advance...
What type of audio file are you talking about? You will almost certainly need to convert whatever you are using into PCM WAV data for this type of manipulation. Luckily, Core Audio, which others have pointed you towards, has some convenience methods for doing this.
Once you have the raw PCM data, you can insert simply by splicing other PCM data in at the desired point. You want to make sure you don't write in the middle of a stereo frame, but beyond that, most simply-formatted PCM data is pretty easy to manipulate. Think of it like a string -- you can start with "Hello World" and change it to "Hello, Beautiful World" by simply inserting data in the middle.
Overwriting works on the same principle.
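To make the frame-boundary point concrete, here is a hedged sketch in C of such an insert, assuming interleaved 16-bit PCM; the function name is mine, not from any framework:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Insert insFrames frames of PCM into orig at frame index at.
       Working in frames (not bytes) guarantees a stereo pair is never split.
       Returns a newly allocated buffer of origFrames + insFrames frames. */
    int16_t *insert_pcm(const int16_t *orig, size_t origFrames,
                        const int16_t *ins, size_t insFrames,
                        size_t at, unsigned channels)
    {
        size_t n = channels; /* samples per frame */
        int16_t *out = malloc((origFrames + insFrames) * n * sizeof(int16_t));
        if (out == NULL || at > origFrames) { free(out); return NULL; }

        memcpy(out, orig, at * n * sizeof(int16_t));                /* head */
        memcpy(out + at * n, ins, insFrames * n * sizeof(int16_t)); /* spliced-in data */
        memcpy(out + (at + insFrames) * n, orig + at * n,
               (origFrames - at) * n * sizeof(int16_t));            /* tail */
        return out;
    }
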
Once you are done with the edits, you'll need to convert the PCM data back into whatever format it was saved in before.
Have a look at Core Audio
Core Audio provides software interfaces for implementing audio features in applications you create for iOS and OS X. Under the hood, it handles all aspects of audio on each of these platforms. In iOS, Core Audio capabilities include recording, playback, sound effects, positioning, format conversion, and file stream parsing, among other things.

objective-c record audio session output

I am writing an app that generates music. I am using OpenAL to modify gain, modify pitch, mix audio, and play the resulting audio. I now need to record the audio as it is being played. I understand that OpenAL does not let you record the output audio. The other option I have found is to use Audio Units. However, because I need to mix/pitch/gain the audio and record it, it seems I would need to write all the audio processing myself so I can have access to the output buffer. Is this correct? Or is there a different iOS API I can use to do this? If not, is there a 3rd-party solution that already lets me record the output (paid solutions are fine)?
You are correct.
Audio Units are the only iOS public API that allows an app to both process and then record audio.
Trying to record the OpenAL output may well be a violation of Apple's rules against using non-public APIs.
The alternative may be to completely rewrite the portions of OpenAL you need (there may be open source for some portions) so that they run on top of the RemoteIO Audio Unit.
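For what that route looks like: a minimal sketch of capturing the RemoteIO unit's output with a render notification, assuming the unit is already configured and started; the tap and attach functions are hypothetical names:

    #include <AudioUnit/AudioUnit.h>

    /* Render-notify tap: invoked before and after every render cycle.
       On the post-render pass, ioData holds the final mixed PCM output. */
    static OSStatus OutputTap(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            /* Copy ioData into a ring buffer here; never block or
               allocate on this realtime thread. */
        }
        return noErr;
    }

    /* Attach the tap to an already-configured RemoteIO unit. */
    void AttachOutputTap(AudioUnit remoteIO, void *ringBuffer)
    {
        AudioUnitAddRenderNotify(remoteIO, OutputTap, ringBuffer);
    }
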
The best way to go is likely to be Core Audio, since it will give you as much flexibility as you need. Take a look into the Extended Audio File Services reference pages.
Using an extended audio file, you should be able to set up a file format and audio stream buffer to send the final mixed output to, and then use the ExtAudioFileWrite() function to write the samples to the file.
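As a rough sketch of that flow (error handling omitted, and the 44.1 kHz stereo 16-bit WAV format is just an example):

    #include <AudioToolbox/AudioToolbox.h>

    /* Append interleaved 16-bit PCM frames to a WAV file via
       Extended Audio File Services. */
    void WriteMixedOutput(CFURLRef fileURL, const int16_t *samples, UInt32 frames)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(int16_t);
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

        ExtAudioFileRef file;
        ExtAudioFileCreateWithURL(fileURL, kAudioFileWAVEType, &fmt,
                                  NULL, kAudioFileFlags_EraseFile, &file);

        AudioBufferList buf;
        buf.mNumberBuffers = 1;
        buf.mBuffers[0].mNumberChannels = fmt.mChannelsPerFrame;
        buf.mBuffers[0].mDataByteSize   = frames * fmt.mBytesPerFrame;
        buf.mBuffers[0].mData           = (void *)samples;

        ExtAudioFileWrite(file, frames, &buf); /* call repeatedly as buffers arrive */
        ExtAudioFileDispose(file);
    }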

Which one of these is better for short audio input in iPhone - .caf or .wav?

I am making a simple application for iPhone, and I want to play a short audio file on an object click. Which of .caf and .wav would be better?
I am building a simple application in Cocos2d in which balloons produce a pop sound when clicked. What are the memory issues with both sound versions?
If you do not need specific Core Audio Format features, then WAV has more universal support (and it would be my default choice for that reason).
Core Audio Format basically functions as a container for other audio file formats, including WAV. Core Audio Format has many great features, but it's not evident from the description that you need any of these.
In response to a deleted comment, which was moved to the question:
I can't speak for Cocos2d specifically, so I will write about the file formats in general: WAV does not use data compression. CAF may. If it is a short sound file, you probably don't want data compression (because it requires a good amount of processing to convert to LPCM for playback). If you play the pop often, then you will want to hold onto an uncompressed version of the audio data for easy processing. 1 second will require 44100 * 2 bytes at CD quality in memory (per channel).
For a short sound file such as a balloon pop, a 16 bit WAV file sounds ideal. In that sense, the memory difference should not be a deciding factor. If you have a lot of audio files, or long audio files to load into memory, then the situation changes. For now, I don't consider memory to be a problem in your case. Since CAF is a container, its uncompressed representation will be nearly identical (the difference will be a little more header data in the CAF).
A CAF file is basically a Core Audio Format file, so it is well suited to the Apple frameworks. The main advantage of CAF over WAV is when recording: CAF files can grow beyond 4 GB, and you don't need to rewrite the header's length fields after each packet is recorded, as you do with WAV.
Anyway, I assume you don't need these CAF-specific features. And as Justin said, I believe that WAV will be the better option, as WAV has wider support than the CAF format.

Is it possible to access decoded audio data from Audio Queue Services?

I have an app in the App Store for streaming compressed music files over the network. I'm using Audio Queue Services to handle the playback of the files.
I would like to do some processing on the files as they are streamed. I tested the data in the audio buffers, but it is not decompressed audio. If I'm streaming an mp3 file, the buffers are just little slices of mp3 data.
My question is: is it ever possible to access the uncompressed PCM data when using Audio Queue Services? Am I forced to switch to using Audio Units?
You could use Audio Conversion Services to do your own MP3-to-PCM conversion, apply your effect, then put PCM into the queue instead of MP3. It's not going to be terribly easy, but you'd still be spared the threading challenges of doing this directly with Audio Units (which would require you to do your own conversion anyway, and then probably use a CARingBuffer to send samples between the download/convert thread and the realtime callback thread from the I/O unit).
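A heavily simplified skeleton of that converter setup; MyStream and NextMP3Packets are hypothetical stand-ins for your download buffer, and real MP3 input must also supply accurate packet descriptions:

    #include <AudioToolbox/AudioToolbox.h>

    typedef struct {          /* hypothetical: your network/download buffer */
        const void *bytes;
        size_t      length;
        size_t      readPos;
    } MyStream;

    /* Hypothetical helper: points ioData at the next compressed packets
       and fills in their packet descriptions. */
    extern OSStatus NextMP3Packets(MyStream *s, UInt32 *ioNumPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outPacketDesc);

    /* Converter input callback, invoked whenever the decoder needs more MP3 data. */
    static OSStatus FeedMP3(AudioConverterRef conv, UInt32 *ioNumPackets,
                            AudioBufferList *ioData,
                            AudioStreamPacketDescription **outPacketDesc,
                            void *inUserData)
    {
        return NextMP3Packets((MyStream *)inUserData, ioNumPackets,
                              ioData, outPacketDesc);
    }

    /* Pull one buffer of decoded LPCM out of the converter, ready for your
       effect and then the audio queue. conv was created earlier with
       AudioConverterNew(&mp3Fmt, &pcmFmt, &conv). */
    OSStatus DecodeSome(AudioConverterRef conv, MyStream *s,
                        AudioBufferList *pcmOut, UInt32 *ioFrames)
    {
        return AudioConverterFillComplexBuffer(conv, FeedMP3, s,
                                               ioFrames, pcmOut, NULL);
    }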

Is there a "simple" way to play linear PCM audio on the iPhone?

I'm making a media playback app which is given uncompressed linear PCM (über-raw) audio from a third-party decoder, but I'm going crazy when it comes to just playing back the simplest audio format I can imagine...
The final app will get the PCM data progressively as the source file streams from a server, so I looked at Audio Queues initially (since I can seemingly just give it bytes on the go), but that turned out to be mindfudge - especially since Apple's own Audio Queue sample code seems to go off on a magical field trip of OpenGL and needless encapsulation...
I ended up using an AVAudioPlayer during (non-streaming) prototyping; I currently create a WAVE header and add it to the beginning of the PCM data to get AVAudioPlayer to accept it (since it can't take raw PCM).
Obviously that's no use for streaming (WAVE sets the entire file length in the header, so data can't be added on the go)...
..essentially: is there a way to just give iPhone OS some PCM bytes and have it play them?
You should revisit the AudioQueue code. Once you strip away all the guff, you should have about two pages of code plus a callback in which you can supply the raw PCM. The callback can even be scheduled on your main run loop (via the run-loop parameters to AudioQueueNewOutput), so you don't have to worry about locking.
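For reference, a hedged sketch of that stripped-down shape; FillWithPCM is a hypothetical stand-in for wherever your decoder's bytes come from:

    #include <AudioToolbox/AudioToolbox.h>

    #define NUM_BUFFERS  3
    #define BUFFER_BYTES 32768

    /* Hypothetical: copies up to cap bytes of raw LPCM into dst and
       returns how many bytes were actually written. */
    extern UInt32 FillWithPCM(void *dst, UInt32 cap);

    /* Called by the queue whenever it finishes a buffer and needs more samples. */
    static void OutputCallback(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
    {
        inBuffer->mAudioDataByteSize = FillWithPCM(inBuffer->mAudioData, BUFFER_BYTES);
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }

    void StartPlayback(void)
    {
        /* 44.1 kHz, stereo, interleaved signed 16-bit -- adjust to your decoder. */
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 4;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 4;

        AudioQueueRef queue;
        AudioQueueNewOutput(&fmt, OutputCallback, NULL, NULL, NULL, 0, &queue);

        /* Prime the queue: allocate the buffers and fill them once by hand. */
        for (int i = 0; i < NUM_BUFFERS; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, BUFFER_BYTES, &buf);
            OutputCallback(NULL, queue, buf);
        }
        AudioQueueStart(queue, NULL);
    }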