I want to generate a sound wave programmatically and play it with AVAudioPlayer. I have the code to encode my waveform as linear PCM, 44100Hz, mono, 8 bits per sample.
I am not clear on what kind of envelope I need to wrap around this buffer so that AVAudioPlayer recognizes it as PCM.
PCM is just a digital representation of an analog audio signal. Unfortunately, it doesn't encapsulate any of the metadata about the audio - channels, bit depth, or sample rate - all of which are necessary to properly read that PCM data. I would assume AVAudioPlayer would accept this PCM data wrapped in an NSData object if you were able to set those properties manually on the AVAudioPlayer object. Unfortunately, those properties are read-only, so even though the documentation says AVAudioPlayer can handle anything that Core Audio can handle, it has no way to handle raw LPCM data.
As stated by zoul, I would imagine that the easiest way to go about this is throwing in a WAV header, since the header informs AVPlayer of the above necessary variables. It's 44 bytes, is easily mocked up, and is defined nicely - I used the same definition given above to implement wav header encoding and decoding. Also, it's just prepended to your unmodified LPCM data.
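For illustration, here is a minimal sketch in C of writing that 44-byte header for the asker's format (44,100 Hz, mono, 8-bit unsigned PCM). The field layout is the standard RIFF/WAVE one; writeWavHeader is my own name, and writing the integer fields with raw fwrite assumes a little-endian host (true on iOS devices and simulators):

#include <stdio.h>
#include <stdint.h>

/* Writes the canonical 44-byte RIFF/WAVE header for 8-bit unsigned mono
   44,100 Hz linear PCM. dataSize is the length of the raw LPCM payload in bytes. */
static void writeWavHeader(FILE *f, uint32_t dataSize)
{
    uint16_t channels = 1, bitsPerSample = 8, audioFormat = 1; /* 1 = linear PCM */
    uint32_t sampleRate = 44100;
    uint32_t byteRate = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t fmtChunkSize = 16;
    uint32_t riffChunkSize = 36 + dataSize; /* file size minus the 8-byte RIFF preamble */

    fwrite("RIFF", 1, 4, f);
    fwrite(&riffChunkSize, 4, 1, f); /* all integer fields are little-endian */
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f);
    fwrite(&fmtChunkSize, 4, 1, f);
    fwrite(&audioFormat, 2, 1, f);
    fwrite(&channels, 2, 1, f);
    fwrite(&sampleRate, 4, 1, f);
    fwrite(&byteRate, 4, 1, f);
    fwrite(&blockAlign, 2, 1, f);
    fwrite(&bitsPerSample, 2, 1, f);
    fwrite("data", 1, 4, f);
    fwrite(&dataSize, 4, 1, f);
    /* ...the unmodified LPCM bytes follow immediately after this header */
}

Write this header followed by your sample buffer into an NSData (or a file), and AVAudioPlayer's initWithData:error: should then recognize it as a WAV.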
Maybe adding a WAV header would help?
I posted a Swift 5 example (as a GitHub Gist) of converting a buffer of audio float samples into an in-memory WAV file to use with AVAudioPlayer initWithData, here: https://gist.github.com/hotpaw2/4eb1ca16c138178113816e78b14dde8b
What I have
An mp3 file, 16kHz, 1 channel. Read like:
[data,Fs] = audioread('file.mp3');
This file is playable in Windows Media Player, for example, and works fine.
What I want
To play it inside matlab. After reading it, I've tried to play it, like:
soundsc(data);
However, it doesn't sound anywhere near how it should (nor does it when using sound instead of soundsc).
The problem, then, is...
How can I play this mp3 vector inside matlab? Is it even possible? Or do I need to convert it to other format so I can work with it? (wav I guess?)
You are missing the sample frequency. You need
soundsc(data, Fs)
If omitted, Fs defaults to 8192 Hz, which is not the correct rate for your file.
Also, note that if you don't need scaling you can use
sound(data, Fs)
which will run a little faster.
I'm not sure if it's possible to achieve what I want, but basically I have a NSDictionary which represents a recording. It's a timeline of what sound id was played at what point in time.
I have it so that you can play back this timeline/recording, and it works perfectly.
I'm wondering if there is any way to take this timeline, and export it as a single sound that could be saved to a computer if the device was synced with iTunes.
So basically I'm asking if I can take a timeline of sounds, play it back and have these sounds stitched together as a single sound, that can then be exported.
I'm using OpenAL as my sound framework and the sound files are all CAFs.
Any help or guidance is appreciated.
Thanks!
You will need:
A good understanding of linear PCM audio format (See Wikipedia's Linear PCM page).
A good understanding of audio sample-rates and some basic maths to convert your timings into sample-offsets.
An awareness of how two's-complement binary numbers (signed/unsigned, 16-bit, 32-bit, etc.) are stored in computers, and how the endian-ness of a processor affects this.
Patience, interest in learning, and a strong desire to get this working.
Here's what to do:
Enable file sharing in your app (UIFileSharingEnabled=YES in Info.plist, and write files to the /Documents directory).
Render the sounds you use into memory buffers containing linear PCM audio data (if they are not already, i.e. if they are compressed). You can do this using the offline rendering functionality of Audio Queues (see Apple's Audio Queue docs). It will make things a lot easier if you render them all to the same PCM format and sample rate (for example, 16-bit signed samples @ 44,100 Hz; I'll use this format for all examples), and use the same format for your output. I recommend starting with a mono format, then adding stereo once you get it working.
Choose an uncompressed output format and mix your sounds into a single stream:
3.1. Allocate a buffer large enough, or open a file stream to write to.
3.2. Write out any headers (for example if using WAV format output instead of raw PCM) and write zeros (or the mid-point of your sample range if not using a signed sample format) for any initial silence before your first sound starts. For example if you want 0.1 seconds silence before your first sound, write 4410 (0.1 * 44100) zero-samples i.e. write 4410 shorts (16-bit) all with zero.
3.3. Now keep track of all 'currently playing' sounds and mix them together. Start with an empty list of 'currently playing' sounds, and keep track of the 'current time' of the sample you are mixing; for each sample you write out, increment the 'current time' by 1.0/sample_rate. When it gets to be time for another sound to start, add it to the 'currently playing' list with a sample offset of 0. To do the mixing, iterate through all of the 'currently playing' sounds, add together their current samples, and then increment the sample offset for each of them. Write the summed value into the output buffer. For example, if soundA starts at 0.1 seconds (after the silence) and soundB starts at 0.2 seconds, you will be doing the equivalent of output[8820] = soundA[4410] + soundB[0]; for sample 8820, and then output[8821] = soundA[4411] + soundB[1]; for sample 8821, etc. As a sound ends (you reach the end of its samples), simply remove it from the 'currently playing' list and keep going until the end of your audio data. There is a sketch of this inner loop after this list.
3.4. The simple mixing (sum of samples) described above does have some problems. For example, if two samples have values that add up to a number larger than 32767, this cannot be stored in a signed 16-bit number; this is called clipping. For now, just clamp the value to 32767 and get it working... later on, come back and implement a simple limiter (see description at the end).
Now that you have a mixed version of your track in an uncompressed linear PCM format, that might be enough, so write it to /Documents. If you want to write it in a compressed format, you will need to get the source for an audio encoder and run your linear PCM output through that.
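To make step 3.3 concrete, here is a minimal sketch in C of the inner mixing loop, assuming every sound has been rendered to 16-bit signed mono at the same sample rate; the PlayingSound struct and mixOneSample are my own names, not part of any Apple API:

#include <stdint.h>
#include <stddef.h>

typedef struct {
    const int16_t *samples; /* rendered LPCM data for this sound */
    size_t length;          /* total number of samples in this sound */
    size_t offset;          /* next sample to mix, starts at 0 */
} PlayingSound;

/* Mixes one output sample from all currently playing sounds. The caller is
   responsible for removing sounds whose offset has reached their length. */
static int16_t mixOneSample(PlayingSound *playing, size_t count)
{
    int32_t sum = 0; /* wider than 16 bits so the summation cannot overflow */
    for (size_t i = 0; i < count; i++) {
        if (playing[i].offset < playing[i].length)
            sum += playing[i].samples[playing[i].offset++];
    }
    /* naive clamping for now; swap in the limiter described below */
    if (sum > 32767) sum = 32767;
    if (sum < -32768) sum = -32768;
    return (int16_t)sum;
}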
Simple limiter:
Let's choose to limit the top 10% of the sample range, so if the absolute value is greater than 29490 (int limitBegin = (int)(32767 * 0.9f);) we will scale down the value. The maximum possible peak would be int maxSampleValue = 32767 * numPlayingSounds; and we want to scale values above limitBegin to peak at 32767. So do the summation into sampleValue as per the very simple mixer described above, then:
if (sampleValue > limitBegin)
{
    /* scale the portion above limitBegin so the worst-case sum maps to 32767 */
    float overLimit = (sampleValue - limitBegin) / (float)(maxSampleValue - limitBegin);
    sampleValue = limitBegin + (int)(overLimit * (32767 - limitBegin));
}
If you're paying attention, you will have noticed that when numPlayingSounds changes (for example when a new sound starts), the limiter becomes more (or less) harsh and this may result in abrupt volume changes (within the limited range) to accommodate the extra sound. You can use the maximum number of playing sounds instead, or devise some clever way to ramp up the limiter over a few milliseconds.
Remember that this is operating on the absolute value of sampleValue (which may be negative in signed formats), so the code here is just to demonstrate the idea. You'll need to write it properly to handle limiting at both ends (peak and trough) of your sample range. Also, there are some tricks you can do to optimize all of the above during the mixing - you will probably spot these while you're writing the mixer, be careful and get it working first, then go back and refactor/optimize if needed.
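As a starting point, here is one hedged way to write that symmetric version, folding the sign out before limiting and back in afterwards; limitSample is my own helper name:

#include <stdint.h>

/* Limits one mixed sample symmetrically at both ends of the 16-bit range. */
static int32_t limitSample(int32_t sampleValue, int numPlayingSounds)
{
    const int32_t limitBegin = (int32_t)(32767 * 0.9f);      /* 29490, as above */
    const int32_t maxSampleValue = 32767 * numPlayingSounds; /* worst-case peak */
    int32_t sign = (sampleValue < 0) ? -1 : 1;
    int32_t magnitude = sampleValue * sign;                  /* absolute value */
    if (magnitude > limitBegin) {
        float overLimit = (magnitude - limitBegin) / (float)(maxSampleValue - limitBegin);
        magnitude = limitBegin + (int32_t)(overLimit * (32767 - limitBegin));
    }
    return sign * magnitude;
}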
Also remember to consider the endian-ness of the platform you are using and the file-format you are writing to, as you may need to do some byte-swapping.
One approach which isn't too hard if your files are stored in a simple format is just to combine them together manually. That is, create a new file with the caf format and manually put together the pieces you want.
This will be really easy if the sounds are uncompressed (linear PCM). But first, read the documentation on the CAF file format here:
http://developer.apple.com/library/mac/#documentation/MusicAudio/Reference/CAFSpec/CAF_spec/CAF_spec.html#//apple_ref/doc/uid/TP40001862-CH210-SW1
I'm in the process of switching from AVAudioPlayer to OpenAL using the Finch sound engine. I need to do metering, i.e. get the average peak levels. Finch sound engine does not provide this, and I'm completely new to OpenAL. How can I do this? Any examples would be really appreciated.
I'm assuming you're looking for a drop-in replacement of AVAudioPlayer's peakPowerForChannel: method. Unfortunately, there is none. You'll have to roll your own.
OpenAL "sounds" are a combination of a "buffer" (your sample data, loaded in memory) and a "source," which represents something like properties you want applied to your sample data.
The easy approach to OpenAL playback is to load the entire file into memory and play the whole thing in one call. However, you can use an NSInputStream to read in a chunk of PCM sample data from a file, compute your peak power over that chunk with your own function, upload the chunk into an OpenAL buffer with alBufferData(), play it using your source, and then repeat until EOF.
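For illustration, here is a rough sketch in C of such a metering function over one chunk of 16-bit signed samples; meterChunk is my own name, and converting to dB relative to full scale is one common convention, not something Finch or OpenAL provides:

#include <math.h>
#include <stdint.h>
#include <stddef.h>

/* Computes peak and average (RMS) levels of one chunk, in dB full scale. */
static void meterChunk(const int16_t *samples, size_t count,
                       float *peakDb, float *averageDb)
{
    float peak = 0.0f;
    double sumSquares = 0.0;
    for (size_t i = 0; i < count; i++) {
        float s = samples[i] / 32768.0f; /* normalize to -1.0 .. 1.0 */
        float a = fabsf(s);
        if (a > peak) peak = a;
        sumSquares += (double)s * s;
    }
    float rms = (count > 0) ? (float)sqrt(sumSquares / count) : 0.0f;
    *peakDb = 20.0f * log10f(peak > 0.0f ? peak : 1e-9f); /* guard against log(0) */
    *averageDb = 20.0f * log10f(rms > 0.0f ? rms : 1e-9f);
}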
I know you are intending to use Finch, but you should give Audio Queues a really close look (if metering is a critical feature for you). That API is much better designed for this type of application. In particular, the kAudioQueueProperty_CurrentLevelMeterDB property will provide you with either peak RMS (mPeakPower) or average RMS levels (mAveragePower), which you can read as often as you like.
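For reference, polling those Audio Queue meters looks roughly like this (a sketch assuming you already have a running mono AudioQueueRef; note that metering is off by default and must be enabled first):

#include <AudioToolbox/AudioToolbox.h>

/* Call once after creating the queue: level metering is off by default. */
static void enableMetering(AudioQueueRef queue)
{
    UInt32 on = 1;
    AudioQueueSetProperty(queue, kAudioQueueProperty_EnableLevelMetering,
                          &on, sizeof(on));
}

/* Poll as often as you like while the queue is running. */
static void readLevels(AudioQueueRef queue, float *peakDb, float *averageDb)
{
    AudioQueueLevelMeterState meter[1]; /* one element per channel (mono here) */
    UInt32 size = sizeof(meter);
    AudioQueueGetProperty(queue, kAudioQueueProperty_CurrentLevelMeterDB,
                          meter, &size);
    *peakDb = meter[0].mPeakPower;
    *averageDb = meter[0].mAveragePower;
}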
Good luck and happy coding!
Some resources that might be helpful:
http://kcat.strangesoft.net/openal-tutorial.html
http://connect.creativelabs.com/openal/Documentation/OpenAL_Programmers_Guide.pdf
http://www.hydrogenaudio.org/forums/index.php?showtopic=78578
http://developer.apple.com/mac/library/documentation/MusicAudio/Reference/AudioQueueReference/Reference/reference.html
I'm making an iPhone game. We need to use a compressed format for sound, and we want to be able to loop seamlessly back to a specific sample in the audio file (so there is an intro, then it loops back to an offset).
Currently, the only export process I have found that allows seamless looping (reports the right priming and padding frame counts, no clicking when looping, etc.) is using Apple's afconvert to AAC format in a CAF file.
But when we try to encode at lower bitrates, it automatically resamples the sound! We do not want the sound resampled; every other encoder I have encountered has an option to set the output sample rate, but I can't find it for this one.
On another note, if anyone has had any luck with seamless looping of a compressed file format using Audio Queues, let me know.
Currently I'm working from the information found at:
http://developer.apple.com/mac/library/qa/qa2009/qa1636.html
Note that this did work perfectly when I left the encode bitrate at the default (~128 kbps), but when I set it to 32 kbps with the -b option, it resampled, and the loop now clicks.
It needs to be at least 48 kbps; at 32 kbps, afconvert will downsample to a lower sample rate.
I think you are confusing sample rate (typical values: 32 kHz, 44.1 kHz, 48 kHz) and bit rate (typical values: 128 kbps, 160 kbps, 192 kbps).
For a bit rate, 32 kbps is extremely low; sound will have bad quality at this bit rate. You probably intended to set the sample rate to 32 kHz instead, which is also not exactly typical, but makes more sense.
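If you do need the low bit rate without resampling, afconvert can, as far as I recall, pin the output sample rate by appending it to the data format specifier; something like the line below, though treat the exact syntax as an assumption and check afconvert -h on your machine:

afconvert -f caff -d aac@44100 -b 32000 input.wav output.caf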
When compressing to AAC and uncompressing back to WAV, you will not get the same audio file back, because in AAC the audio data is represented in a completely different format than raw wave data. E.g. you can have shifts of a few microseconds, which are necessary to convert to the compressed format. You cannot completely get around this with any highly compressed format.
The clicking sound originates from the sudden jump between two samples that are played in direct succession. This likely takes place because the offset to which you jump back in your loop does not end up at exactly the same position in the AAC file as it was in the WAV file (as explained above, there can be shifts of microseconds).
You will not get around these slight changes when compressing. Instead, you have to compensate for them after compression by adjusting the offset. That means you have to open the compressed sound file in an audio editor, e.g. Audacity, and manually find another offset close to the original one, which is suitable for looping.
How to find an offset which is suitable for looping?
Zoom in to the waveform's end. Look at how the waveform looks there. Then zoom in to the waveform at the original offset and search in its neighbourhood for an offset at which the waveform connects seamlessly to the end of the waveform.
For an example of how this should look, open the uncompressed audio file in the audio editor and examine the end of the waveform and the offset there.
I need to know if it is possible to create a 30 second sample MP3 from a WAV file. The generated MP3 file must feature a fade at the start and end.
I am currently using ffmpeg, but cannot find any documentation suggesting it supports this.
Could someone please provide me the name of software (CLI, *nix only) that could achieve this?
This will:
trim out, starting at position 45 seconds, the next 30 seconds (trim 0:45.0 30),
fade in over the first 5 seconds (0:5) and fade out over the last 5 seconds (0 0:5), and
convert from WAV to MP3:
sox infile.wav outfile.mp3 trim 0:45.0 30 fade h 0:5 0 0:5
Check out SoX - Sound eXchange
I have not used it myself but one of my friends speaks highly of it.
From the web page (highlighting by me):
SoX is a cross-platform (Windows, Linux, MacOS X, etc.) command line utility that can convert various formats of computer audio files into other formats. It can also apply various effects to these sound files, and, as an added bonus, SoX can play and record audio files on most platforms.
The best way to do this is to apply the 30-second truncation, fade in and fade out to the WAV audio data before converting it to an MP3. If your conversion library has a method that takes an array of samples, this is very easy to do. If the method only accepts a WAV file (either in-memory or on disk), then this is slightly less easy as you have to learn the WAV file format (which is easy to write but somewhat more difficult to read). Either way, applying gain and/or attenuation to time-domain sample data (as in a WAV file) is much easier than trying to apply these effects to frequency-domain data (as in an MP3 file).
Of course, if your conversion library already does all this, it's best to just use that and not worry about it yourself.
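To illustrate the first approach, here is a minimal sketch in C of applying linear fades directly to 16-bit time-domain samples before handing them to the encoder; applyFades and its parameter names are my own:

#include <stdint.h>
#include <stddef.h>

/* Applies a linear fade-in over the first fadeLen samples and a linear
   fade-out over the last fadeLen samples of a mono 16-bit buffer. */
static void applyFades(int16_t *samples, size_t count, size_t fadeLen)
{
    if (fadeLen > count / 2)
        fadeLen = count / 2; /* keep the two ramps from overlapping */
    for (size_t i = 0; i < fadeLen; i++) {
        float gain = (float)i / (float)fadeLen;
        samples[i] = (int16_t)(samples[i] * gain);                         /* fade in */
        samples[count - 1 - i] = (int16_t)(samples[count - 1 - i] * gain); /* fade out */
    }
}

For a 30-second sample at 44,100 Hz with 5-second fades, count would be 30 * 44100 and fadeLen 5 * 44100.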