Encoding images to video with ffmpeg - iPhone

I am trying to encode a series of images into one video file. I am using code from api-example.c; it works, but it gives me weird green colors in the video. I know I need to convert my RGB images to YUV. I found a solution, but it doesn't work: the colors are no longer green, but they are still very strange. Here is the code:
// Register all formats and codecs
av_register_all();
AVCodec *codec;
AVCodecContext *c= NULL;
int i, out_size, size, outbuf_size;
FILE *f;
AVFrame *picture;
uint8_t *outbuf;
printf("Video encoding\n");
/* find the mpeg video encoder */
codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
c= avcodec_alloc_context();
picture= avcodec_alloc_frame();
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base= (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
/* open it */
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
f = fopen(filename, "wb");
if (!f) {
fprintf(stderr, "could not open %s\n", filename);
exit(1);
}
/* alloc image and output buffer */
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;
#pragma mark -
AVFrame* outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
#pragma mark -
for(i=1;i<77;i++) {
fflush(stdout);
int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
CGImageRef newCgImage = [image CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB8, c->width, c->height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGB8,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
//perform the conversion
sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
// Here is where I try to convert to YUV
/* encode the image */
out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
printf("encoding frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
free(buffer);
buffer = NULL;
}
/* get the delayed frames */
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
printf("write frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
}
/* add sequence end code to have a real mpeg file */
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(picture);
printf("\n");
Please give me advice on how to fix this problem.

You can look at this article: http://unick-soft.ru/Articles.cgi?id=20. It is in Russian, but it includes code samples and a Visual Studio example.

Has anyone found a fix for this? I am seeing the green video problem on the decode side, that is, when I decode incoming PIX_FMT_YUV420P packets and then sws_scale them to PIX_FMT_RGBA.
Thanks!
EDIT:
The green images are probably due to an arm optimization backfiring. I used this to fix the problem in my case:
http://ffmpeg-users.933282.n4.nabble.com/green-distorded-output-image-on-iPhone-td2231805.html
I guess the idea is to not specify any architecture (the configure script will give you a warning about the architecture being unknown, but you can continue to 'make' anyway). That way, the ARM optimizations are not used. There may be a slight performance hit (if any), but at least it works! :)

I think the problem is most likely that you are using PIX_FMT_RGB8 as your input pixel format. This does not mean 8 bits per channel like the commonly used 24-bit RGB or 32-bit ARGB. It means 8 bits per pixel, meaning that all three color channels are housed in a single byte. I am guessing that this is not the format of your image since it is quite uncommon, so you need to use PIX_FMT_RGB24 or PIX_FMT_RGB32 depending on whether or not your input image has an alpha channel. See this documentation page for info on the pixel formats.
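If it helps, here is a minimal sketch of one way to remove the guesswork (not the poster's code; the helper name copyImagePixelsBGRA and the BGRA assumption are mine): redraw the UIImage into a bitmap context whose layout you control, then hand that buffer to swscale with the matching pixel format.
// Hypothetical helper: render the UIImage into a 32-bit BGRA buffer we own,
// so the byte layout matches what we tell libswscale (PIX_FMT_BGRA).
static uint8_t *copyImagePixelsBGRA(UIImage *image, int width, int height)
{
    uint8_t *pixels = av_malloc(width * height * 4);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4, cs,
                       kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), [image CGImage]);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    return pixels; // caller frees with av_free()
}
// In the encode loop the source format then becomes PIX_FMT_BGRA:
//   avpicture_fill((AVPicture *)picture, pixels, PIX_FMT_BGRA, c->width, c->height);
//   sws_getContext(c->width, c->height, PIX_FMT_BGRA,
//                  c->width, c->height, PIX_FMT_YUV420P,
//                  SWS_FAST_BILINEAR, NULL, NULL, NULL);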

Related

iOS Audio Unit + Speex encoding is distorted

I am working on an iOS project that needs to encode and decode Speex audio using a remoteIO audio unit as input/output.
The problem I am having is that although Speex doesn't print any errors, the audio I get is somewhat recognizable as voice but very distorted; it sort of sounds like the gain was cranked up in a robotic way.
Here are the encode and decode functions (input to encode is 320 bytes of signed integers from the audio unit render callback; input to decode is 62 bytes of compressed data):
#define AUDIO_QUALITY 10
#define FRAME_SIZE 160
#define COMP_FRAME_SIZE 62
char *encodeSpeexWithBuffer(spx_int16_t *buffer, int *insize) {
SpeexBits bits;
void *enc_state;
char *outputBuffer = (char *)malloc(200);
speex_bits_init(&bits);
enc_state = speex_encoder_init(&speex_nb_mode);
int quality = AUDIO_QUALITY;
speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &quality);
speex_bits_reset(&bits);
speex_encode_int(enc_state, buffer, &bits);
*insize = speex_bits_write(&bits, outputBuffer, 200);
speex_bits_destroy(&bits);
speex_encoder_destroy(enc_state);
return outputBuffer;
}
short *decodeSpeexWithBuffer(char *buffer) {
SpeexBits bits;
void *dec_state;
speex_bits_init(&bits);
dec_state = speex_decoder_init(&speex_nb_mode);
short *outTemp = (short *)malloc(FRAME_SIZE * 2);
speex_bits_read_from(&bits, buffer, COMP_FRAME_SIZE);
speex_decode_int(dec_state, &bits, outTemp);
speex_decoder_destroy(dec_state);
speex_bits_destroy(&bits);
return outTemp;
}
And the audio unit format:
// Describe format
audioFormat.mSampleRate = 8000.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
No errors are reported anywhere, and I have confirmed that the Audio Unit is processing at a sample rate of 8000.
After a few days of going crazy over this I finally figured it out. The trick with Speex is that you must initialize a SpeexBits struct and the encoder state (the void*) once and use them throughout the entire session. Because I was recreating them for every piece of the encode, it was causing strange-sounding results.
Once I moved:
speex_bits_init(&bits);
enc_state = speex_encoder_init(&speex_nb_mode);
Out of the while loop everything worked great.
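For anyone hitting the same thing, here is a rough sketch of the idea (the struct and function names are mine, not from the poster's project): create the state once, reuse it for every frame, and only destroy it when the session ends.
typedef struct {
    SpeexBits bits;
    void *enc_state;
} SpeexEncoderState;
void speexEncoderSetup(SpeexEncoderState *s) {
    speex_bits_init(&s->bits);
    s->enc_state = speex_encoder_init(&speex_nb_mode);
    int quality = AUDIO_QUALITY;
    speex_encoder_ctl(s->enc_state, SPEEX_SET_QUALITY, &quality);
}
// called once per 160-sample frame, reusing the same bits/state every time
int speexEncodeFrame(SpeexEncoderState *s, spx_int16_t *samples, char *out, int maxBytes) {
    speex_bits_reset(&s->bits);
    speex_encode_int(s->enc_state, samples, &s->bits);
    return speex_bits_write(&s->bits, out, maxBytes);
}
void speexEncoderTeardown(SpeexEncoderState *s) {
    speex_encoder_destroy(s->enc_state);
    speex_bits_destroy(&s->bits);
}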

ffmpeg + libx264 on iPhone -> 'avcodec_encode_video' always returns 0. Please advise

av_register_all();
AVCodec *codec;
AVCodecContext *c= NULL;
int out_size, size, outbuf_size;
//FILE *f;
uint8_t *outbuf;
printf("Video encoding\n");
/* find the mpeg video encoder */
codec = avcodec_find_encoder(CODEC_ID_H264); // or avcodec_find_encoder_by_name("libx264")
NSLog(@"codec = %p", codec);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
c= avcodec_alloc_context();
/* put sample parameters */
c->bit_rate = 400000;
c->bit_rate_tolerance = 10;
c->me_method = 2;
/* resolution must be a multiple of two */
c->width = 352;//width;//352;
c->height = 288;//height;//288;
/* frames per second */
c->time_base= (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
//c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
c ->me_range = 16;
c ->max_qdiff = 4;
c ->qmin = 10;
c ->qmax = 51;
c ->qcompress = 0.6f;
'avcodec_encode_video' always returns 0.
I guess that is because of the 'non-strictly-monotonic PTS' warning. Have you seen the same situation?
For me it also always returns 0, but it encodes fine. I don't think there is an issue if it returns 0. In avcodec.h you can see this:
"On error a negative value is returned, on success zero or the number of bytes used from the output buffer."
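In other words, with the old avcodec_encode_video() API a return of 0 just means "no output for this call yet" because the encoder is buffering (B-frames / lookahead). A minimal feed-and-flush sketch under that reading, reusing the variable names from the question code:
/* feed a frame; 0 bytes out is not an error, the encoder may be buffering */
out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
if (out_size > 0)
    fwrite(outbuf, 1, out_size, f);
/* after the last input frame, flush the delayed frames by passing NULL */
do {
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
    if (out_size > 0)
        fwrite(outbuf, 1, out_size, f);
} while (out_size > 0);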

iPhone AudioUnitRender error -50 in device

I am working on a project in which I have used AudioUnitRender. It runs fine in the simulator but gives a -50 error on the device.
If anyone has faced a similar problem, please suggest a solution.
RIOInterface* THIS = (RIOInterface *)inRefCon;
COMPLEX_SPLIT A = THIS->A;
void *dataBuffer = THIS->dataBuffer;
float *outputBuffer = THIS->outputBuffer;
FFTSetup fftSetup = THIS->fftSetup;
uint32_t log2n = THIS->log2n;
uint32_t n = THIS->n;
uint32_t nOver2 = THIS->nOver2;
uint32_t stride = 1;
int bufferCapacity = THIS->bufferCapacity;
SInt16 index = THIS->index;
AudioUnit rioUnit = THIS->ioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, THIS->bufferList);
NSLog(#"%d",renderErr);
if (renderErr < 0) {
return renderErr;
}
data regarding sample size and frame...
bytesPerSample = sizeof(SInt16);
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel = 8 * bytesPerSample;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
//asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
asbd.mBytesPerPacket = bytesPerSample * asbd.mFramesPerPacket;
//asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mSampleRate = sampleRate;
Thanks in advance.
The length of the buffer (inNumberFrames) can be different on the device and the simulator. From my experience it is often larger on the device. When you use your own AudioBufferList this is something you have to take into account. I would suggest allocating more memory for the buffer in the AudioBufferList.
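For example, something along these lines (the 4096 figure is only an assumption; the real ceiling can be read from kAudioUnitProperty_MaximumFramesPerSlice):
// Pre-allocate one mono 16-bit buffer big enough for the largest render slice
// the hardware is likely to deliver.
UInt32 maxFrames = 4096; // assumption; query kAudioUnitProperty_MaximumFramesPerSlice for the real value
AudioBufferList *list = (AudioBufferList *)malloc(sizeof(AudioBufferList));
list->mNumberBuffers = 1;
list->mBuffers[0].mNumberChannels = 1;
list->mBuffers[0].mDataByteSize = maxFrames * sizeof(SInt16);
list->mBuffers[0].mData = malloc(maxFrames * sizeof(SInt16));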
I know this thread is old, but I just found the solution to this problem.
The buffer duration for the device is different from that on the simulator. So you have to change the buffer duration:
Float32 bufferDuration = ((Float32) <INSERT YOUR BUFFER DURATION HERE>) / sampleRate; // buffer duration in seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferDuration), &bufferDuration);
Try adding kAudioFormatFlagsNativeEndian to your list of stream description format flags. Not sure if that will make a difference, but it can't hurt.
Also, I'm suspicious about the use of THIS for the userData member, which definitely does not fill that member with any meaningful data by default. Try running the code in a debugger and see if that instance is correctly extracted and casted. Assuming it is, just for fun try putting the AudioUnit object into a global variable (yeah, I know..) just to see if it works.
Finally, why use THIS->bufferList instead of the one passed into your render callback? That's probably not good.

AudioConverterConvertBuffer problem with insz error

I have a problem with the function AudioConverterConvertBuffer. Basically, I want to convert from this format
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;
_streamFormat.mBytesPerPacket = 4;
_streamFormat.mBytesPerFrame = 4;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100;
_streamFormat.mReserved = 0;
to this format
_streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked|0 ;//| kAudioFormatFlagIsNonInterleaved |0;
_streamFormatOutput.mBitsPerChannel = 16;
_streamFormatOutput.mChannelsPerFrame = 1;
_streamFormatOutput.mBytesPerPacket = 2;
_streamFormatOutput.mBytesPerFrame = 2;
_streamFormatOutput.mFramesPerPacket = 1;
_streamFormatOutput.mSampleRate = 44100;
_streamFormatOutput.mReserved = 0;
What I want to do is extract one audio channel (the left or the right channel) from an LPCM buffer, based on the input format, to make it mono in the output format. Some of the conversion logic is as follows.
This is to set the channel map for the PCM output file:
SInt32 channelMap[1] = {0};
status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);
and this is to convert the buffer in a while loop:
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
//frames = audioBuffer.mData;
NSLog(#"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
NSLog(#"The buffer size is %d",audioBuffer.mDataByteSize);
numBytesIO = audioBuffer.mDataByteSize;
convertedBuf = malloc(sizeof(char)*numBytesIO);
status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
char errchar[10];
NSLog(#"status audio converter convert %d",status);
if (status != 0) {
NSLog(#"Fail conversion");
assert(0);
}
NSLog(#"Bytes converted %d",numBytesIO);
status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
NSLog(#"status for writebyte %d, bytes written %d",status,numBytesIO);
free(convertedBuf);
if (numBytesIO != audioBuffer.mDataByteSize) {
NSLog(#"Something wrong in writing");
assert(0);
}
countByteBuf = countByteBuf + numBytesIO;
But the insz problem is there... so it can't convert. I would appreciate any input.
Thanks in advance
First, you cannot use AudioConverterConvertBuffer() to convert anything where input and output byte size is different. You need to use AudioConverterFillComplexBuffer(). This includes performing any kind of sample rate conversions, or adding/removing channels.
See Apple's documentation on AudioConverterConvertBuffer(). This was also discussed on Apple's CoreAudio mailing lists, but I'm afraid I cannot find a reference right now.
Second, even if this could be done (which it can't), you are passing the same number of bytes allocated for output as you had for input, despite actually requiring half that number of bytes (due to reducing the number of channels from 2 to 1).
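To make the first point concrete, here is a rough sketch of the pull-style API (the callback name, the ConverterInput struct, and the sizes are my own illustration; error handling omitted). The converter asks the callback for input packets and writes however many output bytes the stereo-to-mono conversion actually produces:
typedef struct {
    AudioBuffer source;   // the interleaved 16-bit stereo input
    Boolean     consumed;
} ConverterInput;
static OSStatus converterInputProc(AudioConverterRef converter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outPacketDesc,
                                   void *inUserData)
{
    ConverterInput *input = (ConverterInput *)inUserData;
    if (input->consumed) {             // nothing left: tell the converter we are done
        *ioNumberDataPackets = 0;
        return noErr;
    }
    ioData->mBuffers[0] = input->source;
    *ioNumberDataPackets = input->source.mDataByteSize / 4;  // 4 bytes per stereo frame
    input->consumed = true;
    return noErr;
}
// Usage sketch: ask for mono output (2 bytes per frame) into convertedBuf.
//   UInt32 outPackets = audioBuffer.mDataByteSize / 4;
//   AudioBufferList outList;
//   outList.mNumberBuffers = 1;
//   outList.mBuffers[0].mNumberChannels = 1;
//   outList.mBuffers[0].mDataByteSize = outPackets * 2;
//   outList.mBuffers[0].mData = convertedBuf;
//   ConverterInput in = { audioBuffer, false };
//   AudioConverterFillComplexBuffer(converter, converterInputProc, &in,
//                                   &outPackets, &outList, NULL);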
I'm actually working on using AudioConverterConvertBuffer() right now, and the test files are mono while I need to play stereo. I'm currently stuck with the converter performing conversion only of the first chunk of the data. If I manage to get this to work, I'll try to remember to post the code. If I don't post it, please poke me in comments.

Audio Processing: Playing with volume level

I want to read a sound file from the application bundle, copy it, play with its maximum volume level (gain value or peak power, I'm not sure about the technical name for it), and then write it as another file to the bundle again.
I did the copying and writing part. The resulting file is identical to the input file. I use the AudioFileReadBytes() and AudioFileWriteBytes() functions of Audio File Services in the AudioToolbox framework to do that.
So, I have the input file's bytes and also its audio data format (via AudioFileGetProperty() with kAudioFilePropertyDataFormat), but I can't find a variable in these to play with the original file's maximum volume level.
To clarify my purpose, I'm trying to produce another sound file of which volume level is increased or decreased relative to the original one, so I don't care about the system's volume level which is set by the user or iOS.
Is that possible to do with the framework I mentioned? If not, are there any alternative suggestions?
Thanks
edit:
Walking through Sam's answer regarding some audio basics, I decided to expand the question with another alternative.
Can I use Audio Queue Services to record an existing sound file (which is in the bundle) to another file and play with the volume level (with the help of the framework) during the recording phase?
update:
Here's how I'm reading the input file and writing the output. The code below lowers the sound level for "some" of the amplitude values, but with lots of noise. Interestingly, if I choose 0.5 as the amplitude value it increases the sound level instead of lowering it, but when I use 0.1 it lowers the sound. Both cases involve disturbing noise. I think that's why Art is talking about normalization, but I don't know anything about normalization.
AudioFileID inFileID;
CFURLRef inURL = [self inSoundURL];
AudioFileOpenURL(inURL, kAudioFileReadPermission, kAudioFileWAVEType, &inFileID);
UInt32 fileSize = [self audioFileSize:inFileID];
Float32 *inData = malloc(fileSize * sizeof(Float32)); //I used Float32 type with jv42's suggestion
AudioFileReadBytes(inFileID, false, 0, &fileSize, inData);
Float32 *outData = malloc(fileSize * sizeof(Float32));
//Art's suggestion, if I've correctly understood him
float ampScale = 0.5f; //this will reduce the 'volume' by -6db
for (int i = 0; i < fileSize; i++) {
outData[i] = (Float32)(inData[i] * ampScale);
}
AudioStreamBasicDescription outDataFormat = {0};
[self audioDataFormat:inFileID];
AudioFileID outFileID;
CFURLRef outURL = [self outSoundURL];
AudioFileCreateWithURL(outURL, kAudioFileWAVEType, &outDataFormat, kAudioFileFlags_EraseFile, &outFileID);
AudioFileWriteBytes(outFileID, false, 0, &fileSize, outData);
AudioFileClose(outFileID);
AudioFileClose(inFileID);
You won't find amplitude scaling operations in (Ext)AudioFile, because it's about the simplest DSP you can do.
Let's assume you use ExtAudioFile to convert whatever you read into 32-bit floats. To change the amplitude, you simply multiply:
float ampScale = 0.5f; //this will reduce the 'volume' by -6db
for (int ii=0; ii<numSamples; ++ii) {
*sampOut = *sampIn * ampScale;
sampOut++; sampIn++;
}
To increase the gain, you simply use a scale > 1.f. For example, an ampScale of 2.f would give you +6dB of gain.
If you want to normalize, you have to make two passes over the audio: One to determine the sample with the greatest amplitude. Then another to actually apply your computed gain.
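As a rough sketch of that two-pass approach (assuming the samples are already in a float buffer called samples, with numSamples entries, as in the loop above):
// pass 1: find the loudest sample
float peak = 0.f;
for (int ii = 0; ii < numSamples; ++ii) {
    float mag = fabsf(samples[ii]);   // needs <math.h>
    if (mag > peak) peak = mag;
}
// pass 2: scale so the loudest sample sits at full scale (skip silent files)
float gain = (peak > 0.f) ? (1.f / peak) : 1.f;
for (int ii = 0; ii < numSamples; ++ii) {
    samples[ii] *= gain;
}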
Using AudioQueue services just to get access to the volume property is serious, serious overkill.
UPDATE:
In your updated code, you're multiplying each byte by 0.5 instead of each sample. Here's a quick-and-dirty fix for your code, but see my notes below. I wouldn't do what you're doing.
...
// create short pointers to our byte data
int16_t *inDataShort = (int16_t *)inData;
int16_t *outDataShort = (int16_t *)outData;
int16_t ampScale = 2;
for (int i = 0; i < fileSize / 2; i++) { // fileSize is in bytes; samples are 2 bytes each
outDataShort[i] = inDataShort[i] / ampScale;
}
...
Of course, this isn't the best way to do things: it assumes your file is little-endian 16-bit signed linear PCM. (Most WAV files are, but not AIFF, m4a, mp3, etc.) I'd use the ExtAudioFile API instead of the AudioFile API, as it will convert any format you're reading into whatever format you want to work with in code. Usually the simplest thing to do is read your samples in as 32-bit floats. Here's an example of your code using the ExtAudioFile API to handle any input file format, including stereo vs. mono:
void ScaleAudioFileAmplitude(NSURL *theURL, float ampScale) {
OSStatus err = noErr;
ExtAudioFileRef audiofile;
ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
assert(audiofile);
// get some info about the file's format.
AudioStreamBasicDescription fileFormat;
UInt32 size = sizeof(fileFormat);
err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);
// we'll need to know what type of file it is later when we write
AudioFileID aFile;
size = sizeof(aFile);
err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
AudioFileTypeID fileType;
size = sizeof(fileType);
err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);
// tell the ExtAudioFile API what format we want samples back in
AudioStreamBasicDescription clientFormat;
bzero(&clientFormat, sizeof(clientFormat));
clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
clientFormat.mBytesPerFrame = 4;
clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBitsPerChannel = 32;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mSampleRate = fileFormat.mSampleRate;
clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
// find out how many frames we need to read
SInt64 numFrames = 0;
size = sizeof(numFrames);
err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);
// create the buffers for reading in data
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
bufferList->mBuffers[ii].mNumberChannels = 1;
bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
}
// read in the data
UInt32 rFrames = (UInt32)numFrames;
err = ExtAudioFileRead(audiofile, &rFrames, bufferList);
// close the file
err = ExtAudioFileDispose(audiofile);
// process the audio
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
float *fBuf = (float *)bufferList->mBuffers[ii].mData;
for (int jj=0; jj < rFrames; ++jj) {
*fBuf = *fBuf * ampScale;
fBuf++;
}
}
// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);
// tell the ExtAudioFile API what format we'll be sending samples in
err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
// write the data
err = ExtAudioFileWrite(audiofile, rFrames, bufferList);
// close the file
ExtAudioFileDispose(audiofile);
// destroy the buffers
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
free(bufferList->mBuffers[ii].mData);
}
free(bufferList);
bufferList = NULL;
}
I think you should avoid working with 8-bit unsigned chars for audio, if you can.
Try to get the data as 16-bit or 32-bit; that would avoid some noise/bad-quality issues.
For most common audio file formats there isn't a single master volume variable. Instead you will need to take (or convert to) the PCM sound samples and perform at least some minimal digital signal processing (multiply, saturate/limit/AGC, quantization noise shaping, etc.) on each sample.
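For 16-bit samples, the "multiply and saturate" part boils down to something like this sketch (scaleSample is just an illustrative helper; dithering and noise shaping are left out):
// scale one 16-bit PCM sample and clamp instead of letting it wrap around
static inline SInt16 scaleSample(SInt16 s, float gain)
{
    float v = (float)s * gain;
    if (v >  32767.f) v =  32767.f;
    if (v < -32768.f) v = -32768.f;
    return (SInt16)lrintf(v);        // needs <math.h>
}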
If the sound file is normalized, there's nothing you can do to make the file louder. Except in the case of poorly encoded audio, volume is almost entirely the realm of the playback engine.
http://en.wikipedia.org/wiki/Audio_bit_depth
Properly stored audio files will have peak volume at or near the maximum value available for the file's bit depth. If you attempt to 'decrease the volume' of a sound file, you'll essentially just be degrading the sound quality.