I am using the flutter_sound package to record audio from the mic. It provides data as a stream of Uint8List, so how can I calculate the amplitude from it? I have found many answers in other languages, but I was having a hard time translating them into Dart.
for reference,
Reading in a WAV file and calculating RMS
Detect silence when recording
how can i translate byte[] buffer to amplitude level
If anyone can translate these into Dart so that I can calculate the amplitude, that would help.
I was not able to calculate the amplitude from the Uint8List, so what I did was write my own native code to get the amplitude directly from the platform side; you can check out my package here. (I am busy elsewhere, so I'm not maintaining it right now, but I surely will in the future.)
I don't know whether flutter_sound provides incorrect values or I was calculating them wrong, but I was always getting nearly the same values whether the volume was high or low.
Now, if you want to calculate the amplitude, it should be based on the values you're getting.
If you are getting values between -32768 and 32767, then from my understanding you define a timeframe and calculate the RMS over that timeframe.
The RMS is your amplitude.
Now, as @richard said in the comments, normalize by 32768 if you have an Int16List. But I think that is doing too much work. We get a stream of Int16List, so data may be arriving every 200 ms or whatever interval is set at the plugin level. Calculating the RMS for each Int16List is an intensive task, so I don't think it's good to do it on the Flutter side. Get the amplitude from native code directly and just pass it to Flutter using platform channels.
Still, if you want to try it, calculate the RMS of an Int16List over a defined timeframe: square the samples, take the mean, and take the square root, as in the sketch below.
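A minimal Dart sketch of that calculation for one chunk of the stream might look like the following. It assumes the Uint8List holds 16-bit little-endian signed mono PCM, which is my assumption about the stream format rather than something flutter_sound guarantees:

import 'dart:math';
import 'dart:typed_data';

// Computes the RMS amplitude of one chunk of 16-bit little-endian signed PCM,
// normalized to 0.0-1.0. The PCM format here is an assumption.
double rmsFromPcm16(Uint8List chunk) {
  final data = ByteData.sublistView(chunk);
  final sampleCount = chunk.lengthInBytes ~/ 2;
  if (sampleCount == 0) return 0.0;

  var sumOfSquares = 0.0;
  for (var i = 0; i < sampleCount; i++) {
    final sample = data.getInt16(i * 2, Endian.little);
    sumOfSquares += sample * sample;
  }

  // Root of the mean of the squares, normalized by the 16-bit full scale.
  return sqrt(sumOfSquares / sampleCount) / 32768.0;
}

You would call this once per chunk coming off the stream; smoothing the result over a few chunks gives a steadier level.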
I may well be wrong, but I hope this helps or guides you in the right direction. If anyone finds a better way, please post it here.
Related
The docs for AVAudioPlayer's averagePowerForChannel: say it "Returns the average power for a given channel, in decibels, for the sound being played." But they don't say what "average" means, or how/if that average is weighted. I'm particularly interested in what the range of samples is, specifically the time interval and whether it extends forward into the future at all.
AVAudioRecorder: peak and average power says that the formula used to calculate power is RMS (root mean square) over a number of samples, but also doesn't say how many samples are used.
Optional bonus question: if I want to calculate what the average power will be in, say, 100 msec -- enough time for an animation to begin now and reach an appropriate level soon -- what frameworks should I be looking into? Confusion with meters in AVAudioRecorder says I can get ahold of the raw sample data with AudioQueue or RemoteIO, and then I can use something like this: Android version of AVAudioPlayer's averagePowerForChannel -- but I haven't found sample code (pardon the pun).
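For what it's worth, the usual relationship between an RMS level and the decibel figure such meters report is just 20 * log10(rms); here is a small Dart sketch of that conversion (the language is incidental, and this is the textbook formula, not Apple's implementation, whose averaging window is exactly the undocumented part):

import 'dart:math';

// Converts a normalized RMS value (0.0-1.0) to decibels relative to full scale.
// Silence tends toward -infinity, so a floor is applied.
double rmsToDb(double rms, {double floorDb = -160.0}) {
  if (rms <= 0) return floorDb;
  return max(20 * log(rms) / ln10, floorDb); // 20 * log10(rms)
}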
I am working on a project in MATLAB to take a predetermined audio file and change the sample rate dynamically from data generated in real time. I have hit a very stubborn roadblock with the dsp.audioplayer object. It doesn't allow changing either the sample rate or the sample size once its state is locked. My current thought is to vary the sample size that I pull from the WAV file and scale it using an FIR rate conversion filter. Is this an option worth pursuing? Are there any other ways around this problem?
In the latest MATLAB release, SampleRate is tunable in dsp.AudioPlayer. Tunable means you can change the property value after the object is locked.
Your workaround is good when this is not possible.
I made a sine LUT for VHDL, using 256 elements.
I'm using MIDI input, so values range from 8.17 Hz (note #0) to 12543.85 Hz (note #127).
I have another LUT that calculates how many values must be sent to my 48 kHz codec in order to play the sound (the 8.17 Hz frequency will need 48000/8.17 = 5870 values).
I have another LUT that contains an index factor, which is 256/num_Values scaled by 100, and which is used to fetch values from the sine table (e.g. 100*256/5870 = 4, with integer rounding).
I send this index factor to another VHDL file, which is used to calculate which value should be sent back. (ex: index = index_factor*step_counter)
When I get this index, I divide it by 100, and call sineLUT[index] to get the value that I need to generate a sine wave at the desired frequency.
The problem is, only the last 51 notes seem to work for me, and I do not know why. It seems to get stuck on a constant note at anything below that frequency (< 650 Hz), and it just decreases in volume every time I try to lower the note.
If you need parts of my code, let me know.
Just guessing, I suspect your step_counter isn't going through enough cycles, so your index (into the sine lut) doesn't go through a full 360 degrees for the lower frequencies.
For anything more helpful, you'll probably have to post code.
As an aside, why aren't you using something more like a conventional DDS? Analog Devices has a nice write-up on the basics: DDS Tutorial
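To make the rounding issue concrete, here is a rough Dart sketch (not VHDL) contrasting the rounded index factor with a DDS-style phase accumulator, using the numbers from the question as assumptions (256-entry LUT, 48 kHz codec):

import 'dart:math';

const lutSize = 256;
const sampleRate = 48000;

// 256-entry sine LUT, as in the question.
final sineLut = List<double>.generate(lutSize, (i) => sin(2 * pi * i / lutSize));

// Original scheme: index factor = round(100 * 256 / num_values), divided by 100
// later. At 8.17 Hz this rounds 4.36 down to 4, roughly an 8% error, and the
// relative error grows as the factor shrinks at low frequencies.
int roundedIndexFactor(double freq) =>
    (100 * lutSize / (sampleRate / freq)).round();

// DDS-style alternative: a 32-bit phase accumulator whose tuning word keeps the
// fractional phase instead of rounding it away, so even 8.17 Hz steps cleanly
// through the whole table.
List<double> ddsSamples(double freq, int numSamples) {
  final tuningWord = (freq * (1 << 32) / sampleRate).round();
  var phase = 0;
  final out = List<double>.filled(numSamples, 0);
  for (var n = 0; n < numSamples; n++) {
    out[n] = sineLut[phase >> 24];             // top 8 bits select the LUT entry
    phase = (phase + tuningWord) & 0xFFFFFFFF; // wrap at 2^32
  }
  return out;
}

In hardware the same idea is just an N-bit accumulator register plus an adder, with the top 8 bits addressing the LUT.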
I want to add a few bytes of data to a sound file (for example a song). The sound file will be transmitted via radio to a receiver who uses, for example, the iPhone microphone to pick up the sound, and an application will show the original bytes of data. Preferably it should not be audible to humans.
What is such technology called? Are there any applications that can do this?
Libraries/apps that can be used on iPhone?
It's audio steganography. There are algorithms to do it. Refer to here.
I've done some research, and it seems the way to go is:
Use low audio frequencies.
Spread the "bits" around randomly - do not use a pattern as it will be picked up by the listener. "White noise" is a good clue. The random pattern is known by the sender and receiver.
Use a Fourier transform to pick up frequency and amplitude (see the sketch below).
Clean up input data.
Use checksum/redundancy-algorithms to compensate for loss.
I'm writing a prototype and am having a bit of difficulty picking up the right frequency, as it has a ~4 Hz offset (100 Hz becomes 96.x Hz when played and picked up by the microphone).
This is not the answer, but I hope it helps.
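One way to implement the Fourier-transform step, when the carrier frequencies are known in advance, is a single-bin detector such as the Goertzel algorithm; a generic Dart sketch (not code from the prototype) follows:

import 'dart:math';

// Goertzel algorithm: measures the magnitude of one target frequency in a
// block of samples. When the hidden data sits at a handful of known carrier
// frequencies, this is cheaper than a full FFT.
double goertzelMagnitude(List<double> samples, double targetFreq, double sampleRate) {
  final k = (0.5 + samples.length * targetFreq / sampleRate).floor();
  final omega = 2 * pi * k / samples.length;
  final coeff = 2 * cos(omega);

  var sPrev = 0.0, sPrev2 = 0.0;
  for (final x in samples) {
    final s = x + coeff * sPrev - sPrev2;
    sPrev2 = sPrev;
    sPrev = s;
  }
  // Magnitude of the selected frequency bin.
  return sqrt(sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2);
}

Regarding the ~4 Hz offset: one thing worth checking is the analysis block length, since the frequency resolution of a block is sampleRate / blockLength, so short blocks quantize the detected frequency quite coarsely.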
I'm using Aran Mulhollan's RemoteIOPlayer, using audio queues in the iPhone SDK.
I can without problems:
- add two signals to mix sounds
- increase the sound volume by multiplying the UInt32 values I get from the WAV files
BUT every other operation gives me warped and distorted sound, and in particular I can't divide the signal. I can't seem to figure out what I'm doing wrong; the actual result of the division seems fine. Some aspect of sound/signal processing must obviously be eluding me :)
Any help appreciated !
Have you tried something like this?
- (void)setQueue:(AudioQueueRef)ref toVolume:(float)newValue {
    // Set the playback volume parameter on the audio queue.
    OSStatus rc = AudioQueueSetParameter(ref, kAudioQueueParam_Volume, newValue);
    if (rc) {
        NSLog(@"AudioQueueSetParameter returned %d when setting the volume.", rc);
    }
}
First of all, the code you mention does not use AudioQueues; it uses AudioUnits. The best way to mix audio on the iPhone is to use the built-in mixer units; there is some code on the site you downloaded your original example from, here. Other than that, what I would check in your code is that you have the correct data type. Are you trying your operations on unsigned ints when you should be using signed ones? That often produces warped results (understandably).
The iPhone handles audio as 16-bit integers. Most audio files are already normalized so that the peak sample values are the maximum that fits in a 16-bit signed integer. That means if you add two such samples together, you get overflow, or in this case, audio clipping. If you want to mix two audio sources together and ensure there's no clipping, you must average the samples: add them together and divide by two. Or you can set the volume to half; in decibels, that is about a -6 dB change.
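To make the averaging concrete, here is a minimal Dart sketch (Dart rather than the Objective-C of the question, and not the RemoteIOPlayer code itself) mixing two buffers of signed 16-bit samples:

import 'dart:typed_data';

// Mixes two buffers of signed 16-bit PCM by averaging, which keeps the result
// inside the 16-bit range and avoids clipping. Simply adding the samples can
// overflow, which is heard as distortion.
Int16List mixByAveraging(Int16List a, Int16List b) {
  final length = a.length < b.length ? a.length : b.length;
  final out = Int16List(length);
  for (var i = 0; i < length; i++) {
    out[i] = (a[i] + b[i]) ~/ 2; // average the two sources, roughly -6 dB each
  }
  return out;
}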