In my game, one audio clip is playing via the following call:
[[SimpleAudioEngine sharedEngine] playEffect:@"audio.aac"];
When I touch a sprite, I play another audio clip that is about 1 second long.
My problem is that the first "audio.aac" stops when I touch the sprite continuously 8-10 times. Is there any solution for this?
Help will be appreciated.
You have to use .wav (44100 Hz, 16-bit, stereo) or .caf formats to play multiple sound effects at once.
You could use
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"backgroundmusic.aac"];
to have the 2-minute clip loop while your playEffect calls run separately.
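For example, a minimal sketch of that split (the file names "backgroundmusic.aac" and "tap.wav" are assumptions; preloadEffect is optional but avoids a hiccup on the first play):

#import "SimpleAudioEngine.h"

- (void)setupAudio
{
    // Long clip goes through the background-music channel and loops on its own.
    [[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"backgroundmusic.aac" loop:YES];
    // Short effect is a .wav/.caf so repeated taps can overlap freely.
    [[SimpleAudioEngine sharedEngine] preloadEffect:@"tap.wav"];
}

- (void)onSpriteTouched
{
    // Each call plays a fresh instance and does not interrupt the background music.
    [[SimpleAudioEngine sharedEngine] playEffect:@"tap.wav"];
}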
I have a dice-rolling game with 6 dice, each given a random force in a random direction inside a box. When the dice collide with each other and with the box walls, a sound needs to be produced.
Currently I add a sound to each die and trigger it when the die collides, but the result sounds weird when all of them play at the same time.
Is there a better way to produce a realistic sound when all 6 dice collide with each other and with the walls of the box?
The flange-like effect you hear happens when two identical sounds are played with a very small delay between them, causing their waveforms to alternately reinforce and cancel each other.
To avoid this effect you have several options:
Simply avoid playing the same sfx twice within a delay too short for the user to perceive (you are probably playing each dice-hit sfx twice right now).
Use different samples and play them randomly (if you can't record new samples, try modifying the ones you have by changing their pitch by, say, 3%-10% to get enough distinct samples).
If the second option doesn't satisfy your needs (because of the increase in project size), you can use third-party plugins such as Master Audio to create several custom-pitched sounds out of one single sound at run-time.
You can also change the pitch in code (at run-time) and make sure two close hits never play with the same (or a very close) pitch, as in the sketch below.
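For illustration, here is a rough sketch of that last idea using OpenAL (which appears elsewhere on this page); the pitch range, minimum spacing, and function name are assumptions, and in Unity you would set AudioSource.pitch in the same spirit:

#include <OpenAL/al.h>
#include <math.h>
#include <stdlib.h>

static float lastPitch = 1.0f;

// Play a dice-hit source with a randomized pitch, never reusing a pitch
// too close to the previous one so near-simultaneous hits sound distinct.
void playDiceHit(ALuint source)
{
    float pitch;
    do {
        pitch = 0.9f + 0.2f * (rand() / (float)RAND_MAX);   // roughly +/-10%
    } while (fabsf(pitch - lastPitch) < 0.03f);              // keep at least ~3% apart
    lastPitch = pitch;

    alSourcef(source, AL_PITCH, pitch);
    alSourcePlay(source);
}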
It's actually pretty difficult to produce a realistic collision sound for a multiple-object collision.
If you use the same AudioClip for each dice-to-dice or dice-to-box collision event and trigger it on every collision, the end result will simply sound like an echoed version of that AudioClip with various delays.
If you choose to use a variety of collision AudioClips as a pool to pick from, you might produce OK results as long as you can guarantee that no two collision sounds with the same AudioClip are playing during any given time period.
The best solution is probably to obtain several recordings of the real scenario (dice rolling and colliding in a box) and randomly play one when you simulate the collision in game. As long as the duration of the AudioClip matches the simulation, it will be relatively hard to spot that it's faked.
Right now I have a loop that iterates over an array of file player audio units and tells each one what position in its audio file to start playing from (this works). In the same loop I have the following code to tell the units when to start playing (-1 makes them play on the next render cycle). The problem is that they do not start at the same time, because the first track starts playing before I have had a chance to tell the third track to play. What I want to say is: "track 1, you play in exactly 5 cycles; track 2, you play in exactly 4 cycles; track 3, you play in exactly 3 cycles..." so that they all start at the same time. Is this the right approach? If so, what value do you set for startTime.mSampleTime? I have not found any documentation that tells me how to do this. Thanks.
// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
AudioUnitSetProperty(fileUnitArray[mycount], kAudioUnitProperty_ScheduleStartTimeStamp, kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
I was not able to track down any information on setting mSampleTime to a value other than -1 (i.e. start on the next cycle), but I was able to work around the problem. Instead of keeping the AUGraph running, resetting the file player audio units with AudioUnitReset, and then using the code above to restart the file players, I now store each file player's current play-head position, stop the AUGraph completely, reinitialize the AUGraph with the stored play-head position instead of telling it to start at position zero, and then restart the AUGraph.
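For what it's worth, a rough sketch of that rescheduling step (the saved frame position, AudioFileID, and total frame count are assumed to be tracked elsewhere; priming and the start-time code above still follow it):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// After stopping and reinitializing the AUGraph, reschedule a file player
// from the saved play-head position instead of frame 0.
void rescheduleFromSavedPosition(AudioUnit fileUnit, AudioFileID audioFile,
                                 SInt64 savedFrame, UInt32 totalFrames)
{
    ScheduledAudioFileRegion region;
    memset(&region, 0, sizeof(region));
    region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile    = audioFile;
    region.mLoopCount    = 0;
    region.mStartFrame   = savedFrame;                      // resume where we left off
    region.mFramesToPlay = totalFrames - (UInt32)savedFrame;

    AudioUnitSetProperty(fileUnit, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));
    // ...then prime the unit (kAudioUnitProperty_ScheduledFilePrime) and set the
    // start timestamp as in the snippet above.
}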
What you need to do is schedule the playback of the audio files from the render thread.
You load the files on a different thread, but you tell the files to play on the next render cycle from the render thread. This way all of your files will start at the same time.
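One way to sketch that (assuming the fileUnitArray and count from the question, and a graph that is already running): register a render-notify callback on the output unit and, in its pre-render phase, schedule every file player in a single pass so they all pick up the same "next cycle" start time.

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

extern AudioUnit fileUnitArray[];   // assumed, as in the question
extern int fileCount;
static volatile Boolean needsSchedule = true;

static OSStatus scheduleAllFilePlayers(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
{
    if (needsSchedule && (*ioActionFlags & kAudioUnitRenderAction_PreRender)) {
        needsSchedule = false;

        AudioTimeStamp startTime;
        memset(&startTime, 0, sizeof(startTime));
        startTime.mFlags = kAudioTimeStampSampleTimeValid;
        startTime.mSampleTime = -1;   // same "next render cycle" for every unit

        for (int i = 0; i < fileCount; i++) {
            AudioUnitSetProperty(fileUnitArray[i],
                                 kAudioUnitProperty_ScheduleStartTimeStamp,
                                 kAudioUnitScope_Global, 0,
                                 &startTime, sizeof(startTime));
        }
    }
    return noErr;
}

// Registered once on the output unit of the running graph:
// AudioUnitAddRenderNotify(outputUnit, scheduleAllFilePlayers, NULL);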
I'm using OpenAL in my app to play sounds based on *.caf audio files.
There's a tutorial which describes how to generate white noise in OpenAL:
amplitude - rand(2*amplitude)
But they're creating a buffer with only 1000 samples and then just looping that buffer with
alSourcei(source, AL_LOOPING, AL_TRUE);
The problem with this approach: Looping white noise just doesn't work like this because of DC offset. There will be a noticeable wobble in the sound. I know because I tried looping dozens of white noise regions generated in different applications and all of them had the same problem. Even after trying to crossfade and making sure the regions are cut to zero crossings.
Since (from my understanding) OpenAL is more low-level than Audio Units or Audio Queues, there must be a way to generate white noise on the fly in a continuous manner such that no looping is required.
Maybe someone can point out some helpful resources on that topic.
The solution with the least change might just be to create a much longer OpenAL noise buffer (several seconds), so that the repeat rate is too low to hear easily. Any waveform hidden in a 44 Hz repeat (1000 samples at a 44.1 kHz sample rate) is well within normal human hearing range.
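For example, a hypothetical 5-second mono 16-bit noise buffer at 44.1 kHz (the buffer/source setup and the tutorial's amplitude value are assumed to exist already):

#include <OpenAL/al.h>
#include <stdlib.h>

#define SAMPLE_RATE 44100
#define SECONDS     5
#define NUM_SAMPLES (SAMPLE_RATE * SECONDS)

// Fill a several-second buffer so the loop point repeats only every 5 s
// instead of 44 times per second.
void fillNoiseBuffer(ALuint buffer, short amplitude)
{
    static short samples[NUM_SAMPLES];
    for (int i = 0; i < NUM_SAMPLES; i++) {
        // Uniform white noise in [-amplitude, +amplitude], per the tutorial's formula.
        samples[i] = (short)(amplitude - rand() % (2 * amplitude + 1));
    }
    alBufferData(buffer, AL_FORMAT_MONO16, samples, sizeof(samples), SAMPLE_RATE);
}
// The source can still loop with alSourcei(source, AL_LOOPING, AL_TRUE);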
I am making a rhythm game. I need to play a sound with a different tempo — in other words, e.g., I call [AVAudioPlayer play] 8 times in 2 seconds.
Check out the enableRate and rate properties on the AVAudioPlayer class. After you create the audio player, but before you play, set
audioPlayer.enableRate = YES;
then after you play, set rate to a number above or below 1.0 to speed up or slow down the track. For music, less than 0.8 or more than 1.2 starts to sound bad, but for a few BPM up or down it will easily do the trick.
Note that play sets the rate to 1 and stop sets the rate to 0, so be sure to set the desired rate after playing.
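A minimal sketch (the file name and rate value are assumptions):

#import <AVFoundation/AVFoundation.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"beat" withExtension:@"wav"];
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];

audioPlayer.enableRate = YES;   // set before playback so rate changes take effect
[audioPlayer prepareToPlay];
[audioPlayer play];
audioPlayer.rate = 1.1f;        // ~10% faster; set after play, per the note above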
I've used Pitch Shifting using the Fourier Transform – Source Code
http://www.dspdimension.com/download/
I'm trying to create a Guitar Hero-type game, and I'm working on a horizontal metronome where a dot crossing a certain vertical line means a note should be played.
The metronome starts when the note is 5 seconds from being played, then the dot moves across and finally hits the vertical line. Are there any algorithms for making the dot move at the correct speed so that it hits the line in exactly 5 seconds?
Also, the image movement is very choppy. Is there any way to smooth the movement of the image across the view?
I would appreciate any feedback, thanks.
High-school physics - velocity = distance / time
Also, you probably don't really want "5 seconds" as your fixed preparation time for every song. 1 or 2 bars (at the song's tempo) would be better.
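As an illustration, a time-based update along those lines (UIKit with a CADisplayLink; startX, targetX, leadTime, noteStartTime, and dotView are hypothetical names) — driving the position from elapsed time rather than a fixed per-frame increment both guarantees the dot arrives exactly on time and smooths out choppiness from uneven frame intervals:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Set up once: a display link calls tick: every frame.
// CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
// [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

- (void)tick:(CADisplayLink *)link
{
    CFTimeInterval now = CACurrentMediaTime();
    // progress runs 0 -> 1 over the lead time (5 s, or 1-2 bars at the song's tempo)
    CGFloat progress = (now - self.noteStartTime) / self.leadTime;
    progress = MIN(MAX(progress, 0.0), 1.0);

    // velocity = distance / time, expressed as linear interpolation over elapsed time
    CGFloat x = self.startX + (self.targetX - self.startX) * progress;
    self.dotView.center = CGPointMake(x, self.dotView.center.y);
}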