I have a challenging task
Task:
Just say something into your iPhone, or capture a friend saying something.
Then the application makes it sound even MORE ridiculous with your choice of over a dozen different voices:
voices of animals, guitar, drums, etc.
I suppose this task comes down to pitch manipulation.
The easiest way to change the pitch is to speed the audio up or slow it down (half speed = down one octave, 2x speed = up one octave). But there are algorithms that shift pitch while preserving speed:
http://www.dspdimension.com/admin/time-pitch-overview/
http://users.ecel.ufl.edu/~cdeng/pitch_shifting_algorithm.htm
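The speed-change approach above can be sketched in a few lines. This is a minimal, hypothetical Python example (not tied to any iOS API): it resamples a buffer with linear interpolation, so raising the pitch also shortens the sound, exactly the artifact the linked articles explain how to avoid.

```python
# Naive pitch shift by resampling: reading samples faster raises pitch,
# slower lowers it. Duration changes along with pitch (the trade-off the
# time/pitch articles above address with phase vocoders etc.).

def pitch_shift_resample(samples, semitones):
    """Resample with linear interpolation; 12 semitones = one octave."""
    speed = 2.0 ** (semitones / 12.0)  # 2x speed = +1 octave, 0.5x = -1 octave
    n_out = int(len(samples) / speed)
    out = []
    for i in range(n_out):
        pos = i * speed                 # fractional read position
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1.0 - frac) + b * frac)  # linear interpolation
    return out
```

Shifting up one octave halves the output length, which is why the "maintaining speed" algorithms in the links are needed for a voice-changer that should keep the original timing.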
Pitch shifting has already been discussed here:
Real-time Pitch Shifting on the iPhone
You will have to mix animal, guitar, and drum sounds into the output stream for the extra effects.
Related
Are there common algorithms for implementing MIDI pitch bend for single notes and multiple voices (e.g. chords)? I am implementing this in a somewhat intuitive way, but I would really like to know whether I am totally off track. For single notes, I currently send a pitch bend message to the channel just before the note-on message, and reset the pitch bend by sending the center value of 2^13 right after the note-off message, to keep the channel clean for the next note.
I am especially interested in how to deal with channels when implementing pitch bends.
Any help or hint to appropriate readings is highly appreciated.
PS: here is how I have implemented pitch bend for a single note (https://github.com/teymuri/cu/blob/main/mid.py#L61)
A pitch bend message affects all sounds on the channel. So you should send it when you want the pitch to change.
The sound might not stop immediately after a note-off message. You should not reset the pitch bend until you are sure that the sound has ended. (Or don't reset it at all; the pitch of silence does not matter.)
If you want to do microtonality, you pretty much have to use one channel per note.
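The points above (14-bit value centered at 2^13, one message per channel) can be made concrete with a small helper that builds the raw pitch bend bytes. This is a hypothetical sketch, not taken from the linked mid.py; the default +/-2 semitone bend range is the common convention, configurable on real synths via RPN 0.

```python
# Building a raw MIDI pitch bend message (status 0xE0 | channel, LSB, MSB).
# The 14-bit value is centered at 8192 (2**13); because it applies to the
# whole channel, microtonal chords need one channel per note, as noted above.

PITCH_BEND_CENTER = 8192  # 2 ** 13

def pitch_bend_bytes(semitones, channel=0, bend_range=2.0):
    """Return the 3-byte pitch bend message for a semitone offset."""
    value = PITCH_BEND_CENTER + int(round(semitones / bend_range * 8191))
    value = max(0, min(16383, value))   # clamp to 14 bits
    lsb = value & 0x7F                  # low 7 bits
    msb = (value >> 7) & 0x7F           # high 7 bits
    return bytes([0xE0 | (channel & 0x0F), lsb, msb])
```

For example, `pitch_bend_bytes(0)` yields the center/reset message, and `pitch_bend_bytes(0.5)` a quarter-tone up, assuming the +/-2 semitone range.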
How can I achieve a wind effect like this? (See link in comments.)
Not SKFieldNode; I'm looking to achieve the visual design.
How can I do it using PARTICLES?
I've tried playing with the SMOKE template in the PARTICLE EDITOR and changing the various parameters/numbers, but I couldn't achieve this effect.
I really need help, I've been trying to do this for weeks.
Thanks!
EDIT:
I'm looking to achieve 2 different effects in the video.
Adding 2 images to be more precise about what I'm looking for.
1) Straight/Wavy Wind - the straight/wavy wind (see the spot I drew in the video; it has very low opacity).
2) Swirly/Hurricane Wind - the swirly/hurricane kind of wind that spins around in a swirling motion.
I hope this edit will be the most helpful.
These are two different particle effects, and they are best done with tens of thousands of very small particles (or even millions) in a post-processing program like After Effects, using something like Trapcode Particular or similar. Render them out as low-resolution image-sequence animations, then composite them where and when you need them. You can get away with low resolution because they're wispy, additive, and faint. This is the most performant and most common way to do it: doing this with "real" particles on iOS will drain the battery and hog the CPU and GPU with the combination of physics and rendering required.
Simple question, hard answer:
I'd like to be able to detect whether the user is running or walking while holding the device. I know the iPhone accelerometer measures acceleration, so if the user moves at a constant speed there will be no signal.
Any help on that?
I actually used to work on that. What you can do is detect the frequency of the movement with the accelerometer and gyro. If you plot a chart, you will see a periodic behavior when you walk or run. Do some field testing and you will see how those frequencies differ between walking and running. It's pretty cool.
Try dynamic time warping (DTW).
First, you build a small "database" of motions that you would like to recognize.
Then, in your application you compare the current sensor readings with DTW to the ones in the database and pick the most similar one.
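The two steps above can be sketched in plain Python. This is a minimal DTW distance plus a nearest-template classifier; the template sequences here are made-up stand-ins for windows of real accelerometer magnitude readings.

```python
# Minimal dynamic time warping (DTW) distance between two 1-D sequences,
# e.g. windows of accelerometer magnitude, with absolute-difference cost
# and a full O(n*m) table.

def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three predecessor alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(window, templates):
    """templates: {label: sequence}; return the label of the nearest one."""
    return min(templates, key=lambda label: dtw_distance(window, templates[label]))
```

Because DTW warps the time axis, a slightly faster or slower stride still matches the right template, which is exactly why it suits walk/run detection better than a sample-by-sample comparison.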
I am working on a Labyrinth-style app for iPhone using Chipmunk and OpenAL. I have everything working except the ball-rolling sound. What I have tried is playing a small sound on each update of the ball's position, so that the overall effect sounds like the ball is rolling. Based on advice on this forum, I tried using the velocity of the ball to adjust the pitch of the sound. I have the following problems:
I can't hear the sound at all when I play it in a Chipmunk callback. I can hear it elsewhere.
Even if I got this working somehow, the sound I play has to be very, very short, as the ball doesn't take long to roll. There has to be an alternate way.
Can anybody please help? I would even pay for a simple sample application that does this, sound included.
I recommend cheating: record (or find somewhere) some longish looping sounds of the ball rolling at different speeds. Have one of them playing, based on the speed of the ball. As the ball's speed changes, cross-fade from one sample to another. My guess is that this will sound more realistic than just varying the pitch of a single sample.
Of course, it may be enough just to have one longish looping sample, and only vary the volume proportional to the ball's speed. I'll have to go track down my labyrinth game and check. :)
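The cross-fade idea above boils down to computing two gains from the ball's speed. Here is a hypothetical sketch using an equal-power (cos/sin) curve so the combined loudness stays roughly constant; the speed thresholds and the "slow"/"fast" sample pairing are assumptions, and in the actual app the gains would be applied to two OpenAL sources.

```python
# Equal-power crossfade between a "slow roll" and a "fast roll" loop,
# driven by the ball's current speed. cos/sin gains keep the summed
# power roughly constant through the transition.

import math

def crossfade_gains(speed, slow_speed=1.0, fast_speed=5.0):
    """Return (slow_gain, fast_gain) for the given ball speed."""
    t = (speed - slow_speed) / (fast_speed - slow_speed)
    t = max(0.0, min(1.0, t))   # clamp the mix position to [0, 1]
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)
```

At the slow end only the slow loop plays, at the fast end only the fast loop, and in between the squared gains always sum to 1, avoiding the volume dip a linear cross-fade would produce.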
In considering the design of marble-in-maze games where you tilt the table to get the ball to the end of the maze without going down one of the holes, I wonder whether anyone here has considered the modelling of the sound of the ball hitting the walls...
The ball doesn't always make the same sound.
This other question covers the rolling sound:
Sound of a rolling ball
But I am more interested in the bouncing sound - I am often struck by how unrealistic it is in most people's version of the game.
What are the factors to consider to work out how to produce a realistic sound?
How must the sample or raw data then be processed or generated?
There are some good links in the Sound Modeling section of this page from a course at Carnegie Mellon: http://www-2.cs.cmu.edu/~djames/pbmis/index.html. The instructor, Doug James, is now at Cornell and does similar research there (http://www.cs.cornell.edu/projects/Sound/).
I've never tried to implement any of these methods, but I suspect that they're overkill and/or too slow for a small game. However, you might be able to generate several samples offline and choose an appropriate one at runtime.
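The "generate several samples offline" suggestion can be illustrated with the simplest physically-motivated model from that literature: modal synthesis, i.e. an impact rendered as a sum of exponentially damped sinusoids. The mode frequencies, amplitudes, and decay rates below are invented for illustration; a real ball-on-wall hit would use modes measured or derived as in the linked course material, and hit strength could scale the amplitudes per impact.

```python
# Offline generation of a simple impact sound by modal synthesis:
# a sum of exponentially damped sinusoids, one per resonant mode.

import math

def impact_sound(modes, sample_rate=44100, duration=0.25):
    """modes: list of (freq_hz, amplitude, decay_per_sec) tuples."""
    n = int(sample_rate * duration)
    out = []
    for i in range(n):
        t = i / sample_rate
        # each mode rings at its own frequency and dies at its own rate
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, a, d in modes)
        out.append(s)
    return out
```

Varying the decay rates and the balance between low and high modes is what makes one hit sound "dull" and another "ringing", which is the variation the question notes is missing from most versions of the game.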
Hope that helps.