Are there common algorithms for implementing MIDI pitch bend for single notes and for multiple voices (e.g. chords)? I am implementing this in a fairly intuitive way, but I would really like to know whether I am totally off-track! For single notes I am currently sending a pitch bend message to the channel just before the note-on message, and resetting the pitch bend by sending the center value of 2^13 right after the note-off message, to keep the channel clean for the next note.
I am especially interested in how to handle channels when implementing pitch bends.
Any help or pointer to appropriate reading is highly appreciated.
PS: here is how I have implemented pitch bend for a single note (https://github.com/teymuri/cu/blob/main/mid.py#L61)
A pitch bend message affects all sounds on the channel. So you should send it when you want the pitch to change.
The sound might not stop immediately after a note-off message. You should not reset the pitch bend until you are sure that the sound has ended. (Or don't reset it at all; the pitch of silence does not matter.)
If you want to do microtonality, you pretty much have to use one channel per note.
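To make the one-channel-per-note idea concrete, here is a minimal sketch assuming the mido library (the question's mid.py builds its messages differently; the channel pool and helper name are made up). Each sounding note reserves its own channel, so its bend never disturbs the other notes of a chord:

```python
# Sketch only: assumes mido; the channel pool and helper are illustrative.
import time
import mido

out = mido.open_output()                              # default MIDI output port
free_channels = [ch for ch in range(16) if ch != 9]   # keep channel 9 for percussion

def play_bent_note(note, bend=0, duration=1.0, velocity=80):
    """bend uses mido's signed range -8192..8191 (0 = centre, i.e. raw 2**13)."""
    ch = free_channels.pop(0)                          # reserve a channel for this note
    out.send(mido.Message('pitchwheel', channel=ch, pitch=bend))  # bend before note-on
    out.send(mido.Message('note_on', channel=ch, note=note, velocity=velocity))
    time.sleep(duration)
    out.send(mido.Message('note_off', channel=ch, note=note, velocity=0))
    free_channels.append(ch)                           # release; no need to re-centre silence
```

For chords you would trigger the notes without the blocking sleep, but the channel bookkeeping stays the same; this per-note-channel scheme is essentially what MPE standardizes.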
I am trying to use a KLT tracker for human tracking in CCTV footage. The people are very close to the camera. I noticed that people sometimes change the orientation of their heads, and the frame rate is also somewhat low. I have read in Section 3.4 of the Rodriguez et al. paper that:
"This simple procedure (KLT tracking procedure) is extremely robust and can establish matches between head detections where the head HAS NOT BEEN continuously detected due to pose variation or partial occlusions due to other members of the crowd."
The paper can be found at this link: Rodriguez et al.
1). I understood that the KLT tracker is robust to pose variations and occlusions. Am I right?
Until now I have been trying to track a single person in the footage using MATLAB's KLT tracker, as in:
MATLAB KLT
However, the points were no longer found after just 3 frames.
2). Can someone explain why this is happening, or suggest a better solution? Would a particle/Kalman filter be better?
I do not recommend using a KLT tracker for close-range CCTV cameras, for the following reasons:
1. CCTV frame rate is typically low, so people change their appearance significantly between frames
2. Since the camera is close to the people, they also change their appearance over time due to perspective effects (e.g. face can be seen when person is far from camera, but as he/she gets closer, only the top of the head is seen).
3. Due to closeness, people also significantly change scale and aspect ratio, which is a challenge for some head detectors.
KLT only works well when the neighborhood of the pixel, including both foreground and background, remains similar. The above properties make this less likely for most pixels. I can only recommend KLT as an additional motion-based hint for tracking, as a vector field of part motions.
Most single person trackers do not adapt well to scale change. I suggest you start with some state of the art tracker, like Struck (C++ code by Sam Hare available here), and modify the search routine to work with scale change.
KLT by itself only works for short-term tracking. The problem is that you lose points because of tracking errors, 3D rotation, occlusion, or objects leaving the field of view. For long-term tracking you need some way of replenishing the points. In the multiple face tracking example the new points are acquired by periodically re-detecting the faces.
Your particular case sounds a little strange. You should not be losing all the points after just 3 frames. If this happens, then either the object is moving too fast or your frame rate is too low.
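To illustrate the replenishing idea, here is a rough OpenCV/Python sketch (my own illustration, not the MATLAB example's code): track corner points with pyramidal Lucas-Kanade and periodically reseed them from a fresh face/head detection. The file name, cascade, and constants are placeholders to tune:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture('cctv.avi')                    # placeholder input file
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
REDETECT_EVERY = 15                                   # frames between re-detections

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 7)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if pts is not None and len(pts) > 0:
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = new_pts[status.ravel() == 1]            # keep only successfully tracked points

    frame_idx += 1
    if frame_idx % REDETECT_EVERY == 0 or pts is None or len(pts) < 20:
        # Replenish: detect heads/faces again and reseed corners inside each box.
        fresh = []
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 4):
            corners = cv2.goodFeaturesToTrack(gray[y:y+h, x:x+w], 50, 0.01, 5)
            if corners is not None:
                fresh.append(corners + np.array([[x, y]], dtype=np.float32))
        if fresh:
            pts = np.vstack(fresh)

    prev_gray = gray
```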
I would like to detect when a user jumps and the intensity of that jump. I'm coming up short finding good resources for this behavior.
Is there any library which handles this?
How easy or difficult is it to get accurate data?
(i.e. the difference between a real jump and the user rapidly moving their phone downwards)
All you would need to do is read the accelerometer. To determine the difference between a jump and the user simply moving the phone, you would detect the sudden impact: sample the rate at which the accelerometer data changes, and if it changes rapidly past a threshold you define, then it is a jump; otherwise it is not. Check out CoreMotion.
Here is a tutorial that is outdated, but the general idea is the same.
Detecting a bump (sudden impact)
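A tiny sketch of that thresholding logic (plain Python for illustration; on iOS the magnitudes would come from CMMotionManager's device-motion updates). The threshold is a made-up starting value to tune against real recordings:

```python
def jump_intensity(magnitudes, dt=0.02, jerk_threshold=30.0):
    """magnitudes: acceleration magnitudes in g, sampled every dt seconds.
    jerk_threshold (g per second) is a guess -- tune it experimentally."""
    peak = 0.0
    for prev, cur in zip(magnitudes, magnitudes[1:]):
        if abs(cur - prev) / dt > jerk_threshold:   # sudden change past the threshold
            peak = max(peak, cur)
    return peak                                     # 0.0 => no jump; larger => harder jump
```

Telling a real jump from the phone simply being yanked downwards takes more than one threshold; one possible refinement is to look for the push-off spike followed by a near-free-fall reading (magnitude close to 0 g while airborne).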
I want to figure out whether a user is standing still, walking, or running, using the iPhone. I'm not trying to implement a pedometer. I just want to know roughly whether someone is moving briskly, slowly, or not at all. I don't need mph or anything like that.
I think the accelerometer may be able to do this for me, but I was wondering if someone knows of any tutorials or example code that might be able to point me in the right direction?
Thanks to all who reply.
The accelerometer won't do you any good here - it will only capture changes in velocity.
Just track the current location periodically and calculate the speed.
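The speed is just the distance between two periodic fixes divided by the elapsed time; here is a plain-Python sketch of that calculation (on the iPhone the fixes would come from Core Location, which can also report speed directly):

```python
import math

def speed_mps(lat1, lon1, lat2, lon2, seconds):
    """Great-circle (haversine) distance between two fixes over elapsed time."""
    r = 6371000.0                                   # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) / seconds
```

Bear in mind that GPS fixes jitter by several metres, so over short intervals the computed speed is noisy; averaging over a longer window helps.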
There are no hard thresholds for walking vs. running motion, so you will have to experiment a bit. The AccelerometerGraph sample code should get you started on how to get and interpret accelerometer data.
The accelerometer is good, but if the user has an iPhone 4 or iPad 2 you should use the gyroscope.
CMMotionManager and the Event Handling Guide - Motion Events
Apple's documentation is the best example you can get!
People have a different bounce in their step when walking versus running, which can be measured with the accelerometer. But it differs between individuals (what shoes they are wearing, what surface they are on, what part of the body the iPhone is attached to, etc.), and the motion can probably be imitated by shaking the iPhone just right while standing still.
Experiment by recording the two types of acceleration profiles, and then use some sort of pattern matching to pick the most likely profile candidate from the current recorded acceleration data.
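One possible way to do that pattern matching, sketched in Python/NumPy: reduce each window of acceleration magnitudes to a couple of features (variance and dominant step frequency) and pick the nearest pre-recorded profile. The feature choice here is my own simplification, not a prescription:

```python
import numpy as np

def features(window, rate_hz):
    """window: 1-D array of acceleration magnitudes covering a few seconds."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    step_hz = np.fft.rfftfreq(len(window), 1.0 / rate_hz)[spectrum.argmax()]
    return np.array([window.var(), step_hz])

def classify(window, templates, rate_hz=50):
    """templates: dict of recorded profiles, e.g. {'still': ..., 'walking': ..., 'running': ...}."""
    f = features(window, rate_hz)
    return min(templates, key=lambda k: np.linalg.norm(f - features(templates[k], rate_hz)))
```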
I have a challenging task
Task:
Just say something into your iPhone or capture a friend saying something.
Then the application makes it sound even MORE ridiculous with your choice of over a dozen different voices.
Voices of animals, guitar, drums, etc.
For this task I suppose we have to do pitch manipulation.
The easiest way to change the pitch would be to speed the audio up or slow it down (half speed = down one octave, 2x speed = up one octave); a naive resampling sketch follows the links below. But there are also algorithms that shift the pitch while maintaining the speed:
http://www.dspdimension.com/admin/time-pitch-overview/
http://users.ecel.ufl.edu/~cdeng/pitch_shifting_algorithm.htm
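Here is the naive NumPy sketch of the first approach (change the speed, and the pitch follows); it shortens or lengthens the clip, which is exactly what the linked time-stretching articles avoid:

```python
import numpy as np

def pitch_by_resampling(samples, semitones):
    """Resample a mono signal: +12 semitones reads it at 2x speed (one octave up)."""
    factor = 2.0 ** (semitones / 12.0)
    positions = np.arange(0, len(samples), factor)            # read faster or slower
    return np.interp(positions, np.arange(len(samples)), samples)
```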
Pitch shifting has already been discussed here:
Real-time Pitch Shifting on the iPhone
You will have to add animal, guitar, drum sounds to the output stream for extra effects.
In considering the design of marble-in-maze games, where you tilt the table to get the ball to the end of the maze without it going down one of the holes, I wonder whether anyone here has considered modelling the sound of the ball hitting the walls...
The ball doesn't always make the same sound.
This other question covers the rolling sound:
Sound of a rolling ball
But I am more interested in the bouncing sound - I am often struck by how unrealistic it is in most people's version of the game.
What are the factors to consider to work out how to produce a realistic sound?
How must the sample or raw data then be processed or generated?
There are some good links in the Sound Modeling section of this page from a course at Carnegie Mellon: http://www-2.cs.cmu.edu/~djames/pbmis/index.html. The instructor, Doug James, is now at Cornell and does similar research there (http://www.cs.cornell.edu/projects/Sound/).
I've never tried to implement any of these methods, but I suspect that they're overkill and/or too slow for a small game. However, you might be able to generate several samples offline and choose an appropriate one at runtime.
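As a tiny sketch of that approach (file names and velocity bands invented for illustration): bucket the pre-rendered bounce samples by impact speed and vary the pick so the ball does not always make the same sound:

```python
import random

BOUNCE_BANKS = [                      # (min_speed, max_speed) in m/s, candidate samples
    (0.0, 0.5, ['bounce_soft_1.wav', 'bounce_soft_2.wav']),
    (0.5, 1.5, ['bounce_mid_1.wav',  'bounce_mid_2.wav']),
    (1.5, 99.0, ['bounce_hard_1.wav', 'bounce_hard_2.wav']),
]

def pick_bounce_sample(impact_speed):
    for lo, hi, bank in BOUNCE_BANKS:
        if lo <= impact_speed < hi:
            return random.choice(bank)   # small random variation avoids obvious repetition
    return BOUNCE_BANKS[-1][2][0]        # clamp anything faster to the hardest bank
```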
Hope that helps.