The goal is to detect whether an object (there can be multiple) is stationary in a ROI for a period of time (application: detecting vehicles blocking the zebra lane). So it means observing each blob with respect to time t.
Input = Video file
So, let's say the pedestrian crossing lane is the ROI. Background subtraction happens inside the ROI only, then each blob (vehicle) will be observed separately for time t to see if it has been motionless there.
What I'm thinking is getting the position of each blob at frame 1 and at frame n (the time threshold) and checking whether the position is the same. Since there can be multiple blobs, a loop is involved to process them one by one: get a blob's position at frame 1 and at frame n, compare the two, and if they match the blob has been motionless for time t and is therefore "blocking". Then move on to the next blob.
My logic, written in Java:
// assuming "blobs" is an ArrayList containing all the blobs in the image
static int violatorCount = 0;

for (int i = 0; i < blobs.size(); i++) { // iterate over each blob to process them separately
    int initialPosition = blobs.get(i).getPosition();
    int finalPosition = initialPosition;
    boolean present = true;
    for (int j = 0; j <= timeThreshold; j++) {
        if (blobs.get(i) == null) { // blob no longer exists at frame j
            present = false;
            break;
        }
        finalPosition = blobs.get(i).getPosition();
    }
    if (present && initialPosition == finalPosition) {
        violatorCount++; // motionless for time t, therefore "blocking"
    }
    // output count on top-right part of window
}
Can you guys share the logic on how to implement this goal/idea in either Matlab or OpenCV?
Optical flow is an option, thanks to PSchn. Any other options I can consider?
Sounds like optical flow. You could use the OpenCV implementation: pass your points to cv::calcOpticalFlowPyrLK along with the next image (see here). Then you could check the distance between the two points and decide what to do.
I don't know if it works, just an idea.
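To make the distance check concrete, here is a minimal sketch in Python (the function name, the 2 px threshold, and the plain-tuple point format are illustrative assumptions; the positions could come from cv::calcOpticalFlowPyrLK or from blob centroids sampled at frame 1 and frame n):

```python
import math

def find_violators(start_positions, end_positions, max_shift=2.0):
    """Return indices of blobs whose position barely changed over the interval.

    start_positions / end_positions: lists of (x, y) tuples, index-aligned per blob.
    max_shift: movement (in pixels) below which a blob counts as stationary.
    """
    violators = []
    for i, ((x0, y0), (x1, y1)) in enumerate(zip(start_positions, end_positions)):
        # Euclidean distance between the first and last sampled positions.
        if math.hypot(x1 - x0, y1 - y0) <= max_shift:
            violators.append(i)
    return violators
```

A blob that drifted only a pixel between the two sampled frames is reported as blocking; one that moved 30 px is not.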
I would like to fill this auditorium seating area with chairs (in the editor) and have them all face the same focal point (the stage). I will then be randomly filling the chairs with different people (during runtime). After each run the chairs should stay the same, but the people should be cleared so that during the next run the crowd looks different.
The seating area does not currently have a collider attached to it, and neither do the chairs or people.
I found this code which has taken care of rotating the chairs so they target the same focal point. But I'm still curious if there are any better methods to do this.
//C# Example (LookAtPoint.cs)
using UnityEngine;

[ExecuteInEditMode]
public class LookAtPoint : MonoBehaviour
{
    public Vector3 lookAtPoint = Vector3.zero;

    void Update()
    {
        transform.LookAt(lookAtPoint);
    }
}
Additional Screenshots
You can write an editor script to automatically place them evenly. In this script I don't handle world vs. local/model space; remember to do that when you need to.
Generate parallel rays that come from +y to -y in a grid. The patch size of this grid depends on how big your chair and the mesh (curved space) are. To get a proper patch count, take the bounding box of a chair (A) and of the curved space mesh (B), then divide them (B/A) and use the result as the number of patches.
Mesh chairMR; // mesh of the chair
Mesh audiMR;  // mesh of the auditorium
var patchSizeX = chairMR.bounds.size.x;
var patchSizeZ = chairMR.bounds.size.z;
var countX = (int)(audiMR.bounds.size.x / chairMR.bounds.size.x);
var countZ = (int)(audiMR.bounds.size.z / chairMR.bounds.size.z);
So the number of rays you need to generate is about countX*countZ. Patch size is (patchSizeX, patchSizeZ).
Then, origin points of the rays can be determined:
// Generate parallel rays that come from +y to -y.
List<Ray> rays = new List<Ray>(countX * countZ);
for (var i = 0; i < countX; ++i)
{
    // Add some tolerance so the placed chairs don't intersect
    // each other when rotated towards the stage.
    var x = audiMR.bounds.min.x + i * patchSizeX + tolerance;
    for (var j = 0; j < countZ; ++j)
    {
        var z = audiMR.bounds.min.z + j * patchSizeZ + tolerance;
        rays.Add(new Ray(new Vector3(x, 10000, z), Vector3.down));
        // You could also call `Physics.Raycast` right here instead.
    }
}
Get positions to place the chairs:
attach a MeshCollider to your mesh temporarily
for each ray, call Physics.Raycast (you can place obstacles on spots that should not get a chair; put those obstacles on a special layer and filter them out)
take the hit point, create a chair at it, and rotate the chair towards the stage
Reuse these hit points to place your people at runtime.
Convert each of them into a model/local space point, then save them into JSON or an asset via serialization for later use at runtime, when you place people randomly.
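The serialization round-trip might look like this (a sketch in Python for brevity; in Unity you would use JsonUtility or a ScriptableObject asset instead, and the field names below are assumptions):

```python
import json

def spawn_points_to_json(points):
    # points: list of (x, y, z) local/model-space tuples from the raycast hits
    return json.dumps([{"x": x, "y": y, "z": z} for (x, y, z) in points])

def spawn_points_from_json(text):
    # Recover the same tuples at runtime.
    return [(p["x"], p["y"], p["z"]) for p in json.loads(text)]
```

At runtime, load the list once and pick random entries to decide which chairs get a person.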
I have a set of mask images that I need to use every time I recognise a previously-known scene on my camera. All the mask images are in IplImage format. There will be instances where, for example, the camera has panned to a slightly different but nearby location. This means that if I do template matching somewhere in the middle of the current scene, I will be able to recognise the scene with some amount of shift of the template in this scene. All I need to do is use those shifts to adjust the mask image ROIs so that they can be overlaid appropriately based on the template matching. I know that there are functions such as:
cvSetImageROI(IplImage* img, CvRect roi);
cvResetImageROI(IplImage* img);
which I can use to crop/uncrop my image. However, it didn't work for me quite the way I expected. I would really appreciate it if someone could suggest an alternative, point out what I am doing wrong, or even what I haven't thought of!
I must also point out that I need to keep the image size the same at all times. The only thing that will be different is the actual area of interest in the image. I can probably use zero/one padding to cover the unused areas.
I believe a solution without making too many copies of the original image would be:
// Make a new IplImage of the same size
IplImage* img_src_cpy = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvZero(img_src_cpy); // zero-pad the area outside the copied region

// Copy the ROI to the top-left corner without changing the original's ROI
for (int rows = roi.y; rows < roi.y + roi.height; rows++) {
    for (int cols = roi.x; cols < roi.x + roi.width; cols++) {
        img_src_cpy->imageData[(rows - roi.y) * img_src_cpy->widthStep + (cols - roi.x)] =
            img_src->imageData[rows * img_src->widthStep + cols];
    }
}

// Now copy everything to the original image OR simply return the new image if calling from a function
cvCopy(img_src_cpy, img_src); // OR return img_src_cpy;
I tried the code out myself and it is also fast enough for me (it executes in about 1 ms for a 332 x 332 greyscale image).
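The same crop-to-top-left-with-zero-padding idea can be sketched in Python on a plain row-major 2D list (a simplification of the C code above; there is no widthStep alignment here):

```python
def shift_roi(img, roi):
    """Copy the roi = (x, y, w, h) region of img into the top-left corner
    of a same-sized, zero-padded output image."""
    height, width = len(img), len(img[0])
    x0, y0, w, h = roi
    out = [[0] * width for _ in range(height)]  # zero padding everywhere
    for r in range(y0, y0 + h):
        for c in range(x0, x0 + w):
            out[r - y0][c - x0] = img[r][c]
    return out
```

The output image keeps the original dimensions, as the question requires; only the area of interest moves.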
I have some 3D models that I render in OpenGL in a 3D space, and I'm experiencing some headaches in moving the 'character' (that is the camera) with rotations and translation inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine a user input or some data from a GPS+compass device), and each event is either a rotation OR a translation.
I've written this method to manage these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees {
    [super startDrawingFrame];
    if (degrees != 0)
    {
        glRotatef(degrees, 0, 0, 1);
    }
    if (translatedLat != 0)
    {
        glTranslatef(translatedLat, -translatedLong, 0);
    }
    [self redrawView];
}
Then in redrawView I'm actually drawing the scene and my models. It is something like:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
    MD2Object *mdobj = [models objectAtIndex:i];
    glPushMatrix();
    double *deltas = calloc(2, sizeof(double)); // calloc takes (count, size)
    deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
    deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
    glTranslatef(deltas[0], -deltas[1], 0);
    free(deltas);
    [mdobj setupForRenderGL];
    [mdobj renderGL];
    [mdobj cleanupAfterRenderGL];
    glPopMatrix();
}
[super drawView];
The problem appears when translation and rotation events are called one after the other: for example, I rotate incrementally for some iterations (still around the origin), then I translate, and finally rotate again, but the last rotation does not occur around the current (translated) position; it occurs around the old origin. I'm well aware that this happens when the order of transformations is inverted, but I believed that after drawing, the new center of the world was given by the translated system.
What am I missing? How can I fix this? (any reference to OpenGL will be appreciated too)
I would recommend not doing cumulative transformations in the event handler, but internally storing the current values for your transformation and then only transforming once per redraw. I don't know if this is the behaviour that you want, though.
Pseudocode:
someEvent(lat, long, deg)
{
    currentLat += lat;
    currentLong += long;
    currentDeg += deg;
}

redraw()
{
    glClear();
    glRotatef(currentDeg, 0, 0, 1);
    glTranslatef(currentLat, -currentLong, 0);
    ... // draw stuff
}
It sounds like you have a couple of things that are happening here:
The first is that you need to be aware that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about the new origin, which is T⁻¹(0) (the origin transformed by the inverse of your translation).
Second, you're making things quite a bit harder than you really need to. What you might want to consider instead is to use gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it properly, keep track of where your camera is located; call that vector p, and keep a vector n (for normal; it indicates the direction you're looking) and u (your up vector). It will make things easier for more advanced features if n and u are orthonormal vectors (i.e. they are orthogonal to each other and have unit length). If you do this, you can compute r = n x u (your 'right' vector), which will be a normal vector orthogonal to the other two. You then 'look at' p+n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate them (I've come to really appreciate the quaternion approach for a variety of reasons).
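The first point (rotations always happen about the origin) can be demonstrated with plain 2D math, independent of OpenGL; swapping the order of a translation and a rotation changes the result:

```python
import math

def rotate(p, degrees):
    # Rotate point p about the origin.
    a = math.radians(degrees)
    x, y = p
    return (round(x * math.cos(a) - y * math.sin(a), 9),
            round(x * math.sin(a) + y * math.cos(a), 9))

def translate(p, t):
    return (p[0] + t[0], p[1] + t[1])

p = (1.0, 0.0)
t = (2.0, 0.0)
# Rotating after translating sweeps the point around the world origin:
a = rotate(translate(p, t), 90)   # (3, 0) rotates to (0, 3)
# Translating after rotating turns the point "in place", then moves it:
b = translate(rotate(p, 90), t)   # (0, 1) translates to (2, 1)
```

This is exactly why the last rotation in the question orbits the old origin instead of the camera's translated position.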
I'm working off of some of Apple's sample code for mixing audio (http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html) and I'd like to display an animated spectrogram (think of the iTunes spectrogram in the top center that replaces the song title with moving bars). It would need to somehow get data from the audio stream live, since the user will be mixing several loops together. I can't seem to find any tutorials online about anything to do with this.
I know I am really late to this question, but I just found some great resources that solve it.
Solution:
Instantiate an audio unit to record samples from the microphone of the iOS device.
Perform FFT computations with the vDSP functions in Apple’s Accelerate framework.
Draw your results to the screen using a UIImage
Computing the FFT or spectrogram of an audio signal is a fundamental audio signal processing task. As an iOS developer, whether you want to simply find the pitch of a sound, create a nice visualisation, or do some front-end processing, it's something you've likely thought about if you are at all interested in audio. Apple does provide a sample application illustrating this task (aurioTouch). The audio processing part of that app is obscured, however, imho, by extensive use of OpenGL.
The goal of this project was to abstract as much of the audio DSP out of the sample app as possible and to render a visualisation using only UIKit. The resulting application running in the iOS 6.1 simulator is shown to the left. There are three components: a simple 'exit' button at the top, a gain slider at the bottom (allowing the user to adjust the scaling between the magnitude spectrum and the brightness of the colour displayed), and a UIImageView displaying power spectrum data in the middle that refreshes every three seconds. Note that the frequencies run from low to high beginning at the top of the image; the top of the display is actually DC while the bottom is the Nyquist rate. The image shows the results of processing some speech.
This particular Spectrogram App records audio samples at a rate of 11,025 Hz in frames that are 256 points long. That’s about 0.0232 seconds per frame. The frames are windowed using a 256 point Hanning window and overlap by 1/2 of a frame.
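The windowing and framing described above can be sketched in plain Python (a mirror of the idea, not of the app's vDSP code; the symmetric Hann formula is an assumption):

```python
import math

FRAME_LEN = 256
HOP = FRAME_LEN // 2  # frames overlap by half

# 256-point Hanning window.
hanning = [0.5 * (1.0 - math.cos(2.0 * math.pi * n / (FRAME_LEN - 1)))
           for n in range(FRAME_LEN)]

def frame_signal(samples):
    """Split samples into half-overlapping, Hann-windowed frames."""
    frames = []
    for start in range(0, len(samples) - FRAME_LEN + 1, HOP):
        frames.append([samples[start + n] * hanning[n] for n in range(FRAME_LEN)])
    return frames
```

Three seconds at 11,025 Hz is 33,075 samples, which gives (33075 - 256) // 128 + 1 = 257 half-overlapping frames; the app's loop processes 250 of them.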
Let’s examine some of the relevant parts of the code that may cause confusion. If you want to try to build this project yourself you can find the source files in the archive below.
First of all, look at the content of the PerformThru method. This is a callback for an audio unit; it is where we read the audio samples from a buffer into one of the arrays we have declared.
SInt8 *data_ptr = (SInt8 *)(ioData->mBuffers[0].mData);
for (i = 0; i < inNumberFrames; i++) {
    framea[readlas1] = data_ptr[2];
    readlas1 += 1;
    if (readlas1 >= 33075) {
        readlas1 = 0;
        dispatch_async(dispatch_get_main_queue(), ^{
            [THIS printframemethod];
        });
    }
    data_ptr += 4;
}
Note that framea is a static array of length 33075. The variable readlas1 keeps track of how many samples have been read. When the counter hits 33075 (3 seconds at this sampling frequency) a call to another method printframemethod is triggered and the process restarts.
The spectrogram is calculated in printframemethod.
for (int b = 0; b < 33075; b++) {
    originalReal[b] = ((float)framea[b]) * (1.0); //+ (1.0 * hanningwindow[b]));
}
for (int mm = 0; mm < 250; mm++) {
    for (int b = 0; b < 256; b++) {
        tempReal[b] = ((float)framea[b + (128 * mm)]) * (0.0 + 1.0 * hanningwindow[b]);
    }
    vDSP_ctoz((COMPLEX *) tempReal, 2, &A, 1, nOver2);
    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
    scale = (float) 1. / 128.;
    vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
    vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
    for (int b = 0; b < nOver2; b++) {
        B.realp[b] = (32.0 * sqrtf((A.realp[b] * A.realp[b]) + (A.imagp[b] * A.imagp[b])));
    }
    for (int k = 0; k < 127; k++) {
        Bspecgram[mm][k] = gainSlider.value * logf(B.realp[k]);
        if (Bspecgram[mm][k] < 0) {
            Bspecgram[mm][k] = 0.0;
        }
    }
}
Note that in this method we first cast the signed integer samples to floats and store them in the array originalReal. Then the FFT of each frame is computed by calling the vDSP functions. The two-dimensional array Bspecgram contains the actual magnitude values of the short-time Fourier transform. Look at the code to see how these magnitude values are converted to RGB pixel data.
Things to note:
To get this to build, just start a new single-view project, replace the delegate and view controller, and add the aurio_helper files. You need to link the Accelerate, AudioToolbox, UIKit, Foundation, and CoreGraphics frameworks. Also, you need PublicUtility; on my system it is located at /Developer/Extras/CoreAudio/PublicUtility. Wherever you find it, add that directory to your header search paths.
Get the code:
The delegate, view controller, and helper files are included in this zip archive.
A Spectrogram App for iOS in purple
Apple's aurioTouch example app (on developer.apple.com) has source code for drawing an animated frequency spectrum plot from recorded audio input. You could probably group FFT bins into frequency ranges for a coarser bar graph plot.
I am making a basic platform game for the iPhone and I have encountered a problem with my collision detection.
if (CGRectIntersectsRect(player.frame, platform.frame))
    pos2 = CGPointMake(0.0, +0.0);
else
    pos2 = CGPointMake(0.0, +10.0);
The collision detection is there to stop in-game gravity from applying when the player is on a platform. The problem is that the collision detection uses the rectangle around the player. Is there any way to do collision detection for the actual shape of an image (with transparency) rather than the rectangle around it?
You'll have to program this on your own, and beware that pixel-by-pixel collision is probably too expensive for the iPhone. My recommendation is to write a Collidable protocol (called an interface in most other programming languages), give it a collidedWith:(Collidable *)c method, and then implement it for any object that you want to allow collision for. Then you can write case-by-case collision logic. Similarly, you can make a big superclass that has all the information you'd need for collision (in your case either an X, Y, width, and height, or an X, Y, and a pixel data array) and a collidesWith method. Either way you can write a bunch of different collision methods; if you're only doing pixel collision for a few things, it won't be much of a performance hit. Typically, though, it's better to do bounding-box collision or some other collision based on geometry, as it is significantly faster.
The folks over at metanetsoftware made some great tutorials on collision techniques, among them axis-separation collision and grid-based collision, the latter of which sounds like it would be more viable for your game. If you want to stick with brute-force collision detection, however (checking every object against every other object), then making a bounding box that is simply smaller than the image is typically the proper way to go. This is how many successful platformers did it, including Super Mario Brothers. You might also consider weighted bounding boxes; that is, one bounding box for one type of object and a different-sized one for others. In Mario, for example, you have a larger box with which to hit coins than enemies.
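The shrunken bounding-box test described above is just four comparisons; a sketch (the `shrink` parameter is the illustrative part, insetting the first box on every side to get the smaller-than-sprite box):

```python
def aabb_overlap(x1, y1, w1, h1, x2, y2, w2, h2, shrink=0):
    """Axis-aligned bounding-box overlap test; `shrink` insets the
    first box, giving the weighted/smaller box discussed above."""
    return (x1 + shrink < x2 + w2 and
            x2 < x1 + w1 - shrink and
            y1 + shrink < y2 + h2 and
            y2 < y1 + h1 - shrink)
```

With a positive `shrink`, two sprites whose images barely graze each other no longer count as colliding, which is usually what a platformer wants.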
Now, even though I've warned you to do otherwise, I'll oblige you and put in how to do pixel-based collision. You're going to want to access the pixel data of your CGImage, then iterate through all the pixels to see if this image shares a location with any other image. Here's some code for it.
for (int i = 0; i < [objects count]; i++)
{
    MyObject *obj1 = [objects objectAtIndex:i];
    // Compare every object against every other object.
    for (int j = i + 1; j < [objects count]; j++)
    {
        MyObject *obj2 = [objects objectAtIndex:j];
        // Store whether or not we've collided.
        BOOL collided = NO;
        // First, do bounding box collision. We don't want to bother checking
        // pixels unless we are within each other's bounds.
        if (obj1.x + obj1.imageWidth >= obj2.x &&
            obj2.x + obj2.imageWidth >= obj1.x &&
            obj1.y + obj1.imageHeight >= obj2.y &&
            obj2.y + obj2.imageHeight >= obj1.y)
        {
            // We want to iterate only along the object with the smallest image.
            // This way, the collision checking will take the least time possible.
            MyObject *check = (obj1.imageWidth * obj1.imageHeight < obj2.imageWidth * obj2.imageHeight) ? obj1 : obj2;
            // Go through the pixel data of the two objects.
            for (int x = check.x; x < check.x + check.imageWidth && !collided; x++)
            {
                for (int y = check.y; y < check.y + check.imageHeight && !collided; y++)
                {
                    if ([obj1 pixelIsOpaqueAtX:x andY:y] && [obj2 pixelIsOpaqueAtX:x andY:y])
                    {
                        collided = YES;
                    }
                }
            }
        }
    }
}
I made it so pixelIsOpaque takes a global coordinate rather than a local one, so when you implement that method you have to be careful to subtract the object's x and y back out, or you'll be checking outside the bounds of your image.
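That subtraction might look like this (a hypothetical pixelIsOpaque sketched in Python with a row-major alpha array; the attribute names are assumptions):

```python
def pixel_is_opaque(obj, gx, gy):
    """obj carries .x/.y (world position) and .alpha, a row-major 2D list
    of alpha values; gx/gy are global coordinates."""
    lx, ly = gx - obj.x, gy - obj.y  # convert global to local image coordinates
    if lx < 0 or ly < 0 or ly >= len(obj.alpha) or lx >= len(obj.alpha[0]):
        return False  # off the image entirely
    return obj.alpha[ly][lx] > 0
```

The bounds guard is the part the answer warns about: without it, a global coordinate inside the overlap region but outside the smaller image would index out of range.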