Only recording motion using Gaussian mixture models - MATLAB

I am using this example on Gaussian mixture models.
I have a video displaying moving cars, but it's on a street that isn't very busy. A few cars go past every now and again, but the vast majority of the time there isn't any motion at all. It gets pretty tedious watching nothing move, so I would like to cut that time out. Is it possible to remove the still frames from the video, leaving only the motion frames? I guess it would essentially trim the video.

The example you give uses a foreground detector. Still frames should not have any foreground pixels detected, so you can choose to skip them when building a demo video of your results.
You can build your new video by applying a rule of the form: if N frames in a row contain no foreground, do not write those frames to the output video.
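To make that rule concrete, here is a minimal sketch of the selection logic. It is illustrative only: the function and its inputs are hypothetical, it is written in Swift for consistency with the code later in this document, and in the linked example the per-frame foreground information would come from vision.ForegroundDetector.

/// Returns the indices of frames to write to the output video: any run of
/// `minStillRun` or more consecutive frames with no foreground pixels is dropped.
func framesToKeep(frameHasForeground: [Bool], minStillRun: Int) -> [Int] {
    var keep: [Int] = []
    var stillRun: [Int] = []                        // indices of the current run of still frames
    for (index, hasMotion) in frameHasForeground.enumerated() {
        if hasMotion {
            if stillRun.count < minStillRun {
                keep.append(contentsOf: stillRun)   // short pauses are kept for continuity
            }
            stillRun.removeAll()
            keep.append(index)
        } else {
            stillRun.append(index)
        }
    }
    if stillRun.count < minStillRun {
        keep.append(contentsOf: stillRun)           // flush a short trailing pause
    }
    return keep
}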
This is just an idea...


iPhone TrueDepth front camera inaccurate face tracking - skewed transformation

I am using an app that was developed with the ARKit framework. More specifically, I am interested in the 3D facial mesh and in the face's orientation and position with respect to the phone's front camera.
Having said that, I record videos of subjects performing in front of the front camera. During these recordings, I have noticed that some videos resulted in inaccurate transformations, with the face being placed behind the camera and the rotation being skewed (not an orthogonal basis).
I do not have a deep understanding of how the TrueDepth camera combines all its sensors to track and reconstruct the 3D facial structure, so I do not know what could potentially cause this issue. Although I have experimented with different setups (e.g. different subjects, with and without a mirror, screen on and off), I still have not been able to identify the source of the inaccurate transformations. Could it be the camera angle interfering with the mirror?
Below I have attached two recordings of myself that resulted in incorrect (top) and correct (bottom) estimated transformations.
Do you have any idea of what might be the problem? Thank you in advance.

How to render/export frames offline in Metal?

I'm new to Metal. I was able to render a simple scene, a quad rotating on the screen, but I want to export a one-minute video of frames without having to do it in real time.
So think of the user just opening the app and tapping an 'export' button, and the CPU/GPU going full speed to output a one-minute video of the quad rotating, without previewing it.
I know how to convert frames to a video using AVFoundation, but not how to turn my 3D scene into frames without rendering it in real time.
Can someone point me to where I should look?
Thank you so much!
I adapted my answer here and the Apple Metal game template to create this sample, which demonstrates how to record a video file directly from a sequence of frames rendered by Metal.
Since all rendering in Metal draws to a texture, it's not too hard to adapt normal Metal code so that it's suitable for rendering offline into a movie file. To recap the core recording process (a code sketch of these steps follows the list):
Create an AVAssetWriter that targets your URL of choice
Create an AVAssetWriterInput of type .video so you can write video frames
Wrap an AVAssetWriterInputPixelBufferAdaptor around the input so you can append CVPixelBuffers as frames to the video
After you start recording, copy the pixels of each rendered frame's texture into a pixel buffer obtained from the adaptor's pixel buffer pool.
When you're done, mark the input as finished and finish writing to the asset writer.
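If it helps, here is a minimal sketch of those steps, assuming a .bgra8Unorm render texture that the CPU can read (e.g. storage mode .shared, or synchronized after a blit). The FrameRecorder name and the shortcuts around error handling are mine, not the linked sample's:

import AVFoundation
import Metal

/// Wraps an AVAssetWriter and appends rendered Metal textures as video frames.
final class FrameRecorder {
    private let assetWriter: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor

    init(outputURL: URL, size: CGSize) throws {
        assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: size.width,
            AVVideoHeightKey: size.height
        ])
        // The adaptor vends CVPixelBuffers from a pool matching the video format.
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: writerInput,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
                kCVPixelBufferWidthKey as String: size.width,
                kCVPixelBufferHeightKey as String: size.height
            ])
        assetWriter.add(writerInput)
        assetWriter.startWriting()
        assetWriter.startSession(atSourceTime: .zero)
    }

    /// Copies the texture's pixels into a pooled pixel buffer and appends it at `time` seconds.
    /// Production code should also wait for writerInput.isReadyForMoreMediaData.
    func writeFrame(forTexture texture: MTLTexture, time: TimeInterval) {
        guard let pool = adaptor.pixelBufferPool else { return }
        var maybeBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &maybeBuffer)
        guard let pixelBuffer = maybeBuffer else { return }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        texture.getBytes(CVPixelBufferGetBaseAddress(pixelBuffer)!,
                         bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        adaptor.append(pixelBuffer,
                       withPresentationTime: CMTime(seconds: time, preferredTimescale: 600))
    }

    /// Marks the input finished and closes the file once all frames are appended.
    func endRecording(completion: @escaping () -> Void) {
        writerInput.markAsFinished()
        assetWriter.finishWriting(completionHandler: completion)
    }
}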
As for driving the recording, since you aren't getting delegate callbacks from an MTKView or CADisplayLink, you need to do it yourself. The basic pattern looks like this:
for t in stride(from: 0, through: duration, by: frameDelta) {
    draw(in: renderBuffer, depthTexture: depthBuffer, time: t) { (texture) in
        recorder.writeFrame(forTexture: texture, time: t)
    }
}
If your rendering and recording code is asynchronous and thread-safe, you can throw this on a background queue to keep your interface responsive. You could also throw in a progress callback to update your UI if your rendering takes a long time.
Note that since you're not running in real-time, you'll need to ensure that any animation takes into account the current frame time (or the timestep between frames) so things run at the proper rate when played back. In my sample, I do this by just having the rotation of the cube depend directly on the frame's presentation time.
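As a small illustration of that last point, here is a hedged sketch in which the model transform is derived purely from the frame's presentation time (the function name and rotation rate are illustrative, not from the sample):

import simd

// The pose depends only on the frame's presentation time, not on wall-clock time,
// so offline rendering at any speed still plays back at the correct rate.
func modelMatrix(atTime t: Double) -> float4x4 {
    let rotationRate: Float = .pi / 2                   // quarter turn per second of video
    let rotation = simd_quatf(angle: rotationRate * Float(t),
                              axis: SIMD3<Float>(0, 1, 0))
    return float4x4(rotation)                           // feed this into your uniforms
}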

MATLAB - Run webcam parallel to processing

Hello and thank you in advance.
I am working on a MATLAB algorithm, using the computer vision toolbox, to detect objects from a live camera feed, displaying frames with bounding boxes on a deployable video player.
Due to limitations of my hardware, the detection will be slower than the maximum FPS delivered by the camera.
Now, I'd like to display the webcam feed at maximum speed without waiting for the detection to finish, so that I get a fluid output video with detections inserted whenever they become available.
Is there a way to do this?
My first approach was to use the parfeval function to run the detection in parallel, but I failed for lack of knowing how to hand the frame to the detector and insert the resulting bounding boxes into the frame "whenever they are finished".
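For what it's worth, the general shape of that pattern looks like the sketch below. It is hypothetical and written in Swift for consistency with the other code in this document; in MATLAB, parfeval would play the role of the background queue, with fetchNext (or a polling check) retrieving finished detections.

import CoreGraphics
import CoreVideo
import Foundation

/// The display loop runs at full camera rate and overlays whatever detections are
/// currently available, while detection runs asynchronously on a background queue.
final class AsyncDetector {
    private let queue = DispatchQueue(label: "detection")
    private let lock = NSLock()
    private var latestBoxes: [CGRect] = []
    private var busy = false

    /// Called every frame; starts a new detection only if the previous one finished.
    func submit(frame: CVPixelBuffer, detect: @escaping (CVPixelBuffer) -> [CGRect]) {
        lock.lock(); defer { lock.unlock() }
        guard !busy else { return }            // drop frames while the detector is behind
        busy = true
        queue.async {
            let boxes = detect(frame)          // slow work happens off the display loop
            self.lock.lock()
            self.latestBoxes = boxes           // the display loop draws these next frame
            self.busy = false
            self.lock.unlock()
        }
    }

    /// The display loop reads the most recent results without waiting.
    var currentBoxes: [CGRect] {
        lock.lock(); defer { lock.unlock() }
        return latestBoxes
    }
}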

How to "render" a Box2D scene on iPhone

I'm currently using box2d with cocos2d on iPhone. I have quite a complex scene set up, and I want the end user to be able to record it as video as part of the app. I have implemented a recorder using the AVAssetWriter etc. and have managed to get it recording frames grabbed from OpenGL pixel data.
However, this video recording seems to a) slow down the app a bit, but more importantly b) only record a few frames per second at best.
This led me to the idea of rendering a Box2D scene, manually firing ticks and grabbing an image every tick. However, dt could be an issue here.
Just wondering if anyone has already done this, or if anyone has any better ideas?
A good solution, I guess, would be to use a screen-recording tool like ScreenFlow or similar...
I think your Box2D idea is a good one... however, you would want to use a fixed timestep. If you use the measured dt, the steps in the physics simulation will be too big, and Box2D will be unstable and jittery.
http://gafferongames.com/game-physics/fix-your-timestep/
The frame rate will take a hit, but you'll get every frame. I don't think you'll be able to record every frame and still maintain a steady frame rate - that seems to be asking a lot of the hardware.
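A minimal sketch of that fixed-timestep driver, with the physics step and the frame capture left as closures since the details are stack-specific (both parameter names are hypothetical; on the cocos2d stack, stepWorld would wrap b2World::Step):

import Foundation

/// Drives an offline recording at a fixed timestep: every simulated tick produces
/// exactly one captured frame, regardless of how long rendering actually takes.
func recordOffline(duration: TimeInterval,
                   fps: Double,
                   stepWorld: (Float) -> Void,
                   captureFrame: (TimeInterval) -> Void) {
    let fixedDt = 1.0 / fps                  // a constant step keeps Box2D stable
    var t: TimeInterval = 0
    while t <= duration {
        stepWorld(Float(fixedDt))            // advance physics by exactly one tick
        captureFrame(t)                      // render and append the frame at time t
        t += fixedDt
    }
}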

Stuttering animation in iPhone OpenGL ES although fps is high

I am building a 2D OpenGL ES application for iPad. It displays a background texture and numerous textures on top of it which are always in motion.
Every frame, their locations are recalculated based on the time delta and speed, and the whole thing renders at 60 fps successfully. Still, as the movement speed of the sprites rises, things look stuttery.
Any ideas? Are there inherent problems with what I'm doing? Are there known design patterns for smooth animation?
Try computing the time delta as an average of the last N frames, because some frames may take more time than others. Do you use the time delta for animations? It is very important to use it! Also, try to load all resources at load time instead of when you first use them; lazy loading can also slow down some frames.
If you take a look at the time deltas, you'll probably find they're not very consistent frame-to-frame. This probably isn't because the frames are taking different amounts of time to render, it's just an artifact of the batched, pipelined nature of the GLES driver.
Try using some kind of moving average of the time deltas instead: http://en.wikipedia.org/wiki/Moving_average
Note that even though the time deltas you're observing in your application aren't very consistent, the fact that you're getting 60fps implies the frames are probably getting displayed at fixed intervals (I'm assuming you're getting one frame per display v-sync). What this means is that the delay between your application making GL calls and the resulting frame appearing on the display is changing. It's this change that you're trying to hide with a moving average.
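A minimal sketch of that smoothing, assuming you feed it the raw measured delta every frame (the class name and window size are mine):

/// Simple moving average over the last `capacity` frame deltas, used to hide
/// the per-frame jitter introduced by the batched, pipelined GLES driver.
final class DeltaSmoother {
    private var samples: [Double] = []
    private let capacity: Int

    init(capacity: Int = 30) {
        self.capacity = capacity
    }

    /// Records this frame's raw delta and returns the smoothed value to animate with.
    func smoothed(rawDelta: Double) -> Double {
        samples.append(rawDelta)
        if samples.count > capacity {
            samples.removeFirst()            // keep only the most recent window
        }
        return samples.reduce(0, +) / Double(samples.count)
    }
}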