Beginner at Kinect in MATLAB: Kinect does not start

Hello, I am trying to set up a Kinect v1 in the MATLAB environment and I cannot get joint coordinates out of the Kinect, even though I have a preview of the captured depth. The preview says "waiting to start" even though I have actually started the video.

There are two different features here that you don't want to mix up:
There is the preview function. Calling preview(vid) opens a preview window and runs the camera. The preview is there to help you set up your camera, point it at the right spot, and so on. When you're finished with that, close the preview manually or via closepreview(vid).
When you are ready for image acquisition, call start(vid). With img = getdata(vid,1) you can then read one frame from the camera and save it to img. When you're finished with acquisition, call stop(vid) to stop the camera.
The camera itself starts capturing images as soon as start is called, so even if you wait a few seconds after calling start, the first image returned will be the one captured right then. There are several properties that control the acquisition; it is best to have a look at all properties of vid.
You can manually specify a trigger to take an image by first setting triggerconfig(vid,'manual'), then starting the camera and finally calling trigger(vid) to take an image.
The number of frames acquired after calling start or trigger is specified by the FramesPerTrigger property of vid. To acquire images continuously, set it to Inf. It is possible to use getdata to read any number of frames, e.g. getdata(vid,5); note that this only works if 5 frames are actually available from the camera. You get the number of available frames from the FramesAvailable property of vid.
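For example, a manual-trigger acquisition of a few frames could look roughly like this. This is only a minimal sketch; the 'kinect' adaptor and device ID 2 (the depth stream) are taken from the question, so adjust them to your setup.
% Minimal sketch of the manual-trigger workflow described above.
vid = videoinput('kinect', 2);   % device 2 of the kinect adaptor is the depth stream
vid.FramesPerTrigger = 5;        % acquire 5 frames per trigger
triggerconfig(vid, 'manual');    % do not trigger automatically when start is called
start(vid);                      % connect to the device; nothing is logged yet
trigger(vid);                    % capture the 5 frames now
imgs = getdata(vid, 5);          % getdata blocks until all 5 frames are available
stop(vid);
delete(vid);                     % release the device when done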
You can put the image acquisition inside a for loop to continuously acquire images:
n = 1000;                        % number of frames to acquire
vid = videoinput('kinect', 2);   % device 2 of the kinect adaptor is the depth stream
set(vid, 'FramesPerTrigger', n); % log n frames after the (immediate) trigger
start(vid);                      % acquisition begins right away
for k = 1:n
    img = getdata(vid, 1);       % pull one frame from the buffer (blocks until available)
    % do magic stuff with img
end
stop(vid);
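As for the joint coordinates asked about in the question: with the Kinect (v1) support in the Image Acquisition Toolbox, the skeleton data is returned in the metadata output of getdata on the depth stream, after enabling skeleton tracking on the depth source. The sketch below assumes that support package is installed; the property and field names (TrackingMode, IsSkeletonTracked, JointWorldCoordinates) are taken from the toolbox's Kinect documentation, so verify them against your release.
% Sketch: read joint coordinates from the depth stream's metadata.
depthVid = videoinput('kinect', 2);
depthSrc = getselectedsource(depthVid);
depthSrc.TrackingMode = 'Skeleton';            % enable skeletal tracking
depthVid.FramesPerTrigger = 1;
start(depthVid);
[depthMap, ts, metaData] = getdata(depthVid);  % the third output carries the skeleton data
if any(metaData.IsSkeletonTracked)
    % 20 joints x 3 world coordinates (in meters) for each tracked skeleton
    joints = metaData.JointWorldCoordinates(:, :, metaData.IsSkeletonTracked);
end
stop(depthVid);
delete(depthVid);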

Related

How to get acquired frames at full speed? - Image Event Listener does not seem to execute after every event

My goal is to read out 1 pixel from the GIF camera in VIEW mode (live acquisition) and save it to a file every time the data is updated. The camera is ostensibly updating every 0.0001 seconds, because this is the minimum acquisition time Digital Micrograph lets me select in VIEW mode for this camera.
I can attach an Image Event Listener to the live image of the camera with the message map (messagemap = "data_changed:MyFunctiontoExecute"), and MyFunctiontoExecute runs successfully, giving me a file with numerous pixel values.
However, if I let this event listener run for a second, I only obtain close to 100 pixel values, when I was expecting closer to 10,000 (if the live image is being updated every 0.0001 seconds).
Is this because the live image is not updated as quickly as I think?
The event listener certainly is executed at each event.
However, the live display of a high-speed camera will almost certainly not update on each acquired frame. It will perform some sort of cumulative or sampled display. The exact answer depends on the system you are on and how it is configured.
It should be noted that super-high frame rates can usually only be achieved by dedicated firmware and optimized systems. It's unlikely that a "general software approach" - in particular one using interpreted, non-compiled code - will be able to provide the necessary speed. This type of approach to the problem might be doomed from the start.
(Instead, one will likely have to create a buffer and then set up the system to acquire data directly into that buffer at the highest possible frame rate, i.e. code against the camera acquisition directly.)

MATLAB: webcam video acquisition

The Logitech C910 webcam spec indicates image and video capture. Because image and video capture are listed separately, I am assuming that they are encoded and sent differently: effectively forming two different 'channels' to select from. If this understanding is incorrect, please respond with an explanation of the true nature.
This reference indicates a maximum frame rate of 15 fps because of Windows.
My search turned up webcam video acquisition that consists of a series of time-lapsed images 'stitched' together:
% Connect to the webcam.
cam = webcam;
% Open the video file.
vidWriter = VideoWriter('frames.avi');
open(vidWriter);
% Write image frames to the video file.
for index = 1:20
    % Acquire a frame for processing.
    img = snapshot(cam);
    % Write the frame to the video.
    writeVideo(vidWriter, img);
end
% Close the file and clear the camera.
close(vidWriter);
clear cam
MATLAB has successfully captured images with the C910.
Question
If possible within MATLAB, how does one configure the webcam's *video* frame rate and save the video stream to .avi or the like? (Not writing still images to a video file as depicted above.)
Perhaps someone with experience or sharper Google skills can provide an example of bridging the webcam's video (vs the image) stream into MATLAB. Any example that can be tested is greatly appreciated.
A good way to 'jumpstart' or accelerate newbie familiarity is with the image acquisition tool:
imaqtool
The tool appears to be a GUI wrapper that reduces the command-line syntax to a GUI. Note that the bottom-right panel shows the command-line equivalent of each GUI interaction.
Configurable video capture parameters include:
Resolution & ROI (Region of Interest)
Frame Rate
Video type (.avi, .mp4, etc.)
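Regarding the original question of setting the webcam's video frame rate and saving the stream to an .avi: one approach via the Image Acquisition Toolbox is to set the frame rate on the device's source object (where the device exposes it) and let the toolbox log frames directly to a VideoWriter file. The following is only a sketch; the 'winvideo' adaptor, device ID 1, the 'RGB24_640x480' format, and the FrameRate source property are assumptions that depend on your camera and should be checked with imaqhwinfo and propinfo.
% Sketch: stream webcam video straight to an AVI with the Image Acquisition Toolbox.
% Adaptor, device ID, format and the FrameRate property are device-dependent assumptions.
vid = videoinput('winvideo', 1, 'RGB24_640x480');
src = getselectedsource(vid);
% src.FrameRate = '30';            % only if the device exposes a FrameRate property
vid.FramesPerTrigger = Inf;        % keep acquiring until stop(vid)
vid.LoggingMode = 'disk';          % log frames directly to disk
vid.DiskLogger = VideoWriter('webcam_stream.avi', 'Motion JPEG AVI');
start(vid);
pause(10);                         % record for roughly 10 seconds
stop(vid);
% Wait until all buffered frames have been written to the file.
while vid.FramesAcquired ~= vid.DiskLoggerFrameCount
    pause(0.1);
end
delete(vid);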

Display live processed webcam stream using matlab

I am trying to employ a chroma key algorithm on a live video. I need to take a live webcam input, process it in real time and display it. I already have the chroma key algorithm working on images.
How do I process the webcam input and display it immediately? I have tried using snapshot() and passing the image to the chroma key algorithm, but it is too slow even if I increase the rate of snapshots. I want a smooth output.
[ Also, if there is a better alternative than Matlab please let me know. ]
Instead of using getsnapshot(), which connects to the camera and disconnects again on EVERY single frame (hence the slow frame rate), try using videoinput and then previewing the connection: http://www.mathworks.de/de/help/imaq/preview.html
This example is made for you:
http://www.mathworks.de/products/imaq/code-examples.html?file=/products/demos/shipping/imaq/demoimaq_LiveHistogram.html
As shown there, you can even define a callback handler function that is called on every newly received frame.
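For instance, such a per-frame callback could be hooked up through the FramesAcquiredFcn property of the videoinput object. This is a minimal sketch; onNewFrame is a hypothetical handler (put it in its own file or at the end of a script) and the chroma key step inside it is only indicated as a comment.
% Minimal sketch of a per-frame callback via FramesAcquiredFcn.
vid = videoinput('winvideo', 1, 'RGB24_640x480');
vid.FramesPerTrigger = Inf;
vid.FramesAcquiredFcnCount = 1;      % fire the callback after every single frame
vid.FramesAcquiredFcn = @onNewFrame;
start(vid);
% ... later, when finished: stop(vid); delete(vid);

function onNewFrame(vid, ~)
    img = getdata(vid, 1);           % pull the frame that triggered the callback
    % apply the chroma key to img here, then display or store the result
end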
You must set TriggerType to manual, or else getsnapshot() will create (and destroy) a connection to the camera every time you need a frame. By setting it to manual, you can start the camera once, get the frames, and stop the camera when you're done.
Here is an example:
vidobj = videoinput('winvideo', 1, 'RGB24_640x480');
triggerconfig(vidobj, 'manual');   % connect once instead of once per frame
start(vidobj);
while true   % or any stop condition
    img = getsnapshot(vidobj);
    % Process the frame here (e.g. apply the chroma key to img) ...
    imshow(img);
    drawnow;
end

Matlab: How to distribute workload?

I am currently trying to record footage from a camera and display it with MATLAB in a graphics window using the "image" command. The problem I'm facing is the slow redraw of the image, and this of course affects my whole script. Here's some quick pseudocode to explain my program:
figure
while(true)
Frame = AcquireImageFromCamera(); % Mex, returns current frame
image(Frame);
end
AcquireImageFromCamera() is a mex coming from an API for the camera.
Now, without displaying the acquired image, the script easily grabs all frames coming from the camera (it records with a limited frame rate). But as soon as I display every image for a real-time video stream, it slows down terribly and therefore frames are lost because they are not captured.
Does anyone have an idea how I could split the process of acquiring images and displaying them, in order to use multiple cores of the CPU for example? Parallel computing is the first thing that pops into my mind, but the Parallel Computing Toolbox works entirely differently from what I want here...
edit: I'm a student and in my faculty's matlab version all toolboxes are included :)
Running two threads or workers is going to be a bit tricky. Instead of that, can you simply update the screen less often? Something like this:
figure
count = 0;
while true
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    count = count + 1;
    if count == 5              % redraw only every 5th frame
        count = 0;
        image(Frame);
    end
end
Another thing to try is to call image() just once to set up the plot, then update pixels directly. This should be much faster than calling image() every frame. You do this by getting the image handle and changing the CData property.
h = image(I); % first frame only
set(h, 'CData', newPixels); % other frames update pixels like this
Note that updating pixels like this may then require a call to drawnow to show the change on screen.
I'm not sure how precise your pseudo code is, but creating the image object takes quite a bit of overhead. It is much faster to create it once and then just set the image data.
figure
Frame = AcquireImageFromCamera();     % grab a first frame to create the image object
himg = image(Frame);
while true
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    set(himg, 'CData', Frame);
    drawnow;  % also make sure the screen is actually updated
end
MATLAB has a video player in the Computer Vision Toolbox, which would be faster than using image().
player = vision.VideoPlayer;
while true
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    step(player, Frame);
end

Varying intensity in the frames captured by camera (uEye)

I have been using MATLAB to capture images from a uEye camera at regular intervals and use them for processing. The following is the small piece of code that I am using to achieve that:
% Create the ActiveX control for the uEye camera and initialize it.
h = actxcontrol('UEYECAM.uEyeCamCtrl.1', 'position', [250 100 640 480]);
d = h.InitCamera(1);
check = 1;
str_old = 'img000.jpeg';
% Save a frame, then wait a minute before the next capture.
m = h.SaveImage('img000.jpeg');
pause(60);
And the following are the images captured by the camera. There was no change in the lighting conditions outside, but you can notice the difference in the intensity levels of the images captured by the camera.
Is there any reason for this?
Solved thanks to Zaphod:
Allow some time for the camera to adjust its exposure. I did this by moving the pause statement to just after the InitCamera() command, to delay the image capture and give the camera enough time to adjust itself.
% Initialize the camera, then pause so it has time to adjust its exposure
% before the first image is captured.
h = actxcontrol('UEYECAM.uEyeCamCtrl.1', 'position', [250 100 640 480]);
d = h.InitCamera(1);
pause(60);
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg');