Varying intensity in the frames captured by a camera (uEye) - matlab

I have been using MATLAB to capture images from a uEye camera at regular
intervals and use them for processing. The following is the small piece of
code that I am using to achieve that:
h = actxcontrol('UEYECAM.uEyeCamCtrl.1','position',[250 100 640 480]); % create the uEye ActiveX control
d = h.InitCamera(1);            % initialize the camera
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg'); % capture and save a frame
pause(60);                      % wait before the next capture
Below are the images captured by the camera. There was no change in the
lighting conditions outside, but you can notice the difference in the intensity
levels between the captured images.
Is there any reason for this?

Solved thanks to Zaphod
Allow some time for the camera to adjust its exposure. I did it by moving the pause statement to just after the InitCamera() command, to delay the image capture and give the camera enough time to adjust itself.
h = actxcontrol('UEYECAM.uEyeCamCtrl.1','position',[250 100 640 480]);
d = h.InitCamera(1);            % initialize the camera
pause(60);                      % give the auto-exposure time to settle before capturing
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg'); % capture and save a frame

Related

Matlab Kinect Depth Imaging

I'm working with the Kinect camera and trying to display real-life depth imaging using the ptCloud method, combining the RGB and depth sensors. However, just using the initial setup, my image is disfigured and missing pertinent information. Is there any way to improve this so that it captures more data? I have also attached an image of what I mean. Any help would be great, thank you!
colorDevice = imaq.VideoDevice('kinect',1);
depthDevice = imaq.VideoDevice('kinect',2);
step(colorDevice); % warm up the devices
step(depthDevice);
colorImage = step(colorDevice);
depthImage = step(depthDevice);
gridstep = 0.1; % (not used below)
ptCloud = pcfromkinect(depthDevice,depthImage,colorImage);
player = pcplayer(ptCloud.XLimits,ptCloud.YLimits,ptCloud.ZLimits,...
    'VerticalAxis','y','VerticalAxisDir','down');
xlabel(player.Axes,'X (m)');
ylabel(player.Axes,'Y (m)');
zlabel(player.Axes,'Z (m)');
for i = 1:1000
    colorImage = step(colorDevice);
    depthImage = step(depthDevice);
    ptCloud = pcfromkinect(depthDevice,depthImage,colorImage);
    ptCloudOut = pcdenoise(ptCloud);
    view(player,ptCloudOut);
end
release(colorDevice);
release(depthDevice);
From the looks of the image, you are trying to capture a cabinet with a TV screen in the middle. In cases like these, the TV screen absorbs the IR emitted by the sensor, or reflects it at oblique angles or through multiple reflections, so the Kinect is unable to capture the depth data there. Furthermore, when you want to display the RGB data on top of the point cloud, it tries to align the two and rejects any depth data that is not aligned with the RGB image pixels.
So, in order to improve your depth data acquisition, take care that there are no reflective surfaces like screens, mirrors etc. in the scene. Also, try displaying the depth data without the RGB overlay, which will hopefully improve the point cloud shown.
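A minimal sketch of that second suggestion, reusing the question's depthDevice and building the point cloud from depth alone so that no depth pixels are rejected by the RGB alignment:
depthDevice = imaq.VideoDevice('kinect',2);
step(depthDevice);                              % warm up the sensor
depthImage = step(depthDevice);
ptCloud = pcfromkinect(depthDevice,depthImage); % depth only, no RGB overlay
pcshow(ptCloud,'VerticalAxis','y','VerticalAxisDir','down');
release(depthDevice);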

Raspicam library's frame rate and image

I use the raspicam library from here. I can change the frame rate in the src/private/private_impl.cpp file. After setting the frame rate to 60, I do receive 60 fps, but the object size in the image changes. I have attached two images: one captured at 30 fps and the other at 60 fps.
Why do I get a bigger object size at 60 fps, and how can I get the normal object size (the same as at 30 fps)?
The first image is using 30 fps and the second image is using 60 fps.
According to the description here, the higher frame rate modes require cropping on the sensor for the 8-megapixel camera. At the default 30 fps the GPU code will have chosen the 1640x922 mode, which gives the full field of view (FOV). Exceed 40 fps and it will switch to the cropped 1280x720 mode. In either case the GPU will then resize the frame to the size you requested. Resize a smaller FOV to the same output size and any object in the scene will use more pixels. You can use the 5-megapixel camera if no cropping is required.
So I should speak of field of view, zoom, or cropping, rather than saying the object size is bigger.
It is also possible to keep the image the same size at higher frame rates by explicitly choosing a camera mode that does "binning" (which combines multiple sensor pixels into one image pixel) for both lower- and higher-rate capture. Binning is helpful because it effectively increases the sensitivity of your camera.
See https://www.raspberrypi.org/blog/new-camera-mode-released/ for details when the "new" higher frame rates were announced.
Also, the page in the other answer has a nice picture with the various frame sizes, and a good description of the available camera modes. In particular, modes 4 and higher use binning, starting with 2x2 binning (so 4 sensor pixels contribute to 1 image pixel) and ending with 4x4 (so 16 sensor pixels contribute to 1 image pixel).
Use the sensor_mode parameter to the PiCamera constructor to choose a mode.

Beginner at kinect - matlab : Kinect does not start

Hello, I am trying to set up a Kinect v1 in the MATLAB environment and I cannot get joint coordinates out of the Kinect, even though I have a preview of the captured depth. The preview says "waiting to start" even when I have actually started the video.
There are two different features which you don't want to mix up:
There is the preview function. By calling preview(vid) a preview window is opened and the camera runs. The preview is there to help you set up your camera, point it to the right spot etc. When you're finished with that, close the preview manually or via closepreview(vid).
When you are ready for image acquisition, call start(vid). With img = getdata(vid,1) you can then read 1 frame from the camera and save it to img. When you're finished with acquisition, call stop(vid) to stop the camera.
The camera itself starts capturing images as soon as start is called, so even if you wait a few seconds after calling start, the first image will be the one captured right then. There exist several properties to control the acquisition; it is best to have a look at all properties of vid.
You can manually specify a trigger to take an image by first setting triggerconfig(vid,'manual'), then starting the camera and finally calling trigger(vid) to take an image.
The number of frames that is acquired after calling start or trigger is specified by the FramesPerTrigger parameter of vid. To continuously acquire images, set it to inf. It is possible to use getdata to read any number of frames, e.g. getdata(vid,5);. Note that this only works if 5 frames are actually available on the camera. You get the number of available frames from the FramesAvailable property of vid.
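A minimal sketch of that manual-trigger workflow, assuming the Kinect depth stream from the question:
vid = videoinput('kinect',2);
triggerconfig(vid,'manual');   % nothing is logged until trigger() is called
set(vid,'FramesPerTrigger',1); % one frame per trigger
start(vid);                    % the camera runs, but no frames are logged yet
trigger(vid);                  % capture a frame exactly when you want it
img = getdata(vid,1);          % read the triggered frame
stop(vid);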
You can put the image acquisition inside a for loop to continuously acquire images:
n = 1000;
vid = videoinput('kinect',2);
set(vid,'FramesPerTrigger',n);
start(vid);
for k = 1:n
    img = getdata(vid,1);
    % do magic stuff with img
end
stop(vid);

Display live processed webcam stream using matlab

I am trying to employ a chroma key algorithm on a live video. I need to take a live webcam input, process it in real time, and display it. I already have the chroma key algorithm working on images.
How do I process the webcam input and display it immediately? I have tried using snapshot() and passing the image to the chroma key algorithm, but it is too slow even if I increase the rate of snapshots. I want a smooth output.
[Also, if there is a better alternative to MATLAB, please let me know.]
Instead of using getsnapshot(), which connects to the camera and disconnects again on every single frame (hence the slow frame rates), try to use videoinput and then preview the connection: http://www.mathworks.de/de/help/imaq/preview.html
This example is made for you:
http://www.mathworks.de/products/imaq/code-examples.html?file=/products/demos/shipping/imaq/demoimaq_LiveHistogram.html
As shown there, you can even define a callback handler function which is called on every newly received frame.
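A rough sketch of such a callback; the device name and the processing step are placeholders for your setup:
vid = videoinput('winvideo', 1);
vid.FramesPerTrigger = Inf;          % acquire until stop(vid) is called
vid.FramesAcquiredFcnCount = 1;      % fire the callback after every frame
vid.FramesAcquiredFcn = @onNewFrame; % handler receives (obj, event)
start(vid);
% ... later: stop(vid); delete(vid);

function onNewFrame(obj, ~)
    img = getdata(obj, 1); % read the frame that triggered the callback
    imshow(img);           % replace with your chroma key processing
    drawnow limitrate;
end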
You must set TriggerType to manual, or else getsnapshot() will create (and destroy) a connection to the camera every time you need a frame. By setting it to manual, you can start the camera once, get the frames, and stop the camera when you're done.
Here is an example:
vidobj = videoinput('winvideo', 1, 'RGB24_640x480');
triggerconfig(vidobj, 'manual');
start(vidobj);
while true % or any other stop condition
    img = getsnapshot(vidobj);
    % Process the frame here (e.g. apply the chroma key) ...
    imshow(img);
    drawnow;
end

Matlab: How to distribute workload?

I am currently trying to record footage from a camera and display it with MATLAB in a graphics window using the "image" command. The problem I'm facing is the slow redraw of the image, and this of course affects my whole script. Here's some quick pseudo code to explain my program:
figure
while(true)
    Frame = AcquireImageFromCamera(); % MEX, returns current frame
    image(Frame);
end
AcquireImageFromCamera() is a MEX function coming from an API for the camera.
Without displaying the acquired images, the script easily grabs all frames coming from the camera (it records with a limited frame rate). But as soon as I display every image for a real-time video stream, it slows down terribly and frames are lost because they are not captured.
Does anyone have an idea how I could split the acquisition and the display of the images, in order to use multiple CPU cores for example? Parallel computing is the first thing that pops into my mind, but the parallel toolbox works entirely differently from what I want here...
edit: I'm a student and in my faculty's MATLAB version all toolboxes are included :)
Running two threads or workers is going to be a bit tricky. Instead of that, can you simply update the screen less often? Something like this:
figure
count = 0;
while(true)
    Frame = AcquireImageFromCamera(); % MEX, returns current frame
    count = count + 1;
    if count == 5
        count = 0;
        image(Frame);
    end
end
Another thing to try is to call image() just once to set up the plot, then update pixels directly. This should be much faster than calling image() every frame. You do this by getting the image handle and changing the CData property.
h = image(I); % first frame only
set(h, 'CData', newPixels); % other frames update pixels like this
Note that updating pixels like this may then require a call to drawnow to show the change on screen.
I'm not sure how precise your pseudo code is, but creating the image object takes quite a bit of overhead. It is much faster to create it once and then just set the image data.
figure
himg = image(I); % create the image object once
while(true)
    Frame = AcquireImageFromCamera(); % MEX, returns current frame
    set(himg,'CData',Frame);
    drawnow; % also make sure the screen is actually updated
end
Matlab has a video player in the Computer Vision Toolbox, which would be faster than using image().
player = vision.VideoPlayer;
while(true)
    Frame = AcquireImageFromCamera(); % MEX, returns current frame
    step(player, Frame);
end