I have a video file that I need to visualize in MATLAB. From this file, I extract down-sampled thumbnails and merge them into a single image. This image is displayed using the imshow command and gives an overview of the whole video.
I would like to click on (or hover the mouse over) any thumbnail and automatically extract the full-sized version of that frame from the video and display it in a new figure.
What functions do I need to implement such functionality?
Roughly:
Connect to your video file using a VideoReader object.
Get the NumberOfFrames property of the VideoReader object.
Use the read method of the VideoReader object to read in frames.
Loop from 1 to NumberOfFrames, and read in each frame.
Having read in each frame, store it in the kth plane of an M-by-N-by-3-by-K array, where K is the number of frames. (the 3 is if your video is RGB - it would be 1 if grayscale).
Also, resize each frame to a thumbnail using the function imresize, and store it in the kth plane of an m-by-n-by-3-by-K array, where m < M and n < N.
Once stored in this form, display the results in a figure using the command montage, which will lay all the thumbnails out nicely for you in a grid.
Add a callback to the figure that fires when you click on the image. The callback should get the current mouse position (at the time of the click), determine which frame was clicked on, and create a new figure which displays the corresponding plane from the unresized array.
Does that sound possible?
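A rough, hedged sketch of those steps is below (the file name, the thumbnail size, and the grid layout are assumptions for illustration; the nested callback maps the click position on the montage back to a frame index):
function videoThumbnailBrowser(videoFile)
% Sketch of the steps above, e.g. videoThumbnailBrowser('myvideo.avi')
vr = VideoReader(videoFile);
frames = read(vr);                          % M-by-N-by-3-by-K (or M-by-N-by-1-by-K) array
K = size(frames, 4);
thumbH = 60; thumbW = 80;                   % assumed thumbnail size
thumbs = zeros(thumbH, thumbW, size(frames,3), K, 'like', frames);
for k = 1:K
    thumbs(:,:,:,k) = imresize(frames(:,:,:,k), [thumbH thumbW]);
end
nCols = ceil(sqrt(K));                      % montage grid layout
nRows = ceil(K / nCols);
figure;
hImg = montage(thumbs, 'Size', [nRows nCols]);
set(hImg, 'ButtonDownFcn', @showFullFrame);
    function showFullFrame(src, ~)
        % Convert the click position on the montage into a frame index
        pt  = get(get(src, 'Parent'), 'CurrentPoint');
        col = ceil(pt(1,1) / thumbW);
        row = ceil(pt(1,2) / thumbH);
        k   = (row - 1)*nCols + col;
        if k >= 1 && k <= K
            figure;
            imshow(frames(:,:,:,k));        % full-sized frame in a new figure
        end
    end
end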
I presently have a GUI that captures an AVI file. It works as follows:
(controller screenshot)
The image in the controller is the preview from the camera. When I push the Start Record button, the program grabs the image with the getsnapshot() function and writes it to a video file with the writeVideo() function.
I also get time information with the clock function after the getsnapshot() call. (I find that when I use [frame, metadata] = getsnapshot(obj), metadata is empty; I do not know why.)
I want to ask whether I can save the time information to the video file (e.g., an AVI file) in real time. I do not know how to do it. Does anyone have ideas?
I don't know of any way to add custom header fields to an AVI file to store your additional time information. However, you could add an extra row to the bottom of your image frame that contains the encoded time stamp information for that frame. This will depend on the format of your frame data, but here's an example to illustrate:
Let's say frame is your typical RGB (Truecolor) image, with three color planes and uint8 data values, like the built-in sample image 'peppers.png':
img = imread('peppers.png');
The clock function returns the current date and time as a date vector, which is a six-element double-precision vector. This requires 48 bytes of storage. We can cast the 1-by-6 double vector to a 1-by-48 uint8 vector using the typecast function, then add it to the beginning of an additional row of the red image plane (the rest of the row and the other planes are padded with zeros automatically):
img(end+1, 1:48, 1) = typecast(clock, 'uint8');
The extra row at the bottom is fairly inconspicuous.
And we can reconstitute the time stamp and original image like so:
t = typecast(img(end, 1:48, 1), 'double');
img = img(1:(end-1), :, :);
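As a hedged sketch of how this could fit into the capture loop from the question (the videoinput object vid and the VideoWriter object vw are assumed to already exist, as in the original GUI code):
frame = getsnapshot(vid);                          % RGB uint8 frame assumed
frame(end+1, 1:48, 1) = typecast(clock, 'uint8');  % append the time-stamp row
writeVideo(vw, frame);                             % every frame is now one row taller
Note that the embedded bytes only survive if the video is written losslessly (e.g. with an 'Uncompressed AVI' profile); lossy compression such as the default Motion JPEG would alter the pixel values and corrupt the time stamp.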
I need to crop a specific area of a video in MATLAB to be replayed and saved as that specific area. Currently I only know of a way to separate all the frames, crop them, and then put them back together as a video - is there an easier way or tool to crop a video in MATLAB or am I just going to have to rely on frame-by-frame cropping?
MATLAB is generally horrible for video processing. I would recommend using a generic video editor. If you have to use MATLAB, there are a couple of toolboxes on the File Exchange that will serve your purpose (for short videos in the most common formats; they also require the Image Processing Toolbox).
Description
With Movie Editor you can:
- Load movies (AVIs only)
- Cut movies
- Crop movies
- Split movies into separate color layers
- Rotate movies
- Save movies as avi or mpg (thanks to David Foti)
- Save independent frames as bmp, jpg, png, and tif
- You can always scroll through the movie using the slider and edit text underneath the image (maybe somebody can combine it with the 'Interactive MATLAB Movie Player' of Don Orofino).
Maybe you can add a function? The user interface is pretty self-explanatory, but questions are welcome. An example of a before- and after-movie is included in the zip file.
Following is code that I wrote a while ago to process video files frame by frame. Before executing this script, save the ROI1.m file (given below) on the path.
%frame by frame processing of video files
clear all;
close all;
clc;
mov = VideoReader('C:\Users\Syd_R\OneDrive\Desktop\entrap\holo_bright_10_MMStack_Pos0.ome.avi');
vidFrames = read(mov);               % H-by-W-by-C-by-nFrames array
nFrames = mov.NumberOfFrames;
A = vidFrames(:,:,:,1);              % first frame, used by ROI1 to draw the ROI
mycell_h = cell(1, nFrames);         % preallocate storage for the cropped frames
for fr = 1:nFrames
    set(0, 'DefaultFigureVisible', 'off')   % suppress figure pop-ups inside the loop
    X = vidFrames(:,:,:,fr);                % current frame
    if exist('position', 'var') == 0
        ROI1                                % lets the user draw the ROI; sets "position" and "I2"
    else
        imshow(X)
        I2 = imcrop(X, position);           % crop the current frame to the chosen ROI
    end
    mycell_h{fr} = I2;                      % store the cropped frame
end
close all;
set(0, 'DefaultFigureVisible', 'on')
% This file should be saved as ROI1.m on the same path; it is called while executing the main script
% ROI selection on the first frame
if exist('A', 'var') == 1
    figure, imshow(A);
    h = imrect(gca, [10 10 512 512]);
    position = wait(h);            % returns the rectangle coordinates in "position" when the user double-clicks it
    figure, imshow(A)
    I2 = imcrop(A, position);      % crop the first frame to the chosen ROI
    phROI2 = I2;
    figure(11);
    imshow(phROI2);
    imwrite(phROI2, 'roi', 'tiff') % save the ROI of the first frame as a TIFF
end
The cropped frames are saved in the cell array mycell_h. To view one of the cropped frames, for example:
imshow(mycell_h{1,1})
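If you also want to save the cropped frames back out as a new video rather than just viewing them, a minimal sketch with VideoWriter could look like this (the output file name 'cropped.avi' is an assumption; all cropped frames have the same size because the same ROI is used for every frame):
vw = VideoWriter('cropped.avi');   % hypothetical output file
open(vw);
for fr = 1:numel(mycell_h)
    writeVideo(vw, mycell_h{fr});  % write each cropped frame
end
close(vw);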
I currently use Matlab's imshow to output an image at every iteration of a diffusion filter process, i.e. multiple times per second.
Sometimes during filtering I want a closer look at specific image parts.
However, when using the ('Parent', handle) name-value pair for imshow, the magnification and position get reset.
Is there a way to update the underlying image while keeping the magnification and position intact?
You can update the CData of the image in the current axes with your new data matrix, which will keep all other settings the same. If this is in a loop, you probably need to call drawnow. E.g.:
x=randn(100);
figure;imagesc(x);
Now zoom / pan / do whatever manipulations you want.
f=gca;
x=randn(100);
f.Children.CData = x;
This method of updating the child data is recommended by MATLAB as more efficient than destroying the axes' child Image and recreating it every frame (I can't remember the source; it was in one of the help files).
Edit: Just remembered that this dot syntax won't work on older versions of MATLAB (pre-R2014b or so, before the new handle graphics system). In that case, use the get/set syntax:
set(get(gca,'Children'),'CData',x);
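Putting it together, a minimal sketch of the update-in-place pattern inside a filtering loop (img, nIter, and myDiffusionStep are placeholders for your data and filter; here the handle returned by imshow is kept instead of looking it up through gca, which also sidesteps the version issue):
hImg = imshow(img, []);              % create the image object once
for it = 1:nIter
    img = myDiffusionStep(img);      % hypothetical filter iteration
    set(hImg, 'CData', img);         % update the pixels; zoom/pan stay intact
    drawnow;                         % flush graphics so the update is visible
end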
I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8-bit version of the image.
The thing is: The image that Matlab and ImageJ shows me are different, using the same source file.
I assume that one of them is performing some sort of conversion in the grey levels of it when reading or before displaying. But which one of them?
And in that case, how can I calibrate them so that they display the same image?
The following image shows a comparison of the image read.
In the case of ImageJ, I just opened the application and then opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify my code so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the standard DICOM window/level formula):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g., after the modality LUT transformation) will have intensity values ranging from about -1024 to over +2000 Hounsfield units (HU). So an image processing filter should work within this data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image in which the anatomy of interest has the proper contrast on the RGB display (mapping the image data of interest to 256 or fewer shades of gray). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically one view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
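If the stored pixel values need to be converted to Hounsfield units first (the modality LUT transformation mentioned above), a hedged sketch would apply RescaleSlope/RescaleIntercept before the window/level step; whether those tags are present depends on the file:
info = dicominfo('I1400001');
raw  = double(dicomread(info));
if isfield(info, 'RescaleSlope') && isfield(info, 'RescaleIntercept')
    hu = raw*double(info.RescaleSlope) + double(info.RescaleIntercept);  % modality LUT
else
    hu = raw;                        % no modality LUT tags; use stored values as-is
end
c = double(info.WindowCenter(1));    % the tags may hold several window values
w = double(info.WindowWidth(1));
imgScaled = uint8(min(max(255.*((hu-(c-0.5))/(w-1)+0.5), 0), 255));
imshow(imgScaled);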
In one GUI (viewer) I have an image that shows a 2D slice through a 3D image cube. A toolbar button opens a second GUI (z-profile) that plots a 2D graph showing the z-profile of one pixel in the image cube. What I want is to be able to update this plot dynamically when a different pixel is clicked in the original viewer GUI. I've looked into linkdata, but I'm not sure if that can be used to link across two GUIs. Is there a simple way to do this without re-creating the second GUI each time a new pixel is clicked and feeding in the new input location?
You can definitely do it without recreating the second GUI every time.
Without knowing your specific code, I would say that you should store a reference to the second GUI in the first GUI; then, in the callback for clicking a pixel in the first GUI, change the data in the second GUI via the stored reference (e.g. its figure handle). You can store arbitrary data in a figure, for example by using the function guidata. A bit of code:
...
figure2 = figure();
figure1 = figure('WindowButtonDownFcn',@myCallback);
guidata(figure1, figure2);
...
function myCallback(obj,eventdata)
figure2 = guidata(obj);
...
Even easier but a bit more error-prone would be to use global variables for storing the references.
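As a more concrete (hedged) sketch of this idea, the snippet below uses a nested callback instead of guidata, so both the image cube and the handle to the z-profile line are directly visible to the click callback; cube (the 3-D volume) and the window layout are assumptions for illustration:
function linkViewerAndProfile(cube)
% Viewer figure shows one slice; the z-profile figure shows the profile of
% the last clicked pixel. The nested callback shares cube and profLine.
figure('Name', 'z-profile');
profLine = plot(squeeze(cube(1, 1, :)));        % start with the profile of pixel (1,1)

figure('Name', 'viewer');
hImg = imshow(cube(:, :, 1), []);               % display the first slice
set(hImg, 'ButtonDownFcn', @updateProfile);

    function updateProfile(src, ~)
        pt  = round(get(get(src, 'Parent'), 'CurrentPoint'));
        row = min(max(pt(1, 2), 1), size(cube, 1));
        col = min(max(pt(1, 1), 1), size(cube, 2));
        set(profLine, 'YData', squeeze(cube(row, col, :)));  % update the z-profile plot
    end
end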