D_xx: 1x5 distortion vector in KITTI calib_cam_to_cam.txt

In the KITTI calib_cam_to_cam.txt, as specified in the README:
calib_cam_to_cam.txt: Camera-to-camera calibration
S_xx: 1x2 size of image xx before rectification
K_xx: 3x3 calibration matrix of camera xx before rectification
D_xx: 1x5 distortion vector of camera xx before rectification
R_xx: 3x3 rotation matrix of camera xx (extrinsic)
T_xx: 3x1 translation vector of camera xx (extrinsic)
S_rect_xx: 1x2 size of image xx after rectification
R_rect_xx: 3x3 rectifying rotation to make image planes co-planar
P_rect_xx: 3x4 projection matrix after rectification
What is the ordering of the D_xx 1x5 distortion vector? Is it (k1,k2,p1,p2,k3) or (k1,k2,k3,k4,k5)? Is there another place that provides a more detailed specification? Please help. Thanks.
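For reference, here is a minimal MATLAB sketch of how such a file could be read into a struct, assuming each line has the form "name: v1 v2 ..." as in the README excerpt above (the file name and the reshape at the end are just examples):
fid = fopen('calib_cam_to_cam.txt', 'r');
calib = struct();
tline = fgetl(fid);
while ischar(tline)
    parts = strsplit(tline, ':');
    if numel(parts) == 2                      % skips non-numeric lines such as calib_time
        vals = sscanf(parts{2}, '%f');
        calib.(strtrim(parts{1})) = vals(:)'; % store each field as a row vector
    end
    tline = fgetl(fid);
end
fclose(fid);
% Values are stored row-major in the file, e.g. for the 3x4 P_rect_00:
P_rect_00 = reshape(calib.P_rect_00, 4, 3)';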

Related

Determinant of Hessian matrix of a grayscale image is too small in MATLAB

I am trying to find the determinant of the Hessian matrix of a 50x50 grayscale image. The determinant of the matrix I am getting is a very small value, i.e. 4.7612e-134. I think I am missing something. My code is below. Thanks
% computing second derivatives in each direction first
[gx, gy] = gradient(double(sliceOfImageK2));
[gxx, gxy] = gradient(gx);
[gyx, gyy] = gradient(gy);
hessianMatrix = [gxx gxy; gxy gyy];
determinantHessianMatrix = det(hessianMatrix)
I don't think you should assemble a 100x100 matrix if you want to call it a Hessian. Instead, assemble one 2x2 matrix for each of the 50x50 (2500) pixels where you are sampling your derivatives.
These are the 2500 Hessians, expressed as a 2500x4 matrix:
H = [gxx(:) gxy(:) gyx(:) gyy(:)]
Here expressed as 2500 2x2 matrices:
H_ = reshape(H', 2, 2, length(H))
And these are the determinants of each 2x2 matrix:
D = H(:,1).*H(:,4) - H(:,2).*H(:,3)
Here as a 50x50 matrix with the determinant of the Hessian at each pixel, if that is what you are after:
reshape(D, 50, 50)
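Putting the answer together, a self-contained sketch (a random 50x50 image stands in for sliceOfImageK2):
img = rand(50);                       % stand-in for sliceOfImageK2
[gx, gy]   = gradient(double(img));   % first derivatives
[gxx, gxy] = gradient(gx);            % second derivatives
[gyx, gyy] = gradient(gy);
H = [gxx(:) gxy(:) gyx(:) gyy(:)];    % one 2x2 Hessian per pixel, as rows
D = reshape(H(:,1).*H(:,4) - H(:,2).*H(:,3), size(img)); % det at each pixel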

I'm trying to find eigenvalues and vectors of a grayscale image and getting error "Matrix dimensions must agree"

The code is giving the error "Matrix dimensions must agree". What changes should I make?
%reading a image
I =imread('C:\Program Files\MATLAB\R2013a\New folder\fac.jpg');
m = mean(I,2);
I = double(I)- double(repmat(m,10,1));
%calculating covariance matrix
c=cov(I);
%calculating eigenvalues and eigenvectors
[eigenvalue,eigenvector]=eig(c);
First, make sure that I is a 2D matrix. This is necessary for cov to work. Secondly, use repmat(m,n,p), where n and p are such that size(repmat(m,n,p))==size(I).
Example
I =imread('myImg.jpg'); % 63x83x3 matrix containing 3D RGB information.
I = rgb2gray(I); % 3D RGB to 2D gray scale. Now I is a 63x83 matrix.
m = mean(I,2);
I = double(I)- double(repmat(m,1,83));
c=cov(I);
[eigenvector, eigenvalue] = eig(c); % eig returns eigenvectors first, then the diagonal eigenvalue matrix
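A version that avoids hard-coding the width, so the same lines work for any image size ('myImg.jpg' is the example file from above; on R2016b and later, implicit expansion would also let you drop repmat entirely):
I = rgb2gray(imread('myImg.jpg'));        % any 2D grayscale matrix
m = mean(double(I), 2);                   % per-row means
I = double(I) - repmat(m, 1, size(I,2));  % sizes now always agree
c = cov(I);
[eigenvector, eigenvalue] = eig(c);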

Resampling an image of Unequal Dimensions

I have a 3D image of dimensions (182 x 218 x 182).
How could I downsample this image in MATLAB to an image of equal dimensions (like 128 x 128 x 128)?
Try this:
im = rand(2,3,4);        %%% input image
ny = 3; nx = 3; nz = 5;  %% desired output dimensions
[y, x, z] = ndgrid(linspace(1, size(im,1), ny), ...
                   linspace(1, size(im,2), nx), ...
                   linspace(1, size(im,3), nz));
imOut = interp3(im, x, y, z);
I stole this answer from resizing 3D matrix (image) in MATLAB
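Applied to the volume from the question, the same pattern gives the 128x128x128 result (rand stands in for the real data):
im = rand(182, 218, 182);      % stand-in for the actual volume
ny = 128; nx = 128; nz = 128;  % desired output dimensions
[y, x, z] = ndgrid(linspace(1, size(im,1), ny), ...
                   linspace(1, size(im,2), nx), ...
                   linspace(1, size(im,3), nz));
imOut = interp3(im, x, y, z);  % 128x128x128 output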

Reconstruct 3D scene from two 2D images

This is the first time I have done image processing, so I have a lot of questions.
I have two pictures taken from different positions, one from the left and the other one from the right (images attached).
Step 1: Read images by using imread function
I1 = imread('DSC01063.jpg');
I2 = imread('DSC01064.jpg');
Step 2: Get the cameraParameters by using the Camera Calibrator app in MATLAB
load cameraParams.mat
Step 3: Remove Lens Distortion by using undistortImage function
[I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');
[I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');
Step 4: Detect feature points by using detectSURFFeatures function
imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
Step 5: Extract feature descriptors by using extractFeatures function
features1 = extractFeatures(rgb2gray(I1), imagePoints1);
features2 = extractFeatures(rgb2gray(I2), imagePoints2);
Step 6: Match Features by using matchFeatures function
indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);
matchedPoints1 = imagePoints1(indexPairs(:, 1));
matchedPoints2 = imagePoints2(indexPairs(:, 2));
From there, how can I construct the 3D point cloud? In step 2, I used the checkerboard shown in the attached picture to calibrate the camera.
The square size is 23 mm, and from cameraParams.mat I know the intrinsic matrix (or camera calibration matrix K), which has the form K = [alphax 0 x0; 0 alphay y0; 0 0 1].
I need to compute the fundamental matrix F and the essential matrix E in order to calculate the camera matrices P1 and P2, right?
After that, when I have the camera matrices P1 and P2, I can use linear triangulation to estimate the 3D point cloud. Is that the correct way?
I would appreciate any suggestions. Thanks!
To triangulate the points you need the so-called "camera matrices" and the 2D points in each of the images (which you already have).
In MATLAB you have the function triangulate, which does the job for you.
If you have calibrated the cameras, you should have this information already. Anyway, here you have an example of how to create the "stereoParams" object needed for the triangulation.
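A hedged sketch of that route, assuming a rotation R (3x3) and translation t (1x3) of camera 2 relative to camera 1 are already known, e.g. from a stereo calibration, and that both shots share the same cameraParams:
stereoParams = stereoParameters(cameraParams, cameraParams, R, t);
worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);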
Yes, that is the correct way. Now that you have matched points, you can use estimateFundamentalMatrix to compute the fundamental matrix F. Then you get the essential matrix E by multiplying F by the intrinsics on both sides. Be careful about the order of multiplication, because the intrinsic matrix in cameraParameters is transposed relative to what you see in most textbooks.
Now, you have to decompose E into a rotation and a translation, from which you can construct the camera matrix for the second camera using cameraMatrix. You also need the camera matrix for the first camera, for which the rotation would be a 3x3 identity matrix, and translation will be a 3-element 0 vector.
Edit: there is now a cameraPose function in MATLAB, which computes an up-to-scale relative pose ('R' and 't') given the Fundamental matrix and the camera parameters.
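Putting those steps together, a sketch of the whole pipeline (cameraPose is used for the decomposition, as mentioned in the edit; the RANSAC settings are just example values):
[F, inlierIdx] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'RANSAC', 'NumTrials', 2000);
inliers1 = matchedPoints1(inlierIdx);       % keep only the RANSAC inliers
inliers2 = matchedPoints2(inlierIdx);
[R, t] = cameraPose(F, cameraParams, inliers1, inliers2); % up-to-scale pose
P1 = cameraMatrix(cameraParams, eye(3), [0 0 0]); % camera 1 at the origin
P2 = cameraMatrix(cameraParams, R', -t*R');       % pose -> extrinsics
worldPoints = triangulate(inliers1, inliers2, P1, P2);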

Interpolating along the 2-D image slices

I have a set of 100 2-D image slices of the same size. I have used MATLAB to stack them to create volumetric data. While each 2-D slice is 480x488 pixels, the stack is not deep enough along the stacking direction to visualize the volume from other orientations when projected. I need to interpolate along the slices to increase that extent for visualization.
Can somebody please give me an idea or tip about how to do it?
Edit: Annotated projected microscopy images
Figure 1 is the top view of the projected volume.
Figure 2 is the side view of the projected volume.
When I change the rotation angle and try to visualize the volume in a different orientation, e.g. the side view, what I see is as in figure 2.
I want to expand the side view by interpolating along the image slices.
Here is an adapted example from the MATLAB documentation on how to visualize volumetric data (similar to yours) using isosurfaces:
%# load MRI dataset: 27 slices of 128x128 images
load mri
D = squeeze(D); %# 27 2D-images
%# view slices as contours
contourslice(D,[],[],1:size(D,3))
colormap(map), view(3), axis tight
%# apply isosurface
figure
%#D = smooth3(D);
p = patch( isosurface(D,5) );
isonormals(D, p);
set(p, 'FaceColor',[1,.75,.65], 'EdgeColor','none')
daspect([1 1 .5]), view(3), axis tight, axis vis3d
camlight, lighting gouraud
%# add isocaps
patch(isocaps(D,5), 'FaceColor','interp', 'EdgeColor','none');
colormap(map)
MATLAB has a function interp3 that can be used for interpolation, assuming that the data is uniformly discretised.
Check out the documentation.
Hope this helps.
EDIT: The MATLAB function interp3 works as follows:
vi = interp3(x, y, z, v, xi, yi, zi);
I assume that your "stack" of slices defines the arrays x, y, z, v as 3D arrays, where x, y are the coordinates of the pixels in the plane, z is the "height" of each slice and v is the actual image slices, maybe as "intensity" values for the pixels.
If you want to interpolate new image slices at intermediate z values you could specify these levels in the zi array. The arrays xi, yi would again represent the coordinates of the pixels in the plane.
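As a concrete sketch for a 480x488x100 stack, the z-profile of every pixel can be resampled with interp1 (reshaping to 2D keeps the memory needed well below full interp3 query grids; the factor of 4 is just an example):
V = rand(480, 488, 100);                       % stand-in for the stacked slices
[r, c, p] = size(V);
zi = linspace(1, p, 4*p);                      % 4x slice density
V2 = reshape(V, r*c, p)';                      % one column per pixel, z down the rows
Vq = reshape(interp1(1:p, V2, zi)', r, c, []); % interpolated 480x488x400 volume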
I created a function to interpolate along image slices. Below is the code:
function res = interp_along_slices( vol, scale )
% Interpolation along the image slices (third dimension).
% Uses interp from the Signal Processing Toolbox; scale must be an integer.

% Get the size of the volume
[r, c, p] = size(vol);

% Pre-allocate the output:
% the third dimension is scale times p
vol_interp = zeros(r, c, scale*p);

% Interpolate the z-profile of every pixel
for inr = 1:r
    for jnr = 1:c
        xi = squeeze(vol(inr, jnr, :));              % interp needs a vector, not a 1x1xp array
        vol_interp(inr, jnr, :) = interp(xi, scale); % resample at scale times the rate
    end
end

res = vol_interp;

end
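For example, for the stack from the question (random data as a stand-in):
vol = rand(480, 488, 100);          % stand-in for the stacked slices
res = interp_along_slices(vol, 2);  % 480x488x200 volume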