I am trying to build a 3D construct in MATLAB with multicylinder.
This is the code for it:
SC3d = multicylinder(basesize,hSC) %create SC
VS3d = multicylinder(basesize,hVS, "Zoffset",-hVS) %create VS
Sense3d = multicylinder(R1, hEL, "Zoffset",hSC) %create sensor
Inject3d = multicylinder([R1+R2 R3], hEL, "Void", [true false], "Zoffset", hSC) %create inject
model = createpde
model.Geometry = [SC3d VS3d Sense3d]
pdegplot(model,"CellLabels","on","FaceAlpha",0.5)
I have two problems. First, how do I make a hollow cylinder with a z-offset? This is for Inject3d.
Second, how do I add multiple cylinders to the same model/assembly for the PDE Toolbox?
If you have a better way, please let me know too!
Thanks!
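One approach that might work (a sketch, continuing from the variables above; it assumes a release where translate, R2020a+, and addCell, R2021a+, are available): build the hollow ring at z = 0, move it up with translate instead of "Zoffset", then merge the parts cell by cell before assigning to the model, since model.Geometry cannot take an array of geometries.

% Sketch: hollow ring built at z = 0, then shifted up (assumes translate, R2020a+)
Inject3d = multicylinder([R1+R2 R3], hEL, "Void", [true false]);
Inject3d = translate(Inject3d, [0 0 hSC]);

% Merge all parts into one geometry (assumes addCell, R2021a+)
gm = SC3d;
gm = addCell(gm, VS3d);
gm = addCell(gm, Sense3d);
gm = addCell(gm, Inject3d);

model = createpde;
model.Geometry = gm;    % a single combined geometry, not an array of geometries
pdegplot(model, "CellLabels", "on", "FaceAlpha", 0.5)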
I'm a beginner in MATLAB and I'm trying to transform a photo according to a function given in the code.
My aim is to see where some points of the R^2 plane go; for example, I'd like to see where a grid of points ends up under a map such as 1/z.
But I can't figure this out.
I found some good conversations on this topic:
https://www.mathworks.com/matlabcentral/answers/81975-is-it-possible-to-pass-an-image-as-an-input-to-a-function-in-matlab
and good functions like:
https://www.mathworks.com/help/images/ref/imtransform.html
https://www.mathworks.com/help/images/ref/imwarp.html
but I don't understand what to do with them, because I don't have a matrix, just a function like "1/z".
The aim is to do something better than this:
How to plot the Wolfram Alpha grid? [MATLAB]
I've tried to add colors to the mesh graph, but I haven't succeeded; I could only find how to change the colors uniformly, like setting them all to green.
If you have another solution that doesn't use an image but constructs a grid of a range of colors and then deforms it (like in the link), or, even better, creates a whole plane with a uniform distribution of colors instead of a grid, that also fixes the problem!
Thank you!
You can use the surf function to plot a grid with colored patches. If you use the same code as in my answer to your previous question, you can visualize the original grid with colors as follows:
C = X.^2 + Y.^2; % change this to any function you like to get different color patterns
surf(X,Y,C);
view([0, 90]); %view the mesh from above
Now, if you want to see what the transformed mesh looks like, you can do:
surf(U,V,C);
view([0, 90]);
where U and V are computed according to my previous answer.
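In case that previous answer isn't at hand, here is one way U and V could be computed for the map w = 1/z (a sketch; the grid range and density are placeholders):

[X, Y] = meshgrid(linspace(-2, 2, 40)); % even point count so z = 0 is not on the grid
Z = complex(X, Y);
W = 1./Z;        % the conformal map w = 1/z
U = real(W);
V = imag(W);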
Edit: Added sample code for transforming an image using geometricTransform2d and imwarp.
clear
clc
A = imread('peppers.png');
figure(1)
imshow(A)
t1 = geometricTransform2d(@ftransform);    % 2-D transform defined by the function below
Rin = imref2d(size(A),[-1 1],[-1 1]);      % world limits of the input image
Rout = imref2d(size(A),[-5 5],[-5 5]);     % world limits of the output image
B = imwarp(A, Rin, t1,'OutputView',Rout);  % warp the image through the transform
figure(2);
imshow(B)
function Xt = ftransform(X)
% treat each row of X as a point (x,y), i.e. the complex number z = x + iy,
% and map it through w = 1/z
Z = complex(X(:,1),X(:,2));
Zt = 1./Z;
Xt(:,1) = real(Zt);
Xt(:,2) = imag(Zt);
end
Thank you for your attention first.
Recently I have been trying to use the MATLAB program provided by Andrea Fusiello, Emanuele Trucco, and Alessandro Verri in "A compact algorithm for rectification of stereo pairs" to rectify the pictures taken by the two cameras in my research project on stereo calibration.
Though the MATLAB code is not complex, how to get the projection matrices of the two cameras still confuses me.
I used the following MATLAB code to get the intrinsic matrix and the R and T of each camera, and I thought I could get the projection matrix using the formula P = A1*[R|T]. However, as you can see in the picture, the result is strange.
So I think there is something wrong with the projection matrices I got. Could anyone tell me how to get the projection matrices correctly?
MATLAB code:
numImages = 9;
files = cell(1, numImages);
for i = 1:numImages
files{i} = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata', ...
'calibration', 'left', sprintf('left%d.bmp', i));
end
[imagePoints, boardSize] = detectCheckerboardPoints(files);
squareSize = 120; % checkerboard square size in world units (e.g. millimeters)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
cameraParams = estimateCameraParameters(imagePoints, worldPoints);
imOrig = imread(fullfile(matlabroot, 'toolbox', 'vision', 'visiondata', ...
'calibration', 'left', 'left9.bmp'));
[imagePoints, boardSize] = detectCheckerboardPoints(imOrig);
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);
The result:
There is a built-in function cameraMatrix in the Computer Vision System Toolbox to compute the camera projection matrix.
However, if you are trying to do stereo rectification, you should calibrate a stereo pair of cameras using the Stereo Camera Calibrator app, and then use the rectifyStereoImages function. See this example.
The thing to keep in mind is that the functions in the Computer Vision System Toolbox use the post-multiply convention, i.e. row vector times matrix. Because of this, the rotation matrices and the camera projection matrix are transposes of their counterparts in Trucco and Verri and other textbooks. So the formula used by cameraMatrix is
P = [R;t] * K
So P ends up being 4-by-3, and not 3-by-4. This may explain why you are getting weird results.
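Continuing from the extrinsics call in your code, something like this should give the matrix in both conventions (a sketch using the variables from the question):

P = cameraMatrix(cameraParams, R, t); % 4-by-3, post-multiply convention: [R; t] * K
Ptextbook = P';                       % 3-by-4, pre-multiply convention as in Trucco and Verri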
I have got the following in MATLAB (solution as in the example at http://uk.mathworks.com/help/matlab/ref/viewmtx.html):
subplot(211)
h = ezplot3('cos(t)', 'sin(t)', 'sin(5*t)', [-pi pi]);
data = get(h,{'XData','YData','ZData'});
data = [cat(1,data{:})', ones(numel(data{1}),1)];
% Projection matrix on screen
[az,el] = view(); A = viewmtx(az,el);
data_transformed = A*data';
subplot(212)
plot(data_transformed(1,:), data_transformed(2,:))
That transformation does not work with:
h = ezplot3('t', 'sin(t)', '20*cos(t)', [0 10*pi]);
How to get the screen projection of the 3rd plot?
Also, any links to the math behind the projection, with examples would be nice too :)
The projection depends on the view. If you try various view values, the projection in 2D will produce different results.
For example, [az,el] = view(60,30); gives one projection, and [az,el] = view(30,15); gives another.
It turns out you need to normalize by the DataAspectRatio, so the view transformation matrix becomes:
[az, el] = view(gca);
A = viewmtx(az,el) * makehgtform('scale',1./get(gca,'DataAspectRatio'));
The full answer can be seen at http://uk.mathworks.com/matlabcentral/answers/248362-screen-2d-projection-of-3d-plot
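Putting it together for the curve that did not work (same steps as the snippet at the top, with the aspect-ratio normalization added):

subplot(211)
h = ezplot3('t', 'sin(t)', '20*cos(t)', [0 10*pi]);
data = get(h, {'XData','YData','ZData'});
data = [cat(1, data{:})', ones(numel(data{1}), 1)];
% Projection matrix, normalized by the axes' data aspect ratio
[az, el] = view(gca);
A = viewmtx(az, el) * makehgtform('scale', 1./get(gca,'DataAspectRatio'));
data_transformed = A * data';
subplot(212)
plot(data_transformed(1,:), data_transformed(2,:))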
I am doing a project to detect people in a crowd using HOG-LBP, and I want to make it a real-time application. I've read in some references that an integral image/histogram can speed up sliding-window detection. I want to ask how to use an integral image in my sliding-window detection.
Here is the code for the integral image in MATLAB:
A = cumsum(cumsum(double(img)), 2); % integral image: cumulative sum down the rows, then across the columns
and here is my sliding-window detection code:
im = 'C:\Documents\Crowd_PETS09\S1\L1\Time_13-57\View_001\frame_0150.jpg';
im = imread (im);
figure (1), imshow(im);
win_size= [32, 32];
[lastRightCol lastRightRow d] = size(im); % note: size returns [rows, cols, channels], so these names are swapped
counter = 1;
%% Scan the window by using sliding window object detection
% this for loop scan the entire image and extract features for each sliding window
% Loop on scales (based on size of the window)
for s=1
disp(strcat('s is',num2str(s)));
X=win_size(1)*s;
Y=win_size(2)*s;
for y = 1:X/4:lastRightCol-Y
for x = 1:Y/4:lastRightRow-X
%get four points for boxes
p1 = [x,y];
p2 = [x+(X-1), y+(Y-1)];
po = [p1; p2] ;
% cropped image based on the four points
crop_px = [po(1,1) po(2,1)];
crop_py = [po(1,2) po(2,2)];
topLeftRow = ceil(min(crop_px));
topLeftCol = ceil(min(crop_py));
bottomRightRow = ceil(max(crop_px));
bottomRightCol = ceil(max(crop_py));
cropedImage = im(topLeftCol:bottomRightCol,topLeftRow:bottomRightRow,:);
%Get the feature vector from croped image
HOGfeatureVector{counter}= getHOG(double(cropedImage));
LBPfeatureVector{counter}= getLBP(cropedImage);
LBPfeatureVector{counter}= LBPfeatureVector{counter}';
boxPoint{counter} = [x,y,X,Y];
counter = counter+1;
x = x+2; % note: this has no effect; MATLAB resets the loop variable on each iteration
end
end
end
Where should I put the integral image code?
I would really appreciate it if someone could help me figure this out.
Thank you.
The integral image is best suited to Haar-like features. Using it for HOG or LBP would be tricky. I would suggest first getting your algorithm working, and then thinking about optimizing it.
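For reference, the reason the integral image helps with box-type features is that it turns any rectangle sum into four lookups. A minimal sketch (r1,c1 and r2,c2 are hypothetical corner indices of a rectangle):

ii = cumsum(cumsum(double(img), 1), 2); % integral image, as in your question
iiPad = padarray(ii, [1 1], 0, 'pre');  % zero-pad so the r1-1 / c1-1 lookups are safe
% sum of img(r1:r2, c1:c2) in O(1), independent of the rectangle size:
boxSum = iiPad(r2+1, c2+1) - iiPad(r1, c2+1) - iiPad(r2+1, c1) + iiPad(r1, c1);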
By the way, the Computer Vision System Toolbox includes the extractHOGFeatures function, which should be helpful. There is an example of training a HOG-SVM classifier to recognize hand-written digits. There is also a vision.PeopleDetector object, which uses a HOG-SVM classifier to detect people. You could either use it directly for your project, or use it to evaluate the performance of your own algorithm.
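As a quick sanity check of extractHOGFeatures (a sketch; the crop is just an arbitrary 32-by-32 window, like those in your loop):

I = rgb2gray(imread('peppers.png')); % any test image shipped with MATLAB
win = I(1:32, 1:32);                 % one 32-by-32 window
[hog, visualization] = extractHOGFeatures(win);
figure, imshow(win), hold on
plot(visualization)                  % overlay the HOG cell orientations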
I am new to optical flow and computer vision in general, and I started with a simple demo example from MATLAB.
The objective is to take a video and plot the motion vectors on the screen. I am using the following code:
%% initialization
close all
clear all
% Create reader
reader = vision.VideoFileReader;
reader.Filename = 'viptraffic.avi';
% Create viewer
viewer = vision.DeployableVideoPlayer;
%%viewer.FrameRate = 10;
%Create Optical Flow
optical = vision.OpticalFlow; % estimates how pixels move from one frame to the next
% Complex output lets us draw vectors on the video showing the pixel motion;
% the horizontal and vertical components are passed to the shape inserter below
optical.OutputValue = 'Horizontal and vertical components in complex form';
% Display vector fields
shapes = vision.ShapeInserter;
shapes.Shape = 'Lines';
shapes.BorderColor = 'white';
R = 1:4:120;%%downsample the optical flow field
C = 1:4:160;%%downsample the optical flow field
[Cv, Rv] = meshgrid (C, R); %%% display a grid on the image and take every fourth value
Rv = Rv(:)';
Cv = Cv(:)';
%% Execution
reset(reader)
%Set up for stream
while ~isDone(reader)
I = step(reader);
of = step(optical,rgb2gray(I));
size(of)
ofd = of(R,C);
size(ofd)
H = imag(ofd)*20;
V = real(ofd)*20;
%Draw lines on top of image
lines = [Rv;Cv; Rv+H(:)'; Cv+V(:)']; %%start and a finish , start+movement, end+movement
% lines = [Cv;Rv;Cv;Rv];
Ishp = step(shapes,I,lines);
step(viewer,Ishp);
end
release(viewer);
I do not know why the vector lines are not plotting properly.
Can anyone help me?
Thanks
PS: here is the result:
Try using
lines = [Rv(:); Cv(:); Rv(:)+H(:); Cv(:)+V(:)];
instead of
lines = [Rv;Cv; Rv+H(:)'; Cv+V(:)'];
Better yet, if you have a recent version of MATLAB, try using the insertShape function instead of vision.ShapeInserter.
Edit:
If you have a recent version of the Computer Vision System Toolbox, try the new optical flow functions: opticalFlowHS, opticalFlowLK, opticalFlowLKDoG, and opticalFlowFarneback.
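For example, a minimal sketch with the newer API (the decimation and scale values are just placeholders):

vr = VideoReader('viptraffic.avi');        % same video as above
flowModel = opticalFlowLK;                 % Lucas-Kanade flow estimator
while hasFrame(vr)
    frame = rgb2gray(readFrame(vr));
    flow = estimateFlow(flowModel, frame); % opticalFlow object with Vx and Vy
    imshow(frame)
    hold on
    plot(flow, 'DecimationFactor', [4 4], 'ScaleFactor', 20) % quiver overlay
    hold off
    drawnow
end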