I want to calculate the magnetic field from a given image using the Biot-Savart law. For example, if I have a picture of a triangle, I say that this triangle forms a closed wire carrying current. Using image derivatives I can get the coordinates and direction of the current (normals included). I am struggling to implement this and need a bit of help with the logic too. Here is what I have:
Img = imread('littletriangle.bmp');
Img = Img(:,:,1);             % keep a single channel
Img = double(Img);
[rows, cols] = size(Img);     % note: size returns rows first, then columns
[Ix, Iy] = gradient(Img);     % edge location and orientation
The Biot-Savart equation is:
B = mu0/(4*pi) * sum( I*dl x r_hat / r^2 )
where mu0/(4*pi) is a constant, I is the current magnitude, dl is the current element, r_hat is the unit vector from a current element to a pixel, and r^2 is the squared magnitude of the displacement between the pixel and the current element.
So, just to start off: I read the image in, turn it into a binary image, and take the image gradient. This gives me the location and orientation of the 'current'. I now need to calculate the magnetic field from this 'current' at every pixel in the image. I am only interested in the magnetic field in the x-y plane. Anything to start me off would be brilliant!
For a straight wire:
B = mu * I / (2*pi*r)
B is a vector. Its direction is perpendicular to the line between the wire and the point of interest; the fastest way to rotate a vector by 90° is to remap its components, so (x, y) becomes (-y, x).
What about the current? If you deal with a current that is homogeneous inside the wire (which can be your triangle), then the current per point is just the total current I normalized over all the points.
So how to do this?
Get the current per pixel: dI = current / (number of pixels in the shape).
For each point, calculate B as the sum over all the other mini-wires (each pixel being one wire element), using the equation above with r obtained from Pythagoras' theorem. B is a vector with a direction, so keep track of it as (Bx, By).
A 100*100 picture will yield (100*100)*(100*100) evaluations of the B equation, or fewer if you do not calculate the field from empty space.
In the end, B is not just mu * I / (2*pi*r) but the sum over the whole wire, with I replaced by dI.
You do not need to apply any derivatives, just integration (a sum).
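The steps above can be sketched in Python/NumPy (the thread is MATLAB, but the recipe is the same; `biot_savart_2d` and its parameters are illustrative names, and the boundary elements are taken from the image gradient as the question suggests):

```python
import numpy as np

def biot_savart_2d(mask, I=1.0, mu0=4e-7 * np.pi):
    """Out-of-plane field Bz from a current loop traced along the
    boundary of a binary mask -- a sketch of the recipe above."""
    # boundary "current elements" from the image gradient
    gy, gx = np.gradient(mask.astype(float))
    # tangent direction: rotate the (normal) gradient by 90 degrees
    dlx, dly = -gy, gx
    ys, xs = np.nonzero((gx != 0) | (gy != 0))
    # current per element: total I spread over the boundary pixels
    dI = I / max(len(xs), 1)
    H, W = mask.shape
    Y, X = np.mgrid[0:H, 0:W].astype(float)
    Bz = np.zeros_like(Y)
    const = mu0 / (4 * np.pi)
    for x, y in zip(xs, ys):
        rx, ry = X - x, Y - y
        r2 = rx**2 + ry**2
        r2[r2 == 0] = np.inf          # skip the element's own pixel
        r = np.sqrt(r2)
        # z-component of dl x r_hat, divided by r^2
        cross = (dlx[y, x] * ry - dly[y, x] * rx) / (r * r2)
        Bz += const * dI * cross
    return Bz
```

This is the naive O(N^2) double sum mentioned above; for large images you would restrict the outer loop to the boundary pixels only, exactly as suggested.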
I have to transform a sinusoidal curve in an input image into a straight line in the resulting image.
An example of input sinusoidal image:
What I think is one solution should be:
Place the origin of the x and y coordinates at the start of the curve, so that y = 0 at the starting point. Then points above the axis are mapped as y = y - delta_y and points below as y = y + delta_y.
So to make upper points a straight line, our resulting image will be:
O[x,y-delta_y]= I[x,y];
But how do I calculate delta_y for each y along the horizontal x axis (it is the distance of the curve points from the horizontal axis)?
Another solution could be to save all the information of the curve in a variable and plot it as a straight line, but how would I do that?
Since the curve is blue, we can use information from the blue and red channels to extract it. Simply subtracting the red channel from the blue channel will highlight the curve:
a= imread('kCiTx.jpg');
D=a(:,:,3)-a(:,:,1);
In each column of the image, the position of the curve is the index of the row whose value is the maximum of that column:
[~,im]=max(D);
So we can use the row positions to shift each column and create a horizontal line. Shifting a column can move pixels outside the image, so first pad the image on top and bottom by the original image height (so the padded image has three times the original number of rows), using 255 (white) as the padding value:
pd = padarray(a,[size(a,1) 0 0], 255);
Finally, for each channel, circshift each column by its value of im:
for ch = 1:3
    for col = 1:size(a,2)
        pd(:,col,ch) = circshift(pd(:,col,ch),-im(col));
    end
end
So the result will be created with this code:
a= imread('kCiTx.jpg');
D=a(:,:,3)-a(:,:,1);
%position of curve found
[~,im]=max(D);
%pad image
pd = padarray(a,[size(a,1) 0 0], 255);
%shift columns to create a flat curve
for ch = 1:3
    for col = 1:size(a,2)
        pd(:,col,ch) = circshift(pd(:,col,ch),-im(col));
    end
end
figure,imshow(pd,[])
If you are sure you have a sinusoid in your initial image, rather than calculating piecemeal offsets, you may as well estimate the sinusoidal equation:
amplitude = (yMax - yMin)/2
offset = (yMax + yMin)/2
frequencyScale = π / (xValley - xPeak)
frequencyShift = xFirstZeroCrossRising
(xValley needs to be the valley immediately after xPeak. Alternatively you could go peak to peak, but that changes the equation; this is the more compact version, i.e. you need to see less of the sinusoid.)
If you are able to calculate all of those, this is then your equation:
y = offset + amplitude * sin(frequencyScale * (x - frequencyShift))
This equation should be all you need to store from one image to be able to shift any other image; it can also be used to generate the offsets that exactly cancel the sinusoid in your image.
All of these terms should be estimable from the image with relatively little difficulty. If you cannot figure out how to get any of them from the image, let me know and I will explain that one in particular.
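A Python/NumPy sketch of estimating those terms from sampled curve points (it measures the half period from the first rising and falling zero crossings instead of peak-to-valley, which is equivalent for a clean sinusoid and numerically more robust; it assumes noise-free data covering at least one full period, and the function name is mine):

```python
import numpy as np

def estimate_sinusoid(x, y):
    """Estimate offset, amplitude, frequencyScale, frequencyShift so
    that y = offset + amplitude*sin(frequencyScale*(x - frequencyShift))."""
    y_max, y_min = y.max(), y.min()
    amplitude = (y_max - y_min) / 2
    offset = (y_max + y_min) / 2
    yc = y - offset
    # zero crossings of the centered signal
    rising = np.where((yc[:-1] <= 0) & (yc[1:] > 0))[0]
    falling = np.where((yc[:-1] >= 0) & (yc[1:] < 0))[0]
    x_r, x_f = x[rising[0]], x[falling[0]]
    # consecutive opposite crossings are half a period apart,
    # the same distance as peak-to-valley in the answer above
    frequency_scale = np.pi / abs(x_r - x_f)
    frequency_shift = x_r          # xFirstZeroCrossRising
    return offset, amplitude, frequency_scale, frequency_shift
```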
If you are looking for some type of distance plot:
Take your first point on the curvy line and copy it into your output image, then measure the distance between that point and the next point on the (curvy) line. Use that distance to offset the next point (from the first) in the output image, along the x axis only. You may want to do this pixel by pixel, or grab clumps of pixels through averaging or jumping, as pixel-by-pixel will give you some weird digital noise.
If you want to be cleaner, set up a step size small enough to follow the maximum curvature of the sinusoid without too much error. Then estimate the distance as stated above to set up bounds, and interpolate each pixel between the start and end point into the image, averaging into bins based on approximate position; i.e. if a pixel from the original image would fall between two bins in the output image, split it and add its weighted parts to those two bins.
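A minimal Python/NumPy sketch of the arc-length measurement that drives this "distance plot" (it assumes the curve points have already been extracted and ordered; the function name is mine):

```python
import numpy as np

def unroll_by_arclength(xs, ys):
    """Map ordered points along a curvy line to new x positions equal
    to the cumulative distance travelled along the curve."""
    dx = np.diff(xs)
    dy = np.diff(ys)
    seg = np.hypot(dx, dy)                        # neighbour distances
    s = np.concatenate([[0.0], np.cumsum(seg)])   # arc length from start
    return s                                      # new x per point
```

Copying pixel (xs[i], ys[i]) to output position (s[i], ys[0]) straightens the curve at its true length; binning or interpolating s onto the integer pixel grid handles the fractional positions.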
For my project I have to segment the abnormalities in a CT brain image.
I want to do that by comparing the right side of the brain with the left side, using the intensity difference of the image. For example, blood is brighter than brain tissue in CT images. Because the right and left sides of the brain are nearly symmetric, it is possible to find an abnormality in one side by comparing it with the other side. Using Matlab, I want to work with the DICOM files of the CT images and segment the abnormal area by comparing both sides of the brain.
After segmenting the abnormalities in 2D, I want to register the 2D images and create a 3D reconstruction.
Does anyone know the best coding method (in Matlab) for comparing the left and right side of a DICOM image?
Maybe take a look at this paper: http://www.sciencedirect.com/science/article/pii/S0167865503000497
It explains how to find the symmetry plane in 3D MRI images, but the method should also work on CT. First you search for the center of mass in your image. Next you calculate the axes of the ellipsoid of inertia and evaluate the symmetry. Finally, you can improve the symmetry plane using the downhill simplex method.
Hope this helps!
EDIT: here is how I would tackle this problem:
search for the center of mass R
%coordinate grids for a 3D volume (equivalent to the find/reshape/ind2sub route)
[x,y,z] = ndgrid(1:size(image,1), 1:size(image,2), 1:size(image,3));
Rx = image.*x;
Ry = image.*y;
Rz = image.*z;
Rx = round(1/sum(image(:)) * sum(Rx(:)));
Ry = round(1/sum(image(:)) * sum(Ry(:)));
Rz = round(1/sum(image(:)) * sum(Rz(:)));
Rx, Ry and Rz now contain the position of the center of mass in your image. The code is easily adaptable to 2D.
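For the 2D case, the same centroid computation is compact enough to show in full. Here is a Python/NumPy sketch (the function name is mine; index grids take the place of the find/ind2sub construction):

```python
import numpy as np

def center_of_mass_2d(image):
    """Intensity-weighted centroid of a 2D image, the 2D analogue of
    the Rx/Ry/Rz computation above. Returns (row, col)."""
    image = image.astype(float)
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    total = image.sum()
    ry = (image * ys).sum() / total
    rx = (image * xs).sum() / total
    return ry, rx
```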
Now, look for the axes of the ellipsoid of inertia:
for p=0:2
    for q=0:2
        for r=0:2
            if p+q+r==2
                integr = image.*(x-Rx).^p.*(y-Ry).^q.*(z-Rz).^r;
                m = sum(integr(:));
                if p==2,           xx=1; yy=1;
                elseif p>0 && q>0, xx=1; yy=2;
                elseif p>0 && r>0, xx=1; yy=3;
                elseif q==2,       xx=2; yy=2;
                elseif q>0 && r>0, xx=2; yy=3;
                elseif r==2,       xx=3; yy=3;
                end
                M(xx,yy) = m;
                M(yy,xx) = m;
            end
        end
    end
end
[V,~] = eig(M);
The matrix V contains the directions of the axes of the ellipsoid of inertia. These are first guesses for the symmetry plane.
Evaluate the symmetry. This is the hard part, because you have to rotate the image around all three (or two, in 2D) candidate symmetry planes. I have used the affine3d and imwarp commands, but it's quite cumbersome. Make sure to define the different axes through the center of mass found before. A possible measure of symmetry is mu = 1 - ||image - mirrored_image||^2 / (2*||image||^2). The axis with the highest mu value is the best symmetry plane.
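The mu measure itself is cheap to compute once the mirrored image is available. A Python/NumPy sketch, simplified to mirroring about the image centre along one array axis rather than an arbitrary plane (the function name is mine):

```python
import numpy as np

def symmetry_mu(image, axis=1):
    """mu = 1 - ||I - mirror(I)||^2 / (2*||I||^2); equals 1 for a
    perfectly symmetric image. Mirrors about the array centre along
    the given axis -- a simplification of the arbitrary-plane case."""
    image = image.astype(float)
    mirrored = np.flip(image, axis=axis)
    num = np.sum((image - mirrored) ** 2)
    den = 2 * np.sum(image ** 2)
    return 1 - num / den
```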
If you are not happy about the symmetry axis, you can improve it using the downhill simplex method, see e.g. https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
Now you have your original image and the mirrored image around the midsagittal plane. Subtracting the two should give you an idea of the abnormalities.
I hope this is clear. For more information, please check the excellent paper by Tuzikov et al. mentioned above.
I have a set of 3D points.
Points_[x,y,z] % n*3, where n is the number of points
I want to fit a plane (it is a floor) and check the height of the plane. I think it is a 2D problem:
z = b0 + b1*x + b2*y;
I can't find a link for 2D RANSAC plane fitting. Can someone please give a link or file?
Secondly, some (commercial) software gives a height value for the plane. Is it the mean or some more complex value?
Regards,
If you form the following "A" matrix
A = [ones(numel(Points_X),1), Points_X(:), Points_Y(:)];
where the (:) is to give you column vectors (in case they were not to begin with)
Then you can write your equation as the classic linear system of equations:
A*b = Points_Z(:);
where b = [b0; b1; b2] -- a column vector of the parameters you are trying to determine.
This has the canonical solution
b=A\Points_Z(:)
or b=pinv(A)*Points_Z(:)
See help on mldivide and pinv.
You must have 3 or more points which don't all lie on a line. For an overdetermined system like this, pinv and \ will basically produce the same results. If the points are nearly collinear, there may be some advantage to using pinv.
The 3 parameters in b are basically the height of the plane above the origin, the x slope, and the y slope of the plane. If you think about it, the "height" of a plane IS your z term; you can only talk about height above some point (like the origin). Now, if you want the height at the center of mass of the sampled points, you would do
z_mean = [1 mean(Points_X(:) ) mean( Points_Y(:) )] * b
which is probably just equivalent to mean( Points_Z(:) ). For this definition to be meaningful, you would have to ensure that you have a uniformly spaced grid over the region of interest.
There may be other definitions, depending on your application. For instance, if you are trying to find the height at a center of a room, with points sampled along the walls and interior, then replacing mean with median might be more appropriate.
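As a concrete illustration, here is a Python/NumPy sketch of the same least-squares fit (the analogue of b = A\Points_Z(:)) plus the height-at-centroid evaluation; the function names are mine:

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares fit of z = b0 + b1*x + b2*y, the analogue of
    MATLAB's b = A \ z with A = [ones, x, y]. Returns [b0, b1, b2]."""
    A = np.column_stack([np.ones(len(x)), x, y])
    b, *_ = np.linalg.lstsq(A, z, rcond=None)
    return b

def plane_height_at_centroid(x, y, b):
    """Height of the fitted plane at the centroid of the samples,
    i.e. [1 mean(x) mean(y)] * b."""
    return b[0] + b[1] * np.mean(x) + b[2] * np.mean(y)
```

Wrapping `fit_plane` in a loop over random 3-point subsets, scoring each candidate by its inlier count, would turn this into the RANSAC variant the question asks about.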
From the given matrices A and B I need to compute a new matrix C. Matrix A represents image pixels, and C is a horizontally shifted version of A. The tricky part: this shift is defined per pixel by the values in the disparity matrix B. For example: while pixel (1,1) needs to be shifted 0.1 units to the right, pixel (1,2) needs to be shifted 0.5 units to the left.
I implemented this as backward-mapping, where for each pixel in C I compute the required source position in A (which is simply my current pixel's location minus the corresponding offset in B). Since non-integer shifts are allowed, I need to interpolate the new pixel value.
Doing this in Matlab, of course, takes quite some time as images get larger. Is there any built-in function I can utilize for this task?
I assume that matrix A is an image, which means that the pixels are regularly spaced, which means you can use INTERP2. I also assume that you calculate for each pixel individually the interpolated value from A. You can, however, perform the lookup in one step, which will be quite a bit faster.
Say A is a 100x100 image, and B is a 10000-by-2 array with [shiftUpDown,shiftLeftRight] for each pixel. Then you'd calculate C this way:
%# create coordinate grid for image A
[xx,yy] = ndgrid(1:100,1:100);
%# linearize the arrays, and add the offsets
xx = xx(:);
yy = yy(:);
xxShifted = xx + B(:,1);   %# row coordinates
yyShifted = yy + B(:,2);   %# column coordinates
%# preassign C to the right size and interpolate
C = A;
C(:) = interp2(A, yyShifted, xxShifted);   %# interp2 takes column (x) coords first
The function interp2 interpolates values on a regularly spaced grid, such as a bitmap image. If your pixels didn't lie on a regular grid, then you would use griddata.
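For reference, the same vectorised backward-mapping lookup can be sketched in Python/NumPy with an explicit bilinear interpolation (`backward_map` is an illustrative name; SciPy's `map_coordinates` would do the same job):

```python
import numpy as np

def backward_map(A, shift_rows, shift_cols):
    """Backward mapping with bilinear interpolation: for each output
    pixel, sample A at (row + shift_rows, col + shift_cols), where the
    shifts may be fractional. Source coords are clamped to the image."""
    H, W = A.shape
    rr, cc = np.mgrid[0:H, 0:W].astype(float)
    r = np.clip(rr + shift_rows, 0, H - 1)
    c = np.clip(cc + shift_cols, 0, W - 1)
    r0 = np.floor(r).astype(int); c0 = np.floor(c).astype(int)
    r1 = np.minimum(r0 + 1, H - 1); c1 = np.minimum(c0 + 1, W - 1)
    fr, fc = r - r0, c - c0
    # blend the four neighbours of each (possibly fractional) source point
    top = A[r0, c0] * (1 - fc) + A[r0, c1] * fc
    bot = A[r1, c0] * (1 - fc) + A[r1, c1] * fc
    return top * (1 - fr) + bot * fr
```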
I have a 3D model in a coordinate system that I need to project onto a 2D plane using perspective projection. I used the projection equation C*(RT * P'), where C is the calibration matrix
[f 0 px
0 f py
0 0 1]
px and py are the coordinates of the principal point; I set them both to zero. R is the rotation matrix and T is the translation matrix; I combined them into one matrix that represents both. I used a translation of 3 meters on the Z axis (approximately 9448.82 in pixel units; I'm not sure that conversion is right) and 1 meter on the Y axis. f is the focal length; I'm not sure of the value I used, but I calculated it with this equation:
f = (image width) * (image focal length) / (7.81)
I got the 7.81 value from my camera brand's website, as it is supposed to be an internal camera parameter, and this is what I'm not sure is right.
This is a screenshot of the model that I'm trying to project.
And this is the model after projection... it seems to me like it is scaled along the X axis; it doesn't feel like exactly the same model.
And here is the result after filling the gaps between the points with a filling algorithm; it is even more unlike the original model. Any help about where the problem is, so I can fix it? Thanks :)
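For comparison, here is a minimal Python/NumPy sketch of the projection pipeline described in the question (camera transform, calibration matrix, then the perspective divide); the function name and the numbers in the test are illustrative, not the asker's calibration:

```python
import numpy as np

def project_points(P, f, px, py, R=None, t=None):
    """Pinhole projection x = C*(R*P + T) followed by the perspective
    divide. P is an n-by-3 array of 3D points; returns n-by-2 pixels."""
    R = np.eye(3) if R is None else R
    t = np.zeros(3) if t is None else t
    C = np.array([[f, 0, px],
                  [0, f, py],
                  [0, 0, 1.0]])
    cam = P @ R.T + t                 # points in camera coordinates
    hom = cam @ C.T                   # apply the calibration matrix
    return hom[:, :2] / hom[:, 2:3]   # perspective divide

# focal length in pixels from a sensor width in mm (7.81 in the question):
# f_pixels = image_width_px * focal_length_mm / sensor_width_mm
```

One thing worth checking against this sketch: px and py are normally set to the image centre, not zero; with a zero principal point the whole projection lands shifted and can look distorted after cropping.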