I want to design a DLL for geometric points.
I have an x value and a y value; from there I have to enter more x and y values, and given two points (two x values and two y values) I have to find the distance between them. That is the first DLL. Then a second class library has to get the points from the first class library and calculate the slope angle.
You can simply calculate it with the Pythagorean theorem.
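Regardless of the language the class libraries end up in, both computations are one-liners. A minimal sketch in MATLAB, with made-up example points (x1,y1) and (x2,y2):
x1 = 1; y1 = 2; x2 = 4; y2 = 6; % example points
d = sqrt((x2 - x1)^2 + (y2 - y1)^2); % distance, by Pythagoras
slopeAngle = atan2(y2 - y1, x2 - x1); % slope angle in radians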
I have an image of a cytoskeleton. There are a lot of small objects inside, and I want to calculate the distance between all of them along every axis and get a matrix with all this data. I am trying to do this in MATLAB.
My final aim is to figure out whether there is any axis with a constant distance between the objects.
I've tried bwdist and connected components, without any luck.
Do you have any other ideas?
So, the end goal is that you want to globally stretch this image in a certain direction (linearly) so that the distances between nearest pairs end up as close together as possible, hopefully the same? Or might you do more complex stretching? (Note that with an arbitrarily complex one you can always make it work.)
If it is the linear global case, the distance in x' and y' is a simple multiplication of the old distance in x and y, applied to every pair of points. So the final Euclidean distance ends up being sqrt((SX*X)^2 + (SY*Y)^2), with SX being the stretch in x and SY the stretch in y; X and Y are the distances in x and y between pairs of points.
If you are interested in just the "the same" case, the solution is not so difficult:
Find all objects of interest and put their X and Y coordinates in an N*2 matrix.
Calculate the distances between all pairs of objects in X and Y. You will end up with two matrices sized N*N (real, symmetric, with 0 on the diagonal; these are distance matrices).
Find the minimum distance (say this is between A and B).
You probably already have this. Now:
Take C. Make N-1 transformations, each of which makes C->nearestToC equal to A->B. It is a simple system of equations: X1^2*SX^2 + Y1^2*SY^2 = X2^2*SX^2 + Y2^2*SY^2.
So, first require A->B = C->A, then A->B = C->B, then A->B = C->D, and so on. Make sure the transformation is normalized, i.e. SX^2 + SY^2 = 1. If no such transformation can be found, the only valid one is SX = SY = 0, which means you don't have a solution here. Obviously, SX and SY need to be real. (A sketch of this step appears at the end of this answer.)
Note that this solution is unique except in the case where X1 = X2 and Y1 = Y2. In that case, grab some point other than C to find the transformation.
For each transformation, check the remaining points and find all of their nearest neighbours. If the distance is always the same as that of these two (to a given tolerance), great, you have found your transformation. If not, this transformation does not work and you should continue with the next one.
If you want a transformation that minimizes variation between distances (but doesn't require them to be nearly equal), I would use some optimization method and search for a minimum; I don't know how to find an exact solution otherwise. I would also pick this in case your stretch is not linear or not global.
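A minimal MATLAB sketch of one equalization step, with made-up per-axis distances X1, Y1 (pair A->B) and X2, Y2 (pair C->nearestToC):
X1 = 3; Y1 = 1; X2 = 1; Y2 = 2; % example per-axis distances
% solve X1^2*s + Y1^2*(1-s) = X2^2*s + Y2^2*(1-s), where s = SX^2 and SY^2 = 1-s
s = (Y2^2 - Y1^2) / ((X1^2 - X2^2) + (Y2^2 - Y1^2)); % undefined when X1 = X2 and Y1 = Y2
if s >= 0 && s <= 1
    SX = sqrt(s); SY = sqrt(1 - s); % normalized: SX^2 + SY^2 = 1
else
    disp('no real normalized stretch equalizes these two distances')
end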
If I understand your question correctly, the first step is to obtain all of the objects' center-of-mass points in the image as (x,y) coordinates. Then you can easily compute all of the distances between all points. I suggest taking a look at a histogram of those distances, which may provide some information as to the nature of the distance distribution (for example, whether it is uniformly random, or whether patterns appear).
Obtaining the center-of-mass points is not an easy task; consider transforming the image into a binary one, or some sort of background subtraction with blob detection and/or an edge detector.
For building a histogram you can use MATLAB's histogram function.
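A minimal sketch of this pipeline, assuming a grayscale image, the Image Processing Toolbox (graythresh, imbinarize, regionprops) and the Statistics Toolbox (pdist); the file name and thresholding step are placeholders:
img = imread('cytoskeleton.png'); % hypothetical file name
bw = imbinarize(img, graythresh(img)); % crude binarization; tune for your data
stats = regionprops(bw, 'Centroid'); % center of mass of each blob
pts = cat(1, stats.Centroid); % N-by-2 matrix of (x,y) coordinates
d = pdist(pts); % all pairwise Euclidean distances
histogram(d) % inspect the distance distribution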
I want to determine a point in space by geometry, and I have math computations that give me several theta values. After evaluating the theta values, I get N matrices of dimension 1 x 3, where N is the number of thetas evaluated. Since I have my targeted point, I only need to decide which of the matrices is closest to the target, with adequate attention to all three coordinates (x,y,z).
Consider the analysis in the figure below:
Fig 1: Determining the closest point, with all points having minimal error
It can easily be seen that the third matrix is the closest using the sum of absolute coordinate differences, sum(abs([x,y,z] - reference)). However, if the method is applied to another figure, given below, the result is obviously wrong.
Fig 2: One point has the closest values on two axes of the reference point
Looking at point B, it is closer to the reference point on the y- and z-axes, but it strays greatly on the x-axis.
So how can I evaluate the matrices and select the one closest to the reference point, with adequate emphasis on the error differences in all coordinates (x,y,z)?
If your results are in terms of (x,y,z), why not evaluate the Euclidean distance of each matrix you have obtained from the reference point?
A sketch in MATLAB code:
Ref_point = [48.98, 20.56, -1.44];
Curr_point = [x, y, z];
Xd = (Curr_point(1) - Ref_point(1))^2;
Yd = (Curr_point(2) - Ref_point(2))^2;
Zd = (Curr_point(3) - Ref_point(3))^2;
distance = sqrt(Xd + Yd + Zd);
% find the minimum distance over all candidate points
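A vectorized variant that handles all N candidate rows at once; candidates below is a hypothetical N-by-3 matrix of [x y z] rows with made-up values:
candidates = [49.1 20.3 -1.5; 60.2 20.5 -1.4; 48.9 20.6 -1.4]; % example data
Ref_point = [48.98, 20.56, -1.44];
diffs = bsxfun(@minus, candidates, Ref_point); % per-coordinate errors
d = sqrt(sum(diffs.^2, 2)); % Euclidean distance of each row
[minDist, idx] = min(d); % idx selects the closest matrix
closest = candidates(idx, :);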
Say I have a discrete vector field u(x,y) and v(x,y), and another scalar field vort(x,y); x and y are a meshgrid-style set of coordinates. I want to set a contour level of my scalar vort and integrate the vector field around that closed contour. How can I do this when I have discrete data, not a function?
contour(x,y,vort,[0.5 0.5]); %for example
I can extract from this the data points at all locations on the contour, but how do I integrate the vector field along this curve?
I sorted this by the following steps:
Use contourc to find the coordinates of the points on the loop.
Use improfile to interpolate the u and v values at a specified number of points around the loop.
Find the angle (a) of the loop at each point.
Integrate u * cos(a) - v * sin(a) using trapz.
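A minimal sketch of these steps, assuming x and y come from meshgrid and u, v, vort are same-sized arrays. It uses interp2 in place of improfile (which needs the Image Processing Toolbox) and integrates the standard u dx + v dy circulation form directly instead of the angle form above:
C = contourc(x(1,:), y(:,1)', vort, [0.5 0.5]); % contour at level 0.5
n = C(2,1); % number of points in the first contour segment
xc = C(1, 2:n+1); yc = C(2, 2:n+1); % coordinates along the contour
xc(end+1) = xc(1); yc(end+1) = yc(1); % close the loop
uc = interp2(x, y, u, xc, yc); % interpolate u onto the contour
vc = interp2(x, y, v, xc, yc); % interpolate v onto the contour
circulation = trapz(xc, uc) + trapz(yc, vc); % line integral of u dx + v dy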
I am trying to create an orthogonal coordinate system based on two "almost" perpendicular vectors, which are deduced from medical images. I have two vectors, for example:
Z=[-1.02,1.53,-1.63];
Y=[2.39,-1.39,-2.8];
that are almost perpendicular, since their inner product is equal to 5e-4.
Then I find their cross product to create my 3rd basis:
X=cross(Y,Z);
Even this third vector is not completely orthogonal to Z and Y, as their inner products are on the order of 1e-15 and 1e-16, but I guess that is practically zero. In order to use this set of vectors as an orthogonal basis for a local coordinate system, I assume they should be almost exactly perpendicular. I first thought I could fix this by rounding my vectors to fewer decimal places, but that did not help. I guess I need to find a way to alter my initial vectors a little to make them more perpendicular, but I don't know how to do that.
I would appreciate any suggestions.
Gram-Schmidt is right as pointed out above.
Basically, you want to subtract the component of Y that is in the direction of Z from Y (Note: you can alternatively operate on Z instead of Y).
The component of Y in the Z direction is given by:
dot(Y,Z)*Z/(norm(Z)^2)
(projection of Y onto Z)
Note that if Y is orthogonal to Z, then this is 0.
So:
Y = Y - dot(Y,Z)*Z/(norm(Z)^2)
and Z stays unchanged.
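A minimal sketch with the vectors from the question, ending in an orthonormal basis:
Z = [-1.02, 1.53, -1.63];
Y = [2.39, -1.39, -2.8];
Y = Y - dot(Y, Z) * Z / norm(Z)^2; % remove the component of Y along Z
X = cross(Y, Z); % orthogonal to both by construction
X = X / norm(X); Y = Y / norm(Y); Z = Z / norm(Z); % normalize
disp([dot(X,Y) dot(Y,Z) dot(X,Z)]) % all ~1e-16, i.e. machine precision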
Let V = Y + a*Z.
Require dot(Z, V) = 0; solving gives a = -dot(Y,Z) / dot(Z,Z), and with it V.
Now use V and Z as your basis.
You may need to normalize the vectors and use double precision to get the desired accuracy.
I have a joint density function for two independent variables X and Y, and I now want to sample new x,y values from this distribution.
What I believe I have to do is find the joint cumulative distribution and then somehow sample from it. I roughly know how to do this in 1D, but I find it really hard to understand how to do it in 2D.
I also used the MATLAB function cumtrapz to find the cumulative distribution function for the above pdf.
Just to be clear, what I want to do is sample random values x,y from this empirical distribution.
Can someone please point me in the right direction?
EDIT: I have data values and I use
[pdf, bins] = hist3([X Y])
I then normalize the pdf and do
cumulativeDistribution = cumtrapz(pdfNormalize)
And yes (to the comment below), X and Y are supposed to be independent.
If you know how to sample a distribution in 1D, then you can extend it to 2D. Create the marginal distribution for X and take a sample from it, say X1. Then fix X = X1 in your 2D distribution and sample Y from the resulting 1D slice, i.e., sample Y from f_XY(X1, Y).
Given a joint distribution in, say, two random variables X and Y, you can compute a CDF for X alone by summing over all possible values of Y, i.e. P(X <= x) = sum of P(X = x_i and Y = y_j) over {x_i <= x, all y_j}. Once you have P(X <= x) in hand, there are well-known methods for sampling a value of X; let's call it a. Now that you have a, compute P(Y <= y given X = a) = sum of P(X = a and Y = y_j) over {y_j <= y}, divided by sum of P(X = a and Y = y_j) over {all y_j}. Sample Y using the same approach that gave you X, yielding Y = b. Done.
The same approach generalizes to more than two jointly distributed random variables.
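A minimal MATLAB sketch of this two-step inverse-transform sampling, assuming P is a normalized 2-D histogram (as from hist3), with rows indexing the x bins and columns the y bins; the placeholder P here is made up purely for illustration:
P = rand(20, 30); P = P / sum(P(:)); % placeholder joint pmf
Px = sum(P, 2); % marginal pmf of X
cdfX = cumsum(Px);
i = find(cdfX >= rand, 1); % inverse-transform sample: x bin index
Py = P(i, :) / sum(P(i, :)); % conditional pmf of Y given X = x_i
cdfY = cumsum(Py);
j = find(cdfY >= rand, 1); % y bin index
% (i, j) is the sampled bin pair; map back to bin centers from hist3 as needed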