Calculation of specific dimension - calculator

I have not found the correct formula for calculating the length from the other dimensions.
I have a plate.
I am given Height = 0.585 mm, Weight = 25000 kg, Width = 2 m, and Density = 7.874 g/cm3.
And I need the Length.
Can you please explain exactly how to calculate it, so that I can understand how to do it?
I need it for a calculation tool.
I know it's a bit stupid to ask this...
BR,
Patric

25000000 g / 7.874 g/cm3 = 3175006.35 cm3
3175006.35 cm3 = H * W * L
3175006.35 cm3 / (0.0585 cm * 200 cm) = L
L = 271368.06 cm
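In MATLAB terms, a minimal sketch of the same unit conversion and rearrangement (variable names are my own):

% Plate length from mass, density and the other two dimensions.
% Convert everything into one unit system first (here: grams and centimetres).
mass    = 25000 * 1000;    % 25000 kg -> g
density = 7.874;           % g/cm^3
H = 0.585 / 10;            % 0.585 mm -> cm
W = 2 * 100;               % 2 m -> cm
V = mass / density;        % volume in cm^3
L = V / (H * W)            % length in cm, approx. 271368 cm (about 2713.7 m)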

Related

gradient and derivative

I am trying to find the first derivative of a Gaussian for an image (using MATLAB), and I tried two ways: one using gradient and one calculating the derivative analytically, but the results look different from each other.
Method 1
k = 7; s = 3;   % kernel size, st. dev.
f = fspecial('gaussian', [k k], s)
[Gx,Gy] = gradient(f)
Method 2
k = 7; s = 3;   % kernel size, st. dev.
[x,y] = meshgrid(-floor(k/2):floor(k/2), -floor(k/2):floor(k/2))
G = exp(-(x.^2+y.^2)/(2*s^2))/(2*pi*(s^2))
Gn=G/sum(G(:))
Gx = -x.*Gn/(s^2)
Gy = -y.*Gn/(s^2)
Gx and Gy should be the same from the two methods, but there is a difference in the values. Does anyone know why that is? I was expecting that they would be the same. Is there a preferred way to calculate the derivative?
Thank you.
Edit: changed the G definition per Conrad's suggestion, but the problem still persists.
This looks wrong:
G = exp(-(x.^2+y.^2)/(2*s^2))/(2*pi*s);
Assuming this is supposed to be the Normal density for (X,Y), where X and Y are independent zero-mean RVs with equal SDs = s, this should be:
G = exp(-(x.^2+y.^2)/(2*s^2))/(2*pi*(s^2));
(The term in front of the exponential in each component is 1/(sqrt(2*pi)*s), and you have this twice, giving 1/(2*pi*s^2).)
According to the docs of fspecial (http://www.mathworks.com/help/images/ref/fspecial.html), fspecial builds the plain exponential exp(-(x^2+y^2)/(2*s^2)) and then normalizes the kernel so that it sums to 1, so it seems that you should alter G to be:
G = exp(-(x.^2+y.^2)/(2*s^2));
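A small sketch of that check (my own, following the recipe in the docs): building G without the constant factor and normalizing afterwards reproduces the fspecial kernel.

k = 7; s = 3;
[x, y] = meshgrid(-floor(k/2):floor(k/2), -floor(k/2):floor(k/2));
G  = exp(-(x.^2 + y.^2)/(2*s^2));   % no constant factor in front...
Gn = G / sum(G(:));                 % ...the normalization supplies it
f = fspecial('gaussian', [k k], s);
max(abs(Gn(:) - f(:)))              % should be ~0 (round-off only)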

Iteration of matrix-vector multiplication which stores specific index-positions

I need to solve a minimum-distance problem; to see some of the work that has been tried, take a look at:
link: click here
I have four elements. Two column vectors: alpha of dim (px1) and beta of dim (qx1). In this case p = q = 50, giving two column vectors of dim (50x1) each. They are defined as follows:
alpha = 0:0.05:2;
beta = 0:0.05:2;
and I have two matrices: L1 and L2.
L1 is composed of three column-vectors of dimension (kx1) each.
L2 is composed of three column-vectors of dimension (mx1) each.
In this case, they have equal size, meaning that k = m = 1000 giving: L1 and L2 of dim (1000x3) each. The values of these matrices are predefined.
They have, nevertheless, the following structure:
L1(kx3) = [t1(kx1) t2(kx1) t3(kx1)];
L2(mx3) = [t1(mx1) t2(mx1) t3(mx1)];
The min. distance problem I need to solve is given (mathematically) as follows:
d = min( (x - (alpha_p*t1_k - beta_q*t1_m)).^2 + (y - (alpha_p*t2_k - beta_q*t2_m)).^2 + (z - (alpha_p*t3_k - beta_q*t3_m)).^2 )
the values x,y,z are three fixed constants.
My problem
I need to develop an iteration which can give me back the index positions from the combination of: alpha, beta, L1 and L2 which fulfills the min-distance problem from above.
I hope the formulation of the problem is clear; I have been careful with the index notation. In case it is still not clear, the index ranges are:
for alpha: p = 1,...,50
for beta: q = 1,...,50
for L1 (t1, t2, t3): k = 1,...,1000
for L2 (t1, t2, t3): m = 1,...,1000
And I need to find the index of p, index of q, index of k and index of m which gives me the min. distance to the point x,y,z.
Thanks in advance for your help!
I don't know your values, so I wasn't able to check my code. I am using loops because that is the most obvious solution. Pretty sure that someone from the bsxfun brigade ( ;-D ) will find a shorter/more effective solution.
alpha = 0:0.05:2;
beta = 0:0.05:2;
% L1 (1000x3) and L2 (1000x3) are assumed to be predefined, as in the question.
idx_smallest_d = [1,1,1,1];
smallest_d = (x - (alpha(1)*L1(1,1) - beta(1)*L2(1,1)))^2 + ...
             (y - (alpha(1)*L1(1,2) - beta(1)*L2(1,2)))^2 + ...
             (z - (alpha(1)*L1(1,3) - beta(1)*L2(1,3)))^2;
% The min. distance problem I need to solve is given (mathematically) as follows:
for p = 1:50
    for q = 1:50
        for k = 1:1000
            for m = 1:1000
                d = (x - (alpha(p)*L1(k,1) - beta(q)*L2(m,1)))^2 + ...
                    (y - (alpha(p)*L1(k,2) - beta(q)*L2(m,2)))^2 + ...
                    (z - (alpha(p)*L1(k,3) - beta(q)*L2(m,3)))^2;
                if d < smallest_d
                    smallest_d = d;
                    idx_smallest_d = [p,q,k,m];
                end
            end
        end
    end
end
What I am doing is predefining the smallest distance as the distance of the first combination, and then checking for each combination whether its distance is smaller than the previous shortest distance.
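A partially vectorized sketch along those lines (untested, so treat it as an assumption; pdist2 needs the Statistics Toolbox): keep the loops over p and q, but handle all (k, m) pairs at once by noting that the objective is the squared Euclidean distance between alpha(p)*L1(k,:) and [x y z] + beta(q)*L2(m,:).

% Assumes alpha, beta, L1 (1000x3), L2 (1000x3) and the constants x, y, z exist.
c    = [x y z];
best = inf;
idx  = [1 1 1 1];                       % [p q k m]
for p = 1:numel(alpha)
    A = alpha(p) * L1;                  % 1000x3: alpha_p * [t1 t2 t3] of L1
    for q = 1:numel(beta)
        B = beta(q) * L2;               % 1000x3: beta_q * [t1 t2 t3] of L2
        % D2(k,m) = sum over the 3 coords of ( c - (A(k,:) - B(m,:)) ).^2
        D2 = pdist2(A, bsxfun(@plus, B, c)).^2;
        [dmin, lin] = min(D2(:));
        if dmin < best
            best = dmin;
            [k, m] = ind2sub(size(D2), lin);
            idx = [p, q, k, m];
        end
    end
end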

Angle between two vectors matlab

I want to calculate the angle between 2 vectors V = [Vx Vy Vz] and B = [Bx By Bz].
Is this formula correct?
VdotB = (Vx*Bx + Vy*By + Vz*Bz)
Angle = acosd (VdotB / norm(V)*norm(B))
And is there any other way to calculate it?
My question is not about normalizing the vectors or making it easier. I am asking how to get the angle between these two vectors.
Based on this link, this seems to be the most stable solution:
atan2(norm(cross(a,b)), dot(a,b))
There are a lot of options:
a1 = atan2(norm(cross(v1,v2)), dot(v1,v2))
a2 = acos(dot(v1, v2) / (norm(v1) * norm(v2)))
a3 = acos(dot(v1 / norm(v1), v2 / norm(v2)))
a4 = subspace(v1,v2)
All formulas are from this MathWorks thread. It is said that a3 is the most stable, but I don't know why.
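One way to see the difference (a small illustration of my own, not from that thread): for nearly parallel vectors the acos form can lose a tiny angle entirely, while the atan2 form recovers it.

v1 = [1 1e-9 0];                          % almost parallel to v2
v2 = [1 0 0];
acos(dot(v1,v2) / (norm(v1)*norm(v2)))    % returns 0: the tiny angle is lost to round-off
atan2(norm(cross(v1,v2)), dot(v1,v2))     % ~1e-9: the correct answer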
For multiple vectors stored on the columns of a matrix, one can calculate the angles using this code:
% Calculate the angle between each column of V (d x N) and a single vector v2 (d x 1)
% d = dimensions, N = number of vectors
% atan2(norm(cross(V,v2)), dot(V,v2)), applied column-wise
c = bsxfun(@cross, V, v2);
d = sum(bsxfun(@times, V, v2), 1);   % dot product of each column with v2
angles = atan2(sqrt(sum(c.^2,1)), d)*180/pi;
The traditional approach to obtaining an angle between two vectors (i.e. arccos(dot(u, v) / (norm(u) * norm(v))), as presented in some of the other answers) suffers from numerical instability in several corner cases. The following code works for n-dimensions and in all corner cases (it doesn't check for zero length vectors, but that's easy to add). See notes below.
% Get angle between two vectors
function a = angle_btw(v1, v2)
    % Returns true if the sign of x is negative, otherwise false.
    signbit = @(x) x < 0;

    u1 = v1 / norm(v1);
    u2 = v2 / norm(v2);

    y = u1 - u2;
    x = u1 + u2;

    a0 = 2 * atan(norm(y) / norm(x));

    if not(signbit(a0) || signbit(pi - a0))
        a = a0;
    elseif signbit(a0)
        a = 0.0;
    else
        a = pi;
    end
end
This code is adapted from a Julia implementation by Jeffrey Sarnoff (MIT license), in turn based on these notes by Prof. W. Kahan (page 15).
You can compute VdotB much faster, and for vectors of arbitrary length, using element-wise multiplication and a sum, namely:
VdotB = sum(V(:).*B(:));
Additionally, as mentioned in the comments, MATLAB has the dot function to compute inner products directly.
Besides that, the formula is what it is, so what you are doing is correct.
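A quick sketch of both routes with made-up example vectors (my own); note the parentheses around the product of the norms in the division:

V = [1 2 3]; B = [4 5 6];
VdotB  = sum(V(:).*B(:));                   % element-wise product, then sum -> 32
VdotB2 = dot(V, B);                         % built-in inner product -> 32
Angle  = acosd(VdotB / (norm(V)*norm(B)))   % angle in degrees, approx. 12.93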
This function should return the angle in radians.
function [ alpharad ] = anglevec( veca, vecb )
% Calculate angle between two vectors
alpharad = acos(dot(veca, vecb) / sqrt( dot(veca, veca) * dot(vecb, vecb)));
end
anglevec([1 1 0],[0 1 0])/(2 * pi/360)
>> 45.00
The solution of Dennis Jaheruddin is excellent for 3D vectors; for higher-dimensional vectors I would suggest using:
acos(min(max(dot(a,b)/sqrt(dot(a,a)*dot(b,b)),-1),1))
This fixes the numerical issues which could push the argument of acos just above 1 or below -1. It is, however, still problematic when one of the vectors is a null vector. This method also only requires 3*N+1 multiplications and 1 sqrt. It does, however, also require 2 comparisons, which the atan method does not need.

generate 3-d random points with minimum distance between each of them?

I am going to generate 10^6 random points in MATLAB with these particular characteristics:
the points should be inside a sphere with radius 25; they are 3-D, so we have x, y, z or r, theta, phi;
there is a minimum distance between the points.
First, I decided to generate points and then check the distances, omitting points that do not satisfy the condition. But that may omit many points.
Another way is to use RSA (Random Sequential Addition): generate points one by one with the minimum distance between them. For example, generate the first point, then generate the second one randomly, outside the minimum distance from point 1, and so on until reaching 10^6 points.
But this takes a lot of time, and I cannot reach 10^6 points, since searching for an acceptable position for each new point takes longer and longer.
Right now I am using this program:
Nmax=10000;
R=25;
P=rand(1,3);
k=1;
while k<Nmax
theta=2*pi*rand(1);
phi=pi*rand(1);
r = R*sqrt(rand(1));
% convert to cartesian
x=r.*sin(theta).*cos(phi);
y=r.*sin(theta).*sin(phi);
z=r.*cos(theta);
P1=[x y z];
r=sqrt((x-0)^2+(y-0)^2+(z-0)^2);
D = pdist2(P1,P,'euclidean');
% euclidean distance
if D>0.146*r^(2/3)
P=[P;P1];
k=k+1;
end
i=i+1;
end
x=P(:,1);y=P(:,2);z=P(:,3); plot3(x,y,z,'.');
How can I efficiently generate points under these conditions?
Thank you.
I took a closer look at your algorithm, and concluded there is NO WAY it will ever work - at least not if you really want to get a million points in that sphere. There is a simple picture that explains why not - a plot of the number of points that you need to test (using your technique of RSA) to get one additional "good" point. That count goes asymptotic at just a few thousand accepted points (I ran a slightly faster algorithm against 200k points to produce the plot).
I don't know if you ever tried to compute the theoretical number of points you could fit in your sphere when you have them perfectly arranged, but I'm beginning to suspect the number is a good deal smaller than 1E6.
The complete code I used to investigate this, plus the output it generated, can be found here. I never got as far as the technique I described in my earlier answer... there was just too much else going on in the setup you described.
EDIT:
I started to think it might not be possible, even with "perfect" arrangement, to get to 1M points. I made a simple model for myself as follows:
Imagine you start on the "outer shell" (r=25), and try to fit points at equal distances. If you divide the area of the "shell" by the area of one "exclusion disk" (of radius r_sub_crit), you get a (high) estimate of the number of points at that distance:
numpoints = 4*pi*r^2 / (pi*(0.146 * r^(2/3))^2) ~ 188 * r^(2/3)
The next "shell" in should be at a radius that is 0.146*r^(2/3) less - but if you think of the points as being very carefully arranged, you might be able to get a tiny bit closer. Again, let's be generous and say the shells can be just 1/sqrt(3) closer than the criteria. You can then start at the outer shell and work your way in, using a simple python script:
import scipy as sc

r = 25
npts = 0

def rc(r):
    return 0.146*sc.power(r, 2./3.)

while (r > rc(r)):
    morePts = sc.floor(4/(0.146*0.146)*sc.power(r, 2./3.))
    npts = npts + morePts
    print morePts, ' more points at r = ', r
    r = r - rc(r)/sc.sqrt(3)

print 'total number of points fitted in sphere: ', npts
The output of this is:
1604.0 more points at r = 25
1573.0 more points at r = 24.2793037966
1542.0 more points at r = 23.5725257555
1512.0 more points at r = 22.8795314897
1482.0 more points at r = 22.2001865995
1452.0 more points at r = 21.5343566722
1422.0 more points at r = 20.8819072818
1393.0 more points at r = 20.2427039885
1364.0 more points at r = 19.6166123391
1336.0 more points at r = 19.0034978659
1308.0 more points at r = 18.4032260869
1280.0 more points at r = 17.8156625053
1252.0 more points at r = 17.2406726094
1224.0 more points at r = 16.6781218719
1197.0 more points at r = 16.1278757499
1171.0 more points at r = 15.5897996844
1144.0 more points at r = 15.0637590998
1118.0 more points at r = 14.549619404
1092.0 more points at r = 14.0472459873
1066.0 more points at r = 13.5565042228
1041.0 more points at r = 13.0772594652
1016.0 more points at r = 12.6093770509
991.0 more points at r = 12.1527222975
967.0 more points at r = 11.707160503
943.0 more points at r = 11.2725569457
919.0 more points at r = 10.8487768835
896.0 more points at r = 10.4356855535
872.0 more points at r = 10.0331481711
850.0 more points at r = 9.64102993012
827.0 more points at r = 9.25919600154
805.0 more points at r = 8.88751153329
783.0 more points at r = 8.52584164948
761.0 more points at r = 8.17405144976
740.0 more points at r = 7.83200600865
718.0 more points at r = 7.49957037478
698.0 more points at r = 7.17660957023
677.0 more points at r = 6.86298858965
657.0 more points at r = 6.55857239952
637.0 more points at r = 6.26322593726
618.0 more points at r = 5.97681411037
598.0 more points at r = 5.69920179546
579.0 more points at r = 5.43025383729
561.0 more points at r = 5.16983504778
542.0 more points at r = 4.91781020487
524.0 more points at r = 4.67404405146
506.0 more points at r = 4.43840129415
489.0 more points at r = 4.21074660206
472.0 more points at r = 3.9909446055
455.0 more points at r = 3.77885989456
438.0 more points at r = 3.57435701766
422.0 more points at r = 3.37730048004
406.0 more points at r = 3.1875547421
390.0 more points at r = 3.00498421767
375.0 more points at r = 2.82945327223
360.0 more points at r = 2.66082622092
345.0 more points at r = 2.49896732654
331.0 more points at r = 2.34374079733
316.0 more points at r = 2.19501078464
303.0 more points at r = 2.05264138052
289.0 more points at r = 1.91649661498
276.0 more points at r = 1.78644045325
263.0 more points at r = 1.66233679273
250.0 more points at r = 1.54404945973
238.0 more points at r = 1.43144220603
226.0 more points at r = 1.32437870508
214.0 more points at r = 1.22272254805
203.0 more points at r = 1.1263372394
192.0 more points at r = 1.03508619218
181.0 more points at r = 0.94883272297
170.0 more points at r = 0.867440046252
160.0 more points at r = 0.790771268402
150.0 more points at r = 0.718689381062
140.0 more points at r = 0.65105725389
131.0 more points at r = 0.587737626612
122.0 more points at r = 0.528593100237
113.0 more points at r = 0.473486127367
105.0 more points at r = 0.422279001431
97.0 more points at r = 0.374833844693
89.0 more points at r = 0.331012594847
82.0 more points at r = 0.290676989951
75.0 more points at r = 0.253688551418
68.0 more points at r = 0.219908564725
61.0 more points at r = 0.189198057381
55.0 more points at r = 0.161417773651
49.0 more points at r = 0.136428145311
44.0 more points at r = 0.114089257597
38.0 more points at r = 0.0942608092113
33.0 more points at r = 0.0768020649149
29.0 more points at r = 0.0615717987589
24.0 more points at r = 0.0484282253244
20.0 more points at r = 0.0372289153633
17.0 more points at r = 0.0278306908104
13.0 more points at r = 0.0200894920319
10.0 more points at r = 0.013860207063
8.0 more points at r = 0.00899644813842
5.0 more points at r = 0.00535025545232
total number of points fitted in sphere: 55600.0
This seems to confirm that you really can't get to a million, no matter how you try...
There are many things you could do to improve your program - both algorithm, and code.
On the code side, one of the things that is REALLY slowing you down is that not only do you use a loop (which is slow), but in the line
P = [P;P1];
you append elements to an array. Every time that happens, MATLAB needs to find a new place to put the data, copying all the points in the process. This quickly becomes very slow. Preallocating the array with
P = zeros(1000000, 3);
keeping track of the number N of points you have found so far, and changing your calculation of distance to
D = pdist2(P1, P(1:N, :), 'euclidean');
would at least address that...
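A minimal sketch of those two changes together (variable names follow the question's code; the rest is assumed):

Nmax = 1000000;
P = zeros(Nmax, 3);        % preallocate once instead of growing P
N = 1;                     % number of accepted points so far
P(1,:) = rand(1,3);
% ... inside the generation loop, after building a candidate P1 and its radius r:
%     D = pdist2(P1, P(1:N,:), 'euclidean');   % compare only against filled rows
%     if all(D > 0.146*r^(2/3))
%         N = N + 1;
%         P(N,:) = P1;
%     end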
The other issue is that you check new points against all previously found points - so when you have 100 points, you check about 100x100 pairs, and for 1000 points it is 1000x1000. You can see that this is at least O(N^2) in the number of accepted points - and once you count the fact that you get more and more "misses" as the density goes up, it heads toward O(N^3) or worse. An O(N^3) algorithm with N = 1e6 takes at least 1e18 cycles; if you had a 4 GHz machine doing one comparison per clock cycle, you would need 2.5e8 seconds - roughly 8 years. You can try parallel processing, but that's just brute force - who wants that?
I recommend that you think about breaking the problem into smaller pieces (quite literally): for example, if you divide your sphere into little boxes about the size of your exclusion distance, and for each box you keep track of which points are in it, then you only need to check against points in "this" box and its immediate neighbors - 27 boxes in all. If your boxes are 0.5 units across, you would have 100x100x100 = 1M boxes. That seems like a lot, but now your computation time will be reduced drastically, as by the end of the algorithm you will have only about 1 point per box on average... Of course with the distance criterion you are using, you will have more points near the center, but that's a detail.
The data structure you would need would be a cell array of 100x100x100, where each cell contains the indices of the good points found so far "in that cell". The problem with a cell array is that it doesn't lend itself to vectorization. If instead you have the memory, you could allocate it as a 4D array of 10x100x100x100, assuming you will have no more than 10 points per cell (if you do, you will have to handle that separately; work with me here...). Use an index of -1 for slots not yet filled.
Then your check would be something like this:
% initializing:
bigList = zeros(10,102,102,102)-1;    % -1 marks an empty slot; one spare box on each side avoids hitting the edge
NPlist = zeros(102, 102, 102);        % track # valid points in each box
bottomcorner = [-25.5, -25.5, -25.5]; % boxes span from -25.5 to +25.5
cellSize = 0.5;
.
% in your loop:
P1 = [x, y, z];
cellCoords = ceil((P1 - bottomcorner)/cellSize);   % box that contains P1
pointsSoFar = bigList(:, cellCoords(1)+(-1:1), cellCoords(2)+(-1:1), cellCoords(3)+(-1:1));
pointsToCheck = pointsSoFar(pointsSoFar > 0);      % point indices in this box and its 26 neighbors - this is where the big gains come...
r = sqrt(sum(P1.^2));
D = pdist2(P1, P(pointsToCheck, :), 'euclidean');  % euclidean distance to nearby points only
if all(D > 0.146*r^(2/3))
    P(k,:) = P1;
    cci = sub2ind([102 102 102], cellCoords(1), cellCoords(2), cellCoords(3));
    NPlist(cci) = NPlist(cci) + 1;    % one more point in this box
    % you want to handle the case where this > 10!!!
    bigList(NPlist(cci), cci) = k;
    k = k+1;
end
....
I don't know if you can take it from here; if you can't, say so in the notes and I may have some time this weekend to code this up in more detail. There are ways to speed it up more with some vectorization, but it quickly becomes hard to manage.
I think that putting a larger number of points randomly in space, and then using the above for a giant vectorized culling, may be the way to go. But I recommend taking little steps first... if you can get the above to work at all well, you can then optimize further (array size, etc).
I found the reference - "Simulated Brain Tumor Growth Dynamics Using a Three-Dimensional Cellular Automaton", Kansal et al. (2000).
I agree it is puzzling - until you realize one important thing. They report their results in mm, but your code was written in cm. While that may seem insignificant, the formula for the "critical radius", rc = 0.146*r^(2/3), includes a constant, 0.146, whose dimensions are mm^(1/3), not cm^(1/3).
When I make that change in my python code to evaluate the number of possible lattice sites, the count jumps by a factor of 10. Now, they claimed that they were using a "jamming limit" of 0.38 - the point at which you really cannot find any more sites. If you include that limit, I predict no more than about 200k points could be found - still short of their 1.5M, but not quite so crazy.
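A back-of-the-envelope check of that factor (my own, in MATLAB): converting the criterion from mm to cm shows the cm-coded exclusion radius is 10^(1/3) (about 2.15) times too large, i.e. roughly 10 times too much excluded volume per point.

rc_mm = @(r_mm) 0.146 * r_mm.^(2/3);   % criterion with r in mm, as in the paper
rc_cm = @(r_cm) rc_mm(10*r_cm) / 10;   % the same criterion expressed in cm
ratio = (0.146 * 25^(2/3)) / rc_cm(25) % ~2.15 = 10^(1/3) at r = 25
ratio^3                                % ~10: factor in excluded volume, hence in lattice sites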
You might consider contacting the authors to discuss this with them? If you want to include me in the conversation, you can email me at: SO (just two letters) at my handle name dot united states. Same domain as where I posted links above...

Matlab Loss of precision computing variance

I have this vector: [10000000000 10000000001 10000000002]
and I try to calculate its variance using this formula (a recurrence based on the running mean).
I calculate it, but the answer that I get is 3.33333333466667e+19,
which is wrong, because the correct answer is 1.
What am I doing wrong?
the MATLAB code is
total=0;
m1=data(1);
m2=(data(2)-m1)/2;
q1=0;
q2=q1+(((2-1)/2)*((data(2)-m1)^2));
q3=q2+(((3-1)/3)*((data(3)-m2)^2));
variance=q3/(3-1)
Thanks
M is the running mean; it is supposed to be
M(k) = ((k-1)*M(k-1) + x(k)) / k
thus
m1=data(1);
m2=(data(2)+m1)/2;
q1=0;
q2=q1+(((2-1)/2)*((data(2)-m1)^2));
q3=q2+(((3-1)/3)*((data(3)-m2)^2));
variance=q3/(3-1)
variance =
1
What the heck, I'm feeling generous - here is the complete code for data of generic size:
sizle = size(data,2);
M = zeros(1, sizle);
Q = M;
Variance = Q;
M(1) = data(1);
for i = 2:sizle
    M(i) = ((i-1)*M(i-1) + data(i)) / i;
    Q(i) = Q(i-1) + (i-1)*((data(i) - M(i-1))^2) / i;
    Variance(i) = Q(i)/(i-1);
end
Variance(end)
var(data)
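For reuse, the same recurrence can be wrapped in a small function (my own packaging of the code above, not part of the original answer):

function v = running_var(data)
% Sample variance via the running-mean recurrence; robust to large offsets.
    n = numel(data);
    M = data(1);     % running mean
    Q = 0;           % running sum of squared deviations
    for i = 2:n
        Q = Q + (i-1)*((data(i) - M)^2)/i;   % uses the mean from the previous step
        M = ((i-1)*M + data(i))/i;
    end
    v = Q/(n-1);
end

% running_var([10000000000 10000000001 10000000002])   % returns 1
% var([10000000000 10000000001 10000000002])           % returns 1 as well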