Microfacet Theory: A CDF for the normal distribution given an incident direction

The paper Microfacet Models for Refraction through Rough Surfaces, page 3 section 3.1, describes the NDF. The problem: given an incident ray v, I'd like to build a CDF F such that F(θ) gives the probability that a sampled normal m lies within angle θ of v, i.e. dot(v, m) >= cos(θ) with probability F(θ).
For v = n, where n is the macrosurface normal, this is simple (assuming an isotropic NDF): F(θ) = ∫₀^2π ∫₀^θ D(m) cos θ' sin θ' dθ' dφ, which is very easy to compute for the common GGX or Beckmann NDFs.
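To make the easy case concrete, here is a minimal Python sketch for GGX when v = n (my own illustration, not from the paper's text; it assumes the standard normalization ∫ D(m) cos θ dω = 1). The closed form is checked against direct numerical integration of the CDF integral above:

```python
import numpy as np

def ggx_cdf(theta, alpha):
    # Closed-form CDF of the microfacet angle for isotropic GGX when v = n:
    # F(theta) = sin^2(theta) / (cos^2(theta) * (alpha^2 - 1) + 1)
    c2 = np.cos(theta) ** 2
    return np.sin(theta) ** 2 / (c2 * (alpha ** 2 - 1.0) + 1.0)

def ggx_cdf_numeric(theta, alpha, steps=100001):
    # Trapezoidal integration of 2*pi * D(m) * cos(t) * sin(t) over [0, theta]
    t = np.linspace(0.0, theta, steps)
    c2 = np.cos(t) ** 2
    D = alpha ** 2 / (np.pi * (c2 * (alpha ** 2 - 1.0) + 1.0) ** 2)
    f = 2.0 * np.pi * D * np.cos(t) * np.sin(t)
    dt = t[1] - t[0]
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dt
```

F(π/2) evaluates to 1, confirming the normalization. For general v no such closed form exists, which is exactly the difficulty the question is about.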
For general v the paper gives a formula, but it is harder to integrate and still not in the form I need.
Any help would be appreciated.

Symmetric Regression In Stan

I have two vectors of data points (gene expression in tissue A and tissue B) and I want to see if there is any systematic bias along their magnitude (i.e. whether gene X has the same expression in A and B).
The idea was to build a simple regression model in Stan and see how much the posterior for the slope (beta) overlaps with 1.
model {
  for (n in 1:N) {
    y[n] ~ normal(alpha[i[n]] + beta[i[n]] * x[n], sigma[i[n]]);
  }
}
However, depending on which vector is x and which is y, I get different results: one slope is about 1 and the other is not (see image, where x and y are swapped and the colored lines represent the regressions I get from the model; gray is slope 1). As I found out, this is typical for regression methods like ordinary least squares, which makes sense when one variable depends on the other. Here, however, there is no such dependency and both vectors are "equal".
Now the question is, what would be an appropriate model to perform a symmetrical regression in stan.
Following the suggestion from LukasNeugebauer to standardize the data first and work without an intercept does not solve the problem.
I cheated a bit and found a solution:
When you rotate the coordinate system by 45 degrees, the new y-axis (y') represents the information of x and y in equal amounts. Therefore, assuming variance only along the new y-axis involves both x and y symmetrically.
x' = x*cos((pi/180)*45) + y*sin((pi/180)*45)
y' = -x*sin((pi/180)*45) + y*cos((pi/180)*45)
Fitting the model above to the rotated data now gives symmetric results, where a slope of 0 in the rotated system corresponds to a slope of 1 in the original one.
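As a sanity check outside of Stan, here is a small Python sketch (with made-up data) showing that ordinary least squares is asymmetric in x and y, while the slope in the rotated frame is near 0 regardless of which vector comes first:

```python
import numpy as np

rng = np.random.default_rng(0)
true_expr = rng.normal(size=500)                  # latent expression (made up)
a = true_expr + rng.normal(scale=0.3, size=500)   # measurement in tissue A
b = true_expr + rng.normal(scale=0.3, size=500)   # measurement in tissue B

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Ordinary regression: both slopes are attenuated below 1, and they are
# not reciprocal, so the answer depends on the choice of x and y.
s_ab, s_ba = slope(a, b), slope(b, a)

# Rotate by 45 degrees; y' carries x and y in equal amounts.
c = np.cos(np.pi / 4)
s_rot_1 = slope(a * c + b * c, -a * c + b * c)    # a plays the role of "x"
s_rot_2 = slope(b * c + a * c, -b * c + a * c)    # b plays the role of "x"
# s_rot_1 and s_rot_2 differ only in sign and are both near 0,
# i.e. near slope 1 in the original coordinates.
```

The same rotation can be applied to the data before passing it to Stan, leaving the model itself unchanged.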

Find point on the Spline Curve in Flutter

I have some discrete points from which I can plot a spline curve (Syncfusion chart) in Flutter. Now I need to find points on that curve, i.e. given a value of x, I need the corresponding y. I am stuck here and don't know which algorithm to apply. How is the graph drawn from discrete points in the first place? There should be some algorithm that can be applied here to get those points.
Please help me out
Thanks in advance!!
I am here with a solution to this problem. The idea goes like this: if we have n data points, assume a polynomial of order n-1 (e.g. for 3 points the polynomial is a quadratic of the form y = Ax² + Bx + C). With 3 unknowns (A, B, C) we need 3 equations in A, B and C, which we get by substituting the 3 data points into the polynomial. These three equations can then be solved with Cramer's rule, giving the equation of the curve.
Since Cramer's rule extends to any number of equations, a polynomial of any order can be obtained this way, although the method is laborious to apply by hand, and high-order interpolating polynomials can oscillate between the data points.
This will give you the curve for a given number of points
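A sketch of this approach in Python, with a linear solver standing in for hand-written Cramer's rule (the sample points are made up). Note that the Syncfusion chart itself draws a piecewise spline rather than a single polynomial, so this reproduces the idea above, not necessarily the chart's exact curve:

```python
import numpy as np

# n data points -> interpolating polynomial of degree n-1
pts = [(1.0, 2.0), (2.0, 3.0), (4.0, 1.0)]    # example points (assumption)
xs, ys = map(np.array, zip(*pts))

# Vandermonde matrix: the same linear system Cramer's rule would solve
V = np.vander(xs, len(pts))
coeffs = np.linalg.solve(V, ys)               # [A, B, C] for y = A x^2 + B x + C

def y_on_curve(xq):
    # Evaluate the fitted polynomial at a query x
    return np.polyval(coeffs, xq)
```

The curve passes exactly through every input point, and `y_on_curve` answers the "given x, find y" question for any x in between.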

PCA with correlated dimensions

I am trying to describe the limit cycle of the waveforms of the 'arms' of a swimming alga in terms of the shape scores of its principal components. So I have the shapes of the arms stored in terms of xy coordinates at nodes distributed along the arc length of the arm. I am trying to do a principal component analysis on this, but I am struggling a bit.
Before, I had the shapes described in terms of curvature along the arc length. Each curve had 25 nodes, so I got a nice 25x25 covariance matrix. The analysis was very straightforward, everything worked fine.
Now for reasons irrelevant here, it is more convenient to have the curves described in terms of x and y values of the nodes. So 25 nodes with an x and a y value. So 50 features per sample, although features 1:25 and 26:50 form 'sets'.
This can be viewed as a matrix of n samples with m nodes with k features (3D), or simply as a 2D matrix with n samples with k features, where x and y are separate features.
Simply concatenating the x and y vectors and doing PCA on that did not really help me - I don't understand what I am doing anymore. I get the basics of PCA, but how to do this for a more complex data set is beyond me. Also, I am not too great at matrix algebra, so a more intuitive explanation is welcome.
The question: Am I doing entirely the wrong thing or is there some way to retrieve shape modes of 25 nodes with an x and y value?
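For what it's worth, concatenating x and y into one 50-feature vector per sample is a standard way to do shape PCA: each 50-element principal component can be reshaped back into 25 (x, y) pairs and read as a shape mode. A minimal Python sketch with synthetic data (the two planted modes are my own invention):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_nodes = 200, 25
t = np.linspace(0.0, 1.0, n_nodes)

# Two planted shape modes: one deforming x only, one deforming y only
mode_x = np.concatenate([np.sin(np.pi * t), np.zeros(n_nodes)])
mode_y = np.concatenate([np.zeros(n_nodes), np.cos(np.pi * t)])
scores = rng.normal(size=(n_samples, 2))
X = scores @ np.vstack([mode_x, mode_y])      # rows: [x_1..x_25, y_1..y_25]

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

pc_scores = Xc @ Vt[:2].T                     # per-sample shape scores
shape_modes = Vt[:2].reshape(2, 2, n_nodes)   # (mode, x-or-y, node)
```

Each row of Vt is a 50-vector; reshaping splits it back into the x-profile and y-profile of one shape mode, which can be plotted as a deformation of the mean shape, just as with the 25-feature curvature version.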

Normalized cut: what does this code do?

I'm going through some MATLAB code for Normalized Cut for image segmentation, and I can't figure out what this code below does:
% degrees and regularization
d = sum(abs(W),2);
dr = 0.5 * (d - sum(W,2));
d = d + offset * 2;
dr = dr + offset;
W = W + spdiags(dr,0,n,n);
offset is defined to be 0.5.
W is a square, sparse, symmetric matrix (w_ij is defined by the similarity between pixels i and j).
W is then used to solve the eigenvalue problem D^(-1/2) (D - W) D^(-1/2) x = \lambda x, where D = diag(d).
The w_ij's are all positive because of the way the weights are defined, so dr is a vector of zeros.
What are the offsets for? How are they chosen? What's the reason behind offset*2? I have the feeling this is to avoid some potential pitfalls in certain cases. What could these be?
Any help would be really appreciated, thanks!
I believe you came across a piece of code written by Prof Stella X Yu.
Indeed, when W is positive this code has no effect and this is the usual case for NCuts.
However, in a CVPR 2001 paper Yu and Shi extend NCuts to handle negative interactions as well as positive ones. In these circumstances dr (r for "repulsion") plays a significant role.
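A small Python translation of the snippet (the toy W matrices are my own) shows what dr does: for a non-negative W it reduces to the constant offset, while for a W with negative entries each dr_i picks up the total magnitude of row i's negative weights, which is then added to the diagonal:

```python
import numpy as np

def regularize(W, offset=0.5):
    d = np.abs(W).sum(axis=1)         # degrees computed from |w_ij|
    dr = 0.5 * (d - W.sum(axis=1))    # per-row total of negative weights
    d = d + offset * 2
    dr = dr + offset
    return W + np.diag(dr), d         # same effect as W + spdiags(dr,0,n,n)

W_pos = np.array([[0.0, 0.8], [0.8, 0.0]])
W_mix = np.array([[0.0, 0.8, -0.3],
                  [0.8, 0.0, 0.5],
                  [-0.3, 0.5, 0.0]])

W1, d1 = regularize(W_pos)   # only the constant diagonal shift of 0.5 remains
W2, d2 = regularize(W_mix)   # rows touching the -0.3 entry get an extra 0.3
```

With purely positive weights the diagonal shift is uniform and essentially harmless regularization; with repulsion it re-weights exactly the rows involved in negative interactions.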
Speaking of negative weights, I must say that personally I do not agree with the approach of Yu and Shi.
I strongly believe that when there is repulsion information, Correlation Clustering is a far better objective function than the extended NCuts objective. Results of some image segmentation experiments I conducted with negative weights also suggested that the Correlation Clustering objective outperforms the extended NCuts.

Find minimum distance between a point and a curve in MATLAB

I would like to use a MATLAB function to find the minimum distance between a point and a curve. The curve is described by a complicated function that is not quite smooth, so I hope to use an existing MATLAB tool to compute this. Do you happen to know one?
When someone says "it's complicated", the answer is always complicated too, since I never know exactly what you have. So I'll describe some basic ideas.
If the curve is a known nonlinear function, then use the symbolic toolbox to start with. For example, consider the function y=x^3-3*x+5, and the point (x0,y0) =(4,3) in the x,y plane.
Write down the square of the distance. Euclidean distance is easy to write.
(x - x0)^2 + (y - y0)^2 = (x - 4)^2 + (x^3 - 3*x + 5 - 3)^2
So, in MATLAB, I'll do this partly with the symbolic toolbox. The minimal distance must lie at a root of the first derivative.
syms x
distpoly = (x - 4)^2 + (x^3 - 3*x + 5 - 3)^2;
r = roots(sym2poly(diff(distpoly)))
r =
-1.9126
-1.2035
1.4629
0.82664 + 0.55369i
0.82664 - 0.55369i
I'm not interested in the complex roots.
r(imag(r) ~= 0) = []
r =
-1.9126
-1.2035
1.4629
Which one is a minimizer of the distance squared?
subs(distpoly, x, r(1))
ans =
35.5086
subs(distpoly, x, r(2))
ans =
42.0327
subs(distpoly, x, r(3))
ans =
6.9875
That is the square of the distance, here minimized by the last root in the list. Given that minimal location for x, of course we can find y by substitution into the expression for y(x)=x^3-3*x+5.
subs(x^3 - 3*x + 5, x, r(3))
ans =
3.7419
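The same computation can be done without the symbolic toolbox at all; here is a Python/NumPy sketch of the identical steps (square the distance as a polynomial, differentiate, take the real roots, pick the minimizer):

```python
import numpy as np

x0, y0 = 4.0, 3.0
curve = np.array([1.0, 0.0, -3.0, 5.0])           # y = x^3 - 3x + 5

# Squared distance as a polynomial in x: (x - x0)^2 + (y(x) - y0)^2
shifted = curve.copy()
shifted[-1] -= y0                                  # x^3 - 3x + (5 - y0)
d2 = np.polyadd(np.polymul([1.0, -x0], [1.0, -x0]),
                np.polymul(shifted, shifted))

r = np.roots(np.polyder(d2))                       # critical points
r = r[np.abs(r.imag) < 1e-9].real                  # discard complex roots
best = r[np.argmin(np.polyval(d2, r))]             # minimizer among real roots
min_dist_sq = np.polyval(d2, best)
y_best = np.polyval(curve, best)
```

This reproduces the three real roots above and lands on x ≈ 1.4629, y ≈ 3.7419 with a squared distance of ≈ 6.9875.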
So it is fairly easy if the curve can be written in a simple functional form as above. For a curve that is known only from a set of points in the plane, you can use my distance2curve utility. It can find the point on a space curve spline interpolant in n-dimensions that is closest to a given point.
For other curves, say an ellipse, the solution is perhaps most easily solved by converting to polar coordinates, where the ellipse is easily written in parametric form as a function of polar angle. Once that is done, write the distance as I did before, and then solve for a root of the derivative.
A difficult case to solve is where the function is described as not quite smooth. Is this noise or is it a non-differentiable curve? For example, a cubic spline is "not quite smooth" at some level. A piecewise linear function is even less smooth at the breaks. If you actually just have a set of data points that have a bit of noise in them, you must decide whether to smooth out the noise or not. Do you wish to essentially find the closest point on a smoothed approximation, or are you looking for the closest point on an interpolated curve?
For a list of data points, if your goal is to not do any smoothing, then a good choice is again my distance2curve utility, using linear interpolation. If you wanted to do the computation yourself, if you have enough data points then you could find a good approximation by simply choosing the closest data point itself, but that may be a poor approximation if your data is not very closely spaced.
If your problem does not lie in one of these classes, you can still often solve it using a variety of methods, but I'd need to know more specifics about the problem to be of more help.
There are two ways you could go about this.
The easy way, which works if your curve is reasonably smooth and you don't need too high precision, is to evaluate your curve at a dense set of points and simply find the minimum distance:
t = (0:0.1:100)';
minDistance = sqrt( min( sum( bsxfun(@minus, [x(t), y(t)], yourPoint).^2, 2) ) );
The harder way is to minimize a function of t (or x) that describes the distance:
distance = @(t) sum( (yourPoint - [x(t), y(t)]).^2 );
%# use the sampled t at the minimum from above as a decent starting guess
[~, idx] = min( sum( bsxfun(@minus, [x(t), y(t)], yourPoint).^2, 2) );
tAtMin = fminsearch(distance, t(idx));
minDistanceFitted = sqrt(distance(tAtMin));
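The same two-step recipe (dense sampling, then local refinement) looks like this in Python, using the cubic from the other answer as a stand-in curve and SciPy's bounded scalar minimizer in place of fminsearch:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Parametric curve and query point (stand-in example)
x = lambda t: t
y = lambda t: t**3 - 3.0 * t + 5.0
px, py = 4.0, 3.0

dist_sq = lambda t: (x(t) - px) ** 2 + (y(t) - py) ** 2

# Step 1: dense sampling for a starting guess
t = np.linspace(-3.0, 3.0, 601)
t0 = t[np.argmin(dist_sq(t))]

# Step 2: refine locally around the sampled minimum
res = minimize_scalar(dist_sq, bounds=(t0 - 0.5, t0 + 0.5), method="bounded")
t_min, min_dist = res.x, np.sqrt(res.fun)
```

The grid step bounds how far the sampled minimum can be from the true one, so a local minimizer restricted to that neighborhood is enough to polish the answer.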