For some reason, org-mode left-aligns my display equations when I run org-latex-preview (C-c C-x C-l). But the margin between each equation and its equation number is calculated as if the equation were centered, which ruins the alignment of the equation numbers.
Is it possible to turn on centering for displayed equations, so that the equation numbers line up (as with AUCTeX)?
Here is a sample .org document with display equations:
A numbered display equation:
\begin{equation}
\frac{\partial u}{\partial t}-\alpha\nabla^2u=0\tag{1}
\end{equation}
A second numbered equation:
\begin{equation}
E=MC^2\tag{2}
\end{equation}
And here is a screenshot of org-mode after running org-latex-preview:
It looks like the alignment can be altered by editing org-format-latex-header.
Here are three different configurations:
Option (1), equations on the left: \documentclass[reqno]{article}
Option (2), equations on the right: \documentclass[leqno]{article}
Option (3), equations justified: \documentclass[fleqn]{article}
So option (2) actually centers display equations, but only if they are numbered! Option (3) probably looks the cleanest, with equations on the left and numbers aligned on the right. And option (1), for some reason, refuses to center.
I notice, however, that enabling the clean view with M-x org-indent-mode throws everything off. Here we can see how the alignment of the equation numbers in option (3) changes:
It would be great if someone could explain why option (1) does not center and why the unnumbered equations in (2) remain left-aligned. Maybe someone can also offer some Elisp to adjust things for org-indent-mode?
Best,
-Adam
I want to approximate the integral of the function x*sin(x) from 0 to 1 with:
Left rectangular rule
Right rectangular rule
Midpoint rule
Trapezoidal rule
For the first one, I use the following piece of code, and it works nicely:
n=1000; a=0; b=1; f=@(x)x.*sin(x);
x=linspace(a,b,n+1);
h=(b-a)/n;
q=sum(h*f(x(1:n)))
But I'm stuck on how to proceed. For the first one (the left rectangular rule), they use the formula Q = h*sum_{i=1}^{n} f(x_{i-1}).
For the right rectangular rule, they use Q = h*sum_{i=1}^{n} f(x_i).
Does the x(1:n) imply f(x_{i-1})? I'm especially lost on how I should handle the sum for the third point, using the midpoint formula Q = h*sum_{i=1}^{n} f((x_{i-1}+x_i)/2).
For the 4th problem, the formula that is used is the trapezoidal rule Q = (h/2)*sum_{i=1}^{n} (f(x_{i-1}) + f(x_i)).
There are probably other ways to do this, but I want to apply the code I made for the first problem, and expand it onto the other problems.
The second problem, right rectangular rule can be computed using the same linspace, but from 2 to n+1. For the midpoint formula, one has to compute the values in between the current linspace, as the formulas quite elegantly show it. For the trapezoid, one has to sum the areas of n semi-rectangles (no idea of the correct term), which are just the area of the rectangles, whose height is the average of the endpoints.
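Putting the answer above into code, extending the snippet from the question (same n, a, b, f, x, h; the exact integral sin(1) - cos(1), roughly 0.3012, is a handy check):

```matlab
n = 1000; a = 0; b = 1;
f = @(x) x.*sin(x);                    % note: @, not #
x = linspace(a, b, n+1);               % x(1) = x_0, ..., x(n+1) = x_n
h = (b - a)/n;

q_left  = h * sum(f(x(1:n)));                   % left rectangular rule
q_right = h * sum(f(x(2:n+1)));                 % right rectangular rule
q_mid   = h * sum(f((x(1:n) + x(2:n+1))/2));    % midpoint rule
q_trap  = h * sum((f(x(1:n)) + f(x(2:n+1)))/2); % trapezoidal rule
```

All four converge to sin(1) - cos(1); the midpoint and trapezoidal sums do so much faster (error O(h^2) rather than O(h)).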
A multilinear function is linear with respect to each of its variables. For example, x1 + x2*x1 - x4*x3 is a multilinear function. Working with them requires proper data structures and algorithms for fast assignment, factorization, and basic arithmetic.
Does there exist some library for processing multilinear functions in Matlab?
No, not that much so.
For example, interp2 and interpn have 'linear' methods, which are effectively as you describe. But that is about the limit of what is supplied, and there is nothing for more general functions of this form.
Anyway, this class of functions has some significant limitations. For example, as applied to color image processing, they are often a terribly poor choice because of what they do to neutrals in your image. Other functional forms are strongly preferred there.
Of course, there is always the symbolic toolbox for operations such as factorization, etc., but that tool is not a speed demon.
Edit: (other functional forms)
I'll use a bilinear form as the example. This is the scheme that is employed by tools like Photoshop when bilinear interpolation is chosen. Within the square region between a group of four pixels, we have the form
f(x,y) = f_00*(1-x)*(1-y) + f_10*x*(1-y) + f_01*(1-x)*y + f_11*x*y
where x and y vary over the unit square [0,1]X[0,1]. I've written it here as a function parameterized by the values of our function at the four corners of the square. Of course, those values are given in image interpolation as the pixel values at those locations.
As has been said, the bilinear interpolant is indeed linear in x and in y. If you hold either x or y fixed, then the function is linear in the other variable.
An interesting question is what happens along the diagonal of the unit square, that is, as we follow the path between points (0,0) and (1,1). Since x = y along this path, substitute x for y in that expression and expand.
f(x,x) = f_00*(1-x)*(1-x) + f_10*x*(1-x) + f_01*(1-x)*x + f_11*x*x
= (f_11 + f_00 - f_10 - f_01)*x^2 + (f_10 + f_01 - 2*f_00)*x + f_00
So we end up with a quadratic polynomial along the main diagonal. Likewise, had we followed the other diagonal, it too would have been quadratic in form. So despite the "linear" nature of this beast, it is not truly linear along any linear path. It is only linear along paths that are parallel to the axes of the interpolation variables.
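The quadratic-along-the-diagonal claim is easy to verify numerically; the corner values below are arbitrary:

```matlab
% Check that the bilinear interpolant is quadratic along x == y.
f00 = 1; f10 = 4; f01 = 2; f11 = 7;    % arbitrary corner values
bilin = @(x,y) f00*(1-x).*(1-y) + f10*x.*(1-y) + f01*(1-x).*y + f11*x.*y;
t = linspace(0, 1, 101);
p = polyfit(t, bilin(t, t), 2);        % exact quadratic: zero residual
% p reproduces [f11+f00-f10-f01, f10+f01-2*f00, f00], here [2 4 1]
```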
In three dimensions, which is where we really care about this behavior for color space interpolation, that main diagonal will now show a cubic behavior along that path, despite that "linear" name for the function.
Why are these diagonals important? If our mapping takes colors from an RGB color space to some other space, then the neutrals in your image live along the path R=G=B, which is the diagonal of the cube. The problem is that when you interpolate an image with a neutral gradient, the result after color space conversion moves from neutral to some non-neutral color as the gradient passes along the diagonal through one cube after another. Sadly, the human eye is very good at seeing deviations from neutrality, so this behavior is critically important. (By the way, this is what happens inside the guts of your color ink jet printer, so people do care about it.)
The alternative chosen is to dissect the unit square into a pair of triangles, with the shared edge along that main diagonal. Linear interpolation now works inside a triangle, and along that edge, the interpolant is purely a function of the endpoints of that shared edge.
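A minimal sketch of that two-triangle scheme (the helper functions below are hypothetical, not toolbox functions; corner values are arbitrary):

```matlab
% Linear interpolation on the two triangles of the unit square, split
% along the main diagonal. Along x == y the result depends only on the
% diagonal corners f00 and f11, so neutrals stay neutral.
f00 = 1; f10 = 4; f01 = 2; f11 = 7;    % arbitrary corner values
lower = @(x,y) f00 + (f10 - f00).*x + (f11 - f10).*y;  % triangle x >= y
upper = @(x,y) f00 + (f11 - f01).*x + (f01 - f00).*y;  % triangle x <= y
tri   = @(x,y) (x >= y).*lower(x,y) + (x < y).*upper(x,y);
% Along the diagonal: tri(t,t) = f00 + (f11 - f00)*t, exactly linear.
```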
In three dimensions, the same thing happens, except we use a dissection of the unit cube into SIX tetrahedra, all of which share the main diagonal of the cube. The difference is indeed critically important, with a dramatic reduction in the deviation of your neutral gradients from neutrality. As it turns out, the eye is NOT so perceptive to deviations along other gradients, so the loss along other paths does not hurt nearly so much. It is neutrals that are crucial, and the colors we must reproduce as accurately as possible.
So IF you do color space interpolation using mappings defined by what are commonly called 3-d lookup tables, this is the agreed way to do that interpolation (agreed upon by the ICC, an acronym for the International Color Consortium.)
So I have this matrix here, and it is of size 13 x 8198. (I have called it 'blah').
This is a sparse matrix, in that most of its entries are 0. When I do an imagesc(blah), I get the following image:
Clearly this is worthless, because I cannot clearly see the non-zero elements. I have tried playing around with the color scaling, but to no avail.
Anyway, I was wondering if there might be a nicer way to visualize this matrix in MATLAB? I am designing an algorithm and would like to be able to see certain things in the matrix.
Thanks!
Try spy; it's intended for exactly that.
The problem is that spy makes the axes equal, and your data is 13 x 8198, so the first axis is almost invisible compared to the second one. daspect can fix that.
>> spy(blah)
>> daspect([400 1 1])
spy doesn't have an option to plot differently by signs. One option would be to edit the source to add that capability (it's implemented in matlab, and you can get the source by running edit spy). An easier hack, though, is to just spy the positive and negative parts separately:
>> daspect([400 1 1]);
>> hold on;
>> spy(max(blah, 0), 'b');
>> spy(min(blah, 0), 'r');
This has the unfortunate side effect of making places where positives and negatives are close together appear dominated by the second one plotted, here the negatives (e.g. in the top rows of your matrix). I'm not sure what to do about that other than maybe fiddling with marker sizes. You could of course do it in both orders and compare.
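If you prefer an image over markers, a sign map via imagesc is another sketch (the sprandn placeholder stands in for your matrix, and the three-row colormap is an arbitrary choice):

```matlab
% Show the sign pattern: blue = negative, white = zero, red = positive.
blah = sprandn(13, 8198, 0.01);        % placeholder for your 13 x 8198 matrix
imagesc(sign(full(blah)));
colormap([0 0 1; 1 1 1; 1 0 0]);       % colors for -1, 0, +1
caxis([-1 1]);
daspect([400 1 1]);                    % same aspect trick as above
```

Unlike the two-spy hack, a pixel here shows whichever sign that entry actually has, though nearby positives and negatives can still blend at this aspect ratio.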
It's just a basic question. I am fitting lines to scatter points using polyfit.
I have some cases where my scatter points have the same X values, and polyfit can't fit a line to them. There has to be something that can handle this situation. After all, it's just a line fit.
I can try swapping X and Y and then fitting a line. Is there an easier method? I have lots of sets of scatter points and want a general way to fit lines.
The main goal is to find good-fit lines and drop non-linear features.
First of all, this happens due to the method of fitting that you are using. When doing polyfit, you are using the least-squares method on Y distance from the line.
(source: une.edu.au)
Obviously, it will not work for vertical lines. By the way, even when you have something close to vertical lines, you might get numerically unstable results.
There are 2 solutions:
Swap x and y, as you said, if you know that the line is almost vertical. Afterwards, compute the inverse linear function.
Use least-squares on the perpendicular distance from the line, instead of the vertical distance (see image below; more explanation here)
(from MathWorld - A Wolfram Web Resource: wolfram.com)
Polyfit uses linear ordinary least-squares approximation and will fail on a vertical line (all abscissae equal), as the resulting Vandermonde matrix is rank deficient. I would suggest trying to find something of a more statistical nature.
If you wish to research Andrey's method, it usually goes by the names total least squares or orthogonal distance regression: http://en.wikipedia.org/wiki/Total_least_squares
I would tentatively also put forward the possibility of detecting when you have simultaneous x values, then rotating your data about the origin, fitting the line and then transform the line back. I could not say how poorly this would perform and only you could decide if it was an option based on your accuracy requirements.
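The total-least-squares idea mentioned above can be sketched with an SVD of the centered data; unlike polyfit, it handles a perfectly vertical set of points (the data below are a made-up degenerate example):

```matlab
% Orthogonal (total least squares) line fit via SVD.
x = [2; 2; 2; 2];                      % all abscissae equal: vertical line
y = [1; 2; 3; 4];
xm = mean(x); ym = mean(y);
[~, ~, V] = svd([x - xm, y - ym], 0);
n = V(:, 2);                           % unit normal to the best-fit line
% The line is n(1)*(x - xm) + n(2)*(y - ym) = 0; here n(2) is 0,
% so the fit is the vertical line x = 2.
```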
Starting from the plot of a curve, is it possible to obtain the parametric equation of that curve?
In particular, say x = {1 2 3 4 5 6 ...} are the x-axis values and y = {a b c d e f ...} the corresponding y-axis values. I have the plot(x,y).
Now, how can I obtain the equation that describes the plotted curve? Is it possible to display the parametric equation starting from the spline interpolation?
Thank you
If you want to display a polynomial fit function alongside your graph, the following example should help:
x=-3:.1:3;
y=4*x.^3-5*x.^2-7.*x+2+10*rand(1,61);
p=polyfit(x,y,3); %# third order polynomial fit, p=[a,b,c,d] of ax^3+bx^2+cx+d
yfit=polyval(p,x); %# evaluate the curve fit over x
plot(x,y,'.')
hold on
plot(x,yfit,'-g')
equation=sprintf('y=%2.2gx^3+%2.2gx^2+%2.2gx+%2.2g',p); %# format string for equation
equation=strrep(equation,'+-','-'); %# replace any redundant signs
text(-1,-80,equation) %# place equation string on graph
legend('Data','Fit','Location','northwest')
Last year, I wrote a set of three blog posts for Loren, on the topic of modeling/interpolating a curve. They may cover some of your questions, although I never did find the time to add another three posts to finish the topic to my satisfaction. Perhaps one day I will get that done.
The problem is to recognize that there are infinitely many curves that will interpolate a set of data points. A spline is a nice choice, because it can be made well behaved. However, that spline has no simple "equation" to write down. Instead, it has many polynomial segments, pieced together to be well behaved.
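To make the "many polynomial segments" point concrete, here is a small sketch (made-up data) of how MATLAB's spline stores those pieces in pp form:

```matlab
% A cubic spline interpolant has no single equation, but its piecewise
% polynomial coefficients can be inspected directly.
x = 1:6;
y = [0 1 0 2 1 3];        % made-up data points
pp = spline(x, y);        % pp form: breakpoints plus per-segment coefs
disp(pp.breaks)           % segment endpoints: 1 2 3 4 5 6
disp(pp.coefs)            % one row [a b c d] of cubic coefs per segment
yq = ppval(pp, 2.5);      % evaluate the spline anywhere
```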
You're asking for the function/mapping between two data sets. If you know the physics involved, the function can be derived by modeling the system: write down the differential equations and solve them.
Left alone with just two data series, an input and an output with a 'black box' in between, you may approximate the series with an arbitrary function. You may start with a polynomial function
y = a*x^2 + b*x + c
Given your input vector x and your output vector y, the parameters a, b, c must be determined by applying a fitting procedure.
There is an example of Polynomial Curve Fitting in the MathWorks documentation.
Curve Fitting Tool provides a flexible graphical user interface where you can interactively fit curves and surfaces to data and view plots. You can:
Create, plot, and compare multiple fits
Use linear or nonlinear regression, interpolation, local smoothing regression, or custom equations
View goodness-of-fit statistics, display confidence intervals and residuals, remove outliers, and assess fits with validation data
Automatically generate code for fitting and plotting surfaces, or export fits to the workspace for further analysis