Matlab: Solving a logarithmic equation

I have the following equation that I want to solve with respect to a:
x = (a-b-c+d)/log((a-b)/(c-d))
where x, b, c, and d are known. I used Wolfram Alpha to solve the equation, and the result is:
a = b-x*W(-((c-d)*exp(d/x-c/x))/x)
where W is the product log function (the Lambert W function). It might be easier to see it on the Wolfram Alpha page.
I used Matlab's built-in lambertW function to solve the equation. This is rather slow, and it is the bottleneck in my script. Is there another, quicker way to do this? It doesn't have to be accurate down to the 10th decimal place.
EDIT:
I had no idea that this equation was so hard to solve. Here is a picture illustrating my problem. The temperatures b through d, plus the LMTD, vary in each time step but are known. Heat is transferred from the red line (CO2) to the blue line (water). I need to find temperature "a". I didn't know that this was so hard to calculate! :P

Another option is based on the simpler Wright ω function:
a = b - x.*wrightOmega(log(-(c-d)./x) - (c-d)./x);
provided that d ~= c + x.*wrightOmega(log(-(c-d)./x) - (c-d)./x) (i.e., d ~= c+b-a; in that case x is 0/0). This is equivalent to the principal branch of the Lambert W function, W0, which I think is the solution branch you want.
Just as with lambertW, there's a wrightOmega function in the Symbolic Math Toolbox. Unfortunately, this will probably also be slow for a large number of inputs. However, you can use my wrightOmegaq on GitHub for complex-valued floating-point (double- or single-precision) inputs. The function is more accurate, fully vectorized, and can be three to four orders of magnitude faster than using the built-in wrightOmega for floating-point inputs.
For those interested, wrightOmegaq is based on this excellent paper:
Piers W. Lawrence, Robert M. Corless, and David J. Jeffrey, "Algorithm 917: Complex Double-Precision Evaluation of the Wright omega Function," ACM Transactions on Mathematical Software, Vol. 38, No. 3, Article 20, pp. 1-17, Apr. 2012.
This algorithm improves on the cubic convergence of Halley's method used in Cleve Moler's Lambert_W: it uses a root-finding method with fourth-order convergence (Fritsch, Shafer, & Crowley, 1973) to converge in no more than two iterations.
Also, to further speed up Moler's Lambert_W using series expansions, see my answer at Math.StackExchange.
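Putting the above together, here is a minimal sketch (with made-up temperatures rather than values from the question) that recovers a and can be checked against the original equation; it assumes wrightOmegaq.m from the GitHub link above is on the Matlab path:
b = 60; c = 50; d = 20;    % made-up known temperatures
a_true = 80;
x = (a_true - b - c + d)/log((a_true - b)/(c - d));    % forward equation for x
a = b - x.*wrightOmegaq(log(-(c - d)./x) - (c - d)./x);
a = real(a)    % imaginary part is roundoff; prints approximately 80
Note that the intermediate log is complex (its argument is negative), which is why a complex-capable implementation is needed even though the final answer is real.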

Two (combinable) options:
Is your script already vectorized? Evaluate the function for more than a single argument. Executing for i = 1:100, a(i)=lambertw(rhs(i)); end is slower than a=lambertw(rhs).
If you are dealing with the real-valued branch of LambertW (i.e. your arguments are in the interval [-1/e, inf) ), you can use the implementation of Lambert_W submitted by Cleve Moler on the File Exchange; see the sketch below.
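For instance, a minimal sketch combining both options (the numbers are illustrative; Lambert_W is Moler's File Exchange function mentioned above, and the W arguments here all lie in [-1/e, 0)):
x = 20 + 5*rand(1000, 1);                    % illustrative known quantities
b = 60*ones(1000, 1); c = 50*ones(1000, 1); d = 20*ones(1000, 1);
args = -((c - d)./x).*exp(-(c - d)./x);      % the W arguments from the question
a = b - x.*Lambert_W(args);                  % one vectorized call instead of a loop
If your copy of Lambert_W only accepts scalars, a = b - x.*arrayfun(@Lambert_W, args) still avoids the symbolic overhead of lambertw.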

Do you know the mass flow rates at both sides of the heat exchanger at each time-step?
If yes, temperature 'a' can be found with the 'effectiveness-NTU' approach, which needs no iteration, instead of the LMTD approach (a sketch follows). Reference: e.g. http://ceng.tu.edu.iq/ched/images/lectures/chem-lec/st3/c2/Lec23.pdf
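A minimal sketch of the NTU calculation for a counterflow exchanger (all numbers and the inlet/outlet naming are illustrative assumptions, not from the question):
mh = 0.5; cph = 850;     % hot side (CO2): mass flow [kg/s], cp [J/(kg*K)]
mc = 1.0; cpc = 4186;    % cold side (water)
UA = 2000;               % overall heat-transfer conductance [W/K]
Th_in = 400; Tc_in = 290;               % known inlet temperatures [K]
Ch = mh*cph;  Cc = mc*cpc;
Cmin = min(Ch, Cc);  Cr = Cmin/max(Ch, Cc);
NTU = UA/Cmin;
if abs(1 - Cr) < eps                    % balanced-flow limit
    eff = NTU/(1 + NTU);
else                                    % counterflow effectiveness
    eff = (1 - exp(-NTU*(1 - Cr)))/(1 - Cr*exp(-NTU*(1 - Cr)));
end
q = eff*Cmin*(Th_in - Tc_in);           % heat duty [W], no iteration needed
Th_out = Th_in - q/Ch;                  % the unknown outlet, i.e. the role of "a"
Tc_out = Tc_in + q/Cc;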

How to compute inverse of a matrix accurately?

I'm trying to compute the inverse of a matrix P, but if I multiply inv(P)*P, MATLAB does not return the identity matrix. It's almost the identity (off-diagonal values on the order of 10^(-12)). However, in my application I need more precision.
What can I do in this situation?
Use inv() only if you explicitly need the inverse of a matrix; otherwise, just use the backslash operator \.
The documentation on inv() explicitly states:
x = A\b is computed differently than x = inv(A)*b and is recommended for solving systems of linear equations.
This is because the backslash operator, mldivide(), uses whatever method is best suited to your specific matrix:
x = A\B solves the system of linear equations A*x = B. The matrices A and B must have the same number of rows. MATLAB® displays a warning message if A is badly scaled or nearly singular, but performs the calculation regardless.
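A quick illustration of the difference on a random test system (a minimal sketch; any nonsingular A will do):
n = 500;
A = randn(n) + n*eye(n);    % diagonally dominant, hence well-conditioned
b = randn(n, 1);
x1 = A\b;                   % recommended: solve A*x = b directly
x2 = inv(A)*b;              % forms inv(A) explicitly first
disp(norm(A*x1 - b))        % residual of backslash...
disp(norm(A*x2 - b))        % ...is typically no worse, and often better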
Just so you know which algorithm MATLAB chooses depending on your input matrices, here's the full algorithm flowchart as provided in the documentation:
The versatility of mldivide in solving linear systems stems from its ability to take advantage of symmetries in the problem by dispatching to an appropriate solver. This approach aims to minimize computation time. The first distinction the function makes is between full (also called "dense") and sparse input arrays.
As a side note about errors on the order of 10^(-12): besides the above-mentioned inaccuracy of inv(), there's floating-point accuracy. This post on MATLAB's issues with it is rather insightful, and there's a more general computer-science post on it here. Basically, if you are computing numerics, don't worry (too much, at least) about errors 12 orders of magnitude smaller than your values.
You have what's called an ill-conditioned matrix. It's risky to try to take the inverse of such a matrix. In general, taking the inverse of anything but the smallest matrices (such as those you see in an introductory linear algebra textbook) is risky. If you must, you could try taking the Moore-Penrose pseudoinverse (see Wikipedia), but even that is not foolproof; a minimal sketch follows.
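Here, pinv lets you control which singular values are treated as zero (the tolerance below is an illustrative choice, not a recommendation):
cond(P)                  % a large condition number confirms ill-conditioning
tol = 1e-10;             % singular values below tol are treated as zero
Pinv = pinv(P, tol);     % Moore-Penrose pseudoinverse of P
Even so, for solving P*x = b, P\b is generally preferable to Pinv*b.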

What is the fastest method for solving exp(ax)-ax+c=0 for x in matlab

What is the least computational time consuming way to solve in Matlab the equation:
exp(ax)-ax+c=0
where a and c are constants and x is the value I'm trying to find?
Currently I am using the built-in solver function, and I know the solution is single valued, but it is just taking longer than I would like.
Just wanting something to run more quickly is insufficient for that to happen.
And, sorry, but if fzero is not fast enough, then you won't do much better with a general root-finding tool.
If you aren't using fzero, then why not? After all, that IS the built-in solver you did not name. (BE EXPLICIT! Otherwise we must guess.) Perhaps you are using solve, from the Symbolic Math Toolbox. It will be slower, since it is a symbolic tool.
Having said the above, I might point out that you might be able to improve by recognizing that this is really a problem with a single parameter, c. That is, transform the problem to solving
exp(y) - y + c = 0
where
y = ax
Once you know the value of y, divide by a to get x.
Of course, this way of looking at the problem makes it obvious that you have made an incorrect statement, that the solution is single valued. There are TWO solutions for any negative value of c less than -1. When c = -1, the solution is unique, and for c greater than -1, no solutions exist in real numbers. (If you allow complex results, then there will be solutions there too.)
So if you MUST solve the above problem frequently and fzero was inadequate, then I would consider a spline model, where I had precomputed solutions to the problem for a sufficient number of distinct values of c. Interpolate that spline model to get a predicted value of y for any c.
If I needed more accuracy, I might take a single Newton step from that point.
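A minimal sketch of that spline-plus-Newton scheme for the negative root (the grid and constants are illustrative):
% Offline: tabulate the negative root y(c) of exp(y) - y + c = 0 for c < -1
cGrid = linspace(-20, -1.05, 400);
yGrid = arrayfun(@(c) fzero(@(y) exp(y) - y + c, [c, 0]), cGrid);
pp = spline(cGrid, yGrid);
% Online: evaluation is cheap; one Newton step restores full accuracy
a = 2; c = -5.3;                          % illustrative constants
y = ppval(pp, c);                         % spline prediction
y = y - (exp(y) - y + c)/(exp(y) - 1);    % Newton step: f(y)/f'(y)
x = y/a;                                  % undo the substitution y = a*x
The same idea works for the positive root, using a bracket to the right of zero when building the table.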
In the event that you can use the Lambert W function, then solve actually does give us a solution for the general problem. (As you see, I am just guessing what you are trying to solve this with, and what are your goals. Explicit questions help the person trying to help you.)
solve('exp(y) - y + c')
ans =
c - lambertw(0, -exp(c))
The zero first argument to lambertw yields the negative solution. In fact, we can use lambertw to give us both the positive and negative real solutions for any c no larger than -1.
X = @(c) c - lambertw([0 -1],-exp(c));
X(-1.1)
ans =
-0.48318 0.41622
X(-2)
ans =
-1.8414 1.1462
Solving your system symbolically
syms a c x;
fx0 = solve(exp(a*x)-a*x+c==0,x)
which results in
fx0 =
(c - lambertw(0, -exp(c)))/a
As @woodchips pointed out, the Lambert W function has two primary branches, W0 and W−1. The solution given is with respect to the upper (or principal) branch, denoted W0, but your equation actually has an infinite number of complex solutions, one for each branch Wk (the W0 and W−1 solutions are real if c is in (−∞, −1]). In Matlab, lambertw is only implemented for symbolic inputs and is thus a very slow way of solving your equation if you're interested in numerical (double-precision) solutions.
If you wish to solve such equations numerically in an efficient manner, you might look at Corless, et al. 1996. But as long as your parameter c is in (−∞, −1], i.e., -exp(c) is in [−1/e, 0), and you're interested in the W0 branch, you can use the Matlab code that I wrote to answer a similar question at Math.StackExchange. This code should be much more efficient than a naïve approach with fzero.
If your values of c are not in (−∞, −1], or you want the solution corresponding to a different branch, then your solution may be complex-valued and you won't be able to use the simple code I linked to above. In that case, you can implement the function more fully by reading the Corless, et al. 1996 paper, or you can try converting the Lambert W to a Wright ω function: W0(z) = ω(log(z)), W−1(z) = ω(log(z)−2πi). In your case, using Matlab's wrightOmega, the W0 branch corresponds to:
fx0 =
(c - wrightOmega(log(-exp(c))))/a
and the W−1 branch to:
fxm1 =
(c - wrightOmega(log(-exp(c))-2*sym(pi)*1i))/a
If c is real, then the above reduces to
fx0 =
(c - wrightOmega(c+sym(pi)*1i))/a
and
fxm1 =
(c - wrightOmega(c-sym(pi)*1i))/a
Matlab's wrightOmega function is also symbolic-only, but I have written a double-precision implementation (based on Lawrence, et al. 2012) that you can find on my GitHub here and that is three or more orders of magnitude faster than evaluating the function symbolically. As your problem is technically in terms of a Lambert W function, it may be more efficient, and possibly more numerically accurate, to implement that more complicated function for the regime of interest (the ω form costs a log transformation and an extra complex log evaluation). But feel free to test.
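For example, a minimal numeric sketch of both real branches using wrightOmegaq in place of the symbolic wrightOmega (the values of a and c are illustrative doubles):
a = 2; c = -2;                                   % illustrative constants, c <= -1
fx0  = real((c - wrightOmegaq(c + pi*1i))/a);    % W0 branch
fxm1 = real((c - wrightOmegaq(c - pi*1i))/a);    % W_{-1} branch
Here a*fx0 and a*fxm1 are -1.8414 and 1.1462, matching the lambertw values shown earlier.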

Goodness of fit with MATLAB and chi-square test

I would like to measure the goodness of fit to an exponential decay curve. I am using the lsqcurvefit MATLAB function, and someone has suggested that I do a chi-square test.
I would like to use the MATLAB function chi2gof, but I am not sure how I would tell it that the data is being fitted to an exponential curve.
The chi2gof function tests the null hypothesis that a set of data, say X, is a random sample drawn from some specified distribution (such as the exponential distribution).
From your description in the question, it sounds like you want to see how well your data X fits an exponential decay function. I really must emphasize, this is completely different to testing whether X is a random sample drawn from the exponential distribution. If you use chi2gof for your stated purpose, you'll get meaningless results.
The usual approach for testing the goodness of fit for some data X to some function f is least squares, or some variant on least squares. Further, a least squares approach can be used to generate test statistics that test goodness-of-fit, many of which are distributed according to the chi-square distribution. I believe this is probably what your friend was referring to.
EDIT: I have a few spare minutes, so here's something to get you started. DISCLAIMER: I've never worked specifically on this problem, so what follows may not be correct.

I'm going to assume you have a set of data x_n, n = 1, ..., N, and the corresponding timestamps for the data, t_n, n = 1, ..., N. Now, the exponential decay function is y_n = y_0 * e^{-b * t_n}. Note that by taking the natural logarithm of both sides we get ln(y_n) = ln(y_0) - b * t_n. So this suggests using OLS to estimate the linear model ln(x_n) = ln(x_0) - b * t_n + e_n. Nice! Because now we can test goodness of fit using the standard R^2 measure, which MATLAB will return in the stats structure if you use the regress function to perform the OLS.

Hope this helps. Again I emphasize, I came up with this off the top of my head in a couple of minutes, so there may be good reasons why what I've suggested is a bad idea. Also, if you know the initial value of the process (i.e. x_0), then you may want to look into constrained least squares, where you bind the parameter ln(x_0) to its known value.
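A minimal sketch of that log-linear idea on synthetic data (all numbers are illustrative; regress is in the Statistics Toolbox):
t = (0:99)';                                     % timestamps t_n
x = 5*exp(-0.03*t).*exp(0.05*randn(size(t)));    % noisy decay data, x_n > 0
Y = log(x);                                      % ln(x_n) = ln(x_0) - b*t_n + e_n
M = [ones(size(t)), t];                          % design matrix: intercept and slope
[beta, ~, ~, ~, stats] = regress(Y, M);
x0_hat = exp(beta(1));                           % estimate of x_0
b_hat  = -beta(2);                               % estimate of the decay rate b
R2     = stats(1);                               % the R^2 goodness-of-fit measure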

How to overcome singularities in numerical integration (in Matlab or Mathematica)

I want to numerically integrate the following double integral:
Integrate[Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2, {x,-Pi,Pi}, {y,-Pi,Pi}]
where
n[x] = 1/(1+Exp[beta*eps[x]]) and eps[x] = 2*(a-b*Cos[x]),
and a, b and β are constants which, for simplicity, can all be set to 1.
Neither Matlab using dblquad, nor Mathematica using NIntegrate can deal with the singularity created by the denominator. Since it's a double integral, I can't specify where the singularity is in Mathematica.
I'm sure that it is not infinite, since this integral arises in perturbation theory, and the corresponding integral without the Cos[(x+y)/2]^2 factor has been found before (just not by me, so I don't know how it's done).
Any ideas?
(1) It would be helpful if you provide the explicit code you use. That way others (read: me) need not code it up separately.
(2) If the integral exists, it has to be zero. This is because you negate the n(y)-n(x) factor when you swap x and y but keep the rest the same. Yet the integration range symmetry means that amounts to just renaming your variables, hence it must stay the same.
(3) Here is some code that shows it will be zero, at least if we zero out the singular part and a small band around it.
a = 1;
b = 1;
beta = 1;
eps[x_] := 2*(a-b*Cos[x])
n[x_] := 1/(1+Exp[beta*eps[x]])
delta = .001;
pw[x_,y_] := Piecewise[{{1,Abs[Abs[x]-Abs[y]]>delta}}, 0]
We add 1 to the integrand just to avoid accuracy issues with results that are near zero.
NIntegrate[1+Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2*pw[Cos[x],Cos[y]],
{x,-Pi,Pi}, {y,-Pi,Pi}] / (4*Pi^2)
I get the result below.
NIntegrate::slwcon:
Numerical integration converging too slowly; suspect one of the following:
singularity, value of the integration is 0, highly oscillatory integrand,
or WorkingPrecision too small.
NIntegrate::eincr:
The global error of the strategy GlobalAdaptive has increased more than
2000 times. The global error is expected to decrease monotonically after a
number of integrand evaluations. Suspect one of the following: the
working precision is insufficient for the specified precision goal; the
integrand is highly oscillatory or it is not a (piecewise) smooth
function; or the true value of the integral is 0. Increasing the value of
the GlobalAdaptive option MaxErrorIncreases might lead to a convergent
numerical integration. NIntegrate obtained 39.4791 and 0.459541
for the integral and error estimates.
Out[24]= 1.00002
This is a good indication that the unadulterated result will be zero.
(4) Substituting cx for cos(x) and cy for cos(y), and removing extraneous factors for purposes of convergence assessment, gives the expression below.
((1 + E^(2*(1 - cx)))^(-1) - (1 + E^(2*(1 - cy)))^(-1))/
(2*(1 - cx) - 2*(1 - cy))^2
A series expansion in cy, centered at cx, indicates a pole of order 1. So it does appear to be a singular integral.
Daniel Lichtblau
The integral looks like a Cauchy Principal Value type integral (i.e. it has a strong singularity). That's why you can't apply standard quadrature techniques.
Have you tried PrincipalValue->True in Mathematica's Integrate?
In addition to Daniel's observation about integrating an odd integrand over a symmetric range (so that symmetry indicates the result should be zero), you can also do the following to understand its convergence better (I'll use LaTeX; writing this out with pen and paper should make it easier to read. It took a lot longer to write than to do; it's not that complicated):
First, $\epsilon(x)-\epsilon(y)\propto\cos(y)-\cos(x)=2\sin(\xi_+)\sin(\xi_-)$, where I have defined $\xi_\pm=(x\pm y)/2$ (so I've rotated the axes by $\pi/4$). The region of integration is then $\xi_+$ between $\pi/\sqrt{2}$ and $-\pi/\sqrt{2}$, and $\xi_-$ between $\pm(\pi/\sqrt{2}-|\xi_+|)$. The integrand then takes the form $\frac{1}{\sin^2(\xi_-)\sin^2(\xi_+)}$ times terms with no divergences. So, evidently, there are second-order poles, and this isn't convergent as presented.
Perhaps you could email the persons who obtained an answer with the cos term and ask what precisely it is they did. Perhaps there's a physical regularisation procedure being employed. Or you could have given more information on the physical origin of this (some sort of second order perturbation theory for some sort of bosonic system?), had that not been off-topic here...
Maybe I am missing something here, but the integrand
f[x,y] = Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2 with n[x] = 1/(1+Exp[beta*eps[x]]) and eps[x] = 2*(a-b*Cos[x]) is antisymmetric under exchanging x and y: f[y,x] = -f[x,y].
Therefore its integral over any domain that is symmetric under that exchange, such as [-u,u]x[-u,u], is zero. No numerical integration seems to be needed here. The result is just zero.

How to add white noise process term for a couple of ODEs, assuming the Gaussian distribution?

This question has confused me for several days. Even the senior students I asked could not give an answer.
We have ten ODEs, and a noise term should be added to each. The noise is defined as follows (since I cannot upload a picture, the formulas below may not be very clear; to understand them, you can either read my explanation or go to this address: Plos one. You can find the description of the equations directly above the Support Information section at that address).
The white noise term epsilon_i(t) is assumed to be Gaussian. epsilon_i(t) denotes the value of the noise for equation i at time point t.
The auto-correlation of the noise is given by:
<epsilon_i(t) epsilon_j(t')> = 2*D_ij(x, t)*delta(t - t')   (EQ.1)
where delta(t) is the Dirac delta function and the diffusion matrix D is defined by
(EQ.2)
Our problem focuses on how to interpret the Dirac delta function in the diffusion matrix. Since the Dirac delta function has the properties delta(0) = Inf and delta(t) = 0 if t ~= 0, we don't know how to compute epsilon if we try to take the square root of 2*D(x, t)*delta(t-t'). So we simply assumed delta(0) = 1 and delta(t) = 0 if t ~= 0, but we don't know whether that is right. Could you please tell me how to handle the delta function in the diffusion matrix in MATLAB?
This question involves stochastic processes in MATLAB, so we reviewed different stochastic processes for inspiration. In MATLAB, the increments of a Wiener process are often generated as a = sqrt(dt) * randn(1, N), where N is the number of steps and dt is the step length; the corresponding Brownian motion is then b = cumsum(a). All of these are stochastic processes, but none of them is the white-noise process we need, whose auto-correlation is constrained by the matrix D.
We then considered simply using randn(1, 10) to generate a vector representing the noise. However, since the noise must satisfy equation (2), this does not give the noise terms of the different equations the predefined correlations (D_ij). Next we tried mvnrnd to draw from a multivariate normal distribution at each time step. Unfortunately, mvnrnd in MATLAB returns its draws as the rows of a matrix, while we need a vector of length 10 at each step.
We are rather confused, so could you please shed some light on this? Thanks so much!
NOTE: I see two hazy questions in here: 1) how to deal with a stochastic term in a DE, and 2) how to deal with a delta function in a DE. Both of these are math-related questions, and http://www.math.stackexchange.com would be a better place for them. If you have a question pertaining to MATLAB, I haven't been able to pin it down, and you should perhaps add code examples to better illustrate your point. That said, I'll answer the two questions briefly, just to put you on the right track.
What you have here are not ODEs, but Stochastic differential equations (SDE). I'm not sure how you're using MATLAB to work with this, but routines like ode45 or ode23 will not be of any help. For SDEs, your usual mathematical tools of separation of variables/method of characteristics etc don't work and you'll need to use Itô calculus and Itô integrals to work with them. The solutions, as you might have guessed, will be stochastic. To learn more about SDEs and working with them, you can consider Stochastic Differential Equations: An Introduction with Applications by Bernt Øksendal and for numerical solutions, Numerical Solution of Stochastic Differential Equations by Peter E. Kloeden and Eckhard Platen.
Coming to the delta-function part, you can easily deal with it by taking the Fourier transform of the equation. Recall that the Fourier transform of a delta function is 1. This greatly simplifies the DE, and you can take an inverse transform at the very end to return to the original domain.
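As for generating the noise in MATLAB: over a discrete step dt, the delta correlation in (EQ.1) means the integrated noise is a Gaussian increment with covariance 2*D*dt, which can be drawn for all ten equations at once from a Cholesky factor. A minimal Euler-Maruyama sketch under that interpretation (the drift f and the matrix D are illustrative, not from the paper):
n = 10; dt = 1e-3; steps = 1000;
D = 0.1*eye(n) + 0.02*ones(n);    % example symmetric positive-definite D_ij
L = chol(2*D*dt, 'lower');        % L*L' = 2*D*dt, so cov(L*z) = 2*D*dt
f = @(x) -x;                      % illustrative drift term
X = ones(n, 1);                   % initial condition
for k = 1:steps
    dW = L*randn(n, 1);           % correlated increments: a column of length 10
    X  = X + f(X)*dt + dW;        % Euler-Maruyama update
end
Equivalently, dW = mvnrnd(zeros(1, n), 2*D*dt)' produces the same correlated column vector of length 10 at each step.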