I have a huge expression with around 231 terms. Each term contains some power of cos(e) or sin(e), possibly both, and also has a distance term r in the denominator raised to some power.
Here is a small portion of the expression
What I'd like to do is sum the expression over all the angle values e and then over all the r values, and then use lambdify and scipy to minimize the result with respect to the 4 other parameters present in the equation.
Things I tried
I have tried to do the sums using Sum and Indexed in SymPy but am not able to make it work; the powers are the tricky part. Also, once I have the Sum/Indexed expression and I expand it, how do I pass the list of angle values at which to evaluate the expression?
Also, since the expression is pretty large, I'd like to do the Sum/Indexed setup etc. in a loop, without resolving the expression individually for each power.
(If my question is not clear, let me know.)
This is how I finally managed to solve my problem:
Replaced cos(e) and sin(e) with the symbols cose and sine.
Iterated over the list of angles and used sympy's subs to replace cose and sine with math.cos(e) and math.sin(e), and kept adding up the expressions obtained; I did the same for r (see the sketch below).
This left me with an expression in only p1, p2, Q1 and Q2, which was what I needed.
I couldn't get SymPy's lambdify and Sum/Indexed to work, but this got the job done.
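For reference, here is a minimal sketch of that workflow. The expression below is a made-up stand-in for one term of the real 231-term expression; only the symbol names (cose, sine, r, p1, p2, Q1, Q2) come from the post, and the angle and distance lists are hypothetical.

import math
import sympy as sp

cose, sine, r, p1, p2, Q1, Q2 = sp.symbols('cose sine r p1 p2 Q1 Q2')

# made-up stand-in for one term of the real expression
expr = p1*cose**2*sine/r**3 + p2*Q1*sine**2/r**5 + Q2*cose/r**2

angles = [0.1, 0.5, 1.2]        # hypothetical angle values (radians)
distances = [1.0, 2.0, 3.5]     # hypothetical r values

total = sp.Integer(0)
for e in angles:
    for R in distances:
        total += expr.subs({cose: math.cos(e), sine: math.sin(e), r: R})

# 'total' now depends only on p1, p2, Q1, Q2; at this point lambdify
# (or plain subs) can hand it to scipy for the minimization
f = sp.lambdify((p1, p2, Q1, Q2), total, 'numpy')
print(f(1.0, 2.0, 0.5, 0.3))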
Why do I get different results when running the same code in different versions of MATLAB (2016 vs 2021) for sum(b.*x1), where b is single and x1 is double? How can I avoid such a difference between MATLAB versions?
MATLAB v.2021:
sum(b.*x1)
ans =
single
-0.0013286
MATLAB 2016
sum(b.*x1)
ans =
single
-0.0013283
In R2017b, they changed the behavior of sum for single-precision floats, and in R2020b they made the same changes for other data types too.
The change speeds up the computation, and improves accuracy by reducing the rounding errors. Simply put, previously the algorithm would just run through the array in sequence, adding up the values. The new behavior computes the sum over smaller portions of the array, and then adds up those results. This is more precise because the running total can become a very large number, and adding smaller numbers to it causes more rounding in those smaller numbers. The speed improvement comes from loop unrolling: the loop now steps over, say, 8 values at a time, and in the loop body, 8 running totals are computed (they don’t specify the number they use, the 8 here is an example).
Thus, your newer result is a better approximation to the sum of your array than the old one.
For more details (a better explanation of the new algorithm and the reason for the change), see this blog post.
Regarding how to avoid the difference: you could implement your own sum function, and use that instead of the builtin one. I would suggest writing it as a MEX-file for efficiency. However, do make sure you match the newer behavior of the builtin sum, as that is the better approximation.
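To make the idea concrete, here is a minimal Python sketch of the kind of blocked summation you would port to a MEX-file. It is not MATLAB's actual code: the chunk size is arbitrary, and the loop-unrolling over several accumulators that the real implementation uses for speed is omitted.

import numpy as np

def running_total(a):
    # plain sequential accumulation (the pre-R2017b style)
    s = a.dtype.type(0)
    for v in a:
        s += v
    return s

def blocked_sum(a, chunk=1024):
    # Sum over smaller portions of the array, then add up those partial
    # results, in the spirit of the newer algorithm described above.
    # Pure Python is far too slow for real arrays; the point is the
    # structure you would reimplement in C for a MEX-file.
    a = np.asarray(a)
    partials = np.array([running_total(a[i:i + chunk])
                         for i in range(0, len(a), chunk)], dtype=a.dtype)
    return running_total(partials)

Because each partial sum stays much smaller than the grand total, less precision is lost when the small elements are added in, which is where the accuracy gain described above comes from.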
Here is an example of the problem. Let's create an array with N+1 elements, where the first one has a value of N and the rest have a value of 1.
N = 1e8;
a = ones(N+1,1,'single');
a(1) = N;
The sum over this array is expected to be 2*N. If we set N large enough w.r.t. the data type, I see this in R2017a (before the change):
>> sum(a)
ans =
single
150331648
And I see this in R2018b (after the change for single-precision sum):
>> sum(a)
ans =
single
199998976
Both implementations make rounding errors here, but one is obviously much, much closer to the expected result (2e8, or 200000000).
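The effect can also be reproduced outside MATLAB. The NumPy sketch below is only an analogy: the exact figures will differ because the accumulation order is not identical to either MATLAB version, but the pattern is the same, with the blocked/pairwise sum landing much closer to 2*N than the sequential running total.

import numpy as np

N = 10**8                              # reduce N if memory is tight
a = np.ones(N + 1, dtype=np.float32)
a[0] = N

# sequential running total in single precision: np.cumsum adds the elements
# one at a time, similar in spirit to the older MATLAB behaviour
sequential = np.cumsum(a)[-1]

# np.sum uses pairwise (blocked) summation for floats, similar in spirit
# to the newer MATLAB behaviour
blocked = np.sum(a)

print(sequential, blocked)             # the blocked result is far closer to 2e8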
There are many mathematical programs out there, some of which are able to solve calculus-based problems; GeoGebra and Qalculate!, to name a few.
How are those programs able to solve calculus-based problems that humans need a long procedure to evaluate?
For example, the problem:
It takes a lot of steps for humans to solve this problem, as shown here on Quora.
How can those mathematical programs solve such problems with such good accuracy?
The Church-Turing thesis implies that anything a human being can calculate can be calculated by any Turing-equivalent system of computation - including programs running on computers. That is to say, if we can solve the problem (or calculate an approximate answer that meets some criteria) then a computer program can be made to do the same thing. Let's consider a simpler example:
f(x) = x
a = Integral(f, 0, 1)
A human being presented with this problem has two options:
try to compute the antiderivative using some procedure, then use procedures to evaluate the definite integral over the supplied range
use some numerical method to calculate an approximate value for the definite integral which meets some criteria for closeness to the true value
In either case, human beings have a set of tools that allow them to do this:
recognize that f(x) is a polynomial in x. There are rules for constructing the antiderivatives of polynomials: each term ax^b in the polynomial is converted to (a/(b+1))x^(b+1), and an arbitrary constant c is added at the end. We then say ∫f(x)dx = (1/2)x^2 + c. Now that we have the antiderivative, we have a procedure for evaluating the definite integral over a range: compute the antiderivative at the high value, then subtract from that the antiderivative at the low value. This gives ((1/2)1^2) - ((1/2)0^2) = 1/2 - 0 = 1/2.
decide that for our purposes a Riemann sum with dx=1/10 is sufficient and that we'll take the midpoint value. We get 10 rectangles with base 1/10 and heights 1/20, 3/20, 5/20, 7/20, 9/20, 11/20, 13/20, 15/20, 17/20 and 19/20, respectively. The areas are 1/200, 3/200, 5/200, 7/200, 9/200, 11/200, 13/200, 15/200, 17/200 and 19/200. The sum of these is (1+3+5+7+9+11+13+15+17+19)/200 = 100/200 = 1/2. We happened to get the exact answer since we used the midpoint value and evaluated the definite integral of a linear function; in general, we'd have been close but not exact.
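As a concrete illustration, here is a short Python sketch of both routes for this f; SymPy's integrate stands in for the symbolic procedure, and the numerical side is the 10-rectangle midpoint sum worked out above.

import sympy as sp

x = sp.symbols('x')
f = x

# symbolic route: antiderivative via the power rule, then evaluate at the bounds
F = sp.integrate(f, x)                     # x**2/2
exact = F.subs(x, 1) - F.subs(x, 0)        # 1/2

# numerical route: midpoint Riemann sum with dx = 1/10
dx = 1 / 10
midpoints = [(i + 0.5) * dx for i in range(10)]
approx = sum(m * dx for m in midpoints)    # f(m) = m for this f

print(exact, approx)                       # 1/2 and 0.5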
The only difficulty is in adequately specifying the procedure human beings use to solve these problems in various ways. Once specified, computers are perfectly capable of doing them. And make no mistake, human beings have a procedure - conscious or subconscious - for doing these problems reliably.
Maple helpfully can work out the solution to Laplace's equation in a square region and give me the answer in closed form (in terms of an infinite sum). If I try to plot the function of two variables as a 3d plot it gives me most of the surface but not all of it:
Here is the Maple code which produces the solution and turns it into an expression suitable for plotting
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
the result of which is
All good so far. Plotting it via
plot3d(vxy,x=0..1,y=0..1);
gives a result which is fine for x in the full range (0<x<1) but only for y between 0 and around 0.9:
I have tried to evalf some point in the unknown region and Maple can't tell me numerical values there. Is there any way to get Maple to "try a bit harder" to evaluate those numbers?
You could try setting the number of terms in the sum to a finite value.
Compare
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=subs(infinity=100,sol1);
plot3d(rhs(vxy),x=0..1,y=0..1);
With
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
plot3d(vxy,x=0..1,y=0..1);
I'm not a huge fan of chopping the infinite sum at some value of the upper bound for n without at least demonstrating, either symbolically or numerically, that doing so is justified, i.e. that the chopping does not give a false impression of convergence.
So, you asked how to make it work "harder". I'll take that to mean that you too might prefer to let evalf/Sum itself decide whether each infinite numeric sum converges -- rather than manually truncate it yourself at some finite value for the upper value of the range for n.
For fun, and caution, I also divide both numerator and denominator of K by the potentially large exp call (potentially much larger than 1). That may not be necessary here.
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0:
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100:
sol1:=pdsolve({lapeq,bcs}):
vxy:=eval(v(x,y),sol1):
K:=op(1,vxy):
J:=simplify(combine(numer(K)/exp(2*Pi*n)))
/simplify(combine(denom(K)/exp(2*Pi*n))):
F:=subs(__d=J,
proc(x,y) local k, m, n, r;
if y<0.8 then
r:=Sum(__d,n=1..infinity);
else
UseHardwareFloats:=false;
m := ceil(1*abs(y/0.80)^16);
r:=add(Sum(eval(__d,n=m*n-k),n=1..infinity),
k=0..m-1);
end if;
evalf(r);
end proc):
plot3d( F, 0..1, 0..0.99 );
Naturally this is slower than mere chopping of terms to obtain a finite sum. And you might be satisfied with some technique that establishes that the excluded terms' sums are negligible.
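As a cross-check outside Maple, the series can also be evaluated numerically. The Python sketch below assumes the standard separation-of-variables form of the solution for these boundary conditions (only odd n contribute, with coefficients 400/(n*Pi)); that form may be written differently from what pdsolve returns. It applies the same rescaling idea as dividing K by exp(2*Pi*n): the sinh ratio is rewritten with decaying exponentials so it can be evaluated close to y = 1 without overflow.

import math

def v(x, y, terms=2000):
    # Truncated separation-of-variables series for Laplace's equation on the
    # unit square with v = 100 on y = 1 and v = 0 on the other three sides.
    # The ratio sinh(n*pi*y)/sinh(n*pi) is rewritten as
    # exp(n*pi*(y-1)) * (1 - exp(-2*n*pi*y)) / (1 - exp(-2*n*pi))
    # so that no intermediate value overflows.
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        a = n * math.pi
        ratio = (math.exp(a * (y - 1.0))
                 * (1.0 - math.exp(-2.0 * a * y)) / (1.0 - math.exp(-2.0 * a)))
        total += (400.0 / a) * math.sin(a * x) * ratio
    return total

print(v(0.5, 0.5))    # about 25 at the centre of the square
print(v(0.5, 0.99))   # near y = 1 the terms decay slowly, so many more are needed

This is only a numerical sanity check of how slowly the series converges near y = 1; it does not replace the evalf/Sum approach above.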
I am trying to learn the basics of MATLAB, and I wanted to write a MATLAB script.
In this script I defined a vector x with a step d whose size is 2*pi/1000,
and I wanted to plot two sine functions of x:
the first with a frequency of 1, and the second with a frequency of 10.3.
This is what I did:
d=(2*pi/1000);
x=-pi:d:pi;
first=sin(x);
second=sin(10.3*x);
plot(x,first,x,second);
My question:
What is the difference between this:
x=linspace(-pi,pi,1000);
and this:
d=(2*pi/1000);
x=-pi:d:pi;
I am asking because I got confused: I think they are both the same, but it seems there is something wrong with my assumption.
Also, is there a more efficient way to write a sine function with a given frequency?
The main difference can be summarized as predefined size vs predefined step. And your example highlights it very well, indeed (1000 elements vs 1001 elements).
The linspace function produces a fixed-length vector (the length being defined by the third input argument, which defaults to 100) whose lower and upper limits are set, respectively, by the first and the second input arguments. The correct step to use is computed internally by the function itself (step = (x2 - x1) / (n - 1)).
The colon operator defines a vector of elements whose values range between the specified lower and upper limits. The step, an optional parameter that defaults to 1, determines the length of the result: the length is given by the number of steps that must be taken to reach the upper limit, starting from the lower one. On a side note, on this MathWorks thread you can find a very interesting discussion concerning the behavior of the colon operator with respect to floating-point handling.
Another difference, related to the first one, is that linspace always includes the upper limit value while the colon operator only contains it if the specified step allows it (0:5:14 = [0 5 10]).
As a general rule, I prefer to use the former when I want to produce a vector of a predefined length (pretty obvious, isn't it?), and the latter when I need to create a sequence whose length has only marginal relevance (or no relevance at all).
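If it helps to see the same predefined-size vs predefined-step distinction outside MATLAB, here is a small NumPy analogy (Python, not MATLAB; np.arange always excludes its stop value, so it is padded by half a step to mimic -pi:d:pi):

import numpy as np

d = 2 * np.pi / 1000

# predefined size: exactly 1000 points, both endpoints included,
# the step being computed internally as 2*pi/999
fixed_length = np.linspace(-np.pi, np.pi, 1000)

# predefined step: the length follows from how many steps of d fit in the interval
fixed_step = np.arange(-np.pi, np.pi + d / 2, d)

print(fixed_length.size, fixed_step.size)       # 1000 vs 1001
print(fixed_length[1] - fixed_length[0], d)     # 2*pi/999 vs 2*pi/1000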
I am trying to calculate some integrals that use very high power exponents. An example equation is:
(-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2 ...
./( exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)) ...
/ (2*sqrt(pi))
where p is constant (1000 being a typical value), and I need the integral for x=[-inf,inf]. If I use the integral function for numeric integration I get NaN as a result. I can avoid that if I set the limits of the integration to something like [-20,20] and a low p (<100), but ideally I need the full range.
I have also tried setting syms x and using int and vpa, but in this case vpa returns:
1.0 - 1.0*numeric::int((1125899906842624*(exp(-(x - 10*10^(1/2))^2) - exp(-(x + 10*10^(1/2))^2))^2)/(3991211251234741*(exp(-(x - 10*10^(1/2))^2) + exp(-(x + 10*10^(1/2))^2)))
without calculating a value. Again, if I set the limits of the integration to lower values I do get a result (also for low p), but I know that the result I get is wrong – e.g., if x=[-100,100] and p=1000, the result is >1, which must be wrong, as the value of the integral should be asymptotic to 1 (or, alternatively, its codomain should be [0,1)).
Am I doing something wrong with vpa or is there another way to calculate high precision values for my integrals?
First, you're doing something that makes solving symbolic problems more difficult and less accurate. The variable pi is a floating-point value, not an exact symbolic representation of the fundamental constant. In MATLAB symbolic math code, you should always use sym('pi'). You should do the same for any other special numeric values you use, e.g., sqrt(sym('2')) and exp(sym('1')), or they will be converted to an approximate rational fraction by default (the source of the strange large numbers you see in the code in your question). For further details, I recommend that you read through the documentation for the sym function.
Applying the above, here's a runnable example:
syms x;
p = 1000;
f = (-exp(-(x+sqrt(p)).^2)+exp(-(x-sqrt(p)).^2)).^2./(exp(-(x+sqrt(p)).^2)...
+exp(-(x-sqrt(p)).^2))/(2*sqrt(sym('pi')));
Now vpa(int(f,x,-100,100)) and vpa(int(f,x,-1e3,1e3)) return exactly 1.0 (to 32 digits of precision, see below).
Unfortunately, vpa(int(f,x,-Inf,Inf)) does not return an answer, but a call to the underlying MuPAD function numeric::int. As I explain in this answer, this is what can happen when int cannot obtain a result. Normally, it should try to evaluate the integral numerically, but your function appears to be ill-defined at ±∞, resulting in divide-by-zero issues that the variable-precision quadrature methods can't handle well. You can evaluate the integral at wider bounds by increasing the variable precision using the digits function (just remember to set digits back to the default of 32 when done). Setting digits(128) allowed me to evaluate vpa(int(f,x,-1e4,1e4)). You can also more efficiently evaluate your integral over a wider range via 2*vpa(int(f,x,0,1e4)) at lower effective digits settings.
If your goal is to see exactly how much less than one p = 1000 corresponds to, you can use something like vpa(1-2*int(f,x,0,1e4)). At digits(128), this returns
0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000086457415971094118490438229708839420392402555445545519907545198837816908450303280444030703989603548138797600750757834260181259102
Applying double to this shows that it is approximately 8.6e-89.
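If a purely numerical cross-check outside MATLAB is acceptable, an arbitrary-precision package such as Python's mpmath (an alternative tool, not part of the workflow above) can evaluate the same integrand over the full infinite range, since its exponentials never underflow to an exact zero:

from mpmath import mp, mpf, exp, sqrt, pi, quad

mp.dps = 50   # generous working precision, in the spirit of digits()

p = mpf(1000)
s = sqrt(p)

def f(x):
    a = exp(-(x + s)**2)
    b = exp(-(x - s)**2)
    return (b - a)**2 / (a + b) / (2 * sqrt(pi))

# splitting the infinite range at the two Gaussian bumps at +/- sqrt(p)
# keeps the quadrature from missing them; the 0/0 that produces NaN in
# double precision does not occur because mpmath's exponent range is
# essentially unbounded
I = quad(f, [-mp.inf, -s, 0, s, mp.inf])
print(I)      # very close to 1, as expected for large p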