Can't really understand the math behind the Mandelbrot set (fractals)

According to the following article, the Wolfram MathWorld page on the Mandelbrot set, I'm trying to understand how exactly they calculate the values where |L_n(C)| = |z_n| = R_max.
I do understand that R_max is a constant equal to 2 (|z_n| <= 2, i.e. |z_n|^2 <= 4, for all points inside the Mandelbrot set), and I thought L_n(C) should be the number of iterations I spend on each point C, but how do I use these two to calculate
L_1(C) = C
L_2(C) = C(C+1)
...
Thanks for your help!

You start by setting z=C (or, basically equivalently as it happens, z=0) and then repeatedly setting z := z^2+C. Keep doing this until you get a z with |z|>Rmax.
If you never do -- of course in practice you won't go on literally for ever, but will stop after a certain maximum number of iterations -- then your point is in the Mandelbrot set, and if you're drawing a picture you typically colour it black.
If after N iterations you do get |z|>Rmax, then your point wasn't in the Mandelbrot set, and N gives some indication of how thoroughly outside the set it is; if you're drawing a picture, you typically plot the point in a colour determined by N.
The description of L_n on the Wolfram page is pretty bad. What they mean is: define L_n(C) to be the value of z after n iterations when you use the parameter C; then you can plot the curves defined by |L_n(C)|=Rmax. These are the boundaries between the different-coloured regions when you plot points as described above.
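If it helps, here is a minimal MATLAB sketch of the escape-time iteration described above (the function name escapeTime, the maxIter cutoff and the choice Rmax = 2 are my own, not from the article):
function N = escapeTime(C, maxIter)
% Return the iteration count N at which |z| first exceeds Rmax = 2,
% or Inf if it never does within maxIter (i.e. C is treated as in the set).
Rmax = 2;
z = 0;                     % start from z = 0 (equivalent, up to one iteration, to z = C)
for N = 1:maxIter
    z = z^2 + C;           % the value of z after n iterations is exactly L_n(C)
    if abs(z) > Rmax
        return             % C escaped: it lies outside the Mandelbrot set
    end
end
N = Inf;                   % never escaped: colour this point black
Colouring each pixel by escapeTime(C, 100) over a grid of complex values C reproduces the usual picture, and the boundaries between colours are exactly the curves |L_n(C)| = Rmax mentioned above.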


Maple unable to evaluate function in whole range of plot

Maple can helpfully work out the solution to Laplace's equation in a square region and give me the answer in closed form (in terms of an infinite sum). If I try to plot the function of two variables as a 3D plot, it gives me most of the surface but not all of it.
Here is the Maple code which produces the solution and turns it into an expression suitable for plotting
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
the result of which is the solution expressed as an infinite sum.
All good so far. Plotting it via
plot3d(vxy,x=0..1,y=0..1);
gives a result which is fine for x in the full range (0 < x < 1) but only for y between 0 and around 0.9.
I have tried to evalf some point in the unknown region and Maple can't tell me numerical values there. Is there any way to get Maple to "try a bit harder" to evaluate those numbers?
You could try setting the number of terms in the sum yourself, replacing the infinite upper bound with a finite one.
Compare
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=subs(infinity=100,sol1);
plot3d(rhs(vxy),x=0..1,y=0..1);
With
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0;
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100;
sol1:=pdsolve({lapeq,bcs});
vxy:=eval(v(x,y),sol1);
plot3d(vxy,x=0..1,y=0..1);
I'm not a huge fan of chopping the infinite sum at some finite value of the upper bound for n without at least demonstrating, either symbolically or numerically, that it is justified, i.e. that the chopping does not give a false idea of convergence.
So, you asked how to make it work "harder". I'll take that to mean that you too might prefer to let evalf/Sum itself decide whether each infinite numeric sum converges, rather than manually truncating it at some finite upper bound for n.
For fun, and caution, I also divide both numerator and denominator of K by the potentially large exp call (potentially much larger than 1). That may not be necessary here.
restart;
lapeq:=diff(v(x,y),x$2)+diff(v(x,y),y$2)=0:
bcs:=v(x,0)=0,v(0,y)=0,v(1,y)=0,v(x,1)=100:
sol1:=pdsolve({lapeq,bcs}):
vxy:=eval(v(x,y),sol1):
K:=op(1,vxy):
J:=simplify(combine(numer(K)/exp(2*Pi*n)))
/simplify(combine(denom(K)/exp(2*Pi*n))):
F:=subs(__d=J,
  proc(x,y) local k, m, n, r;
    if y<0.8 then
      # away from y=1 the series converges quickly, so sum it directly
      r:=Sum(__d,n=1..infinity);
    else
      # near y=1 convergence is slow: use software floats and split the
      # sum into m interleaved subsums that evalf/Sum can handle separately
      UseHardwareFloats:=false;
      m := ceil(1*abs(y/0.80)^16);
      r:=add(Sum(eval(__d,n=m*n-k),n=1..infinity),
             k=0..m-1);
    end if;
    evalf(r);
  end proc):
plot3d( F, 0..1, 0..0.99 );
Naturally this is slower than mere chopping of terms to obtain a finite sum. And you might be satisfied with some technique that establishes that the excluded terms' sums are negligible.

Combination of SVD perturbation

To apply the combination of SVD perturbation:
I = imread('image.jpg');
Ibw = single(im2double(I));   % assumes a grayscale image; convert with rgb2gray first if it is RGB
[U, S, V] = svd(Ibw);
% calculate the derived image
P = U * power(S, i) * V';     % where i is between 1 and 2
% To compute the combined image of SVD perturbations:
J = (Ibw + (alpha*P))/(1+alpha);   % where alpha is between 0 and 1; Ibw and P are on the same [0,1] scale
I applied this method to a specific face recognition model and noticed that the accuracy increased considerably, so it is very effective. Interestingly, I used the values i = 3/4 and alpha = 0.25, following a paper published in a journal in 2012 in which the authors used those same values. But I didn't pay attention to the fact that i is supposed to be between 1 and 2 (I don't know whether the authors made a typo or really did use 3/4). When I tried changing i to a value greater than 1, the accuracy decreased. So can I use the value 3/4? If yes, how can I then justify my approach?
The paper I read is entitled "Enhanced SVD based face recognition". On page 3, they use the value i = 3/4.
(http://www.oalib.com/paper/2050079)
I would really appreciate your help and opinions.
The idea of keeping the exponent between one and two is to magnify the singular values in order to make them invariant to illumination changes.
Refer to this paper: "A New Face Recognition Method based on SVD Perturbation for Single Example Image per Person" by Daoqiang Zhang, Songcan Chen, and Zhi-Hua Zhou.
Note that when n equals 1, the derived image P is equivalent to the original image I. If we choose n > 1, then the singular values satisfying s_i > 1 will be magnified. Thus the reconstructed image P emphasizes the contribution of the large singular values, while restraining that of the small ones. So by integrating P into I, we get a combined image J which keeps the main information of the original image and is expected to work better against minor changes of expression, illumination and occlusions.
My take:
When you scale the singular values via the exponent, you are basically introducing a non-linearity, so it's possible that for a specific dataset, scaling down the singular values may be beneficial. It's like adjusting the gamma correction factor on a monitor.
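To see that non-linearity concretely, here is a tiny MATLAB sketch (the singular values are made up, purely for illustration):
s = [10 5 1 0.5 0.1];          % made-up singular values
disp(s.^1.5)                   % i > 1: values above 1 are magnified, values below 1 shrink
disp(s.^0.75)                  % i < 1 (e.g. 3/4): the spread is compressed instead
So with i = 3/4 you are de-emphasizing the dominant singular values relative to the small ones, which is the opposite of the magnification argument in the quoted paper; whether that helps is an empirical question for your particular dataset, much like the gamma-correction analogy suggests.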

Creating image profiles in some parts of the image

I've been struggling with a problem in MATLAB for a while. :)
I have an image (A.tif) in which I would like to find maxima (with a defined threshold), or more specifically the coordinates of those maxima. My goal is to create short profiles on the image crossing these maxima (let's say ±20 pixels on either side of each maximum).
I tried this:
[r c]=find(A==max(max(A)));
I suppose that r and c are the coordinates of a maximum (only one/the first, or every maximum?).
How can I pass these coordinates to, for example, the improfile function?
I think it should be done using nested loops?
Thanks for every suggestion.
Your code is working, but it only finds the coordinates of the global maximum. I would like to find multiple maxima (with a defined threshold) and properly use their coordinates to create multiple profiles crossing every maximum found. I have a little problem with the improfile function:
improfile(IMAGE,[starting point],[ending point])
Let's say that I get a [rows, columns] matrix with the coordinates of each maximum, and I'm trying to create a one-direction profile which starts in the same row as the maximum (about 20 pixels before it) and of course ends in the same row (also about 20 pixels after it).
Is this the correct expression: improfile(IMAGE,[rows columns-20],[rows columns+20]); ? It plots something, but it seems to only join the maxima rather than make intensity profiles.
You're not giving enough information, so I had to guess a few things. You should apply max() to the vectorized image and store the index:
[~,idx] = max(I(:))
Then transform this into x and y coordinates (note that ind2sub returns [row, column], i.e. [y, x]):
[iy,ix] = ind2sub(size(I),idx)
These are the y and x coordinates of the maximum of the image. It really depends on what profile section you want. Something like this works:
I = imread('peppers.png');
Ir = I(:,:,1);
[~,idx] = max(Ir(:))
[iy,ix] = ind2sub(size(Ir),idx)    % iy = row (y), ix = column (x)
improfile(Ir,[1 ix],[iy iy])       % profile along the row of the maximum, from the left edge to the maximum
EDIT:
If you instead want to find the k largest values, and not just the maximum, you can do an easy sort:
[~,idx] = sort(I(:),'descend');
idxk = idx(1:k);
[iy,ix] = ind2sub(size(I),idxk)    % rows (y) and columns (x) of the k largest values
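As a rough sketch of how this ties back to the ±20-pixel profiles asked about (the value of k, the window size and the clamping to the image border are my own assumptions):
I = imread('peppers.png');
Ir = I(:,:,1);
k = 5;                                       % number of maxima to use (assumption)
[~,idx] = sort(Ir(:),'descend');
[rows,cols] = ind2sub(size(Ir), idx(1:k));   % rows and columns of the k largest values
figure; hold on
for j = 1:k
    x1 = max(cols(j)-20, 1);                 % clamp the window to the image borders
    x2 = min(cols(j)+20, size(Ir,2));
    c = improfile(Ir, [x1 x2], [rows(j) rows(j)]);   % horizontal profile through the j-th maximum
    plot(c)
end
hold off
Note that the k largest pixel values often sit inside the same bright region, so in practice you may want a threshold plus something like imregionalmax to pick out distinct maxima first.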
Please delete your "reply" and instead edit your original post, where you can define your problem better.

Finding the maximum value of a function under uncertainty

I have three values X, Y and Z. These values range between 0 and 1 (0 and 1 included).
When I call a function f(X,Y,Z), it returns a value V (also between 0 and 1). My goal is to choose X, Y, Z so that the returned value V is as close as possible to 1.
The selection process should be automated, and the right values for X, Y, Z are unknown.
Due to my use case it is possible to set Y and Z to 1 (the value 1 has no influence on the output) and search for the best value of X.
After that I can fix X at that value and do the same for Y, and then the same procedure for Z.
How can I find the maximum of the function? Is there some kind of gradient descent or hill-climbing algorithm, or something like that?
The whole module is written in Perl, so maybe there is a package for Perl that can solve this problem?
You can use simulated annealing. It's a multi-variable optimization technique, also used to get approximate solutions to the Travelling Salesperson problem, and it's one of the search algorithms mentioned in Russell and Norvig's introductory AI book.
It's a stochastic hill-climbing algorithm which relies on random moves, and it won't necessarily give you the 'optimal' answer. You can also vary the number of iterations it runs according to your computational/time needs.
http://en.wikipedia.org/wiki/Simulated_annealing
http://www1bpt.bridgeport.edu/sed/projects/449/Fall_2000/fangmin/chapter2.htm
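This thread is mostly MATLAB, so here is a minimal MATLAB sketch of the annealing loop just to show the structure (the function name, step size, iteration count and cooling rate are arbitrary choices of mine; the same loop ports directly to Perl):
function [best, bestV] = annealMax(f)
% Simulated-annealing sketch for maximizing f over [0,1]^3.
% f is a handle taking a 1x3 vector and returning a value in [0,1].
x = rand(1,3);  v = f(x);
best = x;  bestV = v;
T = 1;                                            % initial "temperature"
for iter = 1:5000
    xNew = min(max(x + 0.05*randn(1,3), 0), 1);   % small random step, clipped to [0,1]
    vNew = f(xNew);
    if vNew > v || rand < exp((vNew - v)/T)       % always accept improvements, sometimes accept worse moves
        x = xNew;  v = vNew;
        if v > bestV, best = x; bestV = v; end
    end
    T = 0.999*T;                                  % cool down so worse moves become rarer
end
end
For example, annealMax(@(p) 1 - sum((p - [0.3 0.7 0.5]).^2)) should land near [0.3 0.7 0.5].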
I suggest you take a look at Math::Amoeba, which implements the Nelder–Mead downhill simplex method for multidimensional function minimization (to maximize f, just minimize -f).
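Not Perl, but for comparison, MATLAB's fminsearch uses the same Nelder–Mead simplex idea; a possible sketch (the toy objective and the crude clipping to [0,1] are just for illustration):
f = @(p) 1 - sum((p - [0.3 0.7 0.5]).^2);   % stand-in for the black-box f(X,Y,Z)
negf = @(p) -f(min(max(p, 0), 1));          % minimize -f, crudely clipped to [0,1]
best = fminsearch(negf, [0.5 0.5 0.5])      % should end up near [0.3 0.7 0.5]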

Arbitrary distribution -> Uniform distribution (Probability Integral Transform?)

I have 500,000 values for a variable derived from financial markets. Specifically, this variable represents distance from the mean (in standard deviations). This variable has an arbitrary distribution. I need a formula that will allow me to select a range around any value of this variable such that an equal (or nearly equal) number of data points falls within that range.
This will allow me to then analyze all of the data points within a specific range and to treat them as "similar situations to the input."
From what I understand, this means that I need to convert it from an arbitrary distribution to a uniform distribution. I have read (but barely understood) that what I am looking for is called the "probability integral transform."
Can anyone assist me with some code (Matlab preferred, but it doesn't really matter) to help me accomplish this?
Here's something I put together quickly. It's not polished and not perfect, but it does what you want to do.
clear
randList=[randn(1e4,1);2*randn(1e4,1)+5];
[xCdf,xList]=ksdensity(randList,'npoints',5e3,'function','cdf');
xRange=getInterval(5,xList,xCdf,0.1);
and the function getInterval is
function out=getInterval(yPoint,xList,xCdf,areaFraction)
yCdf=interp1(xList,xCdf,yPoint);
yCdfRange=[-areaFraction/2, areaFraction/2]+yCdf;
out=interp1(xCdf,xList,yCdfRange);
Explanation:
The CDF of the random distribution is shown below by the line in blue. You provide a point (here 5 in the input to getInterval) about which you want a range that gives you 10% of the area (input 0.1 to getInterval). The chosen point is marked by the red cross and the
interval is marked by the lines in green. You can get the corresponding points from the original list that lie within this interval as
newList=randList(randList>=xRange(1) & randList<=xRange(2));
You'll find that, on average, the number of points in this example is ~2000, which is 10% of numel(randList):
numel(newList)
ans =
2045
NOTE:
Please note that this was done quickly and I haven't made any checks to see if the chosen point is outside the range or if yCdfRange falls outside [0 1], in which case interp1 will return a NaN. This is fairly straightforward to implement, and I'll leave that to you.
Also, ksdensity is very CPU intensive. I wouldn't recommend increasing npoints to more than 1e4. I assume you're only working with a fixed list (i.e., you have a list of 5e5 points that you've obtained somehow and now you're just running tests/analyzing it). In that case, you can run ksdensity once and save the result.
I do not speak Matlab, but you need to find quantiles in your data. This is Mathematica code which would do this:
In[88]:= data = RandomVariate[SkewNormalDistribution[0, 1, 2], 10^4];
Compute quantile points:
In[91]:= q10 = Quantile[data, Range[0, 10]/10];
Now form pairs of consecutive quantiles:
In[92]:= intervals = Partition[q10, 2, 1];
In[93]:= intervals
Out[93]= {{-1.397, -0.136989}, {-0.136989, 0.123689}, {0.123689, 0.312232},
          {0.312232, 0.478551}, {0.478551, 0.652482}, {0.652482, 0.829642},
          {0.829642, 1.02801}, {1.02801, 1.27609}, {1.27609, 1.6237},
          {1.6237, 4.04219}}
Verify that the splitting points separate data nearly evenly:
In[94]:= Table[Count[data, x_ /; i[[1]] <= x < i[[2]]], {i, intervals}]
Out[94]= {999, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000}
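For completeness, here is a hedged MATLAB sketch of the same quantile / probability-integral-transform idea (it assumes the Statistics Toolbox; the test data and the 10% fraction mirror the first answer, and the variable names are my own):
vals = [randn(2e4,1); 2*randn(2e4,1) + 5];   % stand-in for the 500,000 market values
u = tiedrank(vals) / numel(vals);            % empirical CDF value of each point: approximately Uniform(0,1)
x0 = 5;  frac = 0.10;                        % centre point and fraction of the data wanted
u0 = mean(vals <= x0);                       % empirical CDF at x0
lo = quantile(vals, max(u0 - frac/2, 0));
hi = quantile(vals, min(u0 + frac/2, 1));
inRange = vals(vals >= lo & vals <= hi);     % roughly frac*numel(vals) "similar situations"
numel(inRange) / numel(vals)                 % should be close to frac
Like the ksdensity answer above, this doesn't handle points near the edges of the distribution, where the clipped interval will contain fewer than frac of the points.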