I know how to generate a basic Mandelbrot fractal based on the iteration Z -> Z² + C, which breaks down to:
X -> X² - Y² + A
Y -> 2XY + B
Is there a basic definition for a Julia set though on the same simple level, or at least a relatively simple level?
The "classic" Julia set is described by Zn+1 = Zn + C just as the Mandelbrot set.
However, while the Mandelbrot set starts with Z0 = 0 and varies C across "the pixels", the Julia set varies Z0 across "the pixels" and has a constant C throughout the set.
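For example, a minimal MATLAB sketch along those lines (the value of c, the grid size and the iteration count are just example choices; complex arithmetic is used instead of the X/Y component form):

c = -0.8 + 0.156i;                            % the fixed constant C (example value)
[x, y] = meshgrid(linspace(-1.5, 1.5, 600));  % Z0 varies across the pixels
z = x + 1i*y;
count = zeros(size(z));
for k = 1:100
    z = z.^2 + c;                             % same iteration as the Mandelbrot set
    count = count + (abs(z) <= 2);            % how long each pixel stays bounded
end
imagesc(count); axis image; axis off;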
I have been working on this all day but I haven't figured it out yet. So I thought I may as well ask on here and see if someone can help.
The problem is as follows:
                 ------------
F(input)(t) --> |            | --> F(output)(t)
                 ------------
Given a sample with a known length, density, and spring constant (or Young's modulus), find the 'output' force against time when a known variable force is applied at the 'input'.
My current solution can already discretise the sample into finite elements; however, I am struggling to figure out how the force should be transmitted, given that the transmission speed in the material itself changes with the force (using the equation c = sqrt(force*area/density)).
If someone could point me to a solution or any other helpful resources, it would be highly appreciated.
A method for applying damping to the system (losses to the environment via sound or internal heating) would also be helpful, but I should be able to figure that part out myself.
I will remodel the problem in the following way:
                ___                      ___
F_input(t) --> |___|--/\/\/\/\/\/\/\/\--|___|
At time t=0 the system is in equilibrium, the distance between the two objects is L, the mass of the left one (object 1) is m1 and the mass of the right one (object 2) is m2.
                      ___                      ___
F_input(t) --> |<-x->|___|--/\/\/\/\/\/-|<-y->|___|
During the application of the force F_input(t), for t > 0, denote by x the signed displacement of object 1 from its position at time t=0, and by y the signed displacement of object 2 from its position at time t=0 (see the diagram above). Then the system is governed by the following system of ordinary differential equations:
x'' = -(k/m1) * x + (k/m1) * y + F_input(t)/m1
y'' = (k/m2) * x - (k/m2) * y
When you solve it, you get the change of x and y with time, i.e. you get two functions x = x(t), y = y(t). Then, the output force is
F_output(t) = m2 * y''(t)
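For example, you could integrate this numerically in MATLAB with ode45 along these lines (k, m1, m2 and the input force below are just placeholder values; the state vector is u = [x; x'; y; y']):

k = 100;  m1 = 1;  m2 = 1;
F_input = @(t) 10*sin(2*pi*t);                        % example input force
rhs = @(t, u) [u(2);
    -(k/m1)*u(1) + (k/m1)*u(3) + F_input(t)/m1;
    u(4);
    (k/m2)*u(1) - (k/m2)*u(3)];
[t, u] = ode45(rhs, [0 5], zeros(4, 1));              % start from equilibrium at rest
F_output = k*(u(:,1) - u(:,3));                       % = m2 * y''(t), the transmitted force
plot(t, F_output)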
The problem isn't well defined at all. For starters, for F_out to exist there must be some constraint that it obeys; otherwise, the system will have more unknowns than equations.
The discretization will lead you to a system like
M*xpp = -K*x + F
with m=ρ*A*Δx and k=E*A/Δx
But to solve this system of n equations, you either need to know F_in and F_out, or prescribe the motion of one of the nodes, like x_n = 0, which would lead to xpp_n = 0.
As for damping, you usually employ proportional damping: a damping matrix proportional to the stiffness matrix, D = α*K, multiplied by the vector of velocities.
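For instance, the assembled system could look roughly like this (all numerical values are placeholders, and the free-free boundary treatment is just one possible choice):

n = 50;  E = 200e9;  rho = 7800;  A = 1e-4;  dx = 0.01;  alpha = 1e-5;
m = rho*A*dx;                                          % lumped mass per element
k = E*A/dx;                                            % stiffness per element
M = m*eye(n);
K = k*(2*eye(n) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1));
K(1,1) = k;  K(n,n) = k;                               % free ends; fix a node as needed
D = alpha*K;                                           % proportional damping
% equations of motion:  M*xpp = -K*x - D*xp + F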
Where can I find the proof for the linear context free languages pumping lemma?
I am looking for the proof that is specific to linear context-free languages.
I also looked for a formal proof and could not find one. I'm not sure whether the below is a formal proof, but it may give you some idea.
The lemma: For every linear context-free language L there is an n > 0 such that every w in L with |w| > n can be written as w = uvxyz with |vy| > 0, |uvyz| <= n, and uv^i x y^i z in L for every i >= 0.
"Proof":
Imagine a parse tree for some long string w in L with start symbol S, and assume the tree contains no useless nodes. If w is long enough, at least one non-terminal will repeat. Let's call the first repeating non-terminal going down the tree X, its first occurrence (from the top) X[1] and its second occurrence X[2]. Let x be the substring of w generated by X[2], vxy the substring generated by X[1], and uvxyz the full string w generated by S.

Since the derivation from X[1] to X[2] generates v and y, we could in principle build a new tree in which this part of the derivation is replicated any number of times before continuing from X[1] downwards. This proves that uv^i x y^i z is in L for every i >= 0. Since our tree contains no useless nodes, the derivation from X[1] to X[2] must generate some terminals, and this proves that |vy| > 0.

L is linear, which means that every level of the tree contains a single non-terminal symbol. Each node in the tree covers a substring of w whose length is bounded by a linear function of the node's height. The derivation from S to X[2] covers u, v, y and z of w, and the number of tree levels traveled is bounded by (2 * the number of non-terminal symbols + 1). Since the number of levels traveled is bounded and the grammar is linear, this also bounds the yield of the derivation from S to X[2], which means |uvyz| <= n for some n >= 0.
Note: Keep in mind that we pick X[1] and X[2] top down, in contrast to how the "regular" pumping lemma for context-free grammars is usually proved. In the "regular" pumping lemma there is a bound on the height of X[1] and therefore a bound on |vxy|. In our case there is no bound on the height of X[1]; it can be as high as the length of w requires. There is, however, a bound on the number of tree levels from S to X[2]. This would not mean much if the grammar were not linear, since the yield of the derivation from S to X[2] would then be bounded only by the height of the tree (which is unbounded). But in the linear case this yield is bounded, and therefore |uvyz| <= n.
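As a standard illustration of how the lemma is applied (this example is not part of the proof above): the language L = {a^m b^m a^k b^k : m, k >= 0} is context-free but not linear. Take w = a^n b^n a^n b^n with n the constant from the lemma. Because |uvyz| <= n, the prefix uv lies entirely inside the first block of a's and the suffix yz entirely inside the last block of b's, so v consists only of a's from the first block and y only of b's from the last block, with |vy| > 0. Pumping down (i = 0) removes a's from the first block or b's from the last block without touching their matching partners, so uxz is not in L, contradicting the lemma.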
Say I give something like AB+AB+BA to matlab (or mupad), where A and B do not commute, and ask it to simplify it. The answer should be: 2AB+BA. Can this be done in matlab or mupad?
Edit:
OK, this is starting to feel ridiculous. I'm trying to do this in either matlab or mupad, and it's frustrating not knowing how to do what should be the simplest things, and not being able to find the answers right away via Google.
I want to expand the following, multiplied together, as a taylor series:
eq1 := exp(g*l*B):
eq2 := exp(l*A):
eq3 := exp((1-g)*l*B):
g is gamma, l is lambda (I don't know how to represent either of these in matlab or mupad). A and B don't commute. I want to multiply the three exponentials together, expand, select all terms of a given power in lambda, and simplify the result. Is there a simple way to do this? Or should I give up and go to another system, like maple?
This is mupad, not matlab:
operator("x", _vector_product, Binary, 1999):
A x B + A x B + B x A
returns
2 A x B + B x A
The vector product is used simply because it matches the described requirements.
I can't figure out how a genetically programmed A.I. can determine when there should be a constant in the final equation. If I take the formula F(m) = m*a, i.e. F(m) = m*9.8, how can the A.I. know what the real number 9.8 actually is? I understand that instead of putting the final number in the binary tree, you can put a symbol that describes a constant and then later calculate or guess its value in some way.
Thank you
Given a predefined set of constants (part of the terminal set) they'll be combined to form new constants (using a tree-representation, any sub-tree with only numeric constants as leaves can itself be thought of as a new numeric constant).
Even with a single constant (c) the system will create:
the 1.0 constant (constant divided by itself: c / c);
the 2.0 constant (1.0 + 1.0 i.e. c / c + c / c);
the 0.5 constant (1.0 / 2.0 i.e. c / c / (c / c + c / c));
many constants will be created this way (if you are lucky... 9.8).
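A quick numeric check of the combinations above (c is an arbitrary nonzero constant):

c = 3.7;
one  = c/c                      % 1.0
two  = c/c + c/c                % 2.0
half = (c/c) / (c/c + c/c)      % 0.5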
Sometimes special terminals named "ephemeral random constant" (Koza) are used. For each ephemeral in the initial population, a random number in a specified range is generated. Then these random constants are moved around and combined.
Anyway, even with the use of the ephemeral random constant, GP can be hard put to generate the right constants (Koza said "the finding of numeric constants is a skeleton in the GP closet").
So other techniques can be used during/after the evolution, e.g. numeric mutation, hill climbing...
These hybrid systems often have significant improvements in the success ratios (at least for regression problems).
I'm having trouble creating a random vector V in Matlab subject to the following set of constraints: (given parameters N,D, L, and theta)
The vector V must be N units long
The elements must have an average of theta
No 2 successive elements may differ by more than +/-10
D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or equation form are equally acceptable. Matlab is just a convenient prototyping tool for me, but the final algorithm will be in java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution from any standard distribution. I want a pseudo randomly generated sequence of values that satisfy the constraints given a parameter set.
The system I'm trying to approximate is a chain of N links of link length L where the end of the chain is D away from the other end in the direction of theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
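For concreteness, a candidate V can be checked against the four rules like this (the helper name and the tolerance are arbitrary):

isValid = @(V, N, D, L, theta, tol) numel(V) == N && ...
    abs(mean(V) - theta) <= tol && ...
    all(abs(diff(V)) <= 10) && ...
    abs(D - sum(L*cosd(V - theta))) <= tol;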
I would start by creating a "valid" vector. That should be possible - say, calculate it so that every entry has the same value.
Once you have that vector, I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword - if a shuffle would violate one of your rules, you just don't do it.
As transformations I come up with:
switch two entries
modify the value of one entry and adjust a second one to keep the 4th condition satisfied (theoretically you could just keep shuffling two entries until the condition is fulfilled, but the chance of that happening is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. Theoretically you should be able to reach all valid vectors; practically, you could construct several different "start" vectors so it won't take that long.
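A minimal sketch of the swap move (the loop count and names are arbitrary). Note that swapping two entries leaves mean(V) and sum(L*cosd(V-theta)) unchanged, so only the successive-difference rule needs to be re-checked:

V = V0;                                    % V0: any vector that already satisfies all rules
for it = 1:1e5
    idx = randi(numel(V), 1, 2);
    Vp = V;  Vp(idx) = Vp(idx([2 1]));     % proposed swap of two entries
    if all(abs(diff(Vp)) <= 10)            % reject if the +/-10 rule would be violated
        V = Vp;
    end
end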
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to a Wiener process, so I started with that; you can start with anything that is random yet reasonable. I then use that as a starting point for an optimization that tries to satisfy constraints 2, 3 and 4. The closer your initial value is to a valid vector (satisfying all your conditions), the better the convergence.
function series = generate_series(D, L, N,theta)
s(1) = theta;
for i=2:N,
s(i) = s(i-1) + randn(1,1);
end
f = @(x) objective(x,D,L,N,theta);
q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf)
[sf,val] = fminunc(f,s,q);
val
series = sf;
function value = objective(s,D,L,N,theta)
a = abs(mean(s)-theta);            % deviation from the required mean (constraint 2)
b = abs(D-sum(L*cos(s-theta)));    % deviation from the required sum (constraint 4)
c = 0;                             % penalty for successive differences above 10 (constraint 3)
for i=2:N,
u =abs(s(i)-s(i-1)) ;
if u>10,
c = c + u;
end
end
value = a^2 + b^2+ c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?); see the questions by the other commenters. Still, you will have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener process to act as a good initialization.
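For example (with the two functions above saved as generate_series.m; the parameter values are just examples from the ranges in the question, with theta taken in radians here):

s = generate_series(10, 1, 50, pi/4);
plot(s)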
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change of 10 degrees between successive angles (which I first convert to radians), such that the distance and direction from start to end, the link length, and the number of links are all as specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and the specified theta):
angles = zeros(N, 1);
for link = 2:N
    angles(link) = angles(link - 1) + (2*rand() - 1)*(10*pi/180);   % step of at most +/-10 degrees
end
Use a genetic algorithm (or another optimization method) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D_est = sqrt(dx^2 + dy^2);
theta_est = atan2(dy, dx);
The cost is now just the difference between the vector given by D_est and theta_est above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the max-change-of-10-degrees rule; perhaps violating it should simply make the cost function enormous? There may be a cleaner way to specify sequence constraints in optimization algorithms (I don't know how).
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
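For example, one way (not necessarily the best) to fold the end-point error and the 10-degree rule into a single cost, assuming theta is given in radians and using an arbitrary penalty weight (save as chain_cost.m):

function cost = chain_cost(angles, D, L, theta)
% end point actually reached by the chain
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
% squared distance to the prescribed end point
cost = (dx - D*cos(theta))^2 + (dy - D*sin(theta))^2;
% heavy penalty whenever two successive angles differ by more than 10 degrees
excess = max(abs(diff(angles)) - 10*pi/180, 0);
cost = cost + 1e6*sum(excess.^2);
end

This can then be handed to a GA or, say, fminsearch: best = fminsearch(@(a) chain_cost(a, D, L, theta), angles);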
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1,1, 'int8')];
for ii = 2:N
V(ii) = g();
while abs(V(ii)-V(ii-1)) > diffVal
V(ii) = g();
end
end
Inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cos(V-theta))
is a bit of a strange one... cos(V-theta) is a specific way to rescale the data to the [-1 +1] interval, which the multiplication with L then scales to [-L +L]. On first sight, you'd expect the sum to average out to 0.
And indeed it does: the expected value of cos(x) when x is a random variable uniformly distributed over a full period [0 2*pi] is 0 (it is |cos(x)| whose expected value equals 2/pi, see here for example). Ignoring for the moment the fact that our limits are different from [0 2*pi], the expected value of sum(L*cos(V-theta)) would simply reduce to 0, give or take random fluctuations.
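A quick numerical check of those expectations:

x = 2*pi*rand(1e6, 1);
mean(cos(x))          % close to 0
mean(abs(cos(x)))     % close to 2/pi (about 0.6366)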
How you can force this to equal some other constant D is beyond me...can you perhaps elaborate on that a bit more?