I'm new to Clingo, and I want to know how to express an OR condition inside a count aggregate.
I'm writing this rule:
countPreviousSlots(C1, C2, TotalCount) :-
    firstLecture(C2, S2, G2, I2),
    TotalCount = #count{S1, G1, I1 : slot(S1, G1, I1, C1),
                        (S1 < S2; (S1 == S2, G1 < G2); (S1 == S2, G1 == G2, I1 < I2))},
    slot(_, _, _, _, C1).
But clingo doesn't accept the round brackets. How do I formulate this condition in clingo?
And what's the difference if I move the condition out of the aggregate and write:
countPreviousSlots(C1, C2, TotalCount) :-
    firstLecture(C2, S2, G2, I2),
    TotalCount = #count{S1, G1, I1 : slot(S1, G1, I1, C1)},
    slot(_, _, _, _, C1),
    (S1 < S2; (S1 == S2, G1 < G2); (S1 == S2, G1 == G2, I1 < I2)).
You can formulate the lexicographic ordering with an auxiliary predicate. Disjunction can be expressed nicely with multiple rules sharing the same head. Here is a possible example:
num(1..2).

lexorder(X1, X2, Y1, Y2) :-
    num(X1), num(X2), num(Y1), num(Y2),
    X1 < Y1.

lexorder(X1, X2, Y1, Y2) :-
    num(X1), num(X2), num(Y1), num(Y2),
    X1 = Y1,
    X2 < Y2.
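Applied to the rule from the question, the pieces could fit together like this (a sketch; lexorder3/6 and the domain predicate course/1 are hypothetical names of mine, and I assume the slot/4 form used inside your aggregate):

% one rule per disjunct of the lexicographic comparison
lexorder3(S1, G1, I1, S2, G2, I2) :-
    slot(S1, G1, I1, _), firstLecture(_, S2, G2, I2),
    S1 < S2.
lexorder3(S1, G1, I1, S2, G2, I2) :-
    slot(S1, G1, I1, _), firstLecture(_, S2, G2, I2),
    S1 == S2, G1 < G2.
lexorder3(S1, G1, I1, S2, G2, I2) :-
    slot(S1, G1, I1, _), firstLecture(_, S2, G2, I2),
    S1 == S2, G1 == G2, I1 < I2.

countPreviousSlots(C1, C2, TotalCount) :-
    course(C1),   % C1 must be bound outside the aggregate
    firstLecture(C2, S2, G2, I2),
    TotalCount = #count{S1, G1, I1 : slot(S1, G1, I1, C1),
                        lexorder3(S1, G1, I1, S2, G2, I2)}.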
The difference is that you count different things. The set you are creating in the first version is smaller than in the second version, because the comparison inside the braces filters it further, whereas the second version's set has fewer constraints. Moreover, the second version is not safe: S1 (and the other aggregate variables) are not bound in the rule body, so the grounder will not know how to replace S1 with a ground term. The variables inside the count statement are scoped to within the braces.
I am trying to build a deep learning model in Julia. I have two models, m1 and m2, which are neural networks. Here is my code:
using Flux

function even_mask(x)
    s1, s2 = size(x)
    weight_mask = zeros(s1, s2)
    weight_mask[2:2:s1, :] = ones(Int(s1/2), s2)
    return weight_mask
end

function odd_mask(x)
    s1, s2 = size(x)
    weight_mask = zeros(s1, s2)
    weight_mask[1:2:s1, :] = ones(Int(s1/2), s2)
    return weight_mask
end

function even_duplicate(x)
    s1, s2 = size(x)
    x_ = zeros(s1, s2)
    x_[1:2:s1, :] = x[1:2:s1, :]
    x_[2:2:s1, :] = x[1:2:s1, :]
    return x_
end

function odd_duplicate(x)
    s1, s2 = size(x)
    x_ = zeros(s1, s2)
    x_[1:2:s1, :] = x[2:2:s1, :]
    x_[2:2:s1, :] = x[2:2:s1, :]
    return x_
end

function Even(m)
    x -> x .+ even_mask(x) .* m(even_duplicate(x))
end

function InvEven(m)
    x -> x .- even_mask(x) .* m(even_duplicate(x))
end

function Odd(m)
    x -> x .+ odd_mask(x) .* m(odd_duplicate(x))
end

function InvOdd(m)
    x -> x .- odd_mask(x) .* m(odd_duplicate(x))
end

m1 = Chain(Dense(4,6,relu), Dense(6,5,relu), Dense(5,4))
m2 = Chain(Dense(4,7,relu), Dense(7,4))

forward = Chain(Even(m1), Odd(m2))
inverse = Chain(InvOdd(m2), InvEven(m1))

function loss(x)
    z = forward(x)
    return 0.5*sum(z.*z)
end

opt = Flux.ADAM()
x = rand(4,100)

for i = 1:100
    Flux.train!(loss, Flux.params(forward), x, opt)
    println(loss(x))
end
The forward model is a combination of m1 and m2. I need to optimize m1 and m2 so that both the forward and inverse models are trained. But it seems that params(forward) is empty. How can I train my model?
I don't think plain functions can be used as layers in Flux. You need to use the @functor macro to add the extra functionality to collect parameters: https://fluxml.ai/Flux.jl/stable/models/basics/#Layer-helpers-1
In your case, rewriting Even, InvEven, Odd and InvOdd like this should help:
struct Even
    model
end

(e::Even)(x) = x .+ even_mask(x) .* e.model(even_duplicate(x))

Flux.@functor Even
After adding this definition,
Flux.params(Even(m1))
should return a non-empty list.
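As a quick sanity check (my own sketch, not part of the original answer): with the struct version above, the wrapper should expose exactly the parameters of the inner chain.

e = Even(m1)
# Both collections contain the same six arrays:
# the weight and bias of each of the three Dense layers in m1.
@assert length(Flux.params(e)) == length(Flux.params(m1))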
EDIT
An even simpler way to implement Even and friends is to use the built-in SkipConnection layer:
Even(m) = SkipConnection(Chain(even_duplicate, m),
                         (mx, x) -> x .+ even_mask(x) .* mx)
I suspect this is a version difference, but with Julia 1.4.1 and Flux v0.10.4 I get the error BoundsError: attempt to access () at index [1] when running your training loop. I need to replace the data with
x = [(rand(4,100), 0)]
Otherwise the loss is applied to each entry of the array x, since train! splats loss over x.
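Conceptually, train! does something like the following for each entry d of the data collection, which is why every entry must be a tuple of arguments for loss (a sketch of the mechanism, not Flux's literal source):

for d in x
    gs = Flux.gradient(() -> loss(d...), Flux.params(forward))
    Flux.Optimise.update!(opt, Flux.params(forward), gs)
end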
The next error, mutating arrays is not supported, is due to the implementation of *_mask and *_duplicate. These functions construct an array of zeros and then mutate it by replacing values from the input.
You can use Zygote.Buffer to implement this code in a way that can be differentiated.
using Flux
using Zygote: Buffer

function even_mask(x)
    s1, s2 = size(x)
    weight_mask = Buffer(x)
    weight_mask[2:2:s1, :] = ones(Int(s1/2), s2)
    weight_mask[1:2:s1, :] = zeros(Int(s1/2), s2)
    return copy(weight_mask)
end

function odd_mask(x)
    s1, s2 = size(x)
    weight_mask = Buffer(x)
    weight_mask[2:2:s1, :] = zeros(Int(s1/2), s2)
    weight_mask[1:2:s1, :] = ones(Int(s1/2), s2)
    return copy(weight_mask)
end

function even_duplicate(x)
    s1, s2 = size(x)
    x_ = Buffer(x)
    x_[1:2:s1, :] = x[1:2:s1, :]
    x_[2:2:s1, :] = x[1:2:s1, :]
    return copy(x_)
end

function odd_duplicate(x)
    s1, s2 = size(x)
    x_ = Buffer(x)
    x_[1:2:s1, :] = x[2:2:s1, :]
    x_[2:2:s1, :] = x[2:2:s1, :]
    return copy(x_)
end
Even(m) = SkipConnection(Chain(even_duplicate, m),
                         (mx, x) -> x .+ even_mask(x) .* mx)
InvEven(m) = SkipConnection(Chain(even_duplicate, m),
                            (mx, x) -> x .- even_mask(x) .* mx)
Odd(m) = SkipConnection(Chain(odd_duplicate, m),
                        (mx, x) -> x .+ odd_mask(x) .* mx)
InvOdd(m) = SkipConnection(Chain(odd_duplicate, m),
                           (mx, x) -> x .- odd_mask(x) .* mx)
m1 = Chain(Dense(4,6,relu), Dense(6,5,relu), Dense(5,4))
m2 = Chain(Dense(4,7,relu), Dense(7,4))
forward = Chain(Even(m1), Odd(m2))
inverse = Chain(InvOdd(m2), InvEven(m1))
function loss(x, y)
    z = forward(x)
    return 0.5*sum(z.*z)
end

opt = Flux.ADAM(1e-6)
x = [(rand(4,100), 0)]

function train!()
    for i = 1:100
        Flux.train!(loss, Flux.params(forward), x, opt)
        println(loss(x[1]...))
    end
end
At this point, you get to the real fun of deep networks. After one training step, training diverges to NaN with the default learning rate. Reducing the initial learning rate to 1e-6 helps, and the loss looks like it is decreasing.
I would like to achieve the above for the following:
Rn = 0.009;            % resolution of simulation (in m^3)
Xs = -1 : Rn : 1;
Ys = -1 : Rn : 1;
Zs =  0 : Rn : 1;
[X, Y, Z] = meshgrid(Xs, Ys, Zs);
alpha = atan2(Z, X);
ze = X.^2 + Y.^2;      % define some condition
m = 0.59;              % manual input
cond = (pi/3 <= alpha) & ...
       (alpha <= (2*pi/3)) & ...
       (m <= Z) & ...
       (Z <= ze);      % more conditions
xl = nnz(cond);        % the number of non-zero elements
f = abs(xl*1000 - 90)  % guessing m to get f as low as possible
How do I turn m into a variable for some f function so I can call fminsearch to quickly find the corresponding m for f ≈ 0?
In order to use m as a variable, you need to define a function handle. So you need to write:
cond = @(m) ((pi/3) <= alpha) & (alpha <= (2*pi/3)) & (m <= Z) & (Z <= ze);
However, you cannot pass a function handle to the nnz routine, since it only accepts matrices as input. The way out is that cond contains only logical values, so you can simply sum over its elements and get the same result as with nnz.
The only remaining issue is how to get that sum into fminsearch. Unfortunately, I do not have access to fminsearch to try this, but I would assume you can reshape cond into a vector and take the element-wise product (i.e. .*) with a vector of ones to obtain the sum. You'll have to try that out; I'm not sure about it.
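For what it's worth, here is an untested sketch of how the pieces could fit together; the triple sum collapses the 3-D logical array (applying nnz to the evaluated array cond(m) would work just as well), and the manual value 0.59 from the question serves as the starting point:

cond = @(m) (pi/3 <= alpha) & (alpha <= 2*pi/3) & (m <= Z) & (Z <= ze);
f = @(m) abs(sum(sum(sum(cond(m))))*1000 - 90);  % same count as nnz
m_best = fminsearch(f, 0.59)                     % minimize f over m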
{a^p b^p; p is a prime number}
{a^p b^p; p is a prime number, m is a fixed number and m≥p≥0}
How do I prove whether these are regular or context-free languages (or not)?
1) L = {a^n b^n; n is a prime number}:
The proof can be done by contradiction. Suppose L is regular, and let p be the pumping length.
Pick a prime q with q >= p (one exists, since there are infinitely many primes). The test string is w = a^q b^q; w belongs to L, and |w| = 2q >= p.
We subdivide w = xyz. The pumping lemma gives three conditions:
from the third condition, |xy| <= p <= q, so xy contains only a's;
from the second condition, |y| > 0, so y has the form y = a^k, where 1 <= k <= p;
from the first condition, xy^iz belongs to L for i = 0, 1, 2, ... So if you pump down (i = 0) you get:
w' = a^(q - k) b^q, and w' does not belong to L (because the numbers of a's and b's differ).
This contradiction proves that L is not regular.
I'm trying to calculate Euler-Lagrange equations for a robotic structure.
I'll use q to indicate the vector of the joint variables.
In my code, I use
syms t;
q1 = sym('q1(t)');
q2 = sym('q2(t)');
q = [q1, q2];
to declare that q1 and q2 depend on time t.
After that I calculate the Lagrangian L (in this case it is a simple link with a revolute joint):
L = (I1z*diff(q1(t), t)^2)/2 + (L1^2*M1*diff(q1(t), t)^2)/8
The problem is that when I try to differentiate L with respect to q using diff(L, q), I get this error:
Error using sym/diff (line 69)
The second argument must be a variable or a nonnegative integer specifying the number of differentiations.
How can I differentiate L with respect to q to obtain the first term of the Euler-Lagrange equation?
I also tried to write q simply as
syms q1 q2
q = [q1 q2]
without the time dependency, but then differentiation obviously gives [0, 0].
This is what I have in the workspace (I1z is the inertia of the link with respect to the z-axis, M1 is the mass of the link, L1 is the length of the link):
q = [q1(t), q2(t)]
diff(q, t) = [diff(q1(t), t), diff(q2(t), t)]
L = (I1z*diff(q1(t), t)^2)/2 + (L1^2*M1*diff(q1(t), t)^2)/8
If you want to run the full code, you have to download all the .m files from here and then use
[t, q, L, M, I] = initiate();
L = lagrangian(odof(q, L), q, M, I, t, 1)
otherwise the following code should be the same.
syms t I1z L1 M1
q1 = sym('q1(t)');
q2 = sym('q2(t)');
q = [q1, q2];
qp = diff(q, t);
L = (I1z*qp(1)^2)/2 + (L1^2*M1*qp(1)^2)/8;
EDIT
Thanks to AVK's answer I realized the problem.
Example 1 (AVK's code)
syms t q1 q2 q1t q2t I1z L1 M1 % variables
L = (I1z*q1t^2)/2 + (L1^2*M1*q1t^2)/8
dLdqt = [diff(L,q1t), diff(L,q2t)]
This will work and its result will be
dLdqt = [(M1*q1t*L1^2)/4 + I1z*q1t, 0]
Example 2 (wrong)
syms t q1 q2 q1t q2t I1z L1 M1
L = (I1z*q1t^2)/2 + (L1^2*M1*q1t^2)/8;
qt = [q1t q2t];
dLdqt = diff(L, qt)
This will not work, because diff expects a single variable of differentiation
Example 3 (right)
syms t q1 q2 q1t q2t I1z L1 M1
L = (I1z*q1t^2)/2 + (L1^2*M1*q1t^2)/8;
qt = [q1t q2t];
dLdqt = jacobian(L, qt)
This will work, because jacobian accepts a vector of differentiation variables.
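Its result is the same as in Example 1: dLdqt = [(M1*q1t*L1^2)/4 + I1z*q1t, 0].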
EDIT 2
It seems that MATLAB's Symbolic Math Toolbox can't handle differentiation with respect to q(t), so you have to use a plain variable q.
So using these as functions
q = [q1(t), q2(t), q3(t), q4(t), q5(t), q6(t)]
qp = [diff(q1(t), t), diff(q2(t), t), diff(q3(t), t), diff(q4(t), t), diff(q5(t), t), diff(q6(t), t)]
and these as variables
qv = [q1, q2, q3, q4, q5, q6];
qvp = [q1p, q2p, q3p, q4p, q5p, q6p];
solved the problem.
The whole code looks like this:
syms q1 q2 q3 q4 q5 q6;
syms q1p q2p q3p q4p q5p q6p;
qv = [q1, q2, q3, q4, q5, q6];
qvp = [q1p, q2p, q3p, q4p, q5p, q6p];
Lagv = subs(Lag, [q, qp], [qv, qvp]);
dLdq = jacobian(Lagv, qv);
dLdqp = jacobian(Lagv, qvp);
dLdq = subs(dLdq, [qv, qvp], [q, qp]);
dLdqp = subs(dLdqp, [qv, qvp], [q, qp]);
m_eq = diff(dLdqp, t) - dLdq;
If you want to differentiate L with respect to q, q must be a variable. You can use subs to replace it with a function and calculate later:
syms t q1 q2 q1t q2t I1z L1 M1 % variables
L = (I1z*q1t^2)/2 + (L1^2*M1*q1t^2)/8
dLdqt= [diff(L,q1t), diff(L,q2t)]
dLdq = [diff(L,q1), diff(L,q2)]
syms q1_f(t) q2_f(t) % functions
q1t_f(t)= diff(q1_f,t)
q2t_f(t)= diff(q2_f,t)
% replace the variables with the functions
dLdq_f= subs(dLdq,{q1 q2 q1t q2t},{q1_f q2_f q1t_f q2t_f})
dLdqt_f= subs(dLdqt,{q1 q2 q1t q2t},{q1_f q2_f q1t_f q2t_f})
% now we can solve the equation
dsolve(diff(dLdqt_f,t)-dLdq_f==0)
I developed an Euler-Lagrange library in MATLAB, with a list of illustrative examples. You can download it using the following link:
https://www.mathworks.com/matlabcentral/fileexchange/86563-matlab-euler-lagrange-library
I have a group of scalars and two groups of vectors, respectively:
w1, w2... wn
b1, b2... bn
c1, c2... cn
w1, w2... wn are scalars and are stored in w,
b1, b2... bn are stored in B, and
c1, c2... cn are stored in C. How can I efficiently compute
w1*(b1*c1') + w2*(b2*c2') + ... + wn*(bn*cn')
where each bi and ci is a vector, so bi*ci' is a matrix, not a scalar?
Sizes: 1 x N for w, P x N for B, and Q x N for C; wi = w(i), bi = B(:, i), and ci = C(:, i).
Simply:
result = B*diag(w)*C';
If N is much bigger than P and Q, you might prefer to build the weight matrix diag(w) in sparse form with spdiags(w', 0, N, N) instead.
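A quick way to convince yourself of the identity, with made-up sizes (my own check, not part of the original answer):

P = 3; Q = 4; N = 5;
w = rand(1, N); B = rand(P, N); C = rand(Q, N);
result = B*diag(w)*C';
% reference: explicit sum of weighted outer products
ref = zeros(P, Q);
for i = 1:N
    ref = ref + w(i)*(B(:, i)*C(:, i)');
end
norm(result - ref)   % ~0 up to round-off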