Solving a Logic Puzzle with Answer Set Programming - answer-set-programming

Disclaimer: I am almost entirely new to clingo, and answer set programming in general.
I am trying to solve a grid logic puzzle using clingo. To start, I want to generate all models that include one instance of each category.
For example, if there are three people:
person(a; b; c).,
three houses: house(w; x; z).,
and three colors:
color(r; g; y).
I would want one potential stable model to be
assign(a, r, x), assign(b, g, z), assign(c, y, w)
and another potential stable model to be
assign(a, g, w), assign(b, y, z), assign(c, r, x), etc. That is, each person appears exactly once and likewise for the colors. I figure that once I have these models I can use constraints to eliminate models until the puzzle is solved.
I have tried using choice rules and constraints:
{assign(P, C, H)} :- person(P), color(C), house(H).
P1=P2 :- assign(P1, C, H), assign(P2, C, H).
But this does not quite scale to large puzzles with many variables. Can anyone advise a better way of doing this?

Assuming you meant to write color(r; g; y)., how about the following?
% assign each house exactly one person/color
1 {assign(P, C, H) : person(P), color(C) } 1 :- house(H).
% assign each person exactly one house/color
1 {assign(P, C, H) : house(H), color(C) } 1 :- person(P).
% assign each color exactly one person/house
1 {assign(P, C, H) : house(H), person(P) } 1 :- color(C).

Backpropagation formula seems to be unimplementable as is

I've been working on getting some proficiency on backpropagation, and have run across the standard mathematical formula for doing this. I implemented a solution which seemed to work properly (and passed the relevant test with flying colours).
However ... the actual solution (implemented in MATLAB, and using vectorization) is at odds with the formula in two important respects.
The formula looks like this:
delta2 = (Theta2)' * delta3 .* gprime(...)   (the argument of gprime is not important right now)
The working code looks like this:
% d3 is delta3, d2 is delta2; Theta2 here is Theta2 with the bias column removed
% dimensions: d3--[5000x10], d2--[5000x25], Theta2--[10x25]
d3 = (a3 - y2);
d2 = (d3 * Theta2) .* gPrime(z2);
I can't reconcile what I implemented with the mathematical formula, on two counts:
The working code reverses the terms in the first part of the expression;
The working code does not transpose Theta-layer2, but the formula does.
How can this be? The dimensions of the individual matrices don't seem to allow for any other working combination.
Josh
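One way to reconcile the two (my reading, not something stated in the question): the mathematical formula treats each training example as a column vector, while the vectorized MATLAB code stores the 5000 examples as rows. Transposing the whole formula into the row convention reverses the multiplication order and absorbs the transpose on Theta2, via the identity (A*B)' = B'*A'. A small NumPy sketch of that identity:

```python
import numpy as np

rng = np.random.default_rng(0)
d3 = rng.standard_normal((5000, 10))    # deltas, one example per row (code convention)
Theta2 = rng.standard_normal((10, 25))  # weights, bias column already removed

# formula convention: examples as columns -> Theta2' * d3', transposed back to rows
lhs = (Theta2.T @ d3.T).T
# code convention: examples as rows -> d3 * Theta2
rhs = d3 @ Theta2

print(np.allclose(lhs, rhs))  # True: (Theta2' * d3')' == d3 * Theta2
```

The elementwise factor gPrime(z2) is unaffected by this choice, as long as z2 is stored in the same row-per-example layout.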
This isn't a bad question, and I don't understand the downvotes; implementing a backpropagation algorithm is less intuitive than it appears. I'm not great at math and I've never used MATLAB (usually C), so I avoided answering this question at first, but it deserves an answer.
First of all, we have to make some simplifications.
1. We will use only a single input vector in_Data[N] (in the case below, N = 2). If we succeed with a single pattern, it is not difficult to extend it to a matrix.
2. We will use this structure: 2 input (I), 2 hidden (H), 2 output (O) neurons. If we succeed with this, we will succeed with any size. This is the network (which I've taken from this blog).
Let's start: we know that to update the weights we need the derivative of the error with respect to each weight:
Note: M = num_patterns, but we declared in_Data as a vector earlier, so you can drop the sum in the formula above and the matrix in the formula below. This is your new formula:
We will study 2 connections: w1 and w5. Let's write their derivatives:
Let's code them (I really don't know MATLAB, so I'll write pseudocode):
vector d[num_connections + num_output_neurons] // number of derivatives: 8 connections (not counting biases) + 2 output deltas
vector z[num_neurons] // z is the output of each neuron
vector w[num_connections] // yes, a vector! We removed the matrix and the sum earlier.
// O layer
d[10] = (a[O1] - y[O1]); // start from the last layer to compute the error
d[9] = (a[O2] - y[O2]);
// H -> O layer
for (i = 5; i <= 8; i++) { // hidden-to-output connections
d[i] = d[out(i)] * g_prime(z[out(i)]) // out(i) is the index (9 or 10) of the output neuron fed by connection i
}
// I -> H layer
for (i = 1; i <= 4; i++) { // input-to-hidden connections
d[i] = 0
for (j = 1; j <= num_connections_from_neuron; j++) { // e.g. d[1] sums over every connection from H1 to the outputs
d[i] = d[i] + d[j+4] * w[j+4]
}
d[i] = d[i] * g_prime(z[i]);
}
If you need to extend this to a matrix, write it in a comment and I'll extend the code.
And so you have found all the derivatives. Maybe this is not exactly what you are searching for, and I'm not even sure that everything I wrote is correct (I hope it is). I will try to code backpropagation myself in the coming days, so I will be able to correct any errors. I hope this is a little helpful; better than nothing.
Best regards, Marco.
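For comparison, here is a compact NumPy version of the same 2-2-2 setup (my own sketch: sigmoid activations, no biases, squared-error loss; the names forward/backprop are mine), with the backpropagated gradient checked against a finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2, x):
    a1 = sigmoid(W1 @ x)    # input -> hidden
    a2 = sigmoid(W2 @ a1)   # hidden -> output
    return a1, a2

def loss(W1, W2, x, y):
    _, a2 = forward(W1, W2, x)
    return 0.5 * np.sum((a2 - y) ** 2)

def backprop(W1, W2, x, y):
    a1, a2 = forward(W1, W2, x)
    d2 = (a2 - y) * a2 * (1.0 - a2)       # output deltas (sigmoid' = a*(1-a))
    d1 = (W2.T @ d2) * a1 * (1.0 - a1)    # hidden deltas
    return np.outer(d1, x), np.outer(d2, a1)  # dL/dW1, dL/dW2

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
x, y = np.array([0.05, 0.10]), np.array([0.01, 0.99])
g1, g2 = backprop(W1, W2, x, y)
```

Checking one entry of the analytic gradient against (loss(W + eps) - loss(W - eps)) / (2*eps) is a cheap way to catch exactly the kind of transpose and ordering mistakes discussed in the question.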

Returning Conditions on System of Linear Inequalities

I am trying to solve linear inequalities for the conditions on the set of solutions. For example:
syms p C L D W
assume([p, C, W, D, L] >= 0)
eqn5 = p*C + L - D < 0;
eqn6 = p*C > 0;
solp2 = solve([eqn5, eqn6], [p, C, W, D, L], 'ReturnConditions', true);
Solp2p = solp2.p
Solp2C = solp2.C
Solp2W = solp2.W
Solp2D = solp2.D
Solp2L = solp2.L
Solp2cond = solp2.conditions
solp2par = solp2.parameters
The condition for solving this system of inequalities is clearly 0 < p*C < D - L. However, solve reports that no solutions or conditions exist to satisfy this system of linear inequalities.
With equalities, these are the solutions I would receive from the solve function; however, when switching to inequalities it doesn't seem to work anymore. I also tried using vpasolve, which didn't produce a solution either.
So far I have only found questions on Stack Overflow that give answers on how to find corner solutions or whether a solution exists for a system of linear inequalities.
I understand that the solution above implies an infinite number of solutions, but this is easily captured using conditions, as the solve function does for equalities. Does anyone know how to get this kind of solution for a system of linear inequalities?
I switched to Mathematica and used its Reduce function to find the solutions I am looking for. I have not yet figured out how to do it in MATLAB.
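For what it's worth, SymPy in Python has a rough analogue of Mathematica's Reduce in reduce_inequalities. The product p*C makes the system nonlinear, so in this sketch I collapse it into a single variable u = p*C (my substitution, not part of the original code):

```python
from sympy import symbols, reduce_inequalities

# u stands in for the product p*C, which makes the system linear in u
u, D, L = symbols('u D L')
cond = reduce_inequalities([u + L - D < 0, u > 0], [u])
print(cond)  # a conjunction equivalent to 0 < u < D - L
```

Any admissible p and C can then be read off from 0 < p*C < D - L afterwards.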

How to find coefficients for a possible exponential approximation

I have data like this:
y = [0.001
0.0042222222
0.0074444444
0.0106666667
0.0138888889
0.0171111111
0.0203333333
0.0235555556
0.0267777778
0.03]
and
x = [3.52E-06
9.72E-05
0.0002822918
0.0004929136
0.0006759156
0.0008199029
0.0009092797
0.0009458332
0.0009749509
0.0009892005]
and I want y to be a function of x with y = a*(0.01 − b*n^(−c*x)).
What is the best and easiest computational approach to find the best combination of the coefficients a, b and c that fit to the data?
Can I use Octave?
Your function
y = a*(0.01 − b*n^(−c*x))
is in quite a specific form with 4 unknowns. In order to estimate your parameters from your list of observations I would recommend that you simplify it
y = β1 + β2^(β3*x)
This becomes our objective function and we can use ordinary least squares to solve for a good set of betas.
In default MATLAB you could use fminsearch to find these β parameters (let's call them our parameter vector β), and then you can use simple algebra to get back to your a, b, c and n (assuming you know either b or n upfront). In Octave I'm sure you can find an equivalent function; I would start by looking here: http://octave.sourceforge.net/optim/index.html.
We're going to call fminsearch, but we need to somehow pass in your observations (i.e. x and y). We do that with an anonymous function that captures x and y from the workspace and exposes only the parameter vector to fminsearch:
beta = fminsearch(@(b) objfun(x, y, b), beta0) %// beta0 holds your initial guesses for beta, e.g. [0,0,0] or [1,1,1]. Pick these to be somewhat close to the correct values.
And we define our objective function like this:
function sse = objfun(x, y, beta)
f = beta(1) + beta(2).^(beta(3).*x);
sse = sum((y-f).^2); %// the sum of squared errors, often called SSE; it is what we are trying to minimise!
end
So putting it all together:
y= [0.001; 0.0042222222; 0.0074444444; 0.0106666667; 0.0138888889; 0.0171111111; 0.0203333333; 0.0235555556; 0.0267777778; 0.03];
x= [3.52E-06; 9.72E-05; 0.0002822918; 0.0004929136; 0.0006759156; 0.0008199029; 0.0009092797; 0.0009458332; 0.0009749509; 0.0009892005];
beta0 = [0,0,0];
beta = fminsearch(@(b) objfun(x, y, b), beta0)
Now it's your job to solve for a, b and c in terms of beta(1), beta(2) and beta(3) which you can do on paper.
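If you want to prototype the same idea outside MATLAB/Octave, SciPy's minimize with method='Nelder-Mead' is the closest analogue of fminsearch. A sketch on synthetic data (the true betas and the starting point are my own toy values):

```python
import numpy as np
from scipy.optimize import minimize

def objfun(beta, x, y):
    f = beta[0] + beta[1] ** (beta[2] * x)  # same model: beta1 + beta2^(beta3*x)
    return np.sum((y - f) ** 2)             # sum of squared errors (SSE)

# synthetic observations generated from known betas
x = np.linspace(0.0, 1.0, 50)
beta_true = np.array([0.5, 2.0, 3.0])
y = beta_true[0] + beta_true[1] ** (beta_true[2] * x)

res = minimize(objfun, x0=[0.4, 1.8, 2.5], args=(x, y), method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-12, 'maxiter': 5000})
print(res.fun)  # SSE, small when the fit succeeds
```

One caveat: beta2^(beta3*x) equals exp(beta3*log(beta2)*x), so beta2 and beta3 are only identified through the product beta3*log(beta2); keep that in mind when solving back for a, b, c and n on paper.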

Symbolic integration vs numeric integration in MATLAB

I have an expression with three variables x,y and v. I want to first integrate over v, and so I use int function in MATLAB.
The command that I use is the following:
g = int((1 - fxyv)*pv, v, y, +inf)
PS I haven't given you the function fxyv, but it is very complicated, so int is taking very long and I am afraid that even after waiting it might not solve it.
I know one option for me is to integrate numerically using, for example, integral. However, I want to note that the second part of this problem requires me to integrate exp[g(x,y)] over x and y, from 0 to infinity and from x to infinity respectively. So I can't take numerical values of x and y when I want to integrate over v, I think, or maybe I can?
Thanks
Since the question does not contain sufficient detail to attempt analytic integration, this answer focuses on numeric integration.
It is possible to solve these equations numerically. However, because of complex dependencies between the three integrals, it is not possible to simply use integral3. Instead, one has to define functions that compute parts of the expressions using a simple integral, and are themselves fed into other calls of integral. Whether this approach leads to useful results in terms of computation time and precision cannot be answered generally, but depends on the concrete choice of the functions f and p. Fiddling around with precision parameters to the different calls of integral may be necessary.
I assume that the functions f(x, y, v) and p(v) are defined in the form of Matlab functions:
function val = f(x, y, v)
val = ...
end
function val = p(v)
val = ...
end
Because of the way they are used later, they have to accept multiple values for v in parallel (as an array) and return as many function values (again as an array, of the same size). x and y can be assumed to always be scalars. A simple example implementation would be val = ones(size(v)) in both cases.
First, let's define a Matlab function g that implements the first equation:
function val = g(x, y)
val = integral(#gIntegrand, y, inf);
function val = gIntegrand(v)
% output must be of the same dimensions as parameter v
val = (1 - f(x, y, v)) .* p(v);
end
end
The nested function gIntegrand defines the object of integration, the outer performs the numeric integration that gives the value of g(x, y). Integration is over v, parameters x and y are shared between the outer and the nested function. gIntegrand is written in such a way that it deals with multiple values of v in the form of arrays, provided f and p do so already.
Next, we define the integrand of the outer integral in the second equation. To do so, we need to compute the inner integral, and therefore also have a function for the integrand of the inner integral:
function val = TIntegrandOuter(x)
val = nan(size(x));
for i = 1 : numel(x)
val(i) = integral(#TIntegrandInner, x(i), inf);
end
function val = TIntegrandInner(y)
val = nan(size(y));
for j = 1 : numel(y)
val(j) = exp(g(x(i), y(j)));
end
end
end
Because both functions are meant to be fed as arguments into integral, they need to be able to deal with multiple values. In this case, this is implemented via an explicit for loop. TIntegrandInner computes exp(g(x, y)) for multiple values of y, but for the fixed value of x that is current in the loop in TIntegrandOuter. This value x(i) plays both the role of a parameter to g(x, y) and of an integration limit. Variables x and i are shared between the outer and the nested function.
Almost there! We have the integrand, only the outermost integration needs to be performed:
T = integral(#TIntegrandOuter, 0, inf);
This is a very convoluted implementation, which is not very elegant, and probably not very efficient. Again, whether results of this approach prove to be useful needs to be tested in practice. However, I don't see any other way to implement these numeric integrations in Matlab in a better way in general. For specific choices of f(x, y, v) and p(v), there might be possible improvements.
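To make that nesting pattern concrete outside MATLAB, here is a small Python/SciPy sketch. The choices f(x, y, v) = 1 - exp(-v) and p(v) = exp(-v) are toy stand-ins of mine, picked so that g has the closed form exp(-2y)/2 to check against:

```python
import numpy as np
from scipy.integrate import quad

def f(x, y, v):  # toy stand-in for the complicated f(x, y, v)
    return 1.0 - np.exp(-v)

def p(v):        # toy stand-in for p(v)
    return np.exp(-v)

def g(x, y):
    # inner integral over v, from y to infinity
    val, _ = quad(lambda v: (1.0 - f(x, y, v)) * p(v), y, np.inf)
    return val

print(g(0.3, 0.7))  # ≈ exp(-2*0.7)/2 ≈ 0.1233
# one more level of nesting; finite upper limit so this toy demo converges
inner = quad(lambda y: np.exp(g(0.3, y)), 0.3, 2.0)[0]
```

Each extra level of nesting multiplies the number of integrand evaluations, which is why tuning the tolerance parameters of the individual integral/quad calls matters so much in practice.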

Matlab: Finding two unknown constants/parameters in an equation

I've read up on fsolve and solve, and tried various methods of curve fitting/regression but I feel I need a bit of guidance here before I spend more time trying to make something work that might be the wrong approach.
I have a series of equations I am trying to fit to a data set (x) separately:
for example:
(a+b*c)*d = x
a*(1+b*c)*d = x
x = [1.9248; 3.0137; 4.0855; 5.0097; 5.7226; 6.2064; 6.4655; 6.5108; 6.3543; 6.0065];
c = [0.0200; 0.2200; 0.4200; 0.6200; 0.8200; 1.0200; 1.2200; 1.4200; 1.6200; 1.8200];
d = [1.2849; 2.2245; 3.6431; 5.6553; 8.3327; 11.6542; 15.4421; 19.2852; 22.4525; 23.8003];
I know c, d and x - they are observations. My unknowns are a and b, and should be constant.
I could do it manually for each x observation but there must be an automatic and far superior way or at least another approach.
Very grateful if I could receive some guidance. Thanks for the time!
Given your two example equations, let y = x./d; then
y = a+b*c
y = a+a*b*c
The first case is just a line, for which you can obtain a least squares fit (values for a and b) with polyfit(). In the second case, you can just set k = a*b (since these are both fitted anyway), then rewrite it as:
y = a+k*c
which is exactly the same line as in the first problem, except that now b = k/a. In fact, b = b1/a is the solution to the second problem, where b1 is the slope fitted in the first problem. In short, to solve them both, you need one call to polyfit() and a couple of divisions.
Will that work for you?
I see two different equations to fit here. To spell out the code:
For (a+b*c)*d = x
p = polyfit(c, x./d, 1);
a = p(2);
b = p(1);
For a*(1+b*c)*d = x
p = polyfit(c, x./d, 1);
a = p(2);
b = p(1) / a;
No need for polyfit; this is just a linear least squares problem, which is best solved with MATLAB's backslash operator:
>> ab = [ones(size(c)) c] \ (x./d)
ab =
1.411437211703194e+000 % 'a'
-7.329687661579296e-001 % 'b'
Faster, cleaner, more educative :)
And, as Emmet already said, your second equation is nothing more than a different form of your first equation, the difference being that the b in your first equation equals a*b in your second one.
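As a quick cross-check outside MATLAB, the same least-squares solve can be reproduced with NumPy (data copied from the question; numpy.linalg.lstsq plays the role of the backslash operator):

```python
import numpy as np

x = np.array([1.9248, 3.0137, 4.0855, 5.0097, 5.7226,
              6.2064, 6.4655, 6.5108, 6.3543, 6.0065])
c = np.array([0.02, 0.22, 0.42, 0.62, 0.82, 1.02, 1.22, 1.42, 1.62, 1.82])
d = np.array([1.2849, 2.2245, 3.6431, 5.6553, 8.3327,
              11.6542, 15.4421, 19.2852, 22.4525, 23.8003])

A = np.column_stack([np.ones_like(c), c])   # same design matrix as [ones(size(c)) c]
ab, *_ = np.linalg.lstsq(A, x / d, rcond=None)
print(ab)  # ≈ [ 1.4114, -0.7330]
```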