Clingo: a flexible but bounded count of literals, and how to prevent negation cycles (answer-set programming)

I'm programming a Sudoku solver and have come across two problems.
I would like to generate a specific number of literals, but keep the total number flexible.
How can I prevent a negation cycle, so that I have a clean way of declaring a digit as not possible?
General code with a generator, regarding my first question:
row(1..3). %coordinates are declared via position of sub-grid (row, col) and position of
col(1..3). %field in sub-grid (sr, sc)
sr(1..3).
sc(1..3).
num(1..9).
1 { candidate(R,C,A,B,N) : num(N) } 9 :- row(R), col(C), sr(A), sc(B).
Here I want to create all candidates for a field, which at the beginning are all the numbers from 1 to 9; that is, for every field I want candidate(1,1,1,1,1..9). But it would be nice to keep the number of candidates for each field flexible, so that I can declare a solution once integrity constraints like
:- candidate(R,C,A,B,N), solution(R1,C,A1,B,N), R != R1, A != A1. %excludes candidates if digit is present in solution in same column
have excluded all 8 other candidates:
solution(R,C,A,B,N) :- candidate(R,C,A,B,N), { N' : candidate(R,C,A,B,N') } = 1.
Regarding my second question: I basically want to declare a solution if a specific condition is fulfilled. The problem is that once I have a solution, the condition is no longer true, and this leads to a negation cycle:
solution(R,C,A,B,N) :- candidate(R,C,A,B,N), { set1(R',C',A',B') } = { posDigit(N') }, { negDigit(N'') } = { set2(R'',C'',A'',B'') } - 1, not taken(R,C,A,B), not takenDigit(N).
taken(R,C,A,B) :- solution(R,C,A,B,N).
I would be glad if somebody could offer input on how to solve these problems.

In Swift, is there a way to check only part of an array in a for loop (with a set beginning and ending point)?

So let's say we have an array a = [20,50,100,200,500,1000].
Generally speaking, we could do for number in a { print(number) } if we wanted to check the entirety of a.
How can you limit which indexes are checked? As in, have a set beginning and end index (b and e respectively), and limit the values of number that are checked to those between b and e?
For example, in a, if b is set to 1 and e is set to 4, then only a[1] through a[4] are checked.
I tried doing for number in a[b...e] { print(number) }. I also saw someone here do
for j in 0..<n { x[i] = x[j] }, which works if we want just an ending.
This makes me think I can do something like for number in b..<=e { print(a[number]) }
Is this correct?
I'm practicing data structures in Swift and this is one of the things I've been struggling with. Would really appreciate an explanation!
Using b..<=e is not the correct syntax. You need to use the closed range operator ... instead, i.e.
for number in b...e {
print(a[number])
}
And since you've already tried
for number in a[b...e] {
print(number)
}
There is nothing wrong with that syntax either. You can use either approach:
An array has a subscript that accepts a range, array[range], and returns a slice of the array (an ArraySlice, not a brand-new Array).
A range of integers can be defined as either b...e or b..<e (there are other ways as well), but not b..<=e.
A range itself is a sequence (something that supports a for-in loop).
So you can either do
for index in b...e {
print(a[index])
}
or
for number in a[b...e] {
print(number)
}
In both cases, it is on you to ensure that b and e are valid indices into the array.
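One caveat worth knowing about the second form (a small illustration of standard Swift slice behavior, using the array from the question): a[b...e] produces an ArraySlice that shares the original array's indices, so the slice's first index is b, not 0.
let a = [20, 50, 100, 200, 500, 1000]
let slice = a[1...4]      // an ArraySlice sharing storage and indices with a
print(slice.startIndex)   // 1: slices keep the original indices
print(slice[1])           // 50, not 100
print(Array(slice))       // [50, 100, 200, 500]: copies into a fresh 0-based Array
This matters if you store the slice in a variable and later index it from 0.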

"Find the Parity Outlier Code Wars (Scala)"

I have been doing some CodeWars challenges recently and I've got a problem with this one:
"You are given an array (which will have a length of at least 3, but could be very large) containing integers. The array is either entirely comprised of odd integers or entirely comprised of even integers except for a single integer N. Write a method that takes the array as an argument and returns this "outlier" N."
I've looked at some solutions that are already on the website, but I want to solve the problem using my own approach.
The main problem in my code seems to be that it ignores negative numbers, even though I've used the Math.abs() method in Scala.
If you have an idea of how to get around it, that is more than welcome.
Thanks a lot
object Parity {
  var even = 0
  var odd = 0
  var result = 0

  def findOutlier(integers: List[Int]): Int = {
    for (y <- 0 until integers.length) {
      if (Math.abs(integers(y)) % 2 == 0)
        even += 1
      else
        odd += 1
    }
    if (even == 1) {
      for (y <- 0 until integers.length) {
        if (Math.abs(integers(y)) % 2 == 0)
          result = integers(y)
      }
    } else {
      for (y <- 0 until integers.length) {
        if (Math.abs(integers(y)) % 2 != 0)
          result = integers(y)
      }
    }
    result
  }
}
Your code handles negative numbers just fine. The problem is that you rely on mutable state, which leaks between runs of your code. Your code behaves as follows:
val l = List(1,3,5,6,7)
println(Parity.findOutlier(l)) //6
println(Parity.findOutlier(l)) //7
println(Parity.findOutlier(l)) //7
The first run is correct. However, when you run it the second time, even, odd, and result all still hold the values from your previous run. If you define them inside your findOutlier method instead of in the Parity object, your code gives correct results.
Additionally, I highly recommend reading over the methods available on a Scala List. You should almost never need to loop through a List like that, and there are a number of much more concise solutions to this problem (for instance, partitioning the list by parity and returning the single element from whichever side has length 1). Mutable vars are also a pretty big red flag in Scala code, as are excessive if statements.

Big-O of a recursive function

Just wondering, what's the Big-O of this function?
Let's say the initial values of the parameters are as follows:
numOfCourseIndex = 0
maximumScheduleCount = 1000
schedule = [[Section]]()
result = [[[Section]]]()
orderdGroupOfSections = n
func foo(numOfCourseIndex: Int, orderdGroupOfSections: [[[Section]]], maximumScheduleCount: Int) {
    if (result.count >= maximumScheduleCount) {
        return
    }
    for n in 0..<orderdGroupOfSections[numOfCourseIndex].count {
        for o in 0..<orderdGroupOfSections[numOfCourseIndex][n].count {
            for p in 0..<orderdGroupOfSections[numOfCourseIndex][n][o].sectionTime!.count {
                for q in 0..<orderdGroupOfSections[numOfCourseIndex][n][o].sectionTime![p].day!.count {
                    // do something
                }
            }
        }
        if (numOfCourseIndex == orderdGroupOfSections.count - 1) {
            result.append(schedule)
        } else {
            foo(numOfCourseIndex: numOfCourseIndex + 1, orderdGroupOfSections: orderdGroupOfSections, maximumScheduleCount: maximumScheduleCount)
        }
    }
}
I'm saying it's O(n!) in the worst case, but I'm not sure.
There are two simple things that you can do to help you analyze the complexity of your function. The first is to simplify the input and see how the function behaves. Instead of running the function for a large number of courses or schedules or whatever, look at what it does for just one. How many steps does it take to process one course? How many for two? Three? Four? Make a table with the results, and then look at the difference between one and two courses, two and three, three and four, etc. Can you see a pattern?
The second thing you can do is break the function down into parts and analyze the parts separately. You're probably not going to be able to just see the complexity of the whole thing because it's, well, complex. So simplify it... what's the complexity of the innermost loop? How about the second innermost loop, ignoring the innermost one? What's the complexity of the two together? Rinse and repeat.
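To make the first suggestion concrete, here is a minimal sketch (the flat integer parameters are stand-ins, not your real Section data): it mirrors the shape of foo, with one recursive call per option of the current course, and tabulates the number of invocations as the input grows.
// Hypothetical stand-in for foo: recurse once per option of the current
// course, ignoring the inner bookkeeping loops.
func countCalls(courseIndex: Int, courseCount: Int, optionsPerCourse: Int) -> Int {
    var calls = 1  // count this invocation
    if courseIndex < courseCount - 1 {
        for _ in 0..<optionsPerCourse {
            calls += countCalls(courseIndex: courseIndex + 1,
                                courseCount: courseCount,
                                optionsPerCourse: optionsPerCourse)
        }
    }
    return calls
}

// Tabulate calls against the number of courses for a fixed branching factor.
for courses in 1...5 {
    print(courses, countCalls(courseIndex: 0, courseCount: courses, optionsPerCourse: 3))
}
// Prints 1, 4, 13, 40, 121, roughly tripling each time: growth like
// optionsPerCourse to the power of courseCount (exponential rather than factorial).
Whether this matches your real function exactly depends on how maximumScheduleCount cuts the recursion short, but the table makes the growth pattern easy to see.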

SystemVerilog two-dimensional dynamic array randomization

I am trying to use the SystemVerilog constraint solver to solve the following problem statement:
We have N balls, each with a unique weight, and these balls need to be distributed into groups such that the weight of each group does not exceed a threshold (MAX_WEIGHT).
Now I want to find all such possible solutions. The code I wrote in SV is as follows:
`define NUM_BALLS 5
`define MAX_WEIGHT_BUCKET 100

class weight_distributor;
  int ball_weight [`NUM_BALLS];
  rand int unsigned solution_array[][];

  constraint c_solve_bucket_problem {
    foreach (solution_array[i,j]) {
      solution_array[i][j] inside {ball_weight};
      //unique{solution_array[i][j]};
      foreach (solution_array[ii,jj])
        if (!((ii == i) & (j == jj))) solution_array[ii][jj] != solution_array[i][j];
    }
    foreach (solution_array[i,])
      solution_array[i].sum() < `MAX_WEIGHT_BUCKET;
  }

  function new();
    ball_weight = {10,20,30,40,50};
  endfunction

  function void post_randomize();
    foreach (solution_array[i,j])
      $display("solution_array[%0d][%0d] = %0d", i, j, solution_array[i][j]);
    $display("solution_array size = %0d", solution_array.size);
  endfunction
endclass

module top;
  weight_distributor weight_distributor_o;
  initial begin
    weight_distributor_o = new();
    void'(weight_distributor_o.randomize());
  end
endmodule
The issue I am facing here is that I want the sizes of both dimensions of the array to be decided randomly, based on the constraint solution_array[i].sum() < `MAX_WEIGHT_BUCKET;. From my understanding of SV constraints, I believe the size of the array will be solved before values are assigned to the array.
Moreover, I also wanted to know whether unique can be used on a two-dimensional dynamic array.
You can't use the random number generator (RNG) to enumerate all possible solutions of your problem. It's not built for this. An RNG can give you one of these solutions with each call to randomize(). It's not guaranteed, though, that it gives you a different solution with each call. Say you have 3 solutions, S0, S1, S2. The solver could give you S1, then S2, then S1 again, then S1, then S0, etc. If you know how many solutions there are, you can stop once you've seen them all. Generally, though, you don't know this beforehand.
What an RNG can do, however, is check whether a solution you provide is correct. If you loop over all possible solutions, you can filter out only the ones that are correct. In your case, you have N balls and up to N groups. You can start out by putting each ball into its own group and checking whether this is a correct solution. You can then put 2 balls into one group and all the other N - 2 into groups of one. Then two other balls into one group and all the others into groups of one. Then 2 balls into one group, 2 other balls into another group, and the remaining N - 4 into groups of one. You can continue this until you put all N balls into the same group. I'm not really sure how you can easily enumerate all arrangements; combinatorics can help you here. At each step of the way, you can check whether a certain ball arrangement satisfies the constraints:
// Array describing an arrangement of balls:
// - the first dimension is the group
// - the second dimension is the index within the group
typedef int unsigned arrangement_t[][];

// Function that gives you the next arrangement to try out
function arrangement_t get_next_arrangement();
  // ...
endfunction

arrangement = get_next_arrangement();
if (weight_distributor_o.randomize() with {
  solution_array.size() == arrangement.size();
  foreach (solution_array[i]) {
    solution_array[i].size() == arrangement[i].size();
    foreach (solution_array[i][j])
      solution_array[i][j] == arrangement[i][j];
  }
})
  all_solutions.push_back(arrangement);
Now, let's look at weight_distributor. I'd recommend writing each requirement in its own constraint, as this makes the code much more readable.
You can shorten the uniqueness constraint that you wrote as a double loop by using the unique operator:
class weight_distributor;
  // ...
  constraint unique_balls {
    unique { solution_array };
  }
endclass
You already had a constraint that each group can hold at most `MAX_WEIGHT_BUCKET in it:
class weight_distributor;
  // ...
  constraint max_weight_per_group {
    foreach (solution_array[i])
      solution_array[i].sum() < `MAX_WEIGHT_BUCKET;
  }
endclass
Because of the way array sizes are solved, it's not possible to write constraints that guarantee a valid solution from a simple call to randomize(). You don't need that here, though, since you only want to check whether a given arrangement is valid: the equality constraints between arrangement and solution_array pin the array sizes down.
Try this:
class ABC;
  rand bit[3:0] md_array[][];  // multidimensional array with unknown size

  constraint c_md_array {
    // First, fix the size of the first dimension of md_array
    md_array.size() == 2;

    // Then, for each sub-array in the first dimension:
    foreach (md_array[i]) {
      // Constrain the size of the sub-array to a value within the range
      md_array[i].size() inside {[1:5]};

      // Iterate over the second dimension
      foreach (md_array[i][j]) {
        // Constrain the values in the second dimension
        md_array[i][j] inside {[1:10]};
      }
    }
  }
endclass

module tb;
  initial begin
    ABC abc = new;
    abc.randomize();
    $display("md_array = %p", abc.md_array);
  end
endmodule
https://www.chipverify.com/systemverilog/systemverilog-foreach-constraint

Merge Sort algorithm efficiency

I am currently taking an online algorithms course in which the teacher doesn't give code to solve the algorithm, but rather rough pseudocode. So before taking to the internet for the answer, I decided to take a stab at it myself.
In this case, the algorithm we were looking at is the merge sort algorithm. After being given the pseudocode we also dove into analyzing the algorithm for run times against the number of items n in an array. After a quick analysis, the teacher arrived at 6n log2(n) + 6n as an approximate run time for the algorithm.
The pseudocode given was for the merge portion of the algorithm only, and was as follows:
C = output [length = n]
A = 1st sorted array [n/2]
B = 2nd sorted array [n/2]
i = 1
j = 1
for k = 1 to n
    if A(i) < B(j)
        C(k) = A(i)
        i++
    else [B(j) < A(i)]
        C(k) = B(j)
        j++
    end
end
He basically did a breakdown of the above, arriving at 4n + 2 (2 for the declarations of i and j, and 4 for the operations performed in each iteration: the for, the if, the array assignment, and the increment). He simplified this, I believe for the sake of the class, to 6n.
This all makes sense to me. My question arises from the implementation that I wrote, how it affects the algorithm, and some of the tradeoffs/inefficiencies it may add.
Below is my code in swift using a playground:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }

    let lowerHalfArray = array[0..<array.count / 2]
    let upperHalfArray = array[array.count / 2..<array.count]

    // The parameter is unlabeled, so the recursive calls take no "array:" label.
    let lowerSortedArray = mergeSort(Array(lowerHalfArray))
    let upperSortedArray = mergeSort(Array(upperHalfArray))

    return merge(lhs: lowerSortedArray, rhs: upperSortedArray)
}
func merge<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    guard lhs.count > 0 else { return rhs }
    guard rhs.count > 0 else { return lhs }

    var i = 0
    var j = 0
    var mergedArray = [T]()

    let loopCount = lhs.count + rhs.count

    for _ in 0..<loopCount {
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }

    return mergedArray
}
let values = [5,4,8,7,6,3,1,2,9]
let sortedValues = mergeSort(values)
My questions for this are as follows:
Do the guard statements at the start of the merge<T:Comparable> function actually make it more inefficient? Considering we are always halving the array, the only time they will hold true is in the base case and when there is an odd number of items in the array.
This seems to me like it would add more processing for minimal return, since it only triggers when we have halved the array to the point where one side has no items.
Concerning the if statement in my merge: since it checks more than one condition, does this affect the overall efficiency of the algorithm I have written? If so, the effect seems to vary based on where it breaks out of the if statement (e.g. at the first condition or the second).
Is this something that is considered heavily when analyzing algorithms, and if so, how do you account for the variance in where it short-circuits?
Any other analysis/tips you can give me on what I have written would be greatly appreciated.
You will very soon learn about Big-O and Big-Theta, where you don't care about exact runtimes (believe me when I say very soon, like in a lecture or two). Until then, this is what you need to know:
Yes, the guards take some time, but it is the same amount of time in every iteration. So if each iteration takes X amount of time without the guard and you do n function calls, then it takes X*n amount of time in total. Now add in the guards, which take Y amount of time in each call. You now need (X+Y)*n time in total. This is a constant factor, and when n becomes very large the (X+Y) factor becomes negligible compared to the n factor. On the other hand, if you can reduce a function from X*n to (X+Y)*(log n), then it is worthwhile to add the Y amount of work, because you do fewer iterations in total. (For a sense of scale: with n = 1,000,000, X = 1 and Y = 2, X*n is a million steps, while (X+Y)*log2(n) is only about 60.)
The same reasoning applies to your second question. Yes, checking "if X or Y" takes more time than checking "if X" but it is a constant factor. The extra time does not vary with the size of n.
In some languages you only check the second condition if the first fails. How do we account for that? The simplest solution is to realize that the upper bound of the number of comparisons will be 3, while the number of iterations can be potentially millions with a large n. But 3 is a constant number, so it adds at most a constant amount of work per iteration. You can go into nitty-gritty details and try to reason about the distribution of how often the first, second and third condition will be true or false, but often you don't really want to go down that road. Pretend that you always do all the comparisons.
So yes, adding the guards might be bad for your runtime if you do the same number of iterations as before. But sometimes adding extra work in each iteration can decrease the number of iterations needed.
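As a quick sanity check of the first point (a small sketch using the merge function from the question, nothing more): the guards are a shortcut rather than a correctness requirement, because the loop condition j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) already drains whichever side remains. Deleting them would change only a constant amount of work per call, not the result.
// Both paths produce the same answers:
print(merge(lhs: [], rhs: [1, 2, 3]))   // [1, 2, 3], returned early by the guard
print(merge(lhs: [1, 3], rhs: [2, 4]))  // [1, 2, 3, 4], fully merged in the loop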