I have a MiniZinc model for the wolf-goat-cabbage problem in which I store the locations of each entity in its own array, e.g. array[1..max] of Loc: wolf, where Loc is defined as an enum (enum Loc = {left, rght};) and max is the maximum possible number of steps needed, e.g. 20.
To find a shortest plan I define a variable var 1..max: len; and constrain the end state to occur at step len.
constraint farmer[len] == left /\ wolf[len] == left /\ goat[len] == left /\ cabbage[len] == left;
Then I ask for
solve minimize len;
I get all the right answers.
I'd like to display the arrays from 1..len, but I can't find a way to do it. When I try, for example, to include in the output:
[ "\(wolf[n]), " | n in 1..max where n <= len ]
I get an error message saying that I can't display an array of opt string.
Is there a way to display only an initial portion of an array, where the length of the initial portion is determined by the model?
Thanks.
Did you try fixing the len variable in the output statement, e.g. n <= fix(len)? See also What is the use of minizinc fix function?
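For example, a minimal sketch of an output item along these lines (untested; assuming the array and variable names from the question):

output [ "\(wolf[n]), " | n in 1..fix(len) ];

Since fix(len) turns the comprehension's range into a fixed (par) range, no where clause over a decision variable is needed, and the result is a plain array of string rather than an array of opt string.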
I have a 1 x 400 array where all element values should be above 1500. However, some elements have values < 50; these are incorrect measurements, and I would like to replace each of them with the mean of the elements before and after it in the main array.
For instance, element number 17 is below 50, so I want to take the mean of elements 16 and 18 and replace element 17 with that new mean.
Can someone help me, please? Many thanks in advance.
No language is specified in the question, but in Python you could use a list comprehension:
# array with 400 values, some of which are incorrect
arr = [...]
arr = [arr[i] if arr[i] >= 50 else (arr[i-1]+arr[i+1])/2 for i in range(len(arr))]
That is, if arr[i] is less than 50, it is replaced by the average of the elements before and after it. There are two issues with this approach.
If i is the first or last index, one of the two neighbours does not exist (in Python, arr[i-1] silently wraps around to the last element when i == 0, and arr[i+1] raises an IndexError at the last element). This can be fixed by just using the single available neighbour, as shown below.
If two adjacent values are very low, the left one will use the still-low right one to calculate its replacement, which will itself be a very low value. This may not occur for you in practice, but it is an inherent result of the way you wish to recalculate values, and you might want to keep it in mind.
Improved version, keeping in mind the edge cases:
# don't alter the first and last items, even if they're low
arr = [arr[i] if arr[i] >= 50 or i == 0 or i+1 == len(arr) else (arr[i-1]+arr[i+1])/2
       for i in range(len(arr))]
# replace the first and last elements with their only neighbour if needed
if arr[0] < 50:
    arr[0] = arr[1]
if arr[-1] < 50:
    arr[-1] = arr[-2]
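As a quick sanity check, here is the same logic applied to a short hypothetical array (made-up data, not the asker's):

# hypothetical array with low values at both ends and in the middle
arr = [40, 1600, 30, 1700, 20]
arr = [arr[i] if arr[i] >= 50 or i == 0 or i+1 == len(arr) else (arr[i-1]+arr[i+1])/2
       for i in range(len(arr))]
if arr[0] < 50:
    arr[0] = arr[1]
if arr[-1] < 50:
    arr[-1] = arr[-2]
print(arr)  # [1600, 1600, 1650.0, 1700, 1700]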
I hope this answer is useful for you, even if you intend to use a language or framework other than Python.
I am using cp_model to solve a problem very similar to the multiple-knapsack problem (https://developers.google.com/optimization/bin/multiple_knapsack). Just like in the example code, I use some boolean variables to encode membership:
# Variables
# x[i, j] = 1 if item i is packed in bin j.
x = {}
for i in data['items']:
    for j in data['bins']:
        x[(i, j)] = solver.IntVar(0, 1, 'x_%i_%i' % (i, j))
What is specific to my problem is that there are a large number of fungible items: there may be 5 items of type 1 and 10 items of type 2, and any item is exchangeable with items of the same type. Using the Boolean variables to encode the problem implicitly assumes that the order of assignment among items of the same type matters, but in fact the order does not matter; the symmetry only takes up unnecessary computation time.
I am wondering if there is any way to design the model so that it accurately expresses that we are allocating from fungible pools of items to save computation.
Instead of creating 5 Boolean variables for the 5 items of type i in bin b, just create one integer variable count[i][b], ranging from 0 to 5, for the number of items of type i in bin b. Then constrain sum over b of count[i][b] to equal the number of items of type i.
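For example, here is a minimal CP-SAT sketch of this idea; the type counts, weights, and bin capacities are made-up placeholders, not data from the question:

from ortools.sat.python import cp_model

model = cp_model.CpModel()

# Hypothetical data: 5 fungible items of type 1 (weight 3 each) and
# 10 fungible items of type 2 (weight 4 each), packed into 3 bins.
type_counts = {1: 5, 2: 10}
type_weights = {1: 3, 2: 4}
bin_capacities = [20, 20, 20]
bins = range(len(bin_capacities))

# count[t, b] = number of items of type t packed into bin b: one integer
# variable per (type, bin) pair instead of one Boolean per (item, bin) pair.
count = {(t, b): model.NewIntVar(0, type_counts[t], f'count_{t}_{b}')
         for t in type_counts for b in bins}

# Each pool of fungible items is fully allocated across the bins.
for t in type_counts:
    model.Add(sum(count[t, b] for b in bins) == type_counts[t])

# No bin exceeds its capacity.
for b in bins:
    model.Add(sum(type_weights[t] * count[t, b] for t in type_counts)
              <= bin_capacities[b])

solver = cp_model.CpSolver()
status = solver.Solve(model)

Because identical items are never distinguished, the symmetry between them disappears from the model entirely.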
I have a table with features = {A,B}. B is a column of integers. Scanning the table, whenever the value in column B changes, I increment a variable "changes" by 1:
if data[i,B] != data[i-1,B]
then changes = changes + 1
I want to find an ordering of the rows that minimizes changes, while keeping the number of repetitions of each value in B within [0, upper_bound].
I'm thinking of using an array as a decision variable that stores the position j of element i:
order[i] = j means that element i in data is the j-th element in the ordering.
How can I model this with constraints? This is what I have so far:
array[1..n, Features] of int: data;
int: changes = 0;
constraint
  forall(i in 1..n) (
    if data[i,B] != data[i-1,B] then
      changes = changes + 1
    endif
  );
minimize changes;
I think I'm wrong in using changes as a constant, right? Thank you in advance.
In MiniZinc (and in constraint programming in general) you cannot increment a variable, as in changes = changes + 1.
If changes is a variable used only for the total count of changes, you can use sum instead, something like:
% ...
var 0..n: changes;
constraint
  changes = sum([data[i,B] != data[i-1,B] | i in 2..n]);
% ...
However, if you want to use the number of accumulated changes at each step i, then you have to create a changes array that collects the value at each step, e.g.
array[1..n] of var 0..n-1: changes;
% the total number of changes (to minimize)
var 0..n-1: total_changes = changes[n];
constraint
  changes[1] = 0 /\
  forall(i in 2..n) (
    if data[i,B] != data[i-1,B] then
      changes[i] = changes[i-1] + 1
    else
      changes[i] = changes[i-1]
    endif
  );
I am trying to use the SystemVerilog constraint solver to solve the following problem statement:
We have N balls, each with a unique weight, and these balls need to be distributed into groups such that the weight of each group does not exceed a threshold (MAX_WEIGHT).
Now I want to find all such possible solutions. The code I wrote in SV is as follows:
`define NUM_BALLS 5
`define MAX_WEIGHT_BUCKET 100

class weight_distributor;
  int ball_weight [`NUM_BALLS];
  rand int unsigned solution_array[][];

  constraint c_solve_bucket_problem {
    foreach (solution_array[i,j]) {
      solution_array[i][j] inside {ball_weight};
      //unique{solution_array[i][j]};
      foreach (solution_array[ii,jj])
        if (!((ii == i) & (j == jj))) solution_array[ii][jj] != solution_array[i][j];
    }
    foreach (solution_array[i,])
      solution_array[i].sum() < `MAX_WEIGHT_BUCKET;
  }

  function new();
    ball_weight = {10,20,30,40,50};
  endfunction

  function void post_randomize();
    foreach (solution_array[i,j])
      $display("solution_array[%0d][%0d] = %0d", i, j, solution_array[i][j]);
    $display("solution_array size = %0d", solution_array.size);
  endfunction
endclass

module top;
  weight_distributor weight_distributor_o;
  initial begin
    weight_distributor_o = new();
    void'(weight_distributor_o.randomize());
  end
endmodule
The issue I am facing here is that I want the sizes of both dimensions of the array to be decided randomly, based on the constraint solution_array[i].sum() < `MAX_WEIGHT_BUCKET. From my understanding of SV constraints, I believe that the size of the array is solved before values are assigned to the array.
Moreover, I also wanted to know whether unique can be used for a two-dimensional dynamic array.
You can't use the random number generator (RNG) to enumerate all possible solutions of your problem. It's not built for this. An RNG can give you one of these solutions with each call to randomize(). It's not guaranteed, though, that it gives you a different solution with each call. Say you have 3 solutions, S0, S1, S2. The solver could give you S1, then S2, then S1 again, then S1, then S0, etc. If you know how many solutions there are, you can stop once you've seen them all. Generally, though, you don't know this beforehand.
What an RNG can do, however, is check whether a solution you provide is correct. If you loop over all possible solutions, you can filter out only the ones that are correct. In your case, you have N balls and up to N groups. You can start out by putting each ball into its own group and checking whether this is a correct solution. You can then put 2 balls into one group and all the other N - 2 into groups of one, then put 2 different balls into one group and all the others into groups of one, then 2 balls into one group, 2 other balls into another group and the remaining N - 4 into groups of one, and so on, until you put all N balls into the same group. I'm not really sure how you can easily enumerate all arrangements; combinatorics can help you here. At each step of the way you can check whether a certain ball arrangement satisfies the constraints:
// Array describing an arrangement of balls
// - the first dimension is the group
// - the second dimension is the index within the group
typedef int unsigned arrangement_t[][];

// Function that gives you the next arrangement to try out
function arrangement_t get_next_arrangement();
  // ...
endfunction

arrangement = get_next_arrangement();
if (weight_distributor_o.randomize() with {
      solution_array.size() == arrangement.size();
      foreach (solution_array[i]) {
        solution_array[i].size() == arrangement[i].size();
        foreach (solution_array[i][j])
          solution_array[i][j] == arrangement[i][j];
      }
    })
  all_solutions.push_back(arrangement);
Now, let's look at weight_distributor. I'd recommend you write each requirement in its own constraint, as this makes the code much more readable.
You can shorten the uniqueness constraint that you wrote as a double loop by using the unique operator:
class weight_distributor;
  // ...
  constraint unique_balls {
    unique { solution_array };
  }
endclass
You already had a constraint that each group can have at most MAX_WEIGHT in it:
class weight_distributor;
  // ...
  constraint max_weight_per_group {
    foreach (solution_array[i])
      solution_array[i].sum() < `MAX_WEIGHT_BUCKET;
  }
endclass
Because of the way array sizes are solved, it's not possible to write constraints that ensure you can compute a valid solution using simple calls to randomize(). You don't need this, though, if you only want to check whether a given solution is valid, thanks to the constraints on array sizes between arrangement and solution_array.
Try this!
class ABC;
  rand bit[3:0] md_array [][]; // multidimensional array with unknown size

  constraint c_md_array {
    // First assign the size of the first dimension of md_array
    md_array.size() == 2;

    // Then for each sub-array in the first dimension do the following:
    foreach (md_array[i]) {
      // Randomize the size of the sub-array to a value within the range
      md_array[i].size() inside {[1:5]};

      // Iterate over the second dimension
      foreach (md_array[i][j]) {
        // Constrain the values in the second dimension
        md_array[i][j] inside {[1:10]};
      }
    }
  }
endclass
module tb;
  initial begin
    ABC abc = new;
    abc.randomize();
    $display("md_array = %p", abc.md_array);
  end
endmodule
https://www.chipverify.com/systemverilog/systemverilog-foreach-constraint
As simple as in the title. I have an n x 1 vector p. I'm interested in the maximum value of r = p/foo - floor(p/foo), with foo being a scalar, so I just call:
max_value = max(p/foo-floor(p/foo))
How can I get which value of p gave out max_value?
I thought about calling:
[max_value, max_index] = max(p/foo-floor(p/foo))
but soon I realised that max_index is pretty useless. I'm sorry for asking this; real beginner here.
Having taken the issue to pieces, I realized there's no unique correspondence between values of p and values in the related vector p/foo - floor(p/foo), so there's a logical issue rather than a language one.
However, given my input data, I know that the solution is unique. How can I fix this?
I ended up doing:
result = p(p/foo-floor(p/foo) == max(p/foo-floor(p/foo)))
Looks terrible, so if you know any other way...
Once you have the index, use it:
result = p(max_index)
You can create a new vector with your, let's say, "transformed" values:
p2 = (p/foo-floor(p/foo))
and then just use find to find the max values on p2:
max_index = find(p2 == max(p2))
That will return the index (or indices) of p2 with the max value of that operation; finally, just look up the original value in p:
p(max_index)
In one line, this is:
p(find((p/foo-floor(p/foo) == max((p/foo-floor(p/foo))))))
which is basically the same thing you did in the end :)