How to pick an element from a matrix (list of lists in Python) based on decision variables (one for row and one for column) | OR-Tools, Python

I am new to constraint programming and OR-Tools. A brief description of the problem: there are 8 positions, and for each position I need to decide which move of type A (move_A) and which move of type B (move_B) should be selected such that the value achieved from the combination of the two moves (at each position) is maximized. (This is only a part of a bigger problem though.) I want to use the AddElement approach to do the subsetting.
Please see my attempt below:
from ortools.sat.python import cp_model
model = cp_model.CpModel()
# value achieved from combination of different moves of type A
# (moves_A (rows)) and different moves of type B (moves_B (columns))
# for e.g. 2nd move of type A and 3rd move of type B will give value = 2
value = [
[ -1, 5, 3, 2, 2],
[ 2, 4, 2, -1, 1],
[ 4, 4, 0, -1, 2],
[ 5, 1, -1, 2, 2],
[ 0, 0, 0, 0, 0],
[ 2, 1, 1, 2, 0]
]
# 6 moves of type A
num_moves_A = len(value)
# 5 moves of type B
num_moves_B = len(value[0])
num_positions = 8
type_move_A_position = [model.NewIntVar(0, num_moves_A - 1, f"move_A[{i}]") for i in range(num_positions)]
type_move_B_position = [model.NewIntVar(0, num_moves_B - 1, f"move_B[{i}]") for i in range(num_positions)]
value_position = [model.NewIntVar(0, 10, f"value_position[{i}]") for i in range(num_positions)]
# I am getting an error when I run the below
objective_terms = []
for i in range(num_positions):
    model.AddElement(type_move_B_position[i], value[type_move_A_position[i]], value_position[i])
    objective_terms.append(value_position[i])
The error is as follows:
Traceback (most recent call last):
File "<ipython-input-65-3696379ce410>", line 3, in <module>
model.AddElement(type_move_B_position[i], value[type_move_A_position[i]], value_position[i])
TypeError: list indices must be integers or slices, not IntVar
In MiniZinc the below code would have worked
var int: obj = sum(i in 1..num_positions ) (value [type_move_A_position[i], type_move_B_position[i]])
I know that in OR-Tools we will have to create some intermediate variables to store the results first, so the MiniZinc approach above will not work directly, but I am struggling to do so.
I can always create two matrices of binary variables, one of size num_moves_A * num_positions and the other of size num_moves_B * num_positions, add the relevant constraints, and achieve the purpose.
But I want to learn how to do the same thing via the AddElement constraint.
Any help on how to re-write the AddElement snippet is highly appreciated. Thanks.

AddElement is 1D only.
The way it is translated from MiniZinc to CP-SAT is to create an intermediate variable p == index1 * num_columns + index2 and use it in an element constraint over the flattened matrix.
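A minimal sketch of that flattening recipe on its own (the variable names row, col, flat_index and cell are illustrative, not from the original post):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
value = [[-1, 5, 3], [2, 4, 2]]  # any rectangular matrix
num_rows, num_cols = len(value), len(value[0])
value_flat = [v for row_values in value for v in row_values]  # row-major flattening

row = model.NewIntVar(0, num_rows - 1, "row")
col = model.NewIntVar(0, num_cols - 1, "col")
flat_index = model.NewIntVar(0, num_rows * num_cols - 1, "flat_index")
cell = model.NewIntVar(min(value_flat), max(value_flat), "cell")

# p == index1 * num_columns + index2, then a 1D element lookup on the flattened data
model.Add(flat_index == row * num_cols + col)
model.AddElement(flat_index, value_flat, cell)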

Following Laurent's suggestion (using AddElement constraint):
from ortools.sat.python import cp_model
model = cp_model.CpModel()
# value achieved from combination of different moves of type A
# (moves_A (rows)) and different moves of type B (moves_B (columns))
# e.g. the 2nd move of type A and the 3rd move of type B will give value = 2
value = [
[-1, 5, 3, 2, 2],
[2, 4, 2, -1, 1],
[4, 4, 0, -1, 2],
[5, 1, -1, 2, 2],
[0, 0, 0, 0, 0],
[2, 1, 1, 2, 0],
]
min_value = min([min(i) for i in value])
max_value = max([max(i) for i in value])
# 6 moves of type A
num_moves_A = len(value)
# 5 moves of type B
num_moves_B = len(value[0])
# number of positions
num_positions = 5
# flattened matrix of values
value_flat = [value[i][j] for i in range(num_moves_A) for j in range(num_moves_B)]
# flattened indices
flatten_indices = [
index1 * len(value[0]) + index2
for index1 in range(len(value))
for index2 in range(len(value[0]))
]
type_move_A_position = [
model.NewIntVar(0, num_moves_A - 1, f"move_A[{i}]") for i in range(num_positions)
]
model.AddAllDifferent(type_move_A_position)
type_move_B_position = [
model.NewIntVar(0, num_moves_B - 1, f"move_B[{i}]") for i in range(num_positions)
]
model.AddAllDifferent(type_move_B_position)
# below intermediate decision variable is created which
# will store index corresponding to the selected move of type A and
# move of type B for each position
# this will act as index in the AddElement constraint
flatten_index_num = [
    model.NewIntVar(0, len(flatten_indices) - 1, f"flatten_index_num[{i}]")
    for i in range(num_positions)
]
# another intermediate decision variable is created which
# will store value corresponding to the selected move of type A and
# move of type B for each position
# this will act as the target in the AddElement constraint
value_position_index_num = [
model.NewIntVar(min_value, max_value, f"value_position_index_num[{i}]")
for i in range(num_positions)
]
objective_terms = []
for i in range(num_positions):
    model.Add(
        flatten_index_num[i]
        == (type_move_A_position[i] * len(value[0])) + type_move_B_position[i]
    )
    model.AddElement(flatten_index_num[i], value_flat, value_position_index_num[i])
    objective_terms.append(value_position_index_num[i])
model.Maximize(sum(objective_terms))
# Solve
solver = cp_model.CpSolver()
status = solver.Solve(model)
solver.ObjectiveValue()
for i in range(num_positions):
    print(
        str(i)
        + "--"
        + str(solver.Value(type_move_A_position[i]))
        + "--"
        + str(solver.Value(type_move_B_position[i]))
        + "--"
        + str(solver.Value(value_position_index_num[i]))
    )
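One caveat for this snippet (and for the AddAllowedAssignments version below): ObjectiveValue() and Value() are only meaningful if the solve actually produced a solution, so it is worth guarding on the returned status, for example:

if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("objective:", solver.ObjectiveValue())
else:
    print("no solution found, status:", solver.StatusName(status))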

The version below uses the AddAllowedAssignments constraint to achieve the same purpose (per Laurent's alternate approach):
from ortools.sat.python import cp_model
model = cp_model.CpModel()
# value achieved from combination of different moves of type A
# (moves_A (rows)) and different moves of type B (moves_B (columns))
# e.g. the 2nd move of type A and the 3rd move of type B will give value = 2
value = [
[-1, 5, 3, 2, 2],
[2, 4, 2, -1, 1],
[4, 4, 0, -1, 2],
[5, 1, -1, 2, 2],
[0, 0, 0, 0, 0],
[2, 1, 1, 2, 0],
]
min_value = min([min(i) for i in value])
max_value = max([max(i) for i in value])
# 6 moves of type A
num_moves_A = len(value)
# 5 moves of type B
num_moves_B = len(value[0])
# number of positions
num_positions = 5
type_move_A_position = [
model.NewIntVar(0, num_moves_A - 1, f"move_A[{i}]") for i in range(num_positions)
]
model.AddAllDifferent(type_move_A_position)
type_move_B_position = [
model.NewIntVar(0, num_moves_B - 1, f"move_B[{i}]") for i in range(num_positions)
]
model.AddAllDifferent(type_move_B_position)
value_position = [
model.NewIntVar(min_value, max_value, f"value_position[{i}]")
for i in range(num_positions)
]
tuples_list = []
for i in range(num_moves_A):
    for j in range(num_moves_B):
        tuples_list.append((i, j, value[i][j]))
for i in range(num_positions):
    model.AddAllowedAssignments(
        [type_move_A_position[i], type_move_B_position[i], value_position[i]],
        tuples_list,
    )
model.Maximize(sum(value_position))
# Solve
solver = cp_model.CpSolver()
status = solver.Solve(model)
solver.ObjectiveValue()
for i in range(num_positions):
    print(
        str(i)
        + "--"
        + str(solver.Value(type_move_A_position[i]))
        + "--"
        + str(solver.Value(type_move_B_position[i]))
        + "--"
        + str(solver.Value(value_position[i]))
    )

Related

nvim-treesitter: query for jsdoc description within javascript file

I'm experimenting with a testing regime for my neovim plugin regexplainer. The idea is to put a JSDoc docblock containing the expected output on top of each query.
/**
 * @group CaptureGroups
 * **0-9** (_>= 0x_)
 * capture group 1:
 * **0-9**
 */
/\d*(\d)/;
My test file would then iterate through the file, getting each regex from the javascript tree along with its expected output from the comment. I could then run my plugin on the regexp and check the output against the comment.
I wrote this function to query the document for comments with regexp following.
function M.get_regexes_with_descriptions()
  local log = require'regexplainer.utils'.debug
  local query_js = vim.treesitter.query.parse_query('javascript', [[
    (comment) @comment
    (expression_statement
      (regex
        (regex_pattern) @regex))
  ]])
  local query_jsdoc = vim.treesitter.query.parse_query('jsdoc', [[
    (tag_name) @tag_name
    (description) @description
  ]])
  local parser = vim.treesitter.get_parser(0, 'javascript')
  local tree, other = unpack(parser:parse())
  for id, node, metadata in query_js:iter_captures(tree:root(), 0) do
    local name = query_js.captures[id] -- name of the capture in the query
    if node:type() == 'comment' then
      for cid, cnode, cmetadata in node:iter_children() do
        local cname = query_jsdoc.captures[cid] -- name of the capture in the query
        log(get_info_on_capture(cid, cnode, cname, cmetadata))
      end
    end
  end
end
I expect this to log the tag name and description nodes from the 'jsdoc' grammar, but I actually get nothing.
So how do I "query down" into the embedded JSDoc part of the tree? I tried this query, but got a "query: invalid node type at position 10" error when parsing it:
(comment (description) @desc) @comment
(expression_statement
  (regex
    (regex_pattern) @regex))
The TSPlayground output for the file in question looks like this:
comment [0, 0] - [5, 3]
  tag [1, 3] - [4, 12]
    tag_name [1, 3] - [1, 9]
    description [1, 10] - [4, 12]
expression_statement [6, 0] - [6, 10]
  regex [6, 0] - [6, 9]
    pattern: regex_pattern [6, 1] - [6, 8]
      term [6, 1] - [6, 8]
        character_class_escape [6, 1] - [6, 3]
        zero_or_more [6, 3] - [6, 4]
        anonymous_capturing_group [6, 4] - [6, 8]
          pattern [6, 5] - [6, 7]
            term [6, 5] - [6, 7]
              character_class_escape [6, 5] - [6, 7]
EDIT: A Workaround
I developed this workaround, but it's a bit unsatisfying; I'd prefer to get the comment's contents along with the regexp in a single query:
function M.get_regexes_with_descriptions()
  local log = require'regexplainer.utils'.debug
  local parsers = require "nvim-treesitter.parsers"
  local ts_utils = require "nvim-treesitter.ts_utils" -- needed for get_node_text below
  local query_js = vim.treesitter.query.parse_query('javascript', [[
    (expression_statement
      (regex
        (regex_pattern) @regex)) @expr
  ]])
  local query_jsdoc = vim.treesitter.query.parse_query('jsdoc', [[
    (tag
      (tag_name) @tag_name
      ((description) @description
        ;; (#eq? @tag_name "example")
      ))
  ]])
  local parser = parsers.get_parser(0)
  local tree = unpack(parser:parse())
  local caps = {}
  for id, node in query_js:iter_captures(tree:root(), 0) do
    local cap = {}
    local name = query_js.captures[id] -- name of the capture in the query
    log(id, name, ts_utils.get_node_text(node))
    if name == 'expr' then
      cap.regex = node:named_child('pattern')
      cap.comment = node:prev_sibling()
      table.insert(caps, cap)
    end
  end
  local results = {}
  for _, cap in ipairs(caps) do
    local result = {}
    result.text = ts_utils.get_node_text(cap.regex)
    local comment_str = table.concat(ts_utils.get_node_text(cap.comment), '\n')
    local jsdoc_parser = vim.treesitter.get_string_parser(comment_str, 'jsdoc')
    local jsdoc_tree = jsdoc_parser:parse()[1]
    for id, ch, metadata in query_jsdoc:iter_captures(jsdoc_tree:root()) do
      if query_jsdoc.captures[id] == 'description' then
        result.description = table.concat(
          vim.tbl_map(function(line)
            return line:gsub('^%s+%*', '')
          end, ts_utils.get_node_text(ch)), '\n')
      end
    end
    table.insert(results, result)
  end
  return results
end

About OR-Tools OnlyEnforceIf, AddBoolAnd question

I have a Bool array, and I want the ones to appear only in certain places, like this:
[1,1,1,1,0,0,0,0,0,0] or [0,0,0,0,0,0,1,1,1,1]. I want four ones to appear either at the array's head or at its tail, just those two placements. How can I do this?
model = cp_model.CpModel()
solver = cp_model.CpSolver()
shifts = {}
ones={}
sequence = []
for i in range(10):
    shifts[(i)] = model.NewIntVar(0, 10, "shifts(%i)" % i)
    ones[(i)] = model.NewBoolVar('%i' % i)
for i in range(10):
    model.Add(shifts[(i)] == 8).OnlyEnforceIf(ones[(i)])
    model.Add(shifts[(i)] == 0).OnlyEnforceIf(ones[(i)].Not())
# I want the four 8s in the array to appear only at the head or the tail of the array, and not in other positions.
model.AddBoolAnd([ones[(0)], ones[(1)], ones[(2)], ones[(3)]])  # appear at head
# model.AddBoolAnd([ones[(6)], ones[(7)], ones[(8)], ones[(9)]])  # appear at tail, error!
model.Add(sum(ones[(i)] for i in range(10)) == 4)
status = solver.Solve(model)
print("status:",status)
res=[]
for i in range(10):
    res.append(solver.Value(shifts[(i)]))
print(res)
Try AddAllowedAssignments:
model = cp_model.CpModel()
ones = [model.NewBoolVar("") for _ in range(10)]
model.AddAllowedAssignments(
ones, [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]]
)
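To see it in action, here is a minimal end-to-end check of that idea (the shifts variables from the question are omitted; this only exercises the allowed-assignments constraint):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
ones = [model.NewBoolVar("") for _ in range(10)]
model.AddAllowedAssignments(
    ones, [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]]
)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    # prints one of the two allowed patterns
    print([solver.Value(v) for v in ones])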
Edit: @Laurent's suggestion
model = cp_model.CpModel()
ones = [model.NewBoolVar("") for _ in range(10)]
head = model.NewBoolVar("head")
# HEAD
model.AddImplication(head, ones[0])
model.AddImplication(head, ones[1])
model.AddImplication(head, ones[2])
model.AddImplication(head, ones[3])
model.AddImplication(head, ones[4].Not())
model.AddImplication(head, ones[5].Not())
...
# TAIL
model.AddImplication(head.Not(), ones[0].Not())
model.AddImplication(head.Not(), ones[1].Not())
....
model.AddImplication(head.Not(), ones[6])
model.AddImplication(head.Not(), ones[7])
model.AddImplication(head.Not(), ones[8])
model.AddImplication(head.Not(), ones[9])
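The "..." lines stand for the remaining implications, which follow the same pattern. Written out with a loop (a sketch of the same idea, not part of the original answer), the head/tail construction looks like this:

from ortools.sat.python import cp_model

model = cp_model.CpModel()
ones = [model.NewBoolVar("") for _ in range(10)]
head = model.NewBoolVar("head")

# head == True  -> ones[0..3] are 1, everything else 0
# head == False -> ones[6..9] are 1, everything else 0
for i in range(10):
    model.AddImplication(head, ones[i] if i < 4 else ones[i].Not())
    model.AddImplication(head.Not(), ones[i] if i >= 6 else ones[i].Not())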

Google OR-Tools doesn't find a solution to a VRPTW problem

I'm tackling a VRPTW problem and struggling with the fact that the solver finds no solution with any data except an artificially small set.
The setting is as below.
There are several depots and locations to visit. Each location has a time window. Each vehicle has a break time and a work time. Also, the locations have some constraints, and only the vehicles which satisfy them can visit there.
Based on this experimental setting, I wrote the code below.
As I wrote, it seems to work with small artificial data, but with real data it never finds a solution. I tried with 5 different data sets.
Although I set a 7200 second time limit, I previously ran it for longer than 10 hours and it was the same.
The data's scale is 40~50 vehicles and 200~300 locations.
Does this code have a problem? If not, in what order should I change the approach (such as initialization, search method, and so on)?
(Edited to use integers for the time matrix)
from dataclasses import dataclass
from typing import List, Tuple
from ortools.constraint_solver import pywrapcp
from ortools.constraint_solver import routing_enums_pb2
# TODO: Refactor
BIG_ENOUGH = 100000000
TIME_DIMENSION = 'Time'
TIME_LIMIT = 7200
@dataclass
class DataSet:
    time_matrix: List[List[int]]
    locations_num: int
    vehicles_num: int
    vehicles_break_time_window: List[Tuple[int, int, int]]
    vehicles_work_time_windows: List[Tuple[int, int]]
    location_time_windows: List[Tuple[int, int]]
    vehicles_depots_indices: List[int]
    possible_vehicles: List[List[int]]
def execute(data: DataSet):
    manager = pywrapcp.RoutingIndexManager(data.locations_num,
                                           data.vehicles_num,
                                           data.vehicles_depots_indices,
                                           data.vehicles_depots_indices)
    routing_parameters = pywrapcp.DefaultRoutingModelParameters()
    routing_parameters.solver_parameters.trace_propagation = True
    routing_parameters.solver_parameters.trace_search = True
    routing = pywrapcp.RoutingModel(manager, routing_parameters)

    def time_callback(source_index, dest_index):
        from_node = manager.IndexToNode(source_index)
        to_node = manager.IndexToNode(dest_index)
        return data.time_matrix[from_node][to_node]

    transit_callback_index = routing.RegisterTransitCallback(time_callback)
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
    routing.AddDimension(
        transit_callback_index,
        BIG_ENOUGH,
        BIG_ENOUGH,
        False,
        TIME_DIMENSION)
    time_dimension = routing.GetDimensionOrDie(TIME_DIMENSION)
    # set time window for locations start time
    # set condition restrictions
    possible_vehicles = data.possible_vehicles
    for location_idx, time_window in enumerate(data.location_time_windows):
        index = manager.NodeToIndex(location_idx + data.vehicles_num)
        time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1])
        routing.SetAllowedVehiclesForIndex(possible_vehicles[location_idx], index)
    solver = routing.solver()
    for i in range(data.vehicles_num):
        routing.AddVariableMinimizedByFinalizer(
            time_dimension.CumulVar(routing.Start(i)))
        routing.AddVariableMinimizedByFinalizer(
            time_dimension.CumulVar(routing.End(i)))
    # set work time window for vehicles
    for vehicle_index, work_time_window in enumerate(data.vehicles_work_time_windows):
        start_index = routing.Start(vehicle_index)
        time_dimension.CumulVar(start_index).SetRange(work_time_window[0],
                                                      work_time_window[0])
        end_index = routing.End(vehicle_index)
        time_dimension.CumulVar(end_index).SetRange(work_time_window[1],
                                                    work_time_window[1])
    # set break time for vehicles
    node_visit_transit = {}
    for n in range(routing.Size()):
        if n >= data.locations_num:
            node_visit_transit[n] = 0
        else:
            node_visit_transit[n] = 1
    break_intervals = {}
    for v in range(data.vehicles_num):
        vehicle_break = data.vehicles_break_time_window[v]
        break_intervals[v] = [
            solver.FixedDurationIntervalVar(vehicle_break[0],
                                            vehicle_break[1],
                                            vehicle_break[2],
                                            True,
                                            'Break for vehicle {}'.format(v))
        ]
        time_dimension.SetBreakIntervalsOfVehicle(
            break_intervals[v], v, node_visit_transit
        )
    search_parameters = pywrapcp.DefaultRoutingSearchParameters()
    search_parameters.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
    search_parameters.local_search_metaheuristic = (
        routing_enums_pb2.LocalSearchMetaheuristic.GREEDY_DESCENT)
    search_parameters.time_limit.seconds = TIME_LIMIT
    search_parameters.log_search = True
    solution = routing.SolveWithParameters(search_parameters)
    return solution
if __name__ == '__main__':
    data = DataSet(
        time_matrix=[[0, 0, 4, 5, 5, 6],
                     [0, 0, 6, 4, 5, 5],
                     [1, 3, 0, 6, 5, 4],
                     [2, 1, 6, 0, 5, 4],
                     [2, 2, 5, 5, 0, 6],
                     [3, 2, 4, 4, 6, 0]],
        locations_num=6,
        vehicles_num=2,
        vehicles_depots_indices=[0, 1],
        vehicles_work_time_windows=[(720, 1080), (720, 1080)],
        vehicles_break_time_window=[(720, 720, 15), (720, 720, 15)],
        location_time_windows=[(735, 750), (915, 930), (915, 930), (975, 990)],
        possible_vehicles=[[0], [1], [0], [1]]
    )
    solution = execute(data)
    if solution is not None:
        print("solution is found")

Sub-titles for matrix columns in table

Here's my toy example:
t = table([1,2,3;4,5,6;7,8,9],[10,11,12;13,14,15;16,17,18]);
t.Properties.VariableNames = {'system1', 'system2'};
t.Properties.RowNames = {'obs1', 'obs2', 'obs3'};
I am wondering if it's possible to assign subtitles to the three columns of every variable, such as {'min', 'mean', 'max'}?
You can put those subtitles within the variables using a cell array like this:
t = table({'min', 'mean', 'max'; 1, 2, 3; 4, 5, 6; 7, 8, 9},...
{'min', 'mean', 'max'; 10, 11, 12; 13, 14, 15; 16, 17, 18});
t.Properties.VariableNames = {'system1', 'system2'};
t.Properties.RowNames = {'.','obs1', 'obs2', 'obs3'};
% if you don't like the dot (.) as a row name, replace it with char(8203) to have a nameless row
which will give:
t =
4×2 table
system1 system2
________________________ ________________________
. 'min' 'mean' 'max' 'min' 'mean' 'max'
obs1 [ 1] [ 2] [ 3] [ 10] [ 11] [ 12]
obs2 [ 4] [ 5] [ 6] [ 13] [ 14] [ 15]
obs3 [ 7] [ 8] [ 9] [ 16] [ 17] [ 18]
If you're looking for a functional solution (e.g. t.system1.min), you can nest sub-tables for system1 and system2 with {'min', 'mean', 'max'} as variable names. Visually it won't be as useful as the other solutions.
dat1 = [1,2,3;4,5,6;7,8,9];
dat2 = [10,11,12;13,14,15;16,17,18];
s1 = table(dat1(:,1),dat1(:,2),dat1(:,3));
s2 = table(dat2(:,1),dat2(:,2),dat2(:,3));
s1.Properties.VariableNames = {'min','mean','max'};
s1.Properties.RowNames = {'obs1', 'obs2', 'obs3'};
s2.Properties.VariableNames = {'min','mean','max'};
s2.Properties.RowNames = {'obs1', 'obs2', 'obs3'};
t = table(s1,s2);
t.Properties.VariableNames = {'system1', 'system2'};
t.Properties.RowNames = {'obs1', 'obs2', 'obs3'};

How to check if a number can be represented as a sum of some given numbers

I've got a list of some integers, e.g. [1, 2, 3, 4, 5, 10]
And I've another integer (N). For example, N = 19.
I want to check if my integer can be represented as a sum of any amount of numbers in my list:
19 = 10 + 5 + 4
or
19 = 10 + 4 + 3 + 2
Every number from the list can be used only once. N can be up to 2 thousand or more. The size of the list can reach 200 integers.
Is there a good way to solve this problem?
Four and a half years later, this question was answered by Jonathan.
I want to post two implementations (brute force and Jonathan's) in Python and their performance comparison.
def check_sum_bruteforce(numbers, n):
    # This bruteforce approach can be improved (for some cases) by
    # returning True as soon as the needed sum is found;
    sums = []
    for number in numbers:
        for sum_ in sums[:]:
            sums.append(sum_ + number)
        sums.append(number)
    return n in sums
def check_sum_optimized(numbers, n):
    sums1, sums2 = [], []
    numbers1 = numbers[:len(numbers) // 2]
    numbers2 = numbers[len(numbers) // 2:]
    for sums, numbers_ in ((sums1, numbers1), (sums2, numbers2)):
        for number in numbers_:
            for sum_ in sums[:]:
                sums.append(sum_ + number)
            sums.append(number)
    # also catch sums built entirely from one half of the list
    if n in sums1 or n in sums2:
        return True
    for sum_ in sums1:
        if n - sum_ in sums2:
            return True
    return False
assert check_sum_bruteforce([1, 2, 3, 4, 5, 10], 19)
assert check_sum_optimized([1, 2, 3, 4, 5, 10], 19)
import timeit
print(
"Bruteforce approach (10000 times):",
timeit.timeit(
'check_sum_bruteforce([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 200)',
number=10000,
globals=globals()
)
)
print(
"Optimized approach by Jonathan (10000 times):",
timeit.timeit(
'check_sum_optimized([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 200)',
number=10000,
globals=globals()
)
)
Output (the float numbers are seconds):
Bruteforce approach (10000 times): 1.830944365834205
Optimized approach by Jonathan (10000 times): 0.34162875449254027
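The comment inside check_sum_bruteforce mentions returning True as soon as the needed sum is found; here is a sketch of that early-exit variant (my own, not from the original answers), which also uses a set for O(1) membership tests:

def check_sum_bruteforce_early_exit(numbers, n):
    sums = set()
    for number in numbers:
        # all sums newly reachable by adding this number
        new_sums = {s + number for s in sums}
        new_sums.add(number)
        if n in new_sums:
            return True
        sums |= new_sums
    return False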
The brute force approach requires generating 2^(array_size)-1 subsets to be summed and compared against target N.
The run time can be dramatically improved by simply splitting the problem in two. Store, in sets, all of the possible sums for one half of the array and for the other half separately. It can then be determined, by checking for every number n in one set, whether the complement N - n exists in the other set.
This optimization brings the number of generated sums down to approximately 2^(array_size/2) - 1 + 2^(array_size/2) - 1 = 2^(array_size/2 + 1) - 2, i.e. the exponent is roughly halved. For example, with 20 numbers that is 2^11 - 2 = 2,046 stored sums instead of 2^20 - 1 = 1,048,575 subsets.
Here is a C++ implementation using this idea.
#include <bits/stdc++.h>
using namespace std;

bool sum_search(vector<int> myarray, int N) {
    // values for splitting the array in two
    int right = myarray.size() - 1, middle = (myarray.size() - 1) / 2;
    set<int> all_possible_sums1, all_possible_sums2;
    // iterate over the first half of the array
    for (int i = 0; i < middle; i++) {
        // buffer set that will hold new possible sums
        set<int> buffer_set;
        // every value currently in the set is used to make new possible sums
        for (set<int>::iterator set_iterator = all_possible_sums1.begin(); set_iterator != all_possible_sums1.end(); set_iterator++)
            buffer_set.insert(myarray[i] + *set_iterator);
        all_possible_sums1.insert(myarray[i]);
        // transfer buffer into the main set
        for (set<int>::iterator set_iterator = buffer_set.begin(); set_iterator != buffer_set.end(); set_iterator++)
            all_possible_sums1.insert(*set_iterator);
    }
    // iterate over the second half of the array
    for (int i = middle; i < right + 1; i++) {
        set<int> buffer_set;
        for (set<int>::iterator set_iterator = all_possible_sums2.begin(); set_iterator != all_possible_sums2.end(); set_iterator++)
            buffer_set.insert(myarray[i] + *set_iterator);
        all_possible_sums2.insert(myarray[i]);
        for (set<int>::iterator set_iterator = buffer_set.begin(); set_iterator != buffer_set.end(); set_iterator++)
            all_possible_sums2.insert(*set_iterator);
    }
    // also catch sums built entirely from one half of the array
    if (all_possible_sums1.count(N) || all_possible_sums2.count(N))
        return true;
    // for every element in the first set, check if the second set has the complement to make N
    for (set<int>::iterator set_iterator = all_possible_sums1.begin(); set_iterator != all_possible_sums1.end(); set_iterator++)
        if (all_possible_sums2.find(N - *set_iterator) != all_possible_sums2.end())
            return true;
    return false;
}
Ugly and brute force approach:
a = [1, 2, 3, 4, 5, 10]
b = []
1.upto(a.size) do |c|  # go up to a.size so the full-length combination is included
  b << a.combination(c).select { |d| d.reduce(&:+) == 19 }
end
puts b.flatten(1).inspect