Merging time distributed tensor gives 'inbound node error' - merge

In my network, I have some time distributed convolutions.Batch size = 1 image, which breaks down into 32 sub-images, for each sub-image, 3 feature of dimension 6x6x256. I need to merge all the 3 features corresponding to a particular image.
Tensor definition are like:
out1 = TimeDistributed(Convolution2D(256, (3, 3), strides=(2, 2), padding='same', activation='relu'))(out1)
out2 = TimeDistributed(Convolution2D(256, (3, 3), strides = (2,2), padding='same', activation='relu'))(out2)
out3 = TimeDistributed(Convolution2D(256, (1, 1), padding='same', activation='relu'))(out3)
out1: <tf.Tensor 'time_distributed_3/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out2: <tf.Tensor 'time_distributed_5/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out4: <tf.Tensor 'time_distributed_6/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
I have tried different techniques to merge like TimeDistributed(merge(... )), etc but nothing works.
out = Lambda(lambda x:merge([x[0],x[1],x[2]],mode='concat'))([out1,out2,out3])
It gives correct dimension tensor (1,32,6,6,768), but then it further passes through some flatten and dense layers. When i build the model like
model = Model( .... , .... ), it gives error
File "/home/adityav/.virtualenvs/cv/local/lib/python2.7/site-packages/keras/engine/topology.py", line 1664, in build_map_of_graph
next_node = layer.inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute 'inbound_nodes'
Any idea on how to do this time distributed concatenation, when the tensors are 5dimensional.
Thanks

Related

OR-Tools: TSP with Profits

is there an option to calculate the TSP with Profits using OR-Tools?
I tried to implement a revenue for every end node
data['revenue'] = {(1, 0): 0,
(1, 2): 100,
(1, 3): 10,
(1, 4): 10000000000,
(2, 0): 0,
(2, 1): 1000,
(2, 3): 10,
(2, 4): 10000000000,
(3, 0): 0,
(3, 1): 1000,
(3, 2): 100,
(3, 4): 10000000000,
(4, 0): 0,
(4, 1): 1000,
(4, 2): 100,
(4, 3): 10
}
and then i added from this question: Maximize profit with disregard to number of pickup-delivery done OR tools
for node, revenue in data["revenue"].items():
start, end = node
routing.AddDisjunction(
[manager.NodeToIndex(end)], revenue
)
routing.AddDisjunction(
[manager.NodeToIndex(start)], 0
)
this is not working unfortunately. I always get solutions, that are not making sense.
Can someone help or give me an advice, how to implement profits into the TSP with OR-Tools?
Adding start or end node in a Disjunction makes no sense...
AddDisjunction must be applied to each node (or pair) BUT start and end nodes.
for node, revenue in data["revenue"].items():
pickup, delivery = node
pickup_index = manager.NodeToIndex(pickup)
delivery_index = manager.NodeToIndex(delivery)
routing.AddDisjunction(
[pickup_index, delivery_index], revenue, 2 # cardinality
)
Does your key value in your dict are pair of nodes ? seems to be coordinate to me...

Group_by_key in order in Pyspark

rrr = sc.parallelize([1, 2, 3])
fff = sc.parallelize([5, 6, 7, 8])
test = rrr.cartesian(fff)
Here's test:
[(1, 5),(1, 6),(1, 7),(1, 8),
(2, 5),(2, 6),(2, 7),(2, 8),
(3, 5),(3, 6),(3, 7),(3, 8)]
Is there a way to preserve the order after calling groupByKey:
test.groupByKey().mapValues(list).take(2)
Output is this where the list is in random order:
Out[255]: [(1, [8, 5, 6, 7]), (2, [5, 8, 6, 7]), (3, [6, 8, 7, 5])]
The desired output is:
[(1, [5,6,7,8]), (2, [5,6,7,8]), (3, [5,6,7,8])]
How to achieve this?
You can add one more mapValues to sort the lists:
result = test.groupByKey().mapValues(list).mapValues(sorted)

Scala flattening loses the desired grouping of the subsets by size

Count of the number of sum of the cubes equal to the target value.
For a small number of sets this code works (target is 100 vs 1000). When the target value increases the system runs out of resources. I have not flattened allsets with the intention of only creating & processing the smaller subsets as needed.
How do I lazily create/use the subsets by size until the sums for all the Sets of one size equal or exceed the target, at which point nothing more needs to be examined because the rest of the sums will be larger than the target.
val target = 100; val exp = 3; val maxi = math.pow(target, 1.0/exp).toInt;
target: Int = 100
exp: Int = 3
maxi: Int = 4
val allterms=(1 to maxi).map(math.pow(_,exp).toInt).to[Set];
allterms: Set[Int] = Set(1, 8, 27, 64)
val allsets = (1 to maxi).map(allterms.subsets(_).to[Vector]); allsets.mkString("\n");
allsets: scala.collection.immutable.IndexedSeq[Vector[scala.collection.immutable.Set[Int]]] = Vector(Vector(Set(1), Set(8), Set(27), Set(64)), Vector(Set(1, 8), Set(1, 27), Set(1, 64), Set(8, 27), Set(8, 64), Set(27, 64)), Vector(Set(1, 8, 27), Set(1, 8, 64), Set(1, 27, 64), Set(8, 27, 64)), Vector(Set(1, 8, 27, 64)))
res7: String =
Vector(Set(1), Set(8), Set(27), Set(64))
Vector(Set(1, 8), Set(1, 27), Set(1, 64), Set(8, 27), Set(8, 64), Set(27, 64))
Vector(Set(1, 8, 27), Set(1, 8, 64), Set(1, 27, 64), Set(8, 27, 64))
Vector(Set(1, 8, 27, 64))
allsets.flatten.map(_.sum).filter(_==target).size;
res8: Int = 1
This implementation loses the separation of the subsets by size.
You can add laziness to your calculations in two ways:
Use combinations() instead of subsets(). This creates an Iterator so the combination (collection of Int values) won't be realized until it is needed.
Use a Stream (or LazyList if Scala 2.13.0) so that each "row" (same sized combinations) won't be realized until it is needed.
Then you can trim the number of rows to be realized by using the fact that the first combination of each row is going to have the minimum sum of that row.
val target = 125
val exp = 2
val maxi = math.pow(target, 1.0/exp).toInt //maxi: Int = 11
val allterms=(1 to maxi).map(math.pow(_,exp).toInt)
//allterms = Seq(1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121)
val allsets = Stream.range(1,maxi+1).map(allterms.combinations)
//allsets: Stream[Iterator[IndexedSeq[Int]]] = Stream(<iterator>, ?)
// 11 rows, 2047 combinations, all unrealized
allsets.map(_.map(_.sum).buffered) //Stream[BufferedIterator[Int]]
.takeWhile(_.head <= target) // 6 rows
.flatten // 1479 combinations
.count(_ == target)
//res0: Int = 5

Reformatting Dataframe Containing Array to RowMatrix

I have this dataframe in the following format:
+----+-----+
| features |
+----+-----+
|[1,4,7,10]|
|[2,5,8,11]|
|[3,6,9,12]|
+----+----+
Script to create sample dataframe:
rows2 = sc.parallelize([ IndexedRow(0, [1, 4, 7, 10 ]),
IndexedRow(1, [2, 5, 8, 1]),
IndexedRow(1, [3, 6, 9, 12]),
])
rows_df = rows2.toDF()
row_vec= rows_df.drop("index")
row_vec.show()
The feature column contains 4 features, and there are 3 row ids. I want to convert this data to a rowmatrix, where the columns and rows will be in the following mat format:
from pyspark.mllib.linalg.distributed import RowMatrix
rows = sc.parallelize([(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)])
# Convert to RowMatrix
mat = RowMatrix(rows)
# Calculate exact and approximate similarities
exact = mat.columnSimilarities()
approx = mat.columnSimilarities(0.05)
Basically, I want to transpose the dataframe into the new format so that I can run the columnSimilarities() function. I have a much larger dataframe that contains 50 features, and 39000 rows.
Is this what you are trying to do? Hate using collect() but don't think it can be avoided here since you want to reshape/convert structured object to matrix ... right?
X = np.array(row_vec.select("_2").collect()).reshape(-1,3)
X = sc.parallelize(X)
for i in X.collect(): print(i)
[1 4 7]
[10 2 5]
[8 1 3]
[ 6 9 12]
I figured it out, I used the following:
from pyspark.mllib.linalg.distributed import RowMatrix
features_rdd = row_vec.select("features").rdd.map(lambda row: row[0])
features_mat = RowMatrix(features_rdd )
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry
coordmatrix_features = CoordinateMatrix(
features_mat .rows.zipWithIndex().flatMap(
lambda x: [MatrixEntry(x[1], j, v) for j, v in enumerate(x[0])]
)
)
transposed_rowmatrix_features = coordmatrix_features.transpose().toRowMatrix()

How to check if a list of matrix contains a given matrix in Maple

I have some problems in Maple.
If I have a matrix:
Matrix1 := Matrix(2, 2, {(1, 1) = 31, (1, 2) = -80, (2, 1) = -50, (2, 2) = 43});
I want to decide if it is in the below list:
MatrixList := [Matrix(2, 2, {(1, 1) = 31, (1, 2) = -80, (2, 1) = -50, (2, 2) = 43}), Matrix(2, 2, {(1, 1) = -61, (1, 2) = 77, (2, 1) = -48, (2, 2) = 9})];
I did the following:
evalb(Matrix1 in MatrixList);
but got "false".
Why? And how do I then make a program that decide if a matrix is
contained in a list of matrices.
Here's a much cheaper way than DrC's
ormap(LinearAlgebra:-Equal, MatrixList, Matrix1)