How to check if a list of matrices contains a given matrix in Maple

I have a problem in Maple.
If I have a matrix:
Matrix1 := Matrix(2, 2, {(1, 1) = 31, (1, 2) = -80, (2, 1) = -50, (2, 2) = 43});
I want to decide whether it is in the list below:
MatrixList := [Matrix(2, 2, {(1, 1) = 31, (1, 2) = -80, (2, 1) = -50, (2, 2) = 43}), Matrix(2, 2, {(1, 1) = -61, (1, 2) = 77, (2, 1) = -48, (2, 2) = 9})];
I did the following:
evalb(Matrix1 in MatrixList);
but got "false".
Why? And how do I write a program that decides whether a matrix is
contained in a list of matrices?

Here's a much cheaper way than DrC's. The in test returns false because Matrices are mutable objects that evalb compares by address rather than by entries, so two distinct Matrices with identical entries do not count as the same element; LinearAlgebra:-Equal compares the entries themselves:
ormap(LinearAlgebra:-Equal, MatrixList, Matrix1)

Getting the first item of each tuple for each row in a list in Scala

I am trying to do this in Scala, but nothing I have tried works. In PySpark it works as expected:
from operator import itemgetter
rdd = sc.parallelize([(0, [(0,'a'), (1,'b'), (2,'c')]), (1, [(3,'x'), (5,'y'), (6,'z')])])
mapped = rdd.mapValues(lambda v: map(itemgetter(0), v))
Output:
mapped.collect()
[(0, [0, 1, 2]), (1, [3, 5, 6])]
The Scala equivalent:
val rdd = sparkContext.parallelize(List(
  (0, Array((0, "a"), (1, "b"), (2, "c"))),
  (1, Array((3, "x"), (5, "y"), (6, "z")))
))
rdd
  .mapValues(v => v.map(_._1))  // keep only the first element of each tuple
  .foreach(v => println(v._1 + "; " + v._2.toSeq.mkString(",")))
Output:
0; 0,1,2
1; 3,5,6

OR-Tools: TSP with Profits

Is there an option to calculate the TSP with profits using OR-Tools?
I tried to implement a revenue for every end node:
data['revenue'] = {(1, 0): 0,
                   (1, 2): 100,
                   (1, 3): 10,
                   (1, 4): 10000000000,
                   (2, 0): 0,
                   (2, 1): 1000,
                   (2, 3): 10,
                   (2, 4): 10000000000,
                   (3, 0): 0,
                   (3, 1): 1000,
                   (3, 2): 100,
                   (3, 4): 10000000000,
                   (4, 0): 0,
                   (4, 1): 1000,
                   (4, 2): 100,
                   (4, 3): 10
                   }
Then I added the following, taken from this question: Maximize profit with disregard to number of pickup-delivery done OR tools
for node, revenue in data["revenue"].items():
    start, end = node
    routing.AddDisjunction(
        [manager.NodeToIndex(end)], revenue
    )
    routing.AddDisjunction(
        [manager.NodeToIndex(start)], 0
    )
Unfortunately this is not working; I keep getting solutions that make no sense.
Can someone help or give me advice on how to implement profits into the TSP with OR-Tools?
Adding a start or end node to a disjunction makes no sense.
AddDisjunction must be applied to each node (or pair of nodes) EXCEPT the start and end nodes.
for node, revenue in data["revenue"].items():
    pickup, delivery = node
    pickup_index = manager.NodeToIndex(pickup)
    delivery_index = manager.NodeToIndex(delivery)
    routing.AddDisjunction(
        [pickup_index, delivery_index], revenue, 2  # 2 = max cardinality of the disjunction
    )
Are the keys of your dict really pairs of nodes? They look like coordinates to me...
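For reference, here is a minimal prize-collecting sketch of the pattern this answer describes: every node except the depot gets its own disjunction whose penalty equals the profit lost by skipping it, so the solver trades travel cost against missed revenue. The distance matrix and profit values below are made-up example data.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Made-up example data: symmetric distances and a profit per non-depot node.
distance = [[0, 10, 15, 20],
            [10, 0, 35, 25],
            [15, 35, 0, 30],
            [20, 25, 30, 0]]
profit = {1: 100, 2: 40, 3: 60}  # node 0 is the depot

manager = pywrapcp.RoutingIndexManager(len(distance), 1, 0)
routing = pywrapcp.RoutingModel(manager)

def distance_callback(from_index, to_index):
    return distance[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit = routing.RegisterTransitCallback(distance_callback)
routing.SetArcCostEvaluatorOfAllVehicles(transit)

# Each non-depot node may be dropped, at a penalty equal to its profit.
for node, p in profit.items():
    routing.AddDisjunction([manager.NodeToIndex(node)], p)

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
solution = routing.SolveWithParameters(params)

if solution:
    index, route = routing.Start(0), []
    while not routing.IsEnd(index):
        route.append(manager.IndexToNode(index))
        index = solution.Value(routing.NextVar(index))
    print(route + [manager.IndexToNode(index)])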

groupByKey in order in PySpark

rrr = sc.parallelize([1, 2, 3])
fff = sc.parallelize([5, 6, 7, 8])
test = rrr.cartesian(fff)
Here's test:
[(1, 5),(1, 6),(1, 7),(1, 8),
(2, 5),(2, 6),(2, 7),(2, 8),
(3, 5),(3, 6),(3, 7),(3, 8)]
Is there a way to preserve the order after calling groupByKey?
test.groupByKey().mapValues(list).collect()
The output is this, where the lists are in arbitrary order:
Out[255]: [(1, [8, 5, 6, 7]), (2, [5, 8, 6, 7]), (3, [6, 8, 7, 5])]
The desired output is:
[(1, [5,6,7,8]), (2, [5,6,7,8]), (3, [5,6,7,8])]
How to achieve this?
You can add one more mapValues to sort the lists:
result = test.groupByKey().mapValues(list).mapValues(sorted)
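groupByKey gives no ordering guarantee, since the values for a key are gathered from multiple partitions, so sorting afterwards is the reliable fix. With that extra step, the collected result matches the desired output:
result.collect()
# [(1, [5, 6, 7, 8]), (2, [5, 6, 7, 8]), (3, [5, 6, 7, 8])]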

How to store each element in a dictionary and count dictionary values with PySpark?

I want to count how many times each element occurs. I tried this code:
from collections import defaultdict

def f_items(data, steps=0):
    items = defaultdict(int)
    for element in data:
        items[element] += 1  # defaultdict makes a separate membership check unnecessary
    return items.items()
data = [[1, 2, 3, 'E'], [1, 2, 3, 'E'], [5, 2, 7, 112, 'A'] ]
rdd = sc.parallelize(data)
items = rdd.flatMap(lambda data: [y for y in f_items(data)], True)
print (items.collect())
The output of this code is shown below:
[(1, 1), (2, 1), (3, 1), ('E', 1), (1, 1), (2, 1), (3, 1), ('E', 1), (5, 1), (2, 1), (7, 1), (112, 1), ('A', 1)]
But it should show the following result:
[(1, 2), (2, 3), (3, 3), ('E', 2), (5, 1), (7, 1), (112, 1), ('A', 1)]
How to achieve this?
Your last step should be a reduceByKey call on the items RDD.
final_items = items.reduceByKey(lambda x,y: x+y)
print (final_items.collect())
You can look into this link to see some examples of reduceByKey in Scala, Java and Python.
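For comparison, here is a sketch of the same count without the helper function, using the usual word-count pattern on the rdd defined in the question:
counts = (rdd
          .flatMap(lambda row: row)           # flatten all rows into one stream of elements
          .map(lambda element: (element, 1))  # pair each element with a count of 1
          .reduceByKey(lambda x, y: x + y))   # sum the counts per distinct element
print(counts.collect())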

Merging time-distributed tensors gives 'inbound_nodes' error

In my network I have some time-distributed convolutions. The batch is 1 image, which breaks down into 32 sub-images; for each sub-image there are 3 features of dimension 6x6x256. I need to merge the 3 features corresponding to a particular image.
The tensor definitions look like this:
out1 = TimeDistributed(Convolution2D(256, (3, 3), strides=(2, 2), padding='same', activation='relu'))(out1)
out2 = TimeDistributed(Convolution2D(256, (3, 3), strides = (2,2), padding='same', activation='relu'))(out2)
out3 = TimeDistributed(Convolution2D(256, (1, 1), padding='same', activation='relu'))(out3)
out1: <tf.Tensor 'time_distributed_3/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out2: <tf.Tensor 'time_distributed_5/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out3: <tf.Tensor 'time_distributed_6/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
I have tried different merging techniques, such as TimeDistributed(merge(... )), but nothing works. For example:
out = Lambda(lambda x:merge([x[0],x[1],x[2]],mode='concat'))([out1,out2,out3])
This gives a tensor with the correct dimensions, (1, 32, 6, 6, 768), which then passes through some Flatten and Dense layers. But when I build the model with
model = Model( .... , .... ), I get the error:
File "/home/adityav/.virtualenvs/cv/local/lib/python2.7/site-packages/keras/engine/topology.py", line 1664, in build_map_of_graph
next_node = layer.inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute 'inbound_nodes'
Any idea how to do this time-distributed concatenation when the tensors are 5-dimensional?
Thanks
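A minimal sketch of one way around this, assuming Keras 2: use the Concatenate layer instead of wrapping the old merge function in a Lambda. A Lambda-wrapped merge returns a tensor without Keras layer metadata, which is likely what makes build_map_of_graph hit a None layer, while Concatenate works directly on 5-D tensors along the channel axis and keeps the graph intact. The shapes below are simplified so the three branches line up.
from keras.layers import Input, TimeDistributed, Convolution2D, Concatenate, Flatten, Dense
from keras.models import Model

# Placeholder input: a single-image batch of 32 sub-images, each 6x6x256.
inp = Input(batch_shape=(1, 32, 6, 6, 256))

# Three branches with matching output shapes (1, 32, 6, 6, 256).
out1 = TimeDistributed(Convolution2D(256, (3, 3), padding='same', activation='relu'))(inp)
out2 = TimeDistributed(Convolution2D(256, (3, 3), padding='same', activation='relu'))(inp)
out3 = TimeDistributed(Convolution2D(256, (1, 1), padding='same', activation='relu'))(inp)

# Concatenate along the channel axis: (1, 32, 6, 6, 768).
merged = Concatenate(axis=-1)([out1, out2, out3])

flat = TimeDistributed(Flatten())(merged)
out = Dense(10, activation='softmax')(flat)

model = Model(inputs=inp, outputs=out)
model.summary()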