How To Use kmedoids from pyclustering with set number of clusters

I am trying to use k-medoids to cluster some trajectory data I am working with (multiple points along the trajectory of an aircraft). I want to cluster these into a set number of clusters (as I know how many types of paths there should be).
I have found that k-medoids is implemented inside the pyclustering package, and am trying to use that. I am technically able to get it to cluster, but I do not know how to control the number of clusters. I originally thought it was directly tied to the number of elements inside what I called initial_medoids, but experimentation shows that it is more complicated than this. My relevant code snippet is below.
Note that traj_lst holds a list of lists; each inner list corresponds to a single trajectory.
from scipy.spatial.distance import directed_hausdorff
import numpy as np

def hausdorff(u, v):
    d = max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])
    return d

traj_count = len(traj_lst)
D = np.zeros((traj_count, traj_count))
for i in range(traj_count):
    for j in range(i + 1, traj_count):
        distance = hausdorff(traj_lst[i], traj_lst[j])
        D[i, j] = distance
        D[j, i] = distance
from pyclustering.cluster.kmedoids import kmedoids
initial_medoids = [104, 345, 123, 1]
kmedoids_instance = kmedoids(traj_lst, initial_medoids)
kmedoids_instance.process()
cluster_lst = kmedoids_instance.get_clusters()[0]
num_clusters = len(np.unique(cluster_lst))
print('There were %i clusters found' %num_clusters)
I have a total of 1900 trajectories, and the above code finds 1424 clusters. I had expected that I could control the number of clusters through the length of initial_medoids, as I did not see any option to input the number of clusters into the program, but this seems unrelated. Could anyone guide me as to the mistake I am making? How do I choose the number of clusters?

To obtain the clusters you need to call get_clusters():
cluster_lst = kmedoids_instance.get_clusters()
not get_clusters()[0], which is just the list of object indices in the first cluster:
cluster_lst = kmedoids_instance.get_clusters()[0]
And that is correct: you can control the number of clusters through initial_medoids.
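For example, since you already computed the pairwise Hausdorff matrix D, a minimal sketch might look like this (it assumes your pyclustering version supports the data_type keyword for precomputed distance matrices; check the docs of the version you have installed):

from pyclustering.cluster.kmedoids import kmedoids

# Four initial medoids -> four clusters.
initial_medoids = [104, 345, 123, 1]
kmedoids_instance = kmedoids(D, initial_medoids, data_type='distance_matrix')
kmedoids_instance.process()
clusters = kmedoids_instance.get_clusters()  # one index list per cluster
print('There were %i clusters found' % len(clusters))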

It is true that you can control the number of clusters; it corresponds to the length of initial_medoids.
The documentation is not clear about this. The get_clusters function "Returns list of medoids of allocated clusters represented by indexes from the input data", so this function does not return cluster labels. It returns the indices of rows in your original (input) data.
Please check the shape of cluster_lst in your example, using .get_clusters() and not .get_clusters()[0] as annoviko suggested. In your case, this shape should be (4,). So you have a list of four elements (clusters), each containing the indices of rows in your original data.
To get, for example, data from the first cluster, use:
kmedoids_instance = kmedoids(traj_lst, initial_medoids)
kmedoids_instance.process()
cluster_lst = kmedoids_instance.get_clusters()
# traj_lst is a plain list, so pick the member trajectories one by one:
traj_lst_first_cluster = [traj_lst[i] for i in cluster_lst[0]]
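If you also want a flat label per trajectory, a small sketch that builds one from the index lists (labels is a hypothetical helper array, not part of pyclustering):

import numpy as np

# labels[i] = id of the cluster that trajectory i was assigned to
labels = np.empty(len(traj_lst), dtype=int)
for cluster_id, member_indices in enumerate(cluster_lst):
    labels[member_indices] = cluster_id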


Testing a tensorflow network: in_top_k() replacement for multilabel classification

I've created a neural network in tensorflow. This network is multilabel. Ergo: it tries to predict multiple output labels for one input set, in this case three. Currently I use this code to test how accurate my network is at predicting the three labels:
_, indices_1 = tf.nn.top_k(prediction, 3)
_, indices_2 = tf.nn.top_k(item_data, 3)
correct = tf.equal(indices_1, indices_2)
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
percentage = accuracy.eval({champion_data:input_data, item_data:output_data})
That code works fine. The problem is that I'm now trying to write code that tests whether the top 3 items it finds in indices_1 are among the top 5 entries in indices_2. I know tensorflow has an in_top_k() method, but as far as I know that doesn't accept multilabel. Currently I've been trying to compare them using a for loop:
_, indices_1 = tf.nn.top_k(prediction, 5)
_, indices_2 = tf.nn.top_k(item_data, 3)
indices_1 = tf.unpack(tf.transpose(indices_1, (1, 0)))
indices_2 = tf.unpack(tf.transpose(indices_2, (1, 0)))
correct = []
for element in indices_1:
    for element_2 in indices_2:
        if element == element_2:
            correct.append(True)
        else:
            correct.append(False)
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
percentage = accuracy.eval({champion_data:input_data, item_data:output_data})
However, that doesn't work. The code runs but my accuracy is always 0.0.
So I have one of two questions:
1) Is there an easy replacement for in_top_k() that accepts multilabel classification that I can use instead of custom writing code?
2) If not 1: what am I doing wrong that results in me getting an accuracy of 0.0?
When you do
correct = tf.equal(indices_1, indices_2)
you are checking not just whether those two indices contain the same elements but whether they contain the same elements in the same positions. This doesn't sound like what you want.
The setdiff1d op will tell you which indices are in indices_1 but not in indices_2, which you can then use to count errors.
I think being too strict with the correctness check might be what is causing you to get a wrong result.
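As a rough sketch of that idea in NumPy on the evaluated index arrays (tf.setdiff1d only handles 1-D tensors, so it is often easier to run the check outside the graph; top_pred and top_true are hypothetical names for the arrays you get by evaluating the two tf.nn.top_k index tensors):

import numpy as np

def containment_accuracy(top_pred, top_true):
    """Fraction of predicted indices that appear anywhere in the true
    indices of the same example, ignoring position (unlike tf.equal)."""
    hits = 0
    for pred_row, true_row in zip(top_pred, top_true):
        # setdiff1d gives the predicted indices missing from the true set
        missing = np.setdiff1d(pred_row, true_row)
        hits += len(pred_row) - len(missing)
    return hits / float(top_pred.size)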

Most efficient way to store dictionaries in Matlab

I have a set of IDs, each associated with a cost, which is just a double value. IDs are integers and unique; two IDs may have the same cost. I stored them as:
a=containers.Map('KeyType','uint32','ValueType','double');
a(1)=7.3
a(2)=8.4
a(3)=7.3
Now I want to find the minimum cost.
b=[];
c=values(a);
b=[b,c{:}];
cost_min=min(b);
Now I want to find all IDs associated with the minimum cost (7.3), i.e. 1 and 3. I can collect all the keys into an array and then do a for loop over this array. Is there a better way to do this entire thing in Matlab so that for loops are not required?
A sparse matrix can work as a hashmap; just do this:
a = sparse(1:3,1,[7.3 8.4 7.3])
find(a == min(nonzeros(a)))
There are methods that can be used on maps for this kind of operation:
http://se.mathworks.com/help/matlab/ref/containers.map-class.html
Finding the minimum value and its keys can be done something like this:
a = containers.Map('KeyType','uint32','ValueType','double');
a(1) = 7.3;
a(3) = 8.4;
a(4) = 7.3;
minval = inf;
minkeys = -1;
for k = keys(a)
    val = a.values(k);
    val = val{1};
    if (val < minval(1))
        minkeys = k;
        minval = val;
    elseif (val == minval(1))
        minkeys = [minkeys, k];
    end
end
disp(minval);
disp(minkeys);
This is not efficient, though, and value search is clumsy for maps; it is not what they are intended for. A map is meant to do efficient key lookup. If you are going to do a lot of key lookups and that is what takes time, then use a map. If you need to do a lot of value searches, I would recommend using a matrix (or two arrays) instead:
idx = [1;3;4];
val = [7.3,8.4,7.3];
minval = min(val);
minidx = idx(val==minval);
disp(minval);
disp(minidx);
There is also another post with an example showing how a sparse matrix can be used as a hashmap, letting the index serve as the key. This will take about 3 times the memory of an ordinary array holding all the non-zero elements, but a map uses more memory than an array as well.

Is there a range data structure in Scala?

I'm looking for a way to handle ranges in Scala.
What I need to do is:
given a set of ranges and a range A, return the ranges B whose intersection with A is not empty
given a set of ranges and a range A, remove/add A from/to the set of ranges
given ranges A and B, create a range C = [min(A,B), max(A,B)]
I saw something similar in java - http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/RangeSet.html
However, subRangeSet returns only the intersected values, not the range(s) in the set that the query range intersects with.
RangeSet rangeSet = TreeRangeSet.create();
rangeSet.add(Range.closed(0, 10));
rangeSet.add(Range.closed(30, 40));
Range range = Range.closed(12, 32);
System.out.println(rangeSet.subRangeSet(range)); //[30,32] (I need [30,40])
System.out.println(range.span(Range.closed(30, 40))); //[12,40]
There is an Interval[A] type in the spire math library. This allows working with ranges of arbitrary types that define an Order. Boundaries can be inclusive, exclusive or omitted. So e.g. (-∞, 0.0] or [0.0, 1.0) would be possible intervals of doubles.
Here is a library intervalset for working with sets of non-overlapping intervals (IntervalSeq or IntervalTrie) as well as maps of intervals to arbitrary values (IntervalMap).
Here is a related question that describes how to use IntervalSeq with DateTime.
Note that if the type you want to use is 64bit or less (basically any primitive), IntervalTrie is extremely fast. See the Benchmarks.
As Tzach Zohar has mentioned in the comments, if all you need is ranges of Int, go for scala.collection.immutable.Range:
val rangeSet = Set(0 to 10, 30 to 40)
val r = 12 to 32
rangeSet.filter(range => range.start <= r.end && r.start <= range.end) // true overlap test; also catches ranges lying entirely inside r
If you need it for another underlying type, implement it yourself; it's easy for your use case.

I'm having trouble shuffling a deck of cards in Matlab. Need help to see where I went wrong

I currently have the deck of cards coded, but it is unshuffled. This is for programming the card game of War if it helps. I need to shuffle the deck, but whenever I do, it will only shuffle together the card numbers and the suits, not the full card. For example, I have A identified as an ace and the suits come after each number. A normal card would be "AH" (an ace of hearts) or "6D" (a six of diamonds). Instead, it will output "5A" as one of the cards, as in a 5 of aces. I don't know how to fix this, but the code that I currently have is this:
card_nums = ('A23456789TJQK')';
card_suits = ('HDSC')';
unshuffled_deck = [repmat(card_nums,4,1),repmat(card_suits,13,1)];
disp(unshuffled_deck)
shuffled_deck = unshuffled_deck(randperm(numel(unshuffled_deck)));
disp(shuffled_deck)
I would appreciate any help with this, and thank you very much for your time!
You're creating a random permutation of all of the elements from both columns of unshuffled_deck combined. Instead you need to create a random permutation of the rows of unshuffled_deck:
shuffled_deck = unshuffled_deck(randperm(size(unshuffled_deck,1)),:);
The call to size gives you the number of rows in the deck array, then we get a random permutation of the row indices, and copy the row (value, suit) as a single entity.
Here's a version using a structure array, in response to @Carl Witthoft's comment. I was afraid it would add too much complexity to the solution, but it really isn't bad:
card_nums = ('A23456789TJQK')';
card_suits = ('HDSC')';
deck_nums = repmat(card_nums,4,1);
deck_suits = repmat(card_suits,13,1);
cell_nums = cellstr(deck_nums).'; %// Change strings to cell arrays...
cell_suits = cellstr(deck_suits).'; %// so we can use them in struct
%// Construct a struct array with fields 'value' and 'suit'
unshuffled_deck = struct('value',cell_nums,'suit',cell_suits);
disp('unshuffled deck:');
disp([unshuffled_deck.value;unshuffled_deck.suit]);
%// Shuffle the deck using the number of elements in the structure array
shuffled_deck = unshuffled_deck(randperm(numel(unshuffled_deck)));
disp('shuffled deck:');
disp([shuffled_deck.value; shuffled_deck.suit]);
Here's a test run:
unshuffled deck:
A23456789TJQKA23456789TJQKA23456789TJQKA23456789TJQK
HDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSC
shuffled deck:
4976TT93KTJQJATK953A75QA82Q6226K5J784J4A3372486K859Q
CHSSSHCDSCSSHDDCDSHHCDHSDDCDHCCHHCHHHDDCSCDSSCHDSCSD
To access an individual card, you can do:
>> shuffled_deck(2)
ans =
scalar structure containing the fields:
value = 9
suit = H
Or you can access the individual fields:
>> shuffled_deck(2).value
ans = 9
>> shuffled_deck(2).suit
ans = H
Unfortunately, I don't know of any way to simply index the struct array and get, for instance, 9H as you would in a regular array using disp(shuffled_deck(2,:)). In this case, the only option I know of is to explicitly concatenate each field:
disp([shuffled_deck(2).value,shuffled_deck(2).suit]);

networkx: efficiently find absolute longest path in digraph

I want networkx to find the absolute longest path in my directed, acyclic graph.
I know about Bellman-Ford, so I negated my graph lengths. The problem: networkx's bellman_ford() requires a source node. I want to find the absolute longest path (or the shortest path after negation), not the longest path from a given node.
Of course, I could run bellman_ford() on each node in the graph and sort, but is there a more efficient method?
From what I've read (e.g., http://en.wikipedia.org/wiki/Longest_path_problem) I realize there actually may not be a more efficient method, but was wondering if anyone had any ideas (and/or had proved P=NP (grin)).
EDIT: all the edge lengths in my graph are +1 (or -1 after negation), so a method that simply visits the most nodes would also work. In general, it won't be possible to visit ALL nodes of course.
EDIT: OK, I just realized I could add an additional node that simply connects to every other node in the graph, and then run bellman_ford from that node. Any other suggestions?
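A minimal sketch of that virtual-source idea, with API names as in networkx 2.x (adjust for your version; 'src' is just a hypothetical extra node):

import networkx as nx

G = nx.DiGraph()
nx.add_path(G, [1, 2, 3, 4])
nx.add_path(G, [1, 20, 30, 31, 32, 4])

H = G.copy()
nx.set_edge_attributes(H, -1, 'weight')  # negate the unit edge lengths
H.add_edges_from(('src', n, {'weight': 0}) for n in G.nodes)  # virtual source
dist, paths = nx.single_source_bellman_ford(H, 'src')
end = min(dist, key=dist.get)  # most negative distance = longest path
print(paths[end][1:])  # drop the virtual source node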
There is a linear-time algorithm mentioned at http://en.wikipedia.org/wiki/Longest_path_problem
Here is a (very lightly tested) implementation
EDIT: this is clearly wrong, see below. +1 for testing more than lightly before posting in the future
import networkx as nx

def longest_path(G):
    dist = {}  # stores [node, distance] pair
    for node in nx.topological_sort(G):
        pairs = [[dist[v][0] + 1, v] for v in G.pred[node]]  # incoming pairs
        if pairs:
            dist[node] = max(pairs)
        else:
            dist[node] = (0, node)
    node, max_dist = max(dist.items())
    path = [node]
    while node in dist:
        node, length = dist[node]
        path.append(node)
    return list(reversed(path))

if __name__ == '__main__':
    G = nx.DiGraph()
    G.add_path([1, 2, 3, 4])
    print longest_path(G)
EDIT: Corrected version (use at your own risk and please report bugs)
def longest_path(G):
    dist = {}  # stores (distance, predecessor) pairs
    for node in nx.topological_sort(G):
        # pairs of (dist, node) for all incoming edges
        pairs = [(dist[v][0] + 1, v) for v in G.pred[node]]
        if pairs:
            dist[node] = max(pairs)
        else:
            dist[node] = (0, node)
    node, (length, _) = max(dist.items(), key=lambda x: x[1])
    path = []
    while length > 0:
        path.append(node)
        length, node = dist[node]
    return list(reversed(path))

if __name__ == '__main__':
    G = nx.DiGraph()
    G.add_path([1, 2, 3, 4])
    G.add_path([1, 20, 30, 31, 32, 4])
    # G.add_path([20, 2, 200, 31])
    print longest_path(G)
Aric's revised answer is a good one, and I found that it has been adopted by the networkx library.
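For reference, a minimal usage sketch of the built-in version (function name as in networkx 2.x; check the docs for your version):

import networkx as nx

G = nx.DiGraph()
nx.add_path(G, [1, 2, 3, 4])
nx.add_path(G, [1, 20, 30, 31, 32, 4])
print(nx.dag_longest_path(G))  # [1, 20, 30, 31, 32, 4]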
However, I found a little flaw in this method.
if pairs:
    dist[node] = max(pairs)
else:
    dist[node] = (0, node)
because pairs is a list of (int, nodetype) tuples. When comparing tuples, Python compares the first elements and, if they are equal, goes on to compare the second elements, which are of nodetype. In my case, however, the nodetype is a custom class whose comparison methods are not defined, so Python throws an error like 'TypeError: unorderable types: xxx() > xxx()'.
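A minimal illustration of the pitfall (Node here is a hypothetical stand-in for a custom node class with no ordering defined):

class Node:
    pass

pairs = [(1, Node()), (1, Node())]
max(pairs)  # TypeError: the tie on the int forces Python to compare Nodes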
As a possible improvement, the line
dist[node] = max(pairs)
can be replaced by
dist[node] = max(pairs,key=lambda x:x[0])
Sorry about the formatting, since it's my first time posting. I wish I could just post below Aric's answer as a comment, but the site forbids me to do so, stating I don't have enough reputation (fine...).