I already have k-means output and I have segmented my users accordingly. Now I have to predict the cluster number for new users whenever they arrive. Do I have to run k-means each time a new user comes into the picture, or is there an easier way, like using a predict function? Please let me know if there is a ready example for this. Thanks,
Sagar
Each cluster has a corresponding centroid. When a new data point arrives, you can simply assign it to the cluster whose centroid is closest to that point.
Alternatively, you can run one more iteration of the k-means algorithm starting from the current centroids; you don't have to run it from scratch.
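If you are working in Python, scikit-learn's KMeans already exposes exactly the predict function you are asking about; here is a minimal sketch (the data, the feature count, and the number of clusters are made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Fit once on the existing users (5 made-up features per user).
X = np.random.rand(1000, 5)
kmeans = KMeans(n_clusters=4, random_state=0).fit(X)

# For each new user, predict() simply assigns the nearest centroid;
# no re-clustering takes place.
new_users = np.random.rand(3, 5)
labels = kmeans.predict(new_users)

# Equivalent manual assignment: argmin over distances to the centroids.
dists = np.linalg.norm(
    new_users[:, None, :] - kmeans.cluster_centers_[None, :, :], axis=2)
assert (dists.argmin(axis=1) == labels).all()
```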
Perhaps more importantly: is there a bound on the number of new users that can enter? If enough new users may arrive to change the clusters (so that what you have computed is no longer valid), you might want to consider data stream clustering.
I am using the http://networkrepository.com/socfb-B-anon.php dataset for my analysis. I would like to do some analysis of how this graph was formed from scratch. Is there an existing social network simulation framework for this kind of problem?
I am also open to using any other dataset if one is available. I would need a timestamp for every edge (the time at which the two nodes connected).
The Barabási–Albert (BA) model is a preferential attachment model for generating networks, or graphs. It iteratively builds a graph by adding new nodes and connecting them to previously added nodes; each new node attaches to an existing node with probability proportional to that node's degree relative to the total degree in the graph.
This algorithm has been shown to produce graphs that are scale-free, meaning the degree distribution follows a power law, which is a typical property of social networks.
This can be seen as a 'simulation' of a growing social network in which users are more likely to 'befriend' or 'follow' popular existing users. Of course, it is not a complete simulation, because it assumes a new user is done befriending or following other users right after creating an account, but it might be a good starting point for your exploration.
A timestamp for each edge or node creation can be generated by maintaining a counter during the creation process and incrementing it as you add more edges or nodes to the graph, as in the sketch below.
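For example, here is a minimal Python sketch of BA-style growth that records a timestamp for every edge (the parameters and the degree-proportional sampling scheme follow the common implementation, similar to the one in networkx, but treat it as a starting point):

```python
import random

def ba_with_timestamps(n, m, seed=None):
    """Grow a Barabasi-Albert graph, recording a timestamp per edge.

    n: total number of nodes; m: edges added per new node (m < n).
    Returns a list of (u, v, t) tuples.
    """
    rng = random.Random(seed)
    edges = []                    # (new_node, target, timestamp)
    targets = list(range(m))      # nodes the next new node attaches to
    repeated = []                 # each node appears once per incident edge
    t = 0
    for new_node in range(m, n):
        for target in targets:
            edges.append((new_node, target, t))
            t += 1
        # Preferential attachment: higher-degree nodes appear more often
        # in 'repeated', so they are sampled proportionally to degree.
        repeated.extend(targets)
        repeated.extend([new_node] * m)
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return edges

edges = ba_with_timestamps(1000, 3, seed=42)
print(edges[:5])
```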
Hopefully this answer gives you enough terminology to further your research.
I am simulating the lid-driven cavity case, and I am trying to get all the streamlines with ParaView's Stream Tracer, but I only get the ones that intersect the seed line; because of that, some vortices are not visible. How can I see all the streamlines in the domain?
Thanks a lot in advance.
To add a little bit to Mathieu's answer, if you really want streamlines everywhere, then you can create a Stream Tracer With Custom Source (as Mathieu suggested) and set your data to both the Input and the Seed Source. That will create a streamline originating from every point in your dataset, which is pretty much what you asked for.
However, while you can do this, you will probably not be happy with the results. First of all, unless your data is trivially small, this will take a long time to compute and create a large amount of data. Even worse, the result will be so dense that you won't be able to see anything. You will get all those interesting streamlines through vortices, but they will be completely hidden by all the boring streamlines around them.
Thus, you are better off trying to derive a data set containing seed points that are likely to trace streamlines through the vortices you are interested in. One thing you might want to try is to compute the vorticity of your vector field (Gradient Of Unstructured Data Set with the advanced option Compute Vorticity turned on), take its magnitude (Calculator), and then use the Threshold filter to pull out the cells with large vorticity. Then use that as your Seed Source.
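If you script ParaView, that pipeline can be expressed in pvpython roughly as follows. Note that filter and property names vary between ParaView versions (this assumes a 5.x-era API), and the file name, array name, and threshold value are assumptions, so treat this as a sketch rather than a drop-in script:

```python
# pvpython sketch of the vorticity-threshold seeding described above.
from paraview.simple import *

reader = OpenDataFile('cavity.vtu')      # hypothetical file name

grad = GradientOfUnstructuredDataSet(Input=reader)
grad.ScalarArray = ['POINTS', 'velocity']  # assumed velocity array name
grad.ComputeVorticity = 1                  # the advanced option above

mag = Calculator(Input=grad)
mag.ResultArrayName = 'vortmag'
mag.Function = 'mag(Vorticity)'

seeds = Threshold(Input=mag)
seeds.Scalars = ['POINTS', 'vortmag']
seeds.ThresholdRange = [10.0, 1e30]        # keep only high-vorticity cells

tracer = StreamTracerWithCustomSource(Input=reader, SeedSource=seeds)
Show(tracer)
Render()
```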
Another (probably better) option if your data is 2D or you can extract an interesting surface along the flow of your data is to use the Surface LIC plugin. Details can be found at https://www.paraview.org/Wiki/ParaView/Line_Integral_Convolution.
You have to choose a representative seed source for your streamlines.
You could, for example, use a "Sphere" source in the Stream Tracer properties.
If that fails, you can use a Stream Tracer With Custom Source and supply your own seed source, which you will have to create yourself first.
I have the following graph with two different parameters called p and t.
Their relationship was found experimentally. Manually, knowing (t, p), you can simply find the area number (group) of a point based on where it is located. For example, the point M(t, p) is located in area 3 and therefore belongs to group number 3. However, I would like to write code (a logical approach) that finds the group numbers automatically, so that when it reads (t, p), it finds the location of the point and returns the group/area number it belongs to.
Is there a solution in MATLAB for this purpose? [Graph]
If you have the Image Processing Toolbox and your contours are closed, you can use imfill to fill them (a bit like the bucket tool in Paint) and assign a different value to each filled region. Does this make sense to you? Let me know if you would like more detail.
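For what it's worth, the same idea can be sketched in Python as well; here is a rough analog using scipy.ndimage.label, which assigns a distinct integer to each connected region between the contours (the file, array, and axis names are made up, and the region integers would still need to be mapped once to your area numbers):

```python
# Python analog of the imfill idea, assuming the graph is available as a
# binary image where contour pixels are True.
import numpy as np
from scipy import ndimage

contours = np.load('contours.npy')   # hypothetical: True on curve pixels

# Label the connected regions *between* the contours; each area gets its
# own integer, similar to filling each region with a different value.
regions, n_regions = ndimage.label(~contours)

def group_of(t, p, t_axis, p_axis):
    """Map a (t, p) point to pixel indices and return its region label.

    Assumes t_axis and p_axis are ascending, aligned with the image
    grid, and that the point lies inside the plotted range.
    """
    col = np.searchsorted(t_axis, t)
    row = np.searchsorted(p_axis, p)
    return regions[row, col]
```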
Marta
I'm trying to make a neural network figure out the meaning of inputs (keyboard keys, in this case) according to the user.
I have multiple possible output "commands" that the NN can interpret the inputs to mean, and at each state certain outputs count as beneficial while others are a detriment.
When the NN starts up for the first time, no input should have any particular meaning to it, but as time goes on I want the NN to figure out what the user most likely meant.
I've tried a multilayer perceptron with as many input nodes as there are physical inputs, as many output nodes as there are commands, and a single hidden layer whose size is the sum of the other two layers; in this case that gives a 5-15-10 network.
The NN assumes that the user will only make moves that are in the NN's best interest.
So far it seems the NN is just figuring out which command is most likely to result in a beneficial move, regardless of the input key, rather than figuring out which key should map to which move according to the user.
Because of this I'm wondering (most likely wrongly) whether I should build a separate NN for each input to try to figure out the intended output according to the user.
Is there a different type of NN I should look into that will work better, and is there a recommended configuration for this problem?
I'll be happy with some recommendations of reading material that would help in this particular problem.
I'm at best an amateur with NNs and would like to learn a lot more about the whole field, but I'm trying to focus my efforts on this problem for now.
It sounds like you want the output to depend on the player's behaviour, since the number of possible input meanings is larger than in the standard case. In my view, there should be some kind of memory of the actions taken by the player, in order to find patterns in them. This can be done using a Long Short-Term Memory (LSTM) network.
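As a rough illustration of that suggestion, here is a minimal Keras sketch that maps a window of recent key presses to a command. The layer sizes come from the question (5 keys, 10 commands); the window length and hidden size are assumptions:

```python
# Window of recent key presses (one-hot over 5 keys) -> one of 10 commands.
import tensorflow as tf

n_keys, n_commands, window = 5, 10, 20

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_keys)),
    tf.keras.layers.LSTM(32),          # memory over the recent actions
    tf.keras.layers.Dense(n_commands, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(X, y, ...) where X has shape (samples, window, n_keys)
```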
Here's a theoretical question. Let's assume that I have implemented two types of collaborative filtering: user-based CF and item-based CF (in the form of Slope One).
I have a nice data set for these algorithms to run on. But then I want to do two things:
I'd like to add a new rating to the data set.
I'd like to edit an existing rating.
How should my algorithms handle these changes (without doing a lot of unnecessary work)? Can anyone help me with that?
For both cases, the strategy is very similar:
user-based CF:
update all similarities for the affected user (that is, one row and one column of the similarity matrix; see the sketch after these notes)
if your neighborhoods are precomputed, recompute the neighbors for the affected user (for a complete update, you may have to recompute all users' neighborhoods, but I would stick with the approximate solution)
Slope-One:
update the frequency (only in the 'add' case) and the diff matrix entries for the affected item (again, one row and one column)
Remark: If your 'similarity' is asymmetric, you need to update one row and one column. If it is symmetric, updating one row automatically results in updating the corresponding column.
For Slope-One, the matrices are symmetric (frequency) and skew-symmetric (diffs), so here as well you only need to update one row or column and you get the other one for free (if your matrix storage works like this).
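To make the row/column update concrete, here is a sketch for the user-based case with a symmetric cosine similarity. Dense numpy matrices are used for brevity; a real system would use sparse structures and restrict to co-rated items:

```python
# R: users x items rating matrix with 0 for missing ratings.
# S: users x users symmetric similarity matrix, updated in place.
import numpy as np

def update_user_similarities(R, S, u):
    """Recompute user u's similarities after u's ratings changed,
    touching only row u and column u of S."""
    norms = np.linalg.norm(R, axis=1)
    norms[norms == 0] = 1.0            # avoid division by zero
    sims = (R @ R[u]) / (norms * norms[u])
    S[u, :] = sims
    S[:, u] = sims                     # symmetric: the column comes free
    S[u, u] = 1.0
```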
If you want to see an example of how this could be implemented, have a look at MyMediaLite (disclaimer: I am the main author): https://github.com/zenogantner/MyMediaLite/blob/master/src/MyMediaLite/RatingPrediction/ItemKNN.cs
The interesting code is in the method RetrainItem(), which is called from AddRatings() and UpdateRatings().
The general term for this is online algorithms.
Instead of retraining the whole predictor, it can be updated "online" (while remaining usable) with only the new data.
If you google "online slope one predictor" you should be able to find some relevant approaches in the literature.
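As an illustration of what such an online Slope One update could look like, here is a sketch for the 'add' case; an edit would first remove the old rating's contribution in the same way. The matrix layout follows the answer above (freq symmetric, dev skew-symmetric):

```python
# freq, dev: items x items numpy matrices (pair counts, average diffs).
# user_ratings: dict item -> rating with the affected user's existing
# ratings, before the new rating of item i with value r is added.
import numpy as np

def add_rating(freq, dev, user_ratings, i, r):
    """Fold one new rating into freq and dev, touching only row/col i."""
    for j, r_j in user_ratings.items():
        if j == i:
            continue
        n = freq[i, j]
        new_dev = (dev[i, j] * n + (r - r_j)) / (n + 1)
        freq[i, j] = freq[j, i] = n + 1   # frequency matrix is symmetric
        dev[i, j] = new_dev
        dev[j, i] = -new_dev              # diff matrix is skew-symmetric
    user_ratings[i] = r
```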