I'm trying to calculate the degree centrality of each node in a small-world network in NetLogo, using the network extension "nw". I calculated closeness, betweenness, and eigenvector centrality easily with this extension, but I don't know how to code degree centrality. Any help, please?
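For reference, here is the quantity I mean, sketched in Python with networkx (the Watts-Strogatz graph is just a stand-in for my small-world network):

    import networkx as nx

    # stand-in for my small-world network
    G = nx.watts_strogatz_graph(n=100, k=4, p=0.1)

    # a node's degree is simply its number of links
    degree = dict(G.degree())

    # networkx normalizes degree centrality by (n - 1)
    centrality = nx.degree_centrality(G)
    print(degree[0], centrality[0])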
I want to derive a simple model that can predict the current position of an object with respect to a target.
To be more specific, I have a head with 4 identical light sensors placed 90 degrees apart. There is a light source (an LED) emitting visible light. Since each sensor has an angular sensitivity profile (maximum at 90 degrees, decreasing as the angle of incidence of the light increases), the value received at each sensor is determined by the angle and distance of the head with respect to the target.
I measured the values at four sensors at various angles and distances.
Each sensor reads a maximum value of around 9.5 when the incoming light is weak (either the sensor is far from the target or faces away from it), and the value decreases as the sensor gets closer to the target or faces it more directly.
My inputs and outputs look like:

    [0.1234 0.0124 8.342 9.232] -> [angle, distance]

i.e., an example reading with the head placed next to and facing the light: four inputs from the sensors and two outputs, the angle and the distance.
What strategy can I use to derive an equation that predicts the angle and distance from the current incoming sensor values?
I was thinking of multivariate regression, but my output is not a single scalar (it is more of a vector), so I am not sure it will work.
Therefore, I am writing here to ask for some help.
Any help would be appreciated.
Thanks
Your idea about multivariate regression looks reasonable.
IMHO you need to train two models instead of one: the first will predict the angle, and the second will predict the distance.
Why would you want to combine these two models into one? That looks strange from the point of view of the optimization metric: when you build the angle model you minimize an error in radians, and when you build the distance model you minimize an error in meters. So what metric would you minimize in the single-model case?
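For example, a minimal two-model sketch in Python with scikit-learn (the arrays here are random stand-ins for your measurements, and StandardScaler plays the role of the zscore normalization mentioned below):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler, PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 9.5, size=(200, 4))     # stand-in for the 4 sensor readings
    y_angle = rng.uniform(-np.pi, np.pi, 200)  # stand-in for the measured angles
    y_dist = rng.uniform(0.1, 2.0, 200)        # stand-in for the measured distances

    # one model per target, each minimizing its own error in its own units
    angle_model = make_pipeline(StandardScaler(), PolynomialFeatures(2), LinearRegression())
    dist_model = make_pipeline(StandardScaler(), PolynomialFeatures(2), LinearRegression())
    angle_model.fit(X, y_angle)
    dist_model.fit(X, y_dist)

    reading = [[9.1, 8.7, 0.9, 1.3]]
    print(angle_model.predict(reading), dist_model.predict(reading))

The quadratic features are just one guess at the sensors' nonlinearity; surface fitting as in the links below is another way to get the same kind of mapping.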
I believe the following links will be useful for you:
https://www.mathworks.com/help/curvefit/surface-fitting.html
https://www.mathworks.com/help/matlab/math/example-curve-fitting-via-optimization.html
Note: in some cases data normalization (for example via zscore) greatly improves the fitting performance.
P.S. Also try asking at https://stats.stackexchange.com/
I used the Network Analyzer core app to get the basic parameters of an undirected network in Cytoscape. All the parameters are measured satisfactorily: the degrees, the centrality measures of each node, the diameter of the network, etc. However, the clustering coefficient of each node is given as 0.0, and the overall clustering coefficient of the network is calculated as 0.0. I am next going to compare my network with a random network, and the clustering coefficient is a key measure I would like to compare in order to show that my network is scale-free. What could be going wrong? There are 361 nodes and 695 edges in my network. Any ideas are appreciated.
Already answered on cytoscape-helpdesk, but for completeness, I've repeated it here....
Hi Rahul,
1) So, with 361 nodes and 695 edges, the average degree of your network is only about 3.9 (2 × 695 / 361). A network that sparse could certainly have a clustering coefficient of 0.0, since that measure depends on the extent to which a node's neighbors are connected to each other. Look for nodes that have well-connected neighbors and check the clustering coefficient of those nodes (see the sketch after this list).
2) First, understand that comparing your network with a single random network will not yield a p-value (or if it does, it's honestly worthless). You need to generate a distribution of random networks, then compare your network to that distribution to see whether yours falls outside it; the sketch below illustrates the idea. Take a look at Tosadori et al., 2016 for their discussion of using Network Randomizer with Cytoscape.
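To make 1) and 2) concrete, here's a rough sketch outside of Cytoscape, in Python with networkx (the gnm random graph is only a stand-in for your network, and a degree-preserving randomization would be a fairer null model than gnm):

    import networkx as nx
    import numpy as np

    # stand-in with the same node and edge counts as your network
    G = nx.gnm_random_graph(361, 695, seed=1)

    # 1) per-node clustering coefficients of the best-connected nodes
    cc = nx.clustering(G)
    top = sorted(cc, key=G.degree, reverse=True)[:5]
    print([(n, G.degree(n), round(cc[n], 3)) for n in top])

    # 2) compare against a *distribution* of random networks, not just one
    observed = nx.average_clustering(G)
    null = [nx.average_clustering(nx.gnm_random_graph(361, 695)) for _ in range(100)]
    print(observed, np.mean(null), np.std(null))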
-- scooter
The model I am currently developing is a graph/network-based model, and the diameter is required. Is it possible to calculate the diameter? It will be found using links. For example, a line graph of 5 nodes has a diameter of 4; this becomes more complex with random graphs.
This is a quote of the definition of diameter:

    The shortest distance between the two most distant nodes in the
    network. In other words, once the shortest path length from every node
    to all other nodes is calculated, the diameter is the longest of all
    the calculated path lengths.
I have tried to design this myself but have been unable to implement it. Any advice or examples would be appreciated.
Have a look at the nw network extension for NetLogo (see http://ccl.northwestern.edu/netlogo/docs/nw.html). Unfortunately, it doesn't offer the diameter as a built-in function, but you can calculate the distance between each pair of nodes and take the maximum.
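For illustration, here is that pairwise-distance idea sketched in Python with networkx (the path graph is a stand-in for your network; in NetLogo you would compute the per-pair distances with the extension's shortest-path primitives instead). Note that this only makes sense on a connected network:

    import networkx as nx

    # stand-in graph: a 5-node line, whose diameter should be 4
    G = nx.path_graph(5)

    # shortest path length from every node to every other node
    lengths = dict(nx.all_pairs_shortest_path_length(G))

    # the diameter is the longest of all those shortest paths
    diameter = max(max(d.values()) for d in lengths.values())
    print(diameter)  # -> 4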
I am reading about applications of clustering in human motion analysis. I started out with random numbers and applied the k-means clustering algorithm, but I want graphs that circle the clusters, as shown in the picture. Basically, the lines represent motion trajectories. I would appreciate ideas on how to obtain the motion trajectory of a person. The application is patient monitoring, where the trajectory will be used to detect abnormal behavior.
I will be using a Kinect and recording the motion trajectory based on skeleton tracking. I will record the 4 quaternion values of the Head, Shoulder, and Torso joints, plus the RGBD (red, green, blue, depth) value, which is combined into 1 value for each of these joints. So there are 4 × 3 + 3 = 15 time series, i.e., 15 variables. How do I convert them to represent the trajectories shown below, and how do I then apply clustering to the trajectories? The clusters will then allow classification.
Can somebody please show how to obtain a diagram similar to the one attached? And how do I fuse the 15 time series from each person into a single trajectory?
The picture illustrates the number of clusters that are generated for the time series. Thank you in advance.
K-means is a bad fit for trajectories.
It needs to be able to compute the mean (which is why it is called "k-means"), and having a stable, sensible mean is important. But how meaningful is the mean of a set of time series, even if you could define it (and even when the series aren't, e.g., of different lengths and different movement speeds)?
Try hierarchical clustering, and multivariate dynamic time warping.
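A rough sketch of that combination in Python (the toy 2-D trajectories stand in for your 15-dimensional Kinect series, and DTW is written out by hand to keep the sketch self-contained):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def dtw(a, b):
        # multivariate DTW distance between series a (n, d) and b (m, d)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # toy data: random-walk trajectories of different lengths
    rng = np.random.default_rng(0)
    series = [rng.standard_normal((int(rng.integers(20, 40)), 2)).cumsum(axis=0)
              for _ in range(10)]

    # pairwise DTW distances -> condensed matrix -> hierarchical clustering
    n = len(series)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw(series[i], series[j])

    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=3, criterion="maxclust")
    print(labels)

Because DTW aligns the series in time, trajectories of different lengths and speeds can still be compared, which is exactly what k-means cannot give you.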
I am new to neural networks in MATLAB. I want to create a neural network using a MATLAB simulation, for pattern recognition. I am running on a Windows XP platform.
For example, I have sets of waveforms of circular shape, from which I have extracted the poles. These poles will teach my neural network that the shape is circular, so that whenever I input another, slightly different circular waveform, the network is able to distinguish the shape.
Currently, I have extracted the poles of these 3 shapes: cylinder, circle, and rectangle. But I am clueless about how to go about creating my neural network.
I'd recommend using a SOM (self-organizing map) for pattern recognition, since it's really robust. There is also a SOM Toolbox for MATLAB you might be interested in. However, to make it learn waves while ignoring their offsets, you'd need to make some changes to the similarity function. These changes will increase the SOM's training time quite a lot, but if that's not a problem, keep reading.
For the SOM you'll have to sample your waves into constant-sized vectors, let's say:
sin x -> sin_vector = (a1, a2, a3, ..., aN)
cos x -> cos_vector = (b1, b2, b3, ..., bN)
Usually the similarity of SOM vectors is calculated with the Euclidean distance. The Euclidean distance between those two vectors is huge, since they have different offsets; in your case they should be considered similar, i.e., the distance should be small. So if you don't sample all the similar waves from the same starting point, they will be classified into different classes, which is probably a problem. But! Similarity of vectors in a SOM is calculated in order to find the BMU (best-matching unit) in the map and to pull the BMU's and its neighborhood's vectors toward the values of the given sample. So all you need to change is the way those vectors are compared and the way the vectors' values are pulled toward the sample, so that both become offset-tolerant.
A slow but working solution: first find the best offset index for each node vector, i.e., the offset that produces the smallest Euclidean distance to the sample. The node with the smallest such distance is then the BMU. The BMU's and its neighborhood's vectors are then pulled toward the given sample using the offset index just calculated for each node (see the sketch below). Everything else should work out of the box.
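Here is that idea sketched in Python rather than MATLAB (the circular shift via np.roll is my assumption about what "offset" means here; everything else follows the description above):

    import numpy as np

    def best_offset_distance(sample, node_vec):
        # smallest Euclidean distance over all circular shifts of node_vec,
        # together with the shift that achieves it
        best_dist, best_shift = np.inf, 0
        for s in range(len(node_vec)):
            d = np.linalg.norm(sample - np.roll(node_vec, s))
            if d < best_dist:
                best_dist, best_shift = d, s
        return best_dist, best_shift

    def pull_toward(node_vec, sample, shift, lr):
        # pull the node vector toward the sample in its best-aligned frame
        aligned = np.roll(node_vec, shift)
        aligned = aligned + lr * (sample - aligned)
        return np.roll(aligned, -shift)

    # toy check: a sine wave vs. a shifted copy of itself
    x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    sample, node = np.sin(x), np.roll(np.sin(x), 10)
    print(best_offset_distance(sample, node))  # distance ~ 0 at the shift that undoes the offset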
This solution is relatively slow but should work well. I'd recommend studying the concept of the SOM thoroughly and then reading this post (and the angry comments) again :)
PLEASE comment if you know of a mathematical solution better than the one above!
You can try MATLAB's neural network pattern recognition tool, nprtool, as it is specialized for training and testing neural networks for pattern recognition.