I have drawn a graph using arbor.js with about 15-20 nodes. Is there a way to ensure that two particular nodes never overlap? In fact, it would be better if they had a fairly large distance between them, or were simply situated at opposite extremes of the canvas.
I'm going to be using SKTileMapNodes for my 2D iOS platformer. Which is better or more efficient: using a single SKTileMapNode that spans the entire level, or breaking the area up into multiple SKTileMapNodes?
Example:
I have 3 layers of backgrounds that I'm going to use for a parallax background effect.
1st Layer (furthest back) is just a gradient sky. I have it broken up into 512x512 px tiles. I only have 8 different tiles that can be used as a 1x8 grid of tiles. I could then continue this pattern left/right or up/down in order to have the sky be as large as I need it to be.
My question is whether I should use one tile map node for the entire sky, or break it up into smaller repeatable chunks (like the 1x8 grid). If I break it up into smaller chunks, the node wouldn't need to be so big, and as the camera moves around in my game I can reposition those chunks.
I'm wondering if this would consume fewer resources.
The 2nd layer is hills. I have about 8 different tiles that are 128x64 px each, which I can arrange into a repeatable pattern to my liking. So again, I can have an SKTileMapNode the size of the repeatable pattern and create multiple nodes, or I can just create the entire map in one node.
The 3rd layer is a little different because it is basically a pattern image of trees that holds 27 512x512 px tiles (a 9x3 grid). But again, I can use one node or multiple.
I'm just concerned with efficiency. What is going to give me the most bang for my buck so that I have room to process other game objects? This is just the background, after all.
With SKTileMapNode, I'm not sure whether tiles that are not visible are skipped each cycle, or whether I need to do some sort of check manually. I want to have the option of massive maps on certain levels. I'm new to SKTileMapNode, so I'm trying to figure out how to use it in the most efficient way possible.
Each SKTileMapNode is a single node. Apple does a good job of making sure that multiple map nodes don't use too many resources; that is why there is a single node per map instead of a node for every tile in a map. It is common practice to layer multiple SKTileMapNodes to create a parallax effect, or simply to create layers.
For example, a platform game with mountains in the background and clouds behind that would use a single SKTileMapNode for the mountains and a single one for clouds. This gives the added benefit of being able to use transparency in tiles.
I would like to add additional forces to networkx's spring_layout.
I have a directed graph, and I would like nodes to move to different sides according to the edges that they have: nodes with more outgoing edges should drift left, while nodes with more incoming edges should drift right. Alternatively, these groups of nodes could drift towards each other: nodes with outgoing edges would move closer together, and nodes with incoming edges would likewise move closer to each other.
I managed to look into the source code of networkx's spring_layout (http://networkx.lanl.gov/archive/networkx-0.37/networkx.drawing.layout-pysrc.html#spring_layout), but everything there is beyond my comprehension.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 5), (2, 5), (3, 5), (5, 6), (5, 7)])
The layout should show nodes 1, 2, 3 closer to each other, and likewise nodes 6 and 7.
I imagine I could solve this by adding invisible edges using a MultiDiGraph: I could count the incoming and outgoing edges of each node and add invisible edges connecting the two groups. However, I am quite sure there are better ways of solving the problem.
Adding weights into the mix would be a good way to group things (with those invisible edges). But the layouts have no way of knowing left from right, so to get the exact layout you want you could specify each node's x,y coordinates:
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
# Pin every node by storing an explicit position attribute.
G.add_node(1, pos=(1, 1))
G.add_node(2, pos=(2, 3))
G.add_node(3, pos=(3, 4))
G.add_node(4, pos=(4, 5))
G.add_node(5, pos=(5, 6))
G.add_node(6, pos=(6, 7))
G.add_node(7, pos=(7, 9))
G.add_edges_from([(1, 5), (2, 5), (3, 5), (5, 6), (5, 7)])
pos = nx.get_node_attributes(G, 'pos')
nx.draw(G, pos)
plt.show()
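A middle ground between fixing every coordinate and a fully automatic layout (my own suggestion, not part of the answer above) is to seed spring_layout with rough positions and fix only the left/right anchor nodes, letting the spring forces place the rest:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 5), (2, 5), (3, 5), (5, 6), (5, 7)])

# Pin the "source" nodes on the left and the "sink" nodes on the right;
# node 5 is left free for the spring forces to place.
anchors = {1: (-1, 0.5), 2: (-1, 0.0), 3: (-1, -0.5),
           6: (1, 0.3), 7: (1, -0.3)}
pos = nx.spring_layout(G, pos=anchors, fixed=list(anchors), seed=42)
```

Nodes listed in `fixed` keep exactly the coordinates given in `pos`, so the left/right grouping is guaranteed while unpinned nodes still settle by the usual forces.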
The problem I have is that there are rectangles within rectangles. Think of a map with the following traits, the key point being: rectangles with similar density often share similar dimensions and a similar position on the x axis with other rectangles; the distance between such rectangles is usually small, but may sometimes be big. If the x position or the dimensions are clearly way off, the rectangles are not similar.
Rectangles do not intersect; smaller rectangles are completely inside a larger rectangle.
Rectangles often have a similar x position and similar dimensions (similar height and width), and have smaller rectangles inside them. Such a rectangle is considered a cluster of its own.
Sometimes the distance of one cluster from another may be quite big (think of islands). Often these clusters share the same or similar dimensions and the same or similar density of sub-rectangles; if so, they should be considered part of the same cluster despite the distance between them.
The more dense a rectangle is (more smaller rectangles inside), the more likely there is a similarly dense rectangle with the same or similar dimensions nearby.
I've attached a diagram to describe the situation more clearly:
Red border means those groups are outliers, not part of any cluster, and are ignored.
Blue border contains many clusters (black borders containing black solid rectangles). They form a group of clusters that are similar under the criteria mentioned above (similar width, similar x position, similar density). Even the clusters towards the bottom right corner are still considered part of this group because of those criteria.
Turquoise border contains many clusters (black borders containing black solid rectangles). However, these clusters differ in dimension, x position, and density from the ones in the blue border, so they are considered a group of their own.
So far I have found density-based clustering such as DBSCAN, which seems perfect since it takes noise (outliers) into consideration and you do not need to know ahead of time how many clusters there will be.
However, you need to define the minimum number of points needed to form a cluster and a threshold distance. What happens if you don't know these two values, and they can vary based on the problem described above?
Another seemingly plausible solution would be hierarchical (agglomerative) clustering (r-tree), but I'm concerned that I would still need to know the cutoff depth in the tree to determine what counts as a cluster.
You certainly will need to take all your constraints into account.
In general, your task looks more like constraint satisfaction to me than clustering.
Maybe some constrained clustering approaches are useful to you, but I'm not sure whether they allow your kind of constraints. Usually, they only support must-link and cannot-link constraints.
But of course you should try DBSCAN (in particular also: generalized DBSCAN, since the generalization might allow you to add the constraints you have!) and R-trees (which aren't actually a clustering algorithm, but a data index).
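As a concrete way to try plain DBSCAN, one option (my assumption, not part of the answer) is to run scikit-learn's implementation on a feature vector per rectangle rather than on raw coordinates, so that dimensions and density count toward "distance". All features, values, eps, and min_samples below are made up for illustration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical feature vector per outer rectangle:
# [x position, width, height, density (number of inner rectangles)].
rects = np.array([
    [10.0,  50, 80, 5],
    [12.0,  52, 78, 6],
    [11.0,  49, 81, 5],
    [200.0, 48, 79, 6],   # far away on x, but similar dimensions/density
    [400.0, 20, 20, 1],   # dissimilar in every respect -> outlier
])

# Standardise so the raw x distance does not drown out the other features.
scaled = (rects - rects.mean(axis=0)) / rects.std(axis=0)

labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(scaled)
# Expected: the first four rectangles share a label; the last is noise (-1).
```

The feature scaling is where your domain knowledge goes: shrinking the weight of the x column is what lets distant-but-similar "islands" end up in the same cluster.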
Note that R-trees will put the "outliers" into some leaf, to ensure minimum fill.
As is, I cannot give you more detailed recommendations, because even from the sketch above, your constraints are not well defined IMHO. Try putting them into pseudocode. You probably only have a small number of rectangles (say, 100), so you can afford to run really expensive algorithms, such as linkage clustering with a customized linkage criterion. Putting your criteria into code may already be 99% of the effort!
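As a sketch of that last suggestion, here is linkage clustering driven by a customized, hand-weighted pairwise criterion via SciPy. The features, weights, and cutoff are illustrative assumptions, not derived from your actual data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical features per rectangle: (x, width, height, density).
rects = np.array([
    [10.0,  50, 80, 5],
    [12.0,  52, 78, 5],
    [300.0, 49, 81, 5],   # an "island": far away on x, similar otherwise
    [400.0, 20, 20, 1],   # outlier
])

def rect_distance(a, b):
    # Custom criterion: dimensions and density matter most; raw x
    # distance is down-weighted so distant-but-similar islands can merge.
    return (abs(a[1] - b[1]) + abs(a[2] - b[2])      # width/height
            + 10 * abs(a[3] - b[3])                  # density
            + 0.05 * abs(a[0] - b[0]))               # x position

Z = linkage(pdist(rects, metric=rect_distance), method='average')
labels = fcluster(Z, t=20, criterion='distance')
# Expected: the first three rectangles share a label; the last stands alone.
```

With ~100 rectangles the O(n^2) distance matrix is trivial, so all the effort really does go into `rect_distance`, exactly as the answer says.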
I am trying to do an animation using arbor.js. I want to display about 50 to 100 weighted nodes, where the weight defines the size of the node. I was able to display all the nodes but the heavier nodes concentrate in the center and overlap each other. There is a lot of empty space in the canvas that can be used to spread out the nodes.
Is there a setting I can use to not have the nodes overlap?
When creating your particle system, use more repulsion, e.g. 10000 instead of the default 1000:
arbor.ParticleSystem(10000);
This will not prevent overlapping, but may fix your display problem.
I have an application in which users interact with each-other. I want to visualize these interactions so that I can determine whether clusters of users exist (within which interactions are more frequent).
I've assigned a 2D point to each user (where each coordinate is between 0 and 1). My idea is that two users' points move closer together when they interact, an "attractive force", and I just repeatedly go through my interaction logs over and over again.
Of course, I need a "repulsive force" that will push users apart too, otherwise they will all just collapse into a single point.
First I tried monitoring the lowest and highest of the x and y coordinates and normalizing the positions, but this didn't work: a few users with a small number of interactions stayed at the edges, and the rest all collapsed into the middle.
Does anyone know what equations I should use to move the points, both for the "attractive" force between users when they interact, and a "repulsive" force to stop them all collapsing into a single point?
Edit: In response to a question, I should point out that I'm dealing with about 1 million users, and about 10 million interactions between users. If anyone can recommend a tool that could do this for me, I'm all ears :-)
In the past, when I've tried this kind of thing, I've used a spring model to pull linked nodes together, something like dx = -k*(x - l), where dx is the change in separation, x is the current separation between the two nodes, l is the desired separation, and k is the spring coefficient that you tweak until you get a nice balance between spring strength and stability; it'll be less than 0.1. Having l > 0 ensures that everything doesn't end up in the middle.
In addition to that, a general "repulsive" force between all pairs of nodes will spread them out, something like dx = k / x^2. This gets larger the closer two nodes are; tweak k to get a reasonable effect.
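Put together, the two forces above can be sketched as a naive per-frame loop. The data and constants here are toy values of my own; at a million users you would need a spatial index (e.g. a Barnes-Hut tree) instead of the O(n^2) repulsion pass below:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                         # toy size, not 1M users
pos = rng.random((n, 2))                       # random start in unit square
edges = [(i, (i + 1) % n) for i in range(n)]   # made-up interaction pairs

K_SPRING, K_REPEL, REST_LEN = 0.05, 1e-4, 0.1

for _ in range(200):
    disp = np.zeros_like(pos)
    # Spring on each interacting pair: dx = -k * (x - l)
    for i, j in edges:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d) + 1e-9
        f = K_SPRING * (dist - REST_LEN) * (d / dist)
        disp[i] += f                           # attract when farther than l,
        disp[j] -= f                           # push apart when closer
    # Inverse-square repulsion between all pairs: dx = k / x^2
    for i in range(n):
        d = pos - pos[i]
        dist2 = (d ** 2).sum(axis=1) + 1e-9
        unit = d / np.sqrt(dist2)[:, None]     # zero for the node itself
        disp[i] -= ((K_REPEL / dist2)[:, None] * unit).sum(axis=0)
    pos += np.clip(disp, -0.05, 0.05)          # throttle each step
```

The clip on the per-step displacement is a cheap stabilizer so that two nodes landing very close together don't launch each other off the canvas.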
I can recommend some possibilities: first, try log-scaling the interaction counts or running them through a sigmoid function to squash the range. This will give you a smoother visual distribution of spacing.
Independent of this scaling issue: look at some of the rendering strategies in graphviz, particularly the programs "neato" and "fdp". From the man page:
neato draws undirected graphs using ``spring'' models (see Kamada and Kawai, Information Processing Letters 31:1, April 1989). Input files must be formatted in the dot attributed graph language. By default, the output of neato is the input graph with layout coordinates appended.
fdp draws undirected graphs using a ``spring'' model. It relies on a force-directed approach in the spirit of Fruchterman and Reingold (cf. Software-Practice & Experience 21(11), 1991, pp. 1129-1164).
Finally, consider one of the scaling strategies, an attractive force, and some sort of drag coefficient instead of a repulsive force. Actually moving things closer and then possibly farther later on may just get you cyclic behavior.
Consider a model in which everything will collapse eventually, but slowly. Then just run until some condition is met (a node crosses the center of the layout region or some such).
Drag or momentum can just be encoded as a basic resistance to motion and amount to throttling the movements; it can be applied differentially (things can move slower based on how far they've gone, where they are in space, how many other nodes are close, etc.).
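A minimal sketch of that idea, assuming a per-frame integrator where drag is a multiplicative resistance applied to velocity (function name and constants are made up):

```python
import numpy as np

def step(pos, vel, forces, damping=0.9, dt=0.1):
    """Advance one frame; damping < 1 throttles movement every step."""
    vel = damping * (vel + dt * forces)   # momentum, resisted by drag
    return pos + dt * vel, vel

# With zero forces, motion decays geometrically instead of oscillating.
pos = np.zeros((3, 2))
vel = np.ones((3, 2))
for _ in range(5):
    pos, vel = step(pos, vel, np.zeros((3, 2)))
```

To apply drag differentially, as suggested above, `damping` would just become a per-node array instead of a scalar.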
Hope this helps.
The spring model is the traditional way to do this: make an attractive force between each pair of nodes based on their interaction, and a repulsive force between all pairs based on the inverse square of their distance. Then solve, minimizing the energy. You may need some fairly high-powered programming to get an efficient solution if you have more than a few nodes. Make sure the start positions are random, and run the program several times: a case like this almost always has several local energy minima, and you want to make sure you've found a good one.
Also, unless you have only a few nodes, I would do this in 3D. An extra dimension of freedom allows for better solutions, and you should be able to visualize clusters in 3D as well if not better than 2D.
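A toy sketch of the solve-by-energy-minimization idea in 3D, with random restarts to dodge local minima. This uses plain gradient descent on a made-up five-node graph; a million-node problem would need far more sophisticated machinery:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (3, 4)]       # hypothetical interactions
n, K_REPEL = 5, 0.01

def energy(pos):
    e = sum(((pos[i] - pos[j]) ** 2).sum() for i, j in edges)  # springs
    for i in range(n):
        for j in range(i + 1, n):              # pairwise 1/r repulsion
            e += K_REPEL / (np.linalg.norm(pos[i] - pos[j]) + 1e-6)
    return e

def grad(pos):
    g = np.zeros_like(pos)
    for i, j in edges:
        d = pos[i] - pos[j]
        g[i] += 2 * d                          # gradient of the spring term
        g[j] -= 2 * d
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r = np.linalg.norm(d) + 1e-6
            g[i] -= K_REPEL * d / r**3         # gradient of the 1/r term
            g[j] += K_REPEL * d / r**3
    return g

best_pos, best_e = None, np.inf
for seed in range(5):                          # several random restarts
    pos = np.random.default_rng(seed).random((n, 3))   # random 3-D start
    for _ in range(500):
        pos -= 0.01 * grad(pos)                # plain gradient descent
    if energy(pos) < best_e:
        best_e, best_pos = energy(pos), pos.copy()
```

Keeping the lowest-energy result across restarts is exactly the "run it several times" advice above; clusters then show up as groups of nearby points in the best layout.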