arbor.js z-order

I am using arbor.js; specifically, I am using the nearest-to-mouse lookup to detect which node the mouse is over and highlight it, and that all works.
My problem is that I would like to ensure the node under the mouse is on top, i.e. drawn last, so it is not obscured by other nodes. Has anyone had any luck finding a way to alter the draw order, or z-order, of a node?
Cheers Ben

It seems true z-ordering is not possible; the best that can be done is to sort the nodes, since the last node in the list is rendered last.
You have to use an array rather than a JSON object when loading data if you want to preserve the order.
D3 has a built-in sort function which goes some way towards addressing this; I think arbor could use the same.

Related

Finding a better method to identify which points exist "inside" of a cube in Swift

I want to find a way to check whether points (vectors) in my scene are contained within an SCNBox I have displayed on screen. Currently I have an array of about 83,000 SCNVector3s.
So far, I do this by simply running a for loop over each point and checking it against the SCNBox's bounding box; if it falls within that bounding box, I save the point to a separate array.
My goal, however, is not one bounding box. I further subdivide the bounding box into equal sections, and for each of those I have to check every point again to see whether it falls into that sub-box. If it does, I save those boxes so that when I go to subdivide them again, I am not subdividing boxes that contain no points. This helps performance a little, since large groups of empty boxes are not needlessly checked.
This works okay for a small number of boxes, but I need to subdivide into much larger numbers, sometimes hundreds to thousands of boxes. As you can imagine, it takes a long time to check them all, and currently I have to iterate through every point again for each box. Is there a faster approach?
Your question asks whether there is a better way, so using the physics engine would be the more 'standard' approach IMO, but performance-wise I wouldn't know without trying it myself. It seems you either check manually or let the physics engine check it for you.
Try this post - it's similar and there are some examples. [67575481].
The physics engine checks nodes against nodes, and things may have changed since I did this, but I had to generate node names for my nodes and check collisions that way. There may be other options now.
My example is a moving 'missile' that collides with a moving node containing 'some monster', so I wasn't dealing with the size you are.
83000 vectors, dunno, that seems like a lot. You'll still have to do the checks you do now, just differently.
Hope that helps...
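For what it's worth, the repeated per-box scans described in the question can usually be collapsed into a single binning pass: compute which sub-box each point falls into directly from its coordinates, instead of testing every point against every sub-box. A minimal sketch of that idea in plain Python (not SceneKit; the tuple points, box bounds, and division count are hypothetical stand-ins for the SCNVector3 array and SCNBox in the question):

```python
def bin_points(points, box_min, box_max, divisions):
    """Assign each point to an (i, j, k) cell of a box split into
    divisions x divisions x divisions equal sub-boxes."""
    size = [(box_max[a] - box_min[a]) / divisions for a in range(3)]
    cells = {}  # (i, j, k) -> points that fall inside that cell
    for p in points:
        # Skip points outside the outer box entirely.
        if any(p[a] < box_min[a] or p[a] > box_max[a] for a in range(3)):
            continue
        idx = tuple(
            min(divisions - 1, int((p[a] - box_min[a]) / size[a]))
            for a in range(3)
        )
        cells.setdefault(idx, []).append(p)
    return cells

# Example: a 2x2x2 subdivision of a unit box.
pts = [(0.1, 0.2, 0.3), (0.9, 0.9, 0.9), (1.5, 0.0, 0.0)]
occupied = bin_points(pts, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 2)
print(list(occupied))  # only the cells that actually contain points
```

Each point is visited once per subdivision level, and only the occupied cells come back, which avoids re-scanning all 83,000 points for every sub-box.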

Can you do pathfinding based on the pixel grid of a .png file in Unity?

TL;DR: Can someone please help with pathfinding with no obstacles, fixed and known starting points, and edges based on transparency of the pixel grid of a .png file.
I'm trying to make a simple app for my students to teach them the correct stroke order and direction of the Chinese strokes.
So far I have achieved this by layering "start" and "end" game objects with CircleCollider2D components on top of the PolygonCollider2D generated by the sprite to check if they started the stroke, stayed within the stroke, and exited it correctly.
It does the job, yes, but it doesn't animate the fill-in process like you'd expect from such an app, not to mention that I have to add the "start" and "end" points manually.
Ideally I could just provide the stroke sprite, tell it which way I want the stroke to go (left to right, right to left etc.) and let the program create the ends based on the first/last 10% of the pixels, and of course animate it to fill in once completed correctly.
But baby steps!
First, I'd be grateful if someone could please tell me how to even get the pixel grid to begin with so I can perhaps attempt an A* approach.
Thank you!
This is the same problem as validating whether AI racers are racing the right way around the track. You will have to set up some sort of waypoint system that is ordered along the direction of the stroke you want.
Imagine you're teaching them to write the number 2. You create an array of nodes starting from the upper-left-most part of the number and continuing until you reach the other end. You can then validate the stroke by checking whether their fingers pass through the waypoints in the correct order.
No need for a complicated A* algorithm.
However, this won't do if you want to automate everything; that would require some sort of image processing, an editor tool, and loads of validation. I wouldn't recommend the automated route, though.
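To make the ordered-waypoint check concrete, here is a small sketch of the logic in Python rather than Unity C# (the waypoint positions, hit radius, and sampled finger path below are all made up for illustration):

```python
import math

def stroke_follows_waypoints(finger_path, waypoints, radius):
    """Return True if the sampled finger positions pass through every
    waypoint in order, coming within `radius` of each one."""
    next_wp = 0  # index of the waypoint we still need to hit
    for x, y in finger_path:
        if next_wp == len(waypoints):
            break
        wx, wy = waypoints[next_wp]
        if math.hypot(x - wx, y - wy) <= radius:
            next_wp += 1  # hit this waypoint, move on to the next
    return next_wp == len(waypoints)

# Waypoints roughly tracing the digit 2: across the top, down the diagonal,
# then across the bottom.
waypoints = [(0.2, 0.9), (0.8, 0.9), (0.5, 0.5), (0.2, 0.1), (0.8, 0.1)]
finger = [(0.19, 0.91), (0.5, 0.95), (0.81, 0.88), (0.6, 0.7),
          (0.5, 0.5), (0.3, 0.25), (0.2, 0.1), (0.5, 0.1), (0.8, 0.1)]
print(stroke_follows_waypoints(finger, waypoints, 0.08))  # True
```

In Unity the same loop would run over the touch positions each frame, with the CircleCollider2D triggers from the question effectively playing the role of the radius check.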

Blender - Intersection between 2 objects deletes part of my first object

I'm making a chess piece (a bishop) and I am trying to make the notch at the top.
For this purpose, I made a new cube, resized it, and put it in place to make the cut.
I want to make the cut using a Boolean modifier, intersecting the two objects.
The problem I am facing is that while intersecting, the top UV Sphere that simulates the 'hat' of the bishop disappears.
What I have tried so far:
- Remove Doubles
- Ctrl+J to join the Bishop + Hat (UV sphere) into one object
Nothing helped; when I try to intersect, the UV-sphere hat still disappears.
Why? How can I solve this?
Here is the bishop before modifier, with hat:
Here is the .blend file to catch the problem faster:
Thank you for your help :-)
The boolean modifier offers two different solvers that produce different results. You want the intersect operation with the carve solver. You also want to hide the cube that you are using for the intersection otherwise you won't see the hole that it has cut out.
To go straight to the point, I'll add my reply here and mark sambler's excellent answer as correct.
In my case, the cube that was going to intersect with the bishop HAD A NEGATIVE SCALE.
If you have similar problems, check whether your objects' scale or other transform values are negative.
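If you want to catch the negative-scale problem programmatically, a small Blender Python snippet along these lines (run from Blender's Python console or Text Editor; it simply inspects whatever objects are in the scene) will flag the offenders:

```python
import bpy

# A negative scale flips the mesh orientation and is a common cause of
# boolean modifier results disappearing or cutting the wrong way.
for obj in bpy.data.objects:
    if any(component < 0 for component in obj.scale):
        print(obj.name, "has a negative scale:", tuple(obj.scale))
        # Fix: select the object and apply its scale
        # (Object > Apply > Scale, i.e. Ctrl+A in the 3D viewport).
```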

Drawing a polygon with a lot of dots

My desired output is to move a lot of dots around to visualize some words.
The effect is similar to this video: http://www.youtube.com/watch?v=Le13by2WM70
I think this problem can be split into two sub-problems.
The first is how to extract the path from a vector font.
The second is how to move the dots to visualize that polygon.
There are some tools that could solve the first part, but I have no idea about the second part.
Has anyone done this?
You could probably do pretty well by just sampling points on a regular grid, with a little jitter added in to avoid looking too computery. All you need to do is check if you are "inside" or "outside" of the path. For inside, place a fish (or dot); for outside, no fish.
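A minimal Python sketch of that suggestion, using matplotlib both to pull a word's outline out of a vector font (the first sub-problem) and to test jittered grid points against it; the word, grid spacing, and jitter amount are arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.textpath import TextPath

# Outline of the word as a vector path (the "extract the path" part).
path = TextPath((0, 0), "HELLO", size=40)

# A regular grid over the path's bounding box, with a little jitter so the
# dots don't look too regular.
bbox = path.get_extents()
xs, ys = np.meshgrid(np.arange(bbox.x0, bbox.x1, 1.5),
                     np.arange(bbox.y0, bbox.y1, 1.5))
points = np.column_stack([xs.ravel(), ys.ravel()])
points += np.random.uniform(-0.4, 0.4, points.shape)

# Keep only the points that fall inside the glyph outlines.
inside = points[path.contains_points(points)]

plt.scatter(inside[:, 0], inside[:, 1], s=4)
plt.gca().set_aspect("equal")
plt.show()
```

Animating the dots toward those target positions is then a separate interpolation step per dot, which covers the second sub-problem.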

Matlab: Track point on object in video

I would like to track (if that is the right word for it) the movement of a point on an object and return the coordinates of that point in each frame to arrays for plotting. How would you go about doing this?
The point in the video is a certain color, so my first attempt was to eliminate all other colors, changing the part I wish to follow to black and everything else to white. Doing this left me with some areas in the background that are the same color; I wish to ignore them and just focus on the moving point. I do not know where to even begin with this, or whether what I've tried so far is even the right approach.
Any help would be greatly appreciated! :)
Try searching for terms like 'tracking', 'morphological', 'computer vision', 'matlab'
Here's a project that I found that will probably get you started.
http://www.mathworks.com/matlabcentral/fileexchange/28757-tracking-red-color-objects-using-matlab
If your object of interest is of a specific color, you can always apply a color filter. To give you a bit of background, I was trying to track not a point on an object but a moving object in one of my videos (it was a ping-pong video and my goal was to track the ping-pong ball). My algorithm was simple and fast, as I did not want any of my filters to induce heavy computation on a single frame. The basic idea was to apply a color filter. Similar to other shape filters, if your target is highly similar to the filter, the response will be distinctive enough for you to notice. In other words, if you subtract two objects that are extremely similar, you will get close to 0; otherwise, the difference will be far greater than 0.
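For illustration only, here is the same colour-filter-plus-centroid idea sketched in Python with OpenCV rather than MATLAB (the video filename and HSV range are assumptions you would tune for your own footage; in MATLAB the File Exchange project linked above plays the same role):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")  # hypothetical input file
# Hypothetical HSV range for a red-ish marker -- tune for your colour.
lower, upper = np.array([0, 120, 70]), np.array([10, 255, 255])

xs, ys = [], []  # tracked coordinates, one entry per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep only the target colour (the "colour filter").
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    # Centroid of the largest blob = the tracked point in this frame
    # (OpenCV 4.x findContours signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            xs.append(m["m10"] / m["m00"])
            ys.append(m["m01"] / m["m00"])
cap.release()
print(len(xs), "tracked positions")
```

Taking the largest blob per frame is a simple way to ignore the stray same-coloured background regions mentioned in the question, on the assumption that the marker you care about is the biggest match in every frame.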