Eclipse GEF/draw2d Coordinate System Transformation

Can someone please explain to me how coordinate transformations work in draw2d?
I have a hierarchical diagram where a figure can contain figures which in turn contain other figures. At first I added internal figures by taking the request's getLocation, fetching the host figure of the EditPolicy and applying hostFigure.translateToRelative(location), but it does not work, and neither do combinations of translateToParent and other methods.
In the end I copied the implementation from the Logic example, which uses getConstraintFor, a method provided by the policy that does the translation itself. I examined this code but still could not understand how it works.
I have read a number of threads in the Eclipse forums on this subject, but I still don't understand why a simple method like translateToAbsolute does not behave as expected. Could anyone please explain? Thanks.

Two pieces of information that might shed some light on your problem:
Depending on the request type, I would expect the location to already be in absolute coordinates.
Unless explicitly implemented otherwise, Figures don't have a local coordinate system for their children. So converting a location up and down the Figure hierarchy does not necessarily change the coordinates.
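To make the second point concrete, here is a toy model (deliberately not the draw2d API - just the concept, in Python) of a figure tree in which only some figures define their own local coordinate system; translating a point to absolute coordinates only changes it at those figures, which is why the calls can look like no-ops:

```python
# Toy model of the point above - NOT draw2d itself. In draw2d,
# translateToAbsolute/translateToRelative walk the parent chain, but a plain
# Figure does not define a local coordinate system for its children, so most
# hops contribute nothing; only figures that do define one (e.g. a scrolled
# viewport or a zoomed pane) actually change the point.
class Figure:
    def __init__(self, parent=None, local_offset=None):
        self.parent = parent
        self.local_offset = local_offset  # None = children share parent's coordinates

    def translate_to_absolute(self, x, y):
        f = self
        while f is not None:
            if f.local_offset is not None:  # only these figures shift the point
                x += f.local_offset[0]
                y += f.local_offset[1]
            f = f.parent
        return (x, y)

root = Figure()
plain = Figure(parent=root)                       # no local coordinate system
nested = Figure(parent=plain)
print(nested.translate_to_absolute(10, 10))       # (10, 10) - a no-op, as observed
viewport = Figure(parent=root, local_offset=(100, 50))  # e.g. a scrolled viewport
child = Figure(parent=viewport)
print(child.translate_to_absolute(10, 10))        # (110, 60)
```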

Related

How do I determine the shape of a child of a MultiChildRenderObjectWidget in Flutter?

I'm working on a widget that displays a graph of nodes and edges using a MultiChildRenderObjectWidget that accepts a list of node widgets as children. I can determine the size and position of the children during layout and thus have the graph edges align with the square intrinsic size of the nodes. However, what if the nodes are not square (if they have a border radius for example)? Then the edges do not line up with the node's border on the diagonals. Here's a picture of what I mean:
My first guess on how to do this would be to layout all children and then during painting, keep performing hit tests along the edge line until the hit test doesn't find the child. Is there a better way of doing this?
I like this question :)
As the scope of the question is broad, I will also present you with a broad answer. That means that this is not a specific implementation but rather an explanation of the concepts needed for this.
Hit testing
You presented hit testing as a way to deal with this issue. I believe that this is not feasible in most cases; let me explain.
Iteration problem: "keep performing hit tests along the edge line" - maybe there is a good algorithm for doing this in a somewhat efficient fashion, but if you think about it, you would have to perform a lot of checks to get results, depending on the approach you take (the difficult question here is how you determine success for your algorithm, i.e. when it should stop searching).
Also note that "Hit testing requires layout to be up-to-date but does not require painting to be up-to-date.", which means that hitTest is not intended to rely on painting - I am also not aware of a way to perform hit tests on a canvas, so the idea of easily checking where the canvas painted might not actually be possible.
Parent data
The way I would approach this problem is using parent data, specifically BoxParentData.
Ideally, you would paint your nodes using render objects as well because that allows you to work with the parent data easily.
Before I go into a little bit of how it can be implemented, here is my idea:
You have a render object container (your MultiChildRenderObjectWidget) that can handle your nodes.
The nodes will have GraphContainerNodeParentData (example name).
Each node paints based on a description of the shape. This description could be a Path (you could use PathMetrics to evaluate that later) or something simpler if you can find a way to simplify e.g. the description of the rounded rectangle.
The node sets that shape description as its parent data (variables in the GraphContainerNodeParentData).
The render object container will be able to read the GraphContainerNodeParentData, which contains the information about the shape. Now, you will be able to go through your children during painting and read the parent data, where the shape description is stored → problem solved :)
Implementation
This is the way Stack et al. work. You can find the implementation of rendering for Stack in the framework:
Parent data implementation
Container render box implementation (btw, "container" in my answer refers to the concept of a render box that is a container for other render boxes; it has nothing to do with the Container widget :D)
Furthermore, I used an abstract way of dealing with parent data in my open source Flutter Clock submission. If you are interested in understanding parent data better, it could be helpful. The abstract multi child (container) render object can be found here.
Simplification
You might not need to go that deep (depending on what you are trying to achieve).
You can also set parent data using a ParentDataWidget and potentially combine that with simpler ways of composing your shapes.
For example, you could just use a ClipRRect or something with a specific border radius and pass that border radius to the parent data. With some math, you will always be able to find the correct edges for your shapes with variable border radii in your multi child render object paint method :)
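The math for that fixed-shape case is language-agnostic; here is a small sketch (Python for brevity, all names are mine) that finds where an edge leaving the node centre should meet a rounded-rectangle border, using the standard rounded-box signed-distance function and bisection along the edge direction:

```python
import math

# Signed distance from point (px, py) to a rounded rectangle centred at the
# origin with half-extents (half_w, half_h) and corner radius r - the
# standard 2D rounded-box SDF (negative inside, positive outside).
def rounded_rect_sdf(px, py, half_w, half_h, r):
    qx = abs(px) - (half_w - r)
    qy = abs(py) - (half_h - r)
    outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
    inside = min(max(qx, qy), 0.0)
    return outside + inside - r

# Bisect along the ray from the node centre in direction (dx, dy) for the
# point where the SDF crosses zero, i.e. where the graph edge should end.
def edge_anchor(half_w, half_h, r, dx, dy):
    length = math.hypot(dx, dy)
    dx, dy = dx / length, dy / length
    lo, hi = 0.0, math.hypot(half_w, half_h)  # centre is inside, corner diagonal is outside
    for _ in range(40):  # 40 halvings is far more than pixel precision
        mid = (lo + hi) / 2
        if rounded_rect_sdf(mid * dx, mid * dy, half_w, half_h, r) < 0:
            lo = mid
        else:
            hi = mid
    return lo * dx, lo * dy

# A 100x60 node with a 16px corner radius, edge heading to the upper right:
print(edge_anchor(50, 30, 16, 1, -1))
```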
Abstraction
If you do not need to handle abstract cases, i.e. in your case all kinds of different shapes (which could be implemented using the parent data shape description as I outlined), you could also just leave out all of this.
Imagine you always use the same border radius. Why would you worry about even passing parent data then? You could simply calculate where the edges are based on the size when you have a fixed border radius or fixed shape.
So I want you to keep in mind that even though I proposed this abstract way of dealing with it (which is not difficult at all to work with once you understand it, but can be cumbersome to get into), you should find the simplest way of solving the problem for your specific case.
More abstraction is always possible - I could e.g. pour a lot of effort into something like this, creating an extremely abstract API that can handle shapes of any kind (using PathMetrics e.g.) to always find the perfect spots, no matter what kind of cubics you used to paint your nodes. However, that might be completely unnecessary and could even lead you off track if you are not able to handle the more complex solution.
Approach 1: abstraction for all cases
If you are looking for something abstract, look at my canvas_clock implementation for inspiration - it uses basically only RenderBoxes, so you will find what you are searching for in that :) In hindsight, the code quality is not amazing, the structure was not well chosen, and it obviously glosses over hit testing, intrinsic sizing, etc.; however, for what it does, it goes the way of the abstract extreme (:
Approach 2: pragmatism for a specific case
There are a bunch of existing abstractions (like ParentDataWidget and CustomPainter) that can be used instead, and you might not even need to handle different shapes (just a bit of math if you e.g. always draw the same rounded rectangle).
If you are only interested in one specific shape, I think that most of the parent data stuff is not strictly necessary :)
Conclusion
I think that I have presented you with a few approaches for how this could be pulled off. I did not go into any specifics (maths, or how to do it using PathMetrics - hint: you can use one Path object for canvas.drawPath and also extract information using PathMetrics); however, that is due to the broad nature of the question.
I hope that this information was useful to you in any way - I sure did enjoy sharing my thoughts :)
Btw, I am sorry for the ramble. I would consider this a low quality answer because I only quickly wrote down my thoughts instead of thoroughly structuring the answer and conducting some more research.

Find path of specific length between point A and B on grid

I'm working on a game that will always be an n x n grid. I'd like to be able to always generate a new start position from the outer edge and have a path to the center cell. My issue is that I want to be able to specify the path length instead of choosing the shortest path. Does anyone have an idea of how to do this?
I've looked at several CompSci forums, but unfortunately I'm unable to make heads or tails of what they're saying.
https://cs.stackexchange.com/questions/44401/what-algorithm-to-use-to-generate-random-path-of-given-length
To avoid the hardcore math, I would suggest always using the shortest possible path and then randomly generating and adding some pieces to it to make it the length you want. It would not be absolutely random, but it should certainly look good enough. Although it depends on your needs.
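A sketch of that idea (all names are mine, and Python is just for illustration): build the shortest L-shaped path, then repeatedly bulge a random step sideways into a three-step detour. Each successful detour adds exactly 2 to the length, so the requested length must have the same parity as the shortest path, and very long targets can be unreachable on small grids, hence the bounded retries:

```python
import random

def straight_path(start, goal):
    # Shortest Manhattan path: walk rows first, then columns.
    path, (r, c) = [start], start
    while r != goal[0]:
        r += 1 if goal[0] > r else -1
        path.append((r, c))
    while c != goal[1]:
        c += 1 if goal[1] > c else -1
        path.append((r, c))
    return path

def lengthen(path, n, target_steps, tries=10000):
    # Grow the path to target_steps steps by inserting sideways detours.
    for _ in range(tries):
        if len(path) - 1 == target_steps:
            return path
        used = set(path)
        i = random.randrange(len(path) - 1)
        (r1, c1), (r2, c2) = path[i], path[i + 1]
        dr, dc = r2 - r1, c2 - c1
        for sr, sc in ((dc, dr), (-dc, -dr)):  # the two sideways directions
            a, b = (r1 + sr, c1 + sc), (r2 + sr, c2 + sc)
            in_grid = all(0 <= r < n and 0 <= c < n for r, c in (a, b))
            if in_grid and a not in used and b not in used:
                path[i + 1:i + 1] = [a, b]  # replace 1 step with 3
                break
    return path if len(path) - 1 == target_steps else None

n = 9
start, center = (0, 0), (n // 2, n // 2)   # shortest path is 8 steps here
print(lengthen(straight_path(start, center), n, target_steps=14))
```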

Object Tracking in non static environment

I am working on a drone-based video surveillance project and am required to implement object tracking in it. I have tried conventional approaches, but these seem to fail due to the non-static environment.
This is an example of what I would want to achieve. But this uses background subtraction, which is impossible with a non-static camera.
I have also tried feature-based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user will draw a bounding box which will define the region of interest. The drone then has to start tracking whatever is within this region of interest.
Tracking local features (like SURF) won't work in your case. Training a classifier (like Boosting with HAAR features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object, not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object inside the bounding box there will be background clutter that changes as soon as your target object moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object changes (e.g. a person turns or drops their jacket, a vehicle catches a reflection of the sun, etc...), or the object gets (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially a lot of different objects, possibly unknown a priori, and you cannot train a classifier for each one of them.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc...
Luckily for us, OpenCV 3.0.0 has an implementation for TLD, and you can find a sample code here (there is also a Matlab + C implementation in the aforementioned site).
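If you are in Python rather than C++, a minimal sketch of driving that tracker looks like this (assumptions: the opencv-contrib package is installed, and note the factory moved between versions - cv2.TrackerTLD_create() in 3.x, cv2.legacy.TrackerTLD_create() in recent 4.x; "drone.mp4" is a placeholder):

```python
import cv2

# Assumes opencv-contrib-python; adjust the factory name to your version.
tracker = cv2.legacy.TrackerTLD_create()

cap = cv2.VideoCapture("drone.mp4")          # placeholder input
ok, frame = cap.read()
bbox = cv2.selectROI("select object", frame, False)  # user draws the ROI
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    else:
        # TLD's detection component will try to re-acquire the target.
        cv2.putText(frame, "lost - re-detecting", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("TLD", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```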
The main drawback is that this method can be slow. You can test whether that is an issue for you. If so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, but this depends on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
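(vision.PointTracker and the opticalFlow objects above are from MATLAB's Computer Vision System Toolbox.) If you are using OpenCV instead, the frame-differencing idea translates roughly as follows - a sketch, with the threshold value and kernel size as placeholders you would tune:

```python
import cv2

cap = cv2.VideoCapture("drone.mp4")          # placeholder input
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                        # frame differencing
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    prev = gray
    cv2.imshow("motion mask", mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```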

Object detection/recognition using matlab [duplicate]

Possible Duplicate:
Object recognition system using matlab
I need help to develop an object recognition system. It needs to identify an object in an image by comparing it with an image in an existing database. For example, my database may consist of images of cars, buses, cups, etc. If I give a certain image as input, I want the code to check and tell me whether a car (as in the car in the database) can be found in the input image or not. This is strictly to be implemented in MATLAB. I have tried correlation, image subtraction and a few other algorithms, but to no avail. Thanks in advance.
This is a complex subject, that is really on the bleeding edge of technology, but let me give you a few pointers to help start things out.
Somehow, you need to take into account the different sizes, angles, etc. that might occur. A car looks very different photographed from a few feet away compared to 50 feet, as it would photographed from the front versus the side.
Edge detection algorithms generally work well at pulling out the target object's shape. Take the edges, identify lines in them, and you can try to compare these lines with those from your model.
Range to objects makes a huge difference in building a successful algorithm. If you know the distance from the front of the car to the back, it can make all the difference in the world.
Focus, noise, lighting, etc need to somehow be dealt with, to ensure that the system works well.
All in all, I would recommend taking some image analysis classes, reading several papers on the subject, or at least reading the Wikipedia Article, and then starting to work on your project.
The problem you have described is sometimes called object category recognition or object class recognition to emphasize that you are not trying to recognize a particular object, but a member of a category such as "car" or "person".
One popular approach for solving this problem is called Bag of Features or "Bag of Words". If you have access to the Computer Vision System Toolbox for MATLAB, it has functions for detecting SURF features, which can be used for this approach.
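To make the pipeline concrete, here is a rough Python/OpenCV sketch of the Bag-of-Features idea (only illustrative, since your constraint is MATLAB; ORB stands in for SURF because SURF is patented and only ships in OpenCV's contrib build, and train_images, labels, and query_image are hypothetical placeholders):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

orb = cv2.ORB_create()

def descriptors(img):
    # Local feature descriptors for one grayscale image.
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

# 1. Cluster all training descriptors into a "visual vocabulary".
#    (ORB descriptors are binary; k-means on them as floats is a crude
#    simplification that is fine for a sketch.)
all_desc = np.vstack([descriptors(img) for img in train_images])
vocab = KMeans(n_clusters=100).fit(all_desc.astype(np.float32))

def bag_of_words(img):
    # 2. Represent an image as a histogram of visual-word occurrences.
    words = vocab.predict(descriptors(img).astype(np.float32))
    return np.bincount(words, minlength=100).astype(np.float32)

# 3. Train a classifier on the histograms and classify a query image.
clf = LinearSVC().fit([bag_of_words(img) for img in train_images], labels)
print(clf.predict([bag_of_words(query_image)]))  # e.g. ["car"]
```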
Also, a better place to ask this question might be the Signal and Image Processing Stack Exchange.

Check if drawn path/CGPath intersects itself in an iPhone game

I have a path drawn in OpenGL ES. I can convert it to a CGPath if needed.
How would I check if it intersects itself (If the user created a complete loop)?
Graham Cox has some very interesting thoughts on how to detect the intersection of a CGPathRef and a CGRect, which is similar to your problem and may be educational. The underlying problem is difficult, and most practical solutions are going to be approximations.
You may also want to look at this SO article on CGPathRef intersection, which is also similar to your problem; some of the proposed solutions are in the same space as Graham's above.
Note: This answer is to an earlier version of the question, where I thought the problem was to determine if the path was closed or not.
I think a path is considered closed iff the current point == the starting point.
The easiest way I know of to check this is to keep track of these two points on your own, and check for equality. You can also use CGPathGetCurrentPoint, and only track the starting point to compare with this.
Here's a roundabout way to find the starting point, if it's hard to just keep track of it directly:
make a copy of the path
store its current point
call CGPathCloseSubpath
check to see if the current point changed
If it did change, the original path was open; otherwise closed.
This is a way to check if a path composed of a single continuous segment is self-intersecting.
I'm sure that if you wanted a faster implementation, you could get one by using some good thinking and full access to the CGPath internal data. This idea focuses on quick coding, although I suspect it will still be reasonably fast:
Basically, take two copies of the path, and fill it in two different ways. One fill uses CGContextEOFillPath, while the other uses CGContextFillPath. The results will be different iff the path is self-intersecting.
You can check if the result is different by blending the results together in difference blend mode, and testing if the resulting raw image data is all 0 (all black).
Hacky, yes. But also (relatively) easy to code.
** Addendum ** I just realized this won't work 100% of the time - for example, it won't detect a figure eight, although it will detect a pretzel.
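For completeness: if you do take the "full access to the path data" route (e.g. extracting the points with CGPathApply and flattening curves into short line segments), a direct pairwise segment test also catches the figure-eight case that the fill trick misses. A sketch of the geometry, in Python for brevity:

```python
def ccw(a, b, c):
    # Positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    # Proper (interior) intersection test for segments p1p2 and p3p4.
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_self_intersecting(points):
    # points: the polyline's vertices, e.g. sampled from the user's stroke.
    # O(n^2) pairwise check; adjacent segments share an endpoint, so skip them.
    segs = list(zip(points, points[1:]))
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):
            if segments_cross(*segs[i], *segs[j]):
                return True
    return False

# A figure-eight style stroke crosses itself:
print(is_self_intersecting([(0, 0), (2, 2), (2, 0), (0, 2)]))  # True
```

For a long stroke you could swap the O(n^2) loop for a sweep-line test (Shamos-Hoey), but for the point counts of a hand-drawn gesture the simple version is usually fast enough.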