How do I determine the shape of a child of a MultiChildRenderObjectWidget in Flutter?

I'm working on a widget that displays a graph of nodes and edges using a MultiChildRenderObjectWidget that accepts a list of node widgets as children. I can determine the size and position of the children during layout and thus have the graph edges align with the square intrinsic size of the nodes. However, what if the nodes are not square (if they have a border radius for example)? Then the edges do not line up with the node's border on the diagonals. Here's a picture of what I mean:
My first guess on how to do this would be to lay out all children and then, during painting, keep performing hit tests along the edge line until the hit test doesn't find the child. Is there a better way of doing this?

I like this question :)
As the scope of the question is broad, I will also present you with a broad answer. That means that this is not a specific implementation but rather an explanation of the concepts needed for this.
Hit testing
You presented hit testing as a way to deal with this issue. I believe that this is not feasible in most cases, let me explain.
Iteration problem: "keep performing hit tests along the edge line" - maybe there is a good algorithm for doing this in a somewhat efficient fashion; however, if you think about it, you would have to perform a lot of checks to get results, depending on the approach you take (the difficult question here is how you determine success for your algorithm, i.e. when it should stop searching).
Also note that "Hit testing requires layout to be up-to-date but does not require painting to be up-to-date.", which means that hitTest is not intended to rely on painting - I am also not aware of a way to perform hit tests on a canvas, so the idea of easily checking where the canvas painted might not actually be possible.
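For illustration, this is roughly what the cheapest variant I can think of would look like: a bisection along the edge line between a point that hits the node and one that does not. This is only a hypothetical sketch (findBoundary and the contains callback are made-up names standing in for your hit-test results):

```dart
import 'dart:ui';

/// Bisects along the segment from [inside] (a point known to hit the node)
/// to [outside] (a point known to miss it) until the boundary is pinned
/// down to [tolerance] logical pixels.
Offset findBoundary(
  Offset inside,
  Offset outside,
  bool Function(Offset point) contains, {
  double tolerance = 0.5,
}) {
  while ((outside - inside).distance > tolerance) {
    final Offset mid = (inside + outside) / 2;
    if (contains(mid)) {
      inside = mid; // Still inside the node; search further out.
    } else {
      outside = mid; // Already outside; search further in.
    }
  }
  return inside;
}
```

Even this needs on the order of log2(length / tolerance) checks per edge, and it has to run again every time the edges repaint, which is exactly the kind of cost I would want to avoid.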
Parent data
The way I would approach this problem is using parent data, specifically BoxParentData.
Ideally, you would paint your nodes using render objects as well because that allows you to work with the parent data easily.
Before I go into a little bit of how it can be implemented, here is my idea:
You have a render object container (your MultiChildRenderObjectWidget) that can handle your nodes.
The nodes will have GraphContainerNodeParentData (example name).
Each node paints based on a description of the shape. This description could be a Path (you could use PathMetrics to evaluate that later) or something simpler if you can find a way to simplify e.g. the description of the rounded rectangle.
The node sets that shape description as its parent data (variables in the GraphContainerNodeParentData).
The render object container will be able to read the GraphContainerNodeParentData, which contains the information about the shape. Now, you will be able to go through your children during painting and read the parent data, where the shape description is stored → problem solved :)
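To make this a bit more concrete, here is a minimal sketch of such a container (all names are example names as above; layout is reduced to the bare minimum and the actual edge painting is left out):

```dart
import 'package:flutter/rendering.dart';

/// Parent data through which each node describes its own outline.
class GraphContainerNodeParentData extends ContainerBoxParentData<RenderBox> {
  /// The node's outline in its own coordinate space, e.g. a rounded rect.
  Path? shape;
}

class RenderGraphContainer extends RenderBox
    with
        ContainerRenderObjectMixin<RenderBox, GraphContainerNodeParentData>,
        RenderBoxContainerDefaultsMixin<RenderBox,
            GraphContainerNodeParentData> {
  @override
  void setupParentData(RenderBox child) {
    if (child.parentData is! GraphContainerNodeParentData) {
      child.parentData = GraphContainerNodeParentData();
    }
  }

  @override
  void performLayout() {
    // For the sketch, just fill the available space.
    size = constraints.biggest;
    RenderBox? child = firstChild;
    while (child != null) {
      final parentData = child.parentData! as GraphContainerNodeParentData;
      child.layout(constraints.loosen(), parentUsesSize: true);
      // Real node positioning goes here; parentData.offset stays at zero
      // in this sketch.
      child = parentData.nextSibling;
    }
  }

  @override
  void paint(PaintingContext context, Offset offset) {
    defaultPaint(context, offset);
    RenderBox? child = firstChild;
    while (child != null) {
      final parentData = child.parentData! as GraphContainerNodeParentData;
      if (parentData.shape != null) {
        // The shape in the container's coordinate space; intersect your
        // edge lines with this instead of with the child's rect.
        final Path shape =
            parentData.shape!.shift(parentData.offset + offset);
        // ... paint the edges up to `shape` ...
      }
      child = parentData.nextSibling;
    }
  }
}
```

The node render objects would write their outline into parentData.shape whenever it changes, and the container can then intersect its edge lines with those shapes instead of with plain rectangles.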
Implementation
This is the way Stack et al. work. You can find the implementation of rendering for Stack in the framework:
Parent data implementation
Container render box implementation (btw, "container" in my answer refers to the concept of a render box that is a container for other render boxes; it has nothing to do with the Container widget :D)
Furthermore, I used an abstract way of dealing with parent data in my open source Flutter Clock submission. If you are interested in understanding parent data better, it could be helpful. The abstract multi child (container) render object can be found here.
Simplification
You might not need to go that deep (depending on what you are trying to achieve).
You can also set parent data using a ParentDataWidget and potentially combine that with simpler ways of composing your shapes.
For example, you could just use a ClipRRect or something with a specific border radius and pass that border radius to the parent data. With some math, you will always be able to find the correct edges for your shapes with variable border radii in your multi child render object paint method :)
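As a sketch of that simpler route (GraphNodeData and GraphNodeParentData are hypothetical names; your container is assumed to set up this parent data type in setupParentData as in the sketch above):

```dart
import 'package:flutter/widgets.dart';

class GraphNodeParentData extends ContainerBoxParentData<RenderBox> {
  double borderRadius = 0;
}

/// Lets each node declare its border radius to the graph container,
/// analogous to how Positioned talks to Stack.
class GraphNodeData extends ParentDataWidget<GraphNodeParentData> {
  const GraphNodeData({
    super.key,
    required this.borderRadius,
    required super.child,
  });

  final double borderRadius;

  @override
  void applyParentData(RenderObject renderObject) {
    final parentData = renderObject.parentData! as GraphNodeParentData;
    if (parentData.borderRadius != borderRadius) {
      parentData.borderRadius = borderRadius;
      // Only the container's painting depends on this value.
      (renderObject.parent as RenderObject?)?.markNeedsPaint();
    }
  }

  @override
  Type get debugTypicalAncestorWidgetClass =>
      MultiChildRenderObjectWidget; // In practice: your container's type.
}
```

Each node would then be wrapped like GraphNodeData(borderRadius: 12, child: ClipRRect(borderRadius: BorderRadius.circular(12), child: ...)), so the clip and the parent data always agree.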
Abstraction
If you do not need to handle abstract cases, i.e. in your case all kinds of different shapes (which could be implemented using the parent data shape description as I outlined), you could also just leave out all of this.
Imagine you always use the same border radius. Why would you worry about even passing parent data then? You could simply calculate where the edges are based on the size when you have a fixed border radius or fixed shape.
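For example, with a fixed rounded rectangle, "finding the correct edges" reduces to intersecting a ray from the node's center with either a side of the rect or one of the corner arcs. A minimal sketch of that math (edgeAnchor is a hypothetical helper; only dart:ui geometry types are used):

```dart
import 'dart:math' as math;
import 'dart:ui';

/// Where a ray from the center of a [size]-sized rounded rect (corner
/// radius [radius]) in direction [direction] crosses the outline,
/// in coordinates relative to the rect's center.
Offset edgeAnchor(Size size, double radius, Offset direction) {
  final Offset d = direction / direction.distance; // Normalize.
  final double w = size.width / 2, h = size.height / 2;

  // 1. Intersect with the plain rectangle first.
  final double tx = d.dx == 0 ? double.infinity : w / d.dx.abs();
  final double ty = d.dy == 0 ? double.infinity : h / d.dy.abs();
  double t = math.min(tx, ty);
  Offset p = d * t;

  // 2. If that point lies in a corner region, intersect the corner arc instead.
  if (p.dx.abs() > w - radius && p.dy.abs() > h - radius) {
    final Offset c =
        Offset((w - radius) * p.dx.sign, (h - radius) * p.dy.sign);
    // Solve |t*d - c|^2 = radius^2 for t; a == 1 since d is normalized.
    final double b = -2 * (d.dx * c.dx + d.dy * c.dy);
    final double cc = c.distanceSquared - radius * radius;
    t = (-b + math.sqrt(b * b - 4 * cc)) / 2; // Larger root: the exit point.
    p = d * t;
  }
  return p;
}
```

An edge from node A to node B would then start at centerA + edgeAnchor(sizeA, radiusA, centerB - centerA) and end at the mirrored call for B.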
So I want you to keep in mind that even though I proposed this abstract way of dealing with it (which is not difficult at all to work with when you understand but can be cumbersome to get into), you should find the simplest way of solving the problem for your specific case.
More abstraction is always possible - I could e.g. pour a lot of effort into something like this, creating an extremely abstract API that can handle shapes of any kind (using PathMetrics e.g.) to always find the perfect spots, no matter what kind of cubics you used to paint your nodes. However, that might be completely unnecessary and even lead you off track if you are not able to handle the more difficult solution.
Approach 1: abstraction for all cases
If you are looking for something abstract, look at my canvas_clock implementation for inspiration - it uses basically only RenderBoxes, so you will find what you are searching for in that :) In hindsight, the code quality is not amazing, the structure was not well chosen, and it obviously glosses over hit testing, intrinsic sizing, etc., however, for what it does, it goes the way of the abstract extreme (:
Approach 2: pragmatism for a specific case
There are a bunch of existing abstractions (like ParentDataWidget and CustomPainter) that can be used instead and you might not even need to handle different shapes (just a bit of math if you e.g. always draw the same rounded rectangle).
If you are only interested in one specific shape, I think that most of the parent data stuff is not strictly necessary :)
Conclusion
I think that I presented you with a few approaches for how this could be pulled off. I did not go into any specifics (maths or how to do it using PathMetrics - hint: you can use one Path object for canvas.drawPath and also extract information using PathMetrics), however, that is due to the broad nature of the question.
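To expand that hint just a little (the rounded rect here is only an example shape):

```dart
import 'dart:ui';

void paintNode(Canvas canvas, Paint paint) {
  final Path shape = Path()
    ..addRRect(RRect.fromRectAndRadius(
        const Rect.fromLTWH(0, 0, 100, 60), const Radius.circular(16)));

  canvas.drawPath(shape, paint); // Draw with the path...

  // ...and measure the very same path. Each contour yields a PathMetric
  // whose getTangentForOffset gives a position (and direction) at any
  // distance along the outline.
  for (final PathMetric metric in shape.computeMetrics()) {
    final Tangent? tangent = metric.getTangentForOffset(metric.length / 2);
    if (tangent != null) {
      final Offset pointOnOutline = tangent.position;
      // e.g. test pointOnOutline against your edge line here.
    }
  }
}
```

The same Path drives both the drawing and the measuring, so the two can never get out of sync.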
I hope that this information was useful to you in any way - I sure did enjoy sharing my thoughts :)
Btw, I am sorry for the ramble. I would consider this a low quality answer because I only quickly wrote down my thoughts instead of thoroughly structuring the answer and conducting some more research.

Related

Some questions about the render order of the Uis in Unity3D

Found something beyond my understanding.
[Screenshots: two images with a text overlapping one of them; how they're ordered; 3 batches!]
This is confusing me. As some official article I've read, "Unity UIs are constructed back-to-front, with objects’ order in the hierarchy determining their sort order. Objects earlier in the hierarchy are considered behind objects later in the hierarchy. Batches are built by walking the hierarchy top-to-bottom and collecting all objects which use the same material, the same texture and do not have intermediate layers."
In my understanding, the text should be rendered before the two images, and the batches should be 2 rather than 3. So what's really happening here?
Because the text renders first, when the square renders, it draws over the top of the already-drawn pixels.
"As for batches, batching means that it takes all the same elements, that have the same texture and material, and tries to put them in 1 draw call if it can. Text, on the other hand, is always in a different atlas. So if you have an Image followed by Text , you'll have 2 draw calls as they cannot be batched together." - referenced here
You may need to enable static or dynamic batching in the Player Settings as well, if they are static, try marking them batching static in the inspector. I would refer to the link above for more of the reasons as to why batching may not work, as it seems there can be plenty of reasons!
EDIT: Are you using the default Unity text, or TextMesh Pro? It will most likely work the same in terms of render order and batches, but if you are not using TMP, you should be: it is much more efficient, renders better, and has more capabilities in general. There is a good reason Unity bought the product to adapt into their base engine.

Object Tracking in non static environment

I am working on a drone-based video surveillance project, and I am required to implement object tracking in it. I have tried conventional approaches, but these seem to fail due to the non-static environment.
This is an example of what I would want to achieve. But this uses background subtraction, which is impossible to achieve with a non-static camera.
I have also tried feature based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit : An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user will make a bounding box which will define the region of interest. The drone now has to start tracking whatever is within this region of interest.
Tracking local features (like SURF) won't work in your case. Training a classifier (like Boosting with HAAR features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object, not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object, the bounding box will also contain background that changes as soon as your target object moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object may change (e.g. a person turns or drops their jacket, a vehicle catches a reflection of the sun, etc.), or the object may get (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially a lot of different objects, possibly unknown a priori, to track and you cannot train a classifier for each one of these.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc.
Luckily for us, OpenCV 3.0.0 has an implementation of TLD, and you can find sample code here (there is also a Matlab + C implementation on the aforementioned site).
The main drawback is that this method can be slow. You can test whether that's an issue for you. If so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, but this depends on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
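For what it's worth, that pipeline is only a few lines; here is a minimal language-agnostic sketch over raw grayscale buffers (written in Dart to match the other sketches on this page; in practice you would use the equivalent OpenCV or Computer Vision Toolbox calls, and motionMask is a made-up name):

```dart
import 'dart:typed_data';

/// Frame differencing on two same-sized grayscale frames: threshold the
/// absolute difference, then erode once to suppress single-pixel noise.
Uint8List motionMask(
  Uint8List previous,
  Uint8List current,
  int width,
  int height, {
  int threshold = 25,
}) {
  final Uint8List diff = Uint8List(width * height);
  for (int i = 0; i < diff.length; i++) {
    diff[i] = (current[i] - previous[i]).abs() > threshold ? 255 : 0;
  }

  // 3x3 erosion: keep a pixel only if all its neighbours moved too.
  final Uint8List mask = Uint8List(width * height);
  for (int y = 1; y < height - 1; y++) {
    for (int x = 1; x < width - 1; x++) {
      bool keep = true;
      for (int dy = -1; dy <= 1 && keep; dy++) {
        for (int dx = -1; dx <= 1 && keep; dx++) {
          keep = diff[(y + dy) * width + (x + dx)] == 255;
        }
      }
      mask[y * width + x] = keep ? 255 : 0;
    }
  }
  return mask;
}
```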
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.

Creating simplified bounds with an inaccuracy threshold

Given a set of non-rotated AABB bounds, I'm hoping to create a simpler set of bounds from the original set, that allows for a specified amount of inaccuracy.
Some examples:
I'm working with this in Unity with Bounds, but it's just basic AABB comparison stuff, nothing Unity-specific. I figure someone must have worked out a system for this at some point in the past, but I had no luck searching around. Encapsulating bounds are easy, but this is harder, since you can't just iterate through each box one by one. Sometimes a simpler solution can only be seen by looking at the whole thing.
Fast performance isn't critical but would be nice. Inaccuracy is OK in both directions (i.e. the bounds may cover a little less than the actual size or a little more). If it helps, I can expect all bounds in the original set to be connected somewhere - no free-floating pieces in a separate group.
I don't expect anyone to write up a whole system to solve this, I'm more hoping that it's already been solved or that maybe there's an obvious process to achieve it that I haven't thought of yet.
This sounds like something that could be handled with the Surface Area Heuristic (SAH). SAH is commonly used in ray tracing to build better tree-like structures in which the triangles are stored. There are multiple sources discussing it in more depth; a good one is chapter 7.3 of Wald's thesis.
The basic idea of the SAH build is to start with the whole space and divide it recursively. The division position is decided by sweeping through all reasonable positions and calculating the surface area of both child nodes. The reasonable positions are the positions where any triangle has its upper or lower bound. After sweeping through all the candidates, the division with the smallest total surface area in the children is used (see the sketch below).
If SAH is not a good idea for your application, you could use a similar sweep through all candidates but calculate, for example, the extra space inside the AABBs.
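A minimal sketch of such a sweep (in Dart to match the other snippets on this page; you would port it to C# over Bounds in Unity; Aabb and bestSplit are made-up names). Boxes are assigned to a side by their centers, and the candidate with the smallest summed child surface area wins:

```dart
import 'dart:math' as math;

/// A minimal axis-aligned box (3D), just enough to sketch an SAH sweep.
class Aabb {
  Aabb(this.min, this.max);
  final List<double> min; // [x, y, z]
  final List<double> max;

  double get surfaceArea {
    final double w = max[0] - min[0], h = max[1] - min[1], d = max[2] - min[2];
    return 2 * (w * h + w * d + h * d);
  }

  static Aabb enclose(Iterable<Aabb> boxes) {
    final lo = [double.infinity, double.infinity, double.infinity];
    final hi = [-double.infinity, -double.infinity, -double.infinity];
    for (final Aabb b in boxes) {
      for (int a = 0; a < 3; a++) {
        lo[a] = math.min(lo[a], b.min[a]);
        hi[a] = math.max(hi[a], b.max[a]);
      }
    }
    return Aabb(lo, hi);
  }
}

/// Sweeps all candidate split positions (every box's lower and upper bound)
/// on [axis] and returns the split with the smallest summed child surface
/// area, or NaN if no split separates the boxes.
double bestSplit(List<Aabb> boxes, int axis) {
  final Set<double> candidates = {
    for (final Aabb b in boxes) ...[b.min[axis], b.max[axis]],
  };
  double best = double.nan, bestCost = double.infinity;
  for (final double split in candidates) {
    final List<Aabb> left =
        boxes.where((b) => (b.min[axis] + b.max[axis]) / 2 < split).toList();
    final List<Aabb> right =
        boxes.where((b) => (b.min[axis] + b.max[axis]) / 2 >= split).toList();
    if (left.isEmpty || right.isEmpty) continue;
    final double cost =
        Aabb.enclose(left).surfaceArea + Aabb.enclose(right).surfaceArea;
    if (cost < bestCost) {
      bestCost = cost;
      best = split;
    }
  }
  return best;
}
```

Applied recursively, this gives you a tree of boxes; you can stop subdividing once the wasted space (or surface area) falls under your inaccuracy threshold.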

Is using MapServer to merge several MapLayers on runtime to use with Leaflet a good idea?

MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000x19,000 px) and then used MapTiler to create the map resources needed for Leaflet, easily included within a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (= filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, ran it through MapTiler, and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable at this point, but we are expecting several more filters to come and therefore want to stay scalable in the future.
This means we will somehow have to change our approach of generating the Layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics-based map, I thought there had to be better alternatives. But it seems that we have a rare set of requirements: my research mostly ended in dead ends, especially since most cases only cover REAL geographical maps, whereas what we have is a raster map. I also thought about somehow putting the map into GeoJSON or redrawing it directly with SVG, but since we have LOTS of single elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with the bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png, which then gets delivered to Leaflet within ONE layer. I have spent some hours researching this, but I keep running into dead ends (again, most people deal with georeferenced data, not with custom raster maps).
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so this would only work up to a certain number of filters (which will probably increase); therefore, I would prefer another solution (this is only a last resort).
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about Union Layers here) and therefore deliver only ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer, and I'm therefore not even sure whether this is a supported use case or whether it's capable of doing this, and more importantly: whether it would really give us a performance boost, since it probably requires a lot of server-side logic.
Before I spend more hours trying this out:
Can someone who already worked with MapServer give me some feedback if that is even a good idea or if I am misunderstanding something with MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers. Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles, so in your case, where you have 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack, and it takes care of it for you. You can also do this with MapServer. The difference is that MapCache is a lightweight Apache module that just works with tiles and is probably a little faster, while MapServer is a CGI process that is efficient at rendering and combining raster layers but is probably not as fast as MapCache for simple assembly of tiles.

Tips on creating a custom view layout for a diagram

I need to create an algorithm to layout some hierarchical data but have never done this kind of thing before and need some broad tips.
Basically I need to recreate this diagram (with dynamic data):
diagram: http://dl.dropbox.com/u/15126868/diagram.png
I don't have a problem with most of it but need help with two things:
How do I approach writing a layout algorithm?
Should I use UIView subclasses for all discs, or use Quartz? (I do need interaction.)
Any suggestions most welcome. I don't need too much detail.
A bit more detail:
I'm currently thinking I should use UIView subclasses and layoutSubviews. Trouble is I need to know the size (at least roughly) of all nodes before I can start to position them. Then, as the positioning involves rotation, I may need to adjust child positioning again - and I can't add labels until after any rotation.
Other considerations seem to be: that the presentation area is rectangular, not square; that I can't spill off the page; and that I will need to animate changes to the sizes of the discs.
Any pointers would be great, thanks.
This sort of thing is very difficult.
Interestingly, perhaps the main initial constraint here is the size of the typography.
In the example given: observe that they could have chosen a somewhat larger SCPT** (perhaps 10%-15% larger) or a somewhat smaller one and it would have still worked. They made an aesthetic decision on the SCPT.
White space is critical to design. Their particular graphic designer happened to like the particular feel of white space which you see. But it would have by no means been "wrong" with a smaller SCPT. Further, observe they could have used an even larger SCPT ... IF ... they used a smaller point size on the typography.
Note that in any event you simply won't be able to display that much type that small on an iPad (or Fone4).
So straight away you have to make decisions about how the type will appear, popup, audio or whatever. Even the white type ("on the discs" type) will give you trouble.
You will have to do lots of tests with Photoshop first, on your iPad, before even proceeding with an algorithm. So purely for what it's worth...
Here's how I personally would do this sort of thing. In general plan: I would try to do a squishy algorithm that retries itself until it finds a result it is happy with.
IMHO, based on previously doing this type of thing: this problem is too hard to solve in one go with some particularly smart-ass heuristic. Since there is no one smart-ass heuristic that will save the day, I'd do this:
1) calculate the total trillions to display. (it looks like about 2.5 is the total in the example image)
2) guess a SCPT value to begin with. what about for example "18" based on the actual image at the screen size we see above as posted inside your question.
3) put the big one (sun) in the dead center, and for the middle ones (planets) -- just choose a very easy heuristic, what about from biggest to smallest going anticlockwise starting at the top left (don't try to get cleverer than that with that part of the problem - which indeed could be a huge research project purely on its own) .. and do the same with the small ones (moons).
4) for the sticks between planets and moons - adopt a trivial solution (like "always 0.5 cm"!!) and that's that. with AI you gotta cut your losses .. everywhere! :) Fix the moons to the planets and forget about them.
5) Now a hard part .. run some sort of heuristic over them that evenly balances what you have so far. Treat color as mass and no color as no mass, and move the "sun" until it is balancedish. (To be clear, as an example, that would likely be downwards if you followed the "planet" layout mentioned in 3.) Maybe also move all the planet/moon systems in and out to try to balance it.
6) next the iteration. look at that result and decide if you like it! go back to (2) and pick a new value. (maybe "16!" for example)
7) there are two possible outcomes here. It might be that during development, there is one magic value for SCPT that always works. Perhaps "14.3" or "18.2" or whatever. If you find such a value, never tell anyone. Keep it as your own secret information!!!! Milk it for everything it is worth with clients. Conversely, and more difficult: you might find you need a different value each time. In that case, your AI will have to iterate through values on its own until it finds one it likes (for example, by determining whether all your labels fit or not .. and obvious things like "are they touching", "all on screen", etc.).
Anyway FWIW (perhaps nothing) that is what I would do - an iterative approach based on a first guess for the SCPT.
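Spelled out as code, the outer loop from steps 2, 6, and 7 might look like this (a sketch in Dart to match the other snippets here; Layout and layoutDiagram are hypothetical stand-ins for steps 3-5 and their fit checks):

```dart
/// Hypothetical result of one layout pass (steps 3-5 above).
class Layout {
  Layout({required this.fits});

  /// Whether all labels fit, nothing touches, everything is on screen...
  final bool fits;
}

/// Stand-in for actually running steps 3-5 at a given SCPT.
Layout layoutDiagram(double scpt) {
  // Dummy check purely so the sketch runs; your real checks go here.
  return Layout(fits: scpt <= 14.3);
}

/// Steps 6-7: keep guessing SCPT values until the checks are happy.
double findWorkingScpt({
  double initialGuess = 18,
  double shrinkFactor = 0.9, // Try a 10% smaller SCPT per failed attempt.
  int maxAttempts = 20,
}) {
  double scpt = initialGuess;
  for (int attempt = 0; attempt < maxAttempts; attempt++) {
    if (layoutDiagram(scpt).fits) {
      return scpt; // Found a value the checks like.
    }
    scpt *= shrinkFactor; // Didn't like the result; try a smaller value.
  }
  return scpt; // Give up and use the last attempt.
}
```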
Incidentally: you may well want to buy and study the classic and brilliant book on this sort of display of information!!! Everyone should have a copy.
The Visual Display of Quantitative Information
by Edward R. Tufte
ISBN 0961392142
Regarding the mechanics of laying out the image: you should use Quartz or any other low-level drawing - forget about UIViews and the like. You should surely completely separate the logic from the drawing layer, so (even if you do want to change to UIViews, OpenGL ES, or whatever) it's only a few lines of code that change.
Hope it helps somehow.
Notes...
** SCPT .. square centimeters per trillion
Followup...
"To keep the logic separate would you use a manager-type pattern?"
To be honest: if I was doing it, I would just start a whole new app purely for the "research" of getting this part, this challenge, working right. In that app (to be honest!) I would make bugger all effort to do anything in any tidy manner whatsoever! :-/ Globals everywhere! :) Unfortunately for me I can only think of the one thing at a time, so at that stage I would only be thinking about the algorithm, per se.
I believe, once you've cracked the problem per se, when you come to implement it in a bigger project ... really, FWIW, if it was me, I'd simply make it a class (let's say AmazingClass), nothing more complicated than that. Personally, I would set the data somewhere separate (whether in a DB or just an array or whatever) and I would just let the AmazingClass take care of getting the data, even. (My thinking: you never know how the hell you're going to need the data and when, at what point in the process of AmazingClass. So just give up and let AmazingClass take it as and when it wants it.)
If you are familiar with these awesome-sounding manager-patterns of which you speak - yeah, why not! In short I would heavily separate it out as much as possible. I'm not good enough to speak on the best way to do that - but just completely separate it out somewhere. Sorry I can't help on that one.