I need to create an algorithm to lay out some hierarchical data, but I have never done this kind of thing before and need some broad tips.
Basically I need to recreate this diagram (with dynamic data):
Diagram: http://dl.dropbox.com/u/15126868/diagram.png
I don't have a problem with most of it but need help with two things:
How do I approach writing a layout algorithm?
Should I use UIView subclasses for all discs, or use Quartz? (I do need interaction.)
Any suggestions most welcome. I don't need too much detail.
A bit more detail:
I'm currently thinking I should use UIView subclasses and layoutSubviews. The trouble is that I need to know the size (at least roughly) of all nodes before I can start to position them. Then, as the positioning involves rotation, I may need to adjust child positioning again, and I can't add labels until after any rotation.
Other considerations seem to be: that the presentation area is rectangular, not square; that I can't spill off the page; and that I will need to animate changes to the sizes of the discs.
Any pointers would be great, thanks.
This sort of thing is very difficult.
Interestingly, perhaps the main initial constraint here is the size of the typography.
In the example given: observe that they could have chosen a somewhat larger SCPT** (perhaps 10%-15% larger) or a somewhat smaller one, and it would still have worked. They made an aesthetic decision on the SCPT.
White space is critical to design. Their particular graphic designer happened to like the particular feel of white space which you see. But it would by no means have been "wrong" with a smaller SCPT. Further, observe that they could have used an even larger SCPT ... IF ... they used a smaller point size on the typography.
Note that in any event you simply won't be able to display that much type, that small, on an iPad (or iPhone 4).
So straight away you have to make decisions about how the type will appear: popup, audio, or whatever. Even the white type (the "on the discs" type) will give you trouble.
You will have to do lots of tests with Photoshop mockups on your iPad before even proceeding with an algorithm. So, purely for what it's worth...
Here's how I personally would do this sort of thing. In general plan: I would try to do a squishy algorithm that retries itself until it finds a result it is happy with.
IMHO, based on previously doing this type of thing: this problem is too hard to get done in one go with some particularly smart-ass heuristic. Since there is no one smart-ass heuristic that will save the day, I'd do this:
1) Calculate the total trillions to display (it looks like about 2.5 is the total in the example image).
2) Guess an SCPT value to begin with: what about, for example, "18", based on the actual image at the screen size posted inside your question.
3) Put the big one (the sun) in the dead center, and for the middle ones (the planets) just choose a very easy heuristic, say from biggest to smallest going anticlockwise starting at the top left (don't try to get cleverer than that with this part of the problem, which could be a huge research project purely on its own). Do the same with the small ones (the moons).
4) For the sticks between planets and moons, adopt a trivial solution (like "always 0.5 cm"!!) and that's that. With AI you gotta cut your losses ... everywhere! :) Fix the moons to the planets and forget about them.
5) Now a hard part: run some sort of heuristic over them that evenly balances what you have so far. Treat color as mass and no color as no mass, and move the "sun" until it is balanced-ish. (To be clear, as an example, that would likely be downwards if you followed the "planet" layout mentioned in 3.) Maybe also move all the planet/moon systems in and out to try to balance it.
6) Next, the iteration: look at that result and decide if you like it! If you don't, go back to (2) and pick a new value (maybe "16", for example).
7) There are two possible outcomes here. It might be that, during development, you find there is one magic value for SCPT that always works: perhaps "14.3" or "18.2" or whatever. If you find such a value, never tell anyone. Keep it as your own secret information!!!! Milk it for everything it is worth with clients. Conversely, and more difficultly, you might find you need a different value each time. In that case, your AI will have to iterate through values on its own until it finds one it likes (for example, by determining whether all your labels fit, plus obvious things like "are they touching" and "is everything on screen").
Anyway, FWIW (perhaps nothing), that is what I would do: an iterative approach based on a first guess for the SCPT. A rough sketch of that outer loop follows.
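For what it's worth, here is a minimal Swift sketch of that loop. Every name in it (Disc, layout, looksAcceptable, solve, pointsPerCm) is hypothetical, and all the interesting geometry is elided; it only shows the guess, lay out, judge, retry shape of the thing:

```swift
import Foundation
import CoreGraphics

let pointsPerCm: CGFloat = 72.0 / 2.54   // 72 points per inch

struct Disc {
    let trillions: Double
    var center: CGPoint = .zero
    var radius: CGFloat = 0
}

// Steps 3-5: size every disc for this SCPT, drop the sun in the center,
// then (elided) place planets/moons and nudge things until balanced-ish.
func layout(_ discs: [Disc], scpt: Double, in bounds: CGRect) -> [Disc] {
    var placed = discs.sorted { $0.trillions > $1.trillions }
    for i in placed.indices {
        // Area in cm^2 is trillions * SCPT, so radius follows from area = pi r^2.
        let radiusCm = sqrt(placed[i].trillions * scpt / .pi)
        placed[i].radius = CGFloat(radiusCm) * pointsPerCm
    }
    if !placed.isEmpty {
        placed[0].center = CGPoint(x: bounds.midX, y: bounds.midY)   // the sun
    }
    // ... anticlockwise planet placement, fixed 0.5 cm sticks, balancing ...
    return placed
}

// Step 7's acceptance test: labels fit, nothing touches, all on screen.
func looksAcceptable(_ discs: [Disc], in bounds: CGRect) -> Bool {
    // ... the boring geometry checks go here ...
    return true
}

// Steps 2 and 6: start from a guessed SCPT and retry smaller until happy.
func solve(_ discs: [Disc], in bounds: CGRect) -> [Disc] {
    var scpt = 18.0                                   // first guess
    while scpt > 8.0 {                                // arbitrary floor
        let candidate = layout(discs, scpt: scpt, in: bounds)
        if looksAcceptable(candidate, in: bounds) { return candidate }
        scpt -= 2.0                                   // e.g. 18, 16, 14, ...
    }
    return layout(discs, scpt: scpt, in: bounds)      // best effort
}
```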
Incidentally: you may well want to buy and study the classic and brilliant book on this sort of display of information!!! Everyone should have a copy.
The Visual Display of Quantitative Information
by Edward R. Tufte
ISBN 0961392142
Regarding the mechanics of laying out the image: you should use Quartz or any other low-level drawing, and forget about UIViews and the like. You should surely completely separate the logic from the drawing layer, so that even if you do want to change to UIViews, OpenGL ES, or whatever, it's only a few lines of code that change. One way to cut that seam is sketched below.
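As a sketch of that separation (all names here are made up, not a real API): the layout logic only ever produces plain geometry, and a thin renderer protocol is the one seam where the drawing technology plugs in.

```swift
import CoreGraphics

protocol DiscRenderer {
    func draw(_ discs: [Disc])          // Disc from the sketch above
}

// A Quartz-backed renderer; an equivalent UIView- or OpenGL ES-backed
// one would conform to the same protocol without touching the layout code.
struct QuartzDiscRenderer: DiscRenderer {
    let context: CGContext
    func draw(_ discs: [Disc]) {
        for disc in discs {
            let box = CGRect(x: disc.center.x - disc.radius,
                             y: disc.center.y - disc.radius,
                             width: disc.radius * 2,
                             height: disc.radius * 2)
            context.addEllipse(in: box)
            context.fillPath()
        }
    }
}
```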
Hope it helps somehow.
Notes...
** SCPT: square centimeters per trillion
Followup...
"To keep the logic separate would you use a manager-type pattern?"
To be honest: if I was doing it, I would just start a whole new app purely for the "research" of getting this part, this challenge, working right. In that app (to be honest!) I would make bugger-all effort to do anything in any tidy manner whatsoever! :-/ Globals everywhere! :) Unfortunately I can only think of one thing at a time, so at that stage I would only be thinking about the algorithm per se.
I believe that once you've cracked the problem per se, and you come to implement it in a bigger project ... really, FWIW, if it was me, I'd simply make it a class (let's say AmazingClass), nothing more complicated than that. Personally, I would set the data somewhere separately (whether in a DB or just an array or whatever) and I would just let the AmazingClass take care of getting the data, even. (My thinking: you never know how the hell you're going to need the data, or when, at what point in the process of AmazingClass. So just give up and let AmazingClass take it as and when it wants it.)
If you are familiar with these awesome-sounding manager patterns of which you speak: yeah, why not! In short, I would heavily separate it out as much as possible. I'm not good enough to speak on the best way to do that, but just completely separate it out somewhere. Sorry I can't help on that one.
Related
I'm working on a widget that displays a graph of nodes and edges using a MultiChildRenderObjectWidget that accepts a list of node widgets as children. I can determine the size and position of the children during layout and thus have the graph edges align with the square intrinsic size of the nodes. However, what if the nodes are not square (if they have a border radius for example)? Then the edges do not line up with the node's border on the diagonals. Here's a picture of what I mean:
My first guess on how to do this would be to layout all children and then during painting, keep performing hit tests along the edge line until the hit test doesn't find the child. Is there a better way of doing this?
I like this question :)
As the scope of the question is broad, I will also present you with a broad answer. That means that this is not a specific implementation but rather an explanation of the concepts needed for this.
Hit testing
You presented hit testing as a way to deal with this issue. I believe that this is not feasible in most cases; let me explain.
Iteration problem: "keep performing hit tests along the edge line". Maybe there is a good algorithm for doing this in a somewhat efficient fashion; however, if you think about it, you would have to perform a lot of checks to get results, depending on the approach you take (the difficult question here is how you determine success for your algorithm, i.e. when it should stop searching).
Also note that "Hit testing requires layout to be up-to-date but does not require painting to be up-to-date.", which means that it is not intended to rely on painting in hitTest - I am also not aware of a way to perform hit tests on a canvas, so the idea of easily checking where the canvas painted might not actually be possible.
Parent data
The way I would approach this problem is using parent data, specifically BoxParentData.
Ideally, you would paint your nodes using render objects as well because that allows you to work with the parent data easily.
Before I go into a little bit of how it can be implemented, here is my idea:
You have a render object container (your MultiChildRenderObjectWidget) that can handle your nodes.
The nodes will have GraphContainerNodeParentData (example name).
Each node paints based on a description of the shape. This description could be a Path (you could use PathMetrics to evaluate that later) or something simpler if you can find a way to simplify e.g. the description of the rounded rectangle.
The node sets that shape description as its parent data (variables in the GraphContainerNodeParentData).
The render object container will be able to read the GraphContainerNodeParentData, which contains the information about the shape. Now, you will be able to go through your children during painting and read the parent data, where the shape description is stored → problem solved :)
Implementation
This is the way Stack et al. work. You can find the implementation of rendering for Stack in the framework:
Parent data implementation
Container render box implementation (btw, "container" in my answer refers to the concept of a render box that is a container for other render boxes; it has nothing to do with the Container widget :D)
Furthermore, I used an abstract way of dealing with parent data in my open source Flutter Clock submission. If you are interested in understanding parent data better, it could be helpful. The abstract multi child (container) render object can be found here.
Simplification
You might not need to go that deep (depending on what you are trying to achieve).
You can also set parent data using a ParentDataWidget and potentially combine that with simpler ways of composing your shapes.
For example, you could just use a ClipRRect or something with a specific border radius and pass that border radius to the parent data. With some math, you will always be able to find the correct edges for your shapes with variable border radii in your multi child render object paint method :)
Abstraction
If you do not need to handle abstract cases, i.e. in your case all kinds of different shapes (which could be implemented using the parent data shape description as I outlined), you could also just leave out all of this.
Imagine you always use the same border radius. Why would you worry about even passing parent data then? You could simply calculate where the edges are based on the size when you have a fixed border radius or fixed shape.
So I want you to keep in mind that even though I proposed this abstract way of dealing with it (which is not difficult at all to work with when you understand but can be cumbersome to get into), you should find the simplest way of solving the problem for your specific case.
More abstraction is always possible. I could, for example, pour a lot of effort into something like this, creating an extremely abstract API that can handle shapes of any kind (using PathMetrics, say) to always find the perfect spots, no matter what kind of cubics you used to paint your nodes. However, that might be completely unnecessary and could even lead you off track if you are not able to handle the more difficult solution.
Approach 1: abstraction for all cases
If you are looking for something abstract, look at my canvas_clock implementation for inspiration; it uses basically only RenderBoxes, so you will find what you are searching for in there :) In hindsight, the code quality is not amazing, the structure was not well chosen, and it obviously glosses over hit testing, intrinsic sizing, etc.; however, for what it does, it goes the way of the abstract extreme (:
Approach 2: pragmatism for a specific case
There are a bunch of existing abstractions (like ParentDataWidget and CustomPainter) that can be used instead, and you might not even need to handle different shapes (just a bit of math if you, for example, always draw the same rounded rectangle).
If you are only interested in one specific shape, I think that most of the parent data stuff is not strictly necessary :)
Conclusion
I think that I presented you with a few approaches for how this could be pulled off. I did not go into any specifics (maths or how to do it using PathMetrics - hint: you can use one Path object for canvas.drawPath and also extract information using PathMetrics), however, that is due to the broad nature of the question.
I hope this information was useful to you in some way; I sure did enjoy sharing my thoughts :)
Btw, I am sorry for the ramble. I would consider this a low quality answer because I only quickly wrote down my thoughts instead of thoroughly structuring the answer and conducting some more research.
I am developing (or at least trying to develop) a decently big real-time tactics game (something similar to an RTS) using SpriteKit.
I am using GameplayKit for pathfinding.
Initially I used SKActions to move the sprites along the path, but I quickly realized that this was a big mistake.
Then I tried to implement it with GKAgents (this is my current state)
I feel that GKAgents are very raw and premature. They also seem to follow some strange Newton's first law that makes them move forever (I can't think of any scenario where that would be useful, maybe for presentations at WWDC).
I also see that they have some angular speed to perform rotations, which I don't need at all and can't really find out how to disable...
Also, GKBehaviors given GKGoals seem to do some weird things...
Setting a behavior to avoid obstacles makes my units wiggle around them...
Setting a behavior with a follow-path goal completely ignores everything unless maxPredictionTime is low enough...
I am not even willing to tell what happens when I combine both them.
I feel broken...
I feel like I have 2 options now:
1) To struggle more with those agents, trying to make them behave as I wish
2) To roll all the movement on my own with help from GKObstacleGraph and pathfinding (which is buggy as well, I have to say: at some points it will generate the most awful path, like "go touch that obstacle, then reverse and touch that one, then go to the actual point", when a straight line would have worked from the beginning).
Question is:
Which of those options would be best?
One of the best ways (in SpriteKit/GameplayKit) to get the kind of behavior you're after is to recognize that path planning and path following need not be the same operation. GameplayKit provides tools for both — GKObstacleGraph is good for planning and GKAgent is good for following a planned path — and they work best when you combine the strengths of each.
(It can be a bit misleading that GKAgent provides obstacle avoidance; don't think of this in the same way as finding a route around obstacles, more like reacting to sudden obstacles in your way.)
To put it another way, GKObstacleGraph and GKAgent are like the difference between navigating with a map and safely driving a car. The former is where you decide to take CA-85 and US-101 instead of I-280. (And maybe reevaluate your decision once in awhile — say, to pick a different set of roads around a traffic jam.) The latter is where you, continuously moment-to-moment, change lanes, avoid potholes, pass slower vehicles, slow down for heavy traffic, etc.
In Apple's DemoBots sample code, they break this out into two steps:
Use GKObstacleGraph to do high level path planning. That is, when the bad guys are "here" and the hero is "way over there", and there are some walls in between, select a series of waypoints that roughly approximates a route from here to there.
Use GKAgent behaviors to make the character roughly follow that path while also reacting to other factors (like making the bad guys not step on each other and giving them vaguely realistic movement curves instead of simply following the lines between waypoints).
You can find most of the relevant stuff behind this in TaskBotBehavior.swift in that sample code. Start from addGoalsToFollowPath and look at both the places that gets called and the calls it makes. A minimal sketch of the two steps follows.
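This is not the DemoBots code itself, just a rough sketch of the plan-then-follow split using plain GameplayKit calls; `obstacles`, `badGuy`, `hero`, and `agent` are assumed to already exist in your scene:

```swift
import GameplayKit

// Step 1: plan. Build a graph that routes around the obstacle polygons
// and ask it for waypoints from the bad guy to the hero.
let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 20)
let start = GKGraphNode2D(point: vector_float2(Float(badGuy.position.x),
                                               Float(badGuy.position.y)))
let goal = GKGraphNode2D(point: vector_float2(Float(hero.position.x),
                                              Float(hero.position.y)))
graph.connectUsingObstacles(node: start)
graph.connectUsingObstacles(node: goal)
let waypoints = graph.findPath(from: start, to: goal).compactMap { $0 as? GKGraphNode2D }

// Step 2: follow. Hand the waypoints to the agent as goals, so it roughly
// tracks the route while still steering smoothly and dodging surprises.
if waypoints.count > 1 {
    let route = GKPath(graphNodes: waypoints, radius: 10)
    agent.behavior = GKBehavior(goals: [
        GKGoal(toFollow: route, maxPredictionTime: 0.5, forward: true),
        GKGoal(toStayOn: route, maxPredictionTime: 0.5),
        GKGoal(toAvoid: obstacles, maxPredictionTime: 0.5)
    ], andWeights: [1.0, 1.0, 5.0])
}
```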
As for the "moving forever" and "angular speed" issues...
The agent simulation is a weird mix of a motivation analogy (i.e. the agent does what's needed to move it toward where it "wants" within constraints) and a physics system (i.e. those movements are modeled like forces/impulses). If you take away an agent's goals, it doesn't know that it needs to stop — instead, you need to give it a goal of stopping. (That is, a movement speed goal of zero.) There might be a better model than what Apple's chosen here — file bugs if you have suggestions for design improvements.
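For instance, to tell an agent to brake (a sketch; `agent` is whatever GKAgent2D you are driving):

```swift
// Swap in a behavior whose only goal is "reach speed zero".
agent.behavior = GKBehavior(goal: GKGoal(toReachTargetSpeed: 0), weight: 1)
```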
Angular speed is trickier. The notion of agents' intrinsic physical constraints being sort of analogous to, say, vehicles on land or boats at sea is pretty well baked into the system. It can't really handle things like space fighters that have to reorient to vector their thrust, or walking creatures that can just as happily walk sideways or backwards as forward — at least, not on its own. You can get some mileage toward changing the "feel" of agent movement with the maxAcceleration property, but you're limited by the fact that said property covers both linear and angular acceleration.
Remember, though, that the interface between what the agent system "wants" and what "actually happens" in your game world is under your control. The easiest way to implement GKAgentDelegate is to just sync the velocity and position properties of the agent and the sprite that it represents. However, you don't have to do it that way — you could calculate a different force/impulse and apply it to your sprite.
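The straightforward version of that sync looks something like this (a sketch; the class and property names are mine, not from any sample):

```swift
import GameplayKit
import SpriteKit

// Keep an SKSpriteNode and a GKAgent2D in lockstep: feed the sprite's
// position in before the simulation ticks, copy the result back out after.
class AgentSpriteSync: NSObject, GKAgentDelegate {
    let sprite: SKSpriteNode
    init(sprite: SKSpriteNode) { self.sprite = sprite }

    func agentWillUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        agent2D.position = vector_float2(Float(sprite.position.x),
                                         Float(sprite.position.y))
    }

    func agentDidUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        sprite.position = CGPoint(x: CGFloat(agent2D.position.x),
                                  y: CGFloat(agent2D.position.y))
        sprite.zRotation = CGFloat(agent2D.rotation)
        // Nothing forces this 1:1 mapping; you could instead derive a
        // force here and hand it to the physics engine.
    }
}
```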
I can't comment yet, so I post this as an answer. I faced the same problem recently: the agent wiggling around the target, or the agent that keeps moving even after you remove the behavior. Then I realized that the behavior is just the algorithm controlling the movement; you can still access and set the agent's speed, position and angle by hand.
In my case, I have a critter entity that chases food in the scene. When it makes contact with the food agent, the food entity is removed. I tried many things to make the critter stop after eating the food (it would keep going in a straight line), and all I had to do was set its speed to 0. That is because the behavior influences not the position directly, but the speed/angle combination instead (from what I understand). When there is no goal for the entity, it doesn't "want" to change its state, so whatever speed and direction it has reached, it will keep; it simply won't update or change them. So unless you create a goal to make it want to stop, it will wiggle or keep going. The easy way is to set the behavior to nil and set the speed to 0 yourself.
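In code, that moment is just this (a sketch; `critterAgent` being the critter's GKAgent2D):

```swift
// On contact with the food: detach the steering and kill residual velocity.
critterAgent.behavior = nil  // nothing updates speed/angle anymore
critterAgent.speed = 0       // otherwise it coasts at its last speed forever
```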
If the behavior/goal system doesn't do it for the type of animation you are looking for, you can still use the agent system, customize the movement with the GKAgentDelegate protocol and the update method, and make it interact with other agents later on. You can even synchronize the agent with a node that is moved by the physics engine or by actions (or any other way).
I think the agent system is nice to keep around since you can use it later, even if it's only for special effects. But just as mixing actions and physics can give some weird results, mixing goals/behaviors with any other "automated" tool will probably result in erratic behavior.
You can also use the agent system for other stuff than moving an actual sprite around. For example, you could use an agent to act as a "target seeker" to simulate reaction time for your enemies. The agent moves around the scene and finds other agents, when it makes contact with a suitable target, the enemy entity would attack it (random idea).
It's not a "one size fits all" solution, but it's a very nice tool to have.
I am doing a project on face recognition. I have a dataset containing images of 21 actors (150 each). Now I want to increase the number of images per actor to 300+ for training purposes. How can I do this using MATLAB? One solution is to vary the contrast/brightness level of each image, but what are some other factors through which I can increase the number of images?
One option is to flip the images: if a person is looking to the right, after the flip he will be looking to the left.
Furthermore, depending on your toolkit and set of skills, you could use some more advanced techniques. If you can find some interesting characteristics in the pictures (eyes, nose, mouth, background), you could make some intelligent transformations: swap people's eyes, change the background, switch noses.
There are some particular parts of the faces which you could also distort, like the eyes and nose: stretch them. Maybe for bald guys you could build some synthetic hair, and so on...
You could do the contrast/brightness change, but usually it doesn't do so well, as your features probably have (almost) nothing to do with it, so it will just duplicate your data.
Anyway, as it's not a very large data set, if you don't have the skills to pull off the more advanced options I proposed, or the time to deal with it, you can do some of this manually. It won't take as much time as you think. And usually, with that amount of data, this will give a good boost to your results.
What you are looking for is called "data augmentation". Common transformations are mirroring (flipping left / right side of the image) and rotation of the image. You might also be able to zoom (crop) a part of the image.
Scaled versions along with the rotated ones may also help. If your features are not robust to changes such as lighting, contrast, etc., you can modify the images accordingly.
MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000 x 19,000 px) and then used MapTiler to create the needed map resources for Leaflet, easily included within a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (i.e. filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, map-tiled it, and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable, but we are expecting several more filters to come and therefore want to stay scalable in the future.
This means we will somehow have to change our approach of generating the Layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics-based map, I thought there had to be better alternatives. But it seems that we have a rare set of requirements, since my research mostly ended in dead ends; most cases only cover REAL geographical maps, but what we have is a raster map. I also thought about somehow putting the map into GeoJSON or redrawing it directly with SVG, but since we have LOTS of single elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png, which then gets delivered to Leaflet within ONE layer.
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so it would only work up to a certain number of filters (which will probably increase); therefore I would prefer another solution (this is only a last resort).
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about Union Layers here) and therefore deliver only ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer, and I'm therefore not even sure whether this is a supported use case or whether it's capable of doing this; more importantly, whether it would really give us a performance boost, since it probably requires a lot of logic server-side.
Before I spend more hours trying this out:
Can someone who has already worked with MapServer give me some feedback on whether this is even a good idea, or whether I am misunderstanding something about MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers. Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles: in your case, with 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack and it takes care of it for you. You can also do this with MapServer itself; the difference is that MapCache is a lightweight Apache module that works only with tiles and is probably a little faster. MapServer is a CGI process that is efficient at rendering and combining raster layers, but it is probably not as fast as MapCache for the simple assembly of tiles.
I have a path drawn in OpenGL ES. I can convert it to a CGPath if needed.
How would I check if it intersects itself (If the user created a complete loop)?
Graham Cox has some very interesting thoughts on how to detect the intersection of a CGPathRef and a CGRect, which is similar to your problem and may be educational. The underlying problem is difficult, and most practical solutions are going to be approximations.
You may also want to look at this SO article on CGPathRef intersection, which is also similar to your problem; some of the proposed solutions are in the same space as Graham's above.
Note: This answer is to an earlier version of the question, where I thought the problem was to determine if the path was closed or not.
I think a path is considered closed iff the current point == the starting point.
The easiest way I know of to check this is to keep track of these two points on your own, and check for equality. You can also use CGPathGetCurrentPoint, and only track the starting point to compare with this.
Here's a roundabout way to find the starting point, if it's hard to just keep track of it directly:
make a copy of the path
store its current point
call CGPathCloseSubpath
check to see if the current point changed
If it did change, the original path was open; otherwise it was closed. (A Swift sketch of this check follows.)
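In Swift terms, the whole dance collapses to a few lines (a sketch; the helper name is mine):

```swift
import CoreGraphics

// Copy-and-close trick: closing a subpath moves the current point back
// to the subpath's start, so an already-closed path won't budge.
func isClosed(_ path: CGPath) -> Bool {
    let copy = CGMutablePath()
    copy.addPath(path)
    let before = copy.currentPoint
    copy.closeSubpath()
    return before == copy.currentPoint   // unchanged means it was closed
}
```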
This is a way to check if a path composed of a single continuous segment is self-intersecting.
I'm sure that if you wanted a faster implementation, you could get one by using some good thinking and full access to the CGPath internal data. This idea focuses on quick coding, although I suspect it will still be reasonably fast:
Basically, take two copies of the path and fill them in two different ways. One fill uses CGContextEOFillPath, while the other uses CGContextFillPath. The results will differ iff the path is self-intersecting.
You can check if the result is different by blending the results together in difference blend mode, and testing if the resulting raw image data is all 0 (all black).
Hacky, yes. But also (relatively) easy to code.
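Here is roughly what that could look like with Core Graphics. Note this sketch compares raw grayscale rasterizations directly rather than using a difference blend; the function name and the crude resolution handling are my own:

```swift
import CoreGraphics

// Rasterize the path twice, once per fill rule, and compare the pixels.
// Even-odd and winding fills differ exactly where the path overlaps itself
// (with the figure-eight caveat from the addendum below).
func looksSelfIntersecting(_ path: CGPath) -> Bool {
    let bounds = path.boundingBox.insetBy(dx: -2, dy: -2)
    let width = max(Int(bounds.width), 1)
    let height = max(Int(bounds.height), 1)

    func rasterize(_ rule: CGPathFillRule) -> [UInt8] {
        var pixels = [UInt8](repeating: 0, count: width * height)
        pixels.withUnsafeMutableBytes { buffer in
            guard let ctx = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width,
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue)
            else { return }
            ctx.translateBy(x: -bounds.minX, y: -bounds.minY)
            ctx.setFillColor(gray: 1, alpha: 1)
            ctx.addPath(path)
            ctx.fillPath(using: rule)
        }
        return pixels
    }

    return rasterize(.evenOdd) != rasterize(.winding)
}
```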
** Addendum ** I just realized this won't work 100% of the time - for example, it won't detect a figure eight, although it will detect a pretzel.