I have created a whole heap of overlays using MKPolygon and displayed them with MKPolygonViews. This works fine, but one of the overlays has a huge number of points (about 800) and this causes memory and performance issues. I tried shouldRasterize on the MKPolygonView, but that had the opposite effect, which didn't surprise me.
Is there anything else I can do to increase performance besides lowering the number of points (which I am in the process of doing)?
This is an issue that is known to Apple but unlikely to change. Basically, with anything more than a couple of MKOverlayViews you will have performance issues no matter what your hardware. What you basically have to do is subclass MKPolygonView and merge all the MKPolygons into one MKPolygonView.
Code is available on the Apple Developer Forums, but as I didn't write it I don't think I should post it here.
I would look at reducing the number of points in the polygon, depending on where you got it from. Most geospatial manipulation libraries have functions that allow you to reduce the number of points in a polygon (all you need to do is supply an accuracy/tolerance measurement).
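For example, PostGIS has ST_Simplify and Shapely has simplify for exactly this. If you would rather do it yourself, here is a rough sketch of the usual Ramer-Douglas-Peucker approach in Python (the sample outline and the tolerance are made up); the same idea ports directly to Objective-C:

    import math

    # Rough sketch of Ramer-Douglas-Peucker simplification: drop points that deviate
    # from the overall shape by less than `tolerance` (the "accuracy measurement").
    # Coordinates and tolerance must be in the same (projected) units.

    def _point_line_distance(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        if (ax, ay) == (bx, by):
            return math.hypot(px - ax, py - ay)
        # perpendicular distance from p to the line through a and b
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        return num / math.hypot(bx - ax, by - ay)

    def simplify(points, tolerance):
        if len(points) < 3:
            return points
        # find the point farthest from the segment joining the two endpoints
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = _point_line_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= tolerance:
            return [points[0], points[-1]]      # everything in between is "close enough"
        left = simplify(points[: index + 1], tolerance)
        right = simplify(points[index:], tolerance)
        return left[:-1] + right                # stitch the halves, dropping the shared point

    outline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
    print(simplify(outline, tolerance=0.5))     # fewer points, same rough shape

The tolerance is the trade-off knob: the bigger it is, the fewer points survive and the cruder the outline becomes.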
I am doing a project on face recognition. I have a dataset containing images of 21 actors (150 each). Now I want to increase the number of images per actor to 300+ for training purposes. How can I do this using MATLAB? One solution is to vary the contrast/brightness level of each image, but what are some other ways I can increase the number of images?
One option is to flip the images: if a person is looking to the right, after the flip they will be looking to the left.
Furthermore, depending on your toolkit and set of skills, you could try some more advanced techniques. If you can locate interesting features in the pictures, like the eyes, nose, mouth, or background, you could make some intelligent transformations with them: swap people's eyes, change the background, switch noses.
There are particular parts of the faces you could also distort, like stretching the eyes and nose. Maybe for bald guys you could build some synthetic hair, and so on...
You could do the contrast/brightness change, but it usually doesn't help much, as your features probably have (almost) nothing to do with it, so it will essentially just duplicate your data.
Anyway, as it's not a very large dataset, if you don't have the skills to pull off the more advanced options I proposed, or the time to deal with them, you can do some of this manually. It won't take as long as you think, and usually, with that amount of data, it will give a good boost to your results.
What you are looking for is called "data augmentation". Common transformations are mirroring (flipping the image left/right) and rotating the image. You might also be able to zoom in on (crop) a part of the image.
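For illustration, here is what those transforms look like in Python with Pillow (the file name is a placeholder); the rough MATLAB equivalents would be flip, imrotate and imcrop:

    from PIL import Image

    # Three common augmentations: mirror, small rotation, and a centre crop scaled back up.
    # "actor_001.jpg" is a placeholder file name.
    img = Image.open("actor_001.jpg")

    mirrored = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)      # left/right mirror
    rotated = img.rotate(5, resample=Image.Resampling.BILINEAR)    # small in-plane rotation
    w, h = img.size
    crop = img.crop((int(0.05 * w), int(0.05 * h), int(0.95 * w), int(0.95 * h)))
    zoomed = crop.resize((w, h), Image.Resampling.BILINEAR)        # "zoom": crop, then scale back up

    for i, variant in enumerate([mirrored, rotated, zoomed], start=1):
        variant.save(f"actor_001_aug{i}.jpg")

Keep the rotations small (a few degrees either way); large rotations rarely occur in real face images, and the empty corners they introduce can hurt more than they help.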
Scaled versions along with the rotated ones may also help. If your features are not robust to changes such as lighting and contrast, you can modify the images accordingly.
I'm a complete illiterate when it comes to working with geographical data, so bear with me.
For our application we will be tracking a fairly large number of rapidly changing points on a map. It would be nice to be able to cache the location of these points in some kind of map-tile structure so it would be easy to find all points currently in the same tile or neighbouring tiles, making it easier to quickly determine the nearest neighbours, have special logic for specific tiles, etc.
Although we're working with one specific (but already large) location, it would be nice if a solution would scale to other locations as well. Since we would only cache tiles that concern the system, would just tiling the entire planet be the best option? The dimensions of a tile would then be measured in arc seconds/minutes, or is that a bad idea?
We already work with Postgres, and this seems like something that could be done with PostGIS (is this what rasters are?), but jumping into the documentation/tutorials without knowing exactly what I'm looking for is proving difficult. Any ideas?
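To make the tiling idea concrete, something like this naive grid bucketing is roughly what I have in mind (sketched in Python, with a made-up tile size and made-up points):

    from collections import defaultdict

    # Bucket points by a (tile_x, tile_y) key so same-tile and neighbouring-tile
    # lookups become plain dictionary hits.
    TILE_MINUTES = 30                      # one tile = 30 arc-minutes = 0.5 degrees

    def tile_key(lon, lat):
        size = TILE_MINUTES / 60.0
        return (int(lon // size), int(lat // size))

    points = {"a": (4.8952, 52.3702), "b": (4.8922, 52.3731), "c": (13.4050, 52.5200)}

    index = defaultdict(set)
    for name, (lon, lat) in points.items():
        index[tile_key(lon, lat)].add(name)

    def nearby(lon, lat):
        tx, ty = tile_key(lon, lat)
        hits = set()
        for dx in (-1, 0, 1):              # the tile itself plus its eight neighbours
            for dy in (-1, 0, 1):
                hits |= index.get((tx + dx, ty + dy), set())
        return hits

    print(nearby(4.9, 52.37))              # finds "a" and "b", but not "c"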
PostGIS is all that you need. It can store your points in any coordinate reference system, but you'll probably be using longitude/latitude. Are your points coming from a GPS device?
PostGIS uses GIST indexing, making the search for points close to a given point quite efficient. One option you might want to look at, seeing that you are interested in tiling, is to "geohash" your points. Basically, this turns an (X,Y) coordinate pair into a single "string" with a length depending on the level of partitioning. Nearby points will have the same geohash value (= 1 tile) and are then easily identified with standard database search tools. See this answer and related question for some more considerations and an example in PostgreSQL.
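To make that concrete, here is a minimal, self-contained geohash encoder in Python; PostGIS gives you the same thing via ST_GeoHash, so you would not normally write this yourself, and the sample coordinates are arbitrary:

    # Minimal geohash encoder, just to show how nearby points end up sharing a prefix.
    BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

    def geohash(lat, lon, precision=9):
        lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
        hash_chars, bits, bit_count, even = [], 0, 0, True
        while len(hash_chars) < precision:
            if even:                              # even bits split the longitude range
                mid = (lon_range[0] + lon_range[1]) / 2
                if lon >= mid:
                    bits = (bits << 1) | 1
                    lon_range[0] = mid
                else:
                    bits = bits << 1
                    lon_range[1] = mid
            else:                                 # odd bits split the latitude range
                mid = (lat_range[0] + lat_range[1]) / 2
                if lat >= mid:
                    bits = (bits << 1) | 1
                    lat_range[0] = mid
                else:
                    bits = bits << 1
                    lat_range[1] = mid
            even = not even
            bit_count += 1
            if bit_count == 5:                    # every 5 bits become one base32 character
                hash_chars.append(BASE32[bits])
                bits, bit_count = 0, 0
        return "".join(hash_chars)

    # Two points a few hundred metres apart typically share a long common prefix,
    # so truncating the hash gives you a "tile" id at whatever resolution you like.
    print(geohash(52.3702, 4.8952))
    print(geohash(52.3731, 4.8922))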
You do not want to look at rasters. These are gridded data, think aerial photography or satellite images, weather maps, etc.
But if you want a more specific answer you should give some more details:
How many points? How are they collected?
Do you have large clusters?
Local? Regional? Global?
What other data does this relate to?
Pseudo table structure? Data layout?
etc
More info = better answer
Cheers.
I want to draw many spheres. They are all the same except for their position. When the number of spheres increases to 10,000, it becomes very slow. I wonder if there is any method to draw the same thing many times quickly?
I did some experiments to find the problem.
At first I instantiated a simple object with 224 verts 10,000 times with dynamic batching. The result was like this:
Then I added two faces to the object and instantiated it 10,000 times again. There was no batching, but it became quicker:
The third time I increased the vert count 100-fold and instantiated the object only 100 times. It became much quicker:
I wonder what the difference between them is. Maybe I should use static batching to increase the speed?
What you are looking for is called instancing. Here is a starting resource for you:
http://docs.unity3d.com/ScriptReference/Object.Instantiate.html
Depending on what it is that you want to instantiate a thousand times, you can also check out the concept of a billboard. It is basically a planar object with a fixed texture that always faces the camera no matter what perspective you look at it from. It is mainly used for things that are far away or should not use too much performance (e.g. grass).
Another thing you need to watch out for is how many draw calls you make. Try to use draw call batching when possible.
MY PROBLEM
We're doing a project right now where we have to display a huge image (containing chemical compounds and elements, so not georeferenced) as a map within a web application (with Leaflet). The image itself is an Adobe Illustrator file, so it's actually a bunch of vector graphics. To make things easy, we just converted it into a large .png (27,000x19,000 px) and then used MapTiler to create the map resources Leaflet needs, easily included as a TileLayer.
The Problem is:
The user needs to be able to dynamically add and remove different layers (= filters) of the map to show more or less information from the picture. So we first created those layers within the Illustrator file, then exported every layer as its own transparent .png file, tiled it with MapTiler and included it as its own Leaflet layer.
Right now, we have 6 filter layers and two more base layers for the background and an overlay. This means that when all filters are activated (which is the default), we have 8 Leaflet layers stacked on top of each other at once. As you can imagine, this causes some performance issues in the browser, since Leaflet has to load and render 8 layers with all their tiles (depending on screen size, up to 25 at once) for every zoom or drag action. It's still bearable, but we are expecting several more filters to come and therefore want to stay scalable in the future.
This means we will somehow have to change our approach to generating the layers.
MY APPROACH SO FAR
Since we actually have a vector-graphics-based map, I thought there had to be better alternatives. But it seems we have a rare set of requirements, since my research mostly ended in dead ends, especially since most cases only cover REAL geographical maps, while what we have is a custom raster map. I also thought about somehow putting the map into GeoJSON or redrawing it directly with SVG, but since we have LOTS of individual elements on the map (> 20k), I don't think this would perform much better.
So I kind of need to stay with the bitmaps, and therefore my main goal is simple: I want to reduce the number of layers by merging the tiles of the currently activated filters into one single .png which then gets delivered to Leaflet as ONE layer. I have spent some hours researching this, but I keep running into dead ends (especially since most people deal with georeferenced data, not with custom raster maps).
So right now, I can think of 2 different options:
Create ONE layer for every filter combination. This means we would have to create 2^n layers, so this would only work up to a certain number of filters (which will probably increase). Therefore, I would prefer another solution (this is only a last resort).
Use MapServer and somehow import my layers. Then we could merge the layers at runtime with a query (I read about Union Layers here) and therefore only deliver ONE layer to Leaflet.
MY QUESTION
I have absolutely no experience with MapServer and am therefore not even sure whether this is a valid use case or whether it's capable of doing this, and more importantly: whether it would really give us a performance boost, since it probably requires a lot of logic server-side.
Before I spend more hours trying this out:
Can someone who has already worked with MapServer give me some feedback on whether this is even a good idea, or whether I am misunderstanding MapServer completely?
Also, if someone has another alternative or idea for me, you're more than welcome to share it; I'm grateful for every input. :)
Thanks in advance!
You might want to look at OpenLayers, where you can display a mix of raster and vector layers. Another option might be MapCache, a tile caching engine that is part of the MapServer project. It has the ability to do vertical assembly of tiles, so in your case, where you have 8 layers, you can ask MapCache to stack all eight tiles into a single tile. You give it a list of layers to stack and it takes care of it for you. You can also do this with MapServer. The difference is that MapCache is a lightweight Apache module that just works with tiles and is probably a little faster, whereas MapServer is a CGI process that is efficient at rendering and combining raster layers but is probably not as fast as MapCache for simple assembly of tiles.
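If you want to prototype the merging idea before committing to MapCache or MapServer, the vertical assembly amounts to something like this sketch in Python with Pillow; the directory layout ("tiles/<layer>/<z>/<x>/<y>.png") and the layer names are assumptions, not anything MapCache prescribes:

    from pathlib import Path
    from PIL import Image

    # For one tile address, stack the tiles of the currently active filter layers
    # into a single PNG, bottom-most layer first.
    def merged_tile(active_layers, z, x, y, tile_root="tiles", size=256):
        out = Image.new("RGBA", (size, size), (0, 0, 0, 0))
        for layer in active_layers:
            path = Path(tile_root) / layer / str(z) / str(x) / f"{y}.png"
            if not path.exists():
                continue                                  # this layer is empty in this tile
            tile = Image.open(path).convert("RGBA")
            out = Image.alpha_composite(out, tile)
        return out

    # Serve this from a small endpoint and point a single Leaflet TileLayer at it.
    merged_tile(["background", "filter_a", "filter_c", "overlay"], z=4, x=7, y=5).save("merged_4_7_5.png")

Caching the merged tiles per requested filter combination keeps this cheap without pre-rendering all 2^n variants.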
I need to create an algorithm to lay out some hierarchical data but have never done this kind of thing before and need some broad tips.
Basically I need to recreate this diagram (with dynamic data):
diagram http://dl.dropbox.com/u/15126868/diagram.png
I don't have a problem with most of it but need help with two things:
How do I approach writing a layout algorithm?
Should I use UIView subclasses for all the discs, or use Quartz? (I do need interaction.)
Any suggestions most welcome. I don't need too much detail.
A bit more detail:
I'm currently thinking I should use UIView subclasses and layoutSubviews. Trouble is I need to know the size (at least roughly) of all nodes before I can start to position them. Then, as the positioning involves rotation, I may need to adjust child positioning again - and I can't add labels until after any rotation.
Other considerations seem to be: that the presentation area is rectangular, not square; that I can't spill off the page; and that I will need to animate changes to the sizes of the discs.
Any pointers would be great, thanks.
This sort of thing is very difficult.
Interestingly, perhaps the main initial constraint here is the size of the typography.
In the example given: observe that they could have chosen a somewhat larger SCPT** (perhaps 10%-15% larger) or a somewhat smaller one, and it would still have worked. They made an aesthetic decision on the SCPT.
White space is critical to design. Their particular graphic designer happened to like the particular feel of white space you see, but it would by no means have been "wrong" with a smaller SCPT. Further, observe that they could have used an even larger SCPT ... IF ... they used a smaller point size for the typography.
Note that in any event you simply won't be able to display that much type that small on an iPad (or iPhone 4).
So straight away you have to make decisions about how the type will appear: pop-up, audio, or whatever. Even the white type (the "on the discs" type) will give you trouble.
You will have to do lots of tests with Photoshop mock-ups on your iPad before even proceeding with an algorithm. So, purely for what it's worth...
Here's how I personally would do this sort of thing. The general plan: I would try a squishy algorithm that retries until it finds a result it is happy with.
IMHO, based on previously doing this type of thing, this problem is too hard to get done in one go with some particularly smart-ass heuristic. Since there is no one smart-ass heuristic that will save the day, I'd do this:
1) Calculate the total trillions to display (it looks like the total is about 2.5 in the example image).
2) Guess an SCPT value to begin with; say "18", based on the actual image at the screen size we see above as posted inside your question.
3) Put the big one (the sun) dead centre, and for the middle ones (the planets) just choose a very easy heuristic, say biggest to smallest going anticlockwise starting at the top left (don't try to get any cleverer than that with this part of the problem, which could be a huge research project purely on its own) .. and do the same with the small ones (the moons).
4) For the sticks between planets and moons, adopt a trivial solution (like "always 0.5 cm"!!) and that's that. With AI you gotta cut your losses .. everywhere! :) Fix the moons to the planets and forget about them.
5) Now a hard part .. run some sort of heuristic over them that evenly balances what you have so far. Treat colour as mass and no colour as no mass, and move the "sun" until it is balanced-ish (to be clear, as an example, that would likely be downwards if you followed the "planet" layout mentioned in 3). Maybe also move all the planet/moon systems in and out to try to balance it.
6) Next, the iteration. Look at that result and decide if you like it! If not, go back to (2) and pick a new value (maybe "16", for example).
7) There are two possible outcomes here. It might be that during development there is one magic value for SCPT that always works, perhaps "14.3" or "18.2" or whatever. If you find such a value, never tell anyone; keep it as your own secret information!!!! Milk it for everything it is worth with clients. Conversely, and more difficult, you might find you need a different value each time. In that case, your AI will have to iterate through values on its own until it finds one it likes (for example, by determining whether all your labels fit or not .. and obvious things like "are they touching?", "is everything on screen?", etc.).
Anyway, FWIW (perhaps nothing), that is what I would do: an iterative approach based on a first guess for the SCPT, roughly like the sketch below.
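A toy version of that loop in Python might look like the following; the fit test is deliberately crude (total disc area versus canvas area), and the starting SCPT, canvas size and spending figures are all placeholders for the real placement and label-measurement logic:

    import math

    def disc_radius_cm(value_trillions, scpt):
        area_cm2 = value_trillions * scpt            # SCPT = square centimetres per trillion
        return math.sqrt(area_cm2 / math.pi)

    def fits(values, scpt, canvas_w_cm, canvas_h_cm, fill_ratio=0.45):
        # Crude stand-in for "run the full layout and check labels, overlaps, screen bounds".
        total_area = sum(v * scpt for v in values)
        return total_area <= fill_ratio * canvas_w_cm * canvas_h_cm

    def find_scpt(values, canvas_w_cm, canvas_h_cm, start=18.0, shrink=0.9, max_tries=30):
        scpt = start
        for _ in range(max_tries):
            if fits(values, scpt, canvas_w_cm, canvas_h_cm):
                return scpt
            scpt *= shrink                           # didn't fit: retry with a smaller scale
        return scpt

    values = [1.0, 0.6, 0.4, 0.3, 0.2]               # spending in trillions, made up
    scpt = find_scpt(values, canvas_w_cm=19.7, canvas_h_cm=14.8)   # roughly an iPad screen
    print(scpt, [round(disc_radius_cm(v, scpt), 2) for v in values])

In the real thing, fits() is where steps 3) to 5) and the "do the labels fit / touch / stay on screen" checks live; the outer loop is just step 6).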
Incidentally: you may well want to buy and study the classic and brilliant book on this sort of display of information!!! Everyone should have a copy.
Tufte's The Visual Display of Quantitative Information
by Edward R. Tufte
ISBN 0961392142
Regarding the mechanics of laying out the image: you should use Quartz or another low-level drawing API, and forget about UIViews and the like. You should definitely separate the logic completely from the drawing layer, so that (even if you do later want to change to UIViews, OpenGL ES, or whatever) it's only a matter of changing a few lines of code.
Hope it helps somehow.
Notes...
** SCPT .. square centimeters per trillion
Followup...
"To keep the logic separate would you use a manager-type pattern?"
To be honest: if I were doing it, I would just start a whole new app purely for the "research" of getting this part, this challenge, working right. In that app (to be honest!) I would make bugger-all effort to do anything in a tidy manner whatsoever! :-/ Globals everywhere! :) Unfortunately I can only think about one thing at a time, so at that stage I would only be thinking about the algorithm per se.
I believe, once you've cracked the problem per se and come to implement it in a bigger project ... really, FWIW, if it were me, I'd simply make it a class (let's say AmazingClass), nothing more complicated than that. Personally, I would keep the data somewhere separate (whether in a DB or just an array or whatever) and just let AmazingClass take care of getting the data, even. (My thinking: you never know how the hell you're going to need the data and when, at what point in the process of AmazingClass. So just give up and let AmazingClass take it as and when it wants it.)
If you are familiar with these awesome-sounding manager-patterns of which you speak - yeah, why not! In short I would heavily separate it out as much as possible. I'm not good enough to speak on the best way to do that - but just completely separate it out somewhere. Sorry I can't help on that one.