Hello, I have a problem with the Bing Maps control.
If I zoom in too close to the polylines, they begin to disappear (from bottom to top and from right to left).
The polylines are generated dynamically with an ItemsControl (the one included in the Maps namespace) bound to a collection of my own LocationData from the ViewModel, which is converted by an IValueConverter to the map-specific LocationPoints.
Some values that are not accessible from the ViewModel are set in the Loaded event.
The map and its container stretch over the whole screen.
So when the lines begin to disappear and I zoom out via a button in my ApplicationBar
private void ZoomOut_Click(object sender, RoutedEventArgs e)
{
    map1.ZoomLevel -= 1.0;
}
the application exits without an exception...
I have tested it on a real device with and without the debugger, and the debugger only reports that it has lost the connection to the device.
Has anyone had this or a similar problem and hopefully solved it?
Thanks for any help.
PS: My LocationData contains approximately 100 - 200 points that are split up into 3 - 7 lines; that can't be too much, can it?
Yes, hundreds of points is too much, but that's the least of your problems. The way you have coded this, you are reconverting and replotting your points every time there is a pan or zoom.
Don't use the type converter. Convert your points once, cache the converted points and bind to the converted points.
Research quadtrees and how they apply to culling your point set in proportion to zoom level.
Apply a clipping rectangle. In my experience, half a degree larger each side of your display region works well.
Study the Bing map event model and redesign your code so that you only cull, clip and plot when map manipulation stops.
Ideally, write your cull, clip and plot logic so that it is async and can be signalled to abort, so that if manipulation restarts before the cull, clip and plot has finished, it can be aborted and restarted.
Using the techniques above I am able to get performance comparable to the built-in map.
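To make the "convert once, then cull and clip when manipulation stops" advice concrete, here is a minimal C# sketch. It is only an illustration: the ViewChangeEnd event is what I recall the Silverlight/Windows Phone Bing Maps control exposing, and VisibleLatSpan, VisibleLonSpan and RebuildPolylines are hypothetical helpers you would supply yourself.

// Sketch only: points are converted once into _cachedPoints; whenever the view
// stops changing we cull against the visible region plus a 0.5 degree margin
// and rebuild the polylines from the survivors.
private List<GeoCoordinate> _cachedPoints;   // filled once, never reconverted

private void map1_ViewChangeEnd(object sender, MapEventArgs e)
{
    const double margin = 0.5;               // half a degree each side
    double latSpan = VisibleLatSpan();       // hypothetical helpers: derive from
    double lonSpan = VisibleLonSpan();       // the map's bounding rectangle / zoom
    var c = map1.Center;

    var visible = _cachedPoints.Where(p =>
        p.Latitude  >= c.Latitude  - latSpan / 2 - margin &&
        p.Latitude  <= c.Latitude  + latSpan / 2 + margin &&
        p.Longitude >= c.Longitude - lonSpan / 2 - margin &&
        p.Longitude <= c.Longitude + lonSpan / 2 + margin).ToList();

    RebuildPolylines(visible);               // your own plotting code
}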
I have tried using the methods "Transform.scale" (for "zooming") and "Transform.translate" (for moving), but they seem to trigger the paint method in the CustomPainter (even though the method "shouldRepaint" returns false; in fact, that method is not even invoked).
Maybe there is some other way of doing what I want, i.e. being able to move and zoom something I have created with a Canvas (without executing the paint method again)?
In the example code below there are three sliders: one for zooming out (i.e. reducing the size), one for moving horizontally, and one for moving vertically.
The "paint" method simply draws a polygon (see the attached screenshot picture below).
While the example code below is simple (i.e. a small amount of hardcoded vector data, fast to render), I want to emphasize that the solution I am looking for needs to support MANY complicated drawings (with LOTS OF data, slow to render), i.e. it is not an acceptable solution to suggest manually converting this one vector image to a raster image (e.g. PNG/JPG/GIF) instead.
Below I try to describe how you can think about a scenario that needs to be supported:
Imagine you want to implement an app with a huge dropdown list with lots of different vector data images to be selected.
The data for those VECTOR images may be retrieved from the internet or from a big local SQLite database.
IMPORTANT: the DATA in those images is NOT raster data such as jpg, png, gif... but the VECTOR data to be retrieved is defined as lots of screen coordinates for points, lines, polygons, textual labels, icons, color values... and so on.
Such VECTOR data will then be used for creating the image, and as far as I understand you should use a CustomPainter with the paint method, unless there are better options?
Also imagine that each such selected image with vector data is HUGE, with MANY THOUSANDS of lines, polygons, icons... and so on, and that the paint method might take seconds to create the image.
BUT, once it is drawn, the data will not change.
So, since the "paint" method might take seconds to render a huge amount of vector data, you want to avoid invoking it frequently when moving or zooming.
Therefore I think it would be desirable to be able to use the method "shouldRepaint" to return false, but it seems that method is not even invoked at all when resizing or moving with the Transform methods "scale" and "translate".
But maybe there is some other solution to support the scenario described above, maybe some other class than CustomPainter that does not automatically trigger the paint method when applying Transform scale/translate?
I hope there is a solution where the Flutter framework can somehow automatically reuse the bitmap (e.g. the color values at certain bits and bytes) that was created, potentially slowly, with a paint method, but can scale/zoom and move it in a faster way than having to execute the paint method again.
If you just want to zoom, scale, pan on your images, then you can try the new InteractiveViewer widget.
InteractiveViewer class
A widget that enables pan and zoom interactions with its child.
The user can transform the child by dragging to pan or pinching to zoom.
https://api.flutter.dev/flutter/widgets/InteractiveViewer-class.html
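For the "paint once, then pan/zoom the result" part of the question, wrapping the expensive CustomPaint in a RepaintBoundary inside the InteractiveViewer should help: the boundary keeps the painted output as its own layer, so dragging and pinching only transforms that layer instead of running paint again. A minimal Dart sketch (MyVectorPainter is a stand-in for your own slow painter):

import 'package:flutter/material.dart';

class VectorView extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return InteractiveViewer(
      minScale: 0.1,
      maxScale: 10.0,
      child: RepaintBoundary(             // caches the painted layer
        child: CustomPaint(
          size: const Size(2000, 2000),   // painted once, however slowly
          painter: MyVectorPainter(),     // stand-in for your own painter
        ),
      ),
    );
  }
}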
I am working on a real-time plot application where a stream of data is to be plotted on screen. Earlier, using gtkmm2, I did this with a custom widget (derived from Gtk::Bin) in which I had a member function that creates a Cairo context and does the plotting.
Now, with gtkmm3, I am unable to plot in any method other than on_draw. Here's what my custom draw method body looks like:
Gtk::Allocation oAllocation = get_allocation();
Glib::RefPtr<Gdk::Window> refWindow = get_window();
Cairo::RefPtr<Cairo::Context> refContext = refWindow->create_cairo_context();

refWindow->begin_paint_rect(oAllocation); // added later

refContext->save();
refContext->reset_clip();
refContext->set_source_rgba(1, 1, 1, 1);
refContext->move_to(oAllocation.get_x(), oAllocation.get_y());
refContext->line_to(oAllocation.get_x() + oAllocation.get_width(),
                    oAllocation.get_y() + oAllocation.get_height());
refContext->stroke();
refContext->restore();

refWindow->end_paint();
Initially I derived the class from Gtk::DrawingArea, then tried Gtk::Bin while adding the begin_paint_rect call.
Is it forbidden to draw in any place other than on_draw?
For something like a plot (or anything that is rather complex to draw) I advise using a buffer; I lost a month of my life because I read that gtkmm3 does buffering so that "double buffering" isn't needed anymore (as opposed to gtkmm2), but it isn't that simple (read: that isn't true).
So, what you should do is draw to your own surface, and every time you change something, call queue_draw_region or queue_draw_area.
Then, in on_draw, get the list of clip rectangles and copy those areas from your private surface to the cr that is passed to on_draw. Cairo normally does the exact same thing (or so they claim), copying what you just copied again to the screen, so you should turn that off (this should be possible, I read).
The reason you can't use Cairo's buffering is that it doesn't KEEP that buffer; what you get is some corrupted surface, so you are forced to redraw EVERYTHING inside the clip rectangle list. That wouldn't be too bad if you (your application) were the only one making changes (via your queue_draw_* calls): then you could set a flag, invalidate the part(s) that need redrawing and simply postpone the drawing until you get to on_draw.
But sometimes on_draw is called for other reasons, for example when you open a menu that goes over your drawing area. I think this is a bug (or a design error), but it is the way it is. The result is that you can't know what you have to redraw EXCEPT by looking at the clip rectangle list, which makes it incredibly hard to redraw just a part of your area unless your drawing is made up of many separate rectangles (like, say, a chess board). The only feasible way is to keep a full copy of the image in memory (your private surface) and just copy the clip rectangle list from there when in on_draw.
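To make the private-surface idea concrete, here is a rough gtkmm3 sketch. The class and member names are mine; adapt it to your own widget and plotting code.

#include <gtkmm.h>
#include <algorithm>
#include <cmath>

class PlotArea : public Gtk::DrawingArea
{
  Cairo::RefPtr<Cairo::ImageSurface> m_surface;   // our own buffer

  void ensure_surface()
  {
    Gtk::Allocation a = get_allocation();
    if (!m_surface ||
        m_surface->get_width()  != a.get_width() ||
        m_surface->get_height() != a.get_height())
    {
      m_surface = Cairo::ImageSurface::create(Cairo::FORMAT_ARGB32,
                                              a.get_width(), a.get_height());
    }
  }

public:
  // Called from the data stream: draw onto the buffer, then invalidate only
  // the rectangle that changed (with a small margin for the line width).
  void add_segment(double x0, double y0, double x1, double y1)
  {
    ensure_surface();
    auto cr = Cairo::Context::create(m_surface);
    cr->set_source_rgb(0, 0, 0);
    cr->move_to(x0, y0);
    cr->line_to(x1, y1);
    cr->stroke();
    queue_draw_area(static_cast<int>(std::min(x0, x1)) - 2,
                    static_cast<int>(std::min(y0, y1)) - 2,
                    static_cast<int>(std::abs(x1 - x0)) + 4,
                    static_cast<int>(std::abs(y1 - y0)) + 4);
  }

protected:
  // on_draw only copies from the buffer; Cairo clips paint() to the
  // invalidated region for us.
  bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override
  {
    ensure_surface();
    cr->set_source(m_surface, 0, 0);
    cr->paint();
    return true;
  }
};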
Is it forbidden to draw in any place other than on_draw?
Basically: Yes.
The idea is that you call gtk_widget_queue_draw() or gtk_widget_queue_draw_area() when you want to cause a redraw.
https://developer.gnome.org/gtk3/stable/GtkWidget.html#gtk-widget-queue-draw
https://developer.gnome.org/gtk3/stable/GtkWidget.html#gtk-widget-queue-draw-area
Seems like a simple question, but I have been tearing my hair out for hours now.
I have a series of files, i.e.:
kml_image_L1_0_0.jpg
kml_image_L2_0_0.jpg
kml_image_L2_0_1.jpg
kml_image_L2_1_0.jpg
kml_image_L2_1_1.jpg
etc. However, just plotting them on the Leaflet map surface understandably puts the images at 0,0 on the Earth's surface, and the zoom level 0 inferred from the file names should really be about 15 or so.
So I want to specify the latitude and longitude where the images should originate, and what zoom level they should start at. I have tried bounds (which doesn't display anything) and I have tried playing with offsetting the zoom level.
I need this because a user needs to click on an offline map to specify where they are, and I need the GPS coordinates.
I also have a KML file, but it seems to be more suited to plotting vector data on the map.
Any help is much appreciated, cheers.
If I understand correctly, the "kml_image_Lz_x_y.jpg" images that you have are actually tiles, with zoom, horizontal and vertical indices in their file name?
And your issue is that they use (z,x,y) numbers as if they started from the top-most level (zoom 0, single tile for entire world), but in fact they are just a small portion of the pyramid of tiles?
And you cannot use them as is because you still want to get actual geographic coordinates (latitude, longitude), which would be totally wrong if you used the tiles as if they were showing the entire world?
In that case, you have several options as workarounds:
The most simple and reliable would probably be to simply write a small script to rename all your tiles to their true (z,x,y) numbers.
Another option would be to modify the (z,x,y) numbers before they are written in the tile src attribute, and apply the appropriate offset (constant for z, scaled by z for x and y). That should probably happen in L.TileLayer.getTileUrl() method.
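A rough sketch of that second option, overriding getTileUrl (Leaflet 1.x signature). All the numbers and the row/column order in the file name are assumptions; work out the real offsets from where your tile pyramid actually sits:

var ZOOM_OFFSET = 14;     // assumption: kml_image_L1_* corresponds to Leaflet zoom 15
var ORIGIN_X = 16368;     // assumption: tile indices of the pyramid's top-left
var ORIGIN_Y = 10892;     // corner at Leaflet zoom 15

var KmlTileLayer = L.TileLayer.extend({
    getTileUrl: function (coords) {
        var level = coords.z - ZOOM_OFFSET;          // constant offset for z
        var scale = Math.pow(2, level - 1);          // x/y offsets scale with zoom
        var x = coords.x - ORIGIN_X * scale;
        var y = coords.y - ORIGIN_Y * scale;
        return 'tiles/kml_image_L' + level + '_' + y + '_' + x + '.jpg';
    }
});

new KmlTileLayer('', { minZoom: 15, maxZoom: 21 }).addTo(map);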
Good luck! :-)
OK, here's the deal:
I have two views: simple and advanced. On the iPad, they come with a big-ass map view, with a marker that can be moved to indicate a position.
Each view has a different instance of MKMapView. When I switch from one to the other, I want to keep the map at exactly the same position and zoom level, so the user feels as if it is the same map.
However, the shape of the map view is slightly different for each of the views. This is because the advanced search has more stuff above the map.
When I open the map (this is code from an abstract superclass, so both instances get it), I set the region and marker position, like so:
[mapSearchView setRegion:[mapSearchView regionThatFits:[[BMLTAppDelegate getBMLTAppDelegate] searchMapRegion]]];
[myMarker setCoordinate:[[BMLTAppDelegate getBMLTAppDelegate] searchMapMarkerLoc]];
searchMapRegion and searchMapMarkerLoc are static, and reflect the currently displayed map's region and marker location (the center of the map).
Here's the problem:
Because the map is a slightly different shape, there is always a bit of adjusting. This can "bounce" back and forth, so that the map zoom keeps decreasing every time you switch, until you are looking at the whole world.
It doesn't matter whether or not I use regionThatFits. The same thing happens, even with this code:
[mapSearchView setRegion:[[BMLTAppDelegate getBMLTAppDelegate] searchMapRegion]];
[myMarker setCoordinate:[[BMLTAppDelegate getBMLTAppDelegate] searchMapMarkerLoc]];
All I want is for the exact same zoom and center to be displayed. I don't care if the advanced view cuts a bit off.
How do I get the $##!! MapKit to stop tweaking the zoom factor?
Just FYI. I solved this by creating a custom model layer class that maintains the scale and center point, and is used by multiple MKMapViews. It works pretty well, but the MapKit does sometimes tweak the scale very slightly to fit one of its "detents."
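For reference, a stripped-down sketch of how such a model layer could look (MapState is my own, hypothetical class name). The idea is to store the center plus degrees-per-point rather than a region, and rebuild the span from the receiving view's own size:

#import <MapKit/MapKit.h>

@interface MapState : NSObject
@property (nonatomic) CLLocationCoordinate2D center;
@property (nonatomic) double latDegreesPerPoint;
@property (nonatomic) double lonDegreesPerPoint;
- (void)captureFromMapView:(MKMapView *)map;
- (void)applyToMapView:(MKMapView *)map;
@end

@implementation MapState

- (void)captureFromMapView:(MKMapView *)map
{
    MKCoordinateRegion region = map.region;
    self.center = region.center;
    self.latDegreesPerPoint = region.span.latitudeDelta  / map.bounds.size.height;
    self.lonDegreesPerPoint = region.span.longitudeDelta / map.bounds.size.width;
}

- (void)applyToMapView:(MKMapView *)map
{
    // Rebuild the span from the stored scale and the *new* view's size, so the
    // zoom per point stays the same even though the view's shape changed.
    MKCoordinateSpan span = MKCoordinateSpanMake(self.latDegreesPerPoint * map.bounds.size.height,
                                                 self.lonDegreesPerPoint * map.bounds.size.width);
    [map setRegion:MKCoordinateRegionMake(self.center, span) animated:NO];
}

@end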
I am developing an app which uses LK for tracking and POSIT for estimation. I am successfully getting the rotation matrix and the projection matrix and I am able to track perfectly, but the problem is that I am not able to translate the 3D object properly. The object does not fit into the right place where it has to fit.
Will someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space, and from your details it seems that the problem is bad fov (field of view) angles.
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or to the left) and full-angle (from bottom to top, respectively from left to right). Maybe you just mixed them up, using the full angle instead of the half-angle, or vice versa.
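A quick way to see how much this matters (my own helper, not from any of the links above): the focal length in pixels that the intrinsic matrix needs follows directly from the field of view, and using a half-angle where a full angle is expected changes it by roughly a factor of two.

#include <cmath>

// Focal length in pixels from a FULL horizontal field of view (radians).
// If your fov is already a half-angle, drop the 0.5 in front of it.
double focalPixelsFromFov(double fovFullRadians, double imageWidth)
{
    return 0.5 * imageWidth / std::tan(0.5 * fovFullRadians);
}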
Maybe you can show us how you build the transformation matrix from the R and T components?
Remember that the cv::solvePnP function returns the inverse transformation (e.g. camera in world) - it finds the object pose in 3D space with the camera at (0;0;0). In almost all cases you need to invert it to get the correct result: {R^T; -R^T * T}.
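In case it helps, a sketch of going from solvePnP output to a 4x4 pose matrix and its inverse (the point lists, camera matrix and distortion coefficients are whatever your LK/POSIT pipeline already produces):

#include <opencv2/opencv.hpp>
#include <vector>

// Builds the 4x4 object-in-camera pose from solvePnP's rvec/tvec and also
// returns its inverse (camera-in-object) via the poseInv out-parameter.
cv::Mat poseFromPnP(const std::vector<cv::Point3f>& objectPoints,
                    const std::vector<cv::Point2f>& imagePoints,
                    const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                    cv::Mat& poseInv)
{
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);                        // rotation vector -> 3x3 matrix

    cv::Mat pose = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(pose(cv::Rect(0, 0, 3, 3)));
    tvec.copyTo(pose(cv::Rect(3, 0, 1, 3)));

    // Rigid-transform inverse: [R | T]^-1 = [R^T | -R^T * T]
    cv::Mat Rinv = R.t();
    cv::Mat Tinv = -Rinv * tvec;
    poseInv = cv::Mat::eye(4, 4, CV_64F);
    Rinv.copyTo(poseInv(cv::Rect(0, 0, 3, 3)));
    Tinv.copyTo(poseInv(cv::Rect(3, 0, 1, 3)));

    return pose;
}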