We have a web app that uses Mapbox.
In the Leaflet control we're drawing polygons in a feature group, laid over a basemap.
What we're seeing is that for certain polygons, when we try to edit them, dragging a point causes it to snap right back to where it was.
Other polygons on the same map can be edited without issue.
I've attached an animated GIF that shows the behavior.
This is something that seems to be going on entirely within the Leaflet JavaScript code, and I'm at a loss as to how to track it down.
I certainly don't expect anyone to be able to identify what I'm doing wrong, or what might be wrong within the Leaflet library, from as sketchy a report as this.
But I was hoping someone might be able to provide some guidance as to where I might start.
Specifically, if I were to walk through the JavaScript in a debugger, where should I put my breakpoints?
I've downloaded the uncompressed Mapbox source, and I'm able to set breakpoints in it, but I've been unable to find which code executes on mouseup in this situation.
I am developing a weather radar viewer using Mapbox. In a certain mode, there are 2 Mapbox maps on the screen at the same time showing different modes of the radar. The maps are locked to each other. When one map moves, rotates, or pans, the other one does as well. I did this by simply passing the properties of one map to the other. In the screenshot below, you can see that they show identical locations.
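Roughly, the locking is done like this (a sketch assuming Mapbox GL JS; the `sync` helper and the guard flag are just illustrative, not my exact code):

```js
// Sketch of locking two Mapbox GL JS maps together.
// The guard flag avoids an infinite loop of move events.
let syncing = false;

function sync(source, target) {
  source.on('move', () => {
    if (syncing) return;
    syncing = true;
    target.jumpTo({
      center: source.getCenter(),
      zoom: source.getZoom(),
      bearing: source.getBearing(),
      pitch: source.getPitch(),
    });
    syncing = false;
  });
}

sync(map1, map2);
sync(map2, map1);
```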
What I want to do is - when the user is hovering the mouse over "map1", I would like an identical (ghost or false) cursor on "map2". Here is what I am looking to do:
(edit: What you are looking at is an actual screenshot. Each map is enclosed in a DIV taking 50% of the screen width, if that helps to explain.)
I don't know if this is even possible in Mapbox. Hopefully someone can give some guidance, as I can't find any other questions related to this, and I really have no code to show for it without knowing where to start.
If you attempt to do this inside Mapbox GL JS (for instance, by constantly updating the location of a GeoJSON feature layer), I think the performance will be pretty poor.
But since the two views are exactly locked (and presumably the exact same dimensions), you can just do this at an HTML/CSS level. Detect mouse movement on the first map, and update the location of an absolutely-positioned element hovering over the second map to match.
Another approach would be using a canvas element overlaid on the second map, similarly updated.
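A minimal sketch of the HTML/CSS approach, assuming Mapbox GL JS, two equally sized containers, and map objects named `map1`/`map2` (all assumptions on my part):

```js
// Ghost cursor: an absolutely-positioned dot inside map2's container,
// driven by mouse movement over map1.
const ghost = document.createElement('div');
ghost.style.cssText =
  'position:absolute; width:12px; height:12px; border-radius:50%;' +
  'background:rgba(0,0,0,0.5); pointer-events:none; display:none;';
// The mapboxgl-map container is position:relative, so absolute
// positioning inside it works directly.
map2.getContainer().appendChild(ghost);

map1.on('mousemove', (e) => {
  // e.point is the cursor position in pixels relative to map1's
  // container; with locked, equally sized maps it maps 1:1 onto map2.
  ghost.style.display = 'block';
  ghost.style.left = e.point.x + 'px';
  ghost.style.top = e.point.y + 'px';
});

map1.on('mouseout', () => { ghost.style.display = 'none'; });
```

Note that `pointer-events: none` keeps the ghost element from intercepting mouse interaction on the second map.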
I have what is, in my opinion, the simple problem of disabling image detection with the AR camera. My app detects an image from the reference image library and spawns an object etc., everything according to plan.
But the problem is that if I move the camera over another detectable image, it recognizes that one too. This is bad not because it spawns something additionally, but because you can "collect" the images in my app, so the other detected image gets unlocked even though it shouldn't.
So how can I disable image detection without turning off the AR camera?
So far I have tried simply disabling the "ARManager" and the "ARTrackedImageManager" scripts (.enabled = false), but that didn't solve my problem, because the app still detects other images.
I hope I have explained my question and problem properly. Any help is appreciated!
It really depends on what library you're using to detect the image. Generally, most marker tracking libraries will create a marker object in your Unity scene. You can disable these marker objects after you find one, and only leave the marker you're interested in. Make sure you also set the number of tracked images to 1 so you won't accidentally find two markers in one frame.
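For example, if you're using Unity's ARFoundation (an assumption, though your "ARTrackedImageManager" mention suggests it), a sketch could look like this; the component name and lock-in logic are illustrative:

```csharp
// Sketch assuming Unity ARFoundation: lock onto the first detected
// image and suppress the content of any other tracked image.
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class SingleImageTracker : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager trackedImageManager;

    string lockedImageName;   // once set, all other images are ignored

    void OnEnable()
    {
        // Limit simultaneous tracking (ARFoundation 4.x+), so two
        // markers aren't found in the same frame.
        trackedImageManager.requestedMaxNumberOfMovingImages = 1;
        trackedImageManager.trackedImagesChanged += OnChanged;
    }

    void OnDisable() => trackedImageManager.trackedImagesChanged -= OnChanged;

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var image in args.added)
        {
            if (lockedImageName == null)
            {
                // First image found: lock onto it, then spawn/unlock as usual.
                lockedImageName = image.referenceImage.name;
            }
            else if (image.referenceImage.name != lockedImageName)
            {
                // Any other marker: hide its content so nothing
                // else gets "collected".
                image.gameObject.SetActive(false);
            }
        }
    }
}
```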
For an Android app I have created a custom overlay here to display various items of game data on the map.
In general the overlay works fine and smoothly!
However, there is one geometric configuration which is not working as expected: some objects within the overlay are bound not to geo-coordinates but to the current user location. Osmdroid selectively redraws the screen, and when the screen is not focused on the user location, the location-bound content is not updated correctly: new content only gets drawn in some selective rectangle, and old content isn't erased outside that rectangle.
So far I have failed to find a mechanism to communicate an overlay's required redraw to the underlying Osmdroid system, i.e. to invalidate the surroundings of the current user location. Any hint, clue or pointer?
By studying the sample code I realized that they really consider it the overlay's responsibility to issue the appropriate invalidate calls to the map component to ensure its own visual integrity.
I am still struggling with the coordinates for the right invalidate(left, top, right, bottom) calls, because my updates happen on location changes, and it is unclear to me whether the screen pixels of the map to be invalidated need to be measured relative to the old location or the new one. This is really a timing question.
However, taking the CPU hit and simply issuing postInvalidate() looks as intended, and it is unclear how much performance is actually lost.
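For the record, the brute-force variant looks roughly like this (a sketch against osmdroid's classic Overlay API; the class and method names are mine):

```java
// Sketch: an overlay bound to the user location that sidesteps the
// old-vs-new invalidate-rectangle timing question by invalidating
// the whole map view on each location change.
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Point;
import org.osmdroid.util.GeoPoint;
import org.osmdroid.views.MapView;
import org.osmdroid.views.overlay.Overlay;

public class UserLocationOverlay extends Overlay {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private GeoPoint userLocation;

    public UserLocationOverlay() {
        paint.setColor(Color.BLUE);
    }

    @Override
    public void draw(Canvas canvas, MapView mapView, boolean shadow) {
        if (shadow || userLocation == null) return;

        // Convert the geo position to screen pixels for this draw pass.
        Point screen = new Point();
        mapView.getProjection().toPixels(userLocation, screen);
        canvas.drawCircle(screen.x, screen.y, 20f, paint);
    }

    // Called from the location listener whenever a fix arrives.
    public void onLocationChanged(GeoPoint newLocation, MapView mapView) {
        userLocation = newLocation;
        // Full-view invalidate: safe from any thread, costs a full redraw.
        mapView.postInvalidate();
    }
}
```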
I am working on a project where I have to render 4 different sides of a 3D object on the screen at the same time. The output should have 4 different camera views rendering the front, left, right, and back sides of the 3D object.
I found that a gaming engine like Unity may help to do something like this. However, I have just started using Unity and can't figure out how to do it.
Here is a link to some examples. This is how I want the output to look.
Well, first of all, welcome to Stack Overflow. And you are right, Unity is an excellent engine for achieving what you described.
As stated in the FAQ and here, I'm going to give you an answer I deem fitting to your question. I could post code here in about 30 minutes that does exactly what you asked for, but then we'd miss the point of learning to program and of posting on Stack Overflow in general. I'll show you how to start on this project, but then you'll have to try it yourself. If you have any trouble after trying some more, we can help you with specific problems, provided you have done some research first and show us what you tried.
As to your question, it's relatively easy to do. First create your object in the scene, then drag and place four different Camera objects in the scene. Using each Camera's Normalized Viewport Rect (four values that indicate where on the screen that camera's view will be drawn, in screen coordinates from 0 to 1), you can split up the view to show the feed of each Camera.
This of course happens in a script. You can read about scripting in Unity here. Even if you are an expert programmer, that link is worth a read when you are new to Unity.
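To give you a starting point without doing all the work for you, here is a minimal sketch; the camera names, distance, and view directions are assumptions you'll want to adapt:

```csharp
// Sketch: four cameras, each rendering one side of the target object
// into one quadrant of the screen via the normalized viewport rect.
using UnityEngine;

public class FourSideView : MonoBehaviour
{
    public Transform target;        // the 3D object to view
    public float distance = 5f;

    void Start()
    {
        // Disable any existing Main Camera so only these four render.
        CreateCamera("FrontCam", Vector3.back,    new Rect(0f,   0.5f, 0.5f, 0.5f));
        CreateCamera("BackCam",  Vector3.forward, new Rect(0.5f, 0.5f, 0.5f, 0.5f));
        CreateCamera("LeftCam",  Vector3.left,    new Rect(0f,   0f,   0.5f, 0.5f));
        CreateCamera("RightCam", Vector3.right,   new Rect(0.5f, 0f,   0.5f, 0.5f));
    }

    void CreateCamera(string name, Vector3 direction, Rect viewport)
    {
        var cam = new GameObject(name).AddComponent<Camera>();
        cam.transform.position = target.position + direction * distance;
        cam.transform.LookAt(target);
        cam.rect = viewport;        // normalized viewport rect (values 0-1)
    }
}
```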
Good luck.
I'm a complete newbie to Qt.
I want to create an 800x600 window that just shows some circles and lets me manipulate the pixels of the form. There is no interaction between the user and the form (no click, no double-click, ...); it just shows some circles in one color and lines with different pixel colors (each line may have different pixel colors).
I also want to be able to change the coordinate system, i.e. move the origin from the top-left to the center of the window. Could anyone help me do that with some sample code?
Thanks in advance for your reply.
Please try downloading Qt Creator (the IDE), then reading through the tutorials. There's a whole host of very useful information provided for free, including a lot of the code samples you are looking for.
The following examples might also be of particular interest:
Animation Framework Examples
Graphics View Examples
Painting Examples
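As a concrete starting point, here is a minimal sketch of what you described, assuming Qt's widgets module (qmake: QT += widgets); the class name and colors are arbitrary:

```cpp
// Sketch: fixed-size 800x600 window that paints a circle and a
// multi-colored line, with the origin moved to the window center.
#include <QApplication>
#include <QPainter>
#include <QWidget>

class CircleWidget : public QWidget {
protected:
    void paintEvent(QPaintEvent *) override {
        QPainter painter(this);
        painter.setRenderHint(QPainter::Antialiasing);

        // Move the origin from the top-left corner to the window center.
        painter.translate(width() / 2, height() / 2);

        // A circle centered on the (new) origin.
        painter.setPen(Qt::blue);
        painter.drawEllipse(QPoint(0, 0), 100, 100);

        // A line drawn point by point, so each pixel can have its own color.
        for (int x = -200; x <= 200; ++x) {
            painter.setPen(QColor::fromHsv((x + 200) % 360, 255, 255));
            painter.drawPoint(x, x / 2);
        }
    }
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    CircleWidget w;
    w.setFixedSize(800, 600);   // no resizing, per the requirements
    w.show();
    return app.exec();
}
```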