I am asking about the definition of imageExtent and bounding box in OpenLayers. I have seen these terms in other mapping libraries, like Leaflet, but I have not found a real definition.
There is an answer in https://gis.stackexchange.com/questions/240979/difference-between-bounding-box-envelope-extent-bounds
If you are dealing with rotated images you will need to know whether the extent is that of the image before rotation (shown in blue) or the bounding extent (shown in green) of the rotated (red) image. For example, a KML ground overlay is defined by the extent of the image before rotation, but you may find images where the extent is given as the outer extent after rotation, similar to the extent of a rotated view.
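To make the distinction concrete, here is a minimal sketch of the geometry only (not OpenLayers or KML API; the [minX, minY, maxX, maxY] layout and the function name are just assumptions for illustration): it rotates an image extent about its center and returns the bounding extent of the rotated image.

```csharp
using System;

static class RotatedExtent
{
    // extent = [minX, minY, maxX, maxY]; angle is the image rotation in radians.
    public static double[] BoundingExtent(double[] extent, double angleRadians)
    {
        double cx = (extent[0] + extent[2]) / 2.0;   // center of the unrotated extent
        double cy = (extent[1] + extent[3]) / 2.0;
        double cos = Math.Cos(angleRadians), sin = Math.Sin(angleRadians);

        double minX = double.MaxValue, minY = double.MaxValue;
        double maxX = double.MinValue, maxY = double.MinValue;

        // Rotate each corner of the original extent and track the min/max,
        // which yields the axis-aligned bounding extent of the rotated image.
        foreach (var (x, y) in new[] { (extent[0], extent[1]), (extent[2], extent[1]),
                                       (extent[2], extent[3]), (extent[0], extent[3]) })
        {
            double rx = cx + (x - cx) * cos - (y - cy) * sin;
            double ry = cy + (x - cx) * sin + (y - cy) * cos;
            minX = Math.Min(minX, rx); maxX = Math.Max(maxX, rx);
            minY = Math.Min(minY, ry); maxY = Math.Max(maxY, ry);
        }
        return new[] { minX, minY, maxX, maxY };
    }
}
```

The bounding extent is always at least as large as the original extent; they coincide only when the rotation is a multiple of 180 degrees (or 90 degrees for a square image).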
In Unity UI, I have an ordinary RawImage
(It's just sitting on a Panel)
I have a PNG, mask.png, which is just a white shape on a transparent background.
How do you mask the RawImage?
I have tried Mask and SpriteMask in many ways and it just won't work.
Even RectMask2D would be fine (to mask to a square shape), but it just doesn't seem to work.
Should I use Mask or SpriteMask?
If so, do you have to (or have to not) set a Material on the mask, or on the RawImage?
I assume the Mask game object should be the parent of the RawImage, but is that right?
What is the secret?
The RawImage component should work with masks just like the normal Image component does, provided that the Maskable checkbox is ticked.
Note that the Mask or Rect Mask 2D should be on the parent of the (Raw)Images you are trying to mask. The hierarchy should look something like this:
Canvas
|- MaskObject (contains the (Raw)Image and the Mask or Rect Mask 2D components)
   |- Object to mask (contains the (Raw)Image to be masked)
Notice how the white square (Image) gets cut off by the red square (Mask).
The component types between the masking image and the masked image do not need to match either. A RawImage can mask an Image and vice versa.
The masking objects are again shown in red, and the white ones are the masked objects. The GameObjects' names show which (Raw)Image component each one uses.
The only exception is the SpriteMask, which works exclusively with the Sprite Renderer component.
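As a rough illustration of the hierarchy above, here is a minimal sketch that builds it from a script; the GameObject names and the maskShape/rawTexture fields are placeholders, and in practice you would normally set this up directly in the editor.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical setup script, assumed to sit on the Canvas or Panel.
public class MaskSetupExample : MonoBehaviour
{
    public Sprite maskShape;   // e.g. mask.png imported as a Sprite (white shape on transparent)
    public Texture rawTexture; // the texture shown by the RawImage

    void Start()
    {
        // Parent: holds the mask shape (an Image) plus the Mask component.
        var maskGO = new GameObject("MaskObject", typeof(RectTransform), typeof(Image), typeof(Mask));
        maskGO.transform.SetParent(transform, false);
        maskGO.GetComponent<Image>().sprite = maskShape;
        maskGO.GetComponent<Mask>().showMaskGraphic = false; // hide the white shape itself

        // Child: the RawImage that gets clipped to the mask shape.
        var rawGO = new GameObject("Object to mask", typeof(RectTransform), typeof(RawImage));
        rawGO.transform.SetParent(maskGO.transform, false);
        rawGO.GetComponent<RawImage>().texture = rawTexture;
        // RawImage is maskable by default; no special material is needed on either object.
    }
}
```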
There is not much explanation from Unity on masks; this is the closest to an explanation there is.
Some more info about masks:
Masks work by comparing the ref(erence) values in the stencil buffers of the two (or more) objects (in this case images) and only drawing the pixels where the stencil value for both equals 1, using the stencil's Comp(arison) function. This means it is possible to create your own implementation of masks by writing a shader that uses the stencil buffer, which comes in handy when, for example, you want something like an inverted mask, where pixels are drawn everywhere except where the mask is (creating holes in an image) :)
Just put the RawImage as a child of a UI Image and call the parent image "Mask", then put any sprite shape into the "Mask" image. Go to Add Component, then UI, then add Mask. Please look at this link: https://learn.unity.com/tutorial/ui-masking#6032b6fdedbc2a06c234cd3e - it works well for me.
I need to tile a texture across a plane with updating geometry (floor fill), and I need the texture to be scaled to fit real-world dimensions in centimeters. It is a square floor tile of 50 cm, and the texture size is 1024 pixels. How do I convert pixels to meters in ARKit? I know that I have to use SCNMatrix4MakeScale on the SCNMaterial's diffuse.contentsTransform, but I am not sure what values to set to make it accurate.
What you might do is use the physical size of the SCNNode you are working with and determine how many 50 x 50 cm squares it can fit. After you get this coefficient, use it inside contentsTransform to achieve the needed behavior. For example, a plane 2 m wide and 1.5 m deep fits 2 / 0.5 = 4 tiles across and 1.5 / 0.5 = 3 tiles deep, so the texture should repeat 4 and 3 times in those directions. Please refer to this answer for code snippets and more hints that you might find useful.
In my scene, the smileys (Quads with a PNG image) are placed at Y: 0 and the dots (Quads with 3x3 tiling) are placed at Y: -0.25.
The shader I need to use for the smileys is Transparent/Diffuse, as I am using a circular PNG image.
But the dots, although placed below, are showing up above the smiley. Using any other shader, such as Diffuse, solves the issue, but then the smiley becomes a square.
Screenshot:
If you need any more clarification, please don't hesitate to ask.
Edit:
I have attached the shader details of both the smiley and the dots from the inspector panel.
link: http://postimg.org/image/cvws1os7d/
Edit 2:
I have found that the issue seems to be with the Main Camera, specifically with the distance and the "Field Of View".
I need to use "Perspective" as projection type and 140 as Field of View.
If I change the projection type to Orthographic the issue is completely fixed.
The screenshots below show how the distance and Field of View control the appearance of the dots over the smiley.
Screenshot 1:
Y position: 8.48
Field of View: 30
link: http://postimg.org/image/s31tttrkp/
Screenshot 2:
Y position: 9.7
Field of View: 30
link: http://postimg.org/image/f71sq0y4b/
Screenshot 3:
Y position: 11.41
Field of View: 30
link: http://postimg.org/image/3uk4az3d3/
Screenshot 4:
Y position: 1
Field of View: 140
link: http://postimg.org/image/bul9zwg7z/
Can this be a clue?
Just a bit of info on how transparency is typically implemented (not only by Unity).
Opaque objects can be drawn in any order (although sorting them front to back can improve GPU performance by taking advantage of early z-cull), because which pixels are visible can be deduced from the depth values stored in the z-buffer.
You can't rely on the z-buffer for transparency.
For rendering translucent objects, a typical approach is to draw them after all opaque objects, sorted in back-to-front order (transparent objects more distant from the camera are drawn first).
Now the question is: how do you sort the objects? With a perspective camera and meshes of arbitrary shape, the solution is not obvious.
For quad meshes oriented parallel to an orthographic camera's view plane, the z order is implicitly correct (that's why it always works for you).
You can also notice that the camera position influences the drawing order, because with a perspective camera the order is determined by the distance between each object's position and the camera.
So what can you do in Unity3D, in your specific use case?
A couple of tricks:
Explicitly set the render queue of the material (see the sketch below)
Explicitly set the render queue inside the shader (similar to the above, but it applies to every object using that shader)
Fake the depth using Offset in the shader (not that useful in your case, but worth knowing)
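As a rough sketch of the first trick, assuming the dots and the smiley each have their own material (the field names and queue values here are only examples), you could force the draw order from a script:

```csharp
using UnityEngine;

// Hypothetical setup script: lower render queue values are drawn earlier,
// so the dots end up behind the smiley regardless of camera distance.
public class RenderQueueSetup : MonoBehaviour
{
    public Renderer dotsRenderer;    // quad with the tiled dots
    public Renderer smileyRenderer;  // quad with the transparent smiley

    void Start()
    {
        // 3000 is Unity's default "Transparent" queue.
        dotsRenderer.material.renderQueue = 3000;   // drawn first
        smileyRenderer.material.renderQueue = 3001; // drawn on top
    }
}
```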
hope this helps
EDIT
I didn't know this, but the camera's transparency sorting mode appears to be customizable. So this is another solution, maybe the best one for your case if you want to keep using a perspective camera.
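A minimal sketch of that idea, assuming you keep the perspective camera and only switch its transparency sort mode so that transparent objects are sorted by distance along the view direction rather than by straight-line distance to the camera:

```csharp
using UnityEngine;

// Hypothetical setup script attached to any object in the scene.
public class SortModeSetup : MonoBehaviour
{
    void Start()
    {
        // Sort transparent objects as an orthographic camera would,
        // while keeping the perspective projection for rendering.
        Camera.main.transparencySortMode = TransparencySortMode.Orthographic;
    }
}
```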
If you are using the Sprite Renderer component to render the images, you have to change the rendering order with the Sorting Layer and Order in Layer parameters instead of changing the Y position.
Sorting layers can be added by clicking "Default" and choosing "Add Sorting Layer...". The order of the layers is changed by dragging them into a different order. With Order in Layer, lower numbers are rendered first, which means that higher numbers are drawn on top of lower ones.
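If you prefer to set the same parameters from code, a minimal sketch could look like this (the layer name "Foreground" is only an example and has to exist in the project's Sorting Layers):

```csharp
using UnityEngine;

// Hypothetical helper attached to the sprite that should be drawn on top.
public class SpriteSortingSetup : MonoBehaviour
{
    void Start()
    {
        var sr = GetComponent<SpriteRenderer>();
        sr.sortingLayerName = "Foreground";
        sr.sortingOrder = 2;   // higher numbers are drawn on top within the layer
    }
}
```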
I have joined a few image pieces together into a map, and I have also made them clickable.
But the problem is that the images have transparent parts, so when I click "Section A" it may trigger "Section B" instead, because a transparent part of "Section B" overlaps the Section A area.
So my question is: is there any property I can adjust so that the transparent parts are automatically excluded from clicks?
Or do I have to adjust the collider areas manually? I have a lot of images, and adjusting them one by one would take a lot of time.
For additional information, I am using Box Colliders.
Option 1. Pick among the layered sprites by accessing the texture of each sprite and reading the pixel at coordinates derived from the mouse position, the sprite's position on screen, and the texture bounds provided by the sprite. Assuming that the opaque parts of the sprites do not overlap, whichever sprite has an opaque pixel at the given coordinates is the result of the pick (see the sketch below).
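A rough sketch of Option 1, assuming the clickable pieces are Unity's built-in Quads (local X/Y from -0.5 to 0.5) with readable textures (Read/Write enabled in the import settings); the class name and threshold are placeholders:

```csharp
using UnityEngine;

// Hypothetical picker attached to the camera: on click, walk through all hits
// front to back and report the first piece that is opaque at the clicked point.
public class AlphaPicker : MonoBehaviour
{
    [SerializeField] private float alphaThreshold = 0.1f;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit[] hits = Physics.RaycastAll(ray);
        System.Array.Sort(hits, (a, b) => a.distance.CompareTo(b.distance)); // nearest first

        foreach (RaycastHit hit in hits)
        {
            var rend = hit.collider.GetComponent<Renderer>();
            var tex = rend != null ? rend.material.mainTexture as Texture2D : null;
            if (tex == null) continue;

            // For a built-in Quad, the local hit point maps directly to 0..1 UVs.
            Vector3 local = hit.transform.InverseTransformPoint(hit.point);
            Vector2 uv = new Vector2(local.x + 0.5f, local.y + 0.5f);

            if (tex.GetPixelBilinear(uv.x, uv.y).a > alphaThreshold)
            {
                Debug.Log("Picked " + hit.collider.name);
                break; // opaque regions are assumed not to overlap
            }
        }
    }
}
```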
Option 2. Replace the box colliders with procedurally generated mesh colliders. The procedure takes the sprite's texture as input and generates its outline(s) using, say, the marching squares algorithm. To convert the outline vertices into a mesh, the procedure may use any triangulation algorithm that works well with concave polygons.
I'm having an issue with drawing to areas outside of the MKMapRect passed to drawMapRect:mapRect:zoomScale:inContext in my MKOverlayView derived class. I'm trying to draw a triangle for each coordinate in a collection and the problem occurs when the coordinate is near the edge of the MKMapRect. See the below image for an example of the problem.
In the image, the light red boxes indicate the MKMapRect being rendered in each call to drawMapRect. The problem is illustrated in the red circle where, as you can see, only part of the triangle is being rendered. I'm assuming that it's being clipped to the MKMapRect, though the documentation for MKOverlayView's drawMapRect makes me think this shouldn't be happening.
From the documentation:
You should also not make assumptions that the view’s frame matches the bounding rectangle of the overlay. The view’s frame is actually bigger than the bounding rectangle to allow you to draw lines for things like roads that might be located directly on the border of that rectangle.
My current solution is to draw objects more than once if they fall within a map rect slightly larger than the one given to drawMapRect, but this causes me to draw some things more often than needed.
Does anyone know of a way to increase the size of the clipping area in drawMapRect so this isn't an issue? Any other suggestions are also welcome.
I ended up adding a buffer to the rect passed in to drawMapRect:mapRect:zoomScale:inContext and using that to determine which objects to draw. This results in more objects being drawn than needed, but not by much.