I'm trying to remove the specular reflection of a ring light from my images. I want to remove it from both the test image and the flat image before performing flat-field correction. Here are the two images:
Test Image
Flat / Background Image
The light and the camera are placed directly above the flat object (changing the geometry is not feasible). My understanding is that the signal received by the camera is the sum of the diffuse color and the specular reflection. So, to estimate the reflection component, I placed a black surface (same material as the original object) in the scene and captured the reflected component. The black surface was captured as follows:
Black image
However, when I try to subtract it from my image, the ring-shaped area in both images becomes darker, which implies that the black surface had a stronger specular reflection component than either the test image or the flat image. After subtracting the specular component, the image looks like this:
Specular free image
Can someone tell me why that is?
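For reference, the processing I'm describing amounts to something like the following sketch (not my actual code; it assumes each image has already been decoded into a flat [Float] array of per-channel values in 0...255):

```swift
// Subtract the estimated specular component captured from the black surface.
// Wherever the black capture is brighter than the scene, the difference
// clamps to zero -- which is exactly what shows up as the dark ring.
func subtractSpecular(_ image: [Float], _ specular: [Float]) -> [Float] {
    zip(image, specular).map { pixel, spec in max(pixel - spec, 0) }
}

// Standard flat-field correction, applied after the specular subtraction.
func flatFieldCorrect(test: [Float], flat: [Float]) -> [Float] {
    let flatMean = flat.reduce(0, +) / Float(flat.count)
    return zip(test, flat).map { t, f in t / max(f, 1e-6) * flatMean }
}
```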
Reflectance is a function of wavelength. So is the sensitivity of your camera.
Any calibration done with a homogeneous reflectance target will fail on anything inhomogeneous, no matter whether the inhomogeneity comes from surface roughness or from colour.
Also, your image is terribly underexposed.
I have a 3D model in .obj form and a corresponding .mtl file.
I dragged them both into Xcode and converted the .obj model to a .scn file.
Xcode loads the file correctly and also applies the material, so there is a color in the diffuse slot. (See image below.) The problem is that the model stays white unless I add an emission color. (See the two attached images, one with and one without an emission color; in one the model is visible, in the other it is not.)
How do I apply the material color correctly?
Assuming the first image shows the result you don't want and the second image is more or less the result you're after, I can tell you this: the emission color in the first image is set to pure white, which means the emission is at its maximum, and that makes the model almost invisible against a white background. The second image has only some emission (a gray color), so the geometry becomes more visible.
If your geometry is neither a lamp nor any other light source, set the emission color to UIColor.black and your object should look just fine.
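In code that would look something like this (a minimal sketch; the file name model.scn and the node name "model" are placeholders for whatever your asset actually contains):

```swift
import SceneKit
import UIKit

// Load the converted scene and grab the model's material.
let scene = SCNScene(named: "model.scn")!
if let material = scene.rootNode
    .childNode(withName: "model", recursively: true)?
    .geometry?.firstMaterial {
    // The diffuse color from the .mtl file stays untouched; setting the
    // emission to black stops the model from glowing uniformly white and
    // lets the scene lights shade it instead.
    material.emission.contents = UIColor.black
}
```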
In Unity UI, I have an ordinary RawImage
(It's just sitting on a Panel)
I have a png, mask.png which is just a white shape on transparent.
How do you mask the RawImage?
I have tried Mask, SpriteMask in many ways and it just won't work.
Even RectMask2D would be fine (to mask to a square shape) but it just doesn't seem to work?
Should I use Mask or SpriteMask?
If so, do you have to (or have to not) set a Material on the mask? On the RawImage?
I assume the Mask game object should be the parent of the RawImage, but??
What is the secret?
The RawImage component should work with masks just like the normal Image component does, provided that its Maskable checkbox is ticked.
Note that the Mask or Rect Mask 2D should be the parent of the (Raw)Images you are trying to mask. The hierarchy should look something like this:
Canvas
| MaskObject (contains the (Raw)Image and the Mask or Rect Mask 2D component)
| | Object to mask (contains the (Raw)Image to be masked)
Notice how the white square (Image) gets cut off by the red square (Mask).
The component types between the masking image and the masked image do not need to match either. A RawImage can mask an Image and vice versa.
The masking objects are again shown in red, and the white squares are the masked objects. The GameObjects' names show which (Raw)Image component each one uses.
The only exception is the SpriteMask, which works exclusively with the Sprite Renderer component.
There is not much explanation from Unity on masks... this being the closest thing there is to an explanation.
Some more info about masks:
Masks work by comparing the ref(erence) values in the stencil buffers of the two (or more) objects (in this case images), and only drawing the pixels where the stencil buffer of both equals 1, using the Stencil's Comp(arison) function. This means you can create your own implementation of masks by writing a shader that uses the stencil buffer. That comes in handy when, for example, you want something like an inverted mask, where pixels are drawn everywhere except where the mask is (creating holes in an image). :)
Just put the RawImage as a child of a UI Image, call the parent image "Mask", and put any sprite shape into the "Mask" image. Then go to Add Component → UI → Mask. Please look at this link: https://learn.unity.com/tutorial/ui-masking#6032b6fdedbc2a06c234cd3e It works well for me.
In HLSL, how can I calculate lighting based on pixels of a texture, instead of pixels that make up the object?
In other words, if I have a 64x64px texture being rendered on a 1024x768px screen, I want to calculate the lighting as it affects the 64x64px space, resulting in jagged pixels instead of a smooth line.
I've researched dozens of answers, but I'm not sure how to determine, for every fragment, whether it belongs to a texture pixel that should be fully lit or not. Maybe this is the wrong approach?
The current implementation uses a diffuse texture and a normal map. It results in what appear to be artifacts (diagonal lines) in the output:
Note: the reason it almost looks correct is the normal map, which gives some adjacent pixels normals that are angled just enough to light some pixels and not others.
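For what it's worth, one common technique for this effect (my suggestion, not something from the original post) is to snap each fragment's UV to the center of the texel it falls in before sampling the normal map and computing the lighting, so every screen pixel covering the same texel is lit identically. Sketched here in Swift for illustration; in the actual HLSL pixel shader the arithmetic would be the same:

```swift
import simd

// Quantize a UV coordinate to the center of the texel it falls in.
// Sampling the normal map at the snapped UV makes the lighting constant
// across each texel, producing hard, pixelated lighting edges.
func texelCenterUV(_ uv: SIMD2<Float>, textureSize: SIMD2<Float>) -> SIMD2<Float> {
    (floor(uv * textureSize) + 0.5) / textureSize
}
```

Calling texelCenterUV(uv, textureSize: SIMD2(64, 64)) would give the behaviour described above for the 64x64 texture in the question.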
I made a new diffuse material for grass in Unity 5. When I apply it to something small, it shows all the detail of the grass texture, but when I apply the same material to a much larger object, only a solid color is visible with none of the texture's detail.
Refer to the image below.
Both the cube and the floor have the same material.
MaterialSetting
Increase the tiling. With the settings shown in your image, the texture is stretched to cover the object exactly once (1 × 1), however large the object is. The higher the tile count, the more times the image is repeated across the object.
Be aware that the x and y values may need to differ from one another, depending on the dimensions of the GameObject the material is attached to.
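For example (numbers made up): if the floor is 10 × 10 world units and you want the grass image to repeat once per unit, set the tiling to x = 10, y = 10, while a 1 × 1 cube with the same material keeps x = 1, y = 1. In general, the tile count is the object's size divided by the size you want one copy of the texture to cover.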
Say the user taps 4 spots on the iPhone, defining an irregular 4-sided polygon (in 2D space). Is there a way to map/fit a (potentially highly distorted) image onto this shape, without using OpenGL?
Something like:
Is my only option to somehow calculate the 3D space that my irregular 4-sided shape sits in (based on where the tapped 2D points are), create an OpenGL plane in that space, and map my texture onto it flatly? It seems like there should be an easier way...
Thanks in advance.
Update: After diving into OpenGL I'm almost there... but I still can't get the texture to distort correctly. The triangulation seems to be messing with the texture mapping:
I can't answer your question completely, but one thing I can say is that you don't need to think about any conversion or mapping to 3D. Using OpenGL you can draw the shape directly in 2D and have the texture mapped as you want; no fancy maths or conversions needed. It's no more complicated than drawing a rectangle: OpenGL doesn't care that your 4-sided shape isn't actually rectangular.
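As a rough sketch of what that means (names and values are made up, and the GL state setup is omitted): pass the four tapped points straight through as 2D positions, paired with the four texture corners as UVs:

```swift
import GLKit

// The four tapped points, in screen space (hypothetical values),
// and the texture corners listed in the same order.
let taps: [SIMD2<Float>] = [
    SIMD2(50, 80), SIMD2(300, 60), SIMD2(280, 400), SIMD2(40, 380),
]
let uvs: [SIMD2<Float>] = [
    SIMD2(0, 0), SIMD2(1, 0), SIMD2(1, 1), SIMD2(0, 1),
]

// Interleave into x, y, u, v per vertex; no 3D projection or plane math.
var vertices: [GLfloat] = []
for (p, t) in zip(taps, uvs) {
    vertices += [p.x, p.y, t.x, t.y]
}
// ...upload `vertices`, bind the texture and attribute pointers, then:
// glDrawArrays(GLenum(GL_TRIANGLE_FAN), 0, 4)
```

One caveat that may explain the update: the quad is split into two triangles and the UVs are interpolated per triangle, so a strongly non-rectangular quad shows a seam or distortion along the diagonal. Subdividing the quad into a finer grid (or using projective texture coordinates) is the usual fix.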