Extracting similar image patches captured from 2 different lenses of the same camera - matlab

[Example image]
I have 2 images captured from my iPhone 8 Plus, each taken at a different zoom level of the iPhone camera (one image was captured at 1x zoom and the other at 2x zoom). I now want to extract the patch of the 1x-zoom picture that corresponds to the zoomed-in image, i.e. the 2x-zoom image. How would one go about doing that?
I understand that SIFT features may be helpful, but is there a way I could use the information in the camera extrinsic and intrinsic matrices to find the desired region of interest?
(I'm just looking for hints)
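One way to make those hints concrete: as long as the camera centre barely moves between the two shots, the two views are related by a homography; for a pure zoom it is approximately K1 * inv(K2), a scale about the principal point, so you can either build it from the intrinsics or, more robustly, estimate it from matched features. Below is a rough sketch of the feature-matching route written with OpenCV in C++ rather than MATLAB (the equivalent MATLAB tools are detectSURFFeatures / extractFeatures / matchFeatures / estimateGeometricTransform2D); the file names are placeholders and the matching step is deliberately minimal, so treat it as an outline rather than a drop-in answer.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Placeholder file names: the 1x (wide) frame and the 2x (tele) frame.
    cv::Mat wide = cv::imread("zoom1x.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat tele = cv::imread("zoom2x.jpg", cv::IMREAD_GRAYSCALE);

    // Detect and describe SIFT features in both images.
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpWide, kpTele;
    cv::Mat descWide, descTele;
    sift->detectAndCompute(wide, cv::noArray(), kpWide, descWide);
    sift->detectAndCompute(tele, cv::noArray(), kpTele, descTele);

    // Brute-force matching with cross-checking (a ratio test would be better).
    cv::BFMatcher matcher(cv::NORM_L2, true);
    std::vector<cv::DMatch> matches;
    matcher.match(descTele, descWide, matches);

    std::vector<cv::Point2f> ptsTele, ptsWide;
    for (const cv::DMatch& m : matches) {
        ptsTele.push_back(kpTele[m.queryIdx].pt);
        ptsWide.push_back(kpWide[m.trainIdx].pt);
    }

    // Robustly fit the homography that maps 2x coordinates into the 1x image.
    cv::Mat H = cv::findHomography(ptsTele, ptsWide, cv::RANSAC);

    // Project the 2x frame's corners into the 1x image; that quadrilateral
    // (or its bounding box) is the patch corresponding to the zoomed view.
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f}, {(float)tele.cols, 0.f},
        {(float)tele.cols, (float)tele.rows}, {0.f, (float)tele.rows}};
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(corners, projected, H);

    cv::Rect roi = cv::boundingRect(projected) & cv::Rect(0, 0, wide.cols, wide.rows);
    cv::imwrite("patch_from_1x.png", wide(roi));
    return 0;
}

If you prefer the intrinsics route and there is no translation between shots, the mapping from the 2x image into the 1x image reduces to scaling coordinates about the principal point by roughly f1/f2 (about 1/2 here), so the region of interest is approximately the central half-width, half-height region of the 1x frame.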

Related

Unity blurry and pixelated sprites in editor (no pixel art)

I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them as PNG files at 90 dpi (I also tried 360, but nothing changed). My problem is that when I run the game in the editor the graphics seem to be "pixelated" and blurry. In my sprite settings I've set Pixels Per Unit to 256, checked Generate Mip Maps, I am using the Bilinear filter mode, and the aniso level is 0. I have also set the max size to 256 and compression to high quality (my Main Camera's size is 10, but I tried changing that and it made no difference to the quality of the sprites). What can I do to "perfectly" display my sprites? Do I have to export them in some other way from Inkscape, or do I have to change some of Unity's settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):
Because of scaling.
The on-screen region where those images are displayed isn't 256x256 pixels, which means they must be scaled in some manner to fit the desired region. Camera rendering is notoriously bad at scaling. As your images aren't vector (and Unity doesn't support vector graphics formats anyway), scaling will always result in a loss of detail, particularly hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.

ImageJ: Overlay 2 Images When One is Distorted

I am asking for a step-by-step process with the appropriate plugins (I have been attempting this with multipoint and landmark correspondence). Please include images in your answer if possible.
I want to overlay two scientific images
The images are not oriented the same, because the second image is distorted (it was collected at a 45° angle) and the object was also at a different orientation (flipped horizontally and slightly rotated)
In Adobe Photoshop I transformed the distorted image to overlay with the first image by eyeballing the match, as you can see below, but I am having difficulty using ImageJ to perform this overlay. I have been told that my eyeballing method in Adobe Photoshop will not be sufficient for the methods section of my manuscript and that I must use a scientific program such as ImageJ.
I tried to follow instructions from the ImageJ forum for Multipoint and Landmark Correspondence but it does not overlay the two images or transform the second image to match the first. Rather, it distorts a portion of the second image and appears to crop the rest out.

Pixel-perfect shader in Unity ShaderLab

In Unity, when writing shaders,
is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels?
I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or at any event with an ortho camera).
(Of course, normally to show a physical-pixel-perfect PNG on screen, you merely have, say, a 400-pixel PNG and arrange the scaling so that the shader happens to be drawing to precisely 400 physical pixels. What I'm wondering about is a shader that just draws, for example, a physical-pixel-perfect black line - it would have to "know" exactly where the physical pixels are.)
There is a ShaderLab built-in value called _ScreenParams.
_ScreenParams.x is the screen width in pixels.
_ScreenParams.y is the screen height in pixels.
Here's the documentation page: http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
I don't think this is going to happen. Your rendering is tied to the currently selected video mode, and it doesn't even have to match your physical screen size (if that is what you mean by pixel-perfect).
The closest you are going to get is to render at the recommended resolution for your display device and use a pixel shader to shade the entire screen. This way, one 'physical pixel' is going to be roughly equal to one actual rendered pixel. Other than that, it is impossible to associate physical pixels (that is, your display's) with rendered ones.
This is unless, of course, I somehow misunderstood your intentions.
is it possible for the shader itself to "know" what the screen resolution is
I don't think so.
and indeed for the shader to control single physical pixels?
Yes. Pixel shaders know what pixel they are drawing and can also sample other pixels.
First of all, please define 'Pixel perfect' and 'Physical pixel'.
If by physical pixel you mean your display's pixel (monitor, laptop display, or any other hardware you might use), then you are out of luck. Shaders don't operate on those; they operate on their own 'abstract pixels'.
You can think about it in this way:
Your graphics are rendered into a picture with some configurable resolution (say 800x600 pixels). You can still display this picture full screen on a 1920x1080 display no problem, though it will look crappy. This is what happens with the actual display and video card rendering: what determines the actual number of rendered pixels is your video mode (the picture's resolution in the above example), and physical pixels are your display's pixels. When rendering, you can only operate on the first kind.
This leads us to the conclusion that when you render the graphics at exactly your display's native resolution, you can safely say that you are indeed rendering 'physical pixels'.
In Unity, you can pass the renderer some external data, which might include your current screen resolution (for example, as a Vector2).
However, you most likely don't need any of this, since pixel shaders already operate on pixels (rendered pixels, determined by your current video mode). That means that if you use a resolution lower than your native one, you most likely will not be able to hit individual physical pixels.
Hope it helped.

Image size incorrect in Unity

I have a Unity 2D project with a fixed screen size of 800x450 pixels.
I have imported a background image that is also 800x450 pixels.
When placed on the stage, the image only takes up half of the screen.
The scale of the image is set to 1,1. The Z position is 0.
Why is the image displayed too small? How can I display the image at the correct resolution?
Does this mean that I have to design all my game assets at 2x the required size? Or that I somehow have to set the scale for all imported assets at 2? What is the recommended workflow?
EDIT
I have added a screenshot of the camera settings:
I would try making your camera orthographic and setting the camera's size (not the transform) to half the height that you would like it to be (225). The orthographic size is half the visible height in world units, so this assumes your sprites are imported at 1 pixel per unit; in general it works out to the screen height in pixels divided by (2 × pixels per unit).
Also, if you are looking for a pixel-perfect game, here is a pretty good article from Unity about how to make that work; it explains some of the camera aspect ratio and scaling issues:
http://blogs.unity3d.com/2015/06/19/pixel-perfect-2d/

iPhone - save image at higher resolution without pixelating it

I am using the image picker controller to get an image from the user. After some operations on the image, I want the user to be able to save it at 1600x1200 px, 1024x1024 px, or 640x480 px (something like the iFlashReady app).
The last option is the size of the image I get in the UIImagePickerControllerDelegate method (when using an image from the camera roll).
Is there any way we can save the image at these resolutions without pixelating the images?
I tried creating a bitmap context with the width and height I want (CGBitmapContextCreate) and drawing the image there. But the image gets pixelated at 1600x1200.
Thanks
This is non-trivial. Your image just doesn't have enough data. To enlarge it you'll need to resample the image and interpolate between pixels (like Photoshop does when you resize an image).
Most likely you'll want to use a 3rd party library such as:
http://code.google.com/p/simple-iphone-image-processing/
This performs this and many other image processing functions.
From faint memories of a computer vision class long ago, I think what you do is blur the image after the up-convert.
Before drawing, try adjusting your CGBitmapContext's antialiasing and/or interpolation quality:
CGContextSetShouldAntialias( context, true ) ;
CGContextSetInterpolationQuality( context, kCGInterpolationHigh ) ;
If I remember right, antialiasing is turned off on CGContext by default.
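For completeness, here is a rough sketch of the redraw described in the question, with the antialiasing and interpolation settings above applied, as a single Core Graphics helper (plain C that also compiles as Objective-C++). The function name and pixel-format choices are mine, not from the question, and as noted, upscaling this way cannot add real detail - it only interpolates more smoothly.

#include <CoreGraphics/CoreGraphics.h>

// Illustrative helper: redraw `source` into a bitmap context of the target
// size with antialiasing and high-quality interpolation enabled.
CGImageRef CreateResizedImage( CGImageRef source, size_t width, size_t height )
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB() ;
    CGContextRef context = CGBitmapContextCreate( NULL, width, height, 8,
                                                  width * 4, colorSpace,
                                                  (CGBitmapInfo)kCGImageAlphaPremultipliedLast ) ;
    CGColorSpaceRelease( colorSpace ) ;
    if ( context == NULL ) return NULL ;

    CGContextSetShouldAntialias( context, true ) ;
    CGContextSetInterpolationQuality( context, kCGInterpolationHigh ) ;
    CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), source ) ;

    CGImageRef scaled = CGBitmapContextCreateImage( context ) ;
    CGContextRelease( context ) ;
    return scaled ;  // the caller is responsible for CGImageRelease
}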