resizing images for stereo calibration - matlab

I am trying to stereo-calibrate two different cameras.
It turns out the images must be the same size, but in my case the resolutions differ (1920x1080 and 2048x1024).
I'm not sure that changing the resolution of one camera's images is a good idea.
What is the best solution in this situation? If I should resize, which images should I change (reduce the larger resolution to the smaller, or vice versa)?
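For background (not specific to MATLAB's calibration tools): resizing an image by a factor scales the camera's intrinsic parameters by the same factor, so downscaling the larger images is usually safe as long as the intrinsics are estimated, or rescaled, at the new size. A minimal Python sketch, with made-up matrix values for illustration:

```python
def scale_intrinsics(K, sx, sy):
    """Scale a 3x3 camera intrinsic matrix when the image is resized.

    K is [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; sx and sy are the
    horizontal and vertical scale factors (new_size / old_size).
    """
    return [
        [K[0][0] * sx, 0.0,          K[0][2] * sx],
        [0.0,          K[1][1] * sy, K[1][2] * sy],
        [0.0,          0.0,          1.0],
    ]

# Illustrative intrinsics for the 2048x1024 camera (values invented).
K = [[1000.0, 0.0, 1024.0],
     [0.0, 1000.0, 512.0],
     [0.0, 0.0, 1.0]]

# Uniform downscaling keeps the aspect ratio and the calibration meaningful.
K_half = scale_intrinsics(K, 0.5, 0.5)
print(K_half)  # -> [[500.0, 0.0, 512.0], [0.0, 500.0, 256.0], [0.0, 0.0, 1.0]]
```

Note that resizing 2048x1024 directly to 1920x1080 would need non-uniform factors, which distorts the aspect ratio; cropping plus uniform scaling avoids that.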

Related

Agora Video Resolution In Unity

I'm working on an app that uses the Agora SDK for Unity. I'm rendering video on UI Raw Images and using a video resolution of 480x480. The thing is that the video in those images looks a bit stretched; to handle that I set the orientation to fixed portrait, and it looks better but not perfect. So I want to ask: do I need to resize the images according to the video resolution, i.e. if the video resolution is 480x480, should the Raw Image size be 480x480 in width and height as well? I also need to present a user's video in a much larger image, so what video resolution should I choose given a Raw Image size of 980 (w) x 1600 (h)? I just need the videos to best fit the images so it looks good. Any help would be much appreciated.
Yes, in general you should resize the image according to the resolution. There is a callback method that tells you what the video resolution is; you can find that information in VideoSurface's code, where UpdateVideoRawData() is referenced. The Raw Image itself doesn't have to match the size exactly, and the data will fill in within its boundary, but you should maintain the width/height ratio. If you want to crop the image, you may need to apply a masking technique. Cropping the video data itself is another approach, but that adds CPU overhead.
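The aspect-ratio-preserving fit described above can be sketched in plain Python (not the Agora SDK; the function name is made up for illustration):

```python
def fit_within(video_w, video_h, box_w, box_h):
    """Largest size with the video's aspect ratio that fits inside the box."""
    scale = min(box_w / video_w, box_h / video_h)
    return round(video_w * scale), round(video_h * scale)

# A 480x480 (square) stream shown inside a 980x1600 Raw Image:
print(fit_within(480, 480, 980, 1600))   # -> (980, 980)

# A 16:9 stream shown inside a square 960x960 region:
print(fit_within(1920, 1080, 960, 960))  # -> (960, 540)
```

Sizing the Raw Image to the returned dimensions (letterboxing the remainder) keeps the video unstretched.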

Extracting similar image patches captured from 2 different lenses of the same camera

Example Image
I have 2 images captured from my iPhone 8 Plus, each taken at a different zoom level of the iPhone camera (one image was captured at 1x zoom and the other at 2x zoom). Now I want to extract the patch in the 1x picture that corresponds to the zoomed-in image, i.e. the image with 2x zoom. How would one go about doing that?
I understand that SIFT features may be helpful, but is there a way I could use the information in the camera extrinsic and intrinsic matrices to find the desired region of interest?
(I'm just looking for hints.)
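One hint: if the 2x view were a pure centered digital crop, the corresponding region of interest in the 1x image would simply be its central half. This only roughly holds on the 8 Plus, which switches to a second physical lens at 2x, so feature matching (e.g. SIFT plus a homography) is the robust route; but the idealized geometry is a useful starting point. A sketch of that geometry in Python (function name is illustrative):

```python
def roi_for_zoom(width, height, zoom):
    """Region in the 1x image corresponding to a centered digital zoom.

    Returns (x, y, w, h) of the crop whose content matches the zoomed
    image, assuming the zoom is purely a centered crop (no lens change).
    """
    w, h = width / zoom, height / zoom
    x, y = (width - w) / 2, (height - h) / 2
    return int(x), int(y), int(w), int(h)

# A 12 MP iPhone frame at 2x zoom maps to the central half of the 1x frame:
print(roi_for_zoom(4032, 3024, 2))  # -> (1008, 756, 2016, 1512)
```

In practice you would use this rectangle only to seed or sanity-check a feature-based alignment, since the two lenses also differ in viewpoint and intrinsics.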

Unity blurry and pixelated sprites in editor (no pixel art)

I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them (PNG files) at 90 dpi (I also tried 360, but nothing changed). My problem is that when I run the game in the editor the graphics seem "pixelated" and blurry. In my sprite settings I've set Pixels per Unit to 256, checked Generate Mip Maps, I am using Bilinear Filter Mode, and the aniso level is 0. I have also set the max size to 256 and compression to high quality (my Main Camera's size is 10, but I tried changing that and nothing changed as far as the quality of the sprites). What can I do to "perfectly" display my sprites? Do I have to export them some other way from Inkscape, or do I have to change some of Unity's settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem file and how it is shown in the editor):
Because scaling
The region where those images are displayed isn't 256x256, which means they must be scaled in some manner to fit the target region. Camera rendering is notoriously bad at scaling. Since your images aren't vector (and Unity doesn't support vector graphics formats anyway), scaling will always result in a loss of detail, such as hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.
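For reference, the mipmap option above means the GPU stores a chain of pre-scaled copies of the texture, halving each dimension per level; the renderer then samples from whichever level is closest to the on-screen size. A small Python sketch of the chain for a 256x256 sprite:

```python
def mip_sizes(w, h):
    """Dimensions of each mipmap level, halving down to 1x1."""
    sizes = [(w, h)]
    while w > 1 or h > 1:
        w, h = max(w // 2, 1), max(h // 2, 1)
        sizes.append((w, h))
    return sizes

print(mip_sizes(256, 256))
# -> [(256, 256), (128, 128), (64, 64), (32, 32), (16, 16),
#     (8, 8), (4, 4), (2, 2), (1, 1)]
```

If your sprite renders at, say, 180 pixels on screen, it will be sampled from between the 256 and 128 levels, which is where the blur comes from; authoring a dedicated 180-pixel asset (option one) avoids that.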

How to handle resolution change 320x480 => 640x960 related to gameplay

I have decided to have two sets of images for my iPod game: one at 320x480 and the other for the retina display. I can switch between them happily, but this forces me to add extra code to handle the change in resolution.
My game is played in screen space on a grid, so if I have 32-pixel tiles, I will have to use offsets of 32 at the low resolution and 64 on retina (because the resolution doubles). For a simple game this is no problem, but what about more complex games? How do you handle this without hardcoding things depending on the target resolution?
Of course, an easy way to bypass this is to just release a 320x480 version and let the hardware upscale, but this is not what I want because of blurry images. I'm a bit lost here.
If you have to, you can convert from points to pixels (and vice versa) easily by multiplying or dividing the pixel/point position by the contentScaleFactor of your view. Normally, though, this is done for you automatically if you just stick to using points instead of pixels.
This is automatic. You only need to add image files suffixed '@2x' for the retina resolution.
Regarding pixels: in your program you work in points, which are translated to pixels by the system. Screen dimensions are 320x480 points on both retina and non-retina iPhones.
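The point/pixel conversion described above can be sketched in plain Python, with the contentScaleFactor passed in explicitly (it is 2.0 on a retina iPhone, 1.0 otherwise):

```python
def points_to_pixels(points, scale_factor):
    """Convert a coordinate or length in points to device pixels."""
    return points * scale_factor

def pixels_to_points(pixels, scale_factor):
    """Convert a coordinate or length in device pixels back to points."""
    return pixels / scale_factor

# A 32-point grid tile is 32 pixels on a non-retina screen
# and 64 pixels on a retina screen:
print(points_to_pixels(32, 1.0))  # -> 32.0
print(points_to_pixels(32, 2.0))  # -> 64.0
```

If the grid logic works entirely in points, the same 32-unit offsets apply at both resolutions and no per-resolution branching is needed.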

iPhone camera images as OpenGL ES textures

Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem is that the iPhone screen is 320x480 (no status bar), so the image won't have power-of-two dimensions. Is the main option to copy it into a 512x512 texture and adjust the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
An alternative would be dividing the picture into squares 32 pixels wide and high (i.e. tiling), resulting in 10x15 tiles. Displaying it would, however, involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory using a tiled approach.
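The padding approach from the accepted answer can be sketched as: round each dimension up to the next power of two, then restrict the texture coordinates to the sub-rectangle actually covered by the image (plain Python; helper names are illustrative):

```python
def next_pow2(n):
    """Smallest power of two that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def uv_extent(img_w, img_h):
    """Padded texture size and the max UV coordinates covering the image."""
    tw, th = next_pow2(img_w), next_pow2(img_h)
    return (tw, th), (img_w / tw, img_h / th)

# A 320x480 camera image padded into a power-of-two texture:
print(uv_extent(320, 480))  # -> ((512, 512), (0.625, 0.9375))
```

The quad's texture coordinates then run from (0, 0) to (0.625, 0.9375) instead of (0, 0) to (1, 1), so the padding region is never sampled.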