How to create an image mosaic like the Lego app? - iPhone

I'm trying to play around with turning an image into mosaic bricks like the Lego Photo app.
How is it done, and where can I find more info?

You basically need to iterate through the pixels to calculate the average colour for, say, every 4x4 block of pixels. Once you have this average colour, you 'round' it to the nearest colour that you can use in your mosaic. I don't know the specifics of it, but this sample code does exactly what you want.
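To make that concrete, here is a minimal sketch of the block-average-then-snap-to-palette idea in Python with NumPy and Pillow. The 4x4 block size follows the description above, but the brick palette below is an arbitrary placeholder, not the Lego app's actual colours:

    import numpy as np
    from PIL import Image

    # Placeholder brick palette (RGB); the real Lego app palette is not known here.
    PALETTE = np.array([
        [255, 255, 255],  # white
        [200,   0,   0],  # red
        [  0, 120, 200],  # blue
        [240, 200,   0],  # yellow
        [  0, 150,  60],  # green
        [ 30,  30,  30],  # black
    ], dtype=float)

    def mosaic(path, block=4):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        h, w, _ = img.shape
        h, w = h - h % block, w - w % block        # crop to a multiple of the block size
        img = img[:h, :w]

        # Average colour of every block x block tile.
        tiles = img.reshape(h // block, block, w // block, block, 3).mean(axis=(1, 3))

        # 'Round' each average to the nearest palette colour (Euclidean distance in RGB).
        dist = np.linalg.norm(tiles[:, :, None, :] - PALETTE[None, None, :, :], axis=-1)
        nearest = PALETTE[dist.argmin(axis=-1)]

        # Blow each tile back up to block x block pixels.
        out = np.repeat(np.repeat(nearest, block, axis=0), block, axis=1)
        return Image.fromarray(out.astype(np.uint8))

    # mosaic("photo.jpg", block=4).save("mosaic.png")

The same two steps (block averaging, then nearest-palette lookup) translate directly to Core Graphics or Core Image on the iPhone.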

Related

Mapbox GL JS - Stacking/Layering Image overlays on each other

I want to layer multiple image overlays on top of each other on a map (minimum 3, maximum 10). These are weather radar images, with each image representing a higher elevation in the storm. I have attached a screenshot; just imagine several layers over that image in the same spot.
I am hoping there's an easy image style, like "HeightAboveGround" or something similar, that will let me do this.
If this is possible, is there also a way to tilt/rotate image overlays? This would be such a nice feature in the map.
I am wanting to layer multiple image overlays over each other...with each image being a higher elevation in the storm.
There is no way to render image overlays at any height other than zero.
If this is possible, is there also a way to tilt/rotate image overlays?
If by "tilt", you mean, along an axis parallel to the ground, such that one end of the image is higher than the other, then there's no way to do that.
If, by "rotate" you mean, along an axis perpendicular to the ground, so that the image no longer aligns with north, then there's no way to do that either.
Sorry this couldn't be more helpful. :) If 3D is important to your application, you might want to consider using a true 3D library (as opposed to 2.5D) such as Cesium.

Is it possible to create a 3D photo from a normal photo?

If I have understood correctly, 3D 360 photos are created from a panorama photo, so I guess it should be possible to create a 3D photo (non-360) from a normal photo. But how? I did not find anything on Google! Any idea what I should search for?
If nothing is available (and I don't think there is), I'll try duplicating the same photo for each eye: one of the pictures shifted a little bit to the right, and the other shifted a little bit to the left. But I think the distortion algorithm is much more complicated.
Note: I'm also receiving answers here: https://plus.google.com/u/0/115463690952639951338/posts/4KdqFcqUTT9
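For reference, a minimal sketch of the naive duplicate-and-shift idea described above, using Pillow (the shift amount is an arbitrary guess, and this only fakes a fixed parallax for every object; it does not recover real depth):

    from PIL import Image

    def naive_stereo_pair(path, shift=16):
        """Build a crude side-by-side stereo image by offsetting one copy left and one right."""
        img = Image.open(path).convert("RGB")
        w, h = img.size

        # Crop slightly narrower windows, offset in opposite directions for each eye.
        left_eye  = img.crop((shift, 0, w, h))       # scene appears shifted left
        right_eye = img.crop((0, 0, w - shift, h))   # scene appears shifted right

        # Place the two views side by side (left eye on the left).
        pair = Image.new("RGB", (left_eye.width * 2, h))
        pair.paste(left_eye, (0, 0))
        pair.paste(right_eye, (left_eye.width, 0))
        return pair

    # naive_stereo_pair("photo.jpg").save("stereo_sbs.jpg")

Because every pixel gets the same offset, the result looks flat; as the answers below suggest, a more convincing effect needs either multiple perspectives or a learned estimate of depth.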
I am in no way certain of this, but my intuition on how 3D 360 images are created in GoogleVR is this:
As you take a panorama image, it actually takes a series of images. As you turn the phone around, the perspective changes slightly with each image, not only in angle but also in offset (except in the unlikely event that you spin the phone exactly around its own axis). When it stitches together the final image, it creates one image for each eye, picking suitable images from the series so that they create a 3D effect when viewed together. The same "area" of the image for each eye comes from a different source image.
You can't do anything similar with a single image. It's the multitude of images produced, each with a different perspective coming from the turning of the phone, that enables the algorithm to create a 3D image.
2D lacks a dimension and hence cannot be converted to 3D just like that, but there are clever ways around it. For example, the Google Pixel, even though it doesn't have two cameras, can make an image seem 3D by applying a machine-learning algorithm that creates the effect of perspective and depth through selective blurring.
3D photos can't be taken with a normal camera, but you can take 360 photos with one. There are many apps with which you can do this, and there are also many algorithms to do it programmatically.

What is contrast, and what is its effect on each pixel value?

I am new to image processing and am working with pixel values.
I just learned about image pixel brightness, and now I want to learn about contrast.
I searched the Internet but I'm still not getting it.
Do you know something that could help?
What is the concept of contrast?
What is its effect on each pixel of an image?
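A common textbook way to see the difference: brightness adds the same amount to every pixel, while contrast scales each pixel's distance from a middle grey, pushing dark pixels darker and bright pixels brighter. A minimal sketch of that linear adjustment (the 128 midpoint and the clipping to 0-255 are the usual conventions for 8-bit images, assumptions on my part rather than anything from this question):

    import numpy as np

    def adjust_contrast(pixels, factor):
        """Linear contrast: scale each pixel's distance from mid-grey (128) by `factor`.

        factor > 1 increases contrast (dark pixels get darker, bright pixels brighter),
        0 < factor < 1 decreases it, factor == 1 leaves the image unchanged.
        """
        pixels = pixels.astype(float)
        out = (pixels - 128.0) * factor + 128.0
        return np.clip(out, 0, 255).astype(np.uint8)

    # Example: a pixel at 100 with factor 2 becomes (100 - 128) * 2 + 128 = 72 (darker),
    # while a pixel at 200 becomes (200 - 128) * 2 + 128 = 272, clipped to 255 (brighter).
    row = np.array([0, 100, 128, 200, 255], dtype=np.uint8)
    print(adjust_contrast(row, 2.0))   # [  0  72 128 255 255]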

Extracting measurements from a finger via ROI and image processing in MATLAB

I am trying to do a number of things via MATLAB but I am getting a bit lost with which techniques to use. My ultimate goal is to extract various measurements from a user's fingerprint presentation, e.g. how far the finger over/undershoots, the co-ordinates of where the finger enters, and the angle of the finger.
In my current setup, I have a web camera recording footage of a top-down view of the presentation; I then break the video file down into individual frames. https://www.dropbox.com/s/zhvo1vs2615wr29/004.bmp?dl=0
What I am trying to work on at the moment is using ROI-based image processing to create a binary mask around the edges of the scanner. I'm using the im2bw function to get a binarised image and getting this as a result. https://www.dropbox.com/s/1re7a3hl90pggyl/mASK.bmp?dl=0
What I could use is some guidance on where to go from here. I want to be able to take measurements from the defined ROI to work out various metrics, e.g. how far a certain point is from the ROI, so I need some sort of border for the scanner edges. From my experience in image processing so far, this has been hard to define clearly. I would like to get a clearer image where the finger is outlined and defined and the background (i.e. the scanner light/blocks) is removed.
Any help would be appreciated.
Thanks
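One possible direction, sketched here in Python with OpenCV rather than MATLAB purely for illustration (the threshold value and kernel size are guesses, and it assumes the finger reads darker than the scanner backlight; the MATLAB equivalents would be along the lines of imbinarize/im2bw, imclose, bwareafilt and regionprops):

    import cv2
    import numpy as np

    def finger_measurements(frame_path, thresh=200):
        """Rough sketch: assume the finger is darker than the scanner backlight,
        keep the largest dark blob and take simple measurements from it."""
        gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)

        # Pixels darker than the (guessed) threshold become white, the bright backlight black.
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)

        # Close small gaps so the finger comes out as one connected blob.
        kernel = np.ones((7, 7), np.uint8)
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

        # Take the largest contour as the finger and measure it.
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        finger = max(contours, key=cv2.contourArea)

        x, y, w, h = cv2.boundingRect(finger)                 # where the finger sits in the frame
        (cx, cy), (mw, mh), angle = cv2.minAreaRect(finger)   # rotated box gives an orientation angle
        return {"bbox": (x, y, w, h), "centre": (cx, cy), "angle": angle}

    # print(finger_measurements("004.bmp"))

The minAreaRect angle gives one way to estimate the finger's orientation, and distances from the bounding box to the scanner edges could serve as the over/undershoot measurements.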

Shape Detection (circle, square, rectangle, triangle, ellipse) for a camera-captured image + iOS 5 + OpenCV

I am new to OpenCV and need to know which OpenCV methods detect different shapes (circle, square, rectangle, triangle, ellipse) in a camera-captured image on the iPhone.
So could someone point me in the right direction (references/articles/anything) as to which techniques are best to get this done?
Thanks..
iOmi
First you will probably need to look at an edge detector such as Canny to extract the shapes into a binary image (although this may be expensive for the iPhone).
For circles I would have a look at HoughCircles.
For squares and rectangles you should look at the findContours method and the sample code squares.cpp in the samples directory of your OpenCV download.
With a quick Google search I was able to find an article about detecting shapes in C#, which roughly corresponds to the methods you would use in another language with the OpenCV library.
I have not used OpenCV on iOS, but I hope this will help get you started.
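Put together, that pipeline looks roughly like the following, sketched in Python with OpenCV rather than the iOS wrapper (the Canny thresholds, Hough parameters and area cut-off are arbitrary illustrative values you would need to tune per image):

    import cv2
    import numpy as np

    def detect_shapes(image_path):
        """Canny edges + findContours/approxPolyDP for polygons, HoughCircles for circles."""
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)

        shapes = []

        # Circles via the Hough transform (parameters are illustrative, tune per image).
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                                   param1=100, param2=60, minRadius=10, maxRadius=0)
        if circles is not None:
            shapes += [("circle", (int(x), int(y), int(r))) for x, y, r in circles[0]]

        # Polygons via Canny edges + contour approximation (the squares.cpp approach).
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for cnt in contours:
            if cv2.contourArea(cnt) < 100:      # skip small/noisy contours
                continue
            approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
            if len(approx) == 3:
                shapes.append(("triangle", approx))
            elif len(approx) == 4:
                x, y, w, h = cv2.boundingRect(approx)
                shapes.append(("square" if 0.95 < w / float(h) < 1.05 else "rectangle", approx))
            elif len(approx) > 6:
                shapes.append(("ellipse (or circle)", approx))   # crude heuristic
        return shapes

    # print(detect_shapes("captured.jpg"))

The vertex-count test after approxPolyDP is essentially the trick squares.cpp uses for quadrilaterals, extended here with a rough heuristic for triangles and ellipse-like contours.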