Algorithm for "filling in" texture in a 2D image

I recall seeing a paper a while back for an algorithm that could automatically and seamlessly "graft" texture from parts of an image onto another part of an image.
The approach was something along the lines of the following:
You'd build up a database of small squares of pixels (perhaps 8x8) from the parts of the picture that are present.
You'd then pick an empty pixel (the "destination" for the texture graft) to fill in, and look for one of the squares in your database that most closely matches the surrounding pixels. You'd then color the empty pixel according to the color of the corresponding pixel in the square you find. Then you pick another empty pixel and repeat until there are no empty pixels remaining.
Of course, this is only a vague description because I can't find any references to this algorithm to refresh my memory of the details! Can anyone help?

Sounds a lot like Texture Synthesis by Non-parametric Sampling (Efros and Leung).
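For reference, a rough MATLAB sketch of that neighbourhood-matching idea, heavily simplified compared to the paper (fixed 9x9 window, exhaustive patch search, no confidence-based fill ordering); img is a grayscale image in [0,1], missing is a logical mask of the pixels to fill, and the missing region is assumed to be surrounded by known pixels:

    function img = fillTexture(img, missing)
        w = 4;                                        % half-width of the 9x9 comparison window
        [H, W] = size(img);
        while any(missing(:))
            % unfilled pixels that border at least one filled pixel (the fill front)
            front = missing & imdilate(~missing, ones(3));
            [rs, cs] = find(front);
            for k = 1:numel(rs)
                r = rs(k); c = cs(k);
                % this sketch simply skips pixels too close to the image border
                if r <= w || c <= w || r > H - w || c > W - w
                    missing(r, c) = false;  continue;
                end
                nbhd  = img(r-w:r+w, c-w:c+w);        % surrounding pixels of the destination
                valid = ~missing(r-w:r+w, c-w:c+w);   % which of them are actually known
                best = inf;  bestVal = 0;
                % exhaustive search over fully known source windows (slow but simple)
                for i = 1+w : H-w
                    for j = 1+w : W-w
                        srcMiss = missing(i-w:i+w, j-w:j+w);
                        if any(srcMiss(:)), continue; end
                        cand = img(i-w:i+w, j-w:j+w);
                        d = sum(sum(((cand - nbhd).^2) .* valid));   % SSD over known pixels only
                        if d < best
                            best = d;  bestVal = cand(w+1, w+1);
                        end
                    end
                end
                img(r, c) = bestVal;                  % copy the centre pixel of the best match
                missing(r, c) = false;
            end
        end
    end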

Related

mathematical equations for blending two images

I have seen many tutorials where people blend two images placed on top of each other very nicely in Photoshop. For example, here are two images placed on top of each other:
Then, after some work in Photoshop, the edges (around the smaller image) are erased and the two images are nicely mixed.
For example, this is a possible end result:
As can be seen, there is no edge and the two images are blended very nicely, without blurring.
Can someone point me to an article or post that shows the math behind it? If there is MATLAB code that can do it, that would be even better. Or at least, can someone tell me the correct term for this so I can search for the topic?
Straight alpha blending alone is not sufficient, as it will perform a uniform mixing of the two images.
To achieve nice-looking results, you will need to define an alpha map, i.e. an image of the same size where you adjust the degree of transparency depending on the image that should dominate.
To obtain the mask, you can draw it by hand, for example as a filled outline, as a path or a polygon. Then you have to strongly blur this mask to get a smooth blend.
It looks very difficult (if not impossible) to automate this, as no software can guess what you want to enhance.
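A minimal MATLAB sketch of that recipe, assuming A and B are RGB images of the same size; the file names, polygon coordinates and blur width are placeholders you would choose for your own images:

    A = im2double(imread('foreground.png'));       % placeholder file names
    B = im2double(imread('background.png'));
    [h, w, ~] = size(A);
    % hand-drawn mask: a filled polygon around the region where A should dominate
    x = [100 300 300 100];                         % placeholder polygon vertices
    y = [ 80  80 250 250];
    mask = poly2mask(x, y, h, w);
    % blur the mask strongly to get a smooth alpha map with values between 0 and 1
    alpha = imgaussfilt(double(mask), 30);
    alpha = repmat(alpha, [1 1 3]);                % one weight per colour channel
    % per-pixel alpha-weighted sum of the two images
    C = alpha .* A + (1 - alpha) .* B;
    imshow(C)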
The term you are looking for is alpha blending.
https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
The maths behind it boils down to some alpha weighted sums.
Matlab provides the function imfuse to achieve this:
https://de.mathworks.com/help/images/ref/imfuse.html
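For example (file names are placeholders; note that the 'blend' method gives a uniform 50/50 overlay, so for a spatially varying blend you still need your own alpha map as described above):

    A = imread('image1.png');
    B = imread('image2.png');
    C = imfuse(A, B, 'blend');      % uniform alpha-blended overlay of A and B
    imshow(C)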
Edit: (as it still seems to be unclear)
Let's say you have two images A and B which you want to blend.
You put one image over the other, so for each coordinate you have two RGB tuples.
Now you need to define the weight of each image: will you see only the colour of image A, only that of B, or in which ratio will you mix them?
This is done by alpha values.
So all you need is a 2d function that defines the mixing ratio for each pixel.
Usually you have values between 0 and 1 where 0 shows one image, 1 shows the other image, 0.5 will mix them both equally and so on...
Just read the article I have linked. It gives you a clear mathematical definition. I can't provide more detail than that.
If you have problems understanding that I urge you to read a book on image processing fundamentals.

How to find the distance between black points in an image using image processing

The image is taken by a web camera; it is a snapshot of a moving belt covered by white paper with black dots on it.
Lots of ways of doing it.
Firstly you need to identify the dots. Use Otsu thresholding to separate foreground from background, convert to binary, and label connected components. Eliminate everything that is smaller or larger than a threshold, or anything that isn't roughly circular.
Since you get a sequence of frames, you then need a blob-following algorithm. Eliminate any stationary blob (not on the paper).
Finally output the distances based on the blob identifications.
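A rough MATLAB sketch of the single-frame part (Otsu threshold, size/shape filter, centroids, pairwise distances); the file name and the area/eccentricity bounds are placeholders, distances come out in pixels, and the blob following across frames is not shown:

    I  = rgb2gray(imread('belt_frame.png'));        % placeholder file name, assuming an RGB snapshot
    bw = ~imbinarize(I, graythresh(I));             % Otsu threshold; invert so the dark dots become 1
    st = regionprops(bw, 'Area', 'Centroid', 'Eccentricity');
    keep = [st.Area] > 20 & [st.Area] < 500 & [st.Eccentricity] < 0.8;   % placeholder bounds
    pts = vertcat(st(keep).Centroid);               % N x 2 dot centres in pixel coordinates
    D = squareform(pdist(pts));                     % pairwise distances in pixels (Statistics Toolbox)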

Automatic assembly of multiple Kinect 2 grayscale depth images to a depth map

The project is about measurement of different objects under the Kinect 2. The image acquisition code sample from the SDK is adapted to save the depth information over its whole range, without clamping the values to 0-255.
The Kinect is mounted on a beam hoist and moved in a straight line over the model. At every stop, multiple images are taken, the mean calculated and the error correction applied. Afterwards the images are put together to have a big depth map instead of multiple depth images.
Due to the reduced image size (to limit the influence of the noisy edges), every image has a size of 350x300 pixels. For the moment, the test is done with three images to be put together. As in the final program, I know the direction in which the images are taken. Due to the beam hoist, there is no rotation, only translation.
In Matlab the images are saved as matrices with the depth values going from 0 to 8000. As I could only find ideas on how to treat images, the depth maps are transformed into images with a colorbar. Then only the color part is saved and put into the stitching script, i.e. not the axes and the grey part around the image.
The stitching algorithm doesn't work. It seems to me that the grayscale images don't have enough contrast for the algorithms to work with. Maybe I am just searching in the wrong direction?
Any ideas on how to treat this challenge?
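For illustration, a minimal sketch of registering the raw depth matrices directly (rather than the colour renderings), assuming two depth matrices D1 and D2 with values 0-8000, related by a pure translation with non-negative offsets (D2 further right than D1); the strip size is a placeholder:

    tpl = double(D2(51:end-50, 1:100));         % interior strip from the left edge of D2
    c   = normxcorr2(tpl, double(D1));          % normalised cross-correlation copes with the 0-8000 range
    [~, idx] = max(c(:));
    [pr, pc] = ind2sub(size(c), idx);
    r0 = pr - size(tpl, 1) + 1;                 % top-left corner of the match inside D1
    c0 = pc - size(tpl, 2) + 1;
    rowOff = r0 - 51;                           % offset of D2 relative to D1
    colOff = c0 - 1;                            % (the strip starts at row 51, column 1 of D2)
    % paste the raw depth values onto a larger canvas (assumes rowOff, colOff >= 0)
    canvas = nan(max(size(D1,1), rowOff + size(D2,1)), max(size(D1,2), colOff + size(D2,2)));
    canvas(1:size(D1,1), 1:size(D1,2)) = D1;
    canvas(rowOff + (1:size(D2,1)), colOff + (1:size(D2,2))) = D2;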

Drawing image stamps along a path with Cairo

As part of my initial research, to see if using Cairo is a good fit for us, I'm looking to see if I can obtain an (x,y) point at a given distance from the start of a path. I have looked over the Cairo examples and APIs but I haven't found anything to suggest this is possible. It would be a pain if we had to build our own Bezier path implementation from scratch.
On Android there is a class called PathMeasure. It allows getting an (x,y) point at a given distance from the start of a path, which makes it easy to draw a stamp at the wanted distance and produce something like the image below.
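(Under the hood, that lookup boils down to sampling the curve and interpolating on cumulative arc length; a rough sketch, written in MATLAB purely for illustration, with placeholder control points P0..P3 for one cubic segment and a requested distance d; it uses implicit expansion, R2016b+.)

    P0 = [0 0];  P1 = [40 120];  P2 = [160 120];  P3 = [200 0];   % placeholder control points
    d  = 75;                                                      % distance from the start of the segment
    t  = linspace(0, 1, 500)';
    B  = (1-t).^3 .* P0 + 3*(1-t).^2 .* t .* P1 + 3*(1-t) .* t.^2 .* P2 + t.^3 .* P3;
    s  = [0; cumsum(sqrt(sum(diff(B).^2, 2)))];    % cumulative arc length at each sample
    xy = interp1(s, B, d);                         % (x, y) point at distance d along the curve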
Hopefully someone can point me in the right direction.
Unless I have an incomplete understanding of what you mean by "path", it seems that you can accomplish the task by starting from this guide. You would use multiple cr (image location, rotation, scale) and just one image instance.
From what I can understand from your image, you'll need to use blending (i.e. the alpha channel): set the alpha (transparency) of each pixel proportional to, or equal to, your original grayscale value, and set all the R, G, B pixel values to black (0).
For acting directly on your input image (on the file you will be loading), googling "convert grayscale image to alpha" turns up several results for Photoshop and some for GIMP; I don't know what you have available.
Otherwise you will have to do it directly in your code by accessing the image pixels. To read/edit pixel values you can use cairo_image_surface_get_data; first you have to create a destination image with cairo_image_surface_create using the format CAIRO_FORMAT_ARGB32.
Similarly, you can use cairo_mask drawing a black rectangle of the size of your image, after having created an alpha channel image of format CAIRO_FORMAT_A8 from your original image (again, accessing pixel by pixel seems the only possible way given the limitations of cairo_image_surface_create_from_png).
Using cairo_paint_with_alpha in place of cairo_paint is not suitable because the alpha channel would be constant for the whole image.

MATLAB image processing of small circles

I have an image which looks like this:
I have a task in which I should circle all the bottles around their openings. I created a simple algorithm and started working on it. My algorithm is as follows (a rough MATLAB sketch is given after the list):
Threshold the original image
Do some morphological opening in it
Fill the empty holes
Use regionprops to select only the regions whose area matches the mouth of a bottle.
Find the centroid of each region and draw a circle around each bottle.
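In MATLAB the pipeline looks roughly like this (the file name, threshold, structuring-element size, area bounds and circle radius are placeholders rather than my exact values):

    I  = rgb2gray(imread('bottles.png'));          % placeholder file name
    bw = imbinarize(I, graythresh(I));             % 1. threshold the original image
    bw = imopen(bw, strel('disk', 3));             % 2. morphological opening
    bw = imfill(bw, 'holes');                      % 3. fill the holes
    st = regionprops(bw, 'Area', 'Centroid');      % 4. keep regions about the size of a bottle mouth
    keep = [st.Area] > 200 & [st.Area] < 2000;
    cen  = vertcat(st(keep).Centroid);
    imshow(I); hold on;                            % 5. draw a circle at each centroid
    viscircles(cen, 15 * ones(nnz(keep), 1));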
I implemented the algorithm above, but I get an extra portion of the image with a circle drawn around it. This is because the area of the mouth of a bottle and the area of the remaining noise are almost the same, so that region was selected as well. I ended up with a figure like this:
The processing applied to the image looks like this:
And my final image, after plotting the circles over the original image, is like this:
I think I can deal with the extra circle, which is due to some white portion of the image remaining, as shown in figure 2 below. This can be filtered out using regionprops with eccentricity. Is that a good idea, or are there other approaches? And how would I deal with the other bottles behind the glass and select them?
Nice example images you provide for your question!
One thing you can use to detect the remaining bottles (if there are any) is the well defined structure of the placement of the bottles.
The 4-by-5 grid of bottles should be relatively easy to locate, and once the grid is located you can test whether a bottle is detected at each expected bottle location.
With respect to the extra detected bottle, you can use shape features like
eccentricity,
the first Hu moment
the ratio of the perimeter length squared to the area (which is minimized for a circle).
If you are able to detect the grid, it should be easy to identify the extra detection as an outlier (far from any expected bottle location) and discard it accordingly.
Good luck with your project!
I've used the same approach as midtiby's third suggestion, using the ratio between area and perimeter called the shape factor:
4π * Area / perimeter^2
to detect circles from a contour-traced image (from the thresholded image), to great success;
http://www.empix.com/NE%20HELP/functions/glossary/morphometric_param.htm
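For completeness, a hedged MATLAB version of that filter, assuming bw is the thresholded binary image from the question and using an arbitrary placeholder cutoff of 0.8:

    st = regionprops(bw, 'Area', 'Perimeter', 'Centroid');
    sf = 4 * pi * [st.Area] ./ [st.Perimeter].^2;   % shape factor: 1 for a perfect circle
    circles = st(sf > 0.8);                         % keep only the roughly circular regions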
Regarding the 4 unfound bottles, this is rather tricky without some a priori knowledge of what you're looking at (as discussed, using the 4 x 5 grid and then looking from the centre of each cell). I did think that, from the list of contours, most would be of the bottle tops (which you can test using the shape factor), but one would be a large rectangle. If you could find the extremities of the rectangle (from the largest contour in terms of area) and then remove it from the third image, you'd be left with partial circles. Contour tracing those partial circles and using a mixture of shape factor / curve detection etc. may then help. And yes, good luck again!