Drawing image stamps along a path with Cairo

As part of my initial research into whether Cairo is a good fit for us, I'm trying to find out whether I can obtain an (x,y) point at a given distance from the start of a path. I have looked over the Cairo examples and APIs but I haven't found anything to suggest this is possible. It would be a pain if we had to build our own Bezier path implementation from scratch.
On Android there is a class called PathMeasure. It allows getting an (x,y) point at a given distance from the start of a path, which lets me easily draw a stamp at the wanted distance and produce something like the image below.
Hopefully someone can point me in the right direction.
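For reference, here is a minimal Python/pycairo sketch of one way a PathMeasure-style lookup could be approximated on top of Cairo's own path flattening: copy_path_flat returns the current path as straight line segments, which can be walked to accumulate arc length. This is only an assumption about a workable approach, not a built-in Cairo facility:

    import math
    import cairo

    def point_at_distance(cr, dist):
        """Return the (x, y) point at arc length `dist` along cr's current path."""
        prev = start = None
        remaining = dist
        # copy_path_flat() approximates curves with line segments
        for kind, points in cr.copy_path_flat():
            if kind == cairo.PATH_MOVE_TO:
                prev = start = points
            elif kind == cairo.PATH_LINE_TO:
                x0, y0 = prev
                x1, y1 = points
                seg = math.hypot(x1 - x0, y1 - y0)
                if seg >= remaining:           # the target lies on this segment
                    t = remaining / seg if seg else 0.0
                    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
                remaining -= seg
                prev = points
            elif kind == cairo.PATH_CLOSE_PATH and prev is not None:
                # the closing segment runs back to the subpath start
                x0, y0 = prev
                x1, y1 = start
                seg = math.hypot(x1 - x0, y1 - y0)
                if seg >= remaining:
                    t = remaining / seg if seg else 0.0
                    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
                remaining -= seg
                prev = start
        return prev                            # dist ran past the end of the path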

Unless I have an incomplete understanding of what you mean by "path", it seems that you can accomplish the task by starting from this guide. You would use multiple cr transformations (image location, rotation, scale) and just one image instance.
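A minimal sketch of that idea in Python/pycairo (the C calls map one-to-one); the file name, canvas size, and placements below are just illustrative assumptions:

    import cairo

    stamp = cairo.ImageSurface.create_from_png("stamp.png")  # assumed file
    out = cairo.ImageSurface(cairo.FORMAT_ARGB32, 400, 200)
    cr = cairo.Context(out)

    # one (x, y, angle, scale) entry per stamp along the path
    placements = [(50, 100, 0.0, 1.0), (150, 80, 0.4, 0.8), (250, 120, 0.9, 0.6)]

    for x, y, angle, scale in placements:
        cr.save()                      # remember the untransformed state
        cr.translate(x, y)             # position of this stamp
        cr.rotate(angle)               # align with the path direction
        cr.scale(scale, scale)
        # centre the stamp on the origin
        cr.translate(-stamp.get_width() / 2, -stamp.get_height() / 2)
        cr.set_source_surface(stamp, 0, 0)
        cr.paint()
        cr.restore()                   # reset for the next stamp

    out.write_to_png("stamps.png")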
From what I can understand from your image, you'll need to use blending (e.g. the alpha channel). I would set the alpha channel (transparency) pixel by pixel, proportional to (or equal to) your original grayscale values, and set all the R, G, B pixel values to black (0).
To act directly on your input image (the file you will be loading): googling "convert grayscale image to alpha" I found several results for Photoshop and some for GIMP; I don't know what you have available.
Otherwise you will have to do it directly in your code by accessing the image pixels. To read/edit pixel values you can use cairo_image_surface_get_data. First you have to create a destination image with cairo_image_surface_create using format CAIRO_FORMAT_ARGB32.
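A sketch of that pixel pass in Python/pycairo (get_data is the binding for cairo_image_surface_get_data). Note that CAIRO_FORMAT_ARGB32 stores premultiplied pixels in native byte order, so the B, G, R, A layout below assumes a little-endian machine; the file name is an assumption:

    import cairo

    src = cairo.ImageSurface.create_from_png("gray_stamp.png")  # assumed file
    dst = cairo.ImageSurface(cairo.FORMAT_ARGB32, src.get_width(), src.get_height())
    ctx = cairo.Context(dst)
    ctx.set_source_surface(src, 0, 0)
    ctx.paint()
    dst.flush()                      # make sure drawing reached the buffer

    data = dst.get_data()            # writable view of the raw pixels
    stride = dst.get_stride()        # bytes per row (may include padding)

    for row in range(dst.get_height()):
        for col in range(dst.get_width()):
            i = row * stride + col * 4
            gray = data[i]           # B channel; B == G == R in a grey image
            # black RGB with alpha = grey; premultiplied black stays 0
            data[i + 0] = 0          # B
            data[i + 1] = 0          # G
            data[i + 2] = 0          # R
            data[i + 3] = gray       # A
    dst.mark_dirty()                 # tell cairo the pixels changed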
Similarly, you can use cairo_mask, drawing a black rectangle of the size of your image after having created an alpha-channel image of format CAIRO_FORMAT_A8 from your original image (again, accessing it pixel by pixel seems the only possible way, given the limitations of cairo_image_surface_create_from_png).
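A sketch of that route as well, again in Python/pycairo with assumed file names; one channel of the grey source is copied into a CAIRO_FORMAT_A8 surface, which then masks a black paint:

    import cairo

    gray = cairo.ImageSurface.create_from_png("gray_stamp.png")  # assumed file
    gray.flush()
    w, h = gray.get_width(), gray.get_height()
    gdata, gstride = gray.get_data(), gray.get_stride()

    # build the A8 alpha surface pixel by pixel from one grey channel
    alpha = cairo.ImageSurface(cairo.FORMAT_A8, w, h)
    adata, astride = alpha.get_data(), alpha.get_stride()
    for row in range(h):
        for col in range(w):
            adata[row * astride + col] = gdata[row * gstride + col * 4]
    alpha.mark_dirty()

    out = cairo.ImageSurface(cairo.FORMAT_ARGB32, w, h)
    cr = cairo.Context(out)
    cr.set_source_rgb(0, 0, 0)       # the black "rectangle" (paint source)
    cr.mask_surface(alpha, 0, 0)     # paints black where alpha is high
    out.write_to_png("masked.png")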
Using cairo_paint_with_alpha in place of cairo_paint is not suitable because the alpha channel would be constant for the whole image.

Related

Find Corner in image with low resolution (Checkerboard)

I need some help with corner detection.
I printed a checkerboard and captured an image of it with a webcam. The problem is that the webcam has a low resolution, so not all corners are found. I therefore increased the number of corners to search for. Now all corners are found, but also several different points for the same corner.
All points are stored in a matrix, so I don't know which element belongs to which corner.
(I cannot use the checkerboard function because it is not available in my MATLAB version.)
I am currently using the MATLAB function corner.
My question:
Is it possible to search for the extrema of all the point clouds to get one point per corner? Or does somebody have an idea what else I could do? --> Please see the attached photo
Thanks for your help!
Looking at the image, my guess is that the false positives of the corner detection are caused by compression artifacts introduced by the lossy compression algorithm used by your webcam's image acquisition software. You can clearly spot ringing artifacts around the edges of the checkerboard fields.
You could try two different things:
Check in your webcam's acquisition software whether you can disable the compression or change to a lossless compression
Working with the image you already have, you could try to alleviate the impact of the compression by binarising the image using a simple thresholding operation (which, in the case of a checkerboard, does not even mean losing information, since the image is intrinsically binary).
In case you want to go for option 2), I would suggest the following steps. Let's assume the variable storing your image is called img:
Look at the distribution of grey values using e.g. the imhist function, like so: imhist(img)
Ideally you would see a clean bimodal distribution with no overlap. Choose an intensity value I in the middle between the two peaks.
Then simply binarise by assigning img(img<I) = 0; img(img>=I) = 255 (assuming img is of type uint8).
Then run the corner algorithm again and see if the outliers have disappeared
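For what it's worth, the same binarisation as a small Python/NumPy sketch (the steps above are MATLAB; the default threshold value here is only an illustrative assumption):

    import numpy as np

    def binarise(img, I=128):
        """Map a uint8 grayscale image to pure black/white at threshold I,
        where I is picked between the two peaks of the histogram."""
        out = img.copy()
        out[img < I] = 0
        out[img >= I] = 255
        return out

    # inspect the histogram first, e.g.: counts, bins = np.histogram(img, bins=256)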

How to find the distance between black points in an image using image processing

How do I find the distance between black points in an image using image processing? The image is taken by a web camera and is a snapshot of a moving belt covered with white paper that has black dots on it.
There are lots of ways of doing it.
Firstly you need to identify the dots. Use Otsu thresholding to separate foreground from background, then convert to binary and label connected components. Eliminate everything that is smaller or larger than a size threshold, or anything that isn't roughly circular.
Since you get a list of frames, you then need a blob-following algorithm. Eliminate any stationary blob (anything not on the paper).
Finally output the distances based on the blob identifications.
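A sketch of the detection and distance steps in Python/OpenCV (the blob-following across frames is omitted; the file name and the size/shape thresholds are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("belt_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed file

    # Otsu thresholding; the dots are dark, so invert to make them foreground
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # label connected components and collect their stats and centroids
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    dots = []
    for i in range(1, n):                      # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if not (20 <= area <= 2000):           # size threshold (assumed)
            continue
        if max(w, h) > 1.5 * min(w, h):        # reject clearly non-circular blobs
            continue
        dots.append(centroids[i])

    # pairwise pixel distances between the detected dot centres
    dots = np.array(dots)
    for a in range(len(dots)):
        for b in range(a + 1, len(dots)):
            print(a, b, np.linalg.norm(dots[a] - dots[b]))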

Automatic assembly of multiple Kinect 2 grayscale depth images to a depth map

The project is about measuring different objects under the Kinect 2. The image acquisition code sample from the SDK is adapted so that it saves the depth information over the whole range, without clamping the values to 0-255.
The Kinect is mounted on a beam hoist and moved in a straight line over the model. At every stop, multiple images are taken, the mean calculated and the error correction applied. Afterwards the images are put together to have a big depth map instead of multiple depth images.
Due to the reduced image size (to limit the influence of the noisy edges), every image has a size of 350x300 pixels. For the moment, the test is done with three images to be put together. As with the final program, I know in which direction the images are taken. Due to the beam hoist, there is no rotation, only translation.
In MATLAB the images are saved as matrices with depth values ranging from 0 to 8000. As I could only find ideas on how to handle images, the depth maps are converted into images with a colorbar; only the colour part is saved and fed into the stitching script, i.e. not the axes and the grey area around the image.
The stitching algorithm doesn't work. It seems to me that the grayscale images don't have enough contrast for the algorithms to work on. Maybe I am just searching in the wrong direction?
Any ideas on how to treat this challenge?

Calculating corresponding pixels

I have a computer vision setup with two cameras. One of these cameras is a time-of-flight camera. It gives me the depth of the scene at every pixel. The other camera is a standard camera giving me a colour image of the scene.
We would like to use the depth information to remove some areas from the colour image. We plan on object, person and hand tracking in the colour image and want to remove far-away background pixels with the help of the time-of-flight camera. It is not yet certain whether the cameras can be aligned in a parallel setup.
We could use OpenCV or MATLAB for the calculations.
I have read a lot about rectification, epipolar geometry etc., but I still have trouble seeing the steps I have to take to calculate the correspondence for every pixel.
What approach would you use, and which functions can be used? Into which steps would you divide the problem? Is there a tutorial or sample code available somewhere?
Update: We plan on doing an automatic calibration using known markers placed in the scene.
If you want robust correspondences, you should consider SIFT. There are several implementations in MATLAB - I use the Vedaldi-Fulkerson VLFeat library.
If you really need fast performance (and I think you don't), you should think about using OpenCV's SURF detector.
If you have any other questions, do ask. This other answer of mine might be useful.
PS: By correspondences, I'm assuming you want to find the coordinates of a projection of the same 3D point on both your images - i.e. the coordinates (i,j) of a pixel u_A in Image A and u_B in Image B which are projections of the same point in 3D.
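A sketch of such correspondence matching with OpenCV's Python bindings, as an alternative to the MATLAB route (requires an OpenCV build that includes SIFT; the file names are assumptions):

    import cv2

    img_a = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)  # assumed files
    img_b = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # match descriptors and keep only matches passing Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # each surviving match is a pixel correspondence u_A <-> u_B
    for m in good:
        print(kp_a[m.queryIdx].pt, "<->", kp_b[m.trainIdx].pt)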

Algorithm for "filling in" texture in a 2D image

I recall seeing a paper a while back for an algorithm that could automatically and seamlessly "graft" texture from parts of an image onto another part of an image.
The approach was something along the lines of the following:
You'd build up a database of small squares of pixels (perhaps 8x8) from the parts of the picture that are present.
You'd then pick an empty pixel (the "destination" for the texture graft) to fill in, and look for one of the squares in your database that most closely matches the surrounding pixels. You'd then color the empty pixel according to the color of the corresponding pixel in the square you find. Then you pick another empty pixel and repeat until there are no empty pixels remaining.
Of course, this is only a vague description because I can't find any references to this algorithm to refresh my memory of the details! Can anyone help?
Sounds a lot like Texture Synthesis by Non-parametric Sampling (Efros and Leung).
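A compact Python/NumPy sketch of the core loop as described in the question: for each empty pixel (most-known-neighbours first), compare its known neighbourhood against every complete patch and copy the centre pixel of the best match. This is brute force, picks the single best match rather than sampling among near-best ones as the paper does, and assumes the holes stay away from the image border:

    import numpy as np

    def fill_texture(img, known, win=9):
        """img: float grayscale array; known: bool mask of valid pixels;
        win: odd patch size (akin to the 8x8 squares mentioned above)."""
        img, known = img.copy(), known.copy()
        r = win // 2
        # database of all fully-known patches in the image
        patches = [img[y - r:y + r + 1, x - r:x + r + 1]
                   for y in range(r, img.shape[0] - r)
                   for x in range(r, img.shape[1] - r)
                   if known[y - r:y + r + 1, x - r:x + r + 1].all()]
        patches = np.array(patches)

        while not known.all():
            # pick the unknown pixel with the most known neighbours
            best_xy, best_count = None, -1
            for y in range(r, img.shape[0] - r):
                for x in range(r, img.shape[1] - r):
                    if known[y, x]:
                        continue
                    c = known[y - r:y + r + 1, x - r:x + r + 1].sum()
                    if c > best_count:
                        best_xy, best_count = (y, x), c
            y, x = best_xy
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            mask = known[y - r:y + r + 1, x - r:x + r + 1]
            # SSD against the database, counting only the known pixels
            d = ((patches - window) ** 2 * mask).sum(axis=(1, 2))
            img[y, x] = patches[d.argmin()][r, r]
            known[y, x] = True
        return img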