.nii to JPEG conversion, coordinates mismatch after conversion - MATLAB

I am converting a .nii file to JPEG images with this MATLAB tool, using the following program:
a = load_nii('file.nii');
a_images = a.img;
imtool(a_images(:,:,image_number), []) % check the pixel location in a particular slice
After checking, I get a different pixel location for the desired feature than the one I got using MITK. I tried some rotation and flip operations using imrotate, fliplr, and flipud, but I am not getting the correct result. I know some combination of rotation and flipping is needed, but I don't know in which order. Please help.
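Since a 2-D slice has only eight axis-aligned orientations (four rotations, each optionally mirrored), one option is to brute-force all of them and compare against the reference view. A minimal sketch of that idea in Python/NumPy, with np.rot90/np.fliplr standing in for MATLAB's imrotate/fliplr/flipud; the example arrays are made up:

```python
import numpy as np

def candidate_orientations(slice2d):
    """Yield the eight axis-aligned orientations of a 2-D slice
    (four rotations, each optionally mirrored left-right)."""
    for k in range(4):                          # 0/90/180/270 degrees
        rot = np.rot90(slice2d, k)
        yield ('rot%d' % (90 * k), rot)
        yield ('rot%d+fliplr' % (90 * k), np.fliplr(rot))

# Hypothetical example: which orientation of `a` matches the view `b`?
a = np.arange(12).reshape(3, 4)
b = np.flipud(a)                                # pretend this is the MITK view
matches = [name for name, cand in candidate_orientations(a)
           if cand.shape == b.shape and np.array_equal(cand, b)]
print(matches)                                  # ['rot180+fliplr']
```

Once the matching orientation is known for one slice, the same combination should apply to every slice of the volume.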

Related

Find Corner in image with low resolution (Checkerboard)

I need some help with corner detection.
I printed a checkerboard and captured an image of it with a webcam. The problem is that the webcam has a low resolution, so it does not find all the corners. I therefore increased the number of corners to search for. Now it finds all the corners, but several different ones for the same physical corner.
All points are stored in a matrix, so I don't know which element belongs to which point.
(I cannot use the checkerboard function because it is not available in my MATLAB version.)
I am currently using the MATLAB function corner.
My question:
Is it possible to search for the extrema of each point cloud to get one point per corner? Or does somebody have an idea what I could do? Please see the attached photo.
Thanks for your help!
Looking at the image my guess is that the false positives of the corner detection are caused by compression artifacts introduced by the lossy compression algorithm used by your webcam's image acquisition software. You can clearly spot ringing artifacts around the edges of the checkerboard fields.
You could try two different things:
Check in your webcam's acquisition software whether you can disable the compression or switch to a lossless compression.
Working with the image you already have, you could try to alleviate the impact of the compression by binarising the image using a simple thresholding operation (which, in the case of a checkerboard, would not even mean losing information, since the image is intrinsically binary).
In case you want to go for option 2), I would suggest the following steps. Let's assume the variable storing your image is called img:
Look at the distribution of grey values using, e.g., the imhist function, like so: imhist(img)
Ideally you will see a clean bimodal distribution with no overlap. Choose an intensity value I in the middle between the two peaks.
Then simply binarize by assigning img(img < I) = 0; img(img >= I) = 255; (assuming img is of type uint8).
Then run the corner algorithm again and see whether the outliers have disappeared.
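The binarisation step can be sketched as follows in Python/NumPy (the same idea as the MATLAB assignments above; the checkerboard image here is synthetic):

```python
import numpy as np

# Synthetic 8-bit "checkerboard": dark squares at 30, light squares at 220,
# i.e. a clean bimodal grey-value distribution.
img = np.where(np.add.outer(np.arange(8), np.arange(8)) % 2, 220, 30)
img = np.repeat(np.repeat(img, 8, axis=0), 8, axis=1).astype(np.uint8)

I = 128                      # threshold chosen between the two histogram peaks
binary = img.copy()
binary[img < I] = 0          # everything darker than I becomes black
binary[img >= I] = 255       # everything brighter becomes white

print(np.unique(binary).tolist())   # [0, 255]: the image is binary again
```

Any compression artifacts whose grey values sit clearly on one side of the threshold are removed by this step.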

Image processing: where do you think the issue is originated from?

I am working on a lane detection project. Figure 1 shows one of the frames of the dashboard recording.
I have taken the following steps:
Removed the noise.
Detected the edges using Canny Edge Detector.
Masked out the top half of the image.
Applied a close morphology operation to make the lines continuous.
Applied a skeleton morphology to get the thinnest lines.
Performed a Hough transform on binary mask to obtain Hough matrix.
Identified the peaks in Hough matrix.
Extracted lines from the binary mask using houghlines function.
Calculated the line and marker positions from the previous step.
Visualized the line ([165,1,28]) and marker ([255,157,11]) on Figure 1.
But, as can be seen, the white dashed lane markings on the RHS are not detected. Where do you think the issue is, and do you have any suggestions to fix it? Thanks.
I have tried thresholding the image by creating a binary mask from the image's histogram and obtained better results, but that method required some parameter tuning, which I am trying to avoid, as this code will eventually be tested on a continuous video instead of a single frame.
The Matlab implementation of houghlines accepts the arguments 'MinLength' and 'FillGap'.
'MinLength'
By playing around with (i.e. decreasing) the 'MinLength' parameter you should be able to make houghlines() accept the short line on the right side as well. As your image looks quite clean, you can even try reducing it to a very small value, so that even the very small part in the lower-right corner is detected as a line.
'FillGap'
Next, increase 'FillGap' to allow larger gaps between two line segments. This might help you produce one single line for both right-hand segments.
Hope this helps.
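The effect of 'FillGap' can be illustrated with a toy merge of collinear segments. A sketch in Python (not MATLAB's implementation, just the same idea; the segment data is made up):

```python
def fill_gaps(segments, fill_gap):
    """Merge 1-D intervals along a line whose mutual gap is at most
    fill_gap. `segments` is a sorted list of (start, end) positions."""
    merged = [list(segments[0])]
    for start, end in segments[1:]:
        if start - merged[-1][1] <= fill_gap:    # small gap: extend last segment
            merged[-1][1] = max(merged[-1][1], end)
        else:                                    # large gap: keep separate
            merged.append([start, end])
    return [tuple(s) for s in merged]

# Two dashes of a lane marking, 15 px apart, along the line's direction:
dashes = [(0, 40), (55, 95)]
print(fill_gaps(dashes, fill_gap=20))   # [(0, 95)]           -- merged
print(fill_gaps(dashes, fill_gap=10))   # [(0, 40), (55, 95)] -- kept apart
```

With a 'FillGap' larger than the dash spacing, the dashed marking is reported as one line, which is exactly what you want for the right-hand lane.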

How do I interpret an Intel Realsense camera depth map in MATLAB?

I was able to view and capture the image from the depth stream in MATLAB (using the webcam from the Hardware Support Package) from an F200 Intel Realsense camera. However, it does not look the same way as it does in the Camera Explorer.
What I see from MATLAB -
I have also linked Depth.mat that contains the image in the variable "D".
The image is returned as a three-dimensional array of uint8. I assumed that each depth value is a larger number broken into bytes across the planes, so I tried bit-shifting each plane and adding it to the next while taking care of the datatypes. I then displayed the result using imagesc, but did not get a proper depth image.
How do I properly interpret this image? Or, is there an alternate way to capture images in MATLAB?
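For reference, the reconstruction described above (treating two uint8 planes as the low and high bytes of a 16-bit depth value) would look like this in Python/NumPy. Which plane holds the low byte and which the high byte is an assumption to verify against the camera's documentation:

```python
import numpy as np

def planes_to_depth(low, high):
    """Combine two uint8 planes into a uint16 depth map,
    assuming the first plane is the low byte (an assumption)."""
    return low.astype(np.uint16) | (high.astype(np.uint16) << 8)

# Hypothetical 2x2 frame: the high plane contributes the upper 8 bits.
low  = np.array([[0x34, 0xFF], [0x00, 0x01]], dtype=np.uint8)
high = np.array([[0x12, 0x00], [0x02, 0x00]], dtype=np.uint8)
depth = planes_to_depth(low, high)
print(depth.tolist())   # [[4660, 255], [512, 1]]
```

If the result still looks wrong, swapping the roles of the two planes (or checking whether the third plane carries data at all) is the next thing to try.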

Drawing image stamps along a path with Cairo

As part of my initial research, to see if using Cairo is a good fit for us, I'm looking to see if I can obtain an (x,y) point at a given distance from the start of a path. I have looked over the Cairo examples and APIs but I haven't found anything to suggest this is possible. It would be a pain if we had to build our own Bezier path implementation from scratch.
On android there is a class called PathMeasure. This allows getting an (x,y) point at a given distance from the start of the path. This allows me to draw a stamp easily at the wanted distance being able to produce something like the image below.
Hopefully someone can point me in the right direction.
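As an aside, the arc-length lookup that PathMeasure provides can be approximated by densely sampling the curve and accumulating segment lengths. A sketch in Python/NumPy for a single cubic Bezier (the control points are made up, and the sampling density is a tunable assumption):

```python
import numpy as np

def point_at_distance(ctrl, dist, samples=1000):
    """Approximate the (x, y) point at arc length `dist` along a cubic
    Bezier with control points ctrl (4x2), by sampling the curve."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    p0, p1, p2, p3 = ctrl
    pts = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
           + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # chord lengths
    arclen = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative length
    i = np.searchsorted(arclen, min(dist, arclen[-1]))
    return pts[min(i, samples - 1)]

# Degenerate "Bezier" that is a straight line from (0,0) to (3,0):
ctrl = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
x, y = point_at_distance(ctrl, 1.5)
print(round(float(x), 2), round(float(y), 2))
```

For a full path, the same table can be built over all segments; stamps are then placed at evenly spaced arc lengths.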
Unless I have an incomplete understanding of what you mean by "path", it seems that you can accomplish the task by starting from this guide. You would use multiple cr contexts (image location, rotation, scale) and just one image instance.
From what I can understand from your image, you'll need to use blending (i.e. the alpha channel). I would suggest setting the alpha channel (transparency) pixel by pixel, proportional to (or the same as) your original grayscale values, and setting all the R, G, B pixel values to black (0).
To act directly on your input image (the file you will be loading), a search for "convert grayscale image to alpha" turns up several results for Photoshop and some for GIMP; I don't know what you have available.
Otherwise you will have to do it directly within your code by accessing the image pixels. To read/edit pixel values you can use cairo_image_surface_get_data. First you have to create a destination image with cairo_image_surface_create, using format CAIRO_FORMAT_ARGB32.
Similarly, you can use cairo_mask, drawing a black rectangle the size of your image, after having created an alpha-channel image of format CAIRO_FORMAT_A8 from your original image (again, accessing it pixel by pixel seems to be the only possible way, given the limitations of cairo_image_surface_create_from_png).
Using cairo_paint_with_alpha in place of cairo_paint is not suitable because the alpha channel would be constant for the whole image.
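The per-pixel operation suggested above (alpha from the grey value, RGB forced to black) can be sketched in Python/NumPy; note that Cairo's CAIRO_FORMAT_ARGB32 additionally expects premultiplied alpha, which is trivially satisfied here because R, G and B are all zero:

```python
import numpy as np

def grey_to_black_with_alpha(grey):
    """Build an ARGB image (H x W x 4, channel order A, R, G, B) whose
    alpha channel is the grey value and whose colour is black."""
    h, w = grey.shape
    argb = np.zeros((h, w, 4), dtype=np.uint8)
    argb[..., 0] = grey          # alpha taken from the grey value
    return argb                  # R, G, B stay 0 (black)

grey = np.array([[0, 128], [255, 64]], dtype=np.uint8)
out = grey_to_black_with_alpha(grey)
print(out[..., 0].tolist())     # [[0, 128], [255, 64]]
```

In C you would write the same values into the buffer returned by cairo_image_surface_get_data, minding the surface's stride and the platform's byte order.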

exact working of hough transform in matlab

Can anyone explain how the hough function works in MATLAB? The problem is that I have an image containing two straight rectangles and one rectangle tilted at some angle. In my understanding, after applying the Hough transform I should get a line structure of 1x6, but I am getting a structure of 1x14. Can anyone help me? I have also uploaded the images:
You can't restrict the Hough transform to give a structure of 1x6. It doesn't produce stable results, and it also doesn't work well when looking further ahead on curved roads. You should not expect to acquire a 1x6 structure from each frame. Instead, take all returned line segments and use some logic to determine the lane markings.
First of all, your image actually looks slightly blurred. I don't know if it actually is, but if so, you need to run an edge detection algorithm so that your Hough transform does not detect the blurred part of the line.
Second of all, you need to reduce the number of detected lines, simply by discarding any line that does not have enough points voting for it. This can be done by thresholding the H variable in [H,t,r] = hough(image).
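How thresholding the accumulator works can be seen in a minimal Hough transform written out by hand. A sketch in Python/NumPy (not MATLAB's hough, just the same voting idea, on a made-up test image):

```python
import numpy as np

def hough_accumulator(binary, n_theta=180):
    """Vote every foreground pixel into (rho, theta) bins, with
    rho = x*cos(theta) + y*sin(theta) and theta in [-90, 89] degrees."""
    h, w = binary.shape
    thetas = np.deg2rad(np.arange(n_theta) - 90)
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc

# One horizontal line of 20 pixels: a single bin collects all 20 votes.
img = np.zeros((30, 30), dtype=bool)
img[10, 5:25] = True
acc = hough_accumulator(img)
strong = acc >= 15           # threshold: keep only well-supported lines
print(acc.max(), bool(strong.any()))
```

Edges of a rectangle produce many weak spurious bins in addition to the four strong ones; raising the threshold on the accumulator is what removes them.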
Additional Sources:
http://en.wikipedia.org/wiki/Hough_transform
http://www.mathworks.com/help/toolbox/images/ref/hough.html