I am facing difficulties trying to get a result similar to this.
After camera calibration I obtained the following parameters:
DIM=(800, 600)
K=np.array([[177.41548430658207, 0.0, 258.68599972062486], [0.0, 177.57591422482173, 205.71268583567885], [0.0, 0.0, 1.0]])
D=np.array([[-0.04397357230351177], [-0.03399404486757072], [0.03174104028771482], [-0.01131815456157867]])
My initial fish-eye image looks like this.
With the following code:
import cv2
import numpy as np

def undistort(img_path):
    img = cv2.imread(img_path)
    nk = K.copy()
    nk[0,0] = K[0,0]
    nk[1,1] = K[1,1]
    # build the undistortion maps with an identity rotation and the new camera matrix nk
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), nk, DIM, cv2.CV_16SC2)
    undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
    cv2.imshow("undistorted", undistorted_img)
    cv2.imwrite('recoPersp_wrong.jpg', undistorted_img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
the reconstructed perspective image looks like this.
However, after changing:
nk[0,0]=K[0,0]
nk[1,1]=K[1,1]
to
nk[0,0]=K[0,0]/2
nk[1,1]=K[1,1]/2
I don't get the desired result described in Hello Sam's answer. Instead, my image is blurred and smeared at the edges, like this (it is a different image, but the problem is the same).
Could you please tell me what I am doing wrong?
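For reference, a minimal sketch of an alternative way to derive nk, using cv2.fisheye.estimateNewCameraMatrixForUndistortRectify instead of scaling the focal lengths by hand (the balance value below is only an illustrative assumption, not something taken from Hello Sam's answer):

import cv2
import numpy as np

# balance=0.0 keeps only the central, fully undistorted region;
# balance=1.0 keeps the whole fisheye field of view in the output
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, DIM, np.eye(3), balance=0.5)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, DIM, cv2.CV_16SC2)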
Following the example from the documentation page of the centerCropWindow2d function, I am trying to dynamically crop an image based on a 'scale' value that is set by the user. In the end, this code would be used in a loop that would scale an image at different increments, and compare the landmarks between them using feature detection and extraction methods.
I wrote some test code to try to isolate one instance of this user-specified image cropping:
file = 'frameCropped000001.png';
image = imread(file);
scale = 1.5;
scaled_width = scale * 900;
scaled_height = scale * 635;
target_size = [scaled_width scaled_height];
scale_window = centerCropWindow2d(size(image), target_size);
image2 = imcrop(image, scale_window);
figure;
imshow(image);
figure;
imshow(image2);
but I am met with this error:
Error using centerCropWindow2d (line 30)
Expected input to be integer-valued.
Error in testRIA (line 20)
scale_window = centerCropWindow2d(size(image), target_size);
Is there no way to use this function the way I explained above? If not, what's the easiest way to "scale" an image without just resizing it [that is, if I scale it by 0.5, the image stays the same size but is zoomed in by 2x]?
Thank you in advance.
I didn't take into account that the height and width for some scales would NOT be whole integers. Since MATLAB cannot crop images at fractional pixel positions, the "Expected input to be integer-valued." error popped up.
I solved my issue by using floor() on the 'scaled_width' and 'scaled_height' variables.
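For reference, a minimal sketch of the corrected test snippet with that fix applied (the scale of 0.7 is just an illustrative value that produces a fractional size; note also that MATLAB target sizes are given as [height width], i.e. rows by columns):

file = 'frameCropped000001.png';
image = imread(file);

scale = 0.7;                          % example scale that yields a non-integer size
scaled_width  = floor(scale * 900);   % floor so centerCropWindow2d receives integers
scaled_height = floor(scale * 635);
target_size   = [scaled_height scaled_width];   % [rows cols]

scale_window = centerCropWindow2d(size(image), target_size);
image2 = imcrop(image, scale_window);

figure;
imshow(image2);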
I want to segment my grayscale image as follows:
from skimage import io, color
from skimage.segmentation import slic

img = io.imread(curr_img_path)
gray = color.rgb2gray(img)
assignment1 = slic(image=gray, n_segments=500, sigma=2, max_iter=100)
I am looking at the segmented image using
import matplotlib.pyplot as plt
from skimage.segmentation import mark_boundaries

fig, ax = plt.subplots(2, 2, figsize=(10, 10), sharex=True, sharey=True)
ax[0, 0].imshow(mark_boundaries(gray, assignment1))
plt.show()
My problem: this shows me a regular grid, like a chessboard. I do not understand why; the docs say it is possible to use grayscale images. Any help? By the way, my image has shape (352, 1216) and dtype float64. There is no error message or anything else. I would be glad for any help.
While the compactness parameter can be left at its default value for images in Lab space, the default is too high for RGB/grayscale images.
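A minimal sketch of what this looks like in the question's code (compactness=0.1 is only an illustrative starting point for a [0, 1] grayscale image and usually needs tuning):

from skimage import io, color
from skimage.segmentation import slic, mark_boundaries
import matplotlib.pyplot as plt

img = io.imread(curr_img_path)
gray = color.rgb2gray(img)

# lower compactness so intensity differences, not spatial distance, drive the segmentation
assignment1 = slic(image=gray, n_segments=500, sigma=2, compactness=0.1, max_iter=100)

plt.imshow(mark_boundaries(gray, assignment1))
plt.show()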
I have an image like this:
My goal is to get the output shown under background normalization at this link.
Following the above link, I did the following:
(1) I first dilate the image to get the background;
(2) then I try to remove it via normalized division.
I got the background:
However, when I try the normalized division, I get this:
(black borders added to make the boundary of the image clear)
This is my code:
image = imread('image.png');
image = rgb2gray(image);
se = offsetstrel('ball',9,9);
dilatedI = imdilate(image,se);
output = imdivide(image,dilatedI);
imshow(output,[]);
using
imshow(output)
just gives a black image.
I thought it might be a type conversion issue, but based on the resources mentioned earlier, I am uncertain whether that is the case...
Any advice would be appreciated.
Just make sure you don't do integer division! Your images are of an integer type, so 4/5 returns 0 and 5/4 returns 1, not a floating-point number. Just convert to double before dividing:
image = imread('https://i.stack.imgur.com/bIVRT.png');
% image = rgb2gray(image);  % not needed: the image hosted online is not RGB
se = offsetstrel('ball',21,21);
dilatedI = imdilate(image,se);
output = imdivide(double(image),double(dilatedI));
figure
subplot(121)
imshow(image);
subplot(122)
imshow(output);
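A minor variation on the same idea (a sketch, assuming the Image Processing Toolbox is available): im2double converts and rescales to [0, 1] in one step, so the division can also be written as:

image_d   = im2double(image);
dilated_d = im2double(imdilate(image, se));
output    = image_d ./ dilated_d;   % element-wise division on doubles
imshow(output, []);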
I have an image and a subimage which is cropped out of the original image.
Here's the code I have written so far:
val1 = imread(img);
val2 = imread(img_w);
gray1 = rgb2gray(val1);%grayscaling both images
gray2 = rgb2gray(val2);
matchingval = normxcorr2(gray1,gray2);%normalized cross correlation
[max_c,imax]=max(abs(matchingval(:)));
After this I am stuck. I have no idea how to make the whole image grayscale except for the subimage, which should stay in color.
How do I do this?
Thank you.
If you know the coordinates for your image, you can always use rgb2gray on just the section of interest.
For instance, I tried this on an image just now:
im(500:1045,500:1200,1)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,2)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,3)=rgb2gray(im(500:1045,500:1200,1:3));
Here I took the rows (500 to 1045), columns (500 to 1200), and the RGB depth (1 to 3) of the image and applied rgb2gray to just that region. I did it three times because the output of rgb2gray is a 2D matrix while a color image is a 3D matrix, so I needed to assign it layer by layer.
This worked for me, making only part of the image gray but leaving the rest in color.
The issue you might have, though, is this: a color image has 3 dimensions while a grayscale image needs only 2. Combining them means the grayscale part must be stored in a 3D matrix.
Depending on what you want to do, this technique may or may not help.
Judging from your code, you are reading the image and the subimage in MATLAB. What you need to know are the coordinates of where you extracted the subimage. Once you do that, simply take your original colour image, convert that to grayscale, then duplicate this image in the third dimension three times. You need to do this so that you can place colour pixels in this image.
In an RGB representation, a grayscale image has all three components equal. Duplicating the grayscale image in the third dimension three times therefore creates the RGB version of the grayscale image. Once you do that, simply use the row and column coordinates of where you extracted the subimage and place the subimage into the equivalent RGB grayscale image.
As such, given your colour image stored in img and your subimage stored in imgsub, and specifying the rows and columns of where you extracted the subimage in row1,col1 and row2,col2 - with row1,col1 being the top-left corner of the subimage and row2,col2 the bottom-right corner - do this:
img_gray = rgb2gray(img);                        % grayscale version of the full image
img_gray = cat(3, img_gray, img_gray, img_gray); % replicate into a 3-channel image
img_gray(row1:row2, col1:col2, :) = imgsub;      % paste the colour subimage back in
To demonstrate this, let's try this with an image in MATLAB. We'll use the onion.png image that's part of the image processing toolbox in MATLAB. Therefore:
img = imread('onion.png');
Let's also define our row1,col1,row2,col2:
row1 = 50;
row2 = 90;
col1 = 80;
col2 = 150;
Let's get the subimage:
imgsub = img(row1:row2,col1:col2,:);
Running the above code, this is the image we get:
I took the same example as rayryeng's answer and tried to solve it via HSV conversion.
The basic idea is to set the second layer, i.e. the saturation layer, to 0 (so that all pixels become grayscale), then rewrite that layer with the original saturation values only for the subimage area, so that those pixels alone keep their saturation.
Code:
img = imread('onion.png');
img = rgb2hsv(img);                          % work in HSV space
sPlane = zeros(size(img(:,:,1)));            % zero saturation everywhere (grayscale)
sPlane(50:90,80:150) = img(50:90,80:150,2);  % restore saturation inside the subimage
img(:,:,2) = sPlane;
img = hsv2rgb(img);
imshow(img);
Output: (Same as rayryeng's output)
Related Answer with more details here
I am trying to delete edges whose length is under a limit in OpenCV. In MATLAB, the Canny edge detector + bwareaopen does the job; however, I could not get the cvBlobsLib CBlobResult::Filter function to work and cannot show the results using FillBlob. Here is my code:
Mat image = imread(imageflname, CV_LOAD_IMAGE_COLOR);
Mat gray;
cvtColor(image, gray, CV_BGR2GRAY);

float min_thr = 54;
GaussianBlur(gray, gray, Size(5,5), 0.8, BORDER_REPLICATE);
imwrite("gray.png", gray);

Mat edge_im;
Canny(gray, edge_im, min_thr, min_thr*3, 3);
imwrite("edge_im.png", edge_im);

CBlobResult blobs;
Mat binary_image(edge_im.size(), CV_8UC1);
blobs = CBlobResult(binary_image, Mat(), 4);
cout << "# blobs found: " << blobs.GetNumBlobs() << endl;
blobs.Filter(blobs, B_INCLUDE, CBlobGetArea(), B_GREATER, 70);
cout << "After deletion found: " << blobs.GetNumBlobs() << endl;

Mat edge_open(edge_im.size(), image.type());
edge_open.setTo(0);
for (int i = 0; i < blobs.GetNumBlobs(); i++) {
    blobs.GetBlob(i)->FillBlob(edge_open, CV_RGB(255,255,255));
}
imwrite("edge_im_open.png", edge_open);
The edge image comes out OK, but the area-opening part does not work properly; 'edge_open' is not what it is supposed to be. What could be the reason for this? Any coding suggestions?
Thanks.
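For comparison, a minimal sketch of the bwareaopen-style area opening done with cv::connectedComponentsWithStats instead of cvBlobsLib (this assumes OpenCV 3+ and only illustrates the small-component removal step; it is not a fix for the cvBlobsLib code above):

#include <opencv2/opencv.hpp>
using namespace cv;

// remove connected components smaller than minArea from a binary edge image
Mat areaOpen(const Mat& edges, int minArea)
{
    Mat labels, stats, centroids;
    int n = connectedComponentsWithStats(edges, labels, stats, centroids, 8, CV_32S);

    Mat out = Mat::zeros(edges.size(), CV_8UC1);
    for (int i = 1; i < n; i++)            // label 0 is the background
    {
        if (stats.at<int>(i, CC_STAT_AREA) >= minArea)
            out.setTo(255, labels == i);   // keep this component
    }
    return out;
}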