Boundary detection of a paper sheet with OpenCV - iPhone

I am new to OpenCV. I can already detect the edges of a paper sheet, but my result image is blurred after I draw lines on the edges. How can I draw lines on the edges of the paper sheet so that the image quality remains unaffected?
What am I missing?
My code is below.
Many thanks.
-(void)forOpenCV
{
    if (imageView.image != nil)
    {
        cv::Mat greyMat = [self cvMatFromUIImage:imageView.image];
        vector<vector<cv::Point> > squares;
        cv::Mat img = [self debugSquares:squares :greyMat];
        imageView.image = [self UIImageFromCVMat:img];
    }
}
- (cv::Mat)debugSquares:(std::vector<std::vector<cv::Point> >)squares :(cv::Mat &)image
{
    NSLog(@"%lu", squares.size());

    // blur will enhance edge detection
    Mat blurred(image);
    medianBlur(image, blurred, 9);

    Mat gray0(image.size(), CV_8U), gray;
    vector<vector<cv::Point> > contours;

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&image, 1, &gray0, 1, ch, 1);

        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);

                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), cv::Point(-1,-1));
            }
            else
            {
                gray = gray0 >= (l+1) * 255 / threshold_level;
            }

            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            // Test contours
            vector<cv::Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(Mat(approx))) > 1000 &&
                    isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }

                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }

    NSLog(@"%lu", squares.size());

    for (size_t i = 0; i < squares.size(); i++)
    {
        cv::Rect rectangle = boundingRect(Mat(squares[i]));
        if (i == squares.size() - 1) // detecting the rectangle here
        {
            const cv::Point* p = &squares[i][0];
            int n = (int)squares[i].size();
            NSLog(@"%d", n);
            line(image, cv::Point(507,418), cv::Point(507+1776,418+1372), Scalar(255,0,0), 2, 8);
            polylines(image, &p, &n, 1, true, Scalar(255,255,0), 5, CV_AA);
            fx1 = rectangle.x;
            fy1 = rectangle.y;
            fx2 = rectangle.x + rectangle.width;
            fy2 = rectangle.y + rectangle.height;
            line(image, cv::Point(fx1,fy1), cv::Point(fx2,fy2), Scalar(0,0,255), 2, 8);
        }
    }

    return image;
}

Instead of
Mat blurred(image);
you need to do
Mat blurred = image.clone();
because the first line does not copy the image data; it only creates a second header pointing to the same data. When you blur the image, you are also changing the original. What you need to do instead is create a real copy of the actual data and operate on that copy.
The OpenCV reference states:
A matrix can be copied by using a copy constructor or an assignment operator, where on the right side it can be a matrix or an expression; see below. Again, as noted in the introduction, matrix assignment is an O(1) operation because it only copies the header and increases the reference counter. The Mat::clone() method can be used to get a full (a.k.a. deep) copy of the matrix when you need it.
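To make the distinction concrete, here is a minimal standalone sketch (not from the original answer) showing that a constructor copy shares the pixel data while clone() does not:
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat original = cv::Mat::zeros(2, 2, CV_8U);

    cv::Mat shallow(original);          // copies only the header; data is shared
    cv::Mat deep = original.clone();    // allocates new memory and copies the data

    shallow.at<uchar>(0, 0) = 255;      // writes through to 'original' as well

    std::cout << (int)original.at<uchar>(0, 0) << std::endl; // 255: original changed
    std::cout << (int)deep.at<uchar>(0, 0) << std::endl;     // 0: the clone is untouched
    return 0;
}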

The first problem is easily solved by doing the entire processing on a copy of the original image. That way, after you get all the points of the square, you can draw the lines on the original image and it will not be blurred.
The second problem, which is cropping, can be solved by defining an ROI (region of interest) in the original image and then copying it to a new Mat. I've demonstrated that in this answer:
// Setup a Region Of Interest
cv::Rect roi;
roi.x = 50;
roi.y = 10;
roi.width = 400;
roi.height = 450;

// Crop the original image to the area defined by ROI
cv::Mat crop = original_image(roi);
cv::imwrite("cropped.png", crop);

Related

Converting a grayscale 1D list of image pixels to a grayscale image in Dart

I am trying to convert a binary mask predicted with the pytorch_mobile package to an image I can show in my app.
The prediction I receive is a 1-dimensional list containing the predictions my model spits out: these are negative for pixels assigned to the background and positive for pixels assigned to the area of interest. After this, I create a list that assigns the value 0 to all previously negative values and 255 to all previously positive values, yielding a 1-dimensional list containing the value 0 or 255 depending on what each pixel was classified as.
The predicted image is 512x512 pixels, so the length of the list is 262,144.
How would I be able to convert this list into an image that I can save to storage or show via the Flutter UI?
Here is my current code:
customModel = await PyTorchMobile.loadModel('assets/segmentation_model.pt');
result_list = [];
File image = File(filePath);
List prediction = await customModel.getImagePredictionList(image, 512, 512);
prediction.forEach((element) {
  if (element > 0) {
    result_list.add(255);
  } else if (element <= 0) {
    result_list.add(0);
  }
});
result_list_Uint8 = Uint8List.fromList(result_list);
The following should do the trick. Just use Image.setPixelSafe to set every pixel in the image and then convert it to a Flutter Image widget with encodePng and Image.memory.
import 'package:image/image.dart' as im;
...
final img = im.Image(512, 512);
for (var i = 0, len = 512; i < len; i++) {
  for (var j = 0, len = 512; j < len; j++) {
    // assuming the list is row-major: i is the row (y), j is the column (x)
    final color = result_list_Uint8[i * 512 + j] == 0 ? 0 : 0xffffff;
    img.setPixelSafe(j, i, 0xff000000 | color);
  }
}
final pngBytes = Uint8List.fromList(im.encodePng(img));
photoImage = Image.memory(pngBytes);

EaselJS shape x,y properties confusion

I generate a 4x4 grid of squares with the code below. They all draw in the correct positions, in rows and columns, on the canvas after stage.update(). But on inspection, the x,y coordinates of all sixteen of them are 0,0. Why? Does each shape have its own x,y coordinate system? If so, once I get a handle to a shape, how do I determine where it was originally drawn onto the canvas?
The EaselJS documentation is silent on the topic ;-). Maybe you had to know Flash.
var stage = new createjs.Stage("demoCanvas");
for (i = 0; i < 4; i++) {
  for (j = 0; j < 4; j++) {
    var square = new createjs.Shape();
    square.graphics.drawRect(i*100, j*100, 100, 100);
    console.log("Created square " + square.x + "," + square.y);
    stage.addChild(square);
  }
}
You are drawing the graphics at the coordinates you want, instead of drawing them at 0,0, and moving them using x/y coordinates. If you don't set the x/y yourself, it will be 0. EaselJS does not infer the x/y or width/height based on the graphics content (more info).
Here is an updated fiddle where the graphics are all drawn at [0,0], and then positioned using x/y instead: http://jsfiddle.net/0o63ty96/
Relevant code:
square.graphics.beginStroke("red").drawRect(0,0,100,100);
square.x = i * 100;
square.y = j * 100;

Relation between Harris detector results in MATLAB and OpenCV

I am working on corner feature detection using the Harris detector. I wrote a program in MATLAB that detects Harris features in an image, using the following code:
corners = detectHarrisFeatures(img, 'MinQuality', 0.0001);
S = corners.selectStrongest(100);
Then I ported the whole program from MATLAB to OpenCV. I used the following code to detect Harris corner points:
int thresh = 70;
for (int j = 0; j < dst_norm.rows && cont < 100; j++)
{
    for (int i = 0; i < dst_norm.cols && cont < 100; i++)
    {
        if ((int) dst_norm.at<float>(j, i) > thresh)
        {
            S.at<int>(cont, 0) = i;
            S.at<int>(cont, 1) = j;
            I.at<int>(cont, 0) = i;
            I.at<int>(cont, 1) = j;
            cont = cont + 1;
        }
    }
}
The extracted regions were different in the two programs, and I discovered that the corner points detected by Harris in MATLAB are not the same as the Harris corner points detected in OpenCV.
How can I make the detected corner points from both programs the same?
Is dst_norm an array of Harris corner metric values? In that case, you are choosing the first 100 pixels with a corner metric above the threshold, which is incorrect.
In your MATLAB code, detectHarrisFeatures finds points which are local maxima of the corner metric. The selectStrongest method then selects the 100 of those points with the highest metric. So, first you have to find the local maxima. Then you have to sort them and take the top 100.
Even then, the results will not be exactly the same, because detectHarrisFeatures locates the corners with sub-pixel accuracy, using interpolation.
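To illustrate (this sketch is not from the original answer), assuming dst_norm and thresh are the CV_32F corner-metric matrix and threshold from the question's code, the local-maxima-then-sort approach could look like this in OpenCV:
// Sketch only: dst_norm and thresh come from the question's code.
// 1) Non-maximum suppression: keep only pixels equal to their 3x3 local maximum
cv::Mat dilated, localMax;
cv::dilate(dst_norm, dilated, cv::Mat());             // 3x3 maximum filter
cv::compare(dst_norm, dilated, localMax, cv::CMP_GE); // 255 where pixel is a local max

// 2) Collect local maxima above the threshold, together with their metric values
std::vector<std::pair<float, cv::Point> > corners;
for (int y = 0; y < dst_norm.rows; y++)
    for (int x = 0; x < dst_norm.cols; x++)
        if (localMax.at<uchar>(y, x) && dst_norm.at<float>(y, x) > thresh)
            corners.push_back(std::make_pair(dst_norm.at<float>(y, x), cv::Point(x, y)));

// 3) Sort by metric, strongest first, and keep the top 100
std::sort(corners.begin(), corners.end(),
          [](const std::pair<float, cv::Point>& a, const std::pair<float, cv::Point>& b) {
              return a.first > b.first;
          });
if (corners.size() > 100)
    corners.resize(100);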

Segmenting Lungs and nodules in CT images

I am new to image processing in MATLAB. I am trying to segment the lung and nodules from a CT image, and I have done initial image enhancement.
I searched a lot on this but haven't found any relevant material.
I am trying to segment the lung part from the given image, and then to detect nodules on the lung part.
Code I tried in Matlab:
d1 = dicomread('000000.dcm');
d1ca = imadjust(d1);
d1nF = wiener2(d1ca);
d1Level = graythresh(d1nF);
d1sBW = im2bw(d1nF,d1Level);
sed = strel('diamond',1);
BWfinal = imerode(d1sBW,sed);
BWfinal = imerode(BWfinal,sed);
BWoutline = bwperim(BWfinal);
Segout = d1nF;
Segout(BWoutline) = 0;
edgePrewitt = edge(d1nF,'prewitt');
Result of the above code:
I want results like these:
http://oi41.tinypic.com/35me7pj.jpg
http://oi42.tinypic.com/2jbtk6p.jpg
http://oi44.tinypic.com/w0kthe.jpg
http://oi40.tinypic.com/nmfaio.jpg
http://oi41.tinypic.com/2nvdrie.jpg
http://oi43.tinypic.com/2nvdnhk.jpg
I know it may be easy for experts. Please help me out.
Thank you!
The following is not a MATLAB answer! However, OpenCV and MATLAB share many features, and I'm sure you will be able to translate this C++ code to MATLAB with no problems.
For more information about the methods being called, check the OpenCV documentation.
#include <iostream>
#include <vector>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main(int argc, char* argv[])
{
    // Load input image (colored, i.e. 3-channel)
    cv::Mat input = cv::imread(argv[1]);
    if (input.empty())
    {
        std::cout << "!!! failed imread()" << std::endl;
        return -1;
    }

    // Convert input image to grayscale (1-channel)
    cv::Mat grayscale = input.clone();
    cv::cvtColor(input, grayscale, cv::COLOR_BGR2GRAY);
What grayscale looks like:
    // Erode & dilate to remove noise and improve the result of the next operation (threshold)
    int erosion_type = cv::MORPH_RECT; // MORPH_RECT, MORPH_CROSS, MORPH_ELLIPSE
    int erosion_size = 3;
    cv::Mat element = cv::getStructuringElement(erosion_type,
                          cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1),
                          cv::Point(erosion_size, erosion_size));
    cv::erode(grayscale, grayscale, element);
    cv::dilate(grayscale, grayscale, element);
What grayscale looks like after morphological operations:
    // Threshold to segment the area of the lungs
    cv::Mat thres;
    cv::threshold(grayscale, thres, 80, 150, cv::THRESH_BINARY);
What thres looks like:
    // Find the contours of the lungs in the thresholded image
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(thres, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Fill the areas of the lungs with BLUE for better visualization
    cv::Mat lungs = input.clone();
    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> cnt = contours[i];
        double area = cv::contourArea(cv::Mat(cnt));
        if (area > 15000 && area < 35000)
        {
            std::cout << "* Area: " << area << std::endl;
            cv::drawContours(lungs, contours, i, cv::Scalar(255, 0, 0),
                             CV_FILLED, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
        }
    }
What lungs looks like:
    // Using the image with blue lungs as a mask, we create a new image containing only the lungs
    cv::Mat blue_mask = cv::Mat::zeros(input.size(), CV_8UC1);
    cv::inRange(lungs, cv::Scalar(255, 0, 0), cv::Scalar(255, 0, 0), blue_mask);
    cv::Mat output;
    input.copyTo(output, blue_mask);
What output looks like:
At this point you have the lungs isolated in the image and can proceed to execute other filter operations to isolate the nodules.
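As a possible follow-up (a sketch of one approach, not part of the original answer): nodules show up as small bright blobs inside the lung fields, so one option is a second threshold on the masked image followed by a contour-area filter that keeps only small candidates. The threshold value and area bounds below are illustrative guesses:
    // Sketch only: threshold and area bounds are illustrative, tune per dataset
    cv::Mat output_gray, nodules_bin;
    cv::cvtColor(output, output_gray, cv::COLOR_BGR2GRAY);
    cv::threshold(output_gray, nodules_bin, 120, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point> > nodule_contours;
    cv::findContours(nodules_bin, nodule_contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < nodule_contours.size(); i++)
    {
        double area = cv::contourArea(nodule_contours[i]);
        if (area > 20 && area < 500) // keep only small blobs: nodule candidates
            cv::drawContours(output, nodule_contours, (int)i, cv::Scalar(0, 0, 255), 2);
    }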
Good luck.
Try this:
% dp6BK.png is your original image, downloaded directly
I = im2double(imread('dp6BK.png'));
I = I(:,:,1);
imshow(I)

cropped = I(50:430, 8:500);               %% Crop region of interest
thresholded = cropped < 0.35;             %% Threshold to isolate lungs
clearThresh = imclearborder(thresholded); %% Remove border artifacts in image
Liver = bwareaopen(clearThresh, 100);     %% Remove objects smaller than 100 pixels
Liver1 = imfill(Liver, 'holes');          % fill in the vessels inside the lungs
figure, imshow(Liver1 .* cropped)
What you will get:

findContours(OpenCV) vs. regionprops(Matlab)

My final goal is to calculate the moments of all connected regions in an image.
Problem: the way regions are stored in OpenCV and MATLAB is different, so the moments of regions and of contours are different too.
So if I want to reproduce the result of extracting regions in MATLAB:
test = imread('test.bmp');
l = bwlabel(test, 8);
the following OpenCV code is necessary:
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
Mat test = imread("..\\test.bmp", CV_LOAD_IMAGE_UNCHANGED);
findContours(test, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
for (int i = 0; i < contours.size(); i++)
{
    if (hierarchy[i][3] > -1)
        continue;

    Mat imProcessing = Mat::zeros(test.size(), test.type());
    drawContours(imProcessing, contours, i, Scalar(255), CV_FILLED, 8, hierarchy, 1);
    // now imProcessing contains the connected region, not just the contour!
}
Is there a more efficient way?
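One alternative worth sketching (an assumption-laden sketch, not from the original post; it requires OpenCV 3+ for cv::connectedComponents): label the regions directly, much like bwlabel does, and compute cv::moments per label mask:
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Sketch only: assumes a binary 8-bit image and OpenCV 3+ for connectedComponents
    cv::Mat test = cv::imread("test.bmp", cv::IMREAD_GRAYSCALE);
    cv::Mat labels;
    int nLabels = cv::connectedComponents(test, labels, 8); // analogous to bwlabel(test, 8)

    for (int label = 1; label < nLabels; label++) // label 0 is the background
    {
        cv::Mat region = (labels == label);       // 8-bit mask of one connected region
        cv::Moments m = cv::moments(region, true);
        std::cout << "region " << label << " area: " << m.m00 << std::endl;
    }
    return 0;
}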