Displaying a 500x500 pixel image in a 640x852 pixel NSImageView without any kind of blurriness (Swift on OS X)

I have spent hours searching Google for an answer to this and trying pieces of code, but I have just not been able to find one. I also recognise that this question has been asked many times; however, I do not know what else to do now.
I have access to 500x500 pixel rainfall radar images from the Met Office's DataPoint API, covering the UK. They must be displayed in a 640x852 pixel area (an NSImageView, whose scaling property is currently set to axis independent), because this is the correct size of the map generated for the boundaries covered by the imagery. I want to display them at the enlarged size of 640x852 using the nearest neighbour algorithm, in an aliased format. This can be achieved in Photoshop by going to Image > Image Size... and setting resample to nearest neighbour (hard edges). The source images should remain at 500x500 pixels; I just want to display them in a larger view.
I have tried setting the magnificationFilter of the NSImageView.layer to each of the three kCAFilter... options, but this made no difference. I have also tried setting the shouldRasterize property of the NSImageView.layer to true, which also had no effect. The images always end up smoothed or anti-aliased, which I do not want.
Having recently come from C#, I may have missed something, as I have not been programming in Swift for very long. In C# (using WPF), I was able to get what I wanted by setting the BitmapScalingOptions of the image element to NearestNeighbour.
To summarise, I want to display a 500x500 pixel image in a 640x852 pixel NSImageView in a pixelated form, without any kind of smoothing (irrespective of whether the display is retina or not) using Swift. Thanks for any help you can give me.
Below is the image source:
Below is the actual result (screenshot from a 5K iMac):
This was created by simply setting the image property of the NSImageView in the tableViewSelectionDidChange method of my NSTableView's delegate (the table is used to select the time to show the image for), using:
let selected = times[timesTable.selectedRow]

// Parse the time selected in the table
let formatter = NSDateFormatter()
formatter.dateFormat = "d/M/yyyy 'at' HH:mm"
let date = formatter.dateFromString(selected)

// Re-format it to match the radar image file names and load the image
formatter.dateFormat = "yyyyMMdd'T'HHmmss"
imageData.image = NSImage(contentsOfFile: basePathStr +
    "RainObs_" + formatter.stringFromDate(date!) + ".png")
Below is what I want it to look like (ignoring the background and cropped out parts). If you save the image yourself you will see it is pixelated and aliased:
Below is the map that the source is displayed over (the source is just in an NSImageView laid on top of another NSImageView containing the map):

Try using a custom subclass of NSView instead of an NSImageView. It will need an image property with a didSet observer that sets needsDisplay. In the drawRect() method, either:
use the drawInRect(_:fromRect:operation:fraction:respectFlipped:hints:) method of the NSImage with a hints dictionary of [NSImageHintInterpolation: NSImageInterpolation.None.rawValue], or
save the current value of NSGraphicsContext.currentContext.imageInterpolation, change it to .None, draw the NSImage with any of the draw...(...) methods, and then restore the context's original imageInterpolation value
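Either way, a minimal sketch of such a subclass might look like the following (Swift 2 syntax to match the code in the question; the class name PixelatedImageView is just an example, and this version uses the first option, passing the interpolation hint to drawInRect):

import Cocoa

class PixelatedImageView: NSView {

    // Assigning a new image triggers a redraw
    var image: NSImage? {
        didSet { needsDisplay = true }
    }

    override func drawRect(dirtyRect: NSRect) {
        guard let image = image else { return }

        // Scale the image to the view's bounds with interpolation disabled,
        // so the 500x500 source is enlarged with hard pixel edges
        image.drawInRect(bounds,
            fromRect: NSZeroRect,   // NSZeroRect means "draw the whole image"
            operation: .CompositeSourceOver,
            fraction: 1.0,
            respectFlipped: true,
            hints: [NSImageHintInterpolation: NSImageInterpolation.None.rawValue])
    }
}

You would then use this view in place of the radar NSImageView (keeping the map NSImageView underneath) and assign its image property from tableViewSelectionDidChange, just as imageData.image is assigned in the question.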

Related

Resizing image generated by PaintCode app

I have imported a vector image into the PaintCode app and then exported it to Swift code. I want to use this vector image in a small view (30x30), but since I want it to work on different devices, I need it to be size-independent.
The original size of the vector image is 512x512. When I add its class to a UIView, only a very small part of the vector image can be seen:
I need to somehow resize the image so that it can fit in a frame of any size. I read somewhere that I have to draw a frame around the image in the PaintCode app; I did that, but nothing changed.
Start by selecting the "Frame" option from the toolbar
Apply the frame to your canvas...
NB: if you mess up the frame, DELETE IT and start again; modifying the frame can change the underlying vector, which is annoying.
Apply the desired resize options. This can be confusing the first time.
I group all the elements into a single group. Select the group and, in the "box" next to the coordinates of the group, change all the lines to "wiggly" lines. This allows PaintCode the greatest amount of flexibility when resizing the image...
Finally, change the export options. I tend to use both "Drawing" and "Image", as this gives me the greatest flexibility during development (a usage sketch of the "Drawing" option follows below).
You should also look at Resizing Constraints, Resizing Drawing Methods and PaintCode Power User: Frames for more details
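To illustrate the "Drawing" export option: the generated StyleKit drawing method can be called from a custom view, passing the view's own bounds so the resizing behaviour set up in PaintCode decides how the 512x512 drawing is scaled. This is only a sketch in current Swift syntax; StyleKit and drawVectorImage(frame:) are placeholder names standing in for whatever PaintCode actually generated from your canvas:

import UIKit

// Placeholder standing in for the PaintCode-generated code; the real
// StyleKit contains the Bezier path drawing exported from the canvas.
enum StyleKit {
    static func drawVectorImage(frame: CGRect) {
        // PaintCode emits the actual drawing code here.
    }
}

// A view that renders the vector drawing at whatever size it is given,
// e.g. the 30x30 view from the question.
class VectorImageView: UIView {
    override func draw(_ rect: CGRect) {
        StyleKit.drawVectorImage(frame: bounds)
    }
}

The important part is that the frame passed to the drawing method is the view's bounds, so the frame and resizing constraints you set up in PaintCode do the scaling rather than the view's transform.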

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a mostly blank background, the note itself may be rotated rather than aligned correctly. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and canny/sobel edge detection to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a couple of methods I thought I could use:
1. Color based, since all currency notes have varying primary colors. The advantage of this method is that it's independent of the rotation or scale of the input image.
2. Detect the small black triangle on the lower left corner of the note. This shape is unique for each denomination.
3. Calculating the difference between 2 images. Since this is a small project, all input images will be of the same dpi and resolution and hence, once aligned, the difference between the input and the true images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than it looked (judging from your comments), which is good! I'm going to show you more or less the way you can go to solve your problem; however, I'm not posting the whole code, just the important parts.
You already have an image that is fairly well cropped and segmented. First you need to ensure that your image is without holes, so fill them!
Iinv=I==0; % you want 1 in money, 0 in not-money;
Ifill=imfill(Iinv,8,'holes'); % Fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have the 4 corners, you should be able to work out the angle of this rotated "square". Once you have it, do:
Irotate=imrotate(Icroped,angle);
Once here you may want to crop it again to end up just with the money! (aaah money always as an objective!)
Hope this helps!

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8 bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they come from the same source file.
I assume that one of them is performing some sort of conversion of the grey levels when reading or before displaying. But which one of them?
And in that case, how can I calibrate so that they display the same image?
The following image shows a comparison of the image read.
In the case of ImageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify things so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the modality LUT transformation) will have intensity values ranging from -1024 to over +2000 Hounsfield units (HU). So an image processing filter should work within this image data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image where the anatomy of interest has the proper contrast for the RGB display driver (mapping the image data of interest to 256 or fewer shades of gray). One of the ways to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, and each pair is basically a view of the anatomy of interest. So using the view data for image processing, instead of the actual image data, may not produce consistent results.

Perspective correction of UIImage from Points

I'm working on an app where I'll let the user take a picture, e.g. of a business card or a photograph.
The user will then mark the four corners of the object (which they took a picture of), like in a lot of document/image/business card scanning apps:
My question is: how do I crop and fix the perspective according to these four points? I've been searching for days and looked at several image processing libraries without any luck.
Can anyone point me in the right direction?
From iOS 8 onwards there is a Core Image filter called CIPerspectiveCorrection. All you need to do is pass it the image and the four points.
There is also another filter, available from iOS 6, called CIPerspectiveTransform, which can be used in a similar way (it skews the image).
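As a sketch of how CIPerspectiveCorrection is typically wired up, in current Swift syntax (the function name and parameter list below are just for illustration, and note that Core Image uses a bottom-left origin, so corner points taken in UIKit coordinates need their y values flipped first):

import UIKit
import CoreImage

// Crop and perspective-correct a UIImage given the four marked corners
// (in image pixel coordinates, bottom-left origin).
func perspectiveCorrected(_ image: UIImage,
                          topLeft: CGPoint, topRight: CGPoint,
                          bottomLeft: CGPoint, bottomRight: CGPoint) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIPerspectiveCorrection") else { return nil }

    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgPoint: topLeft), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: topRight), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: bottomLeft), forKey: "inputBottomLeft")
    filter.setValue(CIVector(cgPoint: bottomRight), forKey: "inputBottomRight")

    guard let output = filter.outputImage else { return nil }

    // Render through a CIContext so the result is a plain bitmap-backed UIImage.
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

CIPerspectiveTransform takes the same four corner keys, but it maps the image onto the quadrilateral rather than pulling the quadrilateral out of the image.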
If this image were loaded in as a texture, it'd be extremely simple to skew it using OpenGL. You'd literally just draw a full-screen quad and use the yellow correction points as the UV coordinate at each point.
I'm not sure if you've tried the OpenCV library yet, but it has a very nice way to deskew an image. Here is a small snippet that takes an array of corners, your four corners for example, and a final size to map it into.
You can read the man page for warpPerspective on the OpenCV site.
cv::Mat deskew(cv::Mat& capturedFrame, cv::Point2f source_points[], cv::Size finalSize)
{
    cv::Point2f dest_points[4];

    // Output of the deskew operation has the same color space as the source
    // frame, but is proportional to the area the document occupied; this is
    // to reduce blur effects from a scaling component.
    cv::Mat deskewedMat = cv::Mat(finalSize, capturedFrame.type());
    cv::Size s = capturedFrame.size();

    // Deskew to the full output image corners (source_points must be supplied
    // in the same order: lower left, upper left, upper right, lower right)
    dest_points[0] = cv::Point2f(0, s.height);        // lower left
    dest_points[1] = cv::Point2f(0, 0);               // upper left
    dest_points[2] = cv::Point2f(s.width, 0);         // upper right
    dest_points[3] = cv::Point2f(s.width, s.height);  // lower right

    // Build the quadrangle "de-skew" transform matrix
    cv::Mat transform = cv::getPerspectiveTransform(source_points, dest_points);

    // Apply the deskew transform at the captured frame's size
    cv::warpPerspective(capturedFrame, deskewedMat, transform, s, cv::INTER_CUBIC);

    return deskewedMat;
}
I don't know the exact solution for your case, but there is an approach for trapezoids: http://www.comp.nus.edu.sg/~tants/tsm/TSM_recipe.html - the idea is to build up the transformation matrix step by step. Theoretically you can add a transformation that converts your shape into a trapezoid.
And there are many questions like this one: https://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle , but I didn't check the solutions.

iPhone SDK: boundary checking for coloring

I'm creating an app where the user already has an image (with different objects) without colors. I have to detect which object was touched and then color it with the respective color when the user touches that object. How should I do this? Can anyone help me?
I would say that that is non-trivial. I can only give hints since I have not done such an app yet.
First, you need to convert the image into a CGImageRef, for example by doing [uiimage_object CGImage].
Next you need to convert the CGImageRef into an array of pixel colors. You can follow the tutorial at http://www.fiveminutes.eu/iphone-image-processing/ for sample code. But for your app you need to treat the array as two-dimensional, based on the image width and height.
Then, use the coordinates of the user's touch to access the exact pixel color value in the array. Next, read off the color values of the surrounding pixels and determine whether each is similar in color to the touched pixel (you might need to read some Wikipedia articles etc. on doing the color comparison). If the color is similar, change it to the one you want. Recurse until the surrounding color is different (i.e. you hit the boundary).
When you are finished modifying the pixel color value array, you need to convert the array back into a CGImageRef using the CGImageCreate function. Then you convert it back to a UIImage using [UIImage imageWithCGImage:imageref].
Now you are on your own to implement the steps into code. It would be unreasonable if you expect me to code all that for you, wouldn't it?
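For what it's worth, here is a rough sketch of those steps in current Swift syntax. The function name, the simple per-channel tolerance test and the use of a bitmap CGContext as the pixel buffer are all illustrative choices rather than the only way to do it, and the start point is assumed to be in pixel coordinates of an opaque image:

import UIKit

func floodFill(in image: UIImage, from start: CGPoint,
               with newColor: (r: UInt8, g: UInt8, b: UInt8),
               tolerance: Int = 32) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    // Render the image into a raw RGBA pixel buffer owned by the context.
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buffer = ctx.data?.assumingMemoryBound(to: UInt8.self) else { return nil }
    let bytesPerRow = ctx.bytesPerRow

    func offset(_ x: Int, _ y: Int) -> Int { return y * bytesPerRow + x * 4 }

    let startX = Int(start.x), startY = Int(start.y)
    guard (0..<width).contains(startX), (0..<height).contains(startY) else { return nil }

    // Color of the touched pixel; the fill spreads over pixels close to it.
    let seed = offset(startX, startY)
    let seedColor = (Int(buffer[seed]), Int(buffer[seed + 1]), Int(buffer[seed + 2]))
    func isSimilar(_ i: Int) -> Bool {
        return abs(Int(buffer[i]) - seedColor.0) <= tolerance &&
               abs(Int(buffer[i + 1]) - seedColor.1) <= tolerance &&
               abs(Int(buffer[i + 2]) - seedColor.2) <= tolerance
    }

    // Iterative flood fill (recursion would overflow the stack on large
    // regions); it stops at pixels whose color differs too much,
    // i.e. the drawn boundary lines.
    var visited = [Bool](repeating: false, count: width * height)
    var stack = [(startX, startY)]
    visited[startY * width + startX] = true

    while let (x, y) = stack.popLast() {
        let i = offset(x, y)
        buffer[i] = newColor.r
        buffer[i + 1] = newColor.g
        buffer[i + 2] = newColor.b
        for (nx, ny) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] {
            guard (0..<width).contains(nx), (0..<height).contains(ny),
                  !visited[ny * width + nx], isSimilar(offset(nx, ny)) else { continue }
            visited[ny * width + nx] = true
            stack.append((nx, ny))
        }
    }

    // Build a new image from the modified pixel buffer.
    guard let filled = ctx.makeImage() else { return nil }
    return UIImage(cgImage: filled, scale: image.scale, orientation: image.imageOrientation)
}

Calling this from a touch handler with the touch location converted to image pixel coordinates gives back the recolored image, which you then assign to the image view.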