Nearest neighbour scaling in Core Image - swift

I'd like to efficiently create an up-scaled CIImage from a minimally sized one, using Nearest Neighbour scaling.
Say I want to create an image at arbitrary resolutions such as these EBU Color Bars:
In frameworks like OpenGL, we can store this as a tiny 8x1 pixel texture and render it to arbitrarily sized quads, and as long as we use Nearest Neighbour scaling the resulting image stays sharp.
Our options with CIImage appear to be limited to .transformed(by: CGAffineTransform(scaleX:y:)) and .applyingFilter("CILanczosScaleTransform"), both of which use smooth sampling. That is a good choice for photographic images, but it blurs the edges of line-art images such as these color bars - I specifically want a pixellated effect.
Because I'm trying to take advantage of GPU processing in the Core Image backend, I'd rather not provide an already upscaled bitmap image to the process (using CGImage, for example).
Is there some way of either telling Core Image to use Nearest Neighbour sampling, or perhaps writing a custom subclass of CIImage, to achieve this?

I think you can use samplingNearest() for that:
let scaled = image.samplingNearest().transformed(by: …)

Related

Aligning two images

I have two images of the same shoe sole, one taken with a scanning machine and another with a digital camera. I want to scale one of the images so that it can be easily aligned with the other without having to do it all by hand.
My thought was to use edge detection, connect all the points on the outside of the shoe, scale one image to fit right inside the other, and then scale the original image at the same rate.
I've messed around using different tools in the Image Processing Toolbox in MATLAB, but am making no progress.
Is there a better way to go about this?
My advice would be to first use the function activecontour to obtain the outer contour of the shoe in both images. Then use the function procrustes with the binary images as input:
[~, CameraFittedToScan] = procrustes(Scan,Camera);
This transforms the camera image to best fit the scanned image. If the scan and camera images are not the same size, this needs to be adjusted first using the function imresize.
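A rough MATLAB sketch of that workflow (the variable names, initial mask, and iteration count are illustrative assumptions, not from the original answer):
% scanImg and cameraImg are assumed to be grayscale images already loaded.
initMask = false(size(scanImg));
initMask(25:end-24, 25:end-24) = true;            % initial contour inset from the border
scanMask = activecontour(scanImg, initMask, 300);
initMask = false(size(cameraImg));
initMask(25:end-24, 25:end-24) = true;
cameraMask = activecontour(cameraImg, initMask, 300);
% procrustes needs inputs of matching size, so resize the camera mask first.
cameraMask = imresize(cameraMask, size(scanMask));
[~, CameraFittedToScan] = procrustes(double(scanMask), double(cameraMask));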

How to find the distance between black points in an image using image processing

How do I find the distance between black points in an image using image processing? The image is taken by a web camera and is a snapshot of a moving belt covered by white paper with black dots on it.
There are lots of ways of doing it.
Firstly you need to identify the dots. Use Otsu thresholding to separate foreground from background, then convert to binary and label connected components. Eliminate everything that is smaller or larger than a size threshold, or anything that isn't roughly circular.
Because the belt is moving you get a sequence of frames, so you need a blob-following algorithm. Eliminate any stationary blob (it is not on the paper).
Finally output the distances based on the blob identifications.
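A minimal MATLAB sketch of that pipeline for a single frame (the thresholds and size limits are illustrative assumptions; pdist needs the Statistics and Machine Learning Toolbox):
% I is assumed to be one grayscale frame grabbed from the web camera.
level = graythresh(I);                      % Otsu threshold
bw    = ~imbinarize(I, level);              % dots are dark on white paper, so invert
bw    = bwareaopen(bw, 20);                 % drop specks smaller than ~20 pixels
stats = regionprops(bw, 'Centroid', 'Area', 'Eccentricity');
% Keep only roughly circular blobs within a plausible size range.
keep      = [stats.Area] < 2000 & [stats.Eccentricity] < 0.8;
centroids = vertcat(stats(keep).Centroid);
% Pairwise distances between dot centres, in pixels.
D = squareform(pdist(centroids));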

Detecting shape from the predefined shape and cropping the background

I have several images of a pugmark with a lot of irrelevant background region. I cannot use intensity-based algorithms to separate the background from the foreground.
I have tried several methods. One of them is detecting an object in a homogeneous intensity image,
but this does not work with rough-texture images like these:
http://img803.imageshack.us/img803/4654/p1030076b.jpg
http://imageshack.us/a/img802/5982/cub1.jpg
http://imageshack.us/a/img42/6530/cub2.jpg
There could be three possible methods:
1) If I can reduce the roughness of the image and obtain a smoother texture, i.e. a flatter surface.
2) If I could detect the pugmark-like shape in these images by defining a rough pugmark shape in a database and then removing the background, to obtain an image like http://i.imgur.com/W0MFYmQ.png
3) If I could detect the regions with depth and separate them from the background based on the difference in their depths.
Please tell me whether any of these methods would work and, if so, how to implement them.
I have a hunch that this problem could benefit from using polynomial texture maps.
See here: http://www.hpl.hp.com/research/ptm/
You might want to consider top-down information in the process. See, for example, this work.
It looks like you're close enough to the pugmark, so I think you should be able to detect pugmarks using the Viola-Jones algorithm. Maybe a PCA-like algorithm such as Eigenface would work too; even if you're not trying to recognize a particular pugmark, it can still be used to tell whether or not there is a pugmark in the image.
Have you tried edge detection on your image? I guess it should be possible to fine-tune the Canny edge detector thresholds to get rid of the noise (if that's not good enough, low-pass filter your image first), then do shape recognition on what remains (you would then be in the field of geometric feature learning and structural matching). Viola-Jones and possibly a PCA-like algorithm would be my first try, though.
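A rough MATLAB sketch of the edge-detection suggestion (the smoothing sigma and Canny thresholds are illustrative starting points that would need tuning on the actual images):
% I is assumed to be one of the pugmark images, converted to grayscale.
Ismooth = imgaussfilt(I, 2);                    % low-pass filter to suppress the rough texture
edges   = edge(Ismooth, 'Canny', [0.05 0.2]);   % fine-tune thresholds to keep only strong outlines
edges   = bwareaopen(edges, 50);                % remove small edge fragments left by noise
imshowpair(I, edges, 'montage');                % inspect what remains for shape recognition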

MATLAB texture mapping: Resize the image using imresize or built-in resize?

I'm plotting a 3D box and I use texture mapping to map the six sides with some images. I use the following lines of code:
Z=zeros(height,width);
surface(Z, 'FaceColor','texturemap','EdgeColor','none','Cdata',image);
hold on;
And then the next side, and so on. What I have done so far is resize the images using imresize:
image = imresize(image,[height width]);
My question is whether there will be a big difference, in terms of resolution and speed, if I just use the original-sized image for texture mapping. Is it maybe even better not to use imresize? The thing is, I'd have to change some code in between these lines and come up with some other solutions, but if the resolution of the mapped images would be better without imresize, it would be totally worth it.
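For reference, a small self-contained MATLAB sketch of the two variants being compared (the test image and surface size are placeholders; "img" avoids shadowing the built-in image function):
% Variant 1: resize the image to the surface grid first.
img    = imread('peppers.png');
height = 200; width = 300;
Z      = zeros(height, width);
figure;
surface(Z, 'FaceColor','texturemap', 'EdgeColor','none', ...
        'CData', imresize(img, [height width]));
% Variant 2: pass the original image directly; with 'texturemap' the CData is
% stretched over the surface, so it does not have to match the size of Z.
figure;
surface(Z, 'FaceColor','texturemap', 'EdgeColor','none', 'CData', img);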

Getting pixel data from an image on iPhone

I want to use bitmap images as a "map" for levels in an iPhone game. Basically it's all about the location of obstacles in the rectangular world. The obstacles would be color-coded -- where a pixel is white, there's no obstacle; black means there is one at that point.
Now I need to use this data to do two things: (a) display the level map, and (b) perform in-game calculations. So, in general, I need a way to read the data from the bitmap and create some matrix-like data structure with that information - both to overlay the bitmap onto the level graphics and to calculate collisions and such.
How should I do it? Is there any easy way to read the data from an image? And what's the best format in which to keep the images for this?
Have you looked at how Texture2D translates an image file to an OpenGL texture?
Tip: take a look at this Method in Texture2D.m:
- (id) initWithCGImage:(CGImageRef)image orientation:(UIImageOrientation)orientation sizeToFit:(BOOL)sizeToFit pixelFormat:(Texture2DPixelFormat)pixelFormat filter:(GLenum) filter
In 3D apps, it's quite common to use this kind of representation for height maps. In a height map, you use a texture with colors that range from black to white (white represents the maximum altitude).
(For example, going from a grayscale height-map image to the 3D surface rendered from it.)
That was just to tell you that your representation is not that crazy :).
As for reading the bitmap, I would also recommend reading this (just in case you want to go deeper).
Hope I helped a bit!