How to display a small image at a larger size without losing clarity in Python? - python-imaging-library

from PIL import Image

image = Image.open('red.jpg')
# Upscale the 80x60 original to 160x120.
image1 = image.resize((160, 120), Image.ANTIALIAS)
The original image red.jpg has dimensions 80x60. The resized image I obtain is blurry and has lost clarity, so please suggest some methods to increase the clarity of the resized image.
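For what it's worth, upscaling cannot recover detail that the 80x60 original never contained, but a high-quality resampling filter followed by a sharpening pass usually gives the crispest result. A minimal sketch, assuming a reasonably recent Pillow where Image.LANCZOS is the successor to the deprecated Image.ANTIALIAS name (file names are placeholders):
from PIL import Image, ImageFilter
image = Image.open('red.jpg')
# Upscale with a high-quality resampling filter.
upscaled = image.resize((160, 120), Image.LANCZOS)
# Optionally sharpen to counteract the softness introduced by interpolation.
sharpened = upscaled.filter(ImageFilter.SHARPEN)
sharpened.save('red_large.jpg')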

Related

Resizing command changes image shape

I have to resize an image, e.g. from 3456x5184 to 700x700, because my code needs an image with fewer pixels; otherwise it takes too long to produce results. When I use the imresize command it changes the dimensions of the image, but it also changes its shape: a circle in the image, which I need to detect, ends up looking like an oval instead of a circle. I would be grateful for your suggestions to resolve this problem.
Resizing images is done either by subsampling (to get smaller images) or by some kind of interpolation (to get larger images).
The input is either a scale factor or a final dimension for width and height.
The only way to fit a rectangle into a square by simply resizing it is to use different scales for width and height, which of course will yield a distorted image.
To achieve what you want, you can either crop a 700x700 region from your image or resize the image using the same factor for width and height. Then you can fit the larger dimension into 700 and fill the rest along the other dimension with black or whatever you prefer.
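To illustrate the second option, here is a rough Python/Pillow sketch of the idea (the question itself is about MATLAB's imresize, and the file names are placeholders): scale both dimensions by one common factor, then pad the remainder with black.
from PIL import Image
src = Image.open('input.jpg').convert('RGB')   # placeholder file name
target = 700
# One scale factor for both width and height, so circles stay circles.
factor = target / max(src.size)
resized = src.resize((round(src.width * factor), round(src.height * factor)), Image.LANCZOS)
# Centre the result on a black 700x700 canvas.
canvas = Image.new('RGB', (target, target), (0, 0, 0))
canvas.paste(resized, ((target - resized.width) // 2, (target - resized.height) // 2))
canvas.save('output.jpg')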

The image size is changed when the image is too big

I am using an image with size 4000*3000 pixels. When I show this image with the imshow function, the program reports:
Image is too big to fit on screen : displaying at 67%.
After that, when I check the size of the image with the size() function, the number of columns is always three times that of the original image. For example, when my image is 563*1000, this function shows me 563*3000.
Could anyone tell me how to fix this problem?
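One likely cause, offered here only as a guess since the thread itself gives no answer: the extra factor of 3 is the colour-channel dimension of an RGB image, which size() folds into the column count when you request only two outputs. The same structure is easy to see from Python, with a placeholder file name:
import numpy as np
from PIL import Image
img = np.asarray(Image.open('photo.jpg'))   # placeholder file name
print(img.shape)   # e.g. (563, 1000, 3): rows, columns, and 3 colour channels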

How to resize a too-big image to a smaller one while keeping the original values

I have a grayscale image of size <2559x3105 uint16>. When I try to open this image, I get a warning that it is too big. I have tried the imresize() function to make it smaller, <512x512 uint8>. When I plot the original image and the resized image, the intensity is decreased after resizing. I want to resize the original image without changing its pixel values. Is there any solution?
If you would like to keep your final image as uint8, I think you would need to first convert the uint16 image to a uint8 image using im2uint8 -
uint8_image = im2uint8(uint16_image);
Then you may apply imresize on uint8_image.
But, if you don't want your final image to be of uint8 type, you can directly use imresize and it would keep the datatype, i.e. the resized image would be of uint16 type.
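For readers coming from the Python tag at the top of this page, roughly the same idea (rescale the full uint16 range into uint8 before resizing) looks like this; the file names are placeholders and this is only a sketch, not MATLAB's im2uint8 itself:
import numpy as np
from PIL import Image
img16 = np.array(Image.open('scan.tif'))   # 16-bit greyscale, placeholder file name
# Map the uint16 range 0..65535 onto uint8 0..255, as im2uint8 does.
img8 = np.round(img16.astype(np.float64) / 65535.0 * 255.0).astype(np.uint8)
# Resize the 8-bit image down to 512x512.
Image.fromarray(img8).resize((512, 512), Image.LANCZOS).save('scan_small.png')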
Read the docs and use the nearest neighbor method. That is,
resized = imresize(original, scale, 'nearest')
This will not use interpolated values. The downside is of course that edges might be jagged.
It sounds like your 16-bit image uses linear codes while the resulting 8-bit image needs to be gamma corrected. If this is the case you can use imadjust with a gamma parameter of 1/2.2 to produce the brighter image.
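In Python/NumPy terms, the gamma adjustment that answer describes would look roughly like this (again just a sketch with a placeholder file name, not the imadjust call itself):
import numpy as np
from PIL import Image
img16 = np.array(Image.open('scan.tif'), dtype=np.float64)   # placeholder file name
# Apply gamma 1/2.2 to the linear 16-bit codes, then map to 8 bits.
img8 = np.round(255.0 * (img16 / 65535.0) ** (1.0 / 2.2)).astype(np.uint8)
Image.fromarray(img8).save('scan_gamma.png')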
Do you get the warning when you display it with imshow? Does it say something like "Image too large to fit the screen, resizing to xx%"? If so, then you can simply ignore the warning. Otherwise, you can set the 'InitialMagnification' parameter of imshow to resize the figure, but not the image itself.

Scaling while conserving ratio

Using the convert method, I would like to be able to transform a landscape or portrait image, given a specified height and width, without altering the aspect ratio.
From the documentation, the resize options act as follows:
'clip': Resizes the image to fit within the specified parameters without distorting, cropping, or changing the aspect ratio
If I have a 200x50 image and I want a 150x150 result, this would result in a 150x37px resized image with its ratio identical to the original's.
If I have a 100x50 image and I want a 150x150 result, this would result in a 150x75px resized image with its ratio identical to the original's.
'crop': Resizes the image to fit the specified parameters exactly by removing any parts of the image that don't fit within the boundaries
If I have a 200x50 image and I want a 150x150 result, this would result in a 150x37px cropped image.
'scale': Resizes the image to fit the specified parameters exactly by scaling the image to the desired size
If I have a 200x50 image and I want a 150x150 result, this would result in a 150x150px resized image where the ratio has been altered to fit.
'max': Resizes the image to fit within the parameters, but as opposed to 'clip' will not scale the image if the image is smaller than the output size
Same output as in 'clip' except that if I have a 100x50 image and I want a 150x150 result, this would result in a 100x50px resized image with its ratio identical to the original's.
What I would like is the ability to make an image conserve its aspect ratio while still being of the required dimensions (with vertical and horizontal centering if need be). It would result in an image that is neither distorted nor clipped.
I understand there is some trickiness to the task, as you have to determine what color to fill the space with (see the ImageMagick doc about space filling).
Any insight would be great; I hope it is not too much of an edge case.
Take a look at this set of examples in the ImageMagick documentation:
http://www.imagemagick.org/Usage/thumbnails/#square
We don't currently offer the ability to "fill" empty parts of the image with a background color, so we do not support this use case. We are looking at adding it in the near term and will update you when it is added.
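Until that lands, one workaround is to do the letterboxing yourself before (or instead of) uploading. A hedged Pillow sketch, with placeholder file names and black chosen arbitrarily as the fill colour:
from PIL import Image

def fit_and_fill(src, size=(150, 150), background=(0, 0, 0)):
    # Scale to fit inside `size` without distortion, then centre on a filled canvas.
    img = src.convert('RGB')
    factor = min(size[0] / img.width, size[1] / img.height)
    resized = img.resize((round(img.width * factor), round(img.height * factor)), Image.LANCZOS)
    canvas = Image.new('RGB', size, background)
    canvas.paste(resized, ((size[0] - resized.width) // 2, (size[1] - resized.height) // 2))
    return canvas

fit_and_fill(Image.open('photo.jpg')).save('thumb_150x150.jpg')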

How to reduce the size of an image captured from the camera

The picture taken with the iPhone camera is nearly 2.5 MB. How can I reduce this size? I have tried
UIImageJPEGRepresentation(image, 0.1f), but it does not affect the size.
You really can't reduce the size the image takes up in memory.
When an image is loaded into a UIImage object, its size will be width x height x 4 bytes. That is the size an uncompressed image takes up in memory.
Even though you can supply compressed image files, every image, once loaded into a UIImage, will be uncompressed. For example, an 8-megapixel photo of roughly 3264x2448 pixels occupies about 3264 x 2448 x 4, around 30 MB, once decoded, regardless of how small the JPEG file is.
If you really need to save some memory, save the image to disk and create a thumbnail which you use in your app. Then, when needed, you can load the larger image and use it.
Try using the Resize method in UIImage+Resize.h
https://github.com/AliSoftware/UIImage-Resize
[aImgView setImage:[ImageObjectFromPicker resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:YourSize interpolationQuality:kCGInterpolationHigh]];