How to improve resized image quality in gtk+3 c?

I use this code to resize an image in gtk+3 c:
void set_img_zoom()
{
    /* img1buffer, img1, width, height, dest_width and dest_height are globals set elsewhere */
    GdkPixbuf *img1buffer_resized;

    img1buffer_resized = gdk_pixbuf_scale_simple(img1buffer, width, height, GDK_INTERP_NEAREST);
    if (img1buffer_resized != NULL) {
        gtk_image_set_from_pixbuf(GTK_IMAGE(img1), img1buffer_resized);
        /* the GtkImage keeps its own reference, so release ours to avoid a leak */
        g_object_unref(img1buffer_resized);
    }

    /* set crop area to zero */
    dest_width = 0;
    dest_height = 0;
}
This is my original image:
but when the image is large, the quality degrades when I resize it:
I changed GDK_INTERP_NEAREST to other interpolation types, but the result is still not good. How can I improve it?
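For what it is worth, GDK_INTERP_BILINEAR or GDK_INTERP_HYPER normally gives much smoother downscaling than GDK_INTERP_NEAREST, and it also helps to always scale from the original full-resolution pixbuf rather than from an already-scaled copy. A minimal sketch, assuming the same globals as in the question; the helper name and the aspect-ratio handling are only an illustration, not part of the original code:
#include <gtk/gtk.h>

/* assumed to be the same globals used in the question */
extern GdkPixbuf *img1buffer;   /* original, full-resolution image */
extern GtkWidget *img1;         /* the GtkImage that displays it   */

/* Hypothetical helper: scale the original pixbuf to fit inside max_w x max_h,
   keeping the aspect ratio. */
static void set_img_zoom_smooth(int max_w, int max_h)
{
    int src_w = gdk_pixbuf_get_width(img1buffer);
    int src_h = gdk_pixbuf_get_height(img1buffer);
    double ratio = MIN((double) max_w / src_w, (double) max_h / src_h);

    /* GDK_INTERP_HYPER is the highest-quality (and slowest) filter;
       GDK_INTERP_BILINEAR is a good speed/quality compromise. */
    GdkPixbuf *scaled = gdk_pixbuf_scale_simple(img1buffer,
                                                (int) (src_w * ratio),
                                                (int) (src_h * ratio),
                                                GDK_INTERP_HYPER);
    if (scaled != NULL) {
        gtk_image_set_from_pixbuf(GTK_IMAGE(img1), scaled);
        g_object_unref(scaled);
    }
}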

Related

How can I read an image from memory in Matlab? (instead of imread)

I'm trying to get an image from buffer memory into Matlab.
For example, in C++ with OpenCV I can get an image from memory with:
fn_export double ImgFromMem(char* address, double width, double height) {
    // wrap the raw pixel buffer in a cv::Mat header (rows = height, cols = width)
    Mat image = Mat((int) height, (int) width, CV_8UC4, address);
In Matlab:
Img1=imread('testimage.png');
figure,imshow(Img1),title('Original')
[Height, Width, Depth]=size(Img1);
I want to assign Img1 an image taken from memory, like in the C++ example above.
Any ideas about this?

How do I make an image fit the PDF page with no border using iText?

Image image = Image.getInstance("picture.jpg");
image.scaleAbsolute(800f, 600f); // I just set the image to a certain size
document.add(image);
I want it to fit the whole PDF page. What method can I use? I need help, thank you.
I was looking for a solution; thank you for your question and the comment of #mkl.
Here is the sample code, I hope it might help someone.
// define width and height based on the required aspect ratio of your image
int width = x;
int height = y;
document.setPageSize(new Rectangle(width, height));
document.setMargins(0, 0, 0, 0);
img.scaleToFit(width, height);

Resize an image for Histogram of Oriented Gradients (HOG)

I have an image that is 150 pixels high and 188 pixels wide, and I am going to compute HOG on it. As this descriptor needs the detection window (image) size to be 64x128, should I resize my image to 64x128 and then use this descriptor? Like this:
Image<Gray, Byte> GrayFrame = new Image<Gray, Byte>(_TrainImg.Size);
GrayFrame = GrayFrame.Resize(64, 128, INTER.CV_INTER_LINEAR);
My concern is that resizing may change the original gradient orientations, depending on how it is resized, since we are ignoring the aspect ratio of the image.
By the way, the image is already cropped and I can't crop it any further; this is the size of the image after cropping and this is my final bounding box.
Unfortunately, the OpenCV HOGDescriptor documentation is missing.
In OpenCV you can change the values for the detection window, cell size, block stride, and block size.
cv::HOGDescriptor hogDesc;
// binSize is the cell edge length in pixels (e.g. 8); imgWidth x imgHeight is the window size
hogDesc.blockSize = cv::Size(2*binSize, 2*binSize);
hogDesc.blockStride = cv::Size(binSize, binSize);
hogDesc.cellSize = cv::Size(binSize, binSize);
hogDesc.winSize = cv::Size(imgWidth, imgHeight);
Then extract features using
std::vector<float> descriptors;
std::vector<cv::Point> locations;
hogDesc.compute(img, descriptors, cv::Size(0,0), cv::Size(0,0), locations);
Note:
I guess that winSize has to be divisible by blockSize, and blockSize by cellSize.
The size of the feature vector depends on all of these variables, so make sure you use images of the same size and do not change the settings, otherwise you will run into trouble.
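As a rough illustration of that note (not from the original answer), here is a small plain-C sketch of how the descriptor length follows from those settings, assuming the usual layout of 2x2-cell blocks with a one-cell stride and OpenCV's default of 9 orientation bins:
#include <stdio.h>

/* Number of HOG features for a detection window, given square cells of
   binSize pixels, 2x2-cell blocks and a block stride of one cell. */
static int hog_descriptor_size(int winW, int winH, int binSize, int nbins)
{
    int blockSize = 2 * binSize;                      /* block edge in pixels */
    int blocksX   = (winW - blockSize) / binSize + 1;
    int blocksY   = (winH - blockSize) / binSize + 1;
    int cellsPerBlock = 2 * 2;
    return blocksX * blocksY * cellsPerBlock * nbins;
}

int main(void)
{
    /* 64x128 window, 8-pixel cells, 9 bins: 7 * 15 * 4 * 9 = 3780 features */
    printf("%d\n", hog_descriptor_size(64, 128, 8, 9));
    return 0;
}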

How can I resize an image to a specified size?

Actually, I am shrinking high-quality images. I need to have one parameter (width or height) fixed while the other is flexible, but with a defined minimum.
I want to keep the width/height ratio of the image.
Example:
I have an image of (width x height) = 2000px x 3000px and I want to shrink it to a width of 968px and a minimum height of 640px while keeping the width/height ratio of the image.
Using the ImageMagick Perl API, what do I need to call to shrink such an image?
So far I have used this, but the results were only white images:
my $image = Image->new();
$image->Read('my_2000x3000_image.jpg');
$image = $image->[0];
$image->Resize('geometry' => '968' . 'x' . '>');
$image->Write('image_968_min_640.jpg');
What you need is Image::Magick and the Scale method, which takes a maximum width and height.
The following creates a thumbnail from an existing image:
# Thumbnail Dimensions
my ($max_width, $max_height) = (60, 60);
my $thumbImage = Image::Magick->new;
$thumbImage->Read($oldfile);
# geometry is "WIDTHxHEIGHT"; Scale keeps the aspect ratio within that box
$thumbImage->Scale(geometry => qq{${max_width}x${max_height}});
$thumbImage->Write($newfile);

Create masking effect over a view

I would like to create a masking effect over a UIView in order to accomplish the following: I will display a sealed box on the screen, and the user will be able to touch (scratch) the screen in order to reveal what is behind that image (UIView). Something similar to those lottery tickets where you are supposed to scratch off the cover material on top of the results.
If someone could point me in the right direction, that would be awesome; I'm not sure how to start doing this.
thanks
Sorry I'm late. I made some example code which might be of help: https://github.com/oyiptong/CGScratch
drawnonward's approach works.
Pixel editing is usually done with a CGBitmapContext. In this case, I think you will want to create a grayscale bitmap that represents just the alpha channel of the scratch area. As the user scratches, you will paint in this bitmap.
To use a CGBitmapContext as the mask for an image, you must first create a masking image from the bitmap. Use CGImageMaskCreate and pass in a data provider that points to the same pixels used to create the bitmap. Then use CGImageCreateWithMask with your scratch-off image and the mask image built from that bitmap.
You cannot draw directly to the screen on the iPhone. Every time the user moves a finger, you will have to modify the mask bitmap and then invalidate the UIView that draws the image. You may just be able to draw the same image again, or you may need to reconstruct the mask and the masked image each time you draw. As long as the mask image refers directly to the bitmap pixel data, very little memory is actually allocated.
So in pseudocode you want something like this:
scratchableImage = ...
width = CGImageGetWidth( scratchableImage );
height = CGImageGetHeight( scratchableImage );
colorspace = CGColorSpaceCreateDeviceGray();
pixels = CFDataCreateMutable( NULL , width * height );
CFDataSetLength( pixels , width * height );  // allocate the backing bytes
bitmap = CGBitmapContextCreate( CFDataGetMutableBytePtr( pixels ) , width , height , 8 , width , colorspace , kCGImageAlphaNone );
provider = CGDataProviderCreateWithCFData( pixels );
mask = CGImageMaskCreate( width , height , 8 , 8 , width , provider , NULL , false );
scratched = CGImageCreateWithMask( scratchableImage , mask );
At this point, scratched will have an alpha channel dictated by bitmap, but bitmap contains garbage data. Image-mask samples act as an inverse alpha: black pixels in bitmap are opaque (the cover stays visible) and white pixels are clear. Fill bitmap with black first, then paint white pixels as the user scratches. I think that changes to bitmap will automatically apply every time scratched is drawn, but if not, just recreate mask and scratched every time you draw.
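As a rough sketch of that painting step (not part of the original answer), assuming bitmap is the grayscale CGBitmapContextRef from the pseudocode and point is the touch location already converted into bitmap coordinates; the helper names and the 30-pixel brush size are placeholders:
#include <CoreGraphics/CoreGraphics.h>

/* Call once after creating the context: an all-black mask lets the cover
   image show everywhere (image-mask value 0 = painted). */
static void reset_mask(CGContextRef bitmap, size_t width, size_t height)
{
    CGContextSetGrayFillColor(bitmap, 0.0, 1.0);                 /* black */
    CGContextFillRect(bitmap, CGRectMake(0, 0, width, height));
}

/* Call on each touch move: painting white (mask value 1) makes that area of
   the masked image transparent, i.e. "scratched off". */
static void scratch_at(CGContextRef bitmap, CGPoint point)
{
    const CGFloat brush = 30.0;   /* brush diameter in pixels (placeholder) */
    CGContextSetGrayFillColor(bitmap, 1.0, 1.0);                 /* white */
    CGContextFillEllipseInRect(bitmap,
        CGRectMake(point.x - brush / 2, point.y - brush / 2, brush, brush));
}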
You probably have a custom UIView for tracking user input. You could derive your custom view from UIImageView and let it draw the image or do the following:
- (void)drawRect:(CGRect)inDirty {
    // assume scratched is a member holding the masked CGImageRef
    CGContextDrawImage( UIGraphicsGetCurrentContext() , [self bounds] , scratched );
}
Alternatively, you can skip creating scratched, use CGContextClipToMask instead, and draw the original scratchableImage directly. If there is no scratchableImage and your scratch area is a texture or a view hierarchy, take this approach.
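A rough sketch of that alternative (again an illustration, not the original author's code), assuming mask and scratchableImage are the same objects as in the pseudocode above and ctx is the context obtained in drawRect: via UIGraphicsGetCurrentContext():
#include <CoreGraphics/CoreGraphics.h>

/* Draw image clipped by the scratch mask directly, without building a
   separate "scratched" image. */
static void draw_masked(CGContextRef ctx, CGRect bounds,
                        CGImageRef mask, CGImageRef image)
{
    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, bounds, mask);   /* clip to the scratch mask */
    CGContextDrawImage(ctx, bounds, image);   /* draw the cover image     */
    CGContextRestoreGState(ctx);
}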