I am having a problem with iText where the image I am trying to append to a layer within a digital signature appearance (PdfSignatureAppearance) is coming out blurry.
I have tried adding scaling to the image and also removing scaling, but the results seem inconsistent.
I noticed a similar problem mentioned here:
http://itext-general.2136553.n4.nabble.com/JPEG-images-become-blurry-when-added-to-a-document-td2143785.html
Scaling to fit the margins of the field:
if (i1.getWidth() > 180 || i1.getHeight() > 50) {
    i1.scaleToFit(180, 50);
}
Scaling down to try to increase quality:
i1.scalePercent(85f);
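For context, the overall flow is roughly this (iText 5; appearance comes from PdfStamper.getSignatureAppearance(), layer 2 is used as the graphics layer, and the image path is just a placeholder):

Image i1 = Image.getInstance("signature.jpg");   // source image (ideally high resolution)
PdfTemplate layer2 = appearance.getLayer(2);     // graphics layer of the appearance
// scale down to the field bounds only, never up, so no pixels are invented
if (i1.getWidth() > 180 || i1.getHeight() > 50) {
    i1.scaleToFit(180, 50);
}
i1.setAbsolutePosition(0, 0);
layer2.addImage(i1);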
-Thanks
I need to include many images of unknown origin in a report. I have no idea what the images might be: portrait or landscape photos, large or small, or even something with an atypical shape, like a 400x80 logo.
I'd like to scale down images with the following rule: proportionally downscale until the larger side is 200. The resulting image shouldn't take more space than needed (i.e. 1000x600 should be downscaled to 200x120, not to 200x200), so that there are no unneeded blank margins around non-square images.
Is what I need possible with JasperReports?
EDIT:
To clarify: "real size" mode is almost what I need. However, I don't see a way to limit the height of the resulting image. As a result, if the image I want to print is a portrait photo (or is even taller relative to its width), the generated PDF looks ugly; in this case I would prefer to somehow downscale it to a smaller width.
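To make the rule concrete, this is the arithmetic I have in mind (plain Java, nothing JasperReports-specific; srcWidth and srcHeight stand in for the unknown input size):

// Downscale proportionally so the larger side becomes 200; never enlarge.
int maxSide = 200;
int srcWidth = 1000, srcHeight = 600;                     // example input
double scale = Math.min(1.0, (double) maxSide / Math.max(srcWidth, srcHeight));
int targetWidth  = (int) Math.round(srcWidth * scale);    // 200
int targetHeight = (int) Math.round(srcHeight * scale);   // 120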
I solved the problem of resizing images of various sizes to a fixed size with "RetainShape" by writing an ImageResizer, based on the idea of the ImageTransformer from https://stackoverflow.com/a/39320863/8957103, using https://github.com/rkalla/imgscalr for scaling the image.
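The core of that resizer boils down to something like the sketch below (the class and method names here are mine, not the original ImageResizer; it assumes imgscalr's Scalr.resize with Mode.AUTOMATIC, which keeps the aspect ratio while fitting the result inside the given box):

import java.awt.image.BufferedImage;
import org.imgscalr.Scalr;

public final class ImageResizer {

    // Proportionally shrinks the image so it fits inside a maxSide x maxSide box,
    // e.g. 1000x600 with maxSide = 200 becomes 200x120; small images are left alone.
    public static BufferedImage shrinkToMaxSide(BufferedImage src, int maxSide) {
        if (src.getWidth() <= maxSide && src.getHeight() <= maxSide) {
            return src;  // already small enough; don't enlarge
        }
        return Scalr.resize(src, Scalr.Method.QUALITY, Scalr.Mode.AUTOMATIC, maxSide, maxSide);
    }
}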
I noticed lately that in some cases the PNG looks different from the PDF. I rendered the preview images at different sizes and realized that the output can be totally different for the same input when I change the output size of the surface.
The problem is that text_extents reports different normalized sizes for the same text when the surface pixel size is different. In this example the width varies from 113.861 to 120.175. Since I have to write each line separately, these errors are sometimes much bigger in total.
Does anybody have an idea how to avoid this miscalculation?
Here is a small demonstration of the problem:
import cairo
from StringIO import StringIO


def render_png(width, stream):
    width_px = height_px = width
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width_px, height_px)
    cr = cairo.Context(surface)
    cr.scale(float(width_px) / float(100),
             float(height_px) / float(100))
    cr.set_antialias(cairo.ANTIALIAS_GRAY)

    cr.set_source_rgb(1, 1, 1)
    cr.rectangle(0, 0, 100, 100)
    cr.fill()

    cr.select_font_face('Zapfino Extra LT')  # a fancy font
    cr.set_font_size(20)
    example_string = 'Ein belieber Test Text'
    xbearing, ybearing, width, height, xadvance, yadvance = (
        cr.text_extents(example_string))
    xpos = (100. - width) / 2.  # centering text
    print width  # normalized text width reported by text_extents
    cr.move_to(xpos, 50)
    cr.set_source_rgba(0, 0, 0)
    cr.show_text(example_string)
    surface.write_to_png(stream)
    return width  # returns the measured text width


if __name__ == '__main__':
    l = []
    for i in range(100, 150, 1):
        outs = StringIO()
        xpos = render_png(i, outs)
        l.append((i, xpos))
        #out = open('/home/hwmrocker/Desktop/FooBar/png_test%03d.png'%i, 'w')
        #outs.seek(0)
        #out.write(outs.read())
        #out.close()

    from operator import itemgetter
    l = sorted(l, key=itemgetter(1))
    print
    print l[0]
    print l[-1]
This behavior is likely due to the nature of text rendering itself: glyphs in a font are drawn in different ways depending on the pixel resolution, all the more so when the pixel resolution is small compared to the glyph size (I'd say less than 30px of height per glyph). This behavior is to be expected to some extent, always in order to prioritize readability of the text. If it is too far off, or the PNG text is "uglier" than in the PDF (rather than just a different size), then it is a bug in Cairo. Nevertheless, you should probably put this exact question to the Cairo developers, so that they can tell whether it is a bug or not (and if it is, that may be the only way for them to become aware of it).
(Apparently they have no public bug tracker; just e-mail it to cairo-bugs@cairographics.org.)
As for your specific problem, the workaround I suggest is to render your text to a larger surface, maybe 5 times larger, then resize that surface and paste its contents onto your original surface (if needed at all). This way you may avoid glyph-size variations due to constraints in the number of pixels available for each glyph, at the cost of a poorer text rendering in your final output.
Using the PhotoScroller example by Apple and ImageMagick I managed to build my catalog app.
But I'm having a rendering bug. The tiled images are rendered with a thin line between them.
My simple script using ImageMagick is this:
#!/bin/sh
file_list=`ls | grep JPG`
for i in 100 50 25; do
    for file in $file_list; do
        convert $file -scale ${i}%x -crop 256x256 -set filename:tile "%[fx:page.x/256]_%[fx:page.y/256]" +repage +adjoin "${file%.*}_${i}_%[filename:tile].${file#*.}"
    done
done
The code from Apple is the same. The bizarre thing is that the images they provide already tiled work like a charm, in the same run, side-by-side with my images :(
My first guess was that the size of the tiles didn't match the calculations in the code, but changing the sizes didn't fix it, either in my script or in the code. My images are usually smaller than those provided by Apple, about half the size actually.
Anyone got the same issue?
I had problems with both solutions. Damien's approach did not fully eliminate all lines at all zoom scales and Brent's solution removed the lines, but added some artifacts at tile borders.
After googling around for a while, I finally found a solution that worked nicely for me: http://openradar.appspot.com/8503490 (comment by zephyr.renner).
After all, Apple's assumption that CTM.a == CTM.d doesn't seem to be "safe" at all...
I have the exact same issue here, using the PhotoScroller code. The problem appears when the scale is not right in - (void)drawRect:(CGRect)rect.
You need to round scale... Add scale = 1.0f / roundf(1.0f / scale); after CGFloat scale = CGContextGetCTM(context).a; (it also prevents tiles from being drawn twice).
And draw tiles 1 pix larger... Add tileRect.size.width += 1; tileRect.size.height += 1; after tileRect = CGRectIntersection(self.bounds, tileRect);.
I have encountered this same PhotoScroller issue and Damien's solution was very close but requires one minor correction to completely eliminate those pesky seams.
Drawing tiles one pixel larger didn't work at all zoom levels for me. The reason is that we are drawing the image at the original resolution and then it is scaled by the CTM to the screen resolution.
So, the 1 pixel we added actually becomes 1/4 of a pixel when drawn at the 25% zoom level on the screen.
Therefore, to enlarge the tile by one pixel on the screen we would need to add 1.0/scale to the width/height. (and this should be done before calling CGRectIntersection)
tileRect.size.width += 1.0/scale; tileRect.size.height += 1.0/scale;
tileRect = CGRectIntersection(self.bounds, tileRect);
I am working with images of 2 to 4 MB, at a resolution of 1200x1600, performing scaling, translation and rotation operations. I want to add another image on top of that and save the result to the photo album. My app crashes after I successfully edit one image and save it to Photos. I think it's happening because of the image sizes. I want to maintain 90% of the resolution of the images.
I release some images when I get a memory warning, but it still crashes, since I am working with two images of about 3 MB each plus a 1200x1600 context, and getting an image from the context at the same time.
Is there any way to compress images and work with it?
I doubt it. With a lossy format like JPEG, even compressing and decompressing an image without doing anything else to it loses information. I suspect that any algorithm to manipulate compressed images directly would be hopelessly lossy.
Having said that, it may be technically possible. For instance, rotating a Fourier transform also rotates the original image. But practical image compression isn't usually as simple as just computing a Fourier transform.
Alternatively, you could write piecemeal algorithms that chop the image up into bite-sized pieces, transform the pieces and reassemble them afterwards. You might also provide a real-time view of the process by applying the same transform to a smaller version of the full image.
The key will be never to fully decode the entire image into memory at full size.
If you need to display the image, there's no reason to do that at full size -- the display on the iPhone is too small to take advantage of it. For images that are only being displayed, decode them in scaled-down form.
For processing, you will need to write custom code that works on a stream of pixels rather than an in-memory array. I don't know if this is available on the iPhone already, but you can write it yourself by writing to the libpng library API directly.
For example, your code right now probably looks something like this (pseudo code)
img = ReadImageFromFile("image.png")
img2 = RotateImage(img, 90)
SaveImage(img2, "image2.png")
The key thing to understand is that in this case, img is not the data in the PNG file (2 MB) but the fully uncompressed image (~6 MB). RotateImage (or whatever it's called) returns another image of about the same size. If you are scaling up, it's even worse.
You want code that looks more like this (but there might not be any APIs for you to do it -- you might have to write it yourself):
imgPixelGetter = PixelDecoderFromFile("image.png")
imgPixelSaver = OpenImageForAppending("image2.png")
w = imgPixelGetter.Width
h = imgPixelGetter.Height
// set up a 90 degree rotate
imgPixelSaver.Width = h
imgPixelSaver.Height = w
// read each vertical scanline of pixels
for (x = 0; x < w; ++x) {
    pixelRect = imgPixelGetter.ReadRect(x, 0, 1, h) // x, y, w, h
    pixelRect.Rotate(90); // it's now got a width of h and a height of 1
    imgPixelSaver.AppendScanLine(pixelRect)
}
In this algorithm, you never had the entire image in memory at once -- you read it out piece by piece and saved it. You can write similar algorithms for scaling and cropping.
The tradeoff is that it will be slower than just decoding it into memory -- it depends on the image format and the code that's doing the ReadRect(). Unfortunately, PNG is not designed for this kind of access to the pixels.
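If your platform has an image API with region-based decoding, the loop above maps onto real calls. Below is a rough sketch using desktop Java's javax.imageio, purely to illustrate the strip-by-strip idea (the file name and strip height are arbitrary, and the writing side still needs a format or library that lets you append rows, which is where something like libpng comes in). Note that the PNG reader will still walk the rows sequentially under the hood, which is exactly the limitation mentioned above, but it only has to materialize one strip at a time:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class StripProcessor {
    public static void main(String[] args) throws Exception {
        try (ImageInputStream input = ImageIO.createImageInputStream(new File("image.png"))) {
            ImageReader reader = ImageIO.getImageReaders(input).next(); // assumes a reader is registered
            reader.setInput(input);
            int w = reader.getWidth(0);
            int h = reader.getHeight(0);
            int stripHeight = 64;                                       // arbitrary strip size
            for (int y = 0; y < h; y += stripHeight) {
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceRegion(new Rectangle(0, y, w, Math.min(stripHeight, h - y)));
                BufferedImage strip = reader.read(0, param);            // only this strip is in memory
                // ... transform the strip and hand it to an incremental writer here ...
            }
            reader.dispose();
        }
    }
}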
Can anybody please tell me what the exact difference is between stretching and scaling an image? You can, after all, set the size of both the image and the image view to match your requirements.
It depends on how you define stretching, but I would divide scaling into two distinct options based on whether or not the aspect ratio is preserved. Often it is desired to preserve the aspect ratio when scaling an image.
I would consider an increase in one dimension, but not proportionally in the other to be a "stretch". Similarly, a decrease in one dimension, but not proportionally in the other would be a "squash".
You may find this Daring Fireball post interesting.
Stretching sounds like showing a small image (say 10x10) at 100x100 or 100x10, so sometimes it gets pixelated.
Scaling means showing an image at a different size, smaller or bigger, while maintaining its aspect ratio (programmatically), so it doesn't look wrong; when you stretch to a different aspect ratio, some objects in the image get distorted.
Stretching (in iPhone Interface Builder) means '9-slice scaling'; scaling means just scaling.
When stretching, you can determine which part of the image may be used for stretching and which part may not. For example, when you have a rounded square, you do not want the rounded corners to stretch, especially when you're only stretching horizontally or vertically.
You indicate that you only want to use the middle pixel to stretch by setting (in IB) the X & Y values to 0.50 (halfway) and the width & height values to 0.00 (the minimum number of pixels).
Look up contentStretch in the docs for more info.
When you don't keep the proportions of your image, it looks distorted and its height and width are not suitable for display. To resolve this, you can multiply your image's width and height by a constant coefficient.
Stretching and scaling don't mean anything different except maybe in connotation.
Is there a particular piece of text somewhere that you are trying to understand? Maybe we can help with that.
Stretching an image means stretching a small image beyond its original size.
Scaling an image, on the other hand, means resizing the image according to the viewport's width and height.
Scaling can be done with a small as well as a large image.
You should take a good-quality image and then scale it:
sprite.setscale(x,y);