How to center an image in a PDF table cell with iTextSharp

I can't center an image in a table cell using iTextSharp.
I have tried the approach below, without success.
The image is always misaligned with respect to the cell and overlaps the cell above it in the PDF file.
How do I resolve this?
My code is below.
string imagePath = "~/aspnet/Img/pdf.gif";
iTextSharp.text.Image image2 = iTextSharp.text.Image.GetInstance(imagePath);
image2.Alignment = iTextSharp.text.Image.ALIGN_CENTER;
Chunk cImage = new Chunk(image2, 0, 0, false);
Anchor anchor = new Anchor(cImage);
anchor.Reference = "www.mywebsite.com";
table.AddCell(anchor);

Please try this.
Put the image directly into a PdfPCell, so the cell's alignment settings apply, and use iTextSharp's Annotation property for the link:
string imagePath = "~/aspnet/Img/pdf.gif";
iTextSharp.text.Image image2 = iTextSharp.text.Image.GetInstance(imagePath);
PdfPCell cell2 = new PdfPCell(image2);
cell2.HorizontalAlignment = Element.ALIGN_CENTER; // center the image in the cell
cell2.VerticalAlignment = Element.ALIGN_MIDDLE;
cell2.FixedHeight = 20;
image2.Annotation = new Annotation(0, 0, 0, 0, "www.mywebsite.com");
table.AddCell(cell2);

Related

Swift: How to implement Markup (Annotations) for PDF like iOS 13's Book app?

I have searched everywhere but can't find an answer. The closest I have been able to find is:
func saveAnnotations_forever() {
    //let documentAttributes = pdfDocument?.documentAttributes
    //let attachmentData = pdfDocument?.dataRepresentation()
    let currentPage = pdfView.currentPage
    let page_index = pdfDocument?.index(for: currentPage!)
    let documentURL = self.pdfDocument?.documentURL // URL to your PDF document.
    // Create a `PDFDocument` object using the URL.
    let pdfDocument = PDFDocument(url: documentURL!)!
    // `page` is of type `PDFPage`.
    let page = self.pdfDocument!.page(at: page_index!)!
    // Extract the crop box of the PDF. We need this to create an appropriate graphics context.
    let bounds = page.bounds(for: .mediaBox)
    // Create a `UIGraphicsImageRenderer` to use for drawing an image.
    let renderer = UIGraphicsImageRenderer(bounds: bounds, format: UIGraphicsImageRendererFormat.default())
    // This method returns an image and takes a block in which you can perform any kind of drawing.
    let image = renderer.image { (context) in
        // We transform the CTM to match the PDF's coordinate system, but only long enough to draw the page.
        context.cgContext.saveGState()
        context.cgContext.translateBy(x: 0, y: bounds.height)
        context.cgContext.concatenate(CGAffineTransform(scaleX: 1, y: -1))
        page.draw(with: .mediaBox, to: context.cgContext)
        context.cgContext.restoreGState()
and so on, to render an annotation and a PDF page together as an image. However, I cannot search an annotated page's text (since it was rendered as an image), and I cannot erase my previous annotations once saved. The annotation feature in iOS's Books app works very well; I want to achieve the same, but how?
Thanks in advance.
If you are trying to save an annotation to a PDF, maybe try this code:
// Add an annotation to the current page
let rect = CGRect(x: 30, y: 30, width: 30, height: 30)
let annotation = PDFAnnotation(bounds: rect, forType: .text, withProperties: nil)
pdfView.currentPage?.addAnnotation(annotation)
// Save the PDF back to its file
if let doc = self.document,
   let url = doc.documentURL {
    doc.write(to: url)
}
You can use the PDFAnnotation documentation to further customize the annotation.

Crop image in swift

I am trying to crop an image in Swift. The flow is: the user captures a photo, then is allowed to set the crop area, and I can get the image from that crop area. However, I want the cropped image to be resized to a particular width and height; that is, if its height or width is smaller than the target, it should be scaled up.
The image should fill its frame at the maximum width and height. Currently it just adds transparency to the remaining area.
I have also added my code for cropping:
let tempLayer = CAShapeLayer()
tempLayer.frame = self.view.frame
let path = UIBezierPath()
var endPoint: CGPoint!
for (var i = 0; i < 4; i++) {
    let tag = 101 + i
    let pointView = viewCrop.viewWithTag(tag)
    switch (pointView!.tag) {
    case 101:
        endPoint = CGPointMake(pointView!.center.x - 20, pointView!.center.y - 20)
        path.moveToPoint(endPoint)
    default:
        path.addLineToPoint(CGPointMake(pointView!.center.x - 20, pointView!.center.y - 20))
    }
}
path.addLineToPoint(endPoint)
path.closePath()
tempLayer.path = path.CGPath
tempLayer.fillColor = UIColor.whiteColor().CGColor
tempLayer.backgroundColor = UIColor.clearColor().CGColor
imgReceiptView.layer.mask = tempLayer
UIGraphicsBeginImageContextWithOptions(viewCrop.bounds.size, imgReceiptView.opaque, 0.0)
imgReceiptView.layer.renderInContext(UIGraphicsGetCurrentContext())
let cropImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(cropImg, nil, nil, nil)
imgReceiptView.hidden = true
let tempImageView = UIImageView(frame: CGRectMake(20, self.view.center.y - 80, self.view.frame.width - 40, 160))
tempImageView.backgroundColor = UIColor.grayColor()
tempImageView.image = cropImg
tempImageView.tag = 1001
tempImageView.layer.masksToBounds = true
self.view.addSubview(tempImageView)
Any help would be appreciated.
Thanks in advance.
You can use this library to let the user crop the image interactively:
https://github.com/kishikawakatsumi/PEPhotoCropEditor
Hope this helps!

BlackBerry WebWorks 10: using an image from a taken/uploaded picture

I am stuck on an issue and have no idea what to try next.
In my application I offer the user a choice to either take a photo or choose a photo from the gallery. This part works fine; the problem arises with the saving/reading of the photo. Let's look at this from the camera-invoke perspective.
function startCameraApp() {
    PhotoTaken = false;
    blackberry.event.addEventListener("onChildCardClosed", sharePhoto);
    blackberry.invoke.invoke({
        target: "sys.camera.card"
    }, onInvokeSuccess, onInvokeError);
}
and in sharePhoto I have the following code...
function sharePhoto(request) {
    var image = new Image();
    image.src = "file://" + request.data;
    image.onload = function () {
        // Now here I need to read the image file and convert it into base64.
        var resized = resizeMe(image); // the resizeMe function is given below and simply makes my image smaller
        var imagedata = resized.split("base64,");
        sessionStorage.setItem("MyNewPicture", imagedata);
    };
}
function resizeMe(img) {
    var canvas = document.createElement('canvas');
    var max_width = 600;
    var max_height = 600;
    var width = img.width;
    var height = img.height;
    // Calculate the width and height, constraining the proportions.
    if (width > height) {
        if (width > max_width) {
            height = Math.round(height * max_width / width);
            width = max_width;
        }
    } else {
        if (height > max_height) {
            width = Math.round(width * max_height / height);
            height = max_height;
        }
    }
    // Resize the canvas and draw the image data into it.
    img.width = width;
    img.height = height;
    canvas.width = width;
    canvas.height = height;
    canvas.className += "ui-hidden";
    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0, width, height);
    return canvas.toDataURL();
}
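As an aside, the constrained-proportions math in resizeMe can be checked in isolation. This is only an illustrative sketch; the `computeFit` helper name is mine, not part of the app:

```javascript
// Mirrors the width/height clamping logic from resizeMe: scale down so
// neither side exceeds its maximum, preserving the aspect ratio.
function computeFit(width, height, maxWidth, maxHeight) {
  if (width > height) {
    if (width > maxWidth) {
      height = Math.round(height * maxWidth / width);
      width = maxWidth;
    }
  } else {
    if (height > maxHeight) {
      width = Math.round(width * maxHeight / height);
      height = maxHeight;
    }
  }
  return { width: width, height: height };
}

console.log(computeFit(3000, 1500, 600, 600)); // { width: 600, height: 300 }
console.log(computeFit(400, 500, 600, 600));   // { width: 400, height: 500 } (already fits)
```

Note that, like the original, this only shrinks images; an image already inside the bounds is returned unchanged.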
So the app runs, it takes the photo, and everything seems fine, yet the data uploaded to local storage is simply blank. It works 100% in the BlackBerry 10 simulator, but not on my device; on the device it saves an empty string.
Edit
OK, so I added this to my function for testing purposes, and I am still stuck and don't know what to do:
function sharePhoto(request) {
    var image = new Image();
    image.src = "file://" + request.data;
    image.onload = function () {
        // Now here I need to read the image file and convert it into base64.
        var resized = resizeMe(image); // the resizeMe function is given above and simply makes my image smaller
        var imagedata = resized.split("base64,");
        alert(imagedata); // This returns a blank popup
        sessionStorage.setItem("MyNewPicture", imagedata);
    };
}
I believe when you use the split method it returns an array, so you'd access it like this:
var resized = resizeMe(image);
var imagedata = resized.split("base64,");
imagedata = imagedata[1]; // this gives you everything after the 'base64,' string
However, the main problem I see is that splitting the imagedata string removes the whole "this is an image" prefix from the data.
When you display imagedata as an image, you need the data:image/jpeg;base64, prefix as well.
That being said, your image source would be:
data:image/jpeg;base64,<rest of base64 string here>
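To make the split-and-reassemble step concrete, here is a minimal self-contained sketch using a stand-in string in place of real canvas.toDataURL() output (the base64 payload is a made-up fragment, not a real image):

```javascript
// Stand-in for canvas.toDataURL() output (payload shortened for illustration).
var dataUrl = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg==";

// split("base64,") returns an array; index 1 holds the raw payload.
var parts = dataUrl.split("base64,");
var payload = parts[1];

// To display it again, the data-URL prefix must be restored.
var displaySrc = "data:image/jpeg;base64," + payload;
console.log(payload);    // "iVBORw0KGgoAAAANSUhEUg=="
console.log(displaySrc); // "data:image/jpeg;base64,iVBORw0KGgoAAAANSUhEUg=="
```

Storing `parts` directly (an array) instead of `payload` (a string) is exactly the mistake described above.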
I needed to include extra elements in my application's config.xml:
<access subdomains="true" uri="file://accounts/"></access>
<access subdomains="true" uri="file://accounts/100/shared/camera/"></access>
This gives your application access to these folders and their contained files. The simulator does not require this permission.

How to resize image with imagebox emgucv?

I have an image in an image folder.
How do I resize the image to fit an ImageBox in EmguCV when opening it?
Thanks.
// To get the original image from the OpenFileDialog
Image<Bgr, Byte> captureImage = new Image<Bgr, byte>(openImageFileDialog.FileName);
// To resize the image
Image<Bgr, byte> resizedImage = captureImage.Resize(width, height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR);
Hope it helps.
The answer is very simple.
Suppose the path of the image is "C:\image.jpg".
Mat frame = new Mat(); // declaration
string path = @"C:\image.jpg";
int width = 640, height = 480;
frame = CvInvoke.Imread(path, LoadImageType.AnyColor);
CvInvoke.Resize(frame, frame, new Size(imagebox1.Width, imagebox1.Height), 0, 0, Inter.Linear); // resizes the image to the size of imagebox1
OR
CvInvoke.Resize(frame, frame, new Size(width, height), 0, 0, Inter.Linear); // resizes the image to your specified width and height
This is how I resized an image using EmguCV:
Bitmap bitmap = new Bitmap(FileUpload1.PostedFile.InputStream);
Image<Hsv, Byte> Iimage = new Image<Hsv, byte>(bitmap);
Image<Hsv, Byte> HsvImage = Iimage.Resize(384, 256,INTER.CV_INTER_LINEAR);

CTFramesetterSuggestFrameSizeWithConstraints sometimes returns incorrect size?

In the code below, CTFramesetterSuggestFrameSizeWithConstraints sometimes returns a CGSize with a height that is not big enough to contain all the text that is being passed into it. I did look at this answer. But in my case the width of the text box needs to be constant. Is there any other/better way to figure out the correct height for my attributed string? Thanks!
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attributedString);
CGSize tmpSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0,0), NULL, CGSizeMake(self.view.bounds.size.width, CGFLOAT_MAX), NULL);
CGSize textBoxSize = CGSizeMake((int)tmpSize.width + 1, (int)tmpSize.height + 1);
CTFramesetterSuggestFrameSizeWithConstraints works correctly. The reason that you get a height that is too short is because of the leading in the default paragraph style attached to attributed strings. If you don't attach a paragraph style to the string then CoreText returns the height needed to render the text, but with no space between the lines. This took me forever to figure out. Nothing in the documentation spells it out. I just happened to notice that my heights were short by an amount equal to (number of lines x expected leading). To get the height result you expect you can use code like the following:
NSString *text = @"This\nis\nsome\nmulti-line\nsample\ntext.";
UIFont *uiFont = [UIFont fontWithName:@"Helvetica" size:17.0];
CTFontRef ctFont = CTFontCreateWithName((CFStringRef) uiFont.fontName, uiFont.pointSize, NULL);
// When you create an attributed string the default paragraph style has a leading
// of 0.0. Create a paragraph style that will set the line adjustment equal to
// the leading value of the font.
CGFloat leading = uiFont.lineHeight - uiFont.ascender + uiFont.descender;
CTParagraphStyleSetting paragraphSettings[1] = { kCTParagraphStyleSpecifierLineSpacingAdjustment, sizeof (CGFloat), &leading };
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, 1);
CFRange textRange = CFRangeMake(0, text.length);
// Create an empty mutable string big enough to hold our test
CFMutableAttributedStringRef string = CFAttributedStringCreateMutable(kCFAllocatorDefault, text.length);
// Inject our text into it
CFAttributedStringReplaceString(string, CFRangeMake(0, 0), (CFStringRef) text);
// Apply our font and line spacing attributes over the span
CFAttributedStringSetAttribute(string, textRange, kCTFontAttributeName, ctFont);
CFAttributedStringSetAttribute(string, textRange, kCTParagraphStyleAttributeName, paragraphStyle);
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(string);
CFRange fitRange;
CGSize frameSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, textRange, NULL, bounds, &fitRange);
CFRelease(framesetter);
CFRelease(string);
CTFramesetterSuggestFrameSizeWithConstraints() is broken. I filed a bug on this a while back. Your alternative is to use CTFramesetterCreateFrame() with a path that is sufficiently high. Then you can measure the rect of the CTFrame that you get back. Note that you cannot use CGFLOAT_MAX for the height, as CoreText uses a flipped coordinate system from the iPhone and will locate its text at the "top" of the box. This means that if you use CGFLOAT_MAX, you won't have enough precision to actually tell the height of the box. I recommend using something like 10,000 as your height, as that's 10x taller than the screen itself and yet gives enough precision for the resulting rectangle. If you need to lay out even taller text, you can do this multiple times for each section of text (you can ask CTFrameRef for the range in the original string that it was able to lay out).
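The precision point above (using a height like 10,000 rather than CGFLOAT_MAX) is easy to demonstrate with plain IEEE-754 doubles; the arithmetic is shown in JavaScript here, since CGFloat is a 64-bit double on modern platforms:

```javascript
// Near the maximum double value, adjacent representable numbers are
// astronomically far apart, so subtracting a text offset is simply lost:
var hugeHeight = Number.MAX_VALUE;
console.log(hugeHeight - 100 === hugeHeight); // true: the 100-unit offset vanishes

// A modest frame height keeps full precision for the measured rectangle:
var saneHeight = 10000;
console.log(saneHeight - (saneHeight - 123.5)); // 123.5
```

This is why a frame laid out at the "top" of a CGFLOAT_MAX-tall box cannot be measured, while a 10,000-unit box works fine.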
Thanks to Chris DeSalvo for the excellent answer, which finally ended 16 hours of debugging! I had a little trouble figuring out the Swift 3 syntax, so here is the Swift 3 version of setting the paragraph style.
let leading = uiFont.lineHeight - uiFont.ascender + uiFont.descender
let paragraphStyle = NSMutableParagraphStyle()
paragraphStyle.lineSpacing = leading
mutableAttributedString.addAttribute(NSParagraphStyleAttributeName, value: paragraphStyle, range: textRange)
Working on this problem and following many different answers from several posters, I implemented a solution to the almighty problem of correct text size. For me, CTFramesetterSuggestFrameSizeWithConstraints is not working properly, so we need to use CTFramesetterCreateFrame and then measure the size of that frame. This is a UIFont extension, 100% Swift.
References
CTFramesetterSuggestFrameSizeWithConstraints sometimes returns incorrect size?
How to get the real height of text drawn on a CTFrame
Using CFArrayGetValueAtIndex in Swift with UnsafePointer (AUPreset)
func sizeOfString(string: String, constrainedToWidth width: Double) -> CGSize {
    let attributes = [NSAttributedString.Key.font: self]
    let attString = NSAttributedString(string: string, attributes: attributes)
    let framesetter = CTFramesetterCreateWithAttributedString(attString)
    // Use a tall but finite height so the frame can be measured with full precision.
    let frame = CTFramesetterCreateFrame(framesetter, CFRange(location: 0, length: 0),
        CGPath(rect: CGRect(x: 0, y: 0, width: width, height: 10000), transform: nil), nil)
    return UIFont.measure(frame: frame)
}
Then we measure our CTFrame
static func measure(frame: CTFrame) -> CGSize {
    let lines = CTFrameGetLines(frame)
    let numOfLines = CFArrayGetCount(lines)
    // Widest line determines the overall width.
    var maxWidth: Double = 0
    for index in 0..<numOfLines {
        let line: CTLine = unsafeBitCast(CFArrayGetValueAtIndex(lines, index), to: CTLine.self)
        var ascent: CGFloat = 0
        var descent: CGFloat = 0
        var leading: CGFloat = 0
        let width = CTLineGetTypographicBounds(line, &ascent, &descent, &leading)
        if width > maxWidth {
            maxWidth = width
        }
    }
    var ascent: CGFloat = 0
    var descent: CGFloat = 0
    var leading: CGFloat = 0
    CTLineGetTypographicBounds(unsafeBitCast(CFArrayGetValueAtIndex(lines, 0), to: CTLine.self), &ascent, &descent, &leading)
    let firstLineHeight = ascent + descent + leading
    CTLineGetTypographicBounds(unsafeBitCast(CFArrayGetValueAtIndex(lines, numOfLines - 1), to: CTLine.self), &ascent, &descent, &leading)
    let lastLineHeight = ascent + descent + leading
    var firstLineOrigin = CGPoint(x: 0, y: 0)
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 1), &firstLineOrigin)
    var lastLineOrigin = CGPoint(x: 0, y: 0)
    CTFrameGetLineOrigins(frame, CFRangeMake(numOfLines - 1, 1), &lastLineOrigin)
    // Distance between first and last line origins, plus both line heights.
    let textHeight = abs(firstLineOrigin.y - lastLineOrigin.y) + firstLineHeight + lastLineHeight
    return CGSize(width: maxWidth, height: Double(textHeight))
}
This was driving me batty: none of the solutions above worked for me when laying out long strings over multiple pages, and the returned range values had whole lines truncated out. After reading the API notes, the 'stringRange' parameter of CTFramesetterSuggestFrameSizeWithConstraints is described thus:
'The string range to which the frame size will apply. The string range is a range over the string that was used to create the framesetter. If the length portion of the range is set to 0, then the framesetter will continue to add lines until it runs out of text or space.'
After setting the string range to 'CFRangeMake(currentLocation, 0)' instead of 'CFRangeMake(currentLocation, string.length)', it all works perfectly.
After weeks of trying everything, in every combination possible, I made a breakthrough and found something that works. This issue seems to be more prominent on macOS than on iOS, but it appears on both.
What worked for me was to use a CATextLayer instead of an NSTextField (on macOS) or a UILabel (on iOS),
and to use boundingRect(with:options:context:) instead of CTFramesetterSuggestFrameSizeWithConstraints. Even though the latter is in theory lower level than the former, and I assumed it would be more precise, the game changer turns out to be NSString.DrawingOptions.usesDeviceMetrics.
The suggested frame size then fits like a charm.
Example:
let attributedString = NSAttributedString(string: "my string")
let maxWidth = CGFloat(300)
let size = attributedString.boundingRect(
    with: CGSize(width: maxWidth,
                 height: .greatestFiniteMagnitude),
    options: [
        .usesFontLeading,
        .usesLineFragmentOrigin,
        .usesDeviceMetrics],
    context: nil).size
let textLayer = CATextLayer()
textLayer.frame = .init(origin: .zero, size: size)
textLayer.contentsScale = 2 // for Retina displays
textLayer.isWrapped = true // for multiple lines
textLayer.string = attributedString
Then you can add the CATextLayer to any NSView/UIView.
macOS
let view = NSView()
view.wantsLayer = true
view.layer?.addSublayer(textLayer)
iOS
let view = UIView()
view.layer.addSublayer(textLayer)
I finally figured it out. When using the CGSize returned by CTFramesetterSuggestFrameSizeWithConstraints to draw text, the size is sometimes not big enough for the whole text, so the last line is not drawn. I just add 1 to size.height for the return value, and it appears to be right now.
Something like this:
suggestedSize = CGSizeMake(suggestedSize.width, suggestedSize.height + 1);