C# image resize - losing quality - c#-3.0

I have used a C# method to resize an image. I have tried several approaches from the following links:
C# Simple Image Resize : File Size Not Shrinking
Resize an Image C#
but the quality is lost when resizing the image.
Please help me resolve the issue.

You could use WIC (Windows Imaging Component) instead of GDI+. Here's a nice blog post with an example.

First convert your Bitmap to an Image
Bitmap b = new Bitmap("asdf.jpg");
Image i = (Image)b;
then pass the image to this method to resize it:
public static Image ResizeImage(Image image, int width, int height, bool onlyResizeIfWider)
{
    using (image)
    {
        // Rotate twice to force GDI+ to discard the image's internal thumbnail.
        image.RotateFlip(RotateFlipType.Rotate180FlipNone);
        image.RotateFlip(RotateFlipType.Rotate180FlipNone);

        if (onlyResizeIfWider && image.Width <= width)
        {
            width = image.Width;
        }

        // Scale the height to keep the aspect ratio; if it exceeds the limit,
        // resize by height instead.
        int newHeight = image.Height * width / image.Width;
        if (newHeight > height)
        {
            width = image.Width * height / image.Height;
            newHeight = height;
        }

        Image newImage = image.GetThumbnailImage(width, newHeight, null, IntPtr.Zero);
        return newImage;
    }
}
I hope this helps.

Related

Crop NSImage (macOS) Square without distorting

I am using the code below to resize an image in Swift on macOS. This is working but if the image is not square to begin with, the resizing squashes the image.
How can I resize the image but draw it in the center, keeping the aspect ratio and preventing the squashing when the image is not square to begin with?
func resize(image: NSImage, w: CGFloat, h: CGFloat) -> NSImage {
    let destSize = NSMakeSize(w, h)
    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.draw(in: NSMakeRect(0, 0, destSize.width, destSize.height),
               from: NSMakeRect(0, 0, image.size.width, image.size.height),
               operation: .sourceOver,
               fraction: 1.0)
    newImage.unlockFocus()
    newImage.size = destSize
    return NSImage(data: newImage.tiffRepresentation!)!
}
Thank you
The following line of code is the problem:
image.draw(in: NSMakeRect(0, 0, destSize.width, destSize.height), from: NSMakeRect(0, 0, image.size.width, image.size.height), operation: NSCompositingOperation.sourceOver, fraction: CGFloat(1))
You are drawing from the whole rectangle of the source into the whole rectangle of the destination. This will therefore scale the image to fit the destination, whereas you want to maintain aspect ratio. You need to decide how you want the final image to appear and adjust the source or destination rectangles accordingly.
For example, if you want to scale the result so that the whole image appears, you need to adjust either the width or height of the destination rectangle so they're in the same aspect ratio as the source.
Alternatively, if you want to crop the result, you need to adjust the width or height of the source rectangle, once again maintaining the same aspect ratio. You will also have to adjust the origin of the source rectangle if you want to crop (for example) the top and bottom.
Some of the other functions like draw(in:from:operation:fraction:) handle scaling for you, so depending on your needs, they may be better.
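As an illustration of the cropping variant (a sketch only; the function name and the decision to centre the crop are my own), you would shrink and re-centre the source rect before drawing:
func resizeCroppingToFill(image: NSImage, w: CGFloat, h: CGFloat) -> NSImage {
    let destSize = NSMakeSize(w, h)
    let destAspect = w / h
    let srcAspect = image.size.width / image.size.height

    // Start from the full source rect, then trim whichever dimension overflows,
    // keeping the crop centred.
    var srcRect = NSMakeRect(0, 0, image.size.width, image.size.height)
    if srcAspect > destAspect {
        // Source is proportionally wider: crop the left and right edges.
        srcRect.size.width = image.size.height * destAspect
        srcRect.origin.x = (image.size.width - srcRect.size.width) / 2
    } else {
        // Source is proportionally taller: crop the top and bottom edges.
        srcRect.size.height = image.size.width / destAspect
        srcRect.origin.y = (image.size.height - srcRect.size.height) / 2
    }

    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.draw(in: NSMakeRect(0, 0, destSize.width, destSize.height),
               from: srcRect,
               operation: .sourceOver,
               fraction: 1.0)
    newImage.unlockFocus()
    return newImage
}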
Assume that you want to fit the entire source image into the destination space (or size), maintaining the aspect ratio of the original image and adding some white space where the image doesn't fill the destination.
There are three situations to consider.
The aspect ratio of the source and destination are the same.
No problem, just resize.
The destination aspect ratio is wider than the source's.
Height is the driving value (because it is smaller).
Figure out the height ratio from source to destination.
Use that ratio to calculate the destination width from the source width.
The destination aspect ratio is taller than the source's.
Width is the driving value.
Figure out the width ratio from source to destination.
Use that ratio to calculate the destination height from the source height.
I created this function to calculate the size you need.
func calcNewSize(source: NSSize, destination: NSSize) -> NSSize {
    let widthRatio: Float = Float(destination.width) / Float(source.width)
    let heightRatio: Float = Float(destination.height) / Float(source.height)
    var newSize = NSSize()
    print("widthRatio \(widthRatio) heightRatio \(heightRatio)")

    if widthRatio == heightRatio {
        print("use same ratio")
        newSize = destination
    } else if widthRatio > heightRatio {
        print("use height ratio")
        newSize.height = source.height * CGFloat(heightRatio)
        newSize.width = source.width * CGFloat(heightRatio)
    } else {
        print("use width ratio")
        newSize.height = source.height * CGFloat(widthRatio)
        newSize.width = source.width * CGFloat(widthRatio)
    }
    return newSize
}
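A possible way to use calcNewSize (a sketch only; the wrapper name fitImage and the white background are my own assumptions) is to centre the scaled image inside the destination canvas:
func fitImage(_ image: NSImage, into destination: NSSize) -> NSImage {
    let scaledSize = calcNewSize(source: image.size, destination: destination)
    // Centre the scaled image, leaving white space where it does not fill the canvas.
    let origin = NSMakePoint((destination.width - scaledSize.width) / 2,
                             (destination.height - scaledSize.height) / 2)

    let result = NSImage(size: destination)
    result.lockFocus()
    NSColor.white.setFill()
    NSBezierPath(rect: NSMakeRect(0, 0, destination.width, destination.height)).fill()
    image.draw(in: NSMakeRect(origin.x, origin.y, scaledSize.width, scaledSize.height),
               from: NSMakeRect(0, 0, image.size.width, image.size.height),
               operation: .sourceOver,
               fraction: 1.0)
    result.unlockFocus()
    return result
}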

Scaling an image OSX Swift

I'm currently trying to scale an image using Swift. This shouldn't be a difficult task, since I implemented a scaling solution in C# in 30 minutes; however, I've been stuck for 2 days now.
I've tried googling and crawling through Stack posts, but to no avail. The two main solutions I have seen people use are:
A function written in Swift to resize an NSImage proportionately
and
resizeNSImage.swift
An Obj-C implementation of the above link
I would prefer to use the most efficient/least CPU-intensive solution, which according to my research is option 2. Because option 2 uses NSImage.lockFocus() and NSImage.unlockFocus(), the image scales fine on non-Retina Macs, but doubles the scaled size on Retina Macs. I know this is due to the pixel density of Retina Macs and is to be expected, but I need a scaling solution that ignores HiDPI specifications and just performs a normal scale operation.
This led me to do more research into option 1. It seems like a sound function, however it literally doesn't scale the input image, and then doubles the file size when I save the returned image (presumably due to pixel density). I found another Stack post with someone having the exact same problem as I am, using the exact same implementation (found here). Of the two suggested answers, the first one doesn't work, and the second is the other implementation I've been trying to use.
If people could post Swift-ified answers, as opposed to Obj-C, I'd appreciate it very much!
EDIT:
Here's a copy of my implementation of the first solution - I've divided it into 2 functions:
func getSizeProportions(oWidth: CGFloat, oHeight: CGFloat) -> NSSize {
    var ratio: Float = 0.0
    let imageWidth = Float(oWidth)
    let imageHeight = Float(oHeight)
    var maxWidth = Float(0)
    var maxHeight = Float(600)

    if maxWidth == 0 {
        maxWidth = imageWidth
    }
    if maxHeight == 0 {
        maxHeight = imageHeight
    }

    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }

    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    return NSMakeSize(CGFloat(newWidth), CGFloat(newHeight))
}
func resizeImage(image: NSImage) -> NSImage {
    print("original: ", image.size.width, image.size.height)

    // Cast the NSImage to a CGImage
    var imageRect: CGRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)

    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: CGFloat(450), height: CGFloat(600))
    //let newSize = getSizeProportions(oWidth: CGFloat(image.size.width), oHeight: CGFloat(image.size.height))

    // Create NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    print("scaled: ", imageWithNewSize.size.width, imageWithNewSize.size.height)

    return NSImage(data: imageWithNewSize.tiffRepresentation!)!
}
EDIT 2:
As pointed out by Zneak, I need to save the returned image to disk. Using both implementations, my save function writes the file to disk successfully. Although I don't think my save function could be interfering with my current resizing implementation, I've attached it anyway just in case:
func saveAction(image: NSImage, url: URL) {
    if let tiffdata = image.tiffRepresentation,
       let bitmaprep = NSBitmapImageRep(data: tiffdata) {
        let props = [NSImageCompressionFactor: Appearance.imageCompressionFactor]
        if let bitmapData = NSBitmapImageRep.representationOfImageReps(in: [bitmaprep], using: .JPEG, properties: props) {
            let path: NSString = "~/Desktop/out.jpg"
            let resolvedPath = path.expandingTildeInPath
            try! bitmapData.write(to: URL(fileURLWithPath: resolvedPath), options: [])
            print("Your image has been saved to \(resolvedPath)")
        }
    }
}
To anyone else experiencing this problem: I spent countless hours trying to find a way to do this, and ended up just getting the scaling factor of the screen (1 for normal Macs, 2 for Retina). The code looks like this:
func getScaleFactor() -> CGFloat {
    return NSScreen.main()!.backingScaleFactor
}
Then, once you have the scale factor, you either scale normally or halve the dimensions for Retina:
if scaleFactor == 2 {
    // Halve size proportions for saving on Retina Macs
    return NSMakeSize(CGFloat(oWidth * ratio) / 2, CGFloat(oHeight * ratio) / 2)
} else {
    return NSMakeSize(CGFloat(oWidth * ratio), CGFloat(oHeight * ratio))
}

I need help integrating a specific UIImage resizing extension into my current draw CGRect function

I found this extension online; it allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale fill" regardless of what the content mode of the image view is. I suspect the reason for this is that I draw the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using the extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fit within a bounds with a size governed by the passed size. Also keeps the aspect ratio.
    /// Switch MIN to MAX for aspect fill instead of fit.
    ///
    /// - parameter newSize: newSize the size of the bounds the image must fit within.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero

        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)

        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0

        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so it can be saved to the camera roll (it combines two images, a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
All the tutorials I've found on how to use extensions don't cover how to pass in and out variables like this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of using just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage")
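For reference, a minimal sketch of that adjustment inside the draw function might look like this (reusing the question's variable names; treating the image view's bounds as the target size is an assumption):
func drawImagesAndText() {
    let renderSize = CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height)
    let renderer = UIGraphicsImageRenderer(size: renderSize)
    img = renderer.image { ctx in
        // Scale the photo first so it keeps its aspect ratio instead of being
        // stretched to the image view's bounds, then draw the frame on top.
        let scaledBackground = currentImage?.scaleImageToSize(newSize: renderSize)
        scaledBackground?.draw(in: CGRect(origin: .zero, size: renderSize))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(origin: .zero, size: renderSize))
    }
}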

BlackBerry WebWorks 10: using an image from a picture taken/uploaded

I am stuck with an issue and I have no idea what to try next.
In my application I offer the user a choice to either take a photo or choose one from the gallery. This part works fine; the problem arises with the saving/reading of the photo. Let's take this from the camera-invoke perspective.
function startCameraApp() {
    PhotoTaken = false;
    blackberry.event.addEventListener("onChildCardClosed", sharePhoto);
    blackberry.invoke.invoke({
        target: "sys.camera.card"
    }, onInvokeSuccess, onInvokeError);
}
and in sharePhoto I have the following code...
function sharePhoto(request) {
    var image = new Image();
    image.src = "file://" + request.data;
    image.onload = function () {
        // Now here I need to read the image file and convert it into base64.
        var resized = resizeMe(image); // the resizeMe function is given below and it simply makes my image smaller
        var imagedata = resized.split("base64,");
        sessionStorage.setItem("MyNewPicture", imagedata);
    };
}
function resizeMe(img) {
    var canvas = document.createElement('canvas');
    var max_width = 600;
    var max_height = 600;
    var width = img.width;
    var height = img.height;

    // calculate the width and height, constraining the proportions
    if (width > height) {
        if (width > max_width) {
            height = Math.round(height * max_width / width);
            width = max_width;
        }
    } else {
        if (height > max_height) {
            width = Math.round(width * max_height / height);
            height = max_height;
        }
    }

    // resize the canvas and draw the image data into it
    img.width = width;
    img.height = height;
    canvas.width = width;
    canvas.height = height;
    canvas.className += "ui-hidden";

    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0, width, height);
    return canvas.toDataURL();
}
So the app runs and it takes the photo, and everything seems fine, yet the data uploaded to local storage is simply blank. It works 100% in the BlackBerry 10 simulator, but not on my device. On the device it saves an empty string.
Edit
OK, so I added this to my function for testing purposes and I am still stuck and don't know what to do...
function sharePhoto(request) {
    var image = new Image();
    image.src = "file://" + request.data;
    image.onload = function () {
        // Now here I need to read the image file and convert it into base64.
        var resized = resizeMe(image); // the resizeMe function is given below and it simply makes my image smaller
        var imagedata = resized.split("base64,");
        alert(imagedata); // This returns a blank popup
        sessionStorage.setItem("MyNewPicture", imagedata);
    };
}
I believe when you use the split method it returns an array, so you'd access it like this:
var resized = resizeMe(image);
var imagedata = resized.split("base64,");
imagedata = imagedata[1]; // this gives you everything after the 'base64,' string
However, the main problem I see is that you're splitting the imagedata string, which removes the whole 'this is an image' prefix from the data.
When you display the imagedata as an image, you need to include the data:image/jpeg;base64, prefix as well.
So, that being said, your image's source would be:
data:image/jpeg;base64,<rest of base64 string here>
I needed to include extra access elements in my application's config.xml:
<access subdomains="true" uri="file://accounts/"></access>
<access subdomains="true" uri="file://accounts/100/shared/camera/"></access>
This gives your application access to these folders and the files they contain. The simulator does not require this permission.

How to resize an image with an ImageBox in EmguCV?

I have an image in an image folder.
How do I resize the image with an ImageBox in EmguCV when opening the image?
Thanks.
// To get the original image from the OpenFileDialog
Image<Bgr, Byte> captureImage = new Image<Bgr, byte>(openImageFileDialog.FileName);
// To resize the image
Image<Bgr, byte> resizedImage = captureImage.Resize(width, height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR);
Hope it helps.
The answer is very simple.
Let us suppose the path of the image is "C:\image.jpg".
Mat frame = new Mat(); // Declaration
string path = @"C:\image.jpg";
int width = 640, height = 480;
frame = CvInvoke.Imread(path, LoadImageType.AnyColor);
CvInvoke.Resize(frame, frame, new Size(imagebox1.Width, imagebox1.Height), 0, 0, Inter.Linear); // This resizes the image to the size of imagebox1
OR
CvInvoke.Resize(frame, frame, new Size(width, height), 0, 0, Inter.Linear); // This resizes the image to your specified width and height
This is how I resized an image using EmguCV:
Bitmap bitmap = new Bitmap(FileUpload1.PostedFile.InputStream);
Image<Hsv, Byte> Iimage = new Image<Hsv, byte>(bitmap);
Image<Hsv, Byte> HsvImage = Iimage.Resize(384, 256, INTER.CV_INTER_LINEAR);