Draw on an image programmatically - iPhone

I'm having trouble with my bingo application. I'm trying to draw a circle programmatically whenever a number is picked by my number generator.
I tried the code below, which draws the circle on an image and then saves it to the documents directory. I also have a load method that loads the saved image into the view when I call it.
//draw
- (void)draw
{
    UIImage *image = [UIImage imageNamed:@"GeneralBingoResult.png"];
    UIImage *imageWithCircle1 = [self imageByDrawingCircleOnImage1:image];
    // save it to the documents directory
    NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                   NSUserDomainMask, YES) lastObject];
    NSString *filePath = [documentsPath stringByAppendingPathComponent:@"Output.png"];
    NSData *imageData = UIImagePNGRepresentation(imageWithCircle1);
    [imageData writeToFile:filePath atomically:YES];
    NSLog(@"Saved new image to %@", filePath);
    UIImage *image1 = [self loadImage];
    [imageToDisplay setImage:image1];
}
//draws the circle on the image
- (UIImage *)imageByDrawingCircleOnImage1:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);
    // draw the original image into the context
    [image drawAtPoint:CGPointZero];
    // get the context for Core Graphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // set the stroking color
    [[UIColor redColor] setStroke];
    // make the circle rect and inset it 5 px from its border
    CGRect circleRect = CGRectMake(420, 40, 90, 90);
    circleRect = CGRectInset(circleRect, 5, 5);
    // draw the circle
    CGContextStrokeEllipseInRect(ctx, circleRect);
    // make an image out of the bitmap context
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    // free the context
    UIGraphicsEndImageContext();
    return retImage;
}
//loads from the documents directory
- (UIImage *)loadImage
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *path = [documentsDirectory stringByAppendingPathComponent:@"Output.png"];
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    return image;
}
I successfully draw a circle on my image, but my problem is this: after I save the image with the circle to the documents directory, I want to load that saved image and draw on it again, so circles accumulate the way they would in a real bingo app.
Example:
First, number 7 is picked in the number generator.
Output: (screenshot)
Next, number 55 is picked, which adds another circle over that number.
Output: (screenshot)
By the way, I'm using a UIScrollView, and I'm doing this in scrollViewDidEndScrollingAnimation:. Thanks.
I also tried the code below, but it only shows one circle every time the UIScrollView stops.
- (void)scrollViewDidEndScrollingAnimation:(UIScrollView *)scrollview {
    if ([[images objectAtIndex:index] intValue] == 1) {
        [circle setFrame:CGRectMake(420, 40, 90, 90)];
        [self.view addSubview:circle];
    } else if ([[images objectAtIndex:index] intValue] == 2) {
        [circle setFrame:CGRectMake(460, 40, 90, 90)];
        [self.view addSubview:circle];
    }
}

Your easiest option is to create an image in Photoshop or another similar program which is the required size and has a transparent background. Save this as a .png file. Then, when you want to add the circles over the board, you just have to add a new UIImageView over the correct location in the grid:
UIImageView *circle = [[UIImageView alloc] initWithFrame:CGRectMake(xLocation, yLocation, myCircleWidth, myCircleHeight)];
circle.image = [UIImage imageNamed:@"myCircle.png"];
[self.view addSubview:circle];
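If the board cells form a regular grid, xLocation and yLocation can be computed from the picked cell's row and column. Here is a minimal sketch; the board origin and cell size constants are placeholders, not values from the question:
static const CGFloat kBoardOriginX = 20.0f;   // assumed x of the board's top-left cell
static const CGFloat kBoardOriginY = 40.0f;   // assumed y of the board's top-left cell
static const CGFloat kCellSize     = 90.0f;   // assumed width/height of one cell

- (void)markCellAtRow:(NSInteger)row column:(NSInteger)col
{
    CGRect cellFrame = CGRectMake(kBoardOriginX + col * kCellSize,
                                  kBoardOriginY + row * kCellSize,
                                  kCellSize, kCellSize);
    UIImageView *circle = [[UIImageView alloc] initWithFrame:cellFrame];
    circle.image = [UIImage imageNamed:@"myCircle.png"];
    [self.view addSubview:circle];
}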
An alternate option is to create a view that inherits from UIView. You can add methods to this class which allow you to set locations as circled. Then inside the drawRect code you just draw circles over all the locations which have been marked.
- (void)drawRect:(CGRect)rect {
    // Circle your marked locations here
}
The final step in this case is to add your new view over your original view which contains the BINGO board.
EDIT:
Here is extended sample code for doing a view overlay. First, OverlayView.h:
@interface OverlayView : UIView {
}
- (void)clearPoints;
- (void)addPointX:(int)whichX Y:(int)whichY;
@end
Note that I use a C++ std::vector here, so this goes in the OverlayView.mm file:
#import "OverlayView.h"
#import <UIKit/UIKit.h>
#include <vector>
@implementation OverlayView
// NOTE: declared at file scope, so this vector is shared by all OverlayView instances.
std::vector<int> points;
- (id)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // Initialization code
        [self setOpaque:NO];
        [self setUserInteractionEnabled:NO];
    }
    return self;
}
- (void)drawRect:(CGRect)rect {
    if (points.size() == 0)
        return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 15.0);
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 0.5);
    // `width` and `resolution` are values you define for your own layout (see the note
    // below): `width` is used to pack (x, y) into one int, `resolution` is the circle diameter.
    for (size_t x = 0; x < points.size(); x++)
        CGContextStrokeEllipseInRect(context, CGRectMake(points[x] % width - resolution / 2,
                                                         points[x] / width - resolution / 2,
                                                         resolution, resolution));
}
- (void)dealloc {
    //printf("Deallocating OverlayView\n");
    [super dealloc];
}
- (void)clearPoints {
    points.resize(0);
    [self setNeedsDisplay];
}
- (void)addPointX:(int)whichX Y:(int)whichY {
    points.push_back(whichX + whichY * width);
    [self setNeedsDisplay];
    printf("Adding point (%d,%d)\n", whichX, whichY);
}
@end
You just need to add this view inside the same view as your board. Call the addPointX:Y: method to add a circle centered at that point. You'll need to define the resolution and width of your view yourself.
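As a rough usage sketch (the frame, the boardImageView name, and the point coordinates below are assumptions, not values from the question):
// In the view controller that owns the board:
OverlayView *overlay = [[OverlayView alloc] initWithFrame:boardImageView.frame]; // boardImageView is hypothetical
[self.view addSubview:overlay];
// Later, when a number is picked, circle the centre of its cell:
[overlay addPointX:465 Y:85];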

Just paste it over the top of your bingo board via a UIImageView at the correct coordinates (the bingo boards are all the same size, I assume? Should be easy to calculate). Or am I missing what you are trying to do?
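For instance, on a standard 75-ball card (a 5×5 grid where each column covers a block of 15 numbers) the circled cell could be found roughly like this; treat the whole mapping, including findRowForNumber: and the layout variables, as assumptions about your board:
int column = (pickedNumber - 1) / 15;            // B = 0, I = 1, N = 2, G = 3, O = 4
int row = [self findRowForNumber:pickedNumber];  // hypothetical lookup into your card data
CGFloat x = boardOrigin.x + column * cellSize;
CGFloat y = boardOrigin.y + row * cellSize;
circleView.frame = CGRectMake(x, y, cellSize, cellSize);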

Just give this a try:
UIImage *image1 = [UIImage imageNamed:@"image1.png"];
UIImage *image2 = [UIImage imageNamed:@"image2.png"];
CGSize newSize = CGSizeMake(width, height);   // width/height: the size you want the merged image to be
UIGraphicsBeginImageContext(newSize);
[image1 drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
[image2 drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
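Applied to the original question, the same draw-into-a-context idea gives a rough accumulate-and-resave loop. This is only a sketch built from the question's own file names and drawing code, not part of this answer:
- (void)markWithCircleInRect:(CGRect)circleRect
{
    NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                   NSUserDomainMask, YES) lastObject];
    NSString *filePath = [documentsPath stringByAppendingPathComponent:@"Output.png"];
    // Start from the previously saved image if there is one, otherwise from the blank board.
    UIImage *base = [UIImage imageWithContentsOfFile:filePath];
    if (base == nil)
        base = [UIImage imageNamed:@"GeneralBingoResult.png"];
    UIGraphicsBeginImageContext(base.size);
    [base drawAtPoint:CGPointZero];
    [[UIColor redColor] setStroke];
    CGContextStrokeEllipseInRect(UIGraphicsGetCurrentContext(), CGRectInset(circleRect, 5, 5));
    UIImage *marked = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Save it back so the next pick draws on top of this result.
    [UIImagePNGRepresentation(marked) writeToFile:filePath atomically:YES];
    [imageToDisplay setImage:marked];   // imageToDisplay as in the question
}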

If you want animations like a bingo app, then you can use the Cocos2D framework to build your app.

Related

Issue with attaching an Image file with mail

I am trying to capture a view as an image and then attach that image to an e-mail, but the problem is that after capturing the view a white border appears around the image. This happens only on an iPhone 5 device. Here is my code:
Sharing.m
- (void)mailAttachmentWithImage:(UIView *)view openInView:(UIViewController *)viewCont {
    MFMailComposeViewController *controller = [[MFMailComposeViewController alloc] init];
    controller.mailComposeDelegate = self;
    UIView *captureView = view;
    captureView.backgroundColor = [UIColor clearColor];
    /* Capture the screenshot at native resolution */
    UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, NO, 0.0);
    [captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    /* Render the screenshot at a custom resolution */
    CGRect cropRect = CGRectMake(0, 0, 1024, 1024);
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, 1.0f);
    [screenshot drawInRect:cropRect];
    UIImage *customScreenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *myData = UIImagePNGRepresentation(customScreenShot);
    [controller addAttachmentData:myData mimeType:@"image/png" fileName:@"Image"];
    [viewCont presentViewController:controller animated:YES completion:nil];
}
ViewController.m
and then capture view :
- (IBAction)mail:(id)sender {
    [shareIt mailAttachmentWithImage:_captureView openInView:self];
}
You need to capture an image of the image view instead of the whole UIView; your posted answer isn't enough to help others. With the code below you can capture just your image view's frame:
- (void)imageWithView:(UIView *)view
{
    CGRect rect = CGRectMake(imgview.frame.origin.x, imgview.frame.origin.y,
                             imgview.frame.size.width, imgview.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 1.0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Crop the screenshot down to the image view's frame
    CGImageRef imageRef = CGImageCreateWithImageInRect([screenshot CGImage], rect);
    UIImage *croppedScreenshot = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *savedImagePath = [documentsDirectory stringByAppendingPathComponent:@"savedImage.png"];
    NSData *imageData = UIImagePNGRepresentation(croppedScreenshot);
    // controller / viewCont are the mail composer and presenting controller from the method above
    [controller addAttachmentData:imageData mimeType:@"image/png" fileName:@"Image"];
    [viewCont presentViewController:controller animated:YES completion:nil];
}
The captured image won't contain the border, so you can attach it to the e-mail without the border.
Problem solved! It was because my view was bigger than the targeted image size.

creating a UIImage from a UITableView

Is there a way to create a UIImage out of a UITableView?
I know about this piece of code that will draw a UIImage out of a given UIView:
- (UIImage *)makeImageOutOfView:(UIView *)view {
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
but it only creates an image the size of the table's frame. I would like to create an image that shows the whole table's content. Another problem is that, besides the image being limited to the table's frame, creating the image after scrolling gave very weird results: only the cells that were visible out of the first six were shown in the image, and the other visible cells were not drawn.
EDIT - I want to create an image drawn from the content of the table view, not to set the table view's background to display an image.
I just created a demo for you and hope it helps. You can capture the table image like this:
- (IBAction)savebutn:(id)sender
{
    [tbl reloadData];
    CGRect frame = tbl.frame;
    frame.size.height = tbl.contentSize.height;
    tbl.frame = frame;
    UIGraphicsBeginImageContext(tbl.bounds.size);
    [tbl.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *saveImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(saveImage);
    NSFileManager *fileMan = [NSFileManager defaultManager];
    NSString *fileName = [NSString stringWithFormat:@"%d.png", 1];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *pdfFileName = [documentsDirectory stringByAppendingPathComponent:fileName];
    [fileMan createFileAtPath:pdfFileName contents:imageData attributes:nil];
}
Here is a screenshot of the captured image: it contains the full table, from the top cell to the bottom. You can download the demo here:
http://www.sendspace.com/file/w48sod
One thing you can do is create a UIView, add all the cells you want in the image one below another, and finally convert that UIView into an image using the code you have. I think that should work; see the sketch below.
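A rough sketch of that idea, assuming a single section and a fixed row height (both assumptions, not details from the question):
- (UIImage *)imageFromTableView:(UITableView *)tableView
{
    NSInteger rows = [tableView numberOfRowsInSection:0];
    CGFloat rowHeight = tableView.rowHeight;
    CGSize fullSize = CGSizeMake(tableView.bounds.size.width, rows * rowHeight);
    UIGraphicsBeginImageContextWithOptions(fullSize, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (NSInteger row = 0; row < rows; row++) {
        // Ask the data source for each cell and render it at its vertical offset.
        NSIndexPath *path = [NSIndexPath indexPathForRow:row inSection:0];
        UITableViewCell *cell = [tableView.dataSource tableView:tableView cellForRowAtIndexPath:path];
        cell.frame = CGRectMake(0, 0, fullSize.width, rowHeight);
        [cell layoutIfNeeded];
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, 0, row * rowHeight);
        [cell.layer renderInContext:ctx];
        CGContextRestoreGState(ctx);
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}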
- (UIImage *)imageWithTableView:(UITableView *)tableView {
    UIImage *image = nil;
    UIGraphicsBeginImageContextWithOptions(tableView.contentSize, NO, 0.0);
    CGPoint savedContentOffset = tableView.contentOffset;
    CGRect savedFrame = tableView.frame;
    tableView.contentOffset = CGPointZero;
    tableView.frame = CGRectMake(0, 0, tableView.contentSize.width, tableView.contentSize.height);
    [tableView.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    tableView.contentOffset = savedContentOffset;
    tableView.frame = savedFrame;
    UIGraphicsEndImageContext();
    return image;
}
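Typical usage would then be something like this (somePath is a placeholder for your own destination):
UIImage *tableImage = [self imageWithTableView:self.tableView];
NSData *pngData = UIImagePNGRepresentation(tableImage);
[pngData writeToFile:somePath atomically:YES];   // somePath: wherever you want to store the result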

Stuttering when Drawing to Image in GCD Block

I have the following method which takes some CAShapeLayers and converts them into a UIImage. The UIImage is used in a cell for a UITableView (much like the photos app when you select a photo from one of your libraries). This method is called from within a GCD block:
- (UIImage *)imageAtIndex:(NSUInteger)index
{
    Graphic *graphic = [[Graphic graphicWithType:index] retain];
    CALayer *layer = [[CALayer alloc] init];
    layer.bounds = CGRectMake(0, 0, graphic.size.width, graphic.size.height);
    layer.shouldRasterize = YES;
    layer.anchorPoint = CGPointZero;
    layer.position = CGPointMake(0, 0);
    for (int i = 0; i < [graphic.shapeLayers count]; i++)
    {
        [layer addSublayer:[graphic.shapeLayers objectAtIndex:i]];
    }
    CGFloat largestDimension = MAX(graphic.size.width, graphic.size.height);
    CGFloat maxDimension = self.thumbnailDimension;
    CGFloat multiplicationFactor = maxDimension / largestDimension;
    CGSize graphicThumbnailSize = CGSizeMake(multiplicationFactor * graphic.size.width,
                                             multiplicationFactor * graphic.size.height);
    layer.sublayerTransform = CATransform3DScale(layer.sublayerTransform,
                                                 graphicThumbnailSize.width / graphic.size.width,
                                                 graphicThumbnailSize.height / graphic.size.height, 1);
    layer.bounds = CGRectMake(0, 0, graphicThumbnailSize.width, graphicThumbnailSize.height);
    UIGraphicsBeginImageContextWithOptions(layer.bounds.size, NO, 0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();
    [layer release];
    [graphic release];
    return [image autorelease];
}
For whatever reason, when I'm scrolling the UITableView and loading the images in, it stutters a little. I know the GCD code is fine because it has worked previously, so something in this code appears to be causing the stuttering. Does anyone know what that could be? Is CAAnimation not thread safe? Or does anyone know a better way to take a bunch of CAShapeLayers and convert them into a UIImage?
In the end I believe that
[layer renderInContext:UIGraphicsGetCurrentContext()];
cannot be done on a separate thread, so I had to do the following:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// draw the mutable paths of the CAShapeLayers into the context
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
There is a great example of this (where I learned to do it) in the WWDC2012 video "Building Concurrent User Interfaces on iOS"
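The "draw the paths yourself" step above isn't shown; here is a rough sketch of how it might look, assuming each CAShapeLayer in the question's graphic.shapeLayers carries its own path, fill and stroke:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
for (CAShapeLayer *shape in graphic.shapeLayers) {
    if (shape.fillColor) {
        CGContextAddPath(context, shape.path);
        CGContextSetFillColorWithColor(context, shape.fillColor);
        CGContextFillPath(context);
    }
    if (shape.strokeColor) {
        // Filling consumes the path, so add it again before stroking.
        CGContextAddPath(context, shape.path);
        CGContextSetStrokeColorWithColor(context, shape.strokeColor);
        CGContextSetLineWidth(context, shape.lineWidth);
        CGContextStrokePath(context);
    }
}
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();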

Taking a picture from the camera and show it in a UIImageView

I have a view with some fields (name, price, category) and a segmented control, plus a button to take a picture.
If I try this on the simulator (no camera) it works properly: I can select the image from the camera roll, edit it, and go back to the view, which shows all the fields with their contents.
But on my iPhone, when I select the image after editing and go back to the view, all the fields are empty except for the UIImageView. I also tried saving the contents of the fields in variables and putting them back in viewWillAppear, but the app crashes.
I'm starting to think that maybe there is something wrong with the methods below.
EDIT
I found the solution here. I defined a new method on the UIImage class (follow the link for more information). Then I adjusted the frame of the UIImageView so that it adapts to the new dimensions, in landscape or portrait.
- (IBAction)takePhoto:(id)sender {
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        self.imgPicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        self.imgPicker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModePhoto;
    } else {
        imgPicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
    }
    [self presentModalViewController:self.imgPicker animated:YES];
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [picker dismissModalViewControllerAnimated:YES];
    NSDate *date = [NSDate date];
    NSString *photoName = [dateFormatter stringFromDate:date];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    imagePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.png", photoName]];
    UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ---------RESIZE CODE--------- //
    if (picture.size.width == 1936) {
        picture = [picture scaleToSize:CGSizeMake(480.0f, 720.0f)];
    } else {
        picture = [picture scaleToSize:CGSizeMake(720.0f, 480.0f)];
    }
    // --------END RESIZE CODE-------- //
    photoPreview.image = picture;
    // ---------FRAME CODE--------- //
    photoPreview.contentMode = UIViewContentModeScaleAspectFit;
    CGRect frame = photoPreview.frame;
    if (picture.size.width == 480) {
        frame.size.width = 111.3;
        frame.size.height = 167;
    } else {
        frame.size.width = 167;
        frame.size.height = 111.3;
    }
    photoPreview.frame = frame;
    // --------END FRAME CODE-------- //
    NSData *webData = UIImagePNGRepresentation(picture);
    CGImageRelease([picture CGImage]);
    [webData writeToFile:imagePath atomically:YES];
    imgPicker = nil;
}
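The scaleToSize: helper used above comes from the linked answer and isn't reproduced here; as an assumption about what it does, a minimal UIImage category along these lines would fit:
@interface UIImage (Scale)
- (UIImage *)scaleToSize:(CGSize)size;
@end

@implementation UIImage (Scale)
- (UIImage *)scaleToSize:(CGSize)size
{
    // Redraw the image into a context of the target size.
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}
@end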
Now I have a new issue: if I take a picture in landscape and then try to take another one in portrait, the app crashes. Do I have to release something?
I had the same issue: there is no edited image when using the camera, so you must use the original image:
originalimage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
if ([editingInfo objectForKey:UIImagePickerControllerMediaMetadata]) {
    // test to check that the camera was used
    // especially since I found out that you then have to rotate the photo
    ...
If it was cropped when using the album, you have to re-crop it of course:
if ([editingInfo objectForKey:UIImagePickerControllerCropRect] != nil) {
    CGRect cropRect = [[editingInfo objectForKey:UIImagePickerControllerCropRect] CGRectValue];
    CGImageRef imageRef = CGImageCreateWithImageInRect([originalimage CGImage], cropRect);
    chosenimage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
} else {
    chosenimage = originalimage;
}
The crop rect info is also present in camera mode; you need to decide how you want it to behave.
To crop the image, I think this may help you:
UIImage *croppedImage = [self imageByCropping:photo.image toRect:tempview.frame];
CGSize size = CGSizeMake(croppedImage.size.height, croppedImage.size.width);
UIGraphicsBeginImageContext(size);
CGPoint pointImg1 = CGPointMake(0, 0);
[croppedImage drawAtPoint:pointImg1];
[[UIImage imageNamed:appDelegete.strImage] drawInRect:CGRectMake(0, 532, 150, 80)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
croppedImage = result;
UIImageView *mainImageView = [[UIImageView alloc] initWithImage:croppedImage];
CGRect clippedRect = CGRectMake(0, 0, croppedImage.size.width, croppedImage.size.height);
CGFloat scaleFactor = 0.5;
UIGraphicsBeginImageContext(CGSizeMake(croppedImage.size.width * scaleFactor,
                                       croppedImage.size.height * scaleFactor));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextClipToRect(currentContext, clippedRect);
// this will automatically scale any CGImage down/up to the required thumbnail side (length)
// when the CGImage gets drawn into the context on the next line of code
CGContextScaleCTM(currentContext, scaleFactor, scaleFactor);
[mainImageView.layer renderInContext:currentContext];
appDelegete.appphoto = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Aligning Background Within a CALayer

I'm trying to create a UIImage from an image context and set that as the contents of a CALayer (the image will end up split up and spread across multiple CALayers). The only problem is, it doesn't seem to be aligning correctly - it's offset by 30 pixels or so.
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        number = 1;
        background = [UIImage imageNamed:@"numberBackground.png"];
        // draw images
        UIGraphicsBeginImageContext(CGSizeMake(background.size.width, background.size.height));
        [background drawAtPoint:CGPointMake(0, 0)];
        NSString *numberString = [[NSString alloc] initWithString:[NSString stringWithFormat:@"%d", number]];
        [[UIColor whiteColor] set];
        [numberString drawAtPoint:CGPointMake(0, 0) withFont:[UIFont fontWithName:@"HelveticaNeue" size:128]];
        UIImage *front = [UIGraphicsGetImageFromCurrentImageContext() retain];
        UIGraphicsEndImageContext();
        // top layer
        topLayer = [CALayer layer];
        [topLayer setContents:(id)front.CGImage];
        [topLayer setContentsGravity:@"kCAGravityTop"];
        [topLayer setFrame:CGRectMake(0, 0, background.size.width, background.size.height / 2)];
        [topLayer setAnchorPoint:CGPointMake(0, 1)];
        [topLayer setPosition:CGPointMake(0, background.size.height / 2)];
        [topLayer setMasksToBounds:NO];
        [topLayer setBackgroundColor:[UIColor lightGrayColor].CGColor];
        [self.layer addSublayer:topLayer];
    }
    return self;
}
I don't have enough reputation points to post an image yet, but here's how the layer renders, as well as how it should render and the full image: link text
Found my mistake - setContentsGravity should take the built-in string constant kCAGravityTop, not my NSString literal @"kCAGravityTop".
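That is, the line above becomes:
[topLayer setContentsGravity:kCAGravityTop];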