GWT - constructing a new ImageData / Bitmap

I wish to create a new, empty bitmap, manually draw on it, and only then draw it onto a canvas.
This bitmap should not be based on any existing image or existing canvas.
Is there a way to create such a bitmap in GWT? The best solution I can find is creating a dummy canvas, then getting its ImageData through context2d. I can hardly believe that this is the right way to do this.
Any help would be appreciated. Thanks!

Take care: due to JRE class limitations, such code will not run on Google App Engine.
JRE GAE whitelist
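For reference, the dummy-canvas approach described in the question can at least be written compactly, since GWT's Context2d can create a blank ImageData directly. A sketch, assuming GWT 2.x's com.google.gwt.canvas package (realCanvas is a placeholder for the canvas you actually display):

```java
// Sketch (GWT 2.x Canvas API): create a blank ImageData with no source image.
Canvas scratch = Canvas.createIfSupported();      // detached canvas, never attached to the DOM
Context2d ctx = scratch.getContext2d();
ImageData bitmap = ctx.createImageData(256, 256); // blank, fully transparent pixels

// Draw on it manually, pixel by pixel (here: a red diagonal line).
for (int x = 0; x < 256; x++) {
    bitmap.setRedAt(255, x, x);
    bitmap.setAlphaAt(255, x, x);
}

// Later, blit it onto the real canvas.
Context2d target = realCanvas.getContext2d();
target.putImageData(bitmap, 0, 0);
```

The scratch canvas is only needed to obtain a Context2d; it never has to be rendered.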

Related

Draw on an image in iPhone app

I'm fairly new to iPhone app development. I'm trying to make a scrapbook app (just for fun), but I can't figure out how to draw on, and add pictures to, my template, which is itself an image file (shown in a UIView). Should I save my template as a PDF? Essentially, I don't know how to add things to my template - for example, draw shapes on it, add a text box, add a picture, etc.
Please help!
No PDF needed. Check out Quartz 2D:
https://developer.apple.com/library/ios/#documentation/graphicsimaging/conceptual/drawingwithquartz2d/Introduction/Introduction.html
You create a context (e.g., via CGBitmapContextCreate), add your bitmap (e.g., via CGContextDrawImage), draw text (e.g., via CGContextShowTextAtPoint), etc.
Review the Quartz 2D introduction above, and then review some of the Quartz 2D code samples. It's not hard, but it's not obvious, either. It just takes a few tries to get the hang of it.
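To make the sequence concrete, here is a rough sketch of those calls in Objective-C. The image variables (templateImage, photo) and all sizes are placeholders; also note that a raw Core Graphics context has its y axis flipped relative to UIKit:

```objc
// Sketch: compose a template, a photo, a shape, and text in a bitmap context.
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 1024, 768, 8, 0, cs,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(cs);

// Draw the template, then composite a photo on top of it.
CGContextDrawImage(ctx, CGRectMake(0, 0, 1024, 768), templateImage.CGImage);
CGContextDrawImage(ctx, CGRectMake(100, 100, 300, 200), photo.CGImage);

// Add a shape and some text.
CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);
CGContextFillEllipseInRect(ctx, CGRectMake(500, 300, 120, 120));
CGContextSelectFont(ctx, "Helvetica", 24, kCGEncodingMacRoman);
CGContextShowTextAtPoint(ctx, 120, 140, "My caption", 10);

// Pull the composed result back out as a UIImage.
CGImageRef composed = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:composed];
CGImageRelease(composed);
CGContextRelease(ctx);
```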

Rotating a PNG file

I have zero experience with manipulating image files of any sort with code, so I am lost about where to begin. All I need to do is open a PNG image file and save it rotated 90 degrees in objective-C. I am a quick learner, so even a push in the right direction would help immensely. I know this is no obscure function; any GUI image editor is capable of this, so I figure someone should be able to help. Thanks in advance!
(also, I have tagged this with iPhone to get more exposure; this is not something that needs to be iPhone-exclusive.)
Here's your "push":
1) Create a CGImage from the original file, using ImageIO (CGImageSourceCreateWithURL, CGImageSourceCreateImageAtIndex).
2) Create a bitmap context with transposed size, using Core Graphics (CGBitmapContextCreate).
3) Rotate the context's transformation matrix (CGContextConcatCTM).
4) Draw the original image into that context (CGContextDrawImage).
5) Create a new image from the bitmap context (CGBitmapContextCreateImage).
6) Save the image to a new file (CGImageDestinationCreateWithURL, CGImageDestinationAddImage, CGImageDestinationFinalize).
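The steps above can be sketched end to end like this. Error handling is omitted, srcURL and dstURL are placeholders for your own NSURLs, and kUTTypePNG comes from MobileCoreServices:

```objc
// Sketch: load a PNG, rotate it 90 degrees, and save the result.
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)srcURL, NULL);
CGImageRef original = CGImageSourceCreateImageAtIndex(src, 0, NULL);

size_t w = CGImageGetWidth(original), h = CGImageGetHeight(original);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// Transposed size: the rotated image is h wide and w tall.
CGContextRef ctx = CGBitmapContextCreate(NULL, h, w, 8, 0, cs,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(cs);

// Rotate the CTM 90 degrees clockwise, shifting the drawing back into view.
CGContextTranslateCTM(ctx, 0, w);
CGContextRotateCTM(ctx, -M_PI_2);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), original);

CGImageRef rotated = CGBitmapContextCreateImage(ctx);
CGImageDestinationRef dst =
    CGImageDestinationCreateWithURL((CFURLRef)dstURL, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(dst, rotated, NULL);
CGImageDestinationFinalize(dst);

CGImageRelease(rotated);
CGContextRelease(ctx);
CGImageRelease(original);
CFRelease(src);
CFRelease(dst);
```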
Have you also looked at the NSImage rotation question asked on Stack Overflow?

Continuously drawing into an iPhone bitmap object? What am I missing?

I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect is no good because this is an ongoing thing -- I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand-new image whenever you want it rendered to the screen. That sounds bad because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and, whenever I want to, blit that image to the screen, while still retaining the ability to keep drawing to it and blit it to the screen again later on.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
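A minimal sketch of that pattern (sizes and drawing calls are placeholders; ctx is whatever destination context you are compositing into, e.g. from UIGraphicsGetCurrentContext()):

```objc
// Sketch: accumulate drawing in a CGLayer and blit it on demand.
CGLayerRef layer = CGLayerCreateWithContext(ctx, CGSizeMake(512, 512), NULL);

// Draw into the layer whenever new commands arrive; results accumulate.
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextSetRGBStrokeColor(layerCtx, 0, 0, 1, 1);
CGContextStrokeEllipseInRect(layerCtx, CGRectMake(10, 10, 100, 100));

// Whenever the view redraws, composite the layer into the destination.
CGContextDrawLayerAtPoint(ctx, CGPointZero, layer);
```

Because the layer persists between draws, you keep drawing into layerCtx and re-compositing without recreating an image each time.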

Is it possible to restore a previous GL framebuffer?

I'm working on an iPhone app that lets the user draw using GL. I used the GLPaint sample code project as a firm foundation, but now I want to add the ability for the user to load one of their previous drawings and continue working on it.
I know how to get the framebuffer contents and save it as a UIImage. Is there a way for me to take the UIImage and tell GL to draw that?
Any help is much appreciated.
Typically you would either:
1) Use glDrawPixels() - though note that glDrawPixels() does not exist in OpenGL ES, so on the iPhone you are left with option 2.
2) Load the image into a texture and then render a textured quad.
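Option 2 can be sketched as follows for OpenGL ES 1.1 (the fixed pipeline GLPaint uses). savedImage is a placeholder for your restored UIImage, and ES 1.1 requires power-of-two texture dimensions:

```objc
// Sketch: upload the saved UIImage's pixels as a texture, then draw a quad.
CGImageRef img = savedImage.CGImage;
size_t w = CGImageGetWidth(img), h = CGImageGetHeight(img);
void *pixels = malloc(w * h * 4);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, cs,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), img);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, pixels);
free(pixels);

// Draw a textured quad covering the framebuffer (ES 1.1 fixed pipeline).
static const GLfloat verts[] = { -1,-1,  1,-1,  -1,1,  1,1 };
static const GLfloat uvs[]   = {  0, 1,  1, 1,  0, 0,  1, 0 };
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, uvs);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```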

iPhone, how do you draw from a texturepage using coregraphics?

I'm trying to work out how to draw from a TexturePage using CoreGraphics.
Given a texture page (a CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?
CGContextDrawImage seems to only take a destination rect. I noticed CGImageCreateWithImageInRect, however this creates a new image. I don't want a new image I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route since it seems the most simple and natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of polygon vertices to select just that section of your big texture.
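For instance, picking out one 64x64 tile from a texture page just means mapping texture coordinates to that sub-rectangle. A sketch, assuming a 512x512 page already bound as a GL texture and col/row identifying the tile:

```objc
// Sketch: address the 64x64 tile at grid position (col, row) in a 512x512
// texture page via normalized texture coordinates.
GLfloat u0 = (col * 64) / 512.0f, v0 = (row * 64) / 512.0f;
GLfloat u1 = u0 + 64 / 512.0f,    v1 = v0 + 64 / 512.0f;
GLfloat uvs[] = { u0, v1,  u1, v1,  u0, v0,  u1, v0 };
glTexCoordPointer(2, GL_FLOAT, 0, uvs);  // pair with a matching vertex array
```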