Mirror Layout / Reverse Layout - basic4android

Is it possible to mirror/reverse the display? I am planning to make an app where I will project the display onto the windscreen of a car.

You can draw the complete layout on a bitmap and then flip the bitmap with Canvas.DrawBitmapFlipped.

Related

How to save a Flutter canvas as a bitmap image?

I have a Flutter canvas, and I'd like to save that canvas as a bitmap image (e.g. PNG, but any common bitmap format will do). What's the best way to get the bits out of a canvas, converted to a bitmap image format?
1. Create a PictureRecorder.
2. Create a Canvas with your PictureRecorder and draw stuff.
3. Call endRecording() on the PictureRecorder to get a Picture.
4. Call toImage() on the Picture.
5. Call toByteData() on the Image.

GWT - constructing a new ImageData / Bitmap

I wish to create a new, empty bitmap, manually draw on it, and only then draw it onto a canvas.
This bitmap should not be based on any existing image or existing canvas.
Is there a way to create such a bitmap in GWT? The best solution I can find is creating a dummy canvas, then getting its ImageData through context2d. I can hardly believe that this is the right way to do this.
Any help would be appreciated. Thanks!
Take care: due to JRE whitelist limitations, your app will not run on Google App Engine. See the JRE GAE white list.

ios, quartz2d, fastest way of drawing bitmap context into window context?

Hello, sorry for my weak English.
I am looking for the fastest possible way of redrawing a bitmap context (which holds a pointer to my raw bitmap data) into an iPhone view's window context.
In the examples I have found on the net, people do this by making a CGImage from such a bitmap context, then making a UIImage from that, and drawing it into the view.
I wonder whether that is the fastest way of doing it. Do I really need to create and then release a CGImage? The documentation says that making a CGImage copies the data. Is it possible to send my bitmap context data straight to the window context without allocating, copying, and then releasing it in a CGImage? (The copy seems physically unnecessary.)
parade
Well, I have done some measuring and here is what I have got.
No need to worry about the CGImage and UIImage creation, because that all takes only about 2 milliseconds; my own image-processing routines take the most time (about 100 ms), and drawing the UIImage at a point takes 20 ms. There is also a third thing: when I receive the image buffer in my video-frame-ready delegate, I call setNeedsDisplay via performSelectorOnMainThread, and this operation sometimes takes 2 milliseconds and sometimes about 40 milliseconds. Does anybody know what is going on there? Can I speed this up? Thanks in advance.
parade
I think I see what you are getting at. You have a pointer to the bitmap data and you just want the window to display it. On the old Mac OS (9 and previous) you could write directly to video memory, but you can't do that anymore; back then video memory was part of RAM, and now it all lives on the graphics card.
At some level the bitmap data will have to be copied at least once. You can either do it directly, by creating an OpenGL texture from the data and drawing that in an OpenGL context, or you can use the UIImage approach. The UIImage approach will be slower and may involve two or more copies of the bitmap data: once into the UIImage and once when rendering the UIImage.
In either case, you need to create and release the CGImage.
The copy is necessary. You first have to get the bitmap into the GPU, as only the GPU can composite a layer to the display window, and the GPU has to make its own copy in an opaque (device-dependent) format. One way to do this is to create an image from your bitmap context (other alternatives include uploading an OpenGL texture, etc.).
Once you create an image, you can draw it or assign it to a visible CALayer's contents. The latter may be faster.
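For illustration, here is a minimal Swift sketch of that last suggestion, assuming a 32-bit RGBA offscreen bitmap context; `someView` stands in for whichever view you are compositing into and is an assumption, not part of the original question:

```swift
import UIKit

// Build an offscreen RGBA bitmap context whose backing memory we own.
// (The size and color space here are illustrative.)
let width = 320
let height = 480
guard let context = CGContext(data: nil,
                              width: width,
                              height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: width * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
else { fatalError("could not create bitmap context") }

// Draw into the bitmap with ordinary Core Graphics calls.
context.setFillColor(UIColor.red.cgColor)
context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))

// Snapshot the bitmap as a CGImage (this is where the copy happens) and
// hand it straight to a visible layer; assigning layer.contents avoids the
// extra draw pass you would pay for wrapping it in a UIImage first.
someView.layer.contents = context.makeImage()  // `someView` is an assumed existing UIView
```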

Continuously drawing into an iPhone bitmap object? What am I missing?

I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect is no good because this is an ongoing thing: I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand-new image whenever you want it rendered to the screen. That sounds bad, because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and, whenever I want to, blit that image to the screen, while still retaining the ability to keep drawing into it and blit it to the screen again later on.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
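For illustration, here is a minimal Swift sketch of that pattern, where the C functions named above appear as the CGLayer initializer, the layer's context property, and CGContext.draw(_:at:). The view class and the particular shape drawing are assumptions for the sketch:

```swift
import UIKit

final class ShapeAccumulatorView: UIView {
    private var shapeLayer: CGLayer?

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }

        // Create the CGLayer once, from the destination context, so its
        // backing store matches the window's pixel format.
        if shapeLayer == nil {
            shapeLayer = CGLayer(context, size: bounds.size, auxiliaryInfo: nil)
        }
        guard let layer = shapeLayer, let layerContext = layer.context else { return }

        // Accumulate more drawing into the layer's own context; whatever was
        // drawn on earlier passes is retained.
        layerContext.setFillColor(UIColor.blue.cgColor)
        layerContext.fillEllipse(in: CGRect(x: CGFloat.random(in: 0...200),
                                            y: CGFloat.random(in: 0...200),
                                            width: 40, height: 40))

        // Blit the accumulated layer into the view's context.
        context.draw(layer, at: .zero)
    }
}
```

Calling setNeedsDisplay() on the view then repaints the screen from the accumulated layer without discarding earlier drawing.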

Is there a standard way to create the blue resize handles?

Is there a standard way to create the blue resize handles, such as the ones you see in "Pages" when you resize graphics?
Not as far as I know, but it shouldn't be too hard to do.
Create a view large enough for the image plus a border for the UIButtons. Add the image view, then use UIView animation to change the size and location of both views simultaneously.
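For illustration, here is a minimal Swift sketch of that idea; the handle size, colors, and the container view are assumptions, and drag handling (e.g. a UIPanGestureRecognizer per handle) is left out:

```swift
import UIKit

// Pin a small blue circular handle at each corner of a container view,
// roughly the look of the Pages-style resize chrome.
func addResizeHandles(to container: UIView) {
    let handleSize: CGFloat = 14
    let corners = [
        CGPoint(x: container.bounds.minX, y: container.bounds.minY),
        CGPoint(x: container.bounds.maxX, y: container.bounds.minY),
        CGPoint(x: container.bounds.minX, y: container.bounds.maxY),
        CGPoint(x: container.bounds.maxX, y: container.bounds.maxY),
    ]
    for corner in corners {
        let handle = UIView(frame: CGRect(x: 0, y: 0,
                                          width: handleSize, height: handleSize))
        handle.center = corner
        handle.backgroundColor = .systemBlue
        handle.layer.cornerRadius = handleSize / 2   // round handle
        handle.layer.borderColor = UIColor.white.cgColor
        handle.layer.borderWidth = 1
        container.addSubview(handle)
    }
}
```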