Monogame rendering text using Graphics.DrawString (instead of SpriteBatch.DrawString)

Is there any downside to using Graphics.DrawString to render a (rather static) bunch of text to an offscreen bitmap, convert it to a Texture2D once, and then simply call SpriteBatch.Draw, instead of using the content pipeline and rendering text with a SpriteFont? This is basically a page of text drawn onto a "sheet of paper", but the user can also choose to resize fonts, which means I would have to include spritefonts of different sizes.
Since this is a Windows-only app (not planning to port it), I will have access to all fonts like in a plain old WinForms app, and I believe rendering quality will be much better when using Graphics.DrawString (or even TextRenderer) than using sprite fonts.
Also, it seems performance might be better, since SpriteBatch.DrawString needs to "render" the entire text in each iteration (i.e. send vertices for each letter separately), while with a bitmap I only do it once, so it should be slightly less work on the CPU side.
Are there any problems or downsides I am not seeing here?
Is it possible to get alpha blended text using spritefonts? I've seen Nuclex framework mentioned around, but it's not been ported to Monogame AFAIK.
[Update]
From the technical side, it seems to be working perfectly, much better text rendering than through sprite fonts. If they are rendered horizontally, I even get ClearType. One problem that might exist is that spritesheets for fonts are (might be?) more efficient in terms of texture memory than creating a separate texture for a page of text.
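For reference, the approach described above might be sketched roughly like this; this is not the poster's actual code, and the method name, white background, and font handling are illustrative only (Windows-only, since it uses GDI+):

Texture2D RenderTextToTexture(GraphicsDevice device, string text, System.Drawing.Font font, int width, int height)
{
    using (var bitmap = new System.Drawing.Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format32bppArgb))
    {
        using (var g = System.Drawing.Graphics.FromImage(bitmap))
        {
            g.Clear(System.Drawing.Color.White);
            g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAliasGridFit;
            g.DrawString(text, font, System.Drawing.Brushes.Black,
                new System.Drawing.RectangleF(0, 0, width, height));
        }

        // Copy the GDI+ pixels out of the bitmap.
        var data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, width, height),
            System.Drawing.Imaging.ImageLockMode.ReadOnly,
            System.Drawing.Imaging.PixelFormat.Format32bppArgb);
        var bytes = new byte[data.Stride * height];
        System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
        bitmap.UnlockBits(data);

        // GDI+ stores BGRA in memory; SurfaceFormat.Color expects RGBA, so swap B and R.
        for (int i = 0; i < bytes.Length; i += 4)
        {
            byte b = bytes[i];
            bytes[i] = bytes[i + 2];
            bytes[i + 2] = b;
        }

        var texture = new Texture2D(device, width, height, false, SurfaceFormat.Color);
        texture.SetData(bytes);
        return texture;
    }
}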

No, there doesn't seem to be any downside. In fact, you seem to be following a standard approach to text rendering.
Rendering text 'properly' is a comparatively slow process compared to rendering a textured quad. Even though SpriteFonts cut out all the work of splining glyphs, if you are rendering a page of text you can still rack up a large number of triangles.
Whenever I've looked at different text rendering solutions for GL/XNA, people tend to recommend your approach: draw your wall of text once to a reusable texture, then render that texture.
You may also want to consider RenderTarget2D as a possible solution that is portable.
As an example:
// Render the new text onto the render target texture
void LoadPageText(int pageNum) {
    string[] text = this.book[pageNum];
    GraphicsDevice.SetRenderTarget(pageTarget);
    // TODO: draw page background
    this.spriteBatchCache.Begin();
    for (int i = 0; i < text.Length; i++) {
        this.spriteBatchCache.DrawString(this.font,
            text[i],
            new Vector2(10, 10 + this.fontHeight * i),
            this.color);
    }
    this.spriteBatchCache.End();
    GraphicsDevice.SetRenderTarget(null);
}
Then, in the scene render, you can spriteBatch.Draw(..., pageTarget, ...) to render the text.
This way you only need one texture for all your pages; just remember to redraw it if your font changes.
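A minimal sketch of that draw call (pagePosition is a made-up variable for wherever the page sits on screen):

// In Draw(), after LoadPageText has filled pageTarget:
spriteBatch.Begin();
spriteBatch.Draw(pageTarget, pagePosition, Color.White);
spriteBatch.End();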
Another thing to consider is your SpriteBatch's sort mode; sometimes that can impact performance when rendering many triangles.
On point 2: as I mentioned above, SpriteFonts are pre-rendered textures, which means the transparency is baked into their spritesheet. As such, it seems the default library uses no transparency/anti-aliasing.
If you rendered them twice as large, in white on black, used the source color as an alpha channel, and then rendered them scaled back down blending with Color.Black, you could possibly get it back.

You could also try mixing the colors per channel yourself:
MixedColor = ((Alpha1 * Channel1) + (Alpha2 * Channel2)) / (Alpha1 + Alpha2)
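As a rough sketch of that formula in code (the method name is made up; channels and alphas are assumed to be floats in the 0-1 range):

// Weighted per-channel mix of two colors by their alpha values.
static float MixChannel(float alpha1, float channel1, float alpha2, float channel2)
{
    return (alpha1 * channel1 + alpha2 * channel2) / (alpha1 + alpha2);
}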

Can Flutter render images from raw pixel data? [duplicate]

Setup
I am using a custom RenderBox to draw.
The canvas object in the code below comes from the PaintingContext in the paint method.
Drawing
I am trying to render pixels individually by using Canvas.drawRect.
I should point out that these are sometimes larger and sometimes smaller than the pixels on screen they actually occupy.
for (int i = 0; i < width * height; i++) {
  // in this case the rect size is 1
  canvas.drawRect(
      Rect.fromLTWH((i % width).toDouble(), (i ~/ width).toDouble(), 1, 1),
      Paint()..color = colors[i ~/ width][i % width]); // colors is the List<List<Color>> described below
}
Storage
I am storing the pixels as a List<List<Color>> (colors in the code above). I tried differently nested lists previously, but they did not make any noticeable difference in performance.
The memory on my Android Emulator test device increases by 282.7MB when populating the list with a 999x999 image. Note that it only temporarily increases by 282.7MB. After about half a minute, the increase drops to 153.6MB and stays there (without any user interaction).
Rendering
With a resolution of 999x999, the code above causes a GPU max of 250.1 ms/frame and a UI max of 1835.9 ms/frame, which is obviously unacceptable. The UI freezes for two seconds when trying to draw a 999x999 image, which should be a piece of cake (I would guess) considering that 4k video runs smoothly on the same device.
CPU
I am not exactly sure how to track this properly using the Android profiler, but while populating or changing the list, i.e. drawing the pixels (which is the case for the above metrics as well), CPU usage goes from 0% up to 60%.
Cause
I have no idea where to start, since I am not even sure what part of my code causes the freezing. Is it the memory usage? Or the drawing itself?
How would I go about this in general? What am I doing wrong? How should I store these pixels instead?
Efforts
I have tried so many things that did not help at all that I will only point out the most notable ones:
I tried converting the List<List<Color>> to an Image from the dart:ui library, hoping to use Canvas.drawImage. In order to do that, I tried encoding my own PNG, but I have not been able to render more than a single row. However, it did not look like that would boost performance. When trying to convert a 9999x9999 image, I ran into an out-of-memory exception. Now, I am wondering how video is rendered at all, as any 4k video will easily take up more memory than a 9999x9999 image if a few seconds of it are in memory.
I tried implementing the image package. However, I stopped before completing it as I noticed that it is not meant to be used in Flutter but rather in HTML. I would not have gained anything using that.
This one is pretty important for the following conclusion I will draw: I tried to just draw without storing the pixels, i.e. using Random.nextInt to generate random colors. When trying to randomly generate a 999x999 image, this resulted in a GPU max of 1824.7 ms/frame and a UI max of 2362.7 ms/frame, which is even worse, especially in the GPU department.
Conclusion
This is the conclusion I reached before trying my failed attempt at rendering using Canvas.drawImage: Canvas.drawRect is not made for this task as it cannot even draw simple images.
How do you do this in Flutter?
Notes
This is basically what I tried to ask over two months ago (yes, I have been trying to resolve this issue for that long), but I think that I did not express myself properly back then and that I knew even less what the actual problem was.
The highest resolution I can properly render is around 10k pixels in total. I need at least 1M (e.g. 999x999).
I am thinking that abandoning Flutter and going native might be my only option. However, I would like to believe that I am just approaching this problem completely wrong. I have spent about three months trying to figure this out and I did not find anything that led me anywhere.
Solution
dart:ui has a function that converts pixels to an Image easily: decodeImageFromPixels (see the sketch below).
There is an example implementation, there is an open issue on its performance, and it does not work in the current master channel.
I was simply not aware of this back when I created this answer, which is why I wrote the "Alternative" section.
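A minimal sketch of wrapping decodeImageFromPixels in a Future (the function name, the buffer name, and the RGBA layout are assumptions, not from the answer):

import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;

// Converts raw RGBA bytes (width * height * 4 of them) into a dart:ui Image.
Future<ui.Image> imageFromPixels(Uint8List rgba, int width, int height) {
  final completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(
      rgba, width, height, ui.PixelFormat.rgba8888, completer.complete);
  return completer.future;
}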
Alternative
Thanks to @pslink for reminding me of BMP after I wrote that I had failed to encode my own PNG.
I had looked into it previously, but I thought that it looked too complicated without sufficient documentation. Now, I found this nice article explaining the necessary BMP headers and implemented 32-bit BGRA (ARGB, but BGRA is the order of the default masks) by copying Example 2 from the "BMP file format" Wikipedia article. I went through all sources but could not find an original source for this example. Maybe the authors of the Wikipedia article wrote it themselves.
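A rough sketch of that kind of 32-bit BMP packing; this is my own reconstruction, not the answer's exact code, and the offsets follow the 14-byte file header plus 108-byte BITMAPV4HEADER layout used by that Wikipedia example:

import 'dart:typed_data';

// Packs width x height BGRA pixels into an in-memory 32-bit BMP byte list.
Uint8List encodeBmp(Uint8List bgra, int width, int height) {
  const headerSize = 122;                            // 14-byte file header + 108-byte V4 header
  final fileSize = headerSize + bgra.length;
  final bytes = Uint8List(fileSize);
  final data = ByteData.view(bytes.buffer);

  // BITMAPFILEHEADER
  bytes[0] = 0x42;                                   // 'B'
  bytes[1] = 0x4D;                                   // 'M'
  data.setUint32(2, fileSize, Endian.little);
  data.setUint32(10, headerSize, Endian.little);     // offset to pixel data

  // BITMAPV4HEADER
  data.setUint32(14, 108, Endian.little);            // header size
  data.setInt32(18, width, Endian.little);
  data.setInt32(22, height, Endian.little);          // positive -> rows stored bottom-up
  data.setUint16(26, 1, Endian.little);              // planes
  data.setUint16(28, 32, Endian.little);             // bits per pixel
  data.setUint32(30, 3, Endian.little);              // BI_BITFIELDS
  data.setUint32(34, bgra.length, Endian.little);    // image size
  data.setUint32(54, 0x00FF0000, Endian.little);     // red mask
  data.setUint32(58, 0x0000FF00, Endian.little);     // green mask
  data.setUint32(62, 0x000000FF, Endian.little);     // blue mask
  data.setUint32(66, 0xFF000000, Endian.little);     // alpha mask
  data.setUint32(70, 0x57696E20, Endian.little);     // color space: 'Win '

  // Pixel data, bottom row first because the height above is positive.
  final rowBytes = width * 4;
  for (int y = 0; y < height; y++) {
    final srcRow = (height - 1 - y) * rowBytes;
    bytes.setRange(headerSize + y * rowBytes, headerSize + (y + 1) * rowBytes,
        bgra, srcRow);
  }
  return bytes;
}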
Results
Using Canvas.drawImage and my 999x999 pixels converted to an image from a BMP byte list, I get a GPU max of 9.9 ms/frame and a UI max of 7.1 ms/frame, which is awesome!
| ms/frame | Before (Canvas.drawRect) | After (Canvas.drawImage) |
|-----------|---------------------------|--------------------------|
| GPU max | 1824.7 | 9.9 |
| UI max | 2362.7 | 7.1 |
Conclusion
Canvas operations like Canvas.drawRect are not meant to be used like that.
Instructions
First off, this is quite straightforward; however, you need to populate the byte list correctly, otherwise you are going to get an error that your data is not correctly formatted and see no results, which can be quite frustrating.
You will need to prepare your image before drawing as you cannot use async operations in the paint call.
In code, you need to use a Codec to transform your list of bytes into an image.
final list = [
  0x42, 0x4d, // 'B', 'M'
  ...];
// make sure that you either know the file size, data size and data offset beforehand
// or that you edit these bytes afterwards
final Uint8List bytes = Uint8List.fromList(list);
final Codec codec = await instantiateImageCodec(bytes); // Codec, instantiateImageCodec and Image come from dart:ui
final Image image = (await codec.getNextFrame()).image;
You need to pass this image to your drawing widget, e.g. using a FutureBuilder.
Now, you can just use Canvas.drawImage in your draw call.
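For example, inside the custom RenderBox mentioned in the question, the paint override could look roughly like this (image is assumed to be a dart:ui Image field that was prepared beforehand):

@override
void paint(PaintingContext context, Offset offset) {
  // The image was decoded before this call; no async work happens here.
  context.canvas.drawImage(image, offset, Paint());
}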

Is it possible to fill transparency with white in a texture in code?

I have some textures containing some transparent parts (a donut, for example, which would show a transparent center). What I want to do is fill the middle of the donut (or anything else) with plain white, in code (I don't want a duplicate of every asset that needs this tweak in one part of my game).
Is there a way to do this? Or do I really have to have 2 of each of my assets?
First, it is possible to change a transparent texture into a non-transparent one; if it weren't, graphics editors would be in trouble.
Solution 1 - Easy but takes repetitive editing by hand
The question you should be asking yourself is whether you can afford the conversion at run time or whether having two sets of textures would be more efficient; from experience, I find that the latter tends to be more efficient.
Solution 2 - Extremely hard
You will need a shader that supports transparency and that marks the sections that have to be shaded white; that is, it keeps track of which area will later be filled with white. Since your "donut" is already transparent in some parts, it presumably already uses a texture with an alpha channel, but you will have to write your own shader mask and be able to distinguish which areas are okay to fill white and which are not (fun problem here). What you need to do is find the condition under which that alpha no longer needs to be alpha and has to be white. Once the condition is met, you can change the alpha via the Color's alpha property. The only way I see you being able to do this is if there is a pattern to the objects, so that you can apply some mathematical model to them and use it to find which area gets filled. If the objects are very different, then making two sets of textures starts to look more appealing.
Solution 3 - Medium with high re-use value
You could edit the textures to have two different marker colors, say pink and green. Green is the area that gets turned white and pink is always transparent. When the green area should not be white, render it as transparent instead. You would have to edit your textures by hand as well (see the sketch below).
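A minimal sketch of that load-time recoloring, assuming pure green (Color.Lime) and pure magenta markers; the method name and marker choices are mine, not the answer's:

Texture2D ApplyMarkerColors(Texture2D texture)
{
    var pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    for (int i = 0; i < pixels.Length; i++)
    {
        if (pixels[i] == Color.Lime)          // green marker -> opaque white
            pixels[i] = Color.White;
        else if (pixels[i] == Color.Magenta)  // pink marker -> fully transparent
            pixels[i] = Color.Transparent;
    }

    texture.SetData(pixels);
    return texture;
}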

iOS: How to create and draw into (and save) an image larger than the screen?

We're creating an iOS photo app. In doing this, we have to create dynamically sized images up to about 2500x1600px. Once this image has been created, we want to draw smaller images on top of the big one reasonably quickly.
The problem, as we see it, is that it's impossible to get a context larger than the screen resolution. The call does not crash, but it returns a nil context. How can such a seemingly simple task be achieved?
Secondly, once this context is created, what is the fastest way to draw a small image at a given position on top of the big one?
Edit:
We found the solution: CGBitmapContextCreate returned nil because the width and height parameters were passed as floats, not ints. Sometimes the solution is right there in front of you, and you're too blind to see it. Hopefully this answer can help other people who somehow have the same problem.
Make sure you specify integer widths and heights as the arguments to CGBitmapContextCreate, otherwise it returns nil. Beyond that, the size of the context should not matter as long as you can malloc enough memory for it.
It should be possible to get a context for almost any bitmap for which you can allocate/malloc enough memory, in your case multiples of 2500x1600x4 bytes of ARGB pixels.
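As a sketch, such a call with integer dimensions could look like this (the sizes are the ones from the question; the function name is made up, and CoreGraphics is left to allocate the pixel buffer):

#include <CoreGraphics/CoreGraphics.h>

CGContextRef CreateLargeBitmapContext(void)
{
    size_t width  = 2500;   /* integers, not floats */
    size_t height = 1600;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,         /* CG allocates the buffer */
                                                 width, height,
                                                 8,             /* bits per component */
                                                 width * 4,     /* bytes per row, 4 bytes per ARGB pixel */
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return context;
}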
You might also want to look into using a CATiledLayer, where you would only have to draw into the tiles covered by the smaller image. You may have to tile anyway to support older devices, which are limited by the maximum tile size that will fit into the GPU's texture cache.

perl-tk: Interactively visualize large 2D raster data (like xvcg)

I want to write a perl application using tk to visualize a large 2D plot (it can be considered a 2D image). I need scrolling and resizing. I also need to avoid storing the entire image in memory.
It is too big to be saved as one huge picture, but I can easily redraw any part of it.
So, I want to write a graphic application to view this data in interactive mode. This is like what xvcg does for graphs: http://blogs.oracle.com/amitsaha/resource/blog-shots/apt-rdepends.png (that is an example of the interface: there are x and y scroll bars and zoom bars).
My data looks a bit like http://www.access-excel-vba.com/giantchart.png, but without any text, with thinner (1 px) lines and a lot of dots on them; the size is currently 33000x23000 and will get bigger. I use 2-bit-per-pixel images.
So, how can I program a scrollable and zoomable image viewer in perl/tk? The requirement is not to store the entire image in memory (190 MB now, and it will be more!), but to ask some function to draw it in parts.
About language/toolkit selection: my data generator is written in perl and the OS is unix/POSIX, so I don't want to switch languages. I am able to switch to another graphics toolkit, but perl/tk is preinstalled on the target PCs.
Use a Canvas widget. You can place images or draw directly, in which case the built-in scale method would handle resizing. With the right handlers for scrolling you could dynamically load and unload content as you move around, to keep the memory usage reasonable; e.g. the callback for -xscrollcommand would detect when you scroll right into an unloaded area and load the content for that area. You could unload items once they go off-screen.
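A skeletal sketch of that idea in Perl/Tk (load_visible_tiles is a hypothetical routine; the geometry numbers are just the sizes from the question):

use strict;
use warnings;
use Tk;

my $mw = MainWindow->new;

# A scrollable canvas whose scroll callbacks update the scrollbars and also
# trigger loading/unloading of content for the newly visible area.
my $canvas = $mw->Canvas(
    -width        => 800,
    -height       => 600,
    -scrollregion => [0, 0, 33000, 23000],
);
my $xsb = $mw->Scrollbar(-orient => 'horizontal', -command => [$canvas => 'xview']);
my $ysb = $mw->Scrollbar(-orient => 'vertical',   -command => [$canvas => 'yview']);

$canvas->configure(
    -xscrollcommand => sub { $xsb->set(@_); load_visible_tiles($canvas) },
    -yscrollcommand => sub { $ysb->set(@_); load_visible_tiles($canvas) },
);

$ysb->pack(-side => 'right',  -fill => 'y');
$xsb->pack(-side => 'bottom', -fill => 'x');
$canvas->pack(-side => 'left', -fill => 'both', -expand => 1);

sub load_visible_tiles {
    my ($c) = @_;
    # Hypothetical: use $c->xview and $c->yview (visible fractions of the
    # scrollregion) to decide which tiles to draw and which canvas items to delete.
}

MainLoop;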
This may sound funny, but I think your best approach would be to take a look at a few articles on writing an efficient 2D tile scrolling game. I've written something like what you've described before in Java, but the core concepts are the same. If you can figure out how to break your image down into smaller tiles, it's just a matter of streaming and scaling only the visible portions.
Alternatively, you could render the entire image to disk, then use something such as http://www.labnol.org/internet/design/embed-large-pictures-panoramas-web-pages-google-maps-image-viewer/2606/ . Google Maps tackles the same problem you've mentioned but on a much larger scale. This technique could break the image you've created down for you, and then allow you to feed it into a browser-based solution. Mind you, this does step outside of your Perl requirement, but it may suit your needs.
If you don't want to handle this by working with tiled photo images in a canvas (which is essentially what Michael Carman and NBJack are suggesting) then you could write your own custom image type (requires some C code). The API you need to implement to is Tk_CreateImageType, which allows you to customize five key aspects of images (how they're created, installed into a displayable context, drawn, released from the context, and deleted). I'm told — but cannot say from experience, admittedly — that this is a reasonably easy API to implement. One advantage of doing this is that you don't need to have nearly as much complexity as exists in the photo image type (which includes all sorts of exotica like handling for rare display types) and so can use a more efficient data structure and faster processing.
From looking at your sample data, it looks like what you are trying to do can fit inside various web technologies (either a massive table with background colors, or rendered from scratch with the HTML <canvas> tag).
For Perl, you could either go with one of the many server side web development techniques, or you could use something like XUL::Gui which is a module I wrote which basically uses Firefox (or other supported browsers) as a gui rendering engine for Perl.
Here is a short example showing how to use the <canvas> element (in this case to draw a Sierpinski triangle, from the module's examples):
use strict;
use warnings;
use XUL::Gui 'g->';

my $width  = 400;
my $height = sqrt($width**2 - ($width/2)**2);

g->display(
    g->box(
        g->fill,
        g->middle,
        style => q{
            background-color: black;
            padding: 40px;
        },
        g->canvas(
            id     => 'canvas',
            width  => $width,
            height => int $height,
        )
    ),
    g->delay(sub {
        my $canvas = g->id('canvas')->getContext('2d');
        $canvas->fillStyle = 'white';

        my @points = ([$width/2, 0],
                      [0, $height], [$width, $height]);

        my ($x, $y) = @{ $points[0] };
        my $num = @points;
        my ($frame, $p);
        while (1) {
            $p = $points[ rand $num ];
            $x = ($x + $$p[0]) / 2;
            $y = ($y + $$p[1]) / 2;
            # draw the point with a little anti-aliasing
            $canvas->fillRect($x + 1/4, $y + 1/4, 1/2, 1/2);
            if (not ++$frame % 1_000) { # update screen every 1000 points
                $frame % 100_000
                    ? g->flush
                    : g->doevents   # keeps firefox happy
            }
        }
    })
);

How does an Android/iPhone device implement text-zooming?

Simple question - how is text-zooming implemented on an Android/iPhone device? Do they pre-compute frequently used bitmaps of a font and replace the text as the scale changes? Or do they extract the contours from the font files and render the text as vector graphics?
Text rendering is a fairly complex subject, so any answer here will just gloss over a lot of stuff. Typefaces are generally stored in vector format, not bitmaps. The system lays out the text by computing the metrics of each letter and creates a vector shape that is rendered into a bitmap that is displayed on the screen. It's unlikely that the system will cache the individual letterforms as bitmaps because of the way antialiasing and subpixel rendering work. But at some point all vector graphics are converted into bitmaps, because the typical computer display is made up of pixels.