How can I create one big Texture2D out of several smaller Texture2Ds, and then display that single image instead of all the small ones?
You can render several textures into one using a RenderTarget2D:
_renderTarget = new RenderTarget2D(GraphicsDevice, (int)size.X, (int)size.Y);
GraphicsDevice.SetRenderTarget(_renderTarget);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
// draw the small textures here
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(Color.Blue);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
spriteBatch.Draw(_renderTarget, Vector2.Zero, Color.White);
spriteBatch.End();
First question here; I'm quite new to Java (and English is not my native language), so please be indulgent :)
Didn't find any similar issue.
I am trying to make a 2D game (turn-based, so no real-time issues). My map is displayed in a JPanel using a mix of images for the background, the grid, and movable objects.
All images are loaded and stored once before being displayed. I have one BufferedImage for the background, another on which I draw the grid, and many images for the other objects.
In the paintComponent(), I draw all BufferedImages on the Graphics2D (cast from the Graphics parameter).
My issue is masking the grid when the player chooses to, or when the scale is too big (variables "ruleGrid" and "zoom" respectively). The test text is correctly logged, but the grid is visible anyway.
Both images seem to be merged, and I can't mask the second one.
I have tried displaying the grid elsewhere (at other coordinates) and it works fine. But where the two images overlap, the grid stays visible (as if it were drawn onto the background image itself, and not onto the JPanel).
Sorry if it isn't clear enough...
Some screenshots might help :
Grid and background with same coordinates
Grid and background with different coordinates
When scrolling and zooming out, here is the problem: the overlapping part of the grid is still 'printed' on the background image, while the rest of the grid is shown 'under' the background.
Why is this happening? What did I do wrong? Is it due to some optimization in the Graphics2D class? Should I use layered panes instead?
For both BufferedImages, I use :
BufferedImage.TYPE_INT_ARGB
.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
Here is a simplification of my code:
BufferedImage mapZones;
BufferedImage mapGrid;
@Override
public void paintComponent(Graphics g1){
Graphics2D g = (Graphics2D)g1;
//Clear the map
clearBackground(g);
//Display Background
displayMap(g, mapZones);
//Grid
if (Options.ruleGrid && Options.zoom > 4f) {
displayMap(g, mapGrid);
System.out.println("Test if grid should be displayed");
}
}
/*********************************************************************************************************/
private void displayMap(Graphics2D g, BufferedImage bufI) {
g.drawImage(bufI, -x0, -y0, width, height, null);
}
/*********************************************************************************************************/
private void clearBackground(Graphics2D g1) {
g1.setColor(Color.WHITE);
int max = 500000;
g1.clearRect(-max, -max, max*2, max*2);
}
/*********************************************************************************************************/
Any help would be appreciated. Thanks.
Found the reason (though not the 'why').
I had a third call to 'displayMap' with an empty image:
//Display Elements
displayMap(g, mapElements);
which is created by
mapElements = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
and I have not yet drawn onto it.
When I comment out the call to 'displayMap(g, mapElements);', I finally get the desired behaviour.
But I still don't know why. I think it's down to how the Graphics class and the drawing functions are implemented:
This method returns immediately in all cases, even if the complete image has not yet been loaded, and it has not been dithered and converted for the current output device.
I guess the JVM somehow 'pools' (?) the drawings in the same area and my maps were merged...
If anyone could explain this in a simple manner...
Sorry for my bad English.
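For what it's worth, a freshly created TYPE_INT_ARGB image starts out fully transparent, so drawing it over another image with the default SrcOver composite should change nothing. This minimal, self-contained sketch (hypothetical sizes and colors) demonstrates that expected behaviour, which makes the merging you observed all the more surprising:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class EmptyOverlayTest {
    public static void main(String[] args) {
        // Base image: fill it with opaque red
        BufferedImage base = new BufferedImage(10, 10, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = base.createGraphics();
        g.setColor(java.awt.Color.RED);
        g.fillRect(0, 0, 10, 10);
        g.dispose();

        // Overlay: freshly created, so every pixel is (0, 0, 0, 0)
        BufferedImage overlay = new BufferedImage(10, 10, BufferedImage.TYPE_INT_ARGB);

        int before = base.getRGB(5, 5);
        Graphics2D g2 = base.createGraphics();
        g2.drawImage(overlay, 0, 0, null); // default composite is AlphaComposite.SrcOver
        g2.dispose();
        int after = base.getRGB(5, 5);

        // A fully transparent overlay should leave the base untouched
        System.out.println(before == after);
    }
}
```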
Currently I am doing it this way:
Gdiplus::Image img(L"xxx.png");
Gdiplus::Graphics g(hdc);
g.DrawImage(&img,0,0);
With a PNG, if I draw an image larger than about 400x200 pixels, my FPS drops to 39~45,
but with a BMP image instead, the FPS stays at 60.
How can I fix this problem?
Edit: converting the pixel format
I tried this (it doesn't work):
img = Image::FromFile(filename);
bmp = new Bitmap(img->GetWidth(), img->GetHeight(), PixelFormat32bppPARGB);
Graphics gra(hdc);
gra.FromImage(bmp);
gra.DrawImage(img, destX, destY, img->GetWidth(), img->GetHeight());
I suppose this is due to the PixelFormat.
GDI(+) uses PixelFormat.Format32bppPArgb internally. All other formats are converted when drawing, and that conversion is likely causing your performance issue.
So if you want to draw a picture frequently, convert it yourself on load instead of having GDI+ do it on each draw.
EDIT:
The PixelFormat can be "converted" like this:
// Load png from disc
Image png = Image.FromFile("x.png");
// Create a Bitmap of same size as png with the right PixelFormat
Bitmap bmp = new Bitmap(png.Width, png.Height, PixelFormat.Format32bppPArgb);
// Create a graphics object which draws to the bitmap
Graphics g = Graphics.FromImage(bmp);
// Draw the png to the bmp
g.DrawImageUnscaled(png, 0, 0);
Be aware that Image, Bitmap and Graphics objects implement IDisposable and should be properly disposed when no longer required.
Cheers
Thomas
Is there any downside to using Graphics.DrawString to render a (rather static) bunch of text to an offscreen bitmap, convert it to a Texture2D once, and then simply call SpriteBatch.Draw, instead of using the content pipeline and rendering text using SpriteFont? This is basically a page of text drawn to a "sheet of paper", but the user can also resize fonts, which means I would have to include sprite fonts in several sizes.
Since this is a Windows-only app (not planning to port it), I have access to all fonts like in a plain old WinForms app, and I believe rendering quality will be much better using Graphics.DrawString (or even TextRenderer) than using sprite fonts.
Also, performance might be better, since SpriteBatch.DrawString needs to "render" the entire text in each iteration (i.e. send vertices for each letter separately), while with a bitmap I only do that once, so it should be slightly less work on the CPU side.
Are there any problems or downsides I am not seeing here?
Is it possible to get alpha-blended text using sprite fonts? I've seen the Nuclex framework mentioned around, but it hasn't been ported to MonoGame AFAIK.
[Update]
From the technical side, it seems to work perfectly: much better text rendering than through sprite fonts, and if the text is rendered horizontally I even get ClearType. One possible problem is that font spritesheets are (might be?) more efficient in terms of texture memory than creating a separate texture for a whole page of text.
No, there doesn't seem to be any downside. In fact, you are following a standard approach to text rendering.
Rendering text 'properly' is a comparatively slow process compared to rendering a textured quad. Even though SpriteFonts cut out all the glyph splining, rendering a page of text can still rack up a large number of triangles.
Whenever I've been looking at different text rendering solutions for GL/XNA, people tend to recommend your approach. Draw your wall of text once to a reusable texture, then render that texture.
You may also want to consider RenderTarget2D as a possible solution that stays portable.
As an example:
// Render the new text to the texture
void LoadPageText(int pageNum) {
    string[] text = this.book[pageNum];
    GraphicsDevice.SetRenderTarget(pageTarget);
    // TODO: draw page background
    this.spriteBatchCache.Begin();
    for (int i = 0; i < text.Length; i++) {
        this.spriteBatchCache.DrawString(this.font,
            text[i],
            new Vector2(10, 10 + this.fontHeight * i),
            this.color);
    }
    this.spriteBatchCache.End();
    GraphicsDevice.SetRenderTarget(null);
}
Then in the scene render, you can spriteBatch.Draw(..., pageTarget, ...) to render the text.
This way you only need one texture for all your pages; just remember to redraw it if your font changes.
Another thing to consider is your SpriteBatch's sort mode, which can sometimes impact performance when rendering many triangles.
On point 2: as mentioned above, SpriteFonts are pre-rendered textures, which means the transparency is baked into their spritesheet. As such, it seems the default library uses no transparency/anti-aliasing.
If you rendered the glyphs at twice the size, white on black, used the source color as an alpha channel, and then rendered them scaled back down, blending with Color.Black, you could possibly get it back.
You could also try mixing the colors per pixel yourself (e.g. via direct pixel/pointer access):
MixedColor = ((Alpha1 * Channel1) + (Alpha2 * Channel2)) / (Alpha1 + Alpha2)
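As a quick arithmetic check of that formula, here is an illustrative sketch in Java (the channel and alpha values are made up):

```java
public class MixDemo {
    // Alpha-weighted mix of one color channel:
    // (a1*c1 + a2*c2) / (a1 + a2)
    static int mixChannel(int a1, int c1, int a2, int c2) {
        return (a1 * c1 + a2 * c2) / (a1 + a2);
    }

    public static void main(String[] args) {
        // Two fully opaque pixels: the result is the plain average
        System.out.println(mixChannel(255, 100, 255, 200)); // 150
        // The more opaque pixel dominates the mix
        System.out.println(mixChannel(200, 100, 50, 200)); // 120
    }
}
```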
I'm trying to dynamically create a Bitmap image and add it to the stage, but the image always comes in incredibly oversized (2-3x), so I always have to find arbitrary scale values to get it right (e.g. a 375x500 image required scaleX: .8 and scaleY: .33), and even then it comes out badly, tiled. I even tried changing the canvas size, and the incorrect scale just scales along with the allotted space :x
The code I'm using is essentially this:
var stage = new createjs.Stage("canvas");
var bmp = new createjs.Bitmap("images/img.jpg");
stage.addChild(bmp);
stage.update();
What do I need to set to make the image show up like I intend it to?
In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid that makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is it that the Photos app is able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
CGRect subRect = CGRectMake(col*tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
return tileImage;
}
Here goes the piece of code for tiled image generation:
In the PhotoScroller source code, replace tileForScale:row:col: with the following (inImage is the image you want to cut into tiles):
- (UIImage *)tileForScale: (float)scale row: (int)row column: (int)col size: (CGSize)tileSize image: (UIImage*)inImage
{
CGRect subRect = CGRectMake(col*tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
UIImage *tileImage = [UIImage imageWithCGImage: tiledImage];
return tileImage;
}
Regards,
Deepa
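For comparison, here is the same tiling arithmetic sketched in Java using BufferedImage.getSubimage (illustrative only; unlike CGImageCreateWithImageInRect, getSubimage does not clip the rectangle to the image for you, so edge tiles are clamped explicitly):

```java
import java.awt.image.BufferedImage;

public class Tiler {
    // Returns the tile at (row, col) for the given tile size,
    // clamped so edge tiles don't run past the image bounds.
    static BufferedImage tileFor(BufferedImage img, int row, int col, int tileSize) {
        int x = col * tileSize;
        int y = row * tileSize;
        int w = Math.min(tileSize, img.getWidth() - x);
        int h = Math.min(tileSize, img.getHeight() - y);
        return img.getSubimage(x, y, w, h);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(100, 80, BufferedImage.TYPE_INT_ARGB);
        BufferedImage inner = tileFor(img, 1, 2, 32); // full interior tile at (64, 32)
        BufferedImage edge = tileFor(img, 2, 3, 32);  // corner tile, clamped to (4 x 16)
        System.out.println(inner.getWidth() + "x" + inner.getHeight());
        System.out.println(edge.getWidth() + "x" + edge.getHeight());
    }
}
```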
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I think you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same question. Eventually I realized that, even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
You do the tiling to save time and be more responsive; loading and tiling a large image takes time.
The previous reason is particularly important the first time the user runs the app.
If these two reasons don't apply to you and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above loads the entire image into memory as a UIImage (the inImage parameter of the function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example; the way it works with pre-generated tiles is only to demonstrate the idea behind CATiledLayer and to make a working, self-contained project. It's straightforward to replace the tile-loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.