Given a PNG image and a set of data to write to it, is it possible to overwrite pixels in the existing PNG in a particular area of interest? For example, if I have a block of data in a rectangle between pixels (0,0) and (5,10), would it be possible to write this data as a block into a 10x10 PNG without touching the area that isn't being overwritten? My use case is that I have map tiles where half the data will be in one tile and half in the other, with the blank pixels being white squares. I would like to combine them by simply writing the non-white pixels directly into the existing PNG as a block, without having to open, combine, and then re-write the entire PNG. Does the structure of a PNG allow this?
I'm loath to claim that this is impossible, but it is certainly complicated.
First of all, pixels of a PNG are (sometimes) interlaced, so you'd have to calculate the locations of your target pixels based on the Adam7 scheme.
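For illustration only (a sketch, not taken from any PNG library), a pixel's interlace pass can be looked up in the standard 8x8 Adam7 pattern:

#include <stdio.h>

/* Standard Adam7 8x8 pass pattern; pass numbers run from 1 to 7. */
static const int adam7_pass[8][8] = {
    {1, 6, 4, 6, 2, 6, 4, 6},
    {7, 7, 7, 7, 7, 7, 7, 7},
    {5, 6, 5, 6, 5, 6, 5, 6},
    {7, 7, 7, 7, 7, 7, 7, 7},
    {3, 6, 4, 6, 3, 6, 4, 6},
    {7, 7, 7, 7, 7, 7, 7, 7},
    {5, 6, 5, 6, 5, 6, 5, 6},
    {7, 7, 7, 7, 7, 7, 7, 7},
};

/* Which interlace pass the pixel at (x, y) is stored in. */
int pass_for_pixel(int x, int y)
{
    return adam7_pass[y % 8][x % 8];
}

int main(void)
{
    printf("pixel (3, 10) is stored in pass %d\n", pass_for_pixel(3, 10));
    return 0;
}

You would still have to map that pass number to a byte offset within the pass's scanlines, which depends on the image dimensions and bit depth.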
Furthermore, each row is filtered with its own per-row filter, so you'd have to re-filter each row of your source data using the filter type of the corresponding target row. Depending on the filter, you'd also have to adjust additional bytes bordering the updated target bytes. Straight from the horse's mouth:
Though the concept is simple, there are quite a few subtleties in the actual mechanics of filtering.
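As a rough illustration of what that means (a hedged sketch in C, not code from the specification): the simplest filter, Sub (type 1), stores each byte as its difference from the byte one pixel to the left, so re-filtering a modified scanline in place looks like this:

#include <stddef.h>

/* Apply the PNG "Sub" filter (type 1) to one raw scanline in place.
   filtered[i] = raw[i] - raw[i - bpp]; bytes before the row count as 0,
   so the first bpp bytes are left unchanged. */
void png_filter_sub(unsigned char *row, size_t row_len, size_t bpp)
{
    /* Walk backwards so the original (unfiltered) left neighbours are still intact. */
    for (size_t i = row_len; i-- > bpp; )
        row[i] = (unsigned char)(row[i] - row[i - bpp]);
}

The Up, Average and Paeth filters additionally reference the scanline above, which is why changing one row's raw bytes can force the following row to be re-filtered as well.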
Finally, all the filtered bytes are compressed using a generic compression algorithm called "deflate." Unless you want to decompress the whole thing beforehand, you need to make sure both that (1) your source data can be properly decoded and (2) the bytes near the border of the target bytes are properly compressed in the context of their new neighbors.
I'm not a compression expert, so I won't argue in more detail. One piece of good news is that the algorithm preserves some independence between distant regions thanks to its sliding-window scheme: data are only compressed with reference to a preceding window of at most 32 KB.
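To make that concrete: the concatenated IDAT chunk payloads form a single zlib stream, so in practice you inflate the whole thing before you can even reach the filtered bytes. A minimal zlib sketch, assuming the caller has already joined the IDAT payloads into idat_bytes and sized raw from the IHDR fields:

#include <string.h>
#include <zlib.h>

/* Inflate the joined IDAT payloads into a buffer of filtered scanlines.
   Returns 0 on success, -1 on failure. */
int inflate_idat(unsigned char *idat_bytes, size_t idat_len,
                 unsigned char *raw, size_t raw_cap)
{
    z_stream strm;
    memset(&strm, 0, sizeof strm);
    if (inflateInit(&strm) != Z_OK)
        return -1;

    strm.next_in   = idat_bytes;
    strm.avail_in  = (uInt)idat_len;
    strm.next_out  = raw;
    strm.avail_out = (uInt)raw_cap;

    int ret = inflate(&strm, Z_FINISH);  /* Z_STREAM_END when fully decoded */
    inflateEnd(&strm);
    return ret == Z_STREAM_END ? 0 : -1;
}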
If this seems at all easy to you, give it a try. If you're like me, though, you'll just decode the whole thing, overwrite the pixels as bitmap data, and encode the result.
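For what it's worth, the decode-everything route is only a handful of lines with an off-the-shelf codec. A hedged sketch using the stb_image / stb_image_write single-header libraries (my choice purely for illustration); it copies every non-white pixel from one tile onto the other and re-encodes:

#include <string.h>
#define STB_IMAGE_IMPLEMENTATION
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image.h"
#include "stb_image_write.h"

/* Copy every non-white pixel of src.png onto dst.png, then re-encode dst.png. */
int main(void)
{
    int w, h, n, dw, dh, dn;
    unsigned char *src = stbi_load("src.png", &w, &h, &n, 4);   /* force RGBA */
    unsigned char *dst = stbi_load("dst.png", &dw, &dh, &dn, 4);
    if (!src || !dst || w != dw || h != dh)
        return 1;

    for (int i = 0; i < w * h; i++) {
        unsigned char *s = src + 4 * i;
        if (!(s[0] == 255 && s[1] == 255 && s[2] == 255))       /* skip white pixels */
            memcpy(dst + 4 * i, s, 4);
    }

    stbi_write_png("dst.png", w, h, 4, dst, w * 4);
    stbi_image_free(src);
    stbi_image_free(dst);
    return 0;
}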
This is practically impossible, because the pixel data (after a row-by-row "filtering" step) is compressed with zlib, and it's practically impossible to change part of a compressed stream.
Is there any downside to using Graphics.DrawString to render a (rather static) bunch of text to an offscreen bitmap, converting it to a Texture2D once, and then simply calling SpriteBatch.Draw, instead of using the content pipeline and rendering the text with a SpriteFont? This is basically a page of text, drawn onto a "sheet of paper", but the user can also choose to resize fonts, which means I would have to include sprite fonts of different sizes.
Since this is a Windows-only app (I'm not planning to port it), I will have access to all fonts just as in a plain old WinForms app, and I believe rendering quality will be much better with Graphics.DrawString (or even TextRenderer) than with sprite fonts.
Also, it seems performance might be better, since SpriteBatch.DrawString needs to "render" the entire text on each iteration (i.e. send vertices for each letter separately), while with a bitmap I only do that once, so it should be slightly less work on the CPU side.
Are there any problems or downsides I am not seeing here?
Is it possible to get alpha blended text using spritefonts? I've seen Nuclex framework mentioned around, but it's not been ported to Monogame AFAIK.
[Update]
From the technical side it seems to work perfectly: much better text rendering than through sprite fonts, and if the text is rendered horizontally I even get ClearType. One possible downside is that font spritesheets are (might be?) more efficient in terms of texture memory than creating a separate texture for a whole page of text.
No.
There doesn't seem to be any downside; in fact, you seem to be following a standard approach to text rendering.
Rendering text 'properly' is a comparatively slow process next to rendering a textured quad. Even though SpriteFonts cut out all the work of splining glyphs, if you are rendering a page of text you can still rack up a large number of triangles.
Whenever I've been looking at different text rendering solutions for GL/XNA, people tend to recommend your approach. Draw your wall of text once to a reusable texture, then render that texture.
You may also want to consider RenderTarget2D as a possible solution; it is portable as well.
As an example:
// Render the new text on the texture
void LoadPageText(int pageNum) {
    string[] text = this.book[pageNum];
    GraphicsDevice.SetRenderTarget(pageTarget);
    // TODO: draw page background
    this.spriteBatchCache.Begin();
    for (int i = 0; i < text.Length; i++) {
        this.spriteBatchCache.DrawString(this.font,
            text[i],
            new Vector2(10, 10 + this.fontHeight * i),
            this.color);
    }
    this.spriteBatchCache.End();
    GraphicsDevice.SetRenderTarget(null);
}
Then in the scene render, you can spriteBatch.Draw(..., pageTarget, ...) to render the text.
This way you only need one texture for all your pages; just remember to also redraw it if your font changes.
Another thing to consider is your SpriteBatch's sort mode; sometimes that can affect performance when rendering many triangles.
On point 2: as I mentioned above, SpriteFonts are pre-rendered textures, which means the transparency is baked into their spritesheet. As such, it seems the default library uses no transparency/anti-aliasing.
If you rendered them twice as large, white on black, used SourceColor as an alpha channel, and then drew them scaled back down blended with Color.Black, you could possibly get it back.
You could also try mixing the colors per channel through a raw pixel pointer:
MixedColor = ((Alpha1 * Channel1) + (Alpha2 * Channel2))/(Alpha1 + Alpha2)
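As an illustrative sketch in plain C (not tied to any particular framework), mixing two RGBA8888 pixels through raw pointers could look like this; the choice of output alpha is an assumption:

/* Alpha-weighted mix of one channel from two source pixels. */
static unsigned char mix_channel(unsigned c1, unsigned a1, unsigned c2, unsigned a2)
{
    unsigned total = a1 + a2;
    return total ? (unsigned char)((a1 * c1 + a2 * c2) / total) : 0;
}

/* Mix two RGBA8888 pixels into dst through raw pointers. */
void mix_pixel(unsigned char *dst, const unsigned char *a, const unsigned char *b)
{
    for (int i = 0; i < 3; i++)
        dst[i] = mix_channel(a[i], a[3], b[i], b[3]);
    dst[3] = (unsigned char)((a[3] + b[3]) / 2);  /* arbitrary choice for the result alpha */
}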
I'm curious what happens to the size of the host image after an invisible watermark has been inserted. I'm guessing the size will increase, but by how much?
For example, say the image to be inserted is 1 KB and the host image is 2 KB. Since you're adding additional information, will the size be 3 KB after the embedding process?
I think it depends on the algorithm used for inserting the watermark. If the watermark is inserted without any quality loss, it is likely that the file would increase by at least 2KB.
In my application I'm creating a large image dynamically and then loading it up for display in my image explorer class. Because I can't add new images to the bundle at run time, it seems I have to use imageWithContentsOfFile - however, this gives me major speed issues further down the line.
The way my image explorer works is that it takes in an image, splits it up into tiles, caches those tiles and then only loads those tiles into memory for display that need to be shown on the screen. Using a bunch of NSLogs, I've managed to find out where all the slowdown is. It's not in the imageWithContentsOfFile function itself, it's when I try to call this line:
CGContextDrawImage(context_ref, CGRectMake(0, 0, imgWidth, imgHeight), tileImage);
This is when I'm writing the tile to the cache file. tileImage is a CGImageRef that is returned from CGImageCreateWithImageInRect, which is how I get subsets of my larger image to save separately.
The odd thing is that splitting up a large image this way takes about 45 seconds (!), but when I split up an image from the bundle using imageNamed rather than imageWithContentsOfFile, it takes only about 2 seconds.
Anyone have any ideas? Thanks in advance :)
I think you should split up your image.
CGContextDrawImage needs the fully loaded tileImage: if your tileImage is 8 MB, your app must load 8 MB of data into memory.
That takes a long time and can also cause memory problems.
If you want to use a single big image and can wait for it to load, one solution is to do the loading on another thread.
That avoids locking the UI while the big image loads.
An 8 MB JPEG image will use well over 8 MB of memory once decoded, because UIImage keeps an uncompressed bitmap for fast drawing.
imageNamed uses caching, and may reduce the amount of scaling.
UIImage is immutable. imageNamed can take advantage of this and return a reference to a cached image, rather than loading and creating a new image each time, wherever you load your image.
If you create the images yourself, you can set up your own in-memory caching scheme and pass references around in many cases, then purge the cache when you receive a memory warning.
If you need to scale the image and the size is static, determine the size to draw and create a UIImage using imageWithCGImage:scale:orientation: -- or you can approach the problem in a similar way using the CoreGraphics APIs directly.
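For example, a sketch of the plain CoreGraphics route could look like the following; the function name and the RGBA pixel format are my own choices, not anything prescribed:

#include <CoreGraphics/CoreGraphics.h>

/* Draw `source` into a smaller bitmap context and return the downscaled image.
   The caller owns the returned CGImageRef. */
CGImageRef CreateScaledImage(CGImageRef source, size_t newWidth, size_t newHeight)
{
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, newWidth, newHeight,
                                             8, newWidth * 4, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    if (ctx == NULL)
        return NULL;

    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, newWidth, newHeight), source);

    CGImageRef scaled = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return scaled;
}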
Beyond that, hold onto and reuse what you need, and use a profiler to balance your allocations and to measure timings.
I'm creating an image editing application for iPhone. I would like to let the user pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, using the image picker) and the grayscale conversion, but I'm stuck on sepia and don't know how to implement it. Is it possible to get the value of each pixel of the image so that I can modify it to get the desired effect? Or is there some other way to do this? Please help.
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM til the last moment when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but triggering that might even be as simple as this:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}

/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char *pixels = [[((NSData *)CopyImagePixels([myImage CGImage]))
    autorelease] bytes];  /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
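For the sepia step itself, once you have copied the pixels into a mutable RGBA buffer (the copy returned above is read-only, see EDIT 2), a plain per-pixel loop is enough; the weights below are the commonly used sepia coefficients, not anything Apple-specific:

#include <stddef.h>

static unsigned char clamp255(int v) { return (unsigned char)(v > 255 ? 255 : v); }

/* In-place sepia over a buffer of `count` RGBA pixels. */
void sepia_rgba(unsigned char *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        unsigned char *p = pixels + 4 * i;
        int r = p[0], g = p[1], b = p[2];
        p[0] = clamp255((int)(0.393 * r + 0.769 * g + 0.189 * b));
        p[1] = clamp255((int)(0.349 * r + 0.686 * g + 0.168 * b));
        p[2] = clamp255((int)(0.272 * r + 0.534 * g + 0.131 * b));
        /* p[3] (alpha) is left untouched */
    }
}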
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:-
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of the limits on available RAM, you then have to either make a lower-res copy by smooth-scaling it down yourself, or copy it into a modifiable buffer; if the image is large, you may need to write it out in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: try using a destination bitmap context backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the BSD mmap APIs.
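If you do try the mmap route, an untested sketch of backing a bitmap context with a memory-mapped temporary file might look like this (the path handling, pixel format and error handling are all assumptions):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <CoreGraphics/CoreGraphics.h>

/* Create a bitmap context whose backing store is a memory-mapped temp file. */
CGContextRef CreateFileBackedContext(const char *path, size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;
    size_t length = bytesPerRow * height;

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)length) != 0) {
        close(fd);
        return NULL;
    }

    void *backing = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                        /* the mapping keeps the file alive */
    if (backing == MAP_FAILED)
        return NULL;

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(backing, width, height, 8,
                                             bytesPerRow, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    return ctx;  /* caller must CGContextRelease() and munmap(backing, length) */
}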
EDIT 5
Added "const char*" and "pixels read-only" comment to code.