EaselJS adds Bitmap to stage oversized

I'm trying to dynamically create a Bitmap image and add it to the stage, but when I do, the image always comes in incredibly oversized (2-3x), so I always have to find arbitrary scale numbers to get it right (e.g. a 375x500 image made me use scaleX: .8, scaleY: .33), and even then it comes out crappy and tiled. I even tried messing around with the canvas size, and the degree of mis-scaling changes with the allotted space.
The code I'm using is essentially this:
var stage = new createjs.Stage("canvas");
var bmp = new createjs.Bitmap("images/img.jpg");
stage.addChild(bmp);
stage.update();
What do I need to set to make the image show up like I intend it to?
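One thing worth checking (the markup isn't shown in the question, so this is an assumption): if the canvas element is sized with CSS instead of its width/height attributes, the browser stretches the canvas's backing buffer, and everything EaselJS draws comes out oversized and blurry. The Bitmap's image also loads asynchronously, so the stage needs another update() once it arrives. A minimal sketch:
<canvas id="canvas" width="375" height="500"></canvas> <!-- size via attributes, not CSS -->
var stage = new createjs.Stage("canvas");
var bmp = new createjs.Bitmap("images/img.jpg");
stage.addChild(bmp);
// the image loads asynchronously; redraw once it's actually ready
bmp.image.onload = function () {
    stage.update();
};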

Related

How to scale up an image while objects' sizes remain the same?

Suppose you have a file that includes objects, for example EE components like transistors and resistors, and you group them into one and then drag a corner to zoom it into a bigger figure.
How can I make sure that the components themselves are not zoomed, only the wiring between them?
The problem is that I have about 30 images of different sizes, and I'm placing them in a table with many images side by side. If I keep the same scale, some images look small compared to others, so I tried scaling them to get the same size. However, this scales up the components' sizes by different factors as well.
Here is an example of a circuit using the built-in shapes in Visio. As you can see, the components' sizes got bigger when I scaled up the object. This is usually desired; however, in my specific case I want to keep the components' sizes the same.
Here is the Visio file, or you can use any available components in Visio:
https://file.io/VRUCR8yVgYxs

Resizing image generated by PaintCode app

I have imported a vector image into the PaintCode app and then exported it to Swift code. I want to use this vector image in a small view (30x30), but since I want it to work on different devices, I need it to be size-independent.
The original size of the vector image is 512x512. When I add its class to a UIView, only a very small part of the vector image can be seen.
I need to somehow resize the image so that it fits in a frame of any size. I read somewhere that I have to draw a frame around the image in the PaintCode app; I did, but nothing changed.
Start by selecting the "Frame" option from the toolbar.
Apply the frame to your canvas...
NB: if you mess up the frame, DELETE IT and start again; modifying the frame can change the underlying vector, which is annoying.
Apply the desired resize options. This can be confusing the first time.
I group all the elements into a single group. Select the group, and in the "box" next to the coordinates of the group, change all the straight lines to "wiggly" lines. This allows PaintCode the greatest amount of flexibility when resizing the image...
Finally, change the export options. I tend to use both "Drawing" and "Image", as that gives me the greatest amount of flexibility during development (a sketch of using the "Drawing" export follows below).
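A sketch of how the "Drawing" export gets used (the StyleKit and drawIcon names below are assumptions; PaintCode generates them from your document and canvas names, and the resizing: parameter only appears once a frame has been applied):
import UIKit

class IconView: UIView {
    override func draw(_ rect: CGRect) {
        // Hypothetical generated names; .aspectFit scales the 512x512
        // canvas down to whatever bounds the view actually has (e.g. 30x30).
        StyleKit.drawIcon(frame: bounds, resizing: .aspectFit)
    }
}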
You should also look at Resizing Constraints, Resizing Drawing Methods, and PaintCode Power User: Frames for more details.

Monogame rendering text using Graphics.DrawString (instead of SpriteBatch.DrawString)

Is there any downside to using Graphics.DrawString to render a (rather static) bunch of text to an offscreen bitmap, converting it to a Texture2D once, and then simply calling SpriteBatch.Draw, instead of using the content pipeline and rendering text with SpriteFont? This is basically a page of text, drawn to a "sheet of paper", but the user can also choose to resize fonts, which means I would have to include sprite fonts of different sizes.
Since this is a Windows-only app (I'm not planning to port it), I will have access to all fonts like in a plain old WinForms app, and I believe the rendering quality will be much better with Graphics.DrawString (or even TextRenderer) than with sprite fonts.
Also, it seems performance might be better: SpriteBatch.DrawString needs to "render" the entire text on each iteration (i.e. send vertices for each letter separately), while with a bitmap I only do that once, so it should be slightly less work on the CPU side.
Are there any problems or downsides I am not seeing here?
Is it possible to get alpha-blended text using sprite fonts? I've seen the Nuclex framework mentioned around, but it hasn't been ported to Monogame AFAIK.
[Update]
From the technical side, it seems to be working perfectly: much better text rendering than through sprite fonts, and if the text is rendered horizontally I even get ClearType. One possible problem is that sprite sheets for fonts may be more efficient in terms of texture memory than creating a separate texture for a page of text.
No.
There doesn't seem to be any downside; in fact, you seem to be following a standard approach to text rendering.
Rendering text 'properly' is a comparatively slow process compared to rendering a textured quad. Even though SpriteFonts cut out all the glyph splining, if you are rendering a page of text you can still rack up a large number of triangles.
Whenever I've looked into different text rendering solutions for GL/XNA, people have tended to recommend your approach: draw your wall of text once to a reusable texture, then render that texture.
You may also want to consider RenderTarget2D as a portable solution.
As an example:
// Render the new text on the texture
void LoadPageText(int pageNum) {
    string[] text = this.book[pageNum];
    // redirect drawing into the offscreen render target
    GraphicsDevice.SetRenderTarget(pageTarget);
    // TODO: draw page background
    this.spriteBatchCache.Begin();
    for (int i = 0; i < text.Length; i++) {
        // note: DrawString takes (font, text, position, color)
        this.spriteBatchCache.DrawString(this.font,
            text[i],
            new Vector2(10, 10 + this.fontHeight * i),
            this.color);
    }
    this.spriteBatchCache.End();
    // resume drawing to the back buffer
    GraphicsDevice.SetRenderTarget(null);
}
Then in the scene render, you can call spriteBatch.Draw with pageTarget to render the text.
This way you only need one texture for all your pages; just remember to also redraw it if your font changes.
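For instance, reusing the names from the snippet above (pageX and pageY are placeholders for wherever the page sits on screen):
// in Draw(): blit the cached page like any other texture
spriteBatch.Begin();
spriteBatch.Draw(pageTarget, new Vector2(pageX, pageY), Color.White);
spriteBatch.End();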
Another thing to consider is your SpriteBatch's sort mode; sometimes that may impact performance when rendering many triangles.
On point 2: as I mentioned above, SpriteFonts are pre-rendered textures, which means the transparency is baked into their sprite sheet. As such, it seems the default library uses no transparency/anti-aliasing.
If you rendered them twice as large, white on black, used the source color as an alpha channel, and then rendered them scaled back down blended with Color.Black, you could possibly get it back.
Try mixing the colors per channel, weighted by alpha:
MixedColor = (Alpha1 * Channel1 + Alpha2 * Channel2) / (Alpha1 + Alpha2)
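In code, that per-channel mix might look like this (a sketch; Color is the XNA/MonoGame type, and the helper name is mine):
// alpha-weighted average of two colors, channel by channel
Color Mix(Color c1, float a1, Color c2, float a2)
{
    float total = a1 + a2;
    return new Color(
        (int)((a1 * c1.R + a2 * c2.R) / total),
        (int)((a1 * c1.G + a2 * c2.G) / total),
        (int)((a1 * c1.B + a2 * c2.B) / total));
}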

iOS: How to create and draw into (and save) an image larger than the screen?

We're creating an iOS photo app. In doing this, we have to create dynamically sized images up to about 2500x1600px. Once such an image has been created, we want to draw smaller images on top of the big one reasonably quickly.
The problem, as we see it, is that it's impossible to get a context larger than the screen resolution. The call does not crash, but it returns a nil context. How can such a seemingly simple task be achieved?
Secondly, once this context is created, what is the fastest way to draw a small image at a given position on top of the big one?
Edit:
We found the solution: CGBitmapContextCreate returns nil because the width and height parameters were passed as floats, not ints. Sometimes the solution is right there in front of you and you're too blind to see it. Hopefully this answer can help other people who have the same problem.
Make sure you specify integer widths and heights as the arguments to CGBitmapContextCreate; otherwise it returns nil. Beyond that, the size of the context should not matter as long as you can malloc enough memory for it.
It should be possible to get a context for almost any bitmap for which you can allocate enough memory, in your case multiples of 2500x1600x4 bytes of ARGB pixels.
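A minimal sketch (plain CoreGraphics; smallImage stands in for whatever CGImageRef you're compositing):
size_t width = 2500, height = 1600;   // size_t, not float
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,         // let CG allocate the buffer
                                         width, height,
                                         8,            // bits per component
                                         width * 4,    // bytes per row (ARGB)
                                         colorSpace,
                                         kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
// drawing a small image at a given position on top of the big one
CGContextDrawImage(ctx, CGRectMake(100, 100, 375, 500), smallImage);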
You might also want to look into using a CATiledLayer, where you would only have to draw into the tiles covered by the smaller image. You may have to tile to support older devices, which are limited by the max tile size that will fit into the GPU's texture cache.

How to work with images (PNGs) of size 2-4MB

I am working with images of size 2 to 4MB at a resolution of 1200x1600, performing scaling, translation and rotation operations. I want to add another image on top of that and save the result to the photo album. My app crashes after I successfully edit one image and save it to Photos; I think it's happening because of the image sizes. I want to maintain 90% of the images' resolution.
I release some images when I get a memory warning, but it still crashes, since I am working with two images of about 3MB each, plus a 1200x1600 context, while also getting an image from the context at the same time.
Is there any way to compress images and work with it?
I doubt it. Even compressing and decompressing an image without doing anything to it loses information. I suspect that any algorithms to manipulate compressed images would be hopelessly lossy.
Having said that, it may be technically possible. For instance, rotating a Fourier transform also rotates the original image. But practical image compression isn't usually as simple as just computing a Fourier transform.
Alternatively, you could write piecemeal algorithms that chop the image up into bite-sized pieces, transform the pieces and reassemble them afterwards. You might also provide a real-time view of the process by applying the same transform to a smaller version of the full image.
The key will be never to fully decode the entire image into memory at full size.
If you need to display the image, there's no reason to do that at full size -- the display on the iPhone is too small to take advantage of it. For image objects that are only for display, decode the image in scaled-down form.
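On iOS specifically, ImageIO will hand you a screen-sized decode directly; a sketch (not part of the original answer, and fileURL is a stand-in):
#import <ImageIO/ImageIO.h>

CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
NSDictionary *opts = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
    (id)kCGImageSourceThumbnailMaxPixelSize: @640   // decode at display size, not 1600
};
CGImageRef scaled = CGImageSourceCreateThumbnailAtIndex(src, 0,
                        (__bridge CFDictionaryRef)opts);
CFRelease(src);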
For processing, you will need to write custom code that works on a stream of pixels rather than an in-memory array. I don't know if this is available on the iPhone already, but you can write it yourself by writing to the libpng library API directly.
For example, your code right now probably looks something like this (pseudocode):
img = ReadImageFromFile("image.png")
img2 = RotateImage(img, 90)
SaveImage(img2, "image2.png")
The key thing to understand is that in this case, img is not the data in the PNG file (2MB) but the fully uncompressed image (1200 x 1600 x 3 bytes, roughly 6MB). RotateImage (or whatever it's called) returns another image of about this same size. If you are scaling up, it's even worse.
You want code that looks more like this (but there might not be any APIs for you to do it -- you might have to write it yourself):
imgPixelGetter = PixelDecoderFromFile("image.png")
imgPixelSaver = OpenImageForAppending("image2.png")
w = imgPixelGetter.Width
h = imgPixelGetter.Height
// set up a 90 degree rotate
imgPixelSaver.Width = h
imgPixelSaver.Height = w
// read each vertical scanline of pixels
for (x = 0; x < w; ++x) {
    pixelRect = imgPixelGetter.ReadRect(x, 0, 1, h)  // x, y, w, h
    pixelRect.Rotate(90)  // it's now got a width of h and a height of 1
    imgPixelSaver.AppendScanLine(pixelRect)
}
In this algorithm, you never had the entire image in memory at once -- you read it out piece by piece and saved it. You can write similar algorithms for scaling and cropping.
The tradeoff is that it will be slower than just decoding it into memory -- it depends on the image format and the code that's doing the ReadRect(). Unfortunately, PNG is not designed for this kind of access to the pixels.
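For what it's worth, libpng's row-based API does allow this kind of streaming read. A minimal sketch in C against libpng (a real version needs setjmp error handling; and since rows only come out sequentially, the column-wise rotate above would still need its own buffering):
#include <png.h>
#include <stdio.h>
#include <stdlib.h>

// Stream a PNG one scanline at a time; only one row is in memory at once.
// (Assumes a non-interlaced PNG.)
void process_rows(const char *path) {
    FILE *fp = fopen(path, "rb");
    png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    png_init_io(png, fp);
    png_read_info(png, info);

    png_uint_32 height = png_get_image_height(png, info);
    png_bytep row = malloc(png_get_rowbytes(png, info));
    for (png_uint_32 y = 0; y < height; ++y) {
        png_read_row(png, row, NULL);
        // transform this scanline and append it to the output here
    }

    free(row);
    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);
}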