CCLabelBMFont (font size 600) crashes on retina device (not in the simulator) - iPhone

I'm using a CCLabelBMFont to display a very large letter in my game. The SD font size is 300 and the HD font size is 600:
letter = [CCLabelBMFont labelWithString:@"A" fntFile:@"font-test4.fnt"];
with the four supporting files (font-test4.fnt / font-test4-hd.fnt and font-test4.png / font-test4-hd.png).
Everything works fine in both simulator modes (retina and non-retina).
However, when running on an iPhone 4, the CCLabelBMFont class asserts during these sanity checks:
// scaleW. sanity check
propertyValue = [nse nextObject];
NSAssert( [propertyValue intValue] <= [[CCConfiguration sharedConfiguration] maxTextureSize], @"CCLabelBMFont: page can't be larger than supported");
// scaleH. sanity check
propertyValue = [nse nextObject];
NSAssert( [propertyValue intValue] <= [[CCConfiguration sharedConfiguration] maxTextureSize], @"CCLabelBMFont: page can't be larger than supported");
I have no idea why this is happening.

You're hitting the maximum texture size limit. Check the png files of your bitmap font, specifically the HD variant. If it is larger than 2048 pixels in either dimension (width or height), then only the iPad 2 (with iOS 5.1), iPad 3 and iPhone 4S can load that texture: those devices support textures up to 4096x4096, while older devices only support 2048x2048.
Besides that, a font with font size 300/600 is just ridiculously large. You should think of alternative ways to achieve what you're trying to do, because such a large font size is a huge waste of (still precious) memory.
The Simulator doesn't care much about these issues, though: it runs on your Mac and can use all of your Mac's memory and other resources.
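If you would rather fail gracefully than hit that assert, you can compare the device's limit against your atlas at runtime and pick a smaller font when necessary. This is only a sketch: the 4096 threshold mirrors the limits described above, and the fallback file name font-test4-small.fnt is a made-up example.
// Sketch: guard against oversized bitmap-font atlases before creating the label.
// The fallback .fnt file name below is hypothetical.
GLint maxTexture = [[CCConfiguration sharedConfiguration] maxTextureSize];
CCLabelBMFont *letter;
if (maxTexture >= 4096) {
    // Newer GPUs (iPad 2 on iOS 5.1, iPad 3, iPhone 4S) can hold the large HD atlas.
    letter = [CCLabelBMFont labelWithString:@"A" fntFile:@"font-test4.fnt"];
} else {
    // Older retina devices such as the iPhone 4 top out at 2048x2048 textures,
    // so fall back to a smaller font atlas.
    letter = [CCLabelBMFont labelWithString:@"A" fntFile:@"font-test4-small.fnt"];
}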

Related

Flutter: How to scale text based on screen size

I have an app I'm building, and I recently made the screens I have so far scale automatically based on screen size so that they fit multiple devices and so that I can test them on my device (an iPhone 7 Plus). That worked fine, but when I tested it on a smaller screen, the iPhone 6s emulator, the app had RenderFlex overflows. After changing any SizedBoxes, I had to set their size based on (original size / size of container on iPhone 12 Pro Max) * current container size, and it barely worked (I was building the app on the iPhone 12 Pro Max emulator). Any smaller screen sizes won't work, because the elements themselves don't scale while the text size stays too big (e.g. text form fields and buttons don't scale down because the text size remains the same).
I saw this (Flutter: How can I resize text based on device's screen size) and was going to try it as a potential solution, since that is how I rescaled the containers on the Welcome/Authentication screens, but I wanted to see if there is an easier or built-in solution in Flutter.
iPhone 12 Pro Max:
Welcome/Authentication Screens: https://cln.sh/asdQyc
Settings Screen: https://cln.sh/yQSM5o
iPhone 6s:
Welcome/Authentication Screens: https://cln.sh/mZZZ2F
Settings Screen: https://cln.sh/oUss9m
Thanks for the help!
Edit: Also, using the solution in Flutter: How can I resize text based on device's screen size makes the text too small, but I can't make it bigger because then it would be too big on larger screens.
You can use an if-else chain too. It worked for me when I needed to be very specific about the text size. I used both height and width to calculate my text size, like this:
final width = MediaQuery.of(context).size.width;
final height = MediaQuery.of(context).size.height;
// A, B and the ratios are placeholder thresholds you tune for your design.
double size;
if (height < A && width < B) {
  size = width * height * someRatio1;
} else if (height >= A && width < B) {
  size = width * height * someRatio2;
} else if (height < A && width >= B) {
  size = width * height * someRatio3;
} else {
  size = width * height * someRatio4;
}
I was using Android, so my chain was a bit longer, but iPhones come in a much smaller range of screen sizes, so 4-5 statements would be more than enough.
Or, if you wish to go even further, derive a linear or logarithmic function from appropriate sample values of size, height and width.

Confusion over @2x and Image Resolution

I have an image I got from the web. Let's say that the maximum size I can make it without pixelating is 500 x 500. With that said, should I make the @2x version simply the 500 x 500 version, and the regular (non-retina) version 250 x 250? I'm just a little confused about sizing the image correctly for the right screen resolution, and any help would be appreciated.
Yes, what you said is correct.
Keep in mind, though, that once you put it on the device the @2x version will display as 250 x 250 points on a retina screen.
If the @2x version is 500 x 500 pixels, it is treated at load time as a 250 x 250 point, double-resolution image (scale = 2).
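A quick way to see this behavior is to inspect the loaded image's point size and scale factor at runtime. A minimal sketch, assuming a hypothetical asset named photo.png / photo@2x.png:
UIImage *image = [UIImage imageNamed:@"photo"]; // loads photo@2x.png on a retina device
NSLog(@"size in points: %@", NSStringFromCGSize(image.size)); // 250 x 250
NSLog(@"scale factor: %.0f", image.scale);                    // 2 on retina
NSLog(@"size in pixels: %.0f x %.0f",
      image.size.width * image.scale,
      image.size.height * image.scale);                       // 500 x 500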

Should the dimensions (height/width) of retina images (@2x) always be multiples of two?

I've gotten some graphics files for buttons etc. from the designer. Most of the retina files have one or both dimensions odd, like 29 x 30 or 79 x 61, while the dimensions of the corresponding non-retina files are, for example, 15 x 15 or 39 x 31. The dimensions of the UIImageViews that hold each image exactly match the size of the non-retina files they hold, so on a non-retina phone there is no distortion and everything looks fine.
On a retina phone, these images (icons and such) only look fine when the images happen to have even dimensions (like 30 x 30 or 46 x 80); when an image has an odd dimension, it gets slightly distorted.
Should the pixel dimensions of a retina image always be twice the size of the non-retina dimensions, and of the dimensions of the frame that displays it?
As the name (@2x) implies, the retina version is indeed assumed to be exactly twice the size of the non-retina version. Otherwise, as you have seen, there can be distortion.
On a side note, this is only indirectly related to the displaying frame; think of scroll views, for example.
Ask your designer to always design the UI (not necessarily the components themselves) for the non-retina version first, and then simply double the sizes for the retina version. That way you won't run into distortion problems. If the designer works retina-first and then scales all components back to half their size, they will likely end up with odd dimensions.
Oh, and give your designer this link:
http://www.smashingmagazine.com/2010/11/17/designing-for-iphone-4-retina-display-techniques-and-workflow/
Yes, image files that have @2x appended should be exactly double the size of the non-retina image, and thus should only have even dimensions.
It would appear so.
When you create a view which is 30 points by 30 points on a regular display the backing store (the data that gets drawn on the screen) will be created 30 pixels by 30 pixels.
On a retina display that backing store is simply multiplied by a scale factor. Currently that scale factor is 2 for the iPhone 4 and iPhone 4S, which means the backing stores on retina displays always have dimensions that are a multiple of 2.
Your 30 point by 30 point view would have a 60 pixel by 60 pixel backing store. If your images aren't drawn properly on retina displays, it would seem that the @2x image needs to be the full size of the backing store, and hence exactly double the size of the view in points.
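To make the point-versus-pixel relationship concrete, here is a minimal sketch that computes a view's backing-store size from its bounds and the screen's scale factor, using the 30 x 30 example above (standard UIKit calls, nothing from the original question assumed beyond that size):
// Sketch: relate a view's point size to its backing store in pixels.
UIView *icon = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 30, 30)];
CGFloat scale = [UIScreen mainScreen].scale;  // 1 on non-retina, 2 on retina
icon.contentScaleFactor = scale;
// The backing store is the view's size in points multiplied by the scale factor.
CGFloat pixelWidth  = CGRectGetWidth(icon.bounds)  * scale;   // 60 px on retina
CGFloat pixelHeight = CGRectGetHeight(icon.bounds) * scale;   // 60 px on retina
NSLog(@"backing store: %.0f x %.0f pixels", pixelWidth, pixelHeight);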

What is the internal format of UIImage?

For iPad, for example: how much memory is used for a 10 x 10 pixel image?
To provide some other context: a normal iPad usually starts with about 190-200 MB of app-usable memory, and this number decreases if background processes or other apps are running.
Many thanks!
10 x 10 x 4 = 400 bytes for that image, so it's 4 bytes per pixel (RGBA).
Also, it is normal for background apps to take some memory; iOS will free memory if another app needs it.
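As a rough estimate, the decoded in-memory size is width x height x bytes per pixel (row padding can add a little more, as the next question shows). A minimal sketch, assuming a 32-bit RGBA bitmap and a hypothetical 10 x 10 asset named "tiny":
// Sketch: estimate the decoded (in-memory) size of a UIImage's bitmap.
UIImage *image = [UIImage imageNamed:@"tiny"]; // hypothetical 10 x 10 image
CGImageRef cgImage = image.CGImage;
size_t height      = CGImageGetHeight(cgImage);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage); // >= width * 4 for RGBA
size_t totalBytes  = bytesPerRow * height;           // about 400 bytes for 10 x 10
NSLog(@"decoded bitmap uses roughly %zu bytes", totalBytes);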

CGImageGetBytesPerRow() returns different values on iOS simulator and iOS Device

I have an image taken with my iPod touch 4 that's 720 x 960. In the simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 * 4 bytes), which is what I expected. However, on the device CGImageGetBytesPerRow() returns 3840, i.e. "bytes per row" computed along the height. Does anyone know why the behavior differs even though the image I call CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.
Bytes per row can be anything as long as it is sufficient to hold the image bounds, so it's best not to assume it will be the minimum needed to fit the image.
I would guess that on the device, bytes per row is dictated by some or other optimisation or hardware consideration: perhaps an image buffer that does not have to be changed if the orientation is rotated, or the image sensor transfers extra bytes of dead data per row that are then ignored instead of doing a second transfer into a buffer with minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
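The practical consequence is to always index pixel data by the reported bytes per row (the row stride) rather than by width * 4. A minimal sketch of stride-aware access, assuming a 32-bit-per-pixel bitmap and a hypothetical asset name and coordinates:
// Sketch: read a pixel at (x, y) honoring the image's actual row stride.
UIImage *image = [UIImage imageNamed:@"photo"];        // hypothetical asset
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);   // may include padding
size_t x = 10, y = 20;                                 // example coordinates
const UInt8 *pixel = bytes + y * bytesPerRow + x * 4;  // NOT y * width * 4
NSLog(@"first channel at (%zu, %zu): %u", x, y, pixel[0]);
CFRelease(pixelData);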
It may be slightly different because of the internal memory allocation: "The number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep for some special tasks.