Web design: PNG image width limitations

I want to make a kind of slideshow based on scrolling the webpage.
My problem is that I have a PNG image of 78720x1015px.
The width comes from a single 1920px-wide frame arranged 41 times side by side. It should work like a cartoon, where the image moves by 100% (margin: -100%) each step and creates the feeling of a movie.
This results in an image width of 1920px x 41 frames = 78720px.
That is an enormous width, but what puzzles me is that the file size is only 975 kB, which in my opinion is not that big. Still, the image takes a very long time to load in the web browser, and it does not look as sharp as it does in my desktop image viewer.
Question 1: What do I have to consider when dealing with such a large image width? What are the limits?
Question 2: Is there a better way to make this kind of slideshow? Note that the sliding itself shouldn't be visible; it should play like a movie made of about 40 pictures.
Thanks in advance!

PNG compresses very well, especially for a cartoon-style image like you describe that may use only a limited number of colours.
However, to display it, the device must load all of that pixel data into RAM. That's almost 80 million pixels in your case, which is around 320 MB of uncompressed data at 4 bytes per pixel. This is probably why the browser is struggling, especially since you're using margin to move the image around, which forces a full redraw.
You may get better results with transform, as it can use hardware acceleration and also avoids the reflow part of a redraw, since transforms don't affect page layout.
But the better solution would be to break it down into individual images. Have your code load the next image, scroll it across, then load the following one while scrolling and unload the one that's now off-screen, giving a relatively seamless view.

Related

Use Metal to Efficiently Render a Scrolling Image

I would like to use Metal to efficiently render a particular type of time-varying image - namely an image which continuously scrolls vertically. That is, each frame is exactly the same as the previous frame except shifted down by one row. The bottom row scrolls off the screen and is discarded. The top row is new data inputted from some Swift array. My application uses real-time audio data to render a spectrogram - but this "rendering a scrolling image" is a general-purpose need with broad applicability.
The brute-force way to implement this is to store the data in an array and re-render the entire image for each frame - ignoring the fact that 99.9% of the data is already in video memory and simply needs to be accessed differently.
In the old days, when developers had direct access to the display's VRAM, one could achieve this effect by simply incrementing the VRAM's row-access pointer by one for each frame (and writing in the top row's new data). Today, operating systems abstract the screen's buffer memory - but hopefully Metal provides a clever way of cutting through this abstraction to allow this kind of low-level data shifting.
The SO question "Moving an Image using Metal Shader" (here) is helpful, but leaves me wondering what the right approach is. Can someone please point me in the right direction to implement this?
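For what it's worth, here is a rough Swift-side sketch of the ring-buffer idea described above (the names are illustrative, and the matching fragment shader that wraps its y coordinate by the offset is omitted):

```swift
import Metal

/// Minimal sketch: keep one persistent texture, overwrite only the single
/// newest row each frame, and let the shader wrap its y coordinate by a
/// per-frame offset so the image appears to scroll. Names are illustrative.
final class ScrollingImage {
    let texture: MTLTexture
    private var writeRow = 0            // next row to overwrite (circular)

    init(device: MTLDevice, width: Int, height: Int) {
        let desc = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
        desc.usage = [.shaderRead]      // storage mode must also allow CPU writes
        texture = device.makeTexture(descriptor: desc)!
    }

    /// Write one new row of pixel data; everything else stays in VRAM untouched.
    func push(rowPixels: [UInt8]) {
        let region = MTLRegionMake2D(0, writeRow, texture.width, 1)
        rowPixels.withUnsafeBytes { buf in
            texture.replaceRegion(region, mipmapLevel: 0,
                                  withBytes: buf.baseAddress!,
                                  bytesPerRow: texture.width * 4)
        }
        writeRow = (writeRow + 1) % texture.height
        // Pass writeRow / height to the fragment shader as a uniform and
        // sample with fract(uv.y + offset) so the newest row appears at the top.
    }
}
```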

Preloading a large image

I have an image with dimensions of 5534 x 3803 and a size of 2.4 MB. The UIView reference notes that:
"In iOS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume."
When the image loads, it lags for half a second, then slides in. The image sits in the UIImageView at 1024x704, but can be scaled up to 4x that size for the purpose of my app.
Are you able to preload the image in the AppDelegate? Or is there another way of working around having such a large image?
Thanks
EDIT: The scaling is done via UIPinchGestureRecognizer and scales up and down (x1 - x4) around the image's center point. There is no panning of the image when zoomed in.
Personally, I would try to write a tile-based system (think Google Maps) that slices your big image into a grid of small images to avoid loading in that gigantic image all at once into RAM. I don't really know what your user interactions are for this image, or whether the images are changing or baked into your project, but I'd assume you can let users scroll around since that image is bigger than any iOS screen. With a tile-based system, you only load the images that are on-screen. CATiledLayer is an Apple class for doing just such a thing. That's probably what you want to look into.
See this StackOverflow question for some different approaches. The accepted answer uses code from Apple's sample PhotoScroller project, which may work for your needs and uses CATiledLayer.
This ScrollViewSuite Apple code might also help you on your way (check out the Tiling code).
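For reference, the skeleton of a CATiledLayer-backed view looks roughly like this in Swift (the class name and details are illustrative; Apple's PhotoScroller sample shows the full version with pre-sliced tile files):

```swift
import UIKit

/// Rough sketch of the CATiledLayer approach.
final class TiledImageView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }

    var fullImage: UIImage?              // the 5534 x 3803 source, ideally pre-sliced on disk

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiled = layer as! CATiledLayer
        tiled.tileSize = CGSize(width: 256, height: 256)
        tiled.levelsOfDetail = 4         // supports zooming out
        tiled.levelsOfDetailBias = 2     // supports zooming in up to 4x
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Called once per visible tile (on background threads), so only the
    // on-screen portion of the image ever needs to be drawn.
    override func draw(_ rect: CGRect) {
        // In a real app, load and draw only the pre-sliced tile that
        // intersects `rect`, as PhotoScroller does; drawing the whole
        // image here is only a placeholder.
        fullImage?.draw(in: bounds)
    }
}
```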

Maximum zoom for UIImageView

I'm trying to come up with some kind of standard for determining a maximum zoom setting for a given UIImageView (inside a UIScrollView). The Apple docs say that you should not load a UIImage object larger than 1024 x 1024 into memory. Does it make sense to also apply this rule to a zoomed image - for instance, should I make sure that the longest side of my image isn't zoomed any larger than 1024? Does anyone have any insight on this? I don't know much about the memory requirements of iOS and haven't been able to find anything from Apple (aside from the UIImage guideline I mentioned).
You are really talking about two different things. Loading an image into memory takes up memory space, so a picture that is 1024 x 1024 will always occupy that much space in memory once loaded.
Zoom is something totally different. When you zoom, you aren't adding any extra data to the image; you are simply taking the loaded image (let's say 1024 x 1024) and 'stretching' the bits across the screen.
The amount of data being stored doesn't change; the renderer just does different things with it, i.e. stretching out the pixels. With that being the case, I can only assume that you can set the max zoom to anything you want, but obviously the more you zoom in, the worse the image will look.
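If you want a rule of thumb rather than a hard limit, one option (my suggestion, not an Apple-documented rule) is to cap the zoom so that one image pixel never covers more than one screen pixel, since zooming beyond that only stretches existing data:

```swift
import UIKit

// Illustrative heuristic: stop zooming once one image pixel maps to one
// screen pixel. Assumes the image view's intrinsic size equals the image's
// size in points (e.g. a plain UIImageView(image:)).
func configureZoom(for scrollView: UIScrollView, imageView: UIImageView) {
    guard let image = imageView.image else { return }

    // Scale at which the image exactly fits the scroll view.
    let fitScale = min(scrollView.bounds.width / image.size.width,
                       scrollView.bounds.height / image.size.height)

    // Scale at which one image pixel maps to one screen pixel.
    let pixelPerfectScale = 1.0 / UIScreen.main.scale

    scrollView.minimumZoomScale = fitScale
    scrollView.maximumZoomScale = max(fitScale, pixelPerfectScale)
}
```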

How to animate 120 images efficiently?

There are 120 images from a 360-degree shoot of a product showcase, each 3 degrees apart, sized 320 x 320 pixels. Displaying the images in sequence gives the effect of rotating the product in front of the user.
So far, the performance is far from satisfactory: there is a delay when loading and when the user touches to slide to a particular image.
If OpenGL is not an option, what is the recommended way to handle this kind of animation efficiently? Thanks!
I don't really understand your UI, but here is my guess:
Your UI is a circle containing lots of images, all on the same screen. I think you can show thumbnail images (50x50 to 80x80) in the circle and store all of the thumbnails in memory without a big issue. At that size, each of them is only around 8 kB, so 120 x 8 kB ≈ 1 MB of memory.
When you animate to an image, you can load the thumbnail first and then load the big image in the background. The user briefly sees a blurry image which then sharpens. This also helps when users touch to slide to a particular image: at least you have something to show.
You can save some memory and loading time by resizing the big images to 75-80% of 320x320. This shaves a little off the loading time without hurting the image quality much. It all depends on your app as well.
You can also use eager loading: load images ahead of when you actually animate them. Keep an array of 3-5 images (320 x 320 is not a big size, so 5 of them are only around 500-1000 kB). When you show the first image, start loading the 4th image ahead of time.
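A minimal sketch of what that preload window might look like in Swift (the class and the `imageName(at:)` helper are made up for illustration):

```swift
import UIKit

/// Keep only a small window of full-size frames in memory around the one
/// currently shown; frames are assumed to be named "frame_0"..."frame_119".
final class FramePreloader {
    private var cache: [Int: UIImage] = [:]
    private let frameCount = 120
    private let windowRadius = 2                 // keep current ±2 frames, ~5 images

    func imageName(at index: Int) -> String { "frame_\(index)" }

    func frame(at index: Int) -> UIImage? {
        preload(around: index)
        return cache[index]
    }

    private func preload(around center: Int) {
        let wanted = Set((center - windowRadius...center + windowRadius)
            .map { ($0 + frameCount) % frameCount })          // wrap around for 360° rotation
        cache = cache.filter { wanted.contains($0.key) }      // drop frames outside the window
        for i in wanted where cache[i] == nil {
            cache[i] = UIImage(named: imageName(at: i))       // load on demand
        }
    }
}
```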
If you want to really compress this and have smooth playback on mobile, you should store it as a movie and then use AVFoundation's AVPlayer for custom playback. You can seek to roughly any frame, change the playback speed forward or backward, and loop it to make it seamless. You could, for example, change the playback position or velocity based on a swipe. It is more work to set up than MPMoviePlayer but is quite customizable.
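For example, a rough Swift sketch of scrubbing an AVPlayer with a gesture, assuming the 120 frames have already been encoded into a bundled movie file (the class and mapping are hypothetical):

```swift
import AVFoundation

/// Maps a swipe/pan progress value onto the timeline of a 360° product movie.
final class TurntableController {
    let player: AVPlayer

    init(url: URL) {
        player = AVPlayer(url: url)
        player.rate = 0                                   // stay paused; we scrub manually
    }

    /// progress is 0...1 across the full rotation.
    func scrub(to progress: Double) {
        guard let duration = player.currentItem?.duration,
              duration.isNumeric else { return }
        let target = CMTime(seconds: progress * duration.seconds,
                            preferredTimescale: duration.timescale)
        // Zero tolerance forces an exact frame, keeping the rotation in sync
        // with the user's finger at the cost of a bit more decoding work.
        player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
    }
}
```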
How about a UIWebView + HTML5?
Look at the apple showcase: http://www.apple.com/html5/showcase/threesixty/
I would make low-res copies of all 120 images, maybe 80x80 in size; all 120 should fit in the GPU's texture cache. Load the low-res images into 120 off-screen image views, zoomed up to 320x320, and animate using those views, showing one image at a time. Whenever the animation stops, quickly replace the last image with one more view holding the corresponding full-resolution 320x320 copy. Whenever the user starts the animation again, switch back to the low-res set of images.
This works because the eye can't perceive moving objects (the illusion of motion) at the same resolution as a static image, depending on the rate of animation of course.
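A simpler variant of the same low-res/high-res idea, using UIImageView's built-in frame animation instead of 120 separate image views (the frame arrays are assumed to be preloaded):

```swift
import UIKit

// Animate with tiny frames while moving, then swap in the sharp frame at rest.
func showRotation(in imageView: UIImageView,
                  lowResFrames: [UIImage],
                  hiResFrames: [UIImage],
                  animating: Bool,
                  currentIndex: Int) {
    if animating {
        imageView.animationImages = lowResFrames    // small images, cheap to keep in memory
        imageView.animationDuration = 2.0           // one full revolution in 2 seconds
        imageView.startAnimating()
    } else {
        imageView.stopAnimating()
        imageView.image = hiResFrames[currentIndex] // full-resolution frame once still
    }
}
```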

Image strategy in iPhone app

I'm writing a card game for the iPhone, and I'm not sure about the best strategy for displaying the cards. I have a basic prototype that creates a draggable UIImageView for each card with a dummy image. I wanted to use one large UIImage that contains the faces of all of the cards, and then have each draggable UIImageView display a part of that image. I must be misunderstanding what setBounds is for - I thought it controlled which part of the underlying image is displayed. So, two questions:
Is this the right approach?
How do I display just a part of the image?
Depending on your resolution, this might not be the best approach.
From Apple:
"You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views."
Now, you are talking about breaking it up into several smaller pieces, but given UIImage's caching, I am not sure what happens to memory every time you access the image and copy a sub-rect out of it. I think the approach I would take is to have an array of images instead of one big one.
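That said, if you do keep a single sprite sheet, cropping a sub-rect out of it looks roughly like this in Swift (the helper name and layout parameters are illustrative):

```swift
import UIKit

// Pull one card face out of a sprite sheet laid out on a regular grid.
// CGImage coordinates are in pixels, so sizes are multiplied by the scale.
func cardImage(from sheet: UIImage, column: Int, row: Int, cardSize: CGSize) -> UIImage? {
    let rect = CGRect(x: CGFloat(column) * cardSize.width * sheet.scale,
                      y: CGFloat(row) * cardSize.height * sheet.scale,
                      width: cardSize.width * sheet.scale,
                      height: cardSize.height * sheet.scale)
    guard let cropped = sheet.cgImage?.cropping(to: rect) else { return nil }
    return UIImage(cgImage: cropped, scale: sheet.scale, orientation: sheet.imageOrientation)
}
```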