I'm using the newest version of Xcode and Swift.
I was googling around to make my WKWebView even faster and found the following:
webConfiguration.suppressesIncrementalRendering = true
Documentation says the following:
A Boolean value indicating whether the web view suppresses content
rendering until it is fully loaded into memory.
But what does this mean? Does it mean the HTML doesn't get rendered and shown until all resources, like images and JavaScript files, have been completely loaded by the WKWebView?
As stated in the documentation, it's a flag that tells the web view's engine whether to wait until things are set and ready: either scan the document (HTML plus related resources) periodically and redraw what has arrived, or just wait for the full content to be loaded and ready.
WebEngine:
Rendering is a progressive process that depends on the assets (JS, CSS, images, ...) that compose the page. It is important to understand that turning this feature on or off simply enables or disables progressive rendering of already-loaded content.
How to make my page faster?
A lot of factors: the engine's rendering algorithm, how heavy your scripts are (bundle size, memory allocation, event passing and handling, etc.), the size of the images, how well structured your CSS is, and how its selectors are organised (which drives CSS parsing and application).
Also the order in which the assets are loaded (included) in the page.
You can always profile your page (in devtools, for example) in a modern browser to see how things go: how much memory it allocates, the bundle size, time spent scripting, and how the page is designed to consume/utilise device resources.
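If you just want a quick look without opening the full profiler, a short console sketch like this (plain Performance API; the ten-entry cut-off is arbitrary, and transferSize can read 0 for cross-origin resources that don't send Timing-Allow-Origin) lists the heaviest resources the page pulled in:

// Run in the browser console after the page has loaded.
performance.getEntriesByType('resource')
  .sort((a, b) => b.transferSize - a.transferSize)   // heaviest first
  .slice(0, 10)
  .forEach(r => console.log(
    Math.round(r.transferSize / 1024) + ' KB',
    Math.round(r.duration) + ' ms',
    r.name));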
To make a long story short:
Generally speaking, there are THREE main phases, with a total of five steps, that your page goes through while living in the browser:
PHASE A: MEMORY/CALCULATION (CPU)
1- Scripting
PHASE B: (PROCESSING, CPU mainly)
2- Styling
3- Layout
PHASE C: (GPU POWER!)
4- Paint
5- Composition
When the browser decides to update, it has to go through these steps, and whether it needs a full pass or only a partial pass makes a lot of difference. Consider the following example:
If you have a div and you decide to create an animation that moves it from the left edge of the screen to the right edge, you will see developers take two approaches:
THOSE WHO JUST WRITE CODE:
Change the left value of the div's style over time. (Simple, right?)
THOSE WHO KNOW THE STUFF:
Do a transform by using translateX or translate3d.
Both ways will work; the first will eat up your CPU, while the second will run at a very high FPS.
WHY?
The first approach plays with the sacred left value. That means the browser has to re-calculate the new left (STEP 1) > check style (STEP 2) > THEN do a new LAYOUT (STEP 3) > do a Paint (STEP 4) > THEN enter the Composition phase (STEP 5).
This costs a full five-step pass that is totally unnecessary!
The other approach, on the other hand, requires nothing but composition (one step, #5), because the matrix manipulation in the GPU (a pretty strong ability!) can handle the displacement implied by translate3d or translateX. You will see people talk about adding a translate3d property to CSS elements to push performance, and the real reason is the one explained above. So knowing what happens under the hood can save you.
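To make the two approaches concrete, here is a minimal vanilla-JS sketch (assumptions: an element with id "box" that is positioned relative or absolute; the 500px travel distance and 2px step are made up for illustration):

<script>
  const box = document.getElementById('box');

  // Approach 1: animating `left` dirties layout, so every frame pays for
  // style > layout > paint > composite.
  let leftX = 0;
  function moveWithLeft() {
    leftX += 2;
    box.style.left = leftX + 'px';
    if (leftX < 500) requestAnimationFrame(moveWithLeft);
  }

  // Approach 2: animating a transform can usually be handled by the
  // compositor alone, skipping layout and paint.
  let offsetX = 0;
  function moveWithTransform() {
    offsetX += 2;
    box.style.transform = 'translateX(' + offsetX + 'px)';
    if (offsetX < 500) requestAnimationFrame(moveWithTransform);
  }

  // Start one of the two, e.g.:
  requestAnimationFrame(moveWithTransform);
</script>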
Suppressing incremental rendering is about waiting for everything to load before starting to show things, versus simply handling things as they load.
I have a lot of images on my website, and that makes it load too slowly. Should I create a small, low-quality image, apply a blur effect to it, and lazy-load the real images after all page files are downloaded? Or what should I do?
That's a very interesting issue. I love the answers posted here (very detailed and informative), but I think the crux of your problem is the coupling between your page resources and your page load. Thus, I think decoupling those two will make a HUGE difference.
The problem
Your page is loading with all of its resources. This results in a slow and laggy page load that reminds us of old-fashioned websites. We want a more modern approach that loads the page in a more elegant way.
The Solution
Load your page with light-weight content (such as text)
Present the partially-loaded page to the user with placeholders for the rest of content
In the background, lazy-load heavy page resources (such as images)
Let the page populate itself while the user has full access to it
How To Implement
Let's focus on the problem you mentioned: images. You may think they are loading slowly because of their size. But videos (which are heavier than images) usually load with no problem, because modern video players load them in chunks and have a really good strategy for giving the user a great experience. So, for now, we will NOT focus on the weight of your resources but instead on how to load them properly. It goes without saying (but I'll say it anyway): AFTER you are getting the results you expected, it is advisable to properly handle the size of your images as noted in the other answers.
A very simple and effective demo for this:
<style>
  /* The spinner is only visible while it has the "active" class. */
  img.loader {
    display: none;
  }
  img.loader.active {
    display: block;
  }
</style>
<script>
  function loadImage(target, url) {
    const imgElement = target.querySelector('img.target')
    const imgLoaderElement = document.createElement('img')
    const loaderElement = target.querySelector('img.loader')
    imgLoaderElement.onload = event => {
      // nice to have - the actual size of the requested image
      console.log(imgLoaderElement.width, imgLoaderElement.height)
      // The image is now in the browser cache, so hide the spinner
      // and point the real image element at the same URL.
      loaderElement.classList.remove('active')
      imgElement.src = url
    }
    // Start the download in an off-DOM image element.
    imgLoaderElement.src = url
  }
</script>
<div id="image-loader-1">
  <img class="active loader" src="https://cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif" alt="" width="48" height="48">
  <img class="target" />
</div>
<button onclick="loadImage(document.querySelector('#image-loader-1'), 'https://picsum.photos/600/400')">
  Load Image
</button>
Let's break this down
Your image container is now a div element composed of two elements:
a pre-loader - to have a nice effect of loading something
the image element - this will contain the actual image
The function loadImage asks you to specify the target image container (in our example image-loader-1) and the image URL (your site or a 3rd-party site, no difference here). It will then create a new image element (without attaching it to the DOM) and load the image there (while still playing the pre-loader in the background). Only after the image is fully loaded (and the URL is cached in the browser) do you attach this URL to your real image element (and then make the pre-loader disappear).
This way, you always have the benefit of showing a loader until the image is ready for view, making your users' experience better.
Does this look good? Right now, no, not really. I cleaned all the styling properties from it in order to make this solution as clear as possible.
Can it be better? Yes. Just put some styling effort into it and you are ready to go.
Note: because this is a general question which does not rely on a specific modern framework, I posted a very generic solution using vanilla JS. Hope it is clear enough for future users to understand and implement in their own projects.
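As a follow-up to the demo above: if you want the images to load as they scroll into view instead of on a button click, a minimal sketch could look like this (assumption: you keep the same container markup and put the real image URL in a hypothetical data-src attribute on each container div):

<script>
  // Watch every container that declares a data-src; when it enters the
  // viewport, reuse the loadImage() function from the demo above.
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach(entry => {
      if (!entry.isIntersecting) return;
      loadImage(entry.target, entry.target.dataset.src);
      obs.unobserve(entry.target); // each image only needs to load once
    });
  });
  document.querySelectorAll('div[data-src]').forEach(el => observer.observe(el));
</script>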
There are several things that can be done.
The first is definitely to make the images smaller. If they don't take up a lot of space on the page, it is better to make them smaller, because otherwise you are just wasting a lot of bandwidth downloading them at a resolution they will never be shown at and downscaling them anyway. There are many online image compressors you can use, or you can decrease the size directly from your image viewer; most of them have a resize option.
Second: instead of using jpg/jpeg/png, use WebP. WebP is a better format for images since it provides further compression of the data and also has lossless compression about 25% better than PNG. The biggest pros of WebP are faster load times and less storage. BUT it might not be supported by all browsers, so before implementing this, just check whether anyone is still running Netscape. Last I checked, all browsers in common use support it. (https://en.wikipedia.org/wiki/WebP#Support)
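A minimal markup sketch for the fallback (the file names are hypothetical): the browser picks the WebP source when it supports the format and otherwise falls back to the plain JPEG:

<picture>
  <!-- Browsers that understand WebP use this source... -->
  <source srcset="photo.webp" type="image/webp">
  <!-- ...everything else falls back to the JPEG. -->
  <img src="photo.jpg" alt="Photo" width="600" height="400">
</picture>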
If you need more optimization, there is also a browser add-on called Lighthouse, created by Google. Just install it, go to your website and click "Generate report". It will tell you all the places where you can actually optimize your website.
(Chrome- https://developer.chrome.com/docs/lighthouse/overview/
Firefox- https://addons.mozilla.org/en-US/firefox/addon/google-lighthouse/)
Why does it get slow?
Images are too large
Images are not optimized according to the device
Images have unspecified dimensions
You use heavy formats
The browser starts loading images all at once
Your cache doesn’t store images
Solution:
Resize and compress images
Lossy = a filter that eliminates some of the data. The quality of the image is impacted.
Lossless = a filter that compresses the data without touching the quality of the image.
Use
Imagify
Ewww Image Optimizer
Optimole (Image optimization & Lazy Load by Optimole)
ShortPixel Image Optimizer
reSmush.it
Set image dimensions
Serve images optimized for each device
Lazy load your images (see the markup sketch below)
Implementing lazy loading using a WordPress plugin: Lazy Loading by WP Rocket is a free plugin that implements the lazy load script on the images. Check this interesting guide if you want to compare the best lazy load plugins available on the market.
Implementing lazy loading manually: follow this guide from CodeInWP that explains the two ways to implement lazy loading manually (not so easy to follow for beginners, though).
Convert your images to WebP
Credit: https://imagify.io/blog/reasons-images-slow-websites/
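To tie together "Set image dimensions", "Serve images optimized for each device" and "Lazy load your images" from the list above, here is a minimal markup sketch; the file names, sizes and breakpoints are made up:

<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 600px"
  width="800" height="533"
  loading="lazy"
  alt="Example photo">

The fixed width/height let the browser reserve space before the image arrives (no layout shift), srcset/sizes let each device pick an appropriately sized file, and loading="lazy" defers images that are still off-screen.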
just in simple language....
Sites that use too many images, or have images that are too large, have longer loading times. This can slow down your entire page, irritating visitors and actually hurting your site's ranking in online search results.
As part of automation, I want to compare images by taking a screenshot and comparing it with a baseline image. I will be executing the same tests on different OSes (Linux, Windows). When executing on a different OS, the screen resolution changes and the image comparison fails. Currently I am doing it with Selenium + Java. Can anybody suggest a way or a different tool to achieve this?
So make baselines for each environment you're testing on and have the test compare with the correct one for that platform (or just with all of them and flag as passed if any of them matches).
I'm not so sure this entire "compare screenshots" approach is a good way to test things. You just hit one snag; another can be that different browsers (and even different versions of the same browser) render things slightly differently.
The key thing here is the resolution; you must ensure that the resolution is the same for the baseline images and the ones you capture during the test.
Set the browser window to a fixed size, e.g. 1920x1080,
Make all screenshots in this resolution,
During the test, before each image comparison, check whether the window size is 1920x1080; if not, change it temporarily,
Take the screenshot,
Compare the image with the original one,
Maximize the window again.
Another solution is to capture screenshots of a single WebElement rather than the whole page, because WebElements are often resolution-independent.
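The question mentions Selenium + Java; just to sketch the fixed-window-size idea in code, here is roughly what it looks like with the Node selenium-webdriver bindings (the browser choice, the 1920x1080 size, the URL and the file name are all assumptions):

// npm install selenium-webdriver
const fs = require('fs');
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    // Force the same window size on every OS before capturing.
    await driver.manage().window().setRect({ x: 0, y: 0, width: 1920, height: 1080 });
    await driver.get('https://example.com');

    // takeScreenshot() resolves to the PNG as a base64 string.
    const png = await driver.takeScreenshot();
    fs.writeFileSync('actual.png', png, 'base64');
    // ...compare actual.png against the baseline for this platform here.
  } finally {
    await driver.quit();
  }
})();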
A presentable image starts out in VK_IMAGE_LAYOUT_UNDEFINED but will be in VK_IMAGE_LAYOUT_PRESENT_SRC_KHR after it has been presented once.
A lot of examples transition all VkImages to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR immediately after creating the VkSwapchain, which allows them to use VK_IMAGE_LAYOUT_PRESENT_SRC_KHR for oldLayout. But doing the transition right after creation of the swapchain is not allowed.
Use of a presentable image must occur only after the image is returned by vkAcquireNextImageKHR, and before it is presented by vkQueuePresentKHR. This includes transitioning the image layout and rendering commands.
What are my options to deal with the swapchain image layouts correctly?
There are 3 options. Ordered from best to worst (IMO):
Simply set the initialLayout of the attachment in the renderPass to VK_IMAGE_LAYOUT_UNDEFINED or transition from VK_IMAGE_LAYOUT_UNDEFINED every time. This is allowed and will imply that you don't care about the data still in the image. Most often you will be clearing or fully overwriting the image anyway.
valid Usage [of VkImageMemoryBarrier]
[...]
oldLayout must be VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_PREINITIALIZED or the current layout of the image region affected by the barrier
Keep track of which images have been through the pipeline already and select the oldLayout accordingly when recording the commandBuffer.
Do the transitions after creating the swapchain, but use vkAcquireNextImageKHR and vkQueuePresentKHR to ensure the application owns the image while transitioning. There is no guarantee of the order in which you get the images, so it may be possible that one image never gets returned.
I've been trying a fourth option but some input on its validity would be useful. When creating the swapchain the images are in VK_IMAGE_LAYOUT_UNDEFINED, which to me seems to indicate that they're all available to the application because they need VK_IMAGE_LAYOUT_PRESENT_SRC_KHR for presentation and so shouldn't be displayed or in queue. I didn't find anything in the spec that would guarantee this however.
The spec says that we can acquire multiple images from the swapchain if we want:
If a swapchain has enough presentable images, applications can acquire multiple images without an intervening vkQueuePresentKHR. Applications can present images in a different order than the order in which they were acquired.
Using the conclusion above I just called vkAcquireNextImageKHR to get every image of the swapchain and changed layout on all of them at once. After that I presented all of them to get them into the system so to speak.
It seems to work in the sense that all images are handed to me by the swapchain, but then again, I found no guarantee that all of them can actually be acquired immediately after creating the swapchain.
I created a very complex web app using HTML5, CSS3 and jQueryMobile.
It seems like jQueryMobile turns on hardware acceleration here and there via translate3D and/or translateZ.
Now I want to turn this off for certain HTML elements.
This gives me two questions:
Is there a css property/attribute or something that I can use to tell the browser to turn off hardware acceleration for certain elements?
If not: I will have to find the places where either translate3D or translateZ is used and simply remove them, right? How can I do that? The whole markup is very complex with many HTML elements. I can't go through each element in the inspector and search for it.
Update: The reason why I want to fix this
In my web app there are some elements which need to be swipeable (e.g. an image gallery). In this case I need hardware acceleration. Same for div containers that require iScroll and every other element which should be animated (e.g. slide- and fade-animations).
However, there are many parts of the app which are static (not animated). Using a special startup option in Safari, I was able to make the parts which get hardware-accelerated visible. This way I noticed that THE WHOLE app gets hardware-accelerated, not only the necessary parts.
IMHO this is not a good thing because:
Accelerating the whole thing will cause a heavy load on the GPU, which makes the whole app stutter while scrolling.
AFAIK it's best practice to let the CPU do the static stuff while the GPU only handles all the fancy animated stuff.
When animations have ended, hardware acceleration should be deactivated because it's not necessary anymore and would otherwise shorten battery life.
After going through thousands of thousands of lines of CSS code, I found this:
.ui-page{-webkit-backface-visibility: hidden !important}
This was active for all pages and caused the problem. Removing that line fixed it for me.
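For the second part of the question above (finding which elements actually end up with a 3D transform, without clicking through every node in the inspector), a quick console sketch like this can help; it just lists every element whose computed transform resolved to a 3D matrix:

// Run in the web inspector console on the affected page.
document.querySelectorAll('*').forEach(el => {
  const style = getComputedStyle(el);
  const transform = style.webkitTransform || style.transform;
  // translate3d()/translateZ() both compute to a matrix3d(...) value.
  if (transform && transform.indexOf('matrix3d') === 0) {
    console.log(el, transform);
  }
});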
I'm making an application for the iPhone. Essentially it'll be a guide of sorts, and all the generated information will be in one long window. Each block of information can have a 'link' in it to generate another block of connected information (for example a block about Wallace could link to a block about Gromit) that would appear at the top.
For example, at the start 1 block of data:
Wallace: Owner of Gromit
would become 2 blocks (on clicking Gromit):
Gromit: Wallace's Dog
Wallace: Owner of Gromit
Each block would also have the ability to be added to a favorites list by clicking an icon. The text would need to be laid out with HTML, and each block may be of a different length. A search on a different could also add a block to the top.
I'm OK with objects in 'easy' languages like PHP, but am basically new to the iPhone and Cocoa, and I want to start off with the right approach here. A table and cells looks like the correct approach, but is there any advantage to doing it as one long list (like I might do in a web version), or are there any restrictions in the way cells can hold/lay out information that will cause me trouble down the line?
I believe this approach is popular for dictionaries.
I'm committed to doing it with a single scroll for a couple of reasons. The main one is that I want the user to be able to scroll instantly back to entries they've looked at before; i.e. the single view essentially represents a history of the data they've looked at (if it's a lot, stuff can drop off the end). Each entry will be very short, but there will be a lot in total. So if the user has looked at
Wallace
Gromit
The Wrong Trousers
Cheese
Penguin
and they are now looking at Wallace, a quick half-second scroll takes them back to 'Penguin'.
Hierarchy is the way to go on the iPhone.
Remember that the iPhone has a small screen and that users can only see a very small amount of information at any one time. (One interface expert compared it to driving while peering down a two-inch pipe with one eye.) Users can easily get lost scrolling up and down a very long list even if it has an index. (That's assuming your information can be easily indexed in a form that users will instantly recognize.) It's usually easier for users to click through several views, with the data in each view getting more and more specific at each level. In addition, so many apps use this hierarchical system that your users will be used to it and expect it.
System-wise, it's easier for the iPhone to display just one level of hierarchy at a time, so your app feels more responsive. The hardware doesn't have to maintain all the data in memory, just the data it needs to display immediately.
If I understand your data model correctly, you would be best off with a hierarchy of two tables and a detail view. The first table would have a list of letters A-Z. The second table would be a list of all records starting with that letter. The third would be a detail view showing links to that record. So, to see the example in the OP, a user would select W --> Wallace --> (Detail) Gromit.
Edit01:
I think you should do a test scroll of either a very long web page or a UIScrollView and see how it affects performance and usability. I would caution you that layouts that seem perfectly usable and fast on laptop or desktop hardware become unusable and slow on mobiles, with their weaker processors and much smaller screens. It's much more difficult to do "a quick half second scroll" back to a specific point on a long page on a mobile than on a larger screen.
You do have the option of creating an outline-like table view that inserts new indented cells as needed. I still think hierarchy is the quickest and most usable layout on a mobile.