Why does my website keep requesting resources from the server even after it has fully loaded? - web-performance

I am working on the Web Vitals of a website and was checking the Network tab in Chrome Developer Tools. The website loads fully, but in the Network tab the server requests keep increasing, the requested resources grow to 7.8 MB, and a slider keeps repeating in the network log. How can I find out why so many requests are being made?
Here is a picture of the website's Network tab.

I see that the resource names are slide-X.jpg. Without seeing the website or its code, I can only guess that there's a carousel on the page that cycles through images. If the images aren't cacheable, they will keep being loaded over the network. If they are cacheable, I'd expect to see no network requests at all, or at worst a 304 "Not Modified" response.
So I'd recommend confirming what kinds of widgets with repetitive behavior are on the page (such as a carousel), and checking the Cache-Control headers of static content like images so they don't need to be reloaded every time. Personally, I think carousels are bad UX, so I'd even suggest you consider removing it altogether! Regardless, you should still cache your content more efficiently.
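As a minimal sketch of what that caching could look like, assuming the images were served by a Node/Express server (the actual server stack isn't visible from the question), the static assets could be given long-lived Cache-Control headers like this:

const express = require('express');
const app = express();

// Serve the carousel images with long-lived, immutable caching so repeat
// cycles come from the browser cache instead of the network.
app.use('/images', express.static('public/images', {
  maxAge: '30d',    // Cache-Control: max-age=2592000
  immutable: true,  // adds "immutable" so the browser skips revalidation
  etag: true        // still allows a cheap 304 if a request does go out
}));

app.listen(3000);

With headers like these in place, the slider could keep cycling without generating new network traffic after the first pass through the images.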

Related

Precaching with service worker, why does it matter? What did I miss?

I was looking at service worker practices and Workbox.
There are many articles about precaching; Workbox even provides a dedicated method, precacheAndRoute(), for just that. I think I understand the conceptual difference between a precache and a runtime cache, but what confuses me is why precaching is treated so specially.
All the articles I've read about precaching emphasize how it makes a web app available when the client is offline. Isn't that what any cache (even if it's not a precache) is for? It seems that a runtime cache can achieve exactly that if configured properly. Does it have to be a precache for a web app to work offline?
The only obvious difference is when the caches are created. Well, if the client is offline, no cache can be created at all, whether it is a precache or a runtime cache; and if the caches were created during the last visit, when the client was online, how does it matter whether the cache that responds on the current visit is a precache or a runtime cache?
Consider two abstract cases for comparison. Say we have two different service workers: one (/precache/sw.js) only precaches, and the other (/runtime/sw.js) only does runtime caching, where /precache and /runtime host the same web app (meaning the same assets to be cached).
Under what scenario could the web apps at /precache and /runtime behave differently because of the different service worker setups?
In my understanding:
If the cache cannot be created (e.g. the client is offline on the first visit), then a precache and a runtime cache shouldn't be any different.
If a precache can be created successfully (i.e. the client is online on the first visit), a runtime cache can be too. (Let's not go too wild with cases like the client being online only at certain moments; they should still be the same in my examples.)
If the caches are already available, then the precache and the runtime cache have nothing left to do, hence they are still the same.
The only scenario I can think of where precaching shows an advantage is when the cache needs to be updated on the current visit, where precaching makes sure the current visit gets up-to-date content. If that is the case, wouldn't a NetworkFirst runtime cache do just about the same? And still, that has nothing to do with "offline", which almost every article I've read about service worker precaching mentions.
How does online/offline make precaching a hero?
What did I miss here? What's so special about precaching?
One scenario where it is different could be the following.
What the app is like:
You have a landing page for your app.
You have a handful of routes that can be navigated to.
Cache Strat:
If the user goes to the landing page, only the landing page assets would get cached.
Pre-cache Strat:
If the user goes to the landing page, all of the configured pre-cached assets would get cached.
Difference:
So if the user only visits the landing page and then later goes offline, the pre-cache strategy would allow them to navigate to and interact in some way with the other routes of your app, while the runtime-cache strategy would not allow any navigation to the other routes (see the sketch below).
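A rough sketch of the two setups described above (the file names, revision strings, and routes are made up for illustration, and the snippets assume Workbox module imports, e.g. via a bundler):

// /precache/sw.js – everything listed here is downloaded and cached at install time,
// so all routes work offline even if the user never visited them.
import { precacheAndRoute } from 'workbox-precaching';
precacheAndRoute([
  { url: '/index.html', revision: 'abc123' },
  { url: '/route-a.html', revision: 'def456' },
  { url: '/app.js', revision: '789xyz' },
]);

// /runtime/sw.js – a URL is only cached after the user actually requests it,
// so only pages the user has visited are available offline.
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';
registerRoute(
  ({ request }) => request.destination === 'document' || request.destination === 'script',
  new StaleWhileRevalidate({ cacheName: 'runtime-cache' })
);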
First, your side-by-side service workers are restricted to their own folders or paths, so they are isolated from each other.
Second, you should define a caching strategy for your application that mixes precached assets with dynamically cached ones, plus an invalidation routine.
You want to precache as much as possible without breaking the dynamic parts of your application, so cache the common JS, CSS, images, fonts, and pages that are used over and over.
Of course have an invalidation strategy in place to keep these up to date.
Next, handle non-precached, network-addressable resources (URLs) in the fetch event handler: cache them where it makes sense, and invalidate cached assets where it makes sense, roughly as sketched below.
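Here is a plain-JavaScript sketch of that idea, assuming a cache-first runtime strategy with a simple version-based invalidation routine (the cache name and filtering logic are illustrative, not from the original answer):

const RUNTIME_CACHE = 'runtime-v1';

self.addEventListener('fetch', (event) => {
  // Only handle GET requests in this example.
  if (event.request.method !== 'GET') return;

  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;                 // serve from cache when available
      return fetch(event.request).then((response) => {
        if (response.ok) {
          const copy = response.clone();         // cache a copy of successful responses
          caches.open(RUNTIME_CACHE).then((cache) => cache.put(event.request, copy));
        }
        return response;
      });
    })
  );
});

// A simple invalidation routine: drop caches from older versions on activate.
self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((k) => k !== RUNTIME_CACHE).map((k) => caches.delete(k)))
    )
  );
});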
For some applications I cache the entire thing. They are usually on the small side, a few dozen to a few hundred pages for example. For a site like Amazon I would never do that, LOL. No matter how much is cached, I always have an invalidation and update strategy that makes sense for the application/site.

Google Search Console: Fetch-and-Render fails on random resources

It successfully fetches the page itself, but then breaks on whatever the page tries to load: images, styles, fonts, JS files, API calls – whatever. Every time it's something different. And it says that the resources are "Temporarily unreachable".
And sometimes it successfully loads and renders the entire page with no errors.
Their documentation says that "Temporarily unreachable" means either that the server took too long to respond, or that the fetch was cancelled "because too many consecutive requests were made to the server for different URLs".
The page I tested is completely loaded within 1.5-2 s. Is that too long?
It makes 20 requests: 1 HTML, 4 CSS files (3 of them are third-party @font-face), 6 JS files, 4 API calls (1 fails, which is intentional), 4 font files, 1 image. The total data size is 2.5 MB. Is that too much?
I checked every failed resource with their "robots.txt tester" – each of them is allowed for Googlebot.
I don't have any noindex/nofollow directives anywhere on the site.
And, I repeat, sometimes it just succeeds, as if everything were fine.
With all of that, I have 3 questions:
Do I have to care about Google's rendering at all? If I just pre-render my HTML (with PhantomJS or whatever) for Googlebot, won't that be enough for normal indexing?
If I do need Google's rendering – do I have to care about these random failures? If fetch-and-render succeeds at least sometimes, maybe that means my site will be indexed normally?
If I do have to care about the failures – what else can I do to make it work reliably?! Such random behavior doesn't make any sense and doesn't give me any clues.
You may want to see this related post:
https://webmasters.stackexchange.com/questions/118727/what-else-can-i-test-when-troubleshooting-a-fetch-issue-in-google-search-console/118730#118730
My thinking is that the failed API call may have some bearing, or, more likely, that DNS or shared-hosting issues explain the randomness of your Fetch results. Some people have reported better results after adding a robots.txt file (even if it's empty or just User-agent: *), and others have found that it was simply overuse of the Fetch tool on their domain.

How can we replicate the user experience of an e-commerce application with multiple images without loading the images?

1. Recorded the script in JMeter without images.
2. Ran the script with 10 users.
3. JMeter shows the execution and response times.
But how can we justify, and show evidence to top-level management, that even without capturing images the application response time is the same as the live user experience?
The better way to do it would be to check the option "Retrieve All Embedded Resources" in: Thread Group right-click -> Sampler -> HTTP Request -> Advanced -> Retrieve All Embedded Resources, so that all resources are loaded.
If you don't want to measure the embedded resources' response times, for example if you are using a CDN or third parties, you can use a "View Results in Table" listener and enable the "Child samples" option. This way you can see the response times of the main requests and of the embedded resources separately.
The issue is that secondary requests are made in parallel threads, so the sum of their response times is larger than the response time registered by the Transaction Controller. To avoid this you can tick "Parallel downloads. Number", next to "Retrieve All Embedded Resources" in the "HTTP Request" sampler, and enter the number of parallel downloads.
You may also find this blog useful:
https://www.redline13.com/blog/
Comparing the two directly is not an apples-to-apples comparison, because they measure different things. Actually, Load Tester measures a lot of the same things for each, but what is generally considered the most important metric – Page Duration – actually measures a different aspect of performance in each case.
Virtual Browsers:
Virtual Browsers work at the HTTP layer – they send the same HTTP messages to the server that real browsers would send. The Page Duration measures the time from the beginning of the first request that is sent to the server to the end of the last response for a resource on that page.
Our Virtual Browsers (JMeter) will use the same number of connections to the server as a real browser, and they will distribute requests among those connections in a very similar way: they use inactive connections first, connections remain open for a while, etc. When done correctly, the target application cannot tell the difference between our Virtual Browser and a human operating a real browser.
Real Browsers:
Well, real browsers here are driven by our virtual user instead of a human user. The driving takes place via browser APIs designed for automation (e.g. the JMeter Selenium WebDriver sampler).
For example, a Go To URL step instructs the browser to navigate to a URL. The Duration of that step measures from the time the command is sent to the browser until the browser reports completion (or failure). In the case of the Go To URL command, the command is complete when the browser fires the “On Load” event. This step will include the amount of time to get all the resources from the server – which is what is measured by the Virtual Browsers. It will ALSO include the amount of time the browser takes to render the page on the screen, which is not measured by the Virtual Browsers (since they never render the page).
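For reference, this is roughly what such a real-browser step looks like when scripted with the jmeter-plugins WebDriver Sampler, which exposes a WDS scripting object to JavaScript (the URL below is a placeholder):

// WebDriver Sampler script: the measured duration spans sampleStart()..sampleEnd(),
// and because a real browser navigates and renders, it includes render time,
// which an ordinary HTTP Request sampler never measures.
WDS.sampleResult.sampleStart();                          // start timing
WDS.browser.get('https://example.com/product-page');     // navigate; returns after onload fires
WDS.sampleResult.sampleEnd();                            // stop timing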

HTML5 LocalStorage limit hit but I only use offline cache

I'm developing an offline web app for a client of ours, designed to run on an iPad in airplane mode, mounted on a stand. It has no server-side dynamic pages, only a static HTML page, many JavaScript components to handle navigation and interactivity, and a bunch of small graphics assets. The whole website (static HTML + CSS + JS + graphics) weighs exactly 8.3 MB.
I'm caching the whole site via an offline.manifest declared in my single HTML file; this manifest references absolutely all the files under the root directory, so that everything needed is cached.
I'm not using localStorage, IndexedDB or other offline-storage techs in my JS code. Apart from the "automatic" caching, I don't store anything on-device.
After checking my web server logs: when my client installs the web app on his iPad home screen, it downloads all the files once and never downloads anything from my server afterwards. That's fine, exactly what he wanted in the first place: a fully offline web app.
Then how come, after several minutes of testing by my client, his iPad asks him to “increase local storage from 10 MB to 25 MB”?
FYI, the app consists of a kind of quiz: one welcome screen, 19 question screens, one result screen; the user can navigate backwards/forwards through the question sequence, but the screens are created and nullified on the fly so as to minimize the memory footprint. Anyway, I don't believe this problem has to do with RAM, only with "hard", permanent, cached storage.
I've noticed that with all apps, it's as if the iPad has to realize it has everything, and it takes a few seconds to realize that it's going to go over its storage limit.
It would be nice if it defaulted to a larger amount, or let you set it up with a larger amount to begin with.
It seems my client doesn't have this problem anymore. As I'm not in direct, physical contact with him, I can't tell what he did to get rid of it.

iPhone and HTML5 Cache Manifest

I am trying to build an iPhone web application using ASP.NET. The page is dynamically rendered once for each visitor. At that point the page can be bookmarked, and it will never change again for that visitor. For this reason it should be cached locally from that point on, so the application will run from the bookmark even if no network connection is available. No matter what I try, the phone keeps requesting the page from the server, forcing a re-render, or it fails if the phone is offline.
Louis Gerbarg suggested in this post that I use the HTML5 cache manifest to get this working; however, following the w3.org docs does not appear to work on the iPhone. Does anyone have a good example where the application cache is working?
The cache manifest file has to be served with the 'text/cache-manifest' MIME type. This is absolutely critical; it will not work without it. If you navigate to the URL of your manifest file, it should trigger a download...
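As a minimal sketch of serving the manifest with the right MIME type, here is how it might look on a Node/Express server (an assumption for illustration only; in the asker's ASP.NET/IIS setup the equivalent would be a MIME-type mapping for the manifest extension):

const express = require('express');
const app = express();

// Serve the manifest with the MIME type the application cache requires.
app.get('/offline.manifest', (req, res) => {
  res.type('text/cache-manifest');        // critical: correct MIME type
  res.set('Cache-Control', 'no-cache');   // let the browser re-check the manifest itself
  res.sendFile(__dirname + '/offline.manifest');
});

app.listen(3000);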
Also, I've found it more effective to put the manifest location in the <html> tag as an absolute URL, and to make all the entries in the manifest file absolute as well.
I answered your previous question related to this, but it was not clear from that question that you were trying to cache dynamic content. The cache manifest is for caching the static content you need for offline web apps to work.
I am not sure you can do what you want. Do you want the app to be able to function offline, or are you just trying to pin something in the cache because it is slow to download? Unless you are actually building an offline web app (which the user will add as a bookmark or as an app on the SpringBoard), your page can (and necessarily must) be evicted from local storage at the browser's discretion, regardless of how loose a cache policy you set on the page.
You should use the Safari JavaScript Database API, which works on the iPhone and in Safari 3.1. It works great for local caching and data storage:
http://developer.apple.com/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/
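A minimal sketch of that API (this is the WebSQL-style openDatabase interface, long since deprecated; the database name, size, and table below are made up for illustration):

// Open (or create) a client-side SQLite database: name, version, description, estimated size.
const db = openDatabase('pageCache', '1.0', 'Cached per-visitor page', 2 * 1024 * 1024);

// Store the rendered page HTML locally.
db.transaction((tx) => {
  tx.executeSql('CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, html TEXT)');
  tx.executeSql('INSERT OR REPLACE INTO pages (url, html) VALUES (?, ?)',
                ['/my-page', document.documentElement.outerHTML]);
});

// Read it back later, e.g. when offline.
db.readTransaction((tx) => {
  tx.executeSql('SELECT html FROM pages WHERE url = ?', ['/my-page'],
    (tx, results) => console.log(results.rows.item(0).html));
});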
It could be related to the size of the output.
I can't speak from any serious experience in tweaking things specifically for the iPhone, but there is an interesting read from the YUI team here: http://yuiblog.com/blog/2008/02/06/iphone-cacheability/, which indicates that the largest unzipped file that can be cached on an iPhone is 25 KB, and that for optimal caching, as many components as possible should be under 25 KB.
That may be the cause of your problems, but that's only a guess.