Our website is built on Magento 2.1.7, and we recently enabled a Layered Navigation plugin. It is a very good feature for customers and we all like it, except for the page loading speed.
The most obvious difference we can feel is that far fewer pages are cached now than before. For example, when you open a category page for the first time, it loads extremely slowly, but if you then open it on another device, it is as fast as normal. In other words, it seems the cache can no longer cover all of our frequently visited pages.
I have been working on it by creating more attribute sets to eliminate unused product attributes and by reducing the number of filterable attributes, but we are still not satisfied with the performance.
My questions are:
Is there any other way I can improve it by settings or coding?
If I upgrade the hosting plan, which aspect is more important?
We are currently using the built-in cache on a server with 24 GB of RAM shared by 13 stores. The next plan is 24 GB shared by 4 stores, plus 250 GB of CDN. Or do I just need to upgrade to a Varnish cache?
Thanks
From my understanding, the slowness is caused by the complexity of the MySQL queries (the layered navigation adds more joins on the EAV tables).
For this reason, I believe that Varnish won't help here (it will still need to cache every page of the layered navigation on the first visit).
From reading here, it seems that more RAM should help (so the next plan that you've mentioned might help).
Another thing that might help (depending on the Layered Navigation implementation) is to enable the flat catalog. This creates an entry in an index table for each product, possibly reducing the number of joins that layered navigation uses.
So I've been looking into page speed and have been reading about Google's server-side GTM. I'm wondering whether many people have transitioned to this and what the costs could be for a high-traffic marketing site. If the site already has a ton of tags, is this something that can be set up by the marketing department or developers, or is it more likely that we'd need to hire someone who specializes in GTM?
If anyone has been through this transition and has any information to share about the page speed results, that would be greatly appreciated as well.
The page speed change will vary depending on your site and the health of your container. You can measure it: go to your site, open devtools, block GTM, and reload the page. Compare that to how fast it loads with GTM enabled.
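If you'd rather script that comparison than do it by hand in devtools, here's a minimal sketch using Puppeteer; the site URL is a placeholder and blocking only googletagmanager.com is an assumption you'd adjust for your own container setup.

```typescript
// Rough sketch: load the page twice, once normally and once with requests to
// googletagmanager.com blocked, and compare wall-clock load times.
// Run each variant several times and average the results.
import puppeteer from "puppeteer";

async function measureLoad(url: string, blockGtm: boolean): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  if (blockGtm) {
    await page.setRequestInterception(true);
    page.on("request", (req) => {
      // Abort anything served from the GTM domain, let everything else through.
      if (req.url().includes("googletagmanager.com")) {
        req.abort();
      } else {
        req.continue();
      }
    });
  }

  const start = Date.now();
  await page.goto(url, { waitUntil: "load" });
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}

(async () => {
  const url = "https://www.example.com"; // replace with your own site
  console.log("with GTM:   ", await measureLoad(url, false), "ms");
  console.log("without GTM:", await measureLoad(url, true), "ms");
})();
```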
General conclusion: most GTM containers have essentially zero influence on page load speed, even if the library is not cached, because it loads in a non-blocking way.
Keep in mind that the best practice for implementing server-side GTM is to use it in conjunction with regular GTM, which somewhat defeats the purpose, since sending GA hits is not something that affects your site speed, unless you create real hardware load with them, which is pretty difficult.
Now, to maintain your sGTM you do need to increase spending, but the main expense is usually not maintaining the instance that runs the container. The main expense is longer-term in nature: maintaining the backend instance requires more expensive expertise, much tougher debugging, typically the involvement of generally more expensive backend devs, and so on.
Generally, going the sGTM route just because it sounds like fun is a bad idea in production. There should be a good business case to justify it.
I have a CouchDB instance running on AWS (an m4.large EC2 instance on CentOS) that is in excess of 75 GB in size and constantly growing. I have encountered problems when modifying and indexing views on this database; re-indexing is now taking almost 2 days.
What are the optimization strategies available to me to make sure that:
1. Re-indexing after a map-reduce view change can be done more efficiently
2. Fetching from a map-reduce view can be done faster (with a custom reduce function in place)
I've read the suggestions in the CouchDB guide, but they're targeted more towards optimizing inserts.
Since you have such a large database, it does take time to re-index views when one is changed; it will not be nearly instant the way it is with a smaller database. With that said, here is a solution for #1.
Whenever a design document is updated, CouchDB re-indexes all views within that document, so keeping each view in its own design document can speed up re-indexing. Since you do have a massive database, it will still take time to go through every document and re-index it, but now only the changed view is rebuilt rather than all of them.
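As a rough sketch of what that split looks like (the database name, view names, and map functions below are invented for illustration), each view gets its own design document, so editing one only rebuilds that document's index:

```typescript
// Hypothetical sketch: two views kept in two separate design documents.
// Creating them is a plain HTTP PUT; updating an existing design document
// would additionally require its current _rev.
const base = "http://localhost:5984/mydb"; // assumed database URL

const byTypeDdoc = {
  _id: "_design/by_type",
  views: {
    by_type: {
      map: "function (doc) { if (doc.type) { emit(doc.type, 1); } }",
      reduce: "_count", // built-in reduce
    },
  },
};

const byDateDdoc = {
  _id: "_design/by_date",
  views: {
    by_date: {
      map: "function (doc) { if (doc.created_at) { emit(doc.created_at, null); } }",
    },
  },
};

async function putDesignDoc(ddoc: { _id: string }): Promise<void> {
  const res = await fetch(`${base}/${ddoc._id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(ddoc),
  });
  console.log(ddoc._id, res.status);
}

(async () => {
  await putDesignDoc(byTypeDdoc);
  await putDesignDoc(byDateDdoc);
})();
```

With this layout, changing the map function in _design/by_date leaves the _design/by_type index untouched.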
Edit: Links
CouchDB Views Intro -> This is the overview of the CouchDB views documentation. I've read and re-read this page multiple times and find something new each time. I suggest reading it over a few times just to be sure.
CouchDB One vs Multiple Design Documents -> Same page, but this link brings you to the section relevant to my answer above. Please give it a read-through; I hope it helps.
I do not know how to address #2, sorry.
Flat-file CMSs don't use databases, so how are searches implemented? Is searching more or less computationally expensive with this type of setup compared to a database-powered search?
The problem with combining a static site and search is that one is by definition static while the other is highly dynamic, so out of the box there is no simple way to make the two live happily together.
Flat-file CMSs aren't static websites. While parsing files is usually more costly than querying a database, search functionality can easily be provided by the underlying CMS. Look for plugins that provide what you want.
However, there are some non-trivial solutions that can achieve what you want, depending on your infrastructure, your volume of content, and whether your site can do server-side computation (Grav can; Gatsby and Hugo can't).
The simplest way is to create an index of all your content in a dedicated file, then load that file and do the search client-side. You can even use a ready-made package to speed up development time on this option (for example: https://www.npmjs.com/package/react-fuzzy-search).
The pro is that it's quite trivial to do; a minimal sketch follows below. The cons are that the index gets quite big on a large site, and all the searching is done client-side (so possibly a long wait for the user if the index is large enough). This solution will also NOT scale well.
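Here is a minimal sketch of that option (Fuse.js stands in for whatever fuzzy-search package you pick, and the index file name and record shape are assumptions):

```typescript
// Sketch of client-side search over a prebuilt index. A build step is assumed
// to have written /search-index.json: an array of { url, title, body } records
// extracted from the flat files.
import Fuse from "fuse.js";

type Page = { url: string; title: string; body: string };

let fuse: Fuse<Page> | null = null;

async function searchSite(query: string): Promise<Page[]> {
  if (!fuse) {
    // Fetch the prebuilt index once and keep it around for later searches.
    const index: Page[] = await (await fetch("/search-index.json")).json();
    fuse = new Fuse(index, { keys: ["title", "body"], threshold: 0.3 });
  }
  return fuse.search(query).map((result) => result.item);
}

// Usage:
// searchSite("install").then((hits) => console.log(hits.map((h) => h.url)));
```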
Another way to do it is to use a search service (as SaaS or on your own premises) to externalize the search functionality. Basically, this service runs on a server, has a way to index your content (via an API), and a way to search it (via an API). Just make sure the search API is public and you can query it in real time from the client side.
This solution scales really well, because these sorts of services are built from the ground up to scale. However, the setup costs are really high and not worth it if you don't plan to scale to millions of pages.
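From the client side, the query against such a service usually boils down to a single HTTP call; everything below (endpoint, header name, response shape) is hypothetical, so check your provider's documentation for the real API.

```typescript
// Hypothetical sketch of querying an external search service from the browser.
type Hit = { url: string; title: string; snippet: string };

async function search(query: string): Promise<Hit[]> {
  const res = await fetch(
    `https://search.example.com/v1/indexes/site/search?q=${encodeURIComponent(query)}`,
    // A search-only key that is safe to expose publicly (name is made up).
    { headers: { "X-Api-Key": "public-search-only-key" } }
  );
  if (!res.ok) throw new Error(`Search failed with status ${res.status}`);
  const body = await res.json();
  return body.hits as Hit[];
}

// Usage:
// search("install").then((hits) => hits.forEach((h) => console.log(h.url)));
```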
Is there a limit to the number of versions a content item can have in AEM? I want to retain all the versions of my page, i.e., an unlimited number.
I want to know whether AEM has an internal limit after which it automatically removes older versions.
Appreciate any thoughts on this.
Although this is not recommended, you can disable the version manager by setting versionmanager.purgingEnabled to false. You will need to configure this as described in the document below:
https://docs.adobe.com/docs/en/aem/6-3/deploy/configuring/version-purging.html#Version Manager
Retaining lots of versions will gradually slow down your instance and result in poor authoring performance, as the storage (Tar or Mongo) grows large with stale data.
It is normally recommended to retain versions for a fixed number of days or up to a fixed number of versions.
For performance reasons, it is better to back up your AEM instance for older archived versions and rely on a restore function to access those versions.
I once asked Adobe DayCare this question and received a response similar to the one in i.net's post: it is possible to disable purging of page versions, but it comes with the risk of authoring performance issues; pages can start loading very slowly.
The solutions that were suggested (depending on the requirements):
backing up the instance, which is not the best option if you need to be able to retrieve or compare old content at any time and recover it if needed; the disadvantage is that a full copy of the instance needs to be stored, and the backup needs to be repeated from time to time (whenever you notice performance issues)
designing and implementing a custom solution with an additional instance responsible for storing these versions; I don't have many details on that solution, but as I understood it, it would require deep analysis of how it could be done
if access to previous content is needed only for historical reasons (with no need to retrieve and republish it), making use of a page-to-PDF extraction mechanism and storing the history in the DAM or another place; you can then also consider saving a PDF screenshot of the whole page with its design (not just the content), covering different browser breakpoints, annotations, etc., depending on requirements
When reading about frameworks (.NET, Ruby on Rails, Django, Spring, etc.), I keep seeing claims that such-and-such does or doesn't scale well.
What does it mean when someone says that a framework "scales well" and what does it mean to say a framework "doesn't scale well"?
Thank you.
When you plot some resource use (memory, time, disk space, network bandwidth) against concurrent users, you get a function that describes how the application works at different scale factors.
Small-scale -- a few users -- uses a few resources.
Large-scale -- a large number of users -- uses a large number of resources.
The critical question is "how close to linear is the scaling?" If it scales linearly, then serving 2,000 concurrent users costs 2 times as much as serving 1,000 users and 4 times as much as serving 500 users. This is a tool/framework/language/platform/OS that scales well. It's predictable, and the prediction is linear.
If it does not scale linearly, then serving 4,000 users might cost 1,000 times as much as serving 2,000 users, which in turn cost 100 times as much as serving 500 users. That did not scale well. Something went wrong as usage went up; the cost is neither predictable nor linear.
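As a toy illustration of the difference (the cost functions below are invented numbers, not measurements of any real system):

```typescript
// Linear scaling: cost grows in direct proportion to users.
// "Bad" scaling: modeled here as quadratic growth, purely for illustration.
const linearCost = (users: number): number => users * 0.01;
const quadraticCost = (users: number): number => (users / 100) ** 2;

for (const users of [500, 1000, 2000, 4000]) {
  console.log(
    `${users} users: linear $${linearCost(users).toFixed(2)}, ` +
      `non-linear $${quadraticCost(users).toFixed(2)}`
  );
}
// Linear: doubling the users doubles the cost.
// Quadratic: doubling the users quadruples the cost, and it only gets worse.
```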
It means that a particular framework does (or does not) meet the increased demand that more users put on it. If you have an application written in VBScript, it might not do a good job of handling the 40,000,000 users of Facebook, for example.
This blog post explains some of the scalability pains Twitter experienced a year or so ago. It could provide some more insight into the answer to your question.
Sometimes lack of scalability is used to denigrate a language or framework, so watch out for that. Stick to studies that show real metrics. This applies to my VBScript example in the previous paragraph as well.
If a framework or an application scales well, it means that it can handle larger loads. As your site becomes more popular, with more visitors and more hits per day, a framework that scales well will handle the larger load the same way it handles a smaller load: it will behave the same when it receives 200,000 hits an hour as it does when it gets 1 hit an hour. It's not only about hits, but also about being deployed across multiple servers, possibly behind load balancing, possibly with several different database servers. A framework that scales well can handle these increasing demands.
For instance, Twitter exploded almost overnight last year. It was developed using Ruby on Rails, and it was hotly featured in the ongoing debate about whether Rails scales well or not.
Substitute the phrase "handle expansion" for "scale".
There are a few elements to it in my mind. The first is the obvious one: performance scaling. Can your framework be used to build high-capacity, high-throughput systems, or can it only be used to build smaller applications? Will it scale vertically on hardware (parallel libraries, for example) and will it scale horizontally (web farms, for example)?
The second is whether it can scale to larger teams or the enterprise. That is, does it work well with large code bases? Large development teams? Does it have good tool support? How easy is it to deploy? Can you roll it out to tens, hundreds, or even thousands of users? Right down to: is it easy to hire people who have this skill? Think of trying to put together a development team of 20 or 50 people who all work on this framework. Would that be easy or next to impossible?
IMHO, saying that a framework "scales well" usually means that someone in the hearsay chain was able to use it to handle lots of volume.
In parallel programming, scalability is usually used to describe how an algorithm performs as it is parallelized. An algorithm with a 1:1 speedup is a rare beast, but it will double in performance on twice the hardware/CPUs, treble on three times the hardware/CPUs, and so on.
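One standard way to put numbers on how far an algorithm falls short of that ideal 1:1 speedup is Amdahl's law (bringing it in here is my addition, not something from the question): with a parallelizable fraction p of the work and n processors, the speedup is 1 / ((1 - p) + p / n).

```typescript
// Amdahl's law: the serial fraction (1 - p) caps the achievable speedup,
// no matter how many processors you add.
function amdahlSpeedup(p: number, n: number): number {
  return 1 / ((1 - p) + p / n);
}

// Even with 95% of the work parallelizable, 8 CPUs give roughly 5.9x, not 8x,
// and the limit as n grows is 1 / (1 - p) = 20x.
console.log(amdahlSpeedup(0.95, 8).toFixed(2)); // ~5.93
console.log(amdahlSpeedup(0.95, 1000).toFixed(2)); // ~19.63
```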
In my experience practically any framework can be made to scale given sufficient expertise.
The easier the framework is to use, the greater the chance that a developer with insufficient expertise will run into scalability problems.
It means that some respected company is doing something serious with it and isn't having any problems with it.
Scaling describes how easily you can satisfy more demand by using more hardware.
Example: you have a website written in some language that gets 1,000 visits a day. You get featured in a prominent magazine and your number of users grows. Suddenly you have 1,000,000 visits a day; that's 1,000 times as many. If you can just add 1,000 more servers to satisfy the increased need for resources, your website scales well. If, on the other hand, you add 2,000 servers but users still can't connect because your database can only handle 1,000 requests per day, then your website does not scale well.