Magento 2 website goes down every day and I need to restart the server

I have an e-commerce website on Magento 2.2.2 and it goes down almost every day. Whenever it goes down, users get a "site took too long to respond" error and the page never loads. To get the website working again I have to restart the server, and then it works.
Total space on the server is 50GB, of which the whole website is around 18GB (11GB media files, plus vendor files etc.). Here are the things I cannot figure out:
a.) The server shows that 33GB has been used, although it should show only 18GB. I have checked everywhere and I can't find what is consuming the additional 15GB of space. The complete html folder is only 18GB.
b.) When I checked the log files, they show the following:
WARNING: Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table). Current batch size: 100000; Allocated memory size: 280000000 bytes; InnoDB buffer pool size: 1073741824 bytes.
I have already set innodb_buffer_pool_size to 2GB. But still, this problem keeps coming.
The server is an Amazon EC2 instance and Magento is in production mode. Will allocating 100GB instead of 50GB solve the problem?
Update: I increased innodb_buffer_pool_size to 10GB and the logs no longer show the error, but the server still goes down every day. Our server has only 4GB of RAM; can that be the main cause? Everyone is suggesting at least 8GB of RAM.

Try the suggestions below.
Magento 2 produces large log files and a lot of cache, so the files in your var folder may be growing.
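To find what is consuming the extra space, a quick disk-usage scan helps. This is a sketch; the web root path /var/www/html is an assumption, so adjust it to your layout:

```shell
# Largest top-level directories on the disk:
sudo du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -20

# Typical Magento growth points (web root path is hypothetical):
du -sh /var/www/html/var/log /var/www/html/var/report /var/www/html/var/session
```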
Also check whether your site has a large catalog (say, more than 3000 products) with high-resolution product images, all stored on the server itself.
If it does, my suggestion is to use a CDN for better performance, so that all images are served by a third party.
Next, set up Cloudflare to reduce the downtime your customers see: you can serve a static index page while the server is down. And you should write a script that restarts the site automatically when it goes down.
On the server side, check the memory limit for PHP; it is better to raise it to 2G.
On the MySQL side, check whether there are sleeping queries. If they come from your custom extensions, ask your developer to optimize the code.
For example, the code may be loading an entire collection just to fetch a single item.
You can use a tool like New Relic to find such hotspots.
If everything is fine on the developer side, tune the server side: memory limits, killing long-running MySQL queries, and so on.
Meanwhile, Magento is a big platform for the e-commerce sector, so it covers a lot of ground by default. It is better to remove unwanted modules from your live site, i.e. disable the core modules you are not using.
For an average site, use 16GB of RAM.
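Disabling unused modules can be done from the Magento CLI. A sketch, assuming a standard Magento 2 install; the module names below are examples only, not a recommendation of which ones to disable:

```shell
# List modules and their status first:
bin/magento module:status

# Disable modules you don't use (names here are hypothetical examples):
bin/magento module:disable Magento_Swatches Magento_SendFriend

# Recompile and flush cache so the change takes effect in production mode:
bin/magento setup:di:compile && bin/magento cache:flush
```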

Did you restart MySQL to make the change take effect?
You may also need to raise that buffer much further, e.g. to 20971520000 bytes (around 20GB), if your data set and available RAM warrant it.
Magento uses a lot of sessions and cache.
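A buffer-pool change must be made in the MySQL config file and followed by a restart to take effect. A sketch; the config path and service name vary by distribution, and the 2G value is just an example:

```shell
# Put the setting in my.cnf under the [mysqld] section, e.g.:
#   innodb_buffer_pool_size = 2G
# then restart MySQL (service name is distribution-dependent):
sudo systemctl restart mysql

# Verify the value the running server actually uses (shown in bytes):
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
```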

Related

Possible big mistake: what exactly does "db.repairDatabase()" do? (MongoDB)

I have a MongoDB database with several million users.
I wanted to free up space, so I wrote a bot to remove users inactive for more than 6 months.
I have been watching the disk for several minutes and I can see the usage vary, but it never releases any significant space, not even 1 MB. That's weird.
I've read that "remove" does not actually delete data from the disk; it simply marks it as free to be overwritten. Is that true?
That seemed to make a lot of sense to me. So I looked for something that forces the space to really be freed...
I ran repairDatabase() and I think I've done something wrong.
Everything has locked up!
I tried my luck and restarted the server.
The MongoDB service is there, but its status stays at "Starting" (never "Running").
I'm reading on other sites that repairDatabase() requires twice as much free space as the original size of the database, which I don't have.
I don't know what it is doing, and it could take hours, days...
Is the database lost? I'm thinking of stopping all the services and deleting the database.
repairDatabase is similar to fsck. That is, it attempts to clean the database of any corrupt documents which may be preventing MongoDB from starting up. How it works in detail differs depending on your storage engine, but repairDatabase could potentially remove documents from the database.
The details of what the command does is outlined quite clearly (with all the warnings) in the MongoDB documentation page: https://docs.mongodb.com/manual/reference/command/repairDatabase/
I would suggest that next time it's better to read the official documentation first rather than reading what people said in forums. Second-hand information like these could be outdated, or just plain wrong.
Having said that, you should leave the process running until completion, and perform any troubleshooting if the database cannot be started. It may require 2x the disk space of your data, but it's also possible that the command just needs time to finish.
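Before running a repair (or while waiting for one to finish), it is worth checking free space and watching the server log. A sketch; the data and log paths assume a default Linux package install, and the database name is hypothetical:

```shell
# repairDatabase may need free space of roughly 2x the data size — check first:
df -h /var/lib/mongodb

# Watch repair progress in the MongoDB log (path is distribution-dependent):
tail -f /var/log/mongodb/mongod.log

# The command itself, run against a specific database ("mydb" is a placeholder):
mongo mydb --eval 'db.runCommand({ repairDatabase: 1 })'
```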

Out of memory - Java heap space

I have a report which should produce around 18,000 pages after exporting, from 600K rows of records. It gives an out-of-memory error when run. I have applied virtualization but it's not working. I also tried increasing the memory size in the Tomcat server, but after increasing it the server does not start.
From my experience, you do not have enough RAM on your server.
Is it absolutely necessary to display the report as a web page? The feedback from our customers is that they never want to browse through so many pages. It can be better to export the data directly to an Excel file, where they have many options for working with it.
Another solution is to put more records on each page, which generates fewer pages. But in that case, with your amount of RAM, I am not sure it helps.
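If you do raise the Tomcat heap, do it in steps that fit within physical RAM; if -Xmx exceeds what the machine can provide, the JVM will fail to start, which matches the symptom described. A sketch using setenv.sh; the sizes are examples only:

```shell
# Hypothetical example: give Tomcat a larger (but RAM-appropriate) heap.
# setenv.sh is picked up by catalina.sh on startup if it exists.
echo 'export CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx2g"' >> "$CATALINA_HOME/bin/setenv.sh"
```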
Related: How to control the number of rows in a JasperReport

In MongoDB, how does the cache get warmed up?

I'm wondering how the cache (memory) gets warmed up. I understand that MongoDB uses memory mapped files and the OS's virtual memory to swap pages in and out as needed. What I don't understand is how it gets warmed up on startup.
Upon startup, does mongod map all of the pages in the database into virtual memory, or is there some other mechanism by which pages that are not yet mapped get mapped as queries run against the database?
Similarly, is the size of the database limited by the amount of virtual memory available to the system? I understand that on a 64-bit system this is a lot. Is there a mechanism other than memory mapping for pages to be moved to and from disk?
Memory mapping means that there is a representation of all the on-disk files available, but only a portion of those files may be present in RAM. When a given page is needed (and it is not in RAM), it can be read from disk into RAM to be accessed.
Regarding limitations, you can see them on the MongoDB limits page
MongoDB does not do any specific "warming" of pages on startup, as it does not have any concept of which pages would be useful and which not.
If you wish to "Warm" certain collections manually before using them you should look at the touch command.
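For example, on servers where the touch command is available (it applies to the MMAPv1-era storage engines), warming a collection's data and indexes looks roughly like this; the database and collection names are hypothetical:

```shell
# Load the "users" collection's documents and index pages into memory:
mongo mydb --eval 'db.runCommand({ touch: "users", data: true, index: true })'
```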

Migrating from Joomla 1.5.x to 1.7

I thought of migrating from Joomla 1.5.23 to 1.7 and, like almost everyone, I ran into problems (good thing I backed up my site).
The problem I am facing is that jUpgrade gets stuck at 'Migrating undefined'. Joomla 1.7 downloads completely and also extracts correctly. I think I'm hitting this because I somehow run out of space during the installation. What I wanted to know is: how much disk space does the migration require?
I have about 25MB free on my server and I am allowed only 100MB.
Thank you.
By the way, I also tried unchecking the 'skip downloads' option; that didn't work for me.
You will probably need more disk space than you have available. Your current site, plus the downloaded zip file, plus space for extracting the files plus any backups you have on the server are likely to exceed your 100MB.
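You can total this up on the host to confirm. A sketch; the paths are examples for a typical shared-hosting account:

```shell
# Rough space accounting on the hosting account:
df -h .                  # free space available to you
du -sh ~/public_html     # current site (path is hypothetical)
du -sh ~/backups         # backups kept on the server (path is hypothetical)
```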
I'd recommend taking a backup of your site, setting up the site on a localhost (xampp, wamp, etc) server on your own machine and run the migration there. This will have the benefits of not hitting arbitrary limits of what sounds like a very low budget web host.
Obviously you'll have the extra complexity of setting up your own server on your PC - but there are many tutorials out there that will walk you through the process, and the learning of new skills is always good.

Why does the SessionSize in the WicketDebugBar differ from the pagemap serialized on disk?

I activated the Wicket DebugBar in order to trace my session size. As I navigate the website, the reported session size stays stable at about 25k.
At the same time, the pagemap serialized on disk keeps growing by about 25k for each page view.
What does that mean? From what I understood, the pagemap on disk keeps all the pages. But then why does the session always stay at about 25k?
What is the impact on a big website? If I have 1000 parallel web sessions, the web server will need 25MB to hold them and the disk 250MB (10 pages * 25k * 1000)?
I will run some load tests to check.
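The back-of-envelope numbers above work out as follows (assuming 25KB per session in memory and 10 serialized pages per session on disk):

```shell
# 1000 sessions x 25KB each in memory:
echo "$(( 1000 * 25 )) KB in memory"    # ~25MB
# 10 pages x 25KB x 1000 sessions on disk:
echo "$(( 10 * 25 * 1000 )) KB on disk" # ~250MB
```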
The debug bar value is telling you the size of your session in memory. As you browse to another page, the old page is serialized to the session store. This provides, among other things, back button support without killing your memory footprint.
So, to answer your first question, the size on disk grows because it is holding historical data while your session stays about the same because it is holding active data.
To answer your second question, it's been some time since I have looked at it, but I believe the disk session store is capped at 10MB or so. Furthermore, you can change the behavior of the session store to meet your needs, but that's a whole different discussion.
See this Wiki page, which describes the storage mechanisms in Wicket 1.5. It is a bit different from 1.4, but there is no such document for 1.4.
Update: the Wiki page has been moved to the guide: https://ci.apache.org/projects/wicket/guide/7.x/guide/internals.html#pagestoring